Is Artificial Intelligence Good, Bad, or Neither? (Part 1)

In my review of Speaker for the Dead I briefly mentioned Jane, the artificial intelligence that accompanies Ender. There are so many elements to consider when thinking about A.I.: does it already exist? Will it create a utopia? Will we all be annihilated? Some people think it'll be the most extraordinary life-changing advance since the industrial revolution, some think it'll be the end of humanity as we know it, and others still (like myself) are cautiously optimistic. Each view holds some merit, especially since Moore's law (the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years) has held true for over fifty years. Anything advancing that quickly demands our attention. The moral implications of handing decision-making processes over to machines are astounding: automated killing by certain military drones is quickly becoming a reality, and future job displacement is a concrete wall we're speeding toward with no brakes.

A prime example of someone who believes our moral values matter now more than ever is Zeynep Tufekci. In her 2016 TED Talk, she makes the case that human morals become more important, not less, as we hand over more and more decision-making to algorithms and automation we don't entirely understand.


Zeynep begins with a great anecdote about being asked whether a computer can tell if a person is lying (her boss wanted to know because he was cheating on his wife). This raises interesting questions on many levels: first, can machines detect a lie, and second, if they can, can they also lie? Ponder that for a moment: if a machine could learn that humans sometimes lie to avoid an undesired response, couldn't it incorporate that lesson and use it itself? When we think about A.I. we need to remember that, until this point, machines and computers have only had the capacity to do what they were told; humans maintained control. There's always a person punching numbers into a calculator, a person hitting the send button on that Facebook message, a person behind the steering wheel pressing the gas or the brakes when needed. Although we know we humans aren't perfect, we can count on (most of) us to be guided by a sound moral compass. Are we ready to trust machines with the same moral decisions?

A question I’ve pondered a lot lately: if a self-driving car kills someone, does that make autonomous vehicles less safe on our roadways than human drivers? An interesting Business Insider article states, “A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it’s safer to crash into a smaller object.” Can we safely allow A.I. to make life-altering decisions for us? How do we set acceptable limits? Zeynep argues that we don’t really have any benchmarks or guides for making decisions in complex human affairs. Basically, we’re not sure how an A.I. would make its decisions, and we don’t like what we don’t know.

Zeynep also mentions “machine learning.” Unlike regular programming, where the computer is given detailed instructions on how to respond to specific scenarios, machine learning feeds the computer tons of data, which it then uses to teach itself. For example, say you show a program one hundred pictures of dogs, all kinds of dogs; it will analyze every picture and start to learn the features of a dog. Our programs do this so well that if you later showed it a picture of a cat and asked, it would tell you the picture is “not a dog.” Now that’s a REALLY basic summary, but you get the picture (see what I did there?), and the sketch below shows the idea in code.
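To make that summary a little more concrete, here's a minimal toy sketch in Python. Everything in it is invented for illustration: real systems learn from thousands of actual images, not three made-up numbers per "photo," and the feature names are hypothetical. The point is only that the program is never told what a dog is; it learns from labeled examples.

```python
# Toy sketch of the "dog / not a dog" idea using scikit-learn.
# The feature vectors are made up for illustration -- real systems
# extract features from actual pixels, not three hand-picked numbers.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per photo: [ear floppiness, snout length, whisker density]
dog_photos = [[0.9, 0.8, 0.1], [0.7, 0.9, 0.2], [0.8, 0.7, 0.1]]
cat_photos = [[0.2, 0.3, 0.9], [0.1, 0.2, 0.8], [0.3, 0.1, 0.9]]

X = dog_photos + cat_photos
y = [1, 1, 1, 0, 0, 0]  # 1 = dog, 0 = not a dog

model = LogisticRegression()
model.fit(X, y)  # the "learning" step: no hand-written rules for what a dog is

# Show it a "cat" it has never seen before:
new_photo = [[0.2, 0.2, 0.85]]
print("dog" if model.predict(new_photo)[0] == 1 else "not a dog")  # -> "not a dog"
```

Notice that we never wrote a rule like "dogs have floppy ears"; the model inferred the pattern from the examples, which is exactly why we can't always say how it will decide a borderline case.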

This is incredibly interesting to me because I’m one of those people sitting on the fence about A.I. I definitely see the appeal and get excited thinking about all the cures for disease, the technology we could develop, and how much further we could get in exploring the universe if only we had A.I. systems helping to do the research. In the same breath, I also see how giving these machines the ability to make probabilistic decisions in ways we don’t quite understand is worrisome, especially if those systems are put in charge of military weapons, transportation systems, or even food and water filtration systems. It’s either a utopia or a dystopia…great.

Another consideration: although these computers will make decisions in ways we may not understand, they still do so using information we give them. Zeynep points out that these systems can pick up on our biases, good or bad. According to her, researchers found that Google’s ad system showed women high-paying job ads less often than men, and that searching for African-American names was more likely to bring up ads suggesting a criminal history, even when there was none. So what kind of future do we want to build? Are we unknowingly building A.I. with the same prejudices we hold as a species? What further implications will that have?

All this is great speculation, but are we even close enough to developing A.I. to be worried about it? Well, kind of. A blog post by NVIDIA does a pretty good job breaking it down. The basic rundown: so far we’ve been able to program machines and computers to do specific tasks better than humans. This ability is classified as narrow A.I. and covers tasks like playing checkers or Facebook’s facial recognition: simple programs that, once they know the “rules,” can execute nearly perfectly. The next great step forward was machine learning, which is what Zeynep was referring to: instead of hand-coding every rule, programmers use algorithms that help machines dissect information, which the machines then use to make their calculated “best” decision. As we discussed earlier, though, depending on where that initial pool of data came from, bias and even racism can be “programmed” into the machines unintentionally.
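Here's a deliberately exaggerated toy sketch of how that happens. The data is entirely invented (it's not from Zeynep's talk or the ad research above): two groups of candidates have identical qualifications, but the historical decisions we train on favored one group, so the model learns the prejudice rather than the qualification.

```python
# Toy illustration of bias riding along with training data.
# All data is invented: the "experience" numbers are identical across
# groups, but the historical decisions favored group A -- so the model
# learns the historical decision, not the qualification.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per candidate: [years_experience, group] (0 = A, 1 = B)
candidates = [
    [5, 0], [6, 0], [4, 0],  # group A, historically hired
    [5, 1], [6, 1], [4, 1],  # group B, same experience, historically rejected
]
historical_decision = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = rejected

model = DecisionTreeClassifier().fit(candidates, historical_decision)

# Two identical resumes that differ only in group membership:
print(model.predict([[5, 0]]))  # -> [1]  hired
print(model.predict([[5, 1]]))  # -> [0]  rejected, purely because of the group
```

Nobody "programmed" discrimination here; the model simply found that group membership was the best predictor of the biased decisions it was shown, which is the unintentional mechanism the NVIDIA post and Zeynep are both warning about.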

These are all things worth considering, and I want to know what YOU think about A.I. Are you for it or against it, and why? Leave a comment below, and I’ll see you in the next part of Is Artificial Intelligence Good, Bad, or Neither?

 

*Featured image from http://mirrorspectrum.com/behind-the-mirror/the-terminator-could-become-real-intelligent-ai-robots-capable-of-destroying-mankind#
