
Is Artificial Intelligence good or bad? (Part 1)

In my review of Speaker for the Dead I briefly mentioned Jane, the artificial intelligence that accompanies Ender. There are so many elements to consider when thinking about A.I.: does it already exist, will it create a utopia, will we all be annihilated? Some people think it’ll be the most extraordinary life-changing advance since the industrial revolution, some think it’ll be the end of humanity as we know it, and others still (like myself) are cautiously optimistic. Each view holds some merit, especially since Moore’s law (the observation that computing power doubles roughly every two years) has held true for over fifty years. Anything advancing that quickly demands our attention. The moral implications of handing decision-making processes off to machines are astounding: automated killing by certain military drones is quickly becoming a reality, and the coming displacement of jobs is a concrete wall we’re speeding towards with no brakes.
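To put that doubling in perspective, here’s the back-of-the-envelope arithmetic. This is a rough sketch of the compounding, not a claim about any actual chip; real progress is messier than a clean doubling:

```python
# Back-of-the-envelope Moore's law arithmetic: if computing power doubles
# every two years, fifty years of doublings compound dramatically.
years = 50
doublings = years // 2          # one doubling per two-year period
growth_factor = 2 ** doublings  # 2^25

print(f"After {years} years: roughly {growth_factor:,}x the computing power")
# prints: After 50 years: roughly 33,554,432x the computing power
```

Over 33 million times the computing power in a lifetime, which is why exponential trends are so easy to underestimate.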

A prime example of someone who believes our moral values matter now more than ever is Zeynep Tufekci. In her 2016 TED Talk, she makes the case for human morals in a world where we are handing more and more decision-making over to algorithms and automation we don’t entirely understand.


Zeynep begins with a great anecdote about whether a computer can tell if a person is lying (her boss was asking her because he was cheating on his wife). This raises an interesting question on many levels: first, can machines detect a lie, and second, if they can, could they also lie themselves? Ponder that for a moment: if a machine could learn that humans sometimes lie to avoid an undesired response, couldn’t it incorporate what it learned and use it too? When we think about A.I. we need to consider that, until this point, machines and computers only had the capacity to do what they were told; humans maintained control. There’s always a person plugging numbers into a calculator, a person hitting the send button on that Facebook message, a person behind the steering wheel pressing the gas or the brakes when needed. Although we all know we humans aren’t perfect, we can count on (most) humans to act guided by a sound moral compass. Are we ready to trust machines with the same moral decisions?

A question I’ve pondered a lot lately: if a self-driving car kills someone, does its existence make roadways less safe than human drivers do? An interesting Business Insider article states, “A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it’s safer to crash into a smaller object.” Can we safely allow A.I. to make life-altering decisions for us? How do we set acceptable limits? Zeynep argues that we don’t really have any benchmarks or guides for making decisions in complex human affairs. Basically, we’re not sure how an A.I. would make its decisions, and we don’t like what we don’t know. Zeynep also mentions “machine learning”: unlike regular programming, where the computer is given detailed instructions on how to respond to certain scenarios, machine learning feeds the computer tons of information, which it then uses to learn. For example, let’s say you show a program one hundred pictures of dogs, all kinds of dogs; it’ll analyze every picture and start to learn the features of a dog. Our programs do this so well that if you later showed one a picture of a cat, it would tell you the picture is “not a dog” if asked. Now that’s a REALLY basic summary, but you get the picture (see what I did there?).
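To make the dog example a little more concrete, here’s a toy sketch of that learn-from-examples idea. Everything in it is invented for illustration: the two made-up “features” stand in for whatever patterns a real system would extract from pixels, and actual image classifiers are vastly more sophisticated. But the flow is the same: no hand-coded rules, just labeled examples the program averages into a notion of “dog”:

```python
# Toy "learn from examples" classifier: average the feature vectors seen for
# each label, then label new examples by whichever average they fall closest to.
# Features and data are made up for illustration (think: ear floppiness, snout length).

def train_centroids(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest to the new example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Hypothetical training data: (features, label) pairs.
training = [
    ([0.9, 0.8], "dog"), ([0.8, 0.9], "dog"), ([0.7, 0.7], "dog"),
    ([0.2, 0.1], "not a dog"), ([0.1, 0.3], "not a dog"),
]
model = train_centroids(training)

print(classify(model, [0.85, 0.75]))  # a dog-like example -> prints "dog"
print(classify(model, [0.15, 0.2]))   # a cat-like example -> prints "not a dog"
```

Notice the program was never told what a dog *is*; it only ever saw examples, which is exactly why it’s hard to say afterwards why it decided the way it did.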

This is incredibly interesting to me because I’m one of those people sitting on the fence about A.I. I definitely see the appeal and get excited thinking about all the cures for disease, the technology we could further develop, and how much further we could get in our exploration of the universe if only we had A.I. systems in place helping to do the research. In that same breath, I also see how giving these machines the ability to make probabilistic decisions in a way we don’t quite understand is worrisome, especially if those systems are put in charge of military weapons, transportation systems, or even food and water filtration. It’s either a utopia or a dystopia… great.

Another consideration: although these computers make decisions in ways we may not understand, they still do so using information we give them. Zeynep points out that these systems can pick up on our biases, good or bad. According to her, research found that women were less likely than men to be shown Google ads for high-paying jobs, and that searching for African-American-sounding names was more likely to bring up ads suggesting a criminal history, even when there is none. So what kind of future do we want to build? Are we unknowingly building A.I. with the same prejudices we have as a species? What further implications will that have?
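Here’s a toy illustration of how that can happen: if the historical data a system learns from encodes a skewed past, the learned behavior simply mirrors the skew. The “ad log” below is completely made up, and real ad-targeting systems are far more complex, but the mechanism is the same; nobody has to program the bias in deliberately:

```python
# Toy illustration of bias leaking from training data into a learned model.
# The "historical log" is invented: it encodes a skewed past ad-serving policy.
from collections import Counter

history = [
    ("men", "high_paying_ad"), ("men", "high_paying_ad"), ("men", "high_paying_ad"),
    ("men", "regular_ad"),
    ("women", "regular_ad"), ("women", "regular_ad"), ("women", "regular_ad"),
    ("women", "high_paying_ad"),
]

def learn_policy(log):
    """For each group, 'learn' to show the ad most often shown in the past."""
    counts = {}
    for group, ad in log:
        counts.setdefault(group, Counter())[ad] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

policy = learn_policy(history)
print(policy)  # the learned policy reproduces the skew in the historical data
```

The model did exactly what it was asked: it found the pattern in the data. The pattern just happened to be a prejudice.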

All this is great speculation, but are we even close enough to developing A.I. to be worried about it? Well, kind of. A blog post by NVIDIA does a pretty good job breaking it down. The basic rundown: so far we’ve been able to program machines and computers to do specific tasks better than humans. This ability is classified as narrow A.I. and includes tasks like playing checkers or facial recognition on Facebook: simple programs that, once they know the “rules,” can execute nearly perfectly. The next great step forward was machine learning, which is what Zeynep was referring to: instead of hand-coding rules, programmers use algorithms that help machines dissect information, which the machines then use to make their calculated “best” decision. As we discussed earlier, though, depending on where that initial pool of data came from, bias and even racism could be “programmed” into the machines unintentionally.

All great things worth considering, and I want to know what YOU think about A.I. Are you for it or against it, and why? Leave a comment below and I’ll see you in the next part of “Is Artificial Intelligence good, bad, or neither?”


*Featured image from http://mirrorspectrum.com/behind-the-mirror/the-terminator-could-become-real-intelligent-ai-robots-capable-of-destroying-mankind#



Speaking about Speaker for the Dead

If you’ve read my first review on Ender’s Game then you probably already know I’m a fan of the series; what you might not know, however, is just how different this sequel really is. Allow me to preface this by reiterating that the reason I started reading Ender’s Game was that the trailer (yes, for the movie) hooked me with battleships and aliens (which is really all I need to watch/read a sci-fi film/book). That being said, Speaker for the Dead takes a whole different turn and is not nearly as action-packed as the first book, in the sense that there is no “war” going on… at least not in the traditional sense.

(Spoilers from Ender’s Game ahead, read at your own risk)

See, after Ender was essentially tricked into killing nearly every bugger in existence, he understandably feels a deep sense of regret and heartache (I mean, how would you feel if every kill you ever got in a video game turned out to be real?). Luckily there was a ray of hope, a phoenix from the ashes, in the shape of an unhatched hive queen who can communicate in what I can only assume is telepathy. Since the military is basically indebted to our “Hero,” they give him his own starship, and he decides to fly around at light speed with his sister Valentine and our beloved queen in an attempt to find her a new home. Now, there’s a lot that happens politically during and after the war that’s worth mentioning. Ender’s brother, Peter Wiggin (remember, the entire family are geniuses), starts to write influential political pieces alongside his sister (he pretty much manipulated her) under internet aliases to sway public opinion on the alien war, international political relations, and other global issues. Ender’s sister keeps writing under her alias and eventually writes a series on the Formic Wars, adding Ender’s The Hive Queen to it.

Since they’re traveling at light speed, hundreds and then thousands of years go by (for everyone else) as they travel from planet to planet looking for a home for the hive queen. Humans have now colonized hundreds of planets, and Valentine’s work has become almost as popular as a religious text. Humankind in this future seems to condemn the actions of Ender Wiggin, and of course Ender himself is going by an alias and working as a “Speaker for the Dead.” This is where Speaker for the Dead picks up and continues the adventure.

It’s probably worth mentioning what a “Speaker for the Dead” really is at this point. A Speaker in the Ender universe is a person who is usually requested after someone’s death to speak about them and their life in all aspects, both good and bad, so people can really know who that person was. Think of it as a secular funeral speech, but with no real bias intended, since it’s a third party doing the speaking. By this point humankind has developed protocols for dealing with alien life, and humans eventually find intelligent life again on a planet called Lusitania. Since the planet is discovered to harbor such life, the humans are only allowed to establish a small colony within borders that are never to grow past a certain size. They are allowed to study and interact with the aliens (whom they call “piggies” because of how they look), but their every interaction has regulations and limits, even on how they ask questions, the intention being not to affect the piggies’ natural development (so no sharing tech or teaching them to farm; they’re very primitive in regards to technology).

The whole story revolves around how, mostly because of these communication regulations, the xenobiologists studying the piggies are sometimes murdered by them. If the humans were allowed to speak freely with the piggies, they would have been able to understand each other. After the first xenobiologist is killed, one of the colonists requests a Speaker to come on his behalf. Suffice it to say this is not like Ender’s Game in the sense of an all-out war among different species; what makes this story great is all the depth in Card’s storytelling. You have the unhatched hive queen convincing Ender that Lusitania is perfect for her, while Ender deals with the politics of being received by the Catholic church (Lusitania is a religious colony); Ender also has an A.I. friend that literally NO ONE else knows exists, who helps him out a lot; you have the whole communication barrier with another species; and then there’s the drama of the xenobiologist’s family, who have been studying the piggies for a few generations now. Although this book isn’t overflowing with violence like Ender’s Game, there are definitely enough moving parts and enough character development to keep you interested. My guess is that if you enjoyed the writing style of Ender’s Game, you’ll like this book too.

The A.I. (Artificial Intelligence) that helps Ender is probably my favorite aspect of this story; her name is Jane. The interesting part about Jane is that she doesn’t really know how she was made. After humans started using bugger technology (the ansible) to communicate instantaneously through space, with enough time and random code falling into place, Jane was basically created. Slowly but surely she became self-aware and started researching all the information available to her. After studying human history (along with the bugger war), she concludes that for the time being it’s best not to make her existence known, for fear that humans would see her as a threat and try to kill her. After some time she finds The Hive Queen, easily connects it to Ender Wiggin, and decides that if anyone can change human opinion of “different life forms” so they don’t see her as a threat, it’s Ender.

The whole concept of Artificial Intelligence is one that both excites me and scares me simultaneously. The reality is, we’re getting closer to that possibility every day. Baidu’s A.I. team was recently able to teach a virtual agent much the same way you would teach a human baby. The short summary is that the virtual agent can get a grasp of grammar and apply what it’s learned to new situations, something previous computers/programs have had issues with. It looks like a lot of the philosophical questions posed in Orson Scott Card’s Speaker for the Dead might have to be answered (or at least discussed) sooner than some of us would like to admit. If we do develop A.I., how do we handle it? Is it morally right to kill it if we perceive it as a potential threat? What if we’re wrong? The list goes on and on; it’s just amazing that we live in a time when so much science fiction has become, or is becoming, science fact!

Featured image came from:

http://enderverse.wikia.com/wiki/Andrew_Wiggin