
Is Artificial Intelligence good or bad? (Part 1)

In my review of Speaker for the Dead I briefly mentioned Jane, the artificial intelligence that accompanies Ender. There are so many elements to consider when thinking about A.I.: does it already exist, will it create a utopia, will we all be annihilated? Some people think it'll be the most extraordinary life-changing advance since the industrial revolution, some think it'll be the end of humanity as we know it, and others still (like myself) are cautiously optimistic. Each view holds some merit, especially since Moore's law (the observation that computing power doubles roughly every two years) has held true for over fifty years. Anything advancing that quickly demands our attention. The moral implications of handing decision-making processes over to machines are astounding: the automation of killing via certain military drones is quickly becoming a reality, and the displacement of jobs is a concrete wall we're speeding toward with no brakes.
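
Just to put that doubling in perspective, here's the Moore's law arithmetic as a couple of lines of Python (a back-of-the-envelope sketch, not a precise claim about real hardware):

# Moore's law as plain arithmetic: one doubling every 2 years.
years = 50
doublings = years / 2
growth = 2 ** doublings
print(f"After {years} years: roughly {growth:,.0f}x the computing power")
# -> After 50 years: roughly 33,554,432x the computing power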

A prime example of someone who believes our moral values matter now more than ever is Zeynep Tufekci. In her TED Talk from 2016, she makes the case for holding on to human morals in a world where we are handing more and more decisions over to algorithms and automation we don't entirely understand.


Zeynep begins with a great anecdote about being asked whether a computer can tell if a person is lying (her boss was asking because he was cheating on his wife). This raises interesting questions on several levels: first, can machines detect a lie? Second, if they can, could they also learn to tell one? Ponder that for a moment: if a machine could learn that humans sometimes lie to avoid an undesired response, couldn't it incorporate that lesson and start lying itself? When we think about A.I. we need to remember that, until this point, machines and computers only had the capacity to do what they were told; humans maintained control. There's always a person plugging numbers into a calculator, a person hitting the send button on that Facebook message, a person behind the steering wheel pressing the gas or the brakes when needed. Although we all know humans aren't perfect, we can count on (most of) us to act guided by a sound moral compass. Are we ready to trust machines with the same moral decisions?

A question I've pondered a lot lately: if a self-driving car kills someone, does the existence of these cars make roadways less safe than human drivers do? An interesting article by Business Insider states, "A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it's safer to crash into a smaller object." Can we safely allow A.I. to make life-altering decisions for us? How do we set acceptable limits? Zeynep argues that we don't really have any benchmarks or guides for making decisions in complex human affairs. Basically, we're not sure how an A.I. would make its decisions, and we don't like what we don't know.

Zeynep also brings up "machine learning." Unlike regular programming, where the computer is given detailed instructions on how to respond to specific scenarios, machine learning hands the computer tons of information, which it then uses to teach itself. For example, let's say you show a program one hundred pictures of dogs, all kinds of dogs; it'll analyze every picture and start to learn the features of a dog. Our programs do this so well that if you later showed it a picture of a cat, it would tell you the picture is "not a dog" if asked. Now that's a REALLY basic summary, but you get the picture (see what I did there?).
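
To make that "learn from examples" idea concrete, here's a minimal sketch in Python using scikit-learn. The two "features" (ear floppiness, snout length) and every number here are invented for illustration; real image recognition works on pixels and is far more involved:

# A toy "dog or not a dog" classifier. Instead of real pixels, each
# example is two made-up features, so this only sketches the idea of
# learning from labeled examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# One hundred "dog" examples clustered in one region of feature space...
dogs = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(100, 2))
# ...and one hundred "not a dog" examples (cats, say) in another.
cats = rng.normal(loc=[0.3, 0.3], scale=0.1, size=(100, 2))

X = np.vstack([dogs, cats])
y = [1] * 100 + [0] * 100  # 1 = dog, 0 = not a dog

model = RandomForestClassifier(random_state=0).fit(X, y)

# Show it something cat-like it has never seen before:
print(model.predict([[0.32, 0.28]]))  # -> [0], i.e. "not a dog"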

This to me is incredibly interesting because I'm one of those people sitting on the fence about A.I. I definitely see the appeal and get excited thinking about all the cures for disease, the technology we could further develop, and how much farther we could get in our exploration of the universe if only we had A.I. systems in place helping to do the research. In the same breath, I also see how giving these machines the ability to make probabilistic decisions in ways we don't quite understand is worrisome, especially if those systems are put in charge of military weapons, transportation systems, or even food and water filtration systems. It's either a utopia or a dystopia… great.

Another consideration: although these computers make decisions in ways we may not understand, they still do so using information we give them. Zeynep points out that these systems can pick up on our biases, good or bad. According to her, researchers found that Google was less likely to show women ads for high-paying jobs than men, and that searching for African-American names was more likely to bring up ads suggesting a criminal history, even when there was none. So what kind of future do we want to build? Are we unknowingly building A.I. with the same prejudices we have as a species? What further implications will that have?

All this is great speculation, but are we even close enough to developing A.I. to be worried about it? Well, kind of. A blog post by NVIDIA does a pretty good job breaking it down. The basic rundown: so far we've been able to program machines and computers to do specific tasks better than humans. This ability is classified as narrow A.I. and includes tasks like playing checkers or Facebook's facial recognition; simple programs that, once they know the "rules," can execute nearly perfectly. The next great step forward was machine learning. This is what Zeynep was referring to: instead of hand-coding every rule, programmers use algorithms that help machines dissect information, which the machines then use to make their calculated "best" decision. As we discussed earlier, though, depending on where that initial pool of data came from, bias and even racism could be "programmed" into the machines unintentionally.
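
Here's a tiny, deliberately rigged Python illustration of that last point. Everything in it (the "group" attribute, the skill score, the lopsided history) is invented for the example, but it shows the mechanism: train a model on biased historical data and the bias comes out the other side.

# Hypothetical data: two groups, one genuinely relevant "skill" score,
# and a biased history where group 1 was shown the high-paying ad far
# less often regardless of skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)       # a made-up sensitive attribute
skill = rng.normal(0.5, 0.15, n)    # the feature that *should* matter

shown_ad = ((group == 0) & (skill > 0.5)) | ((group == 1) & (skill > 0.8))

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, shown_ad)

# Identical skill, different group -> very different odds of seeing
# the ad. The model faithfully learned the bias in its training data.
print(model.predict_proba([[0, 0.6], [1, 0.6]])[:, 1])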

All great things worth considering, and I want to know what YOU think about A.I. Are you for it or against it, and why? Leave a comment below, and I'll see you in the next part of "Is Artificial Intelligence Good, Bad, or Neither?"


*Featured image from http://mirrorspectrum.com/behind-the-mirror/the-terminator-could-become-real-intelligent-ai-robots-capable-of-destroying-mankind#



Centrifugal force space stations: closer than you think

So I wrote a review of Ender's Game here, and one thing I mentioned is the technology and science that the book and movie showcase. Of course, if you read the title you probably guessed what we're talking about: the Battle School space station and how it creates artificial gravity using centrifugal force. Before I get into this, let me preface this post by saying Ender's Game is in no way the first science fiction story to use artificial gravity, let alone the centrifugal method of creating it, BUT it was the first time I saw it and started taking an interest in it.

I think one of the simplest ways to explain centrifugal force is with an experiment we did in elementary school with a bucket of water (do they still do experiments in elementary school, or am I dating myself?). Does anyone else remember taking that bucket of water and spinning it over your head in a circle as fast as you possibly could? Did anyone else let go and hit someone with the bucket? Me neither, but what I do remember is that when I spun the water over my head, not a drop fell out of the bucket as long as I spun it fast enough. That's because spinning it in a circle forced all the water to stay at the bottom of the bucket; the force I created by spinning was an artificial gravity keeping the water on the "floor" of the bucket. If you want more in-depth and scientific information on centrifugal force, click here.
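
If you want to see the bucket trick as numbers, here's a quick back-of-the-envelope sketch in Python. The spinning supplies a centripetal acceleration a = omega^2 * r, and the water stays put as long as that beats gravity at the top of the swing. The one-metre arm length is just my guess at a kid-plus-bucket radius:

# Minimum spin rate so the water "sticks" to the bucket: omega^2 * r >= g
import math

g = 9.81   # gravity, m/s^2
r = 1.0    # assumed radius of the swing (arm + bucket), in metres

omega_min = math.sqrt(g / r)            # minimum angular speed, rad/s
period = 2 * math.pi / omega_min        # seconds per full circle
print(f"Spin at least once every {period:.1f} seconds")  # ~2.0 s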

Just like the water in the bucket, Battle School in Ender's Game would have been spinning in a circle, and the people on board would be like the water in the bucket, kept on the floor by the force of the spinning motion. Of course, scientists have been thinking of ways to create artificial gravity in space for a while now, and frankly it's something we'll need to figure out if we want to travel farther into space, since the human body loses a lot of bone density without gravity. Well, it turns out NASA has been researching this problem for some time, since at least April 2005 to be exact. In the article NASA Gives Artificial Gravity a New Spin (got to give props to the pun), they describe putting test subjects on a bed to simulate weightlessness, then spinning some of them for an hour a day at a force great enough to generate 2.5 times as much gravity as Earth's. The purpose of the tests, of course, is to see just how much less bone deterioration occurs in the test subjects who experience the gravity. Pretty darn neat, right?
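
Out of curiosity, here's the same formula run backwards to estimate how fast such a centrifuge would spin. Fair warning: the 3 m arm length is purely my assumption, since the article doesn't give one:

# How fast must a short-arm centrifuge spin to pull 2.5 g?
# omega = sqrt(2.5 * g / r)
import math

g = 9.81   # m/s^2
r = 3.0    # assumed centrifuge arm length in metres (not from the article)

omega = math.sqrt(2.5 * g / r)          # rad/s
rpm = omega * 60 / (2 * math.pi)
print(f"About {rpm:.0f} rpm")           # ~27 rpm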

You can see here that test subjects might find themselves feeling a little down (see what I did there?). Click the picture for the whole story from NASA.


Okay, but that was back in 2005, right? Is there anything related to centrifugal space stations that's more… recent? As a matter of fact, there is! According to an article from Dailymail.co.uk, suspiciously also in April, ten years later, there is a company called United Space Structures that wants to create the first spinning station, which I'm sure will turn some heads (cough cough). Basically, they want to make a small version first as a proof of concept (which they claim can be done in 12 months), and then they would get started on their final design, which would be 330 ft in diameter and 1,310 ft long. Once production starts, it would take only 30 years and $300 billion, which in all seriousness doesn't seem that bad considering what it accomplishes.
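
And since we have a diameter, we can estimate how fast that station would have to spin for Earth-like gravity at the rim, again a rough sketch using a = omega^2 * r = g:

# Spin rate for ~1 g at the rim of a 330 ft-diameter station
import math

g = 9.81                    # m/s^2
r = (330 / 2) * 0.3048      # 165 ft radius converted to metres (~50 m)

omega = math.sqrt(g / r)                # rad/s
rpm = omega * 60 / (2 * math.pi)
print(f"About {rpm:.1f} rpm")           # ~4.2 rpm, a fairly gentle spin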

muchroom station

www.dailymail.co.uk/sciencetech/article-3030087/Could-300-billion-space-mushroom-replace-ISS-Giant-rotating-station-create-artificial-gravity-astronauts.html

At 1,310 ft long, it's not just a mushroom-looking station, it's a muchroom one… (I hear the crickets now)

Basically, what I'm trying to say is that Ender's Game's Battle School (or at least a space station that generates gravity like it) is not too far off in the future, and if that doesn't excite you, all I can say is Geez (as in gee forces :P).

Featured image came from

https://pics-about-space.com/battle-damaged-space-station-gravity?p=3#img4700285539864871137