
Why Elon Musk’s Hyperloop is a big deal

If you’ve been keeping up with the news lately, you’ve probably seen a headline or two about some Hyperloop thing, right? It seems like Elon Musk is making headway in so many industries that it’s hard to keep up. SOOoooo what’s all the “hype”(rloop) about? I’m glad you asked. Hyperloop One started with Elon Musk’s idea that we could all be traveling in high-speed, vacuum-sealed tubes in the future, much like the Jetsons. This is amazing because it would allow us to travel really long distances in fairly short amounts of time. One of Hyperloop’s videos even asked something like, “If you could travel 300 miles in 30 minutes, where would you live?” Think about that: you could work in a different state if you wanted! Probably the most exciting thing about it is captured in this quote directly from the Hyperloop website.

“Hyperloop systems will be built on columns or tunneled below ground to avoid dangerous grade crossings and wildlife. It’s fully autonomous and enclosed, eliminating pilot error and weather hazards. It’s safe and clean, with no direct carbon emissions.”

Efficient, safe, environmentally friendly, and capable of airliner speeds: the only thing better would be teleportation! Could I get any more on board this train…I mean Hyperloop? The main reason you’re hearing more about it on the news lately is that they successfully tested shooting a transportation pod down the tube, and it reached a speed of 192 mph! Check out the video below.

It’s because of this test that they can now move on to the next phase of commercialization. Thenextweb.com reports that Hyperloop One plans on having three systems in production somewhere around the world by 2021. Now, for those of you thinking that roughly 200 mph isn’t all THAT fast, it’s important to note these are still early tests, and they do expect speeds to get much higher, so we have something to look forward to. All I can say is: what a time to be alive. Let me know what you think about the Hyperloop in a comment below! Love it, hate it, I want to know why!


Is Artificial Intelligence good or bad? (Part 2)

Following up on a previous post on artificial intelligence (found here), where we touched on a few topics regarding whether AI has morals: if it does, how does it get those morals? Is it truly unbiased and unprejudiced? Is any of this even worth worrying about? The field of artificial intelligence is so interesting because so many influential and intelligent people hold such varied viewpoints. In fact, just recently Mark Zuckerberg (CEO of Facebook) and Elon Musk (CEO of Tesla) got into a public disagreement on the topic.

[Video: Michio Kaku discusses the disagreement between Mark Zuckerberg and Elon Musk over AI]

In the above video, theoretical physicist Michio Kaku discusses the points that both Mark Zuckerberg and Elon Musk are making. Although I personally admire both entrepreneurs for how they’ve revolutionized our lives, I think Zuckerberg may be a little too optimistic on this front. Machines have been weaponized in the past and will continue to be weaponized, but what happens when you no longer have a machine controlled by a human, and its job is still to kill people (like some sort of intelligent tank or drone)? How does an intelligent machine make its killing decisions? What if something goes wrong with its programming?

I suppose you could ask the same of Elon Musk’s Tesla: what happens if its decision gets someone killed? We already know that, statistically speaking, computers make fewer mistakes than humans, but we still react more strongly, perhaps overreact, to an accident caused by AI rather than by a human. A human, at least, we can hold responsible by taking legal action of some sort, which gives us some kind of relief, but a machine? Sure, you can scrap that one and sue the company, but the same software will likely be on another car, right? Although I agree with Elon that we should be wary of developing AI, and that it’s better to develop it slowly so we can understand it, it’s also amusing that autonomous vehicles have seen little regulation since their inception (although according to this article, Congress is supposed to start soon), and they pose a similar risk, albeit not on as large a scale as exponential AI.

Then something amazing happened: like an ominous prophecy coming to fruition, we started to see clickbait articles like this one claiming Facebook HAD to shut down their AI because the bots invented their own language and were speaking in code in front of their human creators. Before jumping on the bandwagon (which I admittedly almost did), I checked the Snopes fact checker, and it turns out the whole purpose of the experiment was to improve AI-to-human communication. So even though I lean toward the “AI will probably eventually kill us” team, it’s still important to remember that we’re just starting to be able to develop and research these topics ourselves, so we should be cautious but also not jump to conclusions.

With all this back and forth about AI, I want to know what YOU think! Leave a comment below telling us how you feel about AI, Elon Musk, Mark Zuckerberg, or anything else AI related.

Featured Image from : https://theringer.com/mark-zuckerberg-elon-musk-tech-battle-19fe681dbcf4


Is Artificial Intelligence good or bad? (Part 1)

In my review of Speaker for the Dead, I briefly mentioned Jane, the artificial intelligence that accompanies Ender. There are so many elements to consider when thinking about A.I.: does it already exist, will it create a utopia, will we all be annihilated? Some people think it’ll be the most extraordinary advance to change our lives since the industrial revolution, some think it’ll be the end of humanity as we know it, and others (like myself) are cautiously optimistic. Each view holds some merit, especially since Moore’s law (the observation that computing power roughly doubles every two years) has held true for over fifty years. Anything advancing that quickly demands our attention. The moral implications of handing decision-making processes over to machines are astounding, the automation of killing in certain military drones is quickly becoming a reality, and the displacement of jobs is a concrete wall we’re speeding toward with no brakes.
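To put that doubling in perspective, here’s a quick back-of-the-envelope calculation (a rough sketch that assumes a clean two-year doubling period, which real hardware only approximates):

```python
# Moore's law, roughly: computing power doubles every two years.
years = 50
doublings = years / 2      # 25 doublings over fifty years
growth = 2 ** doublings    # relative increase in computing power
print(f"About {growth:,.0f}x")  # About 33,554,432x -- a ~33-million-fold increase
```

A thirty-million-fold increase in fifty years is not the kind of trend you get to ignore.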

A prime example of someone who believes our moral values are more important now than ever is Zeynep Tufekci. In her TED Talk from 2016, she makes the case that human morals matter more than ever in a world where we are handing more and more decision-making processes over to algorithms and automation we don’t entirely understand.

[Video: Zeynep Tufekci’s 2016 TED Talk]

Zeynep begins with a great anecdote about being asked whether a computer can tell if a person is lying (her boss was asking because he was cheating on his wife). This raises interesting questions on several levels: first, can machines detect a lie, and second, if they can, could they also learn to lie? Ponder that for a moment: if a machine could learn that humans sometimes lie to avoid an undesired response, couldn’t it incorporate that lesson and use it itself? When we think about A.I., we need to remember that until this point, machines and computers only had the capacity to do what they were told; humans maintained control. There’s always a person plugging numbers into a calculator, a person hitting the send button on that Facebook message, a person behind the steering wheel pressing the gas or the brakes when needed. Although we all know we as humans aren’t perfect, we can count on (most) humans to act guided by a sound moral compass. Are we ready to trust machines with the same moral decisions?

A question I’ve pondered a lot lately: if a self-driving car kills someone, does its existence make roadways less safe than human drivers do? An interesting article by Business Insider states, “A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it’s safer to crash into a smaller object.” Can we safely allow A.I. to make life-altering decisions for us? How do we set acceptable limits? Zeynep argues that we don’t really have any benchmarks or guides for making decisions in complex human affairs. Basically, we’re not sure how an A.I. would make its decisions, and we don’t like what we don’t know.

Zeynep also brings up “machine learning”: unlike regular programming, where the computer is given detailed instructions on how to respond to specific scenarios, machine learning gives the computer tons of information, which it then uses to learn. For example, let’s say you show a program one hundred pictures of dogs, all kinds of dogs; it’ll analyze every picture and start to learn the features of a dog. Our programs do this so well that if you later showed it a picture of a cat, it would tell you the picture is “not a dog” if asked. Now that’s a REALLY basic summary, but you get the picture (see what I did there?).
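To make that dog example a little more concrete, here’s a minimal sketch of the same idea in Python with scikit-learn. Everything here is made up for illustration: real systems learn features from raw pixels, while this toy version is handed two invented features so it fits in a few lines.

```python
# Toy "dog or not a dog" classifier -- illustration only.
# Features are invented: [ear_floppiness, snout_length], each 0 to 1.
from sklearn.neighbors import KNeighborsClassifier

X = [[0.9, 0.8], [0.8, 0.7], [0.7, 0.9],   # dogs
     [0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]   # cats
y = [1, 1, 1, 0, 0, 0]                     # 1 = dog, 0 = not a dog

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)  # "learn the features of a dog" from labeled examples

# A new animal with stiff ears and a short snout: the model says "not a dog".
print(model.predict([[0.15, 0.25]]))  # -> [0]
```

The point is the same as Zeynep’s: nobody wrote a rule that says “floppy ears means dog”; the program inferred it from the examples it was given.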

This to me is incredibly interesting because I’m one of those people sitting on the fence about A.I. I definitely see the appeal and get excited thinking about all the cures for disease, the technology we could further develop, and how much further we could get in our exploration of the universe if only we had A.I. systems in place helping to do the research. In the same breath, I also see how giving these machines the ability to make probabilistic decisions in ways we don’t quite understand is worrisome, especially if these systems are put in place for military weapons, transportation systems, and even food and water filtration systems. It’s either a utopia or a dystopia…great.

Another consideration: although the computers will make decisions in ways we may not understand, they still do so using information we give them. Zeynep points out that these systems can pick up on our biases, good or bad. According to her, researchers found that on Google, women were less likely than men to be shown ads for high-paying jobs, and searching for African-American names was more likely to bring up ads suggesting a criminal history, even when there is none. So what kind of future do we want to build? Are we unknowingly building A.I. with the same prejudices we have as a species? What further implications will that have?
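Here’s a tiny fabricated example of how that happens mechanically. The data below is entirely invented: past hiring decisions are skewed against group B, so the model dutifully learns the skew and reproduces it for otherwise identical candidates.

```python
# How a model inherits bias from its training data -- fabricated example.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group], where group A = 1, group B = 0.
# The historical labels favored group A regardless of experience.
X = [[5, 1], [3, 1], [4, 1], [6, 1],   # group A candidates
     [5, 0], [3, 0], [4, 0], [6, 0]]   # group B candidates
y = [1, 1, 1, 1,                       # group A: all hired
     0, 0, 0, 1]                       # group B: mostly rejected

model = LogisticRegression().fit(X, y)

# Two candidates identical except for group membership:
print(model.predict_proba([[5, 1]])[0][1])  # group A: high "hire" probability
print(model.predict_proba([[5, 0]])[0][1])  # group B: much lower
```

Nobody told the model to prefer group A; it just found the pattern that best fit the biased history it was given.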

All this is great speculation, but are we even close enough to developing A.I. to be worried about it? Well, kind of. A blog post by NVIDIA does a pretty good job breaking it down. The basic rundown: so far we’ve been able to program machines and computers to do specific tasks better than humans. This ability is classified as narrow A.I. and includes tasks like playing checkers or facial recognition on Facebook: simple programs that, once they know the “rules,” can execute them nearly perfectly. The next great step forward was machine learning. This is what Zeynep was referring to: instead of hand-coding every behavior, programmers use algorithms that help machines dissect information, which the machines then use to make their calculated “best” decision. As we discussed earlier, though, depending on where that initial pool of data came from, there could be bias and even racism “programmed” into the machines unintentionally.
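The difference between the two approaches is easy to see side by side. This sketch uses a hypothetical spam filter rather than checkers or facial recognition, but the contrast is the same: in the first version a human writes the rule, in the second the rule is inferred from labeled examples.

```python
# Narrow, hand-coded A.I.: the programmer supplies the rule.
def is_spam_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# Machine learning: the rule is learned from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

subjects = ["free money now", "win free money fast",
            "meeting at noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(subjects), labels)

print(is_spam_rule("FREE MONEY inside!"))                       # True (rule)
print(model.predict(vec.transform(["claim your free money"])))  # [1] (learned)
```

Note that the learned version only knows what its four training examples taught it, which is exactly why the quality and balance of that initial pool of data matters so much.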

All great things worth considering, and I want to know what YOU think about A.I.! Are you for it or against it, and why? Leave a comment below, and I’ll see you in the next part of “Is Artificial Intelligence good or bad?”

 

*Featured image from http://mirrorspectrum.com/behind-the-mirror/the-terminator-could-become-real-intelligent-ai-robots-capable-of-destroying-mankind#