Is Artificial Intelligence good or bad? (Part 2)

Following up on a previous post on artificial intelligence (found here), we touched on a few topics regarding whether AI has morals. If it does, how does it get those morals? Is it truly unbiased and unprejudiced? Is all of this even worth worrying about? The field of artificial intelligence is so interesting because so many influential and intelligent people hold such varied viewpoints. In fact, just recently Mark Zuckerberg (CEO of Facebook) and Elon Musk (CEO of Tesla) got into a public disagreement on the topic.

[Embedded video: theoretical physicist Michio Kaku discusses the Zuckerberg vs. Musk AI debate]

In the above video, theoretical physicist Michio Kaku discusses the points that both Mark Zuckerberg and Elon Musk are making. Although I personally admire both entrepreneurs for how they’ve revolutionized our lives, I think Zuckerberg may be a little too optimistic on this front. Machines have already been weaponized and will continue to be, but what happens when a machine is no longer controlled by a human and its job is still to kill people (like some sort of intelligent tank or drone)? How does an intelligent machine make its killing decisions? What if something goes wrong with its programming?

I suppose you could ask the same of Elon Musk’s Tesla: what happens if its decision gets someone killed? We already know that, statistically speaking, computers make fewer mistakes than humans, but as humans we still tend to react more strongly to an accident caused by AI than to one caused by a person. A human can at least be held responsible through legal action of some sort, which gives us some kind of relief, but a machine? Sure, you can scrap that one car and sue the company, but the same software will likely end up in another car, right? Although I agree with Elon that we should be wary of developing AI, and that it’s better to develop it slowly so we can understand it, it’s also amusing that autonomous vehicles have seen little regulation since their inception (although according to this article Congress is supposed to start soon), and they pose a similar risk, albeit on a smaller scale than exponential AI.

Then something amazing happened: like an ominous prophecy coming to fruition, we started to see click-bait articles like this claiming Facebook HAD to shut down their AI because the bots invented their own language and were speaking in code in front of their human creators. Before jumping on the bandwagon (which I admittedly almost did), I checked the Snopes fact checker, and it turns out the whole purpose of the experiment was to improve “AI-to-human” communication. So even though I lean toward the “AI will probably eventually kill us” team, it’s still important to remember that we’re only just starting to be able to develop and research these topics ourselves, so we should be cautious but also not jump to conclusions.

With all this back and forth about AI, I want to know what YOU think! Leave a comment below telling us how you feel about AI, Elon Musk, Mark Zuckerberg, or anything else AI related.

Featured Image from: https://theringer.com/mark-zuckerberg-elon-musk-tech-battle-19fe681dbcf4
