Will Artificial Intelligence Destroy Mankind?

Some Pretty Smart People Say Artificial Intelligence Needs to Be Checked

Will Artificial Intelligence destroy us? photo: Adam – cc

“HAL, I won’t argue with you anymore! Open the doors!”

– Dave Bowman (2001: A Space Odyssey)

When I was very young I happened to catch the second Terminator movie on TV before I’d ever had a chance to see the first one. Once the cognitive dissonance of watching “good” Arnold turn out to be the bad guy had passed, the message of both movies proved to be pretty much the same: through unchecked ingenuity we will involuntarily bring about our own destruction. Interesting as the thought was, the concept seemed ridiculous to me in an age of dial-up modems and on-disc encyclopedias. Recently, however, a group of high-profile scientists, engineers, and software entrepreneurs has forced me to look again at the possibility of a self-inflicted demise and re-examine the question, “Will Artificial Intelligence destroy mankind?”

Arnie is the result of Artificial Intelligence left unchecked. photo: Netflixlife – cc

The theory goes like this: we’re currently making HUGE leaps in various avenues of technology, not the least of which are fully autonomous robotic systems. According to Moore’s Law, which doesn’t seem to be faltering much yet, the number of transistors we can squeeze onto a chip roughly doubles every two years, and the processing speed and memory capacity of our machines have climbed right along with it. These advancements have helped us create things like the iPhone’s Siri, Google’s self-driving vehicles, and the software that runs all kinds of complex automated machinery. Whether it’s computers automatically buying and selling stocks via complex algorithms, camera-guided smart cars that know when to swerve around pedestrians, or autonomous drones using pre-programmed inputs to bust open a bunker, it’s pretty clear that we’re becoming less and less reliant on the human element.

Eventually, some insist, machines will become capable of repairing and even improving themselves independent of human involvement, at a hypothetical point in time that’s been dubbed the “singularity.” Whereas humans have relied on millions of years of evolution to become smarter (as a means of survival), such machines could improve themselves instantly and continuously, easily surpassing our own intellect. With the eventual accrual of self-awareness and fully independent thought, they might even come to the conclusion that humans serve no real purpose, are too inefficient to keep around, or (as is a common theme in sci-fi) are their biggest threat and must therefore be eliminated.
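To get a feel for how quickly that kind of compounding adds up, here’s a rough back-of-the-envelope sketch in Python. The starting figure, the time frame, and the projected_capacity helper are all illustrative assumptions, not real chip specs:

```python
# Back-of-the-envelope Moore's Law arithmetic: if capacity doubles every
# "doubling_period" years, it grows as start * 2^(years / doubling_period).
# All numbers here are illustrative, not real chip specifications.

def projected_capacity(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Compound doubling: start * 2 raised to (years / doubling_period)."""
    return start * 2 ** (years / doubling_period)

# A hypothetical chip with 1 billion transistors, projected 10 years out:
print(f"{projected_capacity(1e9, 10):.2e}")  # prints 3.20e+10 -- a 32x increase
```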

So who exactly thinks we need to wake up and seriously consider the potential dangers of fully conscious artificial entities? The list is pretty impressive: theoretical physicist Stephen Hawking, Microsoft co-founder and nerd extraordinaire Bill Gates, Tesla and SpaceX founder Elon Musk, Apple co-founder Steve Wozniak, University of Cambridge astrophysicist Sir Martin Rees, and Google’s director of research Peter Norvig, to name only a few.

Here’s what a few of them have said regarding the future of Artificial Intelligence:


“If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.” (interview with the Australian Financial Review)

–  Steve Wozniak

 

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” (during a Reddit AMA)

–  Bill Gates

 

“If I had to guess at what our biggest existential threat is, it’s probably [artificial intelligence]. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” (at the MIT AeroAstro Centennial Symposium)

–  Elon Musk

 

“Computers will overtake humans with AI [artificial intelligence] at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” (at the Zeitgeist 2015 conference)

–  Stephen Hawking

While some of these high-profile experts tend to come off as overly pessimistic fear-mongers, others have taken a more moderate stance, believing that while the possibility is there, mankind’s demise is unlikely. One of my favorite people, astrophysicist Neil deGrasse Tyson, believes that we are more than capable of averting any potential apocalypse resulting from unregulated artificial intelligence.

It’s hard to deny that some of these doomsayers are among the brightest people on the planet. Does that mean they’re right? Hard to say. Robots and artificial intelligence are extensions of ourselves. While many fear a free-thinking artificial entity that has gone “rogue” or realized that humans are expendable, I can’t help but ask whether anything we create could ever be truly “free-thinking.” A computer that decides to launch a ballistic missile on its own after making some independent calculations has gone no more “rogue” than a pocket calculator returning the sum of a few numbers we plugged into it. It does what it does because at some point we told it to. Whatever it builds up from there is still a product of our initial commands.

Though I love discussing the potential for weaponized robots turning against us, the more I talk about it the more I think the entire point is moot. A fully independent machine with self-awareness and “emotions” is unlikely to be weaponized militarily, simply because there’s no need for it. Sure, you can give a weapon pre-programmed contingencies, e.g. “if ‘A’ nukes ‘B,’ then ‘C,’ ‘D,’ and ‘E’ nuke ‘A,’” but creating something with even the slightest potential to misconstrue or outright disobey orders because of its “opinions” would be silly from a military standpoint.
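To make that concrete, here’s a minimal sketch in Python of what such a pre-programmed contingency boils down to. The country labels and the retaliators function are hypothetical illustrations; the point is that the rule is deterministic, with no room for “opinions”:

```python
# A pre-programmed contingency of the kind described above:
# "if 'A' nukes 'B,' then 'C,' 'D,' and 'E' nuke 'A'."
# Pure if/then logic; the same inputs always produce the same response.

ALLIES = {"B", "C", "D", "E"}  # hypothetical bloc whose members defend one another

def retaliators(attacker: str, victim: str) -> set:
    """Return the bloc members who strike back when an outsider attacks one of them."""
    if victim in ALLIES and attacker not in ALLIES:
        return ALLIES - {victim}  # every remaining member retaliates
    return set()  # no rule triggered, no response

print(retaliators("A", "B"))  # {'C', 'D', 'E'} (set order may vary)
```

A machine running rules like these hasn’t “decided” anything; like the pocket calculator above, it’s just executing what we told it to do.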

Could drones be given a completely independent mind? photo: Texas.713 – cc

We develop technology for one simple reason: to make our lives easier. So why would we create an entity with its own decision-making power, emotions, or morals if there’s a chance those could come into conflict with our own? It’s not that we’re smart enough NOT to do it; we engage in self-harming practices, both globally and personally, every day. And it’s not that we can’t do it, either. While I’m frequently amazed at what we produce on an almost weekly basis, we’re simply too selfish to waste our time creating something so complex that it might not end up benefiting us.

Let me know your thoughts below.
