Steven Hill/Technically Speaking
It appears that some folks at Cambridge University aren't taking any chances when it comes to the possibility of a robot uprising wiping out humanity.
According to a BBC news report, researchers at Cambridge's Centre for the Study of Existential Risk (CSER) want to examine the dangers posed to mankind by biotechnology, artificial life, nanotechnology and climate change.
Three of those potential dangers seem perfectly reasonable, but the fourth strikes me personally as… science fiction-y?… mildly paranoid?… batshit crazy? Yeah. Batshit crazy. That's the one.
The group's website says the “seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” and then points visitors in the direction of an article about computers and artificial intelligences taking over the world — maybe.
The idea of robots wiping us out is nothing new. Hollywood has long regaled us with dystopian tales of humans letting our technology get the better of us, such as in the Terminator and Matrix movie franchises.
We're still a long way from killer Ahhh-nold Schwarzenegger-bots, though.
Admittedly, our computers are getting smarter and faster every year, and we've developed things like superfast robotic cheetahs, and even Watson, an artificial intelligence that beat the human champions of the game show Jeopardy!
I mean, put those two things together and what do you get?
A speedy cyborg that knows the capital of Uruguay and can phrase the answer in the form of a question… but still… that's kinda scary.
“I'll take World Dominating Robots Who Can Break the Four-Minute Mile for $100, Alex… and the answer is 'What am I?', muahahahahahaha.”
Although, now that I think of it, evil artificial intelligences probably don't laugh all that much… even maniacally.
But this robo-calyptic vision of the future has always been fodder for films and video games; this marks the first time actual scientists plan to take a serious look at the possibility of our technology gaining malicious self-awareness.
For my part, I've never worried about that kind of thing.
First, and most importantly, I know too well the current limitations of artificial intelligence. Most of the guys programming AI today work for video game companies, and if even I can outsmart their current best efforts, imagine what the 14-year-old gaming whizzes out there can do. I don't think mankind has a reason to be nervous around the toaster. That Watson thing is the best AI we've come up with so far, and as I pointed out earlier… that certainly ain't even close to a time-travelling metal Schwarzenegger with a serious hate-on for humanity.
But even if it could happen, just as science fiction gave us the paranoid idea that computers could take over the world, it's also given us the way to defeat them. Simply give them a paradox to consider (Yo, Killer Cheetah Brainbot… What is the sound of one hand clapping?)… and the resulting logical inconsistency makes their heads explode. Well, that's how Captain Kirk beat rogue robots and corrupted computers in just about every episode of Star Trek, so it seems like a solid strategy.
I should probably share that info with Cambridge.