By Kharunya Paramaguru @timenewsfeed
While many Americans were finalizing preparations for Thanksgiving on Nov. 21, U.S. Deputy Defense Secretary Ashton Carter signed a new policy directive aimed at reducing the risk and “consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” In other words, the Pentagon was making sure that the U.S. military doesn’t end up in a situation where robots are able to decide whether to pull the trigger on a human.
If you’ve seen The Matrix, The Terminator or even 2001: A Space Odyssey, you know one thing is inevitable: The machines are coming, and someday they’re going to kill us all. And given the recent proliferation and sophistication of military drones and other automated weapons systems, that future could be getting closer than we think.
(MORE: TIME 1980 – The Robot Revolution)
In an attempt to head off that Terminator-like future, Cambridge University has announced that it is setting up a center next year devoted to the study of technology and “existential risk” — the threat that advances in artificial intelligence, biotechnology and other fields could pose to mankind’s very existence.
The Cambridge Project for Existential Risk is the brainchild of two Cambridge academics — philosophy professor Huw Price and professor of cosmology and astrophysics Martin Rees — as well as Estonian tech entrepreneur Jaan Tallinn, a co-founder of Skype. The center hopes to train a scientific eye on the philosophical issues posed by human technology and on whether they could result in “extinction-level risks to our species as a whole.”
(MORE: Robot with Human Skeleton Steps Toward Artificial Intelligence)
Price tells TIME that while our demise at the hands of our own technological creations has long been a staple of Hollywood films and science fiction (again: Terminator), it has hitherto seen little serious scientific investigation:
“I enjoy those science fiction films, but the success of those movies has contributed in a way to making these issues seem not entirely serious. We want to make the point that there is a serious side to this too.”
Take, for example, the still little-understood flash crash of May 6, 2010. In just six minutes, automated trades executed by computers sent the Dow Jones Industrial Average plummeting almost 1,000 points — one of the biggest intraday declines in the index’s history — only for it to recover within minutes. The dip alarmed regulators, who realized that this technology — lightning-fast trades set to execute based on computerized analysis of market conditions — is already in many ways beyond our control.
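To see how such a feedback loop can run away, consider a toy sketch (the stop-loss rule, price levels and impact figures below are invented for illustration and bear no resemblance to any real trading system): many automated sellers watching the same price can turn an ordinary dip into a rout, because each triggered sale pushes the price through the next rule’s trigger.

```python
# Hypothetical illustration only: staggered automated stop-loss rules,
# where each forced sale knocks the price through the next rule's trigger.

def simulate_cascade(start=100.0, traders=50, impact=0.4):
    stops = [start - 1.0 - 0.2 * i for i in range(traders)]  # staggered stop levels
    price = start - 1.0          # an ordinary 1% dip reaches the first stop
    path = [start, price]
    for stop in stops:
        if price > stop:         # no stop tripped; the cascade stalls
            break
        price -= impact          # the forced sale pushes the price lower still
        path.append(price)
    return path

path = simulate_cascade()
print(f"A 1% dip cascaded into a {100 * (path[0] - path[-1]) / path[0]:.0f}% drop")
```

No trader in the sketch intends a crash; the collapse emerges from the interaction of many individually sensible rules, executing faster than any human could intervene.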
(MORE: Money Talking: How High-Frequency Trading is Impacting Your Investments)
Price says that advances in biotechnology, a specialty of his colleague Rees, are equally concerning; thanks to new innovations, the steps necessary to produce a weaponized virus or other bioterror agent have been dramatically simplified. “As technology progresses,” Price says, “the number of individuals needed to wipe us all out is declining quite steeply.” His words echo those of the scientists involved in a seemingly harmless genetic parlor trick from earlier this year — in which they encoded the text of a book in DNA — who acknowledged that the same technology could perhaps be used to encode a lethal virus.
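The underlying idea is simple enough to sketch in a few lines of code: any text is just bits, and bits can be mapped onto the four DNA bases. (The two-bits-per-base mapping below is a hypothetical illustration, not the encoding the researchers actually published.)

```python
# Illustrative scheme: pack each pair of bits into one DNA base.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {base: bits for bits, base in TO_BASE.items()}

def encode(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    bits = "".join(FROM_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

dna = encode("Hello")
print(dna)                      # prints CAGACGCCCGTACGTACGTT
assert decode(dna) == "Hello"   # round-trips back to the original text
```

The encoding is indifferent to what the bits mean — which is precisely the dual-use worry the scientists raised.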
Price emphasizes that the focus of his work won’t just be on artificial intelligence, insisting that the center will look more widely at how human technology could threaten our species. But he admits that AI is nevertheless something he finds “quite fascinating”:
“The way I see it as a philosopher is that more than anything else, what distinguishes us as humans is our intelligence, and this has been a constant throughout history. What seems likely is that this constancy is going to change at some point in the next couple of centuries, and it is going to be one of the most fascinating phases in our history.”
That future, too, is closer than we think. The New York Times devoted a page-one story on Nov. 24 to advances in an artificial intelligence technology known as deep learning, already used in programs like Apple’s Siri. The machine-learning approach, modeled after the network of neural connections in the brain, illustrates how close we are to mimicking human intelligence in computers. The real worry, however, is how long our dominance over our creations can last once we do achieve that milestone.
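What “modeled after the network of neural connections in the brain” means in practice can be sketched briefly: layers of simple units, each computing a weighted sum of its inputs and passing the result through a nonlinearity. (The layer sizes and random weights below are placeholders; real deep-learning systems stack many more layers and learn their weights from data rather than drawing them at random.)

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    # Each output unit takes a weighted sum of all inputs (the "connections")
    # and fires only when that sum is positive (a ReLU nonlinearity).
    weights = rng.normal(size=(inputs.shape[-1], n_out))  # connection strengths
    return np.maximum(0.0, inputs @ weights)

x = rng.normal(size=(1, 16))   # a 16-feature input (e.g. slices of audio)
h1 = layer(x, 32)              # first hidden layer
h2 = layer(h1, 32)             # deeper layers re-represent the input
out = layer(h2, 2)             # e.g. scores for two candidate words
print(out)
```

Stacking such layers is the “deep” in deep learning: each level builds more abstract features out of the level below, loosely as neurons are thought to do.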
Read more:
http://newsfeed.time.com/2012/11/29/rise-of-the-machines-cambridge-...