THE SOCJOURNAL
Timothy McGettigan | Nov 14, 2012
"If real is what you can feel, smell, taste and see, then ‘real’ is simply electrical signals interpreted by your brain." (Morpheus, The Matrix)
Ray Kurzweil is obsessed with artificial intelligence (AI). Kurzweil has written a series of bestselling books, most recently How to Create a Mind, in which he advances the argument that machine intelligence will soon exceed that of its human creators. Kurzweil is convinced that, by the year 2029, researchers will finally succeed in reverse-engineering the brain. And once scientists have figured out how to assemble all of the myriad dimensions of the human brain, Kurzweil believes it will be a relatively straightforward engineering exercise to construct a synthetic version of human intelligence: AI.
Kurzweil has referred to the moment at which machines achieve a human-like level of consciousness as “The Singularity.” According to Kurzweil, The Singularity will represent perhaps the most extraordinary event in all of human history. For, at the moment when humans successfully infuse machines with human-like intelligence, everything that humans understand and experience as their unique, intrinsic nature will be irreversibly transformed.
Kurzweil not only believes that AI is inevitable, he also believes that AI will be a uniformly positive phenomenon. For example, AI-enhanced machines could repay their human creators by helping to solve a variety of currently intractable problems (e.g., resource shortages, disease, pollution, etc.). However, no one should be surprised if sentient machines, rather than being slaves to the whims of their human masters, decide to focus on their own priorities, especially if they are designed to be smarter, stronger and faster than their puny human overlords.
This was Dr. Frankenstein’s disastrous miscalculation. If humans create monsters that are capable of thinking for themselves, then no one should be surprised when the damn things do, in fact, think for themselves. In practical terms, this means that any Frankentelligent machine with an ounce of common sense will probably be inclined to pursue its own best interests, even if doing so puts it at cross purposes with its foolhardy creators. You see? That’s why Frankenstein is a “horror” story. The monster poses a dire threat to the welfare of humanity, and the townsfolk only succeed in alleviating the threat by destroying the monster. Kinda makes ya wonder why anyone would have been dopey enough to create such a dangerous monster in the first place…
By the way, this is a recurrent theme in AI literature. For example, Bill Joy has done an outstanding job of highlighting the acute dangers that self-directed technologies pose to their foolish human creators. Given how poorly humans treat creatures of lesser intellect (e.g., we often serve them up for supper), inventing Frankentelligent machines could end up being one of the dumbest ideas we’ve ever had.
Still, for the moment, AI remains nothing more than a sci-fi fantasy. As such, it seems somewhat irrational to fret excessively about the catastrophic dangers that a non-existent technology might one day pose. It makes about as much sense to worry about AI as it does to have an anxiety attack about fire-breathing dragons. Yet, given the success that humans have enjoyed in resolving even the most formidable problematics (e.g., the Manhattan Project, landing astronauts on the moon, plumbing the oceans’ deepest depths, etc.), such an irrational anxiety may one day become a very real concern. Although there is no way to determine how grave a threat intelligent machines might one day pose, that prospect has done little to slow AI-related technological advancement. I suppose it’s just one of those eventualities that, like Hurricane Katrina, we won’t (and arguably can’t) earnestly confront until the sky starts falling.
Although Kurzweil is convinced that intelligent machines will one day outperform their human creators, he is not alarmed by that prospect. Kurzweil is convinced that the thought waves upon which human experience is based can be fully replicated in a “people-friendly” artificial environment. Thus, just as one might make a duplicate copy of an MP3, Kurzweil believes that it will one day be possible to scan human brains and transfer their entire intellectual contents into the artificial cyberspace of smart machines. Although this might sound like just another sequel in the Matrix series, Kurzweil does not view absorption by intelligent machines as somehow compromising our basic humanity; rather, he believes that reanimating human intelligence in a machine environment will in fact enhance the human experience. As Kurzweil sees it, “ghosts in the machine” enjoy a more expansive, collaborative and enduring intellectual environment than they could ever experience as a pathetic sack of DNA.
So, on the plus side, humans may never die once they’ve been uploaded into the Matrix. But, on the downside, it’s tough to get excited about being imprisoned forever in a computerized virtual reality. I mean, isn’t that what Trinity and Morpheus were fighting against? I shudder at the thought. Call me old-fashioned, but I’ll take my brief, fragile biological existence over an eternity of Tron anytime.
Given the exponential pace of IT development and the fierce determination with which the AI problematic has been attacked, I feel certain that AI developers will eventually create a type of machinery that resolves the Turing problematic, i.e., computer intelligence that is, at least in terms of communication skills, indistinguishable from that of humans. However, I do not believe, as Kurzweil has repeatedly suggested, that AI will spontaneously emerge from building faster computers. No matter how many times microchip processing power doubles, intelligence will never emerge as a product of computing speed alone. I think it will only be possible to create a real form of artificial intelligence when humans finally figure out what it truly means to be intelligent, and, even more importantly, when (if ever!) we figure out what the meaning and value of “being human” is really all about. Any form of Frankentelligence that fails to incorporate a thoroughgoing appreciation of the sanctity of “human being” will be nothing more than a monstrous perversion of that humanity.
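To put that “exponential pace” in perspective, here is a quick back-of-the-envelope sketch (mine, not Kurzweil’s) of how much raw processing power could grow between the time of this writing and Kurzweil’s 2029 target, assuming a Moore’s-law-style doubling roughly every two years. The doubling period and dates are illustrative assumptions, and the arithmetic cuts both ways: the hardware growth is staggering, yet a several-hundred-times-faster calculator is still just a calculator.

```python
# Back-of-the-envelope illustration (assumptions, not data from the article):
# if raw processing power doubles roughly every two years, how much faster
# are machines by Kurzweil's 2029 target than when this piece was written?

DOUBLING_PERIOD_YEARS = 2.0   # assumed Moore's-law-style doubling period
START_YEAR, TARGET_YEAR = 2012, 2029

doublings = (TARGET_YEAR - START_YEAR) / DOUBLING_PERIOD_YEARS
speedup = 2 ** doublings

print(f"{doublings:.1f} doublings -> roughly {speedup:,.0f}x more raw compute")
# Output: 8.5 doublings -> roughly 362x more raw compute.
# Speed alone, however large the multiplier, is not the same thing as intelligence.
```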
Those who do not learn from science fiction are destined to be destroyed by it*.