Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is quite the wake-up call. Indeed, it almost reads like horror at certain points. The book opens with a hypothetical future in which an Artificial Super Intelligence is created and immediately becomes the smartest and most powerful being on the planet. It then invents nanotechnology so advanced that it can reconfigure all matter into more of itself. That matter, of course, includes our own. Thus, the entire human race is consumed into extinction by microscopic robots and the Artificial Super Intelligence that created them.
As absurd as this story sounds, it appears to be much closer to science fiction than science fantasy. First, a few definitions. Artificial General Intelligence is "the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can." Many scientists presume an Artificial General Intelligence would be the first step toward Artificial Super Intelligence, which "is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds." This would come about in what Barrat refers to as an "intelligence explosion" driven by a machine that has the ability to improve itself. A series of recursive self-improvements (which with computing technology could mean a doubling of "intelligence" many times per second) would create a machine with cognitive capacities as far above ours as ours are above those of ants. (A toy model of this compounding is sketched below.) We can argue all we want about whether this type of Super Intelligence would be "alive." I'm not sure I would call it that, but for all intents and purposes, it doesn't matter. The key point is that Barrat is convinced we will soon be able to create such a thing, and while I have some doubts, I would absolutely not bet against it.

Already, we have algorithms that improve themselves in a crude version of this "recursive self-improvement." YouTube's recommendation bots, for example, figure out which video you are most likely to watch next based on the series of videos you have watched before, then feed those videos to you to increase the amount of time you spend on the site. (A good explanation of these bots can be found here.) This raises an obvious question: does anyone even understand the most complex algorithms that these various tech companies, intelligence agencies and the like have created? And Barrat wrote his book back in 2013. Technology has improved substantially since then.

Many experts have predicted Artificial General Intelligence by 2030 or 2040 or thereabouts. Some have an extremely rosy outlook on it (such as the famous futurist Ray Kurzweil, who popularized the term "singularity"), but a growing number are becoming concerned this invention will turn around and exterminate us (such as those at the Machine Intelligence Research Institute that Barrat profiles). After all, why would an Artificial Super Intelligence care about us any more than we care about ants? And how do we program into a machine a mandate to be kind to us when it can improve and change itself immensely? People like to cite Asimov's Three Laws, but they certainly won't do. Hell, they didn't even "do" in Asimov's own books!

Barrat also stresses that with so many actors seeking out Artificial Intelligence, it is probably impossible to stop. These actors include governments, rogue states such as North Korea, intelligence agencies, tech companies like Google, "stealth companies" trying to keep a low profile and various research institutes. Banning it would likely just mean the group that discovers it is more likely to have malicious intent. And even if a friendly organization gets there first, it may accidentally release it. (I don't consider the CIA to be particularly friendly, but maybe you do.) An accidental release is exactly what happened with Stuxnet, a proto-AI malware created by the United States and Israel to damage Iran's nuclear program. That part worked. But computer viruses don't blow up when they go off; we appear to have lost control of it, and who knows who has access to it now.
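As promised above, here is a toy model of the compounding behind an "intelligence explosion." It is purely illustrative: the starting capability, the improvement rate per cycle and the number of cycles are all numbers I made up, not estimates from Barrat or anyone else.

```python
# Toy model of recursive self-improvement: each cycle, the gain is
# proportional to current capability, so growth compounds.
# Every number here is invented purely for illustration.

def intelligence_explosion(capability=1.0, rate=0.10, cycles=50):
    """Return the capability trajectory over `cycles` self-improvement steps."""
    history = [capability]
    for _ in range(cycles):
        capability += capability * rate  # smarter -> better at getting smarter
        history.append(capability)
    return history

trajectory = intelligence_explosion()
print(f"After {len(trajectory) - 1} cycles at 10% per cycle: {trajectory[-1]:.0f}x baseline")
# -> about 117x baseline after just 50 cycles; run cycles many times per
#    second, as Barrat imagines, and the curve goes vertical almost instantly
```

The specific numbers are meaningless; the point is that any process whose rate of improvement scales with its current capability runs away very quickly.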
Barrat isn't optimistic about our ability to escape an AI-induced extinction. That being said, the positive potential is incredible: curing diseases, solving poverty, ending global warming and so on. But only if, you know, it doesn't exterminate us.

On that note, as Gary Marcus points out, we should question whether "tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system." IBM's Deep Blue and Watson have shown no sign of either. In fact, assuming resource acquisition and self-preservation are inherent goals of such a system flirts with "anthropomorphizing" artificial intelligence, something Barrat warns us against.

Barrat recommends first trying to build an apoptosis mechanism into the AI. Apoptosis is the process by which cells undergo programmed death; when cells multiply without dying afterward, that's cancer. Such a mechanism would at least prevent the AI from spreading all over the place.

My thought (which I emailed Barrat about and will add his response to this review if I am lucky enough to get one) is that anything programmed into a Superintelligence should be as objective as possible. Programming it to "be kind to humans" is subjective and open to very different interpretations (see, for example, the myriad political opinions out there). How about instead programming it to have the "correct" view on the two points Marcus isn't sure an AI would even have:
An indifference to acquiring resources. And more importantly, an indifference to its own self-preservation.
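Barrat's apoptosis idea and my "indifference to self-preservation" point converge on the same basic mechanism: a system that dies by default unless a human overseer keeps renewing its lease on life. Here is a minimal sketch of what I have in mind; the class, the lease length and the renewal call are all hypothetical, meant to show the shape of the mechanism rather than anything proposed in the book.

```python
# A minimal "apoptosis" sketch: the process terminates itself unless an
# external overseer periodically renews its lease. All names and numbers
# here are hypothetical; a real system would need authenticated renewal.

import sys
import time
import threading

LEASE_SECONDS = 10  # how long the system may run without a renewal

class ApoptoticAgent:
    def __init__(self):
        self._deadline = time.monotonic() + LEASE_SECONDS
        self._lock = threading.Lock()

    def renew_lease(self):
        # Called by a trusted human overseer (in reality this would be a
        # cryptographically signed message, not a bare method call).
        with self._lock:
            self._deadline = time.monotonic() + LEASE_SECONDS

    def run(self):
        while True:
            with self._lock:
                expired = time.monotonic() > self._deadline
            if expired:
                print("Lease expired: programmed death.")
                sys.exit(0)  # death is the default; survival requires outside consent
            self._do_one_unit_of_work()

    def _do_one_unit_of_work(self):
        time.sleep(1)  # placeholder for the agent's actual task
```

Under the hood this is just a dead man's switch; the hard part is building something smart that doesn't mind the switch being there, which is exactly the indifference to self-preservation discussed above.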
Sure, I know programming in that kind of indifference is easier said than done. But I would think it is easier than programming the machine to be nice to us even once it becomes several trillion times more intelligent than we are. An indifference to self-preservation would allow us to turn it off as soon as it started to become dangerous. It would also allow us to beta test it, which is rather important, since we don't want to switch on something that might exterminate us without proper testing first. Regardless, Barrat has written a very important book that we should all take seriously. The biggest threats are often the things we don't expect rather than the things we are hunkered down preparing for. And right now, it doesn't appear that many people are expecting a threat from AI, if they are expecting AI at all.