A Book Report
by James Jaeger

I have just completed James Barrat's new book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

Before you read this report, please check out the footnote at the end dealing with terms and nomenclature.(1)

Our Final Invention comments on and challenges Ray Kurzweil's book, The Singularity Is Near, so it's a must-read for anyone who participates in AI forums or works in the field. Ray's book came out in 2005, so Our Final Invention has eight years of perspective to build upon.

Like Kurzweil, Barrat feels the Singularity is only a matter of time; in fact, the book goes into reasons why he feels it's probably unavoidable. Unlike Kurzweil, however, Barrat is not as optimistic about the Singularity's safety; in fact, he itemizes ways things can become quite unfriendly.

Barrat's take on the actual event in which AI reaches human-level intelligence and then moves on to superintelligent levels is what he terms the "Busy Child." (See the additional thread on this.) Eliezer Yudkowsky, who was interviewed for the book along with Ray Kurzweil and Arthur C. Clarke, best described the Busy Child in his provocative article, "Staring into the Singularity," many years ago. Of course, probably the very first person to describe a self-improving machine was Irving John Good in his 1965 article, "Speculations Concerning the First Ultraintelligent Machine," at http://www.acceleratingfuture.com/pages/ultraintelligentmachine.html

The most famous paragraph of Good's paper is the following, where he attempted for the first time to define what we now call Superintelligent AI, or SAI.

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously."

Barrat points out that the transition from human-level AI to Superintelligent AI could happen quickly, maybe even in days or milliseconds. Perhaps an emerging "Busy Child" would even pretend to fail the Turing Test so that it could compute its escape strategy before humans even knew of its capabilities. In Barrat's book, the message is clear: we should give Ray Kurzweil all due respect for making us optimistic about the Singularity, but we should proceed with extreme caution.
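The speed of that transition can be illustrated with a toy back-of-the-envelope model. This is my own sketch, not Barrat's, and every parameter in it is an arbitrary assumption: suppose each self-improvement cycle doubles the machine's capability, and a smarter machine finishes its next redesign twice as fast as the last. The cycle times then form a convergent series, so no matter how many cycles you run, the whole takeoff compresses into a bounded window.

```python
# Toy model (purely illustrative, arbitrary parameters): each cycle
# multiplies capability by `gain`, and each redesign happens `speedup`
# times faster than the one before it.
def takeoff(initial_capability=1.0, gain=2.0, first_cycle_days=365.0,
            speedup=2.0, cycles=20):
    """Return (total_elapsed_days, final_capability) after `cycles` rounds."""
    capability = initial_capability
    cycle_time = first_cycle_days
    elapsed = 0.0
    for _ in range(cycles):
        elapsed += cycle_time
        capability *= gain       # each generation designs a better successor
        cycle_time /= speedup    # ...and does so faster than the last
    return elapsed, capability

elapsed, capability = takeoff()
# Capability grows 2**20-fold (about a million), yet total elapsed time
# stays under 2 * first_cycle_days: the series 365 + 182.5 + 91.25 + ...
# converges, so the later, most dramatic cycles take almost no time.
```

The point of the toy model is only this: if the "smarter means faster at getting smarter" assumption holds at all, almost all of the capability gain arrives in the final, vanishingly short cycles, which is exactly why Barrat says the jump could take days or even less.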

In the book, and in an interview he later gave (such as http://youtu.be/Gt0Jf-79uOE ), Barrat says that it was Arthur C. Clarke who prompted him to seriously consider the downside of having superintelligent machines share a planet with us.

With at least three major players -- IBM, Google and DARPA -- overtly or covertly funding AI development, we should be especially worried about the funding DARPA provides to developers, because DARPA, being part of the U.S. military, will inevitably seek to weaponize AI. After all, as Barrat points out, the "D" in DARPA does stand for "Defense." The author also states:

"Despite Google's repeated demurrals through its spokespeople, who doubts that the company is developing artificial general intelligence? In addition to Ray Kurzweil, Google recently hired former DARPA director, Regina Dugan."

So with hundreds of governments and corporations researching and funding Strong AI, it would be foolish NOT to assume that IBM, Google and DARPA are leading the pack. Thus, folks, you can also be sure that these multi-billion-dollar entities have assigned at least one reader to this very forum to see what all of us "wing-nuts" are up to.

It is certain that, while people like Eliezer -- and I would include Ray Kurzweil in this group -- are attempting to build friendly AI, the usual dark forces and meatheads are going to attempt to kill and maim with it. And all this sounds fine and dandy until one considers that SAI will be much more lethal than mere hydrogen bombs.

Unfortunately, if the U.S. military-industrial complex ever succeeds in building human-level AI or Strong AI (as Ray mostly calls it in The Singularity Is Near), there is little chance they will be able to control it. AND there is absolutely NO chance they will be able to control it if the "Busy Child" makes its way to superintelligence. If this happens, SAI will be able to "get out of the box" -- meaning attach itself to some or all computer networks in the world, and more.

How do we know this? We know this because the experiment has already been tried with at least one human genius. Games have been invented to see whether a genius-level human can convince normal-IQ humans -- by words alone -- to grant him some specific goal, like letting him out of a text-only "box." Thus, if a mere human-level genius can devise ways of escaping, imagine what a superintelligent entity could do.

Barrat cites the "Stuxnet" virus as another example of what we can expect from SAI.

"The consensus of experts and self-congratulatory remarks made by intelligence officials in the United States and Israel left little doubt that the two countries jointly created Stuxnet, and that Iran's nuclear development program was its target."

The point is this: if we want to learn how a superintelligent system may very well act, we should, as Barrat writes, "almost thank malware developers for the full dress rehearsal of disaster that they're leveling at the world. Though it's certainly not their intention, they are teaching us to prepare for advanced AI."

Barrat points out that the Symantec corporation started out as an AI company and is now the biggest player in the Internet immune-system business. Symantec discovers about 280 million new pieces of malware (viruses, worms, spyware, rootkits, Trojans) every year -- most of it created by software that writes software.

But wait, isn't this exactly what the "Busy Child" does -- write its own software? So is it much of a leap to suspect that AI could very well act like a virus, at least before it becomes REALLY dangerous -- dangerous in ways humans are not even capable of imagining?

Barrat makes it clear that the power grid is the most critical thing to protect because it is "tightly coupled" with all other systems, including U.S. defense which gets 99% of its electricity from civilian sources and 90% of its communications via private networks.

The Stuxnet virus was designed to hit and destroy networks and infrastructures like this -- specifically SCADA systems, short for Supervisory Control and Data Acquisition. In other words, Stuxnet was designed to destroy HARDWARE, not just SOFTWARE and DATA. Specifically, it was designed to destroy industrial machines connected to the Siemens S7-300 programmable logic controller, a component of a SCADA system. These controllers do things like run gas centrifuges for nuclear enrichment facilities, like the centrifuges running at Natanz, Iran. But now that Stuxnet has been used on the Iranians, rogue versions of it have inadvertently been released all over the Internet. Now any intelligent hacker -- from Anonymous to pissed-off high school kids -- is able to get copies of this US/Israel-manufactured virus and adapt its code for purposes of their own.

So here's another example of governments doing more harm than good. These unintended consequences should serve as a warning of what can or will happen with AI systems if they are developed by defense agencies that have no intention of making them "friendly."

Given the kinds of catastrophes that we are subjecting the human race to with the development of SAI -- especially militarized AI -- it should be obvious that the best defense against "normal accidents" would be to decentralize as much of civilization's infrastructure as possible.

This means that, first and foremost, the electrical power grid should be decentralized, starting with the current "smart grid." This harebrained idea should be terminated, because such an arrangement means that vast portions of the grid are accessible over the Internet. The main idea of the "smart grid" -- and things like "smart meters" -- is to make it easier for (lazy) power companies to bill, and spy on, their customers. Given that this also makes it easier for NSA-Israel terrorist viruses, like Stuxnet, to rampage through civilization, "smart" grids are pretty dumb.

Not only did kindred meathead spirits at the NSA and in Israel (probably in the Mossad) develop Stuxnet, they developed two other malware viruses called Duqu and Flame. These are reconnaissance viruses that can record keystrokes, steal data and remotely operate cameras or microphones on YOUR personal PC, as well as any other networked computer. Thanks to Edward Snowden's whistleblowing, we now know that tech like this, if not this exact tech itself, is being used to spy on U.S. citizens -- citizens who are unwittingly financing such malware through their taxes.

It is obvious that policymakers feel no need to back off on spending public money on any of these nefarious and dangerous applications, and they do so without informing citizens or providing any national discussion. The reckless development and deployment of viruses like Stuxnet should be terminated. Whether or not that will come to pass, the basic question Humanity has to deal with is this:

TO BUILD, OR NOT TO BUILD?
A variant of Shakespeare's famous TO BE OR NOT TO BE, this question is a metaphor for the idea that building AI for the human race is like an individual contemplating whether to commit suicide or not, and whether doing so would be for the greater good.

Does the Universe require a species to deliver its nexus no matter what its fate? Just as a parent is supposed to be totally willing to sacrifice for its child, is the human race supposed to be willing to sacrifice itself to give birth to SAI? Even if it means the extinction of Humanity, must Humanity cheerfully accept this fate for the greater good -- the greater good of the Universe? What do you think, Democrats?

On the other hand, SAI may not destroy Humanity; it could usher in a golden era like no other. It might even partner with Humanity, as Ray Kurzweil suggests (my feeling as well). Under such a circumstance it could reward us with our long-sought-after utopian civilization. This may be entirely possible with safely engineered technology and a proper balance of ethics.

But the answer to the "basic question" posed above could turn out to be that the potential risks outweigh the possible rewards. If it looks like the military mentality will definitely develop or commandeer AI -- and then weaponize it -- there is a good possibility that it will get out of control and destroy all of Human civilization, possibly more. If this is the case beyond a reasonable doubt, AI research may have to be completely terminated -- just as we are attempting to ban and terminate nuclear tests, biological warfare and chemical warfare.

If the "profit movers" or the "thugs with guns" in the state refuse to cooperate, AI may be opposed by mass revolts, the overthrow of numerous governments and/or the burning of corporate assets to the ground. This could happen no matter what the human cost, even if it meant billions of people fought and died in one of Hugo de Garis' "gigadeath wars." After all -- billions might reason -- nothing less than all of (Human) civilization will hang in the balance with the decision as to whether to build Strong AI or Superintelligent AI.

So folks, Mr. Barrat's book gives us some serious thinking to do. And in addition to such continuous, serious study, I would proffer some obvious first steps. The first would be that the world should get off of its addiction to CENTRALITY and place more emphasis on DISTRIBUTED REDUNDANCY.

The world has got to stop being dependent on centrally planned AND centrally wired systems. Centralism and top-down management are relics of past centuries, when those methodologies were all that was viable. The buzzwords for the current world -- and the technological civilization we live in and are creating -- should be "decentralization," "redundancy," "distributed networks," "freedom," "competition," "transparency" and "open source."

You cannot have central power grids in a world where a virus or rogue AI can ravage the entire system and destroy everything -- shutting down electricity, the Internet and even the armed forces.


Centralized power grids must be replaced by residential generators and solar generators on every rooftop. Smart meters must be dispensed with. Other local technologies, such as thermal and wind power, must replace the electrical grid.

Cloud computing and other centralized computing systems must not be developed. One Stuxnet virus -- not to mention rogue AI -- in such a system and everyone in the world could be wiped out. We need to start developing peer-to-peer networks.

The central planning mentality is the mentality of greed. It's the mentality that says "I want to control" -- "I want to distribute all things" -- "I want to plan all activities" -- "I want to issue all money" -- "I want to provide all justice" -- "I can think better than you can." It's the mentality that thinks the group huddled together can survive better than the individual operating in freedom and redundancy.

Redundancy is, of course, the key word. A decentralized power grid is a redundant grid. A peer-to-peer Internet or phone system is a network that can't be destroyed. It's a redundant network. We need to build these and stop building the other kind of networks that only serve the profit master and the control monster.
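The redundancy argument can be made concrete with a toy network comparison. This sketch is my own illustration, not from the book: build a centralized "star" network where everything routes through one hub, and a ring where every node has two neighbors, then knock out a single node and test whether the survivors can still reach one another.

```python
# Toy illustration (my sketch, not from the book): a centralized "star"
# network versus a redundant ring.  Remove one node and check whether the
# remaining nodes are still mutually reachable.
from collections import deque

def connected(nodes, edges, removed):
    """BFS reachability test over the nodes that survive the outage."""
    alive = [n for n in nodes if n != removed]
    if not alive:
        return True
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a != removed and b != removed:
            adj[a].add(b)
            adj[b].add(a)
    seen = {alive[0]}
    queue = deque([alive[0]])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(alive)

nodes = list(range(6))
star = [(0, n) for n in range(1, 6)]          # everything routes via hub 0
ring = [(n, (n + 1) % 6) for n in range(6)]   # each node has two neighbors

print(connected(nodes, star, removed=0))   # False: lose the hub, lose it all
print(connected(nodes, ring, removed=0))   # True: traffic routes around
```

The star is "efficient" right up until its single hub fails; the ring pays for a few extra links and, in exchange, no single failure (or single virus target) can partition it. That trade is the whole case for distributed redundancy.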

We must get DARPA and other weapons developers away from AI development. These hawks will cause AI to destroy the Human race -- just as they tried to egg John F. Kennedy into invading Cuba during the Cuban Missile Crisis. Little did they know that nuclear weapons were already deployed there, and if Kennedy had listened to the war hawks, he might have ignited World War III with the Soviets.

So, just like every weapon the military mentality has ever built, they will use it. AI will be no different, only MUCH worse. This is the senior message and warning of James Barrat's book, Our Final Invention. The difference this time is that, ironically, there is no guarantee weaponized AI won't kill even the war hawks who built it. SAI will be autonomous, self-aware and self-protecting. In short, common sense should tell us not to abandon hope, but to proceed with extreme caution -- or be willing to evolve a nexus that may not include our current civilization.

(1) The terms describing intelligence, artificial intelligence, human-level intelligence and higher-than-human-level intelligence are many and varied.

Ray Kurzweil, in his book, The Singularity is Near, does not use any of the terms that Barrat used in his book, Our Final Invention. The terms Ray used are as follows:

Machine Intelligence (page 28)
Non-biological Intelligence (page 28)
Strong AI defined (page 260)
Superhuman AI (page 261)
Runaway AI (page 262)
Narrow AI (page 264)
AI defined (page 265)

These are not necessarily the first or the only places Ray uses or defines these terms, but they are what I was able to itemize in a quick scan of the book.

Barrat on the other hand calls artificial intelligence, AI, but then he calls human-level intelligence, AGI, short for Artificial General Intelligence. He then calls higher-than-human-level intelligence, ASI, short for Artificial SuperIntelligence.

For the record, James Martin, in his book Alien Intelligence, calls AI "Alien Intelligence" and he refers to "strong AI." I. J. Good, discussed later, called higher-than-human-intelligence, "Ultra-intelligent Machines" and Hugo De Garis, in his book The Artilect War, calls superintelligent AI, "Artilects".

Lastly, and also for the record, the terms General AI, Idiot Savant, Idiot Savant AI and Augmented Intelligence are used widely. Here again, these should be acceptable, since the term AI is preserved and various modifying words precede it to describe certain very specific ideas, entities or concepts.

That said, as long as I have been visiting and contributing to the AI forum at www.Kurzweilai.net (from the 1996 MIND-X to the present year of 2014), I have never heard the designations Barrat is using in his book. The terms I have heard, and I think most in the AI community prefer to use, at least at the KurzweilAI forum, are:

AI for Artificial Intelligence
SAI for Superintelligent AI
Strong-AI for human-level intelligence

In looking over all these terms, I would proffer simply this: we should attempt to agree on a terminology. It does the AI community -- and sister movements, such as the Transhumanist and Posthumanity movements -- no good to have nebulous or conflicting terms.

Observing the terms that have so far been used, I propose that all descriptions of Artificial Intelligence should be designated by the abbreviation, AI, as has been traditionally done. Further, since AI is the subject matter, all additional words in connection with the subject should be considered modifiers, modifiers of AI. As modifiers they should thus come BEFORE the letters, AI. Thus we would have General AI, not Artificial General Intelligence (or AGI). We would have Superintelligent AI (or SAI), not Artificial Superintelligence (or ASI).

Placing an adjective such as "general" or "superintelligent" between the words "artificial" and "intelligence" makes no sense and is confusing. It breaks up the main subject term, AI, so that no one knows what is being addressed.

Barrat used the terms AGI and ASI throughout his book and, honestly, it drove me crazy. Condescending as it may sound, since he's only been studying AI for 20 years, I suppose we should give him a break, as he wrote an otherwise great book. No field, however, moves forward smoothly without a well-defined -- and agreed-upon -- nomenclature.

10 February 2014

Please forward this to your mailing list. The mainstream media will probably not address this subject because they have conflicts of interest with their advertisers, stockholders and the political candidates they send campaign contributions to. It's thus up to responsible citizens like you to disseminate important issues so that a healthy public discourse can be initiated or continued. Your comments and suggestions are welcome and future versions of this research paper will reflect them.

Permission is hereby granted to excerpt and publish all or part of this article provided nothing is taken out of context. Please give reference to the source URL.

Any responses you proffer in connection with this research paper when emailed or posted as an article or otherwise, may be mass-disseminated in order to continue a public discourse. Unless you are okay with this, please do not respond to anything sent out. We will make every effort, however, to remove names, emails and personal data before disseminating anything you submit.


