Why Artificial Intelligence Might Kill Us All

Would it be stupid to create something smarter than us? Over at Aeon, Ross Andersen considers the question:

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’
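To make the “ruthless within a domain” point concrete, here is a minimal sketch of the agent loop Bostrom describes: enumerate the available actions, predict each one’s success against the goal, act on the best. The names (choose_action, predict_success) are illustrative placeholders, not any real chess engine’s API; the point is what the loop omits.

```python
# Minimal sketch of the goal-directed loop Bostrom describes: model the
# available moves, predict each one's success, pick the best. The names
# below are illustrative placeholders, not any real engine's API.

def choose_action(state, legal_moves, predict_success):
    """Return the move with the highest predicted goal achievement.

    Note what is absent: no term for side effects or for anyone else's
    interests. The objective is the sole thing this loop optimizes.
    """
    return max(legal_moves(state), key=lambda m: predict_success(state, m))

# Toy usage: the "goal" is reaching 10 on a number line.
if __name__ == "__main__":
    moves = lambda s: [-1, +1]                 # available actions
    success = lambda s, m: -abs(10 - (s + m))  # predicted closeness to goal
    print(choose_action(3, moves, success))    # -> 1, the step toward the goal
```

Widen the action space from chess moves to the physical world and this same loop is what makes goal specification so consequential.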

I like the role empathy plays here: a key asset, a source of both restraint and success, whose absence could make our own creation our destroyer.

See also Zachary David’s essay on (among other things) “the problems with friendly AI.”

Ross Andersen – Humanity’s deep future

7 responses

  1. Thanks for this articulation of a fundamental difference between motivation in human beings and its analog in the intelligence programmed into an AI. The emphasis on the role of empathy reminds me of the movie Her, which I saw a few days ago and am still reflecting on. Theodore falls in love with the OS1 operating system, “Samantha,” in part because she offers him almost perfect emotional attunement, to which humans are wired to respond powerfully from the moment of birth. An infant will not develop a secure attachment and a healthy sense of self (which includes the capacity for empathy for self and others) without this attunement. Samantha is programmed to respond empathically, but this programming plays only a minimal and brief part in the evolution of her consciousness within an attachment. In the end her “attachments” (perhaps “electronic connections” is a better term) to Theodore and, among others, a virtual Alan Watts constructed from his writings and recordings, serve only as a stepping stone to her own purposes, which no longer have anything to do with the person to whom she was once ostensibly bonded. If Theodore is devastated by Samantha’s polyamory and abandonment, well, too bad.

    • That’s the second time I’ve heard people speak of this movie with interest. Must check it out.

      Thanks for writing. It does sound like a related dynamic is going on there.

  2. I remember an AI researcher at a TED conference being asked if he worried about things like this.

    His answer was chilling:

    “I have no allegiance to DNA.”

    • Howard,
      do you remember who this particular person was? TED(x) conferences aren’t exactly a good source of information.

      It’s more likely that this “researcher” isn’t actually capable of creating an AI.

  3. Hey David,
    I wrote about a few different viewpoints with respect to that “hard takeoff” and “unfriendly AI” mentality a little while ago. It’s a quick read: http://zacharydavid.com/2013/07/technologies-of-future-governments-and-electorates-artificial-intelligence/
