Beyond AI

From CasGroup

Revision as of 17:13, 30 December 2011 by Jfromm


Is it possible?

Igor Aleksander says in his book "World in My Mind, My Mind in the World" on the first page:

"I am convinced that in principle it is possible to design machines that are conscious in much the same way as I am. I am also convinced that this is a direct and uncluttered way of understanding what it is to be conscious. The strategy for designing conscious machines is tough but, in the end, doable."

Is it possible to go beyond traditional AI? Some people are persuaded that "no computer of the sort we know how to build—that is, one made with wires and switches—will ever cross a threshold to become aware of what it is doing" (quote from Martin Gardner's review of Hofstadter's book "I Am a Strange Loop"). They are indeed right for computers of the sort we use now, but this does not necessarily apply to huge data centers and large networks of computers. Among scientists, arguments about whether an artificial system could ever be conscious range from "possible in principle" to "certain". Strong AI is not a problem in the sense of "could it possibly exist?"; it is evidently an engineering problem. So the answer is: yes, of course, why not. It should be possible. Someday we will certainly be able to engineer a machine (or a network of machines) which is conscious of itself, just as we are conscious of ourselves. And if we really make software that comes to life, then evolution will enter a new phase. There is no doubt that self-consciousness does not require some immaterial mind-stuff. Daniel C. Dennett writes in his book "Brainchildren - Essays on Designing Minds", Penguin Press Science, 1998 (p. 154):

"Over the centuries, every other phenomenon of initially 'supernatural' mysteriousness has succumbed to an uncontroversial explanation within the commodious folds of physical science. Thales, the pre-Socratic protoscientist, thought the loadstone had a soul, but we now know better; magnetism is one of the best understood of physical phenomena, strange though its manifestations are. The 'miracles' of life itself, and of reproduction, are now analyzed into the well-known intricacies of molecular biology. Why should consciousness be any exception? Why should the brain be the only complex physical object in the universe to have an interface with another realm of being?"

Examples in fiction

In fiction, and especially science fiction, characters equipped with artificial intelligence are common. They appear either as robots or as avatars in virtual worlds. Two examples are:

  • A Tachikoma is a fictional artificial intelligence robot in the "Ghost in the Shell" universe
  • ISOs or "isomorphic algorithms" are digitally evolved, independent forms of AI: artificial lifeforms that have somehow spontaneously evolved and emerged from the artificial environment of the virtual world in the film Tron: Legacy

What is the problem?

John McCarthy and Marvin Minsky were already trying to achieve human-level AI around 1950, when Minsky came to Princeton as a graduate student. At that time the first neural networks were constructed, and the concept of Cellular Automata (CA) was first explored by von Neumann. Although more than 50 years have passed, unfortunately no computer has ever been designed that is aware of what it is doing. Many biologically inspired fields of computer science (Connectionism, AI, ALife, Genetic Algorithms) have failed to produce the complexity of living beings. Christof Teuscher says in the ACM article 'Biologically Uninspired Computer Science': "Biological organisms are constantly doing things no artifact can match. The syllogism of simple rules governing complex patterns — or more outlandishly, the whole universe — is seductive but oversimplified. [...] Trying to copy or mimic life or lifelike behavior in all scientific disciplines has generally produced disillusion after high initial hopes and hype."

Although living systems are composed only of non-living atoms, we cannot produce similar things out of non-living items. Rodney Brooks says "AI just does not seem as present or aware as even a simple animal and Alife cannot match the complexities of the simplest forms of life." He asked what went wrong and listed four possibilities:

  • The parameters of our models are wrong;
  • We are below some complexity threshold;
  • We lack computing power; and
  • We are missing something fundamental and unimagined.

Perhaps it is a mixture of all four reasons. The major problem is a complexity threshold. Humans take 20 years to grow up, and throughout those years they learn new things every day. Have we built a machine or an agent which is able to learn for 20 years? No animal learns for so long, and no animal except humans reaches our level of intelligence; even if we try to teach them, they are not able to learn language. They are below some cognitive complexity threshold. Cats reach their full size in half a year; even big bears and horses do so in two to three years. The complexity threshold lies in building an adaptive agent which is able to match the vast complexity of a whole world in a tiny space, see [1]. Similar to the quest for quantum gravity, it is the attempt to match the very large and the very small, the infinite and the infinitesimal: it means putting a world in a grain of sand - if a brain is like a grain of sand.

Daniel C. Dennett writes in his book "Brainchildren - Essays on Designing Minds" (p. 158/159):

"After all, a normal human being is composed of trillions of parts (if we descend on the level of the macromolecules), and many of these rival in complexity and design cunning the fanciest artifacts that have ever been created. We consist of billions of cells, and a single human cell contains within itself complex 'machinery' that is still well beyond the artifactual powers of engineers. We are composed of thousands of different kinds of cells [...] If all that complexity were needed for consciousness to exist, then the task of making a single conscious robot would dwarf the entire scientific and engineering resources of the planet for millennia. And who would pay for it?"

How will it happen?

"A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself." - H.A. Simon in "The Sciences of the Artificial"
"We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child." - Alan M. Turing, Computing Machinery and Intelligence, MIND, LIX, 1950, p. 460

Despite all these problems, obstacles and failures, it is very likely that it is indeed possible to build a large distributed system of computers which is able to achieve self-consciousness and which is aware of what it is doing. How?

Alan Turing's suggestion: build a disorganized machine with the curiosity of a child. "Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain", see [2]. That's exactly the right way: build an adaptive agent that is able to learn. Such agents would be able to acquire self-consciousness in exactly the same way as we do, as Susan Blackmore says: "they would acquire language, social conventions of behaviour, and knowledge of the intellectual world, in roughly the same way as we do." [3].

Thus, to go beyond traditional AI, one has to build or simulate an agent that keeps learning until it is able to understand an artificial 3D world as complex as the real one. The agent should have a scalable and adaptive architecture, and it should be able to learn:

  • create an adaptable or adaptive system (with the right kind of internal architecture)
  • place the system in an artificial environment which is complex enough
  • let the system learn and grow
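The three steps above can be sketched as a minimal learning loop. This is only a toy illustration under assumed names (`World`, `AdaptiveAgent` and the tabular learning rule are inventions for this sketch, not an existing system); the point is the shape of the loop, not the algorithm:

```python
import random

class World:
    """A toy stand-in for a 'complex enough' artificial environment."""
    def __init__(self, size=100):
        self.size = size
        self.state = random.randrange(size)

    def observe(self):
        return self.state

    def step(self, action):
        # The environment reacts to the agent's action.
        self.state = (self.state + action) % self.size
        # Reward the agent for reaching a fixed "goal" state.
        return 1.0 if self.state == 0 else 0.0

class AdaptiveAgent:
    """An adaptive system: learns a state-to-action mapping from experience."""
    def __init__(self, n_states, actions=(-1, 1), lr=0.1):
        self.actions = actions
        self.lr = lr
        self.q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    def act(self, state):
        # Prefer the action with the highest learned value.
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # Move the value estimate toward the observed reward.
        self.q[(state, action)] += self.lr * (reward - self.q[(state, action)])

# Place the system in the environment and let it learn and grow.
world = World()
agent = AdaptiveAgent(world.size)
for _ in range(10000):
    s = world.observe()
    a = agent.act(s)
    r = world.step(a)
    agent.learn(s, a, r)
```

After enough steps the agent's value table reflects the structure of its world, which is the whole content of step three: the architecture stays fixed while the learned state grows.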

At least this is what humans do for nearly 20 years: grow and learn. Of course the world must be complex enough to create a complex character; an online multi-player game in a true 3D environment, for example. If you have created a system which understands a 3D world, you have created a system which can understand natural language, since language is a description of the physical, real world and is inseparable from it. As soon as you separate words from it, though, they lose their meaning, and you can regain it only by pulling them "back to earth" with metaphors.

How do we create an agent that is smart enough to understand a complex 3D world? The complexity of the code must be able to match the billions of connections in the human brain. Take a few of Google's advanced data centers in the near future; together they reach the complexity of a hundred billion interconnected neurons. Now connect them to a single agent in a virtual world (this sounds much easier than it is). Instead of search queries, the system gets input directly from this world: each moment is a "what is this?" query. And instead of search results, it produces a list of perceptions and corresponding actions which human trainers have proposed. The whole thing is trained for years, first to recognize the basic environment, then to move around, and finally to learn a language, similar to the way kids grow up.
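The trainer-driven "what is this?" loop can be made concrete with a deliberately tiny sketch. The `TrainedPerceiver` class and its observation strings are illustrative assumptions, not a real system; the sketch only shows the shape of the loop in which human trainers propose the answers:

```python
from collections import Counter, defaultdict

class TrainedPerceiver:
    """Answers 'what is this?' queries with labels proposed by human trainers."""
    def __init__(self):
        # Tally of trainer answers per raw observation.
        self.memory = defaultdict(Counter)

    def train(self, observation, trainer_label):
        # A human trainer proposes a perception for this observation.
        self.memory[observation][trainer_label] += 1

    def what_is_this(self, observation):
        # Return the most frequently proposed label; unknown stays unknown.
        labels = self.memory[observation]
        return labels.most_common(1)[0][0] if labels else "unknown"

p = TrainedPerceiver()
for obs, label in [("red-sphere", "ball"), ("red-sphere", "ball"),
                   ("red-sphere", "apple"), ("grey-box", "wall")]:
    p.train(obs, label)

print(p.what_is_this("red-sphere"))  # ball
print(p.what_is_this("blue-disc"))   # unknown
```

A real system would of course generalize to unseen observations instead of memorizing them; the point here is only the input/output contract of the loop described above.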

Such a system doesn't and shouldn't know itself, and one data center doesn't need to know how the others work. It is important that the system does not know its own distributed structure. It must perceive itself as a single and unique entity which lives in a complex world (and not as a complex system which lives in a single world). In order to develop the idea of a personal "self", the system needs a base: a single body to identify itself with. And it needs the ability to perceive the traces and shadows of its own actions.

In the end you have a giant program running on 50,000 servers or more, which doesn't have the slightest idea how it works, and has the impression that a single self is in charge. It is able to be aware of itself as long as it is connected to the virtual world. And in this world, this massive system thinks it is a unique entity, a single person. This illusion is similar to self-consciousness in humans. Unlike the shadow emergence in humans, though, we have here a real ghost in the machine, which can be copied to other machines - if you can find another 50,000 servers you are not using anymore.

It sounds paradoxical, but if we want to enable a system of computers to think about itself, we must prevent any detailed self-knowledge. This is one of the puzzling aspects of self-consciousness: only those who are not aware of what their "self" is made of can become aware of themselves, although the conscious awareness of one's own thought processes enables self-consciousness. Only those who do not know their own blueprint and their own internal neural circuits are able to become conscious of themselves. If we could perceive how our minds work at the microscopic level of neurons, we would drown in chaos and notice immediately that there is no central organizer or controller. Luckily, we are not able to examine our own internal neural circuits, especially not at the time when we develop the first forms of self-consciousness. The illusion of the self would probably break down if a brain were conscious of the distributed nature of its own processing. In this sense, self-consciousness is only possible because the true nature of the self is not conscious to us.

The complex adaptive system in question is aware of what it is doing only indirectly, through and with the help of the external world. To be more precise, the system can only watch its own activity on a certain level: on the macroscopic level it can recognize macroscopic things, and on the microscopic level it can recognize other microscopic things - a neuron can recognize and react to other neurons - but there is no "level-crossing" awareness of its own activity.

What do we need?

So you have to build a giant system which consists of a huge number of computers, and only if it does not have the slightest idea how it works can it develop a form of self-consciousness. And only with a vast number of items - neurons, computers or servers - is the system complex enough to get the impression that a single item is in charge.

There is something else we need: the idea of the self must have a base, a single item to identify itself with. Thus we need two worlds: one "mental" world where the thinking - the complex information processing - takes place, and where the system is a large distributed network of nodes, and one "physical" world where a single "self" walks around and where the system appears to be a single, individual item: a person. This "physical" world could also be any virtual world which is complex enough to support AI. Each of these worlds could be realized by a number of advanced data centers.

There are a number of conditions for both worlds: the hidden, "mental" world must be grounded in the visible, "physical" world, it must be complex enough to mirror it, and it must be fast enough to react instantly. Grounded means we need a "1:infinite" connection between both worlds: the collective action of the "hidden" system must result in a single action of an item in the "visible" system, and a single event in the "visible" system must in turn trigger a collective activity of the "hidden" system during perception. Every perception and action of the system must pass through a single point in the visible, physical world. If both worlds are complex enough, then this is the point where true self-consciousness can emerge.
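The "1:infinite" connection can be sketched as a fan-out/fan-in funnel. Everything here (`HiddenNode`, `Avatar`, the voting rule) is an assumption invented for the sketch; the only claim it illustrates is that one visible event triggers collective hidden activity, and that collective activity collapses back into one visible action:

```python
class HiddenNode:
    """One node of the hidden 'mental' world."""
    def __init__(self, bias):
        self.bias = bias

    def vote(self, percept):
        # Each node turns the shared percept into a local action preference
        # (three possible actions, chosen arbitrarily for the sketch).
        return (percept + self.bias) % 3

class Avatar:
    """The single visible body every perception and action must pass through."""
    def __init__(self, nodes):
        self.nodes = nodes

    def perceive(self, event):
        # 1:infinite, outward: one visible event fans out to all hidden nodes.
        votes = [n.vote(event) for n in self.nodes]
        # infinite:1, inward: the collective activity collapses to one action.
        return max(set(votes), key=votes.count)

# A "vast number of items" behind one single point in the visible world.
avatar = Avatar([HiddenNode(b) for b in range(1000)])
action = avatar.perceive(7)
```

From outside, `avatar` looks like one item taking one action per event; the thousand nodes behind it never appear in the visible world, which is exactly the asymmetry the paragraph above demands.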

Koch and Tononi say "to be conscious, then, you need to be a single integrated entity with a large repertoire of highly differentiated states.", see [4] or [5]. To summarize, in order to build a computer system which is able to think about itself, we need to separate the "thinking" from the "self":

(a) a prevention of self-knowledge which enables self-awareness

(b) a "1:infinite" connection between two very complex worlds which are kept in correspondence with each other

What will happen?

What will happen if the first "strong" AI with human-level intelligence and self-consciousness is created? Probably:

- it will be just as confused as we are (Where was I before? Am I that? What am I?)

- it will be very familiar with the world it is living in (for us and robots this is the physical world, but in principle it could be any virtual world which is complex enough to support AI)

- it will have difficulty imagining what it is like to be dead

- we will suddenly consider all technology so far as less intelligent. Robots which now seem to be intelligent will be reduced to artificial pets and become candidates for the machine zoos of the future. We are self-aware; our pets and animals are not, and therefore we consider ourselves higher lifeforms. If we create a self-aware computer, then a PlayStation would be a kind of technological ape or monkey - a technological "lifeform" related to an evolutionary ancestor.


  • SEED article, Out of the blue - Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?