SELF AND MORALITY IN THE WORLD WIDE BRAIN

In closing, I will draw together some of the technical and philosophical ideas already presented to briefly consider a question posed to me by Jayne Gackenbach, the editor of this volume: Will the World Wide Brain be moral? Will it have a sense of ethics? This question is a good entry into the general question of the self-psychology of the emerging global Web brain.

332 Ben Goertzel

Conscience is experienced by humans as a “voice inside,” giving recommendations as to which actions are right or wrong. Judgments of right and wrong vary a great deal from culture to culture, family to family, and person to person, and seem to depend strongly on early childhood experience. Since the global Web mind’s early childhood experience will be rather different from that of a human being, there is certainly reason to question whether any consciencelike phenomenon will emerge.

In the Webmind AI design, there was a Java class called Self, which was an integral part of the system, and whose role was to continuously record what the system was doing, for the purposes of adjusting the system’s numerous parameters and guiding the system’s introspection (self-querying) processes. According to the animist view of consciousness I have advocated (Goertzel, 1997), this class, like the brain processes regulating human attention, should not be thought of as the location of “raw experience,” but rather as the “vortex” within the system at which raw awareness, the primal ground of being, achieves its greatest effect on the world of concretized pattern. The Novamente AI system uses a different approach, in which there is no Self explicitly programmed in, and self-understanding is expected to emerge, but the basic concept is the same.
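The flavor of such a Self class can be sketched in Java along the following lines. This is an illustrative fragment only, not the actual Webmind code: everything beyond the name Self, including the methods and the particular averaging and adjustment rules, is invented here for exposition.

```java
// Illustrative sketch of a Webmind-style Self class: a component that
// records what the system is doing, tunes parameters, and answers
// introspective queries. All details here are expository assumptions.
import java.util.HashMap;
import java.util.Map;

public class Self {
    // Rolling record of recent activity, per system component.
    private final Map<String, Double> activityLevels = new HashMap<>();
    // Tunable system parameters that Self adjusts over time.
    private final Map<String, Double> parameters = new HashMap<>();

    // Continuously invoked to record the current activity of a component,
    // blended into an exponential moving average.
    public void recordActivity(String component, double level) {
        activityLevels.merge(component, level, (old, now) -> 0.9 * old + 0.1 * now);
    }

    // Nudge a parameter toward a target value at a given rate.
    public void adjustParameter(String name, double target, double rate) {
        double current = parameters.getOrDefault(name, target);
        parameters.put(name, current + rate * (target - current));
    }

    // Introspection: answer a query about the system's own recent behavior.
    public double queryActivity(String component) {
        return activityLevels.getOrDefault(component, 0.0);
    }

    public double getParameter(String name) {
        return parameters.getOrDefault(name, 0.0);
    }
}
```

The point of the sketch is structural: Self does not hold experience, it merely observes and regulates the rest of the system.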

An AI system may be explicitly programmed to seek behaviors that please its human users, serving their needs as well as possible, and also to seek behaviors that please the other AIs with which it is interacting. In this sense, an individual machine or cluster of machines running an AI may have a conscience. The specific contours of an individual AI’s conscience will be different from one AI to another, based on experience and adaptation, but this is only to be expected. Asimov’s Three Laws of Robotics, which in his science-fiction stories were wired into robots, preventing them from doing harm to humans, would be difficult to implement in the context of an emergent, self-organizing intelligence. It is easy to hard-code restrictions preventing an AI from sending commands to the Pentagon instructing bombs to be dropped, but not so easy to prevent it from taking actions indirectly causing humans harm; it is for these cases that conscience must use its own intuition, which adapts over time, sometimes well, sometimes poorly.
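The contrast between hard-coded restrictions and adaptive intuition can be made concrete in a small Java sketch. The class and action names below are assumptions made purely for illustration; the fixed forbidden set is trivial to enforce, while indirect harm must be scored by an estimate that adapts with experience.

```java
// Illustrative sketch (all names are expository assumptions): hard-coded
// prohibitions are easy to check; indirect harm is scored by an adaptive
// intuition updated from observed outcomes.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class Conscience {
    // Hard-coded restrictions: actions that are simply never permitted.
    private static final Set<String> FORBIDDEN = Set.of("launchWeapons");

    // Learned intuition: per-action harm estimates in [0, 1], adapted
    // from experience, starting from an optimistic default of 0.
    private final Map<String, Double> harmEstimate = new HashMap<>();

    public boolean permits(String action) {
        if (FORBIDDEN.contains(action)) return false;        // the easy case
        return harmEstimate.getOrDefault(action, 0.0) < 0.5; // the hard case
    }

    // Feedback after the fact: observed harm shifts the intuition,
    // sometimes well, sometimes poorly, depending on what is observed.
    public void learn(String action, double observedHarm) {
        harmEstimate.merge(action, observedHarm,
                (old, now) -> 0.8 * old + 0.2 * now);
    }
}
```

Nothing in the adaptive half of this sketch guarantees good judgment; it only guarantees that judgment will track experience, which is exactly the limitation noted above.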

We have also seen that various AI systems may interact socially. Most simply, each one may query others for information, and will need to know which others are the best to ask for which types of information. Each one may give the others advice on yet others, and each one must judge the reliability of each other one’s advice on each particular topic. This fairly simple concept of adaptive information sharing leads to an Internet AI version of the Collective Unconscious, different from the human Collective Unconscious that emerges from the sum total of human data on the Net. There will be a collection of shifting patterns regulating the interaction within the collective AI unity, dimly perceived by any individual AI mind, but providing a ground for intersubjective creativity.
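The adaptive information sharing described here amounts to each AI maintaining per-peer, per-topic reliability estimates. A minimal Java sketch, with all names invented for illustration, might look as follows:

```java
// Minimal sketch of adaptive information sharing: each AI tracks how
// reliable each peer has been on each topic, and asks the best one.
// Class, method, and peer names are expository assumptions.
import java.util.HashMap;
import java.util.Map;

public class PeerDirectory {
    // reliability.get(peer).get(topic) = running estimate in [0, 1]
    private final Map<String, Map<String, Double>> reliability = new HashMap<>();

    // Which known peer is currently the best one to ask about this topic?
    public String bestPeerFor(String topic) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Map<String, Double>> e : reliability.entrySet()) {
            double score = e.getValue().getOrDefault(topic, 0.5); // unknown = neutral
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best; // null if no peers are known at all
    }

    // After using a peer's answer, record whether it proved reliable.
    public void recordOutcome(String peer, String topic, boolean wasReliable) {
        Map<String, Double> byTopic =
                reliability.computeIfAbsent(peer, k -> new HashMap<>());
        double old = byTopic.getOrDefault(topic, 0.5);
        byTopic.put(topic, 0.8 * old + 0.2 * (wasReliable ? 1.0 : 0.0));
    }
}
```

When many AIs each run something like this against each other, the resulting web of mutual reliability estimates is itself one of the “shifting patterns” regulating the collective, perceived only dimly by any single participant.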

12 World Wide Brain 333

However, the interaction between AI minds will be more intense than social interaction between humans, in that different AIs will actually be able to exchange “brain lobes” (collections of knowledge, emotion, opinion, etc.). This social information exchange has no overseeing inner eye, no potential conscience as such, because it is a fundamentally heterogeneous, decentralized process.

An additional twist on these observations is obtained by noting that physical reality for many future AIs may basically consist of the Internet itself, and the computers on it. Some AIs may be embodied in physical robots or simulation worlds, but not all will be. Furthermore, the statistical average patterns of the social interaction of AIs will determine patterns of network traffic, ultimately the laying down of new cable, and so on. The physical substrate is, in this sense, going to be molded by the social dynamics, a mirror of how, according to quantum physics, the collective perceptions of the macroscopic systems in the universe lead to the creation of a concrete universe from the underlying microscopic world of uncertainty, the carving out of a mutual path through multiple-universe space.

We thus arrive at the puzzling observation that there may be no global conscience for the World Wide Brain, because of the distributed, multiowner nature of the Internet (i.e., because of the very feature that has led to the Internet’s explosive growth). The inner eye of conscience has to have the ability to look into and change each part of the system it interpenetrates. For example, if a particular part of the global brain is made up of software owned by company X, then company X would have to agree to give the inner eye of conscience access to their software, trusting that the inner eye was going to improve it for the good of the whole mind. This goes against the corporate competitive ethic, needless to say.

What this discussion points up, above all, is the way the Net blurs the line between the individual and the social. Whereas for us there is a sharp division between individual mind and social mind, for the intelligent Net, there will be more of a continuum. There will be a panoply of overlapping and disjointed inner eyes, a pandemonium of consciences, overlapping and interfering with each other in a much more intimate way than is possible in human intercourse.

Speculatively, one might conjecture that society is less moral than the individual precisely because it has no inner eye, no overall stream of consciousness. However, the disorganization and immorality of society may be necessary for the organization, focus, and morality of the individual. The Net, by avoiding the rigid distinction between immoral society and the moral individual, may avoid many of the problems that have plagued human history, but will surely experience new problems all its own. Consciousness existing in such an environment of constantly shifting boundaries will probably suffer less from the human tendency toward excessive reification and rigid boundary drawing, but may fall into the opposite error more often.

And what will happen to human consciousness, intertwined with such a system, a system that blurs the boundaries of the individual and the social? The only reasonable conclusion is that human consciousness will also lose some of its rigidity in such matters, as, in general, will the consciousness of any organism with which such a system interacts. As humans interact with self-organizing Internet AI systems in the workplace and at home, they will cease to act as if the individual and the social were rigidly separate domains, and will begin to actualize the collective unconscious more explicitly in their individual thoughts and doings.

Joan Preston (Chapter 11, this volume) has discussed the process by which computer interfaces, of the standard and VR type, gradually become “transparent” through repeated use and through similarity in various respects to ordinary human environments. What we are discussing here is something related but perhaps more profound: the increasing transparency of the human self, as the transpersonal aspects of mind become much more directly manifest in physical reality. Selves are now the boundaries that divide mental patterns from each other, but in a world of symbiotic human/AI mind, this will no longer be the case, at least not as strongly as it is now. The transparency of computer interfaces between one human and another, and between humans and the collective-unconsciousness-embodying intelligent Net, engenders a deeper kind of transparency. Morality, which is based on “compassion,” the reaching out of feelings from one self to another, will take an entirely different guise, as the self boundaries that give morality meaning become more fluid and multileveled.