17 Prospects of Artificial Intelligence

17.1 Issues of Artificial Intelligence

Let us begin our considerations with an analysis of the term artificial intelligence. In fact, it has two basic meanings. Firstly, it denotes a common research field of computer science and robotics,2 in which the development of systems performing tasks that require intelligence when performed by humans is the research goal. Secondly, it denotes a property of artificial systems which allows them to perform tasks that require intelligence when performed by humans. Thus, in this meaning artificial intelligence is not a thing, but a property of certain systems, just as mobility is a property of mobile robots,3 which allows them to move.

1 Therefore, the reader is recommended to recall the considerations contained in Sect. 15.1.
2 Usually it is assumed that Artificial Intelligence is a subfield of computer science. However, in this case we would exclude from AI studies such important issues as, e.g., manipulation and locomotion performed by certain AI systems.

© Springer International Publishing Switzerland 2016
M. Flasiński, Introduction to Artificial Intelligence, DOI 10.1007/978-3-319-40022-8_17

Let us notice that artificial intelligence in the second meaning is the subject of research in a discipline called cognitive science rather than computer science or robotics.

Cognitive science is a new interdisciplinary research field concerning the mind and cognitive processes, not only of humans but also of artificial systems. Its research focuses on issues which belong to philosophy, psychology, linguistics, neuroscience, computer science, logic, etc.

In the first chapter we have discussed the Chinese room thought experiment, which was introduced by Searle [269]. On the basis of this experiment views concerning artificial intelligence can be divided into the following two groups:

• Strong Artificial Intelligence, which claims that a properly programmed computer is equivalent to a human brain and its mental activity,

• Weak Artificial Intelligence, in which a computer is treated as a device that can simulate the performance of a brain. In this approach a computer is also treated as a convenient tool for testing hypotheses concerning the brain and mental processes.

According to Searle, the Chinese room shows that simulating human mental activity with the help of a computer (Weak AI) does not mean that these activities take place in the computer in the same way as they do in a human brain. In other words, the brain is not a computer, and computing processes performed according to computer programs should not be treated as equivalent to mental processes in a human brain.

The term computationalism is related to Strong AI. It means the view that a human brain is a computer and any mental process is a form of computing. 4 So a mind can be treated as an information processing system. Computationalists assume that information processed in both kinds of system is of a symbolic form. 5

Let us begin our presentation of views in the modern theory of mind with those which are close to Strong AI. They relate to the Cartesian mind-body problem, which has been presented in Chap. 15. This problem can be described as the issue of the place of mental processes in the physical (material) world. Adherents of analytical (logical) behaviorism regard the mind-body problem with reserve, treating it as an unscientific pseudo-problem [258]. If we talk about mental states, we use a kind of metaphor; in fact, we just want to describe human behavior.

3 In fact, we could say artificial mobility, since a robot is an artefact, i.e., an artificial object which does not exist in nature, so its properties are also “artificial”. While we, as Homo sapiens, do not object to the term robot mobility, in the case of the term computer intelligence we prefer to add artificial. Of course, somebody could say that in this case the term artificial means imperfect. In the author’s opinion, this is not a good interpretation. Does the reader think that some day we will construct a mobile robot which can dance Odette in Swan Lake like Sylvie Guillem?

4 Let us notice that this view is consistent with the philosophical views of T. Hobbes and G.W. Leibniz, which have been presented in Chap. 15.

5 The assumption of computationalists about the symbolic form of information processed by intelligent systems triggered a discussion with followers of connectionism in the 1980s and 1990s (cf. Sect. 3.1).

Therefore, instead of using concepts related to mind, i.e., using an inadequate language, we should use terms describing behavioral patterns.6 Gilbert Ryle, who has been mentioned in Sect. 2.4 (structural models of knowledge representation), is one of the best-known logical behaviorists.

Physicalism is also a theory which can be used to defend Strong AI, since mental phenomena are treated here as identical to physiological phenomena which occur in

a brain. There are two basic approaches in physicalism, and assuming one of them has various consequences in a discussion about Artificial Intelligence.

Type-identity theory (type physicalism) was introduced by John J.C. Smart7 [279] and Ullin Place8 in the 1950s [227]. It asserts that mental states of a given type are identical to brain (i.e., physical) states of a certain type. If one assumes this view, then the following holds. Suppose that in the future we define a mapping between types of brain states and types of mental states. Then our discussion of concepts and the nature of intelligence in philosophy and in psychology (in Chap. 15) could be replaced by a discussion in the fields of neuroscience and neurophysiology (reductive physicalism). Thus, the construction of artificial systems would depend only on the state of knowledge in these fields and on future technological progress. This theory has been further developed by David M. Armstrong9 [11], among others.

A weaker assumption is the basis of token-identity theory (token physicalism): although any mental state is identical to a certain brain state, mental states of a given type need not be identical to brain states of a certain type.

Anomalous

monism was introduced by Donald Davidson10 in 1970 [64]. It is a very interesting theory from the point of view of a discussion about Artificial Intelligence. According to this theory, although mental events are identical to brain (physical) events, there are no deterministic principles which allow one to predict mental events. Davidson also assumed that mental phenomena supervene11 on brain (physical) phenomena. For example, if the brains of two humans are in indistinguishable states, then their

6 We have introduced the issue of inadequate language in Chap. 15, presenting the views of William of Ockham and Ludwig Wittgenstein. Analytical behaviorism has been introduced on the basis of the views of the Vienna Circle.
7 John Jamieson Carswell Smart—a professor of philosophy at the University of Adelaide and Monash University (Australia). His work concerns metaphysics, theory of mind, philosophy of science, and political philosophy.

8 Ullin T. Place—a professor at the University of Adelaide and the University of Leeds. His work concerns the philosophy of mind and psychology. According to his will, his brain is located in a display case at the University of Adelaide with the message: Did this brain contain the consciousness of U.T. Place?

9 David Malet Armstrong—a professor of philosophy at the University of Sydney, Stanford University, and Yale University. His work concerns theory of mind and metaphysics.

10 Donald Herbert Davidson—a professor of philosophy at the University of California, Berkeley, and also other prestigious universities (Stanford, Harvard, Princeton, Oxford). He significantly influenced philosophy of mind, epistemology, and philosophy of language. He was known as an indefatigable man with a variety of interests, such as playing the piano, flying aircraft, and mountain climbing.

11 We say that a set of properties M supervenes on a set of properties B if and only if any two beings which are indistinguishable w.r.t. the set B are also indistinguishable w.r.t. the set M.
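The definition of supervenience can also be written formally; a minimal sketch, with property-wise indistinguishability spelled out (the notation is introduced here for illustration):

```latex
% M supervenes on B iff B-indistinguishable beings are M-indistinguishable:
\forall x \,\forall y \;\Bigl(\bigl(\forall P \in B \; (P(x) \leftrightarrow P(y))\bigr)
  \;\rightarrow\; \bigl(\forall Q \in M \; (Q(x) \leftrightarrow Q(y))\bigr)\Bigr)
```

In the example from the text, B collects brain (physical) properties and M mental properties: sameness of brain state forces sameness of mental state, but not conversely.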

mental states are also indistinguishable. The theory of supervenience has been further developed by Jaegwon Kim12 [157]. Consequently, one can conclude that mental phenomena cannot be reduced to physical phenomena, and laws of psychology cannot be reduced to principles of neuroscience (non-reductive physicalism).

In order to preserve a chronology (at least partially), let us now consider certain views which relate to Weak AI. In 1961 John R. Lucas13 formulated the following argument, based on Gödel’s limitation (incompleteness) theorem, against the possibility of constructing a cybernetic machine that is equivalent to a mathematician [188]. A human mind can recognize the truth of the Gödel sentence, whereas a machine (as a result of Gödel’s incompleteness theorem) cannot, unless it is inconsistent. However, if a machine is inconsistent, it is not equivalent to a human mind. Let us notice that the Lucas argument concerns the intelligence of an outstanding human being, who is capable of developing advanced theories in logic. In 1941 Emil

Post14 had similar objections to machine intelligence, when he wrote in [230]: “We see that a machine would never give a complete logic; for once the machine is made we could prove a theorem it does not prove.” Although some logicians do not agree with this view of J.R. Lucas, modified versions of it appear in the literature from time to time, such as the idea of Roger Penrose15 presented in “The Emperor’s New Mind” [224].
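The Gödelian core of the Lucas argument can be stated as a brief sketch, in standard notation, with G_F denoting the Gödel sentence of a theory F:

```latex
% For any consistent, effectively axiomatized theory F that includes arithmetic:
F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F ,
% yet the Gödel sentence, which asserts its own unprovability in F, is true:
\mathbb{N} \models G_F .
```

Lucas’s claim is that a human mathematician can see that G_F is true, while a machine formalized by F cannot prove it without being inconsistent.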

In 1972 Hubert Dreyfus16 expressed his criticism of Strong AI in a monograph entitled “What Computers Can’t Do: The Limits of Artificial Intelligence” [74]. He presented four assumptions made by adherents of Strong AI which are, in his opinion, unjustified. The biological assumption consists of treating the brain as a kind of digital machine which processes information by discrete operations.17 Viewing the mind as a system which processes information according to formal rules is the psychological assumption. The conviction that knowledge of any kind can be defined with a formal representation is the epistemological assumption. Finally, adherents of Strong AI are convinced that the world consists of independent beings, their properties, relations among beings, and categories of beings. Consequently, all of them can be described

12 Jaegwon Kim—a professor of philosophy at Brown University, Cornell University, and the University of Notre Dame. His work concerns philosophy of mind, epistemology, and metaphysics.

13 John Randolph Lucas—a professor of philosophy at Merton College, University of Oxford, elected as a Fellow of the British Academy. He is known for a variety of research interests, including philosophy of science, philosophy of mind, business ethics, physics, and political philosophy.
14 Emil Leon Post—a professor of logic and mathematics at the City University of New York (CUNY). His pioneering work concerns fundamental areas of computer science such as computability theory and formal language theory.

15 Roger Penrose—a professor at the University of Oxford, mathematician, physicist, and philosopher. In 1988 he was awarded the Wolf Prize (together with Stephen Hawking) for a contribution to cosmology.
16 Hubert Lederer Dreyfus—a professor of philosophy at the University of California, Berkeley. His work concerns phenomenological and existentialist philosophy, and the philosophical foundations of AI.

17 Let us notice that H. Dreyfus formulated this argument when the study of neural networks was outside the research mainstream in AI.

adequately with formal models, e.g., by representing them by constant symbols, predicate (relation) symbols, function symbols, etc., in FOL. Dreyfus calls this view the ontological assumption, and he claims that there is also an unformalizable aspect of our knowledge which results from our body, our culture, etc. Therefore, this kind of (unconscious) knowledge cannot be represented with the help of formal (symbolic) models, because it is stored in our brains in an intuitive form.18
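The ontological assumption can be illustrated with a small FOL sketch; the symbols below are invented for illustration and do not come from the text:

```latex
% Constant symbols name independent beings; predicate symbols express their
% properties, relations, and categories:
\mathit{Robot}(r_1), \qquad \mathit{Mobile}(r_1), \qquad \mathit{PartOf}(w_1, r_1)
% Such atomic facts combine into general formal descriptions of the world:
\forall x \,\bigl(\mathit{Robot}(x) \land \mathit{Mobile}(x) \rightarrow \mathit{CanMove}(x)\bigr)
```

It is exactly the adequacy of such descriptions for all of our knowledge that Dreyfus denies.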

The first version of functionalism, which is one of the most influential theories in Artificial Intelligence, was formulated by Hilary Putnam19 in 1960 [232]. According to this theory, mental states are connected by causal relations in a way analogous to the states of formal automata, which have been discussed in Chap. 8. Just as automaton states are used for defining the automaton’s behavior via the transition function, mental states play a functional role in the mind. Additionally, mental states are in causal relationships with the mental system’s inputs (sensors) and outputs (effectors).20 In early machine functionalism21 the following computer analogy was formulated: brain = hardware and mind = software. Consequently, mental states can be realized by various physical media (e.g., a brain, a computer, etc.), similarly as software can be implemented on various computers.22 The Turing machine is especially attractive as a mind model in functionalism.23

John R. Searle has criticized functionalism on the basis of his Chinese room thought experiment [269], which has been introduced in Chap. 1. In this experiment

he tries to show that a system can behave as if it had intentional states24 if we deliver a set of instructions25 allowing it to perform such a simulation. Searle calls such intentionality “as-if intentionality” [270]. However, this does not mean that the system really has intrinsic intentionality.26 Thus, in functionalism, which equates an

18 Dreyfus represents here the phenomenological point of view, which has been introduced in Sect. 15.1. This relates especially to the work of Martin Heidegger.

19 Hilary Whitehall Putnam—a professor of philosophy at Harvard University. He is known for a variety of research interests, including philosophy of mind, philosophy of language, philosophy of science and mathematics, and computer science (the Davis-Putnam algorithm). A student of H. Reichenbach, R. Carnap, and W.V.O. Quine. Due to his scientific achievements, he was elected a fellow of the American Academy of Arts and Sciences and the British Academy, and he was President of the American Philosophical Association.

20 Analogously to the way we have defined transducers in Chap. 8 .
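The automaton analogy described in the text (mental states linked by a transition function, in causal contact with inputs and outputs) can be sketched as a minimal Mealy-style transducer. This is only an illustration of the formal idea; the states, inputs, and outputs below are invented for the example:

```python
# A minimal Mealy-style transducer: internal states connected by a
# transition function, with inputs ("sensors") mapped to outputs
# ("effectors"). The state names and alphabet are illustrative only.

class Transducer:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # (state, input) -> output symbol
        self.state = start

    def step(self, symbol):
        out = self.outputs[(self.state, symbol)]
        self.state = self.transitions[(self.state, symbol)]
        return out

# A toy "mind" with two states, 'calm' and 'alert', reacting to
# two input symbols, 'noise' and 'quiet'.
transitions = {
    ('calm', 'noise'): 'alert', ('calm', 'quiet'): 'calm',
    ('alert', 'noise'): 'alert', ('alert', 'quiet'): 'calm',
}
outputs = {
    ('calm', 'noise'): 'turn head', ('calm', 'quiet'): 'rest',
    ('alert', 'noise'): 'flee', ('alert', 'quiet'): 'relax',
}

m = Transducer(transitions, outputs, 'calm')
print([m.step(s) for s in ['quiet', 'noise', 'noise', 'quiet']])
# ['rest', 'turn head', 'flee', 'relax']
```

The same input symbol produces different outputs depending on the internal state, which is the functional role the theory assigns to mental states.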

21 At the end of the twentieth century H. Putnam weakened his orthodox version of functionalism and in 1994 he published a paper “Why Functionalism Didn’t Work”. Nevertheless, new theories (e.g., psychofunctionalism, represented by Jerry Fodor and Zenon Pylyshyn) were developed on the basis of his early model.

22 This thesis was formulated by H. Putnam in the late 1960s as an argument against type-identity theory. It is called multiple realizability.

23 Since the Turing machine is an automaton of the greatest computational power (cf. Appendix E).
24 The concept of intentionality has been introduced in Sect. 15.1, when the views of Franz Brentano have been presented.

25 For example, a computer program is such a set of instructions.
26 In other words, a computer does not want to translate a story, does not doubt whether it has translated a story properly, is not curious to know how a story ends, etc.

information system with a human being, there is no difference between something that is really intentional and something that is apparently intentional.

Daniel Dennett27 proposed another approach to the issue of intentionality in 1987 [68]. The behavior of systems can be explained on three levels of abstraction. At the lowest level, called the physical stance and concerning both the physics and chemistry domains, we explain the behavior of a system in a causal way with the help of the principles of science. The intermediate level, called the design stance, includes biological systems and systems constructed in engineering. We describe their behavior in a functional way.28 Minds and software belong to the highest level, called the intentional stance. Their behavior can be explained using concepts of intentionality, beliefs, etc.29

The argument of the Chinese room can be challenged if one assumes the most extreme view supporting Strong AI, namely eliminative materialism (eliminativism), introduced by Patricia Smith Churchland30 and Paul M. Churchland31 [49]. According to this view mental phenomena do not exist. Concepts such as intentionality, belief, and mind do not explain anything, so they should be removed from science and replaced with terms of biology and neuroscience.

Researchers who develop AI systems also take part in the discussion about Strong AI. As in the case of philosophers and cognitive scientists, views on this matter are divided. For some of them, successes in constructing AI systems show that in the future the design of an “artificial brain” will be possible. Hans Moravec32 and Raymond Kurzweil33 are the most notable researchers who express such a view.
