14.2 Multi-agent Systems
For solving very difficult problems we can implement teams of interacting agents. Such teams are called multi-agent systems (MAS). There are four basic features of a multi-agent system [295]:
Fig. 14.3 An exemplary multi-agent system based on the blackboard architecture (figure elements: blackboard, blackboard controller, subsystem management agent, component behavior recognition agent, component monitoring agent, environment)
• no agent can solve the problem by itself,
• the system is not controlled in a centralized way,
• data are distributed in the system,
• computing in the system is performed in an asynchronous way. 11

The form of communication among agents is a key issue in the theory of multi-agent systems. There are two basic scenarios. The first scenario is based on the blackboard architecture [132]. An example 12 is shown in Fig. 14.3. The agents communicate by writing and reading messages to and from a global hierarchical repository called a blackboard. It plays a role similar to the working memory in rule-based systems and it can contain hypotheses to be verified, partial results of an inference process, etc. Agents in blackboard systems are called knowledge sources. One distinguished agent plays the role of the blackboard controller. It is responsible for focusing the agents' attention on subproblems to be solved, managing access to the blackboard, etc.
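The blackboard scenario described above can be sketched in a few lines of code. This is a minimal illustration, not the architecture of the cited ZEUS system; all class and method names (`Blackboard`, `KnowledgeSource`, `Controller`) are assumptions made for the example.

```python
class Blackboard:
    """Global hierarchical repository shared by all knowledge sources."""
    def __init__(self):
        self.levels = {}  # abstraction level name -> list of entries

    def write(self, level, entry):
        self.levels.setdefault(level, []).append(entry)

    def read(self, level):
        return list(self.levels.get(level, []))


class KnowledgeSource:
    """An agent that reads from one level of the blackboard and writes
    more abstract hypotheses to another level."""
    def __init__(self, name, in_level, out_level, transform):
        self.name, self.in_level, self.out_level = name, in_level, out_level
        self.transform = transform

    def can_contribute(self, bb):
        return bool(bb.read(self.in_level))

    def contribute(self, bb):
        for entry in bb.read(self.in_level):
            bb.write(self.out_level, self.transform(entry))


class Controller:
    """Blackboard controller: focuses agents' attention by deciding
    which knowledge source may access the blackboard next."""
    def __init__(self, sources):
        self.sources = sources

    def run(self, bb, steps=10):
        for _ in range(steps):
            ready = [s for s in self.sources if s.can_contribute(bb)]
            if not ready:
                break
            ready[0].contribute(bb)        # simple fixed-priority policy
            self.sources.remove(ready[0])  # each source fires once here


# Usage: a raw sensor reading is abstracted into a component state.
bb = Blackboard()
bb.write("raw", {"component": "pump", "value": 7.3})
monitor = KnowledgeSource(
    "monitor", "raw", "state",
    lambda e: {**e, "state": "high" if e["value"] > 5 else "ok"})
Controller([monitor]).run(bb)
print(bb.read("state"))  # one abstracted entry derived from the raw reading
```

The controller, not the knowledge sources, decides who writes next; this is what distinguishes the blackboard model from direct message passing discussed later in the section.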
Agents in blackboard systems usually create a hierarchical structure corresponding to levels of abstraction of the blackboard information structure. As one can see in Fig. 14.3, agents placed at the lowest level perform simple monitoring of components of complex equipment, which is the environment of the multi-agent system.
11 A multi-agent system is a distributed system, i.e., it is a system consisting of independent computing components. An asynchronous way of computing means here that we do not assume any time restrictions for performing computations by these components. Of course, if we were able to impose time restrictions (the synchronous model), then we would be able to control the whole computing process. Unfortunately, in multi-agent systems we cannot assume how much time an agent requires to solve its subproblem.
12 The example is discussed in the paper: Behrens U., Flasiński M., et al.: Recent developments of the ZEUS expert system. IEEE Trans. Nuclear Science 43 (1996), 65–68.
Each such agent reads information describing the current situation in the component
supervised. Then, it identifies the current state of this component and performs the action of sending a message about this state to an agent on a higher level. Thus, the lowest-level agent performs according to the principle: perception—action. An agent of this kind is called a reflex agent.
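The perception–action principle of a reflex agent can be expressed as a direct mapping from percepts to actions, with no memory of past events. The rule table below is a hypothetical example, not taken from the system in Fig. 14.3.

```python
# A minimal reflex agent: each percept maps directly to an action.
# The percept names and actions are illustrative assumptions.
REFLEX_RULES = {
    "temperature_high": "send_alarm",
    "temperature_ok":   "do_nothing",
}

def reflex_agent(percept):
    """Return an action based solely on the current percept."""
    return REFLEX_RULES.get(percept, "do_nothing")

print(reflex_agent("temperature_high"))  # -> send_alarm
print(reflex_agent("temperature_ok"))    # -> do_nothing
```

Note that the same percept always produces the same action; there is no internal state that could make the agent's response depend on history.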
At a higher level are placed agents which recognize the behavior of components in time series. Such agents are not only able to perceive a single event, but they can also monitor processes, i.e., they can remember sequences of events. As a result of
monitoring a process the internal state of such an agent can change. 13 In some states the agent can perform certain actions. An agent of this type is called a reflex agent with internal states (or model-based agent). 14
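As footnote 13 suggests, the internal states of such an agent can be simulated with a finite automaton. The sketch below is one possible illustration, with assumed state names and events: the agent raises an alert only after observing two faults in a row, so its action depends on the remembered sequence of events, not on a single percept.

```python
# A reflex agent with internal states (model-based agent), modeled as a
# finite automaton. States, events, and the transition table are
# illustrative assumptions for this example.
TRANSITIONS = {
    ("calm",       "ok"):    "calm",
    ("calm",       "fault"): "suspicious",
    ("suspicious", "ok"):    "calm",
    ("suspicious", "fault"): "alarmed",
    ("alarmed",    "ok"):    "calm",
    ("alarmed",    "fault"): "alarmed",
}

class ModelBasedAgent:
    def __init__(self):
        self.state = "calm"

    def perceive(self, event):
        """Update the internal state; act only in certain states."""
        self.state = TRANSITIONS[(self.state, event)]
        return "notify_higher_level" if self.state == "alarmed" else None

agent = ModelBasedAgent()
actions = [agent.perceive(e) for e in ["ok", "fault", "fault", "ok"]]
print(actions)  # [None, None, 'notify_higher_level', None]
```

A single "fault" event produces no action; only the sequence of two faults changes the internal state enough to trigger one, which is exactly the difference from the reflex agent above.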
Since components of the equipment are grouped into subsystems which should be supervised by the multi-agent system, managing agents are placed at the highest level of the hierarchy. These agents make use of information delivered by agents of lower levels in order to make optimum decisions. An agent of this level makes decisions after communicating (via the blackboard) with other agents of this level, because the subsystem it supervises does not work independently of other subsystems of the equipment. A managing agent should be able to determine a goal which should
be achieved in any specific situation in the environment. Therefore, an agent of this kind is called a goal-based agent.
In the example above a utility-based agent has not been defined. An agent of this type determines goals in order to reach maximum satisfaction (treated as
a positive emotion) after their achievement. The main idea of this agent is based on the appraisal theory of emotions introduced in the twentieth century by Magda Arnold. 15 According to this theory an evaluation (appraisal) of a perceived situation results in an emotional response, which is based on this evaluation. In the case of a utility-based agent, however, the main problem concerns ascribing numerical values to its internal (emotional) states.
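Once such numerical values have been ascribed, goal selection itself is simple: the agent pursues the goal whose achievement promises the highest utility. The goals and utility values below are hypothetical, chosen only to illustrate the selection step.

```python
# Goal selection in a utility-based agent: each candidate goal is
# assigned a numerical utility (the "satisfaction" value discussed
# above) and the agent pursues the one with maximum utility.
def choose_goal(goal_utilities):
    """Pick the goal whose achievement promises maximum utility."""
    return max(goal_utilities, key=goal_utilities.get)

# Hypothetical goals for a subsystem-managing agent.
utilities = {
    "restore_subsystem": 0.9,
    "shut_down_safely":  0.7,
    "log_incident_only": 0.4,
}
print(choose_goal(utilities))  # -> restore_subsystem
```

The hard part, as the text notes, is not the maximization but obtaining defensible numbers for the utilities in the first place.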
In the second scenario of communication among agents, the message passing model is used. This model is based on speech-act theory, inspired by Wittgenstein’s Sprachspielen concept. 16 The model was introduced by John L. Austin 17 in the second half of the twentieth century [13]. According to this theory, uttering certain sentences,
13 Changes of internal states of an agent can be simulated with the help of a finite automaton introduced in Chap. 8 .
14 The difference between a reflex agent and a reflex agent with internal states is analogous to the difference between a human being who reacts immediately, without reflection, after perceiving a stimulus and a human being who does not react at once, but observes a situation and then takes an action if his/her internal state changes (e.g., from being calm to being annoyed).
15 Magda B. Arnold—a professor at Loyola University in Chicago and Harvard University. Her work mainly concerns the psychology of emotions. She was known as an indefatigable woman. In her nineties she was still climbing hills. She died when she was nearly 99 years old.
16 The Sprachspielen concept is introduced in Chap. 15 .
17 John Langshaw Austin—a professor of philosophy at Oxford University, one of the best known scientists of the British analytic school. His work concerns philosophy of language, philosophy of
mind, and philosophy of perception.
Fig. 14.4 An exemplary multi-agent system based on message passing (figure elements include a management agent)
called performatives by Austin, in certain circumstances is not just saying something, but performing an action of a certain kind. For example, the uttering of a marriage formula by a priest results in a marriage act. In the case of multi-agent systems, performatives correspond to communication acts, which are performed by an agent in relation to other agents. For example, they can take the form of a query concerning something,
a demand to perform some action, a promise of performing some action, etc. In this model agents communicate according to predefined rules, which should ensure the performative result of the messages sent.
An example of the scheme of a multi-agent system based on the message passing model is shown in Fig. 14.4 . 18 As one can see, agents interact directly with each other. These interactions are performed by sending performative messages, which result in influencing the environment in the required way.
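The message-passing model can be sketched as agents exchanging messages tagged with a performative, loosely in the spirit of agent communication languages such as KQML or FIPA-ACL. The agent names, performatives handled, and reply behaviors below are illustrative assumptions, not the protocol of the system in Fig. 14.4.

```python
from collections import namedtuple

# A message carries a performative (query, request, ...) in addition
# to its content, so receiving it obliges the receiver to act.
Message = namedtuple("Message", ["performative", "sender", "receiver", "content"])

class Agent:
    def __init__(self, name, knowledge=None):
        self.name = name
        self.knowledge = knowledge or {}
        self.inbox = []

    def send(self, other, performative, content):
        """Interact directly with another agent, no shared repository."""
        other.inbox.append(Message(performative, self.name, other.name, content))

    def handle(self):
        """Process the inbox according to predefined rules per performative."""
        replies = []
        for msg in self.inbox:
            if msg.performative == "query":
                replies.append((msg.sender, self.knowledge.get(msg.content)))
            elif msg.performative == "request":
                replies.append((msg.sender, f"done:{msg.content}"))
        self.inbox.clear()
        return replies

manager = Agent("manager")
worker = Agent("worker", knowledge={"pump_state": "ok"})
manager.send(worker, "query", "pump_state")
manager.send(worker, "request", "restart_pump")
replies = worker.handle()
print(replies)  # [('manager', 'ok'), ('manager', 'done:restart_pump')]
```

In contrast to the blackboard scenario, there is no central repository or controller here: each agent holds its own inbox, and the predefined per-performative rules are what ensure the performative result of the messages sent.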
Bibliographical Note
A good introduction to multi-agent systems can be found in [80, 88, 274, 312, 319]. The prospects for future development of cognitive architectures are discussed in [77].
18 The example is discussed in the paper Flasiński M.: Automata-based multi-agent model as a tool for constructing real-time intelligent control systems. Lecture Notes in Artificial Intelligence 2296
Part III
Selected Issues in Artificial Intelligence