Multimodal-Eliza Perceives and Responds to Emotion – S. Fitrianie L.J.M Rothkrantz
ISSN 1858-1633 2005 ICTS 155
possibilities of reply sentences based on their topic and history than Eliza. Our QA system uses
Wallace's pattern-matching operation.
3. ADDING NONVERBAL BEHAVIOUR
Our system is capable of extracting emotion indications, or emotion-eliciting factors, from
a dialog. The system reasons over these results to trigger one of the possible displayed expressions. As a reference,
we performed an experiment to determine a list of possible expressions for our QA system.
3.1. Dialog Processing
Our prototype extracts emotion-eliciting factors from a dialog using two approaches. First, the system
analyzes the choice of words in a string. For this purpose, we developed an emotive lexicon dictionary.
Currently, it consists of 347 emotion words merged from 000. Based on 0, the words were grouped into
eight octants of valence-arousal (see Table 1). For ambiguous emotion words, we used a thesaurus
to find the closest semantic meaning of the word to other words within an octant. A parser
matches the string against the dictionary and calculates a counter C for "pleasant" and
"unpleasant" using the following equation:
$\forall\, l_i \in d_i \mid C_i^t = C_i^{t-1} + I_i \cdot s, \qquad \forall\, j \neq i \mid C_j^t = C_j^{t-1} - I_i$   (3)
where l is a lexicon entry and d is the dictionary, i is the active pleasantness, I is the lexicon
entry's arousal degree, s is a summation factor, and j ∈ {pleasant, unpleasant}. The system takes the
counter with the highest value.
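The counter update in equation (3) can be sketched as follows. The lexicon entries, arousal degrees, and the value of the summation factor s below are illustrative assumptions, not the paper's actual dictionary data:

```python
# Sketch of the counter update in equation (3), under assumed data.
# Each lexicon entry maps an emotion word to its valence ("pleasant" or
# "unpleasant") and an arousal degree I (here a hypothetical 0..1 scale).
LEXICON = {
    "joyful":    ("pleasant", 0.7),
    "loving":    ("pleasant", 0.8),
    "irritated": ("unpleasant", 0.6),
    "sad":       ("unpleasant", 0.4),
}

S = 1.0  # summation factor s (assumed value)

def update_counters(words):
    """Scan an utterance; for each emotion word, raise the counter of its
    valence i by I*s and lower the opposite counter j by I (equation 3)."""
    counters = {"pleasant": 0.0, "unpleasant": 0.0}
    for w in words:
        if w in LEXICON:
            valence, arousal = LEXICON[w]
            other = "unpleasant" if valence == "pleasant" else "pleasant"
            counters[valence] += arousal * S
            counters[other] -= arousal
    return counters

def dominant_valence(counters):
    # The system takes the counter with the highest value.
    return max(counters, key=counters.get)
```

For the utterance "i am joyful but sad", the pleasant counter ends higher than the unpleasant one, so "pleasant" would be selected.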
<category> <affect name="neutral">
  <pattern>WHAT IS YOUR NAME</pattern>
  <that></that>
  <template><setconcern>pleasant</setconcern>
    <setaffect>pleasant</setaffect>
    My <set_topic>name</set_topic> is <bot name="name"/>.
  </template>
</affect> </category>
<topic name="NAME"> <category> <affect name="unpleasant">
  <that>MY NAME IS</that> <pattern>YOUR</pattern>
  <template><random>
    <li><setconcern>pleasant</setconcern>
      I am sorry, but tell me your name.</li>
    <li><setconcern>unpleasant</setconcern>
      I am sorry, tell me what happened.</li>
  </random></template>
</affect> </category> ... </topic>

Figure 3. Example units in the AIML database.
Finally, the system extracts the dialog's emotional situation. For this purpose, we added two labels to the
AIML scheme (see figure 3): (1) a label to mark the user's emotional situation, "affect"; and (2) a label
to mark the system's emotional situation, "concern". These labels describe a type of
valence (neutral, pleasant, or unpleasant) or a sign of a joke. With these additional tags, the input pattern-matching operation searches not only within the same conversation topic and the same history pattern,
but also within the same user emotional situation. In this way, the tags also indicate the conversation's
emotional situation.
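The affect-aware matching step could be illustrated as in the sketch below. The category representation and the fallback order are our own simplification of the AIML operation described above, not the system's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Category:
    """Simplified AIML category extended with the 'affect' label."""
    topic: str      # conversation topic
    that: str       # previous system utterance (dialog history)
    pattern: str    # input pattern
    affect: str     # user's emotional situation: neutral/pleasant/unpleasant
    template: str   # reply template

def match(categories: List[Category], topic: str, that: str,
          user_input: str, affect: str) -> Optional[Category]:
    """Search first for a category with the same topic, history pattern,
    AND user affect; if none exists, fall back to ignoring affect."""
    def fits(c: Category, check_affect: bool) -> bool:
        return (c.topic == topic and c.that == that
                and c.pattern == user_input
                and (not check_affect or c.affect == affect))
    for check_affect in (True, False):
        for c in categories:
            if fits(c, check_affect):
                return c
    return None
```

With two categories for the same pattern that differ only in affect, the same user input yields a sympathetic reply when the user's situation is "unpleasant" and a plain reply otherwise.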
3.2. Emotion Expression Experiment
How many and what kinds of displayed emotional expressions are used in conversation is a non-trivial
question. Many theorists and psychologists have tried to categorize emotion types, e.g. 000. An experiment
was performed to identify the most expressive facial expressions used in conversations. This
experiment also aimed to determine what kinds of objects, events, and actions triggered these
expressions.
We recorded four dialogs between two participants. The participants were asked to hold dialogues
about different topics and to show as many expressions as possible. The video recordings were then annotated. As
a first step, three independent observers marked the onset and offset of each expression. In the next step,
these expressions were labelled according to their context. The agreement rates between the observers in
both steps were about 73%.
The experimental results indicated that our participants showed a neutral face most of the time.
However, we captured 40 different facial expressions in total, about 20-35 different expressions
per participant in each dialog. The results also showed that the expressions depended not only on the
choice of words but also on the context of the conversation. A word could mean different things
depending on the context, and the speaker or the listener might therefore display different
facial expressions.
Our experimental results were corroborated by an experiment conducted by Desmet 0. He found 41
displayed emotion expressions actually used to appraise a product (table 1); our experimental results
did not include "greedy". Based on 0, he placed these expressions along two dimensions: degree of
"pleasantness" (valence) and "activation" (arousal).
Table 2. Emotions in Eight Octants, modified from 0

No  Valence-Arousal      Emotion Expressions
1.  Neutral-Excited      Curious, amazed, avaricious, stimulated, concentrated, astonished, eager
2.  Pleasant-Excited     Inspired, desiring, loving
3.  Pleasant-Average     Pleasantly surprised, fascinated, amused, admiring, sociable, yearning, joyful
4.  Pleasant-Calm        Satisfied, softened
5.  Neutral-Calm         Awaiting, deferent
6.  Unpleasant-Calm      Bored, sad, isolated, melancholy, sighing
7.  Unpleasant-Average   Disappointed, contempt, jealous, dissatisfied, disturbed, flabbergasted, cynical
8.  Unpleasant-Excited   Irritated, disgusted, indignant, unpleasantly surprised, frustrated, greedy, alarmed, hostile

Information and Communication Technology Seminar, Vol. 1 No. 1, August 2005
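Selecting one of the eight octants from the winning valence could be done as in the sketch below. The arousal thresholds, the 0..1 arousal scale, and the shortened expression lists are illustrative assumptions, not values from the paper:

```python
# Hypothetical mapping from (valence, arousal band) to an octant of
# Table 2; expression lists are abbreviated for the sketch.
OCTANTS = {
    ("neutral", "excited"):    "Curious, amazed, stimulated",
    ("pleasant", "excited"):   "Inspired, desiring, loving",
    ("pleasant", "average"):   "Pleasantly surprised, fascinated, amused",
    ("pleasant", "calm"):      "Satisfied, softened",
    ("neutral", "calm"):       "Awaiting, deferent",
    ("unpleasant", "calm"):    "Bored, sad, isolated",
    ("unpleasant", "average"): "Disappointed, contempt, jealous",
    ("unpleasant", "excited"): "Irritated, disgusted, hostile",
}

def to_octant(valence: str, arousal: float) -> str:
    """Pick an arousal band (thresholds are assumed), then look up the
    octant's expressions. Table 2 has no neutral-average octant, so the
    neutral valence splits into only two bands."""
    if valence == "neutral":
        band = "excited" if arousal >= 0.5 else "calm"
    elif arousal >= 0.66:
        band = "excited"
    elif arousal >= 0.33:
        band = "average"
    else:
        band = "calm"
    return OCTANTS[(valence, band)]
```

For example, a "pleasant" outcome with high arousal lands in the Pleasant-Excited octant, while an "unpleasant" outcome with low arousal lands in Unpleasant-Calm.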
3.3. Facial Expression Generation