Decision Making
In FAtiMA, each agent (Role Play Character) has its own decision-making process. This process is based on rules with logical conditions.
It is important to note that while we define actions and their effects within the FAtiMA Toolkit, if the author is using it to extend a game or an application there must be a bridge between the two environments. The agent might decide to move a table or to compliment another agent in FAtiMA, but outside the Toolkit it is not able to actually move a table in a game or walk towards another agent to talk to them. Actions, emotions and events in FAtiMA must later be implemented in the target environment.
The Role Play Character asset has a “Decide” method that returns a list of all the actions the agent wants to perform; these actions are defined in the Emotional Decision Making (EDM) asset.
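To make the bridge idea concrete, here is a minimal Python sketch of how a game loop could consume those decisions. It is purely illustrative: the `Action` class, the `bridge_step` function and the handler names are hypothetical, not part of the Toolkit's (C#) API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    """One decision produced by the agent (hypothetical stand-in for an EDM action)."""
    name: str
    target: str = ""

def bridge_step(decided: List[Action], handlers: Dict[str, Callable[[str], None]]) -> None:
    """Map each decided action onto a game-side implementation, if one exists."""
    for action in decided:
        handler = handlers.get(action.name)
        if handler is None:
            print(f"No game implementation for {action.name!r}, skipping")
            continue
        handler(action.target)  # the engine actually performs the behaviour

# Hypothetical game-side implementations of two FAtiMA-decided actions.
handlers = {
    "Insult": lambda target: print(f"Play insult animation and line towards {target}"),
    "DrinkCoffee": lambda target: print("Play drink-coffee animation"),
}

# Pretend the Role Play Character's Decide() returned this list of candidates.
bridge_step([Action("Insult", "Luke")], handlers)
```

The important point is that FAtiMA only produces the decision; the game or application remains responsible for actually animating, moving and speaking.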
Here are a few examples of how actions are composed in the Toolkit’s framework:
“Insult agents that I do not like when I am in a negative mood”:
Action:Insult, Target:[x], Conditions:
- IsAgent([x])=True,
- Like(SELF,[x])<0,
- Mood(SELF)<0
“Compliment agents that I like when I am in a positive mood”:
Action:Compliment, Target:[x], Conditions:
- IsAgent([x])=True,
- Like(SELF,[x])>0,
- Mood(SELF)>0
“Imitate the facial expression of another agent I like”:
Action:Express([e]), Conditions:
- IsAgent([x])=True,
- Like(SELF,[x])>0,
- Facial-Expression([x])=[e]
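Conceptually, each of these rules is just data: an action template containing variables such as [x] and [e], plus a list of conditions. A minimal Python encoding of the three examples above might look as follows (this is only an illustration of the structure, not the Toolkit’s authoring format):

```python
# Each rule: an action template (with variables such as [x] and [e]) plus conditions.
# The condition strings mirror the pattern used above; this encoding only illustrates
# the structure, not how the Toolkit actually stores rules.
rules = [
    {
        "action": "Insult",
        "target": "[x]",
        "conditions": ["IsAgent([x])=True", "Like(SELF,[x])<0", "Mood(SELF)<0"],
    },
    {
        "action": "Compliment",
        "target": "[x]",
        "conditions": ["IsAgent([x])=True", "Like(SELF,[x])>0", "Mood(SELF)>0"],
    },
    {
        "action": "Express([e])",
        "conditions": ["IsAgent([x])=True", "Like(SELF,[x])>0", "Facial-Expression([x])=[e]"],
    },
]
```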
The agent can decide to perform multiple actions simultaneously, which can be important for combining verbal and non-verbal actions.
In order to maintain consistency and readability, all conditions are expressed in exactly the same manner:
- Likes(SELF,[x])=True
- Mood(SELF)>5
- Emotion Intensity(SELF,Distress)>2
Some of the conditions above use “Meta-Belief” keywords. These are registered in the Knowledge Base as procedures, with their names becoming reserved keywords. They will be discussed further in Section 7.
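As a loose illustration of that idea (not FAtiMA’s actual mechanism; the names and values below are hypothetical), a Meta-Belief can be pictured as a procedure registered under a reserved name, so that querying it computes a value on demand instead of looking up a stored belief:

```python
# Static beliefs plus "meta-beliefs": procedures registered under reserved names.
beliefs = {("Like", "SELF", "Luke"): -2}

def emotion_intensity(agent, emotion):
    # Hypothetical computation; FAtiMA derives this from its appraisal process.
    return {"Distress": 3}.get(emotion, 0)

meta_beliefs = {"EmotionIntensity": emotion_intensity}

def query(name, *args):
    """Resolve a condition term: run a registered procedure if the name is reserved."""
    if name in meta_beliefs:
        return meta_beliefs[name](*args)
    return beliefs.get((name, *args))

# Mirrors the "Emotion Intensity(SELF,Distress)>2" condition shown above.
print(query("EmotionIntensity", "SELF", "Distress") > 2)  # True
```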
Testing a list of conditions consists of building an activation tree, in which conditions are evaluated from top to bottom. Let’s take a look at an example:
“Insult agents that I do not like when I am in a negative mood”:
Action:Insult, Target:[x], Conditions:
- IsAgent([x])=True,
- Like(SELF,[x])<0,
- Mood(SELF)<0
The agent’s knowledge base is listed below, and the accompanying figure shows how the action’s conditions were evaluated:
- IsAgent(John) = True
- IsAgent(Mary) = True
- IsAgent(Luke) = True
- Like(SELF,John) = 3
- Like(SELF, Mary) = 5
- Like(SELF, Luke) = -2
- Mood(SELF) = -2
The agent tries to unify each condition with the beliefs in its KB (Knowledge Base). As shown in the figure, evaluating the first condition produces 3 possible values for the variable [x]. The agent carries these substitutions forward to the next condition, where only one of them, [x]/Luke, still holds. With that substitution applied, the remaining conditions also evaluate to True. As a result, the agent will want to perform the action Insult with the Target: Luke.
Note that in Θ4 and Θ5 the [x]/Luke unification remains and is applied throughout the whole process. In the end, the resulting substitutions are applied to the action, which in this case turns the action’s Target: [x] into Target: Luke.
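The same top-down evaluation can be sketched in a few lines of Python: each condition narrows the set of candidate substitutions for [x], and whichever substitution survives all conditions is applied to the action template. This is a simplified illustration of the idea, not FAtiMA’s actual unification code.

```python
# Beliefs from the example knowledge base.
kb = {
    ("IsAgent", "John"): True, ("IsAgent", "Mary"): True, ("IsAgent", "Luke"): True,
    ("Like", "John"): 3, ("Like", "Mary"): 5, ("Like", "Luke"): -2,
    ("Mood",): -2,
}

agents = [name for (pred, *args) in kb if pred == "IsAgent" for name in args]

def evaluate_insult_rule():
    """Evaluate the Insult rule's conditions top-down, narrowing the substitutions for [x]."""
    # Condition 1: IsAgent([x])=True  ->  three candidate substitutions for [x].
    candidates = [x for x in agents if kb.get(("IsAgent", x)) is True]
    # Condition 2: Like(SELF,[x])<0  ->  only Luke survives.
    candidates = [x for x in candidates if kb.get(("Like", x), 0) < 0]
    # Condition 3: Mood(SELF)<0  ->  no variables, just a check on the agent itself.
    if kb[("Mood",)] >= 0:
        return []
    # Apply the surviving substitution(s) to the action template: Target:[x] becomes the value.
    return [{"action": "Insult", "target": x} for x in candidates]

print(evaluate_insult_rule())  # [{'action': 'Insult', 'target': 'Luke'}]
```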
Let’s go back to our scenario and add some actions for the agent to do:
I wrote two different action rules. The first deals with drinking coffee, and it only has two conditions:
- Mood(SELF) < -1
- isAnxious(SELF) = True
If the agent is anxious and in a bad mood, he will want to drink coffee.
The second action rule, Insult, is shown in the figure above and has 3 conditions: if Charlie does not like [x], if he is angry, and if that [x] is an agent, then he will want to insult that agent [x].
Please note that the “Insult” action has a higher priority than the “DrinkCoffee” action. This means that if the agent decides on both, it will perform the action with the highest priority first.
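In other words, when several decided actions are applicable at the same time, the highest-priority one wins. A tiny illustrative sketch of that selection rule (the structure and priority values are hypothetical, not the Toolkit’s API):

```python
# Candidate actions decided in the same cycle; the priority values are assumed for illustration.
decided = [
    {"action": "DrinkCoffee", "priority": 1},
    {"action": "Insult", "target": "[x]", "priority": 2},
]

# The higher-priority action is performed first.
chosen = max(decided, key=lambda a: a["priority"])
print(chosen["action"])  # Insult
```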
We define what exactly the Insult action is, and what the agent will say, using the Dialogue Manager.
Let’s take a look at another reasoning component that can help authors improve the theory of mind behind their agents: the Social Importance Asset.