Using the Integrated Authoring Tool

FAtiMA Toolkit is a collection of tools and assets designed for the creation of characters (virtual or robotic) with social and emotional intelligence.

FAtiMA is character-centric, which means that the narrative emerges from how the player and the agents interact with each other in accordance with their given narrative goals.

FAtiMA Toolkit is composed of many different pieces working together, as seen in the figure below.

Each component can work independently; however, they work best together.

At first glance it might seem overwhelming; however, this guide will show you how it all works with simple, understandable examples.

The latest release of the FAtiMA-Toolkit is available on its GitHub page. It includes new features such as a chat simulator, a dialogue tree generator, a World Model asset and a Monte Carlo Tree Search support asset.

The Integrated Authoring Tool asset is the most important component of FAtiMA Toolkit as it comprises all the others.
The program is a Windows Forms application and can be found in the “Executables” folder under the name “IntegratedAuthoringToolWF.exe”.

The following image is a screenshot of its latest version.

1 – Creating a new Scenario

The Authoring Tool reads “.iat” files. Each file gathers all the information of the scenario it is responsible for: its name, its description, its characters and a “pool” of dialogue actions.

In order to create a new .iat file, click “File->Save As” and save the new file in the directory you prefer.

Here is an overview of what each area within the Tool does.

 

Let’s start by naming our scenario (anything you’d like) and writing a brief description.
Additionally, let’s create a character for our scenario. In FAtiMA each agent is represented as a Role Play Character, which means that when we click the “Create” button (in the Characters section) the toolkit will create an “.rpc” file.

Here is a screenshot of what I ended up with:

Once you create a character, the tab on the right will change to the Role Play Character Editor, where we can edit each of the character’s traits. Let’s start by changing its Name and saving the scenario via “File->Save”.

Below the “Name” field there are different tabs which lead to the different reasoning and emotional assets within FAtiMA Toolkit.
The first is the “Emotional State” tab, which captures the agent’s starting emotional state. We will discuss this later; for now, let’s move on to:

 

2 – The Knowledge Base and Autobiographic Memory

Just as the Emotional State tab defines the emotional state the agent starts with, these components deal with the agent’s memory at the start of the scenario.

  • Knowledge Base: 
    • Stores the agent’s beliefs about the environment
      • Properties of agents and objects
      • Relationships
  • Autobiographic Memory:
    • Stores the agent’s recollection of past events and the emotions associated with them

 

In FAtiMA beliefs and events are described by “Well-Formed-Names” (WFN):

  • Symbols:
    • Represent constant entities (actions, objects, names of properties, names of relations)
    • Ex: “Sam”, “A1”, “Table”
  • Variables:
    • Represent an entity or value that is not specified yet
    • Can be replaced by symbols
    • Ex: “[x]”, “[target]”
  • Composed Names:
    • Represent properties or relations
    • Ex: “Likes(Emys,Chocolate)”, “Has(Sam, [x])”
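Composed names paired with values form the agent’s beliefs in the Knowledge Base. As a quick illustration (these exact beliefs are not part of the tutorial scenario; the entities are reused from the examples above and below), a small knowledge base could contain:

  • Likes(Emys, Chocolate) = True
  • Has(Sam, Candy) = True
  • Weather(Now) = Raining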

 

 

Additionally, as we mentioned, we can store past events in the Autobiographic Memory.

Event names are represented by composed names:

  • Event(Action-Start, Player, Smile, John)
  • Event(Action-End, Player, Greet(Casual), John)
  • Event(Property-Change, World, Weather(Now), Raining)

Above are the three types of events present in FAtiMA. Action-Start and Action-End define when a particular action starts and when it finishes. Property-Change events are used to update the agent’s beliefs. They are defined using the following templates:

  • Action-Start: Event(Action-Start, Subject, Action, Target)
  • Action-End: Event(Action-End, Subject, Action, Target)
  • Property-Change: Event(Property-Change, Subject, Property Name, New Value)

The Subject field represents which agent started or finished that event.

Here is what I wrote in Charlie’s beliefs; the last one relates to the state of the conversation Charlie will have with the Player.
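For reference, those beliefs written out might look like this (a sketch: isAnxious and DialogueState are the properties used later in this guide, and the exact set is illustrative):

  • IsAgent(Player) = True
  • isAnxious(SELF) = True
  • DialogueState(Player) = Start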

 

 

3 – Emotional Appraisal

Emotions in FAtiMA are based on the OCC theory of emotions:

  • Cognitive theory of emotions: emotions develop as a consequence of certain cognitions and interpretations

  • Emotions represent valenced (good or bad) reactions to events/perceptions of the world

  • OCC defines a set of appraisal variables:
    • The main variables used to determine the type and intensity of the emotions generated, such as:
      • Desirability
      • Praiseworthiness
      • Likelihood
      • Liking

 

Appraisal Process – What does an agent feel about an event?

In FAtiMA the appraisal process is defined by two main steps, the Appraisal Derivation and the Affect Derivation.

The first, Appraisal Derivation, determines what appraisal variables are affected by the event and their value.

The second, Affect Derivation, computes what emotions are triggered by those variables, their intensity, and the resulting mood, which in turn will affect the agent’s emotional state.

In FAtiMA the author is in control of the first process, the Appraisal Derivation, by defining Appraisal Rules. The second process, the Affect Derivation, is internally computed by the framework.

Appraisal Rules

Appraisal rules determine how an agent subjectively evaluates events. This is done by defining event templates that associate events with the OCC variables.

  • “Receiving candy is desirable”
    • “EventTemplate”: “Event(Action-End, *, Offer(Candy), SELF)”,
    • “Desirability” = 5
  • “Being insulted is undesirable and blameworthy”
    • “EventTemplate”: “Event(Action-End, *, Insult, SELF)”,
    • “Desirability” = -5,
    • “Praiseworthiness” = -3

Let’s take a look at how this works in the Authoring Tool. Charlie feels happier every time someone gives him money:

Appraisal rules may also have conditions. In the gif above I defined that whenever the Charlie agent perceives an event of the type GiveMoney([x]), if the variable [x] is bigger than 0, then he appraises the event with a desirability of 5.
An even better way to do it would be to set the desirability to the same value as [x]; this way the desirability of the event changes according to the amount of money received:

  • “Receiving money is desirable according to the amount received”
    • “EventTemplate”: “Event(Action-End, Player, GiveMoney([x]), SELF)”,
    • “Desirability” = [x]

Another appraisal rule example, this time with conditions:

  • “Receiving something I like is desirable”
    • “EventTemplate”: “Event(Action-End, Player, Offer([x]), SELF)”,
    • “Desirability” = 5,
    • “Conditions”: [“Likes(SELF,[x]) = True”]
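The two mechanisms can also be combined. As an illustrative sketch (this exact rule is not part of the scenario), a rule that scales desirability with the amount received, but only for givers the character likes, could look like this:

  • “Receiving money from someone I like is desirable according to the amount”
    • “EventTemplate”: “Event(Action-End, [y], GiveMoney([x]), SELF)”,
    • “Desirability” = [x],
    • “Conditions”: [“Likes(SELF,[y]) = True”]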

 

Note: “SELF” is a special reserved word that is substituted by the agent’s real name when being evaluated.

 

4 – Decision Making

In FAtiMA each agent/Role Play Character has a decision making process of their own. This process is based on rules with logical conditions.

It is important to note that while we define actions and their effects within the FAtiMA Toolkit, if the author is using it to extend a game or an application, there must be a bridge between the two environments. The agent might decide to move a table or to compliment another agent in FAtiMA, but outside the Toolkit it is not able to actually move a table in a game or walk towards another agent to talk to them. Actions, emotions and events in FAtiMA must later be implemented in the desired environment.

The Role Play Character asset has a “Decide” method that returns a list of all the actions the agent wants to perform; these actions are defined in the Emotional Decision Making (EDM) asset.

Here are a few examples of how actions are composed in the Toolkit’s framework:

“Insult agents that I do not like when I am in a negative mood”:

Action:Insult,

Target:[x],

Conditions:
  • IsAgent([x])=True,
  • Like(SELF,[x])<0,
  • Mood(SELF)<0

 

“Compliment agents that I like when I am in a positive mood”:

Action:Compliment,

Target:[x],

Conditions:
  • IsAgent([x])=True,
  • Like(SELF,[x])>0,
  • Mood(SELF)>0

 

“Imitate the facial expression of another agent I like”:

Action:Express([e])

Conditions:
  • IsAgent([x])=True,
  • Like(SELF,[x])>0,
  • Facial-Expression([x])=[e]

The agent can decide to do multiple actions simultaneously, which can be important for combining verbal and non-verbal actions.

In order to maintain consistency and readability, all conditions are expressed in exactly the same manner:

  • Likes(SELF,[x])=True
  • Mood(SELF)>5
  • Emotion Intensity(SELF,Distress)>2

 

In some of the conditions above we used “Meta-Belief” keywords; these are registered in the Knowledge Base as procedures, with their names becoming reserved keywords. They will be further discussed in Section 7.

Testing a list of conditions consists of creating an activation tree, where conditions are evaluated from top to bottom. Let’s take a look at an example:

“Insult agents that I do not like when I am in a negative mood”:

Action:Insult,

Target:[x], 

Conditions:
  • IsAgent([x])=True,
  • Like(SELF,[x])<0,
  • Mood(SELF)<0

On the left is the agent’s knowledge base, on the right is a figure of how the action’s conditions were evaluated:

 

  • IsAgent(John) = True
  • IsAgent(Mary) = True
  • IsAgent(Luke) = True
  • Like(SELF,John) = 3
  • Like(SELF, Mary) = 5
  • Like(SELF, Luke) = -2
  • Mood(SELF)  = -2

 

 

The agent tries to unify all conditions with the beliefs in its KB (Knowledge Base). As shown in the picture, there were 3 possible values attributed to the variable [x] after evaluating the first condition. The agent kept those restrictions in mind and moved on to the next condition, where only one of the previous restrictions remained. With the substitution [x]/Luke, the agent evaluated all the remaining conditions, which returned True. As a result the agent will want to perform the action Insult with the Target: Luke.

Note that in Θ4 and Θ5 the [x]/Luke unification remains, and is applied during the whole process. In the end the resulting substitutions are applied to the action, which in this case results in the Target: [x] of the action “transforming” into “Luke”.
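Written out step by step, the evaluation proceeds roughly like this (a sketch of the process shown in the figure, using its Θ numbering):

  • IsAgent([x]) = True → three substitutions survive: Θ1 = {[x]/John}, Θ2 = {[x]/Mary}, Θ3 = {[x]/Luke}
  • Like(SELF,[x]) < 0 → Θ1 fails (Like = 3), Θ2 fails (Like = 5), leaving Θ4 = {[x]/Luke} (Like = -2)
  • Mood(SELF) < 0 → True (Mood = -2), leaving Θ5 = {[x]/Luke} as the final substitution, so the action becomes Insult with Target: Luke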

Let’s go back to our scenario and add some actions for the agent to do:

I wrote 2 different action rules. The first deals with drinking coffee, and it only has 2 different conditions:

Mood(SELF) < -1
isAnxious(SELF) = True

If the agent is nervous and in a bad mood he wants to drink coffee.

The second action rule, Insult, is shown in the figure above and has 3 conditions. If Charlie does not like [x], if he is angry, and if that [x] is an agent, then he will want to insult that agent [x].

Please note that the “Insult” action has a higher priority than the “DrinkCoffee” action; this means that if the agent decides to do both, he will prioritise the action with the higher priority value. Written out, the two rules could look like the sketch below.
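This is a sketch in the notation from earlier; the priority values are illustrative, and the “angry” condition is approximated here with the mood check from the earlier Insult example:

“Drink coffee when nervous and in a bad mood”:

Action:DrinkCoffee,

Conditions:
  • Mood(SELF) < -1,
  • isAnxious(SELF) = True

Priority: 3

“Insult agents that I do not like when I am in a negative mood”:

Action:Insult,

Target:[x],

Conditions:
  • IsAgent([x]) = True,
  • Like(SELF,[x]) < 0,
  • Mood(SELF) < 0

Priority: 5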

We define what exactly the Insult action is, and what the agent will say, using the Dialogue Manager.

 

5 – Dialogue Manager

Dialogue trees are still the most familiar solution for game developers. A dialogue tree is a tree of possible choices for the player to make in conversation with an NPC. This is an extremely popular method for NPC-player interaction, allowing the player to make choices (or at least the illusion of choices) in the game while progressing the conversation in a controlled manner.

However, they are finite state machines, which have a lot of scalability problems. While they might be sustainable for short conversations, they quickly become too complex and difficult to author for long-term interactions.

Additionally, they provide no room for agency on the part of the “computer controlled characters”.

For example, the image on the left shows the dialogue tree for a single social interaction used in the Skyrim mod “Social NPCs”. I used this particular example because I’m the author of that mod and consequently I can use that image for all its purposes. If you want to learn more about that work you can read the paper: CiF-CK: An architecture for Social NPCs in commercial games.

 

FAtiMA’s approach to dialogue is what we consider to be a hybrid solution:

Each utterance is coded as a dialogue action with the following properties:

  • <CurrentState, NextState, Meaning, Style, UtteranceText>

The Current State defines the state of the conversation, and the Next State defines where the conversation will move to next. The Meaning and Style fields are used for auxiliary tags; typically we use them to define whether a dialogue has a particular personality trait or discusses a particular context, but they can be used as flags for anything the author wants.

Examples:

  • <Start, Opening1, Greeting, Polite, “Good morning. How can I help you?”>
  • <Opening1, FirstResponse, StateIssue, PositiveMood, “Hello, I’m having trouble with my laptop”>
  • <Opening1, FirstResponse, StateIssue, NegativeMood, “The battery has failed again”>

 

The dialogue manager does not make decisions for the agents (nor the player). Instead, it tells the agent what options are available for it to choose from.

This is achieved with a Meta-Belief (which we will talk about next) that is registered in the agent’s KB:

  • ValidDialogue([currentState], [nextState], [meaning], [style])

Before we move on to the Meta-Beliefs let’s apply what we’ve learned to our scenario. Let’s add some dialogues to the Integrated Authoring Tool’s Dialogue Editor.

 

 

I’ve added 5 different dialogue actions. Each has a current state and a next state. This functions as if it were a finite state machine. Let’s say that in one of the states I wanted to add another option; it would look like this:


 

When the agent or the player is in the S2 state it can choose between answering “I’m feeling great!” and “Not that great actually”.
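For reference, the dialogue actions could be written out like this (only the utterances below are taken from this guide; the state names and tags are illustrative):

  • <Start, S2, Greeting, *, “Hi how are you?”>
  • <S2, S3, Answer, Happy, “I’m feeling great!”>
  • <S2, S3, Answer, Sad, “Not that great actually”>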

 

Now we need to link the existing dialogue to the agent’s Emotional Decision Making asset.

Previously we defined the “Insult” and “DrinkCoffee” actions in the agent’s Emotional Decision Making asset; let’s add another with the following template:

Speak([currentState], [nextState], [meaning], [style])

In order to better show how this mechanism works, let’s add a rule that allows the agent to use any dialogue. In FAtiMA the “*” symbol is a way of saying “all”; let’s try adding the following rule:

 

It basically means that the agent will decide to speak with any current state, any next state, any meaning and any style.
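In the notation from Section 4, the rule amounts to something like this (a sketch; the Player as target is an assumption here, the figure shows the actual fields):

Action:Speak(*, *, *, *),

Target:Player,

Conditions: (none)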

If you want to see this working you can use the Simulator within the Authoring Tool; each time we click the “Start” button, the Charlie agent will say something different.

Let’s try to make him start the conversation according to what we defined; we need to tell him to use a dialogue with the correct Current State.

 

As we mentioned in Section 2, we store the state of the conversation in the beliefs of the agent. For instance, the state of the player’s conversation with the agent Charlie is “Start”. Therefore, in the beliefs of the player there should be the following belief:

DialogueState(Charlie) = Start

As a result we can attribute a value to the [currentState] variable in the EDM:
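A sketch of such a rule for the Charlie agent (the variable binding through the condition is the important part; the figure shows the actual editor fields):

Action:Speak([cs], *, *, *),

Target:Player,

Conditions:
  • DialogueState(Player) = [cs]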

Note: The variable names can be anything you desire as long as they are within “[ ]”. Additionally, they must be in the correct order in the action template; that is, the Current State variable must be the first argument of the “Speak” action template.

 

It is now possible to see the difference in the simulator if the agent has the beliefs we described earlier, specifically:

 DialogueState(Player) = Start

By hitting the “Start” button the Charlie agent is now only deciding to execute the action “Speak(Start, *, *, *)”, which in our Dialogue Manager corresponds to only one possible utterance: “Hi how are you?”.

Let’s do the same for the Player RPC character. However, this time, instead of naming the target of the action, let’s use a variable. Because we already use the name of the agent in the beliefs (“DialogueState(Charlie) = Start”), FAtiMA is able to “retrieve” the value. This is the result:

 

 

Here we defined the Target as variable [x] and then in the conditions we specified that the [x] value is in the DialogueState belief. In the simulator we can take a look at the results. In this case both the Player and the Charlie agent only have one possible action.
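In the same notation, the Player’s rule is along these lines (a sketch of what the figure shows):

Action:Speak([cs], *, *, *),

Target:[x],

Conditions:
  • DialogueState([x]) = [cs]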

 

 

In order to move to the next state, there needs to be a way to define what changes after each event. This is done through the World Model.

 

6 – World Model

To help the authoring process, the author is able to define in the World Model what the consequences of an agent performing a specific action are.

It is important to note that the effects of actions might be programmed directly in the game itself and communicated to the agents via events. When using the World Model Editor, the author asks the model what the consequences of given events should be; these can then be applied to the RPCs or to the environment around them.

The main benefit of using the World Model Editor is that these effects are configurable without having to recompile the game. Additionally, the action effects defined in the World Model Editor are visible in the Simulator. This allows the author to quickly run and test the defined scenario.

Let’s use it in our example scenario to better understand it. We want to define a rule where each time an agent is the target of a Speak action, his dialogue state is updated:

After adding the rule, it needs to have an effect:

In the figure above I added a World Model rule that says that every time an agent is the target of a Speak(*,[ns],*,*) event caused by agent [x], his dialogue state with agent [x] is updated to [ns]. Use the simulator to check the results:
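In WFN terms, the rule pairs an event template with a belief update, roughly like this (a sketch; the editor splits it into separate fields):

Event template:
  • Event(Action-End, [x], Speak(*, [ns], *, *), [t])

Effect (a Property-Change applied to the target [t]):
  • DialogueState([x]) = [ns]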

 

 

Now we can have a proper conversation. As you can see, the dialogue will go through its different stages as you select the dialogue for the Player RPC. Additionally, you can see the value of the “DialogueState” belief in the belief inspector section at any time you wish.

Let’s add more choices to the dialogue. Additionally, we will use the meaning and style tags to create a more dynamic and interesting conversation.

 

I added 6 different dialogue actions. My objective is to make the Charlie agent respond according to its emotional state. That state will be affected by the Player’s decisions. First of all, let’s go to Charlie’s Emotional Appraisal asset and add some new rules:

 

 

As you can see in the figure above, I added three different appraisal rules, one for each of the dialogue tags defined for the meaning field. The Sad meaning event will be appraised with a desirability of -5, the Neutral one with a desirability of 2, and the Happy one with a desirability of 7.
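In the notation from Section 3, the three rules are roughly the following (a sketch; the figure shows the exact fields):

  • “EventTemplate”: “Event(Action-End, *, Speak(*, *, Sad, *), SELF)”, “Desirability” = -5
  • “EventTemplate”: “Event(Action-End, *, Speak(*, *, Neutral, *), SELF)”, “Desirability” = 2
  • “EventTemplate”: “Event(Action-End, *, Speak(*, *, Happy, *), SELF)”, “Desirability” = 7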

Once again, let’s turn to the Simulator to see the results. Now each time we try a different sentence we can see its effect on the mood of the Charlie agent:

“M:” is the mood variable value and “S. EM” represents the Strongest Emotion the agent is feeling; at the time of the screenshot the agent was feeling “Distress”.

Let’s use these variables to influence the agent’s decision making process by adding rules to the EDM component:

 

Here is the FAtiMA definition of each action rule defined above:

  • "Action": "Speak([cs], *, *, Depressed)",
  • "Target": "[x]",
  • "Layer": "-",
  • "Conditions":
    • "DialogueState([x]) = [cs]",
    • "Mood(SELF) < -1",
    • "S2 = [cs]"]
  • Priority": 6

 

 

  • Action": "Speak([cs], *, *, Neutral)",
  • "Target": "[x]",
  • "Layer": "-",
    
    "Conditions":
    • DialogueState([x]) = [cs],
    • Mood(SELF) > -1,
    • Mood(SELF) < 1
    • S2 = [cs]
  • "Priority": 6

 


  • Action": "Speak([cs], *, *, Positive)",
  • "Target": "[x]",
  • "Layer": "-",
  • "Conditions":
    • DialogueState([x]) = [cs],
    • StrongestEmotion(SELF) = Joy,
    • Mood(SELF) > 1,
    • S2 = [cs]
  • "Priority": 6

 

  • "Action": "Speak([cs], *, *, *)",
  • "Target": "[x]",
  • "Layer": "-",
  • "Conditions":
    • DialogueState([x]) = [cs]
  • "Priority": 4

 

The last action rule shown above is the same one we already had for any dialogue, but with a lower priority than the others. When the agent reaches the S2 dialogue state he will also want to perform this action; however, the simulator will choose one of the others because of their higher priority value.

Note: The “Decide” method, mentioned in Section 4, returns a list of all the actions the agent wants to perform. The order of the actions in the list is defined by their priority value. If the agent decides to perform different actions with the same priority value, then those actions will be returned in a random order.

In all of these conditions we use at least one “Meta-Belief”, such as “Mood” and “StrongestEmotion”.

 

 

7 – Meta-Beliefs

Question: What if we want more complex action conditions instead of only testing KB beliefs? Let’s say we want an action that is only performed after a specific event has happened, or a condition that needs an average value, or one that checks the strongest emotion the agent is feeling?

Answer: Meta-Beliefs: knowledge that is produced by affective or reasoning processes.

The difference between normal beliefs and meta-beliefs is that the latter are registered in the Knowledge Base as procedures, with their names becoming reserved keywords. As such, the unification algorithm is able to identify them when parsing the composed names within a logical condition. Whenever it encounters a meta-belief it will dynamically execute its associated code to retrieve its value. The code can be either very simple or quite complex, depending on the nature of the meta-belief itself.

For example, one of the meta-beliefs that the Role-Play Character adds is Mood([a]) (which we just used), which simply retrieves the mood value of agent [a].

There are several different meta-belief functions already implemented. Previously we referred to them as Dynamic Properties. In the released version of the Toolkit you can check which meta-beliefs are available by clicking “Help->Show Available Dynamic Properties”, which will open a new window:

 

Let’s take a look at an example. In the Tutorials folder you can find some example scenarios where each asset is used. In one of them there is an Emotional Decision Making example with the following action definition:

  • "Action": "Speak([cs], [ns], [m], [s])"
  • "Target": "[x]"
  • "Layer": "-",
  • "Priority": 1
  • "Conditions":
    • "Has(Floor) = SELF"
    • "IsAgent([x]) = True",
    • "[x] != SELF",
    • "DialogueState([x]) = [cs]",
    • "ValidDialogue([cs], [ns], [m], [s]) = True",
    • "EventId(Action-End, *, Speak([cs], [ns], [m], *)

Let’s take a look at each condition and deconstruct what they mean:

  • “Has(Floor) = SELF”: In this example we use this belief value to indicate who is speaking at a particular time; if the agent believes he himself has the floor, he may talk.
  • “IsAgent([x]) = True”: The target of the Speak action must be an agent…
  • “[x] != SELF”: …different from himself.
  • “DialogueState([x]) = [cs]”: The dialogue state belief describes what state the conversation is in, as discussed in Section 5 when describing the speaking action.
  • “ValidDialogue([cs], [ns], [m], [s]) = True”: This is a meta-belief; it is a function that determines if there is a valid dialogue with the variables given to the function. If the variables have no value, the function will provide a possible value for them.
  • “EventId(Action-End, *, Speak([cs], [ns], [m], *), [x]) = -1”: This is another meta-belief; the EventId function returns the ID of a past event, and if there is no past event with the values provided it returns -1. Therefore, in this case the condition is verifying that there hasn’t been an event of the type “Action-End, *, Speak([cs], [ns], [m], *), [x]” in the past events of this agent.

     

     

The demonstration on the “Demo” page is a great example of the usage of meta-beliefs and an excellent learning case for authors. Take a look at the FAtiMA Toolkit scenario-specific files used in the example.

Feel free to download them and use the Toolkit to “mess around” with the scenario any way you like.

Additionally, we will soon be releasing a guide on how to integrate FAtiMA with the Unity game engine.

     

     

     

Author’s Note: This guide is currently being worked on; as a result, some of the content here is subject to change. Additionally, I might add more content to already written sections or change their order.
Thank you for reading. Once again, if you have any doubts or questions, feel free to contact us at:
rage@gaips.inesc-id.pt

