Dialogue trees are still the most familiar solution for game developers. A dialogue tree is a tree of possible choices the player can make in conversation with an NPC. This is an extremely popular method for NPC-player interaction, allowing the player to make choices (or at least have the illusion of choice) while progressing the conversation in a controlled manner.
However, dialogue trees are finite state machines, which brings scalability problems. While they might be manageable for short conversations, they quickly become too complex and difficult to author for long-term interactions.
Additionally, they leave no room for agency on the part of the computer-controlled characters.
For example, the image on the left shows the dialogue tree for a single social interaction used in the Skyrim mod “Social NPCs”. I chose this particular example because I am the author of that mod and can therefore use the image freely. If you want to learn more about that work, you can read the paper: CiF-CK: An architecture for Social NPCs in commercial games.
FAtiMA’s approach to dialogue is what we consider to be a hybrid solution:
Each utterance is coded as a dialogue action with the following properties:
<CurrentState, NextState, Meaning, Style, UtteranceText>
The Current State field defines the state of the conversation, and the Next State field defines where the conversation will move next. The Meaning and Style fields hold auxiliary tags: we typically use them to mark that a dialogue expresses a particular personality trait or discusses a particular context, but they can serve as flags for anything the author wants. For example:
<Start, Opening1, Greeting, Polite, “Good morning. How can I help you?”>
<Opening1, FirstResponse, StateIssue, PositiveMood, “Hello, I’m having trouble with my laptop”>
<Opening1, FirstResponse, StateIssue, NegativeMood, “The battery has failed again”>
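The dialogue-action tuples above can be sketched in code. This is a minimal illustrative model, not FAtiMA's actual API; the class and function names are my own:

```python
from dataclasses import dataclass

# Sketch of the <CurrentState, NextState, Meaning, Style, UtteranceText> tuple.
@dataclass(frozen=True)
class DialogueAction:
    current_state: str
    next_state: str
    meaning: str
    style: str
    utterance: str

ACTIONS = [
    DialogueAction("Start", "Opening1", "Greeting", "Polite",
                   "Good morning. How can I help you?"),
    DialogueAction("Opening1", "FirstResponse", "StateIssue", "PositiveMood",
                   "Hello, I'm having trouble with my laptop"),
    DialogueAction("Opening1", "FirstResponse", "StateIssue", "NegativeMood",
                   "The battery has failed again"),
]

def options_for(state):
    """All utterances available from a given conversation state."""
    return [a for a in ACTIONS if a.current_state == state]
```

From the "Opening1" state, for instance, `options_for("Opening1")` returns the two StateIssue responses, which is exactly the branching the tuples encode.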
The dialogue manager does not make decisions for the agents (nor for the player). Instead, it tells the agent which options are available to choose from.
This is achieved with a Meta-Belief (which we will talk about next) that is registered in the agent’s KB:
ValidDialogue ([currentState] , [nextState] , [meaning] , [style])
Before we move on to the Meta-Beliefs let’s apply what we’ve learned to our scenario. Let’s add some dialogues to the Integrated Authoring Tool’s Dialogue Editor.
I’ve added 5 different dialogue actions. Each has a current state and a next state, so the set functions like a finite state machine. If I wanted to add another option to one of the states, it would look like this:
When the agent or the player is in the S2 state, they can choose between answering “I’m feeling great!” and “Not that great actually”.
Now we need to link the existing dialogue to the agent’s Emotional Decision Making asset.
Previously we defined the “Insult” and “DrinkCoffee” actions in the agent’s Emotional Decision Making asset; let’s add another with the following template:
Speak([currentState] , [nextState] , [meaning] , [style])
To better show how this mechanism works, let’s add a rule that allows the agent to use any dialogue. In FAtiMA the “*” symbol means “any”, so let’s try adding the following rule:
This rule means that the agent will decide to speak with any current state, any next state, any meaning and any style.
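The wildcard behaviour can be sketched as a simple unification check. This is an illustrative sketch of the matching idea, not FAtiMA's implementation; the function name and tuple layout are assumptions:

```python
# "*" unifies with any value, so the pattern ("*", "*", "*", "*") --
# i.e. Speak(*, *, *, *) -- matches every dialogue action.
def matches(pattern, action):
    """True if every field of the pattern is "*" or equals the action's field."""
    return all(p == "*" or p == a for p, a in zip(pattern, action))
```

So `matches(("*", "*", "*", "*"), ("Start", "Opening1", "Greeting", "Polite"))` holds for any action, while a pattern with a fixed current state such as `("Start", "*", "*", "*")` only matches actions whose conversation state is "Start".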
If you want to see this working, you can use the Simulator within the Authoring Tool: each time we click the “Start” button, the Charlie agent will say something different.
Let’s try to make him start the conversation according to what we defined. We need to tell him to use a dialogue with the correct Current State.
As mentioned in Section 2, we store the state of the conversation in the beliefs of the agent. For instance, the state of the player’s conversation with agent Charlie is “Start”. Therefore, the player’s beliefs should contain the following:
DialogueState(Charlie) = Start
As a result, we can assign a value to the [currentState] variable in the EDM:
Note: The variable names can be anything you desire, as long as they are within “[ ]”. Additionally, they must be in the correct order in the action template: the Current State variable must be the first argument of the “Speak” action template.
We can now see the difference in the simulator if the agent has the beliefs we described earlier, specifically:
DialogueState(Player) = Start
When we hit the “Start” button, the Charlie agent now only decides to execute the action “Speak(Start, *, *, *)”, which in our Dialogue Manager corresponds to only one possible utterance: “Hi how are you?”.
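The filtering just described can be sketched as follows. The beliefs dictionary and helper name are illustrative, and the action list is a small subset of the scenario's five actions:

```python
# (current_state, next_state, utterance) -- a trimmed-down action list.
ACTIONS = [
    ("Start", "S2", "Hi how are you?"),
    ("S2", "S3", "I'm feeling great!"),
    ("S2", "S3", "Not that great actually"),
]

def speak_options(beliefs, target):
    """Only the actions whose Current State matches DialogueState(target)."""
    state = beliefs[f"DialogueState({target})"]
    return [utterance for cur, _nxt, utterance in ACTIONS if cur == state]
```

With `{"DialogueState(Player)": "Start"}` in Charlie's beliefs, `speak_options` returns only “Hi how are you?”, mirroring what the simulator shows.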
Let’s do the same for the Player RPC character. This time, however, instead of naming the target of the action, let’s use a variable. Because we already use the name of the agent in the belief “DialogueState(Charlie) = Start”, FAtiMA is able to “retrieve” the value. This is the result:
Here we defined the Target as the variable [x], and in the conditions we specified that [x] appears in the DialogueState belief. In the simulator we can take a look at the results: in this case, both the Player and the Charlie agent have only one possible action.
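The binding of the [x] variable can be sketched as a lookup over the knowledge base. The string parsing below is purely illustrative of the unification step; FAtiMA does this internally:

```python
# For every belief of the form DialogueState(x), bind [x] to the name inside
# the parentheses and return it together with the conversation state.
def bind_targets(beliefs):
    """Yield (x, state) pairs for every DialogueState(x) belief in the KB."""
    prefix, suffix = "DialogueState(", ")"
    return [(k[len(prefix):-1], v) for k, v in beliefs.items()
            if k.startswith(prefix) and k.endswith(suffix)]
```

Given the Player's belief “DialogueState(Charlie) = Start”, the only binding is [x] = Charlie, so the Speak action can only target Charlie, exactly one possible action.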
In FAtiMA 4.0 it is possible to use beliefs in utterances with the “[[ ]]” syntax.
Utterance: Well, I haven’t been able to sleep well the past couple of months [[DialogueState(Player)]]
Result: “Well, I haven’t been able to sleep well the past couple of months Concern”
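The substitution shown above can be sketched with a small template renderer. This is an assumption about the mechanism's effect, not FAtiMA's code; the `render` helper and beliefs dict are mine:

```python
import re

# Replace every [[BeliefName]] in an utterance with the speaker's value
# for that belief, mimicking the [[ ]] syntax described above.
def render(utterance, beliefs):
    return re.sub(r"\[\[(.+?)\]\]", lambda m: str(beliefs[m.group(1)]), utterance)

beliefs = {"DialogueState(Player)": "Concern"}
text = render("Well, I haven't been able to sleep well the past couple of "
              "months [[DialogueState(Player)]]", beliefs)
```

Here `text` ends with “Concern”, matching the result in the example.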
This is a silly example, but it shows that you can access the speaker’s KB to retrieve their beliefs and use them in the utterances.
To move to the next state, there needs to be a way to define what changes after each event; this is done through the World Model.
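The transition the World Model needs to perform can be sketched as below. The function and argument names are illustrative assumptions, not the World Model's actual syntax; the idea is simply that a Speak event advances the DialogueState beliefs of both participants:

```python
# After speaker performs Speak(current, next, ...) towards listener, each
# participant updates the state of their conversation with the other.
def on_speak(speaker_beliefs, listener_beliefs, speaker, listener, next_state):
    speaker_beliefs[f"DialogueState({listener})"] = next_state
    listener_beliefs[f"DialogueState({speaker})"] = next_state
```

For example, once the Player greets Charlie with the Start→S2 action, the Player's `DialogueState(Charlie)` and Charlie's `DialogueState(Player)` both become “S2”, so the S2 responses become the valid options.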