Social Importance Asset

Social Importance is a concept based on a sociological theory of human motivation that conceptualises human action in terms of two fundamental relational dimensions: status and power.

According to this theory, we are motivated to confer status on those who deserve it and to refrain from claiming more status than we perceive ourselves to have in the eyes of others. The amount of status claimed and conferred by each action depends a great deal on cultural conventions but, generally, the bigger the request, the more status it claims.


In short, we often act voluntarily in the interest of others, but some people matter more than others. The amount of Social Importance attributed to person B by person A represents the extent to which person A will voluntarily respect and comply with the wishes, needs, and interests of person B.


This component adds the meta-belief:

 SI([target])

which calculates the amount of Social Importance (SI) the [target] agent has from the agent’s perspective. This is a numerical value, ranging from 1 to 100, that signifies the extent to which the agent is willing to act in the interest of the [target] agent.
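For example, assuming a character named John (a name used here purely for illustration), an agent might hold:

 SI(John) = 40

meaning the agent is moderately willing to act in John’s interest; the exact value is computed from the attribution rules described below.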


For a FAtiMA agent, the purpose of SI is to implement this notion of status so the agent can better navigate and understand different relational contexts. For instance, a personal question is inappropriate coming from a stranger but not from a friend. This is because, all else being equal, the SI of a friend should be higher than the SI of a stranger.

These conventions are implemented in the component as attribution rules, which are defined as the following tuple:

<target, conditions, siValue>

These rules work in a similar manner to the appraisal and decision rules described previously. Essentially, the unification algorithm processes each rule individually and tries to find valid substitutions for the rule’s conditions. If it succeeds, the siValue defined in the rule is added to the target’s total SI.
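To make this concrete, here is an instance of the tuple above, using the CloseFriends belief from the examples further below (John is a hypothetical character name):

 <[t], CloseFriends([t]) = True, 20>

If the agent holds the belief CloseFriends(John) = True, unification finds the valid substitution [t]/John, the rule’s conditions hold, and 20 is added to SI(John).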

Note that these rules can refer to beliefs about properties of other agents, such as whether or not they are a family member, or they can refer to their past actions in the environment, such as the number of times they were rude towards the agent.
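For example, assuming the scenario keeps a belief such as TimesRude([t]) up to date (a hypothetical belief name, which could be maintained through the World Model), a rule could reward characters that have never been rude towards the agent:

 <[t], TimesRude([t]) = 0, 10>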

Some practical examples:

In order to add a Social Importance Rule you need 3 things:

  • Rule Description: A description you want to give to a particular rule, for instance, “Close Friends Rule”
  • Target Variable: The name of the variable the SI value is attributed towards, as in SI([t]). If we don’t specify the target, the rule is applied to every character
  • Value: The literal value or variable to add onto the overall SI([t])

If you have multiple rules whose conditions are true towards the same target, their values add up.
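For example, if one matching rule adds 15 and another adds 10 towards the same target [t], then SI([t]) increases by 25 in total (the values here are arbitrary).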

An easy way to understand Social Importance is to start by using beliefs in the conditions. For instance, the following rule makes characters attribute a higher Social Importance to whoever has the Floor:

Rule Description: Simple Rule
Target Variable: [t]
Value: 15
Conditions: Has(Floor) = [t]

If you test the scenario in the Simulator, you can use the Belief Inspector to check whether the SI values are correct. With the rule we’ve written above, every agent attributes a higher Social Importance to the one that has the floor.

Now let’s make some rules that actually make sense:

Rule Description: Close Friends
Target Variable: [t]
Value: 20
Conditions: CloseFriends([t]) = True

Rule Description: Good Mood
Target Variable: [t]
Value: 10
Conditions: Mood(SELF) > 2

Essentially, these rules tell agents: “If I’m in a good mood, I attribute more social importance to others” and, similarly, “if we are close friends, I attribute more social importance to you”.
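As a quick check (John is a hypothetical character name): if the agent believes CloseFriends(John) = True and Mood(SELF) = 3, both rules fire and their values stack:

 SI(John) = 20 (Close Friends) + 10 (Good Mood) = 30

A character that is not a close friend would only receive the Good Mood bonus of 10.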

Let’s now make some decision-making rules that use these SI values. For example, I created a new Dialogue Style called “Informal”, which the agent only uses if they attribute a high SI value to the target they are talking to:

Action: Speak([cs], [ns], [mean], Informal)
Target: [t]
Priority: 5
Layer: -
Conditions:
     Has(Floor) = SELF
     DialogueState([t]) = [cs]
     SI([t]) > 25
     ValidDialogue([cs], [ns], [mean], Informal) = True

The agent will only use “Informal” dialogues when talking to someone they regard as Socially Important.
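As a complement, you could add a fallback rule for everyone else (a sketch: the “Formal” dialogue style, like “Informal”, would have to be authored in the scenario, and the priority value is an assumption):

Action: Speak([cs], [ns], [mean], Formal)
Target: [t]
Priority: 4
Layer: -
Conditions:
     Has(Floor) = SELF
     DialogueState([t]) = [cs]
     ValidDialogue([cs], [ns], [mean], Formal) = True

Since the Informal rule has the higher priority, it takes precedence whenever SI([t]) > 25 and both rules are valid; otherwise the agent falls back to the Formal style.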

This example also shows one of the many advantages of using Social Importance. Not only does it make sense theoretically, as it mimics real-life relations, but it also makes Action/Appraisal conditions easier to manage.

Essentially, it is easier to design an action space based on the Social Importance attribution.

Instead of having 4 or 5 belief checks, we just need one. We don’t need to check if they are close friends, are in a good mood, etc. We can just write SI([t]) > 25 and handle those checks in the SI Asset.

Additionally, consider an action that can be triggered in multiple ways: for instance, I would talk informally if I was close friends with that person or if I was in a good mood. Without SI, I would have to write 2 different rules with the same action but different conditions. Using Social Importance, I can just require that the attributed SI is > 10, in this case.
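In other words, both attribution rules above already feed into SI([t]), so the two alternative decision rules collapse into a single condition:

 SI([t]) > 10

(One caveat with the exact values used above: Good Mood alone contributes exactly 10, which is not strictly greater than 10, so you would want to raise that rule’s value or lower the threshold slightly.)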

You can easily change agents’ beliefs using the World Model, which in turn will automatically be taken into account by the Social Importance module.

Next, let’s take a look at another reasoning component, based on another social architecture developed in academia: Comme il Faut.
