Narratives are, in essence, stories, and are analogous to the concept of episodic memory in human cognition. They can be thought of as temporally-ordered sequences of events, where each event is either a perception of something that has occurred in the world or a behavior that may be enacted in that world. People use stories to make sense of the unfolding events and situations around them, and to develop strategies for inserting themselves into those unfolding situations: “Someone is approaching me. Do they intend to rob me, or instead to ask for assistance? More importantly, what are the possible future events that may follow, when will I be able to disambiguate between these two interpretations, and what are my own options for behavior in each case?” In this manner, narratives serve both as a model for one’s own behavior and as a model for interpreting the behavior of others.
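To make this representation concrete, the following is a minimal sketch in Python of a narrative as a temporally-ordered sequence of events of two kinds, perceptions and behaviors. The class names, fields, and example stories are illustrative assumptions, not CHI Systems’ actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class EventKind(Enum):
    PERCEPTION = auto()  # something observed to have occurred in the world
    BEHAVIOR = auto()    # an action the agent may enact in that world


@dataclass(order=True)
class Event:
    time: float                              # ordering key within the story
    kind: EventKind = field(compare=False)
    description: str = field(compare=False)


@dataclass
class Narrative:
    """A 'story': a temporally-ordered sequence of events."""
    name: str
    events: List[Event]

    def ordered(self) -> List[Event]:
        return sorted(self.events)


# Two candidate stories that share an opening perception and diverge later,
# mirroring the ambiguous encounter described above.
robbery = Narrative("robbery", [
    Event(0.0, EventKind.PERCEPTION, "someone is approaching me"),
    Event(1.0, EventKind.PERCEPTION, "they demand my wallet"),
    Event(2.0, EventKind.BEHAVIOR, "flee or comply"),
])
assistance = Narrative("request for assistance", [
    Event(0.0, EventKind.PERCEPTION, "someone is approaching me"),
    Event(1.0, EventKind.PERCEPTION, "they ask for directions"),
    Event(2.0, EventKind.BEHAVIOR, "offer help"),
])
```

Because both stories begin with the same perception, an agent cannot disambiguate them until a later event arrives; its behavioral options differ depending on which story turns out to apply.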
Leveraging this understanding of human cognition, CHI Systems has developed narrative-based reasoning technology for synthetic cognition. This technology has been successfully applied to produce agents for decision support as well as teammate and opponent agents for training. On the concept of narratives we layer the concept of “motives” (e.g., self-preservation, risk tolerance). This combination produces a great amount of behavioral variability from even very simple stories. For example, an agent’s motives, combined with the motivational significance of story elements, lead the agent to preferentially attend, at any given moment, to one particular story over another. Taken together, the narrative and motive spaces yield computational intelligence that produces highly variable, realistic, and context-sensitive decision making in response to emerging threats and opportunities.
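The sketch below illustrates one way motives might bias attention among competing stories: each story element carries a motivational significance per motive, and the agent attends to the story whose elements best engage its motive profile. The motive names, weights, and linear scoring rule are illustrative assumptions, not CHI Systems’ actual mechanism.

```python
from typing import Dict

# The agent's motive profile: how strongly each motive drives this agent.
motives: Dict[str, float] = {
    "self_preservation": 0.9,
    "risk_tolerance": 0.2,
}

# Motivational significance of each candidate story, per motive.
story_significance: Dict[str, Dict[str, float]] = {
    "robbery": {"self_preservation": 0.8, "risk_tolerance": 0.1},
    "request for assistance": {"self_preservation": 0.1, "risk_tolerance": 0.6},
}


def attention_score(story: str) -> float:
    """Weight a story's motivational significance by the agent's motives."""
    return sum(
        weight * story_significance[story].get(motive, 0.0)
        for motive, weight in motives.items()
    )


# The agent preferentially attends to the story that best engages its motives.
attended = max(story_significance, key=attention_score)
print(attended)  # -> "robbery" for this cautious motive profile
```

Under this toy scoring rule, changing only the motive weights changes which story the agent attends to, so the same small set of stories yields different interpretations and behaviors across agents.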
By using a representational approach that is rooted in and inspired by human cognition, we move closer to achieving three long-term, overarching goals for human-machine collaboration, namely the ability (1) for humans to understand the recommendations or behavior of agents in a transparent and interpretable form, (2) for humans to interact intuitively with such agents, thus enabling heterogeneous teams of humans and machines, and finally (3) for subject matter experts to easily understand, modify, and elaborate the underlying models to address new contexts, without requiring the assistance of engineering resources.