Artificial Emotional Intelligence


An automated copilot recognizes that its human driver is too distressed to operate a motor vehicle without posing a danger to themselves and others. A consumer relations chatbot changes tactics to better manage an irritated customer. A healthcare assistant bot helps a doctor diagnose depression in a patient through analysis of microexpressions and body language. All of these are potential applications of artificial intelligence capable of recognizing emotional states in the humans it interacts with. But how much does the machine really know in these hypothetical applications? It recognizes emotional states and links those states to particular actions, but can the machine be said to actually understand what an emotion is? Is it even possible for a machine to understand an emotion? And if not, does understanding matter at all, or are recognition and appropriate action enough?

Historically, the word “emotion” has often been used to refer specifically to the subjective experience (or “feeling”) associated with an emotional state, but in recent years the scientific community has come to adopt a broader perspective. Component process models break the concept down into multiple constituents[1], a framework in which the subjective component takes on a less central role in the composition of an emotion. This is an important point if an artificial intelligence is to have any hope of understanding an emotion. The existence (or lack thereof) of any subjective state in an artificial entity is ultimately unverifiable, so any definition of emotion that takes the subjective component to be fundamental poses an obvious problem for artificial intelligence. In a framework where feelings take a backseat to other components of the emotional complex, the door is left open for something akin to understanding to be achieved in artificial entities.

If one accepts the proposition that emotions can be broken down into a number of different features, the obvious next questions with regard to our original inquiry are: What are these features? What function do they serve? And importantly, which ones are essential for a machine to have a working knowledge of emotions?

First off, there is the expressive component of an emotion: facial expression, body language, and vocal expression (which may be expressed in written language as well—very relevant if our machine’s primary interface with the human is through text). This feature serves the function of communicating the emotional state and the behavioral intentions it may entail. Expressive behavior constitutes the means through which an emotion is best recognized, so any machine with a modicum of emotional intelligence must be able to recognize it and match it to the appropriate emotional state.
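
To make the recognition step concrete, here is a minimal sketch of matching observed expressive cues to candidate emotional states. It assumes a hand-built table of cue weights; every cue name and weight below is an illustrative assumption, not a validated model:

```python
from collections import defaultdict

# Illustrative cue-to-emotion weights; a real system would learn these.
CUE_WEIGHTS = {
    "furrowed_brow":   {"anger": 0.6, "concentration": 0.4},
    "raised_voice":    {"anger": 0.7, "excitement": 0.3},
    "slumped_posture": {"sadness": 0.6, "fatigue": 0.4},
    "smile":           {"joy": 0.8, "politeness": 0.2},
}

def infer_emotion(observed_cues):
    """Score candidate emotions from a list of observed expressive cues."""
    scores = defaultdict(float)
    for cue in observed_cues:
        for emotion, weight in CUE_WEIGHTS.get(cue, {}).items():
            scores[emotion] += weight
    # Return the highest-scoring candidate, or None if nothing matched.
    return max(scores, key=scores.get) if scores else None

print(infer_emotion(["furrowed_brow", "raised_voice"]))  # -> anger
```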

Next we have the neurophysiological component of an emotion. This is the background activity occurring throughout the nervous and neuroendocrine systems that produces, regulates, and maintains the emotional state. It encompasses the changes in activity of the various hormones and neurotransmitters that ultimately underlie an emotional state. Unless our machine has some very particular sensors, this kind of activity will remain entirely undetectable. That shouldn’t pose a problem, though, as humans do not generally have access to this information either and they manage just fine without it.

I’ve already mentioned the subjective feeling component, which may serve the purpose of monitoring one’s internal state. As humans manage to infer the emotional states of others without access to their subjective experience, a machine should be able to get by without this. The exception would be if having experienced a given emotion serves an essential function in interpreting that state in others. Certain aspects of the subjective experience can be explained to a machine (for example, the persistent rumination characteristic of anger and despair), but the so-called qualia of the experience may be beyond its reach. However, even humans have no assurance that their emotional qualia actually match the corresponding qualia in others, so a machine lacking this experience may in fact be no worse off than humans when it comes to interpreting emotional states.

There is also the cognitive component of an emotion, which pertains to the evaluation of objects and events. In other words, emotions bias us toward interpreting events in a positive or negative light. These evaluations play a causal role in the formation and maintenance of an emotion: the negative evaluation of an event may trigger an emotional state that contributes to negative evaluations of subsequent events, prolonging the emotion. This is a feature the machine must grasp if it is to understand what may have caused an emotion, how it may be shaping the way the human interprets the world, and how that state could potentially be changed. For example, if the automated copilot of a vehicle recognizes that its human driver is experiencing anger, it may suggest a positive interpretation of the events upsetting the driver. This may break the self-reinforcing chain of negative evaluations characteristic of anger and help the driver shift to a less dangerous emotional state in which to operate a vehicle.
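
The self-reinforcing loop between evaluation and emotion can be shown with a toy model. This is a minimal sketch; the decay and bias constants are purely illustrative assumptions:

```python
def evaluate(event_valence, mood):
    """An emotional state biases how each new event is evaluated."""
    return event_valence + 0.5 * mood

def update_mood(mood, evaluation):
    """Each evaluation feeds back into the emotional state."""
    return 0.8 * mood + 0.2 * evaluation

mood = -1.0  # the driver starts out angry
for event in [0.0, 0.0, 0.5]:  # two neutral events, then a positive reframe
    evaluation = evaluate(event, mood)
    mood = update_mood(mood, evaluation)
    print(f"event={event:+.1f}  evaluation={evaluation:+.2f}  mood={mood:+.2f}")
# Left alone, neutral events are read negatively and the anger persists;
# the suggested positive interpretation nudges the loop back toward neutral.
```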

A final component pertains to the actual behavior that is likely to arise from the emotional state. The likelihood of a human making particular choices or selecting particular actions may shift depending on their emotional state. A typically calm individual may act violently while in a state of anger or fear. Understanding this shift in “action tendencies” is the most important aspect of emotion with regard to behavioral prediction.
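
As a sketch of what such a shift might look like in a model, consider baseline action probabilities that are perturbed by the current emotional state. All of the actions and numbers below are illustrative assumptions:

```python
BASELINE = {"yield": 0.7, "honk": 0.2, "tailgate": 0.1}

# Illustrative per-state shifts in action tendencies.
SHIFTS = {
    "calm":  {"yield": +0.10, "honk": -0.05, "tailgate": -0.05},
    "anger": {"yield": -0.30, "honk": +0.20, "tailgate": +0.10},
}

def action_tendencies(state):
    """Combine baseline tendencies with an emotional-state shift."""
    shifted = {action: max(0.0, p + SHIFTS[state].get(action, 0.0))
               for action, p in BASELINE.items()}
    total = sum(shifted.values())
    return {action: p / total for action, p in shifted.items()}

print(action_tendencies("anger"))  # honking and tailgating become more likely
```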

Of course, things aren’t as simple as providing a one-to-one map from emotional states to particular actions or expressions. Humans show enormous variation in emotional expression and reactivity. The display norms of some cultures may lead individuals belonging to that cultural group to suppress or exaggerate their emotions. But even at the level of the individual, people vary considerably in their tendency to act on their emotions: some possess high levels of emotional control while others need an anger management course. Though it’s theoretically possible to codify a set of cultural display rules and provide it to the machine, in practice this information will often not be available. An even trickier issue is accounting for individual variation in emotional reactivity. Only in rare cases will the machine be able to obtain external information on this; otherwise it will have to construct a unique profile for each individual.
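
One way to build such profiles is to maintain a running estimate per individual, updated after each observed episode. The sketch below assumes a simple running-average rule and an uninformative prior, both illustrative choices:

```python
from dataclasses import dataclass, field

@dataclass
class ReactivityProfile:
    """Tracks how strongly one individual tends to act on each emotion."""
    reactivity: dict = field(default_factory=dict)  # emotion -> [0, 1]
    counts: dict = field(default_factory=dict)

    def observe(self, emotion, acted_on_it):
        """Update the running average after each observed episode."""
        n = self.counts.get(emotion, 0)
        prev = self.reactivity.get(emotion, 0.5)  # uninformative prior
        self.reactivity[emotion] = (prev * n + float(acted_on_it)) / (n + 1)
        self.counts[emotion] = n + 1

profile = ReactivityProfile()
profile.observe("anger", acted_on_it=True)
profile.observe("anger", acted_on_it=False)
print(profile.reactivity["anger"])  # 0.5 after one hit and one miss
```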

Returning to the last of the questions we opened with: why is it so important for a machine to understand emotion? The process of recognizing and reacting to an emotion can be broken down into a set of rules, but an actual empathic response requires emotional reasoning. It may sound like science fiction at this point, but we’re aiming to design a machine that actively reasons about the most appropriate response given the emotional needs of its user. This may translate into the machine recognizing its user is having a bad day and keeping its head down. Or perhaps the machine recognizes that a user who has spent 20 years working on a problem may be disheartened if it solves that problem in 4.5 seconds, and so chooses to wait before presenting the answer (to preserve the appearance that the problem was justifiably difficult). Determining what response a given user most needs in their emotional state is the heart of emotional reasoning, and such reasoning lies at the base of any true empathic response. This is where understanding emotion on a deeper level becomes essential: more than a simple set of interaction rules linked to recognition, we’re striving for empathic responses driven by emotional reasoning.
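
A caricature of the desired output of such reasoning might look like the policy below. A genuinely empathic system would derive these tactics dynamically rather than from fixed rules, and every threshold here is an illustrative assumption:

```python
def plan_response(answer, user_state, effort_years=0):
    """Choose delivery tactics given the user's inferred emotional state."""
    if user_state == "distressed":
        # Keep a low profile; defer non-urgent output to a better moment.
        return {"deliver": False, "reason": "user is having a bad day"}
    if effort_years >= 20:
        # Soften the blow of an instant solution to a decades-long problem.
        return {"deliver": True, "delay_hours": 48, "answer": answer}
    return {"deliver": True, "delay_hours": 0, "answer": answer}

print(plan_response("42", user_state="content", effort_years=20))
```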

So where does all this leave us with regard to designing a machine with true emotional intelligence? Expressions, be they facial, verbal, or gestural, are the clearest markers for recognizing an emotion. Depending on the sensors our hypothetical machine is equipped with, certain physiological changes such as increased respiration or heart rate and reddening of the cheeks could act as markers as well. The changes in neurophysiological and endocrine activity can effectively be ignored. Specific aspects of the subjective experience could be communicated to the machine, but much of this component will remain inaccessible (though that shouldn’t be a problem now that we’ve moved it out of the forefront of the emotional complex). Insight into cognitive evaluations can help the machine understand why an individual might be experiencing a given emotion, or help clarify which emotion they are experiencing. The essential feature for any interactive machine to understand is the change in action tendencies that occurs with the shift between emotional states, since this is the channel through which emotions have a direct impact on the world. Finally, the machine’s predictive models of each individual must be continually adjusted to maximize effective interaction with that person in the future.

We at Interactive Engineering are working on implementing emotional intelligence in the AGI we are developing. Our AGI encodes all of its knowledge in natural language, which gives us the opportunity to equip our machine with in-depth explanations synthesizing all the relevant features of an emotion. Our machine’s main interface channel is currently textual, meaning it will rely primarily on analysis of linguistic patterns to recognize emotions in the individuals it interacts with. Its limitations will be a product of how thoroughly we have “explained” each emotion to it, so our first goal is the creation of comprehensive explanations of emotional states: explanations that encompass everything needed to recognize and understand an emotion. Is this overly ambitious? Perhaps, but we’ve never let that frighten us off before.
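
For a flavor of what linguistic-pattern analysis can look like at its simplest, here is a lexicon-based sketch. The lexicon is an illustrative assumption and bears no relation to our actual system:

```python
import re
from collections import Counter

# A tiny illustrative emotion lexicon; real coverage would be far broader.
LEXICON = {
    "furious": "anger", "annoyed": "anger",
    "hopeless": "sadness", "miserable": "sadness",
    "thrilled": "joy", "grateful": "joy",
}

def detect_emotion(text):
    """Count lexicon hits per emotion and return the most frequent."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = Counter(LEXICON[t] for t in tokens if t in LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(detect_emotion("I'm furious and annoyed with this update."))  # anger
```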

So where to next? Unlocking emotions may contribute to solving another problem as well: a related set of concepts that has proven difficult for machines to understand, concepts like “trustworthiness”, “loyalty”, and “ethics”. These are not emotions in their own right, but they too belong to a category of abstract qualities that vary between individuals and have a persistent effect on behavior. While emotions tend to represent more transient changes of state, possessing a system of ethics or loyalty to a given person may produce much longer-lasting effects. These are not concepts a machine will grasp intuitively; they will need to be explained just as meticulously as emotions. Fortunately, the features that most require explanation may well be shared with emotions, and it’s our hope that our work on emotions can be extended to encompass this entire group of concepts.
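
The state/trait distinction can be captured with something as simple as different decay rates, as in the sketch below (the rates are illustrative assumptions):

```python
def decay(value, rate, steps):
    """Exponentially decay a value over a number of time steps."""
    return value * (1 - rate) ** steps

anger = decay(1.0, rate=0.50, steps=5)    # transient state: fades fast
loyalty = decay(1.0, rate=0.01, steps=5)  # persistent trait: barely moves
print(f"anger={anger:.3f}  loyalty={loyalty:.3f}")  # anger=0.031  loyalty=0.951
```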



[1] Klaus Scherer, “What are emotions? And how can they be measured?” Social Science Information 44, no. 4 (2005): 695–729.
