Why we should strive to make robots as empathic as humanly possible
Slowly but surely, robots are moving from industrial settings into more interactive and social environments. To navigate these environments and the social interactions within them, robots must be able to interact successfully with humans. These robots are often modeled with human cognition in mind, both because interaction requires a degree of common ground and mutual understanding, and because for many cognitive capabilities humans are the best example we have. After all, humans generally consider themselves the smartest and most social species on Earth [6].
Empathy: what is it?
One of these fundamental human capacities is the ability to empathise with one another. Empathy has been thoroughly studied, yet there is no consensus on its definition. Generally, one can distinguish three kinds of empathy: cognitive empathy, affective (or emotional) empathy, and a combination of the two [7,12,13]. Cognitive empathy is roughly the ability to understand other people’s emotions and behaviour through mental perspective taking. Affective empathy, also known as the sharing of vicarious emotions, can be defined as “a process that makes a person (or agent) have ‘feelings that are more congruent with another’s situation than with his own situation’” [12]. A. Tapus and M. Mataric formulated guidelines for the capacities a social robot needs in order to emulate empathy [14]. In their paper they acknowledge that machines cannot feel and express empathy; instead, they focus on how a social robot can appear empathic.
B. Duffy likewise states that, due to the embodied differences between humans and robots, we cannot ensure robots will have the same empathic emotions. However, he argues that for successful social interaction it is enough for a robot to appear intelligent or socially capable. In human-human interaction, people do not use standardised metrics to measure each other’s intelligence and social capacities; we simply observe behaviour. When a robot appears to be (socially) intelligent, it can already facilitate social interaction because it “speaks our language” [6]. We might not be able to model “the real deal” when building empathic robots, but we can make them appear empathic. Empathy would then serve not so much as a set of feelings, but as a driving force behind behaviour and social interaction, as the sketch below illustrates. But why would we want empathic robots at all? What makes empathy so crucial in social interactions?
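To make this idea concrete, here is a minimal, purely hypothetical sketch of empathy as a behaviour policy rather than a felt state. None of the names (`PerceivedState`, `empathic_response`) come from the cited work; they are illustrative assumptions, and the perception module that would supply the emotion labels is assumed rather than implemented.

```python
from dataclasses import dataclass

# Hypothetical sketch: "appearing empathic" as a behaviour policy.
# The robot feels nothing; it simply maps a perceived user state to
# a response congruent with the user's situation.

@dataclass
class PerceivedState:
    emotion: str      # e.g. "sad", "happy", "frustrated" (assumed perception module)
    intensity: float  # 0.0 .. 1.0

def empathic_response(state: PerceivedState) -> str:
    """Select a behaviour congruent with the user's situation
    (perspective taking), without any inner feeling."""
    if state.emotion == "sad" and state.intensity > 0.5:
        return "lower_gaze_and_console"   # mirror the user's mood
    if state.emotion == "happy":
        return "smile_and_celebrate"      # share the positive emotion
    if state.emotion == "frustrated":
        return "offer_help"               # prosocial follow-up
    return "neutral_attentive"            # default: signal attention

print(empathic_response(PerceivedState("sad", 0.8)))
```

Nothing here requires the robot to experience anything; the appearance of empathy is produced entirely by the mapping from perceived state to behaviour, which is exactly Duffy’s point.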
Empathy: fundamental to human interactions
As stated earlier, empathy is fundamental to human cognition and human-to-human social interactions. It has mainly been associated with, and is a known mediator of, prosocial behaviour, which can significantly affect how people treat each other [10]. Examples of prosocial behaviour include donating money or extending a helping hand when someone has had an accident. It has been argued that cognitive empathy evolved in response to the complex social demands of human interaction [13]. Cognitive empathy enables behaviours such as facilitating conversation, building social expertise, and predicting others’ behaviour, but also less savoury behaviours such as lying and deception, as well as recognising when such things are done to you. Affective empathy, on the other hand, is mainly associated with altruistic behaviour and nurturing relationships, and is related to moral mechanisms. A simple example: you might be less inclined to punch someone in the face because you understand it will hurt and can empathize with the other’s pain. Another example of how empathy modulates social relations is given by Bastian et al., who had students eat spicy chilli peppers in groups and showed that the shared pain facilitated group formation [1].
Empathy and moral decision making
As robots enter social spaces and their autonomy grows, the need for moral agency in robotics increases. Whether and how robots can be moral agents is heavily debated. B. Duffy argues that a humanoid robot specifically designed for social interaction, in particular, needs moral rights and duties: it behaves and looks like a human, and thus invokes our frame of reference, which sets high expectations [6]. Even more fiercely discussed is the role of empathy in moral agency. Morality and empathy are often mentioned in the same breath, yet the relationship between them is not very clear [15]. Generally, there is consensus on the importance and influence of perspective taking in moral judgements: empathy allows agents to understand the effect of their actions on the emotions and mental states of others, which directly influences their behaviour towards others. This corresponds to what we earlier described as cognitive empathy. Others, however, have argued for a more fundamental role of emotions in empathy and moral decision-making. L. Damm describes an account of moral responsibility that critically depends on the empathic capacities of the agent. She argues that a person’s moral responsibility depends on their status as a moral agent; one who cannot satisfy the criteria for moral agency is not considered fully responsible [5]. This would indicate that robots also need some form of affective empathy mechanism to be considered fully morally responsible.
Whether this is possible is highly contested, but still open for debate, and it critically depends on whether we believe a biological body is needed to experience emotions [4]. Under the assumption that affective empathy is indeed required for full moral responsibility, a robot cannot be fully morally responsible, much like a child cannot. But even if one does not believe robots can experience emotions, and thus can never be truly and fully empathic, this does not end the role of empathy in robot morality. We can still use the construct of cognitive empathy, or abstract away from biology and view emotions as one of the driving forces behind behaviour. We have now seen how empathy underpins social interaction and morality, and that both will be much needed in the future. But how does this translate into practice? What applications can we dream of, beyond moral robots?
Empathy from a practical standpoint
Empathic agents are still in their infancy and remain very limited in their empathic abilities. However, there are a few domains in which such systems are being investigated. One example is empathic tutoring. A current, very ambitious effort is the EMOTE project, which aims to develop an empathic tutoring system to facilitate the learning experience of children [3]. Knowledge from social psychology and models of empathy is applied to create emotionally intelligent and responsive tutor agents (robotic and virtual).
A. Tapus and M. Mataric made their recommendations on how to model empathy with the intention of improving therapeutic robots [14]. They note that empathy is a crucial component of therapy, so a robotic system intended as a therapeutic aid should also have empathic capabilities. Social robots may also be used in therapy, for example to support ASD patients [8]. Another use case is robots as companions. Leite et al. developed a social robot companion that reacted empathically to a chess game played between people, displaying facial expressions and uttering phrases [10]; a sketch of this idea follows below. People to whom the robot reacted empathically rated it as friendlier. Empathy can also go the other way around: the robot can evoke empathy in humans. Paiva et al. addressed how an empathy-evoking agent can be used to persuade children to do “the right” action [11]. This demonstrates that we can even use robots to enhance people’s moral or social behaviour!
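The following is not Leite et al.’s actual system, but a minimal illustrative sketch, assuming the companion can score how good the last move was for the player it supports. The function names, the appraisal threshold, and the expression table are all invented for illustration.

```python
# Illustrative sketch (not Leite et al.'s published system): a companion
# that appraises chess moves from the supported player's perspective and
# reacts with a congruent facial expression and phrase, as described in [10].

def appraise(move_quality: float) -> str:
    """Map how good the last move was for the supported player
    (-1.0 bad .. 1.0 good) to an emotional appraisal."""
    if move_quality > 0.3:
        return "joy"
    if move_quality < -0.3:
        return "distress"
    return "neutral"

EXPRESSIONS = {
    "joy": ("smile", "Well done, that was a strong move!"),
    "distress": ("frown", "Don't worry, you can still recover."),
    "neutral": ("attentive", "Interesting position..."),
}

def react(move_quality: float) -> None:
    face, phrase = EXPRESSIONS[appraise(move_quality)]
    print(f"[face: {face}] {phrase}")  # stand-ins for actuator commands

react(0.7)   # supported player played well -> shared joy
react(-0.9)  # supported player blundered -> consoling reaction
```

The key design choice mirrors the study: the robot’s reaction is congruent with the observed player’s situation rather than with any state of its own, which was enough for participants to rate it as friendlier.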
Empathy: a demon we need to tackle
Apart from the debate on whether robots can experience emotions and thus be truly empathic themselves, it is important to note that affective intelligence in cognitive robotics is highly controversial [4]. A critical moral dilemma accompanying social robots that portray empathy and serve as a listening ear (for example, to patients) is deception [2]. If a robot merely appears to be empathic and understanding, it might betray our trust. The question is whether the end justifies the means.
However, we have seen that empathy is crucial in social interactions, so to make robots capable of navigating social environments we will need to implement empathic capabilities to some extent. Empathy has practical benefits in tutoring and therapeutic systems, and it could be crucial for companion robots. We have also seen that the need for robot ethics is rising and that empathy is a crucial component of human morality. All in all, we should strive to make robots as empathic as humanly possible.