Artificial Intelligence vs Emotional Computers

This article is actually adapted from a Usenet post I made in comp.ai back on January 14, 1997. I have pieced together a few of my comments and removed intervening replies from other people to make it read more like an article, but the main points are all original to the comp.ai post.

What do I think of this article today? I stand by it completely. Moreover, I believe that many of my criticisms of the way people in robotics approached emotion back then have since been addressed by researchers in more recent decades.

What would it take to get robots to actually FEEL emotions?

Herbert Simon (1967), in his seminal paper on motivation and emotion, describes how a resource-bounded agent, operating in a dynamic environment that presents many threats and opportunities to the agent’s goals, will require an “attention interrupt” mechanism that speedily replaces current goals with new goals. For example, an animal may be feeding, only to notice a predator, which interrupts its current goal and replaces it with a goal of flight. The scenario described by R. Jones, in which a Brooksian robot appraises a danger, is essentially identical to this. These are the sorts of things that these researchers claim amount to implementing emotions in robots or computers.
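To be concrete about what I am objecting to, here is roughly what that architecture amounts to when reduced to a toy sketch. The class name, percept keys, and threshold are my own invention for illustration, not anything taken from Simon or Brooks:

```python
# A minimal sketch of Simon-style "attention interruption": the agent pursues
# its current goal until a monitored percept crosses a threshold, at which
# point the current goal is simply replaced. All names here are illustrative.

class InterruptDrivenAgent:
    def __init__(self):
        self.current_goal = "feed"

    def sense_threat(self, percepts):
        # e.g., strength of a predator cue in the current percepts
        return percepts.get("predator_proximity", 0.0)

    def step(self, percepts):
        # The "attention interrupt": a sufficiently strong threat signal
        # preempts whatever goal was active and substitutes a new one.
        if self.sense_threat(percepts) > 0.7:
            self.current_goal = "flee"
        return self.act(self.current_goal)

    def act(self, goal):
        return {"feed": "keep grazing", "flee": "run to cover"}[goal]


agent = InterruptDrivenAgent()
print(agent.step({"predator_proximity": 0.2}))  # keep grazing
print(agent.step({"predator_proximity": 0.9}))  # run to cover
```

That is the whole mechanism: a discrete goal swap triggered by a monitored condition.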

But they are wrong. This kind of programming has nothing to do with emotions, and one will never be able to simulate or artificially embody emotions in robots in this way.

Phenomenologically, emotional information is experienced as a different kind of information than “cognitive” information. That is, feeling scared about the prospect of falling off a cliff is phenomenologically different from thinking, “I may fall off a cliff, and so should change my course of action.” Physiologically, these kinds of information are embodied differently as well. The experience of fear depends, at least in part, on systemic hormonal and neurochemical regulatory changes and on changes in the levels of specific neurochemicals in different brain areas. These are mediated by patterns of electrochemical neural firing, but without the hormonal and neurochemical changes there is no experience of “fear.” On the other hand, what we experience as “cognitive activity” seems likely to depend primarily on patterns of electrochemical firing (especially in cortical areas), and not on specific systemic changes in neurochemical or hormonal levels.

Any program or robot that is intended to move toward implementing emotions needs to capture this qualitative difference in the processing of emotional and non-emotional information. Emotional information usually has systemic, bodily effects that modulate ALL behavior (increased energy, heart rate, etc.), over and above merely influencing which specific behavior is chosen (e.g., moving away from the cliff).
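Here is a toy illustration of that contrast, with names and numbers of my own invention. Goal selection decides which behavior runs; an arousal-like systemic state changes how every behavior, including the one you were already doing, gets executed:

```python
# Illustrative contrast between goal selection and systemic modulation.
# "arousal" stands in for hormonal/neurochemical state: it does not pick
# a behavior, it changes how every behavior is executed.

def select_behavior(threat_level):
    # Goal selection: a discrete choice of what to do (the Simon-style part).
    return "flee" if threat_level > 0.7 else "feed"

def execute(behavior, arousal):
    # Systemic modulation: arousal scales speed and vigilance for whatever
    # behavior happens to be running, capped at 1.0.
    base = {"feed": {"speed": 0.2, "vigilance": 0.3},
            "flee": {"speed": 0.8, "vigilance": 0.9}}[behavior]
    return {param: min(1.0, value * (1.0 + arousal)) for param, value in base.items()}

print(execute("feed", arousal=0.0))                 # calm feeding
print(execute("feed", arousal=0.8))                 # vigilant, hurried feeding
print(execute(select_behavior(0.9), arousal=0.8))   # fleeing, fully aroused
```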

Of course, the argument I have heard from some has been: if we pair this “change in goals” (Herbert Simon’s “attention interrupt”) with an increase in energy use, or some other physical mechanism that drives the robot’s motor activities, won’t that be akin to “increased physiological arousal”? And by pairing the cognitive processing condition (the “goal state”) with the increased energy consumption (“arousal”), won’t that satisfy the definition of emotion?

But the subjective experience of “fear” seems to involve more than merely the co-occurrence of two states: 1) physiological arousal, expressed as systemic changes in neurotransmitter and hormone levels, and 2) information-processing / signalling states in the cortex that make an attribution describable as “fear.” These two subsystems feed back onto each other in important (and not fully understood) ways. The role played by system-wide levels of hormones and neurotransmitters is functionally different from that of firing patterns in neurons; these diffuse chemicals often appear to modulate the firing rates of whole regions of cortical neurons.

I’m not saying implementing it on a computer is impossible. These different roles and impacts of neurochemicals and neural firing can all be abstracted away from biology. But in doing so, your computational theory must still recognize the functional difference between something that modulates the rate of computation for a set of processes and the signals that constitute those computational processes themselves.
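If I had to gesture at what such an abstraction might look like, it would be something like the sketch below: the signals passed between units are the computation itself, while a single broadcast modulator changes the gain of that computation without being one of the signals. Everything here (the class names, the decay rule, the numbers) is my own illustration, not a proposal for how the brain actually does it:

```python
# "Signals" are the messages passed between units (the computation itself);
# the "modulator" is a slowly varying global quantity that changes the gain
# of that computation without itself being one of the messages.

import math

class Unit:
    def __init__(self):
        self.activation = 0.0

    def receive(self, signal, gain):
        # The signal is the content of the computation; the gain (set by the
        # modulator) changes how strongly the unit responds to it.
        self.activation = math.tanh(gain * (self.activation + signal))

class Modulator:
    # Stands in for a diffuse neurochemical: one scalar, broadcast to all units.
    def __init__(self, level=1.0):
        self.level = level

    def decay(self, rate=0.1):
        self.level += rate * (1.0 - self.level)  # drift back toward baseline

units = [Unit() for _ in range(3)]
modulator = Modulator(level=2.5)   # e.g., a fear-like surge

for tick in range(3):
    # Ordinary "cognitive" traffic: a different signal for each unit.
    incoming = [0.1 * (i + 1) for i in range(len(units))]
    for unit, signal in zip(units, incoming):
        unit.receive(signal, gain=modulator.level)
    modulator.decay()              # the surge relaxes on its own timescale

print([round(u.activation, 3) for u in units])
```

The point of the separation is that removing the modulator does not remove the computation; it removes the system-wide change in how that computation runs, which is exactly the part the “attention interrupt” story leaves out.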

But even beyond this, I have another objection to this approach as well. I think it is built on a very limited and incorrect assumption about what emotions are for.

One can come from an intellectual tradition in which one conceives of emotions as driving cognition, rather than merely interrupting it. That would produce a very different type of model from those proposed by Simon and Brooks. The idea that emotions are primarily an “interrupt” seems to come from centuries-old theories about how emotions “mess up” an otherwise “rational” thinker, and based on current research that seems to be completely wrong.

For example, Schore (1994, Affect Regulation and the Origin of the Self) begins from a developmental and clinical perspective, and when he looks at the neurophysiological data, it makes at least as much sense to speak of cognition as emerging out of basic emotional activity as to speak of emotional activity interrupting, interfering with, or augmenting cognitive activity.

Personally, I think the notion of emotions as an interrupt mechanism springs from a basic, faulty (and usually unspoken) philosophical assumption: that there can be such a thing as “pure” cognition without emotion.

I know this assumption is there for many researchers, and I fear we will have to move beyond it if we ever want to really understand emotions or implement them in machines.