Meet my new intentionality machines

This is our new pet. We call him K-9, after the robot dog companion in Doctor Who. He is an iRobot, and he likes to explore the house and clean.

He’s not the only new member of our household. We’ve also gained a roommate named Alexa. She reads to me when I’m falling asleep, she wakes me up in the morning, she can turn on the lights when I get home, and she reminds me of things that I wanted to buy when I’m at the grocery store. She’s very helpful.

Sometimes I tell Alexa to play some music, and I sit back and watch K-9 frolic through the house, and I think: “How do I know they aren’t conscious?”

* * * * *

There is a large body of research showing that humans have a tendency to “read” consciousness and intent into things even when it isn’t there. We will attribute will, desire, and personality to everything from our computers to the weather. It’s very easy to laugh this off and say, “Oh, that’s just human bias! We’re wired to think we see consciousness in everything!”

But of course, as a simple matter of logic: the fact that we are predisposed toward seeing consciousness in things isn’t evidence that we are wrong when we see it.

(If you are having a hard time grokking that, think of it this way: the human mind is pre-wired to see faces in visual patterns, but that doesn’t automatically mean you should be skeptical about whether or not I have a face.)

What we need is a good theory of the minimum requirements that a system must fulfill for it to be something we are willing to call “conscious.”

* * * * *

Gregory Bateson was an anthropologist and psychologist who was one of the founders of cybernetics and systems theory in the 1950s and 1960s. The term “cybernetics” was coined by Norbert Wiener in the late 1940s, in the circle around the Macy Conferences (the first of which was held in March 1946), as the name for a theory, and an overarching framework of thinking, that could describe both machines and living systems.

The key was the idea of feedback, or circular causal loops. One of the things that differentiates living systems from non-living systems is that living systems activate themselves, repair themselves, and maintain their own organization. When the environment changes them (i.e. sensation), biological systems alter their structure in a way that changes the environment (i.e. behavior).

Systems that involve closed causal loops can be extremely complex, and often give rise to behaviors that are “emergent”: much more than the simple sum of their parts. They can also be predictable in some ways but unpredictable in others, just like living systems.

But the key for Gregory Bateson was the relationship between feedback systems and intent, or goal-based action. One of the confounding questions of the previous century had been how inanimate matter, ordinary physical systems, could possibly be imbued with goals, will, or intention — notions that seem absolutely necessary for understanding mental phenomena, but that do not seem to have correlates in a universe that is analyzed in terms of matter, energy, and forces.

In the physical universe, the cause comes before the effect; in the mental universe, it is the desired effect that energizes the causal behavior into action. How can science get from one to the other?

Gregory Bateson found the answer in the self-correcting loops of cybernetic systems. Imagine you are observing the behavior of a thermostat: when the room is above a particular temperature, the thermostat turns the heater off; when the room is below that temperature, the thermostat turns the heater on. Just based on the behavior, you might conclude that the thermostat wants the room to be a particular temperature… and it is acting to exert its will on the environment.
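The thermostat loop can be sketched in a few lines of code. This is only an illustrative toy, not a model of any real device: the setpoint, heating rate, and heat-loss rule are all made-up numbers.

```python
# A minimal sketch of the thermostat's closed feedback loop:
# the heater changes the room, and the room changes the heater.
# All constants here are illustrative, not from any real device.

def thermostat_step(room_temp: float, setpoint: float, heater_on: bool) -> bool:
    """Return the heater's new state, given the current room temperature."""
    if room_temp > setpoint:
        return False     # too warm: turn the heater off
    if room_temp < setpoint:
        return True      # too cool: turn the heater on
    return heater_on     # exactly at the setpoint: leave it alone

def simulate(steps: int, setpoint: float = 20.0, outside: float = 5.0) -> list[float]:
    """Run the loop: sensation (read temperature) drives action (heater)."""
    temp, heater_on, history = 15.0, False, []
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        temp += 2.0 if heater_on else 0.0    # heater warms the room
        temp += 0.1 * (outside - temp)       # room loses heat to the outside
        history.append(temp)
    return history

temps = simulate(48)
# After a warm-up period, the loop hovers around the setpoint: behavior
# that, observed from outside, looks like "wanting" the room at 20 degrees.
```

Nothing in the code “wants” anything; there is only a circular causal loop. Yet the trajectory it produces is exactly the goal-seeking behavior Bateson was pointing at.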

(For more about Bateson’s views of the relationship between feedback systems and the mind, check out the very good article: Gregory Bateson, cybernetics, and the social-behavioral sciences.)

Crazy? Maybe. But consider this:

If you put a little sugar source in a petri dish with an amoeba, the amoeba will go toward the sugar and engulf it. Yummy! Nobody is saying that the amoeba is “smart” or self-aware; but on some level, based just on its behavior, it appears as though the amoeba likes sugar. It wants the sugar, so it goes toward the sugar and eats it.

Yet we know a lot about the chemistry of the amoeba. When sugar is dissolved in water, it creates a concentration gradient: the concentration of sugar is highest at the source. When sugar interacts with the chemical material of the amoeba’s cell membrane, it makes that membrane more elastic. As a result, cytoplasmic pressure causes the more elastic section of the membrane to expand outward, and the cell “flows” toward the sugar… until it engulfs it.
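The same gradient-following can be caricatured in code. This is a toy stand-in, not a biochemical model: the concentration function and the “more elastic side bulges” rule are invented for illustration.

```python
# A toy sketch of gradient-following: purely local rules, no goals.
# The concentration profile and movement rule are illustrative only.

def sugar_concentration(x: float, source: float = 10.0) -> float:
    """Concentration falls off with distance from the sugar source."""
    return 1.0 / (1.0 + abs(x - source))

def step(position: float, dx: float = 0.1) -> float:
    """The side of the 'membrane' that feels more sugar expands toward it."""
    left = sugar_concentration(position - dx)
    right = sugar_concentration(position + dx)
    if right > left:
        return position + dx   # membrane bulges toward the source
    if left > right:
        return position - dx
    return position            # no gradient: no flow

position = 0.0
for _ in range(200):
    position = step(position)
# Each step only compares local chemistry, yet the cell settles near the
# source -- a trajectory we would naturally describe as "wanting" the sugar.
```

Every step is blind local mechanics, but the overall trajectory is precisely what we would call goal-directed behavior if we only watched it from the outside.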

It’s pure physics: the basic mechanics of molecules. But based on the behavior, the way the biological machinery of the amoeba responds to changes in its own chemistry, we would describe it as having desires, having a goal, and behaving in a way that is consistent with that goal.

Are other organisms really any different?

* * * * *

Good little K-9 wants to explore and clean my floor. It is guided by its program, its internal structure: nobody is arguing otherwise. And of course, it’s far too simple to have anything like “intelligence” or “self-awareness”. But to answer the question “Does it want to clean the floor?” we need to do better than laughing it off as anthropomorphism and bias.

We need a theory about what it takes for any physical system–biological or otherwise–to really want something. We know that our bodies (the system that includes our brains and the entire rest of our bodies as well) are intentionality machines: organized configurations of matter that are arranged in such a way that the system has goals, desires, and intentions that it acts on.

Is my iRobot an intentionality machine? I suppose it depends… what do you think are the minimal requirements needed for a machine to want something?