
The Science of Emergence: Understanding How Intelligence Arises

Book Bites | Science | Technology

Below, co-authors Gaurav Suri and Jay McClelland share five key insights from their new book, The Emergent Mind: How Intelligence Arises in People and Machines.

Gaurav is an associate professor of psychology at San Francisco State University. Jay is a professor of psychology and of computer science at Stanford University.

What’s the big idea?

When many simple elements work together, they can create a complex whole. The firing of a single neuron follows a rather basic mechanism, but in combination with billions of other brain cells, those tiny electrical impulses form the remarkable abilities of our human minds. Understanding how our own intelligence emerges is key to guiding and understanding the advancement of artificial intelligence.

Listen to the audio version of this Book Bite—read by Gaurav and Jay—below, or in the Next Big Idea App.

https://nextbigideaclub.com/wp-content/uploads/BB_Gaurav-Suri-Jay-McClelland_MIX.mp3?_=1

1. Simple parts can build a complex whole.

The concept of “emergence” is one that scientists have used to explain the behavior of things in terms of their parts. The behavior of the whole can be difficult to understand if you just think about the parts by themselves. How the parts interact with each other gives rise to the effects that you see at the level of the whole.

The properties of water are a good example. Water is made up of hydrogen and oxygen, and that’s about as simple as it could be. But the way the hydrogen and the oxygen bind to make a single water molecule, and the way those single water molecules then interact with each other in collective clusters of water molecules, is what gives water the properties that we observe. We think it’s similar with the mind.

Another example comes from my (Gaurav’s) real life. When I was growing up in India, we didn’t have social media to interact with. We had nature. The nature around my house included trains of marching ants. I discovered that if you put an obstacle in the path of a train of ants, they would invariably find the shortest way around the obstacle. I was about seven, and I asked around to see if anyone could tell me how the ants were able to find the shortest way on their first try.

The answers were very different and not satisfying. Someone thought that maybe the queen ant somehow knows, or maybe the ants talk to each other. But none of that appears to be true. It wasn’t until much later that I learned that this is an example of an emergent phenomenon. Ants do at least two things: they lay down trail pheromones and follow strong pheromone trails. The shorter path has stronger trail pheromones because ants, going back and forth, travel more on it than on the longer path. This simple pheromone following gives rise to the emergent behavior of being able to find the shortest path. The ant alone doesn’t have this intelligence, but collectively in a colony, they do.
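The pheromone mechanism described above can be sketched in a few lines of code. This is a deliberately minimal toy model, not a biological simulation: two fixed paths, evaporation and deposit rates chosen only for illustration.

```python
import random

def simulate(steps=2000, short_len=1, long_len=3, seed=0):
    """Toy model: ants choose between two paths in proportion to
    pheromone strength; shorter trips finish sooner, so the short
    path is reinforced more often per unit time."""
    rng = random.Random(seed)
    pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
    lengths = {"short": short_len, "long": long_len}
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if rng.random() < pheromone["short"] / total else "long"
        # Deposit inversely proportional to trip length: a shorter round
        # trip means more deposits in the same amount of time.
        pheromone[path] += 1.0 / lengths[path]
        # Evaporation keeps trails from growing without bound.
        for p in pheromone:
            pheromone[p] *= 0.999
    return pheromone

trails = simulate()
# Positive feedback makes the short path's trail dominate, even though
# no individual ant "knows" which path is shorter.
```

No single rule in this loop mentions "find the shortest path"; that behavior emerges from trail-following plus the timing difference between the two routes.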

The mind is somewhat similar. Our mental life, thoughts, experiences, and cognitions arise from the interactions of brain cells, which are themselves not super intelligent. They do very simple things, but through their interactions, our minds arise.

2. Neurons are pretty basic—until they connect as a network.

In our brains, we have approximately a hundred billion neurons. These neurons do simple things. They fire little jolts of electricity, referred to as action potentials, and they connect with each other. These connections enable the action potential of one neuron to influence the action potential of another neuron. The startling proposition of the neural network view of the mind is that our intelligence, thoughts, and actions emerge from these simple properties of neurons.
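The simplified unit that neural network models abstract from this picture can be written down directly. This is the standard textbook idealization, not a faithful model of a real neuron: it just sums weighted inputs from other units and fires if the total crosses a threshold.

```python
def fires(inputs, weights, threshold=1.0):
    """A toy neuron: returns True if the weighted sum of its
    inputs from other neurons reaches the firing threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Two active inputs together drive the unit past threshold;
# either one alone does not.
assert fires([1, 1], [0.6, 0.6])       # 0.6 + 0.6 = 1.2 >= 1.0
assert not fires([1, 0], [0.6, 0.6])   # 0.6 < 1.0
```

Each unit on its own is trivial; the interesting behavior comes from wiring many of them together, as the next section's example of recalling a face from a name illustrates.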

“They do very simple things, but through their interactions, our minds arise.”

Let’s consider a thought that I could have about a person. I could imagine that Gaurav is a man of Indian descent with dark hair and glasses. The thought behind that sentence could be visualized as a pattern of activation over a large population of neurons. Some of them are in my visual system, where I’m imagining his dark hair and glasses. And some of them are in my linguistic system, where I’m representing the sound of his name as I say it to myself. These different patterns of activation are in populations of neurons in different parts of my brain. So, the thought is distributed widely over neurons throughout my entire brain.

In that context, we can ask, how is it that hearing Gaurav’s name could help bring to mind a visualization of what he looks like? And that’s where the idea of connections comes in. When I experienced meeting Gaurav and was seeing him and hearing him speak his name all at once, I could start forming connections among the neurons that represent the sound of his name and the neurons that represent what he looks like. These neurons could also form connections with each other so that they could collectively become a stable state of mind that represents me thinking about what Gaurav looks like and what his name sounds like.
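The idea that co-active neurons form connections, which later let a partial cue (a name) recall the whole pattern (a face), has a classic formalization in Hopfield-style networks. The sketch below is an assumption in that sense: the book describes the idea informally, and this is one standard way to make it concrete. Units that were active together get strengthened connections (a Hebbian rule); later, activating only the "name" units lets the network settle back into the full stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 40                                   # first 20 units: "name"; last 20: "visual"
pattern = rng.choice([-1, 1], size=N)    # one stored thought, +1/-1 activations

# Hebbian learning: strengthen connections between co-active units.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                 # no self-connections

# Cue with the "name" half only; the "visual" half starts silent.
state = pattern.astype(float).copy()
state[20:] = 0.0

# Let activation settle: each unit takes the sign of its summed input.
for _ in range(5):
    state = np.sign(W @ state)

recalled = np.array_equal(state, pattern)
# The network completes the pattern: hearing the name brings
# the whole distributed state, including the "visual" half, to mind.
```

The stable state the text describes corresponds to the settled pattern here: a configuration of activations that the connections mutually reinforce.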

3. The emergence of AI.

Neural networks can be used to build models of various aspects of human cognition. In fact, I (Jay) started working with David Rumelhart, who had been intrigued by what we now call good old-fashioned artificial intelligence. He tried to build models of how people do things, such as understanding language and making inferences from stories. Rumelhart grew frustrated with those AI systems, and together we began using neural networks to model perception and understanding.

At the same time, we met Geoffrey Hinton, who came from a computer science background and brought a computational perspective. He believed it might be possible to build artificially intelligent systems as well as models of our human minds by using neural networks.

“Neural networks enable modern AI systems to be emergent systems.”

What’s often lost in the development of modern AI is how tightly integrated these systems are with neural networks that were originally developed to understand the human mind. Neural networks allow for the emergence of intelligence, and our mind is an emergent system. Neural networks enable modern AI systems to be emergent systems.

Other people tried a more propositional approach: if this happens, do that. But that approach stalled AI’s progress. Only once we allowed neural networks to become the core engines of modern artificial intelligence did modern AI take off. It’s not a coincidence that both our intelligence and the intelligence of AI systems are emergent processes.

4. Neural networks can inform a culture upgrade.

Not only do these neural networks allow us to better understand our thoughts and actions and to develop machines that show an intelligence of their own, but they also say a lot about how to live a better life.

Our mind is a process. It involves the activation of neurons connected to other neurons; activation propagating through those connections gives rise to thoughts that lead to actions. And we are all fundamentally made of this same stuff. We are neurons interacting with each other. Many of our connections are specified innately. Many others are developed through the experiences that we have. But regardless of these biological and experiential differences, there is an underlying unity amongst all of us. For me, this speaks volumes about kindness. Kindness should be a default. We should approach each other with a stance of kindness and patience because, at a deep level, we are the same.

“Kindness should be a default.”

Our behavior is not defined by some internal good engine or bad engine. It is defined by context—biological and social. This suggests that, for our culture and for each other, we need safeguards and incentives that enable us to behave pro-socially. Our behavior doesn’t arise out of nowhere. It is a product of interactions in a neural network that are informed by the inputs into that neural network.

5. AI hasn’t caught up, but it will.

The artificial intelligence systems of today are like us in many ways, but not in others. They capture some of the emergent characteristics of our human thought processes, but they still have some fundamental limitations, which are going to be addressed as researchers continue to explore these systems.

Our human minds can learn with far less experience than what is needed for training today’s artificial neural networks. Contemporary models need a hundred thousand times more training data than a person could ever experience in their lifetime. Another limitation is that these models seem to be passively driven by their inputs rather than operating from goals, as people do. The real intelligence is still the intelligence of engineers designing these systems.

There isn’t anything inherent about goals that prevents machines from developing goals of their own. In fact, goal-directedness is a necessity if these systems are to gain flexibility. But that is a double-edged sword. As soon as AI systems have the autonomy to develop their own goals, we will enter new territory where we must be careful and confront existential issues. The safeguards that make us, as people, more likely to do pro-social things will also apply to our machines, especially as they become more autonomous and goal-directed. As we venture into this rapidly changing world, thoughtfulness about these safeguards will be crucial to ensuring we develop machines that enable us to thrive.

Enjoy our full library of Book Bites—read by the authors!—in the Next Big Idea App.
