Mind in the machine

Do nerve cells hold the key to an epic advance in computing?


To write your girlfriend a poem, GPT-4, an artificial intelligence system, requires orders of magnitude more energy than your brain does.

That’s because AI doesn’t really function like the brain. Rather, it runs like all other computer software by flooding microchips with huge quantities of binary signals, in the form of zeros and ones, and gobbling up electricity along the way.

Kwabena Boahen, PhD, a professor of bioengineering and of electrical engineering, admires the brain’s efficiency and elegance, and he’s dedicated his career to developing a computer that actually works like one. Recently, he took a major step in that direction with the creation of a nanoscale transistor that emulates a dendrite, a slender fiber that protrudes from a nerve cell.

Dendrites are broadly understood to function like cables for receiving the electrical signals that nerve cells, or neurons, use to communicate information to one another. Yet Boahen and many other scientists suspect the branch-like structures do much more: namely, that they decode patterns of signals to help neurons determine whether to “spike,” or relay their own signals.

What’s remarkable, according to Boahen, is just how much information a few neuronal spikes can carry with the help of dendrites to interpret them. A computer chip that relied on an analogous sparsity of signals could lead to significant energy savings, particularly in light of the enormous computing demands of AI. This type of chip could also sidestep the increasingly difficult problem of keeping microchips from overheating. These are goals that Boahen hopes the device he’s invented, which he calls a nanodendrite, will someday help achieve.

‘Neural networks today are about as similar to a brain as an airplane is to a bird.’

Kwabena Boahen, PhD, professor of bioengineering and of electrical engineering

The nanodendrite is the product of neuromorphic computing — that is, designing computer hardware and software to function like the brain. It’s a burgeoning field, driven largely by a desire to keep up with the computing demands of AI and reduce the massive amounts of energy it consumes. Technology companies such as IBM, Intel and HP, as well as a number of universities, have invested time and money to develop neuromorphic microchips.

Boahen is one of the field’s pioneers. He designed his first neuromorphic chip as an undergraduate in the 1980s. After joining the faculty at Stanford in 2006, he proposed Neurogrid, a circuit board that would simulate 1 million neurons with 6 billion synapses, the structures where signals are passed between neurons.

His lab, Brains in Silicon, completed the project in 2010 and reported on it in 2014 in Proceedings of the Institute of Electrical and Electronics Engineers. Boahen and his co-authors noted that Neurogrid was about 100,000 times as energy efficient as a conventional computer’s simulation of 1 million neurons. Yet they also noted that a human brain, with 80,000 times as many neurons, needs only three times as much power. Boahen hopes the nanodendrite will help bridge that gap.
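A quick back-of-envelope calculation (our arithmetic, using the round figures above) shows how wide that gap remains:

$$\frac{80{,}000 \text{ (times as many neurons)}}{3 \text{ (times as much power)}} \approx 27{,}000$$

That is, watt for watt, the brain runs roughly 27,000 times as many neurons as Neurogrid did.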

Underwhelmed by computing

As a child growing up in the outskirts of Accra, the capital of Ghana, Boahen was interested in learning first principles. He took apart engines and electronics. He built a microscope. “I just wanted to know, in my own way, how things worked and understand them and try to re-create them,” he said.

A collection of instruments is used to evaluate the performance of nanodendrites attached to a microchip. (The green circuit board occludes a view of the chip in this image.) Electrodes, whose terminal ends are arrayed in a circle, are lowered onto the chip’s electrical contacts, and then electrical pulses are run through the chip. A researcher will peer through the microscope to help with positioning the electrodes above the contacts. (Photography by Misha Gravenor)

In the early 1980s, his father, a professor of history at the University of Ghana, returned from a sabbatical in England with a desktop computer. Boahen was hesitant about taking it apart. “I was too intimidated by this thing,” he said. “So I went to the library and read everything about how a computer worked — you know, about the memory, RAM, program counter, how to do a branch instruction. And I wasn’t impressed at all. I thought it was so brute-force — just a lot of circuitry. There had to be a more elegant way.”

As an undergraduate at Johns Hopkins University, Boahen got a glimpse of what that way could be when he attended a talk by a biophysicist who demonstrated how to train a neural network, a kind of AI that can learn from its errors. He was hooked.

After earning bachelor’s and master’s degrees in electrical engineering from Hopkins, he enrolled in a graduate program at the California Institute of Technology, where he earned a PhD in computation and neural systems.

He said that although neuromorphic engineering has made major strides in the past couple of decades, the field is still largely aspirational — especially when it comes to designing systems that mimic the brain’s architecture as opposed to simply being inspired by the brain.

“Neural networks today are about as similar to a brain as an airplane is to a bird,” he said.

One problem, as Boahen sees it, is that AI relies on a “synaptocentric” mode of computing, in which half of the nodes — lines of binary code that act like the AI’s neurons — respond to an input. Some of those responses are weak and some are strong, depending on how the network has configured the synapses, or connections — again, more code — between the nodes. Still, most of the nodes are active.
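To make that concrete, here is a toy sketch (our illustration, not Boahen’s model; the layer sizes, random weights and variable names are arbitrary) of a standard dense layer in which roughly half the nodes respond to any given input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "synaptocentric" layer: every node weighs every input through
# its synapses (the weight matrix), then applies a ReLU nonlinearity.
n_inputs, n_nodes = 1_000, 1_000
weights = rng.normal(size=(n_nodes, n_inputs))  # the "synapses"
x = rng.normal(size=n_inputs)                   # one input

activations = np.maximum(0.0, weights @ x)      # weak and strong responses

# With symmetric random weights, about half the nodes come out active,
# so essentially all n_inputs * n_nodes synapses do work on every pass.
print(f"fraction of nodes responding: {np.mean(activations > 0):.2f}")  # ~0.50
```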

If most of our 86 billion neurons were constantly signaling one another through their 100 trillion synapses, our brains would overheat, Boahen said.

Today, that’s a risk to computer chips as they try to handle ever-increasing processing demands while facing limits on how small their integrated circuits can be made and how effectively the heat they produce can be dissipated. Since the mid-20th century, engineers have been able to double the number of transistors on a chip about every two years. That growth rate, however, is expected to hit a ceiling this decade: Even as circuits and transistors shrink, each consumes about as much electricity as before, driving up power density and threatening to roast the chip.

A ‘dendrocentric’ approach

To overcome this obstacle, Boahen has proposed a “dendrocentric” mode of computing, which he wrote about in a Nature article published in late 2022. He asserts that instead of using a binary system of signaling, computers could use a unary system, like the brain does.

The brain’s signals are sparser but carry more meaning based on their sequence. If neurons A through J receive signals from some other neurons, prompting A, B and C to spike — in that order — and a dendrite on a neighboring neuron recognizes that pattern as part of the information needed to process, say, the smell of an orange peel, that neuron will generate a spike of its own. But if the dendrite instead detects the sequence B-A-C, the neuron won’t spike.
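As a sketch of that idea (a simplified model of our own, not the actual biophysics; the function name and the A-B-C labels are illustrative), a dendrite can be treated as a detector that fires only when its target inputs arrive in the right order:

```python
# Hypothetical "dendrocentric" detector: spike only if the target
# pattern occurs, in order, within the incoming stream of spikes.
def dendrite_spikes(events, pattern=("A", "B", "C")):
    """Return True if `pattern` appears as an ordered subsequence of `events`."""
    remaining = list(pattern)
    for e in events:
        if remaining and e == remaining[0]:
            remaining.pop(0)     # the next expected spike arrived on cue
    return not remaining         # all expected spikes seen, in order

print(dendrite_spikes(["A", "B", "C"]))  # True  -> the neuron spikes
print(dendrite_spikes(["B", "A", "C"]))  # False -> no spike
```

Three spikes in one order mean something; the same three in another order mean nothing, which is how so few spikes can carry so much information.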

To make such a system work in a machine requires a transistor that can act like a dendrite — in other words, determine whether a sequence of signals merits a spike. Boahen asserts that the nanodendrite can do this. It’s essentially a variant of a ferroelectric field-effect transistor, a decades-old technology built on a material whose natural electric polarization reverses when a voltage is applied.

‘It would take some innovative research ideas to find solutions, but I am optimistic that we will get there.’

H.-S. Philip Wong, PhD, professor of electrical engineering 

Like a conventional microchip’s logic gate — a circuit that performs logic functions on one or more binary inputs and provides an output — the nanodendrite uses a series of gates to determine whether a sequence of signals should prompt it to relay its own signal.
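In code, the contrast might look like this (a rough sketch under our own simplifications; the real device realizes the chain with ferroelectric gates along a single transistor channel, not software): a conventional gate asks only whether its inputs are high, while a gate chain also asks in what order they arrived.

```python
# A conventional AND gate is order-blind: AND(A, B) == AND(B, A).
def and_gate(a, b):
    return a and b

# Nanodendrite-style chain: gate k flips only after gates 1..k-1 have
# flipped, so the device relays a signal for just one input order.
def gate_chain(events, expected="ABC"):
    flipped = 0
    for e in events:
        if flipped < len(expected) and e == expected[flipped]:
            flipped += 1
    return flipped == len(expected)

print(and_gate(True, True))   # True no matter which input rose first
print(gate_chain("ABC"))      # True  -> in-order sequence, relay a signal
print(gate_chain("BAC"))      # False -> out of order, stay silent
```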

He designed the tiny transistor with H.-S. Philip Wong, PhD, professor of electrical engineering and the Willard R. and Inez Kerr Bell Professor in the School of Engineering, and graduate students Matthew Beauchamp and Hugo Chen.

In December 2023, Chen will present an experimental proof-of-concept paper demonstrating the operation of the nanodendrite at the International Electron Devices Meeting, the major forum for reporting advances in semiconductor and electronic device technology.

“From a device-technology point of view, there are many unanswered questions,” said Wong, who directs the Stanford Nanofabrication Facility. One such question is how to build nanodendrites in three dimensions — that is, stacked on top of each other in a single silicon chip. “Yet, I don’t believe those unanswered questions present any fundamental roadblocks,” Wong added. “It would take some innovative research ideas to find solutions, but I am optimistic that we will get there.”

Boahen is also optimistic. Such a technology, were it available now, could cut GPT’s signals 400-fold, with an equivalent decrease in energy consumption. He concedes the work is in its early stages, with an actual dendrocentric computer chip probably about a decade away from realization.

“But once you see it, you can’t unsee it,” he said.