Lab leader Javier Movellan, PhD, had assigned the task to the students as part of an experiment to see whether the robot could learn, through a regular baby's experience, to recognize human faces. Human babies can perform this feat just minutes after birth, spurring the theory that we are born with knowledge of what faces look like. The baby robot had no such knowledge programmed in, so its creators, including graduate students Ian Fasel and Nick Butko, expected it wouldn't catch on quickly. They planned to run the experiment for several months. But the robot astounded everyone: It learned to recognize human faces after just six minutes of life, accurately drawing squares around its babysitters' faces.
"We thought this would be a very difficult problem, and we are experts in machine learning," says Movellan. "We were very surprised that by six minutes it was already doing so well."
Movellan's finding suggests that human infants, like the baby robot, might not start out with any information about human faces. Rather, they could be extremely quick to learn that their mothers' faces tend to go along with interesting environmental cues, such as a lullaby or movement. If a baby robot with the equivalent of only one million neurons in its programming can recognize human faces so quickly, imagine what a human baby, with its 10 billion neurons, can do, Movellan says.
The baby robot is just one example of how machines that learn are contributing to theories of human development, says Terri Lewis, PhD, a psychology professor at McMaster University in Ontario. In fact, psychologists are increasingly teaming up with computer scientists to write programs that allow machines to interact with their environments and change from their experiences. The result: Robots can perform tasks that stump typical computers, even though standard computers are born into the world with gobs of information preprogrammed into them. But aside from the practical applications, learning machines are starting to serve psychologists as a new model animal, one whose brain you can simply crack open and watch work.
"It is certainly a fascinating area of research and it's changing the way we think about the role of experience in development," Lewis says.
Most computers are developmentally stunted. An ATM is not going to get faster at counting out cash; your e-mail software won't get better at recognizing spam. That's because programmers imbued them with a set of fixed rules: If an e-mail is not from a known address, then send it to the trash. The computer simply puts such rules into action.
However, computer programmers are increasingly writing programs that can change their own code, says Tony Jebara, PhD, a computer science professor at Columbia University. These programs are often based on "machine learning algorithms"-a handful of relatively simple rules that allow machines to get better at whatever it is they are supposed to do.
The inner workings of these programs are modeled on what we know about the structure of the human brain. Just as we have neurons, the baby robot's program has little units of information analysis that take in data and then produce a signal that affects the next "neuron" down the line. Such artificial neural networks result in systems that are much less likely to break than traditional programs-if one neuron malfunctions the system is relatively unaffected.
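The learning rules and neuron-like units described above can be sketched together as a single artificial neuron that improves with experience. The snippet below is an illustration of the general technique (the classic perceptron rule), not the lab's actual code; the function names and learning rate are illustrative choices.

```python
def fire(weights, bias, inputs):
    # The "neuron" emits 1 if its weighted input sum is positive.
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def train_perceptron(examples, passes=20, rate=0.1):
    # examples: list of (inputs, target) pairs, with target 0 or 1.
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(passes):
        for inputs, target in examples:
            error = target - fire(weights, bias, inputs)
            # The learning rule: nudge each weight in the direction
            # that would have reduced the error just made.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias
```

Trained on a few labeled examples, such as the logical OR of two inputs, the weights settle into values that classify every case correctly; networks like the baby robot's chain many such units together.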
In addition to being more resilient than traditional systems, neural networks are producing human-like learning in machines. For instance, Movellan's baby robot was born knowing only that it should take note of sights and sounds that happen relatively rarely. So the hum of a nearby laptop was not very interesting to the robot-it heard that sound all the time. However, the voice of a graduate student talking to the robot was a special event, so it took note. The robot also knew to link interesting events, associating unusual sights with sounds, and unusual sounds with sights.
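The "take note of rare events" rule can be captured in a few lines. This is a toy sketch of the general idea, not the robot's algorithm; the 10-percent rarity threshold is an arbitrary choice for illustration.

```python
from collections import Counter

class NoveltyDetector:
    """Flags events that occur rarely relative to everything seen so far."""

    def __init__(self, rarity_threshold=0.1):
        self.counts = Counter()
        self.total = 0
        self.threshold = rarity_threshold

    def observe(self, event):
        # Update running frequencies, then flag the event as
        # "interesting" if it is still rare overall.
        self.counts[event] += 1
        self.total += 1
        frequency = self.counts[event] / self.total
        return frequency <= self.threshold
```

Fed a steady stream of laptop hum, the detector stays quiet; a voice heard for the first time comes back flagged as interesting.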
As the research assistants hefted the baby robot around, it took snapshots of the world with a built-in camera. Sometimes human faces would be in the picture, and the robot discovered that people often appeared at the same time it recorded unusual sounds. After just a few minutes, the robot learned that human faces were a particularly interesting aspect of its environment.
This amazed the researchers because previous computer programs were not very good at locating faces in busy backgrounds, says Movellan. Impressively, the baby robot could find a human face in a sea of similarly shaped objects. It could recognize a face in profile and it could locate a face that was partially covered by hair, according to results presented at the 2006 International Conference on Development and Learning, in Bloomington, Ind.
And after a few more hours of training, the baby robot sometimes recognized that a line drawing of a face was the same kind of thing it had seen before, on the heads of graduate students.
The results are similar to those of an experiment with human infants, published in Psychological Science (Vol. 10, No. 5, pages 419-422) in 1999. In this study, Lewis and her colleagues tested infants' preference for faces just minutes after birth. They found that if you show an infant two cards, one with a face that has right-side-up features and one with those features turned upside down, the infant tends to look toward the right-side-up face.
At the time, the most likely explanation was that babies are born preferring face-like images-a tendency with clear evolutionary advantages, says Lewis.
"You want to be quick to orient to faces because there is definitely a survival benefit," Lewis notes. "Those faces are going to provide food for the baby."
But given the recent performance of the baby robot, the possibility that infants rapidly learn about faces seems just as plausible, she says.
"The people who do neural network modeling...are in the business of showing that the blank mind can learn a lot very quickly," she says.
Learning machines are helping psychologists understand not only what people can learn, but also how we do it. Once researchers have built a computer that processes information like a human would, they can run experiments on it that would never work with living humans, says Gedeon Deak, PhD, a cognitive science professor at the University of California, San Diego (UCSD).
"We can simulate different kinds of processing models...and find out which ones most closely simulate the kinds of detailed decisions and errors that humans make," Deak notes.
One study, by Marian Bartlett, PhD, and her colleagues in the UCSD Machine Learning Lab, did just that: The researchers pitted two different face-processing programs against each other to see which worked best.
Linking faces to names has proven very difficult for computer programs. Past programs told computers to analyze the distance between features such as eyes. That approach resulted in computers that were not very good at seeing that two different pictures could be of the same person. A shadow could throw the program off entirely.
Bartlett and her colleagues took a different approach. Instead of giving the computer rules for analysis, they fed the computer hundreds of images. The computer figured out how to identify the images on its own, using one of two tools provided by the researchers.
Both tools required the computer to represent the image as a two-dimensional grid of pixels-much like how the retina registers light. The computer then took note of the brightness of each pixel, and flagged those that were near others of a markedly different brightness. The program used this technique to determine boundaries between different features.
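That boundary-finding step amounts to simple edge detection: compare each pixel's brightness with its neighbors and flag large jumps. A toy version might look like the sketch below; the brightness threshold is an arbitrary illustrative value, and this is the general technique rather than the lab's code.

```python
def edge_map(pixels, threshold=50):
    """Flag pixels whose brightness differs markedly from a
    horizontal or vertical neighbor -- a crude boundary detector."""
    rows, cols = len(pixels), len(pixels[0])
    edges = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Compare each pixel with its right and lower neighbors.
            for dr, dc in ((0, 1), (1, 0)):
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    if abs(pixels[r][c] - pixels[nr][nc]) > threshold:
                        edges[r][c] = True
                        edges[nr][nc] = True
    return edges
```

Run over a grid with a dark region abutting a bright one, the map lights up exactly along the border between them.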
However, one of the tools, known as Eigenfaces, only allowed the computer to make simple associations between pixels, while the other tool-Independent Component Analysis (ICA)-gave the computer the ability to make higher-order associations.
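Eigenfaces is essentially principal component analysis applied to face images: each image becomes a long vector of pixel brightnesses, and the program keeps only the few directions along which those vectors vary most. A bare-bones sketch of finding the single strongest direction by power iteration follows; real implementations use optimized linear-algebra libraries, so this is illustrative only.

```python
def top_eigenface(images, steps=50):
    # images: list of equal-length pixel-brightness vectors.
    n = len(images[0])
    mean = [sum(img[i] for img in images) / len(images) for i in range(n)]
    centered = [[img[i] - mean[i] for i in range(n)] for img in images]
    v = [1.0] * n  # starting guess for the direction
    for _ in range(steps):
        # One power-iteration step: v <- X^T (X v), then renormalize.
        proj = [sum(row[i] * v[i] for i in range(n)) for row in centered]
        v = [sum(proj[k] * centered[k][i] for k in range(len(centered)))
             for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    return v  # the direction of greatest variance: the first "eigenface"
```

ICA goes a step further than this: instead of directions of greatest variance, it seeks components that are statistically independent of one another, which is the "higher-order associations" advantage described above.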
Both learning tools resulted in a program that was better at recognizing faces than past attempts, but the ICA version worked best. And when ICA did make mistakes, they were similar to the ones humans would make, according to results in press at Neurocomputing. For example, when the program was trained with a set of faces that were mostly white, it had more difficulty distinguishing between Asian faces, just as people often have trouble distinguishing between individuals of other races.
"There are a number of effects in human-face perception research that are consistent with our model," Bartlett says.
The results suggest that human brains may follow a rule similar to ICA, whereby they take in patterns of light and dark and perform high-level statistical analysis to determine which dark patches form noses and which are just shadows, something we probably learn to do as infants, Deak says.
"All normal developing humans are expert face processors, and this [study] suggests that we do incredibly powerful computations to identify faces," he notes.
Computers that can identify individuals may have anti-terrorism applications-allowing security cameras to flag people entering a building who are on a known-criminal list, for instance, or even identifying someone who had never been in the building before. But Deak is more excited by the contributions that machine learning can make to psychological research.
For instance, once you have a model of human-face recognition, you can change a little bit of the program and see whether the resulting mistakes look like human malfunctions-a process that could provide insight into disorders like autism. Similarly, researchers can investigate how a baby robot develops when it's neglected or abused-something they would obviously never try with real infants.
However, computers have a lot of catching up to do before they can approach the capabilities of the human brain, says Deak. And even with state-of-the-art computers, programmers will need findings from neuroscience and behavioral science to know if their models are true-to-life, says Jebara. But the biggest hurdle-and the point of the entire endeavor-will be figuring out the underlying program that allows humans to learn so much, despite having relatively little knowledge to begin with, he notes.
"What are the guiding principles that find meaningful patterns in the world-that is the Holy Grail for the future of machine learning and even human intelligence," says Jebara.