Interview with H. Chad Lane of the Institute for Creative Technologies at USC concerning his research and work on virtual humans
By Fran C. Blumberg, PhD Associate Professor
Division of Psychological & Educational Services
What exactly is a virtual human? How are they used and in what contexts?
A virtual human is a kind of embodied conversational agent that seeks to simulate key elements of human communication, reasoning, and behavior. Although implementations vary in sophistication depending on the need, a virtual human is generally able to (1) engage in conversations with users in natural language, (2) use non-verbal communication to achieve rapport with users and convey emotions, (3) reason about and appraise its environment (to make decisions and formulate beliefs), and (4) present a highly realistic appearance. To achieve these goals, a variety of techniques from artificial intelligence (AI) are used, including natural language processing, cognitive modeling, emotional modeling, and planning.
Virtual humans are frequently used as role players in specific training contexts, such as language learning, intercultural development, clinical interviewing, and investigative interviewing (including police training). They can also be used in more traditional roles such as tutor, coach, or as a provider of information (so-called pedagogical agents). Because virtual humans are complex software systems, they are often expensive to build. This is gradually changing, however, with the emergence of authoring tools that consolidate functionalities and provide simplified interfaces for creating dialogue content and corresponding animations. Virtual human research relies on a great deal of foundational work in psychology (e.g., appraisal theory), involves a significant amount of software engineering and integration, and because it allows for tight controls on communicative behaviors, produces a steady stream of empirical findings on rapport, social interaction, and learning.
Can you tell us briefly about your work developing virtual humans?
I am a research scientist at the USC Institute for Creative Technologies (ICT: www.ict.usc.edu), and my particular focus is on the application of AI techniques to educational problems. One of the primary goals of our group is to build intelligent tutoring systems (ITS) that help learners interact with and learn from virtual humans. This includes projects that involve virtual humans as role players and others that develop pedagogical agents. For example, we built an ITS that helps learners acquire a new culture by assessing their performance with a virtual human role player. This system assessed learner actions, gave feedback on performance, and supported metacognitive skills such as self-assessment and reflection. In many ways, I am a consumer of the work of the virtual human group at ICT, which may be the largest collection of researchers in one place working on virtual humans in the world. It consists of research groups focused on natural language processing, nonverbal behaviors, cognitive and emotional modeling, animation, and computer graphics. I’m really fortunate to work with such talented people and to be able to leverage their excellent work to try and push educational technologies to their limits.
From the research your team has done, how have people responded to the virtual humans? For example, what were children’s responses to the Ada Lovelace and Grace Hopper virtual human guides at the Museum of Science in Boston?
People respond to virtual humans in generally consistent ways – the first reaction is typically a little bit of disbelief combined with intense curiosity (see image of the formal opening of the Twins in Boston below). Next, people seem to want more and continue asking questions, which typically turns into wanting to know the limits and trying to “break” the system by asking questions that are far out of scope. If you have used Siri on an iPhone, you’ve probably seen people asking ridiculous questions – this is the same phenomenon. There is strong evidence that people treat virtual humans as real (see Reeves & Nass, 1996, The Media Equation, as well as more recent work from Gratch and colleagues), and so when people realize this, they seem to want to find a way to show that real humans are superior by asking questions that reveal virtual human inadequacies. I do believe people feel a sense of pleasure when they can stump a virtual human. Honestly, it is kind of fun and actually isn’t that hard to do!
For the Twins, Ada and Grace, the staff observed that the exhibit tended to “stop visitors in their tracks”. In informal learning you sometimes hear that kids “run in, run around, and run out,” failing to learn much. If we can stop that pattern with highly engaging and novel exhibits, then we’re probably moving in the right direction. The (external) evaluation confirmed this to a certain extent, and also found that people increased their awareness, knowledge, and acceptance of virtual humans after meeting the Twins. The evaluation was conducted by the Institute for Learning Innovation and will be released publicly very soon (see informalscience.org; search for Foutz & Ancelet).
One last thing to note about virtual human research: if something is off, even just a little thing, it can be costly. If a virtual human’s gaze is slightly astray, or its blinking or breathing patterns are odd, it can be very distracting to learners. Research on high-fidelity learning environments is sometimes criticized, but in this case evidence is mounting that these distractions can have a negative impact by occupying learners’ valuable working memory. In a nutshell, virtual humans need to look and act like real humans because that’s what people expect – anything outside of that can hinder the experience.
Can you tell us briefly about your latest project involving the use of virtual humans?
There are many virtual human projects at ICT spanning many different applications and research questions. One of the more recent is a project involving user sensing and head tracking – this work seeks to recognize nonverbal behaviors such as nodding and smiling so the virtual human can respond appropriately (Louis-Philippe Morency leads these efforts).
In my group, though, we are mostly focused on the educational application of these underlying technologies. Right now we have two ongoing projects that involve virtual humans. The first is to build authoring tools for intelligent tutors that help you learn with a virtual human. In essence, we are trying to take ourselves out of the loop – when an educator wants to use a virtual human for a role-playing training task, we are working on tools that will allow them to create the character, the dialogue, and the pedagogical content needed to make it effective. This project is called Situated Pedagogical Authoring, and our approach is to allow an expert to interact with a virtual human just as a learner would, but indicate along the way which actions are correct or incorrect, and create hints and feedback messages that would be delivered to a learner at specific times.
The second project is quite different and focuses on pediatric obesity. We are building a small pedagogical agent that will inhabit a game-based environment for gardening – the goal of this project, called Virtual Sprouts (www.virtualsprouts.com), is to teach at-risk children the science of gardening and promote dietary behavior change. This agent will be a more traditional pedagogical agent and probably will not qualify for virtual human status (no speech input, and it might not even be human), but much of our basic research on tutoring and coaching will still be relevant. This project is a highly interdisciplinary effort including researchers from behavioral psychology, cinema and television, nutrition, and education, along with teachers and master gardeners.
What do you see as the future for work with virtual humans? In what settings might we expect to see them appear?
We like to ask visitors to the Boston Museum of Science this question. It’s hard to answer with confidence, but if current research is going to transition into the mainstream, we will probably see virtual humans used more in educational and entertainment settings. A great deal of work in the last decade has gone into virtual patients for training doctors and nurses, for example. This has included support for diagnosis skills as well as bedside manner. We view our work in Boston as only the beginning for informal learning. Compare the usual method of providing information – a printed sign – to an interactive experience with a virtual character, perhaps even a historical figure like Thomas Edison. Adding the ability to ask questions and hear explanations while focused on the content of the exhibit sounds to me like something we have to do. If informal learning doesn’t at least try to keep up with the powerful entertainment technologies available in homes, the future will probably not be bright. We hope that virtual humans can play a role in the future of informal education. If you haven’t seen the clip from The Time Machine of Orlando Jones as the virtual librarian, this was an inspiration to us: http://www.youtube.com/watch?v=Rkc09sTiS7g
Where might people in Division 46 learn more about your work at ICT?
H. Chad Lane’s research involves applications of artificial intelligence to educational problems. He joined the USC Institute for Creative Technologies in 2004, where his work has focused on issues related to learning in game-based and informal learning environments. The central aim of this effort has been to augment systems with automated guidance and feedback, in order to ensure that learners acquire the knowledge that the systems intend to teach. Recently, Lane was a co-PI on the Responsive Virtual Human Museum Guides project, where he directed the development of Coach Mike, a pedagogical agent that teaches programming at the Boston Museum of Science. Because of the interdisciplinary nature of his research, Lane has ongoing collaborations with cognitive and educational psychologists from the Army Research Institute (ARI), the USC Rossier School of Education, and the USC Keck School of Medicine. In addition, he has worked closely with U.S. Army instructors and subject-matter experts both to understand their pedagogical approaches and to integrate computer-based training support into Army programs of instruction. The tutoring team, led by Lane and Mark Core, will next shift their focus to pedagogical authoring tools for virtual-human-based systems. The approach, called situated pedagogical authoring, will allow a non-expert to populate models of expert performance and feedback by interacting with a system as the learner will see it. Authors will be asked to simulate expert and novice performance, and from this the system will infer appropriate intelligent tutoring interventions. Lane earned his Ph.D. in computer science from the University of Pittsburgh in 2004 and has over 40 publications in AI and the learning sciences. For the past two years he has served on the executive committee of the AIED Society and on the senior program committees for the AIED and ITS conferences.
He is also serving on the advisory board (of 8 members) and was an editor for the prestigious NSF Cyberlearning Summit in January 2012.