Research Infrastructure for Multimodal Language Processing in Situated Human-Robot Dialogue
Supported by the National Science Foundation (3/1/2010 - Present)
In situated human-robot dialogue, a robot and its human partner are co-present in a shared physical environment. The spatial relations among the robot, the human, and their surroundings, together with the dynamic nature of those surroundings, strongly influence how the two interact and how the robot accomplishes its tasks. Because both partners are physically embodied, speakers in such communication make extensive use of non-verbal modalities (e.g., eye gaze and gestures) to manage the conversation and refer to the shared environment. These characteristics make automated interpretation of human language extremely challenging. This project aims to develop a research infrastructure that promotes understanding of human multimodal language behavior and facilitates the development, integration, and evaluation of multimodal language processing technology for situated human-robot dialogue.
Selected Papers:
- Ambiguities in Spatial Language Understanding in Situated Human Robot Dialogue. C. Liu, J. Walker, and J. Y. Chai. AAAI 2010 Fall Symposium on Dialog with Robots, Arlington, VA, November 2010.