
Collaborative Model in Interpreting Vague Descriptions in Situated Dialogue

Supported by the Office of Naval Research (1/1/2011 - 12/31/2015)

Our perception of the environment often leads us to use imprecise language, e.g., a tall building, a small cup, a car to the left. While such imprecise language poses little difficulty for humans, it can be challenging for automated agents to interpret, especially in situated interaction. Although an artificial agent (e.g., a robot) and its human partner are co-present in a shared environment, their perceptual capabilities (e.g., the ability to recognize objects in the surroundings) are significantly mismatched, and so their knowledge and representations of the shared world differ substantially. When a shared perceptual basis is missing, grounding references, especially vague language descriptions, to the environment becomes difficult. Therefore, a foremost question is how partners with mismatched perceptual capabilities collaborate with each other to achieve referential grounding. This project has developed a simulated environment to address this question.
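As a purely illustrative sketch (not the project's actual model), the toy Python snippet below shows what a single referential-grounding step might look like: an agent ranks the objects in its own, possibly imperfect, perception against a vague description such as "the tall building on the left," combining recognition confidence with graded attribute scores. All class and function names here are hypothetical.

```python
# Toy sketch of referential grounding under imperfect perception.
# Names and scoring functions are illustrative assumptions, not the project's model.

from dataclasses import dataclass


@dataclass
class PerceivedObject:
    label: str         # what the agent's vision system thinks the object is
    confidence: float  # recognition confidence (a robot's may be low where a human's is high)
    height: float      # estimated height in meters
    x: float           # horizontal position: 0.0 = far left, 1.0 = far right


def tallness(obj: PerceivedObject, scene: list[PerceivedObject]) -> float:
    """Graded 'tall' score: height relative to the tallest object in the scene."""
    max_h = max(o.height for o in scene)
    return obj.height / max_h if max_h > 0 else 0.0


def leftness(obj: PerceivedObject) -> float:
    """Graded 'to the left' score: closer to the left edge scores higher."""
    return 1.0 - obj.x


def ground_reference(scene, target_label, attribute_scores):
    """Rank candidate objects by label match, recognition confidence, and vague attributes."""
    scored = []
    for obj in scene:
        if obj.label != target_label:
            continue
        score = obj.confidence
        for attr in attribute_scores:
            score *= attr(obj)
        scored.append((score, obj))
    return max(scored, key=lambda t: t[0], default=(0.0, None))


# The robot's (partial, noisy) perception of a scene the human describes vaguely:
robot_view = [
    PerceivedObject("building", 0.9, height=40.0, x=0.2),
    PerceivedObject("building", 0.6, height=15.0, x=0.7),
    PerceivedObject("tree",     0.4, height=12.0, x=0.1),
]

score, best = ground_reference(
    robot_view, "building",
    [lambda o: tallness(o, robot_view), leftness],
)
print(f"Grounded 'the tall building on the left' to {best} (score={score:.2f})")
```

In the actual project the human and the agent must resolve such references collaboratively through dialogue, precisely because their two "views" of the scene (and hence the scores above) do not agree.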


Here is an example of our experimental setup for studying collaboration between conversation partners with mismatched visual perceptual capabilities.

