Dr. Shaolin Qu
received his Ph.D. in 2009. Currently he is a software
engineer at Google in Mountain View, California.
Incorporating Non-Verbal Modalities in Spoken Language
Systems. S. Qu, Ph.D. Dissertation, 2009.
Acquisition for Situated Dialogue in a Virtual World. S. Qu and
J. Y. Chai. Journal of Artificial Intelligence Research, Vol. 37,
pp. 347-377, March 2010.
The Role of Interactivity in Human Machine
Conversation for Automated Word Acquisition. S. Qu and
J. Y. Chai. The 10th Annual SIGDIAL Meeting on Discourse and
Dialogue (SIGDIAL), London, UK, September 2009.
Temporal and Semantic Information with Eye Gaze for Automatic Word
Acquisition in Multimodal Conversational Systems. S. Qu and
J. Chai. Proceedings of the 2008 Conference on Empirical Methods in
Natural Language Processing (EMNLP). Honolulu, October 2008.
Attention: The Role of Deictic Gesture in Intention Recognition in Multimodal Conversational
Interfaces. S. Qu and J. Chai. ACM 12th International Conference on Intelligent User Interfaces
(IUI). Canary Islands, Jan 13-17, 2008.
Exploration of Eye Gaze in Spoken Language Processing for Multimodal Conversational Interfaces. S. Qu
and J. Chai. 2007 Meeting of the North American Chapter of the Association for Computational Linguistics
(NAACL-07), Rochester, NY, April 2007.
Principles in Robust Multimodal Interpretation. J. Chai, Z. Prasov, and
S. Qu. Journal of Artificial Intelligence Research, Vol. 27, pp. 55-83, 2006.
Modeling based on Non-verbal Modalities for Spoken Language Understanding. S. Qu and
J. Chai. ACM 8th International Conference on Multimodal Interfaces
(ICMI), pp. 193-200, Banff, Canada, November 2-4, 2006.
Salience Driven Approach to Robust Input Interpretation in Multimodal Conversational
Systems. J. Chai and S. Qu. Conference on Human Language
Technology and Empirical Methods in Natural Language Processing
(HLT/EMNLP), pp. 217-224, Vancouver, Canada, October 6-8, 2005.