Abstract
The development of social communication skills in children relies on multimodal aspects of communication such as gaze, facial expression, and gesture. We introduce a multimodal learning environment for social skills that uses computer vision to estimate children's gaze direction, processes gestures from a large multi-touch screen, estimates the users' affective state in real time, and generates interactive narratives with embodied virtual characters. We also describe how the structure underlying this system is currently being extended into a general framework for the development of interactive multimodal systems. © 2010 ACM.
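As a rough illustration of how such a multimodal pipeline might be wired together, the sketch below fuses simulated gaze, touch-gesture, and affect inputs into a simple choice of the next narrative event. All names here (GazeEstimate, AffectEstimate, choose_narrative_event, the fusion rules, and the simulated sensor readers) are hypothetical placeholders, not the system described in the paper.

```python
import random
from dataclasses import dataclass

# NOTE: every class and function below is an illustrative placeholder,
# not the architecture or API of the system described in this paper.

@dataclass
class GazeEstimate:
    target: str        # e.g. "character", "screen_left", "away"
    confidence: float  # 0..1

@dataclass
class AffectEstimate:
    valence: float     # -1 (negative) .. +1 (positive)
    arousal: float     #  0 (calm)     ..  1 (excited)

def read_gaze() -> GazeEstimate:
    """Stand-in for a computer-vision gaze estimator."""
    return GazeEstimate(target=random.choice(["character", "screen_left", "away"]),
                        confidence=random.uniform(0.5, 1.0))

def read_touch_gestures() -> list:
    """Stand-in for gesture events from a large multi-touch screen."""
    return random.sample(["tap", "drag", "pinch"], k=random.randint(0, 2))

def read_affect() -> AffectEstimate:
    """Stand-in for a real-time affect estimator (e.g. from facial expression)."""
    return AffectEstimate(valence=random.uniform(-1, 1), arousal=random.uniform(0, 1))

def choose_narrative_event(gaze, gestures, affect) -> str:
    """Toy fusion rule: pick the embodied character's next action."""
    if gaze.target == "away" and gaze.confidence > 0.7:
        return "character_calls_for_attention"
    if affect.valence < -0.5:
        return "character_offers_encouragement"
    if "tap" in gestures or "drag" in gestures:
        return "character_reacts_to_touch"
    return "continue_current_story_beat"

def main(steps: int = 5) -> None:
    # One iteration per "frame": read all modalities, then pick a narrative event.
    for step in range(steps):
        gaze = read_gaze()
        gestures = read_touch_gestures()
        affect = read_affect()
        event = choose_narrative_event(gaze, gestures, affect)
        print(f"[step {step}] gaze={gaze.target} gestures={gestures} "
              f"valence={affect.valence:+.2f} -> {event}")

if __name__ == "__main__":
    main()
```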
Original language | English |
---|---|
Title of host publication | Proceedings of the International Conference on Multimedia |
Place of Publication | New York |
Publisher | Association for Computing Machinery |
Pages | 1111-1114 |
Number of pages | 4 |
ISBN (Print) | 9781605589336 |
DOIs | |
Publication status | Published - 2010 |
Event | 18th ACM International Conference on Multimedia, ACM Multimedia 2010 - Firenze, Italy, Duration: 25 Oct 2010 → 29 Oct 2010 |
Conference
Conference | 18th ACM International Conference on Multimedia, ACM Multimedia 2010 |
---|---|
Abbreviated title | MM'10 |
Country/Territory | Italy |
City | Firenze |
Period | 25/10/10 → 29/10/10 |
Keywords
- technology-enhanced learning