
VISLab's Cross-Modal Analysis of Signal and Sense (Francis Quek)
This project is based at the Vision Interfaces and Systems Laboratory (VISLab) at Wright State University. Its aim is to build a large database of video annotated with information about gesture, speech, and gaze, which will then be used to empirically test theories of multimodal communication. Process descriptions and downloadable tools and data are available. It is a collaborative project with participants from Wright State, the University of Chicago, Purdue, the University of Illinois at Chicago, the University of Wisconsin-Milwaukee, Reed College, and National Yang-Ming University, Taiwan [NSF award].