Latest revision as of 19:45, 29 September 2007
VISLab's Cross-Modal Analysis of Signal and Sense (Francis Quek)
This project is based at the Vision Interfaces and Systems Laboratory (VISLab) at Wright State University. Its aim is to create a large database of video annotated with information about gesture, speech, and gaze, which will then be used to empirically test theories of multimodal communication. Process descriptions, downloadable tools, and data are available. It is a collaborative project with participants from Wright State, the University of Chicago, Purdue, the University of Illinois at Chicago, the University of Wisconsin–Milwaukee, Reed College, and National Yang-Ming University, Taiwan [NSF award].