The project aims to develop algorithms for analyzing the multimodal "social signals" that emerge among several subjects engaged in an interaction scenario. A fundamental issue in the recognition of social signals is that, by definition, they do not exist in isolation but arise from the interaction of two or more agents. This calls for a context-aware approach that takes local environmental information into account, and game theory appears to be a natural, yet unexplored, framework for doing so. The main idea behind the project is that the detected social cues, their evolution, and their interplay among the interactants can be modeled as strategies in a multi-agent evolutionary signaling game, thereby allowing us to determine the agents' interactions and intentions, as well as the group dynamics as a whole. These cues will be captured from the multimodal audio/visual appearance of the interactants, including, for example, gaze, gesturing, body posture, global/local body motion, and speech features.
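To make the signaling-game idea concrete, the following is a minimal, illustrative sketch (not the project's actual model): a two-state Lewis signaling game evolved with two-population replicator dynamics. Sender strategies map observed states to signals and receiver strategies map signals to acts; both populations receive payoff 1 when the receiver's act matches the sender's state. All names and parameters here are assumptions made for the sake of the example.

```python
import numpy as np
from itertools import product

# Illustrative sketch only: 2-state Lewis signaling game under
# two-population replicator dynamics. Parameters are hypothetical.

N = 2  # number of states == signals == acts
senders = list(product(range(N), repeat=N))    # each maps state -> signal
receivers = list(product(range(N), repeat=N))  # each maps signal -> act

# U[i, j]: expected payoff of sender strategy i against receiver strategy j,
# with uniformly distributed states; payoff 1 when the act matches the state.
U = np.array([[np.mean([1.0 if r[s[st]] == st else 0.0 for st in range(N)])
               for r in receivers] for s in senders])

def replicator_step(x, y, dt=0.1):
    """One discrete-time replicator update for sender mix x and receiver mix y."""
    fx = U @ y    # fitness of each sender strategy against the receiver population
    fy = U.T @ x  # fitness of each receiver strategy (common-interest game)
    x = x + dt * x * (fx - x @ fx)  # grow strategies that beat the population average
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum()

rng = np.random.default_rng(0)
x = rng.dirichlet(np.ones(len(senders)))    # random initial sender population
y = rng.dirichlet(np.ones(len(receivers)))  # random initial receiver population
for _ in range(2000):
    x, y = replicator_step(x, y)

# From generic initial conditions the two populations typically coordinate on
# a "signaling system", so expected communication success approaches 1.
success = float(x @ U @ y)
print(round(success, 2))
```

In the project's setting, the "signals" would be the detected multimodal cues (gaze, posture, speech features) rather than abstract symbols, and the evolving population mixes would stand in for the interactants' changing behavioral strategies over the course of the interaction.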
In this project, ECLT will collaborate with the Samsung Collaborative R&D Dept.