Multimodal interactive systems for self-experimentation

Choreomorphy


Choreomorphy is a Unity-based interactive application that supports reflective dance improvisation through motion-capture (MoCap) technologies. The design idea is that different avatars and visualisations of movement highlight different aspects of it and can trigger different qualities and patterns of moving, which is meaningful from a pedagogic as well as a creative and aesthetic perspective. Wearing an inertial motion capture suit, users can visualise their movements in real time as a 3D avatar with customisable motion effects, letting them focus on specific aspects of the movement such as traces or volumetric space. The tool lets the user change and customise the visualisation by switching among different avatars and settings in real time, facilitating self-reflection and experimentation. The application can also load pre-recorded MoCap data and play it back with a variety of avatars, environments and effects, or even in augmented reality through the HoloLens.
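Choreomorphy itself is implemented in Unity, so the following is only an illustrative sketch of one of the visualisation ideas mentioned above: a movement "trace" effect that keeps a sliding window of recent joint positions so a renderer can draw a fading trail behind the avatar. The class and parameter names are hypothetical, not taken from the actual application.

```python
from collections import deque

class MovementTrace:
    """Keeps a sliding window of recent joint positions so a renderer
    can draw a fading trail behind the moving avatar."""

    def __init__(self, max_points=60):
        self.points = deque(maxlen=max_points)

    def update(self, position):
        """Record the latest (x, y, z) position of a tracked joint."""
        self.points.append(position)

    def segments(self):
        """Return (start, end, alpha) triples; older segments fade out."""
        pts = list(self.points)
        n = len(pts) - 1
        return [(pts[i], pts[i + 1], (i + 1) / n) for i in range(n)]

# Feed five frames of a hand joint moving along x:
trace = MovementTrace(max_points=4)
for x in range(5):
    trace.update((float(x), 0.0, 0.0))

# Only the 4 most recent points are kept, so the oldest segment starts
# at x=1, and the newest segment is fully opaque (alpha = 1.0).
print(trace.segments()[0][0])   # (1.0, 0.0, 0.0)
print(trace.segments()[-1][2])  # 1.0
```

Bounding the trail with a fixed-size deque is what keeps such an effect cheap enough to redraw every frame while the dancer moves.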

Low-end VR platform


This web-based visualisation layer turns MoCap recordings into an immersive VR experience on a common smartphone and can be placed on top of other applications. The platform supports head-orientation tracking and includes a standard avatar for playing back MoCap recordings, a system for watching videos on virtual walls, and customisable 3D environments.
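Head-orientation tracking of this kind boils down to turning the phone's yaw and pitch readings into a camera view direction. A minimal sketch, assuming an OpenGL-style convention in which the viewer looks down the -z axis at rest (the actual platform's conventions may differ):

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert head orientation (yaw, pitch in degrees) into a unit
    view-direction vector: yaw rotates about the vertical axis, pitch
    tilts up/down. At yaw=0, pitch=0 the viewer looks along -z."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

# Looking straight ahead:
print(view_direction(0, 0))   # (0.0, 0.0, -1.0)
# Turning the head 90° to the right points the camera along +x:
print(view_direction(90, 0))  # approximately (1.0, 0.0, 0.0)
```

In a browser this vector would be recomputed every frame from the device-orientation events and fed to the VR camera.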

Sonification tool


The tool is designed for real-time sonification of movement qualities. Multimodal feedback is fundamental for highlighting details or different aspects of a dancer's movement in a variety of contexts, such as rehearsal, performance or choreographic production. While a visualisation may offer more information, a dancer cannot focus on a screen while performing: sonification instead provides real-time, responsive feedback on the movement without causing distraction. The sonification tool relies on a movement quality library: while different sensors (e.g., Kinect V2, x-OSC IMU, MYO) capture the dancer's movements and positions on stage, several EyesWeb-based analysis modules analyse them and stream the extracted qualities to a sonification environment (e.g., SuperCollider, Max) that maps movement qualities to various elements of sonification.
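The last step of the pipeline, mapping extracted qualities onto sound parameters, can be sketched as follows. The quality names, OSC addresses and parameter ranges here are illustrative assumptions, not the actual library's vocabulary; the source does not specify the streaming protocol, though OSC-style messages are a common choice for SuperCollider and Max.

```python
def map_quality(value, out_min, out_max, in_min=0.0, in_max=1.0):
    """Linearly map a movement-quality value (e.g. fluidity in [0, 1])
    onto a synthesis parameter range, clamping out-of-range input."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def quality_messages(qualities):
    """Turn a dict of extracted qualities into OSC-style (address, value)
    pairs a synthesis environment could consume."""
    msgs = []
    if "fluidity" in qualities:
        # smoother movement -> darker sound, jerkier movement -> brighter
        msgs.append(("/synth/cutoff",
                     map_quality(qualities["fluidity"], 2000.0, 200.0)))
    if "impulsivity" in qualities:
        msgs.append(("/synth/amp",
                     map_quality(qualities["impulsivity"], 0.1, 1.0)))
    return msgs

print(quality_messages({"fluidity": 0.5, "impulsivity": 1.0}))
# [('/synth/cutoff', 1100.0), ('/synth/amp', 1.0)]
```

Keeping each mapping a small, declarative rule like this is what makes it easy to re-map qualities to different sonic elements between rehearsal and performance.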