Work in progress on the reactive audiovisual environment.
Experimental trial of low-cost sensors paired with an ML gesture-detection model. The lights and sounds react to the position and intensity of movement.
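As a rough illustration of the mapping, the sketch below derives a movement-intensity value from raw 3-axis accelerometer samples and scales it onto a light-brightness range. The function names, the 0-255 brightness range, and the `max_intensity` calibration constant are assumptions for the example, not part of the actual installation code.

```python
import math

def movement_intensity(samples):
    """Mean magnitude of 3-axis accelerometer readings.

    samples: iterable of (x, y, z) tuples in m/s^2.
    """
    return sum(math.sqrt(x * x + y * y + z * z) for x, y, z in samples) / len(samples)

def intensity_to_brightness(intensity, max_intensity=30.0):
    """Map an intensity value onto a 0-255 brightness, clamped at the top.

    max_intensity is a hypothetical calibration constant: the movement
    level that should drive the lights at full brightness.
    """
    scaled = min(intensity / max_intensity, 1.0)
    return int(scaled * 255)

# Example: a single sample of magnitude 5 m/s^2 maps to a dim light.
level = intensity_to_brightness(movement_intensity([(3.0, 4.0, 0.0)]))
```

In practice the gesture-detection model would sit between the sensor stream and this kind of mapping, but the direct intensity path already gives a usable baseline for testing the lights and sounds.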
Character preparation
The XSens capture software records kinematic information (spatial position, acceleration, and rotation) over time for each of the dancers' joints.
We can then export this information in FBX format, which enables us to import it into Unity3D and map the joints to the skeleton of a character produced externally in Blender (or taken from an existing library).
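To make the recorded quantities concrete, the sketch below estimates a joint's acceleration from its sampled positions using a second-order central difference. This is a generic numerical illustration of how acceleration relates to position over time, not XSens's actual processing; the function name and data layout are assumptions.

```python
def joint_acceleration(positions, dt):
    """Estimate per-axis acceleration from uniformly sampled 3-D positions.

    positions: list of (x, y, z) tuples sampled every dt seconds.
    Returns one acceleration tuple per interior sample, using the
    central second difference (p[i+1] - 2*p[i] + p[i-1]) / dt^2.
    """
    acc = []
    for i in range(1, len(positions) - 1):
        acc.append(tuple(
            (positions[i + 1][k] - 2 * positions[i][k] + positions[i - 1][k]) / (dt * dt)
            for k in range(3)
        ))
    return acc
```

For example, positions sampled from constant-acceleration motion along one axis recover that acceleration exactly (up to floating-point error).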
More information on this process can be found here.