The amazing thing about virtual environments is that, with a bit of imagination and creativity, you can create your own world. For the past few months, I’ve been playing with the concept of emulating gestural interactions with objects from the real world and perceiving the same auditory feedback in the virtual realm, yet without the object itself.
In a previous work, I used Integra Live and the Myo armband to simulate the interaction with a virtual piece of paper, which, when crumpled, gave the same auditory feedback as a real crumpled piece of paper.
Inspired by this work, I continued exploring this direction during the XTH Sense Creative Lab. For this occasion, my goal was to emulate the auditory feedback generated by interacting with a tea cup filled with water. In about two days I put together the demo below; pretty neat imho!
A few words about my creation process. It was my first time using the XTH Sense, so I first had to experiment with the device to get familiar with its capabilities. After a couple of hours spent testing and observing how the data varied in relation to my hand and arm movements, I made a few observations.
First, the XTH Sense, which captures motion, direction and orientation data (via an integrated 9-DoF IMU) as well as muscle sound (also known as the mechanomyogram, or MMG), engages the user differently than other seemingly similar technologies such as EMG- and camera-based devices. This is due to the nature of the MMG signal itself. As we can see from the picture below, during a static contraction, i.e., when muscles are contracted without further limb movement, MMG signals decay differently than EMG signals. Thus, in order to sustain the amplitude of the signal during a static contraction, the user needs to keep slightly moving her hand or arm.
Picture: Esposito, Fabio, et al. (2016), “Electromechanical delay components during skeletal muscle contraction and relaxation in patients with myotonic dystrophy type 1”, Neuromuscular Disorders, 26(1), 60–72.
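This decay is easy to see with a simple RMS envelope follower. The sketch below (synthetic signal, invented constants; nothing here is taken from the actual XTH Sense data stream) tracks the amplitude of a decaying MMG-like burst, which is what a static contraction looks like once movement stops:

```python
import numpy as np

def block_rms(signal, block=512):
    """Per-block RMS amplitude: one envelope value per block of samples."""
    n = len(signal) // block
    frames = signal[: n * block].reshape(n, block)
    return np.sqrt((frames ** 2).mean(axis=1))

# A decaying oscillation stands in for MMG during a static contraction:
t = np.linspace(0, 1, 4096)
mmg = np.sin(2 * np.pi * 30 * t) * np.exp(-4 * t)
envelope = block_rms(mmg)
# The envelope keeps falling, so sustaining the output amplitude
# requires fresh (slight) movement from the user.
```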
Second, because the bioacoustic sensors are highly sensitive, I was able to make the XTH Sense learn my personal biological & expressive signature through various gestural interactions in less than 30 minutes. Once I learned the device’s sensitivity in tracking my individual muscle movements, my experience using the XTH Sense became really engaging and enjoyable.
After several iterative tests, I found the best way to create the auditory feedback I had in mind: modulating the amplitude of a water-flow sound file by directly mapping MMG data packets to its gain.
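In the patch this mapping happens inside Pd; purely as an illustration of the idea, here is a minimal sketch (all names and values are invented) where an MMG envelope is held per control block and used as a gain on a water-sound buffer, with a small floor so the water never disappears completely:

```python
import numpy as np

def modulate_amplitude(water, mmg_env, floor=0.05):
    """Scale a water-sound buffer with an MMG envelope.

    mmg_env holds one value per control block; each value is held
    for a block of audio samples and used as a gain.
    (Buffer lengths are assumed to divide evenly.)"""
    block = len(water) // len(mmg_env)
    gain = np.repeat(mmg_env, block)
    gain = floor + (1.0 - floor) * gain / max(gain.max(), 1e-9)
    return water * gain

water = np.ones(1024)                   # stand-in for a water-flow buffer
env = np.array([0.0, 0.5, 1.0, 0.25])  # stand-in MMG envelope values
out = modulate_amplitude(water, env)
```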
In order to process the water sound’s timbre in relation to my arm movements, I also mapped data from the IMU sensor to delay parameters using a support vector machine (SVM), a supervised learning algorithm, implemented with ml.lib for Pd.
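The actual classification runs on ml.lib objects inside the Pd patch; as a rough stand-in only, the scikit-learn sketch below shows the general shape of such a mapping. The feature layout, preset values and class labels are all invented for illustration and are not the ones used in the project:

```python
import numpy as np
from sklearn.svm import SVC

# Toy training set: 3-axis orientation features -> delay preset index.
X = np.array([
    [0.9, 0.1, 0.0], [0.8, 0.2, 0.1],   # arm raised  -> preset 0
    [0.1, 0.9, 0.0], [0.2, 0.8, 0.1],   # arm level   -> preset 1
    [0.0, 0.1, 0.9], [0.1, 0.2, 0.8],   # arm lowered -> preset 2
])
y = np.array([0, 0, 1, 1, 2, 2])

# Hypothetical (delay time in seconds, feedback) presets:
DELAY_PRESETS = [(0.05, 0.2), (0.15, 0.4), (0.30, 0.6)]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# An incoming IMU reading near the "arm raised" pose picks preset 0:
preset = int(clf.predict([[0.85, 0.15, 0.05]])[0])
time_s, feedback = DELAY_PRESETS[preset]
```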
If you’re interested in trying this out at home with your own XTH Sense, you can find the Pd patch created for my project here. The patch is designed to work with the new wireless XTH Sense, but if you don’t have the new XTH Sense yet, you can modify the patch to make it work with the previous wired XTH Sense.
The XTH creative lab was great fun; I absolutely recommend taking part in the next labs that the folks at XTH will be presenting in the near future. I look forward to conducting some more experiments with the XTH Sense and sharing them with the community when the XTH Platform is live!
Got any questions about this project? Drop me a line in the comments below! No need to sign in, just leave your question or idea and I’ll get back to you.
Here’s my website with more research insights and experiments.