‘Magic’ Carpet Guesses Human Poses

A new tactile sensing carpet from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) can guess human poses without using cameras, a step toward self-powered personalized healthcare, gaming and smart homes.

Previous research has explored the use of single RGB cameras, wearable omnidirectional cameras and regular webcams, but these approaches share the problems of privacy concerns and camera occlusion.

The CSAIL team used cameras only to build the dataset the system was trained on, capturing just the moments when a person performed an activity. To infer the pose, a person steps onto the carpet and performs an action; a deep neural network, drawing on the tactile information alone, then works out whether the person is stretching or doing something else.

‘You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility and more,’ says Yiyue Luo, lead author of the paper.

The carpet is low-cost and scalable, made of commercial pressure-sensitive film and conductive thread, with over nine thousand sensors spanning 36 by 2 feet.
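The article does not describe the readout electronics, but pressure-sensitive sensor grids of this kind are commonly scanned intersection by intersection into a 2D pressure heatmap. A minimal sketch of that idea follows; the 96-by-96 grid shape and the read_resistance() interface are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of reading one tactile "frame" from a pressure-sensitive
# grid. The article gives only the sensor count (9,000+ over 36 x 2 feet); the
# grid shape and the read_resistance() interface below are assumptions.

ROWS, COLS = 96, 96  # assumed layout; 96 x 96 = 9,216 sensors


def read_resistance(row: int, col: int) -> float:
    """Placeholder for the ADC reading at one row/column intersection."""
    return float(np.random.uniform(0.0, 1.0))  # stand-in for real hardware I/O


def read_frame() -> np.ndarray:
    """Scan every intersection of the grid into a 2D pressure heatmap."""
    frame = np.zeros((ROWS, COLS), dtype=np.float32)
    for r in range(ROWS):
        for c in range(COLS):
            frame[r, c] = read_resistance(r, c)
    return frame


heatmap = read_frame()
print(heatmap.shape)  # (96, 96): one instant of pressure across the carpet
```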

Each sensor converts pressure into an electrical signal through the physical contact between a person’s limbs, torso or feet and the carpet. The system was trained on synchronized tactile and visual data, such as a video and a corresponding heatmap of someone doing a push-up.

The model takes the pose extracted from the visual data as ground truth, uses the tactile data as input, and outputs the 3D human pose. For example, after someone steps onto the carpet and does some push-ups, the system can produce a 3D image or video of the person doing a push-up.
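The article does not give the network architecture, but the training setup it describes (tactile heatmap in, vision-derived 3D pose as ground truth) can be sketched in a few lines of PyTorch. Everything below, including the small CNN and the 21-keypoint pose format, is an illustrative assumption rather than the team’s actual model.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21  # assumed number of 3D body joints


class TactilePoseNet(nn.Module):
    """Toy network mapping a tactile pressure heatmap to 3D joint positions."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 24 * 24, NUM_KEYPOINTS * 3)

    def forward(self, tactile):  # tactile: (batch, 1, 96, 96)
        return self.head(self.encoder(tactile)).view(-1, NUM_KEYPOINTS, 3)


model = TactilePoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step: the tactile heatmap is the input; the "ground truth"
# 3D pose comes from the synchronized camera footage, as the article notes.
tactile_frame = torch.rand(8, 1, 96, 96)           # batch of tactile heatmaps
pose_from_video = torch.rand(8, NUM_KEYPOINTS, 3)  # vision-extracted 3D poses

optimizer.zero_grad()
loss = loss_fn(model(tactile_frame), pose_from_video)
loss.backward()
optimizer.step()
```

At inference time only the tactile branch is needed, which is what lets the deployed system drop the cameras entirely.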

The model can predict a person’s pose with an error margin of less than 10 cm. For classifying specific actions, the system was 97% accurate.

‘You may envision using the carpet for workout purposes. Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories,’ says Yunzhu Li, co-author.
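Li’s workout example hints at a simple signal-processing view: each repetition presses the body harder into the carpet, so the summed sensor readings rise and fall once per rep. A minimal sketch of counting reps by peak detection follows; the synthetic signal and the use of scipy’s find_peaks are illustrative assumptions, not the team’s method.

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative rep counting from tactile data alone. Real input would be the
# per-frame sum of the carpet's sensor readings, e.g.
#   total_pressure = np.array([frame.sum() for frame in frames])
# Here we fake a signal that oscillates once per push-up.

frames_per_second = 30
t = np.linspace(0, 10, 10 * frames_per_second)            # 10 seconds of data
total_pressure = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)  # ~1 rep every 2 s

# Each push-up produces a peak in the summed pressure; counting peaks that
# are at least one second apart counts repetitions.
peaks, _ = find_peaks(total_pressure, distance=frames_per_second)
print(f"Detected {len(peaks)} reps")  # 5 reps over the 10-second window
```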

Because most of the pressure distributions were produced by movement of the lower body and torso, that information was more accurate than the upper-body data. The model was, however, unable to predict poses without clear floor contact, such as free-floating legs during sit-ups.

Although the system can understand a single person, the scientists want to improve its metrics for multiple users, such as two people hugging or dancing on the carpet. They also aim to extract more information from the tactile signals, such as a person’s weight or height.

Luo wrote the paper alongside MIT CSAIL Ph.D. students Yunzhu Li and Pratyusha Sharma; MIT CSAIL mechanical engineer Michael Foshey; MIT CSAIL postdoc Wan Shou; and MIT professors Tomas Palacios, Antonio Torralba and Wojciech Matusik. The Toyota Research Institute funded the research.

By Marvellous Iwendi.

Source: MIT CSAIL