
Feeling kind of blue? This digital avatar can tell

University of Southern California researchers are developing SimSensei, an avatar that uses facial recognition tech and depth-sensing cameras built into Microsoft's Kinect to conduct mental health interviews.

The avatar psychologist (right) talks to a test subject. Video screenshot by Leslie Katz/CNET

It's nice to think each of us is entirely unique, a one-of-a-kind aggregate of life experiences colliding with genes that set us apart from everyone else. And while this is true to an extent, it's also true that certain telltale blueprints exist for us, all the way down to the way we move our faces if we are, say, depressed.

So researchers at the University of Southern California's Institute for Creative Technologies are developing a Kinect-driven avatar they call SimSensei to track and analyze, in real time, a person's facial movements, body posture, linguistic patterns, acoustics, and behaviors such as fidgeting that, taken together, signal psychological distress.
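The general approach described here, fusing several nonverbal channels into one distress estimate, can be sketched as a weighted combination of per-channel scores. The feature names, weights, and bias below are illustrative assumptions for the sketch, not SimSensei's actual model, which the researchers trained from recorded interviews.

```python
from dataclasses import dataclass

@dataclass
class NonverbalFeatures:
    """Illustrative per-session measurements, each normalized to 0..1."""
    smile_rate: float          # fraction of frames with a detected smile
    gaze_aversion: float       # fraction of frames looking away from the interviewer
    fidget_score: float        # motion energy in hands and torso
    speech_pause_ratio: float  # silence relative to speech time

# Hypothetical weights; a real system would learn these from labeled interviews.
WEIGHTS = {
    "smile_rate": -0.8,        # less smiling -> higher distress
    "gaze_aversion": 0.4,
    "fidget_score": 0.3,
    "speech_pause_ratio": 0.2,
}
BIAS = 0.3  # offsets the negative smile term

def distress_score(f: NonverbalFeatures) -> float:
    """Combine the channels into a 0..1 distress estimate (clamped linear model)."""
    raw = BIAS + sum(WEIGHTS[name] * getattr(f, name) for name in WEIGHTS)
    return min(1.0, max(0.0, raw))

# A session with many distress cues vs. one with few.
distressed = NonverbalFeatures(smile_rate=0.1, gaze_aversion=0.8,
                               fidget_score=0.7, speech_pause_ratio=0.6)
relaxed = NonverbalFeatures(smile_rate=0.7, gaze_aversion=0.1,
                            fidget_score=0.2, speech_pause_ratio=0.2)
print(f"distressed session: {distress_score(distressed):.2f}")
print(f"relaxed session:    {distress_score(relaxed):.2f}")
```

The clamped linear form is just the simplest way to make the "taken together" idea concrete; any real screening model would be fit to labeled data rather than hand-weighted.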

In work to be presented at the Automatic Face and Gesture Recognition conference in Shanghai later this month, Stefan Scherer and colleagues paired facial recognition tech with the depth-sensing cameras built into Microsoft's Kinect to develop the avatar psychologist. They then used the Kinect to record interviews with volunteers who had already been identified as healthy or as suffering from depression or post-traumatic stress disorder, and from those recordings developed the code of movements and behaviors the avatar screens for.

"Broad screening is done by using only a checklist of yes/no questions or point scales, but all the non-verbal behavior is not taken into account," Scherer recently told New Scientist. "This is where we would like to put our technology to work."

It turns out the volunteers who were depressed smiled less than average, averted their gaze, and fidgeted more than those who were not depressed. In similar work by a team out of the University of Pittsburgh, also presenting in Shanghai, 66 different parts of the face were tracked in 34 people diagnosed with depression as they answered questions about their condition. Those participants actually used fewer expressions associated with sadness, perhaps because they were trying to refrain from engaging with others. When they did smile, they used facial muscles as "smile controls" to restrain the expression.
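The group differences reported here, less smiling, more gaze aversion, more fidgeting, are the kind of cues a broad screening rule could rest on. The thresholds and the two-of-three rule below are invented purely for illustration and are not clinical values from either study.

```python
# Toy screening rule built on the three cues reported above.
# All thresholds are hypothetical, chosen only to illustrate the idea.

def flag_for_followup(smiles_per_min: float,
                      gaze_aversion_frac: float,
                      fidgets_per_min: float) -> bool:
    """Flag a session when at least two of the three cues point toward distress."""
    cues = [
        smiles_per_min < 1.0,      # smiled less than an assumed baseline
        gaze_aversion_frac > 0.5,  # looked away more than half the time
        fidgets_per_min > 3.0,     # fidgeted more than an assumed baseline
    ]
    return sum(cues) >= 2

print(flag_for_followup(0.4, 0.7, 5.0))  # many distress cues -> True
print(flag_for_followup(2.5, 0.2, 1.0))  # few distress cues -> False
```

Requiring agreement across multiple cues mirrors the article's point that no single behavior is diagnostic; it is the combination that signals distress.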

It could take years for clinicians to develop a sense of these micro signs of depression, while systems such as SimSensei merely need to be programmed. Of course, it could also take years for this programming to be nuanced enough to be reliable, but these advances will soon be put to the test. Researchers are gathering in Barcelona, Spain, in October to participate in a contest to determine the most accurate system for diagnosing depression. Contestants will review a video database of interviews, some of them with clinically depressed people. The group that proves best at finding the clinically depressed interviewees wins.