Here's somewhere you probably didn't expect machine learning to go, but here we are anyway.
The reason for its existence? Fun, mainly. Google also wanted to "make machine learning more accessible to coders and makers" while inspiring them to take the tech and run with it for their own applications.
The "mirror" uses an open-source "pose estimation model" from Google called PoseNet, which can detect body poses, and TensorFlow.js, a library for in-browser machine learning.
In finding a matching image, the experiment uses your "pose information" -- the location of 17 different body parts, including your shoulders, ankles and hips. According to Google's explainer, the matching doesn't take any individual characteristics into account, such as gender, height or body type.
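How does matching on "pose information" work in practice? According to TensorFlow's writeup, Move Mirror compares poses using a similarity measure over normalized keypoint vectors. The sketch below is a minimal illustration of that idea using cosine distance, not Google's actual code; the function names and the toy keypoint data are invented for illustration.

```python
import math

def normalize(pose):
    # L2-normalize a flat [x0, y0, x1, y1, ...] keypoint vector so
    # matching is invariant to how large the person appears in frame.
    norm = math.sqrt(sum(v * v for v in pose))
    return [v / norm for v in pose]

def cosine_distance(a, b):
    # Cosine distance between two unit-length pose vectors:
    # 0 means identical poses, larger means more different.
    dot = sum(x * y for x, y in zip(a, b))
    return 1 - dot  # vectors are unit length, so dot == cosine similarity

# Two toy "poses", each with 17 (x, y) keypoints flattened to 34 numbers.
pose_query = normalize([float(i % 7) for i in range(34)])
pose_other = normalize([float((i + 1) % 7) for i in range(34)])

# Pick the catalog image whose pose is closest to the user's pose.
catalog = {"exact_copy": pose_query, "different": pose_other}
best_match = min(catalog, key=lambda name: cosine_distance(pose_query, catalog[name]))
```

In the real experiment this nearest-neighbor search runs over tens of thousands of pre-computed pose vectors, so a brute-force scan like the one above would be replaced by a faster index structure.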
I gave it a real challenge by dancing like a goofball, and it kept returning images of a young lady in a white dress.
Using computers to detect poses isn't new, of course -- motion capture technology has been used for decades to capture real human movements for blockbusters. Video games have used it too; just look at Microsoft's 3D imaging device, the Kinect. But those methods require expensive hardware. The triumph here is that it all happens in the browser, with just a webcam.
Google does not send any of your images to its servers; all the image recognition happens locally, in the browser. The technology also doesn't recognize who is in the image because there is "no personal identifiable information associated to pose estimation."
If you're interested in the incredible amount of work that went into building Move Mirror, TensorFlow has an extensive rundown on its blog of the challenges and programming hurdles the team overcame.
You can try it for yourself, provided you have a webcam, at Google's experiments page.