FROG 2021: Ten Years of Magic Mirror: I and my Avatar

2021/11/27

Since 2012 we have been building the augmented reality system Magic Mirror on top of the native APIs of Kinect V1 and V2. It relies on the magic mirror metaphor, where a large screen shows a mirrored camera view with overlaid graphical elements. In our case, the system renders a different face mesh over the person's face, reliably tracking face poses in real time while leaving the person's eyes and mouth visible for interaction and to improve immersion; replaces the background with images that can be changed, smoothly zoomed and dragged; and lets users take screenshots that are automatically printed on photo cards with a unique QR code linking to the digital image. The system is controlled primarily through easily learned hand gestures, closely resembling the multitouch gestures familiar from mobile phones and tablets. We have demonstrated the system in public as well as in private settings (including at the FROG conferences 2012, 2013 and 2014, albeit without presenting a paper), with a wide variety of faces and backgrounds. Here, we explain the challenges inherent in creating high-quality face meshes and textures from 2D images, and how we solved them; describe the different versions of the system, how they differ and what their limitations are; and demonstrate the usefulness of our system in several applications, from people counting and tracking to obtaining height measurements without storing or processing personal data. A minimal compositing sketch follows below.
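
The core of the mirror view is simple compositing: flip the camera image horizontally and keep only the tracked person's pixels, filling the rest from a replacement background. The sketch below is not the Magic Mirror implementation (which uses the Kinect native APIs); it is a minimal, hedged illustration in Python/NumPy assuming a per-pixel person mask such as one derived from a body-index or depth stream. The function name composite_mirror_frame and the synthetic example data are hypothetical.

    import numpy as np

    def composite_mirror_frame(frame: np.ndarray,
                               body_mask: np.ndarray,
                               background: np.ndarray) -> np.ndarray:
        """Mirror a camera frame and replace its background.

        frame:      H x W x 3 color image from the camera.
        body_mask:  H x W boolean array, True where a tracked person is
                    (e.g. from a body-index/depth segmentation -- an assumption here).
        background: H x W x 3 replacement background image.
        """
        # Flip horizontally so the display behaves like a mirror.
        mirrored = frame[:, ::-1]
        mirrored_mask = body_mask[:, ::-1]

        # Keep the person's pixels, take everything else from the background.
        out = background.copy()
        out[mirrored_mask] = mirrored[mirrored_mask]
        return out

    # Tiny synthetic example: a 4x4 "camera frame" with a 2x2 person region.
    frame = np.full((4, 4, 3), 200, dtype=np.uint8)
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    background = np.zeros((4, 4, 3), dtype=np.uint8)
    print(composite_mirror_frame(frame, mask, background)[:, :, 0])

Zooming and dragging the background, as described above, would then amount to transforming the background image before this compositing step.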

This talk was given at the Future and Reality of Gaming (FROG) 2021 online streaming conference.