Depth-sensing cameras like the Kinect give us the opportunity to mix physical and virtual environments, creating new immersive experiences. In this Second Story Labs experiment, we demonstrate how using multiple cameras helps solve the occlusion problems, or "holes," that a single camera creates.
If you’ve ever worked with a Microsoft Kinect, you know that occlusion is no laughing matter. It’s responsible for gaping holes in people’s chests, disappearing necks, the noseless faces of zombies. Humans are full of convexities, and to a Kinect that means that we are also full of holes. We’ve dealt with this in lots of creative ways—filling in gaps with best estimates, “blurring” data, hiding holes with smoke and mirrors. But the best and perhaps most obvious solution to the dilemma of occlusion is simply to add more Kinects. A second gunman, if you will, shooting from an angle that will cover the first Kinect’s blind spot.
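The post doesn't include code, but the "filling in gaps with best estimates" approach above can be sketched in a few lines. This is a minimal, hypothetical example (not the authors' actual method), assuming a depth image where a value of zero marks a missing pixel: each hole is estimated as the average of its valid axis-aligned neighbors.

```python
import numpy as np

def fill_depth_holes(depth, iterations=1):
    """Fill zero-valued 'holes' in a depth image by averaging valid neighbors.

    Assumes 0 means 'no depth reading' (as in raw Kinect frames). Each pass
    replaces a hole pixel with the mean of its nonzero up/down/left/right
    neighbors; more iterations grow the fill into larger holes.
    """
    filled = depth.astype(float).copy()
    for _ in range(iterations):
        holes = filled == 0
        if not holes.any():
            break
        # Pad with zeros so border pixels simply have fewer valid neighbors.
        padded = np.pad(filled, 1)
        neighbors = np.stack([
            padded[:-2, 1:-1],  # up
            padded[2:, 1:-1],   # down
            padded[1:-1, :-2],  # left
            padded[1:-1, 2:],   # right
        ])
        valid = neighbors > 0
        counts = valid.sum(axis=0)
        sums = np.where(valid, neighbors, 0.0).sum(axis=0)
        estimates = np.divide(sums, counts,
                              out=np.zeros_like(sums), where=counts > 0)
        filled[holes] = estimates[holes]
    return filled
```

This kind of interpolation is exactly the stopgap the post describes: it papers over small holes but invents no real geometry, which is why adding a second camera is the better fix.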
In this case, we are calibrating two Kinects in space about a meter apart and angled inward toward their subject. This way we can “see” our way around noses, arms, and other pesky occlusions. Then all we have to do is combine their data into a single mesh, and the rest is up to our imaginations.
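The merge step above boils down to a rigid transform: once calibration gives you the rotation and translation that map the second Kinect's coordinate frame into the first's, you transform the second cloud and concatenate. A minimal NumPy sketch, with made-up extrinsics standing in for the real calibration (the actual angles and offset would come from the calibration procedure, not these numbers):

```python
import numpy as np

# Hypothetical extrinsics from a one-time calibration: R and t map points
# in the second Kinect's frame into the first Kinect's frame. Here we
# assume the second camera sits a meter to the right, yawed inward.
theta = np.radians(-15.0)  # inward yaw angle (assumed, not measured)
R = np.array([
    [np.cos(theta), 0.0, np.sin(theta)],
    [0.0,           1.0, 0.0],
    [-np.sin(theta), 0.0, np.cos(theta)],
])
t = np.array([1.0, 0.0, 0.0])  # ~1 m baseline between the cameras (assumed)

def merge_point_clouds(points_a, points_b, R, t):
    """Bring cloud B into camera A's frame and concatenate the two clouds.

    points_a, points_b: (N, 3) arrays of XYZ points in each camera's frame.
    """
    points_b_in_a = points_b @ R.T + t
    return np.vstack([points_a, points_b_in_a])
```

The combined cloud can then be meshed as if it came from a single sensor; points the first camera couldn't see (behind a nose, under a chin) arrive from the second camera's angle.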
At full resolution, we can actually get a pretty accurate model of a person’s face. And all of this is being rendered in real time, so that a user’s reality can be “augmented” while they interact. Here we have added three-dimensional models and rain particles to the virtual space to put the user into an imaginary landscape.
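The rain particles mentioned above are the simplest kind of real-time augmentation: a per-frame update that drops points through the virtual space and respawns them when they land. A toy sketch of that loop (the names, speeds, and bounds here are illustrative, not from the actual installation):

```python
import numpy as np

rng = np.random.default_rng(0)

def step_rain(positions, dt=1 / 30, fall_speed=2.0,
              floor_y=0.0, ceiling_y=3.0):
    """Advance rain particles one frame; respawn any that hit the floor.

    positions: (N, 3) array of XYZ particle positions in the shared
    virtual/physical space (Y is up). Returns the updated positions.
    """
    positions = positions.copy()
    positions[:, 1] -= fall_speed * dt          # constant fall, no wind
    landed = positions[:, 1] < floor_y
    n = landed.sum()
    if n:
        # Respawn landed drops at the ceiling with a fresh horizontal spot.
        positions[landed, 1] = ceiling_y
        positions[landed, 0] = rng.uniform(-2.0, 2.0, n)
        positions[landed, 2] = rng.uniform(-2.0, 2.0, n)
    return positions
```

Because the merged Kinect mesh and the particles live in the same coordinate space, the renderer can composite them in one pass each frame, which is what makes the augmentation feel live.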