The Kinect IR pattern is not very dense, so in principle the patterns projected by several Kinects can overlap without major interference.
Recall that the pattern is a fixed (non-blinking) infrared laser structured-light pattern. Structured in the sense that it is a known image already stored in the hardware, so the disparity between the projected and observed pattern lets the sensor compute the distance from itself.
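The disparity-to-depth step boils down to triangulation between the projector and the IR camera. Here is a minimal sketch; the focal length and baseline below are assumed illustrative values, not official Kinect calibration data:

```python
# Sketch: recovering depth from structured-light disparity.
# FOCAL_PX and BASELINE_M are assumptions for illustration only.
FOCAL_PX = 580.0      # assumed IR-camera focal length, in pixels
BASELINE_M = 0.075    # assumed projector-to-camera baseline, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulate distance (meters) from the observed pattern shift (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# A pattern dot shifted by 20 px corresponds to about 2.18 m here:
print(round(depth_from_disparity(20.0), 3))
```

Note how depth resolution degrades with distance: one pixel of disparity covers an ever larger depth range as the disparity shrinks.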
We can run into two kinds of interference in a multi-Kinect system:
- Sensors face to face
In the left image, notice more noise in the middle than in the right image. That noise comes from another Kinect facing the first one. In one case we assume that, by luck, no beams from the second projector reached the first sensor; in the other, the sensor was moved slightly, which brought some IR beams into interference with it.
- Sensors side by side
Here, in the right image, a second projector has been activated in the same area sensed by the first sensor.
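One way to compare such captures objectively is to measure, over a static scene, the fraction of dropped depth pixels and the per-pixel depth jitter across frames. Below is a minimal sketch with synthetic stand-in data (the flat-wall depth, noise levels, and hole rate are all assumptions, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_noise_stats(frames: np.ndarray):
    """frames: (n_frames, h, w) depth maps in mm; 0 marks a dropped pixel."""
    valid = frames > 0
    dropout = 1.0 - valid.mean()          # fraction of holes in the depth map
    always_valid = valid.all(axis=0)      # pixels measured in every frame
    jitter = frames[:, always_valid].std(axis=0).mean()  # temporal noise, mm
    return dropout, jitter

# Synthetic stand-ins for real captures: a flat wall at an assumed 2000 mm.
clean = 2000 + rng.normal(0, 2, size=(30, 48, 64))
interfered = 2000 + rng.normal(0, 10, size=(30, 48, 64))
interfered[rng.random(interfered.shape) < 0.05] = 0  # holes from pattern overlap

d0, j0 = depth_noise_stats(clean)
d1, j1 = depth_noise_stats(interfered)
print(f"clean:      dropout={d0:.2%}  jitter={j0:.1f} mm")
print(f"interfered: dropout={d1:.2%}  jitter={j1:.1f} mm")
```

Running the same two statistics on real frames from each setup would quantify how much a second projector degrades the first sensor.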
So what are the ways out of this trap? Are we limited to a single sensor for accurate depth sensing? Possible ways out:
- As simple as pointing the sensors in different directions
- Sensors scanning the same object from both sides can be placed above the object and tilted slightly downward, so that the projector of one does not reach the other sensor.
- Another open idea is to share a single pattern among multiple sensors. A surface textured with an infrared pattern already lends itself well to classic stereo vision, or each sensor could localize the shared pattern, though that looks rather complex.
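The last idea above works because a random-dot IR texture gives an otherwise featureless surface something for block matching to lock onto. Here is a minimal pure-numpy sketch (the dot density, window size, and SAD search are illustrative choices, not the Kinect's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def block_match(left, right, x, y, half=4, max_disp=12):
    """Find the horizontal shift of a (2*half+1)^2 patch by SAD search."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        cost = np.abs(patch - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A random-dot "IR texture", as if projected onto a flat surface.
left = (rng.random((64, 96)) < 0.3).astype(float)
true_disp = 7
right = np.roll(left, -true_disp, axis=1)  # second view sees dots shifted

print(block_match(left, right, x=40, y=32))  # recovers the known shift: 7
```

On a blank wall this matching would fail for lack of texture; the projected dots are what make every window distinctive, which is why a shared pattern could in principle serve several passive cameras at once.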