The technique of projecting something onto a three-dimensional object to help computer vision systems detect depth dates back decades, says Anil Jain, a professor of computer science and engineering at Michigan State University and an expert on biometrics. It’s called the structured light method.
Generally, Jain says, computer vision systems can estimate depth using two separate cameras to get a stereoscopic view. The structured light technique replaces one of those two cameras with a projector that shines light onto the object; Apple is using a dot pattern, but Jain says that other configurations of light, like stripes or a checkerboard pattern, have also been used.
“By doing a proper calibration between the camera and the projector, we can estimate the depth” of the curved object the system is seeing, Jain says. Dots projected onto a flat surface would look different to the system than dots projected onto a curved one, and faces, of course, are full of curves.
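Once the camera and projector are calibrated, recovering depth reduces to triangulation: the projector acts like a second "camera" whose dot positions are known in advance, so the sideways shift (disparity) of each observed dot encodes how far away the surface is. The sketch below illustrates the idea under simplifying assumptions (a rectified camera–projector pair with a known baseline and focal length); the function name and numbers are illustrative, not Apple's implementation.

```python
# Illustrative sketch of structured-light depth recovery by triangulation.
# Assumes a calibrated, rectified camera-projector pair: baseline and focal
# length are known from calibration, and each projected dot can be matched
# to its observed position in the camera image.

def depth_from_disparity(baseline_m: float, focal_px: float,
                         disparity_px: float) -> float:
    """Depth of a surface point from the shift of one projected dot.

    Same triangulation formula as two-camera stereo, with the projector
    standing in for the second camera: depth = baseline * focal / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("dot shows no displacement; depth is unrecoverable")
    return baseline_m * focal_px / disparity_px

# Example: with a 5 cm camera-projector baseline and a 600 px focal length,
# a dot observed shifted by 60 px lies 0.5 m from the device.
print(depth_from_disparity(0.05, 600, 60))  # 0.5
```

Dots landing on a curved surface shift by different amounts than dots on a flat plane, which is exactly the difference the system exploits: a varying disparity field across the face maps to a varying depth map.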
During the keynote, Schiller also explained that they’d taken steps to ensure the system couldn’t be tricked by ruses like a photograph or a Mission Impossible-type mask, and had even “worked with professional mask makers and makeup artists in Hollywood.” Jain speculates that what makes this possible is the fact that the system makes use of infrared light, which he says can be used to tell the difference between materials like skin or a synthetic...