- They hoped to do it in real time; Kinect did it faster than real time.
- They hoped for recognition of specific postures; Kinect recognizes general-purpose ones.
The robotics community has been promising us since the 70s that very soon we will have home robots that clean our tables and handle household chores. So far, intelligent robots have delivered only indirect advances in other fields, such as multidimensional path planning and navigation (they can move) and precise control (they can act), and only when they are certain of what surrounds them; they are still too immature to be trusted with autonomous decisions.
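To ground the "they can move" part: path planning on a known map is one of the pieces robotics has genuinely delivered. Here is a minimal sketch of grid-based A* search in Python; the toy occupancy grid, start, and goal are made-up inputs for illustration, not taken from any particular robot stack.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid: 0 = free cell, 1 = obstacle.

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected unit-cost grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f, cost-so-far, cell, parent)
    came_from = {}
    best_cost = {start: 0}
    while open_set:
        _, cost, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # already expanded with an optimal cost
        came_from[cell] = parent
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(
                        open_set,
                        (new_cost + h((nr, nc)), new_cost, (nr, nc), cell))
    return None

# Toy map: the robot has to plan around a wall to reach the goal.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

The same search generalizes to higher-dimensional configuration spaces, which is where the "multidimensional" in the promise comes from; the hard part is not the search but being certain the map matches reality.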
Embedded vision research advances side by side with robotics research: vision researchers promise roboticists that very soon a robot will be able to perceive its entire environment, and perhaps even tell a needle from a paperclip. Based on these assumptions, roboticists build simulations (or scenarios) and promise everything that would become possible once vision delivers on its word.
After the first real applications, license plate readers that mostly got us fined more often, the next generation of vision applications is finally here. Active vision, scene flow processing, and machine learning might help machines perceive, and perhaps even augment, reality; hopefully without making it worse.
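Since plate reading is the application that actually shipped first, here is a hedged sketch of how such a detector can be prototyped today with OpenCV's bundled Haar cascade for license plates. The input file name `car.jpg` is a placeholder for illustration, and this only localizes plates; reading the characters would need a separate OCR step.

```python
import cv2

# OpenCV ships a pretrained plate cascade in cv2.data.haarcascades.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")

img = cv2.imread("car.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect candidate plate regions; scaleFactor and minNeighbors
# usually need tuning per camera setup.
plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in plates:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("car_plates.jpg", img)
```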