Vision system for autonomous vehicles watches not just where pedestrians walk, but how

TechCrunch | 2/16/2019 | Staff
Photo: https://techcrunch.com/wp-content/uploads/2019/02/pedestrians-umich.jpg?w=715
The University of Michigan, well known for its efforts in self-driving car tech, has been working on an improved algorithm for predicting the movements of pedestrians that takes into account not just what they’re doing, but how they’re doing it. This body language could be critical to predicting what a person does next.

Keeping an eye on pedestrians and predicting what they're going to do is a major part of any autonomous vehicle's vision system. Knowing that a person is present, and where, makes a huge difference to how the vehicle can operate. But while some companies advertise that they can detect and label people at such and such a range, or under these or those conditions, few if any can see, or claim to see, gestures and posture.


Such vision algorithms can (though nowadays are unlikely to) be as simple as identifying a human, seeing how many pixels the detection moves over a few frames, and extrapolating from there. But naturally, human movement is a bit more complex than that.
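As a rough illustration of that naive approach (this is not the UM system, and every name here is made up for the example), constant-velocity extrapolation from a detection's pixel track looks something like this:

```python
# A minimal sketch of the naive approach described above: track a detected
# pedestrian's bounding-box center across a few frames, then linearly
# extrapolate. All names here are illustrative, not from the UM system.
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinates of a detection center

def extrapolate_position(history: List[Point], frames_ahead: int) -> Point:
    """Predict a future pixel position by assuming constant velocity.

    `history` holds the detection center in consecutive frames,
    oldest first. Requires at least two observations.
    """
    if len(history) < 2:
        raise ValueError("need at least two frames to estimate velocity")
    (x0, y0), (x1, y1) = history[0], history[-1]
    n = len(history) - 1  # frame intervals spanned by the history
    vx, vy = (x1 - x0) / n, (y1 - y0) / n  # average per-frame displacement
    return (x1 + vx * frames_ahead, y1 + vy * frames_ahead)

# Example: a pedestrian drifting right and slightly down over three frames.
track = [(100.0, 200.0), (104.0, 201.0), (108.0, 202.0)]
print(extrapolate_position(track, frames_ahead=5))  # -> (128.0, 207.0)
```

The limitation is obvious: a person pausing at a curb, turning their shoulders, or winding up to jog all look identical to a model that only sees a moving box of pixels, which is exactly the gap the UM work targets.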

UM's new system uses lidar and stereo camera systems to estimate not just a person's trajectory, but their pose and gait....
(Excerpt) Read more at: TechCrunch
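The excerpt stops short of describing the model itself, so the following is only a hypothetical sketch of how per-frame pose estimates might be packaged together with a lidar-derived position for a downstream sequence predictor. None of these names, shapes, or choices come from the UM system.

```python
# Hypothetical sketch only: the article says the UM system estimates
# trajectory, pose, and gait from lidar and stereo cameras, but does not
# describe its model. Below is one generic way such inputs could be
# packaged for a sequence predictor; every name here is an assumption.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class PedestrianObservation:
    position: Tuple[float, float, float]         # (x, y, z) in meters, e.g. from lidar
    keypoints: List[Tuple[float, float, float]]  # 3D body joints, e.g. from stereo

def to_feature_sequence(track: List[PedestrianObservation]) -> np.ndarray:
    """Flatten a time-ordered track into a (frames, features) array.

    Each row concatenates the body-center position with all joint
    coordinates, so a downstream model sees posture as well as location.
    """
    rows = []
    for obs in track:
        joints = np.asarray(obs.keypoints, dtype=float).ravel()
        rows.append(np.concatenate([np.asarray(obs.position, dtype=float), joints]))
    return np.stack(rows)

# Example with a 2-frame track and 3 joints per frame:
frame = PedestrianObservation((1.0, 2.0, 0.0),
                              [(1.0, 2.0, 1.7), (0.9, 2.0, 1.0), (1.1, 2.0, 1.0)])
seq = to_feature_sequence([frame, frame])
print(seq.shape)  # (2, 12): 3 position values + 9 joint coordinates per frame
```

The design point is simply that posture lives in the feature vector alongside position, so a predictor trained on such sequences can, in principle, react to a turned torso or a lengthening stride before the position track alone reveals a change of direction.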