A new machine-learning-based intention detection method using a first-person-view camera for Exo-Glove Poly II

phys.org | 1/30/2019 | Staff

A Korean research team has proposed a new paradigm for a wearable hand robot that can aid people who have lost hand mobility. The robot observes the user's behavior through a camera and runs a machine learning algorithm to determine the user's intention.

Professors Sungho Jo (KAIST) and Kyu-Jin Cho (Seoul National University) have proposed a new intention-detection paradigm for wearable hand robots. The proposed paradigm predicts grasping/releasing intentions based on user behaviors, enabling spinal cord injury (SCI) patients with lost hand mobility to pick and place objects.

They developed the method around a machine learning algorithm that predicts user intentions from a first-person-view camera. The approach rests on the hypothesis that user intentions can be inferred by observing user arm behaviors and hand-object interactions.

The machine learning model used in this study, the Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), is designed around this hypothesis. VIDEO-Net combines spatial and temporal sub-networks, which recognize user arm behaviors, with a spatial sub-network that recognizes hand-object interactions.
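
For readers who want a concrete picture, the following is a minimal sketch (written in PyTorch) of a two-stream classifier in the spirit of that description: a temporal sub-network over a short clip of egocentric frames stands in for arm-behavior recognition, and a spatial sub-network on the latest frame stands in for hand-object interaction recognition. The layer sizes, the three-class output (grasp/release/none), and the fusion scheme are illustrative assumptions, not the published VIDEO-Net architecture.

import torch
import torch.nn as nn

class TwoStreamIntentionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Spatial stream: small CNN over the current frame
        # (stand-in for hand-object interaction recognition).
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Temporal stream: GRU over per-frame features from a short clip
        # (stand-in for arm-behavior recognition).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B*T, 16)
        )
        self.temporal = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        # Fusion head over the concatenated stream features.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W) sequence of egocentric frames.
        b, t, c, h, w = clip.shape
        spatial_feat = self.spatial(clip[:, -1])           # latest frame only
        frame_feats = self.frame_encoder(clip.reshape(b * t, c, h, w))
        _, hidden = self.temporal(frame_feats.reshape(b, t, -1))
        fused = torch.cat([spatial_feat, hidden[-1]], dim=1)
        return self.head(fused)                            # (B, num_classes) logits

# Example: classify a 16-frame clip of 128x128 RGB frames.
model = TwoStreamIntentionNet()
logits = model(torch.randn(2, 16, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 3])

Concatenating the two streams' features before a single classifier is one simple fusion choice; the actual network may combine its sub-networks differently.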

An SCI patient wearing Exo-Glove Poly II, a soft wearable hand robot, successfully picked and placed various objects and performed essential activities of daily living, such as drinking coffee, without any additional help.

This development is advantageous in that it detects user intentions without requiring per-user calibration or any additional actions, which lets the wearer operate the hand robot seamlessly.

Q: How does this system work?

A: This technology aims to predict user intentions, specifically grasping and releasing intent toward a target object, by utilizing a first-person-view camera mounted on glasses. VIDEO-Net, a deep learning-based algorithm, is devised to predict user intentions from the camera footage based on user arm behaviors and hand-object interactions. Instead of using bio-signals, which are often used for intention detection for people with disabilities, we use a simple camera to find out whether the person...
(Excerpt) Read more at: phys.org
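
To make the workflow in the answer concrete, here is a hedged sketch of the run-time loop: frames stream in from a glasses-mounted camera, a rolling clip is classified, and any non-idle intention is forwarded to the glove. OpenCV is assumed for capture; TwoStreamIntentionNet refers to the illustrative model sketched earlier, and send_glove_command is a hypothetical stand-in for the actuation interface of Exo-Glove Poly II, which is not documented here.

import collections
import cv2
import torch

LABELS = ["none", "grasp", "release"]  # assumed label set

def send_glove_command(action: str) -> None:
    # Placeholder for the robot's actuation API (hypothetical).
    print(f"glove command: {action}")

model = TwoStreamIntentionNet()
model.eval()
frames = collections.deque(maxlen=16)  # rolling clip buffer

cap = cv2.VideoCapture(0)  # glasses-mounted camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (128, 128))
    # BGR uint8 HxWxC -> float CHW in [0, 1]
    tensor = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
    frames.append(tensor)
    if len(frames) == frames.maxlen:
        clip = torch.stack(list(frames)).unsqueeze(0)  # (1, T, 3, H, W)
        with torch.no_grad():
            intention = LABELS[model(clip).argmax(dim=1).item()]
        if intention != "none":
            send_glove_command(intention)
cap.release()

In practice such a loop would also need debouncing (so a single grasp intent does not fire repeatedly) and a trained model rather than random weights, but the structure above follows the camera-to-intention-to-actuation pipeline the researchers describe.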