
AI Case Study

Newcastle University researchers automate identification of the grip type a prosthetic limb needs to pick up objects, with 88% success

Researchers at Newcastle University test convolutional neural networks as a way to enable a prosthetic hand with a mounted camera to automatically judge the type of grip needed to pick up an object. In real-time testing with two amputee subjects, the prosthesis chose the correct grip type in 88% of trials after about an hour of practice.

Industry

Public And Social Sector

Education And Academia

Project Overview

According to The Verge, biomedical researchers at Newcastle University "have developed a prototype prosthetic limb with an AI-powered camera mounted on top. The camera uses the sort of computer vision technology that big tech companies have developed, with researchers using deep learning to teach it to recognize some 500 objects. When the wearer of the limb moves to grab, say, a mug, the camera takes a picture of the object, and moves the hand into a suitable “grasp type.” (For example, a pinching motion for picking up a pen; or a vertical grip for grabbing a soda.) The user then confirms the grip action with a myoelectric signal."

Reported Results

From the research paper: "after about an hour of practice, the participant could accomplish 88% of trials successfully", for both objects that had been in the training set and newly introduced ones.

Technology

From the research paper: "There is mounting evidence that CNN-based structures can learn and classify visual patterns efficiently if provided with a large amount of training (labelled) samples [40–45]. The components of the CNN structure, namely, local connectivity, parameter sharing and pooling, make it reasonably invariant against object shift, scale and distortion. These features make the CNN structure a suitable candidate for upper-limb prosthetics applications. We therefore trained a CNN structure to identify the appropriate grasp for a database of household objects. For classification of images into grasp groups, we examined two CNN architectures: a one-layer and a two-layer, and explored the trade-off between accuracy, generalisability and computational complexity. Following the proposed CNN-based feature extraction, for classification, we used Softmax (or multi-nomial logistic) regression. Training was carried out through back propagation using the mini-batch momentum gradient descent algorithm for optimising the learned filters within each iteration."
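The paper does not include reference code; the sketch below is a minimal PyTorch rendering of the approach quoted above: a small two-layer convolutional feature extractor with pooling, a softmax (multinomial logistic) output over the four grasp classes, and training by back propagation with mini-batch momentum gradient descent. The layer sizes, input resolution and hyperparameters are illustrative assumptions, not the authors' values.

```python
import torch
import torch.nn as nn

class GraspCNN(nn.Module):
    """Illustrative two-layer CNN for grasp classification.

    Layer sizes and the 64x64 grey-scale input are assumptions,
    not the values used in the Newcastle study.
    """
    def __init__(self, num_grasps=4):  # tripod, pinch, palmar neutral, palmar pronated
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5),   # single channel: grey-scale input
            nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling aids shift invariance
            nn.Conv2d(8, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 13 * 13, num_grasps)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)  # logits; softmax is applied inside the loss

model = GraspCNN()
# Mini-batch momentum gradient descent, as the paper describes;
# learning rate and momentum values here are assumptions.
optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()  # softmax + multinomial logistic loss

def train_step(images, labels):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # back propagation
    optimiser.step()
    return loss.item()
```

The one-layer variant the authors also examined would simply drop the second convolution and pooling stage, trading some accuracy for lower computational complexity.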

After the initial offline training and testing with the ALOI and NCL datasets, and a secondary real-time test run on a computer, a webcam was attached to a prosthetic hand and two amputee subjects tested the system in real time. If the hand chose the wrong grip when the webcam photographed an object, the subjects could trigger the camera to retake the photo.
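As a sketch of that test protocol, the hypothetical loop below classifies the webcam photo into a grasp type, preshapes the hand, and waits for the user's myoelectric confirmation, retaking the photo otherwise. The camera, hand and emg interfaces (capture_frame, preshape, read_confirmation and so on) are invented placeholders, not APIs from the study.

```python
import torch

GRASPS = ["tripod", "pinch", "palmar wrist neutral", "palmar wrist pronated"]

def control_loop(model, camera, hand, emg):
    """Hypothetical real-time loop matching the described protocol."""
    while True:
        frame = camera.capture_frame()          # grey-scale tensor, shape (1, H, W)
        with torch.no_grad():
            logits = model(frame.unsqueeze(0))  # add batch dimension
        grasp = GRASPS[int(logits.argmax())]
        hand.preshape(grasp)                    # move hand into the predicted grasp type
        if emg.read_confirmation():             # user confirms via myoelectric signal
            hand.execute_grasp()
            break
        # otherwise loop back: the subject triggers the camera to retake the photo
```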

Function

R And D

Product Development

Background

From the researchers: "Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand."

Benefits

Data

Two datasets of photographs of objects were used, the Amsterdam Library of Object Images (ALOI) with 473 objects, and one created by the researchers, the Newcastle Grasp Library (NCL) with 71 objects. Each of the objects could be picked up by one of four types of hand grasp (tripod, pinch, palmar wrist neutral, and palmar wrist pronated). The photos were converted to grey-scale.
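A minimal preprocessing sketch consistent with that description follows: each photo is converted to grey-scale and labelled with one of the four grasp classes. The image size and directory layout are assumptions made for illustration; the actual pipeline for ALOI and NCL is described in the paper.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# The four grasp classes named in the paper.
GRASPS = ["tripod", "pinch", "palmar wrist neutral", "palmar wrist pronated"]

def load_greyscale(path, size=(64, 64)):
    """Convert a photo to grey-scale and resize; the size is an assumed value."""
    img = Image.open(path).convert("L")    # "L" mode = 8-bit grey-scale
    img = img.resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def load_dataset(root):
    """Assumes a hypothetical layout: root/<grasp_index>/<image>.png."""
    images, labels = [], []
    for label, _grasp in enumerate(GRASPS):
        for path in sorted(Path(root).glob(f"{label}/*.png")):
            images.append(load_greyscale(path))
            labels.append(label)
    return np.stack(images), np.array(labels)
```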
