AI Case Study

Microsoft's Seeing AI app identifies and describes objects the smartphone camera is pointed at, assisting visually impaired people

The deep learning app uses computer vision to recognise objects and people, and to read text, via a phone or tablet's camera, describing them aloud to the user. It can also read handwriting and identify several bank notes.

Industry

Technology

Software and IT Services

Project Overview

"The app also recognizes currency, identifies products via their barcodes and, through an experimental feature, can describe entire scenes, such as a man walking a dog or food cooking on a stove. Basic tasks can be carried out directly within the app, without the need for an internet connection."

According to Techly: "In a nutshell, Seeing AI uses your smartphone’s camera to scan the environment and describe it back to you. The app relies on machine learning and cloud computing power to allow users to “see” the world around them.

It works with:

Short text: Instantly reads short lines of text as soon as they come into view of the camera
Barcodes: Tells you the name of a product by scanning the barcode
Document: Helps you copy documents by helping you capture every corner
People: Describes the general appearance and estimates the age of anyone you take a picture of. It can even remember familiar faces and call them by name if they show up on camera
Scene: Describes the composition of any photo (still in beta)
Currency: Identifies different notes to assist with cash payment"
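To illustrate the kind of output the Scene channel produces ("a man walking a dog"), the sketch below shows one way detected object labels from a vision model might be composed into a short spoken phrase. This is a hypothetical illustration under assumed inputs, not Seeing AI's actual pipeline: the label names, confidence scores, and threshold are invented for the example.

```python
# Hypothetical sketch: turning (label, confidence) pairs from a vision
# model into a short English scene description, in the spirit of the
# app's "Scene" channel. All values here are illustrative assumptions.

def describe_scene(detections, threshold=0.5):
    """Compose a phrase from labels whose confidence clears the threshold."""
    labels = [label for label, conf in detections if conf >= threshold]
    if not labels:
        return "no recognisable objects"
    if len(labels) == 1:
        return labels[0]
    # Join all but the last label with commas, then add "and".
    return ", ".join(labels[:-1]) + " and " + labels[-1]

# Low-confidence detections (the bicycle) are dropped before composing.
detections = [("a man", 0.92), ("a dog", 0.88), ("a bicycle", 0.31)]
print(describe_scene(detections))
```

In a real system the phrase would then be passed to the platform's text-to-speech engine; the thresholding step reflects why the Scene feature is flagged as beta, since low-confidence detections must be filtered before anything is spoken.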

Reported Results

In its first year in the US, the app was downloaded more than 100,000 times and helped users with over three million tasks.

Technology

Function

R&D

Product Development

Background

The app is designed to help blind and partially-sighted people better navigate the world.

Benefits

Data