Speech2Signs: Spoken to Sign Language Translation using Neural Networks

Type: Other
Start: Nov 2017
End: Nov 2018
Responsible: Xavier Giro-i-Nieto
URL: Caffe2 Research Awards 2017

Description

Hearing impairment is the most common communication disorder, affecting about 360 million people worldwide according to the World Health Organization. For many of these individuals, American Sign Language (ASL) is their primary means of communication. Speech2Signs aims to remove the barriers that deaf people encounter when watching online video by automatically generating a puppet interpreter that translates the speech signal into American Sign Language. While tools already exist that automatically generate textual captions for video, captions have some limitations. Firstly, most pre-lingually deaf people prefer sign language to captions, as it is richer and more natural for them; captions also make it hard to track who is speaking in a scene with multiple people. Secondly, some users have language disorders that prevent them from understanding captions but can still communicate in sign language.
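As a rough illustration of the task (and not the project's actual implementation), speech-to-sign translation is often decomposed into three stages: speech recognition, translation of the transcript into a sequence of sign glosses, and rendering the glosses as avatar animation. The Python sketch below shows only this decomposition; every function and type name is hypothetical, and each stage is left as a stub where a neural model would be plugged in.

```python
# Conceptual sketch of a speech-to-sign pipeline. All names are
# illustrative placeholders, not the Speech2Signs codebase.

from dataclasses import dataclass
from typing import List


@dataclass
class SignPose:
    """One frame of signing-avatar joint positions (illustrative)."""
    joints: List[float]


def speech_to_text(audio: bytes) -> str:
    """Stage 1: automatic speech recognition (stub for an ASR model)."""
    raise NotImplementedError("plug in a speech recognition model here")


def text_to_glosses(text: str) -> List[str]:
    """Stage 2: translate English text into ASL glosses (stub for a
    sequence-to-sequence translation model)."""
    raise NotImplementedError("plug in a translation model here")


def glosses_to_poses(glosses: List[str]) -> List[SignPose]:
    """Stage 3: map glosses to avatar poses for rendering (stub for a
    pose- or video-generation model)."""
    raise NotImplementedError("plug in a pose-generation model here")


def speech2signs(audio: bytes) -> List[SignPose]:
    """End-to-end pipeline: speech audio in, signing-avatar poses out."""
    return glosses_to_poses(text_to_glosses(speech_to_text(audio)))
```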

Automating speech-to-sign translation would cover one of the two communication directions of a video relay service (VRS). These services currently provide an online human interpreter for communication between individuals, for example in emergency rooms, where patients may need to communicate quickly with medical personnel. This project will not address the opposite direction, from sign to spoken language, which may be addressed in future calls based on the outcomes of the present one.

This project was awarded one of the five Caffe2 Research Awards 2017 granted by Facebook.

Collaborators