A translator for American Sign Language to text and speech

Vi Truong, Chuan-Kai Yang, Quoc-Viet Tran

Research output: Book chapter / Conference paper, peer-reviewed

34 Citations (Scopus)


In 2001, Viola and Jones published a milestone study: an algorithm capable of detecting human faces in real time. Although the original technique targeted face detection, many researchers have since applied it to other objects such as eyes, mouths, car number plates, and traffic signs; among these, hand signs have also been detected successfully. This paper proposes a system that automatically detects the static hand signs of the alphabet in American Sign Language (ASL). To do so, we combine two concepts: AdaBoost and Haar-like feature classifiers. To increase the system's accuracy, we use a large database for the training process, which yields impressive results. The translator was implemented and trained on a data set of 28,000 positive hand-sign images (1,000 per sign, covering different scales and illumination conditions) together with 11,100 negative images. All positive images were captured with a Logitech webcam, with the frame size set to the VGA standard resolution of 640×480. Experiments show that our system recognizes all signs with a precision of 98.7%. The input of the system is live video and the output is text and speech.
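The two building blocks named in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it computes a two-rectangle Haar-like feature via an integral image, and combines decision-stump weak classifiers with AdaBoost-style weighted voting. The stump format `(threshold, polarity, alpha)` is an assumption made for this sketch.

```python
import numpy as np

def integral_image(img):
    # Integral image with a zero border: ii[y, x] = sum of img[:y, :x].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum of pixels in a w×h rectangle at (x, y), using four lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    # Two-rectangle Haar-like feature: left half minus right half.
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

def adaboost_predict(feature_values, stumps):
    # Weighted vote of decision stumps; stumps are (threshold, polarity, alpha).
    score = sum(alpha * (1 if polarity * fv < polarity * thr else -1)
                for (thr, polarity, alpha), fv in zip(stumps, feature_values))
    return 1 if score >= 0 else -1
```

In a full detector, thousands of such features are evaluated over a sliding window, and AdaBoost selects the stumps (and their weights) during training on the positive and negative image sets described above.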
Original language: English
Title of host publication: IEEE Global Conference on Consumer Electronics (GCCE)
Place of publication: Kyoto, Japan
Publication status: Published - 2016
Externally published: Yes


