Abstract
This study develops a system that automatically detects static hand signs of the American Sign Language (ASL) alphabet and translates them into text and speech. Existing approaches detect hand signs using color and shape cues, while others rely on machine-learning techniques. Viola and Jones's 2001 study was a milestone in this area: an algorithm capable of detecting human faces in real time. Although the original technique was designed for face detection, many researchers have since applied it successfully to other objects such as eyes, mouths, car number plates, traffic signs, and, notably, hand signs. Following this line of work, we adopt the combination of Haar-like features and the AdaBoost learning algorithm. To increase the accuracy of the system, we train it on a large database, which yields strong results. The system was implemented and tested on a data set of 28,000 positive hand-sign images (1,000 images per sign, varying in scale and illumination) and 11,100 negative images. All positive images were captured with a Logitech webcam, with the frame size set to the VGA standard of 640x480 pixels. Experiments show that the system recognizes all signs with a precision of 98.7%. Finally, the recognized text is converted into speech using a speech synthesizer. In summary, the proposed system achieves remarkable results in recognizing sign language against complex backgrounds.
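As an illustrative sketch of the pipeline described above (Haar-cascade detection followed by speech synthesis), the snippet below uses OpenCV's `CascadeClassifier`, the standard implementation of the Viola-Jones Haar/AdaBoost cascade, together with the `pyttsx3` speech synthesizer. The cascade file `hand_sign_A.xml`, the camera index, and the single-letter label are illustrative assumptions, not artifacts of the original system.

```python
import cv2
import pyttsx3

# Hypothetical cascade trained on the positive/negative hand-sign samples
# described in the abstract; the file name is an assumption.
cascade = cv2.CascadeClassifier("hand_sign_A.xml")
engine = pyttsx3.init()  # text-to-speech engine

cap = cv2.VideoCapture(0)                # webcam index 0 assumed
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # VGA resolution, as in the study
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection, as in Viola-Jones
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "A", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        engine.say("A")        # speak the recognized letter
        engine.runAndWait()    # blocks briefly while speaking
    cv2.imshow("ASL detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A cascade of this kind would be produced by training on positive and negative sample sets such as those described above, for example with OpenCV's `opencv_traincascade` tool.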
Original language | English |
---|---|
Awarding Institution | |
Supervisors/Advisors | |
Place of Publication | Taiwan |
Publication status | Published - 2016 |
Externally published | Yes |