Abstract
The work presented in this paper aims to develop a system that automatically detects static hand signs of the American Sign Language alphabet. Existing approaches identify hand signs through color and shape detection, while others rely on machine-learning techniques. In 2001, Viola and Jones's study marked a milestone by introducing an algorithm capable of detecting human faces in real time. Although the original technique was designed solely for face detection, many researchers have since applied it successfully to other objects, such as eyes, mouths, car number plates, and traffic signs; hand signs have been detected in the same way. To achieve this, we adopted the combined concepts of AdaBoost and Haar-like features.
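The paper itself includes no code, but the core of the Viola-Jones approach mentioned above is the Haar-like feature: a difference of pixel sums over adjacent rectangles, evaluated in constant time via an integral image. A minimal NumPy sketch (an illustration only, with hypothetical function names, not the authors' implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    ii[y, x] equals the sum of img[:y, :x]."""
    ii = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    computed from the integral image in four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(img, x, y, w, h):
    """Horizontal two-rectangle Haar-like feature: sum of the left w-by-h
    half minus sum of the right w-by-h half of a 2w-by-h window."""
    ii = integral_image(img)
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

A cascade classifier evaluates many such features per detection window; the integral image makes each one O(1) regardless of rectangle size, which is what enables real-time detection.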
In this work, to enhance the system's accuracy, we utilized a large database for the training process, which yielded strong results. The system was implemented and tested on a dataset of 26,000 hand sign images, comprising 1,000 images per hand sign as positive training samples, captured at different scales and under different illuminations, together with a dataset of 11,100 negative images. All positive images were captured with a Logitech webcam at the VGA standard resolution of 640×480.
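The abstract describes training a boosted classifier from positive and negative samples. As a hedged illustration of the AdaBoost idea only (a toy over 1-D feature values and decision stumps, not the authors' cascade-training pipeline), the reweighting loop might be sketched as:

```python
import math

def train_stump(xs, ys, weights):
    """Pick the (threshold, polarity) decision stump with the lowest
    weighted error. xs: feature values; ys: labels in {-1, +1}."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            preds = [pol if x >= thr else -pol for x in xs]
            err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds):
    """Train `rounds` stumps, boosting the weight of misclassified
    samples each round. Returns (alpha, threshold, polarity) triples."""
    n = len(xs)
    weights = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, weights)
        err = max(err, 1e-10)  # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, pol))
        for i, (x, y) in enumerate(zip(xs, ys)):
            pred = pol if x >= thr else -pol
            weights[i] *= math.exp(-alpha * y * pred)
        total = sum(weights)
        weights = [w / total for w in weights]
    return model

def predict(model, x):
    """Sign of the alpha-weighted vote of all weak classifiers."""
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in model)
    return 1 if score >= 0 else -1
```

In the actual Viola-Jones setting, each weak classifier is a stump over one Haar-like feature, and the trained stages are chained into a cascade that rejects most negative windows early.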
Experiments demonstrate that our system can recognize all signs with a precision of 98.7%. In summary, the proposed system achieved remarkable results in recognizing sign language even against complex backgrounds.
| Original language | English |
| --- | --- |
| Publication status | Published - 2016 |
| Event | The 2016 Computer Graphics Workshop (CGW) - National Taiwan University of Science and Technology, Taipei, Taiwan, Province of China |
| Duration | 11 Jul 2016 → 12 Jul 2016 |
| Internet address | http://www.siggraph.org.tw/act_workshop.html |
Workshop

| Workshop | The 2016 Computer Graphics Workshop (CGW) |
| --- | --- |
| Country/Territory | Taiwan, Province of China |
| City | Taipei |
| Period | 11/07/16 → 12/07/16 |
| Internet address | http://www.siggraph.org.tw/act_workshop.html |