Lee, Yi Bin (2019) Development of American Sign Language Recognition System Using Convolution Neural Network. Final Year Project (Bachelor), Tunku Abdul Rahman University College.
LEE YI BIN.pdf (Restricted to Registered users only, 2MB)
Abstract
Sign language is a form of communication that helps deaf-mute people. It is a unique language that uses hand gestures and body movement to express ideas. Due to rapid advancements in AI-based image processing, sign language recognition has been widely researched. However, there is still no system that is both accurate and convenient for everyday use: hardware-based approaches suffer from the high cost of the hardware design, while visual-based approaches face environmental concerns such as lighting sensitivity, camera position, and the background of the images to be trained on. The objective of this project is to develop an American Sign Language recognition system using a convolutional neural network. The project takes a visual-based approach, which requires knowledge of artificial intelligence. To overcome the problems above, a machine learning algorithm is developed that, although still visual-based, can work with an ordinary webcam or camera. Thanks to the rapid development of Information Technology (IT), camera hardware and commercial web cameras have improved, and a camera or webcam is now affordable and easy to obtain. This project therefore focuses on a visual-based rather than a hardware-based approach to sign language recognition. The system can handle sign language performed by any user, whether a deaf-mute person or an ordinary person, and works with the standard webcam of a laptop or computer. A dataset was prepared and a CNN classification algorithm was designed for this project; the accuracy of the system was then validated using Convolutional Neural Network (CNN) image classification. The project applies knowledge of convolutional neural networks and was developed in Jupyter Notebook.
Jupyter Notebook is open-source software that supports machine learning development in the Python programming language. The dataset consists of 24 alphabet classes with 200 images per letter. In American Sign Language (ASL), the letters J and Z require movement; this is the reason for using 24 rather than 26 classes. Convolutional Neural Network (CNN) image classification was chosen because CNNs require little pre-processing compared with other image classification algorithms. The CNN classifier achieved an accuracy of 95%. It is hoped that this research contributes to reducing the barrier between deaf-mute people and ordinary people.
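The building blocks of the CNN classifier described above can be illustrated with a minimal NumPy sketch: a single convolution, ReLU activation, max-pooling, and a dense softmax layer over the 24 ASL alphabet classes. This is not the thesis's actual model; the 64×64 input size, 3×3 kernel, and random weights are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling; trims edges that don't fit the window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a 64x64 grayscale sign image
kernel = rng.standard_normal((3, 3))    # one learned filter (random here)

feat = max_pool(relu(conv2d(image, kernel)))     # feature map: (31, 31)
flat = feat.flatten()
W = rng.standard_normal((24, flat.size)) * 0.01  # 24 classes: A-Y, excluding J and Z
probs = softmax(W @ flat)                        # class probabilities, sum to 1
print(feat.shape, probs.shape)
```

In practice a framework such as Keras would stack several such convolution/pooling layers and learn the kernels and weights from the 24 × 200 labelled images; this sketch only shows the forward pass of one layer.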
| Item Type: | Final Year Project |
|---|---|
| Subjects: | Technology > Mechanical engineering and machinery |
| Faculties: | Faculty of Engineering > Bachelor of Engineering (Honours) Mechatronic |
| Depositing User: | Library Staff |
| Date Deposited: | 07 Feb 2020 09:27 |
| Last Modified: | 14 Apr 2022 00:54 |
| URI: | https://eprints.tarc.edu.my/id/eprint/13173 |