Classification of Sign-Language Using MobileNet - Deep Learning

Abstract
Sign language recognition is one of the most rapidly expanding fields of study today, and many new artificial-intelligence technologies have been developed for it in recent years. Sign-language-based communication is valuable not only to the deaf and mute community but also to individuals with autism, Down syndrome, or apraxia of speech. The biggest problem faced by people with hearing disabilities is others' lack of understanding of their needs, and in this paper we try to fill that gap by translating sign language with artificial-intelligence algorithms. We apply a transfer-learning technique based on deep learning, using the MobileNet architecture, and compare the results with those of our previous paper [10a]. MobileNet reached an accuracy of 93.48%, whereas VGG16 reached 100%, on the same dataset (43,500 images of size 64×64 pixels), the same data split (70% training, 15% validation, 15% testing), and the same 20 training epochs.
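The paper itself does not include code, but the setup the abstract describes (an ImageNet-pretrained MobileNet backbone fine-tuned on 64×64 sign-language images with a 70/15/15 split over 20 epochs) corresponds to a standard Keras transfer-learning pipeline. The sketch below illustrates one way to realize it; the directory paths (data/train, data/val, data/test), the class count, the batch size, the optimizer, and the classifier head are illustrative assumptions not taken from the paper, while the image size and epoch count come from the abstract.

```python
# Minimal transfer-learning sketch with a frozen MobileNet backbone (Keras).
# Assumes images are organised in class subfolders under data/train, data/val,
# and data/test (hypothetical paths); hyperparameters marked below are guesses.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29          # assumption: number of sign-language classes
IMG_SIZE = (64, 64)       # image size reported in the abstract
BATCH_SIZE = 32           # illustrative choice, not from the paper
EPOCHS = 20               # epoch count reported in the abstract

# ImageNet-pretrained MobileNet without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights="imagenet",
)
base.trainable = False    # freeze the backbone for transfer learning

model = models.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    # MobileNet expects inputs in [-1, 1]; rescale raw [0, 255] pixels.
    layers.Rescaling(1.0 / 127.5, offset=-1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Labels are inferred from the class subfolder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
print(model.evaluate(test_ds))  # [loss, accuracy] on the held-out test split
```

Freezing the backbone and training only the small classifier head is the usual first step of transfer learning on a modest dataset; unfreezing the top MobileNet blocks for a second, lower-learning-rate pass is a common refinement.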
PhilPapers/Archive ID
ABUCOS-3
Archival date: 2022-08-03