Classification of Sign-Language Using MobileNet - Deep Learning

International Journal of Academic Information Systems Research (IJAISR) 6 (7):29-40 (2022)

Abstract

Sign language recognition is one of the most rapidly expanding fields of study today, and many new artificial intelligence technologies have been developed for it in recent years. Sign-language-based communication is valuable not only to the deaf and mute community, but also to individuals with Autism, Down syndrome, or Apraxia of Speech. The biggest problem faced by people with hearing disabilities is the general lack of understanding of their requirements. In this paper we try to fill this gap by translating sign language using artificial intelligence algorithms. We apply a transfer learning technique based on deep learning, using the MobileNet architecture, and compare the results with those of our previous paper [10a]. MobileNet achieved an accuracy of 93.48%, whereas VGG16 achieved 100% accuracy on the same dataset (43,500 images of size 64×64 pixels), with the same data split (70% training, 15% validation, 15% testing) and the same 20 training epochs.
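The transfer learning setup described above (an ImageNet-pretrained MobileNet base with a new classification head, trained on 64×64 sign images for 20 epochs) can be sketched as follows. This is a minimal illustration assuming the Keras API; the class count, dropout rate, and dataset variables are placeholders, not values taken from the paper.

```python
# Minimal transfer-learning sketch with MobileNet (assumed Keras usage).
# NUM_CLASSES and the dataset objects are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29          # assumption: number of sign classes in the dataset
IMG_SIZE = (64, 64)       # image size reported in the abstract

# Load MobileNet pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the convolutional base for transfer learning

# Attach a small classification head for the sign-language classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would come from the 70% / 15% splits described above;
# the remaining 15% test split would be used with model.evaluate().
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```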

Author's Profile

Samy S. Abu-Naser
North Dakota State University (PhD)
