Classification of Sign-language Using VGG16

International Journal of Academic Engineering Research (IJAER) 6 (6):36-46 (2022)

Abstract

Sign Language Recognition (SLR) aims to translate sign language into text or speech in order to improve communication between deaf-mute people and the general public. This task has a large social impact, but it remains difficult because of the complexity and wide range of hand actions. We present a novel 3D convolutional neural network (CNN) that extracts discriminative spatial-temporal features from image datasets. Sign languages are not universal and are usually not mutually intelligible, although there are similarities among them. They are the foundation of local Deaf cultures and have evolved into effective means of communication. Although signing is used primarily by the deaf and hard of hearing, hearing people also use it when they are unable to speak, when a health condition or disability makes speaking difficult (augmentative and alternative communication), or when they have deaf family members, such as children of deaf adults. In this article, we classify sign-language images with a CNN architecture on a dataset of 43,500 images of 64×64 pixels and achieve 100% accuracy.
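The abstract specifies a dataset of 43,500 images resized to 64×64 pixels fed to a CNN classifier. A minimal sketch of the implied input pipeline is shown below; the number of classes and the synthetic batch are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Assumptions (not stated in the abstract): 29 classes, RGB input,
# pixel values scaled to [0, 1] before entering the CNN.
NUM_CLASSES = 29
IMG_SIZE = 64  # from the abstract: 64x64 pixel inputs


def preprocess(images):
    """Scale uint8 pixel values to [0, 1] floats, a common CNN input convention."""
    return np.asarray(images, dtype=np.float32) / 255.0


def one_hot(labels, num_classes=NUM_CLASSES):
    """Convert integer class labels to one-hot vectors for softmax training."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out


# Tiny synthetic batch standing in for the real sign-language images.
batch = np.random.randint(0, 256, size=(8, IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
x = preprocess(batch)
y = one_hot(np.arange(8) % NUM_CLASSES)
print(x.shape, y.shape)  # (8, 64, 64, 3) (8, 29)
```

Arrays of this shape could then be passed to a VGG16-style network (e.g. via `keras.applications.VGG16` with a new softmax head sized to the number of sign classes).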

Author's Profile

Samy S. Abu-Naser
North Dakota State University (PhD)

Analytics

Added to PP
2022-07-01
