Abstract
American Sign Language (ASL) is a fascinating language, and many people outside of the Deaf community have begun to recognize its value and purpose. It is a visual language consisting of coordinated hand gestures, body movements, and facial expressions. Sign language is not universal; it varies by country and is heavily influenced by the local language and culture. For example, the American Sign Language alphabet and the British Sign Language (BSL) alphabet differ substantially: fingerspelling is one-handed in ASL and two-handed in BSL, with the exception of the letter C. AI technologies, particularly deep learning, can play an important role in breaking down communication barriers between deaf or hearing-impaired people and other communities, significantly contributing to their social inclusion. Recent advances in sensing technologies and AI algorithms have paved the way for a wide range of applications aimed at meeting the needs of the deaf and hearing-impaired communities. To that end, this paper attempts to translate sign language using artificial intelligence algorithms. We focus on a transfer learning technique based on deep learning using a ResNet model and compare it with our previous work using VGG19 [1] and MobileNet [2]. The ResNet model achieved an accuracy of 93.48% on a dataset of 43,500 images of size 64×64 pixels, with the data split into a training set (70%), a validation set (15%), and a test set (15%), trained for 20 epochs.