Abstract
There is a communication gap between hearing-impaired people and those with normal hearing; sign language is the main means of communication in the hearing-impaired population. Continuous sign language recognition, which can close this gap, is a difficult task because the ordered annotations are only weakly supervised and no frame-level labels are available. To address this issue, this paper translates sign language using deep learning models and compares the accuracy of two architectures, Inception and Xception, against our previous work with VGG19 [1] and MobileNet [2]. The Inception and Xception models achieve accuracies of 99.32% and 99.43%, respectively. The dataset contains 29,000 images of size 75×75 pixels and is split into a training set (70%), a validation set (15%), and a test set (15%); each model is trained for 20 epochs.
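To make the experimental setup concrete, the following is a minimal sketch (not the authors' code) of the pipeline the abstract describes: 29,000 images at 75×75 pixels, a 70/15/15 train/validation/test split, and 20 epochs of training for each backbone. The class count, data files, preprocessing, and the choice of the InceptionV3 variant are assumptions for illustration only.

```python
# Sketch of the described setup; dataset files and class count are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

NUM_CLASSES = 29          # assumed: one class per sign in the dataset
IMG_SHAPE = (75, 75, 3)   # image size stated in the abstract

def build_model(backbone_fn):
    """Wrap a pretrained backbone (Xception or InceptionV3) with a softmax head."""
    backbone = backbone_fn(weights="imagenet", include_top=False,
                           input_shape=IMG_SHAPE, pooling="avg")
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (29000, 75, 75, 3) image array, y: integer labels -- hypothetical files.
X, y = np.load("images.npy"), np.load("labels.npy")
# 70% train, then split the remaining 30% evenly into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest)

for backbone_fn in (tf.keras.applications.Xception,
                    tf.keras.applications.InceptionV3):
    model = build_model(backbone_fn)
    model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
    _, test_acc = model.evaluate(X_test, y_test)
    print(backbone_fn.__name__, "test accuracy:", test_acc)
```

Note that 75×75 is large enough for both Keras backbones (Xception requires at least 71×71 and InceptionV3 at least 75×75), which may explain the choice of image size.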