Gender Prediction from Retinal Fundus Using Deep Learning

Deep learning may transform health care, but model development has largely depended on the availability of advanced technical expertise. The aim of this study is to develop a deep learning model to predict gender from retinal fundus images. The proposed model was based on the pre-trained Xception model and was trained on 20,000 retinal fundus images from the Kaggle repository. The dataset was preprocessed and then split into three subsets (training, validation, and testing). After training and cross-validating the proposed model, it was evaluated on the testing dataset. On the test set, the area under the receiver operating characteristic curve (AUROC) was 0.99, and precision, recall, F1-score, and accuracy were 96.83%, 96.83%, 96.82%, and 96.83%, respectively. Clinicians are presently unaware of distinct retinal feature variants between females and males, underscoring the importance of model explainability for the prediction of gender from retinal fundus images. The proposed deep learning model may enable clinician-driven automated discovery of novel insights and disease biomarkers.
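The abstract describes a standard evaluation pipeline: split the images into training, validation, and testing subsets, then score the trained classifier with AUROC. The paper does not give the split ratios or implementation details, so the following is a minimal sketch in plain Python, assuming a conventional 70/15/15 split and computing AUROC via the rank-sum (Mann-Whitney U) formula; the function names and ratios are illustrative assumptions, not the authors' code.

```python
import random

def split_dataset(items, train=0.70, val=0.15, seed=42):
    """Shuffle and split items into (training, validation, testing) subsets.
    The 70/15/15 ratios are an assumption; the paper only states that
    three subsets were used."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

def auroc(labels, scores):
    """AUROC via the rank-sum formula: the probability that a randomly
    chosen positive example is scored above a randomly chosen negative one.
    Ties receive the average rank."""
    ranked = sorted(zip(scores, labels))
    n = len(ranked)
    pos_rank_sum = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and ranked[j][0] == ranked[i][0]:
            j += 1                      # group of tied scores: ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2.0    # average rank within the tie group
        for k in range(i, j):
            if ranked[k][1] == 1:
                pos_rank_sum += avg_rank
        i = j
    n_pos = sum(1 for _, y in ranked if y == 1)
    n_neg = n - n_pos
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
```

For example, a perfectly separating classifier (every male image scored above every female image, or vice versa) yields `auroc(...) == 1.0`, while a constant score yields 0.5; the reported 0.99 sits close to perfect separation.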
Archival date: 2022-06-03