Abstract
Over the past year, technology companies have made headlines claiming that their artificially intelligent (AI) products can outperform clinicians at diagnosing breast cancer, brain tumours, and diabetic retinopathy. Claims such as these have influenced policy makers, and AI now forms a key component of the national health strategies of England, the United States, and China. While it is positive to see healthcare systems embracing data analytics and machine learning, concerns remain about the efficacy, ethics, and safety of some commercial AI health solutions. This paper argues that improved regulation and guidance are urgently required to mitigate risks and ensure transparency and best practice. Without these, patients, clinicians, and other stakeholders cannot be assured of an app's efficacy and safety.