Abstract
The introduction and broad societal adoption of artificial intelligence have sparked intense discussion of its hazards and ethical ramifications, risks that often differ from those of traditional discriminative machine learning. A scoping review on the ethics of artificial intelligence, with a focus on large language models and text-to-image models, was carried out to compile the recent discourse and map its normative concepts. As artificial intelligence systems become more capable of making decisions autonomously, enforcing accountability, responsibility, and adherence to moral and legal standards will become more challenging. Here, a user-centered, realism-inspired approach is proposed to close the gap between abstract principles and everyday research practice. It sets out five specific objectives for the ethical use of AI: 1) understanding model training and output, including bias mitigation techniques; 2) respecting copyright, privacy, and confidentiality; 3) avoiding plagiarism and policy violations; 4) applying AI where it offers clear advantages over alternatives; and 5) using AI transparently and reproducibly. Each objective is supported by practical strategies, real-world examples of misuse, and corrective measures. This paper discusses the nature of an
accountability framework and related concerns in order to enable the structured assignment and verification of responsibility for AI systems. The proposed framework for governing AI incorporates key components such as transparency, human oversight, and adaptability to address the accountability issues identified. Industrial case studies also provide key recommendations for implementing and scaling the framework, helping to ensure that organizations strengthen compliance, trust, and the responsible adoption of AI technology.