Abstract
In AI, automated systems make increasingly complex decisions with significant ethical implications, raising questions about who is responsible for those decisions and how to ensure they align with society's ethical and moral values, both in India and the West. Jonathan Haidt's research on moral and ethical decision-making informs this discussion. Problems such as decision-making in autonomous vehicles can draw on the literature of the trolley dilemma, which illustrates both the complexity of ethical decisions faced in emergency situations and the moral implications of the choices made. To avoid insufficient, purely mechanistic explanations, however, a set of moral principles must be understood and cited. Here we present a compilation of ethical positions from both India and the West.