Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine

Baltic Journal of Legal and Social Sciences 2022 (4):131-145 (2022)


The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially, and so is concern over their controllability. The lack of public, state, and international control over AI technologies creates large-scale risks that such software and hardware will (un)intentionally harm humanity. The events of recent months and years, specifically the Russian Federation’s war against its democratic neighbour Ukraine and other notable international conflicts, support the thesis that the uncontrolled use of AI, especially in the military sphere, may lead to deliberate disregard for the moral standards of controlled AI or to the spontaneous emergence of aggressive autonomous AI. The development of legal regulation for the use of AI technologies lags behind the rapid development of these artefacts, which simultaneously affect all areas of public relations. Therefore, control over the creation and use of AI should be exercised not only through purely technical regulation (e.g., technical standards and conformance assessments, corporate and developer regulations, requirements enforced through industry-wide ethical codes) but also through comprehensive legislation and intergovernmental oversight bodies that codify and enforce specific changes in the rights and duties of legal persons. This article presents the “Morality Problem” and “Intentionality Problem” of AI and reflects upon various lacunae that arise when AI is implemented for military purposes.

Author's Profile

Tyler Jaynes
University of Utah
