Abstract
The application of modern artificial intelligence (AI) technologies across all spheres of human
life is growing exponentially, alongside concern over their controllability. The lack of public, state, and international
control over AI technologies creates large-scale risks that such software and hardware will, intentionally or
unintentionally, harm humanity. The events of recent months and years, specifically the Russian Federation’s war against its democratic neighbour Ukraine and other notable international conflicts, support the thesis that
the uncontrolled use of AI, especially in the military sphere, may lead to deliberate disregard for the moral
standards of controlled AI or the spontaneous emergence of aggressive autonomous AI. The development
of legal regulation for AI technologies lags behind the rapid development of
these artefacts, which simultaneously permeate all areas of social relations. Therefore, control over the creation
and use of AI should be carried out not only by purely technical regulation (e.g., technical standards and
conformance assessments, corporate and developer regulations, requirements enforced through industry-wide
ethical codes), but also by comprehensive legislation and intergovernmental oversight bodies that codify and
enforce specific changes in the rights and duties of legal persons. This article presents the “Morality
Problem” and the “Intentionality Problem” of AI and reflects upon various lacunae that arise when implementing
AI for military purposes.