Artificial Intelligence and Machine Learning: The New Era of the Use of Force
- Paper number
IAC-20,E7,1,8,x61006
- Author
Ms. Giuliana Rotola, Italy, International Space University (ISU)
- Coauthor
Mr. Giacomo Fedele, Italy
- Year
2020
- Abstract
One of the main objectives of the United Nations is the maintenance of international peace and security. To achieve this goal, its Member States agreed to take collective measures to prevent and remove threats to the peace and to suppress acts of aggression; the resulting system rests on Article 2(4) of the UN Charter, which prohibits the threat or use of force in interstate relations. International law recognizes two exceptions to this prohibition: the use of force by States authorized by the Security Council under Chapter VII of the UN Charter, and the right of individual or collective self-defense provided for in Article 51. Starting from the assumption that these exceptions also apply to outer space by virtue of Article III of the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, this work investigates in particular the earliest moment at which the right of self-defense may be exercised. There are two primary schools of thought. The traditional and more widespread one adopts a restrictive interpretation of Article 51, under which self-defense is justified only in response to an armed attack. A minority of the doctrine has instead advanced the hypothesis of a right of preventive self-defense, subject to rigorous criteria such as a proven and imminent threat to the territory or forces of a State, and the absence of a viable alternative to military self-defense. In this regard, the study focuses on the use of machine learning and artificial intelligence, which could lead to the automation of self-defense operations. The most critical aspects concern the decision-making process for ascertaining whether the requirements of proportionality and necessity are met, and the resulting response times.
Algorithms could instantaneously collect and interpret massive amounts of data and convert them, within seconds, into a response to (alleged) threats. The possibility that an erroneous maneuver could be construed as an attack and trigger an automatic defensive response cannot be excluded. Such a response would also jeopardize the safety of other space objects and contaminate the outer space environment. The work concludes by analyzing the potential for error in such automated assessments and the legal, political, and ethical consequences that could derive from it.
- Abstract document
- Manuscript document
IAC-20,E7,1,8,x61006.pdf (authorized access only).
To obtain the manuscript, please contact the IAF Secretariat.
