AN EVALUATION OF LETHAL AUTONOMOUS WEAPON SYSTEMS UNDER THE LAW OF ARMED CONFLICT AND THE QUESTION OF KARGU-2
According to the March 2021 report of the United Nations Panel of Experts on Libya, a Turkish-made loitering munition, the Kargu-2, played an important role in the targeting of the Haftar Armed Forces (HAF) by the Government of National Accord. The report alleged that "lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability". It was thus claimed that a lethal, fully autonomous system, in other words a "killer robot", had taken part in battle and targeted humans for the first time. This development provoked much criticism in international media outlets about the legality of autonomous weapon systems and their use, particularly the Kargu-2.
The legal dimension of lethal autonomous weapon systems (LAWS) has long been on the agenda of the international community. Human rights organisations, prominent figures and some states have called for a prohibition on these systems. In one of the most significant efforts, Human Rights Watch launched the "Stop Killer Robots" campaign in 2012, calling for a pre-emptive ban on such systems. Moreover, in 2015 a group of robotics and artificial intelligence researchers, joined by figures including Stephen Hawking, Elon Musk and Steve Wozniak, published an open letter demanding a ban on offensive autonomous weapons beyond meaningful human control. These calls paved the way for a process that resulted in a series of informal meetings between states. In 2016, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) to discuss the challenges posed by LAWS.
In the light of these developments, this post briefly examines the problems posed by LAWS for the fundamental principles of the law of armed conflict (LOAC) – alternatively, international humanitarian law – and the status of the Kargu-2.
- Lethal Autonomous Weapon Systems and the Law of Armed Conflict
Autonomy may refer to different aspects of a system, such as the nature of the interaction between human and machine, the sophistication of the machine, and the type of function the machine performs. In terms of the level of human-machine interaction, there are three different positions in which the human may be located: human-in-the-loop, human-on-the-loop and human-out-of-the-loop. The last is considered the fully autonomous mode. Autonomous weapon systems are "weapons system[s] that, once activated, can select and engage targets without further intervention by a human operator". These systems operate in fully autonomous mode using artificial intelligence (AI), which raises the question of whether they are compatible with LOAC. Since LAWS are not yet specifically regulated by any international convention, the Geneva Conventions and customary law govern their employment and use. Within this framework, the legality of LAWS is determined along two different dimensions: the law of weapons and the law of targeting.
The law of weapons determines whether a weapon is in accordance with LOAC by its very nature. When Articles 35 and 36 of Additional Protocol I to the Geneva Conventions, which bear customary character, are evaluated together, it is generally accepted that a weapon must not be indiscriminate or uncontrollable, must not cause unnecessary suffering or superfluous injury to humans, and must not cause widespread, long-term and severe damage to the natural environment. For example, a biological weapon such as a virus would be deemed unlawful per se, since its nature allows neither distinguishing civilians from combatants nor controlling its effects.
The law of targeting, on the other hand, assesses the legality of the way a weapon is used during hostilities. Its three main pillars are the principles of distinction, proportionality and precaution. These principles are indispensable and constitute the core of LOAC. The principle of distinction requires distinguishing between civilians and combatants and directing military operations only at military objectives. The principle of proportionality prohibits attacks that are "excessive in relation to the concrete and direct military advantage anticipated". Lastly, the principle of precaution demands constant care from belligerents: they must take all feasible precautions to eliminate or minimise damage to civilians and civilian objects and must avoid disproportionate attacks.
Traditionally, the targeting process is conducted by humans, who weigh the facts, context and values before them during military operations. With the development of AI, it is disputed whether LAWS can, on their own, be used in accordance with these fundamental principles. When armed conflicts were fought between the regular militaries of nation states, upholding the principle of distinction was easier. In current theatres of war, however, distinguishing combatants from civilians can be a very challenging task owing to rapid and profound changes in the actors and methods of warfare; combatants, for example, no longer necessarily carry insignia or wear uniforms.
Furthermore, it is not clear how AI will interpret the act of "taking a direct part in hostilities", a decisive condition for determining legitimate targets in non-international armed conflicts, since it is a fuzzy and context-dependent concept. For example, even if AI detects a rifle on a person, that person is not necessarily a legitimate target, as the act of carrying the rifle may lack a belligerent nexus, as in the case of carrying it for self-preservation. In the same vein, judgements made by AI would be concerning in terms of the proportionality principle. While militaries have adopted quantitative methods to estimate probable collateral damage, the principle of proportionality also requires qualitative and relative evaluation of concepts such as "excessiveness" and "military advantage".
When all these considerations and present-day technology are taken into account, the use of fully autonomous systems is likely to be unlawful under LOAC. Nevertheless, it should be noted that in exceptional circumstances, for example the use of anti-materiel autonomous weapon systems in remote areas such as deserts, the high seas or demilitarised zones, their use can be lawful. Outside these uncommon and strictly controlled areas, in combat zones where the civilian population and combatants coexist, a certain degree of human control is necessary to ensure compliance with LOAC. Even if a conflict zone consists only of combatants, an anti-personnel LAWS may still fail to comply with LOAC, since AI could be incapable of recognising combatants who are hors de combat owing to wounds, illness or unconsciousness. Indeed, the need for human control was confirmed by the CCW Group of Governmental Experts in the guiding principles adopted in 2019.
- The Question of Kargu-2
The UN Panel of Experts on Libya report stated that "Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2" and that this weapon was "programmed to attack targets without requiring data connectivity between the operator and the munition". It further noted that the Kargu-2, in combination with various other weapons, had proved significantly effective, contributing to the overcoming of Pantsir air defence systems and to casualties. So, is the Kargu-2 a true "killer robot" programmed to hunt humans, as portrayed in the report and in international media outlets? Is it a weapon system that should be banned?
According to its manufacturer, the Kargu-2 is an autonomous system, known as a loitering munition or kamikaze drone, that is "designed to provide tactical ISR [intelligence, surveillance and reconnaissance] and precision strike capabilities for ground troops". The system can be used as an anti-personnel weapon by virtue of its machine-learning-based object classification capability. Like the UN experts, the manufacturer describes the Kargu-2 as fully autonomous; here, however, lies a crucial nuance. While the manufacturer employs the term autonomous in the context of the functions performed, the UN experts use it with reference to the human-machine interaction dimension. The manufacturer states that the Kargu-2 operates on the human-in-the-loop principle in targeting, and adds that its fully autonomous features concern navigation in the conflict area. In this regard, as some scholars have noted, the report does not contain any data evidencing human-out-of-the-loop use of the Kargu-2 during targeting.
With respect to the law of weapons, it could be inferred that the Kargu-2 can be lawful, fulfilling the conditions set out above, if sufficient parameters and artificial intelligence elements are built into the system for its intended use and purpose so as effectively to prevent it from being indiscriminate. This conclusion appears to be in line with international legal practice, as the International Court of Justice emphasised the importance of the particular circumstances when deciding on the legality of nuclear weapons. In its Advisory Opinion, the Court declared that there was insufficient basis for concluding that the use of nuclear weapons would always violate IHL principles.
In terms of the law of targeting, it could be argued that as long as there is a human in or on the loop who can select, confirm or supervise target selections, the Kargu-2 can operate in conformity with LOAC principles. Considering that it is a kamikaze drone providing the operator with real-time footage for target detection and identification until the moment it detonates, it expands the operator's margin for intervention in a way that reinforces the principles of distinction and precaution. It can also strengthen the principle of proportionality by reducing collateral damage. While some conventional weapons, such as howitzers fired from longer ranges with a greater level of violence, increase collateral damage to civilians and civilian objects, kamikaze drones such as the Kargu-2 may significantly decrease collateral damage by improving precision, since they can approach the target closely without endangering the operator's life.
In conclusion, the classification of the Kargu-2 as a fully autonomous weapon system in the UN Panel of Experts on Libya report appears to rest on an unevidenced assumption lacking sufficient substantive grounds as to its technical features. As some drone policy experts, including Ulrike Franke, have observed, it is not clear why the "killer robot" claim attracted so much interest from media outlets when the features, mode of operation and consequences of the Kargu-2 could not be ascertained within the scope of the report. By failing to specify the nature of the autonomy within the Kargu-2 and to examine properly whether it can act in accordance with LOAC principles, the report created an inaccurate perception.
Research Assistant Beyza Arslan
INTERNATIONAL LAW COMMISSION
Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council, S/2021/229 <https://digitallibrary.un.org/record/3905159?ln=en>
Paul Scharre and Michael C. Horowitz, "An Introduction to Autonomy in Weapon Systems", CNAS Working Paper (2015)
US Department of Defense, Directive 3000.09, "Autonomy in Weapon Systems"
Additional Protocol I, Article 48
Additional Protocol I, Article 51(5)(b)
Additional Protocol I, Article 57
Michael N. Schmitt and Jeffrey S. Thurnher, "'Out of the Loop': Autonomous Weapon Systems and the Law of Armed Conflict", Harvard National Security Journal (2013)
Additional Protocol I, Article 41
Background on LAWS in the CCW <https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/>
ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, para 95