
Artificial intelligence (AI) has transformed industries from medicine and finance to education and logistics. As it becomes increasingly integrated into military systems, however, it raises serious concerns about the nature of contemporary warfare. Among the most hotly contested innovations are Autonomous Weapons Systems (AWS), which can select and attack targets without direct human intervention.
While proponents of military AI emphasize the strategic benefits of accuracy, speed, and reduced risk to personnel, critics raise moral, legal, and humanitarian objections. Autonomous systems call into question established standards of accountability, compliance with international humanitarian law (IHL), and the legitimacy of the use of force. As governments invest in the next generation of adaptive warfare technologies, it is crucial that these systems operate within an established legal and ethical framework.
This paper examines the emergence of AI in warfare, with particular attention to autonomous weapons, the relevant legal frameworks, and the need for sensible regulation that encourages both innovation and accountability.
I. Defining Autonomous Weapons Systems and Their Capabilities
Autonomous Weapons Systems are weapons that, once activated, can select and engage targets without further human intervention. These technologies are distinct from remotely operated platforms, such as human-piloted drones. Instead, AWS rely on machine learning algorithms, sensor fusion, and automated analytics to make decisions in real time.
Although there is no single, universally accepted definition of AWS, many nations and international organizations, including the UN, agree on the essential element of autonomy in target selection and engagement. AWS range from lethal aerial and maritime systems to land-based robotic vehicles. Examples include sentry systems such as South Korea’s Super aEgis II, which can track and target autonomously under certain conditions, and loitering munitions such as the Israeli-made Harpy drone.
Most deployed systems today are semi-autonomous and require human approval before launching strikes. Nonetheless, advances in deep learning and battlefield simulation suggest that fully autonomous systems may soon proliferate.
II. Military Motivations and the Strategic Utility of AI
The application of AI in combat offers real advantages. One of the principal motivations for developing AWS is the ability to reduce military casualties by removing soldiers from direct combat. AI systems can rapidly process large volumes of data, improving target recognition and shortening reaction times.
Militaries further contend that autonomous systems can reduce the “fog of war” by providing consistent, dispassionate decision-making. For example, AI can analyze surveillance footage to detect patterns or anomalies that human analysts might overlook.
National defense strategies make these strategic motivations clear. The United States has made AI a centerpiece of its defense modernization agenda, establishing a Joint Artificial Intelligence Center to coordinate military applications.2 Similarly, China’s New Generation Artificial Intelligence Development Plan explicitly envisions AI as a “strategic technology” for both military and civilian applications.3 The United Kingdom, Israel, and Russia are among the other countries that have invested heavily in AI-enabled defense systems.
This pursuit of military superiority, however, raises the prospect of an international arms race in autonomous weapons, which could lead to deployment before the moral and legal ramifications are fully resolved.
III. Legal Frameworks Governing the Use of Autonomous Weapons
Several legal frameworks, most notably International Humanitarian Law (IHL) and International Human Rights Law (IHRL), govern the deployment and use of AWS. These frameworks regulate the conduct of hostilities and aim to mitigate the effects of armed conflict.
A. Principles of International Humanitarian Law
IHL, particularly as enshrined in the Geneva Conventions and their Additional Protocols, rests on three fundamental principles:
1. Distinction: Parties to a conflict must distinguish between military objectives and civilians.
2. Proportionality: An attack must not cause civilian harm that is excessive in relation to the anticipated military advantage.
3. Military Necessity: Attacks must be lawful in the circumstances and intended to contribute to the defeat of the enemy.
The central question is whether AWS can comply with these principles. Can an AI system, for instance, reliably distinguish between a combatant and a civilian in a complex, rapidly evolving environment? Despite sophisticated sensors and algorithms, current technology lacks the contextual awareness and judgment that human decision-makers possess.
B. Article 36 Reviews
Article 36 of Additional Protocol I to the Geneva Conventions requires states to review new weapons to determine whether their use would be prohibited under international law.4 This provision covers all new military technologies, including AWS. In practice, however, states conduct these reviews very differently, and many of the processes are opaque.
Some scholars argue that existing legal frameworks are sufficient, provided autonomous weapons are developed and used with human supervision and built-in safeguards. Others contend that new, specialized legal instruments are needed to account for the distinctive characteristics and risks of AI-enabled systems.
IV. Accountability and Responsibility in the Use of Force
Accountability is among the most important legal issues raised by AWS. In conventional warfare, an individual or state actor can be held responsible when a weapon causes unlawful harm. Assigning responsibility becomes far more difficult, however, when an autonomous system makes decisions through algorithms and probabilistic reasoning.
The development and deployment of AWS involve many actors: software developers, defense contractors, military commanders, and policymakers. If an AWS causes unlawful harm, who bears responsibility? The problem is compounded when the system performs as designed yet still causes unintended civilian casualties.
Some legal scholars advocate the concept of meaningful human control as a prerequisite for the lawful use of AWS. On this view, human operators should retain supervision and ultimate decision-making authority, particularly over the use of lethal force.5 Such oversight would keep accountability traceable and consistent with existing legal standards.
V. Ethical and Humanitarian Considerations
Beyond legal questions, delegating life-and-death decisions to machines raises profound ethical concerns. However sophisticated, AI systems are incapable of moral reasoning or empathy. Even in war, decisions to kill carry significant moral weight and often demand human judgment attentive to ethics, cultural context, and nuance.
The deployment of AWS may also change the character of conflict. Nations that believe they can fight without endangering their own soldiers may be more willing to initiate hostilities, lowering the threshold for war. This phenomenon, sometimes called the “moral hazard of remote warfare,” has already been observed with armed drones in several theaters.
Critics contend that AWS dehumanize the battlefield, reducing people to data points. Citing the need to protect human dignity and preserve moral responsibility, Human Rights Watch and the Campaign to Stop Killer Robots have called for a preemptive ban on fully autonomous weapons.6
VI. The Role of International Governance and Diplomacy
Efforts to regulate AWS under international law are ongoing but fragmented. The principal forum for international negotiations has been the Convention on Certain Conventional Weapons (CCW), which since 2014 has convened a Group of Governmental Experts (GGE) tasked with examining the implications of AWS. Despite several rounds of meetings, no legally binding instrument has yet been produced.
A number of states and civil society organizations support a binding international treaty that prohibits or severely restricts fully autonomous weapons. Others, including major military powers, favor a more flexible, non-binding approach centered on technical standards and best practices.
Notably, in 2018 the European Parliament passed a resolution calling on the EU to support a ban on weapons that operate wholly autonomously beyond human control.7 In the same vein, the UN Secretary-General has repeatedly advocated international agreement on retaining human control over weapons systems.
VII. Proposals for a Balanced Regulatory Framework
Regulating AWS need not stifle innovation. A balanced approach could safeguard legal and ethical values while preserving the strategic advantages of military AI. The following proposals offer a path toward responsible governance:
1. Codify Meaningful Human Control: Establish explicit legal standards for supervision and intervention, and require human involvement in all decisions to use lethal force.
2. Transparent Weapons Reviews: Strengthen and standardize Article 36 review procedures, including peer review and public reporting.
3. Moratorium or Treaty on Fully Autonomous Weapons: States should consider negotiating a legally binding instrument prohibiting weapons that operate without meaningful human control.
4. Ethical Design and Compliance Audits: Require independent bodies to conduct legal and ethical audits of AI systems used in combat.
5. Global Equity and Capacity Building: Help developing countries acquire the technical and regulatory expertise needed to participate in AWS governance.
Beyond improving legal compliance, these measures would reduce the risk of destabilization and build international trust.
VIII. Conclusion
Artificial intelligence could fundamentally reshape the future of warfare, presenting both opportunities and challenges. The development and potential use of autonomous weapons systems raise critical questions about accountability, legitimacy, and the role of humans in war. International humanitarian law provides a foundation, but further regulatory development is needed to ensure that technological advances do not outpace ethical and legal safeguards.
Through inclusive dialogue, ethical design, and multilateral cooperation, the international community can chart a course that embraces innovation while preserving the fundamental values of humanity and the rule of law. As nations navigate this evolving landscape, the focus must remain on what AI should, and should not, be permitted to do in war.
REFERENCES
1. Noel Sharkey, Autonomous Weapons Systems and the Laws of War, 22 Law, Innovation and Tech. 1 (2012).
2. U.S. Dep’t of Def., Summary of the 2018 Department of Defense Artificial Intelligence Strategy (Feb. 2019), https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-THE-2018-DEFENSE-ARTIFICIAL-INTELLIGENCE-STRATEGY.PDF.
3. State Council of China, New Generation Artificial Intelligence Development Plan (July 2017), translation available at https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf.
4. Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts art. 36, June 8, 1977, 1125 U.N.T.S. 3.
5. ICRC, Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (May 2021), https://www.icrc.org/en/document/autonomous-weapon-systems-icrc-position.
6. Human Rights Watch, Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control (Aug. 2020), https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.
7. European Parliament, Resolution on Autonomous Weapon Systems, 2018/2752(RSP), Sept. 12, 2018.
Author: Vayu Shivangi Sharma
