The Looming Threat of Lethal Autonomous Weapons Systems (LAWS): A Ticking Time Bomb


Imagine a battlefield where life-and-death decisions are made not by soldiers on the ground but by robots: machines driven by Artificial Intelligence (AI), particularly machine learning, in which a program is “trained” on data so that it learns on its own. In a sense, the program writes itself. This rekindles a long-standing fear in the human imagination: machines built to learn turning against humanity. Such fears have long been reflected in science fiction, from Skynet’s murderous robots in the Terminator franchise to Agent Smith in The Matrix and the android hosts of Westworld. The complex and often uneasy human-AI relationship these works dramatize, once confined to imagination or a distant future, is now becoming a reality.

Moving beyond the screenplay, rapid advances in AI have enabled the development of machines, or more precisely weapons, programmed to use algorithms to select, target, and kill; put simply, weapons that operate with humans completely out of the loop. They go by various names, including autonomous weapons, killer robots, slaughterbots, and unmanned autonomous weapons, but the most widely used and most technical term is Lethal Autonomous Weapons Systems (LAWS). Some call them the “third revolution in warfare,” after gunpowder and nuclear weapons. Yet, as the UN Office for Disarmament Affairs has stated, “At present, no commonly agreed definition of Lethal Autonomous Weapon Systems (LAWS) exists.” The definition of such systems remains vague, and there is no worldwide agreement governing their development and use equivalent to the Nuclear Non-Proliferation Treaty (NPT), since autonomous weapons lack both an agreed definition and a regulatory framework.
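To make the “out-of-the-loop” distinction concrete, consider the minimal sketch below. It is purely illustrative: every name in it (classify_target, request_human_authorization, engage) is a hypothetical placeholder invented for this example, not the interface of any real system.

```python
# Purely illustrative sketch of the control-loop distinction described above.
# Every function here is a hypothetical stand-in; no real weapon system
# exposes an interface like this.
import random


def classify_target(sensor_frame):
    """Stand-in for a machine-learning classifier: returns a label and a confidence score."""
    return "unidentified vehicle", random.random()


def request_human_authorization(label, confidence):
    """Stand-in for a human operator reviewing the machine's recommendation."""
    print(f"Operator review requested: {label} ({confidence:.2f} confidence)")
    return False  # a human can always decline


def engage(label):
    print(f"Engaging: {label}")


def human_in_the_loop_step(sensor_frame):
    # The algorithm only recommends; an accountable person decides.
    label, confidence = classify_target(sensor_frame)
    if confidence > 0.9 and request_human_authorization(label, confidence):
        engage(label)


def human_out_of_the_loop_step(sensor_frame):
    # The LAWS case: the same classifier output triggers force directly,
    # with no human judgment or accountability checkpoint in between.
    label, confidence = classify_target(sensor_frame)
    if confidence > 0.9:
        engage(label)
```

The only difference between the two loops is a single authorization step, and that step is precisely the human control whose preservation the rest of this article argues for.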

On the timeline along which LAWS have received the attention of the Convention on Certain Conventional Weapons (CCW), the story begins in 2013, when Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, called for a halt to the testing, production, assembly, transfer, acquisition, deployment, and use of LAWS until international norms governing such weapons had been developed. His report drew attention within the UN to the need to exercise control over these weapons before they are used. In 2016, the CCW established the Group of Governmental Experts (GGE) to study emerging autonomous weapons technologies.

However, some of the states already working to build such weapons, including the United States, Russia, and the United Kingdom, have openly objected to such proposals, and the GGE has therefore been unable to decide whether to limit or outright ban LAWS. This underlines the fact that nations have a vested interest in technologies that enhance their military might, yet at the same time an interest in controlling technologies that could further destabilize an already volatile international system. In a 2018 open letter organized under the Future of Life Institute (FLI), more than 32,785 robotics researchers and AI experts urged the United Nations (UN) to rein in the development of LAWS, arguing that such weapons pose moral and legal dilemmas and lie beyond meaningful human control. Even so, military interest and investment in LAWS around the globe continue to rise.

The essence of the problem is the debate over whether humans should retain control over weapons that can recognize targets and fire at them, or whether we should allow an AI algorithm to make potentially fatal decisions on its own, without the benefit of human judgment or legal accountability. Opposition to an outright ban by some of the world’s most powerful nations may make such a step politically difficult at present. Even so, civil society and responsible governments must keep pushing for human control over autonomous weapons before we open a Pandora’s box that would be difficult to shut. The alternative scenario, automated weapons deployed in the field making decisions that cost lives on either side, decisions that ought to be made by accountable human beings, is simply unacceptable. We cannot afford to reduce our militaries to the role of merely supervising the killer robots we deploy.

Technology and advancement are not inherently adversarial. Autonomous technologies of the kind LAWS offer could reduce military fatalities and make war zones safer for soldiers. But power brings peril, and this case is no exception: more power means more danger. We cannot abdicate moral responsibility for life and death to machines, so we must be careful how we deploy them.

Future generations of autonomous weapons are a major source of concern because they might not behave in ways consistent with human morality. Without supervision, such a system could wrongly attack noncombatants in ambiguous circumstances or escalate hostilities beyond necessary levels. And even if the technology performs exactly as advertised, a significant segment of society regards machines making lethal choices as inherently unethical. We should therefore proceed more carefully with the development and application of autonomous systems. In particular, any weapon capable of taking human life on its own should always be closely governed and supervised by humans. This would guard against tragedy and reassure the public that human reason and empathy for fellow human beings are not overshadowed by ruthless pragmatism when lethal force is used.

As tempting as it may be to press ahead with fully autonomous lethal weapons simply because we can, the world will be worse off if they are built without proper safeguards. The goal of development should be to uphold moral principles and integrity rather than to pursue supremacy and power. If we act morally and rationally, preserving human control over these machines will still allow us to look forward and enhance military capabilities. The path ahead is not black and white; it is complex, delicate, and demands empathy.

Ayesha Zulfiqar

Ayesha Zulfiqar is a student of International Relations, currently pursuing a Bachelor's degree at Air University, Islamabad. She is an aspiring IR scholar aiming to work in the areas of international law, emerging technologies, humanitarian relief, and international stability. Her research interests include emerging technologies and national security.
