
Emerging trends of AI between great powers and its impacts on strategic stability


Abstract

Contemporary advancements in artificial intelligence (AI) point to the technology's considerable and potentially transformative impact on military strength, strategic stability, international politics, and global security. Beyond the weaknesses and vulnerabilities that accompany AI's rapid spread, its proliferation itself could become a problem. Against the wave of broad and often sweeping opinion in the literature on AI, this research aims to add some precision to the argument. It contends that AI's rapid proliferation and diffusion might trigger instability and strategic rivalry among the great powers if its vulnerabilities and ambiguities are left unaddressed. The research also highlights several technological developments led by the US, China, and Russia that may carry notable repercussions for military applications, from the tactical battlefield to the strategic level, and examines the impact of these developments on strategic stability.

Keywords: Artificial intelligence, Global security, Strategic stability, Decision-making, Autonomous weapon systems (AWS), Military

Introduction

Autonomous weapon systems (AWS) have featured in military hardware since World War II. Current advances in machine learning and AI constitute an evolution in the application of intelligent solutions and automation to improve the understanding of the modern battlespace. AI has the potential to alter military power, with implications for the balance of power, and the race to develop AI capabilities could add many challenges to the geopolitical contest between the U.S., China, and Russia. The international community has been slow to grasp AI's broad implications as an essential instrument of national security, yet AI could bring significant changes to conventional power through a variety of modern AI-based military hardware. Russia, for instance, has set a goal of making 30% of its military force structure robotic by 2025[1]. Such aims and actions show that states recognize AI's revolutionary potential for national security. AI has limited strategic impact at its current stage of development. However, it amplifies military power and boosts other developing technology sectors, such as cybersecurity and the military application of robotics. The ambiguities and vulnerabilities associated with the growth of dual-use AI technologies may intensify concerns about global security, and the simultaneous development of AI by developed and developing nations widens the opportunities for geopolitical competition. In an increasingly complex threat environment, AI-enabled electronic warfare could help counter adversaries' attempts to meddle with military GPS or communications satellite signals. In addition, highly advanced electromagnetic devices might be used to disrupt an adversary's sensors and communications, which could then be employed in concert with cyber-attacks to manipulate, obscure, and overwhelm a rival's defences.
The Russian military is reported to have deployed jammers in Syria and Eastern Ukraine to thwart guided UAVs. Embedding AI in early warning systems (EWS) may well shorten decision-making time, negatively impacting crisis management at the nuclear threshold. The main threat to international security, then, is that geopolitical tensions push states to employ AI-enabled autonomous weapon systems before the underlying technology matures, leaving these systems more vulnerable to subversion. In the worst-case scenario, an adversary might conclude that artificial intelligence is counterproductive, leading to inaccurate and potentially poor decision-making[2].
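The time-compression pressure described above can be sketched with a toy arithmetic model. Every number below is an invented assumption for illustration only, not a real system parameter; the point is simply that as warning-to-impact timelines shrink, a human-staffed verification pipeline stops fitting inside the decision window, which creates the pressure to automate before the technology matures.

```python
# Toy model of decision-time compression at the early-warning stage.
# All durations are invented assumptions, in minutes.

FLIGHT_TIME_MIN = 25.0  # assumed warning-to-impact time

# Assumed duration of each verification stage.
HUMAN_PIPELINE = {
    "sensor correlation": 3.0,
    "human verification": 8.0,
    "command conference": 10.0,
}
AUTOMATED_PIPELINE = {
    "sensor correlation": 0.5,
    "algorithmic verification": 0.5,
}

def decision_window(flight_time, pipeline):
    """Minutes left for a deliberate leadership decision after processing."""
    return flight_time - sum(pipeline.values())

human = decision_window(FLIGHT_TIME_MIN, HUMAN_PIPELINE)
auto = decision_window(FLIGHT_TIME_MIN, AUTOMATED_PIPELINE)
print(f"human-staffed pipeline leaves {human:.1f} min to decide")
print(f"automated pipeline leaves     {auto:.1f} min to decide")
# As the assumed flight time shrinks further, the human pipeline no longer
# fits at all -- the incentive to automate verification out of the loop.
```

Under these invented numbers the human pipeline leaves only a few minutes for deliberation while the automated one preserves most of the window, which is precisely why shortened timelines pull verification stages out of human hands.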

Research Questions

  • Why could artificial intelligence-associated technologies become an arena of new strategic competition between great powers?
  • How is great-power competition in AI counterproductive to strategic stability?

Research Objectives

  • To evaluate the extent of AI development by great powers for future warfare
  • To examine the impact of AI's rapid development on strategic stability

Literature Review

Chang and Daniels (2021) argue that the industrial revolution, when machines began to assist people with physical labor in new and structured ways, is one of the best-known examples of a shift in the foundations of power. The character of war and military power changed dramatically after the industrial revolution. Before it, a group's military might was directly connected to the number of professional persons under arms, a metric of both taxable population and soldiers. After it, any assessment of military power has had to take into account a society's industrial capacity and the resources available to sustain that capacity, a measure of its ability to build crucial military gear such as warships, armored vehicles, and fighter aircraft. Today, AI technologies can be seen as part of another large-scale transition: machines are increasingly assisting humans, in new and organized ways, with certain types of cognitive tasks[3].

Paul Scharre (2018) states that the size of the total population may become less relevant to national military and economic power as AI technologies increasingly replace human labor. AI technologies will automate routine cognitive labor, from identifying maintenance needs to exploiting imagery intelligence, in the same way that machines took over physical labor during industrialization. This could minimize the amount of human work required to keep a military operational. In full-scale conflicts, autonomous AI technologies could reduce the need for human combatants even further. As armies rely more on autonomous systems for military operations, defense planners may count autonomous systems and the available domestic supply of AI chips the way they once counted soldiers and the available domestic pool of military-age recruits[4].

Ludwig von Mises (2012) argued that the advancement of artificial intelligence (AI) may help authoritarian nations by lowering the costs and repercussions of state intervention in internal markets. The standard argument against centrally managing complex economies is that doing so creates insoluble optimization problems, and for a variety of practical reasons, ranging from human organizational problems to corruption, AI technologies are unlikely to change this. On the other hand, AI technology may be able to mitigate the detrimental effects of government interference in markets to some extent; for example, AI applications could aid in collecting and interpreting the large amounts of data required for more effective economic controls. Numerous obstacles stand in the way of properly deploying such instruments for state economic policy, probably the most significant being the planners' own shifting goals[5].

Fisher (2013) states that AI-assisted information warfare has the potential to reduce the costs both of influencing foreign publics and of implementing large-scale economic warfare programs. If the contest between AI influence systems can significantly affect mass opinion, China may decide that heavy investment in AI-empowered propaganda offers its best chance of reabsorbing Taiwan. Economic systems and financial markets can also be targeted by information attacks, particularly AI systems that manage equity investments. An inadvertent early instance occurred in 2013, when U.S. trading algorithms responded to disinformation released from the Associated Press's hacked Twitter account. Information warfare may thus become increasingly tied to economic warfare, rather than only to political disruption[6].
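The AP incident can be sketched in miniature. The hypothetical keyword rule below is far cruder than any real trading system's language model, but it shares the same failure mode: acting automatically on an unverified text feed, so a fabricated headline triggers a sell decision. The keyword lists and headlines are invented for illustration.

```python
# Toy sketch of a headline-driven trading rule steered by disinformation.
# Keyword lists are illustrative assumptions, not any real system's logic.

NEGATIVE = {"explosion", "explosions", "attack", "injured", "war"}
POSITIVE = {"rally", "growth", "record", "deal"}

def signal(headline: str) -> str:
    """Naive sentiment rule: net keyword score decides the trade."""
    words = set(headline.lower().replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score < 0:
        return "SELL"
    if score > 0:
        return "BUY"
    return "HOLD"

# A fabricated headline, modeled loosely on the 2013 hacked-AP tweet.
fake = "Breaking: two explosions in the White House and Barack Obama is injured"
print(signal(fake))  # the rule dumps positions on fabricated news
```

Because many such rules act within milliseconds and in parallel, a single forged message can move markets before any human verifies the source, which is what makes information attacks a form of economic warfare.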

Sagan (1993) highlights that under normal accidents theory (NAT), even the weapons of mass destruction (WMDs) with the most disastrous capabilities are vulnerable to routine mishaps. The first reason NAT applies is that AWSs will be far more complicated than WMDs: even if a human supervisor is present, he or she might not be capable of foreseeing the system's behavior right away, particularly in a new and unfamiliar environment. There have already been real-world incidents involving systems far less advanced than AWSs are planned to be. Due to a radar malfunction and human error, the Patriot system shot down two US planes during the 2003 Iraq war[7]. In another incident, as the army approached Baghdad, Patriot radars positioned in an unusual arrangement generated a misleading signal through mutual interference; the system instantly launched a missile, which again struck a friendly plane[8].

Matei & Bruneau (2012) demonstrate that civilian oversight of the military is one of the pillars of stability in established democratic regimes. Several fundamental characteristics define an effective democratic control system, and AI's application in AWS challenges several of them: an effective chain of command, the right to be wrong, and the presence of civil society in the control mechanism. An effective chain of command ensures the military's accountability to societal, political, and judicial institutions, which promotes transparency in decision-making and accountability for actions[9][11]. Expanding military robotic technology is expected to jeopardize democratic control of the armed forces, diminish political leaders' accountability, and lower the threshold of war[10].

Pandya (2019) argues that the majority of ongoing AI research takes place in the business sector or at universities and is not directly tied to the military[12]. Given the nature of the technology, however, its weaponization appears unavoidable. Methods to control military uses of AI are still in their early stages and may even prove impossible; there is presently no worldwide agreement on how to govern the development, deployment, and spread of this technology. There are fears that AI will disrupt strategic balances among states, erode stability based on nuclear deterrence, or increase ambiguity over the emerging balance of power, producing a brand-new round of security concerns. Meanwhile, the number of young men in a country's military is still regarded as a significant element of national strength[13].

Kvasňovský (2019) argues that one significant obstacle to global discussions, and a source of notable pushback from epistemic communities, is the difficulty of defining and measuring the autonomy, design, and attributes of a machine and the amount of intelligence its processing unit needs in order to perform autonomously. Weaponizing AI creates a plethora of obstacles and threats alongside its tactical and strategic advantages. The deployment and readiness of AWSs make it harder to uphold IHL principles, especially because they lower the threshold for using force, dehumanize conflict, erode civil-military relations, and obstruct responsibility for the conduct of machines. Noncompliance stems from the nature of the technology, from the language of the law, and from our inability to answer basic ontological questions. The central plot will revolve around AI exploiting advances in robotics on the physical battlefield while also leveraging the digitized, networked world of cyberspace. We can anticipate less destructive conventional hostilities and more frequent unconventional strikes as we move closer to civil-military convergence. AWS will change how wars begin, with drones, by removing "enculturated human minds" from decision-making processes, making them speedier, more responsive, and possibly even more humane, but also more vulnerable to accidents, proliferation, and chain-of-command interruption[14].

Research Methodology

The research approach is qualitative. The study is exploratory in nature and based on secondary data sources compiled from books, government documents, official reports, and journals. As it relies solely on secondary data, the study focuses on research-based analysis, and the data is examined using the discourse-analysis method. All ethical considerations were kept in mind during the research.

Theoretical framework

In 1989, Nicholas Onuf introduced the term "constructivism" to international relations, arguing that "people and societies construct or constitute each other." Alexander Wendt, however, is considered the founder of constructivism in IR. Constructivism holds that non-material factors (ideas, beliefs, cultures, etc.) matter more than material factors (military power, etc.) in international politics: the reality of the world is shaped by ideational rather than material forces. States behave according to their ideas, and norms about other states are constructed regardless of material strength. For example, five North Korean nuclear missiles pose a greater threat to the U.S. than 500 British nuclear missiles would. It is not the quantity that threatens the U.S., but the ideology about North Korea constructed within the U.S.

Observing developments in artificial intelligence, we can see that AI has more commercial than military applications. Research and development in AI is mostly done by tech companies for future ventures. Under national policies of resources and sustainable development, every state has the right to conduct research in various domains of technology for its national interest and economic growth; such research should not be treated as a global security threat unless the state in question poses a threat to global peace through that specific technology. To date, the great powers have not attained maturity in the use of AWS and AI-associated military equipment.

Much AI-augmented hard power has not even been used across the wider spectrum of operations. The U.S., however, continuously frames China's and Russia's modification of their military structures through the induction of the latest AI technologies as a threat, while doing the same itself with the largest defense budget in the world. On the other hand, these technologies have helped governments fight COVID-19; China monitored its locked-down regions during the pandemic with the help of the latest AI-augmented equipment. Artificial intelligence was not initially developed for military applications. The diffuse development of AI is also linked to the absence of any global or regional monitoring regime ratified by a group of countries. If the great powers wish to contain the lethal use of AI in AWSs, treaties could be signed to maintain strategic stability.

Discussion/Analysis

Artificial intelligence (AI), autonomous weapon systems (AWS), and autonomous unmanned vehicles (AUVs) are changing our daily lives and society, and will continue to change how future wars are fought. Rapid developments in AI have sparked a surge of interest in the military and political realms. At the same time, AI raises important issues for people and societies, such as surveillance, the loss of personal freedom, and breaches of privacy.

The world will need to take advantage of breakthroughs in AI technologies to preserve global security. In future wars, integrating AI into the strategic, operational, and tactical levels would carry varying advantages, costs, and risks. Because of its strategic importance, several countries are developing AI for a variety of military functions, including "Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance" (C4ISR) as well as a diverse range of autonomous and semi-autonomous machines. As an initial screen for data that can later be assessed by human analysts, this new technology can play a critical part in collecting and processing open-source intelligence and information available on the internet. AI can thus help overcome the most significant impediment facing the various intelligence disciplines: the vast volume of available data.
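The triage role described above can be illustrated with a minimal sketch: score incoming open-source documents by relevance so analysts review only the top fraction of a feed. The keyword weights and documents are invented for illustration; real systems use trained classifiers rather than a hand-made dictionary, but the workload-reduction logic is the same.

```python
# Minimal sketch of machine triage for open-source text.
# Keyword weights and documents are invented assumptions.

KEYWORDS = {"missile": 3, "convoy": 2, "exercise": 1, "radar": 2}

def relevance(doc: str) -> int:
    """Sum the weights of any watch-list keywords found in the document."""
    return sum(KEYWORDS.get(w, 0) for w in doc.lower().split())

def triage(docs, top_n):
    """Return the top_n documents most worth an analyst's time."""
    return sorted(docs, key=relevance, reverse=True)[:top_n]

feed = [
    "local farmers report good harvest",
    "new radar site observed near missile test range",
    "military convoy photographed heading north",
    "annual naval exercise announced",
]
for doc in triage(feed, 2):
    print(doc)
```

Even this crude filter halves the analyst's reading load on the toy feed; at the scale of millions of daily items, that first machine pass is what makes the data volume tractable at all.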

Role of AI in Strategic Stability

Modern AI technology, and the military capabilities it supports, might affect strategic stability between states with great military might. The subtle, varied interactions of this new technology with modern conventional weapons can jeopardize nuclear capabilities, intensifying those weapons' potentially destabilizing effects. With a new generation of AI-enhanced conventional capabilities, the entanglement of nuclear and non-nuclear weapons could increase the risk of unintended escalation, and the growing speed of conflict could jeopardize strategic stability and heighten the probability of nuclear war.

Artificial intelligence perils that forge skepticism about strategic stability could stem from an adversary's exaggeration of AI's effectiveness, or from the mistaken notion that a specific AI capability is operationally effective when it is not. For example, a state may become confident in its capacity to counter or corrupt an AI program without retribution, leading an opponent onto an escalatory path, including a preemptive first strike. Despite US assurances, China and Russia are concerned that the US would use AI together with mobile and autonomous sensor platforms to threaten their nuclear retaliatory capability, particularly the mobile ICBMs they rely on for deterrence. AI software combined with big-data analytics and quantum-enabled sensors, for example, could make it much easier to track down an adversary's submarines.

This could lead to "use it or lose it" circumstances, putting strategic stability at risk. Autonomous weapons, unlike nuclear weapons, do not require expensive, tightly controlled, or difficult-to-obtain raw materials. Furthermore, as drones become more common and their unit costs fall, these capabilities will become more capable, autonomous, and easy to mass-produce. The contemporary rise of AWS, in contrast to human-operated automated systems, will challenge states' ability to anticipate and attribute drone attacks. These asymmetric approaches could aggravate "strategic ambiguity," undermine deterrence, and heighten vulnerabilities in crisis and conflict. North Korea, for example, used small drones to spy on South Korea's defenses in 2016, nearly escalating a military clash in the demilitarized zone[15]. Autonomous weapons will become increasingly appealing for eroding a superior adversary's deterrent posture and resolve, since they are perceived as a relatively low-risk capability with murky rules of engagement and no established normative or legal framework.
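The earlier scenario of "corrupting" an adversary's AI program has a concrete, if toy, analogue in adversarial inputs. Below, a hand-made linear detector (weights and sensor readings are invented for illustration; real attacks such as FGSM work analogously against trained networks) is flipped from a confident detection to a miss by small shifts in each input feature.

```python
# Toy illustration of corrupting an AI detector with small perturbations.
# The weights, bias, and sensor readings are invented assumptions.

WEIGHTS = [2.0, -1.0, 0.5]  # assumed detector weights per sensor feature
BIAS = -0.5

def detect(x):
    """Linear detector: True means 'target present'."""
    score = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return score > 0

def perturb(x, eps):
    """Shift each feature by eps against the sign of its weight
    (the direction that most reduces the detection score)."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, x)]

reading = [1.0, 0.2, 0.8]       # genuine target signature
print(detect(reading))           # detected
spoofed = perturb(reading, 0.6)
print(detect(spoofed))           # same target, slightly altered: missed
```

A state convinced it can reliably induce such misses in an opponent's systems, rightly or wrongly, may take exactly the escalatory risks the passage above describes.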

Conclusion

In the current multipolar geostrategic and geopolitical order, unevenly distributed AI-driven AWS capabilities, with cloudy rules of engagement and a deficient normative and legal structure, would prove a significantly appealing asymmetric option for undermining an advanced military's deterrence and resolve. Moreover, AI-augmented conventional weapons might complicate conflict management in future crises, especially those between the US and China. Such new capabilities could disrupt the effective and reliable transfer of information and communication among the actors, allies, and military organizations involved in a conflict.

AI systems operate at considerably high speeds. They could accelerate the pace of conflict to the point that machine actions outstrip human decision-makers' cognitive and physical capacity to regulate, or even understand, events. Effective deterrence relies on the explicit transmission of credible threats and the consequences of violations, which requires that the sender and recipient of these signals share a common context allowing mutual interpretation.

Finally, the rapid contest between the United States and China in artificial intelligence will have far-reaching and perhaps destabilizing ramifications for future geopolitical stability. The two sides are likely to perceive these emerging technological developments very differently, and these biases could deepen mistrust, suspicion, and misperception between the US and China in times of crisis and conflict.

 

 

Bibliography


[1] Elena Chernenko and Nikolai Markotkin, “Developing Artificial Intelligence in Russia: Objectives and Reality,” Carnegie Endowment for International Peace, accessed June 11, 2022, https://carnegiemoscow.org/commentary/82422.

[2] Edward Geist and Andrew J. Lohn, How Might Artificial Intelligence Affect the Risk of Nuclear War? (Santa Monica, CA: RAND, 2018).

[3] “National Power after AI – CSET,” accessed June 11, 2022, https://cset.georgetown.edu/wp-content/uploads/CSET_Daniels_report_NATIONALPOWER_JULY2021_V2.pdf.

[4] Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton & Company, 2018).

[5] Ludwig von Mises, Human Action, 678-80; more colloquially, see Cosma Shalizi, “In Soviet Union, Optimization Problem Solves You,” Crooked Timber, May 30, 2012.

[6] Max Fisher, “Syrian hackers claim AP hack that tipped stock market by $136 billion. Is it terrorism?” Washington Post, 23 April 2013

[7] Sagan, S. D. (1993). The Limits of Safety Organizations, Accidents, and Nuclear Weapons. Princeton: Princeton University Press.

[8] Graham, B. (2003). Radar Probed in Patriot Incidents. Retrieved 6 10, 2019, from The Washington Post:

[9] (Bruneau & Matei, 2012; Geneva Centre for the Democratic Control of Armed Forces [DCAF], 2015)

[10] Payne, K. (2018a). Artificial Intelligence: A Revolution in Strategic Affairs? Survival, 60(5), 7-32.

[11] (DCAF, 2008)

[12] Pandya, J. (2019). The Dual-Use Dilemma Of Artificial Intelligence. Retrieved 6 20, 2019, from Forbes: https://www.forbes.com/sites/cognitiveworld/2019/01/07/the-dualuse-dilemma-of-artificial-intelligence/

[13] Maas, M. M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 3(40), 285-311.

[14] “Faculty of Social Sciences – DSPACE.CUNI.CZ,” accessed June 11, 2022, https://dspace.cuni.cz/bitstream/handle/20.500.11956/116314/120341140.pdf?sequence=1.

[15] Sky, “Seoul Fires Warning Shots at ‘North Korea Drone’,” Sky News (Sky, January 13, 2016), https://news.sky.com/story/seoul-fires-warning-shots-at-north-korea-drone-10128538.

 

About Author:

Ibrahim Azhar

Ibrahim Azhar is an MPhil Strategic Studies scholar at National Defence University, Islamabad. His areas of interest include defense policy analysis.

 

 
