  • From Flash Crash to the Battlefield: Understanding Algorithmic Risks in Modern Warfare

    An XQ-58A Valkyrie unmanned aerial vehicle flies in formation with an F-22 Raptor and an F-35A Lightning II, 9 Dec 2020 (US Air Force).

    On May 28, 2031, Commander Alex Reed checked the screen in the control room. The time was 4:37 pm, and everything seemed normal. His autonomous drone swarm was patrolling a volatile conflict zone, and the live feeds showed the usual landscape of dusty roads and scattered settlements. The drones were programmed with advanced algorithms to identify and neutralize threats swiftly, reducing the need for human intervention in high-risk situations.

    Suddenly, an alert flashed on Alex's screen. A drone had identified a potential hostile target: a convoy of vehicles approaching a secured area. The algorithm flagged the vehicles as a high threat based on their speed and direction. Trusting the system, Alex authorized the engagement, and the drones swooped in. Seconds later, explosions rocked the convoy. As the smoke cleared, Alex's heart sank. The live feed showed civilians scrambling from the wreckage: families and children. The convoy was not an enemy force but a group of refugees fleeing the conflict. The autonomous system had misidentified them because of a simple oversight in the threat identification algorithm.

    Before Alex could react, the situation spiraled out of control. The autonomous drones, operating at blinding speed, interpreted the chaos as continued hostility and engaged further, compounding the error. Nearby allied forces, seeing the drone strikes, believed they were under attack by a significant threat and called for reinforcements. In the enemy camp, commanders saw the drone activity and the sudden buildup of allied forces as the beginning of an offensive. They launched a counter-attack, sending their own autonomous systems into the fray. Within minutes, the entire region was engulfed in conflict, escalating far beyond the initial incident.

    A Real-World Parallel: The 2010 Flash Crash

    This scenario is not mere speculation; it has parallels in the 2010 flash crash. The crash was initiated by a massive sell order from the Waddell & Reed mutual fund, which executed a sell program of 75,000 E-mini S&P 500 futures contracts worth about $4.1 billion. This large sell order overwhelmed the market, and high-frequency trading algorithms exacerbated the situation. These automated trading bots initially absorbed the sell order but then began to sell aggressively to manage their own risk. The algorithms fed off each other, amplifying the downward spiral. This cascading effect caused the market to lose roughly a trillion dollars in value in a matter of minutes.

    Understanding Algorithmic Logic

    Algorithms run on programmed logic that, while more predictable than human decision-making, can produce uncontrollable escalation patterns when combined with certain human actions. In the 2010 flash crash, the initial human action, a large sell order, triggered a series of automated responses from trading algorithms. These algorithms, operating at high speed and on predefined logic, interacted in ways that led to a rapid and severe market decline.

    Similarly, autonomous weapons systems rely on algorithms to make split-second decisions based on predefined criteria. While this can increase efficiency and reduce human error, it also means the systems can react in ways that humans do not anticipate. When these automated responses are triggered by human actions, the results can quickly spiral out of control, as the sketch below illustrates.
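    To make this feedback dynamic concrete, here is a minimal, illustrative simulation of two automated systems that respond to each other according to fixed threshold rules. Every number in it (the threshold, the gain, the initial disturbance) is an invented toy parameter; it models no real trading or weapons system.

    ```python
    # Toy model of two automated systems whose fixed rules feed off each
    # other. All numbers are invented for illustration only.

    THRESHOLD = 1.0  # activity level above which a system fires a response
    GAIN = 1.4       # each response is 40% larger than what triggered it

    def simulate(initial_disturbance: float, steps: int = 8) -> None:
        # a_seen / b_seen: the activity level each side currently perceives
        a_seen, b_seen = initial_disturbance, 0.0
        for t in range(steps):
            # Each side applies the same rule: respond if perceived activity
            # crosses the threshold, and respond harder than the trigger.
            # Neither rule is wrong in isolation; together they form a
            # positive feedback loop.
            a_response = GAIN * a_seen if a_seen >= THRESHOLD else 0.0
            b_response = GAIN * b_seen if b_seen >= THRESHOLD else 0.0
            # Each side then perceives the other's response as new activity.
            a_seen, b_seen = b_response, a_response
            print(f"t={t}: A perceives {a_seen:5.2f}, B perceives {b_seen:5.2f}")
            if a_seen < THRESHOLD and b_seen < THRESHOLD:
                print("activity damped out below the threshold")
                return
        print("still escalating when the simulation stopped")

    simulate(initial_disturbance=1.2)  # one above-threshold event is enough
    ```

    Run it with a disturbance below the threshold (say 0.8) and nothing happens; one above-threshold event and the exchange grows without bound. The point is not realism but that neither rule is malfunctioning: the escalation emerges from their interaction.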
    The inherent speed and complexity of these interactions can make it difficult for human operators to intervene effectively once the process has begun.

    The Underlying Drive: The Security Dilemma

    To understand the increasing drive towards giving weapons greater autonomy, it is important to consider the concept of the security dilemma. This dilemma arises when states, in their efforts to enhance their own security, develop new military capabilities that inadvertently make their adversaries feel less secure. In response, those adversaries build up their own capabilities, leading to a continuous cycle of escalation. Autonomous weapons systems are a prime example of this dynamic. As one state enhances its autonomous capabilities, others feel compelled to follow suit, escalating tensions and increasing the risk of conflict.

    The drive towards more autonomous weapons stems from the belief that these systems can offer significant advantages in speed, precision, and reduced casualties among military personnel. However, the same features that make these weapons attractive also contribute to the risks of escalation. The need to stay ahead of potential adversaries creates a feedback loop in which advancements by one state prompt similar developments by others, perpetuating the arms race.

    Challenges in Arms Control

    Historically, arms control treaties have placed a braking effect on the security dilemma and, in some cases, have frozen the continuous feedback loop. Examples include the Limited Test Ban Treaty of 1963 and the nuclear arms treaties that followed, which limited the buildup of nuclear weapons in the United States and the Soviet Union and reduced the risks of escalation during the Cold War. These treaties worked because they addressed a limited number of actors with clear stakes in maintaining stability.

    However, this approach is unlikely to succeed in today's multilateral world, characterized by rising tensions and the greater accessibility of AI technology. The rapid pace of technological advancement and the diverse range of actors involved complicate traditional arms control measures. Unlike the relatively stable bipolar world of the Cold War, today's international landscape includes numerous state and non-state actors with varying interests and capabilities. Moreover, the widespread availability of AI technology means that even small states and non-state actors can potentially develop and deploy autonomous weapons, making comprehensive arms control agreements more difficult to achieve.

    The Three Pillars of Strategic Signaling

    Given the challenges of relying on treaties to address the security dilemma posed by autonomous weapons, a new approach is needed. Strategic signaling refers to the actions and communications by which states convey intentions, capabilities, and resolve to other actors in order to influence their behavior. By clearly communicating the vulnerabilities of adversaries' autonomous systems to hacking and other forms of disruption, states can create a deterrent effect. This signaling acts as a braking force, discouraging the unchecked advancement and deployment of autonomous weapons by highlighting the risks involved. NATO, with its robust framework for cooperation and intelligence sharing, is particularly well positioned to leverage strategic signaling. By playing to its strengths in these areas, NATO can effectively limit the proliferation of highly autonomous weapons systems.
    The alliance's ability to coordinate and share critical information can enhance collective security and provide a unified stance against the risks posed by autonomous weapons.

    Additionally, communicating the fundamental escalation risk that code on either side can never be completely trusted can prevent unintended interactions that might escalate to war. It happened in the 2010 flash crash; it can happen again. When adversaries understand that autonomous systems might not interact in predictable ways and could lead to unintended escalation, it adds a layer of caution. This mutual understanding helps to slow the race to hand control over to weapons systems, reinforcing the necessity of maintaining human oversight.

    Finally, moving discussions around autonomous weapons from forums like the United Nations Convention on Certain Conventional Weapons (CCW) to a new forum specifically dedicated to these issues would send a strong signal about the potential threat these weapons pose to humanity. A dedicated forum highlights the global recognition of the dangers posed by autonomous systems, fostering more focused and effective dialogue on mitigating those risks. It emphasizes the need for collaborative international efforts to better understand these weapons, enhancing our ability to manage and control their development from an informed, international perspective.

    Strategy for Moving Forward

    While strategic signaling can put brakes on the security dilemma, it cannot stop it completely. The objective should be to slow the push to give weapons greater control. This pause would allow us to work out how to deploy these weapons in ways that lower the probability of escalation compared to our current trajectory. By maintaining human oversight, rigorously testing and validating algorithms, and engaging in continuous dialogue with adversaries, we can work towards a more stable and secure world in which the risks of automated warfare are carefully managed.

    However, it is important to recognize that solving the security dilemma entirely is highly unlikely. The inherent nature of international relations, in which states act to maximize their own security, makes it almost impossible to eliminate the arms race dynamic completely. As long as states perceive threats from one another, they will continue to develop advanced weapons systems, including autonomous ones. The goal, therefore, should not be to eliminate this dynamic but to manage it in a way that reduces the risks of unintended escalation and conflict.

  • Laser and Microwave Weapons: Shaping the Future of Defense

    DragonFire laser directed-energy weapon firing during a trial by the UK Ministry of Defence. Photo: UK Ministry of Defence / Open Government Licence.

    In the evolving landscape of modern warfare, the development of laser and microwave weapons represents a pivotal shift towards addressing some of the most pressing challenges in defense. Traditional munitions, while continuously improved, carry inherent limitations such as supply chain dependencies, escalating costs, and logistical complexities. These issues become particularly acute in scenarios like the Houthi attacks on shipping lanes, where the economics of using expensive interceptors to down cheap, asymmetric threats are increasingly untenable. Directed-energy weapons (DEWs), encompassing both lasers and high-powered microwaves, promise a solution that is not only technologically advanced but also economically and operationally viable.

    Development and Investment

    The quest for DEWs has seen significant investment from global powers, driven by the potential these technologies hold for revolutionizing defense capabilities. The United States, among others, has actively pursued both laser and microwave systems, dedicating billions of dollars to their research and development. These efforts aim to create weapons that offer speed, precision, and cost-effectiveness, capable of engaging targets ranging from drones and missiles to artillery and potentially even manned aircraft, without the logistical and economic burdens of conventional munitions.

    Operational Systems and Challenges

    While the promise of DEWs is compelling, the path from concept to operational deployment is fraught with technical and operational challenges. Power generation, beam control, atmospheric interference, and the compactness required for mobility are among the hurdles that need to be overcome. Programs like the U.S. Air Force's THOR highlight both the potential and the current limitations of these technologies.

    U.S. Air Force's THOR (Tactical High-power Operational Responder)

    THOR, designed to counter swarms of drones with microwaves, exemplifies the advantage DEWs offer in engaging multiple targets simultaneously, a stark contrast to the one-at-a-time approach of lasers. However, the effectiveness of these systems is subject to environmental conditions and technical constraints, underscoring the complexity of making DEWs a reliable part of the defense arsenal.

    Countermeasures and Strategic Costs

    Adversaries are also likely to develop countermeasures against DEWs, which may include material enhancements to deflect energy or hardened electronic components to resist microwave attacks. However, these adaptations raise costs for the offensive side, subtly altering the economic dynamics of warfare. Even if countermeasures are developed, they impose a higher financial burden on the attacker, indirectly validating the defensive value of DEWs by escalating the cost of overcoming them.

    Economic and Strategic Implications

    DEWs promise to transform the economic landscape of defense strategies by offering a more cost-efficient method of engaging threats. By potentially reversing the cost asymmetry inherent in defending against low-cost UAVs and missiles with expensive interceptors, these weapons could facilitate more sustainable defense operations, particularly in critical scenarios like maritime security against asymmetric threats.
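    A back-of-the-envelope cost-exchange calculation makes this asymmetry concrete. All figures below are assumed, round numbers chosen only to show the shape of the argument, not official program costs, and the laser's cost per shot deliberately ignores acquisition and maintenance.

    ```python
    # Illustrative cost-exchange arithmetic. All figures are assumptions.

    DRONE_COST = 50_000           # attacking drone, USD (assumed)
    INTERCEPTOR_COST = 2_000_000  # conventional interceptor missile, USD (assumed)
    LASER_SHOT_COST = 10          # marginal cost of one laser engagement, USD (assumed)

    swarm = 100  # drones in the attack
    attacker = swarm * DRONE_COST
    with_missiles = swarm * INTERCEPTOR_COST  # one interceptor per drone
    with_laser = swarm * LASER_SHOT_COST      # one shot per drone, capital cost ignored

    print(f"attacker spends            ${attacker:>13,}")
    print(f"defender with interceptors ${with_missiles:>13,} ({with_missiles / attacker:.0f}x attacker)")
    print(f"defender with laser shots  ${with_laser:>13,} ({with_laser / attacker:.4f}x attacker)")
    ```

    Under these assumptions the defender pays 40 times the attacker's outlay when using interceptors, and a vanishing fraction of it at the laser's marginal cost per shot, which is the reversal of cost asymmetry described above.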
    Future Outlook

    The future of laser and microwave weapons is poised between transformative potential and enduring challenges. As these technologies evolve, they promise to redefine the economics and effectiveness of military engagements. However, the journey to operationalization is complex, marked by technical hurdles, the development of countermeasures, and the need for strategic integration into existing defense frameworks. The ultimate impact of DEWs on warfare and defense will depend on the ability of nations to harness these technologies and overcome the challenges standing in the way of their full potential.

    In conclusion, laser and microwave weapons represent a frontier in defense technology with the potential to address critical challenges in modern warfare. As the world grapples with evolving threats and the limitations of traditional munitions, DEWs offer a promising alternative. Realizing this promise, however, requires overcoming significant technical and strategic challenges, a task that nations are actively pursuing in the quest for future defense capabilities.

  • Stopping Proliferation of Autonomous Killer Drones

    Screenshot from the 2017 video "Slaughterbots."

    In the annals of modern warfare and defense technology, few developments have garnered as much attention and concern as the rapid evolution of drones. These unmanned aerial vehicles, once a mere supplementary tool in the military's arsenal, have swiftly metamorphosed into autonomous agents of power, propelled by advancements in artificial intelligence and decreasing production costs. This article explores "killer drones": small autonomous quad-copter drones equipped with explosive devices and triggered by algorithms. Their capabilities bring both promise, in the form of increased precision and reduced human risk, and concern, presenting new ethical, strategic, and regulatory issues. This article delves into the future threat of autonomous killer drones, the challenges they pose to traditional arms control, and the pressing need for international intervention in this uncharted territory.

    While there have been significant advancements in AI-driven target tracking and drone technology, particularly following the war in Ukraine, killer drones are not yet commonplace in today's technological landscape. In recent years, however, there has been an observable surge in the affordability and autonomy of drones. According to Grand View Research, investments in the commercial drone market are predicted to increase by 266% over the next decade. As a result, we can anticipate further enhancements in drone autonomy and amplified capabilities to carry larger payloads over expanded flight ranges, coupled with reduced costs and heightened user-friendliness. Excluding explosive components, every aspect of these drones is being improved by the commercial sector. This indicates that the efficiency of killer drones will persistently advance.

    At the dawn of the 21st century, the USA, with its colossal military budget, stood as the sole nation operating strike drones capable of executing precision strikes. Today, various non-state entities have employed this technology to conduct precision attacks, albeit on a smaller scale in terms of payload, range, and endurance.

    Historically, the military sector has been the primary driver behind the development and proliferation of drone technology. From initial surveillance applications to the more advanced precision strike capabilities, militaries worldwide have pushed the boundaries of what is possible with drones. Defense departments, in their pursuit of tactical and strategic advantages on the battlefield, have invested heavily in research and development, leading to rapid advancements in drone technologies. Key collaborations with defense contractors and tech firms have resulted in innovations like enhanced AI algorithms, longer flight durations, and more efficient target acquisition methods. Many of today's commercial drone applications have, in fact, drawn inspiration from military prototypes or have been direct adaptations of technology first introduced in military contexts.

    Amidst this proliferation, the democratization of drone technology underscores a crucial shift. No longer confined to the dominion of superpowers with vast resources, drone capabilities are now accessible to smaller nations and even non-state actors. This decentralization not only alters the dynamics of modern conflict but also amplifies the urgency of establishing international norms and controls over their usage.

    With the emergence of more affordable and precise systems like the MQ-1 Predator drone, the U.S.
    demonstrated a greater inclination to use them, as noted in the Air & Space Power Journal. These drones, being more precise and cost-effective without jeopardizing human pilots, became a preferred alternative to traditional aircraft bombing in the war on terror. Moreover, this technological evolution hints at a future in which states might be more inclined to deploy cheaper killer drones equipped with AI algorithms that select targets based on a given character profile, such as age, gender, or ethnicity.

    The potential use of autonomous weaponry is not inherently negative. Used responsibly, such weapons could reduce civilian casualties in wars and counter-terrorism operations thanks to their heightened precision. However, there is a compelling argument that allowing militaries to deploy killer drones could normalize their usage and speed up the technological arms race. This could compel states to increasingly delegate decisions to machines, as human reactions would be too slow. Such a shift could lead to the outbreak of a flash war. The term draws inspiration from the 2010 flash crash, in which trading algorithms on Wall Street interacted unpredictably, causing the market to lose over a trillion dollars in value in mere minutes. The exact cause of the crash remains debated. Similarly, we might inadvertently initiate a war through algorithmic interactions we cannot fully comprehend or control.

    Building on these concerns, the rapidly decreasing barriers to entry, driven by the commercial sector, also raise alarms about non-state actors. A 2016 report from the Combating Terrorism Center, which documents the use of drones in combat by various non-state actors, states that groups such as ISIS managed to create combat drones using readily available consumer products. Given the advancements in autonomy, drones capable of longer flights and heavier payloads, and their increasing accessibility, the prospects ahead are troubling. One could imagine a group like ISIS, known for its interest in drones, using consumer drones and open-source AI models for target acquisition to carry out a terrorist attack aimed only at a specific ethnicity, age, or gender.

    As the commercial sector persistently seeks to lower the cost of drone technology, the tight-knit relationship between private enterprises and military endeavors becomes evident. The plummeting costs of this technology extend its reach far beyond national military arsenals, putting it into the hands of rogue organizations and even individual actors. This democratization of lethal force amplifies the scale of potential threats exponentially. Compounding the issue is the inherent anonymity that autonomous drone attacks can provide, which makes attribution increasingly challenging. The convergence of these factors not only escalates global security risks but also muddies the waters of accountability, creating a volatile environment rife with potential for misuse and misunderstanding.

    In the ever-evolving landscape of warfare, AI-powered drones represent the latest intertwining of technology and strategy. Beyond their tactical advantages, they reflect a deeper societal change: a growing reliance on machines over human judgment, often seen in our daily lives and now magnified on the battlefield. As the lines between machine-driven efficiency and human conscience blur, we grapple with fundamental questions about the direction and implications of technological progress.
    Are we paving a path to a safer future, or are we teetering on the brink of a new era of conflict, dictated more by cold algorithmic decisions than by human values? This delicate balance between progress and peril demands introspection, not just by nations but also by the industries that fuel these advancements.

    Bridging the technological and ethical landscape of killer drones to the broader realm of arms control, it becomes clear that history offers valuable lessons. As with earlier arms, the push for controls and restrictions on drone usage can be seen as part of a continuum. Each new weapon technology has been met with global concern and, in many instances, collective efforts to regulate or prohibit its use. As we face the challenges posed by autonomous drones, it is worth revisiting these historical precedents and understanding their successes and shortcomings, to chart a viable path forward for the responsible management of modern warfare technology.

    Navigating the challenges of AI arms control, however, requires tackling some foundational questions, the most immediate of which is defining what constitutes an "autonomous weapon." For instance, does a cruise missile employing optical target recognition to engage sea vessels qualify as an autonomous weapon? What about modern air-to-air missiles that leverage image recognition to circumvent countermeasures? Effective arms control requires universally agreed-upon definitions of what an autonomous weapon encompasses. By refining broad terms like "autonomous weapon" into more specific descriptors such as "killer drone" (which could be defined as a quad-copter drone under a certain weight, equipped with an explosive device that destroys the drone when detonated, and capable of self-directed target selection and engagement without a human), we can sidestep some of the ambiguity surrounding the term "autonomous weapon." A sketch at the end of this passage shows one way such a descriptor could be made precise.

    Building upon this foundation of clear definitions, we find that effective arms control measures often leverage past successes. As noted by the Center for a New American Security, the 2008 ban on cluster munitions likely stemmed from the successful 1997 ban on antipersonnel mines. Similarly, the bans on chemical and biological weapons emerged from a longstanding sentiment against the use of poisons that dates back to ancient times. An incremental approach is crucial in new fields like AI arms control. Starting with manageable steps, such as countering killer drones, does not require major players to relinquish significant military capabilities. The primary focus of a treaty should be to counteract the unchecked spread of drone technology driven by commercial incentives. By addressing the decentralization of drone technology, the treaty aligns with the security interests of major global actors. This alignment boosts the likelihood of successfully establishing an arms control treaty. Once in place, such a treaty can pave the way for more comprehensive and stringent control measures in the future.

    The great challenge with killer drones lies in their deep integration with the commercial sector. Any regulation aimed at these drones could inadvertently impact commercial applications of drone technology. Drones serve a myriad of beneficial purposes, including healthcare, agriculture, and search and rescue operations. Treaty efforts designed to mitigate the risks of killer drones must therefore carefully consider the broader implications for the entire drone market.
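    As flagged above, here is one hypothetical way to render that narrow "killer drone" descriptor as a machine-checkable predicate. The field names and the 25 kg weight cutoff are invented for illustration; an actual treaty would have to negotiate each clause.

    ```python
    # Hypothetical, machine-checkable form of the narrow "killer drone"
    # descriptor discussed above. Field names and the weight limit are
    # invented for illustration, not drawn from any treaty text.

    from dataclasses import dataclass

    MAX_WEIGHT_KG = 25.0  # assumed stand-in for "under a certain weight"

    @dataclass
    class DroneSystem:
        airframe: str                # e.g. "quad-copter", "fixed-wing"
        weight_kg: float
        self_destruct_charge: bool   # explosive that destroys the drone when detonated
        autonomous_targeting: bool   # selects targets without a human
        autonomous_engagement: bool  # engages without a human decision

    def is_killer_drone(d: DroneSystem) -> bool:
        """True only when every clause of the narrow descriptor is met."""
        return (d.airframe == "quad-copter"
                and d.weight_kg < MAX_WEIGHT_KG
                and d.self_destruct_charge
                and d.autonomous_targeting
                and d.autonomous_engagement)

    # A hobby quad-copter fails the predicate, and so would a human-piloted
    # armed drone, because a human still selects and engages the target.
    print(is_killer_drone(DroneSystem("quad-copter", 2.0, False, False, False)))  # False
    print(is_killer_drone(DroneSystem("quad-copter", 3.5, True, True, True)))     # True
    ```

    The value of a descriptor this narrow is precisely that the hard edge cases raised above, cruise missiles with optical seekers or air-to-air missiles using image recognition, fall outside it by construction.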
    Innovative solutions are required, as past arms control measures have not grappled with the complexities of regulating software or the commercial realm. Potential strategies could include implementing traceable features in drone technology, covering both complete and incomplete drone hardware. Embracing open-source solutions might foster trust and transparency, enhancing the chances of successful regulation. Another consideration could be mandating that treaty signatories ensure their domestic firms contribute to a global, open-access database documenting drone sales, quantities, and intended use.

    In conclusion, the rapid evolution of drone technology, driven largely by advancements in AI and the commercial sector, is an inevitable march of progress. Yet, as with all significant technological leaps, it brings with it both promise and peril. The increasing autonomy and capabilities of drones, especially "killer drones," underscore a pressing need for international guidelines and controls. It is a dual challenge: maintaining technological advancement while ensuring ethical and global safety considerations. Tackling this challenge requires a holistic approach, involving not only nations and international bodies but also the commercial entities at the heart of this innovation. History teaches us that proactive measures, built upon the foundations of past successes, can shape the trajectory of technological warfare for the better. As we stand at this critical juncture, collective global action will determine whether drones become instruments of unchecked chaos or tools that, while powerful, are harnessed responsibly for the greater good.

  • What is The Future of Autonomous Weapons Warfare?

    Concept art from the Air Force Research Laboratory (AFRL).

    The history of warfare has witnessed a remarkable transformation, evolving from the use of rudimentary weapons like swords and spears, progressing to guns and tanks, and now employing advanced technology in the form of long-range missiles and aerial drones. The aim has been to minimize the loss of manpower and to lessen the human factor in warfare. Seen in this light, lethal algorithms and autonomous weapon systems are the next step in the evolution of warfare.

    In the current global landscape, the greatest obstacle for militaries is not a lack of technological advancement but the overwhelming expense associated with it. State-of-the-art submarines carry price tags in the billions, while aircraft carriers cost ten times more. The newest tanks, equipped with advanced armor and aiming systems, along with fifth-generation aircraft, each cost more than 100 million dollars. These soaring prices render the prospect of maintaining a military force comparable to that of the World War II era practically unfeasible. Autonomous drones, by contrast, are much cheaper regardless of their size, allowing militaries to operate much larger fleets; the rough arithmetic sketched at the end of this article illustrates the difference.

    An additional factor driving the adoption of lethal autonomous weapons is that any military force that opts out is at a major disadvantage. In the future, you will need autonomous systems to fight autonomous systems, for the simple reason that humans are increasingly outpaced by the speed and efficiency of these machines. The world's leading military powers are now racing to develop robotic weapon systems that can operate autonomously. Such weapons offer numerous advantages, and nations that adopt an autonomous robotic doctrine early on will likely gain substantial military advantages, at least initially. However, as autonomous and robotic warfare becomes mainstream, the drawbacks of and counters to this technology will likely level the playing field. It won't happen this year or the next, but in the not-so-distant future we are likely to see fully autonomous drones dominating the skies.

    While war may not be an inherent part of human nature, it is certainly a habit, one that has been perpetuated and refined throughout history. Given that technology typically advances much faster than policy and governance, it is essential to start thinking seriously about the implications of autonomous weapons. The decisions we make today will shape the battlefield of tomorrow and, with it, the fate of human lives.
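    As a rough illustration of the fleet arithmetic referenced above: the fighter price echoes the "more than 100 million dollars" figure in the text, while the budget and the drone price are assumptions invented for the example.

    ```python
    # Rough fleet arithmetic. The fighter price comes from the text above;
    # the budget and drone price are assumed values for illustration.

    budget = 10_000_000_000     # assumed procurement budget, USD
    fighter_cost = 100_000_000  # fifth-generation aircraft, per the text
    drone_cost = 1_000_000      # assumed capable autonomous drone

    print(f"fighters the budget buys: {budget // fighter_cost:,}")  # 100
    print(f"drones the budget buys:   {budget // drone_cost:,}")    # 10,000
    ```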

  • Artificial Intelligence and Arms Control

    Shutterstock

    When analyzing the history of arms control, it becomes clear that arms control among states is the exception rather than the rule. Many factors contribute to the difficulty of establishing such control. First, weapons that provide significant military value are hard to relinquish, as states are reluctant to give up something that grants them unique capabilities or decisive battlefield advantages. The desirability of arms control is often determined by a weapon's value weighed against its perceived horribleness. And even if states find it desirable to regulate a weapon, practical realities might make regulation unsustainable.

    Submarines are an example. During the early 20th century, submarines were seen as an unethical weapon. At the 1899 Hague Convention, Russia proposed a ban on submarines, but it was rejected. In 1907, nations met to codify the laws of maritime warfare, establishing a series of provisions relating to the treatment of hospital ships, merchant vessels, and prisoners. The 1907 convention did not discuss banning submarines further, but it did establish maritime laws that proved problematic for submarine operations. The primary issue was submarines' ability to comply with maritime law concerning attacks on merchant ships. Much of naval warfare aims to disrupt the opponent's merchant shipping to impede the war effort, and under maritime law, sinking merchant ships without providing safe passage for the crew or commandeering the ship was illegal. Submarines, however, relied on stealth for their combat effectiveness, and surfacing to inspect a merchant ship exposed them to detection and counterattack. Furthermore, a merchant ship could potentially ram a submarine, an action that would have been suicidal against a warship but feasible against a submarine. When World War I broke out, the parties initially complied with maritime law, but as hostilities escalated the violations multiplied, until Germany declared unrestricted submarine warfare against merchant ships.

    Rather than trying to regulate specific actions, another approach might have been to ban submarines entirely. The simplicity of this approach, combined with the lack of complex rules to follow, has historically made it the most successful method of restricting weapons in warfare. Bans on landmines, cluster munitions, chemical weapons, and biological weapons are generally considered successful. The drafts of these treaties suggest that their creators knew outright bans could help stigmatize a weapon, while the complex exceptions necessary for states to reach agreement were pushed to the fine print.

    The challenge when considering AI in warfare is its broad and general application. An outright ban on AI in military operations would be akin to declaring "no industrialization" for militaries at the turn of the 20th century. The key to regulating AI in warfare is therefore not an absolute prohibition but a strategic understanding of its potential implications and applications. One must thoroughly understand its capabilities and design regulations that prevent misuse while still allowing for technological advancement.

    To mitigate the uncertainty surrounding the use of AI in warfare, there are several proactive steps that policymakers, scholars, and civil society can take. These include establishing dialogues at various levels to enhance understanding of the technology.
    Academic conferences, peer exchanges, bilateral and multilateral dialogues, and discussions in international forums are all valuable for grasping how this technology could be deployed in warfare. Analysis of potential arms control measures must be tightly linked to the technology itself, and these dialogues must include AI scientists and engineers to ensure that policy discussions are grounded in technical realities.

    The industrial revolution unleashed industrial-scale warfare of a kind the world had never previously witnessed, and large efforts were made to regulate the new weapons. The leading military powers of the era met to discuss arms control in 1868, 1874, 1899, 1907, 1909, 1919, 1921, 1922, 1923, 1925, 1927, 1930, 1932, 1933, 1934, 1935, 1936, and 1938. Though many of these efforts failed or faltered in wartime, their frequency highlights the significant effort and patience needed for even modest success in arms control.

    Artificial intelligence offers an unprecedented leap in warfare capabilities, much like the industrial revolution did. And just as the industrial revolution had horrific consequences on the battlefield, so too could the unregulated application of AI in warfare. Much as those generations grappled with their new reality, we must grapple with ours. Once again we stand at a crossroads. Arms control is not a new challenge but an evolving one. It has been part of our global history for centuries, with a track record of both failures and successes. Each new technological advance brings with it new ethical, political, and military dilemmas, and AI is no different. As the latest development that could revolutionize warfare, AI presents us with a unique opportunity to learn from past mistakes and victories.

    In conclusion, the complexities of AI in warfare and the critical need for effective arms control cannot be overstated. The challenge lies in striking a balance between the desire to gain military advantage and the need to preserve ethical standards and humanitarian principles. While the path to this balance may not be clearly defined, we must commit ourselves to this essential endeavor. After all, one does not need to see the top of the staircase to take the first step.
