The Jan./Feb. LD topic is here, and with it - Killer Robots! We all know what that means - Marcus Viney is back, Terminator style, with a topic analysis here to help us all efficiently debate about the inevitable robot apocalypse that is upon us. Resolved: States ought to ban lethal autonomous weapons.
The man, the myth, the wanna-be evil robot mastermind legend, Marcus spent his Christmas wiring his slaughterbots for debate destruction while working on a new topic analysis to share - complete with a topic background, resolution analysis, framework, arguments on both sides, and some closing thoughts. Additionally, Marcus reveals his top five killer robots and provides some helpful resources to further explore the resolution and write cases.
By the way, here are my top five favorite killer robots (in no particular order):
Ava from Ex Machina - talk about deadly on so many levels
K - a Nexus 9 Replicant in Blade Runner 2049 played by Ryan Gosling
IG-11 from the Mandalorian
Terminators of all shapes, sizes, and varieties for sure make my list too
And, of course, the American Cyborg: Steel Warrior - from the beautifully terrible 1993 film of the same name
Share your top five favorite killer robots on One Clap social media. Tag us or share in comments on one of my posts (which will be up soon). I’ll pick one person who provides their list at random to win some One Clap stickers, coasters, and maybe some other swag… plus, I'll give them a shout out on the podcast.
Thank you to Marcus for his hard work, his robot-like precision, and his awesome human heart of generosity in sharing this with all of us!
Marcus Viney's Jan-Feb 2021 LD Topic Analysis Transcript
My name is Marcus Viney, head coach of Cheyenne East, and this is a topic analysis for the 2021 January-February Lincoln Douglas topic: States ought to ban lethal autonomous weapons. Wow, does this sound futuristic or what?! When you were a little kid, did you ever think you’d be debating about slaughterbots in the year 2021? Personally, I could not be more excited about this resolution, and I hope you’re ready to boot up and get started. To preview, there will be five main sections: background, resolution analysis, framework, arguments on both sides, and some closing thoughts.
Let’s begin with some background. In this section, we’ll look at what lethal autonomous weapons are, where they come from, and what their status is today. So, what is a lethal autonomous weapon? For an official definition, we can look at what the Congressional Research Service says: “a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.” That’s a mouthful, which is why I prefer the definition given by Stuart Russell: weapons that can “locate, select, and eliminate targets without human intervention.” We’ll talk more about what it means to be autonomous and what kind of weapons are involved later, but basically what we’re talking about are weapons that can complete deadly tasks on their own. At the risk of oversimplifying the debate, I’m going to refer to lethal autonomous weapons throughout this episode as “killer robots,” one, because it’s easier, two, because it captures the core idea well enough, and, three, it’s way cooler. Just know that I’ll have a word of warning about the language of this debate in my closing thoughts.
That covers the basic concept of what killer robots are, but where did they come from? As you read about this topic, you’ll come across endless science fiction references, and my guess is that you could probably make a list of your top five favorite killer robots right now. For my part, it might go: Daleks, The Borg, Matrix Sentinels, Bounty Hunter Droids, and, of course, Terminators, with a special shout out to the T-1000, who scared me quite a bit as a child. Some of you might object and say, wait, not all of those were totally robotic, to which I would say, meh, close enough. There is no shortage of ways humans have imagined robots, or robot-like beings, taking over and killing us all. What you might be surprised to learn is that this started a long time ago in a place where most great things began: Ancient Greece. Some say that the first “robot” in the human imagination comes from the myth of Talos, a giant being forged from bronze by the god of metalworking to protect the island of Crete from invaders. Talos is said to have patrolled the island multiple times a day, and was powered by a magical liquid called ichor. But perhaps thinking about robots became a little more scientific in the modern period with the French philosopher Descartes, who was obsessed with the machine-like movements of the body, and how they might be reproduced through a clock-like combination of inanimate materials. Supposedly, after his daughter died of scarlet fever, Descartes constructed an automaton in the shape of a young girl that terrified the few people who saw it. This is of course tragic, but to me it also sounds a little too close to the doll Talky Tina from the Twilight Zone, which I think counts as a killer robot, if you allow that they can be powered by curses.
Now, things get a little more on topic with the great Renaissance thinker Leonardo da Vinci, who designed the first ever mechanical knight that could theoretically move around with a system of internal gears and pulleys, so yeah, a medieval mobile guillotine, awesome. The next major advancement comes from none other than Nikola Tesla in the late 1800s with his first ever remote-controlled boat. This concept fueled multiple weapons in World Wars 1 and 2, which amounted to really nothing more than putting bombs on vehicles and remotely smashing them into things, which many people agree didn’t work out as well as planned, but hey, sounds fun. The world of killer robots really began to accelerate in the 70s, 80s, and 90s with advancements in smart bombs, automated defense systems, and of course, unmanned aerial vehicles, aka drones, which is what most people probably think about when they hear this topic. But even unmanned drones are not strictly speaking unmanned--also, sexist--there’s still a person on the other end making most of the decisions. So the question is, have we ever created a fully autonomous killer robot? Do they exist, and are they allowed?
This brings us to the status of killer robots today. Although there is disagreement about what “autonomous” means with respect to robots, most experts agree that currently there are no existing weapon systems that would count as fully autonomous. But many believe that we are possibly within a few short years of this achievement. Consider how the International Committee of the Red Cross describes the current situation: “During the past 15 years there has been a dramatic increase in the use of robotic systems by military forces… recent developments in robotics and computing... are influencing the development... of novel… systems with increasing levels of autonomy. For example, in 2001 the United States military had around 50 unmanned air systems. In 2013... it had around 8,000...” with more advanced capabilities. And like Pokemon, it’s not just Americans that gotta catch em all. While only a few countries currently have systems like these, some estimate that the number of countries developing them ranges from 50 to 80. That’s a ton of brains working on the same problem, so it’s really only a matter of time until we crack the code. The importance of this moment in history cannot be overstated. An open letter from the Future of Life Institute eerily describes the rise of autonomous weapons as the third major revolution in warfare, after gunpowder and nuclear arms. This means that killer robots aren’t just another advancement in warfare; they’re a new kind of warfare altogether, and once they hit the scene, nothing will ever be the same. This raises an important question about whether we currently have any rules or regulations about robots in war. You won’t be surprised to find that the law is trailing behind the technology. As for the U.S., there are currently no laws on the books that would tie our hands with the use of autonomous weapons.
A more comprehensive place to check is the CCW, or the Convention on Certain Conventional Weapons, an international agreement formed by the United Nations with the purpose of banning weapons considered to cause excessively inhumane outcomes. Although this agreement is recognized by 125 countries, there are no formal rules on the use of killer robots. Treaties have been discussed, but both Russia and the United States have opposed them; I guess the bullies tend to get their way. Because killer robots are coming and there’s apparently nothing stopping them, we are led to the inescapable conclusion that this must be the reason why we have this resolution and why it’s so important for us to debate it.
Resolution Analysis [09:09]
Now that we have a little background under our belts, it’s time to zoom in and take a closer look at the resolution itself. We’ll take a look at each major term before moving on to some framework. To refresh, the resolution says: States ought to ban lethal autonomous weapons. Let’s begin with “States.” The Oxford dictionary defines a state as “A nation or territory considered as an organized political community under one government.” In my opinion, this should be the most uncontroversial term in the bunch. A state is a government; let’s leave it at that. Yeah, we can get goofy and argue that Delaware has no right to use drones, but the debate is interesting enough without stretching this one. Where it gets really interesting is when we start talking about what this means for the contrasting notions of rogue states or non-state actors, the truly villainous dimension of this debate. We’ll have more to say about this in the argument section. The next note: the term “states” is plural, so unless there’s a fancy argument out there that I don’t know about, the default debate should be about states in general, not just one. Consider the analogy: Schools ought to ban weapons on campus. Clearly the ordinary meaning of this statement extends beyond one school to schools generally, even if there are exceptions like police academies or martial arts dojos. This means that the affirmative has the burden to show that governments in general ought to ban killer robots. Now, whether or not they have the burden to show that all states in all times ought to do this, I’ll leave that to you. For what it’s worth, my gut says that the more extreme interpretation will not resonate well with the average person’s understanding of the topic. As for the negative, the most reasonable burden to take, likewise, would be to argue that states, generally speaking, lack an obligation to ban killer robots.
Although it may be tempting to argue that all you have to show is one example of a state that lacks such an obligation, this kind of interpretation, in my view, will also tend to rub the average judge the wrong way, even if some are willing to listen. The debate is plenty fun without trying to do something sneaky.
With that, let’s move to the next term, “ought.” Without thinking too much, we might want to define “ought” in isolation, roughly as a “duty or obligation,” but since it immediately follows the term “States,” we have an opportunity to say something slightly more sophisticated. Here we can ask about what kind of obligations a state can have and where they come from. Traditionally, thinkers have theorized about three basic types of government obligations. First, states can have an obligation to their own citizens, which derives from an internal agreement, such as a social contract or constitution. Second, states can have an obligation to other states and people, which can come from an external agreement, like an international treaty or convention. And finally, states can have an obligation derived from a higher ideal, like Justice, that places constraints on human action regardless of human consent or agreement. Each of these interpretations could prove to be useful when designing your framework on either side. The key will be to provide warrants for why these obligations exist, or why they are more important than others. But more on framework later.
The next term on our list is “ban,” and I lied earlier; this is the most uncontroversial term of the bunch. Ban means you can’t have it anymore, at all. Consider some synonyms: prohibit, forbid, outlaw, exclude, stop, block, cease. You get the picture, ban means no more. If the framers of the resolution wanted something other than such an absolute action, they would have given us a different verb. However, the most interesting point here isn’t the uncomplicated meaning of the word “ban,” but the possibilities of its negation. Just because you’re on the negative, doesn’t mean you have to be Ultron and go all in for the army of robot drones. You can be more optimistic like Tony Stark, and hold out hope that, with the right tweaking, robots may someday save the world. This is known as the old, “amend it, don’t end it” strategy, which we will definitely talk about more in the argument section.
The last set of words can be taken together or separately: lethal autonomous weapons. I’ll give each a little time to shine on its own but also talk about the ways they work together to build meaning as a unified phrase. Taken by itself, the term “lethal” means deadly, or capable of causing death or severe bodily harm. When paired with the phrase “autonomous weapon,” there’s the added meaning that the lethality is built-in or intentional. Your Roomba is an autonomous system of sorts, but it’s not intentionally lethal, even if you can come up with a bizarre sequence of events where a knife somehow gets lodged on the front and catches your ankle right as you walk out of the bedroom. Here the lethality would merely be incidental, and so a ban on rogue Roombas wouldn’t be topical. What we’re really talking about are autonomous systems that are designed to kill. This could be relevant in a round that brings up autonomous military systems that have other functions like supply and transport, engineering and maintenance, or health and medicine.
Now the next term is the star of the resolution: autonomous. What does it mean for something to be “autonomous”? Since this is Lincoln Douglas, and the word “autonomy” is not uncommon in this format, it’s worth pausing and reflecting on its deeper meaning. The word itself comes from Greek, where “auto” means self and “nomos” means law. Taken together, autonomy means self-governing or being driven by one’s own laws. In human beings, some philosophers identify autonomy as the mysterious and spontaneous source of our own free will. It’s what allows us to determine the purpose and course of our own lives, and what allows me to go to the kitchen and choose Doritos for a snack. By contrast, trees may be beautiful, but they’re not autonomous in this sense, so they’re definitely not getting up to choose anything like chips… hmmm, unless you’re in Lord of the Rings, then they can do whatever they want. The point is, being autonomous means the ability or freedom to do what you want or to set your own agenda. In machines, autonomy means something a little bit different, because they still have to be programmed and activated. Here, being autonomous means something more like the capacity to operate without external control for extended periods of time. But it gets a little bit more technical than this, since the experts agree that autonomy in robots is less like an on-off switch, where a system has it or not, and more like a dimmer, where a system has greater and lesser degrees of autonomous capabilities. One helpful way to think about this is how much the system needs a human to complete a task, kind of like the way a kid learns to ride a bike.
On the first level, the kid may just ride on the bike with a parent while the parent is in full control; on the second level, the kid may sit in the seat, but the parent still walks alongside and helps guide the bike with the handlebars; and the third level is when the parent finally releases control of the bike over to the child, hopefully avoiding a little disaster followed by bandaids on knees. Likewise, a robotic system may need significant input to operate on the first level, also known as “human in the loop,” but require less input and mostly supervision on the second level, called “human on the loop,” and theoretically no input at the highest level, aka “humans out of the loop,” where the robot finally learns to ride the bike on its own, you know, like a Harley racing down the highway while suspenseful chase music plays in the background. Of course, we need to couple autonomy back with lethality to get a better sense of what’s meant by the key phrase in the resolution. Here we can turn to Michael Klare for another definition: “Autonomous weapons systems are lethal devices that have been empowered by their human creators to survey their surroundings, identify potential enemy targets, and independently choose to attack those targets on the basis of sophisticated algorithms.” What this means is that what we really are talking about are Terminator-like beings that move and act on their own, adapt to their environment, and make decisions about what to destroy or who to kill without direct human input. If that doesn’t give you a shiver down your spine, I don’t know what will.
The last word in the resolution is “weapons,” and you may be wondering how much it adds since we’re already talking about robots that kill people. But in fact this word opens up a large space for debate, since there are a wide variety of autonomous weapons. The first divide is between fixed and mobile systems. A fixed system is one that plays a stationary role. For example, many countries currently use semi-autonomous weapon systems for defense of ships or ground installations against rockets, missiles, and aircraft. It’s kind of like a tower defense game, but in real life. A mobile system, on the other hand, would be one that autonomously moves itself through an environment, like land, sea, air, or, yeah, space. We’re finally there: Star Wars. R2-D2, a lethal autonomous system; sure, his deadliest weapon is a retractable zapper, but he’s a droid that’ll leave you in a trash compactor if you look at him the wrong way. The sky is truly the limit when imagining what mobile killer robots might look like. They could take the form of the traditional weapon systems we’re used to, like tanks, submarines, or jets, but they may also take unexpected forms. For instance, they could be animalistic like the hounds in Black Mirror, or they could be more insect-like and travel in swarms of tiny units, you know, like in Black Mirror. Sorry, that’s our future now. But maybe the scariest possibility is something like satellites that select and eliminate targets from space without any warning at all, actually that’s awesome. The point here is that the debate shouldn’t be limited to one kind of autonomous weapon, like drones, which are certainly relevant, but not necessarily representative of the full power and danger of autonomous weapons more broadly.
The affirmative side will definitely want to play this expansive interpretation of weapons up, while the negative will want to play goalie on what counts as a weapon, what seems unrealistic, or what might be turnable to the negative side. Either way, you’re going to want to read up as much as you can so you can outsmart your opponent.
I think it’s finally time for some framework. In this section, we’ll look at two ideas for the affirmative and two for the negative. These will certainly not be exhaustive of what you could do, but should provide a springboard to something else you may want. For both sides, we’ll consider a deontological and consequentialist framing. Quick reminder: a deontological view focuses on what is right or good in itself, based on a rule or principle, whereas a consequentialist one focuses on what is right or good based on the outcome or consequences of an action. If you’re a deontological thinker, you may not lie to your friend when they ask you how their fanny pack looks because you think honesty is best whether it hurts their feelings or not, but if you’re a consequentialist thinker, you might just fib a little about the fanny pack and give them a thumbs up, because you think it’s more important to protect their feelings than to follow some rigid rule about truth and honesty.
Just War Theory [19:53]
Let’s get to the affirmative. The first framework is a classic one and feels most comfortable on deontological ground: Just War Theory. This idea is no stranger to Lincoln Douglas, so it’s important to know about this one beyond this resolution. Just War Theory is essentially an attempt to put forward the conditions under which a war would be justified. Think of it this way: we all know that war is horrible and tragic, but we also know that it’s sometimes necessary. People have been struggling with this exact tension for millennia, well, except barbarians, they love war, but some civilized folk burdened with a conscience came together and reasoned: well, if we have to go to war, we might as well follow these two basic rules: jus ad bellum and jus in bello. Basically, you need a just reason to go to war, and you need to conduct the war in a just manner. Put another way, you can’t just start a war for no reason, and you can’t just fight the war in any way that you want--I’m looking at you, John Wick. This is one of the reasons why we even have the concept of a war crime, where either some group waged war when they shouldn’t have, or they did it in a way that outraged the conscience of mankind. The whole purpose of this theory is to appeal to our greater sense of justice and what is right and fair in the world, especially in times when these might be under the greatest threat. As a little spoiler before we dig into the details, the basic idea here is that autonomous weapons violate basic principles of justice in war. Now, for our purposes, we’re mostly interested in the second rule, about justice in the conduct of war, since this resolution isn’t about the reasons behind a war, but only about how they will be fought. Traditionally, the idea of justice in war splits into two important sub-rules: Distinction and Proportionality.
Distinction means that a military force can distinguish between combatants and noncombatants, or civilians that have no part in the war, and proportionality means that a military force will use a proportionate or minimal amount of force wherever needed, and not overdo it. These are captured well in the context of a food fight. Imagine two high schoolers start throwing food across the table at one another, I know this is going to be hard. If the fight stays between the two, and both get roughly an equal amount of mess on them, this could qualify as a just encounter. But imagine, first, that a cup of chocolate pudding misses its intended target and splatters on an innocent freshman girl. According to a Just War Theorist, this would violate the principle of distinction, because the fight just spilled over, literally, and harmed someone that wasn’t involved. Of course, this girl may now become involved, as the dictates of movie food fights demand, but that’s beside the point. Now, imagine a second scenario where in response to being hit on the cheek with a grilled cheese, one food fighter decides to grab a bucket of boiling chili from the kitchen and tip it over the head of the other aggressor, kind of like when Viserys Targaryen gets a crown of gold. He’s no dragon. This most certainly would upset the Just War Theorist, because it violates the principle of proportionality; it’s like bringing a machine gun to a toothpick fight; it’s just not fair. To apply these ideas to our resolution, the affirmative could argue that killer robots are incapable of following either rule. As advanced as killer robots may get, they will always miss subtle differences that humans could catch, like the difference between a civilian with a large piece of metal in their hand versus a combatant holding a rifle and dressed in plain clothes. This failure to discriminate will inevitably result in the death of innocent people.
And likewise, a killer robot may be great at completing tasks, but not at assigning or adjusting values to subjects in the heat of the moment, which could cause massive overkill or an unnecessary amount of destruction. In short, war requires judgement and awareness of complex human factors that might just not compute to a fleet of killer robots. We’ll get into more details about the evidence available here in the argument section. One more twist before moving on to the next framework. Don’t be too surprised to find someone using Just War Theory on the negative; that’s right, just like Anakin, it too can turn to the dark side. The basic idea here would be to flip the script with one simple thought: you think killer robots are bad in war, wait until you see humans. Because they’re made of meat, they’re stupider, slower, and stinkier, Mr. Anderson, and therefore more prone to mistakes in war. If you really want justice, give the metal gods an opportunity to clean things up. This of course is going to force a debate on the details of what autonomous weapons are truly capable of, so it might be best to start researching, hmmm, yesterday.
Now, let’s shift gears to a more consequentialist framing of the affirmative. Here I want to talk about the idea of minimizing dehumanization, and the fairly popular argument that the use of autonomous weapons leads to the treatment of people as dots on a screen or obstacles in a video game. In simple terms, dehumanization is the process of depriving a person or group of their human qualities. Basically, this is when you treat someone as less than human, or not human at all. In ordinary life, most people would agree that all humans have intrinsic value and deserve some minimum amount of dignity and respect. But strange things happen in the world and make us forget what we know to be true. The writer Maiese argues that dehumanization is dangerous because “Once certain groups are stigmatized as evil, morally inferior, and not fully human, the persecution of those groups becomes more psychologically acceptable. Restraints against aggression and violence begin to disappear. Not surprisingly, dehumanization increases the likelihood of violence and may cause a conflict to escalate out of control.” The consequentialist color of this frame should be clear: dehumanization opens the flood gates for human rights violations, war crimes, and even genocide. So how exactly do killer robots lead to dehumanization? Well, as Human Rights Watch explains, because of “their lack of emotion and... ethical judgment… autonomous weapons... face significant obstacles in... the respect for human life.” Even in war, we as humans experience compassion and empathy for others that allows us to minimize harm and make considered judgements based on an understanding of a particular context.
But machines don’t have this empathy or judgment and would simply “base their actions on pre-programmed algorithms, which don’t [always] work well in complex and unpredictable situations.” In addition, as we begin to replace human forces with machines, we literally begin to lose the skin we have in the game, making it more likely that we’ll engage in more and riskier conflicts. With a fleet of killer robots, war begins to feel more like a video game than a real conflict with consequences and casualties. Advocates for this framework would argue that it’s not just justice and fairness in the conduct of war that’s at stake, it’s our very humanity that we could lose, and we might be tip-toeing to the edge of global disaster, the likes of which we have never seen or imagined, except in Terminator 2: Judgment Day, when literally the whole world is annihilated. But I think you get the idea.
Social Contract [26:58]
On that note, let’s move into the negative and look at the first, and deontologically flavored, framework. This one’s a classic too, so get your Lincoln Douglas diary out for some notes, kids, and if you don’t have one, there’s no time like the present to get one started. The basic idea here is to argue that a state is justified in using killer robots because it has a duty to protect its citizens in light of a very old promise it made to them, aka the social contract. Let’s look at the general features of this idea before linking it to the resolution. The social contract, in essence, is an agreement between a government and its people where individuals, suffering in a state of nature, choose to give up certain freedoms in exchange for protection from the government. The state of nature is a hypothetical moment in history before any governments or military existed and where human beings were left to their own devices in a disorderly, natural world, so yeah, like The Purge, but in the great outdoors. Although it may seem like an historical theory, the prevailing wisdom is that the social contract is not necessarily designed to explain the actual origin of a government, but rather the purpose of a government and its obligations and duties to its citizens. Thus, the social contract is better understood as a normative as opposed to an empirical theory, meaning it tells us what to do, not how things are. Two prominent social contract theories come from John Locke and Thomas Hobbes. They agree about the basic idea of the social contract, but disagree about the details. For Locke, the primary purpose of a government is to protect individual rights that already existed in the state of nature, whereas for Hobbes it is to protect the safety and wellbeing of the public, and rights exist as useful constructions after the fact. Because of its focus on protection, we could tend toward a Hobbesian version of the social contract for this resolution.
For Hobbes, the original reason why people agree to come together and hand power over to what he calls the “Leviathan,” a monstrous political unity more powerful than any individual that makes it up, is to protect themselves from outside threats and dangers. Put simply, a state only exists for the purpose of self-defense. We can now connect this to killer robots. The world is full of enemies and dangerous actors, like terrorist organizations, or countries like China or Russia, who might prefer to play king of the Earth for a while. And not only do these entities exist, they’re on the lookout for some shiny new toys as well. What this means is that we might already be locked in an autonomous weapons arms race that we can’t afford to lose. Consider that by 2025 global spending on military robotics and the advancement of autonomous weapon technology is anticipated to reach over $16.5 billion. Despite the pleas of human rights advocates, the killer robot bowling ball is already racing down the alley, leading to an ever intensifying competition among global powers to master this technology. When viewed from this perspective, perhaps a government is not only justified in developing and using killer robots, but is in fact obligated to do so per their founding agreement of the social contract. After all, if a government doesn’t do everything it can to protect its own citizens, according to Hobbes, it’s not fulfilling its core responsibility. If the negative can mitigate some of the major disadvantages of killer robots, I think this could form one of the strongest arguments on the resolution.
Casualties of War [30:22]
Let’s hop over to another negative framework; this time, a consequentialist one. Here I want to talk about minimizing casualties of war, which I think is the other side of the dehumanization coin. Basically, the idea here is that autonomous weapons in fact make war better, not worse, if that phrase even makes sense, by reducing the total number of casualties we would record. In the context of war, a casualty is defined as someone being injured or killed in a military engagement. This can include combatants or non-combatants, and may range from light injuries, to loss of limbs or even death. As we talked about in the Just War Theory section, almost everyone agrees that war is a tragic and horrible reality, but because it is sometimes unavoidable, people also agree that we have, at the very least, an obligation to try and prevent or minimize the harm caused by our engagements. A little bit of language from the US Department of Defense helps capture this idea: In times of war “U.S. forces… [try to] protect civilians because it is the moral and ethical thing to do… [but also because] Minimizing… casualties can further mission objectives; help maintain the support of partner governments … and enhance the legitimacy... of U.S. operations critical to... national security.” I’m sure that there are some people out there in the world who might scratch their heads about how rosy we make this all sound, but there’s an important idea here worth reflecting on: yes, we should minimize harm in war because it’s the right thing to do, but even if you’re an evil government that is as heartless as the Tin Man from The Wizard of Oz, you still have a reason to minimize casualties for a host of self-interested considerations. So, for good or bad, it’s still true we should all try to take it down a notch in the game of war.
For all you clever kids out there, I bet you can predict where this is headed: States shouldn’t ban killer robots because this tech can minimize casualties and overall destruction in war. Without going into specific details, the core idea here is, as Ronald Arkin explains, that “robots are already faster, stronger, and in certain cases… smarter than humans,” and therefore they are more capable of preventing harm and reducing casualties on the battlefield. Humans are biased, flawed, and emotional. We get hungry and tired easily, and we can fall into bad habits or make poor judgments in highly consequential moments. But robots remain unaffected by the whims and weaknesses of human nature, and can carry out tasks far more objectively and efficiently than we ever could. We already hand over thousands of tasks to computers and trust that they will be completed with accuracy and precision. Why should this be any different? Once again, you can see that this debate can easily boil down to whoever can paint a stronger picture for the judge of what autonomous weapon systems can really do. Will killer robots lead to more death and destruction or less? I challenge you to discover the truth.
Now, it’s time for the moment you’ve all been waiting for: contention-level arguments on both sides. Again, we’re not going to look at anything approaching an exhaustive list of arguments on this topic, but we will look at three big categories of arguments for each side that should give you a pretty good place to start your work on this resolution. I don’t know if it’s just me, but I’m noticing quite a bit more symmetry on this topic than usual. You’ll see that the first two major arguments on the affirmative and negative are basically mirror images of each other that cover a lot of the same territory. For this reason, I think answers and rebuttals on this topic can be found by going deeper into the warrants behind your own arguments. But let’s jump into the affirmative.
Affirmative Arguments [34:02]
On the affirmative side of killer robots, we’re going to talk about three big arguments covering war, bad actors, and responsibility. As a devoted LD barbarian, I choose to start with war. The basic argument here is that states ought to ban killer robots because they make war so much worse in two fundamental ways: first, they make wars more likely to break out, and second, they make them more deadly when they do. Let’s pause for a moment and reflect on the first. We’ve already talked about the autonomous weapons arms race that is currently underway. It seems that countries are more crazed about acquiring killer robots than those, well, unusual adults were about acquiring Beanie Babies in the 90s. This technology is already being developed and manufactured, and it’s only a matter of time until these weapons see the light of day. But why does this necessarily mean that war will become more likely? For an answer to this, we can turn to Burgess Laird. He explains: “First, a state facing an adversary with… autonomous weapon systems capable of making decisions at machine speeds is likely to fear the threat of sudden and potent attack... The posturing of… autonomous weapon systems during a crisis would likely create fears that one's forces could suffer significant, if not decisive, strikes. These fears in turn could translate into pressures to strike first… for fear of having to strike second from a greatly weakened position.” In other words, war becomes more likely because the whole world becomes increasingly trigger-happy; no one wants to be the person who pulls the trigger second. But as Laird continues, this isn’t the only reason why war becomes more likely with killer robots: “as the speed of military action in a conflict involving the use of… autonomous systems… [including hypersonic weapons] begins to surpass the speed of political decision making, leaders could [easily] lose the ability to manage the crisis and… the ability to control escalation.
With tactical and operational action taking place at speeds driven by machines, the time for exchanging signals and communications and for assessing diplomatic options... will be significantly foreclosed.” Put simply, we can mess up real bad, real quick. If you’ve ever participated in or experienced any nuclear arms topic in PF or LD, this is basically the “miscalculation” argument, where we misinterpret or fail to respond to a situation accurately and appropriately. This is fine when you’re at the grocery store and accidentally wave to someone you think you know, but not so much when we’re talking about weapons that could kill hundreds of thousands or even millions of innocent people in one go. If you like this argument, I’m certain that you could find even more warrants for why killer robots make war more likely, but let’s turn to the second half of this argument that says killer robots make wars more deadly when they happen. The reason for this is fairly straightforward. As writer Kelsey Piper, not to be confused with Kelsey Potter, explains very simply: “Fully autonomous weapons will make it easier and cheaper to kill people.” She asks us to imagine an army that “wants to take a major city but doesn’t want troops to get bogged down in door-to-door fighting as they fan out across the urban area. Instead, [this army] sends in a flock of thousands of small drones, with simple instructions: Shoot everyone holding a weapon. A few hours later, the city is safe for an [army] to enter.” What’s so scary is how easy military decisions like this become when you’re not worried about losing your own soldiers: “Because you don’t need a human, you can launch thousands or millions of [autonomous weapons] even if you don’t have thousands or millions of humans to look after them.” It’s possible for someone to object here and say, well, what’s the problem if you’re taking out the bad guys? The more efficiently we can take out a military force that wants to kill us, the better. 
Well, the problem, again, as we’ve touched on before, is that while a killer robot can take the place of a human on the battlefield, it can’t fully replace what a human can do in certain situations. As the aptly named Campaign to Stop Killer Robots argues, “Fully autonomous weapons will lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war.” What is likely to happen is an overuse of force and an increase in accidental death and destruction, because robots can’t always see what a human would see or value what a human would value. In short, by allowing killer robots on the battlefield, we’re opening Pandora’s box, and we might not like what comes out. Once again, the door is open for you to discover even more warrants for why killer robots make war more deadly or horrific; it shouldn’t be too hard to do.
Let’s move on to a different category of affirmative arguments: bad actors. Here, the argument is that we should ban killer robots because they can fall into the wrong hands, threatening national security and wreaking havoc across the world. Loosely speaking, we can consider two groups of bad actors: terrorist organizations and authoritarian regimes. Let’s start with the terrorists. According to Jacob Ware, terrorist groups, like the Islamic State, are interested in acquiring autonomous weapons for three reasons: cost, traceability, and effectiveness. He explains, first, that “killer robots are likely to be extremely cheap, while... maintaining lethality. Experts agree that lethal autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist groups looking to maximize damage… [where] small AI-powered killer drones are likely to cost little more than a smartphone.” Personally, I think smartphones are still too pricey, but this is scary nonetheless. Next, Ware argues that autonomous weapons will reduce the trace left by terrorists, allowing them to escape without detection. He imagines “a terrorist wanting to assassinate a politician… [and where] all they need to do is upload their target’s photo and address into the killer robot: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure nobody knows who was responsible.” Yikes, this isn’t sounding so good so far, and we’re not even done. Finally, Ware notes that killer robots could essentially eliminate the physical costs of terrorism: someone would no longer need to be suicidal to carry out horrific attacks against the public, essentially creating a new generation of terrorists far more dangerous and deadly than we could have imagined. You might be saying to yourself, sure, that’s scary and all, but that assumes they can get their hands on killer robots, which is unlikely. Unfortunately, you’d be wrong.
As Ware describes, there are multiple ways that extremists could realistically acquire killer robot tech. First, “modern terrorist organizations have advanced scientific and engineering departments, and actively seek out skilled scientists for recruitment”; second, “autonomous weapons technology will likely proliferate through sales” in ways that already exist. Like drones, autonomous systems will likely become more widely available in commercial markets, and terrorists will be able to modify and repurpose them for their own evil deeds. Finally, like that old sink in the basement, there’s just a natural leak: innovation in this field is “led by the private sector, not the military.” This means it will be more difficult to contain the technology. Terrorists can hack, steal, or even salvage the technology from the field, if they can’t just buy it. All of this builds up to one simple point: if we don’t ban this technology and restrict the ability of terrorists to get their hands on it, it’s going to come back and bite us really hard, and any benefit we might get from having our own killer robots is going to be outweighed by the carnage extremists will inflict on innocent people across the world. But terrorists aren’t the only bad guys, in movies and in reality; we also have to worry about pesky authoritarian regimes. Human Rights Watch argues that by allowing killer robots in war, we might begin to see them creep into everyday life. They explain that “after the weapons enter national arsenals, countries might be tempted to use the weapons in inappropriate ways… against their own people” and that such weapons “could be perfect tools of repression for autocrats seeking to strengthen or retain power. Even the most hardened troops can eventually turn on their leader if ordered to fire on their own people.
An abusive leader who resorted to fully autonomous weapons [however] would not have to fear that armed forces would resist being deployed against certain targets.” Perhaps in the worst case scenarios autonomous weapons could be used by repressive regimes to target specific populations of people for ethnic cleansing or genocide. The impact of bad regimes abusing autonomous weapons may rival or even surpass the harm that could be inflicted by extremist organizations. Either way, it’s clear that the affirmative wants to advocate for a strict preemptive ban on this technology to prevent this kind of chaos from being unleashed on the world. Pretty cool arguments if you ask me.
Let’s move on to the third and last main category of affirmative arguments we’ll consider in this episode. We’ll call this one the “Responsibility Gap.” The thrust of this argument is that when we take humans completely out of the loop in fully autonomous weapons systems, we unavoidably create a gap or hole in responsibility for whatever the system ultimately does. This problem is not unique to killer robots, but is playing out in the realm of something we might find more familiar: self-driving cars. As you may remember, in March of 2018, the world witnessed the first-ever death caused by a self-driving car in Arizona. Following the incident, there was a public debate about who exactly was at fault: Was it the driver, the pedestrian, the car? Well, depending on how you answer these questions, more puzzling ones begin to emerge. If it’s not the driver or the pedestrian, then is it the car’s fault? But then who’s responsible for the car: the dealership, the manufacturer, the programmer? It’s totally unclear. Now, some might be tempted to brush this off and say it doesn’t really matter. But the problem is, it does matter. It matters because when accidents happen and people die in society, there needs to be a recourse or a remedy of some kind to alleviate the situation. More specifically, we need to compensate the victim, and we need to make sure the same thing never happens again, or at least make sure there are precautions to make it less likely. But if there is no one at fault who can be held responsible for the incident or death, then there can be no remedy. Someone’s name has to be on the other side of the lawsuit; you can’t just sue no one in particular. The same exact problem exists, but with much greater urgency, in the world of killer robots. Ray Acheson raises a similar set of questions about responsibility for autonomous weapons: “Who is responsible if a robot kills civilians or destroys houses, schools, and marketplaces?
Is it the military commander who ordered its deployment? The programmer who designed or installed the algorithms? The hardware or software developers? We can’t lock up a machine for committing war crimes—so who should pay the penalty?” And we can begin to transform these concerns into a proper Lincoln Douglas argument with the help of Human Rights Watch, which argues that allowing a responsibility gap like this in killer robots is unacceptable: “International humanitarian law establishes a duty to prosecute criminal acts committed during armed conflict… and human rights law establishes the right to a remedy for any abuses of human rights.” They explain further why these rules are in place: “Accountability serves multiple moral, social, and political purposes… it deters future violations, promotes respect for the law, and provides avenues of redress for victims.” What we’re talking about here is ensuring that injustices in the world are corrected, and if we don’t have mechanisms to enforce these corrections, then people are going to continue to be abused and killed, and the bad guys are going to continue to get away with it. As the argument goes, because it’s impossible to close the responsibility gap and hold anyone accountable for the damage fully autonomous weapons cause, we should never allow them to see the battlefield. This is an argument that is fairly prevalent in the literature, so you should have no problem building it up much stronger than I did here.
Negative Arguments [46:23]
Let’s go ahead and jump over the fence to some arguments on the negative side of this debate. As I mentioned before, you’ll notice a bit of mirror image here. We’ll cover three main arguments concerning war, bad actors, and the old “amend it, don’t end it” idea. Once again, let’s begin with war. You won’t be surprised to hear that the negative completely disagrees with the affirmative here on exactly the two points from before. The negative says we shouldn’t ban autonomous weapons because they will make war better in the sense that they will, one, make it less likely to occur and, two, make it far less deadly when it does. For those who recognize the pattern, the first one of these arguments is about deterrence, or preventing war from breaking out in the first place for fear of the consequences. For advocates of this idea, it is precisely the immense and god-like power that makes killer robots such an effective deterrent. Etzioni makes this argument when he explains that “human-out-of-the-loop” weapons are the perfect tool for reinforcing red lines, because other nations will be less likely to test such a line when it is backed up by a nearly flawless threat of an entire fleet of autonomous weapons than by a deployment of human troops they believe might not be as strong or capable. Basically, people do stuff because they’re afraid of your huge magical stick. But there’s also another route to deterrence that comes from the idea of enhanced defense. Alex Wilner argues “better defense equals better denial,” and that “by improving the speed and accuracy of… defensive weapons, and by subsequently improving the reliability of defending infrastructure, weapons platforms, and territory against certain kinetic attacks… [autonomous weapon systems will] deter certain types of behavior by altogether denying their utility.” In other words, maybe you don’t want to shoot on a goalie that you know is guaranteed to block your shot. 
Thus, the negative argues, war and military action all around become less desirable and less probable because people recognize the robotic chess game just won’t unfold in their favor. But the negative wants to push this argument further and say that, even if war does still happen in the negative world, autonomous weapons make it shorter, less deadly, and more humane. The basic reason for this we’ve talked about quite a bit: robots are better than humans at, well, just about everything. Some military experts have argued that killer robots should not only be allowed in combat, but that in fact they would be preferable to human soldiers. Our old friend Ronald Arkin agrees and argues that autonomous weapons will be more ideal for a number of reasons: first, “they don’t need to be programmed with a self-preservation instinct, thus… eliminating the need for a ‘shoot-first, ask questions later’ attitude”; second, such systems won’t “be clouded by emotions like fear or hysteria, and they will be able to process much more incoming sensory information than humans, without discarding or distorting it to fit preconceived notions”; and finally, where robots and humans fight alongside each other, “robots could be more relied upon to report ethical infractions that they observe than would a team of humans who might close ranks.” You heard that right: robots won’t be afraid to die, they won’t get all emotional, and they won’t lie to protect others who’ve crossed the line. So, yeah, if anything, killer robots are more ethical than a human combatant. And if you think we’re done here, we’re not. Michael Horowitz makes an even stronger claim, namely, that a ban on killer robots would actually cause an increase in civilian casualties.
This is because “eliminating precision-guided weapons with… automation,” would only make our attacks far less precise and would inevitably “increase civilian suffering in war.” He asks us to reflect on the massive devastation brought down on countless cities in World War 2 because of unguided weapons. Why shouldn’t we celebrate and support the technical progress we’ve made? For the first time in history, we have weapons with the highest level of precision imaginable, and we can achieve objectives without the kind of collateral damage we’ve seen in the past. For Horowitz, the affirmative represents a “knee-jerk” reaction to a new technology, and banning it would simply “deprive the world of a key means of reducing civilian casualties in war.” When constructed properly, this seems to me to be one way for the negative to take back the moral high ground in the debate. But this isn’t the only way.
Let’s jump over to another negative argument that can do something similar. Once again, we’re going to talk about bad actors, starting with terrorists, only this time, the robots are coming for them. This negative argument makes the push that autonomous weapons should not be banned because they represent a key tool in the line of defense against terrorist organizations, like the perpetrators of 9/11. For this one, we’re going to borrow an argument from the 2018 Nationals topic, “The United States’ use of targeted killing in foreign countries is unjust,” which I highly suggest digging into; watching the final round will give you a boost on this topic, since there are several parallels. One argument the negative made was that drone strikes effectively block terrorist activities, as Michael Hayden explains: the targeted drone strike program “has been the most precise and effective application of firepower in the history of armed conflict. It disrupted terrorist plots and reduced the original Al Qaeda organization… to a shell of its former self.” And the specific reasons why drones are so effective in fighting terrorism become clear when you look at the evidence: “each study finds that leadership decapitation has historically tended to disrupt militant operations and degrade their capabilities, ultimately weakening [extremist] organizations and shortening their lifespans. Simply put, when terrorists are afraid to poke their heads above ground, it becomes exceedingly difficult for them to communicate, coordinate, and conduct attacks—especially sophisticated ones like 9/11.” Thus, the negative argues, a ban on autonomous weapons, which will only become more powerful and precise in the war against terror, would only tie our hands unnecessarily in our effort to prevent attacks and save innocent lives around the world. But once again, terrorists aren’t the only bad actors we need to worry about.
There will inevitably be other rogue states or non-state actors we need to put in check with our own autonomous weapons in order to secure our safety and well-being. For example, Kenneth Anderson asks us to consider that “at some point in the not-distant future, someone—maybe in China… or in Russia… will likely design, build... and sell… a [black market] autonomous weapon system” to a group or organization with money but no conscience. If this were to happen in the affirmative world, where the United States had agreed to a ban, we would quickly find ourselves “facing a weapon system on the battlefield that conveys significant advantages to its user” but which we ourselves would not be able to deploy. The scary scenario here is the idea that, regardless of a ban, autonomous weapon systems will exist, and there will be nefarious entities that get their hands on them. In such a world, it would be like handing machine guns to criminals, but banning the police from having anything more than pepper spray or a billy club. This would leave us and others utterly defenseless, which would be a socially and morally unacceptable position to take. For this reason, the negative says we simply cannot accept a ban. Now that should be plenty to pick up and play with for the bad actors idea. I’ll leave you to figure out how to make it even stronger.
It’s time now for the third and final negative argument. If you’ve been in Lincoln Douglas for more than a year, you may begin to see argumentative patterns that repeat even across different resolutions. This next one is definitely one of those, and it re-emerges every time we encounter a “ban” or “eliminate” topic. This is the old “Amend it, Don’t End It” strategy, or the “Fix it, Don’t Nix it” move. Here the negative says, sure, there may be some problems with killer robots, but let’s not throw the killer robot baby out with the Matrix slime water. Instead of an all-out ban, why don’t we find a compromise or middle ground? From what I see, there are two different ways someone could propose a change instead of a ban to negate the resolution. The first is legal and the second is technical. Let’s start with the legal. Our buddy Arkin begins this argument with a reality check. He says the “horse is already out of the barn.” It’s clear to anyone who’s looking that lethal autonomous weapons are already on their way. And because a ban isn’t ideal for several reasons, “a better strategy is to try and control its uses and deployments” through international law. Saahil Dama agrees: “Instead of a ban, the world would be better served by a treaty that lays down minimum standards for training, developing, testing, and operating [lethal autonomous weapons]... Such a treaty would... garner [way] more support than a treaty that bans [them] since States would be reluctant to accept an outright ban given the advantages [they] would provide them with.” But another option entirely is to tinker with the technology itself to mitigate or prevent the affirmative harms.
One person who advocates for this is Larry Lewis who urges that “before calling for society to ban such weapons, it behooves us to understand what we are really talking about, what the real risks are, and that there are potential benefits to be lost.” With regard to errors and mistakes, Lewis says, they’re not some inscrutable mystery that we can’t look at and research. In fact, he suggests, “the Pentagon could analyze which applications… are inherently unsafe or unreliable in a military setting. [And] The Defense Department could then leverage expertise in academia and industry to better characterize and then mitigate these types of risks. This dialogue could allow society to better determine what is possible and what applications should be deemed unsafe for military use.” For Lewis and others, this may be a difficult process, but one that is undeniably worth the effort, given the benefits that autonomous weapons will bring. While I personally think this negative strategy can be pretty strong and work well with a range of different judges, I feel the need to at least mention one downside it has: when you say we should regulate instead of ban, it sounds very reasonable, but you’re also, by definition, making a sizable concession to the affirmative that, yes, in fact, there are problems with killer robots and something needs to be done, just not a ban. If the judge doesn’t buy your “something,” they may be more likely to vote affirmative. Just something to chew on.
Closing Thoughts [57:01]
That does it for the arguments on both sides. I know there are more out there, but that should give you a little boost. I want to end with some closing thoughts about the resolution. The first is about the language of the debate itself. As I have read about the topic, I’ve noticed several writers commenting on how the rhetoric or language of the debate, including the popular phrase “killer robots” or references to well-known movies, might bias the average person in favor of the affirmative position on lethal autonomous weapons. I discovered that there’s even a little sub-debate about this, with some research that has been conducted. The results are not entirely clear, but it does appear that people who consume more “robo-apocalyptic” media tend to oppose the creation of autonomous weapons more. But there’s also some research out there suggesting that the language of your arguments may not have as much of an effect as you might expect. Mostly this is just to call your attention to the issue so that you can be cognizant of it as you construct arguments and cases. The next closing thought I have about preparing for the resolution is actually a set of questions I think you should ruminate on and ultimately write answers for on both sides of the debate. I consider these key issues that will likely recur every round. Here they are, with a little splash of why they’re important: (1) What counts as autonomous? The affirmative needs a robust interpretation to sell the dangers of machines making truly independent decisions, while the negative may want to sell a more restrictive and realistic notion to tether the debate back to the real-world benefits of existing systems. (2) Are bans effective? I know we didn’t talk about arguments on this one way or the other, but this is a sub-debate that will likely take place and will matter quite a bit if it does.
I don’t think the affirmative should take for granted that bans are magical buttons we can press, and I don’t think the negative should ignore the possibility of pressing this issue if the affirmative leaves it open. (3) Do killer robots make war better or worse? This is going to be a central issue in most debates, and the winners are going to be those who dug for the deepest warrants on either side. The more you can discover about the strengths and weaknesses of robots and humans, the better. The debaters that know the most grow the most. (4) What actors do killer robots help the most? This is similar to the last one: do they help the heroes or the villains, and why? Why? Why? Why? Always have an extra answer for why you’re right and your opponent is wrong. (5) Who is responsible when a killer robot makes a mistake? Even if you don’t run the responsibility gap argument yourself, you will hit it. If you do run it, you’re going to need more than your argument; you’ll need answers to the answers a good debater will bring. Play the little chess game for this one well before the first round. And finally, (6) Is a regulation better than a ban? My prediction is that most negatives will run some variation of the amend-it-don’t-end-it argument, either as a main or side strategy. It will therefore be imperative to plot out the line of responses and warrants necessary to come out on top, whichever side you’re on. I really think that all six of these questions deserve special time and attention on your part, and if you take them seriously, you will become a more powerful, and let’s say, autonomous debater.
Thank you so much for letting me drone on for so long. Now get out there and kill it! Not literally, just get out there and do your best. Good luck and we’ll see you next time!
Sources in Order of Episode:
Congressional Research Service. “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” CRS. December 1, 2020
Stuart Russell. “Lethal Autonomous Weapons.” UC Berkeley. April 2016.
“Talos.” Greek Mythology.
Minsoo Kang. “Mechanical Daughter of Rene Descartes.” Cambridge. August 2016.
Leonardo da Vinci. “Robotic Knight.”
Ty McCormick. “Lethal Autonomy: A Short History.” FP. January 2014.
Wilson Rothman. “Unmanned Warbots of WWI and WWII.” Gizmodo. March 2009.
International Committee of the Red Cross. “Autonomous weapon systems technical, military, legal and humanitarian aspects.” ICRC. November 2014.
Future of Life Institute. “An Open Letter to the UN CCW.” FLI. August 2017.
CCW. “The Convention on Certain Conventional Weapons.” United Nations. 2001.
Human Rights Watch. “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control.” HRW. August 2020.
Michael Klare. “Autonomous Weapons Systems and the Laws of War.” ACA. March 2019.
Alexander Moseley. “Just War Theory.” IEP.
Marco Sassoli. “Autonomous Weapons and International Humanitarian Law.” 2014.
Michelle Maiese. “Dehumanization.” Beyond Intractability. July 2003.
Human Rights Watch. “Heed the Call: A Moral and Legal Imperative to Ban Killer Robots.” HRW. August 2018.
Encyclopedia Britannica. “Social Contract.”
Ryan Swan. “The Burgeoning Arms Race in Lethal Autonomous Weapons.” TAP. June 2020.
Ronald Arkin. “Lethal Autonomous Systems and the Plight of the Non-combatant.” AISB. July 2013.
Burgess Laird. “The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations.” Rand. June 2020.
Kelsey Piper. “Death by algorithm: the age of killer robots is closer than you think.” Vox. June 2019.
Campaign to Stop Killer Robots. “The threat of fully autonomous weapons.” 2020.
Jacob Ware. “Terrorist Groups, Artificial Intelligence, and Killer Drones.” War on the Rocks. 2019.
Human Rights Watch. “Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban.” HRW. December 2016.
Human Rights Watch. “Q&A on Fully Autonomous Weapons.” HRW. October 2013.
NYTimes. “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam.” March 2018.
Ray Acheson. “To Preserve Our Humanity, We Must Ban Killer Robots.” The Nation. October 2018.
Amitai and Oren Etzioni. “Pros and Cons of Autonomous Weapons Systems.” MR. June 2017.
Alex Wilner. “Artificial Intelligence and Deterrence.” NATO. 2018.
Michael Horowitz. “Do Killer Robots Save Lives?” Politico. November 2014.
Michael Hayden. “To Keep America Safe, Embrace Drone Warfare.” NYTimes. February 2016.
Patrick Johnston. “Do Targeted Killings Work?” Rand. September 2012.
Kenneth Anderson. “Law and Ethics for Autonomous Weapon Systems.” Hoover. 2013.
Saahil Dama. “Banning Autonomous Weapons is not the Solution.” Script-ed. 2018.
Vincent Muller. “Killer Robots: Regulate, Don’t Ban.” BSG. November 2014.
Charli Carpenter. “The New York Times says movies about killer robots are bad for us. It’s wrong.” Washington Post. November 2018.
Charli Carpenter. “The SkyNet factor: Four myths about science fiction and the killer robot debate.” Washington Post. September 2014.
Killer Robots Super Link #1
Killer Robots Super Link #2
Killer Robots Super Link #3