With the ongoing development of artificial intelligence, robotics, and image-recognition technology, it was only a matter of time, and not much time, before these innovations were applied to military weapons.
Already in March of this year, a “lethal autonomous weapons system” was used by the government of Libya against a rebel militia. The weapon was a Kargu-2, an intelligent drone manufactured by a Turkish defense contractor. The drone was able to follow and attack fighters as they fled a rocket barrage; it was unclear from reports whether people were still in control of the device at the time of its deployment.
Last year, in the Nagorno-Karabakh region, Azerbaijan used special attack drones against Armenian soldiers. These drones were able to hover in the air, awaiting a signal not from a human controller but from the assigned target, before initiating an attack. The world currently offers numerous small regional conflicts that provide ideal opportunities to test and further refine the intelligence of increasingly autonomous war machines.
According to a report last October,[1] Tyndall Air Force Base in Florida announced that it will soon be the first to test-deploy Quadrupedal Unmanned Ground Vehicles (Q-UGVs) for “enhanced situational awareness.” Described as robot dogs, the machines will be used to surveil “areas that aren’t desirable for human beings and vehicles.”
A Q-UGV designed by Ghost Robotics, featured at the Association of the United States Army annual conference this fall, sports a custom-built “special purpose unmanned rifle” developed to provide accurate fire from unmanned platforms. The unit looks like a small tank, but with legs instead of wheels. The rifle mounted on top of the robot dog has a 30x optical zoom, a thermal camera for targeting in the dark, and a range of 1,200 meters.
So far, the robot dogs are still remotely controlled by human operators, but arms manufacturers around the world are exploring ways to give these devices some degree of autonomy and decision-making ability, that is, free will. It seems unlikely that the weaponized robots of the near future will abide by Asimov’s quaint “laws of robotics.”[2] Perhaps civilian robots will.
These ominous developments prompted a recent meeting of 125 nations in Geneva to review the Convention on Certain Conventional Weapons, also known as the “Inhumane Weapons Convention.”[3]
Predictably, this U.N.-sponsored event was unable to produce more than a vague statement of the need to restrict the development of this technology. The U.S. and Russia were opposed to any curbs, and the “Inhumane Weapons Convention” does not currently address the use of killer robots. The American government feels that existing international law is sufficient to regulate lethal autonomous weapons systems, and that banning the technology is premature, probably because it has already invested heavily in systems that apply artificial intelligence to long-range missiles, swarm drones, and missile defense systems.
I haven’t thought much about killer robots since I was ten years old. At that age, my peers and I would draw frenetic pictures of spacecraft and robots shooting missiles and laser beams, surrounded by flames and explosions. “My robot can destroy your robot,” we would insist, though this was often disputed. At night our televisions would bring us images of the ongoing war in Vietnam. Since my Cold War childhood I have never heard any serious discussion of how our various nations and tribes will “…beat their swords into plowshares and their spears into pruning hooks.”[4] In fact, quite the opposite: said spears and pruning hooks are now the innards of new and terrifying weapons systems.
Why not killer robots? Concerned groups such as Human Rights Watch have argued that “Robots lack the compassion, empathy, mercy and judgement necessary to treat humans humanely, and they cannot understand the inherent worth of human life.” A related concern is that mass-produced killbots may lower the threshold for war by taking people out of the decision tree in armed disputes.
But these concerns raise the question: How well, historically, have human beings done with these decisions? And perhaps a more troubling question: If war is terrible and unavoidable, will this technology at least make it more precise, efficient, and short-lived?
**********
[1] “Weaponized robot dog makes debut appearance at US Army annual conference”, American News, 10/14/21.
[2] Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
[3] “Killer Robots Aren’t Science Fiction. Calls to Ban Such Arms Are on the Rise”, New York Times, 12/18/21.
[4] Isaiah 2:4.