This Washington Post story about the use of autonomous robots to conduct military tasks isn’t particularly new but it’s generated some interesting comments in the sphere o’blogs.
But one thing I did learn of was the existence of the ICRAC (International Committee for Robot Arms Control). According to the WaPo:
The ICRAC would like to see an international treaty, such as the one banning antipersonnel mines, that would outlaw some autonomous lethal machines. Such an agreement could still allow automated antimissile systems.
Yeah…the landmine treaty. The one that the nations most likely to engage in big conflicts haven’t signed. Let’s hope they’ve set their sights higher than that.
Thomas P. Barnett sees the coming of our robot overlords as a good thing and (dang it) if we’d just waited to invade Iraq and Afghanistan until we had them up and running we’d totally have kicked ass and won.
the whole slog of counterinsurgency is about two sides trying to create a sense of strategic despair (“How can we possibly win?”) in the minds of the other side. The more the US signals its usual historic approach (“We will win with technological stuff in large numbers that keeps our casualties low”), the more we create strategic despair on the other side. This sort of technology will go a long way toward creating such despair.
Really? Our problem was that we weren’t technologically advanced enough in relation to our enemies in Iraq and Afghanistan? Are you kidding me? These guys were building bombs out of the flotsam and jetsam left behind during the Soviet-Afghan war of two decades earlier. We were dropping satellite-guided bombs, conducting drone strikes, and seeing deeper into the electromagnetic spectrum to identify and target enemies than most people thought possible a generation ago, and Barnett thinks if we’d just had the iPhone 7 we’d have won the war?
His ‘strategic despair’ sounds good but I don’t think he realizes it works the other way as well. We spend almost half the world’s defense allocation and have the most highly trained and best equipped army in the world. When an army like that finds it can’t win against a ragtag collection of fighting bands over time and begins to take casualties, isn’t there a risk that the force (and the nation behind it) will come to the conclusion the war is unwinnable regardless of the investment? Isn’t that where we are right now?
This just sounds an awful lot like ‘triumph of the will’ talk, where wars are won by whoever wants victory the most. Hmmmm…I just don’t buy it.
Paul Pillar identifies what may be a silver lining surrounding robots with an autonomous ability to exercise lethal force.
Programming a robot weapon to make those determinations instead of a human forces the criteria to be clear. A vague sense of what makes someone enough of a bad guy to be bumped off from high altitude is not the sort of basis for decision that can be translated into computer code…But forcing the construction of explicit standards for pulling the trigger would enable the entire effort, even the part involving humans, to be put on firmer moral and legal ground.