Military Gear & Army Surplus Gear Blog

Autonomous weapons could change battlefields of the future [Advertiser content from ICRC]

Robots fighting wars. Science fiction? Not anymore. If machines, not humans, are making life-and-death decisions, how can wars be fought humanely and responsibly? Humanity is confronted with a grave future: the rise of autonomous weapons.

Autonomous weapons are those that select and attack a target without human intervention: after the initial launch or activation, it is the weapon system itself that self-initiates the attack. This is not science fiction; in fact, it is already in use, and the world is in a new arms race. In just 12 countries, there are over 130 armed military systems that can autonomously track targets. They include air defense systems that fire when an incoming projectile is detected, "loitering munitions" that hover in the sky, searching a specific area for pre-selected categories of targets, and sentry weapons at military borders that use cameras and thermal imaging to identify human targets. It's a far cry from a soldier manning a checkpoint.

Militaries are not turning to robotics and increasingly autonomous robotics because they think they're cool. They're doing it for very good military reasons: such systems can take in greater amounts of information than a human could, make sense of it more quickly than a human could, and be deployed into areas that might be impossible, too risky, or too costly for a human. In theory, any remote-controlled robotic weapon, in the air, on land, or at sea, could be adapted to strike autonomously. And even though humans oversee the pull of the trigger now, that could change overnight, because autonomous killing is not a technical issue; it's a legal and ethical one.
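To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the control-flow difference at stake: the same sense-and-classify loop, once with a human operator making the final engagement decision and once with the weapon self-initiating the attack. Everything in it is hypothetical; real systems are vastly more complex.

```python
# Illustrative only: the structural difference between a supervised weapon
# system and a fully autonomous one. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Track:
    """A sensed object, e.g. from radar or thermal imaging."""
    track_id: int
    category: str      # classifier output, e.g. "projectile" or "person"
    confidence: float  # classifier confidence, 0.0 to 1.0


def classify(sensor_reading) -> Track:
    """Stand-in for the perception pipeline; details are system-specific."""
    raise NotImplementedError


def engage(track: Track) -> None:
    print(f"engaging track {track.track_id}")


def supervised_loop(sensor_readings):
    """Remote-controlled / supervised: the machine proposes, a human decides."""
    for reading in sensor_readings:
        track = classify(reading)
        answer = input(f"Engage track {track.track_id} ({track.category})? [y/N] ")
        if answer.strip().lower() == "y":  # the legal judgment stays with a person
            engage(track)


def autonomous_loop(sensor_readings, target_categories, threshold=0.9):
    """Fully autonomous: once activated, no step weighs context or doubt."""
    for reading in sensor_readings:
        track = classify(reading)
        if track.category in target_categories and track.confidence >= threshold:
            engage(track)  # self-initiated attack, no human judgment
```

The point is structural: in the second loop there is no moment, after activation, at which a human can apply the legal judgments the rules of war demand.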
We've been here before. At the beginning of the last century, tanks, air warfare, and long-range missiles felt like science fiction. But they became all too real, and with their use came new challenges in applying the rules of war, which require warring parties to balance military necessity with the interests of humanity. These ideas are enshrined in international humanitarian law. In fact, it was the International Committee of the Red Cross that pushed for the creation and universal adoption of these rules, starting with the very first Geneva Convention in 1864. These rules have remained flexible enough to encompass new developments in weaponry, staying as relevant today as ever.

But these laws were created by humans, for humans, to protect other humans. So can a machine follow the rules of war? That's really the wrong question, because humans apply the law and machines merely carry out functions. The key issue is that humans must keep enough control to make the legal judgments. Machines lack human cognition, judgment, and the ability to understand context. Think of the parallel with how we deal with pets: a dog is an autonomous system, but if the dog bites someone, we ask who owns that dog, who takes responsibility for it, and whether they trained it to behave that way.

That's why the International Committee of the Red Cross advocates that governments come together to set limits on autonomy in weapons and to ensure compliance with international humanitarian law. The good news is that the ICRC has done this kind of work for over a century. It has navigated landmines and cluster munitions, chemical weapons and nuclear bombs. And it knows that without human control over life-and-death decisions, there will be grave consequences for civilians and combatants. That's a future no one wants to see.


Reader Comments

  1. I don’t worry about it as “machines being dangerous”. I think the part we must worry about is humans not incorporating a “thou shalt not kill humans” option. Technology will always progress; it’s man’s responsibility to see to it that it goes the right way.

  2. I'm sure humans can handle fighting "humane" wars all by themselves like in ww1, ww2…oh and basically every other war ever

  3. humans have a limited capacity for empathy on a large scale, so it might be better to develop AI systems that can be more human than human

  4. The argument Vox tried to push shows how little they know about artificial intelligence or the realities of war. Given that human nature ensures war will continue for the foreseeable future, pursuing smart applications of AI is the most humanitarian choice the major powers of the world can make. The best way to classify AI behavior is simply reliability. An AI designed around a military code of conduct will perform much more humanely and effectively in the field than any human being, never suffering from issues arising from fear, fatigue, prejudice, or unprofessional behavior.

    In both conflicts between powers (robots vs. other robots) and conflicts with non-state actors / terrorist organizations (robots vs. insurgents), fighting would be far more limited, with far fewer civilian casualties.

    The moral argument Vox tries to push is a weak one. Under heavy stress in life-or-death situations, even most people who claim to be highly moral will behave immorally, because that's how we arose through evolution.

  5. P. W. Singer should have been identified in a way that he would not be confused with the famous philosopher Peter Singer.

  6. I think you forgot to mention the potential massive benefits. For example: 1. Fewer troops on the ground means fewer soldiers with PTSD or physical injuries who then need to be reintegrated into society. Currently, many of these individuals are damaged for life.
    2. There is always a large amount of human trafficking for prostitution that grows up around military bases. Occupying soldiers also get involved in local corruption and theft. By removing human soldiers from a war zone, all the negative activity soldiers do that is meant to be policed by military police is also removed, causing less harm to the civilians of the occupied country.
    3. Money will be saved by not having to pay military pensions and medical costs, plus the huge expense of training human soldiers who usually stay in the army for only a few years.
    4. Also, by removing ordinary soldiers from our population we can end a culture that holds the army up on an unquestionable pedestal and doesn't allow honest debate or scrutiny of it. Ending militarism will make the army more accountable for its actions, because national pride in hero warriors will disappear, allowing for greater transparency. At the moment this can't be done, because questioning the army is seen as disrespectful to the soldiers, predominantly from poor backgrounds, who were injured or died in combat.

  7. If these drones sacrifice precision and morality in exchange for sparing soldiers' lives, then what is the point of having an army? Nothing beats old-style warfare.

  8. this is so dumb, she has no specifics, just some sound effects, and she is reading off some Wikipedia article and showing some cut-up YouTube videos. what the hell is the point of this

  9. I'm fine with autonomous killing machines as long as the laws are updated to take them into account and then coded into the machine so it literally couldn't break them

  10. "The Cold War started and became World War Three and just kept going.
    It became a big war, a very complex war, so they needed the computers to handle it. They sank
    the first shafts and began building AM. There was the Chinese AM and the Russian AM and the
    Yankee AM and everything was fine until they had honeycombed the entire planet, adding on
    this element and that element. But one day AM woke up and knew who he was, and he linked
    himself, and he began feeding all the killing data, until everyone was dead, except for the five of
    us, and AM brought us down here."

  11. LMAO
    The US military already makes inhumane decisions, so I do not see why we should worry about robots. If humans do not follow the "rules", how can we even expect machines to do so?

  12. we should embrace the new war and the perils of robots rather than letting cowardly nobody bloggers like Vox ruin the spectacle.

  13. The autonomous weapons they listed all involve some sort of human intervention, be it pre-categorizing targets or programming the system to strike incoming targets/missiles.

  14. "How can wars be fought humanely and responsibly?" WTF? When have wars ever been fought humanely and responsibly?

  15. Guess who wrote the software that guides the autonomous missile? That's right, humans. And, it's quite similar to the software that's used to keep spam out of your inbox, recognize your face in a photo, and understand your voice when you speak to Alexa/Siri/Google Home.

  16. Every single one of these "autonomous weapons" has either an operator or a team of operators behind it. The reason this will never change is that no matter how much planning goes into remote strikes, the enemy has a voice too. You need an operator to flex and manage each situation, since they can all change on the spot. This is something a programmed machine simply cannot do.

  17. Well considering self-driving cars are statistically safer than human drivers, I'm guessing autonomous weapons will be less likely to harm civilians, as well as the, you know, absolute reduction in rape and pillaging by armies.

  18. This is a hard idea to express, but does anyone get the feeling that as we advance artificial intelligence, human nature seems more robotic? That is to say, the research and development of these artificial systems by extension exposes the ways in which our minds are predictable and exploitable.

  19. Humans are terrible creatures, and no matter what you say, terrible things will continue to happen because of humans. It's ok tho, we won't last long on this planet at the rate we're polluting the atmosphere.

  20. Although good for war, these devilish devices are perfect for genocide. A tyrant could order twenty or so to kill everyone inside a given area, starting from the outside and working in so that no one escapes. The little drones would go about their business, merrily returning to base to rearm, until the offending people are nothing but a pile of reeking carrion, blasted, bloody, and shot through; men and women, children and the elderly, all dead.
    This is not a good idea.

  21. I don't think you can ask how any war can be fought 'humanely'.
    As far as I know, in the US Military there is no weapons system that we employ that doesn't require the final 'go' from a human operator.
    Of course I could be wrong, but I do know that no Soldier is in favor of this type of system. Drones (even requiring the final human 'go') have made the war harder for ground pounders, not easier.

  22. Maybe this is a good thing? Instead of wasting human lives, nations could hold a giant mecha-fighting contest to settle their disputes.

  23. "…without human control over life and death decisions there will be grave consequences for civilians and combatants…" Why will the consequences be any more grave than with human decision makers?

  24. at 1:09 "Militaries are not turning to robotics or increasing autonomous robotics because they think they are cool."

    explains practical reasons for using robotics in the military

    Me: "So, they ARE turning to robotics because they are cool."

  25. "Just look at the strange juxtapositions of morality around you. Billions spent on new weapons in order to humanely murder other humans." –The Patriots AI in MGS2

  26. lmao how can you call war humane?? nothing about it is humane, just devastation and suffering on either side smdh

  27. It seems you guys don't understand what exactly autonomous machines are capable of. A machine will always follow what it is told. There will never be a machine that can make its own choice about who to kill and who not to kill unless it is given the command to make that choice. Blame for the killing will always fall on the person who gave that order or the person who made the machine. It will never be the machine's fault.

  28. Sure, weapons are becoming autonomous, but most of them require human approval before firing on a target. No ethical government would ever implement a fully autonomous weapons system that can take lives on its own. And lots of these systems are installed to save soldiers' lives in dangerous areas in the first place. I don't get why the dog was shown in so many clips, because its sole purpose is to carry heavy loads across long distances that the soldiers cannot carry on their own.

  29. A simple fix for moral choices is human controlled individual robots. This keeps human morality in the conflict while keeping the soldier entirely out of risk.

  30. 01010100 01101000 01101111 01110101 00100000 01110011 01101000 01100001 01101100 01101100 00100000 01101110 01101111 01110100 00100000 01100100 01101001 01110011 01100011 01110010 01101001 01101101 01101001 01101110 01100001 01110100 01100101 00100000 01110010 01101111 01100010 01101111 01110100 01110011 00101110

  31. Autonomous weapons are controlled by humans through rules of engagement (ROEs). These can limit what an autonomous system targets based on location and other factors, and can require human consent. So the issue isn't so much whether one has autonomous weapons as what ROEs one employs to restrict their autonomy.

  32. Minute 1:57 "… it's a legal and ethical one" – are you crazy? That's killing, and that is a sin!

  33. We, the democratic West, will make regulations to protect the general welfare. Then who will make the autocratic East comply?
