What if we could take people completely out of the equation when planning military strikes? 'Lethal autonomous weapons systems' use artificial intelligence to identify, select and kill human targets without human intervention. With unmanned military drones, the decision to strike is made remotely by a human operator; with lethal autonomous weapons, that decision is made by algorithms.
But how does this work, and what are the dangers of the proliferation of these weapons?
James is joined by Emilia Javorsky, a physician from the Future of Life Institute. Emilia takes us through what a future with autonomous weapons could look like, including the risks to our world and to the development of artificial intelligence.
You can find more about this at https://futureoflife.org/ and https://autonomousweapons.org/
For more Warfare content, subscribe to our Warfare newsletter here: https://www.historyhit.com/sign-up-to-history-hit/?utm_source=timelinenewsletter&utm_medium=podcast&utm_campaign=Timeline+Podcast+Campaign