The Ethical Dilemmas of Autonomy
Technology is developing at a pace that’s unprecedented in human history, and autonomy will be an incredibly powerful tool as that development continues. But with great power comes great responsibility, as they say.
So what could an autonomous future look like? How is it going to be regulated? Will it make society safer? Those were the questions posed as three experts discussed the ethical dilemmas surrounding the future of autonomy, particularly within the defence sector, at D3IP’s Autonomy Unleashed expo.
Autonomy on the Battlefield
“If a potential adversary is developing AI and autonomous weapons systems at a level humans cannot normally interact with, intercept, and defeat, there is an imperative to create a defence system that can at least match that as a minimum,” said Peter Lee, Professor of Applied Ethics at the University of Portsmouth.
“But then we also know that if we create a defence system with that kind of kinetic capability – and Iron Dome in Israel has been publicised quite a lot in the last couple of weeks – we all know it doesn’t take very much to turn that from a defensive to an offensive system.”
So, where do we draw the lines when it comes to autonomy on the battlefield?
Professor Lee commented: “The ethics of it all is highly movable.
“There is a need for UK plc to develop the latest technologies. But we need to look very carefully at how we field that, and where the lines are drawn between using them for defensive systems and at what point you can or would use them as offensive systems.”
But he added: “In a situation like Ukraine, if you’re fighting for survival, this is the kind of luxury conversation we’re having here. In Kyiv, this is not a luxury conversation you can have.”
Anna Dowle, a risk management expert working with the British Army, suggested there might also be psychological elements at play. Does removing the human element of war make it more appealing? When the physical damage is suffered by machines rather than soldiers, are there fewer objections to launching military campaigns?
“I think we can all realise, especially over the last 15-20 years, that it is the injured soldiers and the dead soldiers coming back that stop a lot of governments from engaging in conflict, because they don’t want that negativity,” she said.
“One of the things that scares me is that there might be – and I’m talking way in the future – more of a propensity to engage in conflict because it’s robots that are doing it.”
Building Assurance into Autonomous Capabilities
Once you’ve determined the requirement for autonomy and can navigate the ethical dimensions of deploying it, how do you ensure assurance is built into the systems – especially when so many components come from different levels of the supply chain?
Dowle commented: “It’s the supply chain: where are the chips coming from? Have they been interfered with?… You’re looking at some of the big companies, they could be buying items from seven or eight vendors down. They have no idea where it’s coming from.
“I think one of the things we need to do in this country is start doing a bit more of our manufacturing. We cannot allow ourselves to be reliant on these systems if we don’t know exactly what has gone into them.
“At the end of the day, they could turn it off, which we had with the GPS… The Russians are doing it quite regularly.”
When you’re relying on systems that have been programmed by a third-party supplier and you can’t be sure of the security of that supply chain, that poses a risk – particularly when those systems are deployed in a battlefield environment.
So who is accountable for an autonomous system when it needs to make a decision in a fraction of the time it would take a human?
Dowle said: “The whole point of using some of this autonomy is that rapid decision-making, because we all know that, especially in conflict, it is rapid decision-making. He who hesitates is lost, as they say.
“So users are really keen; they want to know how the systems work and how they’re going to utilise them, and also who is going to be responsible if it goes wrong? Is it them, is it the person who made the decision to use it, or does it come right back down to the manufacturer and the coders?”
Reece Oliver, Experimentation Team Lead at NavyX, added to the concerns that arise from operating these complex systems autonomously, particularly when machine learning enters the mix.
He commented: “We obviously want to progress this technology to a position where humans are less in the loop and that then means we have to front-load the assurance. But it’s not just software, it’s also all the data that we feed that software with. We then have to explore the software, and then we have to explore the outputs.
“What we’re then throwing on top is another complicated layer of a system that essentially changes every day, and the day you assure it, it will be different to the next day.”
While there are no clear answers to these questions, the discussion highlighted the critical need for careful consideration and robust safeguards. As technology continues to advance, it is imperative to establish clear boundaries for the use of autonomy, particularly within the defence sector.
The path forward demands a balance between technological progress and ethical responsibility, with a focus on building assurance into autonomous capabilities.