The ethical use of AI agents in defence and national security

Richard Beck highlights the need for an ethical framework to help realise the power of AI systems in the realm of defence.

Artificial intelligence (AI)-powered technology, such as autonomous systems, has the potential to reshape our national defence posture in unprecedented ways. Its use raises ethical concerns that must be managed to ensure these technologies are designed, built, and deployed safely and responsibly.

The defence sector is well known for pioneering cutting-edge technology, yet notoriously slow to integrate innovations at scale. The critical expansion of Human-Machine Teaming (HMT) technology with AI will need robust ethical explainability to drive this innovation. For this generation of technology, it is crucial to uphold ethical standards that safeguard human rights, privacy, and international law, while mitigating risks to safety and accountability, without ceding military AI initiatives to our adversaries.

Earlier this year, NATO updated its AI strategy, outlining key principles for the responsible use of artificial intelligence in defence. The strategy emphasises ethical standards such as accountability, lawfulness, and human rights protection to ensure the transparent and safe use of AI.

The UK’s Ministry of Defence has produced a Defence AI Playbook designed to accelerate use cases for ‘AI-ready’ strategic advantage, much like the US DoD’s AI adoption strategy.

Despite this intent, there is a disconnect between government recruitment, procurement strategies, and the digital skills enablement processes needed to deliver the critical capabilities in the playbook.

It's not just about leveraging AI or autonomous agents for enhanced defence capabilities; it’s also important to figure out how to scale rapidly and exploit the benefits (see the US position on Advancing AI Leadership), while maintaining ethical decision-making, transparency, safety, and our national security.

Why ethics matter in emerging defence technologies

As we face growing demands to balance efficiency with ethical responsibility, we must understand why ethics matter in the defence context.

Adoption of a clear ethical framework could mitigate significant risks. One example is the recent ‘Blueprint for Action 2024’ from the Responsible AI in the Military Domain (REAIM) summit, to which 61 countries declared their support.

Without scrutiny of biases in the AI algorithms used in military applications, we could see unfair outcomes or unintended escalations in conflict. Autonomous agents or weapons systems introduce accountability concerns: when we empower technology to make a decision that results in harm, who is responsible?

I believe that we must always have a ‘human in the loop’ as a counterbalance. Responsibility for outcomes should lie fully with those who assign tasks, ensuring that human oversight is maintained for every decision made throughout the AI lifecycle.
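
To make that concrete, here is a minimal Python sketch (all names hypothetical) of a human-in-the-loop gate: the system may recommend an action, but nothing executes without a named person’s decision, and that decision is recorded for accountability.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str        # action proposed by the AI system
    confidence: float  # model confidence score, 0.0 to 1.0
    rationale: str     # human-readable justification

def execute_with_human_gate(rec: Recommendation, approver: str) -> dict:
    """Never act on an AI recommendation without an explicit human decision.
    The approver's identity and a timestamp are recorded so responsibility
    stays with the person who assigned the task."""
    print(f"Proposed action: {rec.action} (confidence {rec.confidence:.0%})")
    print(f"Rationale: {rec.rationale}")
    decision = input(f"{approver}, approve this action? [y/N] ").strip().lower()
    # The audit record, not the model, is the locus of accountability.
    return {
        "action": rec.action,
        "approver": approver,
        "approved": decision == "y",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```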

As recent research recognises, autonomous agents work very well in cyber security for interconnected systems. Using pattern recognition to identify threats before they materialise is a game changer for cyber defence.

This proactive, predictive analysis can drastically alter how defence strategies are developed and deployed. Ethical consideration is crucial to prevent false positives or false information from blurring on-the-ground conditions and masking a lack of evidence for autonomous AI decision-making.
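
As a simple illustration of the pattern recognition involved, the Python sketch below (illustrative data and threshold) flags time windows whose event rate deviates sharply from the baseline. The threshold is where the ethical trade-off lives: lowering it catches more threats but raises the false-positive rate.

```python
import statistics

def flag_anomalies(event_counts: list[int], threshold: float = 2.5) -> list[int]:
    """Flag time windows whose count sits more than `threshold` standard
    deviations from the mean. A simple stand-in for richer detectors."""
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts) or 1.0  # guard against zero spread
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]

# e.g. hourly counts of failed logins on a monitored network segment
hourly_failures = [4, 6, 5, 7, 5, 4, 6, 48, 5, 6]
print(flag_anomalies(hourly_failures))  # -> [7]: the spike in hour 7
```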

By embedding ethical considerations into the design and deployment of these systems, as we do today for ‘Secure by Design’, defence organisations can ensure they build ‘Ethical Oversight by Design’, enabling the technology to operate within the boundaries of international law and agreed ethical frameworks, such as the UK government guidance.

Identifying ethical risks and trade-offs in advanced defence technologies

Autonomous systems and AI algorithms operate in ways that are difficult for defence operators in the field to understand. This lack of transparency could erode trust or lead to misuse.

When developing solutions, we should prioritise explainable AI (known as XAI). This means semi-autonomous or autonomous AI systems can justify their decisions or recommendations before lethal authority is delegated to a predictive algorithm.
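
One simple form of XAI is attribution: reporting how much each input drove a score. In the Python sketch below (illustrative feature names and weights; production systems use richer techniques such as SHAP), a linear threat score always ships with its own ranked justification for the operator to interrogate.

```python
# Illustrative linear threat-scoring model; names and weights are invented.
WEIGHTS = {"unusual_port": 1.8, "known_bad_ip": 2.5, "off_hours": 0.7}

def score_with_explanation(features: dict[str, float]) -> tuple[float, list[str]]:
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by how much each drove the decision, so a human can
    # interrogate the recommendation before anyone acts on it.
    explanation = [f"{name}: {c:+.2f}" for name, c in
                   sorted(contributions.items(), key=lambda kv: -abs(kv[1]))]
    return score, explanation

score, why = score_with_explanation(
    {"unusual_port": 1.0, "known_bad_ip": 1.0, "off_hours": 0.0})
print(f"threat score {score:.2f}")
print("\n".join(why))
```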

It’s my view that we shouldn’t wait for regulation; we should develop the technology in alignment with existing agreed frameworks.

We must establish who is responsible for these technologies’ actions, given known rules of engagement and meaningful human control, whether that is the defence prime, the operators, or back through the chain of command right up to lawmakers and political decision makers.

Autonomous surveillance technologies are crucial for national security – a force multiplier for defence.

AI technology designed for efficiency may sometimes compromise fairness for utility. AI-based threat assessments might prioritise speed and accuracy while overlooking inherent biases in the training data.

Upholding transparency and fairness without sacrificing the core purpose requires a deep commitment to ethical system design.
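
In practice, that commitment can begin with something as simple as measuring error rates per group rather than in aggregate. A minimal Python sketch, using hypothetical assessment data, follows:

```python
def false_positive_rate(labels: list[int], preds: list[int]) -> float:
    """FPR = false alarms divided by all genuine negatives."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical threat assessments split by a sensitive attribute (e.g. region).
groups = {
    "region_a": ([0, 0, 0, 1, 0, 0], [0, 1, 0, 1, 0, 0]),
    "region_b": ([0, 0, 0, 1, 0, 0], [1, 1, 0, 1, 1, 0]),
}
for name, (labels, preds) in groups.items():
    print(name, f"FPR = {false_positive_rate(labels, preds):.0%}")
# A large gap between groups signals a bias that aggregate speed and
# accuracy metrics will hide.
```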

AI-driven systems are complex and difficult to explain in simple terms. To maintain trust and ethical use, military personnel, policymakers, and politicians must understand how these systems make decisions, and why. Only then can they be considered equipped to approve their use in times of conflict.

By recognising these ethical dilemmas and trade-offs, and applying frameworks for these competing factors, we can make informed decisions that align with both military enablement and national security considerations.

Ethical integrity as a strategic defence advantage

The development of ethical systems goes beyond the defence sector and individual technologies.

Ethical leadership is especially important in emerging tech defence capability, where decisions often carry life-and-death consequences. We need a culture of ethical awareness and an ethical skill base for emerging technologies.

We must implement processes and systems to monitor the use of emerging technologies and ensure compliance with ethical standards. This includes regular AI audits, transparent reporting mechanisms, and the establishment of oversight and assurance, as outlined by CETaS research, to evaluate the ethical impact of deployed systems.
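
One building block for such oversight is a tamper-evident decision log. The Python sketch below (hypothetical fields) chains each audit entry to the previous one with a hash, so an auditor can later verify that no record has been altered or removed.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log: list[dict], event: dict) -> None:
    """Append a hash-chained entry so later tampering is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

audit_log: list[dict] = []
append_audit_entry(audit_log, {"model": "threat-scorer-v2",
                               "decision": "flagged", "reviewer": "op-114"})
# Recomputing each hash during an audit verifies the chain is unbroken.
```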

Ethical integrity is not just a moral imperative for our defence; it is a strategic one.

While these systems are primarily focused on defence, integrating AI-driven autonomous agents for humanitarian purposes, such as disaster response and relief, can showcase the positive applications of these technologies.

This dual-use approach will improve public perception of the ethical use of advanced emerging tech in defence.

Elevating the benefits and ethical use cases for AI technologies could foster trust and support for military initiatives. Ensuring that public concerns are acknowledged, in line with compliance and international laws, will reduce the risks that could otherwise undermine both national security and global stability.

 
