
Delivering AI at the operational edge
The adoption of AI and machine learning in defence and security has the power to transform operational capabilities. However, issues such as human oversight, workforce skills and assurance need to be continuously managed if AI’s true potential at the operational edge is to be successfully realised.
There is much debate about the use of AI in defence and security, nowhere more so than around the extent to which humans ought to be involved in operational decision making, particularly where those decisions affect the life and liberty of people in the real world.
AI is highly efficient at determining outcomes based on probabilities and algorithms. But sentient AI systems capable of consciously understanding the real world – to select a target for offensive military purposes, or to determine if someone is going to commit a crime – currently remain hypothetical. So, in most operational contexts, human oversight is essential to ensure AI-assisted decisions are as accurate as possible and align with legal and ethical standards. However, there are also examples of critical operational capabilities, such as air defence systems or maintaining cyber security, where any ‘human in the loop’ could negate the instantaneous response that AI can deliver.
The future will be characterised by human-machine teaming across most operational contexts. However, the increasing sophistication of AI systems, coupled with the need to stay ahead of criminals and other adversaries who will be ruthless in exploiting AI for their own purposes, will mean ceding more authority to machines to take decisions for the sake of speed and accuracy. In this context, the key question for organisations will be how to ensure humans retain control of AI systems through the development and oversight of intelligent rules, thresholds and policies. This will maximise the potential of AI to carry out increasingly complex operational tasks within defined parameters.
Rethink decision-making loops to play to AI and human strengths
As large language models improve and become more attuned to the operating environments they are deployed in, organisations need to be bold in minimising the role of humans in decision-making loops. Instead, defence, security and law enforcement organisations should accept that the human role will increasingly focus on designing the ways of working, policies and guardrails that maximise the potential of AI without making ethical compromises. Doing this effectively will require organisations to have an adaptive culture, where they are not wedded to how things have always been done. Organisations will also need to allow operational methods to change constantly to make the most of AI (and other technologies) to build and sustain advantages over adversaries. As lessons from Ukraine and elsewhere have shown, this kind of culture needs to be underpinned by forward deployment of technologists into ‘frontline’ teams and high levels of AI literacy across operational workforces.
Deliver AI learning regimes at the pace of relevance
As with all emerging technologies, equipping people with the right skills to understand and use AI systems lies at the heart of successful adoption. For example, the UK Government’s Defence AI Strategy, published in June 2022, reflects the growing positive sentiment and the central role AI now plays in Government initiatives. However, a more specific drive and a clearer direction of travel are needed, as AI skills and implementation risk failing to match this ambition.
Combining formal training with hands-on experience is critical in enabling the culture of continuous learning required to build operational teams with sufficient AI expertise. Operational teams will also need to understand barriers to deploying AI in operational settings, such as inadequate technology platforms or outdated policies, and be empowered to develop solutions to overcome them. Conversely, defence-employed or commissioned AI developers will need to evolve systems attuned to the unique behaviours and needs of the people who will use them. And they’ll need to do this in a way that is engaging for staff and enables them to maintain pace with rapid technology change. Increasingly, this will mean blurring the lines between technology and operational planning through formation of multidisciplinary ‘mission’ teams where specialists coalesce around specific operational problem sets and learn from each other. This more collaborative approach will make it easier to identify gaps and design knowledge sharing initiatives with industry to speed up innovation.
A crucial enabler in building AI proficiency at speed will be more diverse ways of developing knowledge and skills. These could include peer-to-peer learning, e-learning, and experimentation in real-world or simulated operating environments, rather than the infrequent, set-piece training courses that are often a hallmark of organisations across the public sector. And for senior leaders, whose roles will require them to determine levels of investment in AI-enabled capabilities, set parameters for their use in deployments, and use AI directly to assist with more strategic considerations, specific AI ‘coaching’ may be required.
Assuring AI systems for operational use
The successful deployment of AI in different operational contexts is tied to an expectation of intelligent assurance. This should include assurance carried out within operational teams and their parent organisations, alongside objective oversight from independent parties, including AI experts working in, or on behalf of, bodies like the Investigatory Powers Commissioner’s Office. Such assurance will make AI solutions more effective through constructive challenge, targeted stress-testing in virtual environments, and continuous refinement of the policies governing how AI is used as the technology evolves and wider contexts change.
Assurance regimes need to be multi-faceted, covering the technical configuration of deployed AI, the quality of datasets, practices deployed by human operators working in tandem with AI, and wider usage of AI solutions in different contexts. The technical assurance of AI systems needs to be woven into the decision-making fabric of organisations, so decision makers are clear about the risks and trade-offs surrounding their use in different operational contexts.
Effective, tiered assurance, combined with transparency and public engagement, can be a significant enabling factor in the development and implementation of AI systems at the operational edge. A good example of this approach is the use of live facial recognition (LFR) to identify criminals and wanted persons in public spaces. Since LFR was first deployed at Notting Hill Carnival in the UK in 2016, its use in policing has faced intense scrutiny, with particular concerns around racial and gender bias, infringement of privacy, and a lack of transparency in third-party algorithms. In more recent years, however, greater focus has been placed on publishing laws, policy and safeguards, along with case studies on how LFR has been used to keep society safe. This has resulted in greater public acceptance of police use of the technology for crime detection; its use has led to 540 arrests, with 406 of those arrested subsequently charged or cautioned. Moreover, the technology has exceeded expectations: false positive rates were initially predicted to be one in 6,000, but in practice have been around one in 40,000.
The defence, national security, and law enforcement sectors can unlock significant benefits from AI to support enhanced decision making when and where it is needed most. Boldly rethinking AI and human decision-making loops, delivering learning regimes at the pace of relevance, and weaving assurance into the heart of AI adoption are all crucial to realising AI ambitions at the operational edge.
