
AI and the new defence advantage: How militaries can win the race for decision superiority

Photo: Katarzyna Głowacka/Defence24

Artificial intelligence is moving from experimentation to core military capability. Across air, land, sea, cyber, and space, it is beginning to change how forces sense, decide, act, and sustain operations.

The question facing defence leaders worldwide is no longer whether AI will matter, but how quickly their institutions can adopt it at scale while maintaining ethical and operational control. Nations that combine AI-driven tempo, trusted autonomy, and resilient digital infrastructure will hold a decisive edge in future conflicts.


Dr Aleksander Olech: As AI becomes increasingly embedded in defence ecosystems, how do you see the balance evolving between human decision-makers and automated systems in time-critical environments?

Kieran Gilmurray: AI will accelerate decision cycles, but it will not replace human authority. Across recent doctrine from the US Air Force and European defence bodies, the emerging operational model treats AI as a decision advantage engine that handles sensing, fusion, and rapid course of action generation. At the same time, humans retain responsibility for judgement, escalation, legality, and accountability.

Therefore, we will see more human-on-the-loop arrangements for high-tempo operations, supported by technologies such as digital twins, automated target recognition, and real-time mission analysis. These tools will allow commanders to operate at a superior tempo, with greater battlespace awareness and better decision quality under uncertainty, without losing meaningful human control.

Done successfully, this transition strengthens control rather than weakens it.

The danger is not the use of AI. It is the absence of people who can work with it as fast as those who may use it against you.

NATO and partner nations show large differences in digital readiness. What are the main obstacles stopping defence institutions from scaling AI effectively?

Most obstacles are structural rather than algorithmic.

First, defence data is often fragmented, poorly governed, inconsistently classified, and locked inside legacy systems. Modern AI cannot thrive without access to clean, discoverable, interoperable data. This is why the USMC and EU Readiness 2030 strategies both make "data as a foundation" the starting point of any digital transformation.

Second, acquisition and accreditation processes were built for hardware, not continuously updated software. AI does not fit comfortably into multi-year, single-delivery programmes. It requires modular architectures, continuous updates, and rapid validation cycles. Slow certification cycles, siloed requirements, and risk aversion make it difficult to field AI tools at the desired operational speed.

Third, there is a global shortage of AI-fluent military leaders, product teams, programme managers, and technical specialists. The nations progressing fastest, such as the US, the UK, and some Nordic countries, are investing in new AI career pathways, software factories, and joint experimentation units. Many ministries still outsource critical digital capabilities, limiting their ability to assess, test, or safely adopt advanced technologies.

Finally, interoperability remains a barrier. Without shared standards, model validation methods, and compatible architectures, allies risk building isolated AI systems that fail to integrate during combined operations.

Unless these foundations improve, AI will remain confined to pilot programmes rather than becoming a scaled, interoperable capability across the alliance.

Cybersecurity threats are increasing in scale and sophistication. Which AI capabilities show the greatest promise for strengthening national resilience, and where do the key vulnerabilities lie?

Cyber defence is one of the clearest areas where AI already delivers value. Its greatest promise lies in AI-driven detection, prediction, and automated response.

  • AI-enabled anomaly detection can identify subtle behavioural shifts that traditional signature-based systems may overlook.
  • Automated triage and containment tools can neutralise threats at machine speed.
  • AI-powered threat intelligence platforms can fuse data from open sources, classified sensors, and the dark web to anticipate campaigns before they mature.

However, the vulnerabilities are equally significant and are increasing rapidly.

  • Adversarial attacks against AI models, including data poisoning, model evasion, and output manipulation, are a growing concern. Defence systems must incorporate robust testing and "runtime assurance" practices.
  • Supply chain exposure remains a strategic weakness. Many countries depend on foreign chips, cloud services, or opaque open source AI components that may contain hidden risks.
  • Over-reliance on automation creates its own risk if operators lose the ability to challenge or validate AI outputs. Operators must be trained to question alerts, interpret confidence levels, and recognise when an AI system is being deceived.

The opportunity is to use AI not only to strengthen cyber defence but to build national cyber resilience architectures that are adaptive, self-monitoring, and sovereign where it matters most.

Defence procurement is often criticised for being too slow. What reforms could help armed forces adopt advanced AI and automation without compromising security or accountability?

The future of defence acquisition will look more like a digital ecosystem than a linear supply chain. To adopt AI at operational speed, armed forces need procurement approaches that support modularity, continuous delivery, and joint experimentation.

Contract structures should require transparency, testability, and human oversight from the outset. Shared development environments and digital test ranges will allow militaries and industry to iterate together, reducing technical and operational risk.

Emerging frameworks for trustworthy AI can provide the guardrails, ensuring speed does not come at the expense of safety or accountability.

This is not about replacing rigour with agility. It is about modernising rigour to suit a world where software evolves faster than traditional acquisition cycles can handle.

Three reforms could have an immediate impact.

1. Modular, iterative procurement models. Defence must shift from multi-year, single-delivery programmes to agile, service-oriented acquisition. This includes smaller increments, continuous updates, shared APIs, and open architectures. AI systems evolve quickly, and procurement must evolve with them.

2. Built-in testing, trustworthiness, and lifecycle governance. Frameworks emerging in Europe and the US highlight the need for transparent model documentation, audit trails, adversarial testing, and robust human-machine oversight. Embedding these requirements in contracts enables faster adoption without weakening control.

3. Collaborative development with industry. Shared experimentation units, digital test ranges, and joint software factories allow militaries and industry to co-develop solutions in weeks rather than years. This reduces risk, builds trust, and ensures that technology is validated in mission-representative environments before scaling.

Procurement does not need to choose between speed and safety. With modern governance, nations can achieve both.

As information warfare intensifies, how can AI help governments detect and counter disinformation while staying within ethical boundaries?

AI will play a central role in countering disinformation campaigns that aim to undermine national cohesion and democratic institutions. It gives governments a way to understand and counter hostile narratives at scale without resorting to censorship. The critical point is how these tools are used: democracies must avoid automated censorship and instead focus on transparency, prebunking, factual clarification, and resilience building. With the right approach, AI strengthens democratic integrity rather than threatening it.

  • Detection: AI models can identify coordinated networks, bot clusters, deepfakes, and narrative spikes across multiple platforms in near real time.
  • Analysis: AI can map how false narratives spread, which communities they target, and which psychological levers they exploit. This improves attribution and helps prioritise responses.
  • Response: Instead of suppressing content, governments can use AI to support prebunking, rapid factual clarification, and targeted public communication delivered through trusted channels.

Responsible application is essential. Frameworks for trustworthy AI emphasise privacy protection, transparency, proportionality, and human oversight. Governments should keep humans in control of decisions around content moderation and focus AI on situational awareness, not speech restriction.

Looking ahead to the next decade, which developments in autonomous systems and AI-driven decision-making will most transform defence strategy?

Three developments could be decisive.

1. Multi-domain autonomous systems. Swarming drones, autonomous underwater vehicles, robotic logistics, and distributed sensor grids will redefine persistence, reach, and mass. Forces that can coordinate many autonomous assets through resilient command systems will gain a strategic advantage.

2. AI-enhanced command and decision support. Digital twins, real-time campaign simulation, and advanced ISR fusion will allow commanders to test strategies before acting, understand second-order effects, and compress their decision cycles. This is the essence of the "decision advantage" that both US and European doctrines now prioritise.

3. Data-driven military planning and industrial mobilisation. AI will shape not only operations but also procurement, readiness, supply chains, and workforce structures. Nations with sovereign control over their data, high-trust AI tooling, and flexible industrial bases will be able to adapt faster to crises or technological shocks.

Conclusion

The strategic landscape is shifting. AI is no longer a technical experiment but a foundation of future deterrence, crisis response, and warfare. The nations that succeed will not be those with the most algorithms, but those with the clearest vision of how human judgement, machine speed, and resilient infrastructure can work together.

The next decade will favour defence organisations that treat AI as a strategic capability, not a bolt-on. There is a tremendous opportunity to build forces that are more resilient, more predictive, and more capable in contested environments.

Yet while the opportunity is significant, the window to act is narrowing.


Kieran Gilmurray is widely regarded for his expertise in artificial intelligence and digital transformation, and is one of the most in-demand technology speakers in 2026. He is represented by the Champions Speakers Agency and is available for event bookings.
