Wednesday, December 18, 2024

How Europe’s next-generation combat jet aims to catch the AI wave

BERLIN — Mainland Europe’s Future Combat Air System, an ambitious effort to field a suite of warplanes and drones in the 2040s, could become the first large-scale defense program with artificial intelligence fully baked in.

A consortium of Germany, France and Spain – with Belgium joining as an observer last year – promises to have the first airworthy demonstrators of the futuristic idea flying by this decade’s end. Artificial intelligence will play a key role in practically all aspects of the system, engineers and experts told Defense News in a series of interviews, influencing everything from the platform’s development to kill-chain decisions and even the very things that pilots see.

The key novelty of FCAS, compared to existing platforms, is its use of so-called loyal wingmen. These drones are to travel alongside the main, manned aircraft and act to enhance the mission – collecting more data, allowing for more firepower or simply overwhelming enemy defenses by sheer numbers.

“Because you don’t want to have to control these out of a cockpit with a stick and throttle,” these drones will require a certain level of automation or autonomy, said Thomas Grohs, Airbus’ head of future capabilities and chief engineer of the FCAS project.

Building this type of intelligence, which entails finding the optimal degree of pilot involvement in different situations, will be crucial to the success of the program as a whole.

Always online

Onur Deniz’s company, NeuralAgent, is tasked with ensuring that data flows where it needs to go, keeping each of the system’s components in constant contact. The Munich-based startup is taking what it calls an “AI agent approach”: Instead of a centralized, cloud-based decision-making algorithm, each of the wingman drones will run smaller, local models to operate autonomously and will exchange information with its peers over communication channels such as optical, narrowband radio or even infrared. While doing so, the drones will continuously construct redundant, ever-changing data links that provide permanent connectivity.
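
To make the concept concrete, the sketch below shows, in simplified Python, how a locally run agent might keep redundant links to its peers alive and fall back to another channel when one is jammed. It is an illustration only, not NeuralAgent’s software; the class, channel names and behavior are invented for this example.

```python
import random

# Hypothetical illustration of the decentralized "AI agent" idea: each drone
# runs a small local agent that tracks the health of several redundant links
# (optical, narrowband radio, infrared) and reroutes when one goes down.

CHANNELS = ["optical", "narrowband_radio", "infrared"]

class LinkAgent:
    def __init__(self, drone_id):
        self.drone_id = drone_id
        self.links = {}  # (peer_id, channel) -> healthy?

    def observe(self, peer_id, channel, healthy):
        """Record the latest health report for one link."""
        self.links[(peer_id, channel)] = healthy

    def route_to(self, peer_id):
        """Pick any channel to the peer that is currently believed healthy."""
        healthy = [c for c in CHANNELS if self.links.get((peer_id, c), True)]
        return random.choice(healthy) if healthy else None

# Example: the agent on drone "w1" loses its radio link to "w2" under jamming
# and falls back to an optical or infrared link instead.
agent = LinkAgent("w1")
agent.observe("w2", "narrowband_radio", healthy=False)
print(agent.route_to("w2"))  # prints "optical" or "infrared"
```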

“This gives you a very fast ability to build your own networks in blackout regions or conflict regions,” Deniz said. In computer simulations of the concept based on real-world scenarios, the approach maintained connectivity more than 95% of the time in adverse electronic warfare environments. By comparison, centrally administered, cloud-based models succeeded less than 0.5% of the time in NeuralAgent’s tests, he said.

By the end of 2025, the software will be ready for integration into existing hardware – legacy systems, at first – said Deniz, describing it as a “plug-in that you can put anywhere you want.”

The company claims that its models are extremely resource-efficient. “You could run them on a Raspberry Pi and they take less than a gigabyte of space,” Deniz said, referring to the small, single-board computers popular with schools and hobbyists, priced at around $50. “All we need is a Linux environment.” Access to the communications stack of the platform is really the only other prerequisite, he explained. “Because of containerization, installation will be as simple as if you were installing a library to code.”
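
As a rough illustration of that “install it like a library” idea, the sketch below shows a plug-in whose only external dependency is a handle to the host platform’s communications stack. The CommsStack interface is hypothetical, invented for this example, and not a real FCAS or NeuralAgent API.

```python
# Illustrative sketch only: the plug-in needs nothing from the host platform
# except a send/receive handle to its communications stack.

class CommsStack:
    """Stand-in for whatever send/receive interface the host platform exposes."""
    def send(self, peer_id: str, payload: bytes) -> None:
        print(f"sending {len(payload)} bytes to {peer_id}")

class NetworkingPlugin:
    def __init__(self, comms: CommsStack):
        self.comms = comms  # the only thing the plug-in needs from the platform

    def broadcast(self, peers, payload: bytes) -> None:
        for peer in peers:
            self.comms.send(peer, payload)

# "Installation" then amounts to handing the plug-in that one handle.
plugin = NetworkingPlugin(CommsStack())
plugin.broadcast(["wingman_1", "wingman_2"], b"position update")
```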

While the current focus in NeuralAgent’s FCAS development is on networking, Deniz anticipates that the next step will be to use the technology for more complex mission-planning. This would include making decisions based on the combat environment and moving assets around to optimize communications and achieve objectives.

What is a pilot, anyway?

The constellation of manned and unmanned aircraft working together will require a radical redefinition of what a pilot’s role is, said Grohs, the Airbus chief engineer. Sitting in the cockpit of Europe’s next fighter will not only be about flying the aircraft but “really about becoming a mission operator,” he said, “elevating yourself above your own asset and operating the mission together with your peers that may be manned or unmanned.”

In fact, flying the aircraft may become a secondary task altogether; the plan is to give even the manned aircraft the option of flying entirely on their own so pilots can focus on mission management, the chief engineer said.

The project’s goal “is definitely to go for autonomy,” he added. “The loyal wingmen are given a high-level task and within set boundaries, the assets can work autonomously.”

Compared to automation – which Grohs defined as a system automatically fulfilling a predefined sequence of events – autonomy includes decision-making.
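
A toy contrast, written for this article rather than drawn from any FCAS software, makes the distinction concrete: automation replays a fixed, predefined sequence, while autonomy picks its own action within boundaries that have been set for it. All names below are invented.

```python
class Aircraft:
    def execute(self, action: str) -> None:
        print(f"executing: {action}")

def automated_sequence(aircraft: Aircraft) -> None:
    """Automation: a fixed, predefined series of steps."""
    for step in ["climb", "hold_heading", "release_decoy"]:
        aircraft.execute(step)

def autonomous_behavior(aircraft: Aircraft, situation: dict, allowed: set) -> None:
    """Autonomy: decide what to do, but never step outside the allowed set."""
    if situation.get("threat_detected") and "evade" in allowed:
        aircraft.execute("evade")
    elif "continue_mission" in allowed:
        aircraft.execute("continue_mission")

drone = Aircraft()
automated_sequence(drone)
autonomous_behavior(drone, {"threat_detected": True}, {"evade", "continue_mission"})
```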

He described the envisioned implementation as a pilot choosing what action they want to have taken while not needing to issue “a specific trigger press.” A capability like this would be quick to implement, he said, pointing to the fact that similar AI pilots have already flown in the U.S. and that “we are testing things out.”

Organizing principle

At least initially, the AI models in FCAS will all be “frozen,” said Grohs, meaning that no machine learning will take place during missions. The algorithms, whether for processing sensor data or for making decisions about striking enemies, will be pre-developed and retrained off-board. At some point, however, machine learning may be integrated into the airborne platforms themselves, he said.
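
In practice, a frozen model is one that only runs inference, with its weights locked until the next off-board retraining cycle. The sketch below illustrates the idea using PyTorch, chosen purely for illustration; the article’s sources did not say which framework FCAS will use, and the network here is a placeholder.

```python
import torch
import torch.nn as nn

# Placeholder network; in a real system the weights would be loaded from a
# checkpoint trained and validated on the ground, e.g. model.load_state_dict(...).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8))

model.eval()                      # disable training-specific behavior
for p in model.parameters():
    p.requires_grad = False       # freeze the weights: no in-flight learning

with torch.no_grad():             # inference only during the mission
    sensor_features = torch.randn(1, 128)   # stand-in for processed sensor data
    scores = model(sensor_features)
    print(scores.shape)           # torch.Size([1, 8])
```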

Even so, AI will touch every element of the observe, orient, decide, and act loop, Grohs said, referring to the “OODA Loop” framework popularized in U.S. military command textbooks. He said the algorithms would be paired with sensors in order to improve the quality of images, for example, but also play a role in developing courses of action.
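
As a rough sketch of where those AI-assisted steps could slot into the cycle, consider the skeleton below; every function and name in it is invented for illustration and does not describe FCAS software.

```python
def observe(sensors):
    return [s.read() for s in sensors]

def orient(raw_data):
    # An AI model might enhance images or fuse sensor tracks at this stage.
    return {"picture": raw_data}

def decide(picture, operator):
    # An AI model could propose courses of action; a human operator
    # (or preset rules) selects among them.
    return operator.choose(["hold", "reposition", "engage"], picture)

def act(action, aircraft):
    aircraft.execute(action)

def ooda_loop(sensors, operator, aircraft):
    while operator.mission_active():
        act(decide(orient(observe(sensors)), operator), aircraft)
```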

The extent to which artificial intelligence will make targeting decisions on its own is still up for discussion, he added.

While details on the appearance and specific capabilities of FCAS are still sparse, the resources being invested are considerable. At Airbus alone, over 1,400 people are currently working on Europe’s next-generation air combat platform, said Christian Doerr, a company spokesperson. The European aerospace giant, along with France’s Dassault Aviation, plays a key coordinating role in bringing the project to life.

Many of the applications of the AI-based algorithms already exist in some form, though in isolation, said Grohs. The process of integrating them, with countless companies and thousands of engineers working on the project, risks becoming unmanageable, said Simon Pfeiffer, associate director of programs at the Munich-based AI company Helsing. Because AI models depend on one another and on the data they ingest, the company, along with partners, is working on a “digital assembly hall” to put it all together.
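
The core of the dependency problem can be shown with a toy graph: because each model depends on other models and on datasets, a change to any one artifact dictates a rebuild order downstream. The artifact names below are invented for illustration and are not drawn from the FCAS program.

```python
from graphlib import TopologicalSorter

# artifact -> what it depends on (all names hypothetical)
dependencies = {
    "sensor_fusion_model": {"radar_dataset", "eo_ir_dataset"},
    "target_recognition_model": {"sensor_fusion_model", "labelled_imagery"},
    "course_of_action_model": {"sensor_fusion_model", "mission_logs"},
}

# A build/retraining order that respects every dependency:
print(list(TopologicalSorter(dependencies).static_order()))
```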

Through a cloud environment for developers, workflows can be improved, data can be exchanged and interoperability can be ensured, all while adhering to the particularly sensitive constraints of working in the defense sector, company representatives told Defense News. The online platform is already being used by over 50 contributors working on the FCAS project and was developed specially for it, according to Helsing.

In that sense, engineers are already using AI to breathe AI into the next-gen weapon, said Grohs. There is even the idea of developing an FCAS-specific ChatGPT equivalent to help engineers with their jobs.

Killer robots?

Meanwhile, nongovernmental organizations and autonomous-weapons experts have warned about delegating too much power to machines. Concerns include the unreliability of machine vision, the often-opaque nature of machines’ decision making and the danger of autonomy applying a tactical mindset to questions with strategic implications.

In interviews with Defense News, some analysts expressed concern that even if a weapon system might not by default be allowed to kill autonomously, changing this may only entail a simple software switch – and the incentives to flip that switch would be strong. Grohs could not rule out that FCAS might be able to switch between such modes, depending on the “rules of engagement that may apply for the respective conflict.”

“I don’t see a major difference between autonomous and human decisions,” Grohs said. “You shouldn’t assume that either an AI decision or a human decision is always 100% correct.”

Linus Höller is a Europe correspondent for Defense News. He covers international security and military developments across the continent. Linus holds a degree in journalism, political science and international studies, and is currently pursuing a master’s in nonproliferation and terrorism studies.