Not so long ago, tech companies were reluctant to work with the military.
In 2018, Google’s involvement in “Project Maven”, a controversial tie-up with the Pentagon that used artificial intelligence to refine the precision of military strike drones, prompted at least a dozen resignations, with thousands of employees signing a petition demanding the tech giant shut the deal down.
While Google did eventually withdraw from the project, the controversy sparked a wider conversation around Silicon Valley’s relationship with the military-industrial complex.
By 2019, on the other side of the Atlantic, London-based Faculty AI — which provides business and government clients with software and technical expertise — had started wrestling with the ethics of defence work.
“The military helps provide us with security — keeps us safe — but if industry doesn’t lean in with the latest technologies, the military won’t benefit from them,” says Andrew van der Lem, head of Faculty’s defence team. “We believed it was right to contribute to security where we could.”
Van der Lem, a former senior civil servant in the UK’s Department for Business, was recruited by Faculty in 2021 and tasked with helping the company “dip a toe in the water” of defence. In the four years since, the defence unit has ballooned from zero to around 70 employees, more than a fifth of Faculty’s total headcount.
Faculty’s steady pivot towards defence work came at just the right time. Investors’ attitudes towards defence tech have softened in recent years, with Russia’s invasion of Ukraine and mounting tensions between the West and China underlining the need for ever more innovative military applications.
Hoping to avoid any of the public fallout Google and other tech companies have faced, Faculty has sought to address employee anxieties over its defence work head on. “Our principle on any project has always been that people can opt out of defence work if they want to,” van der Lem tells Sifted.
Asked how many of Faculty’s 400-strong headcount might avoid defence projects, he offers a rough estimate.
“It’s in the dozens,” he says. “But we’ve never had people walk out or threaten to quit over it. They’re happy to work at a company that does defence work, but maybe don’t want to be involved in the work itself.”
On the edge
As Faculty was starting out in defence work, an early partnership with the UK’s Defence Artificial Intelligence Centre (DAIC) proved fruitful.
Across six projects, Faculty tested how effectively AI could be used to increase efficiency, including one that optimised how regularly satellites needed to transmit information back to Earth, significantly reducing the bandwidth used.

“A lot of challenges working in defence are about doing things remotely — working out how you can automate things from afar, quite often with on-device ‘edge AI’ tools,” van der Lem says.
Since then, the company has signed multiple defence-focused deals with governments and other businesses around the world. Sifted recently revealed that Faculty has entered a partnership with French AI darling Mistral, with the two companies aiming to arrange introductions to potential clients in their respective markets.
Van der Lem tells Sifted that large language models (LLMs) like those built by Mistral are an “important part of the arsenal of solutions” available to militaries. As well as automating more tedious back-office work, he says LLMs could help those in high-pressure scenarios better assess their options.
“We can ask questions like: ‘How might local farmers react to a tank moving through their fields?’ Or ‘What will happen if we blockade that bridge?’” he says.
Sovereignty in defence
Earlier this year, Chinese startup DeepSeek sent shockwaves through the public markets when it released a high-performing chatbot apparently built at a fraction of the cost of its Western rivals.
Some predicted Mistral, Europe’s leading LLM-maker, would flounder. But in reality, growing anxieties about China’s technological capabilities, as well as alarm over US president Donald Trump’s indifference to European security, may have worked in Mistral’s favour.
“It is really important for the UK and Europe to have sovereignty in defence,” van der Lem says. “You need to ensure your supply chain is protected.”
Asked about a recent story in the Guardian, which quoted “concerns” over Faculty’s military work, citing its ties to the British government, van der Lem is unfazed. “Lots of people got in touch and treated it as an advert,” he says. “‘Oh, a UK company is helping protect the UK? Well done.’”
Van der Lem speaks confidently, but he knows one question persists: can you ever really put the ethical dilemmas to rest, or do they still creep up on you, even now?
“There are loads of dangers you need to defend against. If you ask an LLM how to build a dirty bomb, it shouldn’t return a useful answer. Equally, you shouldn’t be able to jailbreak it by asking it to tell you a fairy story about building a dirty bomb.
“New tech is always coming up. You have to think about who you want to work with. For us, that’s of course only countries allied with the UK […] Mathematical models are inherently biased, so it really matters if there are any false positives,” he says.
“You think about it all the time.”
Read the original article: https://sifted.eu/articles/faculty-ai-defence-mistral-interview/