US intelligence agencies’ adoption of generative AI is both cautious and urgent

ARLINGTON, Virginia (AP) — Long before the generative AI boom, a Silicon Valley firm contracted to collect and analyze unclassified data on China’s illicit fentanyl trade made a compelling case for its adoption by U.S. intelligence agencies.

The operation’s results far exceeded those of human-only analysis, finding twice as many companies and 400% more people engaged in illegal or questionable commerce in the deadly opioid.

Excited US intelligence officials went public with the findings – the AI made connections based mainly on internet and dark web data – and shared them with Beijing authorities, urging a crackdown.

One important aspect of the 2019 operation, called Sable Spear, was not previously reported: The firm used generative AI to provide US agencies with evidence summaries for potential criminal cases – three years before the release of OpenAI’s groundbreaking ChatGPT product – saving countless working hours.

“You wouldn’t be able to do that without artificial intelligence,” said Brian Drake, the Defense Intelligence Agency’s AI director at the time and the project’s coordinator.

The contractor, Rhombus Power, would later use generative AI to predict Russia’s all-out invasion of Ukraine with 80% certainty four months in advance, for another US government client. Rhombus says it also alerts government customers, whom it declines to name, to North Korean missile launches and impending Chinese space operations.

US intelligence agencies are scrambling to embrace the AI revolution, believing they will otherwise be overwhelmed by exponential data growth as sensor-generated surveillance technology further blankets the planet.

But officials are well aware that the technology is young and brittle, and that generative AI — predictive models trained on massive data sets to generate on-demand text, images, video and human-like conversation — is anything but tailor-made for a dangerous trade steeped in deception.

Analysts require “sophisticated artificial intelligence models that can access vast amounts of open-source and covert information,” CIA director William Burns recently wrote in Foreign Affairs. But that will not be simple.

The CIA’s founding chief technology officer, Nand Mulchandani, thinks that gen AI models – capable of great insight and creativity but also bias-prone fibbers – are best treated as “crazy, drunk friends”. There are also security and privacy issues: adversaries could steal and poison the models, and the models could contain sensitive personal data that officers are not authorized to see.

That is not stopping the experimentation, however, which is happening mostly in secret.

An exception: Tens of thousands of analysts across the 18 US intelligence agencies now use a CIA-developed gen AI tool called Osiris. It runs on unclassified and publicly or commercially available data — known as open source. It writes annotated summaries, and its chatbot function allows analysts to go deeper with questions.

Mulchandani said Osiris employs AI models from multiple commercial providers that he would not name. He also wouldn’t say whether the CIA is using gen AI for anything on classified networks.

“It’s still early days,” Mulchandani said, “and our analysts need to be able to mark with absolute certainty where the information is coming from.” The CIA is testing all the major gen AI models — without committing to any one — in part because the AIs keep leapfrogging each other in capability, he said.

Mulchandani says gen AI is mostly good as a virtual assistant looking for “the needle in the haystack.” What it will never do, officials insist, is replace human analysts.

Linda Weissgold, who resigned as CIA’s deputy director of analysis last year, thinks war gaming will be a “killer app”.

During her tenure, the agency was already using regular AI — algorithms and natural language processing — for translation and for tasks that included alerting analysts during off hours to potentially important developments. The AI wouldn’t be able to describe what happened – that would be classified – but it could say “this is something you need to come in and look at.”

Gen AI is expected to improve such processes.

Gen AI’s most potent use will be in predictive analysis, according to Rhombus Power CEO Anshu Roy. “This is probably going to be one of the biggest paradigm shifts in the entire field of national security — the ability to predict what your adversaries are likely to do.”

Rhombus’ AI machine draws on over 5,000 data streams in 250 languages collected over 10 years or more, including global news sources, satellite imagery and cyberspace data. It’s all open source. “We can track people, we can track things,” Roy said.

Among the major AI players seeking US intelligence agency business is Microsoft, which announced on May 7 that it was offering OpenAI’s GPT-4 for top-secret networks, although the product still must be accredited for work on classified networks.

A competitor, Primer AI, lists two unnamed intelligence agencies among its customers, which include military services, documents recently posted online for military AI workshops show. It offers AI-powered search in 100 languages to “detect emerging signs of disruptive events” from sources including Twitter, Telegram, Reddit and Discord, and helps identify “key people, organizations, sites.” In a demonstration at an Army conference shortly after the October 7 Hamas attack on Israel, company executives described how their technology separates fact from fiction in the flood of online information from the Middle East.

Primer executives declined to be interviewed.

In the short term, how US intelligence officials use gen AI may matter less than countering how adversaries use it: to pierce US defenses, spread disinformation and try to undermine Washington’s ability to read their intentions and capabilities.

And because Silicon Valley drives this technology, the White House is also concerned that any gen AI models adopted by US agencies could be infiltrated and poisoned, which research shows is a major threat.

Another concern: ensuring the privacy of “US persons” whose data may be embedded in a large language model.

“If you talk to any researcher or developer who’s training a large language model, and ask them if it’s possible to basically delete one single piece of information from an LLM and make it forget that — and have a robust empirical guarantee of that forgetting — that’s not possible,” John Beieler, head of AI in the Office of the Director of National Intelligence, said in an interview.

That’s one reason the intelligence community isn’t in a “move-fast-and-break-things” mode when it comes to gen AI adoption.

“We don’t want to be in a world where we move quickly and deploy one of these things, and then realize two or three years from now that they have some intelligence or effect or some emergent behavior that we didn’t expect,” Beieler said.

It is a matter of concern, for example, if government agencies decide to use AIs to explore bioweapons and cyberweapons technology.

William Hartung, a senior researcher at the Quincy Institute for Responsible Statecraft, says intelligence agencies need to carefully assess AIs for potential misuse in case they lead to unintended consequences such as illegal surveillance or increased civilian casualties in conflicts.

“All of this comes in the context of repeated cases where the military and intelligence sectors have touted ‘miracle weapons’ and revolutionary approaches — from the electronic battlefield in Vietnam to the Star Wars program of the 1980s to the ‘revolution in military affairs’ of the 1990s and 2000s — only to see them fall short,” he said.

Government officials insist they are sensitive to such concerns. Moreover, they say, AI missions will vary greatly depending on the agency involved. There is no one-size-fits-all approach.

Take the National Security Agency. It intercepts communications. Or the National Geospatial-Intelligence Agency (NGA). Its job involves seeing and understanding every inch of the planet. Then there is measurement and signature intel, which multiple agencies use to track threats using physical sensors.

Supercharging such missions with AI is a clear priority.

In December, the NGA issued a request for proposals for a new type of generative AI model. The aim is to use the imagery it collects – from satellites and at ground level – to deliver precise geospatial intelligence from simple voice or text prompts. Gen AI models don’t map roads and railways and “don’t understand the basics of geography,” NGA innovation director Mark Munsell said in an interview.

Munsell said at an April conference in Arlington, Virginia, that the US government has modeled and labeled only about 3% of the planet.

Gen AI applications also make a lot of sense in cyber conflict, where attackers and defenders are constantly engaged and automation is already in place.

But a lot of critical intelligence work isn’t related to data science, says Zachery Tyson Brown, a former defense intelligence officer. He believes that intel agencies will invite disaster if they adopt gen AI too quickly or too fully. The models don’t reason. They merely predict. And their designers cannot fully explain how they work.

Not the best tool, then, for matching wits with other masters of deception.

“Intelligence analysis tends to be more like the old trope about assembling a jigsaw puzzle, only someone else is always trying to steal your pieces while putting pieces of a completely different puzzle into the pile you’re working with,” Brown wrote recently in an internal CIA journal. Analysts work with “incomplete, ambiguous, often contradictory snippets of unreliable partial information.”

They place considerable trust in instinct, peers and institutional memories.

“I don’t see AI replacing analysts anytime soon,” said Weissgold, the CIA’s former deputy director of analysis.

Sometimes quick life-and-death decisions must be made based on incomplete data, and current gen AI models are still too opaque.

“I don’t think it’s ever going to be acceptable to some president,” Weissgold said, “for the intelligence community to come in and say, ‘I don’t know, that black box told me.'”
