Microsoft says US rivals are starting to use generative AI in offensive cyber operations

BOSTON (AP) – Microsoft said Wednesday that it has detected and disrupted instances of US adversaries – primarily Iran and North Korea, and to a lesser extent Russia and China – using or attempting to exploit generative artificial intelligence developed by the company and its business partner as they mount or research offensive cyber operations.

The techniques observed by Microsoft, in collaboration with its partner OpenAI, represent an emerging threat and were “not particularly novel or unique,” the Redmond, Washington, company said in a blog post.

But the blog offers insight into how US geopolitical rivals are using large language models to expand their ability to breach networks more effectively and conduct influence operations.

Microsoft said the “attacks” it observed all involved large language models the partners own, and said it was important to disclose them publicly even though they were “incremental, early-stage moves.”

Cybersecurity firms have long used machine learning for defense, primarily to detect anomalous behavior in networks. But criminals and offensive hackers use it as well, and the introduction of large language models led by OpenAI’s ChatGPT has upped that cat-and-mouse game.

Microsoft has invested billions of dollars in OpenAI, and Wednesday’s announcement coincided with the release of a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That is a threat to democracy in a year when more than 50 countries will hold elections, magnifying disinformation that is already underway.

Here are some examples Microsoft provided. In each case, it said the generative AI accounts and assets of the named groups were disabled:

— A North Korean cyber espionage group known as Kimsuky used the models to research foreign think tanks studying the country, and to generate material likely to be used in spear phishing hacking campaigns.

— Iran’s Revolutionary Guard used large language models to assist in social engineering, in troubleshooting software errors, and even in studying how intruders might evade detection in a compromised network. That included generating phishing emails, “including one pretending to be from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.” The AI helps accelerate and boost email production.

— Russia’s GRU military intelligence unit known as Fancy Bear used the models to research satellite and radar technologies that could be involved in the war in Ukraine.

— A Chinese cyber-espionage group called Aquatic Panda, which targets a wide range of industries, higher education and governments from France to Malaysia, has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”

— The Chinese group Maverick Panda, which has targeted US defense contractors among other sectors for more than a decade, had interactions with large language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs.”

In a separate blog post published Wednesday, OpenAI said the techniques uncovered were consistent with previous assessments that found its current GPT-4 model chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.”

Last April, the director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.”

Easterly said at the time that the US needs to ensure AI is built with security in mind.

Critics of the public release of ChatGPT in November 2022 – and of subsequent releases by rivals including Google and Meta – argue that it was irresponsibly hasty, given that security was largely an afterthought in their development.

“Of course bad actors are using large language models – that decision was made when Pandora’s box was opened,” said Amit Yoran, CEO of the cybersecurity firm Tenable.

Some cybersecurity professionals complain that Microsoft is creating and selling tools to address vulnerabilities in large language models when it could more responsibly focus on making those models more secure in the first place.

“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” asked Gary McGraw, a computer security veteran and co-founder of the Berryville Institute of Machine Learning.

Edward Amoroso, an NYU professor and former AT&T chief security officer, said that while the use of AI and large language models may not pose an immediate threat, they will become “one of the most powerful weapons in every nation-state’s military arsenal.”
