The online forum OpenAI employees use for confidential internal communications was breached last year, anonymous sources have told The New York Times. Hackers lifted details about the design of the company’s AI technologies from forum posts, but they did not infiltrate the systems where OpenAI actually houses and builds its AI.

OpenAI executives announced the incident to the whole company during an all-hands meeting in April 2023, and also informed the board of directors. It was not, however, disclosed to the public because no information about customers or partners had been stolen.

Executives did not inform law enforcement, according to the sources, because they did not believe the hacker was linked to a foreign government, and thus the incident did not present a threat to national security.

An OpenAI spokesperson told TechRepublic in an email: “As we shared with our Board and employees last year, we identified and fixed the underlying issue and continue to invest in security.”

How did some OpenAI employees react to this hack?

News of the breach concerned other OpenAI employees, the NYT reported; they saw it as evidence of a vulnerability in the company that state-sponsored hackers could exploit in the future. If OpenAI’s cutting-edge technology fell into the wrong hands, it could be used for nefarious purposes that endanger national security.

SEE: OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds

Furthermore, the executives’ handling of the incident led some employees to question whether OpenAI was doing enough to protect its proprietary technology from foreign adversaries. Leopold Aschenbrenner, a former technical manager at the company, said on a podcast with Dwarkesh Patel that he had been fired after raising these concerns with the board of directors.

OpenAI denied this in a statement to The New York Times and said it disagreed with Aschenbrenner’s “characterizations of our security.”

More OpenAI security news, including the ChatGPT macOS app flaw

The forum breach is not the only recent indication that security is not a top priority at OpenAI. Last week, data engineer Pedro José Pereira Vieito revealed that the new ChatGPT macOS app was storing chat data in plain text, meaning bad actors could easily access that information if they got hold of the Mac. After The Verge made OpenAI aware of the vulnerability, the company released an update that encrypts the chats.

An OpenAI spokesperson told TechRepublic in an email: “We are aware of this issue and have shipped a new version of the application which encrypts these conversations. We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”
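For context, fixing this kind of flaw generally means sealing conversation data with a key before it ever touches disk. The short Swift sketch below, using Apple’s CryptoKit, illustrates that general pattern of at-rest encryption; it is not OpenAI’s actual implementation, and the function names, file name and in-memory key are illustrative assumptions (a production app would keep the key in the macOS Keychain).

```swift
import Foundation
import CryptoKit

// Illustrative sketch only: seal a chat transcript with AES-GCM before writing
// it to disk, instead of saving the raw text in plain form. Function names are
// hypothetical; a real app would store the key in the macOS Keychain.
func writeEncryptedChat(_ transcript: String, to url: URL, using key: SymmetricKey) throws {
    let sealed = try AES.GCM.seal(Data(transcript.utf8), using: key)
    // `combined` bundles the nonce, ciphertext and authentication tag into one blob.
    try sealed.combined!.write(to: url, options: .atomic)
}

func readEncryptedChat(from url: URL, using key: SymmetricKey) throws -> String {
    let box = try AES.GCM.SealedBox(combined: Data(contentsOf: url))
    return String(decoding: try AES.GCM.open(box, using: key), as: UTF8.self)
}

// Usage: generate a 256-bit key once (and persist it securely), then round-trip a chat.
let key = SymmetricKey(size: .bits256)
let file = URL(fileURLWithPath: "chat-session.bin")
try writeEncryptedChat("User: hello\nAssistant: hi there", to: file, using: key)
print(try readEncryptedChat(from: file, using: key))
```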

SEE: Millions of Apple Applications Were Vulnerable to CocoaPods Supply Chain Attack

In May 2024, OpenAI released a statement saying it had disrupted five covert influence operations originating in Russia, China, Iran and Israel that sought to use its models for “deceptive activity.” Activities that were detected and blocked included generating comments and articles, fabricating names and bios for social media accounts, and translating texts.

That same month, the company announced it had formed a Safety and Security Committee to develop the processes and safeguards it will use while developing its frontier models.

Is the OpenAI forum hack a sign of more AI-related security incidents to come?

Dr. Ilia Kolochenko, Partner and Cybersecurity Practice Lead at Platt Law LLP, said he believes this OpenAI forum security incident is likely to be one of many. He told TechRepublic in an email: “The global AI race has become a matter of national security for many countries, therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented startups to tech giants like Google or OpenAI.”

Hackers target valuable AI intellectual property, such as large language models, sources of training data, technical research and commercial information, Dr. Kolochenko added. They may also plant backdoors that let them control or disrupt operations, similar to the recent attacks on critical national infrastructure in Western countries.

He told TechRepublic: “All corporate users of GenAI vendors shall be particularly careful and prudent when they share, or give access to, their proprietary data for LLM training or fine-tuning, as their data — spanning from attorney-client privileged information and trade secrets of the leading industrial or pharmaceutical companies to classified military information — is also in crosshair of AI-hungry cybercriminals that are poised to intensify their attacks.”

Can security breach risks be mitigated when developing AI?

There is no simple way to eliminate the risk of security breaches by foreign adversaries when developing new AI technologies. OpenAI cannot discriminate against workers based on their nationality, and it does not want to limit its talent pool by hiring only in certain regions.

It is also difficult to prevent AI systems from being used for nefarious purposes before those purposes come to light. A study from Anthropic found that LLMs were only marginally more useful than standard internet access to bad actors seeking to acquire or design biological weapons, and a study from OpenAI drew a similar conclusion.

On the other hand, some experts believe that, while not posing a threat today, AI algorithms could become dangerous as they grow more advanced. In November 2023, representatives from 28 countries signed the Bletchley Declaration, which called for global cooperation to address the challenges posed by AI. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” it read.
