Key Points
- OpenAI's Mac ChatGPT app was found to store conversations in plain text.
- An encryption update was released after the discovery.
- Internal security concerns were raised after a 2023 hack.
- Ex-employee claims wrongful termination for whistleblowing.
OpenAI is no stranger to the news, but recent breaches have placed the AI giant in an uncomfortable spotlight.
This week, OpenAI faced two major security problems that have called into question its ability to protect user data and maintain cyber hygiene.
Encryption Fault in Mac ChatGPT App
Earlier this week, Swift developer Pedro José Pereira Vieito discovered a serious flaw in the Mac ChatGPT app: it was saving user conversations in plain text, with no protection whatsoever.
In other words, any information users shared with ChatGPT could be easily accessed by any other application or malware present on their device.
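To see why this matters, consider that on macOS any process running as the same user can read another app's unprotected files. The Swift sketch below uses a hypothetical path and file name; it simply stands in for wherever an app might keep an unencrypted chat history.

```swift
import Foundation

// Hypothetical path standing in for wherever an app stores its chat history.
let home = FileManager.default.homeDirectoryForCurrentUser
let chatStore = home.appendingPathComponent(
    "Library/Application Support/SomeChatApp/conversations.json")

// With plain-text storage, the full conversation history is readable by any
// process running as the same user -- including malware.
if let contents = try? String(contentsOf: chatStore, encoding: .utf8) {
    print(contents.prefix(200))
}
```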
What made this flaw even more worrying was that the Mac ChatGPT app is available directly from OpenAI’s website rather than Apple’s App Store, bypassing the latter’s strict sandboxing requirements.
Sandboxing walls off an app so that vulnerabilities inside it cannot affect other parts of the operating system. By skipping those requirements, the ChatGPT app was left wide open.
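There is no single obvious way for a process to ask "am I sandboxed?", but a rough Swift heuristic looks like the sketch below; both signals are informal conventions of the App Sandbox, not guarantees.

```swift
import Foundation

// Two informal signals that the current process runs under the App Sandbox:
// 1. macOS redirects a sandboxed app's home directory into its container.
// 2. Sandboxed processes get an APP_SANDBOX_CONTAINER_ID environment variable.
let containerizedHome = NSHomeDirectory().contains("/Library/Containers/")
let hasContainerID =
    ProcessInfo.processInfo.environment["APP_SANDBOX_CONTAINER_ID"] != nil

print("Likely sandboxed:", containerizedHome || hasContainerID)
```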
After The Verge reported Vieito's findings, OpenAI moved quickly and encrypted locally stored chats in a subsequent update. Still, the initial slip-up has already undermined trust in OpenAI's commitment to protecting user data in its consumer products.
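OpenAI has not published the details of its fix, but encrypting local data at rest on Apple platforms commonly uses CryptoKit. The sketch below is a generic illustration, not OpenAI's implementation; a real app would keep the key in the Keychain rather than generating a throwaway one.

```swift
import Foundation
import CryptoKit

// Illustration only: a real app would persist this key in the Keychain
// (or wrap it with the Secure Enclave) instead of generating it per run.
let key = SymmetricKey(size: .bits256)
let conversation = Data("user: hello\nassistant: hi there".utf8)

// AES-GCM both encrypts and authenticates; `combined` packs nonce,
// ciphertext, and tag into one blob that can be written to disk.
let sealed = try! AES.GCM.seal(conversation, using: key)
let blob = sealed.combined!

// Reading the blob back requires the same key; any other process sees
// only opaque bytes.
let box = try! AES.GCM.SealedBox(combined: blob)
let plaintext = try! AES.GCM.open(box, using: key)
print(String(decoding: plaintext, as: UTF8.self))
```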
Ongoing Fallout from a 2023 Hack
The second issue stems from an incident that occurred over a year ago and still has consequences today. In spring 2023, a hacker broke into OpenAI's internal messaging systems and stole sensitive company details.
Leopold Aschenbrenner, who worked as a technical program manager at OpenAI, raised concerns about the breach’s implications, warning that it exposed significant weaknesses that could be exploited by hostile nations.
He claims his concerns were not only brushed aside but also led to his firing, which he insists was retaliation for whistleblowing. Aschenbrenner has publicly disputed OpenAI's account of why he was dismissed.
However, according to one representative from OpenAI, “While we share his commitment to safety around AGI [artificial general intelligence], many claims he has made about our work are inaccurate.”
The episode points to real internal strife at OpenAI, and to the risks employees face when they speak up about security or anything else.