It's been a week of bad cyber security revelations for OpenAI, after news emerged that the startup failed to report a 2023 breach of its systems to anybody outside the organization, and that its ChatGPT app for macOS was coded without any regard for user privacy.

According to an exclusive report from the New York Times, citing a pair of anonymous OpenAI insiders, someone managed to breach a private forum used by OpenAI employees to discuss projects early last year.

OpenAI apparently chose not to make the news public or tell anyone in law enforcement about the digital break-in, because none of the Microsoft-backed firm's actual AI builds were compromised. Execs who disclosed the breach to employees didn't consider it much of a threat, because the miscreant behind it was believed to be a private individual unaffiliated with any foreign government.

But keeping a breach secret isn't a good look, especially considering several high-ranking employees – including chief scientist Ilya Sutskever – recently left OpenAI over what many believe to be concerns about a lack of safety culture.

The ChatGPT maker committed to setting up an AI safety committee after the departures of Sutskever and Jan Leike, the head of OpenAI's previous safety team devoted to tackling the long-term threats of AI.

Whether news of a secret, heretofore unreported breach – one OpenAI leadership reportedly believed it could judge better than federal regulators – will help repair the company's tarnished safety reputation is anyone's guess. The other OpenAI security news this week probably won't help, though.

According to software developer Pedro José Pereira Vieito, the macOS version of ChatGPT was programmed to side-step the Mac's inbuilt sandboxing that keeps an app's data out of reach of other programs, and instead stored all user conversations in plain text in an unprotected directory.

OpenAI has reportedly fixed the issue but didn't respond to our questions.
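To make the reported problem concrete, here is a minimal sketch, assuming conversations were written as ordinary plain-text files under an unprotected folder in the user's home directory. The directory name and file extension below are illustrative assumptions, not the app's actual layout; the point is that any other process running as the same user could read such files with no decryption step.

import Foundation

// Hypothetical path standing in for an unprotected conversation store
// (assumption for illustration only, not ChatGPT's real directory).
let home = FileManager.default.homeDirectoryForCurrentUser
let chatDir = home.appendingPathComponent("Library/Application Support/ExampleChatApp")

// Any non-sandboxed process running as this user can enumerate and read
// the files directly, because nothing encrypts or access-controls them.
if let files = try? FileManager.default.contentsOfDirectory(at: chatDir, includingPropertiesForKeys: nil) {
    for file in files where file.pathExtension == "json" {
        if let text = try? String(contentsOf: file, encoding: .utf8) {
            print("Readable conversation data in \(file.lastPathComponent): \(text.prefix(80))")
        }
    }
}

Opting the app into the macOS App Sandbox, or encrypting conversation data before writing it to disk, would close off this kind of casual snooping, which is broadly what the reported fix is understood to have done.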