In an open letter, former OpenAI employees describe whistleblower safeguards as "inefficient."
In an open letter, a group of former employees of OpenAI, the startup behind ChatGPT, sharply criticized the governance and oversight of AI companies. The letter, signed by 13 former workers, six of whom chose to remain anonymous, warns of the dangers posed by artificial intelligence (AI) systems, including manipulation, misinformation, and the loss of control over autonomous AI systems.

Given the current absence of effective government oversight of their operations, the letter argues, these companies should be transparent with the public and open to criticism from both current and former staff. It states that AI companies now have "strong financial incentives" to neglect safety and that existing "corporate governance" mechanisms are inadequate. It adds that AI companies have not yet disclosed to the public the strengths and weaknesses of these systems, or "the risk levels of different kinds of harm" they are capable of causing.