AI Experts Warn of Unregulated AI Risks

Current and former employees of leading AI companies urge stronger oversight to guard against serious risks.


An open letter from current and former employees of AI companies, including OpenAI and Google DeepMind, highlights the dangers posed by unregulated AI technology. The group argues that financial incentives hinder effective oversight and that bespoke corporate governance structures are insufficient to address these risks. They warn that unchecked AI development could spread misinformation, deepen existing inequalities, and even lead to human extinction.

The letter, signed by 11 current and former OpenAI employees and two from Google DeepMind, stresses that AI companies face only weak obligations to share critical information about their systems with governments. This lack of transparency, the signatories argue, cannot be remedied by relying on the firms to voluntarily disclose their systems' capabilities and limitations.

Underscoring the problem, researchers have documented cases in which AI-generated images spread voting-related disinformation despite company policies banning such content. The signatories call on AI companies to let employees raise concerns about risks without being silenced by confidentiality agreements that bar criticism.

In a related development, OpenAI, led by Sam Altman, announced that it had disrupted five covert influence operations that sought to use its AI models for deceptive activity online. The disclosure underscores the urgent need for robust oversight and transparent practices across the AI industry.

 
