Senators Push For Pre-Deployment Review Of OpenAI Models

A group of Democratic and Independent senators has urged OpenAI to grant the government pre-deployment access to future ChatGPT models. In a July 22 letter to OpenAI CEO Sam Altman, the senators—Brian Schatz (D-HI), Ben Ray Luján (D-NM), Peter Welch (D-VT), Mark Warner (D-VA), and Angus King (I-ME)—expressed concerns about model safety and the company's employee practices. They emphasized the importance of transparency and accountability in AI development and specifically asked whether U.S. government agencies could review and test new models before their public release.

The senators’ request focuses on potential safety risks and the broader implications of AI technology. They posed detailed questions about OpenAI’s safety protocols, employee whistleblower protections, and post-deployment monitoring. The letter highlighted OpenAI’s collaboration with national security and defense agencies to develop cybersecurity tools, suggesting that this partnership necessitates greater oversight.

Sen. Warner, known for his previous efforts to regulate Big Tech, particularly regarding “Russia-linked” content, supports this move as a continuation of his advocacy for tech accountability. Sen. Luján, who has pushed for legislation to revoke liability protections for tech companies spreading health misinformation, also underscored the importance of this oversight.

Critics argue that such pre-deployment access could lead to increased government control and potential censorship. They fear it could set a precedent for government involvement in tech development, stifling innovation and limiting free speech. The senators, however, maintain that their primary concern is ensuring the safe and ethical deployment of AI technologies that could significantly impact society.