A bipartisan group of senators has introduced the Artificial Intelligence Safety and Innovation Act, the most comprehensive AI regulation proposal to reach the Senate floor. The bill aims to balance technological innovation with public safety concerns.
Key provisions include mandatory safety testing for AI systems above a certain computational threshold, transparency requirements for AI-generated content, and the creation of a federal AI safety board modeled on the National Transportation Safety Board (NTSB). Companies would be required to conduct and publish risk assessments before deploying large-scale AI systems.
The bill has attracted an unusual coalition of supporters. Technology companies including Microsoft and Google have endorsed the framework, preferring federal standards to a patchwork of state regulations. Civil rights organizations support the anti-discrimination provisions.
Opposition comes from smaller AI startups, which argue that the compliance costs would entrench large incumbents, and from libertarian-leaning lawmakers who view any AI regulation as premature. To address the startups' concerns, the bill's sponsors have included provisions for regulatory sandboxes.
Senate leadership has scheduled committee hearings for May, with a floor vote possible before the August recess. The House is developing companion legislation with similar provisions.