In a rare public split, Anthropic and OpenAI have taken opposing sides on a proposed Illinois law that would significantly limit AI companies' liability for mass deaths or financial disasters. The bill, which has sparked heated debate in the tech and legal communities, is backed by OpenAI and strongly opposed by Anthropic.
The legislation, formally titled the AI Liability Shield Act, would exempt AI developers from lawsuits in cases where their systems cause catastrophic harm unless plaintiffs can prove “willful misconduct.” Supporters argue that the bill is necessary to foster innovation by protecting AI firms from crippling lawsuits. However, critics, including Anthropic, warn that it could allow companies to evade accountability for preventable harms.
Sources close to the matter indicate that OpenAI’s backing of the bill aligns with its broader advocacy for regulatory frameworks that balance innovation with oversight. “OpenAI believes that this bill strikes the right balance between encouraging technological advancement and ensuring accountability,” said one industry analyst familiar with the company’s position.
Anthropic, by contrast, has taken a more cautious stance. In a statement relayed by company insiders, Anthropic stressed the need for “robust accountability mechanisms” to ensure that AI systems are developed and deployed responsibly. “AI has the potential to cause significant harm, and companies must be held accountable when that harm occurs,” the statement read.
The dispute comes as lawmakers nationwide grapple with how to regulate rapidly evolving AI technologies. Analysts suggest the outcome of this legislative battle could set a precedent for AI-related laws in other states. “Illinois is becoming a testing ground for AI regulation,” said one legal expert. “This bill could shape the national conversation on how to balance innovation with public safety.”