Anthropic has begun requiring government ID and selfie verification for users of its Claude AI chatbot, according to multiple user reports and company communications reviewed by SourceRated. The move—unprecedented among major AI providers—comes weeks after the company gained users fleeing OpenAI’s ChatGPT over surveillance concerns.
The verification system, which Anthropic has not publicly announced, appears targeted at select regions and high-volume users. Screenshots show prompts requesting passport scans or driver’s licenses alongside live facial recognition. Analysts suggest this may preempt upcoming EU AI Act compliance or financial sector partnerships.
"This contradicts Anthropic's 'constitutional AI' ethos," said a researcher at Stanford's Center for Internet and Society, speaking anonymously due to ongoing collaborations. "Users migrated here precisely to avoid biometric data collection."
Anthropic's privacy policy, updated last month, now states it may collect "identity verification data" to prevent "harmful uses." Company sources cite growing pressure from investors to monetize Claude through enterprise APIs, which often require Know Your Customer (KYC) protocols.
The policy shift risks alienating the privacy-conscious users who embraced Claude as an alternative to OpenAI. However, analysts note that all major AI firms will likely adopt similar measures as regulators demand accountability for AI-generated content.