The UK government has proposed new legislation that could jail tech executives who fail to remove AI-generated non-consensual imagery from their platforms. The move aims to combat the rising threat of deepfake pornography and other harmful synthetic media.
Under the draft law, senior managers at social media companies and other online platforms would face criminal liability if they do not take ‘reasonable steps’ to eliminate such content. Penalties could include unlimited fines and up to two years in prison.
Officials say the measure is necessary to protect individuals, particularly women and public figures, from having their likenesses manipulated without consent. ‘The spread of AI-generated intimate imagery is a growing problem that demands urgent action,’ a government spokesperson told reporters.
Analysts note the measure would rank among the world’s strictest regulations holding platforms accountable for AI misuse. The proposal follows similar moves under the EU’s Digital Services Act and in US state-level bills.
Tech industry groups have expressed concerns about implementation challenges. ‘While we share the goal of combating harmful content, vague standards could lead to over-removal of legitimate material,’ warned a representative from a major tech trade association.
The legislation is expected to undergo parliamentary debate later this year. If the bill passes, enforcement would likely begin in 2025 after a grace period allowing platforms to put compliance measures in place.