The German Federal Court of Justice recently clarified online platforms' liability for user-generated content, ruling that platforms are not directly liable without knowledge of the content but must act swiftly once notified. The decision has significant implications for digital service providers across Europe.
Australia’s eSafety Commissioner ordered Telegram to pay AUD 1 million for ignoring transparency obligations. Officials had requested details of the platform’s measures against terrorist and child sexual exploitation material, but Telegram delayed its response by months, triggering enforcement under the Online Safety Act.
On 28 February 2025, Japan’s Cabinet announced plans to introduce a Bill to promote the research, development, and practical application of artificial intelligence technologies. The legislation focuses on transparency, protection of rights, and international cooperation.
UK Ministry of Justice Unveils Criminal Penalties for Deepfake Creators and Abusers
The UK Ministry of Justice plans new criminal offences targeting explicit deepfakes and non-consensual intimate images. Offenders could face up to two years in prison as part of broader efforts to combat online harassment and protect vulnerable individuals.
UK Government to Introduce Criminal Offences for Explicit Deepfakes and Image Abuse
The UK Ministry of Justice (MoJ) has announced its intention to introduce robust legal measures targeting the creation and distribution of sexually explicit deepfake content and the non-consensual taking of intimate images.
These proposals will be part of the forthcoming Crime and Policing Bill, marking a crucial step in tackling intimate image abuse and protecting vulnerable individuals from online harassment.
The proposed legislation introduces criminal offences that carry a maximum penalty of two years’ imprisonment. By focusing on explicit deepfakes (digitally manipulated media that create realistic but fabricated intimate images), the government seeks to address the harm caused by this technology.
Alongside this, the non-consensual taking and sharing of intimate images will also be criminalised under the new law.
These offences build upon existing legal provisions, filling gaps identified by the Law Commission’s recommendations. The approach reflects the government’s commitment to addressing emerging threats in the digital space, particularly those disproportionately affecting women and girls.
Tackling Deepfakes and Online Harassment
Deepfake technology, while offering creative possibilities, has been increasingly weaponised to harass and exploit individuals. Explicit deepfakes often target women, celebrities, and public figures, distorting their likenesses to create fake pornographic content.
This type of abuse not only harms reputations but also leaves victims vulnerable to emotional distress and potential blackmail.
The MoJ’s initiative represents a direct response to this escalating issue. Criminalising the creation and sharing of sexually explicit deepfakes sends a clear message that such conduct will not be tolerated and that perpetrators will face legal consequences.
In addition to introducing these new offences, the government has designated intimate image-related crimes as priority offences under the Online Safety Act 2023. This classification compels online platforms to act swiftly in removing harmful content or risk regulatory penalties.
The Online Safety Act 2023 equips regulators with tools to hold platforms accountable, ensuring they implement mechanisms to detect and remove abusive material effectively. Platforms failing to comply could face substantial fines and reputational damage, creating a strong incentive for proactive moderation.
Addressing the Impact on Women and Girls
Intimate image abuse disproportionately affects women and girls, often leaving victims feeling helpless and exposed. The government’s reforms are a response to the growing demand for stronger protections and legal recourse for those targeted by such crimes.
Studies have shown that victims of intimate image abuse frequently experience psychological trauma, social isolation, and even threats to their safety. By introducing specific offences and enforcing stricter online platform responsibilities, these measures aim to provide victims with greater support and justice.
California introduced Bill AB 1018 to regulate automated decision systems impacting employment, education, housing, and healthcare. The Bill mandates performance evaluations, independent audits, and consumer disclosures to ensure accountability and transparent decision-making.
The European Data Protection Board has broadened its task force to include DeepSeek alongside other advanced AI systems and has established a quick response team to support national data protection authorities in enforcing privacy rules consistently across the EU.
Japan’s Ministry of Economy, Trade and Industry has published a new AI contract checklist to help companies handle AI safely and effectively. It covers data protection, intellectual property rights, and legal considerations for domestic and international agreements.