California Targets Deepfake Threats with New Election Law
California’s new law (AB 2655) targets deepfake media, requiring large online platforms to remove or label deceptive content about political candidates. It aims to combat misinformation during elections, protecting voters from manipulated media that could distort democratic processes.
Starting 1 January 2025, California’s Defending Democracy from Deepfake Deception Act of 2024 (AB 2655) takes aim at the growing threat of manipulated media online.
Designed to combat the spread of deceptive content, the Act holds large online platforms accountable for addressing deepfakes that misrepresent political candidates during crucial election periods.
AB 2655 places significant responsibilities on online platforms with over one million California users, ensuring they act swiftly against deceptive content, particularly during election seasons.
Here’s how it works:
Election Periods: From 120 days before an election, platforms must remove materially deceptive content within 72 hours of receiving a valid report. This applies to manipulated audio, video, or images designed to mislead voters about a political candidate’s actions or statements.
Non-Election Periods: Outside these critical windows, platforms are instead required to label such deceptive content clearly, providing users with context about its manipulative nature (see the sketch after this list).
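To make the two regimes concrete, here is a minimal sketch in Python of how a platform might route a valid report. It assumes hypothetical function names and dates and a simplified reading of the statute as summarised above; the actual statutory text is more detailed.

```python
from datetime import datetime, timedelta

# Assumed parameters, taken from the summary above (not the statutory text):
ELECTION_WINDOW = timedelta(days=120)   # removal duty applies from 120 days before the election
REMOVAL_DEADLINE = timedelta(hours=72)  # removal deadline after a valid report in that window

def required_action(report_time: datetime, election_day: datetime) -> str:
    """Return the obligation a valid report triggers (simplified reading of AB 2655)."""
    window_start = election_day - ELECTION_WINDOW
    if window_start <= report_time <= election_day:
        # Inside the election period: remove within 72 hours of the report.
        deadline = report_time + REMOVAL_DEADLINE
        return f"remove by {deadline:%Y-%m-%d %H:%M}"
    # Outside the election period: the labelling duty applies instead.
    return "label the content with context about its manipulated nature"

# Hypothetical report filed 30 days before a 4 November 2025 election:
print(required_action(datetime(2025, 10, 5, 9, 0), datetime(2025, 11, 4)))
# -> remove by 2025-10-08 09:00
```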
Defining "Materially Deceptive Content"
The law defines "materially deceptive" as media altered in a way that could mislead viewers into believing a candidate said or did something false.
This includes doctored videos, dubbed audio, or fabricated images intended to harm a candidate’s reputation or manipulate voter perceptions.
The focus is not only on malicious intent but also on the content’s potential impact on democratic processes. By mandating swift action during election periods, the law aims to protect voters from falling prey to misinformation campaigns.
Enforcing Accountability
AB 2655 empowers California’s Attorney General, district attorneys, and city attorneys to take legal action against platforms that fail to comply. These enforcement mechanisms highlight the state’s commitment to holding tech giants responsible for their role in combating the spread of harmful deepfakes.
Platforms that do not act within the stipulated timelines or fail to implement proper labelling systems during non-election periods could face legal consequences, setting a precedent for proactive digital content moderation.
Deepfake technology, powered by advancements in artificial intelligence, has become a tool for creating highly convincing yet entirely fabricated media. While the technology has applications in entertainment and education, its darker uses, such as spreading political misinformation, have raised alarm.
In recent years, incidents of deepfakes targeting political figures have multiplied. These manipulated videos and audio clips often go viral, misleading audiences and creating public distrust.
AB 2655 represents a direct response to these growing threats, aiming to restore integrity to the digital information ecosystem.
Challenges for Platforms
Large platforms, including the social media giants, now face the dual challenge of implementing robust detection systems and responding quickly to flagged content. Identifying deepfakes is technically demanding, requiring advanced AI tools and skilled moderation teams.
Moreover, the law’s 72-hour removal requirement during election periods places additional pressure on platforms to act efficiently. Failure to comply could not only result in legal action but also damage their reputations as responsible digital intermediaries.
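For illustration only, here is a minimal sketch (all names and dates hypothetical) of one way a moderation queue could track that pressure, surfacing whichever flagged report is closest to its 72-hour statutory deadline:

```python
import heapq
from datetime import datetime, timedelta

REMOVAL_DEADLINE = timedelta(hours=72)  # assumed from the law's election-period rule

class ReportQueue:
    """Orders flagged reports so the one nearest its 72-hour deadline is reviewed first."""
    def __init__(self):
        self._heap = []  # min-heap of (deadline, report_id)

    def add(self, report_id: str, reported_at: datetime) -> None:
        heapq.heappush(self._heap, (reported_at + REMOVAL_DEADLINE, report_id))

    def next_due(self):
        # Peek at the most urgent report without removing it from the queue.
        return self._heap[0] if self._heap else None

queue = ReportQueue()
queue.add("rpt-001", datetime(2025, 10, 5, 9, 0))
queue.add("rpt-002", datetime(2025, 10, 4, 18, 30))
print(queue.next_due())  # rpt-002 is due first (deadline 2025-10-07 18:30)
```

A deadline-ordered queue is only a sketch of one design choice; real systems would also weigh factors such as content reach and reporter credibility when prioritising review.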