The German Federal Court of Justice recently clarified online platforms' liability for user-generated content, ruling that platforms are not directly liable for content they have no knowledge of, but must act swiftly once notified. The decision has significant implications for digital service providers across Europe.
Australia’s eSafety Commissioner ordered Telegram to pay AUD 1 million for ignoring its transparency obligations. Officials had requested details of the platform’s measures against terrorist and child sexual abuse material, but Telegram delayed its response for months, triggering enforcement under the Online Safety Act.
On 28 February 2025, Japan’s Cabinet announced plans to introduce a Bill to promote the research, development, and practical application of artificial intelligence technologies. The legislation focuses on transparency, protection of rights, and international cooperation.
European Union Publishes Second Draft of the General-Purpose AI Code of Practice
The European AI Office released the second draft of the General-Purpose AI Code of Practice, setting out commitments on compliance, risk mitigation, and transparency. Stakeholders are invited to provide feedback that will shape AI regulation under the EU AI Act.
Second Draft AI Code of Practice Advances AI Accountability in the EU
The European AI Office has taken a significant step forward in regulating artificial intelligence with the release of the second draft of the General-Purpose AI Code of Practice.
Published on 19 December 2024, this document is being developed under Article 56 of the European Union AI Act, aiming to ensure that providers of general-purpose AI models meet rigorous compliance standards.
The second draft sets out commitments, actionable measures, and key performance indicators (KPIs) for measuring compliance.
These KPIs serve as a yardstick to evaluate how effectively AI providers are mitigating risks and aligning with regulatory expectations.
The European AI Office seeks to streamline the compliance process, making it easier for organisations to adapt while safeguarding public interests.
Providers are required to maintain comprehensive documentation, covering technical specifications, training data, usage guidelines, and compliance measures. This initiative is designed not only to ensure transparency but also to promote trust across the AI value chain.
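As an illustration of what such documentation might look like in practice, the sketch below shows one way a provider could capture these items as a structured internal record. It is a minimal, hypothetical example in Python; the ModelDocumentation class and its field names are our own and are not terminology drawn from the draft Code.

    # Hypothetical sketch - class and field names are illustrative, not taken from the Code.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ModelDocumentation:
        """Illustrative record covering the documentation items mentioned above."""
        model_name: str
        version: str
        technical_specifications: str   # e.g. architecture, parameter count, modalities
        training_data_summary: str      # provenance and curation of training data
        usage_guidelines: List[str] = field(default_factory=list)    # intended and prohibited uses
        compliance_measures: List[str] = field(default_factory=list) # e.g. copyright policy, risk controls

    doc = ModelDocumentation(
        model_name="example-gpai-model",
        version="1.0",
        technical_specifications="Transformer-based, text-only, 7B parameters (illustrative)",
        training_data_summary="Publicly available web text plus licensed corpora (illustrative)",
        usage_guidelines=["General-purpose text generation", "Not for automated legal advice"],
        compliance_measures=["Copyright compliance policy", "Systemic risk assessment log"],
    )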
Enhancing Risk Mitigation and Compliance
One of the standout features of the draft is its focus on systemic risk assessment at a Union-wide level. The Code outlines practical steps for AI providers to identify, evaluate, and address risks that could affect public safety, fundamental rights, or economic stability. This includes offering clear frameworks for transparency and cooperation with the AI Office to simplify assessment processes.
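By way of illustration only, the sketch below shows how a provider might keep a simple internal risk register and compute a KPI such as the share of identified risks that have a documented mitigation. The risk categories, the RiskEntry and mitigation_coverage names, and the sample entries are assumptions made for this example, not figures or terminology from the draft Code.

    # Hypothetical sketch - names, categories, and entries are illustrative only.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class RiskEntry:
        category: str               # e.g. "public safety", "fundamental rights", "economic stability"
        description: str
        mitigation: Optional[str]   # None until a mitigation is documented

    def mitigation_coverage(register: List[RiskEntry]) -> float:
        """Illustrative KPI: fraction of identified risks with a documented mitigation."""
        if not register:
            return 1.0
        mitigated = sum(1 for r in register if r.mitigation)
        return mitigated / len(register)

    register = [
        RiskEntry("public safety", "Model could generate unsafe instructions", "Output filtering and red-teaming"),
        RiskEntry("fundamental rights", "Potential biased outputs in sensitive domains", None),
    ]

    print(f"Mitigation coverage: {mitigation_coverage(register):.0%}")  # 50% in this toy register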
Another critical area of focus is copyright compliance, an often-overlooked yet vital element in the AI lifecycle. The Code requires providers to put in place policies ensuring that general-purpose AI models respect intellectual property laws, addressing a key concern among creators and stakeholders.
Building Trust Across the AI Value Chain
The draft Code encourages collaboration and understanding across the AI value chain. By fostering communication between AI developers, end-users, and regulators, the European AI Office hopes to create a unified approach to fulfilling the obligations of the AI Act.
This collaborative focus aims to ease the integration of compliance measures while fostering innovation within ethical and legal boundaries.
Call for Feedback and the Path Ahead
Stakeholders are invited to provide written feedback on the draft until 15 January 2025. Additionally, a series of discussions will be held to address community concerns and refine the Code further.
The next milestone is the publication of the third draft, scheduled for 17 February 2025.
The second draft of the General-Purpose AI Code of Practice signals a proactive approach by the EU in shaping the future of AI regulation.
As the consultation progresses, the focus remains on balancing innovation with public safety and trust, ensuring that general-purpose AI models contribute positively to society.
California introduced Bill AB 1018 to regulate automated decision systems impacting employment, education, housing, and healthcare. The Bill mandates performance evaluations, independent audits, and consumer disclosures to ensure accountability and transparent decision-making.
The European Commission recently submitted a proposal for an EU Blueprint on cybersecurity crisis management. The recommendation outlines response mechanisms, promotes coordination at Union level, and calls for collaboration between civilian authorities and military partners.
The European Data Protection Board has broadened its task force to include DeepSeek alongside other advanced AI systems, establishing a quick response team to support national data protection authorities in enforcing privacy rules consistently across the EU.