The German Federal Court of Justice recently clarified online platforms' liability for user-generated content, ruling that platforms are not directly liable without knowledge of unlawful content but must act swiftly once notified, a decision with significant implications for digital service providers across Europe.
Australia’s eSafety Commissioner ordered Telegram to pay a fine of AUD 1 million for failing to meet transparency obligations. Officials had requested details of the steps Telegram takes against terrorist material and child sexual abuse material, but the company delayed its response for months, triggering enforcement under the Online Safety Act.
On 28 February 2025, Japan’s Cabinet announced significant plans to introduce a Bill to promote research, development, and practical application of artificial intelligence technologies. The legislation focuses on transparency, protection of rights, and international cooperation.
Poland's Ministry of Digital Affairs Moves to Regulate Prohibited AI Systems under the EU AI Act
Poland’s Ministry of Digital Affairs closed consultations on regulating prohibited AI systems under the EU AI Act, focusing on practices like manipulative technologies, real-time biometric surveillance, and social scoring to protect public interests and rights.
Ministry of Digital Affairs in Poland tackles prohibited AI systems to protect fundamental rights
On 31 December 2024, Poland’s Ministry of Digital Affairs concluded its public consultation on the implementation of the European Union’s Artificial Intelligence (AI) Act, focusing specifically on prohibited AI systems.
This action reflects the country’s commitment to aligning with the EU’s broader efforts to establish a uniform regulatory framework for AI while addressing national concerns about machine learning (ML) and AI-related risks.
The consultation centred on Article 5 of Regulation (EU) 2024/1689, which lists prohibited AI practices. These include systems that manipulate human behaviour to the detriment of individuals, exploit vulnerable groups, or pose unacceptable risks to fundamental rights.
The EU AI Act categorises certain AI practices as prohibited due to their high risk of causing harm. Poland’s consultation sought feedback on these practices, which include:
Manipulative AI Systems: Technologies designed to subliminally influence individuals’ decisions in harmful ways.
Exploitation of Vulnerable Groups: AI systems targeting children, the elderly, or individuals with disabilities for manipulative purposes.
Social Scoring by Governments: Systems that evaluate individuals’ trustworthiness based on behaviour, a practice reminiscent of China’s social credit system.
Real-Time Biometric Surveillance: Systems used in public spaces for mass surveillance without adequate safeguards.
The consultation also explored practical challenges related to identifying, monitoring, and enforcing these prohibitions, particularly as AI systems become increasingly integrated into daily life.
Educating Businesses and the Public
A key aspect of the Ministry’s initiative is raising awareness among affected businesses and the general public. The Ministry has emphasised the importance of educating stakeholders about the AI Act’s requirements and their implications.
Businesses involved in AI development must understand the boundaries of permissible activities to avoid penalties and contribute to a trustworthy AI ecosystem.
For the public, the Ministry’s efforts include explaining how the AI Act protects their fundamental rights and what safeguards are in place against potentially harmful AI systems.
This dual focus on businesses and citizens aims to foster a culture of accountability and informed participation in the digital economy.
Challenges in Enforcement and Compliance
Implementing prohibitions on certain AI systems presents significant challenges. Identifying manipulative or exploitative AI practices requires sophisticated monitoring mechanisms and technical expertise.
Additionally, ensuring compliance across varied sectors, from healthcare to advertising, demands coordinated effort between regulators and industry stakeholders.
The consultation process also revealed concerns about balancing regulation with innovation.
Stakeholders expressed fears that overly stringent rules could stifle the development of beneficial AI applications, particularly in sectors like education and healthcare where AI has transformative potential.
Poland’s Role in Shaping EU AI Standards
Poland’s proactive approach to implementing the EU AI Act demonstrates its commitment to playing a leading role in shaping AI governance within the bloc. The country sets an example for other EU member states navigating similar regulatory challenges.
Poland’s efforts to regulate forbidden AI systems highlight the importance of collaboration, education, and vigilance in ensuring that AI technologies serve the public good without compromising fundamental rights.
California introduced Bill AB 1018 to regulate automated decision systems impacting employment, education, housing, and healthcare. The Bill mandates performance evaluations, independent audits, and consumer disclosures to ensure accountability and transparent decision-making.
The European Commission recently submitted a proposal for an EU Blueprint on cybersecurity crisis management. The recommendation outlines response mechanisms at Union level and calls for collaboration between civilian authorities and military partners.
The European Data Protection Board has broadened its task force to include DeepSeek alongside other advanced AI systems, establishing a quick response team to support national data protection authorities in enforcing privacy rules effectively across the EU.