United States Revokes AI Safety and Data Protection Executive Order
The United States has revoked an executive order focused on AI safety, privacy, and ethical development. Federal agencies are directed to halt implementation efforts and review previous actions, raising questions about the future of AI regulation and oversight.
New Executive Order Ends Measures Addressing Artificial Intelligence Privacy and Safety Regulations
On 20 January 2025, the President of the United States issued a new executive order that revoked several measures enacted by the previous administration.
Among these was the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a directive designed to address the rapidly advancing AI sector.
The rescinded order had outlined comprehensive guidelines for ensuring AI safety, privacy, equity, and consumer protection while fostering innovation and competition. This decision has sparked discussions across the technology, policy, and civil rights sectors regarding the future direction of AI governance in the United States.
The revoked executive order, issued under the former administration, was an ambitious initiative to create a safer AI environment while preserving privacy and civil liberties. It required federal agencies to prioritise data protection measures and to promote fairness, equity, and competition within the AI ecosystem.
Provisions within the order included safeguards against potential misuse of AI, particularly in areas such as surveillance and algorithmic bias.
Under the new order, federal agencies have been instructed to cease all implementation efforts related to the revoked measures. Within 45 days, the Directors of the Domestic Policy Council and the National Economic Council are tasked with reviewing and amending actions previously taken under the rescinded orders.
Meanwhile, the National Security Advisor has been directed to assess all National Security Memoranda issued between 2021 and 2025, identifying those that may warrant rescission.
Implications for AI Governance
The decision to revoke this executive order raises critical questions about the United States' approach to AI regulation. While the original order aimed to establish trust and transparency in AI development, its termination leaves uncertainties about the country's regulatory framework.
Industry leaders and policymakers are closely monitoring how federal agencies will respond to the directive to halt ongoing implementations and review their previous efforts.
The revocation also impacts the data protection measures that had been emphasised under the rescinded order. These provisions were designed to safeguard individual privacy in an era of pervasive AI technology, addressing concerns over surveillance, algorithmic decision-making, and the potential misuse of personal data.
Critics argue that removing these protections could erode public trust and leave critical gaps in the nation’s AI policy landscape.
National Security and Broader Reviews
In addition to addressing AI-specific concerns, the new executive order has broader implications for national security. By instructing a review of all National Security Memoranda issued over the past four years, the administration signals its intent to reassess the country's overarching priorities and strategies.
This move could lead to significant shifts in how AI technologies are integrated into national defence and intelligence operations.
The decision to revoke this executive order aligns with the administration's strategic effort to re-evaluate policies enacted by the previous government. However, it also leaves room for debate about the balance between fostering innovation and ensuring ethical oversight in AI development.