The German Federal Court of Justice recently clarified online platforms’ liability for user-generated content, ruling that platforms are not directly liable for material they are unaware of but must act swiftly once notified. The decision has significant implications for digital service providers across Europe.
Australia’s eSafety Commissioner ordered Telegram to pay AUD 1 million for ignoring transparency obligations. Officials had requested details of the platform’s measures against terrorist content and child sexual abuse material, but Telegram delayed its response for months, triggering enforcement under the Online Safety Act.
On 28 February 2025, Japan’s Cabinet announced plans to introduce a Bill to promote the research, development, and practical application of artificial intelligence technologies. The legislation focuses on transparency, protection of rights, and international cooperation.
Financial Conduct Authority Responds to AI and Big Tech Challenges Raised by Industry Panels
The Financial Conduct Authority responds to industry panel concerns on AI regulation and Big Tech’s role in financial services, addressing risks like bias, competition, and data privacy while exploring opportunities through initiatives like the Digital Sandbox.
Regulatory updates tackle AI risks and Big Tech data in financial markets
The Financial Conduct Authority (FCA) has responded to feedback from its six independent statutory panels regarding its approach to regulating artificial intelligence (AI) and Big Tech.
The concerns come amid the rapid development of generative AI and the growing influence of technology giants in financial markets.
These insights, part of the FCA’s annual report exchange with the panels, highlight pressing issues and propose measures to address risks while capitalising on opportunities.
Regulating Artificial Intelligence: Balancing Innovation and Safety
Industry panels, including the Practitioner Panel and Markets Practitioner Panel, have expressed concern that regulatory controls for AI are lagging behind the rapid advancement of generative AI capabilities.
The panels also highlighted disparities in AI deployment and future risks associated with imbalances created by Big Tech’s role in financial services.
In response, the FCA explained its technology-agnostic and principles-based approach to AI regulation, focusing on the safe and responsible use of AI in financial services.
The FCA published an AI update in April 2024 outlining its commitment to outcomes-based regulation and stressing the importance of assessing AI’s impact on consumers and financial markets.
Key findings from a joint FCA and Bank of England AI survey revealed that 17% of AI use cases in financial services involve foundation models, including Large Language Models (LLMs).
Operations and IT account for the largest share of these implementations, followed by general insurance, risk and compliance, and retail banking among higher-materiality use cases. Notably, a third of these use cases involve third-party providers, and three leading providers dominate the market for cloud, model, and data services.
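The survey does not publish provider market shares, but competition analyses of this kind of concentration commonly use the Herfindahl-Hirschman Index (HHI). The minimal sketch below uses invented shares purely for illustration; none of the figures come from the FCA survey:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in %).

    Markets scoring above roughly 2,500 are conventionally treated as
    highly concentrated.
    """
    return sum(s ** 2 for s in shares)

# Invented shares: three dominant providers plus a fragmented tail.
provider_shares = [40, 30, 20, 5, 5]
print(hhi(provider_shares))  # 2950 -> highly concentrated
```

On this conventional scale, a market in which three providers hold 90% between them scores well into the highly concentrated range, which is the kind of dependency risk the survey flags.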
While AI can mitigate certain cyber risks, the survey also identified the potential for bias in machine learning models, particularly in decisions affecting consumers.
The FCA’s ongoing research on AI bias, aligned with the Digital Regulation Cooperation Forum’s work on AI fairness, seeks to address these challenges.
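To make concrete the kind of bias at issue, the sketch below computes the demographic parity difference, one common fairness metric, on hypothetical approval decisions. The data, threshold, and group labels are invented for illustration and are not drawn from the FCA’s research:

```python
import numpy as np

def demographic_parity_difference(decisions, group):
    """Absolute gap in approval rates between two groups.

    decisions: array of 0/1 model outcomes (1 = approved)
    group:     array of 0/1 group-membership indicators
    """
    decisions, group = np.asarray(decisions), np.asarray(group)
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Hypothetical credit-model scores that drift upward for group 1,
# mimicking a feature that proxies for group membership.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)
scores = rng.uniform(size=1_000) + 0.1 * group
decisions = (scores > 0.5).astype(int)

print(f"Demographic parity difference: "
      f"{demographic_parity_difference(decisions, group):.3f}")
```

A non-trivial gap between groups, as in this toy example, is the sort of signal fairness audits look for before investigating whether a model feature is acting as a proxy for a protected characteristic.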
Big Tech and Data Challenges in Financial Services
The Consumer Panel and Smaller Business Practitioner Panel raised concerns about the risks associated with Big Tech’s growing role in financial services, including potential price discrimination, competition issues, and data privacy challenges.
These concerns prompted the FCA to launch a Call for Input (CFI) in 2024, focusing on data asymmetry between Big Tech and financial services firms.
The FCA’s findings revealed that while current adverse effects are limited, future risks could significantly impact competition and consumer outcomes. Big Tech’s data could play a transformative role in areas such as consumer credit and insurance, where personalised marketing and risk-based pricing could lead to both opportunities and challenges.
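As a purely hypothetical illustration of how an extra behavioural data point could shift risk-based pricing, consider the toy logistic default model below. The model form, coefficients, and premium rule are invented for this sketch and are not taken from the FCA’s findings:

```python
import math

def default_probability(income, extra_signal=None):
    """Toy logistic default model; all coefficients are invented."""
    z = 1.0 - 0.00003 * income
    if extra_signal is not None:
        # Hypothetical behavioural feature a Big Tech platform might hold.
        z += 0.8 * extra_signal
    return 1 / (1 + math.exp(-z))

def risk_based_premium(base_premium, p_default):
    """Premium scales with the estimated probability of default."""
    return base_premium * (1 + p_default)

# The same applicant priced without and with the extra data point.
p_plain = default_probability(income=40_000)
p_enriched = default_probability(income=40_000, extra_signal=0.9)
print(risk_based_premium(100.0, p_plain))     # ~145
print(risk_based_premium(100.0, p_enriched))  # ~163
```

Even this toy setup shows the dual effect the FCA describes: the added signal can sharpen risk estimates and lower prices for some consumers while raising them for others.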
The FCA’s response included plans to explore these issues further through its Digital Sandbox initiative. This platform could help assess the value of Big Tech data in financial services and examine how incentives for data sharing can align with achieving positive outcomes for consumers.
Digital Wallets and Wholesale Markets
Feedback from the panels also called attention to the regulatory treatment of digital wallets. The FCA, in collaboration with the Payment Systems Regulator (PSR), launched a Call for Information in mid-2024 to examine whether digital wallets should fall within its regulatory scope.
The findings will inform updates to the Payment Services Regulations as the UK continues to replace EU laws with domestic legislation.
In the context of wholesale markets, the FCA concluded that strict privacy agreements limit Big Tech’s ability to compete directly with incumbents.
A Wholesale Data Market Study found minimal evidence of Big Tech firms entering wholesale markets in ways that challenge traditional players. However, the FCA stated it would maintain vigilance in monitoring these activities.
California introduced Assembly Bill 1018 (AB 1018) to regulate automated decision systems impacting employment, education, housing, and healthcare. The Bill mandates performance evaluations, independent audits, and consumer disclosures to ensure accountability and transparent decision-making.
The European Data Protection Board has broadened its task force to include DeepSeek alongside other advanced AI systems, establishing a quick response team to support national data protection authorities in enforcing privacy rules effectively across the EU.
Japan’s Ministry of Economy, Trade and Industry published a new AI contract checklist to help companies handle AI safely and effectively. It covers data protection, intellectual property rights, and legal considerations for domestic and international agreements.