UK financial regulators are engaging in urgent discussions with banks and cybersecurity officials after Anthropic's latest artificial intelligence model, Claude Mythos Preview, uncovered significant security vulnerabilities. This development has prompted a coordinated response involving the Bank of England, Financial Conduct Authority, HM Treasury, and the National Cyber Security Centre to assess the potential risks associated with the model's findings.
The Financial Times reports that major banks, insurers, and exchanges will be briefed on these vulnerabilities in an upcoming meeting. This action mirrors steps taken in the United States, where Treasury Secretary Scott Bessent has already convened Wall Street leaders to discuss the implications of the AI tool. Both UK and US authorities are concerned about how the technology could expose weaknesses that malicious actors might exploit.
Anthropic has disclosed that the Claude Mythos Preview model identified thousands of high-severity vulnerabilities, including some in every major operating system and web browser. The company warned that these flaws, some undetected for decades, could have severe consequences for economies, public safety, and national security. The issue will be a key topic at the next Cross Market Operational Resilience Group meeting, which involves regulators and financial firms in assessing systemic threats.
Despite the seriousness of the findings, the Bank of England has not yet activated its rapid-response Cross Market Business Continuity Group, opting instead to monitor developments within existing resilience frameworks. Meanwhile, the UK’s AI Security Institute is testing the Mythos model alongside others, as policymakers consider implementing standardised testing for AI systems used by financial institutions.
Given the heightened concern over recent cyber attacks on major UK companies, regulators are increasingly focused on emerging threats to operational resilience. Financial institutions are advised to stay informed about the ongoing assessments and prepare for potential regulatory changes aimed at enhancing the security and reliability of AI systems in the sector.
Source: http://www.fstech.co.uk/fst/Financial_Services_Regulators_Assess.php


