Some banks moving too slow to address AI-powered cyberthreats, Treasury says

Some financial institutions are not moving fast enough to adopt adequate risk management frameworks that would help address AI-driven cybersecurity threats, according to a report released Wednesday by the Treasury Department.

Cybercriminals who deploy hacking techniques powered by tools like generative AI chatbots are likely to hold an advantage over banks, at least in the short term, the analysis adds, citing 42 interviews with financial firms, trade associations and cybersecurity service providers.

The Treasury review was called for in a sweeping executive order on AI that directed agencies across the federal landscape to study and reorient their operations around the rapidly evolving technology, which made headlines over the past year for its swift adoption in consumer-facing markets.

The order identified financial services, along with education, housing, law, healthcare and transportation, as industries that could be affected by the misuse of particular AI technologies. Several federal agencies were asked to draft sector-specific reports assessing the risks and benefits of AI, with staggered due dates following the directive's signing.

AI chatbots and related tools, such as OpenAI's ChatGPT or Google's Gemini, have been hailed as powerful productivity boosters, but the same tools can also sharpen hackers' ability to carry out increasingly convincing cyberattacks and social engineering scams.

These include phishing campaigns, in which hackers craft a cloned email, web page or other familiar digital item with an underlying program that siphons victims' data or plants malware on their systems when clicked, according to a Treasury official who spoke with reporters ahead of the report's release.

Email phishing attempts against financial institutions have begun to look more realistic, the official said, recounting the agency's conversations with firms. Historically, many phishing emails contained language that betrayed a sender who did not speak fluent English, but AI systems have made these attempts far more convincing, the official said.

Scammers have also experimented with voice-cloning technologies to impersonate victims and gain access to their financial information, the official added.

These AI tools are also being used for code and malware generation. One example described in the report involved using a generative AI platform to create a convincing fake copy of a company's website to harvest customer credentials. Hackers have also explored using such tools to scan websites for vulnerabilities, according to the report.

While institutions have been sharing anonymized threat information with cybersecurity vendors more frequently, financial firms have been less willing to share fraud-protection information with one another, the analysis found, adding that the lack of data sharing on fraud "is likely to affect smaller institutions more significantly than larger institutions."

The report will be distributed widely to Capitol Hill offices on Wednesday, in the hope that lawmakers will rally around its findings, the official added.

Other data-specific risks highlighted by the Treasury report include data poisoning, data leakage and data integrity attacks, all of which target the sensitive information used to train AI models themselves. By compromising critical information within the source data, attackers can durably alter a large language model's output, leading an AI system to produce biased, unethical or false responses to a given prompt.
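The report itself offers no technical detail, but a minimal, hypothetical sketch of one such attack, label-flipping data poisoning, shows how tampering with a fraction of training labels can degrade a model. The toy dataset, simple classifier and 30 percent poisoning rate below are illustrative assumptions, not details drawn from the Treasury analysis:

```python
# Toy illustration of label-flipping data poisoning (illustrative only;
# not based on any incident described in the Treasury report).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for sensitive training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips 30% of the training labels -- the "poison".
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

# Same model, same features; only the integrity of the labels differs.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
bad_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", bad_model.score(X_test, y_test))
```

Running the sketch shows the poisoned model scoring measurably worse on held-out data, the same basic effect, at toy scale, that the report warns about for production AI systems.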

While critical training data is the primary target of hackers, all data handled throughout the development and production cycle of an AI system requires protocols that protect it from access by cybercriminals, the Treasury warned.

Financial regulators have frequently sounded the alarm about artificial intelligence systems and their integration into investment services. Securities and Exchange Commission Chair Gary Gensler has said that unchecked AI systems could lead to a financial crisis in the future.

The SEC has issued a rule requiring publicly traded companies to disclose hacking incidents that could materially affect their investors. The goal is to provide more transparency about how cyberattacks hit companies' bottom lines, forcing firms to report breaches within four days.

