Government's trailblazing AI Safety Institute to open doors in San Francisco

  • The UK AI Safety Institute will expand across the Atlantic, broadening its technical expertise and cementing its position as a global authority on AI safety.
  • The expansion comes as the AI Safety Institute publishes its first safety test results on publicly available AI models and agrees a new collaboration with Canada.
  • Ahead of co-hosting the AI Seoul Summit, the UK demonstrates the AI Safety Institute's continued global leadership on AI safety.

The UK's pioneering government AI Safety Institute will expand its international horizons by opening its first overseas office in San Francisco this summer, Technology Secretary Michelle Donelan announced today (Monday, May 20).

The expansion marks a pivotal step that will allow the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world's largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public interest.

The office, expected to open this summer, will recruit its first team of technical staff, headed by a Research Director.

It will be a complementary branch of the Institute's London headquarters, which continues to go from strength to strength and already has a team of more than 30 technical staff. The London office will continue to scale and build the expertise needed to assess the risks of frontier AI systems.

By expanding its presence in the US, the Institute will establish close collaboration with the US, furthering the two countries' strategic partnership and shared approach to AI safety, while sharing research and carrying out joint evaluations of AI models that can inform AI safety policy around the world.

Secretary of State for Science, Innovation and Technology Michelle Donelan said:

This expansion represents British leadership in AI in action. It is a pivotal moment in the UK's ability to study both the risks and potential of AI from a global perspective, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.

Since the Prime Minister and I founded the AI Safety Institute, it has grown from strength to strength, and in just over a year, here in London, we have built the world's leading government AI research team, attracting top talent from the UK and beyond.

Opening our doors overseas and building on our partnership with the US is critical to my plan to set new, international standards on AI safety, which we will discuss at the Seoul Summit this week.

The expansion comes as the UK AI Safety Institute releases a selection of recent safety test results from five publicly available advanced AI models, making it the first government-backed organization in the world to reveal the results of its evaluations.

While they are only a small part of the Institute's broader approach, the results show the significant progress the Institute has made since November's AI Safety Summit as it builds its capabilities for state-of-the-art safety testing.

The Institute evaluated AI models across four key risk areas, including how effective the safeguards that developers have installed actually are in practice. The Institute's testing found that:

  • Several models completed cyber security challenges, while struggling to complete more advanced challenges.
  • Several models demonstrate PhD-level knowledge of chemistry and biology.
  • All tested models remain highly vulnerable to basic jailbreaks, and some will produce harmful outputs even without dedicated attempts to circumvent their safeguards.
  • Tested models were unable to complete more complex, time-consuming tasks without humans overseeing them.

AI Safety Institute Chair Ian Hogarth said:

The results of these tests mark the first time we have been able to share some details of our model evaluation work with the public. Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness of existing safeguards.

AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing. Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security related risks.

AI safety remains a key priority for the UK as it continues to drive forward the global conversation on the safe development of the technology.

This effort began with November's AI Safety Summit at Bletchley Park, and momentum continues to build as the UK and the Republic of Korea prepare to co-host the AI Seoul Summit this week.

As the world prepares to gather in Seoul this week, the UK has committed to working with Canada, including through their respective AI Safety Institutes, to advance their shared ambition of creating a growing network of state-backed organizations focused on AI safety and governance. Confirmed by UK Technology Secretary Michelle Donelan and Canada's Science and Innovation Minister François-Philippe Champagne, this partnership will deepen existing ties between the two nations and inspire collaborative work on systemic safety research.

As part of this agreement, the countries will aim to share their expertise to bolster existing testing and evaluation work. The partnership will also enable secondment routes between the two countries, and work to jointly identify areas for research collaboration.

Notes to editors

The Institute's safety testing was carried out this year on five publicly available large language models (LLMs) trained on large amounts of data. The tested models have been anonymized.

The results provide only a snapshot of model capabilities and do not designate systems as "safe" or "unsafe." The tests performed represent a small portion of the evaluation techniques AISI is developing and using, as described in the Institute's approach to evaluations published earlier this year.

Today's publication can be found on the AI Safety Institute website.

Today also marks the release of the latest progress update from Institute Chair Ian Hogarth, which can also be found on the AI Safety Institute website.
