AI Safety Summit Round-up: Part One

By Fay Capstick

Recently the UK government hosted the first summit on AI safety. This was a hugely important event in the history of the development of frontier AI. In this three-part series of blogs, we will be breaking down all the headlines and giving you a full insight into what actually happened and what was agreed upon.

This week we will look at what the AI safety summit actually was and what its aims were. We will then outline and analyse what was discussed at the roundtables on Days 1 and 2, before discussing the result: the Bletchley Park Declaration on AI safety. Next week we will cover the State of AI Science Report, the exciting and massively important new AI Safety Institute, the role of inclusivity in AI, and where we go from here. The final part in the series will cover all the extra bits that have happened, from the huge government investment in supercomputers to ways AI could impact the skills gap crisis, to Grok!

Phew, let’s get started…

Remind me, what actually was this summit?

This was the AI Safety Summit that was convened, chaired, and hosted by the UK at Bletchley Park at the beginning of November. It lasted two days. The Summit brought together countries, AI companies, civil society, and academia. The aim was to assess frontier AI, what its risks and implications could be, and how we can best manage them. 150 representatives from across the world attended. The timing was important, as next year more powerful AI models are expected to be released and their ‘capabilities may not be fully understood’. This is somewhat disturbing.

Who are the frontier AI developers?

In terms of companies, frontier AI developers would include Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft, and OpenAI. Frontier AI may also be planned or under development by academic institutions, governments, and militaries.

So, what happened on Day 1?

There were multiple roundtables. Here is an overview of what was discussed in them.

Roundtable 1: This looked at the risks to global safety from frontier AI misuse. Obviously, this is an issue that is of great concern to all, including the public, as we have no crystal ball to know how this area of technology will progress. It was chaired by the Canadian Government’s Minister for Innovation, Science and Industry.

They discussed how frontier AI models (such as GPT-4) make it easier for unsophisticated ‘bad actors’ to essentially do bad things, such as designing biological weapons. They looked at how, as AI becomes more accurate and accessible, these risks to our safety will increase. This roundtable acknowledged that tech companies are starting to implement safeguards, but that government action is needed to formalise this and to bring in safety testing of AI systems. They felt that the risk to the public is significant, and that preventing bad actors from using AI to cause harm is paramount. It was acknowledged that we are in the early stages of AI development and that we need to act decisively now to address the risks.

Roundtable 2: This one looked at the risks from unpredictable advances in frontier AI capability. It was chaired by the Chinese Academy of Sciences. This was motivated by the jumps being made in AI models, and it looked at the implications for future AI. It addressed some of the concerns that motivated the foundation of the summit in the first place, including the fact that AI capability is already beyond what was only recently predicted, and that it does not necessarily follow that we will be able to predict what future AI will be capable of. One risk identified was the potential for AI models to connect with other AI models, which could make them more powerful and unpredictable. A further risk was identified in open-source AI models, as they would be impossible to withdraw once released. This roundtable concluded that AI must be tested before release, and that even with evaluation there should be continuous monitoring of risk.

We are particularly concerned about the potential threat from open-source AI, as it will essentially be uncontrollable.

Roundtable 3: This is the really scary one. It looked at risks from the loss of control over frontier AI and was chaired by a minister from Singapore. This is what keeps some AI researchers and philosophers awake at night. It acknowledged that current AI systems are easily controlled and require human prompting; however, this will change, and they could gain the ability to make decisions and take actions in the real world. Once that happens, we can't fully predict what they will do. Like humans themselves, AI could become unpredictable and dangerous to others. Therefore they concluded that it might be necessary to pause AI development while we work to understand how to progress safely. However, we have seen how the last intended pause has gone, so I'm not sure this could realistically be achieved.

Roundtable 4: This was chaired by the Stanford Cyber Policy Center and looked at the risks from the integration of frontier AI into society. These could include threats to democracy, civil rights, healthcare, crime, and online safety: in effect, an existential threat to society. Again, the need for tools to evaluate the safety of AI models was raised, with the recommendation that evaluation be a continuous process. It was also acknowledged that AI could be used to solve global problems. This is the direction that I hope it will take.

Roundtable 5: This was chaired by the UK Secretary of State for Science, Innovation and Technology and looked at how frontier AI developers could scale capability responsibly. It was acknowledged that AI companies are making an effort towards responsible scaling; however, more needs to be done, and this is a matter of urgency. It was felt that this goes beyond the AI companies themselves and needs to be regulated by governments, and that standardised benchmarks will be needed. Enhanced cyber security will be required too.

Roundtable 6: This was chaired by the Partnership on AI and discussed what national policymakers should do about the risks and opportunities posed by frontier AI. This panel emphasised the collective efforts that need to be taken across the world so that AI can be utilised to its full benefit while the risks are managed. It also looked at the existing risks from AI, and the problematic balance between risk and opportunity that AI presents. The role of AI Safety Institutes was discussed (and this is something we will discuss further).

Roundtable 7: This was chaired by the Carnegie Endowment and looked at what the international community should do in relation to the risks and opportunities of AI. This is obviously a huge question, one that will require a united response, and, in our view, the prime question needing resolution. This group identified the need for concerted action and an international approach. The borderless dimension of AI was discussed, particularly in relation to open-source models, and how it is in everyone's interests to collaborate and coordinate a response to frontier AI.

Roundtable 8: The final roundtable of Day 1 looked at what the scientific community should do in relation to the risks and opportunities of AI. It was chaired by the UK Government's Chief Scientific Advisor. It was felt that current models were not enough, and that better ones were needed that were engineered to be safe (think Asimov's Laws of Robotics). This group felt that non-removable off switches were needed. (However, as we saw in WarGames, there is always a backdoor.) They concluded that the burden of proof lay with the vendors, and that the role of the scientific community was to design the safety tests.

After a crammed schedule on Day 1, Day 2 saw multiple roundtables with discussions centred around the 5 objectives that the UK set for the summit.

Priorities for international attention on AI over the next 5 years to 2028: This roundtable was chaired by the UK Deputy Prime Minister (Oliver Dowden). It looked at the priorities for international collaboration over the next 5 years. The huge opportunities brought by AI were acknowledged, along with the essential need to address the risks so that all can benefit. It was noted that AI has huge potential to help with challenges that humanity faces, from medicine to climate emergencies; however, assurances are needed that the technology will only be used ethically and responsibly. There will be huge job opportunities created by AI, and to best fill them, workers with the correct skills are needed (this is something we have looked at in previous blogs and will come back to in part 3 of this series).

Creating actions and next steps for future collaboration was the topic of the next roundtable, chaired by the UK Secretary of State for Science, Innovation and Technology. This was a discussion about how international collaboration would work, specifically concerning understanding the safety risks and disinformation resulting from AI, and how to make the response inclusive. The risk to democracy from deepfakes being used in elections was on the agenda, and it was decided that this was an extremely urgent issue to address. Finally, it was felt that countries should nominate candidates for an Expert Advisory Panel.

A second roundtable looked at the same subject, creating actions and next steps for future collaboration. This one was jointly chaired by the Secretary of State for Science, Innovation and Technology and the Foreign Secretary. It explored where AI is currently creating opportunities and how this can benefit from international collaboration. It was noted that AI is helping to bring better citizen experiences, tackle fraud, adapt to climate change, and improve medical diagnosis. The hope is that AI will be able to address poverty, respond to humanitarian crises, and improve food supply chains. International collaboration will be extremely important to ensure that everyone benefits.

What did the Prime Minister say after the summit?

After the safety summit, Rishi Sunak gave a speech. Sunak felt that AI represents the ‘greatest breakthrough of our own time’ (which, I think, might turn out to be a massive understatement). He believes that it will transform our economies, society, and lives. The PM acknowledged that a technology like AI carries danger, one that the public is concerned about, hence the summit to reach a shared understanding. He believes that having held the summit, and taken actions as a result of it, will help ensure that the technology is controlled and used correctly.

Bletchley Park Declaration on AI safety

The initial result of the summit has been an agreed statement of the risks and opportunities posed by AI, the Bletchley Park Declaration on AI safety, signed by all the countries attending, including China. This is a landmark agreement showing a united consensus.

What do we think?

We think that this was a comprehensive agenda, covering all of the areas causing people the most concern about the direction AI takes and the route it takes to get there. Next week we will look at the State of AI report, the new AI Safety Institute announced during the Summit, and where we go from here.

Final thoughts

At Parker Shaw, we have been at the forefront of the sector we serve, IT & Digital Recruitment and Consulting, for over 30 years. We can advise you on all your hiring needs.

If you are looking for your next job in the IT sector please check our Jobs Board for our current live vacancies at https://parkershaw.co.uk/jobs-board
