AI Safety Summit Round-up: Part Two
By Fay Capstick
Last week we looked at the ground covered during the AI Safety Summit, hosted and chaired by the UK. This week we shall be looking at some of the outcomes of the Summit, including the newly announced AI Safety Institute, how we plan to test future AI models, and the State of AI Science report.
What is the State of AI Science report exactly?
The State of AI Science report was proposed by the UK government, and its creation was agreed at the AI Safety Summit. It will be an international effort, similar in spirit to the Intergovernmental Panel on Climate Change, ensuring that developments in AI, and their consequences, are properly understood. The UN Secretary-General supports it, and each country will nominate experts to contribute. The first report will be chaired by Yoshua Bengio, a Turing Award winner. Going forward, we will of course keep you updated on its findings when it is released.
Any conclusions on the testing of AI models?
Yes, thankfully! As a result of the Summit, governments and AI companies have agreed to work together to ensure that AI models are tested before they are released. How this can actually be enforced is unclear (more on Elon Musk’s new AI model next week).
When setting up the AI Safety Institute (see below), the government wished to remind us that AI is not a ‘natural phenomenon that is happening to us, but a product of human creation that we have the power to shape and direct.’ This is a timely reminder that we still have the power to decide and shape the direction AI takes, and hopefully to prevent any loss of control over it.
New AI Safety Institute!
The British government have announced the establishment of a world-leading AI Safety Institute, where the most advanced frontier AI models will be tested in the public interest. The mission of the Institute will include ‘minimising surprise’ from ‘unexpected advances in AI.’ Basically, they hope to keep control of Skynet and to establish the UK as a global hub for safety research.
Companies at the Summit have agreed to increase the UK’s access to their models for testing purposes. The Safety Institute’s first aim will be to gain access to the next generation of AI models before they are deployed next year. The Institute will also work closely with the Alan Turing Institute.
The UK government hope that the AI Safety Institute will solidify the UK’s position as a world leader in AI safety. It will work with leading companies and nations, and Google DeepMind will partner with the Institute. Being based in the UK, it will also provide employment in the tech sector. It is being formed from the Frontier AI Taskforce, which had existed for only a few months prior.
As part of its work, the Institute will need access to significant computing power. It will use the new AI Research Resource, a £300 million network of supercomputers that will be among the most powerful in Europe.
An obvious limitation will be the emergence of open-source AI, which will fall outside the remit of the AI Safety Institute and be far harder to predict. In our opinion, this issue will need further consideration.
Further, the Institute's goal is to give ‘peace of mind’ to the British public that this country will have the ‘most advanced AI protections of any country.’ This is laudable, but as we know, AI is a truly international problem, and we are only safe if every country works together to make it so. As a move towards this, the US has also announced an AI Safety Institute, with which the UK will be in partnership.
Inclusivity
It was acknowledged that AI is for everyone, and that its access and benefits need to be shared by all. This particularly includes poorer countries, women, and minority groups. No one can be excluded.
It was further noted how important it is that AI models are trained on data sets that do not perpetuate discrimination, bias, or incorrect ideas.
Now what?
There will be another summit in six months to check how well the agreed goals are being actioned. This will be a mini virtual summit hosted by the Republic of Korea. The next full summit is scheduled to take place in France in a year's time.
Summit participants suggested that the way forward could be to put regulations in place to control AI development. As we will see next week, voluntary commitments aren’t good enough (Mr Musk, we’re looking at you).
Our discussion has touched on military uses of AI and the separate risks and challenges they bring; this was only briefly mentioned at the Summit. There has already been a dedicated summit on the subject, the Summit on Responsible AI in the Military Domain (REAIM), held in February 2023 and co-hosted by the Netherlands and the Republic of Korea. Its aim was to build international consensus around the development, deployment, and use of AI in the military.
The environment
The environmental impact of the computing power needed for AI was raised, a topic we discussed in a previous blog. The hope is that AI can be developed in an environmentally sustainable way, something we hugely support.
What next
Next week we will conclude our AI Safety Summit round-up by looking at what else has been happening, including Grok, the huge new UK investment in supercomputers, and what this means for our industry.
Final thoughts
At Parker Shaw, we have been at the forefront of the sector we serve, IT & Digital Recruitment and Consulting, for over 30 years. We can advise you on all your hiring needs.
If you are looking for your next job in the IT sector, please check our Jobs Board for our current live vacancies at https://parkershaw.co.uk/jobs-board