The Future is AI series: 5. What is being done to control AI?

By Fay Capstick

Last week in our 'The Future is AI' series we looked at what AI, and the potential emergence of Artificial General Intelligence (AGI), could mean; why it could be a problem; whether the development of AI has happened more quickly than expected; the deepfake problem; and the role that philosophy could play. This week we will conclude our deep dive by looking at how governments and tech companies are responding and what is being done to control AI.

What has been happening?

As we have all seen, AI now exists in a form that is easy to use and very good at the tasks it is set to perform. This brings risk: tech companies are evolving their AI ever faster, and there is a chance of AGI, a super-intelligence, developing soon. Action therefore needs to be taken to ensure that none of these developments can harm humans, society, or democracy. AI needs to be only the force for good that we all hope it can be.

Is it really that worrying?

Some experts are more positive about the risks posed by AI and think scenarios such as humanity being wiped out are unrealistic (which is a relief). Computer scientists such as Arvind Narayanan at Princeton point out that current AI is nowhere near capable of causing such disastrous scenarios (https://www.bbc.co.uk/news/uk-65746524).

Martha Lane Fox is another of the calming voices, urging people not to become ‘too hysterical’ over the risks posed by AI (https://www.bbc.co.uk/news/technology-65162257). Instead, she suggests that guidelines be put in place.

Other experts are more concerned. Geoffrey Hinton, considered an AI pioneer for his work on deep learning, has resigned from Google and now says that he regrets his work (https://www.bbc.co.uk/news/world-us-canada-65452940). He believes that AI will soon be more intelligent than humans. Further, Google’s own chief executive has admitted that he does not fully understand what the company’s AI chatbot can do.

Therefore we need to make sure that humans understand the ramifications of the genie that we have let out of the bottle.

Who are the Future of Life Institute and what have they said?

The Future of Life Institute (FLI) is a non-profit organisation formed to ensure that transformative technologies do not pose a risk and are of benefit to life. In March 2023 they published an open letter, signed by people including Elon Musk, calling for AI labs to pause the training of all AI systems more powerful than GPT-4 for a period of six months. They feel that ‘advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed.’ FLI feel that this is not happening and that AI labs are locked in an ‘out-of-control race’ to develop ever more advanced ‘digital minds.’ They argue that these systems cannot be fully understood or predicted at this stage, and question the sense of developing minds that might ‘eventually outnumber, outsmart, obsolete and replace us’.

Their conclusion is sensible: there should be a pause while humanity properly considers what it is doing. FLI would like this to be a verifiable pause, and if companies do not agree, they want governments to step in.

The main problem here is verifying any pause, as companies in countries such as China may not feel bound by such a moratorium. Further, what about systems being developed by governments themselves, which can and often will be secretive about their endeavours?

To read their statement and learn more about the Future of Life Institute, please visit: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

What is the UK government doing?

Rishi Sunak, the UK Prime Minister, has responded. His stance is to focus on the benefits that AI could bring to society and the economy, citing the recent AI-assisted discovery of a new antibiotic, which is indeed amazing.

Mr Sunak has also met the CEOs of the major AI developers to discuss what limits and guidelines are needed. He reports that the UK government is taking the risk very seriously, is assessing it, and wants to see the risks from AI carefully managed, which is very positive.

Last year, AI contributed £3.7bn to the UK economy (https://www.bbc.co.uk/news/technology-65102210), making it a very valuable sector. The Department for Science, Innovation and Technology has published a white paper on AI. The UK government's current position is against a single new regulator just to look at AI; it wants existing regulators (eg the Health and Safety Executive) to develop approaches that suit how AI is used in their individual sectors. Regulators are expected to issue their practical guidance over the next year.

We suggest that, in future, a single new regulator to oversee everything may be required, once the impact of AI on our economy and country is clearer.

What about the rest of the world?

The G7 has created a working group on AI. The worry here is how long the working group might take to come to conclusions and recommendations.

Italy blocked ChatGPT on 1 April 2023 (https://www.bbc.co.uk/news/technology-65139406). The ban related to privacy concerns under GDPR, but it shows how quickly a government can act if required.

The US Department of Defense already has a policy banning AI from autonomously launching nuclear weapons. Lawmakers in the US plan to enshrine this in law, ensuring that any launch decision remains under human control. The hope is that such measures would spur other countries, such as Russia and China, to make similar guarantees.

China has already implemented some AI regulations. Users of services now have to be notified when an AI algorithm is involved.

The EU has published the Artificial Intelligence Act. This proposes escalating regulation depending on the harm an AI product might cause, with some uses banned altogether. It would apply to everything from photo-generating software and spam filters to medical software. The EU acknowledges the speed at which the technology is evolving.

Similar to the position held by Mr Sunak and the UK government, the EU wants to make sure AI reflects our values and is a ‘force for good in society’, as well as encouraging businesses to ‘embrace AI-based solutions’.

The full EU proposal can be found here: https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

Is pausing development enough?

As we have seen, the Future of Life Institute wants a six-month pause in the development of AI, but is this enough?

Eliezer Yudkowsky leads research at the Machine Intelligence Research Institute. Writing in an opinion piece for Time, he suggests that a six-month moratorium is better than nothing, but he did not sign the FLI letter because he does not think it goes far enough: it understates ‘the seriousness of the situation.’

Yudkowsky's major concern is what happens when AI reaches ‘smarter-than-human intelligence.’ He thinks the lines around such a tipping point might be blurred and that the critical thresholds might not be obvious. Worryingly, he believes a lab might cross such a threshold without realising it.

Most worrying of all, he notes that many researchers, including himself, expect dire consequences from the existence of ‘superhumanly smart AI.’ Such an advanced intelligence would need planning for, which is not happening currently. A major concern is that an AGI might simply not care about sentient life, which could put more than just humans at risk; and if we get it wrong, we do not get a do-over.

You can read the full article here, and it is well worth a considered read: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Centre for AI Safety statement

The Centre for AI Safety has published a Statement on AI Risk. The statement simply reads: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

This statement has been signed by prominent scientists, politicians, philosophers, and business leaders, including Bill Gates, Geoffrey Hinton, Sam Altman, Emad Mostaque, Daniel Dennett, and Martin Rees.

The statement is elegant in its simplicity. It is being backed by the very people who developed AI in the first place, and that suggests that we need to take its message extremely seriously and take action now.

Final thoughts

At Parker Shaw we have been at the forefront of the sector we serve, IT & Digital Recruitment and Consulting, for over 30 years. We can advise you on all your hiring needs. If you are looking for your next job in the IT sector please check our Jobs Board for our current live vacancies at https://parkershaw.co.uk/jobs-board



