Six months on - what has happened?

By Fay Capstick

If you cast your mind back six months, you’ll remember that we were all (well, some of us were) panicking that AI was going to cause the extinction of humanity. Tech leaders and governments were worried about how fast things were progressing and an open letter was signed calling for a six-month pause on the development of AI. Six months have passed, and this week we ask, what has happened?

Remind me, what happened six months ago?

ChatGPT had arrived with a bang, and we were all chatting with it to see what it could do. Spoiler alert - this turned out to be a lot. The chatbot, created by OpenAI under CEO Sam Altman, could give advice, write songs, complete college essays, and generate computer programs. The worry was that many people would be put out of their jobs.

Those with a more futuristic leaning were worried that Artificial General Intelligence (AGI) could soon emerge and that humanity had not put in place any guidelines for how to deal with it. The longer-term worry was that AI could threaten humanity's very existence should it evolve far enough - a scenario that could quickly become more than a thought experiment.

So what was this letter?

The Future of Life Institute published an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This seems simple enough. The letter had many prominent signatories, including Elon Musk.

Did development stop?

Um, no.

In fact, development seems to be speeding up, which would be great if humanity had a plan for the possible implications of AI. Currently, there is neither a plan nor a consensus. Tech companies are simply rushing to get ahead of competitors and rival countries, with little thought for the consequences.

What about Elon Musk?

If you’ll recall, Elon Musk was one of the signatories, so you’d expect him to be on board with the plan to pause things. However, this isn’t what happened; instead, he started a new company to develop AI. The new venture is called xAI (Elon does seem obsessed with the letter X). Musk wants it to compete with all the current players (Google, Microsoft, OpenAI) and exceed what they have achieved. In fact, his goal is to use AI to ‘understand the nature of the universe’ (https://www.wired.com/story/fast-forward-elon-musks-xai-chatgpt-hallucinating/). Announced as it was in July, halfway through the proposed pause period, this is about as far from respecting the pause as you can get.

What about everyone else?

Google are working on their own AI, called Gemini, which they hope will exceed GPT-4 - so again, hardly respecting the pause. It sounds more like the start of a worrying AI arms race.

Anthropic (https://www.anthropic.com/), a company based in San Francisco, has built its business around AI safety, with the stated aim of creating beneficial AI. Even so, we should question whether any private company should be deciding what is safe or beneficial for society.

In fact, no company seems to have paused development.

What do the Future of Life Institute say?

The Future of Life Institute, the organisation behind the letter, say that they never expected companies to undertake a voluntary pause, Wired reports (https://www.wired.com/story/fast-forward-elon-musk-letter-pause-ai-development/). This raises the question of what the point of the letter was in the first place. It seems that the real aim was to generate a discussion about the safety of developing AI, and on that front it has succeeded: many people are now very worried.

Why did development continue?

A problem is that many of the people who happily signed the letter have simply carried on their work on AI. For many, this is simply because it is their job; the alternative would be to find other employment.

Big tech companies have not stopped development because they want to stay ahead of their competitors, and a six-month pause would be a disaster for them. Further, for the publicly traded ones it would not make business sense.

There is also the feeling that companies developing AI believe that it won't be ‘their’ AI that causes problems for humanity, in the same way that your child is always the good one and never the troublemaker. However, keeping AI safe won't be possible without a worldwide code that anyone developing AI must adhere to. This problem will only get worse when AGI inevitably emerges.

It has also been suggested that pausing AI development could trigger economic problems due to job losses, loss of economic confidence, and issues of social stability. This could be true and would be a reason that countries, keen to protect their economy, would want development to carry on.

All of these reasons are plausible, but none of them addresses the consequences that unchecked and unregulated AI development could have.

So what next then?

The discussion that the FLI letter has generated is welcome and the public is certainly more aware of where this technology is going.

In May, a statement was signed by, among others, Sam Altman (CEO of OpenAI), warning that their technology may ‘someday pose an existential threat to humanity’. It is therefore extremely short-sighted that development is surging ahead. Humanity has a choice to make, and it seems that the choice is being made for us by a few tech bros in California.

AI Safety Summit

The UK is taking the issue very seriously, and on 1st-2nd November there will be an AI Safety Summit taking place at Bletchley Park. It will be the world’s first summit on artificial intelligence safety. The UK government hope that ‘talks will explore and build consensus on rapid, international action to advance safely at the frontier of AI technology.’ We think this is excellent, as it keeps the UK at the forefront of regulation, and helps our economy to prepare for the boost and benefits that AI will bring.

You can learn more here: https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november. We will, of course, follow up on this after the summit.

Our view

We think that more needs to happen: governments worldwide need to step in now to collectively regulate the development of a technology that could be detrimental to the future of humanity. There is a small window of opportunity to put regulation in place so that humanity can take control of this process, rather than letting it take control of us.

Final thoughts

At Parker Shaw, we have been at the forefront of the sector we serve, IT & Digital Recruitment and Consulting, for over 30 years. We can advise you on all your hiring needs.

If you are looking for your next job in the IT sector, please check our Jobs Board for our current live vacancies at https://parkershaw.co.uk/jobs-board
