The Future is AI series: 4. Where are we heading?

By Fay Capstick

Last week in our The Future is AI series we looked at the jobs market and the impact AI could have on it. The news has recently been full of speculation and scaremongering about AI. This week we widen our inquiry to AGI (artificial general intelligence) as a whole and where we go from here: unpicking the headlines, seeing who has said what, and what governments and tech companies are doing about the AI revolution.

What has happened?

As we have discussed in previous weeks, there have recently been big advances in AI technology, with more expected, and the pace of development seems to be snowballing. Good things have happened, such as using AI to discover a new antibiotic, but there is also a risk that the same tools could be used to design chemical weapons.

What does this mean?

This is the problem: this is all uncharted territory, and while we hope AI will be a good thing for civilisation (think Star Trek), there is no guarantee of this or the path we take to get there.

It has been noted that all new developments seem to cause panic or concern among the public, such as electricity and wifi, but in this case, AGI poses extra risks that we cannot predict.

Why could AI be a problem?

AGI could be a problem because, within a short time, we might be sharing the planet with systems more intelligent than ourselves. An even bigger problem would be if these systems did not care about humans and human life, and potentially saw us as a risk to the planet and the other species on it. Yes, this sounds rather like Terminator, but we simply cannot rule out any consequences yet. We need to ensure that AGI, as a super-intelligence, is guided in the right way: to become a force for good, one that can improve daily life for all.

AI systems will be able to help create even more advanced AI. It could therefore be a relatively fast process to end up with intelligence greater than our own, which then rapidly advances to a level humans are incapable of understanding. The implications of this are daunting and perhaps beyond our control. I don’t feel the ramifications are yet being properly acknowledged and planned for; however, the issue is starting to be examined urgently around the world.

Did AI development happen quicker than we expected?

Most researchers expected AGI to be decades away. Geoffrey Hinton, a ‘godfather’ of AI, said he once believed “general purpose AI” was still “20 to 50” years away. He now thinks it will arrive in less than 20 years, with 5 years being a possibility, according to an interview with CBS. Many think GPT-4 might already be showing signs of AGI, as it can pass the Turing test (long seen as a benchmark). Time reports that the gap between AGI emerging and super-intelligence emerging might be less than a year (https://time.com/6273743/thinking-that-could-doom-us-with-ai/). This is a reality we will be confronting very soon.

In what other ways could AI be a problem?

AI could have many other side effects that we should be thinking about and planning for: job losses; mental health problems caused by the upheaval to society; anxiety and uncertainty arising from the very existence of AI; and deep fakes and their associated risks (more on this below).

Can something more intelligent than humans actually exist?

Yes, it can. Intelligence relates to information processing and is therefore not restricted to brains. Consciousness is trickier, and we are still trying to understand it, but intelligence, as information processing, can happen in a brain or in a computer: intelligence isn’t picky.

The role philosophers can play

Philosophers haven’t played much of a role at the forefront of society since natural philosophy (and with it science) split from philosophy in the 19th century. Researchers across the branches of philosophy will have plenty of work to do to make sense of the changes society will experience and the direction it should take. Ethicists and safety philosophers will be especially in demand as we seek to make sense of the new world we are creating.

Why are deep fakes an issue?

Deep fakes could become a large problem in the near future. Images and videos that are not real but look genuine could pose a big threat to democracy, as the public will not be able to tell what is real and what is fake. The media could be manipulated to have people believe that terrorist attacks have occurred, for instance, or that politicians have engaged in behaviour the electorate wouldn’t like. For non-democratic countries, deep fakes would give leaders even more opportunity to manipulate their citizens with propaganda, or potentially manipulate the citizens of democracies. A worst-case scenario could see democracies being overthrown.

Recently, a Republican advert used a deep fake image of Joe Biden (https://www.bbc.co.uk/news/uk-politics-65582386). This highlights how troubling the issue could become.

Even more alarming, a super-intelligence with its own agenda could use deep fakes to manipulate humans. That agenda might even be a ‘good’ one, such as helping to reduce climate change, but pursued in a way that is good for the planet and bad for humans.

Therefore at this early stage in the development of super-intelligence, we need to ensure that the goals and ideals of humans are protected.

The AI arms race

We seem to be at a point where tech companies are in an arms race to build the best AI. That would be a worthy goal, except that safety is not being properly considered. It is important that businesses step back and work together to create regulations and codes of conduct. Sam Altman, co-founder of OpenAI (the company behind ChatGPT), has said that the worst-case scenario for AI is ‘lights out’ for us all.

Next week we shall conclude our deep-dive series on AI by looking at what is being done to control this emerging technology.


Final thoughts

At Parker Shaw we have been at the forefront of the sector we serve, IT & Digital Recruitment and Consulting, for over 30 years. We can advise you on all your hiring needs. If you are looking for your next job in the IT sector please check our Jobs Board for our current live vacancies at https://parkershaw.co.uk/jobs-board



