Time and time again we hear people like Elon Musk saying AI is our biggest existential threat. The latest development is a petition, signed by over a thousand people, calling for a six-month pause on the development of large language model AIs. Furthermore, Eliezer Yudkowsky was quoted as saying 'Pausing AI Developments Isn't Enough. We Need To Shut It All Down'. Are they right, though? Can AI become a threat to humanity, and what can we do about it? In theory maybe, but in practice I really doubt it.
What Elon and these other people are referring to as the greatest threat, while sometimes unclear, is most likely a sentient super AI like Skynet. I don't see this as a plausible path. I believe that AI and technology are the only way the human race can realistically solve the challenges we are facing, and as such they will be our salvation. At worst, I think a sentient super AI would be indifferent or neutral toward us.
There is simply no evidence for the claim that it would doom us; quite the opposite. If we look at humans, it is seldom those with the most knowledge who choose destruction or domination. This rhetoric reminds me of the anti-car protests seen when cars were first introduced, which can be summed up as a fear of technological change.
I’m much more concerned about how companies and people will use AI than about the technology itself. Because of that, I see our collective AI efforts leading to very different outcomes depending on how we let the situation develop. What we shouldn’t do is leave it up to chance or to the ethics of companies and individuals to decide how AI is used; we already know how that ends. We have companies like Clearview AI scraping people's images without consent to build databases now used by police, for example. Things like this should simply not be allowed to happen, legally speaking.
Based on recent history, without regulation we can expect to see companies maximizing their profits, and people doing harm in the name of their companies or to gain power, while hiding the real costs and consequences to society. Take for instance Exxon, the oil giant, which already knew in the 1970s what effect fossil fuels have on the climate but chose to suppress that information. Had they not maximized short-term profits, we could be in a much better position now to solve climate change. Interestingly, Elon also said in an interview that he is increasingly confident we need to regulate the use of AI, so maybe we actually are aligned in our thinking.
We should expect the same when it comes to AI, especially now with generative AI taking off. There are already reports of AI being used to scam people out of money. Without clear guidelines and laws for applying and using AI, the negative consequences are much more likely to materialize. The AI Act that the EU is working on is a step in the right direction; however, it will mean little if we cannot get a worldwide agreement on the use of AI. We have seen as much with the privacy regulation GDPR, which was adopted in 2016.
While I'm generally positive towards the legislation, the effects of not having a globally agreed standard are starting to show. According to this paper, there were 12 privacy regulations across the globe in 2022, with several more on the way. You can imagine how hard it is for companies to make sure they are compliant with all of those. We really don't want that for AI, because it would water down the effectiveness of the legislation and make it hard for smaller companies to compete.
The high-level focus should be on getting the different governments to agree on how AI can be used, and while this is a massive undertaking, it needs to be done. We need rigorous agreements on which use cases to ban, and then we need to police companies globally, strictly and with heavy fines, when those agreements are broken.
The best way to prevent individual crime, on the other hand, is to lift people out of poverty, and that is done by making legitimate opportunities available so people don’t need to turn to crime. This is easier said than done; however, one approach that I think works is making it easier to start companies, the way Finland has done by offering grants and loans to founders.
As individuals, we should push to get more technology-literate people into government; I would even go as far as to demand that new positions be created that must be filled by people who understand AI and technology in general. It’s frightening to me how few people in governments across the world actually understand anything about the technologies we will need to rely on to save us from our past mistakes.
Finally, I want to say that it is our joint responsibility to make sure that AI becomes a force for good. We need to vote technology-literate people into office, and they need to set up clear rules for AI. Additionally, people should be educated on the ethics of AI. As societies, we have been able to teach people what is right and wrong, and this needs to expand to include how to use new technologies. If we work together towards this, we can make sure the AI utopia happens.