Humanity Is at “Risk of Extinction From AI”—How Can It Be Stopped?

Yet again, a group of leading AI researchers and tech firms has warned that the current rapid development of artificial intelligence could spell disaster for humankind. The risks span nuclear conflict, disease, misinformation, and runaway AI lacking oversight, all of which present an immediate threat to human survival. But it won’t be lost on anyone that many of those warnings come from the same people leading AI development and pushing artificial intelligence tools and programs at their respective companies.

Why Are Tech Companies and AI Scientists Warning About AI Risk?

On May 30, 2023, more than 350 AI researchers, developers, and engineers released a cosigned statement warning of the threat AI poses to humanity.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


Signatories to the Safe.ai statement included Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic, along with Turing Award winners Geoffrey Hinton and Yoshua Bengio (though Yann LeCun, who shared the same award, declined to sign). The list reads like a who’s who of the AI development world, the very people leading the way with AI, and yet here they all are, warning that it could spell disaster for humankind.

It’s a short statement that makes the threats clear, specifically citing two key areas that could endanger the world as we know it: nuclear warfare and global health issues. While the threat of nuclear conflict is a worry, the risk of a pandemic is a more tangible threat to most.


However, it isn’t just a global pandemic that could cause AI-related health issues. The Guardian reports on several other AI health risks that could impact humans if not checked before widespread usage. One example involved AI-powered oximeters that “overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.”

Furthermore, it’s not the first time a group of tech leaders has called for a pause or serious reassessment of AI development. In March 2023, Elon Musk and other AI researchers signed a similar call-to-action requesting a moratorium on AI development until more regulation could be implemented to help guide the process.


What Is the Risk of AI?

Most of the risks associated with AI, at least in this context, relate to the development of runaway AI technology that exceeds human capabilities, eventually turning on its creators and wiping out life as we know it. It’s a story covered countless times in science-fiction writing, but the reality is now closer than we may think.

Large language models (the technology that underpins tools like ChatGPT) are drastically increasing in capability. However, tools like ChatGPT have many issues, such as inherent bias, privacy concerns, and AI hallucination, not to mention their susceptibility to jailbreaks that push them outside the boundaries of their programmed terms and conditions.


As large language models grow and gain more data points to call upon, along with internet access and a greater understanding of current events, AI researchers fear they could one day, in OpenAI CEO Sam Altman’s words, “go quite wrong.”

How Are Governments Regulating AI Development to Stop Risks?

AI regulation is key to preventing risks. In early May 2023, Sam Altman called for more AI regulation, stating that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

Then, the European Union announced the AI Act, a regulation designed to provide a much stronger framework for AI development throughout the EU (with many of its provisions likely to spill into other jurisdictions). Altman initially threatened to pull OpenAI out of the EU, but then walked his threat back and agreed that the company would comply with the very AI regulation he’d previously asked for.


Regardless, it’s clear that regulation of AI development and use is important.

Will AI End Humanity?

As much of the debate around this topic is built on hypotheticals about the power of future versions of AI, there are questions about how long-lasting and effective any AI regulation can be. How do you best regulate an industry that is already moving at a thousand miles a minute, where breakthroughs in development happen daily?

Furthermore, there is still some doubt about the capabilities of AI in general and where it will end up. While those fearing the worst point to artificial general intelligence becoming a human overlord, others point out that current versions of AI struggle with even basic math questions and that fully self-driving cars are still a ways off.

It’s hard not to agree with those looking to the future. Many of the folks shouting loudest about the issues AI may pose are in the driving seat, looking at where we might be heading. If they’re the ones demanding AI regulation to protect us from a potentially horrendous future, it might be time to listen.
