There is a 50 per cent chance that artificial intelligence (AI) will wipe out humanity, according to Max Tegmark, a physicist at the Massachusetts Institute of Technology (MIT). Tegmark said that humans, Earth's smartest species, are responsible for the extinction of 'lesser' species such as the dodo. Likewise, if AI becomes smarter than humans, we could meet a similar fate, he said, DailyMail reported.
"About half of all other species on earth have already been exterminated by us humans," the AI expert was quoted as saying. "Because we were smarter, they had no control. What we are warning about now is that if we humans lose control over our society to machines that are much smarter than us, then things can go just as bad for us," he noted.
Tegmark is among a growing number of industry experts who are warning of the dangers of AI. Last month, leaders from OpenAI, Google DeepMind, Anthropic and other AI labs warned in an open letter that future systems could be as deadly as pandemics and nuclear weapons.
"Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads a one-sentence statement released by the Center for AI Safety, a nonprofit organisation. The open letter was signed by more than 350 executives, researchers and engineers working in AI.
The statement comes at a time of growing concern about the potential harms of artificial intelligence, driven by the advent of ChatGPT and other chatbots and by fears that AI could eliminate millions of white-collar jobs.
However, some experts argue that AI's existential threats may currently be "overblown". Gary Marcus, a New York University professor, said AI is a "threat to democracy, not to humanity", TechCrunch reported.
He said AI's potential threat to democracy ranges "from misinformation that is deliberately produced by bad actors, to accidental misinformation. You can also use these tools to manipulate other people and probably trick them into anything you want". He stated that rather than being focussed on the annihilation of humanity, governments should guard against AI's ultra-fast development and adoption.