Neil deGrasse Tyson shares Musk’s view that AI is ‘our biggest existential crisis’




Legendary astrophysicist Neil deGrasse Tyson shares the view of Tesla CEO Elon Musk that AI poses mankind’s “biggest existential crisis”.

Musk made his now-infamous comment at the South by Southwest (SXSW) conference in Austin, Texas, last year as part of a call for regulation. Musk warned: “I think that’s the single biggest existential crisis that we face and the most pressing one.”

A year later, Neil deGrasse Tyson was asked what he believes to be the biggest threat to mankind during an episode of his StarTalk radio show.

Dr Tyson appeared alongside Josh Clark, host of the “Stuff You Should Know” and “The End of The World” podcasts, who was asked the same question.

“I would say that AI is probably our biggest existential crisis,” Clark said. “The reason why is because we are putting onto the table right now the pieces for a machine to become super intelligent.”

Clark went on to explain that we don’t yet know how to fully define, let alone program, morality and friendliness.

“We make the assumption that if AI became super intelligent that friendliness would be a property of that intelligence. That is not necessarily true.”

Dr Tyson chimed in to say he initially had a different answer to the question of what poses the greatest threat to mankind. “I had a different answer, but I like your answer better than the answer I was going to give,” he said.

“What won me over with your argument was that if you locked AI in a box, it would get out. My gosh, it gets out every time. Before I was thinking, ‘This is America, AI gets out of control, you shoot it’… but that does not work, because AI might be in a box, but it will convince you to let it out.”

Dr Tyson did not say what his original answer would have been, but he has warned in the past about the dangers of large asteroids impacting the Earth and has joined calls for action on climate change.

Earlier this week, AI News reported on comments made by Pope Francis, who also warned of the dangers of unregulated AI. Pope Francis believes that a failure to properly consider the moral and ethical implications of the technology risks a ‘regression to a form of barbarism’.

(Image by Thor Nielsen / NTNU under CC BY-SA 2.0 license)


