
Yudkowsky Critiques OpenAI's Stated Goals

Eliezer Yudkowsky, cofounder of the Machine Intelligence Research Institute, argues that unaligned AI may cause human extinction and calls for a strict international ban to prevent catastrophe.

  • In his new book with Nate Soares, Eliezer Yudkowsky argues that near-future AGI would bring global Armageddon and calls for an international ban enforced by measures up to nuclear retaliation; of OpenAI's launch he said, "That was the day I realized that humanity probably wasn’t going to survive this."
  • Yudkowsky cofounded the Singularity Institute for Artificial Intelligence, later renamed the Machine Intelligence Research Institute, to head off doomsday AI scenarios, and he helped define the alignment problem.
  • High-profile tech figures such as Sam Altman have praised Yudkowsky, who has also produced extensive fan fiction, including 1.8M words of Harry Potter and the Methods of Rationality, reflecting his unconventional output.
  • OpenAI's Superalignment team has taken up alignment ideas inside a multi-billion-dollar company that supports the US economy, while David Krueger of the University of Montreal says superhuman AI will kill everybody.
  • Yudkowsky's personal tragedy, the 2004 death of his brother Yehuda, led him to donate $1,800 to the Machine Intelligence Research Institute and to cite a 99.5% catastrophe risk in his longtermist calculus.
Insights by Ground AI

20 Articles


Bias Distribution

  • 45% of the sources are Center



Time Magazine broke the news in the United States on Thursday, September 7, 2023.