Talks
Will Superintelligent AI End the World? | Eliezer Yudkowsky
Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all.
So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent?
In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.