If you ever want to chat about AI, feel free to email me.
Articles
AGI Ruin: A List of Lethalities
MR Tries The Safe Uncertainty Fallacy
MIRI announces new "Death With Dignity" strategy
AI Could Defeat All Of Us Combined
The Unreasonable Effectiveness of Recurrent Neural Networks
The Apocalypse is Coming or Why I’ve Been Existentially Depressed
The best way so far to explain AI risk
Becoming strange in the Long Singularity
How to navigate the AI apocalypse as a sane person
There's No Fire Alarm for Artificial General Intelligence
When we change the efficiency of knowledge operations, we change the shape of society.
What Are You Tracking In Your Head?
The AI Revolution: The Road to Superintelligence
Papers
The singularity: A philosophical analysis
The Vulnerable World Hypothesis
Strategic Implications of Openness in AI Development
Artificial Intelligence as a Positive and Negative Factor in Global Risk
Disjunctive Scenarios of Catastrophic AI Risk
The Ethics of Artificial Intelligence
Books
Videos
Can we build AI without losing control over it?
Will Superintelligent AI End the World?
Mesa-Optimizers and Inner Alignment
10 Reasons to Ignore AI Safety
Why Would AI Want to do Bad Things? Instrumental Convergence
Intelligence and Stupidity: The Orthogonality Thesis
Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Sorting Pebbles Into Correct Heaps
Is Technological Singularity Inevitable?
S-Risks: Fates Worse Than Extinction
2027 AGI, China/US Super-Intelligence Race, & The Return of History
Reasoning, RLHF, & Plan for 2027 AGI
Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat
Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future