AI

If you ever want to chat about AI, feel free to email me.

Articles

AGI Ruin: A List of Lethalities

Machine Intelligence, part 1

Detached Lever Fallacy

MR Tries The Safe Uncertainty Fallacy

MIRI announces new "Death With Dignity" strategy

AI Could Defeat All Of Us Combined

The AI Control Problem

The Unreasonable Effectiveness of Recurrent Neural Networks

The Bitter Lesson

The Apocalypse is Coming or Why I’ve Been Existentially Depressed

The best way so far to explain AI risk

Simulators

A Silicon Person

The Merge

The Scaling Hypothesis

Becoming strange in the Long Singularity

Tick, tock, tick, tock… BING

How to navigate the AI apocalypse as a sane person

There's No Fire Alarm for Artificial General Intelligence

When we change the efficiency of knowledge operations, we change the shape of society.

What Are You Tracking In Your Head?

AI Researchers On AI Risk

The AI Revolution: The Road to Superintelligence

Papers

The singularity: A philosophical analysis

The Vulnerable World Hypothesis

Strategic Implications of Openness in AI Development

Artificial Intelligence as a Positive and Negative Factor in Global Risk

Disjunctive Scenarios of Catastrophic AI Risk

The Ethics of Artificial Intelligence

Books

Superintelligence

The Singularity Is Near

Videos

Can we build AI without losing control over it?

We're All Gonna Die

Will Superintelligent AI End the World?

Mesa-Optimizers and Inner Alignment

10 Reasons to Ignore AI Safety

Why Would AI Want to do Bad Things? Instrumental Convergence

Intelligence and Stupidity: The Orthogonality Thesis

Avoiding AGI Apocalypse

Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

EleutherAI, Conjecture

Sorting Pebbles Into Correct Heaps

Is Technological Singularity Inevitable?

After AI

Coexistence of Humans & AI

S-Risks: Fates Worse Than Extinction

2027 AGI, China/US Super-Intelligence Race, & The Return of History

Reasoning, RLHF, & Plan for 2027 AGI

Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat

Preventing an AI Takeover

Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

Building AGI, Alignment, Spies, Microsoft, & Enlightenment

Robert Miles