Life 3.0 - Image via Penguin Random House

Tegmark Discusses Mind-Bending Future of Life, AI, and the Cosmos

IEEE Spectrum recently published a fascinating interview with Max Tegmark, the MIT physics professor who has emerged as one of the most influential public voices on AI and the author of the newly released treatise Life 3.0. In the interview, Tegmark covers a broad range of ideas, including the fate of humanity, his own views on the future of AI and superintelligence (with commentary on some of the 12 possible AI end states he describes in Life 3.0), and the imperative for preemptive safety measures in AI research.


Spectrum: Is there any one of these 12 scenarios that seems most plausible to you?

Tegmark: I really don’t like when people ask what will happen in the future, as if the future is just going to happen to us. I’d rather they ask, what can we do today to make the future good?

We can create a great future with technology as long as we win the race between the growing power of technology and the wisdom with which we manage it. At the moment we’re still using our old outdated strategy of learning from mistakes. We invented fire, screwed up a bunch of times, and invented the fire extinguisher. We invented the car, screwed up a bunch of times, and invented the safety belt. But with things like nuclear weapons and superintelligent AI, we don’t want to learn from mistakes. We need to get it right the first time, because that might be the only time we have.

That means working on AI safety, which we’re being very flippant about now. The vast majority of the billions being spent on AI research is going to make the tech more powerful. I’m not advocating slowing down that research, I’m advocating for also accelerating research on AI safety.

(Click here to read the rest of the interview at IEEE Spectrum)