Author of Life 3.0 Talks Ethics & the Future of AI
Swedish-American researcher and MIT professor Max Tegmark brought AI into the mainstream earlier this year with the release of his book Life 3.0: Being Human in the Age of Artificial Intelligence. In this bestselling treatise, well received in both popular and academic circles, Tegmark proposes that AI represents a forthcoming third epoch of life as we know it. It follows life's advent on Earth ("life 1.0") and our own evolution as a species ("life 2.0"), which gave rise to the human ability to create and share cultural and technological information, flourishing across generations and continents.
Tegmark asserts that the rise of this new "life 3.0" will represent not only the largest technological achievement in the history of humanity, but also a profound evolutionary shift. Superintelligent AI may herald life's transcendence from the narrow and limited biological forms we know and love into a new and unbounded technological realm, where life can alter not only its own software (which Tegmark notes humans already do in the form of learning and culture) but also its own hardware (such as its "body," be it biological or technological).
With what could literally be the future of all life in the universe, let alone our society, at stake, Tegmark argues that we should approach AI research with caution and guarded optimism. To this end he co-founded the Future of Life Institute, a think tank dedicated to preserving humanity in the face of existential risks such as nuclear warfare, climate change, and of course AI.
In late September 2017, a month after the release of Life 3.0, Tegmark gave this talk, presented by Town Hall Seattle, in which he discusses the ethical problem posed by AI and some potential solutions, including some of the Asilomar AI Principles, developed by Tegmark and other AI researchers at their inaugural conference on AI safety. In this excellent survey of some of the ideas he proposes in Life 3.0, Tegmark compares the current state of AI research with the Apollo 11 moon mission and offers his ideas on how we can "steer" the engine of AI toward beneficial and bountiful aims.