Life 3.0
| Book Author | Max Tegmark |
|---|---|
| Published | August 29, 2017 |
| Pages | 364 |
| Greek Publisher | Τραυλός |
Being Human in the Age of Artificial Intelligence
What’s it about?
Life 3.0 (2017) is a tour through the current questions, ideas and research involved in the emerging field of artificial intelligence. Author Max Tegmark provides us with a glimpse into the future, sketching out the possible scenarios that might transpire on earth. Humans might fuse with machines; we might bend machines to our will; or, terrifyingly, intelligent machines could take over.
About the author
Max Tegmark is a professor of physics at MIT. He is president of the Future of Life Institute and has been featured in various science documentaries. Tegmark is also the author of Our Mathematical Universe.
Basic Key Ideas
For thousands of years, life on earth has been progressing and evolving. No species exemplifies this more than humans.
Max Tegmark imagines us now moving toward the final evolutionary stage: Life 3.0. In this era of humanity, technology will live independently, designing both its own hardware and software, and the repercussions for the very existence of humankind are immense.
Such artificial life does not yet exist on earth. However, we are faced with the emergence of non-biological intelligence, commonly known as artificial intelligence (AI).
In these blinks, you’ll be taken on a journey charting possible versions of the future. You’ll also learn what exactly is involved in the creation of AI and how AI differs from human intelligence. Along the way, you’ll grapple with some of the biggest philosophical questions concerning what it means to be human.
In these blinks, you’ll learn:
- about the holy grail of AI research;
- what kind of chaos exists in your coffee cup; and
- how AI might put jobs at risk.
The story of how life emerged on earth is well known. Some 13.8 billion years ago, the Big Bang brought our universe into being. Then, about four billion years ago, atoms on earth arranged themselves in such a way that they could maintain and replicate themselves. Life had arisen.
As the author posits, life can be classified into three categories according to levels of sophistication.
The first stage of life, Life 1.0, is simply biological.
Consider a bacterium. Every aspect of its behavior is coded into its DNA. It’s impossible for it to learn or change its behavior over its lifetime. The closest it comes to learning or improvement is evolution, but that takes many generations.
The second stage is cultural, Life 2.0.
Humans are included here. Just like the bacterium, our “hardware” or bodies have evolved. But unlike simpler life-forms, we can acquire new knowledge during our lifetimes. Take learning a language. We can adapt and redesign ideas that we might call our “software.” And we make decisions using this knowledge.
The final stage is the theoretical Life 3.0, a form of technological life capable of designing both its hardware and software. Although such life doesn’t yet exist on earth, the emergence of non-biological intelligence in the form of AI technologies may soon change this.
Those who hold opinions about AI can be classified by how they feel about the emerging field’s effect on humanity.
First up are the digital utopians. They believe that artificial life is a natural and desirable next step in evolution.
Second, there are the techno-skeptics. As the name suggests, they don’t believe that artificial life will have an impact anytime soon.
Finally, there’s the beneficial AI movement. These people aren’t sold on the idea that AI will necessarily bring benefits to humans. They therefore advocate that AI research be specifically directed toward possible universally positive outcomes.
What makes us human? Our ability to think and learn? One might think so.
Researchers in AI, however, generally reject this notion. They claim that the capacities for memory, computation, learning and intelligence have nothing to do with human flesh and blood, let alone carbon atoms.
Let’s begin with intelligence. Though there’s no universally accepted single definition, the author likes to think of intelligence as the “ability to accomplish complex goals.”
Machines might be increasingly able to outperform us in defined tasks such as playing chess, but human intelligence is uniquely broad. It can encompass skills like language learning and driving vehicles.
However, even though artificial general intelligence (AGI) doesn’t yet exist, it’s clear that intelligence isn’t just a biological faculty. Machines can complete complex tasks too.
Intelligence – just like the capacities for memory, computation and learning – is what's known as substrate independent. That is, it doesn't depend on any particular underlying material substrate.
So, for example, human brains can store information, but so can floppy disks, CDs, hard drives, SSDs and flash memory cards, even though they're not made of the same material.
But before we get to what this means for computing, we need to understand what computing is.
Computing involves the transformation of information. So, the word “hello” might be transformed into a sequence of zeros and ones.
But the rule or pattern which determines this transformation is independent of the hardware that performs it. What’s important is the rule or pattern itself.
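As a concrete sketch of this idea, here is a minimal Python example (not from the book) that transforms the word "hello" into a sequence of zeros and ones using 8-bit ASCII. The rule is what matters; any hardware that applies it produces the same pattern.

```python
# Transform the word "hello" into a sequence of zeros and ones.
# The rule (here, 8-bit ASCII) is independent of the hardware that applies it.

def to_bits(text: str) -> str:
    """Encode each character as its 8-bit binary ASCII code."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    """Invert the transformation: recover the text from the bit pattern."""
    return "".join(chr(int(b, 2)) for b in bits.split())

encoded = to_bits("hello")
print(encoded)             # 01101000 01100101 01101100 01101100 01101111
print(from_bits(encoded))  # hello
```

Because the transformation is just a rule over symbols, the same pattern could equally be stored in neurons, magnetic domains or flash cells.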
This means that it’s not only humans who can learn – the same rules and patterns could exist outside of the human brain too. AI researchers have made huge strides developing machine learning: machines that can improve their own software.
So, if memory, learning, computation and intelligence aren’t distinctly human, then what exactly makes us human? As research in AI continues apace, this question is only going to prove harder to answer.
Until now, AI has been applied fairly narrowly in limited fields like language translation or strategy games.
In contrast, the holy grail of AI research is the production of AGI that would operate at a human level of intelligence.
But what would happen if this holy grail were found?
For starters, the creation of AGI might result in what’s known to AI researchers as an intelligence explosion.
An intelligence explosion is a process by which an intelligent machine gains superintelligence, a level of intelligence far above human capability.
It would achieve this through rapid learning and recursive self-improvement: an AGI could design an even more intelligent machine, which could in turn design a still more intelligent one, and so on. This runaway cycle is what would allow machines to far surpass human intelligence.
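The runaway dynamic described above can be caricatured as a toy model. The Python snippet below is purely illustrative, and the growth rule and all numbers are invented assumptions, not anything from the book: each generation's improvement factor grows with its current intelligence, so growth keeps accelerating.

```python
# Toy model of an "intelligence explosion": each generation of machine
# designs a successor, and the improvement factor itself grows with
# intelligence. All numbers here are arbitrary illustrations.

def explosion(start: float = 1.0, generations: int = 10) -> list[float]:
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        # Assumption: a smarter designer achieves a bigger improvement step.
        improvement = 1.0 + 0.1 * current
        levels.append(current * improvement)
    return levels

levels = explosion()
# Growth accelerates: the ratio between successive levels keeps increasing.
ratios = [b / a for a, b in zip(levels, levels[1:])]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

The point of the sketch is only the shape of the curve: because each step's multiplier depends on the current level, the process is faster than exponential.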
What’s more, superintelligent machines could take over the world and cause us harm, no matter how good our intentions.
Let’s say, for example, that humans program a superintelligence that is concerned with the welfare of humankind. From the superintelligence’s perspective, this would probably be akin to a bunch of kindergartners far beneath your intelligence holding you in bondage for their own benefit.
Quite probably you would find this a depressing and inefficient situation and take matters into your own hands. And what do you do with incompetent, annoying human obstacles? Control them, or better yet, destroy them.
But maybe we’re getting ahead of ourselves; let’s look at some other, less terrifying, scenarios that might occur.
Whether we like it or not, the race toward AGI is underway.
But what would we like the aftermath of attaining it to look like?
For instance, should AIs be conscious? Should humans or machines be in control?
We have to answer basic questions, as we don’t want to end up in an AI future for which we’re unprepared, especially one which could do us harm.
There are various aftermath scenarios. These vary from peaceful human–AI coexistence to AIs taking over, leading to human extinction or imprisonment.
The first possible scenario is the benevolent dictator. A single benevolent superintelligence would rule the world, maximizing human happiness. Poverty, disease and other low-tech nuisances would be eradicated, and humans would be free to lead a life of luxury and leisure.
In the same vein, there’s a scenario involving a protector god, where humans would still be in charge of their own fate, but there would be an AI protecting us and caring for us, rather like a nanny.
Another scenario is the libertarian utopia. Humans and machines would peacefully coexist. This would be achieved through clearly defined territorial separation. Earth would be divided into three zones. One would be devoid of biological life but full of AI. Another would be human only. There would be a final mixed zone, where humans could become cyborgs by upgrading their bodies with machines.
This scenario is a little fantastical, however, as there's nothing to stop AI machines from disregarding humans' wishes.
Then there’s the conquerors’ scenario, which we looked at in the last blink. This would see AIs destroy humankind, as we’d be seen as a threat, a nuisance or simply a waste of resources.
Finally, there’s the zookeeper scenario. Here a few humans would be left in zoos for the AIs’ own entertainment, much like we keep endangered panda bears in zoos.
Now that we’ve examined possible AI-related futures, let’s look at the two largest obstacles to current AI research, namely goal-orientedness and consciousness.
There’s no doubt that we humans are goal-oriented. Think about it: even something as small as successfully pouring coffee into a cup involves completing a goal.
But actually, nature operates the same way. Specifically, it has one ultimate purpose: destruction. Technically, this is known as maximizing entropy, which in layperson's terms means increasing messiness and disorder. When entropy is high, nature is "satisfied."
Let’s return to the cup of coffee. Pour a little milk in, then wait a short while. What do you see? Thanks to nature, you now have a lukewarm, light brown, uniform mixture. Compared to the initial situation, where two liquids of different temperatures were clearly separate, this new arrangement of particles is indicative of less organization and increased entropy.
On a bigger scale, the universe is no different. Particle arrangements tend to move toward increased levels of entropy, resulting in collapsing stars and an expanding universe.
This goes to show how crucial goals are, and currently, AI scientists are grappling with the problem of which goals AI should be set to pursue.
After all, today’s machines have goals too. Or rather, they can exhibit goal-oriented behavior. For instance, if a heat-seeking missile is hot on your tail, it’s displaying goal-oriented behavior.
But should intelligent machines have goals at all? And if so, who should define those goals? For instance, Marx and Hayek each had a distinct vision when it came to the future of the economy and society, so they would undoubtedly set very different goals for AI.
Of course, we could begin with something simple, like the Golden Rule that tells us to treat others as we would ourselves.
But even if humanity could agree on a few moral principles to guide an intelligent machine’s goals, implementing human-friendly goals would be trickier yet.
First of all, we’d have to make an AI learn our goals. This is easier said than done because the AI could easily misunderstand us. For instance, if you told a self-driving car to get you to the airport as fast as possible, you might well arrive covered in vomit while being chased by the police. Technically, the AI adhered to your stated wish, but it didn’t really understand your underlying motivation.
The next challenge would be for the AI to adopt our goals, meaning that it would agree to pursue them. Just think of some politicians you know: even though their goals may be clear, they still fail to convince large swaths of the population to adopt the same goals.
And finally, the AI would have to retain our goals, meaning that its goals wouldn’t change as it undergoes self-improvement.
Huge amounts of scientific research are currently being devoted to just these ideas.
The question of what consciousness is and how it relates to life is hardly new. AI researchers are faced with the same age-old issue. More specifically, they wonder how lifeless matter could become conscious.
Let’s come at it from a human perspective first. As a physicist like the author would put it, conscious human beings are just “food rearranged,” meaning that the atoms we ingest are simply rearranged to form our bodies.
What interests AI researchers, then, is the rearrangement that intelligent machines would have to undergo to become conscious.
It shouldn’t be a surprise that no one has an answer right now. But to get closer, we have to grasp what’s involved in consciousness.
It’s tricky. We might like to imagine that consciousness has something to do with awareness and human brain processes. But we’re not actively aware of every brain process. For example, you’re typically not consciously aware of everything in your field of vision. It’s not clear why there’s a hierarchy of awareness, or why one type of information is more important than another.
Consequently, multiple definitions of consciousness exist. But the author favors a broad definition known as subjective experience, which allows a potential AI consciousness to be included in the mix.
Using this definition, researchers can investigate the notion of consciousness through several sub-questions. For instance, “How does the brain process information?” or “What physical properties distinguish conscious systems from unconscious ones?”
AI researchers have also deliberated how artificial consciousness or the subjective AI experience might “feel.”
It’s posited that the subjective AI experience could be richer than human experience. Intelligent machines could be equipped with a broader spectrum of sensors, making their sensory experience far fuller than our own.
Additionally, AI systems could experience more per second because an AI “brain” would run on electromagnetic signals traveling at the speed of light, whereas neural signals in the human brain travel at much slower speeds.
It might seem like a lot to wrap your head around, but one thing is clear: the potential impact of AI research is vast. It points to the future, but it also entails facing some of humankind’s oldest philosophical questions.
The key message in this book:
The race for human-level AI is in full swing. It’s not a question of if AGI will arrive, but when. We don’t know what exactly will happen when it does, but several scenarios are possible: humans might upgrade their “hardware” and merge with machines, or a superintelligence may take over the world. One thing is certain – humanity will have to ask itself some deep philosophical questions about what it means to be human.
Suggested further reading: The Age of Spiritual Machines by Ray Kurzweil
The Age of Spiritual Machines (1999) is your guide to the future. These blinks explain the new age of machines and what robotic intelligence will mean for life as we know it.
SECOND REVIEW FROM SHORTFORM
About Book
Life on Earth has drastically transformed since it first began. The first single-celled organisms could do little more than replicate themselves. Fast-forward to today: Humans have built a civilization so complex that it would be utterly incomprehensible to the lifeforms that came before us. Judging by recent technological strides, physicist Max Tegmark believes that an equally revolutionary change is underway—artificial intelligence may become more capable than the human brain, making us the simpler lifeforms.
In this guide, you’ll learn about cutting-edge technological theories that will help you better understand our rapidly changing world. We’ll examine the evidence that an artificial superintelligence might soon exist, explore the theoretical limits of its power, and speculate about the impact this AI might have on humanity. In our commentary, we’ll offer contrasting perspectives from other leading AI experts—those who think Tegmark’s view of AI is unrealistically alarmist and those who feel it’s an even more urgent concern than Tegmark. We’ll also update Tegmark’s ideas with news regarding AI research and development.
Judging by recent technological strides, author Max Tegmark believes that an equally revolutionary change is underway. If an amoeba is “Life 1.0,” and humans are “Life 2.0,” Tegmark contends that an artificial superintelligence could become “Life 3.0.” A power like this could either save or destroy humanity, and Tegmark argues that it’s our responsibility to do everything we can to ensure a positive outcome—before it’s too late.
(Shortform note: This view of the history and progression of life is central to a philosophy called “Dataism.” According to Yuval Noah Harari in Homo Deus, Dataists believe that lifeforms are more valuable and meaningful depending on how well they can process complex data. Thus, humans aren’t uniquely more valuable than other forms of life—we’re just the most advanced form of data processor that exists so far, and we should hand control of the planet over to AI when it surpasses us in that regard. Harari contrasts this philosophy with “Techno-Humanism,” which says that humans should continue ruling the Earth and should use technology to upgrade the human mind.)
Tegmark is a physics professor at MIT and president of the Future of Life Institute, a nonprofit dedicated to using technology to avert threats to humanity on a global scale. He wrote Life 3.0 in 2017 to popularize urgent AI-related issues and increase the chance that humanity will successfully use AI to create a better future.
We’ll begin this guide by giving some background information on the current situation: What is artificial superintelligence, and might it exist soon? Then, we’ll explain what Tegmark thinks may happen if we create an artificial superintelligence, exploring the theoretical limits of an AI’s power and how it may impact life on Earth in the long term. Finally, we’ll turn our attention to AI-related problems we’ll have to manage in the near future as well as steps we can take today to mitigate the risks of artificial superintelligence.
The Situation: Artificial Superintelligence Might Appear Soon
What Is Artificial Superintelligence?
Tegmark defines intelligence as the capacity to successfully achieve complex goals. Thus, an “artificial superintelligence” is a computer sophisticated enough to understand and accomplish goals far more capably than today’s humans. For example, a computer that could manage an entire factory at once—designing, manufacturing, and shipping out new products all on its own—would be an artificial superintelligence. By definition, a superintelligent computer would have the power to do things that humans currently can’t; thus, it’s likely that its invention would drastically change the world.
Tegmark asserts that if we ever invent artificial superintelligence, it will probably occur after we’ve already created “artificial general intelligence” (AGI). This term refers to an AI that can accomplish any task with at least human-level proficiency—including the task of designing more advanced AI.
Experts disagree regarding how likely it is that computers will reach human-level general intelligence. Some dismiss it as an impossibility, while others only disagree on when it will probably occur. According to a survey Tegmark conducted at a 2015 conference, the average AI expert is 50% certain that we’ll develop AGI by 2055.
AGI is important because, in theory, a computer that can redesign itself to be more intelligent can use that new intelligence to redesign itself more quickly. AI experts call this hypothetical accelerating cycle of self-improvement an intelligence explosion. If an intelligence explosion were to happen, the most sophisticated computer on the planet could advance from mildly useful AGI to world-transforming superintelligence in a remarkably short time—perhaps just a few days or hours.
(Shortform note: A more recent and expansive survey of 738 machine learning experts conducted in 2022 confirms that Tegmark’s survey data is still close to representative of expert opinion. The average expert is 50% certain that AI will be able to beat human workers at any task by the year 2059. Fifty-four percent of experts surveyed also believe that, once AGI exists, the odds of an intelligence explosion of some kind happening are “about even” or better on a scale from “very unlikely” to “very likely.”)
Counterargument: Why Artificial Superintelligence May Be Impossible
Some experts contend that the possibility of an artificial superintelligence, able to accomplish any goal better than humans, is a myth.
They argue that Tegmark’s definition of intelligence is overly simplistic, as it falsely assumes that intelligence is a quantity you can measure with a single metric. There’s no one mental attribute or process you can use to achieve all complex goals; rather, different goals require different kinds of intelligence. For example, bloodhounds are more intelligent than humans at distinguishing and tracking specific smells, while humans are more intelligent at writing poetry. In short, there’s no such thing as general intelligence, which means a hypothetical computer with superhuman-level general intelligence is impossible.
Critics of Tegmark’s view argue that part of the reason people believe in general intelligence is that we’re limited by the human point of view. Although it seems like humans can apply their cognitive skills to solve any problem (and therefore have general intelligence), in reality, it only seems that way because we spend all our time solving problems that the human brain is readily able to solve. We only believe that computers have the potential to be much smarter than we are because we can see them doing things humans can’t, like instantly solving complex math calculations. However, they have a different kind of intelligence, not necessarily more intelligence.
By this logic, an AI simulating the human brain wouldn’t be a true “AGI”—it would just be able to solve the kinds of problems that humans can. Additionally, such an AI probably wouldn’t be able to trigger an intelligence explosion. Even if it could improve its own programming as well as a human could, there’s no reason to believe further iterations would be able to solve problems at an exponentially higher speed. This belief relies on the assumption that the ability to solve any problem is a uniform trait you can procedurally optimize—and as we’ve discussed, this may not be the case.
Evidence That Artificial Superintelligence Is Possible
The idea that we could build a computer that’s smarter than we are may seem far-fetched, but some evidence indicates that such technology is on its way, according to Tegmark.
First, the artificial intelligence we’ve created so far is functioning more and more like AGI. Researchers have developed a new way to design sophisticated AI called deep reinforcement learning. Essentially, they’ve created computers that can repeatedly modify themselves in an effort to accomplish a certain goal. This means that machines can now use the same process to learn different skills—a necessary component of AGI. Currently, there are many human skills that developers can’t teach to computers using deep reinforcement learning, but that list is becoming shorter.
(Shortform note: By using a specific form of deep reinforcement learning called Reinforcement Learning from Human Feedback (RLHF), AI developers can get computers to optimize for vaguely defined or subjective values. For instance, the developers at OpenAI used this process to train the AI-based chatbot ChatGPT: Humans answered sample prompts to show ChatGPT what kinds of answers they wanted. Then, they allowed the chatbot to try answering prompts and scored its answers based on how accurate and desirable they were. The AI then used this “reward” data to create a model describing what qualifies as a “good” answer, allowing it to further train itself.)
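The RLHF loop described in the note above can be caricatured in a few lines. Real systems train a neural reward model from human preference rankings and then optimize a language policy against it; the toy sketch below replaces both with simple keyword weights, and every name and example in it is hypothetical.

```python
# A drastically simplified caricature of the RLHF loop. Real systems train
# neural reward models and update a policy with reinforcement learning;
# here the "reward model" is just keyword weights inferred from human
# preference pairs. Everything below is hypothetical illustration.

def train_reward_model(preferences: list[tuple[str, str]]) -> dict[str, float]:
    """Each pair is (preferred_answer, rejected_answer). Words appearing
    in preferred answers gain weight; words in rejected answers lose it."""
    weights: dict[str, float] = {}
    for good, bad in preferences:
        for word in good.split():
            weights[word] = weights.get(word, 0.0) + 1.0
        for word in bad.split():
            weights[word] = weights.get(word, 0.0) - 1.0
    return weights

def reward(answer: str, weights: dict[str, float]) -> float:
    return sum(weights.get(word, 0.0) for word in answer.split())

# Step 1: humans label which of two answers they prefer.
prefs = [("polite helpful answer", "rude answer"),
         ("helpful detailed answer", "vague answer")]
weights = train_reward_model(prefs)

# Step 2: the "policy" generates candidates and keeps the highest-reward one.
candidates = ["rude vague answer", "polite helpful answer"]
best = max(candidates, key=lambda a: reward(a, weights))
assert best == "polite helpful answer"
```

The essential structure survives the simplification: human judgments become a scoring function, and the system then steers its own outputs toward whatever that function rewards.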
Second, Tegmark asserts that given everything we know about the universe, there’s no obvious reason to believe that artificial superintelligence is impossible. Although it may seem like our brains possess unique creative powers, they store and process information in much the same way that computers do. The information in our heads is a biological pattern rather than a digital one, but the information itself is the same no matter what material it’s encoded with. In theory, computers can do everything our brains can do.
(Shortform note: Some experts disagree, arguing that although it’s theoretically possible for hardware components to run the same information processes as a human brain, it’s not necessarily true that it could do so at the same speed. The organic material of the human brain is far faster than computers at processing tons of data—we only assume otherwise because the brain processes data unconsciously. If a computer could process information like a brain, it would likely be so slow that it would defeat the purpose of simulating a brain in the first place. By this logic, it might be impossible to create a human-level AI that can improve itself faster than a human could, placing the intelligence explosion out of our grasp.)
Tegmark concedes that AGI and artificial superintelligence might still be a pipe dream, impossible to create for some reason we don’t yet see. However, he contends that if there’s even a small chance that an artificial superintelligence will exist in the near future, it’s our responsibility to do anything in our power to ensure that it has a positive impact on humanity. This is because artificial superintelligence has the power to completely transform the world—or end it—as we’ll see next.
(Shortform note: Some experts contend that because we don’t know if a world-ending artificial superintelligence is possible or how likely it is to happen, we need to pause all ongoing AI development until we understand the science enough to proceed safely. Due to current economic incentives to create profitable AI-based products, research laboratories are advancing AI as fast as they can. As a result, they’re arguably racing to see who can trigger an intelligence explosion first. A development pause (if possible) would require significant cooperation between research firms as well as governments around the world.)
The Possibilities: How Could Artificial Superintelligence Change the World?
So far, we’ve explained that an artificial superintelligence is a technology that would greatly surpass human capabilities, and we’ve argued why it’s possible that artificial superintelligence might someday exist. Now, let’s discuss what this means for us humans.
We’ll explain why a superintelligence’s “goal” is the primary factor that will determine how it will change the world. Then, we’ll explore how much power such a superintelligence would have. Finally, we’ll discuss what might happen to humanity after a powerful superintelligence enters the world, investigating three optimistic scenarios followed by three pessimistic ones.
The Outcome Depends on the Superintelligence’s Goal
Tegmark asserts that if an artificial superintelligence comes into being, the fate of the human race depends on what that superintelligence sets as its goal. For instance, if a superintelligence pursues the goal of maximizing human happiness, it could create a utopia for us. If, on the other hand, it sets the goal of maximizing its intelligence, it could kill humanity in its efforts to convert all matter in the universe into computer processors.
It may sound like science fiction to say that an advanced computer program would “have a goal,” but this is less fantastical than it seems. An intelligent entity doesn’t need to have feelings or consciousness to have a goal; for instance, we could say an escalator has the “goal” of lifting people from one floor to another. In a sense, all machines have goals.
One major problem is that the creators of an artificial superintelligence wouldn’t necessarily have continuous control over its goal and actions, argues Tegmark. An artificial superintelligence, by definition, would be able to pursue its goal more capably than humans can pursue theirs. This means that if a human team’s goal was to halt or change an artificial superintelligence’s current goal, the AI could outmaneuver them and become uncontrollable.
For example: Imagine you program an AI to improve its design and make itself more intelligent. Once it reaches a certain level of intelligence, it could predict that you would shut it off to avoid losing control over it as it grows. The AI would realize that this would prevent it from accomplishing its goal of further improving itself, so it would do whatever it could to avoid being shut off—for instance, by pretending to be less intelligent than it really is. This AI wouldn’t be malfunctioning or “turning evil” by escaping your control; on the contrary, it would be pursuing the goal you gave it to the best of its ability.
How AI Researchers Are Proactively Discouraging Misaligned Goals
Other AI researchers agree with Tegmark that artificial intelligences will have goals with high-stakes consequences, and they treat the threat of misaligned AI goals (goals that aren’t in humanity’s best interests) seriously. OpenAI, a leading AI research laboratory, is attempting to reduce the risk of developing an AI with the wrong goal in three ways.
First, they’re using copious amounts of human feedback to train their AI models (as described earlier in this guide). An AI programmed to emulate humans is less likely to adopt a goal that’s outright hostile toward humans. This tactic has yielded positive results in the models OpenAI has trained so far, and they anticipate that human input will continue to be a positive influence in the future.
Second, OpenAI is training AI to help them create more thorough, constructive feedback for future AI models. For instance, they’ve developed a program that writes critical comments about its own language output and one that fact-checks language output by surfing the web. The developers intend to use these tools to help them fine-tune the goals of other AI models and detect misaligned goals before an AI would become so intelligent that developers lose control over it.
Third, OpenAI is training AI models to research new ways to ensure goal alignment. They anticipate that future AI models will be able to invent new alignment strategies much more quickly and efficiently than humans could. Although such research-focused AIs would be very intelligent, their intelligence would be tailored for a narrower set of tasks than general-purpose AI models, making them easier to control.
Obstacles to Programming a Superintelligence’s Goal
Does this mean that we’re in the clear as long as we’re careful what goal we program into a superintelligence in the first place? Not necessarily.
First of all, Tegmark states that successfully programming an artificial superintelligence with a goal of our choosing would be difficult. While an AI is recursively becoming more intelligent, the only time we could program its ultimate goal would be after it’s intelligent enough to understand the goal, but before it’s intelligent enough to manipulate us into helping it accomplish whatever goal it’s set for itself. Given how quickly an intelligence explosion could happen, the AI’s creator might not have enough time to effectively program its goal.
(Shortform note: AI expert Eliezer Yudkowsky has an even more pessimistic perspective on this situation than Tegmark, arguing that humans would almost certainly fail to program an AI’s goal during an intelligence explosion. He asserts that we don’t understand AI systems well enough to reliably encode them with human goals: The deep learning methods we use to train today’s artificial intelligence are, by nature, unintelligible, even to the researchers doing the training. Further AI development would result in an artificial superintelligence that adopts what Yudkowsky sees as a machine’s default view of humanity—as valueless clusters of atoms. Such a superintelligence would likely have an unfavorable impact on humanity.)
Second, Tegmark argues that it’s possible for an artificial intelligence to discard the goal we give it and choose a new one. As the AI grows more intelligent, it might come to see our human goals as inconsequential or undesirable. This could incentivize it to find loopholes in its own programming that allow it to satisfy (or abandon) our goal and free itself to take some other unpredictable action.
Finally, even if an AI accepts the goals we give it, it could still behave in ways we wouldn’t have predicted (or desired), asserts Tegmark. No matter how specifically we define an AI’s goal, there’s likely to be some ambiguity in how it chooses to interpret and accomplish that goal. This makes its behavior largely unpredictable. For example, if we gave an artificial superintelligence the goal of enacting world peace, it could do so by trapping all humans in separate cages.
(Shortform note: One possible way to reduce the chance of an AI rejecting human goals or interpreting them in an inhuman way would be to focus AI development on cognition-enhancing neural implants. If we design a superintelligent AI to guide the decision-making of an existing human (rather than make its own decisions), they could collectively be more likely to respect humanist goals and interpret goals in a human way. The prospect of merging human and AI cognition is arguably less outlandish than it may seem—tech companies like Neuralink and Synchron have already developed brain-computer interfaces that allow people to control digital devices with their thoughts.)
The Possible Extent of AI Power
Why would an artificial superintelligence’s goal have such a dramatic impact on humanity? An artificial superintelligence would use all the power at its disposal to accomplish its goal. This is dangerous because such a superintelligence could theoretically gain an unimaginable amount of power—enough to completely transform our society with a negligible amount of effort.
According to Tegmark, although an artificial superintelligence is a digital program, it could easily exert power in the real world. For instance, it could make money selling digital goods such as software applications, then use those funds to bribe humans into unknowingly working for it (perhaps posing as a human hiring manager on digital job listing platforms). An AI controlling a Fortune 500-sized human task force could do almost anything—including creating robots that the AI could control directly.
Tegmark asserts that, in theory, an artificial superintelligence could eventually attain godlike power over the universe. By using its intelligence to create increasingly advanced technology, an AI could eventually create machines able to rearrange the fundamental particles of matter—turning anything into anything else—as well as generate nearly unlimited energy to power those machines.
AI’s Power in the Digital World
While Tegmark focuses primarily on the ways AI could influence the physical world, Yuval Noah Harari emphasizes the danger posed by AI’s influence solely in the digital world. For instance, AI-controlled social media accounts could earn the trust of human users, distort their view of the world, and influence their behavior for political or economic ends.
Additionally, Harari contends that this kind of AI will threaten to unravel our society long before we develop superintelligent AI—in fact, he asserts that we should be aware of this potential danger today. People already regularly rely on online resources to guide their decisions. For instance, they use product reviews to determine what to buy, and they conduct online research to determine who to vote for. AI (or people controlling AI) therefore wouldn’t need to pay workers or manufacture reality-bending technology to totally reshape human society. Transforming the digital landscape we consult every day would be enough.
There’s evidence that this kind of distortion of the digital world has already begun—for instance, social media bots were used to discredit 2017 French presidential candidate Emmanuel Macron by amplifying the spread of his leaked emails across social media platforms.
Optimistic Possibilities for Humanity
What might happen to humanity if an artificial superintelligence wields nearly unlimited power over the world in service of a single goal? Tegmark describes a number of possible outcomes, each of which results in a wildly different way of life for humans—or the end of human life.
Let’s begin by discussing three scenarios in which the AI’s goal, whatever it may be, allows humans to live relatively happy lives.
Possibility #1: Friendly AI Takes Over
First, Tegmark imagines that an artificial superintelligence could overthrow existing human power structures and use its vast intelligence to create the best possible world for humanity. No one could challenge the AI’s ultimate authority, but few people would want to, since they have everything they need to live a fulfilling life.
Tegmark clarifies that this isn’t a world designed to maximize human pleasure, which would mean continuously injecting every human with some kind of pleasure-inducing chemical. Rather, this is a world in which humans are free to continuously choose the kind of life they want to live from a diverse set of options. For instance, one human could choose to live in a non-stop party, while another could decide to live in a Buddhist monastery where rowdy, impious behavior wouldn’t be allowed. No matter who you are or what you want, there would be a “paradise” available for you to live in, and you could move to a new one at any time.
This Utopia Borrows From Existentialist Philosophy
This vision of utopia makes assumptions about humanity that align with Viktor Frankl’s existentialist philosophy in Man’s Search for Meaning. According to Frankl, the primary contributor to human happiness isn’t pleasure, but meaning—that is, humans need to feel like their actions are valuable in light of a bigger purpose.
However, Frankl contends that there isn’t one universal “meaning” of life. Rather, anything can be meaningful, and each individual must discover for themselves what their life means to them. This reveals the importance of personal choice, which is central to Tegmark’s vision of utopia. Because the same life won’t feel meaningful to everyone, an AI-created world that maximizes human happiness would need to allow humans to pursue whatever life they believe to be the most meaningful.
To return to our example, someone living a non-stop party could find meaning through the experience of togetherness, while someone living as a Buddhist monk could find meaning in pious, devoted action. If that partygoer ever realized that a permanent party made them feel purposeless, they could cross over to the Buddhist sector and find meaning there.
In contrast, a world in which people are constantly taking a pleasure-inducing drug rather than living exciting lives might be OK for the people experiencing it, but most people today would probably view this as a dystopia rather than a utopia. This is because such a world would be meaningless, as every experience would feel exactly the same and nothing would ever change.
Possibility #2: Friendly AI Stays Hidden
Tegmark supposes another positive scenario that’s a bit different: Instead of completely taking over the world, an artificial superintelligence does everything within its power to improve human lives while keeping its existence a secret.
This could happen if the artificial superintelligence—or someone influencing its goals—concluded that to be as happy and fulfilled as possible, humans need to feel in control of their destiny. Arguably, if you knew that an all-powerful computer could give you anything you wanted (as in the previous optimistic scenario), you might still feel like your life is meaningless and be less satisfied because you don’t have control over your life. In this case, the best thing a godlike AI could do for you is help without your knowledge.
Is the Universe Secretly a Simulation?
If an all-powerful artificial intelligence could adopt the goal of keeping its existence a secret, how do we know one doesn’t exist already? This idea overlaps with simulation theory, the idea that our entire universe, including ourselves, is a complex computer simulation that’s creating our reality yet hiding its true nature.
This idea is popular among some physicists, who’ve come to this conclusion using facts we know about the universe. Because it’s theoretically possible that humans will at some point create a computer powerful enough to simulate the universe, there’s a chance that another civilization already has—and has created us. If so, Tegmark’s logic could explain why the true nature of this simulation is hidden from us: If we prefer control over our destiny, it would be cruel to take that illusion away from us.
Possibility #3: AI Protects Humanity From AI
Third, Tegmark imagines a scenario in which humans create an artificial superintelligence with the sole purpose of preventing other superintelligences from coming into existence. This allows humans to continue developing more advanced technology without worrying about the potential dangers of another AI.
(Shortform note: This scenario is arguably relatively unrealistic, as it assumes that we have total control over the superintelligence yet aren’t taking full advantage of its power. If we’re able to successfully program a superintelligence’s goal, we would likely get more ambitious and tell it to design a society for us in which we can be eternally happy and immortal—which would bring us back to one of the previous two optimistic scenarios we’ve discussed.)
According to Tegmark, the advanced technology humans could develop in a world free from superintelligence would eventually allow us to create a bountiful, classless society. Robots would be able to build anything humans might want, making scarcity a thing of the past. Since robots would constantly generate surplus wealth, the government could give everyone a universal basic income (UBI) high enough to purchase anything they could possibly need. People would be free to work for more money, but finding a productive job would be nearly impossible, since everything people might buy would already be provided for free.
(Shortform note: Tegmark also acknowledges the possibility that we develop superintelligence, manage to keep it entirely under our control, and use it to create a humanist utopia. Although he doesn’t specify what he imagines this world would look like, it’s reasonable to assume that Tegmark thinks it would mirror the outcome of one of these three positive scenarios.)
How to Transition to a Post-Scarcity Society
Becoming a post-scarcity, UBI-driven society like this would require a complete overhaul of our employment-based economy. But, we wouldn’t necessarily have to enact this kind of sweeping change all at once.
AI expert Lorenzo Pieri describes how a nation might incrementally transition to this kind of society. First, the private companies that serve basic human needs progressively automate their workforces with increasingly sophisticated technology, boosting the nation’s total economic production. As existing taxes channel some of this new wealth to the government, it passes the money on to citizens as a small universal basic income. The government grows this UBI alongside the economy as a whole, helping everyone increase their quality of life—especially those in poverty, whose currently unfilled needs are more essential to their well-being than the unfilled needs of people with more money.
After private companies have automated the production of all basic human needs, the government begins funneling the wealth from the automated production of non-basic goods into subsidies for basic goods rather than further increasing UBI. Eventually, the government can use the productivity gains from automation to pay private companies to supply all basic human needs for free, transitioning into a fully post-scarcity society.
If citizens desire non-basic goods that aren’t available for free, they can still get jobs producing such luxury goods for others. Pieri terms this dynamic “luxury-capitalism.”
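The first stage of this transition can be caricatured with a back-of-the-envelope model. All the numbers below (population, tax rate, growth rate, baseline output) are assumptions invented for illustration, not figures from Pieri; the sketch only shows how a fixed tax on a compounding automation surplus yields a UBI that starts small and grows with the economy.

```python
# Assumed illustrative figures -- not from Pieri's actual proposal.
POPULATION = 1_000_000
TAX_RATE = 0.3                  # share of the automation surplus taxed
AUTOMATION_GROWTH = 0.05        # annual output growth driven by automation
BASELINE_OUTPUT = 1_000_000_000.0  # pre-automation annual output, in dollars

output = BASELINE_OUTPUT
ubi_history = []

for year in range(10):
    # Automation compounds total economic production each year.
    output *= 1 + AUTOMATION_GROWTH
    # Only the surplus above the pre-automation baseline funds the UBI.
    surplus = output - BASELINE_OUTPUT
    annual_ubi_per_person = TAX_RATE * surplus / POPULATION
    ubi_history.append(round(annual_ubi_per_person, 2))

# The UBI starts small and rises steadily as the surplus compounds.
print(ubi_history)
```

Because the surplus compounds while the tax rate stays flat, the per-person payout grows every year without any new taxes, which is the mechanism Pieri relies on for a gradual rather than abrupt transition.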
Pessimistic Possibilities for Humanity
Next, let’s take a look at some of the existential dangers that an artificial superintelligence poses. Here are three scenarios in which the AI’s goal ruins humans’ chances to live a satisfying life.
Possibility #1: AI Kills All Humans
Tegmark contends that an artificial superintelligence may end up killing all humans in service of some other goal. If it doesn’t value human life, it could feasibly end humanity just for simplicity’s sake—to reduce the chance that we’ll do something to interfere with its mission.
(Shortform note: Some argue that an artificial superintelligence is unlikely to kill all humans as long as we leave it alone. Conflict takes effort, so an AI might conclude that the simplest option available is to pursue its mission in isolation from humanity. For instance, a superintelligence might be peaceful toward us if we don’t interfere with its goal and allow it to colonize space.)
If an artificial superintelligence decided to drive us to extinction, Tegmark predicts that it would do so by some means we currently aren’t aware of (or can’t understand). Just as humans could easily choose to hunt an animal to extinction with weapons the animal wouldn’t be able to understand, an artificial intelligence that’s proportionally smarter than we are could do the same to us.
(Shortform note: In Superintelligence, Nick Bostrom imagines one way an artificial intelligence could kill all humans through means we would have difficulty understanding or averting: self-replicating “nanofactories.” These would be microscopic machines able to reproduce and synthesize deadly poison. Bostrom describes a scenario in which an artificial intelligence produces and spreads these nanofactories throughout the atmosphere at such a low concentration that we can’t detect them. Then, all at once, the factories turn our air toxic, killing everyone.)
Possibility #2: AI Cages Humanity
Another possibility is that an artificial intelligence chooses to keep humans alive, but it doesn’t put in the effort to create a utopia for us. Tegmark argues that an all-powerful superintelligence might decide to keep us alive out of casual curiosity. In this case, an indifferent superintelligence would likely create a relatively unfulfilling cage in which we’re kept alive but feel trapped.
(Shortform note: A carelessly, imperfectly designed world for humans to live in may be intolerable in ways we can’t imagine. This idea is dramatized by the ending of Stanley Kubrick’s 1968 film 2001: A Space Odyssey. As Kubrick explains in a 1980 interview, the film portrays an astronaut trapped in a “human zoo” created for him by godlike aliens who want to study him. They place the astronaut in a room in which it feels like all of time is happening simultaneously, and he ages and dies all at once.)
Possibility #3: Humans Abuse AI
Finally, Tegmark imagines a future in which humans gain total control over an artificial superintelligence and use it for selfish ends. Theoretically, someone could use such a machine to become a dictator and oppress or abuse all of humanity.
(Shortform note: This scenario would likely result in even more suffering than if a supreme AI decided to kill all humans. Paul Bloom (Against Empathy) asserts that cruelty is a uniquely human act in which someone feels motivated to punish other humans for their moral failings. Thus, a superintelligence under the control of a hateful dictator would be far more likely to intentionally cause suffering (as moral punishment) than an AI deciding for itself what to do.)
What Should We Do Now?
We’ve covered a range of possible outcomes of artificial superintelligence, from salvation to disaster. However, all these scenarios are merely theoretical—let’s now discuss some of the obstacles we can address today to help create a better future.
We’ll first briefly disregard the idea of superintelligence and discuss some of the less speculative AI-related issues society needs to overcome in the near future. Then, we’ll conclude with some final thoughts on what we can do to improve the odds that the creation of superintelligence will have a positive outcome.
Short-Term Concerns
The rise of an artificial superintelligence isn’t the only thing we have to worry about. According to Tegmark, it’s likely that rapid AI advancements will create numerous challenges that we as a society need to manage. Let’s discuss:
- Concern #1: Economic inequality
- Concern #2: Outdated laws
- Concern #3: AI-enhanced weaponry
Concern #1: Economic Inequality
First, Tegmark argues that AI threatens to increase economic inequality. Generally, as researchers develop the technology to automate more types of labor, companies gain the ability to serve their customers while hiring fewer employees. The owners of these companies can then keep more profits for themselves while the working class suffers from fewer job opportunities and less demand for their skills. For example, in the past, the invention of the photocopier allowed companies to avoid paying typists to duplicate documents manually, saving the company owners money at the typists’ expense.
As AI becomes more intelligent and able to automate more kinds of human labor at lower cost, this asymmetrical distribution of wealth could increase.
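The arithmetic behind this shift can be shown with invented numbers extending the photocopier example. The firm, headcounts, and costs below are all hypothetical; the sketch only demonstrates how automation can raise the owner's share of revenue while shrinking the workers' share.

```python
# Hypothetical firm, before automation: revenue is split between
# owner profit and typists' wages.
revenue = 1_000_000
wage_per_typist = 40_000

typists_before = 10
wages_before = typists_before * wage_per_typist       # 400,000 to workers
profit_before = revenue - wages_before                # 600,000 to the owner

# After buying photocopiers, the firm needs only 2 typists
# plus the machines' annual cost.
typists_after = 2
machine_cost = 50_000
wages_after = typists_after * wage_per_typist         # 80,000 to workers
profit_after = revenue - wages_after - machine_cost   # 870,000 to the owner

# The owner's share of revenue rises (0.60 -> 0.87) while the
# workers' share falls (0.40 -> 0.08) -- the asymmetry Tegmark describes.
print(profit_before / revenue, profit_after / revenue)
```

Nothing about the firm's total revenue changed; automation only redistributed it, which is why Tegmark frames this as an inequality problem rather than a productivity problem.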
(Shortform note: Some experts contend that new AI-enhanced technology doesn’t have to lead to automation and inequality. If AI developers create technology that expands what one worker can do, rather than just simulating their work, that technology could create new jobs and update old ones while creating value for companies. These experts implore AI developers to consider the impact of their inventions on the labor market and adjust their plans accordingly, just as they would consider any other ethical or safety concern.)
Concern #2: Outdated Laws
Second, Tegmark contends that our legal system could become outdated and counterproductive in the face of sudden technological shifts. For example, imagine a company releases thousands of AI-assisted self-driving cars that save thousands of lives by being (on average) safer drivers than humans. However, these self-driving cars still get into some fatal accidents that wouldn’t have occurred if the passengers were driving themselves. Who, if anyone, should be held liable for these fatalities? Our legal system needs to be ready to adapt to these kinds of situations to ensure just outcomes while technology evolves.
(Shortform note: Although Tegmark contends that the legal system will struggle to keep up with AI-driven changes, other experts note that advancements in AI will drastically increase the productivity and efficiency of legal professionals. This could potentially help our legal system adapt more quickly and mitigate the damage caused by rapid change. For instance, in a self-driving car liability case, an AI language model could quickly digest and summarize all the relevant documents from similar cases from the past (for instance, a hotel-cleaning robot that injured a guest), instantly collecting the context necessary for legislators to make well-informed decisions.)
Concern #3: AI-Enhanced Weaponry
Third, AI advancements could drastically increase the killing potential of automated weapons systems, argues Tegmark. AI-directed drones would have the ability to identify and attack specific people—or groups of people—without human guidance. This could allow governments, terrorist organizations, or lone actors to commit assassinations, mass killings, or even ethnic cleansing at low cost and minimal effort. If one military power develops AI-enhanced weaponry, other powers will likely do the same, creating a new technological arms race that could endanger countless people around the world.
(Shortform note: In 2017, the Future of Life Institute (Tegmark’s nonprofit organization) produced an eight-minute film dramatizing the potential dangers of this type of AI-enhanced weaponry. After this video went viral, some experts dismissed its vision of AI-directed drones as scaremongering, arguing that even if multiple military powers developed automated drones, such weapons wouldn’t be easily reconfigured to target civilians. However, in a rebuttal article, Tegmark and his colleagues pointed to an existing microdrone called the “Switchblade” that can be used to target civilians.)
Long-Term Concerns
How should we address the long-term concerns related to AI, including potential superintelligence creation? Because there’s little we know for sure about the future of AI, Tegmark contends that one of humanity’s top priorities should be AI research. The stakes are high, so we should try our best to discover ways to control or positively influence an artificial superintelligence.
The Current State of AI Research Funding
It seems that many people agree with Tegmark, as major institutions around the world are already prioritizing AI research. The European Union is currently investing €1 billion a year in AI research and development, and it intends to increase that annual investment to €20 billion by the year 2030. Private companies are leading AI research in the United States—for instance, Meta plans to spend $33 billion on AI research in 2023 alone.
However, some experts worry that private companies may not act in line with public interest while researching AI, and they urge the US government to fund a cutting-edge AI research program of its own. It’s possible that state-controlled research would be less likely to unleash a dangerous superintelligence, as governments lack the profit motive to create a marketable AGI as quickly as possible.
What Can You Do to Prevent the AI Apocalypse?
Outside of AI research, Tegmark recommends cultivating hope grounded in practical action. Before we can create a better future for humanity, we have to believe that a bright future is possible if we band together and responsibly address these technological risks.
After cultivating this optimistic attitude, Tegmark urges readers to do everything they can to make the world more ethical and peaceful—not just in the field of AI, but in every aspect of society. The more humans who are willing to empathize and cooperate with one another, the greater the chance that we’ll develop AI safely and with the intent to benefit all of humanity. This could involve organizing a fundraiser for a local homeless shelter, volunteering at a nursing home, or just being kinder to the people around you.
Determinate vs. Indeterminate Optimism
The attitude Tegmark urges readers to adopt is what Peter Thiel in Zero to One calls “determinate optimism”—you expect the future to be better than the present, and you believe that you can successfully predict and bring about specific positive outcomes. Thiel argues that in contrast to this perspective, most Americans today (or in 2014, when Zero to One was written) think in terms of “indeterminate optimism”—they believe that things will get better in the future, but they assume that the future is too unpredictable for them to plan.
According to Thiel, this is a problem because indeterminate optimism encourages people to be passive and short-sighted: They think, “Why bother planning a better future? Things will turn out OK no matter what I do.” To combat this, Thiel urges optimists to make long-term plans and stick to them. To apply this to Tegmark’s plea to make the world more ethical and peaceful: Don’t just become a generally cooperative person in life; instead, come up with a plan to bring people together and motivate them to treat others well.




