Imagine you travel back in time to the late 1700s in the U.S. You meet a young farming family struggling to feed themselves after a failed harvest. You arrive on their doorstep dressed in modern clothing with your phone in hand. They give you a funny look and hesitate to invite you in. You show them your mobile phone. They gasp and conclude you are possessed by Satan: the technology is so far beyond anything they know that it scares the hell out of them.

They’re frightened. They have no concept of how something like a mobile phone works. They are human, just like you, but can’t visualize or understand 21st-century technology. You try to explain to them that there is nothing to be afraid of, nothing to worry about. You attempt to show them the advantages of a small handheld computer, but they don’t get it.

The man of the house grabs his gun, cocks it, and points it at your face. You turn around and run.


[Image: a woman's head overlaid with ones and zeros, depicting AI. Photo by cottonbro studio on Pexels.com]

It is no surprise that artificial intelligence, or AI, has improved our lives in many ways. I use Grammarly to proofread my writing, for example. Today, AI is part of business worldwide. It is integral to finance, healthcare, transportation, technology, software, manufacturing, agriculture, and more. AI is everywhere. But Eliezer Yudkowsky and Nate Soares, in If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, argue that if AI ever becomes artificial superintelligence, or ASI, it will end humanity.

If Anyone Builds It, Everyone Dies is divided into three parts. Part one explains how AI works, how it is grown rather than designed, and why engineers do not fully understand it. The authors explain gradient descent, the method used to train large language models (LLMs) such as ChatGPT or Claude. Knowing nothing about data science or machine learning, I found most of this section easy to follow and understand.
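If you are curious what gradient descent actually looks like, here is a minimal toy sketch in Python. It is my own illustration, not an example from the book: it "grows" a straight line to fit four data points by repeatedly nudging two numbers downhill on the error. Training an LLM applies the same idea to billions of parameters instead of two.

```python
# Toy gradient descent (illustrative only, not from the book):
# fit the line y = w*x + b to data that roughly follows y = 2x + 1.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0          # start with an uninformed guess
learning_rate = 0.05     # how big each downhill step is

for step in range(2000):
    # Gradients: how the average squared error changes as w and b change
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Step against the gradient, i.e., downhill on the error
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1
```

No one writes down the final values of w and b; they emerge from thousands of tiny corrections. That is what the authors mean when they say modern AI is grown, not designed.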

In the second part of the book, Yudkowsky and Soares give several examples of extinction, or doomsday, scenarios that could unfold if AI becomes superintelligent. I found these stories far-fetched, to be honest. But then I realized we are like those farmers in the 1700s: we have no way to judge what is dangerous because we cannot see or imagine future technology. We might see the next step, or maybe several steps ahead, but we do not know what the final product will do. We cannot know.

Since we can’t conceptualize what a superintelligent AI could do, we speculate to the best of our current knowledge, however diverse or limited. According to Yudkowsky and Soares, the current trajectory of AI is too dangerous. Development must be shut down now, before it produces a superintelligence: a machine that thinks and processes thousands of times faster than any human being on the planet.

Part three of the book further explains why we need to stop the advancement of AI and LLMs now, before it becomes impossible to do so. The authors conclude their argument with suggestions for raising public awareness and demanding that world leaders ensure AI safety.

If Anyone Builds It, Everyone Dies is a scary read; I am not going to lie. Not all AI engineers and researchers agree with Yudkowsky and Soares. Although currently far from perfect, AI can already process computational tasks and data in seconds, making decisions that would take humans days, months, or even years. Today, humans still excel at complex reasoning and creativity, but AI is improving quickly. Will it reach superintelligence? Will it learn to reason better than humans? If so, the authors argue, we face extinction. You will have to read the book to see how.

I highly recommend this book to anyone curious about AI or the argument over artificial superintelligence (ASI). It is a vital read for understanding what we, as humans, may be facing.

A bit about the authors, Eliezer Yudkowsky and Nate Soares:

[Image: Eliezer Yudkowsky headshot. Photo from the MIRI website]

Eliezer Yudkowsky is a founding researcher in the field of AI alignment and a co-founder of the Machine Intelligence Research Institute (MIRI), a nonprofit working to ensure that artificial intelligence has a positive impact.

His work in the field of artificial intelligence spans more than twenty years. Yudkowsky played a major role in shaping the technical research agenda at MIRI and other research centers and has helped introduce the topic of AI extinction risk to mainstream audiences.

He appeared on TIME magazine’s 2023 list of the 100 Most Influential People in AI and has contributed chapters to the academic anthologies The Cambridge Handbook of Artificial Intelligence and Global Catastrophic Risks.

[Image: Nate Soares headshot. Photo from the MIRI website]

Nate Soares is the Executive Director of the Machine Intelligence Research Institute (MIRI).

An AI expert for more than a decade, Soares has worked as an engineer for Microsoft, Google, and the National Institute of Standards and Technology, and as a contractor for the US Department of Defense. He has written extensively on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

To purchase a copy of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, visit online bookstores.

© Copyright Vilma G. Reynoso 2026

