Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World

The Book You Read in Doses Because It's Too Rich to Binge

Introduction: The Book I Haven't Finished But Keep Thinking About

Confession time: I started Genius Makers months ago, and I'm still not done. Not because I'm not enjoying it; I am. Not because it's poorly written; it's excellent. I keep putting it down and picking it back up because it's dense in a specific way. Each chapter contains enough ideas, implications, and "holy shit" moments that I need time to digest before continuing.

This is the story of AI: not the technology itself, but the people building it. The rivalries between labs (Google Brain vs. DeepMind vs. OpenAI). The personalities driving this transformation (Geoffrey Hinton, Demis Hassabis, Sam Altman, Ilya Sutskever). The race toward Artificial General Intelligence (AGI) and what that might mean for humanity.

Reading this in 2026, after living through the ChatGPT explosion and the agent mode revolution, and after watching AI infiltrate every industry, hits differently than it would have a few years ago. The book chronicles events that seemed like academic research or corporate competition at the time but now feel like the opening chapters of a much bigger story we're living through.

I'm reviewing this as an in-progress read because sometimes that's more honest than pretending I finished it. The fact that I keep returning to it says something important—it's compelling enough to pull me back despite being heavy enough to require breaks.

[Image: Cover of Genius Makers by Cade Metz]

Why this book demands your attention (even if you read it slowly):

The People Behind AI: You're using AI systems built by the people in this book. Understanding their motivations, rivalries, and visions matters.

The Competition: Google vs. Facebook vs. OpenAI vs. DeepMind. The corporate AI race shaped what we have now and what's coming.

Historical Record: This documents a pivotal transformation in real time. Reading it now is like reading about the early internet in the late 90s.

AGI Implications: The quest for Artificial General Intelligence isn't science fiction anymore. These people are actually trying to build it, and this book shows how and why.

Personal Stakes: If you work with AI, invest in tech, or just exist in the modern world, understanding this history helps you navigate what's coming.

[Image: AI lab rivalry and competition visualization]

For anyone working with AI, anyone curious about how we got here, or anyone wondering where AI is headed—this book provides essential context. (Buy on Amazon)

Book Details at a Glance

Title: Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
Author: Cade Metz
Publication Year: 2021
Genre: Technology, Biography, Business, History
Length: ~384 pages (reads denser than the page count suggests)
Main Themes: AI development, corporate competition, the deep learning revolution, the AGI quest, technology ethics
Key Figures: Geoffrey Hinton, Demis Hassabis, Sam Altman, Ilya Sutskever, Yann LeCun, Andrew Ng, and many others
Relevance Today: Essential context for understanding the current AI landscape in 2026
Reading Style: Dense, information-rich, requires active engagement
Who Should Read: AI practitioners, tech investors, anyone trying to understand the AI transformation

What I've Learned So Far: Partial Insights

Since I'm still reading this (no spoilers for myself!), here's what's stuck with me from the chapters I've completed:

  1. The Geoffrey Hinton Story: Persistence Through Winter

Early chapters chronicle Geoffrey Hinton's decades of work on neural networks when almost nobody believed in them. Through the "AI winter," when funding dried up and neural networks were widely written off as a dead end, Hinton kept pushing.

The narrative is fascinating—this wasn't stubborn attachment to a failed idea. Hinton had theoretical reasons to believe neural networks would work at scale, even when empirical results weren't there yet. He just needed more compute and more data.

Then AlexNet happened in 2012. Hinton's students Alex Krizhevsky and Ilya Sutskever used deep learning to win the ImageNet competition, and suddenly everyone who'd dismissed neural networks for decades wanted in. The rest is history we're living through.

[Image: Deep learning pioneers and neural network research]

Why this matters: The people building today's AI systems spent decades being told they were wrong. They were right, but proving it required persistence most people don't have. Understanding this context changes how you evaluate their current predictions about AGI.

Personal resonance: Reading this while working with AI agents feels like watching the origin story while living in the sequel. The neural networks Hinton pioneered are what power the systems I use daily.

  2. The Great Talent War: Google vs. Facebook vs. Everyone

Mid-book sections detail the corporate competition for AI researchers. When Google acquired DeepMind in 2014, reportedly for more than $500 million, it signaled that AI was transitioning from academic curiosity to corporate battleground.

The talent war got absurd: researchers commanding million-dollar salaries, companies poaching entire teams, non-compete agreements, secretive projects. Facebook built FAIR (Facebook AI Research) to compete with Google Brain. Elon Musk and Sam Altman co-founded OpenAI as a nonprofit counterweight to the corporate AI labs.

This corporate competition shaped what AI became. The researchers had academic ideals about open research, but corporate pressures pushed toward secrecy and competitive advantage.

Reading this in 2026: Knowing how this played out (OpenAI becoming for-profit despite the name, Google releasing and then retracting papers, the entire LLM race) makes these early moves feel even more significant. The seeds of the current AI landscape were planted in these corporate decisions.

  3. DeepMind and the AGI Dream

The DeepMind story is particularly compelling. Demis Hassabis wasn't just building narrow AI for specific tasks; he explicitly aimed for Artificial General Intelligence from day one. Not incrementally better image recognition or language processing, but actual thinking machines.

The book chronicles DeepMind's achievements: AlphaGo defeating Go champion Lee Sedol (2016), AlphaZero learning games from scratch without human data, and the protein-folding breakthroughs of AlphaFold. Each success felt like a step toward AGI.

The philosophical questions embedded here are profound: What does it mean for a machine to "understand"? Is AGI inevitable or impossible? Should we be racing toward it or cautious about it?

[Image: The quest for Artificial General Intelligence]

2026 perspective: We're closer to AGI now than when this book was published in 2021. GPT-4, Claude, Gemini—these feel qualitatively different from earlier AI. Reading the book's chronicle of early steps makes you wonder how close we actually are and whether the people building it know.

  4. The Ethics Problem Nobody Wanted to Address

Scattered throughout the book are warnings about AI ethics, safety, and alignment that were largely ignored at the time. Researchers raising concerns about bias, misuse, or existential risk were often dismissed as fearmongers.

The book documents how ethical considerations were consistently deprioritized in favor of capability development. "We'll solve the ethics later" was the implicit motto: ship the product, win the race, worry about consequences afterward.

Reading this now: With AI bias scandals, disinformation concerns, and existential risk discussions dominating headlines in 2026, the early dismissal of ethics feels tragically shortsighted. The warnings were there. They were ignored.

  5. The People Are as Important as the Technology

What makes this book compelling isn't just the AI story; it's the human story. These are brilliant, ambitious, sometimes petty, often visionary people with competing motivations and values.

Hinton's academic idealism vs. corporate pragmatism. Hassabis's AGI dreams vs. Google's product needs. Altman's open-source rhetoric vs. OpenAI's commercial trajectory. LeCun's scientific ideals vs. Facebook's business model.

Understanding these personalities helps explain why AI developed the way it did. Technology doesn't emerge in a vacuum; people with specific values, incentives, and blind spots build it.

[Image: The AI transformation and its impact on society]

Why I keep returning: Each chapter introduces new people and dynamics that recontextualize what I thought I understood about AI's development. It's not a simple "progress marches forward" story; it's messy, political, human.

Why I Read This in Doses (And You Might Too)

The off-and-on reading pattern isn't a criticism—it's how this book deserves to be read. Here's why:

Information Density: Each chapter contains multiple storylines, technical concepts, corporate maneuvers, and ethical questions. It's a lot to process.

Requires Context: I find myself pausing to look up papers mentioned, people referenced, or technologies described. That enriches the reading but slows it down.

Provokes Reflection: The ethical and philosophical questions don't have easy answers. I need time to think about implications before moving to the next chapter.

Real-World Connection: I'm actively working with AI, so reading this makes me constantly connect historical events to current practice. That requires pausing to trace those connections.

Emotional Weight: This isn't light reading. The stakes—both opportunities and risks—are enormous. Reading about people racing toward AGI without clear safeguards is simultaneously exciting and terrifying.

Perfect for Modular Reading: Chapters are relatively self-contained. You can read one, put the book down for weeks, and pick it back up without losing the thread.

The 2026 Lens: Reading History While Living the Sequel

Published in 2021, the book predates:

  • ChatGPT's explosion (Nov 2022)
  • The LLM race (GPT-4, Claude, Gemini, etc.)
  • Agent mode workflows becoming practical
  • AI disruption of creative industries
  • Current AI safety and alignment debates

Reading it now feels like watching a historical documentary about events leading up to the present. You know how certain decisions played out. You see warnings that were ignored. You recognize inflection points that didn't seem significant at the time.

This temporal distance actually enhances the book. You're not just learning history—you're understanding how we got to this specific present moment and maybe getting hints about where we're heading.

Partial Recommendation & Where to Buy

⭐ Rating: 4.5/5 (based on what I've read so far, subject to change upon completion)

I can't give a full review since I haven't finished, but I can recommend it with confidence. This is essential reading for anyone trying to understand the AI transformation we're living through.

Who should read this:

  • Anyone working with AI professionally
  • Tech investors trying to understand the landscape
  • People curious about how AI actually developed (not the mythology)
  • Anyone concerned about AGI and its implications
  • Engineers who want to understand the human/political side of technology

Who can skip this:

  • People looking for technical deep dives (this is about people and companies, not algorithms)
  • Anyone wanting quick, light reading (this demands engagement)
  • People uninterested in corporate competition and personalities

Reading strategy I recommend:

  • Don't try to binge it—read a chapter, digest, think
  • Look up unfamiliar people and concepts as they appear
  • Connect what you read to current AI developments
  • Take breaks when it gets heavy (which it will)

📖 Buy Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World on Amazon

Related Reading

For more perspectives on AI, technology, and computing, these companion reads pair well with Genius Makers:

  • Life 3.0 by Max Tegmark (philosophical AI future)
  • The Alignment Problem by Brian Christian (AI safety deep dive)
  • AI Superpowers by Kai-Fu Lee (China vs. US AI race)

Update promise: When I finish the book (soon, I promise!), I'll update this review with complete thoughts. For now, what I've read is compelling enough to recommend despite being unfinished.

Honest reflection: The fact that I keep returning to this book despite putting it down repeatedly says something. It's not an easy read, but it's an important one. The people and decisions chronicled here shaped the AI systems I use daily and will shape the future we all navigate.


This post contains affiliate links. If you purchase through these links, I may earn a small commission at no extra cost to you. Thank you for supporting this blog!