Articles by Victoria

Inside a Mastermind’s Mind: How Empire of AI Deconstructed the Great Game

Beyond the ethics and the environmental costs, Karen Hao’s exposé reveals the chillingly brilliant, multi-move strategy of AI’s quintessential architect.

Apr 5, 2026 · 17 min read

Recently, I enjoyed reading Karen Hao’s Empire of AI and watching her on the Diary of a CEO (DOAC) episode. Like most people, I had heard the initial headlines about carbon footprints, labor exploitation, and data centers. Most readers walk away from her work feeling a sense of dread or a call to social activism.

But as I turned the pages and listened to her break down the history of OpenAI, how Sam Altman engages in myth-making and manipulates those around him to serve his goals, I became so intrigued. To me, this was no longer a book about how a company rose to power in the AI space. Instead, I recognized the "scent" of a high-level architect, a grandmaster playing a game of chess while everyone else was still trying to remember how the pieces moved.

Welcome back to another Articles by Victoria, the place where I randomly write things I’m curious about.

In this article, I want to explore something I don’t usually write about: psychology, specifically what might be going on inside the mind of a mastermind like Sam Altman. Whether we like him or not, it’s hard to deny that he understands the game and has played it well, strategically building one of the most influential AI empires of our time.

Why Is Sam Altman a Mastermind?

As an INTJ myself, I’ve always been drawn to understanding how people think at a systems level: how they make decisions and the underlying logic behind them. In my article How Cognitive Functions Changed the Way I Understand People, I wrote a 3,000+ word deep dive into how I observe and interpret human behaviours.

Within the framework of cognitive functions, the INTJ archetype is often called the “mastermind” or the “architect” because of their ability to see patterns, anticipate outcomes, and strategize long term.

For those of us wired this way, the world is a series of systems to be understood and mastered. We don't just look at what a person does. We look at the "why" and the "how" behind the execution. Watching Sam Altman through Hao’s lens became an obsession for me. It wasn't about whether his impact was "good" or "bad" (honestly, that's a whole other article waiting to be written haha).

Instead, it was about the sheer, breathtaking scale of his psychological and long-term strategic planning. Emphasis on the long-term part. And that's why I was so intrigued, simply from a scientific point of view.

He didn't just build an app/product. He shaped and framed a global narrative, redirected the flow of billions in capital, and outmaneuvered the smartest and most powerful people in the room. And that's something I want to dissect in this article. How did he do it? What went through his mind as he planned with such precision, positioning each move for the long game, almost like placing pawns on a chessboard several steps ahead of everyone else?

And more importantly, why does it now feel like parts of that strategy are starting to fracture? What was the blind spot, the one that even highly strategic thinkers tend to overlook?

This is a study of that mind: the motivations, the goals, and the dark, complex psychology of a man who is always thinking three steps ahead.

To those of you who usually read my work on tech, leadership, and reflections, this is a bit of a digression.

If this direction intrigues you, I’m happy to go deeper and explore more of these psychological and strategic breakdowns in future pieces. But if it feels a little too intense or uncomfortable, I can stick to what I usually write about. Let me know by reaching out!

The Dinner Party Gambit: Engineering the Original Team

One of the most revealing sections of the book involves the literal birth of OpenAI. It wasn't a spontaneous meeting of minds but a carefully staged social-engineering project. Altman didn't just send a generic LinkedIn message. He launched a high-stakes recruitment campaign through cold emails and frequent, intimate dinners, targeting the "titans" of the field: Dario Amodei, Ilya Sutskever, and Greg Brockman.

As an INTJ, I found the mechanics of these dinners fascinating. Sam Altman wasn’t just offering high salaries to attract talent. He was leveraging something far more powerful: social currency.

By hinting that Elon Musk might show up, he created an almost irresistible pull. Musk, at the time, was one of the most polarizing and magnetic figures in tech, and Altman understood exactly what people were drawn to.

It was a classic blend of FOMO and proximity bias. At the time, he knew he was a nobody, so he strategically borrowed Musk’s brand to validate his own vision and get these geniuses into the room.

And the result: they all joined. Maybe it’s easy to say this now, knowing how things played out. But even from the start, I never fully bought Altman's "non-profit" pitch. I don't believe in the altruistic mission. How could a person like him not turn this revolutionary technology into profit? It makes zero sense to me.

Then again, perhaps he hid his true intentions well. Perhaps if I had been in the room, I would have been fooled too.

Only later, as Hao’s narrative unfolds, do we see the fallout. One by one, the original team realized they weren't partners in a mission; they were components in a machine Altman was building for a very different purpose.

When it became clear that the “non-profit” structure was more of a phase than a principle, some chose to leave. Dario Amodei, for one, went on to build an alternative in Anthropic.

From the outside, it looks like a talent drain. Was the non-profit vision ever meant to last, or was it simply the most effective way to gather the right minds at the right time to build something much bigger?

From my analysis of the mastermind’s perspective, he had already extracted what he needed from them: the initial breakthrough and the technical legitimacy to pivot into a global power player. It was all part of the plan, a gambit. He never intended for OpenAI to stay a non-profit; he just needed their brilliance to build the foundation of his for-profit empire.

Myth-Making as a Tool of Control

In the DOAC episode, I particularly enjoyed how Karen Hao talks about how empires create power and control using an analogy from the sci-fi epic Dune. In that world, leaders engage in "myth-making" to control the masses. They plant prophecies and religious narratives among the people to ensure their own rise to power. Hao argues that AI leaders, starting with Altman, are doing the exact same thing.

They know the "AGI for humanity" mission is, at its core, a myth designed to gain power and control. By showing off incredible tech demos and speaking about a future where AI solves every human problem, they make the public eager to "jump into" the AI era. But the question is: Can AGI really help humanity?

This is something I’ve spent a lot of time discussing with close friends and colleagues. There are a couple of likely scenarios for the future once AGI arrives, but maybe I’ll save that for a future article.

Anyways, back to myth-making for control. Here is where the psychology gets truly complex, as Hao explains in the episode: when you embody, live, and breathe a myth daily to convince others, the line between the lie and reality begins to blur.

These companies eventually lose themselves in their own stories. They started by using the myth as a tool to bypass ethical scrutiny and gain funding, but eventually, the mask becomes the face.

This reminds me of a quote by French author François de La Rochefoucauld:

We are so accustomed to disguise ourselves to others that in the end we become disguised to ourselves.

It’s a reminder of how easy it is to lose sight of our true intentions when we’re constantly performing for the world. For someone like Altman, I find this a fascinating study in cognitive dissonance. He has had to play the part of the selfless visionary so convincingly, and for so long, that I have to wonder: has he actually started to believe it himself?

He convinced the world that AGI was a singular event that must happen. More importantly, he convinced people that it must be achieved by his team to ensure it was done safely. This was his social shield. By framing his company as the protector of humanity’s future, he made any scrutiny of his business model seem petty or even dangerous.

When you frame yourself as the one saving the world, people stop asking about your environmental footprint or where your training data came from. You provide a vision so big that people are happy to inhabit it, even if it means ignoring or underestimating the "dark" reality of how that vision is being built.

Gathering Resources: The Long Game of Recruitment

As a gamer, I mostly play strategy games because they are both fun and mentally stimulating. In any strategy game, the first step to gaining exponential returns is gathering resources. In RPGs or MOBAs, for example, you need to understand how the system lets you acquire resources, whether that is money, items, or abilities. You need to know what to prioritize, where to get it, and how to leverage it most efficiently. Watching Altman operate in the tech world reminded me a lot of that.

The longest-running game begins with the gathering of resources. In the tech world, the rarest resource is not money but elite talent. The founding of OpenAI as a non-profit in 2015 was, from this perspective, the ultimate recruitment funnel.

To the public, this was inspiring. To the researchers, it was a philosophical beacon. To a strategic mastermind, it was a way to lower the psychological guard of the world's most skeptical minds. Because he understood one thing: the best AI researchers didn't want to build ad-click algorithms for Facebook. By framing OpenAI as a "non-profit mission," Altman gave them a "cause" to follow rather than just a job to do.

He effectively aggregated an unparalleled concentration of brainpower under a banner of ideological purity. He wasn't building a company yet; he was building a following. This allowed him to lock down the human capital he would later need to execute his true, much larger vision.


The most chess-like move in the book is Altman’s recent trend of going to governments and asking for AI regulation. For the average observer, this looks like a responsible leader recognizing his own power. For the strategist, this is the final phase of dominance: regulatory capture.

Hao’s reporting makes it clear that building AI is becoming incredibly expensive. The real threat to OpenAI is not Google; it is the smaller, faster, open-source startups. By asking for regulation, Altman is essentially helping write a set of safety rules that are so complex and expensive to follow that only a giant like OpenAI can afford to meet them.

He is using the government to build a moat around his empire. While the public sees a responsible leader, I see a master architect making it nearly impossible for anyone else to compete. It is a masterclass in using your enemies' tools, the law, to protect your own throne.

From my perspective as an INTJ and someone who loves strategy games, it is fascinating to see the parallels. The careful planning, the leverage of psychological incentives, and the patient accumulation of resources all point to a mind thinking many moves ahead. In both games and real life, the early stages of resource gathering are often invisible, but they set the stage for everything that follows.

Decoding the Pivot: Results Over Consistency

The moment I realized just how deep Altman’s game went was when Hao described the "pivot." In 2019, OpenAI moved from a true non-profit to a "capped-profit" entity, taking a massive $1 billion investment from Microsoft. Many critics in the book called this a betrayal. To me, it looked like a cold, logical solution to a physical constraint, and a typical example of how an INTJ like him would operate.

An INTJ’s mind is a machine built for execution. We have a singular vision (Introverted Intuition or Ni) and we use logic (Extraverted Thinking or Te) to make it real. By 2019, Altman likely realized that AGI wasn't just a research problem. It was a power and hardware problem. He saw an immovable wall: the hundreds of millions of dollars needed for GPU clusters and electricity.

He didn't (or should I say couldn't) allow the "non-profit" label to become a prison for him. He calculated that consistency was less important than survival.

While everyone else was mourning the "soul" of the company, Altman was busy redesigning the entire financial architecture of his organization to accommodate the reality of the hardware costs. He prioritized the end goal over the public's perception of his values because, at the end of the day, it was all about keeping control.

That is the hallmark of a mastermind: the ability to pivot the entire system the moment the old way becomes a liability.

The Blind Spot: Visibility Over Caution

Despite all his brilliance, Altman’s strategies were not invincible. The pivot and the recruitment of world-class talent show a mind operating at the highest level, a masterclass in long-term planning, leverage, and influence. And yet, even the most strategic thinkers have blind spots. The very moves that give them power internally can expose vulnerabilities externally.

In my opinion, Altman’s biggest misstep may have been that he didn’t lie low. He built a genius-level system of recruitment, narrative, and leverage, yet he also made himself highly visible. His boldness drew attention, both admiration and scorn. The mission of AGI, once untouchable, began to reflect on him personally and to some, it painted him as cold, calculating, or even sociopathic.

This blind spot became obvious during the legendary November 2023 firing and reinstatement. The board attempted to remove him, citing a lack of transparency and a breakdown of trust. They claimed he “pitted executives against one another” and withheld major developments, including the launch of ChatGPT.

This is classic INTJ strategy. He made himself the load-bearing pillar of the organization. The company’s identity, Microsoft’s investment, and the entire AGI mission were tied to his persona. Removing him risked collapsing the structure.

But what makes this moment even more interesting is hearing it from Sam Altman himself. In the Social Radars podcast interview, he described the firing as a complete shock. He called it a “fog of war,” implying that even someone who plans several moves ahead did not anticipate this attack.

Yet, his system responded exactly the way it was designed to. While he was caught off guard, the structure he built held. Within days, Greg Brockman and over 95% of employees threatened to resign unless he returned.

He did not need to fight directly because the system fought for him. In less than a week, he was reinstated, and the very board that fired him was dismantled. From a strategic perspective, it was almost flawless execution under pressure.

But from what I see, this is also where the blind spot reveals itself. The fact that he was genuinely surprised shows something important. Even with all his foresight, he underestimated internal resistance and the human layer of trust, politics, and perception. That is something INTJs are often weaker at.

More importantly, by making himself so central and so visible, he exposed himself to a different kind of risk. His system could protect him internally, but it could not shield him from external scrutiny. The world was watching this play out in real time. People started asking questions, not just about the company, but about him.

The same strategic brilliance that made him indispensable also made him a target. The moment he returned was not just a victory. It was also the moment the facade began to crack.

The Slow Downfall

Over the next several months, cracks in Altman’s strategy became impossible to ignore. In May 2024, after OpenAI’s non-disparagement agreements were exposed, he was accused of lying about whether he knew about equity cancellation provisions for departing employees. Former board member Helen Toner explained that he had withheld critical information, including the ChatGPT release timeline and his ownership of OpenAI’s startup fund. She also reported that some executives described psychological abuse and feared retaliation if they did not support him. Critics pointed to similar patterns from his time as CEO of Loopt, describing "deceptive and chaotic management."

At the same time, Altman’s public credibility started to erode. Writers like Karen Hao highlighted these patterns in bestsellers, and commentary in publications such as The Guardian raised questions about circular financing deals and the sustainability of OpenAI’s strategy. His defensive responses to investors, including Brad Gerstner, were widely seen as unconvincing. The hype around GPT-5 only made matters worse, as the product fell short of expectations despite Altman’s persistent promises.

Technical setbacks were mirrored by business challenges. Competitors like Anthropic captured corporate clients, DeepSeek forced dramatic price cuts, and OpenAI continued to operate without turning a profit. Reports showed corporate customers were not seeing meaningful returns on their investments. In December 2025, Altman declared a code red. Even high-profile partnerships were starting to falter. Apple’s 2024 pilot with OpenAI appeared to underdeliver, leading the company to switch to Google for integrating generative AI into Siri.

A Reflection on the Architect’s Style

By the time I finished Empire of AI and the DOAC episode, I felt a mix of awe, curiosity, and disbelief. I found it fascinating that Sam Altman could push a strategy this far while carrying such obvious blind spots, especially in how he manages people.

As an INTJ, this hits close to home. The same traits that make someone effective at building systems, thinking long term, and executing with precision can also create distance from the very people those systems depend on. It is easy to prioritize outcomes over relationships, logic over empathy, and vision over alignment.

Watching Altman’s trajectory felt like watching that trade-off play out at scale. He mastered leverage, narrative, and timing, but the human layer kept pushing back. Trust eroded. Perception shifted. And once people start questioning the person behind the system, even the most well-designed strategy begins to feel fragile.

What stood out to me is not just how far he got, but how long the system held despite those cracks. It says a lot about the strength of his thinking, but also about the limits of it. At some point, no amount of strategy can fully compensate for blind spots in how you deal with people.

Conclusion

Karen Hao’s book may be a warning to many, but to me, it was a psychological profile of a genius at work. The world is currently living in an "Empire" that was envisioned years ago in silence. Sam Altman isn't just a CEO, he is a systemic architect who understands that the long game is won not through luck, but through the cold, precise manipulation of every available variable.

We are all just pieces on the board he designed. Whether that is a good thing or a bad thing is up for debate, but for those of us who appreciate the art of the mastermind, it is a fascinating game to watch.

Thanks for reading! I’m curious to hear your thoughts and experiences on this topic! Feel free to connect or let me know in the comments! Cheers!

