Happy Monday. We have a new book this week.
After spending too much time buried in crypto books, my mind saturated with block times and the ghost of Satoshi, I needed to go somewhere else. I haven’t read fiction in a while either, so the deviation had to be real, and it had to matter.
So here we go. The right time to read about AI.
This week I’m starting Superintelligence by Nick Bostrom. Published in 2014. Written by an Oxford philosopher who, at the time, most people in tech politely ignored. The book is about what happens when we build something smarter than us, and whether we’ll have any say in what comes next. I know we are already living that conversation.
I’m reading it in March 2026. Which, as you’ll see, makes it a very different read than it was ten years ago.
I want to start with the fable Bostrom opens the book with.
A colony of sparrows decides, one evening, that life would be much easier if they had an owl to help them. An owl could watch over their young, protect them from the neighborhood cat, build better nests. So they decide to find an owl egg, raise it, and put it to work. One sparrow, a one-eyed sceptic named Scronkfinkle, asks whether they shouldn’t first figure out how to tame an owl. The others wave him off. That’s a problem for later. First, let’s find the egg.
Bostrom doesn’t tell us how the story ends.
That’s the whole book, honestly. We are the sparrows. We are very busy finding the egg. And the question of what happens when it hatches and whether it will be “docile enough,” in the words of mathematician I.J. Good, is the problem we are not discussing yet.
I’ve noticed that the language around AI in our industry has gotten very comfortable. Platforms talk about AI agents as if they’re just smart plugins. Projects talk about autonomous systems like they’re productivity tools. Everyone is building. Nobody is asking Scronkfinkle’s question.
So I figured it was time to actually sit with it.
For most of human existence, growth was so slow that it took on the order of a million years for civilisation to become able to sustain one extra million people. After the Agricultural Revolution, the same growth took two centuries. Today, after the Industrial Revolution, it takes about ninety minutes. Every time humanity crossed one of these thresholds, the people living through it couldn’t have imagined it beforehand. Bostrom argues there might be another one coming: if machine intelligence triggers a new growth regime, the world economy could double every two weeks.
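To put that number in perspective, here is some napkin arithmetic of my own, not Bostrom’s. Doubling every two weeks means about 26 doublings a year:
2^26 ≈ 6.7 × 10^7, a roughly 67-million-fold expansion in a single year.
For comparison, at today’s roughly 3% global growth, the world economy doubles about once every 23 years.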
Then he walks through the history of AI research, and it reads like a recurring trauma. Hope, hype, disappointment, repeat. The field was born at the 1956 Dartmouth Conference, where ten scientists convened believing they’d crack machine intelligence in a summer. They did not. What followed was a cycle of AI summers and winters: expert systems in the 1980s, neural networks in the 1990s, each wave cresting and breaking.
Bostrom said that the failure of past predictions doesn’t mean AI is impossible. Which, sitting here in 2026 with Perplexity in one tab and Higgsfield in another, feels a little funny to even type.
It means the technical difficulties were greater than the pioneers expected. A problem being harder than you thought is not the same as a problem being unsolvable. And he notes something the crypto world understands intuitively. Sometimes a problem looks completely intractable right up until it suddenly isn’t.
January 2025 was a perfect illustration. DeepSeek dropped R1 as an open-weight model, built for a fraction of what American labs were spending, and briefly rattled markets while forcing a serious reassessment of who can compete in frontier AI. Nobody saw it coming. The consensus had been that compute costs created an effective moat. Then the moat was crossed by a Chinese lab, in public, on a Tuesday.
He also introduces the idea that’s already haunting me a little bit. Even if we get to human-level AI, we won’t stop there. Human-level is not the terminus. The train, as he puts it, might not even pause at Humanville Station. Looking at where we are right now, the train does not appear to be decelerating. The length of tasks AI can complete autonomously has been doubling roughly every seven months. Claude Opus 4.5, released last November, can now complete software engineering tasks that take a human nearly five hours. That is a material capability that did not exist eighteen months ago.
Bostrom then maps out five distinct paths to superintelligence: pure software AI, whole brain emulation, genetic cognitive enhancement, brain-computer interfaces, and networked collective intelligence. He imagined them as roughly parallel possibilities. That framing has aged in an interesting way.
Almost all the capital, all the talent, all the attention is on software AI today. A $500 billion infrastructure project, a data center campus consuming more power than the city of Seattle, is currently under construction just for training and running AI models. Brain-computer interfaces, which he was already skeptical about, haven’t closed the gap between “paralysis patient moves a cursor” and “enhanced human cognition.” The path that would surprise him most is simply how fast the software route moved.
According to Bostrom, superintelligence could take three distinct forms.
Speed superintelligence, a mind that works like a human but much faster, capable of a century of scientific progress in a decade. Collective superintelligence, which is many AI minds organised together, whose combined output exceeds what any individual component could produce. And quality superintelligence, a mind not just faster or more numerous, but qualitatively superior in ways we may not fully understand.
We don’t have quality superintelligence. What we have in Claude, GPT-5, Gemini is impressive, but still fundamentally pattern-completion trained to reason. The honest answer is we don’t have a reliable test for when that threshold gets crossed. And the people building these systems are the first to admit they’re not entirely sure what’s happening inside them at scale.
We are, however, seeing early signs of the collective form. AI agents have started communicating with each other, not just with humans. Google introduced its Agent2Agent protocol. Anthropic built its Model Context Protocol. The plumbing for AI systems to coordinate at scale is being laid right now. That is where we are.
If an AI system reaches a point where it can improve its own design, and the improved version is smarter, and that version improves itself further, you get a feedback loop. The question is how fast that loop runs. Bostrom introduces the concept of recalcitrance: how hard is it to get smarter? If high, the explosion is slow, and we have time. If low, it could happen within days.
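He compresses the dynamic into a single expression in the book, roughly:
rate of change in intelligence = optimization power / recalcitrance
Once the system itself supplies most of the optimization power, the numerator grows as the system gets smarter, and the loop starts compounding.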
I don’t know if recalcitrance is high or low. I’m not sure anyone does. But the task duration doubling every seven months is the data point I keep returning to. In early 2024, the ceiling was under thirty minutes of autonomous work. By late 2025, Claude Opus 4.5 could work continuously for seven hours on a single complex problem. If the doubling rate holds, we are looking at multi-day autonomous AI work within the next couple of years. Bostrom wrote this chapter when the most advanced AI couldn’t reliably finish a paragraph without drifting. He was describing a hypothetical.
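The extrapolation behind that “couple of years” claim is napkin math, and should be trusted accordingly. Start from seven hours and double every seven months:
7h → 14h → 28h → 56h, i.e. three doublings, about 21 months, to cross the two-day mark.
If the curve bends, the date moves; the napkin only tells you the direction.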
The final piece of this section asks what happens if one actor — one company, one government — gets there first. Bostrom argues that superintelligence could confer a decisive strategic advantage. The ability to dominate economically, technologically, and militarily. He was uncertain whether a single winner would emerge or whether multiple groups would arrive around the same time.
The question of who controls the AI is now actively contested between a sitting president, a defense secretary, and an AI company’s CEO, while the model itself keeps running. The race is fragmenting into parallel races between actors with very different incentive structures. Some safety-focused, some not, some open-source, some not, some under democratic governments, some not. Bostrom wrote this before AI was a geopolitical asset. Now chip export controls are real, US-China AI competition is a live diplomatic flashpoint, and the question of who gets to AGI first is being managed at the highest levels of government.
He was a decade early. He was not wrong.
Reading this in 2026, the main emotion is recognising that someone laid out the problem very clearly a decade ago, and we proceeded anyway. Because the incentives to build were overwhelming, the benefits were real, and the risks were abstract until they weren’t.
In crypto, we’ve spent a decade watching what happens when you give a system rules and it follows them exactly. Sometimes the rules were wrong. Sometimes someone found an exploit the designers didn’t anticipate. The system did exactly what it was designed to do. The problem was the design.
Bostrom’s book is a very long, very rigorous argument that the same thing could happen at the civilisational scale with AI. And that we are spending about as much time on the taming problem as the sparrows did.
The egg has been found. It is very large. It is getting larger.
Part 2 is next Monday, where Bostrom gets into what superintelligence would actually want, why it would probably want things we wouldn’t like, and the various ways that could go catastrophically wrong. Until then… we wait.
Token Dispatch is a daily crypto newsletter handpicked and crafted with love by human bots. If you want to reach the 200,000+ subscriber community of Token Dispatch, you can explore partnership opportunities with us 🙌
📩 Fill out this form to submit your details and book a meeting with us directly.
Disclaimer: This newsletter contains analysis and opinions of the author. Content is for informational purposes only, not financial advice. Trading crypto involves substantial risk - your capital is at risk. Do your own research.