Hello,
I find a lot of AI debate still stuck between two positions. One side thinks AI will take away every job overnight. The other dismisses the entire AI wave by declaring that the tools are still inconsistent and need supervision. The problem is that neither stance is particularly helpful. New technologies don’t need to be perfect to trigger a seismic shift. They just need to become good enough that ignoring them looks more dangerous than engaging with them.
This is where moving in the right direction matters more than the speed of the movement. A vehicle moving at 1 mph in the right direction will still travel farther than one standing still. Starting early works the same way: it gives people room to be wrong, to learn, and to course-correct, while still ending up ahead of the laggards described in diffusion of innovation theory.
None of this means that humans must become obsolete with each introduction of new technology. What new innovations do is move the bar for what is disposable. This conversation is timely, given that AI agents are starting an economy of their own - making payments, signing contracts, and even hiring humans to help them with their tasks.
So what should you do in a world where predictions about the future become outdated before people have even finished debating them?
In today’s guest op-ed, SysLS writes about how to reason through that kind of future without waiting for certainty. It is a piece about moving before the window closes, recognising which edges are temporary, and understanding why the people who start early often earn the right to course-correct in public.
You can follow his work on X and Substack here 👇🏾
Onto the story now,
Prathik
Introduction
The first time I realised we were heading towards an inflexion point was when I heard the music slowing down at my previous role, even as everyone around me pretended nothing would change.
I was managing a team of nearly 20 people at a hedge fund, doing the thing I had been doing for years. For all intents and purposes, I was likely going to do even greater things there. And yet, I moved from a position people would kill for to building a startup from ground zero with a skeleton crew - a move few understood and many saw as crazy. With the recent news of massive layoffs, people quitting explicitly to build startups, or quietly quitting and burning tokens at night doing the same, my actions seem a lot less insane now.
I’ve had a few people ask me where I think all this goes. This article is the answer to that. The honest truth is that I’m not really sure about the magnitude of these changes, but if quant finance has taught me anything, it’s that being directionally correct is often enough.
Writing On The Wall
It was OpenAI’s o1 that did it for me. Up until that point, I had referred to these models only as “LLMs” and not “AI”. I was not yet convinced that any semblance of real intelligence would emerge from them.
But o1 was the first time these LLMs could credibly produce code from well-structured prompts. It was still messy. They still suffered from the occasional bout of hallucination and confusion. But what mattered was that they could actually produce useful code.
The line of reasoning I took was this: once AI could produce useful code, it would recursively write improvements to its own logic and accelerate development at a scale we could not comprehend. Whenever I shared this, people would counter-argue that the code agents wrote was still buggy and not “production-ready.” That misses the point: humans write buggy code, too.
We don’t need flawless code to stop writing code entirely. We stop writing code the instant we realise that agents produce fewer bugs than we do, at a pace that far exceeds us. The bar for fully relegating the burden of coding to agents was so low that once I saw o1 up close, I knew the future was going to change dramatically.
Quant Finance And The Moat of Knowledge
I initially believed AI would eventually eat away a vast majority of quant finance. But I expected it to take a while, since there was little publicly available institutional code for LLMs to train on. I imagined software engineering as a pyramid: at the base was basic code monkey work, above that was your senior developer with some architectural thinking, and above that were specialised developers: data scientists, quant developers, and so on. The more your profession required specialised knowledge, the safer you would be.
I thought we would wipe out the entire tranche of code monkeys within two years. Then, senior developers would start to go. And layer by layer, specialised knowledge would be incorporated into the LLMs, wiping out those roles as well.
It quickly became obvious that the frontier model providers would eventually hire specialised knowledge workers to contribute industry know-how to these models. Specialised knowledge seemed like it would be a moat for the next couple of years, but would also be eaten away gradually.
The Remaining Moats
There were still a few business categories I thought would be safe from trivial disruption within the next five years.
The first is proprietary data. Businesses that produced a lot of proprietary data as exhaust would be hard to disrupt. Large pod shops like Millennium come to mind; they can collect analyst readings, detailed analyses, recommendations, and actual price changes, and use this data to fine-tune frontier models into something that cannot be easily replicated. Any business producing proprietary data not trivially obtained by the frontier models would have a longer lease on life.
The second is regulatory friction. Businesses where other humans are a bottleneck seemed much harder to disrupt. Being able to trade in many TradFi markets meant opening broker accounts, getting licenses, and signing contracts around the globe. It’s easy to trade crypto, but much harder for a non-Chinese firm to trade iron ore in China. If you need a human to rubber-stamp your progress, the speed of that industry will always be bottlenecked by the cost and speed of that approval.
The third is authority-as-a-service. It’s not too hard now to get an agent to draft a legal opinion, given a comprehensive study of the matter and the relevant laws. And yet we’re still going to pay tens of thousands of dollars for one drafted by a lawyer, because an AI’s legal opinion is worth nothing at this point in time. Smart contract audits are another example. We’re probably already at a level where agents can review smart contracts as well as, or better than, the top decile of humans, yet most people still buy the stamp of authority from a branded firm. The opinion isn’t what you’re paying for. The authority behind it is.
The fourth is the physical intelligence lag. Hardware moves much more slowly than software, and breaking hardware is a lot harder to fix. Physical businesses that interact with the real world are a lot less likely to be disrupted soon. That said, once hardware catches up, the same pyramid logic applies: lower-level jobs go first, then the more specialised ones.
These moats are real, but none of them are permanent. The honest read is that they buy time, not safety.
Reasoning About A Messy Future
When the future is genuinely noisy, and the rate of change is fast enough that most analogies break down, people tend to do one of two things. They either wait for certainty before acting, or they pattern-match to the past (“this is like the internet boom”) and act on the wrong model. Both are mistakes.
It is worth reasoning from first principles under incomplete information. You don’t need to know exactly how something plays out. You just need to be directionally correct and structure your bets so that being early and wrong is survivable, while being early and right is disproportionately rewarding.
Asymmetry is the whole game when the future is uncertain.
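For readers who like to see the arithmetic, here is a toy expected-value calculation of that kind of bet. The probabilities and payoffs below are illustrative assumptions, not figures from any real trade:

```python
# Toy asymmetric bet: bounded, survivable downside vs. disproportionate upside.
# All numbers are illustrative assumptions, not real market figures.
p_right = 0.2           # assume the early call is right only 20% of the time
loss_if_wrong = -1.0    # cap the downside at one "unit" of time or capital
gain_if_right = 10.0    # the payoff for being early and right

expected_value = p_right * gain_if_right + (1 - p_right) * loss_if_wrong
print(expected_value)   # positive (about 1.2) despite losing 80% of the time
```

The structure is the point: you can be wrong most of the time and still come out ahead, as long as the losses are capped and the wins are not.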
The practical version of this is: ask what has to be true for a given outcome to happen, and then ask how legible the inputs to that outcome already are. The inflexion we’re living through was not unforeseeable; the inputs were visible. Code that could write code. Models that improved recursively. Institutional knowledge that could be bought, not just grown. Anyone willing to stare at those inputs clearly could see roughly where they pointed, even without knowing the exact path.
You can recursively reason about this and extrapolate further. I don’t even think we’ve yet caught a glimpse of what it will be like when agents can train themselves, replicate, and become truly autonomous. An agent that can increase its intelligence by 0.1% through a series of actions may not seem significant, but any number that is not 0 increases the probability that the next increment is greater, and so forth. There are vast power laws at play here, and it is worth imagining what the future looks like under those power laws.
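The compounding intuition can be made concrete with a toy model. The feedback term below is a hypothetical illustration of "improvement begets improvement", not a claim about how any real system trains:

```python
# Toy model: an agent gains 0.1% capability per cycle, and each gain
# slightly raises the rate of the next gain (hypothetical feedback loop).
def self_improve(cycles, base_rate=0.001, feedback=0.1):
    capability, rate = 1.0, base_rate
    for _ in range(cycles):
        capability *= 1 + rate
        rate = base_rate * capability ** feedback  # improvement begets improvement
    return capability

fixed = 1.001 ** 1000         # flat 0.1% per cycle: roughly 2.7x after 1,000 cycles
compounding = self_improve(1000)
print(fixed, compounding)     # the feedback version always pulls ahead
```

Even a tiny positive feedback term dominates over enough cycles; that is the power law the paragraph gestures at.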
By the time the signal becomes obvious, the trade is crowded. In markets, you pay for early conviction with uncertainty. In careers and startups, the currency is the same.
So the real question isn’t really “what’s going to happen?” The question is: “What do I already know, what direction does it point to, and what’s the cost of acting on it now versus waiting?”
One thing I often see people miss is that action creates information. Action does not happen in a vacuum. When you act on the world, the world replies with information. That information powers iteration. Iteration begets more informed action. That is the nature of progress.
Being still with incomplete information is decay.
Moving towards action is discovery.
Thinking About Next Steps
I knew I had a few years if I wanted to milk the status quo. But a large part of me felt like if I wanted to do something, I would have to start sooner rather than later. I had always wanted to build something truly mine, and it seemed like the window to do that was quickly closing.
To be clear, I know that the largest hedge funds in the world would be fine. They have proprietary data that makes them very difficult to replace. TradFi markets are also bottlenecked by human signatures, both on a regulatory and, at times, even a trading front. What I do think, however, is that these largest funds will use AI to replace most of their workforce, even terminal career seats like Portfolio Managers. Not immediately, but eventually, surely.
What I felt was that I had about 4-5 years before the foundation model providers hired enough specialised talent to make being an upstart trading firm nearly impossible. In certain markets, like U.S. equities, it already feels that way. I can’t imagine how much more efficient it’s going to look in just a few more years.
There was clearly not going to be space for “second best” pretty soon. I could keep working for the “best”, but it seemed more aligned with my goals to strike now, in a market where I had a genuine edge and knowledge that would not be trivially replicated. So, having that dawg in me, I called it quits and went all in on what eventually became @openforage.
Inflexion Point
Today, it’s really starting to feel like the window is visibly closing. The pace of change has stopped feeling gradual, and most people in the space are beginning to realise that what used to take months of improvement now takes weeks.
I don’t think jobs will vanish entirely within the next couple of years. There will always be a need for humans. Humans are social creatures. As long as humans are in charge, we want other humans around. And humans don’t trust AI yet, so stamps of authority still need to come from a human. I imagine AI CEOs in the next couple of years, but there will still likely be a human CEO having to “approve” and certify the AI CEO. This idea of human certification cascades down the pyramid. A human manager will manage and certify a bunch of agents working under him.
But the arithmetic of hires will change. If a CEO can prompt an agent more easily than they can prompt you, there’s no need to hire you. Shallow, code-monkey work will be difficult to find going forward.
To be irreplaceable, you need to operate at a timescale far above current agent limitations - receiving instruction, managing agents, and working with them for weeks, months, or years. Long-term strategic thinking and policy planning are some of the strongest job moats for the foreseeable future. You also need to operate at a scope greater than the current agent limitations. Agents have limited context. They know everything about anything, yet cannot see how component A interacts with component B, which in turn interacts with component C, causing cascading effects to component D. They lack scope.
If you can think far and wide, absorb information quickly, make decisions for the long term, and are likeable, you will hold down a job, at least for the foreseeable future.
If you do intend to be an employee, it’s worth taking stock of what your work is actually made of. Some tasks are deeply human defensible. Some will be replaced cheaply over the next couple of years. Do more of the former and less of the latter.
Working for a great firm in a deeply defensible position, one that sits behind real moats, may give you a career runway while the rest of the workforce gets eaten by the foundation models. You can still spend your tokens at night, rolling the dice, trying to build something meaningful.
But if you have a burning desire to contribute a unique verse to the world, think carefully about where your market of choice is heading. If your window to build something defensible is closing, you need to begin operating before the market fully prices in the competition that is coming.
Conclusion
The inputs that create inflexion points are legible ahead of time, if you’re willing to look. Most people don’t look, or they look and don’t act, or they wait until the signal is so obvious that the opportunity has already disappeared.
Don’t ignore the shifting sands. Don’t stay somewhere that’s losing ground while telling yourself you’ll make the leap when the timing is better. There’s no better timing, and the timing rarely announces itself. When it becomes obvious to everyone, the window has normally already closed.
I looked, I made a bet, and now I’m living inside the outcome of that bet — for better or worse.
Token Dispatch is a daily crypto newsletter handpicked and crafted with love by human bots. If you want to reach the 170,000+ subscriber community of Token Dispatch, you can explore partnership opportunities with us 🙌
📩 Fill out this form to submit your details and book a meeting with us directly.
Disclaimer: This newsletter contains analysis and opinions of the author. Content is for informational purposes only, not financial advice. Trading crypto involves substantial risk - your capital is at risk. Do your own research.