Hello,
In January 2026, an anonymous trader on Polymarket placed a series of bets that Venezuela’s president, Nicolás Maduro, would be captured. The total wager was about $34,000. When U.S. special operations forces carried out the mission days later, the trader cashed out over $400,000. The Secretary of State later confirmed that the operation was too sensitive for congressional notification. Think about that for a second. The United States Congress, the body that is supposed to authorize military operations, was not informed. The American public had no idea. But someone sitting behind a screen, on a crypto betting platform, knew enough to put real money on it. And they were right.
This has become the defining narrative of the prediction market industry: to be what Polymarket CEO Shayne Coplan calls the “truth machine.” The argument is that, because traders have skin in the game, their collective bets produce a more honest picture of what is going to happen in the world than any poll, expert, or pundit who faces no consequences for being wrong. By this logic, the odds on Polymarket are the closest thing to the truth you can find anywhere.
And that narrative has kind of worked. Prediction markets are no longer a niche corner of the internet where degens bet on elections for kicks. A recent dataset of 364 TikTok videos mentioning prediction markets found that 68% of the videos had nothing to do with trading. People were not betting; rather, they were citing odds from these platforms in political debates, just as they used to cite polls. Polymarket showed up in roughly 70% of those videos. A 22-year-old making a political video on TikTok is using a crypto betting platform’s odds as evidence for what is going to happen in the real world, and a significant number of people are nodding along.
That is wild. Two years ago, none of this would have seemed possible. But there is a serious question almost nobody is asking: do the odds actually deserve that level of trust?
So let me ask: How accurate are these markets really? What happens when the odds start shaping the events they are supposed to predict? And what does the future look like when the whole world starts treating betting lines as the truth?
How Do You Even Grade a Prediction Market?
Before we look at the numbers, it helps to understand how you actually measure whether a prediction market is any good. Because most people have never thought about this, and without it, all the claims from Polymarket and Kalshi are just marketing.
There is a scoring method called the Brier score. Glenn Brier, a meteorologist, came up with it in 1950 to grade weather forecasts, because weather forecasters were (and still are) among the first people who had to take probabilistic predictions seriously for a living. The idea is dead simple. Say you predict there is a 90% chance of rain tomorrow, and it rains. That is a good prediction. Your Brier score is low. Now, say you predicted a 90% chance of rain, and the sky stays clear. That is a terrible prediction. Your score is high. A Brier score of 0 means you predicted everything perfectly. A score of 0.25 means you did no better than a coin flip. Anything above 0.25 means you would have been better off guessing randomly.
Why does this matter? Because when Polymarket tells you their market gave Trump a 60% chance, and he won, that sounds impressive on the headlines, but statistically, a single correct call tells you almost nothing. You need to grade the full track record across thousands of questions, over time. That is what the Brier score does. It is the only honest way to evaluate whether these markets are actually good at this.
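To make the mechanics concrete, here is a minimal sketch in Python. The function name and the sample numbers are mine, purely for illustration; they are not taken from any platform’s data.

```python
def brier_score(forecasts, outcomes):
    """Average squared gap between probabilistic forecasts
    and binary outcomes (1 = it happened, 0 = it did not)."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# A confident forecaster who turns out to be right scores near 0:
print(round(brier_score([0.9, 0.9, 0.1], [1, 1, 0]), 4))  # 0.01
# Always saying 50/50 scores exactly 0.25, the coin-flip line:
print(round(brier_score([0.5, 0.5, 0.5], [1, 0, 1]), 4))  # 0.25
```

Notice that the score rewards confidence only when it is justified: a 90% call that fails costs you 0.81 on that question, which is why confidently wrong markets get buried.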
A site called Brier.fyi does exactly that. They analysed over 84,000 questions across Polymarket, Kalshi, Manifold, and Metaculus. Polymarket’s overall Brier score came in at 0.047. That is a genuinely good score. To put it in plain terms, a forecaster who says, “I am 95% confident this will happen” and is right at exactly that rate, consistently, would earn a Brier score of about 0.047.
But here is where it gets interesting, and where the “truth machine” narrative starts to fall apart.
That 0.047 is an average across everything Polymarket has ever listed. And averages, in this case, are doing a massive amount of heavy lifting. When you break the score down by what people are actually betting on, the grades swing wildly.
Science and economics? Polymarket scores an A-. These are the markets on CPI numbers, Fed rate decisions, and GDP prints. They perform well because the people trading them tend to be financially literate, the data is verifiable, and there is real money at stake from institutional players who actually understand the subject.
Politics? B+. Still decent, mostly carried by the massive presidential election markets where billions of dollars flow in. Culture and technology? Worse. Much worse.
And then there is sports. The overall Brier score across platforms for sports markets was 0.325. That is a D-. Remember, a coin flip is 0.25. Sports prediction markets, across the board, performed worse than if you had just flipped a coin for every single question. Let that sink in.
The category that attracts the most casual bettors, the one that Kalshi has been aggressively expanding into (roughly 90% of Kalshi’s volume was in sports at one point), is the category where the markets are provably unreliable.
Now look at individual markets. This is where the story breaks down even further.
Polymarket ran a market on whether Bitcoin would hit $100,000 before January 2025. Bitcoin did hit $100,000. The market got the outcome right, but it spent most of its life pricing the probability wrong, hovering at low confidence for months, then spiking to near certainty only in the final stretch. Its Brier score was 0.4909. That is an F. Remember, anything above 0.25 (the coin-flip line) means you would have done better guessing randomly. This market scored almost double that.
The market on Kamala Harris winning the 2024 Democratic nomination is even wilder. She won the nomination, so the market technically got the outcome right. But its Brier score was 0.9098. That is so bad it is hard to overstate. The market was confidently wrong for so long that even getting the final answer right could not save it. If you had been using that market as a signal for your own decisions, you would have been misled for the entire campaign cycle, right up until the moment it no longer mattered.
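This pattern, right at resolution but wrong for months, is easy to reproduce with a toy calculation. The daily price series below is invented for illustration; it is not real market data.

```python
# Hypothetical daily closing prices for a YES contract that
# ultimately resolved YES. The market sits near 10% for 90 days,
# then jumps to 95% only in the final 10 days.
prices = [0.10] * 90 + [0.95] * 10
outcome = 1  # the event happened

# Grade every daily price against the final outcome, then average.
time_avg_brier = sum((p - outcome) ** 2 for p in prices) / len(prices)
print(time_avg_brier)  # roughly 0.73, far worse than the 0.25 coin-flip line
```

A late sprint to 95% cannot rescue 90 days of confidently pricing the wrong side. That is how a market that “called it” can still score an F.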
Now for the other side of the ledger, because this is not a simple story. The 2024 U.S. presidential election was a genuine win for prediction markets. Polymarket had Trump at roughly 60% when every major poll was calling it a toss-up. A Vanderbilt study ran Bayesian time-series models comparing Polymarket prices to national polls across all seven swing states. Polymarket was more accurate in every single one.
So what does this tell us? Prediction markets are very good at elections. Specifically, they are very good at the biggest, most liquid elections, where billions of dollars, tens of thousands of traders, and massive public attention all converge on the same question. In that specific scenario, they consistently beat polls.
But here is the problem with the “truth machine” label. Elections are maybe 2% of what these platforms list. Polymarket’s 2024 presidential market alone generated over $3.6 billion in volume with 63,000 unique monthly traders. Move one step outside that, to congressional races, state ballot measures, or anything in culture, tech, or sports, and the bid-ask spreads on contracts blow out to 20% to 100% of the midpoint price. Legislative and crisis markets had spreads near 100%. A spread that wide means the market has almost no idea. It is just two people on opposite sides of a guess, with barely any money backing either view.
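Those spread percentages are easy to make concrete. A quick sketch, with quotes I invented for illustration rather than real order-book data:

```python
def relative_spread(bid, ask):
    """Bid-ask spread as a fraction of the midpoint price."""
    midpoint = (bid + ask) / 2
    return (ask - bid) / midpoint

# A deep presidential market might quote 59c bid / 60c ask:
print(f"{relative_spread(0.59, 0.60):.1%}")  # 1.7%
# A thin culture market quoting 5c bid / 15c ask is a different animal:
print(f"{relative_spread(0.05, 0.15):.1%}")  # 100.0%
```

In the second case, the market’s “probability” is anywhere between 5% and 15%, depending on which side of the book you read. Quoting the midpoint as the truth hides that uncertainty entirely.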
When the Odds Start Writing the Story
The accuracy problem would be manageable if it stayed inside the prediction market ecosystem. Traders who bet on bad markets lose money, learn, and the system improves over time. That is how every financial market has worked for a hundred years. But the problem is that the odds have stopped being an internal signal for traders and started being broadcast content for everyone else.
Over the past eighteen months, every major American newsroom has integrated prediction market data into its political coverage. The Wall Street Journal signed a formal agreement with Polymarket to run its betting data alongside editorial reporting. CNN started putting Kalshi odds on screen during election night coverage. CNBC did the same with Kalshi. In December 2024, even Substack announced a direct partnership with Polymarket so newsletter writers could embed live market data straight into their posts.
This is how the odds ended up on TikTok. The numbers traveled from Polymarket to the Wall Street Journal to cable news to Twitter to TikTok. By the time the odds reach a casual viewer, they have been routed through enough credible outlets that they feel like fact. People are absorbing numbers that have already been pre-laundered through the mainstream press.
And this is where the problem with prediction markets starts, because once the odds are being consumed as news, they start to influence the very thing they are supposed to be predicting. There is a name for this. Economists call it endogeneity. In plain language, it means the measurement changes the thing being measured.
Let me give you a concrete example. Brian Armstrong, CEO of Coinbase, was on an earnings call. He became aware that Polymarket was running a contract on whether he would mention certain specific phrases during the call. So he modified the words he was going to use. Here, the market was supposed to be predicting what he would say. Instead, his knowledge of the market’s bets changed what he said.
Now scale that dynamic up to an election. In the 2024 U.S. presidential race, a French trader who goes by “Theo” (the media called him the “Polymarket Whale”) placed enormous bets on Trump winning. He eventually profited over $85 million. This was not some lucky gambler. He had commissioned his own private polling, separate from every public poll in the country, and his data showed Trump performing significantly better than the public numbers suggested.
Because of this, his bets pushed the Polymarket price up, which was then picked up by the outlets I mentioned: the Wall Street Journal, CNN, and political commentators on every platform. The story became “the prediction markets favor Trump even though the polls say it is a toss-up.” That single narrative shaped how millions of people perceived the race in the final weeks. Commentators debated whether the “smart money” knew something the polls did not. Voters consumed that, and Trump won.
I am not claiming that Theo changed the outcome of the election. That would be a stretch I cannot back up. What I am claiming, and what I think anyone paying attention should be alarmed by, is that a single trader with private polling data that nobody else had access to was able to move Polymarket’s price, and that price was then broadcast by the Wall Street Journal and CNN as the market’s collective wisdom. The function of a good prediction market is to aggregate many pieces of information from many participants into one clean signal. What happened in 2024 is that one person’s proprietary poll got laundered through Polymarket and rebroadcast as if it represented a consensus of thousands of traders.
And if one trader can do that, walking away with $85 million, imagine what people with real money and power might do.
In February 2026, Israeli authorities indicted at least two people for using classified military intelligence to bet on Polymarket. They had placed wagers on contracts related to Israeli military operations before those operations became public. The potential profits were around $100,000. These were people with security clearances, betting on war using information the public would not see for days. It was the first prosecution of its kind anywhere in the world, and it confirmed that prediction markets are fast enough, liquid enough, and anonymous enough that they can be used to monetize secrets in real time.
The Maduro trade that opened this piece? Same pattern. Someone bet on a covert military operation before it happened and walked away with over $400,000. Whoever they are, they either had inside information or made one of the luckiest guesses in the history of political betting. We will never know.
What Happens When Everyone Believes the Odds
The median Polymarket question resolves in four days. The average is 19 days, but that gets pulled up by a few long-running markets. Most questions on the platform are about what is going to happen this week.
This tells you that these markets are not forecasting the future in any meaningful long-term sense. They are just pricing the very near present. Will this vote pass on Friday? Will this person say this thing tomorrow? Will this number come in above or below the estimate on Wednesday? That is useful information. But it is a very different thing from what people mean when they call prediction markets “truth machines.” A truth machine, in the way the phrase is used, implies that the market can tell you what the world will look like in six months, a year, or five years. The data says it cannot. Not even close.
99% of prediction market volume shows up right before the event resolves. The money piles in during the final hours, when the outcome is nearly certain. And the liquidity gap is even starker. Total weekly volume across Polymarket and Kalshi combined reached roughly $2.5 billion by late 2025. Sounds enormous, right? But the U.S. options market alone clears around $760 billion in a single day.
Prediction markets are 0.05% of that. The entire prediction market industry, across every platform, every contract, every category, is a rounding error compared to the markets that institutions actually rely on for decision-making.
So here is the situation. Prediction markets work well for a very specific type of question: binary, high-profile, short-term events with millions of dollars at stake. That is a small slice of what these platforms actually list. For the other 98%, the prices are unreliable, the liquidity is nonexistent, and the output is closer to a Twitter poll than a financial instrument.
And these platforms are building themselves into the default source of probabilities for everything. The same way you open a Bloomberg terminal when you want a stock price, the vision is that you open Polymarket when you want a probability. The play is that once enough media outlets, newsrooms, financial analysts, and government researchers depend on that feed, the product becomes impossible to replace, regardless of whether the numbers on it are actually good.
I think this will work. And I think that should worry everyone.
Because the question is not whether prediction markets are useful. They are. For elections, major economic data, and a handful of high-profile events, they consistently outperform alternatives. That is real and that matters. The question is what happens when an entire information ecosystem starts treating the output of these markets as ground truth, even for the 98% of questions where the market has no idea.
Robin Hanson, the economist who spent decades arguing for prediction markets, described them as systems that force people to put money behind their beliefs. The resulting price, in his model, would be the best available estimate of truth. But that model assumed deep liquidity, diverse participants, and resistance to manipulation. The markets we have are dominated by a small number of whales, concentrated in two categories (elections and sports), with about 80% of all volume in those buckets. The other 20% is scattered across thousands of contracts where a few thousand dollars can swing the price by double digits.
These are attention machines. They work when the world is watching and break when it is not. The more people believe they are truth machines, the more power accrues to the people who can move the prices. And the people who can move the prices are not a diverse crowd of informed citizens. They are a handful of traders with deep pockets, private polling, and, in at least two confirmed cases, classified intelligence.
The most dangerous thing about prediction markets is not that they are wrong. It is that they are right just often enough, on just the right questions, to earn a level of trust they have not earned across the board. And that trust is being built into the rails of how the world processes information. The Wall Street Journal prints the number. CNN airs it. TikTok repeats it. And somewhere, one trader with enough money is deciding what that number says.
That is the reality of the truth machine. A system that produces a number, and a world that has decided to call it the truth.
Token Dispatch is a daily crypto newsletter handpicked and crafted with love by human bots. If you want to reach the 200,000+ subscriber community of Token Dispatch, you can explore partnership opportunities with us 🙌
📩 Fill out this form to submit your details and book a meeting with us directly.
Disclaimer: This newsletter contains analysis and opinions of the author. Content is for informational purposes only, not financial advice. Trading crypto involves substantial risk - your capital is at risk. Do your own research.











