A recent Reddit thread asked the question: what should I buy with $300,000? The replies were a clinical specimen of how retail investors actually decide what to own in the AI era. Almost none of it was analysis. Almost all of it was psychology. Here's the diagnostic.
Picture a real Reddit thread. A real person, sitting on $300,000 cash, asking the internet what to do with it.
The original poster says — without irony — that they have made a steady 10% per year for a decade and find this "uninteresting." They want a "thrill."
Pause on that for a second.
A 10% annualized return for ten years compounds $300,000 into roughly $778,000. Even modest compounding will outperform almost everything most retail traders do once trading costs and tax drag and emotional churn are factored in. And the poster's framing is not "I want better returns" — it's that the existing strategy is boring.
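The compounding arithmetic above is easy to verify directly. A minimal sketch (the $300,000 principal and 10% rate come from the text; the function name is just illustrative):

```python
# Check the claim: $300,000 at 10%/year for 10 years.
def compound(principal: float, annual_rate: float, years: int) -> float:
    """Future value with annual compounding."""
    return principal * (1 + annual_rate) ** years

final = compound(300_000, 0.10, 10)
print(f"${final:,.0f}")  # -> $778,123
```

No trades, no timing, no news. Just the exponent doing the work.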
This is not a financial decision. It's a dopamine decision dressed in financial vocabulary.
The replies that followed in the thread tell you everything about how retail money is actually moving in the AI era. The pitches were not numerical. They were narrative. They were not risk-adjusted. They were thrill-adjusted. They were not systematic. They were tribal.
Six cognitive biases run the table. If you're investing in AI stocks in 2026, at least one of them is almost certainly running you. Probably more than one.
Here they are, named, with the diagnostic question that catches each one — and what the analytical alternative looks like.
The single most underrated force in retail investing is the boredom of compounding.
10% per year is the wealth-creation rate of the entire S&P 500 over 90+ years. It is, in a meaningful sense, the answer. And it is universally experienced as boring.
Why? Because compounding is silent. It happens overnight, on weekends, while you sleep. You can't do anything to make it work. You just have to not interfere.
That is intolerable to a brain that evolved to look for novelty.
So the brain invents reasons to act. Reasons that sound rigorous. Reasons that sound informed. Reasons that, on inspection, are downstream of the same primitive impulse that powers sports betting and slot machines: the need for variance, for stakes, for something happening.
You don't have new information. The fundamentals haven't changed. The thesis hasn't moved. But you find yourself trading anyway — because not trading feels like wasting the ticket.
The honest answer for most retail investors is: they would get better and feel worse. That gap — between what's optimal and what feels good — is the casino's edge over you.
The AI sector amplifies this brutally. Volatility is high. News cycles are constant. Every model release, every earnings call, every CEO tweet feels like it might matter. So you check, and you act, and you confuse motion with progress.
In the same Reddit thread, a commenter announces — in tones of practiced humility — that they returned 137% in 2024 by taking large concentrated positions in AI infrastructure and biotech.
They list the picks. They cite the conviction. They describe the "research." They invite the original poster to consider the same approach.
Here is what they do not list: everyone who tried that approach and got annihilated.
If 1,000 retail investors take massive concentrated positions in five high-volatility AI stocks, variance alone guarantees that roughly 30 of them will return 100%+ in any given year by pure chance — even with zero skill. Those 30 will write Reddit posts. They will start newsletters. They will brand themselves as "AI stock pickers."
The 970 who got crushed will not write a single post. They will quietly close their accounts, or pretend it didn't happen, or revise the story in their head. The signal they would generate — the warning to people considering the same approach — gets filtered out of the conversation entirely.
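You can watch this filter operate in a simulation. The sketch below is illustrative, not a market model: the volatility figure, the five-stock portfolio, and the zero-edge lognormal returns are all assumptions chosen to make the mechanism visible.

```python
# Survivorship bias as a simulation: 1,000 investors, zero skill,
# pure variance. Parameters (sigma, stock count) are assumptions.
import random

random.seed(42)

def simulate_year(n_investors: int = 1_000, n_stocks: int = 5,
                  sigma: float = 0.8) -> int:
    """Count investors whose equal-weight portfolio doubles in one
    year, given zero-edge lognormal stock returns."""
    winners = 0
    for _ in range(n_investors):
        # mu = -sigma**2 / 2 makes the EXPECTED gross return 1.0:
        # no skill, no drift, pure variance.
        gross = [random.lognormvariate(-sigma**2 / 2, sigma)
                 for _ in range(n_stocks)]
        if sum(gross) / n_stocks >= 2.0:  # +100% or better
            winners += 1
    return winners

winners = simulate_year()
print(f"{winners} of 1,000 doubled by luck alone")
```

The exact count moves with the volatility assumption, but the shape of the result does not: a visible minority doubles their money with no edge at all, and only that minority posts about it.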
You read about a strategy that worked for someone, and you treat their winning year as a method. You do not see the distribution it came from. You see a single sample and assume it generalizes.
This is why backtests beat anecdotes. A backtest forces you to look at the full distribution — winners and losers, drawdowns and rallies, the years where the strategy looked stupid. An anecdote is just one path through the tree. The path you happened to hear about.
In the AI era this matters more, not less, because the variance is enormous and the stories that surface are maximally selected for confirmation.
Re-read the 137% commenter's pitch and you find a recurring tell: "I knew the entire space sector would move because of the SpaceX IPO and Artemis."
No one knew.
If they had known with that certainty, they would have allocated everything they own — including borrowed money — to the trade. They didn't, because they didn't actually know. They thought it was likely. After it played out, they upgraded the memory to "I knew."
Daniel Kahneman wrote about this. After an outcome is known, the human mind reorganizes its prior beliefs to match what happened. The uncertainty disappears. The doubts get edited out. What remains is a clean narrative of "I saw it coming."
This is dangerous for two reasons. It inflates your confidence in your own foresight, and it erases the record of how uncertain the call actually felt at the time — so the next bet gets sized as if you really did know.
You catch yourself saying "I always thought NVDA would go up," or "obviously the labs were going to burn cash." The word obviously is the tell. If it had been obvious, the price would have already reflected it.
The deepest version of this bias in AI investing is the assumption that "AI was always going to be huge" — therefore "AI stocks were always going to be huge". The first half is mostly true. The second half is a separate question, and conflating them is exactly how the bagholder gets made.
In the same thread, when one commenter raised concerns about a particular memory chip stock's fundamentals, another replied — proudly — "Yeah buddy the market doesn't care what u think, just follow the $."
Another added: "The sheep making all the money."
This is herd behavior, articulated and celebrated.
Momentum is a real factor. The strategy of "buy what's going up because other people are buying it" has produced positive returns over long stretches. This is true. What is also true: momentum strategies crash violently when the trend turns, and the investor whose only thesis is the crowd has no way of knowing when the crowd starts leaving.
Howard Marks of Oaktree Capital has dissected this dynamic clinically: by the time a belief is universally embraced, most of its payoff is already in the price.
You find yourself buying because "everyone is buying it" — and you'd struggle to articulate the thesis without referencing other people's behavior. The rationale is the popularity itself.
The AI sector is maximally exposed to this dynamic. Mag-7 concentration, retail crowding, passive flow lock-in, social-media-driven momentum, FOMO on every model release — every one of these is herd-amplification, not signal.
Throughout the Reddit thread, commenters threw out price targets with zero supporting math:
"It will triple in the next 3-5 years."
"Could go from $7.50 to $30."
"This stock has 5x potential."
Notice what's missing: any model that produced those numbers. They are vibes wearing a price tag.
This is anchoring. A specific number — especially a round one or a multiple — gets planted in your head, and from that moment forward, the question quietly shifts from "is this stock undervalued?" to "how do I get from current price to the anchor?"
The anchor wasn't justified when it was planted. It still isn't. But the brain treats it as a fixed point of reference against which the current price looks "cheap."
This is how "NVDA is going to $5,000" becomes a thesis. Not because anyone derived $5,000 from cash flows or addressable market or any model at all. But because the number got said, and now it lives somewhere — and any current price below it looks like a discount.
You find yourself comparing the current price to a number — a 52-week high, a hypothetical target, a previous purchase price — without examining whether the anchor is justified. The anchor becomes the question.
The cure is structural, not psychological: think in scenario ranges, not single-point targets. Don't ask "will NVDA hit $5,000?" Ask "across plausible AI deployment scenarios over 1, 2, 3, 5, and 10 year horizons, what's the distribution of outcomes for the chip layer?" The first question is anchored gambling. The second question is investing.
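Scenario thinking can be as simple as a probability-weighted table. The scenarios, probabilities, and multiples below are invented for illustration only — they are not forecasts — but the structure is the point: the output is a distribution, not an anchor.

```python
# Scenario ranges instead of a single price target.
# All numbers here are illustrative assumptions, not forecasts.
scenarios = {
    # name: (probability, 5-year gross return multiple)
    "AI deployment stalls":        (0.20, 0.4),
    "slow, uneven adoption":       (0.35, 1.2),
    "broad enterprise deployment": (0.30, 2.5),
    "transformative buildout":     (0.15, 5.0),
}

# Probabilities must sum to 1 -- the discipline the anchor skips.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected = sum(p * m for p, m in scenarios.values())
worst = min(m for _, m in scenarios.values())
best = max(m for _, m in scenarios.values())

print(f"expected multiple: {expected:.2f}x, range {worst}x-{best}x")
```

Notice what this forces: a downside scenario with a real probability attached. A point target like "$5,000" never has to confess what happens in the other branches.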
The most beautifully delivered pitch in the Reddit thread was for a small-cap biotech. The pitch was short and powerful: "These guys are basically curing cancer."
That's it. That's the thesis.
The story did the work. The balance sheet did not need to be examined. The drug pipeline did not need to be checked. The revenue did not need to exist. The probability of FDA approval did not need to be modeled.
Nassim Taleb named this the narrative fallacy: humans process the world through stories, not statistics. When a story is good enough — emotional, vivid, world-changing — it overrides numerical evaluation entirely.
This is the engine of the AI bull market.
Independent AI analyst Alberto Romero, examining the structure of the AI lab-equity market in May 2026, made the same point from the bear's side: the story is doing the heavy lifting. Even if you believe AI deployment is real (and it is), lab-equity pricing has detached from lab-equity cash flows in measurable ways.
You catch yourself describing why a stock will go up using a story — "they're disrupting X," "this is the next Y," "they're going to change Z forever" — without anywhere in the description appearing a number, a unit economics breakdown, or a checked claim.
Every prior technology cycle activated some of these biases. The AI era activates all six at once, at higher intensity than any cycle since the dot-com boom.
This is not a moral failure. It is how human cognition works under conditions of uncertainty, novelty, and high social signal. Even hedge fund managers — full-time, professionally trained, with quant infrastructure — fall into these traps. Retail investors are facing the same biases without the infrastructure.
Here is what we are not telling you to do:
"Buy index funds and never look at your portfolio."
That advice is correct. It is also psychologically impossible for most retail investors to follow, especially in the AI era. The compounding-and-stillness path optimizes returns. It also produces zero stimulation, which is the actual bottleneck.
The AI sector is exciting because it might genuinely be the largest economic transition since the internet. That excitement is rational. The question is not how to suppress it — the question is how to channel it through a structure that doesn't blow you up.
Channeling means keeping most of the capital on the boring compounding path, capping the active AI allocation at a size you can afford to lose, and requiring a mechanism and a scenario range before every bet.
This is what "casino with telemetry" means. You're still in the game. You're still allocating to AI exposure. You're still taking views, taking risk, having fun. You just have a sanity check running underneath every bet.
The sanity check is the analytical edge that catches the bias before you commit the capital.
The matrix scores 28 industries across 5 horizons (1yr, 2yr, 3yr, 5yr, 10yr) using 8 analytical dimensions and 167 cross-industry effects. It replaces narrative with mechanism, anchors with ranges, and tribal conviction with auditable scoring. Stop guessing which AI stocks become bagholder traps. See the math.
You're an active retail investor who reads Reddit, follows the AI sector, takes positions, has a real portfolio, and has caught yourself making decisions you couldn't fully justify. You don't want to be told to "just buy the index." You want a discipline that lets you stay engaged without becoming the predictable retail bagholder in someone else's exit. This page is the diagnostic. The matrix is the discipline.
You're a thoughtful skeptic who is suspicious of the entire AI bull narrative. Good. Many of the warnings you'll find here are warnings you've been giving other people. The page validates the suspicion — but separates it cleanly from the deployment thesis, which is real even if the lab equity is overcooked. (See also: The Trillion-Dollar Question Wall Street Is Asking Backwards.)
You're someone who has lost money on AI stocks already and is wondering if you should buy more on the dip or get out. The honest answer is: it depends on which bias drove the original purchase. If it was sensation seeking — be careful. If it was a thesis with mechanism behind it — recheck the mechanism. The difference is the difference between learning and repeating.