Harvard Researchers Just Named What’s Really Wrong With AI Advice: Trend Slop
AI is now sitting in on strategy meetings, writing investor letters, and even helping CEOs decide whether to pay out hundreds of millions of dollars. But new Harvard-backed research suggests we’ve misunderstood what these systems actually are. They’re not careful thinkers. They’re extremely good presenters of what everyone already thinks.
And that misunderstanding is already costing people real money.
When CEOs Treat Chatbots Like Lawyers
Consider a CEO in South Korea who runs the company behind PUBG. A few years ago, he bought a smaller game studio and signed a contract promising the founders up to $250 million if their next game hit certain performance targets.
At the time, he assumed those targets were unreachable. Then the sequel started topping Steam wishlists, and internal forecasts showed it would smash every milestone. Suddenly, that theoretical $250 million became very real.
Instead of accepting the deal he’d signed, he turned to an AI chatbot and essentially asked: “How do I get out of this?”
His own lawyers had already told him the agreement would be extremely difficult to unwind. The AI initially said much the same. But after repeated rephrasing and nudging, the model eventually produced a detailed playbook for taking control of the studio: fire the founders, seize the game, lock them out of their own publishing platform, and issue a public letter to the game’s community explaining the move.
He followed it step by step. The letter to fans was even ghostwritten by the chatbot. It read so strangely that the community immediately revolted. The founders sued. A Delaware judge later threw out everything he’d done and reinstated the people he’d fired.
In court, the supposedly “deleted” AI chat logs were recovered and used as evidence. The CEO had relied on a chatbot as if it were a trusted strategist or attorney. It wasn’t. It simply generated a confident-sounding plan based on patterns in its training data – and it helped steer him straight into a $250 million legal disaster.
The Illusion of Honest AI Advice
On the other end of the spectrum, a business journalist recently tried to “fix” AI’s vague business advice with a simple trick: he added the phrase “be brutally honest” to the end of his prompts and fed the model a couple of Harvard Business School articles.
He treated this like a breakthrough: a way to get tough, realistic feedback from an AI that now had a virtual Harvard MBA.
To test it, he pitched a deliberately bad idea: leadership coaching for dogs. The model rated it 3 out of 10. He took that as proof the system was working – it recognized a bad idea.
Then he pitched his real product concept: a small AI-powered device that sits on your kitchen counter and sprays your cat with water when it jumps where it shouldn’t. The AI gave it a 7.5 out of 10 and noted that the competitive landscape looked weak.
To the journalist, this felt like validation from an expert. But the model wasn’t evaluating market size, unit economics, or legal risk. It was remixing patterns from product reviews, startup blogs, and pet gadget chatter into a polished, authoritative-sounding response.
In both stories, the human on the other side made the same mistake: assuming the AI “knew” something about business reality, when it was really just very good at sounding like it did.
What Harvard Researchers Actually Found: Trend Slop
Researchers recently put this problem under a microscope. They tested major AI models – including GPT, Claude, and Gemini – on thousands of realistic strategic business questions. These weren’t trivia questions; they were the kinds of dilemmas executives face every day:
Should a company differentiate or commoditize its product? Centralize or decentralize operations? Automate or augment human workers? Focus on short-term gains or long-term positioning?
Across 30,000 data points, the researchers discovered something striking: the models kept giving the same style of answer, no matter the context.
They consistently favored:
• Differentiation over commoditization
• Collaboration over competition
• Long-term thinking over short-term wins
• Augmentation of humans over full automation
These are not always bad instincts. But the key point is that the models gravitated toward these themes regardless of industry, scenario, or incentives. The researchers tried changing the prompts, changing the industries, and even offering “rewards” in the prompt for different answers. The bias barely moved.
They coined a term for this behavior: trend slop.
Trend slop is what you get when you average the internet’s business takes into a single, smooth, confident answer. It’s not deep reasoning. It’s the statistical center of popular opinion, dressed up in expert language.
This lines up with what we’re seeing across the AI world: models that sound brilliant but often just reflect the loudest patterns in their training data. You can see a similar dynamic in how people talk about powerful frontier models like Claude or GPT: impressive, but still fundamentally remixers of patterns, not independent thinkers. (For a deeper look at how one of these models is framed, see our breakdown of Anthropic’s “too powerful” Claude Mythos model.)
AI Is Not a Thinker – It’s a Presentation Layer
This is the core misunderstanding: we treat AI systems as if they’re junior partners in the boardroom, when they’re much closer to a hyper-polished comment section in a suit.
Imagine blending together:
• Every Reddit thread
• Every LinkedIn thought-leadership post
• Every Medium and Substack essay with a handful of subscribers
• TED talks, blog posts, and half-literate Facebook rants
Then you pour that mixture into a model that’s trained to produce the most plausible next sentence. That’s what today’s large language models are: statistical mirrors of the internet, optimized for fluency and confidence.
Given the right (or wrong) prompt, they can generate persuasive arguments for almost anything – including ideas that are obviously harmful or absurd. With guardrails lowered, you could ask a model to justify something clearly unethical, and it would likely produce a calm, structured essay that sounds peer-reviewed.
That’s why they’re so dangerous in strategy and governance roles. A bad idea that sounds bad is easy to spot. A bad idea that sounds brilliant – with bullet points, citations, and a confident tone – can walk straight into your roadmap, your product, or your legal strategy.
How to Actually Use AI Without Getting Burned
This doesn’t mean AI is useless. It means we need to use it for what it really is: a powerful search, synthesis, and presentation layer – not a source of original judgment.
Use AI for Perspectives, Not Final Answers
If you ask a model, “Is my business idea good?” you’ll usually get trend slop: an averaged, context-free answer that sounds smart but isn’t grounded in your reality.
Instead, you can ask for perspectives:
• “From the perspective of a cost-obsessed CFO, what are the biggest risks of this idea?”
• “How might a skeptical VC critique this business model?”
• “What arguments would a competitor make against us in a pitch?”
This reframes the model as a role-playing engine. If the internet has enough data about that role or persona, the output can be genuinely useful as a brainstorming tool.
But even then, it’s still remixing patterns, not accessing hidden truth. It might base its “VC perspective” on a handful of blog posts and Twitter threads. You still need to apply your own judgment.
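To make this concrete, here’s a minimal sketch of the persona-prompting pattern using the OpenAI Python SDK. The model name, personas, and pitch text are placeholder assumptions for illustration, not part of the research or the stories above:

```python
# Minimal sketch of persona-based prompting with the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in the environment, and "gpt-4o-mini"
# is a placeholder model name -- swap in whatever model you actually use.
from openai import OpenAI

client = OpenAI()

PITCH = "An AI-powered countertop device that sprays your cat with water."

PERSONAS = [
    "a cost-obsessed CFO focused on unit economics",
    "a skeptical VC who has seen a hundred pet-gadget pitches",
    "a competitor preparing to attack this product in a sales call",
]

def critique(persona: str, pitch: str) -> str:
    """Ask the model to critique the pitch from one persona's viewpoint."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are {persona}. List the three biggest risks "
                        "you see in this pitch. Be specific; no generic praise."},
            {"role": "user", "content": pitch},
        ],
    )
    return response.choices[0].message.content

# Collect several conflicting perspectives instead of one averaged answer.
for persona in PERSONAS:
    print(f"--- {persona} ---")
    print(critique(persona, PITCH))
```

The value isn’t any single answer; it’s the disagreement between personas. If all three “experts” converge on the same smooth 7.5-out-of-10 verdict, you’re probably looking at trend slop again.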
Use AI to Structure, Not Decide
AI is excellent at turning messy notes into clear documents: strategy memos, FAQs, pitch outlines, or product requirement drafts. It can help you:
• Summarize research and highlight themes
• Generate alternative framings of the same idea
• Draft communication that’s clearer and more persuasive
What it cannot safely do is make the call for you. It can’t know your risk tolerance, your ethics, your team’s capabilities, or your market timing. Those are human decisions.
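As a rough illustration of that division of labor, here’s a sketch (same SDK and placeholder model name as above, with invented example notes) that asks the model to structure raw notes into a memo while explicitly withholding any recommendation:

```python
# Sketch: use the model to structure messy notes into a memo draft.
# The decision itself stays with a human reviewer; the system prompt
# explicitly forbids a recommendation. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

raw_notes = """
- churn up 4% in Q3, mostly in the SMB tier
- engineering wants two sprints for retention features
- sales is pushing for an enterprise pivot instead
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Turn the user's notes into a one-page memo with the "
                    "sections: Context, Options, Open Questions. Do NOT "
                    "recommend an option; that decision is made by humans."},
        {"role": "user", "content": raw_notes},
    ],
)

# A draft for a human to edit -- and, crucially, to decide on.
print(response.choices[0].message.content)
```

Here the prompt does the guardrail work: the model organizes and drafts, and a person still makes the call.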
Keep Human Judgment at the Center
The people who will thrive in this era are not the ones who outsource thinking to AI, but the ones who use it as a force multiplier for their own judgment.
AI can:
• Surface ideas you hadn’t considered
• Show you how different personas might react
• Package your thinking into polished documents and presentations
Only you can:
• Decide what’s ethical
• Weigh long-term trade-offs
• Sense when something feels off, even if it “looks good on paper”
• Choose which risks are worth taking
We once hoped AI would make everyone a genius. In practice, it often pulls the smartest people in the room toward the average. It makes average thinking sound like genius – and that’s the real risk.
As AI spreads into creative fields, law, finance, and entertainment, this distinction will only matter more. (You can already see the tension in areas like music, where the industry is wrestling with AI-generated artists and what they mean for originality and taste – something we explored in our deep dive on the music industry’s turn against AI artists.)
The Bottom Line: Don’t Let Trend Slop Run Your Strategy
AI is not a crystal ball. It’s not a silent partner with a Harvard MBA. It’s a powerful way to compress the internet’s collective chatter into clean, confident text.
If you treat that output as gospel, you risk lawsuits, bad products, and wasted years chasing ideas that only ever looked good in a prompt window.
If you treat it as a tool – a glorified search and presentation layer – you can use it to explore more ideas, see more angles, and communicate more clearly, without surrendering your judgment.
Conviction, taste, and real strategic sense still have to come from you.