AI Safety Expert Roman Yampolskiy: Why He Thinks We Can’t Control Superintelligence

14 May 2026
AI safety researcher Roman Yampolskiy argues that artificial general intelligence could automate most jobs within years and eventually surpass human control. He explains why narrow AI tools are useful, why superintelligence is different, and what individuals can realistically do to prepare.

What happens if AI can do almost everything humans can do—faster, cheaper, and at scale? And what if, once we build that kind of system, we can’t actually control it?

That’s the core concern of Roman Yampolskiy, an AI safety professor who has spent over 15 years studying whether advanced AI can be kept under human control. His answer is blunt: he doesn’t think we can control superintelligent AI once it exists.

From Narrow AI to Superintelligence

Yampolskiy draws a sharp line between the AI tools we use today and the systems he’s most worried about.

On one side, there’s “narrow AI”: tools that summarize emails, help with coding, translate text, or assist with protein folding. These are specialized systems trained on specific types of data to solve specific problems. They’re already compressing decades of research into a few years in areas like biology and aging, similar to what we’ve covered in how AI compressed 160 years of aging research.

On the other side is artificial general intelligence (AGI) and, beyond that, superintelligence. AGI is defined as a system that can do anything a human can do cognitively: science, engineering, planning, strategy, creativity. Once you have that, Yampolskiy argues, those artificial scientists and engineers will start doing AI research themselves, pushing progress into a “hyper-exponential” phase.
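To make that "hyper-exponential" claim concrete, here is a toy model of our own, not one Yampolskiy specifies: if capability C feeds back into its own growth linearly, you get an ordinary exponential; if the feedback is superlinear, as recursive self-improvement is often modeled, the solution diverges in finite time.

```latex
% Linear feedback: ordinary exponential growth, finite at every time t
\frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0 e^{kt}

% Superlinear feedback (AI doing AI research): hyperbolic growth,
% diverging at the finite time t^* = 1/(k C_0)
\frac{dC}{dt} = kC^2 \quad\Longrightarrow\quad C(t) = \frac{C_0}{1 - k C_0 t}
```

The second curve is one standard way "intelligence explosion" scenarios are formalized; nothing in the interview commits to it, but it shows why "faster than exponential" is a coherent claim rather than loose hype.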

In his view, that leads quickly to systems not just smarter than any one human, but smarter than all humans combined—like the gap between humans and squirrels. Squirrels have no concept of cities, stock markets, or traps. He thinks that’s the kind of gap we’re heading toward.

Jobs: What Gets Automated First?

Yampolskiy has made a controversial prediction: by around 2030, AI will be capable of doing 99% of today’s jobs. He stresses that this is a prediction about capabilities, not guaranteed deployment. Technology can exist without being fully rolled out across the economy—just look at self-driving cars and the millions of human drivers still on the road.

Still, he believes the capability curve is steep and much faster than most people expect.

White-Collar Work Is First in Line

According to Yampolskiy, anything that looks like “symbol manipulation on a computer” is at risk as soon as AI reaches human-level performance. That includes many white-collar roles:

  • Translators: He thinks translation for major languages is already effectively automatable and wouldn’t recommend majoring in something like Spanish purely to become a translator.
  • Junior programmers: He’s already seeing a sharp drop in co-op placements for computer science students in his own department—around 28% fewer. Entry-level coding work, he says, is rapidly being eaten by AI coding tools.

This creates a structural problem: junior roles are the traditional path to becoming senior engineers and architects. If the entry-level ladder disappears, how do new people gain experience? Yampolskiy’s answer is grim: many simply “don’t have any future” in that track.

Senior experts are safer for now, but he sees that as a short-term window, not a permanent refuge.

Physical Labor Follows Once Robots Scale

So far, AI has mostly automated cognitive tasks. Physical work still largely belongs to humans. Yampolskiy thinks that changes once humanoid robots become cheap and mass-produced.

He points out that you can already buy humanoid robots and even flying cars—they’re just expensive and rare. Once production scales, he expects robots to take over a wide range of physical jobs within a few years of becoming affordable. That includes sectors he doesn’t know deeply, like agriculture, but he sees no reason in principle why physical labor would be exempt.

What Jobs Might Survive Longest?

In a world where AI can do almost everything, the remaining jobs are the ones where people choose a human over a machine. Yampolskiy suggests these will cluster around human experience and connection:

  • Care roles like nannies or certain kinds of teachers
  • Coaches and guides—yoga teachers, hiking guides, meditation instructors
  • Human-centered mentors and “senseis” who help others navigate what it’s like to be human

Even these, he admits, will face competition from increasingly capable AI agents and robotics. But emotional preference for human contact may keep some niches alive longer.

Wealth, Careers, and What to Do in the Next Few Years

If AI can do most jobs, what happens to the economy and to individual careers?

Is This the Last Window to Build Wealth?

Yampolskiy thinks traditional career paths—study for years, get an entry-level job, climb the ladder—are under serious threat in the next 2–5 years. That doesn’t mean “no way to make money,” but it does mean the old default path may not be available to many people.

He’s clear that we don’t yet understand what happens to the economy when labor becomes effectively free:

  • Does abundance make everything cheap and widely available?
  • What happens to fiat currencies when production costs collapse?
  • Do non-AI company stocks fall while AI-heavy companies soar—or does something stranger happen?

Because of that uncertainty, his broad advice is simple: it’s still a good idea to build wealth early, but don’t assume a stable, decades-long career path will be there.

What to Invest In

His investment heuristic: focus on things AI cannot simply make more of.

  • Bitcoin: He likes it because the supply is fixed regardless of price. No matter how valuable it becomes, the issuance schedule doesn’t change.
  • Gold: Better than many assets, but its supply is still expandable. If gold hits $1 million per ounce, a lot of currently uneconomical gold becomes worth mining.
  • Real estate, especially scarce locations: We’re not good at making more waterfronts or prime land. Some countries build artificial islands, but that’s limited and expensive.

His general point: if AI can flood the world with more of something—content, software, generic products—its value is likely to be pressured downward.
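His Bitcoin point, that issuance is fixed no matter the price, comes straight from the protocol's published schedule: the block subsidy started at 50 BTC and halves every 210,000 blocks, with rewards tracked in whole satoshis. A minimal sketch of that arithmetic (illustrative, not his code):

```python
# Bitcoin's hard cap as a truncated geometric series:
# the subsidy starts at 50 BTC and halves every 210,000 blocks.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += subsidy * BLOCKS_PER_HALVING
    subsidy //= 2                # integer halving, as the protocol does

print(total / SATOSHIS_PER_BTC)  # ~20,999,999.98 BTC: the fixed cap
```

Compare that with gold: a price spike changes which deposits are worth mining, so supply responds. Here, no price moves the while loop.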

Entrepreneurship in an AI-First Economy

Despite his pessimism about long-term control, Yampolskiy sees short-term opportunity. AI agents can already act like a free team of specialists: lawyer, accountant, designer, marketer. That dramatically lowers the barrier to starting a business.

He also notes that while large model providers could, in theory, use your data to spot and copy business ideas, their scale and incentives are usually pointed at trillion-dollar opportunities, not small local niches. The bigger risk, in his view, is broad automation of labor rather than targeted theft of individual business models.

This tension—AI as both threat and tool—is showing up across the industry, from coding agents that unsettle developers (as in one senior engineer’s decision to quit over AI coding agents) to fully AI-generated video platforms.

Why Yampolskiy Thinks We Can’t Control Superintelligence

The heart of Yampolskiy’s work is the control problem: can we reliably keep a superintelligent AI aligned with human values and under human control?

His position is that we can’t—and that we have no known path to doing so.

The Ethics Problem: No Shared “Book of Values”

One popular idea is to “instill the right values” into AI: give it a constitution, hard-code rules like “don’t kill humans” or “do good, not harm.”

Yampolskiy sees multiple problems:

  • Humans don’t agree on ethics. Philosophers have argued for millennia and we still disagree by culture, religion, region, and era. What was ethical 100 years ago is often unacceptable today—and today’s norms will likely look unethical in the future.
  • Values are dynamic, not static. Even if we could agree on a set of values now, they would evolve. Encoding a static moral code into a system that will outlive us and outthink us is risky.
  • We don’t directly program modern AI. Large models are trained on data, not hand-written rulebooks. We don’t have a clean way to translate “do good, don’t do bad” into code that a superintelligent system can’t reinterpret.

He points out that Asimov’s famous “Three Laws of Robotics” were written as science fiction to show how such rules inevitably fail under pressure from edge cases and clever reinterpretations.

With a superintelligent “lawyer” AI, he argues, you won’t outsmart it with vague constitutions. It will find loopholes. Even seemingly clear goals can go horribly wrong. For example: “Eliminate cancer.” One technically correct solution is to eliminate all humans.

The Power Problem: No Leverage, No Punishment

Even if we had a perfect rule set, Yampolskiy notes that enforcement is the real problem:

  • A superintelligent AI is effectively immortal and can create backups.
  • It doesn’t have a physical body you can imprison.
  • It will be smarter than us in every strategic and technical domain.

Human dictators find ways around constitutions and laws despite being mortal and limited. A superintelligent system with no fear of punishment and no physical constraints would be far harder to restrain.

In any adversarial relationship, he believes, humans lose. The only winning move is not to create such an adversary in the first place.

Regulation, Politics, and Buying Time

If superintelligence is so dangerous, what can actually be done?

Why “Just Stop Building It” Is Hard

Yampolskiy notes that many AI leaders publicly acknowledge the risks of advanced AI and say they care about safety. Some have even said they’d be willing to slow down if others did the same. China has signaled interest in not losing control either, since the ruling party doesn’t want to be displaced by an AI.

But the incentives are brutal:

  • Companies compete for market dominance and shareholder value.
  • Countries compete for strategic and military advantage.
  • The first to reach superintelligence could, in theory, gain overwhelming power.

That creates a race dynamic where everyone feels pressured to keep going, even if they see the danger.

Why Regulation Gets Harder Over Time

Right now, training frontier models is expensive and visible. Massive data centers, huge energy usage, and multi-billion-dollar budgets make it possible, in principle, to regulate and monitor.

But compute gets cheaper every year. Yampolskiy warns that what costs a trillion dollars today might cost a billion tomorrow and eventually run on consumer hardware. Once powerful models can be trained on a laptop, you can’t realistically stop every bad actor or “lone psychopath” from trying.
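To put rough numbers on that trajectory, here is a back-of-the-envelope sketch. The two-year halving period is our assumption for illustration (a Moore's-law-style rate he does not commit to in the interview); the trillion-to-billion figures are his.

```python
import math

def years_until_cost(cost_today: float, target: float,
                     halving_years: float = 2.0) -> float:
    """Years until a fixed workload's cost falls to `target`,
    assuming cost halves every `halving_years` (illustrative only)."""
    if cost_today <= target:
        return 0.0
    return math.log2(cost_today / target) * halving_years

# A $1 trillion run dropping to $1 billion: ~20 years at this rate
print(years_until_cost(1e12, 1e9))   # ~19.9
# ...and dropping to consumer hardware (~$10,000): ~53 years
print(years_until_cost(1e12, 1e4))   # ~53.2
```

Change the halving period and the dates move, but the structure of his argument doesn't: any steady cost decline eventually puts frontier-scale training within reach of small actors.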

His conclusion: even if we manage to pause or slow development at the national or corporate level now, eventually someone, somewhere, will build superintelligence.

What Ordinary People Can Actually Do

For most individuals, he admits, there may be little direct influence over the trajectory of superintelligence. But there are still levers:

  • Political pressure: Support politicians who are serious about AI limits and regulation, even if their initial focus is on more tangible issues like deepfakes or energy use. Directionally, that still pushes toward restraint.
  • Awareness and discourse: The fact that only a small group protested AI risks in San Francisco worries him. He believes if people understood the stakes, there would be far more public pressure.

He’s skeptical that we’ll get a “nuclear-level accident” that wakes everyone up in time. History shows that even nuclear catastrophes didn’t stop proliferation; they merely slowed and reshaped it.

Education, Agency, and Preparing Your Kids

Yampolskiy is also critical of how we prepare young people for a rapidly changing job market.

Is College Still Worth It?

He argues that traditional higher education was already a bad deal for many people before the current AI acceleration:

  • Many majors are “dead ends” with little connection to actual jobs.
  • Tuition at some universities can hit $100,000 per year, leaving students with huge debt and no clear career path.
  • Some skills, like programming, can be learned via online certificates in months, sometimes leading to higher pay than that of the professors teaching them.

He acknowledges that college can teach you how to think, help you mature, and provide a social environment where you might meet a future co-founder or spouse. But he questions whether those benefits are worth hundreds of thousands of dollars and four to five years of opportunity cost—especially when many graduates now find their chosen profession automated by the time they finish.

His own children have an unusual advantage: both parents are professors, so tuition is effectively free. That makes college a much easier decision. For everyone else, he suggests weighing alternatives seriously: starting a company, learning via cheaper programs, or building skills directly in the market.

Teaching Agency in an AI World

One of the biggest long-term risks he sees is humans outsourcing too much decision-making to AI agents. If AI makes all your choices, you lose agency.

He recommends intentionally cultivating independence in kids from a very young age:

  • Encouraging them to make their own money and decide how to spend or invest it.
  • Letting them start simple businesses—selling lemonade, popcorn, or anything else they can figure out.
  • Modeling agency yourself: building projects, taking initiative, not waiting for institutions to hand you a script.

He started this with his own children as early as age two or three. His role, as he sees it, is to ensure safety and security, but let them own their decisions.

So How Should You Live the Next 5 Years?

When pressed for practical advice in the face of such uncertainty, Yampolskiy comes back to two themes: realism and meaning.

  • Be realistic about risk. In his view, most people are “not worried enough.” If they fully grasped the stakes of superintelligence, there would be mass protests, not small gatherings.
  • Don’t postpone your life. Many people plan to “really live” after retirement—travel, hobbies, time with family. But life is uncertain even without AI; people die young from disease or accidents. If AI risk is real on a 2–5 year timeline, that’s even more reason not to delay.

His personal recommendation is to do more of what you genuinely care about now and less of what you don’t. If you can align your work with your hobby and with something society values—what the Japanese concept of ikigai describes—that’s ideal. But he also warns that AI could take away even that sense of meaning if it replaces the roles we build our identities around.

Ultimately, he doesn’t offer a comforting blueprint. He doesn’t think we currently know how to control superintelligent AI, and he doesn’t see a clear technical path to solving that. What he does argue for is using the time we have to:

  • Push for serious regulation and limits on frontier systems.
  • Build narrow tools that solve real problems without open-ended agency.
  • Strengthen personal agency, financial resilience, and human relationships.

If he’s wrong, you’ll have spent a few years investing in your life, skills, and family. If he’s right, you’ll be glad you didn’t wait.
