Version 1.0 — Last updated: April 10, 2026

Half of US Jobs Will Change in Three Years. The Rest Won't Notice.

BCG's new model shows 50-55% of US jobs will be fundamentally different in 2-3 years. The methodology is honest. The implications aren't.

BCG just published the most detailed model anyone has built for how AI reshapes work. The headline — 50 to 55 percent of American jobs will be fundamentally different within two to three years — sounds like another consultant trying to sell transformation workshops. I almost closed the tab. But then I read the methodology, and it's not what I expected.

What They Actually Did

The BCG Henderson Institute didn't survey executives about their feelings. They decomposed 165 million US jobs into individual tasks across roughly 1,500 roles, scored each task on a rubric for automation potential, then layered on demand elasticity and market dynamics to model what happens when you automate pieces of a job but not the whole thing.

That last part is what makes this different from the usual McKinsey slide deck. Most AI-and-jobs studies ask: "Can a machine do this task?" BCG asks: "If a machine does this task, does the employer hire fewer people, more people, or different people?" The answer depends on whether demand for the output is elastic. And that distinction matters more than the automation score itself.

Here's the framework, condensed. They found six segments.

About 5% of jobs get amplified — AI makes workers more productive and demand expands to absorb the extra capacity. Software engineering falls here, which I'll come back to. Another 14% get rebalanced — the work changes but headcount stays flat because demand is capped. Think content marketing: you can produce more, but budgets don't grow proportionally.

Then it gets uncomfortable. 12% of jobs are divergent — the junior positions get automated while senior roles expand. Insurance agents, IT support techs. The entry-level pipeline collapses but the profession survives. Another 12% are substituted outright — financial analysts, call center workers. Demand is fixed, AI handles it, fewer humans needed.

The remaining 57% either get modest AI tooling (23%) or aren't meaningfully affected (34%) — nurses, plumbers, teachers, therapists. Jobs where you need a body in the room.
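For scale, the segment shares can be mapped onto the ~165 million jobs the study decomposes. The shares below come from the article; the absolute counts are my back-of-envelope arithmetic, not figures BCG reports directly:

```python
# BCG's six segments applied to the ~165M US jobs the study covers.
# Shares are from the article; absolute counts are simple arithmetic.
TOTAL_JOBS_M = 165  # millions of US jobs decomposed in the study

segments = {
    "amplified":   0.05,  # productivity up, demand absorbs the capacity
    "rebalanced":  0.14,  # work changes, headcount flat
    "divergent":   0.12,  # junior roles automated, senior roles expand
    "substituted": 0.12,  # fixed demand, fewer humans needed
    "ai_tooling":  0.23,  # modest tooling, job largely intact
    "unaffected":  0.34,  # hands-on work, body in the room
}

# Sanity check: the six shares should cover the whole labor market.
assert abs(sum(segments.values()) - 1.0) < 1e-9

for name, share in segments.items():
    print(f"{name:>11}: {share:4.0%}  ~{TOTAL_JOBS_M * share:5.1f}M jobs")
```

Even the "substituted" slice alone works out to roughly 20 million jobs, which is why the entry-level concentration in the next paragraph matters.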

The number that stuck with me: 61% of the roles most vulnerable to substitution are entry-level and junior positions. Not middle management. Not senior staff. The people just starting out.

Why This Matters If You Write Code

I spend most of my day in a Laravel codebase with Claude Code running in one terminal and Cursor in another. I'm the demographic this study is talking about. So I read the software engineering section twice.

BCG classifies software engineering as "amplified" — more AI means more demand for software, which means more demand for engineers, not less. The logic: when you make it cheaper to build things, organizations build more things. The appetite for digital products is effectively infinite. If I can ship a feature in two hours instead of two days, my employer doesn't fire me. They give me more features to ship.

That matches my experience. I'm more productive than I was eighteen months ago. Dramatically more productive. And the backlog hasn't shrunk — it's grown, because stakeholders now know what's possible and they want more.

But BCG includes a "what if" box that I think is more honest than the main analysis. What if software engineering migrates from "amplified" to "divergent"? What if the next generation of models handles system design, architecture decisions, security reviews, and cross-system integration — not just boilerplate code? Then you'd need fewer engineers, but they'd need to be better. The study calls this "substantially more output from a smaller population of deeply knowledgeable engineering leaders."

I don't think we're there yet. But I'm not confident we're five years away from it either.

The data is contradictory in a way that nobody wants to acknowledge. Software engineering job postings on Indeed are up 11% year-over-year. There are 67,000 open positions — a three-year high. CNN ran a piece last week titled "The demise of software engineering jobs has been greatly exaggerated." At the same time, CS enrollment dropped 8.1% this academic year — the steepest decline of any major. Among CS students pessimistic about the field, 64% cite AI as the reason. GitHub Copilot generates 46% of code for active users. Cursor hit $2 billion in annualized revenue, doubling in three months.

So companies are hiring more engineers while students are fleeing the field. That's not a stable equilibrium. Something gives, and I'm not sure which side.

Anthropic surveyed 132 of their own engineers. Many described themselves as "managers of AI agents." One estimated that 70% of the work is now code review rather than code writing. Another said — and I had to read this a few times — "It feels like I come to work every day to make myself unemployed."

The Counterargument That's Stronger Than It Looks

Daron Acemoglu won the Nobel Prize in Economics in 2024, partly for his work on technology and labor markets. His position: AI will affect about 5% of jobs in the next decade, contribute maybe 1.1 to 1.6 percent to GDP over ten years, and most of the current hype is companies doing "so-so automation" — replacing workers without actually improving productivity. Self-checkout at grocery stores is his go-to example. Slower for customers, worse experience, marginal cost savings, but it looks good on a quarterly earnings call.

An NBER study of 25,000 workers across 7,000 workplaces found — and I quote the researchers here, not a summary — "precisely zero effect on earnings or hours worked in any occupation." The Yale Budget Lab found no aggregate employment change for workers in AI-exposed occupations. AEI wrote in March that "AI job panic continues to outrun the evidence."

Then there's Klarna. Probably the most instructive case study we have so far. CEO Sebastian Siemiatkowski cut the workforce from 7,000 to about 3,000, replaced 700 customer service agents with an OpenAI chatbot, and claimed $40 million in annual savings. Customer satisfaction cratered. The AI couldn't handle complex, emotionally charged, or multi-step interactions. Siemiatkowski publicly admitted they'd gone too far and started rehiring humans. The rehiring costs exceeded the original savings estimates.

Block went the other direction. Jack Dorsey cut 4,000 people — 40% of the company — explicitly citing AI. The stock jumped 18 to 24 percent the same day. The market rewarded the cut. Whether the company actually functions better with half the staff is a question nobody will answer honestly for another year.

Look — I'm not going to pretend I know which of these outcomes is the default. Klarna overcut and backtracked. Block overcut and got rewarded. Same thesis, different results. The honest answer is that we're running dozens of simultaneous natural experiments and the data isn't in yet.

The Part Nobody Is Talking About

Goldman Sachs published something the same week that fits the BCG study like a key in a lock. Their data shows AI is already eliminating about 25,000 US jobs per month while creating roughly 9,000 — a net loss of 16,000 jobs monthly. That's not a projection. That's what's happening right now, measured in actual employment data.
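Goldman's churn figures reduce to one line of arithmetic. The annualized number below is my extrapolation, assuming the monthly rate holds — which neither Goldman nor the article claims:

```python
# Goldman's monthly churn figures as reported in the article.
eliminated_per_month = 25_000  # jobs eliminated by AI per month
created_per_month = 9_000      # jobs created by AI per month

net_monthly = eliminated_per_month - created_per_month
net_annual = net_monthly * 12  # extrapolation, assuming the rate holds

print(f"net monthly loss: {net_monthly:,}")  # 16,000, matching the article
print(f"annualized:       {net_annual:,}")   # ~192,000 if the rate holds
```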

But here's the number that haunts me. Goldman's "scarring" analysis found that workers displaced by technology take about a month longer to find new work, lose over 3% in real wages, and fall 10 percentage points behind never-displaced peers over a decade. Ten percentage points. That's not a temporary setback. That's a permanent downward trajectory.

BCG's study acknowledges none of this. Their framework models where jobs go but not what happens to the people who lose them. It's a map of the terrain, not a survival guide. The six segments are analytically clean and practically useless for a 24-year-old financial analyst who just got replaced by a Claude workflow someone built in a weekend.

And that 24-year-old is the real story. When BCG says 61% of the most affected roles are entry-level, they're describing the collapse of the professional on-ramp. If junior positions disappear, how do people become senior? Where do they learn the judgment and context that BCG says makes senior workers irreplaceable? AWS CEO Matt Garman called replacing junior developers with AI "one of the dumbest things I've ever heard" — not because AI can't write code, but because you've just destroyed the pipeline that produces the people you actually need.

CS enrollment is already falling. Entry-level tech postings are down 35% in eighteen months. CFOs privately estimate AI-related cuts will hit 502,000 roles in 2026 — nine times last year's total. And the overall unemployment rate is 4.3%, which looks fine until you realize the economy only needs 10,000 new jobs per month to keep it there — the lowest breakeven in 65 years, because boomers are retiring faster than anyone's getting fired.
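To put the CFO estimate next to the breakeven figure: spreading 502,000 cuts over 2026 gives roughly 42,000 per month against a breakeven of only 10,000 new jobs per month. The comparison is loose — the cuts are gross while breakeven is a net figure — but the arithmetic gives a sense of scale:

```python
# Rough scale comparison of two figures from the article. The
# per-month spread of the annual estimate is my arithmetic.
projected_cuts_2026 = 502_000   # CFO-estimated AI-related cuts in 2026
breakeven_per_month = 10_000    # new jobs/month to hold unemployment steady

cuts_per_month = projected_cuts_2026 / 12
print(f"estimated cuts/month:        ~{cuts_per_month:,.0f}")  # ~41,833
print(f"breakeven new jobs needed:    {breakeven_per_month:,}")
```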

The labor market isn't collapsing. It's restructuring in a way that the aggregate numbers can't see.

What I Think Is Actually Happening

BCG's six segments are useful, but they describe a steady-state transition that assumes companies make rational decisions about which roles to automate and which to preserve. Companies don't do that. They overcut in panic, underinvest in training, and hire back in eighteen months at higher cost. Klarna is the template. Block might be next. We'll see.

The more I look at this, the more I think the real split isn't between "jobs that survive" and "jobs that don't." It's between people who figure out how to work with AI tools at a deep level — not prompting, not vibe coding, but actually integrating them into architectural thinking and system design — and people who treat them as autocomplete with a personality.

I run a Laravel stack. Eighteen months ago I wrote every migration, every controller, every test by hand. Now I write maybe 30% of it by hand and review the rest. My output has tripled. The work I do has shifted — less typing, more thinking, more reviewing, more saying "no, that's wrong, here's why." Whether that makes me amplified or eventually obsolete depends on where the models go next.

BCG's study is the most honest thing a consulting firm has published on this topic. It's also insufficient. The map is good. The territory is moving faster than the cartographers.


Alexei Volkov

I build software for a living and write about tech on the side — because someone has to say what everyone else is thinking.