#78: Skeptical of AI-Driven Layoffs
AI’s impact on the workforce is driven by narrative
Author’s note: Today marks the one-year anniversary of Relentlessly Curious. What started as a short-term creative pursuit has turned into a fast-growing, weekly digest. I’m continually inspired by the feedback I receive and the value Relentlessly Curious brings to the community (or so I’m told). Cheers to the start of year two!
You see it in the news. Big companies are growing revenue despite laying off tens of thousands of employees. All thanks to AI. Earnings call after earnings call, it’s the same storyline. The market rewards the efficiency wins, and as a result, the stock price soars. You’ve seen this across Big Tech, as well as technology-adjacent industries like logistics and retail.
I believe we are only at the beginning of this layoff trend. As AI innovations like Claude Cowork and OpenClaw enter the mainstream, it’s only a matter of time until employers rely more heavily on AI agents rather than traditional employees to handle the lion’s share of the workload.
However, I believe AI’s efficiencies are best demonstrated in small companies or recently founded companies. It’s a lot easier to build an AI-native foundation and culture than to convert a decades-old enterprise into one of AI-embedded workflows. For larger companies, it’s likely to take years and significant cultural change to see tangible financial results driven by AI-enabled efficiencies.
I’m skeptical of some of the mainstream headlines surrounding AI layoffs today because they tend to highlight the impacts at large enterprises. Back in August, we discussed an MIT report that claimed, “95% of generative AI pilots at companies are failing.” To jog your memory, the gist of the report is that most enterprises are struggling to turn AI into revenue gains. In my opinion, the MIT report borders on sensational and misses the point that pilots are tests, and most tests fail.
But the news keeps telling us that AI is responsible for mass layoffs. Yet, if most AI pilots are “failing,” how can managers lay off tens of thousands of people while revenues still grow?
Here’s my thought: I believe AI is taking too much credit for the recent rounds of layoffs. It starts with understanding the incentive structure of most Corporate America executives.
Right now, the AI-driven efficiency story is generously rewarded in the stock market. A common scenario: The CEO of a large company hops on the earnings call, mentions that revenues grew, and says they laid off employees due to “AI.”
I’d really like to dive deeper to understand what percentage of these layoffs is driven by successful implementations of AI. See, Corporate America has plenty of people who aim to do the bare minimum. More power to you, but keep in mind that your manager and your manager’s manager likely know you’re coasting. And so do the executives, who have the power to lay off entire divisions that are no longer deemed productive. Maybe there wasn’t a sense of urgency to cut underperformers or redundancies in the past. But now, there’s a carrot dangling.
It’s just more convenient to lay off employees now, because you can lump them into your company’s AI story and be handsomely rewarded by the stock market for doing so, even if you haven’t seen success with AI yet. See, executives tend to be incentivized through stock price growth, and if they can lift their stock price with an AI-driven layoff story, they likely will.
I’m bullish on AI and believe it will lead to step-function productivity gains for society, while at the same time creating real adverse impacts on the job market. However, until I hear more executives clearly articulate how AI directly enabled headcount reductions, I’ll remain skeptical. Let’s dive into two topics that, if included in the narrative, would enhance the credibility of an AI-driven layoff story.
AI Fluency
This starts from the top. An organization must define its baseline for AI fluency. A CEO can come out and say that everyone at the company has access to a ChatGPT account, but that can mean many things. Engineers may be using OpenAI’s Codex to write 80% of their code, allowing them to significantly increase output and tackle more projects, while others at the company may be typing a few keywords into ChatGPT and claiming to be “AI fluent.”
It’s imperative for companies to establish a baseline around both hard and soft AI skills. Tactical details around what “good” looks like, which programs to depend on for specific tasks, when to leverage AI and when not to, safety guardrails, and plenty more must be shared broadly with employees.
Helpful examples include Shopify and Meta. Last year, Shopify’s CEO implemented a new hiring policy where managers must demonstrate that AI can’t perform a job function before gaining approval to post the role. Additionally, Meta recently announced that AI usage will be factored into go-forward employee performance reviews.
These two policies stand out as tactical changes that can lead to cultural shifts. They place AI front of mind for their employees and incentivize them to think of AI-first solutions. The sooner companies adopt similar principles surrounding what AI fluency is at their organization, the sooner they will reap the rewards.
Institutional Knowledge Transfer
The longer-term impact of reducing headcount is the loss of institutional knowledge. Even when employees leave a company on their own today, there tend to be details that fall through the cracks. And even with a robust handoff plan, not everything makes its way from the departing employee’s brain into internal documents.
This particularly matters as the workforce changes shape. It’s especially difficult for junior employees to find work now that AI has begun filling entry-level roles in certain pockets. If the workforce ends up being a mix of senior people setting the strategy and mid-level people operating the AI agents, there isn’t a next generation to train and pass knowledge on to.
We’ve already seen what happens when knowledge transfer breaks down, notably in the case of Boeing. I recommend reading this essay by the Democracy Journal, which highlights how Boeing’s transition from an engineering culture to a business culture influenced where employees were hired and how they were trained. Despite admitting that it was relying on an inexperienced workforce at its new South Carolina facility designated for 787 production, Boeing trudged on because it was the more cost-effective option. The company moved work from Seattle to a less expensive location and did not hire or train employees to the same skill levels as those in Seattle. Boeing lost institutional knowledge, and I think you know how this story ends.
Companies will need to introduce stringent requirements and build comprehensive intelligence layers to turn their company data into a living, breathing resource that functions like an experienced colleague. This could lead to policies around recording all internal meetings (and external meetings when appropriate), asking employees to save all documents to a shared company drive, and leveraging data from instant-messaging software for additional context.
I imagine most organizations are doing some combination of the above today, but it will need to be turned into an intelligence layer for employees to ask questions of, helping reduce the inevitable knowledge gap during periods of layoffs and structural changes to the workforce.
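To make the idea concrete, here’s a toy sketch of the kind of intelligence layer described above: index whatever gets saved (meeting notes, shared-drive files, chat logs), then let employees ask questions of it. The document names and contents are made up for illustration, and a real system would use embeddings and an LLM rather than keyword matching; plain word overlap is just enough to show the shape.

```python
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into punctuation-stripped word tokens."""
    return [w.strip(".,?!:").lower() for w in text.split()]

def build_index(docs):
    """Map each document name to a bag-of-words token count."""
    return {name: Counter(tokenize(body)) for name, body in docs.items()}

def ask(index, question, top_k=1):
    """Rank documents by how many question tokens they contain."""
    q_tokens = set(tokenize(question))
    scores = {name: sum(counts[t] for t in q_tokens)
              for name, counts in index.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Hypothetical saved artifacts standing in for a company's knowledge base.
docs = {
    "line-handoff-checklist.md": "Checklist for the assembly line handoff: torque specs, inspection steps.",
    "q3-retro-notes.md": "Q3 retro meeting notes covering shipping delays and vendor issues.",
}
index = build_index(docs)
print(ask(index, "Where are the inspection steps for the line handoff?"))
```

The point isn’t the retrieval mechanics; it’s that the layer only works if the recording and save-everything policies upstream actually capture what departing employees know.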