One of the most widely cited findings in AI policy comes from a 2023 paper by Eloundou, Manning, Mishkin, and Rock titled “GPTs are GPTs.” The title has a nice double meaning: the paper studies how general-purpose technologies (GPTs) powered by large language models (also GPTs) may reshape the labor market. The headline finding is that around 80% of U.S. workers could have at least 10% of their tasks affected by LLMs, and roughly 19% may see half or more of their tasks impacted. Broadly, these exposure measures try to capture how much of an occupation AI could plausibly touch, as a function of whether AI can augment or speed up the tasks involved in the job: direct exposure is defined as “whether access to an LLM or LLM-powered system would reduce the time required for a human to perform a specific DWA or complete a task by at least 50%” (a DWA is a detailed work activity in the O*NET taxonomy). The authors are crystal clear on this in the paper: exposure corresponds to the capacity of AI to be involved in the job, not the extent to which the job can be automated away.
But the word “exposure” turned out to bring on all sorts of anxieties about exactly that—displacement. And perhaps for this reason, these AI exposure measures have routinely gone viral on social media over the last couple of months.
A recent example comes from Andrej Karpathy, one of the co-founders of OpenAI and one of the more influential voices in how people think about AI (e.g., he coined both the terms “jagged intelligence” and “vibe coding”). His dashboard, which he described as a “vibe-coded” weekend project, ranked how exposed major occupations are to AI-driven automation. It quickly went viral on X, because it fed all of the already-existing narratives about rapid job loss due to AI.
After seeing the dashboard sensationalized and spread like wildfire, Karpathy clarified that his “exposure” scorecard was based on a quick, LLM-generated measure of how digital a job is, and was never meant to be a serious forecast of which occupations will shrink or disappear. His own project website carried the same caveat, but it was largely ignored on X. To butcher the well-known phrase: “A vibe-coded weekend project will travel twice around the world before the caveat has time to put its pants on.”
What this recent episode illustrates, however, is that such exposure measures have caught the public eye but are routinely misread (with some proposing a moratorium on the term “exposure” altogether). When people hear that a job is “80% exposed” to AI, they picture 80% of that job disappearing. The actual economics of AI exposure and job loss are pretty far from that characterization.
What is a “job”?
A job is a set of tasks; a person typically gets paid based on how well they complete all of the tasks associated with the job. So let’s say you’re a project manager. Your job involves a bunch of tasks: generating ideas, outlining those ideas succinctly and getting feedback from team members, putting together presentations, and a fair amount of rote work (e.g., approving time sheets, fielding logistics). As AI models get better, you realize that you can hand many of these things off: AI can do a lot of the rote work for you, and can even help you put together presentations. According to the exposure measure, your job is now “exposed” to AI. What happens to your job and to your wage? Well, if automating some of the tasks frees up time to generate better ideas, your overall productivity goes up and you become even more valuable to the firm. Humans are still employed, and if anything their wages go up.
On the other hand, if AI automates all of the tasks (say your job only involves two tasks and both get automated), then yes, human labor will get displaced. Importantly, the fewer the tasks in a job (what we call the dimensionality of the job), the greater the company’s incentive to automate it in the first place. This is the part much of the analysis on automation misses: adopting AI into an existing organization is costly, so the firm will be more likely to invest if it can automate the job, not just a task. “Exposure” and the risk of automation are not just a function of model capabilities; they also depend on firm incentives. And this is not hypothetical: we now have plenty of evidence that such incentives matter greatly for what gets automated and when (e.g., firms are much more likely to automate when the cost of human labor increases).
Lastly, even if AI makes people more productive and yields higher wages, there can still be massive layoffs in a sector if consumers do not “absorb” the increased productivity: if productivity-driven price drops do not increase demand for the product, then fewer workers will be needed.
More generally, a task being exposed to AI—even if that exposure corresponds to full automation of that task—can potentially lead to higher wages and more hiring for that occupation. Or it can lead to layoffs and even full displacement. Whether exposure leads to better or worse labor market outcomes for workers depends on two key variables: the elasticity of consumer demand in that sector (how much more of the product people buy as prices decrease), and the dimensionality of the job (how many tasks are involved in that job). As we hope to convince you by the end of the piece, we should be a lot more worried about jobs like trucking and warehousing than we currently are.
The standard approach to automation
Let us start with the “standard” approach to thinking about automation. First, we decompose jobs into tasks using a taxonomy like O*NET, then evaluate how many of those tasks can be automated or augmented by AI. The total impact on the job is a weighted average of how much each task was improved, which means you can build an “exposure index” (typically defined as the share of a job’s tasks that AI can do) that maps linearly into how much the job is affected (see, e.g., Michael Webb’s already-classic paper). This approach has been enormously useful for mapping the landscape of AI’s potential reach. But it contains an assumption that is almost certainly wrong for most real-world jobs: it assumes tasks are separable. That is, automating task A has no effect on the productivity of task B, and the overall impact is just the sum of the parts.
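As a minimal illustration of what this looks like in practice (the task names, weights, and scores below are made up, not drawn from O*NET or any published exposure measure), the separable approach amounts to a weighted average of per-task scores:

```python
# Minimal sketch of a "standard" separable exposure index.
# Illustrative only: task names, weights, and scores are made up,
# not taken from O*NET or any published exposure measure.

def exposure_index(tasks):
    """Weighted average of per-task exposure (0 = AI can't help, 1 = fully automatable)."""
    total_weight = sum(weight for _, weight, _ in tasks)
    return sum(weight * score for _, weight, score in tasks) / total_weight

# (task, share of work time, fraction of the task AI could do)
project_manager = [
    ("generate ideas",      0.20, 0.3),
    ("draft presentations", 0.30, 0.7),
    ("approve time sheets", 0.20, 0.9),
    ("field logistics",     0.30, 0.6),
]

print(f"Exposure index: {exposure_index(project_manager):.2f}")  # 0.63
# The separability assumption is baked in: the index is a weighted sum,
# so automating one task is assumed not to change the value of any other.
```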
Consider the jobs that you know. There are many out there where the output consists of doing many different things right, not just some of them. You can’t have a cook who follows most of the steps of a recipe, a drummer who is mostly on the beat, a programmer whose code only partially works (or, for that matter, a professor who only does the research half of the job…though some have tested this requirement). These are jobs where each task needs to be completed successfully for the output to be acceptable.
Put differently, the tasks are not separable; they are complements: doing one task well or badly affects how well you can do the others and whether the job gets done at all. That tasks within a job are complements rather than substitutes seems quite plausible for most real-world production. And this has a wide range of important implications for how AI will actually affect jobs.
The O-ring model of jobs
The idea that complementary tasks create nonlinear productivity goes back to Michael Kremer’s classic 1993 paper, “The O-Ring Theory of Economic Development”. The name comes from the tragic Challenger disaster: a single faulty O-ring caused the catastrophic failure of the entire system. Kremer’s insight was that if production requires many steps, and each step needs to be done well for the final product to have value, then productivity becomes a multiplicative rather than a linear function of skill. A worker who makes slightly fewer errors per task will be dramatically more productive overall, because those small quality gains compound across every step.
This task-based model of jobs has gained fresh relevance with a recent paper by Joshua Gans and Avi Goldfarb, “O-Ring Automation,” which applies Kremer’s framework directly to AI-driven automation. While their model might appear simple at first glance, its implications are far-reaching and profound. At least one of us (Alex) has been obsessed with this paper for months (see here, here, and here).
Gans and Goldfarb build a model of a firm where each worker’s job is composed of n tasks. The job’s output is multiplicative in the quality of each task—this is the O-ring production function:
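$$Q \;=\; \prod_{s=1}^{n} q_s,$$

where $q_s$ denotes the quality of task $s$.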
A worker has a time endowment h and allocates it across the n tasks. If task s is performed manually, the worker spends h_s hours on it and generates quality:
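$$q_s = a \, h_s,$$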
where a is labor productivity, assumed constant across tasks (a simplifying assumption). The worker's time constraint is:
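$$\sum_{s=1}^{n} h_s \;\leq\; h.$$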
The firm can also choose to automate any task by renting a piece of capital that delivers a fixed quality θ at cost r per task. This is the key part to pay attention to: whether firms invest in automating a task depends on the trade-offs embedded in this problem. Once a task is automated, the worker no longer needs to spend any time on it.
So far the setup is quite simple. The interesting part is what the multiplicative structure of the production function implies once automation enters the picture.
How can automation raise wages?
Now suppose a firm chooses to automate k out of n tasks. What happens to the worker, and how does that affect the wage?
Before automation, the worker allocates time evenly across all n tasks, which is optimal given the symmetric structure. Each manual task therefore receives h/n hours and has quality a · h/n. Total output is:
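$$Q_{\text{before}} = \left(\frac{a\,h}{n}\right)^{n}.$$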
After k tasks are automated at quality θ, the worker now has all h hours to allocate across only n - k remaining manual tasks. Each manual task now gets h/(n-k) hours, producing quality a · h/(n-k). Total output becomes:
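$$Q_{\text{after}} = \theta^{k}\left(\frac{a\,h}{n-k}\right)^{n-k}.$$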
So output rises after partial automation if and only if:
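$$\theta^{k}\left(\frac{a\,h}{n-k}\right)^{n-k} \;\geq\; \left(\frac{a\,h}{n}\right)^{n}.$$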
This is an important condition. It says that if the automated task quality θ is at least as good as the worker’s original pre-automation manual quality on those tasks (that is, θ ≥ a · h/n), then output increases for sure. Output does not automatically rise just because some tasks are automated; it rises when the quality of automation is high enough.
But here is the key insight: because automation also frees the worker to concentrate more time on the remaining tasks, output can increase even if the automated tasks are performed at slightly lower quality than the worker originally achieved before automation. Automation lets the worker focus on fewer tasks, raising the quality of each one. This is the “focus effect.” And because the production function is multiplicative, higher quality on the remaining manual tasks doesn’t just add to output; it multiplies through every other task. The worker becomes more productive precisely because they’re doing fewer things.
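Here is a quick numerical sketch of the focus effect using the production function above; the parameter values are arbitrary illustrations, not calibrated to anything:

```python
# Numerical sketch of the O-ring focus effect.
# The parameter values (a, h, n, k, theta) are arbitrary, chosen only to illustrate.

def output_before(a, h, n):
    # The worker splits h hours evenly across n manual tasks.
    return (a * h / n) ** n

def output_after(a, h, n, k, theta):
    # k tasks are automated at quality theta; the worker refocuses
    # all h hours on the remaining n - k manual tasks.
    return theta ** k * (a * h / (n - k)) ** (n - k)

a, h, n, k = 1.0, 8.0, 4, 1
manual_quality = a * h / n   # quality the worker achieved per task on their own: 2.0
theta = 1.8                  # automation is slightly *worse* than the worker was

print(output_before(a, h, n))           # 2.0**4           = 16.0
print(output_after(a, h, n, k, theta))  # 1.8 * (8/3)**3  ~= 34.1
# Output more than doubles even though the automated task is done slightly worse,
# because each remaining task now gets 8/3 hours of attention instead of 2.
```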
When the automation quality is sufficiently high relative to what the worker was producing manually on those tasks, the worker’s marginal product rises—and so (typically) does their wage. Partial automation, in the O-ring world, is often a complement to human labor rather than a substitute for it, which increases the worker’s wage.
But this is not necessarily good news for labor
Higher worker productivity is good for wages, but does it lead to more jobs or fewer? This depends on consumer demand. Suppose each worker makes one calculator a day and the firm has 10 workers; all calculators are sold at the prevailing price. Now imagine each worker becomes much more productive, so that each can make 10 calculators a day. The price of each calculator falls (costs fall), but consumers still demand roughly the same number of calculators. This is the case of inelastic demand, demand that does not respond much to prices. The firm will now fire 9 of the 10 workers. But what if consumers buy far more calculators at lower prices, i.e., demand is very elastic? Then the firm will actually end up hiring more workers to meet the new demand, despite the fact that each one is more productive.
More generally, if demand is elastic (elasticity > 1), then a price decrease leads to a more-than-proportional increase in quantity demanded. Output expands a lot. The firm needs more workers to produce this higher output, even though each worker is now more productive. Net effect: more hiring.
If demand is inelastic (elasticity < 1), a price decrease leads to a less-than-proportional increase in quantity demanded. Output does not expand much and the firm can produce the same (or slightly more) output with fewer workers since each one is more productive. Net effect: displacement.
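To make the calculator example concrete, here is a minimal sketch under strong simplifying assumptions of our own (the price falls one-for-one with unit cost, demand has constant elasticity, and employment is just output divided by output per worker):

```python
# Sketch: does a productivity gain raise or lower employment in a sector?
# Simplifying assumptions (ours): the price falls in proportion to unit cost,
# demand has constant elasticity, and employment = output / output-per-worker.

def employment_after(workers, productivity_gain, elasticity):
    # Price falls by a factor of 1/productivity_gain, so quantity demanded
    # scales by productivity_gain ** elasticity, while each worker now
    # produces productivity_gain times as much.
    quantity_multiplier = productivity_gain ** elasticity
    return workers * quantity_multiplier / productivity_gain

# The calculator example: 10 workers, each becoming 10x more productive.
print(employment_after(10, 10, elasticity=0.2))  # inelastic demand -> ~1.6 workers
print(employment_after(10, 10, elasticity=1.5))  # elastic demand   -> ~31.6 workers
# Employment expands exactly when the demand elasticity exceeds 1.
```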
This is closely related to a popular idea commonly referred to as the Jevons paradox: when a resource becomes more efficient to use, total consumption of that resource often increases rather than decreases. When steam engines made the use of coal more efficient, coal consumption skyrocketed because so many new applications became economically viable. The same logic applies to labor: if AI makes a worker dramatically more productive, and demand for that product is elastic, one may end up with more workers in that occupation, not fewer.
Why job dimensionality matters: The case of firm incentives
The interaction between task automation and the elasticity of consumer demand is an important dimension for predicting AI-driven displacement, but one variable that is often overlooked is the number of tasks in the job itself, i.e., its dimensionality. A job’s dimensionality matters for two reasons.
First, conditional on a task being automated, a low-dimensional job is more likely to be fully displaced. If a job has 20 tasks and one gets automated, a human worker is still required for the other 19. But if a job has one task and that task gets automated, the job is gone. Second (and this dimension is perhaps the most overlooked), organizations have a stronger incentive to automate a task the fewer non-automated tasks are left in the job. Imagine that automating a task requires a $10 million investment (buying the software, onboarding, connecting it to the rest of the system, etc.). In one case, this task is the only non-automated task left in a job; in the other, 19 other non-automated tasks would remain after it is automated. The firm has a much higher incentive to automate the task in the first case than in the second, because it can then replace the worker and reap the cost savings involved.1
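A back-of-the-envelope sketch makes the asymmetry clear; every number below is an assumption we picked for illustration (and deliberately smaller than the $10 million in the text so the comparison is easy to read):

```python
# Back-of-the-envelope: the payoff to automating one task depends on whether
# it is the last non-automated task in the job. All numbers are made up.

annual_wage     = 60_000   # wage the firm saves only if the *whole* job is automated
horizon_years   = 10
automation_cost = 200_000  # one-time cost of automating this single task

# Case 1: this is the only non-automated task left.
# Automating it eliminates the position, so the full wage bill is saved.
savings_last_task = annual_wage * horizon_years          # 600,000

# Case 2: 19 other manual tasks remain. The worker is still employed full-time,
# so no wage is saved (ignoring any focus-effect productivity gains).
savings_one_of_twenty = 0

print(savings_last_task - automation_cost)      #  400,000 -> worth doing
print(savings_one_of_twenty - automation_cost)  # -200,000 -> not worth it on its own
```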
Because of this, firms have a stronger incentive to invest in technology to automate low-dimensional jobs. In a low-dimensional job, automating all or most of the core tasks can eliminate the position and the wage bill altogether. That makes the return to automation much larger. In other words, not all “unexposed” tasks matter equally: in some jobs the remaining tasks still keep the existing worker at the firm; in others they do not.
This gives a clear prediction: even if a job is not currently “exposed” to AI, in the sense that AI is not being used for the tasks involved, if it is low dimensional and the technology is getting close to automating the tasks, it should be considered at risk. Firms will work harder and invest more to automate the task(s) involved than in the case where jobs have many non-automated tasks.
Trucking and warehousing, the overlooked canaries in the coal mine
This is why we think people should be more worried about jobs like trucking and warehousing.
Roughly 3 million Americans drive trucks for a living. Many are in their 50s, have been driving for decades, and live in communities where trucking is an economic backbone. Trucking is one of the best jobs one can get without a college degree. Yet the actual work of a long-haul truck driver is dominated by one core function: moving the truck safely from point A to point B. The logistics, loading and unloading, etc. are handled by others. If autonomous driving becomes reliable on long-haul routes, the job of a truck driver is not just being augmented; it is fundamentally threatened and may be displaced entirely. And that possibility is no longer theoretical: companies such as Aurora Innovation and Kodiak Robotics are already running large-scale autonomous trucking pilots and commercial deployments on constrained routes.

Warehousing tells a similar story. Warehousing employs millions of U.S. workers, and many warehouse jobs (picking, packing, sorting, pallet movement) are relatively narrow and increasingly automatable. Abroad, firms are already operating highly automated “dark warehouses” that run around the clock with minimal human labor. These warehouses look nothing like what we see today: they are designed from the ground up to be run by machines.
Now compare that to a knowledge worker, say, a management consultant. The job combines research, data analysis, client communication, presentation design, strategic reasoning, team coordination, and relationship management. That’s at least seven or eight distinct, complementary tasks. Claude or Codex might automate the first pass on the data analysis and slide-deck creation, but the consultant is still needed for everything else. In O-ring terms, automating some tasks can make the remaining ones more valuable by allowing the worker to allocate more time to them: the consultant can spend more time talking to the client, making them comfortable with the implementation, getting buy-in from the various units, and so on. As a consequence, wages may rise, and employment may rise too if better output and lower prices expand client demand.
You can see the same logic in many high-stakes professions such as medicine and academia. There are now over 870 FDA-approved radiology AI tools, and 66% of doctors use at least one AI tool, mainly for note dictation and diagnostic support. But these tools are augmenting radiologists and physicians, not replacing them. AI typically handles the routine pattern-recognition aspect of the job, freeing doctors to focus on complex cases, patient communication, and clinical judgment. Likewise, academics have been debating whether advances in AI make research assistants more or less valuable. As AI automates routine analytical tasks, both professors and RAs can concentrate more on ideas and judgment, thereby expanding output and demand for skilled research labor. This is, yet again, the O-ring focus effect in practice.
What do exposure indices capture?
Let us bring this back to the exposure framework. In the standard approach, a management consultant is highly “exposed” to AI whereas a truck driver is not. But does this mean that the consultant is at higher displacement risk than the truck driver? Not necessarily. The consultant’s high exposure may actually be good news, because it means AI will augment many of their complementary tasks, triggering the focus effect and potentially raising wages. The truck driver’s moderate exposure on a single critical task, on the other hand, is much more dangerous, because trucking companies have a much higher incentive to automate the task of driving, and once that is done, the job is gone as well. These incentives are already playing out in practice, as the trucking and warehousing examples above illustrate.
The relevant object therefore is not average task exposure, but the structure of bottlenecks and how automation reshapes worker time around them. Two jobs with identical exposure scores can have completely opposite displacement risks depending on whether their tasks are complements, whether demand for their output is elastic or inelastic, and the incentives of the firm to invest in automation. The workers at greatest risk are not necessarily those with the highest average exposure, but those whose jobs are built around a small number of core tasks that AI can automate.
In the case where jobs are not fully automated, the cost savings from automating the marginal task will depend on the complementarities between the other tasks in the job. The exact relationship is worked out in the O-ring model of automation paper.