A Cryptography Engineer’s Perspective on Quantum Computing Timelines

My position on the urgency of rolling out quantum-resistant cryptography has changed compared to just a few months ago. You might have heard this privately from me in the past weeks, but it’s time to signal and justify this change of mind publicly.

There had been rumors for a while of expected and unexpected progress towards cryptographically-relevant quantum computers, but over the last week we got two public instances of it.

First, Google published a paper revising down dramatically the estimated number of logical qubits and gates required to break 256-bit elliptic curves like NIST P-256 and secp256k1, which makes the attack doable in minutes on fast-clock architectures like superconducting qubits. They weirdly[1] frame it around cryptocurrencies and mempools and salvaged goods or something, but the far more important implication is practical WebPKI MitM attacks.

Shortly after, a different paper came out from Oratomic showing 256-bit elliptic curves can be broken in as few as 10,000 physical qubits if you have non-local connectivity, like neutral atoms seem to offer, thanks to better error correction. This attack would be slower, but even a single broken key per month can be catastrophic.

They have this excellent graph on page 2 (Babbush et al. is the Google paper, which they presumably had preview access to):

graph of physical qubit cost over time

Overall, it looks like everything is moving: the hardware is getting better, the algorithms are getting cheaper, the requirements for error correction are getting lower.

I’ll be honest, I don’t actually know what all the physics in those papers means. That’s not my job and not my expertise. My job includes risk assessment on behalf of the users that entrusted me with their safety. What I know is what at least some actual experts are telling us.

Heather Adkins and Sophie Schmieg are telling us that “quantum frontiers may be closer than they appear” and that 2029 is their deadline. That’s in 33 months, and no one had set such an aggressive timeline until this month.

Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.

The timelines presented at RWPQC 2026, just a few weeks ago, were much tighter than a couple years ago, and are already partially obsolete. The joke used to be that quantum computers have been 10 years out for 30 years now. Well, not true anymore, the timelines have started progressing.

If you are thinking “well, this could be bad, or it could be nothing!” I need you to recognize how immediately dispositive that is. The bet is not “are you 100% sure a CRQC will exist in 2030?”, the bet is “are you 100% sure a CRQC will NOT exist in 2030?” I simply don’t see how a non-expert can look at what the experts are saying, and decide “I know better, there is in fact < 1% chance.” Remember that you are betting with your users’ lives.[2]

Put another way, even if the most likely outcome was no CRQC in our lifetimes, that would be completely irrelevant, because our users don’t want just better-than-even odds[3] of being secure.

Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:

Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

The job is not to be skeptical of things we’re not experts in, the job is to mitigate credible threats, and there are credible experts that are telling us about an imminent threat.

In summary, it might be that in 10 years the predictions will turn out to be wrong, but at this point they might also be right soon, and that risk is now unacceptable.

Now what

Concretely, what does this mean? It means we need to ship.

Regrettably, we’ve got to roll out what we have.[4] That means large ML-DSA signatures shoved in places designed for small ECDSA signatures, like X.509, with the exception of Merkle Tree Certificates for the WebPKI, which is thankfully far enough along.

This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.

For key exchange, the migration to ML-KEM is going well enough but:

  1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years.

  2. We need to forget about non-interactive key exchanges (NIKEs) for a while; we only have KEMs (which are only unidirectionally authenticated without interactivity) in the PQ toolkit.

It no longer makes sense to deploy new schemes that are not post-quantum. I know, pairings were nice. I know, everything PQ is annoyingly large. I know, we had basically just figured out how to do ECDSA over P-256 safely. I know, there might not be practical PQ equivalents for threshold signatures or identity-based encryption. Trust me, I know it stings. But it is what it is.

Hybrid classic + post-quantum authentication makes no sense to me anymore and will only slow us down; we should go straight to pure ML-DSA-44.[6] Hybrid key exchange is reasonably easy, with ephemeral keys that don’t even need a type or wire format for the composite private key, and a couple years ago it made sense to take the hedge. Authentication is not like that, and even with draft-ietf-lamps-pq-composite-sigs-15 with its 18 composite key types nearing publication, we’d waste precious time collectively figuring out how to treat these composite keys and how to expose them to users. It’s also been two years since Kyber hybrids and we’ve gained significant confidence in the Module-Lattice schemes. Hybrid signatures cost time and complexity budget,[5] and the only benefit is protection if ML-DSA is classically broken before the CRQCs come, which looks like the wrong tradeoff at this point.
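
To illustrate why hybrid key exchange is the easy case: the two ephemeral shared secrets just need to be fed into a KDF together, with no composite key type on the wire. Here is a minimal Python sketch using only hashlib; the label and input ordering are made up for illustration (real combiners like X-Wing fix these precisely):

```python
import hashlib

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           transcript: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    Illustrative only: the label below is hypothetical, and real
    combiners specify the exact inputs and their order.
    """
    h = hashlib.sha3_256()
    h.update(b"example-hybrid-kex-v1")  # hypothetical domain-separation label
    h.update(ss_classical)              # e.g. an X25519 shared secret
    h.update(ss_pq)                     # e.g. an ML-KEM-768 shared secret
    h.update(transcript)                # bind ciphertexts/public keys
    return h.digest()

key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32, b"ct||pk")
assert len(key) == 32
```

Contrast this with hybrid signatures, where both public keys, both signatures, and the policy for validating them all have to live in certificate and protocol formats.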

In symmetric encryption, we don’t need to do anything, thankfully. There is a common misconception that protection from Grover requires 256-bit keys, but that is based on an exceedingly simplified understanding of the algorithm. A more accurate characterization is that with a circuit depth of 2⁶⁴ logical gates (the approximate number of gates that current classical computing architectures can perform serially in a decade) running Grover on a 128-bit key space would require a circuit size of 2¹⁰⁶. There’s been no progress on this that I am aware of, and indeed there are old proofs that Grover is optimal and its quantum speedup doesn’t parallelize. Unnecessary 256-bit key requirements are harmful when bundled with the actually urgent PQ requirements, because they muddle the interoperability targets and they risk slowing down the rollout of asymmetric PQ cryptography.

In my corner of the world, we’ll have to start thinking about what it means for half the cryptography packages in the Go standard library to be suddenly insecure, and how to balance the risk of downgrade attacks and backwards compatibility. It’s the first time in our careers we’ve faced anything like this: SHA-1 to SHA-256 was not nearly this disruptive,[7] and even that took forever with the occasional unexpected downgrade attack.

Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f***d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon. I had to reassess a whole project because of this, and I will probably downgrade them to barely “defense in depth” in my toolkit.

Ecosystems with cryptographic identities (like atproto and, yes, cryptocurrencies) need to start migrating very soon, because if the CRQCs come before they are done, they will have to make extremely hard decisions, picking between letting users be compromised and bricking them.

File encryption is especially vulnerable to store-now-decrypt-later attacks, so we’ll probably have to start warning and then erroring out on non-PQ age recipient types soon. It’s unfortunately only been a few months since we even added PQ recipients, in version 1.3.0.[8]

Finally, this week I started teaching a PhD course in cryptography at the University of Bologna, and I’m going to mention RSA, ECDSA, and ECDH only as legacy algorithms, because that’s how those students will encounter them in their careers. I know, it feels weird. But it is what it is.

For more willing-or-not PQ migration, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @filippo@abyssdomain.expert.

The picture

Traveling back from an excellent AtmosphereConf 2026, I saw my first aurora, from the north-facing window of a Boeing 747.

Aurora borealis seen from an airplane window, with green vertical columns and curtains of light above a cloud layer, stars visible in the dark sky above.

My work is made possible by Geomys, an organization of professional Go maintainers, which is funded by Ava Labs, Teleport, Tailscale, and Sentry. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!

Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.


  1. The whole paper is a bit goofy: it has a zero-knowledge proof for a quantum circuit that will certainly be rederived and improved upon before the actual hardware to run it on will exist. They seem to believe this is about responsible disclosure, so I assume this is just physicists not being experts in our field in the same way we are not experts in theirs. 

  2. “You” is doing a lot of work in this sentence, but the audience for this post is a bit unusual for me: I’m addressing my colleagues and the decision-makers that gate action on deployment of post-quantum cryptography. 

  3. I had a reviewer object to an attacker probability of success of 1/536,870,912 (0.0000002%, 2⁻²⁹) after 2⁶⁴ work, correctly so, because in cryptography we usually target 2⁻³². 

  4. Why trust the new stuff, though? There are two parts to it: the math and the implementation. The math is also not my job, so I again defer to experts like Sophie Schmieg, who tells us that she is very confident in lattices, and the NSA, who approved ML-KEM and ML-DSA at the Top Secret level for all national security purposes. It is also older than elliptic curve cryptography was when it first got deployed. (“Doesn’t the NSA lie to break our encryption?” No, the NSA has never intentionally jeopardized US national security with a non-NOBUS backdoor, and there is no way for ML-KEM and ML-DSA to hide a NOBUS backdoor.) On the implementation side, I am actually very qualified to have an opinion, having made cryptography implementation and testing my niche. ML-KEM and ML-DSA are a lot easier to implement securely than their classical alternatives, and with the better testing infrastructure we have now I expect to see exceedingly few bugs in their implementations. 

  5. One small exception: if you already have the ability to convey multiple signatures from multiple public keys in your protocol, it can make sense to do “poor man’s hybrid signatures” by just requiring 2-of-2 signatures from one classical public key and one pure PQ key. Some of the tlog ecosystem might pick this route, but that’s only because the cost is significantly lowered by the existing support for nested n-of-m signing groups. 

  6. Why ML-DSA-44 when we usually use ML-KEM-768 instead of ML-KEM-512? Because ML-KEM-512 is Level 1, while ML-DSA-44 is Level 2, so it already has a bit of margin against minor cryptanalytic improvements. 

  7. Because SHA-256 is a better plug-in replacement for SHA-1, because SHA-1 was a much smaller surface than all of RSA and ECC, and because SHA-1 was not that broken: it still retained preimage resistance and could still be used in HMAC and HKDF. 

  8. The delay was in large part due to my unfortunate decision of blocking on the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one. 


Team Mirai and Democracy

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.

Voters get immediate feedback on how their individual point of view matches—or doesn’t—a party’s platform, and they can see whether and how the party adopts their feedback. This isn’t like an opinion poll that politicians use for calculating short-term electoral tactics. It’s a deliberative reasoning process that scales, engaging voters in defining policy and helping candidates to listen deeply to their constituents.

This is happening today in Japan. Constituents have spent about eight thousand hours engaging with Mirai’s AI Interviewer since 2025. The party’s gamified volunteer mobilization app, Action Board, captured about 100,000 organizer actions per day in the runup to last week’s election.

It’s how Team Mirai, which translates to ‘The Future Party,’ does politics. Its founder, Takahiro Anno, first ran for local office in 2024 as a 33-year-old software engineer standing for Governor of Tokyo. He came in fifth out of 56 candidates, winning more than 150,000 votes as an unaffiliated political outsider. He won attention by taking a distinctive stance on the role of technology in democracy and using AI aggressively in voter engagement.

Last year, Anno ran again, this time for the Upper Chamber of the national legislature—the Diet—and won. Now the head of a new national party, Anno found himself with a platform for making his vision of a new way of doing politics a reality.

In this recent House of Representatives election, Team Mirai shot up to win nearly four million votes. In the lower chamber’s proportional representation system, that was good enough for eleven total seats—the party’s first ever representation in the Japanese House—and nearly three times what it achieved in last year’s Upper Chamber election.

Anno’s party stood for election without aligning itself on the traditional axes of left and right. Instead, Team Mirai, heavily associated with young, urban voters, sought to unite across the ideological spectrum by taking a radical position on a different axis: the status quo and the future. Anno told us that Team Mirai believes it can triple its representation in the Diet after the next elections in each chamber, an ambitious goal that seems achievable given their rapid rise over the past year.

In the American context, the idea of a small party unifying voters across left and right sounds like a pipe dream. But there is evidence it worked in Japan. Team Mirai won an impressive 11% of proportional representation votes from unaffiliated voters, nearly twice the share of the larger electorate. The centerpiece of the party’s policy platform is not about the traditional hot button issues, it’s about democracy itself, and how it can be enhanced by embracing a futuristic vision of digital democracy.

Anno told us how his party arrived at its manifesto for this month’s elections, and why it looked different from other parties’ in important ways. Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance. Anno described this as another future-facing strategy: less price relief in the short term, but sustained funding for essential programs.

Anno has always intended to build a different kind of party. After receiving roughly $1 million in public funding apportioned to Team Mirai based on its single seat in the Upper Chamber last year, Anno began hiring engineers to enhance his software tools for digital democracy.

Anno described Team Mirai to us as a ‘utility party’: basic infrastructure for Japanese democracy that serves the broader polity rather than one faction. Their Gikai (‘assembly’) app illustrates the point. It provides a portal for constituents to research bills, using AI to generate summaries, describe their impacts, surface media reporting on the issue, and answer users’ questions. Like all their software, it’s open source and free for anyone, in any party, to use.

After last week’s victory, Team Mirai now has about $5 million in public funding and ambitions to grow the influence of their digital democracy platform. Anno told us Team Mirai has secured an agreement with the LDP, Japan’s dominant ruling party, to begin using Team Mirai’s Gikai and corruption-fighting Mirumae financial transparency tool.

AI is the issue driving the most societal and economic change we will encounter in our lifetime, yet US political parties are largely silent. But AI and Big Tech companies and their owners are ramping up their political spending to influence the parties. To the extent that AI has shown up in our politics, it seems to be limited to the question of where to site the next generation of data centers and how to channel populist backlash to big tech.

Those are causes worthy of political organizing, but very few US politicians are leveraging the technology for public listening or other pro-democratic purposes. With the midterms still nine months away and with innovators like Team Mirai making products in the open for anyone to use, there is still plenty of time for an American politician to demonstrate what a new politics could look like.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.


How Will AI-driven Automation Actually Affect Jobs?

One of the most widely cited findings in AI policy comes from a 2023 paper by Eloundou, Manning, Mishkin, and Rock titled “GPTs are GPTs.” The title is a nice double meaning: the paper studies how general-purpose technologies (GPTs) powered by large language models (also GPTs) may reshape the labor market. The headline finding is that around 80% of U.S. workers could have at least 10% of their tasks affected by LLMs, and roughly 19% may see half or more of their tasks impacted. Broadly, these exposure measures try to capture how “exposed” the occupation is to AI as a function of whether AI can augment the tasks involved in the job: direct exposure is defined as “whether access to an LLM or LLM-powered system would reduce the time required for a human to perform a specific DWA or complete a task by at least 50%.” The authors are crystal clear on this in the paper: exposure corresponds to the capacity of AI to be involved in the job, not the extent to which the job can be automated away.

But the word “exposure” turned out to bring on all sorts of anxieties about exactly that—displacement. And perhaps for this reason, these AI exposure measures have routinely gone viral on social media over the last couple of months.

A recent example is by Andrej Karpathy, one of the co-founders of OpenAI and a leader in how to think about AI more generally (e.g., he coined both the terms “jagged intelligence” and “vibe coding”). His dashboard, which he described as a “vibe-coded” weekend project, was a ranking of how exposed major occupations are to AI-driven automation. It quickly went viral on X, as it fed all of the already-existing narratives about rapid job loss due to AI.

After seeing the dashboard sensationalized and spread like wildfire, Karpathy clarified that his “exposure” scorecard was based on a quick, LLM-generated measure of how digital a job is, and was never meant to be a serious forecast of which occupations will shrink or disappear. While his own project website made the same caveat, it was largely ignored on X. To butcher the well-known phrase: “A vibe coded weekend project will travel twice around the world before the caveat has time to put its pants on.”

What this recent episode illustrates, however, is that such exposure measures have caught the public eye but are routinely misread (with some proposing a moratorium on the term “exposure” altogether). When people hear that a job is “80% exposed” to AI, they picture 80% of that job disappearing. The actual economics of AI exposure and job loss are pretty far from that characterization.

What is a “job”?

A job is a set of tasks; a person typically gets paid based on how well they complete all of the tasks associated with the job. So let’s say you’re a project manager. Your job involves a bunch of tasks like generating ideas, outlining those ideas succinctly and getting feedback from team members, putting together presentations, and a bunch of rote work (e.g., approving time sheets, fielding logistics). As the AI models become better, you’ve realized that you can automate many of these things: AI can do a lot of the rote work for you, and can even help you put together presentations. According to the exposure measure, your job is now “exposed” to AI. What happens to your job and what happens to your wage? Well, if automating some of the tasks frees up time to generate better ideas, your overall productivity goes up—you become even more valuable to the firm. Humans are still employed and if anything the wages go up.

On the other hand, if AI automates all of the tasks—let’s say your job only involves two tasks and they both get automated—then yes, human labor will get displaced. Importantly, the fewer the number of tasks (what we call the dimensionality of a job), the greater the incentive of the company to automate it in the first place. This is the part much of the analysis on automation misses: adopting AI into an existing organization is costly, so the firm will be more likely to invest if it can automate the job, not just the task. “Exposure” and risk of automation is not just a function of model capabilities, it also depends on firm incentives. And this is not a hypothetical: we now have plenty of evidence that such incentives matter greatly for what gets automated and when (e.g., firms are much more likely to automate when the cost of human labor increases).

Lastly, even if AI makes people more productive and yields higher wages, there can still be massive layoffs in that sector if consumers do not “absorb” the increased productivity: if productivity-driven price drops do not increase demand for the product, then fewer workers will be needed in that sector.

More generally, a task being exposed to AI—even if that exposure corresponds to full automation of that task—can potentially lead to higher wages and more hiring for that occupation. Or it can lead to layoffs and even full displacement. Whether exposure leads to better or worse labor market outcomes for workers depends on two key variables: the elasticity of consumer demand in that sector (how much more of the product people buy as prices decrease), and the dimensionality of the job (how many tasks are involved in that job). As we hope to convince you by the end of the piece, we should be a lot more worried about jobs like trucking and warehousing than we currently are.

The standard approach to automation

Let us start with the “standard” approach to thinking about automation. First, we decompose jobs into tasks using a taxonomy like O*NET, then evaluate how many of those tasks can be automated or augmented by AI. The total impact on the job is a weighted average of how much each task was improved, which means you can build an “exposure index”—typically defined as what share of a job’s tasks can AI do?—and that index maps linearly into how much the job is affected (see, e.g., Michael Webb’s already-classic paper). This approach has been enormously useful for mapping the landscape of AI’s potential reach. But it contains an assumption that is almost certainly wrong for most real-world jobs: it assumes tasks are separable. That is, automating task A has no effect on the productivity of task B, and the overall impact is just the sum of parts.
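
Under the separability assumption, the job-level index is just a task-share weighted average of per-task impacts. A minimal sketch (the function name and numbers are illustrative, not from any of the cited papers):

```python
def exposure_index(time_shares, time_saved):
    """Separable exposure: the job-level impact is the task-share
    weighted average of per-task time savings. This is exactly the
    assumption the O-ring view rejects."""
    assert abs(sum(time_shares) - 1.0) < 1e-9  # shares must cover the job
    return sum(w * s for w, s in zip(time_shares, time_saved))

# A job with three tasks taking 50% / 25% / 25% of the worker's time;
# AI halves the first two tasks and doesn't help the third.
print(exposure_index([0.5, 0.25, 0.25], [0.5, 0.5, 0.0]))  # → 0.375
```

Note that under separability the order and interaction of tasks never matter: automating task A is assumed to leave the productivity of task B untouched.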

Consider the jobs that you know. There are many out there where the output consists of doing many different things right, not just some of them. You can’t have a cook who follows most of the steps of a recipe, a drummer who is mostly on the beat, a programmer whose code only partially works (or, for that matter, a professor who only does the research half of the job…though some have tested this requirement). These are jobs where each task needs to be completed successfully for the output to be acceptable.

Put differently, the tasks are not separable; they are complements: how well you do one task affects the value of doing every other task in the job. That tasks within a job are complements rather than substitutes seems quite plausible for most real-world production. And this has a wide range of important implications for how AI will actually affect jobs.

The O-ring model of jobs

The idea that complementary tasks create nonlinear productivity goes back to Michael Kremer’s classic 1993 paper, “The O-Ring Theory of Economic Development”. The name comes from the tragic Challenger disaster: a single faulty O-ring caused the catastrophic failure of the entire system. Kremer’s insight was that if production requires many steps, and each step needs to be done well for the final product to have value, then productivity becomes a multiplicative rather than a linear function of skill. A worker who makes slightly fewer errors per task will be dramatically more productive overall, because those small quality gains compound across every step.
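
The compounding is easy to see numerically; a small sketch of the multiplicative production function:

```python
# O-ring production: job output is the product of per-task qualities,
# so small per-task quality differences compound across tasks.
def oring_output(quality: float, n_tasks: int) -> float:
    return quality ** n_tasks

# A worker who gets each of 10 tasks 95% right vs. one at 99%:
print(oring_output(0.95, 10))  # ≈ 0.60
print(oring_output(0.99, 10))  # ≈ 0.90
```

A four-point gain in per-task quality becomes roughly a 50% gain in output, which is Kremer’s point: small skill differences produce large productivity differences when tasks multiply.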

This task-based model of jobs has gained fresh relevance with a recent paper by Joshua Gans and Avi Goldfarb, “O-Ring Automation,” which applies Kremer’s framework directly to AI-driven automation. While their model might appear simple at first glance, its implications are far-reaching and profound. At least one of us (Alex) has been obsessed with this paper for months (see here, here, and here).

Gans and Goldfarb build a model of a firm where each worker’s job is composed of n tasks. The job’s output is multiplicative in the quality of each task—this is the O-ring production function:

Y = q_1 · q_2 · ⋯ · q_n

A worker has a time endowment h and allocates it across the n tasks. If task s is performed manually, the worker spends h_s hours on it and generates quality:

q_s = a · h_s

where a is labor productivity, assumed constant across tasks (a simplifying assumption). The worker's time constraint is:

h_1 + h_2 + ⋯ + h_n = h

The firm can also choose to automate any task by renting a piece of capital that delivers a fixed quality θ at cost r per task. This is the key part to pay attention to: whether firms invest in automating a task depends on the trade-offs embedded in this problem. Once a task is automated, the worker no longer needs to spend any time on it.

So far the setup is quite simple. The interesting part is what the multiplicative structure of the production function implies once automation enters the picture.

How can automation raise wages?

Now suppose a firm chooses to automate k out of n tasks. What happens to the worker, and how does that affect the wage?

Before automation, the worker allocates time evenly across all n tasks, which is optimal given the symmetric structure. Each manual task therefore receives h/n hours and has quality a · h/n. Total output is:

Y_before = (a · h/n)^n

After k tasks are automated at quality θ, the worker now has all h hours to allocate across only n - k remaining manual tasks. Each manual task now gets h/(n-k) hours, producing quality a · h/(n-k). Total output becomes:

Y_after = θ^k · (a · h/(n-k))^(n-k)

So output rises after partial automation if and only if:

θ^k · (a · h/(n-k))^(n-k) > (a · h/n)^n

This condition matters: output does not automatically rise just because some tasks are automated; it rises only when the quality of automation is high enough. In particular, if the automated task quality θ is at least as good as the worker’s original pre-automation quality a · h/n on those tasks, output is guaranteed to increase.

But here is the key insight: because automation also frees the worker to concentrate more time on the remaining tasks, output can increase even if the automated tasks are performed at slightly lower quality than the worker originally achieved before automation. Automation lets the worker concentrate on fewer tasks, raising the quality of each one. This is the “focus effect.” Because of the functional form of the production function, higher quality on the remaining manual tasks doesn’t just add to output—it multiplies through the production function. The worker becomes more productive precisely because they’re doing fewer things.
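
The focus effect is easy to verify numerically. A sketch under the model’s assumptions (symmetric tasks, with a · h normalized to 1), showing output rising even when the machine quality θ is below the worker’s original per-task quality a · h/n:

```python
def output_manual(a_h: float, n: int) -> float:
    # All n tasks done manually: each gets h/n hours, quality a*h/n.
    return (a_h / n) ** n

def output_automated(a_h: float, n: int, k: int, theta: float) -> float:
    # k tasks at machine quality theta; the worker's full h is spread
    # over the remaining n-k tasks, raising each one's quality.
    return theta ** k * (a_h / (n - k)) ** (n - k)

a_h, n, k = 1.0, 4, 1
before = output_manual(a_h, n)                  # (0.25)**4
after = output_automated(a_h, n, k, theta=0.2)  # machine worse than 0.25
assert after > before  # the focus effect outweighs the quality loss
```

In this example the breakeven machine quality is about 0.105, well below the worker’s original 0.25, because the three remaining manual tasks each improve from quality 0.25 to about 0.33 and that improvement multiplies through.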

When the automation quality is sufficiently high relative to what the worker was producing manually on those tasks, the worker’s marginal product rises—and so (typically) does their wage. Partial automation, in the O-ring world, is often a complement to human labor rather than a substitute for it, which increases the worker’s wage.

But this is not necessarily good news for labor

Higher worker productivity is good for wages, but does it lead to more jobs or fewer? That depends on consumer demand. Suppose each worker makes one calculator a day, the firm has 10 workers, and all calculators are sold at the prevailing price. Now imagine each worker becomes much more productive, so that each can make 10 calculators. The price of each calculator falls (costs fall), but consumers still demand roughly the same number of calculators. This is the case of inelastic demand, demand that does not respond much to prices. The firm will fire 9 of the 10 workers. But what if consumers buy far more calculators at lower prices, i.e., demand is very elastic? Then the firm will actually end up hiring more workers to meet the new demand, despite the fact that each is more productive.

More generally, if demand is elastic (elasticity > 1), then a price decrease leads to a more-than-proportional increase in quantity demanded. Output expands a lot. The firm needs more workers to produce this higher output, even though each worker is now more productive. Net effect: more hiring.

If demand is inelastic (elasticity < 1), a price decrease leads to a less-than-proportional increase in quantity demanded. Output does not expand much and the firm can produce the same (or slightly more) output with fewer workers since each one is more productive. Net effect: displacement.
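
The two cases can be sketched with a constant-elasticity demand curve. All numbers here (the baseline quantity, the 10x productivity gain, the elasticities) are illustrative assumptions, not figures from the text:

```python
# Constant-elasticity demand: Q = Q0 * (P/P0)**(-eps).
# A productivity gain g lets each worker make g units instead of 1,
# and (in a competitive market) cuts the price by a factor of g.

def workers_needed(eps, g, base_qty=100.0):
    price_ratio = 1.0 / g                   # price falls with unit cost
    qty = base_qty * price_ratio ** (-eps)  # quantity demanded at new price
    return qty / g                          # each worker now makes g units

g = 10.0  # each worker becomes 10x more productive; 100 workers initially

print(workers_needed(eps=0.2, g=g))  # inelastic: ~15.8 workers, displacement
print(workers_needed(eps=1.5, g=g))  # elastic: ~316 workers, net hiring
print(workers_needed(eps=1.0, g=g))  # unit elastic: exactly 100, unchanged
```

Unit elasticity is the knife edge in this sketch: below it, productivity gains displace workers; above it, they expand employment.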

This is closely related to a popular idea commonly referred to as Jevons’ paradox: when a resource becomes more efficient to use, total consumption of that resource often increases rather than decreases. When the steam engine made coal more efficient, coal consumption skyrocketed because so many new applications became economically viable. The same logic applies to labor: if AI makes a worker dramatically more productive, and demand for that product is elastic, one may end up with more workers in that occupation, not fewer.

Why job dimensionality matters: The case of firm incentives

The relationship between tasks and the elasticity of consumer demand is an important dimension for predicting AI-driven displacement, but one variable that is often overlooked is the number of tasks in the job itself, i.e., its dimensionality. A job’s dimensionality matters for two reasons.

First, conditional on a task being automated, a low-dimensional job is more likely to be fully displaced. If a job has 20 tasks and one gets automated, a human worker is still required to do the other 19 tasks. But if a job has one task and that task gets automated, the job is gone. Second—and this dimension is perhaps the most overlooked—organizations have a stronger incentive to automate a task the fewer non-automated tasks are left in the job. Imagine that automating a task requires a $10 million investment (buying the software, onboarding, connecting it to the rest of the system, etc.). In one case, this task is the only non-automated task left in a job; in the other, if this task is automated, there are 19 other non-automated tasks left. The firm has a much stronger incentive to automate the task in the first case than in the second, because it can then replace the worker and reap the cost savings involved.1

Because of this, firms have a stronger incentive to invest in technology to automate low-dimensional jobs. In a low-dimensional job, automating all or most of the core tasks can eliminate the position and the wage bill altogether. That makes the return to automation much larger. In other words, not all “unexposed” tasks matter equally: in some jobs the remaining tasks still keep the worker at the firm; in others they do not.

This gives a clear prediction: even if a job is not currently “exposed” to AI, in the sense that AI is not yet being used for its tasks, a low-dimensional job whose tasks the technology is close to automating should be considered at risk. Firms will work harder and invest more to automate those tasks than they will in jobs with many non-automated tasks remaining.
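
The incentive asymmetry behind this prediction can be made concrete. Here is a stylized sketch of the $10 million example, with a hypothetical wage bill (the present value of the worker's wages) as the only benefit of automation:

```python
# Stylized automation decision: the investment pays off only if it
# automates the job's LAST manual task, so the worker can be replaced.

COST = 10_000_000       # cost of automating one task (from the example)
WAGE_BILL = 12_000_000  # hypothetical PV of the worker's wages

def automation_payoff(remaining_manual_tasks):
    if remaining_manual_tasks == 1:
        # Last manual task: automating it eliminates the position.
        return WAGE_BILL - COST
    # Other manual tasks remain, so the worker (and wage bill) stays.
    return -COST

print(automation_payoff(1))   # 2,000,000: worth investing
print(automation_payoff(20))  # -10,000,000: no incentive to invest
```

This deliberately assumes partial automation yields no savings at all; as the footnote notes, in reality those savings depend on the complementarities between the remaining tasks, but the qualitative ranking survives.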

Trucking and warehousing, the overlooked canaries in the coal mine

This is why we think people should be more worried about jobs like trucking and warehousing.

Roughly 3 million Americans drive trucks for a living. Many are in their 50s, have been driving for decades, and live in communities where trucking is an economic backbone. Trucking is one of the best jobs one can get without a college degree. The actual work of a long-haul truck driver is dominated by a few core functions: moving the truck safely from point A to point B. The logistics, loading/unloading, etc. are all done by others. If autonomous driving becomes reliable on long-haul routes, the job of a truck driver is not just being augmented; it is fundamentally threatened and may even be displaced entirely. And that possibility is no longer theoretical. Companies such as Aurora Innovation and Kodiak Robotics are already running large-scale autonomous trucking pilots and commercial deployments on constrained routes.

Warehousing tells a similar story. Warehousing employs millions of U.S. workers, and many warehouse jobs—picking, packing, sorting, pallet movement—are relatively narrow and increasingly automatable. Abroad, firms are already operating highly automated “dark warehouses” that run around the clock with minimal human labor. These warehouses look nothing like what we see today: they are designed from the ground up to be run by machines.

Now compare that to a knowledge worker, say, a management consultant. The job combines research, data analysis, client communication, presentation design, strategic reasoning, team coordination, and relationship management. That’s at least seven or eight distinct complementary tasks. Claude or Codex might automate the first pass on the data analysis and slide deck creation, but the consultant is still needed for everything else. In O-ring terms, automating some tasks can make the remaining ones more valuable by allowing the worker to allocate more time to them—the consultant can spend more time talking to the client and making them comfortable with the implementation, getting buy-in from the various units, etc. As a consequence, wages may rise, and employment may rise too if better output and lower prices expand client demand.

You can see the same logic in many high-stakes professions such as medicine and academia. There are now over 870 FDA-approved radiology AI tools, and 66% of doctors use at least one AI tool, mainly for note dictation and diagnostic support. But these tools are augmenting radiologists and physicians, not replacing them. AI typically handles the routine pattern recognition aspect of the job, freeing doctors to focus on complex cases, patient communication, and clinical judgment. Likewise, academics have been debating whether advances in AI make research assistants more or less valuable. As AI automates routine analytical tasks, both professors and RAs can concentrate more on ideas and judgment, thereby expanding output and demand for skilled research labor. This is yet again the O-ring focus effect in practice.

What do exposure indices capture?

Let us bring this back to the exposure framework. In the standard approach, a management consultant is highly “exposed” to AI whereas a truck driver is not. But does this mean that the consultant is at higher displacement risk than the truck driver? Not necessarily. The consultant’s high exposure may actually be good news because it means AI will augment many of their complementary tasks, triggering the focus effect and potentially raising wages. On the other hand, the truck driver’s moderate exposure on a single critical task is much more dangerous because trucking companies have a much higher incentive to automate the task of driving, and once that’s done, the job is gone as well. These incentives are already playing out in practice, as the trucking and warehousing examples above show.

The relevant object therefore is not average task exposure, but the structure of bottlenecks and how automation reshapes worker time around them. Two jobs with identical exposure scores can have completely opposite displacement risks depending on whether their tasks are complements, whether demand for their output is elastic or inelastic, and the incentives of the firm to invest in automation. The workers at greatest risk are not necessarily those with the highest average exposure, but those whose jobs are built around a small number of core tasks that AI can automate.

1. In the case where jobs are not fully automated, the cost savings from automating the marginal task will depend on the complementarities between the other tasks in the job. The exact relationship is worked out in the O-ring model of automation paper.

Read the whole story
denubis
14 days ago
reply
Share this story
Delete

One Man a Hero. One Man a Monster.

Robert Mueller, 1944-2026, as a young Marine Corps officer preparing to be a platoon leader during the bloodiest year of combat in Vietnam, 1968. He was wounded in action, received a Purple Heart, and was awarded a Bronze Star with ‘V’ for valor among other decorations. It was the beginning of a long career in public service. (National Archives/Getty Images.)

Robert Mueller died today, at age 81. Decent people around the world mourn his passing. Americans who know about public service recognize him as a stellar example. If the phrase “with privilege, comes responsibility” can apply to any Americans of recent history, the list might start with him.

And today, the person who is now on his 24th golfing trip to Mar-a-Lago of his second term—at millions of dollars of public expense per trip, while the world reels from a war he started on a whim, while families he promised to help are struggling with medical expenses and gas prices and tariff increases and everything else—today that same person wrote publicly of Mueller:

Good, I’m glad that he is dead.

This is the most despicable public statement by an American public official in my lifetime.

It needs to be recognized as such.

Any head of state who can say this in public about a countryman, even about a political adversary, is a moral monster. Either he has no ability whatsoever to empathize with others; or he has no sense whatsoever of a leader’s duty; or he has no remaining cognitive ability whatsoever to “filter” what he says. Or all three.

If I thought Trump had ever heard of John Donne, I would remind him of “no man is an island.” If I thought he had ever been seriously in any place of worship, I would remind him that none teaches being “glad” at another person’s death. If I thought he had a soul, I would recommend that he attend to it.

Just while I’m at it, here is how Donne’s most famous passage ends:

Any man’s death diminishes me,

Because I am involved in mankind.

And therefore never send to know for whom the bell tolls;

It tolls for thee.

Whatever your political views, including about “the Mueller report,” respect Robert Mueller’s example of service. And stand up against Trump’s example of depravity.


‘I have been very lucky. I should spend time paying it back.’

Let us consider, briefly, former FBI director Robert Mueller—before, and apart from, his past two decades in the news.

—He grew up in privilege, son of a DuPont executive. For high school he went to the elite St. Paul’s boarding school in New Hampshire.

—At St. Paul’s he was a renowned athlete: Captain of three teams—soccer, hockey, and lacrosse—and winner of the school’s medal as outstanding overall athlete.

—He went to Princeton, where he played varsity lacrosse. A lacrosse teammate one year ahead of him was another notable athlete named David Spencer Hackett.

—At Princeton, Hackett had been in ROTC, and after graduation in 1965, in those early days of the Vietnam war, he was commissioned as a Marine Corps officer. Early in 1967, leading a platoon in Vietnam, he was killed in action. You can read about his life and death here.

—After his own graduation from Princeton in 1966, Mueller spent a year getting a master’s degree. And then he joined the Marine Corps, in part because of his teammate Hackett’s death. As he said years later in an interview, with emphasis added:

I have been very lucky. I always felt I should spend some time paying it back. One of the reasons I went into the Marine Corps was because we lost a very good friend, a Marine in Vietnam, who was a year ahead of me at Princeton. There were a number of us who felt we should follow his example and at least go into the service. And it flows from there.

—In Vietnam he led combat platoons through the carnage of 1968; he was wounded; he received numerous decorations. Decades later he told my friend Garrett Graff that among his achievements, he was “most proud the Marine Corps deemed me worthy of leading other Marines.” Combat bravery is far from the only mark of civic courage. But Mueller displayed both kinds. You can think of examples a generation older than Mueller: the first George Bush, who was an 18-year-old combat aviator during World War II. William Webster, Mueller’s predecessor at the FBI, who served in the Navy both during World War II and the Korean war.

—Mueller left the military to go to law school. He spent the decades that followed mainly in public service, including 12 years as director of the FBI. He was appointed by a Republican president (GWB), and re-appointed by a Democrat (Obama).

And this is the kind of person the country’s current “leader” says he is “glad” has died.

I don’t know when I have ever felt more disgusted by an elected leader than right now.

I’ve disagreed with people, often. But this is beneath contempt.



What can any of us do?

A national leader who celebrates any prominent citizen’s death is not fit to lead.

But we know this already about the morally empty vessel who at this moment is lolling or dining in Florida, while others serve and suffer and die.

But what can anyone do?

—One week from today, the next “No Kings” mass protest will occur. The preceding one, last October, was the biggest one-day demonstration in the nation’s history. And that was before the ICE murders in Minnesota, the war-on-a-whim in Iran, the surge in gas prices, the “glad he’s dead” post. Next Saturday’s should be bigger. Find out more about it here.

—Call and write the White House and leave messages of outrage about this vile expression from a serving president. The address as always is 1600 Pennsylvania Avenue, Washington DC 20500. The main phone number is the same one I remember: 202-456-1414. They now have a comment line, 202-456-1111. Flood them with outrage.

—Call and write your Senators and Representative, especially if they are Republican. You can look them up on their websites. But the main Capitol switchboard number, as always, is 202-224-3121. They notice when people call and write.

—Insist that those who presume to hold the same positions Robert Mueller once did, notably including Kash Patel at the FBI, issue statements of sorrow and sympathy at Robert Mueller’s death, and apologies for their leader’s offensive message. Reporters: Make Kash Patel answer, “Are you also ‘glad’ that Robert Mueller has died?” Citizens: The FBI’s main phone number is 202-324-3000.

—Make every single Republican office holder, at every single press availability, answer the same question. Do they agree that the country should be “glad” to have lost a man like Robert Mueller? Don’t let them try to eel their way out, with evasions like Mike Johnson’s trademark “I haven’t seen that yet” or “no comment.” It’s a simple question: “The president says he is glad Mr. Mueller is dead. Do you agree?”

They stand up to this moral monster now. Or they stand with him. It’s a bright line.

—Also for reporters: If I were running your news organization, I’d avoid honorifics like “Mr. President” or even “Sir” in association with this abysmal moral example. He has forfeited his right to all terms of respect. It’s a favor to call him even “Mr. Trump.”


My personal note.

I had no official or personal dealings with Robert Mueller during his long career. I never interviewed him or went to hear him testify.

Bizarrely, I shared physical space with him on two occasions, both from local life in DC. Once was in the waiting room for colonoscopies at a medical center several years ago. (Apparently we both were fine.) The other was at DCA airport’s then-notorious “Gate 35X,” which was like a rundown bus station for regional airline flights. I did not approach or speak with him either time.

These both were in periods when Mueller’s face was almost nightly on the TV news, and he could expect to be recognized. But he carried himself as just another citizen. Once he sat reading a newspaper. The other time, reading a hardback book. He did not look around to check whether people noticed him. He comported himself as a normal, decent man—aware of his good luck, and the responsibilities that placed on him.

Let us remember him as an exemplary American. And learn from him rather than the monster who now controls the airwaves.



Quoting Tim Schilling


If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole. [...]

For a reviewer, it’s demoralizing to communicate with a facade of a human.

This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.

Tim Schilling, Give Django your time and money, not your tokens

Tags: ai-ethics, open-source, generative-ai, ai, django, llms


A working email privacy template


Unrelated to what has been said above: we note that you use a work email account for this correspondence. We have no particular insight into your specific workplace situation, but want to caution that in general, such an arrangement means that your employer has both the opportunity and at times the obligation to partake of the contents of exchanges such as the one we are currently engaged in. We cannot guarantee the confidentiality of personal information sent to such accounts, and wish to inform you that your continued use of this account does not conform to the standards of privacy we seek to uphold. If you for whatever reason have second thoughts about this arrangement, we urge you to use a personal email account moving forward.

Sincerely,


