
Team Mirai and Democracy


Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.

Voters get immediate feedback on how their individual point of view matches—or doesn’t—a party’s platform, and they can see whether and how the party adopts their feedback. This isn’t like an opinion poll that politicians use for calculating short-term electoral tactics. It’s a deliberative reasoning process that scales, engaging voters in defining policy and helping candidates to listen deeply to their constituents.

This is happening today in Japan. Constituents have spent about eight thousand hours engaging with Mirai’s AI Interviewer since 2025. The party’s gamified volunteer mobilization app, Action Board, captured about 100,000 organizer actions per day in the run-up to last week’s election.

It’s how Team Mirai, which translates to ‘The Future Party,’ does politics. Its founder, Takahiro Anno, first ran for office in 2024 as a 33-year-old software engineer standing for Governor of Tokyo. He came in fifth out of 56 candidates, winning more than 150,000 votes as an unaffiliated political outsider. He won attention by taking a distinctive stance on the role of technology in democracy and using AI aggressively in voter engagement.

Last year, Anno ran again, this time for the Upper Chamber of the national legislature—the Diet—and won. Now the head of a new national party, Anno found himself with a platform for making his vision of a new way of doing politics a reality.

In this recent House of Representatives election, Team Mirai shot up to win nearly four million votes. In the lower chamber’s proportional representation system, that was good enough for eleven total seats—the party’s first ever representation in the Japanese House—and nearly three times what it achieved in last year’s Upper Chamber election.

Anno’s party stood for election without aligning itself on the traditional axes of left and right. Instead, Team Mirai, heavily associated with young, urban voters, sought to unite across the ideological spectrum by taking a radical position on a different axis: the status quo and the future. Anno told us that Team Mirai believes it can triple its representation in the Diet after the next elections in each chamber, an audacious goal that seems achievable given the party’s rapid rise over the past year.

In the American context, the idea of a small party unifying voters across left and right sounds like a pipe dream. But there is evidence it worked in Japan. Team Mirai won an impressive 11% of proportional representation votes from unaffiliated voters, nearly twice its share of the broader electorate. The centerpiece of the party’s policy platform is not the traditional hot-button issues; it is democracy itself, and how it can be enhanced by embracing a futuristic vision of digital democracy.

Anno told us how his party arrived at its manifesto for this month’s elections, and why it looked different from other parties’ in important ways. Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance. Anno described this as another future-facing strategy: less price relief in the short term, but sustained funding for essential programs.

Anno has always intended to build a different kind of party. After receiving roughly $1 million in public funding apportioned to Team Mirai based on its single seat in the Upper Chamber last year, Anno began hiring engineers to enhance his software tools for digital democracy.

Anno described Team Mirai to us as a ‘utility party’: basic infrastructure for Japanese democracy that serves the broader polity rather than one faction. Their Gikai (‘assembly’) app illustrates the point. It provides a portal for constituents to research bills, using AI to generate summaries, describe their impacts, surface media reporting on each issue, and answer users’ questions. Like all their software, it’s open source and free for anyone, in any party, to use.

After last week’s victory, Team Mirai now has about $5 million in public funding and ambitions to grow the influence of their digital democracy platform. Anno told us Team Mirai has secured an agreement with the LDP, Japan’s dominant ruling party, to begin using Team Mirai’s Gikai and corruption-fighting Mirumae financial transparency tool.

AI is driving more societal and economic change than any other issue we will encounter in our lifetimes, yet US political parties are largely silent about it. Meanwhile, AI and Big Tech companies and their owners are ramping up their political spending to influence the parties. To the extent that AI has shown up in our politics at all, it has been limited to the question of where to site the next generation of data centers and how to channel populist backlash against big tech.

Those are causes worthy of political organizing, but very few US politicians are leveraging the technology for public listening or other pro-democratic purposes. With the midterms still nine months away and with innovators like Team Mirai making products in the open for anyone to use, there is still plenty of time for an American politician to demonstrate what a new politics could look like.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.


How Will AI-driven Automation Actually Affect Jobs?


One of the most widely cited findings in AI policy comes from a 2023 paper by Eloundou, Manning, Mishkin, and Rock titled “GPTs are GPTs.” The title is a nice double meaning: the paper studies how general-purpose technologies (GPTs) powered by large language models (also GPTs) may reshape the labor market. The headline finding is that around 80% of U.S. workers could have at least 10% of their tasks affected by LLMs, and roughly 19% may see half or more of their tasks impacted. Broadly, these exposure measures try to capture how “exposed” the occupation is to AI as a function of whether AI can augment the tasks involved in the job: direct exposure is defined as “whether access to an LLM or LLM-powered system would reduce the time required for a human to perform a specific DWA or complete a task by at least 50%” (a DWA is a “detailed work activity” in the O*NET taxonomy). The authors are crystal clear on this in the paper: exposure corresponds to the capacity of AI to be involved in the job, not the extent to which the job can be automated away.

But the word “exposure” turned out to bring on all sorts of anxieties about exactly that—displacement. And perhaps for this reason, these AI exposure measures have routinely gone viral on social media over the last couple of months.

A recent example is by Andrej Karpathy, one of the co-founders of OpenAI and a leader in how to think about AI more generally (e.g., he coined both the terms “jagged intelligence” and “vibe coding”). His dashboard, which he described as a “vibe-coded” weekend project, was a ranking of how exposed major occupations are to AI-driven automation. It quickly went viral on X, as it fed all of the already-existing narratives about rapid job loss due to AI.

After seeing the dashboard sensationalized and spread like wildfire, Karpathy clarified that his “exposure” scorecard was based on a quick, LLM-generated measure of how digital a job is, and was never meant to be a serious forecast of which occupations will shrink or disappear. While his own project website made the same caveat, it was largely ignored on X. To butcher the well-known phrase: “A vibe coded weekend project will travel twice around the world before the caveat has time to put its pants on.”

What this recent episode illustrates, however, is that such exposure measures have caught the public eye but are routinely misread (with some proposing a moratorium on the term “exposure” altogether). When people hear that a job is “80% exposed” to AI, they picture 80% of that job disappearing. The actual economics of AI exposure and job loss are pretty far from that characterization.

What is a “job”?

A job is a set of tasks; a person typically gets paid based on how well they complete all of the tasks associated with the job. So let’s say you’re a project manager. Your job involves a bunch of tasks like generating ideas, outlining those ideas succinctly and getting feedback from team members, putting together presentations, and a bunch of rote work (e.g., approving time sheets, fielding logistics). As the AI models become better, you’ve realized that you can automate many of these things: AI can do a lot of the rote work for you, and can even help you put together presentations. According to the exposure measure, your job is now “exposed” to AI. What happens to your job and what happens to your wage? Well, if automating some of the tasks frees up time to generate better ideas, your overall productivity goes up—you become even more valuable to the firm. Humans are still employed and, if anything, wages go up.

On the other hand, if AI automates all of the tasks—let’s say your job only involves two tasks and they both get automated—then yes, human labor will get displaced. Importantly, the fewer the number of tasks (what we call the dimensionality of a job), the greater the incentive of the company to automate it in the first place. This is the part much of the analysis on automation misses: adopting AI into an existing organization is costly, so the firm will be more likely to invest if it can automate the job, not just the task. “Exposure” and risk of automation is not just a function of model capabilities, it also depends on firm incentives. And this is not a hypothetical: we now have plenty of evidence that such incentives matter greatly for what gets automated and when (e.g., firms are much more likely to automate when the cost of human labor increases).

Lastly, even if AI makes people more productive and yields higher wages, there can still be massive layoffs in that sector if consumers do not “absorb” the increased productivity: if productivity-driven price drops do not increase demand for the product, then fewer workers will be needed in that sector.

More generally, a task being exposed to AI—even if that exposure corresponds to full automation of that task—can potentially lead to higher wages and more hiring for that occupation. Or it can lead to layoffs and even full displacement. Whether exposure leads to better or worse labor market outcomes for workers depends on two key variables: the elasticity of consumer demand in that sector (how much more of the product people buy as prices decrease), and the dimensionality of the job (how many tasks are involved in that job). As we hope to convince you by the end of the piece, we should be a lot more worried about jobs like trucking and warehousing than we currently are.

The standard approach to automation

Let us start with the “standard” approach to thinking about automation. First, we decompose jobs into tasks using a taxonomy like O*NET, then evaluate how many of those tasks can be automated or augmented by AI. The total impact on the job is a weighted average of how much each task was improved, which means you can build an “exposure index”—typically defined as what share of a job’s tasks can AI do?—and that index maps linearly into how much the job is affected (see, e.g., Michael Webb’s already-classic paper). This approach has been enormously useful for mapping the landscape of AI’s potential reach. But it contains an assumption that is almost certainly wrong for most real-world jobs: it assumes tasks are separable. That is, automating task A has no effect on the productivity of task B, and the overall impact is just the sum of parts.
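The separable logic above is easy to make concrete. Here is a minimal sketch of such an exposure index, an illustrative implementation of our own rather than the methodology of any of the papers cited:

```python
# Illustrative "standard approach" exposure index: the importance-weighted
# share of a job's tasks that AI could meaningfully speed up. Task weights
# and flags below are hypothetical, not drawn from O*NET or any paper.

def exposure_index(tasks):
    """tasks: list of (importance_weight, ai_speeds_task_up) pairs."""
    total = sum(w for w, _ in tasks)
    exposed = sum(w for w, hit in tasks if hit)
    return exposed / total

# A hypothetical project-manager job: four equally weighted tasks,
# three of which an LLM could speed up by 50% or more.
pm_tasks = [(1, True), (1, True), (1, True), (1, False)]
print(exposure_index(pm_tasks))  # 0.75
```

Under the separability assumption, this single number is taken to map linearly into how much the job is affected, which is exactly the assumption the rest of the piece questions.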

Consider the jobs that you know. There are many out there where the output consists of doing many different things right, not just some of them. You can’t have a cook who follows most of the steps of a recipe, a drummer who is mostly on the beat, a programmer whose code only partially works (or, for that matter, a professor who only does the research half of the job…though some have tested this requirement). These are jobs where each task needs to be completed successfully for the output to be acceptable.

Put differently, the tasks are not separable; they are complements: how well you do one task affects the value of doing the others. That tasks within a job are complements rather than substitutes seems quite plausible for most real-world production. And this has a wide range of important implications for how AI will actually affect jobs.

The O-ring model of jobs

The idea that complementary tasks create nonlinear productivity goes back to Michael Kremer’s classic 1993 paper, “The O-Ring Theory of Economic Development”. The name comes from the tragic Challenger disaster: a single faulty O-ring caused the catastrophic failure of the entire system. Kremer’s insight was that if production requires many steps, and each step needs to be done well for the final product to have value, then productivity becomes a multiplicative rather than a linear function of skill. A worker who makes slightly fewer errors per task will be dramatically more productive overall, because those small quality gains compound across every step.
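Kremer’s compounding point is easy to see numerically. A minimal sketch (the quality numbers are ours, purely illustrative):

```python
# With multiplicative (O-ring) output, a small per-task quality edge
# compounds across every step of production.
from math import prod

def oring_output(qualities):
    """O-ring production: output is the product of per-task qualities."""
    return prod(qualities)

n = 10
worker_a = [0.95] * n  # 95% quality on each of 10 tasks
worker_b = [0.90] * n  # only 5 points worse per task

# A small per-task gap becomes a large output gap: (0.95/0.90)**10
print(oring_output(worker_a) / oring_output(worker_b))  # ≈ 1.72x
```

A worker who is 5% better at every one of ten steps is not 5% more productive but roughly 72% more productive, which is Kremer’s multiplicative point in miniature.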

This task-based model of jobs has gained fresh relevance with a recent paper by Joshua Gans and Avi Goldfarb, “O-Ring Automation,” which applies Kremer’s framework directly to AI-driven automation. While their model might appear simple at first glance, its implications are far-reaching and profound. At least one of us (Alex) has been obsessed with this paper for months (see here, here, and here).

Gans and Goldfarb build a model of a firm where each worker’s job is composed of n tasks. The job’s output is multiplicative in the quality of each task—this is the O-ring production function:

$$y = \prod_{s=1}^{n} q_s$$

A worker has a time endowment h and allocates it across the n tasks. If task s is performed manually, the worker spends h_s hours on it and generates quality:

$$q_s = a \cdot h_s$$

where a is labor productivity, assumed constant across tasks (a simplifying assumption). The worker's time constraint is:

$$\sum_{s=1}^{n} h_s = h$$

The firm can also choose to automate any task by renting a piece of capital that delivers a fixed quality θ at cost r per task. This is the key part to pay attention to: whether firms invest in automating a task depends on the trade-offs embedded in this problem. Once a task is automated, the worker no longer needs to spend any time on it.

So far the setup is quite simple. The interesting part is what the multiplicative structure of the production function implies once automation enters the picture.

How can automation raise wages?

Now suppose a firm chooses to automate k out of n tasks. What happens to the worker, and how does that affect the wage?

Before automation, the worker allocates time evenly across all n tasks, which is optimal given the symmetric structure. Each manual task therefore receives h/n hours and has quality a · h/n. Total output is:

$$y = \left(\frac{ah}{n}\right)^{n}$$

After k tasks are automated at quality θ, the worker now has all h hours to allocate across only n - k remaining manual tasks. Each manual task now gets h/(n-k) hours, producing quality a · h/(n-k). Total output becomes:

$$y' = \theta^{k}\left(\frac{ah}{n-k}\right)^{n-k}$$

So output rises after partial automation if and only if:

$$\theta^{k}\left(\frac{ah}{n-k}\right)^{n-k} \;\ge\; \left(\frac{ah}{n}\right)^{n}$$

This is an important condition. It says that if the automated task quality θ is at least as good as the worker’s original pre-automation manual quality on those tasks, then output increases for sure. But output does not automatically rise just because some tasks are automated; it rises only when the quality of automation is high enough.

But here is the key insight: because automation also frees the worker to concentrate more time on the remaining tasks, output can increase even if the automated tasks are performed at slightly lower quality than the worker originally achieved before automation. Automation lets the worker concentrate on fewer tasks, raising the quality of each one. This is the “focus effect.” Because of the functional form of the production function, higher quality on the remaining manual tasks doesn’t just add to output—it multiplies through the production function. The worker becomes more productive precisely because they’re doing fewer things.

When the automation quality is sufficiently high relative to what the worker was producing manually on those tasks, the worker’s marginal product rises—and so (typically) does their wage. Partial automation, in the O-ring world, is often a complement to human labor rather than a substitute for it, which increases the worker’s wage.
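Plugging illustrative numbers into the before/after output expressions makes the focus effect visible. The parameter values here are our own choices, not taken from the paper:

```python
# Numeric sketch of the focus effect in the O-ring automation model.
# Parameters (a, h, n, k, theta) are illustrative choices of ours.

def output_before(a, h, n):
    """Pre-automation output: time split evenly across n manual tasks."""
    return (a * h / n) ** n

def output_after(a, h, n, k, theta):
    """k tasks automated at quality theta; all h hours go to the rest."""
    return theta ** k * (a * h / (n - k)) ** (n - k)

a, h, n, k = 1.0, 1.0, 5, 2
manual_quality = a * h / n  # 0.2: the worker's old quality per task
theta = 0.18                # automation is slightly WORSE per task

print(output_before(a, h, n))           # ≈ 0.00032
print(output_after(a, h, n, k, theta))  # ≈ 0.0012: output still rises
```

Even though the machine does each automated task a bit worse than the worker used to (0.18 versus 0.2), total output roughly quadruples, because the worker’s freed-up time raises quality on the three remaining tasks and those gains multiply through the production function.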

But this is not necessarily good news for labor

Higher worker productivity is good for wages, but does it lead to more jobs or fewer? That depends on consumer demand. Suppose each worker makes one calculator a day, the firm has 10 workers, and all calculators are sold at the prevailing price. Now imagine each worker becomes much more productive, so that each can make 10 calculators. The price of each calculator falls (costs fall), but consumers still demand roughly the same number of calculators. This is the case of inelastic demand—demand that does not respond much to prices. The firm will fire 9 of the workers. But what if consumers buy far more calculators at lower prices, i.e., demand is very elastic? Then the firm will actually end up hiring more workers to meet the new demand, despite the fact that each one is more productive.

More generally, if demand is elastic (elasticity > 1), then a price decrease leads to a more-than-proportional increase in quantity demanded. Output expands a lot. The firm needs more workers to produce this higher output, even though each worker is now more productive. Net effect: more hiring.

If demand is inelastic (elasticity < 1), a price decrease leads to a less-than-proportional increase in quantity demanded. Output does not expand much and the firm can produce the same (or slightly more) output with fewer workers since each one is more productive. Net effect: displacement.

This is closely related to a popular idea commonly referred to as Jevons’ paradox: when a resource becomes more efficient to use, total consumption of that resource often increases rather than decreases. When the steam engine made coal more efficient, coal consumption skyrocketed because so many new applications became economically viable. The same logic applies to labor: if AI makes a worker dramatically more productive, and demand for that product is elastic, one may end up with more workers in that occupation, not fewer.
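The calculator logic reduces to a one-line back-of-the-envelope rule. This is a stylized calculation of ours, assuming constant-elasticity demand and full pass-through of cost savings into prices (neither assumption is from the piece): if productivity rises by a factor g, price falls to 1/g of its old level, quantity demanded scales by g**eps, and each unit needs 1/g as much labor, so employment scales by g**(eps - 1).

```python
# Stylized employment effect of a productivity gain g under
# constant-elasticity demand with full price pass-through
# (illustrative assumptions, not a forecast).

def employment_multiplier(g, eps):
    """g: productivity multiple; eps: price elasticity of demand."""
    return g ** (eps - 1)

g = 10  # each worker now makes 10 calculators instead of 1
print(employment_multiplier(g, eps=0.2))  # ~0.16: inelastic -> layoffs
print(employment_multiplier(g, eps=1.5))  # ~3.16: elastic -> hiring
```

The knife edge sits exactly at unit elasticity, matching the elastic/inelastic split described above.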

Why job dimensionality matters: The case of firm incentives

The elasticity of consumer demand is an important dimension for predicting AI-driven displacement, but one variable that is often overlooked is the number of tasks in the job itself, i.e., its dimensionality. A job’s dimensionality matters for two reasons.

First, conditional on a task being automated, a low-dimensional job is more likely to be fully displaced. If a job has 20 tasks and one gets automated, a human worker is still required for the other 19. But if a job has one task and that task gets automated, the job is gone. Second—and this dimension is perhaps the most overlooked—organizations have a stronger incentive to automate a task the fewer non-automated tasks are left in the job. Imagine that automating a task requires a $10 million investment (buying the software, onboarding, connecting it to the rest of the system, etc.). In one case, this task is the only non-automated task left in a job; in the other, 19 non-automated tasks remain after it is automated. The firm has a much higher incentive to automate the task in the first case, because it can then replace the worker and reap the cost savings involved.[1]

Because of this, firms have a stronger incentive to invest in technology to automate low dimensional jobs. In a low-dimensional job, automating all or most of the core tasks can eliminate the position and the wage bill altogether. That makes the return to automation much larger. In other words, not all “unexposed” tasks matter equally: in some jobs the remaining tasks still keep the existing worker at the firm; in others they do not.

This gives a clear prediction: even if a job is not currently “exposed” to AI, in the sense that AI is not being used for the tasks involved, if it is low dimensional and the technology is getting close to automating the tasks, it should be considered at risk. Firms will work harder and invest more to automate the task(s) involved than in the case where jobs have many non-automated tasks.
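The $10 million thought experiment can be sketched as a toy payoff comparison. The numbers and the zero-savings simplification for partial automation are ours (in the model, partial-automation savings depend on task complementarities):

```python
# Toy version of the firm's automation incentive (illustrative numbers).
# Simplification: the wage bill is only saved if the worker can be fully
# replaced, i.e., if the task being automated is the last manual one.

def payoff_of_automating_one_task(remaining_manual_tasks,
                                  annual_wage_bill, years=10):
    if remaining_manual_tasks == 1:
        return annual_wage_bill * years  # worker fully displaced
    return 0  # worker still needed for the other tasks; no wage savings

cost = 10_000_000       # fixed cost of automating one task
wage_bill = 3_000_000   # hypothetical annual wages at stake

print(payoff_of_automating_one_task(1, wage_bill) - cost)   # 20000000
print(payoff_of_automating_one_task(19, wage_bill) - cost)  # -10000000
```

The same investment is wildly profitable when it eliminates the last manual task and a pure loss when 19 manual tasks remain, which is why firms will push hardest to automate low-dimensional jobs.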

Trucking and warehousing, the overlooked canaries in the coal mine

This is why we think people should be more worried about jobs like trucking and warehousing.

Roughly 3 million Americans drive trucks for a living. Many are in their 50s, have been driving for decades, and live in communities where trucking is an economic backbone. Trucking is one of the best jobs one can get without a college degree. Yet the actual work of a long-haul truck driver is dominated by one core function: moving the truck safely from point A to point B. The logistics, loading/unloading, etc. are all done by others. If autonomous driving becomes reliable on long-haul routes, the job of a truck driver is not just being augmented; it is fundamentally threatened and may be displaced entirely. And that possibility is no longer theoretical: companies such as Aurora Innovation and Kodiak Robotics are already running large-scale autonomous trucking pilots and commercial deployments on constrained routes.

Warehousing tells a similar story. It employs millions of U.S. workers, and many warehouse jobs—picking, packing, sorting, pallet movement—are relatively narrow and increasingly automatable. Abroad, firms are already operating highly automated “dark warehouses” that run around the clock with minimal human labor. These warehouses look nothing like what we see today: they are designed from the ground up to be run by machines.

Now compare that to a knowledge worker, say, a management consultant. The job combines research, data analysis, client communication, presentation design, strategic reasoning, team coordination, and relationship management. That’s at least seven or eight distinct complementary tasks. Claude or Codex might automate the first pass on the data analysis and slide-deck creation, but the consultant is still needed for everything else. In O-ring terms, automating some tasks can make the remaining ones more valuable by allowing the worker to allocate more time to them—the consultant can spend more time talking to the client and making them comfortable with the implementation, getting buy-in from the various units, etc. As a consequence, wages may rise, and employment may rise too if better output and lower prices expand client demand.

You can see the same logic in many high-stakes professions such as medicine and academia. There are now over 870 FDA-approved radiology AI tools, and 66% of doctors use at least one AI tool, mainly for note dictation and diagnostic support. But these tools are augmenting radiologists and physicians, not replacing them. AI typically handles the routine pattern recognition aspect of the job, freeing doctors to focus on complex cases, patient communication, and clinical judgment. Likewise, academics have been debating whether advances in AI make research assistants more or less valuable. As AI automates routine analytical tasks, both professors and RAs can concentrate more on ideas and judgment, thereby expanding output and demand for skilled research labor. This is yet again the O-ring focus effect in practice.

What do exposure indices capture?

Let us bring this back to the exposure framework. In the standard approach, a management consultant is highly “exposed” to AI whereas a truck driver is not. But does this mean that the consultant is at higher displacement risk than the truck driver? Not necessarily. The consultant’s high exposure may actually be good news: it means AI will augment many of their complementary tasks, triggering the focus effect and potentially raising wages. The truck driver’s moderate exposure on a single critical task is much more dangerous, because trucking companies have a strong incentive to automate the task of driving, and once that is done, the job is gone as well. As the autonomous trucking deployments above show, these incentives are already playing out in practice.

The relevant object therefore is not average task exposure, but the structure of bottlenecks and how automation reshapes worker time around them. Two jobs with identical exposure scores can have completely opposite displacement risks depending on whether their tasks are complements, whether demand for their output is elastic or inelastic, and the incentives of the firm to invest in automation. The workers at greatest risk are not necessarily those with the highest average exposure, but those whose jobs are built around a small number of core tasks that AI can automate.

[1] In the case where jobs are not fully automated, the cost savings from automating the marginal task will depend on the complementarities between the other tasks in the job. The exact relationship is worked out in the O-ring model of automation paper.


One Man a Hero. One Man a Monster.

Robert Mueller, 1944-2026, as a young Marine Corps officer preparing to be a platoon leader during the bloodiest year of combat in Vietnam, 1968. He was wounded in action, received a Purple Heart, and was awarded a Bronze Star with ‘V’ for valor among other decorations. It was the beginning of a long career in public service. (National Archives/Getty Images.)

Robert Mueller died today, at age 81. Decent people around the world mourn his passing. Americans who know about public service recognize him as a stellar example. If the phrase “with privilege, comes responsibility” can apply to any Americans of recent history, the list might start with him.

And today, the person who is now on his 24th golfing trip to Mar-a-Lago of his second term—at millions of dollars of public expense per trip, while the world reels from a war he started on a whim, while families he promised to help are struggling with medical expenses and gas prices and tariff increases and everything else—today that same person wrote publicly of Mueller:

Good, I’m glad that he is dead.

This is the most despicable public statement by an American public official in my lifetime.

It needs to be recognized as such.

Any head of state who can say this in public about a countryman, even about a political adversary, is a moral monster. Either he has no ability whatsoever to empathize with others; or he has no sense whatsoever of a leader’s duty; or he has no remaining cognitive ability whatsoever to “filter” what he says. Or all three.

If I thought Trump had ever heard of John Donne, I would remind him of “no man is an island.” If I thought he had ever been seriously in any place of worship, I would remind him that none teaches being “glad” at another person’s death. If I thought he had a soul, I would recommend that he attend to it.

Just while I’m at it, here is how Donne’s most famous passage ends:

Any man’s death diminishes me,

Because I am involved in mankind.

And therefore never send to know for whom the bell tolls;

It tolls for thee.

Whatever your political views, including about “the Mueller report,” respect Robert Mueller’s example of service. And stand up against Trump’s example of depravity.


‘I have been very lucky. I should spend time paying it back.’

Let us consider, briefly, former FBI director Robert Mueller—before, and apart from, his past two decades in the news.

—He grew up in privilege, son of a DuPont executive. For high school he went to the elite St. Paul’s boarding school in New Hampshire.

—At St. Paul’s he was a renowned athlete: Captain of three teams—soccer, hockey, and lacrosse—and winner of the school’s medal as outstanding overall athlete.

—He went to Princeton, where he played varsity lacrosse. A lacrosse teammate one year ahead of him was another notable athlete named David Spencer Hackett.

—At Princeton, Hackett had been in ROTC, and after graduation in 1965, in those early days of the Vietnam war, he was commissioned as a Marine Corps officer. Early in 1967, leading a platoon in Vietnam, he was killed in action. You can read about his life and death here.

—After his own graduation from Princeton in 1966, Mueller spent a year getting a master’s degree. And then he enlisted in the Marine Corps, in part because of his teammate Hackett’s death. As he said years later in an interview, with emphasis added:

I have been very lucky. I always felt I should spend some time paying it back. One of the reasons I went into the Marine Corps was because we lost a very good friend, a Marine in Vietnam, who was a year ahead of me at Princeton. There were a number of us who felt we should follow his example and at least go into the service. And it flows from there.

—In Vietnam he led combat platoons through the carnage of 1968; he was wounded; he received numerous decorations. Decades later he told my friend Garrett Graff that among his achievements, he was “most proud the Marine Corps deemed me worthy of leading other Marines.” Combat bravery is far from the only mark of civic courage. But Mueller displayed both kinds. You can think of examples a generation older than Mueller: the first George Bush, who was an 18-year-old combat aviator during World War II. William Webster, Mueller’s predecessor at the FBI, who served in the Navy both during World War II and the Korean war.

—Mueller left the military to go to law school. He spent the decades that followed mainly in public service, including 12 years as director of the FBI. He was appointed by a Republican president (GWB), and re-appointed by a Democrat (Obama).

And this is the kind of person the country’s current “leader” says he is “glad” has died.

I don’t know when I have ever felt more disgusted by an elected leader than right now.

I’ve disagreed with people, often. But this is beneath contempt.



What can any of us do?

A national leader who celebrates a prominent citizen’s death is not fit to lead.

But we know this already about the morally empty vessel who at this moment is lolling or dining in Florida, while others serve and suffer and die.

But what can anyone do?

—One week from today, the next “No Kings” mass protest will occur. The preceding one, last October, was the biggest one-day demonstration in the nation’s history. And that was before the ICE murders in Minnesota, the war-on-a-whim in Iran, the surge in gas prices, the “glad he’s dead” post. Next Saturday’s should be bigger. Find out more about it here.

—Call and write the White House and leave messages of outrage about this vile expression from a serving president. The address as always is 1600 Pennsylvania Avenue, Washington DC 20500. The main phone number is the same one I remember: 202-456-1414. They now have a comment line, 202-456-1111. Flood them with outrage.

—Call and write your Senators and Representative, especially if they are Republican. You can look them up on their websites. But the main Capitol switchboard number, as always, is 202-224-3121. They notice when people call and write.

—Insist that those who presume to hold the same positions Robert Mueller once did, notably including Kash Patel at the FBI, issue statements of sorrow and sympathy at Robert Mueller’s death, and apologies for their leader’s offensive message. Reporters: Make Kash Patel answer, “Are you also ‘glad’ that Robert Mueller has died?” Citizens: The FBI’s main phone number is 202-324-3000.

—Make every single Republican office holder, at every single press availability, answer the same question. Do they agree that the country should be “glad” to have lost a man like Robert Mueller? Don’t let them try to eel their way out, with evasions like Mike Johnson’s trademark “I haven’t seen that yet” or “no comment.” It’s a simple question: “The president says he is glad Mr. Mueller is dead. Do you agree?”

They stand up to this moral monster now. Or they stand with him. It’s a bright line.

—Also for reporters: If I were running your news organization, I’d avoid honorifics like “Mr. President” or even “Sir” in association with this abysmal moral example. He has forfeited his right to all terms of respect. It’s a favor to call him even “Mr. Trump.”


My personal note.

I had no official or personal dealings with Robert Mueller during his long career. I never interviewed him or went to hear him testify.

Bizarrely, I shared physical space with him on two occasions, both from local life in DC. Once was in the waiting room for colonoscopies at a medical center several years ago. (Apparently we both were fine.) The other was at DCA airport’s then-notorious “Gate 35X,” which was like a rundown bus station for regional airline flights. I did not approach or speak with him either time.

These both were in periods when Mueller’s face was almost nightly on the TV news, and he could expect to be recognized. But he carried himself as just another citizen. Once he sat reading a newspaper. The other time, reading a hardback book. He did not look around to check whether people noticed him. He comported himself as a normal, decent man—aware of his good luck, and the responsibilities it placed on him.

Let us remember him as an exemplary American. And learn from him rather than the monster who now controls the airwaves.


Quoting Tim Schilling


If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole. [...]

For a reviewer, it’s demoralizing to communicate with a facade of a human.

This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.

Tim Schilling, Give Django your time and money, not your tokens

Tags: ai-ethics, open-source, generative-ai, ai, django, llms


A working email privacy template


Unrelated to what has been said above: we note that you use a work email account for this correspondence. We have no particular insight into your specific workplace situation, but want to caution that in general, such an arrangement means that your employer has both the opportunity and at times the obligation to partake of the contents of exchanges such as the one we are currently engaged in. We cannot guarantee the confidentiality of personal information sent to such accounts, and wish to inform you that your continued use of this account does not conform to the standards of privacy we seek to uphold. If you for whatever reason have second thoughts about this arrangement, we urge you to use a personal email account moving forward.

Sincerely,




The one science reform we can all agree on, but we're too cowardly to do

photo cred: my dad

If you ever want a good laugh, ask an academic to explain what they get paid to do, and who pays them to do it.

In STEM fields, it works like this: the university pays you to teach, but unless you’re at a liberal arts college, you don’t actually get promoted or recognized for your teaching. Instead, you get promoted and recognized for your research, which the university does not generally pay you for. You have to ask someone else to provide that part of your salary, and in the US, that someone else is usually the federal government. If you’re lucky—and these days, very lucky—you get a chunk of money to grow your bacteria or smash your electrons together or whatever, you write up your results for publication, and this is where the monkey business really begins.

In most disciplines, the next step is sending your paper to a peer-reviewed journal, where it gets evaluated by an editor and (if the editor sees some promise in it) a few reviewers. These people are academics just like you, and they generally do not get paid for their time. Editors maybe get a small stipend and a bit of professional cred, while reviewers get nothing but the warm fuzzies of doing “service to the field”, or the cold thrill of tanking other people’s papers.

If you’re lucky again, your paper gets accepted by the journal, which now owns the copyright to your work. They do not pay you for this! If anything, you pay them an “article processing charge” for the privilege of no longer owning the rights to your paper. This is considered a great honor.

The journals then paywall your work, sell the access back to you and your colleagues, and pocket the profit. Universities cover these subscriptions and fees by charging the government “indirect costs” on every grant—money that doesn’t go to the research itself, but to all the things that support the research, like keeping the lights on, cleaning the toilets, and accessing the journals that the researchers need to read.

Nothing about this system makes sense, which is why I think we should build a new one. In the meantime, though, we should also fix the old one. But that’s hard, for two reasons. First, many people are invested in things working exactly the way they do now, so every stupid idea has a constituency behind it. Second, our current administration seems to believe in policy by bloodletting: if something isn’t working, just slice it open at random. Thanks to these haphazard cuts and cancellations, we now have a system that is both dysfunctional and anemic.

I see a way to solve both problems at once. We can satisfy both the scientists and the scalpel-wielding politicians by ridding ourselves of the one constituency that should not exist. Of all the crazy parts of our crazy system, the craziest part is where taxpayers pay for the research, then pay private companies to publish it, and then pay again so scientists can read it. We may not agree on much, but we can all agree on this: it is time, finally and forever, to get rid of for-profit scientific publishers.

MOMMY, WHERE DO SCAMS COME FROM?

The writer G.K. Chesterton once said that before you knock anything down, you ought to know how it got there in the first place. So before we show for-profit publishers the pointy end of a pitchfork, we ought to know where they came from and why they persist.

It used to be a huge pain to produce a physical journal—someone had to operate the printing presses, lick the stamps, and mail the copies all over the world. Unsurprisingly, academics didn’t care much about doing those things. When government money started flowing into universities post-World War II and the number of articles exploded, private companies were like, “Hey, why don’t we take these journals off your hands—you keep doing the scientific stuff and we’ll handle all the boring stuff.” And the academics were like “Sounds good, we’re sure this won’t have any unforeseen consequences.”

Those companies knew they had a captive audience, so they bought up as many journals as they could. Journal articles aren’t interchangeable commodities like corn or soybeans—if your science supplier starts gouging you, you can’t just switch to a new one. Adding to this lock-in effect, publishing in “high-impact” journals became the key to success in science, which meant if you wanted to move up, your university had to pay up. So, even as the internet made it much cheaper to produce a journal, publishers made it much more expensive to subscribe to one.

Robert Maxwell, one of the architects of the for-profit scientific publishing scheme. When he later went into debt, he plundered hundreds of millions of pounds from his employees’ pension funds. You may be familiar with his daughter and lieutenant Ghislaine Maxwell, who went on to have a successful career in child trafficking. (source)

The people running this scam had no illusions about it, even if they hoped that other people did. Here’s how one CEO described it:

You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. [...] [and then] we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.

So here’s the report we can make to Mr. Chesterton: for-profit scientific publishers arose to solve the problem of producing physical journals. The internet mostly solved that problem. Now the publishers are the problem. These days, Springer Nature, Elsevier, Wiley, and the like are basically giant operations that proofread, format, and store PDFs. That’s not nothing, but it’s pretty close to nothing.

No one knows how much publishers make in return for providing these modest services, but we can guess. In 2017, the Association of Research Libraries surveyed its 123 member institutions and found they were paying a collective $1 billion in journal subscriptions every year. The ARL covers some of the biggest universities, but not nearly all of them, so let’s guess that number accounts for half of all university subscription spending. In 2023, the federal government estimated it paid nearly $380 million in article processing charges alone, and those are separate from subscriptions. So it wouldn’t be crazy if American universities were paying something like $2.5 billion to publishers every year, with the majority of that ultimately coming from taxpayers.

(By the way, the estimated profit margins for commercial scientific publishers are around 40%, which is higher than Microsoft’s.)

To put those costs in perspective: if the federal government cut out the publishers, it would probably save more money every year than it has “saved” in its recent attempts to cut off scientific funding to universities. It’s unclear how much money will ultimately be clawed back, as grants continue to get frozen, unfrozen, litigated, and negotiated. But right now, it seems like ~$1.4 billion in promised science funding is simply not going to be paid out. We could save more than that every year if we just stopped writing checks to John Wiley & Sons.

PUNK ROCK SCIENCE

How can such a scam continue to exist? In large part, it’s because of a computer hacker from Kazakhstan.

The political scientist James C. Scott once wrote that many systems only “work” because people disobey them. For instance, the Soviet Union attempted to impose agricultural regulations so strict that people would have starved if they followed the letter of the law. Instead, citizens grew and traded food in secret. This made it look like the regulations were successful, when in fact they were a sham.1

Something similar is happening right now in science, except Russia is on the opposite side of the story this time. In the early 2010s, a Kazakhstani computer programmer named Alexandra Elbakyan started downloading articles en masse and posting them publicly on a website called SciHub. The publishers sued her, so she’s hiding out in Russia, which protects her from extradition. As you can see in the map below, millions of people now use SciHub to access scientific articles, including lots of people who seem to work at universities:

This data is ten years old, so I would expect these numbers to be higher today. (source)

Why would researchers resort to piracy when they have legitimate access themselves? Maybe because journals’ interfaces are so clunky and annoying that it’s faster to go straight to SciHub. Or maybe it’s because those researchers don’t actually have access. Universities are always trying to save money by canceling journal subscriptions, so academics often have to rely on bootleg copies. Either way, SciHub seems to be our modern-day version of those Soviet secret gardens: for-profit publishing only “works” because people find ways to circumvent it.

Alexandra Elbakyan, “Pirate Queen of Science” (source)

In a punk rock kind of way, it’s kinda cool that so many American scientists can only do their work thanks to a database maintained by a Russia-backed fugitive. But it ought to be a huge embarrassment to the US government.2

Instead, for some reason, the government insists on siding with publishers against citizens. Sixteen years ago, the US had its own Elbakyan. His name was Aaron Swartz. He downloaded millions of paywalled journal articles using a connection at MIT, possibly intending to share them publicly. Government agents arrested him, charged him with wire fraud, and intended to fine him $1 million and imprison him for 35 years. Instead, he killed himself. He was 26.

Swartz in 2011, two years before his death (source)

THE FOREST FIRE IS OVERDUE

Scientists have tried to take on the middlemen themselves. They’ve founded open-access journals. They’ve published preprints. They’ve tried alternative ways of evaluating research. A few high-profile professors have publicly and dramatically sworn off all “luxury” outlets, and less-famous folks have followed suit: in 2012, over 10,000 researchers signed a pledge not to publish in any journals owned by Elsevier.

None of this has worked. The biggest for-profit publishers continue making more money year after year. “Diamond” open access journals—that is, publications that don’t charge authors or readers—only account for ~10% of all articles.3 Four years after that massive pledge, 38% of signers had broken their promise and published in an Elsevier journal.4

These efforts have fizzled because this isn’t a problem that can be solved by any individual, or even many individuals. Academia is so cutthroat that anyone who righteously gives up an advantage will be outcompeted by someone who has fewer scruples. What we have here is a collective action problem.

Fortunately, we have an organization that exists for the express purpose of solving collective action problems. It’s called the government. And as luck would have it, they’re also the one paying most of the bills!

So the solution here is straightforward: every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled.

The Biden administration tried to do this, but they did it in a stupid way. They mandated that NIH-funded research papers have to be “open access”, which sounds like a solution, but it’s actually a psyop. By replacing subscription fees with “article processing charges”, publishers can simply make authors pay for writing instead of making readers pay for reading. The companies can keep skimming money off the system, and best of all, they get to call the result “open access”.

These fees can be wild. When my PhD advisor and I published one of our papers together, the journal charged us an “open access” fee of $12,000. This arrangement is a tiny bit better than the alternative, because at least everybody can read our paper now, including people who aren’t affiliated with a university. But those fees still have to come from somewhere, and whether you charge writers or readers, you’re ultimately charging the same account—namely, the US government.5

The Trump administration somehow found a way to make a stupid policy even stupider. They sped up the timeline while also firing a bunch of NIH staffers—exactly the people who would make sure that government-sponsored publications are, in fact, publicly accessible. And you need someone to check on that, because researchers are notoriously bad about this kind of stuff. They’re already required to upload the results of clinical trials to a public database, but more than half the time they just...don’t.

To do this right, you cannot allow the rent-seekers to rebrand. You have to cut them out entirely. I don’t think this will fix everything that’s wrong with science; it will merely fix the wrongest thing. Nonprofit journals still charge fees, but at least the money goes to organizations that ostensibly care about science, rather than going to CEOs who make $17 million a year. And almost every journal, for-profit or not, uses the same failed system of peer review. The biggest benefit of shaking things up, then, would be allowing different approaches to have a chance at life, the same way an occasional forest fire clears away the dead wood, opens up the pinecones, and gives seedlings a shot at the sunlight.

Science philanthropies should adopt the same policy, and some of them already have. The Navigation Fund, which oversees billions of dollars in scientific funding, no longer bankrolls journal publications at all. Its director reports that the experiment has been a great success:

Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, such as failed attempts, abandoned inquiries, or untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up, as proxies like impact factors disappeared.

Sounds good to me!

CATCH THE TIGER

Fifteen years ago, the open science movement was all about abolishing for-profit journals—that’s what open science meant. It seemed like every speech would end with “ELSEVIER DELENDA EST”.

Now people barely bring it up at all.6 It’s like a tiger has escaped the zoo and it’s gulping down schoolchildren, but when people suggest zoo improvements, all the agenda items are like, “We should add another Dippin’ Dots kiosk”. If you bring up the loose tiger, everyone gets annoyed at you, like “Of course, no one likes the tiger”.

I think two things happened. First, we got cynical about cyberspace. In the 1990s and 2000s, we really thought the internet would solve most of our problems. When those problems persisted despite all of us getting broadband, we shifted to thinking that the internet was, in fact, causing the problems. And so it became cringe to think the internet could ever be a force for good. In 1995, for-profit publishers were going to be “the internet’s first victim”; in 2015, they were “the business the internet could not kill”.

Second, when the replication crisis hit in the early 2010s, the open science movement got a new villain—namely, naughty researchers. The fakers, the fraudsters, the over-claimers: those are the real bad boys of science. It’s no longer cool to hate international publishing conglomerates. Now it’s cool to hate your colleagues.

Both of these shifts were a shame. The internet utopians were right that the web would eliminate the need for journals, but they were wrong to think that would be enough. The replication police were right to call out scientific malfeasance, but they were wrong to forget our old foes. The for-profit publishers are just as bad as they ever were, and while the internet has made them more vulnerable than ever, now we know they won’t go unless they’re pushed.

If we want better science, we should catch the tiger. Not only because it’s bad for the tiger to be loose, but because it’s bad for us to look the other way. If you allow an outrageous scam to go unchecked, if you participate in it, normalize it—then what won’t you do? Why not also goose your stats a bit? Why not publish some junk research? Look around: no one cares!

There are so many problems with our current way of doing things, and most of those problems are complicated and difficult to solve. This one isn’t. Let’s heave this succubus off our scientific system and end this scam once and for all. After that, Dippin’ Dots all around.

Experimental History opposes the tiger and supports ice cream, in that order

1. Seeing Like a State, 203-204, 310

2. For anyone who is all-in on “America First”: may I also mention that three of the largest publishers—Springer Nature, Elsevier, and Taylor and Francis—are all British-owned. A curious choice of companies to subsidize!

3. Don’t get me started on this “diamond open access” designation. If it costs money to publish or to read, it’s not open access, period. “Oh, you’d like your car to come with a steering wheel and brakes? You’ll need our ‘diamond’ package.”

4. I assume this number is much higher now. At the time, Elsevier controlled 16% of the market, so most people could continue publishing in their usual journals without breaking their pledge. I started graduate school in 2016, and I never heard anyone mention avoiding Elsevier journals at all.

5. The NIH has announced vague plans to cap these charges, which is kind of like saying, “I’ll let you scam me, but just don’t go crazy about it.”

6. For example, the current strategic plan of the Center for Open Science doesn’t mention for-profit journals at all.
