
A bit more than a year ago, I was reminiscing about my interactions with the investor relations departments of companies around earnings season:
“The IR teams themselves had some hilariously pathological incentives … they were to a large extent judged on the objective performance measure of whether the share price spiked upward or downward on the results day.
But, of course, the results day move was largely determined by whether the lousy results were better or worse than expectations. Consequently, the investor relations department had the incentive to spend the entire rest of the quarter talking to analysts like me saying “it’s awful, it’s terrible, it’s so goddamn bad”, so that when the results were merely a bit lousy, we had to upgrade. I played this stupid game for the best part of a decade, would you believe, and even won a couple of stupid prizes for doing so.”
The game is still going on, according to Bryce Elder at Alphaville, and it appears to have got significantly more pathological in the last decade. The average “beat” of expectations is now often quite a bit smaller than the extent to which the forecasts got talked down in the first place.
I think this might be another one that we can blame on the business schools. Specifically, on the role of MBA finance classes which I wrote a bit about in “The Unaccountability Machine”, as incredibly efficient transmitters of ideology in the guise of objective science. It’s just that in this case, unlike the efficient markets hypothesis and the leveraged buyout boom, things have gone completely pathological, with a result that nobody at all wanted.
One of the first, and most important, things you learn in an introductory class in empirical finance is the “event study”. It’s basically a natural experiment on share prices. You pick the date of an event, download the price dataset, clean it up for stock splits, dividends and the like, and then do a test to see whether the price move on the date of your event is statistically significant and has the predicted sign. Group a bunch of them together and you can answer questions like “does the stock market like mergers and acquisitions?” or “do dividend changes matter?”. Take them one by one and you can, maybe, answer questions like “did the market respect that CEO, or did the price go up when he resigned?” or “was that product launch a success or a failure?”. The great advantage they have is that the actual statistics is really easy (it’s usually just a t-test), so you can drag MBA students through it even if they really signed up for the course because they wanted to do the “Leadership” modules.
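The mechanics really are that simple. Here is a minimal sketch of a single-stock event study using a basic market model; all the numbers are hypothetical, and a real study would clean for splits and dividends and pool a cross-section of events rather than testing one day.

```python
import numpy as np

def event_study_tstat(stock_returns, market_returns, event_index, window=2):
    """One-stock event study: fit a market model on the estimation window,
    then test whether the event-day abnormal return stands out.
    (Illustrative only -- real studies pool many events.)"""
    # Estimation window: everything before the event window
    est_stock = stock_returns[:event_index - window]
    est_mkt = market_returns[:event_index - window]
    # Market model via OLS: stock return = alpha + beta * market return
    beta, alpha = np.polyfit(est_mkt, est_stock, 1)
    residuals = est_stock - (alpha + beta * est_mkt)
    sigma = residuals.std(ddof=2)  # residual volatility in the estimation window
    # Abnormal return on the event day: actual minus model-predicted
    ar = stock_returns[event_index] - (alpha + beta * market_returns[event_index])
    return ar / sigma  # the t-statistic the MBA coursework asks for

# Hypothetical data: 250 trading days, with a +5% "results day pop" on day 240
rng = np.random.default_rng(0)
mkt = rng.normal(0.0003, 0.01, 250)
stock = 0.0001 + 1.2 * mkt + rng.normal(0, 0.008, 250)
stock[240] += 0.05
t = event_study_tstat(stock, mkt, event_index=240)
```

A t-statistic above roughly 2 is what the coursework would call a “statistically significant” market reaction, which is exactly the number that ends up standing in for “what does the market think?”.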
The pathology, I think, comes in with the second group of examples I suggested above. If you have to do a bunch of event studies (which you usually do have to do as part of your MBA coursework), then you get really used to identifying the question “what does the market think?” with the statistical significance of the excess abnormal returns during a short time window. Or, once you’ve left business school and started work again, the “results day pop”.
Financial theory has a very strong tendency to drive financial practice – Donald MacKenzie’s “An Engine, Not a Camera” is a fantastic sociological and historical study of the way that modern derivatives markets, their institutions and even the governing law were shaped by advances in modelling and theory. But I’ve argued in the past (1, 2, 3, 4) that misunderstood theory is often more influential than correctly understood theory. I think that the event study in empirical finance has become, unintentionally, “antiperformative” in MacKenzie’s sense with respect to earnings results – it has changed reality in a way that makes it no longer a useful measurement of anything.
I am working on new themes for a new book, which will presumably work their way into this stack. For the moment, I just want to post a little picture, which is shaping my thinking on a lot of policy issues and might, I hope, have a little bit of influence on yours. This version comes from a 1977 journal article about forecasting prison riots:
Gorgeous, isn’t it? (The thorn/shark fin at the bottom is a projection of the surface above, showing the region where the overlap happens). I made a joke about Stafford Beer’s diagrams in the book, but I actually love hand-drawn graphs in articles and hope we never lose the art.
This thing is called a “cusp catastrophe”. It was invented by René Thom but is mostly associated with Christopher Zeeman, the lead author of the article I’ve clipped it from. It’s trying to show how, quite often, the relationship between two control parameters and an outcome variable can be discontinuous.
In this case, the two control variables are “tension”, basically meaning how angry the prisoners are, and “alienation”, meaning how poor the prisoners’ ability to communicate with the authorities is, other than by rioting. The vertical axis is “disorder”, and a riot is defined as a sudden increase. The three-dimensional form is called a “catastrophe” because at some points it’s bifurcated - the solution just suddenly jumps from one region to another. Also note that once the jump has been made, it’s not easily reversible; the jump to the “riot” outcome also moves you to a different part of the control surface, and you have to reduce the tension parameter a lot more to get back to where you were in terms of disorder. And note that the location of the cusp for tension depends on the level of alienation.
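The geometry behind the picture can be sketched with the canonical cusp from catastrophe theory: the equilibria of the potential V(x) = x⁴/4 + a·x²/2 + b·x, i.e. the real roots of x³ + ax + b = 0. Mapping a onto (negative) alienation and b onto tension is my loose gloss on Zeeman’s labels, not his exact model, but it shows the jump and the hysteresis:

```python
import numpy as np

def equilibria(a, b):
    """Real equilibria of the canonical cusp potential
    V(x) = x**4/4 + a*x**2/2 + b*x: the real roots of x**3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

def inside_cusp(a, b):
    """True inside the 'shark fin' region projected at the bottom of the
    diagram: the cubic's discriminant -4*a**3 - 27*b**2 is positive there,
    so three equilibria coexist (this requires a < 0)."""
    return -4 * a**3 - 27 * b**2 > 0

# Hold the first control knob at a = -3, so the surface is folded.
# At b = 0 there are two stable states (low and high "disorder") plus an
# unstable middle one; push b past the fold and only one state survives,
# so the system jumps -- and it won't jump back at the same b (hysteresis).
three_states = equilibria(-3.0, 0.0)   # inside the fin: three equilibria
one_state = equilibria(-3.0, 3.0)      # past the fold: a single equilibrium
```

The “not easily reversible” point in the paragraph above is exactly this: the b at which the lower sheet disappears on the way up is not the b at which the upper sheet disappears on the way down.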
Taking a step back, note that many policy (and political) debates are carried out in a framework which implicitly assumes that the control surface is nice and smooth, not like the cusp catastrophe:
“Catastrophe theory” has a lot of problems, but it is a good metaphor, in my view - it’s expanding our inventory of mental models to take into account how things might happen that are otherwise hard to explain. Whenever I hear someone talking about “tradeoffs” in policy, this is the picture that flashes into my mind - you can see that at every point in this diagram, there are tradeoffs, but that doesn’t mean that every point is easily accessible by changing a parameter and it doesn’t mean that the outcome of nudging the big dial a little is going to be predictable.
I’ve been reading The Social History of the Machine Gun, which tells the story of the introduction and adoption of automatic weaponry to the battlefield.1 I’m not really a gun person, but I found it fascinating because it is a real-life story of how a new technology challenges the values and assumptions of people and institutions. The life-and-death stakes add weight to the resistance of key leaders to adapt to the implications of the new technology. It caused me to reflect on how AI is changing software development and gave me some practical ideas on how teams and people should be adapting to get the most out of the technology.
What is about to follow greatly simplifies large periods of military history, but I believe is a directionally correct description of John Ellis’s central argument.
Prior to the deployment of the Gatling Gun, the decisive charge was the center of most military operations. The goal of an army was to time their decisive charge to overwhelm their opponents, break their lines, and take the field. This is how Napoleon fought and not altogether different from how Julius Caesar fought.
As guns — muskets and cannons — were introduced to the battlefield, they were introduced in service of the decisive charge (again, radically simplifying). The purpose of lining up lots of men in well ordered lines and firing muskets was to concentrate enough firepower to soften up the enemy ahead for the bayonet charge to come. So central was the decisive charge to battlefield tactics that one late 19th century British Army Captain was quoted in the book as saying that “guns were not as a rule made for actual warfare, but for show.” 2
The machine gun, starting with the Gatling gun and continuing with the Maxim and Browning guns, changed everything. Different guns have different levels of performance, but one primary source in the book notes that an early machine gun allowed a single soldier to concentrate 40x as much firepower compared to existing methods. Furthermore, this firing speed was reliable; it was the same for new recruits as it was for highly drilled veterans.
Over the following fifty years, in fits and spurts, the ability to concentrate firepower begins to change warfare. At first, machine guns are primarily used in defensive contexts. There is ample evidence in colonial conflicts that charges are useless against them, even in (previously) overwhelming numbers. Then in the Russo-Japanese war, the Japanese pioneered the use of covering fire to execute offensive maneuvers.
Despite these examples, militaries around the world are reluctant to take the evidence in front of them to its logical conclusion and reorganize around the new weapon. As late as 1915, the British Army is placing heavy emphasis on bayonet training and telling its soldiers: “The bayonet… is the ultimate weapon in battle.” In Ellis’s view, it is the machine gun more than anything else that causes the First World War to turn into a war of attrition and it’s only after the war that a true reimagining of tactics begins.
So why were militaries so slow to adopt new technology when the stakes were so high? Ellis makes a persuasive argument that adoption of the machine guns and the tactics enabled by them was hindered by the values of military leaders and the institutions they maintained. One quote from the book in reaction to a demonstration of the Gatling gun: “But soldiers do not fancy it… it is so foreign to the old familiar action of battle — that sitting behind a steel blinder and turning a crank —that enthusiasm dies out; there is no play to the pulses; it does not seem like soldiers work.”
The new weaponry and the changes in tactics required conflicted with their sense of what it meant to be a good soldier. They couldn’t let go of orderly lines and courageous charges, even under pain of death.
I’m not a military historian, but I am a software creator. While reading this book, I’ve been thinking about AI in general and software development in particular. For at least the last 15 years (my entire career), the assumption has been that code is expensive to create and must be done with extreme care… and that isn’t the case anymore.
It’s easy from the perspective of 2025 to look back at the military elites of the 1890s, with their uniforms and funny facial hair, and laugh at how backwards they were. I struggled at times to fully believe the stories in the book. Who has such an emotional attachment to how a victory is won?
It’s harder to realize that these were accomplished, intelligent, competent men who had these habits drilled into them and who had literal victories to their names. The values that made them successful had become second nature to them and natures are hard to change.
So how can we learn from their experience?
If I took one thing away from this book it’s that our values bleed into our work. Timeless values like remaining disciplined under pressure are expressed in actions like marching in a straight line and we become attached to those actions rather than the values. When technology changes those actions, it feels viscerally wrong to us. I see a lot of this in the discussion around vibe coding. We should be prepared for this feeling and seek to be curious rather than judgmental. It’s never a bad time to reflect upon your essential values!
A second takeaway was the interaction between values, tactics, organizational design, and training. Unlocking the power of the machine gun required changes in:
Values (e.g., the understanding of what made a good soldier)
Tactics (e.g., machine guns are used differently than other weapons)
Organizational design (e.g., increasing the number of machine gunners in a unit)
Training (e.g., giving units time and resources to master the new technology)
To be effective, these changes had to happen together. This should make intuitive sense. Changing your tactics will be ineffective if you aren’t trained on the tools you’re using and you’ll never invest the time in training on something you don’t value.
At the margin, all of us probably experiment too little, but this is even more true now. Throughout the entire book, there was only one anecdote I can remember of a unit overestimating the capabilities of a machine gun and hundreds of people who underestimated it. Often there were pockets of experimentation from outsiders or units operating in atypical circumstances, like the previously mentioned British colonial and Japanese units. Central commands were quick to discount these experiences rather than seeking to understand them.
Taking my own advice, here’s a proposal for what the software team of the future looks like:
Using an agent, (virtually) everyone in the organization has the ability to code, proposing changes to the product. Sales, customer support, marketing operations and more are all attempting to improve the product.
This may even extend to people outside the formal organization — for instance, customers may be given the ability to propose product changes that first go live only on their account and then are adopted more broadly.
A smaller set of people are tasked with managing the scalability, design, and strategy of the product. They’re reviewing working prototypes and thinking about the second-order implications: a blend of executives and hands-in-the-code architects, designers, and PMs.
Experimentation with these prototypes becomes much, much more common. New ways of starting, assessing, and sunsetting experiments are needed.
All of this will be heavily mediated by AI agents that both improve the output of the “non-technical” team and give leverage to the keepers of product quality.
Despite heavy use of AI, attention to detail and the ability to get into the weeds to make something great will continue to be prized — if anything, it may become even more important.
All in all, it becomes more like a well-maintained and opinionated open source project than the standard “three-in-a-box” PM / Designer / Engineering lead.
Shoutout to Jordan Schneider, whose essential ChinaTalk podcast brought this to my attention ↩
Ellis does note that this was an extreme position, but the Captain in question was an advisor to Hiram Maxim, one of the early machine gun innovators. ↩
Mr. Brown was one of my all-time favorite teachers. He was a stocky African-American man with a big presence and a deep voice, who would loudly implore students to “T-H-I-N-K” after posing a difficult question. When one of us would approach him with some sort of complaint, he would reply, “Do I seem to be concerned?,” which meant you were equipped to handle this issue on your own. If he needed to send someone to the principal’s office — which, with a classroom of rambunctious middle schoolers, happened frequently — he would look at the culprit, point at the door, and say powerfully (but not always cheerfully), “Have a lovely day.” Mr. Brown did not suffer fools, but he ran the classroom with a theatrical sensibility that kept everyone interested and laughing.
He taught something called technology education, which as far as I could tell at the time was some sort of problem-solving class. It was project-based, and we often worked in small groups: Each team was given a design brief and a set of tools, then asked to build something. We competed to see who could conjure the best solution. I remember, for example, having to build a bridge with a certain span, and testing to see how much of a load it could hold. We had supervised access to a few machine tools for the more complicated projects.
The program gave me a sense of agency that I had never felt before. I could learn about a problem, game out possible solutions with my classmates, then craft a solution. I learned what to do when I didn’t know what to do: research, try stuff, ask for help. Sometimes we failed spectacularly — a balsa wood airplane I destroyed during a sanding pass still haunts me — but that was part of the process. I could try again. I absolutely loved how collaborative, creative, and open-ended it all was, and I continued with technology education and its associated student group, the Technology Student Association (TSA), until I graduated high school. Almost every day for seven years, I worked on a TSA project. We competed in events at conferences all around the country. It was the most formative educational experience of my youth.