
The Cult of Microsoft


Soundtrack: EL-P - Flyentology

At the core of Microsoft, a three-trillion-dollar hardware and software company, lies a kind of social poison — an ill-defined, cult-like pseudo-scientific concept called "The Growth Mindset" that drives company decision-making in everything from how products are sold to how your on-the-job performance is judged.

I am not speaking in hyperbole. Based on a review of over a hundred pages of internal documents and conversations with multiple current and former Microsoft employees, I have learned that Microsoft — at the direction of CEO Satya Nadella — has oriented its entire culture around the innocuous-sounding (but, as we’ll get to later, deeply troubling) Growth Mindset concept, and has taken extraordinary measures to institute it across the organization.

One's "growth mindset" determines one’s success in the organization. Broadly speaking, it includes attributes that we can all agree are good things. People with growth mindsets are willing to learn, accept responsibility, and strive to overcome adversity. Conversely, those considered to have a "fixed mindset" are framed as irresponsible, selfish, and quick to blame others. They believe that one’s aptitudes (like their skill in a particular thing, or their intelligence) are immutable and cannot be improved through hard work.

On the face of things, this sounds uncontroversial. The kind of nebulous pop-science that a CEO might pick up at a leadership seminar. But, from the conversations I’ve held and the internal documents I’ve read, it’s clear that the original (and shaky) scientific underpinnings of mindset theory have devolved into an uglier, nastier beast at Redmond. 

The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work. Nadella even launched his own Bible — Hit Refresh — in 2017, which he claims has "recommendations presented as algorithms from a principled, deliberative leader searching for improvement." 

I’ve used the terms “messianic,” “Bible,” and “divine” for a reason. This book — and the ideas within it — has taken on an almost religious significance within Microsoft, to the point where it’s actually weird. 

Like any messianic tale, the book is centered around the theme of redemption, with the subtitle mentioning a “quest to rediscover Microsoft’s soul.” Although presented and packaged like any bland business book that you’d find in an airport Hudson News and half-read on a red eye to nowhere, its religious framing extends to a separation of dark and enlightened ages. The dark age — Steve “Developers” Ballmer’s Microsoft, stagnant and missing winnable opportunities, like mobile — is contrasted against a brave, bright new era where a newly-assertive Redmond pushes frontiers in places like AI. 

Hit Refresh became a New York Times bestseller likely due to the fact that Microsoft employees were instructed (based on an internal presentation I’ve reviewed) to "facilitate book discussions with customers or partners" using talking points provided by the company around subjects like culture, trust, artificial intelligence, and mixed reality.

Side note: Hey, didn’t Microsoft lay off a bunch of people from its mixed reality team earlier this year?

Nadella, desperate to hit the bestseller list and frame himself as some kind of guru, attempted to weaponize tens of thousands of Microsoft employees as his personal propagandists, instructing them to do things like...

Use these questions to facilitate a book discussion with your customers or partners if they are interested in exploring the ideas around leadership, culture and technology in Hit Refresh...

Reflect on each of the three passages about lessons learned from cricket and discuss how they could apply in your current team. (pages 38-40)

"...compete vigorously and with passion in the face of uncertainty and intimidation" (page 38)

"...the importance of putting your team first, ahead of your personal statistics and recognition" (page 39)

"One brilliant character who does not put team first can destroy the entire team" (page 39)

Nadella's campaign was hugely successful, generating years of fawning press about him bringing a "growth mindset" to Microsoft and turning employees from "know-it-alls" into "learn-it-alls." Nadella is hailed as "embodying a growth mindset" and credited with claims that he "pushes people to think of themselves as students as part of how he changed things," the kind of thing that sounds really good but is difficult to quantify.

This is, it turns out, a continual problem with the Growth Mindset itself.


If you're wondering why I'm digging into this so deeply, it's because — and I hate to repeat myself — the Growth Mindset is at the very, very core of Microsoft's culture. It’s both a tool for propaganda and a religion. And it is, in my opinion, a flimsily-founded kind of grift-psychology, one that is deeply irresponsible to implement at scale.

In the late 1980s, American psychologist Carol Dweck started researching how mindsets — how a person perceives a challenge, or their own innate attributes — can influence outcomes in things like work and school. Over the coming decades, she further refined and defined her ideas, coining the terms “growth mindset” and “fixed mindset” in 2012, a mere two years before Nadella took over at Microsoft. These can be explained as follows: 

  • A "fixed" mindset, where one believes that intelligence and skills are innate, and cannot be significantly changed or improved upon.
    • To quote Microsoft's training materials, "A fixed mindset is an assumption that character, intelligence, and creative ability are static givens that can't be altered."
  • A "growth" mindset, where one believes that intelligence and abilities can be improved with enough effort.
    • To quote Microsoft's training materials, "A growth mindset is the belief that abilities and intelligence can be developed through perseverance and hard work."

Mindset theory itself is incredibly controversial for a number of reasons, chief among them that nobody can seem to reliably replicate the results of Dweck's academic work. For the most part, research into mindset theory has focused on children, with the idea that if we believe we can learn more, we can learn more, and that by simply thinking and trying harder, anything is possible.

One of the weird tropes of mindset theory is that praise for intelligence is bad. Dweck herself said in an interview in 2016 that it's better to tell a kid that they worked really hard or put in a lot of effort rather than telling them they're smart, to "teach them they can grow their skills in that way." 

Another is that you should say "not yet" instead of "no," as that teaches you that anything is possible, as Dweck believes that kids are "condition[ed] to show that they have talents and abilities all the time...[and that we should show them] that the road to their success is learning how to think through problems [and] bounce back from failures."

All of this is the kind of Vaynerchuckian flim-flam that you'd expect from a YouTube con artist rather than a professional psychologist, and one would think it'd be a bad idea to talk about it at all without scientific proof — let alone shape the corporate culture of a three-trillion-dollar business around it. 

The problem, however, is that things like "mindset theory" are often sold with little regard for whether they're true, peddling concepts that make the reader feel smart because they sort of make sense. After all, being open to the idea that we can do anything is good, right? Surely having a positive and open mind would lead to better outcomes, right?

Sort of, but not really. 

A study out of the University of Edinburgh from early 2017 found that mindset didn't really factor into a child's outcomes (emphasis mine).

Mindset theory states that children’s ability and school grades depend heavily on whether they believe basic ability is malleable and that praise for intelligence dramatically lowers cognitive performance. Here we test these predictions in 3 studies totalling 624 individually tested 10-12-year-olds.

Praise for intelligence failed to harm cognitive performance and children’s mindsets had no relationship to their IQ or school grades. Finally, believing ability to be malleable was not linked to improvement of grades across the year. We find no support for the idea that fixed beliefs about basic ability are harmful, or that implicit theories of intelligence play any significant role in development of cognitive ability, response to challenge, or educational attainment.

...Fixed beliefs about basic ability appear to be unrelated to ability, and we found no support for mindset-effects on cognitive ability, response to challenge, or educational progress

The problem, it seems, is that Dweck's work falls apart the second that Dweck isn't involved in the study itself.

In a September 2016 survey by Education Week's Research Center, 72% of teachers said the Growth Mindset wasn’t effective at fostering high standardized test scores. Another study (highlighted in this great article from Melinda Wenner Moyer), run by Case Western Reserve University psychologist Brooke MacNamara and Georgia Tech psychologist Alexander Burgoyne and published in the Psychological Bulletin, found that “the apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.” 

In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:

Dr. MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and half times as likely to report significant effects compared with studies in which authors had no financial incentives.

Wenner Moyer's piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and it focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people's data. Again, from Wenner Moyer: 

For you data geeks out there, I’ll note that this growth mindset controversy is a microcosm of a much broader controversy in the research world relating to meta-analysis best practices. Some researchers think that it’s best to lump data together and look for average effects, while others, like Dr. Tipton, don’t. “There's often a real focus on the effect of an intervention, as if there's only one effect for everyone,” she said. She argued to me that it’s better to try to figure out “what works for whom under what conditions.” Still, I’d argue there can be value to understanding average effects for interventions that might be broadly used on big, heterogeneous groups, too.

The problem, it seems, is that a "growth mindset" is hard to define, the methods of measuring someone's growth (or fixed) mindset are varied, and the effects of each form of implementation are also hard to evaluate or quantify. It’s also the case that, as Dweck’s theory has grown, it’s strayed away from the scientific fundamentals of falsifiability and testability. 

Case in point: In 2016, Carol Dweck introduced the concept of a “false growth mindset.” This is where someone outwardly professes a belief in mindset theory, but their internal monologue says something different. If you’re a social scientist trying to deflect from a growing corpus of evidence casting doubt on the efficacy of your life’s work, this is incredibly useful.

Someone accused of having a false growth mindset could argue, until they’re blue in the face, that they genuinely do believe all of this crap. And the accuser could retort: “Well, you would say that. You’ve got a false growth mindset.”

To quote Wenner Moyer, "we shouldn't pretend that growth mindset is a panacea." To quote George Carlin (speaking on another topic, although pertinent to this post): “It’s all bullshit, and it’s bad for you.”

In Satya Nadella's Hit Refresh, he says that "growth mindset" is how he describes Microsoft's emerging culture, and that "it's about every individual, every one of us having that attitude — that mindset — of being able to overcome any constraint, stand up to any challenge, making it possible for us to grow and, thereby, for the company to grow."

Nadella notes that when he became CEO of Microsoft, he "looked for opportunities to change [its] practices and behaviors to make the growth mindset vivid and real." He says that Minecraft, the game it acquired in 2014 for $2.5bn, "represented a growth mindset because it created new energy and engagement for people on [Microsoft's] mobile and cloud technologies." At one point in the book, he describes how an anonymous Microsoft manager came to him to share how much he loved the "new growth mindset" and "how much he wanted to see more of it," pointing out that he "knew these five people who don't have a growth mindset." Nadella adds that he believed the manager in question was "using growth mindset to find a new way to complain about others," and that was not what they had in mind.

The problem, however, is that this is the exact culture that Microsoft fosters — one where fixed mindsets are bad, growth mindsets are good, and the definition of both varies wildly depending on the scenario. 

One employee related to me that managers occasionally add that they "did not display a growth mindset" after meetings, with little explanation as to what that meant or why it was said. Another said that "[the growth mindset] can be an excuse for anything, like people would complain about obvious engineering issues, that the code is shit and needs reworking, or that our tooling was terrible to work with, and the response would be to ‘apply Growth Mindset’ and continue churning out features."

In essence, the growth mindset means whatever it has to mean at any given time, as evidenced by internal training materials that suggest that individual contributions are subordinate to "your contributions to the success of others," the kind of abusive management technique that exists to suppress worker wages and, for the most part, deprive workers of credit or compensation.

One post from Blind, an anonymous social network where you're required to have a company email to post, noted in 2016 that "[the Growth Mindset] is a way for leadership to frame up shitty things that everybody hates in a way that encourages us to be happy and just shut the fuck up," with another adding it was "KoolAid of the month."

In fact, the big theme of Microsoft's "Growth Mindset" appears to be "learn everything you can, say yes to everything, then give credit to somebody else." While this may in theory sound positive — a selflessness that benefits the greater whole — it inevitably, based on conversations with Microsoft employees, leads to managerial abuse. 

Managers, from the conversations I've had with Microsoft employees, are the archons of the Growth Mindset — the ones that declare you are displaying a fixed mindset for saying no to a task or a deadline, and frame "Growth Mindset" contributions as core to their success. Microsoft's Growth Mindset training materials continually reference "seeing feedback as more fair, specific and helpful," and "persisting in the face of setbacks," framing criticism as an opportunity to grow.

Again, this wouldn't be a problem if it wasn't so deeply embedded in Microsoft's culture. If you search for the term “Growth Mindset” on the Microsoft subreddit, you’ll find countless posts from people who have applied for jobs and internships asking for interview advice, and being told to demonstrate they have a growth mindset to the interviewer. Those who drink the Kool Aid in advance are, it seems, at an advantage. 

“The interview process works more as a personality test,” wrote one person. “You're more likely to be chosen if you have a growth mindset… You can be taught what the technologies are early on, but you can't be taught the way you behave and collaborate with others.”

Personality test? Sounds absolutely nothing like the Church of Scientology.

Moving on.

Microsoft boasts in its performance and development materials that it "[doesn’t] use performance ratings [as it goes] against [Microsoft's] growth mindset culture where anyone can learn, grow and change over time," meaning that there are no numerical evaluations of what a growth mindset is or how it might be successfully implemented.

There are many, many reasons this is problematic, but the biggest is that the growth mindset is directly used to judge your performance at Microsoft. Twice a year, Microsoft employees have a "Connect" with managers where they must answer a number of different questions about their current and future work at Microsoft, with sections titled things like "share how you applied a growth mindset," with prompts to "consider when you could have done something different," and how you might have applied what you learned to make a greater impact. Once it's filled out, your manager responds with comments, and then the document is finalized and published internally, though it's unclear who is able to see them.

In theory, they're supposed to be a semi-regular opportunity to reflect on your work and think about how you might do better. In practice? Not so much. The following was shared with me by a Microsoft employee.

First of all, everyone haaaaates filling those out. You need to include half-a-year worth of stuff you've done, which is very hard. A common advice is to run a diary where you note down what you did every single day so that you can write something in the Connect later. Moreover, it forces you into a singular voice. You cannot say "we" in a Connect, it's always "I". Anyone who worked in software (or I would suspect most jobs) will tell you that's idiotic. Almost everything is a team effort. Second, the stakes of those are way too high. It's not a secret that the primary way decisions about bonuses and promotions are done is by looking at this. So this is essentially your "I deserve a raise" form, you fill out one, max two of those a period and that's it.

Microsoft's "Connects" are extremely important to your future at the company, and failing to complete them satisfactorily can lead to direct repercussions at work. An employee told me the story of Feng Yuan, a high-level software engineer with decades at the company, beloved for his helpful internal emails about working with Microsoft's .NET platform, who was deemed "underperforming" because he "couldn't demonstrate high impact in his Connects."

He was fired for "low performance," despite the fact that he spent hours educating other employees, running training sessions, and likely saving the company millions in overhead by making people more efficient. One might even say that Yuan embodied the Growth Mindset, selflessly dedicating himself to educating others as a performance architect at the company. Feng's tenure ended with an internal email criticizing the Connect experience.

Feng, however, likely needed to be let go for other reasons. Another user on Blind related a story of Feng calling a junior engineer's code "pathetic" and "a waste of time," spending several minutes castigating the engineer until they cried, relating that they had heard other stories about him doing so in the past. This, clearly, was not a problem for Microsoft, but filling in his Connect was.

One last point: These “Connects” are high-stakes games, with the potential to win or lose depending on how compelling your story is and how many boxes it ticks. As a result, responses to each of the questions invariably take the form of a short essay. It’s not enough to write a couple of sentences, or a paragraph. You’ve really got to sell yourself, or demonstrate — with no margin for doubt — that you’re on board with the growth mindset mantra. This emphasis on long-form writing (whether accidental or intentional) inevitably disadvantages people who don’t natively speak English (or whatever language is used in their office), or who have conditions like dyslexia. 


The problem, it seems, is that Microsoft doesn't really care about the Growth Mindset at all, and is more concerned with stripping employees of their dignity and personality in favor of boosting their managers' goals. Some of Microsoft's "Connect" questions veer dangerously close to "attack therapy," where you are prompted to "share how you demonstrated a growth mindset by taking personal accountability for setbacks, asking for feedback, and applying learnings to have a greater impact."

Your career at Microsoft — a $3 trillion company — is largely defined by the whims of your managers and your ability to write essays of indeterminate length, based on your adherence to a vague, scientifically-questionable "mindset theory." You can (and will!) be fired for failing to express your "growth mindset" — a term as malleable as its alleged adherents — to managers who are also interpreting its meaning in realtime, likely for their own benefit.

This all feels so distinctly cult-y. Think about it. You have a High Prophet (Satya Nadella) with a holy book (Hit Refresh). You have an original sin (a fixed mindset) and a path to redemption (embracing the growth mindset). You have confessions. You have a statement of faith (or close enough) for new members to the church. You have a priestly class (managers) with the power to expel the insufficiently-devout (those with a sinful fixed mindset). Members of the cult are urged to apply its teachings to all facets of their working life, and to proselytize to outsiders.

As with any scripture, its textual meanings are open to interpretation, and can be read in ways that advantage or disadvantage a person. 

And, like any cult, it encourages the person to internalize their failures and externalize their successes. If your team didn’t hit a deadline, it isn’t because you’re over-worked and under-resourced. You did something wrong. Maybe you didn’t collaborate enough. Perhaps your communication wasn’t up to scratch. Even if those things are true, or if it was some other external factor that you have no control over, you can’t make that argument because that would demonstrate a fixed mindset. And that would make you a sinner.  

Yet there's another dirty little secret behind Microsoft's Connects.

Microsoft is actively training its employees to generate their responses to Connects using Copilot, its generative AI. When I say "actively training," I mean that there is an entire document — "Copilot for Microsoft 365 Performance and Development Guidance" — that explains, in detail, how an employee (or manager) can use Copilot to generate the responses for their Connects. While there are guidelines about how managers can't use Copilot to "infer impact" or "make an impact determination" for direct reports, they are allowed to "reference the role library and understand the expectations for a direct report based on their role profile."

Side Note: What I can't speak to here is how common it actually is to use Copilot to fill in a Connect or summarize someone else's. However, the documents I have reviewed, as I'll explain, explicitly instruct Microsoft employees and managers on how to do so, and frame doing so positively.

In essence, a manager can't use Copilot to say how good you were at your job, but they can use it to see whether you're meeting the expectations of your role. Employees are instructed to use Copilot to "collect and summarize evidence of accomplishments" from internal Microsoft sources, and to "ensure [their] inputs align to Microsoft's Performance & Development philosophy."

In another slide from an internal Microsoft presentation, Microsoft directly instructs employees how to prompt Copilot to help them write a self-assessment for their performance review, to "reflect on the past," to "create new core priorities," and find "ideas for accomplishments." The document also names those who "share their Copilot learnings with other Microsoft employees" as "Copilot storytellers," and points them to the approved Performance and Development prompts from the company.

At this point, things become a little insane.

In one slide, titled "Copilot prompts for Connect: Ideas for accomplishments," Microsoft employees are given a prompt to write a self-assessment for their performance review based on their role at Microsoft. It then generates 20 "ideas for success measurements" to include in their performance review. It's unclear if these are sourced from anywhere, or if they're randomly generated. When a source ran the query multiple times, it hallucinated wildly different statistics for the same metrics. 

Microsoft's guidance suggests that these are meant to be "generic ideas on metrics" which a user should "modify to reflect their own accomplishments," but you only have to ask it to draft your own achievements to have these numbers — again, generated using the same models as ChatGPT — customized to your own work.

While Copilot warns you that "AI-generated content may be incorrect," it's reasonable to imagine that somebody might use its outputs — either the "ideas" or the responses — as the substance of their Connect/performance review. I have also confirmed that when asked to help draft responses based on things that you've achieved since your last Connect, Copilot will use your activity on internal Microsoft services like Outlook, Teams and your previous Connects.

Side note: How bad is this? Really bad. A source I talked to confirmed that personalized achievements are also prone to hallucinations. When asked to summarize one Microsoft employee’s achievements based on their emails, messages, and other internal documents from the last few quarters, Copilot spat out a series of bullet points with random metrics about their alleged contributions, some of which the employee didn’t even have a hand in, citing emails and documents that were either tangentially related or entirely unrelated to their “achievements,” including one that linked to an internal corporate guidance document that had nothing to do with the subject at hand.

On a second prompt, Copilot produced entirely different achievements, metrics and citations. To quote one employee, “Some wasn't relevant to me at ALL, like a deck someone else put together. Some were relevant to me but had nothing to do with the claim. It's all hallucination.”

To be extremely blunt: Microsoft is asking its employees to draft their performance reviews based on the outputs of generative AI models — the same ones underpinning ChatGPT — that are prone to hallucination. 

Microsoft is also — as I learned from an internal document I’ve reviewed — instructing managers to use it to summarize "their direct report's Connects, Perspectives and other feedback collected throughout the fiscal year as a basis to draft Rewards/promotion justifications in the Manage Rewards Tool (MRI)," which in plain English means "use a generative AI to read performance reviews that may or may not be written by generative AI, with the potential for hallucinations at every single step."

Microsoft's corporate culture is built on a joint subservience to abusive pseudoscience and the evaluations of hallucination-prone artificial intelligence. Working at Microsoft means implicitly accepting that you are being evaluated on your ability to adhere to the demands of an obtuse, ill-defined "culture," and the knowledge that whatever you say both must fit a format decided by a generative AI model so that it can be, in turn, read by the very same model to evaluate you.

While Microsoft will likely state that corporate policy prohibits using Copilot to "infer impact or make impact determination for direct reports" or "model reward outcomes," there is absolutely no way that instructing managers to summarize people's Connects — their performance reviews — as a means of providing reward/promotion justifications will end with anything other than an artificial intelligence deciding whether someone is hired or fired. 

Microsoft's culture isn't simply repugnant, it's actively dystopian and deeply abusive. Workers are evaluated based on their adherence to pseudo-science, their "achievements" — which may be written by generative AI — potentially evaluated by managers using generative AI. While they ostensibly do a "job" that they're "evaluated for" at Microsoft, their world is ultimately beholden to a series of essays about how well they are able to express their working lives through the lens of pseudoscience, and said expressions can be both generated by and read by machines.

I find this whole situation utterly disgusting. The Growth Mindset is a poorly-defined and unscientific concept that Microsoft has adopted as gospel, sold through Satya Nadella's book and reams of internal training material, and it's a disgraceful thing to build an entire company upon, let alone one as important as Microsoft.

Yet to actively encourage the company-wide dilution of performance reviews — and by extension the lives of Microsoft employees — by introducing generative AI is reprehensible. It shows that, at its core, Microsoft doesn't actually want to evaluate people's performance, but see how well it can hit the buttons that make managers and the Senior Leadership Team feel good, a masturbatory and specious culture built by a man — Satya Nadella — that doesn't know a fucking thing about the work being done at his company.

This is the inevitable future of large companies that have simply given up on managing their people, sacrificing their culture — and ultimately their businesses — to as much automation as is possible, to the point that the people themselves are judged based on the whims of managers that don't do the actual work and the machines that they've found to do what little is required of them. Google now claims that 25% of its code is written by AI, and I anticipate Microsoft isn't far behind.

Side note: This might be a little out of the scope of this newsletter, but the 25% stat is suspect at best.

First, even before generative AI was a thing, developers were using autocomplete to write code. There are a lot of patterns in writing software. Code has to meet a certain format to be valid. And so, the difference between an AI model creating a class declaration, or an IDE doing it is minimal. You’ve substituted one tool for another, but the outcome is the same.

Second, I’d question how much of this code is actually… you know… high-value stuff. Is Google using AI to build key parts of its software, or is it just writing comments and creating unit/integration tests? Based on my conversations with developers at other companies that have been strong-armed into using Copilot, I’m fairly confident this is the case.

Third, lines of code is an absolute dogshit metric. Developers aren’t judged by how many bytes they can shovel into a text editor, but how good — how readable, efficient, reliable, secure — their work is. To quote The Zen of Python, “Simple is better than complex… Sparse is better than dense.”
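To make that point concrete, here's a minimal Python sketch — my own hypothetical illustration, not code from Google or Microsoft — of two functions with identical behavior, where the version a naive line-count metric rewards is the worse of the pair:

```python
def total_even_verbose(numbers):
    # Padded, autocomplete-style output: many lines, no added value.
    result = 0
    for n in numbers:
        if n % 2 == 0:
            result = result + n
    return result


def total_even(numbers):
    # The idiomatic version: shorter, clearer, same behavior.
    return sum(n for n in numbers if n % 2 == 0)


# Both produce the same answer; the first just "writes more code."
assert total_even_verbose([1, 2, 3, 4]) == total_even([1, 2, 3, 4]) == 6
```

Measured in lines, the verbose version counts for three times as much "AI-written code" while delivering nothing extra — which is exactly why the metric is meaningless.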

This brings me on to my fourth, and last, point: How much of this code is actually solid from the moment it’s created, and how much has to get fixed by an actual human engineer? 

At some point, these ugly messes will collapse as it becomes clear that their entire infrastructure is built upon increasingly-automated layers of crap, rife with hallucinations and devoid of any human touch.

The Senior Leadership Team of Microsoft are a disgrace and incapable of any real leadership, and every single conversation I've had with Microsoft employees for this article speaks to a miserable, rotten culture where managers castigate those lacking the "growth mindset," a term that oftentimes means "this wasn't done fast enough, or you didn't give me enough credit."

Yet because the company keeps growing, things will stay the same.

At some point, this house of cards will collapse. It has to. When you have tens of thousands of people vaguely aspiring to meet the demands of a pseudoscientific concept, filling in performance reviews using AI that will ultimately be judged by AI, you are creating a non-culture — a company that elevates those who can adapt to the system rather than serve any particular customer.

It all turns my fucking stomach.

Read the whole story
tante
2 days ago
reply
"The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work."
Berlin/Germany
denubis
2 hours ago
reply
Share this story
Delete

the only message the channel can carry is a scream

1 Share

Rather than speculate in the absence of even a completed vote count, I thought I’d send out this edited excerpt from the penultimate chapter of “The Unaccountability Machine”, the point at which I start winding up digressions into 1970s management science and start explaining how this is going to turn into an answer to the big political questions asked at the start of the book.

[I’ve turned comments off on the website version of this post. If you want to make a comment to me I think the email works, but my experience is that in the aftermath of political shock events, people often get into online arguments which they wouldn’t otherwise have done. Look after yourselves and each other.]

[…] When Stafford Beer was making his initial presentation to President Allende, he drew his diagrams and built up the components of the Viable Systems Model, showing how basic operational systems were embedded in larger blocks for the purpose of coordinating the national economic plan. The plan was itself a bargain, struck between the systems responsible for optimisation of current production and those which looked out to the future. And, as we’ve seen, balancing those two systems and managing the development of the bargain between them was the responsibility of the highest-level function of the model. As Beer tells the story:

I drew the square on the piece of paper… [the President] threw himself back in his chair: ‘at last’, he said, ‘el pueblo’. This remark, as I have previously attested, had a profound effect on me.

The people. It’s hard to know to what extent this was a rhetorical flourish on the part of Salvador Allende, but one of the most underestimated techniques of political and social analysis is to look at people’s jokes and metaphors, and take them literally.

El pueblo. The system closes itself – or at least, it does in any non-totalitarian society – by the fact that the highest-level decision-making system operates by consent of the decided-upon. Even in a dictatorship, there is a collective veto capable of being exercised if the situation becomes intolerable to the individuals who make it up. Even the abstract, high-level, unthinkably complex decision making entities we’ve been thinking about – slow moving artificial intelligences like “global capitalism” – are embedded in an even slower, even more abstract collective decision-making system of the whole population. We just don’t notice its existence, for two reasons.

First, it hardly ever does anything. The purpose of a top-level system – System Five – is to be the last resort absorber of information. If the system is working correctly, all parameters are balanced and there is no excess variety to absorb. When things are going well, you don’t notice el pueblo as a collective decision-making entity.

And second, el pueblo doesn’t have a postal address, let alone a Telex link to the Presidential palace. In general, not much effort is expended on considering what kinds of communication channels should be maintained to allow the population to express its views to the government, let alone how they should change to keep up with events. It seems quite clear that different arrangements might have different characteristics – a proportional election system should be capable of carrying slightly more information than a first-past-the-post one, a monthly opinion poll has a shorter lag than an annual one, and so on. But this isn’t how they’re thought of; elections are simply horse races with executive power as the prize, and opinion polls are rarely used as more than a sort of racing form to predict the winners of the next race. The channels seem to be designed to carry very few bits of information.

The only kind of information that such a constrained channel can carry is a scream. Populist politics acts as a signalling system for a population which wants to convey a single bit of information; the message that translates as “HELP! THE CURRENT STATE OF AFFAIRS IS INTOLERABLE TO ME”.
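The channel-capacity point can be made with back-of-envelope arithmetic (my numbers, not the book's): the maximum information content of a single choice among n options is log₂(n) bits.

```python
import math


def bits(n_options: int) -> float:
    # Maximum Shannon information carried by one choice among n options.
    return math.log2(n_options)


print(bits(2))                  # a binary scream: 1.0 bit
print(bits(6))                  # one vote among six parties: ~2.58 bits
print(bits(math.factorial(6)))  # a full ranking of six parties: ~9.49 bits
```

Even the richest ballot carries only a handful of bits per voter per election, which is the constraint the passage is describing.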

[…]

One of the longest-running pieces of research in medicine is called the “Whitehall Studies”.  From 1967 to 1977, 17,500 British civil servants had their general health outcomes recorded; a second cohort (which, this time, included women) was recruited in 1985, and both groups have been followed up and re-examined over the years.

Michael Marmot, the lead researcher on the original study, made an extremely interesting discovery.  One of the strongest predictors of serious mortality outcomes – heart attacks, strokes, cancer – was the civil service grade that someone occupied.  And social status (as represented by the civil service grade) is itself highly correlated with unhealthy behaviours, in a way that doesn’t appear to be related to education, intelligence or any other variables that might be associated with self-control.

This “social gradient” seems to be there in data from other countries, too.  Marmot ended up concluding that the psychic feeling of being in control of your life is extremely important as a source of well-being, and that conversely, being out of control is physiologically harmful as well as emotionally intolerable.

At various points in this book, we’ve noted that you can tell when a cybernetic system is overloaded because it breaks down and becomes unregulated.  Marmot’s main conclusion from his research was that inequality in society was a major driver of public health risks, but this could be given a cybernetic interpretation instead.  The connection that he found looks like the result of a variety mismatch; people are, increasingly, unable to regulate the input from their immediate environment, and they correctly perceive this as a threat to health and life. 

And what’s true at one level of a system can be true of others.  The breakdown in the economic and political system reflects the same imbalance that causes the “deaths of despair”.  People are overloaded with information that they can’t process; the world requires more decisions from them than they’re capable of making, and the systems that are meant to shield them from that volatility have stopped doing the job.

And so the nature of the crisis is …

It’s not a crisis per se; it’s part of the way that the system achieves long-term stability. The world isn’t going to stop growing, so it will only get more complex. That means that systems have to be built that absorb the volatility and variety at the appropriate levels. The overall system is always looking for some organising principle of identity, which tells it what to ignore and how to balance the present and the future.  

For fifty years, the free market played that role; the underlying guarantee that all decisions would get taken care of, even if nobody made them. When that fell apart, the ultimate basis of the system – el pueblo, so to speak – sounded the alarm. Ever since then, we’ve been looking for a new organising principle.

I think this is what explains the common thread between MAGA, Brexit, Five-Star, Hindutva and all the rest. The populist movements of the 2010s all promised a simpler world; they were, in the words of JK Galbraith, taking on the great anxiety of their people and addressing it. They were also promising to restore the broken communication channels – to make voices heard, to force the managerial class to listen.

But the same analysis tells us that they’re fake solutions.  You can’t promise a simpler world – that’s equivalent to claiming to be able to reverse the direction of time. […]




Election conspiracy theorists force TTX cancellation

The New York Times published an article on November 1 about how election conspiracy theorists forced the cancellation of a tabletop exercise on critical infrastructure resilience.

Since 2021, a security conference in Atlanta had played host to a simple tabletop exercise in which attendees talked about how they would respond to fictional disasters like plane crashes or water treatment issues.

Sitting around a big table, participants from federal agencies or local departments devoted to emergency preparedness shared how their crews would react. Round and round they went, role-playing, sometimes for hours, as the scenario got more complex.

This year’s meeting was scheduled on Nov. 5 — Election Day — with the fictional scenario expected to focus on transportation, or possibly the chemical industry.

For conspiracy theorists who have fixated on falsehoods about widespread election fraud, though, the timing alone was enough to transform the event into something far more sinister. They spread claims that the conference was a secret meeting of top federal security experts in a bid to hack or steal this year’s presidential election — though it was neither of those things.

As news about the conference spread online, conspiracy theorists painted the event as cover for a “cyberattack” on election infrastructure or a fallback plan to somehow flip Georgia to Democrats should former President Donald J. Trump lead in early voting.

The Republican National Committee and Senator Rand Paul joined in on the criticism, issuing letters asking for more details about potential involvement by federal security agencies and implying that the agencies might have been distracted by the conference during a crucial election period. A spokeswoman for Mr. Paul shared a conference agenda showing the Department of Homeland Security was expected to participate. Organizers and the D.H.S. said that participation had not been confirmed.

The organizer, the Atlanta chapter of AFCEA International, a nonpartisan, nonprofit and nongovernmental group, fielded angry phone calls for days. On Oct. 24, it canceled the conference, blaming “a rapid and unanticipated rise in rhetoric and threats stemming from disinformation.”

Social media, of course, contributed to the disinformation.

[The organizer, Mr. Wertz,] shared with The New York Times some of the dozens of voice mail messages, emails and text messages he had received from angry Americans who believed the event was an effort to disrupt the election.

Some messages accused Mr. Wertz of “election interference” or of orchestrating a “threat to the nation to the point of being treasonous.” Others threatened organizers with lawsuits or criminal charges, claiming that they would have to one day “come to terms with our creator” for their supposed wrongdoing.

“Do you think we are stupid,” began another email that threatened to sue Mr. Wertz personally — for what, it was unclear. It was signed simply, “WE THE PEOPLE.”

Mr. Wertz said the angry emails and phone calls largely stemmed from posts on X, the social network helmed by Elon Musk that has become a hot spot for election misinformation.

Laura Loomer, a far-right influencer and Trump ally, posted on X about the conference to her more than one million followers on Oct. 19, falsely saying it was “ELECTION INTERFERENCE BY HOMELAND SECURITY ON ELECTION DAY.”

You should be able to access the full article here (gift link).

h/t Steven Sowards




I've had a change of heart regarding employee metrics

I know that if you go back far enough in these posts of mine, you will find some real crap in there. Sometimes that's because I had a position on something that turned out to not be very useful, or in some cases, actively harmful. This sucks, but that's life: you encounter new information and you are given the opportunity to change your mind, and then sometimes you actually do exactly that.

Recently, I realized that my position on something else has changed over time. It started when someone reached out to me a few weeks ago because they wanted me to get involved with them on some "employee metrics" product. It's some bullshit thing that has stuff like "work output" listing how many commits they've done, or comments, or whatever else. I guess they wanted me to shill for something of theirs, because from my posts, clearly I was such a fan of making that kind of tool, right?

I mean, sure, way back in 2004-2006, I was making all kinds of tools to show who was actually doing work and who was just sitting there doing nothing. I've written about a number of those tools, with their goofy names and the "hard truths" they would expose, showing who's a "slacker" and all of this.

When this company reached out, I did some introspection and decided that what I had done previously was the wrong thing to do, and I should not recommend it any more.

Why? It's surprisingly simple. It's the job of a manager to know what their reports are up to, and whether they're doing a good job of it, and are generally effective. If they can't do that, then they themselves are ineffective, and *that* is the sort of thing that is the responsibility of THEIR manager, and so on up the line. They shouldn't need me (or anyone else) to tell them about what's going on with their damn direct reports!

In theory, at least, that's how it's supposed to work. That's their job: actually managing people!

So, my new position on that sort of thing is: fuck them. Don't help them. Don't write tools like that, don't run tests to see if your teammates will take care of basic "service hygiene" issues, and definitely don't say anything substantive in a performance review. None of it will "move the needle" in the way you think it will, and it will only make life worse for you overall. "Peer reviews actually improve things" is about the biggest crock of shit that people in tech still believe in.

Once again, if management is too stupid to notice what's going on, they deserve every single thing that will happen when it finally "hits the fan".

Make them do their own damn jobs. You have enough stuff to do as it is.

...

I feel like giving this a second spin right here in case I failed to reach some of the audience with the first approach. Here's a purely selfish way of looking at things, for those who are so inclined.

Those tools I wrote 20 years ago didn't really indicate who was slacking at working tickets or whatever. What they *actually indicated* was that the management at Rackspace, by and large, had no clue what was going on right under their noses. And, hey, while that was true, that can be a dangerous thing to say! You want enemies? That's a great way to get them.

So, why expose yourself? Suppress the urge to point out who's slacking. It will only come back on you.


made up numbers are just pretend

Another week, another Friday – as in part one, this occasional series describes fixations, hobbyhorses and other bees in my bonnet. The usual warning applies – argue in the comments if you like, but these are not issues on which I’m susceptible to rationality or persuasion, they’re not opinions, they’re bees.

This particular buzzy little fellow was annoying me the other week because once more they awarded a Nobel Prize (in Economics, but still) at least partly for something I consider to be actually embarrassing.  In this case, the practice of carrying out econometrics using an “index” of “institutions” on the left hand side.

Guys, it’s just a number that somebody made up!  I mean, I suppose that as an accounting influencer, I have to say that nearly all statistical data is to some extent invented, that coding systems and collection practices are ideological rather than neutral and the creation of “objective” data is often a crucial tool of rhetoric.

But when you’re using the Freedom House Index Of Effective Property Rights[1], and asking “why is France a 3 and Germany a 4? If America is 2 and Cuba is 8 does that mean America is four times better?  Could somewhere be 3.5 and if not why not?”, then … all those questions are potential showstoppers from a statistical point of view, but more importantly these are just numbers that somebody made up!  At their desk, in the knowledge that they were going to publish them.  They might have had a set of criteria and a weighting scheme, they might even have been surprised at some of the conclusions, but fundamentally they, and their boss, knew how the rankings were going to have to look.

Why do people create these numerical indexes rather than just saying which systems they think are best?  Usually, because you can then do statistical analysis and demonstrate rigorously that certain desirable characteristics are correlated with economic growth and prosperity.  Can you really do that? Of course you bloody can’t.  You put the rabbit in the hat, then you took the rabbit out of the hat; the statistical analysis is just your original argument.
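The rabbit-in-the-hat problem can be sketched in a few lines of toy code (all figures invented): if the index scores were assigned by someone who already knew which countries were prosperous, the regression merely hands that assumption back as a "finding".

```python
# GDP per capita for four imaginary countries (invented figures).
gdp = {"A": 60_000, "B": 45_000, "C": 12_000, "D": 4_000}

# "Institutions" scores out of ten, made up at a desk by an analyst
# who already knows which countries are rich.
index = {"A": 9, "B": 8, "C": 4, "D": 2}


def pearson(xs, ys):
    # Plain Pearson correlation coefficient, no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


countries = sorted(gdp)
r = pearson([index[c] for c in countries], [gdp[c] for c in countries])
print(f"institutions 'explain' prosperity: r = {r:.2f}")  # near 1, by construction
```

The strong correlation was put there at the scoring desk; the statistics only retrieve it.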

Buzz, buzz, buzz.  At various points in my career, I used to be responsible for creating “Eurocrisis political risk indicators” and the like, basically because research reports look better with a chart on the front.  I was always happy to print them, because it was actually potentially useful to my clients (and to me, comparing back to previous times) to have a time series line that went up and down and roughly summarised my views of the situation.  If I was comparing complicated things like tax systems between countries, I’d much rather have a knowledgeable professional’s score out of ten than a list of marginal rates, for example.  But that’s all these things are; they’re made up numbers.

[1] It’s not called that, it’s not scaled like that and I don’t care.  Mention of FH rather than any other provider of indices shouldn’t be taken as endorsement or its opposite – it was just the one that sprung to mind.  In many ways, I have more respect for the more nakedly ideological and half-assed versions of these indicators than the ones which try too hard to be scientific.  As my dad used to love to say, “if a job’s not worth doing, it’s not worth doing properly”.




‘Scary’: Why US expats are tossing their citizenship – and it’s not just Trump

Becoming un-American is complicated and expensive – yet in recent times, a rising number have embraced the “ex” in “expatriate”.
