
Painted Tunnel

1 Share

comic, webcomic, last place comics, wile e coyote, wylie coyote, coyote, road runner, eldritch, old god, cthulu, portal, cult, worship

The post Painted Tunnel appeared first on Last Place Comics.

Read the whole story
denubis
1 hour ago
reply
Share this story
Delete

the cybernetic way to think about betting on horses


I promised this last week, so here it is: DD’s horse racing betting methodology. I’ve called it “the cybernetic way to think about betting” because, to be honest, thinking about betting is often a lot more fun, and less expensive, than actually doing so, a point I’ll return to later. But it’s also a fun way to think about some core cybernetic concepts, and so I’m taking my lead from an obscure Australian leftist grouplet someone once told me about, which managed to spread their message of Marxism-Leninism significantly further among the working class than many of their rivals simply because one party member was an excellent tipster and they gave him a column on the back page of their newspaper.

The basis of the system is an insight which I attribute to Ross Ashby, but which is pretty implicit in most of the mathematics of statistical prediction – that you can often use the recent history of a system to proxy for an unobservable current state.  In the case of horse racing, this is obviously talking about the form book. The mental step you need to make is that when you’re reading the form, you shouldn’t be treating it as a time series to find trends to extrapolate.  The form only matters because it’s informative about what you’re really interested in; the current state of fitness of the horse.

For me, that’s the most important thing to think about, because the First Principle of my system is that I only bet on mid-quality races. The most prestigious and biggest races of the season are much more of a lottery, because they are races which attract the very best horses, who will be trained toward them and so be at or close to the peak of their form. To me, picking the winner in a race like that seems like much harder work than the task of looking at an eight-horse race at an unprestigious course and being able to eliminate at least half the field because they are running half-fit. Conversely, the very lowest-quality races tend to have fields which consist mainly of absolute no-hopers, plus a few unknown quantities, so you end up eliminating ten out of twelve runners, trying to guess which out of the two remaining ones might be better based on no form, then looking up to see that everyone else worked it out before you and your selection is 5/2 odds on.

(Why would someone enter a horse in a race it has no chance of winning? Lots of reasons! Maybe the owner fancied a day out. Maybe it needed to get experience of the course. Maybe it hadn’t run for a while and the trainer didn’t want it to get rusty. Maybe it’s being warmed up for a better race somewhere else. But most importantly, this happens all the time to get the handicap down. It is very strongly forbidden for a jockey to slow-roll a horse in one race so that it carries less weight and has higher odds in a future competition. But there’s absolutely nothing stopping a trainer from taking a fast horse with no stamina and entering it in a series of three-mile races, watching it build up a hellish record for leading the field two miles out then coming in last, and finally putting a massive bet on it at 20/1 when they enter it at a one-and-a-half mile distance.)

Reading the form with this in mind makes the problem easier to address, and lets you use the other big tool of narrative probability.  By which I mean – every time you read the record of a horse’s last few trips out, you’re telling yourself little stories about the horse, saying things like “this has won a few times, including at this course, but the distance is a bit shorter than it’s used to, on the other hand it did all right last time out and that was against a better class of competition”.  You take the outcomes of the past few races, consider the course, distance and class of the races (in roughly that order of importance; the difference between a flat and hilly track matters much more than the difference between twelve and sixteen furlongs[1]).  And you try to fit them into a narrative about what the horse might be like at its best, and how close to peak condition it’s in now.

Given those little stories, does it make sense to say this horse could win today?  I tend to think of the answer to that question in terms of a five point scale:

  • It really doesn’t make sense at all to say it has a chance

  • It makes a bit of sense, it’s not absolutely crazy to say so

  • It makes as much sense as it does for any other horse in the race

  • It makes more sense to say it about this horse than any other

  • It makes no sense to say it about any other horse in the race

In my experience, in a ten-horse race, these statements about sense-making correspond roughly to odds-on, evens to threes, 100/30 to sevens, 8/1 to 20/1 and 25/1 or higher. In my view, there’s really no distinction between 25/1 and 100/1 except the extent to which the bookies have pushed out the odds trying to attract enough money to balance their market, but it’s also important to remember that in horse racing, things which don’t make any sense (particularly to someone who’s only spent a short while reading the form) happen quite often.

So I look for a discrepancy between my understanding and the posted odds.  Most of the time, no such discrepancy exists and the effort has been wasted.  When it does, it’s most likely to be a horse where the odds are 7/1 but you think it should be 5/1 [2].  That’s incredibly annoying, because it’s a solidly positive expected value, but you’re going to lose 80% of your bets.
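The arithmetic behind that annoying-but-profitable bet is easy to sketch. The snippet below is my own illustration, not part of the author’s system: fractional odds of n/1 imply a win probability of 1/(n+1), so a 7/1 price on a horse you privately make 5/1 has positive expected value even though it loses about five times out of six.

```python
from fractions import Fraction

def implied_prob(num, den=1):
    """Implied win probability of fractional odds num/den
    (ignoring the bookmaker's overround)."""
    return Fraction(den, num + den)

def expected_value(offered, true_prob):
    """Expected profit on a 1-unit stake at fractional odds offered/1,
    given your own estimate of the win probability."""
    return true_prob * offered - (1 - true_prob)

p = implied_prob(5)          # you think the fair price is 5/1 -> 1/6
ev = expected_value(7, p)    # but the board says 7/1
print(p, ev, 1 - p)          # 1/6 1/3 5/6: +EV, yet you lose ~83% of bets
```

(Nothing here lines up exactly with the five-point table above; as footnote 2 notes, that mismatch is deliberate on the author’s part.)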

Which comes back to the point I foreshadowed earlier: this is not an easy or fun way to make money. I tried to take it seriously for a while, a few years ago, keeping records and everything; I concluded that a) it was basically possible to reasonably consistently show a profit, and b) doing so involved more, and more tedious, work than I was used to from having a full-time job in the stock market. My conclusion is that a version of Keynes’ famous maxim applies: you cannot actually make money out of horse race gambling unless you are, to say the least, a bit funny about horses.

I can promise that there will be absolutely no more betting tips.


[1] A furlong is an eighth of a mile, for readers unfamiliar with the turf.

[2] Yes, I know that can’t be read off the table above – I’m intentionally making it impossible to actually use this as a system without further effort, to discourage anyone from losing their shirt and blaming it on me!




The Trigger Effect


I have vague memories of seeing episodes of James Burke’s CONNECTIONS show on television as a child, which must have meant it was in syndication for a long while as it was produced and aired by the BBC in 1978, and I would have seen it a decade or so later. But I remember it quite fondly, and I know from conversations with other people of my own age that many others do as well.

The premise of the show is that history, especially the history of technology, can be thought of as a series of unusual “connections,” where something that happens in one place or time leads to something else happening in another place or time.

It’s a very fun premise for a show. The essence of its model of history — that everything is connected through time, that history moves in unexpected directions — is better than a lot of alternatives out there, even if it is incomplete in many ways. But I appreciate its willingness to try and cover a pretty wide spectrum of time and ideas. (My friend Latif Nasser did a riff on it in his own Netflix show, CONNECTED, in 2020. You can watch me sweating through the New Mexico heat in Episode 6, “Nukes.” And I just now realized that Burke has made something of an update to the series as recently as 2023.)

The first full episode of the series was called “The Trigger Effect,” and I was once assigned to watch this in a course on the history of technology when I was an undergraduate. I’ve since had many courses of students of my own watch it, because it’s pretty interesting. The basic premise is taken from a real event, one of the several times in which New York City suffered a catastrophic power loss in the 1970s. What happens, Burke asks, when the electricity goes out in a modern environment? And what happens when it doesn’t come back on again? And what would happen if it never came back on again? And what does this tell us about the nature of modern civilization and its dependency on technology?

Below is an edit I’ve made of the episode that is about 30 minutes long. I’ve trimmed out some bits at the beginning and end that plug the rest of the series, and a longish interlude that I don’t think is necessary for my purposes here. It’s worth the watch. My commentary on it will follow, but you should watch it first.

So there is an obvious cut that I made which trims out his discussion about the origins of agriculture and “civilization” (from the Fertile Crescent to Ancient Egypt). His ultimate argument thus evolves a bit into the idea that if you pull the rug out on electricity in particular, you end up having to go very far “back” technologically to “restart” the whole process.

Whether that’s true or not in a strict sense is, of course, debatable. But it’s still a fun idea — that what we call “civilization” is maintained by a technological infrastructure that is vast, often invisible to us, but perhaps more brittle than we’d like to believe.

Asking how brittle our modern technological civilization is amounts to asking how much resilience it has — how adaptable is our system to sudden shocks, failures, collapses, and so on. It was impressive to me to see how much COVID revealed about how brittle our logistical systems were, for example; market forces had created a system that was arguably a lot less resilient than it had been a few decades prior.

I get a lot of pleasure showing this to undergraduates. It’s provocative and always generates interesting discussions. I think the very of-its-time aspects of it make it easier for them to see it as something that can be engaged with and critiqued (as opposed to something with a more “modern” and “slick” aesthetic). Burke’s basic argument, that people living in technologically-based civilization constantly subject themselves to a “technology trap” — because our way of life is entirely dependent on the functioning of large, complex, and fallible technology systems — is often quite surprising, yet clearly true to some degree, to people who have not yet considered it.

One can take issue with some of his generalizations, like about what happens if the technology suddenly fails:

But what happens when the effects become widespread, irreversible, devastating? What happens when what little resources you have to help you cope… give up? Then what?

Well, in all the disaster scenarios you read, what happens is that without power, technologically-based civilization cracks up rapidly. Without enough auxiliary power, and most major cities don’t have it, organization is impossible. It’s every man for himself. Looting and arson follow. And in a city not prepared to be a fortress, supplies run out.

I think two words above are doing a lot of rhetorical “work.” The first is “disaster scenario,” by which I think he means something between “speculative predictions” and “speculative fiction” about how people would respond to this kind of disaster (whether we call them “predictions” or “fictions” is something to dwell on by itself). He’s not describing something that we have records of actually happening, because he has slipped in another condition to this scenario. And that condition is hidden in the other word doing a lot of work here: “irreversible.”

The chief example he gives in the clip, the New York City blackout, was not irreversible. Obviously. The whole premise falls apart if you take that word out of his scenario. Which gets us back to the question of resilience, again. If the power comes back on in a few hours, it’s mostly an irritation. If it never comes on, that’s what feels like some kind of apocalypse and “reset.”

And it’s worth pointing out explicitly that basing any broader theory of technology and civilization around the specific conditions of New York City in the 1970s is… an interesting choice. We don’t have to dwell on it, but it’s clear that New York in the ‘70s was a pretty specific context, with aspects both good and terrible.1 One similarly can’t really base conclusions on what people might do in the aftermath of a nuclear explosion entirely on the cases of Hiroshima and Nagasaki — those are very specific places, peoples, and times.

But putting that aside, the transition he makes from high-minded discussion of technological society to post-apocalyptic survival narrative is both smooth and effective:

So, sooner or later, exhausted and desperate, you may have to make the decision to give up and die. Or to make somebody else give up and die because they won’t accept you in their home voluntarily. And what, in your comfortable urban life, has ever prepared you for that decision?

Repositioning the narrative from the perspective of the viewer is a really great rhetorical move, here, especially in such a gripping (if well-worn) way. That he can then step back from this horror story and transition to a discussion about the development of agriculture in Mesopotamia is, well, kind of brilliant. It works.

What I like about this clip isn’t so much that I think it’s accurate. It rests on too many assumptions about human behavior, and too many arbitrary and potentially implausible assumptions about the “disaster scenario,” to make its point. And its model of the relationship between technology and society doesn’t quite give society as much credit as it really deserves; the semi-catastrophic results after the New York City blackout were arguably much more the result of social conditions than technological ones.

But the clip is still wonderfully provocative, and a really fun example of tying together what might appear to be very different domains of thought, like ancient history and social reactions to infrastructure failures, in the service of encouraging deeper thought about what it means to live in a civilization such as our own. Students tend to want to argue about it, and that’s sort of the point of showing it, right? But even I enjoy arguing with myself about it.

Doomsday Machines is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber, if you’re not already. If you live in a post-electricity zone, you will be forgiven for not subscribing. But if that’s the case, then how are you reading this? 🤔

1

For an indulgently-nostalgic and myth-perpetuating take on this, NY77: The Coolest Year in Hell (2007) is a really fun watch. I admit that as a transplant who has lived NYC-adjacent for the last decade, I have grown increasingly tired of the “remember when New York was dangerous and dirty yet cheap and full of life and culture?” nostalgia factory. I’ll hold my tongue on this, but I have thoughts!




Saturday Morning Breakfast Cereal - Scarcity



Click here to go see the bonus panel!

Hovertext:
Next he kicks the robot, and it moans in pleasure.


Today's News:

How Dating Apps Contribute to the Demographic Crisis


The dating apps are under a lot of pressure. Falling revenues, people not wanting to pay, and Gen Z seemingly uninterested have led to collapsing stocks and circling activist investors. Their only path forward is monetization - but what does that mean for the demographic crisis?

Kyla’s Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Dating apps are a puzzle. Like most apps, they solve a matchmaking problem1. DoorDash matches customers to food. Uber matches customers to drivers. Dating apps match the customer to love… right?

Sort of.

According to the Census Bureau, about 47% of the US population has never been married (about 117 million people), and almost 60 million people in the US2 (about 3 in 10 people) have used online dating services. That number skyrockets if we do a bit more age bucketing - almost 60% of people under 50 have used the apps.

Dating apps promise love. The entire premise is pretty straightforward (and if you don’t know what they are, some might call you lucky). Jadrian Wooden described the apps as the opportunity to “break free from geographic constraints and match with someone who shares your interests and preferences.” It’s algorithmic matchmaking.

Or more simply - “Hey, you might find someone who you would never have found using our special algorithm (which sometimes might work against you if we can make money on it) but at least promises a base set of occurrences of potential love. Thanks for PAYING!”.

It works for a lot of people!
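The “matchmaking problem” framing has a textbook formalization: the Gale-Shapley stable-matching algorithm mentioned in footnote 1. Here is a minimal sketch with hypothetical preference lists (this is the classical algorithm, not how any actual app ranks people):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Stable matching: proposers propose in preference order;
    reviewers hold on to the best offer seen so far."""
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]   # p's next-best reviewer
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])             # reviewer trades up
            engaged[r] = p
        else:
            free.append(p)                      # rejected; tries next choice
    return {p: r for r, p in engaged.items()}

# Hypothetical three-a-side example
proposers = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["x", "y", "z"]}
reviewers = {"x": ["b", "a", "c"], "y": ["a", "b", "c"], "z": ["a", "b", "c"]}
print(gale_shapley(proposers, reviewers))       # {'a': 'x', 'b': 'y', 'c': 'z'}
```

The resulting matching is stable: no pair would rather be with each other than with their assigned partners. Real apps, of course, optimize for engagement and revenue as well as match quality, which is part of the tension this piece explores.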

But the stocks look like this right now3 - and that could change how the apps operate in the future, with potentially unexpected consequences.

Bumble (blue line) charted against Match Group (green line) and Grindr (purple line).

Let’s start from the beginning.

The History of Dating Apps

I’ll keep the background on the apps brief. Dating apps began with Match.com back in 1995. There had been predecessors, but Match was taking advantage of the rapid rise of the World Wide Web and the shifting trend of people marrying later. Match was a byproduct of the online classifieds section - and people loved it.

The dating industry continued to grow, and Match decided rather than compete, they would buy. They purchased OkCupid, a top competitor, in 2012. This began the reign of what I like to call the Match.com Mafia. They were ruthless in acquisition and single-mindedly focused on monetizing love and love lost. Match grew but the stigma of online dating was still a problem.

But by the early 2010s, dating apps became the norm.  More and more people had smartphones, and the game of love became one that you could carry in your pocket. Grindr, a geosocial app for gay men, launched in 2009. The dawn of dating apps was upon us - and the big three were created.

  • Tinder: Launched in 2012 (and purchased by Match in 2017 for $3b), it was the first time that online dating was ‘cool’, scientifically speaking. Tinder made it fun - and that’s why more than half of people between 18 and 29 years old have used a dating app: because it’s fun.

  • Hinge: Also created in 2012 (purchased by Match in 2018), it was meant to be the ‘relationship-minded’ app, popular with Millennials and Gen Z. It is a mobile-only experience with a higher level of ‘intent’ - it’s not a pure hookup app like its competitor Tinder. It’s an app that’s designed to be deleted.

  • Bumble: It came along in 2014 - a novel app designed for women to swipe first. The company was founded by Whitney Wolfe Herd, formerly of Tinder, and quickly became popular. They went public in 2021. Match.com did not buy them because Bumble had Badoo money.

So in terms of market players, there is Match, Bumble, and Grindr. Match has a portfolio of 40+ companies, including Tinder, Hinge, Match, OkCupid, Plenty of Fish, Meetic, The League, and a bunch of other “Insert Name Here” People Meet type apps like Petpeoplemeet - with almost 15 million paying users across the world.

Match separated from their parent company, IAC, in 2020, and both Ryan Reynolds and Wendi Murdoch joined their board. They had a market cap of $30 billion then.

Now, their market cap is $9 billion. Match reported earnings on July 31st and posted slowing growth - Tinder is up 1%, and payers are down 5%. Hinge is still doing quite well - up 48% on the quarter (due to growth in users and the a la carte options, which we will discuss later).

But there is stagnation across the board, and their stock prices reflect that. Match now has three activist investors going after it - Elliott Management4, Anson Funds, and Starboard. Match needs to make money or perish.

Bumble is down over 90% since their IPO. On August 8th, Bumble shares fell 38% at open - the largest drop on record. Their earnings report showed that they were nervous about their future (much like their users), had missed revenue, and are totally rethinking their entire strategy.

This is the kicker: Match and Bumble reported slow revenue growth for their most recent quarters, at 4% and 3%, respectively. People no longer want to monetize their loneliness - but that’s the plan dating apps have for them.

I Give Up

Many people aren’t finding love - and seem to be giving up. The number of monthly active dating app users worldwide has dropped from 287 million people in 2020 to 237 million people in 2023, according to the Economist.

Part of the reason that people are ‘giving up’ is that people aren’t that interested in dating anymore, as shown in this graph from Pew Research. 50% of singles are tapped out of the dating market. According to Morning Consult, 79% of women are uninterested in using the apps in the future. This is unsurprising. The US is already an individualistic society, and being single is more affordable than ever (however, married people do better economically).

To be clear, the apps work for some people.5 According to Pew Research, 1 in 5 partnered adults under 30 met on a dating app. It’s a brilliant way to get outside of a social bubble, to meet people you might have never crossed paths with, and to get practice dating (and find love!) There is a reason that 10% of adults met their significant other on the apps (rising to 20% for under-30s) - and it’s because they can work.

Freemium For Love 🤨

However, the dating app business model is rapidly shifting, and as discussed earlier, it’s not working anymore - it is a mismatch between user and platform. The consumers and the company have competing goals. The dating apps want you to continue paying into perpetuity because your subscription (and subsequently, your inability to find love) is how they make money. Morgan Stanley, always business-minded, wrote this on the apps opportunity to monetize:

A focus on getting users who are already paying to increase their spending could be one tactic toward growth, as analysts believe the top 1% of dating spenders remain heavily undermonetized. Additionally, apps could target payers who can't afford monthly subscriptions or other premium features with more a la carte features or weekly subscriptions. Even the holdouts who prefer not to pay at all offer a large revenue opportunity via advertising. 

You are mere dollar signs to the company. But you want to find love!

As Planet Money described, Hinge has a “freemium” model. If you really want to play the game of love, you have to pay up for things like ‘Roses’ which are a way to signal to someone that you think the algorithm did a great job putting them in front of you. You can also pay for unlimited likes, see who likes you, set more preferences, boost visibility, etc. Gamify the game.

Almost 30% of Americans have paid for dating apps - spending almost $20 a month on a la carte purchases (the Rose, the Superlike) or subscriptions (Hinge+ is $100 for 6 months, allowing users to like an unlimited number of people, see who likes them, blah blah. The League, a fancy app, costs $400 for 3 months).

This also leads to the apps pooling people you’d likely match into a category they want you to pay for (Hinge Standouts, for example) which degrades the user experience.

And of course, it’s not all bad. Part of the appeal of paying is that it shows you’re more serious about dating. You’re paying for a premium service - just like we would pay for Lyft or Spotify, as CBS News highlights. Some see it as an investment. Some see it as time curation. Some see it as extractive.

The Convenience Contradiction

Dating is not easy. Every human being is infinitely complex, and algorithmic matchmaking can simplify the complexity for pairing purposes, but it is a large task to tackle!

Numerous articles have been penned on the problems with dating apps and the problems with hookup culture. ‘Tinder and the Dawn of the Dating Apocalypse’ was written almost ten (!) years ago about how Tinder was morphing modern dating into pure hookup culture.

All sorts of words tie into the various articles - “immediate gratification,” “choice paralysis,” “the millennial lifestyle subsidy,” and “zero-sum mindset” are just a few.

I think there is also something called ‘the convenience contradiction.’ We believe things should be far easier (and cheaper!) than they are. So when things become hard (or expensive!), it gets frustrating.

Dating apps, like most things that seem easy and cheap but in reality are hard and expensive, are a byproduct of the ZIRPy world of easy money and fast checks: the appification of society, fueled by a land of low interest rates and herd mentality.

The apps are struggling now because they have to prove profitability, and are experiencing ‘platform decay’. Turns out, it costs money to grow, free money isn’t free, etc. No one is grinning, laughing, and chanting “growth at any cost!” anymore. Now, they just ask about margins with a single tear running down their cheek.

Finally, there is a demographic shift hidden in all of this.

  • 41% of online dating users over 30 have paid for the app - but only 22% of under 30s have paid.

This is important. The youth (1) aren’t paying and (2) aren’t really dating - only 56% of Gen Z adults and 54% of Gen Z men have been in a romantic relationship during their teen years, compared to over 75% of Baby Boomers and Gen Xers, according to the Survey Center on American Life.

So what happens next?

Demographics

There is a convenience contradiction here too - where love feels like it should be easy. It is easier to give up when you feel like you’re failing at something that is simple.

  • Gen Z is not down with the apps - only 21% of college and graduate students use dating apps once a month, according to an Axios/Generation Lab survey. This age group is shrinking too - the 18-29-year-old age group has fallen from 53 million to 52.6 million.

But the data gets interesting here. A Guardian survey6 highlighted that 63% of men under 30 are single, compared to 34% of women7. There are all sorts of complications with this. As the Guardian points out, sure, the men aren’t dating… but also aren’t hanging out in general. According to a USA Today survey, only 21% of young men received emotional support from a friend within the past week, versus 41% of women.

So I think we have two problems.

  1. The foundation for relationshiphood (housing, childcare, beginner wages) is exorbitant. Insurance costs, especially auto and housing, are skyrocketing. This is what Talmon Joseph Smith calls ‘structural affordability’, and it’s challenging to form a solid foundation for a relationship if there is no underlying foundation. The average age to leave home in the 1990s was 23 - now, it is 26. The labor market dynamics have shifted to where it’s tough to get a job if you’re looking for one right now.

  2. We have an aging population. The fertility rate is 1.8, far below the replacement rate of 2.1. By 2040, 1 in every 5 Americans will be 65 or older - up from 1 in every 8 in 2000.

The US population is aging. We need either 1) more legal immigration or 2) more opportunities for partnership and babies.

Social security, Medicare, Medicaid, and eldercare are all going to become bigger issues. Eldercare costs an average of $5k a month! The sandwich generation - Gen Xers and elder Millennials - is dealing with childcare costs (up 32% since 2019) and aging parents. 43% of Baby Boomers have no retirement savings and will need help from an expanding workforce.

According to Morgan Stanley, the number of 65+ singles is forecast to expand from 26.3 million in 2021 to 34.4 million in 2030. Prime dating app targets - but what about the next generation?

Dating apps are a beacon of hope - but when a lack of love becomes monetizable… what does it mean for the demographic crisis?

The Economic Value of Loneliness

Some countries help newlyweds out with housing and childcare costs. It’s very tough to build a relationship if you’re not secure in a job, living with your parents, and can’t save money. It’s structural affordability all the way down.

But the other end of the extreme is things like the $190k-a-month dating app assistant. The Tinder Premium $500/month model. People who are looking for love and can pay are the perfect candidates.

And dating apps (and apps in general) have changed the way we interact with one another.

  • It is now possible to avoid the risk of asymmetric commitment entirely.

  • There is now seemingly endless optionality and an infinite pool of potential candidates.

But when we think about dating in light of the demographic crisis, we have to ask what these business models mean for our future. People are able to find love on them, that’s for certain.

But with the stocks cratering, Gen Z uninterested and smaller as a cohort, one can only imagine what the payment models and dark UI/UX will look like as they attempt to revive shareholder value.

There are two paths forward:

  1. Meeting people in real life. Running clubs have gained popularity because you get to gut-check someone’s choice of shoes and assess beyond an algorithm. The dating apps have invested in IRL experiences, so this is something that people are thinking about.

  2. Unmonetize the apps (lol). Part of it could be… federalism. Japan has a government-controlled dating app. Part of it could be an acknowledgment that we exist in the online space in a major way, and the digital world is now a place we go, versus an extension of reality. Who even knows what is going to happen with AI. Like this chart is a trend, and it probably won’t change anytime soon. But maybe we shouldn’t charge? It’s a big ask.

But we have a rapidly aging population and are facing a demographic crisis. The dating apps are not at all at fault for that - that is an issue that would take a few more newsletters to unpack.

To be very clear, this is not a “everyone must have babies” post. It’s highlighting the parallels between dating apps and the broader societal demographic problem.

But the monetization of love has consequences, and we have to give people avenues to meet one another (not just to address demographics, but for broader vibes too) - and literally, love is the only thing that can save us.


Disclaimer: This is not financial advice or a recommendation for any investment. The Content is for informational purposes only; you should not construe any such information or other material as legal, tax, investment, financial, or other advice.

1

Real kyla's newsletter fans will remember when I did a whole write-up on the Gale-Shapley algorithm and analyzed my scraped dating app data

2

The rub here is that these don’t necessarily have to be single people. For example, Grindr has 13 million users, and Hinge has roughly 20 million. Both apps cater to people with a wide variety of dating intentions (and some who might just be sneaking around)

3

This piece is very much focused on heterosexual dating themes. As you can see in the graph, Grindr is doing GREAT. They are having their own problems as an app, but the focus of this piece is more on the thematics created by Match and Hinge and Bumble.

4

Elliot Investment Management took a $1 billion stake in Match to push them to make more money, which is more of a going turbo mode situation than a single tear, but the metaphor stands. 

5

It did not work for me, despite my best efforts for about 4 years. I met my boyfriend on a group bike ride through a mutual friend (thanks Mitch).

6

The methodology of this survey was weird because people might define ‘partnered’ differently - so women might be like “yes we are dating!” and the guy might be like “that’s someone I see once a month” etc. Surveys!

7

The numbers are similar for people who identify as LGB - 62% of LGB men and 37% of LBG women


Why I don’t expect to be convinced by evidence that scientific reform is improving science (and why that is not a problem)

For more or less a decade there has been sufficient momentum in science not just to complain about things scientists do wrong, but to actually do something about it. When social psychologists declared a replication crisis in the ’60s and ’70s, nothing much changed (Lakens, 2023). They also complained about bad methodology, flexibility in the data analysis, and a lack of generalizability and applicability, but no concrete actions to improve things emerged from that crisis.

After the 2010 crisis in psychology, scientists did make changes to how they work. Some of these changes were principled, others less so. For example, badges were introduced for certain open science practices, and researchers implementing these practices would get a badge presented alongside their article. This was not a principled change, but a nudge to change behavior. There were also more principled changes. For example, if researchers say they make error-controlled claims at a 5% alpha level, they should actually make error-controlled claims at a 5% alpha level, and they should not engage in research practices that untransparently inflate the Type 1 error rate. The introduction of a practice such as preregistration had the goal of preventing untransparent inflation of Type 1 error rates, by making any possible inflation transparent. This is a principled change because it increases the coherence of research practices.

As these changes in practice became more widely adopted, a large group of researchers was confronted with requirements such as having to justify their sample size, indicate whether they deserved an open science badge, or make explicit that a claim was exploratory (i.e., not error controlled). As more people were confronted with these changes, the absolute number of people critical of them increased. A very reasonable question to ask as a scientist is ‘Why?’, and so people asked: ‘Why should I do this new thing?’

There are two ways to respond to the question of why scientific practices need to change. The first justification is ‘because science will improve’. This is an empirical justification: the world is currently in a certain observable state, and if we change things about our world, it will be in a different, but better, observable state. The second justification is ‘because it logically follows’. This is, not surprisingly, a logical argument: there is a certain way of working that is internally inconsistent, and there is a way of working that is consistent.

An empirical justification requires evidence. A logical justification requires agreement with a principle. If we want to justify preregistration empirically, we need to provide evidence that it improved science; if you want to disagree with the claim that preregistration is a good idea, you need to disagree with the evidence. If we want to justify preregistration logically, we need people to agree with the principle that researchers should be able to transparently evaluate how coherently their peers are acting (e.g., that peers are not saying they are making an error-controlled claim when in actuality they did not control their error rate).

Why evidence for better science is practically impossible.

Although it is always difficult to provide strong evidence for a claim, some things are more difficult to study than others. Providing evidence that a change in practice improves science is so difficult that it might be practically impossible. Paul Meehl, one of the first meta-scientists, developed the idea of cliometric meta-theory: the empirical investigation of which theories are doing better than others. He proposed following different theories for something like 50 years to see which one leads to greater scientific progress. If we want to provide evidence that a change in practice improves science, we need something similar. So the time scale alone makes the empirical study of what makes science ‘better’ difficult.

But we also need to collect evidence for a causal claim, which requires excluding confounders. A good start would be to randomly assign half of all scientists to preregister all their research for the next fifty years, and instruct the other half not to. Here lies the second difficulty: it is practically impossible to go beyond observational data, and observational data will always have confounds. But even if we could manipulate something, the assumption that the control condition is not affected by the manipulation is too likely to be violated. The people who preregister will – if they preregister well – have no flexibility in the data analysis, and their alpha levels are controlled. But the people in the control condition know about preregistration as well. After p-hacking their way to a p = 0.03 in Study 1, p = 0.02 in Study 2, and p = 0.06 (marginally significant) in Study 3, they will look at their set of studies and wonder whether peers will take it seriously. Probably not. So they develop new techniques to publish evidence for what they want to be true – for example, by performing large studies with unreliable measures and a tiny sprinkle of confounds, which consistently yield low p-values.
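The kind of Type 1 error inflation at stake here is easy to demonstrate. Below is a minimal simulation sketch (my illustration, not from the post; the batch size, number of looks, and seed are arbitrary choices) of a researcher who tests after every batch of observations and counts any significant look as a finding, compared with a single honest test at the final sample size:

```python
# Sketch: "peeking" at the data after every batch inflates the false positive
# rate above the nominal 5% alpha, even when the null hypothesis is true.
import math
import random

def p_value(sample):
    """Two-sided p-value for H0: mean = 0, via a one-sample z-test."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    # Normal CDF expressed through the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run(simulations=2000, batch=20, looks=5, alpha=0.05, seed=42):
    rng = random.Random(seed)
    single = peeking = 0
    for _ in range(simulations):
        data = []
        significant_at_any_look = False
        for _ in range(looks):
            data.extend(rng.gauss(0, 1) for _ in range(batch))
            if p_value(data) < alpha:
                significant_at_any_look = True  # claim a finding at this look
        if p_value(data) < alpha:               # one honest test at final n
            single += 1
        if significant_at_any_look:
            peeking += 1
    return single / simulations, peeking / simulations

single_rate, peek_rate = run()
print(f"single test: {single_rate:.3f}, test after every batch: {peek_rate:.3f}")
```

In runs of this sketch the single test stays near the nominal 5%, while the peeking strategy ends up well above it, even though every individual test is conducted "correctly" - which is exactly the untransparent inflation that preregistration is meant to expose.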

So after running several such studies for 50 years each, we end up with evidence that is not particularly difficult to poke holes in. We will have invested a huge amount of effort on something we should know from the outset will yield very little gain.

As we wrote in our recent paper “The benefits of preregistration and Registered Reports” (Lakens et al., 2024):

It is difficult to provide empirical support for the hypothesis that preregistration and Registered Reports will lead to studies of higher quality. To test such a hypothesis, scientists should be randomly assigned to a control condition where studies are not preregistered, a condition where researchers are instructed to preregister all their research, and a condition where researchers have to publish all their work as a Registered Report. We would then follow the success of theories examined in each of these three conditions in an approach Meehl (2004) calls cliometric metatheory by empirically examining which theories become ensconced, or sufficiently established that most scientists consider the theory as no longer in doubt. Because such a study is not feasible, causal claims about the effects of preregistration and Registered Reports on the quality of research are practically out of reach.

 

At this time, I do not believe there will ever be sufficiently conclusive empirical evidence for causal claims that a change in scientific practice makes science better. You might argue that my bar for evidence is too high – that conclusive empirical evidence in science is rarely possible, but that we can provide evidence from observational studies, perhaps by attempting to control for the most important confounds and measuring decent proxies of ‘better science’ on a shorter time scale. I think this work can be valuable, it might convince some people, and it might even lead to a sufficient evidence base to warrant policy change by some organizations. After all, policies need to be set anyway, and most policies in science are based on weak evidence, at best.

A little bit of logic is worth more than two centuries of cliometric metatheory.

 

Psychologists are empirically inclined creatures and, to their detriment, they often trust empirical data more than logical arguments. We published the nine studies on precognition by Daryl Bem because they followed standard empirical methods and yielded significant p-values, even when one of the reviewers pointed out that the paper should be rejected because it violated the laws of physics. Psychologists too often assign more weight to a p-value than to logical consistency.

And yet, a little bit of logic will often yield much greater returns, with much less effort. A logical justification of preregistration does not require empirical evidence. It just needs to point out that it is logically coherent to preregister. Logical propositions have premises and a conclusion: If X, then Y.

In meta-science, logical arguments are of the form ‘if we have the goal to generate knowledge following a certain philosophy of science, then we need to follow certain methodological procedures’. For example, if you think it is a fun idea to take Feyerabend seriously and believe that science progresses in a system that cannot be captured by any rules, then anything goes. Now let’s try a premise that is not as stupid as the one proposed by Feyerabend, and entertain the idea that some ways of doing science are better than others. For example, you might believe that scientists generate knowledge by making statistical claims (e.g., ‘we reject the presence of a correlation larger than r = 0.1’) that are not too often wrong. If this aligns with your philosophy of science, you might think the following proposition is valid: ‘If a scientist wants to generate knowledge by making statistical claims that are not too often wrong, then they need to control their statistical error rates’. This puts us in Mayo’s error-statistical philosophy. We can change the previous proposition, which was written at the level of the individual scientist, if we believe that science is not an individual process but a social one. A proposition more in line with a social epistemological perspective would be: “If the scientific community wants to generate knowledge by making statistical claims that are not too often wrong, then they need to have procedures in place to evaluate which claims were made by statistically controlling error rates”.

This in itself is not a sufficient argument for preregistration, because there are many procedures that we could rely on. For example, we can trust scientists. If they do not say anything about flexibly analyzing their data, we can trust that they did not flexibly analyze their data. You can also believe that science should not be based on trust. Instead, you might believe that scientists should be able to scrutinize claims by peers, and that they should not have to take their word for it: Nullius in Verba. If so, then science should be transparent. You do not need to agree with this, of course, just as you did not have to agree with the premise that the goal of science is to generate claims that are not too often wrong. If we include this premise, we get the following proposition: “If the scientific community wants to generate knowledge by making statistical claims that are not too often wrong, and if scientists should be able to scrutinize claims by peers, then they need to have procedures in place for peers to transparently evaluate which claims were made by statistically controlling error rates”.

Now we have a logical argument for preregistration as one change in the way scientists work, because it makes it more coherent. Preregistration is not the only possible change to make science coherent. For example, we could also test all hypotheses in the presence of the entire scientific community, for example by live-streaming and recording all research that is being done. This would also be a coherent improvement to how scientists work, but it would also be more cumbersome. The hope is that preregistration, when implemented well, is a more efficient change to make science more coherent.

 

Should logic or evidence be the basis of change in science?

 

Which of the two justifications for changes in scientific practice is more desirable? A benefit of evidence is that it can convince all rational individuals, as long as it is strong enough. But evidence can be challenged, especially when it is weak. This is an important feature of science, but when disagreements about the evidence base cannot be resolved, it quickly leads to ‘even the experts do not agree about what the data show’. A benefit of logic is likewise that it should convince rational individuals, as long as they agree with the premises. But not everyone will agree with the premises. Again, this is an important feature of science. It might be a personal preference, but I actually like disagreements about the premises of what the goals of science are. Where disagreements about evidence are temporarily acceptable but in the long run undesirable, disagreement about the goals of science is good for diversity in science. Or at least that is a premise I accept.

As I see it, the goal should not be to convince people to implement certain changes to scientific practice per se, but to get scientists to behave in a coherent manner, and to implement changes to their practice if those changes make their practice more coherent. Whether practices are coherent is unrelated to whether you believe practices are good or desirable. Those value judgments are part of your decision to accept or reject a premise. You might think it is undesirable that scientists make claims, as this will introduce all sorts of undesirable consequences, such as confirmation bias. Then you would choose a different philosophy of science. That is fine, as long as you then implement research practices that logically follow from your premises. Empirical research can guide you towards or away from accepting certain premises. For example, meta-scientists might describe facts that make you believe scientists are extremely trustworthy and that transparency is not needed. Meta-scientists might also point out ways in which research practices are not coherent with certain premises. For example, if we believe transparency is important, but most researchers selectively publish results, then we have identified an incoherency that we might need to educate people about, or we need to develop ways for researchers to resolve it (such as preprint servers that allow researchers to share all results with peers). And for some changes to science, such as the introduction of Open Science Badges, there might not be any logical justification (or if one exists, I have not seen it). For those changes, empirical justifications are the only possibility.

Conclusion

 

As changes to scientific practice become more institutionalized, it is only fair that researchers ask why these changes are needed. There are two possible justifications: One based on empirical evidence, and one on logically coherent procedures that follow from a premise. Psychologists might intuitively believe that empirical evidence is the better justification for a practice. I personally doubt it. I think logical arguments will often provide a stronger foundation, especially when scientific evidence is practically difficult to collect.
