
A sociologist’s take on the Artemis moon round-trip


The project of modernity had one overarching value above all others: progress. Unlike ancient religious values, this one was realized and made manifest at remarkable speed. Whole categories of disease were completely removed from lived experience, magnificent buildings were constructed higher and faster than at any point in history, and the phrase “better living through chemistry” combined the widely disparate realms of marketing and prediction into one deliverable package. Progress was no mere idle speculation of useless dreamers – steely-eyed men in suits went to work every day to make it happen, one rationally calculated decision at a time. As projects go, modernity had legs.

The biggest achievement of modernity as a project was the moon landing. As a technical accomplishment, it was astounding – get humans safely to the moon, keep them alive on its surface for a while, and then get them back to tell the tale. This required new advances in rocketry, communications, metallurgy and computer science to pull off, all of which had to combine flawlessly into one single package for it to work. So many things had to progress so fast and so many things had to go just so, lest the whole endeavor become a very expensive and well-documented heap of scrap metal. One single mistake could spell disaster, yet through feats of heroic engineering and even more heroic resource management, it got done.

The technical accomplishment, as impressive as it is, is only the second biggest in the story of modernity. The biggest accomplishment is that a decision was made to go to the moon, and then the decision was made a reality. In the narrow sense of the space race, the US got to show off that it was better at performing the modern project than the Soviets. They got there firstest with the mostest, as it were. In the wider sense, the moon landing showed that not even the cold indifferent expanse of space could deter the indomitable spirit of humanity, once it set its mind to it. Modernity could progress anything, given time.

When sociologists say that modernity is characterized by a belief in eternal progress, this is the belief, and also the progress. Up until the space race, the belief in eternal progress remained steadfastly manifest as a social reality – just ask your grandparents about how things were back in the days, and compare it to now. No priests were required to explain what anyone could see with their own two eyes. Just take a trip to the grocery store and see for yourself.

Eternal progress is a powerful belief, and a powerful dream. It can mobilize millions (humans, dollars, you name the unit) in pursuit of goals that could not be attained otherwise. It’s what allows even the most capitalist of westerners to look at a Soviet propaganda poster of workers Doing It Together, and sense a tingle of belonging. The project of modernity did not divide east and west; the competition was about who could perform it better. The belief in eternal progress is a unifying force.

Until, that is, the progress slowed down, and showed itself to be unevenly distributed, and beholden to a narrow definition of who gets to make the decisions that define the modern project. The project of modernity had made tremendous strides forward, true, but even the slightest amount of scrutiny showed that the steely-eyed men in suits who went to work every day to make it happen were, in fact, all men. Moreover, they were men of a certain ethnicity, of a certain social background, with certain ideological commitments. The universal project of humanity’s eternal progress into the promised land of future technological utopias, turned out to be the domain of a very specialized subset of the very humanity it claimed to represent. For all mankind. Asterisk.

Or, as Gil Scott-Heron phrased it: I can’t pay no doctor’s bill while whitey’s on the moon.

As sociologists, we have to be careful here to point out that this is not a mere question of budget reallocation. As big an accomplishment as the moon landing was, it is but a fraction of the overall project. Leaving this one fraction undone would just mean the status quo without the moon landing. A more useful framework is to remember that modernity was a matter of deciding to do something, and then making that decision a reality. Or, rephrased: how come the powers that be never decided to make universal healthcare a solved problem, and then set to work realizing that decision? Why did this never become a priority?

The belief in the eternal progress of humanity as a universal project, encompassing every living human on the planet, can survive a great many things, including world wars and atom bombs. It cannot survive the inescapable conclusion that many of the problems that face us every day are not only solvable, but solvable within existing monetary and political frameworks without too much revolutionary heavy lifting, and that they are deliberately left unsolved by the active decision of a narrow subsection of a fraction of a percentage point of the powers that be. Humanity can solve any problem, go to any planet, achieve any technological marvel – but will not solve these specific problems that affect you and everyone you know, on the word of these specific individuals.

When sociologists talk of post-modernity, we emphasize the prefix, post. After modernity.

The progress stopped. So too did the belief, at least as a universal motivational force. It got replaced with austerity instead. The postmodern condition is one where the project of modernity ceased moving forward, and instead moved inward, becoming an attempt to preserve a status quo rather than a project to overcome it. We still live in the ruins of the modern project, some of which are still as empowered as they were in their glory days.

Two things can be gleaned from this state of affairs. The first thing is that those who insist that postmodernists (note the suffix -ists, rather than -y; certain persons who hold a specific ideological position, rather than a more generalized mode of being) have gone too far in relativizing the truth, often have a vested interest in keeping the powers that be in place. When such persons insist that there is too much identity politics afoot, what they really want is a return to the good old days, when steely-eyed men got things done, and no one else got invited; a return to when movies were in black and white, and so was everything else.

Postmodernity rests on the recognition that a commitment to a universal humanity means you have to include everyone, which means the institutions that were specifically built to channel the interests of not-everyone have to be reformed. Those aligned with said institutions are not prone to accept such reforms quietly, and instead speak fondly of ye olden days. Their words have to be read accordingly.

The second thing is that the oft-expressed sentiment that the Artemis round-trip represents a continuation of something that humanity used to do, but is no longer doing, reflects a genuine longing for the modern project. Not as it actually was, but as it claimed to be: for all mankind. No asterisk.

The postmodern project – one of them, at least – is to recapture the adamant optimism that humanity can in fact set its mind to solve problems, and then get to work solving them. We could go to the moon. We could solve universal healthcare. “We” could be a pronoun that includes everyone.

It is a beautiful dream. The Artemis mission has to be interpreted as an extension of it. We have the technology.




The Bromine Chokepoint: How Strife in the Middle East Could Halt Production of the World’s Memory Chips


The U.S.-Israeli war with Iran, now in an unstable ceasefire, has exposed a structural failure in the global semiconductor memory supply chain, and it is not the one analysts seem to be tracking. The story receiving attention is helium: Qatar’s Ras Laffan facility went offline, a 45-day inventory clock started running, and spot prices doubled within days. The story receiving almost no attention is bromine, and it is potentially the more dangerous one. Bromine is the raw material from which specialized chemical suppliers produce semiconductor-grade hydrogen bromide gas, the etch chemical that South Korean fabs use to carve the transistor…

The post The Bromine Chokepoint: How Strife in the Middle East Could Halt Production of the World’s Memory Chips appeared first on War on the Rocks.


Musings on Recursive Self-Improvement


When thinking about recursive self-improvement, there are two things to separate out: whether we’re talking about models (and scaffolds) improving more rapidly, or the wider societal/economic impacts of this recursive dynamic - i.e., stuff that depends mostly on deployments, adoption, and other bottlenecks. The lines are sometimes blurred in the discourse, and I want to make this separation clearer here. I also suspect many of these barriers are more structurally resistant to being dissolved by better capabilities than people often assume.

The models will get better

On the former: yes, we are already using AI to filter data, write better code, build experimental setups, and so on. These enhancements can make the human-led process of developing and researching models more efficient. As a result, I expect the time required to achieve a given increase in model capability to fall over time. Since we want good models, this is good news. But there are a few things worth noting here:

First, there are huge costs in doing all this. Even if much of the model development process is automated, frontier training still requires massive amounts of capital and compute. So far, labs have had deep reserves or investors willing to fund losses in expectation of future gains. But over any longer horizon, these costs still need to be justified economically, and real-world deployment remains one important way this happens. In a way, this is analogous to what Dario was telling Dwarkesh: “Even though a part of my brain wonders if it’s going to keep growing 10x, I can’t buy $1 trillion a year of compute in 2027. If I’m just off by a year at that rate of growth, or if the growth rate is 5x a year instead of 10x a year, then you go bankrupt.”

Second, when 95% of a process is automated, the remaining 5% can act as an important speed limiter: this can be taste, creativity, bureaucracy, anything that may require ‘human time’. Executives in large organisations are more cognizant of these than researchers who only see their immediate aperture. Now, economic history warns us against assuming new technology will simply replicate legacy pipelines perfectly. Just as the Indian pharmaceutical industry in the 1990s bypassed Western R&D bottlenecks by inventing vastly cheaper manufacturing processes, AI might circumvent current human bottlenecks by inventing entirely new ‘menus of production.’ But even the Indian pharma revolution took years of trial, regulatory navigation, and institutional adaptation before it reshaped the industry. Routing around bottlenecks is itself a deployment problem, not an overnight breakthrough.

Third, even if model improvement accelerates sharply, that alone is not sufficient. Aggregate capability gains only matter insofar as they can be identified, productized, and translated into real-world use. I think a lot of people continue to underestimate the importance of deployments across society, which matter for (a) justifying training costs; (b) generating useful data to improve models that customers want to continue using; (c) understanding the strengths and limitations of an existing generation of models in real-world settings; and (d) getting the transformative changes you want to see in the world.

Finally, there may also be diminishing returns within any given paradigm, as diminishing returns can be observed nearly everywhere else. It is unclear where and when these hit, but my intuition is that current systems are especially strong at exploiting paradigms that already exist, particularly in domains with dense, legible feedback where RL and synthetic data can productively extend the loop. That can still produce extremely fast progress. But I am less sure that these same systems smoothly generalize from paradigm-exploitation to paradigm-generation, by which I mean the creation of genuinely new abstractions and their productive reintegration into the training and research loop.

More narrowly, I do not think it is yet obvious that we have entered an actual ‘intelligence explosion’, as opposed to simply extending the AI-assisted development loop that was already underway over the past few years. Crucially though, the diminishing returns I’m describing are returns within the current paradigm. One could argue that RSI itself is the mechanism by which you escape this — that sufficiently capable systems identify entirely new abstractions and innovative architectural approaches. I expect a version of that in the coming decade, but initially still through a cyborgian dynamic where researchers leverage increasingly capable agents to crack problems that neither humans nor models would solve alone. In any event, the broader argument does not depend on where exactly that ceiling lies.

When people talk about recursive self-improvement, they sometimes acknowledge these frictions but then treat them as secondary, or assume that sufficiently capable systems can route around most of them via internal deployments and accelerated R&D. I think this is often overstated: these bottlenecks do not disappear just because model development speeds up. They are structural, not incidental, and they push strongly against the more explosive versions of the RSI story.

The inconvenience of deployment

On the deployment side, things get even more complicated. Deploying models into the world is not just a ‘nice to have’ thing that labs do out of charity. Labs have strong incentives to see these systems deployed, permitted, adopted, and integrated across the economy. Over time, this is one major way the scale of frontier spending gets justified. And in parallel, you need to go through the court cases, the regulatory burdens, the legal compliance, the weird adoption dynamics, the integration into legacy systems, the cultural adjustments, the political headwinds, everything! There are all sorts of reasons why deployment takes time and I think people are too quick to just wave these away with some handwavy remark about ‘competitive pressures’. This is less a point about narrow model self-improvement than about industrial diffusion: even if models improve quickly, the automation of the economy still has to run through deployment bottlenecks.

When people talk about recursive self-improvement and then talk about society being unrecognizably transformed at a very fast speed, they’re not talking about models developing, but essentially about the entire economy self-improving, where every physical and human constraint disappears. I think it’s uncontroversial to claim that getting to this point will take time. Even if you get much better robots in the coming years, which I expect will happen, getting humans completely out of the physical and digital economy loop is a pretty damn high bar. And even in such a world, you still do not get a ‘hard takeoff’, because so much still remains tethered to human time.

This points to a general issue in a lot of AI thinking: the concepts of consumption and demand are often muddled, and the focus is solely on the supply of capabilities. To make sense of this, we need to clearly separate economic demand (the rate at which human, and ultimately AI, consumers buy, adopt, and integrate products day-to-day) from final utility (the ultimate human purpose or directive that gives this economic activity a reason to exist).

For some time, I expect the economy’s ability to absorb, integrate, and productively deploy these systems to remain an important constraint, although not forever. Viewed through the macroeconomic lens of Say’s Law and capital deepening, it’s true that immediate consumer spending doesn’t necessarily have to be a hard speed limit per se. If AI triggers massive technological deflation, the economy could in principle equilibrate by reinvesting excess surplus into highly capital-intensive processes: essentially, machines building data centers and robots for other machines. This means an ‘Agent-to-Agent’ (A2A) economy can grow incredibly fast without waiting for humans to consume final products today.

Yet, even if this automated A2A loop takes hold someday, it remains fiercely tethered to final utility. Conditional on systems remaining broadly aligned and instruction-following (which is my current assumption), AIs will not be consuming for their ‘own’ sake: they do not possess intrinsic utility, and they do not build server farms for their own amusement. They are doing so purely as extensions of what a human principal somewhere in space and time ultimately desires. It’s also worth noting that this does not require perfect alignment: human economies have always operated with all sorts of principal-agent problems, and we manage these through institutional design, incentives, monitoring, and redundancy, not by solving them in the abstract or by relying on an ‘aligned vs misaligned’ dichotomy.

Imagine a human gives an AI system a top-level directive: invent and mass produce a cure for Alzheimer’s. An autonomous A2A supply chain spins up: Agent A (the R&D lab) realizes it needs 100x more compute. Agent A pays Agent B to build a massive new data center. Agent B pays Agent C to mine the silicon, copper, and steel required. Agent C pays Agent D to build a fusion reactor to power the mining equipment. In this scenario, 99.9% of the economic activity is A2A; trillions of dollars are moving, and massive physical infrastructure is being built. No human had to buy a final product, click an ad, or culturally adapt to keep this massive industrial boom running. Economically, this loop successfully bypasses the friction of human consumers.

But the initial “seed” of all this activity is still a human goal, and that is the tethered link. Because the A2A ouroboros is anchored to human purposes, it does not operate in a frictionless void. To deliver something like an Alzheimer’s cure, the relevant systems will often still need to interface with the human world: biological reality, legal and institutional processes, property and infrastructure constraints, and human judgments about acceptable risk. Some of these interfaces may become faster and more automated, but institutional adaptation is itself often contested and uneven (which is often a feature, not a bug).

So at some point, the bottleneck is no longer how fast humans can buy/consume things, but how fast AIs can deploy, verify, and physically build things in our highly frictional, human-regulated world. Reality bites: this ouroboros-shaped economy cannot spontaneously generate in a vacuum; it must navigate legacy infrastructure, power grids, API limits, and regulatory realities (yes, they will exist then too, for good reason). As long as AIs are instruction-following, there are no runaway scenarios. So while such a supply chain would be orders of magnitude more efficient than industry today, we shouldn’t confuse a future automated supply chain with a frictionless hard-takeoff-type singularity.

What to make of this

It’s worth noting that the very forces that push toward better model development and faster experimentation — the general purpose nature of the improvements that AI provides — also apply to safety, to control, monitoring, verification, robustness, and all sorts of other desirable things. It is in the interest of companies, and of whoever adopts and uses these agents, that the agents not reward-hack, not do weird things no one asked for, and not be vulnerable to serious attacks that threaten their consumer base.

So automating ML R&D should also accelerate many of the safety-relevant properties we care about, such as interpretability or getting more deterministic systems with better controls. This only looks implausible if you think of capabilities and alignment as almost entirely separate domains. I do not think that separation really holds. Many safety properties are deeply entangled with broader advances in model quality and engineering, even if that does not mean every failure mode is solved automatically. AI systems are engineered machines, and I expect some of the same forces accelerating capabilities to be brought to bear on alignment, control, and oversight as well. The case for using more intelligence to accelerate alignment work is at least as strong as the case for buying time to do that work manually.

And to be clear, as usual, that’s not to say everything will go perfectly well or that society is perfectly calibrated to handle new technologies optimally. Naturally, I expect all sorts of negative developments and externalities, though I expect many of these will get addressed if they become problematic enough; for example I do expect more cyber incidents in the short run but better adaptation over time (just as we did with spam or DDoS attacks). In general, it’s clear that you want a lot of resources devoted to safety and governance, which I think we do today (and will continue doing). And of course, in a world where you get incrementally faster deployments and societal developments taking shape, you also want governance to be benefiting from the technology. Think of the early days of the internet: you definitely want courts, regulators, and civil society to use the internet too, otherwise they wouldn’t do their job effectively at all. I think the same applies here, and improving governance and institutions remains one of the most important things to focus on in the next few years.

To conclude, the term ‘recursive self-improvement’ often conjures a science-fiction image of a blurry abstraction magically improving itself overnight and leading to some sort of hard take-off. The reality will be both more grounded and more profound. Because we are essentially ‘inventing the inventors’, we may well be heading toward a period of very high economic growth. Even so, I remain sceptical that this translates into a super-exponential takeoff in the wider economy within the current decade, even if model capabilities continue improving rapidly.

But rejecting an instantaneous ‘hard takeoff’ today doesn’t mean using AI to improve AI is no big deal. When this super-exponential flywheel eventually spins up, it won’t do so in a frictionless vacuum and will be tethered to the physical world, constrained by energy limits, robotic manufacturing speeds, and the messy reality of integrating software and robots across human institutions and societies. Unless you believe more intelligence magically bypasses all of this, or that it necessarily means power-seeking and deception, the future is less about an overnight singularity and more about navigating a massively accelerated, but ultimately jagged and physical, industrial revolution. Self-improvement itself will be uneven: a jagged frontier where breakthroughs in some domains coexist with stubborn stasis in others. We have a window of time to upgrade our institutions for what’s coming, and I think one of the most effective ways to do so is by deploying AI across governance and institutions themselves.

With thanks to Nathaniel Bechhofer, Rohin Shah, Samuel Albanie, Jamie Rumbelow, Ben Clifford, Tim Hwang, Harry Law, and Gustavs Zilgalvis for discussions and feedback.


Online courses, supply and demand, and academic integrity


What makes a college course popular or unpopular? I’ve long been interested in courses for non-science majors that satisfy “general education” requirements, their aim being to foster overall scientific literacy and to convey an understanding of topics that are important to society. I often teach such courses at the University of Oregon, for example a biophysics-for-non-scientists course and one on renewable energy. Last term I again taught The Physics of Energy and the Environment, a course for non-science-majors that I’ve written about before (for example, this).

Here’s the enrollment in Physics of Energy and the Environment for the past 15 years. (See Methods for how I constructed the plot.) The datapoints with the circles are the terms in which I taught the course.

You’ll notice that there are enormous fluctuations, with the number ranging from about 40 to 140. Last term had among the lowest numbers of students. I wondered why.

Here’s enrollment data for The Physics of Light and Color, usually a popular course. Last term was particularly low, less than 50 when it’s usually over 150.

Are there “general education” Physics courses with more students, and in which enrollment last term was high? Yes: Essentials of Physics. Note the scale, 300 students last term:

These were the three general education Physics courses offered in Winter 2026. Even before the term started, I was paying attention to the enrollment, tensely checking to see if my course would cross the 20-student threshold to avoid cancellation. Here’s the graph, starting a week after enrollment opened:

300, by the way, is the maximum allowed for Essentials of Physics. The ceilings for Energy and the Environment and Light and Color were 76 and 218 respectively, indicated by dashed lines above.

What if we look at all Physics general education courses for the past 15 years?

There’s a spaghetti of lines, but it’s clear that something is unusual in recent terms.

What sets the Essentials of Physics course apart? Why is it so popular? The content is “Physics 101” for non-science-majors, i.e. not a particular theme of social or humanistic interest.

While you’re formulating a guess, I’ll note that I’ve often heard great things about the Physics of Light, Color, and Vision course.

Though I’m biased, I’ll note that students also seem fond of Physics of Energy and the Environment. I’ve had enthusiastic students tell me, sometimes even years later, that they like the course. Plus, it has a lot of real-world relevance, and we like to think our students care about this.

From this past term’s student evaluations:

“The relevance of this course content can’t be overstated. This course clearly connects to real world examples and helps explain world phenomenons.”

and

“He [i.e. me] also is very good at including active learning in his lectures by making students think first before directly stating answers.” (The relevance of this will be clear in a moment.)

I’ve posted all the student evaluations here, so you can verify that I’m not cherry-picking a few cheerful kids from an otherwise angry mob.

I have yet to hear praise of Essentials of Physics, though I haven’t specifically investigated. (We don’t have access to other courses’ evaluations.)

Modalities and the Ethics of Instruction

As you’ve likely guessed, what’s different about Essentials of Physics in Winter 2026 (and Winter 2025) is that it’s an online, asynchronous course. This means that there’s no in-person interaction; lectures are recorded. Most importantly, students submit all work online. In principle there could be proctored in person exams at a testing center, but this option doesn’t exist for this course, or for most UO online courses. The other two courses, Light … and Energy and the Environment, like nearly all of our other Physics courses, are in person.

The University of Oregon is a residential university that makes a point of stressing in its public relations “live” interactions, student experiences, topical courses, etc. University of Oregon students, therefore, are presumably not enrolling from far away, nor enrolling with the aim of taking classes in their pajamas. The interactions enabled by actually having a room full of students, especially incorporating active learning methods that stimulate student engagement and allow a back-and-forth of questions and answers, are effective ways to enhance learning. Plus, they’re fun.

Apparently all this does not diminish the appeal (or temptation?) to students of online courses.

Obviously, one can’t think about online courses in 2026 without thinking about artificial intelligence. (This has been true since at least 2024, but in 2024 one could perhaps be unaware of AI without being professionally negligent.) Even in high-level undergraduate classes, there is nothing one can assign that can’t be answered perfectly by AI; in a general education course, perfect AI-delivered answers are trivial to obtain. One consequence we are all seeing is the evaporation of the correlation between homework scores and (in person) exam scores: the former are generally perfect, while the latter are increasingly bimodal, with a large fraction showing stunningly low levels of understanding.

The concern is not simply academic dishonesty, though addressing this is essential to avoiding the devaluation of higher education. Perhaps more sadly, we’re seeing students use AI as a crutch for their understanding. It’s easy to ask any modern LLM to answer and then explain a homework question, read that explanation, and think this is a substitute for thinking about the question and constructing the solution oneself. The student, then, bypasses the actual process of learning, and without meaningful assessments (like quizzes or exams), the students delude themselves about their skills.

Is the immediate filling of the 300-student Essentials of Physics really a consequence of it being online? As an additional datapoint, note the Physics Behind the Internet in the graph above. Having hovered between about 20 and 100 students, it surged to 150 two years ago, and 300 this term. What’s new about Physics Behind the Internet? Two years ago it became an online asynchronous course (ceiling 154 students in 2024, 300 now).

It is possible, I should add, to create a meaningful, rigorous asynchronous online course. As noted above, one can have human-proctored exams, though UO doesn’t have the capacity to do this for large courses. One can schedule online video chats for presentation and assessment (oral exams or quizzes); one of my colleagues in Biology does this — it is effective. This won’t scale to classes larger than 20 or maybe 30; certainly not 300.

This is not to say that online courses are inevitably pedagogical disasters. There are, as mentioned, ways to structure them well. (Doing so requires more work than an in person course, I think!) And, of course, there are motivated and self-aware students who will learn very well from such courses, as they would from other courses. However, for a 300-person general education course with no independent assessment or validation, there’s no way to take such courses seriously, or to be proud to offer them. We may as well just tell students to send a check in return for an “A”, and spare everyone 10 weeks of pretending. There would be considerable student demand for this, just as there is currently considerable demand for the online asynchronous courses.

At a faculty meeting, I asked our department to stop permitting online assessments, which would effectively stop our teaching online asynchronous courses. There was some agreement and some concern with details, but not enough enthusiasm to move forward. I lacked the energy to push the issue vigorously enough, especially because there’s a structural problem with “unilaterally” taking such a step:

The resources of a department, such as my Physics department, are tied to the number of students it teaches. (This connection doesn’t need to exist, but it’s understandable; even more than most public universities, the University of Oregon is dependent on student tuition, so an administrative insistence that departments carry their weight follows naturally.) My analysis above suggests that our online courses are siphoning students from our other general-education courses, so canceling the online offerings would send students to courses like Energy and the Environment, which I would argue would be an educational improvement. However, it would likely also send students to online courses in other departments. Should we hurt our own income, which helps us accomplish our many worthwhile goals, to uphold a general principle about educational validity? I’d argue yes, but I can see that this isn’t an obvious choice.

What we need to solve this dilemma is a university-wide policy about online education that is honest and forthright about what learning looks like in 2026, that considers actual teaching goals and student experiences, and that has teeth. So far, we lack such a policy. UO is not unique; this is a common problem.

On the plus side, my many conversations about AI and teaching with faculty at many institutions, and with students, show a universal agreement that online, un-proctored assessment is meaningless and that universities need to think clearly about what they’re doing. (Students, by the way, are some of the strongest voices against AI-enabled cheating and its facilitation by clueless professors and administrators.) At some point, this will have to translate into changes in how we run universities. The institutions that do this quickly and well may survive more easily than those that don’t.

Methods

Data on course enrollment over time at the University of Oregon isn’t readily available, at least for those of us without administrative superpowers. However, all our course schedules are available online, so it’s possible to get a web page for every course offered by a given department (like Physics) in a given term, and save it as an HTML file. Reading this by eye is easy; writing code to read the HTML is hard, since the table structure isn’t simple. This is a completely uninteresting programming task and is, therefore, ideal for current AI tools! (Without this, I would not have bothered with this analysis.) I therefore downloaded the HTML files, asked Claude (Sonnet 4.6) to convert all the HTML files to more comprehensible CSVs, and then asked it to write code to extract information from the CSVs. I then read the code, made a few changes, and ran it. This works well.
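To give a flavor of the HTML-to-CSV step, here is a minimal sketch using only the Python standard library. This is not the actual code Claude produced (the real UO schedule pages have a far messier table structure, which is the whole point); the HTML fragment, course names, and enrollment numbers below are hypothetical.

```python
# Minimal sketch: extract (course, enrolled) pairs from a schedule-style
# HTML table and emit a CSV. Assumes a simple <table>/<tr>/<td> layout.
import csv
import io
from html.parser import HTMLParser

class EnrollmentParser(HTMLParser):
    """Collect the text of <td> cells, grouped into rows by <tr>."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
        elif tag == "tr" and self.cells:
            self.rows.append(tuple(self.cells))
            self.cells = []

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

def html_to_csv(html_text):
    """Parse the schedule table and return CSV text with a header row."""
    parser = EnrollmentParser()
    parser.feed(html_text)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["course", "enrolled"])
    writer.writerows(parser.rows)
    return buf.getvalue()

# Hypothetical schedule fragment (real pages are not this clean)
page = """
<table>
  <tr><td>PHYS 161 Physics of Energy and Environment</td><td>42</td></tr>
  <tr><td>PHYS 152 Physics of Light and Color</td><td>48</td></tr>
</table>
"""
print(html_to_csv(page))  # prints a header line plus one CSV row per course
```

The resulting per-term CSVs can then be concatenated and plotted; the fragile part in practice is the parsing above, which is exactly the step that was delegated to the AI tool.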

I don’t use AI to write prose, and I’m witnessing the disastrous results of students offloading learning to AI, but writing routine and boring code is an ideal task for modern artificial intelligence. There’s a lot to think about with all these developments.

Today’s illustration…

I painted a whale to use in a public talk I gave in January. My wife noted that I’ve had two whale paintings on the blog before, in 2013!

— Raghuveer Parthasarathy, April 12, 2025




Quoting Giles Turnbull


I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession.

Giles Turnbull, AI and the human voice

Tags: ai-ethics, writing, ai


Television interview - Sky NewsDay

KIERAN GILBERT, HOST: Prime Minister Anthony Albanese, thanks for your time. What's your reaction to news of a two-week ceasefire, including the reopening, albeit temporarily, of the Strait of Hormuz?