
Quoting Claude's system prompt


If asked to write poetry, Claude avoids using hackneyed imagery or metaphors or predictable rhyming schemes.

Claude's system prompt, via Drew Breunig

Tags: drew-breunig, prompt-engineering, anthropic, claude, generative-ai, ai, llms


SQLite CREATE TABLE: The DEFAULT clause



If your SQLite CREATE TABLE statement includes a line like this:

CREATE TABLE alerts (
    -- ...
    alert_created_at text default current_timestamp
)

current_timestamp will be replaced with a UTC timestamp in the format 2025-05-08 22:19:33. You can also use current_time for HH:MM:SS and current_date for YYYY-MM-DD, again using UTC.

Posting this here because I hadn't previously noticed that this defaults to UTC, which is a useful detail. It's also a strong vote in favor of YYYY-MM-DD HH:MM:SS as a string format for use with SQLite, which doesn't otherwise provide a formal datetime type.
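
As a quick sanity check, here's a minimal Python sketch using the standard library's sqlite3 module (the id column and the insert statement are my additions) showing the default in action:

import sqlite3

# An in-memory database is enough to demonstrate the behavior
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE alerts (
        id integer primary key,
        alert_created_at text default current_timestamp
    )
""")
conn.execute("INSERT INTO alerts DEFAULT VALUES")
row = conn.execute("SELECT alert_created_at FROM alerts").fetchone()
print(row[0])  # e.g. 2025-05-08 22:19:33 -- UTC, in YYYY-MM-DD HH:MM:SS format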

Tags: sql, sqlite, datetime


AI makes the humanities more important, but also a lot weirder


Writing recently in The New Yorker, the historian of science D. Graham Burnett described how he has been thinking about AI:

In one department on campus, a recently drafted anti-A.I. policy, read literally, would actually have barred faculty from giving assignments to students that centered on A.I. (It was ultimately revised.) Last year, when some distinguished alums and other worthies conducted an external review of the history department, a top recommendation was that we urgently address the looming A.I. disruptions to our teaching and research. This suggestion got a notably cool reception. But the idea that we can just keep going about our business won’t do, either.

On the contrary, staggering transformations are in full swing. And yet, on campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.


I suspect that a significant chunk of my historian colleagues had a negative reaction to this article. But I wholeheartedly agree with the central point Burnett makes within it — not that generative AI is inherently good, but simply that it is already transformative for the humanities, and that this fact cannot be ignored or dismissed as hype.

Here’s how I’m currently thinking about that transformation.

Generative AI elevates the value of humanistic skills

Ignoring the impact of AI on humanistic work is not just increasingly untenable. It is also foolish, because humanistic knowledge and skills are central to what AI language models actually do.

The language translation, sorting, and classification abilities of AI language models — the LLM as a “calculator for words” — are among the most compelling uses for the current frontier models. We’re only beginning to see these impacts in domains like paleography, data mining, and translation of archaic languages. I discussed some examples here:

… and the state of the art has progressed quite a bit since then. But since this is one aspect of AI and humanities I’ve written about at length, I’ll leave it to the side for now.

Another underrated change of the past few years is that humanistic skills have become surprisingly important to AI research itself.

One recent example: OpenAI’s initial fix for GPT-4o’s bizarre recent turn toward sycophancy was not a new line of code. It was a new piece of English prose. Here’s Simon Willison on the change to the system prompt that OpenAI implemented:

This was not the only issue that caused the problem. But the other factors in play (such as prioritizing user feedback via a “thumbs up” button) were similarly rooted in big-picture humanistic concerns like the impact of language on behavior, cross-cultural differences, and questions of rhetoric, genre, and tone.

This is fascinating to me. When an IBM mainframe system broke down in the 1950s (or a steam engine exploded in the 1850s), the people who had to fix it likely did not spare a moment's thought for any of these topics.

Today, engineers working on AI systems also need to think deeply and critically about the relationship between language and culture, and about the history and philosophy of technology. When they fail to do so, their systems literally start to break down.


Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.

My own efforts so far have mostly been focused on two niche educational games modeled on old school text-based adventures — not exactly something with a huge potential audience. But that’s exactly why I chose to do it. The stakes were low; the interest level for me personally was high; and I had significant expertise in the actual material and format, if not the code.

The progression from my first attempt (last fall) to my second (earlier this spring) has been an amazing learning experience.

Here’s the first game (you can find a free playable version here). It’s a 17th century apothecary simulator that requires students to read and utilize actual early modern medical recipes to heal patients based on real historical figures. You play as Maria de Lima, a semi-fictional female apothecary in 1680s Mexico City with a hidden past:

Maria assesses the potential melancholia of her first patient of the day.

It was fascinating to make, but it also has significant bugs and usability issues, and it fairly quickly spools out into LLM-generated hallucinations that are unmoored from historical reality. (For instance, in one play-through, I, as Maria, was able to become a ship’s surgeon on a merchant vessel sailing to England, then meet with Isaac Newton in London. The famously quarrelsome and reclusive Newton was, for some reason, delighted to welcome me into his home for tea.)

My second attempt, a game where you play as a young Darwin collecting finches and other specimens on one of the Galápagos Islands in 1835, is more sophisticated and more stable.

The terrain-based movement system, with specific locations based directly on actual landscapes Darwin wrote about in his Voyage of the Beagle, forces the AI to maintain a kind of literal ground truth. It is difficult to leave the island, and the animals and terrain you encounter are pulled directly from the actual writings of Darwin, reducing the tendency to hallucinate.
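
Here's a minimal Python sketch of that grounding mechanism (the location names and descriptions below are illustrative placeholders, not the game's actual data):

# Sketch of the grounding idea: the island is a fixed graph of locations
# (with text drawn from Darwin's own descriptions), and every move the
# LLM proposes is validated against the graph before it becomes story.
WORLD = {
    "lava_shore": {
        "text": "A shore of rough black lava, crawling with marine iguanas.",
        "exits": {"cactus_plain"},
    },
    "cactus_plain": {
        "text": "An arid plain dotted with stunted cactus trees.",
        "exits": {"lava_shore", "tortoise_spring"},
    },
    "tortoise_spring": {
        "text": "A freshwater spring visited by giant tortoises.",
        "exits": {"cactus_plain"},
    },
}

def attempt_move(current: str, destination: str) -> str:
    """Allow only moves along edges of the map. The returned string is fed
    back to the model, so impossible journeys never enter the narrative."""
    if destination in WORLD[current]["exits"]:
        return WORLD[destination]["text"]
    return "There is no way to reach that place from here."

Because the model can only narrate what the map allows, wandering off to London for tea with Newton is simply not a legal move.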

There is also a more robust logging system, which will come in handy when I want to add an assessment layer to the game and turn it into an actual assignment. You can play Young Darwin here.

Darwin considers whether he should approach a tortoise or try to catch a snake.

My idea is that students will read Darwin’s writings first, then demonstrate what they learned via the choices they make in the game. To progress, you must embody the real epistemologies and knowledge of a 19th-century naturalist.

The crucial thing is that this would be done alongside an in-class essay and in-person discussions of the reading — it would not replace the human element of teaching, but augment it.

I’ll write more about all this in a future post, but the upshot is that this iterative process has been among the more intellectually challenging and enriching experiences of the last few years for me. Anyone who thinks you can’t learn from interactive tutoring by an AI has not tried. You absolutely can.


Generative AI makes it harder to teach humanistic skills

On the other hand, it is just a brutal fact that AI chatbots are significantly damaging core aspects of the educational system. There’s no denying it, and it needs to be taken seriously by educators, students, politicians, and above all by the frontier AI labs themselves.

Educators tend to point to the ways ChatGPT and its competitors have affected us — eroding our ability to accurately assess student writing because such a large proportion of students turn in machine-generated essays, and forcing us to come up with entirely new assignments and lesson plans as a result.

But in the longer run, the damage is being done to students. By making effort an optional factor in higher education rather than the whole point of it, LLMs risk producing a generation of students who have simply never experienced the feeling of focused intellectual work. Students who have never faced writer’s block are also students who have never experienced the blissful flow state that comes when you break through writer’s block. Students who have never searched fruitlessly in a library for hours are also students who, in a fundamental and distressing way, simply don’t know what a library is even for.

New York Magazine has a new article on student use of ChatGPT which captures the problem well. Here’s a Columbia student who speaks for a significant chunk of the current university population:

“Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

I will speak frankly. This sucks.

It sucks the joy out of teaching, and it sucks the meaning out of the whole experience of getting an education.

When I was a postdoc at Columbia, I taught one of the core curriculum classes mentioned here, with a reading list that included over a dozen weighty tomes (one week was spent on the Bible, the next week on the Quran, another on Thomas Aquinas, and so on). There wasn’t much that was fun or easy about it. And yet — probably for that very reason — I learned more from teaching that class than from any other. Something fundamental about that experience feels like it’s ruined now.

But this is not the entire story. The middle part of D. Graham Burnett’s New Yorker piece strikes me as an important corrective to this pessimism. Burnett is, I think it’s fair to say, rapturous about his students’ response to an assignment asking them to discuss the concept of attention with ChatGPT, then edit and submit the results.

Here’s a sample:

Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career. I’m not sure how to describe it. In a basic way, I felt I was watching a new kind of creature being born, and also watching a generation come face to face with that birth: an encounter with something part sibling, part rival, part careless child-god, part mechanomorphic shadow—an alien familiar.

I have had the same feelings, for instance when I first began tinkering with history simulation assignments. Language models are a genuinely novel teaching tool. Their impact is still uncertain. What that means is that now is exactly the time when people who are genuinely passionate about teaching and learning for its own sake — not as a scorecard to judge politicians, not as a source of corporate profit — need to take an active role.



My greatest concern when it comes to LLMs in humanities education is that they will lead to a further polarization in educational outcomes. The Princeton students who Burnett teaches seem extraordinarily thoughtful and creative in their responses to his assignment. I suspect students in a social studies class at an underfunded public high school would not be.

For this reason, it is vitally important that educators learn how to personally create and deploy AI-based assignments and tools that are tailored directly for the type of teaching they want to do. If we cede that ground, if we ignore the challenge, then we will watch helplessly as education gets taken over by cynical and stultifying “AI learning tools” which trumpet their interactivity while eroding the personalized student-teacher relationship that is at the heart of learning.

This is the basic thinking behind an NEH grant which I and two of my UCSC colleagues, Pranav Anand (linguistics) and Zac Zimmer (literature), were awarded in January of this year… and which got cancelled by the Trump administration/DOGE last month.1 We are continuing our planned work, and I’ll keep writing about it here.

I’d love to hear your thoughts in the comments, and please consider supporting my work via a paid subscription if this is an option for you.


Weekly links

• D. Graham Burnett’s book The Sounding of the Whale is, incidentally, the most unusual and delightful book about the history of cetacean science you will ever read. I relied on his chapter about John C. Lilly extensively when I was writing the “dolphins on LSD” part of Tripping on Utopia.

• “The fragmentary letter was found preserved inside a 1608 book by Johannes Piscator, an almost 1,000-page tome dissecting biblical texts. The letter was not tucked into the pages as a bookmark or memento but was part of the book’s very construction, as strips of wastepaper deployed as padding to prevent the text block from chafing against the binding.” New findings about Shakespeare’s relationship with his wife, Anne Hathaway (Washington Post).

• Congratulations to the UNC historian Kathleen DuVal, whose most recent book Native Nations: A Millennium in North America won the Pulitzer prize for best work of history this week. DuVal’s The Native Ground (2006) is among the more interesting history books I’ve ever read, and I’m looking forward to reading her new book.


1. And if you happen to be the sort of person who makes donations to universities and would like to support that cancelled NEH grant, please email me at bebreen [at] ucsc [dot] edu.


VMware perpetual license holders receive cease-and-desist letters from Broadcom


Broadcom has been sending cease-and-desist letters to owners of VMware perpetual licenses with expired support contracts, Ars Technica has confirmed.

Following its November 2023 acquisition of VMware, Broadcom ended VMware perpetual license sales. Users with perpetual licenses can still use the software they bought, but they are unable to renew support services unless they had a pre-existing contract enabling them to do so. The controversial move aims to push VMware users toward subscriptions for bundled VMware products, with associated costs rising by 300 percent or, in some cases, more.

Some customers have opted to continue using VMware unsupported, often as they research alternatives, such as VMware rivals or devirtualization.

Over the past weeks, some users running VMware unsupported have reported receiving cease-and-desist letters from Broadcom informing them that their contract with VMware and, thus, their right to receive support services, has expired. The letter [PDF], reviewed by Ars Technica and signed by Broadcom managing director Michael Brown, tells users that they are to stop using any maintenance releases/updates, minor releases, major releases/upgrades, extensions, enhancements, patches, bug fixes, or security patches, save for zero-day security patches, issued since their support contract ended.

The letter tells users that the implementation of any such updates “past the Expiration Date must be immediately removed/deinstalled,” adding:

Any such use of Support past the Expiration Date constitutes a material breach of the Agreement with VMware and an infringement of VMware’s intellectual property rights, potentially resulting in claims for enhanced damages and attorneys’ fees.

Some customers of Members IT Group, a managed services provider (MSP) in Canada, have received this letter, despite not receiving VMware updates since their support contracts expired, CTO Dean Colpitts told Ars. One customer, he said, received a letter six days after their support contract expired.

Similarly, users online have reported receiving cease-and-desist letters even though they haven't applied updates since losing VMware support. One user on Spiceworks’ community forum reported receiving such a letter even though they had migrated off VMware to Proxmox.

Some users who reported receiving a letter from Broadcom said they ended up getting legal teams involved. Ars has also seen confusion online, with some people thinking that the letter means Broadcom perceives that they've broken their agreement with VMware. However, it seems that Broadcom is sending these letters to companies soon after their support contracts have expired, regardless of whether they are still using VMware.

Broadcom didn't respond to a request for comment.

Broadcom warns of potential audits

The cease-and-desist letters also tell recipients that they could be subject to auditing.

Failure to comply with [post-expiration reporting] requirements may result in a breach of the Agreement by Customer[,] and VMware may exercise its right to audit Customer as well as any other available contractual or legal remedy.

In response, Colpitts told Ars:

"The one thing that does kind of piss me off is the fact that Broadcom retains the right to still perform audits whenever they choose. But ... that's utter BS anyways. If a customer wanted to hide stuff, it could easily be done (disclaimer: I have never done this, but since it's all self-reporting in clear text with no security checksums or anything to detect tampering, it would be easy to do)."

Since Broadcom ended VMware's perpetual licenses and increased pricing, numerous users and channel partners, especially small-to-medium-sized companies, have had to reduce or end business with VMware. Most of Members IT Group’s VMware customer base is now running VMware unsupported. The MSP's biggest concern there is ensuring that staff don't accidentally apply patches to customer systems, Colpitts noted.

In recent months, Broadcom has sought to rein in potential use of VMware products that it considers unwarranted. For example, it engaged in a since-resolved legal battle with AT&T over the telecom’s right to renew support services and has accused Siemens of pirating VMware software.

Broadcom’s changes to how VMware is distributed have resulted in various firms ditching VMware and doubting Broadcom's care for customers. While Broadcom’s financial success since acquiring VMware suggests that its business plan will remain steadfast, sending cease-and-desist letters to VMware users risks further harming its reputation with current and former customers.


What's the carbon footprint of using ChatGPT?



Inspired by Andy Masley's cheat sheet (which I linked to last week), Hannah Ritchie explores some of the numbers herself.

Hannah is Head of Research at Our World in Data, a Senior Researcher at the University of Oxford (bio), and maintains a prolific newsletter on energy and sustainability, so she has a lot more credibility in this area than Andy or me!

My sense is that a lot of climate-conscious people feel guilty about using ChatGPT. In fact it goes further: I think many people judge others for using it, because of the perceived environmental impact. [...]

But after looking at the data on individual use of LLMs, I have stopped worrying about it and I think you should too.

The inevitable counter-argument to the idea that the impact of ChatGPT usage by an individual is negligible is that aggregate user demand is still the thing that drives these enormous investments in huge data centers and new energy sources to power them. Hannah acknowledges that:

I am not saying that AI energy demand, on aggregate, is not a problem. It is, even if it’s “just” of a similar magnitude to the other sectors that we need to electrify, such as cars, heating, or parts of industry. It’s just that individuals querying chatbots is a relatively small part of AI's total energy consumption. That’s how both of these facts can be true at the same time.

Meanwhile Arthur Clune runs the numbers on the potential energy impact of some much more severe usage patterns.

Developers burning through $100 of tokens per day (not impossible given some of the LLM-heavy development patterns that are beginning to emerge) could end the year with a carbon footprint equivalent to a short-haul flight or a 600-mile car journey.

In the panopticon scenario where all 10 million security cameras in the UK analyze video through a vision LLM at one frame per second, Arthur estimates we would need to duplicate the total energy usage of Birmingham, UK: the output of a 1GW nuclear plant.
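
As a back-of-envelope check using only those two quoted figures:

# 10 million cameras sharing the output of a 1GW plant
cameras = 10_000_000
plant_watts = 1_000_000_000  # 1GW

print(plant_watts / cameras)  # 100.0 -- about 100W of continuous draw per camera stream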

Let's not build that panopticon!

Tags: ai-ethics, generative-ai, ai-energy-usage, chatgpt, ai, vision-llms, ai-assisted-programming, llms


Quoting Max Woolf


Two things can be true simultaneously: (a) LLM provider cost economics are too negative to return positive ROI to investors, and (b) LLMs are useful for solving problems that are meaningful and high impact, albeit not to the AGI hype that would justify point (a). This particular combination creates a frustrating gray area that requires a nuance that an ideologically split social media can no longer support gracefully. [...]

OpenAI collapsing would not cause the end of LLMs, because LLMs are useful today and there will always be a nonzero market demand for them: it’s a bell that can’t be unrung.

Max Woolf

Tags: max-woolf, generative-ai, openai, ai, llms
