
Al Sweigart's Python books are available for free


I saw someone online today saying they enjoyed Python Programming Exercises, Gently Explained.

I went to the author’s site and I saw that he makes all of his books (Automate the Boring Stuff with Python and more) available for free at https://inventwithpython.com/. I’d heard of his books many times before but I didn’t realize that he made them available on his site like that!

His site puts it in his own words: "I’m Al Sweigart, and I write books to teach beginners to code. I put them online for free because programming is too valuable and needs to be accessible to all."

I always think it’s so cool when authors do that; another example is Mark Dominus’s Higher-Order Perl.

Read the whole story
denubis
10 hours ago

Content warning (tw trauma): To the end, friends. https://www.youtube.com/watch?v=uI...


Content warning: tw trauma


To the end, friends. youtube.com/watch?v=uIfqSTBTJX

Read the whole story
denubis
10 hours ago

messing around for the sake of doing so


Stian Westlake had an interesting post out last week about something called “technoscience”, that being the practice of doing scientific things where the output is a dataset or some software or something, rather than a new piece of statistically justified rhetoric (or, to give it the more common but less rigorous name, “science”).

Coming from the perspective of a professional funder of science (he’s the chief executive of the ESRC or something), Stian’s asking the always important question of “should we be spending a bit of money on this?”. He goes through a number of possible reasons why “technoscience” might be a more productive way of doing inquiry at the moment, but in my view he misses a quite important one, which is simply that it’s new.

The idea here would be that if we accept that technoscience is something different from the normal process of publishing peer-reviewed papers in scientific journals, then this is not a disadvantage – it’s a very great advantage! One of the few things in metascience and innovation studies which we really do know with a high degree of certainty is that academic publishing has gone absolutely to hell and is a really bad way of achieving anything. It was a broken system at least ten years ago, and the added burden of AI slop has so far not helped.

Looking through some of the projects that Stian cites as examples of the technoscience movement, I get a strong sense that a significant motivating factor is indeed “wanting to do almost anything other than write papers for publication”. It might be that technoscience has a productivity advantage, not because it’s better adapted to the world of AI, longer term approaches or co-operation with the private sector or anything like that, but simply because its practitioners are not wasting quite as much of their time on bullshit[1]. This is hinted at in Stian’s point 3 in the linked post, but I think it’s worth drawing out a wider lesson.

Which is that, sometimes it’s a benefit simply to shake things up – the deadweight cost of reorganisation can be negative. If a set of institutions has been hanging around for a long time, in the wrong kind of environment, it can pick up the organisational equivalent of barnacles. Profiteers, of course, but also calcified institutional conventions that are being maintained beyond their usefulness. As with academic publishing, the whole system might have been subverted to a different purpose, like career advancement and promotion.

And so, in order to make progress, you have to (in the words of the TV show) “go out and do something less boring instead”. The way that institutions survive is sometimes not to reorganise, but rather to start small projects outside the current constraints which then grow big enough to abandon the former host. I think this intrinsic benefit might be responsible for some improvements which are wrongly credited to the Hawthorne Effect.


[1] I just want to distinguish a good from a bad version of the implicit argument here. A few weeks ago, people were shocked that an article in Nature claimed that in some cases the cost of grant applications and overhead was greater than the amount of money spent on the research itself. In my view, this isn’t intrinsically shocking. The tort system, for example, also consumes slightly more in costs than the amount of money paid out in lawsuits. But that’s not a particularly useful ratio, because the numerator and denominator aren’t related as parts of the same process. The purpose of the tort system isn’t to efficiently generate the maximum amount of litigation proceeds; it’s to get the right decisions and to underpin the system of legal incentives. Similarly, if you regard the grant application process as admin, that’s not quite right – it’s metascience, the stage of deciding what scientific inquiry should take place at all. In fact, the grant system is screwed up and broken, and there’s all sorts of evidence that this is the case, but that ratio isn’t a good measure. (This is another specific case of a general principle that might be called the “chalkface fallacy”: you really can’t decide a priori what the correct tooth-to-tail ratio of operations to administration should be just by having intuitions about numbers that seem too big.)




Read the whole story
denubis
11 hours ago

Giving University Exams in the Age of Chatbots



What I like most about teaching "Open Source Strategies" at École Polytechnique de Louvain is how much I learn from my students, especially during the exam.

I dislike exams. I still have nightmares about exams. That’s why I try to subvert this stressful moment and make it a learning opportunity. I know that adrenaline increases memorization dramatically. I make sure to explain to each student what I expect from them and to be helpful.

Here are the rules:

1. You can have all the resources you want (including a laptop connected to the Internet)
2. There’s no formal time limit (but if you stay too long, it’s a symptom of a deeper problem)
3. I allow students to discuss among themselves if it is on topic. (In reality, they never do it spontaneously until I force two students with a similar problem to discuss together.)
4. You can prepare and bring your own exam question if you want (something done by fewer than 10% of the students)
5. Come dressed for the exam you dream of taking!

This last rule is awesome. Over the years, I have had a lot of fun with traditional folkloric clothing from different countries, students in pajamas, a banana, and this year’s champion, my Studentosaurus Rex!

An inflatable Tyrannosaurus Rex taking my exam in 2026

My all-time favourite is still a fully clothed Minnie Mouse, who did an awesome exam with full face make-up, big ears, big shoes, and huge gloves. I still regret not taking a picture, but she was the very first student to take seriously what I had meant as a joke, and she started a tradition that has continued over the years.

Giving Students the Choice to Use Chatbots

Rule N°1 implies having all the resources you want. But what about chatbots? I didn’t want to test how ChatGPT would answer my questions; I wanted to help my students better understand what Open Source means.

Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.

The questionnaire contained the following:

# Use of Chatbots

Tell the professor if you usually use chatbots (ChatGPT/LLM/whatever) when doing research and investigating a subject. You have the choice to use them or not during the exam, but you must decide in advance and inform the professor.

Option A: I will not use any chatbot, only traditional web searches. Any use of them will be considered cheating.

Option B: I may use a chatbot as it’s part of my toolbox. I will then respect the following rules:
1) I will inform the professor each time information comes from a chatbot
2) When explaining my answers, I will share the prompts I’ve used so the professor understands how I use the tool
3) I will identify mistakes in answers from the chatbot and explain why those are mistakes

Not following those rules will be considered cheating. Mistakes made by chatbots will be considered more serious than honest human mistakes, resulting in the loss of more points. If you use chatbots, you will be held accountable for the output.

I thought this was fair. You can use chatbots, but you will be held accountable for it.

Most Students Don’t Want to Use Chatbots

This January, I saw 60 students. I interacted with each of them for a mean time of 26 minutes. This is a tiring but really rewarding process.

Of 60 students, 57 decided not to use any chatbots. For 30 of them, I managed to ask them to explain their choices. For the others, I unfortunately did not have the time. After the exam, I grouped those justifications into four different clusters. I did it without looking at their grades.

The first group is the "personal preference" group. They prefer not to use chatbots. They use them only as a last resort, in very special cases or for very specific subjects. Some even made it a matter of personal pride. Two students told me explicitly "For this course, I want to be proud of myself." Another also explained: "If I need to verify what an LLM said, it will take more time!"

The second group was the "never use" one. They don’t use LLMs at all. Some are even very angry at them, not for philosophical reasons, but mainly because they hate the interactions. One student told me: "Can I summarize this for you? No, shut up! I can read it by myself you stupid bot."

The third group was the "pragmatic" group. They reasoned that this was the kind of exam where it would not be needed.

The fourth and last group was the "heavy user" group. They told me they heavily use chatbots but, in this case, were afraid of the constraints. They were afraid of having to justify a chatbot’s output or of missing a mistake.

After doing that clustering, I added each student’s grade to their cluster, and I was shocked by how coherent the result was. Note: grades are between 0 and 20, with 10 being the minimum grade to pass the class.

The "personal preference" students were all between 15 and 19, which makes them very good students, without exception! The "proud" students were all above 17!

The "never use" was composed of middle-ground students around 13 with one outlier below 10.

The pragmatics were in the same vein but a bit better: they were all between 12 and 16, without exception.

The heavy users were, by far, the worst. All students were between 8 and 11, with only one exception at 16.

This is, of course, not an unbiased scientific experiment. I didn’t expect anything. I will not draw any conclusions; I only share the observation.

But Some Do

Of 60 students, only 3 decided to use chatbots. This is not very representative, but I still learned a lot because part of the constraints was to show me how they used chatbots. I hoped to learn more about their process.

The first chatbot student forgot to use it. He did the whole exam and then, at the end, told me he hadn’t thought about using chatbots. I guess this put him in the "pragmatic" group.

The second chatbot student asked only a couple of short questions to make sure he clearly understood some concepts. This was a smart and minimal use of LLMs. The resulting exam was good. I’m sure he could have done it without a chatbot. The questions he asked were mostly a matter of improving his confidence in his own reasoning.

This reminded me of a previous-year student who told me he used chatbots to study. When I asked how, he told me he would tell the chatbot to act as the professor and ask exam questions. As a student, this allowed him to know whether he understood enough. I found the idea smart but not groundbreaking (my generation simply used previous years’ questions).

The third chatbot-using student had a very complex setup where he would use one LLM, then ask another unrelated LLM for confirmation. He had walls of text that were barely readable. When glancing at his screen, I immediately spotted a mistake (a chatbot explaining that "Sepia Search is a compass for the whole Fediverse"). I asked if he understood the problem with that specific sentence. He did not. Then I asked him questions for which I had seen the solution printed in his LLM output. He could not answer even though he had the answer on his screen.

But once we began a chatbot-less discussion, I discovered that his understanding of the whole matter was okay-ish. So, in this case, chatbots did him a heavy disservice. He was totally lost in his own setup. He had LLMs generate walls of text he could not read. Instead of trying to think for himself, he tried to have chatbots pass the exam for him, which was doomed to fail because I was examining him, not the chatbots. He passed, but he would probably have fared better without chatbots.

Can chatbots help? Yes, if you know how to use them. But if you do, chances are you don’t need chatbots.

A Generational Fear of Cheating

One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.

One obvious bias is that students want to please the teacher, and I guess they know where I am on this spectrum. One even told me: "I think you do not like chatbots very much so I will pass the exam without them" (very pragmatic of him).

But I also minimized one important generational bias: the fear of cheating. When I was a student, being caught cheating was a clear zero for the exam. You could, in theory, be expelled from university for aggravated cheating, whatever "aggravated" could mean.

During the exam, a good number of students called me panicked because Google was forcing autogenerated answers and they could not disable it. They were very worried I would consider this cheating.

First, I realized that, like GitHub, Google has a 100% market share, to the point students don’t even consider using something else a possibility. I should work on that next year.

Second, I learned that cheating, however lightly, is now considered a major crime. It might result in the student being banned from any university in the country for three years. Discussing an exam with someone who has yet to take it might be considered cheating. Students have very strict rules on their Discord.

I was completely flabbergasted because, to me, discussing "What questions did you have?" was always part of the collaboration between students. I remember one specific exam where we gathered in an empty room and helped each other before taking it. When one of us finished her exam, she would come back to the room and tell all the remaining students what questions she had and how she solved them. We never considered that "cheating", and, as a professor, I always design my exams hoping that the good ones (who usually choose to take the exam early) will help the remaining crowd. Every learning opportunity is good to take!

I realized that my students are so afraid of cheating that they mostly don’t collaborate before their exams! At least not as much as we used to.

In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.

Stream of Consciousness

My 2025 banana student!

Another innovation I introduced in the 2026 exam was the stream of consciousness. I asked them to open an empty text file and keep a stream of consciousness during the exam. The rules were the following:

In this file, please write all your questions and all your answers as a "stream of consciousness." This means the following rules:

1. Don’t delete anything.
2. Don’t correct anything.
3. Never go backward to retouch anything.
4. Write as thoughts come.
5. No copy/pasting allowed (only exception: URLs)
6. Rule 5 implies no chatbot for this exercise. This is your own stream of consciousness.

Don’t worry, you won’t be judged on that file. This is a tool to help you during the exam. You can swear, you can write wrong things. Just keep writing without deleting. If you are lost, write why you are lost. Be honest with yourself.

This file will only be used to try to get you more points, but only if it is clear that the rules have been followed.

I asked them to send me the file within 24 hours of the exam. Out of 60 students, I received 55 files (the remaining 5 were not penalized). There was also a bonus point for sending it to the exam git repository using git-send-email, something 24 of them managed to do correctly.

The results were incredible. I did not read them all, but this tool allowed me to have a glimpse inside the minds of the students. One said: "I should have used AI, this is the kind of question perfect for AI" (he did very well without it). For others, I realized how much stress they had been hiding. I was touched by one stream of consciousness starting with "I’m stressed, this doesn’t make any sense. Why can’t we correct what we write in this file" then, 15 lines later, "this is funny how writing the questions with my own words made the problem much clearer and how the stress start to fade away".

And yes, I read the files of all the failing students and managed to save a bunch of them when it was clear that they did, in fact, understand the matter but could not articulate it well in front of me because of the stress. Unfortunately, not everybody could be saved.

Conclusion

My main takeaway is that I will keep this method next year. I believe that students are confronted with their own use of chatbots, and I also learn how they use them. I’m delighted to read their thought processes through the stream of consciousness.

Like every generation of students, there are good students, bad students and very brilliant students. It will always be the case; people evolve (I was, myself, not a very good student). Chatbots don’t change anything regarding that. Like every new technology, smart young people are very critical and, by definition, smart about how they use it.

The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp’s marketing department.

Most of my students don’t like email. An awful lot of them learned only with me that Git is not the GitHub command-line tool. It turns out that by imposing Outlook with mandatory subscription to useless academic emails, we make sure that students hate email (Microsoft is on a mission to destroy email with the worst possible user experience).

I will never forgive the people who decided to migrate the university mail servers to Outlook. This was both incompetence and malice on a terrifying level, because there were enough warnings and opposition from very competent people at the time. Yet they decided to destroy one of the university’s core infrastructures and historical foundations (UCLouvain is listed by Peter Salus as the very first European university to have a mail server; there were famous pioneers in the department).

By using Outlook, they continue to destroy the email experience. Out of 55 streams of consciousness, 15 ended up in my spam folder. All had their links destroyed by Outlook. And the university keeps sending so many useless emails to everyone. One of my students told me that they refer to their university email as "La boîte à spams du recteur" (the rector’s spam inbox). And I dare to ask why they use Discord?

Another student asked me why it took four years of computer engineering studies before a teacher explained to them that Git was not GitHub and that GitHub was part of Microsoft. He had a distressed look: "How could I have known? GitHub was imposed on us for so many exercises!"

Each year, I tell my students the following:

It took me 20 years after university to learn what I know today about computers. And I have only one reason to be here in front of you: to make sure you are faster than me. Make sure that you do it better and deeper than I did. If you don’t manage to outsmart me, I will have failed.

Because that’s what progress is about. Progress is each generation going further than the previous one while learning from the mistakes of your elders. I’m here to tell you about my own mistakes and the mistakes of my generation.

I know that most of you are only there to get a diploma while doing the minimal required effort. Fair enough, that’s part of the game. Challenge accepted. I will try to make you think even if you don’t intend to do it.

In all honesty, I have a lot of fun teaching, even during the exam. For my students, the mileage may vary. But for the second time in my life, a student gave me the best possible compliment:

— You know, you are the only course for which I wake up at 8AM.

To which I responded:

– The feeling is mutual. I hate waking up early, except to teach in front of you.

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

Read the whole story
denubis
19 hours ago

Electricity use of AI coding agents



Previous work estimating the energy and water cost of LLMs has generally focused on the cost per prompt using a consumer-level system such as ChatGPT.

Simon P. Couch notes that coding agents such as Claude Code use far more tokens per task, often burning through many thousands of tokens across many tool calls.

As a heavy Claude Code user, Simon estimates his own usage at the equivalent of 4,400 "typical queries" to an LLM, corresponding to around $15-$20 in daily API token spend. He figures that to be about the same as running a dishwasher once or the daily energy used by a domestic refrigerator.
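That appliance comparison is a back-of-envelope calculation. Here is a rough sketch of how the arithmetic can line up, not taken from the linked post: the per-query and appliance figures below are assumptions for illustration only, and published per-query estimates vary widely.

# Back-of-envelope sketch with assumed figures (not from the linked post).
# Converts "4,400 typical queries per day" into kWh and compares it with
# common household appliance figures.

WH_PER_TYPICAL_QUERY = 0.3   # assumed Wh per ordinary chatbot query; estimates vary widely
QUERIES_PER_DAY = 4_400      # Simon's estimated daily equivalent for heavy Claude Code use

DISHWASHER_CYCLE_KWH = 1.2   # assumed energy for one dishwasher run
FRIDGE_DAILY_KWH = 1.5       # assumed daily energy for a domestic refrigerator

agent_kwh_per_day = WH_PER_TYPICAL_QUERY * QUERIES_PER_DAY / 1000  # Wh -> kWh

print(f"Estimated daily coding-agent energy: {agent_kwh_per_day:.2f} kWh")
print(f"One dishwasher cycle (assumed):      {DISHWASHER_CYCLE_KWH:.2f} kWh")
print(f"Refrigerator per day (assumed):      {FRIDGE_DAILY_KWH:.2f} kWh")

Under those assumed figures the agent usage works out to roughly 1.3 kWh per day, the same order of magnitude as the appliance comparisons in the post.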

Via Hacker News

Tags: ai, generative-ai, llms, ai-ethics, ai-energy-usage, coding-agents, claude-code

Read the whole story
denubis
1 day ago

Large Manly ferry and last of RiverCats to be turned into scrap metal

1 Comment
The large double-ended Collaroy will join the Dawn Fraser in making their final trips to their respective ferry graveyards in the coming months.

Read the whole story
denubis
8 days ago
booooo