
Court Injunctions are the Thoughts and Prayers of Data Breach Response



You see it all the time after a tragedy occurs somewhere, and people flock to offer their sympathies via the "thoughts and prayers" line. Sympathy is great, and we should all express that sentiment appropriately. The criticism, however, is that the line is often offered as a substitute for meaningful action. Responding to an incident with "thoughts and prayers" doesn't actually do anything, which brings us to court injunctions in the wake of a data breach.

Let's start with HWL Ebsworth, an Australian law firm that was the victim of a ransomware attack in 2023. They were granted an injunction, which means the following:

The final interlocutory injunction restrained hackers from the ALPHV, or “BlackCat”, hackers group from publishing the HWL data on the internet, sharing it with any person, or using the information for any reason other than for obtaining legal advice on the court’s orders.

To paraphrase, the injunction prohibits the Russian crime gang that hacked the law firm and attempted to extort them from publishing the data on the internet. Right... The threat actor was subsequently served with the injunction, to which, per the article, they responded in an entirely predictable fashion:

Fuck you fuckers

And then they dumped a huge trove of data. Clearly, criminals aren't going to pay any attention whatsoever to an injunction, but this legal construct has reach far beyond just the bad guys:

The injunction will also “assist in limiting the dissemination of the exfiltrated material by enabling HWLE to inform online platforms, who are at risk of publishing the material”, Justice Slattery said.

In other words, the data is also off limits to the good guys. Journalists, security firms and yes, Have I Been Pwned (HIBP) are all impacted by injunctions like this. To some extent, you can understand this when the data is as sensitive as what a law firm typically holds, and you need only use a little bit of imagination to picture how damaging it can be for data like this to fall into the wrong hands. But data in a breach of a company like Qantas is very different.

As well as my interest in running HIBP, I also appear to be a victim of their data breach, along with my wife and kids. And just to highlight how much skin I have in the game, I'm also a Qantas shareholder and a very loyal customer.

As such, I was particularly interested when they applied for, and were granted, a court injunction of their own. Why? What possible upside does this provide? Because by now, it's pretty clear what's going to happen to the data:

[Screenshot: Telegram post from the group holding the Qantas data]

This is from a Telegram channel run by the group that took the Qantas data, along with some other huge names:

"Scattered LAPSUS$ Hunters" is threatening to dump all the data publicly in a couple of days' time unless a ransom is paid, which it won't be. The quote from the Telegram image is from a Qantas spokesperson, and clearly, the injunction is not going to stop the publishing of data. Much of my gripe with injunctions is the premise that they in some way protect customers (like me), when clearly, they don't. But hey, "thoughts and prayers", right?

Without wanting to give too much credit to criminals attempting to ransom my data (and everyone else's), they're right about the media outlets. An injunction would have had a meaningful impact on the Ashley Madison coverage a decade ago, where the press happily outed the presence of famous people in the breach. Clearly, the Qantas data is nowhere near as newsworthy, and I can't imagine a headline going much beyond the significant point balances of certain politicians. The data just isn't that interesting.

The injunction is only effective against people who meet the following criteria:

  1. People who know there's an injunction in place
  2. People who are law-abiding
  3. People in Australia *

The first two points are obvious, and an asterisk adorns the third as it's very heavily caveated. This is from a chat this morning with a lawyer friend who specialises in this space:

it would depend on which country and whether it has a reciprocal agreement with Australia eg like the UK and also who you are trying to enforce it against and then it's up to the court in that country to determine - but as this is an injunction (so not eg for a debt against a specific person) it's almost impossible - you can't just register a foreign judgement somewhere against the world at large as far as I know.

So, if the injunction is so useless at providing meaningful protections to data breach victims, what's the point? Who does it protect? In researching this piece, the best explanation I could find was from law firm Clayton Utz:

Where that confidentiality is breached due to a hack, parties should generally do - and be seen to be doing - what they can to prevent or minimise the extent of harm. Even if injunctions might not impact hackers, for the reasons set out above, they can provide ancillary benefits in relation to the further dissemination of hacked information by legitimate individuals and organisations. Depending on the terms, it might also assist with recovery on relevant insurance policies and reduce the risk of securities class actions being brought.

That term - "be seen to be doing" - says it all. This is now just me speculating, but I can envisage lawyers for Qantas standing up in court when they're defending against the inevitable class actions they'll face (which I also have strong views on), saying "Your honour, we did everything we could, we even got an injunction!" In a previous conversation I had regarding another data breach where an injunction had successfully been granted, I was told by the lawyer involved that they wanted to assure customers that they'd done everything possible. That breach was subsequently circulated online via a popular clear web hacking site (not "the dark web"), but I assume this fact and the ineffectiveness of the injunction on that audience were left out of customer communications. I feel pretty comfortable arguing that the primary beneficiary of the injunction is the shareholder, rather than the customer. And I assume the lawyers charge for their time, right?

Where this leaves us with Qantas is that, on a personal note, as a law-abiding Australian who is aware of the injunction, I won't be able to view my data or that of my kids. I can always request it of Qantas, of course, but I won't be able to go and obtain it if and when it's spread all over the internet. The criminals will, of course, and that's a very uncomfortable feeling.

From an HIBP perspective, we obviously can't load that data. It's very likely that hundreds of thousands of our subscribers will be impacted, and we won't be able to let them know (which is part of the reason I've written this post - so I can direct them here when asked). Granted, Qantas has obviously sent out disclosure notices to impacted individuals, but I'd argue that the notice that comes from HIBP carries a different gravitas: it's one thing to be told "we've had a security incident", and quite another to learn that your data is now in circulation to the extent that it's been sent to us. Further, Qantas won't be notifying the owners of the domains that their customers' email addresses are on. Many people will be using their work email address for their Qantas account, and when you tie that together with the other exposed data attributes, that creates organisational risk. Companies want to know when corporate assets (including email addresses) are exposed in a data breach, and unfortunately, we won't be able to provide them with that information.

I understand that Qantas' decision to pursue the injunction is about something much broader than the email addresses potentially appearing in HIBP. I actually think much of the advice Qantas has given is good, for example, the resources they've provided on their page about the breach:

[Screenshot: support resources listed on Qantas' breach response page]

These are all fantastic, and each of them has many good external resources people worried about scams should refer to. For example, ScamWatch has this one:

[Screenshot: ScamWatch guidance]

And cyber.gov.au, courtesy of our Australian Signals Directorate, makes this suggestion:

[Screenshot: cyber.gov.au tip from the Australian Signals Directorate]

Not to miss a beat, our friends at IDCARE also offer great advice:

[Screenshot: IDCARE advice]

And, of course, the OAIC has some fantastic guidance too:

[Screenshot: OAIC guidance]

The scam resources Qantas recommends all link through to a service that will never return the Qantas data breach. Did I mention "thoughts and prayers" already?


canaries and islands


Presumably, some time in the eighteenth century, in a pit somewhere, the dumbest man in Cwm Rhondda seriously said something like “why does the foreman care so much about bloody canaries? Why are we having to cut a shift short and lose pay just for the sake of his pet?”. And this became a joke repeated down the mines until the invention of the Davy Lamp.

But here we are today, and people still seem not to understand that when development is stalled by protected habitats for bats, newts and snails, it’s not really the snails we’re protecting.

in order to blame the system, you first have to understand the system, as a system

There are three kinds of species that are mentioned in the various habitat protection legislation. Some things are protected simply because they're endangered themselves[1]. Some are "keystone species", which are protected because they're known to occupy an important place in the system and if they go, a lot of cascading damage will ensue.

But there are also a bunch of little critters (often invertebrates like the fabled newts and salamanders) which need to be monitored as part of the job of assessing the health of a particular habitat. And, of course, if you monitor something at regular intervals, and take action when it appears to be endangered, then that looks a lot like protection for the actual thing; just like it looked to Dewi Twp in 1788 like the foreman was looking after his canary.

Water regulation is a bit of a wicked problem in this regard, because the nature of groundwater is that when you draw some of it out in one place, the effects might show up somewhere else. In the particular site in Horsham which our Chancellor was talking about, developments have been stalled for a while in a big area, because in the course of monitoring a couple of protected sites (including one of the three habitats of the whirlpool snails), Natural England noticed that things were drying out at a faster rate than expected.

This is, actually, a big problem. The whole of south east England has inadequate reservoirs (due to the kind of problems I wrote about in The Problem Factory), and consequently makes too much use of the underground aquifer. It's been known for ages that this was unsustainable.

When you start seeing the predicted problems actually happening, unfortunately that really is a “down tools lads” moment for housing development, until you can do a bunch of expensive survey work and find out exactly what’s happening to the aquifer. Quite apart from anything, you don’t necessarily want to be building tens of thousands of houses in a location where it turns out that supplying them with water is going to be vastly more expensive than you had planned for.

I think there’s a real risk of the phenomenon “IBGYBG” happening here. In the run-up to the financial crisis of the 2000s, a lot of bad deals went out the door because the people selling them figured that “I’ll be gone, you’ll be gone”; the consequences would only become apparent after all the parties involved in structuring the deals had moved on in their careers or retired.

The nature of early warning systems is that if they're functioning, they stop activity when it isn't causing any harm. And so at the point where the red light flashes, it looks like bureaucracy gone mad has prevented the development of 20,000 family homes in order to protect some funny snails. The trouble is that, in fifty years' time, when the aquifer is depleted, the houses are worthless because of subsidence, the river is a kill-zone and everything has to be put right at much greater cost, all the people who wanted to get the red tape out of the way and be builders rather than blockers will not be picking up the phone. In fact, they'll most likely be nonplussed at the idea that anyone might point the finger for this completely unpredictable environmental act of God anywhere near them.

The purpose of this blog is “things that stick in your mind”, and the thing that’s stuck fastest in my mind is what an old friend said to me when we met up for a pint this summer. “Looking around at this world, all I can really say is that I’m glad I’m old enough and have enough money to just watch all of this happen and then die”.


[1] The specific snails in the case I linked to are also actually extremely rare themselves, poor things, but it’s clear from reading the position statement that Natural England is worried about a lot of things happening in the habitat zones affecting a lot more things than the snails.




I guess I haven’t clearly articulated this in writing, but friends do not let fr...


I guess I haven’t clearly articulated this in writing, but friends do not let friends without substantive IT work experience and/or a credible IT degree take cybersecurity career bootcamps in 2025.

They are up to no good. Shenanigans. Malfeasance. They are not a safe way to get a job.


SOPPPPs and SLAPPs


As promised earlier in the week, here is a rare example of me proposing something positive rather than moaning all the time. Although I don’t think there is much to be gained from tweaking judicial review (and considerable potential to do damage, both by missing important objections and by further undermining the sense of democratic legitimacy of the process), there is a real problem in planning that needs to be addressed.

And that problem is what might be called the SOPPPP – a Strategic Objection Purely to Protect Property Prices.


I think what really bothers people about the environmental, habitat and other protections is the perception that they are being used in bad faith. And this does happen; in researching the Sheephouse Wood Bat Mitigation Structure, I was struck by the fact that Buckinghamshire Council had, very late in the process, issued a lot of Tree Protection Orders on woodland that they had never thought worth protecting until it became clear that they weren't going to be able to rely on Natural England and the bat habitats to deflect HS2 for them.

That sort of behaviour can’t be tolerated; I don’t think it’s actually as common as the people who want to leave the Aarhus Convention suspect, but I can see why they think that way. Bad faith use of the legal system is incredibly corrosive of public trust; it’s much more antisocial behaviour than painting graffiti on a bus stop.

I said on Wednesday that infrastructure planning problems, in my mental model, share a common structure with the problem of libel law, which explains why both of these fields have consistently disappointed us with successive rounds of reform that don't work. Libel risk is, unfortunately and due to the extreme expense of lawyers, close to existential for news organisations. Consequently, they take the same approach to it that infrastructure developers take to planning risk; rather than maximising expected value, they first need to reduce the likelihood of a bad outcome below some threshold value. In newspapers, that means that good stories get spiked.

Libel also has the problem of strategic abuse, to the extent that they have a name for it – the Strategic Lawsuit Against Public Participation or SLAPP. And giving a name to the practice of using meritless lawsuits to intimidate critics has had an effect. Once you've given it a name, you can start to legislate against the practice; lots of places now have actual anti-SLAPP statutes, and even where there isn't one on the books, there's a bit of precedent and basis for a judge to take into account whether something is a SLAPP or not when making case management decisions and considering motions to dismiss.

And once you’ve named something, you can stigmatise it. Lawyers who care about their professional reputation don’t like to be associated with SLAPPs. If you want to SLAPP someone, you usually end up going to one of the firms that are known to be SLAPP merchants, which reduces their effectiveness because it alerts everyone to what’s going on.

So – I think one useful reform (that wouldn’t even cost anything!) would be to introduce the SOPPPP concept to planning, and even maybe pass some token legislation against them. I think it would be extremely difficult to ever actually prosecute someone for abuse of the planning process, but laws do sometimes send a message. You could include it in the standards in public life that elected representatives shouldn’t get involved in this sort of thing and should be censured if they do. And professional services firms would have to make more of a choice about the kinds of clients they took on, if they didn’t want to get a bad name.




Saturday Morning Breakfast Cereal - Princess



Click here to go see the bonus panel!

Hovertext:
The worst part is the ocean of blood you can't see at the bottom of the last panel.



Spec-driven development: Using Markdown as a programming language when building with AI


The usual workflow with AI coding agents like GitHub Copilot is simple: "Write app A that does X." You start with that seed, then iterate: "Add feature Y," "Fix bug Z." This works, at least until the agent loses track of your app's purpose or past decisions.

If you’re new to AI coding agents, the change is subtle. Suddenly, the agent asks you to repeat things you’ve already explained, or suggests changes that ignore your previous instructions. Sometimes, it forgets why a feature exists, or proposes solutions that contradict earlier choices.

Some AI coding agents try to address this by supporting custom instructions files. For example, GitHub Copilot supports copilot-instructions.md. You can put your app’s purpose and design decisions in this Markdown file, and GitHub Copilot will read it every time it generates code.
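
To make that concrete, a minimal copilot-instructions.md might look something like this (a hypothetical sketch, not the file from any real project):

```markdown
# Project context

This repository is a CLI tool written in Go that syncs GitHub data into SQLite.

## Conventions

- Prefer the standard library; avoid adding new dependencies.
- Store all timestamps in UTC, formatted as RFC 3339.
- Every command follows the pattern `go run main.go <command> [<args>]`.
```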

When I’m in a coding rush, I often forget to update copilot-instructions.md after asking GitHub Copilot to do things. It feels redundant to put the same information into both the chat prompt and the instructions file.

Which made me wonder: What if I “wrote” the entire app in the Markdown instructions file?

For my latest pet project — GitHub Brain MCP Server — I tried exactly that by writing the app code in Markdown and letting GitHub Copilot compile it into actual Go code. As a result, I rarely edit or view the app’s Go code directly. 

This process should work with any AI coding agent and programming language, though I’ll use VS Code, GitHub Copilot, and Go as examples. GitHub Brain MCP Server will be my example app throughout this post.

Let’s jump in. 

Setup: What I used to get started

There are four key files:

.
├── .github/
│   └── prompts/
│       └── compile.prompt.md
├── main.go
├── main.md
└── README.md

At a high level, I edit README.md or main.md to develop the app, invoke compile.prompt.md to let the AI coding agent generate main.go, then build and run main.go like any other Go app. Next, I’ll break down each file and the workflow.

README.md: User-facing documentation

The example app, GitHub Brain MCP Server, is a command-line tool. Its README.md provides clear, user-facing instructions for installation and usage. If you write libraries, this file should contain API documentation. Below is a condensed excerpt from the example app’s README.md:

# GitHub Brain MCP Server

**GitHub Brain** is an experimental MCP server for summarizing GitHub discussions, issues, and pull requests.

## Usage

```sh
go run main.go <command> [<args>]
```

**Workflow:**

1. Populate the local database with the `pull` command.
2. Start the MCP server with the `mcp` command.

### `pull`

Populate the local database with GitHub data.

Example:

```sh
go run main.go pull -o my-org
```

Arguments:

- `-t`: Your GitHub personal access token. **Required.**
- `-o`: The GitHub organization to pull data from. **Required.**
- `-db`: Path to the SQLite database directory. Default: `db` folder in the current directory.

### `mcp`

Start the MCP server using the local database.

...README.md continues...

Nothing special here, just regular documentation. But it gets interesting when this file is included in main.md.

main.md: AI coding agent specification

main.md is the actual source code of the app: the Markdown instructions file. Whenever I need to add features or fix bugs, I edit this file. Here’s the opening of the example app’s main.md:

# GitHub Brain MCP Server

AI coding agent specification. User-facing documentation in [README.md](README.md).

## CLI

Implement CLI from [Usage](README.md#usage) section. Follow exact argument/variable names. Support only `pull` and `mcp` commands.

## pull

- Resolve CLI arguments and environment variables into `Config` struct:
  - `Organization`: Organization name (required)
  - `GithubToken`: GitHub API token (required)
  - `DBDir`: SQLite database path (default: `./db`)
- Use `Config` struct consistently, avoid multiple environment variable reads
- Pull items: Repositories, Discussions, Issues, Pull Requests, Teams
- Use `log/slog` custom logger for last 5 log messages with timestamps in console output

...main.md continues...

Notice how the user-facing documentation from README.md is embedded in the specification. This keeps documentation and implementation in sync. If I want to add an alias for the -o argument, I just update README.md with no extra steps required.
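
For instance, that alias would be a one-line edit to the Arguments list in README.md, something like this (a hypothetical change, not part of the actual project):

```markdown
- `-o`, `--organization`: The GitHub organization to pull data from. **Required.**
```

Because main.md tells the agent to follow the exact argument names in README.md, the next compile pass picks the alias up automatically.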

Here’s another snippet from the example app’s main.md:

### Discussions

- Query discussions for each repository with `has_discussions_enabled: true`
- Record most recent repository discussion `updated_at` timestamp from database before pulling first page

```graphql
{
  repository(owner: "<organization>", name: "<repository>") {
    discussions(first: 100, orderBy: { field: UPDATED_AT, direction: DESC }) {
      nodes {
        url
        title
        body
        createdAt
        updatedAt
        author {
          login
        }
      }
    }
  }
}
```

- If repository doesn't exist, remove the repository, and all associated items from the database and continue
- Query discussions ordered by most recent `updatedAt`
- Stop pulling when hitting discussions with `updatedAt` older than recorded timestamp
- Save or update by primary key `url`
- Preserve the discussion markdown body

...main.md continues...

This is effectively programming in Markdown and plain English: storing variables, loops, and logical conditions. You get all the usual keywords — if, foreach, or continue. It’s a blend of structural and declarative styles, with Markdown links []() for imports.
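
To make the "compiled" side of this concrete, here's a rough sketch of the kind of Go an agent might emit for the stop-at-timestamp loop above. This is my illustration of the general shape, not the project's actual output; the types and function parameters are assumed:

```go
package pull

import "time"

// Discussion is a minimal stand-in for the fields the spec mentions.
type Discussion struct {
	URL       string
	UpdatedAt time.Time
}

// page is one GraphQL result page, newest discussions first.
type page struct {
	Discussions []Discussion
	NextCursor  string
}

// pullDiscussions mirrors the spec's incremental-sync bullets: walk pages
// ordered by most recent updatedAt and stop once we reach items older than
// the timestamp recorded before pulling the first page. fetch and save
// stand in for the GraphQL query and the upsert-by-url database write.
func pullDiscussions(
	lastSeen time.Time,
	fetch func(cursor string) (page, error),
	save func(Discussion) error,
) error {
	cursor := ""
	for {
		p, err := fetch(cursor)
		if err != nil {
			return err
		}
		for _, d := range p.Discussions {
			if d.UpdatedAt.Before(lastSeen) {
				return nil // older than the recorded timestamp: stop pulling
			}
			if err := save(d); err != nil {
				return err
			}
		}
		if p.NextCursor == "" {
			return nil
		}
		cursor = p.NextCursor
	}
}
```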

The database schema is also coded in Markdown:

## Database

SQLite database in `{Config.DbDir}/{Config.Organization}.db` (create folder if needed). Avoid transactions. Save each GraphQL item immediately.

### Tables

#### table:repositories

- Primary key: `name`
- Index: `updated_at`

- `name`: Repository name (e.g., `repo`), without organization prefix
- `has_discussions_enabled`: Boolean indicating if the repository has discussions feature enabled
- `has_issues_enabled`: Boolean indicating if the repository has issues feature enabled
- `updated_at`: Last update timestamp

...main.md continues...
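
As a rough illustration of what the compile step might turn that table spec into, here's hypothetical Go using the modernc.org/sqlite driver (an assumption on my part; the real generated code will differ):

```go
package store

import (
	"database/sql"

	_ "modernc.org/sqlite" // assumed pure-Go SQLite driver; registers as "sqlite"
)

// Schema derived from the table:repositories spec above.
const createRepositories = `
CREATE TABLE IF NOT EXISTS repositories (
    name                    TEXT PRIMARY KEY, -- repository name, no org prefix
    has_discussions_enabled INTEGER,          -- SQLite stores booleans as 0/1
    has_issues_enabled      INTEGER,
    updated_at              TEXT              -- last update timestamp
);
CREATE INDEX IF NOT EXISTS idx_repositories_updated_at
    ON repositories (updated_at);`

// Open creates the database file if needed and ensures the schema exists.
func Open(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite", path)
	if err != nil {
		return nil, err
	}
	if _, err := db.Exec(createRepositories); err != nil {
		db.Close()
		return nil, err
	}
	return db, nil
}
```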

compile.prompt.md: AI coding agent prompt

compile.prompt.md uses GitHub Copilot’s prompt file format. This repeatable prompt tells the agent to compile main.md into main.go. Here’s compile.prompt.md from the example app:

---
mode: agent
---

- Update the app to follow [the specification](../../main.md)
- Build the code with the VS Code tasks. Avoid asking me to run `go build` or `go test` commands manually.
- Fetch the GitHub home page for each used library to get documentation and examples.

I keep this prompt simple. The real information is in main.md, after all. This example uses GitHub Copilot's format, but keeping it simple makes it portable to other AI coding agents.

The workflow to bring this all together

The development loop is straightforward:

  1. Edit the specification in main.md or README.md.
  2. Ask the AI coding agent to compile it into Go code.
  3. Run and test the app. Update the spec if something doesn’t work as expected.
  4. Repeat.

In GitHub Copilot for VS Code, use the / command to invoke the prompt.

[Screenshot: invoking the prompt with the / command in GitHub Copilot for VS Code]

For smaller specs, GitHub Copilot usually catches changes automatically. As the spec grows, I nudge it in the right direction by appending "focus on <the-change>".
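
In the chat input, that combination looks something like this (the /compile name comes from the prompt file; the focus text is whatever I'm changing):

```
/compile focus on the new pagination parameters
```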

[Screenshot: prompting Copilot to focus on a specific change with the / command]

Coding

Coding in main.md is sometimes harder than writing Go directly. You have to clearly describe what you want, which might be the hardest part of software development 😅. Fortunately, you can use GitHub Copilot to help with this, just like you probably do with your Go code daily.

Here we ask it to add pagination to all MCP tools in main.md. Copilot not only saves us from doing repetitive work, but it also recommends proper pagination style and parameter names.

[Screenshot: Copilot recommending pagination style and parameter names for the MCP tools in the Markdown specification]

Linting

main.md can get messy like any code. To help with this, you can ask Copilot to clean it up. Here’s lint.prompt.md from the example app:

---
mode: agent
---

- Optimize [the app specification](../../main.md) for clarity and conciseness
- Treat the english language as a programming language
- Minimize the number of synonyms - i.e. pull/get/fetch. Stick to one term.
- Remove duplicate content
- Preserve all important details
- Do not modify the Go code with this. Only optimize the Markdown file.
- Do not modify this prompt itself.

Like with compile.prompt.md, I use the / command to invoke this prompt. The AI coding agent lints main.md, and if the result looks good, I can compile it to Go with compile.prompt.md.

[Screenshot: Copilot linting the Markdown specification for clarity and conciseness]

Closing thoughts

After a few months using this workflow, here are my observations:

  • It works! And it gets better with each agentic update to Copilot.
  • Compilation slows down as main.go grows. Something I want to work on next is modifying the spec to break compiled code into multiple modules — by adding “Break each ## section into its own code module.”
  • Testing? I haven’t tried adding tests yet. But even with spec-driven workflows, testing remains essential. The spec may describe intended behavior, but tests verify it.

Something else I want to try next? Discarding all Go code and regenerating the app from scratch in another language. Will the new code work right away?

The rapid advances in this field are really encouraging, and I hope my experimental workflows give you some practical ideas to try. 

The post Spec-driven development: Using Markdown as a programming language when building with AI appeared first on The GitHub Blog.
