
Spec-driven development: Using Markdown as a programming language when building with AI


The usual workflow with AI coding agents like GitHub Copilot is simple: “Write app A that does X.” You start with that seed, then iterate: “Add feature Y,” “Fix bug Z.” This works, at least until the agent loses track of your app’s purpose or past decisions.

If you’re new to AI coding agents, the change is subtle. Suddenly, the agent asks you to repeat things you’ve already explained, or suggests changes that ignore your previous instructions. Sometimes, it forgets why a feature exists, or proposes solutions that contradict earlier choices.

Some AI coding agents try to address this by supporting custom instructions files. For example, GitHub Copilot supports copilot-instructions.md. You can put your app’s purpose and design decisions in this Markdown file, and GitHub Copilot will read it every time it generates code.

When I’m in a coding rush, I often forget to update copilot-instructions.md after asking GitHub Copilot to do things. It feels redundant to put the same information into both the chat prompt and the instructions file.

Which made me wonder: What if I “wrote” the entire app in the Markdown instructions file?

For my latest pet project — GitHub Brain MCP Server — I tried exactly that by writing the app code in Markdown and letting GitHub Copilot compile it into actual Go code. As a result, I rarely edit or view the app’s Go code directly. 

This process should work with any AI coding agent and programming language, though I’ll use VS Code, GitHub Copilot, and Go as examples. GitHub Brain MCP Server will be my example app throughout this post.

Let’s jump in. 

Setup: What I used to get started

There are four key files:

.
├── .github/
│   └── prompts/
│       └── compile.prompt.md
├── main.go
├── main.md
└── README.md

At a high level, I edit README.md or main.md to develop the app, invoke compile.prompt.md to let the AI coding agent generate main.go, then build and run main.go like any other Go app. Next, I’ll break down each file and the workflow.

README.md: User-facing documentation

The example app, GitHub Brain MCP Server, is a command-line tool. Its README.md provides clear, user-facing instructions for installation and usage. If you write libraries, this file should contain API documentation. Below is a condensed excerpt from the example app’s README.md:

# GitHub Brain MCP Server

**GitHub Brain** is an experimental MCP server for summarizing GitHub discussions, issues, and pull requests.

## Usage

```sh
go run main.go <command> [<args>]
```

**Workflow:**

1. Populate the local database with the `pull` command.
2. Start the MCP server with the `mcp` command.

### `pull`

Populate the local database with GitHub data.

Example:

```sh
go run main.go pull -o my-org
```

Arguments:

- `-t`: Your GitHub personal access token. **Required.**
- `-o`: The GitHub organization to pull data from. **Required.**
- `-db`: Path to the SQLite database directory. Default: `db` folder in the current directory.

### `mcp`

Start the MCP server using the local database.

...README.md continues...

Nothing special here, just regular documentation. But it gets interesting when this file is included in main.md.

main.md: AI coding agent specification

main.md is the actual source code of the app: the Markdown instructions file. Whenever I need to add features or fix bugs, I edit this file. Here’s the opening of the example app’s main.md:

# GitHub Brain MCP Server

AI coding agent specification. User-facing documentation in [README.md](README.md).

## CLI

Implement CLI from [Usage](README.md#usage) section. Follow exact argument/variable names. Support only `pull` and `mcp` commands.

## pull

- Resolve CLI arguments and environment variables into `Config` struct:
  - `Organization`: Organization name (required)
  - `GithubToken`: GitHub API token (required)
  - `DBDir`: SQLite database path (default: `./db`)
- Use `Config` struct consistently, avoid multiple environment variable reads
- Pull items: Repositories, Discussions, Issues, Pull Requests, Teams
- Use `log/slog` custom logger for last 5 log messages with timestamps in console output

...main.md continues...

Notice how the user-facing documentation from README.md is embedded in the specification. This keeps documentation and implementation in sync. If I want to add an alias for the `-o` argument, I just update README.md, with no extra steps required.
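To make the compilation step concrete, here is a sketch of the kind of Go code an agent might generate from the Usage section above. The `Config` fields and flag names come from the spec; everything else (the function name, the error handling) is illustrative, not the app’s actual code.

```go
// Hypothetical sketch of compiled flag parsing for the `pull` command.
// Flag names (-t, -o, -db) and Config fields follow the spec; the rest
// is illustrative. The real app also resolves environment variables.
package main

import (
	"flag"
	"fmt"
)

// Config mirrors the arguments documented for `pull` in README.md.
type Config struct {
	GithubToken  string // -t, required
	Organization string // -o, required
	DBDir        string // -db, defaults to ./db
}

func parsePullFlags(args []string) (Config, error) {
	fs := flag.NewFlagSet("pull", flag.ContinueOnError)
	var c Config
	fs.StringVar(&c.GithubToken, "t", "", "GitHub personal access token")
	fs.StringVar(&c.Organization, "o", "", "GitHub organization to pull data from")
	fs.StringVar(&c.DBDir, "db", "db", "path to the SQLite database directory")
	if err := fs.Parse(args); err != nil {
		return c, err
	}
	if c.GithubToken == "" || c.Organization == "" {
		return c, fmt.Errorf("-t and -o are required")
	}
	return c, nil
}

func main() {
	c, err := parsePullFlags([]string{"-t", "token", "-o", "my-org"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("pulling %s into %s\n", c.Organization, c.DBDir)
}
```

The point of the spec is that this code is disposable: if the flag names change in README.md, the next compile regenerates it.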

Here’s another snippet from the example app’s main.md:

### Discussions

- Query discussions for each repository with `has_discussions_enabled: true`
- Record most recent repository discussion `updated_at` timestamp from database before pulling first page

```graphql
{
  repository(owner: "<organization>", name: "<repository>") {
    discussions(first: 100, orderBy: { field: UPDATED_AT, direction: DESC }) {
      nodes {
        url
        title
        body
        createdAt
        updatedAt
        author {
          login
        }
      }
    }
  }
}
```

- If repository doesn't exist, remove the repository, and all associated items from the database and continue
- Query discussions ordered by most recent `updatedAt`
- Stop pulling when hitting discussions with `updatedAt` older than recorded timestamp
- Save or update by primary key `url`
- Preserve the discussion markdown body

...main.md continues...

This is effectively programming in Markdown and plain English: variables, loops, and logical conditions. You get the usual keywords (if, foreach, continue). It’s a blend of imperative and declarative styles, with Markdown links `[]()` serving as imports.

The database schema is also coded in Markdown:

## Database

SQLite database in `{Config.DBDir}/{Config.Organization}.db` (create folder if needed). Avoid transactions. Save each GraphQL item immediately.

### Tables

#### table:repositories

- Primary key: `name`
- Index: `updated_at`

- `name`: Repository name (e.g., `repo`), without organization prefix
- `has_discussions_enabled`: Boolean indicating if the repository has discussions feature enabled
- `has_issues_enabled`: Boolean indicating if the repository has issues feature enabled
- `updated_at`: Last update timestamp

...main.md continues...

compile.prompt.md: AI coding agent prompt

compile.prompt.md uses GitHub Copilot’s prompt file format. This repeatable prompt tells the agent to compile main.md into main.go. Here’s compile.prompt.md from the example app:

---
mode: agent
---

- Update the app to follow [the specification](../../main.md)
- Build the code with the VS Code tasks. Avoid asking me to run `go build` or `go test` commands manually.
- Fetch the GitHub home page for each used library to get a documentation and examples.

I keep this prompt simple. The real information is in main.md, after all. This example uses GitHub Copilot’s format, but keeping it simple makes it portable to other AI coding agents.

The workflow to bring this all together

The development loop is straightforward:

  1. Edit the specification in main.md or README.md.
  2. Ask the AI coding agent to compile it into Go code.
  3. Run and test the app. Update the spec if something doesn’t work as expected.
  4. Repeat.

In GitHub Copilot for VS Code, use the / command to invoke the prompt.

[Screenshot: using the / command in GitHub Copilot for VS Code to invoke the AI coding agent prompt.]

For smaller specs, GitHub Copilot usually catches changes automatically. As the spec grows, I nudge it in the right direction by appending ”focus on <the-change>”.

[Screenshot: prompting GitHub Copilot in VS Code to focus on a specific change using the / command.]

Coding

Coding in main.md is sometimes harder than writing Go directly. You have to clearly describe what you want, which might be the hardest part of software development 😅. Fortunately, you can use GitHub Copilot to help with this, just like you probably do with your Go code daily.

Here we ask it to add pagination to all MCP tools in main.md. Copilot not only saves us from doing repetitive work, but it also recommends proper pagination style and parameter names.

[Screenshot: GitHub Copilot in VS Code recommending pagination style and parameter names for MCP tools in the Markdown specification.]

Linting

main.md can get messy like any code. To help with this, you can ask Copilot to clean it up. Here’s lint.prompt.md from the example app:

---
mode: agent
---

- Optimize [the app specification](../../main.md) for clarity and conciseness
- Treat the english language as a programming language
- Minimize the number of synonyms - i.e. pull/get/fetch. Stick to one term.
- Remove duplicate content
- Preserve all important details
- Do not modify the Go code with this. Only optimize the Markdown file.
- Do not modify this prompt itself.

Like with compile.prompt.md, I use the / command to invoke this prompt. The AI coding agent lints main.md, and if the result looks good, I can compile it to Go with compile.prompt.md.

[Screenshot: GitHub Copilot in VS Code linting the Markdown specification for clarity and conciseness.]

Closing thoughts

After a few months using this workflow, here are my observations:

  • It works! And it gets better with each agentic update to Copilot.
  • Compilation slows down as main.go grows. Something I want to work on next is modifying the spec to break compiled code into multiple modules — by adding “Break each ## section into its own code module.”
  • Testing? I haven’t tried adding tests yet. But even with spec-driven workflows, testing remains essential. The spec may describe intended behavior, but tests verify it.

Something else I want to try next? Discarding all Go code and regenerating the app from scratch in another language. Will the new code work right away?

The rapid advances in this field are really encouraging, and I hope my experimental workflows give you some practical ideas to try. 

The post Spec-driven development: Using Markdown as a programming language when building with AI appeared first on The GitHub Blog.


The Crossing || Crapshots 809

From: loadingreadyrun
Duration: 1:12
Views: 3,665


The Searle Chair

John Searle died a couple weeks ago. Since people are sharing stories, I'll share one of my own.

In the 1990s, as a philosopher of science studying developmental psychology, my dissertation committee initially consisted of Elisabeth Lloyd, Martin Jones, and Alison Gopnik. The topic led me toward philosophy of mind, and Martin graciously suggested that if John Searle was willing to join, I might consider swapping him in.

So I approached Searle, mentioning that Lisa and Alison were the other members. He said, "Alison Gopnik?! Well, I guess it's okay, as long as I don't have to sit in the same room with her."

I thought, wow, he must really hate Alison! But Berkeley dissertations didn't require an oral defense, so indeed he wouldn't have to sit in the same room with her. I took his answer as a yes. Only later did I realize that his comment had a very specific meaning.

To understand this specific meaning, you need to know about the Searle Chair. At the time, the main seminar and meeting room in the Philosophy Department -- the Dennes Room -- had a peculiar and inconvenient layout. There was no seminar table. Up front by the chalkboard was a chair for the person leading the meeting. (I seem to remember it as a little folding chair with a card table, but it might not have been quite as informal as that.) Two elegant but uncomfortable antique couches lined the walls, and the remaining wall featured two large cozy armchairs, separated by a few smaller seats.

One armchair sat awkwardly near the front, angled partly away from the chalkboard. The other occupied the corner by the window, with a commanding view of the room. This corner armchair was plainly the best seat in the house. Everyone called it the Searle Chair, because whenever Searle attended a meeting, that's where he sat. Even if he arrived late, no one dared claim it.

My girlfriend Kim, briefly the graduate student representative at faculty meetings, once saw Barry Stroud make a play for the Searle Chair. Searle was late, so Barry sat in the chair. According to Kim, Searle arrived and practically sat on Barry, then mumbled something grumpy.

Barry, feigning innocence, said "Well, no one was sitting here."

Searle replied that he needed that chair because of his back -- something like "If my back starts hurting too much, I guess I'll just leave." (Indeed, he did have back troubles.)

Barry relented. "Well, if it's about your back...." He relocated to one of the bench couches. Searle settled into the Searle Chair. Order restored!

Later I shared this story with Alison. She said, "Oh, that's very interesting! One time I was at this meeting in the Dennes Room and there was this obviously best chair and no one was sitting in it. I thought, that's weird, so I just sat in it. And then John came in and said something about his back. I said, John, if your back starts hurting, just let me know."

And that, it turns out, is why John Searle didn't want to sit in the same room with Alison Gopnik.

[The Dennes Room as it looks now, with John Searle's photo in the corner that used to house the Searle Chair. Image sources: here and here]

Details of a Scam


Longtime Crypto-Gram readers know that I collect personal experiences of people being scammed. Here’s an almost:

Then he added, “Here at Chase, we’ll never ask for your personal information or passwords.” On the contrary, he gave me more information—two “cancellation codes” and a long case number with four letters and 10 digits.

That’s when he offered to transfer me to his supervisor. That simple phrase, familiar from countless customer-service calls, draped a cloak of corporate competence over this unfolding drama. His supervisor. I mean, would a scammer have a supervisor?

The line went mute for a few seconds, and a second man greeted me with a voice of authority. “My name is Mike Wallace,” he said, and asked for my case number from the first guy. I dutifully read it back to him.

“Yes, yes, I see,” the man said, as if looking at a screen. He explained the situation—new account, Zelle transfers, Texas—and suggested we reverse the attempted withdrawal.

I’m not proud to report that by now, he had my full attention, and I was ready to proceed with whatever plan he had in mind.

It happens to smart people who know better. It could happen to you.


Good job people, congratulations…


Sonnet 4.5 does complete replication checks of an econ paper.

That is Kevin Bryan, here is more from Ethan Mollick.

The post Good job people, congratulations… appeared first on Marginal REVOLUTION.


Armin Ronacher: 90%

The idea of AI writing "90% of the code" has to date mostly been expressed by people who sell AI tooling.

Over the last few months, I've increasingly seen the same idea coming from much more credible sources.

Armin is the creator of a bewildering array of valuable open source projects - Flask, Jinja, Click, Werkzeug, and many more. When he says something like this it's worth paying attention:

For the infrastructure component I started at my new company, I’m probably north of 90% AI-written code.

For anyone who sees this as a threat to their livelihood as programmers, I encourage you to think more about this section:

It is easy to create systems that appear to behave correctly but have unclear runtime behavior when relying on agents. For instance, the AI doesn’t fully comprehend threading or goroutines. If you don’t keep the bad decisions at bay early, you won’t be able to operate it in a stable manner later.

Here’s an example: I asked it to build a rate limiter. It “worked” but lacked jitter and used poor storage decisions. Easy to fix if you know rate limiters, dangerous if you don’t.

In order to use these tools at this level you need to know the difference between goroutines and threads. You need to understand why a rate limiter might want to "jitter" and what that actually means. You need to understand what "rate limiting" is and why you might need it!

These tools do not replace programmers. They allow us to apply our expertise at a higher level and amplify the value we can provide to other people.

Via lobste.rs

Tags: armin-ronacher, careers, ai, generative-ai, llms, ai-assisted-programming
