Posts tagged with "GitHub"

Me vs Claude

1 min read

After writing the earlier post I had to AFK to attend to normal life things. When I finally sat back at my keyboard, I decided to write my own take on minified_filename.

To recap, this is what Copilot/Claude came up with first:

def minified_filename(source: str) -> str:
    """Compute the minified output filename for a given source filename.

    Transforms the file extension: ``.css`` becomes ``.min.css`` and
    ``.js`` becomes ``.min.js``.  For example, ``theme.js`` becomes
    ``theme.min.js`` and ``style.css`` becomes ``style.min.css``.

    Args:
        source: Source filename ending in ``.css`` or ``.js``.

    Returns:
        The corresponding minified filename.

    Raises:
        ValueError: If *source* does not end with ``.css`` or ``.js``.
    """
    if source.endswith(".css"):
        return source[: -len(".css")] + ".min.css"
    if source.endswith(".js"):
        return source[: -len(".js")] + ".min.js"
    raise ValueError(f"Unsupported file extension for minification: {source!r}")

This is what it arrived at once it had self-reviewed the above:

def minified_filename(source: str) -> str:
    """Compute the minified output filename for a given source filename.

    Transforms the file extension: ``.css`` becomes ``.min.css`` and
    ``.js`` becomes ``.min.js``.  For example, ``theme.js`` becomes
    ``theme.min.js`` and ``style.css`` becomes ``style.min.css``.

    Args:
        source: Source filename ending in ``.css`` or ``.js``.

    Returns:
        The corresponding minified filename.

    Raises:
        ValueError: If *source* does not end with ``.css`` or ``.js``.
    """
    if source.endswith(".css"):
        return source.removesuffix(".css") + ".min.css"
    if source.endswith(".js"):
        return source.removesuffix(".js") + ".min.js"
    raise ValueError(f"Unsupported file extension for minification: {source!r}")

The tests it wrote looked like this:

class TestMinifiedFilename:
    """Test the minified_filename utility function."""

    def test_css_extension_becomes_min_css(self) -> None:
        """Test that a .css extension is replaced with .min.css."""
        assert minified_filename("style.css") == "style.min.css"

    def test_js_extension_becomes_min_js(self) -> None:
        """Test that a .js extension is replaced with .min.js."""
        assert minified_filename("theme.js") == "theme.min.js"

    def test_hyphenated_css_filename(self) -> None:
        """Test that a hyphenated CSS filename is handled correctly."""
        assert minified_filename("tag-cloud.css") == "tag-cloud.min.css"

    def test_hyphenated_js_filename(self) -> None:
        """Test that a hyphenated JS filename is handled correctly."""
        assert minified_filename("search.js") == "search.min.js"

    def test_unsupported_extension_raises(self) -> None:
        """Test that an unsupported extension raises ValueError."""
        with pytest.raises(ValueError, match="Unsupported file extension"):
            minified_filename("style.txt")

I wasn't too keen on the obsession with just .css and .js files (it seemed unnecessary), nor did I like the hard-coding of the resulting extensions and so on. It all felt too job-specific.

So my take on the code was this:

from pathlib import Path


def minified_filename(source: str | Path) -> str:
    """Compute the minified output filename for a given source filename.

    Args:
        source: Source filename.

    Returns:
        The corresponding minified filename.
    """
    if isinstance(source, str) and not source:
        return source
    if (source := Path(source)).suffix:
        source = source.with_suffix(f".min{source.suffix}")
    return str(source)

The tests being this:

class TestMinifiedFilename:
    """Test the minified_filename utility function."""

    @pytest.mark.parametrize(
        "before,after",
        [
            ("style.css", "style.min.css"),
            ("theme.js", "theme.min.js"),
            ("style.min.css", "style.min.min.css"),
            ("file", "file"),
            (".file", ".file"),
            (".file.css", ".file.min.css"),
            ("", ""),
        ],
    )
    def test_min_file(self, before: str, after: str) -> None:
        """Test that converting a filename to the minified version has the expected effect."""
        assert minified_filename(before) == after

So, yes, my version does work ever so slightly differently, but I feel it's more generic. It shouldn't be the business of this function to decide which type of file can have a .min slapped prior to its extension; if a caller asks for it, let them have it, they know what they're doing! Also, although it's not really necessary (because the code calling on it doesn't currently pass a Path), it will accept either a str or a Path.
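For instance (a quick REPL illustration of my own; the directory-carrying example assumes POSIX path separators):

>>> minified_filename("style.css")
'style.min.css'
>>> minified_filename(Path("static") / "theme.js")
'static/theme.min.js'
>>> minified_filename("file")
'file'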

I feel the big difference here too is the testing. Rather than one method after another, testing more or less the same thing with little variation, it makes more sense to have just the one test and then pass it lots of different input/output values. This is far more maintainable and also easier to write most of the time.

Of course, it's probably easier for an agent to take a copy/paste approach than to "reason" about what makes for a maintainable test. I sense this is one of the dangers of letting an LLM do this job (and it's one that's often touted as being a prime job for one): good tests can be useful documentation if you're trying to understand a codebase. Badly-written tests, no matter how much coverage they offer, are going to slow you down.

It was such a simple request

5 min read

As mentioned a couple of times in the last couple of days, aside from one particular issue I found and fixed, I'm in more of a "let's review some of the code and tidy things up" phase with the codebase. This process is partly me making changes by hand, and partly me directing the agent to make a very specific improvement that I want.

Yesterday evening I did a little experiment of getting Gemini CLI to look for code that really needed some cleaning up, and then I had it write the issue text which I fed directly to Copilot/Claude and had it do the work. Finally, when that was done, I had Gemini review the work that Copilot had done (it was "happy" with the changes).

So, this morning, I thought I'd tackle another little thing I'd noticed in the code that rubbed me up the wrong way. Early on in the development lifecycle of BlogMore I added the optional minification of CSS and JS files (HTML too eventually, but that's not involved here). Because it's often been a convention, I also prompted Copilot to ensure that if a file called whatever.css was minified, it would be called whatever.min.css.

The resulting code did something that made sense, but which I wouldn't ever have done. The constants that held the filenames looked like this:

CSS_FILENAME = "style.css"
CSS_MINIFIED_FILENAME = "style.min.css"
SEARCH_CSS_FILENAME = "search.css"
SEARCH_CSS_MINIFIED_FILENAME = "search.min.css"
STATS_CSS_FILENAME = "stats.css"
STATS_CSS_MINIFIED_FILENAME = "stats.min.css"
ARCHIVE_CSS_FILENAME = "archive.css"
ARCHIVE_CSS_MINIFIED_FILENAME = "archive.min.css"
CALENDAR_CSS_FILENAME = "calendar.css"
CALENDAR_CSS_MINIFIED_FILENAME = "calendar.min.css"
GRAPH_CSS_FILENAME = "graph.css"
GRAPH_CSS_MINIFIED_FILENAME = "graph.min.css"
TAG_CLOUD_CSS_FILENAME = "tag-cloud.css"
TAG_CLOUD_CSS_MINIFIED_FILENAME = "tag-cloud.min.css"
GRAPH_JS_FILENAME = "graph.js"
GRAPH_JS_MINIFIED_FILENAME = "graph.min.js"
CODE_CSS_FILENAME = "code.css"
CODE_CSS_MINIFIED_FILENAME = "code.min.css"
THEME_JS_FILENAME = "theme.js"
THEME_JS_MINIFIED_FILENAME = "theme.min.js"
SEARCH_JS_FILENAME = "search.js"
SEARCH_JS_MINIFIED_FILENAME = "search.min.js"
CODEBLOCKS_JS_FILENAME = "codeblocks.js"
CODEBLOCKS_JS_MINIFIED_FILENAME = "codeblocks.min.js"

Like... sure, 10/10 for not hard-coding these all throughout the codebase as magic strings1, but this feels a little redundant. Personally I think I'd have just kept the non-minified name and then had a function that generates the minified name from it. While, technically, it would add the smallest amount of runtime overhead to the code, I think the single-source-of-truth pay-off is worth it.
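Something like this, in other words (a sketch of the shape, using the minified_filename helper this post gets to shortly):

CSS_FILENAME = "style.css"
THEME_JS_FILENAME = "theme.js"

# ...and then, wherever a minified name is needed, it gets derived
# rather than maintained as a second, parallel constant:
minified_css = minified_filename(CSS_FILENAME)  # "style.min.css"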

For a good while though I left this alone. I was having fun playing with other things in the application, and adding all sorts of other amusing toys. But now that I'm more into a "how can this code be improved and what issues does the code have" mode, it felt like time to tackle this.

Given that a change here would touch so much of the code, and given I wasn't massively keen on spending ages walking through all the code and making the changes related to this, I decided to prompt Copilot to get on with this. It felt like something it couldn't get that wrong.

While it didn't get it wrong, as such, it made some questionable choices along the way. It did do the main thing I would have done: make a function to turn a filename into a minified filename. The initial version looked like this:

def minified_filename(source: str) -> str:
    """Compute the minified output filename for a given source filename.

    Transforms the file extension: ``.css`` becomes ``.min.css`` and
    ``.js`` becomes ``.min.js``.  For example, ``theme.js`` becomes
    ``theme.min.js`` and ``style.css`` becomes ``style.min.css``.

    Args:
        source: Source filename ending in ``.css`` or ``.js``.

    Returns:
        The corresponding minified filename.

    Raises:
        ValueError: If *source* does not end with ``.css`` or ``.js``.
    """
    if source.endswith(".css"):
        return source[: -len(".css")] + ".min.css"
    if source.endswith(".js"):
        return source[: -len(".js")] + ".min.js"
    raise ValueError(f"Unsupported file extension for minification: {source!r}")

That string-slicing with len and so on is nails on a chalkboard to me. When something like removesuffix exists, why on earth would "you" elect to do this? Of course the answer is obvious, but still... ugh.
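For anyone who hasn't run into it, str.removesuffix (added in Python 3.9) does exactly what that slicing is doing, minus the noise:

>>> "style.css".removesuffix(".css")
'style'
>>> "style.css".removesuffix(".min")  # No match means no change.
'style.css'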

Now, I will have to give credit to the process though. So the above was the initial version of the code. Once the PR had been created by Copilot, and I'd pulled it down for review and testing, it kicked off a review of its own. Reviewing its own code, it pushed back on itself:

In src/blogmore/generator.py, lines 90-93: The slice syntax source[: -len(".css")] is less readable than using source.removesuffix(".css"), which is available in Python 3.9+. Since this codebase targets Python 3.12+, consider using removesuffix() for clarity.

It then went on to do a further commit to tidy this up. I approve. Bonus point to Copilot here.

So now we have this:

def minified_filename(source: str) -> str:
    """Compute the minified output filename for a given source filename.

    Transforms the file extension: ``.css`` becomes ``.min.css`` and
    ``.js`` becomes ``.min.js``.  For example, ``theme.js`` becomes
    ``theme.min.js`` and ``style.css`` becomes ``style.min.css``.

    Args:
        source: Source filename ending in ``.css`` or ``.js``.

    Returns:
        The corresponding minified filename.

    Raises:
        ValueError: If *source* does not end with ``.css`` or ``.js``.
    """
    if source.endswith(".css"):
        return source.removesuffix(".css") + ".min.css"
    if source.endswith(".js"):
        return source.removesuffix(".js") + ".min.js"
    raise ValueError(f"Unsupported file extension for minification: {source!r}")

At this point the code is less worse. I don't think it's great, but it's less worse. Honestly, I think I'd be more inclined to do something with PurePath.suffixes and PurePath.suffix, leaning into the fact that we're dealing with filenames here, and so making it less about pure string slicing.
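Something in this direction, say (a sketch, not finished code):

from pathlib import PurePath


def minified_filename(source: str) -> str:
    """Return the minified counterpart of a source filename (sketch)."""
    path = PurePath(source)
    # No extension? There's nothing sensible to do, so hand the name back.
    if not path.suffix:
        return source
    return str(path.with_suffix(f".min{path.suffix}"))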

I also have other issues with the code, which I might still fix by hand:

  • The fact that it makes a point of only handling .css and .js files, and throws an error otherwise, is an odd choice. I mean, in context, that's what it's here to serve, but it seems oddly specific, an attention to detail that wasn't really necessary.
  • The hard-coding of .min a couple of times grates a little.
  • The hard-coding of both .css and .js a couple of times, with the doubled-up if feels unnecessary.

It's a small function. It works in context. It does the job. But it also could be more elegant in the way it does it.

I'd also like to go on a small aside for a moment, because there's something else in the above that bothers me: yesterday evening I spent some time directing Copilot to tidy up all the docstrings in the code. While any agent I've thrown at it does seem to have taken note of the AGENTS.md file, and the instructions on how to write the docstrings (Google style please), it seems to have decided it was aiming more at Sphinx when it came to the content. That's fine, I hadn't been explicit.

So last night I made it clear that I wanted something more like I use in all my Python code, that aims to work with mkdocstrings. It should use the inline code and cross-reference styles that are more common when using that tool. I even made a point of telling Copilot to update AGENTS.md to make it clear that this is the preference:

- All inline code and cross-references in docstrings **must** use mkdocstrings-compatible Markdown style:
    - Inline code: use single backticks (`like_this`).
    - Cross-references: use mkdocstrings reference-style Markdown links (e.g., [`ClassName`][module.ClassName] or [module.ClassName][]).
    - Do **not** use Sphinx roles (e.g., :class:`ClassName`) or double-backtick code (``ClassName``).

Now go back and look at the docstring for minified_filename. So much for agents making a point of following the instructions from AGENTS.md.
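For contrast, the same docstring written to those rules would look something more like this (my restyling, not anything the agent produced):

def minified_filename(source: str) -> str:
    """Compute the minified output filename for a given source filename.

    Transforms the file extension: `.css` becomes `.min.css` and `.js`
    becomes `.min.js`. For example, `theme.js` becomes `theme.min.js`.

    Raises:
        ValueError: If `source` does not end with `.css` or `.js`.
    """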

Anyway, back to the main flow here: given that I was thinking that I might rewrite minified_filename by hand so that it works "just so", I made a point of checking that it had written tests for this; something I couldn't take for granted.

Again, to the credit of the agent, it had written some tests:

class TestMinifiedFilename:
    """Test the minified_filename utility function."""

    def test_css_extension_becomes_min_css(self) -> None:
        """Test that a .css extension is replaced with .min.css."""
        assert minified_filename("style.css") == "style.min.css"

    def test_js_extension_becomes_min_js(self) -> None:
        """Test that a .js extension is replaced with .min.js."""
        assert minified_filename("theme.js") == "theme.min.js"

    def test_hyphenated_css_filename(self) -> None:
        """Test that a hyphenated CSS filename is handled correctly."""
        assert minified_filename("tag-cloud.css") == "tag-cloud.min.css"

    def test_hyphenated_js_filename(self) -> None:
        """Test that a hyphenated JS filename is handled correctly."""
        assert minified_filename("search.js") == "search.min.js"

    def test_unsupported_extension_raises(self) -> None:
        """Test that an unsupported extension raises ValueError."""
        with pytest.raises(ValueError, match="Unsupported file extension"):
            minified_filename("style.txt")

It's a start, but I think it could be done better. There's the test of the intended outcomes, and the test of the ValueError for passing something that isn't a .js or a .css file. Meanwhile, that business of testing "hyphenated" seems oddly specific for no good reason. But it's even worse: the test for a "hyphenated" JS file doesn't use a hyphenated file name.

Hilarious.

That's not all. What about the more obvious things like testing what happens if you pass a filename that has no extension, or a filename that already has two extensions, or a filename that already ends in .min.js, or a filename that has .min.css somewhere in its path that isn't at the end of the name, or an empty string, or...

Also why aren't most of these tests done using pytest.mark.parametrize?

As I said a few days ago: the code is mostly fine. It gets the job done. I've seen worse. I've reviewed worse. I've inherited worse. I think the thing that concerns me the most is that there has to be a lot of code like this being uncritically accepted after generation2, which in turn is surely going to be feeding back into future training. So while I can't deny that something has improved in the last six or so months, when it comes to agent-generated code, might it be that we are at peak quality right now? Might it be that from this point on we start to decline as "eh, it's... fine" code starts to overwhelm the most popular forge we have?

[Image: This is fine]

I suppose the main benefit still is that this approach is nice and cheap. Right?


  1. Actually, I think it did hard-code the filenames throughout the codebase, initially, until I asked it not to. Perhaps I'm misremembering, but agents do seem to love magic strings and numbers for some reason (I think we know the reason). 

  2. As I have been doing with BlogMore, on purpose. 

At least there are tests

3 min read

In a post yesterday I finished off by saying:

At least I have, as of the time of writing, 1,380 tests to check that I've not broken anything when I do hand-clean the code. But, hmm, there's a question: can I actually trust those tests? It's not like I wrote them.

This was, of course, slightly tongue-in-cheek, because I did anticipate that the coverage an agent delivered might not be as useful as you'd hope, and especially not at the level you'd personally aim for. On the other hand, I did expect it to have covered some of the fundamentals.

Being serious about wanting to hand-tidy some of the code as a way to start to get myself into the codebase1, I set out to look at validate_path_template in content_path.py. My plan for tidying the code overlapped with how both Claude and Gemini had approached it, but with a slightly different take. Nothing too radical, the main difference being that I didn't want a baked-in default for which variables were required (to recap: both the agents saw the need to make this configurable rather than hard-coded into the body of the function, but both still kept a "backward-compatible" default that had a "mixing of concerns" code smell about it).

A function such as validate_path_template, which has a core use but is intended to be of fairly general utility, which has a very obvious set of outputs for given inputs, and which has zero side effects and no dependencies, seems like a really obvious candidate for a good set of unit tests. This in turn should have meant that I could modify the code with confidence, and experiment with confidence, knowing that said tests would let me know when I'd screwed up.
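Something like this is the sort of thing I'd hope to find (a sketch; the allowed-variable set here is made up for illustration):

import pytest

from blogmore.content_path import validate_path_template

ALLOWED = frozenset({"year", "month", "day", "slug"})


class TestValidatePathTemplate:
    """Tests for the validate_path_template utility function."""

    @pytest.mark.parametrize(
        "template",
        ["{slug}.html", "{year}/{slug}/index.html"],
    )
    def test_valid_template(self, template: str) -> None:
        """A well-formed template that includes {slug} should validate."""
        validate_path_template(template, "post_path", ALLOWED, "post")

    @pytest.mark.parametrize(
        "template",
        ["", "no-slug.html", "{unknown}.html"],
    )
    def test_invalid_template(self, template: str) -> None:
        """Empty, slug-less and unknown-variable templates should all raise."""
        with pytest.raises(ValueError):
            validate_path_template(template, "post_path", ALLOWED, "post")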

I went looking for those tests so I could run them and them alone as I did this work.

Keep in mind, at this point, there are 1,380 tests that Copilot/Claude has written. That's a lot of tests. Of course there will be some direct tests of validate_path_template!

Spoiler: there weren't. No tests. At all. 1,380 tests inside the tests/ directory and not one that directly tested this utility function.

Now, sure, the function did have coverage. Before making any changes, the codebase had 94% coverage and content_path.py itself had 93%. In fact, the only thing that wasn't covered was the code that raised an exception if a template looked broken.

[Image: Coverage in main]

This, for me anyway, is a good example of how and where coverage doesn't help me. Sure, other code that is being tested is calling this, and if I change this code in ways that break that other code, I'll (probably) get to know about it. But if I want to properly understand the code (remember, I didn't write it, this is like getting to know someone else's2 code) it's really helpful to see a set of dedicated tests for that specific function.

There were none.

For a moment, I'm going to give Copilot/Claude an out. When I started BlogMore, right at the very start, just as I was messing about to see what would happen, I gave no thought to tests. It was only after a short while that I a) asked it to create a set of tests for the current behaviour and b) made it clear that all new code had to have tests. It is possible, just possible, that the content of content_path.py fell through that crack. I don't know for sure without going back and looking through the PR history. I'm not that curious right now.

What is interesting though is that, in setting both Copilot/Claude and Gemini on the same problem with the same prompt, and having them both identify the same area for improvement, neither seemed to arrive at the conclusion that adding dedicated tests was something worth doing.

So the point here -- which isn't a revelation at all, but I think has been nicely illustrated by what I've seen happen -- is that an agent might indeed create a lot of tests, and perhaps even achieve pretty good coverage too, but it's no guarantee that they're going to be useful tests when you want to get your hands dirty in the codebase.

Turns out that some of those tests might still need writing by hand, like I did for this tidy-up of content_path.py. Well, I say "by hand"; I did take this as an opportunity to test being pretty lazy about typing out the tests I wanted.

PS: While looking through the tests and tidying some code related to the above, I came across this:

from blogmore.pagination_path import (
    DEFAULT_PAGE_1_PATH,
    DEFAULT_PAGE_N_PATH,
    # ...other imports removed for brevity...
)

class TestDefaults:
    """Tests for the default constant values."""

    def test_default_page_1_path(self) -> None:
        """The default page_1_path should be 'index.html'."""
        assert DEFAULT_PAGE_1_PATH == "index.html"

    def test_default_page_n_path(self) -> None:
        """The default page_n_path should be 'page/{page}.html'."""
        assert DEFAULT_PAGE_N_PATH == "page/{page}.html"

Brilliant. I guess "line goes up" has come to agent-written tests. But look! 1,380 tests guys!


  1. Remember: up until this point this has mostly been an experiment in uncritically letting Copilot do its thing. 

  2. Arguably this is someone else's code, with extra steps. 

A different approach

4 min read

As mentioned in the previous post, I've been having a play around with Copilot/Claude vs Gemini when it comes to getting the agents to seek out "bad" code and improve it. In that first post on the subject, I highlighted how both tools noticed some real duplication of effort, both addressed it in more or less the same way, and neither of them took the clean-up to its logical conclusion (or, at the very least, neither cleaned it up in a way that I feel is acceptable).

The comparison of the two PRs (Gemini vs Claude via Copilot) is going to be a slow and occasional read, and if I notice something that catches my interest, I'll note it on this blog.

Initially, I was looking at which files were touched by both. With Gemini it was:

And with Copilot/Claude:

On the surface, it looks like Claude might have done a better job of finding untidy issues in the code. Of course a proper read/assessment of the outcome is needed to decide which is "better"; not to mention the application of a lot of personal taste.

So, with the initial/surface impression that "Claude went deeper", I took a look at the first file they had in common: content_path.py. This is documented as a module related to:

Shared path-resolution utilities for content output paths.

This module provides the generic building blocks used by page_path and post_path. Each content type supplies its own allowed-variable set and variable dict; this module handles the common validation, substitution, and safety checks.

There are three functions in there:

  • validate_path_template -- for validating a format string used in building a path.
  • resolve_path -- given a template and some values to populate variables in the template, create a path.
  • safe_output_path -- helper function for joining paths and ensuring they don't escape the output directory.

These seem like sensible functions to have in here, and I can imagine me writing a similar set in terms of the problem they seek to solve.

Both agents seemed to agree on what needed some work: validate_path_template. Both also seem to agree that building knowledge of which variable is required into the function itself isn't terribly flexible; I feel this is a reasonable review of the situation. However, the two agents seem to disagree on how this should be resolved.

Claude's take on this is that the function should grow an optional keyword argument called required_variable, which defaults to slug. It also adds an assert to test if the required variable exists in the allowed_variables (okay, I could quibble about this but given this is a code-check rather than a user-input check, eh, I can go with it). Finally it does the check using the new variable and also makes the error reporting a touch more generic too.

--- /Users/davep/content_path.py        2026-04-30 13:20:00.737955197 +0100
+++ src/blogmore/content_path.py        2026-04-30 13:20:04.560178727 +0100
@@ -17,13 +17,15 @@
     template: str,
     config_key: str,
     allowed_variables: frozenset[str],
-    item_name: str,
+    item_name: str = "",
+    *,
+    required_variable: str | None = "slug",
 ) -> None:
     """Validate a path format string for a content type.

     Checks that *template* is non-empty, well-formed, references only
-    variables from *allowed_variables*, and includes the mandatory
-    ``{slug}`` placeholder.
+    variables from *allowed_variables*, and (when *required_variable* is
+    not ``None``) includes the mandatory placeholder.

     Args:
         template: The path format string to validate.
@@ -33,11 +35,19 @@
             template.
         item_name: The human-readable name of the content type used in
             the uniqueness error message (e.g. ``"page"`` or ``"post"``).
+            Ignored when *required_variable* is ``None``.
+        required_variable: The variable name that must appear in the
+            template, or ``None`` if no variable is mandatory.  Defaults
+            to ``"slug"`` for backward compatibility.

     Raises:
         ValueError: If the template is empty, malformed, references an
-            unknown variable, or omits the ``{slug}`` placeholder.
+            unknown variable, or omits the required placeholder.
     """
+    assert required_variable is None or required_variable in allowed_variables, (
+        f"required_variable {required_variable!r} is not in allowed_variables"
+    )
+
     if not template:
         raise ValueError(f"{config_key} must not be empty")

@@ -61,9 +71,9 @@
             + f". Allowed variables are: {', '.join(sorted(allowed_variables))}"
         )

-    if "slug" not in field_names:
+    if required_variable is not None and required_variable not in field_names:
         raise ValueError(
-            f"{config_key} '{template}' must contain the {{slug}} variable so that "
+            f"{config_key} '{template}' must contain the {{{required_variable}}} variable so that "
             f"each {item_name} can be uniquely identified"
         )

Gemini, on the other hand, has a very similar idea but allows for the fact that the caller might want to specify multiple required variables. So in this case it adds required_variables (as a regular positional-or-keyword argument rather than a keyword-only one) and defaults it to a frozenset that contains "slug". The rest of the change is also about making the test for the required variables, and the reporting of the error, generic. It doesn't do anything about checking that the required variables are within the allowed variables.

--- /Users/davep/content_path.py        2026-04-30 13:20:00.737955197 +0100
+++ src/blogmore/content_path.py        2026-04-30 14:47:41.607748447 +0100
@@ -18,12 +18,13 @@
     config_key: str,
     allowed_variables: frozenset[str],
     item_name: str,
+    required_variables: frozenset[str] = frozenset({"slug"}),
 ) -> None:
     """Validate a path format string for a content type.

     Checks that *template* is non-empty, well-formed, references only
-    variables from *allowed_variables*, and includes the mandatory
-    ``{slug}`` placeholder.
+    variables from *allowed_variables*, and includes the
+    *required_variables*.

     Args:
         template: The path format string to validate.
@@ -33,10 +34,13 @@
             template.
         item_name: The human-readable name of the content type used in
             the uniqueness error message (e.g. ``"page"`` or ``"post"``).
+        required_variables: The set of variable names that MUST appear
+            in the template to ensure uniqueness. Defaults to
+            ``{"slug"}``.

     Raises:
         ValueError: If the template is empty, malformed, references an
-            unknown variable, or omits the ``{slug}`` placeholder.
+            unknown variable, or omits a required variable.
     """
     if not template:
         raise ValueError(f"{config_key} must not be empty")
@@ -61,10 +65,12 @@
             + f". Allowed variables are: {', '.join(sorted(allowed_variables))}"
         )

-    if "slug" not in field_names:
+    missing = required_variables - set(field_names)
+    if missing:
         raise ValueError(
-            f"{config_key} '{template}' must contain the {{slug}} variable so that "
-            f"each {item_name} can be uniquely identified"
+            f"{config_key} '{template}' must contain the "
+            + ", ".join(f"{{{v}}}" for v in sorted(missing))
+            + f" variable(s) so that each {item_name} can be uniquely identified"
         )

For the most part I think I prefer what Gemini is trying to do, although Claude's sanity check that the required variable is one of the possible variables makes sense. I kind of feel like both of them missed the point when it came to handling the fact that "slug" is required: given that validate_path_template is otherwise built to be pretty generic, I think I would have defaulted to nothing and simply left it up to the caller to be explicit that "slug" is required, because that matters in the context of the caller. This feels like a pretty obvious case of a "business logic" vs "generic utility code" separation of concerns scenario.
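Something shaped like this, in other words (a sketch; the template and variable set in the call are made up for illustration):

def validate_path_template(
    template: str,
    config_key: str,
    allowed_variables: frozenset[str],
    item_name: str,
    required_variables: frozenset[str] = frozenset(),
) -> None:
    """Validate a path format string for a content type (body elided)."""


# The caller is the code that knows {slug} matters, so it says so explicitly:
validate_path_template(
    "{year}/{slug}/index.html",
    "post_path",
    frozenset({"year", "month", "day", "slug"}),
    "post",
    required_variables=frozenset({"slug"}),
)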

As mentioned in passing in another post, it's interesting to see that neither of them noticed the opportunity to turn this:

unknown = set(field_names) - allowed_variables
if unknown:
    ...

into this:

if unknown := (set(field_names) - allowed_variables):
    ...

I know at least one person who would be happy about this fact.

So where does this leave me? At the moment I'm not inclined to merge either PR, but that's mainly because I want to carry on reading them and perhaps writing some more notes about what I encounter. What this does illustrate for me is something we know well enough anyway, but which I wanted to experiment with and see for myself: the initial implementation of any working code written by an agent seems optimised for that particular function or method, perhaps class if you're lucky. It will happily repeat the same code to solve similar problems, or perhaps even use very different approaches to solve the same problem. What it won't do well is recognise that this problem is solved elsewhere and so either use that other code by calling it, or perhaps modify it slightly to make it more generic and more applicable in more situations.

On the other hand, it has shown that with a bit of prompting (and keep in mind that the prompt that arrived at this comparison was really quite vague) it is possible to get an agent to "consider" the problem of duplication and boilerplate and to try and address that.

Having seen the two solutions on offer here, it's hard not to conclude that the best solution would be for me to take the PRs as flags marking places in the code that could be cleaned up, and do the tidy myself.

At least I have, as of the time of writing, 1,380 tests to check that I've not broken anything when I do hand-clean the code. But, hmm, there's a question: can I actually trust those tests? It's not like I wrote them.

Guess that's a whole other thing to worry about at some point...

Duplication of effort

3 min read

While I don't, for a moment, think that the work on BlogMore is complete, I think it's fair to say that the rate of new feature additions has slowed down. Which is fine, there's only so much I need from a self-designed/directed static site generator; at a certain point there's a danger of adding features for the sake of it.

Around this point I think I want to start to pay proper attention to the code quality and maintainability of the ongoing experiment.

As I mentioned the other day, while working through this, I had noticed plenty of bad habits that Copilot (and in this case pretty much always Claude Sonnet 4.6) has. All were very human (obviously), but also the sort of thing you'd expect a human developer to educate themselves out of.

Yesterday evening, out of idle curiosity, I installed Gemini CLI because I wanted to see what would happen if I pointed it at the v2.18.0 codebase and asked it to look for things to clean up, and then what would happen if I did the same with Copilot CLI.

I've saved the results as a PR for what Gemini came up with and what Copilot came up with1. I've not given them a proper read over yet, but while having a quick glance at them something leapt out at me: in the code before the request, there was this in utils.py:

import re


def count_words(content: str) -> int:
    """Count the number of words in the given content.

    Strips common Markdown and HTML formatting before counting so that only
    prose words are included.  The same normalisation rules as
    :func:`calculate_reading_time` are applied.

    Args:
        content: The text content to analyse (may include Markdown/HTML).

    Returns:
        The number of words in the content.

    Examples:
        >>> count_words("Hello world")
        2
        >>> count_words("word " * 10)
        10
    """
    # Remove code blocks
    content = re.sub(r"```[\s\S]*?```", "", content)
    content = re.sub(r"`[^`]+`", "", content)

    # Remove markdown links but keep the text: [text](url) -> text
    content = re.sub(r"\[([^\]]+)\]\([^\)]+\)", r"\1", content)

    # Remove markdown images: ![alt](url) -> ""
    content = re.sub(r"!\[([^\]]*)\]\([^\)]+\)", "", content)

    # Remove HTML tags
    content = re.sub(r"<[^>]+>", "", content)

    # Remove markdown formatting characters
    content = re.sub(r"[*_~`#-]", " ", content)

    return len([word for word in content.split() if word])


def calculate_reading_time(content: str, words_per_minute: int = 200) -> int:
    """Calculate the estimated reading time for content in whole minutes.

    Uses the standard reading speed of 200 words per minute. Strips markdown
    formatting and counts only actual words to provide an accurate estimate.

    Args:
        content: The text content to analyze (can include markdown)
        words_per_minute: Average reading speed (default: 200 WPM)

    Returns:
        Estimated reading time in whole minutes (minimum 1 minute)

    Examples:
        >>> calculate_reading_time("Hello world")
        1
        >>> calculate_reading_time("word " * 400)
        2
    """
    # Remove code blocks (they typically take longer to read/understand)
    content = re.sub(r"```[\s\S]*?```", "", content)
    content = re.sub(r"`[^`]+`", "", content)

    # Remove markdown links but keep the text: [text](url) -> text
    content = re.sub(r"\[([^\]]+)\]\([^\)]+\)", r"\1", content)

    # Remove markdown images: ![alt](url) -> ""
    content = re.sub(r"!\[([^\]]*)\]\([^\)]+\)", "", content)

    # Remove HTML tags
    content = re.sub(r"<[^>]+>", "", content)

    # Remove markdown formatting characters
    content = re.sub(r"[*_~`#-]", " ", content)

    # Count words (split by whitespace and filter out empty strings)
    words = [word for word in content.split() if word]
    word_count = len(words)

    # Calculate minutes, rounding to the nearest minute with a minimum of 1
    minutes = max(1, round(word_count / words_per_minute))

    return minutes

I think this right here is a great example of why the code that these tools produce is generally kind of... meh. Let's just really appreciate for a moment the duplication of effort going on there. But it's even more fun. Look at the docstring2 for count_words: it says right there that the "same normalisation rules as calculate_reading_time are applied". It "knows" it copied the work that went into calculate_reading_time too, but never once did it then "think" to pull the common code out and have both of the functions call on that helper function.

Back to the parallel invitations to refactor, having asked:

please do a review of this codebase and see if there is any scope for refactoring so there's less duplication

Both Gemini and Claude noticed this and did something about it. Gemini came up with a:

def _strip_formatting(content: str) -> str:

with all the regex-based-markdown-stripping code in there and then rewrote count_words and calculate_reading_time to call on that. The Copilot/Claude cleanup did something very similar:

def _strip_markdown_formatting(content: str) -> str:

So it's a good thing that both of them "noticed" this duplication of effort and cleaned it up. What I do find interesting though is what the result was. Stripping docstrings and comments for a moment, here's what I was left with, by Gemini, for count_words and calculate_reading_time:

def count_words(content: str) -> int:
    content = _strip_formatting(content)
    return len([word for word in content.split() if word])

def calculate_reading_time(content: str, words_per_minute: int = 200) -> int:
    content = _strip_formatting(content)
    words = [word for word in content.split() if word]
    word_count = len(words)
    minutes = max(1, round(word_count / words_per_minute))
    return minutes

and here's what Copilot/Claude came up with:

def count_words(content: str) -> int:
    return len([word for word in _strip_markdown_formatting(content).split() if word])

def calculate_reading_time(content: str, words_per_minute: int = 200) -> int:
    words = [word for word in _strip_markdown_formatting(content).split() if word]
    return max(1, round(len(words) / words_per_minute))

In both cases calculate_reading_time is still doing the work of counting words when count_words is right there to be called! Don't even get me started on how the Gemini version of calculate_reading_time is so obsessed with assigning values to variables that only get used once in the next line3. Were I reviewing these PRs (oh, wait, I am reviewing these PRs!), I'd request the latter function be turned into:

def calculate_reading_time(content: str, words_per_minute: int = 200) -> int:
    return max(1, round(count_words(content) / words_per_minute))
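While in there, I'd also point out that the word-counting comprehension is doing nothing: str.split() with no arguments already splits on runs of whitespace and never yields empty strings. So count_words could shrink too (my sketch):

def count_words(content: str) -> int:
    # split() with no arguments discards empty strings itself,
    # so no "if word" filtering is needed.
    return len(_strip_markdown_formatting(content).split())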

I would imagine that there's a lot more of this going on in the code, and under ideal conditions this sort of thing would not have made its way into the codebase in the first place. Part of the point of this experiment was to mostly get the agent to do its own thing, without me doing full-on reviews of every PR. Were I to use this sort of tool in a workplace, or even on a FOSS project that wasn't intended to be this exact experiment, I'd be far more inclined to carefully review the result and request changes.

Or, perhaps, hear me out... I have a third agent that I teach to be just like me and I get it to do the work of reviewing the PRs for me. What could possibly go wrong?


  1. Again, I guess I should stop referring to Copilot in this case and instead refer to Claude Sonnet. 

  2. Note to self: I need to educate the agents in how I prefer and always use the mkdocstrings style of cross-references.

  3. Yes, I know, this is a favoured clean code kind of thing in some circles, but it can be taken to an unnecessary extreme. 

On GitHub

2 min read

It seems that dunking on GitHub is the flavour of the day. At the moment most of the social/news type things I tend to read are filled with the Ghostty news, as well as a small revival of posts and links to blog posts about all the recent outages. It's understandable. It does seem that something has shifted with GitHub in the last few months. While it hasn't been the site I used to enjoy for quite some time now, it just seems to be getting worse at the moment.

It's even pissing off the loyal AI enthusiasts with the Copilot changes.

As I read all of this I find myself mostly nodding along. For the most part GitHub isn't getting in the way and stopping me from doing the things I want to do, and it does act as a vital tool that lets me get work done, and also lets me enjoy my longest-enjoyed hobby. On the other hand I couldn't help but sigh and think "yeah, I get why this is the time that people are done" when I opened up the PR page for this blog, just now, and saw this:

[Image: A warning about my PRs]

It does get me thinking about my relationship with GitHub, and how long I've been using it. As I've written before, I created my account back in 2008; I was within the first 30,000 users. While my use of it was only very occasional for quite a long time, for the last decade I've been constantly interacting with it. It is somewhere I visit constantly, not just to do work on my own projects, but to read what other people are doing. One of the first things I do every morning, when I sit down at my desk, is open my GitHub dashboard page and have a scroll through the feed to see what people I follow have been up to.

It's generally been the most fulfilling feed I've read.

But I'm also getting that feeling I got when I hung on to my Twitter account far longer than I really should have; not just because of the general vibe of "it's falling apart", but also because of other types of questionable behaviour. The degrading performance, the troubling business relationships, the over-emphasis on all things AI... it adds up.

There is a sense that the time to move elsewhere as my sociable forge was some while ago (probably around the time that Microsoft took over), and that, not having done that, now is the second-best time. But the effort of making that move is non-trivial and, quite frankly, I'd want to see where folk start to land, if they started to move away in any numbers at all. For me the real utility of GitHub isn't the "it's somewhere to store my shit" thing, it's the social coding thing.

Then there's the follow-up problem: if some other forge was to become the next flavour of the decade, it too would probably end up suffering the same fate as GitHub.

Perhaps now is the time for me to start looking into options for collaborative code forges that offer the same sort of solution that Mastodon does for Twitter-like nattering.

Global and local MCP servers with Copilot CLI

1 min read

This morning I'm tinkering some more with NGMCP. Having done a release yesterday and tested it out by globally installing it with:

uv tool install ngmcp

I was then left with the question: how do I easily test the version of the code I'm working on, when I now have it set up globally? Having done the global installation I had ~/.copilot/mcp-config.json looking like this:

{
  "mcpServers": {
    "ngmcp": {
      "command": "ngmcp",
      "args": [],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

whereas before it looked like this:

{
  "mcpServers": {
    "ngmcp": {
      "command": "uv",
      "args": ["run", "ngmcp"],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

But now I want both. Ideally I'd want to be able to set up an override for a specific server in a specific repository. I did some searching and reading of the documentation and, from what I can tell, there's no method of doing that right now1. So I've settled on this:

{
  "mcpServers": {
    "ngmcp-global": {
      "command": "ngmcp",
      "args": [],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    },
    "ngmcp-local": {
      "command": "uv",
      "args": ["run", "ngmcp"],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

and then in Copilot CLI I just use the /mcp command to enable one and disable the other. It's kind of clunky, but it works.


  1. I did see the suggestion that you can write your MCP server so that it simply doesn't respond depending on the context, but that seems horribly situation-specific and wouldn't really help in this case anyway because I want it to work in both contexts, depending on what I'm doing.

GitHub Copilot wants our interaction data

(Modified: 2026-03-26 10:17:43 UTC)
3 min read

I guess it was inevitable1, but yesterday GitHub announced a new opt-out approach to learning from people's interactions with Copilot. I don't have anything novel or insightful to say on this switch, and I'm sure folk with better-informed opinions have already rushed out posts and articles about this, but I did want to jot down just how curious I am to see this roll out.

For starters: for me this feels like one of those things that will get a lot of backlash, and in a day or so GitHub will say they're pausing rolling this out while they reevaluate this approach2. Then, eventually, they roll it out anyway after a "period of consultation with the community". That sort of thing.

I've not read further this morning, but before going to bed last night it wasn't a happy time in the comments section of the FAQ. I can also see why some would be cynical about this change, given the tone of some of the questions and answers in that FAQ. I'll hand it to them: they're pretty candid and honest with the FAQs, but kinda yikes too.

[Image: A bad time in the FAQ]

Here's the key thing I'm curious about, and which I'll be thinking about and watching for movement on in the next few days: all the talk here seems to be about protecting the privacy of the proprietary code of businesses3. That... is understandable, from a business point of view, from a commercial adoption point of view, from a "we want all software engineering departments to use Copilot" point of view. But how the heck are they really going to manage that?

In the comments in the FAQ this explanation stood out:

We do not train on the contents from any paid organization’s repos, regardless of whether a user is working in that repo with a Copilot Free, Pro, or Pro+ subscription. If a user’s GitHub account is a member of or outside collaborator with a paid organization, we exclude their interaction data from model training.

This seems somewhat unclear to me. Let's walk this one through for a moment: my GitHub account is a member of a "paid organisation". My account is also my account, for my personal code, I've had it a long time and it's filled with a lot of FOSS repos and I keep adding more. So which scenario is the right one here?

  1. Because I'm currently a member of at least one "paid organisation" I'm always opted-out of this training no matter how the opt in/out setting is set and no matter what code I work on?
  2. Because I'm currently a member of at least one "paid organisation" I can opt in when working on code that is from a repository which is mine, but I'm opted out when I'm working on code from a repository belonging to the paid organisation?

I think it reads like it's #1. But then that seems rather odd to me because, if I go and look at my settings right now, I can elect to opt in/out of this training system. If the correct reading is #1, why not just disable that setting altogether and say below it that I'm opted out because I'm part of a paid organisation?

Which sort of suggests we should perhaps read it as #2? If so, that raises all sorts of questions. How would Copilot know I'm working on code from such a repository? Sure, it's not impossible to infer that I'm working within the context of a given repository, doing some fun stuff to work out the origin and so on, but it feels messy. It also feels like a scenario that could end up being incredibly leaky. It really would not be difficult to run into a scenario where I'm working on some non-Free code but in an environment where the licence isn't clear, or where it appears that the licence4 would permit such training.

ℹ️ Note

Editing to add: there is even a comment where it is acknowledged that someone could be working in such a way that it's impossible to know the provenance of the code: "Copilot ... can even work when you are not connected to any repo."

Or... perhaps there's a #3, or a #4, or so on, that I've not even considered yet. The fact that software engineering departments suddenly have to start thinking about this issue (yes, I know, it's been a background issue for a while but this really drags it out into the open) is going to make for a few interesting weeks, assuming people care about where their code ends up.

Who knows. Perhaps, in some strange way, this is how all software ends up being Free.


  1. And I think a bit of me is surprised that they weren't just doing it anyway. 

  2. This isn't a prediction, I'm just saying it feels like that sort of announcement. 

  3. It's not that simple, but to save getting into the deep detail... 

  4. I'm using licence here as shorthand for a lot of things to consider relating to who should have access to the code and how. 

Copilot rate limits

1 min read

Last night, while tinkering with another BlogMore feature request, I ran into the sudden rate limit issue again. As before, there was no warning, there was no indication I was getting close to any sort of limit, and the duration I was supposed to wait to let things cool down was given as <duration> rather than an actual value.

So this time I decided to actually drop a ticket on GitHub support. Around 12 hours later they got back to me:

Thanks for writing in to GitHub Support!

I completely understand your frustration with hitting rate limits.

As usage continues to grow on Copilot — particularly with our latest models — we've made deliberate adjustments to our rate limiting to protect platform stability and ensure a reliable experience for all users. As part of this work, we corrected an issue where rate limits were not being consistently enforced across all models. You may notice increased rate limiting as these changes take effect.

Our goal is always that Copilot remains a great experience, and you are not disrupted in your work. If you encounter a rate limit, we recommend switching to a different model, using Auto mode. We're also working on improvements that will give you better visibility into your usage so you're never caught off guard.

We appreciate your patience as we roll out these changes.

So, in other words: expect to be rate limited more on this product we're trying to get everyone hooked on and trying to get everyone to subscribe to.

Neat.

I especially like this part:

We're also working on improvements that will give you better visibility into your usage so you're never caught off guard.

You know, if it were me, if I wanted to build up and keep goodwill for my product, I'd probably do that part first and communicate about it earlier rather than later.

I guess this is why I don't hold the sort of position that gets to make those decisions.

Too much work for Copilot?

1 min read

I don't know if it's just that GitHub Copilot is having a bad time at the moment, or if I've run into a genuine problem, but all isn't well today. After merging last night's result I kicked off another request related to a group of changes I want to make to BlogMore. It's a little involved, and it did take it a wee while to work on, but mostly it got the work done.

Again, as I said in the earlier post, I won't get into the detail of these changes yet, but they're fairly extensive and do include some breaking changes, so it's probably going to take a wee while to have it all come together. Claude's first shot at the latest change was almost there, but with the glaring bug that, while it did all the work, it didn't actually add the part that reads the configuration file settings and uses them (yeah, that's a good one, isn't it?).

So I asked it to fix it. It went off, worked on the issue, and then suddenly...

[Image: Denied]

This surprised me. After the past few weeks I've had sessions where I've requested it do things way more frequently than this morning. I'm nowhere near out of premium requests either:

[Image: Number of requests left]

While the error, as shown, might be valid and might be down to my actions, it's massively unhelpful and doesn't really explain what I did to cause this or even how I can remedy it. This is made all the more frustrating by the fact that it seems to be saying I need to wait <duration> to try again. Yes, literally a placeholder of <duration>. O_o

One thing is for sure: this is another useful experiment within the experiment. It's worth having the experience of the tool screwing with the flow. It doesn't come as a surprise, but it's a good reminder that using an agent hosted by someone else means you fully rely on their ability to keep it working, on the whims of their API limits, and perhaps even on your ability to pay.