Posts in category "AI"

Too much work for Copilot?

2 min read

I don't know if it's just that GitHub Copilot is having a bad time at the moment, or if I've run into a genuine problem, but all isn't well today. After merging last night's result I kicked off another request related to a group of changes I want to make to BlogMore. It's a little involved, and it did take a wee while to work through, but mostly it got the work done.

Again, as I said in the earlier post, I won't get into the detail of these changes yet, but they're fairly extensive and do include some breaking changes, so it's probably going to take a wee while to have it all come together. Claude's first shot at the latest change was almost there but with the glaring bug that it did all the work but didn't actually add the part that reads the configuration file settings and uses them (yeah, that's a good one, isn't it?).

So I asked it to fix it. It went off, worked on the issue, and then suddenly...

Denied

This surprised me. After the past few weeks I've had sessions where I've requested it do things way more frequently than this morning. I'm also nowhere near out of premium requests either:

Number of requests left

While the error, as shown, might be valid and might be down to my actions, it's massively unhelpful and doesn't really explain what I did to cause this or even how I can remedy it. This is made all the more frustrating by the fact that it seems to be saying I need to wait <duration> to try again. Yes, literally a placeholder of <duration>. O_o

One thing is for sure: this is another useful data point in the current experiment. It's worth having the experience of the tool screwing with the flow. It doesn't come as a surprise, but it's a good reminder that using an agent hosted by someone else means you fully rely on their ability to keep it working, on the whims of their API limits, and perhaps even on your own ability to pay.

When your model leaves you

5 min read

Yesterday evening, after dinner, but just before loading up a game released just a few hours earlier, I decided to kick off a change to BlogMore. Part of the ongoing experiment is this convenience aspect: where I can get some work going and then go and do something entirely different.

The nature of the change itself isn't important (I'll write about it when I release an update), but something that happened is. As usual, I did the prompt as an issue, and assigned it to Copilot. The first thing that was curious was that, after around 5 minutes, despite it having added the 👀 reaction (which it seems to use to indicate the agent has seen it has work to do), nothing happened. I didn't see an agent kick off doing the work. Eventually I gave up waiting and opened a Copilot session and asked it to set about dealing with the issue I'd raised.

It did as I requested, but apparently alongside another agent which had suddenly started doing the exact same work. Thinking it was a glitch (it's not like GitHub hasn't been having some trouble of late) I stopped the newest one and let the "original" go about its work.

A wee bit later, just before I started up my game and the stream, I checked in on how it was doing. It had finished, but in a quick test, I noticed a small bug. I prompted it to fix the problem, closed the tab, and went about the real business of killing bugs.

Fast forward a couple of hours, I was done with getting my arse kicked on Klendathu, I packed away my controller and headset and opened GitHub again to see where Copilot was at. It wasn't good:

@davep The model claude-sonnet-4.6 is not available for your account. This can happen if the model was disabled by your organization's policy or if your Copilot plan doesn't include access to it.

You can try again without specifying a model (just @copilot) to use the default, or choose a different model from the model picker.

If you want to contact GitHub about this error, please mention the following identifier so they can better serve you: 79b81d32-e26a-4898-bd41-4bba088d08f6

Wait... what? I've been using this for weeks now, as best as I can tell I've generally been making all the changes and improvements using Claude Sonnet 4.6; there's never been an issue. Then, suddenly, in the middle of a PR, I don't have it?

Quickly checking elsewhere, sure enough, I had access to almost no models. I can't remember what was there, but it wasn't much, and all the Claude ones had disappeared. Even when I tested in the Copilot CLI1, I saw a very limited set of models.

Around this time I had two reactions: one was something like "cool, this is an important part of the experiment: knowing how it goes if your models are taken from you"; the other was a curious "but Claude and I have an understanding about this codebase; I can't trust some other model I've not been using".

As it was getting late into the evening and I still wanted to watch an episode of Stargate SG-1 (yes, I'm doing a full rewatch given it's on Netflix) I closed the MacBook and decided to check in on the issue in the morning.

Fast forward one SG1 episode later and, just before I headed to bed, I decided to do a quick search. While it could have been a problem with my own account, it felt like more of a general issue. It was more of a general issue. At that point (around 23:20 UTC), checking in the GitHub app on my phone, I could see that some, if not all, of the models I was used to seeing had come back.

As of this morning, at the time of writing, it's all looking back to how it was.

All the models back

All of which is a great reminder, and something useful in the experiment: what does happen when some third party takes away the models you're using to get your work done? In the time I've been building up BlogMore I've come to trust the quality of Claude Sonnet (in the sense that I know when and where I have to pay closer attention to what it's done, and where it'll very likely have done a pretty good job2), so finding that I'd possibly have to switch to some other model that I've had no experience with... I genuinely had a moment of concern about how and where I was going to take BlogMore.

Ultimately it's not actually a serious concern for me: while I aim to maintain it for a very long time to come (it is how I'm creating the site for this blog after all), BlogMore isn't that critical. Moreover, I know I could cease using Copilot to create and maintain it and I could tidy up the code and hand-update and hand-manage it. There's a reason why I decided to really dive into this using a language I'm extremely confident with.

But for those applications some might now be relying on, developed by someone keen but as yet unskilled, what way forward for them if such a situation were to happen and not be resolved?


  1. Which I'm not really using at the moment, but do have installed to experiment with. ↩

  2. So just like working with humans, oddly enough. ↩

You get what you pay for

3 min read

Just recently I saw a post go past on Mastodon complaining about a perceived breakdown of trust in the FOSS world with respect to the use of AI to work on FOSS projects, or at least the willingness to accept AI-assisted contributions. The post also highlighted the author's reliance on FOSS projects and how that reliance is driven by ethical and financial motivations (some emphasis was placed on how they had no money to spend on these things, so it wasn't even necessarily a choice to FOSS up their environment).

This was, of course, in response to the current fuss about how Vim is being developed these days.

I don't want to comment on the Vim stuff -- I've got no dog in that fight -- but something about the post I mention above got me thinking, and troubled me.

Back when I first ran into the concept of Free Software, before the concept of Open Source had ever been thought of, I can remember reading stuff opposed to the idea that mostly worked along the lines of "you get what you pay for" -- the implication being that Free Software would be bad software. I think it's fair to say that history has now shown that this isn't the case.

But... I think it's fair to say that you do get what you pay for, but in a different sense.

If your computing environment is fully reliant on the time, money and effort of others; reliant on people who are willing to give all of that without the realistic expectation of any contribution back from you; I feel it's safe to say that you are getting a bloody good deal. To then question the motivations and abilities of those people, because they are exploring and embracing other methods of working, is at best a bad-faith argument and at worst betrays a deep sense of entitlement.

What I also found wild was that the post went on to document the author's concerns that they now have to worry about the ability of FOSS project maintainers to detect bad contributions. To me this suggests a lack of understanding of how non-trivial FOSS projects have worked ever since FOSS was a thing.

I mean, sure, there are some projects that are incredibly useful and which have a solo developer working away (sometimes because nobody else wants to contribute, but also sometimes because that solo developer doesn't play well with others -- you pick which scenario you think is more healthy), but for the most part the "important" projects have multitudes working on them, with contributions coming from many people of varying levels of ability. The point here being that, all along, you've been relying on the discernment and mentoring abilities of those maintainers.

To suggest they're suddenly unworthy of your trust because they might be "using the AIs" is... well, it feels driven by dogma and it reads like a disingenuous take.

Don't get me wrong though: you are right to be suspicious, you are right to want to question the trust you place in those who donate so much to you; almost always this is made explicit in the licence they extended to you in the first place. But to suggest that suddenly they're unworthy of your trust because they're donating so much value to you in a way you don't approve of...

...well, perhaps it's time for you to pay it back?

An eager fix

2 min read


Yesterday, while looking at starting to post to my photoblog again, I noticed I'd missed a trick when I added the first and second sets of photos on creating seen-by.davep.dev: I'd left the cover frontmatter off all of the posts!

While I doubt it's super important -- I can't imagine people will be sharing photos from the blog after all, especially not older ones -- it felt like I'd failed to use a useful feature that I'd made sure BlogMore had.

This was obviously easy to fix. I could just write a tool that would go through all of the Markdown files, find the image being displayed in each post, pull out the path to the file, and add a cover to the frontmatter. Not exactly the hardest thing to write, but kind of boring for a one-off fix.
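For the record, the throwaway version I had in mind would have looked something like this (a sketch only; the frontmatter handling and the posts/ path are assumptions about my own layout, not anything BlogMore provides):

```python
import re
from pathlib import Path

# Matches the first Markdown image in a post: ![alt](path)
IMAGE = re.compile(r"!\[[^\]]*\]\(([^)]+)\)")

def add_cover(post: Path) -> bool:
    """Add a cover: entry to a post's frontmatter, from its first image."""
    text = post.read_text(encoding="utf-8")
    if "\ncover:" in text:
        return False  # already has a cover
    match = IMAGE.search(text)
    if match is None:
        return False  # no image to use as a cover
    # Assumes the file opens with a "---" frontmatter fence; insert
    # the cover entry right after it.
    updated = text.replace("---\n", f"---\ncover: {match.group(1)}\n", 1)
    post.write_text(updated, encoding="utf-8")
    return True

for post in Path("posts").rglob("*.md"):
    add_cover(post)
```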

So this felt like another good time to get Copilot to do the hard work. Liking this plan, I wrote an issue for the prompt and set it to work.

The result was unexpected and, in retrospect, is how I should have approached the task in the first place: Copilot wrote the script, ran it, and then submitted a PR including both the script and all of the changes from running it! It's like it was so eager to do the fix that it went ahead and just did it.

The resulting PR

This, for me, highlights a trick I'm still not fully mindful of: where I might once have written some throwaway code to do a job, run the code, and then made use of the result, when it comes to using an agent I always have the option of saying what I want done to the content of the repository and just letting it do it.

When I'm next faced with a problem like this, I think this is the approach I'll take: rather than ask it to write the tool to do the work, I'll just say what the work is I want done and let it go about it. This feels like where an agent should shine and where it's really useful.

Meanwhile I can get on with the fun stuff.

Documentation generation

2 min read

While I've written a lot of documentation in my life, it's not something I enjoy. I want documentation to read well, I want documentation to be useful, I want documentation to be accurate. I also want there to be documentation at all and sometimes the other parts mean it doesn't get done for FOSS projects1.

When I started the experiment that is BlogMore, I very quickly hashed out some ideas on how it might generate some documentation, and the result was okay. In fact, if anything, it was a bit too much and there was a lot of repeated information.

So, this morning, before I sat down for the day's work, I quickly wrote an issue that would act as a prompt to Copilot to rewrite the documentation. This time I tried to be very clear about what I wanted where, but also left it to work out all the details.

The result genuinely impressed me. While I'll admit I haven't read it all in detail (and because of this have left the same warning right at the start), on the surface it looks a lot clearer and feels like a better journey to learning how to use BlogMore.

blogmore.davep.dev hosts the result of this.

I have a plan to work through the documentation and be sure it's all correct and all makes sense, but if it's as correct and as useful as I think, I might have to consider the idea of taking this approach more often. Writing down the plan for the documentation and then letting it appear in the background while I get on with my actual day makes a lot of sense.

I fear I might be warming to these tools, a little bit. :-/


  1. Although I've made a point of doing it for almost every one of my recent Textual-based projects. ↩

Not so elegant

2 min read

One thing I've been noticing with my current experiment with GitHub Copilot is that it seems to know Python well enough to write code that gets the job done, and sometimes it knows it well enough to write more modern idiomatic Python code, but it also seems to write the inelegant version of it.

It's hard to pin down exactly, and of course it's a matter of taste (my idea of elegant might burn someone else's eyes), but on occasion, as I review the code, I find things that make me go "ugh".

Here's an example: there's a function that Copilot wrote to extract the first non-markup paragraph of an article (so that it can be used as a page description). One thing it needs to do is skip any initial images, etc. It takes a pretty brute force approach of looking at the start of each stripped line, but it gets the job done -- I can't really argue with that.

But here's how it does it:

# Skip markdown image syntax
if stripped.startswith("!["):
    continue

# Skip markdown linked image syntax ([![alt](img)](url))
if stripped.startswith("[!["):
    continue

# Skip HTML img tags
if stripped.startswith("<img"):
    continue

Now, this is good: it's using startswith. There are less-elegant approaches it could have used so I'll give it bonus points for using that method. The thing is though, it's testing each prefix one string at a time, pretty much rolling out a load of boilerplate code.

What bothers me here is that startswith will take a tuple of strings to test for. I find it curious that the generated code is idiomatic enough to know that startswith is a sensible option here, but at the same time it still writes the list of things to test out in a long-winded way.
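For comparison, the same three checks collapse into one with the tuple form, wrapped here in a (hypothetical) helper so it stands alone:

```python
def is_skippable(stripped: str) -> bool:
    # startswith accepts a tuple of prefixes, so the three separate
    # tests in the generated code become a single check.
    return stripped.startswith(("![", "[![", "<img"))
```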

This is exactly the sort of thing I'd call out in a normal code review. Technically, if this wasn't mostly a "let's see how it goes about this with minimal input from me" experiment, I'd have called it out here too (as an experiment, I might go back and prompt it to "think" about this).

If I ever find myself using this sort of tool for generating code in a work setting, this is exactly the sort of thing I'll be watching for.

Brute force and ignorance

5 min read

My experiment with coding with an agent continues, and from the point of view of the usability of the results, I'd say it's going well. From the point of view of learning more about this sort of tool and the pitfalls: I'd also say it's going well.

The one thing I'm really starting to appreciate is just how much it employs a sort of brute force and ignorance approach to getting to a solution. I noticed this on and off as I watched the agent's chat while it worked away.

One example I enjoyed was when it was trying to add a new dependency to BlogMore. For reasons I should probably try to understand1, it knew it should be using uv to add a dependency, and it knew how to use it to update the lock file, but it didn't have uv to hand:

The lock file (uv.lock) needs to be updated to include rcssmin. Since uv isn't available, I'll need to note this in the PR. The lock file should be generated by uv when the pyproject.toml is updated.

So it decided the best solution was probably to... "hand-edit" the lock file.

The lock file needs to be updated with rcssmin. Since uv is not available in this environment, I need to update it manually or find another way.

I need to update the uv.lock file to include rcssmin. The uv.lock file needs to include:

  1. The rcssmin package entry with hashes
  2. The rcssmin dependency in the blogmore package section

Let me look at the PyPI page for rcssmin 1.2.2 to get the hash.

It then grabs rcssmin from PyPI and checks out the information it wants, after which it goes:

I need to update the uv.lock file with the rcssmin package information. The uv.lock file uses SHA256 hashes. Let me look at the existing format in more detail to understand what I need to add, and then I'll query PyPI for the hashes.

Actually, let me think about this differently. The uv.lock file is a generated file, and updating it manually is error-prone. The proper way to update it is to run uv sync or uv lock. Since uv is not available in this environment, I have two options:

  1. Let the CI update the lock file (risky if CI fails on the locked version)

So... around this point it has some sort of plan for hand-patching the lock file, but also recognises that this is a dodgy approach and it should be using uv to update the lock file.

I also really like the part where it says it has two options, then goes on to provide an exhaustive list of... one.

After this it runs this code:

import json, urllib.request

url = 'https://pypi.org/pypi/rcssmin/json'
try:
    with urllib.request.urlopen(url, timeout=10) as f:
        data = json.load(f)
    v = '1.2.2'
    releases = data['releases'].get(v, [])
    for r in releases:
        print(f'filename: {r["filename"]}')
        print(f'  url: {r["url"]}')
        print(f'  sha256: {r["digests"]["sha256"]}')
        print(f'  size: {r["size"]}')
        print(f'  upload_time: {r["upload_time_iso_8601"]}')
except Exception as e:
    print(f'Error: {e}')

after which it concludes:

Now I have all the information I need to update the uv.lock file. I need to add:

  1. The rcssmin package entry in the lock file
  2. Add rcssmin to the blogmore package dependencies

I mean... okay, I guess. Again, I'll admit that I'm still getting used to setting this sort of thing up and working with it (and keep in mind none of this was done locally; this is all via GitHub Copilot, up on GitHub), but it's kind of funny and wild that the approach taken was to recognise that using uv to remake the lock file was the best approach so... let's hand-edit the lock file.
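For reference, with uv actually to hand, the whole dance reduces to one command (as per uv's own CLI; I've not verified what the Copilot environment would make of it):

```shell
# Add the dependency; uv updates both pyproject.toml and uv.lock,
# hashes included -- no hand-editing required.
uv add rcssmin
```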

But that's not the best part. This is where it gets really fun. It had decided that the lock file must be in alphabetical order. Okay, sure, why not:

The lock file has packages sorted alphabetically, so rcssmin should go between pygments and python-dateutil (since "r" comes before "py" actually... no, wait - "r" comes after "p" in the alphabet. Let me check the order again).

Since "r" comes before "py."

Since "r" comes before "py."

o_O

O_o

Extra credit here for giving me a good giggle. I really appreciate the way that it catches itself mid-sentence and remembers how the alphabet actually works.

As to the outcome of all of this? Sure, the feature I wanted to add got added; it worked away and got to a working solution in the end. But the route it took was, I think it's fair to say, a "brute force and ignorance" approach.

I've not been spending too much time reading the agent's own chatter, but when I have I've found myself amused by the dead ends it'll wander down and then work its way back out of. There is, without question, a recognisable process here: I believe any developer would be dishonest to say they've never had a time in the past, or just one of those off days, where they've fallen down the rabbit hole of a solution, only to realise it's the right solution implemented in the worst possible way. There's also a resemblance here to how more junior developers work a problem until they really develop their skills.

I think I'm going to keep an eye on the agent chat a bit more from now on. While I imagine things will only improve as these tools improve, for the moment it's a good source of coding comedy.


  1. Presumably there are things I can be doing to make its life easier. ↩

Copilot lied

4 min read

This morning, with my experiment with Copilot having settled down a little bit, I thought I might try and use the result for another little experiment. For a long time now I've maintained a (currently lapsed) photoblog. It was always very much tied to "the site formerly known as twitter" and, since I fully erased my account after the site turned into a Nazi bar, I've not done anything to update how it gets populated.

So I got to thinking: I have a full backup of all the images in a couple of places; perhaps with a bit of coding (and some help from "the AIs") I can revive it using BlogMore?

I tinkered for a wee bit and mostly got something going (albeit I'm going to have to do some work to back-port the actual dates and times of some earlier images, and there's a load of work to do to somehow pull out all the tags). But then I hit a small hitch.

When building BlogMore I made the decision to let it write both the code and the documentation. It documented a couple of features I never asked for, but which seemed sensible so I never questioned. On the other hand neither did I test them at the time (because they weren't important to what I needed).

It's the exact reason I added this warning at the start of the documentation:

⚠️ Warning

BlogMore is an experiment in using GitHub Copilot to develop a whole project from start to finish. As such, almost every part of this documentation was generated by Copilot and what it knows about the project. Please keep this in mind.

From what I can see at the moment the documentation is broadly correct, and I will update and correct it as I work through it and check it myself. Of course, I will welcome reports of problems or fixes.

With this warning in mind, and with the intention of working through the documentation and testing its claims, I decided to test out one of the features when building up the new photoblog.

Whereas with this blog I keep all the posts in a flat structure, this time around I thought I'd try out this (taken from the Copilot-generated BlogMore documentation):


BlogMore is flexible about how you organise your posts. Here are some common patterns:

Flat structure (all posts in one directory):

posts/
  β”œβ”€β”€ hello-world.md
  β”œβ”€β”€ python-tips.md
  └── web-development.md

Note: Files can be date-prefixed (e.g., 2026-02-18-hello-world.md) and BlogMore will automatically remove the date prefix from the URL slug. The post will still use the date field from frontmatter for chronological ordering.

Organised by date:

posts/
  β”œβ”€β”€ 2024/
  β”‚   β”œβ”€β”€ 01/
  β”‚   β”‚   └── hello-world.md
  β”‚   └── 02/
  β”‚       └── python-tips.md
  └── 2025/
      └── 01/
          └── web-development.md

Organised by topic:

posts/
  β”œβ”€β”€ python/
  β”‚   β”œβ”€β”€ decorators.md
  β”‚   └── type-hints.md
  └── web/
      β”œβ”€β”€ css-grid.md
      └── javascript-tips.md
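As an aside, the date-prefix stripping described above is simple enough to sketch (a guess at the behaviour from the docs, not BlogMore's actual code):

```python
import re

def slug_from_filename(name: str) -> str:
    # Strip an optional YYYY-MM-DD- prefix and the .md suffix, as the
    # documentation describes; chronology comes from frontmatter instead.
    return re.sub(r"^\d{4}-\d{2}-\d{2}-", "", name).removesuffix(".md")
```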

Using the hierarchy approach, especially with dates, seemed ideal! I'd drop the actual images in such a hierarchy, and also drop the Markdown posts in a parallel hierarchy too. Perfect!

So I set it all up to do that, fired up blogmore serve, visited the URL and... No posts yet. What the hell?

So I checked the code for BlogMore and, sure enough, despite the fact the documentation was selling me on this handy way to organise my posts, no such feature existed!

As an experiment I then asked Copilot what the heck was going on. Much as I expected, rather than coming back with an answer to the question, it went right ahead and fixed it instead. Which is fine, that's where I would have taken this, but I do wish it would answer the question first.

ℹ️ Note

I imagine I could get an answer to the question if I took a more conversational route with Copilot, rather than writing the question in an issue and then assigning that issue to it. I must remember to try that at some point.

So, yeah, unsurprisingly Copilot flat out lied1 in the documentation. I'm not in the least bit shocked by this and, as I said, I fully expected this. But it was amusing to have an example of this turn up so early in the documentation, in such a glaring way, and in a way that was so easily fixed (really, it was just a swap of Path.glob to Path.rglob).
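For anyone curious about the difference between the two, a minimal sketch (directory and file names here are hypothetical):

```python
from pathlib import Path

posts = Path("posts")
flat = sorted(posts.glob("*.md"))     # only files directly inside posts/
nested = sorted(posts.rglob("*.md"))  # recurses, so posts/2024/01/foo.md is found too
```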

As I play with this more it's going to be fun to see what other bold claims turn out to not be true; or perhaps even the reverse: what neat features lurk in the code that haven't been documented.


  1. Yes, as I mentioned yesterday, that's an anthropomorphism. Folk who take such things as an indication that you don't understand "AI" might want to think about what it is to be a human when communicating. ↩

Five days with Copilot

14 min read

Another itch to scratch

As I mentioned yesterday, I've been a happy user of Pelican for a couple or so years now, but every so often there's a little change or tweak I'd like to make that requires diving deeper into the templates and the like and... I go "eh, I'll look at it some time soon". Another thought that often goes through my head at those times is "I should build my own static site generator that works exactly how I want" -- because really any hacker with a blog has to do that at some point.

Meanwhile... I've had free access to GitHub Copilot attached to my GitHub account for some time now, and I've hardly used it. At the same time -- the past few months especially -- I've been watching the rise of agents as coding tools, as well as the rise of advocates for them. Worse still, I've seen people I never expected to advocate giving up on hand-writing code turning to these tools and suddenly writing rationales in favour of them.

So, suddenly, the idea popped into my head: I should write my own static site generator that I'll use for my blog, and I should try and use GitHub Copilot to write 100% of the code, and documentation, and see how far I get. In doing so I might firm up my opinions about where we're all going with this.

The requirements were going to be pretty straightforward:

  • It should be a static site generator that turns Markdown files into a website.
  • It should be blog-first in its design.
  • It should support non-blog-post pages too.
  • It should be written in Python.
  • It should use Jinja2 for templates.
  • It should have a better archive system than I ever got out of my Pelican setup.
  • It should have categories, tags, and all the usual metadata stuff you'd expect from a site where you're going to share content from.

Of course, the requirements would drift and expand as I went along and I had some new ideas.

Getting started

To kick things off, I created my repo, and then opened Copilot and typed out a prompt to get things going. Here's what I typed:

Build a blog-oriented static site generation engine. It should be built in Python, the structure of the repository should match that of my preferences for Python projects these days (see https://github.com/davep/oldnews and take clues from the makefile; I like uv and ruff and Mypy, etc).

Important features:

  • Everything is written in markdown
  • All metadata for a post should come from frontmatter
  • It should use Jinja2 for the output templates

As you can see, rather than getting very explicit about every single detail, I wanted to start out with a vague description of what I was aiming for. I did want to encourage it to build a Python repository the way I normally would, so I pointed it at OldNews in the hope that it might go and comprehend how I go about things; I also doubled down on the importance of using uv and mypy.

The result of this was... actually impressive. As you'll see in that PR, to get to a point where it could be merged, there was some back-and-forth with Copilot to add things I hadn't thought of initially, and to get it to iron out some problems, but for the most part it delivered what I was after. Without question it delivered it faster than I would have.

Some early issues where I had to point out problems to Copilot included:

  • The order of posts on the home page wasn't obvious to me, and absolutely wasn't reverse chronological order.
  • Footnotes were showing up kinda odd.
  • The main index for the blog was showing just post titles, not the full text of each article as you'd normally expect from a blog.

Nothing terrible, and it did get a lot of the heavy lifting done and done well, but it was worth noting that a lot of dev-testing/QA needed to be done to be confident about its work, and doing this picked up on little details that are important.

An improvement to the Markdown

As an aside: during this first PR, I quickly noticed a problem where I was getting this error when generating the site from the Markdown:

Error generating site: mapping values are not allowed in this context
  in "<unicode string>", line 3, column 15

I just assumed it was some bug in the generated code and left Copilot to work it out. Instead it came back and educated me on something: I actually had bad YAML in the frontmatter of some of my posts!

This, by the way, wouldn't be the last time that Copilot found an issue with my input Markdown and so, having used it, improved my blog.
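For the curious, the classic way to trip that error is an unquoted colon inside a frontmatter value (a hypothetical example, not necessarily what was in my posts):

```yaml
---
# The second colon makes YAML think a nested mapping starts here,
# which is not allowed in a plain scalar value.
title: My photoblog: the revival
---
```

Quoting the value ("My photoblog: the revival") fixes it.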

A major feature from a simple request

Another problem I ran into quickly was that previewing the generated site wasn't working well at all; all I could do was browse the files in the filesystem. So, almost as an offhand comment, in the initial PR, I asked:

Can we get a serve mode please so I can locally test the site?

Just like that, it went off and wrote a whole server for the project. While the server did need a lot of extra work to really work well1, the initial version was good enough to get me going and to iterate on the project as a whole.

The main workflow

Having kicked off the project and having had some success with getting Copilot to deliver what I was asking for, I settled into a new but also familiar workflow. Whereas normally, when working on a personal project, I'll write an issue for myself, at some point pick it up and create a PR, review and test the PR myself then merge, now the workflow turned into:

  • Write an issue but do so in a way that when I assign it to Copilot it has enough information to go off and do the work.
  • Wait for Copilot to get done.
  • Review the PR, making change requests etc.
  • Make any fixes that are easier for me to do by hand than to describe to Copilot.
  • Merge.

In fact, I was finding that the first step had some sub-steps to it too. What I was doing, more than ever, was writing issues like I'd write sticky notes: with simple descriptions of a bug or a new feature. I'd then come back to them later and flesh them out into something that would act as a prompt for Copilot. I found myself doing this so often that I ended up adding a "Needs prompt" label to my usual set of issue labels.

All of this made for an efficient workflow, and one where I could often get on with something else as Copilot worked on the latest job (I wasn't just working on other things on my computer; sometimes I'd be off doing things around the house while this happened). But... it wasn't fun. It was the opposite of what I've always enjoyed about building software. I got to dream up the ideas, I got to do the testing, I got to review the quality of the work, but I didn't get to actually lose myself in the flow state of coding.

One thing I've really come to understand during those 5 days of working on BlogMore is that I really missed getting lost in the flow state. Perhaps it's the issue-to-PR-to-review-to-merge cycle I used that amplified this; perhaps those who converse with an agent in their IDE or in some client application keep a sense of that (I might have to try that approach out). Either way, this feels like a serious loss to me when it comes to writing code for personal enjoyment.

The main problems

I think it's fair to say that I've been surprised at just how well Copilot understood my (sometimes deliberately vague) requests, at how it generally managed to take some simple plain English and turn it into actual code that actually did what I wanted and, mostly, actually worked.

But my experiences over the past few days haven't been without their problems.

The confidently wrong problem

Hopefully we all recognise that, with time and experience, we learn where the mistakes are likely to turn up. Once you've written enough code you've also written plenty of bugs, and been caught out by plenty of edge cases, so you develop a spidey-sense for trouble as you write. I feel that this kind of approach can be called cautiously confident.

Working with Copilot[2], however, I often ran into the confidently wrong issue. On occasion I found it would proudly[3] request review for some minor bit of work, proclaiming that it had done the thing or solved the problem, and I'd test it and find nothing had materially changed. On a couple of occasions, when I pushed back, I found it actually doubting my review before finally digging in harder and eventually solving the issue.

I found that this took time and was rather tiring.

There were also times when it would do the same, but not directly in respect of code. One example I can think of is when it was confident that Python 3.14 was still a pre-release Python as of February 2026 (it isn't).

This problem alone concerns me; this is the sort of thing where people without a good sense for when the agent is probably bullshitting will get into serious trouble.

The tries-too-hard problem

A variation on the above problem works the other way: on at least one occasion I found that Copilot tried too hard to fix a problem that wasn't really its to fix.

In this case I was asking it to tidy up some validation issues in the RSS feed data. One of the main problems was that the feed content contained root-relative URLs, which needed to be made absolute. Copilot did an excellent job of fixing the problem, but one (and, from what I could see, only one) relative URL remained.

I asked it to take a look and it took a real age to work through the issue. To its credit, it dug hard and it dug deep and it got to the bottom of the problem. The trouble was that it tried too hard: having found the cause of the problem (a typo in my original Markdown, which had always existed), it went right ahead and built a workaround for this one specific broken link.

Now, while I'm a fan of Postel's law, this is taking things a bit too far. If this were a real person I'd tasked with the job, I would have expected and encouraged them to come back to me with their findings and say "dude, the problem is in your input data", and I'd have fixed my original Markdown.

Here, though, it just went right ahead and added handling for this one weird edge case.
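The legitimate part of that fix, turning root-relative URLs into absolute ones, is a small transformation. A minimal sketch (the `absolutise` function, the site URL and the regex approach are my own illustrative assumptions, not BlogMore's actual code):

```python
import re
from urllib.parse import urljoin

SITE_URL = "https://example.com/"  # hypothetical site root from config


def absolutise(html: str, base: str = SITE_URL) -> str:
    """Rewrite root-relative href/src attribute values as absolute URLs."""
    def fix(match: re.Match) -> str:
        attr, url = match.group(1), match.group(2)
        return f'{attr}="{urljoin(base, url)}"'

    # Only match values starting with "/", i.e. root-relative URLs.
    return re.sub(r'(href|src)="(/[^"]*)"', fix, html)


print(absolutise('<a href="/posts/hello/">hi</a>'))
# <a href="https://example.com/posts/hello/">hi</a>
```

Notably, a typo like `href="posts/hello/"` (missing its leading slash) simply wouldn't match here, which is exactly the situation where reporting the odd one out back to the author beats silently building a workaround for it.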

I think this is something to be concerned about and to keep an eye on too. I feel there's a danger in having the agent rabbit-hole a fix for a problem that it should simply have reported back to me for further discussion.

The never-pushes-back problem

Something I did find unsurprising but disconcerting was Copilot's unwillingness to push back, or at least defend its choices. Sometimes it would make a decision or a change and I'd simply ask it why it had done it that way, why it had made that choice. Rather than reply with its reasoning it would pretty much go "yeah, my bad, let me do it a way you're probably going to find more pleasing".

A simple example of this is one time when I saw some code like this:

@property
def some_property(self) -> SomeValue:
    from blogmore.utils import some_utility_function
    ...

I'm not a fan of imports in the body of methods unless there's a demonstrable performance reason. I asked Copilot why it had made this choice here and its reply was simply to say it had gone ahead and changed the code, moving the import to the top of the module.

I see plenty of people talk about how working with an agent is like pair-programming, but I think that comparison misses what's got to be the biggest positive of the approach: the debate and exchange of ideas. This again feels like a concern to be mindful of, especially if someone less experienced is bringing you code where they've used an agent as their pair buddy.

The overall impression

Now I'm at the end of the process, and using the result of this experiment to write this post[4], I feel better informed about what these tools offer and the pitfalls I need to be mindful of. Sometimes it wasn't a terrible way of working. For example, on the first day of this experiment, at one point on a chilly but sunny Sunday afternoon, I was sat on the sofa, MacBook on lap, guiding an AI to write code while petting the cat, watching the birds in the garden enjoy the contents of the feeder, and chatting with my partner.

That's not a terrible way to write code.

On the other hand, as I said earlier, I missed the flow state. I love getting lost in code for a few hours, and this is not that. I also found the constant loop of prompt, wait, review, test, repeat really quite exhausting.

As best as I can describe it: it feels like the fast food of software development. It gets the job done, it gets it done fast, but it's really not fulfilling.

At the end of the process I have a really useful tool, 100% "built with AI" under my guidance, which lets me be creative with the things I do still create by hand. That's not a bad thing, and I can see why this is appealing to people. On the other hand, the process of building that tool was pretty boring and, for want of a better word... soulless.

Conclusion

As I write this I have about 24 hours of access to GitHub Copilot Pro left. It seems this experiment used up my preview time and triggered a "looks like you're having fun, now you need to decide if you want to buy it" response. That's fair.

So now I'm left trying to decide if I want to pay to keep it going. At the level I've been using it at for building BlogMore, it looks like it will cost me $10/month. That actually isn't terrible. I spend more than that on other hobbies and other forms of entertainment. So, if I can work within the bounds of that tier, it's affordable and probably worth it.

What I'm not sure about yet is if I want to. It's been educational: I can 100% see how and where I'd use this for work (and would of course expect an employer to foot the bill for it or a similar tool), and I can also see how and where I might use it to quickly build a personal-use tool to enable something more human-creative.

Ultimately, though, I think I'm a little better informed thanks to this process: better aware of some of the wins people claim, and better equipped to be rightly incredulous when faced with some of the wilder ones.

Also, it'll help put some of my reading into perspective.


  1. Amusingly, I uncovered another bug while writing this post.

  2. I keep saying Copilot, but it's probably more correct to say "Claude Sonnet 4.5", as that's what seemed to be at play under the hood, if I'm understanding things correctly.

  3. Yes, of course that's an anthropomorphism; you'll find plenty of them in this article, as it's hard to write about the subject any other way. It's an easy shortcut to explain some ideas.

  4. Actually, I'm writing this post as I always do: in Emacs. But BlogMore is in the background serving a local copy of my blog so I can check it in the browser, rebuilding it every time I save a change.