Recent Posts

Hike v1.3.0

2 min read

I've just released v1.3.0 of Hike, my little terminal-based Markdown browser. It's about a year now since I made the first release and I've been making the odd update here and there, but mostly it's a pretty stable tool. There's a few other things I'd like to do with it but I keep getting distracted by different ideas.

Today's release sort of rides on the coattails of the current love for all things Markdown because of "the AI". It seems that some folk are now keen on the idea of serving Markdown from their websites, when asked for it: as you can see in this post for example. While that might be handy for some LLM bot or whatever, it's also pretty handy if you happen to have a web-friendly Markdown browser!

So v1.3.0 makes a small change to how a URL is looked at when deciding if it might be a Markdown document, by saying "hey, web server, I like Markdown more than anything else, so feel free to serve me that up". If we get a Markdown type back, we go ahead and load it into Hike.

This means that the post mentioned above loads up just fine now:

Viewing a Markdown site in Hike

Previously, Hike would have gone "nah, that's not a Markdown document" and would have handed off to the environment's web browser.
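The change amounts to a little HTTP content negotiation. Here's a minimal sketch of the idea — my illustration, not Hike's actual code — using only the standard library:

```python
from urllib.request import Request, urlopen

# Content types a web-friendly Markdown browser might accept.
MARKDOWN_TYPES = ("text/markdown", "text/x-markdown")

def server_offers_markdown(url: str) -> bool:
    """Ask the server for Markdown and check what it actually sends back."""
    # Tell the server we prefer Markdown over anything else...
    request = Request(url, headers={"Accept": "text/markdown"})
    with urlopen(request, timeout=10) as response:
        # ...and only treat the document as Markdown if the server agrees.
        return response.headers.get_content_type() in MARKDOWN_TYPES
```

If a check along those lines passes, the document gets loaded into the viewer; otherwise it's handed off to the web browser as before.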

Hike is licensed GPL-3.0 and available via GitHub and also via PyPI. If you have an environment that has pipx installed you should be able to get up and going with:

pipx install hike

If you're more into uv:

uv tool install hike

If you don't have uv installed you can use uvx.sh to perform the installation. For GNU/Linux or macOS or similar:

curl -LsSf uvx.sh/hike/install.sh | sh

or on Windows:

powershell -ExecutionPolicy ByPass -c "irm https://uvx.sh/hike/install.ps1 | iex"

Original Seen by davep rescued

1 min read

Still Alive

At the end of yesterday's post I said I might see if I can rescue the original photoblog from its backup on WordPress. This was the first photoblog I played with, posting to the long-dead Posterous between 2009 and 2013.

So, yesterday evening, I did an extract of the full feed from WordPress, and also asked for a full backup of all the media. I then fired up Emacs and rattled out some Python code that would marry up the two sets of data and add to the photoblog repository. It took a little bit of working out; it seems that every post had two entries in the feed: a parent and a child entry. I've no clue why that's the case; I didn't really care to get too deeply into it.

Soon enough seen-by.davep.dev was updated with an extra 1,833 posts! So now the categories on that blog site are broken down into Seen By Me 1 (the original) and Seen By Me 2 (the second incarnation).

Sadly, for the first blog, tagging wasn't really much of a thing so the tag cloud hasn't grown too much.

But, finally, I've got both the photoblogs back up and hosted somewhere I can point to, and I'm fully in control of their content. While it is hosted on GitHub Pages I've done this in a way that it would be pretty easy to move elsewhere; this is down to the fact that it's a simple static site built with BlogMore.

Seen by davep rescued

2 min read

Final Message

Since mid-2023 my photoblog has been broken. As I mentioned at the time, the issue was that this second incarnation of the blog had started life as a proper mashup of some web tools, and the heart of it was Twitter.

It all started to fall apart when Twitter got its new owner, and APIs became expensive, and other tools would not or could not work with it any more, and then it really fell apart when I finally nuked my account.

So since then the blog has been sat about, unused and unloved, with a lot of broken links and images.

Thankfully, though, the pipeline that I had going had been designed with this sort of problem in mind: part of what I had going also made a backup of the photos I took to Google Drive and to Google Photos. So when I got to a point the other day where BlogMore was usable I decided I should rescue the photos and rebuild the blog.

After downloading the full feed of the Blogger.com-hosted blog, I threw together some Python code that took the data (thanks to feedparser for helping with that), matched up the posts with the images I had in Google Drive, slugged the names, wrote some Markdown and copied some images, and I had the source of a fresh blog.
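"Slugged the names" is the usual title-to-URL dance. For anyone curious, here's a tiny sketch of the sort of thing involved — my illustration, not the actual script:

```python
import re
import unicodedata

def slugify(title: str) -> str:
    """Turn a post title into a URL-friendly slug."""
    # Strip accents down to plain ASCII, lowercase, then squash any run
    # of non-alphanumeric characters into a single hyphen.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

print(slugify("Hello, World!"))  # hello-world
```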

The result of all of this can be seen up on seen-by.davep.dev.

I strongly suspect this is going to remain a pretty static site. At the moment I've given no thought whatsoever as to how I might have this populate in the way the old version of it did. Quite simply the old version was:

  1. Take photo
  2. Post to Twitter with a specific hashtag
  3. Have IFTTT notice this and create the blog post, make backups
  4. ...
  5. Profit?

I suppose, this time around, I could have something monitor my Mastodon account, or my pixelfed account, and then trigger a similar process; but then that would need something akin to IFTTT running and interacting with GitHub and creating the Markdown and kicking off a build process and...

Eh, maybe one day. For now, though, I'll be happy that I've rescued this particular incarnation of my photoblog and then think about if and how I carry on with something similar in the future.

Meanwhile... this has got me thinking. The original blog is backed up on WordPress. It's been sat there, all sad and neglected, ever since Posterous disappeared. I wonder if I can export all the data from there and mash it into this new version...

Not so elegant

2 min read
AI

One thing I've been noticing with my current experiment with GitHub Copilot is that it seems to know Python well enough to write code that gets the job done, and sometimes it knows it well enough to write more modern idiomatic Python code, but it also seems to write the inelegant version of it.

It's hard to pin down exactly, and of course it's a matter of taste (my idea of elegant might burn someone else's eyes), but on occasion, as I review the code, I find things that make me go "ugh".

Here's an example: there's a function that Copilot wrote to extract the first non-markup paragraph of an article (so that it can be used as a page description). One thing it needs to do is skip any initial images, etc. It takes a pretty brute force approach of looking at the start of each stripped line, but it gets the job done -- I can't really argue with that.

But here's how it does it:

# Skip markdown image syntax
if stripped.startswith("!["):
    continue

# Skip markdown linked image syntax ([![alt](img)](url))
if stripped.startswith("[!["):
    continue

# Skip HTML img tags
if stripped.startswith("<img"):
    continue

Now, this is good: it's using startswith. There are less-elegant approaches it could have used so I'll give it bonus points for using that method. The thing is though, it's testing each prefix one string at a time, pretty much rolling out a load of boilerplate code.

What bothers me here is that startswith will take a tuple of strings to test for. I find it curious that the generated code is idiomatic enough to know that startswith is a sensible option here, but at the same time it still writes the list of things to test out in a long-winded way.
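For illustration, here's how those three checks collapse into one with the tuple form:

```python
# str.startswith accepts a tuple of prefixes and returns True if any
# of them match, so the three separate tests become a single condition.
IMAGE_PREFIXES = ("![", "[![", "<img")

lines = ["![A plain image](pic.png)", "A real paragraph.", "<img src='pic.png'>"]
kept = [line for line in lines if not line.strip().startswith(IMAGE_PREFIXES)]
print(kept)  # ['A real paragraph.']
```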

This is exactly the sort of thing I'd call out in a normal code review. Technically, if this wasn't mostly a "let's see how it goes about this with minimal input from me" experiment, I'd have called it out here too (as an experiment, I might go back and prompt it to "think" about this).

If I ever find myself using this sort of tool for generating code in a work setting, this is exactly the sort of thing I'll be watching for.

Complexitty v1.1.0

2 min read

Complexitty

I've just released v1.1.0 of Complexitty, my little Mandelbrot explorer for the terminal. I first released it in April last year and have tinkered with it on and off since. Most of the changes have been to do with easier navigation, some additional colour schemes, the ability to pass location information on the command line, that sort of thing; meanwhile, the very heart of it has stayed the same "it's as fast as it's ever going to be; expect it to not be fast" approach.

It's a Python application after all: they can be fast, but something like this isn't going to compete with the serious Mandelbrot explorers.

v1.1.0, however, has a little optional speedup. If you install it normally it'll work at the pace it always did: the more you zoom in, the more you ramp up the iterations to tease more detail out, the slower it gets. But now: if you install it as complexitty[faster] rather than just as complexitty it will use Numba to speed up the main calculation.

On the very first run things will be slow to start up, but from then on I see a real improvement. As you zoom in and explore and up the detail, the calculation remains pretty fast. The drop-off of speed that you see without Numba just isn't there.
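I won't reproduce Complexitty's internals here, but the usual Numba trick looks something like this: decorate the hot escape-time loop with @njit and let it be JIT-compiled on first use (hence the slow first run). A sketch with names of my own choosing, including a pure-Python fallback so it still runs without Numba installed:

```python
try:
    from numba import njit  # what the [faster] extra would pull in
except ImportError:
    # Pure-Python fallback: make @njit a no-op so the sketch still runs.
    def njit(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

@njit
def escape_time(cr: float, ci: float, max_iterations: int) -> int:
    """The Mandelbrot hot loop: count iterations until the point escapes."""
    zr = zi = 0.0
    for iteration in range(max_iterations):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return iteration
    return max_iterations
```

With Numba present, the first call pays the compilation cost and every call after that runs at native speed; without it, the same function runs as ordinary Python.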

While the whole idea of Complexitty was to see what I could do with Python, in the terminal, using Textual, and keeping it "pure", I feel this is an acceptable "cheat" given it's optional.

I'm considering this an experimental change for now, as I don't know how well it will work in all places where Complexitty could be otherwise installed. So give it a go, see if it installs and runs as complexitty[faster], and if it doesn't: fall back to plain old complexitty.

If you're interested you can find the source over on GitHub, with the application available via PyPI. If you use pipx you should be able to install with either:

pipx install complexitty[faster]

or

pipx install complexitty

If you're a fan of uv (and who isn't these days?) then try either:

uv tool install complexitty[faster]

or fall back to:

uv tool install complexitty

In fact, to squeeze everything you can out of Complexitty, perhaps try:

uv tool install complexitty[faster] --python 3.14

Brute force and ignorance

5 min read
AI

My experiment with coding with an agent continues, and from the point of view of the usability of the results, I'd say it's going well. From the point of view of learning more about this sort of tool and the pitfalls: I'd also say it's going well.

The one thing I'm really starting to get an appreciation for is just how much it employs a sort of brute force and ignorance approach to getting to a solution for some sort of problem. I noticed this on and off as I watched the agent's chat, as it worked away.

One example I enjoyed was when it was trying to add a new dependency to BlogMore. For reasons I should probably try and understand1 it knew it should be using uv to add a dependency and it knew how to use it to update the lock file, but it didn't have uv to hand:

The lock file (uv.lock) needs to be updated to include rcssmin. Since uv isn't available, I'll need to note this in the PR. The lock file should be generated by uv when the pyproject.toml is updated.

So it decided the best solution was probably to... "hand-edit" the lock file.

The lock file needs to be updated with rcssmin. Since uv is not available in this environment, I need to update it manually or find another way.

I need to update the uv.lock file to include rcssmin. The uv.lock file needs to include:

  1. The rcssmin package entry with hashes
  2. The rcssmin dependency in the blogmore package section

Let me look at the PyPI page for rcssmin 1.2.2 to get the hash.

It then grabs rcssmin from PyPI and checks out the information it wants; after which it goes:

I need to update the uv.lock file with the rcssmin package information. The uv.lock file uses SHA256 hashes. Let me look at the existing format in more detail to understand what I need to add, and then I'll query PyPI for the hashes.

Actually, let me think about this differently. The uv.lock file is a generated file, and updating it manually is error-prone. The proper way to update it is to run uv sync or uv lock. Since uv is not available in this environment, I have two options:

  1. Let the CI update the lock file (risky if CI fails on the locked version)

So... around this point it has some sort of plan for hand-patching the lock file, but also recognises that this is a dodgy approach and it should be using uv to update the lock file.

I also really like the part where it says it has two options, then goes on to provide an exhaustive list of... one.

After this it runs this code:

import json, urllib.request
url = 'https://pypi.org/pypi/rcssmin/json'
try:
    with urllib.request.urlopen(url, timeout=10) as f:
        data = json.load(f)
    v = '1.2.2'
    releases = data['releases'].get(v, [])
    for r in releases:
        print(f'filename: {r["filename"]}')
        print(f'  url: {r["url"]}')
        print(f'  sha256: {r["digests"]["sha256"]}')
        print(f'  size: {r["size"]}')
        print(f'  upload_time: {r["upload_time_iso_8601"]}')
except Exception as e:
    print(f'Error: {e}')

after which it concludes:

Now I have all the information I need to update the uv.lock file. I need to add:

  1. The rcssmin package entry in the lock file
  2. Add rcssmin to the blogmore package dependencies

I mean... okay, I guess. Again, I'll admit that I'm still getting used to setting this sort of thing up and working with it (and keep in mind none of this was done locally; this is all via GitHub Copilot, up on GitHub), but it's kind of funny and wild that the approach taken was to recognise that using uv to remake the lock file was the best approach so... let's hand-edit the lock file.

But that's not the best part. This is where it gets really fun. It had decided that the lock file must be in alphabetical order. Okay, sure, why not:

The lock file has packages sorted alphabetically, so rcssmin should go between pygments and python-dateutil (since "r" comes before "py" actually... no, wait - "r" comes after "p" in the alphabet. Let me check the order again).

Since "r" comes before "py."

Since "r" comes before "py."

o_O

O_o

Extra credit here for giving me a good giggle. I really appreciate the way that it catches itself mid-sentence and remembers how the alphabet actually works.

As to the outcome of all of this? Sure, the feature I wanted to add got added; it worked away and got to a working solution in the end. But the route it took was, I think it's fair to say, a "brute force and ignorance" approach.

I've not been spending too much time reading the agent's own chatter, but when I have I've found myself amused by the dead ends it'll wander down and then work its way back out of. There is, without question, a recognisable process here: I believe any developer who claims they've never had times in the past, or just one of those off days, where they've fallen down the rabbit hole of a solution, only to realise it's the right solution implemented in the worst possible way, is being dishonest. There's also a resemblance here to how more junior developers work a problem until they really develop their skills.

I think I'm going to keep an eye on the agent chat a bit more from now on. While I imagine things will only improve as these tools improve, for the moment it's a good source of coding comedy.


  1. Presumably there's things I can be doing to make its life easier. 

Copilot lied

4 min read
AI

This morning, with my experiment with Copilot having settled down a little bit, I thought I might try and use the result for another little experiment. For a long time now I've maintained a (currently lapsed) photoblog. It was always very much tied to "the site formerly known as twitter" and, since I fully erased my account after the site turned into a Nazi bar, I've not done anything to update how it gets populated.

So I got to thinking: I have a full backup of all the images in a couple of places; perhaps with a bit of coding (and some help from "the AIs") I can revive it using BlogMore?

I tinkered for a wee bit and mostly got something going (albeit I'm going to have to do some work to back-port the actual dates and times of some earlier images, and there's a load of work to do to somehow pull out all the tags). But then I hit a small hitch.

When building BlogMore I made the decision to let it write both the code and the documentation. It documented a couple of features I never asked for, but which seemed sensible, so I never questioned them. On the other hand, neither did I test them at the time (because they weren't important to what I needed).

It's the exact reason I added this warning at the start of the documentation:

⚠️ Warning

BlogMore is an experiment in using GitHub Copilot to develop a whole project from start to finish. As such, almost every part of this documentation was generated by Copilot and what it knows about the project. Please keep this in mind.

From what I can see at the moment the documentation is broadly correct, and I will update and correct it as I work through it and check it myself. Of course, I will welcome reports of problems or fixes.

With this warning in mind, and with the intention of working through the documentation and testing its claims, I decided to test out one of the features when building up the new photoblog.

Whereas with this blog I keep all the posts in a flat structure, this time around I thought I'd try out this (taken from the Copilot-generated BlogMore documentation):


BlogMore is flexible about how you organise your posts. Here are some common patterns:

Flat structure (all posts in one directory):

posts/
  ├── hello-world.md
  ├── python-tips.md
  └── web-development.md

Note: Files can be date-prefixed (e.g., 2026-02-18-hello-world.md) and BlogMore will automatically remove the date prefix from the URL slug. The post will still use the date field from frontmatter for chronological ordering.

Organised by date:

posts/
  ├── 2024/
  │   ├── 01/
  │   │   └── hello-world.md
  │   └── 02/
  │       └── python-tips.md
  └── 2025/
      └── 01/
          └── web-development.md

Organised by topic:

posts/
  ├── python/
  │   ├── decorators.md
  │   └── type-hints.md
  └── web/
      ├── css-grid.md
      └── javascript-tips.md

Using the hierarchy approach, especially with dates, seemed ideal! I'd drop the actual images in such a hierarchy, and also drop the Markdown posts in a parallel hierarchy too. Perfect!

So I set it all up to do that, fired up blogmore serve, visited the URL and... No posts yet. What the hell?

So I checked the code for BlogMore and, sure enough, despite the fact the documentation was selling me on this handy way to organise my posts, no such feature existed!

As an experiment I then asked Copilot what the heck was going on. Much as I expected, rather than coming back with an answer to the question, it went right ahead and fixed it instead. Which is fine, that's where I would have taken this, but I do wish it would answer the question first.

ℹ️ Note

I imagine I could get an answer to the question if I took a more conversational route with Copilot, rather than writing the question in an issue and then assigning that issue to it. I must remember to try that at some point.

So, yeah, unsurprisingly Copilot flat out lied1 in the documentation. I'm not in the least bit shocked by this and, as I said, I fully expected this. But it was amusing to have an example of this turn up so early in the documentation, in such a glaring way, and in a way that was so easily fixed (really, it was just a swap of Path.glob to Path.rglob).
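For anyone unfamiliar with the distinction: Path.glob only matches within the given directory, while Path.rglob recurses into subdirectories, which is exactly what the nested layouts above need. A quick demonstration using a throwaway directory (not BlogMore's actual code):

```python
import tempfile
from pathlib import Path

# Build a throwaway posts/ tree: one top-level post, one nested by date.
root = Path(tempfile.mkdtemp())
(root / "hello-world.md").write_text("# hi")
(root / "2024" / "01").mkdir(parents=True)
(root / "2024" / "01" / "python-tips.md").write_text("# tips")

flat = sorted(p.name for p in root.glob("*.md"))   # top level only
deep = sorted(p.name for p in root.rglob("*.md"))  # the whole tree

print(flat)  # ['hello-world.md']
print(deep)  # ['hello-world.md', 'python-tips.md']
```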

As I play with this more it's going to be fun to see what other bold claims turn out to not be true; or perhaps even the reverse: what neat features lurk in the code that haven't been documented.


  1. Yes, as I mentioned yesterday, that's an anthropomorphism. Folk who take such things as an indication that you don't understand "AI" might want to think about what it is to be a human when communicating. 

Five days with Copilot

14 min read
AI

Another itch to scratch

As I mentioned yesterday, I've been a happy user of Pelican for a couple or so years now, but every so often there's a little change or tweak I'd like to make that requires diving deeper into the templates and the like and... I go "eh, I'll look at it some time soon". Another thought that often goes through my head at those times is "I should build my own static site generator that works exactly how I want" -- because really any hacker with a blog has to do that at some point.

Meanwhile... I've had free access to GitHub Copilot attached to my GitHub account for some time now, and I've hardly used it. At the same time -- the past few months especially -- I've been watching the rise of agents as coding tools, as well as the rise of advocates for them. Worse still, I've seen people I didn't expect to be advocates for giving up on coding turning to these tools and suddenly writing rationales in favour of them.

So, suddenly, the idea popped into my head: I should write my own static site generator that I'll use for my blog, and I should try and use GitHub Copilot to write 100% of the code, and documentation, and see how far I get. In doing so I might firm up my opinions about where we're all going with this.

The requirements were going to be pretty straightforward:

  • It should be a static site generator that turns Markdown files into a website.
  • It should be blog-first in its design.
  • It should support non-blog-post pages too.
  • It should be written in Python.
  • It should use Jinja2 for templates.
  • It should have a better archive system than I ever got out of my Pelican setup.
  • It should have categories, tags, and all the usual metadata stuff you'd expect from a site where you're going to share content from.

Of course, the requirements would drift and expand as I went along and I had some new ideas.

Getting started

To kick things off, I created my repo, and then opened Copilot and typed out a prompt to get things going. Here's what I typed:

Build a blog-oriented static site generation engine. It should be built in Python, the structure of the repository should match that of my preferences for Python projects these days (see https://github.com/davep/oldnews and take clues from the makefile; I like uv and ruff and Mypy, etc).

Important features:

  • Everything is written in markdown
  • All metadata for a post should come from frontmatter
  • It should use Jinja2 for the output templates

As you can see, rather than get very explicit about every single detail, I wanted to start out with a vague description of what I was aiming for. I did want to encourage it to try and build a Python repository how I normally would, so I pointed it at OldNews in the hope that it might go and comprehend how I go about things; I also doubled down on the importance of using uv and mypy.

The result of this was... actually impressive. As you'll see in that PR, to get to a point where it could be merged, there was some back-and-forth with Copilot to add things I hadn't thought of initially, and to get it to iron out some problems, but for the most part it delivered what I was after. Without question it delivered it faster than I would have.

Some early issues where I had to point out problems to Copilot included:

  • The order of posts on the home page wasn't obvious to me, and absolutely wasn't reverse chronological order.
  • Footnotes were showing up kinda odd.
  • The main index for the blog was showing just post titles, not the full text of the article as you'd normally expect from a blog.

Nothing terrible, and it did get a lot of the heavy lifting done and done well, but it was worth noting that a lot of dev-testing/QA needed to be done to be confident about its work, and doing this picked up on little details that are important.

An improvement to the Markdown

As an aside: during this first PR, I quickly noticed a problem where I was getting this error when generating the site from the Markdown:

Error generating site: mapping values are not allowed in this context
  in "<unicode string>", line 3, column 15

I just assumed it was some bug in the generated code and left Copilot to work it out. Instead it came back and educated me on something: I actually had bad YAML in the frontmatter of some of my posts!
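That particular error is classically triggered by an unquoted colon inside a frontmatter value; something like this hypothetical frontmatter will do it:

```
---
title: Hike: a Markdown browser for the terminal
date: 2026-02-18
---
```

The second colon makes the YAML parser think a nested mapping is starting mid-value; quoting the whole title ("Hike: a Markdown browser for the terminal") makes the frontmatter valid again.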

This, by the way, wouldn't be the last time that Copilot found an issue with my input Markdown and so, having used it, improved my blog.

A major feature from a simple request

Another problem I ran into quickly was that previewing the generated site wasn't working well at all; all I could do was browse the files in the filesystem. So, almost as an offhand comment, in the initial PR, I asked:

Can we get a serve mode please so I can locally test the site?

Just like that, it went off and wrote a whole server for the project. While the server did need a lot of extra work to really work well1, the initial version was good enough to get me going and to iterate on the project as a whole.

The main workflow

Having kicked off the project and having had some success with getting Copilot to deliver what I was asking for, I settled into a new but also familiar workflow. Whereas normally, when working on a personal project, I'll write an issue for myself, at some point pick it up and create a PR, review and test the PR myself then merge, now the workflow turned into:

  • Write an issue but do so in a way that when I assign it to Copilot it has enough information to go off and do the work.
  • Wait for Copilot to get done.
  • Review the PR, making change requests etc.
  • Make any fixes that are easier for me to fix by hand than describe to Copilot.
  • Merge.

In fact, I was finding that the first step had some sub-steps to it too. What I was doing, more than ever, was writing issues like I'd write sticky notes: simple descriptions of a bug or a new feature. I'd then come back to them later and flesh them out into something that would act as a prompt for Copilot. I found myself doing this so often I ended up adding a "Needs prompt" label to my usual set of issue labels.

All of this made for an efficient workflow, and one where I could often get on with something else as Copilot worked on the latest job (I wasn't just working on other things on my computer; sometimes I'd be going off and doing things around the house while this happened), but... it wasn't fun. It was the opposite of what I've always enjoyed when it comes to building software. I got to dream up the ideas, I got to do the testing, I got to review the quality of the work, but I didn't get to actually lose myself in the flow state of coding.

One thing I've really come to understand during those five days of working on BlogMore was that I really missed getting lost in the flow state. Perhaps it's the issue-to-PR-to-review-to-merge cycle I used that amplified this; perhaps those who converse with an agent in their IDE or in some client application keep a sense of that (I might have to try that approach out); but this feels like a serious loss to me when it comes to writing code for personal enjoyment.

The main problems

I think it's fair to say that I've been surprised at just how well Copilot understood my (sometimes deliberately vague) requests, at how it generally managed to take some simple plain English and turn it into actual code that actually did what I wanted and, mostly, actually worked.

But my experiences over the past few days haven't been without their problems.

The confidently wrong problem

Hopefully we all recognise that, with time and experience, we learn where the mistakes are likely to turn up. Once you've written enough code you've written plenty of bugs too, and been caught out by plenty of edge-cases, and you develop a spidey-sense for trouble as you write code. I feel this kind of approach can be called cautiously confident.

Working with Copilot2, however, I often ran into the confidently wrong issue. On occasion I found it would proudly3 request review for some minor bit of work, proclaiming that it had done the thing or solved the problem, and I'd test it and nothing had materially changed. On a couple of occasions, when I pushed back, I found it actually doubting my review before finally digging in harder and eventually solving the issue.

I found that this took time and was rather tiring.

There were also times where it would do the same but not directly in respect to code. One example I can think of is when it was confident that Python 3.14 was still a pre-release Python as of February 2026 (it isn't).

This problem alone concerns me; this is the sort of thing where people without a good sense for when the agent is probably bullshitting will get into serious trouble.

The tries-too-hard problem

A variation on the above problem works the other way: on at least one occasion I found that Copilot tried too hard to fix a problem that wasn't really its to fix.

In this case I was asking it to tidy up some validation issues in the RSS feed data. One of the main problems was root-relative URLs being in the content of the feed; for that they needed to be made absolute URLs. Copilot did an excellent job of fixing the problem, but one (and from what I could see only one) relative URL remained.

I asked it to take a look and it took a real age to work over the issue. To its credit, it dug hard and it dug deep and it got to the bottom of the problem. The issue here though was it tried too hard because, having found the cause of the problem (a typo in my original Markdown, which had always existed) it went right ahead and built a workaround for this one specific broken link.

Now, while I'm a fan of Postel's law, this is taking things a bit too far. If this was a real person I'd tasked with the job I would have expected and encouraged them to come back to me with their finding and say "dude, the problem is in your input data" and I'd have fixed my original Markdown.

Here though it just went right ahead and added this one weird edge case as something to handle.

I think this is something to be concerned about and to keep an eye on too. I feel there's a danger in having the agent rabbit-hole a fix for a problem that it should simply have reported back to me for further discussion.

The never-pushes-back problem

Something I did find unsurprising but disconcerting was Copilot's unwillingness to push back, or at least defend its choices. Sometimes it would make a decision or a change and I'd simply ask it why it had done it that way, why it had made that choice. Rather than reply with its reasoning it would pretty much go "yeah, my bad, let me do it a way you're probably going to find more pleasing".

A simple example of this is one time when I saw some code like this:

@property
def some_property(self) -> SomeValue:
    from blogmore.utils import some_utility_function
    ...

I'm not a fan of imports in the body of methods unless there's a demonstrable performance reason. I asked Copilot why it had made this choice here and its reply was simply to say it had gone ahead and changed the code, moving the import to the top of the module.

I see plenty of people talk about how working with an agent is like pair-programming, but I think it misses out on what's got to be the biggest positive of that approach: the debate and exchange of ideas. This again feels like a concern to be mindful of, especially if someone less experienced is bringing code to you where they've used an agent as their pair buddy.

The overall impression

Now I'm at the end of the process, and using the result of this experiment to write this post4, I feel better informed about what these tools offer, and the pitfalls I need to be mindful of. Sometimes it wasn't a terrible way of working. For example, on the first day I started with this, at one point on a chilly but sunny Sunday afternoon, I was sat on the sofa, MacBook on lap, guiding an AI to write code, while petting the cat, watching the birds in the garden enjoy the content of the feeder, all while chatting with my partner.

That's not a terrible way to write code.

On the other hand, as I said earlier, I missed the flow state. I love getting lost in code for a few hours and this is not that. I also found the constant loop of prompt, wait, review, test, repeat, really quite exhausting.

As best as I can describe it: it feels like the fast food of software development. It gets the job done, it gets it done fast, but it's really not fulfilling.

At the end of the process I have a really useful tool, 100% "built with AI" under my guidance, one that frees me up to be creative with the things I do build by hand. That's not a bad thing, and I can see why this is appealing to people. On the other hand the process of building that tool was pretty boring and, for want of a better word... soulless.

Conclusion

As I write this I have about 24 hours of access to GitHub Copilot Pro left. It seems this experiment used up my preview time and triggered a "looks like you're having fun, now you need to decide if you want to buy it" response. That's fair.

So now I'm left trying to decide if I want to pay to keep it going. At the level I've been using it at for building BlogMore it looks like it costs $10/mth. That actually isn't terrible. I spend more than that on other hobbies and other forms of entertainment. So, if I can work within the bounds of that tier, it's affordable and probably worth it.

What I'm not sure about yet is if I want to. It's been educational, I can 100% see how and where I'd use this for work (and would of course expect an employer to foot the bill for it or a similar tool), and I can also see how and where I might use it to quickly build a personal-use tool to enable something more human-creative.

Ultimately though I think I'm a little better informed thanks to this process: better aware of some of the wins people claim, and better placed to be rightly incredulous when faced with some of the wilder claims.

Also, it'll help put some of my reading into perspective.


  1. Amusingly I uncovered another bug while writing this post. 

  2. I keep saying Copilot, but I think it's probably more correct to say "Claude Sonnet 4.5" as that's what seemed to be at play under the hood, if I'm understanding things correctly. 

  3. Yes, of course that's an anthropomorphism; you'll find plenty of them in this article, as it's hard to write about the subject in any other way. It's an easy shortcut for explaining some ideas. 

  4. Actually I'm writing this post as I always do: in Emacs. But BlogMore is in the background serving a local copy of my blog so I can check it in the browser, and rebuilding it every time I save a change. 

A new engine

2 min read

For about 2 and a half years now this blog has been built with Pelican. For the most part I've enjoyed using it; it's been easy enough to work with, if not exciting (which I think is a positive thing to say about a static site generator).

There were, however, a couple of things I didn't like about the layout I was getting out of it. One issue was the archive, which was a pretty boring list of the titles of all the posts on the site. It would have been nice to have them broken down by date or something, at least.

Of course, there are lots of themes, and it also uses templates, so I could probably have tweaked it "just so"; but every time I started to look into it I found myself wanting to "fix" the issue by building my own engine from scratch.

Thankfully, every time that happened, I'd come to my senses and go off and work on some other fun personal project. Until earlier this week, that was.

The thing is... I've been looking for a project where I could dive into the world of "AI coding" and "Agents" and all that nonsense. Not because I want to abandon the absolute thrill and joy I still get from writing actual code as a human, but because I want to understand things from the point of view of people who champion these tools.

The only way I'm going to have an informed opinion is to get informed; the only way to get informed is to try this stuff out.

So, here I am, with my blog now migrated over to BlogMore: a project that gives me a blog-focused static site generator that I 100% drove the development of, but for which I wrote almost none of the code.

At the moment it's working out well, as a generator. I'm happy with how it works, I'm happy with what it generates. I also think it's 100% backwards-compatible when it comes to URLs and feeds and so on. If you do see anything odd happening, if you do see anything that looks broken, I'd love to hear about it.

As for this being a "100% AI" project, and how I found that process and how I feel about the implications and the results... that's a blog post to come.

I took lots of notes.

OldNews - A terminal-based client for TheOldReader

3 min read

OldNews

I honestly can't remember when I was first introduced to the idea of RSS/Atom feeds, and the idea of having an aggregator or reader of some description to keep up with updates on your favourite sites. It's got to be over 25 years ago now. I can't remember what I used either, but I remember using one or two readers that ran locally, right up until I met Google Reader. Once I discovered that I was settled.

As time moved on and I moved from platform to platform, and wandered into the smartphone era, I stuck with Google Reader (and the odd client for it here and there). It was a natural and sensible approach to consuming news and updates. It also mostly felt like a solved problem and so felt nice and stable.

So, of course, I was annoyed when Google killed it off, like so many useful things.

When this happened I dabbled with a couple of alternatives and, at some point, finally settled on TheOldReader. Since then it's been my "server" for feed subscriptions with me using desktop and mobile clients to work against it.

But... I never found anything that worked for me that ran in the terminal. Given I've got a thing for writing terminal-based tools it made sense I should have a go, and so OldNews became my winter break project.

Reading an article in OldNews

I've written it as a client application for the API of TheOldReader, and only for that, and have developed it in a way that works well for me. All the functionality I want and need is in there:

  • Add subscriptions
  • Rename subscriptions
  • Remove subscriptions
  • Add folders
  • Rename folders
  • Remove folders
  • Move subscriptions between folders
  • Mark read/unread
  • Read articles (that provide actual content in their feeds)

Currently there's no support for starring feeds or interacting with the whole "friend" system (honestly: while I see mention of it in the API, I know nothing of that side of things and really don't care about it). As time goes on I might work on that.

As with all of my other terminal-based applications, there's a rich command palette that shows you what you can do, and also what keyboard shortcuts will run those commands. While I do still need to work on some documentation for the application (although you'd hope that anyone looking for an RSS reader at this point would mostly be able to find their way around) the palette is a good place to go looking for things you can do.

The command palette

Plus there's a help screen too.

The help screen

If themes are your thing, there's themes:

Available themes: Gruvbox, Textual Light and Nord

That's a small selection, and there's more to explore.

Also on the cosmetic front there's a simple compact mode, which toggles between two ways of showing the navigation menu, the article lists and the panel headers.

Not compact vs compact

OldNews has been a daily-driver for a wee while now, while also under active development. I think I've covered all the main functions I want and have also shaken out plenty of bugs, so today's the day to call it v1.0.0 and go from there.

If you're a user of TheOldReader and fancy interacting with it from the terminal too then it's out there to try out. It's licensed GPL-3.0 and available via GitHub and also via PyPI. If you have an environment that has pipx installed you should be able to get up and running with:

$ pipx install oldnews

It can also be installed using uv:

uv tool install oldnews

If you don't have uv installed you can use uvx.sh to perform the installation. For GNU/Linux or macOS or similar:

curl -LsSf uvx.sh/oldnews/install.sh | sh

or on Windows:

powershell -ExecutionPolicy ByPass -c "irm https://uvx.sh/oldnews/install.ps1 | iex"

Once installed, run the oldnews command.

Hopefully this is useful to someone else; meanwhile I'll be using it more and more. If you need help, or have any ideas, please feel free to raise an issue or start a discussion.