Recent Posts

Zensical

3 min read

After first getting involved with Textual, back in 2022, I became acquainted with MkDocs/mkdocs-material. It was being used for the (then still being written) Textual docs, and I really liked how they looked.

Eventually I adopted this combination for many of my own projects, and even adopted it for the replacement for davep.org. It's turned into one of those tools that I heavily rely on, but seldom actually interact with. It's just there, it does its job and it does it really well.

Recently though, while working on OldNews, something changed. When I was working on or building the docs, I saw this:

MkDocs warning

This was... new. So I visited the link and I was, if I'm honest, none the wiser. I've read it a few times since, and done a little bit of searching, and I still really don't understand what the heck is going on or why a tool I'm using is telling me not to use itself but to use a different tool. Like, really, why would MkDocs tell me not to use MkDocs but to use a different tool? Or was it actually MkDocs telling me this?1

Really, I don't get it.

Anyway, I decided to wave it away as some FOSS drama2, muse about it for a moment, and carry on and worry about it later. However, I did toot about my confusion and the ever-helpful Timothée gave a couple of pointers. He mentioned that Zensical was a drop-in replacement here.

Again, not having paid too much attention to what the heck was going on, I filed away this bit of advice and promised myself I'd come back to this at some point soon.

Fast-forward to yesterday and, having bumped into another little bit of FOSS drama, I was reminded that I should go back and look at Zensical (because I was reminded that there's always the chance your "supply chain" can let you down). Given that BlogMore is the thing I'm actively messing with at the moment, and given the documentation is in a state of flux, I thought I'd drop the "drop-in" into that project.

The result was, more or less, that it was a drop-in replacement. I did have to change $(mkdocs) serve --livereload to drop the --livereload switch (Zensical doesn't accept it, and I'd only added it to MkDocs recently because it seemed to stop doing that by default), but other than that... tool name swap and that's it.

Testing locally, the resulting site -- while themed slightly differently (and I do like how it looks) -- worked just as it did before, which is exactly what I wanted to see.

There was one wrinkle though: when it came to publishing to GitHub pages:

uv run zensical gh-deploy

Usage: zensical [OPTIONS] COMMAND [ARGS]...
Try 'zensical --help' for help.

Error: No such command 'gh-deploy'.

Oh! No support for deploying to the gh-pages branch, like MkDocs has! For a moment this felt like a bit of a show-stopper; but then I remembered that MkDocs' publishing command simply makes use of ghp-import and so I swapped to that and I was all good.

So, yeah, so far... I think it's fair to say that Zensical has been a drop-in replacement for the MkDocs/mkdocs-material combo. Moreover, if the impending problems are as bad as the blog post in the warning suggests, I'm grateful for this effort; in the long run this is going to save a lot of faffing around.

The next test will be to try the docs for something like Hike. They're a little bit more involved, with SVG-based screenshots being generated at build time, etc. If that just works out of the box too, without incident, I think I'm all sorted.


  1. Turns out that it was Material for MkDocs doing this; I wish the warning had said this. 

  2. It's not the first I've seen in my years, and I doubt it'll be the last. 

Documentation generation

2 min read
AI

While I've written a lot of documentation in my life, it's not something I enjoy. I want documentation to read well, I want documentation to be useful, I want documentation to be accurate. I also want there to be documentation at all, and sometimes the other parts mean it doesn't get done for FOSS projects1.

When I started the experiment that is BlogMore, I very quickly hashed out some ideas on how it might generate some documentation, and the result was okay. In fact, if anything, it was a bit too much and there was a lot of repeated information.

So, this morning, before I sat down for the day's work, I quickly wrote an issue that would act as a prompt to Copilot to rewrite the documentation. This time I tried to be very clear about what I wanted where, but also left it to work out all the details.

The result genuinely impressed me. While I'll admit I haven't read it all in detail (and because of this have left the same warning right at the start), on the surface it looks a lot clearer and feels like a better journey to learning how to use BlogMore.

blogmore.davep.dev hosts the result of this.

I have a plan to work through the documentation and be sure it's all correct and all makes sense, but if it's as correct and as useful as I think, I might have to consider the idea of taking this approach more often. Writing down the plan for the documentation and then letting it appear in the background while I get on with my actual day makes a lot of sense.

I fear I might be warming to these tools, a little bit. :-/


  1. Although I've made a point of doing it for almost every one of my recent Textual-based projects. 

BlogMore v1.5.0

3 min read

Since switching over on the 19th of last month I've been making lots of changes to BlogMore. While there's been a good few bug fixes and QoL changes, I've also been adding new features that I've found myself wanting.

Here's a list of some of the significant additions from the last couple of weeks (and, yes, as per the experiment, all of these have been developed by me prompting GitHub Copilot):

  • All parts of a date in a post's timestamp can be clicked to get to an archive of that point in time.
  • Added fully-client-side full text search (I'm finding this especially useful).
  • Added sitemap generation.
  • Added a general fallback description for any page that doesn't have one.
  • Added fallback keywords for any page that doesn't have any.
  • Added author metadata to the header of all pages.
  • Hugely optimised the use of FontAwesome.
  • Made best possible use of all the usual og: type metadata in the head of all pages.
  • Added optional CSS minification, improving page load times.
  • Added optional JavaScript minification, improving page load times.
  • Where appropriate, all pages now have rel="prev" and rel="next" tags in the <head>.
  • Added a rel="canonical" tag to the <head> of all pages.
  • Improved the style and workings of the pagination of all archive type pages.
  • Improved the cosmetics of the category and tag clouds.
  • Improved how the first paragraph is discovered for a page or post, when using it as the default description in the <head> of a page.
  • Cleaned up the generated HTML so it's more compact.
  • Added support for custom 404 pages.

As I say, they're just the improvements I've made that have come to mind as I've used BlogMore. I've also done a lot of bug fixing. You can read the full changelog over on the BlogMore website1.

I feel that the pace of updates and additions has started to slow; I think I've now got more or less everything I wanted from this. I'm pretty sure I can do everything I ever bothered to do with Jekyll and Pelican, and I am enjoying "owning" the code such that, if I have an idea for something I want, it's easy enough to make it happen.

I'm also pretty happy with how well the results perform. Despite the fact I'm not a web developer, and despite this blog being served by GitHub Pages (which, let's be honest, isn't the most speedy host), the measurements for a single page in the blog look fairly good:

Desktop

That's measuring loading in a desktop context. Even measured as mobile (which I've tried to make work well too) it's not too shabby:

Mobile

I think I can rightfully be satisfied with those values, given this isn't normally my primary focus when it comes to software development.

Anyway, if you like messing with static site generators, and one that is blog-centric sounds useful, and if you're not put off by the fact that this is a deliberate "use GitHub Copilot" experiment, feel free to take a look.


  1. Which, somewhat amusingly, is built with MkDocs. 

Hike v1.3.0

2 min read

I've just released v1.3.0 of Hike, my little terminal-based Markdown browser. It's about a year now since I made the first release and I've been making the odd update here and there, but mostly it's a pretty stable tool. There's a few other things I'd like to do with it but I keep getting distracted by different ideas.

Today's release sort of rides on the coattails of the current love for all things Markdown because of "the AI". It seems that some folk are now keen on the idea of serving Markdown from their websites, when asked for it, as you can see in this post for example. While that might be handy for some LLM bot or whatever, it's also pretty handy if you happen to have a web-friendly Markdown browser!

So v1.3.0 makes a small change to how a URL is looked at when deciding if it might be a Markdown document, by saying "hey, web server, I like Markdown more than anything else, so feel free to serve me that up". If we get a Markdown type back, we go ahead and load it into Hike.

This means that the post mentioned above loads up just fine now:

Viewing a Markdown site in Hike

Previously, Hike would have gone "nah, that's not a Markdown document" and would have handed off to the environment's web browser.
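The mechanism behind the change can be sketched in a few lines of Python. The exact Accept header value and media types Hike checks for are my assumptions here, not Hike's real internals -- this is just the shape of the idea:

```python
import urllib.request


def markdown_request(url: str) -> urllib.request.Request:
    """Build a request whose Accept header says we'd rather have Markdown.

    The exact header value is a guess at the idea, not Hike's real one.
    """
    return urllib.request.Request(
        url,
        headers={"Accept": "text/markdown, text/plain;q=0.5, */*;q=0.1"},
    )


def looks_like_markdown(content_type: str) -> bool:
    """Decide whether the media type we got back means Markdown."""
    return content_type in ("text/markdown", "text/x-markdown")
```

If the server honours the preference and responds with a Markdown media type, the document gets loaded in-app; otherwise the hand-off to the web browser happens as before.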

Hike is licensed GPL-3.0 and available via GitHub and also via PyPI. If you have an environment that has pipx installed you should be able to get up and going with:

pipx install hike

If you're more into uv:

uv tool install hike

If you don't have uv installed you can use uvx.sh to perform the installation. For GNU/Linux or macOS or similar:

curl -LsSf uvx.sh/hike/install.sh | sh

or on Windows:

powershell -ExecutionPolicy ByPass -c "irm https://uvx.sh/hike/install.ps1 | iex"

Original Seen by davep rescued

1 min read

Still Alive

At the end of yesterday's post I said I might see if I can rescue the original photoblog from its backup on WordPress. This was the first photoblog I played with, posting to the long-dead Posterous between 2009 and 2013.

So, yesterday evening, I did an extract of the full feed from WordPress, and also asked for a full backup of all the media. I then fired up Emacs and rattled out some Python code that would marry up the two sets of data and add to the photoblog repository. It took a little bit of working out; it seems that every post had two entries in the feed: a parent and a child entry. I've no clue why that's the case; I didn't really care to get too deeply into it.
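The de-duplication part of that marrying-up can be sketched like this. Keying on the title is an assumption for illustration; the real parent/child pairing in the WordPress feed may work differently:

```python
def dedupe_posts(entries: list[dict]) -> list[dict]:
    """Collapse parent/child duplicate feed entries into one per post.

    Keeps whichever duplicate carries the most content. Keying on the
    title is illustrative; the real pairing rule may differ.
    """
    best: dict[str, dict] = {}
    for entry in entries:
        key = entry["title"]
        current = best.get(key)
        if current is None or len(entry.get("content", "")) > len(current.get("content", "")):
            best[key] = entry
    return list(best.values())
```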

Soon enough seen-by.davep.dev was updated with an extra 1,833 posts! So now the categories on that blog site are broken down into Seen By Me 1 (the original) and Seen By Me 2 (the second incarnation).

Sadly, for the first blog, tagging wasn't really much of a thing so the tag cloud hasn't grown too much.

But, finally, I've got both the photoblogs back up and hosted somewhere I can point to, and I'm fully in control of their content. While it is hosted on GitHub Pages I've done this in a way that it would be pretty easy to move elsewhere; this is down to the fact that it's a simple static site built with BlogMore.

Seen by davep rescued

2 min read

Final Message

Since mid-2023 my photoblog has been broken. As I mentioned at the time, the issue was that this second incarnation of the blog had started life as a proper mashup of some web tools, and the heart of it was Twitter.

It all started to fall apart when Twitter got its new owner, and APIs became expensive, and other tools would not or could not work with it any more, and then it really fell apart when I finally nuked my account.

So since then the blog has been sat about, unused and unloved, with a lot of broken links and images.

Thankfully, though, the pipeline that I had going had been designed with this sort of problem in mind: part of what I had going also made a backup of the photos I took to Google Drive and to Google Photos. So when I got to a point the other day where BlogMore was usable I decided I should rescue the photos and rebuild the blog.

After downloading the full feed of the Blogger.com-hosted blog, I threw together some Python code that took the data (thanks to feedparser for helping with that), matched up the posts with the images I had in Google Drive, slugged the names, wrote some Markdown and copied some images, and I had the source of a fresh blog.
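Slugging the names is the sort of thing that boils down to a few lines; here's one plausible recipe (not necessarily the one my script used):

```python
import re
import unicodedata


def slugify(title: str) -> str:
    """Turn a post title into a filename/URL-friendly slug.

    Strips accents via NFKD normalisation, lowercases, and collapses
    runs of non-alphanumerics into single hyphens.
    """
    normalized = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", normalized.lower()).strip("-")
```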

The result of all of this can be seen up on seen-by.davep.dev.

I strongly suspect this is going to remain a pretty static site. At the moment I've given no thought whatsoever as to how I might have this populate in the way the old version of it did. Quite simply the old version was:

  1. Take photo
  2. Post to Twitter with a specific hashtag
  3. Have IFTTT notice this and create the blog post, make backups
  4. ...
  5. Profit?

I suppose, this time around, I could have something monitor my Mastodon account, or my pixelfed account, and then trigger a similar process; but then that would need something akin to IFTTT running and interacting with GitHub and creating the Markdown and kicking off a build process and...

Eh, maybe one day. For now, though, I'll be happy that I've rescued this particular incarnation of my photoblog and then think about if and how I carry on with something similar in the future.

Meanwhile... this has got me thinking. The original blog is backed up on WordPress. It's been sat there, all sad and neglected, ever since Posterous disappeared. I wonder if I can export all the data from there and mash it into this new version...

Not so elegant

2 min read
AI

One thing I've been noticing with my current experiment with GitHub Copilot is that it seems to know Python well enough to write code that gets the job done, and sometimes it knows it well enough to write more modern idiomatic Python code, but it also seems to write the inelegant version of it.

It's hard to pin down exactly, and of course it's a matter of taste (my idea of elegant might burn someone else's eyes), but on occasion, as I review the code, I find things that make me go "ugh".

Here's an example: there's a function that Copilot wrote to extract the first non-markup paragraph of an article (so that it can be used as a page description). One thing it needs to do is skip any initial images, etc. It takes a pretty brute force approach of looking at the start of each stripped line, but it gets the job done -- I can't really argue with that.

But here's how it does it:

# Skip markdown image syntax
if stripped.startswith("!["):
    continue

# Skip markdown linked image syntax ([![alt](img)](url))
if stripped.startswith("[!["):
    continue

# Skip HTML img tags
if stripped.startswith("<img"):
    continue

Now, this is good: it's using startswith. There are less-elegant approaches it could have used so I'll give it bonus points for using that method. The thing is though, it's testing each prefix one string at a time, pretty much rolling out a load of boilerplate code.

What bothers me here is that startswith will take a tuple of strings to test for. I find it curious that the generated code is idiomatic enough to know that startswith is a sensible option here, but at the same time it still writes the list of things to test out in a long-winded way.
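For comparison, here's the tuple form, wrapped in a little function so it stands alone (the function name is mine, not the generated code's):

```python
def is_image_line(line: str) -> bool:
    """True if a line matches any of the three image syntaxes being skipped.

    startswith accepts a tuple of prefixes, so the three separate
    boilerplate tests collapse into a single check.
    """
    stripped = line.strip()
    return stripped.startswith(("![", "[![", "<img"))
```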

This is exactly the sort of thing I'd call out in a normal code review. Technically, if this wasn't mostly a "let's see how it goes about this with minimal input from me" experiment, I'd have called it out here too (as an experiment, I might go back and prompt it to "think" about this).

If I ever find myself using this sort of tool for generating code in a work setting, this is exactly the sort of thing I'll be watching for.

Complexitty v1.1.0

2 min read

Complexitty

I've just released v1.1.0 of Complexitty, my little Mandelbrot explorer for the terminal. I first released it in April last year and have tinkered with it on and off since. Most of the changes have been to do with easier navigation, some additional colour schemes, the ability to pass location information on the command line, that sort of thing; meanwhile, the very heart of it has stayed the same "it's as fast as it's ever going to be; expect it to not be fast" approach.

It's a Python application after all: they can be fast, but something like this isn't going to compete with the serious Mandelbrot explorers.

v1.1.0, however, has a little optional speedup. If you install it normally it'll work at the pace it always did: the more you zoom in, the more you ramp up the iterations to tease more detail out, the slower it gets. But now: if you install it as complexitty[faster] rather than just as complexitty it will use Numba to speed up the main calculation.

On the very first run things will be slow to start up, but from then on I see a real improvement. As you zoom in and explore and up the detail, the calculation remains pretty fast. The drop-off of speed that you see without Numba just isn't there.

While the whole idea of Complexitty was to see what I could do with Python, in the terminal, using Textual, and keeping it "pure", I feel this is an acceptable "cheat" given it's optional.
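The optional-dependency pattern behind this can be sketched like so. The function and parameter names here are illustrative, not Complexitty's actual internals:

```python
# If Numba is installed (i.e. complexitty[faster]), JIT-compile the hot
# loop; otherwise fall back to a no-op decorator and run as plain Python.
try:
    from numba import njit
except ImportError:
    def njit(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func


@njit(cache=True)
def mandelbrot_iterations(cr: float, ci: float, max_iter: int) -> int:
    """Count iterations before the point c = cr + ci*i escapes |z| > 2."""
    zr = zi = 0.0
    for n in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return n
    return max_iter
```

The appeal of this shape is that the calculation code itself is identical either way; only the decorator changes, which is what makes the speedup genuinely optional.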

I'm considering this an experimental change for now, as I don't know how well it will work in all places where Complexitty could be otherwise installed. So give it a go, see if it installs and runs as complexitty[faster], and if it doesn't: fall back to plain old complexitty.

If you're interested you can find the source over on GitHub, with the application available via PyPI. If you use pipx you should be able to install with either:

pipx install complexitty[faster]

or

pipx install complexitty

If you're a fan of uv (and who isn't these days?) then try either:

uv tool install complexitty[faster]

or fall back to:

uv tool install complexitty

In fact, to squeeze everything you can out of Complexitty, perhaps try:

uv tool install complexitty[faster] --python 3.14

Brute force and ignorance

5 min read
AI

My experiment with coding with an agent continues, and from the point of view of the usability of the results, I'd say it's going well. From the point of view of learning more about this sort of tool and the pitfalls: I'd also say it's going well.

The one thing I'm really starting to get an appreciation for is just how much it employs a sort of brute force and ignorance approach to getting to a solution for some sort of problem. I noticed this on and off as I watched the agent's chat, as it worked away.

One example I enjoyed was when it was trying to add a new dependency to BlogMore. For reasons I should probably try and understand1 it knew it should be using uv to add a dependency and it knew how to use it to update the lock file, but it didn't have uv to hand:

The lock file (uv.lock) needs to be updated to include rcssmin. Since uv isn't available, I'll need to note this in the PR. The lock file should be generated by uv when the pyproject.toml is updated.

So it decided the best solution was probably to... "hand-edit" the lock file.

The lock file needs to be updated with rcssmin. Since uv is not available in this environment, I need to update it manually or find another way.

I need to update the uv.lock file to include rcssmin. The uv.lock file needs to include:

  1. The rcssmin package entry with hashes
  2. The rcssmin dependency in the blogmore package section

Let me look at the PyPI page for rcssmin 1.2.2 to get the hash.

It then grabs rcssmin from PyPI and checks out the information it wants; after which it goes:

I need to update the uv.lock file with the rcssmin package information. The uv.lock file uses SHA256 hashes. Let me look at the existing format in more detail to understand what I need to add, and then I'll query PyPI for the hashes.

Actually, let me think about this differently. The uv.lock file is a generated file, and updating it manually is error-prone. The proper way to update it is to run uv sync or uv lock. Since uv is not available in this environment, I have two options:

  1. Let the CI update the lock file (risky if CI fails on the locked version)

So... around this point it has some sort of plan for hand-patching the lock file, but also recognises that this is a dodgy approach and it should be using uv to update the lock file.

I also really like the part where it says it has two options, then goes on to provide an exhaustive list of... one.

After this it runs this code:

import json, urllib.request
url = 'https://pypi.org/pypi/rcssmin/json'
try:
    with urllib.request.urlopen(url, timeout=10) as f:
        data = json.load(f)
    v = '1.2.2'
    releases = data['releases'].get(v, [])
    for r in releases:
        print(f"filename: {r['filename']}")
        print(f"  url: {r['url']}")
        print(f"  sha256: {r['digests']['sha256']}")
        print(f"  size: {r['size']}")
        print(f"  upload_time: {r['upload_time_iso_8601']}")
except Exception as e:
    print(f'Error: {e}')

after which it concludes:

Now I have all the information I need to update the uv.lock file. I need to add:

  1. The rcssmin package entry in the lock file
  2. Add rcssmin to the blogmore package dependencies

I mean... okay, I guess. Again, I'll admit that I'm still getting used to setting this sort of thing up and working with it (and keep in mind none of this was done locally; this is all via GitHub Copilot, up on GitHub), but it's kind of funny and wild that the approach taken was to recognise that using uv to remake the lock file was the best approach so... let's hand-edit the lock file.

But that's not the best part. This is where it gets really fun. It had decided that the lock file must be in alphabetical order. Okay, sure, why not:

The lock file has packages sorted alphabetically, so rcssmin should go between pygments and python-dateutil (since "r" comes before "py" actually... no, wait - "r" comes after "p" in the alphabet. Let me check the order again).

Since "r" comes before "py."

Since "r" comes before "py."

o_O

O_o

Extra credit here for giving me a good giggle. I really appreciate the way that it catches itself mid-sentence and remembers how the alphabet actually works.

As to the outcome of all of this? Sure, the feature I wanted to add got added; it worked away and got to a working solution in the end. But the route it took was, I think it's fair to say, a "brute force and ignorance" approach.

I've not been spending too much time reading the agent's own chatter, but when I have I've found myself amused by the dead ends it'll wander down and then work its way back out. There is, without question, a recognisable process here: I believe it would be a dishonest developer who says they've never had times in the past, or just one of those off days, where they've fallen down a rabbit hole of a solution, only to realise it's the right solution implemented in the worst possible way. There's also a resemblance here to how more junior developers work a problem until they really develop their skills.

I think I'm going to keep an eye on the agent chat a bit more from now on. While I imagine things will only improve as these tools improve, for the moment it's a good source of coding comedy.


  1. Presumably there's things I can be doing to make its life easier. 

Copilot lied

4 min read
AI

This morning, with my experiment with Copilot having settled down a little bit, I thought I might try and use the result for another little experiment. For a long time now I've maintained a (currently lapsed) photoblog. It was always very much tied to "the site formerly known as twitter" and, since I fully erased my account after the site turned into a Nazi bar, I've not done anything to update how it gets populated.

So I got to thinking: I have a full backup of all the images in a couple of places; perhaps with a bit of coding (and some help from "the AIs") I can revive it using BlogMore?

I tinkered for a wee bit and mostly got something going (albeit I'm going to have to do some work to back-port the actual dates and times of some earlier images, and there's a load of work to do to somehow pull out all the tags). But then I hit a small hitch.

When building BlogMore I made the decision to let it write both the code and the documentation. It documented a couple of features I never asked for, but which seemed sensible so I never questioned. On the other hand neither did I test them at the time (because they weren't important to what I needed).

It's the exact reason I added this warning at the start of the documentation:

⚠️ Warning

BlogMore is an experiment in using GitHub Copilot to develop a whole project from start to finish. As such, almost every part of this documentation was generated by Copilot and what it knows about the project. Please keep this in mind.

From what I can see at the moment the documentation is broadly correct, and I will update and correct it as I work through it and check it myself. Of course, I will welcome reports of problems or fixes.

With this warning in mind, and with the intention of working through the documentation and testing its claims, I decided to test out one of the features when building up the new photoblog.

Whereas with this blog I keep all the posts in a flat structure, this time around I thought I'd try out this (taken from the Copilot-generated BlogMore documentation):


BlogMore is flexible about how you organise your posts. Here are some common patterns:

Flat structure (all posts in one directory):

posts/
  β”œβ”€β”€ hello-world.md
  β”œβ”€β”€ python-tips.md
  └── web-development.md

Note: Files can be date-prefixed (e.g., 2026-02-18-hello-world.md) and BlogMore will automatically remove the date prefix from the URL slug. The post will still use the date field from frontmatter for chronological ordering.

Organised by date:

posts/
  β”œβ”€β”€ 2024/
  β”‚   β”œβ”€β”€ 01/
  β”‚   β”‚   └── hello-world.md
  β”‚   └── 02/
  β”‚       └── python-tips.md
  └── 2025/
      └── 01/
          └── web-development.md

Organised by topic:

posts/
  β”œβ”€β”€ python/
  β”‚   β”œβ”€β”€ decorators.md
  β”‚   └── type-hints.md
  └── web/
      β”œβ”€β”€ css-grid.md
      └── javascript-tips.md

Using the hierarchy approach, especially with dates, seemed ideal! I'd drop the actual images in such a hierarchy, and also drop the Markdown posts in a parallel hierarchy too. Perfect!

So I set it all up to do that, fired up blogmore serve, visited the URL and... No posts yet. What the hell?

So I checked the code for BlogMore and, sure enough, despite the fact the documentation was selling me on this handy way to organise my posts, no such feature existed!

As an experiment I then asked Copilot what the heck was going on. Much as I expected, rather than coming back with an answer to the question, it went right ahead and fixed it instead. Which is fine, that's where I would have taken this, but I do wish it would answer the question first.

ℹ️ Note

I imagine I could get an answer to the question if I took a more conversational route with Copilot, rather than writing the question in an issue and then assigning that issue to it. I must remember to try that at some point.

So, yeah, unsurprisingly Copilot flat out lied1 in the documentation. I'm not in the least bit shocked by this and, as I said, I fully expected this. But it was amusing to have an example of this turn up so early in the documentation, in such a glaring way, and in a way that was so easily fixed (really, it was just a swap of Path.glob to Path.rglob).
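For the curious, the difference really is that small: glob only matches at the top level, while rglob recurses. A minimal sketch (the function name is mine, not BlogMore's):

```python
from pathlib import Path


def find_posts(posts_dir: Path) -> list[Path]:
    """Find Markdown posts anywhere under posts_dir.

    Path.glob("*.md") only matches files directly in posts_dir, which
    is why the flat layout worked; Path.rglob("*.md") also descends
    into date- or topic-based subdirectories.
    """
    return sorted(posts_dir.rglob("*.md"))
```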

As I play with this more it's going to be fun to see what other bold claims turn out to not be true; or perhaps even the reverse: what neat features lurk in the code that haven't been documented.


  1. Yes, as I mentioned yesterday, that's an anthropomorphism. Folk who take such things as an indication that you don't understand "AI" might want to think about what it is to be a human when communicating.