Recent Posts

BlogMore v2.21.0

2 min read

After noticing a broken link in a post yesterday, I got to thinking that it would be useful to add a linter to BlogMore. So I've released v2.21.0 which adds linting support.

A number of things are checked, and the results are broken down into errors and warnings (further down I've sketched how that split might be modelled). Errors result from any of these checks:

  • Ensures all posts and pages have valid YAML frontmatter. If a file cannot be parsed, it is reported as an error.
  • Scans the generated HTML for links to non-existent internal paths (other posts, pages, categories, tags, archives, site features like search, or files in extras/); there's a sketch of this check after the list.
  • Checks that all <img> sources resolve to valid internal paths or files in the extras/ directory.
  • Checks that the cover property in a post or page frontmatter points to a valid resource.
  • Verifies that all page slugs listed in sidebar_pages actually exist.
  • Checks that all internal-looking URLs in the links: and socials: configuration settings point to valid targets.
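
The heart of the link and image checks boils down to something like this; a minimal sketch in the spirit of the feature rather than BlogMore's actual code (LinkScanner and broken_links are names I've made up):

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkScanner(HTMLParser):
    """Collects internal-looking link and image targets from HTML."""

    def __init__(self) -> None:
        super().__init__()
        self.internal_targets: list[str] = []

    def handle_starttag(self, tag, attrs):
        wanted = {"a": "href", "img": "src"}.get(tag)
        target = dict(attrs).get(wanted) if wanted else None
        if target is None or target.startswith("#"):
            return
        parsed = urlparse(target)
        # No scheme and no host means the target is internal to the site.
        if not parsed.scheme and not parsed.netloc:
            self.internal_targets.append(parsed.path)

def broken_links(html: str, known_paths: set[str]) -> list[str]:
    """Report internal targets the site doesn't actually generate."""
    scanner = LinkScanner()
    scanner.feed(html)
    return [target for target in scanner.internal_targets if target not in known_paths]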

On the other hand, the following just result in a warning:

  • Flags if a post is missing a title, category, tags, or a date.
  • Reports if a post's date or modified date is set in the future.
  • Notes if a post's modified date is earlier than its original publication date.
  • Identifies if two or more posts share the exact same title.
  • Flags inline images missing an alt attribute, or those with an empty/whitespace-only alt attribute.
  • If clean_urls is enabled, warns if internal links point explicitly to index.html.
  • Reports internal links using the full site_url (e.g., https://example.com/path/) instead of a root-relative path (/path/).
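
To make the error/warning split concrete, here's a minimal sketch of how the results might be modelled; the names here are hypothetical rather than BlogMore's actual API, the one grounded detail being that warnings get reported but only errors fail the run:

from dataclasses import dataclass
from enum import Enum, auto

class Severity(Enum):
    ERROR = auto()
    WARNING = auto()

@dataclass
class LintIssue:
    severity: Severity
    location: str  # the post, page, or configuration entry at fault
    message: str

def exit_code(issues: list[LintIssue]) -> int:
    # Warnings are reported, but only errors make the run fail.
    return 1 if any(issue.severity is Severity.ERROR for issue in issues) else 0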

I feel these cover most of the things that are low-cost to detect but that have a positive impact on the state of a blog's content.

One thing I've not done is any sort of checking of external links. That would be costly and could have unintended consequences I don't want to be messing with (though perhaps a tool to export the list of external links, for checking elsewhere, could be useful at some point).

Having run this against this blog, I did find some things that needed cleaning up, mostly absolute links that could be turned into root-relative links (always good for making the content portable).

I'm going to make this a standard part of my "I'm ready to publish" check for this blog, and it should also be helpful as I carry on migrating the images in the blog over to WebP.

An argument with Gemini

1 min read

At the moment I'm working on a linting command for BlogMore. Having given up on Copilot/Claude for this, I've been having quite a bit of success with Gemini CLI. But while doing this, I've noticed some odd things with it. It does have this habit of cargo-culting some changes, or just rewriting code that doesn't need it.

For example, the tests for the new linting tool: it keeps adding import pytest near the top of the test file, despite the fact that pytest doesn't get used anywhere in the code. I remove it every time, and every time it adds more tests, it adds it back.

Another thing I've noticed is it seems to be obsessed with adding indentation to empty lines. So, if you've got a line of code indented 8 spaces, then an empty line, then another line of code indented 8 spaces, it'll add 8 spaces on that empty line. That sort of thing annoys the hell out of me1.

But the worst thing I just ran into was this. It had written this bit of code:

def lint_site(site_config: SiteConfig) -> int:
    """Convenience function to run the linter.

    Args:
        site_config: The site configuration.

    Returns:
        0 if no errors, 1 if errors were found.
    """
    linter = Linter(site_config)
    return linter.lint()

On the surface this seems fine: a function that hides just a little bit of detail while providing a simple interface to a feature. But assigning to a variable only to essentially "discard" it on the next line... nah. I dislike that sort of thing. The code can be just a little more elegant. So, seeing this, I edited it to be (removing the docstring for the purposes of this post):

def lint_site(site_config: SiteConfig) -> int:
    return Linter(site_config).lint()

Nice and tidy.

I then had Gemini work on something else in the linting code. What did I see towards the end of the diff? This!

A sneaky edit

Sneaky little shit!

Now, sure, the idea is that you review all changes before you run with them; but knowing that any given change is likely to rewrite parts of the code that aren't related to the problem at hand adds a lot more overhead, and I wonder how often people using these tools even bother.


  1. I've seen some IDEs do that on purpose too; I've got Emacs configured to strip that out on save. 

More syncing GitHub to GitLab and Codeberg

1 min read

Following on from my first post about this, I've tweaked the script I'm using to back up a repo to GitLab and Codeberg:

#!/bin/sh

# Check if the current directory is a Git repository
if ! git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
    echo "Error: This directory is not a Git repository."
    exit 1
fi

REPO_NAME="$1"

# If no repository name was provided, try to get it from the origin remote
if [ -z "$REPO_NAME" ]; then
    ORIGIN_URL=$(git remote get-url origin 2>/dev/null)
    if [ -n "$ORIGIN_URL" ]; then
        REPO_NAME=$(basename -s .git "$ORIGIN_URL")
    else
        echo "Error: No repository name provided and no 'origin' remote found."
        echo "Usage: $0 <repo-name>"
        exit 1
    fi
fi

echo "Configuring multi-forge backup sync for: $REPO_NAME"

# Set up the remote called backups. Anchor it to Codeberg.
git remote remove backups > /dev/null 2>&1
git remote add backups "ssh://git@codeberg.org/davep/${REPO_NAME}.git"

# Set up the push URLs.
git remote set-url --push backups "ssh://git@codeberg.org/davep/${REPO_NAME}.git"
git remote set-url --add --push backups "git@gitlab.com:davep/${REPO_NAME}.git"

# Only ever back up main.
git config remote.backups.push refs/heads/main:refs/heads/main

# Also back up all tags.
git config --add remote.backups.push 'refs/tags/*:refs/tags/*'

echo "----------------------------------------------------"
echo "Backups configured:"
git remote -v
echo "----------------------------------------------------"
echo "To perform the initial sync, run: git push backups"

### setup-forge-sync ends here

The changes from last time include:

  • The repo name now defaults to whatever is used for GitHub (taken from the origin remote), so I don't have to copy/paste it or type it out.
  • It now backs up all the tags too; with both push refspecs configured, a bare git push backups sends main and every tag.

I've been running with this for a couple of days now and it's proving really useful. Well, when Codeberg is available to push anything to...

An unreliable buddy

4 min read

At some point this morning I was looking for something on this blog and stumbled on a post that had a broken link. Not an external link, but an internal link. This got me thinking: perhaps I should add some sort of linting tool to BlogMore? I figured this should be doable using much of the existing code: pretty much work out the list of valid internal targets, run through all pages and posts, see what links get generated, look for internal links1, and see if they're all amongst those that are expected.

Later on in the day I prompted Copilot to have a go. Now, sure, I didn't tell it how to do it; instead, I told it what I wanted it to achieve. I hoped it would (going via Claude, as I've normally let it) decide on what I felt was the most sensible solution (use the existing configuration-reading, page/post-finding and post-parsing code) and run with that.

It didn't.

Once again, as I've seen before, it seemed to understand and take into account the existing codebase, and then copy bits from it and drop them into a new file. Worse, rather than tackle this using the relevant parts of the existing build engine, it concocted a whole new approach, again obsessing over throwing a regex or three at the problem.

I then spent the next 90 minutes or so testing the results, finding false reports, finding things it missed, and telling it what I'd found and getting it to fix them. It did, but on occasion it seemed to special-case the fix rather than understand the general case of what was going on and address that.

Eventually, probably too late really, I gave up trying to nudge it in the right direction and, instead, decided it was time to be more explicit about how it should handle this2. The first thing that bothered me was that it seemed to ignore the configuration object. BlogMore has a method of loading the configuration into an object which can be passed around the code, but with the linter it loaded that up, pulled it all apart, and then passed some of the values around as a huge parameter list. Because... reasons?
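
To illustrate the difference (SiteConfig here is a cut-down, hypothetical stand-in for BlogMore's configuration object, and the function and parameter names are made up):

from dataclasses import dataclass, field

@dataclass
class SiteConfig:
    site_url: str
    clean_urls: bool
    sidebar_pages: list[str] = field(default_factory=list)

# What I wanted: the configuration object passed around whole...
def lint_site(site_config: SiteConfig) -> int:
    ...

# ...rather than loaded, pulled apart, and its values passed around as
# an ever-growing parameter list.
def lint_site_exploded(site_url: str, clean_urls: bool, sidebar_pages: list[str]) -> int:
    ...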

Anyway, I told it to cut that shit out and prompted it about a few other things that looked pretty bad too. Copilot/Claude went off and worked away on this for a while, using up my 6th premium request of the session, and then eventually came back with an error telling me I'd hit a rate limit and to come back in a few hours.

GitHub rate limit

Could I have got it to where I wanted to be a bit earlier, with more careful prompting? No doubt. Will a lot of people? I suspect that's rather unlikely. This is one of the many things that make me pretty sceptical about this as the tool some sell it as, at least for the moment. I often see it written about or talked about as if it's a really useful coding buddy. It can be, at times, but it's hugely unreliable. Here I'm testing it by building something as a hobby, and I'm doing so knowing that there's no real consequence if it craps out on me. I'm also doing it safe in the knowledge that I could write the code myself, albeit at a far slower pace and with less available time. Not everyone this is aimed at has that going for them.

But these tools are still sold like they're the most reliable coding buddies going.

All that said: having hit the rate limit, and having squandered six premium requests on the problem with no real progress, I decided to use my Google Gemini coding allowance instead (which, in my experience so far, seems pretty generous). I threw more or less the same initial prompt at it, but this time I stressed that I really wanted it to use the existing engine where possible. It managed to pretty much one-shot the problem in about 9 minutes and used up just 2% of my daily quota3.

I've done a little more tidying up since, and I still need to properly review the result, but from what I can see of the initial results it's found all of the issues I wanted it to find, first time (something Claude didn't manage), and hasn't reported any issues that don't exist (also something Claude didn't manage).

So I guess this time Gemini was the reliable buddy. But not knowing which buddy you can rely on makes for a pretty unreliable group of buddies.


  1. This process could, of course, work for external links too, but I'm not really too keen on having a tool that visits every single external link to see if it's still there. 

  2. Which is mostly fine; I'm doing this as an experiment in what it's capable of, and also I was sofa-hacking while having a conversation about naming Easter eggs in Minecraft. 

  3. Imagine that too! Imagine knowing exactly how much of your quota you've used at any given moment! Presumably GitHub don't show you where you are in respect to the rate limits on top of your monthly quota because grinding to a halt with no warning is more... fun? 

BlogMore v2.20.0

3 min read

I've just released BlogMore v2.20.0. There are five main changes in this release, and a lot of changes under the hood.

First, the under-the-hood stuff: while this isn't going to make a difference to anyone using BlogMore (at least it shouldn't make a difference -- if it does that's a bug that deserves reporting), the main site generation code has had a lot of work done on it to break it up. The motivation for this is to make the code easier to maintain, and to try and steer it in a direction closer to how I'd have laid things out had I written it by hand. The outcome of this is that, where the generator was over 2,000 lines of code in a single file, it's now a lot more modular and easier to follow.

Some other internals have been cleaned up too. Generally I've had a period of reviewing some of the code and reducing obvious duplication of effort, that sort of thing.

Now for the visible changes and enhancements in this release:

Improved word counts

Until now the word counting (and so the reading time calculation) was done by stripping most of the Markdown and HTML markup from the Markdown source. I wasn't too keen on this approach given that the codebase already had a method of turning Markdown into plain text. So, in this release, the regex-based cleanup code is gone and word counts (and so reading times) use the same Markdown-to-plain-text pipeline as anything else that needs to work on plain text.
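
In miniature, the idea is something like this (a sketch, not BlogMore's actual pipeline, and I'm assuming Python-Markdown for the rendering step):

import re
from html.parser import HTMLParser

import markdown  # the Python-Markdown package

class TextExtractor(HTMLParser):
    """Collects just the text content of rendered HTML."""

    def __init__(self) -> None:
        super().__init__()
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        self.parts.append(data)

def word_count(source: str) -> int:
    # Render the Markdown to HTML, strip the tags, count what's left.
    extractor = TextExtractor()
    extractor.feed(markdown.markdown(source))
    return len(re.findall(r"\S+", " ".join(extractor.parts)))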

Fixed a word count and reading time disparity

It was possible, on the stats page, for one post to appear to have the lowest or highest word count but not the lowest or highest reading time. This was because reading times are always calculated to the whole minute, and that rounding could create a disparity. The calculation of those stats now takes this into account.
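
The rounding in question looks something like this (the 200 words per minute figure is an assumption for the sake of the example, not necessarily BlogMore's):

import math

WORDS_PER_MINUTE = 200  # an assumed reading speed

def reading_time(words: int) -> int:
    # Always a whole number of minutes, which is how two quite
    # different word counts can collapse into the same reading time:
    # 210 words and 390 words are both a "2 min read".
    return max(1, math.ceil(words / WORDS_PER_MINUTE))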

Added an optional title to the socials

The socials setting in the configuration file has had an optional title property added for each entry. Until now the tooltip for an entry would be whatever the site property was set to. Generally this worked, but if you had two or more accounts on the same site, or if you wanted to use a site value for something different, there was no way of making the tooltip more descriptive.

As an example, Codeberg currently isn't available as a site value. On the other hand, git is available, so it can be used as a substitute icon. The problem is, with this:

- site: git
  url: https://codeberg.org/davep

the tooltip will just say "git". With this update you can do this:

- site: git
  title: Codeberg
  url: https://codeberg.org/davep

and the tooltip will say "Codeberg".

As mentioned: this is optional. If there is no title the previous behaviour still applies.

Wall-clock time measurement

Yesterday, Andy posted about BlogMore's performance with respect to the different optional features. It's something I haven't really considered yet (possibly in part because this blog isn't anywhere near as big as his), but could be a good source of tinkering in the near future. His work to test the different parts of the tool did get me thinking though: it would be neat to know how long each part of the generation process takes.

So now, when a site is generated (either when using build or serve), the time of each step is printed, as is the overall generation time.
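
Boiled right down, this is the usual wrap-each-step-in-a-monotonic-clock pattern; a sketch of the idea rather than BlogMore's actual implementation:

from time import perf_counter
from typing import Callable

def run_timed(steps: dict[str, Callable[[], None]]) -> None:
    """Run each build step, reporting per-step and overall times."""
    overall_started = perf_counter()
    for name, step in steps.items():
        started = perf_counter()
        step()
        print(f"{name}: {perf_counter() - started:.2f}s")
    print(f"Total generation time: {perf_counter() - overall_started:.2f}s")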

Markdown in HTML support

Yesterday I noticed that, on one of my posts, what had been written as a simple caption for an image wasn't rendering as it used to. The Markdown source for the post contained this:

<center>
*(Yes, the tin was once mine and was once full; the early 90s were a
different time)*
</center>

While the text was centred, the raw Markdown was left in place (it should have been italic text). The reason for this is that BlogMore had never enabled Markdown-in-HTML support. So, as of this release, if the enclosing tag has markdown="1", any Markdown inside the tag will be parsed. This means the above becomes this:

<center markdown="1">
*(Yes, the tin was once mine and was once full; the early 90s were a
different time)*
</center>
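
For the curious: markdown="1" is the convention implemented by Python-Markdown's md_in_html extension, so I'd expect the switch to boil down to something like this (an assumption on my part, not a peek at BlogMore's source):

import markdown

source = """<center markdown="1">
*(Yes, the tin was once mine and was once full; the early 90s were a
different time)*
</center>"""

# With md_in_html enabled, the emphasis inside the <center> tag is
# parsed as Markdown rather than left in place.
print(markdown.markdown(source, extensions=["md_in_html"]))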

I did think about doing something to turn it on by default (the fact that I didn't have such a "switch" in the post before suggests that Pelican just always did this), but really I feel this approach is more flexible and less likely to result in unintended consequences.

blogmore.el v4.5.0

1 min read

Carrying on with the theme of being lazy while editing posts, I've released blogmore.el v4.5.0. This version adds blogmore-set-as-cover. With this, if you place point on a line that contains an image and run the command, that image is set as the cover for the post.

Sure, it's not like it's hard to copy, move, insert a new line, type cover: and then paste the text, but this is faster and more accurate.

And I'm lazy.

And I like hacking on Emacs Lisp that makes my workflow flow faster.

More Codeberg issues

1 min read

As I've said earlier, I'm not looking to move off GitHub any time soon, but I am curious about evaluating the options. So far, while trying out Codeberg, I am finding it to be very unstable.

A little earlier I was simply browsing some of the repositories I've been adding and got very slow load times, and then a 500.

Codeberg 500 error

As I've said elsewhere: I really wouldn't expect perfection. I doubt that Codeberg has the money behind it that GitHub does. But, again, instability is part of why moving off GitHub appeals in the first place; from that point of view it would feel like swapping occasional instability for what, at the moment, feels like regular instability, to the point that I wouldn't get too much done.

Syncing GitHub to GitLab and Codeberg

2 min read

I've had a GitLab account since December 2017. This came about because of the new job I started in January 2018. They used a self-hosted internal instance of GitLab for all their code, so it made sense to get familiar with it (it wasn't hard; especially back in 2017 it was near enough a clone of GitHub in terms of what it did). Since then, though, I've never really done anything with it. I think I had a repo or two on there for a short while, but I must have nuked them at some point because the account has been empty for the longest time.

A Codeberg account, on the other hand, only got created the other day. Having created it, I got to thinking about how I might use it. In doing so I thought back to my GitLab account, and then also to where all my public code lives and how "safe" it is.

Now, sure, the whole point of Git is that it's distributed. Forges are a useful thing to have and work with, but they shouldn't be the place where your code lives. On the other hand, I've had so many machines, and so many work environments, that my GitHub account has ended up being the storage location for my code and projects.

Mostly this is fine. If GitHub were to disappear tomorrow I imagine we'd all have bigger things to be worrying about anyway. But the principle stands: why not distribute the load? Why not distribute the effort when it comes to sharing code I write?

So yesterday I finally decided on a plan: for the moment at least, I'm going to keep using GitHub as my "primary" location for working on stuff. It's where I'll have WiP branches, it's where I'll keep issues, it's where I'll encourage people to raise requests and stuff, it's where I'll host this blog. But I'm going to start syncing projects to both GitLab and Codeberg. I see this as having two benefits: anyone who doesn't want to interact with GitHub can now easily fork code, and if they wish they can raise issues and the like too. Meanwhile, in doing this, I'll also have the added benefit of my code being "backed up" in at least three different locations1.

The approach I've settled on, for the moment, is based around this little shell script:

#!/bin/sh

# Check if a repository name was provided
if [ -z "$1" ]; then
    echo "Error: No repository name provided."
    echo "Usage: $0 <repo-name>"
    exit 1
fi

REPO_NAME="$1"

# Check if the current directory is a Git repository
if ! git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
    echo "Error: This directory is not a Git repository."
    exit 1
fi

echo "Configuring multi-forge backup sync for: $REPO_NAME"

# Set up the remote called backups. Anchor it to Codeberg.
git remote remove backups > /dev/null 2>&1
git remote add backups "ssh://git@codeberg.org/davep/${REPO_NAME}.git"

# Set up the push URLs.
git remote set-url --push backups "ssh://git@codeberg.org/davep/${REPO_NAME}.git"
git remote set-url --add --push backups "git@gitlab.com:davep/${REPO_NAME}.git"

# Only ever back up main.
git config remote.backups.push refs/heads/main:refs/heads/main

echo "----------------------------------------------------"
echo "Backups configured:"
git remote -v
echo "----------------------------------------------------"
echo "To perform the initial sync, run: git push backups main"

### setup-forge-sync ends here
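
For what it's worth, the remote the script leaves behind in .git/config ends up looking something like this (REPO standing in for the repo name):

[remote "backups"]
    url = ssh://git@codeberg.org/davep/REPO.git
    fetch = +refs/heads/*:refs/remotes/backups/*
    pushurl = ssh://git@codeberg.org/davep/REPO.git
    pushurl = git@gitlab.com:davep/REPO.git
    push = refs/heads/main:refs/heads/main

It's the two pushurl entries that do the trick: a single push to the backups remote sends main to both forges.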

I'm going to keep all repo names the same2. So when I use this script it'll set things up so I can git push backups and main will then get pushed up to both GitLab and Codeberg. I don't feel the need to be keeping any WiP branches in sync or kicking about, likewise any gh-pages branches.

While I'm sure I could have done something a little more automated, this feels like a neat and simple approach, and also allows me to curate what appears in the two other places over time (I suppose, eventually, I'll mirror everything that isn't a dead experimental repo, but meanwhile I'll prioritise projects that are either still very useful or which I'm actively developing and maintaining).


  1. Yes, I have other backups too, but they're always current-working-machine type backups. 

  2. Except, perhaps, for any repo whose name starts with .; I seem to recall that GitLab can't handle that, for some bizarre reason. Perhaps that's fixed now?