Posts tagged with "Python"

Gemini CLI vs GitHub Copilot (the result)

4 min read

Following on from this morning's initial experiment, I think I'm settling on a winner. Rather than be annoying and have you scroll to the bottom to find out: it's Gemini CLI. Here's how I found the process played out, and why I'm settling on one over the other.


Gemini CLI

Initially this was an absolute mess. After letting it take a first pass at the problem, the resulting code didn't even really run. The first go, and the three follow-up prompt/result cycles, all resulted in code that had runtime errors. I'm pretty sure it didn't even bother to do any adequate testing. This is odd given I've generally seen it do an okay job when it comes to writing and running tests.

Once I had the code in a stable state, with all type checking, linting and testing passing, it still didn't work. No matter how I tried to use the new facility it just didn't make a difference. No images were optimised. In the end I dived into the code, with the help of its attempt at debugging (it added print calls to try and get to the bottom of things -- how very human!), diagnosed what I thought was the issue (it was looking in the wrong location for the files to optimise), told it my hypothesis and let it check if I was right. It concluded I was and fixed the problem.

Since then I've had a working implementation of the initial plan.

Once that was in place it's been a pretty smooth journey. I've asked it questions about the implementation, had some concerns set to rest, had others addressed and fixed, improved some things here and there, added new features, etc.

All of this has left me with 18% of my daily quota used up. While I think this is the highest I've ever got while using Gemini CLI, it still feels like I got a lot of things done for not a lot of quota use.

GitHub Copilot

Initially I thought this had managed to one-shot the problem. Once it had finished its initial work the code ran without incident and produced all the optimised files. Or so I thought. Doing a little more testing, though, it became clear it was only optimising a subset of the images and it didn't seem to be producing the actual HTML to use the images.

On top of this it didn't even follow the full plan that was laid out in the issue it was assigned. For example: once I'd got it doing the main part of the work, it became apparent that it had pretty much ignored the whole idea of using a cache to speed this process up. I had to remind it to do this.

At one point I switched from the in-PR web interaction with Copilot, and used the local CLI instead. When I ran that up it warned me that I was already 50% of the way through some sort of rate limit and this wouldn't reset for another 3 hours. I think I was about 40 minutes into letting it try and do the work at this point.

After a bit more testing and follow-up prompts, I got to a point where I had something that looked like it was working; albeit in a slightly different way from how Gemini CLI did it (the Copilot approach was writing the optimised images out to the extras directory, mixing them in with my own images; Gemini opted for having a separate directory for optimised images within the static hierarchy).


At this point I will admit to not having carefully reviewed the code of either agent; that's a job still to do. But while Gemini got off to a very rocky start, with a bit of guidance it seemed to arrive at an implementation I'm happy with, and one that seems to be working as intended. While it didn't anticipate all the edge cases, when I asked about them it easily found and implemented solutions for them. Moreover, the fact that I could do all of this and confidently know the "cost" made a huge difference. Copilot seems to generally approach this like a quota or rate limit should be a lovely surprise that will destroy your flow; Gemini has it there and in front of you, all the time.

As for the general idea that I'm working on: I think I'm going to implement it. Weirdly I'm slightly nervous about building the blog such that it won't be using the images I created, but I also recognise that that's a little irrational. Meanwhile I'm very curious about the impact this might have on the PageSpeed measurement of the blog. While it's far from horrific, image size optimisation and size declaration seem to be fairly high on the things that are impacting the performance score (currently sat at 89 for the front page of the blog, as I type this).

The other thing that gives me pause for thought about merging this in, and then subsequently using it, is that I've just finished migrating all images to webp, and so saving a lot of space in the built version of the blog. Generating all the responsive sizes of the images eats that up again. With this feature off, the built version of the blog stands at about 84MB; with it on, this rises to 133MB. That extra 49MB more than eats up the 24MB saving I made earlier.

On the other hand: storage is a thing for GitHub to worry about; what I'm worrying about here, and aiming to improve, is the reader's experience.

I'm going to sit on this for a short while and play around with it, at least until I get impatient and say "what the hell" and run with it.

Gemini CLI vs GitHub Copilot (redux)

1 min read

Given I'm almost certainly going to drop GitHub Copilot starting next month, I'm using Gemini CLI more and more for BlogMore. Yesterday evening, I used it to plan out an idea for a change to the application. Now that I've migrated all images to WebP, I thought it might be interesting to look at the idea of having a responsive approach to images. This is something I don't know a whole lot about (never having needed to bother with it before), but it also happens that I need to read up on this anyway for something related to the day job; given this, it felt like a good time to experiment.

Together with Gemini CLI a plan was created.

This morning, over second coffee, I've kicked off the job of implementing it and, honestly, Gemini CLI is really struggling. It "implemented" the change pretty quickly, within minutes, but it just plain didn't work. Since then I've had it iterate over the issue four times and now it's struggling to make it work at all. It's still beavering away on this as I type, and consuming daily quota at a fair rate too.

So, while I still have GitHub Copilot, this feels like a good point to play them off against each other at least one more time. Having saved the plan Gemini wrote last night as an issue, I've assigned it to Copilot (using Claude Sonnet 4.6). As I type this, I have Gemini racing to get this working in a terminal window behind Emacs, meanwhile there's Claude doing its thing in GitHub's cloud.

It'll be interesting to see if Copilot manages to one-shot this; Gemini, for sure, is far off a one-shot implementation.

Braindrop v1.1.0

1 min read

Braindrop

It's now well over a year since I released Braindrop and it's in constant use by me. I continue to find raindrop.io a really useful resource, and more often than not I manage, edit, tag, and review what I save there with Braindrop, including deciding which bookmarks become public and which don't.

I've made a few small changes to the application in the past year and a bit, but not much. It's been stable and useful. But on the back of a recent change I made to OldNews, I felt I needed to make the same change here.

So with the release of v1.1.0 I've added three new commands to the application:

  • JumpToNavigation - Jump to the navigation panel; bound to 1 by default
  • JumpToRaindrops - Jump to the main raindrops list panel; bound to 2 by default
  • JumpToDetails - Jump to the details panel for the selected raindrop, if the panel is visible; bound to 3 by default

Now it's just a little easier and quicker to get around the UI.

If raindrop.io is your thing, and you want to work with your saved bookmarks in the terminal: Braindrop is licensed GPL-3.0 and available via GitHub and also via PyPI. It can also be installed using uv:

uv tool install braindrop

If you don't have uv installed you can use uvx.sh to perform the installation. For GNU/Linux or macOS or similar:

curl -LsSf uvx.sh/braindrop/install.sh | sh

or on Windows:

powershell -ExecutionPolicy ByPass -c "irm https://uvx.sh/braindrop/install.ps1 | iex"

If uv isn't your thing then it can also be installed with pipx:

pipx install braindrop

Once installed, run the braindrop command.

BlogMore v2.23.0

1 min read

I wasn't quite planning on making a new release of BlogMore so soon after the previous version, but I had a couple of ideas that I wanted to add, and then also got a nifty request too; so here we are: we have v2.23.0.

The first couple of changes relate to the cache. In the previous release I added a cache of the FontAwesome metadata, which in turn means that a cache directory is being created. I felt it would be fair and useful to provide a command that both lets the user know where the cache lives and also lets them remove it. So now BlogMore has a cache command with two sub-commands:

  • location: tells you where the cache directory is located
  • clear: removes the cache directory
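For the curious, the shape of such a command can be sketched with just the standard library. This is not the actual BlogMore code (a library like platformdirs would handle the platform differences more thoroughly), just the general idea:

```python
import os
import shutil
import sys
from pathlib import Path

def cache_location() -> Path:
    """Where the cache lives: %LOCALAPPDATA% on Windows, XDG-style otherwise."""
    if sys.platform == "win32":
        return Path(os.environ["LOCALAPPDATA"]) / "blogmore" / "cache"
    xdg = os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache"))
    return Path(xdg) / "blogmore"

def cache_clear() -> None:
    """Remove the cache directory, if it exists."""
    location = cache_location()
    if location.exists():
        shutil.rmtree(location)
```

The location sub-command then just prints `cache_location()`, and clear calls `cache_clear()`.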

Also, now that we have a cache directory, it makes sense to use it a bit more to squeeze even more time out of the build process. So starting with this release, per content directory, the various icons that are created for the site are cached. This means that if the source image doesn't change, subsequent builds skip the conversion and resize of every variation. This saves a good fraction of a second, making the build of my blog feel noticeably quicker.

Finally, earlier today, Andy asked if it would be possible to have the BlogMore serve mode auto-reload any page being viewed in a browser, when the site is regenerated. It was something I'd considered myself a couple of times so that was a good reason to finally look into it. Not knowing how this could be achieved[1], I prompted Gemini for an idea, stressing I wanted a solution that didn't disturb a generated site; it came up with a convincing solution. I let it run at it and, along with a few changes of my own, it seems to be working a treat.

This, of course, now makes me want to squeeze even more time out of the build process.


  1. Web development has never been my primary area of knowledge. 

BlogMore v2.22.0

2 min read

As mentioned a couple of days ago, I've been toying with finding areas of improvement with respect to the performance of BlogMore. Until now, for good reasons, I've not really paid any attention to how fast (or slow) BlogMore is when it comes to generating my blog. While it's never been blindingly fast, it's always been fast enough and I was more keen on making it work right. So for a good while the focus has been on well-formed output, stuff that keeps the crawlers happy, that sort of thing.

But now that I'm in a place where new features aren't really so necessary, it does feel like a good point to find any easy wins in speeding up the code. I think it's gone well.

BlogMore v2.22.0 contains quite a few internal changes that speed up some core parts of site generation. Many of the things identified by Gemini, back when I first kicked this process off, have been done. The amount of Markdown->HTML conversion work has been vastly reduced, which has had a pretty big impact on all sorts of things. There's also caching of the FontAwesome metadata[1] which should save a fair bit of time on slower connections. I did avoid the whole business of parallel processing as I dabbled with this near the start of the project and I could not wrangle a win out of that at all; given how much of a win I've had with these changes, I doubt that would change (it could conceivably make things worse).

So, how much faster is it? Roughly, based on my tests, a site generates in about 1/4 of the time it did before. On my M2 Mac Mini my blog builds in under 3 seconds; with v2.21.0 it took around 13 seconds. In my case that's with all the optional features of BlogMore turned on.

Naturally this work has touched on a lot of internals of the code, and made significant changes to the generation pipelines of lots of different pages and features. I've done my absolute best to compare[2] the output of v2.21.0 and v2.22.0 and I can't see any significant differences[3]. When trying out v2.22.0 I would suggest paying just a little extra attention to the result, to be sure you're happy that nothing has changed.


  1. It lives in ~/.cache/blogmore on Unix and Unix-like systems, or %LOCALAPPDATA%\blogmore\cache on DOS/VMS-influenced systems. 

  2. Lots of diff -rq and then diffing an assorted sample of files that showed differences to inspect what was actually different. 

  3. Actually, there's a small difference in the context shown in backlinks, but this was a deliberate change and a very small cosmetic enhancement. 

Speeding up BlogMore

3 min read

As mentioned recently, Andy did a nifty bit of testing of BlogMore to measure the performance hit of each of the optional generation features. Performance is something I haven't really spent much time thinking about; I cared more about how the result looked than I did about how quickly a site got generated.

I seem to remember that, early on, I did have a bit of a play with trying to get Copilot to tackle the idea of parallel generation of parts of the site, but I didn't seem to get any sort of win out of it so I closed the PR and gave up on the idea.

Now though, especially having added the elapsed time of each step to the progress report during generation, I am curious about what wins there are. One that came to mind right away, and I have been considering dealing with for a wee while, is the fact that every generation downloads the FontAwesome metadata. Caching that would be a simple win, for example.

But, if I'm going to do this, it makes sense that I stick to the spirit of this project and get one or more agents to tackle the problem. So I asked Gemini CLI to look over the codebase and give me some big easy wins. I made clear I didn't want it to do any work yet, I just wanted it to report back what it could find.

The result made a lot of sense. Here's what it said:


Performance Optimization Recommendations for BlogMore

I have conducted a comprehensive review of the blogmore codebase and identified several areas where measurable and significant speedups can be achieved, particularly for larger sites.

1. Redundant Parsing for Backlink Snippets

The most significant bottleneck is in src/blogmore/backlinks.py. The build_backlink_map function iterates through every internal link in every post. For each link, it calls _extract_snippet, which re-parses the entire Markdown content of the source post to generate a plain-text snippet.

  • The Issue: If a post has 20 internal links, it is parsed 20 times just for backlinks.
  • The Win: Modify the logic to insert unique markers for all links in a post and perform a single Markdown-to-plain-text conversion per post.
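(Interjecting into Gemini's report for a moment: here's roughly how I picture that marker trick working. The names are made up and the to_plain_text callable stands in for the real converter, which would need to pass the marker character through untouched.)

```python
import re

MARKER = "\x00"  # A character that should never appear in post text.

def backlink_snippets(markdown_text, link_texts, to_plain_text, width=40):
    """Get a plain-text snippet around each link with ONE conversion.

    Each link occurrence is tagged with a unique marker, the whole post
    is converted once, then each snippet is cut out around its tag.
    """
    marked = markdown_text
    for i, text in enumerate(link_texts):
        marked = marked.replace(text, f"{MARKER}{i}{MARKER}{text}", 1)
    plain = to_plain_text(marked)
    snippets = {}
    for i, text in enumerate(link_texts):
        tag = f"{MARKER}{i}{MARKER}"
        pos = plain.find(tag)
        if pos >= 0:
            window = plain[max(0, pos - width):pos + len(tag) + width]
            # Strip all markers so neighbouring tags don't leak in.
            snippets[text] = re.sub(f"{MARKER}[0-9]+{MARKER}", "", window).strip()
    return snippets
```

However the details shake out, the point stands: one conversion per post, not one per link.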

2. Redundant Markdown Parsing for Metadata

The Post object in src/blogmore/parser.py has properties like description and reading_time that are accessed frequently (on the post page, index pages, RSS feeds, search index, etc.).

  • The Issue: Post.description calls extract_first_paragraph, and Post.reading_time calls calculate_reading_time. Both of these functions perform a full Markdown conversion every time they are called because they are not cached.
  • The Win: Use functools.cached_property on these fields. Additionally, these should be calculated by extracting text from the html_content that has already been generated during the initial parse, rather than re-parsing the raw Markdown.

3. Sequential Processing

The generator currently processes everything sequentially in a single thread.

  • The Issue: For a site with hundreds or thousands of posts, the initial Markdown parsing and the subsequent Jinja2 rendering are strictly serial, leaving multi-core CPUs mostly idle.
  • The Win: Use multiprocessing (e.g., ProcessPoolExecutor) to parallelize the parsing of Markdown files and the rendering of HTML pages. Since these tasks are CPU-bound and independent, this would scale almost linearly with the number of CPU cores.
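(For illustration, a sketch of what the parallel parse might look like. The function names are hypothetical; the "fork" start method keeps the sketch simple on Unix, and real code would guard the entry point for spawn-based platforms like Windows.)

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def parse_post(source: str) -> dict:
    """Stand-in for the CPU-bound Markdown parse of a single post."""
    return {"words": len(source.split()), "chars": len(source)}

def parse_all(sources: list[str]) -> list[dict]:
    """Parse posts across CPU cores; each parse is independent."""
    with ProcessPoolExecutor(mp_context=mp.get_context("fork")) as pool:
        # chunksize batches small tasks to cut inter-process overhead.
        return list(pool.map(parse_post, sources, chunksize=8))
```

The catch, as I found when I tried this early on, is that process start-up and pickling overhead can eat the win for small sites.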

4. Asset Management Inefficiencies

The AssetManager in src/blogmore/generator/assets.py performs some operations that could be optimized:

  • FontAwesome Metadata: It downloads icon metadata from GitHub on every build to optimize CSS. This can be cached locally to avoid network latency.
  • Icon Generation: It regenerates favicons and Apple touch icons on every build. This should be skipped if the source icon hasn't changed.

5. Minor Algorithmic Improvements

  • Navigation Lookups: In PageGenerator.generate_post_page, the code uses all_posts.index(post) to find the current post's position for prev/next links. This is an O(N) operation called N times, leading to O(N^2) behavior. Pre-calculating a mapping would make this O(1).
  • Markdown Instance Reuse: Several utility functions (markdown_to_plain_text, extract_first_paragraph) create a fresh markdown.Markdown instance on every call. Reusing a single instance (and calling .reset()) or caching the instance creation would reduce overhead.

Summary

The "Big Wins" are definitely optimizing backlink snippet extraction and implementing parallel parsing/rendering. Together, these would likely reduce generation time for large sites by 60-80% on modern hardware.


At first glance they all seem pretty sensible, and I like that it seems to have found some really obvious big improvements, and also the little wins that might have a cumulative effect. I strongly suspect the parallel generation suggestion is again going to be a non-win (and so for now I'm going to pass on it), but most of the other suggestions seem doable and worth trying out.

I think what I will do, rather than work through them all in one PR, is tackle one at a time. So, over the next few days, expect to see some PRs turn up and get merged (hopefully), as I try and tease some speed wins out of the code.

BlogMore v2.21.0

2 min read

After noticing a broken link in a post yesterday, I got to thinking that it would be useful to add a linter to BlogMore. So I've released v2.21.0 which adds linting support.

A number of things are checked and the results are broken down into things that are errors or warnings. Errors result from any of these checks:

  • Ensures all posts and pages have valid YAML frontmatter. If a file cannot be parsed, it is reported as an error.
  • Scans the generated HTML for links to non-existent internal paths (other posts, pages, categories, tags, archives, site features like search, or files in extras/).
  • Checks that all <img> sources resolve to valid internal paths or files in the extras/ directory.
  • Checks that the cover property in a post or page frontmatter points to a valid resource.
  • Verifies that all page slugs listed in sidebar_pages actually exist.
  • Checks that all internal-looking URLs in the links: and socials: configuration settings point to valid targets.

On the other hand, the following just result in a warning:

  • Flags if a post is missing a title, category, tags, or a date.
  • Reports if a post's date or modified date is set in the future.
  • Notes if a post's modified date is earlier than its original publication date.
  • Identifies if two or more posts share the exact same title.
  • Flags inline images missing an alt attribute, or those with an empty/whitespace-only alt attribute.
  • If clean_urls is enabled, warns if internal links point explicitly to index.html.
  • Reports internal links using the full site_url (e.g., https://example.com/path/) instead of a root-relative path (/path/).
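To give a flavour of the core idea, here's a much-simplified sketch of an internal link check. This is not the actual BlogMore linter code, just the shape of it: collect root-relative targets from the generated HTML and compare them against the set of paths the generator produced.

```python
from html.parser import HTMLParser

class InternalTargets(HTMLParser):
    """Collect root-relative link and image targets from generated HTML."""

    def __init__(self) -> None:
        super().__init__()
        self.found: list[str] = []

    def handle_starttag(self, tag, attrs):
        wanted = {"a": "href", "img": "src"}.get(tag)
        value = dict(attrs).get(wanted) if wanted else None
        if value and value.startswith("/"):
            self.found.append(value)

def broken_internal_links(html: str, known_paths: set[str]) -> list[str]:
    """Report internal targets that the generator never produced."""
    parser = InternalTargets()
    parser.feed(html)
    return [target for target in parser.found if target not in known_paths]
```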

I feel like all of these cover most of the things that are low-cost to detect but have a positive impact on the state of the content of a blog.

One thing I've not done is any sort of checking of external links. This would be costly and could possibly have unintended consequences that I don't want to be messing with (perhaps a tool to export the list of external links for checking could be useful, at some point).

Having run this against this blog, I did find some things that needed cleaning up, mostly absolute links that could be turned into root-relative links (always good for making the content portable).

I'm going to make this a standard part of my "I'm ready to publish" check for this blog, and it should also be helpful as I carry on migrating the images in the blog over to WebP.

An argument with Gemini

1 min read

At the moment I'm working on a linting command for BlogMore. Having given up on Copilot/Claude for this, I've been having quite a bit of success with Gemini CLI. But while doing this, I've noticed some odd things with it. It does have this habit of cargo-culting some changes, or just rewriting code that doesn't need it.

For example, the tests for the new linting tool: it keeps adding import pytest near the top of the test file despite the fact that pytest doesn't get used anywhere in the code. Every time I remove it, and every time it adds more tests, it puts it back.

Another thing I've noticed is it seems to be obsessed with adding indentation to empty lines. So, if you've got a line of code indented 8 spaces, then an empty line, then another line of code indented 8 spaces, it'll add 8 spaces on that empty line. That sort of thing annoys the hell out of me[1].
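(If you wanted to clean that up in code rather than in the editor, it's a one-liner of a fix; a standalone sketch, nothing to do with BlogMore itself:)

```python
def strip_blank_line_whitespace(source: str) -> str:
    """Remove the pointless indentation agents leave on empty lines."""
    return "\n".join(
        "" if line.strip() == "" else line for line in source.splitlines()
    )
```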

But the worst thing I just ran into was this. It had written this bit of code:

def lint_site(site_config: SiteConfig) -> int:
    """Convenience function to run the linter.

    Args:
        site_config: The site configuration.

    Returns:
        0 if no errors, 1 if errors were found.
    """
    linter = Linter(site_config)
    return linter.lint()

On the surface this seems fine: a function that hides just a little bit of detail while providing a simple function interface to a feature. But that use of a variable only to "discard" it on the next line... nah. I dislike that sort of thing. The code can be just a little more elegant. So seeing this I edited it to be (removing the docstring for the purposes of this post):

def lint_site(site_config: SiteConfig) -> int:
    return Linter(site_config).lint()

Nice and tidy.

I then had Gemini work on something else in the linting code. What did I see towards the end of the diff? This!

A sneaky edit

Sneaky little shit!

Now, sure, the idea is you review all changes before you run with them, but knowing that it's likely that any given change might rewrite parts of the code that aren't related to the problem at hand adds a lot more overhead, and I wonder how often people using these tools even bother.


  1. I've seen some IDEs do that on purpose too; I've got Emacs configured to strip that out on save. 

An unreliable buddy

4 min read

At some point this morning I was looking for something on this blog and stumbled on a post that had a broken link. Not an external link, but an internal link. This got me thinking: perhaps I should add some sort of linting tool to BlogMore? I figured this should be doable using much of the existing code: pretty much work out the list of internal links, run through all pages and posts, see what links get generated, look for internal links[1], and see if they're all amongst those that are expected.

Later on in the day I prompted Copilot to have a go. Now, sure, I didn't tell it how to do it, instead I told it what I wanted it to achieve. I hoped it would (going via Claude, as I've normally let it) decide on what I felt was the most sensible solution (use the existing configuration-reading, page/post-finding and post-parsing code) and run with that.

It didn't.

Once again, as I've seen before, it seemed to understand and take into account the existing codebase, then copy bits from it and drop them into a new file. Worse, rather than tackle this using the relevant parts of the existing build engine, it concocted a whole new approach, again obsessing over throwing a regex or three at the problem.

I then spent the next 90 minutes or so, testing the results, finding false reports, finding things it missed, and telling it what I found and getting it to fix them. It did, but on occasion it seemed to special-case the fix rather than understand the general case of what was going on and address that.

Eventually, probably too late really, I gave up trying to nudge it in the right direction and, instead, decided it was time to be more explicit about how it should handle this[2]. The first thing that bothered me was that it seemed to ignore the configuration object. BlogMore has a method of loading the configuration into an object which can be passed around the code, but the linter loaded it up, pulled it all apart, and then passed some of the values around as a huge parameter list. Because... reasons?

Anyway, I told it to cut that shit out and prompted it about a few other things that looked pretty bad too. Copilot/Claude went off and worked away on this for a while, using up my 6th premium request of the session, and then eventually came back with an error telling me I'd hit a rate limit and to come back in a few hours.

GitHub rate limit

Could I have got it to where I wanted to be a bit earlier, with more careful prompting? No doubt. Will a lot of people? I suspect that's rather unlikely. This is one of the many things that make me pretty sceptical about this as the tool some sell it as, at least for the moment. I often see it written about or talked about as if it's a really useful coding buddy. It can be, at times, but it's hugely unreliable. Here I'm testing it by building something as a hobby, and I'm doing so knowing that there's no real consequence if it craps out on me. I'm also doing it safe in the knowledge that I could write the code myself, albeit at a far slower pace and with less available time. Not everyone this is aimed at has that going for them.

But these tools are still sold like they're the most reliable coding buddies going.

All that said: having hit the rate limit, and having squandered six premium requests on the problem with no real progress, I decided to use my Google Gemini coding allowance instead (which, in my experience so far, seems pretty generous). I threw more or less the same initial prompt at it, but this time I stressed that I really wanted it to use the existing engine where possible. It managed to pretty much one-shot the problem in about 9 minutes and used up just 2% of my daily quota[3].

I've done a little more tidying up since, and I still need to properly review the result, but from what I can see of the initial results it's found all of the issues I wanted it to find, first time (something Claude didn't manage) and hasn't found any issues that don't exist (also something Claude didn't manage).

So I guess this time Gemini was the reliable buddy. But not knowing which buddy you can rely on makes for a pretty unreliable group of buddies.


  1. This process could, of course, work for external links too, but I'm not really too keen on having a tool that visits every single external link to see if it's still there. 

  2. Which is mostly fine; I'm doing this as an experiment in what it's capable of, and also I was sofa-hacking while having a conversation about naming Easter eggs in Minecraft. 

  3. Imagine that too! Imagine knowing exactly how much of your quota you've used at any given moment! Presumably GitHub don't show you where you are in respect to the rate limits on top of your monthly quota because grinding to a halt with no warning is more... fun? 

BlogMore v2.20.0

3 min read

I've just released BlogMore v2.20.0. There are five main changes in this release, and a lot of changes under the hood.

First, the under-the-hood stuff: while this isn't going to make a difference to anyone using BlogMore (at least it shouldn't make a difference -- if it does that's a bug that deserves reporting), the main site generation code has had a lot of work done on it to break it up. The motivation for this is to make the code easier to maintain, and to try and steer it in a direction closer to how I'd have laid things out had I written it by hand. The outcome of this is that, where the generator was over 2,000 lines of code in a single file, it's now a lot more modular and easier to follow.

Some other internals have been cleaned up too. Generally I've had a period of reviewing some of the code and reducing obvious duplication of effort, that sort of thing.

Now for the visible changes and enhancements in this release:

Improved word counts

Until now the word counting (and so the reading time calculation) was done by stripping most of the Markdown and HTML markup from the Markdown source. I wasn't too keen on this approach given that the codebase already had a method of turning Markdown into plain text. So in this release the regex-based cleanup code is gone and word counts (and so reading times) use the same Markdown-to-plain-text pipeline as anything else that needs to work on plain text.

Fixed a word count and reading time disparity

It was possible, in the stats page, to have one post appear to have the lowest or highest word count, but to not have the lowest or highest reading time. This was because reading times are always calculated to the minute and so there could be a disparity due to this rounding. The calculation of those stats now takes this into account.
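(A simplified sketch of the tie-break idea, with made-up field names rather than BlogMore's actual code: when reading times tie after rounding, the word count decides.)

```python
def longest_read(posts: list[dict]) -> dict:
    """Pick the "longest read" consistently with the word-count stats.

    Reading times round to whole minutes, so two posts can tie on reading
    time while differing in word count; breaking the tie on word count
    means the stats page never contradicts itself.
    """
    return max(posts, key=lambda post: (post["reading_time"], post["words"]))
```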

Added an optional title to the socials

The socials setting in the configuration file has had an optional title property added for each entry. Until now the tooltip for an entry would be whatever the site was set to. Generally this works but if you have two or more accounts on the same site, or if you want to use a site value for something different, there was no way of making the tooltip more descriptive.

As an example, currently it's not possible to support Codeberg as a site. On the other hand git is available so it can be used as a substitute icon. The problem is, with this:

- site: git
  url: https://codeberg.org/davep

the tooltip will just say "git". With this update you can do this:

- site: git
  title: Codeberg
  url: https://codeberg.org/davep

and the tooltip will say "Codeberg".

As mentioned: this is optional. If there is no title the previous behaviour still applies.

Wall-clock time measurement

Yesterday, Andy posted about BlogMore's performance with respect to the different optional features. It's something I haven't really considered yet (possibly in part because this blog isn't anywhere near as big as his), but could be a good source of tinkering in the near future. His work to test the different parts of the tool did get me thinking though: it would be neat to know how long each part of the generation process takes.

So now, when a site is generated (either when using build or serve), the time of each step is printed, as is the overall generation time.
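(The timing itself needs nothing fancy; a tiny context manager along these lines does the job. A sketch, not the actual implementation:)

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_step(name: str, report: list[str]):
    """Time one generation step and record it for the final summary."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    report.append(f"{name}: {elapsed:.2f}s")
```

Each step of the build then runs inside a `with timed_step("Posts", report):` block, and the report is printed at the end alongside the overall time.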

Markdown in HTML support

Yesterday I noticed that, on one of my posts, what had been written as a simple caption for an image wasn't rendering as it used to. The actual content of the Markdown source for the post contained this:

<center>
*(Yes, the tin was once mine and was once full; the early 90s were a
different time)*
</center>

While the text was centred, the raw Markdown was left in place (it should have been italic text). The reason for this is that BlogMore had never enabled Markdown-in-HTML support. So, as of this release, if the enclosing tag has markdown="1", any Markdown inside the tags will be parsed. This means the above becomes this:

<center markdown="1">
*(Yes, the tin was once mine and was once full; the early 90s were a
different time)*
</center>

I did think about doing something to turn it on by default (the fact that I didn't have such a "switch" in the post before suggests that Pelican did just always do this), but really I feel this approach is more flexible and less likely to result in unintended consequences.