Posts tagged with "Blogging"

Documentation generation

2 min read

While I've written a lot of documentation in my life, it's not something I enjoy. I want documentation to read well, I want documentation to be useful, I want documentation to be accurate. I also want there to be documentation at all, and sometimes those other wants mean it doesn't get done for FOSS projects1.

When I started the experiment that is BlogMore, I very quickly hashed out some ideas on how it might generate some documentation, and the result was okay. In fact, if anything, it was a bit too much and there was a lot of repeated information.

So, this morning, before I sat down for the day's work, I quickly wrote an issue that would act as a prompt to Copilot to rewrite the documentation. This time I tried to be very clear about what I wanted where, but also left it to work out all the details.

The result genuinely impressed me. While I'll admit I haven't read it all in detail (and because of this have left the same warning right at the start), on the surface it looks a lot clearer and feels like a better journey to learning how to use BlogMore.

blogmore.davep.dev hosts the result of this.

I have a plan to work through the documentation and be sure it's all correct and all makes sense, but if it's as correct and as useful as I think, I might have to consider the idea of taking this approach more often. Writing down the plan for the documentation and then letting it appear in the background while I get on with my actual day makes a lot of sense.

I fear I might be warming to these tools, a little bit. :-/


  1. Although I've made a point of doing it for almost every one of my recent Textual-based projects. 

BlogMore v1.5.0

3 min read

Since switching over on the 19th of last month I've been making lots of changes to BlogMore. While there's been a good few bug fixes and QoL changes, I've also been adding new features that I've found myself wanting.

Here's a list of some of the significant additions from the last couple of weeks (and, yes, as per the experiment, all of these have been developed by me prompting GitHub Copilot):

  • All parts of a date in a post's timestamp can be clicked to get to an archive of that point in time.
  • Added fully-client-side full text search (I'm finding this especially useful).
  • Added sitemap generation.
  • Added a general fallback description for any page that doesn't have one.
  • Added fallback keywords for any page that doesn't have any.
  • Added author metadata to the header of all pages.
  • Hugely optimised the use of FontAwesome.
  • Made best possible use of all the usual og: type metadata in the head of all pages.
  • Added optional CSS minification, improving page load times (a sketch of the idea follows this list).
  • Added optional JavaScript minification, improving page load times.
  • Where appropriate, all pages now have rel="prev" and rel="next" tags in the <head>.
  • Added a rel="canonical" tag to the <head> of all pages.
  • Improved the style and workings of the pagination of all archive type pages.
  • Improved the cosmetics of the category and tag clouds.
  • Improved how the first paragraph is discovered for a page or post, when using it as the default description in the <head> of a page.
  • Cleaned up the generated HTML so it's more compact.
  • Added support for custom 404 pages.
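
On the CSS minification: the minifying itself is presumably rcssmin's doing, that being the dependency whose lock file saga is recounted in "Brute force and ignorance" below. The wiring here is a sketch of the idea with made-up names, not BlogMore's actual code:

from pathlib import Path

import rcssmin


def write_stylesheet(source: Path, target: Path, minify: bool) -> None:
    """Copy a stylesheet to the output, optionally minifying it."""
    css = source.read_text()
    # rcssmin.cssmin does the actual minification work.
    target.write_text(rcssmin.cssmin(css) if minify else css)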

As I say, those are just the improvements that have come to mind as I've used BlogMore; there's been a lot of bug fixing too. You can read the full changelog over on the BlogMore website1.

I feel that the pace of updates and additions has started to slow; I think I've now got more or less everything I wanted from this. I'm pretty sure I can do everything I ever bothered to do with Jekyll and Pelican, and I am enjoying "owning" the code such that, if I have an idea for something I want, it's easy enough to make it happen.

I'm also pretty happy with how well the results perform. Despite the fact I'm not a web developer, and despite this blog being served by GitHub Pages (which, let's be honest, isn't the most speedy host), the measurements for a single page in the blog look fairly good:

Desktop

That's measuring loading in a desktop context. Even measured as mobile (which I've tried to make work well too) it's not too shabby:

Mobile

I think I can rightfully be satisfied with those values, given this isn't normally my primary focus when it comes to software development.

Anyway, if you like messing with static site generators, and one that is blog-centric sounds useful, and if you're not put off by the fact that this is a deliberate "use GitHub Copilot" experiment, feel free to take a look.


  1. Which, somewhat amusingly, is built with MkDocs. 

Original Seen by davep rescued

1 min read

Still Alive

At the end of yesterday's post I said I might see if I can rescue the original photoblog from its backup on WordPress. This was the first photoblog I played with, posting to the long-dead Posterous between 2009 and 2013.

So, yesterday evening, I did an extract of the full feed from WordPress, and also asked for a full backup of all the media. I then fired up Emacs and rattled out some Python code that would marry up the two sets of data and add the posts to the photoblog repository. It took a little bit of working out; it seems that every post had two entries in the feed: a parent and a child entry. I've no clue why that's the case; I didn't really care to get too deeply into it.
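
The de-duplication itself, once spotted, needed nothing more than this sort of thing (a sketch, not the actual throwaway code):

import feedparser

feed = feedparser.parse("wordpress-export.xml")  # hypothetical filename

# Each post appeared twice (a parent and a child entry); keeping the
# first entry seen for any given link drops the duplicates.
posts = {}
for entry in feed.entries:
    posts.setdefault(entry.link, entry)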

Soon enough seen-by.davep.dev was updated with an extra 1,833 posts! So now the categories on that blog site are broken down into Seen By Me 1 (the original) and Seen By Me 2 (the second incarnation).

Sadly, for the first blog, tagging wasn't really much of a thing so the tag cloud hasn't grown too much.

But, finally, I've got both the photoblogs back up and hosted somewhere I can point to, and I'm fully in control of their content. While it is hosted on GitHub Pages I've done this in a way that it would be pretty easy to move elsewhere; this is down to the fact that it's a simple static site built with BlogMore.

Seen by davep rescued

2 min read

Final Message

Since mid-2023 my photoblog has been broken. As I mentioned at the time, the issue was that this second incarnation of the blog had started life as a proper mashup of some web tools, and the heart of it was Twitter.

It all started to fall apart when Twitter got its new owner, and APIs became expensive, and other tools would not or could not work with it any more, and then it really fell apart when I finally nuked my account.

So since then the blog has been sat about, unused and unloved, with a lot of broken links and images.

Thankfully, though, the pipeline I had going had been designed with this sort of problem in mind: part of it also backed up the photos I took to Google Drive and to Google Photos. So when I got to the point the other day where BlogMore was usable, I decided I should rescue the photos and rebuild the blog.

After downloading the full feed of the Blogger.com-hosted blog, I threw together some Python code that took the data (thanks to feedparser for helping with that), matched up the posts with the images I had in Google Drive, slugged the names, wrote some Markdown and copied some images, and I had the source of a fresh blog.
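
The shape of that script was roughly this; a sketch with made-up file names and deliberately naive matching, rather than the actual throwaway code:

import shutil
from pathlib import Path

import feedparser

feed = feedparser.parse("blogger-feed.xml")  # hypothetical filename
drive = Path("google-drive-backup")          # hypothetical backup location
posts = Path("content/posts")
images = Path("content/images")
posts.mkdir(parents=True, exist_ok=True)
images.mkdir(parents=True, exist_ok=True)

for entry in feed.entries:
    # Slug the post title and find the matching backed-up image.
    slug = entry.title.lower().replace(" ", "-")
    image = next(drive.glob(f"*{slug}*"), None)
    if image is None:
        continue
    shutil.copy(image, images / image.name)
    (posts / f"{slug}.md").write_text(
        f'---\ntitle: "{entry.title}"\ndate: {entry.published}\n---\n\n'
        f"![{entry.title}](/images/{image.name})\n"
    )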

The result of all of this can be seen up on seen-by.davep.dev.

I strongly suspect this is going to remain a pretty static site. At the moment I've given no thought whatsoever as to how I might have this populate in the way the old version of it did. Quite simply the old version was:

  1. Take photo
  2. Post to Twitter with a specific hashtag
  3. Have IFTTT notice this and create the blog post, make backups
  4. ...
  5. Profit?

I suppose, this time around, I could have something monitor my Mastodon account, or my pixelfed account, and then trigger a similar process; but then that would need something akin to IFTTT running and interacting with GitHub and creating the Markdown and kicking off a build process and...
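
If I ever do get around to it, the trigger end needn't be much; Mastodon exposes a public timeline per hashtag, so something could poll that and kick the rest off. A minimal sketch, with the instance, tag, and everything around it being assumptions:

import requests

# Poll the public timeline for a hashtag and report any image posts.
response = requests.get(
    "https://fosstodon.org/api/v1/timelines/tag/photoblog",
    params={"limit": 5},
    timeout=10,
)
response.raise_for_status()
for status in response.json():
    for media in status.get("media_attachments", []):
        if media["type"] == "image":
            print(status["created_at"], media["url"])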

Eh, maybe one day. For now, though, I'll be happy that I've rescued this particular incarnation of my photoblog and then think about if and how I carry on with something similar in the future.

Meanwhile... this has got me thinking. The original blog is backed up on WordPress. It's been sat there, all sad and neglected, ever since Posterous disappeared. I wonder if I can export all the data from there and mash it into this new version...

Brute force and ignorance

5 min read

My experiment with coding with an agent continues, and from the point of view of the usability of the results, I'd say it's going well. From the point of view of learning more about this sort of tool and the pitfalls: I'd also say it's going well.

The one thing I'm really starting to get an appreciation for is just how much it employs a sort of brute force and ignorance approach to getting to a solution for a problem. I noticed this on and off as I watched the agent's chat while it worked away.

One example I enjoyed was when it was trying to add a new dependency to BlogMore. For reasons I should probably try and understand1 it knew it should be using uv to add a dependency and it knew how to use it to update the lock file, but it didn't have uv to hand:

The lock file (uv.lock) needs to be updated to include rcssmin. Since uv isn't available, I'll need to note this in the PR. The lock file should be generated by uv when the pyproject.toml is updated.

So it decided the best solution was probably to... "hand-edit" the lock file.

The lock file needs to be updated with rcssmin. Since uv is not available in this environment, I need to update it manually or find another way.

I need to update the uv.lock file to include rcssmin. The uv.lock file needs to include:

  1. The rcssmin package entry with hashes
  2. The rcssmin dependency in the blogmore package section

Let me look at the PyPI page for rcssmin 1.2.2 to get the hash.

It then grabs rcssmin from PyPI and checks out the information it wants; after which it goes:

I need to update the uv.lock file with the rcssmin package information. The uv.lock file uses SHA256 hashes. Let me look at the existing format in more detail to understand what I need to add, and then I'll query PyPI for the hashes.

Actually, let me think about this differently. The uv.lock file is a generated file, and updating it manually is error-prone. The proper way to update it is to run uv sync or uv lock. Since uv is not available in this environment, I have two options:

  1. Let the CI update the lock file (risky if CI fails on the locked version)

So... around this point it has some sort of plan for hand-patching the lock file, but also recognises that this is a dodgy approach and it should be using uv to update the lock file.

I also really like the part where it says it has two options, then goes on to provide an exhaustive list of... one.

After this it runs this code:

import json, urllib.request
url = 'https://pypi.org/pypi/rcssmin/json'
try:
    with urllib.request.urlopen(url, timeout=10) as f:
        data = json.load(f)
    v = '1.2.2'
    releases = data['releases'].get(v, [])
    for r in releases:
        print(f'filename: {r["filename"]}')
        print(f'  url: {r["url"]}')
        print(f'  sha256: {r["digests"]["sha256"]}')
        print(f'  size: {r["size"]}')
        print(f'  upload_time: {r["upload_time_iso_8601"]}')
except Exception as e:
    print(f'Error: {e}')

after which it concludes:

Now I have all the information I need to update the uv.lock file. I need to add:

  1. The rcssmin package entry in the lock file
  2. Add rcssmin to the blogmore package dependencies

I mean... okay, I guess. Again, I'll admit that I'm still getting used to setting this sort of thing up and working with it (and keep in mind none of this was done locally; this is all via GitHub Copilot, up on GitHub), but it's kind of funny and wild that the approach taken was to recognise that using uv to remake the lock file was the best approach so... let's hand-edit the lock file.

But that's not the best part. This is where it gets really fun. It had decided that the lock file must be in alphabetical order. Okay, sure, why not:

The lock file has packages sorted alphabetically, so rcssmin should go between pygments and python-dateutil (since "r" comes before "py" actually... no, wait - "r" comes after "p" in the alphabet. Let me check the order again).

Since "r" comes before "py."

Since "r" comes before "py."

o_O

O_o

Extra credit here for giving me a good giggle. I really appreciate the way that it catches itself mid-sentence and remembers how the alphabet actually works.

As to the outcome of all of this? Sure, the feature I wanted to add got added; it worked away and got to a working solution in the end. But the route it took was, I think it's fair to say, a "brute force and ignorance" approach.

I've not been spending too much time reading the agent's own chatter, but when I have I've found myself amused by the dead ends it'll wander down and then work its way back out of. There is, without question, a recognisable process here: it would be a dishonest developer who claimed they'd never had times in the past, or just one of those off days, where they'd fallen down the rabbit hole of a solution, only to realise it's the right solution implemented in the worst possible way. There's also a resemblance here to how more junior developers work a problem until they really develop their skills.

I think I'm going to keep an eye on the agent chat a bit more from now on. While I imagine things will only improve as these tools improve, for the moment it's a good source of coding comedy.


  1. Presumably there are things I can be doing to make its life easier. 

Five days with Copilot

14 min read

Another itch to scratch

As I mentioned yesterday, I've been a happy user of Pelican for a couple or so years now, but every so often there's a little change or tweak I'd like to make that requires diving deeper into the templates and the like and... I go "eh, I'll look at it some time soon". Another thought that often goes through my head at those times is "I should build my own static site generator that works exactly how I want" -- because really any hacker with a blog has to do that at some point.

Meanwhile... I've had free access to GitHub Copilot attached to my GitHub account for some time now, and I've hardly used it. At the same time -- the past few months especially -- I've been watching the rise of agents as coding tools, as well as the rise of advocates for them. Worse still, I've seen people I never expected to advocate giving up hand-written code turning to these tools and suddenly writing rationales in favour of them.

So, suddenly, the idea popped into my head: I should write my own static site generator that I'll use for my blog, and I should try and use GitHub Copilot to write 100% of the code, and documentation, and see how far I get. In doing so I might firm up my opinions about where we're all going with this.

The requirements were going to be pretty straightforward:

  • It should be a static site generator that turns Markdown files into a website.
  • It should be blog-first in its design.
  • It should support non-blog-post pages too.
  • It should be written in Python.
  • It should use Jinja2 for templates.
  • It should have a better archive system than I ever got out of my Pelican setup.
  • It should have categories, tags, and all the usual metadata stuff you'd expect from a site where you're going to share content from.

Of course, the requirements would drift and expand as I went along and I had some new ideas.

Getting started

To kick things off, I created my repo, and then opened Copilot and typed out a prompt to get things going. Here's what I typed:

Build a blog-oriented static site generation engine. It should be built in Python, the structure of the repository should match that of my preferences for Python projects these days (see https://github.com/davep/oldnews and take clues from the makefile; I like uv and ruff and Mypy, etc).

Important features:

  • Everything is written in markdown
  • All metadata for a post should come from frontmatter
  • It should use Jinja2 for the output templates

As you can see, rather than get very explicit about every single detail, I wanted to start out with a vague description of what I was aiming for. I did want to encourage it to try and build a Python repository how I normally would, so I pointed it at OldNews in the hope that it might go and comprehend how I go about things; I also doubled down on the importance of using uv and mypy.

The result of this was... actually impressive. As you'll see in that PR, to get to a point where it could be merged, there was some back-and-forth with Copilot to add things I hadn't thought of initially, and to get it to iron out some problems, but for the most part it delivered what I was after. Without question it delivered it faster than I would have.

Some early issues where I had to point out problems to Copilot included:

  • The order of posts on the home page wasn't obvious to me, and absolutely wasn't reverse chronological order.
  • Footnotes were showing up kinda odd.
  • The main index for the blog was showing just post titles, not the full text of the article as you'd normally expect from a blog.

Nothing terrible, and it did get a lot of the heavy lifting done and done well, but it was worth noting that a lot of dev-testing/QA needed to be done to be confident about its work, and doing this picked up on little details that are important.

An improvement to the Markdown

As an aside: during this first PR, I quickly noticed a problem where I was getting this error when generating the site from the Markdown:

Error generating site: mapping values are not allowed in this context
  in "<unicode string>", line 3, column 15

I just assumed it was some bug in the generated code and left Copilot to work it out. Instead it came back and educated me on something: I actually had bad YAML in the frontmatter of some of my posts!
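
That error, for what it's worth, is classically caused by an unquoted colon in a YAML value; a made-up example of the sort of frontmatter that trips it:

---
title: Pelican: first impressions
---

YAML reads the second colon as the start of a nested mapping; quoting the whole value (title: "Pelican: first impressions") is the fix.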

This, by the way, wouldn't be the last time that Copilot found an issue with my input Markdown and so, having used it, improved my blog.

A major feature from a simple request

Another problem I ran into quickly was that previewing the generated site wasn't working well at all; all I could do was browse the files in the filesystem. So, almost as an offhand comment, in the initial PR, I asked:

Can we get a serve mode please so I can locally test the site?

Just like that, it went off and wrote a whole server for the project. While the server did need a lot of extra work to really work well1, the initial version was good enough to get me going and to iterate on the project as a whole.
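
The core of a serve mode is tiny in Python, which is perhaps why it felt like such a cheap thing to ask for. A minimal sketch of the general shape (this is emphatically not BlogMore's actual server):

import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the generated site from its output directory; "output" as the
# directory name is an assumption.
handler = functools.partial(SimpleHTTPRequestHandler, directory="output")
HTTPServer(("127.0.0.1", 8000), handler).serve_forever()

The "lot of extra work" was everything beyond this sort of thing, not least rebuilding the site on every change, which I touch on in a footnote later.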

The main workflow

Having kicked off the project and having had some success with getting Copilot to deliver what I was asking for, I settled into a new but also familiar workflow. Whereas normally, when working on a personal project, I'll write an issue for myself, at some point pick it up and create a PR, review and test the PR myself then merge, now the workflow turned into:

  • Write an issue but do so in a way that when I assign it to Copilot it has enough information to go off and do the work.
  • Wait for Copilot to get done.
  • Review the PR, making change requests etc.
  • Make any fixes that are easier for me to do by hand than to describe to Copilot.
  • Merge.

In fact, I was finding that the first step had some sub-steps to it too. What I was doing, more than ever, was writing issues like I'd write sticky notes: with simple descriptions of a bug or a new feature. I'd then come back to them later and flesh them out into something that would act as a prompt for Copilot. I found myself doing this so often I ended up adding a "Needs prompt" label to my usual set of issue labels.

All of this made for an efficient workflow, and one where I could often get on with something else as Copilot worked on the latest job (I wasn't just working on other things on my computer; sometimes I'd be going off and doing things around the house while this happened), but... it wasn't fun. It was the opposite of what I've always enjoyed when it comes to building software. I got to dream up the ideas, I got to do the testing, I got to review the quality of the work, but I didn't get to actually lose myself in the flow state of coding.

One thing I really came to understand during those five days of working on BlogMore was how much I missed getting lost in the flow state. Perhaps it's the issue-to-PR-to-review-to-merge cycle I used that amplified this, perhaps those who converse with an agent in their IDE or in some client application keep a sense of that (I might have to try that approach out), but this feels like a serious loss to me when it comes to writing code for personal enjoyment.

The main problems

I think it's fair to say that I've been surprised at just how well Copilot understood my (sometimes deliberately vague) requests, at how it generally managed to take some simple plain English and turn it into actual code that actually did what I wanted and, mostly, actually worked.

But my experiences over the past few days haven't been without their problems.

The confidently wrong problem

Hopefully we all recognise that, with time and experience, we learn where the mistakes are likely to turn up. Once you've written enough code, you've written plenty of bugs and been caught out by enough edge cases that you develop a spidey-sense for trouble as you write. I feel that this kind of approach can be called cautiously confident.

Working with Copilot2, however, I often ran into the confidently wrong issue. On occasion I found it would proudly3 request review for some minor bit of work, proclaiming that it had done the thing or solved the problem, and I'd test it and nothing had materially changed. On a couple of occasions, when I pushed back, I found it actually doubting my review before finally digging in harder and eventually solving the issue.

I found that this took time and was rather tiring.

There were also times where it would do the same but not directly in respect to code. One example I can think of is when it was confident that Python 3.14 was still a pre-release Python as of February 2026 (it isn't).

This problem alone concerns me; this is the sort of thing where people without a good sense for when the agent is probably bullshitting will get into serious trouble.

The tries-too-hard problem

A variation on the above problem works the other way: on at least one occasion I found that Copilot tried too hard to fix a problem that wasn't really its to fix.

In this case I was asking it to tidy up some validation issues in the RSS feed data. One of the main problems was root-relative URLs being in the content of the feed; for that they needed to be made absolute URLs. Copilot did an excellent job of fixing the problem, but one (and from what I could see only one) relative URL remained.
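
The bulk of that fix is a standard technique: rewriting root-relative href and src attributes with urljoin. A sketch of the idea (the site URL is a stand-in, and this is not Copilot's actual code):

import re
from urllib.parse import urljoin

SITE_URL = "https://example.com/"  # stand-in for the real site URL


def absolutise(html: str) -> str:
    """Rewrite root-relative href/src attributes as absolute URLs."""
    return re.sub(
        r'(href|src)="(/[^"]*)"',
        lambda match: f'{match.group(1)}="{urljoin(SITE_URL, match.group(2))}"',
        html,
    )


print(absolutise('<a href="/2026/02/some-post.html">a post</a>'))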

I asked it to take a look and it took a real age to work through the issue. To its credit, it dug hard and it dug deep and it got to the bottom of the problem. The issue here, though, was that it tried too hard: having found the cause of the problem (a typo in my original Markdown, which had always been there), it went right ahead and built a workaround for this one specific broken link.

Now, while I'm a fan of Postel's law, this is taking things a bit too far. If this was a real person I'd tasked with the job I would have expected and encouraged them to come back to me with their finding and say "dude, the problem is in your input data" and I'd have fixed my original Markdown.

Here though it just went right ahead and added this one weird edge case as something to handle.

I think this is something to be concerned about and to keep an eye on too. I feel there's a danger in having the agent rabbit-hole a fix for a problem that it should simply have reported back to me for further discussion.

The never-pushes-back problem

Something I did find unsurprising but disconcerting was Copilot's unwillingness to push back, or at least defend its choices. Sometimes it would make a decision or a change and I'd simply ask it why it had done it that way, why it had made that choice. Rather than reply with its reasoning it would pretty much go "yeah, my bad, let me do it a way you're probably going to find more pleasing".

A simple example of this is one time when I saw some code like this:

@property
def some_property(self) -> SomeValue:
    from blogmore.utils import some_utility_function
    ...

I'm not a fan of imports in the body of methods unless there's a demonstrable performance reason. I asked Copilot why it had made this choice here and its reply was simply to say it had gone ahead and changed the code, moving the import to the top of the module.

I see plenty of people talk about how working with an agent is like pair-programming, but I think that comparison misses out on what's got to be the biggest positive of the approach: the debate and exchange of ideas. This again feels like a concern to be mindful of, especially if someone less experienced is bringing code to you where they've used an agent as their pair buddy.

The overall impression

Now I'm at the end of the process, and using the result of this experiment to write this post4, I feel better informed about what these tools offer, and the pitfalls I need to be mindful of. Sometimes it wasn't a terrible way of working. For example, on the first day I started with this, at one point on a chilly but sunny Sunday afternoon, I was sat on the sofa, MacBook on lap, guiding an AI to write code, while petting the cat, watching the birds in the garden enjoy the content of the feeder, all while chatting with my partner.

That's not a terrible way to write code.

On the other hand, as I said earlier, I missed the flow state. I love getting lost in code for a few hours and this is not that. I also found the constant loop of prompt, wait, review, test, repeat, really quite exhausting.

As best as I can describe it: it feels like the fast food of software development. It gets the job done, it gets it done fast, but it's really not fulfilling.

At the end of the process I have a really useful tool, 100% "built with AI" under my guidance, and one that lets me get on with being creative with the things I do create by hand. That's not a bad thing; I can see why this is appealing to people. On the other hand, the process of building that tool was pretty boring and, for want of a better word... soulless.

Conclusion

As I write this I have about 24 hours of access to GitHub Copilot Pro left. It seems this experiment used up my preview time and triggered a "looks like you're having fun, now you need to decide if you want to buy it" response. That's fair.

So now I'm left trying to decide if I want to pay to keep it going. At the level I've been using it at for building BlogMore it looks like it costs $10/mth. That actually isn't terrible. I spend more than that on other hobbies and other forms of entertainment. So, if I can work within the bounds of that tier, it's affordable and probably worth it.

What I'm not sure about yet is if I want to. It's been educational, I can 100% see how and where I'd use this for work (and would of course expect an employer to foot the bill for it or a similar tool), and I can also see how and where I might use it to quickly build a personal-use tool to enable something more human-creative.

Ultimately, though, I think I'm a little better informed thanks to this process: better aware of some of the wins people claim, and better equipped to be rightly incredulous when faced with some of the wilder ones.

Also, it'll help put some of my reading into perspective.


  1. Amusingly I uncovered another bug while writing this post. 

  2. I keep saying Copilot, but I think it's probably more correct to say "Claude Sonnet 4.5" as that's what seemed to be at play under the hood, if I'm understanding things correctly. 

  3. Yes, of course that's an anthropomorphism; you'll find plenty of them in this article, as it's hard to write about the subject in any other way. It's an easy shortcut for explaining some ideas. 

  4. Actually I'm writing this post as I always do: in Emacs. But BlogMore is in the background serving a local copy of my blog so I can check it in the browser, and rebuilding it every time I save a change. 

A new engine

2 min read

For about 2 and a half years now this blog has been built with Pelican. For the most part I've enjoyed using it; it's been easy enough to work with, if never exciting (which I think is a positive thing to say about a static site generator).

There were, however, a couple or so things I didn't like about the layout I was getting out of it. One issue was the archive, which was a pretty boring list of titles of all the posts on the site. It would have been nice to have them broken down by date or something, at least.

Of course, there are lots of themes, and it also uses templates, so I could probably have tweaked it "just so"; but every time I started to look into it I found myself wanting to "fix" the issue by building my own engine from scratch.

Thankfully, every time that happened, I'd come to my senses and go off and work on some other fun personal project. Until earlier this week, that was.

The thing is... I've been looking for a project where I could dive into the world of "AI coding" and "Agents" and all that nonsense. Not because I want to abandon the absolute thrill and joy I still get from writing actual code as a human, but because I want to understand things from the point of view of people who champion these tools.

The only way I'm going to have an informed opinion is to get informed; the only way to get informed is to try this stuff out.

So, here I am, with my blog now migrated over to BlogMore; a project that gives me a blog-focused static site generator that I 100% drove the development of, but for which I wrote almost none of the code.

At the moment it's working out well, as a generator. I'm happy with how it works, I'm happy with what it generates. I also think it's 100% backwards-compatible when it comes to URLs and feeds and so on. If you do see anything odd happening, if you do see anything that looks broken, I'd love to hear about it.

As for this being a "100% AI" project, and how I found that process and how I feel about the implications and the results... that's a blog post to come.

I took lots of notes.

Seen by davep broke (again)

5 min read

Almost seven years ago I took up maintaining an ad-hoc photoblog again. I say again because I'd had one once before. I'd kicked that off back in the late 2000s, with my little HTC Magic, and hosted it on Posterous. Eventually Posterous was shut down, mostly because the company (or at least the team behind it) had been bought up by Twitter. When kicking off the blog this time I decided on a few things:

  • I'd host it on Blogger.com; it had been around for long enough, and of course Google could be trusted to keep something that big up and running for good.
  • I'd keep multiple backup copies of the images and file them in useful ways (I keep copies in Google Photos, on Google Drive, on iCloud and locally).
  • As much as possible I'd automate the process of doing some, if not all of this.

At this point, while it was kind of old as an idea, this felt very much like one of those things that was perfect to do in a Web 2.0 way; the good old reliable mashable web!

So the plan became this:

  • Every post would be a tweet, posted to Twitter with a #photoblog tag.
  • I'd use ifttt to keep an eye on my tweets and when it saw one with that tag it would extract the image, make a post to Blogger, drop a copy into Google Drive, and do a couple of other things too.
  • Every week or so I'd do some manual checks to make sure everything was looking okay.

This worked. Mostly. It ran fine for a few years, with very few problems. I'd take a photo, manipulate the heck out of it for the emotional effect I was going for (that was the point of the blog; it was all about the messing with the image), tweet it, and Web 2.0 magic would happen.

Then the odd issue started to crop up. At one point Twitter made changes to how images were stored, or something, and the ifttt recipe broke for a wee while; then they changed the way that public posts could be seen (long before the Musk-era bullshit) and that broke things again, and so on. I forget the details but at every point I was able to nurse it back to life and things carried on.

Recently, of course, it all fell apart when Musk took over Twitter and massively ruined it, turning it into the steaming pile it is now1. I've honestly lost track of which change broke what, and of course I also gave up even trying to use Twitter (the drip, drip, drip of right-wing hate politics got to a point where I could not find a way to make it work any more). So that's when I decided to cut Twitter out altogether.

This had actually started a little earlier, when the whole API fiasco kicked off: ifttt had to remove Twitter things from its free tier, and I was on the free tier -- only because I didn't need anything the paid service offered. If Twitter had been "normal" and this change had been made, I'd have happily paid; I don't mind paying for things I find useful.

But, nope, given all the context, I bailed.

So by that point I decided the easiest thing to do was to simply hand-add posts to Blogger, and along the way post to a pixelfed account instead, reposting those posts from my Fosstodon account. Not ideal, needing more manual input, but I was thinking that once I found a good flow I could probably automate the whole thing again.

Anyway, that's the point I'd reached. Twitter was 100% out of the workflow, there was a bit more manual intervention, but the primary location for the photos was still getting updated.

Then yesterday I noticed this:

Broken images in my photoblog

I don't actually know exactly what's going on here, and at this point I really don't care. My working hypothesis is this: when ifttt added the images to the Blogger posts, it did so using the copies hosted by Twitter. Because of this, either due to some change in the Twitter API, or perhaps because I've locked down my Twitter account, the images can't be served any more. That's my best guess anyway.

I don't really care to dig deeper than that.

But, yeah, this is another example of the long-growing rot of the dream of Web 2.0. I'm not surprised; I'm not angry; I'm just disappointed.

Thankfully I have all the images saved (see backup options above), so I can go back and edit the posts and drop fresh copies of the images into them. There are hundreds of posts affected, so this is going to take quite a while; I mean, sure, I could probably do it in a day if I sat and did nothing else, but I have other things to do.

Another option would of course be to create a fresh blog using my own tools; that would be simple enough. I have the images, they're all set with the right date and time, and recreating things would be fairly trivial (post titles would be a slight problem, but I could work around that); a tool like mkdocs or Pelican plus some Python code to recreate the posts from the images would be a fun couple of hours mucking about. But... I have a lot of posts on Blogger and all the URLs are stable and still there many years later.
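
For what it's worth, the "fairly trivial" claim holds up: the heart of such a recreation script would be little more than this sketch, with the dates coming from the image files themselves (the paths and naming here are made up):

from datetime import datetime, timezone
from pathlib import Path

content = Path("content")
content.mkdir(exist_ok=True)

for image in sorted(Path("photos").glob("*.jpg")):  # hypothetical layout
    # The files carry the right date and time, so use that for the post.
    taken = datetime.fromtimestamp(image.stat().st_mtime, tz=timezone.utc)
    (content / f"{taken:%Y-%m-%d}-{image.stem}.md").write_text(
        f"---\ntitle: {image.stem}\ndate: {taken:%Y-%m-%d %H:%M}\n---\n\n"
        f"![{image.stem}](/images/{image.name})\n"
    )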

Perhaps I could automate the "fixing" of the broken posts? I wonder what the Blogger API is like to work with?

PS: If you've read this and feel that what's really needed is a "helpful" comment that self-hosting is the solution; please sit on that and read the above again (and, you know, look over the rest of this blog and the entirety of my time and content on the Internet in general and the Web in particular, going back to the mid-90s).


  1. Seriously, if you are reading this and you're still maintaining a Twitter account that you actively use, what the hell is wrong with you? Multiply that lots if you have a blue tick. 

Dot Files

1 min read

While I'm still in blog-tinkering mode (long may it last!), I thought it might be handy to keep a page kicking around that has links to the small collection of "dot file" repositories I have.

Like many people, I keep these in a central location (in my case up on GitHub) so that I can very quickly spin up a familiar work environment on a new machine (new machines are something that doesn't happen too often, but it's always good to be able to get going quickly when it does).

So, depending on browser type/size, either above here or off to the side, there should now be a permanent link to a page of links to those repositories.

As I look at it now it's actually surprising to me how much of my "comfortable" environment is encapsulated in so few tools, and configured with so few collections of files. There are other tools I use a lot too, but most of them either have their own sync systems, or they have so few configuration options (and those likely in a format that isn't easy to grab/store) that it's not worth the bother.

This feels like a good thing, really.

One thing that's not amongst all of this, partly because it's not that interesting, but also partly because the repository is private, is a single bash script called myenv. On a new machine, once I've got enough of a setup that I can clone from GitHub, I drag this down and run the script and most of the rest of the environment follows.

It's quite satisfying when I need to use it.

The switch has been made

2 min read

Well, it didn't take as long as I expected it to. Just yesterday morning I was giving Pelican a look over as a possible engine for generating my blog, having wanted to move away from Jekyll for a while now. Having tried it and liked what I saw to start with, I wrote about how I liked it and wondered how long it would take me to make the switch.

By the evening I was making a proper effort to get the switchover started, and just a wee while earlier, before writing this post, the switch was made!

The process of making the switch was roughly this (and keep in mind I'm coming from using Jekyll):

  1. Made a branch in the repo to work in.
  2. Removed all of the Jekyll-oriented files.
  3. Decided to set up Pelican and related tools in a virtual environment, managed using pipenv.
  4. Ran pelican-quickstart to kick things off and give me a framework to start with.
  5. Renamed the old _posts directory to content.
  6. Kept tweaking the hell out of the Pelican config file until it started to look "just so" (this is a process that has been ongoing, and doubtless will keep happening for some time to come).
  7. Tried out a few themes and settled on Flex; while not exactly what I wanted, it was close enough to help keep me motivated (while rolling my own theme from scratch would seem fun, I just know it would mean the work would never get done, or at least finished).
  8. Did a mass tidy up of all the tags in all the posts; something I'd never really paid too much attention to as the Jekyll-based blog never actually allowed for following tags.
  9. Went through all the posts and removed double quotes from a lot of the titles in the frontmatter (something Jekyll seems to have stripped, but which Pelican doesn't).
  10. Tweaked FILENAME_METADATA to ensure that the slugs for the URLs came from the filenames -- by default Pelican seems to slugify the title of a post and this meant that some of the URLs were changing.
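
For Jekyll-style file names (2017-06-12-some-post.md and the like), the setting in pelicanconf.py ends up looking something like this (my exact value may have differed a little):

FILENAME_METADATA = r"(?P<date>\d{4}-\d{2}-\d{2})-(?P<slug>.*)"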

All in all I probably spent 6 or 7 hours on making the move; a lot of that involving reading up on how to configure Pelican and researching themes. The rest of it was a lot of repetitive work to fix or tidy things.

The most important aspect of this was keeping the post URLs the same all the way back to the first post; as best as I can tell I've managed that.

So far I'm pleased with the result. I'm going to live with the look/theme for a wee while and see how it sits for me. I'm sure I'll tweak it a bit as time goes on, but at the moment I'm comfortable with how it looks.