Posts tagged with "GitHub"

Copilot lied

4 min read

This morning, with my experiment with Copilot having settled down a little bit, I thought I might try and use the result for another little experiment. For a long time now I've maintained a (currently lapsed) photoblog. It was always very much tied to "the site formerly known as twitter" and, since I fully erased my account after the site turned into a Nazi bar, I've not done anything to update how it gets populated.

So I got to thinking: I have a full backup of all the images in a couple of places; perhaps with a bit of coding (and some help from "the AIs") I can revive it using BlogMore?

I tinkered for a wee bit and mostly got something going (albeit I'm going to have to do some work to back-port the actual dates and times of some earlier images, and there's a load of work to do to somehow pull out all the tags). But then I hit a small hitch.

When building BlogMore I made the decision to let Copilot write both the code and the documentation. It documented a couple of features I never asked for, but which seemed sensible, so I never questioned them. On the other hand, neither did I test them at the time (because they weren't important to what I needed).

It's the exact reason I added this warning at the start of the documentation:

⚠️ Warning

BlogMore is an experiment in using GitHub Copilot to develop a whole project from start to finish. As such, almost every part of this documentation was generated by Copilot and what it knows about the project. Please keep this in mind.

From what I can see at the moment the documentation is broadly correct, and I will update and correct it as I work through it and check it myself. Of course, I will welcome reports of problems or fixes.

With this warning in mind, and with the intention of working through the documentation and testing its claims, I decided to test out one of the features when building up the new photoblog.

Whereas with this blog I keep all the posts in a flat structure, this time around I thought I'd try out this (taken from the Copilot-generated BlogMore documentation):


BlogMore is flexible about how you organise your posts. Here are some common patterns:

Flat structure (all posts in one directory):

posts/
  ├── hello-world.md
  ├── python-tips.md
  └── web-development.md

Note: Files can be date-prefixed (e.g., 2026-02-18-hello-world.md) and BlogMore will automatically remove the date prefix from the URL slug. The post will still use the date field from frontmatter for chronological ordering.

Organised by date:

posts/
  ├── 2024/
  │   ├── 01/
  │   │   └── hello-world.md
  │   └── 02/
  │       └── python-tips.md
  └── 2025/
      └── 01/
          └── web-development.md

Organised by topic:

posts/
  ├── python/
  │   ├── decorators.md
  │   └── type-hints.md
  └── web/
      ├── css-grid.md
      └── javascript-tips.md

Using the hierarchy approach, especially with dates, seemed ideal! I'd drop the actual images in such a hierarchy, and also drop the Markdown posts in a parallel hierarchy too. Perfect!
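As an aside, the date-prefix rule quoted in that documentation is simple enough to sketch (this is my own illustration of the documented rule, not BlogMore's actual code):

```python
import re


def slug_from_filename(filename: str) -> str:
    """Derive a URL slug from a post's filename, stripping any
    YYYY-MM-DD- date prefix, as the documentation describes."""
    stem = filename.removesuffix(".md")
    return re.sub(r"^\d{4}-\d{2}-\d{2}-", "", stem)


print(slug_from_filename("2026-02-18-hello-world.md"))  # hello-world
print(slug_from_filename("python-tips.md"))             # python-tips
```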

So I set it all up to do that, fired up blogmore serve, visited the URL and... No posts yet. What the hell?

So I checked the code for BlogMore and, sure enough, despite the fact the documentation was selling me on this handy way to organise my posts, no such feature existed!

As an experiment I then asked Copilot what the heck was going on. Much as I expected, rather than coming back with an answer to the question, it went right ahead and fixed it instead. Which is fine, that's where I would have taken this, but I do wish it would answer the question first.

ℹ️ Note

I imagine I could get an answer to the question if I took a more conversational route with Copilot, rather than writing the question in an issue and then assigning that issue to it. I must remember to try that at some point.

So, yeah, unsurprisingly Copilot flat-out lied[1] in the documentation. I'm not in the least bit shocked by this and, as I said, I fully expected it. But it was amusing to have an example turn up so early in the documentation, in such a glaring way, and in a way that was so easily fixed (really, it was just a swap of Path.glob for Path.rglob).
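That one-line difference is easy to demonstrate (my illustration, not BlogMore's actual code): Path.glob only matches at the top level of a directory, while Path.rglob recurses into subdirectories:

```python
import tempfile
from pathlib import Path

# Build a nested posts hierarchy like the one in the BlogMore docs.
posts = Path(tempfile.mkdtemp()) / "posts"
(posts / "2024" / "01").mkdir(parents=True)
(posts / "2024" / "01" / "hello-world.md").write_text("# Hello")
(posts / "top-level.md").write_text("# Top")

flat = sorted(p.name for p in posts.glob("*.md"))   # top level only
deep = sorted(p.name for p in posts.rglob("*.md"))  # the whole tree

print(flat)  # ['top-level.md']
print(deep)  # ['hello-world.md', 'top-level.md']
```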

As I play with this more it's going to be fun to see what other bold claims turn out to not be true; or perhaps even the reverse: what neat features lurk in the code that haven't been documented.


  1. Yes, as I mentioned yesterday, that's an anthropomorphism. Folk who take such things as an indication that you don't understand "AI" might want to think about what it is to be a human when communicating. 

Five days with Copilot

14 min read

Another itch to scratch

As I mentioned yesterday, I've been a happy user of Pelican for a couple or so years now, but every so often there's a little change or tweak I'd like to make that requires diving deeper into the templates and the like and... I go "eh, I'll look at it some time soon". Another thought that often goes through my head at those times is "I should build my own static site generator that works exactly how I want" -- because really any hacker with a blog has to do that at some point.

Meanwhile... I've had free access to GitHub Copilot attached to my GitHub account for some time now, and I've hardly used it. At the same time -- the past few months especially -- I've been watching the rise of agents as coding tools, as well as the rise of advocates for them. Worse still, I've seen people I'd never have expected to give up on writing code by hand turning to these tools and suddenly writing rationales in favour of them.

So, suddenly, the idea popped into my head: I should write my own static site generator that I'll use for my blog, and I should try and use GitHub Copilot to write 100% of the code, and documentation, and see how far I get. In doing so I might firm up my opinions about where we're all going with this.

The requirements were going to be pretty straightforward:

  • It should be a static site generator that turns Markdown files into a website.
  • It should be blog-first in its design.
  • It should support non-blog-post pages too.
  • It should be written in Python.
  • It should use Jinja2 for templates.
  • It should have a better archive system than I ever got out of my Pelican setup.
  • It should have categories, tags, and all the usual metadata stuff you'd expect from a site you're going to share content from.

Of course, the requirements would drift and expand as I went along and I had some new ideas.

Getting started

To kick things off, I created my repo, and then opened Copilot and typed out a prompt to get things going. Here's what I typed:

Build a blog-oriented static site generation engine. It should be built in Python, the structure of the repository should match that of my preferences for Python projects these days (see https://github.com/davep/oldnews and take clues from the makefile; I like uv and ruff and Mypy, etc).

Important features:

  • Everything is written in markdown
  • All metadata for a post should come from frontmatter
  • It should use Jinja2 for the output templates

As you can see, rather than get very explicit about every single detail, I wanted to start out with a vague description of what I was aiming for. I did want to encourage it to try and build a Python repository the way I normally would, so I pointed it at OldNews in the hope that it might go and comprehend how I go about things; I also doubled down on the importance of using uv and mypy.

The result of this was... actually impressive. As you'll see in that PR, to get to a point where it could be merged, there was some back-and-forth with Copilot to add things I hadn't thought of initially, and to get it to iron out some problems, but for the most part it delivered what I was after. Without question it delivered it faster than I would have.

Some early issues where I had to point out problems to Copilot included:

  • The order of posts on the home page wasn't obvious to me, and absolutely wasn't reverse chronological order.
  • Footnotes were showing up kinda odd.
  • The main index for the blog was showing just post titles, not the full text of the article as you'd normally expect from a blog.

Nothing terrible, and it did get a lot of the heavy lifting done and done well, but it was worth noting that a lot of dev-testing/QA needed to be done to be confident about its work, and doing this picked up on little details that are important.

An improvement to the Markdown

As an aside: during this first PR, I quickly noticed a problem where I was getting this error when generating the site from the Markdown:

Error generating site: mapping values are not allowed in this context
  in "<unicode string>", line 3, column 15

I just assumed it was some bug in the generated code and left Copilot to work it out. Instead it came back and educated me: I actually had bad YAML in the frontmatter of some of my posts!
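For the record, that "mapping values are not allowed in this context" error is usually PyYAML's way of complaining about an unquoted colon inside a value. Frontmatter like this (a made-up example, not one of my actual posts) will trigger it:

```yaml
---
title: Copilot: an experiment   # the second, unquoted colon breaks the parse
date: 2026-02-18
---
```

Quoting the value (`title: "Copilot: an experiment"`) fixes it.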

This, by the way, wouldn't be the last time that Copilot found an issue with my input Markdown and so, having used it, improved my blog.

A major feature from a simple request

Another problem I ran into quickly was that previewing the generated site wasn't working well at all; all I could do was browse the files in the filesystem. So, almost as an offhand comment, in the initial PR, I asked:

Can we get a serve mode please so I can locally test the site?

Just like that, it went off and wrote a whole server for the project. While the server did need a lot of extra work to really work well[1], the initial version was good enough to get me going and to iterate on the project as a whole.
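A serve mode can start from surprisingly little; the Python standard library alone gets you a working preview server. This is a minimal sketch in the spirit of that first version (not BlogMore's actual code; the "output" directory name is made up):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve(directory: str, port: int = 8000) -> HTTPServer:
    """Serve a built site directory over HTTP on localhost."""
    # SimpleHTTPRequestHandler accepts a directory to serve from;
    # partial lets us pass it while still handing the server a class.
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("127.0.0.1", port), handler)


# serve("output").serve_forever()  # then browse http://127.0.0.1:8000/
```

Of course a real serve mode soon wants rebuild-on-change and the like, which is where the extra work comes in.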

The main workflow

Having kicked off the project and having had some success with getting Copilot to deliver what I was asking for, I settled into a new but also familiar workflow. Whereas normally, when working on a personal project, I'll write an issue for myself, at some point pick it up and create a PR, review and test the PR myself then merge, now the workflow turned into:

  • Write an issue but do so in a way that when I assign it to Copilot it has enough information to go off and do the work.
  • Wait for Copilot to get done.
  • Review the PR, making change requests etc.
  • Make any fixes that are easier for me to do by hand than to describe to Copilot.
  • Merge.

In fact, I was finding that the first step had some sub-steps to it too. What I was doing, more than ever, was writing issues like I'd write sticky notes: with simple descriptions of a bug or a new feature. I'd then come back to them later and flesh them out into something that would act as a prompt for Copilot. I found myself doing this so often that I ended up adding a "Needs prompt" label to my usual set of issue labels.

All of this made for an efficient workflow, and one where I could often get on with something else as Copilot worked on the latest job (I wasn't just working on other things on my computer; sometimes I'd be going off and doing things around the house while this happened), but... it wasn't fun. It was the opposite of what I've always enjoyed when it comes to building software. I got to dream up the ideas, I got to do the testing, I got to review the quality of the work, but I didn't get to actually lose myself in the flow state of coding.

One thing I've really come to understand during those 5 days of working on BlogMore was that I really missed getting lost in the flow state. Perhaps it's the issue-to-PR-to-review-to-merge cycle I used that amplified this; perhaps those who converse with an agent in their IDE or in some client application keep a sense of that (I might have to try that approach out). Either way, this feels like a serious loss to me when it comes to writing code for personal enjoyment.

The main problems

I think it's fair to say that I've been surprised at just how well Copilot understood my (sometimes deliberately vague) requests, at how it generally managed to take some simple plain English and turn it into actual code that actually did what I wanted and, mostly, actually worked.

But my experiences over the past few days haven't been without their problems.

The confidently wrong problem

Hopefully we all recognise that, with time and experience, we learn where the mistakes are likely to turn up. Once you've written enough code you've also written plenty of bugs, and been caught out by enough edge cases, that you get a spidey-sense for trouble as you write. I feel this kind of approach can be called cautiously confident.

Working with Copilot[2], however, I often ran into the confidently wrong issue. On occasion I found it would proudly[3] request review for some minor bit of work, proclaiming that it had done the thing or solved the problem, and I'd test it and find nothing had materially changed. On a couple of occasions, when I pushed back, I found it actually doubting my review before finally digging in harder and eventually solving the issue.

I found that this took time and was rather tiring.

There were also times when it would do the same, but not directly in respect of code. One example I can think of is when it was confident that Python 3.14 was still a pre-release Python as of February 2026 (it isn't).

This problem alone concerns me; this is the sort of thing where people without a good sense for when the agent is probably bullshitting will get into serious trouble.

The tries-too-hard problem

A variation on the above problem works the other way: on at least one occasion I found that Copilot tried too hard to fix a problem that wasn't really its to fix.

In this case I was asking it to tidy up some validation issues in the RSS feed data. One of the main problems was root-relative URLs appearing in the content of the feed; those needed to be made absolute. Copilot did an excellent job of fixing the problem, but one (and from what I could see only one) relative URL remained.

I asked it to take a look and it took a real age to work over the issue. To its credit, it dug hard and it dug deep and it got to the bottom of the problem. The issue here, though, was that it tried too hard: having found the cause of the problem (a typo in my original Markdown, which had always existed) it went right ahead and built a workaround for this one specific broken link.

Now, while I'm a fan of Postel's law, this is taking things a bit too far. If this was a real person I'd tasked with the job I would have expected and encouraged them to come back to me with their finding and say "dude, the problem is in your input data" and I'd have fixed my original Markdown.

Here though it just went right ahead and added this one weird edge case as something to handle.

I think this is something to be concerned about and to keep an eye on too. I feel there's a danger in having the agent rabbit-hole a fix for a problem that it should simply have reported back to me for further discussion.

The never-pushes-back problem

Something I did find unsurprising but disconcerting was Copilot's unwillingness to push back, or at least defend its choices. Sometimes it would make a decision or a change and I'd simply ask it why it had done it that way, why it had made that choice. Rather than reply with its reasoning it would pretty much go "yeah, my bad, let me do it a way you're probably going to find more pleasing".

A simple example of this is one time when I saw some code like this:

@property
def some_property(self) -> SomeValue:
    from blogmore.utils import some_utility_function
    ...

I'm not a fan of imports in the body of methods unless there's a demonstrable performance reason. I asked Copilot why it had made this choice here and its reply was simply to say it had gone ahead and changed the code, moving the import to the top of the module.

I see plenty of people talk about how working with an agent is like pair-programming, but I think it misses out on what's got to be the biggest positive of that approach: the debate and exchange of ideas. This again feels like a concern to be mindful of, especially if someone less experienced is bringing code to you where they've used an agent as their pair buddy.

The overall impression

Now I'm at the end of the process, and using the result of this experiment to write this post[4], I feel better informed about what these tools offer, and the pitfalls I need to be mindful of. It wasn't always a terrible way of working. For example, on the first day, at one point on a chilly but sunny Sunday afternoon, I was sat on the sofa, MacBook on lap, guiding an AI to write code, while petting the cat, watching the birds in the garden enjoy the contents of the feeder, all while chatting with my partner.

That's not a terrible way to write code.

On the other hand, as I said earlier, I missed the flow state. I love getting lost in code for a few hours and this is not that. I also found the constant loop of prompt, wait, review, test, repeat, really quite exhausting.

As best as I can describe it: it feels like the fast food of software development. It gets the job done, it gets it done fast, but it's really not fulfilling.

At the end of the process I have a really useful tool, 100% "built with AI" under my guidance, one that lets me actually be creative and share the things I do create by hand. That's not a bad thing, and I can see why this is appealing to people. On the other hand, the process of building that tool was pretty boring and, for want of a better word... soulless.

Conclusion

As I write this I have about 24 hours of access to GitHub Copilot Pro left. It seems this experiment used up my preview time and triggered a "looks like you're having fun, now you need to decide if you want to buy it" response. That's fair.

So now I'm left trying to decide if I want to pay to keep it going. At the level I've been using it at for building BlogMore it looks like it costs $10/mth. That actually isn't terrible. I spend more than that on other hobbies and other forms of entertainment. So, if I can work within the bounds of that tier, it's affordable and probably worth it.

What I'm not sure about yet is if I want to. It's been educational, I can 100% see how and where I'd use this for work (and would of course expect an employer to foot the bill for it or a similar tool), and I can also see how and where I might use it to quickly build a personal-use tool to enable something more human-creative.

Ultimately though I think I'm a little better informed thanks to this process, and better aware of some of the wins people claim, and also better informed so that I can be rightly incredulous when faced with some of the wilder claims.

Also, it'll help put some of my reading into perspective.


  1. Amusingly I uncovered another bug while writing this post. 

  2. I keep saying Copilot, but I think it's probably more correct to say "Claude Sonnet 4.5" as that's what seemed to be at play under the hood, if I'm understanding things correctly. 

  3. Yes, of course that's an anthropomorphism; you'll find plenty of them in this article, as it's hard to write about the subject in any other way. It's an easy shortcut to explain some ideas.

  4. Actually I'm writing this post as I always do: in Emacs. But BlogMore is in the background serving a local copy of my blog so I can check it in the browser, and rebuilding it every time I save a change. 

A new engine

2 min read

For about 2 and a half years now this blog has been built with Pelican. For the most part I've enjoyed using it; it's been easy enough to work with, although not exciting (which I think is a positive thing to say about a static site generator).

There were, however, a couple or so things I didn't like about the layout I was getting out of it. One issue was the archive, which was a pretty boring list of titles of all the posts on the site. It would have been nice to have them broken down by date or something, at least.

Of course, there are lots of themes, and it also uses templates, so I could probably have tweaked it "just so"; but every time I started to look into it I found myself wanting to "fix" the issue by building my own engine from scratch.

Thankfully, every time that happened, I'd come to my senses and go off and work on some other fun personal project. Until earlier this week, that was.

The thing is... I've been looking for a project where I could dive into the world of "AI coding" and "Agents" and all that nonsense. Not because I want to abandon the absolute thrill and joy I still get from writing actual code as a human, but because I want to understand things from the point of view of people who champion these tools.

The only way I'm going to have an informed opinion is to get informed; the only way to get informed is to try this stuff out.

So, here I am, with my blog now migrated over to BlogMore; a project that gives me a blog-focused static site generator that I 100% drove the development of, but for which I wrote almost none of the code.

At the moment it's working out well, as a generator. I'm happy with how it works, I'm happy with what it generates. I also think it's 100% backwards-compatible when it comes to URLs and feeds and so on. If you do see anything odd happening, if you do see anything that looks broken, I'd love to hear about it.

As for this being a "100% AI" project, and how I found that process and how I feel about the implications and the results... that's a blog post to come.

I took lots of notes.

MkDocs/mkdocstrings 404 CSS TIL update

1 min read

Following on from my post this morning, regarding the problem I was having with _mkdocstrings.css returning a 404 any time I deployed my documentation, built with MkDocs/mkdocstrings, to GitHub Pages...

It's come to light that I was doing this on hard mode, pretty much.

While trying to figure out the best way of deploying the docs, I'd stumbled on ghp-import and had been using that. On the other hand, MkDocs has its own command for doing the same thing: mkdocs gh-deploy.

Timothée pointed out to me that he never runs into this problem, but he uses this command. As it turns out, if you use mkdocs gh-deploy it creates the .nojekyll file by default.

And how does it do this? It uses the ghp-import code, passing a switch it provides to achieve exactly this. In fact... the command line version even has a switch for it!

-n, --no-jekyll      Include a .nojekyll file in the branch.

This is off by default when you run ghp-import yourself; I wish I'd noticed it when I was first experimenting. O_o

Anyway, thanks to Timothée's pointers, I've now managed to simplify how I build and publish the docs from textual-fspicker, and I'll apply this to other projects too.

Documenting textual-fspicker (plus a TIL)

4 min read

I've just made a wee update to textual-fspicker, my dialog library for Textual which adds FileOpen, FileSave and SelectDirectory dialogs. There's no substantial change to the workings of the library itself, but I have added something it's been lacking for a long time: documentation!

Well... that's not quite true, it's always had documentation. I'm an avid writer of Python docstrings and I make a point of always writing them for every class, function, method or global value as I write the code. As such the low-level "API" documentation has always been sat there ready to be published somehow, eventually.

Meanwhile the description for how to use the library was mostly a pointer to some example code inside the README. Not ideal, and something I really wanted to improve at some point.

Given I'm still on a bit of a coding spree in my spare time, I finally decided to get round to using the amazing mkdocstrings, in conjunction with mkdocs, to get some better documentation up and running.

The approach I decided to take with the documentation was to have a page that gave some general information on how to use the library, and then also to generate low-level documentation for all the useful content of the library from the docstrings. While the latter isn't really useful to anyone wanting to use the library in their own applications, it could be useful to anyone wanting to understand how it hangs together at a lower level, perhaps because they want to contribute to or extend the library in some way.

While writing some of the general guide took a bit of work, of course, getting the documentation building and generating was simple enough. The effort comes down to 3 rules in the Makefile for the project:

##############################################################################
# Documentation.
.PHONY: docs
docs:                    # Generate the system documentation
    $(mkdocs) build

.PHONY: rtfm
rtfm:                    # Locally read the library documentation
    $(mkdocs) serve

.PHONY: publishdocs
publishdocs: docs        # Set up the docs for publishing
    $(run) ghp-import --push site

The rtfm target is useful for locally-serving the documentation so I can live preview as I write things and update the code. The publishdocs target is used to create and push a gh-pages branch for the repository, resulting in the documentation being hosted by GitHub.

A wee problem

NOTE: I've since found out there's an easier way of fixing the issue.

This is, however, where I ran into a wee problem. I noticed that the locally-hosted version of the documentation looked great, but the version hosted on GitHub Pages was... not so great. I was seeing a load of text alignment issues, and also whole bits of text just not appearing at all.

Here's an example of what I was seeing locally:

Good layout

and here's what I was seeing being served up from GitHub Pages:

Bad layout

As you can see, in the "bad" version the func label is missing from the header, and the Parameters and Returns tables look quite messy.

I spent a little bit of time digging and, looking in Safari's console, I then noticed that I was getting a 404 on a file called _mkdocstrings.css in the assets folder. Problem found!

Only... was it though? If I looked in the gh-pages local branch the file was there (and with fine permissions). If I looked in the remote branch, it was there too. Thinking it could be some odd browser problem I even tried to grab the file back from the command line and it came back 404 as well.

Testing from the command line

At this point it was getting kind of late so I decided I must have screwed up somehow but I should leave it for the evening and head to bed. Before doing so though I decided to drop a question into the mkdocstrings discussions to see if anyone could see where I'd messed up.

As it turns out, it looked like I hadn't messed up and the reply from the always super-helpful Timothée was, in effect, "yeah, that should work fine". At least I wasn't the only one confused.

Fast forward to this morning and, with breakfast and coffee inside me, I decided to try and methodically get to the bottom of it. I wrote up the current state of understanding and looked at what might be the common cause. The thing that stood out to me was that this was a file that started with an underscore, so I did a quick search for "github pages underscore" and right away landed on this result.

Bingo!

That had to be it!

A little bit of testing later and sure enough, the documentation hosted on GitHub Pages looked exactly like the locally-hosted version.

So, TIL: by default sites hosted by GitHub Pages will pretend that any asset that starts with an underscore doesn't exist, unless you have a .nojekyll in the root of the repository, on the gh-pages branch (or whatever branch you decide to serve from).

To make this all work I added .nojekyll to docs/source and added this to mkdocs.yml:

exclude_docs: |
  !.nojekyll

All done!

And now I've worked out a simple workflow for using mkdocs/mkdocstrings for my own Python projects, in conjunction with GitHub Pages, I guess I'll start to sprinkle it over other projects too.

All green on GitHub

2 min read

In about a week's time I'll have had a GitHub account for 15 years! I can't even remember what motivated me to create one now, but back in October 2008 I grabbed the davep account...

Making my account

...and then made my first repo.

First repo made

My use of the site after that was very sporadic. It looks like I'd add or update something once or twice a year, but I wasn't a heavy user.

First few years

Then around the middle of 2015 I seem to have started using it a lot more.

The next few years

This very much shows that during those years I was working on personal stuff that I was making available in case anyone found it useful, but also leaning heavily on GitHub as a (a, not the) place to keep backups of code I cared about (or even no longer cared about). Quite a lot of that green will likely be me having a few periods of revamping my Emacs configuration.

The really fun part though starts about a year ago:

Working on FOSS full time

It's pretty obvious when I started working at Textualize, and working on a FOSS project full time. This is, without a doubt, the most green my contribution graph has looked. It looks like there's a couple of days this year where I haven't visited my desk at all, and I think this is a good thing (I try really hard to have a life outside of coding when it comes to weekends), but I'm also delighted to see just how busy this year looks.

I really hope this carries on for a while to come.

Apparently, as of the time of writing, I've made 12,588 contributions that are on GitHub. What's really fun is the fact that my first contribution pre-dates my GitHub account by 9 years!

My very first contribution

This one's pretty easy to explain: this is back from when I was involved with Harbour. Back then we were using SourceForge to manage the project (as was the fashion at the time), and at some point since, whoever maintains the project has pulled the full history into GitHub.

My contribution history on GitHub is actually older than my adult son. I suspect it's older than at least one person I work with. :-/[1]


  1. I'm informed that this isn't the case[2]; apparently I'm either bad at estimating people's ages, or bad at remembering them; or both. 

  2. Although it's not too far off. :-/ 

A new GitHub profile README

2 min read

My new GitHub banner

Ever since GitHub introduced the profile README[1] I've had a massively low-effort one in place. I made the repo, quickly wrote the file, and then sort of forgot about it. Well, I didn't so much forget as just keep looking at it and thinking "I should do something better with that one day".

Thing is, while there are lots of fancy approaches out there, and lots of neat generator tools and the like... they just weren't for me.

Then yesterday, over my second morning coffee, after getting my blog environment up and going again, I had an idea. It could be cool to use Textual's screenshot facility to make something terminal-themed! I mean, while it's not all I am these days, so much of what I'm doing right now is aimed at the terminal.

So... what to do? Then I thought it could be cool to knock up some sort of login screen type thing; with a banner. One visit to an online large terminal text generator site later, I had some banner text. All that was left was to write a simple Textual application to create the "screen".

The main layout is simple enough:

def compose(self) -> ComposeResult:
    yield Label(NAME, classes="banner")
    yield Label(PRATTLE)
    yield Label("github.com/davep login: [reverse] [/]")

where NAME contains the banner and PRATTLE contains the "login message". With some Textual CSS sprinkled over it to give the exact layout and colour I wanted, all that was left was to make the snapshot. This was easy enough too.

While the whole thing isn't fully documented just yet, Textual does have a great tool for automatically running an application and interacting with it; that meant I could easily write a function to load up my app and save the screenshot:

async def make_banner() -> None:
    async with GitHubBannerApp().run_test() as pilot:
        pilot.app.save_screenshot("davep.svg")

Of course, that needs running async, but that's simple enough:

if __name__ == "__main__":
    asyncio.run(make_banner())

Throw in a Makefile so I don't forget what I'm supposed to run:

.PHONY: all
all:
    pipenv run python make_banner.py

and that's it! Job done!

From here onward I guess I could have some real fun with this. It would be simple enough to modify the code so that it changes what's displayed over time; perhaps show a "last login" value that relates to recent activity or something; any number of things; and then run it in a cron job and update the repository.

For now though... I'll stick with keeping things nice and simple.


  1. It was actually kind of annoying when they introduced it because the repo it uses is named after your user name. I already had a davep repo: it was a private repo where I was slowly working on a (now abandoned, I'll start it again some day I'm sure) ground-up rewrite of my davep.org website.