Posts tagged with "Copilot"

But is the code that bad?

5 min read

There is, obviously and understandably, a lot of conversation online about AI and coding and agents and all that stuff. Much of it I get, much of it I agree with, I share the vast majority of the concerns. The impact on people, the impact on society, the impact on the environment, the impact on security... there's a good list of things to worry us there.

The one that crops up a lot though, that I don't quite get, is the constant claim I see that at best AI tools produce bad code, and at worst they produce unworkable code. That really isn't my recent experience.

Sure, going back to 2023 or 2024, when I first started toying with these new chatbot things some folks were raving about, the output was laughable. I can remember spending some fun times trying to coax whatever version of ChatGPT was on the go at the time into writing workable code and being amused by just how bad it was.

Even back in October last year, when I first tried out the free Copilot Pro that GitHub had given me to play with, I tried to get it to build a Textual application for me and it was terrible. The code was bad, it didn't really know how to use Textual properly, the application I was trying to get it to write as a test barely worked. It was a disaster.

A month later, in November of last year, I had a second go with better success. That time the (still not released, perhaps one day) application I was building was Swift-based and worked really well, but I can't really comment on the quality of the code or how idiomatic it is with respect to the type of application it is (it's a wee game that runs on iOS, iPadOS, and macOS).

By the time I tried my first serious experiment things seemed to be a little different. The code actually wasn't bad. It wasn't good, it was far from good, but it wasn't bad. Also, because it was Python, I was in a good place to judge the code.

Since I've started working on BlogMore I've noticed issues such as:

  • Lots of repetitive boilerplate code.
  • Lots of magic numbers.
  • Lots of magic strings.
  • Functions with redundant and unused parameters.
  • A default state of just adding more and more code to one file.
  • A habit of writing least-effort-possible type hints.
  • A habit of sometimes taking a hacky shortcut to solve a problem.
  • A habit of sometimes over-engineering a solution to a problem.
  • A weird obsession with importing inside functions.
  • An occasional weird obsession with guarding some imports with TYPE_CHECKING to work around non-existent circular imports.
  • An unwillingness to use newer Python capabilities (I've yet to see it make use of := without being prompted, for example).
  • A tendency to write what I would consider less-elegant code over more-elegant code.
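To give a flavour, here's a contrived sketch of the sort of patterns I mean. This is not actual BlogMore code, just an illustration that packs a few of the above into one place:

# Contrived illustration of the patterns listed above; not real BlogMore code.
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    # Guarding an import that could happily live at module level.
    from pathlib import Path


def load_posts(source: Any, limit: Any = None) -> Any:  # Least-effort type hints.
    """Load the posts that are ready for publishing."""
    # The weird obsession with importing inside functions.
    import json

    # A magic string, a magic number, and note that limit is never used.
    posts = json.loads(source)
    return [post for post in posts if post.get("status") == "ready"][:25]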

The list isn't exhaustive, of course. The point here is that, as I've reviewed the PRs¹, and read the code, I've seen things I wouldn't personally do. I've seen things I wouldn't personally write, I've seen things I've felt the need to push back on, I've seen things I've fully rejected and started over. Ultimately BlogMore isn't the code I would have written, but at the moment it is the application I would have written².

So, here's the thing: every time I see someone writing a negative toot or post or article or whatever, and they talk about how the code it produces is unworkable, I find myself wondering about how they formed this opinion. Are they just writing the piece for the audience they want? Are they writing the piece based on their experience from months to years back, when these tools did seem to still be laughably bad? Are they simply cynically generating the piece using an LLM to bait for engagement? When I see this particular aspect of such a post it's a bit of a red flag about where they're coming from, kind of like how you suddenly realise that someone who seems to speak with authority might be full of shit when they start to spout questionable "facts" on a subject you understand well.

But wait! What about that list of dodgy stuff I've seen while building BlogMore with Copilot? What about all the reading and reviewing I've had to do, and what about the other crimes against Python coding I can probably still find in the codebase? Surely that is evidence that these tools produce terrible, unworkable, unusable code?

I mean, okay, I suppose I could reach that conclusion if I'd had a massively atypical experience in the software development industry and had never had to review anyone else's code, or had never needed to work on someone else's legacy code. Is what I'm seeing out of Copilot something I'd consider ideal code? Of course not. Is it worse than some of the worst code I've had to deal with since I started coding for a living in 1989? Hell no!

From what I'm seeing right now I'm getting code whose quality is... fine. Mostly it does the job fine. Often it needs a bit of coaxing in the right direction. Sometimes it gets totally confused and goes down a rabbit hole which needs to just be blocked off and we start again. Occasionally it needs rewriting to do the same thing but in a more maintainable way.

All of which sounds very familiar. I've had times where that describes my code (and I would massively distrust anyone who says they've never had the same outcomes in their time writing code). For sure it describes code I've had to take over, maintain or review.

It's almost like it was trained on lots of code written by humans.

Meanwhile... not every instance of using these tools to get code done needs to be about writing actual code. More and more I'm finding Google Gemini (for example) to be a really handy coding buddy and a faster "Google this shit 'cos I can't remember this exact thing I want to achieve" tool. I'll ask, I'll almost always get a pretty good answer, and then I can generally take that snippet of code and implement it how I want.

I've seldom had to walk away from that sort of interaction because it was getting me nowhere.

All of which is to say: I remain concerned about a great many things in the AI space at the moment, but I'm also equally suspicious of someone who just flatly says "and the code it produces just doesn't work". If that's part of an article or post I'm left with the feeling that the author put zero actual effort into forming their opinion, let alone actually writing it.


  1. To varying degrees. Sometimes I have plenty of time to kill and I read the PR carefully; other times I glance over it, satisfy myself there's nothing horrific there, and then decide to push back or merge based on the results of hand-testing and automated testing.

  2. To be fair, it's the application I would still be writing and would be some time off finishing; there's no way it would be as feature-complete as it is now had I been 100% hand-coding it. 

NGMCP v0.2.0

1 min read

The experiment with building an MCP server continues, with some hacking on it happening over a couple of hours while killing time in an Edinburgh coffee shop.

It is, of course, a solution looking for a problem, and I suspect I'm the only person who will ever use it, and even then only as a test, but building it is proving interesting.

The main changes in v0.2.0 are:

  • I turned the current search_guide tool into a line_search_guide tool (because that's what it was doing: a line-by-line search).
  • I added a body_search_guide tool that treats all the lines in an entry as a single block of text and then does the search in that (so searches over line breaks will work).
  • I added a read_entry_source tool that, rather than rendering the entry of a guide as plain text with all the markup removed, instead delivers the underlying "source" for the entry; something that could be useful if you wanted to get an agent to convert it into another marked-up body of text.
  • I added a markup glossary resource, which technically tells an agent everything it needs to know about Norton Guide markup.
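For a flavour of what's going on under the hood, here's a heavily simplified sketch of how a tool and a resource along these lines hang together with FastMCP. To be clear, this isn't NGMCP's actual code: the GUIDES dictionary is a hypothetical stand-in for the guide content that NGMCP really reads from Norton Guide files via NGDB.

# A simplified sketch, not the real NGMCP implementation.
from fastmcp import FastMCP

mcp = FastMCP("ngmcp-sketch")

# Hypothetical guide data: guide name -> entry title -> lines of text.
GUIDES: dict[str, dict[str, list[str]]] = {
    "example.ng": {"Intro": ["Welcome to the", "example guide."]},
}

@mcp.tool()
def body_search_guide(guide: str, needle: str) -> list[str]:
    """Find entries whose body, taken as one block of text, contains needle."""
    hits: list[str] = []
    for title, lines in GUIDES.get(guide, {}).items():
        # Joining the lines first is what lets a match span a line break.
        if needle.lower() in "\n".join(lines).lower():
            hits.append(title)
    return hits

@mcp.resource("guide://markup-glossary")
def markup_glossary() -> str:
    """A glossary an agent can read to understand Norton Guide markup."""
    return "^B toggles bold, ^U toggles underline, ..."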

That markup glossary resource is the interesting one. When I added it and experimented locally it seemed helpful: I could ask questions about markup and Copilot appeared to use it. Meanwhile, having installed v0.2.0 globally on my machine, and having enabled it, I'm finding that Copilot seems to have zero clue about the markup and instead is using the server to go off and read the guides to work out the markup¹.

On the other hand, the new "get source" tool seems to work a treat.

Peeking at the source for an entry

So I suspect I still have some reading and experimenting to do when it comes to resources, to better understand why I'd want to provide them and what problem they solve.


  1. All credit to it: it did find CREATING.NG and read the markup out of that. 

Global and local MCP servers with Copilot CLI

1 min read

This morning I'm tinkering some more with NGMCP. Having done a release yesterday and tested it out by globally installing it with:

uv tool install ngmcp

I was then left with the question: how do I easily test the version of the code I'm working on, when I now have it set up globally? Having done the global installation I had ~/.copilot/mcp-config.json looking like this:

{
  "mcpServers": {
    "ngmcp": {
      "command": "ngmcp",
      "args": [],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

whereas before it looked like this:

{
  "mcpServers": {
    "ngmcp": {
      "command": "uv",
      "args": ["run", "ngmcp"],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

But now I want both. Ideally I'd want to be able to set up an override for a specific server in a specific repository. I did some searching and reading of the documentation and, from what I can tell, there's no method of doing that right now¹. So I've settled on this:

{
  "mcpServers": {
    "ngmcp-global": {
      "command": "ngmcp",
      "args": [],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    },
    "ngmcp-local": {
      "command": "uv",
      "args": ["run", "ngmcp"],
      "env": {
        "NGMCP_GUIDE_DIRS": "/Users/davep/Documents/Norton Guides"
      }
    }
  }
}

and then in Copilot CLI I just use the /mcp command to enable one and disable the other. It's kind of clunky, but it works.


  1. I did see the suggestion that you can write your MCP server so that it deliberately declines to respond depending on the context, but that seems horribly situation-specific and wouldn't really help in this case anyway because I want it to work in both contexts, depending on what I'm doing.

NGMCP - An MCP experiment

2 min read

Recently I've been thinking that it would be interesting to get to know a little about the Model Context Protocol, see what it's about, and get a feel for how useful it might be, if at all, for anything I do.

As always happens when I want to try out something new, I reached for a problem I know well so I don't have to get bogged down in solving the problem itself. As almost always happens, I decided I should base it around Norton Guides.

Part of the point of MCP seems to be providing an interface over sources of data and actions that an LLM might not otherwise be able to cope with, so it sounded to me like providing a bridge to the content of Norton Guide files would be a perfect test. Of course, this isn't the first time I've bridged LLMs and NG files, but this is obviously intended to be a more generic solution than throwing a Markdown file at NotebookLM.

Earlier this afternoon I sat down and did some reading, and then decided to throw the problem at GitHub Copilot. I told it I wanted to use my NGDB library as the core of the tool, and that it should wrap it up with FastMCP. The initial result was... a bit of a mess. It sort of worked, sort of, but it also seemed to try and put together a project that mostly looked how my Python repos look, but with some bits just wrong.

I did some cleaning up, did some testing, did some tweaking, and eventually I had something working.

Asking what NGMCP can do

So far I've given the code a fairly quick read over, and I can see what it's doing and how it's going about this. This approach obviously has the disadvantage that I didn't hand-write it so there's still a lot to read to really appreciate what's going on; on the other hand, it does have the advantage that it's implemented a tool based on my library so I know what to expect it to be doing.

There will be more code reading happening, and I also intend to look at tidying up the code more and perhaps hand-adding some more features.

Looking at the credits of a guide

I very much doubt that this particular MCP server is going to be any use to anyone, but as a proof of concept it works well for me. If I were in a position of needing to build something genuinely useful, I now have a start and a vague idea.

Reading some text from a guide

On the other hand: once again, as with other projects I've done related to Norton Guides, this is a tool that helps keep the content available and accessible; that alone is one reason for me to tidy this up and move it towards v1.0.0 and keep it maintained.

If you fancy having a play, some (currently Copilot-generated) documentation can be found on the server's dedicated site. When I get a bit more time I'm going to flesh this out.

BlogMore v2.8.0

1 min read

I've just published v2.8.0 of BlogMore to PyPI. This is a small update which addresses a bug that Andy reported.

The fix was simple enough, and is another interesting little thing to keep in mind given that BlogMore is an ongoing Copilot experiment. When I first kicked off BlogMore I let it decide which library to use to handle Markdown (I'm more used to markdown-it-py via Textual and so via Hike), and so also let it decide which extensions made most sense given the request. I'd honestly never run into that style of metadata before, only ever dealing with or caring about frontmatter¹.
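For the curious, here's roughly how that style of metadata works, assuming (as I believe is the case here) the library in play is Python-Markdown with its meta extension; the key/value pairs sit at the very top of the file with no frontmatter-style fences:

# A minimal sketch of Python-Markdown's "meta" extension in action.
import markdown

source = "Title: A post about things\nDate: 2026-02-03\n\nThe body starts here."

md = markdown.Markdown(extensions=["meta"])
html = md.convert(source)

# Keys are lower-cased and values are lists of strings.
print(md.Meta)  # {'title': ['A post about things'], 'date': ['2026-02-03']}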

On the other hand, I will say this: I was cooking dinner when the report came in; I pointed Copilot at the issue and let it figure it out. After eating, clearing things away, and general post-dinner chilling, I dropped into the repo to see what it had made of it and... it had figured the issue out and fixed it.


  1. I guess technically they're the same thing, but here I mean I'm more used to the delimited YAML of frontmatter than whatever it is the meta plugin was dealing with. 

GitHub Copilot wants our interaction data

3 min read

I guess it was inevitable¹, but yesterday GitHub announced a new opt-out approach to learning from people's interactions with Copilot. I don't have anything novel or insightful to say on this switch, and I'm sure folk with better-informed opinions have already rushed out posts and articles about this, but I did want to jot down just how curious I am to see this roll out.

For starters: for me this feels like one of those things that will get a lot of backlash, and in a day or so GitHub will say they're pausing the rollout while they reevaluate the approach². Then, eventually, they'll roll it out anyway after a "period of consultation with the community". That sort of thing.

I've not read further this morning, but before going to bed last night it wasn't a happy time in the comments section of the FAQ. I can also see why some would be cynical about this change, given the tone of some of the questions and answers in that FAQ. I'll hand it to them: they're pretty candid and honest with the FAQs, but kinda yikes too.

A bad time in the FAQ

Here's the key thing I'm curious about, and which I'll be thinking about and watching for movement on in the next few days: all the talk here seems to be about protecting the privacy of the proprietary code of businesses³. That... is understandable, from a business point of view, from a commercial adoption point of view, from a "we want all software engineering departments to use Copilot" point of view. But how the heck are they really going to manage that?

In the comments in the FAQ this explanation stood out:

We do not train on the contents from any paid organization’s repos, regardless of whether a user is working in that repo with a Copilot Free, Pro, or Pro+ subscription. If a user’s GitHub account is a member of or outside collaborator with a paid organization, we exclude their interaction data from model training.

This seems somewhat unclear to me. Let's walk this one through for a moment: my GitHub account is a member of a "paid organisation". My account is also my account, for my personal code; I've had it a long time and it's filled with a lot of FOSS repos and I keep adding more. So which scenario is the right one here?

  1. Because I'm currently a member of at least one "paid organisation" I'm always opted out of this training, no matter how the opt in/out setting is set and no matter what code I work on?
  2. Because I'm currently a member of at least one "paid organisation" I can opt in when working on code that is from a repository which is mine, but I'm opted out when I'm working on code from a repository belonging to the paid organisation?

I think it reads like it's #1. But then that seems rather odd to me because, if I go and look at my settings right now, I can elect to opt in/out of this training system. If the correct reading is #1, why not just disable that setting altogether and say below it that I'm opted out because I'm part of a paid organisation?

Which sort of suggests we should perhaps read it as #2? If so, that raises all sorts of questions. How would Copilot know I'm working on code from such a repository? Sure, it's not impossible to infer if I am working within the context of a given repository, doing some fun stuff to work out the origin and so on, but it feels messy. It also feels like a scenario that could end up being incredibly leaky. It really would not be difficult to run into a scenario where I'm working on some non-Free code but in an environment where the licence isn't clear, or where it appears that the licence⁴ would permit such training.
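To be clear about the sort of inference I mean, it need be nothing cleverer than asking git where the working copy came from; this sketch is my guess at the kind of check involved, not anything GitHub has actually described:

# My guess at the sort of signal that could be used; not anything GitHub
# has documented: ask git for the origin of the current working copy.
import subprocess

origin = subprocess.run(
    ["git", "config", "--get", "remote.origin.url"],
    capture_output=True,
    text=True,
).stdout.strip()

# Something like git@github.com:some-org/some-repo.git, from which the
# owning organisation could presumably be looked up.
print(origin)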

ℹ️ Note

Editing to add: there is even a comment where it is acknowledged that someone could be working in such a way that it's impossible to know the provenance of the code: "Copilot ... can even work when you are not connected to any repo."

Or... perhaps there's a #3, or a #4, and so on, that I've not even considered yet. The fact that software engineering departments suddenly have to start thinking about this issue (yes, I know, it's been a background issue for a while but this really drags it out into the open) is going to make for a few interesting weeks, assuming people care about where their code ends up.

Who knows. Perhaps, in some strange way, this is how all software ends up being Free.


  1. And I think a bit of me is surprised that they weren't just doing it anyway. 

  2. This isn't a prediction, I'm just saying it feels like that sort of announcement. 

  3. It's not that simple, but to save getting into the deep detail... 

  4. I'm using licence here as shorthand for a lot of things to consider relating to who should have access to the code and how. 

BlogMore v2.3.0

1 min read

I've just pushed BlogMore v2.3.0 up to PyPI. This release has a couple of bug fixes and a couple of significant new features.

The first new feature, which came in as a request, is support for controlling the themes used for code blocks, including independent control of the themes used for light and dark mode. With these you can specify any of the Pygments styles to use for code blocks. Personally, I prefer to have things blend in, but this now also gives you the chance to have them really contrast (use a light mode theme for a dark-mode blog, or a dark mode theme for a light-mode blog).
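If you're wondering what theme names are available, Pygments can tell you itself; a quick way to list every style it ships with:

# List every Pygments style name that could be used as a code block theme.
from pygments.styles import get_all_styles

for style in sorted(get_all_styles()):
    print(style)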

The other big feature popped into my head earlier today and once I thought about it I had to have it. It's similar to something I had for the photography section of the older version of my website, consisting of a bunch of useless but fun stats and facts about the content.

Things like which hour of the day I tend to post during:

Hour of day

Or the day of the week:

Day of week

Or the month of the year¹:

Month of year

This is off by default -- I can imagine most folk would not want this sort of thing on their blog -- but it can easily be turned on with with_stats. The location of the stats can also be controlled using stats_path.


  1. Unsurprisingly March is leaping ahead as of the time of writing. 

Copilot rate limits

1 min read

Last night, while tinkering with another BlogMore feature request, I ran into the sudden rate limit issue again. As before, there was no warning, there was no indication I was getting close to any sort of limit, and the duration I was supposed to wait to let things cool down was given as a literal "<duration>" rather than an actual value.

So this time I decided to actually drop a ticket on GitHub support. Around 12 hours later they got back to me:

Thanks for writing in to GitHub Support!

I completely understand your frustration with hitting rate limits.

As usage continues to grow on Copilot — particularly with our latest models — we've made deliberate adjustments to our rate limiting to protect platform stability and ensure a reliable experience for all users. As part of this work, we corrected an issue where rate limits were not being consistently enforced across all models. You may notice increased rate limiting as these changes take effect.

Our goal is always that Copilot remains a great experience, and you are not disrupted in your work. If you encounter a rate limit, we recommend switching to a different model, using Auto mode. We're also working on improvements that will give you better visibility into your usage so you're never caught off guard.

We appreciate your patience as we roll out these changes.

So, in other words: expect to be rate limited more on this product we're trying to get everyone hooked on and trying to get everyone to subscribe to.

Neat.

I especially like this part:

We're also working on improvements that will give you better visibility into your usage so you're never caught off guard.

You know, if it were me, if I wanted to build up and keep goodwill for my product, I'd probably do that part first and communicate about it earlier rather than later.

I guess this is why I don't hold the sort of position that gets to make those decisions.

BlogMore v2.2.0

1 min read

I've just bumped BlogMore to v2.2.0. This release adds post counts to the archive page.

Post counts in the archive

The overall count appears at the top of the page, with further counts broken down for each year and each month. I've tried to ensure that the counts appear subtle enough, but still readable.

Also, serving the same purpose, but giving more information at a glance, I've added the same counts to the table of contents that appears to the right of the archive if you're on a suitably wide display.

Counts in the ToC

This means I can easily see that I've posted more times this month than I have in any other month since I started this blog. In fact, I've posted more times this month than I have in quite a few individual years in the past.

So... that's today's "I thought I'd added everything but oh look here's another thing to add" feature. Which goes some way to explain why there are so many posts this month, I guess.

BlogMore v2.1.0

2 min read

It's been a couple of days or so since I last made any changes to BlogMore -- mainly because I've been messing with blogmore.el -- but yesterday morning I decided to make a change I've been wanting to make for a wee while.

Ensuring fenced code blocks were handled was one of the things that was important when I started planning BlogMore and, while the result was looking good (thanks to Pygments), the way the block itself looked on the page wasn't quite to my taste. So yesterday I wrote out how I wanted things to be changed and tasked Copilot with getting the job done.

I'm pretty happy with the result.

(defun greet (name &optional (greeting "Hello"))
  "Prints a greeting to standard output."
  (format t "~A, ~A!~%" greeting name))

For the sake of any future reader, should I happen to tweak this even more at some point in the future, here's what the above looks like at the time of writing:

Example code block

As you can see: the language for the block is now shown to the left, and there's a handy "copy to clipboard" icon to the right. I'm still not sure I'm loving the subtle border around the sides and the bottom; I think I'm going to live with it for a few days and see how it sits with me.

I'm also wondering if I should tweak the name of the language a little too, so that it's capitalised correctly. Of course, I could just get into the habit of writing the language name at the start of the block with the correct casing...

def greet(name: str, greeting: str = "Hello") -> None:
    """Print a greeting to standard output.

    Args:
        name: The name of the person to greet.
        greeting: The greeting to use.
    """
    print(f"{greeting}, {name}!")

but given how many code blocks I've got in my blog by this point, where I've typed them in lower case... it's probably easier to just tweak it when presenting it. Moreover, I do want to try and keep my Markdown sources compatible with as many rendering engines as possible and I can't be sure that all of them would downcase before doing the language lookup (although you'd hope they all would).
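If I do go the tweak-at-presentation-time route, Pygments can do most of the heavy lifting, since every lexer carries a properly-cased display name; a minimal sketch:

# Map a lower-cased fence language id to Pygments' nicely-cased lexer name.
from pygments.lexers import get_lexer_by_name

print(get_lexer_by_name("python").name)   # Python
print(get_lexer_by_name("clipper").name)  # FoxPro (Clipper is just an alias)

Of course, as that second lookup shows, you get the canonical lexer name rather than the alias you fenced with, which may or may not be what you want.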

Meanwhile... given how much more I like how code is looking now, I'm going to have to find more reasons to include code, and Pygments supports so many languages too! Even ones from my distant past...

Function Greet( cName, cGreeting )
   If cGreeting == NIL
      cGreeting := "Hello"
   EndIf
   ? cGreeting + ", " + cName + "!"
Return NIL

As an aside: I also just noticed that they list FoxPro, Clipper, xBase and VFP as aliases for xBase-type languages, but no Harbour! I might have to see about doing a PR for that...