Bad Words In A URL

Today I wanted to share one of the more interesting debugging experiences I’ve had. It happened quite a few years ago, when I was involved in migrating a set of websites over to a single login. The idea was that as soon as you landed on a site without a logged-in session, you would be bounced over to an authentication site, which would bounce you straight back again. The two sites communicated via a backchannel, and if you were logged in on the authentication site the main site would log you in too. If not, you’d browse as a logged-out user, and when you did log in you were bounced over to the authentication site with your credentials passed via the backchannel. If that succeeded you were logged in on both sites; otherwise an error was displayed.

The backchannel communication was all encrypted with preshared keys, and when the user was bounced to the authentication site they were also given an encrypted token to ensure that a bad actor couldn’t attempt to hijack another user’s session. The exact details of the token aren’t important, but it included the user’s session ID, details of the site they landed on, and the time they were bounced (to guard against replay attacks).
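As a rough sketch of the kind of token involved (every name and the key here are hypothetical, and the real token was encrypted rather than merely signed - I’ve used an HMAC from Python’s standard library just to keep the sketch self-contained):

```python
import base64
import hashlib
import hmac
import json
import time

PRESHARED_KEY = b"example-preshared-key"  # hypothetical; the real key was preshared


def make_token(session_id, site, now=None):
    """Bundle the session ID, landing site, and bounce time into a signed token."""
    payload = json.dumps({
        "session_id": session_id,
        "site": site,
        "ts": time.time() if now is None else now,  # bounce time, to limit replays
    }, sort_keys=True).encode()
    sig = hmac.new(PRESHARED_KEY, payload, hashlib.sha256).digest()
    # URL-safe base64 so the token can travel in a query string.
    return (base64.urlsafe_b64encode(payload) + b"." +
            base64.urlsafe_b64encode(sig)).decode()


def verify_token(token, max_age=60.0, now=None):
    """Check the signature and freshness, returning the payload if valid."""
    body, _, sig = token.encode().partition(b".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(PRESHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig), expected):
        raise ValueError("bad signature")
    data = json.loads(payload)
    age = (time.time() if now is None else now) - data["ts"]
    if age > max_age:
        raise ValueError("token too old")  # stale tokens rejected, blunting replays
    return data
```

The timestamp check is what gives the replay protection: an intercepted token stops working once it’s older than the allowed window.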

Everything was working great in testing, and we gradually rolled the change out to more and more users. Eventually we started getting reports of a small number of users being unable to log in. We were able to determine that they landed on the main site OK and were bounced towards the authentication site, but they never arrived there.


Router Stats To Prometheus

I’ve previously written about my plan to collect much more data about my house. In the current work-from-home environment the quality of our internet connection is paramount, and I wanted to be able to monitor it and potentially be alerted to any degradation before it becomes an issue.

Although I’ve replaced my wifi with a UniFi-based system, I still use the router that was supplied by my ISP - a ZyXEL VMG1312-B10D. Like most networking equipment, the ZyXEL supports SNMP, a technology for reading and writing stats and configuration from equipment and aggregating them together. On paper it sounds great, but unfortunately SNMP is a nightmare to work with, and you need a mapping file (a MIB) for each device, which doesn’t exist for this model. After looking into creating this mapping, and at integrating with my preferred technology stack of Grafana and Prometheus, I decided to change tack and extract the data myself.

Fortunately the router UI contains some plain-text data that looks easy to scrape. So, filled with confidence that this would be an easier approach than learning SNMP, I spun up a GitHub project and got to work cranking out some code.
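To give a flavour of the approach (the stat labels and metric names below are invented for illustration, not the VMG1312-B10D’s actual output), scraping “label: value” lines and rendering them in Prometheus’s text exposition format might look something like this:

```python
# Hypothetical plain-text stats as shown in the router's UI page;
# the real labels on the device will differ.
SAMPLE = """\
Line Rate - Upstream (Kbps): 9999
Line Rate - Downstream (Kbps): 79999
SNR Margin (dB): 6.3
"""

# Map each human-readable label to a Prometheus metric name.
METRICS = {
    "Line Rate - Upstream (Kbps)": "router_upstream_kbps",
    "Line Rate - Downstream (Kbps)": "router_downstream_kbps",
    "SNR Margin (dB)": "router_snr_margin_db",
}


def parse_stats(text):
    """Scrape 'label: value' lines into a {metric_name: float} dict."""
    values = {}
    for line in text.splitlines():
        label, sep, value = line.rpartition(":")
        label, value = label.strip(), value.strip()
        if sep and label in METRICS:
            values[METRICS[label]] = float(value)
    return values


def to_exposition(values):
    """Render the metrics in the Prometheus text exposition format."""
    return "".join(f"{name} {value}\n" for name, value in sorted(values.items()))
```

Serving the output of `to_exposition` from a tiny HTTP endpoint is all Prometheus needs to scrape the router on a schedule.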


House Measurements

For a long time now I’ve tracked the weather outside my house with my weather station. I also have smart electric and gas meters which display my usage on a little screen in my kitchen, but I’d never tried to do anything useful with that data. Recently I bought an electric car, and given that it’s essentially a giant iPad on wheels, it inspired me to look into what data I could collect from it, and from elsewhere in my house.

Towards the end of last year, I upgraded my Synology NAS to a newer model with an Intel, rather than MIPS, processor - partly because the old one was ageing and I was worried about it dying, but mostly so I could run Docker containers on it. I’ve been running both a Ubiquiti UniFi Controller and PiHole since then, but I knew that as part of this project I’d want to run many more containers, so I took the opportunity to tidy up the setup.

Docker Compose is a tool that sits above the normal docker command and lets you run multiple Docker containers, simplifying the management of images and of the options you need to set for each container to work correctly. You can find my docker-compose.yml file here.
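To give a flavour of what a compose file looks like (the service names, images, ports, and paths below are illustrative, not my actual configuration):

```yaml
version: "3"
services:
  pihole:
    image: pihole/pihole:latest      # illustrative image tag
    restart: unless-stopped
    ports:
      - "53:53/udp"                  # DNS
      - "8080:80/tcp"                # admin UI
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
  unifi:
    image: jacobalberty/unifi:latest # illustrative image tag
    restart: unless-stopped
    ports:
      - "8443:8443"                  # controller UI
    volumes:
      - ./unifi:/unifi
```

With a file like this in place, a single `docker compose up -d` brings everything up with the right ports and volumes, rather than remembering a long `docker run` invocation per container.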


Two Dimensional Dates

Last week the UK government announced more than 12,000 cases of COVID-19 - more than double the number from the day before. This increase was accompanied by the following message on the Government’s Data Dashboard.

Due to a technical issue, which has now been resolved, there has been a delay in publishing a number of COVID-19 cases to the dashboard in England. This means the total reported over the coming days will include some additional cases from the period between 24 September and 1 October, increasing the number of cases reported.

This turned out to be an issue with the reporting of positive tests caused by a limit in Excel. An interesting part of the story is the way it was initially reported in the media. Their focus was on the number of cases reported each day, which, due to the delay in processing tests, is never entirely accurate, and with this backlog became a pretty meaningless number. The key measure used to make important decisions is the rolling average of new cases over the last seven days, and from the reported-date numbers you can’t tell at a glance how that average is changing.
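To make the point concrete, a trailing seven-day average is just the mean of the last seven daily counts; the figures below are made up, but a minimal sketch looks like this:

```python
def rolling_average(cases, window=7):
    """Trailing average of a daily case series over the given window."""
    out = []
    for i in range(len(cases)):
        lo = max(0, i - window + 1)          # shorter window at the start
        chunk = cases[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

A backlog dumped into a single reporting day produces one huge spike in the daily count, but a series keyed by the date the sample was taken spreads those cases back to the days they belong to, so its rolling average still reflects the real trend.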

What is particularly odd is that it turns out the government do publish the number of positive cases by the date the sample was taken. It’s just that for the last few days the media narrative has been “huge number of cases”, even though that’s largely an artefact of the old, incorrect data. Sure, the numbers are high, but they went up a week before, not last Saturday.


Is This Thing On?

tap, tap, tap Hello, is this thing on?

I started this blog way back in 2008 and was fairly active through to 2012. Unfortunately, apart from a period in 2017, it has been dormant since then. Working for a hedge fund, who are notoriously secretive, and having children, who are notoriously good at sucking up all your free time, meant that blogging really wasn’t an option. Now that I work for a more open company (Ocado Technology) and my kids are a little older and require a bit less of my time, I’m hoping to resurrect my blog.

Originally I created the blog on WordPress, which worked great. It was free, reliable, and easy to use. Unfortunately, when I came to write a new post I found the editor had become unusable for me. Perhaps I’m an old fuddy-duddy, but I just want to be able to write my text, add some simple pictures, and have the editor get out of my way. Sadly, it quickly became clear that WordPress was going to get in my way, so it was time to give the blog a new home and a new lick of paint.

For many years I’ve had a Linode server, which I use to host a few small websites. I’ve always been a fan of Markdown, so a static site generated by Jekyll seemed like the obvious choice.