Deriving From Untypable Classes

There’s going to be a bit of a change of pace this week, compared to my more recent posts. This time we’re going to be getting deep into the weeds of Python typing.

One of the biggest recent changes to Python has been the introduction of type annotations. Back in the dim and distant past I did my third-year university project on type inference in Python, around the time of Python 2.3. Now though, it’s a much more mainstream part of the Python ecosystem. Alongside tools like black and pylint, the type checker mypy is a core part of my standard Python set-up.

Adding type annotations to your code and integrating a type checker into your CI pipeline gives you many of the benefits of a statically typed language, while retaining most of the speed of development associated with Python. However, the dynamic nature of Python, and the fact that type annotations haven’t been widely adopted by the libraries you might depend on, mean that type checking has its limitations, and sadly it might not be obvious when the type checker has exceeded its ability to detect errors.
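As a minimal sketch of what annotations buy you (the function here is my own invented example, not from any particular project), a checker like mypy can reject bad calls before they ever run:

```python
def mean(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)


print(mean([1.0, 2.0, 3.0]))  # prints 2.0

# mypy would flag the call below at check time, because a list of
# strings is not a list[float]; at runtime it raises a TypeError.
# mean(["one", "two"])
```

The runtime behaviour is unchanged either way; the win is that the broken call is caught in CI rather than in production.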

Recently I was investigating a CI pipeline failure for a merge request opened by Renovate for Google’s BigQuery Python API library. The failure was in pylint, saying that a type didn’t have the attribute name we were using. At first, this seemed like a simple failure, but after more investigation, I noticed something odd about it.
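I can’t reproduce the real merge request here, so the snippet below is a hypothetical reconstruction of this general class of failure (the Response class and its fields are invented): attributes created dynamically are invisible to static analysis, so pylint reports no-member even though the code runs fine.

```python
class Response:
    """Sets its attributes dynamically, so static analysers can't see them."""

    def __init__(self, **fields: str) -> None:
        for key, value in fields.items():
            setattr(self, key, value)


resp = Response(name="my-dataset")

# Works at runtime, but pylint reports E1101 (no-member) here because
# it cannot statically prove that Response instances have `name`.
print(resp.name)
```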


Writing A DevOps Vision

DevOps as a concept has been around since roughly 2010, but implementing the ideas behind it is challenging, particularly when you’re in a team supporting old monolithic codebases. For several years we had engineers fulfilling the role of a “DevOps Engineer”. However, we always knew that having a specific person working on DevOps is a bit antithetical to the DevOps concept - it’s supposed to be a state of mind and a set of practices rather than a job role.

The aim was always to have that engineer act as a source of expert knowledge and an enabler. Teams were still supposed to own their code, processes and deployments, but in reality, DevOps-related work was often thrown over the wall to that engineer with the expectation that it was their problem, and not the team’s problem.

We ended up in a situation where we had to make a choice - hire a new engineer into the same role, or attempt to spread the work across all engineers. We chose the second option, but that posed the question of how to change team culture across a department, so that DevOps becomes a standard part of each team’s process, much like Kanban, Scrum or any of the other ways teams organise themselves.


The Power Of Team Dashboards

Metrics and dashboards are well-understood tools for monitoring the health and performance of software, or your profitability and other key business metrics. What is less common is using the same tools and techniques to monitor the health and performance of the team behind the software. I’m not suggesting using dashboards to report on individual developers, but as a tool to help the team focus on improving their own processes they can be very useful, provided they’re handled carefully.

My own journey started when I was promoted from team leader to engineering manager, responsible for five teams. The change in level gave me a significantly different view, but also great difficulty in knowing where to focus my efforts. When you’re a team leader you are so close to the team that you hear and feel every change in mood, and have intimate knowledge of all projects and their current state. Suddenly being responsible for five teams gives you a great vantage point for spotting opportunities for collaboration between teams, and removes you from the noise of day-to-day life so you can focus on the biggest issues. However, it also cuts you off from the firehose of raw information, so it can be hard to know where to spend your time to get the best return on your energy.


Replacing Travis CI With BuildBot

Back when I reactivated this blog I posted about using Travis CI to automate the build process. Sadly, at the end of last year Travis announced they were ending free builds for all public repositories, and that only authorised open-source projects would get free build credits.

The repository for this blog is publicly accessible, partly in case anyone wants to see my draft posts, or raise a merge request to fix a typo, but mostly because why not? That previously allowed me to not worry about the cost of building the site, but it’s not unreasonable for a private company that needs to make a profit to want to focus its generosity on actual open-source projects. I certainly don’t blame them for the policy change, although I hope the approval process for open-source projects is easy and widely applied, so it’s not just a few big projects that can take advantage of it.


Scheduled SMART Checks

For years hard disks (both spinning rust and SSDs) have had a built-in monitoring system, called SMART, that tracks various metrics about the health of your disk. In the old days, if you were lucky, you might get some warning that your disk was about to fail because it would start to make a nasty noise. In the modern era of SSDs you likely won’t get any warning, and suddenly boom, your laptop won’t boot or mount the disk.

Obviously nothing is perfect, and any monitoring can miss a failure, but the potential of some warning is better than definitely getting none. Also, this is no substitute for a proper backup and recovery strategy, but in most home situations people don’t have spare laptops or hard drives just sitting around.

It would be relatively easy for operating system vendors to automatically detect SMART-capable drives and run a check every so often. If it fails, they could pop up a warning about a potential imminent failure. As far as I know though, no-one does this.
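In the meantime, on Linux it’s easy to roll your own with the smartmontools package. As a sketch (the device path, schedule and alerting command are just examples, and assume a working local mail set-up), a crontab fragment might look like:

```
# Kick off a short SMART self-test early on Sunday morning...
0 3 * * 0  /usr/sbin/smartctl -t short /dev/sda
# ...then check overall health an hour later and alert on failure.
0 4 * * 0  /usr/sbin/smartctl -H /dev/sda || echo "SMART health check failed on /dev/sda" | mail -s "Disk warning" root
```

The smartd daemon that ships with smartmontools can also schedule self-tests and send alerts via smartd.conf, which is probably the more robust option if you want this running permanently.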