Git & SSH sitting in a tree…

I do work for a bunch of different clients, who variously use GitLab and GitHub. For many years I put up with the incessant problem of accidentally signing my commits as the wrong user. It’s all too easy to forget to set the right GPG key and email address when you just want to get on with a project. It’s not the end of the world, but it’s annoying.

Often the same thing goes for the SSH keys you use to push and pull from your git repo; it’s a bit too easy to be lazy and use the same SSH key across multiple clients, when a little isolation would be a good idea from a security perspective.

What if you could work some magic so that identities and GPG and SSH keys are set to the right values right from the start, for every project for each of your clients? Read on…

This whole setup reminds me very much of a post I wrote in 2009 (13 years ago!) on the “holy trinity” of DNS, TLS, and virtual host wildcards that allow you to dynamically host vast numbers of previously undefined sites without having to touch your web server config at all, a classic example of convention over configuration.

First of all let me introduce you to .gitconfig. This file usually sits in your home directory, so for me on macOS that’s /Users/marcus/.gitconfig. It contains your global git defaults, and is an easy-to-read config file in an “ini” style (and no, those are not real values!):

[user]
    name = Marcus Bointon
    email = marcus@example.com
    signingkey = AC34DF5B434BB76
[github]
    user = Synchro
    token = f693251e52043a23fe5fbd955cff56ff
...

You’ll find lots of other sections in here, which you can read about in the git config docs. But we are only really interested in one option: includeIf. This directive conditionally includes another git config file into your settings, and one of the things you can make it conditional upon is the path to your project. This is useful. I typically set up my clients’ projects in the macOS default Sites folder within my home directory. Each client gets a folder, and each of their projects lives within that. This provides a tidy location to put a separate .gitconfig file that can be applied to all of their projects. It ends up like this:

~/.gitconfig
~/Sites/
    client1/
        .gitconfig
        project1/
        project2/
    client2/
        .gitconfig
        project1/
        project2/

Each .gitconfig file only needs to include the differences from the defaults that are set in the primary config file that lives in your home dir. To set up the GPG signing key and email for all of their projects, the file would contain this:

[user]
    email = remotedev1@client1.example.net
    signingkey = 434BB76AC34DF5B
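
If you’re not sure what value to use for signingkey, GnuPG can list your secret keys along with their long IDs. Assuming you’re using GnuPG and have already created a key with the client-specific email address from the example above, something like this will show it (the long ID is the part after the slash on the “sec” line):

# List secret keys with long key IDs; use the ID of the client-specific key
gpg --list-secret-keys --keyid-format long remotedev1@client1.example.net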

Back in our primary file, we would add this conditional statement to automatically pull in this extra config whenever git is operating in this folder:

[includeIf "gitdir:~/Sites/client1/"]
    path = ~/Sites/client1/.gitconfig

And that’s it as far as GPG goes – commits will now be signed with the key and email address that are specific to this client, so when you start your next project for them, you won’t have to configure anything; it’ll Just Work™.
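
If you want to check that the conditional include is actually being applied, git can tell you which file each setting came from. Note that the gitdir condition only kicks in inside a repository under that path, so run this from within one of the client’s projects (paths as in the example above):

# Run inside one of the client's repositories
cd ~/Sites/client1/project1
git config --show-origin user.email
git config --show-origin user.signingkey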

But what about SSH? The chances are that your client will have asked you for an SSH public key to add to their repo to provide you with sufficient access, but setting the GPG key doesn’t do anything towards selecting an SSH key for that purpose. You used to have to do that with environment variables such as GIT_SSH_COMMAND (which can be quite annoying), but fortunately git 2.10.0 added the core.sshCommand config option, which lets us specify the SSH command that git uses for file transfer operations. That command can include a -i parameter to select an SSH identity (and -C to use compression for a possible speed boost); in the example below I also pass -F /dev/null so that nothing in my regular SSH client config can override the chosen identity. Add this to your client’s .gitconfig file, using the path to your client-specific identity file (not the public key, which has a .pub suffix), like this:

[core]
    sshCommand = "ssh -i ~/.ssh/id_ed25519_client1 -F /dev/null"

Side note: I do hope you’re using Ed25519 keys for SSH; they’re newer, smaller, stronger, and faster than RSA keys, and they’ve been supported in OpenSSH since version 6.5 in 2014, so if your server doesn’t support them, you probably have bigger problems, or maybe you’re just running RHEL… I hope you’ve seen the post-quantum features of OpenSSH 9.0 too. The SSH client config file (usually found in ~/.ssh/config) is also really useful for twiddling per-host or per-key settings that you can just set and forget.
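
If you haven’t generated a per-client key yet, it only takes a moment; the filename and comment here just follow the naming used in the example above:

# Create a client-specific Ed25519 key (add a passphrase when prompted)
ssh-keygen -t ed25519 -a 100 -C "remotedev1@client1.example.net" -f ~/.ssh/id_ed25519_client1

# This is the public key to send to the client
cat ~/.ssh/id_ed25519_client1.pub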

Once you’ve done that, your commits will be signed using the client-specific GPG key and pushed to their repo using the client-specific SSH key, and you won’t have to change anything when you start new projects for them, so long as you put them in the same client folder.

“What about my IDE?”, I hear you ask. Not to worry: most IDEs use your system’s git and SSH configs, so all this should work just fine with PhpStorm, VSCode, etc.

While I’m sure some bright spark could make this even more dynamic and automate it across clients, I find that new clients are rare, but projects turn over fast enough for this to be a real win for getting that first commit signed and pushed correctly, first time.

An open source mini-adventure

I’m using Spatie’s Media Library Pro in a project for dgen.net, and ran into a problem when I tried to use a TIFF-format image – it failed to show a thumbnail:

Drag and drop works, but no TIFF image preview.

So I set about tracking down why this image didn’t work, since the project this was being used for has lots of TIFF images. This turned into quite the can of worms, but all worked out beautifully in the end.

TIFF images are not supported by most web browsers as they are not a typical “web format”, but they are very common in print and archiving contexts. It doesn’t help that Safari is about the only browser that will display them at all, but here the aim is to display a thumbnail, not the actual image, and the thumbnail doesn’t have to use the same format.

Media Library Pro is a set of user interface widgets providing access to Spatie’s Laravel Media Library package, so it’s dependent on that package to provide all the underlying file management and thumbnail generation, which is handled by a more general mechanism for creating “conversions” of underlying file types. This is especially useful for files that are not images – for example, it’s possible to create thumbnails for audio files using a package I wrote, but being able to do something similar for otherwise undisplayable image types is useful too.

It turns out that Media Library’s image support is handled by yet another Spatie package called (imaginatively) Image. So I started looking there, and found that it did not actually take responsibility for performing image processing operations either, but used yet another package called Glide, by The PHP League. In searching for info about using TIFF files with Glide, I found this issue, which told me that Glide already supported TIFF, so long as you were using the Imagick PHP extension (as opposed to the slower, less capable, but more common GD) as the image processing driver – which I already was. But as I’d seen, this didn’t seem to work.

So I set up a simple test script to convert a JPEG image into TIFF using spatie/image (I needed it to convert in both directions), and found that it did indeed create a TIFF file. However, apps I tried could not open it, saying that it was not a TIFF-format file. The file command-line utility showed me that the file was in fact a JPEG-format image saved with a .tiff extension:

file conversion.tiff
conversion.tiff: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 340x280, components 3

This was not helpful! It turned out to be a bug in Glide, so I tracked down the cause and submitted a PR to resolve it.
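
As an aside, if you want to check whether your own PHP setup has Imagick available, and whether the underlying ImageMagick build knows about TIFF, a quick command-line check looks something like this (assuming the CLI uses the same extensions as your web setup):

# Is the Imagick extension loaded?
php -m | grep -i imagick

# Which TIFF variants does the ImageMagick build support?
php -r 'print_r(Imagick::queryFormats("TIFF*"));'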

One general problem with open source projects is that you never know when maintainers are going to get around to merging (or rejecting) PRs, or, having merged them, when they will be tagged for release. I know this because I have been guilty of it myself! Here I struck lucky – a maintainer merged it the same day, and also tagged it for release.

Now I had a different problem. This fix was several layers down in my stack of dependencies, and those projects didn’t know about this change in Glide, so if I wanted spatie/image to gain TIFF support, I needed to bump its dependencies to force it to use the new version. It also turned out that while Glide now had TIFF support, Image did not pass that support through to its consumers, so I needed to let it know that TIFF was also a supported format. All that happened in another PR. Spatie have a very good reputation for supporting their open source packages, not least because they constantly dogfood them, and they have a great track record of merging PRs quickly and tagging them for release; this was no exception – my PR was merged and released very quickly.

Now I was nearly there – but not quite! I discovered two almost identical problems in spatie/laravel-medialibrary and spatie/image: despite delegating image processing functions to their dependencies (i.e. having Image say “I support whatever image formats Glide supports”), they both had their own hard-coded lists of supported formats. I had already updated this in Image in my previous PR, but now I needed to do the same thing (and something similar for tests) for Media Library. Cue PR number 3! True to form, Spatie merged and tagged this release quickly, and my chain was complete! I followed this up with another PR to port my changes to their later version 10 branch (supporting Laravel 9), most of which involved a switch to the Pest testing framework.

Finally, back in my app, I bumped my dependency version constraints (so my app picked up the latest versions of these packages), and then I got this:

The fruits of all that effort!
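
For reference, once everything had been tagged, the app-side change amounted to nothing more than a constraint bump and an update – something like this, though the exact package names and version constraints depend on your composer.json:

# Pull in the newly tagged releases all the way down the chain
composer update spatie/laravel-medialibrary spatie/image league/glide --with-all-dependencies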

I observed that there’s more that could be done in these packages – in particular, knowledge of which image formats and MIME types are supported should live only at the lowest level, with all the higher-level packages deferring to the ones below them. This would mean less code to maintain in those packages, and new formats would automatically start working without PR chains like this. So if you have time on your hands… This is of course how a lot of open source software comes into being – there’s always another yak that wants shaving!

This might seem like a lot of effort for a very small feature, but this is how open source works, on its good days! Every package you use is an accumulation of effort by original authors, maintainers, contributors, and reporters, all of whom want to solve one problem or another, and share their efforts so that others can avoid having to solve the same problems all over again.

This particular chain is the longest nested set of PRs I’ve ever done; it was fun to do, it was about the first thing I’ve ever “live tweeted”, it resulted in a solution to the specific problem I had, and that solution is now available to all. This is how open source is meant to work, but it’s not always this (remarkably!) smooth. Some package creators can’t be bothered to maintain their packages; others are on holiday, have just had a baby, or have died; raging flamewars erupt over the most trivial things; discrimination (racial, sexual, religious) is unfortunately common; bug reporters often fail to describe their problems well, or make excessive, unrealistic, entitled demands of maintainers. Sometimes this proves to be too much, resulting in great people stopping (or never starting) their participation in the open source ecosystem, which is a terrible shame.

The web would not exist without open source, and if you want to continue to reap the benefits of this beautiful thing we have collectively created, the best way is to support the maintainers. Whether it’s individual developers like me, package creators like Spatie and The PHP League, or open-source juggernauts like Laravel and SensioLabs (Symfony), we can all benefit from support. There are many different ways you can provide it (not just financially): making developer time (or other resources) available, paying for products and services sold by companies that back open source projects, or paying maintainers, either directly through things like GitHub Sponsors and Patreon, or through broader programmes such as Tidelift that might be more acceptable to accounting departments. I’m tooting my own trumpet here (my blog!), but there are literally millions of open source developers out there, and if you’re reading this, you’re using software that we have all created together.

Abstraction as a service

This is a short story I wrote back in 2015 and published on Medium. I don’t want to use Medium any more, so I’m reposting it here. If you like it, please follow me on Twitter.


On a warm, drizzly London day, I’m looking out of a window for inspiration. I can see a building from my window that an old client moved into recently, and I’ve watched them gradually redecorate it with their own branding. By the ground floor entrance is a small shop that appears to sell nothing but Aero bars (in milk chocolate or mint).

That thought makes me hungry, so I head out for lunch.

In a busy sandwich bar I bump into Claire and we chat about work between mouthfuls of an excellent sandwich (though I don’t recall what was in it). She mentions that she knows someone who might need my services, and who might turn up here for lunch.

A little later she points out a guy who’s just come in. Apparently his name is Tris. He’s 40s, slightly foppish, baggy-looking in a dark linen suit, like something from Muji. He seems distracted.

She waves at him. I’m not sure if he sees her.

We finish our lunch. Claire leads me over to Tris, says a brief hello and introduces me. It seems Claire has mentioned me to him before. “I hear you do the kind of thing I need” says Tris.

I reply with a hesitant “Yes”, not knowing what it is he needs.

He’s there with his daughter, who’s talking to someone else. She’s about 15, with short, dark hair and a denim jacket. I don’t catch her name. She glances at me, smiles, and returns to her conversation.

Tris says we should chat about some work he needs doing. He’s heading off on a sales trip to Taipei and Buffalo, NY, and needs some stuff doing pretty much immediately. He suggests we go to his office.

We head to the station. It’s packed with a glazed post-lunch crowd. We hustle onto a train; hot, humid and smelly. Standing close in the carriage, I notice he’s carrying a shoulder bag the same colour as his suit, a large white book and a handful of pencils. The pencils all have coloured tops, but I realise they are all graphite pencils of different hardnesses, and the points are worn down to fat, blunt stubs; he’s been busy. He sees me looking, and shows me the book; it’s a “grown up” colouring book, but old and worn. He flicks through a few pages in the limited space and I see a mix of repeating patterns and odd artworks, most partly but meticulously coloured in with shades of grey. He complains that it’s a bit cumbersome; one drawing folds out across multiple pages showing a mole digging through earth filled with jumbled aeroplanes, cars, skyscrapers, Coke cans. I have a fleeting moment of recognition; I’m sure I had this book back in the 80s, but I preferred it left as black and white lines.

We arrive at a station. He says suddenly “this is us”, hurriedly bundles up the unfolded page and we explode gently onto the platform in a spray of commuters.

By the time he’s sorted himself out, the station is empty. We cross to another platform and board an empty, old-style train with compartments. It smells dusty. His daughter slumps onto the old bench seat in a cloud of dust and plays with her phone.

The train lurches and bumps down the track to wherever. Tris is animated — he’s got a new concept that he’s bursting to pitch to his foreign clients, though he doesn’t go into detail. He suddenly stops and looks at me pointedly. “You know about 400?”

I’m floored for a moment, but venture “You mean like the HTTP error code?”

“No, no. Do you know about customer databases?”

“Yes”, I say, somewhat relieved.

“OK then, you’ll be fine”. He fiddles with his phone for a moment then hands it to me. “Give me your contact details”

I fill in his contacts app and hand it back. “That’s great, thanks” he says.

We arrive at a station, apparently ours, so we all alight. The rain has stopped. Just by the station is a newsagent. An aluminium-framed frosted glass door next to it sports a small plaque: “Tristan Enterprises”. I hope his daughter is not named Isolde. We go through the door, climb a narrow stairway and emerge into a small office. It’s almost empty, very tidy, and everything is completely white. An old white MacBook sits on one of two desks. I wonder if he only uses it because it’s white. He hangs his bag on a hook, dumps the book on the desk next to the laptop.

“I need servers”, says Tris, “to run all this stuff”, he says with a wave encompassing the entire empty office.

“OK”, I say, “I can do that”.

“Excellent. I think that covers it. I’ll be in touch with the details. Are you alright getting back?”

“Er, I guess so”, I say, mystified that he should bring me all this way for so little.

I head home through the muggy afternoon wondering if it’s all been a waste of time. By the time I’m home, he’s emailed me with login details for his cloud provider, asking me to commission a few servers. I log in and see it’s much as I expected, so I set things up and email him back.

That was a few months ago now. It turns out Claire is working for Tris now too, so we quite often meet for lunch.

We’re happy that Tris pays the bills, but we still have no idea what he does.

Using a Behringer DSP8024 for Room EQ

I have a Behringer DSP8024 Ultra-Curve Pro audio processor on the output of my computer.

Behringer DSP8024 audio processor

I picked up this relatively ancient unit for £50 about 15 years ago (it cost about $500 back in 2001!), and they can still be found on eBay, along with later models like the DEQ2496, and related hardware like Focusrite’s (discontinued) VRMBox. It provides many different audio processing functions, including:

  • Stereo 31-band ⅓-octave graphic equaliser
  • Real-time stereo 31-band spectrum analyser
  • Stereo 6-band parametric equaliser
  • Delay up to 2.5 sec
  • Noise gate
  • Automatic “feedback destroyer”
  • Accurate level meter with selectable scales
  • “Brick wall” limiter for output protection
  • Automatic room equalisation using the microphone input and internal noise generators

It’s this last feature that is the most useful, combining the analyser with the graphic equaliser. Room equalisation (EQ) can correct a lot of acoustic deficiencies in a room. The shape, composition, and contents of a room, and non-linearities in your speakers and audio interface, all contribute to how audio sounds within it. Ideally you want to minimise these effects so as to hear as true a signal as possible. It’s a good idea to apply corrective EQ after adding simple physical acoustic controls (e.g. absorber panels, diffusers, and bass traps, or just old duvets and cushions). Room EQ gets some criticism from audiophiles because it can be very hit and miss, and it can’t fix time-domain problems like strong reflections or reverberation, but it can work very well if you listen from a single location in your room (e.g. in front of your desk).

To measure the room equalisation accurately, you need a microphone with a flat (or at least well-documented) frequency response; I use a t.bone MM-1 for this.

The t.bone MM-1 measurement microphone

The equalisation process works like this, starting from flat EQ (no alteration):

  • Output pink noise from the unit through the speakers
  • Analyse what it sounds like through the microphone, from your usual listening position
  • Alter the equalisation towards a flat response
  • Iterate over this process until the overall response is as flat as possible

This process is loud and quite unpleasant, so leave the room or stick on some closed headphones while it’s busy! It takes a minute or so, and you can hear the change in characteristic of the noise playing through the speakers, and see the changes in EQ on the screen of the unit during the process. After it’s done you can save the EQ curves, and switch the EQ in and out to A/B the config. The difference is pretty noticeable, particularly at the low end where most room-related acoustic problems tend to be; overall it’s like having a major speaker upgrade! One benefit I really notice is when switching between my corrected speakers and a decent pair of monitoring headphones – the audio really doesn’t change in character; there’s no significant tonal shift between the two.
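
The DSP8024 does all of this with its own internal noise generator, but if you’re curious about the test signal itself, or want to experiment with the software route mentioned at the end of this post, SoX can generate pink noise for you; the file name, length, and sample rate here are arbitrary:

# Generate 30 seconds of pink noise at a modest level (requires SoX)
sox -n -r 48000 pink-noise.wav synth 30 pinknoise gain -12

# Play it through the monitors while measuring from your listening position
play pink-noise.wav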

Some people have noted problems with “digital noise” when using this unit, particularly at low volume levels. I suffered from this for a long time, but then realised what caused it and solved the problem. If you have a volume control before the processor, you will end up with a small signal going through the analogue-to-digital converters (ADCs), effectively throwing away much of their available resolution (roughly one bit for every 6 dB the signal is turned down), and you’ll get a lot of quantisation noise as a result. The best way to hear this deliberately is to turn the input level down and the output level up, then play something smooth and quiet. It will sound horrible, gritty, and noisy – you can really “hear the bits”. This isn’t a problem unique to this unit – any ADC provided with an insufficient signal will suffer the same problem.

You want to maximise the use of the ADC’s resolution by giving it a full-range signal to convert. So if you have an audio interface before it, make sure it’s turned up full, and if you have any software level control (e.g. macOS system volume), make sure that’s turned up full too, so you’re always sending a full-volume signal. This way the converters will always use their full 24-bit resolution, and the quantisation noise will be so small you won’t hear it (it’s impossible to remove completely). However, you still want to control your output level. There are two ways to do this: alter the level on your monitors (which can be inconvenient, as volume controls on active monitors are often on each speaker separately, and often hidden around the back), or use a passive volume control between the equaliser and the speakers. I use a Mackie Big Knob Passive for this.

A Mackie “Big Knob” passive volume controller.

Passive volume controls have no power supply (so no noise or extra cables), and can only turn a signal down, not up. They’re analogue, so there are no DACs or ADCs, just simple passive components. Ideally, when one is turned up full, it should be indistinguishable (acoustically speaking) from a length of cable.

Controlling level directly on the speakers (or on a separate amplifier if you have one) is possibly better than this approach, but usually less convenient. If you want to be able to run your speakers at full volume via the passive volume control, you need to have the monitors turned up full, and this often means you’ll get significant analogue noise (hiss) from the speakers when you’re listening at lower volume; however, that’s generally less unpleasant than quantisation noise, and it’s not the problem we’re addressing here.

All of this attention to correct signal levels throughout an audio signal path is part of a wider concept known as gain staging, and occurs in many other places in audio recording, processing, mixing, mastering, etc.

It is possible to do all this processing in software using systems like REW or RoomEq, or even to go further and emulate other listening environments, famous studios or speakers, but I quite like having all this externalised and independent of software, and it also means that it can be applied to external inputs too, if you’re playing an instrument directly through a mixer. The “big knob” also provides a very convenient single control for output level, along with other features such as mute/dim and speaker and input switching.