Laravel duplicate key error despite unique validation

In a Laravel API, it’s really common to create users with an endpoint like this in a user controller:

public function store(Request $request): UserResource|JsonResponse
{
    $validator = Validator::make(
        $request->all(),
        [
            'email' => 'required|string|max:255|email|unique:users',
            'name'  => 'required|string|max:255',
        ],
        [
            'email.unique' => 'That email address already has an account.',
        ]
    );
    if ($validator->fails()) {
        return response()->json(
            [
                'error'   => true,
                'message' => $validator->errors()->all(),
            ],
            Response::HTTP_UNPROCESSABLE_ENTITY
        );
    }
    $user = User::create(
        $request->only(
            [
                'email',
                'name',
            ]
        )
    );

    return new UserResource($user);
}

There’s a problem here though – that unique validation on the email field is subject to a race condition. If two requests are received very close together, both can pass validation, but then the second one will fail with a duplicate key error on the User::create call. While that sounds unlikely, it happens for real sometimes, and you’ll see something like this in your web logs when it does:

192.168.0.1 - - [06/Oct/2023:07:08:54 +0000] "POST /users/ HTTP/2.0" 201 1276 "-" "okhttp/4.9.2"
192.168.0.1 - - [06/Oct/2023:07:08:55 +0000] "POST /users/ HTTP/2.0" 500 17841 "-" "okhttp/4.9.2"

The 201 response is a successful creation, but it’s followed a second later by a 500 failure for the duplicate request. The Laravel log will then contain one of these:

[2023-10-06 07:08:55] staging.ERROR: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'user@example.com'
 for key 'users.users_email_unique' (Connection: mysql, SQL: insert into `users` (`email`, `name`) values (user@example.com, Test)

To deal with that we can trap the creation error, and return an error response that looks the same as the validation error:

try {
    $user = User::create(
        $request->only(
            [
                'email',
                'name',
            ]
        )
    );
} catch (QueryException $e) {
    //1062 is the MySQL code for duplicate key
    if ($e->errorInfo[1] !== 1062) {
        //Rethrow anything except a duplicate key error
        throw $e;
    }
    return response()->json(
        [
            'error'   => true,
            'message' => 'That email address already has an account.',
        ],
        Response::HTTP_UNPROCESSABLE_ENTITY
    );
}

This way, as far as the client is concerned, it was a straightforward validation failure with an appropriate 422 error code, and we don’t get spurious 500s clogging up our error logs.
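The driver-specific test in that catch block can be factored out into a small standalone helper. This is a hypothetical sketch (the function name is mine, not Laravel’s), and note that the 1062 code is MySQL-specific – PostgreSQL, for example, signals duplicates with SQLSTATE 23505 instead:

```php
<?php

// Hypothetical helper: PDO-style errorInfo arrays contain
// [SQLSTATE, driver-specific code, message].
function isDuplicateKeyError(array $errorInfo): bool
{
    // SQLSTATE 23000 covers integrity constraint violations generally;
    // driver code 1062 is MySQL's specific duplicate-key error.
    return ($errorInfo[0] ?? null) === '23000' && ($errorInfo[1] ?? null) === 1062;
}

var_dump(isDuplicateKeyError(['23000', 1062, "Duplicate entry 'user@example.com'"]));
var_dump(isDuplicateKeyError(['42S02', 1146, "Table doesn't exist"]));
```

If you’re on a recent Laravel version, it also has a dedicated `Illuminate\Database\UniqueConstraintViolationException` you could catch instead, which avoids hard-coding driver error codes at all.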

My Skiing Videos

I post skiing videos fairly often, and people keep asking me how I make them, since by most normal understanding of shooting video, they seem like magic.

Skiing fast in Combloux, France on a nice sunny day, filmed by my talented, invisible friend

Is there a drone that follows me? Do I have a friend that can ski backwards very fast while filming, staying out of shot, and not casting a shadow (a vampire?)? Nothing quite so exotic, but it’s still pretty clever.

The Camera

I use an Insta360 One X camera. As you might guess from its name, it shoots 360° video, that is, it captures a complete sphere, looking in all directions at once instead of just a rectangle pointing in one direction. It achieves this by using two cameras and two fisheye lenses, mounted back-to-back, each capturing slightly over a 180° field of view as two square images. These are then mapped into a 2:1 rectangular representation (which conveniently works with common image and video formats like JPEG and MPEG-4) where the poles are the top and bottom edges, which implies a lot of distortion, a bit like a Mercator map projection. This is a full spherical frame image in this format – the distortion is clear (my skis are not the size of surfboards), but the pixels on the left and right sides will match when wrapped around:

A spherical image mapped to a 2:1 rectangle

The combined resolution of these two sensors is 5.7K (i.e. more than 4K). However, bear in mind that all those megapixels have to cover a complete sphere, so you really do need a resolution this high if you’re going to render out videos that only look in one direction, and thus only use a small portion of the original view.
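To make the equirectangular mapping concrete, here’s a hypothetical sketch (my own illustration, not Insta360’s code) of projecting a view direction onto a 2:1 frame:

```php
<?php

// Hypothetical sketch: map a view direction (yaw and pitch, in radians)
// to pixel coordinates in a 2:1 equirectangular frame of a given width.
function sphereToEquirect(float $yaw, float $pitch, int $width): array
{
    $height = intdiv($width, 2);                   // 2:1 aspect ratio
    $x = ($yaw + M_PI) / (2 * M_PI) * $width;      // longitude -> horizontal
    $y = (M_PI / 2 - $pitch) / M_PI * $height;     // latitude -> vertical
    return [$x, $y];
}

// Looking straight ahead (yaw 0, pitch 0) lands in the centre of the frame:
print_r(sphereToEquirect(0.0, 0.0, 5760));
```

Pixels near the top and bottom edges (the poles) cover far less solid angle than those at the equator, which is exactly why the skis look like surfboards.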

The “slightly over 180°” field of view is important as it means that when the two video streams are combined, there is a discus-shaped region centred around the camera that is hidden from view, and this is used to make the camera itself (and the selfie stick it’s mounted on) effectively invisible, without having a visible discontinuity (join) between the two image sources.

It’s possible to see the join sometimes, especially when only one lens is exposed to the sun – modern optics are good, but there’s only so much they can do! In this weird perspective, the join is running roughly vertically a bit to the left of me, roughly perpendicular to the lens flare ray from the sun, which stops at the join. The sky is slightly lighter on the left side of the join, probably due to lens flare:

You might have noticed that using spherical images requires a new perspective (ha!) on taking pictures and planning shots; since you can rotate the view in all directions and zoom in and out, you can produce some very unusual perspectives that are not possible any other way. In the image above, the camera is effectively looking straight up and is zoomed out a long way, so the horizon around it appears as a circle.

One advantage of fisheye lenses is that they have effectively infinite depth of field, so everything is always in focus, and it doesn’t need autofocus. On the other hand, you’re not going to get any subtle bokeh effects!

Stills

You can extract still images from the video stream as either rectangular frames or full 360° images (as I did above). The camera has a higher-resolution stills mode; however, that’s not really possible to use at speed. The stills quality from video streams is remarkably good.

I’d be delighted if someone had taken this picture of me (no doubt after much setup) – that it’s a selfie is borderline miraculous!

I’m particularly pleased with this shot – the spray of snow was caused by the camera hitting the ground at speed, and the camera itself made a gap in the spray, which happened to line up with me!

Stabilisation

The camera has “FlowState” stabilisation, which uses accelerometers and gyroscopes to keep track of where the horizon is and keeps it flat and steady, regardless of what angle the camera is held at – when you’re filming in all directions at once, it doesn’t really matter which way the camera is “pointing”, in fact the whole concept of pointing it at something doesn’t really apply. This stabilisation is extremely good, keeping things nice and smooth even during really quite violent movement and vibration, but it’s also partly responsible for the viewpoint feeling disconnected from the subject.

Mounting

The traditional place to mount an action camera like your average GoPro for skiing is on top of your head, attached to a helmet. That’s fine as it points forwards, but it means you never star in your own movies, and the resulting footage tends to be pretty monotonous. Mounting a 360° camera there is going to be a bit dull too – it means you lose the view of the ground because your helmet will get in the way, but your friends might be nicely captured. To get a nice perspective on yourself, you need to shoot from a bit further away – selfie stick to the rescue!

The camera isn’t that heavy, but when it’s waggling about at the end of a 1.2m carbon fibre telescopic selfie stick in a fast-moving situation, it can be hard to control. When skiing you use your hands for holding your poles and partly for balance, and it’s actually painful to hold a stick and a pole at the same time. To counter this, I designed and 3D-printed a mount that clamps the stick to my ski pole fairly rigidly, and also offsets the angle a bit – otherwise there would be a risk that the “invisible selfie stick” feature would also make my ski pole disappear. The offset also gives you a bit of creative control, as you can easily move between front and side viewpoints with a twist of the wrist.

The camera mounted on my bracket – finger space is a bit tight with fat gloves on

This mount is a great improvement, and makes for much steadier shots and safer precarious camera positioning (like 2cm from the ground at 100km/h!). You still lose a bit of motion in that arm (watch how little my left arm moves compared to my right in the video above), and the balance of your poles changes a lot, but it’s quite workable.

My ski pole mounting bracket, top view

Snowboarding is a bit easier because you have your hands free, and the results can look straight out of a video game:

I also printed a bracket to mount it on my mountain bike’s down tube, and this taught me a couple of things: Vibration on bikes is much more violent than on skis, so the selfie stick rattles (the telescopic parts rattle inside each other) and wobbles a lot more; you go a lot closer to things on a bike, and it’s easier to hit the camera.

A new take on jousting

The end result still looks quite cool though.

Editing

Insta360 have an iPad & iOS app called… Insta360. The iPad version is great, but the phone one is quite usable too. You have a choice of aspect ratios, trimming and editing, adding soundtracks, colour enhancements, filters, and more. As well as nicely smoothed view angle and zoom factor manual edits, it has some very clever features for auto-tracking a subject. Remember that if you’re rendering out a normal-looking video, as I’ve done here, it’s not just that you can choose your view angles – you have to; clips of the ground whooshing by or the tops of some trees and a bunch of clouds are not that interesting!

The software has some enhancement filters, though they are not very controllable – you can’t for example just “make it brighter”, you have to pick a filter preset that happens to look like what you want, then twiddle an amount slider. The dynamic range is really good – notice that in all these clips you can still see the snow texture in direct sunlight, the highlights are not blown out, yet there is still detail in dark areas.

Extras

Insta360 have some other mounts that could be interesting for skiing including a back mount that puts the camera above and slightly behind you for a 3rd-person perspective – just don’t ski under any low branches, and be careful on chairlifts!

They also have a GPS Action Remote gadget, which in addition to controlling the camera remotely, injects a GPS data stream into the video recording. In the editor this data can be used to drive on-screen speedometers and maps. I’ve not tried that, but I’d love to, given that I’ve long had a thing about going quite fast. Here’s a clip of me doing that on the Mont Fort World Cup speed skiing track in Verbier, now sadly defunct due to the glacier’s retreat, which also shows how much video quality has improved since 2013 (it was shot on a Contour 1080p camera):

At the start you can see just how steep this is! I managed 147km/h (92mph) on this run.

Posting online

For the most part I share videos via Mastodon, Twitter, and Strava. All of these have similar restrictions/requirements for videos regarding size, video format, bit rate, duration, etc. I usually render full-resolution output from the Insta360 app at 100Mbit/sec, do any simple edits using LosslessCut, and then compress for final output using ffmpeg. The ffmpeg command I use is:

ffmpeg -i in.mp4 -vcodec libx264 -vf 'scale=1280:-1' -filter:a "volume=0.1" -pix_fmt yuv420p -strict -2 -vb 4900k -minrate 1024k -maxrate 4900k -bufsize 4900k out.mp4

This compresses to just under 5Mbit/sec using the H.264 codec (sadly, vastly superior H.265 video is not accepted by these sites yet), scales the video down to 1280px wide (720p), sets the pixel format that most of these sites want, and also includes a 90% audio volume reduction, as it’s just really noisy otherwise.
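If you want to sanity-check flags like these without a real clip, ffmpeg’s built-in synthetic sources are handy. This is a sketch assuming an ffmpeg build with libx264 and the standard lavfi generators (and it drops the legacy -strict -2 flag, which modern builds no longer need):

```shell
# Generate two seconds of synthetic 1080p video plus a test tone,
# compress with roughly the same flags, then confirm the output width.
ffmpeg -v error -y \
  -f lavfi -i testsrc2=size=1920x1080:rate=30:duration=2 \
  -f lavfi -i sine=frequency=440:duration=2 \
  -vcodec libx264 -vf 'scale=1280:-1' -filter:a "volume=0.1" \
  -pix_fmt yuv420p -vb 4900k -maxrate 4900k -bufsize 4900k out.mp4
ffprobe -v error -select_streams v:0 -show_entries stream=width -of csv=p=0 out.mp4
```

The scale=1280:-1 filter keeps the aspect ratio, so a 1920×1080 input comes out at 1280×720.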

YouTube has support for 360° videos; however, I find them mainly annoying to watch. Rendering out a conventional rectangular clip just works better for me, and I’m the director around here…

Gripes

There are a few annoyances with this camera:

  • The tiny OLED screen is completely unreadable in bright sunlight.
  • There are only two buttons to navigate through all its menus and options, but I never know which one to press.
  • It’s very picky about the SD cards it will work with, though that’s understandable.

The battery life is reasonable, even in the cold, and while I printed a lens cover, the lenses seem to be surprisingly resistant. It doesn’t mind getting covered in snow. For some unknown reason, the 3 threaded sections of the selfie stick use a left-hand thread, which is annoying as it means that the action of tightening the camera undoes the selfie stick.

The One X is a few years old now, and Insta360 have since released the newer One X2 and X3 models, which solve the screen and controls problems by adding a small touch screen that’s more readable. They also have a more robust, waterproof standard case, a bigger battery, a better image sensor, and improved audio. Should Insta360 like to give me one to test, I wouldn’t say no!

Git & SSH sitting in a tree…

I do work for a bunch of different clients, who variously use GitLab and GitHub. For many years I put up with the incessant problem of accidentally signing my commits as the wrong user. It’s just so easy to forget to set the right GPG key and email address when you just want to get on with a project. It’s not the end of the world, but it’s annoying.

Often the same thing goes for the SSH keys you use to push and pull from your git repo; it’s a bit too easy to be lazy and use the same SSH key across multiple clients, when a little isolation would be a good idea from a security perspective.

What if you could work some magic so that identities and GPG and SSH keys are set to the right values right from the start, for every project for each of your clients? Read on…

This whole setup reminds me very much of a post I wrote in 2009 (13 years ago!) on the “holy trinity” of DNS, TLS, and virtual host wildcards that allow you to dynamically host vast numbers of previously undefined sites without having to touch your web server config at all, a classic example of convention over configuration.

First of all let me introduce you to .gitconfig. This file usually sits in your home directory, so for me on macOS that’s /Users/marcus/.gitconfig. This file contains your global git defaults, and is an easy-to-read config file in an “ini” style (and no, those are not real values!):

[user]
    name = Marcus Bointon
    email = marcus@example.com
    signingkey = AC34DF5B434BB76
[github]
    user = Synchro
    token = f693251e52043a23fe5fbd955cff56ff
...

You’ll find lots of other sections in here, which you can read about in the git config docs, but we are only really interested in one option: includeIf. This directive conditionally includes another git config file into your settings, and one of the things you can make it conditional upon is the path to your project. This is useful. I typically set up my clients’ projects in the macOS default Sites folder within my home directory. Each client gets a folder, and each of their projects lives within that. This provides a tidy location to put a separate .gitconfig file that can be applied to all of their projects. It ends up like this:

~/.gitconfig
~/Sites/
    client1/
        .gitconfig
        project1/
        project2/
    client2/
        .gitconfig
        project1/
        project2/

Each .gitconfig file only needs to include the differences from the defaults that are set in the primary config file that lives in your home dir. To set up the GPG signing key and email for all of their projects, the file would contain this:

[user]
    email = remotedev1@client1.example.net
    signingkey = 434BB76AC34DF5B

Back in our primary file, we would add this conditional statement to automatically pull in this extra config whenever git is operating in this folder:

[includeIf "gitdir:~/Sites/client1/"]
    path = ~/Sites/client1/.gitconfig

And that’s it as far as GPG goes – commits will now be signed with the key and email address that are specific to this client, so when you set up your next project for them, you won’t have to do anything to set it up; it’ll Just Work™.
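You can prove to yourself that the conditional include is working with a throwaway repo. This sketch uses a temporary HOME so nothing real is touched, with the same example paths and addresses as above:

```shell
# Build a disposable home directory containing the two config files,
# then ask git which identity applies inside the client folder.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/Sites/client1/project1"

cat > "$HOME/.gitconfig" <<'EOF'
[user]
    email = marcus@example.com
[includeIf "gitdir:~/Sites/client1/"]
    path = ~/Sites/client1/.gitconfig
EOF

cat > "$HOME/Sites/client1/.gitconfig" <<'EOF'
[user]
    email = remotedev1@client1.example.net
EOF

git init -q "$HOME/Sites/client1/project1"
# Inside the client folder, the client-specific identity wins:
git -C "$HOME/Sites/client1/project1" config user.email
```

Outside ~/Sites/client1/, the same query would return the default address from the primary file.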

But what about SSH? The chances are that your client will have asked you for an SSH public key to add to their repo to provide you with sufficient access, but setting the GPG key does nothing towards selecting an SSH key for that purpose. You could previously do that using environment variables (which can be quite annoying), but fortunately git 2.10.0 added the core.sshCommand config option, which lets us specify the SSH command that git uses for file transfer operations, and that can include a -i parameter to select an SSH identity (and -C to use compression for a possible speed boost). Add this to your client’s .gitconfig file, using the path to your client-specific identity file (not the public key, which has a .pub suffix), like this:

[core]
    sshCommand = "ssh -i ~/.ssh/id_ed25519_client1 -F /dev/null"

Side note: I do hope you’re using Ed25519 keys for SSH; they’re newer, smaller, stronger, and faster than RSA keys, and they’ve been supported in OpenSSH since version 6.5 in 2014, so if your server doesn’t support them, you probably have bigger problems, or maybe you’re just running RHEL… I hope you’ve seen the post-quantum features of OpenSSH 9.0 too. The SSH client config file (usually found in ~/.ssh/config) is also really useful for twiddling per-directory or per-server configs that you can just set and forget.

Once you’ve done that, your commits will be signed using your client’s GPG key and pushed to their repo using their specific SSH key, and you won’t have to change anything when you start new projects for them, so long as you put them in the same folder.

“What about my IDE?”, I hear you ask. Not to worry, most IDEs use your system’s git and ssh configs, so all this should work just fine with PHPStorm, VSCode, etc.

While I’m sure some bright spark can make this even more dynamic to automate this across clients, I find new clients are rare, but projects turn over fast enough for this to be a real win for getting that first commit signed and pushed correctly, first time.

An open source mini-adventure

I’m using Spatie’s Media Library Pro in a project for dgen.net, and ran into a problem when I tried to use a TIFF-format image, and it failed to show a thumbnail of the image:

Drag and drop works, but no TIFF image preview.

So I set about tracking down why this image didn’t work, since the project this was being used for has lots of TIFF images. This turned into quite the can of worms, but all worked out beautifully in the end.

TIFF images are not supported by most web browsers as they are not a typical “web format”, but they are very common in print and archiving contexts. It doesn’t help that Safari is about the only browser that will display them at all, but here the aim is to display a thumbnail, not the actual image, and the thumbnail doesn’t have to use the same format.

Media Library Pro is a set of user interface widgets providing access to Spatie’s Laravel Media Library package, and so it’s dependent on that package to provide all the underlying file management and thumbnail generation, which is handled by a more general mechanism for creating “conversions” of underlying file types. This is especially useful for files that are not images – for example, it’s possible to create thumbnails for audio files using a package I wrote, but being able to do something similar for otherwise undisplayable image types is useful too.

It turns out that Media Library’s image support is handled by yet another Spatie package called (imaginatively) Image. So I started looking there, and found that it did not actually perform image processing operations itself either, but used yet another package, Glide, by The PHP League. In searching for info about using TIFF files with Glide, I found this issue, which told me that Glide already supported TIFF, so long as you were using the Imagick PHP extension (as opposed to the slower, less capable, but more common GD) as the image processing driver, which I already was. But as I’d seen, this didn’t seem to work. So I set up a simple test script to convert a JPEG image into TIFF using spatie/image (I needed it to convert in both directions), and found that it did indeed create a TIFF file. However, no app I tried could open it, each saying that it was not a TIFF-format file. The file command-line utility showed me why: the file was in fact a JPEG-format image saved with a .tiff extension:

file conversion.tiff
conversion.tiff: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 340x280, components 3

This was not helpful! So this was a bug in Glide. I tracked down the cause of that and submitted a PR to resolve it.
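Incidentally, the magic-byte sniffing that the file utility performs can be reproduced in PHP itself with the bundled fileinfo extension – a hypothetical helper, handy for a quick check like this one:

```php
<?php

// Hypothetical helper: report the real MIME type of image data from its
// magic bytes, regardless of what extension the file carries.
function realMimeType(string $data): string
{
    return (new finfo(FILEINFO_MIME_TYPE))->buffer($data);
}

// A JPEG/JFIF header is identified as JPEG even if the file was saved
// with a .tiff extension:
echo realMimeType("\xFF\xD8\xFF\xE0\x00\x10JFIF\x00"), "\n";
```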

One general problem with open source projects is that you never know when maintainers will get around to merging (or rejecting) PRs, or, having merged them, when they will be tagged for release. I know this because I have been guilty of it myself! Here I struck lucky – a maintainer merged it the same day, and also tagged it for release.

Now I had a different problem. This fix was several layers down in my stack of dependencies, and those projects didn’t know about this change in Glide, so if I wanted spatie/image to gain TIFF support, I needed to bump its dependencies to force it to use the new version. It also turned out that while Glide now had TIFF support, Image did not pass that support through to its consumers, so I needed to let it know that TIFF was also a supported format. All that happened in another PR. Spatie has a very good reputation for supporting its open source packages, not least because they constantly dogfood them, and have a great track record of merging PRs quickly and tagging them for release, and this was no exception – my PR was merged and released very quickly.

Now I was nearly there – but not quite! I discovered two almost identical problems in spatie/laravel-medialibrary and spatie/image: despite delegating image processing functions to their dependencies (i.e. having image say “I support whatever image formats glide supports”), they both had their own hard-coded lists of supported formats. I had already updated this in image in my previous PR, but now I needed to do the same thing (and something similar for tests) for Media Library. Cue PR number 3! True to form, Spatie merged and tagged this release quickly, and my chain was complete! I followed this up with another PR to port my changes to their later version 10 branch (supporting Laravel 9), most of which involved a switch to the Pest testing framework.

Finally, back in my app, I bumped my dependency version constraints (so my app picked up the latest versions of these packages), and then I got this:

The fruits of all that effort!

I observed that there’s more that could be done in these packages; in particular, knowledge of which image formats and MIME types are supported should live only at the lowest level – all higher dependencies should defer to the lower-level packages. This would mean there is less code to maintain in those packages, and new formats would automatically start working without PR chains like this. So if you have time on your hands… This is of course how a lot of open source software comes into being – there’s always another yak that wants shaving!

This might seem like a lot of effort for a very small feature, but this is how open source works, on its good days! Every package you use is an accumulation of effort by original authors, maintainers, contributors, and reporters, all of whom want to solve one problem or another, and share their efforts so that others can avoid having to solve the same problems all over again.

This particular chain is the longest nested set of PRs I’ve ever done, it was fun to do, was about the first thing I’ve ever “live tweeted”, it resulted in a solution to the specific problem I had, and that solution is now available to all. This is how open source is meant to work, but it’s not always this (remarkably!) smooth. Some package creators can’t be bothered to maintain their packages, others are on holiday, have just had a baby, or have died; raging flamewars erupt over the most trivial things; discrimination (racial, sexual, religious) is unfortunately common; bug reporters often fail to describe their problems well, or make excessive, unrealistic, entitled demands of maintainers. Sometimes this proves to be too much, resulting in great people stopping (or never starting) their participation in the open source ecosystem, which is a terrible shame.

The web would not exist without open source, and if you want to continue to reap the benefits of this beautiful thing we have collectively created, the best way is to support the maintainers. Whether it’s individual developers like me, package creators like Spatie and The PHP League, or open-source juggernauts like Laravel and SensioLabs (Symfony), we can all benefit from support. There are many ways you can provide it (not just financially): making developer time (or other resources) available, paying for products and services sold by companies that back open source projects, or paying maintainers, either directly through things like GitHub sponsorship and Patreon, or through broader programmes such as Tidelift that might be more acceptable to accounting departments. I’m tooting my own trumpet here (my blog!), but there are literally millions of open source developers out there, and if you’re reading this, you’re using software that we have all created together.