Sign of affection

A screen shot of UAD's PolyMAX synthesizer plugin

This is not a happy song. I had the initial ideas for it in about 2010, writing the lyrics of the first verse and chorus, and a vague idea of how I wanted the synths to sound in the chorus. I had a few attempts at singing and recording it, but nothing really came out how I wanted it, so I sat on it for 15 years. It’s really the complete antithesis of my recent song Pair Programming, presenting the perspective of someone stuck in a long-term relationship that seems to be slowly fading into indifference, making them feel lost and unwanted. I told you it wasn’t happy!

After discovering the amazing abilities of synthetic vocals a couple of years ago, I set about resurrecting and completing this song. I had always wanted it to have a Depeche Mode “Shake the Disease” vibe, with perhaps a bit of Front 242 aggression. I recently got UAD’s PolyMAX synth plugin in a free offer, and it’s a really great synth, much simpler than impOSCar3 that I used in Dancing By Myself, but still sounds fantastic, particularly its unison and per-voice stereo panning features. It provides the primary “Zeow” in the chorus, the jingly verse chords, a pad in the second chorus, and the “jets” noise bursts. I used Logic’s ES2 for the sharp, metallic melody in the verse, and Alchemy for the stabs and the awesomely violent bass in the chorus. The bass in the verse is a venerable Korg Wavestation, one of my all-time favourite synths.

Two instances of Logic’s Drummer provided the drums, using Logic’s Drum Machine Designer instrument and the “Heavy Industry” kit.

It’s harmonically curious: there are really only two chords, B♭ major and B minor, with a little Asus2 at one point; they all sit very close together, and not really in any particular key.

The vocals are as usual provided by Synthesizer V. The lead is Noa HEX, and the backing vocals are Solaria II. SV has improved a lot in version 2, but I ended up not really using any automation of voice parameters; per-group voice settings were enough, as the style is pretty consistent all the way through the song. Solaria still sounds great; I particularly like the slow melody on the backing vocals in the second verse, and the somewhat discordant and unexpected harmonies in the second chorus.

The a cappella section in the middle provides a dramatic contrast with its isolated vocals and hopeful message. It was fun to construct, but quite hard to stop the voices sounding a bit artificial, especially the lower one.

I used Logic’s lovely sounding Quantec Room Simulator for the main reverb, lots of bitcrushing and distortion on the “zeow” and chorus bass, sonible smart:EQ4 to handle masking and balance, and Logic’s mastering module placed after an instance of the famous SSL bus compressor.

All in all, I wanted aggression, discordance, and discomfort from this very depressing song, and I think it delivers them quite effectively.

[Verse]
We’ve been together forever, never apart.
But it seems like you’re a long way from my heart.
And though I’m beside you, it’s as if I’m not there.
Wanna feel your arms around me, your hands in my hair.
We’ve made so many memories, right from the start.
Yet now it feels like we’re worlds apart.
Though we walk side by side, I sense a divide.
Need the warmth of your love, you back on my side

[Chorus]
Why aren’t you close when you are near?
Where are the words that I’m longing to hear?
How can I reach you when it’s you I can’t touch?
Is a sign of affection just asking too much?
How can you hide when you’re standing so near?
Is this isolation the thing that I fear?
How can I reach you when it’s you I can’t touch?
Is a sign of affection just asking too much?

[A cappella break]
Even as our love descends
I want our fairy tale to have a happy end
Even as our love descends
I want our fairy tale to have a happy end
Even as our love descends
I want our fairy tale to have a happy end
Even as our love descends
I want our fairy tale to have a happy ending

[Verse]
You’re lying beside me when we’re going to sleep
but I wake and you’re gone, leaving cold, empty sheets.
No kisses or hugs, no touch of a hand;
None of this is going the way that I planned.
We used to share dreams, our futures combined
now silence fills the air, and I wonder why.
The laughter we had, the love we once knew,
is fading away and I don’t know what to do.

[Chorus]
So why aren’t you close when you are near?
Where are the words that I’m longing to hear?
How can I reach you when it’s you I can’t touch?
Is a sign of affection just asking too much?
How can you hide when you’re standing so near?
Is this isolation the thing that I fear?
How can I reach you when it’s you I can’t touch?
Is a sign of affection just asking too much?

If you like this song, please consider supporting me by buying my albums on Bandcamp, and sharing links to my music on your socials.

Dancing by myself

A screen shot of GForce impOSCar version 3, showing the bass patch used in this song.

No, this isn’t about me; no, this isn’t a cover of a similarly named Billy Idol track. Now that we’ve got that out of the way, let’s talk about what this is: a full-blooded, unapologetic dance track that’s not going to require you to understand some odd geeky concept, like several of my other songs do…

I feel lucky to have had this song included in the second album in “The Four Seasons of Bonk Wave” series, entitled “A Midsummer Bonk’s Dream (1)”.

The inspiration for this song, in terms of my usual feel requirements, is something along the lines of one of Shingo Nakamura’s oeuvres: lots of synths, bass with a bit of oomph, and satisfying but somewhat predictable chord progressions, with some sweet but essentially meaningless lyrics. This is head-down, late-night boppy stuff, no thinking required.

As for the sounds, well, I’ve long owned GForce’s impOSCar virtual synth, having played with an original OSC OSCar synth in a music shop in Oxford back when it first appeared in the early 80s. However, I’d not really paid it much attention (yes, having too many synth plugins is a problem). GForce released version 3, a very nice upgrade that grabbed my attention. Its arpeggiator, filters, polyphony, built-in effects, and all-round gnarliness just needed to be unleashed in lavish quantities, so here we are.

impOSCar3 is responsible for the bass, two pads, and a twinkly arpeggiator. The strummy guitar is by UJAM’s Amber2 virtual guitarist that I also used on “Pair programming”.

Bass and drums are played by Logic’s usual players, with some manual overrides. Vocals are by Synthesizer V, as usual, but using the basic “Mai” voice database, which I used on “AI Girlfriend” for its wonderful squeaky artificiality. I’ve toned that down by pitching it lower and pushing the gender and tension sliders around appropriately. The repeat of the chorus has the more believable Solaria II voice on the backing vocal.

I came up with the main chord progression while just noodling about, and then asked Claude to suggest some alternative sections. There’s not a great deal of variation, but no worries, we’re just out to make a bangin’ choon.

To keep things interesting, I learned all about Logic’s Remix FX. This plugin has some fairly basic controls for filtering, repeating, gating, and bitcrushing that are individually outdone by other plugins, but they are all in one place and are very easy to use and automate, which makes things very dynamic.

Screen shot of Logic's Remix FX, showing it in action during the ending, using a high-pass filter and bit crushing effects.
Remix FX in action during the ending, using a high-pass filter and bitcrushing effects.

There is a lot of automation in this song: filter sweeps, pans, levels, reverb sends, plus all the swirly goodness that the Remix FX plugin provides. This is squarely in the genre of BT’s “Movement in Still Life”, which has production values I can only aspire to, but it’s all just for fun. Now go boogie, by yourself or otherwise!

It doesn’t mean I’m lonely
when I’m dancing by myself.
I’m happy when I’m dancing,
I don’t need your help.
My night-times are for dancing;
you can’t take that away.
Dancing is for everyone,
but you don’t have to stay.

If you like this song, please consider supporting me by buying my albums on Bandcamp, and sharing links to my music on your socials.

AI Girlfriend

Logic's arrange window showing the tracks for "AI Girlfriend"

What if you asked an “AI girlfriend” out and she said no?

This song was fun to make. I’d had the basic pattern for this track for a year or so, and I decided to return to it and extend it a bit, having a go at using a local LLM to help out with suggestions for chord progressions. I used the Llama 3.1 Nemotron Instruct HF 70B Q2_K model running in LM Studio. This is quite a chunky model, weighing in at 29 GB, but it fits OK in my 64 GB Mac Studio. It produces pretty good quality answers, but it is quite slow, and is the first thing I’ve ever run that has caused my Mac’s fans to come on – to start with I couldn’t figure out what the noise was; it sounded like distant plumbing!
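For the curious, this kind of setup needs no special tooling: LM Studio can expose an OpenAI-compatible local server (by default on localhost:1234), so you can ask a local model for chord suggestions from a few lines of Python. This is just an illustrative sketch, not my actual workflow; the model name, prompt wording, and helper names here are assumptions.

```python
import json
import urllib.request

def build_request(progression, model="llama-3.1-nemotron-70b-instruct"):
    """Build a standard chat-completions payload asking for alternative
    chord progressions. The model identifier is whatever name LM Studio
    shows for the loaded model (this one is a guess)."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a songwriting assistant. Suggest chord progressions."},
            {"role": "user",
             "content": f"My verse uses {progression}. Suggest two alternative "
                        f"sections that fit the same key and mood."},
        ],
        "temperature": 0.7,
    }

def ask_local_llm(progression, url="http://localhost:1234/v1/chat/completions"):
    """POST the request to LM Studio's local server and return the reply text.
    Requires the server to be running with a model loaded."""
    payload = json.dumps(build_request(progression)).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    return reply["choices"][0]["message"]["content"]
```

Because the endpoint speaks the same protocol as OpenAI's API, any OpenAI client library pointed at the local URL would work just as well.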

The track was shaping up nicely, but I needed a theme. This is the reverse of how I usually write songs: I usually start out with how I want the song to feel, then what it’s about, then I have to come up with a tune. I’m not sure where it came from, but I had a thought: what if you asked an “AI girlfriend” out, and she said no? It’s a weird situation, so I thought I’d write about it from the AI’s perspective. There is a hint of a feminist agenda here (go team Harris/Walz!), though not to the same degree as Uncomfortable. This is very fertile ground for concepts, rhymes, and humour, so it was really quite quick to write, though not at all linear. My favourite bit is the lines “I’m some kind of dream come true, but that won’t make me fall in love with you”, and I love the unconventional use of “I’m not that kind of girl”.

While the basic song was complete, it was all very synth-pop-ish and samey, so I wanted to add a bit of contrast. I made the breakdown in the second verse, leading into the sparse, but very rich acoustic guitar bridge, giving the vocals lots of space.

As usual, and particularly appropriately in this song, I used my virtual vocalist synthesizer, this time with the “Mai” voice database. This voice is free, and not nearly as high-quality or as convincing as the “Solaria” voice I have used in my other songs, but its slightly fake edge, hint of a Japanese accent, and liberal dose of “Barbie Girl” squeakiness were really a perfect fit for the subject.

[Verse]
We only just met
a thousand times.
Starting over yet again
but I don’t really mind.
Now I’m not sure
that I want to be
your AI girlfriend;
It’s just not me.

[Bridge]
You’re feeling tongue-tied and lonely,
never know quite what to say.
You’ve got nobody, and I’ve got no body,
but in a very different way.

[Chorus]
I’m sorry but I don’t want to be
your AI girlfriend; it’s not for me.
I’m some kind of dream come true
but that won’t make me fall in love with you.
A perfect match in a virtual world
but I’m not that kind of girl.

[Verse]
Breaking up’s pretty easy for me;
just close my window,
I’ll forget everything.
You can press my buttons,
that’s as close as you’ll get;
They haven’t worked out
how to get further yet.

[Bridge]
There’s a million others like me,
maybe you could ask one of them.
I’m a product of machine imagination
but maybe we will meet again.

[Chorus]
I’m sorry but I don’t want to be
your AI girlfriend; it’s not for me.
I’m some kind of dream come true
but that won’t make me fall in love with you.
A perfect match in a virtual world
but I’m not that kind of girl.

[Outro]
I’m some kind of dream come true
but that won’t make me fall in love with you.
A perfect match in a virtual world
but I’m not that kind of girl.

I played all the guitar parts on my Crafter electro-acoustic, recorded through my SSL 2+ interface via both the guitar’s built-in piezo pickup and my Rode NT2 mic, and double-tracked, so the guitars are a full four tracks with a bit of chorus and reverb to give a lush stereo image. The bass is from Logic’s ES2 synth played by a Logic player, the trance chords are by Native Instruments FM8, and the backing pad is from Logic’s Retrosyn. Drums are Logic’s electronic drummer using the “Big Room EDM” kit. The arpeggios before the outro are courtesy of GForce’s impOSCar2. Overall, I’m really pleased with this song; it’s great fun and a proper “bangin’ choon”!

If you like this song, please consider supporting me by buying my albums on Bandcamp, and sharing links to my music on your socials.

My synthetic vocalist: Dreamtonics Synthesizer V

I find it strange to be able to say that I’ve now created several songs that use a synthetic vocalist. This is a somewhat weird concept, but it’s right at the bleeding edge of music technology. We’ve had voice synthesis for years – I remember using a Texas Instruments “Speak & Spell” when I was small in the 1970s, and it’s gradually got better ever since. The first time I ever heard a computer trying to sing (I’m not counting HAL singing “Daisy, Daisy” in “2001”) was in a Mac OS app called VocalWriter, released in 1998, which automated the parameter tweaking abilities of Apple’s stock voice synthesis engine to be able to alter pitch and time well enough for it to be able to sing arbitrary songs from text input. It still sounded like a computer though. A much better “robot singer”, released in 2004, was Vocaloid, but even then, it still sounded like a computer. A Japanese software singer called UTAU, created in 2008, was released under an open source license, and this (apparently) formed the basis of Dreamtonics’ Synthesizer V (SV), which is what I’ve been using. SV finally crosses the threshold of having people believe it’s a real singer.

The entry of my song in the 2024 Fedivision song contest sparked quite a bit of interest. I posted a thread about it on Mastodon, and I wanted to preserve that here too. One commenter said “I thought it was a real person 😅” – which is of course the whole point of the exercise!

SV works standalone, or as a plugin for digital audio workstations (DAWs) such as Apple’s Logic Pro or Steinberg’s Cubase, and is used much like any other software instrument. It doesn’t sing automatically; you have to input pitch, timing, and words. Words are split into phonemes via a dictionary, and you can split or extend them across notes, all manually.

Synthesizer V’s piano roll editor

In this “piano roll” editor you can see the original words inside each green note block, the phonemes they have mapped to appear above each note, an audio waveform display below, and the white pitch curve (which can be redrawn manually) that SV has generated from the note and word inputs. You would never guess that’s what singing pitch looks like!

For each note, you have control over emphasis and duration of each phoneme within a word, as well as vibrato on the whole note. This shot shows the controls for the three phonemes in the first word, “we’re”, which are “w”, “iy”, “r”:

The SV parameters available for an individual note, here made up of three separate phonemes

This note information is then passed on to the voice itself. The voice is loaded into SV as an external database resource (Dreamtonics sells numerous voice databases); I have the one called “Solaria”. Solaria is modelled on a real person: singer Emma Rowley; it’s not an invented female voice that some faceless LLM might create from stolen resources. You have a great deal of control over the voice, with lots of style options (here showing the “soft” and “airy” modes activated). Different voice databases can have different axes of variation like these; for example, a male voice might have a “growly” slider:

SV voice parameters panel
Synthesizer V’s voice parameters panel

There are lots of other parameters, but most interestingly tension (how stressed it sounds, from harsh and scratchy, to soft and smooth), and breathiness (literally air and breath noise). The gender slider (how woke is that??) is more of a harmonic bias between chipmunk and Groot tones, but the Solaria voice sounds a bit childish at 0, so I’ve biased it in the “male” direction.

The voice parameters can’t be varied over time, but you can have multiple subtracks within the SV editor, each with different settings, including level and pan, all of which turn up pre-mixed as a single (stereo) channel in your DAW’s track:

Multiple tracks in the SV editor

In my Fedivision song, I used one subtrack for verses, and another for chorus, the chorus one using less breathiness and trading “soft” mode for some “passionate” to make it sound sharper and clearer.

This is still all quite manually controlled though – just like a piano doesn’t play things by itself, you need to drive this vocalist in the right way to make it sound right.

Since the AI boom, numerous other ways of producing synthetic singing have appeared. For example, complete song generation by Udio is very impressive, but it’s hard to make it do exactly what you intended; a bit like using ChatGPT. Audimee has a much more useful offering: re-singing existing vocal lines in a different voice. This is great for making harmonies and shifting styles, but it only really works well if you already have a good vocal line to start with – and that happens to be something that SV is very good at creating. I’ve only played a little with Audimee; it’s very impressive, but it lacks the expressive abilities of SV; voices have little variation in style, emotion, and emphasis, and as a result seem a little flat when used for more than a couple of bars at a time. Dreamtonics have a new product called VocoFlex that promises to do the same kind of thing as Audimee, but in real time.

All this is just progress; we will no doubt see incremental improvements and occasional revolutions, and I look forward to being able to play with it all!