Andrew Huang on Transit 2 and the Future of Music Tech

Andrew Huang is one of the most widely followed music producers on social media.

With a YouTube audience of 2.37 million and counting, Andrew’s ability to blend sophisticated music education into watchable and inspiring content has put him at the center of a new generation of producers.

We first reached out to Andrew in 2022 with vague plans for a collaboration. Ideas started flowing almost immediately, and the concept for Transit 1 was born out of Andrew’s real-world process in the studio.

With a knack for identifying common roadblocks in modern production, Andrew’s perspective on music tech proved invaluable as the collaboration continued with Transit 2.

Andrew played a pivotal role in the development of the sequel, bringing even more fresh ideas and game-changing insights to the plugin.

We sat down with the Toronto-based producer to talk about Transit 2 and the future of music technology.

BA: As a creator covering music tech, you have access to the latest tools and technologies. What makes a plugin stand out to you in the sea of new releases?

Andrew Huang: At the end of the day it’s about whether it can do something that nothing else can - that might be about a particular sound, or removing friction in the workflow, or inspiring different kinds of ideas. I also love a fun, flashy UI - but it wears off quick if there's no substance behind it.

BA: The idea for Transit 1 came from a workflow blocker in your own creative process. How did firsthand experience inform your ideas for the sequel?

AH: There were always dreams of doing more extreme effects with Transit. The Loop module was something we started working on with V1 but it took much more fine-tuning than any other effect. I found so many glitch use cases for Transit and was excited that we could expand into that territory even further. The Mod Filter module was something I wanted to do because I've gotten very into filter banks in the past couple of years and I love that they can add motion that's not as easy to pinpoint as, say, a filter or phaser sweep. I also found it limiting to only be able to set one sequence length, and ended up thinking that in the same amount of space we could just put individual launch buttons for all the most common lengths.

Of course the team at Baby Audio brought so many amazing ideas to the table too, and I have to credit our users for dreaming up some of the things we incorporated into the new version. In particular I want to shout out my friend Recue who was the first I saw controlling Transit with an envelope follower!

BA: What new features in Transit 2 are you most excited to share with users?

AH: The Follower and Sidechain modes - having the macro react to either incoming audio or the audio on another track respectively - open up completely new ways to explore sound and I feel like they are shortcuts to finding an interesting and dynamic vibe. You simply get more interesting modulation by deriving it from audio, and of course it's directly musically related to that audio. I'm also particularly excited about the Warp, Loop, and Reverser modules, because they bring Transit beyond standard effects and into areas of messing with time and pitch and achieving things that would often require editing.

BA: The original Transit use case shines in genres that call for complicated effects transitions. How can producers use Transit 2 to go beyond buildups and beat drops?

AH: We're calling Transit 2 a "motion effects playground" because it's so fun and easy to get lost in playing with it and exploring the movement it will add to any track. I've been delighted to find it's one of those rare plugins that offers many viable possibilities in almost any situation, as opposed to the more common experience - that I have anyway - of finding one preset you're ok with after auditioning 30. We really feel Transit 2 now goes completely beyond transitions and can be an amazing tool for adding life to any part of a mix.

Here are some ideas for getting the most out of it:

1. Use the macro knob as a "sweet spot finder": While sweeping it from 0 to 100 is the classic transition use case, trying out different presets while exploring different static positions of the macro knob can quickly get you to an inspiring vibe.

2. Experiment with all the modes: The same effects chain can sound very different when comparing it in, say, LFO mode vs Follower. And the same goes even for different Detection settings when using Follower or Sidechain.

3. Try the Follower on everything: Many DAWs don't even include an envelope follower, and the ones that do will require you to map it to effect parameters individually. But it can sound incredibly cool to have effects respond so closely to your input rather than being static. So not only is Transit 2 making a less common effects processing approach more accessible, it lets you try tons of creative possibilities fast because the 800+ presets are already fine-tuned with parameters pre-mapped to the Follower.

4. Inverting the Follower: Setting the Range control to travel backwards (it turns pink to let you know) means that your effects chain will come into play when your input volume is low. This is fantastic for effects that are better when they're filling space rather than being added on top of your sound. For instance, inserting glitches in drum tracks using any combination of Loop, Warp, Delay, and Reverser. Or to keep your mix clean even with a heavily verbed out vocal, you could have your Reverb Length, Tone, or Mix turn down when the vocal is present, and then turn up during the spaces.

5. Use DAW automation to open up even more possibilities: Macro control is fast and powerful and Transit already lets you fine-tune the range and acceleration of the effects parameters, but these parameters are also exposed to your DAW if you want to have things moving in completely different ways from one another. It's even possible to automate macro-linked parameters (you can use DAW automation to change their start and end points while the macro itself is in motion) as well as mode parameters like Detection, Range, and LFO Rate!

6. Forget that Transit is a macro-controlled multi-effect: We have 28 modules and some of them are incredibly creative all on their own. Load up just a Warp to do a tape stop effect. Put a touch of Mod Filter or Analog Chorus on a part that needs just a little extra sauce. Our Distortion has multiple modes that all sound awesome - I particularly like using Fold on bass sounds. Treat it like you have a big bag of individual plugins. And by the way, there are module-level presets that you can load (and save your own!) using the three-line "burger menu" in each effect slot's title bar.

BA: You’ve said Baby Audio was a natural fit to work with on your first plugin. What made that collaboration so effective?

AH: I was always drawn to Baby Audio plugins for having fresh, forward-thinking designs that never sacrificed - and usually enhanced - workflow. We share very similar values in terms of wanting to innovate while keeping things super easy to use. (And also wanting to put randomizers on everything.)

BA: Many new producers look to your channel to learn essential skills for making music. What insights do you bring to plugin design from your role as an educator?

AH: Because of the range of topics and genres I cover and the size of my audience, I've gotten insights into what lots of people go through in their creative process, beginners and pros alike. So I've gotten to see at scale what things often come up as workflow obstacles and feature requests. I think there's also a parallel between the way I communicate - trying to find the best examples and the most concise explanations - and the way I think products should work - getting to the heart of what's important, making sure that even if something is complex it's not convoluted.

We’re calling Transit 2 a “motion effects playground” because it’s so fun and easy to get lost in playing with it and exploring the movement it will add to any track.
— Andrew Huang

BA: What new developments in music tech are most exciting to you at the moment?

AH: If you know me at all you know I'm a Eurorack nerd, and there are super creative things happening in that space all the time. Mzourack Pebbles comes to mind - it's sort of like an envelope follower, but it generates rhythmic triggers based on audio crossing a certain threshold, and there's even an option to have a second trigger generated when a signal crosses the other way - going from above the threshold to back below it. Joranalogue and Hainbach recently condensed the functions of an entire wall of nuclear test equipment into a single module. Also, as much as there are issues with AI, I see a lot of companies going about it in interesting and ethical ways. I'm not interested in pushing a button to have a song written for me automatically, but I think vocal transformation has a lot of untapped creative possibilities, and I'm having fun with this plugin Combobulator which can resynthesize your input with the sound design flavor of producers who have given permission for their music to be trained on!

BA: It seems like you’re just getting started as a creative partner with Baby Audio. Can you hint at any future developments with the company?

AH: We've been working on another plugin in conjunction with Transit 2 that I'm just as excited about - it's so different from anything else I've seen out there. I think it may have the potential to inspire people's productions, and save them time, even more than Transit!

Beat Drops and Buildups: How to Make 8 Powerful Audio Transition Effects

Beat drops and the buildups that precede them are how you create rising tension and satisfying payoffs in your songs.

They’re an essential part of modern arranging technique in almost all of today’s pop and electronic genres.

Even so, many producers find programming buildups to be tedious and time consuming, despite their powerful effect on listeners.

Luckily, it’s not difficult to create thrilling transitions with the right tools and basic knowledge of the most common transition types.

In this article I’ll break down 8 of the most impactful transition effects and how to create them.

Let’s get started.

What are audio transition effects?

Transition effects are production elements that embellish sections of a song that build tension and expectation before a beat drop, breakdown or new section.

Transition effects are most often used in electronic genres to signal to the audience that a peak in energy or intensity is approaching.

In addition to arrangement choices that build toward the beat drop, transition effects add new interest to repeating sections and develop over time as a buildup plays out.

While they often give producers the opportunity to show off their creative flair with special processing, transition effects help define boundaries between sections and give listeners cues about the structure of the song.

How to create transition effects for buildups

Traditionally, producers create transition effects by automating multiple effects and parameters one-by-one.

With many different automation lanes required across several effects, keeping track of each curve over the progression of a transition requires some effort and patience.

But there’s an easier way to create stunning transitions than automating them by hand. We teamed up with Andrew Huang to create the simplest and most powerful solution for creative transitions.

Our Transit 2 plugin lets you create change over time in up to seven separate effects all controlled by a single macro parameter. V2 comes loaded with 4 new motion modes and 10 new effects modules for any kind of transition you can imagine—and beyond.

Simply click and drag the macro control or automate it to create any of the transition effects described in the next section.

On to the techniques.

1. Filter sweeps

Filter sweeps are one of the original transition effects from the early era of electronic music.

The term refers to sweeping the cutoff frequency of a synthesizer's low-pass filter. While the effect is striking on a synthesizer, it works even better when an entire mix is routed through a synth-style resonant LPF.

When swept from high to low, the filter gradually removes more high frequencies from the signal.

The result is an indistinct, bassy character that often sounds like it’s coming from the next room.

Sweeping from low to high adds the high frequencies back, uncovering more detail as the filter opens up.

Producers use filter sweeps in both directions to enhance the drama of a transition with a predictable trajectory that emphasizes different frequency areas in the mix.

Creating this classic type of transition couldn’t be easier in Transit 2. Simply link the Filter 12 or 24 modules’ cutoff frequency to the motion control and listen as the filter moves in sync with the macro control.
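If you're curious what's happening under the hood, a closing sweep is just a low-pass filter whose cutoff follows the macro over time. Here's a minimal one-pole sketch in plain Python (a deliberate simplification, not Transit's actual resonant 12/24 dB filters):

```python
import math

def sweep_lowpass(x, sr, f_start, f_end):
    """One-pole low-pass filter whose cutoff sweeps linearly
    from f_start to f_end over the length of the signal."""
    y, state = [], 0.0
    n = len(x)
    for i, s in enumerate(x):
        fc = f_start + (f_end - f_start) * i / max(n - 1, 1)
        a = 1.0 - math.exp(-2.0 * math.pi * fc / sr)  # smoothing coefficient
        state += a * (s - state)
        y.append(state)
    return y

# A bright 4 kHz test tone, swept from wide open down to 100 Hz
sr = 44100
tone = [math.sin(2 * math.pi * 4000 * t / sr) for t in range(sr)]
swept = sweep_lowpass(tone, sr, 8000.0, 100.0)

start_level = max(abs(v) for v in swept[:2000])   # tone still bright here
end_level = max(abs(v) for v in swept[-2000:])    # mostly filtered out by now
```

A musical sweep would usually move the cutoff exponentially rather than linearly, since that's closer to how we hear pitch.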

2. Risers

Risers are simple tones that increase in pitch over the course of a transition.

Often created with a single synth oscillator, risers provide an unmistakable signal that tension in the song is…rising.

Since the pitch rises freely over the span of the transition, it doesn’t always align with the key of the song.

This results in extra tension as the tone increases in pitch and occasionally clashes with the song.

Transit 2’s OSC module lets you create instant risers with your choice of sine, saw, square or triangle waveforms. You can even beef them up with thick stereo enhancement using the spread control.

Producers often add ambience effects such as delays and reverbs to enhance the basic character of a riser, so try adding these after the OSC module for even more stylish risers.
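Conceptually, a riser is just an oscillator whose frequency glides upward. Accumulating phase sample by sample keeps the sweep smooth, as in this rough Python sketch (an illustration, not the OSC module's actual implementation):

```python
import math

def riser(sr, seconds, f_start, f_end):
    """Sawtooth riser: an oscillator whose pitch glides from f_start
    to f_end. Phase is accumulated per sample so the sweep stays smooth."""
    n = int(sr * seconds)
    phase, out = 0.0, []
    for i in range(n):
        f = f_start + (f_end - f_start) * i / max(n - 1, 1)
        phase = (phase + f / sr) % 1.0
        out.append(2.0 * phase - 1.0)  # saw wave in [-1, 1)
    return out

# Two-second riser climbing three octaves from A2 to A5
tone = riser(44100, 2.0, 110.0, 880.0)
```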

3. Washout

Speaking of delay and reverb, washouts are another popular transition created with effects.

Washouts are heavy ambience effects that increase in depth and intensity, obscuring the mix almost entirely in delay and reverb at their peak.

Washouts typically involve a gradual increase in the wet mix of the effects as well as rising intensity of other parameters such as delay feedback.

In some cases a washout ends with 100% wet mix, leaving only the reverb trails and decaying delay repeats for dramatic effect.

The new Retro Reverb in Transit 2 is perfect for extreme washout effects thanks to its crunchier, vintage character.

4. Tape stop

Tape stop is a classic effect that creates an abrupt drop off in pitch similar to turning off a tape machine while the transport is running.

In transitions, a tape stop often appears as the last element in a complex sequence to emphasize the silence before a beat drop.

The new Pitch+ Module in Transit 2 can create instant tape stops like the one seen in this preset.

Just make sure the pitch motion starts at a neutral position with no pitch change and rapidly decreases to a near-stop.

To make sure it feels authentic, try making the slope more severe by dragging the line inside the pitch parameter up and to the right.
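The underlying DSP idea is variable-rate playback: the read position through the audio advances more slowly as the "reel" spins down, dragging the pitch toward zero. A toy Python version (linear spin-down for clarity, though real tape decelerates closer to exponentially):

```python
import math

def tape_stop(x, stop_seconds, sr):
    """Variable-rate playback: the read position advances more slowly
    as the 'reel' spins down, dropping the pitch toward zero."""
    out, pos = [], 0.0
    n_ramp = int(stop_seconds * sr)
    for i in range(n_ramp):
        rate = 1.0 - i / n_ramp  # playback rate ramps 1.0 -> 0.0
        idx = int(pos)
        if idx >= len(x) - 1:
            break
        frac = pos - idx
        out.append(x[idx] * (1 - frac) + x[idx + 1] * frac)  # linear interpolation
        pos += rate
    return out

sr = 44100
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
stopped = tape_stop(tone, 0.5, sr)  # half-second spin-down
```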

5. Lo-fi

Sometimes the most effective transitions can come from intense distortion and lo-fi signal degradation.

During these transitions, the signal gets progressively more distorted until the entire mix is unrecognizable.

While basic distortion is usually a component of this technique, additional lo-fi elements can help add character to the effect.

Transit 2 comes with a host of modules for adding grit, grime and lo-fi goodness to your sound.

Use the Bitcrusher for an intense digital downsampling effect, or the Speaker module to make it sound like your mix is coming out of a tinny transistor radio.

6. Pumper

Sidechain ducking is one of the most popular techniques in electronic production.

In fact, Transit 2 comes with a full sidechain mode to let any linked motion parameters react to any source signal in your mix.

But you can also create a traditional four-on-the-floor sidechain ducking effect without any complex routing using the pumper effect.

Especially effective when used on the entire mix, pumper gives you a configurable sidechain ducking effect that can increase in intensity as your transition builds.

Try this module last in the chain after a washout for a pulsing rhythmic reverb wash.
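If you're curious what the ducking envelope itself looks like, here's a toy Python sketch of a four-on-the-floor pumping gain curve (hypothetical parameter names, not Transit's actual curve):

```python
import math

def pumper_gain(sr, bpm, seconds, depth=0.8):
    """Four-on-the-floor ducking envelope: the gain dips hard on every
    beat and recovers exponentially, like classic sidechain pumping."""
    beat_len = 60.0 / bpm
    gains = []
    for i in range(int(sr * seconds)):
        t = (i / sr) % beat_len                       # time since the last beat
        duck = depth * math.exp(-8.0 * t / beat_len)  # dip, then recover
        gains.append(1.0 - duck)
    return gains

g = pumper_gain(44100, 128, 2.0)  # two seconds of pumping at 128 BPM
```

Multiplying any signal by this curve, sample for sample, produces the classic pumping effect without any sidechain routing.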

7. Loop

Glitch effects can give your transitions the unique flavor of a scratched CD or corrupted audio file.

While there are plenty of great ways to create glitch effects, some can be as time-consuming as automating your transitions by hand.

The Loop module in Transit 2 is built for easy stutter and repeat effects that work perfectly in transitions.

Watch Andrew demonstrate how easy it is to set up compelling glitch effects using the Loop module:

For transitions, try reducing the loop length progressively throughout the buildup to transform your sound into a garbled digital tone at the 1/64th setting.

If the effect is too aggressive, you can soften the onset and release of the looped slices with the Smooth control.
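The basic stutter idea is easy to sketch: grab a beat-synced slice and repeat it for the duration of the effect. A toy Python version (no crossfading, so unlike the Smooth control it will click at slice boundaries):

```python
def stutter(x, sr, bpm, division):
    """Loop the first `division` of a beat for the whole input length,
    e.g. division=1/4 repeats a 16th note, 1/16 a 64th note."""
    loop_len = max(1, int(sr * (60.0 / bpm) * division))
    seg = x[:loop_len]
    out = []
    while len(out) < len(x):
        out.extend(seg)
    return out[:len(x)]

# A one-second ramp stands in for audio; stutter it at a 64th note
audio = [i / 44100 for i in range(44100)]
glitched = stutter(audio, 44100, 120, 1.0 / 16.0)
```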

8. Autopan

Another classic technique from the early era of effects, panning back and forth from left to right still sounds psychedelic in 2024.

Of course, modern autopanners use an LFO to generate their motion, so the possibilities go far beyond what an engineer could do with the physical pan control on a mixing console.

For example, Transit 2’s tempo-synced autopanner can increase speed by tempo divisions like the Loop module I described earlier.

At extreme settings, the rapid panning sweeps create a whirlwind effect that’s perfect for the peak of an intense buildup transition.
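For the curious, an equal-power autopanner with an accelerating LFO can be sketched in a few lines of Python (an illustration, not Transit's actual panner):

```python
import math

def autopan(x, sr, rate_start, rate_end):
    """Equal-power autopanner whose LFO rate accelerates across the
    signal. Returns (left, right) channel lists."""
    n = len(x)
    phase, left, right = 0.0, [], []
    for i, s in enumerate(x):
        rate = rate_start + (rate_end - rate_start) * i / max(n - 1, 1)
        phase += rate / sr                               # accumulate LFO phase
        pan = 0.5 + 0.5 * math.sin(2 * math.pi * phase)  # 0 = hard left, 1 = hard right
        left.append(s * math.cos(pan * math.pi / 2))     # equal-power pan law
        right.append(s * math.sin(pan * math.pi / 2))
    return left, right

# One second of full-scale signal, panning speeding up from 1 Hz to 16 Hz
l, r = autopan([1.0] * 44100, 44100, 1.0, 16.0)
```

The equal-power law keeps the combined energy of the two channels constant, so the sweep moves without an audible volume dip in the middle.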

The ultimate transition designer

Getting transitions right can be tricky, but it doesn’t have to be difficult or daunting.

If you find it confusing to create them with multiple plugins and automation lanes, Transit 2 provides a complete workflow with deep creativity.

The next time you need to create any of the transition effects described above, try Transit 2 to see how you can get your mix moving.

Vocal Synthesizers: 4 Ways to Create Synthetic Voices and How to Use Them

Vocal synthesizers blur the line between natural and artificial like no other type of effect.

Since the early days of sound synthesis, engineers have looked for convincing methods to emulate the sound of a human voice.

And while some approaches were more successful than others, vocal synthesis technologies have become a part of the language of modern music production.

But what exactly is vocal synthesis? What are the key types, and where are they most useful in a mix?

In this article I’ll break down the most common approaches to vocal synthesis and suggest three creative ways to use them in your workflow.

Let’s get started.

What is vocal synthesis?

Vocal synthesis refers to a group of technologies that simulate the sound of human voices or augment vocal recordings with synthetic properties.

Vocal synthesis began in the 1970s with the introduction of analog vocoders. Short for ‘voice encoders,’ these devices became popular in early electronic music production.

Since then, vocal synthesis has developed with new techniques and technologies, sparking trends in vocal production across musical genres.

Here are the four main types of vocal synthesizer in common use today:

Vocoder

As I mentioned above, analog vocoders were the first widely used vocal synthesis tools.

They work by imparting the characteristics of a speaker’s voice onto a synthesized tone.

A vocal recording or microphone input provides the modulator signal and a synthesizer oscillator provides the carrier signal. The modulator’s level in each frequency band is measured and imposed on the corresponding band of the carrier through a bank of filters, creating a distinctive ‘talking synth’ effect.

There are plenty of great vintage vocoder plugins out there if you’re looking for the classic version of this effect, including the free TAL Vocoder.
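To make the carrier/modulator idea concrete, here's a crude frame-based channel vocoder in Python with NumPy. This is a toy illustration, not how TAL Vocoder or any analog unit actually works; a real implementation would use overlapping windows to avoid clicks at frame boundaries:

```python
import numpy as np

def channel_vocoder(modulator, carrier, sr, n_bands=16, frame=1024):
    """Crude channel vocoder: measure the modulator's energy in each
    frequency band, frame by frame, and impose it on the carrier's
    matching bands."""
    n = min(len(modulator), len(carrier))
    out = np.zeros(n)
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)
    for start in range(0, n - frame, frame):
        m = np.fft.rfft(modulator[start:start + frame])
        c = np.fft.rfft(carrier[start:start + frame])
        shaped = np.zeros_like(c)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            env = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2))         # modulator band level
            norm = np.sqrt(np.mean(np.abs(c[lo:hi]) ** 2)) + 1e-9
            shaped[lo:hi] = c[lo:hi] * (env / norm)
        out[start:start + frame] = np.fft.irfft(shaped, frame)
    return out

sr = 44100
t = np.arange(sr) / sr
carrier = np.sign(np.sin(2 * np.pi * 110 * t))          # buzzy square-wave carrier
voiced = channel_vocoder(np.sin(2 * np.pi * 220 * t), carrier, sr)
muted = channel_vocoder(np.zeros(sr), carrier, sr)      # silent modulator -> silence
```

Note the defining behavior: when the modulator is silent, the carrier is silenced too, which is why a vocoder only "speaks" when you do.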

Formant synthesis

As synthesis and acoustics matured, researchers identified unique characteristics of human speech known as formants.

Formants are fixed resonant frequencies that arise from the shape of the singer’s vocal tract. They’re unique to every person and play a significant role in how we differentiate between voices.

Whether using physical modeling or frequency modulation, formant synthesis recreates known formant patterns such as those of vowel sounds to simulate speech.

The Yamaha FS1R rackmount synth was one of the most advanced hardware units to include formant synthesis.

Vocal Resynthesis

Vocal resynthesis is another method of synthesizing vocal timbres that starts with an existing vocal recording.

With this technique, the frequency information in the original audio is analyzed, deconstructed and then remapped onto a new signal.

Unlike a traditional vocoder, the analysis phase extracts information about the harmonic partials that are present in the original vocal.

This gives vocal resynthesis tools even greater control of the resulting timbre of the sound, which is vital for creating a realistic voice.

Our Humanoid plugin uses a complex resynthesis process to dramatically alter the timbral qualities of the incoming vocal.

In addition to hard tuning and pitch correction, Humanoid morphs your voice into a synth waveform as you turn up the Transform dial.

Watch Alex demonstrate how Humanoid combines powerful tuning, formant shifting and resynthesis to create a uniquely synthetic vocal texture.

AI Voice Generators

The latest developments in vocal manipulation use neural networks to map the characteristics of one voice onto another.

By training on many recordings of one speaker or vocalist, AI systems can identify the unique qualities present in their voice.

AI vocal transformation is not a traditional form of synthesis, but a completely new process that uses machine learning in place of signal processing techniques.

This method has some unique advantages, as the neural network can identify subtle qualities in the target signal that can’t be easily modeled in a virtual signal path.

In fact, our TAIP plugin uses this approach to create an accurate simulation of a vintage tape machine.

In the case of AI vocals, the realistic accuracy of the simulated voice is the subject of some controversy, as it raises concerns about the potential for misuse.

Even so, AI vocals have exciting creative applications that artists like Holly Herndon have embraced to explore the boundaries of art and technology.

How to use vocal synthesizers in your mix

With the background out of the way, here are some creative techniques that put vocal synthesis to work.

To take advantage of any of these ideas, you’ll need a capable vocal synthesis plugin in your arsenal.

We built Humanoid to give producers access to multiple vocal manipulation and resynthesis tools with a simple and creative workflow.

If you’re looking for a single plugin that can transform your vocal sound into something new and synthetic, Humanoid is a powerful option for all the techniques I’ll describe in the next section.

On to the tips.

1. Classic robot voices

Robotic voices kicked off listeners’ fascination with synthetic vocals in the early era of analog synthesis.

The uncanny sound is still just as captivating when used effectively.

Humanoid can evoke the familiar sound of retro vocoding, but it can also push the technique into the future.

With the Transform control turned up, the original vocal texture gets forcibly mapped onto a synthesis waveform. Try using a simple square or saw wave to emulate the primitive texture of early analog vocoders.

Or try playing rich polyphonic chords in MIDI mode for a modern, detailed vocoder effect.

2. Augmented natural voices

Modern vocal synthesis often serves to enhance a natural vocal, or exaggerate its qualities for a stylish effect.

While Humanoid isn’t exactly subtle, it can work with the incoming vocal texture to create a distinctive synthetic layer that adds interest to an arrangement.

Use Humanoid’s mix controls to balance the blend of tuning, resynthesis and dry vocal to create a robotic hybrid that stands out in the mix.

3. Vocals as an instrument

One interesting aspect of extreme vocal synthesis is that it can completely replace the incoming vocal with a synthetic tone.

This is useful when you want the phrasing and expression of a vocal line without an identifiable voice quality.

With the Transform dial fully clockwise, Humanoid eliminates any trace of the original voice, leaving only the resynthesized synth tone controlled by the vocal input.

Try improvising melodies with Transform at 100% to come up with unique hooks you might not discover on your instrument.

Synthetic voice machine

The singer is often the focal point of a song’s drama and emotional intensity.

Vocal synthesizers give you the ability to alter their sonic persona in ways that can be surprising and uncanny.

If you’re looking for a way to add a unique dimension to your vocal tracks, get back to your DAW and try out the techniques covered in the article.

What is a Dynamic Resonance Suppressor? How to Fix Harsh Frequencies In Your Mix

Dynamic resonance suppressors are some of the newest mixing plugins to gain popularity among producers and engineers.

It’s a unique method of controlling frequency content that’s different from traditional tools like EQ or multiband compression.

But what really is dynamic resonance suppression, and how can you use it to get better results in your mix?

In this article, I’ll break down the basics, explain why it works and give my top suggestions for where to use it in your production.

Let’s get started.

What is dynamic resonance suppression?

A dynamic resonance suppressor is an audio processor that selectively reduces unwanted resonant frequencies without altering the character or tonal balance of the original material.

It works by analyzing incoming audio, identifying resonances, tracking them over time and attenuating them with adaptive filters that change their shape and intensity to match the material.

Dynamic resonance suppression offers an alternative to EQ or other tone-shaping processes to help reduce the effect of energy buildup at problematic frequencies.

Unlike EQ or compression, it works on the sound’s spectral content rather than the time-domain signal, allowing much greater flexibility to control resonant peaks.

What are resonances in mixing?

Resonances are buildups of energy that can appear in your tracks and lead to negative effects in your mix.

All complex sonic timbres contain concentrations of energy at different frequencies. When you hear a cello play a rich low tone, you’ll hear the fundamental frequency of the note as well as resonant harmonics at various intervals above it.

When these overtones occur at integer multiples of the fundamental (2:1, 3:1, 4:1, etc.), they’re said to be harmonic. If you listen carefully to a sustained tone, you can usually pick out the overtones that are present in a complex sound.

But sounds in the real world always contain a balance of harmonic partials and inharmonic partials.

These are the noisy parts of the sound, such as the grating friction of the cellist’s bow on the strings.

Depending on the balance and intensity of the different partials, some overtones may stick out audibly in a signal or recording.
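The integer-multiple test is easy to illustrate numerically (the fundamental and partial values below are made up for the example):

```python
fundamental = 65.4  # roughly a cello's low C (C2)

# Harmonic partials sit at integer multiples of the fundamental
harmonics = [fundamental * k for k in range(1, 6)]

# An inharmonic partial (e.g. bow or body noise) breaks the pattern
inharmonic = 171.3

ratios = [f / fundamental for f in harmonics + [inharmonic]]
is_harmonic = [abs(r - round(r)) < 0.01 for r in ratios]
```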

On top of that, any time you record a sound with a microphone, you capture the source as well as the acoustic reflections in the environment where you recorded it.

The room itself has a big influence on the resonant frequencies in a recording. Untreated acoustic spaces can contribute to bad resonances as reflections bounce off hard surfaces and combine with the direct sound.

Why do you need to control resonances in a mix?

Excess energy in some parts of the frequency spectrum can work against a sound’s role in the mix.

Think of the boomy bass of a recording made in a small room, or the grating sibilance of a mic that was a poor match for the singer.

Compounded across dozens of tracks, problematic resonances stack up and lead to negative effects like harshness, muddiness and lack of clarity.

High end recording studios spend a small fortune to minimize the effect of bad room resonance with acoustic treatment.

But for most of us, some amount of problematic frequency buildup is inevitable. Whether it’s a sample, a recording or even a synth sound, you may need to deal with bad resonances first to get the results you need in your mix.

Why use dynamic resonance suppression to fix harshness

In the analog era, engineers and producers had only a few tools to control the frequency balance of their tracks.

Parametric EQ and early de-essers were about as sophisticated as it got.

Even so, these tried and true methods can still work to decrease the effect of unwanted resonances.

Today’s digital EQ plugins come with steep notch filters that can almost completely attenuate a narrow range of frequencies.

One straightforward approach is to locate the problematic resonance by sweeping a narrow boost until the offending frequency increases in volume. You can then change the filter type to notch and remove a great deal of the signal content in the selected frequency range.
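The notch half of that move can be expressed in a few lines of code. Here's a standard biquad notch (coefficients in the well-known RBJ audio-EQ-cookbook form) in plain Python, removing a 1 kHz tone while leaving 4 kHz content essentially untouched:

```python
import math

def notch_coeffs(fc, q, sr):
    """Biquad notch coefficients in the RBJ audio-EQ-cookbook form."""
    w0 = 2 * math.pi * fc / sr
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (1 / a0, -2 * math.cos(w0) / a0, 1 / a0)
    a = (-2 * math.cos(w0) / a0, (1 - alpha) / a0)
    return b, a

def biquad(x, b, a):
    """Direct-form-I biquad filter."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        v = b[0] * s + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x1, x2, y1, y2 = s, x1, v, y1
        y.append(v)
    return y

sr = 44100
b, a = notch_coeffs(1000.0, 8.0, sr)  # narrow notch centered on 1 kHz
notched = biquad([math.sin(2 * math.pi * 1000 * t / sr) for t in range(sr)], b, a)
passed = biquad([math.sin(2 * math.pi * 4000 * t / sr) for t in range(sr)], b, a)
```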

This practice is common, but it’s easy to take it too far and destroy the natural character of the sound with too many notched frequencies.

You also risk introducing other consequences of excessive EQ such as phase shift.

Similarly, de-essers or multiband compressors can sometimes help, but these approaches come with limitations of their own.

Dynamic resonance suppressors offer a third option that delivers even more control with fewer of the downsides associated with traditional methods.

Since the technology works by attenuating energy in the frequency domain, there’s no gain reduction function acting on the signal and causing changes to material you want to preserve.

And because the analysis can differentiate between resonances and tonal components of the signal, the filters can adapt and change to respond only to problems while leaving the rest intact.
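As a rough illustration of that idea, here's a toy spectral suppressor in Python with NumPy: it estimates a spectral envelope with a running median and only attenuates bins that poke well above it. This is a drastic simplification of what a plugin like Smooth Operator does, not its actual algorithm:

```python
import numpy as np

def suppress_resonances(frame, threshold_db=6.0, reduce_db=6.0):
    """Toy spectral resonance suppressor for one audio frame: attenuate
    FFT bins that poke more than threshold_db above a median-smoothed
    estimate of their own spectrum."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag = np.abs(spec)
    k = 15  # half-width of the median window, in bins
    padded = np.pad(mag, k, mode="edge")
    envelope = np.array([np.median(padded[i:i + 2 * k + 1]) for i in range(len(mag))])
    excess_db = 20 * np.log10((mag + 1e-12) / (envelope + 1e-12))
    gain = np.where(excess_db > threshold_db, 10 ** (-reduce_db / 20), 1.0)
    return np.fft.irfft(spec * gain, len(frame))

# A strong sine "resonance" buried in quiet broadband noise
rng = np.random.default_rng(0)
n = 2048
t = np.arange(n) / 44100.0
frame = np.sin(2 * np.pi * (200 * 44100 / n) * t) + 0.02 * rng.standard_normal(n)

smoothed = suppress_resonances(frame)
before = np.abs(np.fft.rfft(frame * np.hanning(n)))[200]  # resonant bin, untouched
after = np.abs(np.fft.rfft(smoothed))[200]                # same bin, suppressed
```

Because only the bins that exceed the envelope are touched, the broadband "body" of the sound passes through unchanged, which is the key difference from a static notch.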

3 Ways to Use Smooth Operator for a Cleaner Mix

If you want to get access to the benefits of dynamic resonance suppression, you’ll need a plugin that can do the job right.

We built Smooth Operator to make this complex form of processing easy to use and effective.

It combines several variations on spectral resonance manipulation to help you deal with resonances that add unpleasant qualities to your mix.

Try it free to see how this approach can unlock smooth top end and cleaner tracks.

Now, on to the tips:

1. Soften harsh vocals

Modern vocals call for heavy compression and bright, airy top end.

But multiple stages of dynamics and EQ can bring out harshness, even when they’re necessary for the vocal to sit in the mix.

In these cases, it’s not always possible to target only the harsh frequencies with EQ. If you attenuate too much in the critical zone between 2-5 kHz, you may lose presence and intelligibility.

Alternatively, if you clamp down too hard with a de-esser, you risk turning ‘s’ syllables into unnatural ‘f’ sounds, ruining the singer’s delivery.

Try using Smooth Operator’s frequency display to home in on the range where spiky resonances stick out the most.

2. Tame distorted guitar

Distorted guitars need a lot of bite to cut through a loud mix in aggressive genres.

But it’s easy to get too much of a good thing. With multiple doubled takes, you may find resonances begin to stack up as a result of the mic and speaker combination you recorded with.

While these can sometimes help define the guitar in the mix, they may also add a fatiguing edge to the overall sound.

Add Smooth Operator to your guitar bus and use narrow selections to tackle midrange frequencies and upper harmonics that feel grating on the ear.

3. Chill out biting hi-hats

The hi-hat groove plays a big role in the 808-style beats found across modern genres.

Hi-hats need to be powerful and punchy to drive the rhythmic feel, and loud in the mix to cut through.

But static samples can start to feel harsh as they repeat over and over again.

When you need to retain loudness and punch but the hits feel too brittle, try Smooth Operator to decrease high frequency intensity around the attack.

Smooth it over

Dynamic resonance suppression is likely to become a new secret weapon in many producers’ toolboxes.

It’s a viable solution when other methods don’t work to manage tricky resonant frequencies.

Now that you have an idea of how it works and where to use it, get back to your DAW and try balancing your mix with Smooth Operator.

How to Make Synthwave: 9 Tips for 80s-Inspired Tracks

Synthwave is a musical throwback to one of the most iconic decades in pop culture.

But it’s also a thriving modern genre with an active community of fans.

If you’ve ever heard its distinctive blend of glossy synths, retro drum machines and 80s pastiche, you might have wondered how to get started producing synthwave tracks.

Luckily, it’s possible to get an authentic 80s flavor in your productions if you know a few key techniques.

Of course, you’ll need more than just tips and tricks to master the genre, but this article will give you some basic ideas to get started.

Let’s dive in.

What is synthwave?

Synthwave is a genre of electronic music that draws inspiration from stylistic trends of the 1980s in music, cinema, games and design. It celebrates the unique aesthetics of the era with a nostalgic take on its core elements.

Synthwave is a genre of electronic music that draws inspiration from stylistic trends of the 1980s in music, cinema, games and design.

Synthwave developed in the mid 2000s among French house producers and grew in popularity with the rise of 80s-inspired media like the 2011 film Drive and Netflix’s Stranger Things.

Sometimes referred to as Outrun or retrowave, synthwave tracks often feature analog synths, lo-fi samples, gated reverb and other staples of 1980s electronic production.

Musically, synthwave leans toward the cinematic side of the era’s musical landscape, with a special reverence for the early electronic film scores of Vangelis, Tangerine Dream and John Carpenter.

9 production tips for synthwave music

To produce synthwave tracks, you’ll need a convincing palette of sounds and a keen ear for chords, melodies and beats that evoke the 1980s.

As daunting as it seems, getting the sounds themselves isn’t the hard part. Synthwave has gained enough popularity that there are many plugins available to help you get an authentic 80s-inspired sound.

If you’re looking to add synthwave sounds to your plugin folder, consider the Baby Audio Synthwave bundle for a greatest hits package of popular tools for the genre.

Now, here are 9 production elements you can try to capture the genre’s distinctive sound.

1. Repeating sequenced plucks with delay

Advanced hardware sequencers first started to appear in the early 80s.

Unlike the analog units of the original modular synthesis era, these sequencers were meant to program musical sequences of notes for other electronic instruments to play.

While it was still difficult to build full-scale arrangements with these tools, composers seized on their ability to generate hypnotic sequences that could repeat indefinitely as they manipulated their synths.

Combined with tempo-synced delay, these sequences could create an immersive atmosphere that was perfect for the gritty genre films of the era. Tangerine Dream’s Love on a Real Train is a perfect example of this trick in action.

Synthwave producers picked up on the technique, with many tracks including hypnotic looped sequences enhanced by delay.

Try experimenting with short sequences of synth plucks with dotted-eighth note delay to find patterns that can drive your track’s rhythmic pulse.
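If your delay plugin doesn't offer tempo sync, the dotted-eighth delay time is easy to compute by hand. A minimal sketch (the function name is my own):

```python
def delay_time_ms(bpm, note_fraction=3/16):
    """Delay time in milliseconds for a note value at a given tempo.

    A dotted eighth note is 3/16 of a whole note (an eighth note
    plus half its value)."""
    beat_ms = 60_000 / bpm        # one quarter note in milliseconds
    whole_note_ms = beat_ms * 4
    return whole_note_ms * note_fraction

print(delay_time_ms(120))  # 375.0 ms -- the classic synthwave pluck delay
```

Swap in `1/8` or `1/4` for `note_fraction` to sync other note values.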

2. Lush analog pads

Another key feature of the 80s in music production was the introduction of affordable polyphonic synthesizers.

These made it possible for producers to create rich sustaining chords to serve as the backdrop for other elements of the track.

Synthwave relies heavily on lush and dreamy analog pads to set the stage for its late-night mood.

80s pads have an instantly recognizable sound, so consider an accurate vintage emulation if you don’t have a classic hardware unit in your studio.

BA-1 is based on one of the most popular beginner synths of the 80s. With the digital version, we added full polyphony and a second oscillator to turn it into a powerful vintage pad machine.

Check out Sunglasses Kid’s demo clip to hear BA-1’s pad capabilities in an 80s-style track.

3. Vintage-style drum machines

Drum machines began their takeover of pop music in the 1980s. But even the most high-tech early machines were challenging to program and not nearly as sophisticated as modern plugins.

To keep it authentic to the era, most synthwave tracks feature simple drum patterns that wouldn’t sound out of place coming from a vintage LinnDrum.

Speaking of the LinnDrum, it’s best to choose vintage-style samples with that characteristic compression and lo-fi feel of early digital hardware.

Look for a good set of drum samples from an 80s-era drum machine, and enhance them with modern techniques like parallel processing if you need more weight and punch.

4. Arpeggiated octave bass

Many of the most popular 80s synths featured onboard arpeggiators.

Producers quickly realized that holding down a single bass note with the arpeggiator set to a one-octave range would create a bouncing rhythmic octave effect.

You can hear it all over classic recordings from the 80s, like the verse bassline on Blue Monday by New Order.

BA-1’s built-in arpeggiator makes it easy to recreate this behavior without any additional MIDI effects.

Simply turn on the arpeggiator, set the octave range to +1 and hold down a note for instant 80s octave bass.
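The resulting note pattern is simple enough to sketch in code. This toy function (my own naming, not part of BA-1) just alternates a held MIDI note across octaves, the way a +1 octave arpeggiator does:

```python
def octave_arp(root_midi, steps, octave_range=1):
    """Alternate a held note across octaves, like an 80s arpeggiator
    set to a +1 octave range on a single held bass note."""
    octaves = [12 * o for o in range(octave_range + 1)]  # 0, +12, ...
    return [root_midi + octaves[i % len(octaves)] for i in range(steps)]

# A low A (MIDI 33) bouncing between octaves for a bar of eighth notes:
print(octave_arp(33, 8))  # [33, 45, 33, 45, 33, 45, 33, 45]
```

Set `octave_range=2` for the wider three-octave bounce heard on some tracks.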

5. Vocoder vocals

Vocals can pose a challenge for electronic genres that evoke a specific mood with their sound palette.

The wrong vocalist can easily break listeners’ sense of immersion in the production.

One solution is to use a heavily processed vocal like the one found in Kavinsky’s classic early synthwave track Nightcall.

Humanoid is a great choice for classic vocoder-style sounds that can completely reshape a singer’s natural voice.

Its polyphonic MIDI mode means you can easily turn your vocals into a rich synthetic chord element in your arrangement.

Try this approach if you’re struggling to get vocals to sit right in your synthwave track.

7. Gated reverb

Gated reverb is another stylistic hallmark of the 1980s that’s been revived in synthwave production.

It adds characteristic punch and upper midrange density to percussion by abruptly cutting off the reverb decay.
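Conceptually, the effect is just a gate applied to the reverb tail. Here's a minimal NumPy sketch with a fake decaying tail; the function name and timing values are arbitrary, not taken from any plugin:

```python
import numpy as np

def gate_reverb_tail(tail, sr, hold_ms=120, fade_ms=10):
    """Abruptly cut a reverb tail after `hold_ms`, with a tiny fade
    to avoid a click -- the essence of the gated reverb trick."""
    out = tail.copy()
    hold = int(sr * hold_ms / 1000)
    fade = int(sr * fade_ms / 1000)
    out[hold + fade:] = 0.0
    if fade > 0:
        out[hold:hold + fade] *= np.linspace(1.0, 0.0, fade)  # quick fade-out
    return out

sr = 44_100
t = np.linspace(0, 1.0, sr, endpoint=False)
tail = np.random.randn(sr) * np.exp(-3 * t)  # stand-in exponentially decaying tail
gated = gate_reverb_tail(tail, sr)
```

The abrupt truncation is what gives gated snares their dense, punchy burst of ambience.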

Even if you already have a nice punchy snare, a little gated reverb can add even more 80s flavor.

Crystalline comes with a built-in gate function to turn any setting into a classic gated reverb.

It also comes with plenty of presets for 80s-style gated snares, toms and percussion.

8. Sidechain compression

One modern element in synthwave production is the extensive use of ducking sidechain compression.

It’s usually implemented so that sustained sounds like pads and bass move out of the way for the kick drum pulse.

You can create sidechain ducking with any compressor in your plugin folder, but both BA-1 and Crystalline come with a built-in sidechain function so that no setup is required.

Simply turn up the sidechain amount to get an internal 4/4 ducking effect linked to your track’s tempo.
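The pumping effect boils down to a gain curve that dips on each beat and recovers before the next one. A rough sketch of such a tempo-synced curve (this is an illustration, not how any particular plugin implements it):

```python
import numpy as np

def ducking_gain(n_samples, sr, bpm=120, depth=0.8, recover_frac=0.6):
    """Tempo-synced 4/4 ducking curve: gain dips on every beat and
    ramps back up, imitating kick-triggered sidechain compression."""
    beat_len = int(sr * 60 / bpm)
    t = (np.arange(n_samples) % beat_len) / beat_len  # position within beat, 0..1
    ramp = np.clip(t / recover_frac, 0.0, 1.0)        # linear recovery after the hit
    return 1.0 - depth * (1.0 - ramp)

sr = 44_100
gain = ducking_gain(sr * 2, sr, bpm=120)
# multiply a pad or bass signal by `gain` for the classic pumping effect
```

Raising `depth` deepens the dip; shrinking `recover_frac` makes the recovery snappier.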

9. Lo-fi

If there’s one defining aspect of the 80s in music technology, it’s the limited fidelity of the equipment available at the time.

While pro recording gear was capable of impressive results, the final listening medium could be vinyl, cassette, analog radio or quirky formats like 8-track and VHS audio.

These all brought various flavors of analog saturation and warbling wow and flutter, especially as they decayed over the decades.

Most synthwave productions include some way of recreating these lo-fi qualities.

While there are plenty of plugin options out there for lo-fi, Super VHS offers a unique approach that emulates the specific qualities of VHS audio.

Check out how Super VHS can enhance everything from percussion to synths in your mix.

Under neon lights

The best synthwave tracks create an irresistible nostalgia for a unique period in pop culture.

But many of the production techniques to make it are modern processes in a DAW-based workflow.

If you’ve made it through this article, you’ll have a great start for creating your own synthwave songs.

Ambient Pads: 5 Synth Ideas With Physical Modeling

Ambient pads are one of the best ways to add texture and atmosphere to an arrangement.

But they’re especially powerful in genres where they can become the main event.

In ambient, new age and experimental music, pads can be as creative as any other element of the song.

But how can you create truly great pads that hold listeners’ interest without fading into the background?

In this article, I’ll break down five techniques that take advantage of a unique form of synthesis called physical modeling to create compelling ambient pads for any genre.

Let’s get started.

What are pads in music production?

Pads are rich sustained chords or drones that provide a sonic backdrop for the other action in the arrangement. They’re a fundamental type of patch in sound design with synthesizers.

Pads are rich sustained chords or drones that provide a sonic backdrop for the other action in the arrangement.

Pad sounds are typically polyphonic, allowing complex chords to fade in and out with the harmonic progression.

They became more common in music with the development of affordable polyphonic synths in the late 70s and 80s.

By now, synth pads are found in almost every genre of music, but they play a special role in ambient styles that call for dense, textural walls of sound.

While any polyphonic synth can produce a pad sound with the right settings, some synthesizer types are known for the strength of their pads.

That said, producers often enhance the ambient qualities of a synth pad by adding stylized reverb, delay and other creative effects.

A great pad sound is usually the result of a well programmed synth patch enhanced by tasteful effects and good arranging.

How to build a pad sound on any synth

The basics of creating pads with a typical synthesizer are straightforward. Turn your VCA envelope attack time up for a long, slow fade in.

Then turn up the release time so the notes gradually fade away when you take your hands off the keys.
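The slow-attack, long-release shape described above is easy to visualize in code. A minimal NumPy sketch (the function name and timings are my own):

```python
import numpy as np

def pad_envelope(sr, attack_s=2.0, sustain_s=4.0, release_s=3.0):
    """Slow attack, held sustain, long release -- the basic pad shape."""
    attack = np.linspace(0.0, 1.0, int(sr * attack_s), endpoint=False)
    sustain = np.ones(int(sr * sustain_s))
    release = np.linspace(1.0, 0.0, int(sr * release_s))
    return np.concatenate([attack, sustain, release])

env = pad_envelope(1000)  # low rate just to inspect the shape
# multiply any oscillator output by `env` for a gradual fade in and out
```

On a hardware-style synth, this corresponds to simply pushing the VCA envelope's attack and release sliders up.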

Aside from those basic qualities, sound design for pads is limited only by your imagination.

But as any experienced ambient producer knows, stopping there will leave you with a flat, uninteresting sound that can obscure other elements in the mix.

The most engaging pads contain subtle movement and evolving details that unfold over time and react to your playing in musical ways.

Ambient pads: 6 creative tips

With the background out of the way, I’ll outline a few tips to help create pads that do more than just fill space.

But first, it’s worth noting that you need to choose the right type of synth for the job.

Your choice of synthesis style plays a big role in the end result as you build your patches.

For example, analog-style synths can create breathtaking ambient textures. But the pads built with these synths often have a vintage character reminiscent of the early polysynth era.

If you’re after this flavor, consider an 80s-style synth like BA-1. It models well-known analog hardware from the 80s and adds capabilities like polyphony to turn it into a great platform for pads.

But if you want to push the envelope into modern sound design territory, there’s much more to explore than recreations of vintage classics.

In fact, some of the most exciting ambient pads are being made with physical modeling. In contrast with analog synthesis, physical modeling relies on mathematical approximations of real world instruments.

That means they can create organic, hybrid textures that blur the line between synthesis and reality. It’s the reason why our physical modeling synth, Atoms, has been popular among ambient producers.

On to the creative tips.

1. Go wild with LFOs

LFOs are one of the principal ways to create change over time in synthesis.

They’re repeating periodic waveforms that modulate synth parameters at slower-than-audible speeds.

Simple LFOs can add motion to core aspects of your synth sound, but using them creatively can unlock more than just filter sweeps or choppy tremolo.

To create truly evolving soundscapes, try experimenting with multiple LFO waves controlling different parameters at different rates.

It’s especially effective to contrast slow evolving motion with faster periodic cycles between parameters.

Use a wide variety of waveforms and invert the direction of some waves relative to the others to see how they interact with each other.

You can even modulate LFOs with other LFOs to create change in the rate of change!
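The layered approach above can be sketched in a few lines. This is a generic illustration of stacked LFOs and an LFO-modulated LFO rate, not any synth's actual modulation engine:

```python
import numpy as np

def lfo(t, rate_hz, shape="sine", invert=False):
    """A basic LFO: a slow periodic wave in the range -1..1."""
    phase = 2 * np.pi * rate_hz * t
    wave = np.sin(phase) if shape == "sine" else np.sign(np.sin(phase))
    return -wave if invert else wave

t = np.linspace(0, 10, 1000)                      # ten seconds of modulation
slow = lfo(t, 0.05)                               # slow drift, e.g. for filter cutoff
fast = lfo(t, 2.0, shape="square", invert=True)   # faster chop, inverted

# An LFO modulating another LFO's rate: change in the rate of change.
wobble_rate = 1.0 + 0.5 * lfo(t, 0.1)
wobble = np.sin(2 * np.pi * np.cumsum(wobble_rate) * (t[1] - t[0]))
```

Each output array would be scaled and summed into a different synth parameter, which is exactly what a modulation matrix manages for you.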

If all that sounds like a lot to manage, you’re not wrong. Many advanced synths use a system called a modulation matrix to manage sources and destinations in a list.

It’s a convenient system once you wrap your head around it, but it takes some getting used to if you haven’t tried it before.

That’s why we created a simplified LFO system to make it easy to set your sound in motion with Atoms.

Atoms’ intuitive motion controls make it easy to orchestrate all 6 of its LFO-controllable parameters with multiple waveforms, rates and behaviors.

2. Add noise

Noise is often a good thing when it comes to sound design.

After all, many acoustic instruments create sound by exciting a resonator with a noisy source.

Atoms works in much the same way, exciting its network of interconnected masses and springs with a virtual bow.

While noise can be overwhelming when heard on its own, a little bit can animate the action of LFOs and add density to frequencies affected by filters.

Of course, noise is also a great parameter to modulate to create an ambient wash that ebbs and flows.

Try adding some low level noise to your pads to add texture and bring out resonances during filter sweeps.

With Atoms, you can increase the noisy qualities of the sound by turning up the Force and Overtones parameters.

These increase the pressure and stickiness of the virtual bow as it drags across the vibrating strings, creating a unique profile of noisy harmonics.

3. Introduce randomness

While LFOs can create compelling cycles of change, they always follow a repeating pattern.

Randomness is a great way to ensure that your pads develop differently from chord to chord.

A little goes a long way, but even a touch of random modulation gives you unique variations with every new note.

Atoms’ random LFO settings are based on another physical model that drifts back and forth through the parameter’s range with unpredictable twists and turns.

Subtle randomness with the Drift mode works great when applied to the overall pitch for warped-vinyl style vibrato.
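A drift-style modulator is essentially a bounded random walk. Here's a simple Python sketch of the idea, my own naming and not Atoms' actual implementation:

```python
import random

def drift(steps, step_size=0.02, lo=-1.0, hi=1.0, seed=None):
    """A bounded random walk: wanders smoothly through a parameter's
    range with unpredictable turns, unlike a repeating LFO."""
    rng = random.Random(seed)
    value, out = 0.0, []
    for _ in range(steps):
        value += rng.uniform(-step_size, step_size)
        value = max(lo, min(hi, value))  # stay inside the parameter's range
        out.append(value)
    return out

pitch_mod = drift(200, step_size=0.01)  # a subtle warped-vinyl vibrato curve
```

Small step sizes give gentle wander; larger ones give the unpredictable twists and turns.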

Speaking of which…

4. Go lo-fi

It’s easy to reach for the glossiest, most futuristic sounds when you’re looking to create an engaging pad.

But it’s worth exploring the other end of the spectrum to create pads that truly stand out.

Sometimes a hazy, warbling lo-fi pad can work magic for an arrangement that lacks organic feel.

Atoms comes equipped with four unique profiles that model different arrangements of its masses and springs.

The lo-fi mode intentionally reduces the number of nodes in the system to create a grainier, earthier sound that produces great lo-fi pads.

If you’re using a different synth, a lo-fi plugin can help add a bit of dusty vintage vibe to your sound.

Here’s an example of how Super VHS brings a unique lo-fi flavor to any sound in your mix.

5. Extend the ambience with effects

As I mentioned above, pads are a blank canvas for processing with effects plugins.

Time-based effects like delay and reverb can extend the duration of notes and obscure the onset and decay of each chord in a sequence.

When used well, they create a powerful blending effect that makes your pads feel more expansive and enveloping.

But it’s possible to take it too far and create a sludgy mess that lacks definition.

Make sure to consider whether your pad really needs additional ambience before slathering it in reverb.

But if your sound really does call for a rich textural reverb, it helps to match it with the right type of reverb plugin.

Many producers prefer algorithmic reverb for cavernous, larger-than-life spaces since it doesn’t strictly model a real physical space.

Crystalline is an algorithmic reverb that’s capable of epic decay times and lush modulation. Here’s how it sounds adding stylish decay to a synth pad:

Creative launch pads

Ambient pads are used across music genres for a reason.

They provide a perfect textural canvas that immerses listeners in the song’s atmosphere.

Whether you choose physical modeling, virtual analog or any other style of synthesis, the creative tips in this article will help you build better pad sounds.

Now that you have some ideas to inspire you, get back to your DAW and keep designing sounds.

What is a Phase Vocoder? How Pitch Correction Works in Music Production

Phase vocoder is the signal processing technique that enables some of the most common operations in modern vocal production.

But few producers know the term, or understand how it affects the results they get from their plugins.

And if you’ve ever used pitch correction to polish a vocal performance or get the classic hard-tuning effect, you’ve used this technology.

In this article, I’ll explain the basics of phase vocoder processing and why it matters for getting the most from your vocal tracks.

Let’s get started.

What is a phase vocoder?

Phase vocoder is a signal processing algorithm used for manipulating pitch information in audio signals.

It was primarily developed for processing speech, so it’s often considered within the broader category of voice encoding technologies, or vocoders. But it has little in common with synth-based vocoders that work by modulating a carrier signal with a modulator signal.

In fact, technologies that use the phase vocoder technique often focus on pitch shifting and time stretching rather than voice modulation.

Phase vocoder is a signal processing algorithm used for manipulating pitch information in audio signals.

Even so, phase vocoder plays a role in some of the most common vocal effects in music production, including pitch correction and hard tuning.

How does phase vocoding work?

The phase vocoder algorithm breaks the incoming signal down into chunks and performs an analysis called the Fourier transform on each successive block.

The Fourier transform takes simple information in the audio waveform and uses it to extract a profile of the frequency content it contains.

This frequency information includes the position of the signal’s harmonics and their intensity relative to one another.

Since every sound is made up of many harmonic partials at different frequencies, this information can be used to guess the musical note that’s being played or sung during that analysis frame.

But once the analysis has taken place, the phase vocoder algorithm can also manipulate the frequency information before reconstructing the signal.

That means it can shift the pitch by moving the location of the harmonic partials while preserving their relationship to one another.

As a result, the algorithm can change the pitch without impacting the timbral qualities of the voice, since these are determined by the unique distribution of harmonics in the singer’s voice.
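The steps above can be sketched for a single analysis frame. This toy function moves each FFT bin's energy to a new frequency and resynthesizes; a real phase vocoder would also track phase across overlapping frames, and the function name is my own:

```python
import numpy as np

def shift_frame(frame, ratio):
    """One phase-vocoder-style step on a single frame: take the FFT,
    move each bin's energy to a new frequency, then resynthesize.
    (Real implementations also track phase across overlapping frames.)"""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    shifted = np.zeros_like(spectrum)
    for k, bin_val in enumerate(spectrum):
        j = int(round(k * ratio))  # relocate the harmonic content
        if j < len(shifted):
            shifted[j] += bin_val
    return np.fft.irfft(shifted, len(frame))

sr = 44_100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440 * t)   # an A4 test tone
up_a_fifth = shift_frame(frame, 1.5)  # energy moves toward ~660 Hz
```

Because every partial is scaled by the same ratio, the harmonics keep their relationship to one another while the perceived pitch moves.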

What does the Phase part mean?

If you’ve been following along closely, you may have spotted a potential problem with this method—phase coherence.

Since each incoming frame of audio must be analyzed and processed separately, the changes affect each portion of the signal differently.

Without correcting for this problem, you’d hear audible changes in sound from frame to frame, including distracting skips in phase.

The issue is made worse by the fact that successive analysis frames need to overlap each other for us to perceive a continuous sound.

Modern implementations of the phase vocoder use advanced techniques to ensure that the phase relationships between frames remain coherent.

Phase vocoder and pitch correction

With the basics out of the way, here’s how the phase vocoder technique works in pitch correction.

As I mentioned above, the algorithm can determine the pitch class of a sung note by extracting the positions of its harmonics and their intensity.

The sounds we perceive as having a musical pitch are usually periodic waveforms with a distribution of harmonics that follow an identifiable pattern.

In these types of sounds, you’ll find a strong fundamental frequency followed by harmonics that occur at integer multiples of the fundamental.

For example, imagine a bowed string instrument playing a rich, sustaining note at concert pitch, or A4 = 440 Hz.

The sound will contain a strong fundamental frequency at 440 Hz, followed by a 2:1 harmonic at 880 Hz, a 3:1 harmonic at 1320 Hz, a 4:1 harmonic at 1760 Hz, and so on.

This pattern is known as the harmonic series and it’s common to pitched sounds.
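In code, the pattern is just integer multiples of the fundamental (the function name here is my own):

```python
def harmonic_series(fundamental_hz, n=5):
    """The first n partials of a pitched sound: integer multiples
    of the fundamental frequency."""
    return [fundamental_hz * k for k in range(1, n + 1)]

print(harmonic_series(440))  # [440, 880, 1320, 1760, 2200]
```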

Knowing this, the analysis algorithm can use the pattern to identify the most likely fundamental frequency for an incoming signal.

When you set a pitch correction plugin to recognize the notes in a musical scale, it simply moves the harmonic content of the signal from the detected pitch toward the closest pitch in the set scale.

Parameters such as retune speed, or Humanoid’s quantize control, shape how the detected pitch gets remapped to the scale.

Fast retune speeds are responsible for the characteristic “stair step” effect that made hard-tuning famous.
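The snapping step itself is straightforward: convert the detected frequency to a note number, find the nearest note in the scale, and convert back. A minimal sketch (my own naming, not any plugin's code), using the C major scale:

```python
import math

A4 = 440.0

def snap_to_scale(freq_hz, scale_pcs=frozenset({0, 2, 4, 5, 7, 9, 11})):
    """Move a detected frequency to the nearest note in a scale --
    the core snapping step of hard-tune style pitch correction.
    Default scale: the pitch classes of C major."""
    midi = 69 + 12 * math.log2(freq_hz / A4)  # frequency -> MIDI note number
    candidates = [n for n in range(int(midi) - 2, int(midi) + 3)
                  if n % 12 in scale_pcs]
    target = min(candidates, key=lambda n: abs(n - midi))
    return A4 * 2 ** ((target - 69) / 12)     # MIDI note number -> frequency

print(round(snap_to_scale(450.0), 1))  # 440.0 -- a sharp A4 pulled back to A
```

An instant jump to the target frequency gives the stair-step effect; a slower glide toward it is what a retune speed control adds.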

But Humanoid takes it further than just retune speed to let you dial in the exact behavior of the tuning algorithm.

Watch our breakdown of how the quantize function lets you get the results you need from any incoming vocal with Humanoid:

Phase vocoder plugins

Some of the most common tools used in vocal production rely on phase vocoder tech for their key functions.

I’m talking about vocal pitch correction tools that allow you to adjust the tuning of a vocal performance to fix mistakes and improve the sound.

In most cases, these plugins are made to minimize their impact on the original vocal timbre. That’s why maintaining phase coherence is so important.

But there’s a lot more that can be done with FFT analysis and the phase vocoder technique if you’re willing to experiment.

Humanoid is our over-the-top pitch corrector and vocal transformer that uses phase vocoder technology in an unconventional way.

It can hard tune your vocals to a scale using a process similar to the method I’ve described above. But it can also perform other operations on the harmonic content of the vocal signal to manipulate the sound.

For example, even harmonic sounds with partials that follow the pattern of the harmonic series contain a little variation.

Whether it’s a slight deviation from exact integer ratios, or the presence of some inharmonic partials, each human voice is unique.

But what happens when you force all the harmonics into a perfect ratio and remove any noisy partials from the signal? The result is a uniquely synthetic vocal texture you can intensify by turning up the Robotify knob in Humanoid’s pitch section.

Check out the official tutorial to hear Humanoid in action and see the creative possibilities of this approach.

A new phase of creative vocals

Phase vocoder technology enabled a huge shift in what was possible for producers to do with recorded audio.

As common as it seems today, fixing the pitch of a vocal performance has long been a holy grail for music technology researchers.

But creative musicians and producers always find the artistic applications of any new technology.

Humanoid is just one tool for exploring the possibilities that come with phase vocoders and pitch correction.

Now that you have an idea of how they work, get back to your DAW and manipulate some vocals.

Formant Shifting: 4 Creative Techniques To Alter Your Voice

Formant shifting is one of the most transformative effects you can apply to the human voice.

The reason why has to do with the physical qualities of the vocal tract itself.

But what exactly is formant shifting? How does it work and how can you use it creatively to reshape your vocal tracks?

In this article, I’ll explain what formants are, how to alter them with vocal plugins and the creative applications for formant shifting in a mix.

Let’s get started.

What are vocal formants?

Formants are fixed resonant frequencies that occur in human speech and singing. They play a big role in shaping the overall timbre of a person’s voice.

Formants are known scientifically as “local maxima” for frequency intensity. That just means concentrations of energy that build up at specific frequencies.

The frequencies where vocal formants occur are determined by the shape of the singer’s vocal tract.

While their exact resonant frequencies are unique to each person, patterns can be found in voices we recognize as having a low or high timbre.

Though we might not realize it directly, the quality of the vocal formants gives our brains a lot of clues about the speaker.

You’ll understand why when you hear the result of manipulating them with a formant shifter!

What is formant shifting?

Formant shifting is a signal processing technique that gives the illusion of changing the resonant frequencies of a voice without changing the pitch of the sung material.

Formant shifting is a signal processing technique that gives the illusion of changing the resonant frequencies of a voice without changing the pitch of the sung material.

It works by moving the frequencies of the resonant peaks up or down to simulate changes in the shape of the vocal tract.

Since formants are fixed resonant frequencies, they add emphasis in the same areas regardless of the sung or spoken pitch. It’s the reason why moving them up or down can be done without changing the melodic features of the vocal.
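One way to picture the mechanism: re-read the smoothed spectral envelope at scaled frequencies, so its resonant peaks land higher or lower. This NumPy sketch is a toy illustration of that idea on a single synthetic "formant" peak, not how Humanoid is implemented:

```python
import numpy as np

def shift_formants(envelope_mag, ratio):
    """Shift a smoothed spectral envelope by a ratio: the envelope is
    re-read at scaled frequencies, moving resonant peaks up or down.
    (A full implementation applies this to the envelope only, leaving
    the fine harmonic structure -- the pitch -- untouched.)"""
    bins = np.arange(len(envelope_mag))
    return np.interp(bins / ratio, bins, envelope_mag, right=0.0)

# A toy envelope with a single "formant" peak at bin 50:
env = np.exp(-0.5 * ((np.arange(256) - 50) / 5.0) ** 2)
raised = shift_formants(env, 1.5)   # the peak moves toward bin 75
lowered = shift_formants(env, 0.5)  # the peak moves toward bin 25
```

Shifting the envelope up mimics a shorter vocal tract; shifting it down mimics a longer one.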

Formant shifting is sometimes measured in semitones to maintain a similar taper to pitch shifting controls, but it may also be freely adjustable in cents or with other scale measurements.

How to use formant shifting in your mix

With the background out of the way, here are four creative applications for formant shifting in music production.

Of course, to take advantage of these techniques you’ll need a plugin with formant shifting capabilities.

While there are plenty of options available, we built Humanoid specifically for radical voice manipulation.

It features robust formant shifting controls in addition to pitch correction, additive resynthesis and a whole host of other vocal transformers.

If you like the power of formant shifting and want to take it further, consider trying Humanoid to see how it can reshape your voice.

On to the creative techniques.

1. Contrast formant frequencies with voice type

As I mentioned above, we often associate specific formant patterns with common voice types.

Rich, resonant low voices often have lower formant frequencies, while higher, squeakier voices have higher formant frequencies.

Creating an intentional contrast with the original formants in the source signal is one of the most striking ways to use the technique.

Try shifting the formant lower on a higher vocal or higher on a lower vocal for a disconcerting mismatch of high and low timbres.

2. Combine formant shifting with pitch remapping

Since pitch and formant are independent of each other, you can manipulate both in either direction with plugins like Humanoid.

Making slight changes to the formant frequencies can help mask the effects of altering the pitch.

It can also expose different qualities in the original vocal that you might not have perceived before.

Humanoid can remap incoming vocals to any target pitch in MIDI mode without artifacts.

But there can still be a noticeable difference between the source vocals and the remapped pitch at extreme settings.

Try subtle adjustments with the formant control to rebalance the timbre after shifting the pitch by large intervals.

3. Use multiple formant shifts to differentiate sections

Vocal timbre is one of the key signals we use to identify the difference between speakers.

You can exploit this natural tendency to create contrasting characters or personas in your music.

This technique was famously used by The Knife on their 2006 album Silent Shout to give vocalist Karin Dreijer a wide range of characters and identities to inhabit from section to section.

Try using automation to manipulate formants between verse and chorus, or turn on formant shifting to build intensity during transitions.

4. Blend formant shifted vocals with dry signal

Radically shifting the vocal formants can make any voice seem unrecognizable.

But in many cases, you may want to retain some of the vocal’s original quality. This way, you can layer in the formant shifted version for additional power and interest.

Humanoid offers a handy wet/dry mix control for the entire plugin.

Try using extreme formant settings with some dry signal blended in to maintain the basic character of the original vocal.

Vocal shapeshifting

The human voice has a huge range of natural expression.

When you add the possibilities of formant shifting, you open up a whole new dimension of sound design for vocals.

In combination with other manipulation techniques, plugins like Humanoid let you alter nearly every aspect of a vocal performance.

Now that you understand the basics, get back to your DAW and try shifting formants in your own vocal tracks.

Creating Presets for Atoms: Sound Design With Physical Modeling

This guest post is written by Christoph Strauss. Christoph is an audio engineer, freelance journalist and owner of soundchills.de. In this article he shares his take on sound design with Atoms.

Atoms is a complex physical modeling synthesizer that is nonetheless simple to control. I made a preset pack for it called “The Zen Expansion,” and in this article I’ll not only share my method but hopefully also spark your creativity.

With the release of Atoms, Baby Audio have once again enriched the world of music makers and sound designers. The plugin is based on a unique take on physical modelling that specializes in otherworldly, dark, and melancholic sounds. I won’t get into the details of physical modelling here; Baby Audio have released a post as well as a deep-dive video explaining it.

Move it, move it!

While reviewing it for my blog I really got to learn the ins and outs of it, the strengths and weaknesses. During the process I ended up creating an Expansion Pack (which anyone can do within Atoms) with 17 presets that Baby Audio and I are giving away on their Discord, so head over there!

Now, back to making presets. The most important thing to keep in mind when using Atoms is that the final sound heavily relies on the internal automations – but let’s start from the beginning.

Before jumping into the sound design, I decided what kind of sound I wanted (e.g. pad, bass). Then I began setting a base sound, using the upper four knobs for Chaos, Order, Force, and Overtones with all automation paused. Choosing the right profile can enhance the characteristics even more. Next, I adjusted the attack and release and experimented with automating those parameters. For “regular” sounds (excluding sound effects and such), I listened to a loop for a while and made sure no unwanted artefacts occurred, especially when not using synced automation.

Once happy with that, I added Drive and Filter to taste. The filter in particular has a great impact on the audio and, if automated, can turn a long held note into a sequence, for example. In general, I found “Drift” to be a cool mode that introduces subtle variations which keep things interesting without being overwhelming.

My final step was to polish things up using Movement, Modulation, Vibrato, and Space. These add an organic feel, and the latter places the sound at the desired distance.

Let’s sum things up

The beauty of sound creation lies in diversity. My approach to Atoms is one of many, and I jumped back and forth between the steps above a lot. Combining different automations brings out very interesting, sometimes unpredictable sounds. Although Atoms has only a few parameters, it can generate a wide variety of complex sounds. As so often, experimenting is key.

What is City Pop? The Producer's Guide to Classic Japanese Pop

City Pop is one of the most fascinating genres to see a resurgence in popularity in modern music.

With roots in the sophisticated, jazz-inflected styles that emerged in Japanese cities in the late 70s, City Pop has gone global with many producers chasing its nostalgic feel into the 2020s.

But what is City Pop exactly? What are its defining features and how can you capture the sound in your own productions?

In this article, I’ll give a brief intro to City Pop, break down what makes it unique and suggest four essential techniques to help you create it.

Let’s get started.

What is City Pop?

City Pop is the term for a style of Japanese pop music that arose during a period of rapid economic growth and technological development in the 1970s and 80s.

Influenced by contemporary trends in western music, City Pop was a uniquely Japanese take on the adult-oriented genres of the 70s and 80s like funk, disco, AOR and soft rock.

As the genre progressed through the 80s, City Pop artists developed a distinctive style that combined a unique musical vocabulary with the cutting edge music technology that was being pioneered in Japan at the time.

Regarded as an important stage in the evolution of Japanese musical culture, City Pop has found a broader audience in recent times as listeners around the world rediscover its unique sound and aesthetic.

City Pop songs and artists

Although the term City Pop was rarely used during the movement’s heyday, there are dozens of landmark releases now considered touchstones of the genre.

Here are a few essential artists and tracks that highlight key aspects of the City Pop sound.

Taeko Onuki

A founding member of the band considered by some to be the original City Pop act, Taeko Onuki’s 1977 album SUNSHOWER showcases the early roots of the genre.

In the synth solo on “Tokai” you can hear how jazz and R&B influences began to collide with new technology.

The rich polyphonic synth patch is played in a pianistic style that shows how traditional musicians were willing to embrace new sounds during the City Pop era.

Kingo Hamada

Kingo Hamada’s 1982 album midnight cruisin’ is a touchstone of the City Pop genre among original fans and revivalists.

The song “Machi no Dorufin” is the basis for the wildly popular Englewood edit “Crystal Dolphin” that took off on social media in the late 2010s.

The album highlights the escapist, tropical flavor of City Pop that has become synonymous with modern takes on the genre.

Makoto Matsushita

Incorporating the orchestral accents of 70s pop and the airtight grooves of Steely Dan, Matsushita’s 1981 single “Love Was Really Gone” shows off the sophisticated side of City Pop.

The song’s effortless solos and rich instrumentation capture the cosmopolitan feel of contemporary urban life.

How to make City Pop

City Pop may not have strict genre conventions that are easy to imitate, but it does have a recognizable feel that comes from common elements found in many classic productions.

With the background info out of the way, here are some general pointers to help you get a more authentic City Pop sound in your own productions.

1. Jazz and fusion inspired harmony

Rich, jazz-inflected chords and progressions are a defining aspect of the City Pop sound.

The Japanese artists that popularized the genre were often reacting to developments in western music that saw a creeping influence of jazz and R&B into mainstream rock and pop.

In the US, west coast jazz had evolved into the smoother, “Quiet storm” radio format and its breezy, laid-back feel was influencing pop acts like Steely Dan, Hall and Oates and McDonald-era Doobie Brothers to experiment with more complex harmony.

As City Pop developed, it picked up a harmonic vocabulary associated with these broader trends in music.

Jazz itself was changing in futuristic directions that resonated with listeners experiencing the rapid technological progress occurring in Japanese urban centers.

Forward-looking fusion acts like Weather Report and Michael Brecker’s Steps Ahead were popular among urban professionals keen to show off their brand new cassette decks and hi-fi gear.

As a result, harmonic concepts from jazz such as major 7th chords, sus4 chords, ii-V-I progressions and even tritone substitutions are often found in City Pop.

Throw in a healthy dose of the IV-V-iii-vi progression found elsewhere in Japanese pop and you’ll start to hear the essence of the sound come alive.
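To make those shapes concrete, here’s a small illustrative Python sketch that spells out the IV-V-iii-vi progression as diatonic 7th chords in C major using MIDI note numbers. The scale table and helper function are my own illustration, not taken from any particular tool:

```python
# MIDI note numbers for one octave of C major, starting at middle C (60)
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]

def diatonic_seventh(degree):
    """Stack diatonic thirds on a 1-based scale degree to build a 7th chord."""
    root = degree - 1
    return [C_MAJOR[(root + step) % 7] + 12 * ((root + step) // 7)
            for step in (0, 2, 4, 6)]

# IV-V-iii-vi in C major, voiced as 7th chords: Fmaj7, G7, Em7, Am7
progression = [diatonic_seventh(d) for d in (4, 5, 3, 6)]
```

Play these back with a warm electric piano or polysynth patch and the City Pop flavor appears almost immediately.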

If you need a refresher on jazz chords and progressions, head over to this guide to get the basics: 8 R&B and Jazz Chord Progressions Every Musician Should Know.

2. 80s-style synth patches

Pioneering synthesizers were being developed alongside the rise of City Pop as Japanese companies like Roland, Yamaha and Korg led a wave of innovation in music technology.

Many of these boundary-pushing devices made their way into the hands of City Pop musicians and recording engineers during the peak of the genre.

Vintage-style monophonic and polyphonic synth patches can be heard all over records from City Pop’s 80s heyday.

If you’re looking to capture the sound, you might consider an 80s-inspired synth plugin like BA-1.

A go-to choice for retro genre heads, BA-1 was such a hit among modern City Pop producers that we created two preset packs dedicated to the style.

Check out City Pop Volumes 1 and 2 for 128 authentic 80s-style patches.

Inspired by the classic portable CS01 synth, BA-1 has instant retro feel on tap with analog-modeled sound that gives you the best of what made this era of synthesis so iconic.

3. Tight but natural percussion

Most City Pop artists relied on session musicians to fill out the ensembles for albums and performances.

Drawing from the best of the best, many of the classic albums and singles feature expert rhythm sections that feel accurate, but not rigid or mechanical.

The result is a tight, human take on disco and R&B grooves with a distinctive dry sound.

To keep your own productions from feeling too sterile, avoid quantizing percussion hits directly to the grid where possible.

On top of that, consider choosing sampled loops with a more human feel rather than sequenced drum machine hits if you can.

Sonically, the drum recording techniques of the day often focused on a punchy, direct sound with ample isolation between individual microphones.

If you need some resources to help you get the 70s dry drum sound, there are plenty of excellent sample packs and tutorials to help you.

4. Horns, backing vocals and aux percussion

Classic City Pop often incorporated elements that signified large budget productions in the era of limited tracks and expensive analog tape.

Slick horn sections, syrupy backing harmonies and additional percussion elements helped provide the sense of a large, perfectly arranged ensemble backing the vocalist in singles like Anri’s “Remember Summer Days.”

Try stacking up multi-tracked backing vocals for important sections and punctuating transitions with horn section stabs.

City Pop nostalgia

There’s a light and carefree feel to the best City Pop tracks that’s impossible to resist.

It’s no wonder producers want to recapture a bit of that magic in their own music.

And even though you can’t travel back in time to 1980s Tokyo, you can still understand the basics of the genre and find great sounds to help you get close.

If you’ve made it through this article you’ll have a great start for producing City Pop.

Effects Rack Plugins: How to Use Multi-FX for Creative Sound Design

Effects rack plugins are a practical way to add color to your tracks with a signature style.

By combining multiple effects into a single package, you can build up unique textures quickly with a single interface.

But in addition to making it convenient to work with several effects at once, effects racks offer benefits for modern workflows.

In this article, I’ll break down the basics of effect rack plugins and explain why they’re seeing a resurgence in 2024.

Let’s get started.

What are effects rack plugins?

An effect rack plugin is a collection of individual effects within a single plugin with shared interface elements and a single set of inputs and outputs.

Effects racks may be fixed or flexible chains of effects in serial, parallel or modular configurations.

The term “rack” comes from the 19” rack hardware format that was common for outboard studio equipment during the analog era.

Rack effects could be easily patched in and out of an analog console to create intricate chains of cascading effects.

Today the concept of effects racks lives on in plugin form to give producers access to familiar aspects of the workflow.

Why use an effect rack?

With all the plugins out there, you may wonder about the benefit of using a multi-effect in place of individual plugins.

Here are a few things to consider if you’re not used to working with effects rack plugins:

Easier workflow

A dedicated effect rack puts a curated chain of useful devices directly at your fingertips. That means you can quickly mix and match without taking the time to search through plugins and load them individually.

On top of that, many multi-fx plugins come set up for custom routings such as wet/dry blends, or parallel signal chains.

This can make it easier to achieve specific results that might take longer to set up individually.

For example, Parallel Aggressor contains three internal signal paths—one for compression, one for saturation and a third for the dry signal. That’s a lot to configure manually in your DAW.
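To picture what that routing does, here’s a rough Python sketch of the three-path idea: a dry path, a crudely “compressed” path and a saturated path, summed back together. The waveshaping and gain-reduction curves here are simplified stand-ins of my own, not Parallel Aggressor’s actual DSP:

```python
import math

def saturate(x, drive=4.0):
    """Saturation path: a tanh waveshaper, normalized to peak at 1.0."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in x]

def squash(x, threshold=0.3, ratio=4.0):
    """Crude 'compressor' stand-in: reduce gain above a fixed threshold."""
    return [s if abs(s) <= threshold
            else math.copysign(threshold + (abs(s) - threshold) / ratio, s)
            for s in x]

def parallel_blend(x, dry=0.5, comp=0.3, sat=0.2):
    """Sum the dry, compressed and saturated paths - the three-bus idea."""
    comp_path, sat_path = squash(x), saturate(x)
    return [dry * d + comp * c + sat * t
            for d, c, t in zip(x, comp_path, sat_path)]

signal = [math.sin(2 * math.pi * 220 * i / 44100) for i in range(1000)]
blended = parallel_blend(signal)
```

Recreating this in a DAW means three aux buses, careful gain staging and latency compensation, which is exactly the setup work a rack plugin saves you.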

Sonic cohesion

With a folder full of diverse effects, stacking plugins can sometimes result in a mishmash of styles.

That’s not always a bad thing, but sometimes you need a cohesive sound that retains its character across multiple stages of processing.

Effects racks ensure that each effect in the chain is cut from the same cloth and interacts favorably with the others.

This could mean effects with a similar sonic feel, or a specifically curated palette of possible sounds that work naturally well together.

Less option paralysis

Finally, anything is possible in the world of DAW-based music production.

The potential for creative sound design is unlimited, but that can sometimes feel like a burden.

Staring down an empty insert chain with hundreds of plugins to choose from can affect your inspiration when you’re trying to get ideas down fast.

Effects rack plugins are a great way to impose creative limitations and free yourself from endless option paralysis.

Plus, the interactions between effects inside the rack can lead to unexpected results that wouldn’t be achievable otherwise.

How to use effects racks creatively

With the basics out of the way, here are three approaches to using effects racks creatively in a mix.

Variations on a theme

Some multi-fx plugins provide a convenient package for effects that are often used together in a mix.

After all, many mixing styles rely on characteristic combinations of effects.

For example, most lo-fi textures require a mix of compression, saturation, pitch vibrato and other noisy elements to sound authentic.

SuperVHS takes this approach to put all the essential ingredients of a convincing lo-fi texture into a single plugin.

Try increasing each main parameter one by one to hear its effect clearly, then layer in a combination to create your ideal lo-fi effect.

Classic signal flows

Some rack-style plugins are built to recreate well-known signal flows from the days of analog hardware.

Think of virtual pedalboards or classic console channel strips.

These are a form of multi-effect that take cues from familiar workflows to help you approximate hardware processes in the DAW.

For example, vintage rack hardware from the 80s put a signature stamp on many of the most beloved sounds of the era. Many of these resulted from cheap gear that was meant for amateurs rather than industry pros.

As professional-quality tools were out of reach for home producers at the time, affordable effects units were often quirky and not exactly subtle in their sound.

Despite that, inexpensive digital reverbs and chorus units made their way onto hundreds of influential recordings during the 80s and 90s.

Putting these characteristic effects in one package was the driving force behind the addition of the Retro FX Strip in the v1.5 update of BA-1.

Many users of the original BA-1 synth loved the effects section and wanted to use it on other tracks in their mix—the perfect application of a well-designed effects rack!

You can now add BA-1’s beloved lo-fi and 80s-flavored drive, filter, chorus, delay and reverb to any track in your session for authentic retro character.

Creative workstations

Sometimes a particular collection of effects is all about pushing the boundaries and unlocking creative possibilities.

Whether it’s a suite of purpose-built effects that complement each other or a platform for creating custom combinations, workstation plugins are all about sound design and experimentation.

If conventional plugins aren’t enough to satisfy your sonic curiosity, try a workstation like NI Molekular to build your dream effects from scratch.

Off the rack

The best thing about producing with a DAW is the flexibility to work however you want.

In the end, plugins are just tools for making music, and whatever form they take they should help you realize your unique vision.

Effects racks are one tool you may not have considered that can save you time and effort in a session.

If you’ve ever brushed off effects racks as relics of a bygone era, it might be time to jump back in and see if they work for you in 2024.

What is Physical Modeling? 5 Techniques for Futuristic Sound

Physical modelling is one of the most fascinating approaches to sound synthesis.

It uses mathematical techniques to create sounds that mimic the behavior of real-life instruments.

But physical modelling is far from just a theoretical exercise. It’s a powerful synthesis tool that can change how you think about timbre in music.

In this article, I’ll explain what physical modeling is, how it works, and the five most common techniques you’ll find in physical modelling synthesis today.

What is physical modeling?

Physical modeling is the term for synthesis methods that simulate the behavior of sound-producing objects in the real world.

Acoustic instruments rely on strings, membranes, air columns, or other physical structures to create their sound.

Physical modeling uses clever algorithms to recreate these mechanisms in the digital domain. The result is virtual instruments that generate sound in the same way as the traditional instrument families like strings or woodwinds.

But unlike conventional instruments, physical models aren’t bound by the limitations of real life.

What would a cello sound like if it were the size of a car? Or what if the air inside a flute were much less dense?

These are the types of intriguing sonic questions you can explore with physical modeling.

How does physical modeling work?

Sound producing bodies can be broken down into component parts that each play a role in turning energy into sound waves.

Think of a violin bow dragging along a string. Friction makes the bow alternately stick to and slip along the string, adding energy to the system.

The energy causes the string to vibrate and the body of the violin amplifies that resonance. The player’s fingers change the length of the string as they move between notes, affecting the fundamental frequency of the vibration.

In the world of physical modelling, the scratchy bow can be represented by a burst of noise, while the size and resonance of the body can be approximated with filters and other techniques.

Today’s sound designers have a deep understanding of the physics of acoustics, leading to ever more accurate physical modelling algorithms.

Types of physical modeling used in synthesis

There are as many approaches to physical modeling as there are sound producing phenomena in the world.

Synth designers have developed different techniques to imitate all kinds of instruments, from struck bodies to masses of air and even networks of springs.

Here’s a guide to the common physical modelling methods found in instruments available today.

1. Karplus-Strong string synthesis

Karplus-Strong string synthesis was one of the first physical modelling algorithms to be identified and developed.

It uses a surprisingly simple system to simulate the behavior of a struck or plucked string.

To start, a burst of noise gets fed into a feedback loop consisting of a delay line and a low-pass filter. This simulates the complex harmonic content produced when a string is plucked.

The delay line represents the length of the string, with the delay time determining the pitch of the note.

The low-pass filter, which slightly attenuates the high frequencies with each pass through the loop, simulates the energy loss that occurs in a real vibrating string due to factors like air resistance and friction.

As the sound repeatedly cycles through the delay line and filter, it begins to take on a characteristic that closely resembles that of a plucked string instrument.

The decay of the sound can be controlled by adjusting the feedback level and the filter's characteristics, allowing for the simulation of different types of string materials and plucking techniques.
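The whole loop fits in a few lines of Python. This is an illustrative sketch of the algorithm described above, with my own parameter choices: the buffer length sets the pitch, and a two-point average stands in for the low-pass filter:

```python
import random

def karplus_strong(freq, sample_rate=44100, duration=0.5, damping=0.996):
    """Plucked-string sketch: a noise burst cycling through a delay line + low-pass."""
    period = int(sample_rate / freq)                          # delay length sets pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]  # noise burst = the pluck
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(buf[0])
        # two-point average = crude low-pass; damping scales the feedback each pass
        buf.append(damping * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out

pluck = karplus_strong(220.0)  # half a second of a 220 Hz 'string'
```

Raising the damping value lengthens the decay, mimicking a brighter, more resonant string; lowering it gives a muted, palm-muted character.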

2. Digital waveguide synthesis

Building on the work of Karplus and Strong, digital waveguide synthesis was developed in the 80s by Julius O. Smith III.

It goes even further to simulate the propagation of a sound wave within a medium.

By modelling the way sound waves travel back and forth along a string or within a tube with additional delay lines, this method can reproduce instruments like guitar or flute.

In digital waveguide systems, the delay lines update continually to model the reflection of waves at boundaries like the nut and bridge of a guitar or the ends of a flute.

This requires filters and other signal processing elements to simulate the frequency-dependent energy losses that occur in real-world physical mediums.

Designers craft these elements to reflect the acoustic properties of materials like the density and tension of a string or the characteristics of the air column within a wind instrument.
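As a toy illustration of the idea (my own simplification, not any particular product’s implementation), here’s a Python sketch with two counter-propagating delay lines and lossy, inverting reflections at the “nut” and “bridge”:

```python
def waveguide_pluck(length=100, steps=4000, damping=0.995):
    """Two counter-propagating delay lines with lossy, inverting end reflections."""
    right = [0.0] * length     # wave travelling toward the 'bridge' (high index)
    left = [0.0] * length      # wave travelling toward the 'nut' (index 0)
    right[length // 2] = 0.5   # split the initial displacement between directions
    left[length // 2] = 0.5
    out = []
    for _ in range(steps):
        end_r = right[-1]      # sample arriving at the bridge
        end_l = left[0]        # sample arriving at the nut
        right = [-damping * end_l] + right[:-1]  # nut reflection feeds right-going line
        left = left[1:] + [-damping * end_r]     # bridge reflection feeds left-going line
        out.append(right[10] + left[10])         # 'pickup' reads both waves at one point
    return out

wave = waveguide_pluck()
```

The round trip through both lines takes 2 × length samples, which sets the pitch; replacing the plain damping multiplier with a filter is where the material modelling described above comes in.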

3. Mass-spring interaction networks

This unique method models a physical system as a network of masses connected by springs.

Each mass is a point of inertia with its own physical characteristics like weight and density. The connecting springs can move back and forth by contracting or expanding.

When the system is excited by a virtual bow or pluck, forces of tension and compression act on the network to push and pull the springs. This action produces complex vibrations and sound waves.

The physical laws of motion determine the interaction of points in the network and the sonic results they produce.

By adjusting the parameters of the masses, springs, and their connections, this approach can simulate wide ranging acoustic properties and behavior, including nonlinear and chaotic systems.
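Here’s a deliberately simplified Python sketch of the concept: a 1D chain of masses joined by springs, integrated step by step. The parameter values are arbitrary illustrative choices, not the model used in any particular instrument:

```python
def simulate_chain(n_masses=16, k=0.5, damping=0.999, steps=2000):
    """1D chain of masses joined by springs with fixed ends (a toy 'string')."""
    pos = [0.0] * n_masses
    vel = [0.0] * n_masses
    pos[n_masses // 2] = 1.0              # excitation: displace the middle mass
    output = []
    for _ in range(steps):
        for i in range(n_masses):
            left = pos[i - 1] if i > 0 else 0.0               # fixed boundary
            right = pos[i + 1] if i < n_masses - 1 else 0.0   # fixed boundary
            force = k * (left - pos[i]) + k * (right - pos[i])  # Hooke's law, both springs
            vel[i] = damping * (vel[i] + force)  # friction gradually drains energy
        for i in range(n_masses):
            pos[i] += vel[i]
        output.append(pos[1])             # 'listen' to a mass near one end
    return output

motion = simulate_chain()
```

Even in this tiny system, changing the spring constant or damping reshapes the spectrum in ways that feel physical rather than synthetic, which hints at why the full technique is so expressive.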

If that sounds complex, you can get a sense for how it works by experimenting with an instrument based on this technique.

Our Atoms synthesizer uses cutting-edge research on mass-spring interactions to produce an instrument that’s easy to use, yet unlike anything you’ve ever heard.

Its six simple parameters act on the mass-spring network in ways that push, pull and mangle its structure as it vibrates.

The result is an eerie, organic texture that can evolve and shift as you play.

Watch how Atoms produces unique sound with mass-spring interaction networks in our tutorial video.

4. Modal synthesis

Modal synthesis is a physical modelling approach that takes inspiration from the resonant behavior of acoustic objects.

It's often used to simulate percussion instruments like drums, bells and mallet instruments.

It works on the principle that any sound can be decomposed into individual modes of vibration.

Each mode represents a specific way in which the object vibrates and resonates at a particular frequency.

The modes are typically represented by sets of filters tuned to each mode’s resonant frequency and excited by an input source like an impulse.

This allows them to mimic the percussive strike of a stick or mallet on a resonant surface or membrane.
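In its very simplest form, you can approximate a modal model as a sum of exponentially decaying sine partials, one per mode. Here’s an illustrative Python sketch; the mode table is made up, loosely shaped like a small metal bar:

```python
import math

def modal_strike(modes, sample_rate=44100, duration=0.5):
    """Sum of exponentially decaying sine partials, one per resonant mode.

    Each mode is (frequency in Hz, amplitude, decay time constant in seconds).
    """
    n = int(sample_rate * duration)
    out = [0.0] * n
    for freq, amp, decay in modes:
        for i in range(n):
            t = i / sample_rate
            out[i] += amp * math.exp(-t / decay) * math.sin(2 * math.pi * freq * t)
    return out

# hypothetical mode table, loosely shaped like a small struck metal bar
bar = modal_strike([(440.0, 1.0, 0.3), (1210.0, 0.5, 0.2), (2380.0, 0.25, 0.1)])
```

Detuning the partials away from whole-number ratios, as above, is what gives bells and bars their inharmonic, metallic character.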

5. Formant synthesis

Formant synthesis focuses on replicating the vocal qualities of speech and singing with physical modelling methods.

It’s often implemented with a network of band-pass filters that correspond to formant frequencies that match those of natural speech or singing.

An input sound that mimics the role of the vocal folds like a buzz or hiss acts on the filter network to impart a distinctive vocal quality.

By changing the center frequencies and bandwidths of the formant filters over time, these systems can emulate different vowels, consonants and transitions.
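Here’s a minimal parallel-formant sketch in Python, using simple two-pole resonators as stand-ins for the band-pass filters. The “ah”-like formant frequencies and bandwidths are rough textbook-style figures, used here only for illustration:

```python
import math

def resonator(x, freq, bandwidth, sample_rate=44100):
    """Two-pole resonant filter centred on freq - a simple band-pass stand-in."""
    r = math.exp(-math.pi * bandwidth / sample_rate)  # pole radius from bandwidth
    w = 2 * math.pi * freq / sample_rate              # pole angle from frequency
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = (1.0 - r) * s + 2.0 * r * math.cos(w) * y1 - r * r * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def formant_voice(formants, f0=110.0, sample_rate=44100, duration=0.3):
    """Sawtooth 'buzz' (a crude glottal source) fed through parallel formant filters."""
    n = int(sample_rate * duration)
    buzz = [2.0 * ((i * f0 / sample_rate) % 1.0) - 1.0 for i in range(n)]
    out = [0.0] * n
    for freq, bw in formants:
        out = [a + b for a, b in zip(out, resonator(buzz, freq, bw))]
    return out

# rough 'ah'-like formant frequencies and bandwidths in Hz
ah = formant_voice([(700, 110), (1220, 120), (2600, 160)])
```

Sweeping the center frequencies over time, rather than holding them fixed as here, is what produces the vowel transitions described above.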

How to use physical modelling to create unique sounds with Atoms

Atoms’ unique approach to physical modelling makes it exciting to explore if you’re on the hunt for new sounds.

After all, traditional subtractive synthesis and even FM have limits once you’ve used them a lot. But Atoms may be unlike anything else you’ve tried in the world of synthesis.

Here are four ideas for experimenting with the unique physical model at the heart of Atoms:

1. Use the Force and Overtones to add harmonics and control them with Order

Atoms is a mass-spring interaction network excited by the action of a virtual bow.

The Force control determines the pressure of the bow against the spring while the Overtones control determines its position along its length.

Just like a real bow, increasing the pressure on the spring will add a raspy, noise-like quality to the signal.

And bowing closer to the boundary simulates playing closer to the bridge of a string instrument. This emphasizes sharper high harmonics, like those heard when string players are instructed to play “sul ponticello.”

The Order control introduces high-frequency damping to the springs, acting in effect as a low-pass filter. Noisy excitation from the bow combined with sharper overtones can yield interesting pizzicato-like effects when heavily damped with the Order control.

2. Control onset and decay with Attack and Release

If Atoms feels overwhelming at first, there are familiar elements you can fall back on as you experiment.

For example, Atoms’ attack and release controls work similarly to those in subtractive synths.

That said, these too are tied to the action of the mass-spring network rather than being a standard ADSR envelope.

Attack refers to the velocity of the bow as it contacts the spring. Just like a traditional attack control, lower values produce a more abrupt onset, while higher ones create a gradual swelling effect.

Release controls the overall damping of masses in the system.

As physical forces like air drag and friction act on a vibrating spring, its energy will decrease over time causing the sound to decay.

Turning the release control up reduces damping for a longer decay.

3. Automate it all for evolving sound

The key to Atoms’ expressive, organic sound is getting its parameters moving with the onboard automation controls.

Simply engage a Motion mode from the menu next to each of the six main parameters and set its range with the outer ring.

You can set the automation speed in Hz or sync to the session tempo to keep it all in time.

Experiment with the different Motion modes to see how each parameter interacts with the others as they move back and forth.

4. When in doubt, roll the dice

If you simply want to hear the range of different textures Atoms can produce, try hitting the randomize button represented by the Dice icon in the top left corner.

This engages a musically calibrated randomization function that produces new and usable sounds on demand.

But if you’d rather not randomize everything, you can exclude parameters from the randomizer by locking them with the lock icon.

Finally, if you just want a slightly different take on the patch you’re currently working with, you can hit the Recycle button to generate variations on the current settings.

The sound of science

Physical modelling is one of the last frontiers left in instrument design.

While it may seem complicated at first, there’s nothing scary about plugins and instruments based on this technology.

If you’re a creative producer curious about the future of synthesis, it’s worth exploring physical modelling to see how it can inspire you to create.

Try Atoms free to get started with physical modelling and mass-spring interaction synthesis.

What is Algorithmic Reverb? How to Get Stylish Ambience in Your Mix

For software users, reverb effects tend to fall into one of two categories: algorithmic or convolution-based. Let’s take a deep dive and define them once and for all.

Algorithmic Reverb defined

An algorithmic reverb is based on a mathematical model that simulates the behavior of a physical space. You can think of this as a really complex signal chain. In an algorithmic reverb, your audio is passed through a network of digital delay lines that are modulated, filtered and fed back into each other in a manner designed to resemble the reflections of a room, hall or chamber – or even a “super-natural” space.
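The earliest such network, due to Schroeder, still makes a good illustration: a few parallel feedback comb filters (the echoing “room modes”) followed by series allpass filters that smear the echoes into a smooth tail. Here’s a minimal Python sketch; the delay lengths and gains are arbitrary illustrative values, not any commercial unit’s tuning:

```python
def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    buf = [0.0] * delay
    out = []
    for i, s in enumerate(x):
        y = s + g * buf[i % delay]   # circular buffer holds past outputs
        buf[i % delay] = y
        out.append(y)
    return out

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    bx, by = [0.0] * delay, [0.0] * delay
    out = []
    for i, s in enumerate(x):
        y = -g * s + bx[i % delay] + g * by[i % delay]
        bx[i % delay], by[i % delay] = s, y
        out.append(y)
    return out

def schroeder_reverb(x):
    # four parallel combs with mutually prime delays avoid coinciding echoes
    wet = [0.0] * len(x)
    for d, g in [(1557, 0.84), (1617, 0.83), (1491, 0.82), (1422, 0.81)]:
        for i, v in enumerate(comb(x, d, g)):
            wet[i] += 0.25 * v
    # two series allpasses diffuse the discrete echoes into a smooth tail
    for d, g in [(225, 0.5), (556, 0.5)]:
        wet = allpass(wet, d, g)
    return wet

tail = schroeder_reverb([1.0] + [0.0] * 44099)  # one-second impulse response
```

Notice how many interacting numbers even this toy version has; every delay length and gain colors the result, which is exactly the tuning problem described below.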

Algorithmic reverbs - whether in VST plugin or hardware form - are hard to develop because they consist of so many parameters that each have compounding effects on the end result. One little change in any part of the chain/algorithm will make for a completely different sound, and things can quickly get metallic or murky. This means you need equal development and sound design skills to create a great algorithmic reverb. (And a whole lot of patience!).

Because of the complexity that goes into developing pro-grade algorithmic reverbs, these effects are often considered ‘the holy grail’ of DSP development. In audio circles, the inner workings of legendary algorithmic reverbs such as the Lexicon 480L have almost become as mythical as the Coca Cola recipe.

Algorithmic reverbs are not as realistic as convolution reverbs, but in some ways this is their strength. They turn the idea of what a “reverb” is on its head: Instead of aiming to sound exactly like the real world, algorithmic reverbs tend to just sound like reverb effects, often in ways that transcend the limitations of any natural space. This is why algorithmic reverbs are so great for making vocals, synths or mix busses sound larger than life.

At Baby Audio, we love algorithmic reverbs (perhaps you can tell). Our best attempt at making one for the modern era is our Crystalline plugin (VST, AU, AAX + iOS). It stands on the shoulders of classic studio reverbs such as the Lexicon and EMT units that defined the sound of the 1980s, but offers an upgraded, contemporary sound and feature set.

Crystalline by Baby Audio, a modern algorithmic reverb.

Convolution Reverb defined

Convolution reverbs, on the other hand, use impulse responses recorded out in the real world to create natural-sounding reverberation. You can think of a convolution reverb as a plugin that hosts ‘samples’ of real spaces and lets you send your tracks through them to get similar reflections (often with a very high degree of realism). The benefit of this approach is accuracy and the ability to mimic real-world spaces, but the main drawback is that impulse responses tend to be somewhat static. (And sometimes, real-world spaces just aren’t as exciting when you want to truly elevate a track.)

Algorithmic Reverb vs. Convolution Reverb - what’s better?

If you make music firmly rooted in the real world, such as folk, classical or jazz, you’d probably want to find a well-equipped convolution reverb. It will give you the truest representation of the rooms and halls known from our physical world and help your music sound natural and “real”.

On the other hand, if you’re looking for reverb as an ‘effect’ to elevate and enrich your tracks, then algorithmic reverbs tend to offer more potential – both sonically and feature-wise. Sonically, algorithmic reverbs aren’t tied to the constraints of the physical world. Their algorithms can be tweaked in endless ways to make super-natural spaces that give your music a touch of otherworldly magic. For this reason, algorithmic reverbs are typically preferred for modern music genres such as pop, hip hop, electronic, dance, ambient and experimental.

Having said that, algorithmic reverbs aren’t just for modern sound design purposes. If you’re looking for a 1980s or 1990s retro vibe, an algorithmic reverb is 100% the way to go, especially at a lower quality setting such as Crystalline’s optional ‘basic’ mode.

What’s the history of the Algorithmic Reverb?

The idea of recreating a physical space digitally through an algorithm - or mathematical formula - was pioneered in the 1960s by academics such as Manfred Schroeder and Michael Gerzon. It may seem like an obvious idea today, but you have to remember that computers and microchips were almost as foreign at the time as the space rocket that landed the first human on the moon.

The space age was an era for big visions and small chips!

As a result, algorithmic reverbs improved slowly but consistently throughout the 20th century – with every new technology leap. Companies like EMT and Lexicon finally led the way in making algorithmic reverbs the go-to choices in studios around the world.

This leads us to Crystalline…

With modern computer power at our disposal, we now have a luxury our predecessors could only have dreamed of. Where they had to walk a hard line between restraint and imagination, our super-powered computers allow us to focus primarily on the latter.

In working on Crystalline, we experimented with the algorithmic reverb concepts originally envisioned by pioneers like Gerzon and Schroeder — and re-applied their ideas from scratch without the original limitations. This led to A LOT of delay lines! And even more trial and error.

But in the end we got something we’re proud of as a contribution to the ongoing evolution of the algorithmic reverb. Crystalline is certainly not the end of the line, nor is it the only modern reverb to tap into these classic ideas. But it offers our upgraded and genuine take on what could rightly be called ‘the holy grail’ of DSP – designed with a 21st century user in mind.

If you want to hear Crystalline in action, you can get a free trial and learn more by clicking the link below.

How to use algorithmic reverb in your mix

Algorithmic reverb is perfect for any situation that calls for stylish and creative atmospheres.

It’s often used for large ambient washes where modulation plays a central role. But it works equally well for applications that require tight rooms, plates and chambers.

Many of the most beloved algorithmic reverb hardware units are famous for their non-linear modes, including the classic gated reverb trick from the 80s.

Crystalline can add a distinctive flavor whether you need the expansive sound of a space that’s larger than life, or a short decay that stands out with rich character.

Here are a few tips for using algorithmic reverb with Crystalline:

  • Engage slow, deep modulation on settings with long decay times to add lushness and complexity to the reverb tail.

  • Shape the harmonic content by adjusting the high and low damping frequencies. Try different combinations with short or long decays to see how they interact.

  • Create the classic gated effect by turning up the gate control’s threshold and setting the release to match the song’s tempo.

  • Try the Shimmer function to add a ghostly upper-octave reverberation to the signal.