How to Make a Music Visualizer Using After Effects?

Music visualization allows you to create shapes that flow with the music. It’s like a visual interpretation of the amplitude, frequency, and components of the piece being played.

You’ve probably come across this type of technology in music players such as Windows Media Player or iTunes.

And here is how you can make your own music visualizer in Adobe After Effects.

Getting Started

Once you’ve installed and set up Adobe After Effects, launch the program and press “Ctrl + N” to create a new composition.

In the composition’s settings, you can set the resolution and frame rate to whatever suits your project.

The most important part is to set the “Duration” to the length of the song you want to animate. Make sure the composition doesn’t fall short and doesn’t leave any extra frames at the end.

Hit “OK” for the canvas to open up.

Importing the Music

Press “Ctrl + I” and select the music file you want to import; it will appear in the Project tab.

Now it’s time to focus on the timeline. Go to the Layer menu, hover over “New,” and select “Solid.”

Clicking that adds a solid color layer; give it a name, let’s say “Visualizer.” This is the layer the effect will be applied to.

The new solid will now be selected in the timeline. While it is, go to the Effect menu, hover over “Generate,” and click “Audio Spectrum.”

While you can achieve similar results with “Waveform,” “Spectrum” is the more suitable option for this guide.

When the default spectrum pops up, go back to the Project tab and drag the music file into the timeline.

Now you’ll see your music displayed as a green line and the effect as a red one. Align both at the beginning of the timeline.

With the visualizer layer selected, return to the Effect Controls tab and adjust the settings to your liking.

Under the parameter titled “Audio Layer,” make sure your music layer is selected.

Once it is, the animation will respond to the audio automatically.

Tweaking the Animation

Without any adjustments, the animation would look pretty mundane. That’s why you want to play around with the settings to bring it to life.

You can control the thickness of the lines, add frequency bands, or adjust the height of the lines to get more contrast on the spectrum.

Moreover, you can change colors to make the inside and outside colors different; you can try all sorts of combinations.

The start and end points determine where the line is placed and allow you to angle it as well, while the side options cut the lines in half.

After you’ve added some tweaks, play the piece and see if you like how things are going.

Take some time and practice with all the parameters you can find to fine-tune the animation to your preferences.

How to Change the Shape of the Visualizer?

One way is to play around with the start and end points, but you can also delve deeper into shaping the visualizer.

For example, if you want to make a circle, click and hold the “Rectangle” tool to reveal the other shapes.

If you go for an ellipse, place the crosshair in the middle of the effect and hold “Ctrl + Shift” while dragging. This lets you scale the ellipse proportionally from the center of the screen.

Center the shape to your liking, and your effect will be surrounded by it.

Afterward, go back to the effect controls for the visualizer, find the “Path” option, and choose “Mask 1.”

This makes the effect work its way around the circle.

However, the effect may be cut off by the square-shaped mask. To fix that, find the mask mode dropdown (set to “Add”) in the lower portion of the timeline.

Change it to “None,” and the cropping will disappear.

If some lines run off the screen in any areas, bring the “Maximum Height” down to pull them back into view.

If you only want the effect on the outside, choose “Side B” from the side options. Conversely, choose “Side A” if you want the effect applied to the inside of the circle.

Exporting the Video

Go to “File,” hover over “Export,” and choose “Add to Render Queue.”

Under the “Output Module,” you can change several settings. Most importantly, make sure audio output is turned on so that you don’t lose your audio.

Moreover, if you want to composite the effect you’ve created over other video footage, you can have the black background automatically removed.

To do so, change the channels from “RGB” to “RGB + Alpha.” This renders the video in a 32-bit color space with an alpha channel, which gives you transparency.

Final Thoughts

Once you learn how each parameter affects the shape and size of the animations, you can produce some truly awesome visualizations.

All it takes is a little experimenting and a lot of passion.

Cymatic Lighting: How the Deaf Can Benefit from “Visual Sound”

Around 360 million people in the world and 1 in 8 Americans live in partial or complete silence because of hearing loss.

Cultural events such as social gatherings and performing arts are a powerful way to build up communities. However, the deaf and hard-of-hearing (DHH) can’t always enjoy their benefits as these events usually rely on sound to deliver their message or experience.

Artists and venues often lack the resources to accommodate them, or simply aren’t aware that their events exclude those who can’t hear. That’s why there should be a way to translate sound into other senses and make cultural events more accessible.

How Cymatic Lighting Came to Be

A Deaf musician defined the needs and functionality of the Cymatic Lighting system.

Lighting visualization is essential for giving meaningful cues about the nature of the sound.

The stereo features of the sound are translated to create a relationship between where a sound is panned in space and the visual pattern that’s displayed.

Qualities of sound, such as frequency and amplitude, are algorithmically mapped to similar qualities of light, such as luminosity and hue.

This is done as precisely as possible so that the visuals convey detailed audio and musical cues.
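
To make this concrete, here’s a minimal sketch in Python of what such a mapping could look like. The actual Cymatic Lighting algorithms aren’t public, so the 20 Hz to 20 kHz range and the log scaling below are illustrative assumptions:

```python
import colorsys
import math

def sound_to_light(frequency_hz, amplitude):
    """Map a sound's frequency to hue and its amplitude to brightness.

    Illustrative only: the 20 Hz..20 kHz range and the log scaling are
    assumptions, not the actual Cymatic Lighting algorithm.
    """
    # Pitch is perceived logarithmically, so map log-frequency onto the hue wheel
    lo, hi = math.log(20), math.log(20000)
    clamped = min(max(frequency_hz, 20), 20000)
    hue = (math.log(clamped) - lo) / (hi - lo)

    # Amplitude (assumed normalized to 0..1) drives brightness directly
    brightness = max(0.0, min(amplitude, 1.0))

    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return int(r * 255), int(g * 255), int(b * 255)

# Example: a loud 440 Hz tone maps to a mid-spectrum hue at near-full brightness
print(sound_to_light(440, 0.9))
```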

Evolving from Blinking Lights

By combining a scientific and technical approach, you can explore the relationship between sound and light through Cymatic principles (the study of sound made visible).

This is the best way to push the concept of visual sound beyond the simple blinking lights of earlier technology.

A basic depiction may not accurately convey the difference between the notes being played.

That’s why integrating additional code and algorithms that respond to the acoustic envelope (the dynamics) of the audio makes the display more responsive.

To make the whole process more advanced, multiple strips of LEDs can be added in a 2D grid to allow the users to see tiny changes in timbre.

For example, users can perceive differences in harmonics and frequency distribution, which lets them recognize whether a string, percussion, or synthesizer instrument is playing.
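
As a rough sketch of that idea, the Python snippet below maps one short audio frame onto an 8×16 grid of LED brightness values, treating columns as frequency bands that light up like bar meters. The grid size and band split are arbitrary assumptions, not details of the actual system:

```python
import numpy as np

def frame_to_led_grid(frame, rows=8, cols=16):
    """Map one short audio frame to brightness values for a rows x cols LED grid.

    Columns stand for frequency bands (low on the left, high on the right);
    each column lights up from the bottom like a bar meter.
    """
    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))

    # Group spectrum bins into `cols` bands and take each band's mean level
    bands = np.array_split(spectrum, cols)
    levels = np.array([band.mean() for band in bands])
    levels = levels / (levels.max() + 1e-9)  # normalize to 0..1

    grid = np.zeros((rows, cols))
    for col, level in enumerate(levels):
        lit = int(round(level * rows))
        if lit:
            grid[rows - lit:, col] = level  # brightness follows the band level
    return grid
```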

Feedback from the DHH

According to deaf users, integrating the Cymatic Lighting system into the environment (furniture or architecture) gave them a more relatable and engaging experience when it came to live or recorded music performances.

They were better able to follow the rhythm and distinguish between individual instruments such as drums or guitars from a mix of instruments. This truly enhances the way they experience music.

They even showed interest in installing the technology as an alert or safety system. This is because it can easily convey any audio signal by turning it into simple yet precise expressions of light and movement.

When installed as part of a smart home lighting system, the technology can indicate the difference between a fire alarm and a doorbell.

Future Development

The system has plenty of room for upgrades, including wireless control from mobile devices. This covers streaming music via Bluetooth and incorporating two-way alert feedback, so that users can receive notifications and see their own light display visualized on their mobile device screens.

Final Thoughts

Making more venues aware of the needs of the DHH opens up more opportunities to involve them as well as bring a broader range of communities together.

What is Audio Visualization?

Music visualization is a feature found in media player software and electronic music visualizers. It produces animated images that correspond to the piece of music that’s playing.

These images are rendered in real-time and synchronize with the music being played.

Visual techniques can be as simple as a simulated oscilloscope display or as complex as several composited effects combined.

Generally, visualization techniques depend on the changes in the volume and frequency spectrum to generate the images.

The stronger the correlation between a musical track’s spectral characteristics (such as amplitude and frequency) and the objects or components of the generated image, the more effective the visualization.

As a song is being played on a visualizer, the audio data is read in extremely short time slices (usually spanning less than 20 milliseconds).

The visualizer then performs a Fourier transform on each slice. This means it extracts the frequency components and produces a visual display based on that frequency information.
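
Here’s a minimal Python sketch of that slicing loop, assuming a mono signal sampled at 44.1 kHz and 20-millisecond slices:

```python
import numpy as np

SAMPLE_RATE = 44100                   # assumed sample rate
FRAME_LEN = SAMPLE_RATE * 20 // 1000  # 20 ms = 882 samples per slice

def spectra(audio):
    """Yield the magnitude spectrum of each ~20 ms slice of a mono signal."""
    for start in range(0, len(audio) - FRAME_LEN + 1, FRAME_LEN):
        frame = audio[start:start + FRAME_LEN]
        # Window the slice to reduce spectral leakage, then transform it
        yield np.abs(np.fft.rfft(frame * np.hanning(FRAME_LEN)))

# Example: every slice of a 440 Hz test tone peaks near the 440 Hz bin
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
for spectrum in spectra(tone):
    peak_hz = spectrum.argmax() * SAMPLE_RATE / FRAME_LEN
    assert abs(peak_hz - 440) < 50  # within one frequency bin of 440 Hz
```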

The programmer is responsible for how the visual display responds to the frequency info.

To update the visuals in time with the music without overloading the device, the graphics methods have to be extremely fast and lightweight.

Music visualizers used to (and some still do) modify the Windows color palette directly to achieve the most striking effects.

One of the trickiest parts about music visualization is that frequency-component-based visualizers don’t usually respond very well to musical beats such as percussion hits and similar sounds.

However, you can write more responsive (and often more complicated) visualizers that combine frequency-domain information with detection of spikes in the audio, since those spikes correspond to percussion hits.
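
A naive version of that spike detection can be sketched as follows: flag any slice whose energy jumps well above the recent average. The history window and threshold below are arbitrary choices, not standard values:

```python
import numpy as np

def detect_beats(audio, sample_rate=44100, frame_ms=20, threshold=1.5):
    """Return rough beat times (in seconds) where frame energy spikes."""
    frame_len = sample_rate * frame_ms // 1000
    n_frames = len(audio) // frame_len
    # Mean energy of each ~20 ms frame
    energies = np.array([
        np.mean(audio[i * frame_len:(i + 1) * frame_len] ** 2)
        for i in range(n_frames)
    ])
    beats = []
    history = 43  # roughly the last 0.86 seconds of 20 ms frames
    for i in range(history, n_frames):
        local_avg = energies[i - history:i].mean()
        # A spike well above the recent average likely marks a percussion hit
        if energies[i] > threshold * local_avg:
            beats.append(i * frame_len / sample_rate)
    return beats
```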

Basically, you take a certain amount of the audio data and analyze its frequency components. Then you use this data to modify some graphic that is redrawn repeatedly.

The most obvious way to run this frequency analysis is with an FFT, but you can also use simple tone detection with lower computational overhead.

For example, you could write a routine that repeatedly draws a series of shapes arranged in a circle, determine the color of the shapes using the dominant frequencies, and set their size using the volume.
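
In Python, the core of that routine might boil down to something like this, where the dominant frequency picks a hue and the RMS volume picks a radius (the 20 Hz to 2 kHz hue range and the pixel scale are arbitrary assumptions):

```python
import colorsys
import numpy as np

def frame_to_circle(frame, sample_rate=44100):
    """Turn one audio frame into an (r, g, b) color and a radius in pixels."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    # The dominant frequency picks the hue (20 Hz..2 kHz mapped onto 0..1)
    dominant = freqs[spectrum.argmax()]
    hue = float(np.clip((dominant - 20) / (2000 - 20), 0.0, 1.0))
    rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)

    # The RMS volume picks the radius, assuming samples normalized to -1..1
    radius = float(np.sqrt(np.mean(frame ** 2))) * 200  # pixels
    return rgb, radius
```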

Wait a second, what’s FFT?

When you search for FFT or “Fast Fourier Transform,” you’ll get tons of complicated math equations and graphs.

When you’re building a music visualizer, the most common way the music will be represented digitally is through a standard waveform.

This typically shows how loud the song gets over time. That’s why we see lots of big spikes around the middle of the song, which get smaller as the sound gets quieter.

The representation of the amplitude (loudness) over time is referred to as the time domain.

However, this only tells us how loud a song is at specific points. To extract more information and identify more components, we use a Fourier transform.

So rather than representing audio in the time domain, a Fourier transform lets us represent it in the frequency domain. Instead of amplitude over time, we see amplitude per frequency.
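
A quick Python demonstration makes the difference concrete: in the time domain, a mix of two tones only reveals overall loudness, while the Fourier transform shows both tones as separate peaks:

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate

# One second of two mixed tones: 440 Hz at full volume, 880 Hz at half volume
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Time domain: we can only say how loud the signal gets
print("peak amplitude:", np.abs(signal).max())

# Frequency domain: the two tones show up as distinct peaks
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print("strongest frequencies (Hz):", top_two)  # ~[440.0, 880.0]
```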

Different Audio Representations

Audio data can be processed in multiple ways, the simplest of which is displaying it as waveforms that change quickly, and then applying some graphical effects to that.

In the same manner, components like volume can be calculated and passed along to feed a graphics routine without needing a Fast Fourier Transform at all; you simply calculate the average amplitude of the signal.
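
That calculation is only a couple of lines in Python, with no FFT involved:

```python
import numpy as np

def volume(frame):
    """Estimate loudness of an audio frame without any frequency analysis."""
    mean_abs = np.mean(np.abs(frame))   # simple average amplitude
    rms = np.sqrt(np.mean(frame ** 2))  # RMS tracks perceived loudness better
    return mean_abs, rms
```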

Converting the data to the frequency domain with an FFT lets you build more complicated and advanced effects, including things like spectrograms.

Admittedly, it’s quite tricky to detect from the FFT output things that seem obvious to the ear, such as the timing of drum beats or the pitch of notes.

Generally, reliable beat detection and tone detection are quite tough, especially in real time. You can see a simple representation of amplitude-based audio detection on this site. Just play your favorite song and see it visually represented.