Cymatic Lighting: How the Deaf Can Benefit from “Visual Sound”

Around 360 million people worldwide, and roughly 1 in 8 Americans, live in partial or complete silence because of hearing loss.

Cultural events such as social gatherings and performing arts are a powerful way to build up communities. However, the deaf and hard-of-hearing (DHH) can’t always enjoy their benefits as these events usually rely on sound to deliver their message or experience.

Artists and venues often lack the resources to address this, or simply aren't aware that their events exclude those who can't hear. That's why we need ways to translate sound into other senses and make cultural events more accessible.

How Cymatic Lighting Came to Be

A Deaf musician defined the needs and the functionality of the Cymatic Lighting system.

Lighting visualization is essential for giving meaningful cues about the nature of the sound.

To create a relationship between where the sound is panned in space and where the visual pattern is displayed, the stereo features of the sound are translated into position.

Qualities of sound, such as frequency and amplitude, are algorithmically mapped to corresponding qualities of light, such as luminosity and hue.

This mapping is done as precisely as possible so that detailed audio and musical cues carry through into the visuals.
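The actual mapping used by the system isn't published, but as a rough sketch of the idea, the Python snippet below (all names and constants are illustrative assumptions) maps a block's dominant frequency to a hue and its RMS amplitude to brightness:

```python
import colorsys
import numpy as np

def block_to_color(samples, sample_rate=44100, low_hz=50.0, high_hz=5000.0):
    """Map one block of audio samples to an (R, G, B) LED color.

    Dominant frequency -> hue, RMS amplitude -> brightness.
    The frequency bounds and gain are illustrative assumptions.
    """
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]

    # Dominant frequency -> hue on a 0..1 scale, log-spaced between the bounds.
    dominant = float(np.clip(dominant, low_hz, high_hz))
    hue = np.log(dominant / low_hz) / np.log(high_hz / low_hz)

    # RMS amplitude -> brightness, clipped to the valid 0..1 range.
    brightness = float(np.clip(np.sqrt(np.mean(samples ** 2)) * 4.0, 0.0, 1.0))

    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return int(r * 255), int(g * 255), int(b * 255)
```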

Evolving from Blinking Lights

Combining a scientific and technical approach with cymatic principles (the study of sound made visible) lets you explore the relationship between sound and light.

This pushes the concept of visual sound well beyond the handful of blinking lights offered by earlier technology.

A basic depiction like that often isn't accurate enough to convey the difference between the notes being played.

That's why integrating additional code and algorithms that respond to the acoustic envelope (the dynamics) of the audio makes the display more responsive.
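The article doesn't spell out how that envelope tracking works, but a minimal sketch, assuming a simple one-pole attack/release follower whose output could drive LED brightness, looks like this:

```python
import numpy as np

def envelope_follower(samples, sample_rate=44100, attack_ms=5.0, release_ms=120.0):
    """Track the dynamics (acoustic envelope) of an audio signal.

    A fast attack keeps hits visible; a slow release gives a smooth decay.
    The time constants here are illustrative, not taken from the system.
    """
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    env = np.zeros(len(samples))
    level = 0.0
    for i, x in enumerate(np.abs(np.asarray(samples, dtype=float))):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env  # values that can drive LED brightness over time
```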

To take the process further, multiple LED strips can be arranged in a 2D grid, allowing users to see subtle changes in timbre.

For example, users can pick up differences in harmonics and in the distribution of frequencies, which lets them recognize the difference between a string, percussion, or synthesizer instrument.
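One plausible way to feed such a grid, sketched here under the assumption that each column is a log-spaced frequency band and the number of lit rows in a column reflects that band's energy, is:

```python
import numpy as np

def spectrum_to_grid(samples, sample_rate=44100, cols=16, rows=8):
    """Turn one audio block into a rows x cols on/off LED grid.

    Columns are log-spaced frequency bands (low to high); the lit rows in
    each column track that band's energy, so differences in harmonic
    distribution become visible. All sizes here are illustrative.
    """
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    edges = np.logspace(np.log10(50), np.log10(sample_rate / 2), cols + 1)
    grid = np.zeros((rows, cols), dtype=bool)
    for c in range(cols):
        band = spectrum[(freqs >= edges[c]) & (freqs < edges[c + 1])]
        energy = band.mean() if band.size else 0.0
        lit = int(np.clip(energy / (spectrum.max() + 1e-9), 0.0, 1.0) * rows)
        grid[:lit, c] = True  # light `lit` rows from the bottom of the column
    return grid
```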

Feedback from the DHH

According to deaf users, integrating the Cymatic Lighting system into the environment (furniture or architecture) gave them a more relatable and engaging experience when it came to live or recorded music performances.

They were better able to follow the rhythm and to pick out individual instruments, such as drums or guitars, from a mix of instruments. This truly enhances the way they experience music.

They even showed interest in installing the technology as an alert or safety system, since it can convey any audio signal by turning it into simple yet precise expressions of light and movement.

When installed as part of a smart home lighting system, the technology can indicate the difference between a fire alarm and a doorbell.
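The article doesn't describe the alert logic, but as a purely illustrative sketch, a smart-home integration might tell the two apart by their dominant frequency band and trigger a different light pattern for each (the band limits below are assumptions, not measurements):

```python
import numpy as np

# Hypothetical frequency bands; real alarms and doorbells vary by model.
ALERT_BANDS = {
    "fire_alarm": (2800.0, 3500.0),  # smoke alarms typically beep near 3 kHz
    "doorbell":   (400.0, 1200.0),
}

def classify_alert(samples, sample_rate=44100):
    """Guess which alert is sounding from the block's dominant frequency."""
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]
    for name, (lo, hi) in ALERT_BANDS.items():
        if lo <= dominant <= hi:
            return name  # e.g. flash red for "fire_alarm", pulse white for "doorbell"
    return "unknown"
```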

Future Development

Many upgrades are possible for the system, including wireless control from mobile devices. This covers streaming music wirelessly via Bluetooth and incorporating two-way alert feedback, so that users can receive notifications and view their own light visualizations on their mobile device screens.

Final Thoughts

Making more venues aware of the needs of the DHH opens up more opportunities to involve them and brings a broader range of communities together.

What is Audio Visualization?

Music visualization is a feature found in media player software and electronic music visualizers. It produces animated images that correspond to the piece of music that’s playing.

These images are rendered in real-time and synchronize with the music being played.

Visual techniques can be simple, such as a simulated oscilloscope display, or more complex, combining several composited effects.

Generally, visualization techniques rely on changes in volume and in the frequency spectrum to generate the images.

The stronger the correlation between a musical track's spectral characteristics (such as amplitude and frequency) and the objects or components of the generated visual image, the more effective the visualization is.

As a song is being played on a visualizer, the audio data is read in extremely short time slices (usually spanning less than 20 milliseconds).

The visualizer then performs a Fourier transform on each slice, extracting its frequency components and producing a visual display based on that frequency information.

The programmer is responsible for how the visual display responds to that frequency information.
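As a minimal sketch of that slice-and-transform loop (assuming mono samples and a 512-sample slice, about 11.6 ms at 44.1 kHz):

```python
import numpy as np

SAMPLE_RATE = 44100
SLICE_SAMPLES = 512  # ~11.6 ms at 44.1 kHz, under the 20 ms mark

def frequency_frames(samples):
    """Yield (frequencies, magnitudes) for each short slice of audio.

    What the visualizer actually draws from this frequency information
    is entirely up to the programmer.
    """
    samples = np.asarray(samples, dtype=float)
    window = np.hanning(SLICE_SAMPLES)  # reduce spectral leakage per slice
    for start in range(0, len(samples) - SLICE_SAMPLES + 1, SLICE_SAMPLES):
        block = samples[start:start + SLICE_SAMPLES] * window
        magnitudes = np.abs(np.fft.rfft(block))
        freqs = np.fft.rfftfreq(SLICE_SAMPLES, d=1.0 / SAMPLE_RATE)
        yield freqs, magnitudes
```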

To keep the visuals in time with the music without overloading the device, the graphics routines have to be extremely fast and lightweight.

Music visualizers used to modify the Windows color palette directly (and some still do) to achieve the most striking effects.

One of the trickiest parts of music visualization is that purely frequency-based visualizers don't usually respond very well to the beats of the music, such as percussion hits and similar transient sounds.

However, you can write more responsive (and often more complicated) visualizers that combine the frequency-domain information with detection of the spikes in the audio that correspond to percussion hits.

Basically, you take a chunk of the audio data and analyze its frequency components. Then you use this data to modify some graphic that is redrawn repeatedly.

The most obvious way to run this frequency analysis is with an FFT, but you can also use simple tone detection with a lower computational overhead.

For example, you could write a routine that repeatedly draws a series of shapes arranged in circles, determining the color of the circles from the dominant frequencies and setting their size from the volume.
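As a hedged sketch of the lower-overhead route mentioned above, zero-crossing counting can stand in for full tone detection when computing those two control values; the constants and the 0-4 kHz hue mapping are illustrative:

```python
import numpy as np

def cheap_circle_params(samples, sample_rate=44100):
    """Estimate (hue, radius_scale) for the circles without an FFT.

    Zero-crossing rate gives a rough pitch estimate (cheap tone detection),
    and mean absolute amplitude stands in for volume.
    """
    samples = np.asarray(samples, dtype=float)

    # Rough pitch: each pair of zero crossings is about one cycle.
    signs = np.signbit(samples)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    est_freq = crossings * sample_rate / (2.0 * len(samples))
    hue = float(np.clip(est_freq / 4000.0, 0.0, 1.0))  # map ~0-4 kHz onto 0..1

    volume = float(np.mean(np.abs(samples)))
    radius_scale = float(np.clip(volume * 5.0, 0.05, 1.0))
    return hue, radius_scale
```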

Wait a second, what’s FFT?

When you search for FFT or “Fast Fourier Transform,” you’ll get tons of complicated math equations and graphs.

When you’re building a music visualizer, the most common way the music will be represented digitally is through a standard waveform.

This typically shows how loud the song gets over time. That's why we see lots of big spikes in the louder parts of the song, and these get smaller as the sound gets quieter.

The representation of the amplitude (loudness) over time is referred to as the time domain.

However, this only gives us information about how loud a song is at specific points. To learn more information and identify more components, we use a Fourier transform.

So rather than representing audio in the time domain, a Fourier transform lets us represent it in the frequency domain. Instead of only showing amplitude over time, we see both amplitude and frequency.
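To make that shift in representation concrete, here's a small sketch using a synthetic two-tone signal, so the frequency-domain result is easy to verify:

```python
import numpy as np

SAMPLE_RATE = 44100
DURATION = 1.0

# Time domain: amplitude over time (a 440 Hz tone plus a quieter 880 Hz tone).
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Frequency domain: amplitude per frequency, via a Fourier transform.
magnitudes = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

# The two strongest bins land at (roughly) 440 Hz and 880 Hz.
top_two = freqs[np.argsort(magnitudes)[-2:]]
print(sorted(top_two))  # ~[440.0, 880.0]
```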

Different Audio Representations

Audio data can be processed in multiple ways, the simplest of which is displaying it as rapidly changing waveforms and then applying some graphical effects to that.

In the same way, quantities like volume can be calculated and passed along to feed a graphics routine without needing a Fast Fourier Transform to get frequencies: simply calculate the average amplitude of the signal.

Converting the data to the frequency domain using an FFT allows more complicated and advanced effects, including things like spectrograms.
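A sketch of both routes side by side, assuming block-based processing: average amplitude needs no FFT, while a spectrogram stacks one FFT per block.

```python
import numpy as np

def block_volume(block):
    """Average amplitude of one block -- no FFT needed."""
    block = np.asarray(block, dtype=float)
    return float(np.mean(np.abs(block)))

def spectrogram(samples, block_size=1024):
    """Stack per-block FFT magnitudes into a 2D array (time x frequency)."""
    samples = np.asarray(samples, dtype=float)
    window = np.hanning(block_size)
    columns = [
        np.abs(np.fft.rfft(samples[i:i + block_size] * window))
        for i in range(0, len(samples) - block_size + 1, block_size)
    ]
    return np.array(columns)
```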

Admittedly, some things that seem obvious to the ear, such as the timing of drum beats or the pitch of notes, are quite tricky to detect from the FFT output.

Generally, reliable beat detection and tone detection are quite tough, especially when you want them in real time. You can see a simple amplitude-based audio visualization on this site: just play your favorite song and see it represented visually.
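As a hedged sketch of the simplest energy-spike approach to beat detection (exactly the kind of heuristic that struggles on real music), you can flag a beat whenever a block's energy jumps well above the recent average; all constants here are illustrative:

```python
import numpy as np

def detect_beats(samples, sample_rate=44100, block_size=1024,
                 history_blocks=43, threshold=1.5):
    """Return the times (in seconds) of blocks whose energy spikes.

    A block counts as a beat when its energy exceeds `threshold` times the
    average energy of the previous `history_blocks` blocks (about one second
    of audio at these settings).
    """
    samples = np.asarray(samples, dtype=float)
    energies = [
        float(np.sum(samples[i:i + block_size] ** 2))
        for i in range(0, len(samples) - block_size + 1, block_size)
    ]

    beats = []
    for idx, energy in enumerate(energies):
        history = energies[max(0, idx - history_blocks):idx]
        if history and energy > threshold * np.mean(history):
            beats.append(idx * block_size / sample_rate)
    return beats
```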