
(not quite) All About The Bass - pt.1


The low frequency part of the sound spectrum is full of opportunities and bum pains. It can convey very visceral energy. It causes psycho-acoustic masking effects - where one sound hides another - which can be either good or bad. Playback systems also vary wildly in how they handle bass, which makes it rather unstable and unpredictable. The list goes on. All of which makes it both important and tricky to deal with. The cherry on top is that most tools for analysing sound are completely rubbish at low frequencies!

A typical spectrum analyser, used to draw a graph of the sound spectrum, is based on a maths technique called the Fourier Transform. You may have heard of it as the FFT - the Fast Fourier Transform - a commonly used efficient version. It takes a sound wave and finds the frequency components that make it up. So far so good. The problem is that it works in frequency bands. It splits the audio spectrum into a finite number of bands - a kind of pixelating. These sound pixels are evenly (linearly) spaced over the audio spectrum, but sound doesn't work that way! The spacing of frequencies as you actually hear them, musically, is logarithmic, not linear. In short, the lower down you go, the fewer pixels you get. For a typical plugin, the lowest frequencies are just one big approximate pixel with no actual detail at all! (Even if the tidy-looking graph seems to tell you otherwise.)
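To put some rough numbers on that, here's a back-of-envelope sketch in Python. The 44.1kHz sample rate and 2048-sample window are just assumed, typical-ish values - they're not taken from any particular plugin:

samplerate = 44100   # assumed CD-style sample rate
fft_size = 2048      # a window size typical of real-time analysers

bin_width = samplerate / fft_size   # about 21.5 Hz per 'pixel'

# Count how many pixels land in a low octave versus higher ones
for low, high in [(20, 40), (200, 400), (2000, 4000)]:
    print(f"{low}-{high} Hz octave: {(high - low) / bin_width:.1f} pixels")

The 20-40Hz octave gets less than one pixel, while the 2-4kHz octave gets over ninety. Same musical distance, wildly different detail.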

If you want to know for real what's going on down there, there are a few possible approaches. One is to have utterly kick-ass monitoring and listen to it. That's several kinds of nice, but may not be practical. It also doesn't tell you how a different system is going to behave. Another (zero cost) option is to do the FFT thing with such a massive number of pixels that the bass still has good resolution. I'm quite a fan of zero cost, so here's the technical lowdown... The number of pixels you get (aka bins in FFT speak) relates to the length of the sound you are analysing: a longer sample of sound gives you more pixels, spaced more closely in frequency. To get the resolution I wanted with sub-sonic energy, I needed around one hundred seconds of sound. That is, the FFT needs to average a full one hundred seconds to produce a single static, high-res graph. The limitation is implicit in the way sound and the FFT work. The real-time, jiggles-about-while-the-music-plays stylee just isn't an option if you want that bass microscope!
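The rule of thumb behind that: the bin spacing in Hertz is one divided by the window length in seconds. A throwaway snippet, with the window lengths picked purely for illustration:

# Bin spacing (Hz) = 1 / window length (seconds)
for seconds in [0.05, 2, 100]:
    print(f"{seconds:g} second window -> a pixel every {1 / seconds:g} Hz")

A twentieth of a second - typical for a real-time display - gives pixels 20Hz apart, which is useless down low. A hundred seconds gives a pixel every hundredth of a Hertz.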

So... Here's a bit of Python code that will take a stereo wav and draw a graph from zero to about 1.4kHz with super-duper detail. It's not a polished product, just bare bones and a bit clumsy, so you'll need a little tech savvy to use it. Contact me (or your techy friend) if you need a bit more of a clue!

Using it to analyse some natural sounds painfully confirms a few fears for me. The sound of waves on the sea, for example, has low frequency components that go down and down and down. The slight roll-off on the graph below 18Hz is probably from the simple recording equipment. It'll take crazy hardware if I want to reproduce that accurately! On the up side, using it on some sound I'd intended for a movie shines a light on some sub-sonic gremlins. Movie sound systems can be wired to produce tonnes of very low bass to rattle the chairs. It's pretty important to know if that low frequency energy is there or not, cos it may suddenly jump right out at you! A bit of steep high pass filtering - more on that in another post - and lots of embarrassment is saved... :)
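For what it's worth, here's a rough sketch of the sort of steep high pass I mean, leaning on the SciPy library. The file name and the 30Hz cutoff are made-up examples, not the exact treatment I used - the details can wait for that other post:

import soundfile as sf
from scipy.signal import butter, sosfiltfilt

# Made-up file name - point it at whatever needs de-rumbling
audio, samplerate = sf.read('MovieMix.wav')

# 8th order Butterworth high pass at 30Hz, run forwards and backwards
# (sosfiltfilt) so the phase doesn't get smeared
sos = butter(8, 30, btype='highpass', fs=samplerate, output='sos')
cleaned = sosfiltfilt(sos, audio, axis=0)

sf.write('MovieMix_highpassed.wav', cleaned, samplerate)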

import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf

# Read the stereo wav - change the file name here to suit
rawdata, samplerate = sf.read('AStereoSoundFile.wav')
timestep = 1/samplerate

# Copy the first 102400 samples of the left channel into a 1D array
oneD = np.zeros(102400)
looplen = len(oneD)
for i in range(0, looplen):
    oneD[i] = rawdata[i, 0]

# FFT of that window, plus the frequency belonging to each bin
sp = np.fft.fft(oneD)
freq = np.fft.fftfreq(len(oneD), timestep)
freq2 = np.zeros(3200)

# Magnitude of each complex bin
c = np.sqrt((sp.imag**2) + (sp.real**2))

# Zero everything above the bottom 1/32nd of the bins - only the
# low end gets plotted
fulllen = int(len(c))
ref = int(len(c)/32)
ref2 = (ref*32) - 1
for i in range(ref, ref2):
    c[i] = 0

# Keep just those low bins and their frequencies for the plot
ref = ref - 1
d = np.zeros(3200)
for i in range(0, ref):
    d[i] = c[i]
    freq2[i] = freq[i]

# Log level scale, linear frequency axis
plt.yscale('log')
plt.fill(freq2, d)
plt.show()

The above code uses the NumPy, SoundFile and Matplotlib Python libraries, so you may have to get them too. Turn a blind eye to my coding inefficiencies, or tidy it yourself if you're into that. Put it in the same directory as the file you want to look at. The file must be longer than one hundred seconds. Change the sf.read line so the file name is right. Run it and Bob's your uncle :) Hopefully. Worked for me in Python 3 under Linux anyway!...

Waves on a beach. The plot shows plenty going on below 20Hz - and that's after the simple recording setup has lost a bit of it! You can see the sound 'pixels', but there are enough of them to give a meaningful picture.

Zoomed out - the Python code as it stands graphs up to around 1.4kHz. You can see the overall slope that's typical of a lot of natural sounds.
