Recap: April Dinner with Stephanie Rozek

I apologize for having mentioned spring, given it was bitterly cold, windy, and snowing on the evening of our Dinner. Won’t make that mistake again… We assembled in the very warm and cozy Jelly Bean room at the Communitech Hub again, and warmed ourselves even further with spicy Taco Farm goodness, and the magnificence of sugar, fat, and carbs that is Debrodnik’s Doughnuts. We even had THREE guys in attendance, and two people there from other countries!

As we’ve previously mentioned, PJ and I (Melle) will be passing the torch for Girl Geeks at the end of this season. We have one interested organizer but have lost the other potential candidates, so Tina is still looking for co-organizers. If you’re interested, let us know and we can make connections.

Steph kicked things off in the traditional tech fashion: with cats (she ended her talk with a cat slide, too — good stuff). And to get us in the right frame of mind, she added a quote from Marilyn Manson:

“Music is the strongest form of magic.”

Which makes sense, if you consider how evocative it is, how culturally and individually defining it can be, and what it can do to us physically.

Then we headed back in time, to the advent of sound recording. Back in 1857, Édouard-Léon Scott de Martinville brought us the phonautograph, which could record sound waves passing through the air. It wasn’t really intended for playback, though, just visual capture and study of the recording.

The phonograph came next with Edison in 1877, and used a cylinder covered in an impressionable material like tinfoil, lead, or wax. A stylus etched grooves on the cylinder, and it was these grooves that were “read” to play back sound. A decade later, between 1887 and 1893, we got the gramophone from Emile Berliner. The gramophone used a disk instead of a cylinder. The grooves the stylus made in the medium corresponded to changes in air pressure created by the original sound. Trace a needle through the groove and amplify it, and voilà, recorded playback.

However, back in the late 19th century we didn’t have truckloads of gear or swarms of roadies, so how did we make acoustical recordings? Well, performances were recorded live directly to the recording medium. The performers crowded around a big cone, which had a diaphragm located in its apex. The diaphragm had a cutting needle connected to it, and the needle made the groove in the recording medium. The sound quality was… not spectacular. But for the first time, a concert or other performance wasn’t one-time only.

Technology progressed gradually over the next couple of decades. By 1925, microphones had become a lot more sophisticated, greatly improving sound quality, and they replaced the horn/cone. Radio got very popular, and foley art developed: the creation of everyday sound effects to set mood and enhance story, like wobbling sheet metal for a storm, or coconut shells for horse hooves, etc. Bell Labs developed the Western Electric (Westrex) recording system.

Microphones captured sound and converted it into an electrical signal, which was then amplified and used to actuate the recording stylus. This eliminated the “horn sound” resonances common in acoustical processes (that squawkiness in really old recordings), and enabled clearer and more “full-bodied” recordings. A wider and more useful range of audio frequencies could be captured, including sounds that were farther away or weaker.

At that point Steph took a brief side trip into microphone tech, since mics are pretty central to sound recording then and now. There are three main types: dynamic, condenser, and ribbon. The key aspects of each and considerations for their use are:

  • frequency response
  • polar response patterns
  • pop shields
  • placement

[Image: types of microphones]

One thing many people don’t know is that just setting up a mic (or several) isn’t going to guarantee great sound capture, or any capture at all, for that matter. Microphones have polar patterns that determine where and how they pick up sound. Omnidirectional mics capture sound from all around them. Cardioid mics capture sound from in front of them. Hypercardioids capture sound mostly from in front, and a bit from behind. Bi-directional (figure-8) mics capture sound roughly equally from front and back. And shotgun mics pick up sound in a narrow channel to the front (with a smaller lobe behind), and in a very limited way at each side. (Shotgun mics are the ones common on TV shoots, etc.)
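
For the curious, most of these patterns come from a single textbook formula. Here’s a minimal Python sketch (my illustration, not something from the talk) of the first-order polar pattern equation, sensitivity(θ) = A + B·cos(θ), where the (A, B) mix determines the pattern:

```python
# A minimal sketch of the standard first-order polar pattern formula:
# sensitivity(theta) = A + B*cos(theta), with A + B = 1.
import numpy as np

PATTERNS = {
    "omnidirectional": (1.0, 0.0),   # equal pickup all around
    "cardioid":        (0.5, 0.5),   # front pickup, null at the rear
    "hypercardioid":   (0.25, 0.75), # tighter front lobe, small rear lobe
    "bidirectional":   (0.0, 1.0),   # figure-8: front and back, nulls at the sides
}

def sensitivity(pattern: str, theta_deg: float) -> float:
    """Relative pickup at angle theta (0 = directly in front).
    Negative values mean polarity-inverted pickup; the magnitude
    is what matters for loudness."""
    a, b = PATTERNS[pattern]
    return a + b * np.cos(np.radians(theta_deg))

for name in PATTERNS:
    front, side, rear = (sensitivity(name, t) for t in (0, 90, 180))
    print(f"{name:>15}: front={front:+.2f}  side={side:+.2f}  rear={rear:+.2f}")
```

(Shotgun mics don’t fit this formula; their extra-narrow pattern comes from an interference tube in front of the capsule.)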

Microphones are also very sensitive instruments, and need to be “protected” a bit from the harsher elements, e.g. plosives like “p” and “b” when we speak, which sound really unpleasant when recorded without filtering. This can be done with smaller foam covers, big fuzzy covers (to block bigger disturbances like wind), or even pantyhose stretched over a coat hanger. And when told where and how to address the mic, listen. No, you will not come off too loud if told to basically eat the thing.
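
Pop shields work acoustically, but since much of a plosive’s energy is low-frequency rumble, a similar rescue can be attempted in software after the fact. A hedged sketch, assuming SciPy and an 80 Hz cutoff (a common, but by no means universal, choice):

```python
# A software stand-in for a pop shield: a gentle high-pass filter
# to tame low-frequency thumps in an already-recorded take.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44_100  # sample rate in Hz

# Fake one second of "speech" (a 220 Hz tone) marred by a decaying 30 Hz thump.
t = np.linspace(0, 1.0, fs, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
thump = 0.8 * np.sin(2 * np.pi * 30 * t) * np.exp(-10 * t)
recording = voice + thump

# 4th-order Butterworth high-pass at 80 Hz, run forward and backward
# (sosfiltfilt) so it adds no phase distortion.
sos = butter(4, 80, btype="highpass", fs=fs, output="sos")
cleaned = sosfiltfilt(sos, recording)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"low-end rumble before: {rms(thump):.3f}")
print(f"deviation from clean voice after: {rms(cleaned - voice):.3f}")
```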

So, moving back to recording: magnetic recording was invented around 1898-1899, and found its first practical uses in the 1930s in Germany (with the K1 tape recorder made by AEG). By 1943 the same company had come out with stereo tape, which could record more than one “channel”. The Allied forces discovered this German technology as a result of WWII, likely finding it when they overtook German positions. By 1948, 3M (an American company) had magnetic tape, and the Ampex company brought us the Model 200 tape deck. (Could mix tapes be far behind…?)

Historically, recording had been on a single track, which limited the quality, richness, and depth of sound recording. That changed once the 1950s arrived, and we could record multiple tracks synchronized on a single medium. Stereo recording became very widely adopted (Les Paul, of guitar fame, ordered the first 8-track from Ampex).

In the 1960s, 3-track recording was popular, producing the famous “Wall of Sound” experience. Typically these recordings involved the lead on one track and two backing tracks. In the late 60s 4-track recording got big, which is how the Beatles and Stones recorded. We started to get fancy, doing things like “bouncing” recordings to add complexity to the experience. Audio cassette tapes also arrived for consumers by 1963.
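
To make “bouncing” concrete: several recorded tracks are mixed down to a single track, freeing the originals up for new parts, at the cost of never being able to re-balance the bounced parts again. A toy Python illustration (mine, not Steph’s):

```python
# A toy illustration of "bouncing" tracks on a 4-track machine.
import numpy as np

fs = 44_100
t = np.linspace(0, 1.0, fs, endpoint=False)

# Three mono "takes" occupying three of our four tape tracks.
vocals = 0.6 * np.sin(2 * np.pi * 440 * t)
guitar = 0.4 * np.sin(2 * np.pi * 330 * t)
bass   = 0.5 * np.sin(2 * np.pi * 110 * t)

# Bounce: sum the tracks at chosen levels, then normalize to avoid clipping.
bounce = 1.0 * vocals + 0.8 * guitar + 0.9 * bass
bounce /= np.max(np.abs(bounce))

# The three source tracks can now be erased and reused; only `bounce` remains.
print(f"peak after normalizing: {np.max(np.abs(bounce)):.2f}")
```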

Then we started making mix tapes, evolving tech, and arguing about which tech was better. We’re still doing that. Digital recording arrived in the 1980s, and thus the analog vs. digital debate raged. CD and MP3 formats arrived, the former making the music industry a crapload of money, and the latter taking it away. (Relatedly, there’s a great recent New Yorker piece about that very subject and the rise of music piracy, warez, torrenting, and the like.) Hard disk recording arrived in the late 1990s.
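
For a sense of what “digital” means here: CD audio samples the signal 44,100 times per second and stores each sample as a 16-bit integer. A back-of-the-envelope sketch:

```python
# Digitizing audio at CD quality: sample 44,100 times/second, store
# each sample as a 16-bit signed integer.
import numpy as np

fs = 44_100    # CD sample rate (Hz)
bits = 16      # CD bit depth

t = np.linspace(0, 0.01, int(fs * 0.01), endpoint=False)
analog = np.sin(2 * np.pi * 1_000 * t)   # a 1 kHz test tone

# Quantize to 16-bit signed integers, then reconstruct.
scale = 2 ** (bits - 1) - 1
digital = np.round(analog * scale).astype(np.int16)
reconstructed = digital / scale

# At 16 bits the rounding error is tiny (roughly a -98 dB noise floor).
print(f"max quantization error: {np.max(np.abs(analog - reconstructed)):.1e}")
print(f"stereo data rate: {fs * bits * 2 / 8 / 1024:.0f} KiB/s")
```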

From there we moved into how things look today. In a word: apps. Gazillions of them, to produce or play pretty much any way you like. Some that Steph discussed (available for the major mobile platforms, just check their respective app stores):

  • Spectrum analyzers: as noted, these analyze the spectrum of frequencies being recorded and enable slicing and dicing to find specific sounds to remove to improve quality (a bare-bones version appears in the sketch after this list).
  • Sound meters: a decibel noise meter – these are very common and available for all mobile platforms.
  • Perfect Ear: enables you to work with chords, interval training, etc.
  • Chordbot: enables you to write a song on your phone, adds chord progressions, etc.
  • OCR apps: write out a score and scan it to a digital copy.
  • NotateMe: similarly, draw and write on your screen and turn your scribblings into a real musical score.
  • MIDI Sheet Music: can convert MIDI files into sheet music.
  • Heat Synthesizer Pro / Caustic: sound design and composition right from your mobile device.
  • Nanoloop: compose and play with sounds – a synthesizer/sequencer/sampler.
  • Touch DAWs: Digital Audio Workstation – full mixing board control from a tablet or phone.
  • Virtual Guitar Pro / Real Guitar / Guitar Flex: play around or compose right from a phone, strum the screen and it sounds like a real instrument.
  • Real Drum: like the guitar apps, play and record drumming from your device.
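
As promised above, here’s a bare-bones sketch (mine, not any of the apps listed) of what a spectrum analyzer and a sound meter each compute: an FFT to show which frequencies are present, and an RMS level converted to decibels:

```python
# A bare-bones spectrum analyzer and sound meter.
import numpy as np

fs = 44_100
t = np.linspace(0, 1.0, fs, endpoint=False)
# A signal with a wanted tone (440 Hz) plus unwanted mains hum (60 Hz).
signal = 0.7 * np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)

# Spectrum analyzer: FFT magnitudes, labeled with real frequencies.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
for peak in np.argsort(spectrum)[-2:]:
    print(f"peak at {freqs[peak]:.0f} Hz")   # the hum stands out, ready to be removed

# Sound meter: RMS level relative to digital full scale, in dBFS.
rms = np.sqrt(np.mean(signal ** 2))
print(f"level: {20 * np.log10(rms):.1f} dBFS")
```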

Of course, digital tools for sound recording, manipulation, sharing, etc. don’t end there; Steph mentioned a number of others as well.

And now that we had the nuts and bolts down, things got really interesting. Steph moved on to psychoacoustics. FYI, if a tree falls in the forest and there is no one there to hear it, technically it does not make a sound. Why? Because as we define it, “sound” requires a source, a medium, and a receiver. (So source = tree, medium = air, receiver = our ears and brains.) No ears/brains, no sound.

Psychoacoustics is the study of how the brain perceives and interacts with sound.

[Image: psychoacoustics diagram]

From Steph’s notes:

Fields of study within psychoacoustics include pitch perception, sound localization, and musical acoustics. Use of psychoacoustics can help sound engineers create more realistic sound space experiences for music, movies, and concerts. In medicine, psychoacoustics can help medical professionals identify and treat causes of hearing loss or sound localization malfunction. Tests performed when studying psychoacoustics often examine the nature of the sound as well as brain activity that occurs in response to sound.

Psychoacoustics can even be used for unique or nefarious purposes, like sonic warfare or discouraging loitering youth. Very low frequencies can’t be heard, but can reportedly cause uncontrollable bowel loosening, and rock, metal, etc. are still used from time to time to wear down terrorists, dictators, and the like. As for the youth: the range of frequencies we can detect shrinks with age, so in our teen years we can still hear very high-pitched sounds, which are annoying, but which people over 20 usually can’t detect anymore.
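
For fun, the youth-deterrent trick fits in a few lines. A sketch that writes a roughly 17.4 kHz tone (the frequency reportedly used by “Mosquito”-style devices; treat the exact number as an assumption) to a WAV file; most listeners past their mid-20s won’t hear much of anything:

```python
# Write a ~17.4 kHz "teens-only" tone to a WAV file, using only the
# standard library plus NumPy.
import wave
import numpy as np

fs = 44_100   # comfortably above twice 17,400 Hz, as sampling requires
t = np.linspace(0, 3.0, fs * 3, endpoint=False)
tone = 0.3 * np.sin(2 * np.pi * 17_400 * t)
samples = (tone * 32_767).astype(np.int16)

with wave.open("mosquito.wav", "wb") as f:
    f.setnchannels(1)     # mono
    f.setsampwidth(2)     # 16-bit samples
    f.setframerate(fs)
    f.writeframes(samples.tobytes())
```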

Resonance is the impact of one vibration on another. The classic example is an opera singer shattering a wine glass just by singing a high note, or the walls of Jericho falling in the Bible. Important areas of resonance include entrainment, sympathetic vibration, resonant frequencies, and resonant systems.
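
A worked example of a resonant system (my illustration, not from the talk): a damped oscillator driven at different frequencies responds most strongly near its natural frequency, which is the same effect behind the shattered wine glass. Assuming a lightly damped “glass” with a 440 Hz natural frequency:

```python
# Steady-state response of a driven, damped harmonic oscillator.
import numpy as np

f0 = 440.0    # assumed natural frequency of the "glass" (Hz)
zeta = 0.01   # damping ratio: lightly damped, as glass is

def gain(f_drive: float) -> float:
    """Amplitude gain relative to the static response."""
    r = f_drive / f0
    return 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

for f in (220, 400, 440, 480, 880):
    print(f"driving at {f:>3} Hz -> amplitude x{gain(f):7.1f}")
# Exactly at 440 Hz the response is ~50x; far off resonance, almost nothing.
```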

Entrainment can change the rate of brain waves, breaths, or heartbeats from one speed to another through exposure to an external, periodic rhythm (think of tapping your feet to a beat). We can alter one pulse (such as brain waves) with music, and the other major pulses (heart and breath) will dutifully follow. This is an important part of music therapy, for example.

Sound can have negative effects as well, like noise pollution; this is why cities have ordinances against overly loud construction noise, concerts, etc. But sound can also be crafted for all kinds of purposes, including art, and Steph shared some sound installations as examples.

And with that, we came to the end of our sound adventures. A great talk by Steph, and lots more to think about with something we largely take for granted in day-to-day experience.

We’ll be announcing our May Dinner shortly, when we take a bit of a field trip to check out the new office of friend, alum, and sponsor Magnet Forensics – stay tuned!
