Joshua Murphy
CT270 paper #4
Assigned 180405
Reverb: Spatial Language
Reverberation, at its most basic, is the collection of reflected sounds in an enclosed space. In other words, reverb is spatial language: a direct sound creates early reflections off surfaces, those reflections create further reflections, and the blended, overlapping sum is what humans perceive as reverb. In the right amount it is a desirable feature of auditoriums, and it is a powerful audio effect for mimicking live spaces in music.
Every room has characteristics that shape the way it reflects sound waves. In a more reflective room, sound takes longer to die away, and the room sounds “live.” Rooms that absorb most reflections are known as “dead,” a quality audio engineers achieve in studios with acoustic foam and bass traps. Drummers tend to record in large, live rooms to capture lots of natural reflections, while vocalists usually record in dead rooms called vocal booths and then add reverb during mixing to create a sense of acoustic space. Too much reverb can jumble elements together until it becomes difficult to distinguish one from another, resulting in a muddy sound.
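A room’s “liveness” is often quantified as RT60, the time it takes reverberation to decay by 60 dB. As a rough illustration, Sabine’s classic formula estimates RT60 from room volume and total surface absorption (the function and room figures below are hypothetical examples of mine, not measurements of any real room):

```python
# Sabine's formula: RT60 = 0.161 * V / A  (SI units),
# where V is room volume in m^3 and A is total absorption in m^2 sabins.
def rt60_sabine(volume_m3, surface_areas_m2, absorption_coeffs):
    # Total absorption: each surface's area times its absorption coefficient.
    a_total = sum(s * c for s, c in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / a_total

# A "live" room: hard, reflective surfaces (low absorption coefficients).
live = rt60_sabine(300.0, [200.0, 100.0], [0.05, 0.10])   # ~2.4 s
# The same room treated with acoustic foam ("dead" by comparison).
dead = rt60_sabine(300.0, [200.0, 100.0], [0.60, 0.80])   # ~0.24 s
```

The same formula explains why treating a room with absorptive material shortens its decay so dramatically: RT60 falls in direct proportion to the added absorption.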
Nearly all music styles hone the use of reverb, or the lack of it. In fact, reverb is encountered in the average human’s life several thousand times a day. It is such a common phenomenon that our brains adjust for it constantly, and it helps give us a sense of the size and shape of a room. In many ways, reverb is a large part of the reason some blind people can “see” with their ears. Auditoriums are massive rooms designed very specifically to control reverberation in particular ways with variously shaped surfaces; these rooms are truly where architecture meets math and audio. At the opposite extreme are “anechoic chambers,” rooms designed to achieve a total lack of reflections, which most people find unpleasant. It is interesting that something as conceptual as reverb, which few people choose to learn about or understand, can have such a large impact on our state of mind and being.
When the Harmonicats’ “Peg o’ My Heart” was released in 1947, it might have passed as a simple harmonica tune, but thanks to what is thought to be the first deliberate application of reverb on a record, it stood out to listeners. The ethereal “liveness” of the song was unprecedented, and the effect was achieved with a microphone and loudspeaker placed in the studio’s bathroom. Blues legend Robert Johnson reportedly played his guitar facing the corner of a room to shape his reflections, and guitarist Les Paul was an early pioneer of artificial reverb, going so far as to design eight echo chambers underneath the Capitol Records building in the 1950s. Early artificial designs, including spring and plate reverb, worked through a combination of short delay times and audio filtering with analog equipment. Digital reverb did not arrive until the 1970s, when Lexicon and EMT both released devices for creating artificial reverb. Since then, audio engineering has progressed to the point where we can simulate the physical behavior of nearly any space being modelled, down to precise detail.
When applying reverb, the engineer must consider whether the goal is to replicate a natural room sound or to create ear candy through a much more noticeable effect. Reverb has two elements, various parameters, and several “types.” The first element is the early reflections: the first group of echoes that occur when the sound waves hit the walls or ceiling, which tend to be more concise, echo-y, and defined. The second element is the decay: the sound created as those waves continue to bounce around the room thereafter, also referred to as the reverb tail. It is only at this point that most listeners would recognize the sound as reverb. Pre-delay is another important parameter; it specifies the gap in time between the direct sound and the first early reflections. A bigger space results in more pre-delay because of the time it takes the sound to travel across the room and reflect back. Damping also has a strong effect on overall tone: off softer surfaces, reverb tails lose high frequencies as they bounce around the room, which tends to produce a warmer sound with softer edges. That warmth can be a desired effect when the high end otherwise sounds plasticky or artificial.
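To make these elements concrete, here is a toy sketch of my own, assuming a single feedback comb filter (a deliberate simplification; real reverbs combine many such delay lines): pre-delay shifts the signal, feedback sets the decay time, and a one-pole low-pass in the loop provides damping.

```python
import numpy as np

def simple_reverb(x, sr=44100, pre_delay_ms=20.0, loop_ms=50.0,
                  feedback=0.7, damping=0.3):
    """Toy feedback-comb reverb: pre-delay, decaying tail, HF damping."""
    pre = int(sr * pre_delay_ms / 1000)   # silence before the first reflection
    loop = int(sr * loop_ms / 1000)       # delay-loop length
    n = len(x) + pre + loop * 40          # room for the tail to die out
    y = np.zeros(n)
    y[pre:pre + len(x)] = x               # direct sound, delayed by pre-delay
    lp = 0.0                              # one-pole low-pass state (damping)
    for i in range(pre + loop, n):
        # Each pass through the loop is attenuated (feedback) and
        # low-pass filtered (damping), so the tail decays and warms up.
        lp = (1.0 - damping) * y[i - loop] + damping * lp
        y[i] += feedback * lp
    return y
```

Raising feedback lengthens the decay, raising damping darkens the tail, and a longer pre-delay suggests a larger room, mirroring the parameters described above.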
Some reverbs offer a plethora of advanced parameters and features. High-frequency and low-frequency decay have different effects: longer high-frequency decay yields an ethereal sound, while longer low-frequency decay yields a larger, more spacious sound. High- and low-frequency attenuation on a reverb, also known as cutting, can prevent metallic or muddy sounds. Some reverbs allow stereo imaging through width parameters, and some even let you adjust the “togetherness” of the early reflections. This is early-reflection diffusion: increasing this parameter “thickens” the sound, while reducing it results in more separate, distinct echoes. You can also apply diffusion to the reverb tail, or use a gate to cut the tail short, each with its own potential applications for audio engineers.
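As one example of these advanced features, gating a reverb tail can be sketched as follows (a simplified hard gate of my own devising, not any particular plugin’s algorithm):

```python
import numpy as np

def gate_tail(tail, sr=44100, threshold=0.05, release_ms=10.0):
    """Cut a reverb tail once it falls below a threshold, with a short
    release fade to avoid an audible click at the cut point."""
    loud = np.where(np.abs(tail) >= threshold)[0]
    if len(loud) == 0:
        return np.zeros_like(tail)          # nothing above threshold
    cut = loud[-1] + 1                      # just past the last loud sample
    release = int(sr * release_ms / 1000)
    out = tail.copy()
    fade = np.linspace(1.0, 0.0, min(release, len(tail) - cut))
    out[cut:cut + len(fade)] *= fade        # quick fade-out
    out[cut + len(fade):] = 0.0             # silence after the gate closes
    return out
```

This is the basic idea behind the famous gated-snare sound: a big reverb whose tail is abruptly silenced instead of decaying naturally.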
Reverb types include spring, plate, hall, and room reverbs. Plate reverbs are based on early analog devices that drove a signal through a transducer into a large sheet of metal; while they lacked distinct early reflections, they produced a smooth, clean-sounding reverb. Spring reverbs used springs instead of sheet metal, with the effect adjusted by the springs’ tension and length. Room reverbs are based on the old idea of placing a microphone in a room and replaying the sound through a speaker to simulate the space of that room and add a sense of realism to a song or other media project. Finally, hall reverbs produce expansive sounds with long decay times.
One of the most fascinating things is that with modern DAWs, most of these parameters can be automated, effectively adding real-time modulation for a variety of aesthetic purposes in music mixing and advanced sound design. This is just the tip of the iceberg: powerful digital programs can layer, modulate, and mix reverbs and their parameters in endless ways to pack a sense of depth into any project.
As I have shown, reverb is an incredibly powerful tool for audio engineers during mixing and production. Reverb can be layered for masking purposes, or to mitigate the effects of a bad recording or earlier compromises to the program material. Within specific genres of music, reverb has massive creative potential that expands well beyond providing background space. Truly a form of artistic expression, spatial design now has the potential to be used as a creative medium. Audio is a highly mature science, and we now possess tools that give us unprecedented control over every aspect of our audio productions.
One of the most striking examples of this technological advancement is the advent and accessibility of convolution reverb. Much as a guitar can be sampled, the characteristic acoustic identity of a room can be profiled and replicated in the digital domain. Using a technique known as impulse response recording, a calibrated test signal played in a space is recorded to capture a digital fingerprint of how that three-dimensional space processes sound. Computational algorithms not previously possible in the analog era can then reproduce that real-life acoustic process through digital processing. In other words, any reverberation we hear in real life can now be artificially applied to any audio production.
The algorithms behind convolution reverb use intricate mathematical operations to recreate the reflection properties of a real-world space. They are grounded in a detailed understanding of the reflections and short delays that humans hear as reverberation. For example, the designers of convolution reverb plugins such as Altiverb take into account specific psychoacoustic principles such as the Haas (precedence) effect to build adaptive systems capable of realistic spatial processing in the digital domain. Convolution reverbs can replicate the acoustic profiles of many famous places, such as the Sydney Opera House or Holland’s GelreDome in Arnhem.
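At its core, convolution reverb multiplies and sums the dry signal against the recorded impulse response. A minimal sketch, using a synthetic impulse response of my own invention in place of a real room measurement:

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet=0.3):
    """Convolve the dry signal with a room's impulse response,
    then blend the wet result back in with the dry signal."""
    out = np.convolve(dry, impulse_response) * wet
    out[:len(dry)] += dry * (1.0 - wet)
    return out

# A stand-in IR: a unit impulse (the direct sound) followed by an
# exponentially decaying noise tail, mimicking a measured room response.
rng = np.random.default_rng(0)
tail = rng.standard_normal(4409) * 0.2 * np.exp(-np.arange(4409) / 1000.0)
ir = np.concatenate(([1.0], tail))

dry = np.zeros(1000)
dry[0] = 1.0                      # a single click as the dry signal
wet_out = convolution_reverb(dry, ir)
```

With a real measured impulse response in place of the synthetic one, the same operation imprints that room’s character onto any dry recording; production plugins use fast Fourier-domain convolution to do this in real time.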
Advances in other audio engineering tools have also expanded our options when using reverb in production. For example, powerful (and DSP-intensive) stereo imaging plugins such as iZotope’s Ozone Imager give us the ability to control the position and width of a reverb in the stereo field, opening up a vast array of advanced creative panning effects.
Of course, while those sorts of tricks are best reserved for appropriate modern productions, this precise panning control has uses in any mix where care is needed to avoid masking or to increase overall track RMS. This level of control makes reverb a powerful tool for beefing up a mix; for example, one could put a very short, low-volume, targeted reverb on a kick drum to bolster it in a cinematic way. In any case, reverb is not just a creative tool; it has powerful, practical applications for mixing in general.
Reverb is a powerful tool for adding character and life to elements in a mix, and there are many straightforward production strategies for achieving these effects. For example, the lead vocal in rock and power ballads often uses a strong plate reverb with heavy compression to increase the power of the singer’s voice and give it a commanding place on the sound stage. Reverb can be used to accentuate elements, as in the kick drum example noted above. It can also add a natural attack or decay to a sound. These are just a few of the many practical production applications of reverb, for both creative control and mixing.
One potentially overlooked option when using any reverb is tuning it with EQ. Some engineers shy away from powerful, modern reverbs because of an overly shiny high end, but neglect the option of EQing unwanted frequencies out of the reverb return, just like any other channel in a production.
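As a sketch of the idea, a simple one-pole low-pass on the reverb return tames that shiny high end, much as an EQ’s high-cut band would (the coefficient formula is the standard one-pole approximation; the cutoff value is purely illustrative):

```python
import numpy as np

def high_cut(signal, sr=44100, cutoff_hz=6000.0):
    """One-pole low-pass on a reverb return to tame a shiny high end."""
    # One-pole IIR: y[n] = a*x[n] + (1 - a)*y[n-1]
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(signal)
    state = 0.0
    for i, x in enumerate(signal):
        state = a * x + (1.0 - a) * state   # smooth = attenuate highs
        y[i] = state
    return y
```

In practice one would simply insert an EQ plugin on the reverb return channel; the point is that the return is a channel like any other and can be filtered freely.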
Reverb has numerous applications in post, and plays a critical role in any professional audio post-production project. It is important for many special effects, and convolution reverb is often used to place recorded dialogue accurately in its on-screen setting. Reverb is also often needed to “soften” the effect of harsh special-effects samples in a film’s sound field.
While reverb is certainly important in post for TV and film, some of the most advanced reverb programming occurs in video games. Video games have been in a race toward realism since they came into existence, and this drive for perfection has spawned incredibly nuanced reverb algorithms that process information from a game’s 3D graphics engine in real time to build a virtual acoustic model of the game world, which is then applied to the sounds of the game.
Reverb has become entrenched in the characteristic sound of many niche and underground music genres. Many EDM genres have signature reverb techniques considered a necessity of the style. Many underground rap productions use reverb automation and distortion to accentuate certain words or phrases, as well as to establish a dark, brooding atmosphere. Modern hard rock bands such as Linkin Park would not sound anything like themselves without their characteristic smooth, dense vocal reverbs. A classical recording may not be able to take place in a concert hall as desired, but with a convolution reverb plugin the orchestra can sound as if it were in any European concert hall the producer wishes. Reverb has given us the power to overcome boundaries that previously held back our productions and mixing capabilities. Now, we have more power over the spatial aspects of our mixes than ever before.
Reverb has come a long way. Far more powerful than the cold, finicky processors of the ’70s, our mere laptops now give us the power to do everything mentioned above. Some engineers shy away from powerful, modern reverberation processing, but this is a mistake. We live in a golden age of audio processing, and no matter the style of music or production, reverb has something to contribute to the process.
Reverbs will continue to improve, and in the future we may see even more advanced technology in plugins, perhaps integrating the 3D capabilities mentioned above. Whatever the future brings, it will come with greater power and freedom in our mixing decisions and production techniques. It will be interesting to see where developers take this technology next.
In all, reverb is one of the most powerful tools in a modern audio engineer’s arsenal. Like EQ, it can fix, fit, or feature source material in an effective and aesthetically valid way. Modern plugin reverbs have incredible potential for realistic, natural sound, which can save project studios and production houses large sums otherwise spent recording at off-site locations for effect. Innovations in plugin GUIs by industry leaders such as iZotope, FabFilter, and Waves have made mature use of the power of reverb easier and more intuitive than ever. In my opinion, reverb is a quintessential part of modern audio processing, and all engineers should stay current with its developments and ask themselves whether they are using reverberation processing to its fullest potential. As engineers, we are obligated to provide the best work we can to our clients, and reverb is now a critical part of that. Additional information about reverb can be found at the link below.
https://www.soundonsound.com/techniques/choosing-right-reverb
Works Cited
Buchanan, Jono. “Understanding Reverb.” Resident Advisor, 23 Feb. 2012, www.residentadvisor.net/features/1544.
Sherwin, Ian. “Reverb Parameters Explained in Logic: Pre-Delay, Attack, Decay, Diffusion, Density, Spread, Room Shape.” Point Blank’s Online Magazine, 3 Oct. 2014, plus.pointblankmusicschool.com/reverb-parameters-explained-in-logic-pre-delay-attack-decay-diffusion-density-spread-room-shape/.
Staub, Jamey. “The Five Main Types of Reverb.” Sonicscoop.com, SonicScoop, 3 Nov. 2013, sonicscoop.com/2013/11/03/the-five-main-types-of-reverb-and-how-to-mix-with-them-by-jamey-staub/.