Project Development Blog

Table of contents

Week 2: Understanding sound and setting the scene for the final assessment

Week 3: Reflecting on the impact of technology in shaping the new era of music

Week 4: Testing and analysing different recording approaches

Week 5: Drums and their contribution to the birth of music genres

Week 6: What is the essence of a bassline?

Week 7: Building a hook with leads and vocals

Week 8: Modulating depth and space with reverb

Week 2: Understanding sound and setting the scene for the final assessment

“Sound is a form of energy that propagates through a host medium in the form of longitudinal waves.”

Aim of the 1st session: To construct a simple ‘sound cannon’ and use the echo time to estimate the speed of sound.

Objectives:

  1. Assemble a simple sound cannon using a plastic cup and a balloon.
  2. Record a slow-motion video of the sound being generated with my phone, placed at a known distance from a wall.
  3. Estimate the duration of the echo time.
  4. Calculate the speed of sound.

I tried measuring the time the pop sound takes to reach the wall (1.51 m from my microphone) and bounce back to the microphone (i.e. the echo time). My best estimate is 0.73 s. Using the speed-of-sound relation v = d/t with the round-trip distance, v = 2(1.51 m)/0.73 s ≈ 4.14 m/s, which is clearly far from the actual value (c.f. ~343 m/s in air).

Limitations in this experiment:

  1. The distance to the wall was too small, and the sound frequency was too low to hear an echo clearly.
  2. There were obstacles along the direction of sound propagation that caused deflections.
  3. I also could not measure the time accurately, as the timing resolution of my phone's video editor is limited to 0.01 s, and the echo time over such a small distance is too short to measure reliably.

Update: My subject tutors pointed out that the frame rate of the video recording and human error in the calculation interfere with the results. A better approach would be to repeat the experiment with a partner at the receiving end to detect the sound.
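To make the arithmetic above reproducible, here is a small Python sketch of the calculation. The 1.51 m and 0.73 s values are the measurements from this session; 343 m/s is the nominal speed of sound in air at around 20 °C.

```python
def speed_of_sound(distance_m, echo_time_s):
    """An echo is a round trip: sound covers 2 * distance in the measured time."""
    return 2 * distance_m / echo_time_s

def expected_echo_time(distance_m, v=343.0):
    """Echo time the setup *should* produce at the nominal speed of sound."""
    return 2 * distance_m / v

print(speed_of_sound(1.51, 0.73))   # ~4.14 m/s, far too low
print(expected_echo_time(1.51))     # ~0.0088 s
```

The second number makes the failure concrete: at 343 m/s, the true echo time over 1.51 m is under 9 ms, smaller than the 0.01 s resolution of the video editor, so the 0.73 s estimate cannot have been the echo itself.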

Aim of the 2nd session: To plan my track structure for the final assessment

I intend to produce a bouncy, groovy and “techy” track with some vocals that add tension and direction to the flow.

I will be using the lyrics from the following track, but recorded in my own voice. I will then produce a track from these recorded vocals using Ableton as my DAW.

The lyrics:

Deep in the faith

In the groove

Sway side to side, let your body move

I will update about my recording and sound design techniques throughout the upcoming blog posts. Stay tuned!

Week 3: Reflecting on the impact of technology in shaping the new era of music

This week’s lectures provided great insight into how the digitisation of music has led to the emergence of new forms of innovation in music production.

The boom in electronic sound devices and Digital Audio Workstations (DAWs) made gear far more affordable and accessible to musicians. Similarly, step sequencers incorporated in DAWs meant that creators no longer needed to rely on the front-end interfaces of drum machines or other hardware.

Week 4: Testing and analysing different recording approaches

  • Aim of the session: To select a clean vocal recording sample by comparing my iPhone mic and my MacBook’s built-in microphone.

    For each device, I will use the same Apple-native recorder app – Voice Memos.

    First, I explored the recording quality of the MacBook’s built-in microphone. The room I am recording in is more or less quiet, with some sudden noises from the surroundings, as I live in student accommodation. Below is a raw recording sample from my MacBook mic, taken while sitting at my desk about 55 cm from the mic.

    Here is the audio waveform of the first recording sample, with the default mic level.

    The peaks are not too high, and the sound level is fairly uniform while speaking. Very little of the surrounding ambience is captured. To confirm that I was not losing voice quality/timbre, I made another recording with the mic level raised to about two notches below the maximum, and then analysed the signal-to-noise ratio of both MacBook samples.

    Below is a sample with the mic level adjusted, recorded about 30 cm from the source.

    This is the corresponding waveform.

    After listening carefully, the second recording offers a much clearer and more audible rendering of the voice; however, background noise and the trackpad click that stops the recording are now much more audible.

    Next, I recorded from my iPhone mic (again using the native Voice Memos app) with no adjustments at all.

    Below is the raw iPhone audio recording waveform.

    It appears that my iPhone’s default mic level is high enough to pick up some background noise.

    It seems that I cannot alter the iPhone’s mic level, but this time I positioned the mic ~30 cm from the source.

    Choosing the sample that maximises the desired audio source over unwanted background noise is an essential step in producing an accurate and clean reproduction of the original sound source. This leaves room for additional effects and manipulation during the post-processing phase, such as gating, EQing, or slicing techniques.

    Interestingly, I noticed that both the iPhone and the MacBook record in mono.

    After this session, I feel that the signal-to-noise ratio is better when recording with my MacBook’s built-in mic. The MacBook also gives me the flexibility to adjust the mic level, which seems to be restricted in my iPhone settings. Moreover, the sound of this room can be described as dry, which I like, as it lets me control the amount of reverb or echo more precisely afterwards.
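The signal-to-noise comparison can be made quantitative. Here is a minimal Python sketch of how I could estimate SNR from a recording: take one slice of samples while speaking and one slice of room noise only, and compare their mean power in decibels. The arrays below are placeholders, not my actual recordings; real segment boundaries would come from inspecting each waveform.

```python
import numpy as np

def snr_db(voiced, silence):
    """SNR in dB: mean power of a voiced segment over mean power of a
    silent (noise-only) segment from the same recording."""
    p_signal = np.mean(np.square(voiced, dtype=np.float64))
    p_noise = np.mean(np.square(silence, dtype=np.float64))
    return 10.0 * np.log10(p_signal / p_noise)

# Placeholder segments standing in for slices of a real recording:
voiced = np.full(1000, 0.5)    # samples while speaking
silence = np.full(1000, 0.05)  # samples of room noise only
print(snr_db(voiced, silence))  # 20.0 dB for this synthetic example
```

Computing this number for the MacBook and iPhone samples would turn the “sounds cleaner to me” judgement into a comparison of two figures.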

    Perhaps experimenting more with mic positioning, a different recording space, or better hydration before singing might give new perspectives for future MacBook recordings.

    I will also process the samples in Ableton in the upcoming blogs. Stay tuned!

Week 5: Drums and their contribution to the birth of music genres

Imagine a rock concert or a dance music festival without the sound of drums. That would surely be a terrible place to be for music enthusiasts.

Beats, from simple to complex, act as a focal point in evoking the characteristic rhythm that gives music genres their distinctive feel. From the classic backbeat that shaped pop from the 1950s to the 1970s to the pulsating groove of electronic dance music (EDM), emphasis falls on different drum hits. The backbeat accents beats 2 and 4 (typically with the kick on 1 and 3), while shifting the accents onto the off-beats between the main beats gives an upbeat feel, changing the “drum-feel”. This is interesting; just as our brain first recognises letters when reading, it also decodes music into its individual elements that sequentially form an overall perception – the music experience. Musicians exploit this in their drum patterns to give birth to new music genres, breaking with convention.

This week I was really inspired by the backbeat featured in the lecture. It felt completely fresh to my ears, which are accustomed to straight 4/4 patterns. To really get into the groove, I experimented with Battery by Native Instruments in Ableton to create a beat with the kick on beats 1 and 3 and the snare mainly on beat 3, with some off-beat patterns.

This is an audio clip with only the kick, a layered snare and some off-beat claps, as shown in the screenshot below.

Now, I include the ride, with a focus on beats 1 and 3.

Finally, I include some running closed hi-hats and an off-beat open hi-hat to add space and enhance the rhythm.

In regards to my final assessment, I will focus on the 4/4 beat with additive percussion, such as in the reference track listed below.
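The beat I built this week can be written out the way a DAW step sequencer displays it. The grid below is a hypothetical 16-step approximation (one bar of 4/4 at sixteenth-note resolution) of the pattern described, not an export from Battery; the exact clap placements are illustrative.

```python
# One bar of 4/4 at sixteenth-note resolution: beat 1 is step 0,
# beat 2 is step 4, beat 3 is step 8, beat 4 is step 12.
pattern = {
    "kick":  [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],  # beats 1 and 3
    "snare": [0,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],  # mainly beat 3
    "clap":  [0,0,1,0, 0,0,0,0, 0,0,1,0, 0,0,0,0],  # off-beat accents
}

def render(pattern):
    """Draw each lane as a row of 'x' (hit) and '.' (rest)."""
    return {name: "".join("x" if step else "." for step in steps)
            for name, steps in pattern.items()}

for name, row in render(pattern).items():
    print(f"{name:>5}  {row}")
```

Seeing the lanes side by side makes it obvious where hits stack on the main beats and where the off-beat claps fill the gaps.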

Week 6: What is the essence of a bassline?

Week 7: Building a hook with leads and vocals

Week 8: Modulating depth and space using reverb

This week’s content dives into the functions and aspects of adding reverb to the different elements that make up a track, with a focus on drums, vocals and leads, to create a sense of immersion.

Oftentimes, monophonic recordings pose a barrier to creating a sense of depth and space in a track. In real life, sound is much more lively and full of character than artificial/synthesised sound design. One aspect that gives the listener cues about the relative spatial and temporal qualities of a sound in a given environment is its reverb. In simpler terms, the sound travelling directly from the source is not the only path to our ears; it combines with soundwaves reflected off nearby surfaces to give an overall binaural effect. In music production, a mono sound can be transformed into stereo by exploiting the features of reverb, as well as panning.
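The direct-plus-reflections idea can be sketched numerically. The feedback comb filter below is a deliberately crude stand-in for what a reverb plugin does (real plugins are far more sophisticated); `delay_samples`, `feedback` and `wet` are illustrative parameters, with `wet` playing the role of the dry/wet knob.

```python
import numpy as np

def comb_reverb(dry, delay_samples, feedback=0.5, wet=0.3):
    """Feedback comb filter: each pass through the delay line adds a
    quieter copy of the signal (a crude 'reflection'), then the dry and
    wet signals are blended like a plugin's dry/wet control."""
    wet_sig = dry.astype(np.float64)
    for n in range(delay_samples, len(wet_sig)):
        wet_sig[n] += feedback * wet_sig[n - delay_samples]
    return (1.0 - wet) * dry + wet * wet_sig

# A single click (impulse) makes the decaying echoes easy to see:
click = np.zeros(12)
click[0] = 1.0
print(comb_reverb(click, delay_samples=4, feedback=0.5, wet=1.0))
# echoes appear at samples 4 and 8, each half as loud as the last
```

Turning `wet` up pushes the source “further away”, which is exactly the effect explored with the plugin below.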

I explored these effects using FabFilter Pro-R on a vocal recording that I will probably use in my final assignment. Below is the recording, made with my MacBook mic. It has very minimal reflections, as the room I used for recording is small and has limited reflective surfaces.

Next, I experimented with turning all the knobs to their maximum while setting the distance to “far”, as follows.

Obviously, this sounds really distant, giving the impression that the source is much further from the microphone, in a large open space.

I then experimented with adjusting the distance, as well as the dry/wet and space controls, to produce the end result. Before doing that, I adjusted the pitch of my vocal to the key of my track (D) using a tuner.

This is the audio.

The vocals can be heard more clearly than in the previous trial, but I feel the reverb is still too strong and creates a sense of dissonance, which I do not want here: this vocal serves as the hook of my track, and it is intended to mimic real-life acoustic effects while remaining easy to interpret.