Tuesday, November 14, 2006

Creative Computing Major Project Semester 2, 2006

Delayed Extraction...



Program Note:

Delayed Extraction

10'30

This work has evolved to explore the sonic and musical possibilities offered by a mere handful of guitar tones. With enough experience, it becomes very rewarding to leap into electronic music performance with restrictions placed on your use of tools. I felt the restrictions did not adversely affect my creativity, but opened my eyes to concepts I had overlooked in the past.

My initial approach was to use large chunks of pre-prepared audio as a base, and ‘improvise’ with whatever I could over the top. Realising soon enough that this was keeping me locked in ‘studio composition’ mode, I opted for a more abstract approach. My point of departure then became a concept I have described as ‘abstract guitar’. This involves capturing a few select notes from the guitar with an audio recording device, then quickly splicing and manipulating them with editing and sequencing software (Ableton Live, F-scape, and Plogue Bidule) to build a musical result as quickly as possible. It is an exciting and often unpredictable way of composing that yields many surprising results.

The presence of the conventional drum kit was an experiment that became too important not to be included. Even if I am writing abstract music, I still crave a definable rhythm section of some description. In this case, the drum kit is an inbuilt programmable feature of the application ‘Ableton Live’, and a fine sounding one at that…


Analysis:

My approach to this project quickly became focused on reducing the time lag between realising the various components of the piece. The Kawai K5000 has been instrumental in this regard. The first task for the Kawai was to ease the pain of recording audio samples from my guitar. By mapping the incoming MIDI data via a ‘CC to parameters’ device, I was able to give myself a couple of on/off controls for the file recorders. It’s not as convenient as having a button to press, since the K5000 only has rotary knobs, but it beats using the mouse. The real convenience is that you don’t even need to open up the file recorders for the purpose (although it is advisable for visual confirmation of operation). It’s as simple as turning the knob hard right to start recording, and hard left to stop. The newly created audio files conveniently turn up in Live’s file manager, provided you have set your recording path correctly.
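The threshold logic behind the knob trick is simple enough to sketch outside of Live. The following Python fragment is an illustration only (Live and Bidule handle the real mapping internally); it assumes the mido library, and the CC number, thresholds, and print statements are hypothetical stand-ins:

    import mido

    RECORD_CC = 74          # hypothetical CC number assigned to the K5000 knob
    ON_THRESHOLD = 120      # knob turned hard right
    OFF_THRESHOLD = 8       # knob turned hard left

    recording = False

    with mido.open_input() as port:                # default MIDI input
        for msg in port:
            if msg.type != 'control_change' or msg.control != RECORD_CC:
                continue
            if msg.value >= ON_THRESHOLD and not recording:
                recording = True
                print('recorder started')          # stand-in for arming the file recorder
            elif msg.value <= OFF_THRESHOLD and recording:
                recording = False
                print('recorder stopped')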


With the audio capture functionality sorted out, it was time to look at audio file manipulation, as this was to be the main point of interest in the piece. It quickly became apparent that the ‘type’ of audio captured would heavily influence its potential for real time manipulation. The more complex I tried to make the files (such as trying to record predetermined guitar riffs without following a metronome), the more difficult they became to work with in real time. A change of tactic was needed, so I decided to simplify the input and just play single notes, which would then be chopped up in F-scape and sequenced as small segments in Live. The results were instantly more successful, so I stayed on this path.
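For the curious, the chopping stage can be approximated in a few lines of Python. This is a rough sketch of the idea rather than what F-scape actually does; it assumes the soundfile library, and the file names and the 125 ms slice length are arbitrary choices:

    import soundfile as sf

    audio, sr = sf.read('single_note.wav')    # a captured guitar note
    seg_len = int(0.125 * sr)                 # 125 ms slices, chosen arbitrarily

    # Split the note into equal slices and write each one out,
    # ready to be loaded into a playback device such as Impulse.
    for i in range(len(audio) // seg_len):
        segment = audio[i * seg_len:(i + 1) * seg_len]
        sf.write('slice_%02d.wav' % i, segment, sr)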


There was still the issue of time lag, however, as recording a note, dumping it to F-scape for splicing, and then loading the Impulse playback devices in Live was still blowing the timeframe of a piece out to around thirty minutes from start to finish. The only solution to this problem that my limited experience with this type of performance could offer was to create a detailed and straightforward score to follow. This way I would not waste time between processes dwelling on what needed to be (or what could be) done next. Taking advantage of OSX’s handy screenshot capability, it was simple to provide a relevant visual cue for what was to come next. If it were something to do with the Impulse in Live, for instance, then my score would show a picture of the object, along with a short written direction to clarify the required procedure. Some parts of the score contained traditional notation, to provide a reference point for either the input of guitar notes, or the way that they might be sequenced. Although these could be followed to the note if desired, they are only intended as a rough guide, to negate the need for sophisticated conventional musical thought, which would hamper the process.


The downloads listed below contain the complete performance in Mp3 format, along with the relevant program files if you feel the urge to recreate the masterpiece for yourself...

I know, I know, 10 minutes is a long time - if you happen to get bored, fast forward to the drum solo at 6'15, it goes off!

Delayed Extraction (14.4 MB Mp3)

Performance Live Patch (0.097 MB Ableton Live File)

Performance Bidule Patch (0.243 MB .bidule)

Performance Reason Patch (0.034 MB .rsb)

Performance Cubase Score File (0.060 MB .cpr)

Performance Complete Score (2.6 MB .doc)

Monday, November 06, 2006

Audio Arts - Minor Project Semester 2, 2006

Driving the Type 3 Volkswagen:

Ain’t she the purrtiest...


Process information:

My initial objective was to create a realistic representation of the engine sound of the car. At first I stumbled across a potentially suitable sound using Plogue Bidule’s FFT and spectral-to-MIDI devices, which converted my raw guitar signal and sent the interpreted MIDI data to Reason. A bass synthesiser patch in Reason produced some very VW-like results, as the random incoming MIDI data caused it to cough and splutter notes of randomly modulating low frequency and duration. Listening to these results alongside the original recording of the car, however, quickly revealed them to be too unpredictable for practical use as engine noise.
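The core of a spectral-to-MIDI conversion can be sketched in a few lines of Python. This is only a loose illustration of the principle, not Bidule’s actual implementation; it assumes numpy, and the frame length and test tone are arbitrary:

    import numpy as np

    def frame_to_midi_note(frame, sr):
        # Return the MIDI note nearest the loudest FFT bin of one frame.
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freq = np.fft.rfftfreq(len(frame), 1.0 / sr)[np.argmax(spectrum)]
        return int(round(69 + 12 * np.log2(freq / 440.0)))  # A440 = note 69

    sr = 44100
    t = np.arange(8192) / sr
    frame = np.sin(2 * np.pi * 82.41 * t)   # roughly the open low E string
    print(frame_to_midi_note(frame, sr))    # prints 40 (E2)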

Instead I opted for the technique of white noise modulation in Plogue Bidule, as this offered a great deal of control over the sound and the variable aspects of its character. I used a mixture of real time control over the frequency variables with the mouse (simply recording my efforts and splicing out the useful components) and MIDI control. The MIDI control set-up was tricky at first, but establishing a connection between Plogue Bidule and Cubase proved to be well worth the effort. Using a MIDI to value device receiving modulation data from Cubase, and a parameter modulator in Bidule, allowed me to easily map the incoming control data to Bidule’s variable devices. This enabled the use of ‘ramp drawing’ in Cubase to visually represent things that were happening in the recording and in my synthesised sound, namely the revving of the car engine. I only used the automation in this way for the low frequency component of the engine noise, as the white noise modulation had already been appropriately achieved through real time human control.
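The underlying idea of modulated white noise plus a ramped rev rate can be approximated offline in Python. This is a simplified sketch under my own assumptions, not the actual Bidule patch; it assumes numpy and soundfile, and the rev curve stands in for the modulation data drawn in Cubase:

    import numpy as np
    import soundfile as sf

    sr = 44100
    dur = 4.0
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)

    # 'Rev' curve: the firing rate climbs from idle and falls back again.
    rev_hz = 30 + 50 * np.sin(np.pi * t / dur) ** 2

    # White noise, amplitude-modulated at the instantaneous rev rate.
    phase = 2 * np.pi * np.cumsum(rev_hz) / sr
    noise = np.random.uniform(-1, 1, len(t))
    engine = noise * (0.55 + 0.45 * np.sin(phase))

    # A weak low frequency fundamental underneath, kept low in the mix.
    engine += 0.3 * np.sin(phase)

    sf.write('engine_sketch.wav', 0.5 * engine, sr)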

The result of this combination shared some strikingly similar characteristics with the real world engine sound, as you will see from inspecting the attached sonogram. There is a heavier and more consistent low frequency presence in the 10 to 20 Hz range of the synthesised sound by comparison, but this is of negligible importance, as human hearing is weak at that level anyway. The 60 to 75 Hz range is where the real success is evident. The synthesised sound has a stronger and more consistent signal there than the original, but this can most likely be attributed to the bass frequency, which is kept low in the mix anyway.


The sonogram: artificial file on top, original file below.
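A comparison like this is easy to reproduce. The sketch below is illustrative only; it assumes matplotlib and soundfile, and the file names are placeholders for the two mixes. It stacks the two sonograms in the same arrangement:

    import matplotlib.pyplot as plt
    import soundfile as sf

    synth, sr_s = sf.read('type3_synthesis.wav')    # placeholder file names
    real, sr_r = sf.read('type3_recording.wav')

    # Fold any stereo material down to mono so specgram gets a 1-D signal.
    if synth.ndim > 1:
        synth = synth.mean(axis=1)
    if real.ndim > 1:
        real = real.mean(axis=1)

    fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
    top.specgram(synth, Fs=sr_s, NFFT=4096)
    top.set_title('Artificial')
    bottom.specgram(real, Fs=sr_r, NFFT=4096)
    bottom.set_title('Original')
    bottom.set_xlabel('Time (s)')
    plt.tight_layout()
    plt.show()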

For the more incidental noises in the car I chose to use straight recording and audio manipulation techniques. The emulated sounds came from limited sources such as the zippers on a backpack, paper being scraped and rubbed together, and simple percussive noises made by hitting various ‘in-house’ objects in Studio 2’s dead room. The files were named according to the real-world sound I would most likely associate them with, for ease of selection during sequencing. I deliberately recorded a limited set of sounds and vowed to manipulate them as necessary to emulate the various sonic events inside the car. Tried and tested techniques were used, such as splicing, stretching, pitch shifting and delay.
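The stretching and shifting treatments can be sketched in Python as well. This is not how I processed the files at the time, just an equivalent under assumed tools (the librosa and soundfile libraries); the source file name and the amounts are illustrative:

    import librosa
    import soundfile as sf

    y, sr = librosa.load('zipper.wav', sr=None)    # a placeholder source recording

    stretched = librosa.effects.time_stretch(y, rate=0.5)        # twice as long
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-7)  # down a fifth

    sf.write('zipper_slow.wav', stretched, sr)
    sf.write('zipper_low.wav', shifted, sr)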

The surround sound mixdown in Studio 1 proved a little tricky at first. The presence of audible clicks was the first concern. A quick check of the EMU FAQ revealed that this issue can arise from having ‘multiprocessing’ ticked as an option in the Devices / Expert menu. Unchecking the relevant box seemed to alleviate the problem. All of the control parameters relating to 5.1 mixing worked pretty much to the letter, as my lecture notes stated. Cubase initially lost touch with the HD device, but a quick switch and switch back of ASIO functions sorted this out.

The sound event map.

Sonically, I’m happy overall with the final product. I have a small concern with the placement of certain sounds in the surround sound context. In the current setup of Studio 1, the front left and right speakers are situated more for the purpose of stereo monitoring from the Control 24 surface. It makes mixing in Cubase a little disorienting when you need to repeatedly make a change at the computer keyboard, and then adjust your position in the room to effectively judge the results. I could have moved the speakers to a more suitable location, but shifting expensive and unstable hardware is something I try to avoid unless necessary. As a result, I have gone with the on-screen representation of sound locations in some cases, relative to how they appear on my sound event map. I felt this would be a better option than overcompensating for a problem that is only relevant to the setup of a particular studio.


Type 3 Synthesis (4.2 MB Mp3)

Week 13 Forum 02-11-06:



Johannes Sistermanns:


I really tried to ‘resonate’ with Johannes Sistermanns, but after the promising audio examples played early in his presentation, I felt he degenerated into what seemed a never-ending defensive intellectual blurb, more concerned with silencing naysayers toward his creative freedom than with producing worthwhile art. I use the term ‘worthwhile’ in a vain attempt to remain neutral in matters of a subjective nature.

Now I’m all for creative freedom, and if someone wants to construct a bizarre street scene from cling wrap, old car engines and other items, throw a few piezo devices around and turn peculiar resonances into audio, I think they should. My real issue with Johannes is that he appears to believe he is above constructive criticism.
Like = resonate? Dislike = doesn’t resonate? Pleeaassee!

To make matters worse, he expressed strong unsubstantiated opinions about other practitioners in his self-indulgent niche. According to Johannes, anyone who creates a sound installation as a work of art is not creating a pure sound installation if they choose to use loudspeakers to project unrelated sound into the space. I’m no expert on the high art of sound installations myself, but I don’t believe for a second that there is some rule book out there stating what qualifies and what doesn’t. Besides, even if there were, all rules would be periodically broken in the name of creative freedom anyway. Coincidentally, we had a sound installation artist by the name of Robin Minard give a talk at EMU earlier this year. I found Robin’s approach, and his work on display during the Adelaide Festival 2006, inspiring to say the least. If Johannes is to be taken seriously, then Robin is apparently working under the wrong title, just because he subjects his spaces to third party audio. Whatever.

What gets to me about this attitude is that the people who perpetuate it are using a security blanket of projected arts-community politeness, so they don’t have to put themselves completely out there for public scrutiny. Come on Johannes, if something is really worth doing, it’s worth copping the initial backlash the artist may receive. I’ve spent many a day at band recording sessions being told that my last take sucked and I could do better. I didn’t always agree at first, but when I asked why, I usually got feedback that helped me improve in the long run. If artists stop accepting constructive criticism altogether, how is art going to evolve?

To summarise, I can’t say whether I LIKED or DISLIKED Johannes’ work with his cling wrap installations, as there were no audio examples to be heard. I guess I’ll just have to take his word that it was a worthwhile pursuit.


Opinions are everywhere, mine is just one (and it’s really not that scary).


Reference:

Sistermanns, Johannes. ‘Sound Plastic’. Lecture presented at EMU Space, Level 5, Schulz Building, University of Adelaide, Thursday 2 November 2006.