Friday, March 30, 2007


AArts – Wk 5 – “Wind Instruments” 30/03/07:


If I ever develop strong aspirations for a career in audio engineering, recording classical musicians seems like a rewarding area of speciality.

Fellow engineers for this week: Luke, Jake and William.

Instruments recorded: Flute (Lydia Sharrad) and Saxophone (Martin Cheney).

Maybe we should try the old '57 near the bell?

We started off with what seemed to work for the group session on Tuesday, with a twist: two dynamic mics (a Shure SM-57 and a Beta 58) facing toward the bell of the sax. My logic behind this idea came from a section in this week’s reading which referred to the SM-57 as potentially suitable for rock saxophone recording. Martin is a classically trained saxophonist, but it worked nevertheless. During the recording and post-production I found that muting all mics except the AKG C 414 (which for the sax I believe produced a fantastically warm “chocolate velvet” quality) allowed one to dial in organic reverb from the room mic (U87) and overhead (KM-84), plus a desirable amount of ‘edgy snarl’ from the dynamics. The group consensus at the time was averse to including the Beta 58, as it added nothing desirable to the studio mix. After playing around with levels in post-production, however, I think it complemented the 57 nicely.

We tried a few alternate placements, such as sticking the room mic almost inside the bass trap (success!), inverting the positions of the AKG and the SM-57 (I liked the first way better), and using another AKG as an alternative room mic for a duller response (I preferred this to the U87, even though the audio didn’t come in quite as cleanly).

This would sound much more authentic with a 16th Century microphone..

We kept the same configuration when recording Lydia, but also tried using two room mics for alternate ambience, switching their positions for the second take. The dynamic mics weren’t really up to the task of flute recording, but David G. did mention that on Tuesday, I believe (me, such a rebel)…


Click Here for online link to audio examples folder.


Reference:

David Grice. “Recording Wind Instruments”. Tutorial presented at EMU space and Studio 1, 5th floor, Schulz building, University of Adelaide. 30th March 2007.

Keith Gemmell. “Recording Brass and Woodwind”. Music Tech Magazine, July 2003.

Thursday, March 29, 2007


Forum – Wk 5 – “Collaborations Pt 2” 29/03/07:


Luke Digance - Merce Cunningham, Sigur Rós and Radiohead:


Stay on your toes people..

I’m not sure how I feel about the idea of composing music for dance. Maybe it’s because I’ve never truly understood dance, either as an art form or as a method of releasing the tension incurred through day-to-day life. Surely I’m missing out (possibly a consequence of my own inhibitions), but I’d rather ride my bike or something…

The collaboration between Cunningham’s ‘Split Sides’ dance production and two sophisticated but quite different musical entities, Radiohead and Sigur Rós, seems very much to reflect the implied meaning of the title. By this I am of course referring to the obvious contrast between the two musical compositions provided for the work. Splitting composition duties between two separate artists (and keeping their works separate throughout a performance) is perhaps a directorial approach that I have witnessed from others in the past, but never really questioned or thought about. I found this the most interesting aspect of the work presented.

I thought it strange at first that Luke would present on a work he had not actually seen performed, but then the question arose: what if the presentation were about something from the distant past, before one’s time for instance? Therefore it stands up as a valid insight, in my opinion at least. After all, our esteemed history lecturers refer to performances from hundreds of years ago on a regular basis, with only dots on paper, post-recording-era audio, and modern-day symphonic realisation to gain a sense of the original sound, don’t they?


Note to all future presenters – if you’re going to have music playing in the background, turn the volume WAY down. It’s quite disorienting trying to follow the point someone is making while music plays at conversation level in the same room (unless you’re at the pub, of course).


Daniel Murtagh - Mike Patton:


Mr Bungle, rapid time changes, style-crossing insanity – hold on while I put up my savant guard…

These are the things that spring to mind when I hear Patton’s name, and there was plenty of music on display today to substantiate this preconception – except for that pop track from the California album; what was that all about? Mike showing off his versatility, no doubt, and it was this infamous versatility that Daniel pointed out when asked the question: “What do people hope to experience from a collaboration with Patton?”

I did enjoy what I suspect was a vocal imitation of a guitarist torturing a Floyd Rose tremolo system during the collaboration with John Zorn. The list of Patton’s musical exploits is enormous, so I’ll end by saying I’m looking forward to a collaboration between Patton and David Lynch, should it eventuate.


Darren Slynn - Steely Dan, Zappa and Weather Report:

This next sequence of notes will be analogous to tomorrow's temperature in Auckland..

Judging by his visible confidence, I suspect Darren is an accomplished public speaker. His presentation, while showcasing a lot of interesting information on the above-mentioned musicians, seemed to fall short of identifying what I would consider a noteworthy or ‘abstract’ collaboration. After all, isn’t rock and jazz combining with fusion really just rock and jazz combining with rock and jazz? I always thought of fusion as a combination of the first two by definition – correct me if I’m wrong; I’ve probably heard too much Frank Gambale.

Word count prevents further expressivity. Stop.


Alfred Essemyr – DJs and the artists they borrow from?


I must say, I’m not sure this one qualifies as a collaboration in the truest sense of the word. However, there were some interesting insights into the problems faced by DJs and other collage or scratch-type artists, especially those who specifically want to work with vinyl. Australia is always last on the list for all things cool in the western world – isn’t it sad?


Reference:

Stephen Whittington. “Collaborations Pt 2”. Forum workshop presented at EMU space, 5th floor, Schulz building, University of Adelaide. 29th March 2007.

Alfred Essemyr. “Collaborations Pt 2”. Student project presented at EMU space, 5th floor, Schulz building, University of Adelaide. 29th March 2007.

Darren Slynn. “Collaborations Pt 2”. Student project presented at EMU space, 5th floor, Schulz building, University of Adelaide. 29th March 2007.

Daniel Murtagh. “Collaborations Pt 2”. Student project presented at EMU space, 5th floor, Schulz building, University of Adelaide. 29th March 2007.

Luke Digance. “Collaborations Pt 2”. Student project presented at EMU space, 5th floor, Schulz building, University of Adelaide. 29th March 2007.




Wednesday, March 28, 2007

CComputing – Wk 4 – MIDI Information and Control:




After some twenty-plus hours of mind-numbing programming, I stop to ask myself: “Is this my future?” If so, God help me…

The updated control surface..

The basic rundown of the tools added to ‘Baby Vomit Keys’ this week is as follows (a rough Python sketch of the core logic follows the list):

1) MIDI input selection menu: a midiinfo object connected to a ubumenu. On starting the program, a loadbang hits the midiinfo inlet, which forces an update of the connected ubumenu with the MIDI devices present on the computer in use. The ubumenu’s middle outlet is connected to the left inlets of notein, ctrlin, pgmin and bendin, via send and receive objects, to automatically set the correct port for these objects.

2) MIDI output selection: a second ubumenu is connected to the same midiinfo object, and its device list is likewise updated on initialisation. Its middle outlet is connected to the left inlets of noteout, ctrlout, pgmout and bendout, via send and receive objects, to set the correct output port.

The edit window..

3) MIDI panic button: a midiflush object is inserted as the gateway to noteout, so all pitch and velocity information must pass through it before being sent to the synth. Pressing the ‘panic’ message button bangs the midiflush object, which sends note-offs for any notes still sounding.

4) Pitchbend, modulation and program change: these are all driven by dial objects connected to their relevant output objects (mentioned at point 2), via send and receive objects in most cases. Input from the controller keyboard is handled by the relevant input objects (also mentioned at point 2), likewise connected through to the output objects.

5) Channel selector: a umenu containing the numbers 1–16, with its middle outlet connected to the channel inlets of all the input/output objects.
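Since Max patches don’t paste well into prose, here’s the promised sketch of the same routing logic in plain Python – purely illustrative, no real MIDI I/O (the device names are hypothetical stand-ins for what midiinfo would report, and print calls stand in for MIDI messages):

# Illustrative Python only (not Max): models the device menu driving the
# port setting, plus a midiflush-style panic that releases hanging notes.

devices = ["Device A", "Device B"]   # stand-ins for what midiinfo would report
selected_port = devices[0]           # set by the ubumenu's middle outlet
held_notes = set()                   # what midiflush remembers

def note_out(pitch, velocity):
    # track note-ons so they can be flushed later
    if velocity > 0:
        held_notes.add(pitch)
    else:
        held_notes.discard(pitch)
    print(f"{selected_port}: note {pitch} vel {velocity}")

def panic():
    # the 'panic' bang: send a note-off for every note still sounding
    for pitch in sorted(held_notes):
        print(f"{selected_port}: note {pitch} vel 0")
    held_notes.clear()

note_out(60, 100)
note_out(64, 100)
panic()   # flushes both hanging notes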



Inside the sub-patches..

Enjoy!


Click to access online folder containing text file of patch


Reference:

Christian Haines. “Creative Computing Wk 4 – MIDI Information and Control”. Lecture presented at the Audio Lab, level 4, Schulz building, University of Adelaide. 22nd March 2007.

Friday, March 23, 2007

AArts – Wk 4 – ‘Drumkit Recording’ – 20/03/07:

"Okay Jake it looks cool, but do you have anything bigger?"

Luke and the multitalented Jake were once again my partners in the weekly recording project. The drum kit sure is a lot of work. Once again I can’t help but feel a little averse to the idea of commercial sound engineering when faced with laborious tasks such as this.

After wandering around with the floor tom to find some kind of sweet spot, we settled on assembling the kit near and facing the control room window, and planned to baffle off the area to reduce the liveliness of EMU space. Then the joy of setting up a squillion mics, stands and cables ensued. With the hard work dispensed with, we moved on to setting mic positions. There was a handy tip in this week’s reading which instructed the would-be engineer to place a hand, palm up, near the spot most likely to be hit on a given drum. It stated that the hairs on the back of your hand should feel a rush of air emanating from the strike zone, and that the mic should be placed as close as possible to this sensation (within reason – you don’t want the drummer whacking the mics). It gave us a good starting point, but I doubt it would achieve any more sonic mastery than the two-finger rule explained by David in this week’s lecture.

"Hmm, getting there..."

After a long sound check (still trying to polish that turd of a bass drum, to no avail) we settled on a close-miking setup reminiscent of that demonstrated on Tuesday. SM-57s were placed over and under the snare, and after repeated testing of the combined signals there was no need for phase inversion. The stereo condenser overheads were placed about 1.5 m apart and angled straight down at their respective cymbals for the first recording. This was the optimum position for me, as it produced a fuller, brighter sound. The room mic was placed nice and high, so it could receive sound from directly inside the baffled area as well as picking up reverberation from the space.

After this moderately successful attempt, we tried an X-Y configuration with the overheads for some educational comparison. To my ears this produced a substandard result compared to the previous setup, as there seemed to be a significant drop in the overall substance of the captured waveform.

Some before and after audio snippets are below for your perusal, so as usual make up your own mind.

"now this is just silly"

MP3s in online folder linked below:

Audio Example Folder (check the two X-Y and non X-Y examples for an interesting comparison)

Reference:

David Grice. “Recording the Drumkit”. Lecture presented at EMU space and Studio 1, 5th floor, Schulz building, University of Adelaide. 20th March 2007.


Forum - Wk 4 – 'Collaborations' - 22/03/07 (happy birthday to me!):


Well, the presentation side of the subject is over for the time being, so I guess I can just sit back and enjoy the show for a while. I think it went pretty well, despite the odd dissenter from the truth casting a counter-argumentative rock into my pond.

There was a lot of material presented – especially from William (easy on the fine detail there, son) – and it is always interesting to see how others approach this kind of thing. I agree with some of Stephen’s feedback regarding the vague context in which certain material was presented, so hopefully the next few weeks of presentations will be concise and to the point. I did try my best to present my ideas on my topic with a minimum of indecisive time-wasting.

Everyone knows what I presented, so check the student blog links on the left for information regarding its substance, or lack thereof…

Despite his deviating somewhat from the subject of the day, I found Vinny’s presentation inspirational. I had never heard of Trilok Gurtu, but the examples Vinny played sounded incredibly rich in harmonic and especially rhythmic structure. The sheer volume of respected artists Trilok has worked with should be an indication of his considerable diversity and talent. I am always looking for new material that suits improvisational accompaniment, so I plan to explore the sonic world of Trilok Gurtu further.


William’s look at the game audio industry reminded me of my mortifying call-centre experience at my last place of employment – though only due to the mention of cubicles, and how people in creative positions respond to their presence or absence. I’m sympathetic to the idea of working on a creative project in isolation. At least I have found it’s the best scenario for me, as I’m not really a people person when I can avoid it. Besides that, the ridiculous amount of time that needs to be poured into any substantial audio project renders distractions extremely counterproductive, and people can be very distracting.

Apparently this has something to do with game audio...
Everyone loves a picture.

On to Sanad, and the raging debate that ensued over the proposed question: “What constitutes world music?” This inspired some hot debating of negligible substance. Not to say that anyone was right or wrong; it’s just that throwing a question at a group of people and asking them what they think on the spot will never draw out the most coherent answers they are capable of producing. For a real debate to ensue, people need time in advance to really think about a question, gain some perspective, do a little (or a lot of) research, and come to the table with a substantiated opinion. The format sparked by Sanad’s approach to the forum is good for getting some initial ideas out there, but cannot really be used as a serious debating platform – at best it could be the first step towards such a debate taking place at the group’s next meeting.

Nothin' drives my point home like a good stabbin'


Reference:

Stephen Whittington. “Collaborations”. Workshop presented at EMU space, level 5, Schulz building, University of Adelaide. 22nd March 2007.

Monday, March 19, 2007

CComputing - Week 3 - Program Structuring:



Here it is, my (my?) first piece of ‘real’ software. Well, you know what I mean. This patch became a little more intense than the exercise required, but I thought I was on to a good thing, so hopefully the EMU overlords won’t mind…

The patch is titled “Baby Vomit Keys” in response to the less than attractive background colour scheme offered by Max, and contains these basic functions:

- Slider and numerical display for note duration and velocity (values are set to defaults on startup with a ‘loadbang’ object, so the operator doesn’t develop a brain tumour trying to work out why it makes no sound)

- Dropdown menus for selecting MIDI channel, keyboard range, and MIDI input and output

- A four octave keyboard emulation tool with a MIDI note name display showing the most recently pressed key

- An optional auto-play chromatic scale function providing continuous output when the toggle is switched on, for testing MIDI output without having to keep clicking the virtual keyboard with the mouse

- An optional random interruption function to add some avant-garde spice to the endlessly entertaining chromatic scale.

The only real brain-teaser was working out how to select the octave range. I ended up using a select object, which sends a bang from a particular outlet depending on the message or number received (the number or message must match the outlet’s argument). The outlets were routed to three number messages which alter the changeable argument of an ‘offset’ message (a command the kslider recognises).

In short, if the operator picks option two from the drop-down menu, it sends a 1 out of its left outlet (it counts from zero, so option 2 = 1, option 3 = 2, etc.) to the select object. Select then sends a bang out of the outlet matching 1, triggers the argument change for offset via the connected message box, THEN sends a bang to a bang object (located to the left of the offset argument-changing message boxes so as not to be triggered first) which triggers the new offset message to be sent to the kslider.
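For what it’s worth, here is the same menu → select → offset chain as a tiny Python sketch. This is my own illustration, not Max code, and the offset values per menu option are hypothetical rather than taken from the patch:

OFFSETS = {0: 24, 1: 36, 2: 48}   # hypothetical offsets, one per menu option

def menu_changed(index):
    # the menu counts from zero, so picking option two sends a 1;
    # 'select' routes it to the matching offset message
    offset = OFFSETS[index]
    print(f"offset {offset}")     # stands in for the message sent to the kslider

menu_changed(1)   # operator picks option two -> offset 36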

Now, those of you who have bothered to read this far may hasten to point out the redundancy of the above mentioned ‘bang’ object, and you would be correct in doing so. However, I have found that the odd flashing object here and there can serve to highlight certain features of the Max language that we need to become familiar with (right to left output order for instance), so it serves a valid purpose in this regard.

Did I say ‘in short’? Geeezzz!

The random interruption function simply works by sending the metro’s output to a ‘gate’ object, which opens when it receives a one and shuts when it receives a zero. The metro’s output is also sent to a ‘random’ object, set to spit out a zero or a one when it receives a bang. Note once again that the issue of right-to-left order has been taken into account: the random object is located to the right, as it must set the state of the gate (not to be confused with the ‘switch’ object beneath it, which simply toggles random mode on and off), located to the left, before the metro’s bang reaches the gate’s input.
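And the random gate again as an illustrative Python sketch (prints instead of notes; this is my own sequential model of the patch’s right-to-left ordering, not Max code):

import random

def metro_tick():
    # right-to-left order: the random object sets the gate's state first...
    gate_open = random.randint(0, 1) == 1
    # ...then the same metro bang arrives at the gate's input
    if gate_open:
        print("bang passes through the gate")
    else:
        print("gate shut - silence")

for _ in range(8):   # eight metro bangs
    metro_tick()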

My apologies for going on, but I have found that explaining the functionality of Max objects and programming in writing is helping to set the language straight in my mind. The text for recreating the patch is below, so have fun…


Text file of patch:

MIDI keys ripoff textfile


Reference:

Christian Haines. “Program Structuring”. Lecture presented at Room 408, Level 4, Schulz building, University of Adelaide. 15th March 2007.

Friday, March 16, 2007

AArts – Week 3 - Electric Strings 16/03/07:



Now Luke, what did we say about bringing your double-headed violin of doom?


Now we’re speaking my language. I was especially looking forward to experimenting with the Beta 52 for this exercise, as I have always found that SM-57s are adequate but tend to leave me wanting. The B-52 didn’t disappoint on a fundamental level, but I found the overall sound of the space a little too ‘live’ for my liking. This was especially evident with the placement of the room ambience mic (U87), which ended up half buried behind the drum kit in the carpeted corner of EMU space. This was the only position where I felt (and I think my partner in crime Luke would agree) that the signal achieved anything close to low-end depth.

I think this session gave some early exposure to the extra pressure being placed on EMU’s facilities this year, as other students were occupying the dead room and would have liked to use some of the gear we had already set up. It’s a shame we didn’t get to try a ‘dead’ recording of the Laney for comparison. I used the dead room for a similar exercise last year though, and achieved what I then believed to be a satisfactory result.

Trying out the procedures suggested by D. Grice helped us improve on the sound from our initial placements, but there is plenty of room for improvement. It’s not all down to mic placement though. I had a play on my guitar for Jake while he tweaked his setup after Luke and I had finished, and he suggested I play his guitar, which he had brought for the occasion. To my surprise, the sound he achieved with his placement (once again most notably the B-52) was preferable to my own effort, to my ears anyway. It could be that Jake’s combination of heavier strings, different action, timber and pickups was more appropriate for the Laney amp. Whatever the case, I often seem to think other people’s gear sounds better than mine – the grass is greener, perhaps?

Here is a rundown of basic mic placement:

Shure Beta 52: 30 degrees off axis, very close to the floor, right of the speaker cone.

Shure SM 57: 30 degrees off axis, very close to the left of the speaker cone, about a third of the way up from the bottom of the amp.

Neumann U87: far away in the carpeted corner, diaphragm facing the southern window and at 45 degrees to the floor, omni pattern on.

Amp used: Laney Combo

Guitar used: Ibanez RG-2120

Audio files are linked below (Luke Digance and I worked as a team on this project, so the single-mic files at least will be the same on his blog):


Individual mic recording files:


Shure Beta 52: Dive bomb x 3 (0.100Mb Mp3)

Shure SM 57: Dive bomb x 3 (0.100Mb Mp3)

Neumann U87: Dive bomb x 3 (0.100Mb Mp3)

Shure Beta 52: Hendrix Feedback (0.100Mb Mp3)

Shure SM 57: Hendrix Feedback (0.100Mb Mp3)

Neumann U87: Hendrix Feedback (0.100Mb Mp3)

Shure Beta 52: M of pups intro (0.100Mb Mp3)

Shure SM 57: M of pups intro (0.100Mb Mp3)

Neumann U87: M of pups intro (0.100Mb Mp3)


Mix down of total mic combination recording files:


Dive bomb x 3 (0.100Mb Mp3)

Hendrix Feedback (0.100Mb Mp3)

M of pups intro (0.100Mb Mp3)


Reference:

David Grice. “Recording Electric Strings”. Lecture presented at Studio 1 and EMU space, Level 5, Schulz building, University of Adelaide. 13th March 2007.

Thursday, March 15, 2007

Forum - Wk 3 – Timeline group performance 15/03/07:


Cage again. Why do I feel the evil bars of restriction close around me at the very mention of his name? Could I refer to his methodology as some sort of ‘Cage cage’? Whatever the case, despite my considerable appreciation of David Harris as a person and musical expert, I can’t help but find this kind of workshop tedious.


Is there a problem? Oh no, don't tell me - is it...the Cage?

Even when I have heard experiments of this kind in the past, with expert musicians in control of the performance, the element of musical randomness (which is often part of the goal) seems overly restricted by the rules or pre-written music each performer is faced with. My main concern with our combined effort on Thursday was that I felt I was achieving nothing interesting with what I was given to work with. The restrictions on how many notes one could play, the dynamics used, and the timeline of windows given for expression made it very difficult to conjure musical answers and accompaniment to the various elements of sound (some of which were very musical) in the room.

Now don’t anyone start with the whole ‘what is music?’, ‘what is musical?’, ‘who am I to decide either?’ etc. – I’ve heard it all before and have little interest in pursuing what would be an endless academic debate on the subject. I never asked John Cage to free my mind, and the irony of the situation is that the only time I feel musically imprisoned is when someone tries to impose Cage-like restrictions on my expression. I know I use the same blues scale that Robert Johnson exhausted some ninety years ago, probably with none of the originality or innovation exhibited by the man, but I still love it. I love the way I can play it over and over, upside down, back to front, fast or slow, and never get bored with the sound. I don’t care that it’s clichéd and overused; it’s what I like to hear more than anything else in the world, and certainly more than listening to someone like Cage point out how narrow-minded I am because I don’t appreciate the rare sound of a cloud evaporating or whatever.

I guess at the end of the day I’m just a person who knows what they want when it comes to music. I fail to identify with the ideals of people who feel they can’t waste time (or that people in our society waste too much time) on music for pure, simplistic sensual indulgence. That being said, the very act of establishing large group performances of this type is technically an intellectual indulgence on the part of the organiser, yes?

Maybe it’s the fact that I couldn’t contribute anything I felt was substantial to the session that is feeding my angst. Despite the obvious humour in some of the pre-written spoken-word material, I believe the overall sound and structure was relatively uninteresting. It could have been the lack of serious input from some members of the group, but I think a bit more freedom allocated to the tonal instruments and vocalists would have really given some life to the party.

Anyway, next week I get to bore you all with my presentation on the greatest collaboration of all time – Metallica and the San Francisco Symphony Orchestra, so get those lighters gassed up...

Please feel free to download the crappy voice-recorder audio file of the session below, if you want to experience the full 45 minutes all over again in glorious 2-bit lo-fi sound.

Timeline Group Performance (7.8Mb Mp3)


Reference:

David Harris. “Timeline Group Performance”. Workshop presented at EMU space, level 5, Schulz building, University of Adelaide. 15th March 2007.

Wednesday, March 14, 2007

CC2 – Week 2 – Max Quickstart:


Below are the picture example and text for recreating my first (graded) Max patch. I was already familiar with the objects and processes covered in Thursday’s lesson, so this was not a big challenge. However, I suspect experienced users may know more efficient ways to build this type of program.



I think the image above makes the process pretty self-explanatory, but here is the basic rundown (a rough Python sketch of the same logic follows the list):


1) Toggle for start and stop of Metro

2) Number boxes (int and float) for tempo selection of the metro

3) Counter to provide the count range

4) + operator for mapping the count onto the relevant MIDI note range (changeable with the number box on the far right)

5) Makenote object providing velocity and duration values, and receiving the pitch value from the + operator

6) Noteout object for sending MIDI data to the software synth.
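Here is that signal flow as the promised Python sketch – illustrative only, with print standing in for the MIDI output (the base-note value of 64 and the 0–12 counter range are read off the patch text below):

import time

TEMPO_MS = 500    # metro interval
BASE_NOTE = 64    # the '+' offset, changeable via the number box on the far right

count = 0
for _ in range(26):                # two passes through the counter's range
    pitch = BASE_NOTE + count      # '+' maps the count onto a MIDI note range
    print(f"note {pitch} on")      # makenote/noteout stand-in
    count = (count + 1) % 13       # 'counter 0 0 12' wraps after 12
    time.sleep(TEMPO_MS / 1000)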


Load it up and have a go. It kind of reminds me of that giant pinball machine in Sesame Street – 1 2 3 4, 5, 6 7 8 9, 10, 11 12…



Start of txt:

max v2;
#N vpatcher 18 44 804 670;
#P window setfont "Sans Serif" 9.;
#P flonum 599 107 73 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P number 655 297 35 9 0 115 259 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P window linecount 1;
#P newex 227 343 31 196617 + 64;
#P number 227 304 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#N counter 0 0 12;
#X flags 0 0;
#P newobj 392 265 77 196617 counter 0 0 12;
#P number 454 106 67 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P toggle 121 78 49 0;
#P newex 394 543 111 196617 noteout a 1;
#P newex 394 459 109 196617 makenote 70 127 500;
#P window setfont "Sans Serif" 20.;
#P newex 392 192 122 196628 metro 500;
#P window setfont "Sans Serif" 12.;
#P window linecount 3;
#P comment 625 239 100 196620 Choose starting note here:;
#P comment 388 51 100 196620 Choose tempo in integer values here:;
#P window linecount 4;
#P comment 569 36 100 196620 Choose tempo in Decimal number values here:;
#P window linecount 2;
#P comment 121 36 100 196620 Start and Stop here:;
#P fasten 9 0 10 0 397 297 232 297;
#P connect 10 0 11 0;
#P fasten 12 0 11 1 660 328 253 328;
#P fasten 7 0 4 0 126 138 397 138;
#P connect 4 0 9 0;
#P fasten 11 0 5 0 232 415 344 415 344 415 399 415;
#P connect 5 0 6 0;
#P fasten 5 1 6 1 498 503 449 503;
#P fasten 13 0 4 1 604 140 509 140;
#P fasten 8 0 4 1 459 140 509 140;
#P pop;

End of txt.

Reference:

Christian Haines. “Max Quickstart.” Lecture presented in tutorial room 407, 4th Floor Schulz building, University of Adelaide. 8th March 2007.

Thursday, March 08, 2007

Forum Wk 2 – ‘Originality in Music’



Nice to see a semi-return to the workshop format from last year. This would have been a preferable environment on occasion in ‘06, as some interesting insight was offered in between musical examples of derivation and quasi-plagiarism, rather than the sit-and-absorb bombardment of before.

Some questions were bandied about, such as “Is all art derivative?” and “What is the value of originality?”, to mention a couple. I think it may be historically easy to prove the derivative roots of most forms of art. Even one of those modernist avant-garde freaks like John Cage couldn’t say he was just noodling away as a conventional composer and then plucked something like ‘Williams Mix’ out of thin air. I believe it is the small steps toward original thinking that brave (perhaps ‘enlightened’) people take that eventually create something truly different. But a challenging piece of art is usually not that ‘out there’ at the time it becomes noticed (I’m sure there are exceptions, and this also depends on who is receiving it, so I’m being careful not to deal in absolutes). If we could take a piece like Williams Mix back to 1800 or earlier in a time machine, however, we could rest assured that the Ye-oldies whom we selected to hear it would regard it as completely original, given that nothing even remotely of that nature would have been heard before. Then we would be rightly burnt at the stake for being the witches that we are…


I side with Walter Benjamin to some extent in his view that all music has a certain aura that is lost in reproduction. Even with the fantastic recording and reproduction technology we enjoy today, I still get much more excited to see someone I appreciate play live than I do listening to a new CD.

Getting back to the subject of derivation, surely it would be difficult to construct a more derivative approach than the styles of John Zorn and Christian Marclay? The very nature of the compositional process for these two is unavoidably derivative, yet they know this and persevere anyway – so is ‘derivative’ really a dirty word in composing circles?


The discussion eventually moving on to the subject of computer-generated music raised some eyebrows, I’m sure. Who would have thought that a well-schooled musician like Stephen would let the ‘human composers may soon be irrelevant’ cat out of the bag? I guess he won’t be cheating himself out of a job, but the very idea is formidable enough to conjure grateful feelings toward my flexible choice of study path. However, I’ve heard a couple of these computer-composed pieces this year, and I’m really not convinced they have all of what it takes to reach out to people. They may be able to instantly create harmonic and contrapuntal perfection, but so often it seems to be a certain degree of human ‘imperfection’ that people respond to in music – I mean, try listening to the White Stripes (shudder). I find it hard to imagine a PC coming up with a concept so commercially successful, yet so removed from the evolution of music…


Reference:
Stephen Whittington. “Originality in Music”. Forum lecture presented at EMU space, 5th floor, Schulz building, University of Adelaide. 8th March 2007.


Audio Arts Wk 2 - Voice Recording 06/03/2007:





Okay, this is not my first experience of voice recording for radio purposes. I recently held a recording session of this nature in my home studio, for a friend who was auditioning for an announcer’s position on ABC Classic FM. I certainly don’t think this makes me an adept engineer in the field, but I hope I don’t make the same mistakes that occurred on said occasion...



I started out squeezing the input compression (which I have taken to inserting at the input stage via an auxiliary track) at a monstrous ratio of 10:1, as I have read in the past that this is the preferred setting among voice-over recording engineers. This quickly proved to be overkill, as I could virtually yell into the U87 and only register some 40% of level at the track stage of the signal chain. I then realised that, as I was recording in the control room, the heavy compression was bringing up far too much background noise, so I abandoned the idea of input compression altogether.
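To put some numbers on what a 10:1 ratio actually does, here’s a quick Python sketch of a static compressor curve. The -20 dB threshold is just an assumption for illustration, not the setting I used:

def compressed_level(input_db, threshold_db=-20.0, ratio=10.0):
    # above the threshold, output rises only 1 dB per `ratio` dB of input
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# 20 dB over a -20 dB threshold comes out just 2 dB over it:
print(compressed_level(0.0))   # -> -18.0

No wonder yelling barely moved the meters.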


I began with a flat read of my fictitious advertising blurb, performing two takes – one with the U87, and one with the intimidating NTV (two takes were performed in this way for all readings and singing (?) attempts). Backing off the gain was an early necessity, as the mic was distorting badly.


I have chosen to leave the compression off the examples I’ve posted, so that you may freely judge the quality of microphone usage on display. Even for the ‘expressive’ readings of the ad, it seems there was little danger of peaking unless I really started to yell.


For my pitiful attempt at singing, however, the real art of vocal recording began to rear its head. I deliberately left in the moment of distortion at the end of Unchained Melody, so that you might hear the result of bad (but loud) vocals being pumped at point-blank range into an unsuspecting U87. It’s so sensitive... The interesting thing about the distortion is that the gain on the desk was way down at 45% or so and the level meters weren’t peaking, so I can only assume it’s a result of harmonic distortion occurring in the mic’s diaphragm itself.

This was less of an issue with the NTV, but I still have a lot to learn about tackling such problems. I suspect it’s partly down to a singer’s microphone technique (distance from the mic during loud passages, etc.), but we can’t rely on them for everything now, can we?



Check out these fine examples of my less than mediocre vocal skills:



U87 Flat vocal delivery (0.400Mb,Mp3) (sorry, this one was disallowed by box.net)


U87 Expressive vocal delivery (0.400Mb,Mp3)


NTV Flat vocal delivery (0.400Mb,Mp3)


NTV Expressive vocal delivery (0.400Mb,Mp3)


U87 – Dave fails at Unchained Melody (Righteous Brothers) (0.400Mb,Mp3)


NTV – Dave fails at Black Dog (Led Zeppelin) (0.200Mb,Mp3)


Satan is coming...(0.155Mb, Mp3)


Reference:

David Grice. “Voice Recording”. Lecture presented at EMU space and Studio 1, 5th floor, Schulz building, University of Adelaide. 6th March 2007.

Friday, March 02, 2007

CC2 - Week 1 – “Pseudocode Standard”:


These ingenious limited-edition spectacles actually enable one to 'see' the code – just like Neo.
Send your credit card details to notesdontmatter for a pair ($799.95 US – today only).


Here is my solution, on virtual paper, to this week’s million-dollar challenge:


IF operator hits start THEN
    REPEAT
        SEQUENCE note OVER C, D, E, F, G, A, B
            CASE note OF
                C : Play A
                D : Play B
                E : Play C
                F : Play D
                G : Play E
                A : Play F
                B : Play G
            ENDCASE
            INCREMENT notecount
        ENDSEQUENCE
    UNTIL notecount >= 20
ENDIF
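(Note the UNTIL test uses >= rather than =, since notecount climbs seven per sequence pass and would skip straight over 20.) And because pseudocode is more fun when you can actually run it, here is a direct Python translation – illustrative only, with print standing in for the synth:

# Each input note triggers the note two letter-names down,
# so the C major sequence comes out spelling A minor.
PLAY_MAP = {"C": "A", "D": "B", "E": "C", "F": "D",
            "G": "E", "A": "F", "B": "G"}

notecount = 0
while notecount < 20:                         # REPEAT ... UNTIL notecount >= 20
    for note in ["C", "D", "E", "F", "G", "A", "B"]:
        if notecount == 20:                   # land on exactly 20 notes
            break
        print("Play", PLAY_MAP[note])         # CASE note OF ... (synth stand-in)
        notecount += 1                        # INCREMENT notecount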


Thank you for listening to the virtual sound of ‘a minor’ – the Major of all minor keys. Tune in next week, when I shall attempt to reconstruct a Brahms symphony using nothing but my absent mind and an invisible grain of sand.

I think code is going to drive me mad…

Reference:

Christian Haines. “Pseudocode Standard”. Lecture presented at Room 408, Level 4, Schulz building, University of Adelaide. 1st March 2007.

Forum Week 1 – 01/03/07



Stephen Whittington's first techno 'axe'


The most notable difference I can determine regarding the direction some of the 1st years are planning to take (compared to my class at the same time last year) is the desire, voiced by a few, for a career in live/studio sound engineering. This is obviously due in part to the new course material on offer, but it wasn’t just the diploma students who seemed interested. My personal experience with sound engineering for the work of others has been minimal, but enough to give me an idea of the kind of workload and stress it can involve.


I remember the first time I was asked to try live mixing for a band. I showed up at the venue with an idealistic image of myself setting up a few microphones, then settling in behind the desk (which would be situated in the middle of the band room, central to the stage) for some level checking, followed by a professional display of EQ and signal-effect tweaking on the fly. How wrong could I have been?

Just a small glitch with the power, people – remain calm...


The reality on the night was a pathetic four-channel mixer (it used to be eight, but four faders were broken) situated at the back of the stage near the drum riser – so much for a fair degree of stereo separation. This shameful excuse for a live setup (people had actually paid money to see bands perform here) allowed just four microphones to be used simultaneously, so the options were limited pretty much to singers one and two, plus miking the kick and snare drums. Add to this the location of the desk and you’ll get the picture – I was given a set of conditions that could, at best, sound like an average rehearsal-room jam session. It was a pretty frustrating and off-putting first experience.

That being said, I have since had much more rewarding experiences, some of them related to coursework during this degree, so I certainly don’t intend to put anyone off working in the field. I guess I’m just trying to say that one should be prepared to take the good with the bad…

Anyone else got a worst and / or best experience with sound engineering to tell me about?


Reference:

Stephen Whittington. “Introduction and Overview”. Forum lecture presented at EMU space, 5th floor, Schulz building, University of Adelaide. 1st March 2007.