I’ve looked a bit into people trying to make the Octatrack do some granular processing, and there are a few ideas floating around that seem interesting.
There’s a post by user Nikofeyn on Elektronauts where he processes guitar and gets some nice atmospheric results.
I’ll try to go through what he did and see if it sparks any new ideas. He’s using 4 main tracks (audio tracks, not MIDI tracks), with T8 set up as a master running a darker reverb:
- T1 (flex) samples 1 bar of guitar, applies filter and dark reverb while modulating pitch and rate. He says there are various playback triggers, which reminds me I have to take a closer look at them.
- T2 is a thru with a filter and lo-fi effects (distortion, SRR and heavy amp modulation), the filter being modulated by an LFO. T3 is a neighbor that applies reverb at 70% wet so that the dry live guitar doesn’t dominate the sound. This is a pretty cool idea which I’ve played around with myself but never got much out of it; I was just fooling around with it, though, while he’s using it to pull more ambiance out of the same input.
- T4 (flex) plays back a single note sample of the guitar, with pitch shift and rate set negative, filtered and into heavy reverb at 70%. The audio seems to be previously sampled.
- T5 and T6 are thru and neighbor for an A4 arp with a high q filter, lots of delay, another filter and reverb.
- T7 has a flex machine that plays back the recording buffer R1 with rate reduction and delay. I’m not sure this is any different from copying the recording and playback trigs on T1 over to T7 and having it sample the guitar itself; I don’t think it is, but I’d need to check. It obviously saves memory, but that’s irrelevant here since the samples are only 1 bar long.
So, this was cool to me, although it doesn’t really have the textures I associate with granular synthesis, which might be my own shortcoming more than anything else. To be fair, he does reference MI Clouds as a motivation, and the Oliverb is probably as much a staple of the Clouds sound as the granular processing itself, so this does sound like something that could come out of Clouds.
Using short samples to form a cloud
On the subject of granular synthesis hardware, user teacherofstalker at Elektron-users comments:
So, Granular Synthesis, as Xenakis has described it, is all about composing a complex sound using smaller particles of pure, elementary waveforms, like very short sinewaves.
So you load your MM tracks with sinewaves that have a very short envelope, fire up the ARP at small speeds (i.e., 1x – 3x), and start randomizing (using the LFOs and, of course, manually):
Add some delay, effects etc.
Mass-control the cloud pitch using Pattern Transposition.
With sinewaves it can become a bit monotonous pretty quickly, so try blending different waveforms.
This seems like something that can be approximated in the OT, using the crossfader as a sort of ribbon controller. This technique reminds me of Sonic Encounters 11 – Swarm of the nanobots, where Mark Mosher tried to approximate a Swarmatron with the OT. That article was one of the things that showed me the complex setups that can be achieved with the OT and since I’m planning to explore the swarm idea down the line I’ll leave this approach until then. It’s worth mentioning that this is actually trying to replicate granular synthesis, rather than granular processing.
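The quoted recipe, many very short enveloped sinewave grains scattered and randomized over time, can be sketched in plain Python. Everything here (the triangular envelope, grain count, pitch spread and mix-down factor) is my own illustrative choice, not something from the post:

```python
import math
import random

SR = 44100  # sample rate (assumed)

def grain(freq, dur_s, sr=SR):
    """One short sine grain with a triangular envelope."""
    n = int(dur_s * sr)
    out = []
    for i in range(n):
        env = 1.0 - abs(2.0 * i / n - 1.0)  # ramps 0 -> 1 -> 0
        out.append(env * math.sin(2 * math.pi * freq * i / sr))
    return out

def cloud(n_grains=200, total_s=2.0, base_hz=220.0, spread_semi=12, sr=SR):
    """Scatter randomized grains over a buffer; the random pitch and
    position stand in for the MM's LFO and manual randomization."""
    buf = [0.0] * int(total_s * sr)
    for _ in range(n_grains):
        semis = random.uniform(-spread_semi, spread_semi)
        g = grain(base_hz * 2 ** (semis / 12), random.uniform(0.01, 0.05), sr)
        start = random.randrange(len(buf) - len(g))
        for i, s in enumerate(g):
            buf[start + i] += s / 8  # crude mix-down to avoid clipping
    return buf

audio = cloud()
```

Mass-controlling the cloud pitch via pattern transposition would correspond to shifting `base_hz`, and "blending different waveforms" to swapping the sine in `grain` for other shapes.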
Close to real granulation
This thread, also at Elektron-users, explores using the MachineDrum UW for granulation too. User stiiiiiiive writes:
I’m going to post some SYSEX and/or audio examples but for now, here are some settings – obviously tempo dependent. However, only the RAM PLAY track (and the optional low-attack-LFO track) need to be triggered.
Track 2: just for LFO purpose; LFO is modulating Tr10 START parameter.
Shapes: RAMP DOWN and RAMP UP.
Mix: 127 = normal play, 0 = reverse at normal speed (if speed = 1; otherwise it plays faster)
Track 6: optional, just for LFO purpose; used to get a softer attack, LFO modulates Tr10 volume.
Track 10: RAM PLAY machine.
PITCH: to taste!
HOLD: enough to allow re-triggering.
RETRIG TIME: to taste, determines the grain size*
LFO modulates START parameter with a random shape.
SPEED: 127 so that it’s faster than RETRIG TIME
MIX: full random
Track 14: CTRL-8P machine for control convenience
P1: Tr2 LFOM → timestretch
P2: Tr2 LFOS → speed factor for timestretch
P3: Tr10 LFOD → time jitter
P4: Tr10 RTRIG TIME → grain duration
* These parameters are referenced by the CTRL-8P machine.
** These values shall stay constant.
I’m not very familiar with the MDUW so let’s try to deconstruct what’s going on here.
- Load a sample and set retrig time to the grain size of your liking.
- Set a random LFO, faster than the retrig time, to modulate the Start time.
- You then set a mix of slow up-ramp and down-ramp LFOs modulating the start parameter. When the mix is at 127 the playback is normal; when it’s at 0 the playback direction is reversed. I’m not sure how this behavior could be replicated on the OT; maybe using a ramp to modulate rate is close enough. This LFO mixing business seems really interesting and I should probably look into it more.
Changing the mix then results in timestretch, and changing the LFOs’ speed varies the speed factor of that timestretch. Changing the depth of the random LFO modulating the start parameter modifies how much the grains jump around (time jitter), and changing the retrig time changes the grain size.
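As a sanity check on this deconstruction, here is a toy Python model of the patch: a grain fires every retrig period, a random start offset provides the time jitter, and a single mix value from 0 to 1 stands in for the up-ramp/down-ramp LFO blend steering the playhead. All names, the sine window and the parameter scaling are my own assumptions, not the MDUW’s actual behavior:

```python
import math
import random

def granulate(src, retrig, mix=1.0, jitter=0.0, seed=0, out_len=None):
    """Toy granulator in the spirit of the MDUW patch.
    retrig: grain size in samples (the RETRIG TIME knob)
    mix:    1.0 = playhead advances forward at normal speed,
            0.0 = backward at normal speed; values in between slow
            it down (the ramp-LFO mix -> timestretch)
    jitter: max random offset of each grain's start, in samples
            (the random LFO on START; its depth = time jitter)
    """
    rng = random.Random(seed)
    out_len = out_len or len(src)
    out = [0.0] * out_len
    pos = 0.0                      # nominal playhead into src
    step = 2.0 * mix - 1.0         # +1 forward, -1 reverse, 0 frozen
    for t in range(0, out_len - retrig, retrig):
        start = int(pos + rng.uniform(-jitter, jitter)) % len(src)
        for i in range(retrig):
            env = math.sin(math.pi * i / retrig)   # smooth grain window
            out[t + i] += env * src[(start + i) % len(src)]
        pos += step * retrig       # advance (or rewind) the playhead
    return out

# Demo: "freeze" a 440 Hz tone; mix=0.5 stalls the playhead while
# the jittered grains keep shimmering around one spot.
src = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4410)]
frozen = granulate(src, retrig=64, mix=0.5, jitter=200)
```

Sweeping `mix` from 1 toward 0 reproduces the timestretch-into-reverse behavior described above, and `jitter` and `retrig` map onto the CTRL-8P’s time jitter and grain duration knobs.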
The audio sample has some very interesting textures, some of which certainly sound like what I associate with granular processing, so if this can be replicated on the OT I’d be quite happy. There would be at most 8 grains possible, but I’m sure a lot can be done with even a couple of grains.
My original idea was to have two flex machines pointing to the same audio recorded by a pickup machine, sliced into very short slices, playing random slices through slightly different effects. This could be done by recording a very short loop, say between 1 and 4 steps, and cutting it into 32 slices, which would make them under 0.06 seconds long. This, I think, would approximate a granular delay with fixed grain size, but it lacks true superposition of the grains.
One way to avoid the fixed grain size and allow superposition might be to not slice the audio, and instead use p-locks to set an approximate starting position on the loop and a short length. Using LFOs to modulate these parameters would give each grain a different size, and grains on different tracks could then overlap. Good starting values would be those corresponding to the position and length we would get by slicing.
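The slice-length arithmetic is easy to check, assuming one Octatrack sequencer step is a 16th note (the default tempo scale):

```python
def slice_len_s(bpm, steps, n_slices=32):
    """Length of one slice when a loop of `steps` sequencer steps
    (16th notes) is cut into `n_slices` equal slices."""
    step_s = 60.0 / bpm / 4.0          # one 16th-note step in seconds
    return steps * step_s / n_slices

# At 120 BPM a 4-step loop sliced 32 ways gives 15.6 ms grains,
# and even 4 steps at 60 BPM stays at 31.25 ms, well under 0.06 s.
```

These values would also be the natural starting points for the p-locked position and length in the unsliced variant above.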
I might still try to make this setup work, but I realize it’s trying to do too much of the work by hand and will probably end in very rigid results. For example, grains would always fire at the same time, and while one could modulate the attack to simulate slight triggering differences, that’s again a lot of tricks to cover the shortcomings of the method, and I’m likely to run out of LFOs (start, end and attack already use all three).
Instead, I’ll try to make the MDUW method work which seems like it should be pretty straightforward.