Who's dawless? Share your setup?

hey, i’m super into dawless with the pyramid as my mother brain. Just wondering, anyone else here doing the dawless thing? What’s your setup like?

If i’m not the only dawless user on the forum, i’ll add my setup in the comments too :slight_smile:


Not pure dawless because I record and finish in Ableton. But I have a flight case rack, 3 keyboards, 2 groove boxes and Pyramid.

  • MOTU MIDI Express
  • Boss SE-50 multi fx module
  • Machinedrum
  • Behringer Xenyx 1002b (mixer)
  • Model Cycles
  • Digitone keys
  • Nord Lead 3
  • Korg X5
  • Koala Sampler on my phone (maybe an iPad mini eventually)

The rack, Pyramid, Nord Lead 3, and Korg X5 are permanently hooked up - I call it the fixed setup. Everything else is either independent or stuff that can optionally be layered on, either sequenced or performed. Finished ideas get recorded into Ableton. I can jam and generate ideas with different gear without constantly rearranging everything. Plus I can start a song without thinking about what plugins to use, setting up tracks, etc. The only thing the fixed setup lacks at the moment is proper sampling (the MD is limited). But baby steps.

I'm sort of on the fence with the Pyramid, but that may just be because I need to master it - I like it a lot. It's the core of the fixed setup. The Digitone Keys is another composition tool I'm on the fence with - kind of a hybrid workstation/performance multitool. Thing is, it only has 4 tracks and 8 voices, but it sounds so good I almost can't fathom returning it right now. Given its limitations, it's starting to look like I'm going to have at least one layer of audio clips from it, performing the rest live, which it seems ideal for and is something I've been dreaming of - too bad it's only 37 keys, but we adapt. The Model Cycles has some use for composition, but I prefer to use it for making beats and drum sounds for sampling. I might slave it to the Digitone and treat that as a scalable unit. Because the Digitone is an audio and midi interface, I plan to pair it with Ableton, where I'll combine it with audio clips I've recorded from the various sources in my setup.

Ableton continues to be a necessary component because you can’t beat editing a song with a monitor, keyboard, and mouse. Thought about MPC One/Live, but if I’m honest I think it would bog me down with options at a time when I really feel like the simplicity of my dawless setup is working for me.


I’m DAWless, but I do weird things.

Prophet 12, Impact LX49 (studio, for realtime recording), BomeBox for control.
Pyramid controlled via BomeBox/PyraMIDI (only controls I use on Pyramid for playing are Start & Stop)
Everything gets routed back into BomeBox because I reserve the right to modulate all MIDI Events flowing through my rig.
Stuff played by:

  • Emu Orbit V2
  • Emu XL1
  • Emu Planet Earth
  • Rample (main center-pan drums mostly for now)
  • Prophet 12
  • Minitaur
  • Octatrack (set up to play like a rompler (rampler?))

DMXIS system to translate MIDI to DMX (lighting is programmed from the Pyramid and follows a logic function based on which Tracks are playing).

Note: I use the BomeBox extensively to standardise my interface because I’m stupid. So, no matter how many or which Tracks on the Pyramid might be ‘drums’, on my controller Drums are always the same. Oh, it gets much more involved than that, but if you’re curious: be careful asking autistic people these questions (especially people on the spectrum with degenerative brain issues).

There’s other stuff like Grid control, defining all Pyramid Tracks into Groups or Layers (using Chance 0% hack to make a Freelatch mode), making the OT work more like a synth than an Elektron device, meta-controls, logic to determine lighting patterns so it’s handled automatically and modulates as I modulate Tracks, swapping Tracks for fills, etc, but suffice to say I spent some time working on the script so it’s tailored to my brain dysfunctions and heavily influenced by my synaesthetic relationship with music. heh I have a blog post about it somewhere, but I’m not ready to share until I make videos - and my music sucks anyway, so no reason to demo it, eh? LOL

For live, it's a keyboard on a stand with the Pyramid on a shelf on top, a 6u rack enclosure, and then I'm just missing something to port the Minitaur, OT, and modular lunchbox - plus the lights, which are some bar washes, some ADJ FX, a hazer, and RGB lasers. But - I haven't played live since I added the Pyramid, and it's probably not really high on the 'future' list for a while.

And please share your setup @vt100 . I enjoy your music and it'd be neat to see how normal people do stuff.


I'd love to pick your brain a little bit on your dmx logic and see any prototyping videos you're willing to share. I'm all about exploring how, if making music dawlessly, we can have more integrated visuals than, you know, djs and everyone else. I'm actually working on a similar-but-different idea to do midi-controlled animation in patching software (presently Resolume Wire) to animate led sculptures (my visuals are not dawless; i'm okay with that). It's very early stages and i've been enjoying digging into everyone else's lights to get ideas.

idk if i’d call myself ‘normal’, I may have the largest-portably-ergo minded dawless setup on the planet. Anyways - okay I’ll put my description thingie in the next post.

you know, me too. I don't think i have the itch to record dawlessly. I am, however, trying to get in the habit of keeping the music as much as i can in the dawless rig. But I edit in the daw all the time, and in many cases like to do a tiny bit of hw fx automation for my records. The stuff we put on spotify lasts forever, so, idk, it's just worth it to me. Though I really do intend to do that less and less, just because I want to have an authentic performance, and I really think that the better I can make that performance, the better everything else is gonna be - so there's a lot of incentive for me not to edit too hard in the daw. but yea, I totally feel this.

okay, so maybe just as soon as I learned anything about programming electronic music on a daw, I wondered what it would be like to 'do it live'. I think maybe I had these ideas of, like, Tangerine Dream in the seventies, or I'd convinced myself that Orbital had never touched a computer before or something. Anyways, so eventually I set myself free and started building my spaceship.

I probably do have the largest dawless rig on the planet, lol. I've always liked to write kind of dense songs, and i also don't really like to compromise on what i want for a sound. So literally every time someone is like "well you don't need all that stuff" over the years - ha, shows what you know. I also think that having lots of different instruments makes orchestras sound more interesting (right, this is based on programming sample instruments for so long), so I like to have a blend of different fundamental kinds of sound sources - i have analogs, digitals, and chips/chip replicas for this reason.

My goal has been to be able to perform electronic music as though a dj were playing records, but on a dawless setup designed to travel, cause you know, I would like to play gigs. And to take advantage of things i can do dawless that a dj couldn't do - or to discover what they are, anyway. Also, in the last year or so i've been sequencing little bit swishes and whatnot to have my visuals (in Synesthesia) be a little tighter than just what frequency response can give.

What does 'as though a dj were playing' mean? Well, to me it means this:
a) there are distinct and different songs
b) I can blend between the songs using various mechanisms (drums, silence, mixing two parts together from each song, etc).

Goal: Contiguous techno, but on synths.

High level - the pyramid is central control in my setup:

  • orchestra pit (details to follow)
  • 32 channel analog mixer - yamaha something.
  • master bus chain: analog heat + harmonix platform (limiter)
  • 8 channels are side chained on a compressor to the kick on a little 8 channel rack compressor i found
  • bomebox - I used this to convert certain cc signals to start/stop messages for a couple of my synths. I program sequences on them and use these triggers to play those sequences while i'm loading songs on the pyramid and it can't play notes itself.
  • 2 banana split midi splitters, one for each bus. Everything that doesn’t fit there gets daisy chained to a thru on another device.
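The bomebox trick in the list above (turning a CC into start/stop for a synth's onboard sequencer) boils down to a tiny translation rule. Here's a rough sketch of the idea in Python - the status bytes are standard MIDI (0xBn = Control Change, 0xFA = Start, 0xFC = Stop), but the CC number is just an example I made up, not my actual mapping:

```python
# Translate an incoming MIDI CC into System Realtime Start/Stop so a
# synth's internal sequencer can be triggered while the Pyramid is busy
# loading a song. CC #20 is an arbitrary example assignment.

TRIGGER_CC = 20  # hypothetical CC number bound to the trigger button

def translate(msg: bytes) -> bytes:
    """Map CC #20 value >= 64 to Start (0xFA), value < 64 to Stop (0xFC)."""
    status, data1, data2 = msg[0], msg[1], msg[2]
    if status & 0xF0 == 0xB0 and data1 == TRIGGER_CC:
        return bytes([0xFA]) if data2 >= 64 else bytes([0xFC])
    return msg  # pass everything else through untouched
```

In the real rig this logic lives in a BomeBox/MTPro translator rather than Python, but the mapping is the same shape.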

There are two synth stands for all of this. First is a 'spider' synth stand that has three tiers. On the top tier I have my 'board', which is this custom thing my friend made after talking to me about my last board and my desires. Basically it's two slanted shelves with a ton of velcro on them, plus a flat surface to the right where I do cabling, put hubs, and a patchbay. All of my little synths/fx are on that, and they're pre-wired to the patch bay. The patch bay is there because I can easily prewire the synths, make a map, and plug things in really quickly - this unit hasn't been out yet, but it will be, so there will be more iterations. This also means i don't have to do as much audio, power, and midi cabling when i move things. I still need to build a giant box for it, anyways (more like a coffin).

Devices on tier 1 board:

  • nesPoly (nintendo synth)
  • hapiNES (another nintendo synth) - 4-instrument multitimbral
  • Therapsid mk2 (SID synth)
  • AY3 (AY chip synth)
  • digitone - 2 channels i've reserved for multitimbral use, and on the other two i program patterns and mute/unmute. I just started working with FM and it's really rounding out the sound imho. This was a megafm (yamaha chip fm synth), but I bricked it and have since fallen in love with the digitone.
  • analog rytm - I program drums directly on this, not the pyramid and perform pattern switches directly on the rytm
  • pyramid
  • empress echostation (delay)
  • strymon timeline (delay)
  • ventris dualverb

i have velcro on the underside of tier 1, which is where I have the bomebox and like to affix power strips and things like that. There's also some electrical distribution inside the box (a 5v midi hub and an 18-9v dc brick).

Tier two has a nord lead 4 - I use this 4-instrument multitimbral.

Tier 3 has another board - it’s flat, it used to be the tier 1 board. It has:

  • roland system 8 (full size, idk, i strangely like this synth a lot), I have a sequencer trigger setup in the bomebox for this
  • platform
  • Analog heat
    Uh, there might be something else on it - i'm doing this from memory and i can't remember right now. lol

Okay - that’s stand one, the spider stand. then I have an x stand with a second tier. The x-stand lower tier holds my mixer. Then the upper tier I have another board with my analog synths:

  • subsequent 37
  • prophet rev2 - i have a sequencer trigger in the bomebox for this

So far if i’ve found a reason to use midi on something i’ve got midi going to it. Like on the heat you can do classic master bus filter effects pretty easily with a little midi so it’s got midi to it. Yesterday I was using a couple of midi lfos in the pyramid to modulate a ‘shimmer’ on the secondary reverb of the ventris, another example.

Probably in the next year i'm going to learn to make some more custom cable lengths for it, to reduce some of the insanity on the tier 1 board. I have a bunch of work on the portability side of things too. Before the changes i have now, i could set it up in 2 hours (if i prepped for two hours lol). But yea, i need to make some stuff: the travel coffin, road-case a couple other things. But if you look at my stuff, i've written in Sharpie all over it so I can just not even think about what goes where. Some of my cables are labeled too, if it's helpful for that purpose.

I also put my midi CC values on all my synths - at least, if i've ever automated the parameter. I use this, like, glow-in-the-dark mixing console tape: I cut little squares and write the cc numbers on them so i can look at manuals less. I actually prefer this to the pyramid's definition files - less looking around.

Anyways, some of the cool live stuff i’ve only ever done by myself just because of covid, but I figure I got enough covid left to get the setup super swanked out with some cool visuals by the time covids all over.

see? idk if i’d call me normal haha. If anything, i definitely feel like i’m the only one who approaches dawless this way so far. Anyways, that’s my spaceship, in words.

As far as the pyramid - I'm a big fan of templating and things, so that's how i navigate working with all the devices. I've been building a template for a long time now. Track Page A is for instruments. Page B is for fx and whatever. Page C is for what I call 'landings' - for a non-pyramid sequencer that's playing when i load the song, the 'landing' starts that sequencer before I transition to that song. And Page D is auto-vj. I also don't program drums on the pyramid but on the rytm, and I manually perform pattern switches on it, fills, and stuff like that.

phew, okay i’ll stop now.


I feel like there’s a point of diminishing returns when you start to try to make it all dawless just for the sake of it being dawless. A lot of what I like about electronic music is inhuman, mechanical. A daw is very good at adding “perfection”. I do take pride that the core of my stuff is generated from live exploration of ideas and some performance. That makes it feel less like a contrivance. But I’m perfectly fine with the daw being relegated to a final polish role. I think a “live version” vs a “studio version” of each song will naturally emerge.


I had gone dawless for a while, but recently reversed direction a bit…
now I'm very much 'hybrid' :slight_smile:

the dawless setup was:

  • Pyramid
  • Octatrack
  • Eurorack Modular, w/ Hermod
  • Virus TI
  • various other things like Organelle, Spectralis
  • midi : midihub + iConnectivity mioXM

another big part of my setup are ‘control surfaces’
I have a few Expressive controllers which are important to me - Eigenharp/Soundplane.
also recently the Electra One midi controller has become pretty central to my setup

so yeah, I went pretty 'hardcore' dawless - even at some point starting to record to a 4-track tape!
though I never managed to include succulents :wink:

why did I pull back?

mixing audio…
I had a largish mixer (16ch?), but really got fed up with it, so gave it away.
I also started to get fed up with the 'cable salad'; nothing ever seemed to be connected to what i wanted it to be, and I had cables everywhere (midi, audio, power)
(yes, I see the inconsistency, that I love modular with patch cables everywhere! )

so I reached a fork in the road…

did I buy yet more hardware… including a better mixer, something that I could better record to the daw with, and probably a patchbay?

at the same time, I had recently got a new Mac Mini M1, which, frankly, I was blown away by… everything runs silky smooth, a joy to use… and I really wanted to integrate and play more with vcvrack, and things like bitwig grid… esp. for cv into the modular.

so instead… I chose to go 'all in' with hybrid, and bought a large audio interface with an adat expander, giving me something like 20 channels in and out, and decided for the most part to use my computer as the mixer.

so for the last few weeks, I've been rewiring everything, creating templates, so that I can seamlessly route things around, whether it be hardware instruments or software.

I resonate with @roger 's comment above; I feel before this I had almost got to the stage of being dawless just because it was dawless… for no real reason.

I love computers for their versatility, and love hardware for its tactile nature… so feel I should leverage the best of both worlds.

however, I do kind of feel dawless ,in many ways…
I'm not really into editing on a daw, I just use my computer for instruments, for fx, for routing… and as a recorder.

put another way: is something like Maschine+/MPC Live 2 really dawless? (if you're being 'pure')

of course, I still have the same setup, so I can still run dawless (mixing on my SSP / Octatrack or modular).

I also still find dawless setups 'inspiring'… mainly because I'm interested in the tactile side of it; this is something I strive to keep in my setup.


Yeah, I think this is the essence of why people are drawn to the idea of “dawless”.

In my case I need the sense that I can be away from the computer, primarily because… I can't keep it on all the time, nor can I keep it off the internet, nor can I keep it dedicated to music. All the software installed on it, plus an annoying bug I haven't been able to figure out where it resets while waking from sleep or hibernation, puts the nail in the coffin: computers, for me, have got to stay well and far away from my inspiration zone, which can be ruined by the computer BS I've come to know so well.

I felt similarly dawless when I used to keep a dedicated production computer on all the time - in fact, I finally felt like it was an instrument, always available and reliable. But there was a wrinkle when I realized this setup was not very convenient to transport, and I like to jam outside the studio a lot - even if it's just upstairs with the roommates, but also at other people's houses, etc. With a computer, you need an audio interface, a controller, a mouse and keyboard (both the keyboard and touchpad were real shite on that laptop), and power cables for both the audio interface and the laptop - and that's if you don't want to incorporate an external device. Then you need a MIDI interface, and sometimes the computer "forgets" things, requiring you to enter Settings and troubleshoot, and… you see where I'm going with this. :wink:

That setup was definitely the culprit behind some frustrating and awkward times.

So yeah, the dawless component of my studio is very important. If a computer were as effortless to transport - and, well, turn on - as dedicated gear, I definitely wouldn't have bothered and would have just gotten some MIDI controllers and integrated music systems in the vein of Maschine (not really that familiar with all that, I just know it's a thing that exists). Because yeah, they are more flexible, and you can carve a simple workflow out of them, and some people do that, but maybe they aren't as concerned with immediacy as I am.

The tactile quality is sort of a bonus, cuz I discovered only after I figured out my setup that I really like the feeling of being "in" the workstation instead of operating it through a porthole - and I have a feeling a lot of people are starting to appreciate that too, since dawless has become such a trend.

yeah, I think it doesn't really matter what you use - really it comes down to least hassle/resistance.

I think that's possible with both dawless and computer-based, depending on what you have to hand.

the other thing I'm trying (as a dawless technique) is what Hainbach called 'islands':
rather than trying to create a setup where everything is connected to everything else,
create small/limited groups of instruments (2 or 3 max) as standalone setups.

I like this approach, since it encourages the benefits of minimalism - focus, lack of complexity.

and, of course, over time you can change these islands, use different combinations, and so keep things fresh and interesting (after all, I do like change :slight_smile: )


My setup is hybrid dawless, but 90% no computer. I use Ableton for managing presets and tuning my drum machine, and for finishing tracks. I went this route as I spend hours on my computer video editing, designing, and teaching. I just needed to get away from the screen for the sake of, well, getting away from the computer.

My setup:

Octatrack MK1
Analog Keys
Prophet 600
JoMoX Airbase 99
Meris Pedals
Moogerfoogers (Low Pass, Ring Mod, Freq) - I have a modular ADSR set up to turn these into a drone mono synth.
Tascam Model 24 - recording and sound card
iConnect MIO
Microphones and pedals.

I am still working out the best production methods, and the main area I am struggling with is the best way to pull it all together for final composition and arrangement. Currently I'm learning the Octatrack, and might use that or the Analog Keys for arrangements.

What I like is that I turn on two power boards and a few switches and I am in and playing - more fun than a computer.


I typed an extremely lengthy response, but apologies that my abilities to translate to neurotypical have been severely lacking lately. Sorry.

I don't know the proper way to describe it, but I interface with the Pyramid quite differently, and my logic (or lack thereof…?) is built on that - namely, I use an outboard controller and the BomeBox to create 4 virtual sequencers (one for each Bank) on the Pyramid: Drums, Bass & Rhythm, Melodics, DMX & Control. These are further subdivided based on what they are, but Bass/Rhythm is the simplest, having 4 Groups and 4 Layers. The selection of which Pyramid Tracks get turned on/off to control the lighting is built around which Group and how many Layers are playing - and each Pyramid Track for lighting is created as 'elements' that relate to the overall theme (motion, colour, etc).

(I know some people get annoyed that Pyramid SEQs affect Mute States for all 64 Tracks, but don’t want to use an outboard controller & Event Processor, which allows you to affect a subset of all Tracks. Everyone knows this, right? Whut?)

Also, loading a Project into the Pyramid sends a PgmChg to the BomeBox which loads the static ‘non playing’ selections for the washes. One Track on the Pyramid resets any lighting (for ending songs, etc). BomeBox takes care of a lot of housekeeping, but all my MIDI data goes through the BomeBox because: reasons. Then there are 10 Tracks based on what Drum Groups or Layers, and Bass Groups or Layers are playing. I haven’t implemented melodies that exist outside of the Drum & Bass Groups. There are some Tracks for temporary additions/subtractions based on Fills or Complexity which are pgm’d on a case-by-case basis (see aforementioned BomeBox calling new routing/translators based on what song I’m playing).

Note: I don’t DJ - or rather I’m too old to relate to DJ’ing the way I relate to playing songs on a synth. My “songs” tend to be 10-40min long when I’m not trying to do trashy 80s style synth pop. In other words, I don’t transition seamlessly - just not my approach.

I'm running about 92 channels of DMX with washes front, washes back, 2 rotating/shutterable/colour FX, one relay with two audio-affected coloured beam fx, a single RGB laser (for this setup), and a multicolour strobe. All on one tree with washes on the floor/stage, using ADJ WiFly for faster setup. This is going to change once/if I get some more music together - but no rush because: pandemic.

Ugh - I’m doing it again. Sorry.
Someday I’ll make a video of the lighting and how it relates, but honestly most people just back away slowly when I start describing this stuff. I made a huge blog post about my system, but the only people I shared it with…disappeared. LOL

I lost all abilities to relate to neurotypicals after a burnout event. Sorry. :wink:
I love this subject, though. Apologies again.

It seems to me that so many people Start and/or End in a DAW, or prefer using a computer to create/edit MIDI ‘clips’ that it would make sense for future hardware manufacturers to incorporate some sort of hybrid system.

I’m thinking something along the lines of Rusty’s OctaEdit for the Octatrack.

As that relates to Pyramid, it would be amazing to be able to have an Ableton template that you just fill in the clips, then click-click-click and it would ‘build’ a Pyramid project you can just load.

Or better yet, future hardware designers would have an app that allows you to review and edit all the parameters of your Project including visualising Piano Rolls in realtime - sort of like realtime editing a synth via SysEx.

Yea, i’ve got it to a one button boot up. It’s nice :slight_smile:

I don’t know your whole setup, but would this mean that you have to program inside the bomebox where to route whichever pattern to the desired synth (say the one playing bass for this song)?

I don’t totally understand or maybe it’s my brain :slight_smile: A seq is always a subset of the tracks, isn’t it? Perhaps that’s a relativistic way of looking at it. But if I have seq A and seq B and seq B only differs from Seq A in terms of a single track’s mute state, then that would imply that Seq A is only working on a subset of tracks, would it not? I mean to say, whatever a seq operates on is what you program it to operate on, which unless you’re into black midi or something fun like that, would be a subset wouldn’t it?

I find that dawless and hw synths in general tend to lead to longer compositions. My average on my upcoming record is around 10 minutes a song. There are just physical realities to how much shit you can mung, and how quickly, and things. This is my relationship with it anyways, but I have to do what it wants me to (this is a core perspective in my approach).

for me, and at least for my goals, there's my music and the performance of it, and then there's recording and making records. In general, I am actually trying to get a deep, interesting musical performance without the use of a daw. But my love of dawless and why I do it is about composing and performing, not recording. Now, I usually do edit something at some point, but I'd do that to a guitar too - I guess it's a different art form and a different set of goals.

For me, and dawless, I was thinking about why I’m doing it and it’s a bit different than some of the responses in this thread. While yes, the tactile thing is a part of it, and the getting away from the computer is a part of it (and the impetus), there’s more for me.

One thing is, I find the limitations within a dawless setup are something that I like exploring. It's like this musical confinement for me, and within that confinement, I actually write better music, and more enjoyably at that. Systems of constraints have implications for music that I find fascinating, and I'm excited to succumb to them.

Second, I find great art in the creation of the system. And the navigation of that system.

Third, I find that these limitations force me to go deeper when I’d almost always otherwise choose breadth at some point.

Anyways, just some thoughts I had when reading about people's expressions of hybrid dawlessness. I'm definitely at a stage where i'm digging my heels into dawless; I kinda just surrender to its shortcomings - they're an asset.


i have so many blog posts hahahahhaha.


I’m starting here because YAAASSSS!
I don’t like finishing things, I guess. I visualise a system, then I make the system, then I make a few songs, then I walk away. When I was performing I’d adjust my system every show, but curiously enough it was all headed in this direction, except I used Ableton.

The standardisation I’ve been working on means I can work on a song, walk away, then come back and press Play and basically play it all without having to remember too many things.

Yeah, tough to ‘see’ the dataflow of someone else’s system. Electronic music is such that we get to make our own systems, which unfortunately creates barriers to understanding.

But my rig can have a laptop in series with everything, running MTPro. Once I set everything up, I upload the MTPro script to the BomeBox, add one loop to the back of my rack case, and the laptop is out of the picture. The laptop runs in series with the data, with a basic BomeBox "THRU" script for when the laptop is connected.

My script is kind of big; it took about a year to test out ways to approach it, then figure out how to implement it, then refine it. I use Global Variables as bit masks for Track states, for Layers and Groups. Once an Output Timer is enforced, each Track's status is compared between Group & Layer, then compared to the previous run state, and output changes if necessary. So, like… DG1 (Drum Group 1) might have a mask of 0xF0D (3853 decimal), which in binary is 111100001101, so Tracks 1,3,4,9,10,11,12 belong to DG1. Then DL1 might be 0x5, or binary 101, which means Tracks 1 & 3. Both need to be 'on' for Tracks 1 & 3 to play.
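To make the mask logic concrete, here's a tiny Python model of it using the exact values from the example above (in the real rig these are MTPro Global Variables, not Python ints - this is just an illustration of the AND-the-masks idea):

```python
# A Pyramid Track only plays when its bit is set in BOTH the active
# Group mask AND the active Layer mask.

def active_tracks(group_mask: int, layer_mask: int, n_tracks: int = 16):
    """Return 1-based Track numbers whose bit is set in both masks."""
    both = group_mask & layer_mask
    return [t for t in range(1, n_tracks + 1) if both & (1 << (t - 1))]

DG1 = 0xF0D  # 0b111100001101 -> Tracks 1,3,4,9,10,11,12
DL1 = 0x5    # 0b101          -> Tracks 1,3

print(active_tracks(DG1, DL1))  # -> [1, 3]
```

Only Tracks 1 & 3 survive the intersection, matching the "both need to be 'on'" rule.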

Of course, I made data entry easier, but that’s the under the hood bit I guess.

There are Translators that deal with Routing and the Main Processing, and Variable Init routines, but then I have MTPro Presets that are called for each ‘song’ that basically say for any given Pyramid Track, which Group & Layer it belongs to.

And all MIDI Data goes back through the BomeBox because I have things like an Intensity meta-control - which for most purposes will increase/decrease Velocity of all Note Events - but based on which ones you want on a song-by-song basis.
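The Intensity meta-control above could be modelled as a simple per-message velocity scaler - this is my guess at the shape of it in Python, not the actual BomeBox rule (the CC range and the unity point of 64 are assumptions for illustration):

```python
# Scale the velocity of Note On messages passing through, clamped to the
# valid MIDI range 1-127. 'amount' is the meta-control knob value
# (0-127, with 64 treated as unity gain here).

def apply_intensity(status: int, note: int, velocity: int, amount: int):
    """Return the (possibly velocity-scaled) note message fields."""
    if status & 0xF0 == 0x90 and velocity > 0:   # Note On only
        velocity = max(1, min(127, velocity * amount // 64))
    return (status, note, velocity)
```

Note Offs (and Note Ons with velocity 0) pass through untouched, which keeps note pairing intact.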

Note: My 'songs' aren't necessarily static. They are sort of like a bunch of clips and a given set of modulations that go together. I can re-imagine any song, and things are standardised so that I can do this. I don't look at it as controlling an individual synth; let's say Melody 1 has 2 buttons to modulate, one for Fade In/Out and one for Morph/ModWheel kind of stuff. I don't have to Fade In/Out, but for whatever sound I'm using I've predetermined what a Fade In/Out would be - perhaps via Filter or just Volume/Expression. This is coded into the BomeBox Preset. Nevertheless, two specific knobs on my controller are always Fade In/Out and Modulate for Melody 1. Two for Melody 2, two for 3, two for 4. That sort of thing.

These modulations are also not always 0-127. I use Global Variables that I pack with (depending on the modulation) a LoVal, HiVal, & sometimes Breakpoints so I can invert the response, change it to go from 62-100 (as the knob goes from 0-127), etc.
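That LoVal/HiVal remapping is basically a linear rescale with an optional invert. A minimal sketch of the idea (the function name and the 62-100 range are just the example from above, not actual script syntax):

```python
# Remap an incoming 0-127 knob value onto a per-song usable range,
# optionally inverting the response. lo/hi play the role of the packed
# LoVal/HiVal globals described in the post.

def rescale(value: int, lo: int, hi: int, invert: bool = False) -> int:
    """Map a 0-127 knob input onto [lo, hi]; invert flips the curve."""
    if invert:
        value = 127 - value
    return lo + (value * (hi - lo)) // 127

print(rescale(0, 62, 100))    # -> 62
print(rescale(127, 62, 100))  # -> 100
```

Breakpoints would add a piecewise version of the same mapping, but the core is just this scaling.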

This is all to simplify my interface. I learned with my first synth, reading up on 'how to program a synth' LOL, that the goal is one instrument with many voices - so I standardise my relationship. Well, it's sort of a categorisation system for my head - which is again an autistic trait.

Each Melody has the aforementioned 2 controls. There's also a single Bass control for Cutoff (again, changeable to only the usable range rather than the full 0-127, based on how I set up the BomeBox Preset), controls specific to the Rample (because I use that for specific things), and I love a Suspend button - with which I can 'suspend' all changes to the knobs, make adjustments which just get stored by the BomeBox, then when I press the Suspend button again to UNSuspend, it sends all those MIDI CCs at once (spread out a bit).
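The Suspend button behaviour can be sketched as a tiny latch: while suspended, incoming CCs get cached instead of forwarded, and unsuspending flushes the latest value per knob. This is an illustrative Python model, not BomeBox/MTPro code - all the names here are made up:

```python
# Model of a 'suspend' latch: cache CC values while suspended, flush the
# most recent value per (channel, cc) when unsuspended.

class SuspendLatch:
    def __init__(self):
        self.suspended = False
        self.pending = {}   # (channel, cc) -> latest value

    def handle_cc(self, channel: int, cc: int, value: int):
        """Return a list of (channel, cc, value) messages to send now."""
        if self.suspended:
            self.pending[(channel, cc)] = value  # store, don't send
            return []
        return [(channel, cc, value)]

    def toggle(self):
        """Press of the Suspend button; flush stored CCs on unsuspend."""
        self.suspended = not self.suspended
        if not self.suspended:
            out = [(ch, cc, v) for (ch, cc), v in self.pending.items()]
            self.pending.clear()
            return out
        return []
```

Only the last value per knob is kept, so twisting a knob around while suspended sends a single CC on unsuspend rather than the whole gesture.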

I created that system when I had the OT in my rig a bit differently and would mess with loops with FX slowly until it became a noisy wash, then Suspend, set all the values back to 0/Start, then Unsuspend on the 1 of the next section. LOL So far the OT has been extremely resilient at being slaved, btw. I wouldn't have expected that given my impression of Elektron.

Blah blah blah - I’m on a sugar induced roll. Sorry.

Well, on the Pyramid, a SEQ is the Mute States of all 64 Tracks. Because I don’t do Loops that are all built on the same number of repetitions of bars, and I enjoy throwing sections of songs into different meter, I use Relatch Mode on all my Tracks. (As noted, using the Chance 0% hack I can Relatch a Track but it still might not even send Note Data if the Layer is turned off via the control script)

It gets a bit wonky in there, but I wanted better control of when Tracks start at 1. Using PyraMIDI with a Controller and a BomeBox to translate, I can affect 16 Tracks with one set of buttons irrespective of what the other 48 Tracks are doing and never affect them.

Some of the things I implemented were based on older versions of PyraOS, but let’s say I’m entering a Bridge type of section and I’m using BG3 for the Bridge (Bass Group 3). I’m currently playing the Minitaur with some frilly bits played on an Orbit which are Layer 1 (two Pyramid Tracks), a few Pads on different synths on Layer 2 (3 Pyramid Tracks), and I have some heavy percussion on Layer 3 playing. There are some high note rhythmics on Layer 4 but that’s not voicing right now. I hit the BG3 button and ALL layers are now started from the 1, even though Layer 4 isn’t playing. Then maybe halfway through the bar, I open up Layer 4 and it’s in Sync - rather than starting from 1 if I used the SEQ Mute feature or just Unmuted the Track.

That’s the advantage of the Chance 0% hack.

Also, with the Chance hack, there are fewer stuck notes because a Note started with a 0% Chance for the next note to happen still has a Note Off Event. This is a huge thing for my Complexity meta-control.

Sometimes I get bored with how long a section is.
Sometimes I think it’s too short.
It’s worse in performance when something I call “Adrenaline Induced Temporal Anomaly” happens - when you think you’ve been playing a bit for way too long…but you haven’t even played it enough to establish the theme well enough to reference it in a later movement! LOL

re: Dawless reasons - I don’t know what is wrong with me, but going in and out of the computer seems to always yield hellacious jitter. And no matter how much I try to fix it, it might be fine, then the next week will suck. I want to make music, not play with Windows. I was MUCH more productive with my Grey Matter E! board on my DX7IIFD back in the 80s! LOL (Then again, my hands/fingers worked back then and I could play keyboard and guitar. sigh)

Also, for sound generation, for some reason I think synths, even romplers, sound better in live situations than anything going through my audio interface from Ableton. Prejudice? I dunno - something something transients perhaps. Blech. Apparently I like romplers since I have a total of 7, I think. LOL Or rather, the bulk of my music making was done in the mid-90s. sure…that’s it. yeah.

And third, I just don’t like a laptop for performance. I want to simplify some sides of things so I can make this other stuff happen. I’ve always wanted to have this kind of interface and the Pyramid is the closest hardware to making this happen - something I’ve ‘seen’ in my head since I was first listening to Wendy Carlos and Synergy and Kraftwerk on 8-track tape and shooting a homemade HeNe laser through household drink glasses to make patterns on the ceiling! LOL

Interesting you talk of limitations with a DAWless system. Are they limitations tho? Yeah, you have more options to do ‘things’ with a DAW…but, does a song need all-the-things? I guess I should use Ableton more, but I hate upgrading so many things - doing system housekeeping just to make music.

As for my overzealous MTPro/BomeBox system, it allows me to make a song and then walk away. Then I can come back months later and not have to remember which Track does what - it’s all built so everything works together with a few notes on how to reconstruct stuff - one interface, many sounds!

I’m going to go back and listen to your music knowing what I know now of your approach.
This is exciting.
Thank you for sharing!

Oh, sorry. Also to add: My setup is mostly the Control Script, BomeBox, Pyramid. Everything else can be adjusted from inside the BomeBox scripts, so the destination synths are irrelevant and can be changed on a whim.

Actually, I use several Emu romplers which all respond to 16 channels. I can use SysEx to turn channels on/off on each synth, so…say, this song the XL1 is playing Channels 10-12. Then next one, the Orbit is playing Channel 10, the XL1 playing 11, and the Planet Earth playing Channel 12.

I love SysEx.
Oh, and obvs I love Emu mostly because the Orbit (V1 & V2) have the ONLY Operating Systems I haven’t been able to crash. Srsly. I’ve crashed everything else at one time or another.

My next project is coding a translation that turns incoming MIDI CCs into SysEx so I can directly address most of the Emu romplers’ parameters without using the MIDI A, B, C, D etc. system. (The older, I guess 90s, way of modulating synths - not every parameter was directly addressable, so you’d set an incoming CC to go to A, B, C, etc…and then in your patch MIDI A, B, C, etc. could be routed to modulate something. Just a middle man.)
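The shape of that CC-to-SysEx translation, very roughly - note the parameter IDs and message layout below are placeholders I made up for illustration, not the real E-mu spec (check the Proteus family SysEx docs before trusting any byte of this):

```python
# Sketch: map a few incoming CCs straight to synth parameters via SysEx
# parameter-edit messages, bypassing the MIDI A/B/C/D patchcord middleman.

# cc number -> (param id LSB, param id MSB); purely illustrative values
CC_TO_PARAM = {
    74: (0x01, 0x00),   # e.g. "filter cutoff" (hypothetical ID)
    71: (0x02, 0x00),   # e.g. "filter resonance" (hypothetical ID)
}

def cc_to_sysex(cc, value, device_id=0x00):
    """Translate one CC event into a parameter-edit SysEx message (sketch).

    Layout (placeholder, not the documented E-mu format):
    F0 <mfr id> <device> <cmd> <param lsb> <param msb> <val lsb> <val msb> F7
    """
    if cc not in CC_TO_PARAM:
        return None   # unmapped CCs pass through untouched
    lsb, msb = CC_TO_PARAM[cc]
    return bytes([0xF0, 0x18, device_id, 0x01,
                  lsb, msb, value & 0x7F, 0x00, 0xF7])
```

(0x18 is E-mu’s MIDI manufacturer ID, if memory serves; everything after that byte is a stand-in.)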

This reminds me of something Modeselektor does - “a drama knob” - though in their case they find whichever parameters make things more dramatic (not necessarily exclusively velocity).

Yea, I think that’s the point of any standardization/templating. It’s like, ‘how much stuff can be stored in my brain’ and then given what can’t, can you compress it in some kinda ‘process’ that you just know that requires less memory. I imagine it’s relative to your skill and how much stress you like to manage while playing hehe.

I like to leave in room for improv and don’t like things to be too pre-determined. Kinda like, I build up and store a really good suggestion and then remember what I like.

So whatever gets you to performability with the flexibility you need - that’s the sweet spot and that has to work within anyone’s personal workflow perspectives.

lol, this is why i got a mac.

To me they are.

Only 32 channels. Only six sends. 10 different devices that all respond to program changes differently, if they even can load a patch. If I reach for a different delay, that’s all my songs - as the rig only has one state at a time and that state applies to everything i’ve ever created (so i can’t just go use one different delay on one song). I only have 3 eq knobs per channel - not multiple 8 channel eqs including ones with mid side processing. No way to splice 3 different kicks together (without going back to my daw anyway haha).

You have to have a cable for every device, effect, return.

Let’s not even get started on what we have to do to switch songs while playing.

Given how I’d use a daw anyway, it’s an extreme limitation. I’m just not going to go reach for a string orchestra whenever I feel like writing with a string orchestra. And this is kinda the imposed depth I was talking about - you can change the system, and there are reasons to do it - but changes that are simple in a daw (like loading your songs, or using whatever instrument you want, or having midi actually work) can impact my life’s work. This tends to make me go deeper with what’s already there (whereas in a daw, i’m always reaching for another new toy).

Well, if i’m working dawlessly it definitely doesn’t. if i’m working in a daw, it also doesn’t. However, in a daw, I believe I have a lot more things to choose from. Working dawlessly, I believe my spaceship has a way it operates best and leaning into that is the super power. I think, in the context of music historically - a daw represents a world without limitations (to me) - you can play any instrument in any acoustic space (or lack of space), with as many fingers as you want.

The limitation within dawless, for me, is quite tangible (all the way down to the synthesizer bugs which are much harder to fix than any soft synth that’s ever existed). But it’s not really about all the things in a song… it’s kinda about all the songs created within a specific environment, and due to things like cost and physical space - being restricted to that environment (i can’t fit any more synths, and so my orchestra must derive any new sounds from within). And it’s not like a daw is a mandate for a world without limitations, but a daw does not impose limitations the same way that physical devices do (imho/ymmv).

For me, what’s fun is, I’m usually discovering the limitation more so than designing it. I came from a daw, right - so everything I do gets compared to composing in that environment. And I mean, there’s always a question of how much it’s worth to work around any given limitation (cause I mean, you can, even with arbitrary constraints I place on myself like everything must be hardware). Right: you can have 8 channels of side chained compression – you need 1 outlet, 17 cables (send, return, sidechain), and 8 channels of compression that are connected to a single side chain (or a device that allows you to input 8 side chains, in which case you need 24 cables). You can expense anything, I suppose - but in a daw, you can side chain everything you’ve ever thought of… for free (people in daws really don’t understand how much these damn cables cost, let me tell you). I assure you, if I could have 16 channels of side chained compression, I would (soooooo limitinggggg).
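For anyone checking my cable math, it’s just send + return + sidechain per the two wiring schemes:

```python
# Quick sanity check of the sidechain cable counts above: 8 compressor
# channels with one shared sidechain source vs. one sidechain per channel.

def cables_needed(channels, individual_sidechains=False):
    sends = channels                                      # mixer -> compressor
    returns = channels                                    # compressor -> mixer
    sidechains = channels if individual_sidechains else 1 # key input(s)
    return sends + returns + sidechains

shared = cables_needed(8)              # 8 + 8 + 1 = 17 cables
per_channel = cables_needed(8, True)   # 8 + 8 + 8 = 24 cables
```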

I’m not trying to say one is better btw, it’s just my perspective. I read this book, “How Music Works” by David Byrne, and it really got me thinking about how constraints/environments impact music (highly recommend it). Upon reading that book, I realized that, at least for me, building the dawless spaceship was a way of boxing me in, and that’s been healthy.

So, yea, tl;dr: it is limitation (though i admit, philosophically you can go to town on ‘what is limitation’ for days haha), and it’s not about songs needing to be everything, as much as it’s like… being in a band (every one in the band plays a couple of their instruments, and the band writes all the songs within those confines; electronic musicians don’t have to conform to this, it’s a choice, unlike a band or a dawless performer, or heck, most orchestras - and yes, you can do this to yourself in a daw, if you really want to, to me it’s just different mindset/existence going that route).

1 Like

I must have deleted the bit in a previous msg, but I have 4 meta-controls: Complexity, Intensity, Crescendo, and Morph.

Complexity currently does 2 things:

  • Increase/decrease Global Mute Probability
  • A crazy thing where I can have Pyramid Tracks that only turn on/off based on the Complexity value. So, you can have up to 3 Tracks that are all playing the same Group & Layer, but based on where that incoming CC is set (and I use one of the continuous controller pads on a Prophet 12 which are great for this), it will select which Track is playing in real time. So, you can have a bass line that is super simple, moderately complex, and super complex and then vary within the phrases which you choose. And this is done with Chance 0% hack so Note Off Events are retained, although some issues may come thru if you go from one Complexity layer to another with the same note active and different note lengths, but I look at that as a feature.
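The track-selection half of Complexity is basically zoning one CC into three variants - here’s the idea in Python, with zone boundaries I’ve made up (mine are tuned per song in the BomeBox Preset):

```python
# Sketch of the second Complexity behavior: one incoming CC picks which of
# up to three parallel Pyramid Tracks (simple / moderate / complex versions
# of the same part) is audible at any moment.

def active_track(complexity_cc):
    """Map a 0-127 CC value to one of three variant tracks (illustrative zones)."""
    if complexity_cc < 43:
        return "simple"
    elif complexity_cc < 86:
        return "moderate"
    return "complex"
```

Sliding the Prophet 12 pad then hands off between variants in real time; the Chance 0% hack is what keeps the Note Offs intact during the handoff.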

Intensity is pretty much Velocity, but it’s completely scalable per MIDI Channel
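Per-channel scaling just means each channel gets its own response range to the same Intensity CC - something like this sketch (the ranges shown are invented examples):

```python
# Sketch of "Intensity": one CC scales note velocities, with an independent
# min/max response range per MIDI channel.

INTENSITY_RANGE = {
    1: (40, 127),   # e.g. lead channel: wide dynamic swing
    2: (80, 110),   # e.g. pad channel: stays in a narrow band
}

def scaled_velocity(channel, velocity, intensity_cc):
    """Scale a note's velocity by the Intensity CC, per-channel."""
    lo, hi = INTENSITY_RANGE.get(channel, (0, 127))
    # intensity 0 -> bottom of the channel's range, 127 -> top
    scale = lo + (hi - lo) * intensity_cc / 127
    return max(1, min(127, round(velocity * scale / 127)))
```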

Crescendo can be anything, but usually the approach is volume. One of the cheesy techniques I use this for is manual crossfades, or a pad that is slowly increased in volume or filter cutoff over time manually. This way I can do it quickly or slowly as my whim takes me.

Morph is usually reserved for one or two channels and is a very noticeable, sometimes hook oriented modulation.

Complexity & Intensity are on the Prophet 12 slider pads, Morph is the Modwheel, and Crescendo is a footpedal (injecting the MIDI Data via MIDI Baby into the data stream).

All of these are coded on a per song basis in the BomeBox song Preset. My personal relationship with those concepts is the same. The control is the same. Just the destination data and actual values changes, which I really don’t need to remember.

Better living through stupidity!

My whole approach is all improv. I started designing this when I was playing music for an improv dance troupe - I had to be ready for changes in all those axes (axises? English is difficult).

I am morally against Mac.
Well, Windows too. All my systems are still Windows 7.
I’m switching to Linux next!

Can have 48 with the Pyramid!
It’s a problem for me to have too many channels/tracks, such as in a DAW. I end up making so many ‘buried’ sounds and throw away tracks and spend way too much time making everything ‘just so’.

Which is interesting since forcing things to be “wrong” is another thing I do (over-humanise and/or detune instruments).

WRT effects, I guess that’s why I like working live - I don’t focus so much on effects other than the ones built into my gear. The Emu romplers, at least the ones on the Proteus 1000 base, have 2 effects that all the channels share. OT of course is flexible. I still need a processor for the Minitaur, but I’m partial to rackmount stuff and I really want to find a Peavey Bassfex for it.

Then again, the music that I have in me is quite different than yours. :slight_smile:
Your sound is much more polished and my biggest influence is punk rock, so it has to sound like cr*p.

As for what sounds you need at a given point, if it’s not a front/center lead type of instrument (like strings for big swells - you might want authentic samples but not necessarily need a full dedicated string unit), why not an OT? With the latest OS upgrade, I use mine like a rompler. The Note# selects the Sample Slot, so I set up a Bome translator to accept PgmChg and then build that into each Note Event so I can basically select “patches” (the Sample Slot) via PgmChg.
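One plausible reading of that PgmChg trick, sketched out - this is my reconstruction of the idea, not actual Bome translator code, and the callback names are mine:

```python
# Sketch of the OT-as-rompler trick: since the Octatrack picks a sample
# slot from the note number, the translator remembers the last Program
# Change and uses it as the note to send, so PgmChg effectively selects
# "patches" (Sample Slots).

class SlotSelector:
    def __init__(self, send_note):
        self.send_note = send_note   # callback: send_note(note, velocity)
        self.slot = 0                # current "patch" = sample slot

    def on_program_change(self, program):
        self.slot = program          # remember the slot choice

    def on_note(self, note, velocity):
        # rewrite the incoming note to the slot-select note for the OT
        self.send_note(self.slot, velocity)


out = []
sel = SlotSelector(lambda n, v: out.append((n, v)))
sel.on_program_change(42)   # "load patch 42"
sel.on_note(60, 100)        # plays sample slot 42, not note 60
```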

Did I miss something, or in the old days were synth bugs fewer and further between? Now it seems like anything you purchase, you still have to wait a year until it sometimes even does what they market it as doing.

Get those damned kids off my lawn!

Loved this convo, thank you!
Actually, I’m completely going to rewrite my DMX interface bit in my script to something simpler. I’d love to hear about what software you use to interface the VJ bit of your performance rig.

1 Like

disclaimer: i’ve barely got my feet wet here. I have some ideas i’m exploring - but i imagine after a couple years i’ll have something really nice. Like, i have very little experience animating, making cool images, even using video editing software.

So maybe a year ago I decided I wanted something more interesting than 25 year old Winamp plugins from Geiss (i’ll love you forever) for my jam videos. The first evolution was using this program called Synesthesia (which is great, because i have this whole synaesthetics thing). It’s a slick, OpenGL-based app that can run GLSL and other shader programs, modulated by some parameters and video content. And, as it would turn out, shitty vacation nature videos that are laughable almost everywhere… make fantastic vj footage.

This is Synesthesia: http://synesthesia.live

This does some kinda fft freq analysis and then modulates little bits of whatever scene. It’s cool, and I wanna do more of it, but I feel like frequency response only gets so immersive. Like, yea, a few milliseconds after a beat you can see a flash - but you never know where in the song you are, or like, you can never see different video elements move with different sonic elements. Anyways, auto-vj v1 was kinda born because I realized each of these scenes had faders and buttons in Synesthesia that I could map to midi. So I started mapping midi cc lfos from the pyramid and programming button presses at certain parts of my song. The impact of a syncopated bit of vj is really nice; it just adds something to the beat response.
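For anyone curious, a CC LFO like the ones I map from the Pyramid is just a periodic value quantized to 0-127 - here’s a rough, self-contained sketch (the period and depth are arbitrary examples, not what the Pyramid actually exposes):

```python
# Sketch of an "auto-vj" CC LFO: a slow sine sampled per beat, quantized to
# 0-127 like any MIDI CC, suitable for sweeping a mapped Synesthesia fader.

import math

def lfo_cc(beat, period_beats=16, depth=1.0):
    """Sine LFO sampled at a given beat, returned as a 0-127 CC value."""
    phase = 2 * math.pi * (beat % period_beats) / period_beats
    level = (math.sin(phase) * depth + 1) / 2     # normalize to 0.0 .. 1.0
    return round(level * 127)
```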

and it’s just crude signalling, i don’t mean to imply i’m doing anything really cool. But like, for example, if I have a song that is like 3 movements, I can have the pyramid trigger a scene switch or major colorshift when the movement starts playing. Or a swish that happens at the beginning of 16 bars. Or a million other things I don’t really intend to keyframe every little thing but I can just put little triggers in loops based on the normal way I structure my songs (around 16/32 bars)
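Those “every 16 bars” triggers boil down to a modulo check on a running bar count - a trivial sketch, with event names invented for illustration:

```python
# Sketch of bar-aligned VJ triggers: given a running bar count, decide which
# one-shot events fire at the start of this bar.

TRIGGERS = {
    16: "swish",          # fires at the top of every 16 bars
    32: "color_shift",    # fires at the top of every 32 bars
}

def events_for_bar(bar):
    """Return the VJ events that fire at the start of this bar (1-indexed)."""
    fired = []
    for every_n, event in TRIGGERS.items():
        if (bar - 1) % every_n == 0:
            fired.append(event)
    return fired
```

In practice I just park these triggers inside looped Pyramid tracks, so the sequencer does the counting for me.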

The second iteration of auto-vj is what i’m working on now and i’ve barely begun to prototype. It’s more of an integrated audio-visual art piece. For me, I’d like more audio-visual connection and this guy in my art collective happened to buy this patching software called wire from resolume:


I guess this came out in June. Everyone else uses touch designer. Anyway - Wire takes midi, and it’s also possible to draw simple shapes and move them around. I have a ton more learning to do, but I can create simple visual synthesizer patterns that are tied to instruments I route to it. For fun I did something really simple with a square that moved every down beat. I also did a prototype which was more like a laser beam when I pressed a midi note; you press the note and this light swishes down (it worked really well with this long attack synth patch i had at the time).

My intention is to identify say, a few animations for different parts of my songs. Each one then organized to a layer, and each layer has priority in terms of opacity. So all of the layers are on a single view, but one is more important and one is least important. Then, hook up the midi and watch it go.

And then the fun part… so resolume wire outputs to resolume, and resolume can transform a video signal into output for programmable LED strips. We presently have 1800 leds that we drape over this tunnel structure (it’s an art piece) which would be like, 12 or 18 feet long. Basically, the goal is to use wire to animate my music on this giant led tunnel (we’re gonna need more leds though). What’s kinda nice is, because it’s ‘super low res’ (50x36 at present, but eventually it’ll be like 50x100), I can get away with animating crude shapes (nothing fancy is gonna look any good). So, I don’t think i’ll have to get really good at wire and hopefully can focus more on what kinds of patterns respond to midi notes.
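Side note on driving LED grids as one long strip: rows of strip LEDs are commonly wired in a zig-zag (serpentine), so every other row runs backwards - a quick sketch of that pixel-to-strip mapping, assuming serpentine wiring (I don’t know how this particular tunnel is wired; Resolume handles the mapping in my case):

```python
# Sketch: map an (x, y) pixel on a WIDTH x HEIGHT LED "screen" to its index
# along a single serpentine-wired strip. Grid size matches the 50x36 above.

WIDTH, HEIGHT = 50, 36

def led_index(x, y):
    """Serpentine mapping: even rows run left-to-right, odd rows reverse."""
    if y % 2 == 0:
        return y * WIDTH + x
    return y * WIDTH + (WIDTH - 1 - x)
```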

I don’t really want to program specifically where things should go in a light show, so I really like this approach. And I can do things like modulate the position of an image by pitch, or modulate brightness (i think) with velocity and so on. I think this would give a really cool dynamic kinda thing where you don’t totally know what to expect, but it makes sense with the music that you’re hearing.

So this is what i’m prototyping now: the pixel synthesizer and it will be driven by my pyramid. Lots of work to do yet. I’d like to expand upon this some day, and do an integrated performance on multiple led sculptures (with different shapes and things). Maybe i’ll get there. I have nothing else to do with my life, so, I probably will.

I do intend to expand on auto-vj v1 too just because i’ll always be making videos for online and I wanna be able to make content for projection mapping (even if it’s just my shitty vacation videos with glsl on top). I might explore resolume and wire more there too. With wire, I can expose a lot more video controls in resolume which can then be mapped and commanded by my pyramid. With Synesthesia, you can do it, but you have to edit json and bind it to glsl goodies; I find the resolume wire workflow much more straightforward than trying to figure out how to write a program that knows each pixel’s value. (maybe i’ll learn how to code fragment shaders some day too, but uhhh, i kinda like tactile so… yea)

Very likely, the control scheme I find will work for both auto-vj v1 and led-vj just because I tend to like to trigger things on common events. And yea, with any luck I just write music, then duplicate those tracks to output to the pixel synth, and magic happens.

hehe, much like the tangibleness of dawless, I quite like the tangibleness of multi-dimensional led arrays. Anyways, okay i’ll stop now.