I’m really curious about how Hapax might work as a more dynamic melodic quantizer. I’ll send this as a feature request, but before doing so, I want to see if anyone else has similar ideas.
Right now, we can set a global scale for a project via the pScale setting, but my projects usually contain key changes and/or modulations, so the idea of a single project-level key+color+scale doesn’t really add any value for me.
I’ve spent a lot of time working on this exact kind of thing in my modular, where I have an O_c operating as a quad Turing machine going into an Instruo QPQ that quantizes the O_c’s CV based on MIDI melodies/chords played live or sequenced. The quantized outputs then go into a second O_c operating as a quad S&H, whose triggers are driven by a Gatestorm.
What I would like to see in Hapax is some kind of real-time analysis of one or more tracks’ MIDI note data, which would result in the pScale (or a similar setting) being set on the fly. This could be as simple as setting the pScale to whatever notes are currently being played on a given track at any given time, allowing the Scaler effect on any track to quantize that track’s MIDI note data to just those notes, or as complex as inferring the scale and key from the notes played in order to generate missing notes as well.
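To make the simpler option concrete, here’s a rough sketch in TypeScript (nothing to do with Hapax’s firmware, and all the names are my own) of quantizing incoming notes to whatever pitch classes are currently sounding on a reference track:

```typescript
type PitchClass = number; // 0..11, where 0 = C

// Pitch classes of the notes currently held on the reference track.
const activePitchClasses = new Set<PitchClass>();

function noteOn(note: number): void {
  activePitchClasses.add(note % 12);
}

function noteOff(note: number): void {
  // A real implementation would reference-count notes sharing a pitch class.
  activePitchClasses.delete(note % 12);
}

// Snap a note to the nearest active pitch class.
// Ties resolve downward because negative offsets are tried first.
function quantize(note: number): number {
  if (activePitchClasses.size === 0) return note; // nothing to quantize against
  let best = note;
  let bestDist = Infinity;
  for (let offset = -6; offset <= 6; offset++) {
    const candidate = note + offset;
    if (activePitchClasses.has(((candidate % 12) + 12) % 12)) {
      const dist = Math.abs(offset);
      if (dist < bestDist) {
        best = candidate;
        bestDist = dist;
      }
    }
  }
  return best;
}

// Example: the reference track holds a C major triad; a D# on another track snaps to E.
noteOn(60); noteOn(64); noteOn(67); // C4, E4, G4
console.log(quantize(63)); // 64 (D# -> E, the nearest active pitch class)
```

That’s really all “quantize to whatever is currently playing” amounts to; the interesting part is the second, inferred-scale option.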
I’ve actually been playing with some ideas for how to intelligently infer a scale from a given set of inputs; for example, this rudimentary scale finder can do something along these lines. The next step would be to set up weights for each possible output scale and then build Web MIDI API support into a working prototype so I can try it out.
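To give an idea of what I mean by weighted scale inference, here’s a minimal sketch (my own assumption of how it could work, not the linked scale finder’s actual code): score each candidate root/scale pair by how many of the recently played pitch classes it contains, penalizing out-of-scale notes.

```typescript
// Candidate scale templates as semitone offsets from the root.
const SCALES: Record<string, number[]> = {
  major:         [0, 2, 4, 5, 7, 9, 11],
  naturalMinor:  [0, 2, 3, 5, 7, 8, 10],
  harmonicMinor: [0, 2, 3, 5, 7, 8, 11],
  dorian:        [0, 2, 3, 5, 7, 9, 10],
  mixolydian:    [0, 2, 4, 5, 7, 9, 10],
};

interface Guess { root: number; scale: string; score: number; }

// playedCounts[pc] = how often pitch class pc appeared in the recent window.
function inferScale(playedCounts: number[]): Guess {
  let best: Guess = { root: 0, scale: "major", score: -Infinity };
  for (const [name, intervals] of Object.entries(SCALES)) {
    for (let root = 0; root < 12; root++) {
      const members = new Set(intervals.map(i => (i + root) % 12));
      let score = 0;
      playedCounts.forEach((count, pc) => {
        score += members.has(pc) ? count : -2 * count; // weight misses heavily
      });
      if (score > best.score) best = { root, scale: name, score };
    }
  }
  return best;
}

// Example: the notes of A harmonic minor (A B C D E F G#), each played three times.
const counts: number[] = new Array(12).fill(0);
[9, 11, 0, 2, 4, 5, 8].forEach(pc => { counts[pc] = 3; });
console.log(inferScale(counts)); // { root: 9, scale: "harmonicMinor", score: 21 }
```

Of course, sets that share the same pitch classes (C major vs. A natural minor vs. D dorian) tie on a score like this, which is exactly where the per-scale weights would come in.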
It would be amazing if Hapax could do something like the above!
This would really take the Hapax to the next level in terms of composition. I don’t know of any device that does this (besides the breadboard experiment in my studio), but it’s something that is very common in music (maybe not dance music, and that’s the problem here). It sounds to me like a chicken-and-egg situation: tools don’t provide this capability because people don’t usually do it, and people don’t do it because tools don’t provide it (DAWs are also part of the “problem”).
The transpose track could very well be used for this: instead of only transposing, effects such as Scaler could be applied to it and affect all the other tracks that allow being transposed.
WAIT. I just discovered that you can write patterns in the TRSP track. It’s just awesome. Thanks so much Squarp, you rule.
I’ve built exactly the same system in Max for Live, and I know for a fact how powerful this is. I’m so glad I’m not alone seeing the potential.
Agreed. It’s currently impossible to automate the global pScale in one place if you also want to use pTrsp. You have to automate each individual track’s Scaler separately… You get issues with the clash between pScale and Scalers, issues with the pTrsp scale vs. the tracks it’s transposing, etc…
You also run into issues with the stick up/stick down setting, meaning you don’t get proper scale-degree translation, e.g. between major modes.
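A hypothetical illustration (not Hapax’s actual algorithm) of why chromatic stick-up/stick-down isn’t the same as translating by scale degree:

```typescript
const C_MAJOR  = [0, 2, 4, 5, 7, 9, 11]; // C D E F G A B
const C_LYDIAN = [0, 2, 4, 6, 7, 9, 11]; // C D E F# G A B

// Scale-degree translation: F is the 4th degree of C major, so in C lydian
// it should become that scale's 4th degree, i.e. F#.
const degreeOfF = C_MAJOR.indexOf(5);   // 3 (zero-based 4th degree)
console.log(C_LYDIAN[degreeOfF]);       // 6 -> F#

// Chromatic "stick down": F (5) is out of C lydian, so it snaps down to E (4),
// collapsing the 4th degree onto the 3rd instead of preserving the degree.
const stickDown = (pc: number, scale: number[]): number => {
  while (!scale.includes(((pc % 12) + 12) % 12)) pc--;
  return pc;
};
console.log(stickDown(5, C_LYDIAN));    // 4 -> E
```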
I mean, I just wish a sequencer manufacturer would allow everything to be done in scale degrees and scales (transposition, effects, sequencing, LFOs for notes, etc.)… but I am not holding my breath…
Well… I find that by using the “Match Chord” mode of pTRSP, you CAN automate the scale with patterns in Track 16.
Simply make chords that contain all the notes of a scale. You can then have for instance one pattern per scale, or write harmony changes within longer patterns.
I find it wonderful.
Thanks! I will have a play with this.
It does unfortunately mean you lose the scale-degree programming enabled by pScale.
My main issue is that I’m not quite fluent enough with real-time scale changes to work chromatically when jamming and keep myself within scales/modes while improvising.
Having the transpose match chords set the scale is a little assistance, sort of halfway. I’ll give it a go.