actually midi can’t do anything ‘simultaneously’ - it’s a serial protocol
so in terms of events, it’s limited by throughput speed, though in practice this is only an issue for DIN - USB is fast enough (even if jitter is an issue)… the hermod is using CV, so that’s only limited by the DAC used.
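to put rough numbers on the DIN case (these are just the standard midi spec figures, not anything squarp-specific): DIN runs at 31250 baud with 8N1 framing, so 10 bits on the wire per byte -

```python
# back-of-envelope DIN midi throughput
BAUD = 31250
BITS_PER_BYTE = 10                      # 8 data bits + start + stop bit
bytes_per_sec = BAUD / BITS_PER_BYTE    # 3125 bytes/s
note_on_ms = 3 * 1000 / bytes_per_sec   # a note-on is 3 bytes -> 0.96 ms
msgs_per_sec = bytes_per_sec / 3        # ~1042 three-byte messages/s
print(bytes_per_sec, round(note_on_ms, 2), round(msgs_per_sec))
```

so a ‘chord’ on DIN is really notes spaced ~1ms apart - usually inaudible, but it’s why nothing is ever truly simultaneous.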
but yeah, the 7000 event limit is a memory issue.
there are a number of solutions to this:
- new hardware = more memory
- improved internal representation, i.e. make the events take up less memory. this also includes things like quantisation and interpolation
- streaming, i.e. keeping less in memory
hardware is easy, and the realistic way to get a lot more events, so it’s the most likely to happen
internal representation: we do not know the software code, so it’s impossible to know what improvements can be made here - there are bound to be some ‘quick wins’ (like quantisation/interpolation), some that are really hard and possibly not worth the (time) investment… and some in between.
Note: often a reduced memory footprint leads to higher cpu consumption, i.e. it becomes a trade-off, so this might also limit squarp
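as an illustration of the quantise/interpolate idea (a sketch only - the function name, tolerance and event format are mine, not squarp’s): a dense pitchbend recording can be thinned to keyframes that playback reconstructs by linear interpolation, trading a little cpu at playback for a much smaller event count - exactly the trade-off mentioned above.

```python
def thin_bend(events, tol=64):
    """greedily drop pitchbend points that linear interpolation can
    reconstruct within `tol` (14-bit pitchbend units).
    events: list of (tick, value), ticks strictly increasing."""
    if len(events) <= 2:
        return list(events)
    kept = [events[0]]
    for i in range(1, len(events) - 1):
        t0, v0 = kept[-1]
        t1, v1 = events[i + 1]
        t, v = events[i]
        # value playback would reconstruct by interpolating neighbours
        predicted = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        if abs(v - predicted) > tol:
            kept.append(events[i])   # keep only the ‘corners’
    kept.append(events[-1])
    return kept

# a perfectly linear sweep collapses to just its two endpoints
ramp = [(t, t * 10) for t in range(0, 100, 5)]
print(len(thin_bend(ramp)))   # 2
```

(this greedy pass is simplistic - a real implementation would likely quantise values and use a smarter error bound - but it shows why recorded cc/bend data is where the ‘quick wins’ live)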
streaming, i.e. taking events from the sd card - theoretically this can happen… the Axoloti, which uses a similar processor, can stream audio, which is much more data than midi.
this takes cpu cycles, though, and the code to do it looks very different to code that assumes everything is in memory, i.e. it’s quite likely the effort to do this would be prohibitive.
a more practical approach would be to ‘chunk’ the data, i.e. load it in sections… depending on how you split it up, this becomes more or less dynamic. the natural split would appear to be the project, because it’s self-contained.
so the question is, is this practical? without seeing the code it’s difficult to say… but for sure it would not be trivial…
the idea is that two projects could be held in memory simultaneously, i.e. a second project could be loaded in the background whilst the first is playing, then at a point in time you ‘flip’ the active project, and the now-inactive project can be swapped out for a new one.
this ‘approach’ is actually not that uncommon, e.g. we do it in graphics rendering (double buffering) all the time.
there are some limitations:
- projects can only be half the size (more likely ~3000 events each), if they are to be ‘flipped’
- project flipping can only happen at a certain rate (time to load + initialisation etc)
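a toy sketch of the flip (everything here - class name, the thread, the load function - is my invention to show the shape of the idea, not how the pyramid would actually do it):

```python
import threading

class ProjectFlipper:
    """double-buffer sketch: play one project while another loads in
    the background, then swap which one is active."""
    def __init__(self, load_fn):
        self.load_fn = load_fn    # e.g. reads a project off the sd card
        self.active = None        # the project currently playing
        self.standby = None       # the one being prepared
        self._loader = None

    def load_in_background(self, name):
        # slow sd-card read happens off the playback path
        def work():
            self.standby = self.load_fn(name)
        self._loader = threading.Thread(target=work)
        self._loader.start()

    def flip(self):
        self._loader.join()       # can only flip once the load is done
        self.active, self.standby = self.standby, None
        return self.active
```

on real firmware this wouldn’t be a thread - more likely the idle loop or dma feeding the standby buffer - but the constraint is the same: you can’t flip faster than you can load + initialise.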
is it simple? no, there are all sorts of assumptions in code, and this would break many of them… so depending on how the code is written, this could range from a few weeks’ work to virtually impossible.
(unfortunately, this is one of those things that’s not that hard if you have it in mind when you write the code… but it’s hard to retrofit)
summary… i’d say it’s perhaps possible we might get some of the easier optimisations (of internal representation), which will help in certain use-cases - but apart from that, it’s likely to come from a hardware revision.
personally, i don’t mind, as I knew the limitation before I bought the pyramid.
the only area this affects me is pitchbend and pressure recording - if they quantised and interpolated that, then i’d be 100% happy