Hapax Audio-Sync best PPQN?

I run a hybrid setup. The core is a Hapax and 2 Macs.

So, Mac1 sends an audio-sync signal to my Hapax, and also to Mac2.

It’s set up in a basic way: I simply have an audio clip on a track, routed out of my interface into my Hapax. I do it like this because I can adjust all the different audio-sync signals to get everything bang on (I have separate signals for Mac2, the Deluge, and other devices).
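
(If anyone wants to build a similar clip from scratch rather than recording a pulse, something like this little Python sketch would render an equivalent 24 PPQN pulse train. It’s just an illustration of the idea - the tempo, pulse width and length are arbitrary example values, and it’s not exactly how mine was made.)

```python
import wave
import numpy as np

SR = 48000            # sample rate
BPM = 120.0           # example tempo
PPQN = 24             # pulses per quarter note
BARS = 8              # clip length in 4/4 bars
PULSE_MS = 2.0        # width of each pulse

samples_per_pulse = SR * 60.0 / (BPM * PPQN)             # spacing between pulses, in samples
total_samples = int(round(samples_per_pulse * PPQN * 4 * BARS))
pulse_len = int(SR * PULSE_MS / 1000.0)

audio = np.zeros(total_samples, dtype=np.float32)
n = 0
while int(round(n * samples_per_pulse)) < total_samples:
    start = int(round(n * samples_per_pulse))            # keep each leading edge sample-accurate
    audio[start:start + pulse_len] = 1.0                 # unipolar positive pulse
    n += 1

with wave.open("sync_24ppqn_120bpm.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                                    # 16-bit
    f.setframerate(SR)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```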

It works well. However, I notice that sometimes, after a while (maybe 20-30 mins), the Hapax isn’t quite in time any more.

Anyone else notice this?

Anyone know the best PPQN settings? I just assumed highest is best… but maybe not?

What is generating the audio sync signal on Mac1?

I don’t think the PPQN is that relevant if your tempo doesn’t change constantly, but I could be wrong.

Appreciate the response!

It’s simply an audio file, on an audio track.

That way I can have multiple tracks feeding multiple devices, and move the audio about to get everything playing in time.

Also because I need to send the audio before the beat to ensure the Hapax plays in time (because of the inherent latency in the system etc).

I do change tempos sometimes but this is happening when I leave the tempo alone!

I’m gonna need some more info.

How was the audio track generated?
Is the DAW doing anything else besides simple audio playback (and record)?
When the Hapax desyncs, what does it desync against? What else is consuming the exact same clock signal without having issues?

Normally the way you’d (well, I would) use clock sync with a DAW is by using a VST or AU to generate the clock signal, derived from the DAW’s internal audio clock (and thus making it sample accurate).
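
(Conceptually, all such a plugin does is work out which sample inside each audio buffer a clock tick lands on, from the host’s tempo and transport position. A rough sketch of the idea in Python - not any particular plugin’s code, and the numbers are just example defaults:)

```python
import math

def render_clock_block(block_start, block_len, sr=48000, bpm=120.0, ppqn=24):
    # Fill one audio buffer with clock pulses, sample-accurately.
    # Illustrative only - a real VST/AU would get tempo/transport from the host.
    spt = sr * 60.0 / (bpm * ppqn)            # samples per tick
    buf = [0.0] * block_len
    tick = math.ceil(block_start / spt)       # first tick at or after this block's start
    pos = tick * spt
    while pos < block_start + block_len:
        buf[int(pos) - block_start] = 1.0     # leading edge lands on the exact sample
        pos += spt
    return buf

# e.g. a 512-sample buffer starting at sample 767_500 still gets its pulse in the right place
print(render_clock_block(767_500, 512).index(1.0))
```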

Assembled by hand (in Ableton) from a recorded sync signal pulse.

Erm… guilty as charged :smiley: It’s doing a lot more. It’s basically a huge project, a kind of supercharged performance mixer, with over 500 tracks running a lot of crazy stuff.

I do get a few clicks when I push everything too hard. Maybe it’s that.

Couple of reasons I can’t do it that way:

One, I need to start Ableton 4 bars ahead of the Hapax, to ensure stuff like Infiltrator syncs up correctly with the Hapax.

Second, I find I need the audio-sync to come in slightly before the first beat, so the Hapax can send its MIDI and the resulting audio hits perfectly on the beat.

I tried Ableton’s CV Tools device, but it can’t do negative latency (since it can’t know when you’re going to push play, obviously - although they could get around this by programming in a little delay after you press play), so I can’t get the Hapax in time that way.

I probably just need a better computer :smiley:

Appreciate the help, thanks!

depending on your needs i may have something worth looking at (re computer) check my profile out.

500 tracks is a lot! why so many? are you doing foley stuff? or is it all for music? maybe you have several songs in one project?

i trigger a lot of plugins from hapax and i find the compensation page to be an amazing feature! i also use a square wave to clock the hapax, in reaper. im honestly getting great results! but of course im not running 500 tracks! wow thats a lot. what computer do you use?

I’m running two Mac Mini M1’s, Ableton on each one.

Mac1 loads individual projects (ie different songs), while Mac2 always runs the same project - it’s a huge project, basically a performance mixer with a ton of crazy stuff it can do. It’s all controlled by a lot of controllers, which run into a big Drambo project (over 4,000 modules) to make everything happen.

It’s not easy to explain. Here’s a link to a few pics and a brief explanation if you want:

Hey, I use a square wave in an unusual way too! Send it to CCs and back into Launchpads to trick the Launchpad into displaying a third colour (red + green very fast makes it yellow, and disables the pad).


I really think the cause of your problem is the pre-rendered audio sync file. I don’t know about CV Tools, but there must be another way to compensate for the latency? You really need to tie the sync signal to the audio engine clock for sample-accurate sync; anything else will eventually desync.

Reasons for thinking this?

I’m no expert on this, but isn’t audio output from the interface sample accurate too?

Also, that’s how Squarp suggest doing it in the Hapax manual.

Any technical info on why this would be?

Perhaps easier to explain with an example:
If you put 2 computers next to each other, both set to 120 bpm, they will, eventually, desync.

I wouldn’t know the exact reasons; I’d suspect a mix of component tolerance and rounding in floating-point maths. Probably mostly the latter.
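
To put a very rough number on it (assuming, say, a 50 ppm difference in clock tolerance between the two machines - purely an illustrative figure):

```python
# Back-of-the-envelope drift from clock tolerance alone (the ppm figure is an assumption)
ppm = 50                                      # hypothetical tolerance difference between two machines
minutes = 30
drift_ms = minutes * 60 * 1000 * ppm / 1_000_000
print(drift_ms)                               # 90.0 ms after half an hour - easily audible
```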

In this case, you seem to be pointing the finger at the Hapax for being out of sync, but it sounds like the Hapax is the only consumer of that specific sync signal. So it’s not easy to verify that the issue is the Hapax. Is there any way you can sync another device to the exact same signal?

Yeah exactly right I think.

Normally I solve problems by careful elimination of each variable until only one possibility remains.

Not easy to do in this case… but it is possible, I’ll have to set aside a day to do it.

What I’ll do is use an actual physical jack splitter to split one audio track to both the Hapax and something else (probably the Deluge).

Then send MIDI from both back into a Mac to trigger a kick drum in Sampler/Simpler.

Record that and leave it running. Check back after 20 mins etc.

Only thing is, it’s not the same as when I’m pushing the CPU hard etc.

I wonder if stuff like Ableton’s CV Tools (or other clock-generating plugins) gets top priority from the software, so that even when the CPU is being pushed hard their signals never click or drop out.

I guess if I run the test and both the Hapax and the Deluge are perfectly in sync forever then it’ll point to buffer drop-outs/clicks being the culprit.

One thing’s for certain… I’m not going back to MIDI clock. Far more jitter, and I’ve tested that extensively. It just makes sense having dedicated cables just for clock, at a much higher resolution, more accurate than MIDI can ever be!
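
Some rough numbers on why (purely back-of-the-envelope, not measurements):

```python
# Rough numbers, not measurements: timing granularity of MIDI clock vs an audio-rate pulse
midi_byte_us = 10 / 31250 * 1e6      # one MIDI byte = 10 bits at 31.25 kbaud = 320 us on the wire
sample_us = 1 / 48000 * 1e6          # one sample at 48 kHz = ~20.8 us
midi_pulses_per_sec = 24 * 120 / 60  # MIDI clock is fixed at 24 PPQN -> 48 pulses/s at 120 BPM
print(midi_byte_us, sample_us, midi_pulses_per_sec)
# and clock bytes can queue behind note data on the same cable, which is where jitter creeps in
```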

Appreciate the help, btw, thanks!

Splitting the sync signal passively might not be a good idea. Can’t you route it in software to another soundcard output?

Also, CPU shouldn’t be an issue; just increase the buffer (and latency :frowning: ) if the PC can’t handle it.

Yeah, I went down that rabbit hole years ago, and ended up making my own sync software at one point. But I stopped using it because I concluded:

  • a lot of problems I had were actually related to MIDI-to-DIN-sync conversion pitfalls
  • I mostly use hardware, but when I do use a DAW it’s Logic Pro exclusively. The jitter there is lower than any hardware I tested with at the time (sounds crazy, right? My hardware sequencer had more jitter), and also better than Cubase on Atari. But ymmv. Measured using Pete Kvitek’s MidiGAL tester, iirc.

As for high note resolutions, those only make sense when making music that requires such resolution imo

to ensure good performance, i dedicate several of my cores (on my pc, using process lasso) to my daw only and set to realtime priority and set buffer sizes for no dropouts. also my sync signal is a single cycle pulse triggered at 64th notes, quantized to the grid, dedicated audio out to hapax. i cant hear any jitter or desync but im not using a microscope. works for me. but im not doing it how youre doing it.

I’ve done it before, it’s fine as long as there’s sufficient level.

Of course I can route it, but the split-it-passively-test is to absolutely ensure the two signals are exactly identical, to eliminate system factors etc.

Can you explain this more? Not sure this applies to my system, as everything clock-related has nothing to do with MIDI (well, almost true - I do use MIDI clock to sync delays on some devices if it’s the only way, and I don’t know a way of getting an iPad to sync to audio - if you know, let me know!)

I’m on 2 Macs. Increasing the buffer size is unfortunately not an option.

The reason is, I’m not producing music in the usual way at all, everything happens live in the moment, so I need to run the lowest possible buffer for usable real-time latency.

I’m on Macs, so things are generally very optimized and there’s no need for much low-level system tweaking.

Pretty much how I do it too, except 96th notes for 24 PPQN (because I assumed higher is better - not sure now).
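
(If it helps anyone following along, the relationship is just note division = 4 × PPQN, so 96th notes correspond to 24 PPQN and 64th notes to 16 PPQN. A quick sanity check of the numbers:)

```python
# PPQN <-> note division and pulse spacing, just to sanity-check the numbers above
def pulse_ms(bpm, ppqn):
    return 60000.0 / (bpm * ppqn)    # milliseconds between pulses

print(pulse_ms(120, 24))   # 24 PPQN = 96th-note pulses: ~20.8 ms apart at 120 BPM
print(pulse_ms(120, 16))   # 16 PPQN = 64th-note pulses: 31.25 ms apart at 120 BPM
# note division = 4 x PPQN, since a whole note contains 4 x PPQN pulses
```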

I get jitter for sure, but this is definitely the Hapax jittering the MIDI notes on its output.

It’s fairly low, definitely usable. The Deluge is a bit tighter, but then again the Deluge is insanely tight in all areas of timing. Just not as good a sequencer overall!

I don’t think it applies, so slightly off topic and quite a long story :slight_smile:
But to summarize, it has to do with how modern DAWs start the MIDI clock (and send the start message), and how analog clock devices deal with it. Plain 1:1 conversion of these messages can cause desync by 1 clock pulse right from the start, especially with older DIN-sync devices. With some “clever” conversion logic to handle all the different scenarios, however, this problem can be mitigated 100%. I think a lot of people are investing in fancy clock solutions that allow shifting stuff back and forth, while all they really need is a proper DIN-sync converter :wink:
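
Very roughly, the kind of logic I mean is just making sure the RUN line is already high before the first clock edge gets through. A toy sketch (not my actual software, and the class/function names are made up - real converters handle more cases than this):

```python
# Toy sketch of MIDI -> DIN-sync (Sync24) conversion ordering (not my actual converter)
# DIN-sync devices advance on CLOCK edges while RUN is high, so RUN has to be asserted
# before the first clock edge, otherwise the device can end up a pulse behind.

class DinSyncOut:
    def __init__(self):
        self.run = False

    def set_run(self, state: bool):
        self.run = state              # drive the RUN line of the DIN connector here

    def clock_pulse(self):
        if self.run:
            print("CLOCK edge")       # drive a pulse on the CLOCK line here

def on_midi(msg: str, out: DinSyncOut):
    if msg == "start":
        out.set_run(True)             # RUN goes high first; the next MIDI clock is the downbeat
    elif msg == "stop":
        out.set_run(False)
    elif msg == "clock":
        out.clock_pulse()             # clocks arriving while stopped are ignored, not forwarded

out = DinSyncOut()
for m in ["clock", "clock", "start", "clock", "clock", "stop"]:
    on_midi(m, out)                   # only the two clocks after "start" produce DIN pulses
```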

macs dont dedicate cores to specific software that is running. when i do that, it just reduces the need for larger buffer sizes. i can get very low latency. and im mostly just sending midi to plugins anyway, until i record. people persist with the idea that macs are superior but the only truth to it was way back in the 90s when macs were more focused on multimedia and usb was very new. about the only thing macs have over pcs nowadays is theres not many viruses written for them. my pcs are super multimedia capable! i have no issues! maybe its because im used to using them the right way.

oh and that they have some exclusive software but so does pc. a lot more in fact. also i can run ancient software from the 90s all the way to today!


What interface are you using for the sync signal? The only way I imagine it would drift out of sync would be if a pulse got missed or skipped or blurred together or something. If reliability and tight synchronization of CV signals are needed, I think DC-coupled outputs are preferred; not all interfaces have those.

Regarding the audio files, perhaps one is missing a little bit at the end in your arrangement, or it’s overlapping/crossfading between the clips in a weird way if you’re duplicating/looping them in your DAW. Or it’s using a timestretching algo that is causing artifacts (make sure it’s in ‘Repitch’ mode instead of a stretch algo).

I see above that you use Ableton Live; I would use the CV Clock device instead of an audio file. Any reason you’re not using that?

Ha ha, definitely not getting sucked into that one :smiley:

I’ll just say… been using PCs since the old 8088 days, building my own for a long time (last one was about 8 years ago, huge rig, cost over £6,000 in parts!), only been on Macs for around 18 months now.

There are some things I just cannot believe Macs won’t do (or do easily), some things that frustrate the f*ck out of me… but I definitely prefer the experience overall. Everyone’s different though… and music making is music making no matter the equipment you use, so who cares really, right!!!


Appreciate the thoughts, thanks!

A Cymatics Utrack on Mac1, a MOTU Ultralite MK3 on Mac2.

The MOTU is DC-Coupled. Not sure about the Cymatics, can’t find that info.

Definitely not these. I’m on Repitch and I’ve checked all the clips carefully.

I’m thinking it’s because of buffer overruns caused by high CPU. Gonna be testing it extensively soon hopefully!

Because I need negative delay compensation for everything to hit on the beat, and I also need to start everything 4 bars early to make sure my Infiltrator patterns loop in the right places.

So I need the Hapax to start a tiny bit before the 1 on the fourth bar.

Also, I need separate clock signals for the Hapax, Deluge, and Mac2; it’s easier to move them all around on the grid and get everything lined up when the audio comes back in.

Final reason… I don’t record often (my rig is for live playing) but when I do I want everything recorded perfectly tight. After extensive testing, I learned the Hapax has a certain amount of latency (like every device), and this latency changes according to the BPM.

There’s a fairly linear relationship; you can see it by doing many recordings and then holding a clear ruler up to the monitor - it’s a straight line.

So I have many tracks for audio-sync, a track for every 10 BPM, from 70 BPM up to 200 BPM. Then when recording I’ll switch to the closest BPM that helps the Hapax line up even more perfectly (the different BPM sync-tracks are lined up very slightly differently).
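
If it helps to picture it, the idea is basically a straight-line fit of offset vs BPM, then pick the nearest 10-BPM track - something like this sketch (the measurement numbers below are made up; mine just live as separate clips in the Ableton project):

```python
import numpy as np

# Made-up example measurements of how early the sync pulse needs to arrive vs BPM
measured = {70: 6.1, 100: 5.2, 130: 4.4, 160: 3.5, 190: 2.7}    # bpm: offset in ms (hypothetical)

bpms = np.array(sorted(measured), dtype=float)
offsets = np.array([measured[b] for b in sorted(measured)])
slope, intercept = np.polyfit(bpms, offsets, 1)                 # the "straight line on the monitor"

def nearest_sync_track(bpm):
    track_bpm = int(round(bpm / 10.0) * 10)                     # tracks exist every 10 BPM, 70-200
    track_bpm = min(max(track_bpm, 70), 200)
    return track_bpm, slope * track_bpm + intercept             # which clip to use, and its offset

print(nearest_sync_track(128))   # -> use the 130 BPM sync track, offset ~4.4 ms
```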

When I’m not recording (ie most of the time) I just use the 120BPM sync-track, cos I don’t care that much and it’s actually pretty tight anyway.

Hope that makes sense!

The most important part of the sync pulse is the leading positive edge.