The EaganMatrix (inside the Osmose) and the Hydrasynth are both great, and each has its own approach. I think the Anyma synths are less beefy in terms of computational resources, but the synth engine offers more kinds of modules, more freedom in some ways. Not that it's always useful to have 16 LFOs or envelopes, or to be able to modulate the curve of a mapping, but it sometimes makes trying an idea easier during sound design. As we started with a wind instrument (the Sylphyo), we also take special care to make support for this kind of MIDI controller effortless.
The synth engine in the Anyma Phi runs on an STM32F4. The UI and MIDI routing run on a separate STM32F4. No RTOS: we find cooperative multitasking much easier to reason about, and easier to debug. So far, we don't have any latency/jitter issues with this approach, although it required writing some things (e.g. graphics) in a specific way.
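For readers unfamiliar with the RTOS-free approach described above, here is a minimal sketch of a cooperative "super-loop" scheduler of the kind common on bare-metal STM32 firmware. This is not Aodyo's code; the task names, tick source, and periods are hypothetical. The key property is that every task is a short non-blocking function that runs to completion, so there is no preemption and no context switching to reason about:

```c
#include <stdint.h>

// Hypothetical cooperative scheduler: each task is a short, non-blocking
// function called from a single main loop; nothing ever preempts anything.
typedef void (*task_fn)(void);

typedef struct {
    task_fn  fn;       // task body; must return quickly (cooperative)
    uint32_t period;   // run every `period` ticks
    uint32_t last_run; // tick of the last execution
} task_t;

// In real firmware this would be incremented by a timer interrupt.
static volatile uint32_t tick_ms;

static uint32_t audio_runs, ui_runs;
static void audio_task(void) { audio_runs++; } // e.g. fill the next DSP buffer
static void ui_task(void)    { ui_runs++; }    // e.g. draw one small slice of the screen

static task_t tasks[] = {
    { audio_task, 1,  0 }, // every tick: audio must never starve
    { ui_task,    16, 0 }, // UI redraw split into cheap incremental steps
};

static void scheduler_step(void) {
    for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
        if (tick_ms - tasks[i].last_run >= tasks[i].period) {
            tasks[i].last_run = tick_ms;
            tasks[i].fn(); // runs to completion; no context switch
        }
    }
}
```

The "writing some things in a specific way" constraint follows directly from this structure: a slow task (like a full screen redraw) would delay everything else, so it has to be chopped into small incremental steps that each fit inside one loop iteration.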
The Omega runs on a mix of Cortex-A7 and STM32.
I have a pure software background but I came to appreciate the stability, predictability and simplicity of embedded development: you have a single runtime environment to master and you can use it fully, a Makefile is enough, and you have to be so careful with third-party code that you generally know how everything works from end to end. The really annoying downside is the total amount of hair lost chasing bugs where it's hard to know whether the hardware or the software is at fault.
In contrast, programming a cross-platform GUI is sometimes hell, and a VST has to deal with many more different configurations than a hardware synth; you're never sure which assumptions you can make. The first version of Anyma V crashed for many people, but we never reproduced it on the dozen machines we tested it on.
Interesting perspective. I can definitely see how you have the immediacy edge over the pain of the EaganMatrix, and having different engines besides the core wavetable-y approach of the Hydra is a win, IMHO - though, yeah, both fit different needs.
I'm mostly an embedded guy (usually much lower-power ST parts), so it's neat to hear how you approached it. Keeping the chips separate so the audio can't underrun as easily when the UI needs to react is a really nice design!
I see a lot of your engine is modified from Mutable Instruments, but you do have a good selection of original sound sources as well. What sets yours apart? Did you have a strong background in DSP before Aodyo?
I did a tiny bit of DSP and I've been exposed to the HCI/NIME community in the past, but that's it. Many modules in the Anyma are just reasonable implementations of clever formulae I didn't design but studied from papers :). As for the Mutable stuff, there was a lot of optimization work and many tradeoffs to make. We are lucky to have a sound designer with a good ear.
That said, we've been working for a while on our own waveguide models (Windsyo and others), and we have found some tricks I've never seen elsewhere. There's a lot to explore, especially when looking for "hybrid" acoustic-electronic sounds.
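To give a concrete idea of what a waveguide model is (the specific Windsyo tricks mentioned above are not public, so this is only the textbook starting point), here is the classic Karplus-Strong plucked string, the simplest waveguide: a delay line models wave propagation along the string, and a two-point average models frequency-dependent losses:

```c
#include <stdlib.h>

#define SR 48000 // sample rate, Hz

// One sample of the Karplus-Strong loop. `delay` holds the travelling
// wave; `len` sets the pitch (SR / len Hz, roughly); `pos` is the
// read/write position that circulates through the delay line.
static float ks_sample(float *delay, int len, int *pos) {
    int next = (*pos + 1) % len;
    // Lowpass the loop: average adjacent samples, with a slight decay,
    // so high partials die faster than low ones, like a real string.
    float out = 0.5f * (delay[*pos] + delay[next]) * 0.996f;
    delay[*pos] = out; // feed the filtered wave back into the string
    *pos = next;
    return out;
}

// A "pluck" is just filling the delay line with noise, then reading:
//   int len = SR / 440, pos = 0;       // ~440 Hz string
//   for (int i = 0; i < len; i++)
//       delay[i] = (float)rand() / RAND_MAX - 0.5f;
//   for (int i = 0; i < SR; i++) out[i] = ks_sample(delay, len, &pos);
```

The "hybrid" territory mentioned above typically comes from replacing parts of such a loop (the excitation, the loss filter, the feedback path) with electronic sources and processors instead of physically plausible ones.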
For sure. I really dig those hybrid sounds too. I'm particularly fascinated with sounds that are more electro-mechanical (see Korg's Phase-8, or Rhythmic Robot's "Spark Gap"), so I'm glad to see more people trying to combine physical modeling and synthesis in smoother ways than just layering them.
Oh my. So, how much processing load are you typically at now?
You know your backers are, from what can be gleaned from the KS comms, (to put it mildly) not too convinced Aodyo will provide more than enough juice (!=JUCE) this time for chaining up enough modules while guaranteeing 16-note poly? And this with a multi-timbral design?
(you might refer to your end of 2023 update, regarding the 4+1 core concept which had to be changed creating further delay, and so on)
We had to switch to a more powerful architecture and chip, but the voices are still dispatched across several processors. It'll be enough to sustain 16-note polyphony at 125% load, with extra power in reserve because we don't use the second core yet.