Hacker News | sunray2's comments

Putting aside the beauty of both the synth and its purpose, what I'm curious about is the learning process in making this. The running theme is that you picked up several new skills 'from cold'. That in itself is impressive enough. How did you approach learning:

- the necessary basic electronics;

- PCB design;

- 3D CAD;

- your particular iterative process,

among other things? I get the impression you built things incrementally, observed what happened, and learnt via that feedback loop. Maybe others could share their own feedback loops, too.


I've been reflecting on this a bit. I found it very useful to have a well-defined project and to focus the learning time on things necessary for completing it. Having a background in coding helped because you develop a knack for isolating parts of a system and working on small pieces that eventually fit together.

I therefore focused initially on simply getting readings from a single potentiometer; if I could do that, I felt pretty confident I could read from four of them. If I could generate a MIDI message, I was pretty confident I could send it to something that could read it, and so on.
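As a hedged illustration of that first step, here's a minimal sketch (not the author's actual code; the 10-bit ADC range, CC number, and function name are assumptions for illustration) of turning a potentiometer reading into a MIDI Control Change message:

```python
def pot_to_midi_cc(adc_value: int, cc_number: int, channel: int = 0) -> bytes:
    """Map a 10-bit ADC reading (0-1023) to a 3-byte MIDI Control Change message."""
    if not 0 <= adc_value <= 1023:
        raise ValueError("expected a 10-bit reading")
    value = adc_value >> 3  # scale 10-bit (0-1023) down to MIDI's 7-bit (0-127)
    status = 0xB0 | (channel & 0x0F)  # 0xB0 = Control Change; low nibble = channel
    return bytes([status, cc_number & 0x7F, value])

# A full-scale pot reading becomes CC value 127 on channel 0:
msg = pot_to_midi_cc(1023, cc_number=7)
```

Once a message like this shows up correctly on a MIDI monitor, reading four pots is just a loop over four ADC pins.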

When I started on the PCB design I had a simple circuit already so it was a case of translating that onto a board.

I didn't get too deep into any of the various parts but I found that it gave me a birds-eye view of the whole process and I now feel confident in isolating parts of them and 'zooming in' to them and refining them, building on the foundation I've developed.


Somewhat related: there's a relatively big push for optical interconnects and integrated optics in quantum computing. Maybe this article offers insight into what may happen there in future.

With quantum computing, one is forced to use lasers. Basically, we can't transmit quantum information with the classical light from LEDs (hand-wavingly: LEDs emit a distribution of possible photon numbers, not single photons, so you lose control at the quantum level). Moreover, we often also need the narrow linewidth of lasers so that we can interact with atoms in exactly the way we intend, that is, without exciting unwanted atomic energy levels. So you see people in trapped-ion quantum computing tripping over themselves to integrate laser optics, through fancy engineering that I don't fully understand, like diffraction gratings within the chip that steer light onto the ions. It's an absolutely crucial challenge to overcome if you want to make trapped-ion quantum computers with more than several tens of ions.

Networking multiple computers via said optical interconnects is an alternative, and also similarly difficult.

What insight do I glean from this IEEE article, then? I believe that if this approach with LEDs works out for this use case, then I'd see it as a partial admission of failure for laser-integrated optics at scale. It is, after all, the claim in the article that integrating lasers is too difficult. And then I'd expect quantum computing to struggle severely to overcome this problem. It's still research at this stage, so let's see if Nature's cards fall fortuitously.


Trapped-ion and neutral-atom QC require lasers because the light signal needs to be coherent; that's the main feature of a classical laser, really. The explanation via photon number doesn't really cut it, because even a perfect laser does not have a definite photon number: coherent states are inherently uncertain in both photon number and phase. But LEDs are even worse, because their light is truly incoherent. It's not even a good quantum state; it's a statistical mixture of incoherent photons that you can't really use for any quantum control.
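To make the photon-number point concrete: a coherent state |α⟩ (the idealised laser output) has Poissonian photon-number statistics, so even a perfect laser never emits a definite number of photons:

```latex
P(n) = \left|\langle n|\alpha\rangle\right|^{2}
     = e^{-|\alpha|^{2}}\,\frac{|\alpha|^{2n}}{n!},
\qquad
\langle n\rangle = |\alpha|^{2},
\quad
\Delta n = |\alpha|.
```

The relative spread Δn/⟨n⟩ = 1/|α| shrinks for bright beams but never vanishes; what the laser does guarantee, and the LED doesn't, is a well-defined phase.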

But even more than that, this seems to me like a purely on-chip solution. For trapped ions and neutral atoms you really need to translate to free-space optics at some point.


Indeed, it is nuanced, as you point out. For example, you can't just attenuate a laser and use that as a single-photon source (instead you'd still have a coherent state). To realise a true single-photon source you need an additional (quantum) process, like controlled stimulated emission from single atoms, or driving a nonlinear crystal to generate photon pairs (that's spontaneous parametric down-conversion, I think). And that's where the coherence properties of the laser are essential.

As for fully integrated optics, that's where quantum computers eventually want to be, and there are no known physical limitations in the way. But perhaps it's too early to say whether we would absolutely require free-space optics because some optical operation turns out to be impossible any other way.


Quantum computing is still a technology of the future. When we are still talking about 12 qubits as a breakthrough, there’s a long way to go. Optical interconnects are the least of quantum computing’s problems.

However, it’s not correct to say lasers are unreliable. That claim is fundamentally false and not supported by field data from today’s pluggable modules: tens of millions of lasers are deployed in data centers today in pluggable modules.

It’s also useful to remember that an LED is essentially the gain region of a laser without the reflectors. When lasers fail in the field, they fail for the same reasons an LED will fail: moisture or contamination penetrating the semiconductor material.

An LED is not useful for quantum computing. To create a Bell pair (2 qubits) you need a coherent light source to create correlated photons. The photons produced by an incoherent light source like an LED are fundamentally uncorrelated.
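For reference, a Bell pair is a maximally entangled two-qubit state, for example (with the photons' polarizations as the qubit basis):

```latex
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
```

Measuring either qubit alone gives a random result, but the two results are perfectly correlated, which is exactly the resource an incoherent source can't supply.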


Actually, optical interconnects are the biggest of (photonic) quantum computing's problems. If we had good enough optical interconnects (i.e. with low enough optical loss) we would already have a fault-tolerant quantum computer. See https://www.nature.com/articles/s41586-024-08406-9 (also note that Aurora produces 12 physical qubit modes at each clock cycle).


TSMC's approach here sounds sensible but I don't think it speaks much to QC. It is a pretty different problem domain. The trapped-ion QCs can use much more expensive / less practical lasers and optics and still be useful.


Don't be discouraged! It might even be that 2D is better than 3D in this case: it's all about how it sounds, right? And if a 2D simulation can be less expensive than a 3D while sounding just as good or better, it works in your favour!

I think that's the real key to this stuff: what makes these things actually sound good?


Thank you for this, it looks very cool!

Reminds me of Korg's Berlin branch with their Phase8 instrument: https://korg.berlin/ . Life imitates art imitates life :)

I highly support and encourage this. Is there a way I could contribute to Anukari at all (I'm a physicist by day)? These kinds of advancements are the stuff I live for! However, I should stay rooted in what's possible or helpful: I'm not sure if this is open-source, for example. As long as I can help, I'm game.


For the foreseeable future I'm just going to be working on stability/performance, but eventually I will get back to adding more cool physics stuff. It's not open-source, but certainly I'd enjoy talking to a real physicist (I'm something a couple notches below armchair-level). Hit me up at evan@anukari.com sometime if you like!


Thanks, will hit you up later!

I was using the demo just now: the sounds you get out of this are actually better than I expected! And I see what you meant in the videos about intuitive editing, rather than abstract.

I was often hitting 100% CPU with some presets, though, with the sound glitching accordingly, so I could only experiment in part. I'm on an M1 Pro. Initially I set a 128-sample buffer size in Ableton, but most presets were glitching; I then set it to 2048 just to check for improvement, which helped, though it does seem a bit high. Maybe my audio settings are incorrect? I can give more info later if it helps you.


Yeah, performance at low buffer sizes is a big challenge. Generally I recommend 512 or higher, which I know is not great, but right now it's the most practical thing. The issue is that the computation is all done on the GPU, and there's a round-trip latency that has to be amortized. One day I'd like to convince Apple to work on the kernel scheduling latency...
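To put rough numbers on the buffer-size trade-off (a back-of-envelope sketch; 48 kHz is an assumed sample rate, not necessarily what Ableton was set to):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 48_000) -> float:
    """Duration of one audio buffer in milliseconds: the deadline the
    GPU round-trip (dispatch + compute + readback) must fit inside."""
    return 1000.0 * buffer_samples / sample_rate_hz

for n in (128, 512, 2048):
    print(n, round(buffer_latency_ms(n), 1))
# 128 samples is roughly a 2.7 ms window, 512 about 10.7 ms, 2048 about 42.7 ms
```

So each doubling of the buffer buys the GPU round-trip that much more headroom, which is why glitches disappear at larger sizes.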


Very interesting!

What are the fundamental physical limits here? Namely, timing precision, latency, and jitter? How fast could PyXL bytecode react to an input?

For info, there is ARTIQ, a vaguely similar thing that effectively executes Python code with 'embedded-level' performance:

https://m-labs.hk/experiment-control/artiq/

ARTIQ is quite common in quantum physics labs. For that you need very precise and deterministic timing. Imagine you're interfering two photons as they reach a piece of glass so that they can interact. It doesn't get faster than photons! That typically means nanosecond timing and sub-microsecond latency.

How ARTIQ does it is also interesting. The Python code is separate from the FPGA, which actually executes the logic you want. In a hand-wavy way, you're then 'as fast' as the FPGA. How, though? The catch is that you have to get the Python code and the FPGA gateware talking to each other, and that's technically difficult and has many gotchas. In comparison, although PyXL isn't as performant, if it makes things simpler for the user, that's a huge win for everyone.

Congrats once again!


(Minor edit: for observing experimental signatures of photon interference, nanosecond precision is the minimum to see anything when synchronising your experimental bits and pieces, but to see a useful signal you need precision at the level of tens of picoseconds! So, beyond what's immediately possible here.)


Did you work at Rigetti?


No, didn't work there.

I looked up any connection to ARTIQ they may have: it seems they do full-stack QC, as they have their own quantum compiler [1]. But I'm not really sure what they're doing currently.

[1] https://github.com/quil-lang/quilc


So I'll take a layman's view here, since I've only cursory experience with the big-data tasks this software seems to be made for. Or maybe the pitch is different and it went over my head.

It loads quickly and works with large data. Crucially, you can view and edit visually, not only programmatically.

Assuming those already working with such data have Excel and Python tools etc., the pitch here is that the $39 license fee saves time or effort. So, is it that the user can spot and correct errors that they couldn't otherwise catch with Excel or other big-data tools? And/or otherwise do the necessary data manipulations?

I came across the phrase 'eyes like a shithouse rat' recently, to describe the people doing final checks at a printing press. I think there's probably plenty of people out there who would pay $39 for eyes like a shithouse rat.

Also the website makes me nostalgic :)


I like old-school UIs but I wonder if that look doesn't do the product a disservice. I think most people would find it much more appealing if it looked at least as good as tad and rowzero mentioned in other comments. My first impression was that it is some old, slow software from 30 years ago, and the plots are not what I would show to anyone (especially the 3d bar plots). But that's just the looks. Otherwise, the product is solid.

(Yes, I know that there is plenty of old software that is super fast.)


Yep, the website is being kept simple, in fact too simple, and will eventually have to be redesigned. The classic (and actually up-to-date) WinAPI GUI shouldn't imply the software is slow. If anything, it's usually the contrary: commonly cloned packages weighing hundreds of MB while performing relatively simple tasks get a free ride on users' hardware that is tens of times faster. Then there's Wirth's law, May's law, etc.


Do you think that craftsmanship and longevity, in terms of keeping these people on board, go hand-in-hand?

As an example, Hamamatsu Photonics has been in the optics field a long time and is going hard on developing for quantum physics applications. It's refreshing, since pretty much every company in quantum computing is very young and hasn't had the time to build that craftsman vibe yet. Of course, there are people who've been working on quantum information technologies for a few decades now.

I look forward to seeing this ethos developing in quantum, for sure.


> Do you think that craftsmanship and longevity, in terms of keeping these people on board, go hand-in-hand?

In this case, yes. But that also depends on who you want to retain.

If you want to retain folks that treat their work seriously, and in a craftsmanlike manner, it's important to provide a structure that incubates and rewards that.

We've really reached a point, in tech, where we're in a "death spiral." Companies treat their employees like crap. They may pay them well, but they treat them terribly. This means no loyalty, so the employee feels no issue with leaving as soon as the grass looks greener elsewhere, and management feels justified in viewing employees as "disloyal," or even "dangerous." It's a classic vicious cycle. Money is the only meaningful currency, so people flit around, jacking up their salaries and looking at each company as "just another job."

The people who need to start the change are CEOs (and shareholders). It's difficult, because "blinking first" seems "wussy," and it's also pretty much certain that employees would continue to act the way they do now for some time, until a new culture gets established. That time may be enough to kill the company, as more rapacious competition eats its lunch.

I was lucky to join an old corporation that had a long-established tradition of retaining top talent. Not sure if you would be able to start a new one, with a similar ethos, these days.


PhD in Quantum Optics and Atomic Physics from Oxford here (they call it 'DPhil in Atomic & Laser Physics' here).

I fully agree with what everyone else is saying here; it's really great advice. On the science aspect, their advice has you well covered. From myself, I'd emphasise: the most important ingredient of your PhD, other than you, will be your prospective advisor. The choice of school is secondary to that.

My two cents: consider how well you will get on with your supervisor. Meet them, their students (past and present), and their departmental colleagues and friends if possible. Get a feel for them, a vibe. Even the people who don't love them very much will give you an idea of what this person is like in just a few hours. Imagine: if this is what your supervisor is like for one day, can you imagine being with them for several years? In this respect, it's like a long-term relationship, with one massive difference: if a (romantic) relationship goes south after a few years, you can break up, but if you quit a PhD halfway you could end up without a piece of paper to show for it. You will then have to weigh that possibility against the effort needed to continue, which drains energy from, and adds a hefty cost to, all parts of your life. I've seen this crossroads occur for many people who left halfway.

To drive home the last point: my personal experience is such that I literally cannot think of anyone who's had a tougher PhD journey than me and still managed to complete it (I'll concede that I'm biased, naturally). Backing up that statement: statistically speaking, I don't think anyone in the whole Oxford Physics department has fallen into the same hole that I did for at least 20 years. I need not go into details, but if pressed, you could ask me directly (this is my first post on HN; I'd have to figure out how to respond). My point is: be cautious about the life aspect of the PhD, not just the science itself.

Also be aware of life outside your work. This is the point you highlight in your original question, and it's great that you do. Specifically, the location matters, since it defines your environment. For your PhD to work out, you need not just a support network but an environment in which you can succeed with as little friction as possible; your hard work then compounds from there. You want to be able to get home, or go out on the weekends, and recover energy after your work, not drain it. If your PhD work takes your energy and you can't recover it, that's bad. The same goes for the work itself: your work should motivate you and give you energy too. Imagine what your evenings and weekends would be like if you were in exactly the same place but didn't have any PhD work at all. Would you still enjoy it? Ultimately this decision is a personal one, and you have to trust your judgement. If you don't feel 100% confident answering that, talk with others about this point before moving forward. It's difficult to get a feel for this experience in advance, so you need to be armed with quality advice.

Note that a PhD is a hard journey and a life-changing experience; of course it's about your life. There are lots of negative things to say about the experience, but I'll end positively: my PhD has completely opened my world to experiences that I wouldn't have been able to dream of otherwise. I simply wouldn't have known such things exist. It's broadened my horizons in a similar way to what someone moving from the island of Nauru in Oceania to the SF Bay Area would experience. So, do what makes you happy :)

Good luck!

Footnote: actually, it's late in Europe at the moment, so to save time I tried to dictate this post using ChatGPT Plus for the first time. It transcribed what I said for 7 minutes uninterrupted and absolutely flawlessly! Then it ended with 'Transcript unavailable', and I lost the text. Perhaps this is like 'The house always wins', except there is no house, or winning. So, from now on, whenever I think of AI going forward, I will think: 'The always'. Even if it doesn't make sense to anyone, I'm sticking with that thought; it has personality!


>To drive home the last point: my personal experience is such that I literally cannot think of anyone who's had a tougher PhD journey than me and still managed to complete it (I'll concede that I'm biased, naturally). Backing up that statement: statistically speaking, I don't think anyone in the whole Oxford Physics department has fallen into the same hole that I did for at least 20 years.

Let's talk. I think I have some notes for you.


>Let's talk. I think I have some notes for you.

How?


Email is my HN handle at gmail

