Hacker News | pwr-electronics's comments

Try nomograms instead - you can print them on paper, and they have one for just about everything. https://youtube.com/channel/UCOLYtsL4ge6QfaAvBDeG1IA


Maybe PDF numbers are slightly annoying in the modern age of writing.


The choice between refactoring and money-generating work is a false dilemma. There are other options, and the developer doesn't have to make that decision or carry out the work all on their own.


Indeed.

If the code has turned to spaghetti, then how do you manage to change it quickly (due to e.g. Corona rules) so you can follow where the market went and not get competed out of business?

When the company is in startup mode and has no customers, it's easy to just throw more mud on the wall.

But when you have an existing business based on 1M lines of code and you want to stay in the market when it changes quickly, then spaghetti code can be death. Being ready means having cleaned up the code beforehand so it is easy to change.


At an organisational level, you have to make a decision on how much time you spend doing one or the other. It might be that some developers never do any refactoring but someone is always going to end up doing it. Or nobody does it, and the code slowly decays.

Unless you're saying that you don't have to do refactoring at all in the organisation, but the only way to do that surely is always get it right the first time, which isn't hugely practical. You may sometimes encounter a situation where the quickest way to build a feature is to fix some old ugly code, but that's certainly not the case every time.


My whole point is that you don't, because there aren't always just two options. That's the false dilemma logical fallacy.

I'm saying you can fix problems without dropping everything and redoing work. You're allowed to problem solve and work with people to create a third option. And you can prevent new ones by learning and strategizing.


Well whether you drop everything or clean as you go or whatever other strategy, fixing stuff takes time. Even if it's just the mental effort of designing a better way and consensus building.

I'm just using simple analogies for the sake of explanation, but it is nearly always the case that expanding the scope of work to fix previous architectural decisions that were either flawed or no longer relevant will take considerably longer than just fixing the problem at hand.

There may be the odd time, particularly in a large, well defined piece of work, where you can say actually tidying up this other stuff will save time overall. Or perhaps you can batch a bunch of improvements in the same system together into a larger, more thoughtful architectural improvement. All of that is great if you can do it, but it's often not possible.

As far as preventing future architectural issues by learning and strategizing, I feel like that's what we spend our entire career trying to get better at doing ;). But alas I, and everyone else, seem to continue making decisions that don't pan out long term. Even if you did make a perfect decision at the time, often the world/business/third party dependency changes, and what was an excellent decision in the past becomes a pain point a few years later.

It used to be the case that we tried to design infinitely extensible software so future requirements could always be incorporated, but that makes the software unmaintainable. So the pendulum swung to YAGNI and only designing for exactly what was right in front of you, but that leads to major architectural overhauls every few months. True answer is somewhere in the middle, but learning where is something that only seems to come with decades of experience.

Unfortunately older programmers all seem to be forced out of developing and into management or other careers for some reason.


I'm still trying to challenge your assumptions. Why does a different solution necessarily require expanding the scope of work? Like you said, that's where experience helps to have those skills in your toolbox. Doing things better doesn't have to be harder.


It doesn't always require expanding the scope of work, but very often does. I even suggested a few situations where it doesn't, but in many cases fixing the true underlying problem involves expanding the scope of work.

It's hard to argue the nitty gritty without examples so here's a real world one from quite a long time ago, in a company that went bust after the death of the owner.

--

We had a system that had a significant quantity of code written in a custom language that would be compiled by an internally written compiler. This compiler was in some ways a work of genius, written in the 80s, but it had a lot of very deep architectural flaws in the optimiser that meant certain patterns of code would generate invalid output. We didn't write much new code in this language but had a pretty large body of code that needed to continue running.

So during a server hardware refresh, we found that almost everything was crashing. Turns out, a compiler optimiser flaw meant that any time a loop had a number of iterations that wasn't a multiple of the number of CPUs, generated programs would segfault.

We investigated what it would take to fix the underlying issue but it would have been a week or more of work just to understand why it was happening. Porting all the old code would have taken even longer.

Instead, using a pre-existing AST manipulation library we had written, we added a prebuild script that hacked all of the files to include a CPU-count check and then pad out the number of iterations with NOPs. Took a few hours and unblocked the server upgrade.
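The padding workaround boils down to rounding each loop's iteration count up to the next multiple of the CPU count, with the extra iterations being NOPs. A toy sketch of that arithmetic in Python (the function name is mine, not from the original tool):

```python
def padded_iterations(n, cpu_count):
    """Round an iteration count n up to the next multiple of cpu_count.
    The extra (padded - n) iterations would be filled with NOPs, so the
    loop does harmless no-ops instead of tripping the optimiser bug."""
    remainder = n % cpu_count
    return n if remainder == 0 else n + (cpu_count - remainder)

print(padded_iterations(10, 4))  # 12: two NOP iterations appended
print(padded_iterations(12, 4))  # 12: already a multiple, untouched
```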

--

Another, perhaps less esoteric and more recent example:

A third party open source library we use had an issue where a particular function call would sometimes get stuck in an infinite loop due to incorrect network code in the library interacting badly with our network hardware.

We submitted a bug report and fix, but the maintainer wouldn't accept the fix unless we also changed a bunch of other related code, added a bunch of tests, etc., which we didn't have time to do. We considered a fork, but that would involve keeping it up to date, rebuilding packages, and so on.

We worked around the issue by running it in a different process and monitoring CPU usage. If CPU usage goes beyond a certain threshold, we kill the process and try again.
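That kill-and-retry wrapper is easy to sketch. Below is a minimal Python version; for simplicity it uses a wall-clock timeout as the "runaway" signal instead of a CPU-usage threshold (the real monitor would use something like psutil for that), and all names are illustrative:

```python
import multiprocessing

def run_with_retries(target, args=(), timeout=5.0, retries=3):
    """Run target in a child process; if it hasn't finished within
    timeout seconds (our stand-in for 'CPU usage over threshold'),
    kill the process and try again."""
    for attempt in range(retries):
        proc = multiprocessing.Process(target=target, args=args)
        proc.start()
        proc.join(timeout)
        if not proc.is_alive():
            return proc.exitcode  # finished (0 on success)
        proc.terminate()          # stuck: kill and retry
        proc.join()
    raise RuntimeError("target kept hanging after %d attempts" % retries)

def well_behaved():
    pass  # stands in for the library call that usually works

print(run_with_retries(well_behaved))  # 0
```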

Workaround was quick and has been working fine for over a year now. Contributed patch is still languishing in an open PR with various +1s from other users.


I think your examples agree with my point: You found minimal-time solutions that haven't caused continuous suffering afterwards, and can be easily removed when the root cause is fixed. That's a good result.


As the article below explains, it's a combination of structured qualitative analysis and a review process. That process builds on top of all the other application-specific or discipline-specific processes, like a hierarchy. The higher up you go, the more generic it gets. The lower down you go, the more you critique the exact math or test or whatever.

https://adsabs.harvard.edu/full/1996ESASP.377...83F


Does the company have any in-house guidance on the application domain? Or is it all external feedback so far?


Both in-house and external

Almost everybody here has worked in hardware before. Bunch of people from Apple, Siemens, NASA, and other places


Riveting


just beam-ing with excitement


Welded to my seat.


Was expecting a time lapse. Not a live view.


There is a time lapse, but it's a link on the landing page.


The business opportunity is in software that's inseparable from the hardware. Academics won't work on that because they don't do product development.


Nothing breaks the laws of thermodynamics, ever.

I assume you're referring to how some popular science articles report things. That's just wordplay. Nothing is being bent, broken, or bypassed.

Usually it just means they did something an ordinary homogeneous material couldn't do, for example. Which is genuinely interesting, even if it's not actually breaking physics.


With a generous interpretation, they can be said to break classical thermodynamics, since they bring quantum physics onto the table; most notably, magnets can have negative temperature according to the statistical definition. Nothing that breaks physics at large, but it would probably cause Ludwig Boltzmann a mild headache if he heard about it.

It's still not going to allow you to make perpetual motion machines, though.
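A toy numeric illustration of the statistical definition (1/T = dS/dE): in a two-level system, once more than half the population sits in the excited state, entropy decreases with energy, so 1/T goes negative. A sketch in Python (the system and its parameters are my own illustrative choice, not from this thread):

```python
import math

def entropy(p):
    """Entropy (in units of k_B) of a two-level system with
    excited-state occupation probability p."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def inverse_temperature(p, eps=1.0, dp=1e-6):
    """Numerical 1/T = dS/dE for level spacing eps.
    Since E = p * eps, we have dE = eps * dp."""
    return (entropy(p + dp) - entropy(p - dp)) / (2 * dp * eps)

# Normal population (most spins in the ground state): 1/T > 0
print(inverse_temperature(0.2) > 0)  # True
# Inverted population (most spins excited): 1/T < 0, i.e. negative T
print(inverse_temperature(0.8) < 0)  # True
```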


Why would you use a classical model to describe a quantum system? That's the kind of wordplay that those articles do, and almost identical to my example about materials. It's entertaining, but meaningless.


Well, in this case you absolutely can; the results just seem counter to our intuition about temperature. Negative temperature is a meaningful description of these types of systems.


I'm trying to explain how articles often conflate "intuition" with "breaking physics", and you're making your argument by conflating them. We're having different conversations.


Where have I said anything about any of this breaking physics, other than repeatedly clarifying that it doesn't...?


> With a generous interpretation, they can be said to break classical thermodynamics

https://news.ycombinator.com/item?id=31008263


Operative word being classical. Quantum physics breaks classical physics. Stopped being controversial about a hundred years ago.

The very same comment specifically states that physics itself is not broken.


Ok, so you're confirming that you're doing exactly what I said popular science articles often do.


No I am not. Breaking classical physics is not breaking physics.


I'm running out of ways to explain myself, so I'll just reword and summarize this thread from my perspective:

OC: Is it broken?

Me: No, they make generous interpretations to claim that it's broken.

You: With a generous interpretation, they can be said to break ...

Like I said, we're not having the same conversation.


This is just sophistry. You're omitting the crucial context of what "it" refers to in these sentences, and only when that context is dropped do they appear contradictory.


Of course there are contradictions. I'm talking about journalistic style in magazines. You're talking about something else.


For graphical computation, I recommend Nomographer's YouTube channel https://www.youtube.com/channel/UCOLYtsL4ge6QfaAvBDeG1IA


> It should catch the issue and exit cleanly with an error message.

Probably not. As the author describes, LLVM has a tool to check for invalid IR, which they used to investigate the issue and generate an explanatory error message.

