
While the author may never go back and fix his code later, that doesn’t mean everybody else works the same way.

If you never have to come back to “fix it”, was it actually wrong to begin with?

Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.

I held this mindset too when I was younger. Then I got tired of seeing how all my clever abstractions never actually got used the way I intended, and decided to get smarter about it.



> If you never have to come back to “fix it”, was it actually wrong to begin with?

A dead simple and effective heuristic is to improve code a little bit each time you touch it. The code you touch a lot gets love and ends up nicely designed. The code you rarely touch won't be as nice, but that's fine, because you rarely touch it.

If you're going to make a big functionality change or addition that will take a while, that's a great time to significantly refactor the existing code. The change will provide additional design context for the refactoring, which you wouldn't have had earlier.

Of course this strategy requires you to have good tests. If you do a refactoring followed immediately by a functionality change, and you don't have good tests to verify that the refactoring is solid, you'll have trouble attributing bugs to the refactoring or the changes. If you don't have good tests and basically test in production, then you'll want to refactor ahead of time so you can deploy the refactored code and shake out the bugs before you start building on it.


> Of course this strategy requires you to have good tests.

Not necessarily, if you also follow another, similar principle: leave the code a little neater than you found it, but also leave it a little better tested than you found it!


Could you distinguish between "good tests" and "a little better tested"?

It seems to me that good tests enable code to be well tested. Here, "good tests" would mean verifying behaviour against a range of expected and unexpected inputs and outputs.


> If you're going to make a big functionality change or addition that will take a while, that's a great time to significantly refactor the existing code

and so here comes the trade-off question: what if such a change could be made faster/with fewer people or resources, but at the cost of not doing the refactoring?


It's more that you're spending a lot of time reacquainting yourself with that code, so the price of tweaking it is low. There'll be some return from the changes, even quickly; but mostly, it's the cheapest time to refactor - if you're ever gonna refactor, this is when to do it.


You have to ask what motivates the refactoring. It should speed up work in that part of the codebase, either by making the change faster or by reducing the time needed for follow-on bug fixes (which in my opinion is the same thing.)

What's nice about making a refactoring right before a major change is that it's much less speculative. Your estimation of the value of a refactoring is only as good as your prediction of what changes will need to be made in the future. If you already know exactly the change you are about to make, you can justify the value of refactoring with much more confidence.

By contrast, if the change in front of you looks easy but you're worried that future changes will be hard, then it may be best to leave the refactoring until just before those future changes. After all, they might not come, and if they do, they might not be what you expect.

There are two reasons to do a refactoring now to account for non-immediate future work. I think only one of them is actually about the future, and the other one is really about the present.

The first reason, which really is about the future, is if you do know what future changes you will have to make. For example, if you know that such-and-such future functionality will be required to support promises being made to current or prospective customers. Or if your company actually has a product roadmap that it sticks to. Then you know the refactoring will pay off eventually, and you can make a firm case for doing it now. You should discount the value of the refactoring to account for any uncertainty about the future work.

The second reason is that bug fixes after the release will be easier if you do the refactoring first. I think this is the same as saying that the current change can be completed faster with the refactoring. Releasing something with a bunch of bugs doesn't make it "done." It's done when the engineers who are doing it can move on to other things. If you're stuck fixing bugs from the initial release, you aren't really done, so it wasn't done faster. Product and sales will often tell you that there are crucial strategic reasons that releasing something now with a bunch of bugs is better than releasing it a few weeks later with fewer bugs, but they're almost always lying^H^H^H^H^H suffering from tunnel vision on their own goals. Your engineering manager should escalate, and nine times out of ten the business does not actually want you to push out a piece of shit a few weeks earlier. They will tell you to descope or delay the initial release. If you count bug fixing time as part of the time spent making the upcoming change, then the refactoring becomes justified.


Yup. I regularly come across my own notes "This isn't the best way to do this because X, Y, Z. Need to address later."

Years later I'm in that code and realize that X, Y, Z never ever happened (even if it seemed highly likely) and that block of code was working just fine and folks found it easy to work with... I was dead wrong about being wrong.


I don't think there is anything wrong with this. You documented the assumptions made when you wrote it, so years later you know them and can say with confidence that you made the right choice. Much less guesswork. Worst thing was that you were a little rude to yourself.


Reminds me of a piece of code I wrote quickly as a POC that made its way into production unchanged. It somehow ended up being the most stable feature, perhaps because it wasn't trying to do too much. Just what it had to do and nothing more.


> unnecessarily flexible solutions

Flexibility is never unnecessary. Extreme flexibility feels amazing and will lead to serendipitous jumps in your productivity where you implement cool new useful features you never thought of before just by combining things you already wrote in new ways.

The problem is that what people think is flexible design is actually either dead weight or a brittle inner-platform. Adding fields because you might need them later, adding an interface without a need on the consumer side, moving something to a configuration file instead of a constructor parameter, imagining up needs that no consumer will ever actually have, etc. etc. All of these try to achieve flexibility by "adding more" - more layers, more configuration, more fields, more abstractions, whatever. Indeed it's better to ignore flexibility than to try to be flexible in these misguided ways and fail miserably.

But there's a third option: extremely simple, terse, clear code that follows proper design principles (not "patterns") from top to bottom. Code that never asks for more than it needs to do its job. Code that makes the fewest assumptions possible. This is flexibility via removal -- removal of assumptions, of preconditions, of responsibilities. When you identify a unique need your code has, you concretely express that need as simply as you can (e.g. a small interface), but you don't implement it. Your code just does its one simple job using its simple needs, and you don't worry about whether or not a concrete implementation of your needs actually exists. If you never get around to implementing one, then you never needed that code you just wrote to begin with, and you delete it.
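
A rough sketch of what I mean (TypeScript, all names made up): the consumer states its one need as a tiny interface and does its job against that, without supplying or assuming any concrete implementation.

    // An Order, reduced to the single field this code actually needs.
    interface Order {
      total: number;
    }

    // The one need this code has, stated by the consumer as a tiny
    // interface. No fields "for later", no provider-side assumptions.
    interface OrderSource {
      ordersFor(customerId: string): Promise<Order[]>;
    }

    // Does its one simple job against its simple need, and nothing more.
    async function totalSpend(customerId: string, source: OrderSource): Promise<number> {
      const orders = await source.ordersFor(customerId);
      return orders.reduce((sum, order) => sum + order.total, 0);
    }

No concrete OrderSource exists here, and that's the point: if one never shows up, totalSpend was never needed and you delete it.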

It's perfectly possible to write code that is about equally "correct" as your requirements are, and that is clean and extremely flexible, the first time, without ever having to go back and "fix it later". It just looks nothing like what everyone seems to think "flexibility" looks like.


You basically just explained good software engineering. Code that does one, well-defined, necessary thing and does it well.

If you want to "do a thing", a DoThing() function is the optimal way to represent it. You can't get any more abstract than that, and you don't need to.

Somehow a lot of (badly explained) ideas of "design patterns" and "abstraction" have rotted people's brains into thinking you're supposed to add a whole bunch of extra layers everywhere.


Depends on how many DoThings you have in your code and on how many other DoThings each of them depends on.


> what people think is flexible design is actually either dead weight or a brittle inner-platform.

Or as Fred Brooks called it, "accidental complexity"


100%. The easiest code to change is code you haven't written at all. It is orders of magnitude easier to add features to small, simple projects than big, complex ones.

Simple code is malleable code.


I started off reading your comment thinking I was going to disagree with you. But you are 100% right.


> Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.

Worse, after 10 years, flexibility is required in an unforeseen dimension and a rewrite is needed anyways.

In other words, when the basic assumptions of that fancy abstraction are just not workable with the future requirements, you're hosed. Worse, now you might need to refactor a lot of code building on this abstraction.

That's why I prefer composition wherever feasible. Easier to repurpose when the crystal ball is not working correctly.
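
Roughly the kind of thing I mean (a TypeScript sketch with hypothetical names): small independent pieces composed at the call site, so when requirements shift you swap or reorder pieces instead of fighting an abstraction that guessed wrong.

    // Small, independent pieces; none of them knows about the others.
    const parseCsv = (raw: string): string[][] =>
      raw.trim().split("\n").map(line => line.split(","));

    const dropEmptyRows = (rows: string[][]): string[][] =>
      rows.filter(row => row.some(cell => cell.trim() !== ""));

    const toRecords = (rows: string[][]): Record<string, string>[] => {
      if (rows.length === 0) return [];
      const [header, ...rest] = rows;
      return rest.map(row =>
        Object.fromEntries(header.map((name, i) => [name, row[i] ?? ""] as [string, string])));
    };

    // The "pipeline" is just function composition. Repurposing it for a
    // different import means rewiring calls, not refactoring a hierarchy.
    const importCustomers = (raw: string): Record<string, string>[] =>
      toRecords(dropEmptyRows(parseCsv(raw)));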


After 30 years of programming, I still agree with this take. The crystal ball is inconsistent. The one certainty is that you'll understand the problem better with time and experience. And your code is the easiest to change when it's small and simple. You often have to implement the code wrong in order to figure out how to implement it right.

The OP mentioned they didn't assemble their bed frame for 6 months after moving in. That's a perfect metaphor - writing software really is like furnishing and caring for a home. Code is both content (what your program does) and environment (where you do it). If you don't take the time to care for your home or your software, it'll become a mess in short order.

I like to think of good software in gardening metaphors. Sometimes we just need to tend the garden - remove some weeds, and clean things up a bit. You can tell when that's needed by gazing out at the garden and asking yourself what it needs.

Personally I'm not very good at maintaining my apartment sometimes - I bought some shelves that I didn't install for well over a year. So whenever I have that urge to tidy up and do some spring cleaning, I jump on that instinct. Last week I removed a few hundred lines of code from my project (because the crystal ball was wrong). It was a joy.


Not all code is the same. Who's using the code? You, your department internally, or your clients?

How many times does the code run? Daily, monthly, once and maybe never repeated again?

How resource intensive is it? Does it take a second or a month to execute?

Do you know from the start where you're going to end up? If it's a research problem, probably not. Then you don't need to prematurely optimise the code.


I’m not talking about optimisation.


> Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.

If anything, I've learnt that code shouldn't be just correct, easy to read and reasonably easy to change... but also easy to throw away. Write code that contains the simplest solution that you can successfully get away with, without it becoming a problem down the line. And if need be, it should be coupled loosely enough to be replaced with something that fits the contract and passes the tests (provided that you have those).

An example of what not to do: an intricate hierarchy of "service" classes, which help you process some domain object and any other domain object type that you might want to handle in the future. It sounds good, but it might have abstract classes, a bunch of interfaces, some methods with default implementations, and so on. To understand how it works, you might need to jump around many files, with even your IDE sometimes getting confused in the process.

A better example: a single "service" class that helps you process some domain object, with mostly pure functions that are testable and self contained. You should be able to figure out what it does without jumping around a dozen different files, and also replace just this one file. Need to process another entity type but aren't sure whether it belongs in the same context, or whether this logic could evolve separately? Just make another class as a copy. Realize that you've reached the rule of three? Extract the common interface through your IDE later, as it becomes relevant.
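
A rough sketch of that second shape (TypeScript, hypothetical domain): one small file, mostly pure functions, nothing to jump around for.

    // invoice-service.ts: everything needed to understand invoice
    // processing lives in this one file.
    export interface Invoice {
      lines: { description: string; amount: number }[];
      discountPercent: number;
    }

    export class InvoiceService {
      // Pure functions of their input: trivially testable, self contained.
      subtotal(invoice: Invoice): number {
        return invoice.lines.reduce((sum, line) => sum + line.amount, 0);
      }

      total(invoice: Invoice): number {
        return this.subtotal(invoice) * (1 - invoice.discountPercent / 100);
      }
    }

Copying this file into a hypothetical order-service.ts later costs almost nothing, and the pure functions can be unit tested without any setup.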

Of course, it varies from language to language and even between different codebases written in the same language.


> If you never have to come back to “fix it”, was it actually wrong to begin with?

1. Ah, we often "have to", but also aren't allowed to because it would rock the boat too much and take too much time.

2. Unfortunately, too many coders share your attitude, and then other people have to work with what you guys left us - facing the consequences of making it do something different from what it was originally set up to do, and being stuck with some jerry-rigged and inflexible set of assumptions. At least sketch out in comments what the better solution was supposed to be and where you cut corners!


I've had the same experience in my career. I really do wonder how much of YAGNI is only learned through experience. Earlier in my career, I wanted everything to be more perfect, and I wanted to make sure I fully designed for all eventualities in my code. Nowadays, I know I can program myself out of a bad situation if it arises, so I don't try to cover all my bases, but I do make sure I'm aware of what could go wrong.

If I make a note to "fix it later", it just means: this could be bad, but might not be, so I might fix it, but I might not either. At the point I make that decision, it should be obvious to me how bad it could really be if it blows up, what the ramifications are, and how easy it is to detect or fix. With experience, you can play a bit more fast and loose with what you decide to do or not do, but it requires a lot of other skills, such as designing defensively, so that if something does go wrong, it's easily detected and doesn't cause irreversible damage to data.
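
To illustrate the defensive part (a made-up TypeScript example): put a loud guard at the boundary, so if the shortcut does blow up it's detected immediately and nothing irreversible happens to the data.

    // Guard at the boundary: the "might never happen" case fails loudly
    // instead of silently writing a corrupted balance.
    function applyCredit(balanceCents: number, creditCents: number): number {
      if (!Number.isInteger(creditCents) || creditCents < 0) {
        throw new Error(`invalid credit amount: ${creditCents}`);
      }
      return balanceCents + creditCents;
    }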


I'm the antithesis of this. My devs thought that when I said "we'll get to x later" it meant it'd be nice one day. Then they kept getting lost when they'd come back and it'd be implemented the way I said we should. They've learned to pay attention.


Agreed that you shouldn't design for changes that aren't extremely likely to happen. Ends up overcomplicating things with abstractions on top of abstractions.

And then when a change does come it often isn't even expected and doesn't fit into the abstractions. But because everything is so convoluted, you either need to make a large code change to fit the new feature or you need to put a hack on top that completely invalidates the abstractions.


There's a saying in Hebrew that goes something like: "There's nothing more permanent than the temporary". Every single large code base that I've ever worked on had an endless number of TODO and FIXME comments.

But I'm with you. If you think what you're doing is ok, don't leave a FIXME comment or plan on it getting fixed down the road. If it's not ok, just don't do it please ;)


It’s always entertaining to watch juniors start out gung ho early in their career, until their first major project gets sunset. It’s like they put so much of themselves into it that they can’t bear to do it again. For me, it’s like taking a shit: I don’t mind losing that part of me.



