
> Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements

I agree with your observations, but I'd suggest it's not so much about domain (though I see where you're coming from and don't disagree), but about volatility and the business lifecycle in your particular codebase.

Early on in a startup you definitely need to optimize for speed of finding product-market fit. But if you are successful then you are saddled with maintenance, and when that happens you want a more constrained code base that is easier to reason about. The code base has to survive across that transition, so what do you do?

Personally, I think overly restrictive approaches will kill you before you have traction. The scrappy shoot-from-the-hip startup on Rails will beat the Haskell code craftsmen 99 out of 100 times. What happens next, though? If you go from 10 to 100 to 1000 engineers with the same approach, legibility and development velocity will fall off a cliff really quickly. At some point (pretty quickly) stability and maintainability become critical factors that impact speed of delivery.

This is where maturity comes in: it's not about some ideal engineering approach, it's about recognizing that software exists to serve a real-world goal, and how you optimize for that depends not only on the state of your code base but also on the state of your customers and the business conditions you are operating in. A lot of us became software engineers because we appreciated the concreteness of technical concerns and wanted to avoid the messiness of human considerations and social dynamics, but ultimately those are where the value is delivered, and we can't justify our paychecks without recognizing that.



Sure it’s important for startups to find market traction. But startups aren’t the majority of software, and even startups frequently have to build supporting services that have pretty well-known requirements by the time they’re being built.

We way overindex on the first month or even week of development and pay the cost of it for years and years thereafter.


I'm not convinced that this argument holds at all. Writing good code doesn't take much more time than writing crap code, it might not take any more time at all when you account for debugging and such. It might be flat out faster.

If you always maintain a high standard you get better and faster at doing it right and it stops making sense to think of doing it differently as a worthwhile tradeoff.


That's the hard part of project management.

Is it worth spending a bit more time up-front, hoping to prevent refactoring later, or is it better to build a buggy version then improve it?

I like thinking with pen-and-paper diagrams; I don't enjoy the mechanics of code editing. So I lean toward upfront planning.

I think you're right but it's hard to know for sure. Has anyone studied software methodologies for time taken to build $X? That seems like a beast of an experimental design, but I'd love to see.


I personally don't see it as a project management issue so much as a developer issue. Maybe I'm lucky, but on the projects I've worked on, a project manager generally doesn't get involved in how I do my job. Maybe a tech lead or something lays down some ground rules like test requirements, but at the end of the day it's a team effort: we review each other's code and help each other maintain high quality.

I think you'd be hard-pressed to find a team that lacks this kind of cooperation and still maintains consistently high quality, regardless of what some nontechnical project manager says or does.

It's also an individual effort to build the knowledge and skill required to produce quality code, especially when nobody else takes responsibility for the architectural structure of a codebase, as is often the case in my experience.

I think that in order to keep a codebase clean you have to have a person who takes ownership of the code as a whole and has plans for how it should evolve, API surfaces as well as lower-level implementation details. You either have a head chef or you have too many cooks; there's not a lot of middle ground, in my opinion.


I hear you, and agree there’s not much overhead in basic quality, but it’s a bit of a strawman rebuttal to my point. The fact is that the best code is code that is fit for purpose and requirements. But what happens when requirements change? If you can anticipate those changes then you can make implementation decisions that make those changes easier, but if you guess wrong then you may actually make things worse by over-engineering.

To make things more complicated, programmers need practice to become fluent and efficient with any particular best practice. So you need investment in those practices in order for the cost to be acceptable. But some of those things are context dependent. You wouldn’t want to run consumer app development the way you run NASA rover development because in the former case the customer feedback loop is far more important than being completely bug free.


I always try to design for current requirements. When requirements change I refactor if necessary. I don't try to predict future requirements but if I know them in advance I'll design for them where necessary.

I try to design the code in a modular way. Instead of trying to predict future requirements I just try to keep everything decoupled and clean so I can easily make arbitrary changes in the future. Sometimes a new requirement might force me to make large changes to existing code, but most often it just means adding something new or replacing something existing that I've already made easy to replace.

For example, I almost always make an adapter or similar for third-party dependencies. I will have one class where I interact with the API/client library/whatever, and I will avoid taking dependencies on that library anywhere else in my code, so if I ever need to change it I'll just update or replace that one class and the rest of my code remains the same.
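Roughly what that looks like, as a minimal sketch in TypeScript (the VendorClient shape and the EmailAdapter name are made up for illustration, not from any real SDK):

    // Stand-in for a third-party SDK. In a real project this shape
    // would come from the vendor's package import.
    interface VendorClient {
      send(payload: { addr: string; body: string }): Promise<{ ok: boolean }>;
    }

    // My own domain type. The rest of the codebase only ever sees this,
    // never the vendor's payload or response types.
    interface EmailResult {
      delivered: boolean;
    }

    // The single class that touches the vendor API. Any workarounds for
    // the library's quirks live here and nowhere else.
    class EmailAdapter {
      constructor(private client: VendorClient) {}

      async sendEmail(to: string, body: string): Promise<EmailResult> {
        const res = await this.client.send({ addr: to, body });
        return { delivered: res.ok };
      }
    }

Swapping vendors then means rewriting EmailAdapter internally while sendEmail and EmailResult stay the same for every caller.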

I've had issues in codebases where someone else doesn't do that: they'll use some third-party library in multiple different components, practically making that library's data classes part of their own domain, with workarounds for the library's shortcomings all over the place. So when we need to replace it, or an update contains breaking changes, it's a big deal.

There are a lot of things like this you can do that don't really take much extra time but make your code a lot simpler to work with in general and a lot easier to change later. It has lots of benefits even if the library never gets breaking changes or needs to be replaced.

Same thing for databases: I'll have a repository that exposes actions like create, update, and delete, and if we ever need to use a different db or whatever, it's easy. Just make a new repository implementation, hook it up, and you're done. No SQL statements anywhere else, no dependency on ORMs anywhere else; I have one place for that stuff.
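Same idea sketched out (the User shape and the in-memory implementation are illustrative assumptions; a real project would have an SQL- or ORM-backed class behind the same interface):

    // Domain entity and repository interface. No SQL or ORM types leak out.
    interface User {
      id: string;
      email: string;
    }

    interface UserRepository {
      create(user: User): Promise<void>;
      findById(id: string): Promise<User | null>;
      remove(id: string): Promise<void>;
    }

    // One concrete implementation. Switching databases means writing
    // another class that satisfies the same interface; callers don't change.
    class InMemoryUserRepository implements UserRepository {
      private users = new Map<string, User>();

      async create(user: User): Promise<void> {
        this.users.set(user.id, user);
      }

      async findById(id: string): Promise<User | null> {
        return this.users.get(id) ?? null;
      }

      async remove(id: string): Promise<void> {
        this.users.delete(id);
      }
    }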

When I organize a project this way I find that nearly every future change I need to make is fairly trivial. It's mostly just adding new things, and I already have a place for everything, so I don't even need to spend energy thinking about where it belongs or whatever - I already made that decision.


Well said. This summarizes my experience quite succinctly. Many an engineer fails to understand the importance of distinguishing between different tempos and between immediate and long-term goals.



