
This isn't really recommended practice, and there is nothing here that justifies having to maintain a huge code base in a single folder, or as multiple folders inside one larger one.

I wouldn't be surprised if many people need a safari map or README documentation in every single folder just to navigate a repository as large as Stripe's.

Sounds like the emergence of a new bad practice if you have to praise how large your code base is.



> I wouldn't be surprised if many people need a safari map or README documentation in every single folder just to navigate a repository as large as Stripe's.

No different to having thousands of smaller repos instead.

I personally dislike monorepos, for very niche, in-the-weeds operational reasons (as an infra person), but their ergonomics for DX cannot be understated.


The 'ergonomics for DX' benefit is that you can share code across projects without having to go down the path of creating a package / library pushed to some internal registry and pulled by each project, right?

Or are there any other aspects to the monorepo architecture that make it beneficial for large companies like that?

Just curious, I've never worked in such an environment myself.


To put it in the most general terms: It provides the same value that using a VCS has for a project, but applied to the entire company.

In a standalone project, would you accept a change that is incompatible with other code in the project? For example, would you allow a colleague to change a function in a way that breaks the call sites? No, you probably would not.

The attitude within monorepo shops is that this level of rigour should be applied to the entire company. Nobody should be able to make a change anywhere if it would break anything elsewhere, or at least they should only be permitted to do so deliberately. There are caveats to this, but that is the general idea.


In addition to what you mentioned: the ability to commit atomically to a library and all of its consumers, and, for a change to a library, to run the tests of all of its consumers as well.
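To make that concrete, here's a minimal sketch (hypothetical module and path names), assuming a Python monorepo where the library and its consumer are just directories in the same tree:

    # libs/pricing/discount.py -- shared library code
    def apply_discount(price: float, percent: float) -> float:
        """Return the price reduced by the given percentage."""
        return price * (1 - percent / 100)

    # services/checkout/totals.py -- a consumer, updated in the same commit
    from libs.pricing.discount import apply_discount

    def order_total(subtotal: float) -> float:
        # If apply_discount's signature changes, this call site is fixed
        # in the same commit, and checkout's tests run against that
        # commit before anything merges.
        return apply_discount(subtotal, 10.0)

The library change and every call-site update land as one revision, so there is never a window where a consumer is broken.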


Every host running a particular commit is running the code you think it is. No submodules or internal packages. If you updated the Button component in the design system, when your commit is deployed, every service that gets deployed has the new button now.


I'd say there are four main advantages, summarizing what other comments are saying but also drawing from my own experience:

- atomic PRs. All the changes for a migration/feature living in one spot makes development much easier, especially when dealing with API changes and migrations

- single history. This is useful when debugging. A commit can more easily encapsulate the state of "the whole system" as opposed to a single part of it. This makes reverting, if necessary, easier

- environment consistency. Updating the linting tool, formatting tool, UI library, etc. is never a priority, so there's always drift, where an old repo gets stuck with old tools, old dependencies, and an old environment

- not shipping your org chart is easier when everyone can see and work on the whole codebase, as easily as possible.


Dependency versioning is much smoother.

Example: Service A requires version 1.1 of libFoo and libFoo 1.1 requires version 0.1 of libBar. But Service A also directly uses libBar version 0.2. Now you have a conflict.

If libFoo and libBar are internal code stored in a monorepo, they're automatically version-compatible because there is only one version of both.
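Roughly, with a made-up file path and pip-style pins:

    # multi-repo world: service-a/requirements.txt
    libFoo==1.1     # libFoo 1.1 itself pins libBar==0.1
    libBar==0.2     # direct dependency -> unresolvable conflict

    # monorepo world: nothing to reconcile; service A, libFoo and
    # libBar are all built from the same commit of the same tree.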


How do you coordinate deploying a change that requires six different repos to be deployed to six different systems at the exact same time? With a monorepo, you're still deploying to six systems, but at least there's only one commit SHA to keep track of.


understated -> overstated


Meta also has a massive monorepo accessed primarily through cloud devservers.

When several of the world’s most successful software companies use this approach, it’s hard to argue that it’s inherently bad. Of course it’s sensible to discuss what lessons apply to smaller companies who don’t have the luxury of dedicated tooling teams supporting the monorepo and dev environment.


Just because some successful companies use some approach doesn't make it the best practice. I have seen firsthand the nuisance of a monorepo: it took almost 15 minutes to correctly switch branches on Intel machines (and spiked the CPU quite a bit by causing Windows Defender to panic). It has the decent benefit of easy code sharing, but build and test are soul-sucking experiences, and if someone accidentally decides to run some updated formatter or linter rule, the whole MR becomes a nightmare to review correctly (I once had an MR with 2k+ changes and had to request a rollback so that only what they actually wanted to change was committed).


> took almost 15 minutes to correctly switch branches on Intel machines

This can probably be fixed with trivial tuning. Just configuring Git to fetch only your branches would speed up the branch switching significantly.
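For example, something along these lines (branch and path names are placeholders, and the exact options depend on your Git version, so treat it as a starting point rather than a prescription):

    # fetch only the branches you actually work on
    git config remote.origin.fetch "+refs/heads/main:refs/remotes/origin/main"
    git config --add remote.origin.fetch "+refs/heads/my-feature:refs/remotes/origin/my-feature"

    # materialize only the directories you need
    git sparse-checkout set services/my-service libs/shared

    # cache filesystem state instead of rescanning the whole tree
    git config core.fsmonitor true
    git config core.untrackedCache true

Excluding the working tree from Windows Defender's real-time scanning tends to help with the CPU spikes as well.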

> build and test are soul sucking experiences

Why? It doesn't have to be. If you are going to build the entire monorepo, then yes, but this should only happen when you are running CI, and even then you can break down the builds into smaller components.
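As a toy illustration of that idea (hypothetical script and package layout; real monorepos use Bazel/Buck/Nx-style build graphs for this), CI can run tests only for the packages a commit actually touches:

    # ci/select_tests.py -- run tests only for the packages a commit touches
    import subprocess
    import sys

    # hypothetical top-level packages in the monorepo
    PACKAGES = ["libs/pricing", "services/checkout", "services/billing"]

    def changed_files(base: str = "origin/main") -> list[str]:
        out = subprocess.run(
            ["git", "diff", "--name-only", base, "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    def affected_packages(files: list[str]) -> list[str]:
        return [p for p in PACKAGES if any(f.startswith(p + "/") for f in files)]

    if __name__ == "__main__":
        pkgs = affected_packages(changed_files())
        if not pkgs:
            sys.exit(0)  # nothing relevant changed, skip the test run
        # run each affected package's test suite
        sys.exit(subprocess.call(["python", "-m", "pytest", *pkgs]))

Real build systems also walk the dependency graph so that consumers of a changed package get tested too, but the principle is the same.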

> the whole MR becomes a nightmare to correctly review

Not if you set up code ownership properly. You also need to think about what happens in case of emergencies, so having a selected list of "super users" and users with permission to bypass reviews is important.
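For example, a CODEOWNERS file at the repo root (supported by both GitHub and GitLab; the paths and team handles below are made up, and the last matching rule wins) routes each directory's changes to a specific owning team:

    # fallback owners; the catch-all goes first so specific rules override it
    *                      @acme/dev-infra

    /libs/pricing/         @acme/payments
    /services/checkout/    @acme/checkout

    # formatter and linter configuration
    /tools/format/         @acme/dev-infra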

It sounds like this company wanted a monorepo, but nobody invested any money or time to actually think about developer productivity. When this happens, yes, of course it won't be good, because no project succeeds like this. The nice thing about a monorepo is that instead of 1,000 repos with tooling all over the place and no specialist to take care of them, you can have one repo with really good tooling and a team dedicated to just keeping it running smoothly. But if nobody is actually taking care of the monorepo, it will rot just like any other codebase.


“Someone autoformatted the whole thing under new settings at the same time as introducing a new feature” is hardly a monorepo problem. That could be a pain in the ass to review even in a single file. But the flip side, someone cleanly wanting to do a mass autoformat or autorefactor, is much easier in a monorepo than in split repos.


Why would you feel obliged to accept an MR in which someone has accidentally changed large amounts of code?


Nothing you describe is inherent to monorepos. Git is slow, yes, but go use hg. Build and test are slow? That's a CI problem: you didn't allocate enough resources to the build system. Someone ran a formatter accidentally? That's that person's mistake.


Meta also uses React, and we know what a mess that introduced to the world...


> I wouldn't be surprised if many people need a safari map or README documentation in every single folder

Is...documentation a bad thing?


imo monorepos are great, but the tooling is not there, especially the open-source options. Most companies using monorepos have their own tools tailored to them.


Very much recommended practice by many, with, of course, caveats and situations where perfect is the enemy of good, etc., etc.

e.g. https://trunkbaseddevelopment.com/monorepos/



