Hacker News: nightfader's comments

Homebrew has been available for Linux for about three years now. I've been using it without issues: https://docs.brew.sh/Homebrew-on-Linux


OK, but what I like about Homebrew on Mac is that when I'm having an issue like "popular stack X broke after updating", it's probably me and 10k+ other people out there, so by the time I hit the problem it's already under investigation on GitHub. I'm not sure the same applies to Homebrew on Linux. Even ignoring the differences between distros, how popular are Homebrew on Linux and the Linux desktop in comparison?


I wish more of those 10k people would help get others off of a package manager that is so fragile and convoluted that updating so often leads to popular things breaking.

Things like MacPorts and pkgsrc do things in an arguably much simpler, more Unixy way, without the contortions that so often seem to leave Homebrew in a bind after routine operations like updating.


I've never experienced a broken Homebrew, and I've used a Mac for years.


The comment was in response to parent's stated complaint, namely having to wait for someone else to resolve issues with popular packages being broken after an update, which has been the experience of more than one user.


Neither have I. My main complaint is that it's slow.


I'm curious: what's the benefit? I use Homebrew as a Linux-style package manager for macOS. On Linux I just use the distro's package manager.


If you need to build something from source (my use-case: Vim, so I can change which language bindings exist in the resulting build) it can sometimes be a lot easier than cloning and using the "raw" C/Make build system.

Also, assuming a downstream distro like Debian or Ubuntu, what's in Homebrew is likely a more up-to-date package. You could fiddle with adding Debian testing or some PPA, or you could just use Homebrew.

(FWIW: I use Arch and the AUR on my desktop Linux installs these days, and it's essentially the same process. But still using Homebrew on the Mac, and occasionally in Linux when I'm not on a desktop)


That's also why Perforce is slow as heck unless you throw massive resources at it. I also work in the chip industry, BTW.


I occasionally used to start a sync, go get coffee, chat with colleagues, read and answer my morning email, browse the arXiv, and then wait a few more minutes before I could touch the repo. In retrospect, I should have set up a cron job for it all, but it wasn't always that slow and I liked the coffee routine. We switched to git. Git is just fast. Even cloning huge repos is barely enough time to grab a coffee from down the hall.


I mean, "massive resources" is just de rigueur across the chip industry now. The hard in hardware is really no longer about it being a physical product in the end.


I've only used Perforce for two years and it didn't feel slow at all. The company wasn't exactly throwing money at hardware.


I don't like it (but used it for many years).

I love Git, but, then, I don't have a workflow that would benefit from Perforce.


How is it not bad design? Let's say you are working in a team: would you really want your colleagues spending a significant amount of time cloning your artifacts? The same applies even to non-developers. Even if it's my grandma, she's not going to want to wait an hour to download a giant file from version control, assuming she even knows what a VCS is. Large blobs can go into versioned object storage like GCS or S3.


In Subversion at least, you'd do a partial checkout. If you don't need a particular directory you just don't check it out. If you lay out your repo structure well there's no problem. It was incredibly convenient.

I've tried many different SCMs over the years, and I was happy when git took root, but its poor handling of large files was problematic from the beginning. Git being bad at large files turned into a best practice of not storing large files in git, which was then shortened to "don't store large files in SCM." I think that's a huge source of our availability and supply-chain headaches.

I have projects from 20 years ago that I can still build because all of the dependencies (minus the compiler -- I'm counting on it being backwards compatible) are stored right in the source tree. Meanwhile, I can't do that with Ruby projects from several years ago because gems have been removed. I've seen deployments come to a halt because no startup runs its own package server mirror, and those servers go offline or a package gets deleted mid-deploy. The infamous left-pad incident broke a good chunk of the web, and that wouldn't have happened if the package had been fetched once and then added to an appropriate SCM. Every time we fetch the same package repeatedly from a package server, we're counting on it not having changed, because no one does any sort of verification any longer.
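The verification point is easy to act on even without vendoring: record a digest when you first fetch an artifact and check it on every later use. A minimal sketch using only the standard library (the file name and payload here are made up for illustration):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pinned(path: Path, pinned_digest: str) -> bool:
    """Check a vendored artifact against the digest recorded at fetch time."""
    return sha256_of(path) == pinned_digest

# Example: vendor a (fake) package tarball and pin its digest in-repo.
pkg = Path("leftpad-1.0.0.tar.gz")
pkg.write_bytes(b"module.exports = function leftpad(s, n) { /* ... */ }")
pinned = sha256_of(pkg)          # record this alongside the vendored file
assert verify_pinned(pkg, pinned)
```

This is essentially what lockfile hash pinning does; the point is that the check runs on every fetch, not just the first.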


SCC systems that handle big files don't suffer from the "you have to clone all the history and the entire repo all the time" problem that git suffers from. At least Perforce doesn't...

git has its place but it's really broken the world for how to think about SCC. There are other ways to approach it that aren't the ways git approaches it.


When you make a video game you want version control for your graphics assets, audio, compiled binaries of various libraries, etc. You might even want to check in compiler binaries and other things you need to get a reproducible build. Being able to chuck everything in source control is actually good. And being able to partially check out repositories is also good. There is no good technical reason why you shouldn't be able to put a TB of data under version control, and there are many reasons why having that option is great.


The versioned object storage solves nothing. If your colleagues need the files, they're going to have to get them, and it's going to be no quicker getting them from somewhere else. Putting them outside the VCS won't help. (For generated files, you may have options, and the tradeoffs of putting them in the VCS could be not worth it. But for hand-edited files, you're stuck.)

If the files are particularly large, they can be excluded from the clone, depending on discipline and/or department. There are various options here. Most projects I've worked on recently have per-discipline streams, but in the past a custom workspace mapping was common.


> Would you really want your colleagues spending a significant amount of time cloning your artifacts?

Not just the artifacts, but their entire history. That is a problem that Git has out of the box, but there is no reason it needs to work that way by default. LFS should be a first class citizen of a VCS, not an afterthought.


So how would you version a game that needs assets? These files must be versioned but can be very big, for example long cutscene videos.

Some projects need the ability to version big files; there is a good reason why Perforce exists and is widely used in the gaming industry.


I am not saying that it is a better UX, but hashed/versioned blobs on S3 would mostly work depending on tooling integration.
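The "hashed blobs" idea can be sketched with a content-addressed store: the VCS commits only a small digest, and the bytes live elsewhere. Here is a local stand-in for S3/GCS versioned storage, using only the standard library (all names are invented for illustration):

```python
import hashlib
from pathlib import Path

class BlobStore:
    """Content-addressed blob store: a local stand-in for versioned
    object storage such as S3 or GCS. Blobs are stored under their
    SHA-256 digest; the VCS tracks only the small digest string."""

    def __init__(self, root: Path):
        self.root = root
        root.mkdir(parents=True, exist_ok=True)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Idempotent: identical data always lands at the same key.
        (self.root / digest).write_bytes(data)
        return digest

    def get(self, digest: str) -> bytes:
        return (self.root / digest).read_bytes()

store = BlobStore(Path("blobs"))
key = store.put(b"\x00" * 1024)   # pretend this is a cutscene video
# `key` is what gets committed to git, not the megabytes themselves.
assert store.get(key) == b"\x00" * 1024
```

With real object storage, `put`/`get` would be API calls and the bucket's versioning would keep old blobs around; the commit still only carries the digest.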


That's building a custom version control on top of the version control you're already using.


Not really; it's like building a custom storage layer for your VCS.

You're still relying only on git as the source of truth for which artefacts belong to which version.


Isn’t that essentially what git lfs is?


I believe so, but with different UX. In almost every case I expect git lfs to be better, but I can see reasons to use more custom flows.
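Roughly, yes: git-lfs commits a small pointer file and moves the blob itself to a separate server. The pointer format is plain text, per the Git LFS spec; a sketch that builds one (the fake WAV bytes are just a stand-in):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build the text of a Git LFS pointer file for `data`.
    The pointer is the small file git actually commits; the blob
    itself is uploaded to the LFS server, keyed by its SHA-256."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

blob = b"RIFF....WAVE"          # stand-in for a large audio asset
print(lfs_pointer(blob))
```

A clean/smudge filter swaps the pointer and the blob transparently on checkout, which is where most of the setup friction comes from.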


That's what object storage with versioning turned on is for, e.g. GCS or S3.


Although blob storage works well for versioning, you have to make heavy use of the underlying proprietary API to get at those versions, and I'm not sure you can do more complex operations, like diff and bisect between versions, the way you can with git.


Why use git at all then? Just use an object store with versioning turned on.


Because git excels at relatively small text files and patching; diffing binary blobs like JPEGs, audio, or video is difficult.


But that's my point: why can't a version control system be good for this as well? It's the same thing underneath. Why do we have to split these different use cases across different tools and hope a foreign key constraint holds?


We've got lots of disk-backup tools that handle this just fine, deduplicating blocks and compressing where they're able.
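The dedup trick those backup tools use is simple to sketch: split data into blocks and store each block once, keyed by its digest. Real tools usually use content-defined chunking, which handles insertions better, but fixed-size blocks show the idea (everything here is illustrative):

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096) -> dict[str, bytes]:
    """Fixed-size block deduplication: identical blocks are stored
    once, keyed by their SHA-256 digest."""
    unique = {}
    for i in range(0, len(data), block_size):
        chunk = data[i:i + block_size]
        unique[hashlib.sha256(chunk).hexdigest()] = chunk
    return unique

# Two "versions" of a large asset that share most of their bytes:
v1 = b"A" * 4096 * 10
v2 = b"A" * 4096 * 9 + b"B" * 4096   # one block changed
unique = dedup_blocks(v1 + v2)
# 20 blocks seen, but only 2 distinct ones need storing.
assert len(unique) == 2
```

A VCS built on this would store a new version of a binary asset at the cost of its changed blocks, not its full size.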


The whole point of git rejecting large blobs is that they don't belong in a VCS, and for those who need them there is git-lfs, as the author mentioned. I don't see a problem with that approach, because I personally don't like my git repos growing large after just a few commits, which then makes every clone by other devs take a long time. This is the whole principle behind monorepos: if you go the monorepo route, it's in a team's or project's best interest to keep the repo size small, so a fresh clone by a newly onboarded dev, or during a CI pipeline, doesn't take forever.

Fossil is an all-in-one VCS with a wiki, issues, etc., which I don't appreciate: for one, it's not feature-rich, and for another, it bloats backups and restores. So I prefer git's Unix philosophy of doing one thing and doing it really well. There are some philosophical and usability differences between Fossil and git too, but in the grand scheme of things it doesn't matter once one has been using git for a long time. Fossil doesn't have an ecosystem either, and making it work with CI/CD is a pain, because CD tools like Argo CD or Flux and CI systems like GitLab/GitHub/Circle/Travis don't work with Fossil out of the box.


> The whole point that git rejects large blobs is primarily because they don't belong in VCS.

Who are you to say that my blobs don't belong in version control? Where does a versioned asset file for a website or a game go, if not in version control? If the answer is "somewhere else, referenced by the git commit", then you're accepting that the data belongs in version control but that git can't handle it.

> But for those who need large blobs there is git-lfs as the author mentioned.

git-lfs isn't git, though. It's a bodge on top of git that breaks many of the assumptions about git and requires special handling and setup. If it _were_ a core part of git I would agree, but it's not.

> So I prefer gits Unix philosophy of doing one thing but doing it really well

Git is tightly coupled to a _bunch_ of Unix tools and doesn't work without them. Try running git on Windows and see that it installs an entire suite of POSIX tools (MSYS) just to let you run `git clone`.


People only think large blobs don't belong in VCS because they don't work well with Git.

As soon as a VCS comes along that actually handles that properly people will say "of course, it was obvious that it should have been like this all along!".

Git LFS is a proof of concept, not a real solution.

Unfortunately none of the new Git alternatives I've seen (Jujutsu, Pijul, etc.) are tackling the real pain points of Git:

* Submodule support is incomplete, buggy and unintuitive

* No way to store large files that actually integrates properly with Git.

* Poor support for very large monorepos where you only want to clone part of it.

In a way, Git is bad at everything that centralised VCS systems are good at, which isn't surprising given that it's decentralised. The problem is that most people actually use it as a centralised VCS and want those features.

