I find the way software is packaged in Go to be a terrible regression, but I'm more interested in why you think pip and venvs don't solve the problem for Python already?

Edit: I see from your reply that we're talking at cross purposes. I thought you meant source distribution, but you mean binary distribution to end users or deployment to production systems, and in that case I agree with you.



Being able to ship a self-contained binary of your application is a very powerful concept, one that many seem to agree on.

The way I see it, pip and virtualenv are not practical for deployment or distribution. You shouldn't have to download and install things during a production deployment. I even created a tool (https://github.com/objectified/vdist) to mitigate this problem, but doing it this way will always be a hack.
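
To make the pain concrete, here's a minimal sketch of one common mitigation, assuming pip 8+ and illustrative file names: resolve and download every dependency on the build machine, so the production host never touches the network.

  import subprocess

  # Build machine: resolve and download every dependency once.
  subprocess.run(
      ["pip", "download", "-d", "wheels", "-r", "requirements.txt"],
      check=True,
  )

  # Production host: install strictly from the local wheel cache,
  # never from the network (--no-index).
  subprocess.run(
      ["pip", "install", "--no-index", "--find-links", "wheels",
       "-r", "requirements.txt"],
      check=True,
  )

It works, but you're still shipping a directory of wheels plus an install step, not a single artifact.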


Remember that critical Go security update?

Usual procedure: update the shared library, restart affected services. Go: recompile everything.


Yeah, but... that's not really the "usual procedure" though. Nobody who knows what they are doing downloads OpenSSL by hand, compiles the new shared library, manually installs it, and manually restarts the affected services; doing it that way is itself proof that you don't know what you are doing. (Most charitably, you're doing a "Linux From Scratch" for educational purposes, which is just about the only valid reason.) Once you have introduced package management and/or system management tools, it no longer looks like a significantly different problem, and it's likely to be utterly swamped by the other, bigger problems that appear at scale.


No, you update the shared library using your distribution's package manager and then restart the affected services using one of the small scripts that exist for exactly that purpose (e.g. checkrestart from debian-goodies).

What's the issue with that?


Yeah, that's the usual refrain, but the only situation where you actually have shared libraries is on Linux with a package manager. In that case it is trivial to recompile all packages that depend on the insecure library anyway.

On Windows you have to bundle most shared libraries with your app, so you have to ship a new version of it anyway.


Sure, but that's one of the reasons I use Linux.


> update the shared library, restart affected services

... then watch for programs breaking at runtime because of some unrelated API change in the shared library.


That's why Debian Stable and RHEL exist. Security patches don't break the API.


I am willing to believe that they come closer to the ideal than others, but nobody is perfect. I'd rather discover incompatibilities myself while getting a new version of my app ready, before shipping, than try to understand why random users are incoherently reporting impossible app failures.


> Security patches don't break the API.

shouldn't

When the patched library is not part of Debian Stable's or RHEL's repositories (for example, if you require features from a release less than a year old), all bets on API stability are off.

OpenSSL and libc are not the only libraries which are patched for security that people use.


And heaven help you if Red Hat decides not to backport a critical bugfix. OpenSSL on CentOS 6 has 99 patch files, a script named "hobble-openssl", and non-trivial changes to the build system that affect linkage, making DIY backports less than trivial.


Which only needs to happen once.


Yes, but still: if you have a procedure in place to recompile and redeploy everything, you could just as easily deploy the libraries as well.


Plus, the system admin doesn't just have to wait for the new security-patched library to be ready; they have to wait for everyone who used Go to recompile and distribute their programs.


So you're redeploying one binary instead of another binary. You still need to deploy something.

In fact, what about just switching connections to freshly launched VMs?


> The way I see it, pip and virtualenv are not practical for deployment or distribution. You shouldn't have to download and install things during a production deployment. I even created a tool (https://github.com/objectified/vdist) to mitigate this problem, but doing it this way will always be a hack.

Please excuse my newbness, but don't Python wheels do most of this (besides compiling to a single package)?


Not under Linux, but it's coming. Although they have no way to include big dependencies like Qt and the like.


Why not just follow the Erlang/OTP practice of creating a "release" which contains everything you need, including the runtime system, deps, and your code into a self-contained tarball?


Pip and virtualenv solve the problem of creating and populating isolated runtimes for things you control. They don't really address distribution to end users, i.e., those who use OS packaging or app stores.
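
A minimal sketch of that isolated-runtime idea, using only the stdlib venv module (Python 3.4+ for with_pip) and assuming a POSIX layout; "requests" is just a placeholder package:

  import subprocess
  import venv

  # Create an isolated environment with its own pip.
  venv.create("env", with_pip=True)

  # Install into it using the environment's pip
  # (env/bin/pip on POSIX, env\Scripts\pip on Windows).
  subprocess.run(["env/bin/pip", "install", "requests"], check=True)

Handy for things you control; an end user should never have to see any of it.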

Distribution of python "binaries" is a real pain though. Twitter's PEX (https://engineering.twitter.com/university/videos/wtf-is-pex) is the best, in my opinion. (It's basically a packaged venv, so in that regard, I guess I agree.) It's still problematic and requires a writable filesystem just to run.

I build OS packages for python programs using PEX, and it's okay, but it's probably my least favorite distribution mechanism.
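
If you're curious what PEX builds on, the stdlib zipapp module (Python 3.5+) is a bare-bones version of the same trick: a zip archive with a shebang line that the interpreter executes directly. The "myapp" directory (containing a __main__.py) is hypothetical, and unlike PEX this bundles no third-party deps or native code.

  import zipapp

  # Pack a source tree into a single runnable archive.
  zipapp.create_archive(
      "myapp",                             # dir with a __main__.py inside
      target="myapp.pyz",                  # the single-file artifact
      interpreter="/usr/bin/env python3",  # shebang written to the archive
  )
  # Result: ./myapp.pyz runs anywhere a compatible Python exists.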


I found it quite hard to produce a Windows installer that my users can just run, one that installs my program plus all the libraries it depends on. If I recall correctly, it was hard to bundle OpenCV and other components that rely on native code.


Why is Go packaging terrible? Because we are going back to static binaries?

I write and ship Go and Python code every day, and I find that distribution is a great overall benefit - one that some developers for some reason seem to ignore.

It's not just convenient for me, but for various other parts of a project as well: fewer moving parts are welcomed by ops, and fast iterations help to meet requirements. Overall a very positive impact for a reasonable price tag: larger binaries, and recompilation overhead when a critical library needs a fix.


It's great when you own all the Go code and can deploy a fixed version anytime. It sucks ass when there is a critical vulnerability in libc, all your Go-based binaries were compiled against it, and you are waiting for the vendors to issue patches instead of just being able to upgrade libc.so.


The default Go compiler, gc, doesn't link to libc at all on Linux, but directly uses system calls, whose interface is stable. It does link (dynamically, I think) against libc on OS X and Windows, since the system call interface is not stable there.



