Hacker News

> Linux relies on dynamic linking extensively in two ways:
>
> - A dynamic library may be updated independently of its dependents. You can’t do this in Rust, see above.

This isn't a Linux thing. It's a popular-Linux-distribution thing. Linux distributions want to be in charge of which version of each dependency every package uses. But that means the same piece of software on different Linux machines will use subtly different versions of all of its dependencies.

Distribution maintainers love this, because they can patch libjpeg to fix a bug and it'll affect all programs on the system. But it's hell for the actual developers of those packages. If I use a dependency at version 1.2, Ubuntu might silently replace that with version 1.1 or 1.2.1 or something else - none of which I've actually tested. Debian might only have version 0.9 available, so they quietly try to patch my software to use that instead. If there's a bug as a result, people will blame me for it. Sometimes the Debian maintainers maintain their own patch sets to fix bugs in old versions - so they ship version 0.9.0-1 or something. When a user is having problems, it's now almost completely impossible for me to reproduce their environment on my computer.

Linux doesn't have to do any of this. It all works great with statically linked binaries. E.g. Alpine Linux uses musl instead of glibc and just statically links everything. This provides better performance in many cases, because library code can be inlined.

Apt is in theory language-agnostic. But in practice, this whole dynamic linking scheme only really works for pure C libraries. Even C++ is a bit too fragile in most cases for this to work. (I think C++ is in theory ABI-compatible, but in practice it only seems to work reliably if you compile all binaries with the same compiler, against the same version of the C++ standard library.)

The reason Go doesn't support dynamic linking is that Rob Pike thinks it's stupid: dynamic linking only ever made sense when we didn't have much RAM, and those days are way behind us. I think I generally agree with him. I have no idea where that leaves Rust.



> This isn't a linux thing. Its a popular linux distribution thing

You mean: it’s an OS thing.

Windows relies on it. macOS and iOS rely on it.

When shared frameworks get to the multiple GB size then you can’t boot the system without dynamic linking.

> Even C++ is a bit too fragile in most cases for this to work. (I think C++ is in theory ABI compatible, but in practice it only seems to work reliably if you compile all binaries with the same compiler, against the same version of the C++ STL.)

Maybe about a quarter of the shared libraries, and maybe half of the large ones (as measured by source size) on Linux are C++.

It’s true that the preferred idiom is to use C ABI around the edges, but not everyone does this.

C++ has a stable ABI even if you use different compilers these days. That’s been true for at least a decade now.

> The reason Go doesn't support dynamic linking is that Rob Pike thinks its stupid

I don’t care about what Rob Pike thinks is stupid.

> That dynamic linking only ever made sense when we didn't have much RAM, and those days are way behind us

Are you ok with me making that same argument in favor of Fil-C’s current weak sauce optimization story?

“Optimizing your language implementation only even made sense when we didn’t have much RAM and when computers were slow, and those days are way behind us.”


What shared frameworks are multiple gigabytes in size?? If that’s the case I don’t think the problem is dynamic linking. A hoarder shouldn’t solve their problem by renting more houses.

Honestly I’ve been thinking of making my own toy OS and I’m still a bit on the fence about dynamic linking. As I understand it, macOS puts a lot of their UI components into dynamic libraries. That lets them update the look and feel of the OS between versions by swapping out the implementation of the controls for all apps. I think that’s a good design. I like that they have some way to do that. And for what it’s worth, I agree with your earlier point about scripting languages and whatnot using dynamic linking in Linux for extensibility.

In general I want more programs to be able to do what PAM does and be programmatically extensible. Dynamic linking is a very efficient way to do that. I think it’s really cool that Fil-C can transparently provide safety across those API boundaries. That’s something Rust can’t really do until it has a stable ABI. In practice it doesn’t usually take much code to wrap / expose a C API from Rust. But it’s inconvenient and definitely unsafe.

> Are you ok with me making that same argument in favor of Fil-C’s current weak sauce optimization story?

No, of course not. Static linking is a mixed bag optimisation-wise: while code is duplicated in RAM, static linking also allows inlining and dead code elimination. So it’s usually not that bad in practice. And binaries are usually tiny; it’s an application’s data where I want efficiency.

I think Fil-C’s optimisation story is fine for many use cases. It should be able to be as efficient as C#, Go, and other similar languages - which are pretty fast, more than fast enough for most applications. But there’s a lot of software I’d like to keep in fast native code, like databases and browsers. So I wouldn’t use Fil-C there. And in the places where I don’t mind slower code, I’d rather use C#, Go, or TypeScript. So for me that leaves Fil-C as an interesting tool primarily for sandboxing old C code.

If that were how Fil-C was talked about and positioned, that would be fine. But that’s not the narrative. I’m frustrated with some of the hypocrisy I’ve seen online from “influencers” like Casey M. Last week C and systems code were good because of how fast bare metal C runs. But now C is still great because it’s memory safe through Fil-C, so why even bother with Rust? What liars! They just like C and don’t want to admit it. If they really cared about performance and systems code, they wouldn’t find Fil-C exciting. And if they actually cared about safety, they wouldn’t have been so dismissive of memory safe languages before Fil-C came out. Liking C is fine. Hating Rust is fine. There’s a lot to dislike. I just don’t like being lied to.

“I like Fil-C because it lets the Linux ecosystem have safety without needing to change to another language. I like C and want us to use it forever” is - I think - what a lot of people actually believe. If that’s the case, I wish people would come out and say it.


People use C not because they like it, but because it's the lingua franca [1]. As long as The Unix Programming Environment [2] remains dominant, The C Programming Language [3] will too.

1. https://humprog.org/~stephen/research/papers/kell17some-prep...

2. https://amazon.com/dp/013937681X

3. https://amazon.com/dp/0131103628



