This is very exciting for zig projects linking C libraries. Though I'm curious about the following case:

Let's say I'm building a C program targeting Windows with MinGW & only using Zig as a cross compiler. Is there a way to still statically link MinGW's libc implementation or does this mean that's going away and I can only statically link ziglibc even if it looks like MinGW from the outside?


This use case is unchanged.

If you specify -target x86_64-windows-gnu -lc then some libc functions are provided by Zig, some are provided by vendored mingw-w64 C files, and you don't need mingw-w64 installed separately; Zig provides everything.

You can still pass --libc libc.txt to link against an externally provided libc, such as a separate mingw-w64 installation you have lying around, or even your own libc installation if you want to mess around with that.
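For reference, the libc.txt that --libc consumes is a small key=value file; running `zig libc` prints a template you can fill in. A sketch for pointing at a separate mingw-w64 installation — the field names come from that template, but every path below is made up for illustration:

```ini
# The directory that contains `stdlib.h`.
include_dir=/opt/mingw64/x86_64-w64-mingw32/include

# The system-specific include directory.
sys_include_dir=/opt/mingw64/x86_64-w64-mingw32/include

# The directory that contains the CRT startup objects.
crt_dir=/opt/mingw64/x86_64-w64-mingw32/lib

# MSVC-only fields; left empty for a MinGW toolchain.
msvc_lib_dir=
kernel32_lib_dir=

# The directory that contains libgcc, if needed.
gcc_dir=
```

You would then build with `zig cc -target x86_64-windows-gnu --libc libc.txt ...`.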

Both situations unchanged.


That's cool. I imagine I could also maintain a MinGW package that can be downloaded through the Zig package manager and statically linked without involving the zig libc? (Such that the user doesn't need to install anything but zig)

That's a good way to sell moving over to the zig build system, and eventually zig the language itself in some real-world scenarios imo.


do you suspect it will be possible to implement printf??

while we're talking about printf, can i incept in you the idea of making an io.printf function that does print-then-flush?


It's completely possible to implement printf. Here is my impl (not 100% correct yet) of snprintf for my custom libc, implemented on top of a platform I'm working on: <https://zigbin.io/ab1e79> The va_arg stuff is extern because Zig's va_arg support is pretty broken at the moment. Here's a C++ game ported to the web using said libc, running on top of the custom platform and a web frontend that implements the platform ABI: <https://cloudef.pw/sorvi/#supertux.sorvi> (you might need javascript.options.wasm_js_promise_integration enabled if using a Firefox-based browser)


yeah I just thought there are "compiler shenanigans" involved with printf! zig's va arg being broken is sad, I am so zig-pilled, I wish we could just call extern "C" functions with a tuple in place of va arg =D


The only thing C compilers do for printf is statically analyze the format string for API usage errors. Afaik that isn't possible in Zig currently. But idk why you'd downgrade yourself to the printf interface when std.Io.Writer has a `print` interface where fmt is comptime and args can be reflected, so it catches errors without special compiler shenanigans.


I'm thinking: do a translate-c and then statically catch errors using my zig-clr tool.


Can you build GUI programs with this? I'm thinking anything that would depend on GPU drivers. Anything built with SDL, OpenGL, Vulkan, whatever.


No, in my experimentation I tried to convert OBS to a static build and its GUI stopped working; I'm not exactly sure why. I haven't tested SDL, OpenGL, etc., so I can't say whether they work at the current stage. It should be possible to make them work eventually, since CLI applications (IO and everything) work just fine, which is why I'm unsure what caused my OBS Studio error. Perhaps you can try it and share the results, and let me know if you need any help!


Check Detour out: https://github.com/graphitemaster/detour?tab=readme-ov-file#...

I suspect that with a combination of Detour & Zapps it could be possible.


If your use case is generating html, MathML is supported in all modern browsers: https://developer.mozilla.org/en-US/docs/Web/MathML#browser_...


That doesn't give you independence from the libc, does it? By extension you lose distro-independence too (not sure if Detour supports musl-based ones, need to run tests).

Agree that IPC will be more secure and stable though.

I imagine Detour is mostly targeting closed source projects trying to run on as many distros as possible.


No UNIX has independence from libc; Linux is the exception to the UNIX rule that libc is the OS API, since traditionally syscalls aren't ABI stable.

This approach isn't portable to other UNIX like platforms.


I'm only thinking in terms of Linux distributions since I never needed to deploy software on other UNIXes (excluding macOS, but Apple constantly forces changes anyway).

Do other UNIXes have any problems similar to glibc ABI problems that Linux users experience, or do they stabilise the libc ABI similar to how Linux keeps syscalls stable?


There are naturally ABI breaks between major OS versions, outside of what POSIX requires.


I will be even more impressed with Linux syscall stability if your implication is that (some) people need to recompile their software for each major update on all other UNIXes.


Linux is only friendly for FOSS projects, it hardly has the same stability as commercial UNIX systems for closed source software.


That's only true if you're making CRUD software and easily replaceable by any random programmer. For anything more serious LLMs are only useful as a better search engine.


Keep in mind there _may_ be a negative feedback loop there.

If you're building your software in a way that won't be able to perform better with superior disk/db/network performance, then it isn't worthwhile to ever upgrade to a more performant disk/db/network.

If it is possible, make sure your software will actually be faster on a faster disk rather than just testing on a slow disk and thinking "well we're I/O bound anyway, so no need to improve perf".


Okay, C++ is believable, but can you really build a Java / .NET project that was not touched for 20+ years with no changes to the code or the build process (while also using the latest version of the SDKs)?

I imagine you can _make_ a project compile with some amount of effort (thinking maybe a week at most), but it wouldn't exactly be "unzip the old archive and execute ./build.bat".


Yes, because Ant has existed since 2000, Maven since 2004, and MSBuild since 2003.

Before it was a common procedure to have central package management, we used to store libraries (jars and dlls), on source control directly in some libs folder.

Afterwards, even with central package management, enterprise software, when done right, does not call out to the Internet on every build; rather, there are internal repositories curated by legal and IT, and only those packages are allowed to be used in projects.

So the tooling is naturally around after 20+ years, no one is doing YOLO project management when playing with customer's money.

As for the "...latest version of the SDKs...", that is moving the goalposts; there is no mention of it in:

> Go is the only language where I've come back to a nontrivial source code after 10 years of letting it sit and have had zero problems building and running. That alone, for me, more than makes up for its idiosyncrasies.


Ant and Maven have existed for a long time, but for me they didn't prevent Java (and other JVM language) projects from suffering significant bitrot in the build process.

For example, I worked on a project that just stopped being able to be built with Maven one day, with no changes to the JVM version, any of the dependencies, or the Maven version itself. After a while I gave up trying to figure it out, because the same project was able to be built with Gradle!

Older Scala projects were a pain in the ass to build because the Typesafe repositories stopped accepting plain HTTP connections, requiring obscure configuration changes to sbt. I've never had to deal with things like that in the world of Go.


All fair, but:

> As for the "...latest version of the SDKs..", that is moving the goal posts, there is no mention of it on [...]

I thought it was implied since tooling & library breakages over the years happen and sometimes you can't just get the old SDK to run on the latest Windows / macOS. If the languages and Ant/Maven are backwards compatible to that extent, that's actually pretty good!

I had to deal with moving a .NET Framework 4.7 project to .NET Standard 2.0 and it wasn't effortless (although upgrading to each new .NET release after that has been pretty simple so far). We took a couple of weeks even though we had minimal dependencies since we're careful about that stuff.


I landed on "immutable distros by default for average users" as well. It is a more Windows/macOS like experience where it is much harder to mess up the system.

Flatpak guarantees everything will work in most cases, and for the rest there's AppImage; we just need to get most devs to distribute AppImages. BoxBuddy with distrobox will solve _all_ remaining edge cases where someone says "X works with Y in Z on my machine", since you can replicate their machine in distrobox.

I know this is trading program size for convenience, but that's what Windows and macOS do too. It is better to be on some immutable Linux distro than on Windows, in my opinion. We don't have to force the average person, who just wants their computer to work, to install (extreme example) Gentoo or whatever.


https://archive.md/cvzgy

(was down for me, this fixed it)


> [...] where we make more $$$ than 95% of the population [...]

That doesn't make a difference. Suppressed wages are suppressed wages.

But, since you care about the comparison, people doing the suppressing are making even more.

