Using parens to pass type arguments was one of the things that turned me off of Zig. For a language that prioritizes "no hidden control flow," it sure did a lot to make various syntax conventions _masquerade_ as control flow instead.
> Using parens to pass type arguments was one of the things that turned me off on Zig.
It's just regular comptime function calls whose parameters or return values happen to be comptime type values (types are comptime values in Zig). I find that a lot more elegant than inventing a separate syntax for generics, and it lets you trivially do things that are complex in other languages (like incrementally building complex types with regular Zig code).
It might be unusual when coming from C++, Rust, or TypeScript, but it feels 'natural' pretty much immediately after writing a few lines of generic code.
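To make that concrete, here's a minimal sketch of the pattern (the `Pair` type and its `swapped` method are invented for illustration): a generic type is just an ordinary function that takes a `comptime` type parameter and returns a struct type.

```zig
const std = @import("std");

// A generic pair: an ordinary function, evaluated at compile time,
// that takes a type as a parameter and returns a new struct type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swapped(self: @This()) @This() {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

pub fn main() void {
    // Pair(i32) is a regular function call that yields a type.
    const p = Pair(i32){ .first = 1, .second = 2 };
    const q = p.swapped();
    std.debug.print("{} {}\n", .{ q.first, q.second });
}
```

`Pair(i32)` is just a function call; there's no separate generics grammar to learn.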
Macros can have control flow, so compile-time control flow is definitely possible, but perhaps we've trained ourselves not to think of control flow this way because complicated compile-time logic is generally frowned upon as a footgun.
Perhaps Zig is the language that deliberately blurs the line between what runs when (by essentially integrating what look like macros into runtime code, without any conspicuous hashtaggy syntax), and so a Ziggy wouldn't see compile-time control flow as something weird.
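A sketch of what that blurring looks like (the `IntFitting` helper is hypothetical): the `if` statements below are perfectly ordinary Zig, but they execute at compile time because their inputs are comptime-known.

```zig
const std = @import("std");

// Ordinary control flow, evaluated at compile time: pick the narrowest
// unsigned integer type that can hold `max`. Nothing in the syntax marks
// this as a "macro"; it's just code with comptime-known inputs.
fn IntFitting(comptime max: u64) type {
    if (max <= std.math.maxInt(u8)) return u8;
    if (max <= std.math.maxInt(u16)) return u16;
    if (max <= std.math.maxInt(u32)) return u32;
    return u64;
}

test "IntFitting picks the smallest fitting type" {
    try std.testing.expect(IntFitting(200) == u8);
    try std.testing.expect(IntFitting(70_000) == u32);
}
```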
This isn't an interesting counterpoint. This is the de facto narrative regarding patent lawyers and patent trolls: firms masquerading as if they somehow contribute.
Very mid compared to Visual Studio in my experience. You don't even get a modules window, and there's a whole litany of core C++ debugging features missing.
I agree with this whole thread. I do use VS Code as my debugger frontend for C++ on Linux, but it's a real bummer compared to what's available on Windows. Hence this series!
Swift is pretty next level compared to Rust. Rust seems to be a fairly constant level of slow, and it slows down linearly as the project gets larger. Swift tends to have huge degradations from seemingly innocent changes that send the type inference off the rails. Although at least, if you spend the time to track these down, you can fix the worst offenders.
Pardon me for not getting your context, but are compile times a big issue in software development? I have never programmed professionally, and all my experience with code is from a couple of classes taken in college a decade ago.
When doing UI work, a common workflow is to have the code and the app side by side: make a small tweak in the code, recompile/hot-reload, look at the result, repeat. A long compile time makes this workflow a pain.
In general, you're right. But there are at least two cases where they're absolutely vital: anything involving a UI, and data exploration in data science (since you make a lot of frequent, small changes with the goal of fine-tuning something). For everything else, best practices like good testing and partial compilation make it moot. There are probably other contexts where it's valuable, but I've never had to deal with those.