I've tried it a few times and it has great features on paper, but in practice it gets in your way too much. I can spin up a C# dotnet project and write and test the code 10 times faster than in Rust. It might not perform as fast, but the hot code can be written in a small C library, using code/runtime analysis tools to catch any memory safety issues.
Writing performance-sensitive code in C/C++ and calling it via interop used to be the way to go in the .NET Framework days, but it has since become a performance trap.
Especially for small methods, calling them through interop is a deoptimization: they cannot be inlined, and each call involves a GC frame transition (which you can suppress) as well as an indirect jump and possibly an interop stub, unless you are statically linking the dependency into your AOT deployment. If the arguments are not blittable to C, marshalling too. Of course, this is still much, much faster than anything Java or Go can offer, but it is a cost nonetheless.
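To make the moving parts concrete, here is a minimal sketch of such an interop declaration (the "mynative" library and its "sum" export are made up for illustration):

    using System.Runtime.InteropServices;

    internal static class Native
    {
        // Hypothetical export: int sum(const int* data, int len) from "mynative".
        // Blittable arguments avoid marshalling; [SuppressGCTransition] (.NET 5+)
        // skips the GC frame transition for short, non-blocking calls. The call
        // still cannot be inlined and still pays an indirect jump / interop stub.
        [DllImport("mynative", EntryPoint = "sum", ExactSpelling = true)]
        [SuppressGCTransition]
        internal static extern unsafe int Sum(int* data, int len);
    }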
It also complicates the publishing process, because you have to build both the .NET and C parts and then package them together; considering the matrix of [win, linux, macos] x [x64, arm64], it turns into quite an unpleasant experience.
Instead, the recommended approach is to just keep writing C# code, except with pointer- and/or ref-based code. This is what CoreLib itself does for the most performance-sensitive bits[0]. Naturally, it looks intentionally ugly, much like unsafe code in Rust, but you can easily fix that with a few extension methods[1].
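A rough sketch of what that can look like (the At extension is a made-up name; CoreLib uses the Unsafe and MemoryMarshal helpers directly):

    using System;
    using System.Runtime.CompilerServices;
    using System.Runtime.InteropServices;

    static class SpanExtensions
    {
        // Hypothetical extension hiding the ref arithmetic behind a friendlier name.
        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        public static ref T At<T>(this Span<T> span, int index) =>
            ref Unsafe.Add(ref MemoryMarshal.GetReference(span), index);
    }

    static class Summing
    {
        public static int Sum(Span<int> values)
        {
            int sum = 0;
            for (int i = 0; i < values.Length; i++)
                sum += values.At(i);   // reads without a bounds check; the caller is
                                       // responsible for staying in range
            return sum;
        }
    }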
Thanks, I haven't had to drop down to C for a while, as the performance improvements in dotnet, along with features like AOT, Span<T>, etc., close the gap enough for the domain I work in.
Good to know you can remain within the framework and still get decent performance with unsafe pointers/refs, though. It would be interesting to see a good benchmark comparing C# using only the latest features against Rust, although I'm cognisant of the fact that there is more to it than pure performance (binary size, dependencies, GC, etc.).
The Rust standard library is far more conservative when it comes to vectorization, and auto-vectorization is far more fragile than people think - in both links they beat it in performance significantly ;)
Too niche; like Haskell, Lisp and other functional languages, it has a long learning curve and forces you down a paradigm. Languages like C#, Java and Python allow multi-paradigm programming: OO, functional, procedural, etc.
Exactly, I like to choose who I socialise with and have a vibrant social life outside work. I prefer to work from home, communicate over Teams and get my work done, rather than have to sit next to a random office bod who I may have nothing in common with, or worse, who has some kind of personality issue.
The TLDR is that it needs “function coloring”, which isn't necessarily bad (types themselves are “colors”); the problem is what you're trying to accomplish. In an FP language, it's good to have functions marked with an IO context, because there the issue is the management of side effects. OTOH, the difference between blocking and non-blocking functions is: (1) irrelevant if you're going to `await` those non-blocking functions, or (2) error-prone if you use those non-blocking functions without `await`. Kotlin's syntax for coroutines, for example, doesn't require `await`, as all calls are (semantically) blocking by default. You should need extra effort to execute things asynchronously.
One issue with “function coloring” is that when a function changes its color, all downstream consumers have to change color too. This is actually useful when you're tracking side effects, but rather a burden when you're just tracking non-blocking code. To make matters worse, for side-effectful (void) functions, the compiler won't even warn you that the calls are now “fire and forget” instead of blocking, so refactorings are error-prone.
In other words, .NET does function coloring for the wrong reasons and the `await` syntax is bad.
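A small sketch of the refactoring hazard (Order, Save and SaveAsync are made-up names, purely for illustration):

    using System;
    using System.Threading.Tasks;

    class Demo
    {
        record Order(int Id);   // hypothetical type, only for illustration

        // Before: a blocking, side-effectful method.
        static void Save(Order order) => Console.WriteLine($"saved {order.Id}");

        // After the refactoring: the function changed "color" to non-blocking.
        static async Task SaveAsync(Order order)
        {
            await Task.Delay(10);
            Console.WriteLine($"saved {order.Id}");
        }

        static void Main()
        {
            Save(new Order(1));        // completes before the next line runs

            SaveAsync(new Order(2));   // still compiles; the returned Task is never
                                       // awaited, so this is now fire-and-forget
                                       // (CS4014 is only reported inside async
                                       // methods, and Main here is not one)
            Console.WriteLine("done"); // may run, and the process may even exit,
                                       // before "saved 2" ever happens
        }
    }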
Furthermore, .NET doesn't have a usable interruption model. Java's interruption model is error-prone, but it's more usable than .NET's. This means that the “structured concurrency” paradigm can be implemented in Java (currently in preview), much like how it was implemented in Kotlin.
PS: the .NET devs actually did an experiment with virtual threads. Here are their conclusions (TLDR: virtual threads are nice, but they won't add them because async/await is too established):
You're supposed to either await a Task or block on it (thus blocking the underlying OS thread, which probably eats a couple of megabytes of RAM). It's a completely different system, more akin to what Go has been using.
This is not necessarily correct. Tasks can be run in a "fire-and-forget" way. Also, only the synchronous prelude of the task is executed inline in .NET.
The continuation will then be run on a threadpool worker thread (unless you override the task scheduler and continuation context).
Also, you can create multiple tasks in a method and then await their results later down the execution path, when you actually need them, easily achieving concurrency.
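For example (a minimal sketch; the URLs are placeholders):

    using System.Net.Http;
    using System.Threading.Tasks;

    class Fetcher
    {
        static readonly HttpClient Http = new HttpClient();

        static async Task<int> TotalLengthAsync()
        {
            // Start both requests without awaiting - they now run concurrently.
            Task<string> a = Http.GetStringAsync("https://example.com/a");
            Task<string> b = Http.GetStringAsync("https://example.com/b");

            // Await the results only at the point where they are actually needed.
            string first = await a;
            string second = await b;
            return first.Length + second.Length;
        }
    }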
Green threads are a more limited solution, focused first and foremost on solving blocking.
Blocking the thread with an unawaited task.Result is an explicit choice, and it will be flagged with a warning by the IDE and at build time that this may not be what you intended.
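Roughly (a sketch):

    using System;
    using System.Threading.Tasks;

    class BlockingExample
    {
        static async Task<int> ComputeAsync()
        {
            await Task.Delay(100);
            return 42;
        }

        static void Main()
        {
            Task<int> task = ComputeAsync();
            // Explicitly blocks the calling OS thread until the task completes.
            // IDEs and analyzers typically flag .Result / .Wait() on an unawaited
            // task, since await is usually what you want.
            int value = task.Result;
            Console.WriteLine(value);
        }
    }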
Yes, but this is supposedly transparent. At least until you interface directly with native libraries, or with leaky abstractions that don’t account for that.