
You could make the same argument for any language. It still requires you to think and implement the solution yourself, just at a certain level of abstraction.


This stands to reason. If you need to bridge different languages together like in your case, they need to speak a common tongue. REST/GraphQL/gRPC solve this problem in different ways. There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but from my own experience it isn't a pleasant experience. JavaScript or Python are dead simple, until you realise that 64-bit integers are not a thing in JS, and need to be handled as strings. Similarly, tuples will give you hell in Python.

On the other hand, if you don't need to cross that boundary, the BEAM will very happily talk to itself and let you send messages between processes without having to even think about serialisation or whether you're even on the same machine. After all, everything is just data with no pointers or cyclic references. That's more than can be said for most other languages, and while Python's pickle is pretty close, you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong), which is pretty insane when you think about it.

> I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane

I think this is pretty high praise, considering it's about as old as C and was originally designed for real-time telephone switches.


> There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but from my own experience it isn't a pleasant experience.

I would be interested to hear what was unpleasant. I've run inets httpd servers (which I did feel maybe exposed too much functionality) and yaws servers, and yaws seems just fine. Maybe yaws_api is a bit funky, too. I don't know the status of ACME integration, which I guess could make things unpleasant; when I was using it for work, we used a commercial CA, and my current personal work with it doesn't involve TLS, so I don't need a cert.

> you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong)

Ports are not network transparent. You can't directly send to a port from a different node. You could probably work with a remote Port with the rpc server, or some other service you write to proxy ports. You can pass ports over dist, and you can call erlang:node(Port) to find the origin node if you don't know it already, but you'd definitely need to write some sort of proxy if you want to receive from the port.


Perhaps I was a little harsh; this was a few years back when I was evaluating Elixir for a client, but I ended up going back to a TS/Node.js stack instead. While the Phoenix documentation is stellar, I found it difficult to find good resources on best practices. I was probably doing something stupid and ran into internal, difficult-to-understand exceptions being raised on the Erlang side, from Cowboy if I recall. In another case, I was trying to validate API JSON input, and the advice I got was to use Ecto (which I never really grokked) or pattern match and fail. In JS, libraries like Zod and Valibot are a dream to work with.

The result was a lot of frustration, having been thoroughly impressed by Elixir and Phoenix in the past, while knowing that I could already achieve the same goal with Node.js in less code and would be able to justify the choice to a client. It didn't quite feel "there" to pick up and deploy, whereas SvelteKit with tRPC felt very enabling at the time and was easily picked up by others. Perhaps I need another project to try it out again and convince me otherwise. Funnily enough, a year later I replaced a problematic Node.js server with Phoenix + Nerves running on a RPi Zero (ARM), with flawless cross-compilation and deployment.

> Ports are not network transparent

I stand corrected, thank you for the explanation!


64-bit ints have been a thing in JS for a while now


No, they aren't. You have to use BigInt, which will throw an error if you try to serialise it to JSON or combine it with ordinary numbers. If you happen to need to deserialise a 64-bit integer from JSON, which I sadly had to do, you need a custom parser to construct the BigInt from a raw string directly.


To extend upon this: memory generally has a single owner. When it goes out of scope, it gets freed [1]. The drop() function, which appears analogous to free() in C/C++, is actually just an empty function whose sole purpose is to take ownership and make it go out of scope, which immediately frees the memory [2].

> This function is not magic; it is literally defined as: pub fn drop<T>(_x: T) {}

This is usually more deterministic than GC languages (no random pauses), but can be less efficient for highly nested data structures. It also makes linked lists impossible without using "unsafe Rust", as they don't abide by the normal ownership rules.

[1]: https://doc.rust-lang.org/rust-by-example/scope/raii.html
[2]: https://doc.rust-lang.org/std/mem/fn.drop.html
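
A minimal sketch of what that looks like in practice (my own example; the Buffer type is made up for illustration):

    struct Buffer(Vec<u8>);

    impl Drop for Buffer {
        fn drop(&mut self) {
            println!("freeing {} bytes", self.0.len());
        }
    }

    fn main() {
        let a = Buffer(vec![0; 1024]);
        std::mem::drop(a);             // ownership moves into drop(), freed right here
        // println!("{}", a.0.len());  // would not compile: `a` has been moved

        {
            let _b = Buffer(vec![0; 2048]);
        }                              // `_b` goes out of scope, freed with no explicit call
    }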


Linked lists into arbitrary memory, yes. A linked list within a contiguous chunk of memory managed by a bump allocator: just as easy as in any language, no need for unsafe.

Admittedly not the easiest language to make a linked list in.
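
A rough sketch of that idea, in case it helps (my own example, using a plain Vec as the arena and indices in place of pointers rather than a real bump allocator):

    struct Arena<T> {
        nodes: Vec<Node<T>>,
    }

    struct Node<T> {
        value: T,
        next: Option<usize>, // index into `nodes` instead of a raw pointer
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { nodes: Vec::new() }
        }

        // Push a new head onto the list and return its index.
        fn push_front(&mut self, value: T, head: Option<usize>) -> usize {
            self.nodes.push(Node { value, next: head });
            self.nodes.len() - 1
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let mut head = None;
        for i in 0..3 {
            head = Some(arena.push_front(i, head));
        }
        // Walk the list: prints 2, 1, 0.
        let mut cursor = head;
        while let Some(i) = cursor {
            println!("{}", arena.nodes[i].value);
            cursor = arena.nodes[i].next;
        }
    }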


Never heard of shadcn or franken-ui, but they look identical, one links to X (Twitter), the other to Mastodon. What's the story there?


Franken UI is an HTML-first, open-source library of UI components based on the utility-first Tailwind CSS with UIkit 3 compatibility. The design is based on shadcn/ui ported to be framework-agnostic.


Thank you. As weird as it may be, I came here looking exactly for an HTML-first option and had a gut feeling that I would find it in the comments!

Thanks again!!


Oh wow, I hadn't seen Franken UI - this looks great; I can definitely look to port some of these.

I guess I've been taking an opinionated approach to start by taking components I had already built from my other projects and compiling them here for now.


In what sense is shadcn not framework agnostic?


not sure if trolling

It provides you with templates for react...? How can anyone argue that that's framework agnostic...?


When all you know is React everything looks like it needs a fat client?


yeah, I just have no idea what shadcn is, so I figured I'd ask for the sake of others who also have no idea.


It's a React library.


This is clean JavaScript syntax in my opinion and should be what people strive for. It's perfectly readable, it's faster, it does async correctly without any unnecessary computation, can be typed and will have a normal stack trace. Piping is cool when done right, but can introduce complexity fast. Elixir is a good example where it works wonderfully.


The two are not mutually exclusive; it's probably not an issue with horizontal scaling.


I think this is a great use case. From my experience, having everything in one language is a huge plus. You can pull data from the database and just inject it into the view. The closest I've gotten to this in the JS/TS world is a Prisma + tRPC + SvelteKit stack for E2E type safety, but there's a huge cost in complexity and language server performance, plus some extra boilerplate.

The main limitation is likely offline apps; LiveView requires a persistent connection to the server. I doubt this is something you'll encounter for your use case.


I'm planning on adding tRPC to the Prisma + Nest + Next stack so can you elaborate on "language server performance"?


I decided to use Tauri for the first time for a university project and it was absolutely painless to design a small and useful GUI application to programmatically generate schematics for photolithography masks.

- Single lightweight binary install and executable (~6 MB), clean uninstall

- Automatic updates (digitally signed, uploaded to a small VM)

- Integrates nicely with SvelteKit and TailwindCSS

- The Rust backend was able to integrate with GTSDK over FFI. The cmake crate made C++ compilation and linking automatic as part of cargo build, provided that a C++ toolchain is available (no problems even on Windows); see the rough sketch after this list.

- No scary toolchain setup with a load of licenses to review and accept (looking at you, Flutter. I'm a student, not a lawyer. Although perhaps this will also be a thing with Tauri + Android?)
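
For anyone curious what the cmake bullet looks like in practice, here's a rough build.rs sketch (my own illustration; the "native/gtsdk" path and "gtsdk" library name are placeholders, not the actual project layout, and the cmake crate needs to be listed under [build-dependencies]):

    // build.rs
    fn main() {
        // Configure and build the C++ project; the cmake crate drives the system
        // toolchain and returns the directory the artefacts were installed into.
        let dst = cmake::Config::new("native/gtsdk").build();

        // Tell Cargo where to find the resulting library and link against it.
        println!("cargo:rustc-link-search=native={}", dst.join("lib").display());
        println!("cargo:rustc-link-lib=static=gtsdk");
    }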

For a small project, I can't recommend it enough. I wouldn't know where to start with a C# or Qt GUI application, especially if I wanted to make it cross-platform.

It'll be interesting to see if it gains any traction in the mobile space. Flutter is great and may be better optimised for certain rendering techniques, such as infinite lists, but sticking with web technologies is a very compelling advantage.


.NET MAUI is cross-platform and very easy to get started with, but you would sacrifice a lot of performance to gain the convenience and simplicity of the development experience.


It's also a no-go for Linux. Otherwise I would be all over it.


There is work being done to address desktop Linux, but I agree that is one of the deficiencies.

https://github.com/jsuarezruiz/maui-linux/pull/37

The lack of a WASM target is another, although the UNO project previously provided such a target for MAUI's very closely related predecessor (Xamarin.Forms).

https://platform.uno/xamarin-forms/


I guess this comes down to personal preference. For me, this is mixing the interface with the implementation. You shouldn't need to know how something works to be able to use it; for me, that's the real overhead. Maybe this works on a small scale, but what if the source code changes?

That being said, I do like inspecting the source from time to time to understand it better, or to make up for missing documentation. Sometimes though, with this being JS, I wish that I could unsee the things that I've seen, code that production depends upon, deep within the dependency tree.

I agree with the idea of fluency when writing without types, but for me it's not about how fast you can write code. Code for me is a lot of rereading and understanding what the hell you wrote just a few days ago; I find typed code easier to get back into, and it's faster to find things that broke in parts of the codebase you're less familiar with when you change something.


I don't want to come off as dogmatically defending Rust; I write little Rust in comparison to JS, and I've done C and C++ for some embedded systems. C is a very different monster to most other languages, and I find people defending C to be just as proud and defensive as Rust programmers.

To address some of your complaints: yes, there are a lot of concepts to understand. I like wrappers; I found it crazy that in C, you first declare a mutex_t variable, and then you specifically have to call chMtxObjectInit(mutex_t *mutex) to initialise it [1]. If you forget? UB, and a kernel panic sometime in the future. I think Mutex::new() is far cleaner, and it's namespaced without arbitrary function prefixes. Binaries are tiny in comparison to JS/Python with deps, though they will be larger than C's. Compile times aren't that slow, and you can't make extra language features happen out of thin air.
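
For contrast, the Rust side of that point as a minimal sketch (my own example): the mutex is fully constructed by Mutex::new, so there is no separate init call to forget.

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32); // initialised and ready to use immediately
        {
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // unlocked automatically when the guard goes out of scope
        println!("{}", counter.lock().unwrap());
    }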

In C, I've found it commonplace to do a lot of clever and mysterious pointer and memory tricks to squeeze out performance and low resource utilisation. In embedded, there's usually a strong inclination towards "global" static variables, even declaring them inside function bodies because it "limits the visibility/scope of the variable". Not declaring a static variable inside a function is what knocked a few points off my Bachelor's robotics project.

I personally don't like this. It puts a lot of pressure on the programmer to understand the order of execution, and keep a complex mental model of how the program works. Large memory allocation, such as a cache, can be hidden in just about any function, not just at the top of a file where global variables are usually defined.

It sounds like what you're trying to accomplish is inherently unsafe, hence the "preaching", as in it requires the programmer's guarantee that 1) the data is fully initialised before it's accessed and 2) once the data is initialised, it's read-only and can therefore safely be accessed from other threads. C doesn't care, it will let you do a direct pointer access to a static variable with no overhead. Where's the cost? The programmer's mental model. I haven't tried, but I imagine that Rust's unsafe block will allow you to access static variables, just like in C with no overhead, effectively giving your OK to the compiler that you can vouch for the memory safety.

Rust solutions: the lazy_static crate (safe, runtime cost of checking whether it's initialised on every access), RwLock<Option<T>> (safe, runtime cost to lock and unwrap the option), unsafe (no overhead, memory-model cost and potentially harder debugging), or an extra &T function parameter (code complexity cost, "prop-drilling", cleaner imo). On modern hardware, the runtime cost is absolutely negligible.
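
To make the trade-offs concrete, here is a small sketch of a couple of those options (my own illustration, using the standard library's OnceLock instead of the lazy_static crate; the Cache type is made up):

    use std::sync::OnceLock;

    // Stand-in for the large cache mentioned above.
    struct Cache {
        data: Vec<u64>,
    }

    static CACHE: OnceLock<Cache> = OnceLock::new();

    fn cache() -> &'static Cache {
        // Initialised exactly once, on first access; later calls are a cheap check.
        CACHE.get_or_init(|| Cache { data: vec![0; 1024] })
    }

    // The "prop-drilling" alternative: no global at all, the caller owns the cache.
    fn lookup(cache: &Cache, index: usize) -> u64 {
        cache.data[index]
    }

    fn main() {
        println!("{}", cache().data.len());
        let local = Cache { data: vec![42; 8] };
        println!("{}", lookup(&local, 3));
    }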

Why would you not want to use Rust for a large project? This seems a bit contradictory to me. The safety guarantees, in my opinion, really pay off when the codebase is large and it's difficult to construct that memory model, especially with a team working on different parts. Instead, you offload that work to the compiler to check the soundness of your memory access in the entire codebase.

If you like C, by all means keep on using it; I enjoyed my foray into C, it's simple and satisfying, but I would much prefer Rust after spending a lot of time tracking down memory corruption. Rust's original design purpose was to reduce memory bugs in large-scale projects, not to replace C/C++ for the fun of it. We usually have a natural inclination towards what we know well and have used for a long time. Feel free to correct me if something is wrong.

[1]: http://www.chibios.org/dokuwiki/doku.php?id=chibios:document...


The point was that none of those Rust crates worked, or they required you to use a mutex in the end, which the solution would not actually need (not a zero-cost abstraction). I would've been fine using unsafe, but even with unsafe it felt like I was fighting the compiler. I would just write this particular function in C, or use the lower-level C FFI functions instead.

I maintain that Rust is more fun to write when you work at a higher level and treat it as a higher-level language, where you'll have less control over the memory model of the program. It all starts breaking down, and you need to become a "Rust low-level expert", when you want to work closer to the memory model (copy-free, shared memory, perhaps even custom synchronization/locking models ...). It does make sense, but in my opinion figuring out how to map your own model onto Rust concepts is not trivial; it requires lots of Rust-specific knowledge, which will take a long time to learn IMO.

When unsafe was marketed to me, I thought it was a tool I could use to escape Rust's clutches when I'm sure of what I'm doing and don't want Rust to fight me, but sadly it doesn't work that way in practice; the real way is to actually write C and call it from Rust.


Just for fun, I tried the static variable approach for myself. I have to agree with you, it's really hard; I gave up after half an hour. Rust doesn't seem to like casting references to pointers, which I understand, as I don't think there's a guarantee that they are just pointers. A &[T], for example, is a fat pointer (two words, as it also encodes the count). I think the correct approach here is to either accept runtime overhead or pass a context to each function as a parameter.
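
A quick way to see the fat-pointer point (my own check, on a 64-bit target):

    use std::mem::size_of;

    fn main() {
        println!("&u8:   {} bytes", size_of::<&u8>());   // 8: just the data pointer
        println!("&[u8]: {} bytes", size_of::<&[u8]>()); // 16: data pointer + length
    }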

I also agree with your other statement. I think Rust tries to hide a lot of this behind its own type system, such as Box<T> for pointers, whilst keeping it relatively fast. C is definitely the right tool for the job if you want direct memory access, though I also think this is a relatively small proportion of people: those working on OSes, embedded systems, or mission-critical systems such as flight control or medical equipment.

