kennethallen's comments (Hacker News)

I don't understand the use case here. Is this supposed to be for enterprise to control access to internal applications via network access policies?


Yes. The acronym is “ZTNA” (Zero Trust Network Access).

It is an alternative to a traditional corporate VPN that addresses a few architectural issues; namely:

- L3 connectivity to the corporate network, which allows for lateral movement.

- Inbound exposure of the VPN gateway (scaling can become a challenge, not to mention continuous vulnerabilities from… certain vendors).

- Policy management can get convoluted if you want to do micro-segmentation properly.

ZTNA is essentially an “inside-out” architecture and acts (kind of) like a L4 proxy. I’m going to butcher this explanation, but:

1. Company installs apps/VMs/containers throughout their network. These must have network reachability to the internal apps/services the company wants to make available to its users.

2. These apps/VMs/containers establish TLS tunnels back to the company’s tenant in the vendor’s cloud.

3. Company rolls out the vendor’s ZTNA client to user devices. This also establishes a TLS tunnel to the vendor’s cloud. Hence the vendor’s cloud is like a MitM gatekeeper.

4. Company creates policies in the vendor’s cloud that say “User A can access App X via app/VM/container Z”.

5. Even if App X is on the same LAN segment as App Y, App Y is invisible to User A because connectivity to the internal apps happens at L4.

It is an interesting architecture. That being said, ZTNA solutions have their own issues as well (you can probably already spot some based on my explanation above!)

(Note: I worked for a security vendor that sold a ZTNA solution, ~4-5 years ago. Things could be different now.)


Yes, this is exactly what this does.


Running LLMs will be slow and training them is basically out of the question. You can get a Framework Desktop with similar bandwidth for less than a third of the price of this thing (though that isn't NVIDIA).


> Running LLMs will be slow and training them is basically out of the question

I think it's the reverse: the use case for these boxes is basically training and fine-tuning, not inference.


The use case for these boxes is a local NVIDIA development platform before you do your actual training run on your A100 cluster.


Author on Twitter a few years ago: https://x.com/tqbf/status/851466178535055362


Oh, joy


I have a few questions after reading the README.

First, if it uses a PRNG with a fixed-size state, it isn't accurate to say it never repeats, correct? The output will eventually be periodic, even if that takes 2^256 operations or more.

Second, can you go more into the potential practical or theoretical advantages? Your scheme is certainly more complicated, but I don't see how it offers better tamper protection or secrecy than a block cipher operating in an authenticated mode (AES-GCM, for instance). Those have a number of practical advantages, like parallel encryption/decryption and ubiquitous hardware support.


You are correct. The probability of a state collision is cryptographically negligible, on the order of breaking a 256-bit hash function.
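To put a rough number on it: under the idealized assumption that each 256-bit state behaves like a uniformly random value, the birthday bound gives

```latex
P[\text{collision within } n \text{ states}] \;\approx\; \frac{n^2}{2 \cdot 2^{256}} = \frac{n^2}{2^{257}},
\qquad n = 2^{64} \;\Rightarrow\; P \approx 2^{-129}
```

so while the stream is eventually periodic in principle, hitting a repeated state within any feasible number of outputs is astronomically unlikely.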

You're also right that AES-GCM is faster and has hardware support. Ariadne explores a different trade-off. Its primary advantage is its architectural agility.

Instead of a fixed algorithm, the sequence of operations in Ariadne is dynamic and secret, derived from the key and data history. An attacker doesn't just need to break a key; they have to contend with an unknown, ephemeral algorithm.

This same flexible structure allows the core CVM to be reconfigured into other primitives. We've built concepts for programmable proofs-of-work, verifiable delay functions, and even ring signatures.


FYI your comments seem to be showing up as dead (dead comments don't show up by default, only when people logged into HN have them enabled), I think something may have triggered a shadowban on your account. Might want to send a message to the moderators.

I hit 'vouch' for the comment I'm responding to so it should be visible, but the other response you gave (https://news.ycombinator.com/item?id=44353277) is still listed as dead.


It does not show up as dead for me, but your comment was made 7 hours ago.


They cannot just relicense the work of all of their public contributors without them agreeing in writing. This is completely illegitimate. (They don't seem to require signing any contributor agreement.)


Have you come by your certainty that they have not asked because you were a contributor?


I walk away from every article on, or attempt to use, Nix more mystified


Yeah, Nix seems like an insanely useful concept (declarative, reproducible, diffable, source-controllable definitions of Linux environments). But actually using it is a nightmare.

I think there's some point on the spectrum between "commit your dotfiles to git" and Nix that would be useful, I don't know what it is though. Containerization is kinda like this, but they're entirely imperatively defined rather than declarative and I wouldn't really want to use a docker container as my main system environment.


I run NixOS on every computer I'm allowed to install it on, and I really don't think it's hard to use, just different. Adapting to a new workflow is hard, but I don't know that NixOS is intrinsically more difficult than any other Linux.

I wouldn't want a docker container as my main environment, but I do like having NixOS managing my main environment for a few reasons.

First, the declarative nature of everything makes it clear and easy to know what is actually installed on my computer. I can simply look at my configuration file and see whether a program is installed. If I want to uninstall something, I delete the program from the configuration.nix and rebuild. This might seem insignificant, but to me it's game-changing. How many times have you uninstalled things in Ubuntu or something and had a ton of lingering transitive dependencies? How many times have you been trying to debug a problem, ended up installing a million things, and then had to painstakingly track down every unnecessary dependency and transitive dependency that you installed, only to miss something for months? Maybe most people here are better at this than I am, but these things happened to me all the time.
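For illustration, that one-place package list is literally a list in configuration.nix (the package names below are just examples):

```nix
# configuration.nix fragment: deleting a line here and running
# `nixos-rebuild switch` removes the program from the system profile;
# the now-unreferenced store paths are reclaimed by `nix-collect-garbage`
environment.systemPackages = with pkgs; [
  git
  htop
  ripgrep
];
```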

Second, the declarative nature of NixOS makes snapshotting trivial. Again, this is game-changing for me, and it makes fixing stuff on my computer more fun and less scary. If I break something, all I have to do is reboot and choose the last generation, then fix it.

This might not seem like a big deal, and again maybe for people smarter than me it's not, but for me it completely changed the way I deal with computers. When I first started using Ubuntu, when I would do something like break the video driver or the WiFi driver, I would end up having to nuke the system and start again, because I would get into a state where I didn't know how to fix it. I probably could fix these things now, I've been doing this stuff for awhile, but even still, it's nice to be able to not ever have to worry about getting into a state like that.


My biggest gripe with Nix (from real-world experience) is that my .nix files randomly break due to upstream changes, and I have to spend my time going through GitHub commits to see what changed in the settings I use in order to fix it.

That, and when things do error, the error messages may as well be generated from /dev/random.


Are you importing things from all over the internet, without pinning to a specific version? It sounds a lot like it, at least, and in that case I'm not sure how this is a flaw of Nix, or how it would be much different in other places.


To be fair, you get this all the time when you run `nix-channel --update`: "<whatever> has been deprecated/removed, use <something-else> instead".


Nix channels (and NIX_PATH) break reproducibility. Pinning revisions makes things more robust; my preferred approach is to use default function arguments, so they're easy to override (useful when composing lots of things together).
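A minimal sketch of that pinning-with-default-arguments pattern (the rev and sha256 below are placeholders, not real values):

```nix
# default.nix: pin nixpkgs to an exact revision, but keep it overridable
{ pkgs ? import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  }) {}
}:
pkgs.hello
```

A caller composing several such expressions can then pass its own pinned set: `import ./default.nix { pkgs = myPinnedPkgs; }`.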

It seems like flakes are another way to do that, but they seem way too over-complicated for my taste.


Yeah, of course, but channels probably shouldn't be used outside of managing the local machine, and there's usually a quite long and fair time period before deprecation warnings take effect. Not sure how bad it is if one uses unstable, but if you're using unstable the complaint isn't really fair to begin with.


What you describe with building and rebuilding and keeping a clean environment is exactly what I use Docker containers for, eg devcontainers. I know it’s not reproducible in the same way, but the learning curve is so, so much lower for something 90% as good and with much more documentation and online support.


It is not the same as Docker, though. Docker is fine, but it only works for application-level stuff. NixOS lets you manage the entire system this way, including drivers and kernel modules and everything else.

This is not a small difference.


That’s the 10% that Docker can’t do, indeed. But most of what I want to do is handled fine by the area where Docker and Nix (not NixOS) overlap.


I am saying that’s more than 10%, it’s fundamental to the entire NixOS experience.


> I run NixOS on every computer I'm allowed to install it and I really don't think it's hard to use, just different.

I work at a place that uses Nix for almost everything. Despite that, most developers do not like it (and usually create tickets asking the "experts" to fix things). The above quote is basically exactly what the experts are always telling the developers. That, along with "you just need to try harder." As if it's not valid that someone can think Nix isn't ergonomic and often sucks to use.

I personally don't mind it all that much, although nowadays I just use it for home-manager. But I've seen people go from disliking it to hating it because of the way some experienced Nix people have treated them.


I mean people might be dicks about it, but I stand by my point.

It’s inherently hard to learn new things, especially if they’re contrary to things that you’ve been doing for N years, so I understand frustration, but that doesn’t imply that Nix itself is more difficult than anything else.

There are plenty of things that aren’t inherently difficult for most humans, but are hard to learn simply because they’re different. I don’t know that Spanish is a more difficult language than English, but I would have trouble learning it just because I have spent my entire life speaking English and approximately none of it speaking Spanish. This doesn’t mean Spanish is more difficult than English.

I will agree that the Nix language itself is kind of a pain in the ass with wonky syntax, though I have grown to kind of like it begrudgingly.


That's fine. I'm not trying to convince you to change your stance, simply bringing a perspective of Nix being used in production in a company of 500+ engineers. It's not an exhaustive example, but I've always used it as a data point of the general dislike of Nix I see across the board.


> [...] but they're entirely imperatively defined rather than declarative [...]

The conceptual problem with Docker isn't imperative vs declarative. It's that Docker doesn't even try to be reproducible. Executing the same Dockerfile twice doesn't necessarily build you the same container.

(Imperative vs declarative is still an interesting problem to think about, it's just independent of reproducibility in the abstract.)


I'm also one of the people who have tried Nix(OS) a couple of times and found it too much of a hassle, but "nightmare" is exaggerating a bit, I feel.

Nix's strength and weakness is that it wants to take over everything, and if you want to do something without it, you might be in a world of pain. And after doing more or less standard unixy things for 20+ years, it's difficult to hand over control to a new thing like that.


I honestly wonder why there are such divergent opinions between, say, git and Nix. (I've never used Nix, but I use git all the time.) Is Nix so much harder to use than git?

git also has a clear model (the Merkle tree of file and directory nodes) but a famously unfriendly UI (git checkout does 5 or more unrelated things)

Some people do not like git, but I get the feeling that most people just use it, and get on with their day.

Why the difference with Nix? Maybe because building packages is inherently slower. Whereas you can quickly get yourself into trouble with git, you can also quickly get out of trouble (rm -rf, git clone).

Maybe Nix is more stateful? Although I find the git index/staging area to be a fabulously odd piece of state, and it honestly breaks all the rules when I think about CLIs

Also Nix does rely on a very big global repo, whereas git doesn't

It also seems that Nix's model is less clean, and perhaps doesn't match the problem domain as well ... there are disagreements on whether it is "reproducible", etc.

Or maybe it's just a harder problem


You may not use Nix, but some pretty cool Nix tools use your work. :)

Just today I was working on some integrations with a runtime dependency resolver for shell programs that uses OSH for parsing on the job. We use Nix to manage our development environments, and we use it to make some small wrappers and tools written in Bash into something portable and reproducible that we can include in development environments that run both on Linux and macOS.

Historically we've just included them inline in our Nix files, but thanks to resholve, we're switching to a nicer system so that they live in separate files that are more pleasant to edit. The "source code" of the scripts lives in the repo as normal bash scripts, but when they get built into the development environment, all command invocations get automatically replaced with hard-coded references to the paths of the relevant executables in the Nix store, the shebang gets pinned to a specific bash version in the same way, and they also get run through ShellCheck. Now we have not only a really nice and quick way to define portable wrappers in our Nix code, but a sane way to manage longer scripts without any portability issues.
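As a rough sketch of what such a wrapper definition looks like with nixpkgs' resholve (the script and names are made up, and the exact API details may differ):

```nix
{ pkgs }:
pkgs.resholve.writeScriptBin "greet" {
  interpreter = "${pkgs.bash}/bin/bash"; # shebang pinned to a specific bash
  inputs = [ pkgs.coreutils ];           # `date` below resolves to a store path
} ''
  echo "hello, today is $(date +%F)"
''
```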

So thanks for your cool shell and associated tools and libs!

> Or maybe it's just a harder problem

Fwiw, I think this is true.

> Some people do not like git, but I get the feeling that most people just use it, and get on with their day.

I think one of the other devs on my team probably feels this way. He thinks it's conceptually cool but is somewhat horrified by the complexity and UI. He makes simple uses of it in ways that have precedent on our team, but never really dives in.

> Why the difference with Nix?

Nix didn't have a celebrity author to spur adoption early on, in some ways it can be slow, and I think maybe it isn't as much better than entrenched alternative stacks as Git was relative to SVN. The pain of SVN was very acute for the average developer. I'm not sure that any of the pains of dependency hell, stateful configuration management, distribution of portable executables, etc., are quite as acute for the average operator or developer as that. People who feel Nix makes their professional lives easier tend to have come to it after their career has inflicted more specialized pain upon them.

Using Git also doesn't generally (ever?) involve writing code in a Turing-complete language, but to make the best use of Nix, you do have to do that. The paradigm of that language is not very mainstream, either, and although it's generally suited to its domain imo, it certainly has some warts.


I'm glad to hear this! Travis Everett deserves a shout out too for building on OSH, and for being an early adopter :)

Now that Oils has been completely translated to native code (as of last year), we should probably find a way to officially support that functionality.

It is basically built on 'osh --tool syntax-tree foo.sh' ( https://oils.pub/release/0.28.0/doc/ref/chap-front-end.html )

I've also gotten a request for "tree-shaking", which is very related

Thanks for letting me know it's useful!

---

And on reflection I do think Nix has a harder problem, in part because it has to deal with the "outside messy world", whereas git just stores files. For example, I noticed that on Debian the set of root packages is actually pretty coupled to apt-get. So any package manager has to have some hacks. Also OS X and libc make that even harder, etc. Some decisions are at the mercy of the OS itself


I suppose one characteristic that Oils, resholve, and Nix share is that each is an ambitious attempt to bridge two worlds in order to propagate recent innovations to actually-existing environments that are entangled with complex legacies: Oils with its two frontends/modes/languages that hope to bring the innovations of PowerShell to settings where Bash is deeply entrenched, resholve in trying to add dependency management to a language that has always fundamentally lacked it and has perhaps a uniquely flexible standard of interop, and Nix in trying to bring insights from implementations of functional programming languages (e.g., garbage collection) and their features (immutability, composability) to Unix filesystem hierarchies.

In a way, I suppose resholve also directly shares a goal of Oils itself: namely, to make shell scripts "more like 'real programs'". YSH pursues this goal through more powerful and flexible language constructs, greater facility with data structures, more ergonomic error messages and other feedback for the user, things like that. Resholve pursues this goal by trying to make it possible for shell programs, even shell programs that rely on a mixture of third-party programs from God-knows-where, to be more portable, less sensitive to changes in the environment, and more amenable to a kind of static analysis of their behavior and dependencies. And both try to achieve their missions in a "conservative" way, letting users retain (or continue to write) code written in the shell language they're most familiar with.

Certainly here on HN, "disruptive" innovation gets a lot more consideration (and praise). But I think that "conservative" innovation of the kind I've just outlined is sometimes even more ambitious and challenging. It's a great thing to see such projects manage to produce interesting and useful results.

So long live Nix, resholve, and Oils!


Git doesn’t really care at all about the content. Nix does. Really they’re not reasonably comparable at all.


Maybe it's better now, but what I ran into in trying twice is that if you're not into installing by "curl | sh", then trying to build from source was an awful experience. It had out of date instructions for installing a whole lot of dependencies. I'd figure out one problem only to run into another, and another. Gave up both times, a few years in between.


There are a few really good ways to install Nix, including ones that people often invoke via `curl | sh`. If you prefer, you can download the exact same installers, and verify the checksums, read their code, etc. You don't have to actually use a different installer just to avoid `curl | sh`.


There are distro packages out there, but those can also come with gotchas; e.g. IIRC Ubuntu's package was configured not to allow `/bin/sh` in sandboxes, which caused some things to break in obscure ways :-(


> I honestly wonder why there are such divergent opinions between say git and Nix?

Nix is:

1. difficult & different, where similar-ish tools are straightforward & familiar (e.g. cloud-init or Ansible vs NixOS; or devcontainers using Dockerfiles vs nix shells).

2. demanding: it requires a significant amount of understanding, even for tasks you'd expect to be easy.

e.g.: git is difficult, but you can get by with rote memorizing 5 commands (& copy-pasting the repo if you mess up). Emacs is difficult, but you're not required to use it. Haskell can be difficult to work with, etc.

I'd say that running `direnv allow` & using nix that someone else has written is unlikely to be difficult. But, having to write your own Nix code can be quite high friction.

> It also seems that Nix's model is less clean

I think nixpkgs is cluttered with an organic mess of inconsistent designs... but I think there's also friction where Nix's ideal package is built with `./configure && make && make install`, and many packages aren't quite that clean.


> some point on the spectrum between "commit your dotfiles to git" and Nix

That would be configuration management tools such as Salt/Puppet/Ansible/Chef.

They were popular ten years ago, and gained a lot of exposure as the devops movement gained ground, but they never stopped being useful.

Having your non-running state defined declaratively is powerful, and if you can define a single source of truth for entire distributed systems, you suddenly become able to reason about state across whole systems.


What I can't currently figure out with Nix, though, is how to kill off dependency explosions.

I want to reach into a big tree of derivations and force library and compiler versions to a single common one, so we don't, for example, have 6 Rust compilers being used.


The usual approach is to give Nixpkgs some `overlays`, which override the attributes you want. This can be handy in conjunction with attributes like `.override`, `.overrideAttrs`, etc. for swapping-out things deeper in the dependency graph.
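A sketch of that overlay shape (the attribute names here, like `myPinnedRust` and `someTool`, are illustrative, not real nixpkgs paths):

```nix
# passed via the `overlays` argument when importing nixpkgs
self: super: {
  # force one compiler at the top level...
  rustc = self.myPinnedRust;
  # ...or swap it for a single package deeper in the graph via .override
  someTool = super.someTool.override { rustc = self.myPinnedRust; };
}
```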

The https://codeberg.org/amjoseph/infuse.nix project looks nice as a way to simplify annoying chains of overrides; though I haven't used it personally.


I'm pretty sure the best middle ground that currently exists is the various "immutable, snapshot to upgrade" distros out there.


Yes and no. It has some of the same advantages, but not the declarative definition of a system.

For me (even though I use NixOS on the desktop) there is a difference between desktops and servers. I can set up a desktop pretty quickly - install a bunch of flatpaks, check out a bunch of dotfiles - and I don't do it often.

Also, popular desktops like Fedora tend to be far more polished than the NixOS desktop, which has all kinds of glitches. Just to name two current ones: (1) gdm login will fail if you are too fast and log in before WiFi is up (usually you are thrown back to gdm, sometimes the session freezes up completely); (2) fwupd firmware updates usually fail.

On the other hand, on servers and remote development VMs, the setup work is annoying because I spin up/down machines far more frequently and managing them as pets gets old pretty quickly. So NixOS is much nicer, because you can have a system up and identical in 5 minutes. You could of course approximate it with something like Ansible on non-NixOS.

Though, I think the differences will become smaller since Fedora-based immutable systems will switch from OSTree to bootable containers soon [1].

Of course, you can use Nix on another immutable distro than NixOS.

[1] https://docs.fedoraproject.org/en-US/bootc/getting-started/


Depends on your use cases. I use Nix all the time at work but I don't use NixOS there at all. (I'd like to, but there are barriers and it's not a priority.) Distros like that don't address my use cases at all.


For what it’s worth, nothing in this article is really necessary for general usage of Nix, as the derivation format is mostly abstracted away, like how the OCI image format is irrelevant to everyday authoring of Dockerfiles.


Yeah, it's like those famous posts comparing monads to burritos or something


Do I get a superuser? Can I install any extensions I want?


Yes, the 'app' user you connect with has SUPERUSER rights, so you can run CREATE EXTENSION.

You can only install extensions that are already built into the PostgreSQL image we use (supabase/postgres:latest).

Based on that image, the available extensions include:

amcheck, autoinc, bloom, btree_gin, btree_gist, citext, cube, dblink, dict_int, dict_xsyn, earthdistance, file_fdw, fuzzystrmatch, hstore, insert_username, intagg, intarray, isn, lo, ltree, moddatetime, pgcrypto, plpgsql, uuid-ossp, and several others.

If you need an extension not included in this list, we would need to use a custom database image.

__________

Use the code DBSONMETAL and get 30% off for the next 12 months.

That makes this database — already significantly cheaper than other providers like DigitalOcean, Neon, and Render — almost unbeatable in price.


I'm going to take all of my toys and go home until everyone promises to stop being mean to me.


Was this LLM-written?

