hvenev's comments

Will you not have `~/.ssh`? If you have `.ssh .config/ssh` as a rewrite rule, `stat ~/.ssh` will still find it.


The point is to have a clean home directory.


Abandon hope.

I just treat ~ as a system-owned configuration area, and put my actual files (documents, photos, etc.) in a completely different hierarchy under /.


"/home/${USER}" for whatever junk programs are going to stick there, "/home/${USER}/home" for my "real" home directory.


I have been doing this for decades. My files are in a sub-directory of $HOME. It also makes it very obvious when a piece of software does not treat your $HOME with respect.


You could write a kernel module, then, that just hides certain symlinks from you (which is effectively what this module is).


On Windows this was always easier because, for some reason, almost everyone respected %appdata% in a way they never did XDG_CONFIG_HOME, but also because hidden files weren't just a naming convention but an actual separate metadata flag.


Always... except for the decades before this became common, back when a bloated C:\ root directory was the norm. Microsoft even had games store stuff in My Documents\Games at one point. My Documents was a user directory that saw a lot of abuse over the years.


They still have that, it's just `My Documents\My Games` now. And Visual Studio makes a folder in My Documents for every annual release. And…


Yes, as in there’s no reason Linux can’t clean up its game the same way.


That ship sailed 30 years ago.


For IPv4 the graph does not start at zero, but at around 45K.


Correct, click the "Min/Max scale" toggle to get a zero-based graph that shows the v4 reduction in context.


> Jeff once simultaneously reduced all binary sizes by 3% and raised the severity of a previously known low-priority Python bug to critical-priority in a single change that contained no Python code.

This sounds really plausible. A change to the C toolchain/library (for example, a specialized/inlined memcpy) may affect binary sizes significantly, and may change the behavior of something the C standard leaves undefined (for example, memcpy with overlapping arguments).
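A toy sketch of why overlap matters (this is an illustration of the undefined-behavior point in Python, not the actual toolchain change from the anecdote): a naive front-to-back copy, like a simple memcpy, reads bytes it has already overwritten when the regions overlap, while a memmove-style copy snapshots first.

```python
def naive_memcpy(buf, dst, src, n):
    # Copies front-to-back; when dst > src and the ranges overlap,
    # it re-reads bytes it has already overwritten.
    for i in range(n):
        buf[dst + i] = buf[src + i]

def safe_memmove(buf, dst, src, n):
    # Takes a snapshot of the source first, so overlap is handled correctly.
    tmp = bytes(buf[src:src + n])
    buf[dst:dst + n] = tmp

a = bytearray(b"abcdef")
naive_memcpy(a, 2, 0, 4)   # overlapping copy: a becomes b"ababab"

b = bytearray(b"abcdef")
safe_memmove(b, 2, 0, 4)   # overlapping copy: b becomes b"ababcd"
```

A compiler is allowed to swap one strategy for the other on a memcpy call precisely because overlapping arguments are undefined, which is how a "no Python code" change can flip the behavior a Python program was relying on.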


I have such a Python bug right now, caused by something that fork()s in a way that can't be converted to posix_spawn(). One of those is a lot easier to make performant than the other.
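For reference, a minimal sketch of the posix_spawn() side (assuming a POSIX system; `os.posix_spawn` has been in the stdlib since Python 3.8):

```python
import os
import sys

# posix_spawn creates the child directly, without first duplicating the
# parent's address space the way a plain fork() does, which is why it is
# much cheaper for a large parent process.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('spawned')"],
    dict(os.environ),
)
_, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
```

Anything that needs to run arbitrary code between fork() and exec() (changing credentials, fiddling with inherited state) can't take this path, which is the situation described above.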



I had the same reaction. Haven't they been selling DGX boxes for almost 10 years now? And they've been selling the rack-scale NVL72 beast for probably a few years.[1]

What is changing?

[1] https://www.nvidia.com/en-us/data-center/gb200-nvl72/


They're cutting out vendors like SuperMicro or HPE and going straight to the customer now.


When NVIDIA sells DGX directly, they usually still partner with SuperMicro, etc. for deployment and support. It sounds like they're going to be offering those services in-house now, competing with their resellers on that front.


Hyperscalers and similar clients don't use DGX, but their own designs that integrate better with their custom-designed datacenters.

https://www.nvidia.com/en-us/data-center/products/mgx/


Back when my job involved using Kubernetes and Helm, the solution I found was to use `| toJson` instead: it generates one line that happens to be valid YAML as well.
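The trick works because every JSON document is also valid YAML, and JSON serializers emit a single line, so template indentation can't break it. A Python sketch of the principle (illustrating what Helm's `toJson` does, not Helm itself):

```python
import json

config = {"args": ["--verbose"], "env": {"MODE": "prod"}, "replicas": 3}

# One line of JSON, which any YAML parser also accepts, so embedding it
# at an arbitrary indentation level inside a template is safe.
line = json.dumps(config)

one_line = "\n" not in line
round_trip = json.loads(line) == config
```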


From what I remember, the quality of a safe is measured in minutes, with "15-minute" safes being OK for general use.


English also changes, so the only way to be safe is to quote all identifiers.
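A sketch of that defense using Python's sqlite3 module (the column names are just illustrations): double-quoted identifiers stay plain identifiers even if the bare word later becomes a keyword.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# "order" is already an SQL keyword; double-quoting makes it an ordinary
# identifier, and the same habit protects any name SQLite reserves later.
con.execute('CREATE TABLE t ("order" INTEGER, "note" TEXT)')
con.execute('INSERT INTO t ("order", "note") VALUES (?, ?)', (1, "ok"))
row = con.execute('SELECT "order", "note" FROM t').fetchone()
```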


I'll just stop upgrading SQLite if they ever add "rizz" as a keyword.


What I personally do is

    User=per-service-user
    ExecStart=!podman-wrapper ...
where podman-wrapper passes `--user=1000:1000 --userns=auto:uidmapping=1000:$SERVICE_UID:1,gidmapping=1000:$SERVICE_GID:1` (where the UID/GID are set based on the $USER environment variable). Each container runs as 1000:1000 inside the container, which is mapped to the correct user on the host.
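A hypothetical sketch of what such a podman-wrapper could look like (names and argument layout are my own illustration; the real wrapper may differ), computing the mapping flags from the service user's passwd entry:

```python
import os
import pwd

def build_podman_args(user, extra_args):
    # Map UID/GID 1000 inside the container onto the service user's
    # UID/GID on the host, as described above.
    ent = pwd.getpwnam(user)
    uidmap = (f"--userns=auto:uidmapping=1000:{ent.pw_uid}:1,"
              f"gidmapping=1000:{ent.pw_gid}:1")
    return ["podman", "run", "--user=1000:1000", uidmap, *extra_args]

# The real wrapper would end with os.execvp("podman", args) to hand off
# to podman itself; here we only build the argument vector.
args = build_podman_args(pwd.getpwuid(os.getuid()).pw_name,
                         ["alpine", "true"])
```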


Our GCP VMs are also not responding (europe-west4-a and us-central1-b).

edit: Seems to be a network problem. We can't connect to them from Bulgaria, but we can connect to them from the US.


I wonder when quantum computers will be able to target post-quantum RSA [1]. Normal RSA operations (key generation, encryption, decryption) have an asymptotic advantage over Shor's algorithm, so it is not unreasonable to just use large enough keys. The advantage is similar to Merkle's puzzles [2], with the added bonus that the attacker also needs to run their attack on a quantum computer.

A while ago I generated a gigabit RSA public key. It is available at [3]. From what I remember, the format is: 4-byte little-endian key size in bytes, then little-endian key, then little-endian inverse of key mod 256**bytes. The public exponent is 3.

[1] https://eprint.iacr.org/2017/351.pdf

[2] https://dl.acm.org/doi/pdf/10.1145/359460.359473

[3] https://hristo.venev.name/pqrsa.pub
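A toy sketch of the key format as recalled above (an assumption, since the description is from memory, and the modulus here is trivially small): 4-byte little-endian size in bytes, little-endian modulus, then the little-endian inverse of the modulus mod 256**size.

```python
import struct

n = 59 * 53                         # toy odd modulus (real keys are huge)
size = (n.bit_length() + 7) // 8
inv = pow(n, -1, 256 ** size)       # exists because n is odd

blob = (struct.pack("<I", size)
        + n.to_bytes(size, "little")
        + inv.to_bytes(size, "little"))

# Parsing it back:
(size2,) = struct.unpack_from("<I", blob, 0)
n2 = int.from_bytes(blob[4:4 + size2], "little")
```

Shipping n^-1 mod 256**size alongside the modulus is the kind of precomputation Montgomery-style arithmetic wants, which would matter at gigabit key sizes.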


Post-quantum RSA is clearly a joke from djb, made to have a solid reply when people ask "can't we just use bigger keys?". It has a 1-terabyte RSA key that takes 100 hours to perform a single encryption. And by design it should be beyond the reach of quantum computers.

