Hacker News | new | past | comments | ask | show | jobs | submit | login | daniel-levin's comments

Difficulty is relative and practice makes perfect. People love to compare the difficulty of processes, metals, positions, etc. They’re all hard without practice. They’re all easy with a lot of practice. If you just want to fuse some metal, it can be very frustrating to fight with the welder and get nowhere. But if you’re deliberately practicing, getting hundreds of hours under the hood, you will get good. The other crucial component is that other humans have worked out how to weld metals effectively and have documented it. There are tons of handbooks and manuals detailing which techniques and consumables you should be using for a given weld. Combine that with lots of time under the hood, and you’ll be making phenomenally good welds without difficulty.


> People love to compare difficulty of processes… they’re all easy with a lot of practice.

People also love to diminish the value of skilled trades and high quality craftspeople. If it takes thousands of hours to become a competent welder then it’s hard. It’s okay to say that things are hard.


I enjoy watching a skilled craftsman at work. It's beautiful.


Coinbase unironically suggests using PKI to protect oneself. It’s unbelievable self-satire.

https://www.coinbase.com/blog/celer-bridge-incident-analysis


It looks like this is about protecting your "web3" web application, not your individual self. In that context it's probably fine advice.


Neat! This is the direction I’d hoped to see gvisor go in. What’s the reasoning for building from scratch and not piggybacking off gvisor?


We certainly looked into gVisor and Firecracker when we started this project a few years ago. These systems use KVM, and gVisor in particular uses the Model Specific Registers (MSRs) to intercept system calls before forwarding them to the host kernel. Intercepting syscalls this way has less overhead than ptrace, and we would have complete control over the system environment. I think it's a good approach and worth exploring more, but ultimately the deal breaker was that KVM requires root privileges to run, and it wouldn't run on our already-virtualized dev machines. We also wanted to allow the guest program to interact with the host's file system. So, we went with good ol' ptrace. Last I checked, gVisor also had a ptrace backend, but it wasn't very far along at the time. When going the ptrace route, there is less reason to depend on another project. Another reason, of course, is that we'd be beholden to a Google project. ;)


I thought it was very cool how gVisor is multi-backend (their “sentry” implemented via either ptrace or KVM), which is pretty unusual for instrumentation tools.

We could maybe have shared this logic to intercept syscalls and redirect them to user space code serving as the kernel. That is, we could have shared the Reverie layer. We saw ourselves as headed towards an in-guest binary instrumentation model (like rr’s syscall buffer). And so one factor is that Rust is a better fit than Go for injecting code into guest processes.

Regarding the actual gVisor user space kernel: we could have started with that and forked it to start adding determinism features. At first glance that would seem to save on implementation work, but “implement futexes deterministically” is a pretty different requirement from “implement futexes”, so it’s not clear how much savings could have been had.

We could still have a go at reusing their KVM setup to implement a Reverie backend. But there’s some impedance matching to do across the FFI there, with the Reverie API relying on Rust’s async concurrency model and Tokio. Hopefully we could cleanly manage separate thread pools for the Go threads taking syscalls vs the Tokio thread pool hosting Reverie handler tasks. Or maybe it would be possible to reuse their solution without delivering each syscall to Go code.


> that KVM requires root privileges to run

It doesn't. It only requires privileges to access /dev/kvm.


Oops, yes, you are correct and it's not too hard to get around that by adding the user to a group that has access. Still, nested virtualization isn't always enabled, which I think limits the number of places we can run.


I think if you come from the JVM / CLR world, you are so protected by the runtime’s patching up of references that it might not even occur to you that a (raw) pointer to a data structure’s internals can dangle after the data is moved around. The runtimes mentioned pause your code, move things around, even compact the heap, and your references magically still point to what they did before!


Except it's not magic. It costs processing cycles, latency, invalidation of cache lines you were using, and memory bus traffic. Lots of all of them.

Most of the costs are not counted as runtime for your process, so benchmarks invariably appear to show GC as costing less overhead than it does. Among those costs, as with all the myriad varieties of caching done in modern systems, is that GC makes it hard to know the cost of the design choices you make. Many of the costs come from making the caches less effective.


Well, it's not so much being protected by the patching up of references; it's more that C++ is (as far as I know) the only language that doesn't complain when you create a reference to an object that can move away.

Most languages either don't have the concept of classes or ensure that invalidating references is an explicit operation (such as calling the destructor).


Presumably not including chickens... broilers only live for 33 days on average



Have you ever set up a high-occupancy building in a structure that isn’t rectilinear? It’s a space efficiency nightmare to fit naturally rectangular fixtures and appliances to rounded walls. I worked in a beautiful post-modernist building with a terrible architectural oversight: the aesthetically charming glass panels acted as a lens that focused light in a way that heated the whole office up. The AC had to be cranked up high just to survive in there. I would take a well-lit, well-ventilated box over a more architecturally creative space that causes its occupants problems.


Microsoft shares source code with lots of partners. It would be asinine to admit that source code leaks, accidental or otherwise, would compromise their security. If they did that, it would create headaches for their massive contracts where source sharing is a prerequisite. So they toe the party line and say no, in fact, source code leaks do not compromise security.


Many years ago when I worked at Microsoft I asked for the source code to Solitaire. A few days later I received a stack of CD-ROMs with the entire source code of Windows NT (4.0 maybe).


And what of the source code to Solitaire!?

Cool memory, thanks for sharing.


I just thought of something. At the time, blank CD-Rs were about $15 each and the fastest burners were 2x. I'm sorry I wasted so much of the time of the person who burned these, and the cost of the media!


It took ages to figure out where the code even was in the many files and folders. The directory structure did not make it obvious.


Can't wait until cozy bear leaks that :D


The NT 4 code already leaked, almost in full, back in 2004. You can still find it with relative ease if you know where to look, or search for certain keywords in the code.


Man, I'm old, but give me NT 4 with modern hardware support, modern drivers, and a GPU-driven UI, and I would move in a heartbeat.


Yeah I remember when there weren't dozens of services running in the background just for basic OS functionality.


What do you miss the most? The UI? Speed?


UX, speed, simplicity, lightness. Applications ran without talking to the internet, asking if it's OK to run them, telling me it's probably unsafe to run them, telling me I need to update for security reasons, telling me I should play Candy Crush, or letting me search my files while adding recommended noise supposedly relevant to what I did 3 days ago. I could go on. I just want to stare at a flat blue colour knowing tomorrow it will be just the same. /s


Yes! UI was amazing and obviously you can’t beat the speed it would run at on modern hardware


Honestly I think Xfce is about the same. And probably more stable, though obviously it's hard to do a direct comparison.


Depending on the distro and the compile options/build flags/included libraries, MATE has been as fast and light for at least six years now. I think the first distribution that showed that to a general audience was LMDE, a.k.a. Linux Mint Debian Edition.


The window chrome is fine but the settings are a bit of a mess in my opinion.


UI shouldn't be too hard (https://www.wincustomize.com/explore/windowblinds/8628/). I am not so sure about the speed if you'll use modern drivers.


Make that winning animation use the GPU!


That was before Source Depot, I presume.


They were using SLM (Slime) but I did not have access to the server since I was on a different project (Microsoft Systems Management Server).


> a stack of CD-ROMs with the entire source code of Windows NT

That's a lot of code. Scary.


>That's a lot of code.

It's estimated to be around 40 million lines of code.


And it was not compressed; it was just a bunch of files and folders. My guess is it was around 15 CD-ROMs.


40 million lines of 80 characters would fit in 5 CDs. With a more reasonable average length, it'd fit comfortably in 3.

And 40 million lines for an OS is a crazy amount of code.


The source code is already out there, so any compromises have already been found and exploited. Leaking it further won't create more vulnerabilities, and more likely will cause existing vulnerabilities to be found by white hats


> Microsoft shares source code with lots of partners

ALL source code for ALL active AND inactive projects? I highly doubt it.

You simply have no idea if the attackers had access to unshared, proprietary code or not. Like Azure server-side components.


>> Why are compilers the ones behind the curve?

I suspect it’s because there aren’t a lot of high-quality libraries you can integrate into the backend of compiler tools without running into license issues pretty fast. Imagine if GNU binutils were more permissively licensed and as modular as Clang: then novel, non-GPL’d compiler infrastructure could depend on BFD, the boring part where further work won’t bring bona fide improvements to your new compiler. Another factor is that LLVM’s quality and ubiquity have reduced the monetary and technical upside of pursuing new opportunities in compiler development.


Hi Eric, presumably the companies will trade on a limit order book, like other exchanges. This mechanism of price formation allows rapid price movements up or down. What’s to stop the speculators or retail investor hordes from piling into a stock and driving its price up or down? Are there holding period rules?


No, we do not restrict liquidity in any way. We simply believe that companies should be able to engage with "speculators" as you call them differently from long-term investors. Ultimately volatility isn't a problem in itself - it's the effect it has on corporate decision-making that matters.

