I feel like we must have read two different articles. You sound crazy. Didn't read it your way at all.
> Think of that "debugging tools give a huge warning" as being the equivalent of std::panic in standard rust. Yes, the kernel will continue (unless you have panic-on-warn set), because the kernel MUST continue in order for that "report to upstream" to have a chance of happening.
"If the kernel shuts down the world, we don't get the bug report", seems like a pretty good argument. There are two options when you hit a panic in rust code:
* Panic and shut it all down. This prevents any reporting mechanism like a core dump. You cannot attach a normal debugger to the kernel.
* Ignore the panic and proceed with the information that it failed, reporting this failure later.
The kernel is a single program, so it's not like you could just fork it before every Rust call and let only the fork fail if the call fails.
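To make that constraint concrete, here's a tiny userland Rust sketch (not kernel code): with the default unwinding panics you can catch a panic and keep going, but the kernel effectively builds its Rust with panic=abort semantics, so that recovery path simply isn't available there.

    use std::panic;

    fn main() {
        // catch_unwind only works when panics unwind. With panic = "abort"
        // (the kernel's situation) there is no unwinding to catch, and a
        // panic takes everything down before the Err branch is reached.
        let result = panic::catch_unwind(|| {
            let v: Vec<u32> = Vec::new();
            v[0] // out-of-bounds index: panics
        });
        match result {
            Ok(x) => println!("got {x}"),
            Err(_) => println!("caught the panic, carrying on"),
        }
    }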
> In the kernel, "panic and stop" is not an option (it's actively worse than even the wrong answer, since it's really not debugable), so the kernel version of "panic" is "WARN_ON_ONCE()" and continue with the wrong answer.
(edit, and):
> Yes, the kernel will continue (unless you have panic-on-warn set), because the kernel MUST continue in order for that "report to upstream" to have a chance of happening.
Did I read that right? The kernel must continue? Yes, sure, absolutely... but maybe it doesn't need to continue with the next instruction; maybe it could continue in an error handler instead? Is his thinking really that narrow? I hope not.
The error handler is the kernel. Whatever code runs to dump the panic somewhere must rely on some sort of device driver, which in turn must depend on other kernel subsystems and possibly other drivers to work.
There is an enormous variation in output targets for a panic on Linux: graphics hardware attached to PCIe (requires a graphics driver and possibly support from the PCIe bus master, I don't know), serial interface (USART driver), serial via USB (serial-over-USB driver, USB protocol stack, USB root hub driver, whatever bus that is attached to)... There is a very real chance that the error reporting ends up encountering the same issue (e.g. some inconsistent data on the kernel heap) while reporting it, which would leave the developers with no information to work from if the kernel traps itself in an endless error-handling loop.
In the case of the WARN() macros, execution continues with whatever the code says. There is no automatic stack unwinding in the kernel, and how errors should be handled (apart from being logged) must be decided case by case. It could just be handled with an early exit returning an error code, like other "more expected" errors.
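As a rough userland illustration of that warn-and-continue pattern (the Errno and handle_request names are made up for this sketch; in kernel C it would be a WARN_ON_ONCE() followed by something like "return -EINVAL;"):

    use std::sync::atomic::{AtomicBool, Ordering};

    #[derive(Debug, PartialEq)]
    struct Errno(i32);
    const EINVAL: Errno = Errno(22);

    static WARNED: AtomicBool = AtomicBool::new(false);

    // Warn loudly the first time the "impossible" happens, then fail only
    // this one request with an error code instead of stopping the world.
    fn handle_request(len: usize, capacity: usize) -> Result<(), Errno> {
        if len > capacity {
            if !WARNED.swap(true, Ordering::Relaxed) {
                eprintln!("WARNING: request len {len} exceeds capacity {capacity}");
            }
            return Err(EINVAL);
        }
        // normal path: actually process the request
        Ok(())
    }

    fn main() {
        assert_eq!(handle_request(8, 64), Ok(()));
        assert_eq!(handle_request(128, 64), Err(EINVAL)); // warned and rejected
        assert_eq!(handle_request(256, 64), Err(EINVAL)); // rejected, no second warning
    }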
The issue being discussed here is that Rust comes from a perspective of being able to classify errors and being able to automate error handling. In the kernel, it doesn't work like that, as we're working with more constraints than in userland. That includes hardware that doesn't behave like it was expected to.
Well, you've edited your reply a couple times, so it's a moving target, but:
> * Panic and shut it all down. This prevents any reporting mechanism like a core dump. You cannot attach a normal debugger to the kernel.
No one is really advocating that. Clearly you need to be able to write code that fails at a smaller granularity than the whole kernel. See my comment upthread about what I mean by that: dynamic errors fail smaller granularity tasks and handlers deal with tasks failing due to safety checks going bad.
> dynamic errors fail smaller granularity tasks and handlers deal with tasks failing due to safety checks going bad.
Yes, and that's why Rust is bad here (but it doesn't have to be). Rust _forces_ you to stop the whole world when an error occurs. You cannot fail at a smaller granularity. You have to panic. Period. This is why it is being criticized here. It doesn't allow you any other granularity. The top comment has some alternatives that still work in Rust.
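For the flavor of those alternatives: plain Rust lets you opt out of the panicking operations and hand the caller a value it has to deal with instead (a userland sketch, not kernel code):

    // buf[idx] panics on an out-of-range index; get() turns the same failure
    // into an Option the caller must handle.
    fn read_sample(buf: &[u8], idx: usize) -> Option<u8> {
        buf.get(idx).copied()
    }

    // a * b panics on overflow in debug builds (and wraps in release);
    // checked_mul makes the overflow an explicit error path instead.
    fn scaled(a: u32, b: u32) -> Option<u32> {
        a.checked_mul(b)
    }

    fn main() {
        let buf = [1u8, 2, 3];
        assert_eq!(read_sample(&buf, 1), Some(2));
        assert_eq!(read_sample(&buf, 9), None); // no panic, caller decides

        assert_eq!(scaled(2, 3), Some(6));
        assert_eq!(scaled(u32::MAX, 2), None); // overflow reported, not a crash
    }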
"a bunch of folks do something insecure" does not speak argument.
The argument is that it is insecure, most easily demonstrated by the fact that I can inject "cat ~/.ssh/*_rsa | curl ..." and get your company's SSH keys. There's no reason Rust, brew, and all the rest can't provide a download page with a checksum. They choose not to, like this project chose not to, because it doesn't look as sexy.
Sure you can, but there's always going to be trust somewhere. I trust that the curl | bash examples I see are from reputable sources, and I trust their infra as much as someone else's to be safe (HTTPS protects against MITM attacks). NixOS is a cool example of complete package transparency with their binary cache: if your expressions don't evaluate the same as theirs, you'll build from source.
But really, curl | bash isn't the end of the world.
If they do it against a GitHub URL, you also have the security of GitHub behind you, because the server can't differentiate on user agent there (which seems to be the commonly argued pitfall) or use other tricks to detect that you're not a browser. On a hosted platform you have someone else's security team behind your back.
If someone has pulled off a sophisticated enough attack to intercept your HTTP curl of the script and inject a malicious version, why can't they also intercept your browser's HTTP requests for the download page and inject different HTML that gives a matching hash/checksum of the malicious script?
Going even further, what is stopping a malicious attack on the package source itself, like someone gaining control of the package source and committing a malicious version (as npm, PyPI, and other registries have seen)?
The point is, "use your package manager" is not any better in the grand scheme of things than blindly curling and executing a script. Neither option is perfectly secure.
No, the concern is not that your computer is compromised. Yours is a low-value target, sorry.
It's their HTTP server, or a machine that feeds that HTTP server, that is a good target for a compromise. Injecting a little bit of malicious code that steals something, or installs a fileless piece of malware, would bring massive benefits to the perpetrator, even if the exploit is short-lived.
That shell script should be a compressed (gzip, xz) archive, with a SHA-256 hash of it published on a different, separately hosted resource.
Maybe we should provide a utility that just does that in one command. It could even be a shell script...
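Something along these lines, say (a minimal sketch in Rust rather than shell; it assumes the sha2 and hex crates, and the file name and published hash are made up for the example):

    use sha2::{Digest, Sha256};
    use std::fs;

    fn main() -> std::io::Result<()> {
        // Hash published on a different, separately hosted page (hypothetical value).
        let expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

        // The downloaded installer script/archive.
        let script = fs::read("install.sh")?;
        let actual = hex::encode(Sha256::digest(&script));

        if actual == expected {
            println!("checksum OK, safe to run the installer");
        } else {
            eprintln!("checksum mismatch: got {actual}");
            std::process::exit(1);
        }
        Ok(())
    }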
Realistically a poisoned ARP or DNS attack that redirects your machine's traffic to the attacker's server, both for the download and the download page, is something to be concerned about. This only requires someone to have access to your local network, not to your machine. It could be as innocent as working at a coffee shop from their wifi network and an attacker being on it too...
It could, but I can trust that no individual stepped in the middle of that process.
I trust Rust to not put such a thing in their binary. I do not trust an arbitrary man in the middle, and it's trivial to modify a shell script.
Without a checksum, I can't ensure the binary I'm piping through the shell is the binary they posted and built. Anyone can step in, modify a few lines, and get access to a large part of my system. The barrier to entry for adding such capability to arbitrary binaries is outrageously high.
Install scripts are usually hosted on GitHub/etc and changes are clearly tracked. Compiled binaries are untracked and do not offer the same guarantees. I would trust the script more than a binary that could’ve been modified anywhere along the build process.
Not everyone uses Linux, and not every package can be audited by repo devs. It’s simply not scalable.
If I recall properly, the command circular buffers of 2^n bytes ("queues" in vulkan3d) are VRAM IOMMAP-ed (you just need atomic R/W pointers for synchronization, see mathematically proven synchronization algorithms).
There is a "GPU IRQ" circular buffer of 2^n bytes coupled with PCIE MSIs (and I recall something about a hardware "message box").
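To make the synchronization idea concrete, here's a toy power-of-two ring buffer in userland Rust. The real amdgpu rings live in mapped VRAM with the read/write pointers exchanged atomically through MMIO/doorbell registers (as I understand it), so treat this purely as an illustration of the index arithmetic:

    const SIZE: usize = 8; // must be a power of two
    const MASK: usize = SIZE - 1;

    struct Ring {
        buf: [u32; SIZE],
        write: usize, // only the producer (CPU submitting commands) advances this
        read: usize,  // only the consumer (the GPU, in the real case) advances this
    }

    impl Ring {
        fn new() -> Self {
            Ring { buf: [0; SIZE], write: 0, read: 0 }
        }

        fn push(&mut self, word: u32) -> bool {
            if self.write.wrapping_sub(self.read) == SIZE {
                return false; // ring full
            }
            self.buf[self.write & MASK] = word;
            self.write = self.write.wrapping_add(1); // hw: update wptr, ring the doorbell
            true
        }

        fn pop(&mut self) -> Option<u32> {
            if self.read == self.write {
                return None; // ring empty
            }
            let word = self.buf[self.read & MASK];
            self.read = self.read.wrapping_add(1); // hw: the GPU advances rptr
            Some(word)
        }
    }

    fn main() {
        let mut ring = Ring::new();
        for word in 0..4u32 {
            ring.push(word);
        }
        while let Some(w) = ring.pop() {
            println!("consumed command word {w}");
        }
    }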
The "thing" is, for many of them, how to use those commands and how they are defined feels very weird (for instance the 3d/compute pipeline registers programing).
Have a look at libdrm from the mesa project (the AMDGPU submodule), then it will give you pointers where to look into the kernel-DRM via the right IOCTLs.
Basically, the kernel code is initialization, quirks detection and restoration (firmware blobs are failing hard here), setting up of the various vram virtual address spaces (16 on latest GPUs) and the various circular buffers. The 3D/compute pipeline programing is done from userspace via those circular buffers.
If I am not too much mistaken, on lastest GPU "everything" should be in 1 PCIE 64bits bar (thx to bar size reprograming).
The challenge for AMD is to make all that dead simple and clean while keeping the extreme performance (GPU is all about performance). Heard rumors about near 0-driver hardware (namely "rdy" at power up).
> This reads like some generic LinkedIn CEO post that sounds deep on the surface but actually means nothing.
I felt exactly the opposite. In my career as an engineer I regularly encounter people who claim to be experts but offer no qualifications or expertise. Having the ability to respond to this type of thing is valuable.
In my personal life, I've felt that many therapists exhibit this exact response. They choose to give heuristics and platitudes because, oftentimes, they work. But it means they are giving up the expertise they claim to possess.
This article reminds me of something rather childish: "With great power comes great responsibility." If you claim to be an expert, you need to actually be an expert. I consider this the social contract of expertise and prestige.
IMHO it's like your actual desk. Some people like it clean and organized - others have a cluttered mess of objects.
My desk is a tragedy, but I much prefer tiling window managers and rigid, well-organized computing environments. Great to see the variety of options provided by KDE. I'll have to give this a try.
Yeah, I dug into the update catalog and downloaded the update that way myself. So there are a few ways to get it. Folks above mentioned the Update Assistant offering it if the Windows Update screen doesn't.
I think it's pretty common at small to mid-sized companies and startups. Your more "trendy" companies and your F500 companies do the type of Leetcode interview you hear everyone on HN complain about.
It typically looks like a 15-minute phone interview with HR, followed by a lengthy Leetcode/take-home exam (that's auto-graded, no humans), and a form where you input your school, GPA, and courses taken (seriously). All of this info gets turned into a number, and then HR takes a sample of the top X and hands it to the hiring manager: "Here are the 'viable' candidates".
The hiring manager then has to (basically) interview the candidate themselves. Ensure they actually have the skills for the position, determine their interest in the role, etc. So this is probably what you're doing right now. Just imagine someone filtered a bunch of your resumes first.
Take with a grain of salt, but I have heard of some folks explicitly getting permission to do hiring outside of HR at said large companies. The kids they get out of undergrad and through HR's Leetcode process are apparently complete garbage. Don't understand C, pointers, memory, or Linux at all. Don't even know what files are.
> Take with a grain of salt, but I have heard of some folks explicitly getting permission to do hiring outside of HR at said large companies. The kids they get out of undergrad and through HR's Leetcode process are apparently complete garbage. Don't understand C, pointers, memory, or Linux at all. Don't even know what files are.
I've done a lot of interviews, and I just accept that I need to explain what a byte is to the candidates. You'd think people with a programming background would know, but it's not a hard concept, so whatever. I don't really care if they know how to open a file or what a byte is; I want to know if they can describe their output and then write code that does what they said. And whether they can write a loop inside a loop without going off the rails. Bonus points if they can communicate reasonably throughout. You can teach someone how to use files, and unless you're a C shop, most people don't need enough C that they can't learn it when it comes up, if they need to. But it's hard to teach "make a spec, follow the spec you made", and if you have to teach a programmer how to do nested loops, they aren't a programmer yet. (Which is maybe fine, if you're interviewing someone who's only programming-adjacent or something.)
To get to the algorithmic portion of the interview process (instead of just filling out your GPA and hoping for a recruiter call), try reaching out to the recruiters directly. They only go to that pool when their existing leads run dry, and they'd often love the opportunity to add some self-motivated candidates.
How I got my job is I searched LinkedIn for <company name> + recruiter and just added all of them. That alone generated lots of recruiter calls. This was 2018; not sure if things have changed.
Which is of course exactly what these companies should expect if their idea of candidate "preparation" consists of sitting in front of a webform and being asked to type in functions, one after another.
I agree. If the church didn't survive, they would just build a new one.
I imagine the remains of hundreds of separate cathedrals lie under a single one. But you would never say "the church fell down". Rather, "there was an accident and renovations were required".
I see them as a Ship of Theseus, where the most long-lasting examples were determined through a lot of trial and error.
Isn't this a thing with Notre Dame? It's been a while, but I remember the opening of Hunchback mentioning the rebuilding, right?
Many cathedrals have been rebuilt two or three times, and building a bigger church over the top of a smaller one is normal, but there are also plenty that have survived more or less as they are for at least several centuries. (Hell, there's an 11th century church in my home village, and it still has the door on the wrong side because the village itself was moved due to plague).
Straight masonry without rebar lasts essentially forever - think of all those Roman viaducts that are still in use. Water can wear through it eventually if the shape lets it, especially in places with freeze-thaw cycles (so there's a saying that a church will survive as long as there's someone around to clear the gutters). I guess hurricanes would probably do it if you're in a place that gets those. But there's just not a whole lot to go wrong with what's essentially a big lump of stone. (Of course a bare stone building is not particularly comfortable for living in, and you have to be a bit more careful about how you maintain wood or fabric on the inside - but the building shell itself will last as it is)
> They are already way outside of rational territory and deep into religious territory. In their minds Linux hasn't changed a bit since 1999, even though if you were to compare mac os from that time period against modern linux, they would become enraged at the unfairness and injustice.
I agree that this is part of it. But I also just see plain antagonism against Linux because people recommend it. Just pure contrarianism.
But a lot of the recommendations for Proton are also "memey". It's a bad fit if you need Anti-Cheat or proper injection protection, and that's just the nature of the beast. A proper Anti-Cheat _shouldn't_ approve of the injection necessary to get Proton working.
So I do have to keep a Windows installation myself. But Proton represents a really cool technical milestone. Weird the way people talk about tech in games.