Yeah, it's much, MUCH different to have those vistas unfold in front of your eyes at a smooth 144 fps while you have control over the movement and everything, compared to hyperanalyzing 1 still frame or a sequence of frames with no real control.
The nice thing about this is that it's simple and works pretty much in any language so there's very little cognitive strain if you work with multiple languages at the same time.
I believe the concern is that the attackers gain root access on system A but hide their presence/activity - even in the presence of logs to remote, more trusted server B.
https://github.com/c-blake/kslog has maybe a little more color on this topic, though I'm sure there are whole volumes written about it elsewhere. :)
EDIT: But maybe your "game over" point is just that it is kind of a pipe dream to hope to block all concealment tactics? That may be fair, but I think a lot of security folks cling to that dream. :)
> I believe the concern is that the attackers gain root access on system A but hide their presence/activity - even in the presence of logs to remote, more trusted server B.
That's generally called pivoting and has nothing to do with the method of persistence of the malicious code.
OP makes the point that certain systems move or have moved away from giving the root user the ability to extend/modify kernel code at runtime via kernel modules; my argument is that none of that matters, since the root user can still extend/modify kernel code at runtime via binary patching.
> my argument is that none of that matters since root user can still extend/modify kernel code at runtime via binary patching.
OpenBSD restricts that ability as well[1]. Neither /dev/mem nor /dev/kmem can be opened (read or write) during normal multi-user operation; you have to enter single-user mode (which requires serial console or physical access to achieve anything useful). Raw disk devices of mounted partitions can't be altered, immutable/append-only files can't be altered, etc.
You can also choose to completely prohibit access to raw disk devices, although that gets annoying when you e.g. need to format an external drive. There is of course still a lot of potential to do harm as root, but it's not as easy to create a persistent threat or resist in-system analysis by an administrator.
You sound like you're dismissing it, but even if it weren't all that useful on its own, it's part of a defense-in-depth strategy - just one layer in a carefully thought-out system. Pledge/unveil is another, so is privsep+imsg, W^X, (K)ASLR, syscall origin verification, boot-time libc/kernel relinking, and a couple dozen other features I can't even recall now.
Most importantly, all of these features and mitigations are enabled by default, are pretty much invisible to the end user or administrator, and are actually easy to use for a developer. Contrast this with e.g. seccomp or SELinux: Google even autocompletes "selinux permissive" and "selinux disable" among its top 3 suggestions...
Ah. I misunderstood your "persistence" to mean "persistence of logs" not "of code/illicit powers". Sorry - I read too quickly.
I do think the defense mentality, as evidenced by many comments in this thread, remains a bit too much about "how challenging can we make things" rather than about what is possible in theory. Besides binary patching a static kernel, as you say, you could for example keep remote hashes of all relevant files a la tripwire, plus remote access and programs to check said hashes. If the attacker can detect and adapt to the hash-checking pattern, then they can "provide the old file" for those purposes/etc. to hide their presence; they just also have to write the code to detect/conditionalize. The rationale of this defense mentality seems to hope for a "distribution of attacker laziness" that may at least "help", but sure - it is just a higher, finite bar, and once the work has been done... game over. But I do not mean to belabor the obvious. Anyway, thanks for clarifying your argument.
Aye, I think what you're describing is "security by obscurity" - i.e. the capability is still there, I'm just counting on the attacker not knowing that, because I've hidden it so well. It can work really well in combination with actual security practices, but it absolutely shouldn't be considered a security practice on its own.
Some versions of `strings` might try to parse the file as an executable, which could expose one to any vulnerabilities that may be present in the library used to do so.
However, on my Fedora 36 machine at least, it doesn't do so by default and I'd have to specify the `-d` flag for it to do this.
No, it's not fine; this is just another coercion and consolidation of power where it doesn't belong. Independent entities cannot innovate on their own because now there's a central apparatus that decides what should be innovated and how, with all the inherent political power struggles of the big players. Good luck.
EU should be there to set goals, not to dictate implementation.
A simple policy that both set_fs() calls need to happen within the same function body, with a corresponding CI test based on AST/DWARF inspection, would have also prevented it. Do you really want to rely on stack unwinding/destructors for security-sensitive code, when the stack is usually the first thing that gets controlled by the attacker? Exception handling (SEH) on Windows is an exploitation vector of its own.
I'm talking about the general idea, not a specific implementation. Having something happen at function/block exit doesn't have to mean runtime-configurable behaviour. If you don't have exceptions, it's pretty easy to statically compile that behaviour in and guarantee it, rather than rely on checks.
I'm not sure I see the link here (although it is definitely possible that I'm just missing something, I'm no physicist). I don't think Maxwell's demon needs perfect knowledge of the whole system -- it is just locally deciding to let through "fast" molecules and block "slow" ones.
I'm no physicist either, I just like to ask questions :)
How does the demon attain the knowledge of what is "fast" and "slow" without continuously observing (and thus interacting with) the particles? Velocity is just a function of position over time, so the demon needs at least 2 samples to make the most basic approximation. Where is the entropy for doing that coming from? How does the interference of the measuring apparatus factor into the whole process - what if the sole act of measurement changes the state of the particle from "fast" to "slow" or vice versa? Do we need to measure twice? But what if the second measurement causes the transition it was meant to detect?
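FWIW, the usual textbook resolution (due to Landauer and Bennett - stated from memory, so treat the details with care) is that the measurement itself can in principle be made reversible, but the demon has finite memory and must eventually erase its record of each result, and erasure is the step that costs:

```latex
% Landauer's bound: erasing one bit of the demon's memory dissipates
\Delta E \ge k_B T \ln 2
\quad\Longleftrightarrow\quad
\Delta S_{\mathrm{env}} \ge k_B \ln 2 \;\text{per bit erased}
```

which is at least as much entropy as the demon's sorting can remove from the gas, so the second law comes out even.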
This all depends on how much food you're expected to produce. Your "don't chop things beforehand" works if you're cooking for 1 or 2, but if you need to cook for 4+ you absolutely need to plan your steps and prepare your ingredients so that you merely combine them. This is even more important if you want some sufficiently consistent level of quality, which I suppose is implied.
I cook for four (my family) and always just pipeline everything. Sometimes I make up to 4 dishes at once. This feels a lot faster to me. So at least for that number I don’t agree.
This looks amazing, but I'd appreciate an autoplay button or free walk + look; the scrollwheel monorail handcranking makes the exploration rather frustrating.
Agreed. The content is great but the UX is terrible, so I quit early. Scrolling is not a good way to initiate action and is extremely annoying to deal with.
Arrow keys work for me. But it's still strange, because when you keep one pressed, often nothing happens for a while and then you abruptly move too quickly.