
Except that many component manufacturers release their EFI capsules signed with the Microsoft PKI. So no, you can't fully remove them if you want to verify updates.


While "So no, you can't fully remove them if you want to verify updates" is a valid point, it's also an answer to a different question than the one asked.


You're completely missing the point here.


If you're interested in the topic, there's a great YouTube channel that demonstrates such attacks IRL, together with full tutorials. Below are two satellite-related videos:

1) https://www.youtube.com/watch?v=2-mPaUwtqnE

2) https://www.youtube.com/watch?v=ka-smSSuLjY


Not the best title for the article. My first guess was version changes, or software being added to/removed from the repo. Turns out this is about source code modification.


As a native (British) English speaker, I was also unclear until reading the article.

Personally, I believe s/change/modify would make more sense, but that's just my opinion.

That aside, I'm a big fan of Debian; it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about, and it's great to see that removing calls home is a core principle.

All the more reason to have a catchier/more understandable title, because I believe the information in those short and sweet bullet points is quite impactful.


Patching out privacy issues isn't in Debian Policy; it's just part of the culture of Debian. But there are still unfixed/unfound issues too, so it is best to run opensnitch to mitigate some of those problems.

https://wiki.debian.org/PrivacyIssues


Thanks for the link, that'll come in very useful.

> it is best to run opensnitch to mitigate some of those problems

Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s); for me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always on in my homelab. A privacy-conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.


Homelabs usually run software that isn't from a distro too, so there are potentially more privacy issues there. Firewalling outgoing networking, along with a filtering proxy like Privoxy, might be a good start.


I understood what it meant immediately, but I think only because I already knew that Debian are infamous for doing this.


Me too. I was hoping for an explanation of why the software I have got used to, which works very well and isn't broken, keeps being removed from Debian in the next version because it is "unmaintained".


A problem I encountered while writing a custom stdlib is that certain language features expect the stdlib to be there.

For example, the <=> operator assumes that std::partial_ordering exists. Kinda lame. In the newer C++ standards, more and more features are unusable without the stdlib (or at least the std namespace).
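
To illustrate (a minimal sketch, assuming no <compare> has been included and no std::partial_ordering is declared anywhere in the translation unit): even the built-in spaceship on doubles is ill-formed here, because the language defines its result to be the library type std::partial_ordering.

    // error: built-in '<=>' cannot be used because
    // 'std::partial_ordering' is not declared; include <compare>
    bool less_than(double a, double b) {
        return (a <=> b) < 0;
    }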


At least you have the chance to implement your own std::partial_ordering if necessary; in most languages those kinds of features would be built into the compiler.
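
For the curious, a rough sketch of what a hand-rolled replacement could look like, mirroring the shape real implementations use (comparison against literal 0, which the standard type also supports, is elided; whether the compiler accepts a user-provided definition for the built-in <=> is implementation territory, though in practice compilers look the type up by name and shape):

    namespace std {
        class partial_ordering {
            signed char v_;  // -1 less, 0 equivalent, 1 greater, 2 unordered
            constexpr explicit partial_ordering(signed char v) : v_(v) {}
        public:
            static const partial_ordering less;
            static const partial_ordering equivalent;
            static const partial_ordering greater;
            static const partial_ordering unordered;
            friend constexpr bool operator==(partial_ordering, partial_ordering) = default;
        };
        inline constexpr partial_ordering partial_ordering::less{-1};
        inline constexpr partial_ordering partial_ordering::equivalent{0};
        inline constexpr partial_ordering partial_ordering::greater{1};
        inline constexpr partial_ordering partial_ordering::unordered{2};
    }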


Haskell solves this nicely: your own operators just shadow the built-in operators. (And you can opt to not import the built-in operators, and only use your own. Just like you can opt not to import printf in C.)


That's not the issue. The problem is that the operator is required by the language to return a type from the stdlib, so you have to pull in the stdlib to get that type.


In Haskell, when you define your own operators (including those that have the same name as those already defined in the standard library), you can specify your own types.

There are basically no operators defined 'by the language' in Haskell; they are all defined in the standard library.

(Of course, the standard library behaves as-if it defines eg (+) on integers in terms of some intrinsic introduced by the compiler.)


Sometimes standard library types are defined in terms of compiler built-ins, like `typedef decltype(nullptr) nullptr_t`, but that doesn't always make sense. E.g. for operator<=> the only alternative would be for the compiler to define std::partial_ordering internally, but what is gained by that?


Well, just the idea that you can use the entire core language without `#include`'ing any headers or depending on any standard-library stuff is seen as a benefit by some people (in which I include myself). C++ inherited from C a pretty strong distinction between "language" and "library". This distinction is relatively alien to, say, Python or JavaScript, but it's pretty fundamental to C that the compiler knows how to do a bunch of stuff and then the library is built _on top of_ the core language, rather than alongside it holding its hand the whole way.

Your example with partial_ordering is actually one of my longstanding pet issues. It would have been possible (I wrote in https://quuxplusone.github.io/blog/2018/04/15/built-in-libra... ) to define

    using strong_ordering = decltype(1 <=> 2);
    using partial_ordering = decltype(1. <=> 2.);
But it remains impossible, AFAIK, to define `weak_ordering` from within the core language. Maybe this is where someone will prove me wrong!

As of C++14 it's even possible to define the type `initializer_list` using only core-language constructs:

    template<class T> T dv();
    template<class T> auto ilist() { auto il = { dv<T>(), dv<T>() }; return il; }
    template<class T> using initializer_list = decltype(ilist<T>());
(But you aren't allowed to do these things without including <compare> resp. <initializer_list> first, because the Standard says so.)


Note that even for C the dependency from compiler to standard library exists in practice, because optimizing compilers treat some standard library functions like memcpy specially by default: they convert calls to them into optimized inline code, generate calls to them from core language constructs, or otherwise assume the functions match the standard library specification. Beyond that, you need compiler support libraries for things like operations missing from the target architecture, stack probes required on some platforms, and various other language and/or compiler features.
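
A small illustration of the first point (a hypothetical snippet; GCC and Clang document this behaviour, which is why even -ffreestanding builds must supply memcpy, memmove, memset and memcmp):

    // No <cstring> included anywhere, yet at higher optimization levels
    // the compiler may recognize this loop and emit a call to memcpy.
    void copy_words(int* dst, const int* src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i];
    }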

But for all of these (including the result types of operator<=>) you can define your own version, so it's a rather weak dependency.


> C++ inherited from C a pretty strong distinction between "language" and "library".

This is long gone in ANSI/ISO C, as there are language features that require library support: floating-point emulation, threading, and memory allocation (tricky without using assembly), among others.

Which is why the freestanding subset exists, or one otherwise has to call into OS APIs as an alternative to the standard library, as happens on non-UNIX/POSIX OSes.


Both AMD and Google note that Zen 1-4 are affected, but what changed about Zen5? According to the timeline, it was released before Google notified AMD [1].

Is it using different keys but the same scheme (and could it possibly be broken via side channels, as noted in the article)? Or perhaps AMD noticed something and changed up the microcode? Some clarification on that part would be nice.

[1] https://github.com/google/security-research/security/advisor...


We were not able to demonstrate that Zen5 is affected. If we end up doing so, we may release a new advisory or something.


Honestly I don't get why people are hating this response so much.

Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.

> we've fundamentally restructured our security practices to ensure this scenario can't recur

People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".

To everyone saying "how can you be sure that it will NEVER happen": maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection; they aren't saying "vulnerabilities won't happen", but that "exactly this one" won't.

So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd take from this is to use less "enterprise" language in security topics (or people will eat you in the comments).


Thank you.

Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.

Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...


Seems like you assumed none of your tools got backdoored. I'd start bootstrapping from busybox.


If the system is backdoored, do none of these things. Boot from rescue media. Save only non-executable files and wipe the rest.

Do not trust key material, sensitive data, or remote logins that the backdoored system has had control over. Repeat the same operation for them.

To check for backdoors, again boot from rescue media and do a full integrity check. Do not limit the check to open files.


Not even that is enough if the malware has loaded a kernel module.


Linux Local Privilege Escalation, but the attacker has to be in the sudo group in the first place.

Great read, but this feels like academic research. Technically correct, but impractical at best.


To be precise: you don't need to be in the sudo group, but in the lpadmin group. I'm not familiar with how Ubuntu groups are set up, but I guess it's likely that lpadmin is only granted to administrators by default.

That said, I'm guessing people aren't expecting lpadmin to mean a full privilege escalation to root.

There are two bugs here: one in cups, which allows it to chmod anything to 777 (it doesn't properly check for symlinks, or for the failure of bind), and one in wpa_supplicant, which lets it load arbitrary .so files as root. However, I suspect that even if these bugs are fixed, having access to lpadmin will still be a powerful enough primitive to escalate to root given the rather sizable attack surface of cups.
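
Not the actual cups code, but a minimal sketch of the symlink half of that bug class: chmod() on a path follows symlinks, so a path under attacker control can redirect the mode change to an arbitrary file, whereas opening with O_NOFOLLOW and using fchmod() on the descriptor avoids the re-resolution.

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // Vulnerable shape: if 'path' is a planted symlink, chmod() follows
    // it and makes whatever it points at world-writable.
    int relax_mode_unsafe(const char* path) {
        return chmod(path, 0777);
    }

    // Safer shape: refuse symlinks at open time, then change the mode
    // through the descriptor. (O_NOFOLLOW only guards the final path
    // component; full hardening needs openat2() with RESOLVE_NO_SYMLINKS.)
    int relax_mode_safer(const char* path) {
        int fd = open(path, O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
        if (fd < 0)
            return -1;
        int rc = fchmod(fd, 0660);
        close(fd);
        return rc;
    }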


It became crystal clear that cups is a can of worms, and it would be prudent to completely replace it with a new solution built from the ground up, ideally using modern tools and standards.


Or just sandbox cups. There's no reason cups needs to write anything beyond its "configuration" and its "print spool", and hence it shouldn't have access to anything beyond what it needs to configure itself and print.

Things like cups should be easy to sandbox, especially if we allow D-Bus-like APIs as a means to cross sandbox boundaries (i.e. an RPC mechanism).

And by sandbox, I don't mean simply using AppArmor-type rules (though that can work), but a cups that lives within its own file system where nothing else is even visible.

I.e. programs will always be buggy; even if we get rid of all language-oriented bugs, there will still be logic bugs that result in security holes. We just need to make it easy to isolate programs (and services) into their own sandboxes while retaining the ability for them to interact (as otherwise we lose much of the value of modern systems).

In practice, I would argue, a lot of modern systems do this already (a la iOS/Android). The apps run sandboxed and only have restricted abilities to interact with each other.


That's sort of the direction they're going with CUPS 3. The 'local server', which is what most people will need, runs as a normal user (not root), doesn't listen on the internet, and talks only the IPP Everywhere protocol. For supporting legacy printers, there will be separate sandboxed 'printer applications' which read in IPP Everywhere, run the driver code, and communicate with the backend printer using an appropriate protocol.

For enterprise users there will be a separate 'sharing server'.

https://ftp.pwg.org/pub/pwg/liaison/openprinting/presentatio...


I'd prefer they make not installing it the default. I don't need to print from Linux. I don't print from Windows nor MacOS much either. Less than once a year. But I particularly don't print from Linux. I suspect that's true for most people. Cups shouldn't be a default install, IMO.


> I don't print from Windows nor MacOS much either. Less than once a year.

Many Linux users and developers don't run anything else. If they're to print at all it'll be from Linux.


That's fine. They can install it. I suspect the actual number of computers running Linux that need to be able to print is less than 1 in 20.


And that new solution will have only 70% of cups' features 15 years in, with tons of gotchas in everyday use cases, like Wayland.


> new solution will have only 70% of cups' features 15 years

Which sounds fine? Most people don't want LPT printer support; they want AirPrint and WSD to just work.


What percentage counts as "most" people? What about enterprise users with complex setups/requirements? Will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out of the new product because, well, most people don't care about it...


> What percentage counts as "most" people? What about enterprise users with complex setups/requirements? Will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out of the new product because, well, most people don't care about it...

But the old, complex cups doesn't go away if a new, sandboxed version is developed, so the people who want the complexities can evaluate whether the security trade-off is worth it, and use it anyway if so.


Using modern tools and standards? So built with Node.js, runs like a pig, and only supports the three models made by the sponsoring company?


You honestly think you can do it without Electron?


Well, it's your lucky day: they're working on rearchitecting cups to be, among other things, more secure. See https://ftp.pwg.org/pub/pwg/liaison/openprinting/presentatio...


What if you don't need cups because you don't print anything?

Just `sudo apt remove cups`, right?

No, because cups is a dependency of the entire graphical subsystem; just removing cups also removes everything from the Nautilus file manager to Firefox to ubuntu-desktop itself.


Any idea why that is?!


This might be wrong, but it's based on my own experience of Ubuntu effectively uninstalling itself when I tried to remove a single package.

I think most of the default software gets installed as one large package group, rather than as individual pieces of software. Only the group is marked as manually installed, but the individual programs pulled in by that group are marked as automatically installed. If you try to apt install something you already have as part of the default distro software, you'll usually see a message saying something like "marked as manually installed."

When you go to uninstall one program from the group, that one program is uninstalled as requested, but the group itself has to be removed as well, since you've taken away one of that group's "dependencies" and its installation requirements can no longer be satisfied. You now have a load of software that was automatically installed as dependencies of another package, but is no longer a dependency of any manually-installed package. The next time you run apt autoremove, it'll remove all of those automatically-installed components and leave you with an almost bare system. (Marking the pieces you still want with `apt-mark manual` before autoremoving avoids this.)


If it's printable...? Perhaps?


systemd-printd incoming


To expand on this: if the user is in the sudo group, they have explicit permission to execute anything they like as root. If someone wants a user to not be able to do this, they don't put that user in the sudo group. As far as I can tell from the write-up, if you remove a user from the sudo group because you don't want them to have that privilege then this "exploit" won't work.

The bugs found look correct and have security implications, but what is demonstrated is therefore not really "root privilege escalation" since it applies only to users who already have that privilege.


They can execute anything they like as root... by entering their password.

This post shows a way that clever code can execute anything it likes as root without knowing the user's password. That seems pretty significant to me.


> They can execute anything they like as root... by entering their password.

If the malicious code has control of your user account, it can just arrange to wrap your shell prompt and wait for you to sudo something else. The sudo password prompt in its default arrangement doesn't really provide much security there, and isn't expected to.


On a server, you may be waiting months for that human to login and use sudo. Maybe even years.


On a properly configured server you'll be waiting forever, because the users actually running the applications on that server aren't the same users who have privileges to make changes to the system or have access to stuff like sudo. So if you take over the nginx/postgres/whatever user, you're not really going to get anywhere.

On the other hand you probably don't need to. Those users already expose all the juicy data on the server. You don't gain much from obtaining root anyways, except better persistence.

This attack might be more interesting when chained with some other exploit that gains access to a user's system via their e-mail client or browser. In other words, it's nice if you're NSO Group making exploits for targeting individuals, but not that useful if you're trying to make a botnet.


That's not really relevant nowadays. Most attacks are done indiscriminately and en masse, so an attacker wouldn't have to wait very long in practice.

Only in "advanced persistent thread" territory is your point really relevant, but the attack I describe is much more widely applicable. Having to wait a while is therefore not in any way a mitigation. In practice then, one cannot assume any security from sudo requiring a password.

https://en.wikipedia.org/wiki/Advanced_persistent_threat


Somewhat tangentially, I will say that Touch ID-based sudo is a real upgrade over password sudo. It still gives you that extra moment to reflect on whether you really want to run that command (unlike passwordless sudo), without being burdensome.


Using print server vulnerabilities to gain local privilege escalation is reminiscent of Windows 95. The year of "Linux on the Desktop," I guess.


In fact it's also reminiscent of Windows 11.


If that is your attitude, why bother with the sudo group at all? Just run as root.

(For what it's worth, I think most people would not lose much security from running as root, and the obsession with sudo is so much security theater, for exactly this sort of reason.)


User accounts and sudo give you auditing of who is doing what. There are other ways, sure, but checking auth.log is the simplest.

And while lpadmin users can escalate, I'm more interested in escalations from services like web servers or whatever, running as low-privilege users. I use sudo to allow scripts running as my web server to run specific, limited privileged programs as a simple layer of defence.


It's security cargo cult.

Use sudo instead of root. Change the ssh port. Disable ssh passwords. Use some port-knocking scheme or fail2ban. Close all ports with a firewall. Some of those requirements might come from some stupid compliance rules, but usually they just come from rumors and a lack of clear understanding of the threat model. And sometimes they might even slightly reduce security guarantees (like changing the ssh port to 10022: there's actually a security reason why only root can bind to ports below 1024, and by changing to a higher port you lose some tiny protection that Unix greybeards invented for you).
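
To make the below-1024 point concrete, a tiny sketch (assumes Linux defaults, i.e. net.ipv4.ip_unprivileged_port_start is still 1024): run as a normal user, the bind to 22 is refused while the bind to 10022 succeeds, which is exactly why any local user could squat the high port after a daemon crash and impersonate the real service.

    #include <arpa/inet.h>
    #include <cstdint>
    #include <cstdio>
    #include <sys/socket.h>
    #include <unistd.h>

    // Try to bind a TCP socket to the given loopback port.
    static bool try_bind(uint16_t port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return false;
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        bool ok = bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) == 0;
        close(fd);
        return ok;
    }

    int main() {
        // As non-root: 22 is a privileged port (EACCES), 10022 is not.
        std::printf("bind(22):    %s\n", try_bind(22)    ? "ok" : "refused");
        std::printf("bind(10022): %s\n", try_bind(10022) ? "ok" : "refused");
    }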

I'm not saying that all those measures are useless; in fact I often change the ssh port myself. But I'm doing that purely to reduce log spam, not because it's necessary for security. Configuring a firewall might be necessary in rare cases when brain-dead software must listen on 0.0.0.0 but must not be available outside. But that's not a given and should be decided on a case-by-case basis rather than applied blindly.


Honestly, sudo’s value is really sanity, not security.

The first time you use certain flavors of sudo, you get a nice little message which reminds you why sudo exists:

  We trust you have received the usual lecture from the local System
  Administrator. It usually boils down to these three things:
  
      #1) Respect the privacy of others.
      #2) Think before you type.
      #3) With great power comes great responsibility.

Realistically, sudo exists to remind a user of these points. That is: by needing to type "sudo" before a command, you're reminded to pay closer attention, so that you're not violating another user's privacy or doing something that's going to break your system.


Sudo is so commonly used, especially on developer machines, that I think it is used reflexively without any thought at all.

It should not be, but that's a different issue. It amazes me how many open-source projects want to be installed with "sudo" when there is no reason they should not be able to be built and used entirely from within the developer's home directory.

I know more than one person who starts a shell session with "sudo -i" and then just works as root because typing "sudo" all the time is an annoyance.


I wonder if this comes from how some developers view ops knowledge and tasks as merely ancillary to their interests and work.

For me, Linux was a hobby prior to and separately from programming. In the tutorials and documentation I read, every command was explained in detail and it was emphasized to me that I should never run a command that I don't fully understand. All instructions to elevate privileges were accompanied by advice about being careful running commands as root, because root's privileges are especially dangerous. I was interested in those warnings, and took them seriously, because I wanted to master what I was learning. What I was learning, though, explicitly included Linux/Unix norms like security 'best practices'.

Developer documentation doesn't usually concern itself with Linux/Unix norms the way that tutorials for Linux hobbyists and novice sysadmins do. At the same time, the developers reading it might be perfectly dedicated to mastery, but just not really see what sysadmins consider proper usage (let alone the considerations that inform the boundaries of such propriety) as on-topic for what they're studying/exploring/playing with. Diving into those details might not be 'part of the fun' for them.

What such a developer learns about sudo is mostly going to come from shallow pattern recognition: sudo is a button to slap when something doesn't work the first time, and maybe it has something to do with permissions.

But I think that comes from the mode of engagement, especially at the time of learning sudo, more than the mere frequency of use. I use sudo several times every day (including sometimes interactive sessions like you mention, with -i or -s), but I am careful to limit my usage to cases where it's really required. I'm not perfect about that; occasionally I run `sudo du` when `du` would suffice because I pulled something out of my shell history when I happened to be running it from / or whatever. But I certainly don't run it reflexively or thoughtlessly.


You are correct, sudo is not that useful; running as root is actually nice.

Because in both cases you must run auditd, and then you understand that sudo adds little value.


If you want to manage VMs, then you're probably using Terraform plus a provider. However, SDN (Software-Defined Networking) is not yet supported [1], which makes any kind of deployment with network separation infeasible (using IaC only).

[1] https://github.com/bpg/terraform-provider-proxmox/issues/817


> Proxmox uses ZFS making snapshotting quick

Proxmox only supports linear snapshots using ZFS (so no tree-like snapshots). This might be a deal-breaker for some use cases.

