The web-flow signing system is for users’ convenience in places where it’s not feasible to sign the commit with their own private key: commits made in the web interface or on an ephemeral GH-provisioned VM (codespace). For the latter, you are free to send your own private key to your codespace so you can sign your own commits but GitHub cannot because they don’t have your private key and don’t want to have it. Defaults matter and signed commits are important.
As a sibling notes, this use case and similar ones are the reason the committer field exists as distinct from the author field. I think a $10K bounty for this bug speaks to how seriously they stand behind the fact that they will only sign and mark as verified commits whose author field matches an authenticated user.
> The web-flow signing system is for users’ convenience in places where it’s not feasible to sign the commit with their own private key:
Who signs all their commits? Joey Hess, maybe? There are certainly others. But I’ve never seen anyone make a case for this. In fact, I’ve only seen negative cases, since it just encourages you to automate your signing process, which many are not comfortable with.[1]
I’m not important enough to sign anything.
On Bitbucket we push the big merge button and out comes a commit with the correct person attributed to it.[2] Even Atlassian manages to do this the correct way.
> For the latter, you are free to send your own private key to your codespace so you can sign your own commits but
Yeah GPG/SSH sign commits... who cares. Most people don’t.
> Defaults matter and signed commits are important.
I don’t care about your opinion.
I wouldn’t mind if this was an option that I could opt out of. (I’m wondering out loud, not asking you or anyone else.) I just haven’t heard of it yet.
I’m a Git user after all so I’m used to changing bad defaults.
> As a sibling notes, this use case and similar ones are the reason the committer field exists as distinct from the author field.
Quite a leap to go from attributing emailed-around patches to the correct author, while also recording the committer (e.g. the maintainer), to what looks equivalent to Norton Antivirus junk output stuffed 40 lines into someone’s email signature.
> I think a $10K bounty for this bug speaks to how seriously they stand behind the fact that they will only sign and mark as verified commits whose author field matches an authenticated user.
“I think the price they put on this SPOOFING vulnerability speaks to how serious they are about verified commits”, they said without irony.
“Sent from my GitHub”, ah they all felt at-ease immediately... wait the same platform that had a spoofing vulnerability?
[1] Well, allegedly. I have never signed anything so I don’t know.
[2] They committed it too. Or wait. Was that the merge button?
The research you do in that scenario would just tell you the prices at which shares had actually changed hands. A decentralized, market-based price-discovery mechanism cannot be considered collusion or price fixing, since it’s exactly the opposite.
> but you still need a Personal Access Token to integrate pull requests and issues with your Git client
(Nitpick from a former GH employee) PATs really are almost exclusively intended for personal testing with curl and such. The strongly preferred way for apps like the one you describe to work is a pseudo-OAuth flow (“GitHub Apps”) which yields a token that is not a Personal Access Token. It’s better in just about every way: more ergonomic, more secure (revocable, shorter duration, predictable fine-grained scopes with a mechanism for requesting additional permissions as apps change, etc.), and requests are attributable to the application which generated them instead of just the user. If you use an app that actually requires generating and pasting a PAT, it’s either extremely old or made by someone who is not prioritizing security and user experience. It even works well in CLI apps, cf. the `gh` command line utility.
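To make the contrast concrete, here is a rough sketch of the device-flow variant of that exchange (roughly what `gh auth login` does under the hood); the client ID is a placeholder for a real GitHub App's ID, and error handling (e.g. the `authorization_pending` poll response, expiry) is omitted:

```python
# Hedged sketch of GitHub's OAuth device flow. CLIENT_ID is a made-up
# placeholder; a real app registers its own and enables the device flow.
import time
import requests

CLIENT_ID = "Iv1.xxxxxxxxxxxxxxxx"  # hypothetical GitHub App client ID
HEADERS = {"Accept": "application/json"}

# 1. Ask GitHub for a device code and a short user code.
resp = requests.post(
    "https://github.com/login/device/code",
    data={"client_id": CLIENT_ID},
    headers=HEADERS,
).json()
print(f"Visit {resp['verification_uri']} and enter code {resp['user_code']}")

# 2. Poll until the user authorizes the app in their browser.
while True:
    time.sleep(resp.get("interval", 5))
    token = requests.post(
        "https://github.com/login/oauth/access_token",
        data={
            "client_id": CLIENT_ID,
            "device_code": resp["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
        headers=HEADERS,
    ).json()
    if "access_token" in token:
        break  # a short-lived, revocable token tied to the app; not a PAT

print("token prefix:", token["access_token"][:4])
```

The token you end up with is issued through the app, so requests made with it are attributable to the app rather than just to the user, which is the property a pasted PAT lacks.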
It sounds almost exactly like the mechanics of a session cookie as implemented on nearly every website on earth. Exchange a password for a bearer credential that is randomly chosen and revocable. There are only so many basic ideas in security.
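For illustration only, the core of that exchange fits in a few lines; the in-memory dict and helper names here are invented stand-ins for whatever session table a real site would use:

```python
# Toy sketch of "exchange a password for a random, revocable bearer credential".
import hashlib
import secrets

SESSIONS: dict[str, str] = {}  # token hash -> user id (stand-in for a DB table)

def issue_session(user_id: str) -> str:
    token = secrets.token_urlsafe(32)                              # unguessable
    SESSIONS[hashlib.sha256(token.encode()).hexdigest()] = user_id # store a hash
    return token                                                   # set as an HttpOnly cookie

def revoke_all(user_id: str) -> None:
    # Revocation is just deleting the rows; the password is never stored here.
    for key in [k for k, v in SESSIONS.items() if v == user_id]:
        del SESSIONS[key]
```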
“Please read my rant about how this useless hair-shirt I wear (clearing first-party cookies too often) breaks the web (for me)”
> the web has no notion of a “device”, and this is a very intentional design choice made for privacy purposes [...] why do web developers persist in believing in this fiction of a “device”?
Cookies are a core part of the web which enable the construction of stateful applications on top of a stateless protocol. “Remembered device” is usually just an extra cookie set on login, or a row in a backend database. It’s no more fictional than the web itself, which is after all just a series of electrical impulses over wires.
Whether a device (however you build that abstraction) has previously logged in is a high-signal data point that meaningfully increases account security at login time and all serious web security teams use it to protect their users.
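A minimal sketch of that extra cookie check, with invented names (the `device_token` cookie and the in-memory table) standing in for whatever a real backend stores:

```python
# Rough sketch of a "remembered device" signal at login time.
import secrets

KNOWN_DEVICES: dict[str, set[str]] = {}  # user id -> tokens seen before

def remember_device(user_id: str) -> str:
    """Called after a fully verified login; returns a long-lived cookie value."""
    token = secrets.token_urlsafe(32)
    KNOWN_DEVICES.setdefault(user_id, set()).add(token)
    return token  # the browser keeps it as e.g. a "device_token" cookie

def needs_extra_verification(user_id: str, cookie: str | None) -> bool:
    """At the next login, skip the extra challenge only if the cookie matches."""
    return cookie not in KNOWN_DEVICES.get(user_id, set())
```

Clearing cookies deletes exactly that token, which is why the site then treats the login as coming from a never-before-seen device.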
Imagine if these people made posts like "I edited user32.dll to dummy out random functions I deem unnecessary like RegisterClass or CreateWindowEx and now nothing works! This is proof that Windows is broken!"
It will forever be a mystery to me why people deliberately make their browsers work in ways that contradict the standards the web is built on, and then find a way to blame others when stuff doesn't work. It's already difficult enough to support all major browsers when their interpretations of the standards differ only slightly.
or the entitled "I've disabled Javascript, all web developers should make their site work without JS", when even in 2013 only 0.2% of all visitors to gov.uk had JS disabled*
That might be a very misleading statistic. What if more than 0.2% of people wanted to disable JavaScript, but in the end surrendered to the fact that those pesky web devs never test their creations with JS disabled?
I know I am one of those who would like to disable JS, but it's just not practical. So stats really are a dangerous tool; they can sometimes end up telling you just what you want to hear...
If anyone tests their web pages without JS, it would be gov.uk. I'm an American but I frequently reference their guidelines on accessibility and similar because they're so thorough and conscientious about it.
Yeah, but I am afraid every other website on the Internet is not made with the care and good craftsmanship that gov.uk applies to its website. I wish, though! But then so many web devs (and so many of their managers, too!) would be lost without knowing what to do without a JS framework that weighs several MB worth of bloat...
It's about cost. If it takes a web dev a day to progressively enhance a page from HTML through CSS and JS, then that's a day spent adding value for only a small slice of the users.
Even if you multiplied the 0.2 by 10 or 20, you're still looking at a slice not large enough to build for.
I'd say it's about priorities (i.e. very much related to cost, but with slight differences). That's why I mentioned the lack of people who care. If you as a manager care, or if a dev with enough decision power cares, it will just be included as part of the time it costs to get the website done.
A worker putting on a helmet and appropriate clothing is losing time that could be better spent producing value. Or, if we talk about social policies and minorities, for example: as the word itself says, it's a "minority" of people, so it might seem like a slice not large enough to improve things for. A bit extreme as examples go, but you get the idea.
Also, 0.2% of the world population is still a huge number of people. It's just that the people who can take decisions don't care. But some people do care, like those in charge of gov.uk websites, and then we all see how well things can be done and how poorly we've been doing in comparison.
> people...who would like to disable JS, but it's just not practical
As I tell my kid when he "wants" something: I want a pony, and a million dollars.
I don't see why the fact that some people might like that matters. I mean, given the choice for free sure I'd "like" it too. But it will never remotely be worth it to build two entirely separate web applications for every website to make that dream a reality, nor do I see the whole Internet agreeing to discard the decades of advancements in FE technologies to go back to script-free HTML.
All that said, boy, would that be a great jobs program for developers over age 35! Imagine developing for the web with no Webpack, no JS compilers, transpilers, or bundlers.[1]
[1]: Or whatever you frontend folks use for your toolchain this year, or this nanosecond...
> decades of advancements in FE technologies to go back to script-free HTML.
I don’t think modern webshit which requires downloading megabytes and megabytes of obfuscated code to view someone’s blog is an “advancement” for anyone except the adtech bastards.
Look, I agree with you on the core idea. There have really been advances in technology, but for each step made with brilliance and prowess, there have been three steps back with laziness and carelessness.
Some applications of the newer technologies merit their use.
Most use cases, however, don't.
Bad practices abound, and the "art" of programming becomes a chore performed by, let's say, not very skilled people. Luckily there are still lots of good managers and good devs who value adequately done products, but on average that's not the case, and the Web as a whole gets more and more bloated.
One day you decide to disable JavaScript on your phone (which BTW is an incredible way to speed up modern webshit, as the sibling comment puts it, on under-powered mobile devices), and it turns out that lots of f*ing blogs won't load their plain text and static pictures if JS is not enabled. That's an absurd situation we've collectively ended up in.
The mere thought of having a Word document with just text, images, and a couple tables, and not being able to open it if VB macros were disabled, sounds absurd. But that's exactly what large parts of the Web have become.
Your complaint conflates two different things. Turning off javascript is not akin to turning off macros in a Word document. It’s like deleting your desktop environment and complaining that Word doesn’t work in a terminal.
I’m not sure you’re really thinking about the impact of not having any javascript. Want to reply to a comment on HN? The whole page reloads. Want to upvote a comment? The whole page reloads. Sure, you can give every comment an ID and reload back to where you were, but then you can’t have collapsible comments (because CSS, which is presumably what you’d be hacking together for collapsible comments without JS, can’t respond to anchor references).
There’s a million other usability things that require JS, it’s so much more than a macro language.
There are bad practices everywhere, in every field, and it seems like everyone feels they have the authority to beat down JS, and web dev as a whole, likely with zero experience working with it.
Web arguably has the best developer experience of any field. It’s so good, they took the web and put it in your desktop. Electron, GTK, KDE, everything is javascript.
The war is lost and over. Start arguing/discussing how JS can be improved instead of insisting that it shouldn’t exist (there’s PLENTY to complain about, don’t get me wrong).
You made it sound like even for a simple site, JavaScript would be a necessity and we should expect websites to not work well without it. I was actually about to concede that it's OK if JS has eaten the world (see my closing thought)...
> you can still view it through https://nitter.net, which I guess makes the open source Javascript-less front-end to Twitter more accessible for SEO
WHAT? I had no idea. So there is the Nitter [1] frontend for Twitter (a platform clearly more complicated than HN), and they manage not only to work without JavaScript, but to have that as one of their core motivations.
Things get even better: from that project I find out about Invidious [2], a frontend for none other than YouTube! And again, no JS is not only an option but a highlighted feature.
After these discoveries, my bar for how JS-free we should expect most websites to be has just gone up, not down. Especially those websites that consist of just presenting text and media (i.e. the immense majority).
I agree the war is lost, though. Luckily there will still exist people desiring and making noise for a leaner and faster experience. The problem is bloated frameworks and privacy invasion via JS. Those are essentially my main reasons to want to browse the Web without JS.
Maybe I'm just too old but it feels like humanity will always find a way to collectively fuck everything up. The web is always going to be shit. Fortunately another contingent of humanity invented uBlock and reading mode.
This happens a lot. A LOT. Not this exactly, but I know a lot of people who keep .reg files for "fixing Windows bullshit" on a new system, which they built up when Windows XP or Windows 2000 was new.
Of course, a lot of those "fixes" now break things, because the underlying workings of Windows change a lot, but every last person I know who uses these has very odd problems with Windows that I have never once seen myself.
A lot of these things that only experts knew how to do 20 years ago are now the causes of very odd problems, because these folks don't bother to verify that those registry settings are still the correct way to make the intended changes.
It seems to me like the user you're replying to is well aware of how web devs attempt to identify unique devices (browser cookies). They're saying that the way this is implemented leads to poor user experiences, due to the faulty assumption that, just because a cookie doesn't exist in the client browser, the device must be new (distinct from previously used devices). I don't see how your comment actually addresses that.

I tend to agree with the other user. Making healthy, security-conscious decisions like low TTLs on local cookie storage (such as cookie purge on browser/tab close) feels unrewarded when the site enforces additional security gates on login.

The point is: unique login devices may have been a good idea, but in practice the design of the web does not make them an ideal candidate for bolstering user security. Maybe someday passkeys solve the unique-device problem well enough that faulty-assumption methods like browser cookie storage cease to be commonplace.
I'm the parent commenter, but the viewpoint you're agreeing with is an extract from the article, not my perspective, as indicated by the > before the paragraph. My own comments are the subsequent two paragraphs.
In short, I entirely agree with @brasic: the article author has a nonstandard configuration (clearing cookies automatically before their expiry date) and based their entire article on the difficulties that this highly unusual and unnecessary choice has caused for them. "Hair shirt" is a great way to describe it.
> Making healthy security conscious decisions like low TTLs on local cookie storage (such as cookie purge on browser/tab close) feels unrewarded when the site enforces additional security gates on login. The point is: unique login devices may have been a good idea, but in practice the design of the web does not make them an ideal candidate for bolstering user security.
This is exactly @brasic's point, though: if a website can affirmatively identify that you've logged in from this machine before, that's a pretty good indicator that this new session is a legitimate login. We can do that through cookies, and for most users that's just fine. If you clear cookies regularly for security reasons, then you shouldn't be offended that a website asks you for extra confirmation that you are you, since that is also done for security reasons.
Clearing cookies for a domain is instructing your browser to identify itself as though it had never spoken to that server before. If you want the server to know you're still you, maybe just leave the cookies there?
Yep, to be clear I was agreeing with lolinder. I would have posted top level but they had already expressed almost exactly my objection to the article so I replied to avoid redundancy.
Maybe just accept my password and at most my TOTP? Asking for some other auth method that I may not be able to provide in a timely manner, or at all, only helps the provider cover their ass.
It doesn’t show as flagged for me. Time heals all wounds, if your comment is not objectionable it will usually be vouched for eventually. Which is one of the reasons it’s a rule here that you don’t complain about voting. Such comments also rarely age like wine :)
You can complain, you'll just get a small karma hit for it. It's inevitable because it's off topic, but karma on HN is more or less meaningless anyway.
What do you mean? WhatsApp rolled out E2E encryption between 2014 and 2016.
It’s by far the largest messaging service that is e2e encrypted by default. I think it’s more than fair to call it ahead of the curve for a service of its size.
But let's go back to the start: WhatsApp started in 2011.
Encryption arrived in varying degrees, until there was some pressure to change the amount of privacy messages had, for advertising purposes.
Lots in the news at the time.
2018 "Another point of disagreement was over WhatsApp’s encryption. In 2016, WhatsApp added end-to-end encryption, a security feature that scrambles people’s messages so that outsiders, including WhatsApp’s owners, can’t read them. Facebook executives wanted to make it easier for businesses to use its tools, and WhatsApp executives believed that doing so would require some weakening of its encryption."
2018 "Acton said he tried to push Facebook towards an alternative, less privacy hostile business model for WhatsApp — suggesting a metered-user model such as by charging a tenth of a penny after a certain large number of free messages were used up." https://techcrunch.com/2018/09/26/whatsapp-founder-brian-act...
How would this even be mitigated while preserving the (wacky) existing support for runtime-selected PKCS#11 provider libraries? It strikes me that the most compatible way might be to double down on the wackiness and try to perform the required feature detection in some more indirect way like parsing the named lib with readelf(1) or the platform equivalent.
The sensible thing would be to force users to register available provider shared libraries in an ssh-agent config file, but that feels like a pretty big breaking change.
Edit: Didn’t realize a patch was already available. I see that they did in fact fix this with a breaking change, by simply disabling the functionality by default, and recommending that users allowlist their specific libraries:
Potentially-incompatible changes
--------------------------------
* ssh-agent(8): the agent will now refuse requests to load PKCS#11
modules issued by remote clients by default. A flag has been added
to restore the previous behaviour "-Oallow-remote-pkcs11"
By finally acknowledging that loading plugins via shared objects is a bad idea, one that was only valuable in the days of resource-constrained computers.
Any application that wants to use plugins and is security sensitive, should adopt OS IPC, and load them as separate processes.
Process separation was already in place. The PKCS#11 library is loaded by a long lived helper process, not ssh-agent itself.
> (Note to the curious readers: for security reasons, and as explained in
> the "Background" section below, ssh-agent does not actually load such a
> shared library in its own address space (where private keys are stored),
> but in a separate, dedicated process, ssh-pkcs11-helper.)
That didn’t help, because the long-lived nature of the helper process exposed it to the shared lib’s side effects, such that they could be chained into a gadget. If I understand correctly, the long life is important for interacting with many smart cards and HSMs because of their APIs.
If you are suggesting that there should be an IPC API for this process and vendors ship a full program that speaks it, that seems reasonable at a glance, but not really something the OpenSSH project can dictate.
Indeed, my suggestion is zero dynamic libraries in security critical code/applications.
If security is a goal, loading in-process foreign code is already a lost battle.
Plugins as dynamic libraries made sense when we were fighting for each MB, not when people have hardware where they go to the extreme of running containers for every application they can think of.
It would help against attacks that depend on corrupting process address space, like this one.
Additionally, one could use OS security features to reduce API surface for each plugin, depending on what they are actually supposed to be doing, e.g. no need for file system access if they only do in-memory data processing.
As for "would it help in 100% of the attacks?", no.
Even if there were no plugins support, there is still the possibility to exploit logical errors anyway.
What matters is having a balance between reducing attack surface and application features, and in that regard process sandboxing is much safer than loading foreign code in-process.
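To make the alternative concrete, here is a toy sketch of the out-of-process approach; the JSON-over-stdio protocol and the stand-in plugin are invented for illustration and are not anything OpenSSH actually does:

```python
# Toy sketch: the host never loads the plugin into its own address space.
# It runs the plugin as a child process and talks to it over pipes, so a
# buggy or malicious plugin cannot corrupt the host's memory.
import json
import subprocess
import sys

# Stand-in "plugin": a separate interpreter that answers one JSON request.
PLUGIN_SRC = """
import json, sys
req = json.loads(sys.stdin.readline())
print(json.dumps({"ok": True, "echo": req["payload"].upper()}))
"""

def call_plugin(payload: str) -> dict:
    proc = subprocess.Popen(
        [sys.executable, "-c", PLUGIN_SRC],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    # One request out, one response back; the OS enforces the isolation,
    # and the child could additionally be sandboxed (seccomp, pledge, etc.).
    out, _ = proc.communicate(json.dumps({"payload": payload}) + "\n", timeout=5)
    return json.loads(out)

if __name__ == "__main__":
    print(call_plugin("sign this"))  # {'ok': True, 'echo': 'SIGN THIS'}
```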
> How would this even be mitigated while preserving the (wacky) existing support for runtime-selected PKCS#11 provider libraries?
Put the pkcs11 libraries in a specific directory, configure only that directory, and let users manually add others. Or stop using agent forwarding and configure ProxyJump where needed (if that's the only use case you're interested in).
Pre-acquisition WhatsApp was built using Erlang and had an impressively large user-to-server ratio, IIRC. I expect that since then things have mostly been rebuilt to use existing Meta shared infrastructure, which might not be as stable or efficient.