mayakacz's comments

Tailscalar here. Did you expect a wearable fabric or a CNI?


"It can be two things."


Late, but was hoping for a wearable :)


Tailscalar here.

The Windows client caches the current version for a while, so it may not yet have v1.32.3 available on your device. In that case, you can still pull the latest release from http://pkgs.tailscale.com/stable.


Tailscale admin here, politely requesting a client update push capability. Being able to see endpoint versions is helpful; I will be suspending unpatched endpoints in the near future.


Seconded


Thirded

Or get Tailscale into the Windows Store so it could auto-update for all the endpoints out in the wild that I don't control (company laptops, users' home PCs, etc.). Trying to get TEN people to manually update today was a pain; I can't imagine even triple that.


`winget upgrade tailscale.tailscale` works too.


I think that's right. I would strengthen that statement slightly: it's about ensuring that no actor - whether an insider, someone who has stolen their credentials, or someone who has otherwise compromised them - can single-handedly perform an action that accesses user data without it being known to another actor, via access logs, approvals, etc.

In terms of the upstream introduction of a new vulnerability, Binary Authorization for Borg can only verify that the code was in fact merged. See the section on third party code, "When importing changes from third party or open source code, we verify that the change is appropriate (for example, the latest version)."

Disclosure: I work at Google and helped write this whitepaper on Binary Authorization for Borg.


I think 'garbage' is a strong word, but I believe what the original poster is trying to say is that there are a lot of binaries, packages, and libraries that most organizations consume from upstream and do not verify directly. This requires either trust in a third party (often many third parties, in the case of open source), or more intensive validation of those components and of any changes to them.

Binary Authorization for Borg performs verification for pieces that come out of Google's CI/CD pipeline. For third-party code, see the doc: "When importing changes from third party or open source code, we verify that the change is appropriate (for example, the latest version)."
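
To make the enforcement pattern concrete, here's a deliberately simplified sketch (Python, with invented step names and keys): each CI/CD step attaches a signed attestation to an artifact, and the deployment layer admits the artifact only if every attestation its policy requires is present and valid. This illustrates the general pattern, not Google's actual implementation.

```python
import hashlib
import hmac

# Hypothetical per-step signing keys; in a real system these would be asymmetric
# keys held by the code review, build, and test systems, not shared secrets.
STEP_KEYS = {
    "code-review": b"review-signer-key",
    "built-from-source": b"builder-signer-key",
    "tests-passed": b"test-signer-key",
}

def sign_attestation(step: str, image_digest: str) -> str:
    """What a CI/CD step attaches to an artifact after it runs (illustration only)."""
    return hmac.new(STEP_KEYS[step], image_digest.encode(), hashlib.sha256).hexdigest()

def admit(image_digest: str, attestations: dict, required_steps: set) -> bool:
    """Admit the binary only if every step the policy requires has a valid attestation."""
    for step in required_steps:
        provided = attestations.get(step)
        expected = sign_attestation(step, image_digest)
        if provided is None or not hmac.compare_digest(provided, expected):
            return False  # missing or forged attestation: refuse to deploy
    return True

digest = hashlib.sha256(b"my-binary-contents").hexdigest()
attestations = {s: sign_attestation(s, digest) for s in ("code-review", "built-from-source")}

print(admit(digest, attestations, {"code-review", "built-from-source"}))                  # True
print(admit(digest, attestations, {"code-review", "built-from-source", "tests-passed"}))  # False
```

The shared-key MACs here just keep the sketch self-contained; the real point is the shape of the check: provenance from each required step, verified at deploy time.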

Disclosure: I work at Google and helped write this whitepaper on Binary Authorization for Borg.


Yes, there's a few listed in this blog post: https://cloud.google.com/blog/products/identity-security/bey...

- Kubernetes admission controllers, OSS part of Kubernetes: https://kubernetes.io/docs/reference/access-authn-authz/admi...
- Kritis, OSS: https://opensource.google/projects/kritis
- OPA Gatekeeper, OSS: https://github.com/open-policy-agent/gatekeeper
- Binary Authorization on GKE/Anthos: https://cloud.google.com/binary-authorization/

They don't all do all the pieces. The hardest part is going to be integrating whatever enforcement solution you choose with your upstream CI/CD pipeline.

Disclosure: I work at Google and helped write this whitepaper on Binary Authorization for Borg.


Agreed with this statement. It's generally a best practice to verify that software updates originate from a particular source before applying them in your environment, and most over-the-wire updates do this. What's different with Binary Authorization for Borg is that within Google, that last verification step means more than just "came from Google": it means "came from Google and went through all previous necessary checks", because of the way the CI/CD system works together.
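
For readers who want to see that baseline in code: "came from a particular source" usually boils down to verifying a detached signature against a pinned publisher key before applying the update. Here is a minimal Python sketch with placeholder filenames and an assumed RSA/PKCS#1 v1.5 signature; Binary Authorization for Borg layers the "went through all previous necessary checks" part on top of something like this.

```python
# Minimal "did this update come from the expected publisher?" check: verify a
# detached signature over the update payload against a pinned public key.
# Filenames and the key format are placeholders for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_update(update_path: str, sig_path: str, pubkey_path: str) -> bool:
    with open(pubkey_path, "rb") as f:
        publisher_key = serialization.load_pem_public_key(f.read())
    with open(update_path, "rb") as f:
        payload = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        publisher_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True   # signed by the pinned publisher key
    except InvalidSignature:
        return False  # do not apply the update

if __name__ == "__main__":
    ok = verify_update("update.bin", "update.bin.sig", "publisher_pub.pem")
    print("apply update" if ok else "reject update")
```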

Disclosure: I work at Google and helped write this whitepaper on Binary Authorization for Borg.


I am perfectly aware that Binary Authorization for Borg is for binaries running inside Google.

I am saying that our solution provides almost the opposite: publicly verifiable assurance that you are running legitimate binaries, even though they were built by automation.


in-toto.io also addresses the "proof it went through some steps". How would you compare the two systems?


The biggest difference for me is that in-toto allows you to define any set of upstream metadata requirements in a very open format, whereas Binary Authorization has a set of centrally defined requirements that teams tend to adopt in tranches to meet a minimum bar. A freeform format may sound better, but in practice I've found that it makes it harder for people to know what they should actually do. In Binary Authorization for Borg, services still define service-specific policies, but they pick from a previously defined set of potential requirements. See the section on service-specific policies: https://cloud.google.com/security/binary-authorization-for-b...
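
A toy way to picture the "pick from a centrally defined set" model (all requirement names here are invented, not Google's actual list): service policies can only reference requirements the central catalog defines, which keeps policies comparable and tells teams exactly what they can ask for.

```python
# Toy model of "service policies pick from a centrally defined set of requirements".
CENTRAL_REQUIREMENTS = {
    "code_reviewed",
    "built_by_trusted_builder",
    "tests_passed",
    "vulnerability_scanned",
}

def make_service_policy(*requirements):
    """Reject anything the central catalog doesn't define, so policies stay comparable."""
    unknown = set(requirements) - CENTRAL_REQUIREMENTS
    if unknown:
        raise ValueError(f"not centrally defined requirements: {sorted(unknown)}")
    return frozenset(requirements)

def satisfies(policy, attested_checks):
    """A deployment satisfies a policy if every required check was attested."""
    return policy <= set(attested_checks)

payments_policy = make_service_policy("code_reviewed", "built_by_trusted_builder", "tests_passed")
print(satisfies(payments_policy, {"code_reviewed", "built_by_trusted_builder", "tests_passed"}))  # True
print(satisfies(payments_policy, {"built_by_trusted_builder"}))                                   # False
# make_service_policy("signed_by_my_laptop")  # raises ValueError: not in the central set
```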

You can more easily compare Grafeas and Kritis (OSS projects Google developed, which are similar to GCR Vulnerability Scanning and Binary Authorization for GKE) to in-toto. In fact, I gave a talk covering some of the options for this here: https://youtu.be/uDWXKKEO8NU?t=1314

Disclosure: I work at Google and helped write this whitepaper on Binary Authorization for Borg.


I did a BSc in math/econ, then MSc in math. Went into consulting work, mostly in security. Now work in crypto for a tech company.


Gmail: incl. news alerts

Feedly (fresh articles): Ars Risk Assessment, Bloomberg, The Atlantic Business, various friends' and food blogs

Pocket (older articles)

If I have more time: HN, r/crypto, sometimes Medium

To waste time: Sporcle, Instagram, Foodgawker


Hi, we're the team working on Google's experimental encrypted BigQuery client, also mentioned in the article. It's open-sourced and available on GitHub: https://github.com/google/encrypted-bigquery-client

Happy to answer any questions or chat about cool uses of PHE!
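
For anyone curious what PHE buys you in practice (assuming PHE here means partially homomorphic encryption), here's a tiny illustration using the open-source python-paillier package (`pip install phe`), which is unrelated to the BigQuery client itself: a server can sum ciphertexts it cannot read, and only the key holder can decrypt the aggregate.

```python
# Additively homomorphic encryption with python-paillier. This is NOT the
# encrypted BigQuery client's code; it just shows the kind of aggregate (a SUM)
# you can compute over ciphertexts without decrypting the individual values.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

salaries = [52_000, 61_500, 48_250]               # plaintext exists only on the client
encrypted = [public_key.encrypt(s) for s in salaries]

# A server holding only the public key and the ciphertexts can do this:
encrypted_total = sum(encrypted[1:], encrypted[0])  # ciphertext + ciphertext
encrypted_scaled = encrypted_total * 2              # ciphertext * plaintext scalar

# Only the private-key holder can recover the aggregates:
print(private_key.decrypt(encrypted_total))   # 161750
print(private_key.decrypt(encrypted_scaled))  # 323500
```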


Hmmm. There was a TV movie I saw about this when I was growing up in Canada... https://en.wikipedia.org/wiki/Great_Stork_Derby

