> Waymo driver? The vehicles are autonomous

the "Waymo Driver" is how they refer to the self-driving platform (hardware and software). They've been pretty consistent with that branding, so it's not surprising that they used it here.

> Importantly, Waymo takes full ownership for something they write positively [...] But Waymo weasels out of taking responsibility for something they write about negatively

Pretty standard for corporate PR writing, unfortunately.


right-click, search Google: "MSK most commonly refers to Memorial Sloan Kettering Cancer Center, a world-renowned institution for cancer treatment and research"


The first co-author of the linked paper is also associated with MSKCC.


Cool... so random that I hadn't thought to right-click, but thanks! It was right under my nose the whole time, and it's useful to know now.


Right-click, "Search Google for 'ADC'", takes much less time than making this useless comment.

https://en.wikipedia.org/wiki/Analog-to-digital_converter


My point wasn't "I can't find this information", my point was "this is poorly written".


I wanted to say that I think it's overrated in terms of its position on HN, but rather than criticizing side issues of it, which often points to a weak article in general, I probably should have just said exactly what I don't like about it as a whole. So I'll do that.

I think the headline is problematic because it suggests the raw photos aren't very good and thus need processing. However, the raw data isn't something the camera makers intend to be put forth as a photo; the data is intended to be processed right from the start. The data can of course be presented as images, but those serve as visualizations of the data rather than as the source image or photo. Wikipedia does it a lot more justice: https://en.wikipedia.org/wiki/Raw_image_format

If articles like OP's catch on, camera makers might be incentivized to game the sensors so their output makes more sense to the general public, and that would be inefficient. So the proper context should be given, which this "unprocessed photo" article doesn't do, in my opinion.


> I think the headline is problematic because it suggests the raw photos aren't very good and thus need processing

That’s not how I read either the headline or the article. I read it as “this is a ‘raw photo’ fresh off your camera sensor, and this is everything your camera does behind the scenes to turn that into something we as humans recognize as a photo of something.” No judgement or implication that the raw photo is somehow wrong or something manufacturers should eliminate or “game”.
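For the curious, here's a minimal sketch of the kind of "behind the scenes" steps involved. It assumes a simple RGGB Bayer mosaic, and every constant (black level, white-balance gains, gamma) is made up for illustration; real camera pipelines do far more (per-pixel demosaicing, noise reduction, color-space transforms, tone mapping):

```python
# Minimal sketch of a raw "development" pipeline. All constants and the
# function itself are hypothetical, for illustration only.
import numpy as np

def develop_raw(bayer: np.ndarray, black_level=64, white_level=1023,
                wb_gains=(2.0, 1.0, 1.5), gamma=2.2) -> np.ndarray:
    """Turn a single-channel RGGB Bayer mosaic into a viewable RGB image."""
    # 1. Black-level subtraction and normalization to [0, 1].
    img = (bayer.astype(np.float32) - black_level) / (white_level - black_level)
    img = np.clip(img, 0.0, 1.0)

    # 2. Naive demosaic: treat each 2x2 RGGB tile as one RGB pixel
    #    (real cameras interpolate per pixel; this halves the resolution).
    r = img[0::2, 0::2]
    g = (img[0::2, 1::2] + img[1::2, 0::2]) / 2
    b = img[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)

    # 3. White balance: scale channels so neutral gray renders as gray.
    rgb = np.clip(rgb * np.array(wb_gains, dtype=np.float32), 0.0, 1.0)

    # 4. Gamma encoding: the sensor response is linear, displays are not.
    return rgb ** (1.0 / gamma)

# Random data standing in for sensor output:
mosaic = np.random.randint(64, 1024, size=(8, 8), dtype=np.uint16)
print(develop_raw(mosaic).shape)  # (4, 4, 3)
```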


Yep, and this exact requirement also caused the same kind of outage back in 2013 or so. DDoS rules were pushed to the GFE (edge proxy) every 15 seconds, and a bad release got out. Every single GFE worldwide crashed within 15 seconds. That outage is in the SRE book.
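To make the failure mode concrete, here's a toy simulation. The 15-second interval is from the anecdote; the fleet size and everything else are invented, and this is not how Google's push system actually works. The point is that the push interval bounds how fast a bad config reaches the whole fleet:

```python
# Toy model: a global config push with no canarying (illustrative only).
import random

PUSH_INTERVAL_S = 15   # from the anecdote: rules pushed every 15 seconds
FLEET_SIZE = 10_000    # invented fleet size

def simulate_bad_push() -> None:
    # Each proxy picks up the latest rules at some point within the push
    # interval, so a bad config pushed at t=0 has reached (and crashed)
    # the entire fleet by t = PUSH_INTERVAL_S. There is no time to react.
    crash_times = [random.uniform(0, PUSH_INTERVAL_S) for _ in range(FLEET_SIZE)]
    print(f"first crash at {min(crash_times):.1f}s, "
          f"whole fleet down by {max(crash_times):.1f}s")

simulate_bad_push()
```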


Is there a link to the SRE book?



I have a bit of a different take on browser fingerprinting: I don't want to uniquely identify you as a person so I can serve you ads, or whatever. I want to identify your traffic when you're scraping my content from 2,000 different IPs and 1,000 different user accounts, so I can block or rate-limit you without hurting anyone else.

Given the scale of scrapers these days (AI companies with VC money have no problem spinning up thousands of VMs running Chrome), fingerprinting at the browser level is the only realistic option.
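As a rough illustration (the attribute names, thresholds, and the idea of keying on a network-level fingerprint like JA3 are all my assumptions here, not any particular vendor's scheme), rate limiting by fingerprint instead of by IP might look something like this:

```python
# Hedged sketch: rate-limit by a browser/TLS fingerprint rather than by IP.
# Attribute names and thresholds are hypothetical.
import hashlib
import time
from collections import defaultdict, deque

WINDOW_S = 60       # sliding window length
MAX_REQUESTS = 100  # requests allowed per fingerprint per window

def fingerprint(attrs: dict) -> str:
    # Combine attributes that stay stable across IP/account rotation.
    # "ja3" here stands in for a TLS fingerprint supplied by the front end.
    stable = "|".join(attrs.get(k, "") for k in
                      ("ja3", "user-agent", "accept-language", "accept-encoding"))
    return hashlib.sha256(stable.encode()).hexdigest()[:16]

_hits: dict[str, deque] = defaultdict(deque)

def allow(attrs: dict) -> bool:
    """Sliding-window limiter keyed on fingerprint, so 2,000 IPs running
    one browser stack share one bucket while ordinary users are untouched."""
    key = fingerprint(attrs)
    now = time.monotonic()
    hits = _hits[key]
    while hits and now - hits[0] > WINDOW_S:
        hits.popleft()          # expire entries outside the window
    if len(hits) >= MAX_REQUESTS:
        return False            # block or throttle this fingerprint
    hits.append(now)
    return True
```

The design point is that the bucket key survives IP and account rotation, which is exactly what per-IP limits miss.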

(obligatory: my personal opinion, not necessarily my employer's)


TCP/IP Illustrated is still in copyright. Please don't post pirated material here.


Apologies; if I could edit it I would.


and his dad was head of computer security at the NSA for a while


Google had 86 TB of source-code data in Piper way back in 2016.


Dang, that's mind-boggling, especially keeping in mind that a book series like The Lord of the Rings is only a few megabytes saved as plain text.

Having 86 TB of plain text/source code, I can't fathom the scale, honestly.

Are you absolutely sure there aren't binaries in there? (Honestly asking; the scale is just insane from my perspective. Even the largest book compilation, like Anna's, isn't approaching that number if you strip out images ... and that's pretty much all books in circulation, with multiple versions per title.)


Each snapshot of the repo isn't that big, but all the snapshots together, plus all the commit metadata and such, are.
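A quick back-of-the-envelope shows how history can dwarf any single checkout. Every number below is invented for illustration; none of them are Google's actual figures:

```python
# Back-of-envelope: why full history dwarfs one snapshot.
# All numbers are invented for illustration.
snapshot_tb = 2             # hypothetical size of one full checkout
commits_per_day = 30_000    # hypothetical commit rate across the company
bytes_per_commit = 50_000   # hypothetical average delta + metadata
years = 15

history_tb = commits_per_day * 365 * years * bytes_per_commit / 1e12
print(f"one snapshot: ~{snapshot_tb} TB")
print(f"deltas + metadata over {years} years: ~{history_tb:.0f} TB")
# ~8 TB from deltas alone; add branches, large checked-in files, and
# generated artifacts, and tens of TB becomes plausible.
```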


Also the just-released DGX Spark from Nvidia (although it "only" has 128 GB of unified memory).


One of its defining features is the ability to link them together at speeds about the same as their RAM bandwidth, IIRC.


In AWS's defense, it's pretty easy to see the rationale for this. The amount of abuse they must deal with is staggering -- "spin up an account, do evil stuff until they notice and nuke the account, repeat."

