My work around for this is to take a walk without headphones. For me, it means that the only thing I can do is think (can't read, can't watch video, etc). And by the time I've finished a 30 mins walk, I've usually got a bunch of things that I'm enthused about working on.
For those who don't know: Concentrated sugar solutions are indeed hygroscopic.
So it will indeed absorb water out of the atmosphere.
It's still doubtful that it will do it fast enough to create any significant pressure differential though :)
No, because glass isn't a crystal. This is part of why it was such a hard problem: the existing sub-molecular imaging techniques couldn't image it.
What's the difference between N and N+ ? As far as I can tell, N+ just means a higher doping concentration? But it seems very fuzzy as to what 'higher' means?
With the increasing prevalence of TLS traffic (https et al), any cache almost certainly needs to be an explicitly configured proxy, which is a user-configuration issue with all its associated support and compatibility problems.
Add to that that power consumption on a satellite is often a limiting factor, plus the security issues, the added latency, the ever-decreasing fraction of traffic that is cacheable, etc. etc. ... it's very much not clear that this is a good idea.
I see that you replied before reading the final paragraph.
Whatever the power consumption of broadcasting to all receivers in the cell, it is much, much less than transmitting the same data to each of the receivers individually, and less again than receiving it over and over from the fiber terminal, even if you have to power up an SSD for it.
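As a back-of-envelope illustration of why broadcast wins here (all numbers below are made-up assumptions for the sketch, not measured satellite figures):

```python
# Toy comparison: unicasting a popular file to every receiver in a cell
# vs. broadcasting it once. Numbers are illustrative assumptions only.

receivers = 1000          # receivers in the cell wanting the same file
file_mb = 100             # size of the file
tx_energy_per_mb = 1.0    # assumed energy units to transmit 1 MB once

unicast = receivers * file_mb * tx_energy_per_mb   # send to each individually
broadcast = file_mb * tx_energy_per_mb             # send once, everyone listens

print(f"unicast:   {unicast:.0f} units")
print(f"broadcast: {broadcast:.0f} units ({unicast / broadcast:.0f}x less)")
```

The ratio is simply the number of receivers sharing the cell, which is why the savings dwarf the cost of spinning up local storage.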
Not quite: TrueTime makes guarantees about ordering rather than accuracy.
If you ask for the time and get A, and then ask for the time again and get B, then TrueTime guarantees that A is less than B.
Obviously, in a distributed setting this is much easier to do if you have accurately synced clocks, but that accuracy goes toward reducing the uncertainty in the time (and hence making TrueTime faster) rather than providing accuracy.
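A minimal sketch of how an interval-based clock turns bounded uncertainty into a hard ordering guarantee (this is an illustrative toy, not the real TrueTime/Spanner API; `EPSILON` and all names here are assumptions):

```python
import time
from dataclasses import dataclass

EPSILON = 0.005  # assumed clock-uncertainty bound, in seconds

@dataclass
class TTInterval:
    """A timestamp reported as an uncertainty interval, TrueTime-style."""
    earliest: float
    latest: float

def tt_now() -> TTInterval:
    t = time.time()
    return TTInterval(t - EPSILON, t + EPSILON)

def commit_timestamp() -> float:
    """Pick a timestamp, then wait out the uncertainty window
    ("commit wait") so any later caller is guaranteed a larger value."""
    ts = tt_now().latest
    while tt_now().earliest <= ts:   # true time might still be <= ts
        time.sleep(EPSILON / 10)
    return ts

a = commit_timestamp()
b = commit_timestamp()
assert a < b  # ordering holds even with only loosely synced clocks
```

Note how a smaller `EPSILON` (better clock sync) only shortens the wait; the ordering guarantee itself never depends on the clock being accurate.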
As far as I'm aware, IBM is one of the few chip-designers who have eDRAM capabilities.
IBM has eDRAM on a number of chips in varying capacities, but... it's difficult for me to think of Intel, AMD, Apple, ARM, or other chips that have eDRAM of any kind.
Intel had one: the eDRAM "Crystalwell" chip, but that was seemingly a one-off, never attempted again. Even then, it was a second die "glued" onto the main chip, not true eDRAM like IBM's (embedded into the same process).
You're right. My bad. It's much less common than I'd thought.
(Intel had it on a number of chips that included Iris Pro Graphics across Haswell, Broadwell, Skylake, etc.)
Crystalwell was the codename for the eDRAM that was grafted onto Broadwell. (EDIT: Apparently Haswell, but... yeah. Crystalwell + Haswell for eDRAM goodness)
1. The sheer volume of fraud attempts. Economics often dictate that it needs to be cheap and fast to reject a fraud attempt.
2. Information leakage. It's normal to see people complain that '<insert service of choice> banned them and refused to say why'. There's a very good reason for that: They're trying to slow the rate at which fraudsters learn to exploit them. So they deliberately don't detail exactly what the issue was. Yes, it's super frustrating if you get innocently caught up in it, but it's not arbitrary.
TL;DR: Like everything else in life, there are real and genuine trade-offs here.
It's never that simple. You're implicitly assuming that a fraudster wants the account long term, which is rarely true.
And identity is a VERY complex area, and nothing like as simple as "plenty of ways to verify identities". Particularly noting that fraud is often carried out by leveraging many partial opportunities: I use the (false/stolen) identity from over there to carry out the fraud over here.
> You're implicitly assuming that a fraudster wants the account long term, which is rarely true.
Wait what?
Here's my comment:
> Fraudster also doesn't have the same needs as most customers, they don't need to keep the same account...
How am I assuming the fraudster wants the account? I'm arguing the reverse: they don't want it, which gives more credibility to anyone making an effort to get their account back. I don't understand that part; feel free to clarify it.
> And identity is a VERY complex area, and nothing like as simple as "plenty of ways to verify identities".
I was arguing that opening up customer service for these instances won't be a huge risk if you keep a flag on the account, since the fraudster doesn't need the account long term (as you seem to agree).
Doing other verifications reduces that risk further, a risk I already consider minimal. No one said it would be 100% effective; nothing is perfect, and sure, some will be able to bypass it, but as I said, they don't need to.
> Particularly noting that fraud is often carried out by leveraging many partial opportunities: I use the (false/stolen) identity from over there to carry out of the fraud over here.
Yup, which is why getting more proof of the user's identity helps confirm they actually are who they claim to be. Here in Canada we can do that at a Canada Post office. It's not something Stripe asks for, so if someone with a flagged account asks to get it back, an in-person local verification will most probably be harder for them.
I know of one very-high-level example who is just incredibly disciplined about how they spend their time (e.g. "I did this, but was slowed significantly by poor code over there. Estimate about 30% time loss. I'm going to spend N weeks more in this area, so it's worth me spending 3.5 days fixing that code, but no more. I think I can fix it in about 2 days, so I will fix it. If I can't see the end of the fix by the time I'm 4 hours in, then I'm probably wrong about the expected time, so will abandon the fix").
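The break-even arithmetic behind that kind of discipline can be sketched like this (the specific numbers are assumptions chosen to match the quoted example):

```python
# Break-even budget for fixing slow-to-work-with code (numbers assumed).

loss_fraction = 0.30      # estimated slowdown caused by the poor code
weeks_remaining = 2.33    # planned time still to spend in this area
workdays_per_week = 5

# Days the bad code will cost if left unfixed = maximum worth spending on a fix.
budget_days = loss_fraction * weeks_remaining * workdays_per_week
print(f"worth spending up to {budget_days:.1f} days on the fix")

estimated_fix_days = 2.0
if estimated_fix_days <= budget_days:
    print("fix it, but abandon if it overruns the estimate badly")
```

The point isn't the exact numbers; it's that an explicit budget gives you a pre-committed stopping rule instead of an open-ended yak shave.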
The result is ridiculous productivity. They don't work long hours (strictly 9-5), but they are very, very focused on making certain that there will be meaningful results from those hours.
So they exist, but it's absolutely not going to be common!
I don't know about other places, but volume of work does not matter for L5->L6 at Google. The kind of work you do is what matters. So doing 50% more doesn't get you there.
The L6 (and L7) ICs I know got there by implementing really tricky components way down the tech stack that are very difficult to update due to insane implicit dependencies and also enormously impactful in terms of savings across the entire company.
It's not rare at all in my experience. You just have to have a combination of skills that pays more than average on the open market (big data runtimes or distributed systems, say, or AI, in addition to other domain expertise perhaps), and you'll be L6 or L7 IC, no problem. No managerial duties whatsoever. L7 is more nebulous, but L6 is absolutely doable for anyone with a modicum of marketable skill.
You see, in spite of their internal promo treadmill and brainwashing, companies have to compete on comp regardless. Comp ranges, in turn, are tied to levels. If, say, someone who knows a distributed query plan from a hole in the ground costs $1M/yr, they'll get that and the "level" will be adjusted accordingly, as much as necessary, ignoring the elaborate formal descriptions of job responsibilities.
It's the proles in the trenches that take the levels seriously. People who actually run things understand that levels are merely a hamster wheel, put in place to give you something to look forward to, and to keep your comp expectations in check. Lack of "broad organizational impact" is _the_ most often used cudgel to deny promotions to the more senior level, unless your skill stack allows you to bypass this bullshit entirely.
That sounds entirely logical, but unfortunately having never worked in a FAANG or FAANG-like company[1] I can't tell whether this is the reality or not.
I would very much like other readers here with experience in FAANG or FAANG-like companies to comment on your comment.
[1] Most Corporates and enterprises are different - they resolutely refuse to match based on market-value of skills. There's also very little a developer (senior or not) can do to have a company-wide impact, because the fiefdoms that are in place will resist any attempt to "lose" their territory.
Largely false. Managers have pretty wide latitude to approve compensation matches for employees they want to keep without up-leveling them. So if you're a star L5 at Google and Facebook is giving you a $MM E7 package, you could end up with a $MM stock grant (which would normally be reserved for L7+) at Google. Compensation raises are tied to level, but that just means that you'll sit and vest your existing stock grant without further raises or refreshers until your formal level matches your comp package. This happened semi-frequently in the ~2010-2011 era when Facebook was recruiting heavily out of Google with pre-IPO stock.
The promo process is based on your value to your employer, and at least at Google is done by committee, which is drawn from a selection of high-level employees outside of your manager's department (so they have no incentive to keep you) and has a packet of information that does not include anything about your market value outside the company.
Stock ranges (as well as refresh grants) are also tied to levels you're hired at. "Largely false" my ass. All of this BS applies _only_ to people with weak skill stacks. Committee doesn't even know your "previous" level elsewhere. They know that you've e.g. built this very impressive distributed system here and now and hopefully it made $MM revenue difference (which you'd be smart to quantify and mention). So if you can do it, you'll get rewarded appropriately. If you can't, you won't. And sure, you can't go E5 to E7, there's just no way. But getting from E5 to E7 (or T5 to T7 at Google) is doable in two promo cycles for someone who's in the right place at the right time, and has the right skill stack. Or in fact in the immediate if they are willing to move to another FANG company.
Distributed system & big data knowledge is table stakes for Google engineers. At L4 you will be expected to deal with large distributed systems handling petabytes of data. You probably don't need it for L3 (which includes new grads who would never have had an opportunity to develop that skillset elsewhere), but by the time you're at L4 you should be dealing with that regularly.
This is one of the perks of moving from FAANG to non-FAANG; hiring managers elsewhere know that simply by being an engineer there you will have had exposure to distributed systems and big data, and so bring a transferable skillset to their company.
I thought about whether your comment is true in the context of Speech & Image recognition and other AI technologies. The Speech leads do seem to have an awfully high number of Distinguished/Fellow engineers (this is the top of the eng ladder, where you can basically write your own ticket and your comp package is enough to retire on). However, there are still an awful lot of L4 engineers working on Speech, many training deep-learning models as part of their daily duties.
> Distributed system & big data knowledge is table stakes for Google engineers
True, but observe that knowledge is very different from _experience_. In anything sufficiently complicated there's tons of stuff you won't find in books, and even if you do, you won't pay sufficient attention to it. E.g. you could sort of know how a database works (from a university course or something), but if you haven't implemented, say, a state-of-the-art query engine or a storage manager, you'll still be SOL in practice until you write one or more of those things and actually gain experience.
Knowledge by itself is darn near worthless, it only becomes valuable with experience, particularly if it's in 2 or more related fields.