My debit card is a direct line to my primary bank account. If something goes wrong there and an attacker gains access, my cash is simply gone. Yes, the bank will perform an investigation, and yes, they may issue some provisional credits as a bridge, but there's a window of time between the theft and that investigation concluding where my actual cash is not in my account.
With a credit card, if the card is compromised, it's not my money being stolen - it's the card issuer's money from my line of credit, and they were planning on settling up with me when my monthly statement closes. I still have to launch a fraud case with the issuer, but critically, _all of my money is still in my bank account_ and I can continue to pay my other bills and obligations as normal.
I think it's reasonable to consider giving up that buffer to be additional risk for the debit card approach, setting aside any other advantages or disadvantages between the two.
> My debit card is a direct line to my primary bank account. If something goes wrong there and an attacker gains access, my cash is simply gone.
Your bank lacks proper security protections then. Here, most banks have limits on debit card transactions. If you want to do a very large transaction, you have to increase the limit for a short time period in your banking app, and there is a delay of a few hours (they'll warn you when the spending limit is increased).
IANAL, but consumer protection is also much stronger in Europe. E.g. in NL, if you stick to 5 basic rules, which are sensible things like not intentionally giving away your banking card or PIN code, the bank has to refund stolen money.
The EU has much stronger consumer protection, and it's on the banks to provide secure systems. Like if my card gets skimmed by an ATM or merchant, the bank pays for the fraudulent charges. And overall the EU has much less card fraud.
I've personally had a decent amount of luck with trying to reframe this sort of sentiment from "being useful" to "having purpose".
Right now, yes, it's true that a lot of my day-to-day purpose is driven by participating in the economy and setting myself up for the life I'd like to have in my later years, and I get genuine validation from solving problems and collaborating with people in my day job.
But sometimes, my purpose is to go snowboarding and forget about work. Or to help a friend fix their bicycle. Or to get lost in conversation with a new person I'm dating. As far as any of us know, we only get one turn to be alive on this rock, so we might as well purposefully enjoy it as much as we try to purposefully be useful.
If you look at Ginny Oliver from the article, it might be fair to question whether she was as useful on a lobster boat at 105 as she might have been in her youth. But I doubt she was concerned with usefulness, since she had such a sense of purpose.
It's less about torrents being the delivery mechanism and more about bringing data from a potentially unknown source, under potentially unknown licensing, and distributed for a potentially unknown reason into the corporate computing environment.
Torrents would be a perfectly valid way for Google to distribute this dataset, but the key difference would be that Google is providing it for this purpose and presumably didn't do anything underhanded to collect or generate it, and tells you explicitly how you're allowed to use it via the license.
That sort of legal and compliance homework is good practice for any business to some extent (don't use random p2p discoveries for sensitive business purposes), but is probably critical to remain employed in the sorts of giant enterprises where an internal security engineer needs to build a compelling case for spending money to upgrade an outdated protocol.
The thing about trademarks is that, if you want to prevent other people from using them, you generally have to still be using the mark yourself and be able/willing to justify to a court that you're still using it (at least in most legal systems that I'm familiar with).
Since the original company both changed names and was subsequently liquidated in bankruptcy nearly 20 years ago... that seems unlikely. There are only so many names out there, and occasionally they get recycled.
I have no insider knowledge here but it doesn't seem outlandish to think that the negotiations would go a little differently for an established product vs a brand new one. Goldman may have simply been the only bank willing to work with Apple when the customer base (in size, demographics, spending patterns, whatever) was hypothetical.
What bank offers rewards and no fees to subprime (below 660) customers? There aren't any, which is why no one wanted the deal: it was guaranteed to lose money. It's not like there's name recognition; I doubt most people could name the underlying bank for the Apple Card. The only place the bank is mentioned is in the fine print at the bottom of the card details. Everything is branded "Apple Card".
> I doubt most people could name the underlying bank for the Apple Card. The only place the bank is mentioned is in the fine print at the bottom of the card details.
And in the bottom-right corner of the titanium card and in the picture in Wallet. And it's advertised practically everywhere they mention the titanium card. And if you have Apple Savings it's also specified to be from GS everywhere.
GS was inexperienced and didn't know what they were getting into; that's why Apple was able to get such a good deal and also why GS now wants out. I fear Chase does know what they're getting into, and Apple likely has far less favorable terms now. Though I'm incredibly glad they didn't give it to Synchrony (who runs PayPal Credit and is incredibly sociopathic).
Game mode being latency-optimized really is the saving grace in a market segment where the big brands try to keep hardware cost as cheap as possible. Sure, you _could_ have a game mode that does all of the fancy processing closer to real-time, but now you can't use a bargain-basement CPU.
I think there's some real sample bias in that definition of "the community" though, because people who are passionate Ruby programmers giving conference talks, running meetups, etc. are often a distinctly different group than the regular-old programmers making business software go 'round every day. The big players writing tools for bringing various flavors of type safety into Ruby are doing it because they're experiencing the pain of having lots of programmers working on large, complex software over years-long periods with the tools that Ruby gives you out of the box. They often employ some of those community fixtures, but that's not the majority of an engineering organization.
The reality is that there certainly are enthusiast programmers who can thrive with the lightweight elegance of stock Ruby, but most people writing code professionally aren't enthusiast programmers under ideal conditions. Everything is always a little more distracted, a little less well-defined, and a little more coupled to legacy than anyone would want. And those are the conditions where I want my tools working as hard as possible, automatically, for me / my teams.
I've worked as a consultant for a few different tier 2 Ruby organizations and many startups myself, and I haven't seen any typing in production. I suppose reconciling our different experiences requires data about whether "most" Ruby programmers are at the giants or are spread out over the long tail of startups and scale-ups.
Shorter lifetimes mean more renewal events, which means more individual occasions on which LE (or whatever other cert authority) simply must be available before sites start falling off the internet for lack of ability to renew in time.
We're not quite there yet, but the logical progression of shorter and shorter certificate lifetimes to obviate the problems related to revocation lists would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet", alongside AWS, Cloudflare, and friends. With cert lifetimes measured in years or months, the CA can have a bad day, and as long as you didn't wait until the last possible minute to renew, you're unimpacted. With cert lifetimes trending towards days or less, now your CA really does need institutionally important levels of high availability.
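To put rough numbers on that (a back-of-the-envelope sketch with made-up lifetimes, assuming the common habit of renewing at about two-thirds of the way through a cert's lifetime), the renewal frequency and the tolerable CA outage window scale like this:

```python
# Back-of-the-envelope sketch: assumed lifetimes and an assumed renew-at-2/3
# policy, just to show how the CA "outage budget" shrinks as certs get shorter.
LIFETIMES_DAYS = [398, 90, 10, 1]   # hypothetical examples, not any policy list
RENEW_AT_FRACTION = 2 / 3           # renew once two-thirds of the lifetime has elapsed

for lifetime in LIFETIMES_DAYS:
    renewals_per_year = 365 / (lifetime * RENEW_AT_FRACTION)
    outage_budget_days = lifetime * (1 - RENEW_AT_FRACTION)
    print(f"{lifetime:>4}-day certs: ~{renewals_per_year:5.1f} renewals per site per year, "
          f"~{outage_budget_days:5.1f} days of CA downtime tolerable before expiry")
```

With one-day certs the buffer is roughly eight hours, which is exactly the "institutionally important availability" territory I mean.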
It's less that LE becomes more of a single point of failure than that ACME CAs in general join the list of critically available things required to keep a site online.
> would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet"
I think that particular ship sailed a decade ago!
> It's less that LE becomes more of a single point of failure than that ACME CAs in general join the list of critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs are critical infrastructure, and that there's latent risk whenever infrastructure becomes critical. I just think that risk is justified, and that LE in particular is no more or less of a SPOF with these policy changes.
"Internal" is a blurry boundary, though - you pick integer sequence numbers and then years on an API gets bolted on to your purely internal database and now your system is vulnerable to enumeration attacks. Does a vendor system where you reference some of your internal data count as "internal"? Is UID 1 the system user that was originally used to provision the system? Better try and attack that one specifically... the list goes on.
UUIDs or other similarly randomized IDs are useful because they don't include any ordering information or imply anything about significance, which is a very safe default despite the performance hits.
There certainly are reasons to avoid them, and the article we're commenting on names some good ones, at scale. But I'd argue that if you have those problems you likely have the resources and experience to mitigate the risks, and that truly random IDs are a safer default for most new systems if you don't have one of the very specific reasons to avoid them.
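As a toy illustration (purely hypothetical IDs, not any particular system's schema) of what each kind of ID leaks:

```python
import uuid

# Sequential IDs: seeing /users/1042 tells an attacker that 1041 and 1043
# almost certainly exist, and that there are roughly a thousand users total.
sequential_ids = [1040, 1041, 1042]

# Random UUIDv4s: ~122 bits of randomness, so one ID implies nothing about
# which other IDs exist, how many records there are, or creation order.
random_ids = [uuid.uuid4() for _ in range(3)]

print(sequential_ids)
print(random_ids)
```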
Internal means "not exposed outside some boundary". For most people, this boundary encompasses something larger than a single database, and this boundary can change.
There are examples of the warehouse-based model working, but they clearly require both density _and_ mindshare. It's not clear Kroger had either, based on the other comments in here. FreshDirect in NYC has been operating since the early 2000s with a fleet of tiny trucks with a couple of employees in them and a giant fulfillment center with essentially zero retail footprint.
(As an aside, they also have some of the best meat and produce you can get in the city without going to a farmers market. So many retail grocery stores here lack loading docks that the food handling involved in getting from the truck to the sidewalk to the basement of the store to the shelves is really, really rough, especially during the summer months. Skipping that and going warehouse-to-home has advantages.)