Why are indexes on foreign keys required? If I'm doing a join, it's going to look up the primary key of the other table, so how will an index on the foreign key help?
Referential integrity checks by the DB engine require reverse look-ups of foreign keys: e.g. when you delete a row from the referenced table, the engine has to verify that no child rows still point at it, and without an index that check necessarily becomes a full table scan. Apart from that, applications also often do look-ups like this.
I think I'm gonna keep my by-now-13-year-old car for a LONG time. Nothing is locked down, no complicated gearbox, no electronic parking brake, no remote control, no subscriptions, plenty of room in the engine bay (I can even swap the utility belt rollers without having to take out the radiator), and it's all mine and nobody else's. Yes, it will cost in maintenance, but at least I can work on it myself if need be, without a computer. And by now the car is unattractive anyway; nobody is going to steal it, so I don't need to worry too much about it.
This definitely resonates, for all old cars designed to be maintained with simple tools and common parts.
That said, I have (well, lease) an Ioniq 5 and it's the best freaking car I've ever driven, by a mile. Some of the features on these things should have been on ICE cars a decade ago -- they're not unique to EVs -- but car companies thrive on product stagnation until someone raises the bar and they're forced to "me too".
> And by now the car is unattractive anyway, nobody is going to steal it so I don't need to worry too much about it.
Think you’ve got that backwards: typically it is older cars that get stolen. At 13 years old it's new enough that it should be harder to steal, but thieves looking for a joyride, or for a vehicle to use in other crimes, aren't after a new car.
You're gonna pay for it, whether used or brand-new, but they don't have that kind of nonsense.
My only gripe with Toyota/Lexus is they keep raising prices while offering the same product year in, year out, 'cause they know people looking for a reliable car don't really have alternatives.
It can get worse. People use their phones to send a literal photo of their screen. This happens almost daily in one of the programming Discords I'm a member of, and it drives me crazy; these are supposed to be (future) programmers and they don't even know how to take a proper screenshot??
We use GCP at work, and it works well for what we use it for (VMs, container storage, cloud file storage). I wish they would stop deprecating stuff, though; it just causes developer busy work with no additional value.
> it just causes developer busy work with no additional value.
Anyone remotely familiar with Google as a third party developer will notice the pattern: this will ramp up until it is almost your entire job simply dealing with their changes, most of which will be not-quite-actual-fixes to their previous round of changes.
This is not unique to Google, but it is a strategy employed to slow down development at other companies, and so help preserve the moat.
Old-timers who date back to when Joel Spolsky's early musings on the business of software development were fixtures on the HN front page will remember him using the phrase "fire and motion" for Microsoft's old strategy of constantly making changes so that everyone trying to keep up was—like the Red Queen—running as fast as they could but not getting anywhere.
> Watch out when your competition fires at you. Do they just want to force you to keep busy reacting to their volleys, so you can’t move forward?
> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.
> it is a strategy employed to slow down development at other companies, and so help preserve the moat.
Worked at Google for 7 years, and your post reminds me it is time to share a secret: it is Koyaanisqatsi* and people's base instincts unbridled, no more. There is no quicker route to irrelevancy than being the person who cares about something from last year's OKRs.
* to be clear, s/Koyaanisqatsi/too big to exist healthily and yet it does exist -- edited this in, after I realized it's unclear even if you watched the movie and know the translation of the word
If they actually incentivized a group to support stability and continuity among enterprise customers, they would probably be able to diversify their revenue away from ads. Microsoft understands this…
The real sick thing is it doesn't matter, right? Like we're commenting on an article about how they won the day yesterday and Cloud revenue continues to skyrocket.
To be clear, I agree with you, and am puzzled by the lack of consequences from the real world for the stuff I saw. But that was always the mystery of Google to me, in a nutshell: How can we keep getting away with this?
A large part of that is the Google-are-super-geniuses PR effort. Anyone pointing out that Google's products don't reflect this to their boss faces having their own credibility reduced instead.
If it's so obvious, and Google supposedly knows this internally, and can obviously tell that some customers avoid Google because it's so fast at sunsetting services, why are they not doing anything about it?
Imaginary conversation between an honest VP and earnest year 0 me, here's what the VP says:
"We definitely care about deprecations: now tell me how to accomplish that with Sundar's Focus™* Agenda over the last 2 years"
* no net headcount increases, random firings, and any new headcount should be overseas. i.e. we have the same # of people we did in 2021 with 50% more to do.
I used to think that made sense (as sibling mentions, Spolsky's "fire and motion" thesis)... until I worked at a large-ish tech company whose internal platforms also kept doing this. Heck, the platform I owned also underwent a couple of cycles of this. And so a large part of our work was just running the Red Queen's race of deprecations and migrations.
So it was definitely not "fire and motion," as there was no competition. I think platforms genuinely need to evolve: as new use-cases onboard and technology progresses, the assumptions underpinning the platform's design and architecture no longer hold.
However, I do think a small part of the problem was also PDD: "Promotion Driven Development."
Unfortunately the software deployed on top of them will.
So you either:
1) postpone all your updates for years until a bad CVE hits and you need to update or some application goes end of life and you’re screwed because updating becomes a massive exercise
2) do regular updates and patches to the entire stack, including Linux, in which case, you’re in the same position you were before with running on the stack rot treadmill
So you might’ve moved the rot to a different place, but I don’t know if you’ve reduced any of it. I’ve owned stuff deployed off of vanilla VMs and I actually found it harder to maintain because everything was a one-off.
My rationale for staying up to date aggressively is that it minimizes integration work. Basically, integration work multiplies, it doesn't just accumulate. So the further you fall behind, the more that can break when you finally do upgrade, and you create needlessly more work related to testing and fixing all of that. Upgrading a system that was fully up to date until a few days/weeks ago is generally easy; there's only so much that changes. Doing the same to something that was last touched five years ago can be a bit painful: APIs that no longer exist, components that are no longer supported, code that no longer compiles, etc.
I see a lot of teams being overly conservative about keeping their stuff up to date, running software that is years out of date, with lots of known-and-fixed bugs of all varieties, performance issues that have long since been addressed, etc. All in the name of stability.
I treat anything that isn't up to date as technical debt. If an update breaks stuff, I need to know, so I can either deal with it or document a workaround or a (usually temporary) version rollback. While that happens, it doesn't happen a lot. And I prefer knowing about these things because I've tried it, over being ignorant of the breakage because I haven't updated anything in years. It just adds to the hidden pile of technical debt you don't even know you have. Ignorance is not an excuse for not dealing with your technical debt. Or worse, compounding it by building on top of it and creating more technical debt in the process.
Dealing with small changes over time is a lot less work than with dealing with a large delta all at once. It's something I've been doing for years. If I work on any of my projects, the first thing I do is update dependencies. Make sure stuff still works (tests). Make sure deprecated APIs are dealt with.
If you’re willing to put in the maintenance work, you’ll probably be in good shape whether you’re on plain VMs or a snazzy cloud provider managed service.
If business understands that you need time to work on these things :’)
RedShift1 complains that GCP is "deprecating stuff". I wouldn't put doing regular updates in the same problem category as having to deal with part of your stack disappearing.
To me "I wish they would stop deprecating stuff" sounds like any part of the stack has something like a 1% or even 10% chance in any given year to be shut off.
I would expect that by carefully choosing your stack from open source software in the Debian repos, you can bring the probability of any given part being gone with no successor to less than 0.1% per year. As an example - could you imagine Python becoming unavailable in 2026? Or SQLite? Docker?
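The compounding is what makes those rates matter. A back-of-envelope sketch (the annual probabilities are the ones quoted above, and independence across years is an assumption):

```python
# Chance a stack component is still available after N years, given an
# annual probability p of it being shut off (assuming independent years).
def survival(p_annual, years):
    return (1 - p_annual) ** years

for p in (0.10, 0.01, 0.001):
    print(f"p={p:.1%}/yr -> {survival(p, 10):.1%} chance of surviving 10 years")
```

At 10% per year, a component has only about a one-in-three chance of surviving a decade; at 0.1% it's about 99%.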
> Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore. So drop your fucking plans and go start digging through our shitty documentation, begging for scraps on forums, and oh by the way, our new shit is COMPLETELY different from the old shit, because well, we fucked that design up pretty bad, heh, but hey, that’s YOUR problem, not our problem.
> We remain committed as always to ensuring everything you write will be unusable within 1 year.
I have a bunch of scripts that do the work for me so I can just run that in the background and do something else, and for one off tasks grumble and mumble a bit.
Oh my god, I thought I was the only one. Everyone says that you should be using the command line anyway; I'd rather use a GUI, but it's disgustingly slow.
It surprises me that it's built in Python (as is AWS'), which doesn't seem like a very appropriate language for exactly this reason. Go would seem much more apposite.
The execution speed of the language the CLI is written in doesn't matter. Any difference is dwarfed by the slowness of network calls, specifically their API response times.
It's not negligible versus a single request. Even `gcloud --help` takes over a second on this machine; actually getting it to do a simple list request takes barely any longer over any reasonable connection.
Plus Python is notoriously not easy to deploy for - a Go (or Rust or whatever) binary would have almost no dependencies to worry about.
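The start-up overhead is easy to measure yourself. A quick sketch, using `python -c pass` as a stand-in since `gcloud` may not be installed (swap in any CLI command you want to profile):

```python
# Time a full process launch of a trivial command; for short-lived CLI
# invocations this fixed start-up cost can dominate the actual work.
import subprocess
import sys
import time

def startup_seconds(cmd):
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - t0

elapsed = startup_seconds([sys.executable, "-c", "pass"])
print(f"bare interpreter start-up: {elapsed * 1000:.1f} ms")
```

Comparing that number against the same measurement for a compiled binary makes the per-invocation tax of an interpreted CLI concrete.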
It's not just busy work, it's like being chained to a sandpaper treadmill especially in some specific "hot" domains like AI. It feels like you can build something with it and 3 months later you've got to update your code because half the dependencies are deprecated.
In AI, whatever you wrote is going to be deprecated in three months, and in six months SOTA LLMs will one-shot it directly. That's what it means for a domain to be "hot". AI is currently the largest and most intense global R&D project in the history of humanity. So for this one field, your complaint makes no sense.
Ah, that is where logging and traceability come in! But not to worry, the cloud has excellent tools for that! The fact that logging and tracing will become half your cloud cost? Oh well, let's just sweep that under the rug.
I love GraphQL, it's great. It takes away the ambiguous way to organize REST APIs (don't we all love the endless discussion about which HTTP status code to use...), and at the top level separates operations into query/mutation/subscription instead of trying to segment everything into HTTP keywords. It takes a bunch of decision layers away and that means faster development.
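For illustration, that top-level split looks like this in schema form (a made-up sketch, not any particular API):

```graphql
type Order {
  id: ID!
  total: Float!
}

# Operations are grouped by intent, not mapped onto HTTP verbs:
type Query        { order(id: ID!): Order }
type Mutation     { createOrder(total: Float!): Order }
type Subscription { orderUpdated(id: ID!): Order }
```

Reads, writes, and live updates each get one unambiguous home, which is exactly the decision layer REST leaves to convention.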
Much easier would just be native decimal support in JS and we can represent a decimal using d as the decimal point, for example 15d0 (15.0) or 384d25 (384.25).
Using cents instead of dollars sounds fine until you have to do math like VAT, you really need decimal math for that.
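A minimal sketch of the problem with Python's stdlib `decimal` (the 21% rate and the amounts are made-up examples):

```python
# Binary floats drift, which is why people reach for integer cents:
print(0.1 + 0.2 == 0.3)  # False

# But a VAT rate is a fraction of the net amount, so cents-as-integers
# aren't enough either: you need decimal math plus an explicit rounding rule.
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
net = Decimal("19.99")
vat = (net * Decimal("0.21")).quantize(CENT, rounding=ROUND_HALF_UP)
print(vat)  # 4.20
```

The `quantize` call is the important part: which direction 4.1979 rounds is a business/legal rule, not something to leave to float behavior.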