Have you seen some of the stuff in the Enron or Epstein emails? They can be rather candid, writing as if there is nothing to hide or they will never get caught.
If you have trouble with toenail trauma (chipped nails, for instance), check out heel lock lacing. It prevents your toes from hitting the front of the shoe.
One example here [0] is for running shoes, but it's useful for normal walking too. Ian of course has his own entry about this [1].
I found this to be incredibly helpful for blister prevention. Only downside is your laces have to be pretty long since this takes up a couple more inches on each side.
> Tesla was Norway's top-selling car brand for a fifth consecutive year, with a 19.1% market share, followed by Volkswagen at 13.3% of registrations and Volvo Cars at 7.8%.
I found it very lacking in how to do CD with no downtime.
It requires a particular dance if you ever want to add/delete a field and make sure both new-code and old-code work with both new-schema and old-schema.
The workaround I found was to run tests with new-schema+old-code in CI when I have schema changes, and then `makemigrations` before deploying new-code.
Are there better patterns beyond "oh you can just be careful"?
This is not specific to Django; it applies to any project using a database. Here are a couple of quite useful resources I used when we had to address this:
Generally it's also advisable to set a statement timeout for migrations (sketched below), otherwise you can end up with unintended downtime: ALTER TABLE operations very often require an ACCESS EXCLUSIVE lock, and if you're migrating a table that already has, say, a very long SELECT from a background task running against it, all other SELECTs will queue up behind the migration and cause request timeouts.
In some cases you can work around this limitation by manually composing operations that require less strict locks, but in our case it was much simpler to just make sure all Celery workers were stopped during migrations.
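A minimal sketch of the timeout advice, assuming Postgres and a Django migration; the app, model, and field names are invented. `SET LOCAL` keeps the timeout scoped to the migration's transaction, so the ALTER either gets its lock quickly or fails fast instead of queueing everything behind it:

```python
from django.db import migrations, models


class Migration(migrations.Migration):
    # Keep everything in one transaction so SET LOCAL applies to the ALTER below.
    atomic = True

    dependencies = [("myapp", "0041_previous")]  # hypothetical dependency

    operations = [
        # Cancel the migration if it waits (or runs) longer than 5s,
        # instead of blocking every other query behind its lock request.
        migrations.RunSQL(
            "SET LOCAL statement_timeout = '5s';",
            reverse_sql=migrations.RunSQL.noop,
        ),
        migrations.AddField(
            model_name="order",
            name="note",
            field=models.TextField(null=True),
        ),
    ]
```

If the timeout fires, the deploy fails cleanly and you can retry once the long-running query is gone.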
I simplify it this way: I don't delete fields or tables in migrations once an app is in production; I only clean them up manually after they can no longer be used by any production version. I treat the database schema as if it were "append only" and only add new fields. This means you always "roll forward" the database; rollback migrations are "not a thing" to me. I don't rename physical columns in production. If an old field and a new field that represent the same datum need to exist simultaneously, a trigger keeps them in sync.
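A rough sketch of that "trigger keeps them in sync" idea, assuming Postgres, a hypothetical `customer` table, and an old `fullname` column being replaced by a new `full_name` column; wrapped in a Django `RunSQL` migration to match the rest of the thread:

```python
from django.db import migrations

SYNC_SQL = """
CREATE OR REPLACE FUNCTION sync_customer_fullname() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        IF NEW.fullname IS DISTINCT FROM OLD.fullname THEN
            NEW.full_name := NEW.fullname;   -- old code wrote the legacy column
        ELSIF NEW.full_name IS DISTINCT FROM OLD.full_name THEN
            NEW.fullname := NEW.full_name;   -- new code wrote the new column
        END IF;
    ELSE
        -- INSERT: whichever column the writer provided wins
        IF NEW.full_name IS NULL THEN
            NEW.full_name := NEW.fullname;
        ELSE
            NEW.fullname := NEW.full_name;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_fullname_sync
BEFORE INSERT OR UPDATE ON customer
FOR EACH ROW EXECUTE FUNCTION sync_customer_fullname();
"""

DROP_SQL = """
DROP TRIGGER IF EXISTS customer_fullname_sync ON customer;
DROP FUNCTION IF EXISTS sync_customer_fullname();
"""


class Migration(migrations.Migration):
    dependencies = [("myapp", "0042_add_full_name")]  # hypothetical

    operations = [migrations.RunSQL(SYNC_SQL, reverse_sql=DROP_SQL)]
```

Once every production version writes only the new column, the trigger and the old column can be dropped.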
1. Make a schema migration that will work both with old and new code
2. Make a code change
3. Clean up schema migration
Example: deleting a field:
1. Schema migration to make the column optional
2. Remove the field in the code
3. Schema migration to remove the column
Yes, it's more complex than creating one schema migration, but that's the price you pay for zero downtime. If you can relax that to "1s of downtime at midnight on Sunday", you can keep things simpler. And if you do so many schema migrations that you need such things often... I would submit you're holding it wrong :)
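For the field-deletion example above, a Django-flavored sketch could look roughly like this; the model and field names are invented:

```python
# Step 1: ship a migration that relaxes the column, so old code that still
# writes the field and new code that no longer touches it can coexist.
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [("myapp", "0042_previous")]  # hypothetical

    operations = [
        migrations.AlterField(
            model_name="invoice",
            name="legacy_code",
            field=models.CharField(max_length=32, null=True, blank=True),
        ),
    ]

# Step 2 (next deploy): delete `legacy_code` from the Invoice model in code.
# Step 3 (the deploy after that): a migration with
#   migrations.RemoveField(model_name="invoice", name="legacy_code")
# actually drops the column.
```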
I'm doing all of these and none of it works out of the box.
Adding a field needs a database-level default (`db_default`), otherwise old code fails on `INSERT`; you need to audit all the `create`-like calls otherwise (sketch below).
Deleting a field similarly makes old code fail on every `SELECT`.
For deletion I need a special 3-step dance with managed=False for one deploy. And for all of these I need to run the old tests against the new schema to see if there's some usage a member of our team missed.
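A sketch of the "new field needs a database-level default" point, assuming Django 5.0+ (where `db_default` exists) and invented model/field names, so that `INSERT`s from old code that don't mention the column still succeed:

```python
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [("myapp", "0050_previous")]  # hypothetical

    operations = [
        migrations.AddField(
            model_name="order",
            name="status",
            # db_default puts the default in the database itself, so old code's
            # INSERTs (which don't know about the column) keep working.
            field=models.CharField(max_length=20, db_default="pending"),
        ),
    ]
```

On Django versions before 5.0, roughly the same effect takes a hand-written `RunSQL` that adds a database-side DEFAULT.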
One option is to do a multi-stage rollout of your database schema and code, over some time window. I recall a blog post here (I think) recently from some Big Company (tm) that would run one step from the plan below every week:
1. Create new fields in the DB.
2. Make the code fill in the old fields and the new fields.
3. Make the code read from new fields.
4. Stop the code from filling old fields.
5. Remove the old fields.
Personally, I wouldn't use that until I really need it. But a simpler form is good: do the required (additive) schema changes iteratively, one iteration earlier than the code changes, and do the destructive changes one iteration after your code stops using those parts of the schema. Things like "make a non-nullable field nullable" and "make a nullable field non-nullable" are handled in opposite orders, but that's part of the price of smooth operations.
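A toy sketch of step 2 of that plan (dual-writing), using a hypothetical Django model; none of these names come from the thread:

```python
from django.db import models


class Profile(models.Model):
    # old field, kept until step 5
    full_name = models.CharField(max_length=200, blank=True)
    # new fields introduced in step 1
    first_name = models.CharField(max_length=100, blank=True)
    last_name = models.CharField(max_length=100, blank=True)

    def save(self, *args, **kwargs):
        # step 2: keep the legacy column populated while readers migrate;
        # this line goes away at step 4, and the column is dropped at step 5
        self.full_name = f"{self.first_name} {self.last_name}".strip()
        super().save(*args, **kwargs)
```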
Deploying on Kubernetes using Helm solves a lot of these cases: migrations are run at the init stage of the pods. If they succeed, pods of the new version are started one by one while pods of the old version are shut down. For a short period, you have pods of both versions running.
When you add new stuff or make benign modifications to the schema (e.g. add an index somewhere), you won't notice a thing.
If the introduced schema changes are not compatible with the old code, you may get a few `ProgrammingError`s raised from the old pods before they are replaced, which is usually acceptable.
There are still some changes that may require planning for downtime, or some other sort of special handling. E.g. upgrading a SmallIntegerField to an IntegerField in a frequently written table with millions of rows.
A request not being served can happen for a multitude of reasons (many of them totally beyond your control) and the web architecture is designed around that premise.
So, if some of your pods fail a fraction of the requests they receive for a few seconds, this is not considered downtime for 99% of the use cases. The service never really stopped serving requests.
The problem is not unique to Django by any means. If you insist on being a purist, sure, count it as downtime, but you will have a hard time even measuring it.
The general approach is to do multiple migrations (add first and make new-code work with both, deploy, remove old-code, then delete old-schema) and this is not specific to Django's ORM in any way, the same goes for any database schema deployment. Take a peek at https://medium.com/@pranavdixit20/zero-downtime-migrations-i... for some ideas.
The most stalkable users are Android users, but even that is going away with newer Androids. And an AirTag already beeps when you move it if it's been away from its owner for too long.
I know because I have an Android phone and a not-so-used iPad, and mine beep all the time.
Not sure about that. My Android warns me about my wife's AirTags so often that, if I were actually being tracked by a malicious AirTag, I would just assume it was one of my wife's tags. This could be prevented if I could mark a tag as trusted on my Android phone, but no such feature exists.
The meeting could happen in public; it doesn't mean they know the person's private address.
For example: meet someone at a convention/fair/job, gift or sell them something with a hidden tag, and then wait for them to drive to their hotel, or home. Gotcha. With influencers and celebs, you can also send something to their agency and hope it gets re-routed to their home. S** like that happened quite often until people learned to be more careful. It probably still happens sometimes even now.
I'm sure idiots do this but it's a pretty high risk way to try to track someone. IME the tracking notifications are timely enough that you're going to have a good idea where they came from. Actual GPS trackers are cheap on Amazon, have better accuracy, and don't notify people they exist -- they just don't have the public's mindshare nearly as much.
Or the frankly much more common scenario: a $15 plushie bought through an Amazon wishlist, sold by PerpOwned LLC for $500, and delivered through an Amazon warehouse. That's actually happening.
I don't know why anyone with a budget would choose to use an AirTag that notifies nearby iPhones and makes noise when you move it around. If Amazon lets you ship arbitrary items to people's private addresses, that sounds like a vulnerability in Amazon that is far more severe than simply shipping AirTags.
“Easy” in relative terms for people who hack on electronics. Even if you remove the speaker, modern phones will tell the victim that someone else’s AirTag is moving along with them, unless the owner of the AirTag is also present.
Datomic can use various storage services. Yes, pg is one option, but you can have DynamoDB, Cassandra, SQLServer and probably more.
> Also, it doesn't support non-immutable use cases AFAIK
What do you mean? It's append-only, but you can still run CRUD operations on it. You get a view of the db at any point in time if you so wish, but it can support any CRUD use case. What is your concern there?
It will work well if you're read-heavy and the write throughput is not insanely high.
I wouldn't say it's internally more complex than your pg plus whatever code you need to make scenarios like soft-delete work.
From the DX perspective it's incredibly simple to work with (see Simple Made Easy by Rich Hickey).
Thanks, I'll look into it.
My current setup for this kind of use case is pretty simple: you keep an additional field (or key if you're non-relational) describing state, and every time the state changes you add a new row/document with a new timestamp and the new state values. Because I'm not introducing a new technology for this use case, I can easily mix mutable and immutable use cases in the same database (arguably even in the same table/collection, although that probably makes little sense, at least to me).
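For illustration, a minimal sketch of that pattern with a hypothetical Django model (the idea is the same in any store): every state change appends a row, and "current state" is just the latest row:

```python
from django.db import models


class OrderState(models.Model):
    order_id = models.BigIntegerField(db_index=True)
    state = models.CharField(max_length=30)        # e.g. "created", "paid", "shipped"
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ["-created_at"]


def current_state(order_id: int) -> str | None:
    """Latest row wins; the full history stays queryable for point-in-time views."""
    row = (
        OrderState.objects.filter(order_id=order_id)
        .order_by("-created_at")
        .first()
    )
    return row.state if row else None
```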