Today I'm going to share what I've learned over several years of using relational DBMSes (mostly Postgres). I'm probably not the first to arrive at these conclusions, and there are probably people who disagree.
This is how I would build a social media app, as an example. But it works for a wide variety of applications, from social to money to inventory tracking, and it's served me well each time.
1. Make each table a logical collection of events that have occurred, not objects or state. This is not CRUD. One row means one event; for example, a row in a `react` table means "a user reacted to some post" (there's a sketch of such a table right after this list). Keep these events as close as possible to the end user's intent.
2. Never delete or update rows as a way to record events. This would mean forgetting that something happened. Only do that for manual corrections or special privacy reasons, e.g. GDPR requirements to erase user data.
3. Figure out the present state by aggregating events, either in SQL for real-time requests or perhaps in a MapReduce-style pipeline for batch jobs. For example, the visible reactions on a post are only the latest one each user made, superseding any previous ones (see the aggregation sketch after this list).
4. On the technical side: in Postgres, always use an `id bigserial` primary key and add a `created_at timestamptz` column for bookkeeping (both appear in the sketches after this list). Some other attributes might be unique, but never make them primary keys, and only create foreign keys to the `id` column. The same goes for MySQL. If (for special use cases) you need to know exactly which event happened first transactionally, order by some other autoincrement column (or just the ID if you're lazy). If you need to do that across multiple tables, make a separate table joining them.
5. Try to add new tables and columns only as needed for new features. It's OK, and inevitable, to end up with some deprecated ones that you eventually clean up with more care.
6. Keep everything normalized as a baseline. If aggregating past events is too slow for some query, add a denormalized table that tracks the current state (i.e. an exception to rules #1-3) alongside the regular event tables, and update it in the same transaction that inserts the events (sketched after this list).
7. Kinda implied by all the above, don't use ORMs or other heavy-handed ways to abstract away the DBMS. SQL is your interface.
8. You can get pretty far with a monolithic backend/DB this way, but when it's time for microservices: draw a graph of your tables, with foreign keys as the edges (a catalog query that lists them is sketched after this list). Each connected component is a candidate for becoming its own service (with its own DB) if you don't foresee it becoming connected to the others. You aren't going to have cross-service transactions. Introduce caching along these boundaries. Never make a service persist info it's not authoritative on (ephemeral caching is OK). Identify things across services by some random UUID or incrementing int, not by an attribute that's merely believed to be unique at some point in time (e.g. a user's email address). Generally, whichever service observes things happening is the one that owns the data; don't build a service that amounts to a custom DBMS for multiple others to read from and write to.
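To make points 1, 2, and 4 concrete, here's a minimal Postgres sketch of a `react` event table and the tables it points to. Everything beyond `id` and `created_at` (the `app_user` and `post` tables, the `kind` column, the `'none'` retraction convention) is an illustrative assumption, not something the rules above prescribe.

```sql
-- Illustrative schema sketch; names beyond id and created_at are assumptions.
CREATE TABLE app_user (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    email      text NOT NULL UNIQUE              -- unique, but never the primary key (rule #4)
);

CREATE TABLE post (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    author_id  bigint NOT NULL REFERENCES app_user (id),
    body       text NOT NULL
);

-- One row = one event: "a user reacted to some post" (rule #1).
-- Rows are only ever inserted, never updated or deleted (rule #2).
CREATE TABLE react (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    user_id    bigint NOT NULL REFERENCES app_user (id),
    post_id    bigint NOT NULL REFERENCES post (id),
    kind       text NOT NULL                     -- e.g. 'like', 'love', or 'none' to retract
);
```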
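Point 3 then becomes a plain aggregation over that table. Here's one way "each user's latest reaction wins" could look against the sketch above, using Postgres's `DISTINCT ON` (a window function does the same job in other databases):

```sql
-- Current visible reactions on one post: keep only each user's most recent
-- react event; a kind of 'none' means the user retracted their reaction.
SELECT user_id, kind
FROM (
    SELECT DISTINCT ON (user_id) user_id, kind
    FROM react
    WHERE post_id = $1            -- the post being rendered
    ORDER BY user_id, id DESC     -- latest event per user supersedes earlier ones
) latest
WHERE kind <> 'none';
```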
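And here's what the escape hatch in point 6 could look like: a denormalized current-state table written in the same transaction as the event insert. The `react_current` table and the upsert are assumptions about one possible shape; in practice you'd design it around whichever query got too slow.

```sql
-- Denormalized current state (an exception to rules #1-3).
CREATE TABLE react_current (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    post_id    bigint NOT NULL REFERENCES post (id),
    user_id    bigint NOT NULL REFERENCES app_user (id),
    kind       text NOT NULL,
    UNIQUE (post_id, user_id)
);

-- Record the event and keep the rollup in sync, atomically.
BEGIN;

INSERT INTO react (user_id, post_id, kind)
VALUES (1, 42, 'like');                          -- placeholder values

INSERT INTO react_current (user_id, post_id, kind)
VALUES (1, 42, 'like')
ON CONFLICT (post_id, user_id)
DO UPDATE SET kind = EXCLUDED.kind;

COMMIT;
```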
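Finally, for the foreign-key graph in point 8, you don't have to draw anything by hand: Postgres already knows the edges. A quick query against the system catalogs lists them, and the connected components of the resulting graph are your candidate service boundaries:

```sql
-- One row per foreign-key edge: referencing table -> referenced table.
SELECT DISTINCT
    conrelid::regclass  AS from_table,
    confrelid::regclass AS to_table
FROM pg_constraint
WHERE contype = 'f'
ORDER BY 1, 2;
```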
Why is this so great? You have a clear signal on what exactly to store and what not to. You never lose information. You can always support new features with the information you already have. And you have fewer layers of abstraction between your UI (or API) and your database, which, despite some conventional advice, actually makes it easier to adapt to new features.