
This is related to what I meant, but specifically I was referring to the inability of databases to prevent a 1:1 becoming a 1:2 (or 1:N).

Yes, you can hack it with a unique constraint, but it's not very elegant.



That’s pretty much exactly what unique constraints are for. Why do you consider that a hack?


Mainly because you end up with something that's untidy. Your elegant table that contains all data identifiable by a single PK is now spread across two tables and relies on an (often forgotten) unique constraint in addition to an FK.

I'd rather follow the rules of normalisation wherever possible.


I still fail to see how your single table approach is more normalized. Consider for example an e-commerce order that may have either (1) exactly one shipping address or (2) no shipping address (for things like digital, downloadable products). You’re saying that you’d rather store the address data on the order table using a bunch of nullable fields? That bloats the order table and introduces a ton of possible consistency problems. A much more normalized model, IMO, would be to separate the address fields into their own table and use the unique FK approach to ensure that no order has more than one related address. Precisely which rules of normalization is that breaking? And how would they be resolved by putting everything into a single table?
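For what it's worth, the unique-FK approach can be shown in a few lines of SQLite (the table and column names here are my own, purely for illustration): a plain FK on `order_id` would allow 1:N, and the added UNIQUE constraint is what caps the relation at 1:1.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE orders (
    id INTEGER PRIMARY KEY
);
-- UNIQUE on order_id turns an ordinary 1:N foreign key into "at most one
-- address per order"; the FK still guarantees the order actually exists.
CREATE TABLE shipping_addresses (
    id       INTEGER PRIMARY KEY,
    order_id INTEGER NOT NULL UNIQUE REFERENCES orders(id),
    street   TEXT NOT NULL,
    city     TEXT NOT NULL
);
""")

conn.execute("INSERT INTO orders (id) VALUES (1)")
conn.execute(
    "INSERT INTO shipping_addresses (order_id, street, city) "
    "VALUES (1, '1 Main St', 'Springfield')"
)

# A second address for the same order is rejected by the database itself,
# so the relation can never silently become 1:2.
rejected = False
try:
    conn.execute(
        "INSERT INTO shipping_addresses (order_id, street, city) "
        "VALUES (1, '2 Side St', 'Springfield')"
    )
except sqlite3.IntegrityError:
    rejected = True
```

An order with no matching `shipping_addresses` row models the digital-product case, with no nullable address columns bloating the orders table.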



