Ahh yes, the "unicode sandwich." A single phrase that solved ten years of (my) wondering why we were always doing .encode() and .decode() in Python. This after reading a dozen pieces on Unicode. Until then the strategy was: just keep adding calls until it works.
Completely understood Unicode itself on disk/on the wire, but couldn't write a proper program handling it until then. No one before Ned (in my experience) bothered to mention it.
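For anyone who hasn't seen it, here's a minimal sketch of the pattern (the filenames and UTF-8 encoding are placeholders for illustration): decode bytes to str as early as possible, work only with str in the middle of the program, and encode back to bytes as late as possible.

```python
# The "unicode sandwich": bytes at the edges, str everywhere in between.

def process(text: str) -> str:
    # Middle of the sandwich: pure str, no bytes in sight.
    return text.upper()

# Bottom slice: decode incoming bytes at the boundary.
# ("input.txt" and utf-8 are assumptions for the sketch.)
with open("input.txt", "rb") as f:
    text = f.read().decode("utf-8")   # bytes -> str

result = process(text)

# Top slice: encode outgoing text at the boundary.
with open("output.txt", "wb") as f:
    f.write(result.encode("utf-8"))   # str -> bytes
```

In practice, opening the files in text mode with an explicit encoding= argument does the decoding and encoding for you, which keeps the sandwich intact without any manual .encode()/.decode() calls.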
What helped me was using PyCharm's debugger to examine the data types: it shows you whether something is a byte string or a unicode string. Stepping through some functions really clarified the matter and squashed a lot of bugs.
Oh, I knew the types all right. Could even fix errors. Didn't know what to do with them, as in what goes where, when, and most importantly why. The unicode sandwich solved all that. ;-)