feedparser, and the chardet library it is often paired with, also approach this problem. feedparser has a series of fallbacks it uses to try to figure out the character encoding of a feed (finally bailing out and assuming windows-1252 if nothing else works). chardet is also quite good at guessing the intended encoding of a chunk of text, and will report its best guesses and its confidence in them.
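For concreteness, here's roughly what using the two libraries looks like. The feed URL and filename are placeholders, and the printed output is only illustrative:

```python
import chardet
import feedparser

# feedparser runs its chain of fallbacks and records the encoding it
# settled on in the parsed result.
d = feedparser.parse("http://example.com/feed.xml")  # placeholder URL
print(d.encoding)

# chardet takes undecoded bytes and returns its best guess plus a
# confidence score.
with open("feed.xml", "rb") as f:  # placeholder filename
    raw = f.read()
print(chardet.detect(raw))
# e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
```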
When you give chardet text in a single-byte encoding, it sometimes ends up more than 99% confident that the text is in ISO-8859-2.
Empirically, it's not in ISO-8859-2.
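If you want to poke at this yourself, here's a minimal sketch of the experiment. The sample sentence is mine, and the exact guess and confidence will vary with the chardet version and the input; the observation above is that single-byte text sometimes comes back as ISO-8859-2 at very high confidence:

```python
import chardet

# French text that fits comfortably in a single-byte Western
# European encoding.
text = "Le cœur a ses raisons que la raison ne connaît point."
raw = text.encode("windows-1252")

# Print chardet's best guess and confidence for these bytes.
# The result varies by version and input, but it is often a
# single-byte encoding other than the one actually used.
print(chardet.detect(raw))
```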
I think the problem here is that chardet is built on the assumption that "encoding detection is language detection" (as its documentation puts it). That assumption is necessary, and basically correct, when the job is distinguishing Japanese encodings from Chinese encodings. It's pretty much taken as a given that you can't have Japanese and Chinese text in the same document without contortions that most developers are unwilling to go through.
But European languages and encodings are much more intermixed than that. One document may contain several European languages, and any of them may appear in an encoding other than the one traditionally associated with it.
I wouldn't know how to fix chardet's handling of European languages without damaging its clear success at distinguishing East Asian encodings.