
Interestingly, they offer a "lossy" mode, which is something I've been thinking about a lot recently. In reality, the decoding of every image format I'm aware of is completely deterministic, even for JPEG; all of the lossy decisions are made at encode time.

So it should be totally possible to take a lossless format and ask: how can the image be minimally changed so that it compresses better? There are PNG pre-processors, like pngquant, that are somewhat lossy, but I wonder how far this can go -- pngquant's heuristics, for example, come from image processing (palette quantization), but it seems equally possible to exploit the characteristics of the specific compression system underneath, in a slightly more generic way.
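
As a rough sketch of the idea (mine, not from the format's authors), assuming a raw 8-bit RGB dump and zlib as a stand-in for a generic byte-oriented compressor: snap every channel to a coarser grid and watch the compressed size fall. The filename and the set of step sizes are arbitrary.

    # Sketch: trade small, uniform pixel error for better generic compression.
    import zlib

    def quantize(raw: bytes, step: int) -> bytes:
        """Snap every channel value to the nearest multiple of `step`."""
        return bytes(min(255, (b + step // 2) // step * step) for b in raw)

    raw = open("image.rgb", "rb").read()  # hypothetical raw 8-bit RGB dump

    for step in (1, 2, 4, 8):  # step=1 is lossless
        lossy = quantize(raw, step)
        size = len(zlib.compress(lossy, 9))
        print(f"step={step}: {size} bytes compressed")

This is the bluntest possible version -- it knows nothing about the compressor beyond "more repetition compresses better" -- but it shows the shape of the question: error budget in, compressed bytes out.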

For LZ-based systems, I wonder if there is some more generic way of approaching this. Running it forwards -- asking "what is the cheapest prediction of the next byte, given the table we've built so far?" -- is interesting, but so is running it backwards: "what byte here would increase our hit rate as we move forward through the image?"
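
Here's a minimal sketch of the "forwards" version for an LZ77-style sliding window, under assumptions that aren't in the comment above: at each position, find the longest match in the window, and if the byte that would extend that match one further is within `tolerance` of the true byte, substitute it. All names are hypothetical; a real implementation would hook into the actual compressor's match finder rather than this brute-force search.

    def nudge_toward_matches(data: bytes, tolerance: int = 2,
                             window: int = 4096) -> bytearray:
        out = bytearray(data)
        i = 0
        while i < len(out):
            # Brute-force longest match for out[i:] inside the window.
            best_len, best_src = 0, -1
            for j in range(max(0, i - window), i):
                k = 0
                while i + k < len(out) and j + k < i and out[j + k] == out[i + k]:
                    k += 1
                if k > best_len:
                    best_len, best_src = k, j
            if best_len > 0 and i + best_len < len(out):
                predicted = out[best_src + best_len]  # byte that extends the match
                actual = out[i + best_len]
                if abs(predicted - actual) <= tolerance:
                    out[i + best_len] = predicted  # small error, longer match
            i += max(1, best_len)
        return out

The greedy choice here is obviously suboptimal -- extending the current match can destroy a better match later -- which is exactly where the "backwards" framing gets interesting.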

I played with this a little, trying to get farbfeld + bzip2 to produce lossy image compression, but bzip2 is too complex to easily work with in this fashion. PNG seems to have some possibilities (optimizing against its filters), and QOI (and QOIR) look even more promising because their heuristics are so simple.
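
For instance, QOI's QOI_OP_DIFF opcode stores the per-channel deltas from the previous pixel in a single byte, but only when every delta is in [-2, 1]; otherwise the pixel costs 4 bytes as QOI_OP_RGB. So a sketch of the nudging idea (the pixel layout and `slack` parameter are my assumptions): if a delta is only slightly outside the DIFF range, clamp it into range and save 3 bytes per pixel.

    DIFF_LO, DIFF_HI = -2, 1  # QOI_OP_DIFF per-channel delta range

    def nudge_for_qoi_diff(pixels, slack=2):
        """pixels: list of (r, g, b) tuples; returns a nudged copy."""
        if not pixels:
            return []
        out = [pixels[0]]
        for px in pixels[1:]:
            prev = out[-1]
            deltas = [p - q for p, q in zip(px, prev)]
            # Nudge only if every channel is within `slack` of the DIFF range.
            if all(DIFF_LO - slack <= d <= DIFF_HI + slack for d in deltas):
                deltas = [min(max(d, DIFF_LO), DIFF_HI) for d in deltas]
                px = tuple(q + d for q, d in zip(prev, deltas))
            out.append(px)
        return out

Because the decision depends only on the previous pixel, the cost model is trivially local -- which is what makes QOI such an appealing target compared to bzip2, where a one-byte change can perturb the whole block transform.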


