Probably multiple resolutions of the same thing. E.g. a lower-res image of the entire scene, and then higher-resolution versions of sections. As you zoom in, the higher-resolution versions get used so you can see more detail while limiting memory consumption.
Replicated at different resolutions depending on your zoom level.
One patch at low resolution is backed by four higher-resolution images, each of which is backed by four higher-resolution images, and so on... All on top of an index to fetch the right images for your zoom level and camera position.
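As a concrete sketch of what that index lookup can look like (the function name and the 256-pixel tile size are made up for illustration): given a zoom factor and a viewport, pick the coarsest level that still has enough pixels, then list the tiles covering the view:

    import math

    TILE = 256  # hypothetical fixed tile size in pixels

    def tiles_to_fetch(img_w, img_h, zoom, view_x, view_y, view_w, view_h):
        # Level 0 is full resolution; each level up halves both dimensions.
        # At zoom 0.25, level 2 (quarter resolution) already gives one image
        # pixel per screen pixel, so coarser data suffices when zoomed out.
        level = max(0, int(math.floor(math.log2(1.0 / zoom))))
        scale = 2 ** level
        # Viewport corners in this level's pixel coordinates, clamped.
        x0, y0 = view_x // scale, view_y // scale
        x1 = min((view_x + view_w - 1) // scale, img_w // scale - 1)
        y1 = min((view_y + view_h - 1) // scale, img_h // scale - 1)
        return [(level, tx, ty)
                for ty in range(y0 // TILE, y1 // TILE + 1)
                for tx in range(x0 // TILE, x1 // TILE + 1)]

    # Viewing a 16384x16384 image at 20% zoom in a 1920x1080 window:
    # four level-2 tiles instead of thousands of full-resolution ones.
    print(tiles_to_fetch(16384, 16384, 0.20, 0, 0, 1920, 1080))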
JPEG and friends transform the image data into the frequency domain. Regular old JPEG uses the discrete cosine transform[1] for this, on 8x8 blocks of pixels. This is why you can see blocky artifacts[2] in heavily compressed JPEG images. JPEG XL uses variable-size DCT blocks.
Let's stick to old JPEG as it's easier to explain. The DCT takes the 8x8 pixels of a block and transforms them into 8x8 magnitudes of different frequency components. In one corner you have the DC component, i.e. zero frequency, which represents the average of all 8x8 pixels. Around it you have the lowest non-zero frequency components. There are three of those: one with a non-zero x frequency, one with a non-zero y frequency, and one where both x and y are non-zero. The elements next to those are the next-higher frequency components.
To reconstruct the 8x8 pixels, you run the inverse discrete cosine transformation, which is lossless (to within rounding errors).
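Both properties are easy to check with scipy (a sketch; dctn/idctn with norm="ortho" is the orthonormal 2D DCT-II, which matches JPEG's transform up to scaling conventions):

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
    coeffs = dctn(block, norm="ortho")  # 8x8 frequency components

    # Corner coefficient is the DC term: for the orthonormal 2D DCT it
    # equals 8 times the mean of the 64 pixels.
    assert np.isclose(coeffs[0, 0], 8 * block.mean())

    # Round-tripping through the inverse transform is lossless to within
    # floating-point rounding.
    assert np.allclose(idctn(coeffs, norm="ortho"), block)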
However, due to Nyquist[3], you don't need the highest-frequency components if you only want a lower-resolution image. So if you instead strip away the highest-frequency row and column so you're left with a 7x7 block of coefficients, you can run the inverse transform on that to get a 7x7 block of pixels which perfectly represents a 7/8 = 87.5% sized version of the original 8x8 block. Do this for each block in the image and you get an 87.5%-sized image.
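Continuing the sketch above, the core of it is a one-liner; the only wrinkle is that with the orthonormal transform you have to rescale by 7/8 to keep the brightness right:

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
    coeffs = dctn(block, norm="ortho")

    # Keep only the lowest 7x7 frequency components and invert at the
    # smaller size; the 7/8 factor compensates for the orthonormal
    # transform's size-dependent normalization.
    small = idctn(coeffs[:7, :7], norm="ortho") * (7 / 8)

    print(small.shape)                  # (7, 7): an 87.5%-sized block
    print(small.mean() - block.mean())  # ~0: average brightness preserved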
Now, the pyramidal scheme takes advantage of this by rearranging how the elements of each transformed block are stored. First it stores the DC components of all the blocks in the image. If you used just those, you'd get a perfect 1/8th-sized version of the image.
Next it stores all the lowest-frequency components for all the blocks. Using the DC plus those, you effectively have 2x2 blocks, and can perfectly reconstruct a quarter-sized image.
Now, if the decoder knows the target size the image will be displayed at, it can simply stop reading once it has sufficiently large blocks to reconstruct the image near the target size.
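A sketch of that decoder-side logic (pyramid_decode is a made-up name, and real progressive decoding is of course more involved): reconstruct from the first k x k coefficients of every 8x8 block, so k=1 is the DC-only 1/8th image, k=2 the quarter-sized one, and k=8 the full resolution:

    import numpy as np
    from scipy.fft import dctn, idctn

    def pyramid_decode(block_coeffs, k):
        # Invert each block using only its lowest k x k frequency
        # components, yielding k x k pixels per block (a k/8-scale image).
        # The k/8 factor is the same normalization fix as above.
        return np.vstack([
            np.hstack([idctn(c[:k, :k], norm="ortho") * (k / 8)
                       for c in row])
            for row in block_coeffs
        ])

    # Toy "encoder": a 16x16 image as a 2x2 grid of 8x8 DCT blocks.
    img = np.arange(256, dtype=float).reshape(16, 16)
    block_coeffs = [[dctn(img[y:y + 8, x:x + 8], norm="ortho")
                     for x in (0, 8)] for y in (0, 8)]

    print(pyramid_decode(block_coeffs, 1).shape)  # (2, 2): DC-only thumbnail
    print(pyramid_decode(block_coeffs, 2).shape)  # (4, 4): quarter size
    print(np.allclose(pyramid_decode(block_coeffs, 8), img))  # True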
Note that most good old JPEG decoders support this already; however, since the blocks are stored one after another, it still requires reading the entire file from disk. If you have a fast disk and not-too-large images, it can often be a win regardless. But if you have huge images which are often not used at their full resolution, then the pyramidal scheme is better.
> Grid doesn't have its own element like table does, so you have to use css to apply that display to a div.
Well, OOTB, yeah. I personally like to use custom HTML elements a lot of the time for such things, such as <main-header>, <main-footer>, <main-content>, <content-header>, etc., and apply CSS styles to those rather than putting classes on divs. It feels a lot more ergonomic to me, and it gives more meaningful markup in the HTML. (It also forces me to name the actual tags, so I use far fewer unnecessary ones.)
Recent versions of React round-trip custom elements better now. You just have to remember the standard's rule that every custom element name must contain a dash (-).
I don't think it'd benefit HN at all at this time, as the country of origin just isn't relevant most of the time and would just invite flamewars. I believe that flagging/downvoting is a good enough tool to deal with it, as the HN mod team puts in the work.
Maybe once the site gets 10x as big and the mod team gets overwhelmed it could serve a purpose, but I doubt that will ever be the case.
You've ruined something for me. My adult side is grateful but the rest of me is throwing a tantrum right now. I hope you're happy with what you've done.
I am fairly certain that the vast majority comes from improper use (bypassing security measures, like riding on top of the cabin) or something going wrong during maintenance.
One thing about dithering writeups has always bothered me: there never seems to be any coverage of how to calculate colour similarity when multiple different colours are involved, rather than just black-and-white gradients.
I am trying to implement it for myself, but I'm really struggling to find any proper literature on it that I am actually able to understand.
The first step is to convert from the RGB color space to something more perceptual, like Lab. I'm sure there's a standard for comparing the color similarity of two images but I'm having trouble remembering the name - too early in the morning I guess.
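The name might be Delta E: the original CIE76 version is literally just Euclidean distance in Lab (later revisions like CIEDE2000 refine it). A self-contained sketch, hand-rolling sRGB-to-Lab with the D65 white point; this nearest-palette-colour lookup is exactly the comparison an error-diffusion ditherer needs before propagating the error:

    import math

    def srgb_to_lab(r, g, b):
        # Undo the sRGB gamma curve to get linear light.
        def lin(c):
            c /= 255.0
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        rl, gl, bl = lin(r), lin(g), lin(b)
        # Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point.
        x = (0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl) / 0.95047
        y = (0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl) / 1.00000
        z = (0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl) / 1.08883
        # XYZ -> Lab.
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 \
                else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x), f(y), f(z)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e76(c1, c2):
        # CIE76 colour difference: Euclidean distance in Lab.
        return math.dist(srgb_to_lab(*c1), srgb_to_lab(*c2))

    def nearest_palette_colour(pixel, palette):
        return min(palette, key=lambda p: delta_e76(pixel, p))

    # A red-ish pixel lands on red in Lab, even though plain RGB distance
    # would pick the grey.
    palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (128, 128, 128)]
    print(nearest_palette_colour((200, 90, 80), palette))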
Lichens pretty much created the first terrestrial organic material, which in turn became soil for mosses, which then built up the soil layer more, and so on.