
Very interesting! Thanks for sharing.

It’s very frustrating that the tried and true method of sampling comparators has become prohibitively expensive due to niche parts.

I suspected that the ST HRTIMER would be a good candidate for replacing the delay line components, with the benefit of synchronisation for free. Good to see it works!

The diode bridge sampling and verification is the “hard part”. My motivation for wanting to make a TDR was the lack of equipment to measure the signals involved. Ironically, that’s the very equipment you need to verify a TDR is working correctly. Most existing designs attempt to replicate old sampling bridges that use exotic, unobtainable diodes. If you stray from the tried path, it’s up to you to verify performance, so there is some valuable insight here.

I liked reconstructing the signal using a spectrum analyser, very clever idea.
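
For anyone following along, the core trick here is equivalent-time sampling: the pulse is repetitive, so you take one sample per repetition while sweeping the strobe delay (which is what the delay line or HRTIMER provides), and the samples line up into a waveform with picosecond-scale effective resolution. A rough Python sketch of the idea, with made-up response and timing numbers:

  import numpy as np

  # Hypothetical step response a TDR front end might see (2 ns rise time).
  def dut_response(t):
      return 1.0 - np.exp(-t / 2e-9)

  period     = 100e-9   # pulse repetition period
  delay_step = 50e-12   # strobe delay increment per repetition
  n_points   = 400

  # One sample per pulse repetition, each taken slightly later after the trigger.
  delays  = np.arange(n_points) * delay_step
  samples = dut_response(delays)

  # 'samples' vs 'delays' is the reconstructed edge: 50 ps effective sampling
  # interval, even though the sampler only fires once every 100 ns (period).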


Did you miss the point of the article? JPEG-XL encoding doesn't rely on quantisation to achieve its performance goals. It's a bit like how GPU shaders use floating point arithmetic internally but output quantised values for the bit depth of the screen.


Which is completely wrong, by the way: JPEG-XL quantizes its coefficients after the DCT transform like every other lossy codec. Most codecs have at least some amount of range expansion in their DCT as well, so the quantized values might have a greater bit depth than the input data.


> Did you miss the point of the article?

Sorry, I missed it. How is the "floating point" stored in .jxl files?

Float32 has to be serialized one way or another per pixel, no?


The cliff notes version is that JPEG and JPEG XL don't encode pixel values; they encode the discrete cosine transform (like a Fourier transform) of the 2D pixel grid. So what's really stored is more like the frequency and amplitude of change of pixels than individual pixel values, and the compression comes from the insight that some combinations of frequency and amplitude of color change are much more perceptible than others.
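
To make that concrete, here's a toy Python sketch of the DCT-plus-quantisation idea, using scipy's DCT. It illustrates the principle only, not the actual JPEG or JPEG XL pipeline:

  import numpy as np
  from scipy.fft import dctn, idctn

  # One 8x8 block of pixel values (random, just for illustration).
  block = np.random.randint(0, 256, (8, 8)).astype(float)

  # Forward 2D DCT: pixel values -> frequency/amplitude coefficients.
  coeffs = dctn(block - 128, norm='ortho')

  # Lossy step: quantise the coefficients. Real codecs use a whole table of
  # step sizes (coarser for less perceptible frequencies); one size here.
  q = 16
  quantised = np.round(coeffs / q)

  # Decoder side: dequantise and inverse DCT back to approximate pixels.
  restored = idctn(quantised * q, norm='ortho') + 128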


In addition to the other comments: you can have the internal in-memory representation of the data be Float32, but on disk this is encoded through some form of entropy coding. Typically, some of the earlier steps are preparation for the entropy encoder: you make the data more amenable to entropy coding through a rearrangement that's either fully reversible (lossless) or near-reversible (lossy).
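
A minimal sketch of that idea, with zlib standing in for the real entropy coder and a made-up 1D signal standing in for image data. The rearrangement here is just storing differences instead of raw values, which is fully reversible but clusters the data around zero so the entropy coder does much better:

  import numpy as np, zlib

  # Smooth-ish signal as a stand-in for real data (hypothetical example).
  data = np.cumsum(np.random.randint(-3, 4, 10000)).astype(np.int16)

  # Reversible rearrangement: store each sample as a difference from the last.
  residuals = np.diff(data, prepend=0).astype(np.int16)

  # The prepared data compresses far better than the raw values.
  print(len(zlib.compress(data.tobytes())),
        len(zlib.compress(residuals.tobytes())))

  # Lossless: np.cumsum(residuals) reconstructs 'data' exactly.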


No, JPEG is not a bitmap format.


The gradient is stored, not the points on the gradient.


This would introduce a bias towards countries that are large and have extensive motorway networks. They would appear safer than countries that have a smaller proportion of motorway miles.

> If we look at the number of deaths per billion miles driven, we see that motorways are roughly four times safer than urban roads, and more than five times safer than rural roads. This is not specific to the UK: among 24 OECD countries, approximately 5% of road deaths occurred on motorways.5 In almost all countries, it was less than 10%.


I think you have a misunderstanding about the Ethernet standard they are discussing. This isn’t copper Ethernet, it’s a fibre Ethernet standard. Copper Ethernet caps out at 25Gb/s with an impractically short 10 meter cable run.


I have come across the following, where each lane runs at 56Gb/s: https://apps.juniper.net/hct/model/?component=QDD-400G-DAC-2...


Absolutely. Software encoders are constantly pushing the envelope in terms of quality, and they have tuneable knobs to allow you to pick the speed vs quality trade off.

Hardware encoders are much more restrictive. They target a maximum resolution and frame rate, with realtime encoding, and possibly low latency, as requirements.

The standards define the bitstream, but there is lots of scope for cheating to allow for a simpler hardware implementation. You could skip motion prediction and use simple quantisation; it would just contribute to the residual entropy that needs to be encoded. Then you can use inefficient implementations of the entropy encoders. The end result is something that can be interpreted as a valid bitstream, but one that threw out all the algorithmic tricks that give you high-quality compressed video.
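
As a toy illustration of the motion-prediction shortcut (made-up frames, not any real encoder): predicting each block from the co-located block in the previous frame, i.e. a zero motion vector, still produces something a decoder can interpret, but the residual left for the entropy coder is far larger than what a proper motion search would leave.

  import numpy as np

  rng = np.random.default_rng(0)
  prev = rng.integers(0, 256, (64, 64)).astype(np.int16)
  curr = np.roll(prev, 3, axis=1)          # same content panned right by 3 px

  block = curr[16:32, 16:32]               # block the encoder must code

  pred_zero_mv = prev[16:32, 16:32]        # lazy: co-located block, zero MV
  pred_best_mv = prev[16:32, 13:29]        # what a motion search would find

  sad = lambda a, b: int(np.abs(a - b).sum())
  print("residual, zero MV:", sad(block, pred_zero_mv))
  print("residual, best MV:", sad(block, pred_best_mv))   # 0 for this toy pan

  # Both residuals can be coded into a valid bitstream; the lazy one just
  # costs far more bits (or quality) to represent.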

I think in terms of YouTube specifically, it’s not a hardware encoding issue but a purposeful optimisation for storage and bandwidth.


Absolutely cringeworthy. The painful thing is it’s just a simple conversion factor, but I couldn’t take the article seriously after that.

Speaking of reinventing, the article rediscovered the concept of baker’s percentage, which is how bakers always describe recipes! Except baker’s percentage is unit-agnostic and not susceptible to variations in volumetric measurement, ingredient density, and non-universal cup sizes.
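
For anyone who hasn’t seen it: baker’s percentage expresses every ingredient as a weight percentage of the total flour weight, so the same formula scales to any batch size in any units. A tiny sketch with made-up recipe numbers:

  # Baker's percentage: each ingredient as a % of total flour weight.
  formula = {"flour": 100, "water": 70, "salt": 2, "yeast": 1}

  def scale(formula, flour_weight):
      return {name: flour_weight * pct / 100 for name, pct in formula.items()}

  print(scale(formula, 500))   # the same dough scaled to 500 g (or oz) of flour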


Heh, engineering is about being as precise and accurate as needed. Knowing that you're taking a shortcut, shrugging, and saying "this is easier and still works" is the peak of engineering.


Right? There's some misunderstanding that engineering is about precise measurements and reproducible results. That's a scientist, not an engineer. An engineer is the person who gets the job done and the thing built, and if it doesn't kill someone or explode and start a fire, then that's all good.

But I think treating cooking as any of the STEM subjects is wrong. Cooking is an art. It has to please your senses and senses are not scientific instruments, they're subjective and inaccurate. Recipes are only templates and you need to use the brain to fill in the blanks: if I tell you to add two onions in a stew, can't you use your noggin to decide whether your onions are too small, and so you need to add more than two, or too large and so you need to add fewer? Will it ruin the stew if you add three onions, or one and a half? How many traditional dishes are the result of cooking another dish with what ingredients were at hand, or the result of mixing two things that were previously not eaten together?


But it isn't just a conversion factor, exactly for the reasons you state: a cup of flour will be a different weight depending on the brand, how much it settled in the bag, etc. You always have to deal with weights.


Do those factors also affect how much volume you need?


Baking is literally chemistry. You don't get consistent results if you measure a powder of quite variable density by volume instead of mass.


You also don't get consistent results if you assume the type of flour, atmospheric humidity, baking conditions, etc. will be exactly the same as the recipe author's and simply follow the exact measurements blindly because "precision!".

The best approach is to watch a video that clearly demonstrates how the product should look and feel at every point along the process, and do what you can to imitate that - even if it means leaving your scales and cups in the cupboard.


Right, but you can successfully course-correct if you have a reproducible measurement, instead of one that varies by 20% each time you make it. (Source: Cook's Illustrated magazine on angel food cake; they did experiments on how much variation cup measurements of the same flour had. Given their audience, probably preaching to the choir, but it was at least a decade ago...)


If you know how the right amount feels, it really does not matter one bit how far off your initial measurement is. You start with an amount that's too low and add more until it feels right. Whether the amount you start with is 24% too low or 20% too low makes no difference. Obviously you shouldn't start with an amount so far off as to be too high, but again, that would never happen if you are going by feel.


You want the correct amount by weight. Since volume (for the same weight) varies depending on several factors, it follows that those factors affect how much volume you need.

The technique used for measuring "1 cup" affects how much weight you get in 1 cup. This is in addition to the type of flour, clumping, how densely it was packed in the bag, etc.


Unless the virus derived from gain-of-function research, which is one of the plausible hypothesised origins.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234839/


Isn’t it funny that, of all episodes, they decided that was the one that crossed some sort of line of decency. The episode was entirely on-brand, but I can see how someone might watch that one episode and be turned off.

The themes and ideas presented in later episodes left me far more shocked and moved. “San Junipero” and “Smithereens” were incredibly moving for me personally, while other episodes were absolutely fascinating because they took some aspects of our society and humanity and pushed them to their logical extreme.

The most shocking part of all is how true to reality most episodes are. What set S01E01 apart was that it didn’t rely on sci-fi to tell its story, which is the only way in which it isn’t on brand.


Honestly it’s incredible that sending 12,000 satellites into space to provide internet coverage across multiple continents is cheaper than installing ground-based infrastructure. The cost per satellite is reportedly $500k, so the Starlink constellation cost only about $6 billion.

For comparison, the Australian NBN (a country-wide internet infrastructure upgrade) cost over $50 billion, and it still relies on satellites to cover rural areas.


I bought a fake Nokia back in the day (18 years ago). The things that gave away that it was fake were the atrocious battery life (around 2 hours of runtime) and the otherwise clunky software. I can assure you faking a hologram was absolutely no problem.

