Voting doesn’t matter. Between the “small state bias” in the electoral college (which has allowed the President to be chosen without winning the popular vote twice in the last 30 years), gerrymandering that allows Republicans to be overrepresented in Congress, and the continuous erosion of the separation of church and state, the majority doesn’t stand a chance.
My TV appliance does not support other browsers; must every appliance be open and support multiple competing browsers? Is there some rational boundary where the appliance can be shipped and just work as it was designed, and not have to support everybody's choice of software? If there are too many limitations without workarounds, why buy it?
I like your point and I smiled while reading it. I don't think this regulation is good either. But not for this particular reason.
The iPhone is a general purpose smartphone, and your TV is not. The "smart" part of the smartphone makes it useful without Safari. See apps that integrate with health devices, or simpler uses like offline Google Maps. Neither is functioning as a "phone" but instead as a general-purpose computer.
With the exception of the iPhone, no other general computing device is currently enforcing a single browser.
(And the "smart" in the "smart tv" does not get even close, as their apps can do almost nothing in comparison.)
A jpeg pixel is 64 (= 8 x 8) 8-bit coefficients which are summed together for each pixel. That result is not 8 bits but more; the 8-bit claim is a misunderstanding, often repeated. A jpeg is capable of over 11 bits of dynamic range, and an 11-bit dynamic range image clearly has more than 8 bits. See Wikipedia.
A jpeg pixel is not 64 eight-bit coefficients. Jpeg compresses an 8x8 pixel block at a time by taking a DCT (which mathematically is lossless, but in practice is not due to rounding and quantization at this stage), which turns those original 8x8 values into another set of 8x8 values, then some of these are thrown away and/or quantized for lossy compression.
Decompression is the reverse: take these 8x8 quantized DCT coefficients and perform an inverse 8x8 DCT to get pixel values.
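To make the pipeline concrete, here is a minimal sketch of that per-block round trip, assuming SciPy's dctn/idctn stand in for the 8x8 DCT and using a toy flat quantization table (real codecs use integer DCT approximations and standard quality-scaled tables):

    # Minimal sketch of JPEG's per-block pipeline for one luma block.
    # Assumes SciPy; the quantization table here is a toy, not a real JPEG table.
    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.arange(64, dtype=np.float64).reshape(8, 8)  # hypothetical 8x8 of 8-bit samples
    shifted = block - 128.0                                # JPEG level-shifts samples first

    coeffs = dctn(shifted, norm='ortho')                   # forward 8x8 DCT
    q = np.full((8, 8), 16.0)                              # toy quantization table
    quantized = np.round(coeffs / q)                       # the lossy step: rounding discards detail

    # Decompression is the reverse: dequantize, inverse DCT, undo the level shift, clamp to 8 bits.
    restored = idctn(quantized * q, norm='ortho') + 128.0
    pixels = np.clip(np.round(restored), 0, 255).astype(np.uint8)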
The 11-bit dynamic range you claim comes merely from a color profile, which takes the resulting 8 bits per channel (i.e., 256 possible values) and spreads them over a gamma curve to an 11-bit range. But there are still only 256 possible levels per channel, too few for quality image editing.
Think of it as squaring: taking [0,255] as your input and squaring every value gives you a range of 0 to 255^2 = 65025, but that does not allow you to store any value in that range. It only allows you the 256 values that are squares.
So the dynamic range is 11 stops, but the number of representable levels per channel is still 8 bit: 256 levels. This makes gradients band no matter how you do them in JPEG.
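A tiny Python sketch of that point, purely illustrative: remapping 256 input levels onto a wider output range does not create any new levels.

    levels = range(256)                      # the 256 possible 8-bit values
    remapped = [v * v for v in levels]       # "gamma"-style remap; range is now 0..65025
    print(len(set(remapped)))                # still only 256 distinct values
    print(max(remapped))                     # 65025: a wider range, same number of levels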
It's why photo processing software wants RAW, not JPEG. JPEG, besides being lossy, does not allow enough steps.
One example: given a so-so RAW at 11 bits, you can smoothly pull up dark areas or darken bright ones. This is not possible once you go to jpeg, for any implementation of jpeg.
I said a jpeg pixel is the summation of 64 8-bit coefficients. The coefficients are 8-bit, but obviously the cosine values are not 8-bit; they can be floating point. All a jpeg needs to give is the coefficients; the coder/decoder knows what to do with them. Summing 64 products, each product being an 8-bit number times a cosine value, gives more than an 8-bit result for the resulting pixel. In addition, there is another DCT for the color values, which adds more bits of info.
The cosine values are transcendental numbers; they can have an infinite number of decimal places, yes? So adding up 64 products of (cosine value * 8-bit integer) to get one pixel value can obviously yield more than 8 bits.
No, a jpeg pixel is not "the summation of 64 8 bit coefficients." I've written jpeg codecs (and many other image formats). It works just as I explained above.
Or simply read the libjpeg source.
Don't like that, read this [1]: "JPEG images are always recorded with 8-bit depth. This means the files can record 256 (2^8) levels of red, green and blue."
Don't like that, here [2] is the JPEG ISO standard, section 4.11, baseline jpeg, "Source image: 8-bit samples within each component".
A DCT takes an 8x8 pixel input, 8 bits per channel, and transforms it into an 8x8 output. It does not matter what those output values look like; the information content is nothing more than what was put into it. There is not suddenly, magically, more information.
More simply, appending zeroes to a number does not mean you can represent more numbers. You simply can represent the exact same numbers, just wasting more space.
None of what you wrote adds more resolution at the output. It simply isn't there.
If I give you 5 possible inputs to a function, then you have 5 possible outputs, no matter how many digits you finagle into representing the output.
Jpeg has 8 bits of resolution per channel. End of story. That is why professional photos are taken and edited in raw - you get more bits of resolution per channel.
I'm not sure why you're still arguing this. It's a longstanding, well known issue, and I explained it all again very simply.
If you think it isn't true, encode one of your magic jpegs with more than 256 levels of gray and post it here. Good luck :)
If you cannot do that, then maybe you should consider that you're wrong.
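If you want to try, here is a rough sketch of that test with Pillow (the file name and quality setting are just placeholders). Since baseline JPEG only accepts 8-bit samples, at most 256 gray levels can come back out, no matter what you feed in:

    # Rough sketch, assuming Pillow and NumPy are installed; 'ramp.jpg' is a placeholder name.
    from PIL import Image
    import numpy as np

    w, h = 4096, 64
    ramp = np.tile(np.linspace(0, 255, w), (h, 1))            # smooth horizontal gradient
    Image.fromarray(ramp.astype(np.uint8)).save('ramp.jpg', quality=95)

    decoded = np.array(Image.open('ramp.jpg').convert('L'))
    print(len(np.unique(decoded)))                            # capped at 256 levels of gray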
What you have described is usually called a 24 bit rgb image--not an 8 bit image. An 8 bit image can have only 256 distinct levels, whereas a jpeg can have ~16 million, or 2^24, values for each pixel. 8 bit images are often used for medical imaging, but they are crude compared to 24 bit rgb images. One can argue that 24 bit rgb images are too crude, but they should not, IMHO, be called 8 bit images. Yet that is often what people say about jpegs. Typical jpegs with 8 bit coefficients have much more information than 8 bit images. Perhaps typical imprecise terminology?
[1] https://en.wikipedia.org/wiki/Color_depth#True_color_(24-bit...
[2] https://www.quora.com/How-many-colors-does-a-JPEG-contain
[3] https://en.wikipedia.org/wiki/JPEG#JPEG_codec_example << They walk thru the steps.
I never called them 8 bit images. I wrote 8 bits per channel. Each of R, G, and B is a channel. An RGBA image has 4 channels. A grayscale image has one channel. This is standard terminology. So an 8 bits per channel image with three channels is a 24 bit image.
It is very precise terminology, used correctly. It's also covered in your links; you can read it there.
Now, if you encode gray levels in RGB, at 8 bits per channel, you do indeed end up with only 256 gray levels in the image, because for each pixel, R=G=B.
Actually the coefficients are 12 bit in JPEG, before quantization. In principle you can make pretty accurate 10-bit HDR JPEG files, and with an accurate JPEG decoder, it would work well enough.
The most common JPEG decoders though (in particular libjpeg-turbo) are using a cheap but not super precise iDCT that has 8-bit YCbCr as output, which then gets chroma-upsampled if needed and converted to 8-bit RGB. That causes the effective precision in reds and blues to be only 7-bit. But in principle you could have about 10 bits of effective RGB precision, it just requires a sufficiently precise JPEG decoder.
Alternatively, if you're being confused by the quoted intermediate DCT precision, that's not relevant to the final output either. You cannot implement any transform, DCT especially, without having intermediate values with a greater range than the input or output. Like, even a simple average of two 8-bit values (a+b)/2 has an intermediate range of 9 bits.
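To spell that last point out:

    # The intermediate sum of two 8-bit values needs 9 bits even though
    # the inputs and the final average both fit in 8 bits.
    a, b = 255, 255
    total = a + b                                  # 510 -> 9 bits wide
    avg = total // 2                               # 255 -> back to 8 bits
    print(total.bit_length(), avg.bit_length())    # 9 8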
The JPEG spec does specify both 8 and 12 bit sample precision for lossy, but I don't think anyone ever implemented 12-bit since libjpeg never cared about it.
Libjpeg does have support for it. Unfortunately, you have to have two copies of the library, one for 8 bit and one for 12. And you have to rename all the API methods in one of the libraries so you don't get name collisions. I believe that LibTIFF has a build configuration for this so that 12-bit JPEG data can be encapsulated in a TIFF file.
Fair enough, but the end result is what really matters, and I regularly see banding in the skies of photos. Some people add grain just to help deal with that, which is a ridiculous problem to have in 2022.
All cloud platforms scan for CSAM. Apple was going to move the scanning task off their servers, where it happens now, onto Apple devices, and only as the devices upload to Apple's cloud. If you use cloud services, the info does not stay on your phone, does it?
> I can ditch my iPhone at any time, but Apple wanted CSAM to be part of Mac OS and this is unacceptable.
It is already on macOS. It sends hashes of every unknown file for "virus protection purposes". Only the name and the hash database are different. How does that change your privacy?
And to be fair, they never said publicly that it would come to Mac OS.
It seems Apple devices are an opaque black box:
support.apple.com/en-us/HT202303
Everything is encrypted except IMAP email storage. They have no access to anything except that. They can't do anything with any data except IMAP emails stored on their server.
Governments want them to do something to prevent CSAM, so now they can match a perceptual hash, not content, to known images from a non-governmental organization (a rough sketch of what a perceptual hash looks like is below). That is about the least invasive CSAM prevention anybody can do.
Can someone suggest an alternative CSAM prevention which is less intrusive?
It seems the alternative would be to continue to be an opaque black box, or NOT encrypt your photos and rummage through them.
From https://support.apple.com/en-us/HT202303
>End-to-end encryption provides the highest level of data security. Your data is protected with a key derived from information unique to your device, combined with your device passcode, which only you know. No one else can access or read this data.
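For anyone wondering what "match a perceptual hash, not content" can look like in practice, here is a rough sketch of the simplest kind of perceptual hash (an average hash; Apple's NeuralHash is a different, learned hash, and the file name below is just a placeholder):

    # Sketch of an "average hash": visually similar images give similar bit strings,
    # so matching compares hashes against a database, never the photo content itself.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert('L').resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count('1')

    # A small Hamming distance between hashes suggests near-identical images:
    # hamming(average_hash('photo.jpg'), known_hash) <= some threshold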
Let us look on the bright side: with all the "alternate facts" promulgated by politicians, one must exercise some discernment about the context and the source, and not immediately accept what is said as fact. This hesitancy to immediately accept what is written is a good quality to have. Unfortunately it does not seem to be a widespread quality in the US voter.
Many people were raised in, and are psychologically used to, the days when news media had professional journalists who dug out the truth and presented it to the people. Many still have that mindset even though those days are in the past, and many media sources are tainted with objectives other than digging out the truth. That task seems to be left to the individual. So life is forcing us to grow more discerning; hopefully there will be a long-term payoff.
EARN IT seeks to deal with the scourge of online child exploitation by coercing service providers to more aggressively police such content on their platforms. https://www.congress.gov/bill/116th-congress/senate-bill/339...
Similar laws in UK and others.
Maybe this will short-circuit the need for a government backdoor to snoop in iCloud photos?
Q2) Didn't people agree to no illegal CSAM in the iCloud TOS? Doesn't all this just move the scanning from Apple's servers to the distributed ARM processors?
Q3) Is that more environmentally friendly or less? I am sure it is cheaper for Apple to have the iPhone scan than to add additional servers, cooling, space, etc.
If one doesn't use iCloud Photos, this does not affect them, for now.