I'm not super familiar with the JPEG internals, but IIRC H.264 uses 16x16 macroblocks and JPEG uses 8x8 DCT blocks, so padding of 16px on all sides would presumably cover any block boundary and prevent all possible information leakage?
Except the size of the blocked-out section, of course. E.g. if you know it's a person's name from a fixed list of people, well, "Huckleberry" and "Tom" are very different lengths.
I know this is likely to be an unpopular take, but I wish it were normal to ship your compiler in your source repo.
Modern compilers are huge, bloated things, which makes this a bit impractical, but if it were a normal thing to do then we'd probably have optimized the binary sizes somewhat.
I just really like the idea of including _everything_ you need for the project. It also ensures that weird problems like this don't happen. As an extra benefit, if you included the compiler source and a bootstrapping path instead of just the latest binary, you could easily ship project-specific compiler / language extensions with no extra effort.
As someone who recently built a daily word game[1], I 100% get it. I can say from first-hand experience: there's an awful lot of words that are totally valid but not fun.
I spent about as much time building the word list as I did developing the game. The author's technique of just grabbing a word list and spellchecking it is nowhere near sufficient; you will get so many weird, unfamiliar words. I was able to whittle my list down to about 24,000 words using various automatic methods, but from there I had to do a manual review of the remainder, which meant I got to see a lot of words, and many of them felt very obscure and/or just not fun.
(that's the best linkable reference I could find, unfortunately).
I've run into a similar problem, where an overload for uint64_t was not selected when calling with a size_t, because one was unsigned long and the other unsigned long long: both 64-bit unsigned integers, but, according to the compiler, different types.
This was a while ago so the details may be off, but the silly shape of the issue is correct.
This was my point. It may be `unsigned long` on his machine (or any LP64 platform), but that isn't what `uint64_t` means. `uint64_t` means a type that is exactly 64 bits, whereas `unsigned long` is simply a type that is at least as large as `unsigned int` and at least 32 bits, and `unsigned long long` is a type that is at least as large as `unsigned long` and at least 64 bits.
I was not aware of compilers rejecting the equivalence of `long` and `long long` on LP64; GCC on Linux certainly doesn't. On Windows it would be the case, because Windows uses LLP64, where `long` is 32 bits and `long long` is 64 bits.
An intrinsic like `_addcarry_u64` should use the `uint64_t` type, since its behavior depends on the operands being precisely 64 bits, which neither `long` nor `long long` guarantees. Intel's intrinsics spec defines it with the type `unsigned __int64`, but since `__int64` is not a standard type, it has probably been implemented as a typedef or `#define __int64 long long` by the compiler or `<immintrin.h>` he is using.
long and long long are convertible, that's not the issue.
They are distinct types though, so long* and long long* are NOT implicitly convertible.
And uint64_t is not consistently the correct type.
It would be nice to see some qualitative analysis to know whether it's just slop or actually more interesting projects. Not sure how to do that, though. I don't think just looking at votes would work: more posts means lower average visibility per post, which should cause upvotes to slump naturally regardless of quality.
Edit: maybe you could:
- remove outliers (anything that made the front page)
- normalise vote count by expected time in the first 20 posts of shownew, based on the posting rate at the time
A weighted sampling method is probably best: segment by time period and vote count (or vote rate), then have humans evaluate each sample. This could be done in a couple of hours and gives a higher degree of confidence than any automated analysis.
My first php script was a file upload server for a lan party. Luckily nobody tried to upload a file named ../index.php, because I realized afterwards that it would have worked :p
And I see this argument often. People make too much fuss about the massive error messages. Just ignore everything but the first ~10 lines and 99.9% of the time the issue is obvious. People really exaggerate the amount of time and effort spent dealing with these error messages. They look dramatic, so they're very memeable, but it's really not a big deal. The percentage of hours I've spent deciphering difficult C++ error messages in my career is a rounding error.
Do you also believe that knowing type deduction is not necessary to fix those errors unless you are writing a library? Because that is not my experience (a C++ "career" can involve such wildly different codebases that it's hard to imagine what others must be dealing with).