TrinaryWorksToo's comments

Buy shares in Tesla. If Twitter succeeds Elon will need to sell less stock to fund Twitter.



The SAT should be pass/fail. It's not precise enough to rank students.


They're about the same accuracy according to this data: https://www.cnet.com/tech/tech-industry/study-wikipedia-as-a...


Well, for a particular set of fairly obscure science articles. It's quite different in the social sciences, for example. What people don't appreciate is that Wikipedia's strengths and weaknesses are very different from Britannica's.

Britannica doesn't contain outright hoaxes and nonsense. Examples:

https://www.theregister.com/2017/01/16/wikipedia_16_birthday... https://wikipediocracy.com/2022/08/11/wikipedias-credibility...

But Britannica can never be as up to date as Wikipedia:

https://www.inputmag.com/culture/queen-elizabeth-ii-death-wi...

Nor can it cover as many topics as Wikipedia.

Wikipedia's quality also depends on the topic area. Hard science and computing tend to be covered more adeptly than philosophy for example.

And article quality simply varies much more in Wikipedia. It ranges from some of the finest writing anywhere, rivalling anything in Britannica and surpassing it in up-to-dateness, to complete rubbish and intentionally falsified content.


I think you are drastically underestimating how absolutely bonkers the 1970 Britannica edition was in, to use your example, the social sciences.

I did this analysis long ago and I don't have the set in front of me, but there absolutely were hoaxes and nonsense which were believed to be true (or which fit the prevailing narrative) in 1970.

Not being up to date isn't just about incorporating new information. Fields like social sciences have huge revisions and reversals because conclusions in those fields are so often rooted in opinion and inference rather than empirical observation.

Yet, no teacher would complain about using an old copy of Britannica.


Perhaps so, but why choose the 1970 edition, rather than one from the 1990s, or the current online one?

At any rate, here is a quote from a BBC journalist: "In the Wikipedia gullibility stakes, no one is infallible. That means any journalist in any newsroom will likely get a sharp slap across the head from an editor for treating Wikipedia with anything but total scepticism (you can imagine the kicking I've taken over this article)."

https://www.bbc.co.uk/news/uk-northern-ireland-37523772

I doubt any journalist ever got such a kicking because they cited Britannica.


It's simply what my grandparents had in my home, back when I still cared about school and when Wikipedia was relatively new. I remember being astounded that this ancient, crappy thing would be accepted while Wikipedia would not.

You know in the world of tech they have a similar saying: "No one ever got fired for buying IBM."

This does not mean IBM is either a superior choice, or a wise choice.


Yeah, it's a valid comparison. I will let you in on a secret: I have a full 1990s Britannica set in my home, about three metres from my desk.

I consult it maybe four or five times a year, and the Britannica website maybe another dozen times or so.

Still, it's good to always be aware that the ways Wikipedia and Britannica are produced are very different and to factor that in (see my post below from just a couple of minutes ago, about the desert article).


I think this was your point (in the last few sentences), but those 16 or so hoaxes cited are on absolutely obscure subjects. A children's book character, "Amelia Bedelia"? A subspecies of raccoon? So yes, you're going to have articles that matter to one person per year get skewed, and that sucks, and it's unfortunate for whatever poor soul wants to use those once-a-year-viewed articles that are misleading, but I'm not sure that is a good argument against Wikipedia.

If, however, we were to look up "Mars" or "electron" and find hoaxes there, then there would be a clear and obvious problem.


How about "desert"? Wikipedia's "desert" article said for almost a year that the mean winter temperature in cold deserts, such as those found in Greenland and Antarctica, is typically between +4 and –2 °C.

False info inserted: https://en.wikipedia.org/w/index.php?title=Desert&diff=prev&...

False info deleted: https://en.wikipedia.org/w/index.php?title=Desert&diff=60327...

The desert article was classified as a "Good article" during that time and got about a million views. This is not an obscure article, nor was the article on Maurice Jarre at the time of his death, which is why the hoax instantly entered newspapers around the world.

So I accept your point to a degree: many of the topics mentioned in that article are very obscure, but even a high-visibility, highly trafficked article can contain absolute balderdash mixed in with solid information and really good writing.

I'm also not trying to make an argument against Wikipedia – I am a Wikipedian and I love Wikipedia, and use it daily, but it's important to be aware of its weaknesses and vulnerabilities.


Yeah .. I know this one is getting buried, but yes.. there are inconsistencies and hoaxes, and just bad information on the articles... And yes that can be dangerous to a reader..

But you know, I was just thinking earlier today, reading a Wikipedia article and looking at some of the citations, that even with its shortcomings it has the potential to make readers, especially those that are curious, actually question what they're reading. And when compared to, say, Britannica, I think that's a very positive aspect.

Meaning: I know that citations are all over academia and research areas, but they aren't in a lot of K-9 textbooks, so for someone who doesn't go on to higher education, they can potentially have that questioning effect.


I'm glad you posted that information, because it really shows that random online information can often be wrong or conflict with other online authorities. Here are two quotes from an article that was posted 10 years after your link. [1]

  "There has been lots of research on the accuracy of Wikipedia, and the results
  are mixed—some studies show it is just as good as the experts, others show
  [that] Wikipedia is not accurate at all."
and later in that article

  They found that in general, Wikipedia articles were more biased—with 73
  percent of them containing code words, compared to just 34 percent in
  Britannica.

[1] https://www.forbes.com/sites/hbsworkingknowledge/2015/01/20/...


> code words

I'm not sure that's a great measure of bias. It's easy to write in an unbiased tone, yet still be biased.


Further, unbiased articles may easily contain biased words. Especially since Wikipedia is an international project, the articles describe multiple points of view, and the "code words" of the research were terms such as the following: "tax breaks, minimum wage, fuel efficiency [--] death tax, border security, war on terror".


Bias is not the same thing as being correct or incorrect.


That is some seriously selective quoting. The next sentence says the code word count is due to Wikipedia having longer articles.


That's some false paraphrasing. No, it does not say that in the next sentence.

However, you just made my point, which is that, "random online information can often be wrong or conflict with other online authorities."


I remember hearing this a long time ago, and the article you linked is from 2005. I wonder if Wikipedia having >= accuracy to traditional encyclopedias remains true today, given how different the web and web users are today.



You can’t implement the colouring with an image map.


You're right. The image itself would need to be colored and then switched out, perhaps.

Or use an SVG.


A lot of that research is flawed:

https://www.youtube.com/watch?v=UBc7qBS1Ujo


A very good video; it goes over a lot of the points I've made in this thread in good detail. I also recommend it.


That's why I used DDG first; in my opinion it does this less.


DDG is just Bing though.


With a proxy!


This looks a lot like some UFO videos.


If you want to hack it, give it a binary de Bruijn sequence with alphabet size k=2. Since it looks to follow whichever pattern already exists, and de Bruijn sequences minimize existing patterns, it always beats the game.

I used http://combos.org/bruijn and pressed left for zero and right for one.

Using rule Grandmama creates close to a perfectly straight line up and to the right when you start from the first 1 in the sequence.

Try: 1001000101010011010000110010110110001110101110011110111111000000

where 1 is right and 0 is left.
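If you'd rather generate such a sequence locally than use combos.org, here is a minimal sketch using the standard FKM (Lyndon-word) construction. The exact bit string may differ from the one above, since combos.org may rotate or order the sequence differently, but any B(2, 6) sequence has the same defining property: every 6-bit window occurs exactly once cyclically.

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n) via the FKM (Lyndon word) algorithm."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            # Only Lyndon words whose length divides n contribute.
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

s = de_bruijn(2, 6)  # 64 bits; read 1 as "right", 0 as "left"
print(s)
```

Because every length-6 history is unique (cyclically), a predictor that keys on the last few moves never sees a repeated pattern to exploit.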


This is cool info, thanks.

> Since it looks to follow whichever pattern already exists, and de bruijn sequences minimize existing patterns, it always beats the game.

I'm pretty sure this isn't true, though. A de Bruijn sequence doesn't guarantee that the order of the n-patterns is random, only that every n-length pattern appears exactly once.

Indeed, the algorithm you mention puts n-subsequences with many 0's in the front, and subsequences with many 1's in the back. Sure, every n-length subsequence appears only once, but because the order of the subsequences does follow a predictable pattern, your total sequence is still pretty predictable.

This disparity isn't noticeable when n is small (you chose n=6), so you can comfortably beat the game. But pick a large n and your sequence becomes rather predictable.

Try n=15. In that case, the generated sequence S has length |S|=32,768=2^15. In the first half, there are 9,098 0's and 7,286 1's; in the second half these numbers are exactly reversed. Throw this sequence into the game, and you end up with a prediction accuracy of 50%. Not worse than random, but you didn't beat the game either.
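The half-count claim above is easy to check locally. A quick sketch, assuming the FKM (Lyndon-word) construction (combos.org may use a different one, so the exact split can differ slightly): note that a binary de Bruijn sequence of length 32,768 = 2^15 corresponds to n=15, and while the 0's and 1's balance exactly overall, the FKM construction emits Lyndon words in lexicographic order, so the 0-heavy words cluster in the front.

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n) via the FKM (Lyndon word) algorithm."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

s = de_bruijn(2, 15)            # |S| = 2^15 = 32,768
half = len(s) // 2
zeros_first = s[:half].count("0")
ones_first = half - zeros_first
# Overall counts are exactly balanced (each digit heads 2^(n-1) windows),
# but the first half is 0-heavy, which is the predictability being described.
print(zeros_first, ones_first, s.count("0"), s.count("1"))
```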


I picked n=6 in particular because the code checks the past 5 numbers, so n=6 means it can't do its frequency analysis, while also avoiding the problem you mention. And yes, it isn't random; rather, it enforces a unique sequence. Because the code looks for repetition one number at a time, a de Bruijn sequence should be close to optimal, since it avoids repetition by construction.

Also this sequence seems to work better: 0000001001000101010011010000110010110110001110101110011110111111


Ah yeah. My monitor cut off the text under the graph so I didn't know they just gave away their predictive algorithm. n=6 obviously works, cool approach!


Here is the follow-up blog post, where the author mentions de Bruijn sequences specifically: https://www.expunctis.com/2019/04/01/Not-so-random-followup....


As opposed to private individuals who do the same and can't be voted out?


It's much easier for me to leave one private company for another.

Besides, "voting people out" is not a great take, especially in the US. Because of both gerrymandering and the very form of the Constitution, the US is very much rule by the minority. If you live in California, you are represented by the same number of Senators as someone who lives in Wyoming.

The last thing I want is government to have more power.

