HN is a perfect example of a classic, evergreen product. Or call it a crocodile product, as crocodiles are somewhat location-loyal.
Are there more examples of products that did (on purpose) not change significantly?
If you consider the sentiment around the recent Gmail redesign, Facebook, etc., it often appears that product managers are the only ones who want to change a product and its appeal. I think Reddit also had the crocodile concept for a long time. Google's main search page has changed only minimally over the past 10 years. I guess there are more examples (Quora, Craigslist, Wikipedia..?)
@pavlov your examples capture the could-be quite well.
I don't know, Amazon seems pretty bloated at times. The page may look similar to old Amazon pages, but there is just so much happening that I'm overwhelmed: "Related Items", "People who bought this also looked at", multiple sections of "Sponsored Items", "Customers Also Bought", "Frequently Bought Together". Sigh. Just show me the product and the information about it (specs, reviews, questions [and don't show me any questions that were just answered with "I don't know"]).
The overload is especially prevalent on AWS; I get super confused every time I work there. So many buttons, services, etc. It's primarily for that reason that I prefer using GCP, as it is kept a bit simpler.
Nope. The search box UI looks very similar, but the UX has changed a ton.
From not being able to use "+" in queries to how your queries are interpreted to what you actually get as your results (from map results to AMP articles, there's a whole spectrum), it's very much a different product these days.
In the Netherlands, an early and very popular web page (1998) is http://www.startpagina.nl , and it's still popular with non-tech-savvy people. http://www.nu.nl (1999) is the country's most popular news page, and http://www.marktplaats.nl (1999) the most popular Craigslist/eBay equivalent.
They have all tweaked their UIs over two decades if you look closely, but made sure they look more or less the same as ever.
Actually, Teletext ( https://nos.nl/teletekst ) is still used by many, and it looks exactly like it does on TV.
Teletekst is an outlier because it did not start as a web page, but the design has been consistent over the years, and I still use the site and the app daily for news, the TV guide, and the weather forecast!
Counter-opinion: I think of HN as a perfect example of a non-product. I can't think of anything positive about HN that isn't ultimately traceable to its nature of not being a product.
As an avid redditor, I wish they had preserved the crocodile concept. I can't bring myself to like their "new reddit" and always keep going back to "old.reddit.com/...".
Their leadership has this notion that people just don't like change, and that's why they use old.reddit. For me, it's simply that I don't like the new design: it's painful, while the old one is simple, clear, and quick.
I really hope that the ShareLaTeX UX will survive. If I had to use Overleaf's UI, I might seriously consider canceling my subscription. The dark theme is not something you want to use on a daily basis. ShareLaTeX also felt much faster and had much better example snippets, I think?
I'm optimistic that you can pull off the transition, given that it's open source.
This series might be interesting for anyone interested in a personal story of science at the intersection of research, speaking, travels, and family. Enjoy!
"A paper from the Open Science Collaboration (Research Articles, 28 August 2015,aac4716) attempting to replicate 100 published studies suggests that the reproducibility of psychological science is surprisingly low. We show that this article contains three statistical errors and provides no support for such a conclusion. Indeed, the data are consistent with the opposite conclusion, namely, that the reproducibility of psychological science is quite high."
As a non-expert, may I ask (since the term does not appear in the paper): how valuable is the Shannon number for evaluating "complexity" in your context?
Since both numbers are beyond the realm of brute-forcing, the bigger achievement lies in the more fluid and strategic nature of Go compared to chess. Chess is more rigid than Go, and playing Go employs more 'human' intelligence than chess.
Quoting from the OP paper:
"During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement learning methods."
"Go is exemplary in many ways of the difficulties faced by artificial intelligence: a challenging decision-making task, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function. The previous major breakthrough in computer Go, the introduction of MCTS, led to corresponding advances in many other domains; for example, general game-playing, classical planning, partially observed planning, scheduling, and constraint satisfaction. By combining tree search with policy and value networks, AlphaGo has finally reached a professional level in Go, providing hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains."
I will admit to not following AI at all for about 20 years, so perhaps this is old hat now, but having separate policy networks and value networks is quite ingenious. I wonder how successful this would be at natural language generation. It reminds me of Krashen's theories of language acquisition where there is a "monitor" that gives you fuzzy matches on whether your sentences are correct or not. One of these days I'll have to read their paper.
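The division of labor the quoted passage describes can be sketched with toy stand-ins (the `policy`/`value` functions and the number game below are hypothetical illustrations, not from the paper): a policy proposes a few promising moves to narrow the search, and a value function scores the resulting positions instead of searching to the end of the game.

```python
# Toy sketch: a policy narrows the search; a value function scores positions.
# Hypothetical game: start at 0, each move adds a number; closest to 10 wins.

def policy(state, moves, k=2):
    """Stand-in policy network: rank candidate moves, keep only the top k."""
    return sorted(moves, key=lambda m: abs((state + m) - 10))[:k]

def value(state):
    """Stand-in value network: estimated quality of a position (higher is better)."""
    return -abs(state - 10)

def choose_move(state, moves, depth=2):
    """Shallow search: expand only policy-selected moves, score leaves with value()."""
    def search(s, d):
        if d == 0:
            return value(s)
        # Instead of expanding every legal move, follow the policy's shortlist.
        return max(search(s + m, d - 1) for m in policy(s, moves))
    return max(policy(state, moves), key=lambda m: search(state + m, depth - 1))

print(choose_move(0, [1, 3, 5, 7]))  # picks a move from the policy's shortlist
```

The point of the sketch is only the structure: pruning the branching factor with a policy and truncating the depth with a value estimate, which is what lets AlphaGo evaluate far fewer positions than Deep Blue did.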
For language generation, AFAIK there is no good model that follows this architecture. For image generation, Generative Adversarial Networks are strong contenders. See for instance: