
Ironically, it is unbelievably damaging for individuals to believe that one needs a lot of luck, as this belief feels good, destroys resilience, and prevents one from recognizing and going after the (initially) modest opportunities that do show up.

Sure, you won’t be another Elon Musk, but you could do better than 90% of the population, since most people don’t want to try hard. It’s much more “fun” to believe that life is unfair, and that therefore no action is required on one’s part.


If you're born into a family making below the median income, it is incredibly unlikely that you will end up with an income in the top 10%, no matter how hard you work. The systemic pressures against socioeconomic mobility are such that you still need a massive amount of luck, whether that luck is in being born rich or in getting just the right opportunities at the right times to break free of poverty.

And painting this as just being about mindset is also an incredibly narrow (and privileged) view. The amount of energy and brainpower that the stresses of being poor sap from you is just staggering.


Don't forget the very broken incentives.


Overpromising? Underpromising, if anything. I cannot understand the point of view that denies the _overwhelming_ progress that has taken place in AI. And denying GPT-3’s mega advance seems wrong to me: we now have a system that can achieve state-of-the-art or near-state-of-the-art results with only a negligible number of training examples (i.e., few-shot learning).
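
To make "few shot" concrete: the entire "training set" is a handful of demonstrations placed directly in the model's input, and the model continues the pattern. A minimal sketch; the complete() function is a hypothetical stand-in for whatever model API you'd call, not a real client:

    # The "training data" is just a few demonstrations embedded in the prompt.
    prompt = (
        "Translate English to French.\n"
        "sea otter => loutre de mer\n"
        "plush giraffe => girafe peluche\n"
        "cheese => "
    )

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a large language model completion API."""
        raise NotImplementedError

    # A capable few-shot learner continues the pattern with "fromage",
    # despite never having been fine-tuned on this translation task.
    # print(complete(prompt))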


Deep learning will continue to surprise


Nice try, GPT-3.


> because it most likely is

"Ark estimates that Deep Learning has created $1 trillion in market value so far. " -- we might be headed towards a warm winter

(from https://www.nextbigfuture.com/2020/01/ark-invests-big-five-t...)


Value is there until it isn't. A lot of the value here comes from its potential rather than its current use.


It's an unbelievably valuable resource to humanity.


I do not deny that it is great for learning. But is that learning recognized by anyone?

Or is it something only worth doing for personal fulfillment?

Coursera has a very heavy pitch about employability.


There is no perception of scarcity in the market for online certifications, so they carry little weight.

That doesn’t mean you don’t gain valuable knowledge, but it’s unlikely to be in demand from an employer.


There could be scarcity if few people could successfully complete the certification. The problem, though, is whether it is trustworthy.


If Coursera succeeds it'll be worth far more than 2.5B.


Deep learning is as close to being a "Newtonian theory" of the brain as it gets -- deep learning abstracts away a lot of the complexity of neural systems (e.g., a simple artificial neuron vs. a highly complex biological one) while maintaining a number of essential characteristics: massively parallel computation, error tolerance, graceful degradation, distributed representations, storage of information in slowly changing synapses, and, most importantly, a simple, local, and powerful biologically-plausible-if-you-squint-hard-enough learning rule.
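
To illustrate what "simple and local" means here, a toy sketch of my own (not from any particular paper): a single rate-coded artificial neuron trained with the delta rule, where each weight update depends only on the input, the unit's own output, and the error at that unit:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)   # "synaptic" weights, changed slowly by learning
    lr = 0.1                 # learning rate

    def neuron(x):
        # Rate-coded artificial neuron: weighted sum through a sigmoid.
        return 1.0 / (1.0 + np.exp(-w @ x))

    for _ in range(1000):
        x = rng.normal(size=3)
        target = float(x[0] + x[1] > 0.0)   # toy task: a linear decision
        y = neuron(x)
        # Delta rule: local, since it uses only x, y, and this unit's error.
        w += lr * (target - y) * y * (1.0 - y) * x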

The important question to ask is: is the deep learning abstraction any good?

There's a very strong case to be made that the answer is yes: deep learning systems can perform many (of course, not all, at least not yet) tasks that involve perception (computer vision/speech recognition), motor control (the recent openai robot), language understanding (machine translation/BERT/GPT), planning (alphago/dota/the deepmind protein folding), and even some symbolic reasoning (the recent work from facebook on symbolic integration https://ai.facebook.com/blog/using-neural-networks-to-solve-...). Some of these tasks are performed at such a high level that they become commercially useful, and in some cases, surpass "human level".

So here we have a "model family" -- deep learning -- with a set of principles so simple that it can be studied with intense mathematical rigor (for example, https://arxiv.org/pdf/1904.11955.pdf or https://papers.nips.cc/paper/9030-which-algorithmic-choices-...), and that produces many of the behaviors we want out of brains. And the resemblance is not just behavioral: see, e.g., https://arxiv.org/abs/1805.10734: "Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial." This is just one of many papers showing that, even under the hood, trained deep learning systems exhibit many properties of biological neural networks.

These reasons strongly suggest (imho) that deep learning is in fact the Newtonian theory of neuroscience. More strongly, no other theory comes remotely close in its simplicity and explanatory power.


Everybody in the history of humans has said the latest technology is the best model for how a brain works. There used to be a piston model for the brain.

Self-driving cars can't leave an enclosed environment and might never do so safely.

Richard Dawkins spoke very highly of the brain's ability to do some kind of natural calculus for the sake of tracking a ball in flight, but most animals run on simple tricks and reference points.

Deep learning might be the "good think" for the next ten years, but some of us are not going to let go of the transcendent truth that the brain is not defined by what we think it is. I see limited reason to regard deep learning as more likely than some emergent behaviour from a vast number of simple rules, like animals flocking together in a boids sim.
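
For reference, the boids comparison is easy to make concrete: the classic simulation produces flocking from three purely local rules (separation, alignment, cohesion). A rough numpy sketch with made-up weights, and with the usual neighbourhood search simplified to the global mean:

    import numpy as np

    N = 50
    rng = np.random.default_rng(1)
    pos = rng.random((N, 2)) * 100.0      # positions on a 100x100 plane
    vel = rng.standard_normal((N, 2))

    def step(pos, vel, dt=0.1):
        # Three local rules per boid; flocking emerges at the group level.
        cohesion = (pos.mean(axis=0) - pos) * 0.01    # drift toward the group
        alignment = (vel.mean(axis=0) - vel) * 0.05   # match average heading
        diff = pos[:, None, :] - pos[None, :, :]      # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        near = (dist < 5.0)[..., None]                # neighbours that are too close
        separation = (diff / dist[..., None] * near).sum(axis=1) * 0.1
        vel = vel + cohesion + alignment + separation
        return pos + vel * dt, vel

    for _ in range(100):
        pos, vel = step(pos, vel)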


> Everybody in the history of humans has said the latest technology is the best model for how a brain works. There used to be a piston model for the brain.

Is this the same mistake as in "The Relativity of Wrong" [1]?

> people have thought they understood the Universe at last, and in every century they were proven to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

> [...]

> My answer to him was, "John, when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."

Modelling the brain as a bunch of pistons or as a complicated machine or clockwork thing is a lot better than as a magical clay golem or opaque soul. Modelling it as a computer is even better than that. Not a computer in the sense of an x86 desktop exactly, of course, but the concept of computation is clearly fundamental to understanding the system. Similarly, the brain is not a ResNet, but concepts like backpropagation are probably useful.

So, sure, maybe people have been using the latest fad to explain the brain forever. But that's only bad to the extent that the latest fad is getting further away instead of closer.

1: http://hermiene.net/essays-trans/relativity_of_wrong.html


Degrees of wrongness across history would make sense in this discussion if computing were the only path to understanding the brain.

Ancients used to think that thinking happened in the gut and recently the microbiome pathway for describing thought has re-emerged. Both the gut pathway and the computational stream could be wrong.

Seeing the brain as a computational device will run out of juice, just as revelation has.


There's one minor difference between past models of the brain and deep learning: deep learning can actually perform difficult and useful cognitive tasks that cannot be accomplished by any other means.


Computers beat chess by grinding out the answers, something previously unattainable for humankind. Computers can now beat chess, go, and StarCraft with curve-fitting ML programs.
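
For contrast, "grinding out the answers" is essentially brute-force tree search. A toy minimax sketch, where the game object (moves, apply, score, is_terminal) is a hypothetical interface standing in for any two-player, perfect-information game, not any real engine:

    # Toy illustration of "grinding out the answers": plain minimax search.
    def minimax(state, depth, maximizing, game):
        if depth == 0 or game.is_terminal(state):
            return game.score(state)
        children = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
                    for m in game.moves(state))
        return max(children) if maximizing else min(children)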

The problem is that, for all this power, people still play chess, go, and StarCraft, and we still don't know how their brains work.


Out of curiosity, do you believe in the concept of a soul?


See my post in another thread re Elman et al. Rethinking Innateness (https://mitpress.mit.edu/books/rethinking-innateness).


Deep learning?


Number 6 is not a myth!!!


Can you explain why?

