
I appreciate you're running the numbers to extrapolate this approach, but I just wanted to note that this particular figure is neither an upper bound nor a lower bound for actually storing the "state of a single human brain". Assuming the intent is to store the amount of information needed to essentially "upload" the mind into a computer emulation, we may not yet have all the details we need from this kind of scanning, but once we do, we will likely discover that a huge portion of it is redundant.
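
(For scale, here's a rough back-of-envelope sketch in Python. The ~1.4 PB per mm³ figure is the one commonly reported for the recent cubic-millimetre cortex EM reconstruction, and the ~1.2 million mm³ brain volume is an approximation; neither number comes from this thread.)

```python
# Back-of-envelope raw storage estimate for a hypothetical whole-brain EM dataset.
# Assumptions, not established facts: ~1.4 PB per mm^3 (as widely reported for the
# ~1 mm^3 human cortex reconstruction) and ~1.2e6 mm^3 of adult human brain volume.
PB_PER_MM3 = 1.4          # petabytes per cubic millimetre (assumed)
BRAIN_VOLUME_MM3 = 1.2e6  # approximate adult human brain volume (assumed)

raw_petabytes = PB_PER_MM3 * BRAIN_VOLUME_MM3
print(f"Raw, uncompressed estimate: {raw_petabytes:,.0f} PB (~{raw_petabytes / 1e6:.1f} ZB)")
# If a large fraction of that state turns out to be redundant, the amount an
# actual "upload" needs could be orders of magnitude smaller.
```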

In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create "uploaded intelligences" sometime over the next decade. Lena [0] tells of the first successfully uploaded scan taking place in 2031, and I'm concerned that reality won't be far off.

[0] https://qntm.org/mmacevedo



> In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create "uploaded intelligences" sometime over the next decade.

They don't even know how a single neuron works yet. There is complexity and computation at many scales, distributed throughout the neuron and other cell types (e.g. astrocytes), and they keep discovering more.

They just recently (in the last few years) found that dendrites have local spiking and non-linear computation prior to forwarding the signal to the soma. They couldn't tell that was happening previously because the equipment couldn't detect the activity.

They discovered that astrocytes don't just have local calcium wave signaling (local = within the extensions of the cell); they also forward calcium waves to the soma, which integrates that information much like a neuron's soma integrates electrical signals.

Single dendrites can detect patterns of synaptic activity and respond with calcium and electrical signaling (i.e. when synapses fire in a particular timing sequence, a signal is forwarded to the soma).

It's really amazing how much computationally relevant complexity there is, and how much they keep adding to that knowledge each year. (I have a file of notes with about 2,000 lines of interesting factoids like these that I've been accumulating as I read.)
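
To make the dendrite point concrete, here's a toy sketch of that "two-layer" picture: each dendritic branch applies its own nonlinearity to its synaptic inputs before the soma integrates the branch outputs. It's purely illustrative (made-up weights, a generic sigmoid), not a model from any particular paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_neuron(synaptic_inputs, branch_weights, soma_weights, threshold=0.5):
    """Toy 'two-layer' point neuron: each dendritic branch performs a local
    nonlinear computation on its own synapses, then the soma integrates the
    branch outputs and thresholds the result. Illustrative only."""
    # Local, per-branch nonlinear integration (cf. dendritic spikes).
    branch_out = sigmoid(np.array([w @ x for w, x in zip(branch_weights, synaptic_inputs)]))
    # Soma sums the branch signals and decides whether to fire.
    soma_drive = soma_weights @ branch_out
    return soma_drive > threshold, soma_drive

# Example: 3 branches with 4 synapses each, arbitrary (made-up) weights.
rng = np.random.default_rng(0)
synapses = [rng.random(4) for _ in range(3)]
branch_w = [rng.normal(size=4) for _ in range(3)]
soma_w = rng.random(3)
fired, drive = two_layer_neuron(synapses, branch_w, soma_w)
print(fired, round(float(drive), 3))
```

A classic point-neuron model would collapse all of that into a single weighted sum, which is part of why "simulating a neuron" keeps turning out to be a moving target.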


> we will likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is very good, it is likely not possible to safely discard any of its state, or even to implement any kind of meaningful error detection if you do discard some of it.

Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughably simple (in comparison) AI models work [0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...


We are nowhere near whole-human-brain volume EM. The next major milestone in the field is a whole mouse brain in the next 5-10 years, which is possible but ambitious.


What am I missing? Assuming exponential growth in capability, that actually sounds very on track. If we can get from 1 cubic millimeter to a whole mouse brain in 5-10 years, why should it take more than a few extra years to scale that to a human brain?
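
To make the extrapolation explicit, here's the arithmetic under a constant-doubling-time assumption (the volumes and the 5-10 year window are rough, and the exponential assumption itself is doing all the work):

```python
import math

# Rough volumes (assumptions, order-of-magnitude only).
CURRENT_MM3 = 1.0        # the existing ~1 cubic-millimetre dataset
MOUSE_BRAIN_MM3 = 5.0e2  # ~500 mm^3
HUMAN_BRAIN_MM3 = 1.2e6  # ~1.2 million mm^3

years_to_mouse = 7.5  # midpoint of the "5-10 years" estimate above

# Doubling time implied by reaching a mouse brain on that schedule.
doublings_to_mouse = math.log2(MOUSE_BRAIN_MM3 / CURRENT_MM3)            # ~9.0
doubling_time = years_to_mouse / doublings_to_mouse

# Further doublings needed to go from mouse to human, at the same rate.
doublings_mouse_to_human = math.log2(HUMAN_BRAIN_MM3 / MOUSE_BRAIN_MM3)  # ~11.2
years_mouse_to_human = doublings_mouse_to_human * doubling_time

print(f"doubling time ~ {doubling_time:.2f} years")
print(f"mouse -> human ~ {years_mouse_to_human:.1f} additional years")
```

Under those assumptions the mouse-to-human step takes about nine more years.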


Assuming exponential growth in capability is a big assumption!




