Consciousness Might Hide in Our Brain's Electric Fields (scientificamerican.com)
40 points by thomasjudge on Dec 2, 2024 | 36 comments


Consciousness is clearly hiding in the smells. Scientists just haven’t properly looked at the olfactory chemistry happening in the air around us all the time. The neurons do the raw computation, but the subtle chemical exchanges and releases are what actually drives cognitive qualia and the sense of smelf. This also explains why smell is so important to memory.


"sense of smelf". underappreciated comment.


This is true. My friend lost her sense of smell for a year due to covid and became a philosophical zombie during that period in spite of outwardly seeming normal.


no, consciousness is in proprioception, that's why we store memories spatially!


There is no indication of EM-field involvement in biological functions. The observation of EM fields wherever voltages spike and currents flow seems to be another case of the meta-scientific phenomenon of researchers in one discipline rediscovering the results of their colleagues from a neighbouring discipline. In this case, biologists "discovering" Maxwell's equations.


I probably misunderstand your comment, but electric potentials play many roles in biological functions (for example, the proton gradient). As for magnetism, it's certainly speculated that some migratory animals can detect the Earth's magnetic field.


You can induce an action potential with an external EM field. That's the entire basis for transcranial magnetic stimulation.
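
For a picture of what "an external field nudging a neuron over threshold" could mean, here is a toy leaky integrate-and-fire sketch in Python. Every constant is invented for illustration; this is not a model of TMS or of any real neuron.

    # Toy leaky integrate-and-fire neuron; all constants are invented.
    # An extra "external field" drive term is enough to push the membrane
    # potential past threshold and produce spikes.
    dt, tau = 0.1, 10.0                              # ms
    v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0  # mV
    v = v_rest
    field_drive = 2.0                                # mV/ms, switched on at t = 50 ms

    for step in range(2000):                         # simulate 200 ms
        t = step * dt
        i_ext = field_drive if t >= 50.0 else 0.0
        v += dt * ((v_rest - v) / tau + i_ext)       # dv/dt = (v_rest - v)/tau + i_ext
        if v >= v_thresh:
            print(f"spike at t = {t:.1f} ms")
            v = v_reset

With the drive off, the neuron sits at rest; with it on, it crosses threshold repeatedly. Whether real ephaptic fields are strong enough to do this in vivo is a separate question.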


A rediscovery is also a discovery. No need for the quotes here.

EDIT: Finding more evidence of convergence between scientific fields is also worthwhile. (Though the delta is very small at this point.)


This is interesting, but if the EM field is involved in cognition, I wonder whether it just moves the substrate from neuron spikes to that field. Additionally, I don't think it would offer anything new in terms of explaining the hard problem of consciousness.

I think it's premature to dismiss figuring out the neural code architecture, for which there is a lot of evidence that it is the probable cause (mainly from brain damage and its correlations), but it's good that they are looking elsewhere.


What I see in this article is something like: ephaptic effects in neurons are mysterious, consciousness is also mysterious, so maybe ephaptic effects can explain consciousness.

Ephaptic coupling[1] sounds cool and interesting, but the article did not convince me that it is linked to consciousness.

The consciousness angle is not a hook for me, and I would rather it not be in the article at all.

[1] https://en.wikipedia.org/wiki/Ephaptic_coupling


I post this because I think the current neural-net hegemony of AI research underappreciates the importance of consciousness and implicitly assumes that it can just scale its way to AGI.


I think it's a mistake to conflate consciousness with AGI and vice versa. It's entirely possible that you can have AGI without anyone being "home", and conversely it's also possible that there is subjective experience going on in much simpler systems. We just don't know, and maybe we never can.

But regardless, I think it's easy to imagine that there could be a machine that is generally intelligent, or super-generally intelligent, but doesn't experience.


It's equally easy to imagine a machine that can't be generally intelligent because it can't experience.

In other words, just because you can imagine it, doesn't mean you get to treat your assumptions as fact.

What we know right now is that we don't have AGI. Therefore, we currently don't know how to achieve AGI. While it is possible that scaling current methods and models will get us there, it is also possible that it never gets us there.

And when you consider that we see examples of general intelligence everywhere, in other people and animals, operating at far less power than it takes to train these models, it's not out of hand to say that we are possibly barking up the wrong tree.

Then there is the matter of actually defining general intelligence. It may also be the definition of consciousness, or at least require it. But currently, there is no mutually agreed upon definition of "general intelligence". Some people try to define it as "has a lot of knowledge". In which case, a hard drive can be considered intelligent. Or even a book. But people generally accept neither as intelligent. So "knowledgeable" is not a fitting definition. Something with intelligence must be able to act on its knowledge.

And I think that's where consciousness would come into play, if it does. It could be that consciousness is needed to direct knowledge. And one's ability to direct that knowledge is what we would consider intelligence.

Which could be why LLMs plateau. They have no internal direction. They are pure knowledge without direction. We supply the direction. And we must navigate the nodes to find the knowledge we can use. LLMs can't really tell truth from falsehoods, we do that. Just like we do internally. We discard untrue things, or things that aren't quite what we were thinking, or some other filter. LLMs just expose that process because now part of our knowledge is contained in the LLM.

So, just because it's easy to imagine, it doesn't mean that it's possible.


>Then there is the matter of actually defining general intelligence. It may also be the definition of consciousness, or at least require it. But currently, there is no mutually agreed upon definition of "general intelligence".

Here lies the problem. We should have a rule that any time we discuss AGI, we preface the discussion with whatever definition we choose to operate on. Otherwise, these discussions will inevitably devolve into people talking past each other, because everyone has a different default definition of AGI, even within the SF AI scene.

If you ask Yann LeCun, he'll say that no LLM system is even close to being generally intelligent, and that the best LLMs are still dumber than a cat.

If you ask Sam Altman, he'll say that AGI = an AI system that can perform any task as well as the average human or better.

If you ask Dario Amodei, he'll say that he doesn't like that term, mostly because by his original definition AGI is already here, since AGI = AI that is meant to do any general task, as opposed to specialized AI (e.g. AlphaGo).


The definitions are one of the major sticking points.

We don't have good, clear definitions of either intelligence or consciousness.

They need to be generally agreeable: include everything we accept as intelligent or conscious, and exclude everything we accept as not intelligent or not conscious.


> It's equally easy to imagine a machine that can't be generally intelligent because it can't experience.

I agree with this. I was just pointing out that the parent comment:

>I post this because I think the current neural-net hegemony of AI research underappreciates the importance of consciousness and implicitly assumes that it can just scale its way to AGI

assumes that you need consciousness for AGI, but we actually don't know if that's true.


It's a better bet that it's a requirement, however.

The only known examples we have of general intelligence come with consciousness.

Not to mention, he's positing that AI researchers are assuming that consciousness is unnecessary and he's saying he disagrees with that position. So saying that he could be wrong is really just circling back to what AI researchers are assuming. Or basically, disagreeing with his disagreement.

I would expect either some sort of exposition on why or a third option being presented.

I also was not a fan of the position of "I can imagine this therefore it is a valid option". I don't accept the logic that something is necessarily a possibility simply because we can imagine that it can happen.


It's also true that the only known examples of general intelligence are embodied in meat machines. Is this a prerequisite for AGI? Again, we don't know. I think probably not, but some people think it is, and the debate is unresolved.

Similarly, my argument is that it's premature to assume that consciousness is a prerequisite for AGI.

Finally, I don't think there's anything invalid about disagreeing with someone's disagreement, and then stating the reason why. In fact you also did this in response to my comment!


Until you can show me a counterexample, the null hypothesis is that intelligence requires consciousness. The two sides are not equally weighted. You need to come with something.

In your original response, you stated a reason why you disagree. And I pointed out why that reason is not good. It's so low as to not even be a threshold.

Other than that, all you've done is reiterate the disagreement.

I've given a counter-hypothetical to point out why your reasoning is flawed. I've illustrated reasons why the discussion is complicated. I'm not disagreeing with your disagreement, I'm pointing out not only why I disagree with your premise, but where I believe your premise doesn't hold.

So far, the only thing you've offered in response is "Well, maybe it's not required". Why do you believe that? Beyond, "We don't know, and I can imagine it". And even then, I'd treat your imagination with a little skepticism. Just because you can construct the sentence "AGI does not require consciousness" does not mean you can actually conceptualize what that means.

Mostly because it would require defining both AGI and consciousness in a mutually agreed upon way, and in a way that would definitely include everything we accept as conscious and exclude everything we accept as lacking consciousness.


To be blunt, the null hypothesis is that consciousness is not required for AGI.

One example of a correlation between AGI and consciousness without any theory (let alone a testable theory) for why there would be causation does not constitute evidence.


Beyond underappreciated, it's essentially being ignored, at least as far as what is publicly available goes. I think the explanation for this partially or wholly lies in the goals and values of the people who research these systems vs. the goals and values of natural brains. Generally speaking, AI researchers are trying to leverage existing hardware and tooling to demonstrate performance increases on a set of benchmarks. AI founders are trying to turn the research into products that attract investors. Established tech companies are trying to make money. None of these goals and values are anything like what natural brains are trying to do: survive.

If AI researchers, founders and tech companies thought artificial consciousness was the fastest path to any of those goals, they'd put resources into that. But they probably think it's an intractable problem, so it gets passed over for things that are likely to have a real payoff in the short term.


> what natural brains are trying to do

and so much more! "survive" does not acknowledge the inspirational, right?


Surviving is impossible in the long term. If anything, it's reproduction?


As far as I'm aware there's no scientific consensus on what consciousness is, how to measure it, what exact mechanisms are in play, or if it even exists at all.

Before bothering with the topic of consciousness in AI and AGI, we really need to nail down that definition for humans; only then does it make sense to squabble over its presence or absence in something else.


At this point, even if it turned out the brain operates on subatomic mechanical gears too small for us to actually see, it would not in any way affect the reasoning that the current approach is going to scale to AGI. The reasoning is not based on the closeness of LLMs to the operating mechanics of the human brain, but on their observed capabilities.


But how far do you have to scale LLMs to get to a conscious system? Would it even be practical?


> But how far do you have to scale LLMs to get to a conscious system?

Well, that's relatively simple to see. Run a Turing test on models of different sizes and extrapolate. That will give you some approximation of the "distance".

> Would it even be practical?

You can make a decent educated guess by doing the above.
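
Something like this back-of-the-envelope sketch is what I have in mind; the pass rates and parameter counts are purely hypothetical and the logistic fit is an arbitrary choice:

    # Hypothetical Turing-test "fool rates" vs. model size, fitted with a
    # logistic curve and extrapolated. Every number here is invented.
    import numpy as np
    from scipy.optimize import curve_fit

    params = np.array([1e9, 1e10, 1e11, 1e12])       # parameter counts (made up)
    fool_rate = np.array([0.05, 0.15, 0.35, 0.55])   # judge fool rates (made up)

    def logistic(log_n, k, x0):
        return 1.0 / (1.0 + np.exp(-k * (log_n - x0)))

    (k, x0), _ = curve_fit(logistic, np.log10(params), fool_rate, p0=(1.0, 12.0))

    target = 0.9  # where would the fitted curve cross a 90% fool rate?
    log_n_needed = x0 + np.log(target / (1.0 - target)) / k
    print(f"extrapolated size for {target:.0%} fool rate: ~10^{log_n_needed:.1f} params")

Whether a Turing test measures consciousness rather than conversational competence is a separate question, of course.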


A conscious system is aware of itself. ChatGPT is aware of itself; that's the first thing they tell it in the system prompt.


Chat GPT is not aware. It has no self.

It produces token strings that are statistically likely to be a match for any given prompt based on text it's been fed. But the reason it gets things wrong is not the same reason we get things wrong. It gets things wrong because the probabilities matched but the actual meaning did not.

It sounds great that RAG stands for Recursively Assembled Grammar (in a discussion about LLMs), only it doesn't. The LLM generated text that even explained RAG as a recursive grammar applied in the context of LLM usage, but RAG actually stands for Retrieval-Augmented Generation. When I pointed this out, the system output "Oh, of course, that is also a thing and it means..." But the system had no awareness. Not of the meaning of the text it was producing, and not of any sort of self.
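
To make "the probabilities matched but the actual meaning did not" concrete, here is a toy picture of how a completion gets picked; the distribution is invented and nothing like a real model's:

    # Toy picture of sampling a completion purely by (invented) probability,
    # with no check against what the expansion actually means.
    import random

    # Hypothetical probabilities for completing "RAG stands for ..."
    expansions = {
        "Retrieval-Augmented Generation": 0.55,  # correct
        "Recursively Assembled Grammar":  0.30,  # fluent but wrong
        "Random Access Grammar":          0.15,  # fluent but wrong
    }

    choice = random.choices(list(expansions), weights=list(expansions.values()))[0]
    print(choice)  # sometimes the fluent-but-wrong expansion comes out

Nothing in that selection step consults meaning; fluency and truth are decided by the same weights.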


I can tell an Etch-a-Sketch that it's aware of itself... that doesn't mean it actually is.


You probably claim to be aware too.


Michael Levin has been doing research on activating certain types of cell growth based on certain field frequencies. I wonder if this ties together. https://youtu.be/p3lsYlod5OU?si=BY6DRnqUZtc12wFH

I wonder if neurons are actually the hard drive and ephaptic fields are the software, or the network the software runs on.


I'm struggling to understand the leap from ephaptic neurons to the explanation or location of consciousness. This assumes that consciousness can be located in the brain.


That should be easy to test. Not trivially easy, but still.


Are they (the writers or scientists) trying to find the 21 grams?


No, because that's not a thing. Scientists are trying to find the neural correlates of consciousness based on entirely physical, detectable, and scientifically understandable concepts, not pseudospiritualism (https://en.wikipedia.org/wiki/21_grams_experiment) amplified by poor journalism.



