What is it like to have a brain? Ways of looking at consciousness (lareviewofbooks.org)
138 points by Hooke on Oct 14, 2022 | 174 comments


Aeon just had a great article on consciousness, "Seeing and somethingness": https://aeon.co/essays/how-blindsight-answers-the-hard-probl...

It argues that consciousness evolved out of sensation, where we developed an "inner self" to predict how sensations would affect us, and it's that inner self that became our consciousness.

Don't miss the comments section; the author answers a lot of questions in there.


Like Dennett's book "Consciousness Explained", the Aeon article falls into the category of explaining what we are conscious of, not how it is possible to be conscious of anything at all. It does not really tackle Chalmers' "hard problem of consciousness", despite the subtitle.


I would be interested in your response to the following thought experiment:

After years of heroic work and ingenious insights, along with a lot of technological progress, we "solve" the "easy" problem of consciousness in the following (I admit implausibly in any foreseeable future) strong sense:

(Note: I'm going into quite a lot of detail because I think that when people say things like "understanding how the machinery of consciousness works would not tell us anything about how it is possible to be truly conscious of anything at all" they are commonly underestimating what it would actually mean to understand how the machinery of consciousness works.)

1. There is a scanning device. You can strap yourself into this for half an hour, during which time it shows you images, plays you sounds, asks you to think particular kinds of thoughts, etc., all the while watching all your neurons, how they connect, which ones fire when under what circumstances, etc. It tries to model your peripheral as well as central nervous system, so it has a pretty good model of how all the bits of your body connect to your brain, and of how those bits of body actually operate.

2. There is a simulator. It can, in something approximating real time, pretty much duplicate the operation of a brain that has been scanned using the scanning device. It also has enough simulation of the body the brain is part of that it can e.g. provide the simulated brain with fairly realistic sensory inputs, and respond fairly realistically to its motor outputs. There's a UI that lets you see and hear what the simulated person is doing.

3. Researchers have figured out pretty much everything about the architecture of the brain, and it turns out to be reasonably modular, and they've built into the simulator a UI for looking at the structure, so that you can take a running simulation and explore it top-down or bottom-up or middle-out, either from the point of view of brain structure or that of cognition, perception, etc.

4. So, for instance, you can do the following. Inside the simulation, arrange for something fairly striking to happen to the simulated person. E.g., they're having a conversation with a friend, and the friend suddenly kicks them painfully in the shin. Some time passes and then (still, for the avoidance of doubt, in the simulation) they are asked about that experience, and they say (as the "real" person would) things like "I felt a sharp pain in my leg, and I felt surprised and also a bit betrayed. I trust that person less now." And you can watch the simulation at whatever level of abstraction you like, and observe the brain mechanisms that make all that happen.

E.g., when they get kicked you can see the flow of neuron-activation from the place that's kicked, up the spinal cord, into the brain; the system can tell you "these neurons are active whenever the subject feels physical pain, and sometimes when they feel emotional distress, and sometimes when they remember being in pain" and "this cascade of visual processing is identifying the face in front of them as that of Joe Blorfle, and you can see here how these neurons associated with Joe Blorfle are firing while the conversation is happening, and when the pain happens you can see how these connections between the Joe Blorfle neurons and the pain neurons get strengthened a bit, and later on when the subject is asked about Joe Blorfle and the Joe Blorfle neurons fire, so do the pain ones. And you can see this big chunk of neural machinery here is making records of what happened so that the subject can remember it later; here's how the order in which things happen is represented, and here's how memories get linked up to the people and things and experiences involved, etc. And when he's asked about Joe Blorfle, you can see these bits of language-processing brain tissue are active. These bits here are taking input from the ears and identifying syllable boundaries, and these bits are identifying good candidates for the syllable being heard right now, and these other bits are linking together nearby syllables looking for plausible words, with plausibility being influenced by what notions the subject is attending to, and these other bits are putting together something that turns out to be rather like a parse tree, and, and, and ...". (A toy sketch of this "watch at a level of abstraction" idea follows point 5 below.)

5. That is: the linkage -- at least in terms of actual physical goings-on within the brain -- between being kicked in the shin by Joe Blorfle on Thursday, and expressing resentment when asked about Joe Blorfle on Saturday, is being accurately simulated, and the structure of what's being simulated is understood well enough that you can see its "moving parts" at higher or lower levels of abstraction.
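
(To make "watch the simulation at whatever level of abstraction you like" a bit more concrete, here's a minimal toy sketch in Python. It is nothing remotely like a real brain simulator, and every name in it, "pain", "face:joe_blorfle", the thresholds, is invented for illustration; the only point is that raw unit activity can be summarised per functional label, which is the kind of view the imagined UI would give you.)

    # Toy illustration only: a few leaky integrate-and-fire units carrying
    # functional tags, so activity can be reported per labelled group.
    # All tags ("pain", "face:joe_blorfle") are invented for this example.
    import random
    from collections import defaultdict

    class Neuron:
        def __init__(self, tag, threshold=1.0, leak=0.9):
            self.tag = tag          # label attached by the "analysis UI"
            self.threshold = threshold
            self.leak = leak
            self.v = 0.0            # membrane potential

        def step(self, current):
            self.v = self.v * self.leak + current
            if self.v >= self.threshold:
                self.v = 0.0        # reset after a spike
                return True
            return False

    neurons = ([Neuron("pain") for _ in range(5)]
               + [Neuron("face:joe_blorfle") for _ in range(5)])

    def observe(stimulus):
        """Advance one step and summarise spikes per functional tag."""
        spikes = defaultdict(int)
        for n in neurons:
            drive = stimulus.get(n.tag, 0.0) + random.gauss(0, 0.05)
            if n.step(drive):
                spikes[n.tag] += 1
        return dict(spikes)

    # Low-level view: individual spikes. High-level view: "the pain
    # neurons are active right now".
    print(observe({"pain": 1.2}))              # e.g. {'pain': 5}
    print(observe({"face:joe_blorfle": 1.2}))  # e.g. {'face:joe_blorfle': 5}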

OK, so that's the scenario. I reiterate that it would be wildly optimistic to expect anything like this any time soon, but so far as I know nothing in it is impossible in principle.

Question 1: Do you agree that something along these lines is possible in principle?

[EDITED to add:] For the avoidance of doubt, of course it might well turn out that some of the analysis has to be done in terms not of particular neural "circuits" but e.g. of particular patterns of neural activation. (Consider a computer running a chess-playing program. You can't point to any part of its hardware and say "that bit is computing king safety", but you can explain what processes it goes through that compute king safety and how they relate to the hardware and its states. Similar things may happen in the brain. Or very different things that likewise mean that particular bits of computation aren't always done by specific bits of brain "hardware".)
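
(A toy version of the chess analogy, with a made-up evaluation term rather than any real engine's code: "computing king safety" here is a pattern of computation over generic data structures, not a component you could point at in the hardware.)

    # Invented "king safety" term for illustration: the computation is a
    # process spread over generic hardware; no single part of the machine
    # "is" the king-safety module.
    def king_safety(board, color):
        """Count friendly pawns adjacent to the king (a crude shield score)."""
        kx, ky = board["kings"][color]
        return sum(1 for (px, py) in board["pawns"][color]
                   if abs(px - kx) <= 1 and abs(py - ky) <= 1)

    board = {
        "kings": {"white": (6, 0)},
        "pawns": {"white": [(5, 1), (6, 1), (7, 1), (3, 3)]},
    }
    print(king_safety(board, "white"))  # 3: three pawns shield the king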

Question 2: If it happened, would you think there is still a "hard problem" left unsolved?

Question 3: If you think there would still be a "hard problem" left unsolved, is that because you think someone in this scenario could imagine all the machinery revealed by the simulator operating perfectly without any actual qualia?

(My answers, for reference: I think this is possible in principle. I think there would be no "hard problem" left, which makes me disinclined to believe that even now there is a "hard problem" that's as completely separate from the "easy" problem of "just" explaining how everything works as e.g. Chalmers suggests. I think that anyone who thinks they can imagine all the processes that give rise to (e.g.) a philosopher saying "I know how it feels for me to experience being kicked in the shin, and I think no mere simulation could truly capture that", in full detail, without any qualia being present, is simply fooling themselves, in the same way as I would be fooling myself if I said "I can imagine my computer doing all the things it does, exactly as it does, without any actual electrons being present".)


There's too much here to really reply to, but I will say this:

1. The existence of GPT-3 and its cousins, I think, comes close to, if not actually proving, then severely tilting the scale in favor of "you do not need consciousness or qualia to participate in relatively human conversation and self-reporting". This means that scenarios where you interrogate a system to see what it thinks are going to rapidly become less and less interesting and less and less convincing as an indication of anything.

2. The problem with qualia is that their subjective nature (coupled with the self-reporting dilemma mentioned in 1. above) means that it is essentially impossible to know whether a given system/individual is experiencing them or not. Do I think that the system you've described could exist without qualia? I do. Do I think it could report qualia without actually having them? I do. Do I think it might actually have qualia? Yes, possibly, but with a lot of caveats.


The point of my thought experiment isn't "a computer does something relatively human", it's "a computer does the same things a human does at the level of neuron activations, leading to the same things a human does at the level of actual actions".

And the point isn't "can you deny that the simulation has qualia?" (though I do find denying that pretty implausible); it's that it feels pretty clear to me that having the level of understanding that would be demonstrated by such a simulation-plus-analysis would in fact constitute a solution to the "hard problem".

(Of course, that would be entirely irrelevant for anyone who believes that my scenario is impossible in principle. For instance, if someone thinks that humans don't think with their brains but with their immaterial souls, they should predict that all attempts to do the sort of thing I describe will end in failure: you might get the machine to do exactly what the brain-meat does, but that won't lead to human-like behaviour because human-like behaviour is enabled by human-like souls which the simulator doesn't have.)

My maybe-uncharitable view is that the "hard problem" is "hard" because it is not really a problem so much as it is a decision to refuse ever to admit that we understand. No matter how detailed and complete an explanation we might have of human consciousness, you can always say "nope, that doesn't explain why there's anything it feels like for me to eat a perfectly ripe peach". Even if (as in my fanciful scenario) that explanation enables us to trace every detail of the processes that lead from eating the peach to saying "mmmm, that's delicious", to wanting to buy more peaches in future, to rhapsodizing about how no mechanical explanation could ever do justice to the experience, etc. Even if (again, as in my fanciful scenario) the explanation lets us identify (down to the level of neuron-activations) what is common between the experience of eating a peach and the experience of eating a plum, what is different between the experience of eating a ripe peach and the experience of eating a not-so-ripe one, what is shared by all experiences of seeing something a bright scarlet colour, and so forth.

To me, this all seems like saying that gravity is ineffable, that although we can write down Newton's or Einstein's equations and compute exactly what happens when two massive bodies are near one another, there's still always something left unexplained. I can imagine, I say, things that behave according to the same equations but don't really have mass: they might instead have not actual mass but some mere facsimile of the real thing. Or that chess is ineffable, that although a machine can choose chess moves (and beat grandmasters) it isn't really playing chess but doing some mere facsimile of chess-playing. And one can go through the same manoeuvre with any concept at all. Consider the Hard Problem of Trousers: we may be able to analyse the way in which pieces of fabric are made and shaped and put together to make trousers, but that still leaves completely unanswered the question of why the resulting object is a pair of trousers. After all, I insist, I can imagine taking exactly the same pieces of fabric and putting them together the same way to make something that could be worn like trousers but that isn't really a pair of trousers...
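
(For what it's worth, the gravity case really is that mechanical; a few lines suffice to compute what two massive bodies do to each other. This is just Newton's F = G * m1 * m2 / r^2 with approximate Earth-Moon numbers.)

    # Newton's law of gravitation: the equation fully determines the
    # behaviour, with nothing left over to explain.
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gravitational_force(m1, m2, r):
        return G * m1 * m2 / r**2

    # Approximate Earth and Moon masses (kg) and separation (m):
    print(gravitational_force(5.97e24, 7.35e22, 3.84e8))  # ~1.98e20 N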


Wikipedia states that "according to a 2020 PhilPapers survey, 29.72% of philosophers surveyed believe that the hard problem does not exist, while 62.42% of philosophers surveyed believe that the hard problem is a genuine problem".

The boring answer to your question is that, given your thought-experiment scenario, the numbers would probably shift so that the philosophers who endorse the hard problem of consciousness become the minority rather than the majority. If everyone had a seemingly conscious A.I. best friend, as in the science fiction stories, the numbers would continue to go down, but you wouldn't be able to definitively settle the issue.

Philosophers can't even agree whether the biblical God is running the universe behind the scenes, which would potentially have unaddressed implications for your thought experiment scenario.

The survey results for religion are:

God: theism or atheism?

- Accept or lean toward atheism: 678 / 931 (72.8%)

- Accept or lean toward theism: 136 / 931 (14.6%)

- Other: 117 / 931 (12.6%)


Your answers all suggest you think consciousness is computational: if we could simulate the complex computations of the brain, there would be nothing else there. I’m of the opinion that consciousness is not computational but a fundamental property of the universe that we can’t explain with current physics. I believe this because I can’t adequately explain what I experience otherwise.


As it happens I do think consciousness is computational, but that isn't the point my thought experiment is trying to make.

Rather, I was saying: if it turned out that consciousness is computational, or more precisely implemented by something we can model computationally, and if we understood its mechanisms in enough detail to do the sort of simulation-and-explanation in my thought experiment -- then, I claim, it would be difficult to maintain that there is really a separate "hard problem" of consciousness that remains untouched no matter how thoroughly we solve the "easy problem" of explaining the physical processes by which it works. If I'm right about that, I think it weakens the arguments used to suggest that here in the real world there is a separate "hard problem" that we should be very perplexed by.

(There are actually two different scenarios in which consciousness might be non-computational, and they have different implications for the thought experiment. One is where the "mechanisms" of consciousness are non-computational. In this scenario, my thought experiment could never come true: the world isn't put together in the right way for it to work, because that computer simulation will never produce the same behaviour as actual conscious humans exhibit. The other is where the mechanisms are all computable, and everything in the thought experiment goes through perfectly OK, but there's some further Essence Of Consciousness that we have and our simulations don't, without which we get all the same behaviours, right up to writing books about the nature of phenomenal consciousness or poems about the actual phenomena, but "no one's home" -- there are no real experiences, only behaviours that falsely report experiences. I think the second position is held by many people who worry about "the hard problem", and I don't think it really makes sense, but again that isn't quite the point I was trying to make, though it is closely related.)


This is really interesting; I'm keen to learn more about it. Is this an opinion specific to your own understanding, or a belief held by a broader group of academics?


Yes, this is a belief held by a broader group of people, but I haven't read up on it much myself. Some quick googles for "Quantum consciousness" and "consciousness as a fundamental field" brought up some of the arguments I remembered hearing about:

[0] https://en.wikipedia.org/wiki/Quantum_mind

[1] https://www.scienceandnonduality.com/video/consciousness-as-...

As I understand it, a common form of this framework is basically panpsychism, but more science-y and less metaphysics-y. Some people believe that consciousness may arise from a fundamental field in the universe just like how there's an electromagnetic field or a Higgs field. I vaguely remember hearing one theory that posits intelligent life forms are to the field of consciousness what photons/electrons are to the electromagnetic field — that is to say, spikes or clusters of energy in the field that stand out drastically against the background noise.


I will take a shot:

> Question 1: Do you agree that something along these lines is possible in principle?

Seems reasonable.

> Question 2: If it happened, would you think there is still a "hard problem" left unsolved?

Yes.

> Question 3: If you think there would still be a "hard problem" left unsolved, is that because you think someone in this scenario could imagine all the machinery revealed by the simulator operating perfectly without any actual qualia?

If I see an amazing demo of something, and then they try to sell me something that is capable of only a small subset of what was just shown in the demo, I am going to balk. I suspect that if the topic were almost anything other than consciousness (which is one of those topics that seems to cause the mind to behave anomalously, for some reason), most people would agree.

EDIT: thinking about it more....if you could start to make amazingly accurate and precise predictions about how the mind is going to react under a wide variety of scenarios, I would be much more impressed...although, my intuition is that this is far less complex than it may seem for normal behaviors (fine-grained precision is where I'd have to surrender my skepticism).


I don't understand the relevance of your paragraph beginning "If I see an amazing demo" to the question at hand. I assume it's meant to be an analogy that justifies your answer to Q2, or something of the kind, but I don't quite understand what's being analogized to what; could you give some more details?

For the avoidance of doubt, the intention of the thought experiment is that it does indeed make accurate and precise predictions, so that if you give the real person and the simulated person the same experiences, they respond in the same way.

Of course in practice, even with all the technological advances that would be required to make the thought experiment a reality, you'd never be able to make the experiences, or the starting states, exactly identical. So in cases where how you act is exquisitely sensitive to the details of your starting state or the experiences you have, simulation and reality might diverge. My guess is that many things aren't so delicate, and I personally would be pretty impressed by a me-simulation that consistently said and did things that are the sort of things I would be likely to say and do.


> I don't understand the relevance of your paragraph beginning "If I see an amazing demo" to the question at hand. I assume it's meant to be an analogy that justifies your answer to Q2, or something of the kind, but I don't quite understand what's being analogized to what; could you give some more details?

I'm assuming we're ultimately talking about the capabilities of the human mind; comprehensively, its capabilities are significant. But then, we also know it to be incredibly flawed, which should be kept in mind during such considerations.

> For the avoidance of doubt, the intention of the thought experiment is that it does indeed make accurate and precise predictions, so that if you give the real person and the simulated person the same experiences, they respond in the same way.

So something kinda like this (way more complex likely)?:

"select distinct [phenomenon] from [human_mind_capabilities] where [type] in ('experience','response')"

I wonder how many rows that would return. I also wonder how much diversity would be in the resultset. Either way, I think one would need a pretty powerful thought experiment for even remotely comprehensive coverage.

> So in cases where how you act is exquisitely sensitive to the details of your starting state or the experiences you have, simulation and reality might diverge.

One example (singular, at least kinda):

https://en.wikipedia.org/wiki/Jeffrey_Dahmer

Another (when networked):

https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War

> My guess is that many things aren't so delicate, and I personally would be pretty impressed by a me-simulation that consistently said and did things that are the sort of things I would be likely to say and do.

I too have an intuition that there is a very rich vein of ore at some level, and that it may be accessible, were we to look for it.


One thing that surprised me about ‘A Thousand Brains: A New Theory of Intelligence’ by Jeff Hawkins was how many different types of computer simulations currently exist as approximations of, and parts of, your experiment’s steps.


How is consciousness possible?

Here's my naive answer, with just a bachelor's in neuroscience.

There are some "business requirements" as far as I know.

We need:

- space and time

- memory

- inputs

- decision making models

- attention

You need space and time because thinking is a verb: action is required, and action needs the passage of time.

You need memory in order to link past events with current ones. I'm not strictly talking about memory from earlier in the year or day, but also working memory from what you perceived a second ago. Continuity seems essential.

Inputs are required because we need a data stream to grapple onto and spark brain activity. The brain can obviously cause its own activity as well.

After that it's just a recursive ETL function via the thalamo-cortical loop [1]. The "self" doesn't necessarily need to live in this loop, but it needs access to the data stream. Inputs trigger neurons/glia that recall past events, which all get fed into a bunch of decision making models that spit out new thoughts or decisions, which cause us to act. The loop is happening so quickly and chaotically that, coupled with randomness and short term and long term memory ... you've essentially got a system to explain creativity and "free will".
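
(A crude toy of that loop, purely illustrative and with function names that are mine rather than neuroscience's: inputs trigger recall, recall plus inputs feed a decision step, and the output is written back to memory and folded into the next pass.)

    # Purely illustrative sketch of the recursive loop described above.
    import random

    memory = []  # stands in for working and long-term memory

    def recall(inputs):
        """Retrieve past events sharing features with the current input."""
        return [m for m in memory if m["cue"] == inputs["cue"]]

    def decide(inputs, recalled):
        """Stand-in 'decision model': combine input with recalled history."""
        bias = len(recalled)  # more history, stronger response
        return {"action": inputs["cue"], "strength": bias + random.random()}

    def loop(stream, steps=6):
        state = None
        for t in range(steps):
            inputs = {"cue": stream[t % len(stream)], "prior": state}
            state = decide(inputs, recall(inputs))  # output of this pass...
            memory.append({"cue": inputs["cue"], "t": t})
            print(t, state)                         # ...feeds the next one

    loop(["light", "sound", "light"])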

Really, consciousness is just an emergent property of a nervous system. It doesn't seem nearly as magical through this lens but that's what makes it so convincing to me.

[1] https://en.wikipedia.org/wiki/Cortico-basal_ganglia-thalamo-...


What you are describing regarding inputs, brain activity on a data stream, and how it's just a recursive ETL function and such, is describing behavior and computation, though.

You have some input that goes into the brain, and produces neural firings that trigger actuators to respond to that stimulus

What I don't understand is why we are conscious beings instead of extremely complex bio-computation machines exhibiting memory, inputs, and complex behaviors such as attention and decision making models, all with behavioral actuators, and all this simulating thought and intelligence but really having no entity whatsoever that's actually experiencing stuff?

I've always wondered what ultimately makes neurons and all so special that they produce consciousness. Does a rock that undergoes vibrations have some sort of consciousness associated with it, albeit much less than what is seen in a human?

So maybe consciousness is an inherent property of the universe that is in a plane beyond what the 5 senses can pick up on? But that in humans it finds a greater catalyst or something than in animals, plants, or minerals?

I dunno, what do you think? Is there anything rational in that train of thought or did I smoke too much crack again lol

TBH I take a lot of my inspiration on this from the Baha'i Faith, which teaches that science and religion are essentially in harmony (i.e. different views on the same reality). To me a lot of the Baha'i teachings are really deep and elevating (i.e. making me a better person and less selfish by helping me see something more valuable than survival tendencies)

Hopefully I can live by it though


> What I don't understand is why we are conscious beings instead of extremely complex bio-computation machines exhibiting memory, inputs, and complex behaviors such as attention and decision making models, all with behavioral actuators, and all this simulating thought and intelligence but really having no entity whatsoever that's actually experiencing stuff?

Subjective experience and the "self" are just emergent properties of the system I described, on a scale that's difficult to comprehend.

> I've always wondered what ultimately makes neurons and all so special that they produce conciousness. Does a rock that undergoes vibrations have some sort of consciousness associated with it, if not much less than what is seen in a human?

There's nothing special about neurons; they are biological systems that encode information from the external world and produce action potentials that trigger other neurons. It's also a misconception that neurons are the only things firing in the brain: glial cells also play an important role here. Further, it doesn't make sense to talk about consciousness at the neuronal level. Consciousness is an emergent property of the nervous system.

You can't point to any single molecule and call it wet, but a bunch of H2O molecules clumped together have the emergent property of wetness.

Likewise, a rock is hard, and if you use it to crack open a walnut, the relevant physics involves the entire object. We could describe the whole system at the molecular level, but that wouldn't really explain the hardness and the physical forces that emerge from the rock and allow us to crack a walnut.


> Subjective experience and the "self" are just emergent properties of the system

There are other systems with emergent properties where we can explain how the emergent properties are a function of the other properties of the system.

As of now, we can't do that with consciousness and brains/bodies.


You say on a scale difficult to comprehend. But where's the scale that's deep enough to be incomprehensible? It's usually doubted that anything sub-cell-scale is responsible for consciousness (Penrose's quantum microtubules, for example, are almost universally derided).


You're basically describing the P-Zombie thought experiment posited by David Chalmers.

I always thought it was a bad argument. If you assume consciousness is not required for some of the advanced “bio-computation” that we do, then of course it’s going to be superfluous.

You know at the very least that you feel conscious. And it feels like you are actively participating, guiding your body and mind to do cool and nuanced things everyday. So you probably need to be conscious to do what you do, so the P-Zombie just isn’t possible.

There’s some real philosophy that Chalmers is doing around feasibility when it comes to the P-Zombie argument, though that goes over my head.


Eh, I am not saying consciousness is not required for advanced bio-computation

I am saying that going from observation of behavior (computation, memory, actuation, etc...) to saying "poof! that explains consciousness" is a leap of assumption without having described the mechanism that brings about actual awareness

The P-zombie thing is pretty interesting honestly. So far on an initial read it seems to make sense to me, except that to me the idea of "non-physical" doesn't mean anything magical or otherworldly, but rather that maybe there's more to the universe than what the 5 senses interact with, and maybe someday we will discover physical artifacts in living things that manifest an attribute of consciousness like free will

For instance, quantum mechanics shows matter is intrinsically non-deterministic, although the statistics of outcomes are robust

Maybe that, or some other physics we have not discovered yet, can help show artifacts of consciousness that would not be feasible in a very complex and sophisticated classical computer?

Maybe there lies the distinction?

But again, I am not saying that such a physical artifact would cause consciousness, but rather shows that the human being allows consciousness to manifest in the human kind of like a mirror (the body) reflecting light (the conscious being, or the soul)

From Baha'i: "Know thou that the soul of man is exalted above, and is independent of all infirmities of body or mind. That a sick person showeth signs of weakness is due to the hindrances that interpose themselves between his soul and his body, for the soul itself remaineth unaffected by any bodily ailments. Consider the light of the lamp. Though an external object may interfere with its radiance, the light itself continueth to shine with undiminished power. In like manner, every malady afflicting the body of man is an impediment that preventeth the soul from manifesting its inherent might and power. When it leaveth the body, however, it will evince such ascendancy, and reveal such influence as no force on earth can equal. Every pure, every refined and sanctified soul will be endowed with tremendous power, and shall rejoice with exceeding gladness."


This doesn't really address the hard problem of consciousness. https://consc.net/papers/facing.html


> The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect.

Subjective experience is just the system I described happening extremely fast. What I describe seems simple and doesn’t explain experience, but when you acknowledge the scale of our computation, what emerges is the self.

Continuity, recursive loop between inputs, cortex responding to those inputs, and being fed through our prefrontal cortical decision making models. Then that feeds back into the system again with even more inputs in the next stack frame. It’s happening millions of times, causing bursts of brain activity. All at the same time our brains are also being molded by all this brain activity. Changing constantly.

It is absolute chaos.

It’s this recursive reduce function that gives rise to what we describe as the self and subjective experience.

We as humans are hopelessly biased here, we put consciousness on a pedestal, but I don’t think there’s a hard problem.


Going by this, what more do we need to make machines conscious, if anything?


People fundamentally disagree about whether there is anything besides the “of”. My personal introspection tells me there is only “of”, because what I perceive as my consciousness, is, by virtue of being a perception, in the end just an “of” itself. There is some sort of recursivity involved in the whole construct of consciousness, which makes it hard to get a grasp on. In some sense, consciousness is just that, being the perceptor and the perceptee at the same time. This recursivity or fixed-pointness will probably be key to a precise understanding of the whole shebang.


The argument you're making is just eliding the "hard problem".

We can trivially imagine an electronic circuit that registers different current levels when exposed to red or blue light. Nobody (that I'm aware of) suggests that there is an experience within the electronic circuit, despite the fact that it "senses" different frequencies in the electromagnetic spectrum. The circuit is qualia-free.

You, on the other hand, are qualia-full. Whether the experience you have when a red object is in front of you derives purely from your optical sensory apparatus, or from a self-reflective awareness that your brain is dealing with "red", really makes no difference to the central point: you have an experience.

We have no explanation for how there can be experiences/qualia, and possibly, because they are either extremely or completely subjective, we may never have any means of studying their existence.


This argument is not "eliding" the hard problem. This argument is saying that Chalmers' hard problem does not actually exist.

We have many explanations for what people describe as "the hard problem". But nobody who believes in "the hard problem" accepts these explanations, which have been given for decades by philosophers like Dennett.

There is no way to reconcile your view, that there IS a hard problem and that no progress has been made toward solving it, with our view, that there is no such problem, and that it does not need solving.


I’m convinced at this point having read almost every book on the topic, hundreds of threads across the web, and many an in-person debate, that individuals actually have wildly different experiences of consciousness.

For me, the hard problem is such an obvious and almost intractable thing that any folks who dismiss it clearly don’t understand it (or live an inner experience that lets them understand it).

And because I’ve never really seen anyone flip on their view of the hard problem, I’m 99% convinced it’s because they actually have a different lived conscious experience. Perhaps they process information differently, or have more or less access to their internal state.

It’s possible that folks who dismiss the hard problem actually have more conscious access to their internal state and hence don’t understand why it’s an issue.

An analogy is aphantasia (folks who don’t have an internal visual experience). Somebody with an aphantasic brain simply doesn’t have the same conscious experience as others. I feel that understanding that the hard problem exists is something rooted in a different internal experience.


It’s also extremely obvious to me we have a fundamental gap in our understanding of the universe and reality that needs to be filled to explain the hard problem. I suspect most people have fairly similar conscious experiences though, that seems like a reasonable assumption. Humans hate to not understand something, we’ve come up with crazy religions and who knows what else to explain the world around us. The reality is there’s just so much we don’t know, so much we can’t know. I think that makes most people uncomfortable.


Dennett's argument is that consciousness is an illusion. Where would such an illusion take place? I'm genuinely convinced that Dennett is a troll.


I only don't think he's a troll because the non-illusion options seem just as wild to me. I think it's fair to say we've made almost zero progress on how sensations/experience physically work, and there's not much more room to look. For one critique of the progress toward a computational theory of mind, Tim Maudlin (philosopher of physics) says we are no closer today than thinking water-and-trough computation could get us to explaining how toothaches feel.

As for how illusions could occur, and from what things, it hardly seems possible. But the alternatives are hardly better.


Total agreement on the fact that we are as clueless as it gets when it comes to explaining the emergence of experience. That's why I think that the only reasonable avenue forward is to try and take experience as a fundamental building block. We are going through our age's Copernican revolution.

The illusionism argument is ridiculous because an illusion is still experienced (the very fact we are talking about it is proof), so at best it is circular reasoning. Probably it's woo-enough to be taken seriously by some :)


The thing about experiences/qualia is that they aren't just subjective, but momentary. Any sense of permanence, continuing identity or indeed of experiences being "about" something in particular is ultimately linked to our memory, which is not part of the "hard problem" itself; it fits solidly within the structure of causal relations we usually call "reality", or just "the physical universe". So the hard problem is hard, but it's also very tightly constrained; it "only" has to explain tiny fragments of subjective experience that float in and out of existence.


Recalling a memory or thinking about the future or whatever are still and always experiences in the now. You are not getting out of it.


We had a heatwave this summer in the UK, weeks of it. I loved every moment. Thus I refute your 'momentary'. I've also had decades of pain and while it might sink lower in your perceptions, it's always there while you're awake.


The circuit you’re describing registers the external light impulses, but it doesn’t experience its own registering of those impulses.

What I’m imagining is that the registering mechanism would itself have sensors placed on its wires that measure the current levels on those wires, and have the measurements of those sensors as additional inputs into the cognition automaton. And then have sensors on the gates and wires of that automaton, which again feed as additional inputs into that same automaton. Add memory and timing delays. And then multiply all that some million times to get to the level of complexity of our inner mind, of our sensory and movement apparatus, and of our mental models.
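
(A minimal sketch of that setup, invented for illustration: the machine's own internal measurements are appended to its inputs, so each step also "perceives" the previous step's processing.)

    # Toy self-sensing machine: "sensors on the wires" measure this
    # step's internal activity, and those readings become part of the
    # next step's input.
    def step(external_inputs, self_readings):
        # the "cognition automaton": any function of external + internal inputs
        activations = [x * 0.5 for x in external_inputs] + self_readings
        output = sum(activations)
        new_readings = [output, max(activations)]  # measured internal state
        return output, new_readings

    readings = [0.0, 0.0]
    for tick, stimulus in enumerate([[1.0, 2.0], [0.5, 0.5], [2.0, 0.0]]):
        out, readings = step(stimulus, readings)
        print(tick, out, readings)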

When introspecting myself, I don’t see or feel or think anything that couldn’t be explained by such a setup. The different textures (qualia, if you will) of what I perceive in my mind have a certain complexity, but that is merely quantitative and structural, not qualitative.

I therefore simply do not agree that there is a hard problem of consciousness to begin with, in the usually given sense. I don’t agree that there is a qualitative difference between the perception of “qualia” and other perceptions. “Qualia” are just a perception of representations and processes happening in my brain. I see no puzzling mystery that would require solving.


No, this is totally missing the point again.

It is not a question of what the sensor detects. It is a question of how it is possible for there to be an experience when sensing occurs.

Your introspection is simply pointing out the likely nature of what you experience, and I actually agree (tentatively) with the idea that most of our conscious experience is rooted in a self-reflecting system. But none of that has any bearing on how there can be any experience at all.


What you call “experience” for me is just sensing of internal information processing, of internal representations. This may need some dedicated introspection to fully realize. You’re making a distinction which I believe is a mirage. It’s just a special attribution we make in our minds to those inner perceptions. If you look closely, it vanishes.

Think about it: How do you know that you have what you call an “experience”? It’s because you perceive the having of the experience. So, at some point, the perception of having an “experience” is something that enters as an input into your cognitive process, and you match it to some mental models you have about such inputs.

I adjusted my mental model to think of those “qualia” perceptions as being the sensing of parts of the internal workings of my brain. It’s a side-effect of all the processing that is going on, if you will, and of the likely fact that the sensing of some subset of the processing steps is being fed back as inputs into the cognitive processing.


> "just sensing of ..."

We know that physical systems can sense phenomena in the world. We doubt that they experience anything when they do so. Even if we create a Hofstadterian "strange loop" so that we sense our own sensing, that does not give rise to "experience".

I concede that it could be a matter of scale, but a photosensor that knows that it glows yellow when it senses red does not have an experience.


I dispute that "experience" is something that requires a special explanation. The feeling of what you call "experience" is just something that you sense inside your brain. It is just content, data, like thoughts and other perceptions are as well. You are thinking about it. You are thinking about how you feel about it. You have perceptions about your thinking of how you feel about it. Is your cognition able to process anything that isn't (mere) information/data? I don't believe mine is. What would that even mean?

Yes, a photosensor doesn't have that kind of sensation, and that's because there is nothing going on inside of it that would correspond to that sensation (no neural correlate, so to speak). In the human brain there is enough going on (the complexity and quantity are staggering) that our range of inner experience is easily representable by it. And "experiencing" merely means that we are processing those representations in a way that enters our cognition, like "mere" perceptions do as well.

It seems that you are thinking that perceptions lead to experience, and then nothing else. My view is that perceptions lead to internal processing that in turn is itself perceived by our cognition, in addition to the original perceptions. And it's the texture of this perceived processing that we call "experiencing". That is, it doesn't stop at the "experience" step, because then we wouldn't know anything about it. Instead, the information about the experience then enters our cognitive processing as a subsequent step. And that's how we don't perceive just "red" full stop, but also an associated "experience" of the red. But again, we wouldn't know about that experience if that wasn't information that enters our mind. And I see no puzzle about such information entering our mind.


    It seems that you are thinking that perceptions lead 
    to experience, and then nothing else.

    My view is that perceptions lead to internal processing 
    that in turn is itself perceived by our cognition, in 
    addition to the original perceptions
By this definition, which I don't necessarily disagree with, wouldn't a single level of reflection/introspection have to qualify as consciousness?

If we decide that a simple light sensor isn't "conscious" because it's a pure input/output machine with no machinery that could reasonably be described as "internal processing", then what about a Roomba?


I wouldn’t describe it as a single level, because there is some recursiveness involved, and different parts of an experience may involve a different “strength” (like individual tangled weeds have different strengths) or a different number of levels. I think of it as having an organic shape.

I also don’t think consciousness is black and white, there is certainly a spectrum. To us, qualia have complex, multi-faceted and multi-layered textures, in line with what we can cognitively perceive and process, and they trigger just-as-complex associations and emotions. It seems to me that this richness is a large part of what makes consciousness wondrous.

I don’t think a Roomba is observing its own internal processing. A program that debugs/profiles its own execution and uses those data as inputs of its main functions, or a neural network that feeds back the changing weights of its edges as additional inputs for itself, would maybe come closer to asking that question.
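
(One crude way to sketch the weight-feedback version, a toy rather than a serious model: a one-weight learner whose most recent weight change is part of its next input.)

    # Toy of "a neural network that feeds back the changing weights of
    # its edges as additional inputs for itself".
    def run(samples, lr=0.1):
        w, last_delta = 0.5, 0.0
        for x, target in samples:
            # input = external signal plus a reading of the learner's
            # own most recent internal change
            prediction = w * x + 0.1 * last_delta
            error = target - prediction
            delta = lr * error * x
            w += delta
            last_delta = delta  # internal change, fed back next step
            print(f"w={w:.3f} delta={delta:+.3f} error={error:+.3f}")

    run([(1.0, 1.0), (2.0, 2.0), (1.0, 1.0)])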

As an analogy, does a simple light sensor qualify as “seeing”? “Seeing” on the level of a mammal, with object and motion recognition and building an internal spatial model of what it sees, along with predicting what will happen in the next moments based on what it sees, is certainly on a whole other level.


Really?

+ Human beings

+ Dogs

+ Bugs

+ Plants

+ Eukaryotes

+ Viruses

+ Proteins

+ Molecules

+ Atoms

+ Subatomic Particles

+ ???

+ Pure Information, ones and zeros

Here is a line (--------). Place it so that above it, things are conscious, and below it, things are not conscious. It seems very obvious to me that there is nothing special about human beings (or dogs, or bugs, or plants, or wherever you draw the line) that makes them special and conscious. For example, if you assert that the line is at "Bugs", I could not possibly prove you right or wrong, although I would suspect that you might be a Bug. Likewise, I could assert that your circuit has qualia, and you could never prove or disprove me.

Proving or disproving me is solving the hard problem. It's hard because it's meaningless: consciousness is either shared by all things or shared by no things. Which is, imho, a distinction without a difference.


Check out Peter Russell, the person closest to the mark for me. If you know of him, the above is essentially his view; if not, he is amazing.


Nice to see him mentioned. He's very little known, yet he can argue properly and is one of the few who make much sense.


> because consciousness is either shared by all things; or, shared by no things.

Alternative: it is shared by 192 kinds of things, and nothing else.

Why not?


Indeed, why not. I couldn't possibly prove you wrong or right. That's why the hard problem does not exist and there is no line.


As an aside, you can trivially place the line above humans:

+ ???

+ The Galaxy

+ The entire Biosphere

+ Regions of Space

+ Corporations

+ Humans


We don't know what qualia are so it's an acceptable possibility to me (unprovable, mind) that such a circuit may 'experience' something. It would be unutterably basic if it happened, nonetheless I'm ok with that.

There's also a view that consciousness is intrinsic to everything (ah, here you go https://www.scientificamerican.com/article/does-consciousnes...) which is a cheap, cheesy and IMO totally unacceptable way to 'explain' consciousness and I reject that as an explanation, but it doesn't make it actually wrong.

Edit: missed your last line "We have explanation for how there can be experiences/qualia" - I'm surprised, you can explain it, got any links?


> We don't know what qualia are

Actually, no. This is the one part of the problem we are clear on.

Since this is an adequate excuse to give a lecture, here we go:

Suppose I say "I am fat", and suppose you say of me "he is fat". In both of these sentences, the same thing is meant - an observation of the physical world was made, and excess lipids were detected. However, if I say "I am looking at a laptop", and you say "he is looking at a laptop", we don't mean the same thing. You mean that a particular physical state has been observed, with my eyeballs and brain directed laptopwards. Whereas what I mean...

...well, what I certainly don't mean is that I see any eyeballs, or brain. I don't observe any process of looking at all. I just look. What I mean by "I am looking" is completely different to what you mean by "he is looking" even though the two statements are formally similar.

In other words, there are two wildly different, incommensurable uses of the word "am". One of them has a corresponding "is", one of them does not.

So, getting back to qualia, there simply are no qualia. There are qualia that am, but as noted above, that doesn't imply that there are qualia that are.


Sure, pan-qualism/pan-psychism may well turn out to be a respectable position.

[ fixed the missing "no" in the GP ]


Panpsychism is untenable because it trades the hard problem of consciousness for the composition problem. The only consistent and coherent game in town is analytical idealism.


It might be a piss-poor explanation, because it doesn't explain anything, it presumes its conclusion, but that doesn't make it wrong.

And if you throw in phrases like "composition problem" and "analytical idealism" then fer fuck's sake provide some simple explanation or something.


Something that assumes its conclusion and doesn't explain anything is not even wrong...

At any rate, given the context of the thread I didn't feel the need to clarify what the composition problem nor idealism are. But since you ask:

The composition (aka combination) problem refers to the need to explain how multiple bits of consciousness combine together to form a united and cohesive experience such as yours or mine. The hard problem focuses instead on how quantities (matter) give rise to qualities (experience).

Analytic Idealism is the view that experience is the ontological primitive. Both problems sublimate away. Yes, you can derive physics from it; in fact it doesn't disturb physics in any way: all we know about it remains valid under analytic idealism. If anything, many counterintuitive findings of QM fall into place under AI. If this piques your interest, here's a good intro: https://www.essentiafoundation.org/analytic-idealism-course/.


> Something that assumes its conclusion and doesn't explain anything is not even wrong...

No. It can be entirely right. It just isn't useful in any way.

From your link "Analytic Idealism is a theory of the nature of reality that maintains that the universe is experiential in essence"

Maintain all you like. It might even - per my prior statement on pan-psychism - be true. I want quantifiable evidence before we accept it as true. Cough up.

Anyway, thanks for the descriptions of the tech terms.


Can you justify your demand for quantifiable evidence without begging the question? There is research compatible with these claims. It's mentioned and linked in the link I've previously provided.


> Can you justify your demand for quantifiable evidence without begging the question?

Don't fall back to sophistry. Is there evidence? yes or no, if yes then show me.

> There is research compatible with these claims. It's mentioned and linked in the link I've previously provided.

I'm downloading the single video which mentions the word evidence. If there are any more evidence links, please show me them, especially written evidence, and especially especially, empirical evidence.

Edit: right, they're saying that experience is essential to reality. They provide evidence in the form of quantum experiments. I can provisionally accept that, very provisionally. Does it say anything about what consciousness actually is, which is what you (seemed to) claim? Haven't found any sign of that yet.


It's not sophistry, it was a preemptive question anticipating your response, which in fact showed up: you asked what consciousness _is_. This is the sort of question Babbage would have replied to with his famous remark. "What is" is a philosophical, in fact metaphysical, question. You know what consciousness is. Just pinch yourself.

To ask what something _is_ implies reduction. You want a description of experience in terms of something else. So you are begging the question, having already concluded that it's not fundamental. In idealism experience is the reduction basis, so the question makes no sense. On the contrary, one should try to explain everything else in terms of experience (and if you don't accept a reduction basis, then you won't be able to explain anything, even with materialism).

Now, the explanations need to be consistent with empirical evidence and they need to have explanatory power. I submit to you that AI succeeds, and the empirical adequacy of those claims is abundantly linked in the description of the material you are downloading.

Science and scientific theories are concerned with the behaviour of nature, not its essence. They tell us how it behaves, not what it is. We build models out of observations and given the chance, we should stand by a model with the most explanatory power.


Asking where consciousness/qualia come from gets you saying "You know what consciousness is". So you don't answer.

> To ask what something _is_ implies reduction

Yes, but you learn something. What is wood? Partly it's lignin. What's that? Randomly polymerised phenol-like molecules. What's phenol? A benzene ring (unsaturated carbon ring) with an OH on it. What are C, O and H? Elements. What is (eg.) carbon? An atom composed of... etc.

You can't reduce consciousness/qualia in any way so you cop out.

The AI video #5 takes some interesting quantum stuff and extrapolates it with extreme dodginess and emotive language into some semi/pseudo-scientific conjectures. My guess is you haven't even watched the vids.

> I submit to you that AI succeeds, and the empirical adequacy of those claims is abundantly linked in the description of the material you are downloading

Nope.

I've had enough here.


> Yes, but you learn something. What is wood? Partly it's lignin. What's that? Randomly polymerised phenol-like molecules. What's phenol? A benzene ring (unsaturated carbon ring) with an OH on it. What are C, O and H? Elements. What is (eg.) carbon? An atom composed of... etc.

Do you notice that this is an infinite regress? At a certain point you must accept a non-reducible fundamental. If you don't, then it's infinite regress and you have explained effectively nothing. Materialism accepts the quantum foam as its primitive; idealism accepts consciousness. But it's the same epistemic step: accepting a primitive and deriving the rest from it.

Also, if you think that the references are pseudo-science, bring that up with the paper authors. AI simply offers an alternative interpretation, but those are studies coming out of mainstream science circles.


> To ask what something _is_ implies reduction. You want a description of experience in terms of something else. So you are begging the question, having already concluded that it's not fundamental.

This is not true. When people have asked "what is gravity" and determined that it's a fundamental, non-decomposable force within the universe, they were not disappointed. That's still an answer.

"Consciousness is a fundamental property of <X> in our universe" is a perfectly fine answer to the question "What is consciousness". It's just not the answer that most people currently find most likely.


I actually think we are in agreement here. My stance is that it is fundamental and doesn't need to/can't be reduced, therefore explained in terms of something else. If the question is simply meant in a descriptive way, then the answer "consciousness is all there is" is perfectly fine.


> the composition problem

Would these be like the “easy problems” then?

Besides, wouldn't that be exactly like our current physics? (Where we reduce until we find something fundamental like “electric charge just is and has these properties”, similarly for quantum fields and the like -- and then how that composes and interacts with other fundamentals is the whole business.)


The composition problem is considered the hard problem of panpsychism. But as your intuition correctly points out, it is definitely easier than the original hard problem of materialism, which is a category jump from quantities to qualities.

The main difference is that physics is quantifiable, so we can come up with in-principle explanations of really complex stuff (weather forecasts, life itself, etc). With qualities it's a different ballgame, because we are talking about different subjects combining into one. Not saying that it's a dead end, but we haven't even started to define the problem yet, let alone solve it.

There are other more compelling roads. Rather than a combination mechanism, we can look at the problem from a decomposition perspective. This is easier because it's a really well understood phenomenon that we can actually study: the compartmentalization of one unified mind into separate minds, also known as multiple personality disorder or dissociative identity disorder. Here, we have a clear in-principle path to walk.


> Nobody (that I'm aware of) suggests that there is an experience within the electronic circuit, despite the fact that it "senses" different frequencies in the electromagnetic spectrum. The circuit is qualia-free.

Your thought experiment actually shows why a materialistic model makes much more sense than any model that takes "the hard problem" seriously. It's trivially easy to show that the human brain is much more complex than an electronic circuit that registers red or blue light, and that the human brain is doing much more. It's no surprise to anyone that a simple circuit like that lacks an abstraction of the world around it.

But if you believe qualia are an actual thing, then how can you show that the circuit lacks them? No one seems to be able to come up with any way of measuring qualia, or even showing that they exist. There's no way to show that we have more of them than the circuit, and we're left relying on what "feels right."

Before demanding that models solve "the hard problem," we should first ask ourselves if there's any evidence that "the hard problem" even exists. The only evidence I've seen anyone provide for qualia is that they feel they have it, or extremely contrived thought experiments where people feel like it makes the most sense. Counter to this is the entirety of science showing that the world we live in is one that is an emergent one of physical properties. Which, yes, leads to a good number of surprising things.

Creationists thought that god had to have a role in creating life, because they couldn't conceive that something as complex as an eye could be an emergent feature of random physical properties. "Hard problem" folks think that there must be some sort of non-physical "qualia," because they can't comprehend that our experiences are simply an emergent feature of the extremely complex neurons in our brains. Yet scientists that actually study these things don't run into these problems, and keep using the physical model because it's the only one that has made any sense so far.


> The only evidence I've seen anyone provide for qualia is that they feel they have it

Is that not sufficient? It's Descartes' "I think therefore I am". Qualia are just experiences, so if you experience something then they exist.

I don't think qualia need to be non-physical. The "hard problem" remains even if they are physical, because they don't exist within our current understanding of physics and thus still need to be explained.

Actually, reading https://plato.stanford.edu/entries/qualia/#Uses, it seems that your restricted definition of qualia "Qualia as intrinsic, nonphysical, ineffable properties" is one use of the term. But it also has a more general meaning:

"Qualia as phenomenal character. Consider your visual experience as you stare at a bright turquoise color patch in a paint store. There is something it is like for you subjectively to undergo that experience. What it is like to undergo the experience is very different from what it is like for you to experience a dull brown color patch. This difference is a difference in what is often called ‘phenomenal character’."


> Is that not sufficient? It's Descartes' "I think therefore I am". Qualia are just experiences, so if you experience something then they exist.

If you experience something, some experience exists. It does not mean that qualia exist. I've had people use the same personal experiential evidence for the soul, god, reiki, and a whole host of other metaphysical phenomena. Hopefully we can agree that someone relating their experience of the metaphysical is not sufficient evidence that the metaphysical exists (if it is, we need to believe in a whole host of things).

I've followed a lot of neuroscience research. I've never seen any that says "hey, we've run into this problem, but it can't be solved by our current understanding of physics." The "hard problem" seems to reside entirely in the realm of philosophy. There might be a few neuroscientists who engage it in an informal way from time to time, but by and large the people who actually study the mind don't seem to have this problem.

That's not to say that there's not a lot left to learn. There is. But there's no evidence that anything like the "hard problem" or "qualia" exists or is a barrier to our understanding of the brain.


> I've followed a lot of neuroscience research. I've never seen any that says "hey, we've run into this problem, but it can't be solved by our current understanding of physics."

That's because neuroscience explicitly avoids the philosophical problems of the mind as a premise of the entire discipline. The "hard problem" is that we can reach a point where neuroscience has completely explained human behaviour, and we still won't have explained the existence of experience or how it relates to the physical world. Neuroscience doesn't worry about the "hard problem" because it isn't even close to solving the "easy problem" yet.

> If you experience something, some experience exists. It does not mean that qualia exists.

I believe that "qualia" as used in the context of the "hard problem" is just "experience". It doesn't imply anything more specific than that. To quote Chalmers who coined the term "hard problem":

"...even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?"

(https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes...)

What he is talking about is just experience. Not a more specific meaning of qualia (i.e. definition 1, not 2, 3, or 4 on this list https://plato.stanford.edu/entries/qualia/#Uses)


> Nobody (that I'm aware of) suggests that there is an experience within the electronic circuit, despite the fact that it "senses" different frequencies in the electromagnetic spectrum. The circuit is qualia-free.

I personally feel like it's hard to justify being anything other than agnostic on this question (and indeed the question of whether inanimate objects like rocks have consciousness). Reason being essentially that we have so little understanding of consciousness that it seems hard to justify claiming any knowledge of it at all.


I'm a neophyte on the topic, at least as far as reading "serious" literature on it, but:

> We can trivially imagine an electronic circuit that registers different current levels when exposed to red or blue light. Nobody (that I'm aware of) suggests that there is an experience within the electronic circuit

...yeah, I'd like to hear this addressed by those who claim consciousness is a non-hard problem.

If we are to claim that consciousness is a series of inputs and outputs, then even a simple device like the one you described is "conscious."
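
For concreteness, that simple device fits in a few lines of Python (the wavelength bands and current levels here are made up for illustration):

    # A toy version of the circuit: it maps light wavelength to a
    # current level, and that's the entire story.
    def circuit_output(wavelength_nm):
        """Return a 'current' that differs for red vs. blue light."""
        if 620 <= wavelength_nm <= 750:   # roughly red
            return 1.0
        if 450 <= wavelength_nm <= 495:   # roughly blue
            return 0.2
        return 0.0

    print(circuit_output(680))  # 1.0 -- the circuit "senses" red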

A moderately more complex device (say, a Roomba) might even be said to experience "qualia" because it clearly makes decisions and considers some things good and other things bad.

And by this logic a blade of grass is almost certainly conscious.

That's fine, I guess, but it feels awfully lazy. If we expand the definition of consciousness to include Roombas and blades of grass, that's cool, but we still have the challenging problem of defining what makes human consciousness different from grass consciousness. If we decide that the difference is purely quantitative, then we have some uncomfortable things to think about.

If we value human life more than grass or Roombas, because we have "more" consciousness, then are some humans more valuable than others because they have more neurons, or perhaps simply more active neurons?


You're demanding that the hard problem be addressed from a point of view that doesn't see the hard problem as a thing.

You might as well demand the atheist explanation for how god created the world. The atheist must deny the premise of the question, they can't give an answer in the terms of the question.

"We have no explanation for how there can be experiences/qualia"

We do, but not in the terms you set out. The explanation is that there is not a you that experiences qualia, as two separate things. Rather, the experiences ARE YOU. There's nothing they happen to. Their happening is you.

You don't have to agree with it. But the hard question is not ducked. It is not a coherent question in this view.


Not unexpectedly, I do not agree with this characterization.

Short of panpsychism (which could be a thing), I think we mostly all agree that there are almost certainly objects in the world in which there is no experience. By contrast, I think we mostly all agree that we (each of us as individuals, and humans as a group) are things in which experience does occur.

This naturally gives rise to the question: if some set of things have no experience, and another set of things have experience, how does experience arise in the second set of things?

There are subsidiary questions, such as whether experiences may come in different "levels", thus creating a continuum between "none" and "definitely experiencing". But these are secondary, mostly, to the question of how things in which experience occurs differ from things where it does not.


Depending on the definition of "mind", panpsychism sounds rather reasonable: "the view that the mind or a mindlike aspect is a fundamental and ubiquitous feature of reality".

https://en.wikipedia.org/wiki/Panpsychism

It could be that a rock, an ant, a computer, and a human being are all experiencing their existence at varying levels of awareness. And this "experience" isn't necessarily exclusive to living things - though it may be impossible to prove or understand how a computer experiences itself, if at all.

It's also amusing to entertain the opposite, the perspective of people who deny the existence of "experience and qualia" altogether, who argue there is no "hard problem of consciousness" in the first place. Maybe people experience consciousness so differently that for some it's "invisible", while for others it's undeniable and obvious that it exists.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

https://en.wikipedia.org/wiki/Qualia


> Maybe people experience consciousness so differently, for some it's "invisible", and for others it's undeniable and obvious that it exists.

Indeed. I have seen it proposed by hard problemists that people to whom it is "invisible" may in fact be p-zombies.


> Nobody (that I'm aware of) suggests that there is an experience within the electronic circuit

There is a theory, Integrated Information Theory (IIT) [0], which argues exactly that [1].

[0]: https://en.wikipedia.org/wiki/Integrated_information_theory

[1]: https://www.journals.uchicago.edu/doi/10.2307/25470707#_i2


I'm familiar with IIT. I don't believe it suggests that a photosensor has qualia.


Aren't the parent and related answers pointing towards the idea that there's a level of complexity above which you need to be to see those qualia? A single little circuit might not be the one that is conscious, but a bunch of them connected together might exhibit patterns that we could call experience?

Maybe the analogy is that a single DNA molecule is not a living thing but that molecule along with a bunch of others is?

Seems like the problem arises in pinning down what level of complexity is required.


It's worth pointing out that there's zero evidence that complexity is required for consciousness. Our only evidence that beings other than ourselves are conscious is that we know ourselves to be conscious and they seem similar to us. If inanimate objects were conscious, we wouldn't know, because... well, none of us actually has direct evidence that anything other than ourselves is conscious.


I broadly agree with this, and it's been my feeling since reading Dennett & Hofstadter in my teens and twenties.

However, if this does turn out to be true, I suspect that it will not constitute an "explanation" of experience, merely a description of its prerequisites.


> you have an experience.

What are "you"? And what is an "experience"? You highlight this like it's a magical phrase that cannot be explained, and your arguments suggest that you take it as a foundational assumption. But if you assume a fundamentally inexplicable nature to both personhood and experience, then you cannot possibly recognize any explanation for these, because doing so would require giving up your assumption.

I recently came across a paper arguing that consciousness as commonly referred to does not exist due to semantic issues around the term "consciousness," and your arguments reminded me of it. It's quite readable, if you're interested: "Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness" by Jacy Reese Anthis. The abstract and main argument:

> Abstract. Many philosophers and scientists claim that there is a ‘hard problem of consciousness’, that qualia, phenomenology, or subjective experience cannot be fully understood with reductive methods of neuroscience and psychology, and that there is a fact of the matter as to ‘what it is like’ to be conscious and which entities are conscious. Eliminativism and related views such as illusionism argue against this. They claim that consciousness does not exist in the ways implied by everyday or scholarly language. However, this debate has largely consisted of each side jousting analogies and intuitions against the other. Both sides remain unconvinced. To break through this impasse, I present consciousness semanticism, a novel eliminativist theory that sidesteps analogy and intuition. Instead, it is based on a direct, formal argument drawing from the tension between the vague semantics in definitions of consciousness such as ‘what it is like’ to be an entity and the precise meaning implied by questions such as, ‘Is this entity conscious?’ I argue that semanticism naturally extends to erode realist notions of other philosophical concepts, such as morality and free will. Formal argumentation from precise semantics exposes these as pseudo-problems and eliminates their apparent mysteriousness and intractability.

> 3 The Semanticism Argument

> Now that terminology is established, the semanticism argument is brief and straightforward.

> 1. Consider the common definitions of the property of consciousness (e.g., ‘what it is like to be’ an entity) and the standard usage of the term (e.g., ‘Is this entity conscious?’).

> 2. Notice, on one hand, each common definition of ‘consciousness’ is imprecise.

> 3. Notice, on the other hand, standard usage of the term ‘consciousness’ implies precision.

> 4. Therefore, definitions and standard usage of consciousness are inconsistent.

> 5. Consider the definition of exist as proposed earlier: Existence of a property requires that, given all relevant knowledge and power, we could precisely categorize all entities in terms of whether and to what extent, if any, they possess that property.

> 6. Therefore, consciousness does not exist.

https://jacyanthis.com/Consciousness_Semanticism.pdf

I recommend reading it before objecting, as the author addresses a number of possible objections in section 4.


Premise 3 is incorrect. Additionally, the idea of disproving the existence of a thing by arguing from inconsistencies between the words describing it has no bearing on reality.

Edit: I read it. Anthis readily grants the existence of a mental life but wants to deny that "consciousness" is a meaningful property. Sure, for the sake of argument. But that leaves the question of why I have a mental life at all entirely unaddressed.


There's a clear gap between steps 4 and 5. Our current definitions and standard usage of consciousness are inconsistent. But step 5 requires "all relevant knowledge and power", which we almost certainly don't yet have. Therefore 6 does not follow.


The hard problem is hard because most people don’t get why it is hard.


> how it is possible to be conscious of anything at all

I ... have always seen that as something obvious. On a high level, isn't consciousness the result of combining a bunch of neurons (a "hardware", of sorts) in a certain preexisting configuration (a "software", of sorts) and stimuli (inputs)?

To me the problem is in the details. The separation between layers is not needed, because there's no limited consciousness needing to understand the system in the first place. All the encodings and pathways in all three levels are tortuous, non-intuitive, and profoundly intertwined. An image seen through the eyes ends up encoded as a bunch of electrical signals that are then sent to neurons, which pass them to other neurons, until eventually certain other neurons release molecules at another end that activate yet other neurons, and that gets encoded as "pleasure".

But to me it is obvious that this is what's going on. The "hard part" is reverse-engineering the software, the operating system, and the hardware.

This would be a deeply challenging technical puzzle - perhaps beyond our capabilities to tackle, perhaps it would require many generations of people. But I don't understand where this "philosophical hardness" is. I only see the so-called "easy" problems.

What's mysterious about consciousness is that the software/hardware it is part of is finicky and tricky, like something written by a crazy savant with a big ego.

https://en.wikipedia.org/wiki/Saccade


I think you're missing the point. A brain could encode "pleasure", and respond appropriately to that and other complex stimuli without there being anyone actually feeling that pleasure. But there is, and that's unexplained. That's not just a defect in our understanding of brains, it's something which we don't understand on the level of physics.


Yep. People talk about the hard problem, but it’s lots of things.

The consciousness process might have obvious components, and many non-obvious ones, but until those factors explain why it seems like a first-person experience…

Well dang, a “first-person experience” remains basically undefined! So we’re not even to that bridge yet!

My guess is that both questions will be answered at about the same time. I highly doubt that we’ll just get scientifically used to “well we get how brains work, so that’s the self.”

At least as long as anyone resembling us keeps wondering, we’ll keep looking.


> A brain could encode "pleasure", and respond appropriately to that and other complex stimuli without there being anyone actually feeling that pleasure.

It could, but would that be a more efficient way to interact with the world? We see more complex brains coinciding with a greater ability to create abstractions about the world and leading to a greater flexibility when it comes to problem solving. Hard coding things occurs in simpler life forms with very limited to no ability to solve problems.

People often say "well, nature could create a p-zombie" - could it? It's far from clear that someone could hard code the types of interactions we engage in. Look at the early AI efforts and the failure to simply hard code rules there. Natural selection tends to push creatures in the direction of doing things efficiently, and it seems that our brains are no different. Despite it "feeling wrong" to some people.


> Hard coding things occurs in simpler life forms with very limited to no ability to solve problems.

Hard coding also occurs frequently in humans. There are countless things that are universal (or almost universal) human experiences. Smiling, laughing, pain, walking, kissing. Even things like language seem to be based on universal constants (although the specifics vary).

> We see more complex brains coinciding with a greater ability to create abstractions about the world and leading to a greater flexibility when it comes to problem solving

Yes, but the question is whether that ability to create flexible abstraction relates in any way to the ability to have conscious experience. And if so, then how. We can see from deep learning AI models that it's possible to create flexible abstractions artificially. They don't match humans at the moment, but we've yet to hit the limits.

> It could, but would that be a more efficient way to interact with the world

More efficient than what? The point is that we really have no clue what phenomenal experience is.


I thought I covered that. Subjective perception is a mental process. There's nothing there besides an intricate bunch of signals and processing. The signal that says "you Exist" is just hardwired to look Extremely Important to us, so we put it in a different category than "you need to pee", for example. It's up there with "Intense pain", and we feel it all the time while we are awake. What I am saying is that we are all "zombies". Or to put it another way, qualia are a completely "simulable" thing.


The "hard problem" is how it is possible for anything to "look like" or "feel like" anything in the first place. Not only do we not know how to simulate that, we don't even know how to sense that it is taking place in anything other than our own minds.


But that's the same kind of situation as figuring out how Windows draws Solitaire on the screen.

Imagine that you lack all context about computers. You are intelligent, but grew up on a desert island. Then a boat arrives. You see a sailor taking pictures with their iPhone. You don’t even know what you are looking at. “What is a computer and how does a computer work” would take years to explain. The same goes with “how does a screen work”. Etc.

At the stage we are at, asking "what is consciousness" is like that hunter-gatherer asking "how does the little box show my friend's image inside". The answer is going to be "let me give you some context first, you are missing too many pieces".

We just need more pieces. But that doesn’t mean that it is hard. When you are missing lots of pieces, knowledge grows slowly, usually.


Dennett most certainly tackles the "hard problem", by convincingly arguing the problem is ill-defined and not a real problem.

The "hard problem" is not some Riemann Hypothesis of philosophy. Its status as a problem to be solved is highly contentious.


Does he answer the question “why qualia, of any kind, ever?”

> Dennett most certainly tackles the "hard problem", by convincingly arguing the problem is ill-defined and not a real problem.

No, like everyone else he admits defeat before it and tries to define it away. Whatever the relationship between qualia and physical reality, we experience them (I do. Dennett might not, haha) which requires some explanation. Once you have that everything else is easy, but there’s no reason for there to be an experience at all.


I'm not a philosopher, but arguments for the hard problem always come off as very circular to me. And when I dig deeper, it essentially boils down to "but it sure feels like there are qualia, so there must be"

Look, it certainly feels that way to me too. But that's not an argument. It's not even evidence. It also feels like it's impossible for me not to exist, and yet the evidence has me entirely convinced that will happen one day.

In any case, as a non-philosopher I can only gather that among philosophers, this is a highly contentious issue in philosophy of mind, and let the sides try to convince me.

When I read Dennett, he uses an immense breadth of knowledge on neural anatomy, computation, evolution, and cognitive psychology.

When I read arguments from the likes of Chalmers and Searle, I find highly contrived, seemingly circular thought experiments that are purposely constructed to present their argument as fundamentally true. Feels more like theology to me.


> it essentially boils down to "but it sure feels like there are qualia, so there must be"

no. The argument is that "it feels like something to be me", and then to ask "how can it be possible to feel anything?"

qualia is a term from philosophy that is essentially a synonym for "having an experience" or "it is like something to be X"

nobody says "it feels like there are qualia". the closest would be "i have experience, and we call that qualia"


Yeah yeah, semantics. Snore.

How about an objective argument that isn't dependent on assuming its own conclusions and sneaky, poorly defined and impossible thought experiments?

It's like arguing with a religious person about the existence of god.


Where are you getting this? It's not an argument, doesn't have any conclusions, and doesn't require any thought experiments. It is not semantics. Most attempts to dodge it are semantics, though.


I made a point about the lack of philosophical consensus on this issue. And yes, the hard problem and also qualia depend on thought experiments like Mary's Room. When I dig past all the baroque philosophising all I find is philosophers not thinking objectively and instead being too misguided by their own experience.

Stop pretending the hard problem being a problem is some cut and dry thing. It simply is not.

And I'll generally side with the philosopher who understands more about how actual brains work, which is clearly Dennett.

Mary's room, The Chinese Room and friends seem designed around fundamental misunderstandings of how the brain, psychology and computation actually work.

They're designed to evoke a sort of visceral reaction. "Of course Mary won't understand what it's like to see red". But once you dig into the details of it, the logic breaks down.


I emailed Dennett after Consciousness Explained was published, and put the same point I make at the top of this subthread to him. "I suppose that in some real sense, you are right" was his reply.


If that's literally all he wrote, that sounds more like a polite way of saying "you're wrong, and I have better things to do than argue with you about it."


He wrote more than that. I no longer have emails from that period, so I can't quote his full reply. It was friendly, and seemed to engage with the point I was making.


Yea. "I suppose in some real sense"... What does that even mean?


That his book Consciousness Explained did not explain how there could be consciousness, but instead explained what we are conscious of. It's not that what the book does is not explaining consciousness, but it's only explaining it at a specific level, presupposing its existence and understandability. Dennett was conceding that explaining "the hard problem" was a real goal, and that his book did not tackle it.


This is my eternal complaint. Like a fish not noticing water, people don't mention it because it is a given. I'm not even sure how many people are conscious of being conscious.


I am re-reading Dennett's book and this is the best take I have found on the mystery of consciousness.


Just as handwavy as any other explanation of consciousness. I can make an electronic device that runs a Python program that predicts how input affects the device. That doesn't make the device conscious.
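
To make that concrete, here's a minimal sketch of the kind of program I mean (everything here is hypothetical): a device that keeps a predictive model of how input changes its own state, with nothing that looks like experience anywhere in it.

    class SelfModelingDevice:
        """Predicts how input affects its own state; nothing here 'experiences'."""
        def __init__(self):
            self.temperature = 20.0
            self.model_gain = 1.0  # crude self-model: "input raises my temp by ~gain"

        def predict(self, heat_input):
            return self.temperature + self.model_gain * heat_input

        def step(self, heat_input):
            predicted = self.predict(heat_input)
            self.temperature += 1.05 * heat_input                     # actual dynamics
            self.model_gain += 0.1 * (self.temperature - predicted)   # learn from error
            return predicted, self.temperature

    device = SelfModelingDevice()
    for _ in range(5):
        print(device.step(1.0))  # predictions converge on the actual behavior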


A sufficiently advanced such Python program will probably actually be conscious.


Of course not. And sorry if I've missed the sarcasm :)


Why of course not?


What makes you think that a simulation of a process is the process itself? If you were to simulate on silicon kidney function down to sub-particle precision, would you expect your computer to pee on your desk? So why would you expect consciousness to be present in a simulation of brain activity or brain complexity? Can you point to _any_ instance of presumed consciousness in a non-biological complex?


Yes, you would expect the kidney to produce its expected outputs to an entity "within" the simulation. Of course it wouldn't produce the same things outside the simulation.


Which is why it's so handwavy. Why not just say "at some point the magic comes in and you get qualia", which again, explains nothing.


These theories just don't even try to prove that the brain creates consciousness. They just assume it's the case.


For anyone who's wondering why the strange title: https://en.wikipedia.org/wiki/Being_and_Nothingness


Consciousness vs awareness vs sentience are terms that really need some society-scale effort to nail down what we mean by one or another. The conversation circles round and round because many folks talk past each other or interpret discussions in ways that the writer didn't intend. (I'm not saying the answer is available today if only we solve this dialectical issue.)

Philosophers of consciousness define "consciousness" as "phenomenological experience" in the barest, most unqualified sense, ie, the experience of "yellow" when photons of wavelength ~580nm strike a visual sensory organ of some kind of cognitive system.

Note that the above does not automatically imply that the experience is understood or even recognized. A lot of armchair philosophers and intellectual hobbyists conflate the term "consciousness" with the notion of having some kind of mental model through which to comprehend the experience (what I call "awareness"), or an understanding of the dichotomy between self vs environment (what I call "sentience", ie, "self-awareness").

Acting through anthropocentrism, it is easy to assume that the three are inextricable, but I don't think that perspective is the way forward toward understanding of consciousness per se.


> need some society-scale effort to nail down

I think a good approach would be for people to build things that exhibit consciousness according to whatever their model is and claim "this is conscious". Then let people debate whether it is or not.


There is the notion of panpsychism: that consciousness defined in the basic sense is extant everywhere, all the time, in many varied forms and scopes. By the definitions above, awareness would be restricted to those systems which could reasonably be considered "cognitive", and sentience would belong only to those who can conceptualize "cogito ergo sum".


"Metacognition" is a better term for what many refer to as consciousness.


It is not. That I can think about my thinking doesn’t help at all with the question of why I can think.


What I rarely see / hear articulated well enough, and am not even sure that I can, are questions around why _I_ have consciousness. I understand the reasons why a body might develop metacognition, and how it's advantageous for a being to be aware of its thoughts. But none of this explains why my body is attached to _this_ consciousness and not another. 'Experience' is the key term, I feel, when the phenomenal aspect of consciousness is discussed, but I feel many don't understand this viewpoint and attempt to explain it away as something reducible or inevitable.


If you accept that certain bodies have metacognition, then this arguably predicts that each body's metacognition will perceive itself (the metacognition) and the body as two separate but connected entities. That is, your own perception that "you" are separate from your body would be predicted by the theory. But it is a mere perception, because the metacognition "machine" (within the brain) is physically part of the body, and hence inherently bound to it, even if its own internal perception differs from that.


I also understand this theory, but why is my perception mine, and not that of some other physical being? Why do I experience it? I know what many will answer, and I can look at other beings and understand why that being might develop metacognition and see itself as conscious, but I will never understand why I should inhabit/own/experience a consciousness from my own internal viewpoint.


It's maybe the other way around - aka consciousness has you, like fish in a net. What if this consciousness and other ones are exactly the same?

But it's hard to think and reason about it, really, because of our own self-addiction. Kinda like kids high on ice cream can't really care what broccoli tastes like (or the other way around).

Also because consciousness doesn't really need you/us to think about it or discover it because it's kinda the only thing that's here.


You sound like you'd be into the idea of "qualia" if you're not already aware.


If I am God, I will be bored, because I am all-knowing, all-powerful, all-present. It's like playing an RPG with all the cheat codes.

So I create a world in which I will not remember who I am; when I am reborn, I randomly spawn into a family. This way, I can play the game, without getting bored, forever and ever.

Deep down knowing that if the game eventually ends, I am still omniscient, omnipotent, and omnipresent.


The interesting short story The Egg explores a similar idea: http://www.galactanet.com/oneoff/theegg_mod.html


I had the same idea. It might or might not have been under the influence of drugs but I have to say that it is interesting to explore. It's easy to explain to people who played video games.


This idea is a very common alternative spirituality thing.


Is there a name for it?


In Hinduism it is called Leela, the divine play.


Assuming that "boredom" exists for such a being as God is a human projection that might have zero relevance in the domain of the absolute.


Sadly (or not!) this is just as good an answer as the rest to the hard problem. Dualism, computational theory of mind, etc. People talk about a scandal of string theory; they should instead talk about the cross-disciplinary search for an explanation of consciousness spanning compsci, physics, biology, chemistry, and psychology.


To me, David Chalmers is really overrated as a philosopher, and so are his ideas about consciousness.

Essentially everything is an example of emergence and this is no different. Yes we absolutely do need to have words like "chair" as a level of abstraction. Yes we do need the word "consciousness" to talk about this quality/ability humans have which is so hard to describe but encompasses self awareness and experience (both also emergent).

Underlying all these emergent phenomena is always what Chalmers brushes off as easy. What he'd probably describe as the world of matter/energy and quantum physics.

Here's a variation of a common thought experiment: remove every atom from a person's brain, one by one, until they are clinically dead. Somewhere along the way to death, that person is going to be right on the cusp of being a conscious human being. It wouldn't be a single moment. There would be a long period of removing atoms where observers could have endless debates about when the lights go out. Think of someone very gradually going colorblind. Just like a gradient of color blindness, there would be a gradient of consciousness. From normal levels, to octopus levels (not literally), to newborn baby levels, bug level, etc., until death.

This makes it clear to me that consciousness truly is emergent from matter/energy. There's no magic behind it, just mystery because we don't understand it.


> There's no magic behind it, just mystery because we don't understand it.

How is this different from, or inconsistent with, Chalmers' ideas about consciousness?


He believes that consciousness cannot be reduced to atomic level phenomena. That kind of discussion is only about his so-called easy problem. To him, there is something beyond that. I refer to that "something" as magic because I believe there isn't anything beyond matter/energy to describe all examples of emergence, including consciousness.

Going to my original comment's example of removing atoms from a brain, the number of atoms, which atoms, and interactions between the atoms is the mystery.


I agree with your two statements:

> Essentially everything is an example of emergence and this is no different.

> There's no magic behind it, just mystery because we don't understand it.

But I still believe there's a hard problem (that is, it's difficult to explain my sense of interiority using the dynamics and grammars of the substrates that surely give rise to the emergent behavior we call consciousness).

By substrate I'm referring to the base layer in emergence (i.e. emergent phenomena and their dynamics emerge from a more basic layer with its own phenomena/dynamics, which is the substrate).

As physics is the substrate for chemistry, and chemistry for biology, and biology for psychology, and psychology for society, etc... (please ignore that I've surely leapt over or ignored various layers here and/or suggested some type of linearity rather than hierarchy).

With each emergent layer, the explanatory power of the substrate(s) wanes. Perhaps the immediate substrate is still occasionally illuminating (e.g. I can talk about valence in terms of the physical properties of electron orbits, or I can talk about metabolism in terms of the chemistry of the Krebs cycle, etc...) but it becomes difficult to talk about biology (cells, tissues, organic processes, etc...) in terms of physics. I have no doubt that physics is a substrate of biology (i.e. I'm a materialist), but it's not terribly useful to deploy the Standard Model in explaining cancer.

In the same way, I assume it to be true that physics is also a substrate of consciousness, but it isn't clear to me how our sense of interiority arises in the emergent layers. I assume it's because I'm ignorant (it's just a mystery), but that hasn't made the problem any less hard.


I could say that many unexplainable things have a hard problem. How is it that we can smell something and instantly our brain can recall very old memories where we noticed that smell? It seems impossible that atoms can communicate with each other so that the atoms that make up a smell can trigger other atoms in your brain to search for yet other atoms that "stored" a snapshot of a previous time and location (aka a memory). That's despite each individual atom of one memory being identical to the atoms of another memory. It's the number and arrangement of the collection of atoms for each memory that is different.
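
As a toy illustration of that "number and arrangement" point, here's a minimal Hopfield-style associative memory in Python (my own sketch, not anything from Chalmers): every unit is identical, and only the learned connection weights differ, yet a partial cue recalls a whole stored pattern.

    import numpy as np

    def train(patterns):
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)          # Hebbian outer-product rule
        np.fill_diagonal(W, 0)           # no self-connections
        return W / len(patterns)

    def recall(W, cue, steps=10):
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)           # synchronous update
            s[s == 0] = 1
        return s

    rng = np.random.default_rng(0)
    memories = rng.choice([-1, 1], size=(3, 64))        # three stored "snapshots"
    W = train(memories)
    cue = memories[0].copy()
    cue[:20] = rng.choice([-1, 1], size=20)             # partial, noisy "smell" cue
    print(np.array_equal(recall(W, cue), memories[0]))  # usually True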

However, I wouldn't create some artificial distinction between an "easy problem" and a "hard problem" for people to debate over when describing how memory works. Actually, to Chalmers, this entire memory process is an easy problem that may be unknown but is "simply" some interaction of atoms. I doubt he would say there's anything hard about this problem. That's absurd to me.


On further exploration, attention schema theory does a good job of connecting the dots - with the layers being biology, neurology, psychology (which then hosts the elements of AST that produce the claims/experiences of consciousness)

https://en.wikipedia.org/wiki/Attention_schema_theory


Reading through the comments I get the sense that there's something missing in our attempts to link "qualia" to hardware/wetware. My brain is made of neurons, but "I" am not my neurons in and of themselves. There seems to be an emergent system of collective behaviour and responses to inputs that I call me. If a madman were to perform an experiment, successively scooping out teaspoons of brain matter from my skull, I would start out being "me", but at some point I would no longer be me (or perhaps anyone at all, in the sense of being a conscious human). So while the hardware/wetware is a requirement, the conscious being "me" that I feel that I am seems to be an emergent property of the organisation and, dare I say, "training" (tuning?) of this hardware.

So, if "I" am defined at an emergent level above the mere hardware, then there must be way of describing/defining the model of me. Intuitively, we build models of each other, especially people we know well. What is that model beyond an expectation/predictive model of behaviour based on possible inputs?

I feel that's where the gold is.

Full disclosure: I am just a fascinated bystander to this discussion. If this comment has elicited eye-rolls from those more familiar with the state of play, my apologies.


I think you might enjoy Searle: https://www.youtube.com/watch?v=0XTDLq34M18


Just blindly bought this book because I think consciousness is one of the most fascinating unexplained aspects of our universe.


Maybe I'm a p-zombie then, because I just don't get it.

I've spent hours upon hours thinking about thinking and observing my own thought processes and I don't see anything that couldn't be explained scientifically.


For starters:

- Why is there something rather than nothing?

- Does a universe with no one in it to observe it count as something or nothing?

Then, imagine we're building an AI and want to know whether it's reached our level or not:

- How can we determine whether the AI experiences qualia the same (or similar) way we do, and isn't lying?

- Where to draw the line between conscious being and computer?


How do you know if the people to your left or right experience sensations the same way you do?

When they look at the sky does their mind paint the sky with the same color?


The scientific method was created for this.


False. It's unfalsifiable. There is no experiment possible that you can set up to answer these questions, any more than you can construct an experiment to prove whether there is a God.


Why am I me and not you? Why am I not everyone? Am I everyone? Is consciousness an illusion? Is consciousness specific to an individual or do we all start with a 'base consciousness' that evolves with age? What is the chemical, physical or biological process responsible for consciousness? Does consciousness exist on an atomic level? Why do electrons appear to be conscious? Why can't we quantify it? Can we transfer it? Do trees or fungi have consciousness? Why or why not? What's the minimal requirement for consciousness? Is constitutive panpsychism real? Is consciousness a latent property of the universe?

Etc.


One must explain how the jump from quantities to qualities works.


I wish I got the appreciation... would you be able to describe what is fascinating about it?


Here is a good overview of the problem (and the controversy):

https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes...


It's the ultimate riddle.


If we had a system that can show us our thought process, and we could explain every aspect of it, would we still call it being conscious? (I took this question from Westworld, where Maeve gets her hands on the tablet showing what she is thinking at that moment.)


I feel like yes, or at least we'd give it a new name. Nature seems to love emergence at different levels, with little clear delineation between them, and very different rules in between (particles to elements, elements to chemistry, etc).


This question extends to the likes of GPT-3/SD etc., which generate text and images based on the statistics and probabilities of things they have learned, with no 'understanding'. We can explain how and why they generate a particular piece.
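
For concreteness, the "statistics and probability" step is essentially sampling from a learned distribution; here's a toy sketch with made-up scores standing in for what a trained model would output for some prompt:

    import math, random

    def softmax(logits):
        m = max(logits.values())
        exps = {t: math.exp(v - m) for t, v in logits.items()}
        z = sum(exps.values())
        return {t: v / z for t, v in exps.items()}

    # Hypothetical next-token scores for the prompt "the sky is"
    logits = {"blue": 4.0, "falling": 1.5, "green": 0.2}
    probs = softmax(logits)
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(token)  # most often "blue" -- just sampling, no 'understanding'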

You already agree, and the point I was making is that we might be able to call GPT-15 conscious, or at least won't be able to say that it's not.


Right off the bat it seemed like it was saying experts have changed their tone recently, saying "we're further from understanding consciousness than we thought we were". It never goes on to elaborate this point.

Great book review. If I had more time, I would snap up the book immediately. The review left me wondering if the book elaborates on the above ^^^. I might make the time to read it.


> Right off the bat

This might not be a deliberate reference to Nagel (as mentioned in the article) but at least it's thematically appropriate.


Perhaps. Or a strategy of the author to keep me reading until the end (it worked!)


I am glad, because for years the prevailing attitude was that there's nothing special or interesting about consciousness at all, that it doesn't really exist, it's an illusion, etc.

I don't think we'll ever be able to fully explain it in scientific terms, because not everything is in the realm of scientific knowledge.


What scares me is that some leading AI researchers hold this view (that consciousness in general is nothing special) and it makes them come across as unempathetic, as if all conscious beings---meat or AI---are simply computers and in turn pain is just some kind of computation and therefore that it's silly to even discuss taking precautions to prevent any kind of AI suffering.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


What separates this from the other “steps” towards understanding consciousness that the op article talks about?

My personal favorite is the OpenWorm project. The sentience (not really sentience) of the worm emerges from the arrangement of the neuron models in the physics engine. The muscle models are similarly connected, and the worm model swims as if alive. The Human Connectome Project may be the closest relatable thing in a more intelligent sphere. The HCP is also laughably far away from the resolution required to equate it to the OpenWorm project.

Your reply strikes me similarly to my personal fascination. Also, it’s similar to me in that there is no reason to select the Darwin automata as the most likely path towards artificial consciousness. Even the recent article you linked has the classic language of “it’s possible” and not “it’s reality”.

This circles back around to why I enjoyed the book review. I do believe what you portray as incorrect (that we don’t really know much about the fundamental element of consciousness), and the different perspectives tickle my brain. I’m open to changing my opinion if you can put these things into simple ideas and terms. If you can’t put them into simple terms, though, I rest my case that we’re no closer to real understanding than Descartes was.


That's because consciousness is not well-defined and might as well be woo. The verbiage around it is the same as for a "soul".

Define it rigorously and show a physical basis and you might have something to work with.


All qualia (which consciousness itself closes over) are impossible to rigorously define with words. Attempts to do so will come across as woo.

What we can do however is experience qualia first-hand, and prove to ourselves that they exist. While we may never know if we're experiencing them the same way as each other, the fact that we can experience qualia at all is undeniable, and the question of why we can do so is the real question being asked here.


All matter in the Universe is dormant consciousness. Living systems animate this through electrochemical perturbation of quantum states.

It is existential being which is consciousness, and our brains are localized, composite, self-identifying subjectives of this phenomenon.


Maybe we have two systems, an imaginary layer and an accepted-reality layer; dreams happen in the first one, experience in the other. Mania happens when the former leaks into the latter.


This is a reminder that, in the end, House's book is not about consciousness — it is about a set of ways for looking at it.


> What is it like to have a brain?

I wouldn't even know


"How rare and beautiful it is to even exist." --some song lyric


Can't believe / How strange it is to be anything at all.


I wonder how this compares with 'The Society of the Mind'


It's time to upgrade this folk philosophy of mind, and its obsession with ever-elusive "consciousness", with theories of intelligence built on neuroscientific, cellular, and developmental underpinnings.


Intelligence is not the target, I take it. The target is self-experience/qualia.


intelligence creates and experiences all that


Most of the time "intelligence" lags behind experience, I doubt it's a superset.



