
I would be interested in your response to the following thought experiment:

After years of heroic work and ingenious insights, along with a lot of technological progress, we "solve" the "easy" problem of consciousness in the following strong sense (implausibly, I admit, in any foreseeable future):

(Note: I'm going into quite a lot of detail because I think that when people say things like "understanding how the machinery of consciousness works would not tell us anything about how it is possible to be truly conscious of anything at all" they are commonly underestimating what it would actually mean to understand how the machinery of consciousness works.)

1. There is a scanning device. You can strap yourself into this for half an hour, during which time it shows you images, plays you sounds, asks you to think particular kinds of thoughts, etc., all the while watching all your neurons, how they connect, which ones fire when under what circumstances, etc. It tries to model your peripheral as well as central nervous system, so it has a pretty good model of how all the bits of your body connect to your brain, and of how those bits of body actually operate.

2. There is a simulator. It can, in something approximating real time, pretty much duplicate the operation of a brain that has been scanned using the scanning device. It also simulates enough of the body the brain is part of that it can e.g. provide the simulated brain with fairly realistic sensory inputs, and respond fairly realistically to its motor outputs. There's a UI that lets you see and hear what the simulated person is doing.

3. Researchers have figured out pretty much everything about the architecture of the brain, and it turns out to be reasonably modular, and they've built into the simulator a UI for looking at the structure, so that you can take a running simulation and explore it top-down or bottom-up or middle-out, either from the point of view of brain structure or that of cognition, perception, etc.

4. So, for instance, you can do the following. Inside the simulation, arrange for something fairly striking to happen to the simulated person. E.g., they're having a conversation with a friend, and the friend suddenly kicks them painfully in the shin. Some time passes and then (still, for the avoidance of doubt, in the simulation) they are asked about that experience, and they say (as the "real" person would) things like "I felt a sharp pain in my leg, and I felt surprised and also a bit betrayed. I trust that person less now." And you can watch the simulation at whatever level of abstraction you like, and observe the brain mechanisms that make all that happen. E.g., when they get kicked you can see the flow of neuron-activation from the place that's kicked, up the spinal cord, into the brain; the system can tell you "these neurons are active whenever the subject feels physical pain, and sometimes when they feel emotional distress, and sometimes when they remember being in pain" and "this cascade of visual processing is identifying the face in front of them as that of Joe Blorfle, and you can see here how these neurons associated with Joe Blorfle are firing while the conversation is happening, and when the pain happens you can see how these connections between the Joe Blorfle neurons and the pain neurons get strengthened a bit, and later on when the subject is asked about Joe Blorfle and the Joe Blorfle neurons fire, so do the pain ones. And you can see this big chunk of neural machinery here is making records of what happened so that the subject can remember it later; here's how the order in which things happen is represented, and here's how memories get linked up to the people and things and experiences involved, etc. And when he's asked about Joe Blorfle, you can see these bits of language-processing brain tissue are active. These bits here are taking input from the ears and identifying syllable boundaries, and these bits are identifying good candidates for the syllable being heard right now, and these other bits are linking together nearby syllables looking for plausible words, with plausibility being influenced by what notions the subject is attending to, and these other bits are putting together something that turns out to be rather like a parse tree, and, and, and ...".

5. That is: the linkage -- at least in terms of actual physical goings-on within the brain -- between being kicked in the shin by Joe Blorfle on Thursday, and expressing resentment when asked about Joe Blorfle on Saturday, is being accurately simulated, and the structure of what's being simulated is understood well enough that you can see its "moving parts" at higher or lower levels of abstraction.

OK, so that's the scenario. I reiterate that it would be wildly optimistic to expect anything like this any time soon, but so far as I know nothing in it is impossible in principle.
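(To make the mechanism in point 4 a bit more concrete: here is a toy Hebbian-association sketch in Python. Everything in it -- the neuron groups, the learning rate, the recall threshold -- is invented for illustration and is not a claim about real neural coding; it just shows the shape of "co-activation strengthens connections, so a later cue reactivates the association":)

    import numpy as np

    rng = np.random.default_rng(0)

    N = 20                            # a toy "brain" of 20 neurons
    W = rng.uniform(0, 0.1, (N, N))   # weak random connection strengths
    np.fill_diagonal(W, 0)

    FACE = set(range(0, 5))           # fire when seeing "Joe Blorfle"
    PAIN = set(range(5, 10))          # fire on physical pain

    def co_activate(active, lr=0.2):
        """Hebbian step: strengthen links between co-active neurons."""
        for i in active:
            for j in active:
                if i != j:
                    W[i, j] = min(1.0, W[i, j] + lr)

    def recall(cue, threshold=0.5):
        """Which neurons does the cue drive above threshold?"""
        drive = W[list(cue)].sum(axis=0)
        return set(np.flatnonzero(drive > threshold))

    co_activate(FACE | PAIN)    # Thursday: the kick; face and pain fire together
    print(recall(FACE) & PAIN)  # Saturday: cueing the face now lights up pain
                                # -> {5, 6, 7, 8, 9}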

Question 1: Do you agree that something along these lines is possible in principle?

[EDITED to add:] For the avoidance of doubt, of course it might well turn out that some of the analysis has to be done in terms not of particular neural "circuits" but e.g. of particular patterns of neural activation. (Consider a computer running a chess-playing program. You can't point to any part of its hardware and say "that bit is computing king safety", but you can explain what processes it goes through that compute king safety and how they relate to the hardware and its states. Similar things may happen in the brain. Or very different things that likewise mean that particular bits of computation aren't always done by specific bits of brain "hardware".)
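(And to make the chess example concrete, a minimal sketch in Python, with an invented and absurdly crude scoring rule -- real engines use far richer heuristics. The point is that "king safety" exists only as a pattern of computation over generic integers and strings; no variable or region of hardware is the king-safety bit:)

    FILES = "abcdefgh"

    def king_safety(king_square, own_pawns):
        """Crude score: +1 per friendly pawn directly sheltering the king."""
        kf, kr = FILES.index(king_square[0]), int(king_square[1])
        score = 0
        for pawn in own_pawns:
            pf, pr = FILES.index(pawn[0]), int(pawn[1])
            if abs(pf - kf) <= 1 and pr == kr + 1:   # one rank ahead, adjacent file
                score += 1
        return score

    print(king_safety("g1", ["f2", "g2", "h2"]))   # 3: snugly castled
    print(king_safety("g1", ["f2", "a7", "b7"]))   # 1: the shield is mostly gone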

Question 2: If it happened, would you think there is still a "hard problem" left unsolved?

Question 3: If you think there would still be a "hard problem" left unsolved, is that because you think someone in this scenario could imagine all the machinery revealed by the simulator operating perfectly without any actual qualia?

(My answers, for reference: I think this is possible in principle. I think there would be no "hard problem" left, which makes me disinclined to believe that even now there is a "hard problem" that's as completely separate from the "easy" problem of "just" explaining how everything works as e.g. Chalmers suggests. I think that anyone who thinks they can imagine all the processes that give rise to (e.g.) a philosopher saying "I know how it feels for me to experience being kicked in the shin, and I think no mere simulation could truly capture that", in full detail, without any qualia being present, is simply fooling themselves, in the same way as I would be fooling myself if I said "I can imagine my computer doing all the things it does, exactly as it does, without any actual electrons being present".)



There's too much here to really reply to, but I will say this:

1. The existence of GPT-3 and its cousins comes close to proving -- or at least severely tilts the scale in favor of -- the claim that "you do not need consciousness or qualia to participate in relatively human conversation and self-reporting". This means that scenarios where you interrogate a system to see what it thinks are going to rapidly become less and less convincing as an indication of anything interesting.

2. The problem with qualia is that their subjective nature (coupled with the self-reporting dilemma mentioned in 1. above) means that it is essentially impossible to know whether a given system/individual is experiencing them or not. Do I think that the system you've described could exist without qualia? I do. Do I think it could report qualia without actually having them? I do. Do I think it might actually have qualia? Yes, possibly, but with a lot of caveats.


The point of my thought experiment isn't "a computer does something relatively human", it's "a computer does the same things a human does at the level of neuron activations, leading to the same things a human does at the level of actual actions".

And the point isn't "can you deny that the simulation has qualia?" (though I do find denying that pretty implausible); it's that it feels pretty clear to me that having the level of understanding that would be demonstrated by such a simulation-plus-analysis would in fact constitute a solution to the "hard problem".

(Of course, that would be entirely irrelevant for anyone who believes that my scenario is impossible in principle. For instance, if someone thinks that humans don't think with their brains but with their immaterial souls, they should predict that all attempts to do the sort of thing I describe will end in failure: you might get the machine to do exactly what the brain-meat does, but that won't lead to human-like behaviour, because human-like behaviour is enabled by human-like souls, which the simulator doesn't have.)

My maybe-uncharitable view is that the "hard problem" is "hard" because it is not really a problem so much as a decision to refuse ever to admit that we understand. No matter how detailed and complete an explanation we might have of human consciousness, you can always say "nope, that doesn't explain why there's anything it feels like for me to eat a perfectly ripe peach". Even if (as in my fanciful scenario) that explanation enables us to trace every detail of the processes that lead from eating the peach to saying "mmmm, that's delicious", to wanting to buy more peaches in future, to rhapsodizing about how no mechanical explanation could ever do justice to the experience, etc. Even if (again, as in my fanciful scenario) the explanation lets us identify (down to the level of neuron-activations) what is common between the experience of eating a peach and the experience of eating a plum, what is different between the experience of eating a ripe peach and the experience of eating a not-so-ripe one, what is shared by all experiences of seeing something a bright scarlet colour, and so forth.

To me, this all seems like saying that gravity is ineffable, that although we can write down Newton's or Einstein's equations and compute exactly what happens when two massive bodies are near one another, there's still always something left unexplained. I can imagine, I say, things that behave according to the same equations but don't really have mass: they might have, instead of actual mass, some mere facsimile of the real thing. Or that chess is ineffable, that although a machine can choose chess moves (and beat grandmasters) it isn't really playing chess but doing some mere facsimile of chess-playing. And one can go through the same manoeuvre with any concept at all. Consider the Hard Problem of Trousers: we may be able to analyse the way in which pieces of fabric are made and shaped and put together to make trousers, but that still leaves completely unanswered the question of why the resulting object is a pair of trousers. After all, I insist, I can imagine taking exactly the same pieces of fabric and putting them together the same way to make something that could be worn like trousers but that isn't really a pair of trousers...
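(The gravity case really is that complete: with nothing but F = G * m1 * m2 / r^2 and standard textbook values, you can compute the actual Earth-Moon attraction, and there is no residue left over for an "essence of mass" to account for:)

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_earth = 5.972e24   # kg
    m_moon = 7.348e22    # kg
    r = 3.844e8          # mean Earth-Moon distance, m

    F = G * m_earth * m_moon / r**2
    print(f"{F:.2e} N")  # ~1.98e+20 newtons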


Wikipedia states: "according to a 2020 PhilPapers survey, 29.72% of philosophers surveyed believe that the hard problem does not exist, while 62.42% of philosophers surveyed believe that the hard problem is a genuine problem".

The boring answer to your question is that, given your thought-experiment scenario, the numbers would probably shift, with the philosophers who believe in the hard problem of consciousness becoming the minority rather than the majority. If everyone had a seemingly conscious A.I. best friend, as in the science fiction stories, the numbers would continue to go down, but you wouldn't be able to definitively settle the issue.

Philosophers can't even agree on whether the biblical God is running the universe behind the scenes, which would potentially have unaddressed implications for your thought-experiment scenario.

The survey results for religion are:

God: theism or atheism?

Accept or lean toward atheism: 678/931 (72.8%)

Accept or lean toward theism: 136/931 (14.6%)

Other: 117/931 (12.6%)


Your answers all suggest you think consciousness is computational: if we could simulate the complex computations of the brain, there would be nothing else left to explain. I'm of the opinion that consciousness is not computational but a fundamental property of the universe that we can't explain with current physics. I believe this because I can't adequately explain what I experience otherwise.


As it happens I do think consciousness is computational, but that isn't the point my thought experiment is trying to make.

Rather, I was saying: if it turned out that consciousness is computational, or more precisely implemented by something we can model computationally, and if we understood its mechanisms in enough detail to do the sort of simulation-and-explanation in my thought experiment -- then, I claim, it would be difficult to maintain that there is really a separate "hard problem" of consciousness that remains untouched no matter how thoroughly we solve the "easy problem" of explaining the physical processes by which it works. If I'm right about that, I think it weakens the arguments used to suggest that here in the real world there is a separate "hard problem" that we should be very perplexed by.

(There are actually two different scenarios in which consciousness might be non-computational, and they have different implications for the thought experiment. One is where the "mechanisms" of consciousness are non-computational. In this scenario, my thought experiment could never come true: the world isn't put together in the right way for it to work, because that computer simulation will never produce the same behaviour as actual conscious humans exhibit. The other is where the mechanisms are all computable, and everything in the thought experiment goes through perfectly OK, but there's some further Essence Of Consciousness that we have and our simulations don't, without which we get all the same behaviours, right up to writing books about the nature of phenomenal consciousness or poems about the actual phenomena, but "no one's home" -- there are no real experiences, only behaviours that falsely report experiences. I think the second position is held by many people who worry about "the hard problem", and I don't think it really makes sense, but again that isn't quite the point I was trying to make, though it is closely related.)


This is really interesting, and I'd like to learn more about it. Is this an opinion specific to your own understanding, or a belief held by a broader group of academics?


Yes, this is a belief held by a broader group of people, but I haven't read up on it much myself. Some quick googles for "Quantum consciousness" and "consciousness as a fundamental field" brought up some of the arguments I remembered hearing about:

[0] https://en.wikipedia.org/wiki/Quantum_mind

[1] https://www.scienceandnonduality.com/video/consciousness-as-...

As I understand it, a common form of this framework is basically panpsychism, but more science-y and less metaphysics-y. Some people believe that consciousness may arise from a fundamental field in the universe just like how there's an electromagnetic field or a Higgs field. I vaguely remember hearing one theory that posits intelligent life forms are to the field of consciousness what photons/electrons are to the electromagnetic field — that is to say, spikes or clusters of energy in the field that stand out drastically against the background noise.


I will take a shot:

> Question 1: Do you agree that something along these lines is possible in principle?

Seems reasonable.

> Question 2: If it happened, would you think there is still a "hard problem" left unsolved?

Yes.

> Question 3: If you think there would still be a "hard problem" left unsolved, is that because you think someone in this scenario could imagine all the machinery revealed by the simulator operating perfectly without any actual qualia?

If I see an amazing demo of something, and then they try to sell me something capable of only a small subset of what was just shown in the demo, I am going to balk. I suspect that if the topic were almost anything other than consciousness (which is one of those topics that seems to cause the mind to behave anomalously for some reason), most people would agree.

EDIT: thinking about it more... if you could start to make amazingly accurate and precise predictions about how the mind is going to react under a wide variety of scenarios, I would be much more impressed, although my intuition is that this is far less complex than it may seem for normal behaviors (fine-grained precision is where I'd have to surrender my skepticism).


I don't understand the relevance of your paragraph beginning "If I see an amazing demo" to the question at hand. I assume it's meant to be an analogy that justifies your answer to Q2, or something of the kind, but I don't quite understand what's being analogized to what; could you give some more details?

For the avoidance of doubt, the intention of the thought experiment is that it does indeed make accurate and precise predictions, so that if you give the real person and the simulated person the same experiences, they respond in the same way.

Of course in practice, even with all the technological advances that would be required to make the thought experiment a reality, you'd never be able to make the experiences, or the starting states, exactly identical. So in cases where how you act is exquisitely sensitive to the details of your starting state or the experiences you have, simulation and reality might diverge. My guess is that many things aren't so delicate, and I personally would be pretty impressed by a me-simulation that consistently said and did things that are the sort of things I would be likely to say and do.
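(The "exquisitely sensitive" cases are easy to make concrete, by the way. The logistic map is the standard toy example -- not a brain model, just an illustration of how two starting states differing by one part in a billion end up nowhere near each other:)

    def logistic(x, r=4.0, steps=40):
        """Iterate the chaotic logistic map x -> r*x*(1-x)."""
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.400000000))   # one trajectory...
    print(logistic(0.400000001))   # ...a near-identical start, somewhere else entirely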


> I don't understand the relevance of your paragraph beginning "If I see an amazing demo" to the question at hand. I assume it's meant to be an analogy that justifies your answer to Q2, or something of the kind, but I don't quite understand what's being analogized to what; could you give some more details?

I'm assuming we're ultimately talking about the capabilities of the human mind: taken comprehensively, its capabilities are significant. But then, we also know it to be incredibly flawed, which should be kept in mind during such considerations.

> For the avoidance of doubt, the intention of the thought experiment is that it does indeed make accurate and precise predictions, so that if you give the real person and the simulated person the same experiences, they respond in the same way.

So something kinda like this (though likely way more complex)?

    select distinct [phenomenon] from [human_mind_capabilities] where [type] in ('experience', 'response')

I wonder how many rows that would return. I also wonder how much diversity would be in the result set. Either way, I think one would need a pretty powerful thought experiment for even remotely comprehensive coverage.
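(The query even runs if you invent the table. A throwaway sketch using Python's built-in sqlite3 -- the table, columns, and rows are wholly hypothetical, and the real question is of course the cardinality:)

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE human_mind_capabilities (phenomenon TEXT, type TEXT)")
    db.executemany(
        "INSERT INTO human_mind_capabilities VALUES (?, ?)",
        [("sharp pain in the shin", "experience"),
         ("feeling betrayed", "experience"),
         ("trusting Joe Blorfle less", "response"),
         ("rhapsodizing about peaches", "response")],
    )
    rows = db.execute(
        "SELECT DISTINCT phenomenon FROM human_mind_capabilities"
        " WHERE type IN ('experience', 'response')"
    ).fetchall()
    print(len(rows), rows)   # 4 rows here; in vivo, who knows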

> So in cases where how you act is exquisitely sensitive to the details of your starting state or the experiences you have, simulation and reality might diverge.

One example (singular, at least kinda):

https://en.wikipedia.org/wiki/Jeffrey_Dahmer

Another (when networked):

https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War

> My guess is that many things aren't so delicate, and I personally would be pretty impressed by a me-simulation that consistently said and did things that are the sort of things I would be likely to say and do.

I too have an intuition that there is a very rich vein of ore at some level, and that it may be accessible, were we to look for it.


One thing that surprised me about ‘A Thousand Brains: A New Theory of Intelligence’ by Jeff Hawkins was how many different kinds of computer simulation already exist that approximate parts of your experiment's steps.



