The point of my thought experiment isn't "a computer does something relatively human", it's "a computer does the same things a human does at the level of neuron activations, leading to the same things a human does at the level of actual actions".
And the point isn't "can you deny that the simulation has qualia?" (though I do find denying that pretty implausible); it's that it feels pretty clear to me that having the level of understanding that would be demonstrated by such a simulation-plus-analysis would in fact constitute a solution to the "hard problem".
(Of course, that would be entirely irrelevant for anyone who believes that my scenario is impossible in principle. For instance, if someone thinks that humans don't think with their brains but with their immaterial souls, they should predict that all attempts to do the sort of thing I describe will end in failure: you might get the machine to do exactly what the brain-meat does, but that won't lead to human-like behaviour, because human-like behaviour is enabled by human-like souls, which the simulator doesn't have.)
My maybe-uncharitable view is that the "hard problem" is "hard" because it is not really a problem so much as a decision to refuse ever to admit that we understand. No matter how detailed and complete an explanation we might have of human consciousness, you can always say "nope, that doesn't explain why there's anything it feels like for me to eat a perfectly ripe peach". Even if (as in my fanciful scenario) that explanation enables us to trace every detail of the processes that lead from eating the peach to saying "mmmm, that's delicious", to wanting to buy more peaches in future, to rhapsodizing about how no mechanical explanation could ever do justice to the experience, etc. Even if (again, as in my fanciful scenario) the explanation lets us identify (down to the level of neuron activations) what is common between the experience of eating a peach and the experience of eating a plum, what is different between the experience of eating a ripe peach and the experience of eating a not-so-ripe one, what is shared by all experiences of seeing something a bright scarlet colour, and so forth.
To me, this all seems like saying that gravity is ineffable: that although we can write down Newton's or Einstein's equations and compute exactly what happens when two massive bodies are near one another, there's still always something left unexplained. I can imagine, I say, things that behave according to the same equations but don't really have mass: they might instead have not actual mass but some mere facsimile of the real thing. Or that chess is ineffable: that although a machine can choose chess moves (and beat grandmasters) it isn't really playing chess but doing some mere facsimile of chess-playing. And one can go through the same manoeuvre with any concept at all. Consider the Hard Problem of Trousers: we may be able to analyse the way in which pieces of fabric are made and shaped and put together to make trousers, but that still leaves completely unanswered the question of why the resulting object is a pair of trousers. After all, I insist, I can imagine taking exactly the same pieces of fabric and putting them together the same way to make something that could be worn like trousers but that isn't really a pair of trousers...
Wikipedia states: "according to a 2020 PhilPapers survey, 29.72% of philosophers surveyed believe that the hard problem does not exist, while 62.42% of philosophers surveyed believe that the hard problem is a genuine problem".
The boring answer to your question is that, given your thought-experiment scenario, the numbers would probably shift so that the philosophers who affirm the hard problem of consciousness become the minority rather than the majority. If everyone has a seemingly conscious A.I. best friend, as in the science fiction stories, the numbers will continue to go down, but you won't be able to definitively settle the issue.
Philosophers can't even agree on whether the biblical God is running the universe behind the scenes, which would potentially have unaddressed implications for your thought-experiment scenario.