If I say I'm innocent, you don't say I have to prove it. Some facts are presumed true without a burden of evidence, because not presuming them could cause great harm.
So we don't require, say, minorities or animals to prove they have souls, we just inherently assume they do and make laws around protecting them.
Thank you for the clarification.
If you expect me to justify an action that depends on your innocence, then I actually do need you to prove it.
I wouldn't let you sleep in my room just by assuming you're innocent - or, in your words, because of the consequences of being wrong.
It feels like you're moving the goalposts here: I don't want to justify an action based on something; I just want to know whether something has a specific property.
With regards to the topic: Does AI think?
I don't know, and I don't intend to act on the answer either way (whether it does or doesn't).
In other words, I don't care.
The answer could go either way, but I'd rather say that I don't know (especially since "thinking" is not defined).
That means I can entertain either assumption and weigh the consequences, using some heuristic to decide which assumption is better suited to the action I want to justify doing or not doing.
If you want me to believe an AI thinks, you have to prove it; if you just want to justify an action, you may assume whatever you deem most likely.
And if you want to know whether an AI thinks, then you literally can't just assume it does; simple as that.