
I was so frustrated when I tried to get doctors to quantify their assessment of risk for a surgery my sister was about to undergo. They simply wouldn't give me a number, not even "better or worse than even odds". Finally an anesthesiologist privately told me she thought my sister had maybe a one-third chance of dying on the table and that was enough for me. I'm not sure how much fear of liability had to do with this reluctance, or if it was just a general aversion to discussing risk in quantitative terms (which isn't that hard, gamblers do it all the time!).


Doctor here

1. It’s generally difficult to quantify such risks in any meaningful manner

2. Provision of any number adds liability, and puts you in a damned-if-it-does, damned-if-it-doesn’t-work-out situation

3. The operating surgeon is not the best to quantify these risks - the surgeon owns the operation, and the anaesthesiologist owns the patient / theatre

4. Gamblers quantify risk because they make money from accurate assessment of risk. Doctors are in no way incentivised to do so

5. The returned chance of 1/3 probably had an error margin of +/-33% itself


Not a lawyer, but I do wonder if refusing to provide any number also adds liability, especially if it can be demonstrated to a court later that a reasonable estimate was known or trivial to look up, and that the deciding party would not have gone through with the action that ended in harm had they been given that number. I'm also not seeing how giving a number and then having the procedure work out results in increased risk; perhaps you can expand on that? Where's the standing for a lawsuit if everything turned out fine, but in one case you said the base rate for a knee replacement surgery was around 1/1000 for death at the hospital and 1/250 for all-cause death within 90 days, and in another case you refused to quantify?


> It’s generally difficult to quantify such risks in any meaningful manner

According to the literature 33 out of 100 patients who underwent this operation in the US within the past 10 years died. 90% of those had complicating factors. You [ do / do not ] have such a factor.

Who knows if any given layman will appreciate the particular quantification you provide but I'm fairly certain that data exists for the vast majority of serious procedures at this point.
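To make that concrete, here's a quick sketch of how those two published figures combine into per-patient risks. The 50% factor prevalence is my assumption purely for illustration; the other numbers are the hypothetical ones above:

```python
# Sketch: turning the aggregate figures into per-patient numbers via Bayes.
# 33/100 deaths and "90% of deaths had a complicating factor" come from the
# hypothetical above; the 50% factor prevalence is an assumption.

p_death = 33 / 100           # overall mortality in the literature
p_factor_given_death = 0.90  # share of deaths with a complicating factor
p_factor = 0.50              # ASSUMED prevalence of the factor among all patients

# Bayes: P(death | factor) = P(factor | death) * P(death) / P(factor)
p_death_given_factor = p_factor_given_death * p_death / p_factor
p_death_given_no_factor = (1 - p_factor_given_death) * p_death / (1 - p_factor)

print(f"risk with factor:    {p_death_given_factor:.1%}")    # → 59.4%
print(f"risk without factor: {p_death_given_no_factor:.1%}") # → 6.6%
```

Even this back-of-envelope split is far more informative to a patient than a refusal to quantify.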

I've actually had this exact issue with the veterinarian. I've worked in biomed. I pulled the literature for the condition. I had lots of different numbers but I knew that I didn't have the full picture. I'm trying to quantify the possible outcomes between different options being presented to me. When I asked the specialist, who handles multiple such cases every day, I got back (approximately) "oh I couldn't say" and "it varies". The latter is obviously true but the entire attitude is just uncooperative bullshit.

> puts you in a damned-if-it-does, damned-if-it-doesn’t-work-out situation

Not really. Don't get me wrong, I understand that a litigious person could use just about anything to go after you and so I appreciate that it might be sensible to simply refuse to answer. But from an academic standpoint the future outcome of a single sample does not change the rigor of your risk assessment.

> Doctors are in no way incentivised to do so

Don't they use quantifications of risk to determine treatment plans to at least some extent? What's the alternative? Blindly following a flowchart? (Honest question.)

> The returned chance of 1/3 probably had an error margin of +/-33% itself

What do you mean by this? Surely there's some error margin on the assessment itself but I don't see how any of us commenting could have any idea what it might have been.
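For what it's worth, if the anaesthesiologist's gut number were grounded in only a handful of comparable cases, the statistical uncertainty alone would be enormous. A sketch, assuming (hypothetically) 2 deaths in 6 similar cases, using a Wilson score interval:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (k successes in n)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: the "one third" is informed by 2 deaths in 6 comparable cases.
lo, hi = wilson_interval(2, 6)
print(f"point estimate 33%, 95% interval roughly {lo:.0%} to {hi:.0%}")
# → roughly 10% to 70%
```

An interval that wide is at least consistent with the "+/-33% on the estimate itself" quip, though only the doctor could say what their estimate was actually based on.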


> According to the literature 33 out of 100 patients who underwent this operation in the US within the past 10 years died. 90% of those had complicating factors. You [ do / do not ] have such a factor.

Everyone has complicating factors. Age, gender, ethnicity, obesity, comorbidities, activity level, current infection status, health history, etc. Then you have to factor in the doctor's own previous performance statistics, plus the statistics of the anaesthesiologist, nursing staff, the hospital itself (how often do patients get MRSA, candidiasis, etc.?).

And, of course, the more factors you take into account, the fewer relevant cases you have in the literature to rely on. If the patient is a woman, how do you correctly weight data from male patients that had the surgery? What are the error bars on your weighting process?

It would take an actuary to chew through all the literature and get a maximally accurate estimate based on the specific data that is known for that patient at that point in time.
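A rough sketch of why that matters: each matching criterion shrinks the usable sample, and the uncertainty on the estimated rate grows accordingly. All cohort sizes and the survival rate below are invented:

```python
import math

# Each added matching criterion cuts the usable sample, and the standard
# error of the survival-rate estimate grows as the sample shrinks.
p = 0.95  # illustrative survival rate
cohorts = [
    ("all patients",        10_000),
    ("+ same sex",           5_000),
    ("+ same age band",      1_000),
    ("+ same comorbidity",     120),
    ("+ same surgeon",           8),
]
for label, n in cohorts:
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    print(f"{label:20s} n={n:6d}  95% CI ≈ ±{1.96 * se:.1%}")
```

By the time you condition on everything, the error bars can be wider than the estimate itself, which is the actuary's problem in miniature.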


No one said anything about a maximally accurate estimate. This is exactly the sort of obtuse attitude I'm objecting to.

By complicating factors I was referring to things that are known to have a notable impact on the outcome of this specific procedure. This is just summarizing what's known. It explicitly does not take into account the performance of any particular professional, team, or site.

Something like MRSA is entirely separate. "The survival rate is 98 out of 100, but in this region of the country people recovering from this sort of thing have been exhibiting a 10% risk of MRSA. Unfortunately our facility is no exception to that."

If the recipients of a procedure are predominantly female and the patient is a male then you simply indicate that to them. "The historical rate is X out of Y, but you're a bit unusual in that only 10% of past recipients are men. I'm afraid I don't know what the implications of that fact might be."

You provide the known facts and make clear what you don't know. No weasel words - if you don't know something then admit that you don't know it but don't use that as an excuse to hide what you do know. It's utterly unhelpful.


So, while you are correct, you are missing an important piece:

most people cannot think like this

I'm not talking about patients, I'm talking about everyone, including doctors. They just can't think in a probabilistic sense. And you'll counter that it's just reporting facts, but they don't even know which facts to report to you, or how to report them. It just doesn't seem to fit in many people's heads.


Fair enough. It's a depressing thought but you're probably right.


This is part of the mindset among doctors that makes some people want to “do their own research” rather than trust their physician. A medical intervention has to have positive expected value for it to be a good idea, and figuring out the expected value has to involve some quantification of risks. If doctors don’t want to do that because they could get sued if they don’t give a maximally accurate estimate, and producing a maximally accurate estimate would be too much work, then fine; it’s a free country and I don’t want to make doctors do anything they don’t feel like doing. But they are creating a situation where patients who want to figure out if something is a good idea have no choice but to start googling things themselves.

I’ve undergone some surgeries that were not without risks and every time, I’ve been stonewalled by doctors when asking for basic information like “in your personal practice, what is the success rate for this surgery?”. Always something like “Oh, everyone is different, so there’s no way to give any estimates.” The only options are: either they have some estimate they think is accurate enough that they’re comfortable recommending the surgery but they won’t tell me (in which case they’re denying me useful information for their own benefit), or they have no idea and are recommending the surgery for some other reason (a very concerning possibility lol). Either way, it instantly makes our relationship adversarial to some extent, and means I need to do my own research if I want to be able to make an informed decision.
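For illustration, the expected-value comparison described above can be as simple as a few lines. Every number below is made up:

```python
# Toy expected-value comparison of "operate" vs "don't operate",
# with invented utilities and probabilities purely for illustration.

p_death = 0.33        # hypothetical surgical mortality
qol_success = 0.9     # quality-of-life weight if surgery succeeds
qol_no_surgery = 0.5  # quality-of-life weight living with the condition
years_left = 5        # expected remaining lifespan either way

ev_surgery = (1 - p_death) * qol_success * years_left  # death contributes 0
ev_no_surgery = qol_no_surgery * years_left

print(f"surgery:    {ev_surgery:.2f} quality-adjusted years")
print(f"no surgery: {ev_no_surgery:.2f} quality-adjusted years")
# With these inputs surgery (~3.0) barely edges out no surgery (2.5);
# a small change in any input flips the decision, which is exactly why
# patients want the inputs quantified.
```

The arithmetic is trivial; the hard part, and the part only the doctor can supply, is the inputs.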


> Don't they use quantifications of risk to determine treatment plans to at least some extent?


I doubt doctors do: my guess would be that most doctors follow a list of best practices devised by people like malpractice actuaries, combined with their own sense of outcomes from experience.


Thanks for sharing the realities you experience. The rest of this is picayune.

> Doctors are in no way incentivised to do so

Personal pride, care for the patient, and avoiding the mess of a bad outcome seem like powerful incentives. That said, I assume you mean they are not given explicit bonuses for good outcomes (the best tend to attract the most business and the highest salaries).


> It’s generally difficult to quantify such risks in any meaningful manner

It's not for lack of data, that's for sure...


In Norway a pregnant woman over forty is offered genetic counselling because of the risk of Down syndrome. These risks are definitely quantifiable and no liability is generated by providing them. The counsellor (a doctor) explains the risks and the syndrome and apart from this appointment is not otherwise involved.

This could surely be done for other situations, especially surgical procedures as the statistics should be collected and associated not only with the procedure but also the hospital and surgeon.


I would rather not have a surgeon considering failure rates ahead of any operation they're about to conduct.


On the off chance you're not being facetious: why? Isn't it part of their job description to weigh the ups and downs of any operation before conducting it? I'd imagine failure to do so would open them to liability.


There are 2 parts:

1. Presumably, the surgeon has determined that this specific intervention is the best possible intervention of all the possible ones (fewest downsides, best outcome, etc). There are always alternatives - including #wontfix.

2. Once this decision has been made, I don't want them second guessing, I want them 100% confident in the decision and their abilities. If there's any lingering doubt - then return to step 1 and re-evaluate.


I think a lot of people don't understand statistics, which may make it hard for doctors to choose how to communicate things, even if they do have important knowledge that could be helpful.

I once asked a doctor how long a relative might have to stay in intensive care:

A: Oh, I couldn't possibly say.

Q: Do you think he might be home in 3 or 4 days?

A: Oh, no, not that soon.

Q: So it might even be as long as 3 weeks?

A: I highly doubt it would be that long.

Q: So a reasonable estimate might be 1-2 weeks?

A: Oh, I couldn't possibly say.

I started the conversation having no idea whatsoever how long it would be, but I ended up with a good feel for a time estimate along with error bars.
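For what it's worth, those bracketing answers can be turned into a rough central estimate. Treating the elicited bounds as a soft interval on a skewed duration and taking the geometric mean is my assumption, not anything the doctor said:

```python
import math

# Elicited soft bounds from the conversation above:
# "not as soon as 3-4 days", "doubt it would be 3 weeks".
lo_days, hi_days = 4, 21

# Durations tend to be right-skewed, so the geometric mean is a more
# sensible central estimate than the midpoint (my assumption).
estimate = math.sqrt(lo_days * hi_days)
print(f"central estimate ≈ {estimate:.0f} days, range {lo_days}-{hi_days} days")
```

That lands around nine days, i.e. the "1-2 weeks" the conversation converged on.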


> ended up with a good feel for a time estimate along with error bars.

You might have felt that but my impression is that the doctor in question was mostly making up something on the spot. I would guess that the doctor is simply saying that they have never encountered anyone in this situation leaving intensive care within four days and similarly no one who survived needed longer than three weeks. The last answer suggests that they have no actual statistics at all.


Well, they weren't just making things up; there clearly was information that they had, that I wanted, but that they were initially reluctant to give me. At that point ultimate survival did not seem to be the issue, but I had no idea whether the answer to "how long" was going to be hours or months! Even if the statistics they had were something like N=5, that was still (mathematically) infinitely more than the N=0 that I had.


> she thought my sister had maybe a one-third chance of dying on the table and that was enough for me

But what was the alternative? I understand that you didn't get an answer, but the alternative of not operating could have been worse.


The alternative was her having to endure more arthritic joint pain in the few years she had left (she was suffering from a degenerative disease and no one expected her to live more than 5 years). We decided that was better than the risk of an invasive surgery (hip replacement) with her history of adverse reaction to anesthesia.


Gamblers are a poor example. Their decisions hardly affect anyone else, or institutions, or nations.

Increase the cost of the fallout of a decision (your relationships, your boss's job, your org's existence, the economy, national security, etc.) and the real fun starts.

People, no matter what they say about other people's risk avoidance, all start behaving the same way as the cost increases.

This is why we end up with Trump-like characters up the hierarchy everywhere you look, because no one capable of appreciating the odds wants to be sitting in those chairs and being held responsible for all kinds of things outside their control.

It's also the reason why we get elaborate signalling (costumes, rituals, pageantry, ribbons and medals, imposing buildings, PR, marketing, etc.) to shift focus away from quantifying anything. See The Theory of the Leisure Class. Society hasn't found better ways to keep groups together while handling complexity the group is incapable of handling. Even small groups will unravel if there is too much focus on the low odds of a solution.



