Disk failure rates are very low compared to a decade ago. Back then I used to replace more than a dozen disks every week; now a failure is an eyebrow-raising event I seldom see.
I think following Backblaze's hard disk stats is enough at this point.
Backblaze reports an annual failure rate of 1.36% [0]. With 2,400 drives in their cluster, they would likely see ~33 failures a year (an extra ~$4,000 of annual capex, almost negligible).
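The back-of-the-envelope math above can be sketched as follows; the per-drive replacement cost is an assumption inferred from the ~$4,000 figure, not a quoted price:

```python
# Expected annual drive failures from Backblaze's reported AFR.
afr = 0.0136         # 1.36% annual failure rate (Backblaze stats)
drives = 2400        # cluster size
drive_cost = 125.0   # assumed replacement cost per drive (hypothetical)

expected_failures = drives * afr               # ~32.6 drives/year
annual_capex = expected_failures * drive_cost  # ~ $4,000/year

print(f"{expected_failures:.1f} failures/year, ~${annual_capex:,.0f} capex")
```

Note that the AFR is an average across drive models; Backblaze's per-model tables show failure rates varying by an order of magnitude, so a single-model cluster could land well above or below this estimate.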
I bet you would still see a higher early failure rate from the stress of transportation, even if there's no funny business. And I'd expect some funny business: used enterprise drives often come with wiped SMART data, and some may have been retired by sophisticated operators who judged them to be near failure.