> But one should expect a single function does not vary in the last 5 digits as one example did.
Why not? The whole selling point of floats is that they're a fast approximation to real arithmetic in a bunch of useful cases. I'd personally be much more miffed by an excessively slow implementation of a transcendental function than by one that was off by even a large number of bits. If there's a fast solution with better precision, that's great, but low precision by itself doesn't strike me as particularly problematic.
> Why should anyone use double precision if that's the kind of slop an implementation can have?
Because by using float64 instead of float32 you can cheaply get a ton more precision even with sloppy algorithms. float64 carries a 53-bit significand to float32's 24, so if there are only ~16 bits of slop you still have ~37 correct bits, and you could probably get a full-precision float32 operation just by using an intermediate cast to float64 (proof needed for the algorithm in question, of course). A sketch of that idea follows.
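A minimal C sketch of the intermediate-cast trick, assuming the float64 `sin()` is wrong in roughly its last 16 bits; `sinf_via_double` is a hypothetical name, not a standard function:

```c
#include <math.h>
#include <stdio.h>

/* Evaluate the transcendental in float64, then round the result
 * down to float32. Even if sin() is off in the last ~16 of its
 * 53 significand bits, the surviving ~37 bits comfortably cover
 * the 24 a float32 result needs. */
static float sinf_via_double(float x) {
    return (float)sin((double)x);  /* widen, evaluate, narrow */
}

int main(void) {
    printf("%.9g\n", sinf_via_double(1.0f));  /* ~0.841471 */
    return 0;
}
```

The caveat behind "proof needed" is double rounding: a true result that lands very close to a float32 halfway point can round one way through the float64 intermediate and the other way directly, so correct rounding for every input has to be argued per algorithm.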