In Go, values are meant to always be useful, so `result | error` would be logically incorrect. `(result, result | error)`, perhaps – assuming Go had sum types – but it's a bit strange to pass the result twice.
This is just another pitfall of assuming that Rust-style error handling maps cleanly onto an entirely different language with an entirely different view of the world.
> and you have absolutely no idea if the string was "0" or "not zero"
You do – err will tell you. But in practice, how often do you really care?
As Go prescribes "Make the zero value useful", your code will be written in such a way that "0" is what you'll end up using downstream anyway, so most of the time it makes no difference. When it does, err is there to use.
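For instance, strconv.Atoi follows this convention – on a failed parse it returns 0 alongside the error. A minimal sketch:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.Atoi returns 0 together with a non-nil error for
	// unparseable input, so zero is what flows downstream either way.
	n, err := strconv.Atoi("not a number")
	if err != nil {
		// When the "0" vs "not zero" distinction matters,
		// err is there to use.
		fmt.Println("parse failed:", err)
	}
	fmt.Println("n =", n) // n == 0 on this path
}
```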
That might not make sense in other languages, but remember that other languages see the world differently. Languages are about more than syntax – they encompass a whole way of thinking about programs.
I think it's worth noting that, while the general consensus has converged around (T, error) meaning T XOR error, it does not necessarily mean that. There are some places that violate this assumption, like the io.Reader and io.Writer interfaces. Especially io.Reader, where you can have (n>0, io.EOF), which also isn't even a proper error condition! (This isn't a big problem, though, since you rarely need to directly call Read or Write).
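The io package itself documents the resulting pattern: callers should process the n > 0 bytes before looking at the error. A minimal sketch of that loop:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("hello")
	buf := make([]byte, 8)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			// Handle the bytes read BEFORE checking err: a single
			// Read is allowed to return both n > 0 and io.EOF.
			fmt.Printf("read %q\n", buf[:n])
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println("read error:", err)
			break
		}
	}
}
```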
If a function `func foo() (int, error)` returns a non-nil error, then the corresponding `int` is absolutely invalid and should never be evaluated by the caller, unless docs explicitly say otherwise.
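A minimal sketch of that convention (foo here is a made-up function):

```go
package main

import (
	"errors"
	"fmt"
)

// foo is a hypothetical function that fails.
func foo() (int, error) {
	return 0, errors.New("something went wrong")
}

func main() {
	n, err := foo()
	if err != nil {
		fmt.Println("error:", err)
		return // n must not be evaluated on this path
	}
	fmt.Println("n =", n) // n is only meaningful once err == nil
}
```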
Errors are just values, the same as any other values; it's in no way "exceptional" for a caller to get an error back from a call to some other code. If a function can fail, it needs to return an error; if a function call fails, the caller needs to deal with that error. Not difficult stuff here. The "happy path" is no more or less important than the "sad path", and both should be equally represented in the source code as written.
That's doctrine. Saying it doesn't make it useful.
A program serves a business need, so it's well recognized that there's a distinction between business logic and implementation details.
So there's obviously no such thing as "just an error" from that alone: "a thing failed because we ran out of disk space" is very different from "X is not valid because pre-1984 dated titles are not covered under post-2005 entitlement law".
All elephants have 4 legs, but not all things with 4 legs are elephants, and a tiger inside the elephant enclosure isn't "just" another animal.
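To make the distinction concrete, here's a hedged sketch – EntitlementError is a made-up domain error type, and the caller tells it apart from an infrastructure failure using errors.As:

```go
package main

import (
	"errors"
	"fmt"
)

// EntitlementError is a hypothetical business-logic error,
// distinct from implementation failures like a full disk.
type EntitlementError struct{ Title string }

func (e *EntitlementError) Error() string {
	return e.Title + ": pre-1984 titles are not covered under post-2005 entitlement law"
}

func handle(err error) {
	var ee *EntitlementError
	if errors.As(err, &ee) {
		fmt.Println("business-rule violation:", ee)
		return
	}
	fmt.Println("implementation failure:", err)
}

func main() {
	handle(&EntitlementError{Title: "Some Title"})
	handle(errors.New("out of disk space"))
}
```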
> So there's obviously no such thing as "just an error" from that alone
The point is that all values are potentially errors. An age value, for example, can be an error if your business case requires restricting access to someone under the age of 18. There is nothing special about a certain value just because it has a type named "error", though.
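A hypothetical sketch of that point (checkAccess and its under-18 rule are made up) – the plain int carries the failure condition; the error merely reports it:

```go
package main

import (
	"errors"
	"fmt"
)

// checkAccess treats an ordinary int as an error condition when
// the business rule (made up here) says it must.
func checkAccess(age int) error {
	if age < 18 {
		return errors.New("access restricted: under 18")
	}
	return nil
}

func main() {
	fmt.Println(checkAccess(16)) // access restricted: under 18
	fmt.Println(checkAccess(21)) // <nil>
}
```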
Let's face it: at the root of this discussion is the simple fact that "if" statements are just not very good. They're not good for handling errors, and they're not good for handling anything else either. It's just more obvious in the case of what we call errors because of how frequently they occur.
Something better is sorely lacking, but seeking better only for types named "error" misses the forest for the trees.
You're simply wrong. If I call a function and it fails, then at the base level it doesn't matter if it failed because "no more disk space" or because "input values are invalid" -- the thing failed, in both cases. The caller needs to deal with that failure, in all cases. Now exactly how it deals with that failure might depend on properties of the error, sure, but the control flow of the program is the same in any case.
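A sketch of that claim: the caller's control flow is the same whatever the cause of the failure; only the response inside the error branch may consult the error's properties (here via errors.Is, which unwraps os.Open's error to fs.ErrNotExist):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// The shape of the handling code is identical regardless of
	// why the call failed: check err, then decide what to do.
	f, err := os.Open("/no/such/file")
	if err != nil {
		// How we respond may depend on properties of the error...
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Println("missing file, falling back to defaults")
			return
		}
		// ...but the control flow around it does not change.
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()
}
```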