I’ve found that golden master tests (aka snapshot testing) pair very well with fixtures. If I need to add to the fixtures for a new test, I regenerate the golden files for all the known good tests. I barely need to glance at these changes because, as I said, they are known good. Still, I usually give them a brief once-over to make sure I didn’t do something like add too many records to a response that’s supposed to be a partial page. Then I go about writing the new test and implementing the change I’m testing. After implementing the change, only the new test’s golden files should change.
They are also nice because I don’t have to think so much about assertions. They automatically assert the response is exactly the same as before.
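In Go, for example, the helper behind this workflow can be as small as the sketch below. The `goldenCompare` name and the `-update` flag are just illustrative, not a particular library: a normal `go test` run compares output against the stored file, and `go test -update` regenerates the known-good files.

```go
package example

import (
	"flag"
	"os"
	"path/filepath"
	"testing"
)

// -update regenerates the golden files instead of comparing against them.
var update = flag.Bool("update", false, "regenerate golden files")

// goldenCompare asserts that got matches the stored golden file,
// or rewrites the file when -update is passed.
func goldenCompare(t *testing.T, name string, got []byte) {
	t.Helper()
	path := filepath.Join("testdata", name+".golden")
	if *update {
		if err := os.WriteFile(path, got, 0o644); err != nil {
			t.Fatalf("updating %s: %v", path, err)
		}
		return
	}
	want, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("reading %s: %v", path, err)
	}
	if string(got) != string(want) {
		t.Errorf("output differs from %s\ngot:\n%s\nwant:\n%s", path, got, want)
	}
}
```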
I'm familiar with snapshot testing for UI, and I agree with you: it can work really well there because snapshots are usually quick to verify. Especially if you can build some smart tolerance into the comparison logic, they can be really easy to maintain.
But how would you do snapshot testing for behaviour? I'm approaching the problem primarily from the backend side, and there most tests are about behaviour.
I'm also primarily on the back end. Like most backenders, I spend my workdays on HTTP endpoints that return JSON. When I test these, the "snapshot" is a JSON file with a pretty-printed version of the endpoint's response body. Tests fail when the generated file isn't the same as the existing one.
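Concretely, a test for one of those endpoints could look roughly like this in Go, reusing the `goldenCompare` helper sketched earlier. The `newServer` constructor and the `/users` route are made up for the example:

```go
package example

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestListUsersSnapshot(t *testing.T) {
	// newServer() is a stand-in for however the app's handler gets built.
	srv := httptest.NewServer(newServer())
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/users")
	if err != nil {
		t.Fatalf("GET /users: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("reading body: %v", err)
	}

	// Pretty-print so the golden file is stable and diffs are easy to review.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, body, "", "  "); err != nil {
		t.Fatalf("indenting JSON: %v", err)
	}

	goldenCompare(t, "list_users", pretty.Bytes())
}
```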
Ah, OK, yes, for API endpoints it makes a lot of sense. Especially if it's a public API, you need to inspect the output anyway to ensure that the public contract is not broken.
But I spend very little or no time on API endpoints, since I don't work on projects where the frontend is an SPA. :)