A "DB connection" in Go is already several layers of abstraction. You could have real production connected to your postgres, integration tests connected to a sqlite file, and unit / functional tests via sqlmock.
Everything is still 'bound to DBs', but that's because your program needs a source of data. Faking a second data source alongside the DB, hidden behind a higher-level shared interface, is just inviting integration failures.
Would you have the DB connection be a prominent required parameter of all your functions? Would you have the bulk of your code be integration tests then?
For me this depends on the application. For a REST API I'd probably open a Conn or Tx in an early middleware and carry it on the request context, or pass it as an explicit parameter, depending on the HTTP router's features. (In Go that usually means a request context, unfortunately - a more powerful type system would ideally get you more featureful, type-safe routing.) For a more RPC-like or compute-focused API I might keep a handle to the DB (or other data source) on a method's receiver and route / dispatch to that method. (I think this is the same thing oppositelock suggests elsewhere in this thread - https://news.ycombinator.com/item?id=25807562) Both flavors are sketched below.
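A rough sketch of both, assuming names like withTx / txFrom / Server that I'm inventing here. Real middleware would also decide commit vs. rollback from the handler's outcome rather than committing unconditionally:

    package api

    import (
        "context"
        "database/sql"
        "net/http"
    )

    type txKey struct{}

    // withTx opens a transaction per request and carries it on the
    // request context. Error-aware commit/rollback is elided here.
    func withTx(db *sql.DB, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            tx, err := db.BeginTx(r.Context(), nil)
            if err != nil {
                http.Error(w, "db unavailable", http.StatusServiceUnavailable)
                return
            }
            defer tx.Rollback() // no-op once Commit has succeeded
            ctx := context.WithValue(r.Context(), txKey{}, tx)
            next.ServeHTTP(w, r.WithContext(ctx))
            tx.Commit()
        })
    }

    // txFrom recovers the transaction inside a handler.
    func txFrom(ctx context.Context) *sql.Tx {
        return ctx.Value(txKey{}).(*sql.Tx)
    }

    // The more RPC-ish alternative: hang the handle off a receiver
    // and dispatch to its methods.
    type Server struct {
        db *sql.DB
    }

    func (s *Server) getUser(ctx context.Context, id int64) (string, error) {
        var name string
        err := s.db.QueryRowContext(ctx,
            "SELECT name FROM users WHERE id = $1", id).Scan(&name)
        return name, err
    }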
> Would you have the bulk of your code be integration tests then?
I'm not sure if you mean the bulk of my code or just the bulk of my tests - for a REST API I would expect mostly functional tests: do e.g. a PUT followed by a GET and make sure the result makes sense. That would be the case regardless of DB architecture.
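Something like this, using net/http/httptest (newHandler is a stand-in for however the API happens to be wired up):

    package api

    import (
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // A functional test in the PUT+GET style: exercise the API end to
    // end and check the round trip, whatever storage sits behind it.
    func TestPutThenGet(t *testing.T) {
        srv := httptest.NewServer(newHandler()) // newHandler: assumed constructor
        defer srv.Close()

        req, err := http.NewRequest(http.MethodPut,
            srv.URL+"/users/1", strings.NewReader(`{"name":"alice"}`))
        if err != nil {
            t.Fatal(err)
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil || resp.StatusCode != http.StatusOK {
            t.Fatalf("PUT failed: %v", err)
        }
        resp.Body.Close()

        resp, err = http.Get(srv.URL + "/users/1")
        if err != nil || resp.StatusCode != http.StatusOK {
            t.Fatalf("GET failed: %v", err)
        }
        defer resp.Body.Close()
        // ...decode resp.Body and assert it matches what was PUT.
    }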