> I don't need all that for development, I just want to run the dang code
Have you ever worked somewhere where, in order to run the code locally on your machine, it's a blend of "install RabbitMQ on your machine, connect to MS-SQL in dev, use a local Redis, connect to Cassandra in dev, run one service that this service routes through/calls to locally, but then that service will call out to two services in dev", etc.?
I certainly have. And there is often a Dockerfile or a docker-compose file. And the docs will say "just run docker-compose up, and you should be good to go."
And 50% of the time it actually works. The other 50% of the time, it doesn't, and I spend a week talking to other engineers and ops trying to figure out why.
The thing I've been trying: instead of the docs saying "just run docker-compose up" (and a few other commands), put all of those commands in a bash script. That single bootstrap command should: docker compose down, docker compose build, docker login, [install gems/npm modules], rake db:reset (drop + recreate), docker compose up, rake generate_development_data.
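A minimal sketch of what that script could look like, assuming a Rails-style project (the registry URL and the bundler/rake invocations are placeholders, and if your database itself runs in Compose you'd bring the stack up before resetting it):

```sh
#!/usr/bin/env bash
# bin/bootstrap -- sketch of the single bootstrap command described above.
# Command order follows the comment; adjust for your own stack.
set -euo pipefail

docker compose down                           # tear down whatever is currently running
docker compose build                          # rebuild images from scratch
docker login registry.example.com             # placeholder registry, if you pull private images
bundle install                                # install gems (and/or `npm install`)
bundle exec rake db:reset                     # drop and recreate the database
docker compose up -d                          # bring the stack back up in the background
bundle exec rake generate_development_data    # seed shared dev/test data
```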
This way each developer can/should tear down and rebuild their entire development environment on a semi-frequent basis. That way you know it works way more than 50% of the time. (Stretch goal: put this in CI.... :)
The other side benefit: if you reset/restore your development database frequently, you are incentivized/forced to add any necessary dev/test data to the "rake generate_development_data" task, which benefits all team members. (I have thought about a "generate_development_data.local.rb" approach where each dev could extend the generation if they shouldn't commit those changes upstream, but I haven't done that by any means....)
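One way that ".local" idea could be sketched, purely hypothetically (the file paths and task name here are assumptions, not something the commenter has actually built):

```ruby
# lib/tasks/development_data.rake -- hypothetical sketch of the ".local" extension idea
task generate_development_data: :environment do
  # ... shared dev/test data that the whole team commits ...

  # Each developer can drop a git-ignored local file next to this task
  # to generate extra data that shouldn't go upstream.
  local = Rails.root.join("lib/tasks/generate_development_data.local.rb")
  load(local.to_s) if File.exist?(local)
end
```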
Docker Compose for those services, while the language itself runs natively, has been the best solution to this problem for me in the past. Docker Compose for Redis, Postgres, Elastic, etc.
IMO Docker for local dev is most beneficial for Python, where local installs are so all over the place.
This is exactly what I recently set up for our small team: use Docker Compose to start Postgres and Redis, and run Rails and Sidekiq natively. Everyone is pretty happy with this setup; we no longer have to manage Postgres and Redis via Homebrew, and it means we're using the same versions locally and in production.
If anyone is curious about the details, I simply reused the existing `bin/dev` script set up by Rails by adding this to `Procfile.dev`:
docker: docker compose -f docker-compose.dev.yml up
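For anyone who hasn't used Compose this way, the referenced `docker-compose.dev.yml` might look roughly like this (the image versions, ports, and volume name are assumptions, not the commenter's actual file):

```yaml
# docker-compose.dev.yml -- hypothetical sketch; versions and ports are assumptions
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust   # acceptable for local dev only
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
    ports:
      - "6379:6379"

volumes:
  pgdata:
```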
The only issue is that foreman (the gem used by `bin/dev` to start multiple processes) doesn't have a way to mark one process as depending on another, so this relies on Docker starting the Postgres and Redis containers fast enough that they're up and running before Rails and Sidekiq need them. In practice it means we need to run the `docker compose` command manually the first time (and, I suppose, every time we update to new versions) so that Docker downloads the images and caches them locally.
For your issue, could you handle bringing up your docker-compose 'manually' in `bin/dev`? Maybe conditionally, by checking whether the images exist locally with `docker images`, then tear it down and run foreman once that completes?
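Something along those lines might look like the sketch below; the image names, the compose file name, and the Postgres readiness loop are all assumptions rather than a drop-in fix:

```sh
#!/usr/bin/env bash
# bin/dev -- hypothetical sketch: start Compose first, wait for Postgres,
# then hand off to foreman as Rails' generated script normally does.
set -euo pipefail

# Pull images up front if they aren't cached locally yet (image names assumed).
for image in postgres:16 redis:7; do
  if ! docker images -q "$image" | grep -q .; then
    docker pull "$image"
  fi
done

# Start the services detached so foreman doesn't race them.
docker compose -f docker-compose.dev.yml up -d

# Wait until Postgres actually accepts connections before starting Rails/Sidekiq.
until docker compose -f docker-compose.dev.yml exec postgres pg_isready -q -U postgres; do
  sleep 1
done

exec foreman start -f Procfile.dev "$@"
```

With something like this, the `docker:` entry could stay in `Procfile.dev` purely for log streaming (a second `up` against already-running services just attaches), or be dropped entirely.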
Yeah, and Python has been most of my exposure to local Docker, so that may be coloring my experience here. Running a couple of off-the-shelf applications in the background with Docker isn't too bad, but having it at the center of your dev workflow, where you're constantly rebuilding a container around your own code, is just awful in my experience.