I largely agree with this approach, but with 2 important changes:
1) "Next, make it possible to spin service up locally, pointing at production DB."
Do this, but NOT pointing at production DB. Why? You don't know if just spinning up the service causes updates to the database. And if it does, you don't want to risk corruption of production DB. This is too risky. Instead, make a COPY of the production DB and spin up locally against the COPY.
2) OP mentions a team of 3, all of them junior. Given the size of the mess, get at least one more experienced engineer onto the team (even on loan from another team). Failing that, hire an experienced consultant with a proven track record on your technology stack. What? No budget? How will things look when the house of cards comes crashing down and production goes offline? OP needs to communicate to upper management how dire the risk is and get their backing to start fixing it immediately.
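To make point 1 concrete, here is a minimal sketch of "clone prod, run against the clone." It assumes Postgres with pg_dump/pg_restore on PATH and a local Postgres you can create databases on; the connection strings, database names, and the DATABASE_URL convention are all assumptions about OP's stack, not something from the post. Adapt to whatever database the service actually uses.

```python
#!/usr/bin/env python3
"""Snapshot the production database into a disposable local copy,
so the service can be run locally without ever touching prod."""
import subprocess

# Read-only credentials against prod if at all possible (hypothetical DSNs).
PROD_DSN = "postgresql://readonly_user@prod-db.internal:5432/app"
LOCAL_DSN = "postgresql://postgres@localhost:5432/app_copy"
DUMP_FILE = "prod_snapshot.dump"

# 1) Take a dump of production. pg_dump only reads; it does not modify prod.
subprocess.run(["pg_dump", "--format=custom", "--file", DUMP_FILE, PROD_DSN], check=True)

# 2) Create a fresh throwaway database locally and restore the snapshot into it.
subprocess.run(["createdb", "app_copy"], check=True)
subprocess.run(["pg_restore", "--no-owner", "--dbname", LOCAL_DSN, DUMP_FILE], check=True)

# 3) Point the locally running service at the COPY, never at prod, e.g.:
#    DATABASE_URL=postgresql://postgres@localhost:5432/app_copy ./run_service_locally
```

If the production database is too large to dump wholesale, a schema-only dump plus a small, sanitized slice of data is usually enough to exercise the service locally, and it keeps customer data off developer machines.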
1) "Next, make it possible to spin service up locally, pointing at production DB."
Do this, but NOT pointing at production DB. Why? You don't know if just spinning up the service causes updates to the database. And if it does, you don't want to risk corruption of production DB. This is too risky. Instead, make a COPY of the production DB and spin up locally against the COPY.
2) OP mentions team of 3 with all of them being junior. Given the huge mess of things, get at least 1 more experienced engineer on the team (even if it's from another team on loan). If not, hire an experienced consultant with a proven track record on your technology stack. What? No budget? How would things look when your house of cards comes crashing down and production goes offline? OP needs to communicate how dire the risk is to upper management and get their backing to start fixing it immediately.