It's amazing how much misinformation and vagueness there is on this topic. I tried getting to the bottom of it in the following post on the OpenAI forum:
My research shows otherwise. In my experiments, tuning via transformer adapters did add new knowledge to QA models, and it could also be used for adversarial QA training. You can throw learned adapters away at any time and retrain from scratch with new information if they become stale. Fine-tuning this way is cheap and the artifacts are small (e.g., ~60 kB of data in an adapter). You can also customize a model in production for each individual customer by swapping adapters at inference time. Embeddings for very short-term facts and adapters for medium- to long-term information seems like the best combination.
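To make the adapter claims concrete, here is a minimal NumPy sketch of the LoRA-style mechanics being described: the base weight stays frozen, each "customer" is just a small pair of low-rank matrices, and swapping adapters at inference is a matter of choosing which pair to apply. The dimensions, helper names, and scaling are illustrative assumptions, not the commenter's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 512, 4  # hidden size and adapter rank (illustrative values)
W = rng.standard_normal((d, d)).astype(np.float32)  # frozen base weight

def make_adapter(seed):
    """One per-customer adapter: two small low-rank matrices (hypothetical helper)."""
    g = np.random.default_rng(seed)
    A = g.standard_normal((d, r)).astype(np.float32) * 0.01
    B = g.standard_normal((r, d)).astype(np.float32) * 0.01
    return A, B

def forward(x, adapter=None):
    """y = x @ W, plus the low-rank correction x @ A @ B when an adapter is attached."""
    y = x @ W
    if adapter is not None:
        A, B = adapter
        y = y + (x @ A) @ B
    return y

x = rng.standard_normal((1, d)).astype(np.float32)
cust_a = make_adapter(1)
cust_b = make_adapter(2)

# "Swapping adapters at inference time" is just choosing which (A, B) to apply.
y_base = forward(x)
y_a = forward(x, cust_a)
y_b = forward(x, cust_b)

# The adapter is tiny next to the base weight: 2*d*r vs d*d parameters,
# i.e. 4,096 vs 262,144 here -- about 16 kB in float32, the same order of
# magnitude as the ~60 kB figure above.
adapter_params = 2 * d * r
base_params = d * d
```

Because the base `W` is never modified, discarding or retraining an adapter leaves the original model untouched, which is the "throw away and retrain from scratch" property the comment describes.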
I can't determine whether this person knows what they are talking about or is an extreme amateur making things up and speaking confidently. They have other videos that are complete nonsense.
https://community.openai.com/t/fine-tuning-myths-openai-docu...
The bottom line is that fine-tuning does not seem to be a feasible option for adding new knowledge to a model for question answering.