It's true that once you have learned enough to tell the LLM exactly what answer you want, it can repeat it back to you verbatim. The question is how far short of that point you should stop, because the LLM is no longer an efficient way to make progress.
From a knowledge standpoint, an LLM can give you pointers at any stage.
There's no way it will "fall short".
You just have to improve your prompt. In the worst case, you can say, "Please list all the different research angles I could pursue from here and which of them is most likely to yield a useful result for me."
My skepticism flares up at sentences like "There's no way it will 'fall short'", especially in the face of so many first-hand examples of LLMs being wrong, getting stuck, or falling short.
I feel actively annoyed by the amount of public gaslighting I see about AI. It may get there in the future, but there is nothing more frustrating than seeing utter bullshit being spouted as truth.
For every problem that stops you, ask the LLM. With enough context it’ll give you at least a mediocre way to get around your problem.
It’s still a lot of hard work. But the only person who can stop you is you. (Which it looks like you’ve done.)
List the reasons you’ve stopped below, and I’ll give you prompts to get around them.