This was on the first release of ChatGPT, so I guess GPT-3.5. Pretty much like WASDx describes. In my case it was even more meta, because I was writing a script that was itself making ChatGPT API calls. The API call is network I/O, so it was fairly easy to rewrite as a multithreaded generator loop, which it got right on the first go. Nice speedup of about 10X, which I imagine put it right at the API rate limit.
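The kind of rewrite described above might look something like this sketch: a `ThreadPoolExecutor` wrapping a blocking call in a generator loop. The `call_api` function here is a hypothetical stand-in (a real script would call the OpenAI client instead of sleeping), but the threading structure is the point — the threads overlap the network waits, so total wall time approaches the latency of one call rather than the sum of all of them.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def call_api(prompt):
    # Hypothetical stand-in for a blocking ChatGPT API call;
    # the sleep simulates network latency.
    time.sleep(0.1)
    return f"response to {prompt}"

def parallel_responses(prompts, max_workers=10):
    # Generator that yields results in input order while the pool
    # runs up to max_workers calls concurrently.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        yield from pool.map(call_api, prompts)

prompts = [f"prompt {i}" for i in range(20)]
results = list(parallel_responses(prompts))
```

With 10 workers and 20 calls, the sequential ~2 seconds of simulated latency drops to roughly 0.2 seconds — the same ~10X shape as the speedup mentioned above.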
I did this already with Codex in the playground, and it definitely works for ChatGPT as well. Just paste code and tell it to make a loop run in parallel with X threads. I've had it produce code using either multiprocessing or asyncio.
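For the asyncio variant mentioned above, the generated code typically has this shape: `asyncio.gather` fanning out all the calls concurrently on one thread. The `call_api` coroutine is a hypothetical placeholder (a real version would await an async HTTP client or the async OpenAI client), shown here just to illustrate the structure.

```python
import asyncio

async def call_api(prompt):
    # Hypothetical stand-in for an awaitable API call;
    # the sleep simulates network latency without blocking the loop.
    await asyncio.sleep(0.05)
    return f"response to {prompt}"

async def run_all(prompts):
    # gather schedules every coroutine concurrently and
    # returns their results in input order.
    return await asyncio.gather(*(call_api(p) for p in prompts))

results = asyncio.run(run_all([f"prompt {i}" for i in range(20)]))
```

Compared with the threaded version, asyncio avoids thread overhead but requires the whole call chain to be async-aware, which is why a model might produce either depending on the code you paste.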
Sounds intriguing — could you elaborate? Which GPT did you use? What was the input like, and what did it produce?