> None of the prompt guides I've seen really cover pushing GPT3.5 to its limit, I've published one of my more complicated prompts[1] but getting GPT3.5 to output good responses in just this limited sense has taken a lot of work.
Completely agree. We use gpt-3.5 in our feature and it works really well! After my blog post detailing some of the issues [0], I got a lot of questions about how we got gpt-3.5 to "work well," because people found it wasn't working for them compared to gpt-4. Almost every time, the reason was that they weren't really doing good prompting and expected the magic box to just do magic. The answer is... prompt engineering is actual work, and with some elbow grease you can get gpt-3.5 to do a lot for you.
[0]: https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-...