If you had used ChatGPT to teach you the topic, you'd be writing in your own words.
Starting an answer with "I asked ChatGPT and it said..." almost always means the poster didn't double-check.
(The same goes for other systems: if you say "According to Google...", you're admitting you don't know much about the topic. That can occasionally be useful, but most of the time it's just annoying...)
I like to ask AI systems sports trivia. It's low-stakes, easy to check, and there's a ton of good, clean data out there.
They suck at sports trivia. They will confidently return information that is straight-up wrong [1]. This should be a walk in the park for an LLM, yet it fails spectacularly. How is that useful for learning at all?
I can use ChatGPT to teach me and help me understand a topic, or I can use it to hand me an answer that I copy-paste without double-checking.
Which of those you choose just shows how much you care about the topic at hand, no?