I mean, if you use SOTA models, they just about never hallucinate on these sorts of topics (elementary through lower-division college math, school-level history, science, etc.). I only ever see hallucinations when using non-SOTA models, when going through an intermediary product, when asking about obscure information, or occasionally when coding. I don't think any of that applies to being an elementary-through-high-school tutor.
Writing is probably the most difficult subject, because even now it's difficult to prompt an LLM to write with a human intonation - but they do a perfectly good job of explaining how to write (here's a basic essay structure, here are words we avoid in formal writing, etc.). You can even augment a writing tutor chatbot by making it use specific, human-written excerpts from English textbooks, instead of allowing it to generate example paragraphs and essays.
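As a rough illustration of that last point, here's a minimal sketch of how you might wire that up: the human-written excerpts go into the system prompt, and the tutor is told to quote only from them rather than invent its own examples. The excerpt text, function names, and prompt wording here are all hypothetical placeholders, and the resulting messages list would be passed to whatever chat-completion client you actually use.

```python
# Hypothetical, minimal sketch: constrain a writing-tutor chatbot to
# human-written example text instead of generated examples.

# Human-written model sentences/paragraphs (placeholder content) - in a real
# system these would come from textbook excerpts you have rights to use.
EXCERPTS = [
    "Topic sentence example: 'School lunches need reform for three reasons.'",
    "Formal-register example: write 'This result suggests' rather than 'This shows'.",
]

SYSTEM_PROMPT = (
    "You are a writing tutor for school students. Explain essay structure and "
    "formal word choice. When the student asks for an example sentence or "
    "paragraph, quote only from the excerpts below; do not write new examples "
    "yourself.\n\nExcerpts:\n" + "\n".join(f"- {e}" for e in EXCERPTS)
)

def build_tutor_messages(student_message: str) -> list[dict]:
    """Build a chat payload; send it with whatever chat-completion client you use."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": student_message},
    ]

if __name__ == "__main__":
    for msg in build_tutor_messages("Can you show me a good topic sentence?"):
        print(msg["role"].upper(), "->", msg["content"][:120])
```

That doesn't make the model write like a human, but it does keep the examples a student sees human-written, which is the part that matters for a tutor.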