A SQL database, with indexing configured correctly, allows you to look up a row in O(log(n)).
A bag of XML bytes doesn't give you that. At best you can run a SAX parser over it, which is an O(n) scan through the whole document to find anything. Most DOM implementations give you O(1) lookup by ID, but only after you've parsed the document, and that parse is itself O(n).
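To make the O(n) point concrete, here's a minimal sketch in Python's stdlib `xml.etree.ElementTree` (the document, element names, and the `id` attribute are invented for illustration): a streaming, SAX-style parse has to chew through records one by one until it happens upon the one you want.

```python
import io
import xml.etree.ElementTree as ET

# A small in-memory document standing in for a large XML file on disk.
xml = "<records>" + "".join(
    f'<record id="{i}">payload-{i}</record>' for i in range(1000)
) + "</records>"

# Streaming lookup: an O(n) scan until the wanted element turns up.
# There is no index to consult; every preceding record gets parsed too.
target = None
for _, elem in ET.iterparse(io.StringIO(xml)):
    if elem.tag == "record" and elem.get("id") == "777":
        target = elem.text
        break

print(target)  # payload-777
```

Even breaking out early, the expected cost is proportional to the document size; a lookup near the end still parses nearly everything before it.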
The problem isn't creating the XML file. The problem is querying it later: once you've dumped it from RAM to disk, you have to load the entire thing back into RAM to rebuild the DOM before you can ask it anything.
A database like SQLite allows you to perform structured queries at faster-than-O(n) speed straight off the disk.
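For contrast, a sketch of the SQLite side using Python's stdlib `sqlite3` (the table and data are made up; a real use case would point at a file on disk rather than `:memory:`). The `PRIMARY KEY` gives SQLite a B-tree, so the lookup walks O(log n) pages rather than reading every row:

```python
import sqlite3

# In-memory DB for illustration; with a file-backed DB the same query
# reads only the relevant B-tree pages off disk, not the whole file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [(i, f"item-{i}") for i in range(10_000)],
)

# Indexed lookup: O(log n), no full scan, no full parse.
row = conn.execute("SELECT name FROM items WHERE id = ?", (4321,)).fetchone()
print(row[0])  # item-4321
```

The key difference from the XML case is that the index is persisted alongside the data, so it doesn't have to be rebuilt on every open.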
Is there some reason why you can't save the DOM, or other data structures that help you query/manipulate the XML?
A database isn't just a set of flat-file CSVs, even though flat files are, strictly speaking, all you need to do everything people do with databases (albeit one slow operation at a time). (The data may be normalized, but the data structures aren't necessarily.)
Instead, a database is data together with auxiliary data structures (and the relevant code) that are maintained to speed up operations on that data.
Why can't one persist auxiliary data structures for XML, including (but not limited to) the DOM?
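Nothing stops you in principle. A minimal sketch of the idea (file names, element names, and the pickle-based format are all invented for illustration): do the O(n) parse once, build an id-to-byte-offset index, and persist it next to the XML, much as a database persists its B-trees. Later lookups load only the small index and seek straight to the record.

```python
import os
import pickle
import re
import tempfile

# Hypothetical setup: a big XML file of <record id="..."> elements, one per line.
xml_path = os.path.join(tempfile.mkdtemp(), "data.xml")
with open(xml_path, "w") as f:
    f.write("<records>\n")
    for i in range(1000):
        f.write(f'<record id="{i}">payload-{i}</record>\n')
    f.write("</records>\n")

# One-time O(n) pass: build an auxiliary id -> byte-offset index.
index = {}
with open(xml_path, "rb") as f:
    offset = f.tell()
    for line in iter(f.readline, b""):
        m = re.search(rb'id="(\d+)"', line)
        if m:
            index[int(m.group(1))] = offset
        offset = f.tell()

# Persist the index alongside the XML, like a database persists its indexes.
with open(xml_path + ".idx", "wb") as f:
    pickle.dump(index, f)

# Later: load the small index and seek straight to one record, no full reparse.
with open(xml_path + ".idx", "rb") as f:
    idx = pickle.load(f)
with open(xml_path, "rb") as f:
    f.seek(idx[777])
    record = f.readline().decode().strip()

print(record)  # <record id="777">payload-777</record>
```

The catch, of course, is that you now own the problems a database engine solves for you: keeping the index consistent with the XML across edits, crash recovery, concurrent access, and so on.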