When we make game-playing AI (which is all AI, depending on your analogy comfort), one of the most promising techniques is Tree Search, which ranks moves based on the outcomes of the moves that follow them. In games where the same state can be reached in many ways, a lot of memory can be wasted re-recording the same state as a separate node on each branch.
This article is a nice exploration of an approach called Graph Search, which essentially trades compute for memory: it does extra compute work (hashing the game states) to check whether a node has already been visited. That saves us from re-recording nodes we have already seen, and consequently converts trees (which are free of cycles) into directed acyclic graphs.
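The core of that trade can be sketched as a transposition table keyed by a hash of the game state. This is a minimal illustration of the idea, not the article's implementation; the names (`Node`, `GraphSearch`, `get_node`) are my own.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    state: tuple                                  # an immutable, hashable game state
    children: dict = field(default_factory=dict)  # move -> child Node


class GraphSearch:
    def __init__(self):
        # hash(state) -> Node: the table that de-duplicates states
        self.table = {}

    def get_node(self, state):
        key = hash(state)                  # extra compute: hash and lookup
        if key not in self.table:
            self.table[key] = Node(state)  # memory saved: one node per state
        return self.table[key]
```

Two different move orders that reach the same state now resolve to the same node object, turning the tree into a DAG.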
This forces some tinkering with the tree search to get correct results: specifically, it demands shifting the unit of optimization from vertices (states) to edges (actions or moves). It's a well-written technical essay in literate programming, written by someone who understands their subject.
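To make the edge-centric shift concrete: once a state is shared by several parents, a per-state visit count conflates paths, so the statistics live on the (state, action) pair instead. The sketch below assumes UCB1-style selection; the names (`Edge`, `ucb_score`, `update`) and the exploration constant are my own, not the article's.

```python
import math
from dataclasses import dataclass


@dataclass
class Edge:
    n: int = 0      # visit count for this (state, action) pair
    q: float = 0.0  # mean value observed after taking this action


def ucb_score(edge, parent_visits, c=1.4):
    # Unvisited edges are selected first.
    if edge.n == 0:
        return float("inf")
    return edge.q + c * math.sqrt(math.log(parent_visits) / edge.n)


def update(edge, reward):
    # Incremental mean: keeps q as the running average of rewards.
    edge.n += 1
    edge.q += (reward - edge.q) / edge.n
```

Because `n` and `q` hang off the edge rather than the node, a shared child node can be reached through many edges without its statistics being double-counted.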
I would argue that this saves both compute and memory for any significant search. A traditional tree search, without treating states as identical and revisitable, still requires you to store every node you visit in memory to track the N, Q, and child relations.
Applying the algorithm adds a hash associated with each node and a counter on each edge. In exchange, you de-duplicate identical states in your search space, saving the memory of duplicate nodes and, more importantly, saving compute by not running independent searches of the same state space. Whether this trade-off actually saves memory depends on how likely duplicate states are in your search space.
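How likely duplicates are can vary enormously by game. As a toy illustration (entirely my own example, not from the article): in a grid world with two moves, right and up, every permutation of the same moves lands on the same square, so the tree blows up while the state space stays small.

```python
from itertools import product

# All length-4 sequences of the two moves "right" (1,0) and "up" (0,1).
moves = [(1, 0), (0, 1)]
seqs = list(product(moves, repeat=4))

# The state each sequence reaches is just the component-wise sum.
states = [tuple(sum(c) for c in zip(*seq)) for seq in seqs]

total = len(states)        # tree nodes at depth 4: 2**4 = 16
unique = len(set(states))  # distinct states: (0,4), (1,3), ..., (4,0) = 5
```

Here a tree stores 16 depth-4 nodes where a hashed graph stores 5, and the gap widens exponentially with depth; in a game with few transpositions, the hash table would be pure overhead.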