Why did the researchers use ML models for the reconstruction, and risk completely incorrect, hallucinated results, when reconstructing a 3D volume from 2D slices is already a well-researched field?
If all of the layers were guaranteed to be orthographic, with no twisting, shearing, scaling, or squishing, and with a consistent origin... then yeah, there's a huge number of ways to just render that data.
But if you physically slice the layers first and scan them second, all manner of physical processes can make naive image stacking fail miserably.
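To make the contrast concrete: even the easy case, where slices differ only by a rigid translation, already needs explicit registration before stacking. Below is a minimal sketch of translation-only alignment via phase correlation (`estimate_shift` is a hypothetical helper, not from the paper); real EM pipelines need far more, since physical sectioning introduces nonlinear warps that a single shift cannot model.

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) roll that maps `moving` back
    onto `ref`, via phase correlation. Handles translation ONLY:
    twisting, shearing, and squishing break this assumption."""
    # Normalized cross-power spectrum: its phase encodes the shift.
    spec = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    spec /= np.abs(spec) + 1e-12
    corr = np.fft.ifft2(spec).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Peaks past the midpoint are negative shifts wrapped around.
    size = np.array(corr.shape)
    peak[peak > size // 2] -= size[peak > size // 2]
    return peak
```

Once the per-slice distortions stop being rigid, you are into elastic registration and, for ambiguous boundaries, learned models, which is where the ML comes in.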
The methods used here are state of the art. The problem is not just turning 2D slices into a 3D volume; it is, given the 3D volume, determining the boundaries between objects (i.e. neurons, glia, etc.), and therefore their 3D shapes, and identifying synapses.
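To illustrate why segmentation is a separate problem from volume assembly, here is a deliberately naive sketch: given a per-voxel boundary probability (which in practice comes from a learned model), it carves the volume into objects by connected components. All names here are hypothetical, and this is nothing like the paper's actual method; a single missed membrane voxel merges two neurons, which is exactly why the real techniques are hard.

```python
import numpy as np
from scipy import ndimage

def segment_volume(boundary_prob, threshold=0.5):
    """Toy segmentation: voxels whose predicted boundary probability
    is below `threshold` are grouped into objects by 3D connected
    components (face connectivity). Real connectomics pipelines are
    far more involved and still need human proofreading."""
    interior = boundary_prob < threshold
    labels, n_objects = ndimage.label(interior)
    return labels, n_objects
```

For example, a volume with one predicted membrane plane through the middle yields exactly two labeled objects, one on each side of the membrane.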
Although the article mentions artificial intelligence, their paper [1] never actually uses that term, and instead talks about their machine learning techniques. AFAIK, ML for things like cell segmentation is a solved problem [2].
There are extremely effective techniques, but it is not really solved: the current techniques still require human proofreading to correct errors, and only a fraction of this particular dataset has been proofread.