Yes, that has been my exact experience with folks who work within Colab and other Jupyter-like things:
1. They assume everyone has access to the same environment they do
2. They often don't understand anything about the infrastructure that's running their stuff
3. They produce very interesting work (such as this particular TTS work)
4. They drop 90% of their potential audience within 5 minutes because the bloody thing lives in a weird cloud-only environment or requires a nightmarish stack of dependencies to run on a local machine, and basically can't be simply integrated into a larger pipeline (e.g. called from a simple shell script).
My experience has been that getting ML researchers to pull their heads out of Colab's ass and learn to type things like "ls" and "cd" is really hard.
Jupyter has similar issues with bad environments, but they’re usually much closer to “didn’t get my dependency versions right” or “this could theoretically run with less junk” and things like that.
Colab is far worse: the pitch is “don’t worry about it, you can just clone”, and it’s been a toxic spill, eating away at the level of understanding in the ML community. Colab lets you put basically no effort into managing setup, dependencies, or data, and consequently it’s both amazing and, the moment you want to stop using it, fucking horrible, because everyone just builds their project “leaning on” Colab’s capabilities… it’s produced an entire shanty town of poorly managed ML projects propped precariously against the supports Colab provides.
I’m just glad it hasn’t sucked too much air out of Jupyter in the ML community, because stock Jupyter-based tools are at least easy enough to take apart and reverse-engineer: it’s a normal Python ecosystem, with no magic Google Drive data links, no custom Google TPU-specific libraries, no push-button magic clones of entirely hand-crafted environments.
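To make the “leaning on Colab” point concrete, here’s a minimal sketch of the portability guard a notebook needs before the same code can run anywhere else. The `google.colab.drive.mount` call is Colab’s real API; `DATA_DIR`, the Drive path, and the environment variable are hypothetical stand-ins for whatever a given project actually reads.

```python
import os

try:
    # Colab-only module: this import succeeds only inside Colab's runtime.
    from google.colab import drive
    drive.mount('/content/drive')
    # Hypothetical location of the project's data inside the mounted Drive.
    DATA_DIR = '/content/drive/MyDrive/project-data'
except ImportError:
    # Anywhere else (laptop, server, shell script), fall back to a plain
    # local path so the same code stays runnable outside Colab.
    DATA_DIR = os.environ.get('DATA_DIR', './data')

print(f"reading data from {DATA_DIR}")
```

Most notebooks in the wild are just the `try` half with no fallback, which is exactly the shanty-town failure mode: take away the Colab runtime and nothing stands up on its own.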