Local storage and networked storage both have their place. Local solid-state storage almost always gets you better performance because you avoid the network overhead entirely, unless you're talking about fancy arrays that can saturate 100G NICs.
Local storage falls apart the moment you need that storage accessible from another machine…
This doesn't say anything about how such storage would be accessed by the Mac and some other clients at the same time; it literally just runs iperf over Thunderbolt. As a matter of architecture, it leaves open the question of whether it would be preferable to attach the storage directly to the Mac and export it to the rest of the network from there.
Networked storage also falls apart the moment you need anything beyond basic file storage. So many filesystem features simply don't work over the network, most network file mounts don't support any kind of modern security, and the ones that do are often painfully slow.
Everything has its place, but I wasted so much time trying to get a NAS to work as storage for my home server.
Not to sound pedantic, but local storage and networked storage are not alternatives; they satisfy different needs.
If you need reasonable access to your storage from more than one computer, you need networked storage (of any kind).
Alternatively, if you are running, say, a database server and you know the primary storage can be local, it makes sense to choose better-performing local storage and, of course, plan for backups to networked storage.
As much as I agree with your pain points with networked storage, it's not like you can "replace" it with local storage everywhere.
I have dealt with a couple PB of networked storage for a research cluster: FreeBSD+ZFS served over NFS on a dedicated network to about 100 servers running recent versions of Linux. It worked like a charm, though we admittedly kept things simple in terms of filesystem features.
All clients mount the NFS shares they need via autofs, so a share is only mounted when it is actually accessed.
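Roughly what that looks like on a client, as a minimal sketch (the server name and paths here are made up for illustration):

    # /etc/auto.master.d/cluster.autofs
    /data  /etc/auto.data  --timeout=300

    # /etc/auto.data -- one key per share; mounted on first access, unmounted when idle
    scratch  -fstype=nfs4,rw,noatime  nfs01.example.org:/pool/scratch
    home     -fstype=nfs4,rw,noatime  nfs01.example.org:/pool/home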
ZFS, being ZFS, was just a marvel of software engineering: solid as a rock, while giving us transparent compression, superb in-memory read caching, and cheap snapshots.
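For anyone curious, those features are basically one-liners to turn on, something like this (pool and dataset names made up):

    # transparent compression on a dataset
    zfs set compression=lz4 tank/projects

    # cheap point-in-time snapshot, and listing snapshots for the dataset
    zfs snapshot tank/projects@2024-01-15
    zfs list -t snapshot tank/projects

    # ARC (in-memory read cache) hit/miss counters on FreeBSD
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses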
I have a Synology and use the iSCSI feature for VMs that aren't even in the same building as the NAS. What sort of issues aren't solved this way?
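For what it's worth, the Linux client side is just open-iscsi, roughly like this (the portal address and target IQN are made up):

    # discover targets exposed by the NAS, then log in to one
    iscsiadm -m discovery -t sendtargets -p nas.example.org
    iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p nas.example.org --login

    # the LUN then shows up as an ordinary block device (e.g. /dev/sdX) to format and mount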
Don't get me wrong, I have nothing but problems with NFS and SMB access on the Linux clients. Windows works perfectly, as in whatever I set on the Synology is exactly what I get on Windows.
I found basically nothing that would let me mount a filesystem as if it were natively attached, securely, over the local network, without going to real enterprise stuff I either couldn't understand or didn't have the hardware for.
Everything I found was basically FUSE-style: it looks kind of like local storage, but any time you want advanced features or high performance it falls over.
SSHFS was secure and easy to set up but was maxing out the CPU on my NAS and also did not support file permissions.
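For reference, the mount itself is a one-liner, which is part of the appeal; all the traffic goes over a single encrypted SFTP session, which is also why a small NAS CPU becomes the bottleneck (host and paths made up):

    sshfs admin@nas.example.org:/volume1/data /mnt/nas \
        -o reconnect,idmap=user,compression=no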
Yep, it is possible. Here is the official support page on how to set up a shared folder accessible by any machine using SMB [0], and here is one for access from a macOS machine [1] (which is a bit simpler for setting up certain advanced config options).
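If the client is Linux rather than macOS, the same share can be mounted over CIFS with something along these lines (server and share names made up):

    # keep the password in a root-only credentials file instead of on the command line
    sudo mount -t cifs //nas.example.org/shared /mnt/shared \
        -o credentials=/etc/samba/creds,vers=3.0,uid=1000,gid=1000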