We have considered this and may go down this route in the future. One thing we asked ourselves was what open sourcing actually provides. Usually it's driven by a desire for privacy or by cost savings from self-hosting, among other reasons.
Currently, our free version is self-hosted and monitors clusters with up to 64 GPUs. We feel this will work for many use cases, especially just to try it out. Monitoring GPUs typically requires you to deploy something where your GPUs live. Since you’re already installing software on your cluster, you might as well keep your data there too.
Your GitHub repo says you need 120 GB of persistent storage, but our bare-metal GPU clusters only have local storage. We'd like to try your thing, but hosting the data with the GPUs is a pretty big blocker for us.
Ah yes, here's how you solve that: install the Neurox Control Plane onto any regular Kubernetes cluster (it doesn't need GPUs, just persistent storage, e.g. EKS, AKS, or GKE), leaving out that last flag in the instructions: `--set workload.enabled=true` (<-- leave this out). More info: https://docs.neurox.com/installation/alternative-install-met...
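For context, a minimal sketch of what that control-plane-only install might look like, assuming a Helm-based install as in the docs. The repo URL, chart name, and release/namespace names below are placeholder assumptions, not the exact values from the instructions:

```
# Add the Neurox chart repo and install only the control plane.
# NOTE: repo URL, chart name, and namespace are hypothetical placeholders;
# use the exact values from the Neurox installation docs.
helm repo add neurox https://charts.neurox.example   # hypothetical repo URL
helm repo update

# Install on a regular (non-GPU) cluster that has persistent storage,
# e.g. EKS/AKS/GKE. Omitting `--set workload.enabled=true` means the
# GPU workload agent is NOT installed here, only the control plane.
helm install neurox-control neurox/neurox \
  --namespace neurox --create-namespace
```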
Then on your GPU cluster without persistent disk, you just need to install the Neurox Workload agent: in the Web Portal UI, click Clusters > New Cluster and copy/paste the snippet there.
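The snippet in the portal is generated per cluster, so don't reuse anything verbatim from here, but the general shape is presumably a Helm install of the same chart with the workload side turned on, roughly like this (chart name and the connection parameter are hypothetical placeholders):

```
# Run on the GPU cluster (no persistent storage required there).
# NOTE: chart name and the connection token flag are hypothetical;
# copy the actual snippet from Clusters > New Cluster in the Web Portal.
helm install neurox-workload neurox/neurox \
  --namespace neurox --create-namespace \
  --set workload.enabled=true \
  --set workload.connectionToken=<token-from-portal>   # hypothetical parameter
```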