You can never truly own anything. Even a hut in the forest will have property and municipal taxes attached to it. With domains it's really about who you are "renting" from. For example, many country TLDs are run by government-affiliated organizations, so you can feel pretty safe that you'll at least get fair treatment in any dispute. With generic TLDs, I'd generally stay away from them for serious projects.
People who make laws can't see the future, it's a constant race to curb corporate behavior that is deemed unacceptable and the laws evolve as new loopholes are found and exploited. Corporations know this and take it into account when making a cost/benefit calculation. A while ago Facebook was threatening to pull out of the EU. When nobody cared they ponied up the tech required to stay. They will this time as well.
People who make laws can see the future (well, sometimes they don't); it's the people who do not understand them that protest, because they do not see the immediate benefit for themselves, so the laws have to be changed to accommodate the snowflakes of the world.
Making laws is about governing, not about futurism.
A compromise has to be found every time a new law is proposed to be approved.
See for example the ban on ICE engines: one side wanted it NOW, the other side wanted it NEVER. The truth is, a good compromise was to accelerate as much as possible for heavy users (Amazon alone, for example, is responsible for more than 20% of global deliveries) and make some exceptions (5 years tops) for supercar luxury brands like Ferrari or Lamborghini, which sell a few thousand cars a year.
But things being as they are, people complained about Ferrari asking for an extension (for itself and other luxury brands) while at the same time complaining about the ban in general, because everyone owns a car.
The real problem is that people put in charge of making laws other people like them, not people better than them.
It's a vicious circle.
EDIT: the HN paradox: everyone knows what's bad, everyone has a very important job, but apparently things never improve because "politics".
Maybe things are harder than they look at first sight from the outside, and it's not about "people not seeing the future" but about the fact that the richer and more established a society is, the harder change becomes and the more people oppose it.
See for example how hard it is to convince American people that socialism is not a crime and that the USA could survive free health care paid for by everyone's taxes.
Does it mean that Americans are stupid and can't open Wikipedia and do some simple research, that they can't see the truth in front of their eyes? Or that they are simply afraid of change, because the system works for a large part of the population, the same part that basically runs the country and decides who gets elected?
The v1 devices never supported a real local API. The v2 devices, like the Awair Element, do have a local API built in. It does have to be enabled via the app, but it lets you hit the device's LAN address and get back real-time JSON with the sensor data. Not to say they couldn't figure out a way to brick those devices in the future, but you could theoretically turn on the Local API and then block the devices' internet access at your firewall to prevent future firmware updates.
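To illustrate the pattern described above, here is a minimal sketch of polling such a device over the LAN. The IP address is hypothetical, and the endpoint path and JSON field names are assumptions based on my reading of the Awair Local API docs; verify them against your own device.

```python
import json
import urllib.request

DEVICE_IP = "192.168.1.50"  # hypothetical LAN address of the Element

def fetch_latest(ip: str) -> dict:
    """Fetch the most recent sensor reading from the device's Local API.

    The /air-data/latest path is an assumption; check your device's docs.
    """
    url = f"http://{ip}/air-data/latest"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def summarize(reading: dict) -> str:
    """Pick a few fields out of the JSON payload (field names illustrative)."""
    return f"score={reading['score']} co2={reading['co2']}ppm pm25={reading['pm25']}"

# Usage against a real device on your LAN:
#   print(summarize(fetch_latest(DEVICE_IP)))
```

Because the device speaks plain HTTP on the LAN, this keeps working even with internet access firewalled off.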
One thing I've learned over the years is that it's good to "own" your SSID, and preferably stick to pro-grade routers that let you configure local network addressing. As long as you stick to Internet providers that let you run their hardware in "bridge" mode, it means you don't have to set up new WiFi networks at all.
The problem is that many people have less money to go around due to the cost of goods and rising interest rates, so it's harder than ever to take advantage of lower stock market prices.
Non-starter for me if it's not possible to self-host. Self-hosting means not only knowing that my data is private, but also that once the company behind the product dies, my data won't disappear forever.
>It is understandable that there is some hesitation to invest your time into adding your information to a new application and the obvious fear that it may just disappear one day. I want to reassure you that we are fully committed to Recall for the long term. But in the unlikely and unfortunate case that Recall doesn’t work out we will ensure that there is more than enough time for users to export their data so that it can be moved to another application. We would also open-source the project to allow users to host Recall themselves.
It's not good enough, because there are instances where the project might die but they decide not to open-source it.
For example, if they are acquired and shuttered (like when Fitbit bought and closed Pebble), if legal says they can't, or if the money needed to open-source it after the fact isn't available because they went bust and already owe creditors (you might need money for a license review to ensure that releasing the code doesn't violate any third-party licenses in software the project depends on).
It's funny how people are different, after using CapRover for years I tried Coolify but it felt like so much functionality was missing that I would not use it over CapRover in its current state. All the things you mentioned (GitHub webhooks, custom domains, SSL certs, reverse proxying) are supported by Cap. It also supports Docker Swarm.
When I looked at CapRover recently, the GitHub integration in the docs required manually setting up a GitHub Actions workflow, and I couldn't figure out exactly which parts I needed to change, since the example was for a NodeJS app. I'm assuming it needs to be added or changed for each repo you want to track pushes from.
In contrast, with Coolify you simply log in with your GitHub account and it installs a (one-time setup) GitHub App on your account that automatically sets up the webhooks. I didn't need to do any configuration after that; it just worked for all subsequent apps. That level of ease of use is the experience I had with Heroku, and I don't understand why more self-hosted PaaS offerings don't replicate it.
Also, I like the Coolify GUI over the CapRover one; dark mode plus a modern non-Bootstrap design is nice.
There are a bunch of ways to deploy via GitHub; I would check out this action, it's the easiest and most straightforward. There should be no need to adapt it for different frameworks:
https://github.com/marketplace/actions/caprover-deploy
You can also use the built-in webhook from inside the CapRover control panel, but it requires you to log in with your GitHub account in Cap (I created a secondary CI account for this).
Installing a GitHub App comes with challenges: it basically gives Coolify access to your account should they want it, since they control the app and not you. This is bad from a privacy perspective.
CapRover has dark mode. :-) But yes the design is more utilitarian in Cap and not as nice to look at.
Why would someone care about the look of a GUI dashboard for a PaaS and base their choice on that? Bootstrap, MUI, made by a designer: why should I care when there are dozens of more important points?
And UX-wise, both have done a pretty good job at being plenty sufficient.
The one thing I'm missing in CapRover, and that would make me switch instantly: I want to be able to feed it a docker-compose file to run a whole stack.
Right now, if a service only ships a docker-compose file, I have to pull it apart and start every service individually.
CapRover has a huge collection of "One Click Apps" that will deploy hundreds of fully functioning applications for you, including dependencies (for WordPress, for example, it will spin up both a web and a database container):
https://caprover.com/docs/one-click-apps.html
Also, I think the way you create different kinds of resources in Coolify is confusing. What is a "Git Source" or a "Destination"? All I want is to run my container. It's very different from Heroku's/Cap's interface, and not in a good way.
Also, why can't I just deploy from an image? In CapRover I often spin up a container, like `redis-commander` or `wordpress`, and then just configure its env vars and volume mounts. In Coolify it seems you have to connect a GitHub repo, which is not a desirable workflow for me.
I saw CapRover's one-click apps and I'm thinking of using a similar solution for Coolify.
Resources are defined this way because you can connect a lot of things to Coolify: GitHub and GitLab (hosted or self-hosted), later on Gitea and other git sources, and also different destinations: the local Docker engine, a remote Docker engine, and later on Kubernetes. I will improve the onboarding experience so the first use doesn't feel like a burden. :)
Git-based deployment is only required if you would like to deploy something custom that is currently not supported by Coolify. I'm working on a solution where you can use Docker Hub to deploy images (https://feedback.coolify.io/posts/6/deploy-from-docker-hub).
Dokku has a very easy solution for connecting databases and other services to an app with "Dokku link." Currently with Coolify I have to spin up a database, take the generated connection string, like postgres://user:pass@host/link, and paste it into my app's env file. It would be cool for Coolify to automatically inject those environment variables into my app when I link a service and an app together, like Dokku does. It just needs to tell me what env variable to look for, like DATABASE_URL, and then I can use that variable in my app.
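On the app side, the pattern above boils down to reading one injected variable instead of hard-coding credentials. A minimal sketch, assuming the PaaS (or you, manually) has set DATABASE_URL; the fallback URL and field names are just illustrative:

```python
import os
from urllib.parse import urlparse

def db_config_from_env() -> dict:
    """Parse a DATABASE_URL-style connection string from the environment.

    The fallback value below is a placeholder, not a real database.
    """
    url = urlparse(os.environ.get("DATABASE_URL",
                                  "postgres://user:pass@localhost:5432/app"))
    return {
        "user": url.username,
        "password": url.password,
        "host": url.hostname,
        "port": url.port or 5432,       # default Postgres port
        "dbname": url.path.lstrip("/"), # path component holds the db name
    }
```

The nice part of this convention is that linking, unlinking, or moving the database never touches application code; only the injected variable changes.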
Tangentially, I was looking at Render.com the other day and they also don't support zero-downtime deployments if you have a mounted volume, so this limitation isn't uncommon even on cloud platforms.
As for restarts, I haven't had that problem yet. I believe CapRover adds `restart: always` to all the running containers, so they should boot automatically. You might want to check the logs of the containers that don't restart, or just always hard-restart the server after a Docker update.
Every useful app has persistent data. It sounds like you have a design problem, though.
If you're treating application servers like cattle, then none of them should have persistent data (except disposable caches). There are fire-drill days that require deployments, so downtime incurred when updating your application code is unacceptable.
To solve this, put your persistent data on a separate server and store it in a proper database (Postgres, for example). Postgres never needs emergency updates, so the downtime for those (infrequent) updates can be pushed to off-peak hours.
That's fine, I wouldn't tell you otherwise. You just complained about a missing feature that isn't actually missing if you use the service properly and treat servers like cattle.
Well, yes. Some containers only read the persistent data, so it would be great if those could have zero-downtime deployments, since there's no risk of data corruption. But that option isn't offered to me.