It makes absolutely no sense to base this decision on the number of users. We have some applications that don't even have 10 users but still use k8s.
Try to understand the point that was made in the original comment: Kubernetes is a way to actually make infrastructure simpler to understand for a team which maintains lots of different applications, as it can scale from a deployment with just one pod to hundreds of nodes and a silly microservices architecture.
The point is not that every application might need this scalability; the point is that for a team maintaining lots of different applications, some internal, some for customers, some in private datacenters, some in the cloud, Kubernetes can be the single common denominator.
Hell, I'm a hardcore NixOS fan, but for most services I still prefer to run them in k8s simply because it's more portable. Today I might be fine with some service sitting on some box, running via systemd. But tomorrow I might want to run that service highly available on a cluster. With k8s that's simple, so why not do it from the start and just treat k8s as a slightly more powerful docker-compose?
I disagree; that seems like a pretty standard structure of one directory per app, with subfolders inside for configuration, secrets, and various opaque data. Not complicated at all, really.
Because it's a nonsensically small number, and it's contradicted by the graph immediately below it, which shows more energy (MWh) generated in individual hours than the supposed figure for the entire week. The peaks on the graph are ~40,000 MWh/hour, sustained over multiple hours (eyeballing it). The amount generated in the entire week can't be 17,000 MWh, a number smaller than that.
(For anyone unclear about units: the y-axis of the graph is GWh/h, the same as 10^3 MWh/h, or simply just 10^3 MW. The unit [MWh] = 10^6 [Watt]*[hour]).
Yes, the units are correct. But Germany is currently generating 33 GW of solar power (according to Electricity Maps), so 17 GWh would be generated after just 0.5 hours. For a whole week that value is way too low, even if it had been a cloudy week (which it wasn't).
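To make the mismatch concrete (taking the ~33 GW figure above at face value, so treat this as a rough sanity check):

    33 GW x 0.5 h ≈ 16.5 GWh ≈ 17 GWh

So the claimed weekly total corresponds to roughly half an hour of current midday output, not seven days of it.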
Some YouTuber talked about this and I think they were pretty on point: Of course for consumers this could all happen in some app on the phone.
But a 3rd party app will always be less integrated and have fewer permissions than functionality built in by the manufacturer.
And for all this AI integration, wide access is pretty much required, as you'd want it to access your photos, notes, all kinds of apps, etc.
That way manufacturers would have too much leverage over companies developing that kind of AI, as they could always build better features with their own AI agent than third parties can.
I think Apple Watch is a pretty good example of that already. Third party watches will never be as good as Apple Watch just because Apple won't let them.
Testcontainers is awesome and all the hate it gets here is undeserved.
Custom shell scripts definitely can't compete.
For example, one feature those scripts don't have is "Ryuk": a container that Testcontainers starts which monitors the lifetime of the parent application and stops all started containers when the parent process exits.
It lets the application itself define its dependencies for development, testing, and CI, without having to manually bring up a docker compose stack beforehand.
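As a rough sketch of what that looks like in a JVM project (the class name and the postgres:16 image are just placeholders): a JUnit 5 test declares the database container it needs, Testcontainers starts it on a random free port, and Ryuk reaps it when the process exits.

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;

    @Testcontainers
    class RepositoryIT {

        // Started before the tests; the Ryuk sidecar removes it when the
        // JVM exits, even if the build gets killed mid-run.
        @Container
        static PostgreSQLContainer<?> postgres =
                new PostgreSQLContainer<>(DockerImageName.parse("postgres:16"));

        @Test
        void canConnect() {
            // Testcontainers hands out the JDBC URL of the ephemeral database.
            String jdbcUrl = postgres.getJdbcUrl();
            // ... run migrations and queries against jdbcUrl
        }
    }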
One cool use case for us is also having an ephemeral database container that is started in a Gradle build to generate jOOQ code from tables defined in a Liquibase schema.
That's also not true? I'm navigating between pages, and it does get served from cache for all subsequent navigations.
The only case when this code gets loaded is literally the first cold load of the entire site — and it's only used for powering live editable interactive sandboxes (surely you'd expect an in-browser development environment to require some client-side code). It doesn't block the initial rendering of the page.
I think the issue isn't with the methodology (disabling cache), but rather with the erroneous conclusion that the react.dev website continually requesting data is somehow problematic, when it's really just a side effect of disabling the browser cache.
Also, FWIW, OP is one of the authors of react.dev and a member of the react core team (not that it's relevant to the objection).
What would "integration tests" (that you don't write) look then in your opinion?
I ask because in my team we also, for a long time, made the distinction between unit and integration tests based on a stupid technicality in the framework we're using.
We stopped doing that and now we mostly write integration tests (which, in reality, we had already been doing for a long time).
Of course this is all arguing over definitions and kind of stupid, but I do agree with the parent commenter's definition.
> What would "integration tests" (that you don't write) look then in your opinion?
In our local lingo, an integration test is one that also exercises the front-end, while hitting a fully functional back-end. So you could think of our "unit tests" as small back-end integration tests. If you think that way, we don't write very many pure unit tests, mostly just two flavors of integration tests. That works well for our shop. I'm not concerned about the impurity.
The "impurity" isn't the problem. The problem is that such integration tests take a longer time to run and in aggregate, it takes minutes to run your test suite. This changes how often you run your tests and slows down your feedback loop.
That's why you separate them: not because the integration test isn't valuable, but because it takes longer.
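For example (just a sketch; the tag name and class are made up), on the JVM you can tag the slow tests so the build tool runs them in a separate, less frequent task while the fast suite stays in the default test run:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Tagged so the default test task can exclude it and a dedicated
    // "integrationTest" task or CI stage can include it.
    @Tag("integration")
    class CheckoutFlowIT {

        @Test
        void completesAnOrderAgainstARealBackend() {
            // drives the front-end against a fully functional back-end
        }
    }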
The same reason most people stopped manually editing some random files via FTP to do deployments: to get a properly reproducible, automated, and monitored production environment.
I think there's a threshold below which this is just unnecessary infrastructure overhead, and I'd posit that most cron use cases fall below this threshold. If yours is above it, well and good.