This is a terrible article. I'm not even sure the author researched this before writing it. There's straight up false information about each of the languages.
IIRC, you can, but it's a bit elliptical: because IAM access policies can key off tags, you can assume an IAM role that can only see the desired tags, and list objects.
I'm quite tired of “What Color is Your Function”. In the five years since it was written I have yet to care about this supposed explosive problem in any language that uses async/await. It just seems like something that gets trotted out whenever anyone dares to suggest there are benefits to the approach or to talk about Rust. I'm not sure why I am supposed to accept it as meaningful truth.
One issue I have with this is that on many of the “battlegrounds” of the “culture wars”, non-participation is effectively the same as fighting for one particular side.
I think the people disagreeing with this greatly overestimate what is actually contained in a library like Redux. An entire ecosystem sprang up around it and most of it is completely unnecessary and overly complex. The base library itself is almost nothing at all and should not be hard for anyone with experience to independently create.
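For a sense of scale, here's a hedged sketch of what the core of a Redux-style store amounts to (TypeScript, illustrative only; not the actual Redux source, which also handles middleware and several edge cases):

```typescript
// Minimal Redux-style store: state + reducer + subscriptions.
type Reducer<S, A> = (state: S, action: A) => S;
type Listener = () => void;

function createStore<S, A>(reducer: Reducer<S, A>, initialState: S) {
  let state = initialState;
  const listeners = new Set<Listener>();

  return {
    getState: () => state,
    dispatch(action: A) {
      state = reducer(state, action);           // compute the next state
      listeners.forEach((listen) => listen());  // notify subscribers
    },
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);  // unsubscribe handle
    },
  };
}
```

The ecosystem layers a lot on top of that, but the base abstraction really is this small.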
> the Rust game dev community is overly fixated on ECS.
Not just Rust. The game dev community everywhere is infatuated with ECS.
It's basically cargo culting. There is a large base of amateur or indie game developers who want to feel like they are doing game development the "right" way. One big aspect of that is performance. ECS has a reputation for efficiency (which is true, when used well in a context where your performance problems are related to caching), so you see a lot of game devs slavishly applying it to their code in hopes that the "go as fast as a AAA game" Gods will land on their runway and deliver the goods.
Every time I see a new ECS framework in JavaScript, I die a little on the inside.
The problem is that software design in general is an incredibly messy field that's still basically in its infancy. Developers want simple solutions to complex design problems, or at least they want some decent architectural guidelines so they can avoid constantly reinventing the wheel, badly. Remember when MVC was all the rage?
ECS is good though, it's a perfectly solid answer to a lot of thorny design questions. Where problems frequently arise is when you try to jam absolutely everything in your game into the ECS structure. In practice you're probably going to have a lot of data which lives outside the system and is not attached to any entity.
Most of what I see about ECS is how much easier it is to have dynamic behaviors without inheritance, and it is, so I don’t see why it would be bad for newcomers to use it or to have an ECS lib written in JS.
> how much easier it is to have dynamic behaviors without inheritance
I think you're getting at the idea that instead of having objects differ in their behavior by overriding methods, you have them differ by having fields bound to objects that implement different behavior.
Assuming I understand you right, that's an excellent insight, but it's just the classic principle:
Favor object composition over class inheritance.
There's nothing new in that idea or anything special to ECS. Do you know how I know? Because that sentence is quoted directly from page 20 of "Design Patterns", which ECS and DoD are often claimed to be in direct opposition to. :)
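To make the principle concrete, a hedged sketch (TypeScript, all names illustrative): behaviour differs because of what an object holds, not what it inherits from.

```typescript
// Composition over inheritance: the entity delegates to a composed field.
interface Mover {
  move(dt: number): void;
}

class WalkingMover implements Mover {
  move(dt: number) { /* ground movement */ }
}

class FlyingMover implements Mover {
  move(dt: number) { /* aerial movement */ }
}

class GameObject {
  constructor(private mover: Mover) {}
  update(dt: number) {
    this.mover.move(dt); // behaviour comes from the composed object
  }
}

const goblin = new GameObject(new WalkingMover());
const bird = new GameObject(new FlyingMover());
goblin.update(1 / 60);
bird.update(1 / 60);
```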
> I think you're getting at the idea that instead of having objects differ in their behavior by overriding methods, you have them differ by having fields bound to objects that implement different behavior.
I guess I'm more getting at the idea of changing the game design at any point by adding or removing components, a way to make it easier for devs to cope with changing requirements (and they are always changing ofc), but you are right about favoring composition over inheritance, that by itself is pretty good.
I can't really talk about "things that are often claimed" and from the way you talk about this it seems like you have come across different opinions from mine on what ECS is or its value. Sad to see such a useful pattern get "corrupted", but I suppose that is inevitable.
Not only that, it is incredible how it gets trumpeted as a new idea, when stuff like COM, Objective-C protocols and plenty of other component-based architectures were already a subject of OOP-related papers in the late '90s.
It's a totally different composition pattern though. It's closer to trait-style multiple inheritance than the "has-a" composition typically used in object-oriented languages. Additionally, it intentionally breaks encapsulation, which is a key tenet of object-oriented design.
I wouldn't dismiss the JS ECS frameworks without measurement.
Polymorphism has costs, and while dynamic languages work hard to remove them, they still work best when they're able to monomorphize the call site, because that enables inlining without combinatoric explosion from chained polymorphic calls.
Having a single type in your array means field accesses, method calls etc. have the potential to be monomorphized. There are performance wins to laying out your data in ways that avoid the need for polymorphism.
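Roughly what I mean, as a hedged sketch (illustrative classes; the actual optimization behaviour depends on the engine):

```typescript
class Circle { area() { return 3.14; } }
class Square { area() { return 1; } }

// Monomorphic call site: every element has the same shape, so the engine
// can guard on one hidden class and inline `area()`.
const circles = [new Circle(), new Circle()];
let a = 0;
for (const c of circles) a += c.area();

// Polymorphic call site: two shapes flow through the same `s.area()` call,
// so the engine has to dispatch (or chain guards) on each iteration.
const mixed: Array<Circle | Square> = [new Circle(), new Square()];
let b = 0;
for (const s of mixed) b += s.area();
```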
> I wouldn't dismiss the JS ECS frameworks without measurement.
I think the burden of proof is on the JS ECS frameworks to show they do have better performance by virtue of DoD and, if so, why. JS engine programmers have been optimizing object-oriented code for literally forty years, all the way back to when they were making Smalltalk VMs.
If somehow a couple of folks hacking on ECS frameworks have managed to write code that runs faster on those JS engines than the kind of code they were designed for, I'd like to see it.
> Having a single type in your array means field accesses, method calls etc. have the potential to be monomorphized.
Sure, but object-oriented code does not require any more polymorphism than DoD does. Consider:
* Iterate over an array of monomorphic components and call a method on each one.
* Iterate over an array of monomorphic entities, access a monomorphic property, and call a method on the latter.
There's an extra property access in the latter (which can easily be inlined), but no polymorphic dispatch. In practice, yes, it is possible to reorganize your JavaScript code in ways that play nicer with inline caching and possibly even code caching. But I have never seen any evidence that JS ECS frameworks actually do that. Instead, the few I've poked around in seem like typical slow imperative dynamically-typed JS.
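Spelling out those two patterns as a hedged sketch (illustrative names, not from any particular framework):

```typescript
class Velocity {
  constructor(public dx = 0, public dy = 0) {}
  integrate(dt: number) { /* advance by dt */ }
}

// 1. Component-array style: iterate monomorphic components directly.
const velocities: Velocity[] = [new Velocity(1, 2)];
for (const v of velocities) v.integrate(1 / 60);

// 2. Entity style: iterate monomorphic entities, load a monomorphic
//    property, call a method on it. One extra (easily inlined) load,
//    still no polymorphic dispatch.
class Entity {
  velocity = new Velocity();
}
const entities: Entity[] = [new Entity()];
for (const e of entities) e.velocity.integrate(1 / 60);
```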
If someone is going to take a pattern that was invented specifically for a language like C++ that gives you precise control over memory layout, and then apply it to a language that not only doesn't give you that control but often uses hash tables to store an object's state, I think the burden of proof is on the framework to show that the pattern actually applies.
That one's pretty interesting. Here you can see they are putting real effort into thinking about the performance of the underlying VM. Using typed arrays is neat.
Modern JS engines won't use a hash table for the object state, they'll use a hidden class, and member accesses will be fixed offset indirect loads guarded by a type check. Initialize your objects carefully in a deterministic order, and I'd expect you can control field order and adjacency.
I'd expect the wins from reworking your JS so that your target JS engine lays out data better would often be larger than the wins in C++, simply because the worst case is so bad.
I'd add a third option to your pair: iterating over several arrays of properties - especially numeric properties - rather than an array of objects which each have numeric properties. That can get you substantial wins in Java, never mind JS.
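Something like this, as a hedged sketch (illustrative names):

```typescript
// Structure-of-arrays: parallel typed arrays of numeric properties instead
// of an array of objects. The layout is dense and homogeneous.
const COUNT = 10_000;
const posX = new Float64Array(COUNT);
const posY = new Float64Array(COUNT);
const velX = new Float64Array(COUNT);
const velY = new Float64Array(COUNT);

function step(dt: number) {
  for (let i = 0; i < COUNT; i++) {
    posX[i] += velX[i] * dt;
    posY[i] += velY[i] * dt;
  }
}

step(1 / 60);
```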
> Modern JS engines won't use a hash table for the object state, they'll use a hidden class, and member accesses will be fixed offset indirect loads guarded by a type check.
The type checks themselves have significant overhead, and it's easier to fall off the hidden-class fast path than you might expect.
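One hedged example of how easy it is (exact behaviour is engine-dependent; names are made up):

```typescript
// Two objects that are the "same" logical type but built differently end up
// with different hidden classes, so a shared access site goes polymorphic.
function makeA() {
  return { x: 1, y: 2 };      // fields initialized as x, then y
}

function makeB() {
  const o: { y: number; x?: number } = { y: 2 };
  o.x = 1;                    // field added later / in a different order
  return o;
}

function sum(p: { x?: number; y: number }) {
  return (p.x ?? 0) + p.y;    // this access site now sees two shapes
}

sum(makeA());
sum(makeB());
```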
> Initialize your objects carefully in a deterministic order, and I'd expect you can control field order and adjacency.
True, but that's equally true of non-ECS architectures. I have yet to see much evidence that the JS ECS frameworks I've looked at are actually taking that into account.
> ... so you see a lot of game devs slavishly applying it to their code in hopes that the "go as fast as a AAA game" Gods will land on their runway and deliver the goods.
I watched this sentence unfold with bated breath, waiting to yell "and ze sticks the landing!", only to see it end with "goods" instead of "cargo". It's frustratingly close to perfect, though perhaps to end the paragraph with "cargo", you'd have to begin with some synonym for cargo culting.
I agree that Rust is more suited to ECS than hierarchies. However, the choice is not between ECS or inheritance. The reason I say that the community is overly fixated on it is that many of the benefits attributed to ECS aren't unique to ECS.
I’d agree to a point, but it really runs the whole gamut of hobby game engine development. It’s weirdly self-reinforcing, even in the face of more interesting architectural choices like DOOM Eternal’s.
The way Eternal's engine is described, it could simply be an ECS with a more advanced job system for farming out work: it can analyze how the various objects are updated and only do reads after all writes to those objects have happened.
I believe it’s only been mentioned in passing and likely will be talked about at conferences soonish. Here’s a HN thread from when it was first talked about:
There was an early talk about using a job system to run the Destiny renderer, and then this one for the whole engine, which is a very similar premise to the way DOOM (2016) and then DOOM Eternal evolved.
I don't know if I think Helm is garbage, but I feel like it's almost always more trouble than it's worth. Most charts can be boiled down to just a couple of resources, and instead of sifting through its weirdo templates you might as well just look at the actual resource configs. It's almost always just a single Deployment or similar + RBAC.
I've actually been using Terraform to manage Kubernetes resources at my latest job, along with literally everything else. I don't even use alb-ingress-controller or external-dns or any of that. I just have Terraform make target groups from the service resources. It breaks a lot less.
But Helm barely adds any boilerplate over that 'single Deployment or similar + RBAC'? Just a Chart.yaml (a minimal one is very small) and the option to parameterise or templatise the resources.
I don't know why you wouldn't; we don't say 'I don't know if .deb/.rpm/PKGINFO/etc. is worth it, most packages are just a binary'.
For Helm to add LoC you'd have to all but not use it, and even then it'd be about 3 LoC. Setup is no harder, with N kubectl applies becoming 1 helm install.
I also started using Terraform (directly, instead of just triggering Helm) - are you using the Kubernetes TF resources directly though, or the Helm provider?
Most Helm charts I have seen and used have a Deployment YAML or StatefulSet or DaemonSet, plus a Service YAML, an optional Ingress YAML, sidecar containers for metrics, and several others I'm forgetting at the moment. And I would say most as in 90% of the charts on Helm Hub.
I would argue that Go's design as a whole is characterized by an attitude of ignoring established ideas for no other reason than that they think they know better.
Something being established is not a grand argument for its usage. The reasons it got established are relevant, and if you feel the end result of said establishment is obtuse or inane, why would you use it?
That's not to say Go's decisions to toss some established practices are "wise" or "sagely", just that broad acceptance is not a criterion they seemed concerned with. Which is fine.
> they think they know better.
It's safe to say Rob Pike is not clueless or without experience in Unix tooling. You should listen to some of his experiences and thoughts on designing Go [0]. I don't always agree with him, [but it's baseless to suggest he makes decisions on the grounds that they were his rather than that they have merit.]
Edit to clarify: [He makes decisions on merit over authority]
Sure, there's nothing that says established practice is better. That is not, in my opinion, a good defense of Go, which makes many baffling design decisions. Besides, an appeal to the authority of Rob Pike is surely not a valid defense if mine is not a valid criticism.
I'm (perhaps unfairly) uninterested in writing out all the details, but “they think they know better” is because I see Go as someone's attempt to update C to the modern world without considering the lessons of any of the languages developed in the meantime. And because of the weird dogmatic wars about generics, modules, and error handling.
In my opinion, the first is a terrible interface because I have no idea what any of the parameters do, not because there are so many. The second is a good interface to me because all of the values are labeled. I don't see any problem with having this many parameters when all of those parameters are relevant (see: Vulkan). Named parameters often implies default values too, which means you wouldn't have to specify them all.
I basically emulate this in other languages, like Rust, with an “options” struct that has default values.
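The same idea sketched in TypeScript terms (all names here are made up; in Rust it would be a struct with a Default impl):

```typescript
// Options-object pattern: every argument is labelled at the call site,
// and anything omitted falls back to a default.
interface CreateTextureOptions {
  width: number;
  height: number;
  mipLevels?: number;
  format?: "rgba8" | "bgra8";
}

const textureDefaults = { mipLevels: 1, format: "rgba8" as const };

function createTexture(options: CreateTextureOptions) {
  const opts = { ...textureDefaults, ...options };
  // ... use opts.width, opts.height, opts.mipLevels, opts.format
  return opts;
}

// Reads like named parameters, and only the relevant fields are spelled out:
createTexture({ width: 1024, height: 1024, format: "bgra8" });
```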