Since this article successfully got me to look at an example of their software and walked me right to the edge of their funnel: has anyone used Rill and can comment on its utility vs. just using Excel (which I already have) or something else?
AKA they're doing what every other aerospace company has been doing for decades: multidisciplinary design analysis and optimization (MDAO) [0] with simulation in the loop. If you asked them how they're leveraging Design of Experiments, I bet it'd be met with "design of what?".
In standard constrained optimization you know the constraints at compile time. In MDO, many constraints are generated at runtime and change constantly as you search for solutions.
In MDAO you definitely know all the constraints at compile time, but inequality constraints can be active or inactive as the optimizer progresses.
That's how SNOPT, IPOPT, presumably KNITRO, and other nonlinear programming optimizers work.
Yes MDAO is "just" constrained nonlinear optimization.
You come up with a model for your thing, which often involves multiple "disciplines" (mass, propulsion, aerodynamics, loads, trajectory/equations of motion), and then usually use some framework to calculate the constraint values, the objective value, and their gradients.
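To make the "active or inactive inequality constraints" point concrete, here's a toy sketch in Python. It is not any real MDAO framework or SNOPT/IPOPT itself — just a quadratic-penalty method on a made-up one-variable problem, where the inequality constraint is inactive early on and becomes active at the solution. All names and numbers are invented for illustration.

```python
# Toy constrained minimization via a quadratic penalty (illustrative only).
# Unconstrained minimum of the objective is at x = 2, but the inequality
# constraint g(x) = x - 1 <= 0 is active at the true solution x = 1.

def objective(x):
    return (x - 2.0) ** 2

def constraint(x):          # feasible when constraint(x) <= 0
    return x - 1.0

def penalized(x, mu):
    # penalty term is zero while the constraint is inactive (g <= 0),
    # and grows quadratically once the iterate violates it
    return objective(x) + mu * max(0.0, constraint(x)) ** 2

def solve(mu_schedule=(1.0, 10.0, 100.0, 1000.0), steps=2000):
    x = 0.0
    for mu in mu_schedule:          # tighten the penalty gradually
        lr = 0.4 / (1.0 + mu)       # step size scaled to the curvature
        for _ in range(steps):
            h = 1e-6                # central-difference gradient
            grad = (penalized(x + h, mu) - penalized(x - h, mu)) / (2 * h)
            x -= lr * grad
    return x

x_star = solve()
print(round(x_star, 2))  # close to 1.0: the inequality ends up active
```

Real NLP solvers do something far smarter (active-set or interior-point methods with exact gradients from the framework), but the same phenomenon drives them: which inequalities bind changes as the iterates move.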
I probably put down at least 100k lines of Rust and made 15 games of varying sizes from small jam games to much larger ones [0], [1].
It seems like everyone just wants to make the next big popular engine with Rust because it's "safe", and few people really want to make actual games.
I also felt that prototyping ideas was too slow because of all the manual casting between numeric types (very frequent in game code, which mixes lots of ints and floats, especially in procedural generation).
In the end... it just wasn't fun, and it was hard to tune game feel and mechanics because the ideation/iteration loop was slow and painful.
Don't get me wrong, I love the language syntax and the concept. It's just really not enjoyable to write games in it for me...
Maybe all homework could come in two parts with a 70/30 split in the grade. Everyone gets assigned the first 70%; if their solution trips a plagiarism detector, they are automatically assigned the remaining 30% of the work. Better yet, communicate that the system intentionally selects some people at random for the second part, even if they didn't plagiarize, like random airport screening.
As long as it's clearly communicated in the syllabus, it should be fine. If identical code submissions are really that common, then everyone should be doing the same quantity of work on average, and it shouldn't be an issue to automatically get assigned bonus problems.
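The "same quantity of work on average" claim is easy to sanity-check with a quick simulation. This is a sketch, not a real policy: the plagiarism rate, the screening rate, and the assumption that the detector always trips for plagiarists are all made up.

```python
import random

# Simulate the proposed 70/30 scheme: everyone does the first 70%;
# anyone flagged by the detector, plus a random fraction of everyone
# (airport-style screening), also gets the remaining 30%.

def average_workload(n=100_000, plagiarism_rate=0.2,
                     random_screen_rate=0.1, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        work = 0.7
        flagged = rng.random() < plagiarism_rate       # detector trips
        screened = rng.random() < random_screen_rate   # random screening
        if flagged or screened:
            work += 0.3
        total += work
    return total / n

print(round(average_workload(), 3))
# analytically: 0.7 + 0.3 * (1 - (1 - 0.2) * (1 - 0.1)) = 0.784
```

So with these (invented) rates, the average student does about 78% of the full assignment, and honest students pay only the small random-screening tax.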
It's not necessary to think about the data interface in terms of object orientation.
You can think of it as a composition of fields, each stored in its own array.
(Slightly beside the point: often fields are also stored in pairs or larger groups; for example, coordinates, slices, and so on are almost always operated on in the same function or block.)
The benefit comes from execution. If you have functions that iterate over these structs, they only need to load the arrays that contain the fields they need.
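Here's a minimal hand-rolled struct-of-arrays sketch in Python (type and field names are invented for illustration). Each field of the conceptual struct lives in its own array, so a function that only needs one field never touches the others — the point about loading only the arrays you use.

```python
# Struct-of-arrays: one list per field instead of a list of objects.

class Monsters:
    def __init__(self):
        self.x = []        # position fields, usually accessed together
        self.y = []
        self.health = []   # independent field with its own array

    def add(self, x, y, health):
        self.x.append(x)
        self.y.append(y)
        self.health.append(health)

def total_health(m):
    # Iterates a single contiguous array; positions are never loaded.
    return sum(m.health)

m = Monsters()
m.add(0.0, 1.0, 10)
m.add(2.0, 3.0, 5)
print(total_health(m))  # 15
```

In Python the memory-layout benefit is mostly lost to object boxing, but in a systems language the same shape means the `health` pass streams one cache-friendly array instead of striding over whole structs.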
Zig does some comptime magic here (e.g. std.MultiArrayList) to make this happen automatically.
An ECS does something similar at a fundamental level. But usually there's a whole bunch of additional stuff on top, such as mapping ids to indices, registering individual components on the fly, selecting components for entities, and so on. So it can be a lot more complicated than what is presented here, and more happens at runtime. It's also a bit of a one-size-fits-all kind of deal.
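To show what that "additional stuff on top" looks like, here's a deliberately tiny ECS-style sketch in Python. It uses sparse per-component maps rather than the dense id-to-index arrays a serious ECS would use, and every name here is invented; real ECSs do far more.

```python
# Minimal ECS-ish world: entities are just ids, components are registered
# at runtime in per-component stores, and queries join on shared ids.

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}   # component name -> {entity id -> value}

    def spawn(self):
        eid = self.next_id
        self.next_id += 1
        return eid

    def attach(self, eid, name, value):
        self.components.setdefault(name, {})[eid] = value

    def query(self, *names):
        # yield every entity that has all requested components
        stores = [self.components.get(n, {}) for n in names]
        for eid in stores[0]:
            if all(eid in s for s in stores[1:]):
                yield eid, tuple(s[eid] for s in stores)

w = World()
e = w.spawn()
w.attach(e, "pos", (0, 0))
w.attach(e, "vel", (1, 2))
for eid, (pos, vel) in w.query("pos", "vel"):   # a "movement system"
    w.attach(eid, "pos", (pos[0] + vel[0], pos[1] + vel[1]))
print(w.components["pos"][e])  # (1, 2)
```

Even this toy version shows the runtime cost the comment mentions: the query has to do membership checks and lookups per entity, work that the hand-tailored struct-of-arrays approach resolves at compile time.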
The article recommends watching Andrew Kelley's talk on DoD, which inspired the post. I agree wholeheartedly; it's a very fun and interesting one.
One of the key takeaways for me is that he didn't just slap on a design pattern (like ECS); he went through each piece individually, thought about memory layout and execution, weighed the trade-offs of storing information versus recalculating it, and did measurements and back-of-the-envelope calculations.
So the end result is a conglomerate of cleverly applied principles and lessons learned.
More like they used reflection to take a struct and generate an SOA collection for that type. Funnily enough, they skip the part where you actually get at the arrays, and focus on deconstructing and reconstructing the struct type.
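The reflection trick transplants neatly to Python, so here's a sketch of the idea under discussion: inspect a struct-like type's fields, build one array per field, and provide both direct array access and per-row struct reconstruction. The `Particle`/`Soa` names are invented for this example.

```python
from dataclasses import dataclass, fields

@dataclass
class Particle:
    x: float
    y: float
    mass: float

class Soa:
    """Reflection-generated struct-of-arrays for any dataclass type."""

    def __init__(self, struct_type):
        self.struct_type = struct_type
        # one backing list per declared field, discovered via reflection
        self.arrays = {f.name: [] for f in fields(struct_type)}

    def push(self, item):
        # deconstruct: scatter the struct's fields into their arrays
        for name, arr in self.arrays.items():
            arr.append(getattr(item, name))

    def get(self, i):
        # reconstruct: gather row i of every array back into a struct
        return self.struct_type(**{n: a[i] for n, a in self.arrays.items()})

soa = Soa(Particle)
soa.push(Particle(1.0, 2.0, 3.0))
soa.push(Particle(4.0, 5.0, 6.0))
print(soa.arrays["mass"])  # the part the comment says got skipped: [3.0, 6.0]
print(soa.get(1))          # Particle(x=4.0, y=5.0, mass=6.0)
```

Exposing `soa.arrays["mass"]` directly is exactly the payoff the comment notes is missing: a pass over one field can iterate one array without ever reconstructing whole structs.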
Agreed. If you aren't using Monte Carlo methods in your algorithms, then your problem probably isn't hard enough, or your solutions are fragile to variance.
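A tiny sketch of what "fragile to variance" means in practice: instead of scoring a design at one nominal operating point, sample perturbed conditions and look at the spread. The `performance` function and the noise level here are made up purely for illustration.

```python
import random

def performance(design, disturbance):
    # hypothetical response: quadratic loss away from the sweet spot
    return -(design - 2.0 - disturbance) ** 2

def monte_carlo(design, trials=50_000, seed=0):
    # evaluate the design under randomly perturbed conditions
    rng = random.Random(seed)
    samples = [performance(design, rng.gauss(0.0, 0.5))
               for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return mean, var

mean, var = monte_carlo(2.0)
print(round(mean, 3), round(var, 3))
# even the nominally optimal design loses roughly the noise variance
# in expectation; a point evaluation at disturbance = 0 would score it 0
```

The point is that a deterministic evaluation hides exactly the sensitivity that kills fragile solutions, while a few thousand cheap random samples expose it.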