I designed some parts for an enclosure over the weekend using Claude Opus to generate an OpenSCAD file, which I then exported to STL and sent to a 3D printer. I was able to get a visually-correct-enough STL off to the printer in about 5 minutes.
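For concreteness, here's a rough sketch of that workflow in Python (the dimensions and file names are placeholders, not the actual part):

```python
import subprocess

# Sketch of the generate-OpenSCAD-then-export-STL loop; all dimensions
# below are made-up placeholders, not the real enclosure's.
wall, w, d, h = 2, 60, 40, 25  # mm
scad = f"""
difference() {{
    cube([{w}, {d}, {h}]);                       // outer shell
    translate([{wall}, {wall}, {wall}])
        cube([{w - 2*wall}, {d - 2*wall}, {h}]); // cavity, open at the top
}}
"""
with open("enclosure.scad", "w") as fh:
    fh.write(scad)

# The openscad CLI infers the output format from the -o extension.
subprocess.run(["openscad", "-o", "enclosure.stl", "enclosure.scad"], check=True)
```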
I then waited about an hour for the print to finish, only to discover I wanted to make some adjustments. While I was able to iterate a couple times, I quickly realized that there were certain structures that were difficult to describe precisely enough without serious time spent on wording and deciding what to specify. It got harder as the complexity of the object grew, since one constraint affects another.
In the end, I switched to FreeCAD and did it by hand.
> If you want an Electron app that doesn't lag terribly
My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.
I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:
- Visual Studio Code: 589.4 MB
- Visual Studio 2022: 732.6 MB
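For anyone who wants to reproduce this, here's a rough sketch of how to sum it up (assuming psutil is installed; the process-name fragments are my assumptions for Windows):

```python
import psutil

# Sum resident memory across all matching processes, since both editors
# spawn a tree of helper processes rather than a single one.
def rss_mb(name_fragment: str) -> float:
    total = 0
    for p in psutil.process_iter(["name", "memory_info"]):
        if name_fragment.lower() in (p.info["name"] or "").lower():
            total += p.info["memory_info"].rss
    return total / 1e6

print("VS Code:", rss_mb("Code.exe"), "MB")      # assumed process name
print("Visual Studio:", rss_mb("devenv"), "MB")  # assumed process name
```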
Update:
I see a lot of love for JetBrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!
I easily notice lag in VS Code even without plugins, especially when using it right after Zed. Not gonna lie, they made it astonishingly fast for an Electron app, but there are physical limits to what can be done on a web stack with garbage-collected JS.
That easily takes the prize for worst-designed benchmark, in my opinion.
Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than to text editors.
- notepad++: 56.4 MB (went gray-window unresponsive for 10 seconds when opening the explorer)
- notepad.exe: 54.3 MB
- emacs: 15.2 MB
- vim: 5.5 MB
I would argue that notepad++ is not really comparable to VS Code, and that VS Code is closer to an IDE, especially given the context of this thread. TUIs don't offer a similar GUI app experience, but vim serves as a nice baseline.
I think that when people dump on Electron, they are picturing an alternative implementation like Win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when these alternatives are suggested.
It is obviously possible to beat a browser wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where dependency bloat and memory management are often even worse.
I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
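(For reference, that setting typically lives in the IDE's .vmoptions file; the exact filename varies by product, e.g. idea64.vmoptions for IntelliJ IDEA, and -Xmx is the standard JVM max-heap flag:)

```
# idea64.vmoptions (or Help > Change Memory Settings in the IDE)
-Xmx8g
```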
> RAM is so incredibly cheap compared to 5/10/15/20 years ago
Compared to 20 years ago, that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes, RAM actually costs more today than it did 10 years ago. If we ignore spikes and buy when the memory price cycle is low, DDR3 in 2012 cost not much more than what DDR5 has been sitting at for the last two years.
> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects: front end in WebStorm, back end in IntelliJ IDEA, sometimes DB in DataGrip), it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.
I had that issue with 32 GB of RAM on my work laptop, in part because the services themselves each took between 512 MB and 2 GB of memory to run (thanks to Java and Spring Boot).
Anyone saying that the Java-based JetBrains IDEs are less lightweight than Electron-based VS Code is living in an alternate universe that can't be reached by rational means.
I frequently use a docker-compose template with the Prometheus Pushgateway + Grafana for deploying on single-node servers, as described at the start of the article. It works well and is trivial to set up, but the complexity explodes once your metric volume or cardinality requires more scale and you reach for Prometheus alternatives à la Mimir.
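Roughly the shape of that template, as a minimal sketch (it assumes a prometheus.yml next to it with a scrape config pointing at pushgateway:9091):

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports: ["9090:9090"]
  pushgateway:
    image: prom/pushgateway
    ports: ["9091:9091"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```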
I think this wouldn't need to be an issue as frequently if Prometheus had a more efficient publish/scraping mechanism. IIRC there was once a protobuf metric format that was dropped, and now there is just the text format. While it wouldn't handle billions of unique labels like Mimir, a compact binary metric format could certainly allow for millions at reasonable resolution, instead of wasting all that scale potential on repeated name strings. I should be able to push or expose a bulk blob all at once, with ordered labels or at least raw int keys.
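Back-of-envelope on what the repeated strings cost: the sketch below compares 1000 series in the text exposition format against a hypothetical packed encoding (the binary layout is made up for illustration, not any real Prometheus format):

```python
import struct

# 1000 series in the text format: metric name and label names are
# repeated on every single line.
text = "".join(
    f'http_requests_total{{method="GET",path="/api/v{i}",status="200"}} {i * 10}\n'
    for i in range(1000)
)

# Hypothetical compact encoding: intern the varying label values once,
# then emit fixed-width rows of int keys plus the sample value.
string_table = "\x00".join(f"/api/v{i}" for i in range(1000)).encode()
rows = b"".join(struct.pack("<IIId", 0, i, 1, i * 10.0) for i in range(1000))

print(f"text: {len(text.encode())} bytes, binary: {len(string_table) + len(rows)} bytes")
```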
"Everything" is another that puts the default search to shame. I've also seen people who just have a script that pumps all new files into a txt file every so often and runs bruteforce ripgrep on it, which gives instant interactive results. It's really hard to imagine coming up with a search routine that is as slow and unreliable as what ships with mainstream OS file managers.
I hear this a lot, but can you really say that you're consistently saturating a 1 Gbps line for netcode, or a 6+ GB/s NVMe drive for disk data? In my experience this doesn't really happen with code that isn't intentionally designed to minimize unnecessary work.
A lot of slow parsing tends to get grouped in with I/O, and this is where Python can be most limiting.
I don't personally use Python directly for super I/O-intensive work. In my common use cases, that's nearly always waiting for a database to return or for a remote network API to respond. In my own work, I'm saturating neither disk nor network; my code often finds itself waiting for some other process to do that work on its behalf.
There's currently talk of adding gigawatts of data center capacity to the grid just for use cases where Python dominates development. While a lot of that will be compiled into optimized kernels on CPU or GPU, it only takes a little bit of 1000x-slower code to add up to a significant chunk of processing time at training or inference time.
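Quick back-of-envelope on why "a little bit of 1000x slower" adds up:

```python
# If a fraction f of the work (measured at native speed) instead runs
# 1000x slower, total runtime relative to all-native is (1 - f) + 1000*f.
for f in (0.001, 0.01, 0.05):
    print(f"{f:.1%} at 1000x -> {(1 - f) + 1000 * f:.0f}x total runtime")
# 0.1% -> 2x, 1% -> 11x, 5% -> 51x
```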
What percentage of the CPU cycles are actually spent running Python though? My impression is _very_ low in production LLM workloads. I think significantly less than 1%. There are almost certainly better places to spend the effort, and if it did matter, I think they would replace Python with something like C++ or Rust.
It shouldn't take any noticeable power/cycles to accomplish this task. Having "performance" flags littered through the codebase and UI is a classic failure mode that leads to janky, slow baseline performance. "Do always, and inhibit when not needed."
> However those tools do not have the polish that ESRI kit does, but at least you're not paying the licensing!
Arguably they have more polish, but less GUI. The open GIS world is more CLI-, database-, and library-driven, which can be an advantage for many users trying to build high-reliability or scalable systems.
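For example, a typical library-driven job in the open stack looks like this (a sketch assuming GeoPandas; the file names are hypothetical):

```python
import geopandas as gpd  # assumes the GeoPandas stack is installed

# Read, reproject, buffer, write: no GUI anywhere in the loop.
parcels = gpd.read_file("parcels.gpkg")            # hypothetical input
parcels = parcels.to_crs(epsg=3857)                # reproject to web mercator
parcels["geometry"] = parcels.geometry.buffer(10)  # 10 m buffer
parcels.to_file("parcels_buffered.gpkg", driver="GPKG")
```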
Potentially this is something Claude Code, Cursor, and other interactive LLM tools will disrupt. At work I watched a former GIS consultant without a software engineering background delve into a novel Python stack with Claude, and hot damn … they are delivering value.
> so now the main problem is building the hardware, there are a lot of solutions for the software part.
While cool and all, this type of sim is a tiny, tiny slice of the software stack, and not the most difficult by a long shot. For one, you need software to control the actual hardware, which has to run on said hardware's specific CPU(s) AND in sim (making an off-the-shelf sim a lot less useful). Orbital/Newtonian physics are not trivial to implement, but they are relatively simple compared to the software that handles integration with physical components, telemetry, command, alerting, path optimization, etc. The phrase "reality has a surprising amount of detail" applies here: it takes a lot of software to model complex hardware correctly, and even more to control it safely.
As a point of comparison, I just downloaded the Windows binary for duckdb (which provides a nice TUI for similar tasks) and it was 9.84 MB. People can and should expect better.
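(For what it's worth, the Python package is similarly direct; a sketch, with 'data.csv' as a hypothetical file:)

```python
import duckdb  # pip install duckdb

# Query a CSV in place, no import step, same as the CLI shell.
duckdb.sql("SELECT * FROM 'data.csv' LIMIT 5").show()
```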