It's not just idle; it's a bit more aggressive about cleaning up sooner after objects are de-referenced, meaning the OS gets the memory back sooner. Very useful in situations where you have many small containers/VMs running .NET stuff, or on large applications where memory pressure is an issue and your usage pattern can benefit from memory being released earlier.
In the old days you could tune IIS + the garbage collector manually to get similar behaviour, but it was usually not worth it. Time was better spent optimizing other things in the pipe and living with the GC freezes. I suspect GC hiccups should now be much smaller/shorter too, as the overall load will be lower with the new GC.
That aggressive release behavior is exactly what we need more of. Most runtimes (looking at you, legacy Java) just hoard the heap forever because they assume they are the only tenant on the server.
In C#'s dependency injector you basically have to choose from 3 lifetimes for a class/service: scoped, transient or singleton.
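For anyone who hasn't touched it, this is roughly what the three registrations look like in a minimal ASP.NET Core app; the service names (IClock, IUnitOfWork, IEmailSender) are made up for illustration:

    // Program.cs in a minimal ASP.NET Core app (sketch, not a full project).
    var builder = WebApplication.CreateBuilder(args);

    // Singleton: one shared instance for the whole application lifetime.
    builder.Services.AddSingleton<IClock, SystemClock>();

    // Scoped: one instance per request (per DI scope).
    builder.Services.AddScoped<IUnitOfWork, EfUnitOfWork>();

    // Transient: a fresh instance every time the service is resolved.
    builder.Services.AddTransient<IEmailSender, SmtpEmailSender>();

    var app = builder.Build();
    app.Run();

    // Hypothetical services, stubbed out so the sketch compiles.
    public interface IClock { DateTime UtcNow { get; } }
    public sealed class SystemClock : IClock { public DateTime UtcNow => DateTime.UtcNow; }
    public interface IUnitOfWork { }
    public sealed class EfUnitOfWork : IUnitOfWork { }
    public interface IEmailSender { }
    public sealed class SmtpEmailSender : IEmailSender { }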
Scoped & transient lifetimes along with the new GC will make the runtime much leaner.
Some applications are singleton-heavy or misuse the MemoryCache (or wrap it in a singleton... facepalm) - these will still mess the GC situation up.
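To make the cache point concrete: AddMemoryCache() already registers IMemoryCache as a singleton, so wrapping it in your own singleton just adds a layer that tends to hide entries from eviction. A minimal sketch of the saner setup; the size limit, expiration and /product endpoint are illustrative assumptions, not recommendations:

    using Microsoft.Extensions.Caching.Memory;

    var builder = WebApplication.CreateBuilder(args);

    // Registers IMemoryCache as a singleton with a bounded size,
    // so entries can actually get evicted and collected.
    builder.Services.AddMemoryCache(options => options.SizeLimit = 1024);

    var app = builder.Build();

    app.MapGet("/product/{id}", (int id, IMemoryCache cache) =>
        cache.GetOrCreate($"product:{id}", entry =>
        {
            entry.SetSize(1);                                     // counts against SizeLimit
            entry.SetAbsoluteExpiration(TimeSpan.FromMinutes(5)); // don't hoard forever
            return LoadProductFromDb(id);
        }));

    app.Run();

    // Hypothetical loader standing in for a real database query.
    static string LoadProductFromDb(int id) => $"product-{id}";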
If you build web/API projects it pays dividends to educate yourself on the 3 lifetimes, how the GC works, how async/await/cancellation tokens/disposables work, how MemoryCache works (and when to go out-of-process / to another machine, aka Redis), how the built-in mechanisms in ASP.NET work to cache HTML output, etc. A lot of developers just wing it and then wonder why they have memory issues.
And for the dinosaurs: yes, we can use the dependency injector in Windows Forms, WPF, console apps and so on - those packages aren't limited to web projects alone.
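A minimal console sketch, assuming the Microsoft.Extensions.DependencyInjection package is referenced (the greeter service is a made-up example):

    using Microsoft.Extensions.DependencyInjection;

    // Same container as ASP.NET Core uses, no web stack required.
    var services = new ServiceCollection();
    services.AddTransient<IGreeter, ConsoleGreeter>();

    using var provider = services.BuildServiceProvider();
    provider.GetRequiredService<IGreeter>().Greet("world");

    public interface IGreeter { void Greet(string name); }
    public sealed class ConsoleGreeter : IGreeter
    {
        public void Greet(string name) => Console.WriteLine($"Hello, {name}!");
    }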
It works great in C# (where you have strong typing + strict compiler).
Try this:
Have a look at xyz.cs. Do a full audit of the file and look for any database operations in loops that can be pre-filtered.
Or:
Have a look at folder /folderX/ and add .AsNoTracking() to all read-only database queries. When you are done, run the compiler and fix the errors. Only modify files in /folderX/ and do not go deeper in the call hierarchy. Once you are done, do a full audit of each file and make sure you did not accidentally add .AsNoTracking() to tracked entities. Do not create any new files or make backups, I already created a git branch for you. Do not make any git commits.
Or:
Have a look at the /Controllers/ folder. Do a full audit of each controller file and make sure there are no hard-coded credentials, usernames, passwords or tokens.
Or: Have a look at folder /folderX/. Find any repeated hard-coded values, magic values and literals that will make good candidates to extract to Constants.cs. Make sure to add XML comments to the Constants.cs file to document what the value is for. You may create classes within Constants.cs to better group certain values, like AccountingConstants or SystemConstants etc.
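For anyone not deep in EF Core, the .AsNoTracking() prompt above boils down to a change like this; AppDbContext, Order and OrderReader are hypothetical names for illustration:

    using Microsoft.EntityFrameworkCore;

    public sealed class Order
    {
        public int Id { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    public sealed class AppDbContext : DbContext
    {
        public DbSet<Order> Orders => Set<Order>();
    }

    public sealed class OrderReader
    {
        private readonly AppDbContext _db;
        public OrderReader(AppDbContext db) => _db = db;

        // Tracked: EF keeps a change-tracker entry for every returned
        // entity, which is wasted memory on a read-only path.
        public Task<List<Order>> GetRecentTracked() =>
            _db.Orders.Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7)).ToListAsync();

        // Untracked: same results, fewer allocations. Safe only because
        // nothing returned here is modified and saved back.
        public Task<List<Order>> GetRecent() =>
            _db.Orders.AsNoTracking()
                .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
                .ToListAsync();
    }

This is also why the prompt tells it to audit for tracked entities afterwards: if something down the line mutates one of those objects and calls SaveChanges(), the no-tracking version silently stops persisting it.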
These kinds of tasks work amazingly well in Claude Code and can often be one-shotted. Make sure you check your git diffs - you cannot and should not blame AI for shitty code - it's your name next to the commit, make sure it is correct. You can even ask Claude to review the file with you afterwards. I've used this kind of approach to greatly increase our overall code quality & performance tuning - I really don't understand all the negative comments, as this approach has chopped days' worth of refactoring down to a couple of minutes and hours.
In places where you see your coding assistant is slow, making mistakes, or going line by line where you know a simple regex find/replace would work instantly, ask it to help you create a shell script that does task xyz as a tool for itself to call. I've made a couple of scripts this way that Claude can call locally to fix certain code patterns in 5 seconds; done line by line it would've taken it (and me checking it) 30 minutes at least, and the script won't eat up context or tokens.
My biggest problem with LLMs at this point is that they produce different and inconsistent results, or behave differently, given the same prompt. Better grounding would be amazing here. I want to give an LLM the same prompt on different days and be able to trust that it will do the same thing as yesterday. Currently they misbehave multiple times a week and I have to manually steer them a bit, which destroys certain automated workflows completely.
It sounds like you have dug into this problem with some depth, so I would love to hear more. When you've tried to automate things, I'm guessing you've got a template plus some data, and the same or similar input gives totally different results? What details can you share about how different the results are? Are you asking for e.g. JSON output and it totally isn't, or is it a more subtle difference?
It doesn’t really solve it, as a slight shift in the prompt can have totally unpredictable results anyway. And if your prompt is always exactly the same, you’d just cache the response and bypass the LLM anyway.
What would really be useful is for a very similar prompt to always give a very, very similar result.
This doesn't work with the current architecture, because we have to introduce some element of stochastic noise into the generation or else they're not "creatively" generative.
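As a toy illustration of that noise (not how any particular vendor implements sampling): pick the next token from a softmax with a temperature knob. At temperature 0 you always take the argmax and get reproducible output; any positive temperature is randomness by design:

    // Toy next-token sampler; the scores are made up.
    double[] logits = { 2.0, 1.0, 0.5, 0.1 };
    var rng = new Random();

    Console.WriteLine(Sample(logits, 0.0, rng)); // temperature 0: same token every run
    Console.WriteLine(Sample(logits, 0.8, rng)); // positive temperature: may differ per run

    static int Sample(double[] scores, double temperature, Random rng)
    {
        if (temperature <= 0) // degenerate case: always take the argmax
        {
            int best = 0;
            for (int i = 1; i < scores.Length; i++)
                if (scores[i] > scores[best]) best = i;
            return best;
        }

        // Softmax with temperature: higher T flattens the distribution (more noise).
        double[] weights = scores.Select(s => Math.Exp(s / temperature)).ToArray();
        double roll = rng.NextDouble() * weights.Sum();

        for (int i = 0; i < weights.Length; i++)
        {
            roll -= weights[i];
            if (roll <= 0) return i;
        }
        return weights.Length - 1;
    }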
Your brain doesn't have this problem because the noise is already present. You, as an actual thinking being, are able to override the noise and say "no, this is false." An LLM doesn't have that capability.
> I want to give an LLM the same prompt on different days and I want to be able to trust that it will do the same thing as yesterday
Bad news, it's winter now in the Northern hemisphere, so expect all of our AIs to get slightly less performant as they emulate humans under-performing until Spring.
Same here. I miss my N900 dearly. It was one of my most expensive items at the time, and I enjoyed this device more than the Galaxy S I9000.
It had the best slide-out keyboard of all the phones: nice, rubbery keys and a super smooth sliding motion.
It also had an FM transmitter (not just a receiver), so I could blast audio in my first car back then without struggling with Bluetooth kits & audio cables (neither was standard).
It also had an infra-red transmitter that was programmable, so you could use it as a remote in certain circumstances.
At the time, the 32GB storage was absolutely massive for a phone.
It also had stereo speakers & a kick-stand, so you could watch a movie on it without issues.
I really miss this phone & era. Maemo OS could've owned the market today, as at the time it was much better than early Androids. Nokia messed up so hard after this, the N9 was shitty in comparison.
If you want a ton of fun, ask AI to write you a parable on the joys/wisdom/lessons of topic x versus topic y. Example: Write a parable on the virtues of using a type safe programming language, the benefits of object oriented programming, the generosity of the garbage collector, all in the face of the tyranny of dynamic programming languages. Write it from the perspective of a master monk (a frog wearing a robe and staff), having a conversation with a gorgeous young squirrel that wishes to embark on the journey of learning software engineering.
Here is one output:
The Frog Monk and the Squirrel Student
Deep within the bamboo grove, where the wind whispered like flowing code, sat Master Frog, robed in emerald cloth, staff resting across his lap. Before him, a young squirrel, bright-eyed and eager, bowed low.
“Master,” the squirrel said, “I wish to learn the way of software engineering. But the paths are many, and I do not know which road leads to wisdom.”
The frog monk closed his eyes, and spoke in a voice like rippling water:
The Virtue of Type Safety
“Child,” said the frog, “imagine a bridge built of stone. Each stone is measured, each arch tested. This is the way of type safety. When you walk across, you know the bridge will not crumble beneath your paws.
But in the land of dynamic languages, bridges are built of twigs and promises. They may hold, or they may break when the storm comes. The traveler must always fear the hidden crack.
Thus, type safety is the discipline that guards against chaos. It is the monk’s staff, steady and unyielding.”
The Benefits of Object-Oriented Programming
The squirrel tilted her head. “And what of the world itself, Master?”
The frog smiled. “The world is vast, yet it is made of objects. The tree, the river, the stone—all are beings with their own nature. In object-oriented programming, we honor this truth. We give each thing its form, its methods, its inheritance.
Without this order, the world becomes a tangled forest of functions, each branch crossing another. But with objects, the forest is tended, paths are clear, and the traveler may walk without losing her way.”
The Generosity of the Garbage Collector
The squirrel’s tail flicked. “But Master, what of the burdens we leave behind?”
The frog tapped his staff. “Ah, that is the gift of the garbage collector. Like a kind monk sweeping the temple floor, it gathers the forgotten scraps, the abandoned memory, the dust of old variables.
It asks for no thanks, yet it keeps the temple pure. Without it, the temple would overflow with clutter, and the student would drown in her own neglect. This is generosity: unseen, yet ever present.”
The Tyranny of Dynamic Languages
The frog’s eyes grew stern. “Beware, child, of the tyranny of dynamic languages. They promise freedom: ‘Write quickly, without rules!’ But freedom without discipline is chaos.
The tyrant whispers: ‘Trust me, the errors will reveal themselves later.’ But later may be too late—when the bridge collapses, when the temple burns, when the forest is lost.
Thus, the wise engineer does not bow to the tyrant. She chooses the path of safety, of order, of generosity.”
The Lesson
The squirrel bowed again, her heart alight with understanding. “Master, I see now. To walk the path of software engineering is not merely to write code, but to live with discipline, order, and gratitude.”
The frog monk nodded. “Then go, child. Build bridges of stone, tend forests of objects, and honor the unseen sweepers of memory. In this way, your code shall endure, and your spirit shall remain free.”
And so the squirrel departed, carrying the wisdom of the frog monk into the world of software.
Actually, I set up a Proxmox server last week that runs a couple of self-hosted applications. I have Nextcloud running and it was fairly easy to set up. The next item on my list WAS Immich. I decided against trying to deploy it. The reason is simple: they are essentially forcing the use of Docker, which I won't touch at all. Either a native Proxmox container (which is just LXC) or a proper VM, but I keep those in reserve as they can be heavy. I'm not asking them to create a native package for Debian or a container image; a simple install script that bootstraps the application (checks & installs itself and its dependencies) and bootstraps the database and basic config (data directory, URL & ports, admin password) is more than enough. The same script should be usable to update the application if possible, or provide an updater in the admin panel to update the application without manual steps or data migrations. AdGuard Home does all of this perfectly in my opinion. I know Immich thinks they are making things "easier" by just dumping everything into a Docker container, but some of us won't touch it at all. Same reason I avoid any projects that heavily rely on the nodejs/npm ecosystem.
I really don't understand this take.
A script that installs all required dependencies is fine if and only if you are dedicating a machine to Immich. It probably requires some version of Node, possibly with hidden dependencies on some Python; it uses ffmpeg, so all the related libraries and executables need to be there. You then have a couple of separate DBs, all communicating together.
Let's not talk about updates! What if you're skipping versions? Now your "simple install script" becomes a fragile behemoth.
I would NOT consider this if it weren't Docker-native.
Plus, I don't have a server with enough resources for a lot of VMs, with all of their overhead and complications, just to have one per service.
Nowadays there are many ways to run a container, not just the original docker.com software, and you can do that on pretty much any platform. Even Android now!
I've never understood it either. I still deploy some things as their own manual deployments, but for lots of things, having a pre-made docker compose means I can throw it on my general app VM and it'll take 5 seconds to spin up and automatically get HTTPS certs and DNS. Then I don't lose hours when I get two days into using something and realize it's not for me.
Also have you read some of the setup instructions for some of these things? I'd be churning out 1000 lines of ansible crap.
Either way, since Proxmox 9.1 added at least initial support for Docker-based containers, the whole argument's out the window anyway.
Me neither. Docker is the platform-agnostic way to deploy stuff, and if I maintained software it would be ideal - I can ship my environment to your environment. Reproducing that yourself will take ages; the alternative is me maintaining a lot of complex scripts long-term that may break in weird ways.
These things are a Proxmox home lab user's lifeline. My only complaint is that you have to change your default host shell to bash to run them. You only have to do that for the initial container creation though.
I think it's the best of every world. Self contained, with an install script. Can bring up every dependent service needed all in one command. Even your example of "a simple script" has 5 different expectations.
I implore everyone here to please try to convert friends & family over to Linux. Fedora + KDE will feel right at home for anyone coming from Windows. Easy & configurable, with a decent app store.
The other day I installed Windows 7 in a VM for fun... it was not fun at all. I got a weird wave of nostalgic sadness, like being teleported back in time; I felt/remembered how things were back in ~2010, the culture, my university life, how things were with an ex-gf, ALL of it. The OS is ingrained in my mind, and it was gorgeous seeing those Aero effects and hearing the startup sounds again. It is so simple and easy. It felt good to see & use it again.
With Windows 11, although I mostly like the UI (rounded corners on a high-DPI tablet, also with a rounded screen, are amazing), it feels absolutely gross, in the corporate soulless sense. It feels mentally heavy to operate. I constantly had to battle it to get it to work the way I want.
These days all my devices are running Fedora with KDE, which is just the best. You basically set it up once the way you like it, and it won't change by itself for months. It is a buttery smooth experience and I have had zero need to go back to Windows yet.
If anyone wants the same level of one-ness with your computer as back in the Windows XP & Windows 7 days, give KDE a try. Fedora is a pretty simple distro to get used to if you want a good starting point.
I mentioned to a friend recently that W11 is so difficult to use compared to a Linux distro like Mint nowadays. He didn't understand; he tried Mint a decade ago but kept using Windows 10, upgraded to 11, and continues to have driver problems with his laptop: some weeks the network card stops working, some weeks his sound card drops out completely, and he gets by with USB dongles in the meantime. It reminds me of how I used Linux on laptops 20-something years ago, and even then it wasn't that bad. I feel preaching Linux is almost counter-productive, but I'm tired of being asked to solve his hardware problems caused by bad W11 drivers.
In my experience the problem with Debian is that sooner or later you're bound to want to use something that is only 5 years old and therefore not included yet, so you end up having to install it from source or somewhere else. Then something doesn't quite work right, so you have to hack it one way or another, and over time all this cruft adds up and you end up with a broken system, caused precisely because the base distro refused to change fast enough.
I no longer use Debian, but when I did, I always used Debian Testing. I never had any major issues that weren't my own fault, and the packages are way more up-to-date. Worth trying if you're still in that ecosystem and want software newer than 1-2 years old.
Lots of Linux software these days is also distributed as Flatpak or AppImage, and AppImage in particular is dead simple if what you want has one available: place the file wherever on your path, make it executable, and done.