seabrookmx's comments | Hacker News

Exactly. Or Rust wouldn't be memory safe due to the existence of unwrap().

Not that crashing can't be bad, as we saw with Cloudflare's recent unwrap-based incident.


Even without unwrap, Rust can still panic and crash on out-of-bounds array access, and probably in other similar cases.

I quite liked SCons back when I wrote C++!

Neat. Might have to try this on .NET, especially since v10 just got a new (default) GC that claims to use a lot less memory at idle.

It's not just idle; it's also a bit more aggressive about cleaning up after objects are de-referenced, meaning the OS gets the memory back sooner. Very useful in situations where you have many small containers/VMs running dotnet stuff, or on large applications where memory pressure is an issue and your usage pattern can benefit from memory being released earlier.

In the old days you could tune IIS + the garbage collector manually to get similar behaviour, but it was usually not worth it. Time was better spent optimizing other things in the pipe and living with the GC freezes. I suspect GC hiccups should now be much smaller/shorter too, as the overall load will be lower with the new GC.
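If you want to see the effect for yourself, here's a minimal sketch (the ~512 MB buffer size and the class name are just placeholders, and how much actually gets handed back to the OS depends on your GC configuration) that allocates, drops the reference, forces a collection, and compares committed bytes via GC.GetGCMemoryInfo():

    using System;

    class GcReleaseSketch
    {
        static long Committed() => GC.GetGCMemoryInfo().TotalCommittedBytes;

        static void Main()
        {
            // Allocate a large buffer (illustrative size) so the heap grows.
            byte[]? buffer = new byte[512 * 1024 * 1024];
            buffer[0] = 1; // keep the array observably used
            Console.WriteLine($"Committed after alloc:   {Committed():N0} bytes");

            // Drop the reference and force a full, blocking collection.
            buffer = null;
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            // With a GC that releases eagerly, committed bytes should drop here
            // instead of staying at the high-water mark.
            Console.WriteLine($"Committed after collect: {Committed():N0} bytes");
        }
    }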


That aggressive release behavior is exactly what we need more of. Most runtimes (looking at you, legacy Java) just hoard the heap forever because they assume they're the only tenant on the server.

In C#'s dependency injector you basically have to choose from three lifetimes for a class/service: scoped, transient, or singleton.

Scoped & transient lifetimes along with the new GC will make the runtime much leaner.

Some applications are singleton-heavy or misuse the MemoryCache (or wrap it in a singleton... facepalm) - these will still mess up the GC situation.

If you build web/API projects, it pays dividends to educate yourself on the three lifetimes, how the GC works, how async/await, cancellation tokens, and disposables work, how MemoryCache works (and when to go out-of-process / to another machine, aka Redis), and how the built-in mechanisms in ASP.NET cache HTML output, etc. A lot of developers just wing it and then wonder why they have memory issues.
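If you've never actually wired these up, here's a minimal sketch of what the three lifetimes look like at registration time with Microsoft.Extensions.DependencyInjection, assuming an ASP.NET Core minimal-hosting project; the service names (IEmailRenderer, IReportService, IClock) are made up for illustration:

    using Microsoft.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);

    // Transient: a new instance every time the service is resolved.
    builder.Services.AddTransient<IEmailRenderer, EmailRenderer>();

    // Scoped: one instance per request (per DI scope), collectable as soon as the request ends.
    builder.Services.AddScoped<IReportService, ReportService>();

    // Singleton: one instance for the whole process; whatever it references stays on the heap.
    builder.Services.AddSingleton<IClock, SystemClock>();

    // IMemoryCache is effectively a singleton too: entries live until evicted,
    // so set expirations/size limits instead of treating it as infinite.
    builder.Services.AddMemoryCache();

    var app = builder.Build();
    app.Run();

    // Hypothetical services, only here so the registrations above compile.
    public interface IEmailRenderer { }
    public class EmailRenderer : IEmailRenderer { }
    public interface IReportService { }
    public class ReportService : IReportService { }
    public interface IClock { }
    public class SystemClock : IClock { }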

And for the dinosaurs: yes, we can use the dependency injector in Windows Forms, WPF, console apps and so on - those packages aren't limited to web projects alone.
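A minimal sketch of the same container in a plain console app via the generic host (Host.CreateApplicationBuilder, available since .NET 7); IGreeter is a hypothetical service:

    using System;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Same container, no web stack: the generic host works fine in a console app,
    // and the same pattern applies to WinForms/WPF startup code.
    var builder = Host.CreateApplicationBuilder(args);
    builder.Services.AddSingleton<IGreeter, Greeter>();

    using var host = builder.Build();
    host.Services.GetRequiredService<IGreeter>().Greet("dinosaurs");

    public interface IGreeter { void Greet(string name); }

    public class Greeter : IGreeter
    {
        public void Greet(string name) => Console.WriteLine($"Hello, {name}!");
    }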


That new .NET behavior is the goal: smart runtimes that yield memory back to the OS so we don't have to play guess-the-request in YAML.

Unfortunately most legacy Java/Python workloads I see in the wild are doing the exact opposite: hoarding RAM just in case. Until the runtimes get smarter, we're stuck fixing the configs.


Our network scanning and package scanning both caught this.

Not to be flippant, but if you host server-side rendered React on the public internet and you're just hearing about this now, that's a skill issue.


YouTube uses VP9, so it depends on whether you're talking "number of applications that use it" or "number of hours watched".


They bought Red Hat, which has OpenShift and all their other "DIY Cloud" bits. This stuff is popular in government or old businesses that may have been slow to (or unable to for regulatory reasons) jump to AWS/GCP etc.

To say nothing of the banks and others still using the IBM big iron.


The American hyperscalers are not necessarily the place to be. Modern can mean non-hyperscaler as well. Can this sentiment just die, please? Great that it's working out for you and you replaced good sysadmins with AWS admins, but it should not be the default strategy per se.


Why does this read like a personal attack? Do you have anything in my comment to refute?

I didn't even use the word "modern."

I actually agree the traditional cloud providers have lots of issues and aren't always the right choice, but the fact remains that offerings from Red Hat and the like are far more popular with older, larger corporations than with startups or "household name" tech companies like X, Netflix, etc.


When you read your sentence:

> This stuff is popular in government or old businesses that may have been slow to (or unable to for regulatory reasons) jump to AWS/GCP etc.

I think it's fair to say that you think migrating to the hyperscalers is something a company should do. That's what my previous post was addressing.


I don't. What I'm saying is that the vast majority of companies are, and many of these businesses using IBM/Red Hat/etc. products would follow the tide if not for other things in their way. I've seen it firsthand, where a Fortune 500 kept their large IBM and SAP footprint (because the cost to migrate to something else was huge) and used AWS EKS for all the new apps.

Personally, I think at their scale, self-hosting and creating more interoperability between the stacks would have been a better investment, but I was not the CTO or an SVP, so I didn't get to make those decisions.


They've been partnering with Nvidia to build large ML training clusters, IIRC, as of the last time I was in their building at a meetup a few weeks ago.


Sony is still the 2nd largest music distributor and label in the world, behind Universal Music and ahead of Warner Music.

My 65" Bravia is one of the best TVs in its class and runs Google TV (IMO a major leg up over the junky Tizen/WebOS offerings from competitors).

They make some of the best noise-cancelling headphones money can buy. They have the PS5 and own a bunch of game studios to provide exclusive content for it.

They're doing just fine!


To prevent all the other potential memory safety bugs that didn't crash prior to this one?


I had to read their article on "soft-unicast" before I could really grok this one: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...


Rest in Peace ATI.

