Hacker News | new | past | comments | ask | show | jobs | submit | Ninjinka's comments

Man this was rushed, typo in the first section:

> Unlike the previous GPT-5.1 model, GPT-5.2 has new features for managing what the model "knows" and "remembers to improve accuracy.


Also, did they mention these features? I was looking out for it but got to the end and missed it.

(No, I just looked again and the new features listed are around verbosity, thinking level and the tool stuff rather than memory or knowledge.)


Increased demand for computer components for purposes other than gaming constitutes "AI bros murdering your lifelong hobby"?

PC gaming is not "murdered", it's doing better than ever.

In 2015 there were 3,000 games released to Steam, last year there were 18,000. In 2015 Steam's peak concurrent user count was 8.6 million. This year it's 41 million.

The inflation-adjusted price per gigabyte of RAM has dropped from $3/GB to $2/GB over the last 10 years, even including the recent price hikes.

So spare me the hysterics, your hobby is fine.

And you know what? The increased demand for compute always spurs innovation, so you'll probably get a better computer in the end as a result. You're welcome.


> In 2015 there were 3,000 games released to Steam, last year there were 18,000. In 2015 Steam's peak concurrent user count was 8.6 million. This year it's 41 million.

This is like saying "Spotify's subscriber count grew by 800% over the last 10 years. Music is doing better than ever!"


If the complaint was about access to music, then yes, that would be valid. Which seemed to be the implied complaint regarding RAM as it relates to PC gaming.


As someone who has been working on a pair of smart glasses running RTOS, and having to make companion apps for both iOS and Android, I am very interested in reading your approaches to a lot of the same problems I have faced. There's not a lot of information out there on these topics.


Calling this 3D is a stretch; it does not make things appear like they're coming out of the screen. That requires a way to show something different to each eye, which isn't possible with a standard display.

I did see a glasses-free 3D demo on a full monitor at CES that DID make it look like things were coming out of the screen, though it requires a $3,000 monitor: https://www.3dgamemarket.net/content/32-4k-glasses-free-3d-g...

Also the 3DS obviously.


Not strictly true. Out past a certain distance your brain uses parallax for depth cues, because the difference between each eye's image is too small. It's 3D, just not stereoscopic 3D.
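To put a rough number on "past a certain distance": the angular difference between the two eyes' views falls off inversely with depth, so stereo cues fade far away and motion parallax takes over. A back-of-the-envelope sketch (the 65 mm interpupillary distance is an assumed average, not from the thread):

```python
import math

def vergence_angle_arcmin(distance_m, ipd_m=0.065):
    """Angle (in arcminutes) subtended by the interpupillary
    distance at a given depth -- a proxy for how strong the
    stereoscopic disparity signal is at that distance."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m))) * 60

for d in (1, 10, 100):
    # Disparity shrinks ~1/distance: ~223 arcmin at 1 m,
    # ~22 at 10 m, ~2 at 100 m.
    print(f"{d:>4} m: {vergence_angle_arcmin(d):6.2f} arcmin")
```

By 100 m the disparity is down to a couple of arcminutes, which is why beyond tens of meters the brain leans on parallax and other monocular cues instead.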


Shutter glasses would like to disagree with that assertion


My only game dev experience is with Babylon.js, but I decided to give Bevy a shot a couple weeks ago. I gave up once I realized they don't have any sort of editor or scene inspector. Something as simple as seeing what assets are loaded into your scene is not possible with official tooling. Tried Unity, but it was ultimately more complex than what I needed. Tried Godot next, and so far it's been great. Super straightforward, and iteration speed is so much faster than Bevy or Unity because the compilation times are so low.


First: Godot is rad, and if you're willing to use third-party tooling you can even use Rust to code your logic (official Godot languages are GDScript, C#, and C++, but the community has added support for a lot more, like Rust and Swift).

Second: building Bevy's editor in Bevy itself (the same way Godot's editor is built with the engine) is an active project, so if you think Bevy is interesting it's worth keeping an eye on whenever that gets released.


Just experienced KDE for the first time myself, and sent this in Slack a couple weeks ago:

hadn't used linux in a desktop environment since college, but installed KDE Plasma on my old laptop today. It's so good

might be enough to finally make me take the time to at least dual boot my desktop


Anthropic also did specifically this, spent millions on it


Jules?


For Google's Cloud Translation API you can choose between the standard Neural Machine Translation (NMT) model or the "Translation LLM (Google's newest highest quality LLM-style translation model)".

https://cloud.google.com/translate/docs/advanced/translating...

DeepL also has a translation LLM, which they claim is 1.4-1.7x better than their classic model: https://www.deepl.com/en/blog/next-gen-language-model
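For the Google API, the model choice shows up as a field in the v3 `translateText` request. A hedged sketch of building that request body (the field names and model path are my recollection of the v3 REST API, and `translation-llm`/`nmt` as model IDs plus the `my-project` project name are assumptions to verify against the linked docs):

```python
def build_translate_request(text, target, project,
                            location="global", use_llm=False):
    """Build the JSON body for a Cloud Translation v3
    projects.locations:translateText call, selecting between
    the classic NMT model and the Translation LLM."""
    model_id = "translation-llm" if use_llm else "nmt"
    return {
        "contents": [text],
        "targetLanguageCode": target,
        "model": (f"projects/{project}/locations/{location}"
                  f"/models/general/{model_id}"),
    }

# Request the LLM-style model instead of the NMT default:
body = build_translate_request("Hello, world", "de", "my-project",
                               use_llm=True)
```

Omitting the `model` field entirely should fall back to whatever Google considers the default, so the explicit path only matters when you care which of the two you get.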


Love how instantly recognizable the default NextJS app is


NextJS (and Vercel) is really easy to use ;)

