Hacker News | seabass's comments

Love this! Just wanted to note that I think there’s a mistake in the flyweight pattern page’s example. You’re getting a boolean back from Set.has but treating it like a Book (as if you’d called .get on a Map). I also don’t really understand how this saves memory if you’re spreading the result into a new object, but maybe someone here can enlighten me!

Ah, I think I understand now. The return type of createBook is true | Book, which is likely a mistake, but it happens to work because spreading a boolean into an object contributes no properties. But if you edited the example to have a stable return type of Book, it would no longer save memory, so perhaps that was intentional?
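For anyone curious, here’s a minimal sketch of what I mean (names are made up; this isn’t the site’s actual code):

```javascript
// Hypothetical flyweight cache, roughly the shape described above.
const books = new Map();

function createBook(title, isbn) {
  // Map.get returns the cached Book (or undefined); Set.has would give a boolean.
  const existing = books.get(isbn);
  if (existing) return existing;
  const book = { title, isbn };
  books.set(isbn, book);
  return book;
}

// The quirk that hides the bug: spreading a boolean into an object adds nothing.
console.log({ ...true }); // {}
```

So spreading a boolean just yields a fresh empty-ish object, which is why the broken version “works” — while a version that really returned a Book and then spread it would copy the object and defeat the sharing.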

I’m surprised by how good it looks. This is really cool! I do feel like the Q and 4 characters need a little manual tweaking, since the blur+threshold technique leaves some artifacts in the corners, but those are such minor issues given how readable this font is overall. Love it.


You can compare the writing style with earlier articles, like this one from 2020, pre-GPT:

https://andreacanton.dev/posts/2020-02-19-git-mantras/


Strongly disagree. If you read enough of it, the patterns in AI text become very familiar. Take this paragraph, for example:

> Here’s what surprised me: the practices that made my exit smooth weren’t “exit strategies.” They were professional habits I should have built years earlier—habits that made work better even when I was staying.

“It’s not x—it’s y”, the dashes, the Q&A-style text from the parent comment, and the overall cadence were too hard to look past.

So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.


Regardless, people are going to start writing naturally like current LLM output, because that's a lot of what they are reading.

A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

So I tried reading some HST myself, and... some open source code documentation immediately got a little punchy.

> So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.

Good point. And if it's actually genuine original text from someone whose style was merely tainted by reading lots of "AI" slop, I guess that might be a reason to prefer reading someone who has a healthier intellectual diet.


> A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

That is honestly incredible and actionable advice.

Can’t wait to sprinkle a taste of the eldritch in my comments after reading some Lovecraft.


Curious - is your concern that the post is 100% AI generated? Or do you object that AI may have been used to clean up the post?


AI writing often leads to word inflation, so getting the original, more concise version is helpful IMO. Hiding it is the annoying part; disclosing that you used AI to help and providing a ‘source code’ version would go over much better. If a person is deceptive and dishonest about something so obvious, how can you trust anything else they say?

It also leads to slop spam content. Writing it yourself is a form of anti-spam. I think tools like grammarly help strike a balance between 'AI slop machine' and 'help with my writing'.

And because they’re so low effort, it’s essentially like posting links to a Google search. Higher noise, lower signal.


> I think tools like grammarly help strike a balance between 'AI slop machine' and 'help with my writing'.

I found Grammarly to be often incorrect, but it’s been years since I tried it. I use LanguageTool instead, simply to catch typos.


Ok, well this post seems very similar in style, from the same author. Why isn’t this AI too? https://andreacanton.dev/posts/2020-02-19-git-mantras/


It has a bunch of human imperfections, and I love that. The lowercase lists and inconsistent casing for similarly structured content throughout, the grammar mistakes, and overall structure. This article has a totally different feel compared to the newest ones. When you say it’s very similar, what are you picking up on? They feel like night and day from my perspective.


LLMs got all these patterns from humans in the first place*. They're common in LLM output because they're common in human output. Therefore this argument isn't very reliable.

If P is the probability that a text containing these patterns was generated by an LLM, then yes, P > 0, but readers who are (understandably) tired of generated comments are overestimating P.

* Edit: I see now that the GP comment already said this.


Looks really cool! Love the artwork. Right now the video in the readme doesn’t render on github, though. I had to manually download the mp4 from your demo folder to view it.


I’d expect that the “shut up and do as I say” approach would add more combativeness to the ai, increasing the likelihood that it refuses. Instead, bringing your initial request into a new chat context that hasn’t already been poisoned by a refusal would probably work.


Much like people, I guess.


I'm curious how you build something like this. I see file types in the network tab which, as a web dev, I've never worked with before. ktx2 and drc extensions, for example. I'm also seeing some wasm and threejs. Is there an engine that outputs these things or is it more of a manual process to bring everything together?


Having done some non-trivial things with three.js, I’d guess it’s basically like this: three.js is the rendering engine (taking advantage of WebGL); game logic is in JavaScript; ktx2 is a Khronos container holding compressed texture data; judging from the filenames, the WASM bits decode the compressed mesh data (draco_decoder.wasm). The ogg files are for sound. The multiple worker files imply that the app uses web workers to run the WASM performantly while loading the world.


ktx is Khronos Texture, a format for storing compressed textures (i.e. image data) so that they can be uploaded to GPU memory without a decompression step in between.
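If you ever want to sanity-check one of those downloaded files, KTX 2.0 files start with a fixed 12-byte identifier (per the Khronos spec, it reads roughly “KTX 20” framed by 0xAB/0xBB and line-ending bytes). A quick check could look like:

```javascript
// The 12-byte KTX 2.0 file identifier from the Khronos spec.
const KTX2_MAGIC = [0xAB, 0x4B, 0x54, 0x58, 0x20, 0x32, 0x30, 0xBB, 0x0D, 0x0A, 0x1A, 0x0A];

// Takes a Uint8Array of the file's first bytes and checks the identifier.
function isKTX2(bytes) {
  return KTX2_MAGIC.every((b, i) => bytes[i] === b);
}
```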

drc is Draco compression; it’s a library from Google for compressing mesh data.


KTX can just store compressed GPU textures as-is, but in this case they’re using the Basis Universal codec, which is a bit more involved. Basis stores textures in a super-compressed form, which is decompressed at runtime to produce a less compressed (but still compressed!) format that the GPU can work with. The advantage is that it’s smaller over the wire, and one source file can decompress to several different GPU formats depending on what the user’s hardware supports.


ktx2 = textures

drc = 3d shapes (I think)

ogg = audio

All of these would normally be bundled in the game installer, but are sent down piece by piece over the network in this case.

Then there's some wasm and js for the game's business logic. The browser has WebGL APIs that enable running all of this.

I'm assuming they used a library or engine like Unity, Godot or Three.js that supports WebGL as an execution target.
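If it is three.js, the wiring for those file types usually looks something like this. This is only a configuration sketch: the module paths, decoder locations, and asset names are illustrative, not taken from the actual game.

```javascript
// Sketch only: assumes three.js; paths and file names are made up.
import { Scene, WebGLRenderer } from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js';
import { DRACOLoader } from 'three/addons/loaders/DRACOLoader.js';

const renderer = new WebGLRenderer();
const scene = new Scene();

// .ktx2 textures: the Basis transcoder (js + wasm) turns them into whatever
// compressed format the current GPU supports.
const ktx2Loader = new KTX2Loader()
  .setTranscoderPath('/basis/')
  .detectSupport(renderer);

// .drc meshes: decoded by the Draco decoder (wasm, typically run in a web worker).
const dracoLoader = new DRACOLoader().setDecoderPath('/draco/');

// glTF ties it together, referencing the ktx2 textures and Draco-compressed geometry.
const loader = new GLTFLoader()
  .setKTX2Loader(ktx2Loader)
  .setDRACOLoader(dracoLoader);

loader.load('/assets/world.glb', (gltf) => scene.add(gltf.scene));
```

That would explain most of what shows up in the network tab: the ktx2/drc assets, the decoder wasm, and the worker files.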


The NPC at the top of the building says it’s three.js


Definitely Three.js or at least something similarly low-level. I doubt you can get this kind of performance with Godot or Unity on the web.


There is Needle.Tools for porting Unity projects to WebGL/3js


Three.js is like jQuery for building 3D stuff; it’s not a platform like Unity or Unreal Engine. But it doesn’t look overly complicated.


On the other hand, I'm so glad it didn't. I enjoyed a few minutes of exploration before realizing what I was meant to be doing.


Second this. The controls are pretty much standard/intuitive, and it was fun discovering what the game was about while captivated by the beautiful music and graphics.


Really beautiful! Love the artwork and the fact that this runs so well in the browser. Was surprised to realize it was multiplayer!


> how easy is it to administer for clients outside of my network or possibly even outside my country?

You can run Jellyfin in a Docker container. If you want to run it on a NAS in your home office and expose it to the internet through ngrok or Tailscale, you totally can. You can host it pretty much anywhere.

> how good is the app support? I transcode all of my media to AAC and h264 for compatibility

The official clients are just ok. They'll support all the file types you'd expect, but they're fairly slow and not great at streaming 4K. I pay for a client (Infuse Pro) that addresses a lot of those pain points, but it's been relatively poor at auto-detecting tv show metadata, so I'm still in the market for an app I'm happy with. Ideally an open source one.

> - what about for streaming music?

Technically works, but whether it's a good experience depends on the client you're using.

> - what do you like the most about jellyfin

Easy to set up. Great plugins for finding subtitles/artwork/metadata. Open source with good docs. Works with lots of clients. Easy to create and share accounts, and has fun features like synced remote viewing parties.

> - what do you miss most about Plex?

The ads. jk never used it.

