Hacker News | tommek4077's comments

In other news: water is wet. I genuinely don't understand how anyone is still pretending otherwise. Server-side rendering is so much easier to deliver in a performant way, yet it feels like it's being increasingly forgotten — or worse, actively dismissed as outdated. Out of convenience, more and more developers keep pushing logic and rendering onto the client, as if the browser were an infinitely capable runtime. The result is exactly what this article describes: bloated bundles, fragile performance, and an endless cycle of optimization that never quite sticks.

Server-rendered HTML, HTML-fragment endpoints, and jQuery's .load() were always the sweet spot for me (a rough sketch of the pattern is below) - McMaster-Carr[0] does the same thing behind the scenes and utterly destroys every "modern" webapp in existence today. Why did everything have to become so hard?

0: https://www.mcmaster.com/
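
For anyone who hasn't run into that pattern, here's a minimal sketch, assuming jQuery is on the page and a hypothetical /fragments/product-table endpoint that returns ready-to-insert HTML rather than JSON:

    // Minimal sketch of the fragment approach; the endpoint and element ids are invented.
    $(function () {
      $('#filter-form').on('change', 'select, input', function () {
        var query = $('#filter-form').serialize();
        // .load() fetches the server-rendered fragment and swaps it into #results
        // in one call - no client-side templates, no JSON, no hydration step.
        $('#results').load('/fragments/product-table?' + query);
      });
    });

The server keeps all of the templating; the client only decides when to ask for a new fragment.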


Pure client-side rendering is the only way to get maximum speed with the lowest possible latency. With SSR you always have either bigger payloads or a second network round trip.

You're literally downloading a bunch of stuff first just to do a first paint, versus the server just sending back already-built, already-styled HTML/CSS. Not only is client-only rendering technically slower, it's also perceptibly slower.

That’s a laughable claim. SSR is objectively faster, since the client does nearly zero work other than downloading some assets. If the responses are pre-computed and sitting in server memory waiting for a request to come along, no client side rendering technique can possibly beat that.

Of course there are cases where SSR makes sense, but servers are slow; the network is slow; going back and forth is slow. The browser on modern hardware, however, is very fast. Much faster than the "CPU"s you can get for a reasonable price from data centers/colos. And they're mostly idle and have a ton of memory. Letting them do the work beats SSR. And since the logic must necessarily be the same in both cases, there's no advantage to be gotten there.

If your argument is that having the client do all the work to assemble the DOM is cheaper for you under the constraints you outlined then that is a good argument.

My argument is that I can always get a faster time to paint than you if I have a good cluster of back-end services doing all that work, instead of offloading it all to the client, which then has to round-trip back to your “slow servers over a slow network” anyway to get all the data.

If you don’t care about time to paint under already high client-side load, then just ship another JS app, absolutely. But what you’re describing is how you deliver something as unsatisfying as the current GitHub.com experience.


Idk. My applications are editor-like. So they fetch a bit of data, but rendering the edit options in HTML is much larger in size than the data, especially since there are multiple views on the same data. So that would put a larger burden on the server and make network transfer slower. Since generating the DOM in the browser is quite fast (there's no high client-side load; I don't know where you get that from), I've got good reason to suppose it beats SSR in my case.
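
Roughly the shape of it, as a sketch (the endpoint, field names, and views here are invented for illustration):

    // One small JSON payload feeds several client-rendered views of the same data.
    async function render() {
      const items = await (await fetch('/api/items')).json();  // a few KB, e.g. [{ name, price }, ...]

      const tableView = '<table>' + items.map(i =>
        `<tr><td>${i.name}</td><td><input value="${i.price}"></td></tr>`
      ).join('') + '</table>';

      const total = items.reduce((sum, i) => sum + i.price, 0);
      const summaryView = `<p>${items.length} items, total ${total}</p>`;

      // The HTML built here is several times larger than the JSON it came from,
      // and none of it has to cross the network.
      document.querySelector('#editor').innerHTML = tableView;
      document.querySelector('#summary').innerHTML = summaryView;
    }
    render();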

Mind you, I've got one server with 4 CPUs and 8GB memory that can run 2 production and 10 test services (and the database for all of them), and the average load is .25 or so. That means it responds quickly to requests, which also has its advantages.


That makes sense. And btw when I say “already high client load”, my assumption is that most users have 50 other tabs open :)

Lol!

Still living in the early 2000s, eh? Pretty much all interactive, responsive apps are 100% client-side rendered. Your claim about SSR being objectively faster looks like a personal vendetta against client-side rendered apps. Or JavaScript. Happy days!

It was faster then and it’s still faster now. Of course, you’d have to learn how a computer works to know that I’m right, but that would be a bridge too far for JavaScript casuals! Just add one more library bro! Then you’ll be able to tell if a number is even or odd!

At least my prediction is accurate after all:)

Confidently wrong, I like it!

> objectively faster

> provides zero evidence


Some pretty compelling evidence is history: we had dynamic and interactive web pages 20 years ago that were faster on computers that were an order of magnitude slower.

I don’t really need to provide “evidence”. I told you why SSR is faster and tbh idc if your time to paint is trash.

Why are "thousands" of requests noticeable in any way? Web servers are so powerful nowadays.


Small, cheap VPSs that are ideal for running a small niche-interest blog or forum will easily fall over if they suddenly get thousands of requests in a short time.

Look at how many sites still get "HN hugged" (formerly known as "slashdotted").


I remember my first project posted to HN was hosted on a router with 32MB of RAM and a puny MIPS CPU; despite hitting the front page, it did not crash.

At this point, I have to assume that most software is too inefficient to be exposed to the Internet, and that becomes obvious with any real load.


While true, it's also true that it was (presumably) able to run and serve its intended audience until the scrapers came along.


It's not just one scraper.


But why?


No humans, no point.


But AI scraping doesn't remove humans...?

Even if humans make up a smaller proportion of your traffic, they're still the same number in absolute terms.


How do they get overloaded? Is the website too slow? I have quite a big wiki online and barely see any impact from bots.


A year or two ago I personally encountered scraping bots that were crawling every possible resultant page from a given starting point. So if one scraped a search results page, it would also scrape every single distinct combination of facets on that search (including nonsensical combinations, e.g. "products where weight<2lbs AND weight>2lbs").

We ended up having to block entire ASNs and several subnets (lots from Facebook IPs, interestingly)
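
For anyone curious, the subnet part of that can be a small CIDR check at the edge; a rough sketch in Node (the ranges below are placeholders, not the ones we actually blocked):

    // Reject requests whose client IP falls inside a blocked CIDR range.
    const BLOCKED = ['203.0.113.0/24', '198.51.100.0/22'].map(parseCidr);

    function ipToInt(ip) {
      return ip.split('.').reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
    }

    function parseCidr(cidr) {
      const [base, bits] = cidr.split('/');
      const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
      return { base: (ipToInt(base) & mask) >>> 0, mask };
    }

    function isBlocked(ip) {
      const addr = ipToInt(ip);
      return BLOCKED.some(({ base, mask }) => ((addr & mask) >>> 0) === base);
    }

    // e.g. as Express middleware:
    //   app.use((req, res, next) => isBlocked(req.ip) ? res.sendStatus(403) : next());

Blocking whole ASNs needs an IP-to-ASN lookup on top of this, but the principle is the same: build the range list once and check every request against it.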


I have encountered this same issue with faceted search results and individual inventory listings.


If you have a lot of pages, AI bots will scrape every single one on a loop. Wikis generally don't have anywhere near as many pages as a site whose entities are keyed by an auto-incrementing primary id. I have a few million pages on a tiny website and it gets hammered by AI bots all day long. I can handle it, but it's a nuisance and they're basically just scraping garbage (statistics pages of historical matches, or user pages that have essentially no content).

Many of them don't even self-identify; they end up scraping with disguised user agents or via bot farms. I've had to block entire ASNs just to tone it down. It also hurts good-faith actors who genuinely want to build on top of our APIs, because I have to block some cloud providers.

I would guess that I'm getting anywhere from 10-25 AI bot requests (maybe more) per real user request - and at scale that ends up being quite a lot. I route bot traffic to separate pods just so it doesn't hinder my real users' experience[0]; a rough sketch of the routing idea is below. Keep in mind that they're hitting deeply cold links, so caching doesn't do a whole lot here.

[0] this was more of a fun experiment than anything explicitly necessary, but it's proven useful in ways I didn't anticipate
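
For the curious, a stripped-down sketch of the routing idea in plain Node (the bot pattern and upstream addresses are placeholders; the real setup lives in the ingress, not in application code):

    const http = require('http');

    // Illustrative only: classify by user agent and proxy each class to its own upstream.
    const BOT_PATTERN = /bot|crawler|spider|scrapy|gptbot|claudebot/i;
    const UPSTREAMS = {
      bots:  { host: '10.0.0.20', port: 8080 },  // placeholder for the "bot" pods
      users: { host: '10.0.0.10', port: 8080 },  // placeholder for the "real user" pods
    };

    http.createServer((req, res) => {
      const ua = req.headers['user-agent'] || '';
      const target = BOT_PATTERN.test(ua) ? UPSTREAMS.bots : UPSTREAMS.users;

      // Forward the request unchanged and stream the upstream response back.
      const upstream = http.request(
        { ...target, path: req.url, method: req.method, headers: req.headers },
        (upstreamRes) => {
          res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
          upstreamRes.pipe(res);
        }
      );
      upstream.on('error', () => { res.writeHead(502); res.end(); });
      req.pipe(upstream);
    }).listen(8000);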


How many requests per second do you get? I also see a lot of bot traffic, but nowhere near enough to hit the servers significantly, and I render most stuff on the server directly.


Around a hundred per second at peak. Even though my server can handle it just fine, it muddies up the logs and observability for something I genuinely do not care about at all. I only care about seeing real users' experience. It's just noise.


Even moderately sized wikis have a huge number of different page versions which can all be accessed individually.


There are a lot of factors. It depends on how well your content lends itself to being cached by a CDN, the tech you (or your predecessors) chose to build it with, and how many unique pages you have. Even with pretty aggressive caching, having a couple million pages indexed adds up real fast. Especially if you weren’t fortunate enough to inherit a project using a framework that makes server-side rendering easy.


In these discussions no one will admit this, but the answer is generally yes. Websites written in Python and stuff like that.


It's not "written too slow" if you e.g. only get 50 users a week, though. If bots add so much load that you need to go optimise your website for them, then that's a bot problem not a website problem.


Yes yes, it's definitely that people don't know what they're doing, and not that they're operating at a scale or on a problem you are not. MetaBrainz cannot cache all of these links, as most of them are hardly ever hit. Try to assume good intent.


But serving HTML is unbelievably cheap, isn't it?


Running 72,000 database queries to generate a bunch of random HTML pages no one has asked for in five years is not, especially compared to downloading the files designed for it.


It adds up very quickly.


The worst thing is calendar/schedule pages. Many crawlers try to load every single day, in day view, week view, and month view. Those pages are dynamically generated and virtually limitless.


The API seems to be written in Perl: https://github.com/metabrainz/musicbrainz-server


Time for a vinyl-style Perl revival ...


Futures markets sometimes give traders leverage of 100x or more. Margin requirements are much lower than for trading spot.


Margin requirements for trading spot are zero, though initial capital requirements are obviously, well, whatever spot is.

Futures contracts aren't just pieces of paper traded between people; they are actual promises to pay for physical delivery of the underlying.

It's not surprising to me that crypto people consider them nothing more than leveraged gambling slips but that's really not how one should think about them. Personally I think crypto needs far heavier regulation than it gets.


Ever heard of liquidations?


Yes, it’s really ‘weird’ that they refuse to share any details. Completely unlike AWS, for example. As if being open about issues with their own product wouldn’t be in their best interest. /s


What is really at risk?


Maybe the instances are shared between users via sharding or are re-used and not properly cleaned.

And maybe they contain the memory of the users and/or the documents uploaded?


And what do you expect to get? Some arbitrary, uninteresting corporate paper, a homework assignment, someone's fanfiction.

Again, what is the risk?


Probably you're being sarcastic to show that those AI companies don't give a damn about our data. Right?


Couldn't this be a first step before further escalation?


And then what? What is the risk?


I guess a sandbox escape, something, profit?


Doesn't OpenAI have a ton of data on all of its users?


And what is at risk? Someone seeing someone else's fanfiction? Or another reworded business email? Or the vacation report of some guy in southern Germany?


This is a wild take and I'm not sure where to begin. What if I leaked your medical data, or your emails, or your browser history? What's at risk? Your data means nothing to me.


No, it is not. Outside this strange bubble on Hacker News, no one really cares or has ever heard of the creator.

They just use WordPress.


1st rule of hacking: don't write your freaking name on it!


Plot twist: Nobody who is in charge should care.

Leave the no to the naysayers.

Ship your app, generate traffic, usage, income. Leave the discussions to other people.


Do that at $BigCorp and Legal will eat you alive, if you're not fired outright.

Long ago I went through the company-approved process to link to SQLite and they had such a long list of caveats and concerns that we just gave up. It gave me a new understanding of how much legal risk a company takes when they use a third-party library, even if it's popular and the license is not copyleft.


Unless you are now involved in a lawsuit that asks for a hypothetical 50% of your income for using a technology very similar to theirs, where they speculate it's been stolen and isn't permitted by their license. Even if you know you're going to win, or that it doesn't affect you, you still have to spend money on lawyers fighting it.


Commenting on this to mark it in my feed for later reference. Well said!

