There is a whole world of difference between sprinkling some effects on a page and making a web app.
We also have this dichotomy because using a framework is an all-or-nothing deal. You mostly cannot drop it into one place and then use it in another.
From the other side, one also cannot really start adding some interactivity here and there and end up with a web application.
You either build an application and use a framework, or build a website and sprinkle interactivity and effects here and there.
Requirements are usually clear from the start. So the discussion is really that too many people use frameworks where they should not.
Finally, Wikipedia and WordPress are not static sites (only their output is): their editing capabilities are fully web apps, as every CMS out there is an app, not a website or something in between.
FWIW I've been tinkering with Astro this week and it's possible to build a website with zero client-side JavaScript.
I wonder if it's possible to achieve something similar in Next.js app router if you never have a "use client" component in your tree?
In any case it seems like that's the latest trend, allowing us to use full-fledged frameworks (powered by plenty of JavaScript) while minimizing (potentially down to zero) the client-side JavaScript.
This used to be possible with the page router, and was in fact one of the big selling points (graceful degradation down to raw HTML). I'd be surprised if the app router couldn't do something similar, though admittedly I haven't tried to do that yet. Anyone with experience?
I’m pretty sure you can’t achieve zero client-side JavaScript with pages router. You get pre-rendered HTML and it will gracefully degrade without executing JavaScript, yes…but it still comes with a JS bundle that clients will normally execute to hydrate that pre-rendered HTML into react components.
RSCs using the app router, on the other hand, don’t have that hydration phase.
Have you tried doing it with 6+ developers on the team and a full backlog of stuff to be implemented, and were you able to explain to everyone why "this specific place" has to be different? With the business on board, and the web server configuration aligned with ops so the routes and URLs match?
Dropping in is the new thing now. Vue and other Javascript frameworks utilize "web components" https://developer.mozilla.org/en-US/docs/Web/API/Web_compone... which are supported all over now. You have your `<video src="someVideo.mp4">` that is part of spec, and then you have your custom element `<myVideoPlayerThing src="someVideo.mp4" fartButton="true">` that does wtf it wants under the hood but surfaces as a regular HTML DOM element.
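The custom-element half of that can be sketched in a few lines of TypeScript. The `<my-video-player>` tag and everything inside it are made up for illustration (note the spec requires a hyphen in custom element names, so `<myVideoPlayerThing>` would actually need renaming):

```typescript
// Hedged sketch of a custom element. The fallback base class lets this
// file also load outside a browser (e.g. in Node-based tests), where
// HTMLElement does not exist.
type Ctor = new () => object;
const Base: Ctor = (globalThis as any).HTMLElement ?? class {};

class MyVideoPlayer extends Base {
  // Called by the browser when the element is attached to the DOM.
  connectedCallback() {
    const el = this as any; // browser-only DOM APIs
    const src = el.getAttribute("src") ?? "";
    // Does whatever it wants under the hood, but to the page it is
    // still just a regular DOM element with attributes.
    el.innerHTML = `<video src="${src}" controls></video>`;
  }
}

// Registration only makes sense in a real browser environment.
if (typeof customElements !== "undefined") {
  customElements.define("my-video-player", MyVideoPlayer as any);
}
```

Once registered, `<my-video-player src="someVideo.mp4">` behaves like any built-in element from the framework's point of view, which is what makes the drop-in usage possible.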
Ember might be the last of the old guard frameworks where this is somewhere between hard and impossible (last I looked at it anyway).
Nowadays, the main libraries are designed to not totally take over the entire page, though add-on frameworks which want to be "batteries included" (i.e. Next for react) certainly will.
The taxonomy is useful but it’s not exhaustive, and it’s also reductive.
Google Maps flits around to all four corners of this map depending on how you’re using it, and it needs intensive JavaScript support no matter where it ends up.
Wikipedia is, it claims, a definitively ‘informational’ site, ignoring that it also supports an enormous massively multiplayer community of active editors and authors constantly changing it.
What’s YouTube in this breakdown? What’s StackOverflow?
I think you've identified that some websites are secretly multiple websites in a trenchcoat.
The New York Times has a website at NYTimes.com where people read news stories, and a separate content management system where their employees write and publish stories. NYTimes.com is in the "informational" quadrant; the content management system is somewhere in the "online" half. The needs of these two systems are pretty dramatically different.
Other websites blur the lines a bit more in their architecture. For example, WordPress blogs generally include an "admin" app that's almost completely separate from the main website, but hosted on the same domain. Other websites like Wikipedia and StackOverflow are nominally "the same website" but activate a lot of extra UI elements when you're logged in.
But IMO it still makes sense to think of these as separate systems and optimize them individually. For example, Wikipedia uses a different serving path with much more aggressive caching when you're logged out (and the site is "informational"), vs when you're logged in as an editor (and the site is "transactional").
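That logged-out/logged-in split boils down to a one-line routing decision at the edge. A minimal sketch, where the request shape and cookie name are hypothetical rather than Wikipedia's actual configuration:

```typescript
// Sketch of the serving split: anonymous readers hit an aggressively
// cached path, logged-in editors bypass the cache. Names are invented
// for illustration.
interface IncomingReq {
  path: string;
  cookies: Record<string, string>;
}

function servingPath(req: IncomingReq): "cdn-cache" | "app-server" {
  // A session cookie implies personalized, "transactional" output
  // that must be rendered fresh; everything else is "informational"
  // and can be served from cache.
  return req.cookies["session"] ? "app-server" : "cdn-cache";
}
```

The point is that "the same website" is really two systems chosen per request, which is why they can be optimized independently.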
The "islands" architecture discussed in the article lets us extend this to individual features on the same page, too. Think of a startup marketing site's "chat with support" popup; that's pretty clearly a real-time island on an otherwise static page.
Actually on deeper reading I think the author bundles a lot of this into what they call ‘transactional’ in the top left. As they put it:
> Any service that involves literal financial transactions — like Amazon, or your bank’s website — probably lives here
But hang on - I need to be able to grab links to Amazon pages and send them to my friends. I don’t need to do the same with my bank statements. What do these sites have in common that this taxonomy usefully identifies? Forms with input fields?
> What do these sites have in common that this taxonomy usefully identifies? Forms with input fields?
Yes, that's what the article says, almost verbatim. The idea of the category is to identify sites that need to interact with a server to be useful, but do not need rich dynamic client-side interaction.
The editing part of Wikipedia would be "transactional" under this definition, because (last I looked) the way you edit Wikipedia is by typing your changes into a textbox and clicking Submit. It's not a Google Docs kind of situation where all the editors can see each other's cursors move around and type stuff in real time.
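That "textbox plus Submit" model is just a form POST mutating server state. A sketch of what the server side of such a transactional edit reduces to, with entirely made-up names (this is not a real wiki API):

```typescript
// The "transactional" pattern: the client is a plain <form>, and every
// state change is a pure server-side function of (old state, input).
interface Article {
  title: string;
  body: string;
  revision: number;
}

function applyEdit(article: Article, submittedBody: string): Article {
  // No client-side state to reconcile: the POSTed textbox content
  // simply becomes the next revision.
  return { ...article, body: submittedBody, revision: article.revision + 1 };
}
```

Contrast this with the Google Docs case, where the client must hold a live document model and merge concurrent edits in real time.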
Theoretically YouTube would be informational. It's giant lists of videos. Once you find the page for the video it's media embedded in the page and wouldn't actually need JS to function once the video is downloaded. Just like Wikipedia.
It depends on what you think YouTube's product is. Are they a service that streams videos and recommendations to consumers, or a service that streams eyeballs and tracking data to advertisers? The first one might get by with HTML5, but the second one definitely requires some JS.
It's still surprising to me that (as reasonable people) we continue to try to make PWAs happen, at the expense of re-inventing decades of OS evolution just to replicate it in a browser.
It's reasonable because users typically run more than one OS:
- phone
- home desktop/laptop device
- tablet
- workstation
These are not powered by a single OS, yet users want apps that run on all their devices. That's a big reason why we continue to invest in the web.
There are economic reasons as well: many developer shops can't afford to have multiple, OS-specific versions of their apps. Why build a Windows, Mac, iOS, and Android version of your app, when a single web version will run everywhere?
But then the web is a workaround for technical challenges and instead of refining the platforms we have to achieve these needs, we’re… rebuilding it all from scratch in the browser.
The worst part is that the web isn’t immune to the same issues, they just present slightly differently. We still have gatekeeping, web app/extension stores, unbalanced browser market where one vendor is doing whatever they want, DRM, browser services overriding the user’s experience etc.
Also, making cheap “app” that sort of opens on any platform is not the same as making an app that’s truly adapted to a specific form factor or OS.
Building a mobile and desktop version of your app will be challenging unless it's a very simple application or unless you accept that they likely will share very, very little code or design. The form factors and UI models are so different.
Even doing this with the web is extremely challenging, again unless the application is relatively simple.
Creative apps in particular do not "port" between these two contexts very well at all, whether you use the web or not. Yes, such things exist, but it is very debatable whether the desktop users of e.g. Figma have much use for the same thing in a mobile browser.
Except you can write all your important business logic in a cross-platform language as a core library and significantly reduce the effort to port it to different platforms. This is especially true for FOSS projects, where you can rely on domain experts to port to platforms you don't personally use. Look at the success of libtorrent.
I believe that the web (and PWA) is the future, so I made the choice early on to never learn to build native apps.
This was a hard choice to stand by back in the 2010s, when everything and everyone seemed to have an app... and the gap was undoubtedly huge in terms of functionality.
Today this is no longer the case. We can now use native web APIs and technologies to build feature rich apps that are accessible to anyone with a browser and an internet connection.
I think you’re talking about JavaScript apps vs native apps, not specifically PWA vs native.
I make that distinction because Electron apps are still JavaScript but not PWAs, and I suppose you're talking about the lack of good UI components in the browser.
This seems like a sunk cost fallacy to me. The web has evolved for decades, too, so why is it so wrong to allow it to compete as an application platform with the established desktop operating systems of the past?
We are essentially bolting on more and more management of hardware onto a userland application residing inside an operating system. At what point is it better to just get rid of the operating system and run the browser directly on the hardware?
What's wrong with slowly building up the hardware management features of the web? Why is that worse than releasing them in one big bang like traditional operating systems have done?
And why worry about the nomenclature to begin with? The web is already serving the use cases of an operating system for many people today whether we like it or not. So maybe it's our terminology that should change to reflect reality, rather than the other way around?
Powerful people stand to lose a lot if developers could just distribute their software directly to users without intermediate browsers/app stores.
You might say that this is technically possible right now, but those people do _everything_ they can to make it easier/cheaper for busy folks to use their software through those channels instead of just installing it directly.
The same thing can be said about native apps, too: "Powerful people stand to lose a lot if developers can make their software run directly on users' machines without intermediate OSes."
Are we supposed to build everything as single-purpose operating systems now?
I have mixed feelings about it. I like the idea of apps that are fully sandboxed and that I can inspect and modify parts of it easily (stylus & greasemonkey).
On the other hand I like native apps that don't compromise on speed.
Hopefully the desktop wasm runtimes that have been showing up lately will be the solution.
What I find insane is that we have multiple teams writing the same exact logic in slightly different frameworks to support different native platforms. Talk about wasted human potential.
Maybe you missed the memo, but everything old is new again:
* We are slowly but definitely moving towards a hub-and-spoke, mainframe-and-terminal topology again. Processing is being centralized in servers and datacenters again (eg: "AI", game streaming, video and audio streaming, cloud storage, etc.), accessed from terminals again with low power consumption hardware to extend battery life, reduce manufacturing costs, and reduce consumer power use.
* Powerful consumer (and prosumer/enthusiast) hardware is becoming economically unviable again (discrete GPUs particularly), politically unfavorable (inefficient and high power consumption), and socially unfashionable (RGB and "gamer" aesthetic), further spurring the mainframe-and-terminal topology above.
* Traditionally powerful and versatile consumer (and prosumer/enthusiast) software is becoming commercially unviable and software vendors are realigning their revenue streams to adapt and survive in an increasingly cutthroat market, further spurring the mainframe-and-terminal topology above.
If you're into computing as a hobby, it might be prudent to find a new hobby soon.
You can adopt a centralized system. And it's fashionable to do so. Lots of people here and other places advocate doing everything you can on the server.
But it's not required. Additionally, there are plenty of good incentives not to, including cost. Client CPUs are fast, free to use, and don't require a high latency network hop. Local ram, disk, and network transfer are also available in abundance. You can totally reverse the architecture where the client is doing the heavy lifting and provide the user a good experience.
Intel CPUs are inefficient when pegged, but more efficient at idle. On AMD, you're using the same cores the cloud is, and without the massive IO attached on the server side, client CPUs are more efficient per unit of work. Apple is more efficient than the PC side presently, and Qualcomm is entering the fray and appears to be quite efficient.
So I'm really not certain where your efficiency claim about client hardware comes from. All that hardware is just sitting there, and while sales vary, they are not cratering for high-performance local compute.
You have a point about GPUs, but only a very few very specialized applications need those.
It is not an OS. It is a platform, a platform like a more universal version of Qt or GTK or whatnot. And portable platforms are still cool. They're just not OSes.
It doesn't matter whether it's more content than events or the opposite. The ingredients and the compile target remain the same. People are quick to look for a distinction because they are making the technology more complex than it is, typically with frameworks-on-top-of-frameworks insanity. It's a forest-for-the-trees scenario.
(Amusingly I do have a version of it that runs as a completely client-side SPA, by executing the server-side Python code entirely in the browser using WebAssembly: https://lite.datasette.io/#/content/datasette_repos )
All this means is that the logic was shifted client-side. You could move it all to the backend and render the HTML there if you'd rather. (I wouldn't, but it'd still be a web app, IMO).
Kinda like old school Mapquest that was server-side, vs Google Maps that used client-side tile fetches. But both are still web apps, no?
I’ve been thinking along these lines for a while. The web has to support pages all across the spectrum, and it’s reductive to argue that one type of website is what a website “should be.”
This is a pretty nice overview of the current state of affairs. All that I find missing is some exploration of where frameworks like Next.js and Remix fit into the landscape. Next.js's use of React Server Components (RSC) in its new app router reminds me quite a lot of the so-called "islands" architecture.
Straddling this spectrum between "website" and "web app" is a continual challenge for us working on our content creator social media platform at Playboy.
I've been developing two websites for more than a year, both pure React web apps, one of them having millions of users per year.
Both had the particularity to use Netlify's one checkbox prerendering button.
They are heavy-computing websites, and Google was ranking them in the red on its page-speed scoring for SEO, since it now detects that users are served different HTML than crawling user agents.
Then we switched to Next. With some "use client" / RSC headaches, we moved some heavy computing server-side to serve most of the content to Google and users with good performance (75-100).
5 months later, still no SEO impact. Curious to get your SEO measures if you followed the same path.
Then one website was shared on a big media. SEO went x2 with noticeable keyword changes.
The only major advantage of Next to me, aside from being a good framework (but composing React + Webpack + ... is good too), is generating good social share metadata, notably Vercel OG images.
TLDR: you're developing a blog? Use Next with RSC.
You're developing an interactive web app? Use React + prerendering and avoid all the server-side rendering headaches, or use Next.js with top-level "use client" and "ssr: false" for some components like maps. Just use Next as a PWA with some server-side bonus, and worry about perf later, when you have an audience.
The only axis that really matters in terms of web development is WordPress vs anything else.
The reason the world believes websites offer crippled functionality compared to everything else essentially boils down to the fact that everyone uses WordPress, and it's incredibly hard to enter meaningful data from the front end into a WordPress site.
So, an app is a thing. It's not an arbitrary term; it's a standalone program that runs on a device.
A “thin client” is also a thing, it’s something that runs “just the ui” for some network application.
This is the distinction between web-app and website: state.
I’ve worked on a lot of these projects, and that’s what it’s always boiled down to. There are only three combinations:
- do you have two state models, one on the server and one on the client?
- do you have one state model on the server and just show views of that state on the client?
- or (rarely) do you have state only on the client?
All of these “modern takes” are basically just trying to reduce complexity by saying “maybe we only need the server state model? Isn’t that easier?”
…and yes; it is easier to model state that way, but, unsurprisingly, other things become harder when you do; but this is not a new invention. It’s not even a new idea. Apps (like, actual apps) have done this for a long time too… and it’s ok.
It doesn’t solve all your problems… but sure whatever.
However.
Maybe we don’t need to invent new ways to describe things that already exist…?
I don’t think an arbitrary (and it is) four quadrant breakdown like this really does much to help people navigate this space.
Bluntly, you can’t have your cake and eat it too. If you want app like behaviour, you need an app. These things like htmx make things easier, and quicker, but they don’t offer all the power of having a rich client state.
If you want something that’s great, shortcuts don’t get you there… but, most people don’t need that stuff, it’s just nice… but if you’re not prepared to pay for it, don’t.
And yet, in Microsoft's desktop-native version of MVC (i.e. no web-centric concepts or technology at all), they introduced an extra component - the View Model. The app's View Model is a reduced, simplified version of the Model which tracks changes in the Model; the View presents the state of the View Model; the Controller typically modifies the Model.
A website (or a mobile app) is not a single application running on a distributed framework (in most cases).
It is two applications.
The question is usually: how complicated is the client application?
Is it a thin layer that simply renders server state? Or does it have it own inherent state (like “this component is currently in a dragged state”)?
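The second case (client-inherent state layered over server state) can be sketched like this, with all names invented for illustration:

```typescript
// Two state models: the server owns the durable data, while the client
// keeps purely ephemeral UI state (like "this card is being dragged")
// that never needs to round-trip to the server.
interface ServerState {
  cards: { id: string; title: string }[];
}

interface ClientState {
  draggedCardId: string | null; // transient, never persisted
}

// Rendering combines both: durable data plus transient UI flags.
function renderModel(server: ServerState, client: ClientState) {
  return server.cards.map((card) => ({
    ...card,
    isDragging: card.id === client.draggedCardId,
  }));
}
```

A thin client collapses `ClientState` to nothing and just renders `ServerState`; the complexity the thread is arguing about is exactly how much lives in that second model and how it stays reconciled with the first.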
All I’m saying is; if you want rich fluid client behaviour, you’re fooling yourself if you think any kind of hybrid “server only” approach is the best.
It’s not.
You cannot beat native apps for functionality.
What you can do, is write apps more quickly and easily… and perhaps they will be better and more functional for the effort you put in… but ultimately, a well made, well structured native app (be it browser or desktop or mobile) is fundamentally able to do more, with less latency, than a server driven application.
It’s just about where you put effort in, and what final end result you want at the end.
It’s just easier and less complex to drive everything from the server.
> if you want rich fluid client behaviour, you’re fooling yourself if you think any kind of hybrid “server only” approach is the best. It’s not. You cannot beat native apps for functionality.
Where does something like Microsoft Remote Desktop Services fit into this location-of-state model?
It’s a classic thin client app, but you can run it in “RemoteApp” mode, where instead of remote controlling a full Windows session, you only see a single app’s window, making it largely indistinguishable from running a native app.
It can be set up so that the user sees, say, an Excel icon on their desktop, and the average user virtually cannot tell whether they're running a native app or a remote app.
The experience is surprisingly good. Even over higher latency connections, I’ll notice slow server hardware before I’ll notice network latency. On the flip side, if we allocate a lot of server resources to a remote app, where a user is doing data analysis on a large data set, they will absolutely prefer the remote app. Point being, the latency tends to be a non issue.
What is the meaning of "latency" here, when we're talking about dragging and dropping a Dashboard card (seamless feeling, because of the responsiveness of the local UI), or saving an update to text fields (again, should feel instant if your UI is done correctly)?
There is a common usage for these terms; they are not arbitrary.
Describing websites purely on a "static vs dynamic" or "online vs offline" axis is arbitrary.
That is simply assigning subjective values to the way you interpret a website as behaving and then categorising all websites by that subjective set of metrics.
That’s what arbitrary means.
How do I measure how “dynamic” a website is? Gut feel? It’s a meaningless metric.
How “offline” is my website? Is it a bit offline? Certainty I know when it’s entirely off line… but on a scale of 0-10, how “offline” is a website that runs off an api vs one that fetches static json data files?
Wouldn’t you say that categorising websites by these metrics is totally arbitrary?
I certainly think it is.
This is some classic armchair theorising (look it up); first you invent some labels, then you put everything under those labels in a way that is convenient (rather than data driven) then you talk about it to support some ideas you have.