Hacker News | mozanunal's comments

Hi HN. To explore the power of classless CSS, I built this simple page that lets you instantly switch between more than a dozen different frameworks like Pico, Water.css, Sakura, and Missing.css.

It’s a great way to see how each framework interprets pure, semantic HTML without any classes. You can compare how they handle typography, forms, tables, blockquotes, and other elements with a single click.

Live Tool: https://hugo-classless.netlify.app/

The page itself is built with a Hugo theme I created for this purpose, which is also open source for anyone who wants to use it for their own site.

GitHub: https://github.com/mozanunal/hugo-classless

Hope this is a useful resource for folks interested in CSS and minimalist web design. Feedback is welcome!


If you are looking for a web app stack that does not require a build step:

TL;DR: I wanted a web app stack that will still work easily in the year 2030. It's for the side projects and small tools I develop, and this is what I came up with.

I've been spending some time thinking about (and getting frustrated by!) the complexity and churn in modern frontend development. It often feels like we need a heavy build pipeline and a Node.js server just for relatively simple interactive applications.

So, I put together some thoughts and examples on an approach I'm calling "No-Build Client Islands". The goal is to build SPAs that are:

- Framework-Free (in the heavy sense): Using tiny, stable libraries.

- No Build Tools Required: Leveraging native ES modules.

- Long-Lasting: Reducing reliance on rapidly changing ecosystems.

- Backend Agnostic: Connect to any backend you prefer.

https://mozanunal.com/2025/05/client-islands/
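To make the idea concrete, here is a minimal sketch of a no-build page, assuming Preact and HTM are pulled straight from a CDN (the esm.sh URLs are illustrative; any ESM-serving CDN works):

```html
<!-- index.html: a self-contained interactive "island", no npm, no bundler -->
<!DOCTYPE html>
<html>
<body>
  <div id="counter-island"></div>
  <script type="module">
    // Native ES modules loaded directly in the browser
    import { h, render } from "https://esm.sh/preact";
    import { useState } from "https://esm.sh/preact/hooks";
    import htm from "https://esm.sh/htm";

    // htm gives JSX-like templates with no compile step
    const html = htm.bind(h);

    function Counter() {
      const [count, setCount] = useState(0);
      return html`<button onClick=${() => setCount(count + 1)}>
        Clicks: ${count}
      </button>`;
    }

    render(html`<${Counter} />`, document.getElementById("counter-island"));
  </script>
</body>
</html>
```

Serve this file from any static host and it just works; there is nothing to compile, so it should still open unchanged years from now.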


Hi HN, Ozan here.

I've been fascinated by the "islands of interactivity" architecture popularized by tools like Astro, but wanted to see if we could achieve a similar DX and performance benefit without a build step or a Node.js server for rendering.

This post, "No-Build Client Islands," explores exactly that: bringing the island concept fully to the client side. We use native ES modules, Preact + HTM for component rendering, and Page.js for client-side routing.

The core idea is that your "static site" (the shell and routes) is rendered by the client after the initial HTML load, and then specific "islands" of interactivity are mounted on demand, much like Astro or Fresh – but all within the browser. This means:

- Zero build tools: No npm, Vite, or Webpack. Just ES modules.

- Truly backend-agnostic: Serve static files from anywhere your heart desires (Go, Rust, Python, Java, or just a CDN).

- Tiny runtime: The core libraries (Preact, HTM, Page.js) are very small.

- Client-Side Rendering & Routing: Unlike Astro which does build-time generation for static parts, here the client handles the initial page structure and routing, then hydrates interactive islands.

Think of it as taking the best parts of the islands model (component-level interactivity, selective hydration) and applying them to a purely client-rendered SPA, aiming for maximum simplicity, stability, and longevity.
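As a sketch of how routing and on-demand island mounting fit together (the CDN URLs and the `./islands/todo.js` module are illustrative, not part of any published code):

```html
<script type="module">
  // Client-side routing with Page.js; each route mounts its own island.
  import page from "https://esm.sh/page";
  import { h, render } from "https://esm.sh/preact";
  import htm from "https://esm.sh/htm";

  const html = htm.bind(h);
  const outlet = document.getElementById("app");

  // Static shell routes rendered entirely on the client
  page("/", () => render(html`<h1>Home</h1>`, outlet));

  page("/todos", async () => {
    // Island loaded on demand via dynamic import, only when the route is hit
    const { TodoApp } = await import("./islands/todo.js"); // hypothetical module
    render(html`<${TodoApp} />`, outlet);
  });

  page(); // start the router
</script>
```

The dynamic `import()` is what gives you the selective-hydration feel: code for an island is only fetched when a user navigates to a route that needs it.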

You can read the full post here: https://mozanunal.com/2025/05/client-islands/

I'm keen to hear your thoughts on this client-side take on islands, how it compares to server-centric/build-time approaches for your use cases, and any potential pitfalls or advantages you see!


Why sllm.nvim?

The [`llm`](https://llm.datasette.io/en/stable/) command-line tool by Simon Willison (co-creator of Django and creator of Datasette and sqlite-utils) is a wonderfully extensible way to interact with Large Language Models. Its power lies in its simplicity and vast plugin ecosystem, allowing users to tap into numerous models directly from the terminal.

I was particularly inspired by Simon's explorations into `llm`'s [fragment features for long-context LLMs](https://simonwillison.net/2025/Apr/7/long-context-llm/). It struck me how beneficial it would be to seamlessly manage and enrich this context directly within Neovim, my primary development environment.

Like many developers, I found myself frequently switching to web UIs like ChatGPT, painstakingly copying and pasting code snippets, file contents, and error messages to provide the necessary context for the AI. This interruption broke my workflow and felt inefficient. `sllm.nvim` was born out of the desire to streamline this process.

Contained within around 500 lines of Lua, it aims to be a simple yet powerful Neovim plugin. The heavy lifting of LLM interaction is delegated to the robust `llm` CLI. For the user interface components, I've chosen to leverage the excellent utilities from `mini.nvim` – a library I personally use for my own Neovim configuration – and plan to continue using its modules for any future UI enhancements. The focus of `sllm.nvim` is to orchestrate these components to manage LLM context and chat without ever leaving the editor.

As Simon Willison also discussed in his post on [using LLMs for code](https://simonwillison.net/2025/Mar/11/using-llms-for-code/), effective context management is key. `sllm.nvim` aims to significantly contribute to such a workflow by making context gathering and LLM interaction a native part of the Neovim experience.
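For reference, this is roughly the kind of `llm` invocation the plugin wraps: the `-f` flag attaches fragments (files or URLs) as context, and `-c` continues the previous conversation. The file paths here are purely illustrative:

```shell
# Attach a local file and a URL as context fragments, then ask a question
llm -f src/main.lua -f https://llm.datasette.io/ \
  "Explain what this module does and suggest improvements"

# Follow up in the same conversation with an extra fragment
llm -c -f errors.log "Why might this error occur?"
```

The plugin's job is to build these context fragments for you from buffers, selections, and diagnostics, so you never have to assemble the command line by hand.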

