In my experience, LLMs work better with frameworks that provide more rigid guidance. Something like Tailwind has a body of examples that work together, a shared language to reason about the behavior you need, (potentially) higher levels of abstraction, etc. This seems to be helpful.
LLMs can certainly write raw CSS and it works well; the challenge is when you need consistent framing across many pages with mounting special cases, and the LLM may extrapolate small inconsistencies even further. If you stick within a rigid framework, the inconsistencies should be fewer across a larger project (in theory, at least).
I've visited this museum and it was the highlight of my trip to the Netherlands. I also wondered, for hours, about how cool it is to hook up modern hardware to these old systems. Can you imagine playing one live, similar to how an artist would play a synthesizer kit?
This technique is surprisingly powerful. Yesterday I built an experimental cellular automata classifier system based on some research papers I found and was curious about. Aside from the sheer magic of the entire build process with Cursor + GPT5-Codex, one big breakthrough was simply cloning the original repo's source code and copy/pasting the paper into a .txt file.
Now when I ask questions about design decisions, the LLM refers to the original paper and cites the decisions without googling or hallucinating.
With just these two things in my local repo, the LLM created test scripts to compare our results against the paper's and fixed bugs automatically, helped me make decisions based on the paper's findings, helped me tune parameters based on the empirical outcomes, and even discovered a critical bug in our code: our training data was randomly generated, whereas the paper's training data was a permutation over the whole solution space.
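To make that training-data mismatch concrete, here's a minimal Go sketch (hypothetical, not the project's actual code): random generation can repeat or skip configurations, while a permutation over the whole solution space covers each configuration exactly once.

    package main

    import (
        "fmt"
        "math/rand"
    )

    func main() {
        const space = 8 // e.g. all 3-cell binary neighborhood configurations

        // Randomly generated training set: duplicates and gaps are possible.
        randomSet := make([]int, space)
        for i := range randomSet {
            randomSet[i] = rand.Intn(space)
        }

        // Permutation over the whole solution space: every configuration
        // appears exactly once, just in shuffled order.
        permSet := rand.Perm(space)

        fmt.Println("random:", randomSet)
        fmt.Println("perm:  ", permSet)
    }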
All of this work was done in one evening and I'm still blown away by it. We even ported our code to golang, parallelized it, and saw a 10x speedup in processing. Right before heading to bed, I had the LLM spin up a novel simulator driven by a quirky set of tests I invented around hypothetical sensors and data that haven't been implemented yet, and it nailed it on the first try - using smart abstractions and not touching the original engine implementation at all. This tech is getting freaky.
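For anyone curious, that kind of golang parallelization usually takes a familiar shape: fan independent simulation runs out across one worker goroutine per core. A minimal sketch, with runSimulation and numRuns as hypothetical stand-ins rather than the project's real code:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // runSimulation stands in for one independent cellular automata run.
    func runSimulation(seed int) float64 {
        return float64(seed) // placeholder result
    }

    func main() {
        const numRuns = 1000
        results := make([]float64, numRuns)

        jobs := make(chan int)
        var wg sync.WaitGroup

        // One worker goroutine per CPU core pulls run indices off the channel.
        for w := 0; w < runtime.NumCPU(); w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for i := range jobs {
                    results[i] = runSimulation(i)
                }
            }()
        }

        for i := 0; i < numRuns; i++ {
            jobs <- i
        }
        close(jobs)
        wg.Wait()

        fmt.Println("completed", numRuns, "runs")
    }

Since each run is independent, this fan-out is close to embarrassingly parallel, which is presumably why the speedup came so cheaply.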
I recently learned that buybacks and short selling historically were not legal; it's only in recent history that they've become broadly standard practice (1982 is when buybacks were legalized, I think).
It’s part of Reaganomics. Buybacks weren’t explicitly illegal beforehand, though; it just wasn’t clear when they would count as stock-price manipulation.
As an analogy, there might be some interesting discussions happening at my local community center, or at my neighbor's house, but I would have no way of knowing. To discover these discussions, I would need to meet someone with a shared interest who would, in turn, point me to a place they go for continued discussions and to hang out with interesting, like-minded people.
So maybe, if done correctly, this is a feature? The good content is one extra network connection away, but easy enough to find if an advocate chooses to highlight it, share a connection, or otherwise create an inbound reference to the community.
Imagine if you had a way to search that surfaced something like, "hey, there's an interesting conversation going on at my local community center, maybe I'll go and join their next session."
At the same time, does your local community center want the unfiltered public to have input into their conversations? Or are they only interested in spreading them to friends/neighbors of people already at the center?
This also reminds me of the Radiolab episode that tracks bird migration, including one bird (that they were actively tracking) that simply peeled off from the group and settled down somewhere else that wasn't part of the historic migration path. Feels like the same idea.
In the book The Mote in God's Eye, they have a concept of the Crazy Eddie (presumably named after the 'eddies' in fluid dynamics), a mythical social phenotype where the member disagrees with the status quo and believes there is an unknown solution to their thus-far unsolved generational problem. Simply believing there is a solution worth searching for marks the member as 'insane'.
Kind of seems like we, as natural beings and members of natural systems, absolutely have some kind of pattern-breaking behavior built in at a systemic level. A master-level emergent behavior that can exploit a local maximum but still succeed in finding other local maxima, ensuring the survival and adaptation of a species.
In ‘The Mote in God’s Eye’, the Crazy Eddies were a bad thing because they inevitably destabilized the system (or were symptoms of the system destabilizing), and they resulted in apocalyptic consequences whenever they found something of note (which they had, dozens of times or something). Which I believe was also what ended up happening in the book, wasn’t it?
I also learned about Permutation City from an HN thread years ago. It's my favorite book now, and I immediately ctrl+F'd this list for Permutation City because everyone else on HN should know it exists.
Also I need to track down that Star Trek TNG episode... it sounds poignant.