
Lately I’ve been heads‑down on a complete rethink of my dotfiles setup. It’s not just a `.vimrc` collection – the goal is to treat the dev environment like any other project: reproducible from scratch, automated, and designed to scale as AI becomes a bigger part of daily work.

The core of the project is a “Spilled Coffee Principle,” which basically says that if I spill coffee on my laptop, I should be back up in an afternoon. Every configuration change is codified as a script rather than a one‑off terminal command. Setup scripts create directories, handle symlinks, document dependencies, and generally remove the “Brent the bottleneck hero” problem.
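
A trimmed-down sketch of what one of those setup scripts looks like (the paths and tool names here are illustrative, not the exact contents of the repo):

    #!/usr/bin/env bash
    # Bootstrap sketch: idempotent, safe to re-run after the coffee spill.
    set -euo pipefail

    DOTFILES="$(cd "$(dirname "$0")" && pwd)"

    # Create expected directories up front.
    mkdir -p "$HOME/.config/nvim" "$HOME/.local/bin"

    # Symlink configs with -f so re-running just refreshes the links.
    ln -sf "$DOTFILES/nvim/init.lua" "$HOME/.config/nvim/init.lua"
    ln -sf "$DOTFILES/tmux/tmux.conf" "$HOME/.tmux.conf"

    # Document and check dependencies instead of silently assuming them.
    for dep in git tmux nvim; do
      command -v "$dep" >/dev/null || echo "missing dependency: $dep"
    done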

Beyond that, the repo lives inside a P.P.V system (Pillars, Pipelines, Vaults) where dotfiles are one of the pillars. This structure separates foundational configs from automation pipelines and secure vaults. It forces me to think at the system level: how do all of my tools fit together, where do secrets live, and how can I onboard a new machine (or person) with a single `git clone && ./setup.sh`?
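
At a high level the layout is something like this (simplified, just to show the separation):

    ppv/
      pillars/     # foundational configs, dotfiles among them
      pipelines/   # automation: bootstrap, sync, CI
      vaults/      # secrets, kept encrypted and out of the pillars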

What’s really interesting is the mindset shift this has caused. I’ve been experimenting with what I call the OSE (“Outside and Slightly Elevated”) principle: moving from micro‑level, line‑by‑line coding to a macro‑level role where you orchestrate AI agents. At the micro level you’re navigating files in an editor and debugging sequentially; at the macro level you’re using tmux + git worktrees + AI coding assistants to run multiple tasks in parallel. Instead of `1 developer × 1 task = linear productivity`, you get `1 developer × N tasks × parallel execution`, which at least in principle scales with the number of tasks you can keep moving rather than linearly. This OSE approach forces me to design workflows, delegate implementation to agents, and focus on the “why” and “what” instead of the “how”.
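
Mechanically, the parallel part is nothing exotic; a rough sketch, assuming one worktree and one tmux window per task (the task names are made up):

    # Run inside an existing tmux session: one worktree + one window per task.
    for task in fix-auth add-metrics refactor-cli; do
      git worktree add -b "$task" "../wt-$task"
      tmux new-window -n "$task" -c "$PWD/../wt-$task"
    done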

The result is that my dotfiles aren’t just about aliases anymore; they’re a platform that bootstraps AI‑assisted development, enforces good practices, and keeps me thinking about the bigger picture rather than getting lost tweaking my prompt or editor colours. I’d love to hear how others are approaching the macro vs. micro balance in their own setups.


I've actually settled on more of the opposite approach: my tool usage changes enough that when I get a new PC I generally only bother configuring the tools I currently use, and of course the others are there if I need them.

To that end, each tool has its own subdirectory in my dotfiles repo ( https://github.com/bbkane/dotfiles/ ), and I add a README to each subdirectory explaining what dependencies the tool needs, what keyboard shortcuts it uses, etc.
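
The shape is roughly like this (illustrative only; the repo itself is the source of truth):

    dotfiles/
      nvim/
        README.md    # dependencies, keybindings, setup notes
        init.lua
      tmux/
        README.md
        tmux.conf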

This approach has been pretty resilient against my changing needs, changing operating systems, and changing tool versions, even if it doesn't optimize for a single invocation of ./setup.sh.


Hi Simon,

What explains the huge difference between the two pelicans riding bicycles? Was the rough one the small version running locally, and the pretty good one the bigger model running through the API?

Thanks, Morgan


Ollama doesn't like proper naming for some reason, so `ollama pull magistral:latest` lands you with the q4_K_M version (currently, subject to change).

Mistral's API defaults to `magistral-medium-2506` right now, which is running with full precision, no quantization.
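
If you want to see exactly which build you ended up with locally, `ollama show` prints the parameter count and quantization (the exact output format varies by ollama version):

    ollama show magistral:latest   # look for the quantization line, e.g. Q4_K_M
    # or pull an explicit tag instead of :latest if you want a specific quant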


Nobody should ever be using ollama, for any reason.

It literally only makes everything worse and more convoluted with zero benefits.


Could you elaborate?


Not the parent, but I would say bad defaults and naming. There are countless posts from newbies wondering why a model doesn’t work as well as it should.

It’s usually either because the context size is set very low by default or they didn’t realize that they weren’t running the full model (ollama uses the distilled version in place of the full version but names it after the full version).
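
For the context-size issue specifically, the default can be overridden rather than lived with; for example (the sizes here are just examples):

    # Interactively, inside `ollama run <model>`:
    #   /set parameter num_ctx 32768
    # Or bake it into a derived model via a Modelfile:
    printf 'FROM magistral:latest\nPARAMETER num_ctx 32768\n' > Modelfile
    ollama create magistral-32k -f Modelfile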

There’s also been some controversy over not giving proper credit to llama.cpp, which ollama is/was a wrapper around.


> ollama uses the distilled version

I've never used ollama, but perhaps you mean quantized and not distilled? Or do they actually use distilled versions?


They actually use distilled versions. The most egregious example of this is their misleading reference to all distillations of DeepSeek-R1, which are based on a variety of vastly different base models of varying sizes, as alternative versions of DeepSeek-R1 itself. To this day, many users maintain the mistaken impression that DeepSeek-R1 is overhyped and doesn't perform as well as claimed by those who have been using the actual model with 685B parameters.


ollama is just a wrapper for llama.cpp that adds insane defaults.

Just use llama.cpp directly.
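
For anyone curious what "directly" looks like, it's roughly one command; the model filename is a placeholder and flags may vary by build, so check `--help`:

    # Serve a local GGUF with an explicit context size;
    # llama-server exposes an OpenAI-compatible HTTP API.
    llama-server -m ./Magistral-Small-Q4_K_M.gguf -c 32768 --port 8080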


Not only the quantization, but what’s available via ollama is magistral-small (for local inference), not the -medium variant.


Yes, the bad one was Mistral Small running locally, the better one was Mistral Medium via their API.


Signed. As a US-based developer, I fully support restoring the deductibility of software development expenses. This policy change quietly gutted countless startups and engineering teams—it’s long past time we fix it.

Appreciate YC and folks like @itsluther pushing this forward. This isn’t just a tax issue—it’s about keeping innovation and talent thriving in the US. Let’s get it done.


> This isn’t just a tax issue—it’s about keeping innovation and talent thriving in the US.

Did ChatGPT write this?


The em dash was in popular use long before ChatGPT. It's a useful grammatical symbol and a short dash is not a good substitute. Consider whether you'd use it if it were a dedicated key on your keyboard; if so, it's worth the small inconvenience of learning how to type it.
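
For reference, it's quick once you know the shortcut:

    macOS:   Option+Shift+- (hyphen)        -> —
    Windows: Alt+0151 (on the numpad)       -> —
    Linux:   Compose, -, -, -               -> — (with a Compose key enabled)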


Not just the em dash, the whole post stinks of ChatGPT, and there are two other obvious tells in the sentence I quoted.

If you know you know.


Fair enough. I'm sensitive about the em dash being used as a tell, which I've seen mentioned once or twice, because I don't want people to dumb down punctuation to avoid being confused for an LLM. I'd guess it's a temporary issue until the LLMs get so good at blending in that we can't tell anymore.


That is actually why software development was allowed to be expensed prior to 2017: to keep innovation thriving in the US. In 2017, the US simply stopped giving preferential treatment to R&D.


Why does it matter? In a little while this will stop being a question people ask. If anything it shows I put effort into writing a high-quality comment, and I also hand-signed the letter form.

The point about how hard it is to manually type that character is a great one though, I appreciate that!


Thank you, Simon! I really enjoyed your PyBay 2023 talk on embeddings and this is great too! I like the personalized benchmark. Hopefully the big LLM providers don't start gaming the pelican index!


How would you support someone who wanted to migrate to this and bring all their data with them? Just wondering if you build something like this hoping people start fresh, or do you build tunnels to help people migrate in and out. Kudos.


I made an export function that basically dumps the entire schema out to JSON, which can be processed with something like jq, but I should see if I can make some sort of bulk import function that would be easy to use.
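
For example, once you have the dump, jq gets you pretty far; the field names below are made up, the real ones are whatever the export writes, and this assumes the dump is a top-level JSON array:

    jq 'length' export.json                  # how many records
    jq '.[] | {id, created_at}' export.json  # pull out a couple of fields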


As an E15 Gen 2 owner, I'm in awe of you wizards keeping these ancient ThinkPads alive - my modern entry-level machine suddenly feels inadequate despite having 4x the processing power!


Is this fixed?


It's in the article (and the comments here) -- yes, it was remediated within 3 hours of being reported back in January by GitHub.


I actually have a perfect tool called siphon-cli for this. It adds the headers in between files and everything. https://docs.siphon-cli.com/


Yeah, like a lightweight version of my prompta CLI :)

What I end up with is one .md file that uses variables like "$SRC", "$TESTS" and "$DOCS", which get replaced when you run `prompta output`, plus a JSON file that defines what those variables actually get replaced with.
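
For example, the template might contain something like this (the paths and file names below are just an example, not prompta's exact format):

    # prompt.md
    Source files: $SRC
    Tests: $TESTS
    Docs: $DOCS

    # variables.json
    { "SRC": "src/**/*.rs", "TESTS": "tests/**/*.rs", "DOCS": "docs/**/*.md" }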

Bit off-topic, but curious how your repository ends up having 8023 lines of something for concatenating files, while my own CLI sits at 687 lines (500 of those are Rust) but has a lot more functionality :)


Not OP, but practically all of those lines are from a package-lock.json file (6755 lines) and a changelog (541 lines). It looks like the actual source is 179 lines long.
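
If you want to check that kind of breakdown quickly yourself, something like this works (tracked files only; xargs may print more than one "total" line on big repos):

    git ls-files | xargs wc -l | sort -n | tail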


Sounds interesting, but I assume it is only available if you have access to Claude Desktop, which is not available on Linux, if I understand correctly.


It should work on any client that supports MCP, of which there are many:

https://modelcontextprotocol.io/clients


But only Claude Desktop gets flat $20 pricing from Claude Pro lol


I have seen this repo on GitHub. I don't know a lot about Linux, but it could be interesting.

https://github.com/aaddrick/claude-desktop-debian


Seems like a common itch to scratch and a good tool to scratch it with. I created 'grabout' and 'linusfiles' as tools for this: grabout copies the last command plus its error message or other output to the clipboard, and linusfiles copies the tracked files to the clipboard.

But I like the idea of tarballing it, as ndr_ suggested. I'm thinking that could be the move here.
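
The core of it is basically a one-liner either way; something like this (macOS pbcopy shown, swap in xclip/wl-copy elsewhere; GNU tar for the second one):

    # Copy all tracked files to the clipboard, one header per file.
    git ls-files | while read -r f; do printf '\n===== %s =====\n' "$f"; cat "$f"; done | pbcopy

    # Or, per ndr_'s suggestion, tarball them instead.
    git ls-files -z | tar -czf context.tgz --null -T -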

In case anyone wanted to see my workflows https://github.com/atxtechbro/shell-tooling

