Hacker News: appsoftware's comments

Claude, when used via GitHub Copilot, is much better for usage allowance. I used Opus 4.5 for a month's worth of development and only just hit 90% of the $40/month Pro allowance.

I don't think a general public set of skills like this is going to work. I see value in vendors producing skills for their own products, and in end users maintaining skills to steer agents according to their preferences, but too much in these skills files is opinion. Where does this end? Ordering of skills by specificity, such as org > user > workspace? We also know that skills aren't reliably picked up anyway. And then there's the additional prompt-injection attack surface.

The post you've commented on isn't about a general set of skills at all. It's a link to a specification for skills.

I use a common README_AI.md file, and use CLAUDE.md and AGENTS.md to direct the agent to that common file. From README_AI.md, I make specific references to skills. This works pretty well - it's become pretty rare that the agent behaves in a way contrary to my instructions. More info on my approach here: https://www.appsoftware.com/blog/a-centralised-approach-to-a... ... There was a post on here a couple of days ago referring to a paper that said that the AGENTS file alone worked better than agent skills, but a single agents file doesn't scale. For me, a combination where I use a brief reference to the skill in the main agents file seems like the best approach.
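A minimal sketch of that layout, assuming a conventional skills directory (the skill names and paths below are illustrative, not taken from the post):

```markdown
<!-- CLAUDE.md (AGENTS.md is identical) -->
Read README_AI.md before making any changes. It is the single
source of truth for conventions in this repository.

<!-- README_AI.md -->
## Skills
- Database changes: follow the skill in skills/migrations/SKILL.md
- Release prep: follow the skill in skills/release-notes/SKILL.md
```

The point of the indirection is that every agent entry file stays a one-liner, and the per-topic detail lives in one place regardless of which tool reads it.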

Location: Wiltshire, UK
Remote: Yes (Remote or London)
Willing to relocate: No
Technologies: .NET 9/Core, C#, JavaScript, Cloud, Database, APIs + lots (see CV)
Resume/CV: https://www.appsoftware.com/cv
Email: gareth.brown@appsoftware.com

I am a Senior Software Engineer and Solutions Architect with over 18 years of experience building enterprise-grade systems for the public sector, FTSE 100 companies, and fintech startups.

I specialize in the .NET ecosystem and AWS cloud infrastructure, with a focus on building performant, secure, and maintainable systems. I’m equally comfortable leading a development team through complex technical migrations or being a hands-on individual contributor. I bring a pragmatic, "builder" mindset to software—focused on solving business problems with clean code and robust DevOps practices.

Outside of software, I’m an active and practical person who enjoys woodworking, art, literature and fitness including hiking and Brazilian Jiu Jitsu.

Looking for senior or lead roles where I can take ownership of technical strategy and delivery.


We're all ultimately just learning what we need to in order to get the job done. After 20 years of programming, it is very clear that nobody knows everything. Everyone just knows their own little slice of the software world, and even then you have to 'use it or lose it'. If you're feeling imposter syndrome, keep a study side project going where you don't use any AI, something like NAND to Tetris that forces you to learn low-level concepts, and then just stay productive using AI for the rest of your work.

What would you take out of C# etc?

All non-generic container classes and nullable reference types.

> nullable reference types.

What would you suggest instead? I quite like the nullable reference types, but I do know many get annoyed. My brain is often a scurry of squirrels, so I grew to become thankful for the nullable refs over time.


I don't mind NRT, but I hate dealing with C# projects that haven't set <Nullable>enable</Nullable> in their csproj. It's not perfect, because I know a reference can still be null at runtime, but it's nice when the compiler does most of the checks for you.
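For reference, enabling it project-wide is a one-line csproj property (a minimal sketch; the net9.0 target matches the .NET 9 mentioned elsewhere in the thread and is otherwise an assumption):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <!-- Turns on nullable reference type analysis for every file -->
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```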

The compiler now mostly solves this, but the abstraction is a little leaky.

I heavily use nullable types but I always want them to be declared nullable.


(Not OP) I would mostly take out historic stuff that is in there for backwards compat and has since been superseded. But this could be achieved using linters.

Obsidian is similar, but without the block structure you have to be very specific about linking notes rather than relying on parent-child relationships.


Yep, that matches my experience. Logseq’s block tree gives you “structure by default” (parent/child context), so you can get away with being a bit looser with explicit linking. In Obsidian, because the unit is the note (not the block), you often have to be more intentional about creating/maintaining the links and structure.

Out of curiosity: do you find Logseq’s block hierarchy alone is enough for re-entry, or do you still rely heavily on consistent wikilink naming/tags to avoid the “I swear I linked this but used a different term” problem?

Details in my HN profile/bio if you want the angle I’m exploring around minimizing organization overhead while improving re-entry.


Yes, the block hierarchy is enough for re-entry. There is a natural 'pruning' process where I return to notes and realise I need to rework them to make surfacing the information easier. I often adjust titles and aliases (and often find I have two notes with similar names that need 'refactoring' into one, but Logseq makes this easy). If I don't find the note straight away, I can usually remember adjacent terms to find it, and when I do, I tag it with the first terms I searched on (as acceptance of the associations my brain had naturally made).

I'll keep an eye on your project. What I do struggle with in Logseq is that there isn't an easy means to just dump ideas to organise later, partly because the mobile app is so slow. It really needs two UIs that integrate with the same base format, for two different modes of note collection.

I disagree with others that taking or 'hoarding' notes is more work than it's worth. The benefit of being able to dump info quickly, pick it up again, and find it easily is so valuable. Sure, some notes get written and never see the light of day again, but then they never consume further time because I just don't work on them, and they are there if I need them. There's no way to know what info will definitely be useful in the future.

I like Obsidian, but use Logseq day-to-day. I find it pretty easy to dump information without too much work, due to the block/indenting structure. Retrieval isn't too bad, because you navigate to the wikilink you need and everything is under it (I do sometimes swear I used a specific wikilink and then find I used something else, and have to dig for the information).

Something that did work well recently was writing a Node script to gather all text under a given wikilink, copy it to a doc with some formatting modifications, and then feed the document to an LLM for consolidation and a summary of everything I have recorded on a given subject.
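The gather step of a script like that might look something like this (a minimal TypeScript sketch; the Logseq-style outline format and all names are assumptions, and a real script would read pages from disk rather than an inline array):

```typescript
// Collect a bullet that mentions [[link]] plus every more-deeply-indented
// bullet beneath it, i.e. its whole subtree in a Logseq-style outline.

function indentOf(line: string): number {
  return line.length - line.trimStart().length;
}

function collectUnder(lines: string[], link: string): string[] {
  const out: string[] = [];
  const target = `[[${link}]]`;
  for (let i = 0; i < lines.length; i++) {
    if (!lines[i].includes(target)) continue;
    const base = indentOf(lines[i]);
    out.push(lines[i].trim());
    // Everything indented deeper than the matching bullet belongs to it.
    for (let j = i + 1; j < lines.length && indentOf(lines[j]) > base; j++) {
      out.push(lines[j].trim());
    }
  }
  return out;
}

// Hypothetical page content for illustration.
const page = [
  "- [[Gardening]]",
  "  - compost ratios",
  "    - 2:1 browns to greens",
  "- [[Cooking]]",
  "  - knife care",
];

console.log(collectUnder(page, "Gardening").join("\n"));
```

From there the collected text can be written to a single document and handed to the LLM for the consolidation pass.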


That’s a really solid workflow: keep capture friction low in Logseq, then do “topic export → LLM consolidation” when you actually need a brief. The wiki-link mismatch problem also sounds like a naming/alias layer issue more than retrieval itself.

If you were to take this one step further, would you want the output to be:

1. a consolidated brief you can re-enter later, or
2. a small set of next-actions / open questions extracted from the brief?

This “export + consolidate” pattern is very close to what I’m exploring (details in my HN bio/profile if you’re curious).


Neither, really. I don't want a tool to guide me or to generate content. My notes only have meaning to me if I know I wrote them, from my brain. I'm a fan of learning by proximity to the problems I have. Maybe an LLM-based tool that highlighted conceptual connections to other notes, but in my mind these associations can be quite disparate, sometimes from completely different disciplines.

I think this is where current senior engineers have an advantage, like I felt when I was a junior that the older guys had an advantage in understanding low-level stuff like assembly and hardware. But software keeps moving forward: my lack of time coding assembly by hand has never hindered my career. People will learn what they need to learn to be productive. When AI stops working in a given situation, people will learn the low-level detail as they need to. When I was a junior I learned a couple of languages in depth, but everything since has been top-down, learn-as-I-need-to. I don't remember everything I've learned over 20 years of software engineering, and the forgetting started way before my use of AI. It's true that conceptual understanding is necessary, but everyone's acting like all human coders are better than all AIs, and that is not the case. Poorly architected, spaghetti code existed way before LLMs.


> But software keeps moving forward - my lack of time coding assembly by hand has never hindered my career.

Well, yeah. You were still (presumably) debugging the code you did write in the higher level language.

The linked article makes it very clear that the largest decline was in problem solving (debugging). The juniors starting with AI today are most definitely not going to do that problem-solving on their own.


I want to compliment Anthropic for doing this research and publishing it.

One of my advantages(?) when it comes to using AI is that I've been the "debugger of last resort" for other people's code for over 20 years now. I've found and fixed compiler code generation bugs that were breaking application code. I'm used to working in teams and to delegating lots of code creation to teammates.

And frankly, I've reached a point where I don't want to be an expert in the JavaScript ORM of the month. It will fall out of fashion in 2 years anyway. And if it suddenly breaks in old code, I'll learn what I need to fix it. In the meantime, I need to know enough to code review it, and to thoroughly understand any potential security issues. That's it. Similarly, I just had Claude convert a bunch of Rust projects from anyhow to miette, and I definitely couldn't pass a quiz on miette. I'm OK with this.

I still develop deep expertise in brand new stuff, but I do so strategically. Does it offer a lot of leverage? Will people still be using it on greenfield projects next year? Then I'm going to learn it.

So at the current state of tech, Claude basically allows me to spend my learning time strategically. I know the basics cold, and I learn the new stuff that matters.


> my lack of time coding assembly by hand has never hindered my career.

I'd kinda like to see this measured. It's obviously not the assembly that matters for nine-9s of jobs. (I used assembly language exactly one time in my career, and that was three lines of inline in 2003.) But you develop a certain set of problem-solving skills when you code assembly. I speculate, like with most problem-solving skills, it has an impact on your overall ability and performance. Put another way, I assert nobody is worse for having learned it, so the only remaining question is, is it neutral?

> everyone's acting like all human coders are better than all AI's

I feel like the sentiment here on HN is that LLMs are better than all novices. But human coders with actual logical and architectural skills are better than LLMs. Even the super-duper AI enthusiasts talk about controlling hordes of LLMs doing their bidding--not the other way around.


Being able to read assembly has helped me debug. You don't have to write it, but you do have to be able to read it. The same applies to manual transmissions and pocket calculators.


That's fair enough, but reading assembly is such a pain in the ass... it was exciting for the first 10 minutes of my life, but now, if I ever got to that point, I'd 100% copy-paste the listing into ChatGPT with "hey, can you see anything sketchy?"


I agree with the author's observations here. I think rather than it being purely language-related, there's a link to the practice of 'rubber ducking', where explaining your problem to someone else forces you to step through it as you lay out the context, the steps you've tried, and where you're stuck. I think LLMs can be that other person for us sometimes, except that this other person has a great broad range of expertise.

