fazlerocks's comments | Hacker News

honestly? teaching or coaching.

been thinking about this a lot lately and realized the skills that made me good at building products: breaking down complex problems, explaining things clearly, helping people think through decisions… those transfer really well to education.

I actually did that early in my career (2013-2015), writing content on frontend tech like Bootstrap for sites like SitePoint. published multiple books, which helped me get my O-1 visa :D

there's something appealing about work that's fundamentally about humans helping other humans grow. way harder for AI to replace the relationship part of learning

been mentoring junior devs and it's honestly the most fulfilling work i do. if tech gets fully automated, at least i'd be doing something that actually feels meaningful


we've shifted to focusing way more on problem-solving ability during interviews rather than just coding skills

still do technical screens but now we give people access to AI tools during the process - because that's how they'll actually work. want to see how they break down problems, ask the right questions, and iterate on solutions

honestly the candidates who can effectively use AI to solve complex problems are often better hires than people who can code from scratch but struggle with ambiguous requirements

the key is testing for engineering thinking, not just programming syntax


we've tried a bunch of different approaches and honestly the best feedback comes from getting people in a room (or zoom) together

widgets are fine for quick "this button doesn't work" stuff but for real UI feedback you need context… what were they trying to do, where did they get confused, what did they expect to happen

we do weekly design reviews where everyone can see the same screen and talk through flows in real time. way more valuable than async comments scattered across different tools

the trick is making it feel collaborative instead of like a critique session. when people feel heard they give way better feedback


the best "payment" is often just updating them on how their help changed your trajectory. a simple message years later saying "that advice you gave me led to X" means everything

also - helping the next person who reminds you of your younger self. pay it forward instead of trying to pay it back directly. most mentors get more satisfaction from seeing the ripple effect than getting something back personally


This is actually a really cool direction: using LLMs to interact directly with Android UIs could solve the brittleness problem that's been killing traditional automation.

Like just telling it "navigate to settings and enable dark mode" instead of writing fragile selectors… that's the dream :D

But the current implementation has some issues that make it tough for real use:

The 2-5 second latency per action is brutal. A simple login flow would take forever vs traditional automation.

The bigger thing is reliability… how do you actually verify the LLM did what you asked vs what it thinks it did? With normal automation you get assertions and can inspect elements. Here you're kinda flying blind.
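
To be fair, you could bolt your own ground-truth check onto each step instead of trusting the model's self-report. A minimal sketch in Python, assuming adb is on PATH; the ui_night_mode key and the "2" == dark value are stock-Android assumptions that vary by OEM:

    import subprocess

    def device_setting(namespace: str, key: str) -> str:
        """Read a device setting over adb, independent of the LLM's own report."""
        out = subprocess.run(
            ["adb", "shell", "settings", "get", namespace, key],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # after the agent claims "dark mode enabled", check ground truth:
    # ui_night_mode == "2" means dark on stock Android (assumption; varies by OEM)
    assert device_setting("secure", "ui_night_mode") == "2", "agent failed or lied"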

Also "vision optional" makes me think it's not great at understanding complex UIs yet… which defeats the main selling point.

That said this feels like where things are headed long term. As LLMs get faster and better at visual stuff, this approach could eventually beat traditional automation for maintainability. Just not quite ready for production yet.


Yes, you are correct: at the current level, this is an MCP. Next, we are going to build an agent on top of it, where we will include vision as a must-have capability, as you mention, to understand complex UIs, and we will run a validator after each step as well.


We're building tools that could genuinely make people's lives better, but instead most companies are laser-focused on "how many jobs can we eliminate?" The whole conversation around AI safety isn't even about keeping people employed… it's about making sure the AI doesn't turn on us.

Investors want to hear about efficiency gains and cost savings (aka layoffs). Customers want solutions that work. Trying to balance building something useful while not contributing to the dystopia is genuinely difficult.

What keeps me going is focusing on problems that actually matter and being selective about who I work with. Not everyone can do this, but if you have some leverage, use it to push back on the worst impulses.


And it's all so short-sighted. If their wildest dreams come true with AI, the only employees besides the CEO will be AI agents.

Who's gonna buy what they're selling?


Been messing around with the different memory files and honestly it's pretty useful once you get the hang of it.

I mostly use:

Project ./CLAUDE.md - team stuff like our weird API naming conventions, how we structure components, deployment steps that always trip people up

Local ./CLAUDE.local.md - my dev URLs, test accounts, personal shortcuts. Stuff I don't want to commit to the team file

User ~/.claude/CLAUDE.md - code style preferences, how I like explanations (brief vs detailed), tools I always use

The recursive thing is neat - it picks up memories from parent folders so you don't have to duplicate everything.

That # shortcut is way faster than editing files manually. Still forget about the /memory command half the time tho.

Main thing I learned is being specific helps a lot. 'Use our error handling pattern' vs actually showing the pattern makes a big difference.
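
For example, here's the kind of snippet you'd paste into ./CLAUDE.md verbatim instead of just naming the pattern. Hypothetical sketch - the endpoint, exception name, and logging choices are all made up for illustration:

    import logging
    import requests

    log = logging.getLogger(__name__)

    class UserFetchError(Exception):
        """Hypothetical team convention: wrap transport errors in a domain error."""

    def fetch_user(user_id: str) -> dict:
        try:
            resp = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            log.warning("user fetch failed for %s: %s", user_id, err)
            raise UserFetchError(user_id) from err

Claude can copy a concrete example like that; it can only guess at "our pattern".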


Learn prompt engineering and how to effectively use AI coding assistants… that's immediately useful and will save you hours daily.

Vector databases (Pinecone, Weaviate) and building RAG systems. Tons of companies need this now and most devs don't know it yet.
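
If you want the core idea before committing to a vendor, the retrieval half fits in a few lines. A toy sketch - the character-frequency "embedding" stands in for a real embedding model, and the numpy array stands in for Pinecone/Weaviate:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Toy embedding: normalized character-frequency vector.
        A real RAG system would call an embedding model here instead."""
        vec = np.zeros(128)
        for ch in text.lower():
            vec[ord(ch) % 128] += 1
        return vec / (np.linalg.norm(vec) or 1)

    docs = [
        "Reset your password from the account settings page.",
        "Invoices are emailed on the first of each month.",
        "Dark mode can be toggled under display preferences.",
    ]
    index = np.stack([embed(d) for d in docs])  # the "vector database"

    query = "how do I change my password?"
    scores = index @ embed(query)  # cosine similarity (vectors are unit length)
    print(docs[int(scores.argmax())])  # retrieved context to stuff into the prompt

Swap in real embeddings and an actual vector store and that's most of the retrieval step of a RAG pipeline.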

Understanding model fine-tuning and when it's worth it vs just better prompting. Also get comfortable with AI ops - monitoring model performance, dealing with hallucinations, cost optimization. The boring stuff that actually matters in production.

And yeah, just stay curious and adaptive. Half the tools we use today didn't exist 18 months ago.


Honestly X is still pretty good for this if you follow the right people. The AI/ML research community is super active there - Andrej Karpathy, François Chollet, Yann LeCun, etc. Plus a lot of the good startups announce stuff there first.

For more traditional dev stuff, I've been getting good signal from newsletters like Changelog, TLDR, and Morning Brew's tech section. Not as real-time as the old blog days but decent curation.

Reddit's r/MachineLearning and r/programming can be hit or miss but sometimes catch things early. GitHub trending is also underrated for spotting new tools.


Right. We're optimizing for the wrong metrics. Hours spent arguing whether something is a 3 or a 5 in story points could've been spent just building it.

The obsession with predictability in an unpredictable process is the real problem, especially in the Copilot and Cursor era. :D


It's used to justify the jobs of scrum/PM people, and I'm tired of being polite about it. Imagine we enter a tech company, decide it's LARPing time, create D&D rules for a completely made-up role, and then pay someone to do it. That is literally what has been happening in the tech industry.


If people are arguing between 3 and 5 points just pick one and move on.

If people are arguing between 3 and 21 points there's a mismatch in understanding what the work entails.


Often the highest is the correct answer. If it's 21, then break it down until nobody thinks it's 21.

