NewEntryHN's comments

This implication completely depends on the elasticity (or lack thereof) of demand for software. When marginal profit from additional output exceeds labor cost savings, firms expand rather than shrink.
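A toy illustration with made-up numbers (not from the article), modeling inelastic demand as a cap on how many units of output actually sell. If the cap is high (elastic), a productivity boost makes each developer more profitable and expanding pays off; if the cap binds (inelastic), the same boost just means you need fewer developers:

    # Toy numbers, purely illustrative
    REVENUE_PER_UNIT = 120_000
    COST_PER_DEV = 100_000

    def profit(devs, units_per_dev, demand_cap):
        units = min(devs * units_per_dev, demand_cap)  # cap models inelastic demand
        return units * REVENUE_PER_UNIT - devs * COST_PER_DEV

    print(profit(10, 1, demand_cap=10))   # baseline: 200,000
    print(profit(10, 2, demand_cap=100))  # elastic: extra output sells -> 1,400,000, so expand
    print(profit(5,  2, demand_cap=10))   # inelastic: same 10 units, half the payroll -> 700,000, so shrink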

One reason you see a Pareto distribution in "normal sized" teams is not solely competency: the 80% can rest on the 20% and therefore don't feel pressed to work all that hard. That's why the Pareto model breaks down in one-man teams.


To be fair, the diction in modern movies is different from the diction in all the other examples you mentioned. YouTube and live TV are very articulate, and old movies are theatrical in style.


Can we go back to articulate movies and shows? And to crappier microphones where actors had to speak rather than whisper? Thanks.


That is exactly my point: the diction in modern movies sucks.


"Software engineer complains bearing the burden of everything and concludes everything would be fixed by firing everybody except themselves."


Of course. IQ tests measure nothing more than the ability to pass an IQ test, which is itself shaped by a lot of things such as Western culture, education, and a propensity to cram for tests.


> IQ tests measures nothing more than the ability to pass an IQ test

Incorrect: IQ is a composite measure correlated with fluid reasoning, crystallized knowledge, working memory, processing speed, and spatial ability. It's true that you can't naively use IQ to compare two diverse groups, but you can correct for this with a large enough sample of any two groups. The idea that it's biased towards Western culture or education is vastly overblown.


Why do it themselves instead of distributing the work to the data owners?


This is not just branding. MCP is an implementation detail; the product is chatting with apps.


What's up with the "prompt refinement" business? Are folks trying to get it right in one shot?

My experience is that treating the generated code as a merge request, on which you submit comments for correction (and then again for the next round), works fairly well.

Because the AI is bad, you get more rounds than in a real code review; but because the AI is fast and at your command, each round is much faster than a code review with a human (a < 10 minute feedback loop).
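A sketch of that loop, assuming a hypothetical ask_model() wrapper around whatever LLM client you use (not a real API):

    def ask_model(prompt: str) -> str:
        ...  # call your LLM client of choice here

    code = ask_model("Implement: <your task>")
    while (comments := input("Review comments (empty to approve): ")):
        code = ask_model(
            "Address these review comments and return the revised code.\n\n"
            f"Code:\n{code}\n\nComments:\n{comments}"
        )

The initial prompt barely matters; the review comments carry the steering.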


Isn't that mostly the fine-tuning phase, with RLHF as the cherry on top?


Wouldn't any additional item increase safety?


No, not if you overdo it. You start getting into https://en.wikipedia.org/wiki/Alarm_fatigue territory.


If your checklist is a PITA to go through, completing it is more likely to lure you into a false sense of security where you miss even the obvious things.

IMO the best way is to start small, and every time the checklist fails to catch an issue, either modify the existing item(s) or add new one(s). Organic complexity is the best complexity.

