
> It's true that this could probably also be handled by AI, but in the end, classifying the lenses takes like 1-2% of the time it takes to make a scraper for a website so I found it was not worth trying to build a very good LLM classifier for this.

This is true of technology in general, not just LLMs specifically.

In my experience, the 80/20 rule comes into play: most of the edge cases can be handled by a couple of lines of code or a regex. From there, the returns curve asymptotically, with each additional line of code handling a rarer and rarer edge case.
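To make that concrete, here is a minimal sketch of the pattern (the lens categories and regexes are hypothetical, not from the original post): a few rules cover most inputs, and each pattern appended later catches a rarer edge case.

```python
import re

# Ordered rules: the first few patterns handle the common cases;
# rarer edge cases get appended to the end of the list over time.
# These categories and patterns are illustrative only.
PATTERNS = [
    (re.compile(r"\bmacro\b", re.I), "macro"),
    (re.compile(r"\b\d+-\d+\s*mm\b", re.I), "zoom"),
    (re.compile(r"\b\d+\s*mm\b", re.I), "prime"),
]

def classify(name: str) -> str:
    for pattern, label in PATTERNS:
        if pattern.search(name):
            return label
    return "unknown"  # the long tail falls through here
```

For example, `classify("Canon EF 24-70mm f/2.8L")` returns `"zoom"`, while a string matching none of the patterns lands in `"unknown"`.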

And, of course, I always seem to end up on projects where even a small, rare edge case has a huge negative impact if it gets hit, so you have to keep adding defensive code and/or build a catch-all bucket that alerts you to the issue without crashing the entire system.
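The catch-all-bucket idea might look something like this sketch (the wrapper, the `unclassified` queue, and the logging setup are assumptions for illustration, not the commenter's actual code): failures and unknowns are recorded and surfaced for review instead of taking down the pipeline.

```python
import logging

logger = logging.getLogger("scraper")

# Review queue for inputs the classifier couldn't handle.
# Illustrative only; a real system might push these to a dashboard or alert.
unclassified: list = []

def classify_safely(name, classify):
    """Run a classifier defensively: log failures, bucket unknowns, never crash."""
    try:
        label = classify(name)
    except Exception:
        # Defensive catch-all: record the failure and keep the system running.
        logger.exception("classifier failed on %r", name)
        label = "unknown"
    if label == "unknown":
        unclassified.append(name)  # surfaces the issue without halting the run
    return label
```

The design choice here is that an unhandled input degrades to `"unknown"` and an alert rather than an exception that stops the whole scrape.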


