
> At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely skip authorization, business logic, error handling, and logging even when prompted to do so.
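A minimal sketch of the kind of trivial SQLi bug in question, using a hypothetical `users` table and Python's sqlite3 for illustration (the specific names are made up, not from the thread):

```python
import sqlite3

def unsafe_lookup(conn, username):
    # Vulnerable: user input interpolated directly into the SQL string.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def safe_lookup(conn, username):
    # Parameterized query: the driver treats the value as a literal.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(unsafe_lookup(conn, payload)))  # 2: the injection matches every row
print(len(safe_lookup(conn, payload)))    # 0: the payload is just a string
```

The interpolated version turns the payload into `... WHERE name = 'x' OR '1'='1'`, which is always true; the parameterized version never does.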
