
Yeah, Pangram does not provide any concrete proof, but it confirms many people's suspicions about their reviews. It does, however, flag reviews for a human to take a closer look and see whether the review is flawed, low-effort, or contains major hallucinations.


Was there an analysis of flawed, low-effort reviews in similar conferences before generative AI models existed?

From what I remember, (long before generative AI) you would still occasionally get very crappy reviews (as an author). When I participated (a couple of times) in review committees, whenever there was high variance between reviews, the crappy ones were rather easy to spot and eliminate.

Now it's not bad to detect crappy (or AI) reviews, but I wonder if it would change the end result much compared to other potential interventions.


Anecdotally, people are seeing a rise in low-quality reviews, which is correlated with increased reviewer workload and AI tools giving reviewers an easy way out. I don't know of any studies quantifying review quality, but I would recommend checking the Peer Review Congress program from past years.


> does not provide any concrete proof, but it confirms many people's suspicions

Without proof there is no confirmation.


Formally? Sure. But in the current zeitgeist it’s more than enough to start pointing fingers, etc.



