
Nobody is sleeping on anything. Linting is, for the most part, static code analysis, which by definition does not find runtime bugs. You even say it yourself: "runtime bug, ask the LLM if a static lint rule could be turned on to prevent it".

To find most runtime bugs (e.g. an incorrect regex, broken concurrency, an incorrect SQL statement, ...) you need to understand the mental model and logic behind the code - checking whether "variable XYZ is unused" or "variable X shadows Y" or other more "esoteric" lint rules will not catch them. The likelihood is high that the LLM just hallucinated some false-positive lint rule anyway, giving you a false sense of security.
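To illustrate with a made-up TypeScript sketch (hypothetical code, not from the thread): the function below type-checks and passes every common lint rule, yet it is wrong at runtime because the regex is not anchored, so any string that merely contains something email-like is accepted:

  // Compiles cleanly and triggers no lint rule - but without ^...$ anchors,
  // "ignore this x@y.z trailing junk" is accepted as a valid email.
  const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/;

  export function isValidEmail(input: string): boolean {
    return EMAIL_RE.test(input);
  }

Only something that understands the intent behind the code can catch that; no static rule knows what the regex was supposed to match.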


> static code analysis which by definition does not find runtime bugs

I'm not sure if there's some subtlety of language here, but from my experience of JavaScript linting, it can often prevent runtime problems caused by things like variable scoping, unhandled exceptions in promises, misuse of functions, etc.
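For example, a minimal sketch (assuming typescript-eslint with the no-floating-promises rule enabled - an assumption on my part, not something stated above):

  async function saveUser(name: string): Promise<void> {
    // stand-in for a real database write that may reject
    await Promise.reject(new Error(`could not save ${name}`));
  }

  export function onSubmit(name: string): void {
    // Without the rule, this rejection is silently lost at runtime;
    // @typescript-eslint/no-floating-promises flags the unawaited call.
    saveUser(name);
  }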

I've also caught security issues in Java with static analysis.


The usefulness of static code analysis (strict type systems, linting) versus no static code analysis is not in question. JavaScript in particular, which does not have a strict type system, benefits greatly from static code analysis.

But the author claims that you can catch runtime bugs by letting the LLM create custom lint rules, which is at best hyperbole, at worst plain wrong, and in any case gives developers a false sense of security.


> But the author claims that you can catch runtime bugs

I think you misinterpreted OP:

> Every time you find a runtime bug, ask the LLM if a static lint rule could be turned on to prevent it

Key word is prevent.
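As a hedged sketch of that workflow (the bug is made up, but eqeqeq is a real core ESLint rule):

  function isRateLimited(remaining: string | number): boolean {
    // Runtime bug that shipped: the header arrives as the string "0" and
    // "0" == 0 is true - but "" == 0 is also true, so an empty header
    // value was treated as "no requests left".
    return remaining == 0;
  }

  // After the incident, turning on ESLint's "eqeqeq" rule rejects every
  // future "==", preventing this whole class of coercion bug.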


Catch or prevent - linting only covers a small subset of runtime problems (how small depends on the programming language). The whole back-pressure discussion feels like AI coders just discovered type systems and lint rules, but that doesn't solve the kinds of problems we get in agentic coding. The only "agent" responsible for code correctness (and thus adherence to the feature specification) is the human instructing the agent. A better compiler or lint rule will not prevent the massive logic bugs LLMs tend to create: tests that only exercise functions the LLM wrote so the test would pass, broken logic flows, missing DI, re-implementing existing logic, dead code that is never used yet pollutes context windows - all the problems LLM-based "vibe" coding "shines" with once you work on a sufficiently long-running project.
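One hypothetical variant of the "tests that cannot meaningfully fail" problem, sketched in TypeScript (vitest and the pricing module are assumptions for illustration):

  import { expect, test } from "vitest";
  import { applyDiscount } from "./pricing";

  // The spec says "10% off only for orders over 100", but the generated
  // test mirrors whatever the implementation does, so it passes even
  // though applyDiscount wrongly discounts every order.
  test("applies discount", () => {
    const total = 50;
    const expected = total * 0.9; // copied from the buggy implementation
    expect(applyDiscount(total)).toBe(expected);
  });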

Why do I care so much about this? Because the "I feel left behind" crowd is being gaslit by comments like the OP's.

Overall, strict type systems and static code analysis have always been good for programming, and I'm glad vibe coders are finding out about this as well - it just doesn't fix LLMs' lack of intelligence, nor does it remove programmers' responsibility to understand and improve the generated stochastic token output.


Static analysis certainly can find runtime bugs.

OP isn't claiming that all runtime bugs can be prevented with static lints suggested by LLMs, but if at least some can, I don't see how your comment contributes. Yet another case of "your suggestion isn't perfect, so I'll dismiss it" on Hacker News.

Why is this such a common occurrence here? Does this fallacy have a name?

EDIT: seems to be https://en.wikipedia.org/wiki/Nirvana_fallacy


Well, if you haven't noticed, LLM topics receive a particularly hostile reaction on HN.

My LLM has theorized that its success at answering trivia questions has left some people feeling threatened.


"Still on Claude Code" is a funny statement, given that the industry is agreeing that Anthropic has the lead in software generation while others (OpenAI) are lagging behind or have significant quality issues (Google) in their tooling (not the models). And Anthropic frontier models are generally "You're absolutely right - I apologize. I need to ..." everytime they fuck something up.

Thank you for calling this out - we are being gaslit by attention-seeking influencers. The algorithmic brAInrot is propagated by those we thought we could trust, just like the Instagram and YouTube stars we cared about who turned out to be monsters. I sincerely hope those people become better or fade into meaninglessness. Rakyll seems to spend more time on X than on advancing good software these days, a shame given her past accomplishments.

Don't get locked in by those SaaS-only vendors. Modern stacks self-host, because SaaS vendors have a tendency to extort you once they need to show growth and can no longer acquire new customers fast enough.

Your best bet then is Ory (https://github.com/ory / https://www.ory.com) because it has an OSS version, an enterprise version for self-hosters, and a SaaS! And the source code is visible to everyone, unlike other vendors' :) Plus, big names like OpenAI and Mistral use Ory as well.


Can't wait to test out the world download - this is so cool, and also scary to think how much time it must have taken! Did you build it with some schematic editor?


Very impressive, and at the same time very scary because who knows what security issues are hidden beneath the surface. Not even Claude knows! There is very reliable tooling like https://github.com/ory/hydra readily available that has gone through years of iteration and pentests. There are also lots of libraries - even for NodeJS - that have gone through certification.

In my view this is an antipattern of AI usage and "roll your own crypto" reborn.


To put this into some context: Ory as a product has grown a lot since then, and while it's not possible to have "logical user-pool multi-tenancy" (logical in the sense that it's not running multiple instances) on the open source core alone, it certainly is possible on any of the paid-for options!

And generally speaking, there are a couple of examples out there that use the OSS core for multi-tenancy with that deployment scenario, but usually for a fixed number of tenants.

Our thinking behind this is that it is mostly direct competitors who would need true multi-tenancy, where every tenant has their own user pools, configs, URLs, and so on.


Congrats on the launch Ulysse - impressive what you have been able to spin up with limited resources! Greetings from Ory :)


Thanks Aeneas!


Resolved - Vercel thought we were being DDoS'ed!


The list is really helpful for people to navigate, and here is some additional context on the complexity topic :)

If you use our managed services (https://console.ory.sh), it is easy to set up and scale because we have a bunch of defaults, UIs, and the security stuff all set up already.

If you run it completely on your own, which does require some skill, especially in terms of (security) incident response, it is more work because you have to figure out a few pieces yourself (the stack is agnostic to the environment).

We have an option for self hosting with all the stuff we have built for the SaaS, but it only makes sense for businesses of a certain size.

Complexity also depends on how many services you combine; some people try to use everything at once, and that's overwhelming.

What makes Ory complex for people who do it themselves is that Ory is three different API-first products that work standalone or in concert. Wiring this up requires an understanding of every service. Here it is easier to spin up a cloud account, or to use an alternative project that is, e.g., just one Docker container.


EDIT: For the record, I'm grateful Ory is open source and wish you all the success in the world. My comments below are specifically for the indiehosting case.

For indiehosting, my threat model is "what are my options if the team behind this software takes it in a direction I don't like?"

For some projects (Redis, Terraform), the answer is that a high quality fork pops up (Valkey, OpenTofu). For others (MongoDB), there's still not a FLOSS alternative included in major package managers.

But even if a fork does appear, they are relatively likely to eventually fall prey to the same incentives that impacted the original.

I try to cut this off at the root, and prefer software I would be confident forking myself. All of the options marked "simple" on my list fall under that category.

Sometimes you can't avoid complicated software, but you often can. For an indiehosted identity server, 5,000-10,000 lines of code provides pretty much all the features I need. I don't think the extra ~100,000-900,000 lines of code of the major players is worth the risk.


> but it's still less work than setting up JVM correctly :D

I'm not sure that either of these is what I'd call "difficult":

  FROM openjdk:21
Or

  sudo apt install openjdk-21-jdk


I would guess parent is referring more to tuning the JVM.

