JacobiX's comments | Hacker News

>> it is getting harder for small software vendors

I think this trend will continue, and not just for indie developers but for all software vendors. If AI becomes capable of producing genuinely high-quality software, competition will intensify and the industry will start to resemble the music industry. Alternatively, AI may continue to generate software that is not necessarily high quality but is largely indistinguishable from competing products; in that case, the "market for lemons" dynamic will apply. In either scenario, the value of software will decline...


I think that mathematical proofs, as they are actually written, rely on natural language and on a large amount of implicit shared knowledge. They are not formalized in the Principia Mathematica sense, and they are even further from the syntax required by modern theorem provers. Even the most rigorous proofs such as those in Bourbaki are not directly translatable into a fully formal system.


If you don't mind stretching your brain a bit, Wittgenstein was obsessed with this notion. https://www.bu.edu/wcp/Papers/Educ/EducMaru.htm#:~:text=Witt...


In the end, the article says:

> writing functioning application code has grown easier thanks to AI.

> It's getting easier and easier for startups to do stuff.

> Another answer might be to use the fact that software is becoming free and disposable to your advantage.

For me, the logical conclusion here is: don't build a software startup!


Yup. I'm starting to wonder if the startup space has a pretty big blind spot: not realizing that the ease of building mostly/semi-functioning software is not a unique advantage...

I left an AI startup to do tech consulting. What do I do? Build custom AI systems for clients. (Specifically clients that decided against going with startups' solutions.) Sometimes I build it for them, but I prefer to work with their own devs to teach them how to build it.

Fast forward 3+ years and we're going to see more everyday SMBs hiring a dev to just build them the stuff in-house that they were stuck paying vendors for. It won't happen everywhere. Painful enough problems and worthwhile enough solutions probably won't see much of a shift.

But startups that think the market will lap up whatever they have to offer as long as it looks and sounds slick may be in for a rude surprise.


Of course it still makes sense to have a startup. Not because you will ever find a decent enough market, but because if you are well connected enough you can find a VC and play with other people’s money for a while.

You aren’t doing it to get customers; it’s for investors and maybe a decent acquisition.


> Fast forward 3+ years and we're going to see more everyday SMBs hiring a dev to just build them the stuff in-house

I don't see this happening. Businesses generally want familiar tools that work reliably with predictable support patterns.


Tested it on a bug that Claude and ChatGPT Pro struggled with: it nailed it, but only solved it partially (it was about matching data using a bipartite graph). Another task was optimizing a complex SQL script: the deep-thinking mode provided a genuinely nuanced approach using indexes and rewriting parts of the query. ChatGPT Pro had identified more or less the same issues. For frontend development, I think it’s obvious that it’s more powerful than Claude Code; at least in my tests, the UIs it produces are just better. For backend development, it’s good, but I noticed that in Java specifically it often outputs code that doesn’t compile on the first try, unlike Claude.


> it nailed it, but only solved it partially

Hey either it nailed it or it didn't.


Probably figured out the exact cause of the bug but not how to solve it


Yes; they nailed the root cause but the implementation is not 100% correct


I have the feeling that we are still in the early stages of AI adoption, where regulation hasn’t fully caught up yet. I can imagine a future where LLMs sit behind KYC identification and automatically report any suspicious user activity to the authorities... I just hope we won’t someday look back on this period with nostalgia :)


Being colored and/or poor is about to get (even) worse


“Colored”?


It's the American spelling; short for "A person of color." Typically, African American, but can be used in regard to any non-white ethnic group.


It's also fallen out of fashion, which is why someone might be snidely questioning its use.


I took it as an honest question, but the quotation marks mean you're probably right. For the record, it's still a widely used term in DEI contexts, even though there has been some criticism and alternatives promoted:

https://en.wikipedia.org/wiki/Person_of_color


Person of color is very different than colored


It's literally saying the same thing, just with fewer words.


There were a lot of signs in America at one point in time that said "No Coloreds", "Colored Section", and similar phrases to indicate the spaces that white people had decided non-white people could or could not go.

At the same time, there were not a lot of signs saying "No Persons of Color" or "Persons of Color Section".

Likewise, my grandfather who died 35 years ago was very fond of saying "the coloreds". His use of the term did not indicate respect for non-white people.

Historical usage matters. They are not equivalent terms.


I'm not American, sorry. "Colored" is just an adjective to me.


> Historical usage matters.

To who? Not to me, and I don't have a single black friend who likes "person of color" any more than "colored". What gives you the authority to make such pronouncements? Why are you the language police? This is a big nothing-burger. There are real issues to worry about, let's all get off the euphemism treadmill.


I loved the article, but it overlooks one important point: although the JPEG format is frozen, encoders are still evolving! Advances such as smarter quantization, better perceptual models, and higher-precision maths enable us to achieve higher compression ratios while sticking to a format that's supported everywhere :)


This is true, but there are limits. It's a little bit like DEFLATE: sure, very advanced compressors like Zopfli exist which can squeeze out better compression ratios. But then there's also just Zstd, which will get a better compression ratio and better compression speed trivially.
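A rough sketch of how to eyeball the gap yourself in Python (the input path is just a placeholder, and zstandard is a third-party package you'd need to install):

  import time
  import zlib
  import zstandard as zstd  # third-party "zstandard" package, assumed installed

  data = open("some_file.bin", "rb").read()  # placeholder input

  t0 = time.perf_counter()
  deflate_out = zlib.compress(data, 9)  # DEFLATE at its highest level
  t1 = time.perf_counter()
  zstd_out = zstd.ZstdCompressor(level=19).compress(data)  # Zstd at a high level
  t2 = time.perf_counter()

  print(f"DEFLATE: {len(deflate_out)} bytes in {t1 - t0:.3f}s")
  print(f"Zstd:    {len(zstd_out)} bytes in {t2 - t1:.3f}s")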


I guess you're thinking of jpegli? Do you know how big a difference this actually makes?


Anywhere from 5-15% if I remember correctly, depending on source material. I was at one point thinking this would make JPEG-XL and AVIF moot, because all of a sudden JPEG became good enough again. But the author of JPEG-XL suggests there is still a lot the JPEG-XL encoder can do to further optimise the bits/quality tradeoff, especially in the range below 1.0 bpp.


Jpegli is designed from the ashes of JPEG-XL (same author), both from Google. IIRC he also had a hand in the PNG format?


MozJPEG, Guetzli and also Jpegli


What I like about mechanical watches is that, having survived a near-death experience when quartz watches were introduced, they’ve evolved into a completely different kind of product. It’s fascinating that, unlike most other businesses and products, people don’t buy them for their utility, and the less automated their production process, the better. Brands like A. Lange & Söhne even pride themselves on assembling their movements twice.

When inefficiency and craftsmanship are considered features rather than flaws, you have an industry that won’t easily be replaced by AI or robots.


> people don’t buy them for their utility

That's called luxury goods, and it's not limited to watches.


Exactly, a painting, for example, has zero utility.


What item on a wall would have more utility, I wonder.


A tapestry can reduce echo/dampen sound and provide some thermal insulation; this was more relevant in the past.


In a quick web demo, this library was the only one that could handle interactive viewing and manipulation of a very large graph using its GraphGL component! I don’t think it's a well-known visualization library, but it's quite interesting...


Not sure if Michael Roth is related to Philip Roth, but it somehow reminds me of American Pastoral and that era of protests against the Vietnam War and its aftermath. I'm not entirely sure how those demonstrations compare to the ones we’re seeing today, but the parallels are striking


It is fascinating (to me at least) that almost all RSA implementations rely on a probabilistic primality test, even though the probability of picking a pseudoprime is extremely small.


There's extremely small (like 2⁻²⁰, the chance of winning $50 000 with a $2 Powerball ticket [1]), and then there's cryptographically negligible (like 2⁻¹²⁰, the false negative chance of our primality test). The chance of something cryptographically negligible happening is about the same as the chance of the attacker guessing your key on the first try, or of a cosmic ray flipping a CPU flag inverting the result of "if !signature.verified { return err }".

[1]: https://www.powerball.com/powerball-prize-chart


I happened to look at this recently, and while I understand the argument (but not the math) for doing fewer Miller-Rabin rounds, why would you do so in PRACTICAL settings? Unlike ECC, you're likely only generating long-term keys, so shorter key generation time seems like a bad tradeoff. Composite candidates are going to be rejected early, so you're (with high probability) not doing expensive calculations for most candidates. My reading of [BSI B.5.2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...) confirms this.

Of course random bit flips could interfere, but other measures should thwart this in high-stakes environments (at least to some degree).


The number of Miller-Rabin rounds has to be bounded, so if you're not going to base your bound on reaching a cryptographically negligible chance of false positives, what are you going to base it on? Should we do 10? 15?

The problem with "x should be enough, but why not do more?" arguments is that they can be applied recursively, and never answer the question "ok so when should we stop?"


The BSI recommendation I linked says 60 times here (considering the worst case and the security level they're targeting). Just wondering why we'd want to do less for a (presumably rare) one-time operation.


The performance of this can matter in some scenarios. In embedded systems, smart cards, etc., generating the primes can take a significant amount of time (15-20 seconds is typical) even with the 'low' number of iterations. More iterations mean the user will have to wait even longer. In fact, in such systems it is not unusual to see occasional time-out errors when the smart card is unlucky with finding the primes; the timeout value may be a fixed, general value in a part of the stack that is difficult to change for this specific call.

Another case is short-term keys/certificates, where a fresh key pair is generated, and a certificate issued, for each and every signature. This setup makes revocation easier to handle (the certificate typically has a lifetime of a few minutes, so it is 'revoked' almost immediately).

There are also scenarios where the keys are generated on a central system (HSMs etc.) to be injected into a production line or similar. There, performance can matter as well. I worked on a system where HSMs were used for this: they could typically generate an RSA key in 1-2 seconds, so 12 HSMs were needed to keep up with demand - and this was again with the reduced number of rounds.
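As a rough illustration of how much key-generation time varies even on ordinary hardware (not an HSM), here is a sketch using the Python cryptography package, assumed installed:

  import time
  from cryptography.hazmat.primitives.asymmetric import rsa

  # Time a handful of 2048-bit RSA key generations; the prime search is
  # randomized, so the duration fluctuates noticeably from run to run.
  for _ in range(5):
      t0 = time.perf_counter()
      rsa.generate_private_key(public_exponent=65537, key_size=2048)
      print(f"{time.perf_counter() - t0:.2f}s")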


Thanks for the reply (and it definitely answers my original question). For both short-lived keys and cases where production time matters, it makes perfect sense. I was mostly thinking about the scenario where you're generating a key for e.g. a long-lived certificate (maybe even a CA) or high-stakes PGP stuff. It just seems like you'd want to spend a few more seconds in that case.


Why not do 120 then? We can show that the chance of a false negative with 5 rounds is cryptographically negligible, so 5, 60, and 120 are all the same. If the only argument for 60 is that it's more and this is a really important, rare operation, doesn't it also apply to 120?

I'm not trying to be glib, there is no rational stopping point if we reject the objective threshold.


I don't pretend to understand all the involved math, but what I'm trying to say is that, as far as I understand, T rounds give a 4^-T probability that we've chosen a "bad" prime (really a composite) per normal Miller-Rabin, in the worst case. Doing ~5 rounds has been shown to be enough to choose a good prime candidate under most (very probable!) conditions when we pick it at random, and thus the argument is that ~5 rounds is fine. We agree so far?

I'm just asking: why not run the conservative 60-round test rather than ~5 when you're doing a very rare, one-time key generation? I understand that it's very unlikely to reject any numbers, but at least the BSI thinks it's worth it for important keys.

If I understand the recommendation right, you wouldn't do 60 rounds for a 2048-bit key and then 120 for 4096; rather, 61 rounds would be enough for 4096 if 120 is alright for 2048.
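For anyone following along, here's a rough sketch of the test under discussion in plain Python (not how a real crypto library implements it); the rounds parameter is the T above, and 4^-T is the worst-case error bound, i.e. the one that applies to adversarially chosen candidates rather than randomly chosen ones:

  import random

  def is_probable_prime(n, rounds=5):
      if n < 2:
          return False
      for p in (2, 3, 5, 7, 11, 13):
          if n % p == 0:
              return n == p
      # Write n - 1 as d * 2^s with d odd
      d, s = n - 1, 0
      while d % 2 == 0:
          d //= 2
          s += 1
      for _ in range(rounds):
          a = random.randrange(2, n - 1)
          x = pow(a, d, n)
          if x in (1, n - 1):
              continue
          for _ in range(s - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False  # 'a' witnesses that n is definitely composite
      return True  # probably prime; worst-case error <= 4**-rounds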


You're asking why not apply the formula for adversarially selected candidates even when we are randomly selecting candidates. There is simply no reason to, except "maybe we made a mistake", but then why would we not think we also made a mistake in calculating the 1/4 value, or in any other part of the code?

Phrased another way, do you have an argument for why run the conservative 60 round test, instead of asking for an argument for why not run it?

Again, you are "very unlikely" to win the Powerball jackpot. Rounds 6-60 have a cryptographically negligible chance of rejecting a composite. It's different, otherwise we'd have to worry about the "very unlikely" chance of the attacker guessing an AES-128 key on the first try.

(I don't follow you on the key sizes, if you apply the 1/4 probability, the candidate size is irrelevant.)


Thanks, I understand what you mean. I probably botched the 1/4 probability thing; I was thinking 4^-60 gives 2^-120 (roughly 120 bits of assurance, a security margin in my mind) and that an extra round would quadruple it, but I realize it doesn't work that way.


As I understand it, the number of rounds needed (for a given acceptable failure probability) goes down as the number being tested gets larger. Note (very importantly) that this assumes you are testing a RANDOM integer for primality. If you are given an integer from a potentially malicious source, you need to do the full number of rounds for the given level.

