Hacker News | hiq's comments

I don't ever look at LLM-generated code that either doesn't compile or doesn't pass existing tests. IMHO any proper setup should involve these checks, with the LLM either fixing itself or giving up.

If you have a CLI, you can even script this yourself if you don't trust your tool to actually compile and run the tests on its own.
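
A rough sketch of what I mean, assuming a Rust project built with cargo (swap in whatever build and test commands your project actually uses):

    import subprocess, sys

    # Hypothetical gate: only look at the LLM's diff once these checks pass.
    CHECKS = ["cargo build", "cargo test"]  # placeholders for your own commands

    def gate():
        for cmd in CHECKS:
            result = subprocess.run(cmd.split(), capture_output=True, text=True)
            if result.returncode != 0:
                # Feed this back to the model (or give up) instead of reviewing.
                print(f"'{cmd}' failed:\n{result.stdout}{result.stderr}")
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(gate())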

It's a bit like a PR on GitHub from someone I don't know: I'm not going to actually look at it until it passes CI.


What does a good developer do when working in a codebase with hundreds of warnings?

Or are you only considering certain kinds of warnings?


Why does your codebase generate hundreds of warnings, given that every time one initially appeared, you should have stamped it out (or specifically marked that one warning to be ignored)? Start with one line of code that doesn't generate a warning. Add a second line of code that doesn't generate a warning...

> Why does your codebase generate hundreds of warnings

Well, it wasn't my codebase yesterday, because I didn't work here.

Today I do. When I build, I get reports of "pkg_resources is deprecated as an API" and "Tesla T4 does not support bfloat16 compilation natively" and "warning: skip creation of /usr/share/man/man1/open.1.gz because associated file /usr/share/man/man1/xdg-open.1.gz (of link group open) doesn't exist" and "datetime.utcnow() is deprecated and scheduled for removal in a future version".

The person onboarding me tells me those warnings are because of "dependencies" and that I should ignore them.


It's rare that I work on a project I myself started. If I start working on an existing codebase, the warnings might be there already. Then what do I do?

I'm also referring to all the warnings you might get if you use an existing library. If the requirements entail that I use this library, should I just silence them all?

But I'm guessing you might be talking about more specific warnings. Yes, I do fix lints specific to my new code before I commit it, but a lot of warnings might still be logged at runtime, and I may have no control over them.
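
In Python at least, the closest I get to a middle ground is muting only the warnings I know are out of my hands rather than everything; a rough sketch (the module name is just an example, not a recommendation):

    import warnings

    # Ignore deprecation warnings raised from inside this one dependency,
    # so warnings coming from our own code still show up.
    warnings.filterwarnings(
        "ignore",
        category=DeprecationWarning,
        module=r"pkg_resources(\..*)?",  # example dependency
    )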


> If I start working on an existing codebase, the warnings might be there already. Then what do I do?

What would you do if the code you inherited crashed all the time?

Come up with a strategy for fixing them steadily until they're gone.
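
One low-drama version of that is a ratchet: record the current count, only insist that it never goes up, and tighten it as fixes land. Rough sketch, assuming you can capture the build output somewhere (file names made up):

    # warning_ratchet.py: fail the build if the warning count increases.
    import pathlib, re, sys

    log = pathlib.Path("build.log").read_text()  # wherever your build output lands
    count = len(re.findall(r"\bwarning\b", log, flags=re.IGNORECASE))

    baseline_file = pathlib.Path("warning_baseline.txt")
    baseline = int(baseline_file.read_text()) if baseline_file.exists() else count

    if count > baseline:
        print(f"{count} warnings, baseline is {baseline}: fix or justify the new ones")
        sys.exit(1)

    # Ratchet down: record improvements so they can't regress.
    baseline_file.write_text(str(min(count, baseline)))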


If this code crashed all the time there'd be a business need to fix it and I could justify spending time on this.

But that's not what we're discussing here. We're discussing warnings that have been ignored in the past, where all of a sudden I'm supposed to take the political risk of fixing them all somehow, even though there's no new crash and no new information.

I don't know how much freedom you have at your job, but I definitely can't just go to my manager and say: "I'm spending the next few weeks working on warnings nobody else cared about, but that for some reason I care about."


Because most people are working at failure/feature factories, where they might work on something and find out at the last minute that it now produces a warning. If they work on fixing it, the PM will start screaming about time slippage: "I want you to work on X, not Y, which can wait."

Two years later, you have hundreds of warnings.


You found that out at the last minute. So then you did a release. It's no longer the last minute. Now what's your excuse for the next release?

If your management won't resource your project to the point where you can ensure the software is correct, you might want to see if you can find the free time to look for another job. You'll have to do that anyway when they either tank the company or lay you off the next time they feel they need to cut costs.


Wild (and I guess most of the time bad) idea: on top of the warnings, introduce a `sleep` in the deprecated functions. At every version, increase the sleep.

Has this ever been considered?

The problem with warnings is that they're not really observable: few people actually read these logs, most of the time. Making the deprecation observable means annoying the library users. The question is then: what's the smallest annoyance we can come up with, so that they still have a look?
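
Concretely, I'm imagining something like this (purely illustrative; the version numbers and the delay per release are made up):

    import functools, time, warnings

    LIBRARY_VERSION = (2, 3)  # hypothetical current release of the library

    def deprecated(since=(2, 0), seconds_per_release=0.5):
        """Warn and sleep; the sleep grows with every release since the deprecation."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                warnings.warn(f"{fn.__name__} is deprecated", DeprecationWarning, stacklevel=2)
                releases_since = max(0, LIBRARY_VERSION[1] - since[1])
                time.sleep(releases_since * seconds_per_release)
                return fn(*args, **kwargs)
            return inner
        return wrap

    @deprecated(since=(2, 0))
    def old_api():
        ...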


Yes, people do notice a sleep. But it has to be on the scale of minutes or it will be ignored, especially if it happens during a CI run.

To be clear, this library depends on libsignal.


Whisperfish appears to be an app, not a library.


hiq's comment was not about Whisperfish but about the presage library. My comment can be read as "Whisperfish wrote their own implementation of the Signal protocol", which is wrong. (Sorry, can't edit it anymore.)

With presage, Whisperfish has a high-level Rust library that wraps the official libsignal libraries (which are also written in Rust) and makes it much easier to write clients. The official Signal repo only contains Java/TypeScript/Swift wrappers. As presage is rather obscure, I thought that some HN readers might appreciate the link.


That's my main way to find interesting links, especially as I usually find comments more interesting than the featured links. I default to the "top 20".


For paying users of Claude Code and other similar services, do you tend to switch to the free tiers of other providers while yours is down? Do you just not use any LLM-based tool in that time? What's your fallback?


ZAI's $3 coding plan is my "We have Claude at home."


I run Claude Max 200 as my primary. I also have GPT Plus (so Codex) and OpenRouter for MCP calls which I can load via IDE in a pinch.

I seem to have access to Gemini CLI due to one google sub or other, but each time I have a reason to try it, it underwhelms me.


> mostly because it’s new tech

Do you then think it'll improve to reach the same stability as other kinds of infra, eventually, or are there more fundamental limits we might hit?

My intuition is that as the models do more with less and the hardware improves, we'll end up with more stability just because we'll be able to afford more redundancy.


I'm assuming it's too heavy and has too much contact surface (so more friction), making it too hard to glide smoothly.

There's probably something with the position of the hand when you move the mouse as well. At least I seem to be moving mostly the wrist when I use my mouse, meaning that my hand and forearm are not always aligned; without this alignment, I feel there's more strain on the wrist when typing.


My imagined device has the hand a bit more vertical, which would give more leverage for moving the device around.

Could you do something with magnets, where you have a special mousepad as well, with the pad being all one pole pointing up and the device the same pole pointing down?

Also, my imagined device would not need the full keyboard, just the full right side of a QWERTY keyboard.


> They begin the year at $0 and they end the next year at $0.

Or they're dead.

If you save an extra $2000/year, what are you supposed to do with the money if you're always hungry, if you're always cold? I'm guessing you could buy food and clothes; you'd end up at $0, just slightly better off. If there's no safety net to rely on, you'd save to be able to face the next problem, and maybe pay for it less with your health (which is a kind of invisible debt).

And that's even assuming there's some certain income you can rely on. In my case, I know that for the next few months, I'd at least get unemployment benefits if I lost my job. Not everyone gets that, and if you don't, the income floor is $0 and it's way harder to budget.

Another aspect to consider is that maybe the case of a single person who would be in poverty throughout a long life is not representative of poverty. Some people get out of poverty, some fall into it, some die early from it. If we're considering a single person always starting at $0 and always ending up at $0, several years in a row, we already dismiss these nuances. I'm sure you can find such examples, someone who lived to be 80 with a constant wealth of $0, but how common are they really?


> Given the trajectory of inference cost, it's unlikely that they would fail to reach profitability.

Is there evidence that their revenues are growing faster than their costs?


The place to go for those numbers is https://epoch.ai/data/ai-companies

Very little data about expenses, but it looks like they may be growing a little slower (3-4x a year) than revenue. Which makes sense because inference and training get more efficient over time.


We don't have evidence one way or the other. But from the public statements, the idea that they lose roughly as much as their revenue seems constant over time. It's possible that that is simply a psychological barrier for investors. Meaning they grow their losses at roughly 2x their revenue growth rate.


> Given the trajectory of inference cost, it's unlikely that they would fail to reach profitability.

> We don't have evidence one way or the other

I don't see how both of these things can be true. How can we know something to be likely or unlikely if we have no evidence of how things are?

If we don't have any evidence they're moving towards profitability, how is it likely they will become profitable?


Growing businesses tend to consume capital. How much capital is appropriate to burn is subjective, but there are good baselines from other industries and internal business justifications. As tech companies burn capital through people's time, it's hard to directly figure out what is true CapEx vs. unsustainable burn.

You wouldn't demand that a restaurant jack up prices or shut down after spending ~$1MM on a remodel only to earn ~$20k in its first month of business. You would expect that the restaurant isn't going to remodel again for 5 years, so the amortized cost should be ~$16k/mo ($1MM over 60 months), or less.


I would recommend that restaurant jack up their prices if they're remodeling every other day and have no plans to stop or slow down that constant remodeling.

> it's hard to directly figure out what is true CapEx vs. unsustainable burn.

Exactly, and yet you're so certain they'll achieve profitability. The cost of pickles could get cheaper, but if they're constantly spending more and more on the rest of the burger, and remodeling the building all the time to add yet another wing of seating that may or may not actually be needed, it doesn't really matter to their overall profitability, right?

