I worked with people who defended the idea that code should not have comments, that the code should explain itself.
I am not a developer and I completely disagree with that. The Python scripts I wrote, the Ansible playbooks, they all have comments, because a month down the road I no longer remember why I did what I did. Was it a system limitation, a software limitation, or just the easiest solution at the time?
I have a source file of a few hundred lines implementing an algorithm that no LLM I've tried (and I've tried them all) is able to replicate, or even suggest, when prompted with the problem. Even with many follow-up prompts and hints.
The implementations that come out are buggy or just plain broken.
The problem is a relatively simple one, and the algorithm uses a few clever tricks. The implementation is subtle...but nonetheless it exists in both open and closed source projects.
LLMs can replace a lot of CRUD apps and skeleton code, tooling, scripting, infra setup etc, but when it comes to the hard stuff they still suck.
I'm seeing the same thing with my own little app that implements several new heuristics for functionality and optimisation over a classic algorithm in this domain. I came up with the improvements by implementing the older algorithm and just... being a human and spending time with the problem.
The improvements become evident from the nature of the problem in the physical world. I can see why a purely text-based intelligence could not have derived them from the specs, and I haven't been able to coax them out of LLMs with any amount of prodding and persuasion. They reason about the problem in some abstract space detached from reality; they're brilliant savants in that sense, but you can't teach a blind person what the colour red feels like to see.
Well, I think that's kind of the point of, or the value in, these tools. Let the AI do the tedious stuff, saving your energy for the hard stuff. At least that's how I use them: just save me from all the typing and tedium. I'd rather describe something like auth0 integration to an LLM than do it all myself. Same goes for the typical list of records: click one, view the details, then a list of related records and all the operations that go with that. It's so boring, let the LLM do that stuff for you.
This is one of my favourite activities with LLMs as well. After implementing some idea for an algorithm, I try to see what an LLM would come up with. I give it hints and push it in the right direction over many iterations, but never reveal the ideal approach. And as a matter of fact, they never reach the quality of my initial implementation.
> but when it comes to the hard stuff they still suck.
Also much of the really annoying, time-consuming stuff, like frontend code. Writing UIs is not rocket science, but it's hard in a bad way, and LLMs are not helping much there.
Plus, while they are _very_ good at quickly finding common issues and gotchas that are documented online (say you use a library you're not familiar with in a slightly wrong way, or you have a version conflict that causes an issue), they are near useless when debugging slightly deeper issues and just waste a ton of time.
Great point. The thing is that most code we write is not elegant implementations of algorithms, but mostly glue or CRUD. So LLMs can still be broadly useful.
The over 60s in the UK are probably the most privileged demographic in the history of the nation.
Just last October the government reduced the tax-free savings allowance on the Cash ISA for everyone... except the over 60s.
The over 60s have iron-clad "triple locked" state pensions that are _guaranteed_ to grow unsustainably (faster than tax revenue) at the cost of the working taxpayer.
We need infrastructure and productivity growth, so the over 60s can take their gold-plated compulsory buyouts and go do one.
The most important aspect of software design, at least for software that you don't intend to throw away completely and that will be used by at least one other person, is that it is easy to change, and remains easy to change.
Whether it works properly or not, whether it's ugly and hacky or not, or whether it's slow... none of that matters. If it's easy to change, you can fix it later.
Put a well-thought-out but minimal API around your code. Make it a magic black box. Maintain that API forever. Test only the APIs you ship.
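For what it's worth, the usual way to get that "magic black box" in C is an opaque handle behind a deliberately small header, so the internals stay free to change. A minimal sketch, with every name here invented for illustration:

    /* widget.h -- the only thing callers ever see; everything behind it can change. */
    #ifndef WIDGET_H
    #define WIDGET_H

    #include <stddef.h>

    typedef struct widget widget;                   /* opaque: fields stay hidden */

    widget *widget_open(const char *config_path);   /* returns NULL on failure    */
    int     widget_process(widget *w, const void *buf, size_t len);  /* 0 = ok    */
    void    widget_close(widget *w);

    #endif

As long as callers only go through those three functions, the implementation behind them can be rewritten, fixed, or sped up later without breaking anyone.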
The actual vulnerability is indeed the copy. What we used to do is this:
1. Find out how big this data is. We tell the ASN.1 code how big it's allowed to be, but since we're not storing it anywhere, those checks don't matter
2. Check we found at least some data: zero isn't OK, failure isn't OK, but too big is fine
3. Copy the too-big data onto a local buffer
The API design is typical of C and has the effect of encouraging this mistake:
int ossl_asn1_type_get_octetstring_int(const ASN1_TYPE *a, long *num, unsigned char *data, int max_len)
That "int" we're returning is either -1 or the claimed length of the ASN.1 data, without regard to how long that is or whether it makes sense.
This encourages people either to forget the return value entirely (it's just some integer, who cares, in the happy path this works) or to check it for -1, which indicates some fatal ASN.1-layer problem, and give up, while ignoring every other value.
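Put together, the misuse looks roughly like this. This is a hypothetical sketch, not the actual OpenSSL code: it assumes the length can be queried without storing the bytes, and get_data() is an invented stand-in for however the bytes actually get fetched.

    unsigned char buf[64];                  /* fixed-size local buffer              */
    long num = 0;

    /* Steps 1 and 2: get the length; only zero/failure are treated as errors.
     * Nothing is being stored at this point, so the max_len limit never fires.    */
    int len = ossl_asn1_type_get_octetstring_int(a, &num, NULL, 0);
    if (len <= 0)
        return 0;

    /* Step 3: copy the *claimed* length into buf. If an attacker claimed more
     * than 64 bytes, this writes past the end of the buffer.                      */
    memcpy(buf, get_data(a), (size_t)len);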
If the thing you got back from your function were a Result type, you'd know that this wasn't OK, because it isn't OK. But the "Eh, everything is an integer" model popular in C discourages such sensible choices, because they were harder to implement decades ago.
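Even within C you can get most of the way there by splitting the status from the length, so "too big" becomes a case the caller has to name rather than a large positive integer that looks like success. A hypothetical sketch of that shape (not an OpenSSL API):

    #include <stddef.h>

    typedef enum { OCTETS_OK, OCTETS_EMPTY, OCTETS_TOO_BIG, OCTETS_ERR } octets_status;

    /* Status and length travel separately; only the declaration matters here. */
    octets_status get_octetstring(const void *obj, unsigned char *data,
                                  size_t max_len, size_t *out_len);

    int use_it(const void *obj)
    {
        unsigned char buf[64];
        size_t len;

        switch (get_octetstring(obj, buf, sizeof(buf), &len)) {
        case OCTETS_OK:
            return (int)len;         /* data fits, len is trustworthy        */
        case OCTETS_TOO_BIG:         /* the dangerous case is now explicit   */
        default:
            return -1;               /* empty or ASN.1-layer error           */
        }
    }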
The Win32 API at some point started using the convention of passing the buffer length by reference. If the buffer is too small, the API function updates the referenced length with the required size and returns an error code.
I quite like that, within the confines of C. I prefer that the caller be responsible for allocations, and this makes it harder to mess up.
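A hypothetical function following that convention (not any specific Win32 call) and a typical caller, which tries once, grows the buffer if asked, and tries again:

    #include <stdlib.h>

    enum { NAME_OK = 0, NAME_MORE_DATA = 1, NAME_ERR = 2 };

    /* len is in/out: on NAME_MORE_DATA the buffer was too small and *len
     * now holds the required size.                                        */
    int get_name(char *buf, size_t *len);

    char *fetch_name(void)
    {
        size_t len = 64;
        char *buf = malloc(len);
        if (buf == NULL)
            return NULL;

        int rc = get_name(buf, &len);
        if (rc == NAME_MORE_DATA) {                /* *len was updated      */
            char *bigger = realloc(buf, len);
            if (bigger == NULL) { free(buf); return NULL; }
            buf = bigger;
            rc = get_name(buf, &len);
        }
        if (rc != NAME_OK) { free(buf); return NULL; }
        return buf;                                /* caller frees          */
    }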
Why can I not control whether any public image of me appears at all?