Hacker News | boredtofears's comments

"It's not X, it's Y" is one of the most obvious LLM writing patterns. Especially the heavily punctuated sentence structure.

I'm on FB primarily because my local buy-nothing group is on it, so I am logging in multiple times a day. I'm so used to this slop it's pretty funny at this point, but as is the case with all social media, you tune your algorithm as you engage. At this point it pushes things like cooking videos and hockey clips more than the AI slop for me.

Sometimes I'll go down a rabbit hole of clicking AI generated videos just because my curiosity is piqued, and then I'll be stuck getting that slop fed to me for the next week. I have to make a mental note to actively disengage with it as quickly as possible to tip the algo in the other direction.


I can't think of an interviewer who interjects his viewpoint more, or who tries harder to get his guests to acknowledge/agree with his typically shallow analysis, than Lex. The only redeeming quality of his podcast is the guests he gets. I don't think Dwarkesh is great, but he's leagues better.

I just don't understand this view on Lex Fridman at all.

Fridman is quite good at letting the guest speak. The whole show is exceptionally good at keeping a conversation moving.

I think there are technical haters of Lex, but that criticism misses the point, because Lex is in sales. He is selling a podcast. From a sales perspective, Lex is incredibly good.

It is like saying the chef is only a good cook because of the quality of the ingredients. Yes, exactly. The chef isn't a farmer growing their own organic vegetables for the dishes. The art is in the choice and ability to source quality ingredients and then bring it all together as a full dish.

A podcast is not a lecture or audio book.


I guess you're right - getting your podcast big enough that it becomes a necessary checkbox for book/media tours is a skill. You're correct that he brings absolutely nothing to the podcast, but he interrupts plenty - usually with superficial pet theories about the "oneness of the universe" or "how all we need is love, actually". He never seems well prepared for his guest beyond a chatgpt summary, never gets any kind of interesting answer out of a guest that they weren't already going to give, just absolutely zero criticality to anything in the interview.

A podcast with guests is an interview. Interviewing is a skill. The difference between a good and bad interviewer is night and day.


Helping out with a freelance project I built 15 years ago. It didn’t end on the best of terms, but the relationship has since been repaired (and I’m much better at managing my time now)

It’s been fun to come back to; most of the code I wrote still drives the business (it’s just badly outdated).

I was pretty early on in my career when I wrote it, so seeing my mistakes and all the potential areas to improve has been very interesting. It’s like buying back your old high school Camaro that you used to wrench on.


What problem is it even solving? Keeping my car straight so I can be less attentive on the road?

I get it in the context of driverless but find it nothing but annoying as a driver.


Adaptive cruise control requires some degree of lane detection. It has to figure out what car it's actually following, not merely what car is in front of it. (The road is turning, the car in front of you can easily not be the car you are actually behind.)


Lane keep keeps your car in the lane so you can stop paying attention just like cruise control keeps you going the same speed so you can stop paying attention… they don’t.

They are just aids that ease fatigue on long trips.


The "fatigue" from long trips is hardly a result of having to keep in a lane.

It's more the result of being awake, doing effectively nothing, for a long time. Lane keep assistance is a useless technology for 99% of the population, and the 1% who need it likely shouldn't be driving a car anyway.

The more we "aid" fatigue, the longer drivers will attempt to drive. This cannot be a good outcome. The worst driving occurs when one is practically half asleep.


I’m not referring to mental fatigue, but the physical ergonomic fatigue simply from continually activating muscles in a narrow range of motion even over a couple of hours.

If you’ve ever driven a 1970s truck you’ll know that continually correcting the steering will wear you out after just a couple of hours. Modern rack and pinion steering is a lot more comfortable, and lane keep is a further comfort improvement.


I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.

I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.


Depends on the problem. If the complexity of what you are solving is in the business logic, or is generally low, you are absolutely right. Manually coding signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it’s different. Doing complex cryptography, performance optimization, or near-hardware stuff is just a different class of problems.


> If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right.

The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.

So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.


Coding signup flow #875 should be as easy as using a snippet tool or a code generator. Everyone who explains why using an LLM is a good idea sounds like they're living in the stone age of programming. There are already industrial-grade tools to get things done faster. Often so fast that describing the task in English feels like a waste of time.


Of course I use code generation. Why would that be mutually exclusive from AI usage? Claude will take full advantage of it with proper instruction.


In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

Can't speak to firmware code or complex cryptography, but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.


> my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

Presumably humanity still has room to grow and not everything is already in the training set.


> In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

This rather tells that the kind of performance optimizations that you ask for are very "standard".


Most optimizations are making sure you do not do work that is unnecessary or that you use the hardware effectively. The standard techniques are all you need 99% of the time you are doing performance work. The hard part about performance is dedicating the time towards it and not letting it regress as you scale the team. With AI you can have agents constantly profiling the codebase identifying and optimizing hotspots as they get introduced.
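The "agents constantly profiling the codebase" idea can be sketched with nothing more than Python's built-in profiler; the function name below is invented for illustration, and a real setup would profile actual application entry points:

```python
import cProfile
import io
import pstats

def hotspot(n):
    # deliberately quadratic: the kind of unnecessary work a profile surfaces
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
hotspot(300)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()  # the top-ranked entries point straight at hotspot()
```

An agent (or a CI job) could run something like this on each commit and flag new functions that climb the rankings, which is the "don't let it regress as you scale the team" part.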


> Most optimizations are making sure you [...] use the hardware effectively.

If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.


As most are


import claypot

trillion dollar industry boys


> you're probably not going to learn a ton with clay pot #10,001

Why not just use a library at that point? We already have support for abstractions in programming.


Also funny because there are many product categories on Amazon where if it's not above 4.5 it's probably shit.


Here's an example:

I recently inherited a web project over a decade old, full of EOL'd libraries and OS packages, that desperately needed to be modernized.

Within 3 hours I had a working test suite with 80% code coverage on core business functionality (~300 tests). Now - maybe the tests aren't the best designs given there is no way I could review that many tests in 3 hours, but I know empirically that they cover a majority of the code of the core logic. We can now incrementally upgrade the project and have at least some kind of basic check along the way.

There's no way I could have pieced together as large of a working test suite using tech of that era in even double that time.


> maybe the tests aren't the best designs given there is no way I could review that many tests in 3 hours,

If you haven't reviewed and signed off then you have to assume that the stuff is garbage.

This is the crux of using AI to create anything and it has been a core rule of development for many years that you don't use wizards unless you understand what they are doing.


I used a static analysis code coverage tool to guarantee it was checking the logic, but I did not verify the logic checking myself. The biggest risk is that I have no way of knowing that I codified actual bugs with tests, but if that's true those bugs were already there anyways.

I'd say for what I'm trying to do - which is upgrade a very old version of PHP to something that is supported, this is completely acceptable. These are basically acting as smoke tests.
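What's described here is essentially characterization (golden-master) testing: lock in today's behavior, bugs included, so the upgrade can be checked against it. A minimal sketch, with a made-up function standing in for the legacy logic:

```python
def legacy_discount(total):
    # stand-in for decade-old business logic; it may well contain
    # bugs nobody knows about, and that's fine for this purpose
    if total > 100:
        return total * 0.9
    return total

def test_characterization():
    # these assertions record what the code DOES today, not what it
    # SHOULD do, so the upgraded runtime can be diffed against current behavior
    assert legacy_discount(150) == 135.0
    assert legacy_discount(50) == 50
```

If the upgrade breaks one of these, you know behavior changed; whether the old behavior was correct is a separate, later question.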


> code coverage

You need to be a bit careful here. A test that runs your function and then asserts something useless like 'typeof response == object' will also meet those code coverage numbers.

In reality, modern LLMs write tests that are more meaningful than that, but it's still worth testing the assumption and thinking up your own edge cases.
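A toy illustration of the point (the function and values are invented): both tests below produce identical coverage numbers, but only one would catch a wrong result:

```python
def parse_price(s):
    # hypothetical function under test
    return round(float(s.strip().lstrip("$")), 2)

def test_weak():
    # hits every line, asserts almost nothing:
    # passes even if the computed value is wrong
    assert isinstance(parse_price("$19.99"), float)

def test_meaningful():
    # pins down actual behavior, including a whitespace edge case
    assert parse_price("$19.99") == 19.99
    assert parse_price(" $0.50 ") == 0.5
```

Coverage tooling scores both tests the same, which is why spot-checking the assertions themselves still matters.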


I code firmware for a heavily regulated medical device (where mistakes mean life and death), and I try to have AI write unit tests for me all the time, and I would say I spend about 3 days correcting and polishing what the AI gives me in 30 minutes. The first pass the AI gives me, likely saves a day of work, but you would have to be crazy to trust it blindly. I guarantee it is not giving you what you think it is or what you need. And writing the tests is when I usually find and fix issues in the code. If AI is writing tests that all pass without updating the code then it's likely falsely telling you the code is perfect when it isn't.


If you're using a code coverage tool to identify the branches it's hitting in the code, you at least have a guarantee that it is exercising the code it's writing tests for, as long as you check the assertions. I could be codifying bugs with tests, and probably did (but they were already there anyway). For the purpose of upgrading OS libraries and surrounding software, this is a good approach - I can incrementally upgrade the software, run all the tests, and see if anything falls over.

I'm not having AI write tests for life-or-death software nor did I claim that AI wrote tests that all pass without updating any code.


You know they cause a majority of the code of the core logic to execute, right? Are you sure the tests actually check that those bits of logic are doing the right thing? I've had Claude et al. write me plenty of tests that exercise things and then explicitly swallow errors and pass.


Yes, the first hour or so was spent fiddling with test creation. It started out doing its usual wacky behavior: checking the existence of a method and calling that a "pass", creating a mock object that mocked the return result of the very logic it was supposed to be testing, and (my favorite) copying the logic out of the code and pasting it directly into the test. Lots of course correction, but once I had one well-written test that I had fully proofed myself, I just provided that test as an example, and it did a pretty good job following those patterns for the remainder. I still sniffed through all the output for LLM wackiness, though. Using a code coverage tool also helps a lot.


... Yeah, those tests are probably garbage. The models probably covered the 80% that consists of boilerplate and mocked out the important 20% that was critical business logic. That's how it was in my experience.

For God's sake, that's complete slop.


You should read my other comment - I did check that the test was actually checking the logic, so I guess I did some level of review with it.


That sounds more like confirmation that Greptile is being included in a lot of agentic coding loops than anything else.


Yeah, I kinda stopped involving myself in other people's OSS projects a while ago for that reason. If I have an itch to scratch, I just use my fork. It usually feels like my itch isn't theirs and I always feel like I'm imposing on the maintainer's vision or at best just taking time away from them. I think maintainers have a lot of pressure to accept things because "open source!".


A couple questions here - would more clear guidelines in the CONTRIBUTING.md file help in clarifying project direction and what contributions would be most valuable? Is there a better way for projects to indicate what issues should be prioritized? Do you ever want to run your contribution ideas past a maintainer before opening up a PR and if so do you ever do that? I'm curious if your itch may actually be something they're looking for too but it's not made clear in an effective way.


To be honest, I didn't really think about the goals of the projects at all. I'm not even sure if a CONTRIBUTING.md existed. I just needed a particular feature for something I was working on, and it felt like "being a good OSS citizen" to at least offer it back to the maintainer, but I think I just ended up putting them in a position where they felt like they had to make it work.

