I recently started using LLMs to review my code before asking for a more formal review from colleagues. It's actually been surprisingly useful - why waste my colleagues' time with small, obvious things? But it's also sometimes gone much further than that, with deeper review points. Even when I don't agree with them, it's great having that little bit more food for thought - if anything it helps seed the review.
"Diff to master and review the changes. Branch designed to address <problem statement>. Write output to d:\claudeOut in typst (.typ) format."
It'll do the diffs and search both branch and master versions of files.
I prefer reading PDFs to markdown, but if you prefer markdown it'll default to that unprompted.
I have almost all my workspaces configured with /add-dir to add d:/claudeOut and d:/claudeIn as general scratch folders, giving it temporary in/out file permissions so it can read/write outside the workspace for things like this.
You might get better results with a better-crafted prompt (or a code-review skill?). In general I find Claude Code reviews:
- Are overly fussy about null-checking everything
- Completely miss whether the PR has properly distilled the problem down to its essence
- Are good at catching spelling mistakes
- Like to pretend they know whether something is well architected, but don't
So it's a bit of a mixed bag. I find it focuses on trivia, but it's still useful as a first pass before your teammates have to catch that same trivia.
It will absolutely assume too much from naming, so if it's making the wrong kind of assumptions about how parts work, that's a good prompt to think about how to name things more clearly.
e.g. If you write a class called "AddingFactory", it'll go around assuming that's what it does, even if the core of it returns (a, b) -> a*b.
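In Haskell terms, the trap might look something like this (a contrived sketch; the names are made up for illustration):

-- The name promises addition; the implementation multiplies. A reviewer
-- trusting the name will misread every call site.
addingFactory :: Num a => a -> a -> a
addingFactory a b = a * b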
You have to then work hard to get it to properly examine the file and convince itself that it is actually a multiplier.
Obviously real-world examples are more subtle than that, but if you're finding yourself arguing with it, it's worth sometimes considering whether you should rename things.
This one's served fairly well:
"Review this diff - detect top 10 problem-causers, highlight 3 worst - I'm talking bugs with editing,saving etc. (not type errors or other minor aspects) [your diff]". The bit on "editing, saving" would vary based on goal of diff.
Not who you're replying to, but working at a small, small company I didn't have anyone to give my code to for review, so I've used AI to fill that gap. I usually go with a specific pass and then a general pass: for example, if I'm making heavy use of async logic, I'll ask the LLM to pay particular attention to pitfalls that can arise with it.
We're a Haskell shop, so I usually just say "review the current commit. You're an experienced Haskell programmer and you value readable and obvious code" (because that is indeed what we value on the team). I'll often ask it to explicitly consider testing, too.
Well, this obviously depends on a given programming language/culture, but in my mind, when parsing a string to an int it is an expected case that it could fail, so I would model it as a return type of an Int or a ParsingError, or something like that.
Meanwhile, for a function doing numerous file copies and network calls I would throw an exception, as the number of possible failure cases is almost limitless. (You surely don't want an ADT that models FileSystemFullErrors and whatnot.)
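A minimal Haskell sketch of that split (ParsingError, parseInt, and backup are hypothetical names, not from the comment above):

import Text.Read (readMaybe)
import System.Directory (copyFile)

-- Expected failure, modeled as data the caller must handle.
data ParsingError = NotAnInt String deriving Show

parseInt :: String -> Either ParsingError Int
parseInt s = maybe (Left (NotAnInt s)) Right (readMaybe s)

-- Open-ended IO failure, left to exceptions: copyFile throws IOException
-- for disk-full, permissions, and all the other cases you wouldn't enumerate.
backup :: FilePath -> FilePath -> IO ()
backup = copyFile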
It so happens that in this particular example one is pure and the other is side-effecting, but I'm not convinced that's a hard rule here.
In a good effect system, exceptions as effects are isomorphic to error conditions as data, so the choice comes down to what is more ergonomic for your use case, just like the choice between these three isomorphic functions comes down to ergonomics:
frob1 :: Foo -> Bar -> R
frob2 :: (Foo, Bar) -> R
frob3 :: FooBar -> R
data FooBar = FooBar { foo :: Foo, bar :: Bar }
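For concreteness, one sketch of the exceptions-as-effects/errors-as-data isomorphism, using ExceptT from transformers (my choice of framing, not the comment's):

import Control.Monad.Trans.Except (ExceptT(..), runExceptT)

-- Exception-style computation -> error as data, and back again.
-- These two are mutually inverse, witnessing the isomorphism.
toData :: ExceptT e m a -> m (Either e a)
toData = runExceptT

toEffect :: m (Either e a) -> ExceptT e m a
toEffect = ExceptT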
I'm currently sitting at < 200W, but I expect that to go up with a higher workload. The SER9s idle at 5-7W, but they can run at 50-60W sustained without thermal throttling. Some reviewers have claimed that they can run at 75-80W sustained for 10-15 minutes, but I think that's pretty unlikely.
I write Haskell with Claude Code and it's gotten remarkably good recently. We have some code at work that uses STM to implement what is essentially a mutable state machine. I needed to split a state transition apart, and it did an admirable job. I had to intervene once or twice when it was going down a valid but undesirable approach. This almost-one-shot performance was already a productivity boost, though the result didn't quite build.

What I find most impressive now is that the "fix" is to literally have Claude run the build and see the errors. While GHC errors are verbose and not always the best, it got everything building in a few more iterations. When it later hit a test failure, I suggested we add a bit more logging - so it logged all state transitions, spotted the unexpected transition, and got the test passing. We really are a LONG way from 3.5 performance.
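Roughly the shape of the setup being described, as I read it (all names hypothetical; the actual code isn't shown):

import Control.Concurrent.STM

-- The current state lives in a TVar; transitions are applied atomically,
-- so concurrent writers never observe a half-applied transition.
data Phase = Idle | Running | Done deriving (Eq, Show)

step :: TVar Phase -> (Phase -> Phase) -> IO ()
step var f = atomically (modifyTVar' var f)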
This keeps getting parroted, but it's flawed/overly idealistic/frankly naive. An awful lot of children are, unfortunately, poorly parented. This is not a new phenomenon, nor something we seem to be improving on. OTOH, exposure of young children to extreme material is increasing, and has consequences beyond that child. Exposure to extreme pornography leads to a warped view of sex, which affects everyone this child might have sexual encounters with. Exposure to extremely violent material leads to the murder of other innocent children.
I don't know where I stand on this legislation - my gut is that it's too heavy-handed and will miss the mark. But I think we need to stop saying this falls solely on parents. The internet is far too big, and parenting far too varied, for that to work. I wish it would, but it won't. There simply aren't enough parents who care enough.
Damned if you do, damned if you don't. I think it's a terrible thing, but in this case doing nothing is better than doing something. The unintended consequences far outweigh the benefits. The kids who want to find extreme stuff will find it anyway, regardless of regulation.
"But the unintended consequences" has been the standard response of tech startups to any kind of regulation, since they were started being regulated. At some point, it stops being believable.
Yes, and my response is: compare the tech companies and their sizes between the US, Europe, and elsewhere. Over-regulation. The same thing is happening with AI in Europe. I am taking an economic stance here.
Because we know the implications of poor-quality food, and we also know those who would buy it have no choice but to buy the cheapest. So, no thanks. I'd much rather the state intervene here and keep this crap out. This "let consumers choose" argument is tiring when consumers don't have the ability to choose. They are just trying to survive.
>and we also know those who would buy it have no choice but to buy the cheapest. So, no thanks. I'd much rather the state intervene here and keep this crap out.
Having "the state intervene here and keep this crap out" isn't going to magically make the domestic chicken cheaper for those people who "have no choice". You're not improving the chicken quality for them, you're preventing them from buying chicken at all.
Why do you suppose that chicken quality would be tied to price? Do you know where your chicken comes from? If you buy expensive, high quality chicken, do you know it's actually high quality and not just a fancier package with a higher price tag? Do you want to research every single product you eat to make sure it's safe?
They've decided as a group that they would rather people eat less, better-quality food than more, poorer-quality food. I don't understand the hang-up. Is it the idea that people can collectively decide something? Presumably, if it's such a big problem, people in the UK can elect a different government.
>even poor people are able to manage and prioritize their spending.
In America, you could just sell the ultra-poor a piece of dirt you picked off the ground for a couple of cents. They're still buying "chicken", but it's not at all what people want when they want to buy chicken.
Honestly there's way more there, and you get consistent, solid speeds. Find a provider with a lot of retention and you can find almost all mainstream media regardless of its age. (Public) torrents tend to track what's popular and quickly fade. The masses seem to favour low-size encodes too, so if you're looking for higher quality (again, on public trackers) you're usually out of luck.
Surely a pinch of critical thinking answers this? 4 deliveries a day isn't going to pay a daily minimum wage. If it did, then we wouldn't have this situation - surely most riders manage more than one delivery every 2 hours and make more than the minimum wage!
When numbers are being presented, being clear never hurts. Critical thinking isn't a skill a lot of the population has.
I'd assume this was per hour, or possibly per shift if they did a 4-5 hour shift in an evening, etc. Best never to make assumptions in these cases, though…
Critical thinking will hopefully let you reject both 4/day and the idea that one of the other answers is obvious and unambiguous. If the data is incomplete, treat it as incomplete.