YOU should be writing your commit messages, not an AI.
You can always generate a new commit message (or summary, alternative summary, etc) down the road with AI. You can never replace your mind being in the thick of a changeset.
The author of the commit doesn't matter per se. If someone is just having AI summarize their changes and using that as the commit message, I agree that they're doing it wrong.
These days, lots of my commit messages are drafted by AI after having chatted at length about the requirements. If the commit message is wrong or incomplete, I'll revise it by hand or maybe guide the AI in the right direction. That tends to be a much more useful and comprehensive description of the commit's intent than what I would naturally find worthwhile to write on my own.
OP's approach is interesting as well, at least in principle, and if it works well it might be the next best option in the absence of a chat log. It just needs to focus on extracting the "why" rather than describing the "what".
> than what I would naturally find worthwhile to write on my own.
I take issue with that statement. There's nothing "natural" about documentation. You're not "naturally disposed" to writing a certain level of documentation. It's a skill and a discipline. If you don't think it's worthwhile to write documentation, that's not a "natural failing". You're making a judgment, and any missing documentation is an error in judgment.
I meant "natural" in a context of having more urgent immediate priorities than extensively detailed documentation at the commit level — not an error in judgement, just a tradeoff.
If a given project has time/budget to prioritize consistent rigorous documentation, of course it should consider doing so. AI's ability to reduce the cost of doing so is a good thing.
If we assume, as many do, that we are going to delegate the work of "understanding the code" to AI in the coming years, this surely becomes even more important.
AI writing code and commit messages becomes a loop divorced from human reasoning. Future AIs will need to read the commit history to understand the evolution of the code, and if they're reading poor summaries from other AIs it's just polluting the context window.
Commit messages are documentation for humans and machines.
Writing commit messages is one of those mundane chores I’d gladly delegate to LLMs, which are very, very good at this kind of thing.
I mean, if you really know your code, you know it; there is not much value in reinforcing it in your head one more time via writing comprehensive commit messages - it’s a waste of time, imho.
Sounds like you haven't been working long enough to forget your decisions, which you WILL do eventually. In such cases, where you're looking at code you wrote 10 years ago and you find a weird line, when you view the git blame and read the commit message, you'll be very thankful that you explained not just "what" you did, but "why" you did it, something an AI will have a very hard time doing.
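Concretely, that lookup is just a couple of commands (file path, line range, and hash here are made up):

    # find the commit that last touched the weird lines
    git blame -L 120,130 src/parser.c
    # read that commit's full message for the "why", without the diff
    git show --no-patch a1b2c3d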
You don't have to if you don't want to, but if you think "this commit message is just a summary of the changes made", you'll never write a useful commit message.
I’ve been working in the industry for two decades, and I think commit messages are not the best place for storing decisions and associated context. I personally prefer ADRs (Architecture Decision Records).
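For reference, a minimal ADR in the common Nygard format looks roughly like this (placeholders, not a real record):

    # ADR 001: <decision title>
    Status: Accepted
    Context: <the forces at play; why a decision was needed>
    Decision: <what we chose to do, and the alternatives we rejected>
    Consequences: <what becomes easier or harder as a result>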
Two decades and you don't see any value in writing down what's currently in your head?
Anyhow, ADRs are good, but they're for architectural decisions, and not every decision is at that level.
In general, if there's a better place to store explanations, do use it, but often, in many projects, commit messages are the least bad place; and it's enormously better to write there than nowhere at all.
Hard disagree, though it’s probably dependent on your domain and/or how expressive your test suite is.
But even if that were true, reading a two-line explanation is very obviously more time-efficient than reviewing a whole commit diff.
Super common case: you've got a subtle bug causing unexpected values in data. You know from the db or logs that it started on 2025-03-02. You check the deployment(s) of that day and there are ~20 of them.
You can quickly read 20 lines in the log and have a good guess at which is likely to be related, or go for a round of re-reviewing 20 multi-file pull requests and reverse-engineering the context from the code.
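With decent messages, the first option is a one-liner (using the date from the example above):

    # the ~20 commits deployed that day, one summary line each
    git log --oneline --since="2025-03-02 00:00" --until="2025-03-03 00:00"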
"Super common case" is "initial commit", "fix spelling in README.md", or "small refactor". If your "super common case" is "subtle bug causing unexpected values in data" then you are doing something very, very wrong.
Perhaps this is about commit granularity. If the history of how the task advanced is not useful to keep, then I’d squash those commits together before merging the PR; in some workflows this is set up to happen automatically too.
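For example, by cleaning up the branch before merge (branch name assumed), or by letting the forge do it:

    # squash the WIP commits on the feature branch into one
    git rebase -i main
    # or use the hosting platform's squash-merge option, e.g. GitHub's "Squash and merge"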
I agree in principle, but in practice, it's horrible right now.
Most AI-generated commit messages and PR descriptions are much too verbose and contain zero additional information that couldn't be parsed from the code directly. Most of the time I'd rather read 2 sentences written by a human than a wall of text with redundant information.
> I mean, if you really know your code, you know it; there is not much value in reinforcing it in your head one more time via writing comprehensive commit messages - it’s a waste of time, imho.
I know the code...when I write it. But 2 weeks later all the context is gone, and that's just _for me_. My colleagues who also have to work in that code don't even start with that context.
I mean, do what works for you, but understand that the bulk of the work this applies to is in >1-person shops, with code bases too big to fit in one's head at all, much less for more than a day or so.