
This is probably something where it comes down to preference and familiarity. I would much prefer a simple text file for documentation that I can grep, open in my text editor, modify easily without switching context (oh, I should have been more explicit in the documentation I wrote - let me just fix that now), etc. All the features you mentioned "nice interface, fully searchable API interface, whole public API" are exactly what you get if you open a well written header file in any old text editor.

I used to be a big fan of doxygen etc, but for the stuff I've worked on, I've found that "pretty" documentation is way less important than "useful" documentation, and that the reformatting done by these tools tends to lead towards worse documentation with the people I have worked with ("Oh, I need to make sure every function argument has documentation, so I will just reword the name of the argument"). Since moving away from doxygen I have stopped seeing this behaviour from people - I haven't tried to get a really good explanation as to why, but the quality of documentation has definitely improved, and my (unproven) theory is that keeping the presentation as plain as possible means that the focus turns to the content.

I don't know if rust doc suffers the same issues, but the tooling you are mentioning just seems to add an extra step (depending on how you count steps I suppose, you could perhaps say it is the same number of steps...) and provide no obvious benefit to me (and it does provide the obvious downside that it is harder to edit documentation when you are reading it in the form you are suggesting).

But with all these things, different projects and teams and problem domains will probably tend towards having things that work better or worse.



> well written text file

The problem with this is no one agrees on the definition of "well-written", so consistency is a constant battle and struggle. Language tooling is a better answer for quality of life.


That's an interesting assertion, but not one that matches the experience I've had.

It is one of those things that sounds "obviously true", but in practice I've found that it doesn't really live up to the promise. As a concrete example of this, having a plain text header file as documentation tends to mean that when people are reading it, if they spot a mistake or see that something isn't documented that should be documented, they are much more likely to fix it than if the documentation is displayed in a "prettier" form like HTML.

The problem with header files that aren't "well-written" tends to be that the actual content you are looking for isn't in there, and no amount of language tooling can actually fix that (and can be an impediment towards fixing it).


I'll second this in Java land. I much prefer reading the sources directly to javadocs. Though jshell also comes in handy.


I have the same experience a lot of the time with 3rd party Rust crates. Docs.rs is amazing - but it’s rare that I’ll use a library without, at some point, hitting view source.


For most Rust I've done (not tons), the docs were very basic, as they were only auto-generated with minimal content. Totally useless - you have to read the sources to find out what is in there. Auto-documentation to me is just there to satisfy people who need to tick all of the boxes with minimal effort: has docs, has tests, etc. Such an attitude never leads to quality.


Useful documentation is impossible for the developer to write - they are too close to the code, and so don't understand what the users (either API users or end users) need to know. Developers agonize over details that users don't care about, while leaving out as "obvious" the important things users don't know.


I know people look at me like I’m a heathen and a scoundrel, but I think a lot of software teams spend too much time trying to make things consistent. Where’s the ROI? There is none.

GitHub readmes? Bring on the weird quirks, art, rants about other software, and so on. I’ll take it all.

Don’t get me started on linters. Yes, there’s lots of things that should actually be consistent in a codebase (like indentation). But for every useful check, linters have 100 random pointless things they complain about. Oh, you used a ternary expression? Boo hoo! Oh, my JavaScript has a mix of lines with and without semicolons? Who cares? The birds are singing. Don’t bother me with this shite.

Software is a creative discipline. Bland software reflects a bland mind.


> Where’s the ROI? There is none.

> Oh, my JavaScript has a mix of semicolons and non semicolons? Who cares?

I had to refactor and port a JavaScript codebase that contained a mix of all of JavaScript's syntactic sugar and had no comments anywhere, and I was unable to ask the original devs any questions. The high amount of syntactic sugar gave me "javascript diabetes" - it was fun figuring out all the randomness, but it delayed the project and has made it extremely difficult to onboard new folks to the team after I completed the port.

Painting is a creative discipline, and the Mona Lisa has stood the test of time because da Vinci used a painting style and materials that set the painting up for long-term use.

A codebase without standards is akin to drawing the Mona Lisa on a sidewalk with sidewalk chalk.


I don't like complaining linters. I do like auto fixing linters I can leave running in the background.


> auto fixing linters

Any advice on how to introduce an auto-fixing linter in an old codebase? I hate losing the git blame info.


I use a .git-blame-ignore-revs file. Run the fix once as a single commit, dump that commit's hash in the file, and use the file when you run git blame - it'll exclude that commit from the blame.

https://www.stefanjudis.com/today-i-learned/how-to-exclude-c...
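
In case it helps, the rough shape of it looks like this (just a sketch - "src/foo.c" is a placeholder, and the file name itself is only a convention you have to point git at):

  # run the auto-fixer in one dedicated commit, then record that commit's hash
  git rev-parse HEAD >> .git-blame-ignore-revs

  # either pass the file explicitly for a one-off blame...
  git blame --ignore-revs-file .git-blame-ignore-revs src/foo.c

  # ...or set it once so plain `git blame` always skips the listed commits
  git config blame.ignoreRevsFile .git-blame-ignore-revs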


awesome, ty!


Have you looked at how OCaml does it?

The historical way is to have a .ml and a .mli file. The .ml file contains the implementation; any documentation in that file is considered an implementation detail and will not be published by ocamldoc. The .mli file contains everything users need to know, including documentation, function signatures, etc.

Interestingly, the .mli and the .ml signatures do not necessarily need to agree. For instance, a global variable in the .ml does not need to be published in the .mli. More interestingly, a generic function in the .ml does not need to be exposed as generic in the .mli, or can have more restrictions.

You could easily emulate this in Rust, but it's not the standard.
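
A rough sketch of what that emulation could look like (the names here are invented for illustration): a private module plays the part of the .ml, and a thin public wrapper plays the part of the .mli, e.g. publishing a generic helper under a narrower, non-generic signature:

  // Hypothetical example - "imp" and "join_strs" are made-up names.
  mod imp {
      // Plays the role of the .ml: a generic implementation detail that
      // stays private to the crate and out of the generated docs.
      pub fn join<T: std::fmt::Display>(items: &[T], sep: &str) -> String {
          items.iter().map(T::to_string).collect::<Vec<_>>().join(sep)
      }
  }

  // Plays the role of the .mli: the only signature users (and rustdoc) see,
  // deliberately narrower than the implementation behind it.
  pub fn join_strs(items: &[String], sep: &str) -> String {
      imp::join(items, sep)
  }

By default rustdoc only documents what is reachable through the public surface, so doc comments on imp stay internal, much like comments in the .ml.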


> and that the reformatting done by these tools tends to lead towards worse documentation with the people I have worked with ("Oh, I need to make sure every function argument has documentation, so I will just reword the name of the argument")

That seems like an orthogonal issue to me. I've seen places where documentation is only in the source code, no generated web pages, but there is a policy or even just a soft expectation to document every parameter, even if it doesn't add anything. And I've also seen places that make heavy use of these tools and don't have any such expectation.


> All the features you mentioned "nice interface, fully searchable API interface, whole public API" are exactly what you get if you open a well written header file in any old text editor.

No, you can't, and it's not even close.

You have a header file that's 2000 lines of code, and you have a function which uses type X. You want to see the definition of type X. How do you quickly jump to its definition with your "any old text editor"? You try to grep for it in the header? What if that identifier is used 30 times in that file? Now you have to go through all of the other 29 uses and hunt for the definition. What if it's from another header file? What if the type X is from another library altogether? Now you need to manually grep through a bunch of other header files and potentially other libraries, and due to C's include system you often can't even be sure where you need to grep on the filesystem.

Anyway, take a look at the docs for one of the most popular Rust crates:

https://docs.rs/regex/1.11.1/regex/struct.Regex.html

The experience going through these docs (once you get used to it) is night and day compared to just reading header files. Everything is cross linked so you can easily cross-reference types. You can easily hide the docs if you just want to see the prototypes (click on the "Summary" button). You can easily see the implementation of a given function (click on "source" next to the prototype). You can search through the whole public API. If you click on a type from another library it will automatically show you docs for that library. You have usage examples (*which are automatically unit tested so they're guaranteed to be correct*!). You can find non-obvious relationships between types that you wouldn't get just by reading the source code where the thing is defined (e.g. all implementations of a given trait are listed, which are usually scattered across the codebase).
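
For reference, those tested examples are just fenced blocks inside ordinary doc comments in the source. A minimal sketch (the crate and function names are made up):

  /// Returns the larger of the two arguments.
  ///
  /// # Examples
  ///
  /// ```
  /// assert_eq!(my_crate::max_of(2, 5), 5);
  /// ```
  pub fn max_of(a: i32, b: i32) -> i32 {
      if a > b { a } else { b }
  }

`cargo test` compiles and runs the block in that comment alongside the unit tests, which is where the "guaranteed to be correct" part comes from.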

> I don't know if rust doc suffers the same issues, but the tooling you are mentioning just seems to add an extra step (depending on how you count steps I suppose, you could perhaps say it is the same number of steps...) and provide no obvious benefit to me (and it does provide the obvious downside that it is harder to edit documentation when you are reading it in the form you are suggesting).

Why would I want to edit the documentation of an external library I'm consuming when I'm reading it? And even if I do, the effort it takes to open the original source code with the docs and edit it pales in comparison to the overall effort of making a PR with that change.

Or did you mean editing the docs for my code? In that case I can also easily do it, because docs are part of my source files and are maintained alongside the implementation. If I change the implementation I have docs right there in the same file and I can easily edit them. Having to open the header file and hunt for the declaration to edit the docs "just seems to add an extra step" and "provide[s] no obvious benefit to me", if I may use your words. (:


Thanks for the constructive example of the rust doc.

I am not making things up when I say that the very first question I had about how to use this module, either is not answered, or I couldn't find the answer. That question was "what regular expression syntax is supported?". This is such a fundamental question, yet there is no answer provided.

As a preference thing, I don't really like examples in API references (they are supposed to be a reference, in my opinion) and I find them to be mostly noise.

> Why would I want to edit the documentation of an external library I'm consuming when I'm reading it? And even if I do then the effort to make a PR changing those docs pales in comparison to the effort it takes to open the original source code with the docs and edit it.

Right, this is possibly where our experiences differ. I'm frequently pulling in loads of code, some of which I've written, some of which other people have written, and when I pull in code to a project I take ownership of it. Doesn't matter who wrote it - if it is in my project, then I'm going to make sure it is up to the standards I expect. A lot of the time, the code is stuff I've written anyway, which means that when I come back in a few months time and go to use it, I find that things that seemed obvious at the time might not be so obvious, and a simple comment can completely fix it. Sometimes it is a comment and a code change ("wouldn't it be nice if this function handled edge case X nicely? I'll just go in there and fix it").

The distinction between external and internal that you have looks pretty different to me, and that could just be why we have different opinions.


The parent linked to a subsection showing usage for a particular object. If you click back to the root level of the docs, there is a header specifying ‘Syntax’, and other more ‘package-level’ documentation.


> I am not making things up when I say that the very first question I had about how to use this module, either is not answered, or I couldn't find the answer. That question was "what regular expression syntax is supported?". This is such a fundamental question, yet there is no answer provided.

This is a fair question to have. As others have already said, this is the API reference for a particular class, so you won't get the high level details here. You can click in the upper left corner to go to the high level docs for the whole library.

> The distinction between external and internal that you have looks pretty different to me, and that could just be why we have different opinions.

Well, there are two "external" vs "internal" distinctions I make:

1. Code I maintain, vs code that I pull in as an external dependency from somewhere else (to give an example, something like libpng, zlib, etc.). So if I want to fix something in the external dependency I make a pull request to the original project. Here I need to clone the original project, find the appropriate files to edit, edit them, make sure it compiles, make sure the tests pass, make a PR, etc. Having the header file immediately editable doesn't net me anything here because I'm not going to edit the original header files to make the change (which are either installed globally on my system, or maintained by my package manager somewhere deep under my /home/).

2. Code that is part of my current project, vs code that is a library that I reuse from another of my projects. These are both "internal" in the sense that I maintain them, but to my current project those are "external" libraries (I maintain them separately and reuse them in multiple projects, but I don't copy-paste them and instead maintain only one copy). In this case it's a fair point that if you're browsing the API reference it's extra work to have to open up the original sources and make the change there, but I disagree that it's making things any harder. I still have to properly run any relevant unit tests of the library I'm modifying, still have to make a proper commit, etc., and going from the API reference to the source code takes at most a few seconds (since the API reference will tell me which exact file it is, so I just have to tell my IDE's fuzzy file opener to open up that file for me) and is still a tiny fraction of all of the things I'd need to do to make the change.


> I am not making things up when I say that the very first question I had about how to use this module, either is not answered, or I couldn't find the answer. That question was "what regular expression syntax is supported?". This is such a fundamental question, yet there is no answer provided.

The main page for the documentation answers that question: https://docs.rs/regex/1.11.1/regex/index.html

It even says "If you just want API documentation, then skip to the Regex type", which is what you were linked to before.


Most decent text editors support something like go to definition. Your entire comment seems to be based on the idea that text editors only support basic search, which is simply false.

Personally I'm quite content with both experiences. But it really is just a matter of preference.


At least moderately advanced text editors often interoperate with symbol tables, so you can jump to a definition. But even with grep, you can usually do it in a way that differentiates between definition and use. I am not arguing that you should not use advanced tools if you like them; the deeper point is that you can always use advanced tools even with headers, but you cannot go back in a language designed around advanced tools and work with simple tools. So it is strictly inferior, IMHO, to design a language around this additional complexity.


I think the person you're responding to must know all of this. This is stuff that's obvious to anyone who has ever written any code that required using libraries. Unfortunately, people like to pretend to have a gripe with something on the internet just for the sake of arguing. This is the only conclusion I can arrive at when people appear to seriously propose that reading a header file in a text editor is somehow better than reading documentation in a purposefully designed documentation format. It's like saying browsers are just a waste of time when you can just use Gopher for everything.


Or it just might be that different people prefer different things. I'm a hardcore fan of header files too. Vim is my preferred way of dealing with text, I can do all kinds of magic with it at the speed of thought, and I prefer to use as plain as possible text files. On the rare occasion when the documentation needs more than ASCII stuff, it's best practice to write nice TeX-and-friends documentation plus a real tutorial anyway. And full literate-programming style is hard to beat when you are dealing with complex things.


It's fine to have preferences or cognitive inertia towards working a certain way. It's silly to pretend that doing things this way conveys some kind of universalist advantage or to conjure up a bunch of imaginary/highly niche scenarios (I'm remote coding over 28.8k at the bottom of the ocean and have no access to a browser anywhere!) that necessitate working this way for argumentative purposes.


The OP uses C libraries, and is thus used to much simpler interfaces and much smaller dependency sets than the GP. So no, I don't think they know all of this.

But also, they probably do know how to keep their dependencies sane, and possibly think the best way to document that giant 2k-line interface is in a book. Both of those are really good opinions that will never really be "understood" by the communities the GP takes his libraries from, just because it's not viable for them to do it.


Depending on coding style you could just do something like this:

  ^struct whatever


The issue is that the coding style depends on whoever wrote the external library, not on you, so this ends up working only sometimes. You can probably find some other combination that will help you find what you're looking for (I do this all the time when using Github's web interface) but ultimately this is just a bad experience.



