
MkDocs with the Material for MkDocs theme has reasonable blogging support, and there's a Docker image for building. It also has other features like footnotes, Mermaid diagrams, and "code annotations" for code blocks.

Type markdown, build in docker, publish assets to GitHub Pages :tada:


MkDocs with the "Material for MkDocs" theme and it has a docker based builder image if you want a one-line isolated build command. Mostly pure Markdown with opt-in syntax extensions for fancy things like card grids or tables.


That bill passed the Commons, but has been amended by the Lords

(I know the Commons can override the Lords if it wants to, but it's not at that stage yet)

https://bills.parliament.uk/bills/3153/stages


Wow, I'm impressed how grumpy one can seem in a few short sentences. Take an upvote for that alone.

GP has a certain - admittedly orthogonal - point. At what point does "just two coffees per month" translate into "too many coffees traded for services"?

If it's relevant, a fairly high-end black coffee made from medium-roasted in-house coffee beans - which I strongly enjoy - costs me $1.70 in my local economy.

So this costs me 5x coffees, or a work-week of coffee. And I'm a pretty well-paid foreigner living in a strange land.

If I were to take a charitable reading of the GP - with no expertise in economics on my side - I'd say they were pointing out that you are potentially losing 65+% of your market, for whom $9 is not "a couple of coffees".

That's not an indictment - maybe you don't care! Maybe 100x customers @ $9 = $900/month and the service pays for itself.

I am a big fan of what I believe is called Purchasing Power Parity (PPP) pricing / Pay What You Can (PWYC), even though I see it pretty rarely.

I guess it depends on whether you want to provide your product/service to as many people as it can benefit at once, versus the minimum number of people who can cover your costs with a profit.

Neither position is inherently "wrong"/"privileged", they just seem to have different incentives and/or target markets.

The voice of a potential target market that is being excluded seems valuable to me - even if the revenue increase might be negligible per-user, perhaps it's worth considering the volume of potential users in that market :shrug:

(To re-iterate that I'm trying not to pick a side here: maybe it's not - maybe the business model is to target affluent consumers. That doesn't make criticism of the exclusionary nature of this approach "privileged", however, in my view. YMMV)

(Edit: minor emphasis)


If I'm to be accused of being grumpy for calling out some armchair quarterback for bringing irrelevant arguments to squash a developer's hopes and dreams, then so be it. You have to be really full of yourself to criticize someone's launch on HN so harshly. This community is famous for sharing kind feedback, with a few notable exceptions that now live in infamy (e.g. Dropbox). A good question to ask yourself would be: "how did my Show HN post compare to this one?" If you haven't made one yet, then be extra kind and humble.


This is akin to having a hissy fit over marketing speak. It's like seeing a red car in a commercial when you don't like red cars. That doesn't mean the product isn't targeting you. Just ignore it.


I don't see a comment here for MkDocs, so I'll mention it.

It was started as a pet project for development knowledge, specifically migrating to local Docker-powered development environments. Initially that was all it had: the "how to set up the Dockers" page and a couple of other tidbits.

Over a period of time - years, frankly - it has grown into something a bit more useful, although still woefully short of where I would like it to be.

A few things have happened though:

* The first principle was: if I find something I have to "re-discover", document it

* The more it grows, the more useful it becomes. It's quite pleasing to be able to respond to a Slack message with "read this and tell me if you still have any questions"

* It's still slow going, but after some departures of our own, we've seen a bit more interest in adding and codifying the "tribal knowledge"

The things I like best about it?

* It's statically hostable: GitLab/GitHub Pages, or if you need to wrangle it, `scp`/`ftp` it from a CI pipeline to a private Apache/NGINX deployment

* It's Markdown text embedded in the repository: you can grep it, or search it from VSCode/Sublime/any-IDE-of-choice

* It's Markdown text embedded in the repository: there's no way I can think of to make it easier for devs to contribute and maintain it than making it part of the same commit, the same MR, the same CI pipeline

* It's Python. Every Linux/OS X machine has python, so there's a minimal barrier to entry for previewing locally

* A bucket-load of plugins: Mermaid, PlantUML, SVG embeds, and more: https://github.com/mkdocs/mkdocs/wiki/MkDocs-Plugins

* The "Material for MkDocs" theme: https://squidfunk.github.io/mkdocs-material/

Nothing stops it becoming just as much of a mess as Confluence, but there's something slightly ineffable about being able to work with your docs as if they were code: refactor them! Reorganise, dedupe, and take MRs to fix stale articles/mistakes.

Other commenters have mentioned incentivising the docs, but I would finally add that someone, somewhere should take a principled stand around the organisation of the docs (and of course, give them space to fail, and experiment, and try again). Broken links are better than no links in the first place, since search is pretty accessible.

I was originally inspired by the documentation structure outlined at https://documentation.divio.com/ - although our organisation has evolved away from this into higher-level topics with subsections similar - but not identical - to the original idea.

Edit: as for the comment about Confluence, this is one of my favourite bits. You don't need "buy-in" for this, there's no subscription plan you need to approve.

If there's a concern around "split-brain" docs, I've used a guiding principle to defuse those conversations:

"Do these docs make sense if you don't have a local checkout of the repository?"

If the docs aren't related to the code repository - RCAs, business requirements, org charts, HR procedures - leave them in Confluence.

If they are related to the repository - coding standards, test coverage generators, docker/unit test commands, migration seed data - then what's the value-add for them being in Confluence? Your CEO isn't going to read them. Marketing don't need them. Put them in the codebase, and improve the signal-to-noise ratio in Confluence for the rest of the business.


> The more it grows, the more useful it becomes. It's quite pleasing to be able to respond to a Slack message with "read this and tell me if you still have any questions"

People have mentioned this in other comments, but it's worth repeating that this habit is the true key. Make docs part of your core workflow. The other great thing is that you're making them aware of the docs site (when they see the link, I bet some have said "wow, I didn't know we had this!") and you're specifically asking for feedback on where the doc is lacking.


Individual Contributor and Performance Improvement Plan


Would you feel this way if he'd won big after buying a lottery ticket, or received an inheritance from a family member?

He took a risk (potentially, maybe he's the founder, maybe it's equity) and it paid off. It could just as easily not have.


Inheritance I wouldn't care about, because there's nothing I can do to change what I inherit. The lottery I'd probably care about, though. I've always looked down on people who buy lottery tickets, but I think if I saw someone close to me win big, I might feel like I'd missed out by not playing. It's a good comparison, because I remember the early years when his business success was really far from certain - he bought a lottery ticket that probably cost several hundred thousand dollars over those first years (income he would've easily made as an employee instead of a business owner without any customers).


As a British expat living in Asia, I was heavily amused to read this passage in Neal Stephenson's The Diamond Age:

There was lengthy discourse between the two men on which of them was more honored to be in the company of the other, followed by exhaustive discussion of the relative merits of the different teas offered by the proprietors, whether the leaves were best picked in early or late April, whether the brewing water should be violently boiling as the pathetic gwailos always did it, or limited to eighty degrees Celsius.


I'm resistant to GraphQL, with the caveat that I was also initially resistant to JSX and CSS-in-JS, and my thinking has since evolved.

My two main annoyances are a) GraphQL could be thought of as a custom media type that almost "slots in" to the REST model by defining a request content-type and a response content-type with almost entirely optional fields, and b) the incessant idea that REST has to replicate the n+1 query problem.

For a) "it's strongly typed" has never struck me as much of an argument. It's appealing if you're already lacking a strong definition of your data model, but GraphQL doesn't inherently fix that, it just forces you to confront something you could fix within your existing architecture anyway.

For b), it seems that people tend to map something like /users to `SELECT * FROM users`, and 12 months later complain that it's returning too much data and that they have to make N queries to /groups/memberOf.

Am I alone in thinking the obvious solutions are to i) add a comma-separated, dot-notation `?fields=` param to reduce unnecessary data transfer, ii) add an endpoint/content-type switch for `usersWithGroups` and iii) realise this is a problem with your data model, not your API architecture?
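
To make (i) concrete, here's a rough sketch of the kind of helper I have in mind (hypothetical code, not from any particular framework): split `?fields=` on commas and filter each response object down to the requested dot-notation paths.

    # Sketch of a dot-notation "?fields=" filter (hypothetical helper).
    def pick(obj, paths):
        out = {}
        for path in paths:
            key, _, rest = path.partition(".")
            if key not in obj:
                continue
            if rest:
                # recurse into nested objects for "group.name"-style paths
                out.setdefault(key, {}).update(pick(obj[key], [rest]))
            else:
                out[key] = obj[key]
        return out

    user = {"id": 7, "name": "ada", "email": "a@example.com",
            "group": {"id": 1, "name": "admins", "members": 42}}
    print(pick(user, "name,group.name".split(",")))
    # {'name': 'ada', 'group': {'name': 'admins'}}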

As an additional c), my other concern is that GraphQL touts "one query language for all your data", but tends to skip over the N+1 problem when _implementing_ the queries to disparate data sources. If your existing queries are bolted into `foreach (input) { do query }`, then GraphQL isn't going to give you any speed-up; it's just going to put a slightly simpler interface on an already-slow backend.

Granted, I work with "legacy" (i.e. old but functional) code, and I secretly like the idea that adopting GraphQL would force us to fix our query layer, but why can't we fix it within the REST model?

(I happen to be about to sleep, but I am genuinely interested in waking up to see what HN thinks)


For me one of the biggest advantages of using GraphQL is that clients consuming data can directly express what they want. The backend implementation can then do all sorts of magic to turn intent into execution.

Regarding the n+1 problem: of course that can be solved with plain REST endpoints as well, but it requires you to form your API around it, whereas if you have one unified endpoint, solving the n+1 problem is a mere implementation detail on the backend.

In situations where it's desirable that a system's complexity is expressed in the API design, I can see GraphQL not being a good fit. If I compare it to SQL, it is understandable that you sometimes do need to restrict what can be done to clear predefined operations, say for performance reasons. But if you can get away with this general "intent-based" querying, which GraphQL does well, I recommend it over plain REST endpoints.


Many outlets gloss over the n+1 query problem, but I find it to be a major shortcoming of the gql spec. Sure there are solutions, but they are not very ergonomic. In one of our products we found simple requests blowing out to 100s of downstream calls to the legacy REST APIs. The pain in optimizing the GQL resolving layer negated almost any benefit of the framework as a whole.

I'm happy GQL brought API-contract type-safety to a wider audience, but it is similarly solved with Swagger/OpenAPI/protobuf et al.

Folks may be interested in Dgraph, which exposes a GQL API and doesn't fall victim to the n+1 problem, since it is actually a graph behind the scenes.


The n+1 problem is real, but I do think dataloader is a very ergonomic solution. In a simple REST API, using your ORM's preload functionality is even more ergonomic, but in more complex cases I've seen a lot of gnarly stuff, like bulk-loading a bunch of data and then threading it down through many layers of function calls, which definitely feels worse than the batched calls you make to avoid N+1s in GQL.
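
For anyone who hasn't used a dataloader: the core trick is tiny. Resolvers queue up keys individually, and the loader resolves them all with one batched fetch. A stripped-down sketch (in-memory stand-in data, manual dispatch; real dataloader libraries dispatch automatically at the end of the event-loop tick):

    import asyncio

    FAKE_GROUPS = {1: ["admins"], 2: ["users"], 3: ["users", "ops"]}

    class BatchLoader:
        def __init__(self, batch_fn):
            self.batch_fn = batch_fn  # async fn: list of keys -> list of values
            self.pending = []         # (key, future) pairs queued by resolvers

        def load(self, key):
            fut = asyncio.get_running_loop().create_future()
            self.pending.append((key, fut))
            return fut

        async def dispatch(self):
            keys = [k for k, _ in self.pending]
            values = await self.batch_fn(keys)  # ONE query instead of N
            for (_, fut), value in zip(self.pending, values):
                fut.set_result(value)
            self.pending.clear()

    async def fetch_groups(user_ids):
        # stands in for: SELECT ... WHERE user_id IN (...)
        return [FAKE_GROUPS[uid] for uid in user_ids]

    async def main():
        loader = BatchLoader(fetch_groups)
        futures = [loader.load(uid) for uid in (1, 2, 3)]  # "per-user" resolvers
        await loader.dispatch()                            # a single fetch
        print([f.result() for f in futures])               # [['admins'], ...]

    asyncio.run(main())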


> gloss over the n+1 query problem, but I find it to be a major shortcoming of the gql spec.

Why would an implementation detail like n+1 be part of the spec?


The n+1 query problem arises from having an API that doesn't let you express the full query you want, so that the server can send you back all the data in one go instead of N+1 goes. It arises naturally from bad API design - and that's a spec problem.


Unless I'm missing something in the conversation, this is exactly what GQL is designed to solve.


I'm similarly skeptical of GraphQL. However, Hasura (the company that posted the article we're discussing here) does provide a couple of key benefits:

1. It actually implements the entire API for you (you can slot in auth with a webhook or using JWTs). If you use it then your query logic is fixed by default, because you haven't implemented it at all, and Hasura implements it in a sensible way.

2. It gives you realtime/data watching capabilities. Generally in a REST model, you'd have to have a separate websocket channel and then implement the watching logic yourself. Hasura does all this for you, and allows you to reuse standard GraphQL queries for subscriptions.

We're not using it for everything (we also have REST APIs), but it's pretty handy where we are using it (and it sits on top of a standard Postgres database which you provide the credentials for, so it's super easy to integrate with an existing codebase if you're using Postgres).


Can Hasura provide record history in Postgres? This is typical of many financial systems - changing a record stores a copy of the previous record's values in a history table.


Yes (by integrating with Postgres and providing some extra information about the user making the change). This might help: https://hasura.io/docs/1.0/graphql/manual/guides/auditing-ta...


(Author here)

You are right about REST not necessarily replicating the N + 1 Query problem. ORMs have already provided solutions for this problem. See https://guides.rubyonrails.org/active_record_querying.html#e... for example.

Looking at GraphQL as being a media type for REST misses the point because:

1) If you were to build a system that primarily used GraphQL for communication, you would be breaking the spirit of the uniform interface constraint. Since the URI is the resource in REST, you would have exactly one resource. The article talks a bit more about this.

2) The value-add from GraphQL is primarily faster (and more efficient) front-end development. So the question is: do you want to implement the GraphQL media type?

Typed vs untyped interfaces are a personal preference. Typing adds safety but has historically made getting started harder. That has changed in recent years though, as tooling has significantly improved.

I've talked about sparse field sets in the article. They are an alternative, but any form of query language requires a parser and implementation in the backend and GraphQL has good tooling for this.

On the third point of N + 1 queries: Hasura compiles the GraphQL query to a single SQL query where possible and otherwise uses a batching technique for external APIs, similar to Haxl.
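
To illustrate the compilation idea with a toy example (this is not the SQL Hasura actually generates), here's a nested selection becoming one statement instead of N + 1, using SQLite's json_group_array:

    import json, sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE memberships (user_id INT, group_id INT);
        INSERT INTO users VALUES (1, 'ada'), (2, 'brian');
        INSERT INTO groups VALUES (1, 'admins'), (2, 'ops');
        INSERT INTO memberships VALUES (1, 1), (1, 2), (2, 2);
    """)

    # { users { name groups { name } } } -- the whole selection set
    # compiles to one statement, not one groups query per user:
    rows = db.execute("""
        SELECT u.name,
               (SELECT json_group_array(g.name)
                  FROM memberships m
                  JOIN groups g ON g.id = m.group_id
                 WHERE m.user_id = u.id)
          FROM users u
    """).fetchall()
    print([(name, json.loads(groups)) for name, groups in rows])
    # [('ada', ['admins', 'ops']), ('brian', ['ops'])]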


The n+1 query problem arises naturally from having limited query support and limited representation of result sets. If your REST binding for your data is one URI per entity and a query returns the URIs of the entities, then you have the n+1 problem. On the other hand, if your REST binding is that a query is an entity in itself, returning a collection representation of the result entity set, then you don't have the n+1 problem.

Apart from that, expressing queries in URIs is bloody hard.

I rather like the PostgREST approach.

Another idea is to POST queries to get a nice URI for each query, then you can GET the URI for a query using URI query parameters to supply values to the... query's parameters. (Look ma', no SQL injection.) PostgREST gives you this if you wrap queries with functions, which is a bit sucky, but it gives the DB owner a lot of control over what queries get run.
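
A minimal sketch of that POST-then-GET idea (hypothetical endpoints, Flask + SQLite; in practice only the DB owner would be allowed to register queries):

    import sqlite3
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    QUERIES = {}  # query_id -> parameterized SQL, registered by the DB owner

    @app.post("/queries")
    def register_query():
        # body: e.g. "SELECT name FROM users WHERE age > :min_age"
        qid = str(uuid.uuid4())
        QUERIES[qid] = request.get_data(as_text=True)
        return jsonify(uri=f"/queries/{qid}"), 201

    @app.get("/queries/<qid>")
    def run_query(qid):
        db = sqlite3.connect("app.db")
        db.row_factory = sqlite3.Row
        # URL params only supply *values* for the named placeholders,
        # so there's no injection.
        rows = db.execute(QUERIES[qid], request.args.to_dict()).fetchall()
        return jsonify([dict(r) for r in rows])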


> Hasura compiles the GraphQL query to a single SQL query where possible and otherwise uses a batching technique for external APIs, similar to Haxl.

This assumes the backend (the REST API behind GraphQL) supports a fetch-many-by-collection-of-ids API - at the end of the day, that needs to exist whether we're solving the N+1 problem with REST or GraphQL.


I agree with you 100%. In fact, in our company's REST standard we have three things that help with these problems:

1. All of our responses are well documented, both in separate docs and in a self-documenting format (you can GET .../$docs on any URL to get a description of the fields)

2. We support an "?include=" query param that can be used to get arbitrarily deep data with a single GET request

3. Each object contains links to its relations, and these can often be relational-style queries. For example, you can make a request to /users and that would return a list of User objects, each having a link for a field named "groups", like /groups?userid=0. However, if you want the users with all their groups directly, you can request /users?include=groups, and the backend will automatically populate that field as well.
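
To give a feel for it, here's a toy sketch of the "?include=" expansion server-side (hypothetical data; a real backend would bulk-load the included relation rather than loop per row):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    USERS = {0: {"name": "ada"}, 1: {"name": "brian"}}
    GROUPS = {0: ["admins", "ops"], 1: ["ops"]}

    @app.get("/users")
    def list_users():
        include = set(request.args.get("include", "").split(","))
        out = []
        for uid, user in USERS.items():
            row = {**user, "groups": f"/groups?userid={uid}"}  # link by default
            if "groups" in include:
                row["groups"] = GROUPS[uid]  # populated inline when included
            out.append(row)
        return jsonify(out)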


> you can GET .../$docs on any URL

Wow, that is some next level stuff right there. Very nicely done. What do you get back? JSON, plain text, HTML, man page?


JSON, describing the fields of the objects. Honestly it's not very widely used, but it was an easy implementation so we decided to leave it exposed.


Philosophically the difference is a collection of resources vs a single connected data model and enforced object representation.

The single data model lets you provide resource join requests and the enforced encoding lets the service select only desired fields.

Field query params could be provided by web frameworks, but they can't be fundamental, because REST doesn't imply JSON or even resources with fields. Similarly, no fields means no join syntax. It will always be a bolt-on solution for REST APIs.

That said, I feel like GraphQL has a flaw around multiple schemas/services. You can build a REST API as a collection of endpoints from any number of services, and because you were doing all the work anyway, it doesn't feel strange.

I'm no GraphQL expert but it seems like stitching multiple schemas and joining across them isn't possible from a client perspective. I'm not sure how this plays out in practice.

I would feel much better about GraphQL if it were a client-side library that kept the query syntax. In theory something like that could wrap any number of REST services. REST services could opt in to added functionality like query joining and field filtering as needed. Missing functionality would fall back to the client.

If we're taking requests for how to take the improvements of GraphQL into REST, that would be my suggestion.
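
To sketch what I mean (hypothetical endpoints; apollo-link-rest is a real-world cousin of the idea): the client-side layer keeps an "ask for what you want" surface, uses the server's field filtering where it exists, and falls back to a client-side join where it doesn't.

    import requests

    BASE = "https://api.example.com"  # hypothetical REST service

    def users_with_groups(fields=("id", "name")):
        users = requests.get(f"{BASE}/users",
                             params={"fields": ",".join(fields)}).json()
        # One batched request instead of one per user, server permitting:
        ids = ",".join(str(u["id"]) for u in users)
        groups = requests.get(f"{BASE}/groups",
                              params={"userid": ids}).json()
        by_user = {}
        for g in groups:
            by_user.setdefault(g["userid"], []).append(g["name"])
        for u in users:
            u["groups"] = by_user.get(u["id"], [])  # the client-side join
        return users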


I see GraphQL as a primarily organizational tool. If, for whatever reason, it is too difficult or costly to get frontend teams and backend teams aligned on the precise functionality of a REST API, GraphQL provides a spec that is almost certainly able to satisfy frontend requirements at the expense of nontrivial backend complexity.

This could arise if the backend team is producing an API for public consumption or consumption by many frontend teams with differing requirements.

I consider this a fairly narrow niche - the typical case of a backend serving a small number of frontends with tight collaboration between backend and frontend teams is, to my eyes, almost always handled more efficiently by standard REST.


My backend takes a 'fields' parameter; I use it mostly in list views.

My backend has nested models. I can request the line_items field and get that list along with the root object, and easily submit it back with added fields.


That sounds pretty much like what Vulcain [1] is doing for you on top of an existing REST API.

[1] https://github.com/dunglas/vulcain


Where GraphQL really shines is when you need to decouple the front end from the backend, or explore the API. Documentation is built in, and just about any tool (Insomnia, Postman, GraphiQL, etc.) can let you explore the introspected documentation right in the tool.

Where it falls flat on its face is unit testing (at least in C#), as it's very cumbersome to inject all the things you need.

In my personal money-management software, the GraphQL queries do not result in database queries. Instead they result in RPCs to microservices, starting orchestrations, putting items in queues, and sometimes even batching as required. It was originally a massive REST API that was hard to navigate, with a litany of query parameters, and it left the client doing too much. The backend and the front end were strongly coupled, but not anymore.


It sounds like you're talking about something like https://jsonapi.org/, specifically the sparse field sets https://jsonapi.org/format/#fetching-sparse-fieldsets

It feels like it has some similar goals to GraphQL by offering the client more control over the data returned, but not at the cost of potential performance problems.


Not a moral problem? Humanist? Why can't we have nice things and do this out of simple human empathy for those who suffer?


Well, if you have depression and you're a business owner or small trader, for example, it can be impossible to get out of bed some days and feed yourself. So I'd say it's one of the most important financial fixes for every human.

You took it as if it needs to be a fix just so people can be happy. Happiness doesn’t mean you’re surviving.


Production is the state's only problem, comrade.

