Service interfaces should be well-typed. Almost by definition, the cost of updating a service interface is a fraction of the cost of updating its consumers and testing their integration. Dynamic types can lower part of that cost (though not by much, compared with modern languages and frameworks), but they add expense in many other ways.
When operating at any sort of scale, a system that's "easy to change" stops sounding virtuous. For a large organization, communicating and integrating service changes is usually more expensive than making the changes themselves. Providing service descriptions which can be converted into static types (e.g. Swagger) is very cheap, and it can prevent a host of expensive human errors.
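For illustration, here's a minimal TypeScript sketch of what those generated types buy you. The `User` shape and `/users/{id}` endpoint are invented for the example, standing in for whatever a real Swagger/OpenAPI description would be converted into by a generator:

```typescript
// Hypothetical types of the kind a Swagger/OpenAPI description can be turned
// into. In practice a tool would generate these rather than writing them by hand.
interface User {
  id: string;
  displayName: string;
  email?: string; // optional fields are visible to every consumer at compile time
}

async function getUser(baseUrl: string, id: string): Promise<User> {
  const res = await fetch(`${baseUrl}/users/${encodeURIComponent(id)}`);
  if (!res.ok) {
    throw new Error(`GET /users/${id} failed: ${res.status}`);
  }
  // The cast is the trust boundary: the static type documents the contract,
  // so a renamed or removed field becomes a compile error in every consumer.
  return (await res.json()) as User;
}
```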
In truth, poorly specified services or documents can end up impossible to change, since the risk and impact of doing so are difficult to size (or the exercise of doing so costs more than the change is worth).
A system that's easy to change is very important at scale, unless you want to write a new API or service every time a new business requirement arises.
In many situations, a true statically-typed service interface can be nearly impossible to change, or even to add fields to. New schema versions (particularly for write APIs) and dynamic/polymorphic schemas whose shape depends on the input are extraordinarily useful. They are very easy with dynamically-typed interfaces, but can be very difficult with strict statically-typed interfaces, particularly the kind that e.g. Java nudges developers towards.
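A rough sketch of that dynamic style, in TypeScript for concreteness; the `schemaVersion` and `couponCode` field names are invented for the example:

```typescript
// The payload's shape is decided by inspecting the input at runtime rather
// than by a fixed class hierarchy shared with every consumer.
function handleOrder(payload: Record<string, unknown>): void {
  const version = typeof payload.schemaVersion === "number" ? payload.schemaVersion : 1;

  // New versions or optional fields can appear without touching a shared
  // class definition; unknown fields are simply carried along.
  if (version >= 2 && typeof payload.couponCode === "string") {
    console.log(`applying coupon ${payload.couponCode}`);
  }

  console.log(`processing order at schema version ${version}`);
}
```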
There are some hybrid frameworks that mix dynamic and static typing, which solve some of these problems (e.g. protobuf, Avro, Ion). They make reasonable compromises.
Writing movie scripts in YAML using unquoted strings? That's pretty contrived. Using literal style is easy wherever it might be needed (e.g. programmatic output), and any decent editor can highlight inferred types in helpful ways. I've used YAML in a variety of contexts and never been bitten by this one, and I don't think any of his examples are still problems in YAML 1.2 (from 2009).
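A quick way to check that claim, assuming a YAML 1.2 parser such as js-yaml 4.x (the document and field names are made up). Under the 1.2 core schema only true/false (and case variants) are booleans, so a bare NO stays a string, and a literal block scalar keeps dialogue exactly as written:

```typescript
import { load } from "js-yaml";

const doc = load(`
country: NO
line: |
  Wait... NO. That can't be right.
`) as { country: unknown; line: string };

console.log(typeof doc.country, doc.country); // "string" "NO"
console.log(doc.line.trimEnd());              // the dialogue, verbatim
```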
The Ruby security problem they reference is also absurdly misattributed. The problem there is trusting serialized data to declare its own types, with no limits on which types can be instantiated during deserialization. That's a depressingly common security hole in many web frameworks, and YAML as an interchange format isn't a unique source of it. Any data format is dangerous on the web if you trust it to create arbitrary types.
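A minimal sketch of the general mitigation, in TypeScript; the registry entries are invented for the example, and the point is only that the document never gets to pick the type:

```typescript
// Only constructors registered here can be produced from untrusted input.
type Reviver = (data: Record<string, unknown>) => object;

const registry: Record<string, Reviver> = {
  comment: (d) => ({ kind: "comment", body: String(d.body ?? "") }),
  vote:    (d) => ({ kind: "vote", value: Number(d.value ?? 0) }),
};

function revive(untrusted: { type?: unknown } & Record<string, unknown>): object {
  const reviver = typeof untrusted.type === "string" ? registry[untrusted.type] : undefined;
  if (!reviver) {
    // The document asked for a type we never agreed to build.
    throw new Error(`refusing to deserialize unregistered type: ${String(untrusted.type)}`);
  }
  return reviver(untrusted);
}
```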
* Clients communicate their desired projection and page size (via the $select and $top query string parameters), which the service can then map into efficient calls to the underlying data store (see the sketch after this list).
* OData client page sizes are polite requests, not demands. The server is free to apply its own paging limits, which are then communicated back to the client along with the total result count and a URL that can be followed to get the next page of results. Clients are expected to accept and process the page of entities they are given, even if its size differs from what they requested because of server limits.
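Roughly what the client side of that handshake looks like, assuming a service that follows the OData JSON conventions (@odata.nextLink, @odata.count); the entity shape and service URL are placeholders:

```typescript
interface ODataPage<T> {
  value: T[];
  "@odata.count"?: number;
  "@odata.nextLink"?: string;
}

async function* fetchAll<T>(serviceUrl: string, select: string, top: number): AsyncGenerator<T> {
  let url: string | undefined =
    `${serviceUrl}?$select=${encodeURIComponent(select)}&$top=${top}&$count=true`;

  while (url) {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`OData request failed: ${res.status}`);
    const page = (await res.json()) as ODataPage<T>;

    // The server may return fewer rows than $top asked for; process what we
    // were given and follow the next link if there is one.
    yield* page.value;
    url = page["@odata.nextLink"];
  }
}
```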
I'd assume GraphQL will adopt similar functionality, if it hasn't already.
I don't know, there are some tradeoffs there. Their sample appears to be "nearly JSON", which doesn't seem too helpful. Being close to but noncompliant with a standard doesn't bring anything but confusion.
And it isn't obvious what they're using for transport, but it seems like they aren't attempting to model programmatic resources as web resources the way that OData does. That's an okay decision if they're trying to make it transport-neutral (i.e. you can issue the same GraphQL request via Thrift or via HTTP POST), but in that direction lie the sins of SOAP.
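For concreteness, a sketch of the HTTP POST flavor of that, with an invented endpoint and query; the same query text could just as easily be carried over some other transport:

```typescript
async function runQuery(endpoint: string): Promise<unknown> {
  // The query is opaque text as far as HTTP is concerned; the POST body is
  // just an envelope around it.
  const query = `
    {
      user(id: "42") {
        name
        friends(first: 5) { name }
      }
    }
  `;

  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  if (!res.ok) throw new Error(`GraphQL request failed: ${res.status}`);
  return res.json();
}
```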
In the past I've written a client-side caching layer for OData which was capable of doing the same automatic batching and partial cache fulfillment for hierarchical queries that they describe in the article. It is a good tool for writing complex client applications against generalized data services without giving up performance, and I'm not surprised that companies in our post-browser world are starting to move in that direction.
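The core of that kind of caching layer can be sketched in a few lines; the fetchMany callback and key scheme here are placeholders rather than anything OData-specific:

```typescript
// Answer what we can from the cache, batch the misses into one request, merge.
async function getMany<T>(
  cache: Map<string, T>,
  keys: string[],
  fetchMany: (missing: string[]) => Promise<Map<string, T>>,
): Promise<Map<string, T>> {
  const result = new Map<string, T>();
  const misses: string[] = [];

  for (const key of keys) {
    const hit = cache.get(key);
    if (hit !== undefined) result.set(key, hit);
    else misses.push(key);
  }

  if (misses.length > 0) {
    // One batched round trip covers every miss instead of a request per key.
    const fetched = await fetchMany(misses);
    for (const [key, value] of fetched) {
      cache.set(key, value);
      result.set(key, value);
    }
  }

  return result;
}
```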
I'm a little bummed that Facebook is throwing its considerable weight behind yet another piece of NIH-ware, though. Beating up the REST strawman was a poor use of half of this article; I'd be much more interested to hear why we need GraphQL when there exists a standard like OData.
I think that part of the problem with REST implied by the article is that it is purely request/response, rather than supporting full bidirectional communication.
To me the term "temporal coupling" glosses over some details, since the real consideration is the duration of the transaction versus the duration of the transport session. REST-over-HTTP can't directly represent transactions which span TCP sessions, and that's a problem if the transaction is very long or the connection is choppy.
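One common workaround, sketched here with invented paths and payloads, is to model the long-running transaction as a resource of its own so it survives any single connection:

```typescript
async function transferWithRetry(baseUrl: string): Promise<void> {
  // 1. Create the transaction; the server hands back an identifier that any
  //    later connection can refer to.
  const created = await fetch(`${baseUrl}/transactions`, { method: "POST" });
  const { id } = (await created.json()) as { id: string };

  // 2. Attach operations; each call is independently retryable if the
  //    connection drops partway through.
  await fetch(`${baseUrl}/transactions/${id}/operations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ debit: "acct-1", credit: "acct-2", amount: 100 }),
  });

  // 3. Commit. Until this call succeeds, the server holds the pending work,
  //    regardless of how many HTTP connections were involved along the way.
  await fetch(`${baseUrl}/transactions/${id}/commit`, { method: "POST" });
}
```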