Over the last year we built a service, and we used Swagger/OpenAPI to define the API in a machine-readable format.
The YAML spec of the API is the main contract of our service. From the spec we generate client code for different languages. We also generate parts of the server-side code, specifically the models derived from the schema objects in the spec and the JSON serialization logic. The models are generated as Scala case classes.
Regarding REST API design and so-called “best practices”, we have come full circle. The top results when you search for REST API design on the net can be summarised by these two: best practices and the Richardson maturity model. We started iterating with an RPC-ish API (Level 0), which we refined to use HTTP verbs, resources, and HTTP status codes, up to Level 2.
We invested in improving Swagger’s code generation tools so our Python client would have the best possible documentation generated from the spec. We also automated type checking in the client as a first layer of validation, providing very descriptive error messages when users make calls with wrong arguments or types. We also use the HTTP status code to catch HTTP exceptions and translate them into more semantic errors that the client throws, for example ZApiUserError for 4xx responses (user error) or ZApiInternalError for 5xx responses (service error). Having the client throw HTTP errors from the underlying transport library seemed like a leaky abstraction, so we hide them from our users. Being able to improve the code generation to adapt the client to our needs was a plus, but it required quite some effort. You should do due diligence before assuming that the generated client is going to satisfy your business needs.
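As a rough sketch of what this translation layer amounts to (the ZApi* names come from our client; translate_status and its signature are hypothetical, for illustration only):

```python
class ZApiError(Exception):
    """Base class for semantic client errors."""

class ZApiUserError(ZApiError):
    """Raised for 4xx responses: the caller did something wrong."""

class ZApiInternalError(ZApiError):
    """Raised for 5xx responses: the service failed."""

def translate_status(status, message=""):
    """Map a transport-level HTTP status code to a semantic exception,
    hiding the underlying HTTP library from the user."""
    if 400 <= status < 500:
        raise ZApiUserError(f"{status}: {message}")
    if 500 <= status < 600:
        raise ZApiInternalError(f"{status}: {message}")
```

The point is that users catch exceptions named after what went wrong, not after the transport that carried the error.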
One of the take-home messages from this process is that you have to be careful when designing your API with Swagger, because the code generators for different languages have different limitations, so not everything you can express in the spec will be supported in the generated client. For example, while you can use inheritance in the spec via type discriminators, the Java client needs to know the concrete type when it’s deserializing from JSON, and it can’t be a parent class.
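For reference, a spec fragment using a discriminator looks roughly like this in Swagger 2.0 (Pet/Cat are the standard example names, not our spec):

```yaml
definitions:
  Pet:
    type: object
    discriminator: petType     # names the property that selects the subtype
    required: [petType]
    properties:
      petType:
        type: string
  Cat:
    allOf:
      - $ref: '#/definitions/Pet'
      - type: object
        properties:
          huntingSkill:
            type: string
```

Whether a generated client can actually deserialize a response declared as Pet into a Cat depends entirely on the generator for that language.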
Regarding “good REST API design”, I said we came full circle because in the end we went back to an RPC-like model in which we POST request objects that carry the function arguments. We do this so we can guarantee client compatibility when we add arguments to calls as we expand with new features.
If you are familiar with REST APIs, Swagger, RAML, etc., you know there are several ways to pass parameters, but objects can only be passed as body parameters, and there is only one body parameter, which means that effectively you can pass only one object in the body. If you want to add a new object to an operation, you are stuck changing the model of the body parameter. It comes naturally, then, to have this single body parameter carry the function arguments and nested objects, which are extensible in the future and backwards compatible.
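A minimal sketch of this pattern from the client side, with hypothetical endpoint and argument names:

```python
import json
import urllib.request

def build_request_body(name, retries=None):
    """All function arguments travel in one request object, so optional
    fields can be added later without breaking existing clients."""
    body = {"name": name}
    if retries is not None:  # hypothetical argument added in a later version
        body["retries"] = retries
    return body

def do_something(base_url, name, retries=None):
    """POST the single request object; servers ignore keys they don't know."""
    req = urllib.request.Request(
        base_url + "/doSomething",
        data=json.dumps(build_request_body(name, retries)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Old clients simply never send the new keys, and old servers simply never see them, which is what makes the single-body-object approach backwards compatible in both directions.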
If you work backwards from the customer, most of them are going to use the client that you provide. They probably don’t care about JSON, HATEOAS, or REST, whether the function arguments go in the query or the body of the request, or whether you decide to return 404 or 409. What customers interact with and what they see is the following:
zapi.do_something(**kwargs) — they just want to find which function to call on the client using its corresponding documentation. See for example the generated Python code from the petstore example.
For all intents and purposes, HTTP and REST are just implementation details of the communication protocol. And unless the API you are exposing really maps well to resources that can be accessed as a tree, like a filesystem or an object store such as S3, treating REST as an end in itself just gets in the way and makes things more complicated than they should be. Another problem is that thinking too much about all these implementation details of HTTP semantics and idiosyncrasies prevents you from focusing on more important things like request pipelining, efficient encoding and throughput, retries with exponential backoff, etc.
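To make one of those transport concerns concrete, a minimal sketch of retries with exponential backoff and jitter might look like this (the function name and parameters are illustrative, not from our client):

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry `call` on transient errors, doubling the backoff window each
    attempt and picking a random delay within it (full jitter)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except OSError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = random.uniform(0, base_delay * (2 ** attempt))
            sleep(delay)
```

Jitter matters here: without it, many clients that failed at the same moment retry in lockstep and overload the service again.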
When I worked on Nokia’s map delivery platform, we had to design our API around high volumes of rapidly changing map data, delivered potentially over the cell network. In that case, using a binary encoding and a protocol supporting pipelining, such as SPDY or HTTP/2, were key design decisions.
If your API doesn’t get any particular benefit from mapping it to resources in a RESTful manner, I would say it’s easier to just use a proper RPC library, or to use Swagger with HTTP as a transport detail. The tools around Swagger and RAML are still quite immature. Compare, for example, how little it takes to define a service with gRPC or Thrift. Many are using custom RPC frameworks like Finagle, Verizon’s Remotely, etc.
Comparing a Thrift, Avro, or gRPC call definition with the verbosity and detail required to map the operation onto a Swagger operation leads me to the conclusion that this is all implementation detail that could be autogenerated from the call definition and its arguments if you want to use HTTP as a transport. Our client generation is fully automated. Server-side code is partially generated, and this could be improved. Behavioural tests are not autogenerated, and they would be less cumbersome to maintain if they were at least partially generated. We haven’t done it yet, but having the rest of the boilerplate code autogenerated, such as the spray routes and everything else, so that you just need to implement a method for each of your calls, would be a huge win.
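For a sense of the brevity being compared against, a hypothetical gRPC definition of the same kind of call fits in a few lines, with the HTTP mapping, routing, and serialization all left as generated detail:

```protobuf
syntax = "proto3";

service ZApi {
  rpc DoSomething (DoSomethingRequest) returns (DoSomethingResponse);
}

message DoSomethingRequest {
  string name = 1;
  int32 retries = 2;  // fields added later keep old clients compatible
}

message DoSomethingResponse {
  string result = 1;
}
```

Everything a Swagger spec spells out by hand — paths, verbs, status codes, body schemas — is implied here by the call definition and its argument types.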