
REST APIs, versions, and the stratification of error responses

Over the last few years I have been constructing a number of REST-like services. Each time I refine my process and design principles; this time I'm going to address server-side errors, with a modest sidebar on REST API versions.

I really like the Requests toolkit for Python. The example on the home page makes it clear what we should all aspire to. Let me point out the use of r.headers['content-type']. A recent article I read suggested that the recommended mechanism is putting the version in the path:
http://example.com/api/V2/create
I suppose this is functional, but it creates a number of challenges. The first is that the infrastructure needs to be able to generate relative references and to be aware of the API version numbers, and it has to do so across all APIs. So it's an all-or-nothing approach.
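
For illustration, a call against a path-versioned endpoint with Requests looks roughly like this; the endpoint and payload are invented for this sketch:

import requests

# Hypothetical path-versioned endpoint; the version lives in the URL itself.
r = requests.post('http://example.com/api/V2/create', json={'name': 'widget'})

print(r.status_code)                # HTTP status, set by the application or the infrastructure
print(r.headers['content-type'])    # e.g. application/json
print(r.json())                     # response payload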

The other approach, which I prefer but which is not very Requests-friendly, is changing the Accept and Content-Type headers. Something like this:
http://example.com/api/create
where the Content-Type might be:
application/myapp-create-request;v=2
with a matching Accept:
application/myapp-create-response;v=2 
Of course there might be a few variations on this, but on the whole it provides for a better and cleaner routing and implementation process. The different versions can coexist in the same application space or be routed through an A/B reverse proxy component.
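
To make that concrete, the client carries the version in the media types rather than the URL. A rough Requests sketch, using the hypothetical media type names from above:

import requests

# The version travels in the headers, not in the URL path.
headers = {
    'Content-Type': 'application/myapp-create-request;v=2',
    'Accept': 'application/myapp-create-response;v=2',
}

# Body is pre-serialized so Requests doesn't override the Content-Type we set.
r = requests.post('http://example.com/api/create',
                  data='{"name": "widget"}',
                  headers=headers)

print(r.status_code)
print(r.headers['content-type'])    # should echo the versioned response type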

So much for a brief sidebar on message versions.

Unless your application is running naked, you're going to have some infrastructure running between your application and the client. Once the transaction leaves your DMZ you lose all control over everything from availability to recovery. So there are many more things to consider.

For example: the basic response message carries a StatusCode. The StatusCode (its definition, values, and interpretation are described in the RFC) can be interpreted to mean multiple things. In a normal HTTP transaction a 200 means that the request was received, processed, and a response was sent to the client. A 4XX usually indicates some sort of authentication or request error, and a 5XX usually indicates an application error, typically a crash or non-response of some kind.

But then you have to ask yourself: what should the StatusCode value be when the application determines there is an error, say when a parameter is missing or has a wrong value or format? How do you indicate that there was an error without messing with the StatusCode? Recently I refactored all of my error handlers so that they returned a 400 instead of a 200... along with an error payload.
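
A minimal sketch of that refactor, assuming a Flask-style handler and an invented create endpoint:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/create', methods=['POST'])
def create():
    payload = request.get_json(silent=True) or {}
    # Application-level validation failure: signal it with a 400 and an
    # error payload instead of hiding it behind a 200.
    if 'name' not in payload:
        return jsonify(error='missing required parameter: name'), 400
    return jsonify(id=123, name=payload['name']), 200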

I think if I had defined the transactions in a more formal manner I would have come to this conclusion a lot sooner: (a) leave the StatusCode to the infrastructure; (b) any non-200 means that the infrastructure is experiencing some pain; (c) when generating responses, use the Accept header to determine the format of the response (JSON, XML, plain text, msgpack, ...) and use the Content-Type to specify the return type:
application/json;fmt=create;v=2
Now... between the path and the Content-Type, the router and the controller know exactly what to do with the request, and the client knows exactly what to do with the response. In fact the interface could be completely decoupled from the workflow.
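
As a sketch of that routing, assuming a Flask app, JSON-only responses, and two invented handler versions; the point is only that the version parameter on the request Content-Type picks the handler, and the response Content-Type tells the client exactly what came back:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical handlers for two coexisting versions of the same operation.
def create_v1(payload):
    return {'id': 1, 'schema': 'v1'}

def create_v2(payload):
    return {'id': 1, 'schema': 'v2', 'labels': payload.get('labels', [])}

HANDLERS = {'1': create_v1, '2': create_v2}

def media_type_params(value):
    # Split 'type;k=v;k=v' into (type, {k: v}) with no external helpers.
    parts = [p.strip() for p in (value or '').split(';')]
    params = dict(p.split('=', 1) for p in parts[1:] if '=' in p)
    return parts[0], params

@app.route('/api/create', methods=['POST'])
def create():
    _, params = media_type_params(request.headers.get('Content-Type'))
    version = params.get('v', '1')
    handler = HANDLERS.get(version)
    if handler is None:
        return jsonify(error='unsupported version'), 400
    # force=True because the request media type is not application/json.
    body = handler(request.get_json(force=True, silent=True) or {})
    resp = jsonify(body)
    # The response Content-Type carries the format, the message kind, and the version.
    resp.headers['Content-Type'] = 'application/json;fmt=create;v=' + version
    return resp

In this sketch the two versions coexist in the same application space, which is exactly the kind of routing the versioned media types make possible.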
