
REST semi-realtime transactions


The freelance pattern implemented with TornadoWeb and ZeroMQ.


I recently implemented one of the reliable broker patterns described in the ZeroMQ guide. It's very similar to beanstalkd, but left to the reader to implement. That in itself is not a bad thing, but it is more code to design, write and test; and had you the budget to hire these guys directly you would get the best broker money could buy. But how reliable is this model, really?

I'm not a big fan of the broker model. It's a lot of extra code to write for the broker itself, and the broker is a single point of failure. Then there is the error handling, as the client and worker negotiate the status of a transaction only to renegotiate it when the broker fails. And then there are all those places where transactions can queue up, and all that code that gets written but does not need to be (the crux of this article).

In a brokerless model each client connects to each server (many to many). In a traditional socket implementation that would not be possible, but it is with ZMQ (read the guide). So a user app can connect to more than one server at a time, and the client will "fan out" each send() to the next server in round-robin order.
import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.setsockopt(zmq.HWM, 1)            # queue at most one outbound message per connection
socket.connect('tcp://127.0.0.1:5555')   # ZMQ endpoints use tcp://, not http://
socket.connect('tcp://127.0.0.1:5556')
socket.connect('tcp://127.0.0.1:5557')
. . .
socket.send('a message for you')         # goes to the next server in the round-robin
socket.recv()                            # REQ requires a reply before the next send()
socket.send('a message for you')
socket.recv()
socket.send('a message for you')
socket.recv()

What is going to happen here is that this code will send one message to each of the servers, assuming there is an actual connection, because the socket defines multiple endpoints. It's all very orderly and as expected.

The documentation talks about round-robining only active connections... sadly, a call to connect() without a corresponding bind on the other end is still considered a valid connection, so that endpoint would still be handed a transaction that never actually reaches a server. Meaning that some transactions are going to be delayed; just how long depends on the restart time for the downed server.

So on the upside... when everything is running smoothly, the transactions are distributed nicely and each server is given some work to perform. The workers are still standard userspace applications that do not need any special threading or processing: just bind to a socket endpoint, wait for incoming work, do the work and send a response, as in the sketch below.
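For reference, here's a minimal sketch of what one of those workers might look like. It assumes a REP socket bound on port 5555 and a stand-in do_work() function; the real service in my case is the TornadoWeb app.

import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REP)
socket.bind('tcp://*:5555')

def do_work(request):
    # placeholder for the actual transaction handling
    return 'processed: ' + request

while True:
    request = socket.recv()    # blocks until a client round-robins work here
    reply = do_work(request)
    socket.send(reply)         # REP must reply before the next recv()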

When things go wrong, or when you restart a server manually, that endpoint address is still held by the client side. Should a transaction be headed that way before the connection has been re-established, the message will block until that endpoint reconnects. If the server is running under daemontools it should restart within seconds; the transaction in the queue will be scooped up and processing will resume. The number of transactions queued per connection depends on the high water mark setting.

I say '1' transaction in the queue because we set the HWM (high water mark) to 1 when creating the connections. This is probably a good setting for realtime systems, where losing transactions in an invisible queue is the least desirable event. You might also be able to pass zmq.NOBLOCK to send() to get some other actionable events. It really depends on the application's tolerances.
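Something along these lines is what I have in mind, assuming the same REQ socket as above and a hypothetical handle_overflow() hook. With NOBLOCK the send raises EAGAIN instead of quietly queueing (or blocking) when every connection is at its high water mark or down.

import zmq

try:
    socket.send('a message for you', zmq.NOBLOCK)
except zmq.ZMQError as e:
    if e.errno == zmq.EAGAIN:
        # every endpoint is full or down: fail the transaction visibly,
        # retry later, or alert -- whatever the application can tolerate
        handle_overflow()      # hypothetical application hook
    else:
        raise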

At first I did not like the idea of losing the transaction(s), but I'm warming to the idea that the codebase will be smaller and possibly more reliable overall.
