Effective Dating Code and JavaScript

The idea that data might have an effective date is nothing new. Applying effective dating to code fragments is more interesting, and when you add zero downtime with complete code and data synchronicity it becomes something completely different; possibly evolutionary.

I had initially thought of using C, C#, Java, Perl, Ruby, Python, Go, Erlang, and Elixir. As I went back and forth, skimming away the many fractal dimensions, every generation brought me back to where I started.

First of all, there are two major strategies and a potential hybrid.

The first approach is a router with a synchronized clock which routes transactions to blue/green server instances. Blue/green is a typical model and in most cases is implemented in an HA scenario... but the key points are that a) the code is staged in advance; b) the switch is instantaneous; and c) the system is always known to be in a particular state, so transactions are always reproducible (auditable).
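
To make the router concrete, here is a minimal sketch in Go, assuming the cutover time has been distributed to the router in advance; the addresses, ports, and date are illustrative, not from any real deployment. Every request consults the synchronized clock, so the switch is instantaneous and every routing decision is reproducible after the fact.

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	blue, _ := url.Parse("http://127.0.0.1:8081")  // current release, staged earlier
	green, _ := url.Parse("http://127.0.0.1:8082") // replacement release, staged earlier
	cutover := time.Date(2017, 11, 1, 0, 0, 0, 0, time.UTC) // the effective date

	toBlue := httputil.NewSingleHostReverseProxy(blue)
	toGreen := httputil.NewSingleHostReverseProxy(green)

	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// Before the effective date every transaction goes to
			// blue; at or after it, every one goes to green.
			if time.Now().Before(cutover) {
				toBlue.ServeHTTP(w, r)
			} else {
				toGreen.ServeHTTP(w, r)
			}
		})))
}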

The second approach is effectively a hot-plug style deployment. While Erlang and Elixir "can" support the blue/green approach, most EVM geeks prefer hot-plugging. The only problem with the EVM release manager is that it deploys one module at a time, so a transaction guarantee (reproducibility) is just not possible, and that is a big deal in most businesses, including gaming.

And then I recently saw a zero-downtime solution for Go where the new application assumes the active port numbers as the replacement server takes over for the primary. The differences here are a) there is no router, but transactions seem to move from the primary to the secondary in a deliberate fashion, allowing transactions in flight to complete rather than cutting over all at once; b) the loading of the replacement might not be triggered by the effective date; and c) there is some latency from startup to the actual assumption of duties.
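
The takeover in these schemes usually works by inheriting the listening socket rather than re-binding the port. Here is a hedged sketch of that part, assuming the primary re-execs its own binary and passes the socket as fd 3; the REPLACEMENT variable and the fd number are my own conventions, not taken from the solution mentioned above.

package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"os/exec"
)

// spawnReplacement duplicates the listening socket and starts the new
// binary with it inherited as fd 3 (entries in ExtraFiles begin at fd 3).
func spawnReplacement(ln *net.TCPListener) error {
	f, err := ln.File()
	if err != nil {
		return err
	}
	cmd := exec.Command(os.Args[0])
	cmd.Env = append(os.Environ(), "REPLACEMENT=1")
	cmd.ExtraFiles = []*os.File{f}
	return cmd.Start()
}

func main() {
	var ln net.Listener
	var err error
	if os.Getenv("REPLACEMENT") == "1" {
		// The replacement rebuilds the listener from the inherited
		// descriptor instead of binding the port a second time.
		ln, err = net.FileListener(os.NewFile(3, "listener"))
	} else {
		ln, err = net.Listen("tcp", ":8080")
	}
	if err != nil {
		panic(err)
	}
	fmt.Fprintf(os.Stderr, "pid %d serving\n", os.Getpid())
	http.Serve(ln, nil)
}

In a real handoff the primary would wrap this in an http.Server and call Shutdown (Go 1.8+) so requests in flight complete, which is exactly the deliberate drain described above.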

Of course there are other complications when comparing compiled versus scripted code. Scripted code is almost ready to execute once it is assembled, while compiled code requires at least one compile/link step. And then there is the library management requirement when trying to deploy fragments versus small change sets.

Finally, all of the above is fine when you are connecting to loosely coupled databases through REST or some sort of IPC/RPC... but when there are DB schema revision requirements everything could go belly up, because the two schemas need to coexist. And then there is the librarian and the necessary regression testing.
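
The usual way to let two schemas coexist is an expand/contract migration: ship only additive, backward-compatible changes first, backfill, and drop the old structures only after the cutover. A minimal sketch, assuming Postgres; the accounts table and its columns are hypothetical.

package main

import (
	"database/sql"

	_ "github.com/lib/pq" // any database/sql driver would do
)

// expand is the additive phase: old code ignores the new column and new
// code tolerates NULLs until the backfill completes. The destructive
// "contract" phase (dropping the old column) waits until after cutover.
func expand(db *sql.DB) error {
	stmts := []string{
		`ALTER TABLE accounts ADD COLUMN IF NOT EXISTS display_name text`,
		`UPDATE accounts SET display_name = name WHERE display_name IS NULL`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	db, err := sql.Open("postgres", "dbname=app sslmode=disable")
	if err != nil {
		panic(err)
	}
	if err := expand(db); err != nil {
		panic(err)
	}
}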

As for the actual payoff? Who knows. But certainly, with continuous deployment, effective delivery, and zero downtime in your toolbox, you can decide whether or not to deploy every VCS commit automatically or wait for a librarian to collect, regression test, and commit.

I have POC code for the first EDC system. My JavaScript code lives in the DB with the actual client data, and when the code is updated I can deploy automatically, either when it's ready or when the effective date passes. It's nearly instantaneous.
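
The selection logic is the whole trick: every transaction asks for the newest fragment whose effective date has passed, so a deploy is just an insert and the switch happens on its own. A minimal sketch in Go rather than the POC itself; the code_fragments table is a hypothetical stand-in for however the fragments are actually stored.

package edc

import (
	"database/sql"
	"time"
)

// activeFragment returns the source of the most recent fragment whose
// effective date is at or before the transaction time; versions staged
// with a future date simply wait their turn.
func activeFragment(db *sql.DB, name string, at time.Time) (string, error) {
	var src string
	err := db.QueryRow(
		`SELECT source FROM code_fragments
		 WHERE name = $1 AND effective_date <= $2
		 ORDER BY effective_date DESC
		 LIMIT 1`, name, at).Scan(&src)
	return src, err
}

Passing the transaction's own timestamp instead of time.Now() is what keeps replayed transactions reproducible: the same time always selects the same code.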

And I'm working on a Go version now. I see some code for zero-downtime integration, so I'm looking forward to testing that... but the last piece that will make it possible is some sort of ORM to manage the schema; hot-plug in an ACID sort of way.

All of the above, however, requires new discipline.
