Docker, dockerclient, citadel, fig, multi-node, hipache, etcd, nginx, crypt

It's only a matter of time before the Docker team closes the loop on the multi-node Docker stack and starts chasing a complete PaaS solution. Sure, in these early days Docker is open source, the various teams are absorbing the code as quickly as it lands, and the different framework teams are stitching together as much of it as they can. But the one quote that keeps sticking in my head is something like:
Build it yourself
So while I have been testing all of the PaaS frameworks out there, they are still lacking. Either they lag the current releases (Docker < 1.3.0, CoreOS < Alpha, Go < 1.3.3), or they simply do not work, or they only support a limited set of functionality.

So here's my intuition... what I would do if it were my money looking for a solution in this space:

I'm starting from the fractal dimension. Docker containers are just another fractal dimension, one step in or out from virtual machines, mainframes, or J2EE-like enterprise SOA solutions. And just as I need a solid and stable ESX server to run VMware, I need a stable bare-metal OS here, so I start with CoreOS.

CoreOS provides five key technologies for free and a sixth with a support contract:

  1. locksmith - manages auto-update reboots on the server side
  2. systemd - Linux system init
  3. etcd - raft-based replicated key/value store (a quick etcdctl sketch follows this list)
  4. fleetd - distributed systemd and service manager. It maintains the service policy for each of the services: auto restart, cluster distribution patterns, cron-like config, and pre and post commands that can be used to alter etcd instance data
  5. docker - just the container
  6. console - enterprise level console monitoring
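
To make the etcd piece concrete, here is roughly the kind of key/value traffic everything below leans on. The /services/web path is only an illustrative name, and flag spellings vary a little between etcdctl releases:

    # write and read a value (any member of the cluster will do)
    etcdctl set /services/web/unit-1 '{"host": "10.0.0.11", "port": 8080}'
    etcdctl get /services/web/unit-1

    # list a directory, or block until a key changes
    etcdctl ls --recursive /services/web
    etcdctl watch /services/web/unit-1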
With Docker version 1.3.0 the Docker team has introduced governance features (image signature verification) which will eventually end up in CoreOS and turn into a trusted compute model. Once you trust the application or micro-services running on the machine, you might be able to trust their interconnection.


Fig was acquired by Docker and it provides a single-node, multi-service configuration tool. It covers many of the config parameters of the Docker build and run commands, such as both sides of a volume mount, port mapping, and service linking, and it can scale services on the one node so you can test the linkage between your applications. That makes it ideal for the developer.
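
A minimal fig.yml, loosely patterned on the canonical Fig example (service names, image, ports and paths are placeholders):

    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - redis
      volumes:
        - .:/code
    redis:
      image: redis

fig up -d brings both services up on the one node, and fig scale can then multiply a service to exercise the linkage before any real scheduler gets involved.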

Nginx is a nice tool. One killer feature is the soft reload: the ability to reconfigure the proxy without restarting or losing current connections. Someone in the node.js community implemented a project called Hipache. Hipache is capable of the same, however, where Nginx requires a sidekick or ambassador in order to capture config changes, Hipache simply monitors a Redis instance in realtime. Changes in the Redis db immediately affect the transaction routes. There is also a variation of the Hipache proxy that monitors various etcd keys and then updates the redis db.
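
For reference, a Hipache route in Redis looks roughly like this (the hostname and backend address are placeholders; the first list element is an identifier and the rest are backends):

    redis-cli rpush frontend:www.example.com mywebsite
    redis-cli rpush frontend:www.example.com http://192.168.0.42:8080

    # inspect or remove the route
    redis-cli lrange frontend:www.example.com 0 -1
    redis-cli del frontend:www.example.com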

What's nice here is that there is a Go implementation of Hipache called hipache-go. While it too uses Redis, it is not much of a stretch to replace that code with etcd code, so that the route table can be affected directly by the backend services as they are started.
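
As a rough sketch of that substitution, assuming etcd's v2 HTTP API on the default client port and a hypothetical /hipache/frontend key space, a watcher goroutine could keep an in-memory route table current without touching Redis at all (a production watcher would also track waitIndex so no events are missed between polls):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
        "sync"
        "time"
    )

    // just the slice of the etcd v2 keys API response we care about
    type etcdNode struct {
        Key   string `json:"key"`
        Value string `json:"value"`
    }

    type etcdEvent struct {
        Action string   `json:"action"`
        Node   etcdNode `json:"node"`
    }

    var (
        mu     sync.RWMutex
        routes = map[string]string{} // etcd key (one per frontend) -> backend URL
    )

    // watchRoutes long-polls etcd and keeps the in-memory route table current.
    func watchRoutes(endpoint string) {
        for {
            resp, err := http.Get(endpoint + "/v2/keys/hipache/frontend?wait=true&recursive=true")
            if err != nil {
                log.Println("etcd watch error:", err)
                time.Sleep(time.Second)
                continue
            }
            var ev etcdEvent
            if err := json.NewDecoder(resp.Body).Decode(&ev); err == nil {
                mu.Lock()
                if ev.Action == "delete" || ev.Action == "expire" {
                    delete(routes, ev.Node.Key)
                } else {
                    routes[ev.Node.Key] = ev.Node.Value
                }
                mu.Unlock()
                log.Println("route table:", ev.Action, ev.Node.Key)
            }
            resp.Body.Close()
        }
    }

    func main() {
        go watchRoutes("http://127.0.0.1:4001")
        select {} // the real proxy would serve traffic here, reading routes under mu.RLock()
    }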

A fleetd pre command can be used to tell etcd that a new service is about to start, which in turn can inform Hipache to create a route to that service. And when the service is stopped, the post command can remove the config from etcd, which in turn removes the active route.
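
A sketch of such a template unit (saved as something like web@.service; the image name, port and etcd key are placeholders carried over from the sketches above):

    [Unit]
    Description=Example web service behind Hipache
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f web-%i
    ExecStartPre=/usr/bin/etcdctl set /hipache/frontend/www.example.com http://%H:8080
    ExecStart=/usr/bin/docker run --name web-%i -p 8080:80 example/web
    ExecStop=/usr/bin/docker stop web-%i
    ExecStopPost=/usr/bin/etcdctl rm /hipache/frontend/www.example.com

    [X-Fleet]
    Conflicts=web@*.service

The leading "-" on the first ExecStartPre tells systemd to ignore the cleanup step failing when there is no stale container to remove.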

Finally there is one additional project that can be used to protect configuration information. The crypt project is a crypto strategy for storing all configuration information in etcd, encrypted with a public key via a special-purpose CLI tool. The private key would be stored in a private folder on the host OS with permissions limited to a specific user (do not store the private key in the image or the container; keep it on a host volume). This has a few weak points that still need to be worked out.
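
From memory the crypt CLI usage looks roughly like the following; treat the flag spellings, keyring filenames and key path as assumptions to verify against the crypt README rather than a reference:

    # encrypt config.json with the public keyring and store it in etcd
    crypt set -backend="etcd" -endpoint="http://127.0.0.1:4001" -keyring=".pubring.gpg" /app/config config.json

    # decrypt it again using the secret keyring, which lives only on the host
    crypt get -backend="etcd" -endpoint="http://127.0.0.1:4001" -secret-keyring=".secring.gpg" /app/config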

And with the enterprise GUI I can set policy and monitor updates to CoreOS. This, in turn, will let me monitor other aspects of the system as the GUI features grow.

One last thing. While some of the tools I mention here are baked in, they are incomplete. That part of the puzzle will be addressed by the Citadel project, which builds on the dockerclient project and can deploy multi-node containers with custom schedulers. So any place where these tools fail to deliver, they can be augmented with my own code. Now all I need to do is sprinkle in a little SQLite and maybe some RethinkDB...
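
To give a flavor of that augmentation, here is a bare-bones sketch that talks straight to the Docker Remote API over the unix socket, which is the layer dockerclient and Citadel wrap; a home-grown scheduler would simply choose which host's endpoint to aim at. The image and container names are placeholders, and real code would check the HTTP status codes:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "net"
        "net/http"
    )

    // client that speaks HTTP over the local Docker unix socket
    func dockerClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                Dial: func(network, addr string) (net.Conn, error) {
                    return net.Dial("unix", "/var/run/docker.sock")
                },
            },
        }
    }

    func main() {
        c := dockerClient()

        // create a container from a placeholder image
        body, _ := json.Marshal(map[string]interface{}{
            "Image": "example/web",
            "Cmd":   []string{"/bin/webapp"},
        })
        resp, err := c.Post("http://docker/containers/create?name=web-1",
            "application/json", bytes.NewReader(body))
        if err != nil {
            log.Fatal(err)
        }
        var created struct {
            Id string `json:"Id"`
        }
        json.NewDecoder(resp.Body).Decode(&created)
        resp.Body.Close()

        // start it; an empty JSON object stands in for the HostConfig
        resp, err = c.Post(fmt.Sprintf("http://docker/containers/%s/start", created.Id),
            "application/json", bytes.NewReader([]byte("{}")))
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
        log.Println("started container", created.Id)
    }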

There you have it.
