Rancher Labs online classes

Rancher 1.x was a cool project. I liked the approach of deploying the controller, then adding workers, and then deploying applications. Under the covers the orchestration supported several deployment models, including sidekicks and persistent-container following. They did real work to spearhead persistent containers, which are complicated because of remote caching, change management, security, and so on. They also supported many different orchestrators, including their own Cattle as well as Swarm, Mesos, and Kubernetes.

With Rancher 2.x they cut the cord on every orchestrator but Kubernetes. There may be some backporting, except that Rancher excels at reverse engineering running clusters as well as deploying them. They have not talked about the internal design or motivations, but it's clear that a running cluster is more authoritative than the data structure you think you captured to represent the model.

That said, picking the authority is a challenge. Worse still is trying to identify, recover, and repair broken systems. I described this problem months ago and it is still an open issue. Strangely, while I run Swarm in production, when it goes south I have to rebuild it from scratch. Docker does not like to be repaired.

Years ago Kelsey Hightower did some lights-out operational demos. It was exciting to watch containers crash and then be repaired, though even then the failures were pretty simple. Today Kubernetes is configured with a model that says "this is how I should look"; it tests the live cluster against that model and tries for a match, filling in the parts that do not. I'm reminded of some Erlang cluster networking I did. Repairing an Erlang cluster is near impossible. Erlang would prefer a total failure and a restart... and so does Docker.
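
To make the "this is how I should look" idea concrete, here is a minimal sketch of a reconciliation loop. The types and names are my own toy assumptions, not the real Kubernetes API: compare the declared model against the observed state and emit only the corrective actions.

package main

import "fmt"

// Desired is the declared model: "this is how I should look".
type Desired struct {
	Replicas int
}

// Observed is what probing the live cluster actually finds.
type Observed struct {
	Running int
}

// reconcile compares the declared model against the live state and
// returns only the corrective actions needed to close the gap.
func reconcile(want Desired, got Observed) []string {
	var actions []string
	for i := got.Running; i < want.Replicas; i++ {
		actions = append(actions, "start a replica")
	}
	for i := want.Replicas; i < got.Running; i++ {
		actions = append(actions, "stop a replica")
	}
	return actions
}

func main() {
	want := Desired{Replicas: 3}
	got := Observed{Running: 1} // two replicas have crashed
	for _, action := range reconcile(want, got) {
		fmt.Println(action) // prints "start a replica" twice
	}
}

The point of the sketch is that the loop never rebuilds the world; it only fills in the difference, which is exactly the kind of repair that Docker and Erlang resist.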


There is something to be said for complete redeploys, especially when the system is small and fast enough to rebuild. But if you've got hundreds or thousands of systems this is not practical. Then there's the other challenge of maintaining hot spares and keeping their code and data in sync. One thing is certain: each system is different, with different disaster-recovery and availability needs.

This is not really where I was taking this post, but it is clear that disaster recovery is still a thing, and neither Docker, Kubernetes, nor Rancher has that problem solved.
