Docker says what?

I'm trying to bring CoreOS, Docker and possibly Rancher into my work environment. I completely understand the risk associated with deploying alpha and beta level code. In this case both CoreOS and Docker appear to be stable. Rancher might be the weak link; however, since it's just being used to access the registry, deploy the images and connect the sidekicks and a few minor services... I'm not concerned. Everything can be overridden from the Docker command line.

I started to put together some notes in order to deploy a 3-node cluster on my MacBook. I present them here. Note that they are high-level and sometimes infuriating.

[7/11/15, 4:42:36 PM] Richard Bucker: I hate to say it but for the purpose of the next play date I am installing virtualbox and vagrant.  Just because I have to in order to kick things off.
[7/11/15, 4:49:41 PM] Richard Bucker: [1] install virtualbox EASY
[7/11/15, 4:49:49 PM] Richard Bucker: [2] install vagrant EASY
[7/11/15, 4:50:06 PM] Richard Bucker: [3] install a git client EASY
[7/11/15, 4:50:39 PM] Richard Bucker: [4] clone github.com/coreos/coreos-vagrant
[7/11/15, 4:50:54 PM] Richard Bucker: [5] copy config file and edit
[7/11/15, 4:51:04 PM] Richard Bucker: [6] copy user-data file and edit
[7/11/15, 4:51:16 PM] Richard Bucker: [7] vagrant up
[7/11/15, 4:51:54 PM] Richard Bucker: [8] vagrant ssh core-01
[7/11/15, 4:52:37 PM] Richard Bucker: [9] install docker client on your client
[7/11/15, 4:53:35 PM] Richard Bucker: [10] export the DOCKER_HOST env variable - see the config or user-data; one or the other had the values. Now you can send commands in from the host OS, although it's not necessary.
[7/11/15, 4:54:21 PM] Richard Bucker: [11] ssh into the master: vagrant ssh core-01
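The steps above boil down to roughly this shell session. It's a sketch, not gospel: the repo layout is the 2015-era coreos-vagrant, and the DOCKER_HOST address and port are assumptions based on its defaults — the real values depend on your config.rb and user-data edits.

```shell
# Steps [1]-[11] condensed; assumes VirtualBox, Vagrant and a git client are installed.
git clone https://github.com/coreos/coreos-vagrant
cd coreos-vagrant

# [5]/[6] copy the sample configs and edit them (e.g. set $num_instances=3
# in config.rb, and expose the docker TCP socket in user-data if you want
# host-side access).
cp config.rb.sample config.rb
cp user-data.sample user-data

# [7]/[8] bring the cluster up and log into the first node.
vagrant up
vagrant ssh core-01

# [10] from the host OS, point the local docker client at the VM (optional;
# the IP and port here are assumed defaults -- check your own config).
export DOCKER_HOST=tcp://172.17.8.101:2375
docker ps
```

This is a command transcript against a live VirtualBox/Vagrant environment, so it isn't runnable standalone.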
[7/11/15, 4:55:34 PM] Richard Bucker: ** get comfortable with some CLI commands.  journalctl, systemctl, etcdctl, fleetctl
[7/11/15, 4:57:02 PM] Richard Bucker: ** rancher is here
[7/11/15, 4:57:31 PM] Richard Bucker: [12] install the rancher master:  docker run -d --restart=always -p 8080:8080 rancher/server
[7/11/15, 5:04:34 PM] Richard Bucker: ** assuming that the previous docker command succeeds.... here are some docker commands.
[7/11/15, 5:04:37 PM] Richard Bucker: docker ps
[7/11/15, 5:04:45 PM] Richard Bucker: docker ps -a
[7/11/15, 5:04:48 PM] Richard Bucker: docker images
[7/11/15, 5:06:26 PM] Richard Bucker: [13] launch your browser on your pc, get the ip address of core-01, and then put this address in your browser:  http://<ip address>:8080
[7/11/15, 5:06:38 PM] Richard Bucker: you should see the rancher server
[7/11/15, 5:07:12 PM] Richard Bucker: [14] follow the "add your first host" wizard
[7/11/15, 5:07:32 PM] Richard Bucker: [15] save the ip address, click on Custom
[7/11/15, 5:08:09 PM] Richard Bucker: then copy the string below and paste it into the master and the two slaves.  This means you'll have 3 hosts for rancher
[7/11/15, 5:08:35 PM] Richard Bucker: (you do not need the sudo)
[7/11/15, 5:10:50 PM] Richard Bucker: ** I opened one ssh terminal into each of the 3 slaves and pasted the docker command... they started to install another docker container from the registry... it's going to take a few minutes.
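The registration string the wizard generates looks roughly like the following. This is an illustrative shape only — the /v1/scripts/<token> URL is unique to your install, and the IP here is an assumed default, so copy the real string from the wizard, not this sketch.

```shell
# Paste the wizard's string into each node; <token> is a placeholder here.
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent http://172.17.8.101:8080/v1/scripts/<token>

# Watch the agent container come up.
docker ps
```

Like the setup steps above, this needs the live cluster and a real registration token, so it isn't runnable standalone.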
[7/11/15, 5:16:20 PM] Richard Bucker: ok, my slaves are finished. I ran a few "docker ps" commands in each window and there they are
[7/11/15, 5:16:54 PM] Richard Bucker: !!!!!!!!!!! my MacBook is suffering!!!!!!
[7/11/15, 5:17:20 PM] Richard Bucker: at the bottom of the web page there is a CLOSE button, click
[7/11/15, 5:20:25 PM] Richard Bucker: when the window closes you'll see that the master web page has 3 hosts on it.  Have fun with that and click around.
[7/11/15, 5:20:46 PM] Richard Bucker: eventually you'll have to click on the SERVICES tab
[7/11/15, 5:23:24 PM] Richard Bucker: and if not already configured you'll need to add a docker REGISTRY. Normally in an enterprise you'd have your own registry server and you'd populate your own docker images.
[7/11/15, 5:25:47 PM] Richard Bucker: In the meantime you might have to add the default public registry.  Then you can install applications, databases and other services.
[7/11/15, 5:26:35 PM] Richard Bucker: The registry is also where the development team would deploy their applications so that the OPS team can deploy them.... automated or not.
[7/11/15, 5:27:35 PM] Richard Bucker: Rancher allows the OPS team to partition the containers.... DEV, PROD, STAGING etc.... up to the user to name them
[7/11/15, 5:29:33 PM] Richard Bucker: when two or more containers have an inter-container link then rancher creates a "sidekick".  The sidekick is a special type of container that manages the connections and connection timing so that the containers can be launched in any order, leaving the discovery to the sidekick.
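For what it's worth, in rancher-compose a sidekick can also be declared explicitly with a label on the primary service. A sketch, with made-up service and image names:

```yaml
# docker-compose.yml fragment: "app" gets "db-proxy" as its sidekick,
# so Rancher schedules and links them together regardless of launch order.
app:
  image: example/app
  labels:
    io.rancher.sidekicks: db-proxy
db-proxy:
  image: example/db-proxy
```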
[7/11/15, 5:30:33 PM] Richard Bucker: recently rancher added a load balancer.  I have not used it but I think it's meant to handle transient services.
[7/11/15, 5:34:14 PM] Richard Bucker: **NOTE because the rancher config was performed manually it will not survive a reboot. The containers will have to be restarted. That's not going to work well because after the reboot there are going to be some docker breadcrumbs out there.  They need to be cleaned up between reboots.
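A hedged sketch of that between-reboot cleanup. Note these commands remove ALL exited containers and dangling images on the node, not just Rancher's, so use with care:

```shell
# Remove containers left in the "exited" state by the previous boot.
docker rm $(docker ps -aq -f status=exited)

# Remove dangling (untagged) image layers left behind.
docker rmi $(docker images -q -f dangling=true)
```

Both commands need a running docker daemon on the node, so run them from an ssh session on each host.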
[7/11/15, 5:34:55 PM] Richard Bucker: my system is junk because I allocated too much RAM per VM.
[7/11/15, 5:36:02 PM] Richard Bucker: The basic required building block for this cluster is going to be a dedicated registry and pxe server.  They could coexist on the same machine... but I'd like a HUGE chunk of disk.
[7/11/15, 5:40:56 PM] Richard Bucker: Docker containers..... Ask around these days and you'll get all of the worst practices you can imagine.  My favorite is when the DEVs use ubuntu or fedora for EVERY container.  My second favorite is when they use phusion's implementation: http://phusion.github.io/baseimage-docker/
[7/11/15, 5:44:36 PM] Richard Bucker: The best way to deploy an application in Docker is to make it a completely self-contained binary executable.  This way the application is the only artifact in the container. This way [a] the application might get hacked but the underlying OS would not... because there isn't one. [b] it's impossible to attack a port that does not have a listener. The takeaway is that python, perl, ruby etc... applications cannot run self-contained in a container without dragging in the interpreter and all of its dependencies.
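For a statically linked binary (Go is the easy case) that idea looks something like this; the image and binary names are made up:

```dockerfile
# Build the static binary on the host first, e.g.:
#   CGO_ENABLED=0 go build -o myapp .
# Then the container holds nothing but the executable -- no shell, no OS.
FROM scratch
COPY myapp /myapp
EXPOSE 8080
ENTRYPOINT ["/myapp"]
```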

The entry-level environment is pretty big by normal development standards. I hate to think I'm going to need a MacPro to get started. But on the other hand even that might not be a bad idea since it's so beefy.

Grrr....

UPDATE: While rancher is fun it might not be a good production environment until containers can fail gracefully, become sticky to recover from reboots, and be highly available to permit deployment recovery somewhere in the cluster. (See fleetd and cloud-config)
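Since the update mentions fleetd and cloud-config: a fleet unit is one way to make the rancher/server container sticky across reboots. A sketch with assumed names:

```ini
# rancher-server.service -- submit with: fleetctl start rancher-server.service
[Unit]
Description=Rancher server
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f rancher-server
ExecStart=/usr/bin/docker run --name rancher-server -p 8080:8080 rancher/server
ExecStop=/usr/bin/docker stop rancher-server
Restart=always
TimeoutStartSec=0
```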
