New and improved Docker 1.3.0

So this was a lot of fun. At first I was thinking that I was going to leave my Docker development to my cloud servers and my work laptop. But with the latest Docker 1.3.0 release and the associated boot2docker and fig projects, I had to install it on my personal laptop.

First some descriptions:

  • Docker - an open platform for distributed applications
  • boot2docker - a lightweight Linux distribution that runs inside a VirtualBox virtual machine.
  • fig - configuration and orchestration for a single-host deployment
Second, a small piece of advice. Fig and boot2docker are meant for development, although fig might work in environments other than boot2docker. There are a number of clues that the Docker team and very early adopters (fig was recently acquired by Docker) have left for the rest of us:

Your build or makefile should use a container to perform the build. Eat your own dogfood. Both fig and boot2docker use a Docker container to build their own deliverables: fig to produce its executable and boot2docker to produce its ISO image, which the user later boots to build apps.
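As a minimal sketch of that advice, assuming a Go project and the stock golang image (the image tag, the /src mount point and the output name are my assumptions, not part of the fig or boot2docker builds), the host only needs docker installed:

# run the compiler inside a throwaway container instead of on the host
docker run --rm -v "$(pwd)":/src -w /src golang:1.3 go build -o myapp

The same idea applies to a makefile: the build target just wraps a docker run invocation like the one above.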
Here is a short list of the steps I went through to deploy the tools on my personal OS X Yosemite laptop.
  • Download and install the apps
    • boot2docker
      • curl -L -O https://github.com/boot2docker/osx-installer/releases/download/v1.3.0/Boot2Docker-1.3.0.pkg
      • then open the downloaded .pkg and perform the standard installation
    • fig
      • curl -L https://github.com/docker/fig/releases/download/1.0.0/fig-`uname -s`-`uname -m` > /usr/local/bin/fig; chmod +x /usr/local/bin/fig
Now you need an application. Let's start with the fig.yml file:
hello:
  image: busybox
  command: /bin/echo 'Hello world'
This is an oversimplified, small application. Its purpose is just to print the hello world message.

With the fig.yml file in the current folder, let's execute the first docker commands via the tools.

  • initialize boot2docker
    • boot2docker init
  • start the boot2docker VM instance
    • boot2docker start
  • capture the docker config so that the local tools can communicate with the VM host instance (an example of what shellinit emits follows this list)
    • eval "$(boot2docker shellinit)"
  • run the application via fig
    • fig run hello
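For reference, here is roughly what shellinit emits; the eval applies these exports to the current shell. The IP address and certificate path will differ on your machine, so treat the values below only as an example:

$ boot2docker shellinit
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/<you>/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1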
And that's it. Note that in the fig.yml file I selected the busybox Linux distro. This is one of the smallest docker-trusted distros available, with enough features to run the hello application. The command is self-explanatory.

** This example is not complicated enough to warrant a makefile or even a repo of its own. Just know that this is what you'll see when you run the run command for the first time:

$ fig run hello
Pulling image busybox...
df7546f9f060: Pull complete
df7546f9f060: Already exists
df7546f9f060: Already exists
df7546f9f060: Already exists
df7546f9f060: Already exists
511136ea3c5a: Already exists
511136ea3c5a: Already exists
511136ea3c5a: Already exists
busybox:ubuntu-12.04: The image you are pulling has been verified
0dfaa2625e19: Pull complete
d415c60e5ea3: Pull complete
busybox:ubuntu-14.04: The image you are pulling has been verified
25fb2184d4af: Pull complete
d940f6fef591: Pull complete
Status: Downloaded newer image for busybox
Hello world
Notice that fig ran all of the necessary docker commands to prepare the environment and eventually executed the command, much like this:
docker run busybox /bin/echo 'hello world'
One reason I selected busybox was its size. I had also tried ubuntu, but that image is pretty big. And when I tried scratch there was not enough gusto in the base OS to run the echo command (scratch is an empty image, so there is no /bin/echo to execute).

The second run command will result in this:
$ fig run hello
Hello world
Notice that fig/boot2docker and docker itself did not need to re-download or re-build the environment. The container simply ran and produced output.
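If you are curious about what got cached, the plain docker client (now pointing at the VM thanks to shellinit) will show the downloaded image and the stopped containers that fig created; the container names are generated by fig, so this is only a hint of where to look:

docker images busybox   # the image layers pulled during the first run
docker ps -a            # the exited containers left behind by 'fig run'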

The default size of the boot2docker disk is 40GB; however, I needed a bigger partition for my project. Here is the cheat sheet of commands I executed:
  • boot2docker destroy
  • boot2docker --disksize=80000 init
  • boot2docker start
  • eval "$(boot2docker shellinit)"
  • boot2docker ssh
  • df -h
The results of the 'df' command should indicate that the primary data partition is now roughly 80GB. (Expect longer init and start execution times with the larger disk.)

$ df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                    1.8G    204.6M      1.6G  11% /
tmpfs                     1.8G    204.6M      1.6G  11% /
tmpfs                  1004.2M         0   1004.2M   0% /dev/shm
/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1
cgroup                 1004.2M         0   1004.2M   0% /sys/fs/cgroup
none                     55.4G     30.7G     24.7G  55% /Users
/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1/var/lib/docker/aufs

Capture the boot2docker host IP address with this command. 'shellinit' has its uses, but handing you the bare IP is not one of them:
export D_HOST=`boot2docker ip 2> /dev/null`
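The captured address is handy when you want to reach a container's published port from the Mac. The port here is just a hypothetical example:

curl http://$D_HOST:8080/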
That's all there is for now. My next post might include volumes and links.

UPDATE: I forgot to add that the boot2docker host uses ntpclient. Other than during the first init, subsequent 'start' commands cause the host to sync its clock immediately, and a timer running on the host periodically corrects time drift. Sadly, when I put my laptop to sleep the clocks can end up way off, so I usually stop/start my environment when I'm not using it. I'm not sure yet, but this might have other benefits like longer battery life.
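When the clock has drifted and a full stop/start feels heavy-handed, forcing a one-off sync inside the VM has also worked for me; I'm assuming the ntpclient binary that ships with boot2docker and a reachable NTP pool:

boot2docker ssh sudo ntpclient -s -h pool.ntp.org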
