Friday, October 31, 2014

dotgo - John Graham-Cumming

John gave a presentation at this year's dotGo conference in the EU on small applications, which started to sound a lot like "Unix programming"; or so I think. In his presentation he talked about an application that his boss requested, only to be asked to implement many more similar applications. As things progressed he realized that they all shared something in common: there was some input, some output, and some processing in the middle.

He presented some code, about 75 lines in exactly that format, for the first request. While John offered a GitHub version of the presentation, it was lacking the "process" details. The z.go file is simply a template. (Interestingly, it's licensed under MIT. It's probably the simplest implementation of this formula, and it's also so obvious that, while I hate software patents and encourage open source, any license for something this simple is exactly what's wrong with the system.)

Create a file echo.go in the same folder as John's project.
package main
import (
        "fmt"
        "strings"
)
type myfactory struct{}
func (f *myfactory) make(line string) task {
        return &work{line: line}
}
type work struct {
        line string
}
func (w *work) process() {
        w.line = strings.ToUpper(w.line)
}
func (w *work) print() {
        fmt.Println(w.line)
}
Next you'll have to uncomment or paste this line into the main function in the z.go file.
run(&myfactory{})
I ran the program like this:
go build
./dotgo <echo.go
The output of this call was the contents of the echo.go file above, uppercased.

Deis.io version 0.15.0 prerelease

UPDATE: even though the current release of Docker is 1.3.1; this version of Deis, 0.15.0, is using Docker 1.3.0.

Finally!!! Deis installed and ran the first time. Now I'll admit it took more than the documented 10 minutes to start the builder container, but when you consider that this container is supposed to identify, classify and build a target container (I think), it's probably bigger than the rest.
deisctl list
This will produce a list similar to fleetctl list-units but from the host system. I really like to run the command under a watcher.
watch deisctl list
This will refresh the output every 2 seconds by default.

After downloading and installing my fork of the example-go application, deisctl did not add the application to its list. There is a different command:
deis apps
That should display all of the applications that I pushed.  Interestingly I saw 3 applications including mine. Since this was a fresh install I was not expecting to see 3.
$ deis apps
=== Apps
tender-magician
newish-dragster
edible-riverbed
The deis info command produced some results:
$ deis info -a newish-dragster
=== newish-dragster Application
{
  "updated": "2014-11-01T00:09:52UTC",
  "uuid": "850320b0-b920-4d0c-8998-fcb2e5f354be",
  "created": "2014-11-01T00:09:52UTC",
  "url": "newish-dragster.local3.deisapp.com",
  "owner": "admin",
  "id": "newish-dragster",
  "structure": {}
}
=== newish-dragster Processes
=== newish-dragster Domains
No domains

And then I ran the curl command:
curl newish-dragster.local3.deisapp.com
The output of the curl command indicated that it was an NGINX installation. I repeated the deis and curl commands on the other apps and one was NGINX and the other was my application. I'm not sure why deis deployed not one but two NGINX servers in what appears to be application space.

hmmmm... perplexing.

One thing to note: there is no deis version command, so there is no authoritative way to know which version you are running.

Thursday, October 30, 2014

More flow based programming

I just finished watching the Kickstarter video for the flowhub/thegrid.io project and I'm still convinced that my FBP framework has great potential. Of course it does not have the GUI that the flowhub project has, but it has a different level of self-visualization that is still comfortable; most visual programming languages simply fail by not being feature-rich or expressive enough. I'm sure flowhub has its production use cases, and I'm curious how big a project can get and how traditional CI/CD and other production migration can be expressed on hundreds or thousands of machines while keeping everything in sync, ACID, idempotent and predictable.

PS: While I was once an Erlang programmer and have deployed some non-trivial applications, I still find the Erlang software development manual inspiring.

Curated golang libs and things

I need a goto place where I can track or at least link to my favorite libraries. Since golang is now my complete preference here is my list:

Good article on self-referential functions. Helpful article for http/REST. List of go tools. And here you can set up a project.

Instructions for setting up for go cross compiling.

gopass - console password prompt

gorename - the article, the source (the source comes from Google and attached to the go tools repo)

gogs - self hosted git service written in Go.

go-bindata - Useful for embedding binary data into a go program. go-bindata-assertfs - Serve embedded files (http handler type funcs)

msgpack seems to be a good and efficient message container. Of course there is JSON, with many different implementations and different performance profiles. What is nice about MP is its support for so many different languages, which could provide significant interop.

MQ - gnatsd or here - a go implementation of a NATS server (client libs are in adjacent projects). This project has a broker. Beanstalkd is also a strong pub/sub broker, although redis can also be used in this capacity. And of course ZeroMQ offers brokered and brokerless topologies. (I'm not a fan of RabbitMQ so no link is given.) libchan is also pretty close to an MQ. go-nanomsg and mangos. messagePack code generator. MessagePack home. Kite seems to offer a combination of RPC and netchan/libchan features. (libchan is stalled; netchan is deprecated; new-netchan is in limbo) Que-go is a work queue based on its Ruby analog. There is more than one way to do this but it's worth a read.

CLI - there are some new 3rd-party libs for processing the command line. From what I've seen they must be deeply rooted in the psyche of the author. The stdlib does a good job and does not need replacement. Command lines do not need to be complicated, and they should not be.

Logging - Lumberjack for its rolling logs. But remember that Docker likes having the logs expressed through stdout. While I'm not a fan of aggregated log files, this (log-shuttle) logger might be useful.

Database - this category is tough. It depends on the type of DB [SQL, NoSQL, other] that you might be deploying and then there are a multitude of wrappers to choose from. The latest that caught my eye was a mongo-like document API that used Postgres 9.4 as the backend. In this case if the underpinning or translated structure is "least surprising" then you might get the best of both worlds in one container. (Golang database drivers)

There are a number of ORM libraries that I have used and reused. My favorite is gorp. Since I only use the ORM methods that support proper marshaling, I have been looking into other libraries: ones that are smaller, easier to use, more thoughtful. This article was helpful. SQLX seems like a good option. One important feature is server support; many of the ORMs require specifying the dialect, and that can be a challenge.
I prefer to write the proper optimized SQL rather than letting the ORM layer make that decision for me. Early use of "hibernate", which required hand customization via the HQL syntax, meant learning two SQL variations in order to code and debug.
The Hood project has gone to great lengths to incorporate the GoLang tags as complete metadata for the DDL. While it's interesting, it's a lot of work. It means that teams will have to develop a meaningful style guide in order to use Hood in its opinionated way. I think the Go Authors did not really want that to happen, as evidenced by gofmt and its motivation(s). And frankly, if you need a DDL then implement a DDL, rather than manually preprocessing the DDL by embedding DDL meta info in the structure tags. (see go generate)

The model project does not get me to the happy place either, even though its author is the one that forced me to shift more directly toward the cleaner SQL marshaling (SQLX) rather than the mega-ORM.

Authentication

Dependencies - godep, glide and then there is gopkg.in for API versioning.

Load testing with vegeta or boom. Both, written in Go, look interesting because they offer a CLI as well as libs that can be integrated into your application. (Yes, there is apache bench [ab], LoadRunner, siege, and so many others.)

Crypto - crypt stores config data in etcd (Kelsey Hightower wrote this; I have not been able to find an authoritative link). What makes this interesting is that every application shares the same storage, but only the owner (holder of the keys) can retrieve the data. It would be interesting to combine crypt with go-bindata so that the keys might be auto-baked into the application as part of the pipeline. Additionally, one needs to publish the public keys in the etcd engine itself so that others might publish data for your application's consumption across the cluster. There are some interesting ssh APIs too.

I have been following YubiCo for a long time. They have a number of interesting projects, and while I like their HSM, it does have a few limitations by itself, chief among them TPS. So there are only two use-cases that make sense: (a) systems with low transaction rates, or (b) use the HSM to decrypt the persisted, encrypted, private keys and leave the transcoding of the cipher/plain-text to a proxy. This increases the TPS but creates a trust issue. This Yubi-KeyServer is worth watching, with the only concern being that its authorship is outside the US.

Instrumentation - "instrumentation through composition" (article), lumbermill, instruments, metrics, go-metrics, Metrics (video) Monitoring from StackExchange (article, home, code)

Web - vulcand looks like a nice reverse proxy and/or HA with interesting ambassador features for docker

IDE - Wide(web based IDE), lightIDE, Atom, Brackets, and so many others.

As a service on Windows

DNS - GeoDNS. A very lightweight DNS server. SkyDNS2.

a go version of gitreceive so that you can trigger hooks.

templates - I really like the golang std library for templating, but there is something to be said for mustache if cross-platform or shared templates are your thing.

Package templates - requires go 1.4

Go package updater via browser instead of terminal.

libvirt

link to good concurrency slide deck and video

One of the Go Authors has been arguing against frameworks. I do not recall which one or how strongly, but then there are some frameworks that appeal to me. These might not really be frameworks but templates... but who knows where the line blurs. This year John Graham-Cumming presented his dotgo project. I could not agree with him more. His project reads from stdin and writes to stdout. The processing takes place in a goroutine, so the results may be unordered but could be highly efficient.

Here is my Docker post with a few links to some docker related projects.

Jobber as a replacement for cron

Mailgun is a great toolkit for sending emails. I tried sendgrid but was not able to get it going and mailgun simply worked.

Wednesday, October 29, 2014

Overdue updates

Tweetbot for iOS is overdue to receive the "add to reading list" function directly from the main stream. Currently you have to open the link in Safari and then add it to your reading list.  That's just too many clicks.

Of course the exact same can be said for Google+'s iOS app. It's simply impossible to "add to reading list" until you get it open in Safari. There is a shortcut or two, but in the end you are still in the same place with the same number of clicks.

And of course Google's Chrome browser already syncs bookmarks and such, but they have not implemented a reading list as yet... and I'm not going to re-activate those other guys just yet. Safari did it right... and it would be flattery to get even close.

Panic needs to update StatusBoard with some of the 2.x features they talked about a year ago. And while they are at it they need to fix some awful terminal bugs in Diet Coda.

Cell phone manufacturers need to correct their pricing so that it is back in line. First of all, we need a voice-activated Motorola StarTAC. And if you've never used a StarTAC you deserve to try one. It was a small flip phone with a long battery life, and it was light yet rugged enough to survive just about any drop.

New nexus 6

It looks like one heck of a phone. And I want one. But I'm not going to pay $699 for a phone and I'm not going to get sucked into AT&T Next. I'm even more suspicious of T-Mobile's transfer program. 

Monday, October 27, 2014

Cleaning up after Docker

I happened to be scanning my CoreOS boot drive and it's 97% used. CRAP! I have a number of ways to clean up my drives, but in this case I had both active and dormant containers, so the usual way of cleaning up was not going to work. The usual way is "delete everything that is not running; both containers and images".

Well that was not going to work... but then I was lucky too. I found these two commands here:
docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi
What makes these two commands interesting is actually the first. It only deletes the containers that were active "weeks ago". Granted, it's possible to delete too much depending on the actual output of the docker command, but in this case the data matched the query exactly.
The other thing to note is that while docker seemed to delete all of the images... they were not actually deleted.  There is a btrfs_cleanup process that seems to be doing all of the heavy lifting... and my drive storage is returning to normal.
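Given that caveat about deleting too much, it's worth a dry run that prints the container IDs the "weeks ago" filter would select before piping anything into docker rm. The sample input below is illustrative, not real docker output.

```shell
# A dry run of the same filter: print the container IDs that the
# 'weeks ago' pattern would select, without deleting anything.
select_old() { grep 'weeks ago' | awk '{print $1}'; }

# Illustrative sample of `docker ps -a` output; in practice pipe
# the real command through select_old first, then add the rm step.
printf '%s\n' \
  'abc123  busybox  "sh"   3 weeks ago  Exited (0)' \
  'def456  nginx    "ng"   2 hours ago  Up 2 hours' | select_old
```

Only once the printed IDs look right would I append `| xargs --no-run-if-empty docker rm` to the real pipeline.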

Watching out for Docker

After an intense 2 weeks of deep dives into everything docker I find myself with a very short list of URLs that I need to monitor:

Docker - of course
CoreOS - naturally
fig - cool
boot2docker - required
deis - sigh
kubernetes - ditto x2
Rancher - used it when it was stampede.io. very promising.
Rancher is making a comeback

Phusion has a lot to say about container contents but it's very scary.

Building good docker images - some ideas about creating good docker images.
Reverse proxy with nginx.
Docker patterns.
Continuous integration with DroneIO
systemd alarm clock
citadel docker API
Project Atomic, Dokku
Vulcand, zero downtime deploy
WeaveDNS is one way to make service discovery easier. Weave is a network partitioning tool that could also be useful. And some boot2docker DNS.

multi-server docker.

Under Review
kitematic - docker on your mac
bowline - build server and UI for docker
artifactory - recently added docker registry to artifactory

UPDATE: flynn - I discarded my interest in flynn. The previous pre-release was in August, and now there is another that uses Docker 1.3.1. If memory serves, there was a strong dependency on a fix that was supposed to be available in Docker 1.3.0, so it's worth reviewing again. Flynn's website still indicates it's not ready for production.

The latest version of Docker is 1.3.0. There were a number of interesting features added to this version but nothing that is actually a game changer. There was one bug that I reported that seems to be fixed so I can take advantage of the patch in the CoreOS alpha channel.

** although docker is getting considerable mindshare I'm starting to rediscover my interest in appscale and google app engine.

Tuesday, October 21, 2014

PaaS critical feature

One critical feature for a PaaS that you're going to run your business on is limiting the number of service outages while upgrading. CoreOS is a good start in that the enterprise toolchain allows operators to control the rolling reboots; however, the deis instructions for upgrading require that the entire PaaS fabric be disabled during the upgrade. Before going all "in place" vs "migration", you need to understand that both are just as volatile and the chances of an outage are very high.
The only way to manage the potential service interruption is to own the service and the integration points so that the single point of failure between micro-releases is managed.
Green/Blue deployments apply to ALL aspects of the stack: the OS, the PaaS and the application/micro-services.

UPDATE: deis.io now supports in-place updates.  I need to give this a try.

Monday, October 20, 2014

First pass - web application

A multi-node HA web application based on a minimal 3-node CoreOS installation.
Install CoreOS out of the box
Expanding on my last post this is what I've been thinking about in order to get my stack operational. The incoming event starts with the client browser so there is not much to do there. The transaction also flows through public DNS and into the primary firewall/router with some built-in HA capability. There might be n routes to n nodes in the multi-node installation.
Configure the router(s) to each of the nodes
Next I need to implement an HA proxy based on hipache using etcd. There is a feature in the server that will try to detect dead servers and if detected will suspend that service until the TTL has expired.
Deploy the  proxy server and pull the backend configuration from etcd.
Implement the backend server. This can be any simple backend Docker micro-service.
Deploy the backend service using Fleet to distribute the service to each of the nodes. The pre/post config in the fleet unit/service file will update etcd so that the proxy server can be updated.
Since the proxy service is similar to a service bus it is possible to create additional services in a semi-recursive pattern so that DBs are now exposed as REST APIs instead of or in addition to implementing Docker links.

Sidekicks and ambassadors can be generalized and implemented with the Citadel toolkit.

Now let's get started.

UPDATE: It's interesting to note that the Citadel toolkit implements image links. This is not a bad thing, except that it means the links are limited to a single node. Frankly, there is a negative side effect to linking across nodes, like following the dots. If you have to exit the current node you're better off doing a little bus/SOA.

Docker, dockerclient, citadel, fig, multi-node, hipache, etcd, nginx, crypt

It's only a matter of time before the Docker team closes the loop on the multi-node Docker stack and starts to chase the complete PaaS solution. Sure; in the early days of Docker it's open source and the various teams are absorbing the code as quickly as it's available; and the different framework teams are all stitching as much code together as they can. But the one quote that seems to be sticking out in my head is something like:
Build it yourself
So while I have been testing all of the PaaS frameworks out there, they are still lacking: whether they lag current versions (Docker < 1.3.0, CoreOS < alpha, Go < 1.3.3), simply do not work, or only support a limited functional set.

So here's my intuition... and if it were my money looking for a solution in this space:

I'm starting from the fractal dimension. Docker containers are simply just another fractal dimension either in or out from that of virtual machines, mainframes, or J2EE-like enterprise SOA solutions. And like I need a solid and stable ESX server for running VMware I also need a stable bare-metal OS and so I start with CoreOS.

CoreOS provides five key technologies for free and a sixth with a support contract:

  1. locksmith - manage autoupdates on the server side
  2. systemd - linux system init
  3. etcd - raft-based replicated key/value store
  4. fleetd - distributed systemd and service manager. It maintains the service policy for each of the services. Auto restart, cluster distributed patterns, cron-like config, pre and post commands that can be used to alter etcd instance data
  5. docker - just the container
  6. console - enterprise level console monitoring
With docker version 1.3.0 the Docker team has introduced governance which will eventually end up in CoreOS and turn into a trusted compute model. Once you trust the application or micro-services running on the machine you might be able to trust their interconnection.


Fig was acquired by Docker and it provides a single node multi-service configuration tool. This includes many of the config parameters in the Docker build and run commands like both sides of the volume, port mapping, service linking, and it can scale the services on the one node in order to test the linkage of your applications. And so it is ideal for the developer.

Nginx is a nice tool. One killer feature is the soft reload: the ability to reconfigure the proxy without restarting or losing current connections. Someone in the nodejs community implemented a project called hipache. Hipache is capable of the same; however, where Nginx requires a sidekick or ambassador in order to capture the config changes, Hipache will simply monitor a Redis instance in realtime. Changes in the Redis db will immediately affect the transaction routes. There is a variation of the Hipache proxy that will monitor various etcd keys, which will then update the redis db.

What's nice here is that there is a go implementation of hipache called hipache-go. While it too uses redis, it's not much of a stretch to replace that code with etcd code, and so the route table can be affected directly by the backend services as they are started.

A fleetd pre event can be used to tell etcd that a new service is to be started, and as such can inform hipache to create a route to this service. And when the service is being stopped, the post command can cause the config to be removed from etcd, which will in turn remove the active route.
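A fleet unit file along these lines sketches the pre/post idea; the image name, etcd key, and port values below are all placeholders, not from a real deployment.

```ini
# backend@.service -- a hypothetical fleet unit template
[Unit]
Description=Backend micro-service %i
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container before starting (ignore failure).
ExecStartPre=-/usr/bin/docker rm -f backend-%i
ExecStart=/usr/bin/docker run --rm --name backend-%i -p 808%i:8080 example/backend
# Announce the route so the proxy can pick it up from etcd.
ExecStartPost=/usr/bin/etcdctl set /services/backend/%i '{"host":"%H","port":"808%i"}'
# Withdraw the route when the unit stops.
ExecStopPost=/usr/bin/etcdctl rm /services/backend/%i

[X-Fleet]
Conflicts=backend@*.service
```

The Conflicts directive asks fleet to spread the instances across machines, which is what makes the etcd-driven proxy route table multi-node.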

Finally, there is one additional project that can be used to protect configuration information. The crypt project is a crypto strategy for storing all configuration information in etcd, encrypted with a public key via a special-purpose CLI tool. The private key would be stored in a private folder on the host OS with permissions limited to a specific user. (Do not store the private key in the image or container; store it on a host volume instead.) This has a few weak points that need to be worked out.

Finally with the enterprise GUI I can set policy and monitor updates to CoreOS. This, in turn, will allow me to monitor other aspects of the system as the GUI features increase.

One last thing. While some of the tools I mention here are baked in, they are incomplete. That part of the puzzle will be addressed by the Citadel project, which uses the dockerclient project and can deploy multi-node containers with custom schedulers. So any place where these tools fail to deliver, they can be augmented with my own code. Now all I need to do is sprinkle in a little SQLite and maybe some RethinkDB...

There you have it.

Apcera's Continuum - works but is costly

I have not made the complete rounds on Continuum; however, I was able to get a VMware host running and interacting with the UI, which is a level of success I have not been able to reach with the other PaaS systems. Continuum is similar to Stampede/Cattle in that you can deploy an OS as well as containers. Continuum seems to be tweaking the glossary, but the outcome seems to be the same: Docker containers, full OS, buildpacks... But the pricing is crazy: $2000-$7000 for 32GB to 128GB. (I'm not sure what Apcera means by: "A monthly subscription of Continuum is based on assets under management and consists of a cluster and a standard support package")

Docker + Fig

There is an interesting footnote in the Docker documentation/blog. The core Docker team is merging the features from fig into Docker. While they are at it, I hope they are adding multi-node and auto ambassadors.

Sunday, October 19, 2014

Docker 1.3.0 framework updates

As of today here is the latest docker progress:

Stampede.io - no updates
cattle.io - no updates
fig.sh - current to Docker 1.3.0
kubernetes - current to Docker 1.3.0 on OSX
deis.io - no updates
flynn.io - no updates
CoreOS - updated in the alpha channel only
Dokku - no updates
OctoHost - no updates
Cocaine - not sure. cannot find the source repo
Dawn - no updates
Tsuru - no updates
ProjectAtomic.io - not finding the source or the commit history
OpenShift 3 - no updates
Panamax - using a recent CoreOS but not alpha channel
Shipyard - no updates
spin-docker - very little action at all
Solum.io - no updates
Consul - no updates

I've decided to stop researching this issue. It's entirely possible that many of these projects simply do not need to be updated.

** "no updates" means that the project has not indicated whether or not Docker has been updated.

** it's not always clear exactly how these projects are implementing Docker, so other than an explicit reference to a Docker version I cannot be sure whether there is an actual or implied dependency. I can say that until I updated each of [boot2docker, docker, fig] I was not able to get my demo environment to function properly.

boot2docker time sync

I'm struggling to keep my docker container's and docker host's clocks in sync with my host (laptop). The challenge is that when I put my MBA to sleep, the VM (VirtualBox) stops receiving clock ticks. In turn the VM believes that nothing ever happened, and so the clock is late.

There are a number of ways this is supposed to be addressed. (1) The VM should have the VirtualBox guest tools installed. This might allow the VM to trigger on wake, but who knows for sure. (2) ntpclient appears to be installed as a daemon and it also seems to be running... but it's not updating the clock. And while there is a /var/log/ntpclient.log file, it's empty.

There are a few choices to make things right.

(i) stop and start docker. This seems to force the ntpclient to do the right thing.
boot2docker stop && boot2docker start
(ii) you can just run the ntpclient command manually
boot2docker ssh -- date
boot2docker ssh -- /usr/local/bin/ntpclient -c 1 -q 200 -h pool.ntp.org

I'd prefer that ntpclient was working properly.

Apple Yosemite OSX and iOS 8?

I updated my MBA (MacBook Air) to Yosemite. The installation was not as painful as previous releases, which in particular had some FileVault errors that required a complete reinstall. I also watched the iOS 8 presentation and the most recent WWDC where Apple described all the benefits of Yosemite.

Thus far the experience has been relatively painless although not pain free. Different fonts in the menubar, more and more alpha shading, spotlight search is popup instead of a menu-like pulldown, headphone volume was quirky with Chrome until I rebooted a few times, the volume button would chirp when using the keys or mouse. Anyway... lots of changes, not all for the better.

As I watched the Yosemite presentation I found myself adoring many of the features like iCloud integration and continuation. But I still struggle with the cost. Additionally iCloud seems to be sandboxing the data in iCloud so that data cannot be shared between applications. At least it's not clear how the sandboxing is implemented. And then there is the iCloud drive. If it's a place for me to put my stuff then my apps need to be able to access them. Again; not much info.

Now that I'm using my iPhone-2 ish and a new phone and carrier on the horizon I'm considering the Nexus 6 and a Nexus 9 as a replacement for my iPhone-2 and iPad Mini.

Friday, October 17, 2014

Docker 1.3.0 is available

The one feature I have been waiting for is trusted containers. I have yet to completely understand the pipeline, but I know it's going to be very important.

New and improved Docker 1.3.0

So this was a lot of fun. At first I was thinking that I was going to leave my Docker development to my cloud servers and my work laptop. But with the latest Docker 1.3.0 release and the associated boot2docker and fig projects, I had to install it on my personal laptop.

First some descriptions:

  • Docker - an open platform for distributed applications
  • boot2docker - lightweight linux distribution that runs inside a VirtualBox virtual machine.
  • fig - configuration and orchestration for a single-host deployment
Second, a small piece of advice: fig and boot2docker are meant for development, although fig might work in environments other than boot2docker. There are a number of clues that the docker team and very early adopters (fig was recently acquired by the docker company) have left for the rest of us:

Your build or makefile should use a container to perform the build. Eat your own dogfood. Both fig and boot2docker use a docker container to create their executable tools, and boot2docker builds its iso image that is later used by the user to build apps.
Here is a short list of steps that I went through to deploy the tools on my personal Yosemite OSX laptop.
  • Download and install the apps
    • boot2docker
      • curl -L -o Boot2Docker-1.3.0.pkg https://github.com/boot2docker/osx-installer/releases/download/v1.3.0/Boot2Docker-1.3.0.pkg
      • now you have to perform the standard installation
    • fig
      • curl -L https://github.com/docker/fig/releases/download/1.0.0/fig-`uname -s`-`uname -m` > /usr/local/bin/fig; chmod +x /usr/local/bin/fig
Now you need an application. Let's start with the fig.yml file:
hello:
  image: busybox
  command: /bin/echo 'Hello world'
This is an oversimplified and small application. Its purpose is just to print the hello world message.

With the fig.yml file in the current folder then let's execute the first docker commands via the tools.

  • initialize boot2docker
    • boot2docker init
  • start the boot2docker VM instance
    • boot2docker start
  • capture the docker config so that the local tools can communicate with the VM host instance
    • `boot2docker shellinit`
  • run the application via fig
    • fig run hello
And that's it. Note that in the fig.yml file I selected the busybox linux distro. This is one of the smallest and docker-trusted distros available with enough features to run the hello application. The command is self explanatory.

** this example is not complicated enough to warrant a makefile or even a repo of its own. Just know that this is what you'll see when you run the run command for the first time:

$ fig run hello
Pulling image busybox...
df7546f9f060: Pull complete
df7546f9f060: Already exists
df7546f9f060: Already exists
df7546f9f060: Already exists
df7546f9f060: Already exists
511136ea3c5a: Already exists
511136ea3c5a: Already exists
511136ea3c5a: Already exists
busybox:ubuntu-12.04: The image you are pulling has been verified
0dfaa2625e19: Pull complete
d415c60e5ea3: Pull complete
busybox:ubuntu-14.04: The image you are pulling has been verified
25fb2184d4af: Pull complete
d940f6fef591: Pull complete
Status: Downloaded newer image for busybox
Hello world
Notice that fig executed all of the necessary docker commands to prepare the environment and eventually executed the command much like this:
docker run busybox /bin/echo 'Hello world'
One reason I selected busybox was because of its size. I had also tried ubuntu, but that was pretty big. And when I tried scratch, there was not enough gusto in the base OS to run the echo command.

The second run command will result in this:
$ fig run hello
Hello world
Notice that fig/boot2docker and docker itself did not need to re-download or re-build the environment. The container simply ran and produced output.

The default size of the boot partition is 40GB, however, I needed a bigger partition for my project. Here is the cheatsheet of the commands I executed:
  • boot2docker destroy
  • boot2docker --disksize=80000 init
  • boot2docker start
  • `boot2docker shellinit`
  • boot2docker ssh
  • df -h
The results of the 'df' command should indicate that the primary partition is 80GB. (Expect longer 'init' and 'start' execution times with the bigger disk.)

$ df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                    1.8G    204.6M      1.6G  11% /
tmpfs                     1.8G    204.6M      1.6G  11% /
tmpfs                  1004.2M         0   1004.2M   0% /dev/shm
/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1
cgroup                 1004.2M         0   1004.2M   0% /sys/fs/cgroup
none                     55.4G     30.7G     24.7G  55% /Users
/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1/var/lib/docker/aufs
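As a quick sanity check on output like the above, the size column for the resized partition can be pulled out with awk. The sample output is embedded in a variable here purely for illustration; on the real VM you would pipe `df -h` straight into awk:

```shell
# Sketch: extract the Size column for the /mnt/sda1 mount from df -h output.
# On the real host you would run:
#   boot2docker ssh df -h | awk '$NF == "/mnt/sda1" { print $2 }'
df_output='/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1
none                     55.4G     30.7G     24.7G  55% /Users
/dev/sda1                75.8G     65.6M     71.9G   0% /mnt/sda1/var/lib/docker/aufs'

# Match only the exact mount point, not /mnt/sda1/var/lib/docker/aufs.
sda1_size=$(echo "$df_output" | awk '$NF == "/mnt/sda1" { print $2 }')
echo "$sda1_size"   # prints 75.8G
```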

Capture the boot2docker host IP address with this command. ('shellinit' has its uses, but this is not one of them.)
export D_HOST=`boot2docker ip 2> /dev/null`
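To illustrate one use of the captured address, it can feed a hand-rolled DOCKER_HOST. This is only a sketch: the IP below stands in for whatever `boot2docker ip` actually returns (192.168.59.103 was the usual boot2docker default), and 2376 assumes the TLS port that Docker 1.3-era boot2docker exposes.

```shell
# Sketch: build DOCKER_HOST from the captured VM address.
# 192.168.59.103 is hard-coded for the example; on a real host you would use:
#   D_HOST=$(boot2docker ip 2>/dev/null)
D_HOST=192.168.59.103
export DOCKER_HOST="tcp://${D_HOST}:2376"
echo "$DOCKER_HOST"   # prints tcp://192.168.59.103:2376
```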
That's all there is for now. My next post might include volumes and links.

UPDATE: I forgot to add that the boot2docker host uses ntpclient, and other than the first init, subsequent 'start' commands cause the host to sync its clock immediately. There is also a timer running on the host that periodically corrects clock drift. Sadly, when I put my laptop to sleep the clocks can end up way off, so I usually stop/start my environment when I'm not using it. I'm not sure yet, but this might have other benefits like longer battery life.

Taxonomy of living things applied to code

Can code be described in terms of its taxonomy? In this way we might be able to generate a consistent model of all systems across all languages... maybe.

Thursday, October 16, 2014

hodgepodge of Docker notes as of today

Updating the latest "box" core image for your virtualbox installation requires one quick command.
vagrant box update --box coreos-alpha
 "Yes", CoreOS will update itself based on the locksmith settings on your system, but if you are running a multinode cluster that you repeatedly launch and destroy, a normal vagrant box update will be processed for each node in the cluster (running or not).

I was tinkering with the latest deis source and I noticed that there were a number of patches that might have affected my experience, but since the project only provides binaries for releases I would need to perform the compilation myself. I never completed the process, but I did notice that their Makefile depends on boot2docker in order to deploy a working compiler environment. As a devops build engineer I found this interesting and comforting, even though it did not work properly.

Deis is a cool project. It's probably the furthest ahead of all the Docker PaaS frameworks. Sadly, it's terribly buggy, the documentation is very thin, and the docs are thinnest where it might matter most in the open source arena. By comparison, boot2docker provides some great information for generating your own boot2docker image.

One of the big challenges with Vagrant is getting a boot partition bigger than the default 40GB. The de facto method for configuring additional drive space seems to be host-mounted storage or adding a virtual drive. There is nothing entirely wrong with this approach; however, boot2docker offers a default 20GB of boot storage, or you can resize the drive with these instructions; edit the config file here:
$HOME/.boot2docker/profile
BOOT2DOCKER_PROFILE=~/.boot2docker/profile boot2docker init
BOOT2DOCKER_PROFILE=~/.boot2docker/profile boot2docker up
or, since the latest "disksize" CLI options are not implemented yet, use these instructions to roll your own boot2docker-cli.
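For reference, a profile along these lines should be enough to get the bigger disk. The key names below follow the TOML-style boot2docker profile of this era and the sizes are in MB; double-check them against your boot2docker version before relying on them:

```
# $HOME/.boot2docker/profile — sketch; sizes are in MB
DiskSize = 80000
Memory = 2048
```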

This looks like a simple way to combine fig and boot2docker, made better with a bigger drive since I kept running out of disk space. I'm still trying to determine the best way to develop micro-services in a sandbox and then incorporate sidekicks to integrate them all.

Wednesday, October 15, 2014

Can't live without my speech-to-text

I hate typing on my iPhone... speech-to-text has been useful for blogging, tweeting, and all-important texting (SMS). While I do not text in public generally, it is the most efficient way to enter text when I'm walking to meetings or waiting for a movie to start. But now that I have destroyed my second iPhone 5 screen and I'm forced to use my early iPhone 2-ish... I find myself wishing I had my old Motorola StarTac with speech-to-text.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...