Monday, September 29, 2014

CoreOS and Docker - a trusted compute environment

Ben Golub, CEO of Docker, was asked:
PwC: Could you extend a governance model along these same lines?
BG: Absolutely.  [...]
Now it's going to be up to the operating system providers to close the gap to the hardware so that (a) the OS is protected and that protection extends to the container agent (docker).

I asked Alex Polvi, CEO at CoreOS, a similar question and he answered:
AP: We are working on a full trusted computing environment using CoreOS
We are living in exciting times!

Sunday, September 28, 2014

"Dear clueless assholes: stop bashing bash and GNU."

I do not know enough about the true history or motivation of the GNU way of doing things. Over the years I have observed an acceleration of free and open "things" that might owe its velocity to GNU. But then there was a lot going on in those days. Any number of projects could and would have usurped GNU with simple mindshare had the timing been right.

But that's not my reason for the comment. I take issue with the statement "Shellshock is not a critical failure in bash. It is a critical failure in thousands of people who knew a tool so useful that they decided to deploy it far beyond its scope." Eric S. Raymond wrote a book called "The Art of UNIX Programming". (ESR is no less important to the Open Source and Free communities.)

In section 1.6, titled "Basics of the Unix Philosophy", he lists 17 rules. The first two or three seem to immediately contradict Andrew's conjecture.

Rule 1) Rule of Modularity: Write simple parts connected by clean interfaces
Rule 2) Rule of Clarity: Clarity is better than cleverness
Rule 3) Rule of Composition: Design programs to be connected with other programs

Frankly, who is to say what exactly the scope of bash was? I have read much of the man pages and some of the code, and I do not recall anything that would suggest "don't do this because bad things will happen". That's just silly.

That this exploit exists in bash cannot be debated; however, to defend RMS by suggesting "free" trumps "responsibility" is nonsense. It looks like someone, possibly RMS, added a "clever" feature to bash which is now being composed into the exploit we all now fear.

As I hinted, I do not think his income or quality of life has anything to do with the argument.

Tuesday, September 23, 2014

Google Compute Engine to the rescue

I have a server instance hosted on the GCE platform. GCE offers some really nice services and I've only scratched the surface. When I created my GCE instance I wrote all my notes down in my iCloud notepad... and thankfully all of those notes are still there. Sadly my MacBook Air crashed and required a new hard drive. As a result I lost some notes that were in text files and not cloud-ified.

Fast-forward a few months: I want to deploy another instance, but since I use etcd I have no idea what the UUID is or what the various mount points are. (When an instance shares a drive it is mounted RO instead of RW.) And while I have my notes, I cannot be 100% certain I have the correct config.

Enter gcloud. I updated my GCE SDK and issued the command:
$ gcloud compute instances describe pcore1
And voila. I have the exact metadata I used to create the first system. Now I should be able to edit it for the additional instances and move on.

Saturday, September 20, 2014

iOS 8 is so bad....

The combined negative behaviors of my iPhone and iPad only strengthen my decision to move to an Amazon Fire or Google Nexus. Market share is not an excuse to release bugs in every app, and it is definitely not an excuse to buy an iPhone 6.

Thursday, September 18, 2014

iOS8 first impressions

At first I was a little disappointed with the 60-minute upgrade. My iPad required a 1.3 GB download and my iPhone required a 1 GB download. Once the new operating system was installed I had to set it up again. The bad news is I can never remember my iCloud password. After a quick password reset I was well on my way.

My first impressions include the useless features: the health monitor, podcasts, emoticons, word prediction, and tips.

The one killer feature is the vast improvement to speech-to-text.

That's what I saw in the first hour of use. 

[UPDATE] I'm a huge fan of the products that panic.com creates, and they just released an iOS version of their killer FTP program called Transmit. I would love to give them my hard-earned $10; however, I have absolutely no idea why I would be FTPing anything to or from my iPhone or iPad. Swiping to delete email just deletes the email instead of prompting the more/flag/delete menu.

[UPDATE] Unfortunately it's buggy where it counts. The menu button is jumpy and puts you on the wrong view. Less important but interesting: the "Updated" widget at the bottom of the mail client does not update properly. Mine has displayed "Updated Yesterday" since yesterday.

[UPDATE] As fun and interesting as an Amazon phone would be, when you want a phone to be a phone, Android is still the best platform.

PS: I've noticed that the menu button seems to be a little buggy.

Tuesday, September 16, 2014

Citadel - Docker APIs

I have been researching Docker orchestration, composition, or scheduling, depending on which sources you've been following. And as I wander around this maze I have looked at many different approaches to the problem: everything from Puppet, Chef, Ansible, and SaltStack, to Panamax, Kubernetes, Fleet, Mesosphere, Deis, Flynn, Fig, Shipyard, ambassadord, registrator... and more. I even departed from these suggestions to look at Citadel, which is an API toolkit for interacting with Docker directly.

The Citadel toolkit gives you the necessary tools and frameworks to implement your "configuration as code". It is the foundation on which Shipyard is implemented; and while Shipyard uses Citadel as part of the GUI, there is no reason why the "application" could not be a static configuration in a "solo" fashion. Citadel is robust enough that you could deploy ambassadors or sidekicks while deploying the target container. Citadel shines in that it provides an API metaphor for deploying containers in clusters as well as allowing user-defined "scheduling" functions.
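
To make "configuration as code" concrete, here is a minimal Go sketch. It does not use Citadel's actual API (which I have not verified here); instead it drives the Docker remote API directly. Everything specific is an assumption for illustration: a daemon listening on tcp://127.0.0.1:2375 and a locally available "redis" image.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

func main() {
    docker := "http://127.0.0.1:2375" // assumption: Docker daemon bound to this TCP address

    // The container configuration is expressed as code, not a config file.
    cfg := map[string]interface{}{
        "Image": "redis",
        "Cmd":   []string{"redis-server"},
    }
    body, err := json.Marshal(cfg)
    if err != nil {
        panic(err)
    }

    // POST /containers/create returns the new container's Id.
    resp, err := http.Post(docker+"/containers/create", "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var created struct{ Id string }
    if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
        panic(err)
    }

    // POST /containers/{id}/start launches it.
    start, err := http.Post(docker+"/containers/"+created.Id+"/start", "application/json", bytes.NewReader([]byte("{}")))
    if err != nil {
        panic(err)
    }
    start.Body.Close()

    fmt.Println("started container", created.Id)
}

Citadel layers cluster membership and its pluggable scheduling functions on top of calls like these; the point is only that the whole deployment can live in version-controlled code rather than a pile of config files.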

Where Citadel fails is that it implements some of the features that Fleet does, but incompletely. Citadel, by itself, cannot run cron-style jobs or auto-restart containers (short of restarting them forever), and the schedulers still need to be implemented.

If you need basic access for deploying containers, or maybe taking an image of the running containers, then this is a nice tool. But orchestrating a full launch or deployment is another matter. Fleet is still the hands-down winner even though it's "configuration as configuration".

[UPDATE] I think that Fleet might be a better input mechanism and Citadel might be a better output mechanism; however, it is not completely clear where they cross over.


The best project I ever implemented

It's possible that this is a retelling of a post I've already written.

In the spirit of the Food Network's "the best meal I ever ate" show, this post describes the best project I ever implemented.

In the late 1980s and early '90s I worked as a contract programmer assigned to work for IBM's Manufacturing Systems Division in Boca Raton, Florida. I had considerable CUA (Common User Access) experience and I was supposed to validate a 3rd party's compliance before IBM released the project. The problem was, however, (a) the application was not compliant, (b) it was extremely buggy and in the first week I wrote 1500 PSRs, or bug reports, and (c) the vendor overstated its scalability.

The first "best project" was a terminal simulator.  Per (c) the vendor stated that their control program could download the application and RTOS to 4096 devices in some ridiculously short period of time. It was essentially the theoretical maximum capacity of the RS-422. Since I was skeptical I designed and implemented a terminal simulator which was capable of simulating all 4096 terminals. Testing proved that the vendor could not achieve more than 25% of their promised capacity. In the end IBM elected to allow them to use the application in development.

The second "best project" was a barcode reader simulator. IBM had an important client that was having problems with their barcode reader on one of IBMs MSD computers. From time to time the computer would misread the barcode by inverting the first two characters. This was not a serious problem but trouble for management nonetheless. Back in those days we had a parallel printer port that we used to connect our printers.  The parallel port was not much more than a DIDO (digital I/O device) with individually addressable pins at the device chip level. So I connected my parallel port to an oscilloscope and then to one of the MUTs (machine under test)... and proceeded to write a simulator that could simulate the barcode device, the human and electrical characteristics, including first bar blooming. After testing for a few days I managed to reproduce the problem. Between first bar blooming, a stuck interrupt in the terminal application and a weakness in the I-2of5 barcode there was a character inversion.

The third "best project" was a regression test tool. By this time IBM had decided to build a new device. This new device was going to be implemented from the ground up and I was going to be working on the regression tests. At the time testing was primitive and manually intensive; so I built some automation tools.  I designed a DSL that I could use to implement the tests. And a GUI+runner that could be used to run the tests and report the results. This saved me a lot of time so that now I could write more tests and test more releases.

The third project was a little less satisfying but it allowed me to continue my contract... and it was the tipping point for the fourth and final project. I expanded on the 2nd project so that I was able to simulate visible-light barcodes, infrared barcodes, laser scanners, magstripes, printers, and a keyboard. I added software downloads, automation, and more human factors. Now when the development team handed me a new release I could perform a complete regression in a fraction of the time.

In order to turn this into an internal product, I worked with the engineering team to bundle and replicate the test harness so that I could test 4 computers per test machine; and we assembled 4 such devices. Later the devices were shipped to the manufacturing facility so that they could be used in the FVT (functional verification testing) stage of the manufacturing process, so that boards could be tested after they came off the line.

That was the best project I ever worked on.

Apple Payments - explained

The complete Apple Payments strategy is a bit of a blur. Here is a short list of the different types of payments and the Apple product behind each.

  • Apple iTunes is a merchant
  • Apple GiftCard is a closed network issuer
  • Apple AppStore is a merchant and merchant acquirer
  • Apple Pay is a card wallet
The only thing missing here is an Open Network Issuer.

What is a Docker Sidekick?

I'm having trouble getting from sidekick to ambassador and back. The problem is clearly in the container networking wheelhouse, and progress seems to be wicked fast, with new tools and idioms being published very rapidly. But here's what I see:

  • you have a client/server service pair
  • each service runs in a different container
  • each container may or may not be on different hosts
  • each service is subject to crashes or network partitioning
  • coreos/fleet is acting as a broker of sorts
  • if the container restarts there is a chance that the container's IP address will change and its pair will not necessarily be notified
  • the sidekick service is paired with each service in order to maintain the link.
  • one shortcoming of the sidekick service is that it depends on being able to quietly update the config. The best example is the nginx example. (impedance mismatch)
By contrast
  • this example breaks down when one of the services might be a user interface like a redis console.
  • The ambassador pattern uses a special purpose sidekick pattern which corrects the impedance mismatch.
  • where in the sidekick pattern the services seem to model the same client/server POV as its pair,
  • in the ambassador model the service pairs are servers that implement a dynamic network proxy in order to repair the torn network fabric.
I might have mangled the vocabulary but this fits in my mental model.
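
As a concrete sketch of the sidekick half of this, the Go loop below periodically announces its paired service in etcd with a TTL, so the registration evaporates if the sidekick (or its host) dies. The details are assumptions for illustration only: a local etcd exposing the v2 keys API on 127.0.0.1:4001, and a made-up key and host:port for the paired container.

package main

import (
    "log"
    "net/http"
    "net/url"
    "strings"
    "time"
)

func main() {
    // Assumptions for illustration: a local etcd v2 keys API, and a paired
    // container reachable at this host:port.
    key := "http://127.0.0.1:4001/v2/keys/services/web/web1"
    address := "10.240.0.2:8080"

    for {
        // Re-announce with a 60 second TTL; if we stop announcing, the key
        // expires and consumers drop the dead backend.
        form := url.Values{"value": {address}, "ttl": {"60"}}
        req, err := http.NewRequest("PUT", key, strings.NewReader(form.Encode()))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

        if resp, err := http.DefaultClient.Do(req); err != nil {
            log.Printf("announce failed: %v", err)
        } else {
            resp.Body.Close()
        }

        time.Sleep(45 * time.Second)
    }
}

An ambassador would then sit on the consuming side, watch the same keys, and proxy traffic to whatever address is currently registered.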

Sunday, September 14, 2014

Making technology choices is easy; being successful is hard

For about the last 6 to 12 months I have been investigating Docker. In that time I have seen a number of applications and frameworks pop up as the land rush progresses. As the number of supporting tools and frameworks increases, the choices are harder to make. The speed of change within the community means that by the time you have completed your proof of concept it may not have any value at all.

A week ago I started work on a workshop that included Docker and CoreOS. By the time I decided to stop writing more text I had accumulated about 4 to 5 hours of material. Every time I tried to take the simple hello world app to the next level I was constantly met with friction from the different frameworks I was evaluating.

Don't get me wrong, there are plenty of quality tools and frameworks; however, no simple hello world app is going to provide enough depth to either guarantee or suggest success. A properly organized workshop and proof of concept will not only provide sufficient documentation for the teams to adopt, but also provide an opportunity to evaluate the edge cases.

It is not sufficient to make this sort of decision based on intuition alone, and sophomoric intuition is not up to the task.

Friday, September 12, 2014

30sec review of Panamax for Docker

It's a polished interface. The widgets are smooth and complete. Sadly it is not really good for development. The expectation is that the containers you need, probably under development, are already on some repo. So the whole (source) ---> (Dockerfile) ---> (build) ---> (commit) ... pipeline is missing. Additionally, there is no console to show where containers are located in the cluster, if you want to operate a cluster.

Way too immature.

Thursday, September 11, 2014

5 minute Reflection on Docker Frameworks

[UPDATE] I spent nearly 4 hours this evening trying to get Kubernetes to run the sample application. I finally got the container running, but I was never able to get the client to connect. While I was drowning in 10-15 virtual machines it became clear to me that it's not ready for prime time. It's also clear to me that there are too many differences between host OSes and the tools available. In some installs it was easier to use my laptop and access Kubernetes via a tunnel, and in some configs it was easier to use a single node. As I look at CoreOS and Fleet I wonder why I would need Kubernetes. I know that the intent is to provide a line of demarcation between the hardware and the application, but it's not delivering. I might have better success with plain old Fleet and X-Fleet configuration and some sidekick configs.

I've been experimenting (more than play and less than full-time production) with CoreOS and Docker for a number of months now. I was lucky enough to give a short workshop on both. Having to deal with the sidekick and ambassador patterns left a very bad taste in my mouth. I know that the teams are working on this but it was definitely something that makes Docker microservices less plausible.

Now there are multiple frameworks that are supposed to be addressing this.

  • Mesos+Marathon - Apache + JVM (argh!!!)
  • Diego - Cloud Foundry (argh!!!)
  • Kubernetes - too much alpha code (argh!!!)

And then there are some libraries...

  • rudder - virtual networking, meant for Kubernetes but still alpha (argh!!!)
  • weave - virtual networking, who are these guys?
One of the dominant concerns is the container size. My next stop is looking at stackbrew for a tiny base image.



Tuesday, September 2, 2014

New frontier of personal privacy

How many times have you been asked for "mother's maiden name", "father's middle initial", "grandparent's maiden name", and my favorite, "what street did you live on in 6th grade"?

These questions seem harmless, but in the hands of a network graph researcher, a government, or a bad guy... they can stitch together your whole life. It's only a matter of time before anyone can model anyone else's life in order to answer random security questions.

Sure, this is pessimistic. But how many times have we heard of this happening already? Far too many times.

Monday, September 1, 2014

Choosing the right programming language

Choosing the right programming language for your next killer application could be harder than you think, as there are a number of dynamics out there that represent the facts, and bike-shedding opinions that represent the fiction.

I recently read two articles that put things into perspective. The first was TCL for network programming and the second was Boeing's 777 is 99.9% Ada.

The thesis of the TCL article comes down to this: TCL is not going to be overrun by a 100 Mb network. And the thesis for Ada: getting all of the developers "working together" (if I remember my Ada, it has a strong producer/consumer contract) and built-in testing.

There are a lot of modern and legacy programming languages to choose from. No one language (or framework) is truly better than the others. They all have their warts.

  • package management
  • version skew
  • tabs vs spaces
  • functional vs procedural vs object-oriented
  • mindshare
  • cross platform
  • interpreted vs compiled
  • static vs dynamic linked libs
  • type system
  • dynamic declaration
  • testing
  • compile time
  • compile-time vs runtime dependencies
  • installation magic
  • libraries (CPAN is the aging Rock Star but still rocks hard)
I'm certain there are other qualities. This was not meant to be exhaustive.

My current go-to language is Google's Go and there are a number of reasons:
  • static compiled
  • support for Linux, BSD, Darwin/OSX, Windows along with capable cross-compiling
  • good standard library
  • testing framework
  • fast compiler/tool chain
  • no need for autotools
  • strong CI tools like drone and travis
  • I really like being able to transcode messages using structure tags (see the sketch after this list)
  • concurrency and channels are cool but not necessarily awesome
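
To show what I mean by transcoding with structure tags, here is a minimal sketch: one struct carries both json and xml tags, so a message can be decoded from JSON and re-emitted as XML with no hand-written mapping code. The type, field, and tag names are made up purely for illustration.

package main

import (
    "encoding/json"
    "encoding/xml"
    "fmt"
)

// One struct, two sets of tags; the encoders do the field mapping for us.
type Event struct {
    XMLName xml.Name `xml:"event" json:"-"`
    ID      string   `json:"id" xml:"id,attr"`
    Kind    string   `json:"kind" xml:"kind"`
    Payload string   `json:"payload" xml:"payload"`
}

func main() {
    in := []byte(`{"id":"42","kind":"click","payload":"hello"}`)

    // Decode the incoming JSON message...
    var e Event
    if err := json.Unmarshal(in, &e); err != nil {
        panic(err)
    }

    // ...and transcode it to XML using the same struct.
    out, err := xml.MarshalIndent(e, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

The same struct works in the other direction too, and the trick extends to any encoder that understands struct tags.
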
I've complained about Python 3 and Perl 6 in the past, but I still like them both; however, I think the next app/tool should be in TCL. (ActiveState could not afford to continue its development if there was no market for it.) I would also add that TCL was the language that we used to develop the command and control code for the SnapGear brand of firewall/routers.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...