Saturday, January 31, 2015

Architect, Designer, Build or Assemble.

I'm not an official Webster's representative but experience tells me:

The difference between assemble and build is the amount and level of detail in the instructions.

The difference between architect and designer is their principal priorities. The architect is primarily interested in scale and function, where the designer is primarily interested in aesthetics and function.

And where architects are further defined by scopes that blur into one another (application, system, enterprise), designers try to focus within distinct, sharp boundaries.

Apcera and gnatsd


I've watched a number of Apcera demo videos and while I'm not an expert or even a freshman user of Continuum, I can spot excellence. In fact, between the conversations I had been having with some CoreOS and DataDog team members, and what I had written about the comments made by the Phusion base image guys... the Apcera team closed the loop.

Docker and CoreOS started with the notion that the container was supposed to be lightweight. It was supposed to execute a single-purpose application and maybe a few dedicated sub-processes... and maybe ssh if absolutely necessary. And to that end the Docker team started publishing their idiomatic base images. Then along came the masses who started creating all sorts of images... and then Phusion stepped in to tell us we were all doing it wrong.

Going the Phusion route meant that the only savings generated by Docker would be the shared kernel, and so given the amount of tooling required to manage a Docker cluster vs an OpenStack or VMware... you're probably better off using the latter two.

So the Apcera team met me at the tipping point. In their gnatsd project they included a docker folder where they provided a build script and two Dockerfiles. The result of running the shell script is a runtime image that is the scratch base image plus the gnatsd server application... and did I mention I was able to build and run the whole thing on my MacBook Air running boot2docker?

All you need is to install boot2docker and git to get things started.
boot2docker init
boot2docker up
boot2docker shellinit | source
git clone https://github.com/apcera/gnatsd
cd gnatsd/docker
./build.sh
docker run apcera/gnatsd
If you're working on a Windows desktop then all you will need to install is Git Bash.
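
The magic in those two Dockerfiles is the build-then-copy pattern: one image compiles the binary, and a second image starts from scratch and adds only that binary. Roughly (this is a sketch of the pattern, not the exact contents of the gnatsd/docker folder) the runtime Dockerfile amounts to:
FROM scratch
COPY gnatsd /gnatsd
EXPOSE 4222
ENTRYPOINT ["/gnatsd"]
Port 4222 is the default gnatsd client port.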

The best part of this pipeline is that it's all self-contained and has minimal host dependencies. It does not need complex tools like Chef or Puppet and it will install and run anywhere. Uploading the gnatsd image to a private registry is trivial, and adding some code to monitor etcd and then upgrade, restart, etc. is as simple as a fleetd service.

Great job Apcera!

say what about OS X and iOS?

Watching this TechCrunch video Hands on Windows 10 I was actually impressed with Windows. It made a compelling story as I sit here typing on my MacBook Air considering my hardware refresh which should include phones, tablets and laptops. But then the reviewer made a critical mistake. Granted it was not one that I would say cancels out my interest but it does sort of challenge his credibility.

The statement was to the effect that Microsoft was unifying Windows across all three device classes (computers, phones and consoles) while at Apple there was iOS and OS X, implying that they were not the same. That final comment about iOS and OS X is simply wrong. The core of the operating system for both is exactly the same code. This was never a secret and in fact was part of the propaganda used to sell iOS to potential developers.

Now, of course, it's all about the APIs and experience... and so the same delta exists in the Windows and Apple brands. While the point is taken to mean that the interface experience is different depending on the current user mode (Windows) some things actually stayed the same and appeared to be difficult to operate. Just how many times was he going to press the start button before it actuated?

Windows 10 looked very interesting in the demo. Granted, a lot of the code appeared to be W10-ish and so it was nicely visual. Even the fullscreen start menu. The colors were cool too. Unfortunately, it's the applications. At least in the Windows ecosystem the themes seem to carry through the version changes (see Mavericks). W10 will not tell the complete story until it drops some of the backward compatibility.

Wednesday, January 28, 2015

Makefile in Go


I’ve been having an exchange with some readers about rake and Go. In the end I took a page out of the golang playbook and created my own make.go; although it was specialized for etsy/hound, it could be modified to be a more general-purpose make program. Granted, the more general purpose it becomes, the more it's like the actual make. And then there is the installinator project I have been working on, where some of this would make plenty of sense.
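
To make the idea concrete, here is a minimal sketch of the shape of such a make.go; the targets and commands are illustrative and not the actual etsy/hound script:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, wiring its output to ours, and stops on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	target := "build"
	if len(os.Args) > 1 {
		target = os.Args[1]
	}
	switch target {
	case "build":
		run("go", "build", "./...")
	case "test":
		run("go", "test", "./...")
	default:
		log.Fatalf("unknown target %q", target)
	}
}

Invoke it with `go run make.go test` and you have a build script with no dependency beyond the go toolchain itself.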

BLEET - rake and golang?

I'm having a hard time reconciling the logic that combined Rake and Golang in the etsy/hound project. Why on earth? Most systems already have make and the go authors have some tools that'll construct and assemble the code nicely.

Sunday, January 25, 2015

What a waste of a perfectly good USB drive

My family and I are planning a trip to Disney World in the next few months. In response, Disney has decided to send us a USB drive with a message "plug it in for a message" or something like that. The drive itself is a 1 GB device. On the drive is a single HTML file which, when opened, redirects your browser to the Internet where it downloads and plays a video. Cute? I'm grateful it did not ask to install any software.

Now that I have watched the video I want to reuse the drive for other purposes. In fact, since it was only one gigabyte I thought I would reuse it to install my favorite operating system. After working on the many small details required to create a USB bootable version of my favorite Linux distribution, I have come to realize that the device is partially crippled. The read times are acceptable when the file being read is very small, but using it as a bootable operating system device is extremely painful.

I have decided to buy a better USB device instead of trying to reuse this one. But it would have been nice.

"REST is not a silver bullet"

I was reading the article "REST is not a silver bullet" as published on "prismatic", and afterward I wanted to comment on the post. Commenting required registration. However, once I registered, the link in my reading list redirected me to additional registration tasks. What a waste!

The author had a number of complaints, and while they were not wrong, they were not as correct as written. The big issue is/was REST and HTTP; however, I've read a lot on the subject and most authors seem to believe that (a) HTTP is ubiquitous, well understood, and easy to implement; (b) it's trivial to add SSL; (c) there are plenty of tools for testing and debugging; (d) it is scalable and concurrent; (e) REST just gives the transaction context; (f) long-poll and web sockets can fill additional gaps.

I suppose there are plenty of reasons to hate HTTP/REST but most arguments are limited.

My own questions about deflategate

If found to be responsible for the pressure difference in the game balls... what would the penalty be? Could it or would it disqualify the New England Patriots from the SuperBowl? Would it be an immediate forfeiture? Would they advance their previous opponents?

The answer to that is a resounding no! (a) The Indianapolis Colts are not prepared to play the game and have probably been eating pizza and donuts ever since the loss to the Pats. (b) Then again, the Seahawks have been practicing for a game against the Pats. (c) All the money has been spent on everything from T-shirts to advertising based on Seahawks vs Patriots. So it's just not going to happen.

What could happen, however, is that Brady and Belichick could be benched for the big game or maybe banned for life with no hope for the hall of fame; and of course millions in fines for defaming the NFL brand.

But while everyone seems to end the question there... there are still a few unanswered questions.
  1. If you were New England, why would you wait until the second-to-last game of the season to change the balls' air pressure? This maneuver could clearly backfire. So how much preparation and premeditation was involved?
  2. Were the Colts' balls equally affected by the weather?
  3. Higher ball pressure would be required by the kickoff and field goal units. Could that be the 1 in 12?
  4. The pressure differential was reported as roughly 2 psi. That's nearly 20%. Does anyone really think that none of the players would notice the difference?
  5. There would have to have been a conspiracy, and conspiracies eventually crumble. While the team has a lot to gain from the events, the risks are clearly disproportionate. Where were the equipment coaches? Frankly, they are supposed to keep an eye on the balls to make sure someone else does not manipulate them.
  6. I would like the teams to clarify whether they use their own balls exclusively.
It's a lose-lose either way. The Patriots are actually Benedict Arnolds. As for innocent until proven guilty? The democratic process does not apply here. The NFL is a monarchy and the Commissioner is king; therefore it will be ruled from on high.

Update: and then there is the science

Saturday, January 24, 2015

VirtualBox, Vagrant and NixOS

Most Linux versions, and possibly all Unix variants, provide a customizable welcome message when logging in, and one of the things I have come to expect is the IP address in the message. This is especially useful when using VirtualBox or VMware. However, if you're running headless and you're not publishing your guests' IP addresses to a local DNS server then things are going to get a bit challenging.

One of the features I like when using Vagrant is that you can get the IP address of the current guest with a command: vagrant ip. Of course the vagrant config has to be in the cwd so that can be a challenge. And there can be more than one instance too... so bouncing between guest folders is a nuisance.

Here is a NixOS/Vagrant plugin that should be helpful. I also found this article with a couple of sample commands for VirtualBox like:
VBoxManage guestproperty get NixOS "/VirtualBox/GuestInfo/Net/0/V4/IP"
And since I also had a local-only IP address I needed to execute this to get that IP:
VBoxManage guestproperty get NixOS "/VirtualBox/GuestInfo/Net/1/V4/IP"
I suppose there could be a wildcard value to stick in there, however, it's still very difficult to infer which device is hosting which port... but it's still some good info.
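
It turns out you can also dump every guest property in one shot, which helps with the which-device-hosts-which-address puzzle:
VBoxManage guestproperty enumerate NixOS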

And then there is my new favorite command: starting the machine in headless mode.
VBoxManage list vms
VBoxHeadless --startvm NixOS
"NixOS" is the name I have my guest and it is in the list from the 'list vms' command.

UPDATE: this is not the polite way to start a guest because it (a) runs in the foreground, and (b) when the guest is shut down from inside, the command does not terminate. This is less than ideal behavior and requires investigation. There is a way to launch a guest headless from the VirtualBox GUI (shift+double-click).
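
UPDATE 2: the more polite route appears to be letting VBoxManage launch the guest detached in headless mode:
VBoxManage startvm NixOS --type headless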

Thursday, January 22, 2015

Installing NixOS - Short Version

I have been experimenting with NixOS for the last few weeks and every day I find some new reason to like it.

My first experiences were with the VirtualBox version. It was easy to use and the default configuration, with KDE, was pleasant, even though I went directly to the Konsole and then to ssh. On the list of interesting features:

  • the Nix installer - it's the foundation of the team's approach across the project
  • related to the installer are the user-partitioned installations... i.e. two users can use different versions of the same program since their environments are partitioned
  • and that the installer is transactional
  • It has its own approach to containers - which I do not fully understand but seems more like chroot or jail than it does Docker. Additional good news is that the container also works like the package manager... and you can install Docker too.
  • Then I discovered that the project includes a CI (continuous integration) server called Hydra. I don't know much about it except that Hydra also uses the same Nix ideas and, between the two, Hydra will build for Windows, OS X, and Linux.
Today I installed sshd and started to consider doing a complete install from scratch, or at least the closest thing to it. Not knowing how NixOS liked to be installed, and finding the manual a little sparse on the details, I found this article that gave me a good head start. I suppose I could just leave it at that; however, I want my own interpretation of the article in a more concise form, so here we go:
  • download the ISO image, in my case I did the minimal version [no X or desktop]
  • create a VM using VirtualBox and mount the ISO image on the VM's CDROM (I configured it for 512M and 16GB disk)
  • boot the VM
  • run some commands and do some work
    • fdisk /dev/sda
    • press "o" to create a DOS partition
    • press "n, p, 1, ENTER, +2G, t, 82" to create the boot/swap partition
    • press "n, p, 2, ENTER, ENTER" to allocate the rest of the drive for NixOS
    • make the swap partition "mkswap -L swap /dev/sda1"
    • turn the swap on "swapon /dev/sda1"
    • format the partition "mkfs.ext4 -L nixos /dev/sda2"
    • mount the formatted partition "mount /dev/disk/by-label/nixos /mnt"
    • generate the default config file "nixos-generate-config --root /mnt"
    • edit the configuration file "nano /mnt/etc/nixos/configuration.nix"
      • boot.loader.grub.device = "/dev/sda"
      • services.openssh.enable = true
      • I must have botched the user creation because it did not work as expected (see the configuration sketch after this list)
    • install NixOS "nixos-install"
    • shutdown the VM "halt"
    • power off the VM from the VirtualBox GUI
    • eject the virtual CDROM
    • add a second network adapter "host-only"
    • re-start the VM
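For the user creation that tripped me up, the declarative route is to describe the user in configuration.nix before running nixos-install. Something like the following sketch; the option names should be double-checked against your NixOS release:
users.extraUsers.myusername = {
  createHome = true;
  home = "/home/myusername";
  extraGroups = [ "wheel" ];
  useDefaultShell = true;
};
With that in place, the imperative useradd/usermod steps below become a fallback rather than a requirement.
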
At this point you are pretty close, but if you are using VirtualBox you'll have to do at least one more thing in order to be able to ssh into your newly minted system. Create a new user:
  • login as the root user. Right now the root user does not have a password
  • change the root user's password "passwd"
  • create the new user "useradd -m myusername"
  • change the user's password "passwd myusername"
  • and if the user is an admin user then you need to add this user to the admin group "usermod -aG wheel myusername"
Now you should be able to ssh into the VM from your host computer with your newly minted username. I installed vim in my user account because I like it as my preferred editor. 

At this point I shutdown and created a snapshot so that I could return to this state or create clones of my NixOS installation. I hope to find myself going through a number of use-cases shortly:
  • installing packages from the unstable channel
  • using containers
  • installing the Hydra CI
  • trying to determine the Docker play
  • determining the host upgrade workflow
  • physical drive crypto
  • golang and other compiler tech as part of the development stack
  • fossil and/or the hydra storage
It's fun to note that my first 16GB installation left me with 12GB free after the base OS and vim were installed. Based on the way the packages are installed it appears that NixOS is going to take more diskspace than a comparable Ubuntu or Fedora; however, there may be other economies that are yet to be discovered.

Wednesday, January 21, 2015

"switcher" multiplexing ssh and http over the same port

The Switcher project is an interesting implementation of a dual-protocol mux. What is even better is that it's written in Go and it's in the same class of solutions as ngrok. Keep in mind it's irresponsible, immoral and possibly illegal to tunnel through some networks... after watching this mouse computer promotional video it might be hard for some folks to draw the line or know where the line is drawn.

pub/sub message queues containing blobs

I started off with a story. FAIL. Here's the plan:
  • save the blob to a key-store using a UUID as its key. The key-store should support a TTL with callback.
  • put the UUID in the MQ
  • The MQ is always going forward
  • the transaction is only ever touched by one service at a time
  • If there is a reason to fork the transaction... then each service should replace the UUID and insert a new blob in the key-store.
It is my experience that most services in a transaction only affect a portion of the blob at a time, so copying the whole blob around is costly.

**Redis has some primitives (hashes and lists) that allow you to append to a list as the transaction moves around the system instead of trying to log-aggregate the events after the fact.
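
As a concrete sketch of the pattern, assuming Redis is the key-store (via the redigo client) and a hypothetical "txq" list stands in for the MQ:

package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"

	"github.com/garyburd/redigo/redis"
)

func main() {
	c, err := redis.Dial("tcp", ":6379")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// a random 128-bit key standing in for a proper UUID
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		log.Fatal(err)
	}
	key := hex.EncodeToString(b)

	// save the blob under the key with a TTL; Redis keyspace
	// notifications can approximate the TTL-with-callback requirement
	if _, err := c.Do("SET", key, []byte("...payload..."), "EX", 300); err != nil {
		log.Fatal(err)
	}

	// put only the key on the queue
	if _, err := c.Do("LPUSH", "txq", key); err != nil {
		log.Fatal(err)
	}
}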

Tuesday, January 20, 2015

Docker's machine project is not a CoreOS contender

I'm not sure what Solomon Hykes was thinking when he posted this:



The docker project is strong all by itself, and while there appears to be no love lost between CoreOS and Docker... the machine project from the Docker team is no contender to CoreOS. CoreOS stands on its own as an operating system. Machine, on the other hand, is a tool for provisioning Docker hosts on different platforms like Vagrant, Google Compute Engine, VMware... in fact the actual orchestration has been implemented by many 3rd parties.

The statement is clearly inflammatory and not a serious proposition. In fact the opposite is true if you read any of the Phusion posts. The headline on the Phusion website is "Your Docker Image Might be Broken". This tells me that there is a bigger fundamental problem with Docker.

Saturday, January 17, 2015

DVCS everywhere and everywhere else!!!

Some time ago I wrote about version control systems and storing secrets. And I still think it's a fool's errand because you'll never be able to prevent it; education and awareness are the only things that are going to reduce the incidents.

But now I'm thinking about the footprints left behind... when you cancel or close an email account... cancel or move a public project, team or repository.

For example I have a number of projects that I hosted on both bitbucket and github. In most cases it was more of a land grab than proper use. But in the end it was meaningless.

  1. if I cancelled the project the "brand" remains, and a squatter is going to reuse it eventually; the Google footprint remains and is likely to make them something
  2. Someone is just as likely to clone the project and use it in a social engineering way to leverage your brand reputation to do bad things
  3. in the case of emails there is a good chance someone is going to receive the emails that robots send you, which might leak personal information, passwords, or grant access to 3rd-party systems in the form of "forgot my password" challenges.
I'm fairly certain Google retires an email address permanently once it is canceled or abandoned. It's the sort of thing we should all do.

UPDATE: By DVCS I'm referring to service providers and not self hosted repositories.

Google Support

I was helping my sister-in-law recover her Google calendar yesterday when I ran out of options. I needed Google Support. And the only way you get Google Support is by paying for Google Apps for Work. Actually getting to support was novel, with just a little friction sprinkled in. While the outcome seems to be a success, it's not because of Google but due to a 3rd-party app.

Logging in as her Google Apps admin I found the support button. No matter how I got there, whether via Google search or otherwise, I ended up in the same basic place. There was a moment when I thought I was going to be able to get into the support system and at least ask whether or not I could recover her calendar, but that never happened. Everything pushed me back. So I finally got her approval to get a paid account... $5 per user per month.

  1. upgrade your account to paid
  2. login as admin
  3. press the support button
  4. press the GET PIN button
  5. the pin is good for 60 minutes
  6. call the support number
  7. enter the pin
  8. when the human answers they will confirm or record contact info that you previously entered...
At this point I recounted my problem. When the support rep repeated what I was asking back to me he said he would consult his peers and get back to me. He returned in less than 5 minutes with the following:
  1. If the calendar was deleted then all hope is lost. The calendar program clearly states that it would not be recoverable.
  2. If someone deleted individual items they could be recovered, however, that would mean using a 3rd party application from the application marketplace... which was free.
  3. Oh, and there is one piece of information missing from your account.... click and click... please select a billing plan.
While I had agreed to the upgrade it was still free for 30 days and Google had not prompted me for billing information. The words he chose for #3 were alarming. I was concerned that while I had been using the free account for years I had missed a configuration option or something that might put the whole thing at risk... NO! I'll deal with billing later.


While the calendar undelete application worked, it was not flawless. There were calendar entries that would not undelete, without explanation. Additionally, after completing any task I was prompted to buy their not-so-free backup service. I'm sure it's a great idea. Clearly Google has not seen fit to provide some of the basic application features; with a desktop application I would likely have been able to recover on my own.

As for the cost, $5/user/month is not bad and $10/user/month for unlimited disk space is even better. I'm sure that Google has plenty of metrics that prove that it's cost effective for everyone. My overarching concern is that the cloud marketplace is now supplanting my desktop marketplace and the tools are still not in parity.

Friday, January 16, 2015

This is a nice model for orchestration

This disposable code is great for creating disposable redis instances. What makes it more interesting is that if the functions were just a little more generic they might make a good model for orchestration... or as something that would work well with the installinator.

Startup in a Box

If you had to startup a company tomorrow what sorts of services, appliances and applications would you want to deploy? I have started to put together a list which I will eventually try to automate with NixOS and the Nix package manager at the core. Then I will try the same arrangement with CoreOS and Nix.

  • email server to send and receive emails
  • internal MTA
  • internal DNS
  • public DNS
  • VPN
  • LDAP
  • DVCS - preferably something based on git with support for releases
  • managed switch with vlans
  • storage for backups
  • storage for active applications
  • public & private wiki
  • public & private ticket system
  • internal & private document repository
  • fax
  • voip with voicemail and mobile support
  • chat
  • video chat
  • monitoring system / dashboard
  • CMS
  • invoice / billing system
  • general accounting
  • prod, staging, dev environments
  • internal tools
  • master index of all tools
  • MQ
  • database
  • scheduler
  • API Server
  • authentication
  • continuous integration
  • calendar
  • contacts
  • vanity website
  • FTP server (box or dropbox like)
  • fail2ban
  • dropbox
  • SNMP Server
More to come.

Thursday, January 15, 2015

Docker Base Images

The Phusion team would have you believe that all other base images are inferior and you are unsafe.
"YOUR DOCKER IMAGE MIGHT BE BROKEN" --Link
This statement is particularly troubling; not because my base image might be vulnerable, but because these guys think so little of the Docker team's ability to create base images correctly. As it turns out there are 12 base images that are considered "from the source":
ubuntu, ubuntu-upstart, debian, centos, busybox, fedora, opensuse, cirros, crux, neurodebian, scratch, oracle-linux
Phusion's baseimage is present in the Docker registry; however, the phusion user is NOT "trusted" and there are plenty of forks by users with "trust". So while their claims are appreciated in one sense, they are meritless if not incomplete.

The second challenge is the container promise. Everything I have read so far suggests that it's preferred to only have a single process running in each container. This also makes sense as it's a capacity one can quantify. But if you take the Phusion path then you can expect to manage each OS instance as you would any virtual instance... and frankly that will not scale as expected.

In a recent email exchange with CoreOS we talked about the "extras" that the Phusion team was referring to. My very very very simplified impression is that Rocket containers are an improved chroot or jail.

The Internet of Things - phonehome

During this explosion of the internet of things, the things need care and feeding. Sure, sometimes they are small and even unimportant, like a bot following a crayon line, but then there are other cases where cash registers, scales, remote printers, wifi gateways, or your toaster needs a hug.

One such system I designed used ssh, bash, and a semaphore file. The amazing thing is that it scaled well when I used OpenBSD as the ssh server. I even designed it with HA in mind such that there were two ssh servers that the remote devices could connect to. One weakness that the system has is that it's not on-demand. There is a cycle time between the device and the server.

(a) make a connection to the 'a' server and set the timeout
(b) if the timeout expired drop the connection
(c) make a connection to the 'b' server and set the timeout
(d) if the timeout expired drop the connection
(e) sleep for 5 min

And from time to time I've had to wait 20 minutes to get a connection to the device. So it's time for a new implementation, and this time I want to implement it in Go. Right now all I have done is look at the go ssh library and as many examples as I can find. The problem there is that most of the examples are exactly the same and taken from the test cases provided in the source.
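
As a starting point, here is a minimal sketch of the legacy loop in Go using the golang.org/x/crypto/ssh package; the addresses, user and auth are placeholders, and host key checking is deliberately (and unsafely) skipped:

package main

import (
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// tryServer implements steps (a)-(d): dial with a timeout and drop on failure.
func tryServer(addr string, cfg *ssh.ClientConfig) bool {
	conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
	if err != nil {
		return false
	}
	defer conn.Close()
	c, chans, reqs, err := ssh.NewClientConn(conn, addr, cfg)
	if err != nil {
		return false
	}
	client := ssh.NewClient(c, chans, reqs)
	defer client.Close()
	// ... hold the session open and wait for work ...
	return true
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "device",
		Auth:            []ssh.AuthMethod{ /* keys omitted */ },
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // verify host keys in production
	}
	for {
		if !tryServer("a.example.com:22", cfg) {
			tryServer("b.example.com:22", cfg)
		}
		time.Sleep(5 * time.Minute) // step (e)
	}
}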

In the next post I will describe the requirements.

UPDATE: This link implements something that might come before a proper terminal session. On the subject the goal of this project is not to replace the ssh client side but to provide a specialized ssh framework where the tunnels and the notification are automated for a little instant gratification.

Wednesday, January 14, 2015

BLEET - the phusion view of docker

This article makes me long for chroot or FreeBSD's Jail. I'm hoping CoreOS' Rocket addresses that issue.

Monday, January 12, 2015

Lockfiles

Inspired by `man 1 flock` I have decided to build my own flock utility in golang. After creating a public git repo on bitbucket I started thinking about the details. (a) use stdlib only (b) keep it simple (c) offer a command line and a package.

Nothing new there, but then...

Now the question becomes: should I always have a lock file? Every time I run any application should I have a lock file? Should that lock file prevent multiple instances of THIS version of the tool or any version of the tool? Or should the lock file act as a sort of semaphore indicating that there is an application of this type running?

I ended up with a few choices:
$ myapp -pidlockfile
Use the program name arg[0] and the current pid.
$ myapp -buildlockfile
Use the program name and CI build number.
$ myapp -lockfile
Use the program name to prevent multiple instances of the same application on the same machine.
$ myapp -nolockfile
Do not use a lock file.
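
For what it's worth, the -pidlockfile variant might derive its path something like this (a hypothetical sketch, not the final CLI):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// program name from arg[0] plus the current pid
	name := filepath.Base(os.Args[0])
	lock := filepath.Join(os.TempDir(), fmt.Sprintf("%s.%d.lock", name, os.Getpid()))
	fmt.Println(lock)
}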

One thing to keep in mind is that if you use fleet to launch your containers, and in turn your applications, then this feature is not really needed. It's not likely that you'll launch multiple instances... but then again it's not going to cost you anything to combine them. Also, fleet will handle the instance generation over the network instead of a single machine.

Logging theory of operation


  • Log everything you need for debugging as part of a possible post mortem on the target machine, but do not aggregate or ship the logs.
  • make certain each log entry is unique with its PID or something
  • aggregate duplicate log entries and decide on the max dupe count before writing a sentinel
  • When an actionable event occurs send a message to the monitoring server
  • When an event needs to be monitored send that event to the monitoring server immediately. This is usually an indication that the transaction is either beginning or ending; or some critical timing piece like an external service.
  • I like to perform a stack trace as the transaction progresses, storing the data locally until the transaction completes, then use a low-priority service to copy it to storage or a reporting server.
Logstash and elasticsearch are interesting tools but they are not without their challenges. Once you get to talking about scaling and capacity issues logging is only going to get worse. It never seems to improve.

Sunday, January 11, 2015

GoLang implementation of 'man 1 flock'

Linux has a utility called flock. It's pretty handy because it'll prevent the current program from running a second time. This is particularly useful when a cronjob's runtime is longer than its interval. The described flock util creates a file and then takes a lock on it. If a second instance is started, the flock function will cause the second to fail, so long as you are watching the return values.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	// open (or create) the lock file
	file, err := os.OpenFile("test.dat", os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Printf("%v\n", err)
		os.Exit(1)
	}
	fd := file.Fd()
	fmt.Printf("%x\n", fd)
	// take an exclusive, non-blocking lock; a second instance fails here
	err = syscall.Flock(int(fd), syscall.LOCK_EX|syscall.LOCK_NB)
	if err != nil {
		fmt.Printf("%v\n", err)
		os.Exit(1)
	}
	fmt.Println("sleeping")
	time.Sleep(15 * time.Second)
}

A little more needs to be done to this code, like pulling the command from the CLI and a number of other params (see the link). But at least it is possible. Note that Flock() takes the Fd() from the file; however, this value needs to be cast from uintptr to int. For the time being this is OK but it may not survive the future. There is also a challenge in that the int can be < 0.

Which came first; Hodor or the Groot?

Watching Guardians of the Galaxy, I noticed that the guardian named Groot, a humanoid-like tree, can only say "Groot". Suddenly it occurred to me that Hodor did the same thing in Game of Thrones. Now, which came first?

Unit Testing - keep it simple instead

Kelsey Hightower (CoreOS) sent the following tweet to Rob Pike (Go author).
@rob_pike: "Unit testing was driven by the dynamic language people because they had that instead of static typing." - LangNext 2014
In 1983 I wrote two custom applications. The first was a mail merge program that would take a CSV file of addresses and a WordStar document and print the merged results so that they could be stuffed and snail-mailed. The second was a warehouse inventory system for perishables.

At the time, the state of the art in debugging was the print statement, and testing was manual integration testing. Either you got the results you wanted or you didn't. I would like to say that programming today is more complicated and that we need many more guardrails to get from a proposal to a functional application, but I cannot.

I'm reminded, again, of what a Russian space engineer once told me: keep the interface simple and the internals simpler. This makes everything simpler. Simple might break, but it's simple to isolate and repair.

Mozilla Rust - box()

I started watching an Introduction to Rust video when the presenter got to the box(). According to the rust-lang reference manual:
A box is a reference to a heap allocation holding another value
And when you use the Rust stdlib there is a Box type that looks like any "generic" implementation. In Rust a box of an int might be Box<int>. And while the JDK manual describes autoboxing with a similar description, there is no actual box class or function to call; a boxed int is simply an Integer.

I turned in my java beans a very long time ago and I don't know much about Rust, but I do know that this is an awkward way to design a language. Boxing is probably a good thing for the internals; however, its existence and promotion to something that requires first-class consideration from the programmer seems less pragmatic.

While the structure is probably there in order to provide some space for the various runtime and compile-time protection mechanisms, as a systems language it does not seem to provide value. But then I'm naive to it for now.

And depending on the actual benefit I'm finding myself thinking about erlang and other functional programming languages where this sort of silliness does not exist.

BLEET - Something to like about Nix

I was searching for some comparisons with Nix (a functional package manager). I found this article that does a freshman job of describing the Nix system, which is a good start. But then the author provided a link to a bridge to a docker/nix (doc) tool. While I do not yet understand what's going on, it is curious, fancy and clean.

Pro Tip from Go Authors

Dave Cheney recently tweet this:
#golang top tip: fmt and log packages know how to print errors, prefer fmt.Printf("Oops: %v\n", err) to fmt.Println("Opps:", err.Error())
What makes this interesting is that I also saw something very similar in a live coding session. Andrew and Brad used "%v" in all of their Printf calls.

I do not have a concrete explanation; however, my intuition tells me that the Printf functions interpret the "%v" and then use some reflection and stringer-ification. This is because they seem to use it to display integers and strings. In a pragmatic way it makes sense to use "%v" so that you do not have to modify the format string and the variables, but you might have to do that anyway. At least you'd be able to change (refactor, but not really) the types without having to chase down all of their usages.
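
A quick illustration of the tip as I understand it:

package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Open("missing.txt"); err != nil {
		fmt.Printf("Oops: %v\n", err)     // preferred
		fmt.Println("Oops:", err.Error()) // equivalent here, but panics if err were nil
	}
	fmt.Printf("%v and %v\n", 42, "strings") // %v covers most types
}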

Saturday, January 10, 2015

Hacker News is excited about: Flow Based Programming

I was just trying to visit Morrison's website to see if anything had changed either in response to the Hacker News posting or prior to the posting. However, I received a status code 509, which indicates that the site has exceeded its bandwidth limits. That's both good and bad. (good) because FBP is worth looking at and implementing (bad) because its popularity is likely to lead to more misunderstanding before all the useful tools have been vetted. (noflojs, flowhub)

BLEET - Golang on Android

I see that Google is building, directly or indirectly, for Android, NaCl, Plan 9 and a few other targets. I'm looking forward to an Android toolchain. I've been told that Plan 9 is a volunteer port and I've read that NaCl is real.

Formatting a USB drive for NixOS

Recently I posted some general instructions on installing NixOS on a USB drive. One area I was having trouble with was the basic format of the drive. At the time I decided to use a SmartOS USB image and a little 'dd' magic to copy the image file from the local drive to the USB stick. Since then I have worked out the missing pieces.

With a booted OS X where the USB drive is located at /dev/disk2 ...

  • sudo fdisk -e /dev/disk2
  • fdisk> auto dos
  • fdisk> f 1
  • fdisk> w
  • fdisk> q
Now you have an MS-DOS formatted (partition table) USB drive. At this point I followed a few of the same steps to be able to connect the USB drive from the host OS X to the guest NixOS. Keep in mind that the USB stick is currently partitioned with partition 1 as the active boot partition, but not formatted, so it cannot be mounted just yet.

With NixOS booted and at a console (where the USB drive is located at /dev/sdb and the fat partition is at /dev/sdb1) ...
  • format the partition `sudo mkdosfs /dev/sdb1`
  • nix-env -i wget
  • nix-env -i unetbootin
  • mkdir -p /media/{NIXBOOT,iso}
  • mount the freshly formatted partition `sudo mount -t vfat /dev/sdb1 /media/NIXBOOT`
  • mount -o loop nixos.iso /media/iso
  • cp -a /media/iso/. /media/NIXBOOT
  • now you can follow the instructions about unetbootin from the other post
  • unetbootin works great if you have X installed. This might be a starting point: unetbootin lang=en method=diskimage isofile=./nixos.iso installtype=USB targetdrive=/dev/sdb1 autoinstall=yes

OS X mail.app and junk mail

The features that are offered are pretty traditional but not very modern. I cannot remember the last time I had to look at my junk folder to find an email that might have been classified as junk incorrectly.

In the meantime, when I tell mail.app to delete the junk mail when quitting the application, it only adds to the time it takes to shut down. If I select 1-day then the email is going to linger for 24 hours, more or less, and quite possibly leave breadcrumbs for the same.

I think I want a "never see junk" setting and "auto delete junk immediately" setting.

fishshell, boot2docker and my config

I like fishshell for a number of reasons, with my favorite being the CLI. While it does not support ^R out of the box to search the command history, it feels a lot more natural. Just start retyping the desired command and it will find and highlight the previous versions, which you can scroll and select. Of course it has the weakness that if the first few letters of the command are not the same as the next command then there is some CLI navigation gymnastics... but at least the normal operation is the normal operation.

I suppose I have gotten used to the fact that the .bashrc and .profile files are in the $HOME folder, because when fish decided to put its config file here: $HOME/.config/fish/config.fish, I could never remember the exact folder and in some cases I could not be bothered to search the docs. This is part of my green folder initiative.

My fish/config.fish looks like this:
set -x fish_user_paths $HOME/bin
set -x GOPATH $HOME
go env | grep GOROOT | sed -e "s/\(.*\)=\(.*\)/set -x \1 \2/" | source
The first line adds my bin folder to the PATH environment variable without having to do the work myself. If memory serves the function will check for dupes. This may or may not be a bad thing if order is important... and it should never be.

The second line is obvious. It's just creating the default GOPATH. This is just temporary because there is an idiomatic way to manage the GO environment. I think the GO authors intended for users to set the variable manually or as part of a make/build so that the environment was semi sandboxed and idempotent.

Then the last line. I'm attempting to extract the GOROOT from the current go command and then set the environment in the fish way. What's interesting is that Go is supposed to figure out its own GOROOT but it's not very good at it in fish. I suppose I could have hard-coded it; however, I have been known to swap between the homebrew installation of Go and the source or binary directly from the Go project. I like homebrew because it's fast and easy to install, uninstall, update and upgrade. Not to forget that they offer some feature flags for cross-platform builds and other options. I hope they stay current.

Finally, absent from the config file are the boot2docker and docker commands. I'm on the fence with this because setting the docker/boot2docker environment is easy, but it means that boot2docker has to be running in the VM and I'd rather do that selectively. I tend to move around between local and remote development. So making it optional is better.

Note that the output from the `boot2docker shellinit` command is compatible with fish and while you might use:
eval "$(boot2docker shellinit)"
in bash; in fish you'll use:
boot2docker shellinit | source
The changes to the environment are exactly the same.

BLEET - Green Folder Initiative

Green Folder Initiative is my attempt to keep my userspace $HOME folder as generic as possible so that whatever skills I master in the shell, vim, or other tools will transfer from one *nix to another with minimal relearning or adjustment. (I do not want to maintain and sync multiple $HOME configs across multiple OS', as I've already experienced incompatibility across similar OS X, BSD and Linux versions.)

Booting NixOS from USB

Creating a bootable NixOS USB has been a challenge. I tried a number of different strategies and none of them worked. In the end the missing element was that the USB drive needed to be formatted as either FAT or FAT32; and the partition table needed to look traditional (my default USB partition table had an EFI partition; right or wrong it seems to have been part of the problem). Also there are a number of differences in 'fdisk' commands between OS' and that was frustrating too.

These are not the steps but the discovery:

  1. I downloaded and created a SmartOS bootable USB
  2. Creating the USB device with the SmartOS image appeared to change the partition table in a way that reminded me of the old Windows and DOS days.
  3. Now I downloaded and installed NixOS for VirtualBox.
  4. I booted NixOS in Virtualbox
  5. in NixOS
    1. installed wget `nix-env -i wget`
    2. installed unetbootin `nix-env -i unetbootin`
    3. downloaded the NixOS image with wget
    4. inserted the USB device from above and allowed it to be recognized by NixOS (takes a couple of restarts and settings changes)
    5. executed unetbootin and used it to create the USB
  6. unmounted the USB
  7. booted my desktop with the USB
What was amazing... (i) the target system was a UEFI BIOS and so I was hopeful but not expecting much as I had a lot of problems with booting from USB with older OS'. (ii) the touch screen worked out of the box even though button clicks still need to be resolved and that might have something to do with the fact that the USB mouse was also connected.

NEXT STEPS: NixOS is the natural evolution from the Nix package manager. NixOS is a surprisingly clean operating system. What is truly amazing about it is so simple: the idempotence of the OS, based on the package manager, provides the same level of sensibility and pragmatism as CoreOS. I can see NixOS being part of a very similar strategy as CoreOS with its auto-updating channels, even though the scheduling would be my responsibility.

Friday, January 9, 2015

Commit Often

I like to commit often and the reasons I give are twofold. (i) making a change to a line or group of lines is usually based on a single notion where making changes all over the code is usually part of a feature or some idea bigger than the isolated code and committing that change is more of a roadmap than a change history. (ii) I have been known to use more than one computer at a time. That depends on how and where the development is taking place. I might like to make some changes, go home, and then pull those changes to my home computer. Having the agility to work anywhere means having this sort of flexibility.

One thing I hate is when every mini commit turns into a build and test cycle; and certainly there are a number of ways around this too... instead of commits, use a shared drive or a cloud drive. Of course this conflicts with (i). Forking the code and committing works to a point, but depending on your CI the branches might auto-build too; with any luck that is configurable.

I also like the idea of developers being able to commit and build independently so that one developer's committed or uncommitted compile will not affect the next. This too is a bit complicated to implement.

Some months ago I implemented a build system for fossil-scm using a watch script and golang. It worked wonderfully. In the coming weeks I hope to recreate that success with a Docker build agent.


Mesos, Marathon and Mesosphere

Everything is turning up Docker these days and as such there are a number of new orchestration and scheduling systems for Docker popping up. Three projects that seem to be connected are Mesos, Marathon and Mesosphere. I'm sure they are interesting projects, but in my "Level-A" stack they do not measure up.

Mesos

  • Mesos requires the JDK and as such will not install on CoreOS as nicely as one would like. Self-installer or not, CoreOS is meant to be immutable and so this is not an idiomatic idea.
  • The getting started documentation gives Ubuntu 12.04 examples and as of this writing Ubuntu is well into the 14.10 cycle with 15.04 just a few months away. They could have updated the doc. (not a good sign)
  • And without first hand knowledge other than reading the architecture documentation it appears that Mesos is something that looks like an analog of Docker in the JVM.
  • There is mention of some Hadoop Clustering and MPI. MPI is part of the cluster compute API framework and never makes it to Docker first class.
  • This link does talk about Docker and Mesos, however, it seems to be a me too.
Marathon
  • Just looking at the requirements and I feel justified.
  • Requires the JDK
  • Requires Mesos and so by extension it's a bust
  • While Scala is now a professional and fully realized language, it is still built on the JDK; I have yet to hear that they have completed a JDK replacement, and therefore it still suffers from all the problems of Java.
  • ZooKeeper calls itself a coordination server. (Doozer is a highly-available, completely consistent store for small amounts of extremely important data; and etcd is an open-source distributed key-value store.)
Mesosphere
  • Requires Mesos and Marathon... meaning it also requires the JDK
  • Looks like it will run on CoreOS, however, that seems to conflict with the JDK requirement.
  • It also requires ZooKeeper even though CoreOS includes etcd.
Without going off the deep end here... even if this tool stack is awesome, it's still a pig. The configuration is deep and has lots of moving parts. Getting to any level of excellence is going to be a serious challenge. This has got to be the worst of all the similar systems in the same space. And having speed-read the documentation, I cannot find a single killer feature worth investing in; I've already had success with many of the others.

BLEET - Oracle's JDK

There is absolutely no reason why the Java installer needs access to root in order to install itself on an OS X or Linux system. It should be very happy in the user's home directory... Just look at golang.

Thursday, January 8, 2015

OpenStack in terms of CoreOS

CoreOS is meant to be immutable so attaching to running things from the host directly is a bit of a challenge and possibly just wrong. But as I look at OpenStack, CoreOS, Unikernels and the other moving parts I'm curious to know if there is a complete and reasonable analog to OpenStack in terms of CoreOS (other Linux variations later).
  • OpenStack Cinder - Storage. 
There are plenty of Linux storage solutions for CoreOS. If you are not mounting an NFS file system from the core then you're probably creating a DataContainer and attaching to a NAS or SAN. Additionally there are some docs suggesting that ZFS can be attached using Flocker.
  • OpenStack Nova - Command Line
Currently there is no aggregated CLI or GUI for all of the CoreOS features but the same can be said for OpenStack. Nova is but one part of the equation. CoreOS performs that function mainly with the fleet list-machines command.
  • OpenStack KeyStone - Identity Server
There are a number of identity solutions for CoreOS, however, it's not actually a CoreOS function but Docker. There are OAuth and OpenID servers that can be loaded as containers.
  • OpenStack Glance - Image Server
Docker and CoreOS provide on premise enterprise registry servers.
  • OpenStack Neutron - Networking
Docker has built-in isolation and tunneling. When you get into ambassador and sidekick helpers, or into schedulers like kubernetes or Deis, it's all pretty similar.
  • OpenStack Swift - Object Storage
This could be implemented in any 3rd-party datastore or in etcd.
  • OpenStack Heat - Orchestration
CoreOS has been working on Fleet as a way to schedule containers and tasks. If that's not enough any of the scheduling frameworks are sufficient; even fig.
  • OpenStack Ceilometer - Telemetry
I might have to give this one to OpenStack. CoreOS has some tools that will capture system data as part of the enterprise setup. There are even some tools that will install and monitor the system... like DataDog. But things may not be the same as DTrace on SmartOS.
  • OpenStack Trove - Database Service
This is already addressed and from the OpenStack side of things is just a database or DB proxy. It does not appear to be a level up from the existing container based DBs. A handful of Postgres DBs or Mongo DBs would get you into the same place. 
  • OpenStack Sahara - Data processing command line
Just another Hadoop.
  • OpenStack Openstack - Command Line
I'm running out of steam... there are the apparent duplicate projects (database and command line), and with CoreOS + Docker + Fleet + Locksmith + etcd + systemd + anything else I've forgotten, OpenStack does not take me someplace special. Yes, CoreOS will run in an OpenStack or QEMU environment, but (a) once you have more than one OS running on the bare metal you start to lose some scale, and (b) it will affect the density benefits of Docker plus CoreOS on bare metal. Yes, Docker will run in OpenStack too. It might even support many of the frameworks and tools identified here... however, that's just a wrapper inside a wrapper. (Consider running Docker on Windows: there's at least one additional virtualization layer between the container and the bare metal.)

One of the missing pieces from the conversation is composition. CoreOS gets that either by implementing your own tools, scripts, sidekicks, or ambassadors or by deploying under a container framework like Kubernetes, Deis, or my favorite Apcera.

In conclusion, CoreOS is analogous to OpenStack, and although it offers choices in how you achieve the integration, it still has the power to compete in the same space. Because the tools are supposed to be containerized, you're going to get some scale and loose coupling that you're not going to get from OpenStack. If it were my money I'd be spending it on CoreOS.

Wednesday, January 7, 2015

TDD in the small?

I have not deviated from the belief that TDD is junk, or at least less useful than some dedicated QA engineers would have you believe. It has always been my position that the more complex a function or task, the more testing might be required. When I'm building transactional systems I typically build transactions to explore and explode the edge cases... partly to verify the intended functionality but also to provide a framework for regression testing.
Transactional regression testing in the payments industry is critical to success but that does not make it TDD.
 So I have the following questions:

1) how small (LOC or some complexity indicator) does a function have to be in order to justify not implementing tests?

2) how big (LOC or complexity) does the function have to be in order to warrant 100% code coverage?

One could argue the 1 & 2 are the same number but that's not the point I'm trying to make. Is there a level of complexity that does not need to be tested and a level of complexity that MUST be tested?

Monday, January 5, 2015

"Optimizing Go: from 3K requests/sec to 480K requests/sec"

480K TPS would be a wonderful problem to have, but in the meantime I think this sort of transaction volume has a limited number of contestants. Just how many companies do work at Netflix, Google or Amazon scale? Not many.

The thing to remember is that evil Big-O notation from those college days. Adding even a few LOC could cause the 480K to fall like a rock. Every bit of work you ask the transaction to perform could have a drastic effect.

For example, let's say that the 480K is just counting the number of transactions... nothing else. If you decided to do that counting a second time, you'd expect the TPS rate to be cut roughly in half. This becomes more relevant when that "second" thing is much more costly.

Why Atom-shell?

Atom is a fun editor and I've already benefited from its plugin ecosystem. The developers have extracted the part of the application that provides guardrails for building desktop applications based on nodejs, called atom-shell. On the surface this seems like a good idea, but the more I dug into it the less I liked it.

First of all rumor has it that Atom is sending events to the mothership. On the one hand it bothers me but on the other hand Atom's life is limited; I am working on my own browser based IDE.

Second, if I'm building apps then they are supposed to be mobile first and then browser... or they are server only. So there is absolutely no reason to have a desktop version of anything.

As far as I can tell there is no reason to write desktop javascript... and so no reason to use atom-shell.

The any language challenge


One of the challenges of most programming languages is the stratification of complexity. Just last night I was responding to a post which criticized K&R, and this morning it's even more clear to me that there is a stratification of code within any application; some code is almost macro-like and some code is low-level. In my opinion one thing that makes any code hard to read is when the delineation between layers gets fuzzy.

In almost every application I write there are 4 layers, and with a sensible namespace and directory structure it's easy to learn and relearn.

Over the last few years I have repeatedly fallen into a trap. On the one hand I want to make software implementation more like assembly... just bolting things together in a loose sort of way. (think Macros) and every time I get there I start thinking about embedded lua, tcl or some other small footprint language. And in some fits of insanity even a lisp variant.

When I wake from my dream I realize that what I'm thinking about is not embedding a simple macro language but strengthening the layers. This, of course, can be accomplished in the source language and requires no special structure or feature. In the wayback days we would use function and filename namespaces to provide hints. Now everything is a package or module.

** with the latest set of golang tools (version 1.4) it might be possible to take this to another level where the task is the code that is generated. That might just be another view of the same solution...

Sunday, January 4, 2015

What is my next *nix?

I have long been a fan of Slackware and OpenBSD. Both are rock solid, and while they are opinionated they are clean and reliable. Many years ago the Slackware author dropped out for about a year in the middle of the most productive time in the Linux kernel's development. Patrick was also highly skeptical of the 3.x branch of the kernel, and as a result things, including driver support, lingered.

OpenBSD has long been laser-focused on security and freedom. Theo has always held true to those principles. There was a time when certain wifi and video drivers were closed source; while OpenBSD was the most secure *nix available, it simply would not run on some of the most common hardware without substitute video and network hardware.

There is a lot going on right now and things are only getting faster. While I like Ubuntu, Fedora, CentOS, Mint and a few others... I am starting to focus on smaller OS'. For example, CoreOS, NixOS, MirageOS, Erlang on Xen and Elixir on Xen; so called unikernels.

What I like about OpenBSD is that I rarely have to patch it. The team is always backporting fixes and patches. What I really like is that it's very rare that I need to apply the patches because they are in userspace and 3rd-party projects. The main OpenBSD server is tight.

What makes CoreOS a very likable project is in their marketing. (a) mostly immutable (b) green/blue installation inspired by ChromeOS (c) enterprise ready with monitoring service (d) precooked with etcd, fleetd, systemd, and docker (e) scheduled upgrades with locksmith. (f) there are simply fewer dependencies and moving parts. (g) cloud-init. As a devops person I like that the heavy lifting of maintaining the bare metal is virtually eliminated.

NixOS is a relative newcomer, to me. Where most *nix systems rely on batch or script files and orchestration systems like chef, puppet, ansible, or saltstack, NixOS has its own package manager. This package manager addresses many of the shortcomings in the other orchestration systems and yet provides an idempotent OS instance after startup. NixOS could end up spanning the chasm between CoreOS and unikernels.

I was never a fan of Erlang on Xen, however, after watching a presentation from one of the developers at Jane Street I have a new respect. Whether my next unikernel is going to be OCaml, erlang or elixir is still to be seen. The thing to keep in mind is that most services or agents simply do not need all the cruft that a full-blown operating system provides.

Containers, NixOS and unikernels all provide interesting potential for green/blue as well as zero downtime... they all feel like the right future.

GoLang Generics and K&R Troll

Just this morning I read a blog post disparaging the definitive introduction to the C programming language ("The C Programming Language") by someone who professed to be a published author. Unfortunately he had disabled the comment section of the blog, suggesting that "we" write our own blogs. This, however, is totally disingenuous, as the blogger is actually trying to generate SEO-type traction instead of spreading ideas or starting a dialog.

While the K&R book was not meant to be a demonstration of idiomatic C coding style, it was the standard at the time... and it has evolved nicely since. At the time the language was written, the only operating system of consequence written in it was from Bell Labs. DOS, CP/M and most other PC-based operating systems were implemented in assembler. If memory serves me, all but about 70 lines of the original Unix system were written in C. C was intended to be a systems language.

This nameless blogger simply does not know enough history to justify his position.

**

The GoLang generics war is heating up and frankly I do not understand why. Go supports features which reduce the actual need for generics. Just look at the sort package in the stdlib. In a way it looks like a cross between a JavaScript callback and the original C-lang sort.
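
To illustrate the point (my example, not Pike's): the sort package takes an interface with callback-style methods, which covers much of what people reach for generics to do:

package main

import (
	"fmt"
	"sort"
)

// byLen sorts strings by length via the three sort.Interface methods.
type byLen []string

func (s byLen) Len() int           { return len(s) }
func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }

func main() {
	words := []string{"generics", "go", "sort"}
	sort.Sort(byLen(words))
	fmt.Println(words) // [go sort generics]
}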

In 35 years as a professional programmer I have always felt coerced to use generics. They never made my programs better, faster, easier to maintain or even easier to read. They promoted a blackbox mentality that object oriented programmers prefer as demonstrated by implementing access levels. Proper idiomatic Go programming will always be better than polluting the language spec with generics. (when the only tool is a hammer then all problems look like nails.)


mobile first

I'm writing this post in advance of my 10 day vacation but it seems appropriate nonetheless and by the time it posts I should have already returned.

My employers, customers and family members have always had different ideas on how I should behave on my vacations, from totally connected to totally disconnected and all points in between. I like to be somewhere in between. It's not about being "Wally Pipp" but about being responsible and taking pride and ownership.

With that in mind I'm looking at my briefcase and wondering how I'm going to carry everything.
  • ipad mini
  • iphone
  • 11" macbook air
  • 15" macbook pro
  • 2x iPad 2's with iGuy cases (talk about bulky)
and then plenty of accessories and power adapters. And carry-on for me, my wife, 2 kids and the huge double stroller. And let's not forget winter coats and whatever the rest of the family is going to stuff into the stroller. Some of these are easy to carry and some are difficult. The iPads are trivial and the TSA does not require that they be taken out of your suitcases.

Looking back this year I had promised myself that I was going to stuff everything in the cloud. And I seem to have forgotten this promise. My local storage is simply running out and I have to carry these two laptops with me. It's exhausting.

As I'm getting ready to head out the door I think the idea of mobile first for everything is the best plan. There are tools like prompt and diet coda that make things easier, but they are incomplete without an end-to-end strategy. Even chromebooks are a good idea, but they are just as bulky as any laptop; at least they make work time easier.

So as I start on a complete mobile first strategy here are the first steps....

Ditch the laptops. Install the necessary VPNs, add a bluetooth keyboard, install prompt and diet coda, pre download my code to the cloud servers and get ready for the fun.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...