Wednesday, December 31, 2014

Lots of DVCS angst

Recent articles covering GHTorrent, github, and leaked AWS keys made me cringe. As much as I like bitbucket, github, launchpad and the others, I'm scared that a quick slip, even an accidental one, could give away the keys to the kingdom.

I think that, whatever the circumstances, your code has to be in-house and private. 



Fossil is great because backups are as simple as copying a single SQLite file. It also includes a wiki, an issue tracker, a CLI, and a web GUI. The binary is both client and server, and it is available for the major operating systems. 

Gogs is git with a web wrapper. Git does, however, have the advantage of many proper client apps: Tower, github, tortoisegit, sourceit, and many more. 

My first choice is fossil as it feels the most sensible. 


Unrelated to DVCS there is ngrok. It's a nifty little project but there are so many risks: (a) it is its own man in the middle, (b) it captures and can replay HTTP requests, and (c) since you might be using it as a phone-home mechanism it might let a little too much information through. And then there are its little cousins GoPee (India) and Hyperfox (Mexico).

The answer might be ephemeral connections. Ephemeral connections make knowing the actual credentials almost meaningless.

Interesting Paul Graham Quotes

Paul Graham is not my favorite, and Y Combinator even less so, but he has written and said things that have caught my attention.
"We knew that everyone else was writing their software in C++ or Perl. But we also knew that that didn't mean anything. If you chose technology that way, you'd be running Windows."
This is actually very telling. Another article I read described how computer chess programs are designed to maximize the number of future moves; the same idea applies to selecting languages and frameworks.
"When you choose technology, you have to ignore what other people are doing, and consider only what will work the best."
This goes without saying. If you import or incorporate too much 3rd-party code you get their opinions, their cruft, and their design concerns instead of your own.
"For one thing, it was obvious that rapid development would be important in this market."
This is too general and obvious. It's important to all businesses.


from exaggeration to resumes

"I've told you a million times to stop exaggerating."
I do not remember who said it or when I first heard it, but there it is.
"You drive on a parkway and park on a driveway."
Another quote that fits. But the ultimate example was an acquaintance who posted a picture of a computer kit with a statement to the effect of "my friend built a computer". In actuality she had assembled the computer, not built it.

As for resumes, it's important to be clear about what you actually designed, implemented, coded, tested, etc. A resume full of exaggeration will be discarded by even the most junior reviewer.

PS: I also wonder about directors and producers.

Wednesday, December 24, 2014

Kindle Paperwhite Trouble

I created a collection on my iPad's Kindle application.  I also populated the collection with 11 books and periodicals. But wouldn't you know it... it would not sync with my paperwhite.

The first problem was that I had set the parental controls at some point: [a] I had meant to prevent anyone from stealing my Kindle and downloading everything they wanted, so I protected the store; [b] I had also protected the cloud, because that's where all the personal content is backed up; [c] I had forgotten the passcode. CRAP!

[a] is easy enough: the parental controls are granular enough to separate the store from the cloud. But since I had also protected [b] the cloud, I was not able to navigate to the cloud view even though all the help and links hinted that I was supposed to be able to. Of course it was locked.

So locked meant I was going to get misleading text from Amazon and that the help was going to be no help at all. The giveaways should have been [i] the fact that the "cloud" menu option was grey and could not be selected, and [ii] the small lock at the top center of the menu indicating that parental controls were active. However, since the "collection" menu option was visible and some of the other menu options were also available, this was silly.

Of course you can always create a collection on the device and upload it later. But that's probably where I went wrong. Still, a simple "the cloud option is locked" notice in any of the help screens would have been useful.

Finally, because I had lost my passcode I had to perform a complete reset of the device. As of right now the device has been reset, the collection downloaded, parental controls set to [a] only (hoping that I do not have anything bad in the cloud), and I have a new passcode that I think I'll remember.

PS: I would provide the link used to reset the device but I think I'll let you google it yourself.  It was not particularly difficult to locate but I hate the idea that my kids might read this some day. Of course this means the device would be wiped and not that anyone would read my stuff.

iPhone disable AutoStart iTunes and iPhoto

I like iTunes and iPhoto. I'd prefer that Google did a better job, and they may in the future; meanwhile Apple might figure out that my MacBook Air does not and will not have enough storage, and that my wife's 1TB of storage is no longer enough. Using an external drive is simply not efficient, and it's hard for the novice or the uninterested.

Even though I'm a tech I have no interest in splitting my photo albums and using external USB drives to hold my pictures. It's just unsatisfying. But when you consider the iCloud option ... it's just incomplete and very expensive. So that's a non-starter too.

Frankly, once I've created an iPhoto album it would be fine to store the original images at a fraction of their original resolution and then move on. I'd rather Picasa did the whole facial recognition photo/time-lapse thing instead.

So my current challenge is that whenever I plug my USB cable into my computer in order to charge my iPhone or iPad, both iTunes and iPhoto pop up. I suppose if I worked in a call center and a customer was calling, then a screen pop would be useful. This is not.

Here is an article where I learned to disable the iTunes pop.

And here is where I learned to disable iPhoto pop.

Eventually someone is going to figure out the best way to handle these devices. The current state of things is not it.

How not to conduct an interview

I came across this interview question site and it drove me to fits. Anyone who would use this site or its questions, or develop interview questions like them, is just being stupid and lazy. The site represents four years of college and two years of ACM competitions. At the time they were fun, but they are meaningless in the context of the work I have performed over 35 years.

In my professional programming career I've only had to implement or model against a standard deviation once. And never once have I answered an interview question that was remotely related to any higher-level math. Let's face it... these questions are all derivatives of university studies, and while that was 25 years ago, I wouldn't have remembered any of them the day after graduation.

The frustration is that while I was in college we bumped chests and high-fived with the idea that university was teaching us to research and think critically. The specifics of maths, physical sciences or humanities were actually unimportant.

And now that we are on the outside looking for work, it has changed from that to "can you fit the packages on a truck". Of course that's important to FedEx and UPS, but if you are building an email client or a text editor, where is the parity? Looking at that site I have no particular favorites or raspberries... I know I hate them all.

When I interview a candidate I'm looking for just a few things:

  • aptitude - does the candidate like the field and are they likely to stick with it
  • arrogance - enough to excel but not enough to disrupt the team or quit prematurely
  • focus - watch out for the butterflies and shiny new tech.
  • lazy enough to be efficient

(**) it's better if the candidate is smarter than me too. And I favor individuals who have changed their primary language at least once.

The difficulty, every time, is trying to customize the interview so that I can make an intuitive score on these attributes. I'm not perfect, but over a career of hiring 100+ candidates I have made only 3-4 subpar hires, and 2 of those were under duress from management.

boot2docker in fishshell

I have been an on-and-off-again user of fish shell. There are a number of reasons to be wary of a new shell, let alone one that you downloaded from the net. But if you are like me, then from time to time it's OK to take a risk. And while there are some open source projects that promote their security and actually aren't secure, this one is just plain easygoing.

One of the things that sent me back to bash was poor integration with tools that were tightly bound to bash or zsh. For a time I used GVM (the golang version manager). Its scripts are complicated, deep, and manage the environment assuming it is bash. So when I started to get into my Go programming this became a serious issue. Since then I no longer care about GVM, and I'd rather have a better shell experience. bash is just fine, but fish can be fun.

So while I'm also getting deep into docker and boot2docker there was a challenge: boot2docker needs to set the environment so that the host's docker client has the configuration it needs to connect to the docker server. In bash I would execute:
$(boot2docker shellinit)
However, this does not work in fishshell. The fish equivalent is:
boot2docker shellinit | source
Now everything is right with the fish-world.

ngrok in production and other uses

I recently came across ngrok, and while the demo that I watched recommended it for development environments, I think it might have a few more use cases.

The example that was given: you have implemented some application or feature that you want to share, but the user is remote or outside the firewall, and asking the user to update his hosts file makes the process painful.

Enter ngrok.

Register with ngrok, then download their client application. Launch your application in localhost mode, then configure and launch the ngrok client. The ngrok client opens a long-poll connection to the ngrok server. You give the user a target URL to enter into his browser... and voila.

The user's browser connects to the ngrok server, which forwards the message request to the ngrok client, which then opens a connection to the configured client application. The experience is complete.
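The local end of that flow is just an ordinary localhost listener. A minimal sketch in Go (the port, path, and message are illustrative; you would pair it with something like `ngrok http 8080`, though the flags vary by ngrok version):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// response is what the remote user sees once ngrok forwards
// their request to this local process (the text is illustrative).
func response() string {
	return "hello from behind the firewall"
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, response())
	})
	// ngrok tunnels public traffic to this local port; the public
	// URL it prints is what you hand to the remote user.
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```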

What makes this model desirable is several-fold: [a] applications configured for localhost fit the just-one-executable model, [b] which in turn fits a similar docker model, [c] no firewall rules to change on routers that may need a restart to apply them, [d] no risk associated with adding firewall rules at all, and [e] it is an inside-out connection rather than an outside-in one.

It's very obvious how this works in development. It's just a way to share.

In CI (continuous integration), this solution would promote a dynamic network structure requiring minimal care and feeding, and with a touch of good luck it would not require massive DNS changes.

** I have no idea what the capacity requirements are and whether it's fast enough for production but it's exciting as part of a complete solution.

** ngrok has logging and replay features which would need to be disabled in production. Additionally, there is no mention of SSL; I presume that ngrok acts as a man in the middle, so it would be better hosted inside your own firewall.

Tuesday, December 23, 2014

Asterisk Reading List

This post is meant to be a list of references for deploying a feature complete Asterisk system that can compete with SwitchVox and provide a proper call dashboard with screen pop capability.

Reloading an asterisk server - hoping to reload without dropping the calls in progress.

Asterisk CLI

FreePBX CLI - reloading with dropping calls

Docker Asterisk Configuration, more code

FreePBX download

There is very little code written in or for Go that interacts with an Asterisk PBX. Any interaction will require CLI simulation, REST-type calls, or possibly something else.
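As a sketch of that "CLI simulation" style of integration, here is what talking to the Asterisk Manager Interface (AMI) from Go could look like. The credentials and host are placeholders, and it assumes AMI is enabled in manager.conf on the standard port 5038:

```go
package main

import (
	"fmt"
	"net"
)

// loginAction builds an AMI Login action. AMI actions are just
// CRLF-delimited key/value lines terminated by a blank line.
func loginAction(user, secret string) string {
	return fmt.Sprintf("Action: Login\r\nUsername: %s\r\nSecret: %s\r\n\r\n", user, secret)
}

func main() {
	// This dial only succeeds against a running Asterisk with the
	// manager interface enabled; credentials are placeholders.
	conn, err := net.Dial("tcp", "localhost:5038")
	if err != nil {
		fmt.Println("no AMI listener:", err)
		return
	}
	defer conn.Close()
	fmt.Fprint(conn, loginAction("admin", "secret"))
}
```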

NodeJS frameworks

In summary, I'm not a fan of the nodejs platform or the web frameworks that have evolved around it. On the other hand, I might not have a choice if someone else is throwing the switch, so I guess I should know a little something about them.

I just watched several videos covering meteor, sails, and deployd, and I have written a few express and hapi apps; this article is a good summary. Total is another candidate, although it is new and does not have much of a following. When it comes down to it, anything beyond hello world requires a lot of work, and when you get to the edge cases it's even more complicated.

The only silver lining is that meteor offers a comprehensive book and video collection called discover meteor. The next step is going to be trying to identify how all of this might integrate into a docker or rocket strategy such that real work can be accomplished at scale.

PS: I ignored a large number of frameworks. In fact, any framework that claimed to be the NG or next generation of framework was ignored immediately... It is hard to ignore a purple cow in a field of normal cows but difficult to see a purple cow in a field of purple cows. Not to mention that NG is relative to time, and without a reference point everything is NG.

Monday, December 22, 2014

Rest APIs in Go

There are a number of 3rd-party libs like gorilla and rip. They offer a regex experience in the REST encapsulation of the handler. But like the other libs I've been critical of, this too can be done in your own code. The single strongest justification is that the filters are not difficult to add and you keep the flexibility where you want it. If you want to add a POST() or GET() middleware handler, you can. Simple as that.

UPDATE - bone is a very nice little multiplexer. It basically adds Post() to the handler interface. While gorilla has a mux too, bone is considerably smaller. It's so small that creating a package for it, instead of a gist, is almost a shame.

etcd 2.0 RC1

I like that the CoreOS team has released etcd 2.0 RC1 and a container format, and I like it even more that they included a rocket version too. I cannot wait for rocket to be installed by default next to Docker. I wonder if a bridge would be useful or hazardous?

something to try in erlang

I have been working on a flow-based programming implementation written in go. Part of what would make it successful is that all of the nodes would exist in one package... even main. In this way all nodes are global.
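As a hedged sketch of the single-package idea: every node is a globally visible function in one package, wired together with channels. The node names and wiring here are illustrative, not my actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// upper is one node: it reads from its input port, transforms,
// and writes to its output port, closing downstream when done.
func upper(in <-chan string, out chan<- string) {
	for s := range in {
		out <- strings.ToUpper(s)
	}
	close(out)
}

// run wires a feeder, the upper node, and a collector together.
func run(words []string) []string {
	in := make(chan string)
	out := make(chan string)
	go upper(in, out)
	go func() {
		for _, w := range words {
			in <- w
		}
		close(in)
	}()
	var got []string
	for s := range out {
		got = append(got, s)
	}
	return got
}

func main() {
	fmt.Println(run([]string{"flow", "based"}))
}
```

Because everything lives in one package, any node can be wired to any other without import cycles, which is the property the post is after.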

While I have yet to figure out whether there are practical limits to the number of public objects in a package, I'm also wondering whether modules are necessary in erlang. The major downfall of hot plugging is that versioning is by module only, so if you have to update multiple modules the transactional upgrade is sacrificed. But if the application were implemented in a single module, as I suggested for the FBP application, would hot plugging be more reliable?

... on xen

A few years ago I was introduced to the erlangonxen project. After the initial impression they went back to the drawing board and produced a new and impressive demo. The second demo showed tens of thousands of application instances being spun up and torn down.

In the interim I introduced myself to docker and was able to produce better numbers: EOX was deploying a new environment every 300ms, and I was able to deploy a docker container in under 40ms.

As I started to dig into EOX I determined that there was no real OS underneath it and that there were severe limits to the filesystem, something like 500 files. I still think there are some limits to the design, but they may not be as important as I first thought. First of all, it is significant that the application can be launched on demand, and it can be read-only, preventing any number of potential security problems. One can also address the multi-tenant issue by building the front-end on demand, per tenant.

Now add elixir on xen and MirageOS and things are getting hot. At the moment development typically takes place on dedicated bare metal, while companies are moving to virtual systems (public and private) like VMware, OpenStack, RackSpace, Azure, GCE and so on. And with the sudden success of Docker the entire container ecosystem is getting a much needed boost. While we seem to be moving to containers, the question is: what comes next? As I think about it, I'm starting to fall for unikernels like MirageOS.

The fact is that the hypervisor manufacturers are doing a great job of abstracting the hardware, and many of the tool makers are starting to take advantage of that standardization. Of course, a lot of people seem to have forgotten that in the bare-metal world DOS was a unikernel, so this new stuff is actually old.

What makes it interesting is the density. A moderate-sized EOX application could be a few megabytes, while an equivalent Docker container built on a base image could have a much bigger footprint. Even the scratch and busybox images have their limits.

All in all, however, I'm watching the state of things. I could find myself moving back to erlang or haskell, languages with decades of actual experience behind them, as the latest graduating class of programmers pollutes go and the new breed of system languages.

** don't even talk to me about generics

Testing MicroServices in Go

Testing microservices in Go is no different than testing microservices in any other language; Go does not add any challenges. The hardest part is simply the testing itself. Microservices are analogous to the services in a microkernel, and as such they are hard to "prove". Once you get past unit testing you can use the same test framework for integration testing (which is the closest you can get to testing the service as deployed). 

The stdlib testing package is sufficient. Some of the 3rd-party libs make some parts easier, and in some cases they can coexist, but for all the effort, sticking to the stdlib avoids conflicts and works just fine. The bigger challenges depend on whether you've decided to mock the test fixtures or perform actual transactions.
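Here is a minimal example of what stdlib-only testing looks like for a micro service endpoint, using net/http/httptest; the handler name and payload are illustrative, not from any particular project:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// health is a stand-in for a micro service endpoint.
func health(w http.ResponseWriter, r *http.Request) {
	io.WriteString(w, `{"status":"ok"}`)
}

// checkHealth exercises the handler exactly the way a stdlib
// test would: record the response without opening a socket.
func checkHealth() string {
	rec := httptest.NewRecorder()
	health(rec, httptest.NewRequest("GET", "/health", nil))
	return rec.Body.String()
}

func main() {
	fmt.Println(checkHealth())
}
```

The same recorder technique scales from unit tests up to in-process integration tests of a whole mux.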

Docker, gitreceive, and bash are easily integrated so that you do not need the complexity of Travis, Drone, TeamCity, Jenkins, or any of the other CIs. The hardest part is the build radiator.

Command line parsing in Go

There are a number of CLI libraries out there. There is the stdlib package, called flag, and then there are plenty of 3rd-party alternatives. Cobra is an interesting one because it takes the controller approach demonstrated in this article. The author's claim is at least two-part: [1] the CLI behaves in a way consistent with the go tools, and [2] it supports nested commands.

The challenge for any CLI library is that there are so many opinions as to what is easy or best. The go tool and the docker CLI look a lot alike. And while the go authors have indicated that it is idiomatic to do as much as you can with the stdlib, Cobra could be an exception.

I decided to review the code in order to get a handle on the author and the implementation. If the author had simply wrapped the flag package, I would probably have ignored the project and implemented my own variation on the controller. The code seems nicely documented; however, it is commonly argued that better function and attribute names can make all but the simplest comments unnecessary. Also, the comments do not seem lint-approved. All of which makes the code harder to read.

The controller mechanism is a little smoke and mirrors, and cobra does come at a cost. One need only look at the go source for a good counterexample. Here is the code that the go tool uses to parse its command line; it's only a dozen or so lines:

        for _, cmd := range commands {
                if cmd.Name() == args[0] && cmd.Run != nil {
                        cmd.Flag.Usage = func() { cmd.Usage() }
                        if cmd.CustomFlags {
                                args = args[1:]
                        } else {
                                cmd.Flag.Parse(args[1:])
                                args = cmd.Flag.Args()
                        }
                        cmd.Run(cmd, args)
                        return
                }
        }

It's not that cobra is bad, but this is just a CLI. There is no need to import a library for something you can control in a fraction of the code.

UPDATE: Just as I was putting this idea to rest, Gopher Academy posted another article, and this time it made a little more sense than Cobra alone. The Viper toolkit takes the Cobra APIs to another level. This time the attributes can be assigned in various ways:

  • setting defaults
  • reading from yaml, toml and json config files
  • reading from environment variables
  • reading from remote config systems (Etcd or Consul)
  • reading from command line flags
  • setting explicit values
The only problem with this strategy is that it implies the programmer has no idea what mechanism the devops team is going to use to deploy the application, and that is a poor excuse for providing so many options.

While I have been showing that Cobra and Viper are non-essential packages, I have an example of one that makes perfect sense. This article covers git2go, a package that gives you API access to git, and it is right in the sweet spot.

UPDATE: I've been working on a go version of flock. There are a number of challenges with the flag package. The first is flag aliases and flag groups. Aliases might look like -c, -command, or arg(N); groups might be -s for shared and -x for exclusive. flag does not have any algebra to support this sort of specification, putting the burden on the programmer, and these truth tables can get very complicated to design, implement, and test. 
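A hedged sketch of working around both problems with the stdlib: aliases by registering two flag names against the same variable, and a group rule checked by hand after Parse. The flag names mirror the flock example above; the parse helper is mine, not part of any library:

```go
package main

import (
	"flag"
	"fmt"
)

// parse handles a -c/-command alias and enforces that the -s and
// -x group members are mutually exclusive -- the checks flag
// itself has no algebra for.
func parse(args []string) (cmd string, shared, exclusive bool, err error) {
	fs := flag.NewFlagSet("flock", flag.ContinueOnError)
	fs.StringVar(&cmd, "c", "", "command to run")
	fs.StringVar(&cmd, "command", "", "command to run (alias of -c)")
	fs.BoolVar(&shared, "s", false, "shared lock")
	fs.BoolVar(&exclusive, "x", false, "exclusive lock")
	if err = fs.Parse(args); err != nil {
		return
	}
	if shared && exclusive {
		err = fmt.Errorf("-s and -x are mutually exclusive")
	}
	return
}

func main() {
	fmt.Println(parse([]string{"-command", "ls", "-x"}))
}
```

Even this small example shows the burden: every alias and every group rule is a hand-written line, which is exactly the truth-table problem described above.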

Sunday, December 21, 2014

Vendoring and dependencies

I'm not so sure that vendoring is the way to go. APIs are a contract between the parties and should be immutable once they achieve production status. The APIs should also be layered away from the underlying processes or business logic. 

While not everyone believes in this contract style, you can and should impose that layer yourself. That way your application will not experience a total collapse when there is a 3rd-party change. 
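A minimal sketch of that layering in Go: the application owns the interface, and the vendor's client hides behind an adapter. The Store interface and the memStore stand-in are illustrative; a real adapter would wrap the 3rd-party client:

```go
package main

import "fmt"

// Store is the contract our application owns. Callers depend on
// this, never on the vendor's types, so an upstream break is
// absorbed in one adapter instead of everywhere.
type Store interface {
	Get(key string) (string, error)
}

// memStore is a stand-in adapter for illustration.
type memStore struct{ data map[string]string }

func (m memStore) Get(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", fmt.Errorf("no such key: %s", key)
	}
	return v, nil
}

func main() {
	var s Store = memStore{data: map[string]string{"a": "1"}}
	v, _ := s.Get("a")
	fmt.Println(v)
}
```

Swapping vendors then means writing one new adapter that satisfies Store, with no changes to the callers.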

Vendoring can be expensive during the build step, it's expensive on the VCS, and it requires considerable effort to update. In the end it'll come down to your testing anyway. 

Saturday, December 20, 2014

Response to generics in golang

For the time being it seems that the golang authors are set against generics. Phew! There have not been any new arguments, only squeaky wheels... If the golang authors ever implement generics, I'll slip back to erlang. 

"Yes it's annoying to rewrite boilerplate"

If you believe that statement is the argument for generics, then turn in your diploma and clean out your desk. First of all, how much code could that be? Second, isn't that what you get paid for? Added complexity and time in the compiler, for what? Just go watch the intro ocaml video. 

Choosing the right host for your docker containers

First of all there are a lot of choices; however, they come in two varieties. The first, and probably the most common, is the general-purpose Linux distribution. So long as you have a modern kernel with the requisite kernel modules, Docker is likely to work. (FreeBSD's compatibility mode is an API layer and not kernel virtualization.) The second type of host is the special-purpose or dedicated host.

To put it plainly, the general-purpose Linux distro is best utilized as a development environment, so long as all of the other components your production environment depends on are present (e.g., a dependency on etcd or geard).

On the other hand the dedicated distros work equally as well as a production environment and development environment (without the desktop). Here is a list of the hosts.
  • CoreOS - supports both Docker and Rocket
  • ProjectAtomic (Fedora, CentOS, RedHat flavors)
  • Ubuntu 'Snappy Core' - I'm not sure if that's a project name or the brand.
  • Boot2docker - more of a development environment for Windows and OSX users
The Docker and CoreOS teams have been trading quips lately. In the end I have no idea exactly what's going to happen. Both teams have great ideas and their execution is enviable. Where I struggle: if Rocket succeeds and grabs a big enough market share, CoreOS is currently alone as a host provider for it. If Docker continues its momentum, will it ever reach a level of stability low-risk enough that no one has to fork an API layer? And will Docker ever encroach into the host OS market? (CoreOS is awesome in that space.)

At this point it might come down to the orchestration and scheduling systems, and others (maybe even home-grown). What's interesting here is that there are several VPS offerings that are just containers. This in itself is curious; just look at Joyent.

Wednesday, December 17, 2014

"A Formula for the Number of Days in Each Month"

Here is the function in go:

package main

import (
	"fmt"
	"math"
)

// f returns the number of days in month x (1-12), ignoring leap years.
func f(x float64) float64 {
	return 28 + math.Mod(x+math.Floor(x/8), 2) + math.Mod(2, x) + 2*math.Floor(1/x)
}

func main() {
	for i := 1; i < 13; i++ {
		fmt.Println(i, f(float64(i)))
	}
}

The code is here if you want to run it now.

Monday, December 15, 2014

Is there a plan for #Plan9

What I know is that I don't know. For a while now I have noticed that the Golang team has been building versions of Go for both ARM and plan9. The ARM version makes perfect sense, as there is a group that believes in Go on android. But Go on plan9?

Without making any value statements about plan9, it has not had a release since 2002. I posted on twitter and I emailed Rob Pike; the responses did not seem very enthusiastic. Rob directed me to the mailing lists, and twitter indicated the port was homegrown.

My takeaway from some comments made in response to a Tanenbaum letter in 1992 is that Plan9 might be a perfect OS to absorb into your stack if you have the cash to staff it. It might even stave off the need to head into container land. (I need more evidence to prove this, but I am thinking about Ford's decision to take on QNX instead of Windows for their next generation.)

Sunday, December 7, 2014

First look at CoreOS Rocket

I just completed an internal blog post as part of a Docker/CoreOS introduction, and at the same time I decided to read the available Rocket documentation. While I have not executed the examples, two things caught my eye: [a] the target application was a fully baked, statically linked Go app, and [b] while the manifest file indicated it was a Linux container, it said nothing about the actual OS.

This is actually an interesting idea since [a] most containers are just running a user app or some single-purpose service, and [b] that service probably does not need anything more than just itself. So why on earth would I need all the cruft associated with selecting an OS and version?

When I consider the workflow, it seems to be missing some tools that the CoreOS team has deferred to Docker and/or the host. It's not even clear whether the base CoreOS is base enough to build Rocket container instances. And as such, when are they going to establish that trust?

Friday, December 5, 2014

Docker vs Rocket

The Docker - Rocket discussion is about to get into full swing, but in fact the conversation started a long time ago. Without getting into a debate on the subjective qualities of each project, let's discuss the facts.

Docker is currently the leader in the Linux container market, with its acquisition of the fig project (now called compose), the implementation of libcontainer (departing from the true LXC container), and swarm for clustering containers. While the registry is open source, Docker offers several SaaS solutions from free to enterprise.

CoreOS started as a bare-metal OS meant to be immutable, much like ChromeOS, consuming Docker containers as its distributable unit of work. The CoreOS team developed etcd and fleetd to support Docker clusters. Multiple projects like Deis and Kubernetes were implemented and took advantage of the structure of CoreOS. Frankly, when you have a heavy host OS plus containers, you have many multiples of systems that need maintaining; CoreOS, with its tools, makes this so much easier. CoreOS also has an enterprise control program for managing the host updates.

As for Docker and Rocket... Docker is clearly encroaching on the CoreOS domain and CoreOS is responding in kind. I suppose if I were an insider I might know more about the exact timing, however, I don't think it matters much. What is interesting to me is that CoreOS is moving toward a trusted environment. This is of critical importance to me as I'm thinking about the best possible design for implementing an HSM (Host Security Module).

** Not to be confused: projectatomic is also interesting, and is just far enough along to warrant comparison.

Only time will tell who the winner is going to be. It should be no surprise if we come to find out that Docker's APIs and libcontainer were premeditated to protect their assets in a way that is opposed to the open source from which they were spawned. As for CoreOS, I think they are already in the sweet spot. They are likely to be able to support both Rocket and Docker (maybe not at the same time), but until Docker deploys a proper OS or partners with someone else, CoreOS will have at least one leg up.

the golang hotspot

Google's Go has a great many killer features. One in particular was called out by Derek Collison, founder of Apcera, at a conference this year: he said something to the effect that Go is excellent because deploying an application is as simple as copying the executable [all the dependencies are baked in].

There is nothing untrue about the statement; however, when you're deploying webapps with many hundreds or thousands of static artifacts (javascript, css, html, and template versions of the above), packaging all of those artifacts with tools like go-bindata is unusable.

There's not much I can say to justify this position, as you'll have to experience it for yourself, but converting these files into source code causes an explosion in file size.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...