
Command line parsing in Go

There are a number of CLI libraries out there. Of course there is the stdlib package, flag, and then there are plenty of third-party alternatives. Cobra is an interesting case because it takes the controller approach demonstrated in this article. The author's claim has at least two parts: [1] the CLI behaves in a way consistent with the go tool, and [2] it supports nested commands.

The challenge for any CLI is that there are so many opinions about what is easy or best. The go tool and the docker CLI look a lot alike. And while the Go authors have indicated that it is idiomatic to do as much as you can with the stdlib, Cobra could be an exception.

I decided to review the code in order to get a handle on the author and the implementation. If the author had simply wrapped the flag package I would probably ignore the project and implement my own variation on the controller. The code seems nicely documented; however, it is commonly argued that better function and attribute names make all but the simplest comments unnecessary. The comments also do not appear to be lint-approved. All of which makes the code harder to read.

While the controller mechanism is a little smoke and mirrors, Cobra does come at a cost. One need only look at the Go source for a good counterexample. Here is the code that the go tool uses to parse the command line and dispatch a subcommand. It's only a few lines:

        for _, cmd := range commands {
                if cmd.Name() == args[0] && cmd.Run != nil {
                        cmd.Flag.Usage = func() { cmd.Usage() }
                        if cmd.CustomFlags {
                                args = args[1:]
                        } else {
                                cmd.Flag.Parse(args[1:])
                                args = cmd.Flag.Args()
                        }
                        cmd.Run(cmd, args)
                        exit()
                        return
                }
        }

It's not that Cobra is bad, but it's just a CLI. There is no need to import a library for something you can control yourself in a fraction of the code.

UPDATE: Just as I was putting this idea to rest, Gopher Academy posted another article, and this time it made a little more sense than Cobra. The Viper toolkit takes the Cobra APIs to another level. This time the attributes can be assigned in various ways:

  • setting defaults
  • reading from yaml, toml and json config files
  • reading from environment variables
  • reading from remote config systems (Etcd or Consul)
  • reading from command line flags
  • setting explicit values

The only problem with this strategy is that it implies the programmer has no idea which mechanism the devops team will use to deploy the application in the first place, and that is a poor excuse for providing so many options.

While I have been showing that Cobra and Viper are non-essential packages, I do have an example of one that makes perfect sense. This is an article on git2go. It's a package that gives you API access to git, and it is right in the sweet spot.

UPDATE: I've been working on a Go version of flock. There are a number of challenges with the flag package. The first is flag aliases and flag groups. Aliases might look like -c or -command or arg(N); groups might be -s for shared and -x for exclusive. flag has no algebra to support this sort of specification, putting the burden on the programmer; these truth tables can get very complicated to design, implement, and test.
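The alias half of the problem does have a well-known stdlib workaround: register two flag names against the same variable. The group half has no such trick; you check it by hand after Parse. A sketch of both, assuming flock-like -c/-command, -s, and -x flags (the function names here are mine):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// newFlags registers -c and -command as aliases by binding both
// names to the same string variable, and declares the -s/-x pair
// that must later be checked as a mutually exclusive group.
func newFlags() (fs *flag.FlagSet, command *string, shared, exclusive *bool) {
	fs = flag.NewFlagSet("flock", flag.ContinueOnError)
	command = new(string)
	fs.StringVar(command, "c", "", "command to run")
	fs.StringVar(command, "command", "", "command to run (alias of -c)")
	shared = fs.Bool("s", false, "request a shared lock")
	exclusive = fs.Bool("x", false, "request an exclusive lock")
	return fs, command, shared, exclusive
}

// validate enforces the group rule by hand, since flag cannot
// express "-s and -x conflict" declaratively.
func validate(shared, exclusive bool) error {
	if shared && exclusive {
		return fmt.Errorf("-s and -x are mutually exclusive")
	}
	return nil
}

func main() {
	fs, command, shared, exclusive := newFlags()
	if err := fs.Parse(os.Args[1:]); err != nil {
		os.Exit(2)
	}
	if err := validate(*shared, *exclusive); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Printf("command=%q shared=%v exclusive=%v\n", *command, *shared, *exclusive)
}
```

With two booleans the check is one line; the complaint in the paragraph above is that each new group multiplies the combinations you have to validate and test.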

