Wednesday, September 30, 2015

a reason to hate docker

Once a quarter I perform some system maintenance on a cluster of Asterisk servers and their dashboards: (a) backing up log and CDR files, (b) purging some logs, and (c) repartitioning the CDR log database tables and recreating the trigger that inserts the records. I happen to use Fabric as the remote execution tool. I also have a 50-line microservice that I use to create the trigger on the fly.
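The quarterly routine roughly reduces to something like this. This is only a sketch; the paths, archive location, and 90-day retention window are assumptions for illustration, not my actual configuration, and the bottom of the script exercises the functions against a scratch directory so it can run anywhere:

```shell
#!/usr/bin/env bash
# Quarterly Asterisk log maintenance sketch (hypothetical paths).
set -euo pipefail

backup_logs() {   # $1 = log dir, $2 = archive dir
  mkdir -p "$2"
  # bundle everything currently in the log dir into a dated tarball
  tar -czf "$2/asterisk-logs-$(date +%Y%m%d).tar.gz" -C "$1" .
}

purge_old() {     # $1 = log dir, $2 = days to keep
  # delete files older than the retention window
  find "$1" -type f -mtime +"$2" -delete
}

# --- demo run against a scratch directory ---
demo="$(mktemp -d)"
mkdir -p "$demo/logs"
echo "sample cdr line" > "$demo/logs/messages.0"
backup_logs "$demo/logs" "$demo/archive"
purge_old "$demo/logs" 90
ls "$demo/archive"
```

In real use the demo section goes away and Fabric invokes `backup_logs`/`purge_old` with the per-host paths.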

So far so good.

The microservice is running on a Google Compute Engine node, on CoreOS, with Docker. GCE is fine, although I needed to punch a hole in the firewall. CoreOS is on release 711 or thereabouts, and Docker is whatever version shipped with it.

The thing is, even though this system is running well and survives reboots, it has a number of major flaws. Once a quarter I need to build and run the microservice, and it never goes smoothly:

  • a previous docker build can consume 100% of available drive space
  • I have to make sure port 9090 is passed through
  • I have to remember how to build the container
  • and then how to run it
  • since my Docker is many versions back, I have to remember which commands have changed as things moved forward
  • documentation on the project is thin and the version control is weak
  • the code was copied from server to server as I resized my servers
  • it does not use a registry
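Most of those bullets are self-inflicted and could be fixed with a one-file runbook that lives next to the code, so nothing has to be remembered. A sketch (the image name is hypothetical; 9090 is the port from above; the cleanup lines are the old-Docker idiom of removing exited containers and dangling images). It prints the steps rather than running them, so it can be reviewed first:

```shell
#!/usr/bin/env bash
# Runbook for the trigger microservice: cleanup, build, run.
set -euo pipefail
IMAGE=trigger-svc   # hypothetical name for the 50-line microservice
PORT=9090

plan() {
  # reclaim drive space first: exited containers and dangling build layers
  echo 'docker rm $(docker ps -a -q -f status=exited)'
  echo 'docker rmi $(docker images -q -f dangling=true)'
  # then build and run, restarting across reboots
  echo "docker build -t $IMAGE ."
  echo "docker run -d --restart=always -p $PORT:$PORT --name $IMAGE $IMAGE"
}

plan   # review the steps; 'plan | sh' executes them for real
```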
This project suffers from so much technical debt! The only way to repair this is to (a) complete the project, (b) remove Docker, or (c) implement a different idea that might include launching a VPS (Digital Ocean is lightweight) and doing it that way. One thing is for certain... it has to be better.

Docker Pricing - WTF

I always knew that Docker was going to charge for its product. The questions were always: when, how much, and for what? So far most projects of this sort offered the code and binary for free and then charged for support. Granted, I had no idea what that support entailed, but having been in a corporate environment where even the most expensive subscription service agreements yielded less than stellar results... it's just no fun.

So when I tried to download Boot2Docker only to find that it was deprecated and that Docker was now offering a non-free TOOLBOX, I about lost my lunch. Docker may still offer fragments of their tools, slightly crippled, or spread the FUD that the open source version is not at parity... It's just ugly. I suppose they feel the community momentum is in their favor and that the community will continue to test for free. (I'm not so sure.)

This also has me concerned about Rancher. They offer a nice package based on Docker. When they start to price things, will that include Docker, or will they be a one-stop shop?

CoreOS seems like a better option right now. Especially because their free offering is the same binary that the paid customers use. Furthermore, their +1 offering is truly a +1: Quay, Managed CoreOS. These are real tools. I think it's time to look at rkt (Rocket).

major cloud-init weakness

Now that you've wet your pants it's not all that bad.

I continue to deep dive into CoreOS, RancherOS, and Docker. I've also been testing ideas with both Google Compute Engine and Digital Ocean. And a lot of things have been going badly.

The most recent hiccup was realizing that any changes to CoreOS' configuration must be accompanied by a complete refresh of the node's cloud-config file. While I have no experience with it, I'm hoping that the enterprise CoreOS experience is better than my standalone one.

Doing a complete CoreOS refresh while there is volume sharing and the like with the host means that the cloud-init is very complicated. My development machine is configured with both Bosun and Grafana containers, and then there is my devbox. Since containers have been known to crash from time to time, I am sharing a volume with the host. Only some of that is problematic.

In some environments you might have multiple admins... and so each admin's ssh key would be installed in the cloud-config file. As a result the admins can log in a little more easily. There is an option to store a password too, but that's pretty much the same thing... The question is: what happens when that person leaves the organization? Is the OPS staff expected to redeploy every node having removed the credentials?

This, of course, is not too big of a problem when the target server is behind a firewall, as on GCE. But when you use Digital Ocean there is no firewall; you have to create your own iptables rules or firewall instance. Using a shared admin key is not an option either: various compliance bodies expect each user to have their own credentials.
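For the Digital Ocean case, one common pattern is to ship a default-deny ruleset in the node's cloud-config in `iptables-restore` format and enable CoreOS' iptables restore unit. A minimal sketch (the open ports are examples; adjust for your services):

```text
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 9090 -j ACCEPT
COMMIT
```

Everything not explicitly allowed is dropped, which at least takes the droplet out of the "under attack since deployment" category.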

PS: if the preferred method for deploying users is cloud-config, then it has to be the only method. Bypassing cloud-config would not be idempotent.
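For reference, this is roughly the shape of the per-admin section in question. Each admin's key lives in the node's cloud-config, which is exactly why removing an admin means editing this block and re-provisioning (names and keys below are placeholders):

```yaml
#cloud-config
users:
  - name: alice
    groups:
      - sudo
      - docker
    ssh-authorized-keys:
      - ssh-rsa AAAA... alice@example.com
  - name: bob
    groups:
      - sudo
      - docker
    ssh-authorized-keys:
      - ssh-rsa AAAA... bob@example.com
```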

Monday, September 28, 2015

Modern home networking

I work remote.

There, I said it, and now all those fears you think you've had about QOS, ISPs, VPS and so on are all mine. I depend on my network, and when I work out of the house I depend on it there too. [I'm selective as to which Starbucks I work from because they filter their network.]

The actual backstory goes like this. Last Wednesday there was a severe thunderstorm, and historically my ISP loses some equipment, which can take several months of convincing them to identify and replace. So when my network started to misbehave I knew exactly what to do.

I started making phone calls and was confronted with the same responses: "reboot the modem"; "WiFi or wired? WiFi? Then connect directly"; "Firewall? Then bypass and surf naked". These recommendations remind me of the old BSOD days from Microsoft ["reboot I say!"]. But I had already tried traceroute and I knew that the problem was in their network. My ISP previously used AT&T and had moved to Comcast for their backhaul... but when my network failed my traceroute indicated that my packets were going through AT&T. Clearly AT&T would discard the packets if they did not have an active contract with my ISP.

At around the same time I had also installed the latest iPhoto and had enabled iCloud Drive sync. It's well documented that iPhoto/iCloud sync will eat your network because there is no throttling. I tried disabling the sync but that had no effect. My network continued to jank. In a fit of desperation I switched my firewall from an AirPort Express back to my AirPort Extreme. [Many months ago I had converted from the Extreme to the Express because there was a VPN issue that could only be patched with hardware and Apple was not addressing it directly.]

(this is getting too long)

With my AirPort Express plugged in, the fourth-from-the-left LED blinked yellow. With the Extreme plugged in it blinked green. The Arris documentation defines that color as the speed indicator.

Right now the sync is disabled, the Extreme is installed, and that's it. I've been working fine all day. I'm suspicious of all things at this point. So much so that I've ordered Google's OnHub, because I can set QOS per client connection.

I lost interest in this story after about the first sentence... I hope it does not show. The payoff is that I hate my ISP (Advanced Cable Communications) and I can say the same for Comcast... AT&T is useless. What's a remote worker supposed to do?

Sunday, September 27, 2015

CoreOS missing features

In a recent blog post the CoreOS team presented a new feature that uses rkt and flannel to create an ephemeral network between containers on different nodes. I do not completely understand the details, but that's coming. What did catch my attention was that the cloud-config file that was demonstrated made it clear that the entire deployment needed to be CI capable with zero downtime, meaning that each node would have to be replaceable in realtime without an outage.

Since this sort of realtime migration has not been discussed in any of the docs or posts I can only conclude that it's implemented with the paid-for CoreOS tools. This is yet another area that makes selling to managers and stakeholders difficult.

PS: I was told to expect an update on CoreOS pricing but that has not happened yet.

Saturday, September 26, 2015

In the last week or so I posted that I had had some success with my CoreOS cluster.

Well, when the cluster is not doing anything except auto-updating, that's not really success. I have two service files that I've wanted to use to launch Bosun and Grafana. The problem is they will not launch from the worker. Something is missing in the setup.
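For context, my grafana.service file looks roughly like this. The image, port, volume path, and the `[X-Fleet]` metadata are assumptions drawn from the fleet docs as I understand them, not a verified working unit:

```ini
[Unit]
Description=Grafana
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker rm -f grafana
ExecStart=/usr/bin/docker run --name grafana -p 3000:3000 \
  -v /opt/grafana:/var/lib/grafana grafana/grafana
ExecStop=/usr/bin/docker stop grafana

[X-Fleet]
MachineMetadata=role=worker
```

The `[X-Fleet]` section is what's supposed to pin the unit to a worker instead of an etcd member, assuming the workers were started with matching metadata.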

When I tried the fleetctl start bosun command I got this error in return:
Error running remote command: SSH_AUTH_SOCK environment variable is not set. Verify ssh-agent is running. See for help.
When I followed the link in the message there was nothing about SSH, although there were some very vague hints. I went back to the documentation where I pilfered the image above and read it carefully. This stood out:
The cloud-config files provided with each section are valid, but you will need to add SSH keys and other desired configuration options.
It's not clear at all how to properly setup SSH and get fleet working but it's clearly important.
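As far as I can tell, what fleetctl needs is a local ssh-agent holding the same private key whose public half went into the cluster's cloud-config, because fleetctl tunnels over SSH. A defensive pre-flight check (the key path is a placeholder, not a real convention):

```shell
# Make sure an agent is up and holds the cluster key before calling fleetctl.
if [ -z "${SSH_AUTH_SOCK:-}" ]; then
  eval "$(ssh-agent -s 2>/dev/null || true)" > /dev/null
fi
ssh-add "$HOME/.ssh/coreos" 2>/dev/null || echo "load your key: ssh-add <path-to-key>"
echo "SSH_AUTH_SOCK=${SSH_AUTH_SOCK:-unset}"
# then tunnel through any cluster member, e.g.:
#   fleetctl --tunnel <node-ip> start bosun.service
```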

I've created an issue which I hope CoreOS will address.

UPDATE: This article covers some of the issues... without it, an etcd cluster node might be assigned rather than a worker.

cloud storage misconceptions

Here are just a few facts:


  • if you have a gmail account, any additional storage you might purchase is for that one user
  • if you have a free domain account at Google then you get the base storage for free, but anything after that is for the single user. The extra storage you might get from buying a Chrome device directly from Google applies to that one account
  • paid accounts come in two flavors. The Vault option is very promising and I have not seen an equal.
  • Google has always been frugal, such that the full-size images are in the cloud and the thumbnails are on the device (I think)
  • Google+ photos at reduced resolution are free (I like this!!!)
  • family sharing does not apply to storage (link)
  • Apple finally added iCloud application storage for iPhoto. I'd say a little too late... with 58K photos it's going to take a few weeks to get synced
  • are they a contender?
  • APIs are nice, but do I care?
  • nice idea, incomplete, needs my own cloud servers, and no client software for all the iOS, OSX, Chrome and Android devices I have
In conclusion, Google is still likely the better environment. Apple still costs a premium and has not achieved the level of interop that Google has. Google's not perfect but has a lot more checked boxes. With the exception of one or two features I have no reason to go back to Apple, and for those features I think I want a Surface or a Mac Mini with VMware.

Thursday, September 24, 2015

Advanced Cable Communications in Weston hates the rain

This is what happens when it rains in Weston

The internet is just a failure.

UPDATE: I forgot to mention that when I ran a traceroute the results suggested that the problem was in AT&T's network. The thing is, ACC moved to Comcast for their backhaul, so AT&T might actually be right to terminate the connection.

All of this suggests that ACC might have a bad route and some damaged equipment that is redirecting the packets. Over my 15 years with ACC these symptoms have recurred, and every time it's hardware: damaged, wet, a floating ground, or some other "balance" issue.

why not erlang?

I've developed some highly tolerant applications in erlang. The underlying justification was:
"if erlang drives phone switches why not payments"
To this day it still holds. Some of the very basic tenets of erlang and Joe Armstrong's erlang guides still hold. My favorites are "early optimization" and "crashing".

The fact is... erlang is an elite language for the elite programmer. But too many programmers have selected erlang because they want to be elite, which is clearly the wrong end of the telescope.

PS: haskell too.

vulcand - quick note

I like vulcand for a number of reasons:

  • I like the team (mailgun)
  • I like the language and what that means (golang)
  • Could run in a "scratch" container (because it's statically linked)
  • warm config changes (uses etcd)
While it's still in beta, the documentation is really weak. The sample Dockerfiles demonstrate forwarding ports 8181 and 8182 but never really tell you what they do. When I first read this I assumed that I needed a port redirect from the firewall or some other proxy in front. WRONG!

It just so happens the documentation is simply bad. This doc is also not great; however, it does demonstrate forwarding ports 443 and 80 in addition.
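For the record, as I read the docs those two ports are vulcand's own listeners: 8181 is the proxy itself and 8182 is the REST API, so nothing needs to sit in front. Configuration is just etcd keys. A hypothetical minimal wiring (key paths per my reading of the vulcand docs; the backend URL is a placeholder), printed for review rather than executed:

```shell
# Minimal vulcand backend/frontend wiring expressed as etcd keys.
vulcand_keys() {
  cat <<'EOF'
etcdctl set /vulcand/backends/b1/backend '{"Type": "http"}'
etcdctl set /vulcand/backends/b1/servers/srv1 '{"URL": "http://localhost:8000"}'
etcdctl set /vulcand/frontends/f1/frontend '{"Type": "http", "BackendId": "b1", "Route": "Path(`/`)"}'
EOF
}
vulcand_keys   # pipe to sh on a machine that can reach etcd
```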

Alternatives to vulcand could have been nginx (Russian) and HAProxy (French). Trust but verify? I'll start with vulcand; thank you.

Trouble in iPhone land

My wife had been having trouble with her iPhone. Several Apple apps would crash immediately after launching. That included Safari and the camera. Additionally the phone was also running hot. Since the phone was a 128GB phone the 5GB free iCloud storage was simply not enough. Finally, plugging the phone into a MacBook did not force a backup.

The three questions that an Apple Genius is going to ask you: [a] have you backed it up, [b] have you [factory] reset it, [c] have you updated the iOS? In my case the backup was not working, but I had not spent too much time trying. Also, if the problem is in a configuration that has been backed up, then the backup may be rendered useless, making the process take longer. It's all about risk/reward.

And so I proceeded to factory reset the phone. *sigh* As a result I lost 4 months of pictures. Arguably if I had been able to get the pictures onto iPhoto I would have saved myself a lot of grief. But that's a bit more complicated.

One of the complaints I have had about the latest Apple computers is that there is simply not enough local storage to support the volumes of pictures that most people take. For example, the new MacBook simply does not have enough storage. Second, even if you have enough storage it's going to take many days to recover. A few months ago we had a drive failure on our primary MacBook. In response I replaced the drive with a 2TB laptop drive from MacSales and restored all of our data. Restoring the 57,000 pictures took nearly 2 weeks via BackBlaze. But it worked.

Apple's iCloud (photo support) seems to have caught up to Google. The phone and computer have thumbnails and the original images are stored on Apple's cloud servers. Also, their prices seem a little more competitive at $10/mo for 1TB. Now it seems that the 128GB phone was an unnecessary expense; with iCloud support the way it is I did not need that much storage.

All things considered... this is the default behavior for Google and I did not have to do anything special or know anything extra. This was just how it worked.

In the final analysis here is what we have going on the MacBook:
  • BackBlaze - backs everything up, but if you delete it from the Mac it will eventually be deleted from the backup storage
  • Picasa - uploads all the photos to Google+ in a private folder, but not at full resolution; better to have the memory than not
  • Apple - iPhoto syncs with iCloud optimizing local storage and keeping the original images in the cloud. Deleting a local image or video will eventually delete it on the server.
On the iPhone:
  • Google+ - when connected to WiFi, will automatically upload the pictures to Google+ too. I'm not certain what resolution the images will be; it could be the thumbnail or the same reduced image as described above
  • Apple - the photo app is optimized to upload the images to iCloud and sync accordingly. Local storage is optimized such that only thumbnails and reduced images are stored. This can actually be a pain in the ass but that's what the industry is doing.
Do not delete your pictures from the iPhone as it will have a cascading effect on the storage.

Wednesday, September 23, 2015

coreos and etcd overhead

I finally managed to get my environment configured in GCE. Ultimately I want it to look like this:
This configuration is supposed to be pretty standard. The hardest part of the cloud-config was realizing that I was supposed to use $private_ipv4 instead of $public_ipv4. Many of the examples use the public IP and that is clearly wrong in EVERY case; using the public IP might leave the system vulnerable to attackers.

Another note about etcd clusters: the authors recommend that the etcd machines be dedicated to that function so that all system resources go to it. And when I created the workers I simply created etcd proxies. NOTE: if you omit the ?size=3 from the discovery URL then you have to be certain to include the proxy flag. If you include the ?size=3 then the 4th (or n+1) node will automatically become a proxy.
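The relevant cloud-config fragment looks roughly like this (discovery token elided; note the $private_ipv4 substitution variable, and ports 2379/2380 for etcd2 client/peer traffic). This is a sketch of the shape, not my exact file:

```yaml
#cloud-config
coreos:
  etcd2:
    # first 3 nodes claim the cluster slots when ?size=3 is on the URL;
    # later nodes automatically fall back to proxy mode
    discovery: https://discovery.etcd.io/<token>?size=3
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
    # on a worker, without ?size=3, you must be explicit:
    # proxy: on
```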

I now have a deployment of 5 machines: three in the etcd cluster and two workers. I happened to be looking at the CPU usage and I saw something strange:
This graph is the same on all 3 etcd servers. It appears that an idle etcd cluster member runs at about 30% CPU. (This machine is a GCE f1-micro.)

Then I checked the workers. Each of the 2 workers looked like:
Notice that the workers are at about 15%. That's clearly half the CPU of an etcd cluster member even though the virtual hardware is exactly the same.

On the one hand I get it. The systems are busy watching, and so on. While the logging tells me some information, it's possible that a lot more is going on. And the proxy on the worker is essentially sleeping soundly while the worker is quiet.

Overall, I suppose I'm not that surprised that the worker and etcd nodes perform differently; however, I'm not sure the micro server is running at the level I expected. Anyway, I'll continue watching during the burn-in. I also want to move my development into this structure in order to see what happens and how tooling might make it fun and profitable to execute this way.

Testing embedded code

In an upcoming post I want to provide several documents that I'd prefer to embed as code. I've created a gist on GitHub and copied the javascript "embed" code here:

(if you do not see code here then it failed.)

I have no idea if it's going to work. (I tried Bitbucket but it does not offer any way to embed code.)

Tuesday, September 22, 2015

Docker Machine Providers - Review

I've been deploying CoreOS and Docker in this configuration:

And while I have had some success, I have posted a number of questions and concerns with the Digital Ocean support team, and most responses start with "We're sorry" and end with "ask the CoreOS team". I think that there is at least one serious flaw in DO's product: every VM instance receives a public IP address and there is no firewall. The side effect is that every system in my cluster has been under attack since it was deployed.

  • no network drives
  • no firewall
  • limited support
So I'll be leaving them shortly. But what is interesting is that Digital Ocean supports Docker Machine. Docker Machine is Docker's way of provisioning Docker hosts. Presumably there is some sort of shim between the host OS and the Docker container... While it might work, it's an odd feature.

Of course, if you have an OpenStack, VMware, or Vagrant environment then it makes perfect sense. The shim will give you the cost performance that you need, but if MS Azure, RackSpace or GCE is charging micro-instance prices for container instances then it seems a little askew.

While I was looking at the Docker Machine driver page I could not come to any other conclusion... and so I swing back to hosting my own CoreOS or RancherOS instances. I happen to prefer CoreOS, as Rancher has not exposed their pricing model and I'm not going too far down that rabbit hole.

One thing I like about DO is that their micro instance costs $5; although Google Compute Engine is just slightly cheaper.

maintainable code

This YouTube video on maintainable code set my hair on fire. These guys usually have something interesting to say but on this subject they are a bit naive.

My comments:
I think we all strive for maintainable code but how we get there is debatable. Given your examples: PEP-8, for python, has become a religious nexus. The only good side of PEP-8 is that there is a tool that will evaluate the code, although one can ignore the warnings and continue. Golang has an opinionated view of code formatting but does not go as far as PEP-8. (I've worked on teams where my peers spent so much time criticizing each other over PEP-8 adherence that it simply delayed execution.) Javascript and PHP are going to be the hardest to get consensus on. It just is, and I have no nice way to explain it.
Writing the documentation first is also false. While it's nice to say, it's impossible in practice; see Knuth's books and essays on programming languages. Finally, the whole thing begins with requirements gathering. I developed an environment that allowed the analyst to define the flow and sometimes the signatures. Then the programmers would work on the algorithms.
The reality, and by extension the erlang way: make it work simply and then refine it.

Good Enough

Recent hardware failures have made me realize that [a] I/we rely on technology, [b] when bad things happen to tech that I/we are intimately knowledgeable about, worse things happen, and [c] not even Apple can make things simple if you're on a technology budget, and even then see [b].

In recent weeks I had a Chromebook Pixel failure. This is an awesome machine with an Intel i7 and 16GB of memory. But when I had my failure I knew I was going to have to solve my problems myself, even with the online and phone support. In my case both Hangouts and Google Play Music failed to start when clicked. The problem is that there are no logs to view and no popups or diagnostics; not even a BSOD. In the end it took a powerwash+revert and a conversion to the dev channel and then back to stable to get the machine back into operating order. Subsequently I emailed Pixel support and they could not help me. (I would have expected someone to ask me for a log or two.) I was directed to the individual application feedback form.

The premium I'm paying for a Pixel covers the 3yr/1TB storage costs, of which I'm only using 1%, and the premium hardware. The support is as non-existent as everyone else's. I should have held out for the Dell 13".

In a related failure... my wife's iPhone has been exhibiting some problems recently. Her phone is the iPhone 6 Plus and it has started to get very warm, and warmer after upgrading to iOS 9. I think I found the problem in the number of apps that were actually running in the background (almost 15 apps). Then different apps started crashing. First it was Safari, then it was the camera. We had initially planned to take the phone to a Genius, but as time passed I realized the first thing they were going to recommend was that I/we reset the phone. So I/we planned to reset the phone last night.

NOTE: iCloud backups had not run in a few months as she ran out of storage. That happens when you have a 128GB phone with 5GB of backup storage.

As I had been thinking about the problem I decided that a reset and restore from iCloud was not an option, because whatever software state was on the machine right now was going to be restored back onto the machine if the error was in the media that was backed up.

NOTE:  A "like new" reset will delete all of your cloud backups.

... getting to the conclusion...

I should have backed the phone up to iTunes in order to save the 4 months of pictures that were not backed up anywhere else. I like my Android phone because everything defaults to living in the cloud, so there is no partitioning of data as I once experienced when my MacBook ran out of storage for iPhoto. I can replace my Android phone a hundred times and I'll have my data... just like my Chromebook. That's the sweet spot that Google found and Apple did not.

And "good enough" means "the most expensive is not always the best". (probably never is)

Monday, September 21, 2015

What does a complete modern enterprise container-ship look like?

In a recent Rancher Labs blog post the author covered ELK (Elasticsearch, Logstash and Kibana). What caught my attention was the number of containers required to deploy the design. As I began to consider the deployment I realized that the 4-5 containers deployed to watch one container is a little overkill, but of course you have to start somewhere as you transition from a legacy deployment to containers.

Assuming that converting to containers from baremetal or VM solutions has a net zero overhead cost then converting your enterprise from [a] to [b] should require the same hardware footprint/cost. Agreed?

What does a complete modern enterprise container-ship look like?

Wednesday, September 16, 2015

did I authorize this?

When did Google ask me for permission to be solicited for donations?

screenshot region of Chrome browser on my ChromeOS machine

The cost of free

There is something to be said for sweat equity, however, at some point you will need to take a shower. I really like shows like The Profit and Shark Tank. There is a lot of reality distortion going on over there but unless you hit on something yourself it is what it is.

The thing about sweat equity is that it comes at the cost of the equity. What I mean is that if I'm one of those 10x programmers and I work a 16hr day and I'm still taking home an 8hr paycheck, then I might only be making pennies on the dollar. This is particularly painful if the equity is not my own.

So while I look at all of the open and free-ness of projects like CoreOS and Kubernetes, that free is not free. Building your own system, or manually maintaining a CoreOS cluster according to the best practices, or managing a Kubernetes cluster, while free, is still very expensive. When you're a 10x programmer working 16hr days and accruing equity at pennies on the dollar, spending that time on infrastructure that should be vendored out is just not the best exchange rate.

CoreOS offers try-before-you-buy versions of CoreOS, but 30 days is just not enough trial for me. And when I look at Tectonic, its preview pricing is at $1,500/mo, and while that is a good portion of a payroll resource building my own systems... in the free model I cannot get enough experience to champion it in my org.

Sunday, September 13, 2015

An iPad for all occasions

I have been a hardware geek as far back as I can remember... and I have been predicting a 13in iPad for about 4+ years. At the time I was the director of a small software development department for a payments company headquartered in Alabama. Since I was responsible for everything that had a CPU in it, I also contributed to the design of the call center. At the time we were experiencing rapid growth without much of a disaster recovery plan. My hope was that Apple was going to expand their tablet offering so that we could (a) deploy a wifi network anywhere we needed it to be, including generally remote with a VPN, (b) reduce the in-house cost of wiring phone, power, and wired networking, (c) take advantage of the enterprise support for managing users and remote destruction of corporate assets, which was a huge plus, and (d) by looping Apple or our hardware leasing company into the DR plan, have hardware at the ready for on-demand deployment to the DR location.
And now here is the iPad Pro in all of its 13in goodness.
While the iPad of 2010 would have been sufficient, the iPad Pro has a lot to offer. The CPU, RAM and storage are good. It also has better support for multiple foreground applications... and I'm certain all of the other enterprise features are still there.

But there are some weaknesses.

  • Since it does not support the USB-C standard we can expect to see a refresh in the next few releases. Or at least yet another dongle.
  • Lightning is a fine connector; however, the HDMI and DisplayPort adapters are subpar. There are issues with resolution control of external displays: (a) there is only support for one at a time, (b) no audio, (c) mirror mode only, (d) no touch.
  • If I buy the 128GB Pro then I need a lot of iCloud storage for backups.
When comparing the iPad Pro to the Google Pixel my intuition makes some recommendations:
  • the operating cost between apps and cloud storage leans to the Pixel. iCloud costs $20 per month, and over 3 years that's $720. Google normally charges $10 per month, half of Apple's price. That alone negates the $300 price difference between the iPad and the Pixel.
  • The iPad only has the one port. The Pixel has 2x USB-C, 2x USB-A and 1 SD card slot. The Pixel even offers a slow charge from a USB to USB-C adapter cord (but it's really slow). The one iPad port means that accessing media cards might require disconnecting the external monitor, etc...
  • And of course there is the 10s boot time on the Pixel
  • Lastly, you can also install alternatives to ChromeOS.
  • And let's not forget beta and dev modes... so much easier than with Apple.
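The arithmetic behind that first bullet, spelled out (using the prices as quoted above):

```shell
# Three-year storage cost comparison, iCloud vs Google, per the bullet above.
months=36                   # three years
icloud=$((20 * months))     # $20/mo for Apple's tier
gdrive=$((10 * months))     # $10/mo for Google's
hardware_gap=300            # iPad Pro premium over the Pixel
echo "iCloud: \$$icloud, Google: \$$gdrive, storage savings: \$$((icloud - gdrive))"
```

The storage savings over the device's useful life exceed the up-front hardware price gap, which is the whole point.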
The Pixel does not have a tablet mode. Rats! Actually, who cares? I have an Asus Flip and it's just too f*cking heavy to use like a Kindle Paperwhite.

What does my professional hardware look like:
  • all of my day-to-day interaction with my development servers, in the cloud, is thru my ASUS Chromebox [MU075 16GB RAM] (twin 27" monitors)
  • When I have to work on the road, library, or Starbucks I use my Pixel (sometimes around the house too)
  • My ASUS flip is very rarely used. It's a rescue machine when I'm remote or when I visit family and friends. I can never tell when I'll be called on to rescue someone.
I rarely use my MacBook or iPad; although I recently deployed a dashboard on an iPad, so I had to use an older device that I had wanted to give my kids.

I wouldn't mind a better Pixel that gets closer to the iPad Pro, but I hope Google does not go too far.

UPDATE: I just looked up the actual pricing of the iPad Pro. First of all, Apple does not offer a 64GB model, only 32 and 128GB models. I would like to choose the 32GB model for comparison; however, many of the ChromeOS models start at 4GB storage, so 32GB of storage for the iPad Pro would be too optimistic. Keep in mind that iOS keeps everything local and that the applications are mainly Objective-C binaries, while ChromeOS applications are mainly javascript, CSS and HTML, which offer a higher compression ratio (Android uses Java and other languages)... So the point I was making here is that the base price for an iPad Pro is going to be $1300 for the 128GB model with the pen and keyboard. I still get more use-cases out of my Pixel.

Thursday, September 10, 2015

Modern Day VPN

I recently read a G+ posting about VPNs that made my skin crawl. It seems clear to me that the unapologetic entitlement crowd has taken and repurposed the RFC. Clearly VPNs have a wide variety of features; however, when the VPN was initially conceived it was about linking private distributed networks. Then with lower-cost crypto appliances it became part of the remote worker's hardware inventory, and as it made its way into the mobile device stack it allowed workers to be mobile.

Let's be clear: it was not meant to (a) obfuscate local network traffic, (b) improve QOS, or (c) bypass regional service restrictions... although this is what each of the VPN service providers in the Google Play store would have you believe. (Clearly there is no money in the traditional VPN, and by using a VPN mom and dad won't see that you spend all your time on porn sites.)

And so there is no ambiguity... I did a whois on the top 4 VPN providers on Google Play.

  1. domain registered 2007
  2. domain registered 2013
  3. domain registered 2010
  4. domain registered 2011
I checked all of their websites... one is totally free. WHAT? How is that possible? Just the act of spinning up their website means that they have costs. If they offer a superior product then they have bandwidth costs too; their upstream providers are not giving them resources for free. Clicking on their learn-more button, they make the claim that companies pay them to recommend software to their users. But since they don't show any advertising, how are they actually doing that? The site is devoid of real facts and I'm left with the impression that they might actually be a man-in-the-middle and a trojan horse wrapped in one.

To be fair the Google Play Store does host other VPN client apps and extensions which I consider more legitimate or traditional. Cisco, SonicWall, Citrix to name a few. These tools are meant to create a virtual network between your computer and the remote network and that's it. From that point forward one usually has to sign a "proper use" or employee manual document so that you're not using the company network to watch movies or download torrents.

Anyway, the big misconception... While you might be hiding your IP address, obfuscating your browsing history, and tricking your ISP's QOS mechanisms... all of your data is now being consolidated by a different 3rd party. Therefore, whatever secrets you thought you had before are no more secure. If you go to a public FTP server and you are not using SFTP or FTPS, then your password and content will be in the clear for everyone at the VPN provider to see.

Monday, September 7, 2015

"Where do the well to do buy their kids toys?"

Sitting in the parking lot at the local toy store, I watch the various families enter and exit. The one thing they have in common is that they/we are all lower and middle income families. Since we spent more on gas than on the toy we were exchanging, I find myself asking some questions:

(a) where do the well to do buy their toys as to avoid disappointment in their children?

(b) are there any analytics associated with big box toy stores? 

I'm sure there are many more questions to ask, but it's not my specialty. If anyone knows Malcolm Gladwell it would be great to see him tear this apart.

UPDATE: my wife decided to purchase a coffee mug for my daughter's teacher. In particular, since the school's theme was "superheroes" it was fitting that the mug she purchased matched the theme. What arrived was a "super stylist" and not a "superhero" mug. When I inspected the packaging it was clear the retailer had changed the barcode without regard to the contents. I requested a replacement from Amazon and it arrived in just a few days. Strangely enough, the same mistake was made. It is amazing how wasteful this is, and I again wonder what the well to do, do.

This mug cost $6 at Amazon. Shipping was essentially free because of Amazon Prime; however, it cost someone 4x because of the retry and the returns. And of course there is all that wasted time. The irony is that we saw the perfect mug at AC Moore last week, but it cost $12. Clearly we should have made that purchase instead of bargain hunting.

Friday, September 4, 2015

debugging production problems with git and go

What follows is an accounting of a debug session I just completed. In the end the issue was not in my code but in a 3rd party library, which in turn failed because a service it depended on was not running... but this is how I got there. (the stack trace, stupid)

Logging into my server I realized that CoreOS had updated my Alpha channel server. It's a pain in my ass when that happens... and there are a number of side effects that I have not yet accounted for.

The login:
Welcome to Secure Shell version 0.8.34.
Answers to Frequently Asked Questions:
Connecting to
Loading NaCl plugin... done.'s password: 
Last login: Fri Sep  4 03:40:27 2015 from
CoreOS alpha (794.0.0)
Failed Units: 6

Crap, my chargeback service did not run. That's going to create a backlog of work for me.  But what happened?
$ sudo journalctl -u ChargebacksYesterday.service -e

Looking at the last page of the log I see that there was a stack trace. This is not going to be good because the trace is going to give me line numbers that do not match the current state of the code.  I'll need to branch the deployed version (after the fact) and then review the code. I hope no changes are required that demand merging.

Get the commit-id from the binary:
$ ChargebacksYesterday --version

appname ChargebacksYesterday
commit id: 5600cc2
build date dev-20150901141054

ALL of my programs return the above string when the '--version' CLI option is provided. This way I can get back to the exact state of the code in order to debug problems. From the log info alone, including the stack trace, you'll most likely not be able to debug a project that is being heavily developed.
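For anyone wanting the same convention, the commit id and build date can be injected at link time with Go's -ldflags; here's a minimal sketch (the variable names and flag wiring are my own illustration, not necessarily how the author's tools do it):

```go
package main

import (
	"flag"
	"fmt"
)

// Overwritten at build time, e.g.:
//   go build -ldflags "-X main.commitID=$(git rev-parse --short HEAD) \
//                      -X main.buildDate=dev-$(date +%Y%m%d%H%M%S)"
var (
	commitID  = "5600cc2"
	buildDate = "dev-20150901141054"
)

// versionString reproduces the banner each tool prints for --version.
func versionString(app string) string {
	return fmt.Sprintf("appname %s\ncommit id: %s\nbuild date %s", app, commitID, buildDate)
}

func main() {
	showVersion := flag.Bool("version", false, "print build info and exit")
	flag.Parse()
	if *showVersion {
		fmt.Println(versionString("ChargebacksYesterday"))
		return
	}
	// ... the real work would go here ...
}
```

Because the values are injected by the linker, the binary always carries the commit it was built from, which is exactly what makes the branch-from-commit-id trick below possible.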

go to my source
cd zquery

create a branch from the commit-id and push to the remote server
$ git branch
$ git branch debug_nil_pointer 5600cc2
$ git push -u origin debug_nil_pointer
$ git branch

** Notice I created the branch from the commit-id and did not actually switch branches locally. I hate switching branches locally as I've broken my projects more than once doing it. In the next step I switch the branch view in the web GUI.

go to github/bitbucket and switch branches
- this is a visual thing you'll have to do it yourself

find my file and view the code
- I found the exact line of code and it turned out that I was correct. The bosun server running on my local server must have terminated or failed to restart after the CoreOS update.

Thursday, September 3, 2015

"more plausible than not"

While the NFL has stated that it is "more plausible than not" that Tom Brady knew about, instructed, approved or participated in the deflating of the footballs used to win the semifinal game last season... the NFL has reversed course on the punishment it handed out.

What a shame! Though professional athletes have been criticized for all the drugs, performance enhancement, salary caps, domestic abuse, contempt for the fan, fighting with fans.... the one thing that we have collectively agreed upon is that cheating is unacceptable. 

The NFL is not a democracy. You do not have a right to play in the NFL. It is a privilege and an opportunity. One for which you, the pro athlete, are paid very well. 

Besides being "more plausible than not", the New England Patriots have a history of bending the rules and even cheating. Bill Belichick should have been banned from the game for his participation in the taping of the Jets' practices (or whatever that was).

The Patriots are a dirty team. They cheat. They just have better lawyers than the NFL.

reducing duplicate SQL in Go project

Recently I started writing reports for a client of mine. At first I thought it was going to be just a few reports, but over time a number of things have happened: (a) more and more reports (b) even more reports (c) I'm getting lazy so my tools are starting to scale (d) I'm getting lazier and I also need to reuse code without cut, copy, paste (e) I need to share code so that reports based on the same foundation generate similar results.

While part of the implementation means using CTEs (common table expressions), that's not the whole story, as I implemented a complete reporting engine that exports to CSV, TSV, text tables, XLSX, JSON, DOT, and go templates, and supports its own DSL including loops and dynamic queries.

In my current implementation I store the SQL in bash, yes bash, shell scripts that export the SQL names like this:

export hello_cte="hello_cte (hello) as (select 'HELLO')"
export hello=";with ${hello_cte} select * from hello"

In one case I need to pass variables into the SQL:

export hello_cte="hello_cte (hello) as (select 'HELLO')"
declare Hello_${name}=";with ${hello_cte} select *, '${name}' from hello"

Then the SQL gets baked into the go code using my envtmpl project. envtmpl works with go's generate action to execute a template and replace the template variables with data from the environment... the SQL. (In the example above the ${name} is expanded by bash, not go templates.)
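The bake-in step can be approximated with plain text/template plus the process environment. The following is a stand-in for the idea, not envtmpl's actual interface: the "env" template function and the queries package name are mine.

```go
package main

import (
	"bytes"
	"log"
	"os"
	"text/template"
)

// A template for the generated Go source: the SQL constant is pulled
// straight out of the shell environment, mirroring what a go:generate
// step with envtmpl would do.
const src = `package queries

// Hello is baked in from the shell environment at generate time.
const Hello = {{env "hello" | printf "%q"}}
`

func render() string {
	// Simulate the bash script having exported the SQL.
	os.Setenv("hello_cte", "hello_cte (hello) as (select 'HELLO')")
	os.Setenv("hello", ";with "+os.Getenv("hello_cte")+" select * from hello")

	t := template.Must(template.New("gen").Funcs(template.FuncMap{
		"env": os.Getenv, // expose the environment to the template
	}).Parse(src))
	var buf bytes.Buffer
	if err := t.Execute(&buf, nil); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	os.Stdout.WriteString(render())
}
```

Running this emits a small Go source file with the fully expanded SQL as a string constant, which is the effect the bash-plus-generate pipeline achieves.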

NOTE: top level queries are named with a leading uppercase letter and use camel case.

This framework works well for me as I have over 50 reports; however, there are a few downsides. (1) Because I'm using bash variables, the SQL statements lose their formatting [bash's multi-line strings are kludgy at best]. (2) Any benefit from a modern text editor with syntax highlighting is lost, even though some editors support multiple languages. (3) The interaction between bash and go is so tightly coupled that decoupling is going to be a challenge.

As an aside I've been looking for a SQL reformatter that is written or can be embedded into a go project. I'm not going to use a 3rd party service for the very obvious reasons.

It was the search for a reformatter that offered a glimpse into the future: I found dotsql. The dotsql project is essentially a librarian and an execution wrapper around go's sql package. Give one of its Load functions a file, reader, or string and it will be parsed into a map[string]string. Then the SQL can be accessed by Prepare, Exec, etc. However, there are two interesting methods: Raw() and QueryMap(). With these two methods I can use my CTE strategy above and get all the benefits I was hoping for as I described my problem space.

var (
        doc = `
-- name: hello_cte
hello_cte (hello) as (select 'HELLO')

-- name: hello
{{.hello_cte}}
select * from hello_cte
`
)

Notice the embedded {{.hello_cte}} in the SQL.

Here is the complete source:

package main

import (
        "bytes"
        "log"
        "text/template"

        "github.com/gchaincl/dotsql"
)

var (
        doc = `
-- name: hello_cte
hello_cte (hello) as (select 'HELLO')

-- name: hello
{{.hello_cte}}
select * from hello_cte
`
)

func main() {
        d, err := dotsql.LoadFromString(doc)
        if err != nil {
                log.Fatalf("ERROR: %v", err)
        }
        sql, err := d.Raw("hello")
        if err != nil {
                log.Fatalf("ERROR: %v", err)
        }
        tmpl := template.Must(template.New("dotsql").Parse(sql))
        buf := bytes.NewBufferString("")
        err = tmpl.Execute(buf, d.QueryMap())
        if err != nil {
                log.Fatalf("ERROR: %v", err)
        }
        log.Printf("Finished:\n%s\n", buf)
}

And this is the output:

2015/09/03 11:56:12 Finished:
hello_cte (hello) as (select 'HELLO')
select * from hello_cte

I still have to validate the dotsql behavior against my DSL but I'm confident it'll work since the '-- name:' seems to be a token used to separate the SQL statements from each other and my DSL uses '----[]----' for a similar but different purpose.

And eventually I'll also use go-bindata in order to embed the complete SQL string into the executable.

CRAP! Just as I wrote that last sentence I realized I could have accomplished (almost) the same thing with go-bindata alone. That exercise is left to the reader. dotsql has one advantage: it allows more than one SQL statement per file, or a single master file, whereas go-bindata requires a single file per SQL statement.

Wednesday, September 2, 2015

stop asking for my address book

We all know, or at least strongly suspect, that the likes of LinkedIn, Facebook and mySpace find novel ways to make money by marketing to me based on my likes, searches, and possible similarities to people in my circles. So of course they are going to ask me for access to my address book.

But stop fucking asking me. I'm not going to give it to you. And if you turn a phrase that gets me to inadvertently permit you access, not only will you lose my business (who cares, right?) but I will also join and support any and all groups that agree to legislate you into obscurity. As we all know, once they read my address book and slurp the data, I will be haunted by my friends' likes.

Just a few days ago I did an Amazon search on small footprint computers like the Asus Chromebook and now everywhere I go I see ads for them. Someone sold me up the river.

The thing about my address book is that it contains both personal and business related contacts. Of the 12K contacts I only communicate with 30 regularly; the rest are a result of gmail's address book policy. The last thing I want to see is antivirus software ads because some dipstick I knew in college works for McAfee.

Tuesday, September 1, 2015

parsing go templates

Given a template file, I want to parse out some variables so that I can prompt the user. I do not want to build my own parser or some set of regexes, although after playing with go templates and the parse tree, that might be the best thing to do since I'm already keeping the syntax simple.

My simple document looks like this:

doc := "fred {{.Fred}} barney"

Here's my sample code (playground):

package main

import (
        "log"
        "text/template"
)

func main() {
        doc := "fred {{.Fred}} barney"
        t := template.Must(template.New("sample").Parse(doc))
        log.Printf("%#v", t)
        log.Printf("%#v", t.Tree.Root)
        log.Printf("%#v", t.Tree.Root.Nodes)
        for _, n := range t.Tree.Root.Nodes {
                log.Printf("%#v", n)
                if n.Type() == 1 { // 1 == parse.NodeAction
                        log.Printf("%s", n)
                }
        }
}

The output was interesting, but as I expected: similar to the sort of parsing demonstrated by Rob Pike in his lexer video. The downside of using the template parser is that it's going to parse a lot more than the simple (did I say that again?) DSL I had in mind, even though mine is based on go templates.

This is the logging that I performed before the for loop. There is nothing particularly interesting here.

2009/11/10 23:00:00 &template.Template{escapeErr:error(nil), text:(*template.Template)(0x105345c0), Tree:(*parse.Tree)(0x10583a40), nameSpace:(*template.nameSpace)(0x10538360)}
2009/11/10 23:00:00 &parse.ListNode{NodeType:11, Pos:0, tr:(*parse.Tree)(0x10583a40), Nodes:[]parse.Node{(*parse.TextNode)(0x105346e0), (*parse.ActionNode)(0x10534740), (*parse.TextNode)(0x10534760)}}
2009/11/10 23:00:00 []parse.Node{(*parse.TextNode)(0x105346e0), (*parse.ActionNode)(0x10534740), (*parse.TextNode)(0x10534760)}

Here are the 3 nodes that were parsed. It essentially comes down to "fred ", "{{.Fred}}", and " barney". Since I was only interested in the ActionNode, I checked for Type() == 1 and then printed the node. I thought that I might get just the inner part, but then I realized the parser needs to look inside and pull more tokens out... the template schema is a DSL unto itself.

2009/11/10 23:00:00 &parse.TextNode{NodeType:0, Pos:0, tr:(*parse.Tree)(0x10583a40), Text:[]uint8{0x66, 0x72, 0x65, 0x64, 0x20}}
2009/11/10 23:00:00 &parse.ActionNode{NodeType:1, Pos:7, tr:(*parse.Tree)(0x10583a40), Line:1, Pipe:(*parse.PipeNode)(0x1053a1b0)}
2009/11/10 23:00:00 {{.Fred}}
2009/11/10 23:00:00 &parse.TextNode{NodeType:0, Pos:14, tr:(*parse.Tree)(0x10583a40), Text:[]uint8{0x20, 0x62, 0x61, 0x72, 0x6e, 0x65, 0x79}}

There is a lot of stuff in the parsed doc. It might be simple to locate the pieces I'm looking for. But this is a start.
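Pulling the pieces together, the action nodes can be collected into the variable names I'd prompt the user for. A small sketch building on the code above; trimming the braces and dot off the rendered node is my own shortcut and only holds for simple {{.Name}} actions:

```go
package main

import (
	"log"
	"strings"
	"text/template"
	"text/template/parse"
)

// listFields returns the names of the simple {{.Name}} actions in a
// template body, in the order they appear.
func listFields(doc string) []string {
	t := template.Must(template.New("sample").Parse(doc))
	var fields []string
	for _, n := range t.Tree.Root.Nodes {
		if n.Type() == parse.NodeAction {
			// n.String() renders the node back to source, e.g. "{{.Fred}}";
			// strip the surrounding braces and leading dot.
			fields = append(fields, strings.Trim(n.String(), "{}."))
		}
	}
	return fields
}

func main() {
	log.Printf("%v", listFields("fred {{.Fred}} barney {{.Barney}}"))
}
```

Using the exported parse.NodeAction constant instead of the literal 1 keeps the check readable and safe against any future renumbering.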

Good reminders for software developers, program managers, and customers

It's not that I hate Agile, it's that I hate the "Agile Process". It is worth repeating: the Agile Process shares vocabulary with the Agile Manifesto but is a lot more, and it is filled with the bias of its supporters, johnny-come-latelies who are in it for a buck and not your success. As soon as they find a "better way" they are going to be knocking on your checkbook again.

While I have not gotten to a point where I have parity with 12-factor apps there seems to be something in the broader strokes that resonates with me. Of course this is yet another cycle (computer history repeating itself). Just as mainframes gave way to the midrange and ultimately the PC so did the frameworks that they operated in. As we dive deeper into PC development and managers want more productivity and reliability from DEV and OPS they are pointed back in the direction of frameworks. And I'm talking about the likes of JCL, CICS and so on. Currently things look like CI/CD, orchestration, configuration management, and so on... it's just the same thing re-imagined.

As the PC seems to be going the way of the mainframe, as we move into the cloud and use netbook-class machines and tablets... and add the discipline of frameworks... we seem to be moving toward the "internet of things". To see that you only need to read a few articles on the arduino, raspberry pi, beagleboard, edison, cubox and so on. As these devices get smaller they are framework-less. And they too will have to be reined in.

Bosun and data collection

I like the bosun project but I have to take issue with the guy who packaged the docker version I'm running. Bosun is written in the Go language but has many dependencies that are not. In fact the stackexchange team created their own image that has several java dependencies. And my system, which was lean, is now like a VW Bug pulling a tractor trailer.

Chromebook VPN connection to a Watchguard Firewall

First and foremost, the support professionals at WatchGuard have no interest in taking my calls or emails. My employer is invested in WG firewalls, but since the VPN issue is mine and not common to the company, I have to deal with it myself, and I'm not likely to get the support contract number or serial number. From that perspective WG is just not my friend.

Chromebook's VPN assumes (a) your certificates have been publicly blessed by a CA and (b) you are using the standard VPN port and NOT 443, as is most commonplace. Getting past these limitations means a lot of manual labor. Here are some of the links I have been accumulating:

  • Chromebook VPN Setup (link)
  • convert PEM to key file (link)
  • Chromebook and OpenVPN server (link)
And yet it still does not work.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...