Thursday, April 30, 2015

Deregister from Apple's iMessage

There are a number of reasons why you might want to deregister from Apple's iMessage. For me... I crushed my iPhone and decided to try a Nexus 6. While I'm still learning the ropes, everyone who messages me from an iPhone is sending into the iMessage black hole.

So now that I've missed a few more important messages, I've decided to decouple. There are two ways to do that: (a) from the iPhone itself, or (b) by sending an SMS to yourself.

HERE is the link to Apple's website that provides the info and tools. Good luck.

I know I'm going to miss the unified phone, tablet, desktop experience for texting, but not necessarily for video or voice. I already dislike AT&T's splash screen on the Nexus 6 (with no way to remove it without rooting).

Here's to the next two years.

golang: os.Args[] vs flag.Args()

os.Args[] and flag.Args() are not the same thing. I know I'm stating the obvious; however, I made the same mistake three times this morning, and as silly and obvious as it is... I have to call it out.

os.Args[] includes os.Args[0], which is the executable path and name. flag.Args(), on the other hand, contains only the positional arguments left over after flag parsing, so it will always have at least one less item.
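A quick sketch of the difference. The flag and argument values below are made up for illustration:

```go
package main

import (
	"flag"
	"fmt"
)

// positional returns the non-flag arguments from a full argv slice,
// the way flag.Parse() would see them: argv[0] (the executable path)
// is skipped, and any leading flags are consumed by the parser.
func positional(argv []string) []string {
	fs := flag.NewFlagSet(argv[0], flag.ContinueOnError)
	fs.Int("n", 1, "an example flag") // hypothetical flag for the demo
	fs.Parse(argv[1:])                // argv[0] is never parsed
	return fs.Args()
}

func main() {
	argv := []string{"/usr/local/bin/prog", "-n", "2", "a", "b"}
	fmt.Println(len(argv))        // 5: path, flag, flag value, two args
	fmt.Println(positional(argv)) // [a b]
}
```

So indexing the two the same way is exactly the mistake I kept making.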

Wednesday, April 29, 2015

Google Chromebook Pixel 2 - i7

My Chromebook arrived today. It was a pretty big outer box which held two dongles and the laptop (which was secured inside a second box). Essentially three Russian dolls. Opening the packages was nothing special. When I finally got to the device I remarked that the origami plastic wrap reminded me of Apple.

There are no external markings, that I can tell, that indicate what system it is, so I was initially unsure which CPU, RAM, and SSD configuration it had. I found an app called "system", and it worked well enough to confirm it was the machine I had purchased.

The next thing I did was try to install my Google Play Music extension. Unlike the other Chrome installations I have, it did not work the way I was used to. It's been a while since I configured it, so I probably made a mistake. Right now I'm not using it.

Finally, I wanted to know what the battery life looked like. I have the computer propped open with the menu open and the battery indicator displayed. At first it read 8 hrs, and in the past 15 minutes it has ratcheted up to 14:35 hrs with 100% battery left. I'm assuming that the battery indicator is adaptive, although I'm not sure how they can do that accurately... but that's a question from my hardware days. I'm sure there is some adaptive formula based on some reading and some sort of time series.
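If I had to guess, the estimator smooths the recent discharge rate and divides it into the remaining charge. Here's a toy sketch of that idea using an exponential moving average; the names and the smoothing constant are mine, not ChromeOS's actual algorithm:

```go
package main

import "fmt"

// estimator smooths the observed discharge rate with an exponential
// moving average and projects time remaining. Purely illustrative.
type estimator struct {
	rate  float64 // smoothed drain, in percent per hour
	alpha float64 // smoothing factor in (0, 1]
}

// observe feeds one sample of drain (percent per hour).
func (e *estimator) observe(drainPerHour float64) {
	if e.rate == 0 {
		e.rate = drainPerHour // seed with the first sample
		return
	}
	e.rate = e.alpha*drainPerHour + (1-e.alpha)*e.rate
}

// hoursLeft projects remaining runtime from the current charge level.
func (e *estimator) hoursLeft(percentLeft float64) float64 {
	if e.rate <= 0 {
		return 0
	}
	return percentLeft / e.rate
}

func main() {
	e := &estimator{alpha: 0.3}
	// A burst of heavy use, then idling: the projection adapts upward.
	for _, drain := range []float64{12.5, 12.5, 7.0, 7.0, 7.0} {
		e.observe(drain)
	}
	fmt.Printf("%.1f hours left\n", e.hoursLeft(100)) // roughly 11.3
}
```

That would explain the jump from 8 hrs to 14:35 hrs as the sampled drain settled down.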

It's all good enough for me. In fact I like the keyboard and especially the trackpad. I have not tried the touch screen yet, although I have installed the Android smartlogin feature and I like it too... but it was a little tricky to install properly. The Bluetooth pairing was uncooperative.

UPDATE: My 1 TB of Google Drive storage has arrived. It was just a simple click once I was logged in. Now I have 3 years of storage happiness awaiting me. One interesting thing I noticed: in my Chromebook's file manager I can edit files as if they were local. NICE!

UPDATE: I am trying to use my Pixel exclusively today... and I noticed a few things:

  • running a secondary monitor works great, though some of the foibles seem to be related to the second display.
  • the battery is running low and I'm not getting the 12 hours I expected, which could be related to playing music to my Bluetooth speaker and driving the second display.

Connect Chromebook to WatchGuard through VPN

WARNING - I have not been successful yet. Everything I've read is leading me down this path, but it's still not working. As a baseline, I have a working installation on my Mac, so I know the credentials are OK.

  • log in to the WatchGuard special URL
  • there are 3 possible download buttons
  • One returns the .ovpn file (get this one)
  • One returns a .zip file (get this one too)
  • Do not bother with the windows download. It returns an .exe file
  • convert the ca.crt, client.crt, and client.pem into a PKCS#12 file
    • openssl pkcs12 -export -in client.crt -inkey client.pem -out client.p12 -name MyClient
  • create the .onc file
    • get the sample .onc here
    • generate a uuid here
    • copy/paste the contents of the ca.crt and client.crt as indicated
    • UPDATE: remove the CR+LF characters from the files because JSON strings do not support embedded newlines
  • browse here: chrome://settings/certificates
  • import the ca.crt in the authorities
  • import the client.p12 in the user certs. (import and bind)
  • browse to: chrome://net-internals/#chromeos
  • upload the .onc file

While this seems to be the approach, it's not working. When I compared the contents of the .ovpn and the template .onc, they differ, and that is concerning.

References: this post has a lot of the information in common to all of the posts I've read so far.

is it me or does VPN really suck? (link)

Monday, April 27, 2015

The killer smartphone

As I've mentioned twice, I'm having second thoughts about my Nexus 6, but one thing is for certain: I do not regret dumping my iPhone. So I've decided that I've got a better idea.

I want a new category of smartphone. Something that is a hybrid between the StarTAC and the iPhone. It needs only a few dedicated apps, reliability with good sound quality, and maybe the ability to span WiFi and LTE networks. But if it's in the StarTAC form factor then simple drops are not the end of the world. Actually, I'd add a 1st-gen BlackBerry scroll wheel and display too.

And for good measure... the battery just has to be good enough to make it from the house to the car, to the destination, etc., with a few of those inductive chargers strategically placed, even in public spaces.

PS: one thing that is wrong with those devices is that you cannot charge and talk at the same time unless you use Bluetooth or a headset.

Android phone Nexus 6 redux

I'm struggling with a number of missing features, like badges. I was just remarking to a friend of mine that different Android phones seem to offer different levels of user comfort. My father, long since a programmer and now retired, is generally very happy with his LG. Buying into the Samsung mobile lifestyle is similar to the Apple lifestyle: the application user interfaces are curated. Using a naked Nexus 6, however, gives me the feeling I'm driving a soapbox car and not a Ferrari. My Nexus 6 needs a little more battery life and an overall better feel.

The first reason for a 6-inch screen is to have more detail. Displaying content at different magnifications is less useful, in my experience.

golang required import foo

I was working with a 3rd party library that insisted that I import a particular dependency. The problem was that the package was NEVER referenced.  It turned out that there is a bit of FOO in the package declaration:
package osext // import ""
That comment at the end of the line tells the compiler to do something. It's an odd place to put this documentation, but I think it reflects the intent, as it may be from the official proposal. At least it seems to work this way.

Sunday, April 26, 2015

Nexus 6 + Android 5.1

What have I done? It's one thing to have a ChromeOS laptop fully baked into the Google experience, but my initial reaction to the Nexus 6 + Android 5.1 is less than stellar. I have become somewhat dependent on some iPhone features:

  • badges like email or message counts
  • sticky notifications
  • common sense volume control for ring and media
  • quality 3rd party apps - android still feels kludgy
  • AT&T only sells the 32GB version so I'm expecting to run out of memory, except it's possible that native Android apps are smarter about memory
Other complaints
  • waking the android from sleep always causes a burp from the audio (might actually be defective)
  • The AT&T splash screen has to go. In fact there should be no splash audio because you never know where you're going to be powering up
  • I did manage to lose some contacts as I migrated. The contact was in my Apple contacts but not my Gmail contacts... and it should have been
  • Hangouts and Message are junk. Hangouts cannot determine what it is and Message is anemic. Somewhere in there should be something that looks like Fi.
Of course
  • I'd really like to be able to delete the standard apps like mail in preference of the gmail version (Where is Microsoft Explorer now?)
  • I really hate that they changed the power connector
Frankly it's not good for either company. Maybe I should have gone with the Samsung or the LG?

OMG, and there is the whole unification thing with messenger that I forgot about. My wife's address book thinks I'm an iPhone user when I've switched. Now she's texting me and I'm not getting the messages because it's the internal Apple message system. *sigh*

UPDATE: Android has a do-not-disturb feature; although it works, it was not designed by a UX developer... or maybe they were trying to avoid a patent lawsuit.

Saturday, April 25, 2015

Merchandising and device accessories; chromebook

As anyone who has watched Spaceballs knows, it's merchandising. But what they don't tell you is that the second level of merchandising hell is accessories. If you need to see a better example, it's Apple. I have the strongest intuition that there is some formula somewhere suggesting that the number of extra power supplies purchased by its customers is related to the cost and size of the power supply; all of which is offset by the battery life.

Now the latest laptop from Apple has switched over to the new USB form factor and they are charging a premium for the computer and the dongles. DOH. They lost me as a customer here.

I just purchased a chromebook pixel and I considered purchasing a second power supply. In the end I decided not to. The claim from Google:
Up to 12 hours of battery life. Fast-charging gets you 2 hours of power in 15 minutes.*
I'm still annoyed that they added the new USB form factor and that at least one of the ports has to be used for power, but with 12 hours, who cares. At least they included two legacy USB ports and still offer the SD card slot.

The chromebook is going to be the ideal work laptop for me. (a) I do all my communication via google and keep everything in the cloud (b) I can use crouton for local development and I can switch back and forth between systems (c) I do the rest of my development remotely (d) and I'm less worried about my kids deleting the contents of my HDD.

However... I still need to buy a Type-C to USB adapter and a Type-C to HDMI adapter; there is a chance I might need a DisplayPort version, but that cable is longer and less convenient.

UPDATE: I forgot to mention that I get 1TB of storage for 3 years at no cost, which has a $300 value. This puts the cost/benefit way over the Apple.

Friday, April 24, 2015

Golang Constructor-ization

Here is a reminder to myself (to my future self) for when golang constructors and privacy collide. The implementation should be self evident, but if you need some additional search terms...
Golang constructor variadic functions as parameters
or something like that. Now here is the code sample.
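The sample itself seems to have gone missing from this copy of the post, so here is a minimal sketch of the pattern those search terms point to: a constructor taking variadic option functions to set unexported fields. All the names here are illustrative:

```go
package main

import "fmt"

// server has only unexported fields, so callers outside the package
// must go through the constructor to configure it.
type server struct {
	host string
	port int
}

// Option is a function that mutates the server during construction.
type Option func(*server)

func WithHost(h string) Option { return func(s *server) { s.host = h } }
func WithPort(p int) Option    { return func(s *server) { s.port = p } }

// NewServer applies defaults first, then each variadic option in order.
func NewServer(opts ...Option) *server {
	s := &server{host: "localhost", port: 8080}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewServer(WithPort(9000))
	fmt.Printf("%s:%d\n", s.host, s.port) // localhost:9000
}
```

The nice part is that callers only name the options they care about, and the zero-argument call still yields a usable value.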

Thursday, April 23, 2015

Atom Shell was super easy

The Atom Electron project, formerly known as Atom Shell, was easier to implement than I could have ever imagined.

  1. download the binary from the release page
  2. create a folder for your hello world project
  3. create the 3 required files (see quick start docs)
  4. launch the electron binary
  5. drag your folder onto the landing part of the executable's window
And your application should just start running. The quick start page offers additional information for running the application from the command line etc.

In my use-case I have two additional requirements:
  • kiosk mode
  • disable close window and quit
Adding these features was trivial (full listing). Essentially I needed to (a) capture the application quit event and prevent the default action, and (b) capture the close window event and prevent its default action.

app.on('before-quit', function(event) { event.preventDefault(); });

mainWindow.on('close', function(event) { event.preventDefault(); });
And that was it.

UPDATE: Electron does not seem to work in my Ubuntu 14.04 environment.

Wednesday, April 22, 2015

Large or Complex - remember me

I'm glad these folks were able to summarize my experience. It sounds so familiar to me.

How Complex Systems Fail (PDF)
Lessons Learned while Working on Large-Scale Server Software (link)

One thing I would add:

Stupidity begets stupidity. This means the same thing as "measure twice and cut once". It is particularly useful when answering the pager at 3am after getting your drink on all night; this has less to do with your drink than it does arrogance. (a) know when to take action and when to back out (b) make sure your choices are easily reversible (c) keep logs (d) have a battle plan before you start (e) stick to the script (f) have a followup plan to make sure it's complete (g) and have an escalation plan in case it goes bad.

what sort of team organization strategies do you employ?

I see team organization in approximately 4 flavors.

The first is tic-tac-toe. When you first learn the game you slap your X's and O's in the grid as soon as you can. The notion that there is a real strategy underneath is still in the distant future. Sadly, passing from that to strategy and then to futility is just a few games away.

"Dots" seems to have a strategy from the onset. At least the plan would seem to be trading captures 1:1 with your opponent until you get closer to the end game, when you start playing for multiple captures and giving up limited captures.

Caribbean and South American style dominoes seem to be a simple random-number problem. I'm not sure how there is a strategy there unless there is some sort of signaling system between players, and that might amount to cheating.

Checkers - there seems to be some sort of strategy here; however, the more you play with one player, given the limited number of moves, the more you tend to learn your opponent's strategies...

And finally chess. A chess game can last years. Opening moves are deliberate, planned, with no wasted energy or moves. The plan is to keep the number of future options as plentiful as possible and yet not wasting time or advantage.

Which game do you play?

better project names

I have complained about how Go projects add a 'go' prefix or suffix as a badge for brand recognition. I suppose in the object-oriented world of namespacing it's almost a requirement. But I fundamentally disagree with naming an executable after its implementation language; that would be as weird as:

  • cppbind or cbind when it's currently named bind
  • c#outlook and not outlook
  • vbexcel
I think I made my point.

But now there is a new class of junk: when a project is implemented in one language but meant as a tool for another. For example: boojs. The project is written in Ruby but executes JavaScript. In this case the name is sitting on a razor-wire fence. From the name you cannot be sure if it's implemented in JS, or something else, or even what it does.

I've been working on a project called goose. I started the project a few months ago and only realized, last night, that 'go' was the prefix. (At least it was not go-oose.) In this project I've included Tcl, Lua and Lisp as interpreted languages, and the whole thing is written in Go. So does that mean I should change the name to something like:
maybe not.

"Minimal Linux Container Host"

VMware finally released its "Minimal Linux Container Host" (Photon). This is supposed to be the shim between the container (Rocket or Docker) and the ESX or vSphere under-layer. I imagine that this is the same thing as what the OpenStack community is doing.

From a business perspective this is just fine.  The containers that you would run in a kubernetes, deis, or similar orchestration framework would run on VMware or OpenStack; almost the same except OpenStack and VMware would be tasked to perform the orchestration. And there is nothing wrong with that if you're more interested in the homogeneous container/VM experience and less about the density of the containers themselves.

One of the selling points of containers is that they share the host operating system.

Let me note that because of the magnification factor you could choose your own baremetal solution, install some type of JE (just enough) linux with some sort of PXE boot server and then build your own cloud installation as you need it. It's surprisingly easy.

As for the criticism of how Photon was implemented or deployed: (a) it's early in their implementation (b) as I mentioned, it's a specialized use-case (c) it's also the Unix way of things.

Tuesday, April 21, 2015

dd does not give me any feedback

I'm cloning a drive using the following command:

dd if=/dev/sda of=/dev/sdb bs=512 conv=noerror,sync
It works fine, but there is no user feedback that I can see, and there is nothing in the man page that provides any illumination. A quick search yielded two commands. The first sends a signal to dd, which makes it dump its current state.
sudo kill -USR1 $(pgrep ^dd)
There is nothing wrong with this technique. It can be added to a watch command with a reasonable interval so you can watch it make progress.
watch -n5 'sudo kill -USR1 $(pgrep ^dd)'
This is no big deal, but it requires that the user open two windows: one for the copy and the other to watch. I ended up using pipe viewer (pv). I had seen this tool a few years ago but never had a real reason to use it. Now I can watch my drive get cloned.
dd if=/dev/sda bs=512  |  pv  |  dd of=/dev/sdb bs=512 conv=noerror,sync
Of course there is nothing special here either. The happiness is that I'm watching the throughput on the display as progress is being made. In this instance I'm cloning a 60GB drive, and although it's USB2, it's only making progress at 10MB/s. Depressing.

references: link

rancher docker registry

Earlier today I wrote an article describing a Rancher quick start. When I mentioned an enterprise deployment, I said that I needed to install a registry server. Since Docker released version 2 of the registry, I thought: no time like the present.

  • open the rancher console in a browser:
  • click container
  • click add container
  • provide your vanity container name: "registry". This is a name I selected not the container name in the registry
  • in the "image" select "docker" and type "registry"
  • click create
Now Rancher is going to go get the container and start instantiating it. I did not select a particular host, but it selected one for me. Rancher also started a sidekick container for the registry container so that networking is obvious.

Once the registry service is running you should add it to the rancher registry list.
  • get the IP address of the container you created above
  • click registries
  • click add registry
  • provide your vanity name for this service
  • provide the IP address
  • click add
Now if you add a new container you'll see that the registry list now includes the registry you just created.

One thing that is TBD is persisting the storage for the registry so it can be rebooted. But I did not get to that point in the configuration... but I should.

UPDATE: I had decided to install the latest etcd. (a) adding a container seemed to be a little round robin - unexpected goodness. (b) I had to add it to the registry list. (c) the first attempt to install etcd failed because I tried 'etcd' and it should have been 'coreos/etcd'. Interestingly, when it failed, Rancher left the sidekick in running mode (with no way to delete it), and when I corrected the image name it did not create a new sidekick (I assume Rancher reused the existing one). Too many unknowns.

UPDATE2: I was hoping this was the end of the story, but it's not. It seems either the Rancher agent or server is failing. This is not reliable for me.

rancher very quick start

This is so easy and simple it makes me nuts:

  • download and install virtualbox
  • download and install vagrant
  • git clone
  • cd os-vagrant
  • vagrant up
  • vagrant ssh rancher-01
  • docker run -d -p 8080:8080 rancher/server
  • ifconfig (get the eth1 ipaddr)
  • open local browser:
  • click hosts (already by default view)
  • click + add host
  • review and click save settings
  • click custom
  • copy the command in step 3, omitting the leading 'sudo'
  • exit (CTRL+D) the terminal session on rancher-01
  • vagrant ssh rancher-02
  • paste the command: docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.5.2
  • exit (CTRL+D) the terminal session on rancher-02
  • vagrant ssh rancher-03
  • paste the command: docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.5.2
  • you might have to refresh the browser a few times as the agent(s) startup
At this point you have a 3 node deployment of rancherOS with rancher server running on rancher-01 and rancher agent(s) running on nodes rancher-02 and rancher-03.

If you're an enterprise class user:
  • then you might need to increase the number of nodes in the Vagrantfile, redeploy, relink all of the agents and so on
  • you'll want to deploy your own registry server
  • and you might want to disable or intercept the public repos via some dns trickery
The Rancher tooling does not include any OS or Rancher tools for monitoring, health, alerting, logging and so on... so you really have to start bolting IoT (internet of things) onto your environment. It also depends on whether you want to self-host these operations or use SaaS-type services.

One thing to note about any of these orchestration applications. (a) there is a certain amount of magnification (or Mandelbrot) that takes place between bare metal and containers; so keep an eye on the APIs because they may be the one constant. (b) you will still need bare metal servers to provide services that you do not want to be inside the VM or container. HA/load balancers, NTP, DNS, PXE server and storage servers come to mind.

PS: if you are going to run rancherVMs inside your containers then you gotta have huge bare metal hosts and you need a way to orchestrate the work as rancher leaves that to the user.

Monday, April 20, 2015

backing up my drive

I have a system on which I installed Ubuntu 14.04 and then stripped everything except the bare essentials. I can clone the drive with simple dd commands, but they can be a pain to manage. The source drive I'm working with is 60GB but I've only used about 5GB, so the whole thing should fit on one of my new 8GB USB sticks. (I think I should have purchased the Kingston brand drives instead, but that's what I have for now.)

dd if=/dev/sdX conv=sync,noerror bs=64K | gzip -c  > /path/to/backup.img.gz
 fdisk -l /dev/sdX > /path/to/
gunzip -c /path/to/backup.img.gz | dd of=/dev/sdX
dd if=/dev/sdX of=/dev/sdY bs=512 conv=noerror,sync
I should have captured more than just the partitions but that's it for now. The refs below have some information on the MBR.

I have yet to test the restore, but it should work in principle. It's probably important that the drives be exactly the same size.

Here are the refs (link) (link)

Sunday, April 19, 2015

Golang as a learning language

I'm not sure that Golang should be a learning language. Many of the crappy bits in Ruby are there because noobs were able to influence the language development and the subsequent libraries and applications. Unfortunately this influence is poison; as an example, just look at the faction that is clamoring for generics.

eGalax touch on default Ubuntu 14.04.2 LTS

I have not had success with the touch drivers as yet. The touch works and evtest seems to report events; however, the button click is not working and no matter what I do xinput refuses to configure the buttons correctly. When I downgraded to Ubuntu 10.04 LTS everything sort of worked... there must have been something in the kernel, as 10.04 was on the 2.6 kernel and 14.04 is on the 3.x branch.

One thing... all of the documentation pointed to the wrong website or one in Taiwanese. I was finally able to locate the drivers again (it would have been nice if they provided the install instructions in text rather than PDF):
Please open the document "EETI_eGTouch_Programming_Guide" under the Guide directory, and follow the Guidline to install driver.

  1. download the appropriate version
  2. unzip the file
  3. read the programming manual
And from that I'm distilling the following:
  • execute the
answer all of the questions in the most obvious way. There is a small complaint by the installer at the end about installing a patch. I assumed that the patch was already installed. Everything worked!


# . ./ 

(*) Driver installer for touch controller 
(*) Script Version = 1.04.4330 

(I) Check user permission: root, you are the supervisor.
(I) Platform application binary interface = i686
(W) X server detected.

Declaration and Disclaimer
The programs, including but not limited to software and/or firmware (hereinafter referred to "Programs" or "PROGRAMS", are owned by eGalax_eMPIA Technology Inc. (hereinafter referred to EETI) and are compiled from EETI Source code. EETI hereby grants to licensee a personal, non-exclusive, non-transferable license to copy, use and create derivative works of Programs for the sole purpose in conjunction with an EETI Product, including but not limited to integrated circuit and/or controller. Any reproduction, copies, modification, translation, compilation, application, or representation of Programs except as specified above is prohibited without the express written permission by EETI.

Disclaimer: EETI MAKES NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH REGARD TO PROGRAMS, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. EETI reserves the right to make changes without further notice to the Programs described herein. Licensee agrees that EETI does not assume any liability, damages or costs, including but not limited to attorney fees, arising out from Programs themselves or those arising out from the application or combination Programs into other products or circuit. The use or the inclusion of EETI's Programs implies that the Licensee assumes all risk of such use and in doing so indemnifies EETI against all charges, including but not limited to any claims to infringement of any third party's intellectual property right.

Do you agree with above patent declaration?
 [Y] Yes, I agree.  [N] No, I don't agree.

(Q) Which interface controller do you use?
(I) [1] RS232 [2] USB [3] PS2 : 2
(I) Please confirm the touch controller is linked with your device. Press [Enter] key to continue..........

(I) Found /etc/rc.local file.
(I) Found a HID compliant touch controller.
(I) Found inbuilt kernel module: usbtouchscreen
(I) It is highly recommended to add it into blacklist.
(Q) Do you want to add it into blacklist? (y/n) y
(I) Add kernel module usbtouchscreen into /etc/modprobe.d/blacklist.conf.

(W) Found a PID:0001 touch controller in kernel 3.8 upwards.

(W) You need to do kernel patch first.
(W) Please follow the Programming Guide to patch kernel.

 [Y] Yes, I've patched kernel already.  [N] No, I haven't patched.
(I) X.Org X server 1.16.0
(I) X version is 1.7.6 upwards
(I) Found uinput at path /dev/uinput
(I) Place eGTouch driver archive to /usr/local/eGTouch32withX.
(I) Create eGTouch daemon shortcut in /usr/bin.
(I) Create eGTouchU tool shortcut in /usr/bin.
(I) Create eCalib tool shortcut in /usr/bin.
(I) Append eGTouch daemon execution into /etc/rc.local.

(Q) How many controllers do you want to plug-in to system? [1-10]
(I) Default [1]:

(I) Device Nums is set to 1
(I) Copy udev rule: 52-egalax-virtual.conf to /usr/share/X11/xorg.conf.d.
(I) Create eGTouchU shortcut in application list.

(I) Driver installation completed. Setup version 1.04.4330.
(I) Please reboot the system.

Friday, April 17, 2015

Circuit or Fleet?

The Circuit website makes this claim:
The circuit is unique in one respect: Once a circuit cluster is formed, the circuit system itself cannot fail—only individual hosts can. In contrast, comparable systems (like CoreOS, Consul and Mesosphere) can fail if the hardware hosting the system's own software fails.
And yet the CoreOS documentation, referring to Fleet, says this:
Automatic rescheduling of units on machine failure
Which clearly contradicts the Circuit statement. I have a suspicion that Mesosphere, as well as Kubernetes, can also reschedule tasks.

I thought I had reviewed gocircuit before; however, I cannot locate the original post. If memory serves, I liked the idea but could not find a good reason to use it. I remember seeing a presentation, and while it seemed to work, it was all very special-purpose. Especially since I've already indicated that fleet [and docker or rocket] can close the loop.

Change your programming language?

To justify changing from one [programming] language to another you need to have some multiplication of productivity.
-- Bruce Eckel (video 16:57)

Software Patents - The Good and Evil

I suppose there are some edge cases in software that I would like to patent. I'm not quite sure what they are or where they might be, but the idea that just because I did or did not conceive of a use-case, someone else cannot take advantage of it... is nuts. That someone thinks you can patent open source is just silly.

I'm not sure what Facebook is planning here, but making React open source while trying to patent it is like giving someone a pen and paper and telling them (a) you cannot draw pictures (b) or make a paper airplane. And you certainly do not need a patent to protect React when licensing can take care of the rest. The various versions of the GPL have various encumbrances and limitations (GPL-A).

Honestly, what is the purpose of open sourcing React if they want to prevent people from using it?

fork fork fork those 3rd party packages that you depend on

For forks sake
I just had a project explode in my face because I refreshed my dependencies with the original source. Don't tell me that I could have done a hundred things to backup, restore, fork, godep, vendor...

I have long held that when you produce an API, it becomes the contract between you and the consumer. You might add features or refactor the code under the APIs; that's all well and good. But if you're going to delete or change the API signatures, then you gotta version the APIs so that it's obvious.

Thursday, April 16, 2015

Great CoreOS news

This release in the alpha channel is the first to offer etcd 2.X. That's great news. Time to rebuild my servers.

golang - are short declarations positional

Short variable declarations are a mainstay of the Go language and it looks something like:
a, b := 1, 2
a, c := 3, 4
d, a := 5, 6
Notice that in lines 2 and 3 the variables 'c' and 'd' are declared and assigned, while 'a' has its value reassigned.

So the question: is there an idiomatic position, from the Go authors, as to whether or not error should be the last value in a function's return list?
func() (int, error)
func() (error, int)
All of the sample code I've read has the error declared in the second, or last, position and the user data in the first or left-most positions.
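To make both points concrete, here's a small sketch (the function and values are invented): the short declaration reuses err across calls because at least one name on the left is new, and the error rides in the conventional last position:

```go
package main

import (
	"errors"
	"fmt"
)

// half follows the stdlib convention: data first, error last.
func half(n int) (int, error) {
	if n%2 != 0 {
		return 0, errors.New("odd input")
	}
	return n / 2, nil
}

func main() {
	// ':=' may reuse a variable as long as at least one name on the
	// left is new: here err is redeclared-and-reassigned each time.
	a, err := half(8)
	b, err := half(6)
	fmt.Println(a, b, err) // 4 3 <nil>

	if _, err := half(3); err != nil {
		fmt.Println("as expected:", err)
	}
}
```

Error-last also composes nicely with the `if v, err := f(); err != nil` idiom shown at the end.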

Docker Memory Footprint

I have an f1-micro instance hosted on GCE (Google Compute Engine). It's great for hosting a reverse proxy or a load balancer, and maybe a very small cache or webapp. But nothing more than that!

I tried to install RethinkDB 2.0, and while it installed, the deploy request responded with a memory warning. Then when I installed a Bosun container on the same VM the machine stopped responding. I was able to determine that the 0.6GB RAM system was feverishly swapping. And that got me to thinking...
hypervisor
1+ guest OS
1+ container (with full OS)
The rule should be: if you are running a container that includes a full OS base image, then the guestOS must have at least 1GB RAM per instance. And if the container is either scratch or BusyBox, then the guestOS should have at least 0.5GB per container.

If the guest OS is the only OS on the box then all of the resources (read: RAM) belong to that guest and therefore only that instance need be concerned with swapping. Remember that swapping can take place at all levels: the host OS, guest OS, and the container OS.

The reality rule is: build your apps standalone with simple self installers so that you get the necessary level of compaction. The only reason that containers are going to be useful is for multi-tenancy.

Tuesday, April 14, 2015

Saturday, April 11, 2015

Stick with the Golang Stdlib

So many people get all wrapped up in using 3rd party packages for everything from processing command line flags to Fourier transforms. While I know absolutely nothing about Fourier transforms and quite a lot about processing the CLI, I'm not about to import packages that I do not trust and understand.

For example:

I read this article on working directories. At first I thought it was a reasonable idea. Then the author linked to a package osext. And I about fell out of my chair. I posted a response to the article which has not been moderated yet but it went something like:
os.Args[0] already provides the path of the executable. It even works with `go run`. However, locating artifacts and config files is idiomatic beyond golang: look in /etc, $HOME, the cwd, next to the executable, and on the CLI. Using the path to the executable is not idiomatic, especially for webservers as in the given example.

Less is more

I like it when I delete dead project ideas from bitbucket and github. Especially when I've forked a project that I no longer rely on. It's just so fulfilling and better than spring cleaning the garage.

Friday, April 10, 2015

Quick-Start Fossil SCM

There is something to be said about Git and Mercurial. They are clearly the leaders of the pack and there are many good reasons to like them. However there may be as many reasons to hate them too, and here's a short list.

  • Linus made it difficult on purpose
  • Git is not entirely binary, leaving bits in Perl
  • Mercurial is all Python; not particularly interesting, but package hell
And there is a lot to like about fossil
  • single file repo making it easy to backup and restore
  • single executable
  • based on SQLite; from the SQLite author
  • embedded wiki and issue tracker
While there is rough feature parity, fossil's features are not implemented in Go, and that's as good a reason to be upset as any. All that said I still want to deploy my latest fossil repo. As such this is the quick-start.

My environment is (a) my MacBook (b) a google compute engine instance running CoreOS. Right now I'm planning to run the remote fossil instance directly on my CoreOS instance, however, it should be moved to a docker container as soon as possible. (should be easy enough).

Quick Start
  1. install fossil on the target machines. (not going to describe that)
  2. Let's start on the mac
    1. mkdir $HOME/fossil.repo
    2. cd $HOME/fossil.repo
    3. fossil init example.fossil
    4. cd $HOME
    5. mkdir -p $HOME/src/fossil-scm/example
    6. cd $HOME/src/fossil-scm/example
    7. fossil open $HOME/fossil.repo/example.fossil
    8. echo "richard bucker" >> contributors.txt
    9. fossil add .
    10. fossil commit -m "initial import of contributors"
  3. now let's register the remote repo
    1. scp example.fossil <username>@<hostname>:./fossil.repo/.
    2. fossil remote-url  'ssh://<username>:<pwd>@<IPADDR>//path_to_repo/fossil.repo/example.fossil?fossil=/path_to_fossil_bin/fossil'
    3. fossil sync
I was not crazy about the scp in 3.1 but I guess that's the only way it works. I suppose the intent is to create the repo on the remote system and then clone it locally. These steps, plus specifying where the fossil executable is located when using ssh, are just a few extra steps. There are clearly a few more steps required in order to get a server running. Maybe another day, because it requires setting up a webserver as a reverse proxy.
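The local half of the quick-start (steps 2.1 through 2.10) can be collected into one script; paths are the same as above and fossil is assumed to be on the PATH:

```shell
#!/bin/sh
# Local side of the fossil quick-start; skips quietly if fossil is absent.
set -e
command -v fossil >/dev/null 2>&1 || { echo "fossil not installed" >&2; exit 0; }

REPO_DIR="$HOME/fossil.repo"
WORK_DIR="$HOME/src/fossil-scm/example"

mkdir -p "$REPO_DIR"
fossil init "$REPO_DIR/example.fossil"      # create the single-file repo
mkdir -p "$WORK_DIR"
cd "$WORK_DIR"
fossil open "$REPO_DIR/example.fossil"      # check out a working copy
echo "richard bucker" >> contributors.txt
fossil add .
fossil commit -m "initial import of contributors"
```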

Thursday, April 9, 2015

RancherOS without local storage

RancherOS is another one of those JE (just enough) Linux distros. When I discovered it there was this:
If you are running from the ISO RancherOS will be running from memory. In order to persist to disk you need to format a file system with the label RANCHER_STATE.
At the time I was reading this I was under the impression that (a) persistence was optional (b) and that it was incomplete. Sadly, now that I read it again I read something completely different. But at the time the emphasis was on "not ready".

Now, as I reconsider the statement, I have a different opinion. Why not simply leave the OS ephemeral so that it is reloaded every time it boots? Meaning that one might boot the bare metal and PXE boot the root OS every time. Then with a little magic have the first boot register the system with some MCP (master control program) and start processing.

This means that there have to be a few static services, like a registry for the instance containers and a storage vault for a network filesystem, plus a few bootstrapped services. The idea would be some sort of self-organizing dynamic system.

If you think about it a little it's got some salsa to it.

From the "you learn something new every day" category

From time to time I test golang ideas here. The code editor is pretty simple and sufficient to the task. But then I was trolling the chrome webstore when I found this gem.

After clicking on the add button there was silence and not even a whimper. I was initially expecting to see a system icon or an app icon but neither were apparent.

I decided to point my browser to and there it was. Somehow the project replaced all the builtin goodness with some awesomeness.

It might still take some getting used to but it's similar and simple enough. I'm jealous that I did not come up with this idea and now I need to reverse engineer it.

Y2K and DLLs

What do Y2K and DLLs have in common?

Y2K was the biggest load of nothing to happen in modern history. Everyone went running around expecting the bottom to fall out of everything from the wrist watch to the power grid. But it never happened. (Typically because smart programmers restored the century that had been trimmed from their internal date formats.)

A manager of mine said, of Y2K, that he had received a 10K bonus for saving some number of bytes by compressing the century out of all date formats, and now he was getting a similar bonus for putting it back in.

DLLs are dynamic link libraries: binary files containing code that is shared between applications, so multiple programs can use a single library. They were invented, or at least made popular, around the time of early Windows as Microsoft was evolving from DOS .com and manageable .exe files. Windows had so much bloat and the disk drives at the time were so small that this was the only way to keep things manageable. Today DLLs are almost obsolete, as evidenced by languages like Go that offer a statically linked option.

One of the biggest challenges for DLLs is that they may be versioned meaning that there are compatibility issues as they evolve and it's up to the installers and package managers to keep things organized and running.

What I'm pointing out is that the decisions that led up to Y2K were avoidable and in the end cost many millions in consulting fees, development costs, and project overruns. DLLs, likewise, created a number of challenges for the Windows installers and lots of compatibility issues over time.

What is it that you are doing right now, what decisions are you making that has the potential to be on scale with these?  Well, stop it!

Wednesday, April 8, 2015

Codiad on the road to turbo go

I tried Codiad. There is a lot to like about it even though there are more features than the original Turbo Pascal. Since I want a Turbo Go tool, the fact that Codiad is implemented in PHP is a little unsettling. Once I had it running I needed to put my workspaces directly in the projects folder structure, therefore invalidating the idiomatic GOPATH structure.

Next, I started playing with gowatcher in order to compile the code while I was editing and saving. This only marginally worked; partly because of the GOPATH issue. But mostly because I could not run goimports in the pipeline. If I did run goimports then the IDE was not smart enough to import the changes and would therefore overwrite any changes I might have made to that point.

Collaboration is not a must-have feature but it would be a nice one to have anyway. Codemirror and Ace both have potential, but the tools/plugins that help are either not free or difficult to plug in. While both Ace and Codemirror are excellent they are still limited. It would be great if they included the client/server bits so that I would be left to implement only the watcher and builder.

Since Ace is being implemented by Github it's only a matter of time before they complete the design and turn github into an IDE of sorts. For now I think I need to reboot my turbo go.

golang context

There is no single use-case that defines the best time to use a "context". In one of the larger programs I wrote, I used a context in order to log and timestamp the entry and exit point of every function/method in the callstack. I could have gone deeper and reflected on the current function and its parameters too, but I stopped with the trace, transaction id, and the duration/elapsed time.

Then the golang authors created their own "Context"; documented here and written about here. While it is said that Google implements a "Context" parameter in all of its internal tools, their sense of a context is for a different purpose: mostly trapping elapsed time and canceling long running tasks and their child goroutines.

Frankly it's a little more complicated than that. I read through the sample code, the package code and the article and other than being able to trap a timeout I'm still digging my way out.

Here is my samplesocket project. You are welcome to contribute. Before the haters warm up their engine; this is not code I would run in production or even let out of the barn to play with Wilbur. It's just some code to start a conversation about Context.

Please Help me Win a Chromebook Pixel

I'm not exactly sure how this works but if you have a moment to click this link I'd appreciate it. I have been in a tailspin trying to decide what my next computer will be based on the failure of yet another MacBook and so I'm in the market for a Chromebook instead. I really want to get everything into the cloud and also try developing a proper workflow for software development.

But I need, or rather want, a top notch chromebook. So if you have nothing better to do please click for me.

Documentation in Code or Code in Documentation; that is the question

Given that Go now includes go generate I'm wondering if I should use markdown syntax to define and document my code or code my documentation.  For example I have a struct with about 100 fields and at some point I need to develop some real enduser documentation in the form of something that looks like a transaction and this is usually best viewed as a table.
  • field name
  • starting pos
  • length
  • data type
  • validation or regex
  • description
and so on.  Using MarkDown means that I can generate proper enduser (non-programmer) documentation. Using go's generate function I should be able to convert the MarkDown back to code and I can also generate proper PDF and HTML for the enduser.

I might even be able to embed code, in formatted, function form so that it could be exported and then compiled.
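The code-to-documentation direction can be sketched with reflection over struct tags; the struct, field names, and tag keys here are invented for illustration, not part of any real layout:

```go
package main

import (
	"fmt"
	"reflect"
)

// Transaction documents a fixed-position record via struct tags.
// The tags are hypothetical; a real layout would define its own.
type Transaction struct {
	Account string `pos:"0" len:"16" desc:"primary account number"`
	Amount  string `pos:"16" len:"12" desc:"amount, implied decimal"`
}

// markdownTable renders the struct's field layout as a MarkDown table,
// which can then become enduser PDF or HTML.
func markdownTable(v interface{}) string {
	t := reflect.TypeOf(v)
	out := "| field | pos | len | description |\n|---|---|---|---|\n"
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		out += fmt.Sprintf("| %s | %s | %s | %s |\n",
			f.Name, f.Tag.Get("pos"), f.Tag.Get("len"), f.Tag.Get("desc"))
	}
	return out
}

func main() {
	fmt.Print(markdownTable(Transaction{}))
}
```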

Turbo Go

I've said it before so let me repeat myself.
The original Turbo Pascal from Borland International is/was the quintessential IDE and everything since then is a poor imitation.
The basic requirements:

  • simple editor
  • fullscreen
  • go syntax highlighting
  • goimports
  • terminal window
  • makefile support
  • fswatch, rebuild and redeploy

Some of the features I would like but are not very important:

  • multiple project or project aware
  • css, html, js, go templates syntax
  • git, fossil etc...
  • optional folder tree
  • theme
  • browser support

I think I'm missing a few minor requirements and I think there are a few things I'd cut, however, nothing else matters.

PS: the atom editor project seems to be coming along nicely. It runs great from the desktop, however, I think it would be nice to be able to run it remotely.

UPDATE:  Codiad could be a good foundation.

Monday, April 6, 2015

The "Apple Rings Everywhere" ecosystem

I used to think it would be really cool if, when my cellphone rang, I could be notified on all of my devices and maybe even answer the phone in the Dick Tracy way. I would also find it interesting, if I had my headphones on and could not hear the phone, to be interrupted through the headphones. (This was demonstrated many years ago when TV manufacturers implemented on-screen caller ID.) But now I realize I hate the feature.

If I forget to turn the volume down on the plethora of Apple sync'd devices in the house or I forget a device in my kid's room... I'm likely to wake them up with the next phone call that I miss.

I'm not sure that the ideal unified device world is here, and I'm not sure that any level of unified command center is going to resolve that. I've sync'd my Google Chrome browser instances, and while I can share plenty of information I'd also like to be able to delete bookmarks across all devices too. Even if I can do that some day, will it also suffer from the same affliction of over-implementation?

Sunday, April 5, 2015

Developer-defined infrastructure

Venture Beat is running an article "The geek shall inherit the earth: The age of developer-defined infrastructure" where they talk about the evolution of infrastructure design but I think they missed a few things.

For one, while DDI seems to put the design in the developers' hands, implying that the role of the architect has been reduced, that is akin to claiming NoSQL does not require a DBA. So DDI is actually a misnomer. You could make the case that they are really talking about ad hoc, multiple simultaneous infrastructures [which is something completely different].

The second omission is that the definitions provided are actually layers. There will always be some definition for the physical layer. You can push some of the configuration up to the software layer, but at some point there is hardware and there are interconnects. The same can be said of the software layer and the DDI layer. They are building blocks of evolution, not discrete inventions.

Saturday, April 4, 2015

envtmpl instead of safekeeper

Recently I wrote about safekeeper and I complained about the use of a 3rd party package when it was not necessary. I also noticed that the code was overly stringified: (a) you had to specify which environment variables to process in the template, (b) the code had to prefix the variable names with ENV_, and (c) it pulled in kingpin.

One thing that caught my attention immediately: why not use golang's templates? And then the second thing... doesn't golang have a full environment array?

Of course it does. And so I reimplemented the code in my own project: envtmpl.

Performance is not an issue here, but I strongly believe that the template approach is going to be faster. It's just intuition; however, if the template is parsed for tokens instead of doing a search-and-replace per string, that must be faster, unless scanning the document once for each key/value pair somehow runs in O(1).

etcd and fleet not starting with CoreOS running on GCE

Google Compute Engine (GCE) is a cool tool. If you do not know what it is you should check it out. Over the years they have made it more configurable than the limited selections that it started with. At one point you could load your own raw image and install that. Cool!

The best part is that I can deploy and destroy systems from a GUI, CLI, and API.

Now for the hard stuff.

I had a CoreOS installation, following the alpha channel, for almost a year now. In that year I have not had to do ANYTHING. In fact I lost my cloud-config file and was then able to recover it using both the CLI and the GUI.

One day, a few months ago, I lost the ability to connect to one of my subdomains. Between the naked domains on my nameserver and lighttpd acting as a reverse proxy... it just would not work.

What I discovered... (a) boot disk was full (b) no logs (c) etcd would not start (d) fleet would not start (e) my lighttpd container was deleted; it was a total mess.


The drive was full so I started cleaning up my images and instances. (not a great choice)

/var/log/btmp was HUGE and google results suggested I was under attack. So I emptied that file, and it is already growing again. There are a number of possible solutions, from fail2ban to denyhosts. I'll make a choice soon enough.
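The cleanup itself is a one-liner; demonstrated here on a scratch file, since on the real system the target would be /var/log/btmp and would need sudo (truncating in place keeps the file's inode and permissions, unlike deleting it):

```shell
#!/bin/sh
set -e
f=$(mktemp)
head -c 1048576 /dev/zero > "$f"   # simulate a 1MB runaway log
truncate -s 0 "$f"                 # empty it in place
wc -c < "$f"                       # prints 0
rm -f "$f"
```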

Ran my docker build and run commands for my lighttpd container

Removed the etcd and fleet sections in my cloud-config because my CoreOS instance was a single and those config options were for multinode.

And then it worked.

NOTE:  Rocket is now installed by default. NICE!

UPDATE: Just a helpful link. More etcd info link and link.

Friday, April 3, 2015

The new safekeeper

Some time ago I started using safekeeper in order to capture certain environment variables and embed them into my code (as template). Safekeeper is an obvious solution to a common problem so I was glad I did not have to implement rev 1. But once I started to review the code I realized some of the risks of 3rd party libraries, however, I was grateful safekeeper was only one small .go file.

Looking at the code I saw what I expected: all of the necessary code to perform the task at hand. A program with a few parameters that hinted at the input and output of the string-replacement task. But what I also found was the inclusion of an additional package, kingpin, a command line parser.

I looked at safekeeper from all directions trying to figure out what it was "exactly" that kingpin's CLI was doing better than the standard flag package. I could not find anything. Not even in the leftmost use case. So I forked safekeeper and implemented the CLI flags with the stdlib version.

Upon further reflection I have decided that I hate the implementation of the original safekeeper. Here is a sample:
//go:generate safekeeper --output=appsecrets.go --keys=CLIENT_ID,CLIENT_SECRET $GOFILE
What makes it hideous is the "keys" parameter. The go:generate directive executes the safekeeper program and passes along the subsequent parameters. One such parameter, keys, tells safekeeper which environment variables to use in the template. The code then prefixes the keys with the string "ENV_" and performs a number of string replaces on the template provided in the input param.

dumb dumb dumb.

(a) the stdlib already includes a template function
(b) it's easy to have collisions regardless of implementation
(c) why specify the keys and not use the entire ENV anyway, as-is

So the next time I have a swipe at safekeeper it'll be something like envtmpls or something like that... where one only specifies the input and output files and all of the data comes from the environment.

Google Docs does compliance

I'm always sensitive to where my data is. Whether it's in the cloud or on my laptop, flash drive, or NAS, etc... However, given my experience in the financial sector I have been exposed to PCI, HIPAA, SOX, ISO and a few other privacy and security audits. So while I consider my project to move my entire electronic life to Google Apps I was gratified to see these two articles: (link) (video). I hope I did not misunderstand anything...

NOTE: At this point the only problem with moving to a chromebook existence is that there is still something gratifying and warming about having my own servers. If things go south I can power them off, disconnect them from the internet, manage my own expectations and so on. The only downside is that I need to monitor them more than I want; starving my other projects of my time. (valuable or not). The cobbler's kids are shoeless and the emperor has no clothes.

Thursday, April 2, 2015

Docker - it's all about the APIs

Early in Docker's history there was a partial uproar when Docker (the company) decided to patent, copyright, or otherwise encumber their APIs. I do not know if this is true or not, but the chatter quickly subsided after the discussion lost momentum, and for all intents Docker management never entered into the discussion.

Yesterday evening, with the day's events still fresh in my head, it finally occurred to me. The Docker APIs are going to be the thing! And a really big thing.

Docker offers a number of tools and source. But let's just consider Docker, Docker Machine and Docker Swarm; for the moment.

Docker (proper) the daemon manages specific containers

Docker (proper) the command line is an interface to the daemon through the APIs.

Docker Machine provides access to a Docker Server through various providers like AWS, VMware, GCE, etc... through plug-in drivers.

Docker Swarm provides cluster services and APIs by acting as a proxy to Docker Machine and Docker proper.

And finally there is the user, user interface and the backend service provider.


At this point all things are just a homogeneous Docker stack. What makes this powerful is that the APIs are generally consistent, well understood and popular.

But now what would happen if you have a homegrown orchestration platform built on your own unikernel implementation like MirageOS or Ling? What would happen if you have your own PaaS with your own APIs that perform the same basic functionality of Docker?

In the current model:  user -> user interface -> docker swarm -> docker machine(s) -> docker server.

But then there is my proposal: user -> user interface -> docker swarm -> docker machine(s) -> my docker shim -> my PaaS. And then if I'm running Docker Servers in my PaaS then I can proxy the commands:  my PaaS -> Docker server.

The benefit of this architecture, albeit early stage, is that Docker becomes more about the APIs than the code. And so anyone can achieve true system composition from end to end.

Wednesday, April 1, 2015

agilemanifesto where did you go?

The agilemanifesto website is off the air. While I like the original work the derivatives are self serving. So I'm pasting the content here. I give all credit where credit is due... and of course you can always go to the wayback machine. - home, principles

Some other interesting links: RADWaterfall12 Factors,  12Factors for IoT.

Principles behind the Agile Manifesto

We follow these principles:
Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.

Welcome changing requirements, even late in
development. Agile processes harness change for
the customer's competitive advantage.

Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.

Business people and developers must work
together daily throughout the project.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development.
The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.

Continuous attention to technical excellence
and good design enhances agility.

Simplicity--the art of maximizing the amount
of work not done--is essential.

The best architectures, requirements, and designs
emerge from self-organizing teams.

At regular intervals, the team reflects on how
to become more effective, then tunes and adjusts
its behavior accordingly.
