Thursday, November 27, 2014

Just good enough or why TDD should be dead

It's Thanksgiving 2014 and we've returned home after a nice family Thanksgiving dinner. The icing on the cake was a recollection of the Vietnam-era Soviet Union and how that story could have meaning for the modern view of TDD and other types of software testing and development. While the stories were wonderful to hear I cannot do the storytelling justice, so I'll limit my comments to the learnings, except to say that the storyteller was an engineer whose career included working for the old Soviet space program and other agencies.

When we talked about the space program he verbalized something that UX designers already know: keep it simple. But what UX designers don't appreciate is that simple also means few moving parts, the second layer of simple in the actual implementation. As he described it, when you're out there in space you do not have time to be distracted by bad interfaces. And things have to be simple to limit the number of potential points of failure.

The conversation shifted to what I have always referred to as "good enough". When US planes were shot down in Vietnam they would be recovered and reverse engineered by Russian engineers. Oddly enough they discovered that the planes were constructed to be "good enough". It was said that US designers knew that the planes would eventually fail; either from the stresses they operated under or because the enemy would shoot them down. In either case there was no point in over-engineering them for the long run. They had to be "good enough". One specific example was the size of the bearings used on a particular component. In the US version the bearing was just a few mm and in the Soviet version it was much bigger. The smaller bearings were easier to manufacture but had a shorter life.

As I look back on my career I see moments when I was designing systems that I intended to live forever and others that were only going to run for a few years before they were replaced. One perfect example is a pair of payment gateways. The first system was implemented in the Erlang programming language. I had great hopes for the project based on the capabilities of the language and for the most part it has lived up to that potential; however, [a] it took a lot longer to write the gateway and [b] while the gateway is bulletproof it's been so long that I don't think I could make changes to it if I wanted to. Having the maturity to know what is good enough is contrary to what we are taught in school and what we learn early in our careers. Making the right decision is non-trivial. Most software is replaced after a few years, so there is no point in writing something built to live for decades.

So what I've learned is that I/we need to keep things [i] simple, [ii] good enough, and [iii] aware that simple and good enough also mean that the testing can be limited to that which supports simple and good enough. So the next time you're thinking of implementing tens of thousands of test cases meant to produce 100% code coverage, consider that your project might not need all of them and that simple things only need simple tests.

Tuesday, November 25, 2014

Need an alternative to Skype

I must have misplaced the original post so I'll post it again. I have a paid-for account on Skype but recently they started advertising to me. I could understand if I were a free user, but I have a balance, so why are you bothering me? Here is the proof...

Google Hangouts is a good second choice and I have already started to notify my contacts. I might also start using TeamViewer's version of meeting software. There is also Sococo, and I can always install a WebRTC server of my own. Skype's days are numbered.

Columnar Code NameSpaces

In order to define methods in a global namespace ... either there is no namespace except the first-class namespace, where all names are considered at the same semantic level, or there is a hierarchy of names representing all of the individual levels. In the first case everything might be labeled global.myname() and in the second it might be something like com.mydomain.payments.myname().

In the sense of language design there are pros and cons for having proper class structure related to object oriented design but then the global design is well suited for procedural and some functional languages. Flow based programming is extremely well suited for global namespaces.
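To make the two styles concrete, here is a small Go sketch (all the names are hypothetical): a flat, first-class registry where every name sits at the same semantic level, in the spirit of global.myname(), next to a lookup that makes the collision problem visible.

```go
package main

import "fmt"

// A flat, first-class namespace: every method name lives at the same
// semantic level, in the spirit of global.myname().
var global = map[string]func() string{
	"myname": func() string { return "hello from the flat namespace" },
}

// lookup resolves a name in the flat namespace. In a hierarchical
// scheme the level would be encoded in the name itself (for example
// com.mydomain.payments.myname()); Go approximates that with package
// paths rather than dotted identifiers.
func lookup(name string) string {
	if fn, ok := global[name]; ok {
		return fn()
	}
	return "undefined: " + name
}

func main() {
	fmt.Println(lookup("myname"))
	fmt.Println(lookup("missing"))
}
```

In the flat scheme, collisions are the caller's problem; in the hierarchical scheme the language resolves them for you, which is exactly the pro/con trade-off above.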

When I wrote the title for this post I was thinking I was going to implement a schema structure that could be used in a procedural context, however, as I started to construct the content I realized that I really wanted to implement the FBP model instead and that's pretty simple.

  • method name
  • method language type
  • method configs
  • method inputs
  • method outputs
  • method code
  • method testcases
  • testcase language
There is not much else to the code construction. But now we need to design the program flow.
  • method name
  • input type from method
  • config type from method
  • output type to method name
  • instance count (for concurrency)

The network map is a little sparser than what I have already implemented, but I need to refine this a little so I can create the actual schema.

code:deck - the game as a metaphor

I have no idea how to play code:deck or if it is even a game. What I like is what is implied by "code on a card". It harkens back to the days when best practices regularly defined the ideal size for a function or class as enough code to fit on a single printed page. In recent years this has come to mean "fit on a single screen" which is also reasonable. The ideal column view, for me, would be a contextual column view.

Creating Extensions for Atom and Brackets

This project is an attempt to recapture the feelings I had when I was programming with Borland International's Turbo Pascal 1.0. Even though I cannot find screenshots, and I have no intention of relearning the WordStar navigation keys etc., I really want to recreate that pure developer experience even if it's just for myself.

The first step in the process is going to be some variation on the theme. Can I get Atom or Brackets to behave nicely? And can I merge or create all of the dependencies that I need? So the first step in that process is creating the first package, compiling or assembling the package, and then publishing the package.

Here are the first links that need reading:

I'm starting with Atom and Brackets because, while they are javascript apps running on nodejs, they are standalone apps from the host OS perspective. If I happen to go down the Ace or CodeMirror path then I'll probably be using the browser as a self-hosted app; however, I have not parsed all of those details yet. Finally, I have been playing around with a pseudo DSL that could benefit from an editor, but I'm not certain a freeform editor is the best solution, so that is on hold for now.

PS: LightTable is an outlier because [a] it does not install without overriding some OSX security requirements, [b] it's written in clojure (requires the JVM), and [c] it's interesting but intuitively less appealing.

UPDATE: While I like the productivity I've seen in the hands of an emacs master I will never get there. I've seen the same type of magic from vi masters too, but at least I know enough vi to be productive. In the past I have tried to be vi-masterful, but in the end I always find myself with multiple computers with different OS and vi versions, and so it's just not practical to move all my tools all over the net trying to keep them in sync. There was an article that talked about turning vi into an IDE with a combination of plugins like NERDTree and Powerline. Anyway, that attempt was abandoned, and in the case of this project I'm doing the same.

I'm thinking that a browser-based solution might have some unseen benefits like docker services, shared code and collaboration, maybe even some git and drone integration, as well as auto-downloading the compiled code. Make the docker container an all-in-one.

OpenStack - Glossary of Feature Names

Silicon Valley has long been in love with giving features names instead of numbers. I'm not sure what the exact origin of the practice was, but there was a time when projects had names. My first recollection of the double name was from the Debian project, and the practice then leaked into the Ubuntu project. I have no real first-hand experience with this information, it's just the timeline that I'm familiar with... so relax.

I do not have the complete list, but the last 3 releases of OpenStack are Grizzly, Havana, and Icehouse. And for the most part that's ok too. They are long-standing projects with reasonable lifespans and support. People grok the context without much effort.

I typed this list before I found this link. Couldn't the OpenStack team select names that were closer to their function? This is particularly important since the namespace is essentially prefixed by "openstack" anyway. The goofy names are probably better for google searches, but functional names would be no less searchable when combined with "openstack".

  • Cinder - Storage
  • Nova - Compute
  • Keystone - Identity Server
  • Glance - Image Server
  • Neutron - Networking
  • Swift - Object Storage
  • Heat - Orchestration
  • Ceilometer - Telemetry
  • Trove - Database Service
  • Sahara - Data Processing
  • Openstack - Command Line

The current OpenStack docs can be found here.

Robot publishers beware

How many times have you received buckets of emails from the same publisher and finally decided to unsubscribe, only to find that the publisher has added some friction like a subscription configuration form or even a prompt for your email address? It happens to me from time to time.

I used to think that the publisher was lazy or simply not taking the time to customize the unsubscribe link. But as I looked closer at the email I noticed other elements, like my name in the header and To: fields and sometimes in the footer of the email. So it's clear to me that they are already doing some of the required work, and adding a GUID to the unsubscribe link would be trivial. The reality is that the publisher is adding friction in the hopes that I/we/you abandon the task at hand.

Which leads me to a word of warning for publishers of this type.  Make it easy for me to unsubscribe or I will mark you as spam. If enough of my peers mark you as spam the filters will get you and it will cost you a lot more to clean that up than the fraction of a basis point that you got from my email.

Do not underestimate the value of proper Estimates and timely Feedback

I’m finally in front of a proper keyboard after sitting in a waiting room and recovery for hours…

What I observed about estimates and timely feedback:
  • the procedure was as normal as normal gets. Therefore highly predictable.
  • we were told that the procedure was 10 min once she was sedated
  • sedation was going to take a little longer than the procedure
  • after 80min I convinced someone to check with the OR.
So, while the risk here was personal, the lesson is no less important for a business: proper estimates and feedback make all the difference. While inflammatory: anyone who does not understand the value of proper estimates or timely feedback does not have the maturity to lead or run a business; this is not agile but common sense.

UPDATE: if you decided that every method had to fit on a single card, page or screen could you estimate the average time to [a] compose a card, [b] compose a hand of cards, [c] play the hand as dealt?

Sunday, November 23, 2014

Introduction to AppScale and Google App Engine - Part 3

I was going to spend the third part of this topic trying the same appscale installation on a VMware install on ubuntu 14.04LTS as well as a CoreOS install. I'm not going to do that now.

I have decided that while the idea of using appscale on private hardware is appealing, there is a lot to be gained by installing my apps on GAE instead. The fact that I would not have to maintain the host OS is of tremendous benefit. By extension that could have been accomplished with CoreOS, but since CoreOS is really meant for Docker I'm uncomfortable trying to pollute the CoreOS ecosystem.

So this chain of reasoning is over and part 5 of this series will focus on a hello world app in go, dart and maybe nodejs if it exists. I will be skipping over python, php, and java. I have already implemented a project or two in python and I like the tools.


turbo lang

Some months ago I reserved the domain, after posting my admiration for everything turbo, because I thought I wanted to build an IDE that reminded me of the original turbo-pascal from Borland International. (No matter how hard I try I have not been able to locate screenshots of turbo-pascal 1.x.) A few days ago I read an article about running tcl code in a browser. And while I was not interested in a browser or NaCl solution I still want that old turbo-pascal feel; and so, in addition to the domain, I have a github organization called turbolang with the hopes that I can get some help.

I am curious enough to wonder if I can accomplish my goals by forking atom or brackets; or for that matter just implementing some common theme extensions in the way of compile, link, format etc... But at the core of my thought process is that the turbolang editor needs to be clean, concise, and opinionated. Using the CUA shortcuts etc. is just fine, but the vim/emacs war is over and I have no interest in using the WordStar commands any more.

If you have any interest, ideas or comments I hope you'll let me know.

Introduction to AppScale and Google App Engine - Part 2

In my previous post I tried to install appscale on a virtualbox installation but it failed. In this post I am going to try the same installation on a Google Compute Engine instance following the appscale documentation. Notice that I'm going to install on GCE and not GAE. The current theme is appscale, and so I have not switched over to GAE yet.

Here are the instructions for installing appscale on GCE.
  • install gcloud SDK
  • create an OAuth credential
  • add appscale image to your repo
    • gcloud command is buggy... you have to specify the project id
    • $ gcloud config set project VALUE
  • install appscale tools
  • config and start appscale
  • shutdown appscale
I was able to install appscale on GCE and I was able to open the dashboard just to see what was there.  appscale is very interested in selling you a license to use hawkeye and there was something in there for monit but I did not follow it.  There was even a limit to the type of machine that you could install onto; so there was no possibility to install on google's f1-micro; which is probably a good thing for production but for development... meh.

The sample code can be located here, although I have not tried anything yet. I have not checked the restart times yet but that's in the plan. Once I deleted my instance using the GCE console tools there seemed to be a disconnect that could not be corrected from the command line; the appscale status and appscale down commands could not agree on the state of the instance.

I did not actually install my sample application. I figure that would be for my next attempt in part 3 where I'm interested in trying the installation commands with a local VMware installation and then repeating with CoreOS.


I managed to get the system to start working again by deleting everything in my local $HOME/appscale folder. Once that folder was cleaned up I was able to re-create the environment on the server with an appscale up command.

Then, instead of deleting the instance from the GCE console, I did it from the command line with an appscale down command in the terminal. This worked fine, but when I tried to perform a subsequent appscale up command I received some general errors about keys already being in use. So clearly I need to delete the folder between invocations.

** appscale clean does not work in the GCE mode.

One thing that is also missing... in a production environment where there are multiple applications in the domain, you need to be able to export the configuration so that the entire environment can be restored. appscale does not appear to be able to provide that info. GCE+CoreOS uses a version of the cloud-init file that allows you to restore your running environment.

Introduction to AppScale and Google App Engine - Part 1

I'm not certain how many parts this is going to take and what the outline looks like. It's off the cuff but I hope you find this interesting nonetheless.

Lately there has been a lot of interest in micro services in the form of Docker. While I am a Docker fan, categorizing it as a micro service container started me thinking about monolithic and micro kernel operating systems; that also forked a conversation about J2EE and google app engine, with a little flow based programming sprinkled in for good measure.

From the history perspective micro-based systems are faster but harder to orchestrate and debug. This is one of the prevailing reasons why Microsoft's monolithic Windows kernel defeated IBM's microkernel (but that war might have also been over before it started for many other reasons not related to the kernel). Testing a micro service itself is not difficult; the difficulty is the integration testing within the system and testing the orchestration system/bus itself.

I recently implemented a flow based programming system in go. It was very simple to design and code the nodes. It was straightforward to write test cases. It was even trivial to wire the nodes together in order to define some sort of flow. And graphing the nodes into some sort of visualization was also pretty simple. The system was very data driven, so generating queries or refactoring the nodes and reuse became favorable. The only downside was that there was no debugging. GoLang does not currently have a solid debugger, and while the application took advantage of lots of goroutines (one goroutine per node), debugging by writing log messages was not reliable due to timing and the intermingling of messages from the concurrent goroutines.
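The wiring pattern is easy to reproduce. Here is a stripped-down sketch, not the actual system, of one goroutine per node connected by channels:

```go
package main

import (
	"fmt"
	"strings"
)

// node runs one stage of the flow: read from in, transform, write to out.
// One goroutine per node, as described above.
func node(in <-chan string, out chan<- string, f func(string) string) {
	go func() {
		defer close(out)
		for msg := range in {
			out <- f(msg)
		}
	}()
}

// pipeline wires src -> uppercase -> exclaim and collects the output.
func pipeline(inputs []string) []string {
	src := make(chan string)
	upper := make(chan string)
	bang := make(chan string)

	node(src, upper, strings.ToUpper)
	node(upper, bang, func(s string) string { return s + "!" })

	// Feed the source node, then close it to drain the whole flow.
	go func() {
		defer close(src)
		for _, s := range inputs {
			src <- s
		}
	}()

	var out []string
	for msg := range bang {
		out = append(out, msg)
	}
	return out
}

func main() {
	fmt.Println(pipeline([]string{"hello", "world"}))
}
```

Notice there is nowhere convenient to put a breakpoint: once the goroutines are running, the only visibility is whatever each node logs, which is exactly the debugging problem described above.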

I'm not suggesting that micro services in the many new incarnations are good or bad but I would say that I'm in the business of adding business value for my employer. And in such a position there is a balance between cool tech, future proof, technical debt, and getting shit done.

A few killer reasons for going after app engine are [1] it supports some modern languages including golang and, just around the corner, dart; [2] if the code is implemented correctly then scaling can be a simple matter of increasing the instance count; [3] simple APIs for storage, query, cron, MQ.

** I have been administering close to 15 machines for friends over the past 20 years. The only thing I need to do is upgrade the operating system about once a year and apply patches once or twice a month. The challenge is having to remember what I did last time and what the risk is. Worse still is not being 100% certain what the recovery from a failure might be. I'm even thinking about backup/recovery of the user data.  It's just a big mess. If I had implemented these apps in app engine things could be a lot more carefree.

I've decided to restart my app engine experience with appscale, going through the getting started exercise using virtualbox on my laptop and following these getting started docs. (For OSX)

  • install homebrew
  • install ssh-copy-id (not available by default)
    • brew install ssh-copy-id
  • install virtualbox
  • install vagrant
  • register the image with vagrant
  • configure and install your virtual terminal
    • I had to perform the vagrant up command twice
  • deploy appscale on VM
    • the appscale up command required ssh-copy-id
** this outline is now incomplete because I could not get past this step.
There are a few more steps from here but I'm stopping for now. Appscale is throwing an exception which goes back to vagrant: [a] vagrant has installed version 12.04LTS and appscale is complaining about wanting 14.04LTS, and [b] the guest is aborting. So I have to restart, assuming that I missed a prerequisite when I installed everything else... so back to the beginning.


After trying the appscale install on virtualbox from the beginning I have had no luck. I get the same error(s), which suggest that there is an IP address mismatch between the appscale private network and the hosted network, as well as the hosted tools.

** running this framework means that eventually I'll be running on GCE. By deploying on GCE I will not be bothered with the host or guest OS versions. This is a big deal. It's the one reason I really like Docker+CoreOS. And even though I have indicated that micro-services are a challenge to debug it's not impossible. So, for the moment, there is no winner.  

In part 2 I will try the installation on Google Compute Engine (that's GCE, not GAE); however, this will be a little self-defeating if I cannot get it to work on a plain vagrant install.

I'm just guessing, but in part 3 I might try to install appscale on my own VM install, and I'm wondering if it'll install on CoreOS as-is.

Saturday, November 22, 2014

I won't take a survey to boost your status

One of the last things that happened to me in WDW was that two separate people solicited me for my opinion. They were not actually going to ask me the questions on the spot but wanted permission to ask me in the future. I agreed, but only if the information was meant to make the park experience better and not simply to boost their ratings. I have taken an AT&T survey and there was nothing in it that indicated that they wanted to improve their service. Maybe they wanted to fire an employee for poor service, but honestly their problems were and still are policy and process.

So don't bother me if you're just trying to boost your cred.

Scaling the Disney way Part 2

Continuing my previous article: Disney can scale its production. Just look around and just about everything you see has a real-world analog. Everything from the light posts, manhole covers, garbage cans, decorative lights, and video screens ... are all common household or commercial items. And I think there is a very good reason for this: liability. If there is a short in an electrical system and someone or many people are injured, WDW does not want the responsibility. But more important is simply time to market. If the imagineers had to imagine, design, implement, and test every single component in every single attraction etc... they might not ever have completed either park.

So the lessons I'm taking away from there are severalfold when constructing software:

  • know what it is that I'm developing
  • know that the first MVP is actually a POC
  • know that the second MVP is the alpha
  • know when to implement a library and when to adopt a 3rd party
  • plan to understand the risk and dependencies from adoption
  • take a lesson from chess computers and make sure there are always moves available in the future
One thing I hope to discover is if there is a formula whereby I could predetermine how much should be new and how much should be adopted.

Scaling the Disney way Part 1

I took some notes when I was in Walt Disney World last weekend. I had originally planned to write an article about how well WDW was scaling. Over the past few years the Disney company has introduced something called the MagicBand and the FastPass. These devices were meant to improve our experience and to some degree they do. But there is a dark side too.

  • If you are staying at a Disney property you will receive a MagicBand that you can use to enter your room, pay your tab, buy swag, enter the park, stay late, enter early, and so on.
  • If you're staying on property you can register for your FastPasses 90 days in advance.
  • If you are NOT staying on property you can still buy a band, but you have to buy it at will call, the ticket booth or Downtown Disney. You can only select your FastPasses 30 days in advance.
One thing that Disney learns from the FastPass is the public interest in the rides, so they know when they might be able to maintain the rides that are of least interest. They might also be able to correctly staff everything from the food and merchandise vendors to characters, ride operators, and custodial.

One thing that I have noticed is that the lines are getting longer and longer. It became very clear to me, during my last trip, that [a] Disney is developing more hotel rooms and [b] is NOT producing attractions at the same rate. And so the lines are getting longer.

On the Monday before we were set to return home it rained from 2pm until the following morning. As people started to leave the park, during the rain, we decided to stick it out because it was our last day. When the park closed and we returned to our Pop Century hotel cafeteria we were astonished by the sheer number of people. It then became clear to me that the ratio of people to dining capacity was skewed. Having stayed in each of the three property types I cannot remember ever seeing this many people waiting to place an order.

I noticed a similar thing in the Pop Century restaurant and the Starlite Cafe in WDW. They have each partitioned the dining by food group. In the Starlite Cafe there is one line for chicken, one for beef and another for sandwiches. In Pop Century there are 4 lines: [1] pizza, [2] pasta, [3] sloppy joes and bean burgers, [4] pot pie, salad, and open-faced shredded beef or pork. So if you were one parent trying to get a meal for the family you had to choose one line that would satisfy the entire family.

This is not the way to scale this side of the business.

Software estimation failure

If you want to know why some or most software estimation fails then you should watch kids playing with Legos. What you'll see is the mind striving for [a] perfection, [b] approval, and [c] improvement. I suppose that there is a lot in common between [a] and [c], but when you see them compete, in addition to the aesthetic you'll see the iterative process too.

When kids are constructing go-carts or lego racers, time is no object, or at least time is not kept, and they do not have a sense of when they have achieved good enough. Since I'm not a psychology major I cannot profess these things to be true in all cases, but after 30+ years of programming and managing programmers and 4 years of watching my kids ... it might actually be an educated guess.

If we knew what it meant to be GE (good enough) then there is a better chance we'd get there on time.

Javascript and related browser tools

Here are the links to libraries I like to use. It's still under development.


  • MetricsGraphics - display graphs
  • Bootstrap, wrapbootstrap
  • html5, html5boilerplate


This is a work in progress. I need to add many links and complete the lists:

  • Bootstrap - the most popular HTML, CSS, and JS framework for developing responsive, mobile-first projects on the web
  • jQuery - fix and normalize some javascript
  • squire - WYSIWYG editor from FastMail
  • ace - text editor
  • codemirror - text editor

more to come...

Wednesday, November 19, 2014

This article is a little more involved than the nodejs community and its detractors give it credit for. I'm taking a quote from the article as the quintessential challenge for nodejs rather than as a simple "use your tools properly" lesson. While the two share a like sentiment they are vastly different, and the quote has the deeper meaning.
What did we learn from this harrowing experience? First, we need to fully understand our dependencies before putting them into production.
I did some programming in nodejs when I was working on a zero downtime migration project. It worked and the nodejs tools were solid. My project failed when my SQL requests became more complicated than counting the number of rows in a table... when a number of round trips or consecutive queries were needed.

Install nodejs
$ brew install nodejs

Install expressjs
$ npm install express

Install hapi
$ npm install hapi

Both express and hapi required 24 packages.

Now that I have installed them both... I created my first express project, and now I have installed another 54 module dependencies. When I performed the equivalent hapijs task it did not install anything more than the original hapijs installation.

Grunt required a minimal number of packages while bower seemed to require about 50.

And now it comes to me. Looking at the console from the installation I performed above there was a small note that requires investigation:

> sqlite@1.0.4 preinstall /private/tmp/ha/node_modules/sqlite

Does this mean that all of the modules and submodules that I used to install in my projects as dependencies are now installed by default in nodejs? If so then this is an even more egregious design flaw than the one behind the netflix statement. I admit that I'm not certain these dependencies are preinstalled, nor of the net side effect. I have decided that I no longer have the time and inclination to pursue nodejs unless I'm getting paid, and there are many more security reasons to prefer compiled applications. That netflix suggested that deep dependency understanding is important only serves to reaffirm my position.

LXD (lex-dee)

I could not decide what to write for my 100th post but somehow I could not get LXD (lex-dee) off my radar. It popped up on my reading list and google searches like a bad dream. To hear Ubuntu talk about LXD they swear it's meant to coexist with Docker, but when you start to parse the marketing speak you might get a different picture; as I did.

Ubuntu's LXD:
  • for marketeers (link)
  • for the casual observer (link)
  • and for marketeers who are in denial (link)
  • for the developer (link, link2)
While Ubuntu is making verbal declarations that LXD and Docker are meant to coexist, the evidence is underwhelming. In the links above you'll read that LXD uses LXC. LXC was the container technology that Docker was built upon until Docker started to build its own container wrapper (presumably to patent the API).

LXD may ultimately become a lightweight, complete and secure virtualization of the host OS within which you might be able to execute Docker containers, but is this really what is needed or asked for? Let's face it: Ubuntu is in the business of selling Ubuntu, and a Docker wrapper does not support that function; however, a wrapper layer that promotes vendor lock-in does.

Let's look at this a slightly different way. Docker is a lightweight container. It's meant to be a single-process container, although that's not a strict requirement. However, a Docker instance was not meant to be a complete OS; just a sandbox of sorts. This approach to containerization means that you get some pretty nice application density because you're not running a whole lot. On the other hand, LXD is meant to be a complete OS running in a container. All those extra processes are going to weigh down the most nimble of systems.

Since LXD seems to be vaporware in terms of real features, let's hope it gets fleshed out sooner rather than later.

Tuesday, November 18, 2014

Contrarian View of Continuous Integration (CI)

CI is great when you do not have the time or resources to oversee the build pipeline 24x7. However, and this is a huge HOWEVER... just like NoSQL did not actually make the DBA obsolete; continuous integration did not make the build custodian obsolete either.

Peter Leschev, of Atlassian, made the following statements in a presentation "Puppet Camp Melbourne Nov 2014 - A Build Engineering Team’s Journey of Infrastructure as Code" (link):

  • Less Human Interaction + More automation = Higher confidence
  • Less Human Effort = Increased frequency of releases
These two statements are probably correct or true in some or even most cases. And here is the however: what is the cost when things go wrong? The bigger the build system the more complicated it can get. But even in the most general and simple CI systems: what is a down state going to cost you? Who are you going to call at 3am? Do enough devs know enough to be considered build SMEs so that their patches will demonstrate "production-like discipline"? Some call it FUD, but others, who have been there, know it's a sad reality.

Friday, November 14, 2014

The normalized cost of Amazon Lambda

Amazon announced a new AWS feature called Lambda. I watched the presentation with great interest; however, I did not make it all the way through... even at 7 minutes it was too long. The idea is simple: an event that your code is configured to watch occurs; when it does, your code runs until it finishes, then stops.

This is an amazing bit of architecture, design or implementation. Whatever you want to call it. In many respects it borrows from flow based programming without actually saying it.
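That event-to-function flow can be sketched in a few lines of Go. This is my own illustration, not Amazon's API (Lambda actually launched with a Node.js runtime), and the event source here is just a slice standing in for S3, Kinesis, and friends:

```go
package main

import (
	"fmt"
	"strings"
)

// handler is the user-supplied function: it is invoked once per event,
// runs to completion, and then stops. Nothing runs between events.
func handler(event string) string {
	return strings.ToUpper(event)
}

func main() {
	// A stand-in event source; in Lambda this would be S3, Kinesis, etc.
	events := []string{"object created", "record appended"}
	for _, e := range events {
		// Code only executes while an event is being processed.
		fmt.Println(handler(e))
	}
}
```

The flow-based flavor comes from exactly this shape: your code is a pure transformation on the event stream, and the platform owns the loop.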

Pricing - three factors contribute to the overall cost: [1] the number of requests, [2] the running time of the function, and [3] the amount of memory you opted for when configuring your function. I assume Amazon normalizes the CPU allotted to the function in order to calculate some sort of compute index.

If I were to run my function non-stop for a whole month, the cost at the smallest memory size would be just under 6 dollars (2.6e9 ms / 100 * $0.000000208), and at the upper memory bound - 1 GB - you're looking at about $43... if you run your app non-stop... 24x7.
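To make the arithmetic explicit, here is a short Go sketch. The 128 MB per-100ms price is the one quoted above; the 1 GB figure is my assumption of the launch-era published tier, so treat both as point-in-time numbers:

```go
package main

import "fmt"

func main() {
	// A 30-day month, billed in 100 ms increments.
	intervals := 30.0 * 24 * 3600 * 1000 / 100 // 2.592e7 billing units

	// Per-100 ms duration prices (assumed launch-era rates).
	price128MB := 0.000000208
	price1GB := 0.000001667

	fmt.Printf("128 MB, non-stop: $%.2f/month\n", intervals*price128MB) // ~$5.39
	fmt.Printf("1 GB, non-stop:   $%.2f/month\n", intervals*price1GB)   // ~$43.21
}
```

Request charges are ignored here; for a single function running continuously they are a rounding error next to the duration charge.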

When I compare the cost of Lambda I get a little apprehensive: [a] there is vendor lock-in, and [b] a runaway function is possible ... but more importantly, while I'm impressed with the scale this makes possible, being optimistic is one thing; what are the chances that your business is going to scale that fast? Then there is the FUD that follows into the recruiting process, and so on. The ripple effect. On the other hand, building generic solutions in javascript that could move into or out of the Lambda framework might be a better sweet spot. (see appscale)

Tuesday, November 11, 2014

Deis 1.0.0 was released

I'm happy that the Deis team has decided to release their 1.0.0 offering. In many respects it seems a bit more polished than previous versions; and while I would prefer to write a rave review there are still too many things to hate. I've just looked at the Deis issues list and there are only 5 issues submitted in the last 24 hours. My personal experience suggests that either their user-base is low or version 1 has not been adopted yet.

Here is a list of the bugs I'm reporting:

  • no single installation path; all paths cross
  • in order to install deisctl with the curl | sh -s 1.0.0 command, the user needs to run the command with sudo because the core user cannot create /opt/bin.
  • the sudo ln -fs command cannot be executed because the /usr/local... folder is read-only.
  • the output from the curl+sh command includes some console control characters that make reading the message difficult.
  • the documentation refers to exporting some values. These values are wrong.
  • it's easy to forget or skip the requirement to configure the DNS
  • deisctl install platform works ok but start times out.
The Deis team probably thinks it's competing with the Google Kubernetes team; however, that could not be further from reality. Kubernetes is still young, and Google is going after a different user with its K8s platform... and frankly I think its other GoLang and container offerings are stronger.

Deis, while it does not have a proper user interface, is competing with

Total Cost of Ownership - Apple, Microsoft, Google

I constantly struggle with the cost of my hardware and software choices. I also struggle with the cost of quality, reliability and support of the hardware. And finally I struggle with the quality, features, and cost of the software.


In reality, when comparing the cost of Apple hardware to equivalent hardware from Dell, HP, Sony, or Lenovo - the cost is very close. Dell and HP might be a little less expensive, however, Sony and Lenovo can actually cost more. Please keep in mind I'm also comparing hardware component quality, features, reliability, support; so you will not see me compare Acer or Packard Bell.


Since Microsoft does not have anything except its software in its catalog, it has to sell its operating system as the product. In order to command premium dollars it has to create a catalog of versions with varying levels of features and support. The last time I purchased an OEM Windows disk it cost me about $125 for the Home edition and $250 for the Pro. Furthermore, Microsoft ships a full upgrade every 18-24 months. And then consider the amount of bundle-ware the manufacturers provide (which is a revenue center for them), the number of BSODs (blue screen of death), reformats, viruses...

Once you have purchased an Apple device, its operating system upgrades are free until the machine reaches end of life. Many years ago Apple charged $25 for an upgrade, but it's free now. The extraordinary news is that I have two laptops that are about 5 years old and only one is EOL and cannot be upgraded.


If you consider any premium dollars you might have spent to purchase an Apple computer plus the ongoing cost of maintenance, then the total cost of an Apple computer is close to its peers, if not better than most. It's also trivial to put a price on user experience, which Apple wins hands down.
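A back-of-the-envelope sketch of the OS-only spend, using just the figures above as assumptions (a ~$250 Pro license roughly every two years if you pay for each release, versus free OS X upgrades):

```go
package main

import "fmt"

func main() {
	// Assumed figures from this post: Windows Pro at ~$250,
	// a major paid release every ~2 years, OS X upgrades free.
	years := 5.0
	windowsUpgrade := 250.0
	releaseCycle := 2.0 // years between paid Windows upgrades

	windowsOSCost := years / releaseCycle * windowsUpgrade
	fmt.Printf("Windows OS spend over %.0f years: $%.0f\n", years, windowsOSCost) // $625
	fmt.Printf("OS X upgrade spend over %.0f years: $0\n", years)
}
```

That gap alone eats most of the "Apple premium" before you price in bundle-ware cleanup and reformats.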

I like Google but I cannot compare them to Apple or MS. The hardware is the lowest-end commodity hardware a person could purchase, and in many cases the fit and finish are substandard enough to hurt productivity. It also requires the user to be online in order to take full advantage of the platform. I like my ChromeBox and ChromeBook but I cannot replace my MacBook Air with a ChromeBook. It's just not up to the intended use-case.

As a footnote: the integration and style of the other Apple products (iPhone, iPad, Watch, Apple TV) are so spot on as to appear seamless. The counterparts from Microsoft and Google are still rough around the edges. And while someone once said "once in a while you have to purchase the underdog product to keep the competition going", I'm not certain we're there yet.

Monday, November 10, 2014

Curated OS' and tools

There are a number of tools that I recently used in order to determine the efficacy of my teamcity pipeline and associated chef tasks.  They included iostat, iptraf, iftop, htop. And now I have sysdig.

There is CoreOS and ProjectAtomic. The former is ready for production and the devs are making great strides. The latter has potential; however, it suffers from many serious deficiencies and dependencies. It seems the devs are making RedHat a dependency rather than Fedora, which makes CoreOS the better choice. And now it appears that Ubuntu is entering this field with snappy.

The latest CoreOS, with docker and the promise of rocket, is still the frontrunner.

appscale instead of or including google app engine.

Dart lang article, home.

2600hz - Blog, home, micro service video.

Sunday, November 9, 2014

A lesson to remember

I was watching "The Profit" yesterday and it finally hit me: Marcus Lemonis has been using the same criteria to evaluate businesses as taught in Harvard MBA classes.

  • people
  • process
  • product
And that lesson did not cost me a Harvard education.

Thursday, November 6, 2014

Surprise editor - Atom

I have not been much of a fan of Atom, but as I'm feeling nostalgic for the days when I used TurboPascal for everything I wrote... Atom and its Go-Plus and Git-Plus packages are starting to fill the void. It's far from complete but it feels almost as good.

It's safe to compare Atom to a mythical TurboGolang. (Sadly it's not the same TurboPascal that I saw on youtube; those versions came many years later. You can tell because that text IDE implemented pseudo windows. Sadly there are no Google images either.)

I wonder if I could fork Atom and make a Go - only environment? The license is very liberal.

PS: the emerald syntax theme makes me feel good.  All someone needs to do is add a yellow.

Monday, November 3, 2014

Sample prettyprint code in blogger

Thanks to Heiner's post I think I can prettyprint code. The code below is from a previous post but is included here for testing purposes.

package main

import (
        "fmt"
        "strings"
)

type task interface {
        process()
        print()
}

type myfactory struct{}

func (f *myfactory) make(line string) task {
        return &work{line: line}
}

type work struct {
        line string
}

func (w *work) process() {
        w.line = strings.ToUpper(w.line)
}

func (w *work) print() {
        fmt.Println(w.line)
}

func main() {
        f := &myfactory{}
        t := f.make("hello")
        t.process()
        t.print()
}
