Wednesday, August 27, 2014

parallel execution - flowstaller

The orchestration tool I'm working on got a shot in the arm today. Go has some interesting properties. For example, there is the notion of a zero value, which applies to structs as well as the primitive types:
an empty string is the zero value for a string
0 (zero) is the zero value for an integer
false is the zero value for a boolean
and a struct's zero value is when all of its fields hold their zero values.

When decoding or unmarshaling, it's impossible to know whether the string you decoded actually populated the target or simply left it at its zero value.
type SomeStructType struct {
    MyValue string
}
In order to Unmarshal something into this structure I need a wrapper structure:
type SomeStructWrapper struct {
    Something SomeStructType `json:"SomeStructType"`
}
Unmarshaling a string like:
{ "SomeStructType":{"MyValue":"Hello World"}}
Only works when you're Unmarshaling into a wrapper instance.
s := &SomeStructWrapper{}
err := json.Unmarshal(buffer, s)
But unless you know for certain that the type and the buffer are really meant to match, you need to check. Since you've already allocated the target instance, it already holds the zero value, and there is no way to tell whether the Unmarshal actually populated it. The error return value may not be helpful.
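Here is a minimal sketch of that ambiguity; the mismatched key "Unrelated" in the buffer is my own example, everything else reuses the types above:
package main

import (
    "encoding/json"
    "fmt"
)

type SomeStructType struct {
    MyValue string
}

type SomeStructWrapper struct {
    Something SomeStructType `json:"SomeStructType"`
}

func main() {
    // the key does not match the json tag, so nothing gets decoded
    buffer := []byte(`{"Unrelated":{"MyValue":"Hello World"}}`)
    s := &SomeStructWrapper{}
    err := json.Unmarshal(buffer, s)
    fmt.Println(err, *s) // prints: <nil> {{}} -- no error, still the zero value
}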

I found that if I changed the wrapper to:

type SomeStructWrapper struct {
    Something []SomeStructType `json:"SomeStructType"`
}
Notice that Something is now a slice/array. And if I change the input document syntax a little:
{ "SomeStructType":[{"MyValue":"Hello World"}]}
Now I can check the length of the Something attribute once the Unmarshal returns. If len(s.Something) == 0, or the slice is nil, then I know the decode failed to match (slices have a nil zero value, which is a semantic difference from the other types).
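A small sketch of that check, reusing the slice-based wrapper above; the function name and error message are mine, and it assumes the encoding/json and fmt imports from the earlier sketch:
func decodeNodes(buffer []byte) ([]SomeStructType, error) {
    s := &SomeStructWrapper{}
    if err := json.Unmarshal(buffer, s); err != nil {
        return nil, err // the buffer was not valid JSON at all
    }
    if len(s.Something) == 0 {
        // no "SomeStructType" array was present, or it was empty
        return nil, fmt.Errorf("no SomeStructType entries decoded")
    }
    return s.Something, nil
}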

PS: I added a minmax tag so I could identify the number of items in the array.

PS: and when there are multiple items in the array/slice I run them in different goroutines, roughly as sketched below.
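The fan-out is something like this, assuming a sync.WaitGroup and the do() worker mentioned in the flowstaller post below; the worker's signature is my guess:
var wg sync.WaitGroup
for _, item := range s.Something {
    wg.Add(1)
    go func(it SomeStructType) {
        defer wg.Done()
        do(it) // each item gets its own goroutine
    }(item)
}
wg.Wait() // block until every item has finished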

Silence

WARNING - this is potentially a long story and could, in fact, make a good TED talk.

I started programming, for entertainment, in 1982 as a Junior or Senior in High School. I got my first paid gig writing some software for my father's consulting business. About 10-12 years later I was making changes to the OS/2 kernel and GUI internals.

Shortly after that I started work for a payments processor; and while my work leading up to payments was technical and challenging, this new job was both 9-5 and 24x7x365, and along with the paycheck came a beeper. (If you were lucky you got the full alpha-text pager and not just the numeric one; some even had two-way text capability; later came the first BlackBerrys.)

When these pagers went off you had to find a pay-phone and call in. I recall being less than 4 miles from the office when my pager went off and pulling over to a gas station... and sitting on the phone for 45 minutes talking the operations guy off the ledge.

The reason I'm writing this is because several days ago I cracked the display of my iPhone and I finally got around to getting it repaired. The repair was supposed to take 45 minutes so I thought I would get some lunch. Unlike leaving the phone at home or in my sock drawer this felt different. As I drove to the sandwich shop the world got quieter. I realized that in that 45 minutes there was no way for anyone to connect with me and there was no way for me to connect with anyone else unless I found a pay-phone.

As soon as my phone was returned to me I noticed the opposite. The world was suddenly noisy again.

I'm certain there are people who have had phones and smartphones longer than half their lives. How would they deal with this situation? How would they feel when separated from their phone? Today being on vacation usually means monitoring emails etc... so that the shock of returning to work is reduced. Even on a cruise you can check your email; and if you bring your own computer or tablet it's a reasonable cost.

Just because the build is green does not mean it's ready for production

Tuesday, August 26, 2014

Docker - Data-Only Containers

I recently deployed shykes' devbox setup, and while he did not provide a docker build or docker run command, I was able to get things running:

docker run -it -v /media/state/shared/:/var/shared/ rbucker/devbox /bin/bash
Sadly, in my case, I mounted the volume from my host system; but in the dockervolumes article there is a recommendation that "we" use data-only containers. On the surface that makes plenty of sense until I read this comment:
Volumes persist until no containers use them
This really creates a potential problem if you hope that your data is going to survive. In my case I might want to have a redis and/or postgres microservice. If I used a data-only container and either redis or pg crashed, then the data-only container and its data would be gone. The only way this might be resolved is if the data-only container itself used a host volume. That way you get (a) the benefit of linked volumes, which might someday be networked, and (b) some sort of persistence.
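Something like the following is what I have in mind; the container name shared-data is made up, and the host path mirrors the devbox command above:
# data-only container that itself mounts a host directory
docker run -v /media/state/shared/:/var/shared/ --name shared-data busybox true

# redis, postgres, or the devbox then borrow the volume from it
docker run -it --volumes-from shared-data rbucker/devbox /bin/bash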

VMware announced docker support

VMware is joining a long line of commercial brands that have decided to get into the Docker LOB (line of business). It's not at all curious but rather to be expected, as there are a number of other brands that are already there.
The most curious brand is Microsoft. On the one hand it makes perfect sense, as I believe Azure is supporting some elements of Linux but is still a Windows platform.
But as VMware enters, what exactly do they have in mind? Docker requires a Linux kernel, and VMware, while it uses Linux for some of its offerings, is not dedicated to the container. It is interesting to consider VMware's vCloud offering. It's a complete dashboard for managing the virtual hardware, and it's not a stretch to extend the metaphor to include containers. (I've installed all manner of OS on a vCloud system, and constructed orchestration services around vCloud, so containers will not take much to implement.) VMware will have to choose the right UX and workflows so that the vApp and Template metaphors can coexist.

Flow Based Programming in Go

Go is not the ideal language for implementing a Flow Based Programming environment but it works.

One challenge was that the network graph of nodes started off as statically compiled and linked during the POC stage. Later I was able to leverage the init() function to link the types in what I called the registration process. In each package's init function I would create an instance of a wrapper type that made it easy to decode the JSON config. This meant that when the dynamic network was applied and the mini-microservices were started, the messages would flow as expected.
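Roughly, the registration looks like the sketch below; the Node interface, the registry map, and the "echo" node are my own stand-ins for whatever the real project uses:
package nodes

type Node interface {
    Run(in <-chan []byte, out chan<- []byte)
}

// registry maps a node-type name from the JSON config to a factory
var registry = map[string]func() Node{}

func Register(name string, factory func() Node) {
    registry[name] = factory
}

// a concrete node registers itself the moment its package is linked in
type EchoNode struct{}

func (e *EchoNode) Run(in <-chan []byte, out chan<- []byte) {
    for msg := range in {
        out <- msg // pass every message straight through
    }
    close(out)
}

func init() {
    Register("echo", func() Node { return &EchoNode{} })
}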

**I have not gotten to the point where I know how multiple instances of a particular node are handled... in sort of a fan-out model. But I'm sure it's in the book.

**One other challenge is that while goroutines and channels are well suited... they are limited to the current process. The networked version of the channel (netchan) has been deprecated and no suitable replacement has been named, although the Docker team has made a recommendation which is currently incomplete. There are a few others out there... but for docker to succeed this needs to be implemented.

PS: this model is working in my flowstaller project. In this project I have a DSL that looks a lot like the nodes in the FBP model without the concurrency. In the flowstaller I have a wrapper type, the wrapped type, the init() function and the worker called do().

Monday, August 25, 2014

Docker is punk rock

... as in the Urban Dictionary definition. Docker is experiencing a velocity of attention that I have not seen since the peak of the dot-com days. No single technology has received such a disproportionate share of mindshare with a scope as narrow as the Linux kernel version 3.10+. Especially given the number of Windows servers in production and the number of entrenched virtualization brands.

The container solution would seem to be a good one. It's particularly interesting when you consider microservices and even mini-microservices when you embed flow-based programming services inside a microservice.

If you have an interest in flow-based programming please drop me a note. I've constructed several POCs in Go that use channels as the statically typed pipe, and while the Morrison model is what I hope to achieve, I'm just trying to get something into the wild. (I also constructed one in NodeJS, but once I got deep into promises I gave it up.)
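For the curious, a minimal sketch of what "channels as the pipe" means; the two nodes and the string payload are illustrative only:
package main

import (
    "fmt"
    "strings"
)

// upper is one node: it reads from its in port and writes to its out port
func upper(in <-chan string, out chan<- string) {
    for msg := range in {
        out <- strings.ToUpper(msg)
    }
    close(out)
}

// printer is the sink node
func printer(in <-chan string, done chan<- struct{}) {
    for msg := range in {
        fmt.Println(msg)
    }
    close(done)
}

func main() {
    pipe := make(chan string) // the channel is the connection between nodes
    sink := make(chan string)
    done := make(chan struct{})

    go upper(pipe, sink)
    go printer(sink, done)

    pipe <- "hello world"
    close(pipe)
    <-done // wait for the sink to drain
}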

DSL in review

UPDATE:  My initial notes have been erased by the ether. It seems that my iOS blogger app ate my homework. So here is the summary.

Last week I was asked the question "why is flowstaller implemented as a DSL, or DSL-like, instead of implementing the tasks as first-class functions in the language of choice?" At the time I was exhausted and not able to put together a strong argument; however, after reading an article today I found the voice I was looking for.

The article I read made the argument that the author had built a DSL and refined it until it was no longer capable of completing the task for which it was designed. I also read a Stack Overflow question/answer... the question was whether or not a physics researcher should implement the research questions in a DSL or in a first-class language.

While the first problem is a challenge, it's not impossible. Usually it means one of two things: (a) the DSL was not thought out well enough in advance to account for the major constructs, or (b) the expectations are too high and some intermediate middle ground is missing. There is a third: the developer has simply lost interest because now there is some impedance mismatch, like call stacks, looping, or conditionals, that is just no fun to implement.

As for the second: this reminded me why I wanted a DSL for my project in the first place. In my case I was interested in merging DSLs from Chef, Puppet, Ansible, and SaltStack. So on the one hand I needed a DSL engine, and on the other I needed the libraries to execute the tasks across all of the DSLs. However, I was reminded that the physics problem was the same and yet different. By starting the researchers with a DSL they are immediately productive. Then, over time, as they learn the DSL and possibly one or more of the underlying programming languages, converting their DSL-based applications to a first-class language should be a simple matter.

So contrary to my previous posts there is a place for a DSL in today's modern development environments.

Rex is not Rexx; it's more like Chef.



It's obviously a DSL implemented in Perl, with bits that work on OS X and Windows, and most heavily developed in the *nix world. Since Perl is a dynamic language like Python and Ruby, Rex will suffer the same runtime-dependency warts that Chef, Puppet, Ansible and SaltStack do or will.

When will flowstaller achieve its tipping point?

Sunday, August 24, 2014

Smartphone reduction

Making the decision to remove the apps that serve to distract rather than enhance has been a success. The only apps I have on my phone are the ones that are meant to interrupt whatever I'm doing... phone, SMS, email. Everything else has been moved to my iPad. The apps on my iPad are the ones where I might want to do some casual reading or searching, such as Twitter, Facebook, LinkedIn, Google+ and a few others. And if there is a link that needs some concentrated reading, I always forward the links to my email or cloud bookmarks for reading or research later, when I can dedicate focused attention.

What a great decision! I feel better already!

Tuesday, August 19, 2014

RockSlide or Avalanche - project management metaphor

When passing down project implementation details, is your organization a Rockslide or an Avalanche? Which is better?

Re-factoring my smart tools

UPDATE: The question I have... which queue do I put Twitter in? Fast or medium? Clearly there is nothing in any tweet that is going to change my day's priorities, but it could definitely derail them.

Looking at the number of apps that are running on my iPhone I realize that there's a problem. It's not that there are too many applications or that I don't use them regularly. I simply came to the realization that I needed a priority queue. 

My iPhone was to be used for communications that needed to be real time or near instantaneous. This would be text messages, phone calls, and emails.

Since my iPad mini is slightly bulkier than my iPhone, I rarely take it with me everywhere I go, so it made more sense that this be my medium queue: apps like Facebook, LinkedIn, Google+, and many others.

Finally, my slow queue. This would be my personal laptop or my employer's laptop. On my laptop I install the applications that have native versions; for the rest I simply have bookmarks in my browser.

I decided on the prioritization of my queues based on the friction of the application and the sludge that it adds to my day.

Saturday, August 16, 2014

Chef, Puppet, Ansible and SaltStack

I've been writing batch installers for years. Different batch tools: sh, bash, zsh, bat, ant, nant; and dynamic languages like Perl, Python and Ruby. But at the end of the day they are all the same. It's some engine that runs some code that produces some expected outcome. And if you're lucky and the tool provides something that looks like a DSL, such that the scripts are easy and fast to construct, then you've hit the jackpot.

All of this comes to mind as I'm building my own installer DSL. No matter how I parse my work product it looks exactly like the others. It's just a script being executed by an engine. The script can be embedded into a "solo" executable, pulled from a repository, pushed to an agent, or tunneled through SSH or a REST API. The only things that make my tool different are that (a) it's almost 100% cross-platform [Linux, Darwin, Windows], (b) everything that is needed is statically linked, (c) it self-updates automatically, (d) it handles scheduling, (e) it handles orchestration, and (f) it supports the DevOps method. (Some of this is vaporware, but it's intended.)

One thing that causes me to question this tool is that most of my work is targeted at Linux and Darwin. So while Windows support is possible, it's not required. And so my target systems only need some basic sh or bash support in order to deploy some appcode, which makes the tool plenty of overkill. I think the sweet spot is that everything is self-contained. Let's see what happens after my first application is deployed and how it might integrate with a generic Docker container.

**Of course the single counterargument is containers, and especially the Dockerfile.

DevOps is not a silver bullet

"You keep using that word. I do not think it means what you think it means." --Inigo Montoya (Princess Bride)
I just read a headline on LinkedIn: "Why DevOps is Key to Software Success". My initial impression from the title, and many like it, is that all of the programmers and operations staff need to be cross-trained and merged into this new classification, leaving the uni-skilled programmer or operator in the dust.

Point of fact: Wikipedia defines DevOps as a method and not a role. On the one hand I'm relieved that I was wrong, but I'm disheartened by a recent conversation where a manager referred to "a DevOps" as a person in a role. What I realize is that I have been practicing the DevOps method for over 30 years. In that time every program that I put into production needed the design, care and feeding stipulated by the DevOps method.

DevOps is nothing new. It just has a new name.

Monday, August 4, 2014

http benchmarks all very interesting but who really needs it

These benchmarks are very interesting to me. When I'm designing systems I'm always thinking about huge numbers and scale. But the reality is... who needs it? How many projects really need 800k connections? And while the article is nice and well written I'm going to bookmark it and probably forget about it until I clean my bookmarks. It would be great to get to that scale but I think the number of businesses that get there are in the extreme minority. But thanks for posting anyway... right now time to market and reliability are more important.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...