Sunday, May 31, 2015

Which distro is the right one?

I would guess that a major part of the friction between the Linux distributions comes down to (a) uncertainty and (b) complete understanding; I'm just not sure which one wins the day. Back in the day I was a huge believer in FreeBSD and Slackware. The reasons were on the trivial side. FreeBSD was the baseline OS that CyberGuard, and later Secure Computing, selected. The people who made the decision made it because they thoroughly understood all of the moving parts. I remember later conversations, when they were trying to decide whether or not to upgrade to the latest kernel design, full of meaningless arguments about this feature or that and the FUD that might follow. But it was a solid OS: it supported a wide variety of hardware, it was reliable, the filesystem was safe, and the installer was trivial.

Slackware was my go-to Linux because the installer was solid, the distro was safe, and Patrick did not take any risks: he hand-selected the kernel versions and patches, and the libraries and 3rd-party packages and apps were also highly curated. The best part was that the inner circle lacked trolls, and that made it a fun and educational place to be. Sadly they stumbled a few times, but over things that had nothing to do with the project.

So when I'm thinking about the golang Docker image in the public registry I get a little skittish. There is nothing overtly wrong with the Debian distro except that they march to their own drum. Debian is the root of a lot of other popular distros, but it has the slowest release schedule. So I'm reluctant to install golang on jessie.

I have two general needs.  (a) I want a very lightweight golang docker image for building my projects. As someone put it... my Dockerfile is my Makefile. (b) I want enough tools that I can construct my IDE and not have to worry about all the extra cruft that installing a full distro suggests. I'd prefer that everything was running on scratch or maybe even the slimmest rkt container but that may be a little premature for now. 

I may try to rework shykes' Dockerfile but it's going to take a while. His file is quite old and I'm planning on working with modern code. There are a few projects out there that are just overly complicated but would make a great enterprise tool.

Bluetooth pairing and switching in a modern world

I've been having trouble switching from device to device. I sent Jawbone an email and while it might be correct it's also a bunch of canned answers that I cannot accept as best practices. It's entirely possible that Bluetooth is fundamentally flawed.

This article spoke to me.

I'm sitting at my desk in my office. I have 5 Bluetooth capable computers and one phone that are paired with my Jambox and there is no way I am going to turn everything off in order to select the current active device. I suppose I could disable Bluetooth on all of the devices and then power the one device up... but that too, is annoying and not why or how I wanted to use this.

Until Bluetooth or Jawbone improves this process it appears I'm stuck.

CORRECTION - my android phone can take over from a connected Chromebook but not vice versa and not from another paired and active computer.

UPDATE: this might explain why SmartLock only works some of the time and not with all the local devices at the same time.

my CoreOS .profile

One of the things that I like about CoreOS is that it has a "toolbox"
toolbox is a small script that launches a container to let you bring in your favorite debugging or admin tools. --CoreOS
And as a point of fact, I am currently using it as my development environment even though I should probably be creating a devbox instead. ("we" already know that this is bad Dockerfile design)

My .profile currently looks like:
$ cat .profile
echo "Configuring the environment"
export GOPATH=$HOME/_vendor:$HOME:$HOME/src/
export PATH=$PATH:$HOME/bin:$HOME/_vendor/bin
export PS1="\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]"
if [ ! -e $HOME/.gitconfig ]; then
    git config --global alias.co checkout
    git config --global alias.br branch
    git config --global alias.ci commit
    git config --global alias.st status
    git config --global alias.unstage 'reset HEAD --'
    git config --global alias.last 'log -1 HEAD'
fi
export CDPATH=.:~:~/src/
Some of the downsides here are: (a) only one SSH session; (b) spawn gets hung when the session times out and I have to log into the host to kill the container; (c) it shares the IP address with the host OS [not a big deal]. At least my development is going well for the moment.

Saturday, May 30, 2015

Container host marketplace is gearing up

I don't know when Joyent entered the market, but it only hit my radar in the last few months; maybe 6. At the time I was watching a demo of SmartOS and their containers. I do not fully grok what they are offering, but it appears to be something based on OpenSolaris and some sort of Linux container/VM tech. Since they are in the VPS business this layer is critical to their success.

CoreOS is about to hit its 2-year anniversary. They have a free product that is as bare metal as you can get. There are some moving parts that make clustering, scheduled updates, and configuration easier. They have also created a commercial version of Kubernetes, which is the orchestration layer above the bare metal (without the multi-tenant features). They are also active in the APPC and container definition projects, as they believe that Docker is not secure enough.

ProjectAtomic and its related projects are still active.
Snappy and Docker from Ubuntu.
Rancher, RancherOS, RancherVM are also making progress.
Panamax, Mesosphere, Apcera, and so on and so on....

However, as I watched a brief demo of Android M from a 20-something at TechCrunch, I found myself wondering if my father felt the same way about me and my tech when I was up and coming. It remains that so many of these projects offer an interesting and exciting glimpse into the future, but what they lack is a crystal-clear view. Too many companies have picked up that ball and tried to cross the finish line only to drop it short of the end. And many had good products...

So here is my vision:

CoreOS constrains the operator with very clear but somewhat undefined guardrails. Containers can be airtight or they can be porous. In the latter case you might need sidekicks or ambassadors. Networking is a clear pain in the ass. VPNs and other segmented networks are even worse. The tools simply do not exist... I think there is an intent to implement some sort of policy feature similar to Apcera's. In Rancher there is some sort of intra-container feature. Rancher also provides its own sidekicks. But all of Rancher's orchestration is manual. Discovery services through etcd, consul, or zookeeper are not secure, encryption is meaningless, and the APIs are just more work for the user.

All of these projects are weak. They lack adjacent and simple tools for monitoring, orchestrating, operating, integrating, managing. It's too easy to say "it's your profession so learn it". In fact it's a cop-out. Yes you need to know your tools but in this case you also need to get work done and not many freshmen or journeymen are going to hack special purpose one off scripts in production. Just ask Knight Capital.

pay per view is up

I recognize that most people see pay per view as a brand name; however, I'm referring to the not-so-free versions of Netflix, HBO Go, Amazon Prime, Roku and so on. In each of these examples you have to pay to view, and you definitely pay per view.

So the question, for me, is what is the cause of this sudden decline? Is it:

  1. letters from Comcast and other ISPs who operate on the bounty program with the property owners? They certainly do not do it out of altruism.
  2. What about individual DMCA letters? I think the press has squashed that one by embarrassing the property owners.
  3. How about the deep discounts of products like Netflix, Amazon, Comcast on demand, etc.? For $8/mo it's all you can eat.
  4. And maybe the numbers are off a bit. With the advent of TOR, darknets, and private sharing clubs, these numbers are off the grid.
It's probably a combination of things and not limited to the above. The Pirate Bay has been in and out of trouble over the last few years, wrapping themselves in the cloak of free speech and similar legal prophylactics, which could not be sustained over time as the founders and operators have been incarcerated and held liable. Frankly speaking, there was a time when The Pirate Bay was easy to navigate and easy to view and review content on. They only had 2 advertisers; one of which, MacKeeper, is now considered by most to be malware, and the other is a porn site.

But as this grey-ish area of the net teeters on the fence of free and not so free malware, sticky websites, poor quality, viruses, and the fear of getting caught... the alternatives are just too plentiful.

My only complaint is that after you combine the costs of:
Comcast $100-200/mo
Netflix $7/mo
Amazon $10/mo
iTunes mkt value
you still cannot (a) get everything you want to watch, (b) get all the network TV included, and (c) even with everything added there are still independent holdouts.

The strange thing is that some ISP and cable providers could provide a one price buffet of lobster and fillet but they don't. Comcast is close but their prices are higher than the aggregate, the software SUCKS, and they also inject advertising in place of the original adverts. It's good for them but crappy for me. If we are already paying for it then it should be sans commercials and if it has commercials it should be free. Everyone is getting just a little too greedy.

PS: Comcast is under scrutiny for being a monopoly. Their defense is that they do not compete with themselves in the various markets they are in. Sadly, in most cases there is only one cable provider in any community, and when there is an opportunity to compete only one division is given access. OTA and satellite are not an option. It's an embarrassment to our country that deep pockets win the day.

Don't answer the phone

Every couple of months I get a phone call from outside the country. I always hesitate to answer because I know it will end up being a waste of time. This latest call was from India or possibly Pakistan; the caller identified herself as working for network operations. She continued to tell me that my computer, which was either a Dell or an Apple or Microsoft Windows, required updating and that she was going to help me through the process.

Before I continue let me be very clear: this is a scam. Neither Dell, Microsoft, nor Apple will call you to tell you that you need to do something to update your computer. This is a social engineering play by which they are attempting to get your credit card number in order to charge you unreasonable amounts of money for an individual workflow or possibly even some sort of ridiculous subscription. In the worst case they're stealing your credit card number for nefarious activities that will take place in the future.

If you happen to use your debit card to pay for this transaction, the caller may drain your debit account long before you have an opportunity to cry foul or execute some sort of protection plan with your bank.

The other side effect of the workflow the operator walks you through is that it may in fact turn your computer into an active or dormant bot for some sort of future attack, which may or may not affect your own country or even some foreign country, whether its business or government. Worse yet, because you are giving them access to your computer, and most computers have known and unknown backdoors, you are potentially also giving them the opportunity to capture additional information, whether that's social security numbers, credit card numbers, or even your buying habits as you surf the net normally.

Do not give them your credit card information. Do not allow them to perform any sort of remote tasks against your computer. Do not let them access your computer at all. Do not let them instruct you on commands to execute on your computer. All of these things will end in a negative outcome for you.

The rule of thumb here: at the very least, if the phone call does not originate within your country, do not answer the phone. Nothing good will come of it.

Friday, May 29, 2015

rkt and actool

I'm at the beginning of using rkt (rocket) and actool. It's obvious to me that the docs are insufficient. While I would like to help the team out with dedicated documentation, I have to get some real work done. My latest project will likely be deployed on a CoreOS cluster. I'm looking at the Beta and Alpha branches because I will be depending on etcd 2.x and the latest rkt tools.

The best part is that both rkt and actool have been installed in the CoreOS image. I'm not sure exactly when that started but for the time being the latest Alpha (695.0.0) is good enough.
$ rkt version
rkt version 0.5.4
appc version 0.5.1+git
$ actool version
actool version 0.5.1
But the challenge for me is that I'm using CoreOS for development too. So while I have a fairly involved setup script I also need to install rkt and actool.
tar xzvf rkt-v0.5.6.tar.gz
cd rkt-v0.5.6
./rkt help 
env GOPATH=/root/_vendor/ go get
The actool installation was pretty tricky because the source did not provide this info.

There is another tool called goaci. This tool is supposed to create ACI files from your go project. It did not work for me (as yet) because I have a private project and the tool may not be smart enough to use my keys. Especially since the tool seems to be converting the git commands to use the https APIs instead of git, which would use the SSH keys.

UPDATE: you have to move the contents of the rkt folder to a bin folder somewhere. In my case, since I'm using the CoreOS toolbox, I moved the files (rkt & stage1.aci) to the /root/bin folder.

Wednesday, May 27, 2015

timing is everything

Sitting in a car dealership waiting for my car to be repaired gives a person time to think about the big picture. And so I found myself asking questions like:

  • why does the shop open early and close early?
  • why does the showroom open later and close later?
Just like a semi-tractor trailer that is not making money unless it's moving, a car dealership is also not making money unless people are working. An idle work bay is not making money. (just watch the staffing levels).

I'd like to say it's a conspiracy but it's not. It's practical planning. They are going for the sweet spot. The dealers want to show movement on both sides of the house without exposing new car buyers to repairs (warranty or otherwise). And they certainly do not want disgruntled customers near the new ones. They also want the building to host both kinds of customers so that there's no wasted space.

I suppose it's not a terrible plan, and it might actually be something that could be extended to manufacturing or software development. If ALL of the team were working on new code in the morning and then debugging and customer complaints in the afternoon, then the CI/CD pipeline would represent actual work, such that a deployment might have a fully vetted feature or fix in a single cycle instead of over multiple cycles.

To be clear, I'm suggesting that when a fix or feature requires a team of contributors, the team should be working on the same feature at the same time so that the thing they are delivering is in sync with their peers.

Tuesday, May 26, 2015

Chromebook Powerwash - killer app

I was having trouble with my mini-Jambox this morning. It's a small Bluetooth speaker from Jawbone. Sadly, Jawbone continues to have its troubles (remember UP?). I'm not sure if it's the company or its products. It's not uncommon for businesses to be marketing entities for manufacturers who provide their IP in different packaging. (It's a common question on Shark Tank.)

After trying different strategies to connect, reconnect, pair, re-pair, and forget, nothing worked. One person suggested Powerwashing my Chromebook. Let's be realistic: Jawbone has no real support for my CB or android phone. Their firmware update tool only works on Windows or OSX. So this problem was just frustrating.

But here is the thing. The Powerwash took less than 3 minutes and my Chromebook was back in its factory state. The last time I had to re-install OSX or Windows it took days to get my environment back to the way I wanted. At the very minimum I had all my tools installed and all of my icons and settings exactly the way they were (keep in mind I have sync-ing turned on).

I am no longer afraid of the Powerwash.

Monday, May 25, 2015

Facebook for Android is scary

I was just looking at my android phone's battery consumption and the number of background apps. I found that the facebook app was running in the background. It's likely that this was because it was set up for push notifications; however, even after I disabled notifications the background process was still running. But then I scrolled down to find the section on "permissions". And that's when the sugar hit the fan. If you knew how much information you were giving facebook access to, you would never have installed it. Now the question... how much information have they actually gathered?

Sunday, May 24, 2015

metrics and monitoring

Metrics and monitoring are related but not the same. Coda Hale has defined much of this field in a very generic way. Crowley ported his ideas to go. As I construct services on bare metal, VPS, and containers I have varying needs for my OPS role. In the past I have used rrdtool and graphite. Both projects are rock solid but they are starting to show their age. Not to mention there have been some projects that modernize the market.

Go-metrics is a good project because it upgrades the glossary of terms so that its usage and context are clear where its predecessors were not; however, it does not offer any sort of visualization.
OpenTSDB - Open Time Series Data Base
Prometheus is an open source project from the prometheus team. It's offered under the Apache 2.0 license, so it's favorable; however, other than Google image results I don't see any application examples. (I thought prometheus was from soundcloud but it seems I'm mistaken.)

Bosun is offered under the MIT license and was developed by StackExchange. The GUI looks like it offers some advanced tools for the operator beyond just a dashboard although a dashboard is something I need. (OpenTSDB and the datastore that backs Bosun needs some review)

Grafana has hit its stride. The project reached its 2.0 mark recently and its UI is very pleasing to the eye. In particular I notice that it supports influxDB, which is currently free, open source, and has its own query language and UI. (go-metrics supports influxDB and there are plenty of native APIs for influxDB)

This is still a work in progress. Comments welcome.

Entitlement, Go Pro, and Poop

I admit I'm still pretty angry and so I'm venting.
This morning I decided to get some fresh bagels for my family, and as I was exiting my gated community I stopped at a red light which turned green as I approached. Before I could get my car across the crosswalk, an onslaught of 50-75 riders blew through their red light, denying me the right of way.
Since riders are known to cover 25-100 miles in a ride I cannot say whether or not they reside in Weston... however, many Weston residents have a sense of entitlement which I now extend to these riders.
As I fumbled for my phone in order to capture the moment for the city council, I was reminded of some YouTube videos I saw a few years ago where Russian drivers used dashcams as a sort of insurance policy, as there had been a rash of people stepping in front of cars in order to collect some financial payoff. I was thinking that I could have used a camera at exactly that moment, as I could see my right of way was clear and the riders were certainly in violation.

This is not the same as (a) opening your door and letting your dog poop without a leash, or (b) watering your lawn on an odd or even day. A mistake here and a lot of people get hurt.

Saturday, May 23, 2015

The 10x programmer or the Super-polyglot

I was just looking at swagger for a project I'm working on. I have a number of concerns about integrating with non-critical 3rd party systems; the least of which is privacy and security. What concerns me most is that most new programmers are not polyglots and actually have an opinionated view of the ultimate development stack, and so the two compete for mindshare as we try to decide what to build, integrate, or buy.
I continue to proselytize the virtues of knowing your stack in order to avoid careless integration.
Going back to swagger: it's written in java, and while that's not a very big deal, there are the mixed signals from Oracle depending on who you follow and the news you read (FUD). But then there is also the embedded crud-ware that Oracle includes in the installers. You cannot just install Java and go. Of course there are 3rd party JDKs, but then you have to validate which JVMs are supported/tested with which app; and so version creep begins. That can be offset with containers like rkt and docker.
The amount of wrapping is starting to get pretty thick. Each layer requires expert knowledge. Each layer requires regression and readiness testing.
And then there is the question of real value. Swagger's value proposition is its code generation and its GUI. There are several parts to the code generation and it depends on your perspective. Swagger's code is generated from a spec file. There is nothing special about the spec, but the generator will generate client stubs for the defined API.

This is interesting in that the client stubs can be generated with version consistency with the spec file, so long as the file is kept consistent with the server side APIs. And that's the rub. Where is the server side in all of this? The amount of work required to keep the spec file current should be minimal and a product of some other compilation step.
Code generation should be a one-to-many operation. Given some defined API method, a generator should be able to identify the APIs and generate both the API entry point and the client stub. (sourcegraph has some interesting go tools in this space)
Your 10x programmer and your super-polyglot will drown if they get tripped up in menial tasks. It's the wrong problem for the wrong person. I'm also trying to say that if you know your tools you should not have to use this sort of tool.

Friday, May 22, 2015

iMessage vs Hangout vs Slack vs other

The only opinion I have on the matter is that I want everyone to use what I'm using. The reason is quite simple; while I'm not an expert I have the necessary pragmatic sensibility between usefulness and security. Regardless of what messaging/communication system you use there is an overarching concern about security. Whether it's personal or corporate security it's irrelevant. If it's a private app then we feel that there is a better chance that our secrets stay within our group.

Security/privacy is a fleeting expectation. Cloud services and open source have put demands on our privacy and security that are (a) undervalued and (b) overexposed.

  • most email interchange between servers is router-directed but in the clear
  • most VOIP is not encrypted at all and uses UDP for media transmission
  • DNS services tell the providers what you're reading, watching or listening to
  • Search providers know what you're looking for
  • TOR may or may not be secure but someone could unwind your privacy if they can connect the entry and exit from the TOR
  • SMS messages are stored on the telco servers
  • every time your cell phone or sleeping computer wakes up momentarily to see if there is something for it to do, it pings whatever wifi it sees, reporting your MAC address to the provider
  • your smartphone GPS and smart maps tell the provider where you are and what wifi SSIDs are close by. They also report your speed, which can be correlated to the maps for marking congestion; however, that means the provider knows your speed, and at some point it might call the cops on you.
So when your company is 50% iOS/OSX and 50% ChromeOS/Android and whatever else is out there... how are we to standardize so that we can all talk to each other? The security issue has to go away and vendors have to get the interop working. There is no reason why facetime cannot talk to hangout! Granted, WebRTC is part of the way to creating an interchange, and with open source PBXs like Asterisk, FreeSwitch and others it has to be possible... and sooner rather than later.

One huge challenge is that when you or I are in the DMZ of this communication war you either have to pick sides or use them all. "All" is just not practical. And bringing this in-house is not practical because the cost of development, maintenance, integration is not possible unless you have the deepest pockets.

I'm not sure if I made the case I was hoping for, but I am frustrated and at the end of my rope.

Thursday, May 21, 2015

processing the hexadecimal prefix in Go

This is not considered expert level code or insight but I happened upon this today and wanted to document it for myself.

strconv.Atoi() is essentially a shortcut for ParseInt() with the base set to 10. Therefore, if you try to Atoi() a string with the hex prefix, it will throw an error. The hex prefix is only a hint for the parser and only applies when the parser does not know the base; hence, when the base is ZERO.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    x, _ := strconv.Atoi("0x12")
    fmt.Println("returns 0 because the base is implied to be 10:", x)
    y, _ := strconv.ParseInt("0x12", 0, 0)
    fmt.Println("returns the proper value:", y)
}
UPDATE: I should have read this link before posting. While there is nothing wrong with this code there is a level of improvement in my project.  I should use "\x12" in my project.  That will meet my needs without any other distractions or conversions since my next step would have been to convert the int into a rune or byte.

JSON has no comments

JSON is widely studied and understood, so I will not repeat this Wikipedia article. However, I'd prefer to focus on this one fragment from the same article.
...originally derived from the JavaScript scripting language...
 What makes this fascinating is that the JSON schema is absent of any comment structure. Even XML has a comment pattern:
<!-- something here -->
The thing about XML comments is that they are not part of the userspace grammar. They are simply notational for the user, and it is expected that the XML tools are going to drop the comments.

Conversely, JSON does not offer anything similar. There is no comment mechanism in the schema or in the userspace definition, however, the user can decide to define attributes that would represent comments.
{ "description": {"something here"}}
What makes this superior is that the comments are now embedded into the data set and can therefore be include in reports, documentation, or tools.

continuous integration

I've been in devops for almost 30 years. This clearly predates the origin of the word. Back in those days we did not care what we called ourselves; it was just about the work. In the last few years that has changed as freshman programmers have entered the field and wish to distinguish themselves in order to catch the wave that others have already ridden.

I have used or am familiar with a number of CI systems: Drone, Travis, Jenkins/Hudson, TeamCity, Go (aka Continuum), and the huge build system IBM had for its OS/2 builds. One could make a distant argument that some IDEs and Makefiles are also CI systems; however, it would be wrong.

So while I have been working on my IDE project I decided that I would start with tmux (still not there) and a change triggered makefile. This is not exactly a CI system but it is the foundation. The "autobuilder" takes some lessons from the CI systems...
  • when the user saves a file then start the build
  • ignore some files
  • do not get confused by meta artifacts
  • custom build tasks
  • stop on the first failure
  • capture the output
  • stream the logs
This will eventually get some new features.
  • implement git receive (integrate with github and bitbucket)
  • trigger docker builds similar to Apcera Gnatsd (link)
  • integrate with my remote IDE so that it can build local and trigger git builds
  • auto builder dashboard
Pull Requests are welcome.

UPDATE: and I forgot the other point I was trying to make. Depending on the size of the project(s) you are working on, your $$$ will go a long way. The thing that bugs me is that companies like Travis and Drone charge a lot of money. Travis charges $129/mo and Drone $20/mo. But you can build your own micro build machine for $20/mo on a VPS anywhere, and if you implement a dynamic build system you can shave that price even lower.

UPDATE: I was boyscouting my inbox a few minutes ago and I stumbled over emails from Cloud9, Nitrous, and Codenvy. While these projects are awesome, I think we can all implement our own. With some basic skills in server-side programming, implementing a websocket app, a static server, something that looks like the rmate server, and a CI makefile thing... it's all very doable.

Wednesday, May 20, 2015

ChromeBook add-on idea

Crouton is ok... just ok. Without actually having been able to install Crouton I have a number of concerns. First of all... what is it and what can I run in or on it? Since its name is an acronym built on chroot, I'm concerned that it's not much more than a container, and if so, why didn't the Googlers responsible for ChromeOS implement Docker as the preferred container? One thing for certain is that I do not like the "devmode" warning message.

Another option is dual boot and even at 9 seconds to boot ChromeOS it's still slower than I want for this purpose.

Now that Intel is selling a computer on a stick and Google the ChromeBit, it's just a matter of time before we see what I'm looking for: a computer on a USB stick. Now that would be the cat's meow. Especially if it connected to a Chromebook with auto networking etc.

non-trivial encoding with golang

Copying struct data from one golang structure to another is pretty simple depending on the variety of structs.
  • create your function with a signature similar to Copy(dst, src interface{})
  • at the top of the function check that the parameters are pointers
  • depending on your strategy you might use tags or common field names as the constraint
  • iterate over the structfields and copy the data
  • you might have to validate the types while copying
I've implemented this a few times myself, custom for each occasion, but I would really like a more general solution. Meaning that while the above is mechanical, sometimes the data needs to be transformed, and there is no simple mechanism for that.

Ideally it would look something like a sed replace formula. In my latest implementation I used the embedded tcl that I had previously incorporated in goose. This time I added a tag called macro and provided the tcl name to execute. This initial implementation is weak because tcl returns strings only, which creates a number of challenges for a more generic solution.

PS: My tag was named macro, but I think callback might have been stronger, even though I have javascript nightmares at the thought. Then again: skip the tag and expand on the reflection. When the copy() function is processing the reflection, look for a named transformation method and execute it. This way I don't have to implement the function and assign its usage.

UPDATE: I have added/created a project named transcode. It does a good job of copying data between structures.

Tuesday, May 19, 2015

a billion messages a day

A year ago today I wrote this post [recently and I thought I'd wait] and now I'm unleashing it on my readers.

The question of aggregated logging rages on. The stark reality is that there is no real or correct answer, only time and money relative to the current state of the economics of the system. Google only shuttles actionable events to its MQ and operations, meaning that if you need to debug a problem you have to log into the system that generated the error. On the other hand, there is the option where all of the messages are forwarded to the aggregate server for immediate evaluation.

And so the proof was presented to me this way:

at my previous company we had 600 servers producing 1B messages a day which we processed on just 15 servers with varying functions.

At first this seemed like a rational description but I was still skeptical; and it only took some simple math.
  • 1B messages divided among 600 machines leaves about 1.6M messages per machine per day
  • spread over a 10hr day, that means 160K messages an hour
  • making a guess that there are 5K messages per transaction ... 
That means the system was processing about 32 transactions per hour (TPH) per machine. And even if we agree that they produced only 1000 messages per transaction, that's still just 160 TPH, or about 2.7 transactions per minute.

Now let's do the math the other way:
1000 transactions per minute × 60 = 60K transactions per hour per server
60K × 1000 msgs = 60M messages per hour per server
60M × 600 servers = 36B messages per hour
and over 10hrs that's 360B messages per day
If the average message size is 500 bytes then you are talking about something like 180TB per day. And when you are collecting data at this rate you are probably accumulating multiple petabytes over time. Not to mention backups, with or without compression.
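A quick sanity check of that arithmetic; the per-server rate, message fan-out, and message size are the post's assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	const (
		tpm        int64 = 1000 // transactions per minute, per server (assumed)
		msgsPerTxn int64 = 1000 // messages per transaction (assumed)
		servers    int64 = 600
		hours      int64 = 10
		msgSize    int64 = 500 // bytes per message (assumed)
	)
	// per-server hourly rate, scaled up to the whole fleet and day
	msgsPerDay := tpm * 60 * msgsPerTxn * servers * hours
	fmt.Println(msgsPerDay) // 360000000000 (360B messages)
	tb := float64(msgsPerDay*msgSize) / 1e12
	fmt.Println(tb, "TB per day") // 180 TB per day
}
```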

Things I like to hear from Technical Leads

I'm not going to define the role of a tech lead, but one would think that a tech lead is part leader of people and part technical and subject matter expert. There are things a tech lead should not say; or rather, tech leads who say these things might not be qualified.
  • I forgot my database classes
  • I think the language supports that feature
  • I did a little reading last night
I don't mind that you've forgotten something, or that you might not know or remember all of the details; however, do not offer an opinion until you are refreshed.

Doing a "little" reading is useless. That's going to give you enough information to be a meme and not a leader or innovator. You should be reading regularly and a lot.

I'm not sure what to do when a programmer says:
  • I cannot code without an IDE
  • I need a debugger
You're still a beginner if you cannot write, compile, and test your code from the command line. Furthermore, you are the debugger.

Basic tmux

I have been in and out of trouble with tmux. (If there is a preferred home website for the project I don't know it, and google is not sharing.) This video was the tipping point for me. It's part of a series, but it was the one that let me accept the fact that I was going to use bash to construct my console rather than the given conf file.

I ended up with a variation of his file:
SESSION=`basename $PWD`
tmux -2 new-session -d -s $SESSION
tmux rename-window -t $SESSION:1 dev
tmux split-window -h  # "echo try:  ssh builder@"
tmux resize-pane -R 30
tmux split-window -v "builder ~/src/"
tmux split-window -v "openvpn ~/src/certs/client.ovpn"
tmux split-window -v "echo minifs; cd ~/src/gwtwo; go run ~/minifs.go"
tmux select-window -t $SESSION:1
tmux select-pane -t 0
tmux -2 attach -t $SESSION

Monday, May 18, 2015

Chromebook Crouton

Spoiler alert. I tried to install crouton but it failed.
  1. Put Chromebook in developer mode
  2. Download crouton (link)
  3. Execute crouton (sudo sh ~/Downloads/crouton -t xfce) from the shell (Ctl+Alt+t)
Step 1 is tough. There are multiple ways to put your Chromebook into Dev Mode, and dev mode should not be confused with the ChromeOS dev channel on the about page in the settings. It seems that each manufacturer has a different way: on many ARM processor devices it's a keyboard shortcut, and on my ASUS Chromebox it seems to be a recessed paperclip button.

Step 2 is a snap and with any luck the short URL is legit.

Step 3 failed to complete the first time. I have no idea what went wrong, but it failed to download a package that subsequently failed to install, and so the process stopped. I was able to restart the installer but had to add the -u option. This option is meant to update an existing installation, but in this case I was hoping it would resume my current install.

During the installation I was prompted for a username and password; which subsequently never worked; although it could have been user error on my part. When the installation finished I tried to ssh into crouton but failed. I tried "enter-crouton" in the shell. I tried the other window (Ctl+Alt+ <-) and (Ctl+Alt+ ->). My user and password did not work here. And after I tried 3 times the monitor went to sleep and I could not wake it up. I rebooted and tried again with different uid/pwd and again it slept.

PS: In the ChromeOS WebStore there is a Crouton extension. This extension is supposed to let the user launch an X11 session in the browser; connected to the crouton instance. This is probably just a virtual terminal in the form of VNC or similar.

I don't know that I care anymore. I thought dev mode was going to let me run some apps that I needed to offload from my MBA, but now I'm thinking I should have just purchased something completely different. And now I have returned my ASUS Chromebox to its original state.

PS: It was really annoying to be faced with the "dev mode" screen every time I rebooted while I was trying to get to proper login.

almost sharing chrome tabs

I like Chrome's tab and bookmark sharing. My only criticism for the time being is that I cannot delete tabs on a remote system.

Sunday, May 17, 2015

It's 11:45p Should you be switching to Developer Mode

I want my ASUS Chromebox to do a few more things than I can get accomplished in normal mode; however, making that change in a time when we have containers like Docker and rkt is a shame and a pity. The ChromeOS developers came up with this idea of Recovery Mode; however, from what I can tell it's just a simple chroot jail. But I think I've already made my point.
Getting my Asus Chromebox into recovery mode was as simple as inserting a pin into the recovery hole just above the Kensington lock hole.
I should be able to recover at any time and restore my system to its original luster if things go terribly bad. While I have been thinking about this for a while, I had not executed. And now that things are progressing I'm having my concerns.

  • many of the screens and popups that are described are incomplete. There are plenty of warnings in the negative but very few in the affirmative.
  • Now that the machine has rebooted twice (automatically) the machine has clearly been power washed and I have to tell it about my network. (could have been worse)
  • Having to put my username and password into the dev mode system is giving me a hiccup. I hope my dev mode system does not sync any crud into my google repo. (It's better that my sync password is different from my user password. Thank goodness for small favors.)
  • While I'm in developer mode I'm not sure what to do next. I'll have to google the process again and bookmark the steps this time.
I'll try to update this with my next link.

Loving those Music Bubbles

Music Bubbles is an awesome controller for Google Music.

  • the controller is hidden if the player is not running
  • the icon is simple and at the right alpha channel so that it does not overtake any webpage that it's displayed in
  • there is an animated progress bar which is also subtle
  • the controls are reasonable
  • you can "blacklist" a particular address so that the control does not interfere with the page
My only complaint would be that on a ChromeOS installation it would be nicer if it were a desktop icon instead of a browser control. Of course this is only a minor complaint.

Bluetooth Speakers

A few weeks ago I borrowed my brother's bluetooth speaker for my daughter's birthday party. It was awesome, except that with all that bluetooth radiation in the room it seemed to interfere with the audio quality. Since then I have played with a few of my friends' devices, and some are much better than others. The big Jawbone device is looking very nice to me, although the mini looks much more practical. Frankly I'd like to have both. One feature that makes the mini compelling is the built-in microphone and multi-play.

Chromebook missing killer feature

About 10 years ago I discovered an early version of a program that would allow me to host my keyboard and mouse on a single computer while virtually stitching multiple systems together, much the way a KVM works but without sharing the displays. What made synergy a great project is that it supported multiple operating systems.
Now that Chromebase and Chromebox installations are on the rise it would be great to have similar functionality. It's important to note that synergy uses a client/server model with a virtual clipboard; a chrome version of the same feature would be killer.

Google Hangouts vs Skype vs iMessage

iMessage on an iPhone, iPad, or OSX device is the gold standard for getting all of your devices to ring when there is an incoming call or text message. The actual implementation is a bit harder to realize, but it goes something like this.

  • If both parties are an iMessage user then the message is sent through the internet proper to the Apple servers and then directed at the configured devices. (blue send buttons and bubbles)
  • If one party is not iMessage capable then the messages are sent through the traditional SMS network. (green button and bubbles) But if the receiver is an iDevice then the message is redirected back into the Apple message network and is redisplayed on all the subscribed iDevices.

Of course regular phone calls land on the phone and then are simultaneously networked to local subscribed iDevices. This seems to be a variation on the tethering theme only in reverse.

Recently Google Hangouts and Google Voice started to merge. Hangouts has become my de facto analog phone and desktop SMS. Getting everything working well together was pretty easy, but not for the uninitiated; in this sense Google seems to be a bit of a jalopy. The missing element is that it helps to have a Google Voice account. That allows you to tie all of the comms together across all your subscribed desktops and devices. Transitioning to Google Voice can be a challenge because it is a new phone number, and so you never know who has what.

Google Voice is a topic for another day.

Skype was acquired by Microsoft a few years ago. It was a good buy for them as they needed to capture some technology as well as the "eyeballs". In recent months MS has been collapsing many of their tools into Skype. For example I understand that Lync and Skype are merging in some way. This is a good idea since there is some overlap and it will introduce more new people to the services. The challenge for Skype is (a) their quality is lacking and that may be a function of a less than ideal environment. (b) they invade my address book (c) the abuse that is out there (d) and advertising when I'm a paying subscriber.

The quality issue could stem from any number of causes; however, it has not improved since the days prior to the acquisition. I think the Skype team designed a system based on ideal conditions instead of reality: WiFi conditions vary widely, device capacity versus codec demands, local and remote ISPs, and the networking model (switching only vs switching plus media).

Neither Skype nor iMessage has a ChromeOS version. But I do not know if I care, because Hangouts works everywhere I do.

Saturday, May 16, 2015

By any other name it's still a chromebook

I'm looking at the list of devices that my Nexus 6 is paired with and I cannot make heads or tails of which Chromebook is which. And by any other name, my Chromebox is also named Chromebook. Argh!

The good news is that hostnames are not that important. If I need to xfer files from here to there then there are reasonable services like your favorite SAN or NAS. And if that's not realistic then there is always one of the popular cloud services. The basic idea is that the Chrome-device is a client and not a server... and it should never act as a server. So hostnames are simply not necessary.

Except when pairing Bluetooth devices.

UPDATE: I can rename my connection names on the Nexus 6 but it takes discipline.

mosh for chromebook

I've never been a fan of the mosh project. I believe that their claims about being more secure than ssh are simply false since ssh is a common core component; even if it's only used to initiate the connection. I'll use the recent ssl vulnerabilities as evidence.


I'm moving into a 100% remote strategy. Meaning, I'm giving up my Windows, OSX and Linux computers in favor of a ChromeOS solution. Each of my kids has their own Asus Chromebox, I have a similar Chromebox as a stationary desktop, and two Chromebooks for mobile and experimentation. And... with the Chromebook Pixel I even have a 3 year 1TB data plan.

I'm adapting to the remote/wireless lifestyle, and it's actually working out very well. Between tethering with my cell phone, free wifi at my favorite stores, and the library... I have never been without the ability to work. It's actually better than it was when I had my 11in MBA.

And there is one downside.

I'm putting together my server farm and I've decided to use CoreOS. There are many reasons for this decision.

  • auto updates (similar to ChromeOS)
  • containers (rkt and docker)
  • clustering and HA (etcd, fleet)
And while I'm also doing my development on the CoreOS server, I'm using the CoreOS toolbox. Unfortunately there are a few weaknesses: (a) only one instance at a time per user, (b) tmux and screen are ok, but when you drop the connection everything is terminated, and (c) have you ever seen those people running around the office with their laptops open because they did not want to lose their active sessions?

So there are a couple of options.

Install the mosh server on the CoreOS host and use that as your point of entry. To the credit of the CoreOS designers there is no package manager, so that's just not going to work. And frankly, I do not think I want my users logging in this way.

The next option is to integrate mosh into the toolbox script. This should work, for the most part; however, it's also going to create some entropy (if that's the right word). The scenario is like this: the user logs in; CoreOS authentication, like others, grabs the shell script from the passwd file and executes it. At this point the ssh session has been established and forwards the connection to mosh... assuming that this works (it does not currently). 

Some observations:
  • Assuming I've lost the client connection... logging in with a second connection should still fail because it's a second connection request and one is already in progress
  • we will need a way to kill the current connection if the client suffers a failure or related use-cases
  • The active client session should still be able to reestablish that server connection
This might be a strong use-case for mosh.

Chromebook Recovery Tool

This article looks interesting, although it's more of a how-to. It appears that regardless of where you are, you can download the recovery tool. The tool, in turn, downloads an image from Google which is then written to the USB or SD card of your choice. I'm pretty certain this becomes bootable media which the bootloader on ChromeOS devices will then use to reimage your chrome device.

Presumably the device will now be in factory condition.

Thursday, May 14, 2015

Chromebook Pixel Type-C USB

I really like my Chromebook Pixel with 16GB RAM, 64GB SSD, i7, and 128GB SD card. However I have a few complaints.
On the left side of the base of the laptop is the Type-C USB port. It's used for peripherals as well as the power cord. One of the things that makes Type-C desirable is that the plugs are not directional. There is no right-side up any more.

What you should also notice is that the width of the base is slim and the receiver is midway; meaning that in order to remove the connector you actually have to tip the laptop in order to wrap your fingers around the plug to remove it. And since there is a similar connector on the other side of the keyboard tilting the laptop could put unreasonable stress on the other connector. Furthermore there is no obvious way to lift the laptop in a smooth motion except to grab it by the screen bezel and since it's a touch screen that might not be a good thing either. One could always argue that closing the lid and then pulling the connector is the best way but then there are some sleep or docking state things to consider.

I wonder if (a) the connector should have been similar to the MacBook's MagSafe, (b) there should be some sort of slim contactless charger, or (c) the connector should be on the screen side of the case close to the hinge. At least then a one-handed person might have a better chance to manipulate the connector. Only [c] keeps the Type-C form factor in place.

Wednesday, May 13, 2015

ChromeBook the missing link

Sadly I am coming to the conclusion that the missing link is, in fact, missing. I might actually repurpose my ChromeBook Pixel as an Ubuntu laptop instead of the special purpose device that it is. It is simply not capable of the sandboxed development that I was hoping for, even though nacl was looking promising.

On the default Chromebook

  • there is no command shell, so there are no working git commands
  • there are a few editors that support git, however, there is a bug with bitbucket's git version
  • this is certainly not going to compile
After installing nacl
  • nacl has some tools installed by default
  • golang is NOT one of the tools
  • installing go from source requires 'tar' which is not installed
  • installing dev_tools in order to get nacl_ports installed is easy
  • but getting the dev_tools to work without bash in the expected place or being able to fork (backticks) makes dev_tools useless
dev mode
  • I only have the one Pixel and I'm not going to hack too hard without having some confidence on how to restore it.
  • I read one post that indicated that much of the ChromeOS security is disabled when in dev mode
Additionally, one other thing: VPN is still not working with watchguard... and I have even read a couple of posts that suggest that VPN is simply broken. In my opinion, however, the two are not related. In my case I need an ovpn file, and in the latter case the issue may be related to the ineffective GUI.

Finally, one alternative has been a "remote editor". There are a few, and some are better than others. At least one uses the vendor's servers as a proxy, which is clearly very bad for security. I've been playing with the idea and I have a prototype in both ACE and CodeMirror. Now it might be time to execute... especially since rmate has come to my attention. Now I need a stronger browser based editor.

Monday, May 11, 2015

Pico Services in Go

Peter Bourgon (@peterbourgon) spoke at FOSDEM 2015 (video) on the subject of "Go in the modern enterprise". Much of what he described as the weaknesses of Go were really examples of structured services. As Peter dives deeper into the description he makes the point but misses the implications.

Peter's definition of structured services fits into what I call dimensional systems, or Mandelbrot dimension: the implementation details of the service scale as the dimensional power is applied. And the actual realization of this exact system of services is found in two places: (a) flow based programming, and (b) go generate.

In the case of go generate, Quinn Slack of Sourcegraph presented at Google I/O 2014 (video). He made a strong case for a number of useful Go patterns. The strongest was the AST and go generate. In particular he talked about wrapping all of the "service" APIs with authentication APIs.
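For readers who haven't used the tool, here is a minimal go generate hook. The authgen wrapper generator it names is hypothetical, standing in for the kind of AST-driven tool the talk describes:

```go
package main

import "fmt"

// The go:generate directive below is inert comment text until you
// run `go generate ./...`, which executes the named command in this
// package's directory. "authgen" is a hypothetical generator that
// would read the AST and emit authenticated wrappers for each API.
//go:generate echo would run: authgen -wrap GetUser

// GetUser stands in for the kind of service API such a generator
// might wrap with authentication checks.
func GetUser(id int) (string, error) {
	return fmt.Sprintf("user-%d", id), nil
}

func main() {
	u, _ := GetUser(7)
	fmt.Println(u) // user-7
}
```

The directive costs nothing at build time; the generated wrappers are ordinary checked-in Go source, which is what makes the pattern attractive for structured services.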

I was introduced to flow based programming after watching some of the Flowhub and NoFlo kickstarter videos and supporting documentation. Later I implemented a similar framework in Go which functioned well but was incomplete, as it put a hard burden on the framework; that work is now better done with the ast + generate tools.

One anti-pattern for structured services is that the smallest hello world, in Go, is still about 8M. And as you start registering services with whatever discovery service, or virtual SOA bus, the challenges multiply.
  • zero downtime green/blue graceful deploy
  • idempotent transactions
  • audit-able call stack
  • distributed transaction services
UPDATE: The network/bus is still a very expensive operation.

UPDATE: I've traded a number of tweets with Peter, and one thing I discovered that I failed to describe here... the smaller a microservice gets, the greater the proportionate increase in the cost of servicing it. For example, monitoring a service has cost c. If you divide the service into n smaller services then the cost increases to (n × c). The burden might be multiplied many times over as n is distributed across other subsystems and services.

Sunday, May 10, 2015

Waiting for a connection

I have been waiting for a remote system to hit my server... this watch command makes that easy.
gnuwatch "date; netstat -vant|grep 4081"
Future versions will make a connection to the remote system.

Saturday, May 9, 2015

bootstrapping go web projects

At first I thought that this was going to be a good idea. Some sort of opinionated framework was going to bootstrap whatever project I might be working on... and voila. But that was before I started looking at the details and now I have a different opinion.

It's a bad idea to base my next project on this sort of framework.

Part of my opinion is intuition and the other part is experience. While the authors have made some excellent choices, and they will work in most cases, they are not going to work in all cases; and without a more general plug-in strategy you're better off knowing and understanding the ideas and the glue, and implementing your own strategy.

The authors have clearly just glued a bunch of 3rd party packages together. It's not a bad thing but you need to understand the code before you blindly incorporate it. The Go Authors are very clear on this point as the stdlib is fairly feature complete.

here is their list and my objections:

  • PostgreSQL is chosen for the database.
    • as much as I dislike CAP there are use-cases for document and key/value storage
  • bcrypt is chosen as the password hasher.
    • crypto is a huge risk for any system; I think bcrypt has been cleaned by the openbsd team so it's a good choice
  • Bootstrap Flatly is chosen for the UI theme.
    • can't argue with this as it's clearly pluggable
  • Session is stored inside encrypted cookie.
    • good
  • Static directory is located under /static.
    • ok
  • Model directory is located under /dal (Database Access Layer).
    • not certain this is a good idea. models and packages and CRUD are strong ideas but there is something to be said for generators which is missing.
  • It does not use ORM nor installs one.
    • good
  • Test database is automatically created.
    • meh
  • A minimal Dockerfile is provided.
    • since this is a go program it should have been built on the scratch container
  • is chosen to manage dependencies.
    • godep is the current gold standard but gb is on its way
  • is chosen to connect to a database.
    • this is a clear winner
  • is chosen for a lot of the HTTP plumbings.
    • not a chance.  This is a core and important system. You should implement your own.
  • is chosen as the middleware library.
    • see previous note
  • is chosen to enable graceful shutdown.
    • this cannot possibly work properly. Do it yourself and integrate your project with haproxy, vulcand, etc...
  • is chosen as the database migration tool.
    • schema migration is the hardest part of any deployment. This might be a good tool but it requires some investigation. Commercial ventures make a good living doing this sort of thing. An open source version of the same quality and versatility would be a great win (depending on the licensing)
  • is chosen as the logging library.
There are a few missing packages... go-bindata and bindata-assetfs. And a good Makefile/Dockerfile. Taking a page from the Apcera gnatsd project they provided a Dockerfile that is the makefile. 

The last thing that is missing is a license dependency tree. Just what exactly are the challenges here... if the dependency tree includes even a single license like the AGPL then you have a number of commercial licensing issues. Any corporate attorney would require this evaluation, so you are better off doing it for yourself before you get started.
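On the graceful-shutdown bullet above: later Go releases added http.Server.Shutdown (Go 1.8+), which is exactly the do-it-yourself primitive. A minimal sketch, with a timer standing in for the real deploy signal:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// gracefulShutdown starts an HTTP server, then drains it cleanly.
// In a real deployment the trigger would be SIGTERM from the deploy
// tooling (haproxy/vulcand rotating the instance out first).
func gracefulShutdown() error {
	srv := &http.Server{Addr: "127.0.0.1:0"} // ephemeral port for the demo
	go srv.ListenAndServe()                  // returns ErrServerClosed after Shutdown
	time.Sleep(100 * time.Millisecond)       // demo stand-in for the real signal
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Stop accepting new connections and wait for in-flight requests.
	return srv.Shutdown(ctx)
}

func main() {
	if err := gracefulShutdown(); err != nil {
		fmt.Println("shutdown error:", err)
		return
	}
	fmt.Println("drained cleanly")
}
```

Even with this in the stdlib, the integration with the load balancer is still yours to build, which is the point of the bullet.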

Good luck.

** know your stack!!! 

Friday, May 8, 2015

Reality check - the real cost of cloud computing

This Chromebook is being advertised for $259. It's an Intel i3 with 4GB RAM and 16GB SSD (according to the post they can be upgraded).
What makes this interesting: a single core with 3.75GB RAM and zero disk from Google Compute Engine costs $33/mo or $396/yr. While it's not the same classification as datacenter hardware, it might still make a better compute node at your desk, or a luggable mobile server, rather than putting everything in the cloud.

The ROI is clearly less than a year if you compare prices, even with a memory and storage upgrade. The keyboard and mouse are just a bonus.

Wednesday, May 6, 2015

Browser IDEs for general purpose programming

There are quite a few of them now:

  • cloud9 - sftp did not function properly. That was not a showstopper, but it is frustrating. I have a private repo and 'go get' is not pulling from it; this could be a common problem without baking in my ssh creds. (It does not support 'go get' since it defaults the workspace to the current folder and does away with the src folder; and the IDE keeps losing my server in its known_hosts.) [Latest go on c9: link1, link2; I should note that I was not able to compile go1.4.2.]
  • nitrous - this one needs another try; however, its cost is probably the highest in the group. And then came the truth: they are charging by the minute. It's a good thing I discovered this, because it's not actually very good. They tried to create a complete container, but then I could not clone a private bitbucket repo. My outstanding balance is $0.21 but they will not let me close my account until they've posted an invoice, and presumably I've paid it.
  • codev - easier to uninstall than login
  • koding - 
  • codebox - 
  • sourcelair - using go version 1.2.1; which happens to be the same version as cloud9. I do not imagine that they are the same vendor or using the same engines underneath but something's wrong.
  • cde - incomplete and does not support bitbucket in git mode as there is a bug at BB and cde refuses to implement a workaround that others have already implemented.
  • caret - no DVCS support and seemed to be limited to the local drive
  • txt - good editor but not much in the way of an IDE and it certainly does not build. It would be nice if there was a terminal session or at least an SFTP option so I could still open an offboard window.
  • zed - just cheap
  • coding the web
  • shift edit
  • codenvy - I was having a good time with this IDE, but it lacked any speed as it completely rebuilt the target build environment each time, making compile and test a very long operation.
  • codeanywhere - complicated pricing options although they seem cheap. Still seems to be missing something.
  • tailor -  I got stuck in the IDE trying to commit my code.
  • codepress
  • ra - simple and incomplete
  • code your cloud - might be interesting although it's young and seems to be run on my servers. Not a bad thing but I don't trust them yet.
  • tedit - I need to give this one another try. As with cde it does not support bitbucket's version of git and so I would need to move my project.
  • keypress editor - deleted before it could be tested as the help page returned a 404. I hope it has not been hijacked.
There are probably many more but for the moment I'm looking for a few key features: 
  • quality editor with syntax highlighting
  • need to be able to execute goimports against my code
  • github and bitbucket integration
  • compile, test, run my code
  • either checkout my code to my Chromebook or some other similar functionality
And so there are a few things in the works. 
  • I need to get gitlab running on my servers
  • I need to get my turbo editor implemented
  • I need to support git, hg, and fossil
It's my general conclusion that I need a real server to do this work. I can also make a good case for running an IDE like Coda along with some docker containers like gnatsd from Apcera. This has been a tremendous waste of time.

Frustrated Fedora21 trying to Go

Go once shipped a number of vim files that I could just copy into my .vim folder. Recently someone on the go team decided to extract those files, and that change has now propagated through all of the various curated distributions (read: RPM etc).

As I try to reconstruct my toolbox I'm struggling not to import every single 3rd party library under the sun. I also want my toolbox to be as mobile/agile as possible. By mobile I do not mean laptop or smartphone, nor do I refer to the agile manifesto. I'm referring to the fact that I manage many hundreds of systems, all in different levels of operation, and I need my tools to come with me wherever I go; they need to be mobile.

vim-go is neither agile nor mobile.

Tuesday, May 5, 2015

Missing features from web based IDEs

I'm trying my best to evaluate the various ChromeOS IDEs, and there is one overwhelming missing feature: one cannot cut and paste branches from one project-repo to another.
What I mean is that I have two projects that need to be merged and then all of the folders refactored.
As hard as I've tried, I've needed a proper LZ for my files. I need a place where I can clone the files and then move them around. While there are a few IDEs that provide bare bones services, it still takes as much effort to create the environment and cut, copy, and paste the files.

Not impossible just taxing.

Chromebook Powersaver - search for a screensaver

I mentioned that I have not been able to locate a screensaver. I guess that does not really matter anymore because I found something else. I was searching my Pixel's settings when I found a BATTERY button. When I clicked it, ChromeOS was kind enough to tell me how much power each application was consuming. And not to my surprise... the worst offenders, combined, were consuming about 70% of my energy needs. 

I cannot do much about that other than to plan better by using my phablet when it counts.

** This seems to be an opportunity for Google to optimize gmail's power consumption for both smartphone and laptops.

Native iPhone Sync with Google is not sufficient

I'm well into my first 30 days of Nexus 6 goodness. Sadly I am realizing just how many contacts the iPhone apps(phone and contacts) failed to merge between the iCloud, iPhone, and Google accounts. One cannot expect Apple or Google to go out of their way to merge/sync but it would be helpful if they would at least import from the other.

Now I have to hope that my missing phone numbers are actually sync'd to my MacBook Pro or iPad; however, I'm not holding my breath.

ChromeOS Screensavers

I don't have an answer yet. The one screensaver I installed (a) only blanked out the ChromeOS browser window and not the displays (b) did not actually power off the screen (c) did not do anything to the second display. Maybe what I'm really looking for is a power saver mode.
Screensavers no longer serve the function they once designed for. Back in the day of phosphorous and cathode ray tubes and even early model plasma and LCD displays it was necessary to prevent pixel burn-in. Today's displays are much more durable, however, most modern screensavers require that the machine run at full speed to render exotic images. Furthermore while LCD light sources are lasting longer than ever before we still tend to burn them out and discard them. All but the most expensive monitors are not repairable.
** When I worked for IBM they did not want us to power off our computers at night, but management insisted that we power off the monitors. Presumably the electricity cost of the monitors was significant, while the CPU boot time impacted productivity.

FSA and HSA are a waste of time and money

I was there in the beginning, when these programs were making their way through the prepaid systems developers. As a developer I was grateful to have the work, as prepaid transaction volume was on the rise but not at a sustainable pace. Back in those days investors were a lot less transparent about their investment goals and ROI.

Now HSA and FSA have become a money machine. They take money from all sides. And just like every other processor, if you fail to complete some required task by some date, the money is forfeited outright. And let's not forget that there is very limited carryover.

But it's your money!!

It's yet another broken and corrupt system.  Don't do it!

Monday, May 4, 2015

consensus says wtf?

Just how many consensus-based replicated key/value storage systems do we need? Clearly this has become the latest hello-world or todo application that everyone implements on their way to whatever is next... but is it really necessary, considering that all of these implementations are already open source and most, if not all, projects will accept pull requests?

Here's my list:

  • etcd
  • consul
  • logcabin
  • chubby
  • zookeeper
  • blockstore (???)
And that's just what I can remember. Follow this link for a list of about 50 implementations. I'll admit that there is something to be said for choosing different programming languages and some features, but still... at some reasonable point we need to elect just a few masters so that we can all go back to the task at hand: embedding this tool deeper into the stack.
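For all their feature differences, every system on that list leans on the same majority-quorum arithmetic: a write or an election succeeds only once a strict majority of the membership agrees. A minimal sketch in Go (my own helper, not code from any of those projects):

```go
package main

import "fmt"

// quorum returns the minimum number of members that must agree for a
// write or leader election to succeed in a majority-based system.
func quorum(members int) int {
	return members/2 + 1
}

func main() {
	// Odd cluster sizes are preferred: going from 3 to 4 members raises
	// the quorum without improving fault tolerance.
	for _, n := range []int{1, 3, 4, 5, 7} {
		fmt.Printf("%d members -> quorum of %d, tolerates %d failures\n",
			n, quorum(n), n-quorum(n))
	}
}
```

This is also why the proliferation is frustrating: the hard invariant is identical everywhere, so the projects differ mostly in language, API, and operational packaging.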

I suppose that, with raft being relatively new and paxos newly fashionable, the next generation of PhDs needs something to tinker with. But seriously, at some point we need to take it to the next level.

PS:  I realize that logcabin is from one of raft's inner circle, but there is no accounting for taste in tool selection. For example, it depends on scons, which has not been updated in 8 months; I cannot imagine why the git version is a dependency. Looking at the code there are a number of things that could be challenged, but that would be petty when the bigger issues have nothing to do with this project.

I suppose there is an interview question in here somewhere:
  • design or implement a raft protocol
  • where does it fail
  • what's it good for
  • can it be used for general purpose replication or key/value only
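For the "design or implement" question, the core of raft's leader election is the term and vote bookkeeping. Below is a hypothetical, stripped-down sketch in Go of the RequestVote rule; the type and field names are my own, it deliberately omits the log up-to-dateness check and all RPC plumbing, and it is nowhere near a complete raft:

```go
package main

import "fmt"

// RequestVoteArgs carries the fields a candidate sends when asking for a vote.
type RequestVoteArgs struct {
	Term        int
	CandidateID string
}

// node holds the persistent election state every raft member keeps.
type node struct {
	currentTerm int
	votedFor    string // empty means no vote cast this term
}

// requestVote grants a vote iff the candidate's term is current and this
// node has not already voted for a different candidate in that term.
// (Real raft additionally requires the candidate's log to be up to date.)
func (n *node) requestVote(a RequestVoteArgs) bool {
	if a.Term < n.currentTerm {
		return false // stale candidate from an old term
	}
	if a.Term > n.currentTerm {
		// A newer term resets our vote.
		n.currentTerm = a.Term
		n.votedFor = ""
	}
	if n.votedFor == "" || n.votedFor == a.CandidateID {
		n.votedFor = a.CandidateID
		return true
	}
	return false
}

func main() {
	n := &node{currentTerm: 1}
	fmt.Println(n.requestVote(RequestVoteArgs{Term: 2, CandidateID: "A"})) // true: first vote in term 2
	fmt.Println(n.requestVote(RequestVoteArgs{Term: 2, CandidateID: "B"})) // false: already voted for A
}
```

The follow-up questions fall out of this: the one-vote-per-term rule plus the majority quorum is what prevents two leaders in the same term, and a candidate that keeps losing elections is where the failure-mode discussion starts.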

codenvy browser based IDE

I'm converting my development environment from a standard desktop OS (Windows, OSX, Linux) to my ChromeOS machine, and browser-based at that. Having an i7 Google Pixel makes this fun and educational.

Ideally I want to do the development on my local machine but until Google adds features like containers to their current platform I won't really be able to compare apples to apples. In the meantime I plan to test some of the available IDEs.

The most recent candidate is Codenvy.

To begin the process I had to sign up for a free account and then import/create a project instance based on my bitbucket project. This would have been a normal and simple thing except that bitbucket is not supported as a first-class repo. To work around the problem I had to (a) generate ssh keys (b) create a small/empty project and then "import" my project, creating a second project. It took a while to figure that out.

In the meantime I had a few problems. (1) The git tools had some problems, and codenvy did not produce enough helpful information for me to diagnose them. (2) Since I'm writing code in go, I would have liked to run goimports as part of the pipeline to clean up the code, including tabs, indenting, and imports. At the very least they could support a user-definable tab size. (Support recommended that I use the chi plugins, but that's just not going to happen.)

The pricing is interesting. They have moved to a CPU-time usage model, which makes them different from the rest; it also means they are doing some sort of passthrough and markup. I don't think they are going to earn my business. It's not as polished as I would like.

Sunday, May 3, 2015

Need a new phone

I think they call my Nexus 6 a phablet. It's too big to be a phone and too small to be a tablet. Granted, I should not be calling or texting while I'm in the car, and we already have Bluetooth headsets for those handsfree moments... but there is nothing like the visceral experience of holding a phone in your hand. And unless you're a 20-something, you remember the pure cellphone envy when the StarTAC phones were first released to the public.

Given the complexity of phones today I doubt that the actual StarTAC could compete, and this idea of an embedded phone is silly.
But there is something to be said for a Bluetooth device synced with the phablet in my backpack: something that might respond to a few basic commands and then act as a high-quality phone. The best part of the StarTAC was that it was so small and light it was easy to put in a pocket. It never generated heat in standby mode, and the battery lasted a nice long while, had a high-capacity option, and was replaceable.

Bluetooth circuitry is very small and sips power. I've owned earpieces with hundreds of hours of standby time and hours of talk.

PS:  The StarTAC came in many flavors. (link)

Saturday, May 2, 2015

converting an .ovpn file to an .onc file

I'm not sure whether this is practical or even possible. While the .onc file format is well documented, I'm not sure what to make of the .ovpn file; the OpenVPN project is well documented, so I hope the file format follows suit. In the meantime I have a shell of the source meant to implement the structures, etc., for a simple transcoder. (another link; debugging)

Innovention Studio

I gave a golang talk at the Hive in Miami a few months ago. Since then I have been captivated by the idea of having one of these spaces for myself, only closer to home. The layout over there was a little more whimsical than I would have liked. It looks like they acquired more space over time and now have the entire building... but it's right off the highway in the middle of downtown Miami.

Now I'm having dreams of building a similar space in Weston.

I bet I could get some investment if I knew what the potential market was. Here are some of the amenities I'd like to offer:

  • receptionist
  • mailroom
  • conference/training room
  • demo space
  • PBX
  • limited secure rackspace
  • diverse highspeed network
  • bathroom and break room
  • first-come, first-served cubicles
  • reserved offices
  • lockers
I wouldn't have to do all of this, but it is a starting point. (Is there a template for this sort of business?)

Friday, May 1, 2015

The perils of shutting down a complex system

I suppose I could be talking about any complex system; in this instance, however, I am referring to computers, both servers and desktops. And there are so many adjacent questions:

  • when do you begin the shutdown process?
    • triggered by the user
    • triggered by a clock event
    • triggered by an external API
    • triggered by a lack of power or sufficient power to "save" the environment
    • triggered by meltdown avoidance
  • How long do you wait for all of the child processes and their dependencies to terminate before signalling the hardware to power off?
In some of these cases it's obvious that the process should receive a signal, save some state, and terminate. But what if saving state is not practical? In particular, modern CAP-type databases are not ACID and may or may not be responsive or reliable after a shutdown.

This is a complex topic, and it gets expensive when you consider the cost of rebooting a mainframe or a cluster of computers. Companies like Stratus solve part of the problem by implementing redundant and highly available hardware, but they depend on a reliable and stable operating system and userspace. It's up to the programmers to maintain the reliability of the platform in their own software.

A few guidelines:

  • persist your data ASAP
  • make recovery possible; think snapshots and WAL
  • honor system signals
  • when restarting from a crash, give the user the option to recover, or recover based on a default behavior
Good luck

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...