Wednesday, March 30, 2016

Intel employment interview approx 1995

I don't remember the exact date but it was sometime in the 1990s. By then I had achieved my 10,000 hours and it was time for me to work for someone else. At that point I had worked on every conceivable layer of the software development stack:

  • PLA - programmable logic arrays
  • BIOS
  • QA automation
  • TSR - terminate stay resident
  • device simulation
  • GUI applications including object oriented design predating C++ for OS/2
  • database applications
  • disk device drivers
  • video device drivers
  • AIX kernel design for the mach kernel
  • OS/2 internals and complete retrofit of presentation manager
  • communications programs
  • software copy protection removal
  • other communication protocols
  • x86, x386, x960 assembler projects
And so I applied to Intel in Portland and was granted an interview. During the interview I was asked, "How many lines of code did you write in the last year?" While I had not particularly prepared for that question, it was something I had an answer for.
While working on ValueLink and related tools I had written about 52,000 lines of code.
Seeing the smirk on the interviewer's face, and thinking I was estimating too high, I qualified it by saying that some percentage of that was probably headers and comments. He then proceeded to question the veracity of my claim, and that's probably the one reason I was not offered the job.

So to the unnamed drone I can now tell you to piss off: (i) because I recall the story almost 20 years later, and (ii) because working part time on a project over the last 7 months I have written over 35,000 lines of code, completely without crazy whitespace or comments. That averages to 60K per year, and likely closer to 100K if I count all of my full-time work.
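For what it's worth, when I say "lines of code" I mean something like the rough filter below: drop blank lines and comment-only lines, count the rest. This is a sketch; the function name and the //-comment pattern are my own convention, not any standard tool.

```shell
# count_loc: a rough lines-of-code count that ignores blank lines and
# lines containing only a // comment (hypothetical helper, not a real tool)
count_loc() {
  grep -cvE '^[[:space:]]*(//.*)?$' "$@"
}
```

Running something like `count_loc *.go` over a project tree gives the kind of number I'd quote in an interview.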

Just for good measure... piss off, again!

Monday, March 28, 2016

distributed systems; database vs filesystem replication

This is just a note to my future self. I just finished watching the last 10 minutes of a goto; presentation that I started a few months ago. It reset a few of my core architecture beliefs and has me going down the path of thinking outside the box again. I agree that distributed systems with heavily co-mingled semantics need protection, whether it's locks or some other mechanism. But if you can partition the work into its simplest form, it's likely that (a) locks are not needed and (b) the simplest implementations will reduce failure, and when there is a failure it's easy to fix.

I'm thinking about an application design that uses the filesystem and filesystem replication, instead of DB replication, for keeping systems in sync. Many of the prerequisites for an RDBMS are also available in the basic filesystem. The reality is that if the RDBMS is not used the right way then the ACID features of the "system" are degraded; however, implementing a full ACID system will also reduce the efficiency/throughput of the "system".

This might sound like nonsense, except that these are notes for myself and they're based on experience. For example, in a payment gateway the host-capture subsystem, from the POS perspective, is pretty simple: store the request and the response with some basic telemetry, choose the one path that makes the most logical sense for the transaction, and replicate the data across all nodes.

When a transaction arrives that requires information from a previous transaction, there is enough information in the second transaction to locate the first via a hash function, O(1), in the filesystem. It's no more or less complicated than a relational system, except that it does not require a separate DB that is itself replicated.
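As a sketch of what I mean: hash whatever composite key the second transaction carries and turn the digest into a path. The key format and the /var/txns root below are hypothetical, just enough to show the shape of the lookup.

```shell
# sketch: derive an O(1) filesystem location for a transaction record by
# hashing its lookup key (key layout and /var/txns root are hypothetical)
txn_key="merchant42:20160328:auth9921"
hash=$(printf '%s' "$txn_key" | sha256sum | cut -d' ' -f1)
# shard on the first two hex pairs so no single directory grows too large
d1=$(printf '%s' "$hash" | cut -c1-2)
d2=$(printf '%s' "$hash" | cut -c3-4)
path="/var/txns/$d1/$d2/$hash.json"
echo "$path"
```

Replicating /var/txns across nodes then carries the "index" along with the data, with no separate replicated DB.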

Sunday, March 27, 2016

the madness of cutting edge

I'm trying to deploy a yubico server. I'm biased toward a 3rd-party version that is implemented in Go; however, when I launch the server I get a segmentation fault. After a few hours of debugging I determined that there was an incompatibility between Alpine Linux and Go. A defect had been submitted to Google and subsequently corrected and pulled.

In a traditional bloated Linux universe I would simply download the source and rebuild Go. Done! But in the world of the Docker registry, Alpine Linux, the official Alpine Linux container, and the official Go container, I have to wait endlessly.

  • go has yet to deploy a release with the patch I'm depending on
  • alpine has not deployed an official 3.3.3 container
  • golang has yet to deploy an official container based on alpine 3.3.3
and after all that I'll never know whether the yubico-server I'm trying to deploy actually works.

Enough complaining about the slow lane; let's talk about that JavaScript guy who was served a cease and desist order related to a source file named kik. As a result of a series of bad decisions the author decided to pull all of his projects in protest. The side effect was a cascading failure across a number of projects that depended on a small 15-line JavaScript library.

It's a mad mad world!

Amazon where did it go?

Normally when Amazon sells out of an item they mark it in a way that is obvious... like "out of stock". But recently Amazon decided to delete the page and I cannot figure out why. I'm looking for an Intel NUC-NUC6I5, and while Amazon had them, it seems they are all gone and now there are one or two resellers/bundlers offering the same bundle for $150 more. It feels a little too much like eBay to me. Since they do not indicate the manufacturer of the bundled parts it's hard to tell where the markup is.

Friday, March 25, 2016

CoreOS things that have gone wrong with Beta 991.1.0

CoreOS performs green/blue deploys of the OS, which is consistent with modern thinking. But one thing that is not completely apparent is which folders are supposed to be safe during upgrades. After a recent upgrade to 991.1.0 on the Beta channel I lost 3 of 6 folders in my /media folder. I'm pretty sure the media folder is meant to hold symlinks, or maybe something similar to mount points, but it's not transparent.

One other thing that happened in my cluster... It is a common belief that each node should have its own haproxy instance. While it's not clear how it's supposed to be configured, things get a little weird when using etcd, confd, and haproxy, as DNS records seem to be aggregated across the cluster. And since some of my services are pinned to one machine, things get wonky when the alternate machine tries to set up haproxy for a hostname that does not exist (in my case gitlab and registry).

Thursday, March 24, 2016

Another case against Raspberry Pi Docker Clusters

I really like the idea of the Raspberry Pi, RPI for short. From the perspective of IoT, the internet of things, it's a great place to practice. Chances are that your next IoT project will be based on the ARM processor. ARM seems to be making inroads because of its cost, power, and size (Pi Zero), and even with its limitations it's still a capable machine.

However, while it's novel to build Docker clusters in order to test various operational theories, it's not that realistic. On memory alone, the container-to-host ratio is so much smaller than something like an Intel NUC with 32GB.

The case against the RPI is what I just discovered. My two-node cluster is running Weave Scope and csysdig in order to monitor the containers, hosts, resources, and apps. With each sampling I see that scope and csysdig are consuming about 30-40% of the system's CPU. I thought that scope's websocket webapp might be the bottleneck, but it's not. The two processes are also taking about 250M of physical memory, and scope has allocated between 1-3G.
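A quick way to spot-check that kind of overhead, assuming a standard procps `ps` on the node (no Weave-specific tooling needed):

```shell
# list the top CPU consumers; on the RPI cluster, scope and csysdig
# were sitting at the top of this list
ps -eo pcpu,pmem,rss,comm --sort=-pcpu | head -n 5
```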

The percentage works its way into a real number... and that is going to count against the RPI severely.

Weave or flannel?

Weave offers a number of related projects meant to integrate nicely. The basic service spans private networks between Docker containers, both within and across nodes. This article indicates that rkt is also supported. Weave's killer feature is their Scope app, a container visualization and portal. (As portals go, the tool bypasses all authentication, and that might not be a good thing.) One missing feature here is that Weave's DNS is not available to the node; only to the containers.

Flannel implements a similar network model; however, there is no DNS and there is no visualization, unless the latter is part of their paid offering. I've had a lot of good luck with flannel, and the one thing I like is that it looks and feels a lot like CoreOS' other tools, so administration feels consistent.

While Flannel is experimenting with multiple networks, the feature is still experimental and not working in the most useful edge cases.

problem 1:
Container network and DNS design is no different than previous virtual guest OSes and domain-0 hosts. Weave's DNS seems limited to the containers and not the node.

problem 2:
Weave does not mention multi-networking at all

problem 3:
scope bypasses security

problem 4:
rebooting/restarting weave can severely cripple the network, requiring a full service restart

One solution I like is using SkyDNS, registering containers as they start. The challenge here is naming containers properly so they cooperate. Installing an haproxy instance on every node probably does not require a general-purpose hostname unless I'm deploying some sort of round-robin strategy, and in that case the DNS records would be different. Still a better solution.
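The SkyDNS registration I have in mind is just an etcd write. A sketch, with hypothetical names and addresses; SkyDNS stores records under /skydns with the domain path reversed:

```shell
# sketch: register a service for SkyDNS by writing a JSON record into etcd.
# SkyDNS reverses the domain, so haproxy.corenuc2.cluster.local lives at
# /skydns/local/cluster/corenuc2/haproxy (names and IP here are hypothetical)
name="haproxy"; node="corenuc2"; ip="10.1.0.12"
key="/skydns/local/cluster/$node/$name"
value=$(printf '{"host":"%s","port":80}' "$ip")
# echoed rather than executed, since it needs a live etcd behind it
echo etcdctl set "$key" "$value"
```

With per-node names like this, each node's haproxy gets its own record and a round-robin name could simply be a second record shared by all of them.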

Sunday, March 20, 2016

painful etcd configuration

I'm not sure what happened but my etcd configuration stopped working. I recently acquired two Intel NUC devices, and while the BIOS was a pain in the ass to configure, I found this link and everything has been all better from boot onward.

Next it was a matter of selecting the topology. This is normally an easy task, but lately I have experienced a few bugs which I cannot explain. The one bit of good news is that I have been able to try and try again.

$ sudo rm -rf /var/lib/etcd2/proxy
$ sudo systemctl stop etcd2
$ sudo coreos-cloudinit --from-file ./cloud-config.yml
$ sudo systemctl start etcd2
$ sudo journalctl -f -u etcd2

And then test the health of the system

$ etcdctl --debug cluster-health 

But for some strange reason my proxy node (worker) seems to lose a healthy connection to the etcd leader.

Mar 21 00:50:43 corenuc2 systemd[1]: Started etcd2.
Mar 21 00:50:43 corenuc2 etcd2[4036]: proxy: listening for client requests on
Mar 21 00:51:13 corenuc2 etcd2[4036]: could not get cluster response from Get dial tcp connection refused
Mar 21 00:51:13 corenuc2 etcd2[4036]: proxy: could not retrieve cluster information from the given urls
Mar 21 00:51:19 corenuc2 etcd2[4036]: proxy: zero endpoints currently available
Mar 21 00:51:19 corenuc2 etcd2[4036]: proxy: zero endpoints currently available
Mar 21 00:51:26 corenuc2 etcd2[4036]: proxy: zero endpoints currently available
Mar 21 00:51:26 corenuc2 etcd2[4036]: proxy: zero endpoints currently available

My SOLO etcd server:
    name: etcdserver
    initial-cluster: etcdserver=


My worker with etcd configured as a proxy
    proxy: on
    name: etcdworker
    initial-cluster: etcdserver=


Saturday, March 19, 2016

Hak5 - Proxmox VE

Here is a comment I posted on YouTube in response to a Hak5 Proxmox VE demo.
While Proxmox looks like an interesting platform, and there are a few features that really caught my attention, there is one serious flaw in the segment: LICENSING. The Proxmox VE project is licensed under the AGPL, which is the absolute worst of all of the GPL options. That said, I was influenced to build an Intel NUC i5 6th gen yesterday. I loaded it with 32GB DDR4 and a 500GB SSD. My only complaint is that the BIOS failed to configure the ethernet port so that I could update the BIOS. It also took a lot of BIOS setup to get it to boot the OS I installed: first Ubuntu 14.04, then 15.10, and finally CoreOS.
I've used Proxmox and OpenVZ before and they are OK. I think that once the dust settles it's better from the command line for my purposes.

Friday, March 18, 2016

browser bookmarks

I'm getting tired of having to curate bookmarks. It seems like a meaningless action considering I use Google for everything anyway... so I should be able to "like" a link or page so that the next time I search for something similar it appears ahead of the other search results... but it should be a seamless experience.

(a) I do not want to have to curate my bookmarks
(b) I do not want to have to search through my bookmarks to find that one resource I need when it might be in my history or bookmarks; however, that search is not as good as regular search.

Thursday, March 10, 2016

Docker swarm or coreos fleet?

I am in the middle of it all, and I have yet to experience the clarity in my design and toolkit that I need and want in order to sustain development through production.

I've read a dozen articles that continue to compare docker and kubernetes. I've also read a number of blog posts that contain plenty of FUD as it pertains to coreos and docker. I also have multiple working clusters running on coreos with fleet and etcd.

As I waffle back and forth I'm still trying to define the questions so that I can research the answers.

This is what I have so far...

Choosing all things Docker, from machine to swarm, means that I can run on any host OS, not just CoreOS. But it also means that tools like rocket and rocket's pods would be excluded. So the only question I can come up with is: where to set the anchor? My thinking is that CoreOS gives me more options going forward. Whether I get into rocket, docker, kubernetes, or some other framework or platform, it's the most flexible start.

My first Raspberry Pi 3 Model B

The Raspberry Pi team just released version 3 model B. I ordered a "kit" and it just arrived. I decided on a kit because it was my first step into the Pi world and I wanted to get it right. The good news is that it seems to work nicely. The bad news is that the kits are priced higher than if you assembled the parts yourself. And that's sad.

More good news is that Docker seems to run on the ARM processor. Granted, everything I've read seems to be very early-stage work and mostly about the Docker tools themselves. But the one idea I took away is that if I want to go the Docker container route on a Pi, I will have to use all of the Docker tools instead of trying to be agnostic as I have on the Intel+CoreOS platform.

I'm going to need a few more devices to get this under control with a stronger opinion.

Tuesday, March 8, 2016

run me anywhere

This is a quick post inspired by a quote:
Lars Herrmann, general manager of Integrated Solutions & Container Strategy at Red Hat, concurs, informing me that much of Docker’s adoption that Red Hat sees is confined to proofs of concept or initial use of containers, and usually developer teams working on greenfield projects. (The Register)
Seems that the new guard is now the old guard... which means that Red Hat is now moving at Enterprise speed instead of garage-band speed. Of course they have not embraced Docker; that would mean competing with their own technology too.

The inspiration...

While Docker is leading the way, it's still pretty heavyweight. You still have to implement whatever you are working on... the great hello world app. Then you have to do all the docker machine, swarm, etc... to run it wherever and however you want. These things seem so 1970, as I remember IBM JCL. Many parts of JCL were about telling the underlying framework how and where to run your executable. It was just an executable config file, almost like a Dockerfile.

Food for thought.

Monday, March 7, 2016

"Amazon Echo" and "OK Google"?

Have you ever used Amazon's Echo? What about "OK Google"? A friend of mine demonstrated the Echo and I was very impressed; I have also been an "OK Google" user since I bought my first Android phone about 9 months ago. And there are plenty of pros and cons for both.
For example while I like to listen to BBC and NPR news I never appreciated audio books.
One of the killer features of the Echo is its "always on" feature. This is also a little spooky because someone could be listening, and it's a short hop from Apple cracking iPhones to playing back the Echo's audio. (There is a Google dashboard flag for disabling backing up "OK Google" commands, so I imagine there is one for the Echo.)

The second killer feature of the Echo is that it now comes in 3 varieties: (i) a puck called the Dot, intended to be connected to a speaker or stereo; (ii) a battery-powered speaker called the Tap; and (iii) the original Echo. Since they are dedicated to their purpose, the UX is simple and to the point.

And while Google's Chromecast and Chromecast Audio are cool on their own, they require specialized Chrome clients and servers to make everything function. And while my Nexus 6 is a little more portable compared to the Dot, I'm not usually telling Google what I want to listen to when I'm on a plane. So the use cases for Alexa and OK Google are pretty much the same.

And while Alexa is an amazing feat unto itself, Amazon is about selling stuff. Their Amazon Music is just not as good as Google Play Music. Google needs some better hardware, and Amazon needs to play better with Google... at least I hope they learn something; it's bad enough that they share ad and search info.

Feeling some love for Alpine Linux

I'm buried hip-deep in OpenLDAP, FreeRADIUS, and Yubikey; as I make my way I'm starting to appreciate Alpine Linux as a Docker scratch OS: absolutely nothing compared to mostly nothing. Meaning that while Alpine Linux is a basic Linux distro, its default Docker container has no running services, making it an ideal Docker guest OS. Now the trick is to manage the package manager.
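"Manage the package manager" mostly means remembering apk and keeping the index out of the image. A minimal Dockerfile sketch; the package choice is just an example of the pattern, not my actual build:

```dockerfile
# start from the tiny Alpine base; nothing runs by default
FROM alpine:3.3
# update the index, install, then drop the index cache to keep the layer small
RUN apk update && apk add openldap-clients && rm -rf /var/cache/apk/*
CMD ["ldapsearch", "-VV"]
```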

Sunday, March 6, 2016

Review: Chromebit

I really like the idea of the Chromebit; I'm just not sure it's ready for daily or travel use.

During a recent business trip I packed my Chromebit in my go bag. It was so small and light that I nearly forgot about it; by the time I was at my destination I had already started working with my Chromebook, so there was no need for it. That first evening the geeks started bragging about their toys, so I decided to break mine out. Since I had never used it, I was expecting greatness. Once I finished the usual registration steps and the update, I was ready to go. Unfortunately the monitor I was using was a Dell curved 34" display at 3300x1800 (from memory). Disappointingly, the Chromebit could not drive the display at its best resolution.

So here are some observations...

  • depending on what is at your destination you will either have to bring or buy a keyboard and mouse
  • if you're planning to use it with a hotel room TV then the OSHA rating is going to be very low. Working from a bed for prolonged periods is terrible, and it requires a different type of keyboard. And you cannot use the TV for anything else... I'm not sure how performant the device is when listening to music or watching video at the same time. This is worse when you're traveling with your family.
  • even though Logitech has a few tiny keyboard-and-trackpad combos, the keyboards are really small.
The jury is still out but not looking good.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...