

Showing posts from March, 2016

Intel employment interview approx 1995

I don't remember the exact date but it was some time in the 1990s. By then I had achieved my 10,000 hours and it was time for me to work for someone else. At the time I had worked on every conceivable layer of the software development stack:

- PLA - programmable logic arrays
- BIOS
- QA automation
- TSR - terminate and stay resident
- device simulation
- GUI applications, including object oriented design predating C++, for OS/2
- database applications
- disk device drivers
- video device drivers
- AIX kernel design for the mach kernel
- OS/2 internals and a complete retrofit of presentation manager
- communications programs
- software copy protection removal
- other communication protocols
- x86, x386, x960 assembler projects

And so I decided to interview with Intel in Portland and I was granted an interview. During the interview I was asked the question "how many lines of code did you write in the last year?" While I had not particularly prepared for that question, it was something I had the answer for. While working on Val…

distributed systems; database vs filesystem replication

This is just a note to my future self. I just finished watching the last 10 minutes of a goto; presentation that I started a few months ago. It reset a few of my core architecture beliefs and has me going down the path of thinking outside the box again. I agree that distributed systems with heavily co-mingled semantics need protection, whether it's locks or some other mechanism. But if you can partition the work into its simplest form, it's likely that (a) locks are not needed and (b) the simplest implementations will reduce failures, and when there is a failure it's easy to fix.

I'm thinking about an application design that uses the filesystem and filesystem replication, instead of DB replication, for keeping systems in sync; in contrast to using a relational system. Many of the requisites for an RDBMS are also available in the basic filesystem. There is a reality that if the RDBMS is not used the right way then the ACID features of the "system" are degrad…
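The filesystem does give you one ACID-ish primitive for free: atomic rename. A minimal sketch of a "commit" built on it — the paths and record format here are hypothetical, not part of any real design:

```shell
# A filesystem "transaction": write the new record to a temp file, then
# rename it into place. rename(2) is atomic within a filesystem, so a
# reader sees the old record or the new one, never a half-written file.
db=./fsdb/users
mkdir -p "$db"
printf 'name=alice\nquota=10\n' > "$db/.alice.tmp"
sync                               # a real implementation would fsync just this file
mv "$db/.alice.tmp" "$db/alice"    # the atomic "commit"
cat "$db/alice"
```

A replicator that only ever ships whole files then inherits the same property: it replicates the committed record or nothing.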

the madness of cutting edge

I'm trying to deploy a Yubico server. I'm biased toward a 3rd party version that is implemented in Go; however, when I launch the server I get a segmentation fault. After a few hours of debugging I determined that there was a problem between Alpine Linux and Go. A defect was submitted to Google and subsequently corrected and pulled.

In a traditional bloated Linux universe I would simply download the source and rebuild Go. Done! But with the docker registry, alpine linux, the official Alpine Linux container, and the official Go container, I have to wait endlessly.

- go has yet to deploy a release with the patch I'm depending on
- alpine has not deployed an official 3.3.3 container
- golang has yet to deploy an official container based on alpine 3.3.3

And after all that I'll never know whether the yubico-server I'm trying to deploy actually works.
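The "bloated Linux universe" escape hatch still exists inside a Dockerfile: build the patched toolchain from source instead of waiting for the official image. This is only a sketch — the branch name stands in for whichever release or commit actually carries the fix, and the package list assumes Alpine's repos ship a bootstrap Go:

```shell
# Write a Dockerfile (via heredoc so this stays self-contained) that
# builds Go from source on Alpine rather than waiting for upstream images.
cat > go-from-source.Dockerfile <<'EOF'
FROM alpine:3.3
# bootstrap compiler and build deps from the Alpine repos
RUN apk add --no-cache bash gcc musl-dev git go
ENV GOROOT_BOOTSTRAP=/usr/lib/go
# clone the Go tree at the ref carrying the fix (placeholder branch)
RUN git clone --branch go1.6 https://go.googlesource.com/go /usr/local/go \
 && cd /usr/local/go/src && ./make.bash
ENV PATH=/usr/local/go/bin:$PATH
EOF
grep FROM go-from-source.Dockerfile
```

It's slow and ugly, but it removes two of the three items above from the critical path.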
Enough complaining about the slow lane; let's talk about that javascript guy who was served a cease and desist order related to a source file nam…

Amazon where did it go?

Normally when Amazon sells out of an item they mark it in a way that's obvious... like "out of stock". But recently Amazon decided to delete the page and I cannot figure out why. I'm looking for an Intel NUC-NUC6I5 and while Amazon had them it seems they are all gone, and now there are one or two resellers/bundlers offering the same bundle for $150 more. It feels a little too much like eBay to me. Since they do not indicate the manufacturer of the bundled parts it's hard to tell where the markup is.

CoreOS things that have gone wrong with Beta 991.1.0

CoreOS performs green/blue deploys of the OS, which is consistent with modern thinking. But one thing that is not completely apparent is which folders are supposed to be safe during upgrades. After a recent upgrade to 991.1.0 on the Beta channel I lost 3 of 6 folders in my /media folder. I'm pretty sure the /media folder is meant to hold symlinks or mount points, but it's not transparent.

One other thing that happened in my cluster... It is a common belief that each node should have its own haproxy instance. While it's not clear how that's supposed to be configured, things get a little weird when using etcd, confd, and haproxy, as DNS records seem to be aggregated across the cluster. And since some of my services are pinned to one machine, things get wonky when the alternate machine tries to set up haproxy for a hostname that does not exist (in my case gitlab and registry).

Another case against Raspberry Pi Docker Clusters

I really like the idea of the Raspberry Pi; RPI for short. From the perspective of the IoT, internet of things, it's a great place to practice. Chances are that your next IoT project will be based on the ARM processor. ARM seems to be making inroads because of its cost, power, and size (Pi Zero), and even with its limitations it's still a capable machine.

However, while it's novel to build Docker clusters in order to test various operational theories, it's not that realistic. On memory alone, the container-to-host ratio is much smaller than on something like an Intel NUC with 32GB.

The case against RPI is what I just discovered. My two node cluster is running Weave Scope and csysdig in order to monitor the containers, host, resources, and apps. With each sampling I see that scope and csysdig are consuming about 30-40% of the system's CPU. I thought that scope's websocket webapp might be the bottleneck but it's not. The two processes are also taking about 250M … or flannel?

Weave offers a number of projects, all related and meant to integrate nicely. The basic service is a private network spanning Docker containers within and between nodes. This article indicates that rkt is also supported. Weave's killer feature is their Scope app, which is a container visualization and portal (as portals go, the tool bypasses all authentication, and that might not be a good thing). One missing feature here is that weave's DNS is not available to the node; only to the containers.

Flannel implements a similar network model; however, there is no DNS and there is no visualization, unless the latter is part of their paid offering. I've had a lot of good luck with flannel, and the one thing I like is that it looks and feels a lot like CoreOS' other tools, so administration feels consistent.

Flannel is experimenting with multiple networks, but the feature is experimental and does not work in the most useful edge cases.

problem 1:
Container network and DNS design is no di…

painful etcd configuration

I'm not sure what happened, but my etcd configuration stopped working. I recently acquired two Intel NUC devices, and while the BIOS was a pain in the ass to configure, I found this link and everything has been fine from boot onward.

Next it was a matter of selecting the topology. This is normally an easy task, but lately I have experienced a few bugs which I cannot explain. The one piece of good news is that I have been able to try and try again.
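For context, the proxy side of a topology like this boils down to a few lines of cloud-config. This is a minimal sketch — the discovery token and URLs are placeholders, not my actual file:

```shell
# Minimal etcd2 proxy-node cloud-config fragment, written via heredoc so
# the example is self-contained. TOKEN is a placeholder.
cat > proxy-cloud-config.yml <<'EOF'
#cloud-config
coreos:
  etcd2:
    proxy: on
    discovery: https://discovery.etcd.io/TOKEN
    listen-client-urls: http://0.0.0.0:2379
  units:
    - name: etcd2.service
      command: start
EOF
grep 'proxy:' proxy-cloud-config.yml
```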

$ # stop the daemon before wiping its proxy state
$ sudo systemctl stop etcd2
$ sudo rm -rf /var/lib/etcd2/proxy
$ # regenerate the config from cloud-config, then restart and watch the log
$ sudo coreos-cloudinit --from-file ./cloud-config.yml
$ sudo systemctl start etcd2
$ sudo journalctl -f -u etcd2

And then test the health of the system

$ etcdctl --debug cluster-health 
But for some strange reason my proxy node (worker) seems to lose a healthy connection to the etcd leader.

Mar 21 00:50:43 corenuc2 systemd[1]: Started etcd2.
Mar 21 00:50:43 corenuc2 etcd2[4036]: proxy: listening for client requests on
Mar 21 00:51:13 corenuc2 etcd2[4036]: could not get clu…

Hak5 - Proxmox VE

Here is a comment I posted on YouTube in response to a Hak5 Proxmox VE demo.
While proxmox looks like an interesting platform and there are a few features that really caught my attention, there is one serious flaw in the segment: LICENSING. The proxmox ve project is licensed under the AGPL, which is the absolute worst of all of the GPL options. That said, I was influenced to build an Intel NUC i5 6th gen yesterday. I loaded it with 32GB DDR4 and a 500GB SSD. My only complaint is that the BIOS failed to configure the ethernet port so that I could update the BIOS. It also took a lot of BIOS setup to get it to boot the OS I installed: first Ubuntu 14.04, then 15.10, and finally CoreOS. I've used Proxmox and OpenVZ before and they are OK. I think that once the dust settles it's better from the command line for my purposes.

browser bookmarks

I'm getting tired of having to curate bookmarks. It seems like a meaningless action considering I use google for everything anyway... so I should be able to "like" a link or page so that the next time I search for something similar it appears ahead of the other search results... but it should be a seamless experience.

(a) I do not want to have to curate my bookmarks
(b) I do not want to have to search through my bookmarks to find that one resource I need; it might be in my history or my bookmarks, and that search is not as good as regular search.
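The "like" idea can be sketched in a few lines: liked URLs float to the top of otherwise ordinary results. The two input files here are made up purely for illustration:

```shell
# Hypothetical inputs: a ranked result list and a set of "liked" URLs.
printf 'https://a.example\nhttps://b.example\nhttps://c.example\n' > results.txt
printf 'https://b.example\n' > likes.txt

# Re-rank: liked results first (exact whole-line matches), then the rest
# in their original order.
{ grep -Fx  -f likes.txt results.txt
  grep -Fxv -f likes.txt results.txt; } > ranked.txt
cat ranked.txt
```

A real version would obviously weight rather than hard-partition, but the seamlessness is the point: the "like" signal feeds ranking, not a separate bookmark silo.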

Docker swarm or coreos fleet?

I am in the middle of it all and I have yet to experience the clarity in my design and toolkit that I need and want in order to sustain development through production. I've read a dozen articles that continue to compare docker and kubernetes. I've also read a number of blog posts that contain plenty of FUD as it pertains to coreos and docker. I also have multiple working clusters running on coreos with fleet and etcd. As I waffle back and forth I'm still trying to define the questions so that I can research the answers. This is what I have so far...

Choosing all things docker, from machine to swarm, means that I can run on any host OS, not just coreos. But it also means that tools like rocket and rocket's pods would be excluded. So the only question I can come up with is where to set the anchor? My thinking is that coreos gives me more options going forward. Whether I get into rocket, docker, kubernetes, or some other framework or platform, it's the most flexible start.
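For what it's worth, the fleet side of that anchor is just systemd plus an [X-Fleet] section. A sketch of a unit pinned by machine metadata — the service name, image, and metadata key below are hypothetical, not one of my actual units:

```shell
# A fleet unit file, written via heredoc so the example is self-contained.
cat > hello.service <<'EOF'
[Unit]
Description=hello world container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c 'while true; do echo hello; sleep 1; done'
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
MachineMetadata=role=worker
EOF
grep -A1 'X-Fleet' hello.service
```

You'd submit and start it with `fleetctl start hello.service`; the [X-Fleet] stanza is the piece swarm has no direct analog for, which is part of why the anchor question matters.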

My first Raspberry Pi 3 Model B

The raspberry pi team just released version 3 model b. I ordered a "kit" and it just arrived. I decided on a kit because it was my first step into the pi world and I wanted to get it right. The good news is that it seems to work nicely. The bad news is that the kits are priced higher than if you assembled the parts yourself. And that's sad.
More good news is that Docker seems to run on the ARM processor. Granted, everything I've read seems to be very early stage work and mostly about the docker tools themselves. But the one idea I took away is that if I want to go the docker container route on a pi I will have to use all of the Docker tools instead of trying to stay agnostic as I have on the Intel+CoreOS platform.
I'm going to need a few more devices to get this under control and form a stronger opinion.

run me anywhere

This is a quick post inspired by a quote:
Lars Herrmann, general manager of Integrated Solutions & Container Strategy at Red Hat, concurs, informing me that much of Docker’s adoption that Red Hat sees is confined to proofs of concept or initial use of containers, and usually developer teams working on greenfield projects. (The Register) Seems that the new guard is now the old guard... which means that Red Hat is now moving at Enterprise speed instead of garage band speed. Of course they have not embraced Docker; that would mean competing with their own technology too.

The inspiration...

While Docker is leading the way it's still pretty heavyweight. You still have to implement whatever you are working on... the great hello world app. Then you have to do all the docker machine, swarm, etc... to run it wherever and however you want it. These things seem so 1970, as I remember IBM JCL. Many parts of JCL wer…

"Amazon Echo" and "OK Google"?

Have you ever used Amazon's Echo? What about "OK Google"? A friend of mine demonstrated Echo and I was very impressed; I have also been an "OK Google" user since I bought my first Android phone about 9 months ago. And there are plenty of pros and cons for them both.
For example, while I like to listen to BBC and NPR news I never appreciated audio books. One of the killer features of the Echo is its "always on" feature. This is also a little spooky because someone could be listening, and it's a short hop from Apple cracking iPhones to playing back the Echo's audio. (There is a Google dashboard flag for disabling the backup of OK Google commands, so I imagine there is one for Echo.)

The second killer feature for the Echo is that it now comes in 3 varieties. (i) a puck called Dot intended to be connected to a speaker or stereo, (ii) a battery powered speaker called Tap (iii) and the original Echo. Since they are dedicated for their purpose the UX is simp…

Feeling some love for Alpine Linux

I'm buried hip deep in OpenLDAP, FreeRADIUS, and Yubikey; as I make my way I'm starting to appreciate Alpine Linux as a Docker base OS: where scratch is absolutely nothing, Alpine is mostly nothing. Meaning that while Alpine Linux is a basic Linux distro, its default Docker container has no running services, making it an ideal Docker guest OS. Now the trick is to manage the package manager.
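"Managing the package manager" mostly means keeping the apk index out of your image layers. A sketch — the package names are just from the stack above, and you'd pin versions for repeatability:

```shell
# --no-cache fetches the index, installs, and discards the index in one
# step, so the layer carries only the packages. Written via heredoc so
# this stays self-contained; the package list is illustrative.
cat > alpine.Dockerfile <<'EOF'
FROM alpine:3.3
RUN apk add --no-cache openldap freeradius
EOF
grep 'apk add' alpine.Dockerfile
```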

Review: Chromebit

I really like the idea of a Chromebit; I'm just not sure it's ready for daily or travel use.

During a recent business trip I packed my Chromebit in my go bag. It was so small and light that I nearly forgot about it once I was at my destination, as I had already started working with my Chromebook and so there was no need. That first evening the geeks started bragging about their toys, so I decided to break mine out. Since I had never used it I was expecting greatness. Once I finished the regular registration things and the update I was ready to go. Unfortunately the monitor I was using was a Dell curved 34" display with 3300x1800 (from memory). Disappointingly, the Chromebit could not drive the display at its best resolution.

So here are some observations...

- depending on what is at your destination you will either have to bring or buy a keyboard and mouse
- if you're planning to use it with a hotel room TV then the OSHA rating is going to be very low. Working from a bed for…