

Showing posts from February, 2016

TH1 - a very elegant decision

I recently posted that I wanted to implement some modern ideas in some language that could be easily compiled or trans-coded for use by modern tools. That might mean rewriting the compiler and interpreter for the new framework, but that's OK because the savings on the other side of the equation are incalculable.

For instance, rewriting a compiler is a known problem. The tooling for testing the compiler against the target scripts and the language spec is very knowable, whereas rewriting the target scripts, with whatever nuance might be embedded in the code, is not: when a test fails you cannot tell whether the script or the MUT (machine under test) is at fault.
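The testing asymmetry above is essentially differential testing: feed the same script to the reference implementation and the rewrite and diff the outputs. A minimal sketch in Python, where `run_reference` and `run_candidate` are hypothetical stand-ins for the two interpreters:

```python
# Differential testing sketch: the same scripts are fed to two
# implementations and the outputs are compared. A mismatch points
# at the rewritten compiler/interpreter, because the scripts
# themselves were never changed.

def run_reference(script: str) -> str:
    # stand-in for the original interpreter
    return str(eval(script))

def run_candidate(script: str) -> str:
    # stand-in for the rewritten interpreter
    return str(eval(script))

def differential_test(scripts):
    failures = []
    for s in scripts:
        ref, cand = run_reference(s), run_candidate(s)
        if ref != cand:
            failures.append((s, ref, cand))
    return failures

print(differential_test(["1 + 1", "2 * 3"]))  # [] when both agree
```

The value of this setup is exactly the point above: the corpus of scripts is held fixed, so any divergence is attributable to the new implementation.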

tcl is a good language to consider.
TH1 is an example of why tcl was a great choice.

The SQLite development team does all of their testing in tcl. There are a vast number of tests, and if they had to rewrite them then all would be lost; the lost reliability of the tests could hold the team back years. I'm not familia…

OSX Photo is taking too long to sync

I've complained about my wife's OSX machine in the recent past. The issue is that when I enable iCloud sync my home network slows to a crawl. The only evidence I have for the network failure is congestion at the router. At the time my network was rated at 5MB/50MB (up/dn).

And with a little luck my ISP upgraded me to 10MB/100MB today. Since my photos never completed the sync, I restarted the process today. As it turns out, I watched a national news story about computer ransomware last night, and I needed to get the backup started again.

The increase in bandwidth did not really fix things but it did redirect my focus.

I might have failed to notice that Photo, Backblaze, and G+ were all running at the same time. That's not a solid practice. Photo's sync seems to use a shared OSX proxy so that it can monitor or shape the network usage; however, the end result seems to have been complete network saturation. Uploading files at WiFi speeds or even 100MB should not be takin…
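As a sanity check on "should not be taking this long," the arithmetic is simple. Assuming a hypothetical 50GB photo library (my assumption, not a figure from the post) on the 10MB upstream link:

```python
# Rough upload-time estimate: convert the library size to megabits,
# then divide by the upstream rate in megabits per second.
library_gb = 50                 # assumed library size
uplink_mbps = 10                # upstream rate from the post

megabits = library_gb * 8 * 1000        # GB -> Gb -> Mb
seconds = megabits / uplink_mbps
hours = seconds / 3600
print(round(hours, 1))          # about 11.1 hours at full line rate
```

Even at the full line rate that is a better part of a day, and any contention from Backblaze or G+ running at the same time stretches it further.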

are docker containers PCI DSS compliant?

I posted this question on stackoverflow:
before I go full bore kubernetes or apcera; are default docker containers PCI compliant? Would VLANs improve the security, or is UDP over 8235 just too open to invalidate VLANs, or should the bare metal and metadata be used to support the VLAN structure?

(I also think I understand why G+ and FB only have +1 and like buttons, but that's for another time.) In this case I'll answer my own question: Docker is no less vulnerable and may actually be more vulnerable. Once the host OS has been compromised, all of the guests are vulnerable. With root access on the host, a debugger, and the right amount of experience debugging containers, you might have access to a container's memory.

But there are a number of other vulnerabilities related to the container file images that are persisted on the host. Further any host volume sharing is going to expose the container's data.

As for networking, the container-to-host bridging may also be insecure, especially if the…

Privacy or not

On Feb 16, 2016, Tim Cook, the CEO of Apple, posted an open letter to its customers in response to an FBI request for Apple's assistance in extracting data from a terrorist's phone. While I applaud Mr. Cook and Apple for the public statement, I think there are just a few problems here:

Apple has already stated that they have provided all data that they currently have. I'm sure that means things like iTunes, iCloud, Maps, and maybe even iMessage. Of course if the phone was backed up then there is a good chance that the remaining bits of data are already available to the FBI.

Just what exactly does Apple need to do that the FBI cannot? Early iOS phones used a 4-digit PIN, and depending on the configuration settings the phone may or may not erase itself. Whether or not the phone will self-destruct is likely known to Apple from the data above. But a 4-digit PIN is only 10,000 combinations and the phone is unlocked. If it's one of the modern Apple phones then it's possible …
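The brute-force arithmetic is easy to verify: a 4-digit PIN gives 10^4 combinations, a tiny search space. The attempt rate below is an illustrative assumption, ignoring iOS's escalating retry delays:

```python
# Search space of a 4-digit numeric PIN.
digits = 4
combinations = 10 ** digits
print(combinations)             # 10000

# At an assumed 1 attempt per second, the worst case is:
hours = combinations / 3600
print(round(hours, 1))          # about 2.8 hours
```

The point stands regardless of the exact rate: without the auto-erase setting, the PIN itself provides almost no protection against an attacker with physical access.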

DHCP weakness

I like DHCP because static IP addresses can be tough to manage. A single lapse can leave holes in the IP pool, and almost the same thing happens as machines are decommissioned. There is also the physical subnet vs. the logical subnet. It can be a pain in the ass.

On the other side of the coin, DHCP makes some of those annoyances go away but creates a few of its own. DHCP can cause duplicate assignments if the DHCP server is restarted and the leases are not saved locally. DHCP also makes some network security harder. Of course, static MAC-address-to-IP assignments make some things easy...
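The duplicate-assignment failure mode can be sketched with a toy allocator: if the server restarts without persisting its leases, it happily hands out an address a client still holds. The class and names below are illustrative, not a real DHCP implementation:

```python
# Toy DHCP pool: a restart without persisted leases causes the
# server to forget outstanding leases and re-issue a held address.
class DhcpPool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leased = {}                 # ip -> client MAC

    def lease(self, mac):
        ip = self.free.pop(0)
        self.leased[ip] = mac
        return ip

    def restart(self, persist=False):
        if not persist:
            # All lease state is lost; every address looks free again.
            self.free = sorted(set(self.free) | set(self.leased))
            self.leased = {}

pool = DhcpPool(["10.0.0.10", "10.0.0.11"])
a = pool.lease("aa:aa")              # client A gets 10.0.0.10
pool.restart(persist=False)          # server restarts, leases lost
b = pool.lease("bb:bb")              # client B gets 10.0.0.10 too
print(a == b)                        # True: duplicate assignment
```

Real servers avoid this by writing leases to disk (or probing with ARP/ping before assigning), which is exactly the "saved locally" condition above.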

And then it's all confused by containers.

Fleet or swarm

MOSTLY because docker swarm is not ready for scheduling.
SECOND because fleet cooperates with systemd, giving a single solution for the time being.
AND while docker and coreos work together there is some discord. I'm trying to keep a balance between rkt and docker so that if we have to select a winner the amount of work is minimal.
ALSO Swarm has its origins in a project called flynn. I used it for a while, however, it was only marginal.
FINALLY, I'm currently using fleet for the Z reporting and it seems to be working well.
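For context, fleet schedules ordinary systemd-style unit files across the cluster, which is what "cooperates with systemd" buys you. A minimal illustrative unit (the name and image are hypothetical, not from my setup):

```ini
# myapp.service -- illustrative fleet unit
[Unit]
Description=myapp in a docker container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp example/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
Conflicts=myapp*
```

The `[X-Fleet]` section is the only fleet-specific part; everything else is plain systemd, so the same file works if fleet is ever taken out of the picture.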

Prometheus vs Bosun

In conclusion... while Bosun (B) is still not the ideal monitoring system, neither is Prometheus (P).


I am running Bosun in a Docker container hosted on CoreOS. Fleet service/unit files keep it running. However, in one case I experienced at least one severe crash as a result of a disk-full condition. That it is implemented partly in golang, java, and python is an annoyance. The MIT license is about the only good thing.

I am trying to integrate Prometheus into my pipeline but losing steam fast. The Prometheus design seems to expect that you keep your own metrics cache inside your application and then allow the server to scrape the data; however, if the interval between scrapes is longer than the lifetime of a transient job in your application then you need a gateway: a place to push your data that is a little more persistent.

(1) storing the data in my application might get me started more quickly
(2) getting the server to pull the data might be more secure
(3) using a push g…
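The scrape-interval problem behind point (3) can be sketched in a few lines: a pull model only sees metrics that are still alive at scrape time, so a job shorter than the scrape interval has to push its numbers somewhere longer-lived. This is a toy model, not the Prometheus client API:

```python
# Toy pull vs push: a short-lived job's in-process metric cache
# disappears when the job exits; pushing to a longer-lived gateway
# preserves it for the next scrape.
gateway = {}                         # stands in for a push gateway

def short_job(push=False):
    metrics = {"records_processed": 42}   # in-process cache
    if push:
        gateway.update(metrics)           # push before exiting
    # job exits here; the local cache is gone either way

def scrape(source):
    # the server pulls whatever the source currently exposes
    return dict(source)

short_job(push=False)
print(scrape(gateway))               # {} -- nothing survived the job

short_job(push=True)
print(scrape(gateway))               # {'records_processed': 42}
```

Long-running daemons do not have this problem, which is why the pull model works well for servers and poorly for batch jobs.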

Alpine linux

In the last few days I read an article claiming that Docker was moving to Alpine Linux as its default Linux distro. I do not know the politics associated with all things docker, but it is curious.

The Alpine team claims that the minimalist installation is only 5MB, implying that it's ideal for distribution, and compares similar use-cases between Alpine and Ubuntu, with Alpine the clear winner. If memory serves, there was a time when ubuntu offered a very light version of ubuntu: something that was essentially kernel-only plus a few vital tools.

In my development container the golang+debian image was about 800MB and the Alpine version is 426MB. I'm pretty sure that most of the bulk comes from the golang base image and my packages.

run apk add --update tree openvpn openssh openssl vim docker tmux screen sqlite graphviz sqlite-libs htop unzip net-tools wget curl iputils bash

One of my biggest concerns is that it's a community project and I …
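In a container build, the package list above would normally live in a Dockerfile. A minimal sketch, assuming the official golang Alpine base image (the tag and the `--no-cache` substitution are my assumptions):

```dockerfile
# Illustrative Dockerfile for the development container above.
FROM golang:alpine

# --no-cache avoids leaving the apk index in the layer,
# which helps keep the image small.
RUN apk add --no-cache tree openvpn openssh openssl vim docker tmux \
    screen sqlite graphviz sqlite-libs htop unzip net-tools wget \
    curl iputils bash

WORKDIR /go/src
```

Folding the install into a single RUN keeps it to one layer, which matters more on Alpine where the base image is the selling point.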

continuous integration

There is something to be said for compiling my own code.

There is something about the control of saving a file, building and testing it locally, and then committing to version control when it's complete. And when projects are monolithic, or the team or assignment is small, this feels like a good way to do things. But until you've experienced CI/CD firsthand you'll never appreciate it.

Whenever I join a new project, the team lead hands me a list of dependencies and instructs me to pollute my machine with years of project customization, which usually ends in a complete OS reinstall. From time to time I've been handed a VM instance, which makes things better; however, there are no fewer than 3 virtualization environments for OSX, and while they should be able to coexist it's a risky proposition, and the VM instance is usually opinionated. (Containers can be a variation on the VM theme.)

The biggest challenge is the subtle changes that take place in the environment ove…

PXE boot on a home network

I have several home routers which I tend to swap around as I need various services or features. But now comes the hardest task of all. I work from home, and while I use different virtual servers, digital ocean (DO), google compute engine (GCE), and amazon web services (AWS), the problem is that getting enough headroom in my compute instances requires considerable cash outlay. On the other hand, when you consider capital outlay, depreciation, noise pollution, and the cost of residential power, running my own hardware may not be an ideal option either. (There is something to be said for lights-out operations and continuous deployment from the outset.)

Jumping into things... my Dell C6100 does not boot from USB or iPXE; it only supports PXE. Now that I've forked a CoreOS PXE-installer project, upgraded it to the latest CoreOS image, and fixed a few bugs, I'm ready to test PXE-booting a raw VM image.
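For reference, a PXE-only box like the C6100 just needs the DHCP conversation to point it at a TFTP server. A minimal dnsmasq sketch in proxy-DHCP mode, so the existing home router keeps handing out addresses; the subnet, filenames, and paths are assumptions for a typical home network, not from my actual setup:

```
# dnsmasq in proxy-DHCP mode: supplies only the PXE boot info,
# leaving address assignment to the existing router.
port=0                          # disable DNS; PXE/TFTP only
dhcp-range=192.168.1.0,proxy    # assumed home subnet
dhcp-boot=pxelinux.0
pxe-service=x86PC,"CoreOS PXE install",pxelinux
enable-tftp
tftp-root=/var/lib/tftpboot     # holds pxelinux.0 and the CoreOS image
```

Proxy mode is the key trick: two DHCP servers on one subnet would normally fight, but a proxy answers alongside the router with boot information only.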

Trying to PXE boot a VMware instance using a PXE server running on another VMware instance mean…