Sunday, February 21, 2016

TH1 - a very elegant decision

I recently posted that I wanted to implement some modern ideas in a language that could be easily compiled or transcoded for use by modern tools. That might mean rewriting the compiler and interpreter for the new framework, but that's OK because the savings on the other side of the equation are incalculable.

For instance, rewriting a compiler is a known problem. Testing the compiler against the target scripts and the language spec is very knowable, whereas rewriting the target scripts means reproducing whatever nuance might be embedded in the code, which is impossible to know; and when a test fails it's not possible to tell whether the script or the MUT (machine under test) is at fault.

tcl is a good language to consider, and TH1 is an example of why tcl was a great choice.

The SQLite development team does all of their testing in tcl. There is a vast number of tests, and if they had to rewrite them much would be lost; the lost reliability of those tests could set the team back years. I'm not familiar with the effort it took to get from tcl to TH1, but it's still a great idea with a huge upside.

Friday, February 19, 2016

OSX Photo is taking too long to sync

I've complained about my wife's OSX machine in the recent past. The issue is that when I enable iCloud sync, my home network slows to a crawl. The only evidence I have for the network failure is congestion at the router. At the time my connection was rated at 5/50 Mbps (up/down).

With a little luck, my ISP upgraded me to 10/100 Mbps today. Since my photos never completed the sync, I restarted the process. As it happens, I watched a national news story about ransomware last night and needed to get the backup started again.

The increase in bandwidth did not really fix things but it did redirect my focus.

  • I might have failed to notice that Photo, Backblaze and G+ were all running at the same time. That's not a solid practice.
  • Photo's sync seems to use a shared OSX proxy so that it can monitor or shape the network usage; however, the end result seems to have been complete network saturation.
  • Uploading files at WiFi speeds, or even at 100 Mbps, should not be taxing the CPU. And while the fan was going crazy, the "monitor" indicated that there was no network usage. When I switched over to the CPU view it seems a codec was using all available CPU.
Well, my network issue has not been resolved, except that the interruptions per upload are fewer. The problem, however, is that my wife keeps taking new pictures, and until the sync completes they are not protected.

UPDATE 2016-04-25:  When I connected my MacBook to a wired network the sync seemed to make progress and only mildly disturbed the other systems on my network. I'm not crazy about the notion that a single wireless device could crash the entire LAN.

are docker containers PCI DSS compliant?

I posted this question on stackoverflow:
before I go full bore kubernetes or apcera; are default docker containers PCI compliant? Would VLANs improve the security, or is UDP over 8235 just too open to invalidate VLANs, or should the bare metal and metadata be used to support the VLAN structure?
and I think I understand why G+ and FB only have +1 and like buttons; but that's for another time. In this case I'll answer my own question. Docker is no less vulnerable and may actually be more vulnerable. Once the host OS has been compromised, all of the guests are vulnerable. With root access on the host, a debugger, and the right amount of experience debugging containers, you might have access to the containers' memory.

But there are a number of other vulnerabilities related to the container file images that are persisted on the host. Further, any host volume sharing is going to expose the container's data.

As for networking, the container-to-host bridging may also be insecure, especially if the messages are in the clear.

Wednesday, February 17, 2016

Privacy or not

On Feb 16, 2016, Tim Cook, the CEO of Apple, posted an open letter to its customers in response to an FBI request for Apple's assistance in extracting data from a terrorist's phone. While I applaud Mr Cook and Apple for the public statement, I think there are just a few problems here:

Apple has already stated that they have provided all data that they currently have. I'm sure that means things like iTunes, iCloud, Maps, and maybe even iMessage. Of course if the phone was backed up then there is a good chance that the remaining bits of data are already available to the FBI.

Just what exactly does Apple need to do that the FBI cannot? Early iOS phones used a 4-digit PIN, and depending on the configuration settings the phone may or may not erase itself after too many failed attempts. Whether or not this phone will self-destruct is likely known to Apple from the data above. But a 4-digit PIN is only 10,000 combinations and then the phone is unlocked. If it's one of the modern Apple phones then it's possible that a fingerprint might unlock it, and since the FBI has or had the bodies, they have access to the fingerprint.

Granted, there has not been a trial and "the couple" has not been found guilty; however, the likelihood that they were not the killers is remote if not impossible. As such they have given up their rights. All of the laws that we have on the books are currently lawful, and until they are tested they must be complied with. I'm not sure that this is the best test case for challenging them. And even so, it would likely only affect future cases since this one is open and shut.

While this whole topic is filled with FUD (fear, uncertainty and doubt), the security model likely does have a back door. Apple's FileVault product, which allows the user to secure their entire hard drive, has some magic keys that "can" be stored on the iTunes server for later retrieval. It's very likely that this same feature is already present in the iPhone, only we don't know it.

Furthermore, the user's PIN and fingerprint are only used to unlock the first few layers of the cryptography scheme. A 4-digit or even a 6-digit PIN is not big enough to be the key that encrypts a block of data, let alone an entire phone. So chances are better than 50:50 that there is an NSA team that could get into the phone.

So why did Apple and Tim Cook make the statement?

They made the statement because the FBI made a very public request, and if it were known that there was any sort of data leakage, whether by accident or on purpose, the iPhone and iPad brands would be destroyed, and quite possibly Apple itself, since so much of its revenue is tied to those brands. Granted, there is a segment of the population that would not care, and maybe Apple would weather the storm, but it would leave a mark nonetheless.

Monday, February 15, 2016

DHCP weakness

I like DHCP because static IP addresses can be tough to manage. A single lapse can cause holes in the IP pool, and almost the same thing happens as machines are decommissioned. There is also the physical subnet vs the logical subnet. It can be a pain in the ass.

On the other side of the coin, DHCP makes some of those annoyances go away but creates a few of its own. DHCP can cause duplicate assignments if the DHCP server is restarted and the leases were not saved locally. DHCP also makes some network security harder. Of course, static assignments keyed on MAC address make some things easy...
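For what that looks like, here is a minimal ISC dhcpd-style sketch; the subnet, MAC address, and hostname are made up for illustration:

    # dhcpd.conf: always hand the same address to a known NIC
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;        # dynamic pool
      host buildbox {
        hardware ethernet 00:11:22:33:44:55;    # the machine's MAC
        fixed-address 192.168.1.20;             # kept outside the dynamic range
      }
    }

The reservation gives you a predictable address for firewall rules without giving up central management of the pool.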

And then it's all confused by containers.

Sunday, February 14, 2016

Fleet or swarm

MOSTLY because docker swarm is not ready for scheduling.

SECOND because fleet cooperates with systemd, giving a single solution for the time being (a minimal unit file is sketched at the end of this post).

AND while docker and coreos work together there is some discord. I'm trying to keep a balance between rkt and docker so that if we have to select a winner the amount of rework is minimal.

ALSO, Swarm has its origins in a project called flynn. I used flynn for a while, however it was only marginal.

FINALLY, I'm currently using fleet for the Z reporting and it seems to be working well.
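Since fleet units are just systemd units with an extra section, here is a rough sketch of one that keeps a containerized service running; the unit name, image, and port are illustrative, not anything I'm actually deploying:

    # zreport.service -- a fleet unit wrapping a docker container
    [Unit]
    Description=Z reporting service
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    Restart=always
    ExecStartPre=-/usr/bin/docker kill zreport
    ExecStartPre=-/usr/bin/docker rm zreport
    ExecStart=/usr/bin/docker run --name zreport -p 8080:8080 example/zreport
    ExecStop=/usr/bin/docker stop zreport

    [X-Fleet]
    MachineMetadata=role=worker    # only schedule on machines tagged role=worker

Submit it with fleetctl start zreport.service; fleet picks a machine for it and systemd (Restart=always) keeps the container running there.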

Thursday, February 11, 2016

Prometheus vs Bosun

In conclusion... while Bosun (B) is still not the ideal monitoring system, neither is Prometheus (P).

TL;DR:

I am running Bosun in a Docker container hosted on CoreOS. Fleet service/unit files keep it running. However, in one case I experienced a severe crash as a result of a disk-full condition. That it is implemented partly in golang, java and python is an annoyance. The MIT license is about the only good thing.

I am trying to integrate Prometheus into my pipeline but losing steam fast. The Prometheus design expects you to keep your own metrics cache inside your application and then allow the server to scrape the data; however, if your application's run is shorter than the interval between scrapes, then you need a gateway: a place to shuttle your data that will be a little more persistent.

(1) Storing the data in my application might get me started more quickly (a minimal sketch follows this list).
(2) Getting the server to pull the data might be more secure.
(3) Using a push gateway may be a semi-persistent way to collect data from short-lived apps, but it might also act as a proxy for WAN data collection.
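To make (1) concrete, here is a minimal sketch of the pull model using the Go client library; the metric name, handler path, and port are made up, and the import paths are the current client_golang ones, which may differ slightly from what shipped in early 2016:

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // This counter is the in-process "cache" Prometheus expects: the
    // application updates it and the server scrapes it later.
    var requests = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "myapp_requests_total",
        Help: "Requests handled by the application.",
    })

    func main() {
        prometheus.MustRegister(requests)

        http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
            requests.Inc() // instrument the interesting code path
            w.Write([]byte("ok"))
        })

        // The Prometheus server pulls from /metrics on its own schedule.
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":8080", nil)
    }

For (3), the same client library ships a push package that can send these collectors to a pushgateway when the process is too short-lived to be scraped.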

UPDATE:  I was wrong about one thing. I was reading the godoc and was left with the general notion that the local handler was going to collect all of the telemetry, but I could not locate the actual touch point between the Prometheus configuration and the data being collected in the main part of the application. As it happens, the library can instrument the http handler in addition to its other metrics.
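Extending the sketch above (same imports, placed inside main), instrumenting a handler looks roughly like this with today's promhttp helpers; the 2016-era library exposed a similar prometheus.InstrumentHandler wrapper:

    // Count requests by status code and method without touching the handler body.
    httpReqs := prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "myapp_http_requests_total",
            Help: "HTTP requests by status code and method.",
        },
        []string{"code", "method"},
    )
    prometheus.MustRegister(httpReqs)

    work := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })
    http.Handle("/work", promhttp.InstrumentHandlerCounter(httpReqs, work))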

Sunday, February 7, 2016

Alpine linux

In the last few days I read an article making the claim that Docker was moving to Alpine Linux as its default Linux distro. I do not know the political issues associated with all things docker, but it is curious.

The Alpine team claims that the minimal installation is only 5MB, implying that it's meant for distribution, and they compare similar use-cases between Alpine and Ubuntu, with Alpine as the clear winner. If memory serves, there was a time when Ubuntu offered a very light version of itself, something that was essentially kernel-only plus a few vital tools.

In my development setup the golang+debian container was about 800MB and the Alpine version is 426MB. I'm pretty sure that most of the bulk comes from the golang base image and my packages:
    RUN apk add --update tree openvpn openssh openssl vim docker tmux screen sqlite graphviz sqlite-libs htop unzip net-tools wget curl iputils bash
One of my biggest concerns is that it's a community project and I just cannot keep up with the level of dependency checking required. Furthermore this seems to be a direct attack on CoreOS, RancherOS and possibly a few mainstream tiny Linux distros.

One last thing. It's probably better to deploy a static binary instead of a 5MB guest.
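On that note, a Go binary built with CGO disabled (CGO_ENABLED=0 go build) needs no distro at all; a sketch of the resulting Dockerfile, with an illustrative binary name:

    # The image contains nothing but the statically linked binary.
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]

The image is roughly the size of the binary itself, which makes even Alpine's 5MB look generous.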

Thursday, February 4, 2016

continuous integration

There is something to be said for compiling my own code.

-xkcd

There is something about the control of saving a file, building and testing it locally, and then committing to version control when it's complete. When projects are monolithic, or the team or assignment is small, this feels like a good way to do things. But until you've experienced CI/CD firsthand you'll never appreciate it.

Whenever I join a new project the team lead always hands me a list of dependencies and instructs me to pollute my machine with years of project customization, which usually ends in a complete OS reinstall. From time to time I've been handed a VM instance, which makes things better; however, there are no less than 3 virtualization environments for OSX, and while they should be able to coexist it's a risky proposition, and the VM instance is usually opinionated. (Containers can be a variation on the VM theme.)

The biggest challenge is the subtle changes that take place in the environment over time and are unintentionally never captured anywhere.

Continuous integration is only the theme. The implementation depends on the tools and workflow your team has agreed upon: forking vs mainline, jenkins vs drone, push vs pull, how build dependencies are handled, and so on.

Tuesday, February 2, 2016

PXE boot on a home network

I have several home routers which I tend to swap around as I need various services or features. But now comes the hardest decision of all. I work from home, and while I use different virtual servers, digital ocean (DO), google compute engine (GCE), amazon web services (AWS), the problem is that enabling enough headroom in my compute instances requires considerable cash outlay. On the other hand, when you consider capital outlay, depreciation, sound pollution, and the cost of residential power, running my own hardware may not be an ideal option either. (There is something to be said for lights-out operations and continuous deployment from the outset.)

Jumping into things... my Dell C6100 does not boot to USB or iPXE. It only supports PXE. Now that I've forked a CoreOS PXE installer project, upgraded it to the latest CoreOS image and fixed a few bugs I'm ready to test PXE booting a raw VM image.

Trying to PXE boot a VMware instance using a PXE server running on another VMware instance means that the instances need to be (i) bridged to the network so that the services are not hiding behind a NAT. But it also means that there can be (ii) only one DHCP server in the network. The PXE client finds its boot server by broadcasting a DHCP request over UDP and taking whichever offer comes back, so with two DHCP servers on the LAN my client simply is not hearing from the right one. (A google search resulted in similar findings.)
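One common workaround (a sketch, assuming dnsmasq on the PXE server, a 192.168.1.0/24 LAN, and pxelinux as the boot loader) is to run dnsmasq in proxy-DHCP mode, which answers only the PXE part of the conversation and leaves address assignment to the router:

    # dnsmasq.conf: proxy-DHCP so the router keeps handing out addresses
    port=0                            # disable the DNS server; we only want PXE/TFTP
    dhcp-range=192.168.1.0,proxy      # answer PXE clients on this subnet only
    enable-tftp
    tftp-root=/srv/tftp
    pxe-service=x86PC,"Boot CoreOS installer",pxelinux

This sidesteps the one-DHCP-server restriction without touching the router's configuration.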

** I suppose I should mention that the machine in question is a Dell C6100, and when all 4 nodes are running a production workload they draw 962 watts. Depending on the actual cost of electricity, that could be between $400 and $1100 a year, plus the original cost of the hardware.

Considering how much effort has already been put into this endeavor, I think the project should be cancelled and I should look for a VPS bargain instead.

Oops; I forgot to mention that my C6100 includes 96GB of memory. Just matching that memory on Digital Ocean would cost about $1000/mo. Maybe it's a bargain to run these from home after all.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...