

Showing posts from January, 2015

Architect, Designer, Build or Assemble.

I'm not an official Webster's representative but experience tells me:

The difference between assemble and build is the amount and level of detail in the instructions.

The difference between architect and designer is their principal priorities. The architect is primarily interested in scale and function whereas the designer is primarily interested in aesthetics and function.

And where architects are further defined by scopes which blur (application, system, enterprise), designers try to focus with distinct, sharp boundaries.

Apcera and gnatsd

I've watched a number of Apcera demo videos and while I'm not an expert or even a freshman user of Continuum I can spot excellence. In fact, the Apcera team closed the loop on a conversation I had been having with some CoreOS and DataDog team members, and on my posts about the comments made by the Phusion base image team.

Docker and CoreOS started with the notion that the container was supposed to be lightweight. It was supposed to execute a single-purpose application and maybe a few dedicated sub-processes... and maybe ssh if absolutely necessary. To that end the Docker team started publishing their idiomatic base images. Then along came the masses who started creating all sorts of images... and then Phusion stepped in to tell us we were all doing it wrong.
Going the Phusion route meant that the only savings generated by Docker would be the shared kernel, and so given the amount of tooling required to manage a Docker cluster vs an OpenStack or VMware cluster... you're probably better…

say what about OS X and iOS?

Watching the TechCrunch video "Hands on Windows 10" I was actually impressed with Windows. It made a compelling story as I sat here typing on my MacBook Air considering my hardware refresh, which should include phones, tablets and laptops. But then the reviewer made a critical mistake. Granted it was not one that I would say cancels out my interest but it does sort of challenge his credibility.
The statement was to the effect that Microsoft was unifying Windows across all three device classes (computers, phones and consoles) while at Apple there was iOS and OS X, implying that they were not the same. That final comment about iOS and OS X is simply wrong. The core of the operating system for both is exactly the same code. This was never a secret and in fact was part of the propaganda used to sell iOS to potential developers.
Now, of course, it's all about the APIs and experience... and so the same delta exists in the Windows and Apple brands. While the point is taken to mean that the interf…

Makefile in Go

I’ve been having an exchange with some readers about rake and go. In the end I took a page out of the golang playbook and created my own make.go; although it was specialized for easy/hound, it could be modified to be a more general-purpose make program. Granted, the more general purpose it becomes the more it's like the actual make. And then there is the installinator project I have been working on, where some of this would make plenty of sense.

What a waste of a perfectly good USB drive

My family and I are planning a trip to Disney World in the next few months. In response Disney has decided to send us a USB drive with a message "plug it in for a message" or something like that. The drive itself is a 1 GB device. On the drive is a single HTML file which when opened will redirect your browser to the Internet where it downloads and plays a video. Cute? I'm grateful I did not ask to install any software.

Now that I have watched the video I want to reuse the drive for other purposes. In fact, since it was only one gigabyte I thought I would reuse it to install my favorite operating system. After working on the many small details required to create a USB-bootable version of my favorite Linux distribution I have come to realize that the device is partially crippled. While the read times are acceptable when the file being read is very small, using it as a bootable operating system device is extremely painful.
I have decided to buy a better U…

"REST is not a silver bullet"

I was reading the article "REST is not a silver bullet" as published on "prismatic" and wanted to comment on the post. Commenting required registration. However, once I registered, the link in my reading list redirected me to additional registration tasks. What a waste!

The author had a number of complaints; while not wrong, they were not as correct as written. The big issue is/was REST and HTTP. I've read a lot on the subject and most authors seem to believe that:

(a) HTTP is ubiquitous, well understood, and easy to implement.
(b) It's trivial to add SSL.
(c) There are plenty of tools for testing and debugging.
(d) It is scalable and concurrent.
(e) REST just gives the transaction context.
(f) Long-poll and web sockets can fill additional gaps.

I suppose there are plenty of reasons to hate HTTP/REST but most arguments are limited.

My own questions about deflategate

If the Patriots are found to be responsible for the pressure difference in the game balls... what would the penalty be? Could or would it disqualify them from the Super Bowl? Would it be an immediate forfeiture? Would their previous opponents advance?

The answer to that is a resounding no! (a) The Indianapolis Colts are not prepared to play the game and have probably been eating pizza and donuts ever since their loss to the Pats. (b) Then again, the Seahawks have been practicing for a game against the Pats. (c) All the money has already been spent on everything from T-shirts to advertising based on Seahawks vs Patriots. So it's just not going to happen.

What could happen, however, is that Brady and Belichick could be benched for the big game or maybe banned for life with no hope for the hall of fame; and of course millions in fines for defaming the NFL brand.

But while everyone seems to end the question there... there are still a few unanswered questions.
If you were New England, why wou…

VirtualBox, Vagrant and NixOS

Most Linux versions, and possibly all Unix variants, provide a customizable welcome message when logging in, and one of the things I have come to expect is the IP address in the message. This is especially useful when using VirtualBox or VMware. However, if you're running headless and you're not publishing your guests' IP addresses to a local DNS server then things are going to get a bit challenging.

One of the features I like when using Vagrant is that you can get the IP address of the current guest with a command: vagrant ip. Of course the Vagrant config has to be in the cwd, so that can be a challenge. And there can be more than one instance too... so bouncing between guest folders is a nuisance.

Here is a NixOS/Vagrant plugin that should be helpful. I also found this article with a couple of sample commands for VirtualBox like:

VBoxManage guestproperty get NixOS "/VirtualBox/GuestInfo/Net/0/V4/IP"

And since I also had a local-only IP address I needed to execute this t…

Installing NixOS - Short Version

I have been experimenting with NixOS for the last few weeks and every day I find some new reason to like it.

My first experiences were with the VirtualBox version. It was easy to use and the default configuration, with KDE, was pleasant even though I went directly to the Konsole and then to ssh. On the list of interesting features:

- the Nix installer - it's the foundation of the team's approach across the project
- related to the installer is user-partitioned installations... i.e. two users can use different versions of the same program since their environments are partitioned
- the installer is transactional
- It has its own approach to containers - which I do not fully understand but seems more like chroot or jail than it does Docker. Additional good news is that the container also works like the package manager... and you can install Docker too.
- Then I discovered that the project includes a CI, continuous integration, server called Hydra. I don't know much abou…

"switcher" multiplexing ssh and http over the same port

The Switcher project is an interesting implementation of a dual-protocol mux. What is even better is that it's written in Go and it's in the same class of solutions as ngrok. Keep in mind it's irresponsible, immoral and possibly illegal to tunnel through some networks... after watching this mouse computer promotional video it might be hard for some folks to draw the line or know where the line is drawn.

pub/sub message queues containing blobs

I started off with a story. FAIL. Here's the plan:

- save the blob to a key-store using a UUID as its key. The key-store should support a TTL with callback.
- put the UUID in the MQ
- the MQ is always going forward
- the transaction is only ever touched by one service at a time
- If there is a reason to fork the transaction... then each service should replace the UUID and insert a new blob in the key-store.

It is my experience that most services in a transaction only affect a portion of the blob at a time and so copying it around is costly.
Redis has some primitives that allow you to have hashes of arrays such that you can append to an array as the transaction moves around the system instead of trying to log-aggregate the events after the fact.

Docker's machine project is not a CoreOS contender

I'm not sure what Solomon Hykes was thinking when he posted this:

The docker project is strong all by itself and while there appears to be no love lost between CoreOS and Docker... the machine project from the Docker team is no contender to CoreOS. CoreOS is an OS and it stands on its own as an operating system. Machine, on the other hand, is a tool for launching containers on different platforms like Vagrant, Google Compute Engine, VMware... in fact the actual orchestration has been implemented by many 3rd parties.

The statement is clearly inflammatory and not a serious proposition. In fact the opposite is true if you read any of the Phusion posts. The headline on the Phusion website is "Your Docker Image Might be Broken". This tells me that there is a bigger fundamental problem with Docker.

DVCS everywhere and everywhere else!!!

Some time ago I wrote about version control systems and storing secrets. And I still think it's a fool's errand because you'll never be able to prevent it, and education and awareness are the only things that are going to reduce the incidents.

But now I'm thinking about the footprints left behind... when you cancel or close an email account... cancel or move a public project, team or repository.

For example I have a number of projects that I hosted on both bitbucket and github. In most cases it was more of a land grab than proper use. But in the end it was meaningless.

- If I cancelled the project, the "brand" remains, and someone like a squatter is going to reuse it eventually since there is a Google footprint that is likely to make them something.
- Someone is just as likely to clone the project and use it in a social-engineering way to leverage your brand reputation to do bad things.
- In the case of emails there is a good chance someone is going to receive emails that robots sent…

Google Support

I was helping my sister-in-law recover her Google calendar yesterday when I ran out of options. I needed Google Support. And the only way you get Google Support is by paying for Google Apps for Work. Actually getting to support was novel, with just a little friction sprinkled in. While the outcome seems to be a success it's not because of Google but due to a 3rd-party app.

Logging in as her Google Apps admin I found the support button. No matter how I got there, whether via Google search or otherwise, I ended up in the same basic place. There was a moment when I thought I was going to be able to get into the support system and at least ask whether or not I could recover her calendar, but that never happened. Everything pushed me back. So I finally got her approval to get a paid account... $5 per user per month.

- upgrade your account to paid
- login as admin
- press the support button
- press the GET PIN button (the PIN is good for 60 minutes)
- call the support number
- enter the PIN
- when the…

Startup in a Box

If you had to startup a company tomorrow what sorts of services, appliances and applications would you want to deploy? I have started to put together a list which I will eventually try to automate with NixOS and the Nix package manager at the core. Then I will try the same arrangement with CoreOS and Nix.

- email server to send and receive emails
- internal MTA
- internal DNS
- public DNS
- VPN
- LDAP
- DVCS - preferably something based on git with support for releases
- managed switch with vlans
- storage for backups
- storage for active applications
- public & private wiki
- public & private ticket system
- internal & private document repository
- fax
- voip with voicemail and mobile support
- chat
- video chat
- monitoring system / dashboard
- CMS
- invoice / billing system
- general accounting
- prod, staging, dev environments
- internal tools
- master index of all tools
- MQ
- database
- scheduler
- API server
- authentication
- continuous integration
- calendar
- contacts
- vanity website
- FTP server (box or dropbox like)
- fail2ban
- dropbox
- SNMP server

More to come.

Docker Base Images

The Phusion team would have you believe that all other base images are inferior and you are unsafe.
"YOUR DOCKER IMAGE MIGHT BE BROKEN" --Link

This statement is particularly troubling. Not because my base image is vulnerable but because these guys think so little of the Docker team's ability to create base images correctly. As it turns out there are 12 base images that are considered "from the source":

ubuntu, ubuntu-upstart, debian, centos, busybox, fedora, opensuse, cirros, crux, neurodebian, scratch, oracle-linux

Phusion's baseimage is present in the Docker registry, however, the phusion user is NOT "trusted" and there are plenty of forks by users with "trust". So while their claims are appreciated in one sense they are meritless if not incomplete.

The second challenge is the container promise. Everything I have read so far suggests that it's preferred to only have a single process running in each container. This also makes sense as…
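For what it's worth, a single-process image in that spirit can be very small; this is a hypothetical sketch (the `myapp` binary is a placeholder, not a real image):

```dockerfile
# Hypothetical single-purpose image: one process, no ssh, no init system.
FROM debian:latest
COPY myapp /usr/local/bin/myapp
# exec-form CMD makes myapp PID 1; the container lives and dies with it
CMD ["/usr/local/bin/myapp"]
```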

The Internet of Things - phonehome

During this explosion of the internet of things, the things need care and feeding. Sure, sometimes they are small and even unimportant, like a bot following a crayon line, but then there are other cases where cash registers, scales, remote printers, wifi gateways, or your toaster needs a hug.

One such system I designed used ssh, bash, and a semaphore file. The amazing thing is that it scaled well when I used OpenBSD as the ssh server. I even designed it with HA in mind such that there were two ssh servers that the remote devices could connect to. One weakness that the system has is that it's not on-demand. There is a cycle time between the device and the server.

(a) make a connection to the 'a' server and set the timeout
(b) if the timeout expired drop the connection
(c) make a connection to the 'b' server and set the timeout
(d) if the timeout expired drop the connection
(e) sleep for 5 min
And from time to time I've had to wait 20 minutes to get a connection t…


Inspired by `man 1 flock` I have decided to build my own flock utility in golang. After creating a public git repo on bitbucket I started thinking about the details: (a) use the stdlib only, (b) keep it simple, (c) offer a command line and a package.

Nothing new there, but then...

Now the question becomes: should I always have a lock file? Every time I run any application should I have a lock file? Should that lock file prevent multiple instances of THIS version of the tool or any version of the tool? Or should the lock file act as a sort of semaphore indicating that there is an application of this type running?

I ended up with a few choices:
$ myapp -pidlockfile - use the program name (arg[0]) and the current pid.
$ myapp -buildlockfile - use the program name and the CI build number.
$ myapp -lockfile - use the program name; prevent multiple instances of the same application on the same machine.
$ myapp -nolockfile - do not use a lock file.

One thing to keep in mind is that if you use fleet to launch your…

Logging theory of operation

- Log everything you need for debugging as part of a possible post-mortem on the target machine, but do not aggregate or ship the logs.
- Make certain each log entry is unique, with its PID or something similar.
- Aggregate duplicate log entries and decide on the max dupe count before writing a sentinel.
- When an actionable event occurs, send a message to the monitoring server.
- When an event needs to be monitored, send that event to the monitoring server immediately. This is usually an indication that the transaction is either beginning or ending; or some critical timing piece like an external service.
- I like to perform a stack trace as the transaction progresses, storing the data locally until it completes, then use a low-priority service to send it to storage or a reporting server once the transaction completes.
- Logstash and elasticsearch are interesting tools but they are not without their challenges. Once you get to talking about scaling and capacity issues log…

GoLang implementation of 'man 1 flock'

Linux has a utility called flock. It's pretty handy because it'll prevent the current program from running a second time. This is particularly useful when a cronjob's runtime is longer than its interval. The described flock util creates a file and then sets its lock. If a second instance is started then the flock function will cause the second to fail, so long as you are watching the return values.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	file, err := os.OpenFile("test.dat", os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Printf("%v\n", err)
		return
	}
	fd := file.Fd()
	fmt.Printf("%x\n", fd)
	// LOCK_EX requests an exclusive lock; LOCK_NB fails instead of blocking
	err = syscall.Flock(int(fd), syscall.LOCK_EX|syscall.LOCK_NB)
	if err != nil {
		fmt.Printf("%v\n", err)
		return
	}
	// hold the lock so a second instance fails while this one sleeps
	time.Sleep(15 * time.Second)
}

A little more needs to be done to this code like pull the command from the CLI and a number of other params (see t…

Unit Testing - keep it simple instead

Kelsey Hightower (CoreOS) sent the following tweet to Rob Pike (Go author).

@rob_pike: "Unit testing was driven by the dynamic language people because they had that instead of static typing." - LangNext 2014

In 1983 I wrote two custom applications. The first was a mail merge program that would take a CSV file of addresses and a WordStar document and print the merged results so that they could be stuffed and snail-mailed. The second was a warehouse inventory system for perishables.

At the time, the state of the art in debugging was the print statement and testing was manual integration testing. Either you got the results you wanted or you didn't. I would like to say that programming today is more complicated and that we need many more guardrails to get from a proposal to a functional application, but I cannot.

I'm reminded, again, that a Russian space engineer once told me to keep the interface simple and internals simpler. This makes everything simpler. Simple might break but…

Mozilla Rust - box()

I started watching an Introduction to Rust video when the presenter got to the box(). According to the rust-lang reference manual:

A box is a reference to a heap allocation holding another value

And when you use the rust stdlib there is a Box class that looks like any "generic" implementation. In rust a box of an int might be Box<int>. And while the JDK manual references autoboxing with a similar description, there is no actual box class or function; a boxed int simply becomes an Integer.

I turned in my java beans a very long time ago and I don't know much about rust, but I do know that this is an awkward way to design a language. Boxing is probably a good thing for the internals, however, its existence and promotion to something that requires first-class consideration from the programmer seems less pragmatic.

While the structure is probably there in order to provide some space for the various runtime and compile-time protection mechanisms as a syste…

Pro Tip from Go Authors

Dave Cheney recently tweeted this:

#golang top tip: fmt and log packages know how to print errors, prefer fmt.Printf("Oops: %v\n", err) to fmt.Println("Opps:", err.Error())

What makes this interesting is that I also saw something very similar in a live coding session. Andrew and Brad used "%v" in all of their Printf functions.
I do not have a concrete explanation, however, my intuition tells me that the Printf functions interpret the "%v" and then use some reflection and Stringer-ification. This is because they seem to use it to display integers and strings. In a pragmatic way it makes sense to use the "%v" so that you do not have to modify both the format string and the variables, though you might have to do that anyway. At least you'd be able to change (refactor, but not really) the types without having to chase down all of their usage.

Hacker News is excited about: Flow Based Programming

I was just trying to visit Morrison's website to see if anything had changed, either in response to the Hacker News posting or prior to it. However I received a status code 509, which indicates that the site has exceeded its bandwidth limits. That's both good and bad. (good) because FBP is worth looking at and implementing; (bad) because its popularity is likely to lead to more misunderstanding before all the useful tools have been vetted. (noflojs, flowhub)

Formatting a USB drive for NixOS

Recently I posted some general instructions on installing NixOS on a USB drive. One area I was having trouble with was the basic format of the drive. At the time I decided to use a SmartOS USB image and a little 'dd' magic to copy the image file from the local drive to the USB stick. Since then I have worked out the missing pieces.

With a booted OS X where the USB drive is located at /dev/disk2 ...

sudo fdisk -e /dev/disk2
fdisk> auto dos
fdisk> f 1
fdisk> w
fdisk> q

Now you have an msdos-formatted (partition table) USB drive. At this point I followed a few of the same steps to be able to connect the USB drive from the host OS X to the guest NixOS. Keep in mind that the USB stick is currently partitioned with partition 1 as the active boot partition, but not formatted, so it cannot be mounted just yet.
With NixOS booted and at a console (where the USB drive is located at /dev/sdb and the fat partition is at /dev/sdb1)... format the partition `sudo mkdosfs /dev/sdb1` net-env…

OS X and junk mail

The junk mail features offered by OS X Mail are pretty traditional but not very modern. I cannot remember the last time I had to look at my junk folder to find an email that might have been classified as junk incorrectly.

In the meantime, when I tell Mail to delete the junk mail when quitting the application, it only delays the amount of time it takes to shut down. If I select 1-day then the email is going to linger for 24 hours, more or less, and quite possibly leave breadcrumbs for the same.

I think I want a "never see junk" setting and "auto delete junk immediately" setting.

fishshell, boot2docker and my config

I like fishshell for a number of reasons, with my favorite being the CLI. While it does not support ^R out of the box to search the command history, it feels a lot more natural. Just start retyping the desired command and it will find and highlight the previous versions, which you can scroll and select. Of course it has the weakness that if the first few letters of the command are not the same as the next command then there is some CLI navigation gymnastics... but at least the normal operation is the normal operation.

I suppose I have gotten used to the fact that the .bashrc and .profile files are in the $HOME folder, because when fish decided to put its config file here: $HOME/.config/fish/ I could never remember the exact folder and in some cases I could not be bothered to search the docs. This is part of my green folder initiative.

My fish/ looks like this:
set -x fish_user_paths $HOME/bin
set -x GOPATH $HOME
go env|grep GOROOT | sed -e "s/\(.*\)…

BLEET - Green Folder Initiative

Green Folder Initiative is my attempt to keep my userspace $HOME folder as generic as possible, so that whatever skills I master from the shell, vim or other tools will transfer from one *nix to another with minimal relearning or adjustment. (I do not want to maintain and sync multiple $HOME and config setups across multiple OSes, as I've already experienced incompatibility across similar OS X, BSD and Linux versions.)

Booting NixOS from USB

Creating a bootable NixOS USB has been a challenge. I tried a number of different strategies and none of them worked. In the end the missing element was that the USB drive needed to be formatted as either FAT or FAT32, and the partition table needed to look traditional (my default USB partition table had an EFI partition; right or wrong, it seems to have been part of the problem). Also there are a number of differences in 'fdisk' commands between OSes and that was frustrating too.

These are not the steps but the discovery:

- I downloaded and created a SmartOS bootable USB.
- Creating the USB device with the SmartOS image appeared to change the partition table in a way that reminded me of the old Windows and DOS days.
- Now I downloaded and installed NixOS for VirtualBox.
- I booted NixOS in VirtualBox.
- In NixOS: installed wget `nix-env -i wget`; installed unetbootin `nix-env -i unetbootin`; downloaded the NixOS image with wget; inserted the USB device from above and allowed it to be recognized by …

Commit Often

I like to commit often, and the reasons I give are twofold. (i) Making a change to a line or group of lines is usually based on a single notion, where making changes all over the code is usually part of a feature or some idea bigger than the isolated code, and committing that change is more of a roadmap than a change history. (ii) I have been known to use more than one computer at a time, depending on how and where the development is taking place. I might like to make some changes, go home, and then pull those changes to my home computer. Having the agility to work anywhere means having this sort of flexibility.

One thing I hate is when every mini commit turns into a build and test cycle; and certainly there are a number of ways around this too... instead of commits, use a shared drive or a cloud drive. Of course this conflicts with #i. Forking the code and committing works to a point, but depending on your CI the branches might auto-build too, though with any luck that is configurabl…

Mesos, Marathon and Mesosphere

Everything is turning up Docker these days, and as such there are a number of new orchestration and scheduling systems for Docker that are popping up. Three projects that seem to be connected are Mesos, Marathon and Mesosphere. I'm sure they are interesting projects but in my "Level-A" stack they do not measure up.


- Mesos requires the JDK and as such will not install on CoreOS as nicely as one would like. Self-installer or not, CoreOS is meant to be immutable and so this is not an idiomatic idea.
- The getting-started documentation gives Ubuntu 12.04 examples, and as of this writing Ubuntu is well into the 14.10 cycle with 15.04 just a few months away. They could have updated the doc. (not a good sign)
- Without first-hand knowledge other than reading the architecture documentation, it appears that Mesos is something that looks like an analog of Docker in the JVM.
- There is mention of some Hadoop clustering and MPI. MPI is part of the cluster compute API framework and never m…

BLEET - Oracle's JDK

There is absolutely no reason why the Java installer needs access to root in order to install itself on an OS X or Linux system. It should be very happy in the user's home directory... Just look at golang.

OpenStack in terms of CoreOS

CoreOS is meant to be immutable so attaching to running things from the host directly is a bit of a challenge and possibly just wrong. But as I look at OpenStack, CoreOS, Unikernels and the other moving parts I'm curious to know if there is a complete and reasonable analog to OpenStack in terms of CoreOS (other Linux variations later).
OpenStack Cinder - Storage
There are plenty of Linux storage solutions for CoreOS. If you are not mounting an NFS file system from the core then you're probably creating a DataContainer and attaching to a NAS or SAN. Additionally there are some docs suggesting that ZFS can be attached using Flocker.

OpenStack Nova - Command Line
Currently there is no aggregated CLI or GUI for all of the CoreOS features, but the same can be said for OpenStack; Nova is but one part of the equation. CoreOS performs that function mainly with the fleet list-machines command.

OpenStack Keystone - Identity Server
There are a number of identity solutions for CoreOS, however,…

TDD in the small?

I have not deviated from the belief that TDD is junk, or at least less useful than some dedicated QA engineers would have you believe. It has always been my position that the more complex a function or task, the more testing might be required. When I'm building a transactional system I typically build transactions to explore and explode the edge cases... partly to verify the intended functionality but also to provide a framework for regression testing.
Transactional regression testing in the payments industry is critical to success, but that does not make it TDD. So I have the following questions:

1) how small (LOC or some complexity indicator) does a function have to be in order to justify not implementing tests?

2) how big (LOC or complexity) does the function have to be in order to warrant 100% code coverage?

One could argue that 1 & 2 are the same number but that's not the point I'm trying to make. Is there a level of complexity that does not need to be tested and a lev…

"Optimizing Go: from 3K requests/sec to 480K requests/sec"

480K TPS would be a wonderful problem to have, but in the meantime I think this sort of transaction volume has a limited number of contestants. Just how many companies do work at Netflix, Google or Amazon scale? Not many.

The thing to remember is that evil Big-O notation from those college days. Adding even a few LOC could cause the 480K to fall like a rock. Every bit of work you ask the transaction to perform could have a drastic effect.

For example, let's say that the 480K is achieved by just counting the number of transactions... nothing else. If you decided to count each transaction a second time then you'd expect the TPS rate to be cut roughly in half. This becomes more relevant when that "second" thing is much more costly.

Why Atom-shell?

Atom is a fun editor and I've already benefitted from its plugin ecosystem. The developers have forked out the part of the application that provides guardrails for building desktop applications based on nodejs, called atom-shell. On the surface this seems like a good idea, but the more I dug into it the less I liked it.

First of all rumor has it that Atom is sending events to the mothership. On the one hand it bothers me but on the other hand Atom's life is limited; I am working on my own browser based IDE.

Second, if I'm building apps then they are supposed to be mobile first and then browser... or they are server only. So there is absolutely no reason to have a desktop version of anything.

As far as I can tell there is no reason to write desktop javascript... and so no reason to use atom-shell.

The any language challenge

One of the challenges of most programming languages is the stratification of complexity. Just last night I was responding to a post which criticized K&R, and this morning it's even more clear to me that there is a stratification of code within any application. Meaning that some code is almost considered MACRO-like and some code is considered low level. In my opinion one thing that makes any code hard to read is when the delineation between layers gets fuzzy.
In almost every application I write there are 4 layers. And with a sensible namespace and directory structure it's easy to learn and relearn.
Over the last few years I have repeatedly fallen into a trap. On the one hand I want to make software implementation more like assembly... just bolting things together in a loose sort of way. (think Macros) and every time I get there I start thinking about embedded lua, tcl or some other small footprint language. And in some fits of insanity even a lisp variant.
When I wake from …

What is my next *nix?

I have long been a fan of Slackware and OpenBSD. Both are rock solid, and while they are opinionated they are clean and reliable. Many years ago the Slackware author dropped out for about a year in the middle of the most productive time in the Linux kernel's development. Patrick was also highly skeptical of the 3.x branch of the kernel and as a result things, including driver support, lingered.

OpenBSD has long been laser-focused on security and freedom, and Theo has always held true to those principles. There was a time when certain wifi and video drivers were closed source; while OpenBSD was the most secure *nix available, it simply would not run on some of the most common hardware without substitute video and network hardware.

There is a lot going on right now and things are only getting faster. While I like Ubuntu, Fedora, CentOS, Mint and a few others... I am starting to focus on smaller OS'. For example, CoreOS, NixOS, MirageOS, Erlang on Xen and Elixir on Xen; so called u…

GoLang Generics and K&R Troll

Just this morning I read a blog post disparaging the definitive introduction to the C programming language ("The C Programming Language") by someone who professed to be a published author. Unfortunately he had disabled the comment section of the blog, suggesting that "we" write our own blogs. This, however, is totally disingenuous, as the blogger is actually trying to generate SEO-type traction instead of spreading ideas or starting a dialog.

While the K&R book was not meant to be a demonstration of idiomatic C coding style, it was the standard at the time... and it has evolved nicely since. At the time the language was written, the only operating system of consequence written in it was from Bell Labs. DOS, CP/M and most other PC-based operating systems were implemented in assembler. If memory serves, all but about 70 lines of the original Unix system were written in C. C was intended to be a systems language.

This nameless blogger simply does not know eno…

mobile first

I'm writing this post in advance of my 10 day vacation but it seems appropriate nonetheless and by the time it posts I should have already returned.
My employers, customers and family members have always had different ideas on how I should behave on my vacations, from totally connected to totally disconnected and all points in between. I like to be somewhere in the middle; it's not about being "Wally Pipp" but about being responsible and taking pride and ownership.
With that in mind I'm looking at my briefcase and wondering how I'm going to carry everything:

- iPad mini
- iPhone
- 11" MacBook Air
- 15" MacBook Pro
- 2x iPad 2's with iGuy cases (say bulky)

and then plenty of accessories and power adapters. And carry-on for me, my wife, 2 kids and the huge double stroller. And let's not forget winter coats and whatever the rest of the family is going to stuff into the stroller. Some of these are easy to carry and some are difficult. The iPads are t…