Tuesday, April 29, 2014

Do it all well

"You do not have to do everything; but everything you do, you need to do well."
Yesterday I read an article in which the author suggested that Google+ is dead. I hope not. As an entity it seems to be a mashup of several ideas; as a repository of knowledge, however, it must persist (unlike Wave).

I'm recalling the 80s, when there were many awesome all-in-one applications: PFS Write, Calc, Database, etc. Then the lead was stolen as brands moved into the dedicated-app space. Things would be so much simpler if PFS had won the day.

I have the same advice for Google: keep it lean and simple... and do it really well.

Broadband service provider SLA bullshit

[UPDATE] This could be a duplicate post, or even an incomplete one for that matter; however, now that Comcast is moving into my neighborhood offering more features and higher service levels at half the price... I'm moving to them.

My local broadband service provider continues to tell stories about service levels in our community. The easiest way to determine whether or not there is truly an event happening in the community is the hold time for customer service or technical support.

Regardless of the circumstances, customer service has the same answers. #1: our system is working flawlessly and you should be able to perform a speed test against our internal server. Of course that's crap from the get-go, since half of the formula for diagnosing infrastructure problems is the gateway from the service provider to the Internet as a whole. Many times I have been able to achieve optimum speed-test results between my local system and the internal target server, yet when attempting the same test against a remote server it is clear that there is a bandwidth issue.
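A rough way to reproduce that comparison from the command line looks something like this (the URLs are placeholders for the provider's on-net test file and any well-known off-net file):

# average download speed (bytes/sec) from the provider's internal test server
curl -o /dev/null -s -w '%{speed_download}\n' http://speedtest.provider.example/100MB.bin

# the same measurement against a server out on the Internet; a large gap
# points at the provider's gateway rather than the local loop
curl -o /dev/null -s -w '%{speed_download}\n' http://files.offnet.example/100MB.bin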

#2: Technical support also insists that, in order to perform a proper speed test, one should remove the internal router and connect one's computer directly to the provided modem. Since the local cable network is effectively a shared medium, any of my packets not protected by HTTPS are potentially visible to my peers. Therefore, if one of my neighbors were particularly savvy they could have access to my data. Secondly, without a hardware firewall in place there is always the possibility that the local computer is not adequately protected. But I did it anyway.

The following video is a short clip demonstrating that my local MacBook Air was connected directly to my service provider's modem. I then attempted to access the ubiquitous Speedtest network. The results should be self-explanatory.

The second video demonstrates that I was able to get optimum performance using my Wi-Fi network. Therefore the router and Wi-Fi components of my internal network are not part of the problem.

Historically my Internet service provider has had a number of hardware problems in the area. The first has to do with the cable plant being underground: typically, when we have a seasonal change and an increase in precipitation, many of the devices in the cable network malfunction or perform out of tolerance. Our local cable company simply does not put in the time to adequately maintain the network.

Monday, April 28, 2014

So long Rackspace, hello GCE

Rackspace has been very good to me over the years. They have been taking a modest $50 USD/month for about 2 years and $75 USD/mo for the last year. But now I'm looking back on those servers and realizing that there is just way too much work for me to keep it all alive. So I am moving to CoreOS running on GCE (Google Compute Engine) with plenty of Docker. I should be able to get to the magic $25 USD/mo and still get the same service I have been getting. With the remaining cost savings I might allocate a huge disk so that I can back up my 300GB of family photos at full resolution.

UPDATE: I just deleted my second server.  I had turned this server off several months ago (after backing it up) but in that time I have not missed it.  So goodbye.

UPDATE: only 2 servers left.

The vendor approach to 3rd party libraries

The "vendor approach" is defined by the relative absorption of a 3rd party library directly into a project by creating a "vendor" directory and putting the libraries and all of it's dependencies in that directory.

This might seem to be a reasonable solution because it means that the dependency is now static and embedded in your project. Sure, there is a bit of security in the fact that you control what changes are implemented in the version you have locally. But unfortunately, if the 3rd party lib is part of an active development process, then it may be prohibitively expensive to maintain alongside your own code.

My recommendation is that you fork the code so that you have your own copy. This can get a bit hairy when the code has deeper dependencies, and for that reason alone it might not be the right library for you. In my case I might build a pkg, import one 3rd party lib, and keep the rest limited to the standard lib, reducing the risk that comes with change. This will also make it easy to merge upstream pull requests.
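As a sketch, the fork-and-track workflow looks something like this (the repository names are placeholders):

# clone your fork and keep the original project as an "upstream" remote
git clone git@github.com:me/somelib.git
cd somelib
git remote add upstream https://github.com/original/somelib.git

# later, pull in upstream changes deliberately and on your own schedule
git fetch upstream
git merge upstream/master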


main() is a fractal dimension of semi goodness

Most of the main() functions I've seen and implemented are typically just a few lines long and then immediately launch into the application. On the one hand it is plain to see that the thin layer acts as a way to match the impedance between the application and the underlying operating system. By FD (fractal dimension) one should also be implementing thin layers between the application and any other 3rd party library linked in.

On the other hand... if you migrate some code into the main() function, making it thick so that it actively implements some of the startup, then... also by FD you could argue against thin layers between any part of the application and the 3rd party libraries.

I like the thin layers. They create the best opportunity to mock the target and write more test cases (IoC and Dependency Injection). Ick!! But then you gotta ask yourself whether or not a thin main() makes any sense or if the compiler/linker should shorten the dependency.
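For what it's worth, here is a sketch of the thin-main() shape I have in mind, in Go (the names and the HTTP server are made up purely for illustration):

package main

import (
	"log"
	"net/http"
	"os"
)

// app holds the dependencies that main() wires up; a test can build
// one of these with mocks instead.
type app struct {
	logger *log.Logger
}

func (a *app) handler(w http.ResponseWriter, r *http.Request) {
	a.logger.Printf("request: %s", r.URL.Path)
	w.Write([]byte("ok"))
}

// run contains the real startup logic and is testable on its own.
func run(a *app, addr string) error {
	http.HandleFunc("/", a.handler)
	return http.ListenAndServe(addr, nil)
}

// main stays thin: it matches the impedance with the operating system
// (env, args, exit code) and delegates everything else.
func main() {
	a := &app{logger: log.New(os.Stderr, "", log.LstdFlags)}
	if err := run(a, ":8080"); err != nil {
		log.Fatal(err)
	}
}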

Sunday, April 27, 2014

Using (almost) FREE f1-micro for simple naked redirect

[UPDATE] Sadly, while I was collecting my notes and trying to reproduce the results I was getting from my n1-standard project, I was not able to get my /media/state partition to mount, and I have not been able to determine whether it has anything to do with the FREE f1-micro or not. I will try my steps again with a larger system and see if that makes any difference. Keep in mind that the FREE f1-micro is awful for anything other than the simplest tasks anyway.

[UPDATE 2] I was able to get it to work. The debug session started after attempting to build the system about 10 times... then I started scanning the logs (sudo journalctl -f), but that did not yield any fruit. Finally I looked at the entire log file (sudo journalctl -a) and read it after several redeploys and reboots. I found a strange error message in the log: "Failed parsing user-data: Unrecognized user-data header: coreos:". I rebooted a few more times and did a few Google searches. Still nothing. Then I found an obscure reference in the cloud-config doc/spec that said that "#cloud-config" was the proper header. I thought #cloud-config was a comment and so I had deleted it before trying to build my instance. I do not mind complaining about this fact. Clearly a comment should not be a header. Not even in a YAML file.

I have a dedicated server at Rackspace that I once used to host a number of dedicated apps. Those apps have since been decommissioned or are now hosted via 3rd party SaaS systems, and strangely enough none of those SaaS providers support naked domains.

(A naked domain is a domain where the leading hostname is absent. A regular domain name for a web server might be www.domain.com, and the naked domain would be domain.com. There are some historical reasons why this is the case, but it's probably a good thing for now.)

Now my dedicated server simply intercepts the naked domains and, with a wildcard, also catches all of the typos, redirecting the user to the default server. So if the user entered fred.domain.com the browser would be redirected to www.domain.com. That server costs me $15 a month and is just generating heat. The redirect is rarely used, and at this point I just need to make it go away.
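For reference, the redirect itself amounts to a few lines of nginx configuration along these lines (the domain names are placeholders; a file like this is what will eventually land in the sites-enabled volume described below):

server {
    listen 80;
    # catch the naked domain and any typo'd hostnames; DNS for www itself
    # points at the SaaS host, so it never reaches this server
    server_name domain.com *.domain.com;
    # send everything to the canonical www host
    return 301 http://www.domain.com$request_uri;
}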

Enter Google's f1-micro server. It's an anemic server that is (almost) free to operate, and it will do the trick. The best part of this is going to be the use of CoreOS as the host OS. You'll have to explore CoreOS for yourself, but what interests me is the upgrade process that just works and means that every version is basically LTS (long-term support).

And so we begin…

Assuming you have a GCE (Google Compute Engine) account, these are the steps I used to deploy the project.

** you’ll need a file on your local system: cloud-config.yaml

#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/… your key goes here...
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: media-state.mount
      command: start
      content: |
        [Mount]
        What=/dev/disk/by-id/scsi-0Google_PersistentDisk_pdisk01
        Where=/media/state
        Type=ext4

  1. install the GCE SDK and tools
  2. update the SDK
  3. set the default project
    1. gcloud config set project <project-id>
  4. allocate some disk if needed
    1. gcutil adddisk --size_gb=10 --zone=us-central1-a pdisk01
  5. capture the latest CoreOS
    1. gcutil addimage --description="CoreOS 298.0.0" coreos-v298-0-0 gs://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_gce.tar.gz
  6. create an instance
    1. gcutil addinstance --image=coreos-v298-0-0 --persistent_boot_disk --zone=us-central1-a --machine_type=f1-micro --metadata_from_file=user-data:cloud-config.yaml pcore1
  7. attach the disk to the instance
    1. gcutil attachdisk --disk=pdisk01 pcore1
  8. shell into the system and format the disk
    1. gcutil ssh pcore1
    2. sudo mkfs.ext4 -F /dev/sdb
  9. update the GCE firewall to let port 80 through (by default it is disabled)
  10. reboot
    1. sudo reboot
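After the reboot, it's worth a quick sanity check that the persistent disk actually mounted, using the unit name from the cloud-config above:

systemctl status media-state.mount
df -h /media/state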

I made a mistake with my cloud-config.yaml. These two steps will help:

  1. get the fingerprint with this command
    1. gcutil getinstance pcore1
  2. update the config
    1. gcutil setinstancemetadata pcore1 --metadata_from_file=user-data:cloud-config.yaml --fingerprint=". . . fingerprint"

Here is a sample Dockerfile:

FROM debian:jessie

# Install nginx and keep it in the foreground ("daemon off") so Docker can
# supervise the process.
RUN apt-get update -q
RUN apt-get install -qy nginx-extras
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf

# Attach volumes.
VOLUME /etc/nginx/sites-available
VOLUME /etc/nginx/sites-enabled
VOLUME /var/log/nginx

# Set working directory.
WORKDIR /etc/nginx

EXPOSE 80
#ENTRYPOINT ["nginx"]
CMD nginx

Once the system has rebooted you need to ssh back into the system.

  1. create a folder in your home folder where you’ll store your Dockerfiles.
    1. mkdir -p ~/Dockerfiles
    2. cd ~/Dockerfiles
  2. open a new Dockerfile
    1. vim Dockerfile
  3. build the image (note that rbucker is my username; you should read up on the Docker registry in order to name your images properly)
    1. docker build -t=rbucker/nginx .
  4. run the container. This command does a few things. The '-v' flag mounts a host OS path onto a container path (the ':ro' suffix makes that mount read-only). The '-p' flag maps the host's port 80 to the container's port 80. This command runs the container in the foreground, so you'll have to open a new ssh session to execute the 'docker stop' command. One could also use the '-d' option to run the container in the background.
    1. docker run -p 80:80 -v /media/state/etc/nginx:/etc/nginx/sites-enabled:ro -v /media/state/var/log/nginx:/var/log/nginx rbucker/nginx
  5. restarting the container later. Note that 'docker start' takes a container name or ID (see 'docker ps -a'), not the image name.
    1. sudo docker start <container-name-or-id>
  6. You can see if the container is running with
    1. docker ps
  7. then load your host’s external IP address into a browser and give it a try. The default nginx page should display.

** CoreOS and Docker would prefer that you restart the container with systemd service units instead of letting docker auto-restart. That is possible, but it's a topic for another post.


Logging everything is not the answer

I have long held the belief that if you're going to send someone a status email of some kind, then whatever alert you're providing must be actionable. I was reminded of this fact while reading a blog post from a recent GopherCon.

In the context of the blog post, the author suggested that you should not log anything unless it is actionable. That is a very strong statement, and so I started to think about it in more practical terms.

In one recent project there was very little test automation. It would be too easy to blame the fluidity of the environment and a lack of knowledge of the target to justify not writing any tests. This project was an integration between two systems that were very dynamic. It ended up being realized as a series of code templates and cookie-cutter implementations. In this case I had to log everything. Every entry into and exit from an API produced a timestamp in the log file along with a calculated elapsed time. Implementing the trace was not a difficult or expensive operation; however, it did take up a lot of disk space.
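As a rough illustration, that kind of entry/exit trace boils down to something like this in Go (the names are hypothetical; the real code was generated from templates):

package trace

import (
	"log"
	"time"
)

// Enter logs the entry into an API call and returns a function that,
// when deferred, logs the exit along with the elapsed time.
func Enter(name string) func() {
	start := time.Now()
	log.Printf("ENTER %s", name)
	return func() {
		log.Printf("EXIT  %s elapsed=%s", name, time.Since(start))
	}
}

// Usage inside any API function:
//
//	func GetAccount(id string) error {
//		defer trace.Enter("GetAccount")()
//		// ... real work ...
//		return nil
//	}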

Another project I'm working on is an implementation of a flow-based framework. There is very little logging at the framework level other than some preliminary debugging information. However, there are extensive test cases which validate the framework. This implementation is incomplete and will eventually be completely instrumented, because part of the benefit of the framework itself is the monitoring of the stages and state of the user application. So while there is no framework-level logging today, the framework will eventually do the logging for the application.


Saturday, April 26, 2014

Curated news instead of social news

One of the things that I really liked about RSS feeds was that I could batch my news and take it in on a weekly basis. Social news machines like Twitter require hourly and daily updates just to keep on top of all of the meaningful tweets and filter out all the cruft.

I follow more Twitter and Google+ feeds than I can watch in one day and still be productive. Some of those sources are news; many of them are meant to provide cues for innovation in my profession and career.

What I've realized is that there are too many distractions. The carefully curated news ends up including social news with greater frequency than political, financial, or human-interest news. The technology news needs to be batched so that it does not have a real-time effect on the progress being made now.

Wednesday, April 23, 2014

WiFi Spotify

When I run Spotify on my iPhone it refuses to connect when I disable Spotify's access to the cellular network, even though Wi-Fi is available. What could that be all about?

I'm Absolutely Convinced...

I am absolutely convinced that (a) I should be able to author a non-trivial application using nothing more than my tablet (and maybe a Bluetooth keyboard); and (b) that the metaphors, processes and tools that game developers use to implement games should also be used for application development.

Call it intuition or some deep-seated post-hypnotic suggestion, whatever; it's just a thought I cannot get out of my head today.

Sunday, April 6, 2014

Software patents

Two big corporations fight patent lawsuits and pay to settle patent lawsuits instead of attempting to reform the patent system, or for that matter abolish the patent system of today?

I believe that most large corporations embrace the current patent system because the price of entry and the price of violation mean that most start-up companies that would otherwise get a leg up are halted before they start.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...