
very fast disaster recovery

Yesterday morning there was a fiber cut that affected everyone in Weston, Florida. As I understand it this included every internet and cable provider in the city. If there really is only one trunk into Weston, that makes it a very bad place for businesses that depend on reliable service.

Anyway, this prevented me from using my usual development machines, because they are located in Weston and the databases they connect to are at Amazon. Google's OnHub does not have a feature that would let me bridge my entire network over to a hotspot, nor would I want to, given how much bandwidth my YouTube-watching kids consume.

I managed to move my development environment and get back to work. This is how I did it:

  1. Put my phone in hotspot mode.
  2. Connected my desktop, a Chromebox, to the hotspot. Since it was already connected to my local network via Ethernet, I was able to talk to both networks.
  3. Logged into Digital Ocean and created a CoreOS instance.
  4. Logged into the instance.
  5. Created a key with ssh-keygen (steps 5 through 8 are sketched just after this list).
  6. Gave the public key to github.
  7. Cloned my current project.
  8. Added my personal credentials to .ssh.
  9. Tested the helloworld compile/run procedure.
  10. Asked the sysadmin to add my new IP to the DB server auth list.
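A minimal sketch of steps 5 through 8, assuming a GitHub-hosted project (the repository name is hypothetical):

# 5 and 6: create a key, then paste the public half into GitHub
ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub

# 7: clone the current project (repository name is illustrative)
git clone git@github.com:me/myproject.git

# 8: personal credentials go into ~/.ssh with tight permissions
chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_rsa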
Then I continued as if everything were normal. The whole process took about 15 minutes. I could almost make a case for shutting down development machines at night if there were some cost savings, and that might permit me to have a bigger machine during the day.

Here are some of the facts:
  • CoreOS is essentially immutable in the host partition
  • git and many other basic tools are available on the host. These are the tools that are generally immutable, and the GREAT news is that CoreOS keeps them safe and up to date. In a way the CoreOS team does all of the research that I would have to do if I were running the IT department, and startups are way understaffed for that.
  • My build script has many layers. First of all I use CoreOS' rkt-builder, a version of Debian sid that is meant to build rkt; I'm reusing that tool chain to build my project. Inside my project there is a build script that launches rkt, and once that container is running it launches the project build script, which creates the binary and an associated ACI file. The build also shares a host volume so that the compiled targets can be returned to the host (see the sketch after this list).
  • There is a separate run script that can launch the executable inside a rkt container (also sketched below).
  • There is a separate build script for a wrapper project which compiles multiple targets and combines them into one container and can run them all at once.
  • The bottom line is that I do nothing except set up the keys in order to get my environment operational. This means anyone taking over the project will not have to do anything special in their environment. (One of the things I always hated about assuming someone else's projects was the net effect on my environment. With this structure the net effect is ZERO.)
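Roughly, the layering looks like this. This is a sketch, not the actual scripts; the mount point, script names, and ACI path are assumptions:

# outer build script: run the rkt-builder container and share the
# source tree so the compiled binary and ACI land back on the host
rkt run \
  --insecure-options=image \
  --volume src,kind=host,source=$(pwd) \
  coreos.com/rkt/builder \
  --mount volume=src,target=/opt/build \
  --exec /opt/build/build-inner.sh

# separate run script: launch the resulting ACI in its own container
rkt run --insecure-options=image ./target/myproject.aci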
The next part of this process is going to be spawning the build remotely instead of doing it locally. That way I will not need a huge local machine in order to compile/run my project. In fact I think I can make a case for some sort of golang sandbox, making an IDE out of the Go slide server.
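If the build moves to a droplet too, the local step could shrink to something like this (the host name and paths are hypothetical):

# spawn the build remotely and pull the artifacts back
ssh core@build-droplet 'cd ~/myproject && ./build'
scp core@build-droplet:myproject/target/*.aci ./target/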

Popular posts from this blog

Prometheus vs Bosun

In conclusion... while Bosun (B) is still not the ideal monitoring system, neither is Prometheus (P).

TL;DR;

I am running Bosun in a Docker container hosted on CoreOS. Fleet service/unit files keep it running. However, in one case I experienced at least one severe crash as a result of a disk-full condition. That it is implemented partly in golang, java and python is an annoyance. The MIT license is about the only good thing.

I am trying to integrate Prometheus into my pipeline but losing steam fast. The Prometheus design seems to want you to keep a metrics cache inside your application and then let the server scrape that data. However, if your application's transient sessions can be shorter than the interval between scrapes, then you need a gateway: a place to shuttle your data that will be a little more persistent.

(1) storing the data in my application might get me started more quickly
(2) getting the server to pull the data might be more secure
(3) using a push g…
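For the push option, Prometheus' Pushgateway accepts the plain text exposition format over HTTP, so it can be tried before touching the application; the host and job name below are made up:

# push a single sample to a Pushgateway under job "some_job"
echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job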

Entry level cost for CoreOS+Tectonic

CoreOS and Tectonic start their pricing at 10 servers. Managed CoreOS starts at $1000 per month for those first 10 servers and Tectonic is $5000 per month for the same 10 servers. Annualized that is $72K, or at least one employee depending on your market. As a single-employee company I'd rather hire the employee. Especially since I only have 3 servers.

The pricing is biased toward the largest servers with the largest capacities; my dual-core 32GB i5 Intel NUC can never be mistaken for a 96-CPU dual- or quad-socket DELL.

If CoreOS does not figure out a different barrier to entry, they are going to follow the Borland path to obscurity.

Weave vs Flannel

While Weave and Flannel have some features in common, Weave includes DNS for service discovery and a wrapper process for capturing that info. To get some parity with Flannel you'd need to add a DNS service like SkyDNS and then write your own script to weave the two together.
In Weave your fleet file might have some of this:
[Service]
. . .
ExecStartPre=/opt/bin/weave run --net=host --name bob ncx/bob
ExecStart=/usr/bin/docker attach bob
In sky + flannel it might look like:
[Service]
. . .
ExecStartPre=docker run -d --net=host --name bob ncx/bob
ExecStartPre=etcdctl set /skydns/local/ncx/bob '{"host":"`docker inspect --format '{{ .NetworkSettings.IPAddress }}' bob`","port":8080}'
ExecStart=/usr/bin/docker attach bob
I'd like it to look like this:
[Service]
. . .
ExecStartPre=skyrun --net=host --name bob ncx/bob
ExecStart=/usr/bin/docker attach bob
That's the intent anyway. I'm not sure the exact commands will work and that's partly why we…
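A minimal sketch of what such a skyrun wrapper could look like, assuming etcd, SkyDNS rooted at /skydns, and the port from the example above (all names here are illustrative):

#!/bin/bash
# skyrun NAME IMAGE: start a container, then publish its address to
# etcd where SkyDNS can resolve it as NAME.ncx.local
name=$1
image=$2
docker run -d --name "$name" "$image"
ip=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$name")
etcdctl set "/skydns/local/ncx/$name" "{\"host\":\"$ip\",\"port\":8080}"

Note this drops --net=host from the examples above: with host networking Docker does not assign the container its own IP, so there would be nothing for docker inspect to report.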