
SSL sucks

Don't read this post unless you want to read about FAILs.

Trying to deploy a secure "developer in a box" domain is non-trivial, which means that deploying any kind of domain is also non-trivial. The challenge is the bootstrap, which is also a "race condition" or a "chicken and the egg" problem.

For example, in my system I have a single public IP address. The router forwards ports 80, 443, and a few others to my traefik server. My traefik server lets my projects register themselves so that their various network requests are forwarded properly. However, my traefik configuration is also version controlled in my private git repo, and I cannot deploy traefik until my git server is in place. Hence the chicken and egg.
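For reference, the forwarding setup looks roughly like this. A minimal sketch of the traefik 1.x entry point, assuming the docker backend and the stock `traefik` image; the host paths and container name are from my setup, not anything universal:

```shell
# expose traefik on the ports the router forwards, and let it watch
# the local docker daemon so projects can register via container labels
docker run -d --name traefik \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/traefik/traefik.toml:/traefik.toml \
  traefik
```

Each project then only needs the right labels on its own containers to get routed.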

As for the SSL title... it's for when you let your SSL certificate expire because it's too hard to remember what you did last time to renew and reload it; or when you've imposed so many guardrails on managing secrets in the organization that any tweak requires a complete redeploy; or when mechanical, mutually dependent chicken-and-egg systems make it too easy to lose control.
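The expiry problem at least has a mechanical fix. A sketch, assuming certbot manages the certificates (if your proxy terminates TLS itself, as traefik can with letsencrypt, this is moot):

```shell
# crontab entry: attempt renewal twice daily; certbot is a no-op
# unless a certificate is actually close to expiry
0 3,15 * * * certbot renew --quiet
```

The point is to take "remember what you had to do last time" out of the loop entirely.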

Here's the outline of my hyper developer domain in a box.

  1. create an account on a VPS service like digital ocean
  2. create a project
  3. register a DNS domain and configure your nameservers.
  4. create a gitlab instance (good luck configuring it so that it's secure)
  5. configure gitlab including letsencrypt
  6. import the baseline traefik project so that [1] you have a known version rather than relying on release version numbers or public repos and [2] you get consistent deploys
  7. create a console machine instance that would be used to deploy the swarm of instances.
  8. ...
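Steps 1–2 and 7 can at least be scripted. A sketch using DigitalOcean's doctl CLI, assuming an API token is already configured; the droplet name, region, and the size/image slugs are placeholders that change over time:

```shell
# create the console droplet that will drive the rest of the swarm
doctl compute droplet create console \
  --region nyc1 \
  --size s-1vcpu-1gb \
  --image coreos-stable \
  --ssh-keys "$MY_KEY_ID"
```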
OK, first FAIL. Git is famously difficult to deploy, and gitlab does not make it much easier, especially since we are talking about the bootstrap and the console. Starting again... this time, instead of deploying a bootstrap git repo and a separate console, I'm going to deploy a small container OS like RancherOS, install docker-machine and docker-compose, then install fossil-scm.

  • Add support for Docker. Just install docker and type "sudo docker run -d -p 8080:8080 nijtmans/fossil" to get it running.
Another FAIL. RancherOS will not install with anything less than 4GB of RAM, or at least not reliably in my testing. So this time I'm going to try Fedora... I like CoreOS, however, releases have been slow, and now that Red Hat owns them it's tough to know what its future is.

And another FAIL: Fedora Atomic 26 ships stock docker 1.13 while the current docker release is 18.x. Even though RancherOS has some heavy RAM requirements, at least it ships with the latest version. And when I tried to upgrade my Atomic host, the upgrade failed.


Another FAIL -- 'curl' does not exist on the base RancherOS console... you have to switch the console to Alpine and then apk add curl... and CoreOS does not permit writing to the /usr partition, so you have to install to /opt and add that to the PATH. UGH. And on CoreOS, forget any plan to add auto-completion.
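Concretely, the RancherOS console swap is one command, and the CoreOS workaround is just a directory convention. A sketch; /opt/bin is my choice of location, not a requirement:

```shell
# RancherOS: switch the system console to alpine (persists across reboots),
# then the alpine package manager becomes available
sudo ros console switch alpine
sudo apk add curl

# CoreOS: /usr is read-only, so park extra binaries under /opt/bin
sudo mkdir -p /opt/bin
export PATH="$PATH:/opt/bin"   # add this to your shell profile as well
```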

Now fossil is running on my console machine and I've installed docker-compose and docker-machine... I've also updated the admin password on fossil. At this point I've gone back and deleted the RancherOS instance because its 2GB requirement is 2x the 1GB that CoreOS needs. This machine is meant to be basically idle, although I still need to add my swarm tools and traefik.
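For the record, installing docker-compose and docker-machine on a CoreOS box follows the same /opt/bin pattern; the release versions below are whatever was current when I did this, so treat them as placeholders:

```shell
sudo mkdir -p /opt/bin
sudo curl -L "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /opt/bin/docker-compose
sudo curl -L "https://github.com/docker/machine/releases/download/v0.14.0/docker-machine-$(uname -s)-$(uname -m)" \
  -o /opt/bin/docker-machine
sudo chmod +x /opt/bin/docker-compose /opt/bin/docker-machine
```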

And another FAIL -- the fossil container is over 2 years old, provides no information on persisting the hosted repositories, and has no link to its Dockerfile.
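With no documentation to go on, the best I can do is guess at a volume mount. A sketch assuming the image keeps its repositories under /opt/fossil inside the container; that path is an assumption on my part, not from the image's docs:

```shell
# mount a host directory over the container's (assumed) repository path
docker run -d -p 8080:8080 \
  -v /var/lib/fossil:/opt/fossil \
  nijtmans/fossil
```

Without the Dockerfile there's no way to verify the path short of exec-ing into a running container and looking.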

