
SkyDNS vs Consul

Competition is a good thing, and the fact that someone at HashiCorp decided to compare SkyDNS to Consul is also a good thing. I'm just a little tweaked about the biased nature of the review, even though I cannot fault them for wanting to come out on top.

Getting SkyDNS started is as simple as the following (the steps are collected into a single script sketch after the list):
  1. start etcd if not already running (systemctl start etcd2)
  2. pull SkyDNS from the docker registry (docker pull skynetservices/skydns)
  3. set your SkyDNS config (etcdctl set /skydns/config '{"dns_addr":"127.0.0.1:53","ttl":3600, "domain":"nuc.local", "nameservers": ["8.8.8.8:53","8.8.4.4:53"]}')
    1. you must restart SkyDNS after any config change
    2. SkyDNS only supports a single domain
    3. queries for anything outside that domain pass through to the listed upstream nameservers
  4. launch SkyDNS (docker run -it --net host --name skydns skynetservices/skydns)
  5. install a test record (etcdctl set /skydns/local/nuc/bob '{"host":"127.0.0.1","port":8080}')
  6. test SkyDNS (dig @localhost bob.nuc.local)
  7. remove the test record (etcdctl rm /skydns/local/nuc/bob)
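
Put together, the walkthrough looks roughly like this. Treat it as a minimal sketch rather than a tested script: it assumes etcdctl is already pointed at the local etcd2 instance, that Docker is available, and that nothing else on the host has claimed port 53.

# steps 1-2: make sure etcd is up and fetch the SkyDNS image
sudo systemctl start etcd2
docker pull skynetservices/skydns
# step 3: write the SkyDNS config before launching (SkyDNS only reads it at startup)
etcdctl set /skydns/config '{"dns_addr":"127.0.0.1:53","ttl":3600,"domain":"nuc.local","nameservers":["8.8.8.8:53","8.8.4.4:53"]}'
# step 4: launch SkyDNS on the host network (detached here instead of -it)
docker run -d --net host --name skydns skynetservices/skydns
# steps 5-7: add a test record, resolve it over DNS, then clean up
etcdctl set /skydns/local/nuc/bob '{"host":"127.0.0.1","port":8080}'
dig @localhost bob.nuc.local
etcdctl rm /skydns/local/nuc/bob

The dig query should come back with an A record for 127.0.0.1; and because SkyDNS reads its config only at startup, any later change to /skydns/config means restarting the container.
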
Some of the reviewer's criticism centered on multi-datacenter deployments. SkyDNS does not really support that model, even though the reviewer said it did and claimed it was slow to recover under DR conditions. Frankly, these statements are WRONG. SkyDNS relies on etcd, which does not specifically have a datacenter-spanning feature. Partitioned datacenters are common, and WAN-distributed systems with a high level of dependency and replication are a problem unto themselves.

In more concise terms: the reviewer said that Consul handled partitioned networks better, but then failed to recognize that if the datacenters were partitioned, the dependent distributed services might also be partitioned.
The one claimed benefit of Consul over SkyDNS rests on a false example.
Operations teams know that cross-datacenter transactions come with a number of costs: network reliability, latency, throughput, and money... Consul offers no distinct advantage here. etcd, on the other hand, separates the storage from the protocol.

Lastly, the reviewer makes the point that both HTTP and DNS protocols are supported. In a way that is true; however, to be precise, the HTTP(S) service is actually provided by etcd and not by SkyDNS.
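
For example, the test record from the walkthrough above can be fetched over HTTP by talking straight to etcd's v2 keys API; SkyDNS is not involved at all. The URL below assumes etcd is answering on its default client port:

curl http://127.0.0.1:2379/v2/keys/skydns/local/nuc/bob

etcd answers with a JSON envelope whose node value is the same {"host":"127.0.0.1","port":8080} blob that SkyDNS turns into a DNS answer.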

PS: as of this writing, version 3 of etcd is newly available, although I do not yet know what new and improved features to expect.
