Wednesday, September 23, 2015

coreos and etcd overhead

I finally managed to get my environment configured in GCE. Ultimately I want it to look like this:
This configuration is supposed to be pretty standard. The hardest part of the cloud-config was realizing that I was supposed to use $private_ipv4 instead of $public_ipv4 for the etcd peer and client URLs. Many of the examples use the public IP, and that is clearly wrong in EVERY case: advertising the public address exposes etcd to the open internet and might leave the system vulnerable to attackers.
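For reference, a minimal etcd2 cloud-config for a cluster member might look something like the sketch below. The discovery token is a placeholder and the exact ports and flags on my machines may differ, but it shows where $private_ipv4 belongs:

    #cloud-config
    coreos:
      etcd2:
        # <token> is a placeholder; generate one from the discovery service
        discovery: https://discovery.etcd.io/<token>
        # advertise the private address, not $public_ipv4
        advertise-client-urls: http://$private_ipv4:2379
        initial-advertise-peer-urls: http://$private_ipv4:2380
        listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
        listen-peer-urls: http://$private_ipv4:2380
      units:
        - name: etcd2.service
          command: start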

Another note about etcd clusters: the authors recommend dedicating the etcd machines to that role alone, so that all system resources go to etcd. When I created the workers I simply made them etcd proxies (see the sketch below). NOTE: if you omit the ?size=3 from the discovery URL then you have to be certain to set the proxy flag yourself; if you include ?size=3 then the 4th (or n+1th) node will automatically become a proxy.
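As I understand it, a worker's cloud-config only needs the proxy setting on top of the discovery URL; this is a minimal sketch with a placeholder token rather than my actual config:

    #cloud-config
    coreos:
      etcd2:
        # run as a proxy: forward client requests to the real cluster members
        proxy: on
        discovery: https://discovery.etcd.io/<token>
        listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
      units:
        - name: etcd2.service
          command: start

The discovery token itself is created with the size parameter (e.g. https://discovery.etcd.io/new?size=3), which is what lets the 4th and later nodes fall back to proxy mode automatically.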

I now have a deployment of 5 machines: three in the etcd cluster and two workers. I happened to be looking at the CPU usage and saw something strange:
This graph is the same on all 3 etcd servers. It appears that an idle etcd cluster member is running at about 30% CPU (the machine is a GCE f1-micro).

Then I checked the workers. Each of the 2 workers looked like this:
Notice that the workers sit at about 15%, roughly half the CPU of an etcd cluster member even though the virtual hardware is exactly the same.
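To confirm it really is etcd (and not fleet, journald, or something else) burning the cycles, a quick look from an SSH session is enough. These are stock CoreOS tools rather than anything specific to my setup:

    # per-cgroup CPU usage, updated live; etcd2.service should top the list on cluster members
    systemd-cgtop

    # plain process view as a sanity check
    top -b -n 1 | head -n 20

    # see what etcd itself is logging while "idle"
    journalctl -u etcd2.service -f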

On the one hand I get it. The cluster members are busy with heartbeats, watches, and so on. The logging tells me some of what is happening, but it's possible a lot more is going on than it shows. And a worker's etcd proxy is essentially asleep while the worker itself is quiet.

Overall, I suppose I'm not that surprised that the worker and etcd nodes perform differently; however, the micro instance isn't running at the level I expected. Anyway, I'll continue watching during the burn-in. I also want to move my development work onto this cluster to see what happens and whether the tooling makes it fun and profitable to work this way.
