Saturday, August 24, 2019

ESXi console timeout

My lab is buttoned down pretty tight. That's not to say that I have air-gapped my systems, but I'm also not inviting intruders. The challenge today is that I need access to my different lab systems, including two ESXi hosts, and the console timeout is just a nuisance.

So the first thing I do after installing a guest is disable the screensaver: [a] it burns CPU whether I'm working or the machine is idle, and [b] it interrupts my flow.

The second thing I do is disable the ESXi session timeout.
Host > Manage > System > Advanced settings > search for "sessiontimeout"

I edit "UserVars.HostClientSessionTimeout" and set the value to 0.

One more thing that is required... disabling the console lockout. That's the kind of thing that will ruin your day. By default, enter your password wrong 5 times and you cannot try again for 15 minutes.

So in the "Advanced settings" tab, search for "Security.Account" and change "Security.AccountUnlockTime" from 900 to 0.
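If SSH is enabled on the host, the same two settings can be changed from the shell instead of clicking through the UI. A minimal sketch, assuming ESXi 6.x and that these option paths match your build:

```shell
# Disable the Host Client session timeout (0 = never expire)
esxcli system settings advanced set -o /UserVars/HostClientSessionTimeout -i 0

# Disable the account lockout window (default is 900 seconds)
esxcli system settings advanced set -o /Security/AccountUnlockTime -i 0

# Verify both values took effect
esxcli system settings advanced list -o /UserVars/HostClientSessionTimeout
esxcli system settings advanced list -o /Security/AccountUnlockTime
```

Handy when you manage more than one host and don't want to repeat the UI steps on each.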

Upgrading VMware ESXi Lab

Let's start with the definition of a VMware lab. It's a non-production-scale system that has enough hardware and software to perform the function under test. In my case that's a couple of different Intel NUC devices with 500GB to 1TB of SSD, 32GB of DDR3 or DDR4 RAM, and an i5 or i7 CPU. I have also run everything under ESXi as the host OS.
The decision to use ESXi was a tough one. VMware offers a free version; however, there are limits to its capabilities, like access to the APIs that permit docker-machine to spawn client instances. I could have installed CoreOS and run everything in Docker containers, but that has its limits too.
Things get even more sketchy because the CLI version of the updater has a number of limits. For example, if the /tmp partition is not big enough, the update process will complain about insufficient disk space... and worse yet, the only documented workaround does not work.

Here's how I managed my latest upgrade:

[1] log into vmware and get the latest patch file as a .zip

[2] enable SSH services on the target host
[3] suspend or shutdown all of the active clients on the target host
[4] SSH-mount the target host's datastore
[5] upload the .zip file from step (1)
[6] ssh into the target host

[7.1] list the contents of the .zip file and determine which patch to apply
esxcli software sources profile list -d /vmfs/volumes/SSD_01/

[7.2] perform the upgrade (get the target path correct)
esxcli software profile update -p ESXi-6.7.0-20190402001-standard -d /vmfs/volumes/SSD_01/

[8] a reboot will probably be required... as indicated in the result. It's a good idea to connect a monitor so you can see the results. I have one machine that will refuse to boot from time to time.

[9] restart or resume the guests on the target host
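Steps 6 through 8 above can be sketched as a single SSH session. The host name and the depot .zip filename below are placeholders for whatever you downloaded in step 1 and uploaded in step 5:

```shell
# Step 6: ssh into the target host (host name is hypothetical)
ssh root@esxi-host.lab.local

# Step 7.1: list the image profiles contained in the uploaded depot .zip
# (the filename is hypothetical -- point -d at the patch you uploaded)
esxcli software sources profile list -d /vmfs/volumes/SSD_01/ESXi670-201904001.zip

# Step 7.2: apply the chosen profile from that same depot
esxcli software profile update -p ESXi-6.7.0-20190402001-standard \
    -d /vmfs/volumes/SSD_01/ESXi670-201904001.zip

# Step 8: reboot if the update output says so
reboot
```

Note that `-d` wants the full path to the depot .zip itself, not just the datastore folder; that's where "get the target path correct" bites.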

Here is a good reference, but I would double check the URLs very carefully to make sure I was not running anything that would make my systems vulnerable... like downloading the patch from a 3rd party, and there are plenty of those.

Friday, August 23, 2019

Flutter, Android Studio and VMWare

I have a 4-core Intel NUC with 32GB RAM and an SSD running VMware ESXi as the host, with either Fedora 30 or Ubuntu 19 as the guest for my complete Android Studio 3.5 setup. It is a painful experience.

While Fedora plays nice with memory management, the Android phone emulator and the IDE are just amazingly slow. Even though I'm on a closed Ethernet with plenty of throughput and bandwidth, it's just slow. I've tried allocating more RAM and more CPU, and checking the network. But the console is turtles all the way down.

Monday, August 5, 2019

delete the bookmarks

I have a long list of bookmarks for hiking gear. The thing is, I have more than everything I need, and frankly the Amazon and Walmart challenges are good enough. I suppose if I were actually going to do a thru-hike I'd invest in some UL and SUL gear, but in the meantime these links and all the time I spend reviewing gear seem to be a waste of time and money...

Time to say bye bye

Saturday, August 3, 2019

more alternatives

In no particular order let's try to keep things simple.

AppScale is an open source version of Google's App Engine. While I like the open-source view of this platform, there are many advantages to deploying on the Google version. First of all, I'm just not sure which direction the development went. Which came first, AppScale or App Engine? Then there are all the tools, consoles, dashboards, and integrated services... but you're going to pay for that. Granted, scale is everything, but there is something to be said about the development environment, designs, and the discipline it invokes. (incomplete because there are shards of for-profit and platform lock-in)

Bitnami - this platform has been around a long time. I remember the ole days when it was the poor man's multi-tenant platform just prior to the domain land rush. The platform has grown to include many different types of packaging. What makes it ideal is that the platform is separate from the applications. From VMware's announcement: "VMware is acquiring Bitnami to accelerate application delivery for multi-cloud and Kubernetes and expand Bitnami adoption in the enterprise."

Apcera - I do not know where this company is any more. It had some novel solutions for managing security, but the platform was very expensive. Skipping it for now.

Drupal - it's a CMS but I'm not sure how it works except for the multi-tenant/application thing.

Joyent - stupid expensive so never mind.

Rancher - I like Rancher. That they seem to be pivoting from 1.0 (Cattle and Swarm) to k8s and k3s suggests that they know something and are not talking. However, my problem with them is that they spend a lot of time talking about deployment and not so much time talking about disaster recovery.

Deis - I thought this project was acquired by Docker; however, the domain redirects to Azure.

Kitematic - a development tool that seems to have been acquired by Docker.

Heroku - kinda like Bitnami. This article points out some interesting ideas on why not.

Unless you want to build a platform of your own, having a platform lets you focus on the things that matter.

HashiCorp Packer: is it a waste of time and money?

I'm lazy. There, I've said it. It's that laziness that prevents me from creating "things" with layers of complexity and simplicity. I simply want to GSD (get sh*t done) in the fewest keystrokes, lines of code, and features. All those things are nice to have, but in all honesty, get some revenue first. Paying customers with feature requests are better than no customers burning through investment.
I recall in one environment we talked about HA replication and scaling to 10s-100s of servers. That was a nice exercise, however, we always said "transaction volume would be a different class of problem to solve". Sadly it never happened and we never grew past a single deployment and all of that DB and security replication cost money to develop and test... and worse support in production. And those features never generated a single extra dollar in revenue.
So what does Packer get you? Frankly, I'm not sure. It supports a number of different targets, but all of them require customization. I'm not sure you get real scale there when you later start adding Chef, Puppet, or Ansible. The layers and costs just continue to mount.

Bootstrapping an environment is a common function. It usually means that there is a source code repository some place with at least one script that can initiate the full deployment. Sometimes this bootstrap system is transient and sometimes it's permanent. It does depend on the operating rules. But one thing for certain is that there are certain challenges when you are looking at recursion... you cannot bootstrap a git server if the code you need is in the git server. That's why I believe in the simplest system possible.

My simple solution is a matter of putting my general-purpose tools in a public repo like GitLab, GitHub, etc. I'll clone that code onto my bootstrap system.
To digress for a moment: my bootstrap system is either a simple dedicated guest or some other system that resides inside the target network. This resource has access to the tools and the ability to run them. That could include docker, docker-machine, and some credentials.
At this point I have a private script that can deploy an admin console. Once the admin console is created, all of the cluster config scripts are loaded and deployed. Currently I'm using Docker Swarm with some minor k8s and k3s implementations. Once the cluster is ready, it's just a matter of deploying apps and services. That in itself is just a matter of launching some docker stack or service commands. None of which requires Packer, Chef, etc... just simple shell commands. Once deployed, normal orchestration applies.
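A minimal sketch of that bootstrap flow, as run on the bootstrap guest. The repo URL, stack name, and compose filename are all hypothetical stand-ins:

```shell
# Clone the general-purpose tooling from the public repo
# (repo URL is hypothetical)
git clone https://gitlab.com/example/lab-tools.git
cd lab-tools

# Turn this node into a swarm manager
docker swarm init

# Deploy the admin console as an ordinary stack from a compose file
# (stack and file names are hypothetical)
docker stack deploy -c admin-console.yml admin

# From here on, normal orchestration applies
docker stack services admin
```

Nothing here that a Dockerfile, a compose file, and a few shell commands don't already cover, which is the point.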

Simply put, you already need a Dockerfile and in some cases some sort of compose file... adding another layer of configuration-as-code called Packer is a waste.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...