Tuesday, October 15, 2019

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure, and maintain. Happily, gitlab and the git experience have been exactly that. There is nothing like a failure to authenticate an SSL certificate in the middle of a basic upgrade.

"We" have always said that open source puts the ownership and cost on the company, while commercial software is an insurance policy with distributed ownership and support.

Saturday, October 12, 2019

k3os is not a general purpose OS

If all you are building is a lights-out k8s/k3s cluster then maybe k3os is a good choice. However, if you are trying to choose one OS to rule them all then k3os is not it. When the website boasts that it "Doesn't require a package manager" you have to ask... "for what?" Don't get me wrong... if you're trying to make the OS immutable like CoreOS then maybe a package manager is not the right way.

It's been a long day.... include a package manager or kill the project and make k3s easier and more reliable to install.

open source -- you suck

I'm having a bad day but in the calm of the moment some things are clear to me.
open source sucks
I'm just a minion programmer and I've always railed against using Microsoft tools. My reasons are many. But while I have had great success with open source, there was a time when "they" did quality work. People used to take pride in their work and professional reputation meant something. Now people just throw whatever spaghetti code they can against the wall and hope it sticks (with a few exceptions).
The flutter team sucks and it's a suckier product. Too ambitious and too many volunteers trying to impress the hiring team.
So I understand: when you use open source tools and there is a problem, you want some help. On one side you either have to share your code and IP, or you have to hire contractors and get NDA, legal, and mgmt involved. This sort of thing can be project suicide.

On the other hand try going with Microsoft. Management feels the warm embrace of the support contract but then as a tech getting past the frontline is yet another barrier to entry. In almost every way this is a complex pain in the ass. There was a time when, at IBM or Microsoft, one could escalate. There was a sales person or a manager or a director that you knew and that you could leverage for better help. But it took patience.

Open source has none of that.

I have a partial answer for that but it's for another day.

Friday, October 11, 2019

flutter: what agile got wrong

Let's get some facts straight. Google does not apply resources in the quantities required to design, build, integrate, deploy, and market a product just to prop up the open source market. There is a real profit motive in play and to suggest otherwise is disingenuous. Flutter is an attempt by Google to convert iPhone-first to Android-first by way of Android-too.
the sum of the parts is not the sum but the average --Richard Bucker

Let's be real for a minute. If you have the budget then you do not care about Flutter. You'll have two teams or maybe three and you'll develop your products in or out of parity. If you do not have the budget you'll get the best team you can assemble based on what sales and marketing has on hand. If you have an iPhone sales team then that's what you're going to develop for because that's how you perceive the market.

The problem with Flutter as a tool and platform is that it's not stable. They seem to be relying on free resources to provide support. The documentation is poor and their howto videos are marketing infomercials for managers. It's not until you actually try to bolt the iPhone and Android applications together that the pain starts... if it hasn't already, as in my case.

So what did agile get wrong? A few things for sure. Releasing often, or too often, for one. Having a customer so huge that you cannot hear the feedback (when the customer is the population at large). And it is possible to release too early, as evidenced by the support team suggesting users switch to the unstable channels to solve problems.
this was accomplished before the internet and before Google. -- Richard Bucker
In the 1980s I was hired to start work on a new payment system that was to become the anchor of the giftcard industry. At the time I was a DOS system and dbase programmer with some OS/2 internals and some OS/2 application development experience. I had already achieved my 10,000 hours so I was a rock star when it came to absorbing new tech. Within 3 months I built the giftcard system on Sun Solaris, Oracle SQL, and Java, designed REST before it was REST, integrated the system into the company's mainframe operations workflow, implemented an integrated IVR, and was the only after hours support person.

The one component I keep trying to forget is the help desk software that I wrote using some tools from CA (computer associates). I'm sure it was fine for some tasks but as a general purpose multi-platform application environment it sucked. And I was too bold not to take the bullet early. The hard lesson I learned is that the sum of the parts is not the sum but the average.  Keep in mind that all of this was done without the internet and at the time Unix system admins were hoarding online and dead tree manuals.

So here's the thing... I had to learn by experimenting with the tools at hand. I had to wait for the mail in order to get the next release, and that could be weeks or months and sometimes years. But when we received a 1.0 release we knew it was not production ready but effectively an early release. When 1.1 arrived we knew there was something there to work with (see the Java language history). The next best place to get good information was the bookstore. I had hundreds of manuals that filled in the gaps. At the time the software release cycle was slow enough that there was a book business behind it (see Pragmatic Programmer and O'Reilly). And when all else failed there was the library or the computer science department at any college.
building on top of crap is a waste of time even if there is time to market. The cost over time will be higher than just getting it right. --Richard Bucker
Now the internet is at warp speed and developers are dumping as much crap on the internet as they possibly can, as fast as they can... is it marketing? Is it just to be relevant? Over the years I've brushed against peers who want to use the latest tool or toy before anyone has experienced it first hand. Early on it was about taking the lead by being the lead. Then it was about not having to support junk I did not believe in and not wanting to replace it with yet another piece of junk... it just misdirects the conversation to something that is nonsense.
tcl and expect are old enough not to have a logo
I'd previously written an orchestration system for vCloud. It served a particular purpose and was designed in functional silos. Now I'm building some orchestration tools for vSphere that solve a different problem: I need to deploy and configure different linux guests. All the cool people are using Chef, Puppet, Ansible, Terraform. I've experienced some of those but they are so complicated, and worse for the non-programmers that might need to support or deploy them. So this new tool is built on bash and expect. Silly me, it just works, and it exposes just enough for operations staff to use and, with a little personal growth, to extend.
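A minimal sketch of that bash-and-expect approach (the host, user, command, and GUEST_PASS variable below are all hypothetical examples, not from my actual tool): a small bash function generates the expect script, so operations staff only ever touch the one-line call at the bottom.

```shell
#!/bin/sh
# Sketch: generate an expect script that logs into a guest and runs one
# command. The host, user, and command below are hypothetical examples;
# the password is read from GUEST_PASS at run time, never stored here.
gen_expect() {
    host="$1"; user="$2"; cmd="$3"
    cat <<EOF
#!/usr/bin/expect -f
set timeout 30
spawn ssh $user@$host
expect "password:"
send "\$env(GUEST_PASS)\r"
expect "# "
send "$cmd\r"
expect "# "
send "exit\r"
EOF
}

# operations staff only edit this one line per guest
gen_expect 192.168.1.50 root "hostnamectl set-hostname web1" > configure-web1.exp
```

The generated file can then be run with `expect configure-web1.exp` once GUEST_PASS is exported, which keeps the minutia separated from the work.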

So now the real rant is over and while I do not regret saying THIS IS NOT NORMAL on a flutter support site... I regret accepting management's decision to use flutter. It's just not normal.

Monday, October 7, 2019

ultimate vs essential

I worked for a company that thought they had the ultimate employees and the ultimate product and the ultimate management... but once you peeled back the layers it was the essential elements that won the day. And while I look through google domains for another vanity domain I see that ultimate.company is available and essential.company is not.

As I think about it again there is a meaningful distinction between essential and ultimate.

Ugh, docker-machine is abandonware

To comment: I'm disappointed and pissed that this is my reality. Docker-machine is not a rock star but it works. It does not have many providers and it does not support many OSes. So now what? I suppose I can make the argument that if docker-machine is meant to be the Turbo Pascal of tools then maybe I should just skip it. Ugh.

  • download the OVA file (As of Oct 2019 it's 3.0)
  • create a VM guest with the OVA; there are few params... one for disk and none for RAM. When creating the machines make the hostnames unique
  • the OVA, by default, is not enough storage so resize it or use the ISO
  • change the password (root/changeme)
  • 'ifconfig' to get the IP address
  • check sshd status: 'systemctl status sshd'
  • add the new machine to docker-machine: `docker-machine create --driver none --url=tcp:// photon3` or use the generic driver: `docker-machine create --driver generic --generic-ip-address= --generic-ssh-user=root photon3` (generic is better)
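The generic-driver step above can be sketched as a tiny wrapper (the IP address and machine name here are hypothetical placeholders; set DM=echo to preview the command without running it):

```shell
#!/bin/sh
# Sketch: register a Photon guest with docker-machine's generic driver.
# The IP and machine name are hypothetical; set DM=echo to preview.
DM="${DM:-docker-machine}"
add_machine() {   # $1 = guest IP, $2 = machine name
    $DM create --driver generic \
        --generic-ip-address "$1" \
        --generic-ssh-user root \
        "$2"
}
```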
That was some basic system config... now comes k3s

  • iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
  • gotta remember to save the iptables changes: iptables-save >/etc/systemd/scripts/ip4save
  • curl -sfL https://get.k3s.io | sh -
  • cat /var/lib/rancher/k3s/server/node-token
  • curl -sfL https://get.k3s.io | K3S_URL= K3S_TOKEN=token_goes_here_for_agent sh -
  • kubectl get nodes
  • find the master: kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
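The k3s steps above can be sketched as one script. The DRY_RUN switch, server IP, and token value are my assumptions for illustration, not part of the official installer (K3S_URL and K3S_TOKEN are the installer's real environment variables):

```shell
#!/bin/sh
# Sketch of the k3s steps above. With DRY_RUN=1 each command is printed
# instead of executed; the server IP and token are placeholders.
run() { [ "${DRY_RUN:-0}" = 1 ] && echo "+ $*" || "$@"; }

install_server() {
    # open the k3s API port and persist the rule (Photon OS path)
    run iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
    run sh -c 'iptables-save > /etc/systemd/scripts/ip4save'
    # install the k3s server
    run sh -c 'curl -sfL https://get.k3s.io | sh -'
    # agents need this token to join
    run cat /var/lib/rancher/k3s/server/node-token
}

install_agent() {   # $1 = server IP, $2 = node token
    run sh -c "curl -sfL https://get.k3s.io | K3S_URL=https://$1:6443 K3S_TOKEN=$2 sh -"
}
```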
One strange thing is that the OVA file really limits the amount of RAM and the disk is small too. There is a belief that we need many VMs with limited resources each. Well, this is just not how it's supposed to be put together.

I had to expand the disk with these instructions.

As I'm writing this I've shutdown the machine and doubled the disk and ram.

  • check photon updates: tdnf updateinfo info
  • photon update: tdnf update -y

Other tools:
  • tdnf install -y awk
  • add an existing kubernetes cluster to gitlab (doc)

Compared to k3s, docker swarm has much less cruft.
  • tdnf install -y git
  • get the worker token from the leader: docker swarm join-token worker
  • check the docker service: systemctl status docker
  • start docker: systemctl start docker
  • restart docker: systemctl restart docker
  • join the swarm: docker swarm join --token <token_goes_here>
  • check the swarm inventory: docker node ls
  • add labels if necessary: docker node update --label-add type=queue worker1
I've said this about k3s before: it's complicated. The docker swarm setup did not need any iptables changes; it used most of the stuff that was already there. The swarm deploy and container deploy are pretty simple. It's still just simple.
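Once a node carries the `type=queue` label from the list above, a service can be pinned to it. This is only a sketch; the service name and image are hypothetical examples, and DOCKER=echo previews the command without a live swarm:

```shell
#!/bin/sh
# Sketch: deploy a service pinned to the nodes labeled earlier with
# `docker node update --label-add type=queue worker1`. The image and
# service name are hypothetical; set DOCKER=echo to preview.
DOCKER="${DOCKER:-docker}"
deploy_queue_worker() {
    $DOCKER service create \
        --name queue-worker \
        --replicas 2 \
        --constraint 'node.labels.type == queue' \
        redis:alpine
}
```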

Orchestration through simple tools

When I hit my 10,000 hours I realized that the basis for my success as a programmer was my sheer laziness. It's that laziness that became my mantra long before the agile manifesto. I'm only putting together a list now, so it will need some refinement.
  • Know when to stop. There are so many good reasons to stop. My favorite is that the longer you delay the more likely it is that the customer is going to ask for something else. In reality, a 30 minute delay to color a report header on a one-time report is a waste when it could be done in seconds by hand; especially when time is critical.
  • Tools that are stable, terse and simple are the most productive even if you have to create a DSL of your own. This way the minutia is separated from the work. So much easier to debug and easy to be fast.
So I get severely peeved when developers design tools like git. It's just too damn expensive from multiple points of view.

which OS for docker

OMG what a pain in the ISO!!!
I have been plowing through nearly a dozen linux distros to figure out what my DR plan might be if and when Rancher decides to discontinue RancherOS. I like RancherOS for a number of reasons... the first is that it plays nice with docker-machine and so I can deploy the ISO just about anywhere docker-machine will let me. I also get immediate satisfaction of being able to privately ssh into the VM and complete whatever configuration I want.

Back in the olden days I might create a guest on vmware workstation and publish the template. This is a pain in many ways... I'm looking for the OS instance that is closest to what I need.

So as I hacked my way through the many distros I found that:

- some wanted me to press enter to get past grub and select the INSTALL option
- some had a GUI installer
- some booted to root@shell but either there was no default password, sshd was not started, or sshd did not permit root login

RancherOS works the way you expect especially since docker-machine does not let you do a lot of things for deploying like cloud_init.

The backup system is going to be k3os. I'm not a particular fan of k3s or k8s, and that's not a secret, however, the rancher team is backing k3os with a little more gusto than RancherOS.

linux distros that matter...

Distrowatch is always a good place to check out the latest linux distros but unfortunately it's not as simple as it once was. Now we hear about evildoers attacking from the heart of the Open Source and Free community. Just because I'm always on the lookout, I stick to the major brands, and that comes with its own price.

Red Hat
- I wish Google had their own distro in addition to ChromeOS

In recent months some have been deprecated

And there are some I just won't touch for now
microk8s (not sure that is a complete distro)

Each has its own pros and cons. The most compelling right now is RancherOS. I like that I can install it from docker-machine and effect all sorts of configuration too.

I'm abandoning this post as I lost the train of thought after letting it sit too long.

Sunday, October 6, 2019

how small is big enough

Just as I was clearing the cobwebs and I was resolved about kubernetes vs swarm, I find myself asking the other question: why bother? If your business or service is small enough then why swarm the system at all? If your business fits on one server then why build out a cluster? Backups notwithstanding, the more machines you have the more likely there is to be a failure that is going to require resources to repair. But a properly designed system could be deployed and made safe with minimal hardware.

Frankly there is no reason for a cluster of Raspberry Pi computers, except maybe a Tru64, so long as there is a NAS or iSCSI. The thing is, there is a lot that can go wrong, especially the routers, and depending on the volume and capacity a failure could be considerable. I recall the hardware we bought in order to build early giftcard systems. A giftcard or credit/debit card transaction takes so little work that these commodity machines would do the job. Frankly I could make an argument for a rack of laptops simply due to their batteries. The system got more complicated and expensive the bigger it got and the more systems we added.

I keep forgetting about a proof of concept I worked on after leaving an employer. I built a system that could perform 10-20x the transaction volume and cost 1%, yes one percent, of what we spent in production. Other similar systems were even less expensive.

Saturday, October 5, 2019

it's not the enterprise stupid

I've been rehashing the git vs fossil argument and then thinking about the sqlite vs anydb and I'm thinking about all things NFS, GFS, CEPH, container persistent storage.... argh. All of these things are nice problems to have when you have these problems.

The reality is that the simplest solutions are best. They take minimal effort to build, test, deploy, and, more importantly, fix. Keep the risk down unless you have bodies to spare. The name of the game is to produce output, not repair the prior art.

UPDATE... the subtitle of this article should have been "a distributed system when one will do". In basic statistics there is a proof that shows that adding a dependency to a system makes it twice as likely to fail. So let's take the example of a client application and a remote database server. If that DB was on a second machine the system would be twice as likely to fail. Add a second DB server with replication and the failure rate is still higher than the one machine alone.
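The arithmetic behind that claim can be checked with a one-liner (the p=0.01 failure probability is an illustrative number, not a measurement): a system that depends on n independent components, each failing with probability p, fails with probability 1-(1-p)^n, which is roughly n times p for small p.

```shell
#!/bin/sh
# If every one of n independent components must be up, and each fails
# with probability p, the system fails with probability 1-(1-p)^n.
# p=0.01 is illustrative only.
sysfail() { awk -v p="$1" -v n="$2" 'BEGIN { printf "%.4f\n", 1-(1-p)^n }'; }

sysfail 0.01 1   # one machine alone:           0.0100
sysfail 0.01 2   # app plus remote DB:          0.0199 (about twice as likely)
sysfail 0.01 3   # plus replication machinery:  0.0297 (higher still)
```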

I think I have my answer...

Do you need a college degree to be a programmer?

The short answer is NO. Over the years I have had a number of coworkers who were actually very good programmers but did not have degrees. One in particular worked for Microsoft and wrote hardware device drivers. Another acquaintance merely formatted reports and wrote simple mainframe JCL.

The fact of the matter is that a majority of programming work is trivial and akin to laying brick after brick. Only a small minority of programmers make decisions like implementing algorithms or computing high level math functions... It's mostly mundane and ordinary.

In 35+ years as a professional programmer there was only one year (30 years ago) when I had to use calculus. Since then it's been just block and tackle laying bricks.

Friday, October 4, 2019

not; git vs fossil

I do not imagine that google will know what to do with my title but it's not about git or fossil. I've referenced this article a couple of times already. The point being that fossil is robust enough and complete enough for normal projects, where git is meant for large projects with large teams. Let's be clear: fossil is a complete version control system. It contains a wiki and a ticket system; neither of which is going to win any UX awards, but there is some real crap out there. You might also say it's weak in its support for golang but even that has a workaround or two.

So why on earth would I want to use VSCode when I can use Atom; why would I want to use Android Studio, XCode, or Visual Studio when I can code in Turbo Pascal (Pascal notwithstanding)?

So as I struggle with the likes of kubernetes(k8s), k3s, k3os, microk8s; why would I use anything other than Swarm.

The nice thing about swarm is that I can network my swarm devices anywhere I can reach out.... Who needs istio or that other complicated crap! I can put leaders and nodes anywhere. They can take on any kind of workload. The orchestration can be as simple or as complicated as you want. And it just works. The best part is that if and when the cluster crashes it can be restored. It's tedious but can be done.

I'm looking at the projects I'm working on and I'm tempted by the shiny bits in kubernetes but it's clearly a waste of resources.

The funny thing is that most people don't need the complexity either. They just do not have the willpower to move past it.

Docker registry is it safe?

I wrote an article in 2015 that touched on these concerns. In the last year we have seen the docker public registry get compromised. Unfortunately the reporting was a little light on details. I'm not sure what's going on now because I'm not able to pull from the registry unless I login.

Thursday, October 3, 2019

The flutter catastrophe

So things are going badly and there are plenty of solutions... sadly nothing that makes flutter a better project. They certainly do not release fast enough or, in my opinion, focus on the useful features that create a better user experience....

"No devices available" (iOS device on a Macbook Air)

This one was one of the first BIG problems that I fixed. In my case I had a brand new laptop and I deployed flutter according to the sensible instructions. Flutter installs with a number of tools in a cache folder. There were some useful additional tools to install like idevice_id and ideviceinfo that are needed by flutter and they can be overridden with homebrew.

To diagnose this problem... flutter doctor -v --bug-report

Extract the zip file, try to locate programs like idevice_id and ideviceinfo, and then try to execute them. If you receive a dyld-type error, or something that looks like it could not locate the OSX equivalent of a missing DLL, then at the very least you can delete the flutter tools folder and reinstall. Alternatively, and not verified, I have been told that deleting the bin/cache folder forces the flutter tool to reinstall the apps.
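A sketch of that diagnosis as a script. The cache path and the exit-code heuristic are my assumptions, not documented flutter behavior: a binary that cannot even start (missing library, dyld failure) typically reports an exit status of 126 or higher, while a binary that runs and merely complains exits lower.

```shell
#!/bin/sh
# Sketch: probe flutter's cached iOS helper tools. CACHE is an assumed
# default location; point it at whatever `flutter doctor -v` reports.
check_tool() {   # $1 = path to a binary; prints "ok:" or "broken:"
    if "$1" >/dev/null 2>&1 || [ $? -lt 126 ]; then
        echo "ok: $1"          # it ran, even if it exited nonzero
    else
        echo "broken: $1"      # >=126: failed to launch (dyld/missing-lib trouble)
    fi
}

CACHE="${CACHE:-$HOME/flutter/bin/cache}"
for t in idevice_id ideviceinfo; do
    bin=$(find "$CACHE" -name "$t" -type f 2>/dev/null | head -1)
    if [ -n "$bin" ]; then check_tool "$bin"; else echo "missing: $t"; fi
done
```

Anything flagged broken suggests deleting the flutter tools folder (or bin/cache) and reinstalling, as above.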

Something about "cocoapods ... runner ... target ... customer"

After running pod install I received a warning at the end. It's not very clear, however, in the ios folder there is a file Podfile that has a number of configs in it. Much of this information is static so be careful and save or preserve your work as the flutter tools may overwrite the default or computed config.

This particular error is because the Podfile contains a snippet at the top that sets the target. (debug, release, etc). But it also happens that xcode sets the values and so pod thinks it's a custom configuration with an override and so you get the warning.

To correct it, cd to the correct folder and just launch xcode. "runner" -> "info" -> "project" -> "runner"... in the configuration tab set configurations to NONE.

Then rerun the pod install and the error should be gone.


This is a tough one because [a] it's ruby, [b] there are little or no docs, [c] the flutter tools seem to do some overwriting.

unrelated... uncomment this line and set the version: platform :ios, '9.0'

It's unclear where you gotta set the SWIFT_VERSION... put this one at the top:
   ENV['SWIFT_VERSION'] = '4.1'
and at the bottom in the post_install
   config.build_settings['SWIFT_VERSION'] = '4.1'

(I tried 5.1 but that did not work, 4.1 worked... need more documentation on that too)

A shit-ton of failures

One thing I have not gotten used to is that this massive number of errors was caused by runner -> target -> runner -> signing... you probably need to set the cert or the team... or the password.

warning: non-portable path to file '<protobuf/Any.pbobjc.h>'; specified path differs in case from file name on disk [-Wnonportable-include-path]
     #import <Protobuf/Any.pbobjc.h>

another fix -- legacy build: FILE -> WORKSPACE SETTINGS -> BUILD SYSTEM

Application crashes upon start
I have not finished this one... but I'm pretty certain the firebase licensing is failing. The documentation says you download and copy the file, but the firebase doc takes the POV that you are an iPhone developer and not a flutter developer. They want you to copy the file to runner/runner but that reference does not exist anyplace except XCode. And XCode has a function in the project viewer to add a file to a project. That add-file-to-folder step is the trick. With the file in the "project" the build should assemble something useful.

Application crashes on firebase map
The application crashes and the connection to the mac is also lost. From that point forward the app will restart but will not reconnect to the mac/IDE. Furthermore, trying to re-build and re-attach all failed and then the IDE became unstable. I had to flutter upgrade, then flutter clean, then flutter pub get.

version control
So you think you are going to share code between projects, or maybe that you can change the version control project.... forget it. There are too many artifacts that have the folder names baked in and no way to "relocate" the project.


flutter create -i swift
flutter clean
flutter upgrade
flutter pub get
flutter build

pod cache clean --all
pod install
pod update
sudo gem uninstall cocoapods
sudo gem install cocoapods

How much virtual ram

I'm trying to create a guest VM with 32GB of ram and vmware seems to be rejecting the request. That begs the question: how much is ideal?

It makes sense that VMware is meant to manage tons of small virtual instances on large hardware. Certainly if you need a lot of resources then baremetal might be the answer. Conversely it makes sense to have vmware be the admin backplane to the infrastructure, as most baremetal servers rely on PXE and that has its limitations too. Having APIs to build out the network, the ability to relocate failed systems, and the ability to mix pro and commodity hardware with a similar virtual appearance to the app.
