Wednesday, July 29, 2015

deleting folders on AWS-S3

If you read enough of the AWS documentation you'll know less than when you started. I'm not sure if it's me or just the vastness of the APIs and functions available.

I'm in the process of writing an uploader, and while I was tinkering I created a number of folders. Some I created manually in the web GUI and others through my software. One thing is certain: there is no recursive-delete function. There is also some vague limitation on the number of objects returned and how you might page through them.

** reading the documentation ... there is no actual folder structure. It's a virtual structure.

The bug I ran into was that the folders I created using the GUI could not be deleted. Not with the GUI and not with my app. It turns out that the GUI does not set the object's storage type, which my software does (STANDARD). Once I changed the type to STANDARD in the properties I was able to delete the folders.

** Just a note: the DeleteObject API did not return an error when it failed to delete the folder.

Tuesday, July 28, 2015

ASUS Flip is not for work

I've had my Flip for 2 days now and I've come to the conclusion that it's not for work. It is, however, passable for email, video, music and some social media.

My experiment

I had to take my car in for basic maintenance, so I took my ASUS Flip with me in order to get some work done. The dealer offered nice tables and some refreshments. They also offered wifi so I did not need to tether to my phone. (even though my phone is a Nexus 6, tethering is unreliable and the carrier has terrible service in my city)

I have a bug in my current project. I have a good idea where and why, so it should be pretty straightforward. The workflow is common to work I have performed in the past: review the logs.

As a programmer my development environments vary from project to project. In its most encumbered mode I use Visual Studio or Xcode, which would require some sort of remote desktop software. Regardless of whether I use RDP or VNC the outcome is the same: the Flip display is simply not big enough. Alternatively, I also spend a lot of time in the terminal or console of a remote machine. Sometimes I need multiple terminal sessions into the same and different servers. As a result, many times I'll use tmux or GNU Screen with mixed results, because the Chromebook or its terminal application seems to drop idle sessions, and because of the server config I might lose the entire session (remember to save often).

The wifi stuttered from time to time; this was probably the provider. The keyboard repeat and delay settings are still not satisfactory. The screen is small, and when on battery power the backlight is dim, promoting glare and resulting in eyestrain.

I'm not a touch typist but fairly proficient... I type as fast as I think... usually.  The keyboard is too small.


This is a well finished machine but it's not meant for work unless you have an external KVM. And for those times between marathon editing sessions the system is just fine as-is.  However, I did find myself wondering about a Surface 3 now that Windows 10 is around the corner. Shhh, don't tell my Mac friends.

Wifi can be unreliable. This is not a bug report against the Flip, but more a suggestion that I should have checked the wifi performance before making my purchase.

As a side note, ChromeOS' VPN tools are very limited. While simple enough for non-technical people, they are still not robust enough for the slightly more technical configurations that are becoming commonplace.

Monday, July 27, 2015

OpenTable abuse

A few months ago my wife and I went to a local restaurant by the name of Ireland's for dinner. Ireland's is one of the more desirable restaurants in Weston and we were hoping to have a very good time. I made the reservation using OpenTable. It was not the most desirable time slot, nor was it the reservation I wanted, but it would have to do.

When we arrived at the restaurant we were the first people to be seated. Yet trying to get reservations in and around the time that we had selected had not been possible. Either the restaurant or OpenTable was intentionally making things difficult, as a way of socially engineering demand for the restaurant.

Just a few moments ago I tried to make a reservation at another restaurant in Fort Lauderdale. The first reservation they had available was at 9:15. Since it is a Monday night, it is unlikely that they were actually out of reservations.

What is the point of using OpenTable if it is not used properly?

advanced golang json

This might not actually be advanced, but it is something about marshaling JSON that was really bothering me. The alternate article title might have been better:

"how to convert a query string to json?"

A legacy system I'm working with uses query strings to store and communicate data. There are no real complaints about that structure, as it served its purpose as a universal container which is easily parsed. But since I want to unmarshal that data into a golang structure, there had to be a better way.

My first thought was to strings.Split(s, "&") and then strings.Split(s[i], "="). While it works for my data, it is less than perfect. The first challenge is encoding. The second is all the special rules for embedding special chars (probably also encoding).

I fixed that problem by using url.ParseQuery().

Now that I had a map[string][]string I had to iterate over it and store the data into the struct's fields. The first attempt was to use reflection. That failed because each attribute has a slightly different case, and that boxed me in. But then I decided to convert the map[string][]string to a map[string]string, and that was pretty simple. Once in that format it is pretty simple to Marshal that map to a string and then Unmarshal it back into the actual structure.

qvalues, err := url.ParseQuery(p.origDataWrapper.OrigData)
if err == nil {
    vals := make(map[string]string)
    for k, v := range qvalues {
        if len(v) > 0 {
            vals[k] = v[0]
        }
    }
    buf, _ := json.Marshal(vals)
    p.err = json.Unmarshal(buf, p.origData)
}

There might be a way to improve that first for-loop.

Sunday, July 26, 2015

ASUS Flip Chromebook

This is still a work in progress but there are some notes that I have collected:
  • The keyboard is not anywhere close to full-size and it's considerably smaller than the 11" MacBook Air's
  • The keyboard travel is nice
  • The keyboard repeat and delay need to be tweaked
  • The trackpad click has too much travel
  • The trackpad needs to have its settings tweaked
  • The screen resolution is limited to 1440x900 although ASUS or Google recommend a slightly smaller resolution
  • The resolution of the touch screen seems accurate, however, I did have trouble with YouTube's scroller and the volume slider
  • The prongs on the power brick are fixed
  • The power connector works in either direction although it's custom
  • The bluetooth syncs nicely but has the same issues that I have with my other Chromebooks and Nexus 6
  • The Octane score is not close enough to the published values
  • Grafana produced some pleasant graphs
  • The stuttering mouse and audio has stopped (I still think I need to make some tools go away but I'm on the fence because these things make me happy and allow me to work everywhere; but since the keyboard is so much smaller... I dunno)
I'm unplugging the power right now and going to post this article. I'm hoping that the battery will be full or nearly full in the morning. Then I'm going to use it for various tasks tomorrow in order to determine the battery duration in my use-case. By comparison, my Pixel LS is rated for 12hrs; however, I get a lot less, maybe 5 or 6hrs. But then I have Bluetooth, Google Music, wifi, and a couple of dashboard monitors running while I have a couple of terminal sessions open.

*lights out* until tomorrow.

It's tomorrow. When I first opened and logged in, the battery setting said 6hrs, then it quickly went to 11:33hrs. Then I walked the dog, leaving the Flip open, and it showed 20:45. A few minutes later it was showing 11:45. That variance is a bit too high for my liking.
Another thing I noticed is that after waking the Flip this morning, it stuttered and sputtered until whatever processes were spiking managed to settle. Also, the Chrome browser instance I had left open the night before was complaining about needing to be restored. I do not know what happened and I do not really care, since my ChromeOS devices are meant to fail.

It didn't take long but the Smart Lock stopped working again. And when I tried to re-connect to my Nexus 6 I got a FAILED response. One strategy I tried, and it seemed to work, was disconnecting other bluetooth devices like headphones (from the Flip) instead of just locking the user account. Then when I tried to log in it seemed to work more reliably. However, it is a pain in the ass to do that every time, especially since I switch between 5 different machines.

The built-in speakers are OK. They are a little tinny but the midrange is still clear. With a machine this small that is as I expected. When I used my H800 headphones the audio quality was as I expected. The audio can be a little muted when the Flip is operated on my lap.

In fact, when operating the Flip on my lap it's a little uncomfortable, as the width of the device means holding an awkward position in order to prevent it from hitting the floor. I tried the Flip in tablet mode as a "reader" and it was not comfortable. The weight is significant, so it is not a Kindle replacement. The reach required from the bottom to the top of the display is going to tire my shoulders in addition to my wrist.

I don't think I want to return this flip. It seems well put together and in the worst case I'll give it to my kids when we travel. But I certainly won't buy a second unless my kids have some better success.

This review is completely subjective and based on my personal needs, expectations and use-cases. But it also comes with 35 years of experience on everything from mainframes to PCs and some specialized hardware too.

UPDATE: Here is the kicker. The person who took the picture below must have wanted to disguise the dimensions of the actual screen [I stand corrected. I believed that they were stock photos, however, the effect is the same]. The dimensions of the screen appear to be square when it's clearly rectangular. Even in the 1440x900 mode the number of vertical pixels is still limiting because the browser's controls always take up the top 10-15% of the screen. Getting work done on the Flip feels like my MacBook Air. One thing I cannot do on my MBA is use Xcode. There are other IDEs like Cloud9, Nitrous, Codenvy, and a few others that just need more screen.

Here is a picture of my ASUS wireless keyboard, MacBook Air and ASUS flip keyboards. There is a noticeable difference.

Someone asked about TheVerge and Google+. Both perform OK; there is nothing exceptional here. TheVerge loads a template with some incomplete graphic shapes which it fills in after the page has loaded. During the initial load things are snappy, but once TheVerge starts loading the images things get sticky. Once all of the images are loaded it becomes responsive again. For comparison I tried my Pixel LS and I noticed the same hesitation when the page loaded, but it did not last as long.

The same can be said for Google+. It can also be sticky, but there are fewer signals to the user that something is going on. Back in the day we'd get spinning beach balls or hourglasses; now things are left to the user to figure out. I suppose that would be OK if the pages were responsive to the UX standard of 200-400ms per page load. Beach balls are bad; sticky is worse.

UPDATE: My mini-HDMI cable arrived. It was a challenge to get it plugged in. I'm not sure if that was because it was a new computer or a new adapter. The receptacle on the Flip is on a rounded edge, so you don't really know the right angle for the connector. As for the display, it handled my 1920x1200 monitor nicely. I did not linger too long, but I did notice that the position of the receptacle could be annoying after prolonged use if I did not adapt.

Saturday, July 25, 2015

US Election, democracy, socialism, healthcare and your cell phone service

In the United States we claim to be a pure democracy when that is clearly not the truth, given "Too Big to Fail" as an example of corporate socialism. The IRS and the government are not going to help you, so why should they help business? Maybe corporations and Wall Street should act a little more conservatively and for the long term rather than for instant gratification. (there is plenty of research on the subject... even a 60 Minutes segment with 1st graders)

Any time you go to the doctor you sign all this paperwork agreeing to the cost of whatever procedure or service you are there for; however, they cannot tell you how much it's going to cost. But you have to agree. When was the last time you went to McDonald's, ordered a burger, and were then presented with a bill for the food, then on the way out the door another bill for using a table, and then another for discarding your trash? The last time I was in the hospital with my kids, everyone who stopped by sent us a bill. It was assumed that because we had insurance we could pay every deductible or copay. Had the hospital presented one bill there would have been one deductible and one copay. In another example, we were transitioning from one insurance company to another and had not yet elected COBRA. When presented with an "uninsured/cash" option that cost a very small fraction of the insured copay, we made that choice. Since then we have received the bill for the full insured cost.

The cell phone market is heating up. It started a few years ago when T-Mobile started converting customers for free and assuming the balance on any phone purchase. Things are heating up again as prices are starting to drop. Granted, programs like NEXT are meant to jack the cost up again and make subscribers sticky... but the latest price war seems to be starting: XXGB + unlimited data and no contract for $50/mo. The "no contract" option is a little marketing misdirection because (a) the company charges setup fees every time you change carriers, (b) you will likely have to buy a new phone unless you already have an unlocked phone, and (c) unless your phone is paid off you'll have to pay that off too.

Call it what you want. Things should be fair and based on truth. This is where government should be stepping in.

** I recently wrote about the $12,000 Apple Watch. Normal markup is 50% but I believe Apple's markup is somewhere between 60-75%. That means for every $12K watch that Apple sells there is a $6,000-$9,000 profit. And it's no more complicated to sell one of those watches than any of their others.

Apple has gone off the reservation

I was looking for a fitbit for my wife. The model she wanted retails for $150, and out of curiosity I decided to check the Apple Watch. The cheapest Watch is $350. Out of curiosity I decided to look around and that's when my jaw hit the floor. Apple is selling a watch for $12,000. They are out of their minds, but then again maybe not. The people who are crazy are the ones who buy them. There isn't a bank account big enough to make me feel less guilty about buying one of these. I just cannot imagine what the marketing department is thinking. Who is their demographic here? Rockstars? Would-be rockstars? There is nothing fashionable or timeless about an Apple Watch.

Thursday, July 23, 2015

Moore's Law and the Chromebook

Moore's Law as summarized:
"Moore's law" is the observation that the number of transistors in a dense integrated circuit has doubled approximately every two years.  ... and projected this rate of growth would continue for at least another decade. (wikipedia)
I find it easy to believe that Moore's Law was more of a project or marketing plan than an engineering prediction. Not that doubling the transistor count is an easy feat, but it's more about economics. For all we know, Intel might have had the capability to quadruple transistor density; however, the cost would have kept chip prices too high, upsetting the economics.

I recall using the DEC Alpha in the same form factor as the PC. They were screamers. I presume it was part manufacturing, part density, and mostly its mainframe/mini heritage. Sadly, the Alpha was acquired and then discarded.

In the meantime we continue to see a number of chip vendors working on chips that are "good enough": ARM and RockChip. We also see a number of vendors, possibly taking lessons from One Laptop per Child, manufacturing Chromebooks, Chromeboxes, Chromebases and Chromebits. For all but the most premium devices, these machines are relegated to using older or commodity chips, memory and SSDs.

While the PC and Mac markets are running wild on the tip of the price spear with the latest chips, the ChromeOS hardware market is simply consuming the cast-offs. Eventually the ChromeOS market is going to fracture under the stress. PC and Mac manufacturers want to preserve their margins. ChromeOS manufacturers are going to see premium brands enter the market with a need to better their margins, and then soon enough the pirates are going to enter and try to capture the bottom of the market. (we already see Intel/Microsoft entering the stick-computer space, and now Microsoft is working on the CloudComputer)

And the exceptions or indicators:
  • HP Stream (appears to be positioned as an RDP console)
  • EeeBook (FAIL)
  • netbooks in general (FAIL)
  • RockChip (link) described as an SoC based on the ARM processor
  • Apple buys chip manufacturer (link)
  • look at the number of school systems that are purchasing ChromeOS based devices
BTW: I'm not touching the OS war here although I have a very strong opinion.

Wednesday, July 22, 2015

Example SFlow

Previously I mentioned a project, execon, which I'm planning to rename SFlow. However, one of the missing pieces for the casual reader is: what's it all about?


BEFORE:

func DeleteMerchInfoHandlerOld(w http.ResponseWriter, r *http.Request) (int, error) {
        if r.Method != "DELETE" {
                return http.StatusMethodNotAllowed, errInvalidMethod
        }
        payloadValue := r.FormValue("payload")
        if payloadValue == "" {
                return http.StatusBadRequest, errExpectedEmpty
        }
        request := mdata.RemoteRequest{}
        if err := json.Unmarshal([]byte(payloadValue), &request); err != nil {
                return http.StatusBadRequest, err
        }
        merchInfo := mdata.MerchInfo{}
        if err := transcode.Transform(&merchInfo, request); err != nil {
                return http.StatusBadRequest, err
        }
        if found := merchinfo.Dirty(merchInfo); found == false {
                return http.StatusBadRequest, errMerchNotFound
        }
        return http.StatusOK, nil
}


AFTER:

type DeleteMerchInfo struct {
        Init                 execon.ExeConTask `params:"FIRST"`
        MustDeleteMethod     execon.ExeConTask
        PayloadNotEmpty      execon.ExeConTask
        ParsePayload         execon.ExeConTask
        CastPayloadMerchInfo execon.ExeConTask
        SetMerchInfoDirty    execon.ExeConTask `switch:"{\"HttpError\":\"Finish\"}"`
        Finish               execon.ExeConTask
        Error                execon.ExeConTask
        HttpError            execon.ExeConTask
}

I don't want to spoon feed the reader with all of the gory details but there were a bunch of things that I learned in the process of converting this workflow.

  1. The AFTER is so much easier to read
  2. The AFTER is so much easier to extend
  3. The AFTER takes advantage of shared code library
  4. The AFTER can actually be assembled, composed or generated from textual fragments or gists
  5. The BEFORE did not have enough documentation
  6. The BEFORE did not consider some other options
  7. The BEFORE required the user to interpret the summary description
As a side effect I found a number of test cases that I had missed. There were a few test cases that were broken and needed fixing. And in one case I had missed a use-case that needed to be tested. And while I was working on this particular "switch" (above) I found that I had missed a case in the framework implementation. (for another day; no one is perfect)

Tuesday, July 21, 2015

Can GB be my friend?

Vendoring with golang's tools sucks.
  • golang does not support enough D/VCS systems
  • GOROOT is now computed
  • GOPATH was once expected to be a single path but then someone added multi-path with the first path being the global vendor folder
  • in order to update with 'go get' you need to include the -u flag
  • when you use 'go get' the code includes whatever submodule info like .git, .svn, .hg etc
There, I said it and I backed it up.

There are a few choices for vendoring projects in Go: godep, nut, and the latest, gb. I do not know anything about nut; however, godep works with the standard go tools. All you have to do is update GOPATH. On the other hand, gb is a complete departure. Going the gb route means breaking backward compatibility with the standard tools. Since I'm new to gb I just do not know if it's a worthy solution.

The problem is not really obvious until you have multiple projects. With each project you have to change your GOPATH so that the local project's vendor files are used and not some other project's. And it's not appropriate to share vendor files between projects as a matter of course.

There are a number of possible solutions that are user-based:
  • keep your projects in separate containers
  • use a "select project" type script to update the environment
  • use a batch script for all go commands
  • modify the path and replace the go tool with your own per project and construct the environment
  • Makefile
** some of these are the same, only with a different tool

Make is clearly still the solution to the problem. The go tools cannot be all things to all people, and that is clear here. Sadly, I wish the go tools could do just a little more in this space. Not all of the golang authors dislike make, and it remains a pretty good fallback.

I'm not sure I want to dump the go tools. If gb fails to impress I could always construct the environment with my own tools. And then I'm right where I started, only with someone else's idea of what the ideal project layout is.

UPDATE: one of the biggest issues with gb is that it does not work with the various vim or emacs packages. Things like goimports have no idea what to do when GOPATH is not set.

FAIL: I have a use-case where my application is a huge template of Go code. The template contents and the main.go file are resolved from the command line. For example, my .sh file fills the environment with SQL fragments, which I call a catalog. The build process takes a named root variable and resolves the rest. Then it inserts all the values into the template, as text/template does, and produces the new main.go. First of all... this is a preprocessor step and gb does not support that. Second, gb supports multiple main()s with branched folders in the main (cmd-ish) folder, with one main per leaf folder. Sadly, generate could not find the proper artifacts to build and in fact broke the path. Face it: gb is the preprocessor/wrapper that the go toolchain intended to subvert, and now it's being wrapped again, except that it (a) cannot unwrap the default tools, (b) is not a hybrid, (c) is not idiomatic, and I cannot wrap it again as I would the standard go tools.

Monday, July 20, 2015

renaming execon to SFlow

I want to change the name of this project from execon to SFlow, which stands for Structured Flow: Flow Based Programming where the graph is defined as a Go structure.

Sunday, July 19, 2015

Let It Crash

I've developed some interesting systems in Erlang. They were fun and interesting projects. I also find it interesting that Joe Armstrong is such an amazing advocate; clearly he has over 26 years dedicated to the project. A good number of people would appear as deer in headlights if one day Joe decided he got it wrong and that BASIC was a better choice.
However, I liked a number of things he had to say and they make perfect sense [paraphrased].
  • let it crash
  • do not program abnormality
  • the set of extra things is enormous
  • that set is called defensive programming
  • let the crash be observable and then make it an issue to be worked
So the question is how can this be applied to other languages... golang in particular?

UPDATE - "clean Erlang code" might just be full of shit. I just spent an hour trying to clean some golang code using the same principles as the Erlang demo, and I find that trying to create one function per line of code just does not feel natural in golang; it feels less than natural for Erlang too. Granted, there is room to clean my code, as there is too much nesting, but there is no way I can achieve the sort of reduction the presenter was suggesting.

Failed CoreOS Services

I'm not going to describe how fleetd and systemd work together. That's better researched on the CoreOS site. But I am going to describe a condition that I often find myself in. My rig is like this:
  • I have a Chromebook running "terminal"
  • my remote CoreOS servers are typically at Google Compute Engine
  • and I use a blend of tools installed in a dedicated docker container
However, from time to time when I log into my CoreOS instance I get this crazy error message about some "failed unit". After a quick investigation:
  • systemctl status
  • systemctl list-units
  • systemctl --failed
I determined that my ssh session had terminated and left some breadcrumbs behind. I'm not exactly sure why the session died, but it is common. There must be some idle timer on the chrome-terminal application that I have not configured properly, because when I have a similar session open on my MacBook it remains open longer.

On one occasion I noticed that I had multiple failed units and I was never able to determine what happened. After some additional systemd research I was able to find a way to remove the symptoms.
  • systemctl stop sshd@60338-x.y.z.a:22-a.b.c.d:54566.service
  • systemctl unload sshd@60338-x.y.z.a:22-a.b.c.d:54566.service
  • systemctl reset-failed sshd@60338-x.y.z.a:22-a.b.c.d:54566.service
I'm fairly certain it was the last one that actually cleared the dead unit. The other two did not seem to do anything meaningful. Also, I needed to prefix the commands with 'sudo'.

systemctl --failed | grep "service loaded" | sed -e 's/^.*\(sshd.*service\).*$/\1/' | xargs -n 1 -I {} sudo systemctl stop {}

systemctl --failed | grep "service loaded" | sed -e 's/^.*\(sshd.*service\).*$/\1/' | xargs -n 1 -I {} sudo systemctl reset-failed {}

UPDATE: In the previous UPDATE I wrote two one-liners. They both work, and that's all you need. However, while testing the results I executed the two scripts from the clipboard; then I logged off and reconnected very quickly. I noticed that CoreOS' motd still displayed the 10 failed units I had previously cleaned. I tried the root command 'systemctl --failed' and it returned '0 failed'. I then logged out and back in (a second time) and the failed units were no longer displayed in the motd. I must have been too fast on the keyboard when I sent the initial cleanup one-liners.

Saturday, July 18, 2015

Docker registry; is it safe?

I make the assertion that Docker's public registry is not safe and I offer "nijtmans" as an example. I was looking to deploy fossil in a docker container but I was too lazy to build my own "scratch" container from scratch. Since I had just installed bosun and grafana from their "trusted" images I felt good about looking for a fossil version. Sorry, FAIL.
  • A docker registry search for "fossil" yielded some 5 images.
  • The first image was 8 months old and makes the claim that it was forked from nijtmans
  • I noticed that nijtmans is not trusted with the docker registry (no badge)
  • The former image included its Dockerfile so I could fork it if I wanted
  • The latter, nijtmans, did not offer any good documentation and it was missing the Dockerfile
  • I decided to try to track the project down and looked for the author on github; sadly he only had the one project
  • when I looked in his repo I could not locate the Dockerfile and the README was unflattering
I do not know anything about this guy. I have no idea what his motives are or what the source looks like. I appreciate that he has shared, but when it comes to putting something on my server it has to have something, anything.

From this vantage point nijtmans and his project are suspicious.

Thursday, July 16, 2015

influxdb, telegraf, chronograf

I've nearly completed my FLOW based SQL report generator. I'm pretty certain the last feature I want to implement is going to be monitoring. Since the program is written in Go and is launched at 0430 UTC every day, I want to capture the runtimes as part of system monitoring. I also want to monitor the system that it's running on.

While I need to add go-metrics, I also need a place to persist the data. InfluxDB is the new meme time-series DB engine on the block. It's also written in Go. While Grafana is a good dashboard (also Go), the InfluxDB team has released Chronograf. And finally the InfluxDB team released Telegraf in order to collect data from the target system... written in Go. (there are a few others like statsd, collectd; and then there is cAdvisor, Bosun, Graphite, rrdtool and maybe a few others)

While Go is also the meme language of choice for most systems programmers these days, the meme is not what generated my interest, as I have been using and following it since pre-1.0. What makes this a good language is not the language itself; it is the toolset, the static linking and the cross compiling. The rest is important but not as much.

For something like this I like the single source monitoring since they all work together; somewhat homogeneously.

And now for the bad stuff. The InfluxDB team has not produced a trusted and idiomatic docker container. The same goes for Telegraf and Chronograf. *sigh*. Maybe the documentation will catch up, but it hasn't yet. One of the superior things about Go's statically linked applications is that the docker containers can be as simple as a scratch image. And that is a good place to be.

Saturday, July 11, 2015

Docker says what?

I'm trying to bring CoreOS, Docker and possibly Rancher into my work environment. I completely understand the risk associated with deploying alpha and beta level code. In this case both CoreOS and Docker appear to be stable. Rancher might be the weak link, however, since it's just being used to access the registry, deploy the images and connect the sidekicks and a few minor services... I'm not concerned.  Everything can be overridden from the Docker command line.

I started to put together some notes in order to deploy a 3-node cluster on my MacBook. I present them here. Note that they are high-level and sometimes infuriating.

[7/11/15, 4:42:36 PM] Richard Bucker: I hate to say it but for the purpose of the next play date I am installing virtualbox and vagrant.  Just because I have to in order to kick things off.
[7/11/15, 4:49:41 PM] Richard Bucker: [1] install virtualbox EASY
[7/11/15, 4:49:49 PM] Richard Bucker: [2] install vagrant EASY
[7/11/15, 4:50:06 PM] Richard Bucker: [3] install a git client EASY
[7/11/15, 4:50:39 PM] Richard Bucker: [4] clone
[7/11/15, 4:50:54 PM] Richard Bucker: [5] copy config file and edit
[7/11/15, 4:51:04 PM] Richard Bucker: [6] copy user-data file and edit
[7/11/15, 4:51:16 PM] Richard Bucker: [7] vagrant up
[7/11/15, 4:51:54 PM] Richard Bucker: [8] vagrant ssh core-01
[7/11/15, 4:52:37 PM] Richard Bucker: [9] install docker client on your client
[7/11/15, 4:53:35 PM] Richard Bucker: [10] export the DOCKER_HOST env variable - see the config or user-data. One or the other had the values. Now you can send commands in from the host OS, although it's not necessary.
[7/11/15, 4:54:21 PM] Richard Bucker: [11] ssh into the master: vagrant ssh core-01
[7/11/15, 4:55:34 PM] Richard Bucker: ** get comfortable with some CLI commands.  journalctl, systemctl, etcd, fleetd
[7/11/15, 4:57:02 PM] Richard Bucker: ** rancher is here
[7/11/15, 4:57:31 PM] Richard Bucker: [12] install the rancher master:  docker run -d --restart=always -p 8080:8080 rancher/server
[7/11/15, 5:04:34 PM] Richard Bucker: ** assuming that the previous docker command succeeds.... here are some docker commands.
[7/11/15, 5:04:37 PM] Richard Bucker: docker ps
[7/11/15, 5:04:45 PM] Richard Bucker: docker ps -a
[7/11/15, 5:04:48 PM] Richard Bucker: docker images
[7/11/15, 5:06:26 PM] Richard Bucker: [13] launch your browser on your pc, get the ip address of core-01, and then put this address in your browser:  http://<ip address>:8080
[7/11/15, 5:06:38 PM] Richard Bucker: you should see the rancher server
[7/11/15, 5:07:12 PM] Richard Bucker: [14] follow the add your first host wizard
[7/11/15, 5:07:32 PM] Richard Bucker: [15] save the ip address, click on custom
[7/11/15, 5:08:09 PM] Richard Bucker: then copy the string below and paste it into the master and the two slaves.  This means you'll have 3 hosts for rancher
[7/11/15, 5:08:35 PM] Richard Bucker: (you do not need the SUDO)
[7/11/15, 5:10:50 PM] Richard Bucker: ** I opened one ssh terminal into each of the 3 slaves and pasted the docker command... they started to install another docker container from the registry.... it's going to take a few min.
[7/11/15, 5:16:20 PM] Richard Bucker: ok, my slaves are finished. I ran a few "docker ps" commands in each window and there they are
[7/11/15, 5:16:54 PM] Richard Bucker: !!!!!!!!!!! my MacBook is suffering!!!!!!
[7/11/15, 5:17:20 PM] Richard Bucker: at the bottom of the web page there is a CLOSE button, click
[7/11/15, 5:20:25 PM] Richard Bucker: when the window closes you'll see that the master web page has 3 hosts on it.  Have fun with that and click around.
[7/11/15, 5:20:46 PM] Richard Bucker: eventually you'll have to click on the SERVICES tab
[7/11/15, 5:23:24 PM] Richard Bucker: and if not already configured you'll need to add a docker REGISTRY. Normally in an enterprise you'd have your own registry server and you'd populate your own docker images.
[7/11/15, 5:25:47 PM] Richard Bucker: In the meantime you might have to add the default public registry.  Then you can install applications, databases and other services.
[7/11/15, 5:26:35 PM] Richard Bucker: The registry is also where the development team would deploy their applications so that the OPS team can deploy them.... automated or not.
[7/11/15, 5:27:35 PM] Richard Bucker: Rancher allows the OPS team to partition the containers.... DEV, PROD, STAGING etc.... up to the user to name them
[7/11/15, 5:29:33 PM] Richard Bucker: when two or more containers have an inter-container link then rancher creates a "sidekick".  The sidekick is a special type of container that manages the connections and connection timing so that the containers can be launched in any order, leaving the discovery to the sidekick.
[7/11/15, 5:30:33 PM] Richard Bucker: recently rancher added a loadbalancer.  I have not used it but I think it's meant to handle transient services.
[7/11/15, 5:34:14 PM] Richard Bucker: **NOTE because the rancher config was performed manually they will not survive a reboot. They will have to be restarted. That's not going to work well because after the reboot there are going to be some docker breadcrumbs out there.  They need to be cleaned up between reboots.
[7/11/15, 5:34:55 PM] Richard Bucker: my system is junk because I allocated too much RAM per VM.
[7/11/15, 5:36:02 PM] Richard Bucker: The basic required building block for this cluster is going to be a dedicated registry and pxe server.  They could coexist on the same machine... but I'd like a HUGE chunk of disk.
[7/11/15, 5:40:56 PM] Richard Bucker: Docker containers..... Ask around these days and you'll get all of the worst practices you can imagine.  My favorite is when the DEVs use ubuntu or fedora for EVERY container.  My second favorite is when they use phusion's implementation.
[7/11/15, 5:44:36 PM] Richard Bucker: The best way to deploy an application in Docker is to make it a completely self-contained binary executable. This way the application is the only artifact in the container: [a] the application might get hacked but the underlying OS would not, because there isn't one; [b] it's impossible to attack a port that does not have a listener. The takeaway is that python, perl, ruby, etc. applications cannot run self-contained in a container without dragging in all of their runtime dependencies.
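The numbered notes above can be condensed into a rough command sketch. This assumes the coreos-vagrant project layout and its sample file names (`config.rb.sample`, `user-data.sample`); treat the URL and file names as assumptions, not gospel:

```shell
# [1-4] with virtualbox, vagrant and git installed, clone the CoreOS vagrant project (URL assumed)
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant

# [5-6] copy the sample configs and edit them; set $num_instances=3 for a 3-node cluster
cp config.rb.sample config.rb
cp user-data.sample user-data

# [7-8] bring the cluster up and ssh into the first node
vagrant up
vagrant ssh core-01

# [12] on core-01: start the rancher master
docker run -d --restart=always -p 8080:8080 rancher/server

# then browse to http://<core-01 ip>:8080 and follow the add-host wizard
```

The interesting part is that everything after `vagrant up` is just plain docker commands; rancher itself is only another container.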

The entry level environment is pretty big based on normal development standards. I hate to think I'm going to need a MacPro to get started. But on the other hand even that might not be a bad idea since it's so beefy.


UPDATE: While rancher is fun it might not be a good production environment until containers can fail gracefully, become sticky to survive reboots, and become highly available to permit deployment recovery elsewhere in the cluster. (See fleetd and cloud-config)

Tuesday, July 7, 2015

USB Type-A to USB Type-C

I read the line, "Power and data in one," on Google's page. I have a Pixel-2 with 2 USB-C receptacles. While the power adapter is not much bigger than the Apple 50 Watt MacBook power adapter, it is still a pain to carry. It makes for an uncomfortable bulge in my briefcase. And so the idea of using a much smaller USB power adapter to charge my Pixel was really appealing.

That it was good for data was the icing on the cake.

The accessory arrived today and I plugged it into the USB port on my MacBook and into one of the USB Type-C ports on my Pixel. No matter what combination of power and USB connectors I used, the Pixel always indicated that it was charging. That's good news.

However, while there was no detailed description of how data was going to be transferred between devices, I had high hopes that one device or the other was going to provide an automatic mount point to the other. Unfortunately nothing worked. I went back to the packaging and there were no instructions of any kind.

I'm going to link this post to the Chromebook Google+ site and see what info I get.

Sunday, July 5, 2015

Cannot watch the World Cup 2015

I was walking the dog about a half hour ago when I stopped at a neighbor's to chat. As we were wrapping things up another neighbor stepped into the street and alerted us to the fact that Team USA was up 4-1 at half time. I'm not exactly sure what the score was at the time because several search headlines painted a different picture.

Anyway, I started googling for sites where I could watch the game. Sure, I could have walked 20 feet into the living room and turned on the tube but I wanted to watch from my desk. Kinda selfish but that's the way it's evolving.

Once I managed to filter through 5 websites to one with a link to a viewing site (fox2go), it took quite a long time to load the page. In fact, over the next 20 minutes I reloaded fox2go nearly 15 times. Unfortunately, after the 5th or 6th time I was redirected to a login page, and I had to click on the "other provider" link so I could search for my service provider. Once I got to the "advanced cable" login screen I had to remember my username and password. Unfortunately it was not my regular account credentials; it was supposed to be my "view anywhere" credentials. I do not have those credentials yet.

Anyway, it's almost 30 minutes now. The second half of the game should be about 45 minutes... and if I'm lucky I might get to watch the last 5 minutes.

One thing I also noticed is that every single site I tried to access was experiencing some sort of slowdown. It's quite possible that the consumption of bandwidth in my community has saturated the network too... but that ship has long since sailed.

In a day when we are supposed to be in love with neutrality, sharing, and love; it's simply not that way. Corporate greed prevented me from watching the second half of the game. Nothing more.

Saturday, July 4, 2015

Signing off of Twitter

I like twitter as a source of casual information. The problem, however, is that it is becoming more and more commercial. Every dozen or so tweets I receive some sort of sponsored message. Many of these sponsored messages are in line with my interests, and from time to time I have clicked through; however, they are still a nuisance. Additionally, some of the sources that I get my news from include all sorts of nonsensical news. For example, one news station insists on including pictures from one of their sponsored models. It is so off topic that it no longer makes sense.

Recently I started informally monitoring my usage. What started off as reading while in the bathroom has turned into almost a full-time session. On the one hand, both the golang and docker projects are very informative; however, the percentage of posts that carry useful information is starting to drop. The reality is that if I do not start to curate my own newsfeeds I'm going to be distracted from my primary mission of developing my own software.
I sincerely miss the days of RSS feeds.

UPDATE: I've been off of twitter for 7 days now. It feels good.

Friday, July 3, 2015

A click is not a tap

There are a bunch of Windows users who think that a tap is a click. The tap is a trackpad gesture that converts a momentary tap on the trackpad into a mouse click. I suppose it's not a truly evil function and that for some users it makes a certain sense, but my challenge is remembering whether it's the default behavior on ChromeOS, my OSX or my Windows PCs. All I know is that with my latest upgrade and powerwash of my Chromebook the tap feature was enabled. AND it's annoying.

To disable the tap, do the following:

  • Settings
  • search : TAP
  • click on touchpad button
  • uncheck "Enable tap-to-click" 
and close the window. Your changes will be saved.

Keep in mind that if it does not work as expected, then repeat the steps just to make sure your selections have been recorded. If you cannot get ChromeOS to record your selection or the feature is not working as expected you might want to do a powerwash and try again.

Thursday, July 2, 2015

Skype on ChromeOS

This article hints that it's possible to download and install Skype for Android on my ChromeOS device. One thing that seems to be missing is whether or not it's going to support the different processors. Frankly, I'm not sure what the Android version requirements are, so it's, admittedly, a little FUD.

This lack of support is all the more reason to do something else. Hangouts is a good replacement; however, many corporate users are skittish. While Hangouts is feature complete it suffers from a number of challenges: (a) it's complicated, or at least the casual user is not going to record a session on their first attempt; (b) privacy, especially of recordings, is not clear, whereas Skype requires a 3rd party; (c) not everyone is using the Chrome browser or ChromeOS. It is simply not a ubiquitous solution.

iMessage is also an alternative, but it's Mac only and there are too many Windows users in the world and on my teams; since it's not cross platform it's not a possible solution. A Chrome solution would be easier; however, even I'm staring at a MacBook and a Chromebook wanting to consolidate.

One interesting challenge is that once you realize this is for business, everyone has their hand out. There are few freebies. Ad-supported is not the sort of thing you want to show a client, and facebook messaging is also not the way to go.

And the one key driver is privacy and security. No matter what your business is, there are some conversations for which 3rd party tubes are simply not ok.

Wednesday, July 1, 2015

$200 is the new $100

Ever since I was a kid I've noticed that the "cost of things" seemed to be stratified, meaning that if you looked at the average cost of things they seemed to float around some ideal price for the thing you or I might purchase. I recall a conversation with someone about why $12.99 was better than $13.00. There is clearly a psychology of pricing, and unless you're on the inside track you or I are never going to understand the who, what, where or why.

Recently there was a post on Google+ which begged the question of why the Chromebook manufacturers were not building machines that cost more than $200. While this is an exaggeration, (a) $200 seems to be a sweet spot and the new $100, and (b) instead of being some big conspiracy it might actually be fear that higher prices will cut into their profits.

One thing is for certain; $200 seems to be the new $100.  I have an ASUS Chromebox that is just not behaving properly. I originally purchased it to be my experimental system so that I could install things like Crouton without compromising my main system (a Pixel 2). But Crouton never installed, and then there are all the other problems, like Bluetooth. I've tried Powerwashing and resetting... but it's still not working as I would like. It could always be my Jawbone Mini Jambox; however, I'm just disgusted.

As for the $200.... I have no idea how or where to return it and it's not worth my effort at this point. I know it's $200, but by the time I box it (assuming I can find the original box) and drive to the post office I might have used up all of the return value, RMA or not. I'm also thinking that the problem is probably in the Jawbone because other similar devices seem to work better.

The point is... there was a time when I would return everything that cost over $100 and now that conversation has moved to over $200.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...