Sunday, March 23, 2014

Wireless for the last mile

In cabling parlance, the "last mile" is the final stretch of cable that must be installed before the end-user can receive whatever service the installer is providing.

Recently Comcast entered my neighborhood. They've been tunneling, trenching, and cabling, tearing up the local streets. I can't wait for Comcast to enter service, because their product is clearly superior to the current supplier's.

As they do this installation, I find myself wondering why they aren't deploying a Wi-Fi product or some sort of mesh so that they would not have to dig as much as they are. There are clear challenges with a Wi-Fi scenario, in that a single point of failure affects multiple customers.

RDP and VNC are dead

While ubiquitous bandwidth exists within the enterprise and even the local home network, it does not exist in the wild. Ubiquitous bandwidth was the promise of telco deregulation, but it never materialized. Even the more recent net neutrality push was supposed to support, in at least a modest way, the remote virtual desktop experience.

In reality the Internet is not reliable enough, and does not have the bandwidth necessary, for continuous virtual desktops to be practical. And of course let's not forget security and privacy.

As proof, one only needs to open a terminal session to a remote computer, even within the same city let alone the same country: typically those connections will drop or get choppy as network usage goes up. Telcos are scaling their networks based on average or mean network load, not on peak load. One normally sees a rise in latency when the kids come home from school at around 3 o'clock, a slight dip at dinner time around six, and then a pickup again around eight until about 11:30 or 12 o'clock.

Saturday, March 22, 2014

Your cruising obligation

If you're traveling on a cruise you have an obligation to bring alcohol in your suitcase, whether or not you will actually consume it yourself. The strategy is pretty simple: by bringing liquor aboard in your suitcase you create a unit of work for the security team, which has to search your bags. If there is liquor in everyone's bags, there is no way the security team can keep up with the searches. Therefore a good number of bags are going to make it through security and into your stateroom.

Sunday, March 16, 2014

Monolithic code trees in SCM or DVCS

I like the Google model of one trunk to rule them all... until I have to fork the code in order to make a small adjustment for a tiny app or script. Then I hate waiting 45 minutes for the entire tree to check out. I'm further frustrated by the amount of storage one of these all-in-one mega-checkouts takes. Google's answer to the size of the repo is a very special filesystem that does pre-checkout staging with symlinks and a bunch of proprietary preprocessing. Who can afford that?

On the other hand, when you create multiple sub-projects with dependencies and so on, your maintenance costs go up as you try to maintain, back up, and manage some sort of real DR plan. The likes of Fossil-SCM fill that gap nicely, but it still feels manually intensive.

What happens when your business is somewhere in between?

Apple AirPort Extreme and Aventail

It simply does not work. I had to buy a new router!

Zero inbox

... also means zero RSS, Twitter, and reading list.

Saturday, March 15, 2014

VPS are fun but at a cost

If you're using VPS solutions, then unless you can predict your usage and the usage of the other tenants on the same bare metal, be sure to perform all your processing and computation as soon as possible. There is no point in delaying the processing: anything you defer becomes the equivalent of a heavy batch job, and your system contention changes to one where the batch never has enough resources, or has only sporadic resources and processing profiles. Doing the processing ASAP means the profile is flattened, as the intra-transaction gap is absorbed by the transactional latency.

If I were building a framework from scratch...

What features need to be implemented from scratch:

  • down for maintenance mode screen
  • reverse proxy for load balancing, A/B, and green/blue deployment
  • REST API layer for all of the business intel.
  • Service for Static artifacts
  • complete metrics gathering
  • DEVOPS console including deploy button
  • authentication APIs
  • database abstraction or API wrapper
  • web sockets
  • RPC/SOAP - probably not much different than REST but with a stronger type binding
  • http/https - 
  • REST - use content-type and accept to specify the format of the params.
  • business logic as state machine
  • remove branching in the state machine to keep testing simple
  • wrapper for clustered data like Redis or etcd
  • configuration as code - no config files.
  • feature flags in the code and not in config files
  • store the code with the data so there is complete audit
  • CI/CD by pulling the code from the database
  • take the fossil-scm approach to code, wiki, blog all in one file.
  • LDAP baked in
  • plugin framework to extend the base language APIs.
  • multiple source languages - I do not want to create a DSL, but it would be nice to support multiple programming languages. I currently like Lua, GoLang, Erlang, and Elixir for my programming, but there is a place for C-like languages where there are opportunities to compile and link... this is a lot more pragmatic.
  • Docker containers appear to be efficient
  • Host security module (HSM) baked in
I cannot wait for my chance.
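Two of the features in the list above, "configuration as code" and "feature flags in the code and not in config files", can be sketched in a few lines. This is only a minimal illustration of the idea; the flag names and functions here are hypothetical, not part of any real framework.

```python
# "Configuration as code": flags live in the codebase, so every change is
# version-controlled and auditable alongside the code that uses it.
FEATURES = {
    "new_checkout_flow": True,   # hypothetical flag
    "beta_metrics": False,       # hypothetical flag
}

def feature_enabled(name: str) -> bool:
    """Return True if the named feature flag is on; unknown flags are off."""
    return FEATURES.get(name, False)

def checkout(cart: list) -> str:
    # Branch on the flag at the call site; flipping the flag is a code
    # change, reviewed and deployed like any other code change.
    if feature_enabled("new_checkout_flow"):
        return f"new flow: {len(cart)} items"
    return f"old flow: {len(cart)} items"

print(checkout(["book", "pen"]))  # new flow: 2 items
```

The point is that there is no separate config file to drift out of sync with the code: the flag's value and its consumers ship together in one audited artifact.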

** It's not your old hello world any more.

A Question About the Practical Use of Twitter Bootstrap

I like Twitter Bootstrap, but there are a number of challenges when it comes to being productive (forget responsive). What happens when you spend 6 months building your killer Bootstrap app and then, out of left field, there is a new release? Whether it's a patch or a feature release is not the challenge. But what happens when you purchase a template from a vendor and that also needs an upgrade? Can it be as simple as a drop-in replacement? I don't think so.

So when you buy one of those templates from wrapbootstrap you should plan on being in front of your keyboard when they upgrade. The only way this works well is if there is loose coupling between your client-side components and the Bootstrap artifacts, and as far as I can tell, that simply does not exist.

REST APIs, versions, and the stratification of error responses

Over the last few years I have been constructing a number of REST-like services. Each time I refine my process and design principles; this time I'm going to address server-side errors, with a modest sidebar on REST API versions.

I really like the Requests toolkit for Python. The example on its home page makes clear what we should all aspire to; note in particular the use of r.headers['content-type']. A recent article I read suggested that the preferred mechanism is putting the version in the path.
I suppose this is functional, but it causes a number of challenges. The first is that the infrastructure needs to be able to generate relative references, and has to be aware of the API version numbers, across all APIs. So it's an all-or-nothing approach.

The other approach, which I prefer but which is not very Requests-friendly, is putting the version in the Accept and Content-Type headers: the Content-Type carries a versioned media type for the request body, with a matching Accept for the response. Of course there might be a few variations on this, but on the whole it provides for a better and cleaner routing and implementation process. The different versions can coexist in the same application space or be routed through an A/B reverse proxy component.
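As a concrete sketch of the header approach: the media type "application/vnd.example.v2+json" below is a made-up vendor type (following the common vendor-tree convention), not one from the original post, and the helper names are hypothetical.

```python
import re

def make_headers(version: int, fmt: str = "json") -> dict:
    """Build matching Content-Type and Accept headers carrying the API version."""
    media = f"application/vnd.example.v{version}+{fmt}"
    return {"Content-Type": media, "Accept": media}

def parse_version(content_type: str) -> int:
    """Extract the version from a media type, e.g. '...v2+json' -> 2."""
    m = re.search(r"\.v(\d+)\+", content_type)
    if not m:
        raise ValueError(f"no version in media type: {content_type}")
    return int(m.group(1))

headers = make_headers(2)
print(headers["Accept"])                        # application/vnd.example.v2+json
print(parse_version(headers["Content-Type"]))   # 2
```

A router on the server side can then dispatch on the parsed version, letting v1 and v2 handlers coexist behind the same path.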

So much for a brief sidebar on message versions.

Unless your application is running naked you're going to have some infrastructure running between your application and the client. Once the transaction leaves your DMZ you lose all control over everything from availability to recovery. So there are many more things to consider.

For example, the basic response message carries a StatusCode. The StatusCode, its definition, values, and interpretation are described in the RFC, and it can be taken to mean multiple things. In a normal HTTP transaction a 200 means that the request was received and processed and a response was sent to the client. 4XX usually indicates some sort of authentication or request error, and 5XX usually indicates an application error, typically a crash or non-response of some kind.

But then you have to ask yourself: what should the StatusCode be when the application itself determines there is an error, like when a parameter is missing or has a wrong value or format? How do you indicate that there was an error without abusing the StatusCode? Recently I refactored all of my error handlers so that they returned a 400 instead of a 200, along with an error payload.

I think if I had defined the transactions in a more formal manner I would have come to this conclusion a lot sooner: (a) leave the StatusCode to the infrastructure; (b) any non-200 means that the infrastructure is experiencing some pain; (c) when generating responses, use the Accept header to determine the format of the response (JSON, XML, plain text, msgpack, ...) and use the Content-Type to specify the return type.
Now, between the path and the Content-Type, the router and the controller know exactly what to do with the request, and the client knows exactly what to do with the response. In fact the interface could be completely decoupled from the workflow.
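The (a)-(c) policy above can be sketched framework-free in a few lines. This is only an illustration under my own assumed names (handle, render, the user_id parameter); it is not the original service's code.

```python
import json

def render(payload: dict, accept: str) -> tuple:
    """Pick the response format from the Accept header (policy (c))."""
    if "application/json" in accept:
        return "application/json", json.dumps(payload)
    # Fall back to plain text for any other Accept value.
    return "text/plain", "; ".join(f"{k}={v}" for k, v in payload.items())

def handle(params: dict, accept: str = "application/json") -> tuple:
    """Return (status, content_type, body) for a request."""
    if "user_id" not in params:
        # Application-level validation error: a 400 plus a structured
        # error payload, never a bare 200 (policies (a) and (b) leave
        # 5XX and transport-level codes to the infrastructure).
        ctype, body = render({"error": "missing parameter", "param": "user_id"}, accept)
        return 400, ctype, body
    ctype, body = render({"result": "ok", "user_id": params["user_id"]}, accept)
    return 200, ctype, body

status, ctype, body = handle({})
print(status, body)  # 400 {"error": "missing parameter", "param": "user_id"}
```

The client can then branch cleanly: any non-200 is trouble, and the body tells it whether the trouble was its own malformed request or something deeper.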

Tuesday, March 4, 2014

Programming languages that I like enough to install on my laptop

Some time ago I decided that I was not going to install anything on my laptop. First, I always find myself installing countless libraries, and I can never reproduce the environment accurately enough for a practical CI/CD setup. Second, it also means trying to stay current with the latest versions of the language and/or libraries. Third, there is always a time when versions skew between production and development, meaning that my local machine needs to be able to operate with multiple versions. Finally, rebuilding my laptop after a reinstall or replacement is time-consuming if not impossible.

While there is nothing I can do about the first and second challenges, the others can be handled. There are a number of ways to handle #3: Ruby has RVM, GoLang has GVM, Perl has perlbrew, Python has pythonbrew and virtualenv. There are other options for different languages, but these seem to be getting some traction. #4 can be addressed with Chef, Puppet, Salt, and a few other installation orchestration tools.

And after all that... these are the languages I've decided to install on my laptop:

  • julia
  • rust
  • golang
  • lua
  • erlang
  • elixir
  • tcl-tk
I also have a few of the default tools like:
  • ruby
  • perl
  • python
  • objective-c
I suppose there could be a few others that I'm not aware of but these are the ones I see right now.

Monday, March 3, 2014

When is it time to change?

If your company or employer is building a platform based on third-party tools and frameworks, when is the right time to abandon those toolchains for more modern alternatives?

One corollary to this question: how much of the toolchain do you need to own in order to reach the evolutionary sweet spot where you have as many choices as you need in order to survive? (This comes from game theory in AI, which suggests that most artificial intelligence systems play in the direction that leaves them the most choices.)

Sunday, March 2, 2014

Getting mosh to build, deploy and run ... on OpenBSD, Ubuntu, OSX

This was a pain in the ass, but I'm glad I went through the process. In addition to my previous observations, it also became apparent that Mosh does not run in userspace, except possibly on the server side, depending on how it was installed there.

Installing Mosh on OSX using Homebrew seemed to have some subtle side effects when using fishshell, so I made certain I was using bash. Also, there was a Boost package conflict, so I had to remove and reinstall it.

On Ubuntu, about the only good news is that it installed flawlessly, except that since it was installed with apt-get it was installed as root. I suppose if I had compiled it manually I would have had only marginally better results, because the installation instructions call for a "% make install", which is clearly run as root.

Installing Mosh on my OpenBSD 5.3 machine was by far the longest and hardest. The biggest flaw appears to be that the installation instructions on the Mosh site miss a number of dependencies, and exposed a number of packages that needed to be removed and reinstalled. They also left out the environment configuration needed for automake and autoconf. Finally, I noticed that I needed to install the program as root, which is contrary to the docs that say the service runs in userspace.

sudo pkg_add
sudo pkg_add
sudo pkg_add
sudo pkg_add


./configure && make
sudo make install

My biggest complaint is that Mosh's own documentation now recommends against using its UDP transport on production systems. (They have also loosened their stance on being more secure than SSH.)

Uninstalling could be a whole new set of pain.

Side note: as a systems professional, part-time security analyst, and humble Mac user, I constantly use the provided Terminal application. I have also installed the third-party alternative, iTerm2. iTerm2 is awesome, but I have a number of long-term concerns. First and foremost is the likelihood that someone, someday, will insert some sort of trojan and start farming my terminal passwords and sessions to some remote point on the globe. The second pain point, which I am addressing by using my iPad as a terminal, is that just about every application I install on my OSX machine is installed by the root user. (There are so many complications here.)

Mosh - mobile shell

Mosh version 1.2.4 is out. I'm not sure whether it's a new release, but I did read a recent posting, and two things caught my attention, especially given previous posts of mine where I criticized Mosh for its claims to be more secure than SSH.

The first change is that the Mosh website no longer claims that it is more secure than SSH.

The second change is that the website now suggests that, because it uses UDP instead of TCP, it is less desirable to use on production machines.

The only conclusion I can come to is that Mosh is not really a secure alternative to SSH. The certain reality is that if you have to connect remotely to a server, that machine should always be considered production, whether or not it is actually a development machine. A vulnerability in one machine is tantamount to a vulnerability in all machines. And all of the advantages of using UDP evaporate if you have to tunnel through a TCP VPN.

Another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...