
Showing posts from July, 2013

Idiomatic Web Server for the Enterprise and others

Web servers come in a rainbow of colors and flavors: Apache, Nginx, Lighttpd, Zeus, and IIS are among the leaders, with application, REST, and custom servers filling in the gaps. However, with the power of Go's goroutines and standard library, one could make the argument that if the remaining functionality existed (or at least the required modules), a hybrid Go web server would be a real possibility; the benefits are clear:

(1) fewer applications - fewer dependencies
(2) obvious transaction flow - predictable

internet radio costs

How is it that all of the internet radio players from Google, Spotify, Rdio, and Pandora (to some extent) have the same price point? Not that there needs to be a loss leader, but the pricing seems too regulated... and unless that regulation comes from government, wouldn't that be a monopoly?

My requirements for the ideal DSL

I do not want to create a primer or answer the question of what a DSL (domain-specific language) is, but I recognize that the line between a successful programming language and a ubiquitous uber programming language lies somewhere in the range of problems the language can solve. For example, languages like C, C++, and assembler are excellent for low-level programming, but one can get tangled at a high level. Languages like Go, C#, Erlang, and Java (CLR, Mono, JVM, BEAM) are better at medium-level challenges, and most dynamic languages like Perl, Ruby, JavaScript, and Python are better at high-level problems.

Going back to Dave Thomas's (pragprog) Ruby Conf AU keynote, paraphrasing: the power of any DSL is that it is ideally suited to the complexity of the challenge it solves, making the programmer productive.

A side note: I have purposefully left out languages like CoffeeScript, Elixir, and Clojure because they are more akin to translations than standalone languages; and they…

The real pair programming

Pair programming in modern development shops is implemented with one programmer's hands and eyes on the keyboard while the second programmer looks on as the first one codes. This doubles your costs without realizing any immediate benefit.
Rather than having the second programmer simply look over the first programmer's code, the second programmer should be writing test cases against the first programmer's code. That way both programmers are still collaborating to complete the task.

What is the actual return on investment when implementing agile methodologies

Implementing agile methodologies is like replacing all of your incandescent light bulbs with LED light bulbs. At least when upgrading your light bulbs you know what the return on investment is: depending on when you purchased the bulbs and their initial cost, the ROI is approximately two to three years according to current estimates.

But when it comes to agile methodologies, how much of the improvement is incremental and how much is generational? When you consider the cost of training, consultants, resources, software upgrades, and new purchases, just how cost-effective is this methodology, or is it merely disruptive? Pair programming in particular tends to be endorsed by individuals who have never paired and never will, especially leadership. Is the 2x cost per line of code really better, or a misinterpretation of the task?

UPDATE: another great article rebutting Agile.

Is Linux kernel development considered agile

How is it that a handful of developers who are considered benevolent dictators still manage to incorporate such large changesets into the Linux kernel, even while outspoken leaders like Linus Torvalds are highly opinionated? What does this say about everyday development in large and medium-sized companies that rely on standard agile, scrum, and Kanban methodologies?

Docker and docker-registry; first and second reaction

My first reaction to docker was that I was going to do a lot more with a lot less. I was interested in using docker containers to deploy jails and such for various private web servers, many serving static content. That mission was delayed when I realized that the docker registry is public access. And while the content is meant to be consumed, it remains the property of its owners. Not a good idea.

So I thought I would deploy my own docker registry. One look at the config file and I realized that I needed an S3 account, which is probably why they do the public thing and why dotCloud is so expensive. (Amazon does not give great discounts, as its pricing is meant for the end user and not the reseller.)

I'll have to look at libvirt directly now. It can drive LXC containers directly, and that in itself might be a winner.

C100K and C250K on nodejs

I am somewhat skeptical that they actually managed to reach C100K and C250K on an Amazon server. However, I'm very skeptical that they got any real work done on that machine: partly because of the per-connection overhead (up to 4 KiB on some operating systems, so 250,000 connections means roughly 1 GB of buffer memory alone), but mostly because doing real work has severe implications for every component in the system, from network I/O to disk I/O.

Making the connections is easy. Doing real work is hard.

You `Mako Server` Me Crazy

Mako Server's benchmarks are impressive and I'm very curious how they managed to double the page rate of Apache. It's too easy to be a critic...

(1) dynamic files, depending on complexity, are easier to produce
(2) Mako Server is a micro server and therefore does not have all the features or protections that Apache does

I think the biggest difference is the licensing. Mako's prices seem reasonable, but it is not free as in beer or anything else. Frankly, there are so many other servers out there that are FREE that Mako has to fall off my radar.

Feature Flags are Essential

When building a non-trivial continuous delivery or deployment system, you will absolutely require feature flags for every single change that goes into the system. Flags allow devops, or even traditional operations, to enable new features or patches, as well as to disable a patch or feature again... and all systems can be synchronized at once.

It's a trivial feature but so powerful.

erlang on xen - again

This is a nice writeup; it's easier to read than the source. There are a number of problems with this project:
1) closed source until they make some money
2) limited to 512 files in the filesystem
3) the compiler is custom and is not compatible with BEAM files
3a) build/package requires using their service (your code is not private)
4) still very limited in functionality
As opposed to Docker:
5) Docker can launch a hello-world app in 20ms on a basic Rackspace server
6) provides jailed systems no different than a VM
7) no overhead from a VM
8) access to real filesystems
9) will run anything you can run normally
10) fully instrumented
11) uses LXC containers
12) free
13) actively developed by the hordes
14) and will run Erlang as BEAM... also Elixir... as-is
Erlang on Xen is interesting, but a novelty.

Zero downtime for go

The goagain library is fantastic. It permits a smooth transition between versions of a Go application with zero downtime. There are a few more things to say about it: many of the signals are not implemented, limiting its overall functionality, and there is a design flaw or two. However, it is functional and does perform the main task of zero downtime.