Sunday, October 30, 2011

The Perfect Sandbox

I've been writing software for over 20 years... and while, as a marketing ploy, I'm no longer allowed to call myself a programmer, that's what I do. Recently I started having trouble with my primary OSX machine, right on the heels of some problems with my backup OSX machine.

The backup works fine standalone, but when I plug it into my monitor the sync seems off, and that bothers me and hurts my productivity. The primary machine is starting to show low-res icons in the Dock and the task switcher. I followed the first set of repair steps and it has not improved.

I have a third system I can use, but what I hate most about this is that the Six Sigma curve is blown and my development environment is ground zero.

So I've started thinking about what's next. Clearly I could put my desktop in the cloud. While Google has accomplished part of this, it is incomplete and does not address the "programmer" use-case. It also does not take the local developer into consideration, only the remote one. Then I see my wife's shiny iPad. All of the software and the OS is sandboxed. Any interaction between applications is "managed". Each application and its data live in a sandbox, and they can work with remote servers.

So here's what I want. I want a semi-desktop version of an iPad with keyboard and mouse. Applications that run in sandboxes. Most of all, when my machine dies I want to place an order for a replacement that arrives configured and installed exactly the way the current one was. I want its basic behavior to be more like a desktop/laptop instead of an iPad, but I want the structure.

PS: I also want some of the benefits from a platform like VMWare and running complete remote desktops... in yet another sandbox.

Cut the Cord!

Depending on who you talk to or read, they pretty much say the same thing: "One big reason that OSX succeeded where others failed is because they cut the cord and started fresh." And if you talk to the insiders at Microsoft they'll say pretty much the same thing about Windows: "It [Windows] has missed shipping dates and quality goals because of the deep-seated need to be completely backward compatible."

Well, Linux has already branched to version 3. And with vendors like Ubuntu, Red Hat, and others contributing to the kernel and other subsystems... everything has been in "add more code" mode, much the way that Microsoft has been running its ship for the last 20+ years.

With virtualization technology like VMWare, Parallels, and others, it's time to move on. The need for backward compatibility is over. The desktop needs to be more reliable and stable.

Saturday, October 29, 2011

Mojolicious and MojoX::Redis

I've been looking at the code for MojoX::Redis for a couple of days now and I'm impressed and depressed at the same time.

First the good news. Like many projects it's open source. The better news is that it looks cool. The code is nicely formatted, and if there were a PEP equivalent for Perl you'd say it was adhered to.

On the sad news side of things: other than some POD at the end of the main project file, the code is not documented at all. And the worst of it is that the code is the exact reason why people hate this crap. This person clearly knows the ins and outs of perl, and he demonstrated that aptitude well. But if you asked me to reverse engineer it... it's going to take a while and a few cases of wine or beer.

The best feature is that it implements non-blocking requests in a way that complements Mojolicious. The side effect, however, is that the main thread continues to run while the first request is still processing, when really the benefit of this sort of functionality is to let peer events run, not the current main thread. Since it was not documented in any meaningful way this had to be experienced first hand... and after reviewing the test code I'm not sure that the conclusions from my own code are correct.

Anyway, here is an explanation as I see it in pseudo code:
1) do some Redis function like INCR, expecting a response
2) do a GET on the same key
3) compare the results; the comparison will always fail because the result from #1 has not arrived by the time #2 completes.

Some code that demonstrates this:
my $retval1 = undef;
$redis->execute("incr" => [$mykey] => sub { my ($redis, $res) = @_; $retval1 = $res; });
my $retval2 = undef;
$redis->execute("get" => [$mykey] => sub { my ($redis, $res) = @_; $retval2 = $res; });
die "they do not match" if $retval1 != $retval2;

The side effect here is that it simply does not work. The only way to make this work is something like this:
my $retval1 = undef;
my $retval2 = undef;
$redis->execute("incr" => [$mykey] => sub {
    my ($redis, $res) = @_;
    $retval1 = $res;
    $redis->execute("get" => [$mykey] => sub {
        my ($redis, $res) = @_;
        $retval2 = $res;
    });
});
die "they do not match" if $retval1 != $retval2;

In the above code the sub() attached to the incr() only fires when the incr() has completed, and the same goes for the subsequent get(). The last die(), however, still gets control before the Redis calls have finished executing. So for this to be effective the die() needs to move inside the sub() of the get(). Phew!
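
Here is a minimal sketch of the fully nested version, written as a standalone script. It assumes a local Redis on the default port and that we drive Mojo::IOLoop ourselves (inside a Mojolicious app the loop is already running), so treat it as an illustration rather than production code:

use strict;
use warnings;
use Mojo::IOLoop;
use MojoX::Redis;

my $redis = MojoX::Redis->new(server => '127.0.0.1:6379');
my $mykey = 'mycounter';

$redis->execute("incr" => [$mykey] => sub {
    my ($redis, $res) = @_;
    my $retval1 = $res->[0];

    # only now is it safe to read the key back
    $redis->execute("get" => [$mykey] => sub {
        my ($redis, $res) = @_;
        my $retval2 = $res->[0];

        # the comparison (and the die) must live here, in the innermost
        # callback, otherwise it runs before Redis has answered
        die "they do not match" if $retval1 != $retval2;
        Mojo::IOLoop->stop;
    });
});

Mojo::IOLoop->start;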

I looked at the test cases in MojoX::Redis and there were some interesting examples. There was an implementation of the Redis pipeline in the form of a multi() transaction. This could be interesting, since one could do an incr() and a get() in the same pipeline; however, if you needed the result in order to perform future calls you would have the same timing problem, with the response not yet available in local memory for the subsequent calls.

An async lib of the Redis tools seems novel, but it makes certain use-cases very difficult and verbose. For example, I was playing with the Sinatra example of RestMQ. Sinatra, being Ruby, has many of the same warts, and the Ruby version of the lib was certainly not evented like MojoX::Redis, so I do not expect that it's going to get much work done. (I really like their demo version because it is so little code and it so accurately depicts the mission that it's hard not to like the elegance. Even though it's Ruby.) But the reality is that it is still constrained.

In summary, while Mojolicious is nice and simple to use (I still like it), the simple use-cases are simple, but as soon as you advance to the next step things get tricky. If you chase "get it to work correctly" first and only then think about performance, you could end up rewriting the project to make it performant. So be mindful. Here is the route handler from my RestMQ experiment:
post '/q/:queue' => sub {
    my $self   = shift;
    my $result = undef;
    my $queue  = $self->param('queue');
    my $value  = $self->param('value');
    if (! defined $queue) {
        $self->app->log->debug('queue was not in the URL ('.$queue.')');
        $self->render_not_found;
    } else {
        my $uuid = undef;
        my $lkey = undef;
        my $q1 = $queue . $QUEUE_SUFFIX;
        my $q2 = $queue . $UUID_SUFFIX;
        $redis->execute("incr" => [$q2] => sub {
            my ($redis, $res) = @_;
            $uuid = $res->[0];
            $self->app->log->debug('the uuid is ('.($uuid||'undefined').')');
        });
        $lkey = $queue . ':' . $uuid;
        $redis->execute("sadd"  => [$QUEUESET, $q1]);
        $redis->execute("set"   => [$lkey, $value]);
        $redis->execute("lpush" => [$q1, $lkey]);
        $self->app->log->debug('the uuid q is ('.($q2||'undefined').')');
        $self->render(text => '{ok, ' . $lkey . '}');
    }
};

When this code executes... the following output is on the console:
rbucker@mvgw:~/hg/metaventures/gwtwo$ ./alt/restmq.pl  daemon
[Sun Oct 30 00:59:50 2011] [info] Server listening (http://*:3000)
Server available at http://127.0.0.1:3000.
[Sun Oct 30 00:59:51 2011] [debug] Your secret passphrase needs to be changed!!!
[Sun Oct 30 00:59:51 2011] [debug] POST /q/myqueue/ (Wget/1.12 (linux-gnu)).
[Sun Oct 30 00:59:51 2011] [debug] Dispatching callback.
Use of uninitialized value $uuid in concatenation (.) or string at ./alt/restmq.pl line 47.
[Sun Oct 30 00:59:51 2011] [debug] the uuid q is (myqueue:UUID)
[Sun Oct 30 00:59:51 2011] [debug] 200 OK (0.003678s, 271.887/s).
[Sun Oct 30 00:59:51 2011] [debug] the uuid is (19)

My observation is that the output from the sub() appears in the log after the rest of the output. The warning about line 47 is there because $uuid is still undefined when that line executes. The callback does not merge back into the main flow of execution, so any sensible use requires that the dependent code be nested inside the callbacks. And that sucks.
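
For what it's worth, here is a rough sketch of how I would restructure that route so the dependent calls nest inside the incr() callback and the response is rendered only when the chain finishes. It reuses the $redis handle and the $QUEUE_SUFFIX/$UUID_SUFFIX/$QUEUESET values from the script above, and it assumes a Mojolicious recent enough to offer render_later to hold off automatic rendering:

post '/q/:queue' => sub {
    my $self  = shift;
    my $queue = $self->param('queue');
    my $value = $self->param('value');
    return $self->render_not_found unless defined $queue;

    my $q1 = $queue . $QUEUE_SUFFIX;
    my $q2 = $queue . $UUID_SUFFIX;

    $self->render_later;
    $redis->execute("incr" => [$q2] => sub {
        my ($redis, $res) = @_;
        my $uuid = $res->[0];
        my $lkey = $queue . ':' . $uuid;

        # these calls only make sense once $uuid is known
        $redis->execute("sadd"  => [$QUEUESET, $q1]);
        $redis->execute("set"   => [$lkey, $value]);
        $redis->execute("lpush" => [$q1, $lkey]);

        $self->app->log->debug('the uuid is ('.$uuid.')');
        $self->render(text => '{ok, ' . $lkey . '}');
    });
};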

Monday, October 24, 2011

where are all the programmers

[updated 2011.10.26] grammar and a few notes.

This is just a short note:
It seems that the number of qualified Java programmers is starting to dwindle in South Florida. There was a time when Java was interesting and exciting. The promise of write-once run-anywhere has been delivered; however, in the meantime there are so many other languages that are more productive and carry fewer dependencies, Python and perl for example.

Now the question... if you are working in a small development department and you are having trouble staffing the team, what do you do? I think it's a no-brainer. Get some strong and qualified manager(s) and then hire as many freshman programmers as the budget will allow. The cream will float to the top as you develop your project and your standards. The manager(s) will mentor the freshman programmers in the way of the journeyman programmer.

Additionally, you will need to implement some sort of programmer-bill-of-rights and company-bill-of-rights.  This looks a lot like the Agile Manifesto as it was originally written. Finally, stick to the sweet spot in the language you choose and keep the dependencies shallow and light. Lastly, when designing the overall system, make certain that it is possible to support more than one language in the framework. This way migration will be possible.

If you follow this strategy you're going to accomplish a number of goals: a) develop a team of programmers that is going to be cohesive and cost effective; b) there will be a career path, since they are mostly freshmen; c) you will be able to backfill as you promote because there will be interest in the tools; d) you'll be able to scale the teams and the code as needed.

If you think this idea is interesting as a way to scale the team, the application, and the workload, then you should call me and let's talk. I am currently in the market for a new long-term client or project, although I'd prefer an FTE position.

Wednesday, October 19, 2011

Is the Facebook App really an iPhone App?

Some months ago I remember reading that Apple was rejecting iPhone apps that were simple wrappers around Safari. As we have come to accept, Apple keeps many of these decisions close to the vest.

This evening I was logging into Facebook when I saw the Facebook app partially render itself and then display a number of Safari controls. In order to test my theory I brought up Facebook in the actual Safari app on the phone. The results were identical except for the toolbar at the bottom of the screen.

So now what's the difference between creating a desktop link to Facebook and actually using their app?

More importantly, what's the point? How much more information does Facebook receive because I downloaded the app versus just using the site in Safari?

Beanstalkd client/worker sample code

The following code is not tested and there is no defensive coding at all. Those are activities for the reader for now. In the next few days I'll implement the same code in perl and C.  The advantage of this strategy is that some A/B testing and monitoring will let you know which modules need to be rewritten for performance etc... The other side effect is that you can be language agnostic. (I might just try Lua too)

One of the other great side effects of this design is that since there are multiple small applications that are distributed they are easier to debug and extend. (think of all of the advantages of microkernels.)

The Client:
# load the required libraries
import uuid

import beanstalkc

# make a connection to the beanstalk broker
beanstalk = beanstalkc.Connection(host='localhost', port=14711)

# select the tube that is going to forward the message to your "worker"
# you can have multiple workers listening on the same tube or different tubes
# or a combination.
beanstalk.use('msg_for_worker')

# this is my message's transaction id, it is also the key used to locate the
# data in the cache and it is the name of the response tube.
msg_id = str(uuid.uuid1())

# HERE store the full transaction in the redis DB and use the msg_id as the key

# start watching the response tube
beanstalk.watch(msg_id)

# send the message to the worker
beanstalk.put(msg_id)

# wait for a response (timeout is in seconds)
job = beanstalk.reserve(timeout=15)
print job.body

# delete the response job so it is not re-delivered after the TTR expires
job.delete()

# stop watching the response tube
beanstalk.ignore(msg_id)

# DONE

The Worker:
# load the required libraries
import beanstalkc

# make a connection to the beanstalk broker
beanstalk = beanstalkc.Connection(host='localhost', port=14711)

# setup a watch for incoming messages over the well known tube
beanstalk.watch('msg_for_worker')

while True:
    # wait for a request (timeout is in seconds)
    job = beanstalk.reserve(timeout=15)
    if job:
        msg_id = job.body

        # HERE we retrieve the message/transaction body from the redis DB. and do our work.

        # open a tube to the client response tube
        beanstalk.use(msg_id)

        # send the response message to the client
        beanstalk.put(msg_id)

        # delete the request job now that it has been handled
        job.delete()

Tuesday, October 18, 2011

Scaling REST design

The diagram to the left should give you a starting point when designing a scalable system using Mojolicious or just about any epoll/kqueue, event-driven, single-thread/single-process framework. The same or similar can be said of other daemon-type applications that are trying to get a lot of work done without having to deal with all of the complexities of threading. (Someone recently posted that threads are the domain of a special few, as they are difficult and very hard to get right.) While I agree completely, I stay away from threads because the problem is more fundamental/theoretical than that. I just hate the idea of giving up all those spare cycles. (The test is affected by monitoring the thing being tested.) I forget who said it and what they said exactly; however, threading has overhead that I want to avoid. Of course I should also mention that threading is not consistent across platforms. So, to recap: everything is brute force and locally optimized, we are leaving the process scheduling to the operating system, and we are going to try to keep all of the data in memory (stay away from the disk).

For those expecting to see some code: I have not made the code pretty enough or generic enough to share, but I would like to mention that this has been tested, it works, and I will share it shortly. There are some things to be aware of before I get started. I'm also not going to show you how to install or configure the different components. I will mention, however, that daemontools is a great way to get things started and keep them running.

So first things first. What is beanstalkd?
Its interface is generic, but was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.

What that means is that the client sends one-way messages to the broker, beanstalkd, which queues them until a worker registers to process transactions. Unlike ZeroMQ's request/response use-case, beanstalkd's model uses channels (tubes).

The workers register or read from a well known channel and return the response over a private channel which is configured by the client and provided in the work payload.

The Client's pseudo code looks like:
- get a GUID
- create a channel from the GUID
- put the GUID in the work payload
- write the work message to the broker over the well known channel
- wait for a response on the private response channel

The worker, on the other hand looks like this:
- connects to the broker and starts reading from the well known channel
- the message is parsed and the response channel name is identified
- the worker performs the required function
- when a response is ready the worker writes the response to that response channel

And that's it. The rest is left up to the broker to perform. Granted, there are still a few remaining bits. In the drawing I marked that the client and the worker used the same instance of redis. This is because the different applications were actually running on the same chassis. This is a good thing because all of the messaging takes place in the same box and never hits the network, which is busy and constrained. The other benefit is that the messages being passed from client to server are never marshaled more than they absolutely have to be. By passing the request's GUID and the response channel ID in the actual work payload, the overall workload against the CPU(s) is reduced.

Speaking of TPS rates: it's important to note that everything you do is considered a "transaction". Reading a request from the client is a transaction. Writing a response to the client or to the broker is a transaction. So in the example drawing there are actually 6-10 application transactions for every user transaction. Therefore, if your system is clocked at "10M TPS", then when the full application is running you're only going to get about 1/10th of the total TPS if you're counting user transactions.

Logging... is no different than any other transaction, and log writes count against the overall transaction rate. If you have that same 10M TPS CPU performing 10 application transactions per user transaction, and you log 100 times per application transaction, then each user transaction costs on the order of 1,000 operations and the system will only deliver roughly 1/1,000th of its rated capability.
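
A quick back-of-the-envelope check, using the hypothetical numbers from the paragraph above (a minimal sketch, not anything from the real system):

#!/usr/bin/env perl
use strict;
use warnings;

my $rated_tps       = 10_000_000;  # what the box is "clocked" at
my $app_tx_per_user = 10;          # broker reads/writes, redis calls, ...
my $logs_per_app_tx = 100;         # every log write is a transaction too

# each user transaction costs its application transactions plus their logs
my $ops_per_user_tx = $app_tx_per_user * (1 + $logs_per_app_tx);

printf "ops per user transaction: %d\n", $ops_per_user_tx;        # 1010
printf "effective user TPS: %d\n", $rated_tps / $ops_per_user_tx; # ~9900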

Mojolicious, Beanstalkd, Redis and perl are very capable. In the next week I'm going to put together a template in the spirit of a go-lang implementation of SkyNet. Stay tuned.

Business Class Communication With Remote Employees

At the moment I'm sitting at my desk in my home office and I have several programs open just so people can communicate with me, and I hate the fact that I need so many applications and so many duplicates. It's not that they are bad applications, but they are all bloated, have different security models, and then there is the elitism of some of them.

Here's the rundown:

  • Mailplane is my regular eMail client for all of the gmail accounts. I currently have nearly 20 accounts.

  • I use iChat connected to 3 IM accounts

  • I use Adium for IRC

  • Skype and Skype Chat

  • Google Voice (it's connected to my MailPlane interface and my Chrome browser by extension)

  • A client of mine was using Mio so I installed it once too

  • And then there is FaceTime on my desktop and on my iPad

  • GoToMyPC


When you look at my desktop, most of the apps that are open are actually idle, doing nothing at all. Just think about all the screen real estate, memory, CPU and network resources that they are consuming. How can this get more normalized? As a business owner and user I want:

  1. simple and functional applications even if they are not integrated

  2. secure so that my competitors are not listening in

  3. not off-putting

  4. complete so that I can even screen share quickly

  5. reasonable cost


And something that is compelling enough that I don't have to worry about keeping those other applications around any more. Can't we all just get along?

Friday, October 14, 2011

iCloud is featureless and juvenile

And in conclusion, I'm canceling, disabling, and deleting all of my iCloud accounts, because for the price it not only fails to live up to a single expectation, but what it does do is more about vendor lock-in than service, form or function. It's time for the makers of DropBox, Box.net, CrashPlan and SugarSync to step up to the plate with everything they can. iCloud is going to fall like a house of cards if they do not update ASAP.

Cue in the flashback ringtone like in the movie/TV "Wayne's World"

I've been a DropBox user for about a year. I really like the Sync properties. I've managed to use it cross platform between my Windows box running inside a VMWare Fusion session, my Linux instances running on Rackspace hardware and my several Macs. I like their public folders, media folders and their iPhone application. Sure it's missing a few things but "this" is a sync function I really like. (I have a 50GB account and I'm considering a 100GB and team account for the family.)

I also use Box.net for smaller files and their plug-ins for LinkedIn and WordPress. Since I can produce proper PDF and RTF versions of my resume and some detailed project work, these folders and documents are picked up immediately by the plugins and I do not have any additional work. It's very sweet. The immediate challenge for Box is that the desktop app requires a business account. While I am happy that they gave me a 50GB account free for life, it's a pretty hollow victory because they do not have any apps that I can use with it. Clearly I need a sync like DropBox.

CrashPlan is by far one of my favorites. Before DropBox I had a small cluster of machines in the house that would sync... and then I upgraded to the cloud version and I even opted for the non-free desktop software. The good news here is that I have an unlimited account. The better news is that my wife has all of our video and pictures saved in the cloud, and after 5 years of marriage she has collected over 200GB of pictures and video. (One day soon I will need to perform a restore to make sure all of this stuff is good.)

And now iCloud. I recently wrote about iTunes and iTunes Match etc ... so I will not rehash that conversation. But I can say that I was expecting iCloud to be more like DropBox and it is not.  iCloud seems to be a slightly more polished version of MobileMe (which I never purchased).

Things started to go downhill when I was required to create a new AppleID for my devices. At first I used my current email address, but then when I tried to activate some features it wanted to create a new ID... and then the ID needed to be @me.com. They did not give me a way to change the AppleID, and in the end I needed to delete the ID and start again. Of course, as I started to delete all the wrong IDs I had used, I started getting warning messages from Apple that "whatever the module data was" was about to be deleted. Since this was a one-way trip I just let them delete whatever they were going to delete and let the chips fall where they may. I can only hope that this did not include my pictures.

The iCloud website is a sugar-coated site that is trying to look like my Mac OSX system, and while it performs nicely it is strictly limited to the 5 applications (calendar, contacts, email (and not even all of my Mail.app email accounts, just the one @me.com account), find my mac, and the last one escapes me for the moment). I was really expecting to see a file manager like iDisk, DropBox or Box.net.

Since my wife and I share our current AppleID for our iPhones, iPad, iTunes etc... I was expecting to be able to share everything else. Well iCloud for storage treats us separately as far as the cloud storage is concerned. So if I had to do it I would need a 300GB account for her and a 50GB for me. (if they could sync my drive like DropBox.) However, the price for this will never compare to what I pay for at CrashPlan and I get so much more for my money.

And that's when the camel's back finally broke. I have 2 iPhones with 32GB storage each. I have an iPad with 16GB storage. I have 2 MacBooks with 1TB storage and a MacBook Air with 64GB. Oh, and there is one Mac Mini with 1TB storage too. When I put all of this together and figure out what is really going where, iCloud just cannot do it.

If you've ever imagined what a cancer looks like when it makes that connection between itself and the host, you can only imagine that it looks a little like a Mandelbrot. The same might be said of a chameleon's foot pads. So as I look back at my first Mac Mini + iPod purchase 5 years ago... I realize that there is a lot more going on here than just a simple evolution. Whether Apple was ready to deliver on the promise that was iCloud or not is irrelevant. Someone at Apple has decided that I am worth $XXX dollars a year in goods and services, and the way they make that happen is with underperforming releases with a splash of eureka. (It's like a sampling of name-your-favorite-addictive-drug.)

Man o man, this is turning into a rant that I was not expecting...

So what is the plan? If things do not change radically over the next year or so I will likely end up going over to the Android side. MetroPCS has an all-you-can-eat plan for $50/mo (I want to cut the cord to AT&T completely by dropping my land line). I'll also get a Dell Z series for about half the price of my MacBook and my MB Air.

PS: Ubuntu One has a chance at this. I've used their product for a while, but they do not have a Mac version. I'm not certain that I need one for now. But there are some interesting options with the VMWare solution. (More on that later.)

Thursday, October 13, 2011

Website name?

I need help deciding on the name of my project website. It's going to use (a) mongoDB and (b) Mojolicious... to store, manage and print mailing labels. This is just a sample project to demonstrate (a) and (b).

[polldaddy poll=5582814]

The Agile Manifesto

If you are really hell bent on going down the Agile footpath then I urge you to read the Agile Manifesto. Then throw everything else in the trash. The manifesto makes common sense, and frankly, if you need the other books, references, and cheatsheets, then you probably don't get it and you should look into another career.

I know this is a harsh thing to say and commit to a blog, but if you think about it for just a moment and clear your head of all of the hubris that you hold for Agile... you'll come to the same happy place, realize your glass is half full, and you'll still be able to do your work without generating heat by doing agile to the agile process.

There is a whole world out there, and while your dedication to a "thing" is admirable, it might just be a waste of time.

Connecting to MongoLab - perl and python

[update 2011.10.13] I thought I would add the following quote from the MongoLab support pages: "If you connect to your database from outside EC2 or Rackspace your data is less secure. While your database does require username / password authentication, you are potentially vulnerable to others "sniffing" your traffic. We are currently exploring ways to provide for more secure methods of connecting to MongoLab databases from outside the cloud."

I'm working on a mojolicious project, as recently mentioned on this site. The next logical step for the application is a connection to the DB. I was originally going to deploy a mongoDB instance on my own server... and that would be great. But I've decided to use MongoLab instead. I suppose I could also use MongoHQ and try them independently and for comparison. That's a story for another day.

Connecting to MongoLab was pretty simple:

  • create an account

  • create a database

  • create a collection

  • create a user for interacting with the collection


That's it. The nice thing is that MongoLab gives you a sample mongo CLI connection string, so you know exactly what's what. From here you can test the connection on your PC if you have the mongoDB client installed.

From outside of Rackspace:
mongo dbh04.mongolab.com:27047/mydb -u <username> -p <password>
From within Rackspace:
mongo 10.183.5.47:27047/mydb -u <username> -p <password>

Looking at the CPAN help for MongoDB (previously installed in mojolicious part 1), we can now test the connection with this sample code (I got it from CPAN and then made some corrections):
use strict;
use warnings;
use MongoDB;

# connect, authenticate, then grab the database and collection handles
my $connection = MongoDB::Connection->new(host => 'mongodb://dbh99.mongolab.com:27999');
$connection->authenticate('mydb', 'username', 'password');
my $database   = $connection->mydb;
my $collection = $database->get_collection('my_collection');

# insert one document and read it back by its generated _id
my $id   = $collection->insert({ some => 'data' });
my $data = $collection->find_one({ _id => $id });

Once I executed the program I verified that the data was written to the DB by logging into the webGUI and checking the collection. The data was there and ready.

Then I took this program and made a few modifications so that I could dump the record I just inserted. The code looks like this:
use strict;
use warnings;
use MongoDB;

my $connection = MongoDB::Connection->new(host => 'mongodb://dbh99.mongolab.com:27999');
$connection->authenticate('mydb', 'username', 'password');
my $database   = $connection->mydb;
my $collection = $database->get_collection('my_collection');

# fetch one document and dump its keys and values
my $data = $collection->find_one();
while (my ($key, $value) = each %$data) {
    print $key . ", " . $value . "<br />";
}

It's not a very sophisticated dumper and there are some good libs for that sort of thing, however, my mission was to dump the data and so I did.
I'd like to take a sidebar moment to mention that I recently read an article, "why perl". The takeaway from the article was that perl programmers are 'A' and that perl programs are 'B'. Granted there is no real evidence of this; however, there is a corollary: if you want to hire smart people who take an interest in their craft and you do not want to go through throngs of Java resumes, post an erlang position. So what I'm saying is that perl is an edge language, where python, ruby, javascript, and java are in the median space and therefore "mostly" attract median-skilled programmers. (Let the trolling begin.)

So I implemented the exact same program (to pull the data from the DB) in python. It took half the time because I was already familiar with the syntax, and since there is some nuance in perl that I've flushed from my cache, I wanted to make a connection to my python side:
from pymongo import Connection
connection = Connection('dbh99.mongolab.com',27999)
db = connection.mydb
db.authenticate('username','password')
collection = db.mycollection
print collection.find_one()

Worked like a charm. The output was prettier because python makes that easy. I'm looking forward to part 3. In the meantime I'm going to try this against mongoHQ.

Software development in the cloud

Software development is about to change forever. Certainly the people at Cloud9 have recognized that and so have a dozen-ish web and desktop collaborative IDE projects (see wikipedia).

I recently purchased a MacBook Air. The small 11" version with 64GB of SSD. It's not a lot of memory or disk but it is enough if all I want to be is a "user".  But as soon as I put on my programmer hat, it's not enough.

Partly because of the screen size, but more importantly because of the tools. Sure, my mac is a general purpose computer, but my clients and applications are not. From one project to the next I can end up with completely different tool requirements. For any set of projects I can be required to use widely different versions of Java, python, perl, ruby or even erlang. And then it gets crazy as I try to handle the different versions of, and dependencies between, libraries.

Additionally, as I get ready to package and deploy the application(s), it's wickedly hard to deploy without picking up unwanted dependencies, especially when the dependencies run deep. It's always better to deploy on a fresh machine.

The good news is that most development-heavy companies have already noticed this. They tend to use cloned drives when giving employees new computers so that they do not have to install each application individually. Just clone a drive and off you go. In many cases you can provide Dell a drive image and they'll manufacture a set of laptops for you. It's even more fun when they encrypt the entire volume, so your IT staff does not have to lift a finger other than to deliver and plug in your new computer. Many of those same companies use operations-centric machines running something like VMWare to slice off developer systems that all look the same. This way everyone has the same starting point.

So as we independents and SOHO business owners move forward we need to consider this. Spend more money on upgraded networking at the office and remote locations. Provide commodity hardware. Move everything to the cloud and replicate/duplicate everything.

PS: I did some math. As much as I like my MBA, it cost me $999. Amortized over 5 years that comes to about $17 per month. For just about $11/mo I can get a reasonable dev server at rackspace. That's still expensive, and since I have no idea what their costs are, it might actually be reasonable. For my $11/mo I get a dedicated server with shell access and a public and private IP address. I get plenty of network bandwidth for development. And the system runs 24x7, unlike my laptop, which I turn off at night.

So my recommendation for rackspace: bring the price down and open up some cloud services, even if they are repackaged and bundled google services. You could capture a huge market if you had a proper workbench for different vertical markets.

Thursday, October 6, 2011

10 hours of spotify is not enough

[update 2011.10.08] One more thing about Pandora. If you're using your cellphone to play music, make sure that you turn off the player when not in use. You could do some serious damage to your wallet if you forget... unlike iTunes.

[update 2011.10.08] Pandora is awesome.  a) it's free. b) you do not need to provide ANY user id or password. c) and the upgrade for $3/month adds real value like no advertising and a real desktop app (although I do not like adobe air being installed on my computer)

I'm trying to make a case for uploading all of my music to the cloud. So I downloaded the cloud-beta version of iTunes and I allowed iTunes to take several hours and plenty of bandwidth to upload the portions of my library that they could not identify or did not sell to me. I'm not sure whether it's a good or bad thing that 25% of my library was not already available.

So let's do the rundown:

Spotify:

Several months ago I was in Stockholm Sweden working onsite for a client. Back in the day Spotify was a "Sweden" only application, in fact I had to use the company's address in order to unlock the application (along with my IP address).

In Sweden the application worked great, but, when I returned to Florida the app stopped. Clearly I needed a premium account.

Now Spotify has entered the US market, so I downloaded the US version... but you would not believe the number of hoops I had to go through to get the app to start working, including having to change my address. But it finally worked. Well, it only worked for 10 hours. And in that 10 hours... Spotify learned how much, how often and what I listen to. They solicited me every two or three songs with some commercial offering, whether it was for premium services or foot cream.

I have several complaints about the Spotify apps. First the desktop: a) there is no mini version of the GUI, clearly because they want to advertise to me and they did not go to the google advertising college; b) something is very wrong with the radio playlist randomizer. They picked a lot of Swedish music, the same 5 or 6 artists, and very many of the same songs... instead of pulling from their huge library.

As for the iPhone app: I was never able to get it to run. They wanted a premium account, or they would only play my own iTunes music over WiFi. Neither was interesting.

iHeartRadio:

Clear Channel is into everything. They are probably the new kings of all media, leaving Howard Stern as the Joker. I actually like Stern but I never get to listen; ya gotta pay for the privilege. They have been running a promotion for the last few months so that their music was free and commercial free. However, beginning in 2012 they are going to start commercialization. I guess they have to make a buck.

Their iPhone app is pretty good. I've found various affiliate stations that I like to listen to. Including a local Tallahassee station that calls the Florida State Football games.

My only complaints are that they do not have a desktop app (I guess they want to search my browser history) and they want me to join them on facebook if I want to build custom channels.

Pandora:

Back in the day when we had our first child, I would take Julia and our Cocker Spaniel Lucy for a walk first thing in the morning. To help pass the time I would turn on Pandora to one of several kids channels. It was fine for a few months. Even the commercials were... tolerable. But then one day, as I was approaching the house after the walk, I tried to "pause" while a commercial was playing. Pandora refused. Over the next few weeks the same thing occurred. It's not a bug in the software, it's just how they do things. And since it is not how I need it to work, I had to give it up.

As Seen on TV!

These companies are marketing businesses. Calling them "media" is like putting lipstick on a pig. And what most people fail to realize is the real value of their personal information no matter how anonymous or trivial it might seem.

Going back 10 years I had a conversation with a grey market - marketing company executive. They did snailmail spam. He said that a) it cost pennies on the dollar to send 100K or more mailings and that b) the return on the investment was huge. Anytime a person responded to an advert their qualified personal information was worth about 17 or 18 US dollars. That information could then be sold and resold over and over again.

So when you buy that "as seen on tv" product... for shipping and handling only. They are only interested in you and your mailing address. It's where they make their real money.

Here we are 10 years later. We have the likes of Spotify and iHeartRadio that want to follow us on facebook, read our walls, link to our friends, and so on. What do you suppose that is worth? Maybe a few hundred dollars a year. And on top of that they want us to pay for the content.

Spotify is charging $10US a month.  That's one full album on iTunes. Since I only buy 3 or 4 albums a year and now I'm into singles... It's hardly worth it.

Apple's iCloud is interesting, and iTunes Match at $25 a year is an interesting price; I assume to offset storage and bandwidth charges... and maybe a small kickback *cough* royalty to the record companies. Now that they are promising to upscale my music to 256 kbps... I'm not sure that's going to be good either. Now I have extra bandwidth costs from my cell phone and I cannot tell the difference anyway, especially when I'm listening to my cellphone on cheap headphones in a car, train or bus with plenty of environmental noise.

All of this reminds me of something Yogurt (Mel Brooks) said in the movie Spaceballs: "Merchandising!"

Steve Jobs - not a me too

I never wanted to be a me too but I thought I had to say something.

Gates, Ballmer, Ellison, Allen, Buffett... are all rich and influential people, and I'm certain I'd get some value from a meeting; however, Jobs was a person I really wanted to meet. In fact I have applied for open positions at Apple every few years on the odd chance they were looking for me. Alas, that's over.

RIP

Wednesday, October 5, 2011

Google Doc - resume templates

About.com has a career advice column, and recently they pointed me to Google Docs for its excellent resume templates. With anticipation I clicked on "public templates", then searched for "resume". I quickly scrolled to the bottom to see the number of matches... "1-20 of thousands". So I was getting the idea that I was going to see some cool templates.

I was very premature. Before I had a chance to click on "next" to advance to the next page of templates I started to notice that the templates had personal information in them. As I continued to the bottom of the page and on to the second page there were more and more resumes of real people.

I cannot decide whether these are real or fake. Did these people save "as template" on purpose or by accident? And depending on what their answer is, are they people I would hire? Clearly the first person to post a personal resume should be rewarded for individual creativity... but what about #2 and beyond? Now they have just filled the system with loads of junk and all of this creativity is now seen as a "me too".

Hold that thought as I upload my resume as a template.

I'm back now.  Have a nice day and if you have a moment please review my resume.

Monday, October 3, 2011

Eventually Consistent Storage Will Save Mankind

I recently read a tweet from @justinsheehy, the very public face of Riak @ basho.com. He wrote:
Paraphrasing @GeorgeReese: to be protected from failure, put as much of your data in an eventually-consistent system as possible.

In response, and without thinking too deeply, I asked the question:
@justinsheehy @georgereese good point so why not a flat file and import later? Why all the extra cycles/rotations to write to any type DB?

And then @georgereese and I started to converse at 140-character intervals until he sent me a link to this article: Eventual consistency - Wikipedia, the free encyclopedia. At that moment I realized that my original question was really more of a statement; in its absolute simplest form, the wiki definition of eventual consistency can be applied to a flatfile on a DOS-based computer, so long as you take backups and restore them on another computer... at some point in time.

That said, I think (and I could be wrong) Sheehy and Reese were probably talking about Riak, which has a lot more moving parts in it than, say, a zipped flatfile and rsync... and there is plenty of computer science reference material that discusses BLOC (bugs per line of code).

I'm currently designing and implementing a credit card payment gateway. It's not overly complicated, however, the most interesting piece of this implementation is the use of Redis as the storage engine. While Redis stores everything in memory, I have enabled the feature/function that saves the data to disk; so while I have not enabled replication... this "system" can be described as eventually consistent.

Eventually Consistent is so much more interesting when applied generally and globally across systems instead of narrowly defined applications.

In the interest of full disclosure: I recently interviewed with @justinsheehy for a position on the Riak project. While I recognize that I did not perform well after only a few hours of sleep, thanks to my pair of newborns, I have not yet received any formal feedback. This conversation and post are meant to be informative, and with the sincere hope that one day basho might offer me a position.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...