Monday, April 30, 2012

Would the next generation desktop please stand up!

When we used DOS and wanted more, we received TopView.

When we used DesqView and wanted more, we received Windows.

When we used Windows and wanted more, we received OS/2.

When we used OS/2 and wanted more, we received Windows.

When we used Windows and wanted more, some of us received Mac OS 9, others got NeXT, others got X Windows, and a few others besides.

Now it is 2012, we use Windows 8 and OS/X Mountain Lion, and we still want more.

I see the current situation in two parts. (a) XWindows has not seen any real improvement since it was first released. Sure, there are several window managers, but they are all built on top of X11 and we still have the same kludgy interface: (1) multiple screens are painful, (2) cut and paste is painful, (3) the programming APIs are worse still, and (4) it has not seen a true refresh in many years.

And in the second part (b): while Microsoft has kept the status quo, incrementally adding Aero and Metro in its recent releases, Apple rewrote the GUI when it deployed OS/X but has not done anything interesting since. It seems that they have ignored the GUI in order to focus on the core. I'm sure they have their reasons, same as Microsoft, but given today's hardware technology you'd expect more from the leaders.

But going back to (a). X does not have the advantage of being connected to hardware vendors the way that Microsoft and Apple are, but it has the freedom to do anything, so why not fix it? Canonical and RedHat have the deep pockets to improve the situation but they would rather shuffle the deck chairs. (Have you tried the settings panel on Ubuntu 12.04? It's terrible.)
Windows 3.1 called and said it wanted its BitBlt back!

Apple took a lot of heat when it departed from X-11 in order to build its desktop. Now, in hindsight, it was a stroke of genius. Clearly the X-11 team is mired in the past: the conferences, the meetings, the meritocracy, the politics, and a complete lack of understanding. It's time for a new and modern desktop for Linux. And if you need X-11, take a hint from team Apple and... "there's an app for that".

More Credit Card Fraud, Where is the Bank Fraud?

I just wrote an article about credit card fraud... but here's some food for thought.

Computers have been in banking for a good many years, probably since the 1960s or even a little earlier. But in recent history we hear about credit card fraud and not banking fraud. The systems are typically integrated and supposed to be equally secure... but the attack vector is always credit cards.

I wonder if saying it was credit card fraud (a) allows the banks to charge more for credit cards and (b) allows the government and banks to say our banking and reserve system is secure.

The thing to think about: the credit card company (the issuing processor and all related entities) does not need your social security number. For anything. Your bank does, and it does not need your card number(s).

There are many ways to fix this problem: (a) laws, (b) banks, (c) technology.

Credit Card Fraud! Again? Really?

I'm somewhat of an expert when it comes to credit card systems. I have worked for the likes of NaBanco, First Data, WildCard Systems, MetaVentures, Insight Cards, Klarna, NXSystems. I have also collaborated and certified directly with Visa, MasterCard, American Express, and Discover. I have also designed open and closed loop systems including stealth platforms like insurance eligibility. Finally I have participated in several PCI audits as the target and the auditor.

Yet I was still outraged when I received a letter from a major card brand saying that my account had been compromised; they went on to reassure me that my social security number and some other private details had not been compromised.

Let me be perfectly clear here.  *** This is utter and total bullshit !!!  ***  I'd like a chance to repeat myself but that might be gloating or looking for business.

Firstly: PCI and many other security and privacy measures are not as secure as I'd like. PCI takes the rent-a-cop approach to security: observe and record. There is nothing in the PCI document that tells the institution to take an active role.

Secondly: the rules and regulations of the various major associations do not go any further than PCI when it comes to detection or the active prevention of fraud. Again, observe and record. And unless you are doing something that is going to hurt the brand name, the issuers and acquirers can take whatever risks they deem necessary to capture and keep a cardholder.

The CEO of Klarna (Sweden) is always talking about removing friction from the transaction process. His company's product does not use credit cards and is similar to Bill Me Later (temporary credit is offered on the fly). Part of what makes his product successful is not that his customers' credit is tied to their SSN, but that the laws in the countries where Klarna operates are mindful of how this private information is used; in fact, it's not so private there. It's about as common as your cell number.

[Image: who are the players in the credit card process]

GLOSSARY


(*smiley*) This is the cardholder. The cardholder is on both sides of the picture: he deposits his hard-earned cash into a bank, or makes partial or full payments for credit that has been provided, and he also buys goods or services from merchants. Therefore the cardholder is on both sides of the credit equation.

(M) This is the merchant. The merchant provides goods and services to cardholders. The merchant also pays a percentage of each sale to all of the entities to the right.

(MB) The merchant bank is where the final settlement funds are deposited once the transactions have cleared.

(GW) The gateway processor is considered a 3rd party service provider. They provide some level of transaction, reporting or security service for the merchant. They may provide other types of business integration or workflow.

(GW Bank) Depending on the acquirer's rules, the gateway processor has a clearing bank in order to capture its commission from the day's transactions.

(AP) The acquiring processor is a purely technical entity that processes transactions between the merchant and the association. The AP does not actually have to be a bank, but it does need to be bank sponsored.

(A Bank) The acquiring processor bank performs the clearing function for the acquiring processor; more importantly, this bank sponsors the AP's relationship with the association.

(association) Visa and MasterCard are associations of banks. American Express is referred to as an association but was a privately held company at one time. Discover was spun off from Sears and is/was also a proper bank.

(IP) Like the AP, the issuing processor does not need to be a proper bank. The IP need only be sponsored.

(IP Bank) The issuing processor bank handles the clearing and settlement on an on-demand basis. Sometimes this entity is extending credit to the cardholder and sometimes this entity is holding the cardholder deposits. It depends on the individual card program.

(Bank) The cardholder bank is where the cardholder interacts with deposits and payments.

Authorization - this is the first part of a 2- or 3-step process (from the merchant's side), depending on where the transaction is being performed. If you are buying a book from the bookstore then this is the first of 2 transactions; it's just intended to see if you have enough funds. If it's a gas station or a restaurant then it's a pre-authorization, because the final amount (the tip, for instance) is not yet known.

Settlement - the settlement process takes place at least once a day (from the merchant). It is when the point of sale device tells the issuers what transactions were actually completed. This triggers the clearing and settlement process.

Clearing and Settlement - The association takes all of the settled transactions and groups them together sending like transactions to the individual issuing processors along with a "demand" file which the issuer uses in order to pay the association.

Single Message System - this is when the authorization and the settlement transaction are performed in one transaction. ATM transactions are typical single message system(s).
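The dual-message flow in the glossary above can be sketched in a few lines of Python. This is a toy model of my own for illustration; the class and function names are hypothetical and do not correspond to any real processor API.

```python
# Hypothetical sketch of the dual-message (authorization + settlement) flow.
# Names and amounts are illustrative only.

class Transaction:
    def __init__(self, card, amount):
        self.card = card
        self.amount = amount      # the authorized amount
        self.captured = None      # the final amount, set at settlement

def authorize(txn, open_to_buy):
    """Step 1: check that funds are available; no money moves yet."""
    return txn.amount <= open_to_buy

def settle(txn, final_amount):
    """Step 2: the POS batch tells the issuer what actually completed.
    The final amount may differ from the auth (e.g. a restaurant tip)."""
    txn.captured = final_amount
    return txn.captured

# A restaurant pre-auths the bill, then settles with the tip included.
dinner = Transaction(card="4111111111111111", amount=40.00)
assert authorize(dinner, open_to_buy=500.00)
settle(dinner, final_amount=48.00)   # auth amount plus tip
```

A single message system would collapse `authorize` and `settle` into one call, which is why ATM transactions behave the way they do.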

PS: There are few differences between credit cards and debit cards. I suppose the actuaries have a different view of this, but it amounts to the same results. It's still a 15- or 16-digit card number.
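Those 15- and 16-digit numbers share one public property regardless of card type: the last digit is a Luhn check digit. The algorithm is public and trivial; nothing below is specific to any issuer, and the numbers used are the well-known published test numbers.

```python
def luhn_valid(pan: str) -> bool:
    """Luhn mod-10 check: double every second digit from the right,
    subtract 9 from any double over 9, and require the sum % 10 == 0."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The published 16-digit Visa test number passes; flip a digit and it fails.
assert luhn_valid("4111111111111111")
assert not luhn_valid("4111111111111112")
```

Which is also why a "stolen" card number is so easy to generate or validate offline: the checksum catches typos, not fraud.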

The Short Version


What does all of this mean?

The cardholder bank makes money when you deposit money and potentially gives you a fraction back as interest, once it has charged you fees. The cardholder bank also makes money during the clearing and settlement process via the "demand". The bank does pay processing fees of a sort, but the majority of the bank's gross revenue comes from the transaction.

The reality is that the merchant pays the freight on card transactions. And that is passed through to the cardholder.

NOTE: if you want to create an issuing processor from the ground up then I strongly recommend that you get someone to do the IP for you. Get some cardholders and capture the transaction revenue. You can also use your own system (although you might be processing on someone else's IP, at least you are getting instant discounts; I hope that makes sense). This is the reason that Discover can return 5% on all transactions, and similar for Costco-Amex and others.

What does it all mean?


Someone in the diagram above lost my data, or allowed it to be stolen. Whether or not that data is used to perform actual fraudulent transactions should not be my problem. I pay to get the card. I pay to use the card. And I get a fraction of the value back in interest if I do nothing... except fees for not using it.

This letter that I received should not be a "get out of jail free" card for whichever entity permitted my data to leak. I should be able to sue them individually because any class action lawsuit only benefits the lawyers and not the cardholders. In fact they should just start dumping money on my doorstep in advance of any bad thing that might happen. And more importantly I will be watching my credit scores for the rest of my life... looking over my shoulder waiting for someone to take advantage.

PS: Suze Orman once said that you should never cancel a credit card; if you do, it will negatively affect your credit score. I have a Delta/Amex frequent flier card that I do not use. They charge me $100/year for membership and I get nothing in return, except that they extended me some credit that I have to pay for anyway if I elect to use it.

In the US our laws seem to protect corporate America and not America. What is good for corporate America is not always good for me!

In Summary


We are not safe and we are paying too much.

I almost Forgot


... the reason for writing this post in the first place. The association that sent me the letter recommended that I check with the various credit bureaus in order to see whether my personal information was in fact being used. True, that is an option; however, the credit bureaus only give me one or two free reports a year. And if you've ever used their services, they harass you with FUD and other tough sales pitches and tactics in order to get you into a subscription. The wording in their online apps is so questionable that it was obviously intended to get me, or anyone else, to make a mistake.

Really what I'm suggesting here is that this service needs to be FREE for the individual. Forever.

Saturday, April 28, 2012

Google drive support page useless

Google Drive is close to Dropbox but adds support for syncing Google Docs.

Google's support for the application is severely limited. There is little or no indication of any type of error. I still have 100 files missing on my target system.

Dropbox is much more reliable.

Thursday, April 26, 2012

Google Drive - just not syncing everything

[Update 2012-04-28] It was looking like over 100 files failed to sync, and I discovered a couple of things. (a) Google Drive's desktop app for OSX does not replicate the .DS_Store file (yippee). (b) It also appears that if you "touch" or change a file's timestamp, that is not enough to trigger a re-copy. (c) It does not copy or process symlinks.
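Based on the observations above, a quick audit script can list the things the desktop client reportedly skips before you sync. This is my own sketch, not part of any Google tool, and it only covers the two cases I noticed (.DS_Store files and symlinks):

```python
import os

def unsynced_items(root):
    """Walk a tree and collect the items a sync client that skips
    .DS_Store files and symlinks would leave behind."""
    skipped = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name == ".DS_Store" or os.path.islink(path):
                skipped.append(path)
        # symlinked directories would be skipped too
        skipped.extend(os.path.join(dirpath, d) for d in dirnames
                       if os.path.islink(os.path.join(dirpath, d)))
    return skipped
```

Run it against your Google Drive folder and at least you know what will not make it to the other side.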

Google Drive is mostly working... but it is simply not syncing all of my files reliably. This would be a critical flaw and a single reason to go back to DropBox.

Yesterday a couple of files appeared in the Google Drive menu as "failed to upload for unknown reason". Strangely enough, the files were already uploaded and they had been downloaded by one of the other machines. They were Google App files (gdoc and gsheet); however, the sync indicator was on the files and the parent folder icon. What was that all about?

Now I have files with the sync icon all over my filesystem instead of the checkmark. This is a fundamental function/behavior that should not happen.

Sorry I do not have any recommendations... I really like the unified filesystem. It's so much better than what DropBox does. What that means is that many of the files are just stubs that cause the Google Apps to launch and open the file or do some other action. I'm hoping that this "sparse" file system is extended to regular files.

Wednesday, April 25, 2012

Google Drive - ToS cannot stand the test

Some writers have criticized the Terms of Service for Google Drive. It's nice that someone has the time, energy and a law degree to take them to task. But while it's nice to throw rocks at Goliath, I'm not certain that they are going to get any real remedy, as the ToS is critically flawed in my opinion... sort of a poison pill.

First and foremost, the ToS requires that I relinquish some rights to the material I'm uploading or allowing to be processed by Google's services. (a) This is very similar to the verbiage of the other cloud file services. It's there because they are going to compress and uncompress the files in order to be efficient on their side, and it's a request to hold them harmless in a sense, because they make mistakes and shit happens... sometimes on purpose and sometimes not. (b) Since we are likely uploading documents that carry various types of licenses, assigned to me/you/us, that are non-transferable... just because I said it was OK is not going to make it so.

Anyway, this argument and ones like it have already been bandied about with Box, DropBox, SugarSync, and CrashPlan among others. Frankly I do not think that Google is going to read the email you sent with the family recipe for oatmeal cookies and open a bakery. (Take 5 minutes and do a little light reading on internet email systems. Most email travels in the clear. It's like sending a letter to your grandmother where, when it gets to the post office, someone opens the envelope and forwards the letter inside to the post office closest to her home; then the postmaster puts it in a new envelope and delivers it to her house. But this only describes some of the ways email is delivered.)

On the other hand a little paranoia is a good thing.

Google Drive - cannot sign in

I've been having trouble signing into my Google Drive and I finally figured it out.

(a) Google started updating my account several hours ago. Or maybe not. Anyway, I went to http://drive.google.com and signed up for notification. I assume this put me on a list somewhere.

(b) I finally received my invitation and a link to download and install the Google Drive desktop app. And so I did.

(c) When I launched Google Drive... (1) it took a while to load and to show up on the task switcher. (2) when the login screen was displayed the account/email address was pre-filled and I only had to enter my password.

However, no matter what I entered the screen never processed my password. I tried downloading a fresh copy and then re-installing it. Sadly that had no effect.

There was a HELP link on the login screen. When I clicked on it, my network monitoring program, Little Snitch, complained that an app was trying to access the network. What caught my attention was that there were some video artifacts suggesting that something was shared between Google Drive and Google Chrome.

(d) I quit Google Chrome.

(e) I restarted Google Drive and was able to sign in.

(f) I restarted Google Chrome.

Now everything is working well. I'm not certain what happened... my intuition suggests that there must have been a shared library or install file that collided. Anyway, it's syncing now.

Google App Drive Space

People have been reporting that their Google Docs account has 10GB... and they are supposing that this is on account of the upcoming Google Drive. I tried to download Google Drive but I get a YouTube video to watch and a "notify me" link. And when I'm on my Gmail account I see the storage meter advancing.

When I first created my Gmail account I had about 7GB of free space. Over the last 2 days it has grown to 9GB. I like the additional space, although I recently upgraded my account so that I had 25GB of storage for my docs.

On the one hand you'd think that all of these applications would share the same storage; but I guess not.

Now that I'm running around in circles ... is Google really adding 100MB an hour to my account or is this more of an animation like a progress meter so that I get some sense that something is going on in the background as they get around to my account?

I dunno...

Where is Google now?

Over the past few days there have been press reports that Google is deprecating some of its tools, as evidenced by Google's own project pages (link1, link2). What has me concerned about this policy is that I might have an idea for the next great webapp, or I might have a client using some critical tool that Google is deprecating... now what?

As an observer it's too difficult to know which projects are in or out. It's probably safe to say that GMail and Google Apps are in: while GMail is free and there is a free version of Google Apps, there is a commercial component there too. But what about AppEngine? Well, there seems to be an ecosystem here, and they just released Go 1 for AppEngine. But while this is fun and interesting for geeks and internal Googlers, what does it mean for external businesses?

I think that Google is a riskier play than say deploying on a virtual or dedicated host or even another cloud vendor. And until someone can corner Google management with a commitment it might be better to pass on AppEngine for now.

That said, my platform development strategy going forward will either be Java or Python (probably Python), making certain that the code is compartmentalized into libraries that will work on either platform... giving the client flexibility. The good news is that Django also works in both spaces.

Tuesday, April 24, 2012

Boutique Headhunters

There was a time when I thought there might be a need for a boutique headhunting service. I thought that I might also consider outsourcing my interviewing or tech screening to headhunters and recruiters. But it's complicated: the mission of the commercial recruiter is different from the corporate recruiter's, and definitely from the job seeker's. I was never able to convince myself that there was a real market.

Enter boutique headhunters...

Recently I was contacted by one such firm. They were looking for someone with Perl and payments experience. On the Perl side I've worked on a number of proprietary projects: a general-purpose reporting tool/framework, a NOC monitoring webapp, a sysadmin webapp, a DBA webapp, a trouble ticket webapp, a Verifone terminal administration tool, and a few experimental apps that were just personal exploration.

This headhunter was only interested in my Perl code, my CPAN contributions, and which Perl idioms I was familiar with. If you have read any of my previous articles you know exactly how I feel about this.

(a) I do not have any commercial grade software, written in perl, that I can subject to this scrutiny. Partly because it's not mine.

(b) The public code that I have is scrap code that I used to prove a point or debug a module. Again, nothing worthy of scrutiny.

And as I sit here trying to write the conclusion to this experience, my mind wanders in several directions. Either the recruiter does not get it or the client does not. This sort of narrowly focused recruiting approach never turns out well; it's a temporary patch at best. People do not like to work in hostile work environments unless they are at the top of the food chain.

Django project folder format

It seems that Django has changed the project directory structure with version 1.4. This Stack Overflow post does a good job describing the new layout. I'm in the process of refactoring an Asterisk dashboard so that it's multi-tenant and runs from a single server instead of one per tenant. This is just the first stop.

Monday, April 23, 2012

Cloud - be skeptical

I like using virtual servers. I often find myself wanting PCI-compliant virtual servers, but that's just not going to happen unless I own the hardware. In the meantime the big boys have me covered. Although the next time I speak to my PCI auditor I have to ask if there is a way to get PCI certified on virtual servers... anyway...

I'm always looking for a bargain... recently I was schooled that Amazon's EC2 was less expensive than RackSpace's virtual server. I had to see it for myself, and he was correct. It was not by much, but enough. Of course, if I had automated migration or installation scripts that would be one thing (no Chef or Puppet here). But the savings was not going to offset my time, plus the risk of failure. Still, it was interesting.

This morning I had a StackOverflow page up and there was an advert for MediaTemple. I've never used them but I have looked at their service. In the past they offered shared servers, where your app coexisted with others in the same space (very un-PCI friendly). Some time later they started to virtualize.

(mt) posted this on their landing page:
Virtualization on a Diet
Built on Virtuozzo 4 from Parallels. Lightweight virtualization technology, with less system overhead than Xen servers.

For some reason they seem to think that this is a selling point for me, and I think they have it all wrong. This is how they would justify Virtuozzo internally, not externally. And it certainly does not make them smarter...

The way I read this:
we wanted to achieve greater VM instance concentration per physical node compared to the competition

Which does not help justify their premium!

Saturday, April 21, 2012

Agile Manifesto - Agile Rails - the proof

Typically in a scientific endeavor, if just one component of the proof turns out to be false or unprovable, then the entire endeavor is considered a failure. This is not the same in software development: the successful parts are kept and the failures are deleted.

But this brings me to the point. The author of Agile Web Development with Rails, Fourth Edition (PragProg) quotes the Agile Manifesto and then draws an inference about Rails.
"Individuals and interactions over processes and tools" --Agile Manifesto

and then ...
There are just small groups of developers, their favorite editors, and chunks of Ruby code. --Agile Web Development

and then all was lost...
"Working software over comprehensive documentation" --Agile Manifesto

You won’t find 500-page specifications at the heart of a Rails project. --Agile Web Development

The "problem" here is that it assumes that everyone is an expert; that everyone is performing at the same level all the time; that there is no attrition on the tech or business side; and that people never get bored in their jobs.

It's nice to say that everyone can look at a SHA-512 function and determine what it does and how it works from the function prototype and the code... as easily as they would understand a hello world. That is simply not the case. SHA-512 rests on a rigorous mathematical construction that is akin to those 500 pages of requirements.
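The gap between interface and construction is easy to see with Python's standard library: the calling interface really is hello-world simple, while the design it wraps is anything but.

```python
import hashlib

# The interface is trivially readable...
digest = hashlib.sha512(b"hello world").hexdigest()

# ...but the compression function behind that one call (80 rounds of
# shifts, rotates and modular adds over 1024-bit blocks) is not something
# you could infer from the function prototype alone.
assert len(digest) == 128   # 512 bits = 128 hex characters
```

Reading the call site tells you nothing about whether the implementation is correct; for that you need the specification, which is the point about the 500 pages.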

I recently tweeted that I could not imagine the construction crew of the Empire State Building or the Brooklyn Bridge completing their projects without (a) a requirements doc (b) a blueprint or two.

An Agile group can work on small and less complicated projects, but once you get to a certain scale in complexity, or to Mythical Man-Month scale, then Agile is the real myth.

Friday, April 20, 2012

RVM excels over virtualenv... (update) NOT!

[Update 2012-04-21:] What a freakin' mistake. Rails is such crapware that it defies explanation or description. I had just completed installation of Rails on 2 different Macs and an Ubuntu server. I then created my "demo" project to make sure that everything was installed properly, and I discovered that I had not. The p194 patch itself was pretty good and installed flawlessly, but when I went back to my project and tried "bundle install", each of the projects barfed. When I finally got the Ubuntu installation to feign completion, I ran "rake about" and now get JavaScript errors. I get it: I'm missing more prerequisites.

This reminds me of a complaint I had about Ruby, Rails and gems: the dependency stack is just too freakin' deep. There is no way that anyone knows everything from shell to DB. Think about the autoconf tools. They are so long in the tooth these days that they are more magic than reality; the difference is that the executables there are clear, maintainable and reproducible. Ruby and Rails are no more eliteware than Erlang. Show me someone who claims to be a Ruby expert and I'll show you someone who builds vaporware on pretendware.

(I'm pissed for spending 100's of dollars on books and RubyMine; for spending weeks letting my mind consider that there was some value in Ruby; for taking a Ruby job in Alabama... which I was converting to anything else... and for scanning the Ruby job boards over the last few months.) Virtualenv might not support many different versions of Python; however, Python just freakin' works, and the same can be said for Django.

[Update 2012-04-20:] With RVM you can install just about any version/revision of Ruby that suits you. I am in the process of installing 1.9.3-p194 right now, and RVM supports a number of different flavors like MacRuby and ree. Virtualenv, on the other hand, is at the mercy of userspace installation of the target Python, and even then versions like PyPy require patches not yet pulled back into virtualenv. I cannot say that this is the only reason for moving to Ruby from Python, but it is a pretty strong one.

... when dealing with the issue of language versions, RVM gives you direct access to the versions of Ruby currently available and installed, while virtualenv puts the burden on the user. And installation is a pain in the ass.

Wednesday, April 18, 2012

Ruby MetaProgramming? Really?

Let's tee one up with another visit to PragProg. This time it's metaprogramming. I do not take issue with the book or the publisher; the book is well written, and if I did not like the publisher I would not be buying from them... and I have plenty of their titles.

In the very first few pages of the book the author describes metaprogramming:

  • wrappers and "magic"...

  • bend the syntax...

  • remove duplication...

  • stretch and twist ruby...

  • ... and code generation


Which inspired me to tweet these:

  • Ruby metaprogramming may be best implemented by ruby l337 but even they move on. It's not scalable or sustainable.

  • Ruby metaprogramming is unmaintainable in any environment where maintenance is important. Worse than read-only Perl!!!

  • Ruby metaprogramming = NoRuby


Code generation is reasonable and good. You can leverage data to produce realized code paths instead of completely data-driven code paths. In many cases debugging is easier because you test and debug the fragments and how they stitch together, instead of every possible combination as driven by data alone.
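A toy illustration of the distinction, in Python (the names and the validation rules are mine, purely for illustration): instead of one generic loop consulting a data table on every call, you can realize the table into a concrete function whose generated source can be printed, read and debugged directly.

```python
# Data-driven: one generic code path consults the table on every call.
FIELD_WIDTHS = {"pan": 16, "cvv": 3, "zip": 5}

def validate_data_driven(record):
    return all(len(record[k]) == w for k, w in FIELD_WIDTHS.items())

# Generated: realize the table into a concrete, inspectable function.
def make_validator(widths):
    checks = " and ".join(
        f"len(record[{k!r}]) == {w}" for k, w in widths.items()
    )
    src = f"def validate(record):\n    return {checks}"
    namespace = {}
    exec(src, namespace)      # the generated source can be printed and tested
    return namespace["validate"]

validate = make_validator(FIELD_WIDTHS)
rec = {"pan": "4111111111111111", "cvv": "123", "zip": "33301"}
assert validate_data_driven(rec) == validate(rec) == True
```

The generated `validate` is a fixed fragment you can test once, rather than a combinatorial space driven by the data at every call.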

But that's where metaprogramming shines.

All of the other meta attributes suggest that Ruby "bend and twist" to be something other than what it is. And that is what drove me to tweet. It's Ruby, but not really Ruby... how much metaprogramming would it take for Ruby to simulate Python?

Traditionally, metaprogramming was described in terms of code and data, not adjustments to the syntax or the language. One needs another name for whatever that syntax-mangling practice is called. But also, good luck to you and your read-only Ruby.

The 501 Manifesto

[Update 2012-04-24] This article and others like it have been "trending" lately and I hate to attract attention to it because it's not that good.

The 501 Manifesto is a fun read but it's crap.

Monday, April 16, 2012

Too Much RF to be good



In the picture above you see a MacBook Unibody, a MacBook Air 11" late 2010, a POSX, an eMachines M6811 (installing AVG Free after a complete Windows 7 re-install) and, off screen, an Acer laptop. Just below the eMachines there is a keyboard drawer with an Apple Bluetooth keyboard and trackpad. (And there is a baby monitor that is turned off... and the base station for a cordless phone.)

Today's mystery: why is the keyboard not functioning properly? I cannot seem to wake up my MB no matter what I do. So I opened the MB and logged in directly, then closed the clamshell so that I could use the external monitors... but the keyboard stopped working and only the trackpad kept going. I checked the batteries and so on. This is the same machine that is giving me trouble with my Jawbone BT headset too.

Hmmm... too much RF? Yes? Whose fault is it? Mine. *sigh* I know what I'm doing tomorrow.

Common Sense - NoSQL

A btree is a btree is a btree. No matter how many servers you distribute the workload across, it is still going to take the same amount of time to find the record or records that you are looking for. By extension, the very same thing can be said about a hash lookup. And finally, an unconstrained mapreduce produces the same results as any database scan. So we have gone from O(log(n)) to O(1) to O(n).

When you realize that the same numbers apply to SQL and NoSQL systems alike you have to start thinking critically. For example if the search time is the same between a SQL and NoSQL datastore then where are the differences?

  • Network latency

  • Merging results

  • Distributed search on smaller subsets of the data

  • Disk latency


I'm probably missing a few things here but the point I want to make is this:

  • if you have a search that takes 100 units of work

  • and that search is initiated by 100 users

  • this creates 10,000 units of work.

  • Now distribute that work over 100 or 1,000 compute nodes: each node does less, but the cluster as a whole still performs the same 10,000 units of work.


Assuming that everything else is fair and equal, it should take the same total effort, +/- a little of the overhead I mention above.
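The back-of-the-envelope arithmetic is worth spelling out: 100 users each issuing a 100-unit search is 10,000 units of work in total, and adding nodes only spreads it, it does not shrink it.

```python
# Back-of-the-envelope: distributing work spreads it, it does not shrink it.
units_per_search = 100
users = 100
total_work = units_per_search * users   # 10,000 units, regardless of cluster size

for nodes in (1, 100, 1000):
    per_node = total_work / nodes
    # Each node does less, but the cluster as a whole still performs
    # total_work, plus the network/merge overhead listed above.
    print(f"{nodes:5d} nodes -> {per_node:8.1f} units/node")

assert total_work == 10_000
```

The per-node number drops as you add machines; the total, which is what you pay for, does not.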

Where the real difference between NoSQL and SQL lies is that NoSQL uses CAP as its guiding principle and SQL uses ACID. ACID clearly has more overhead than CAP: CAP makes very few actual promises, where ACID requires complete adherence.

So the next time you start thinking about the database you want to build... first decide whether CAP or ACID is more important. Then choose your brand.

PS: I've watched this pseudo-video a couple of times and I have no idea what the author is really promoting... but taking the message at face value is what interested me. At face value it is in line with my comments, but from the other side of the same stream.

Sunday, April 15, 2012

REVIEW - Build Awesome Command-Line Applications in Ruby

Having great respect for the fellows at PragProg, and owning many of their titles, it's easy to find something to criticize. When I purchased the book "Build Awesome Command-Line Applications in Ruby" I had great expectations. I had skimmed the table of contents and thought this would be a great way to supplement my Ruby education, since I was already heading in a Rails direction.

I maintain that even though I do web-app development in _________ (fill in your favorite language), I have always needed command-line scripts to manage the DB, config, or some other function that I'd need to repeat but did not want to do manually; especially when there were multiple steps.

So I had high hopes. Now that I look back at the TOC, I think it might have been better as a guide or best-practices checklist instead of a 195-page tome. (What do you call a paperweight that is made out of paper; even if it's e-paper?)

The TOC:

  1. Have a Clear and Concise Purpose - that goes without saying. It's "the UNIX way".

  2. Be Easy to Use - same. But take a look at TornadoWeb. Their command-line tools are pretty cool. They make it trivial to manage the command line. You could even go so far as to make the process dynamic or data-driven if there were a use for it.

  3. Be Helpful - plenty of tools for python and perl developers here. I particularly like the perldoc tools. It makes man pages very easy to write.

  4. Play Well with Others - this is a nice checklist that makes good sense. Especially when writing those quick and destructive apps where, if the task does not complete, the data will be left in an unknown state... therefore we need to capture certain signals like CTRL+C.

  5. Delight Casual Users - it's kind of insulting that it took more than a page to recommend "good default values".

  6. Make Configuration Easy - This is the same as #5.

  7. Distribute Painlessly - Just about every language publisher has some sort of novel way to publish new modules. GO, Perl, Python, Ruby, Lua... some are better than others.

  8. Test, Test, Test... - That goes without saying. TDD (Test Driven Development) is at the heart of some new *cough* Agile techniques. There are some tools like Cucumber that are designed to read more like English with their grammar. Since I've only skimmed that subject I'm not sure whether it's better than Python's nose or CPAN's Test modules. (Ruby also has RSpec.)

  9. Be Easy to Maintain - This is a subjective chapter. Sometimes one file is the better idea. And they seem to have missed comments completely; the trade-off between comments and well-written code is never discussed.

  10. Add Color, Formatting and Interactivity - This chapter had so much potential. I like that they wanted to discuss color and formatting. Their use of table formatting made me very happy. But they missed ncurses and curses (from the same family), assuming that at least one reader was going to be a unix or OSX programmer... Not to mention that every Rails screencast I've ever watched was done on a Mac.
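As an aside, several of the checklist items above (be helpful, good defaults, easy configuration) boil down to a few lines in most languages. A minimal Python sketch using the standard argparse module; the `greet` tool and its flags are made up by me, not taken from the book:

```python
import argparse

def build_parser():
    # Every argument has a sensible default and a help string,
    # so `--help` doubles as the documentation.
    p = argparse.ArgumentParser(
        prog='greet',
        description='Print a greeting (toy example).')
    p.add_argument('name', nargs='?', default='world',
                   help='who to greet (default: %(default)s)')
    p.add_argument('--shout', action='store_true',
                   help='print the greeting in upper case')
    return p

def main(argv=None):
    args = build_parser().parse_args(argv)
    msg = 'hello, %s' % args.name
    return msg.upper() if args.shout else msg

print(main([]))                    # default behavior, no args required
print(main(['Bob', '--shout']))    # overrides are opt-in
```

Casual users get a working command with zero flags; power users get the switches.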


At this point I have not learned much other than my 20 bucks was not well spent. And frankly I think that Dave and Andy should withdraw this book. It does not live up to any potential that I had in mind when I bought it.

PS: I botched my article title so I had to go back to pragprog for the title. That's when I realized that its full title, "Build Awesome Command-Line Applications in Ruby: Control Your Computer, Simplify Your Life", was really an extreme exaggeration and there was no way they were going to live up to it. (I don't feel cheated because I got to write this critique, but it was not money well spent.)

Saturday, April 14, 2012

Hold me harmless

Everyone complains about terms and conditions in our desktop software and our web services etc. I happen to use trello, github, bitbucket and other free services regularly. So I cannot fault them for including topics like hold-harmless clauses when referring to the service, and security or availability of my data. But it's a completely different matter when you pay for services like CrashPlan, Dropbox and iCloud, where keeping my data safe and available is the primary function.

One of these days I will need to commit to reading more T&Cs.

mnesia sucks - give me SQL any day!

I started programming SQL some 15 years ago. I do not particularly like it because it reminds me of COBOL, which I most certainly do not like either. But that is yesterday's news.

In recent weeks I have had this notion that well-written code is better than well-written documentation over poorly written code. That said, I can see the value in the "values" presented by COBOL. Sadly, SQL is not as expressive; however, its syntax reads a lot better than the existing query syntax for the likes of Riak, MongoDB, Mnesia, and Cassandra (although Cassandra is trying to address that).
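To illustrate the readability point, here is a tiny sketch using Python's built-in sqlite3; the schema and data are made up by me. The SQL reads almost like English, while the rough document-store equivalent (shown as a comment, paraphrased from MongoDB's style) pushes the shape of the question into nested data structures:

```python
import sqlite3

# A toy table to compare query styles against.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, age INTEGER)')
conn.executemany('INSERT INTO users VALUES (?, ?)',
                 [('ann', 35), ('bob', 17)])

# The declarative SQL version: one readable sentence.
rows = conn.execute(
    'SELECT name FROM users WHERE age >= 18 ORDER BY name').fetchall()
print(rows)

# The rough MongoDB-style equivalent would be something like:
#   db.users.find({'age': {'$gte': 18}}, {'name': 1}).sort('name')
```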

The missing link is that there are no REAL tools for administering these databases or applications. And at the extreme end, Mnesia demands that the administrator write their own scripts to manage the database. (The so-called emacs-IDE for erlang is not the answer and never will be.)

As a demonstration of this frustration I posted a question on StackOverflow. It's a long and involved question but it clearly demonstrates that administering an erlang application and a mnesia database cannot be performed by a standard DBA or SYSADMIN. It requires highly paid and highly skilled programmers. It's the sort of thing that cannot and will not scale in an enterprise.

Until this changes I will not have much use for erlang. But keep this message in mind when you start your next project. If the people/administrators who are paid to manage your application do not have the tools they need to detect and resolve production problems, then you have not performed well for your client, employer, or profession.

PS: If anyone can answer my question I'd appreciate that.

Friday, April 13, 2012

Where are all the clouds?

I'm about to start comparing cloud server brands. Please help identify the brands and criteria.

BRANDS: Rackspace, Peer1, Amazon, Linode, GoGrid, Beanstalk, Heroku, Google App Engine.

CRITERIA: phone app, CPU, ram, disk, network, geographic diversity, cost, backup, storage, OS variety, disk persistence, overall system latency, reliability, customer service, PCI.

I would like to compare these services to a home-grown solution, but my intuition suggests that a performant server with all the required hardware to match these brands will be out of reach for just one or two virtual server instances. I'm talking about 50K USD: multiple cores, lots of speedy RAM, a SAN disk cluster, plus spare parts. There is no way to build this on the cheap.

Thursday, April 12, 2012

Your Next Web Application Framework?

Suppose you are the person who has to make the decision as to what language and framework your startup is going to use to deploy its application. What would you choose? There are so many interesting and qualified frameworks that are already powering a good portion of the internet.

(You don't have to know the language and framework well; just enough to argue what makes it ideal.)

What would it be?

For example: I read an article several years ago that strongly recommended Erlang. At the time it was a great idea. The author suggested that using Erlang would attract smart people and keep the actual number of respondents to something manageable. Since then I have implemented several applications that, in hindsight: (a) made it impossible to attract new talent; (b) get more fragile the more time passes, because my recollection of the details is fading; (c) lack the common tools that would allow generalized apps to give access to "operators" instead of programmers.

Other examples include: perl, ruby, python... mojolicious, rails and sinatra, flask, tornadoweb, and django.

I have my ideas... what are yours?

Tuesday, April 10, 2012

Mosh is a bit of a pit

[Update 2012-12-07] Thanks to Curtis for bringing me back to this post. For a second I thought that maybe I was overly critical of Mosh, as it was a while ago when I wrote this article. So I revisited the Mosh site and it's pretty much the same as I remember it. They might be supporting a few more operating systems or environments, but it is close. For me the news is that it is the same product as I originally reviewed. The underlying implementation is the same. That a Mosh session is not IP-locked is actually a security risk I had not considered originally, and as one of the attendees asked... how many ports do I need to open in my firewall? To which the answer was "as many as the number of connections". Roaming is a hard problem. It's probably the reason Google's apps are implemented the way they are.

I was enticed by the mosh project homepage. After I started the install on my OSX box I realized it was a mistake and against my user admin principles and best practices.

(a) the port command required that I do a selfupdate. This should have been the first and only warning I needed. There were going to be many more upgrades after this... and there were. It was painful and it's still risky. Not that anything bad has happened yet; I'm just a little concerned.

(b) once I got things compiled... I tried to connect to my favorite servers. That was a mistake too. My remote servers did not have the mosh server installed and quite frankly I'm not going to install it.

(c) userspace my ass. The homepage suggests that mosh can be installed in userspace. This is not the case when you install via port.

So this was a waste of time.

I do not care much about the bugs in ssh. Besides, mosh uses ssh to bootstrap its connection. So where is the real benefit? I was drawn in by slick marketing. *sigh*. Anyway, ssh+tmux or screen is more than satisfactory. In hindsight... how plausible is UDP for ssh'ing? The last thing I want is my keypresses broadcast around the planet. Not a very good security plan.

A reflection of noisier times

Sitting in my home office I was browsing the Apple site and looking at the new Macs. In particular I was considering a new computer for my in-laws who have destroyed 3 Desktop PCs and 1 Laptop in the last 7 years.

As I reflect on their computer troubles, I start to remember what my office sounded like when I had 2 desktops plus 7 "silent" PCs plus 1 Mac Mini (PowerPC); all of which I have since traded in for a late-model MacBook, a MacBook Air, and 4 RackSpace virtual servers. And now I have so much to be thankful for.

(a) It's quieter in the house. I could hear my office in every room in the house. I recall the couple of times that the power went out... The house was eerily calm.

(b) my home office is not as hot as it was. Sure, later in the day it gets warmer because it faces the sun, but it is still so much better.

(c) The virtual servers at RackSpace are not perfect, but they are a lot more reliable than my home network, internet service provider, and my own hardware.

So as I'm thinking about new toys... I recall a mantra I had about 20 years ago. Get the biggest and best monitor you can afford, then get the CPU. In the end the monitor is going to last longer and have more life than the CPU, which you will likely replace in a year or so. Never has that been more true than now. And as much as I like Apple products, there is simply no incentive to buy an all-in-one. Apple would have been better off making the Mac Mini a pluggable or dockable module in the back of one of its great monitors.


Monday, April 9, 2012

How long should your resume be?

I have two resumes. The one that I use most often is a one-pager; and then there is the "complete" resume of almost everything I have accomplished in 25+ years.

I use the one-pager for several reasons; and I know this because I have been a hiring manager. (a) Many times I'm printing more than one resume on a shared printer. More often than not the printer spills the resumes on the floor, and in most cases the multi-page resumes are no fun to reassemble without page numbers and names on every page. (b) Once the pages have been printed, try to find a stapler, and the staples, and then dig through the pile of resumes for all the multi-pagers again. (c) And if you've ever read 100+ really long blog posts in quick succession, you know that after the first few you start to speed-read, trying to filter to the best few that are going to provide the best information.

This applies whether it's an entry level position or a senior position.

So what is the ideal length of a resume? It's the one that gets you hired. And if you can get to an interview, then you're almost there. You should feel free to get some feedback on your resume from the interviewer: what did they like and dislike? What caught their attention? Once you have enough feedback, you can tweak it for content, length and format.

The trick about executive positions. It's no trick really... while I have never interviewed executive candidates, I have been on the other side. I have also received some solid advice from accomplished executives, and it has been fairly consistent. One executive I know described his first months at a new company: "I sat around; went to meetings; observed the company from the inside out. Then at about the 6-month mark the CEO told me to define my job." Another executive articulated that at executive levels you are hired for your knowledge and skills, not for what you think the outline of the job is.

What does this have to do with resume length? Well, if I were hiring an executive, the list of candidates would probably be short from the beginning. They probably came from recommendations or personal relationships... or were possibly vetted by executive headhunters. At that point I would expect to see something like the "our executives" page on any corporate website, with a nice picture too. Of course you'll need your employment history, with your successes, somewhere for reference.

Good luck to us all.

Saturday, April 7, 2012

commenting best practices for your favorite language

I thought about reading a reddit article about NOT commenting your code.

I decided not to read it because I did not want to be swayed by something that could really be meaningless drivel. But now I'm about to start a code review of a new module and I'm considering the amount of documentation I need to create and the format too.

In the past I would spend hours updating javadoc in order to produce a reasonable resource. In the end it was a very expensive effort: (1) writing it, (2) maintaining it, (3) ignoring it and reading the code anyway. And in the end all of the players would do the same.

I decided to take a slightly different approach. Sadly it looks more like COBOL and python. But I think I made my point for myself:
def lookup(self, lookup_key, lookup_value):
    """
    Convert the lookup_value through a translate dict assigned to the lookup_key.
    """
    retval = ''
    LOOKUP_DEFAULT_VALUE = '*'
    lookup_dict = self.wp[lookup_key]
    try:
        retval = lookup_dict[lookup_value]
    except KeyError:
        retval = lookup_dict[LOOKUP_DEFAULT_VALUE]
    return retval

Except for the 'wp' variable this function should be very clear:

(a) setup
(b) get the translate dict instance if there is one else throw an exception
(c) lookup the value in the translate dict and throw an exception if it is not there.
(d) if the translate threw an exception, then try looking for a default translation
(e) if the default fails, then throw the exception
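For completeness, here is how that wildcard fallback behaves in a standalone sketch. The `Widget` class name and the contents of `wp` are made up for illustration; only the lookup logic comes from the function above:

```python
class Widget(object):
    def __init__(self):
        # 'wp' maps a lookup_key to a translate dict; '*' is the default entry
        self.wp = {'color': {'r': 'red', '*': 'unknown'}}

    def lookup(self, lookup_key, lookup_value):
        LOOKUP_DEFAULT_VALUE = '*'
        lookup_dict = self.wp[lookup_key]
        try:
            return lookup_dict[lookup_value]
        except KeyError:
            return lookup_dict[LOOKUP_DEFAULT_VALUE]

w = Widget()
print(w.lookup('color', 'r'))   # hits the translate dict directly
print(w.lookup('color', 'z'))   # misses, falls back to the '*' default
```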

(off topic) In hindsight I find this funny and a little ironic. As I looked at some example code in the django project source tree, I saw that not everything is getting the benefit of the so-called best practice of commenting everything. As a result I started to think about those hiring managers that read GitHub contributions. What would they think now?

default value bug in python???

One of the annoying things about languages, especially dynamic languages, is some of the odd side effects. This is particularly challenging when trying to debug moderately complex or involved code fragments. Here is an interesting code sample that I'm looking at right now:
def func(a, b, c={}):
    if c:
        print 'something is wrong'
    d = a + b
    c['a'] = a
    c['b'] = b
    c['d'] = d
    return c


print func(1, 2)
print func(1, 2)

In the function declaration I set c with a default value of {}, an empty dict. Unless I provide the 3rd param, I would expect c to be empty every time. However, when I execute this function I get something different:
$ python bug.py
{'a': 1, 'b': 2, 'd': 3}
something is wrong
{'a': 1, 'b': 2, 'd': 3}

Notice that the message "something is wrong" is displayed. That means that the second call to func() is not getting a fresh empty dict. The possible explanations are: (1) {} in the prototype defines a single shared reference, evaluated once, rather than a new heap instance on every call; (2) the '=' assignment in the function prototype does not work the way you would expect. (As it turns out, (1) is exactly what Python does: default values are evaluated once, at function definition time, and mutable defaults are shared across calls.)
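For what it's worth, the standard idiom to avoid the shared mutable default is to use None as the sentinel and build a fresh dict inside the function; a sketch of the fix:

```python
def func(a, b, c=None):
    # Defaults are evaluated once, at definition time, so a mutable
    # default like {} is shared across calls. Defaulting to None and
    # creating the dict in the body gives each call its own instance.
    if c is None:
        c = {}
    c['a'] = a
    c['b'] = b
    c['d'] = a + b
    return c

print(func(1, 2))
print(func(1, 2))   # same result; no shared state between calls
```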

The lesson here might be more than is there a bug in the language design or implementation. More importantly it supports my A-1 best practice ... stay in the sweet spot.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...