Friday, June 29, 2012

BitBucket vs GitHub

I tell you... what a hard decision to make! I cannot decide which tool is better for my DVCS needs. Clearly the toolset is deeper if I go with GitHub. Not only does GH have lots of tools, but there are plenty of 3rd-party tools too; some of the slicker GUIs include Tower for OSX. On the downside, installing the CLI client can get squirrely because there are a few different languages in the dependency tree, although I've never actually had a problem deploying it.

On the other hand, the pricing for BitBucket is better for a shop like mine. I'm "mostly" a sole contributor for my company, and an unlimited number of private and public repositories is reasonable and cost effective. The other thing I really like is that the CLI is written mostly in Python. I think there is some native C code that compiles, but I cannot be sure. I do know that it installs in a very stable fashion.

As for which is better, I'm not certain. I know that they are both quality applications, the companies behind them are strong, and both are widely used. Both are integrated into the Go tools, which I really like. Some people talk about branching and merging and the differences there. I'm not sure I care; I've rarely had to branch/merge.

One last note. GitHub has added some sort of Subversion (SVN) bridge to their platform, and BitBucket has added Git support. I'm not sure if this is an alias type of thing or a real deployment.

Sorry I cannot offer any solutions. I use them all and I'm still trying to reduce my footprint.

PS: an interesting thing I just noticed. The initials for GitHub are GH and the shortcut for BitBucket -> Mercurial is HG. HG <=> GH?? Fun!

PS: two additional mentions. MacHG is a good OSX GUI for Hg. SourceTree works on both Hg and Git.

Thursday, June 28, 2012

What is middle management good for?

Rosabeth Moss Kanter recently wrote an article for the Harvard Business Review. She has a number of views on the subject of management hierarchy and I think she misses the point. In response...

First of all, middle management is where executives and upper management get to take off their training wheels. There is a reason why Donald Trump has an apprentice program and not just the TV show, and there is a reason why government does pretty much the same. I would not expect some undergrad with a poli-sci degree to run the country. Without middle management, the revolving door at Yahoo and HP would become a game of Chutes and Ladders. (The last thing I want to see is all of upper management graduating with Harvard MBAs and no job experience.)

Second, it's not the first time in the last 30 years that corporate America has ejected middle management. The last time was because of the recession/depression or whatever it was; it covered manufacturing and technology companies, and in the end it was a huge mistake. There was so much intellectual property in middle management that companies took years trying to retool for the next cycle.

Third, your post was tweeted by an agile coach I respect very much, and his tweet of just the article's title suggested to me that there is an agile twist to this, as most agile teams and companies are very flat. This suggests that a rank-and-file worker at a company that practices agile from top to bottom is (1) a pawn to be replaced, (2) expendable, and (3) not really going to advance in responsibility, because the ratio of workers to managers is so high.

Finally, this article suggests that we (in the US) are moving from a capitalistic society to a socialistic one, and while there are some benefits in both, the problem is that we are democratic capitalists. We want to win the lottery. We want the new shiny toys, cars and houses. So until Madison Ave stops selling gold-plated cell phones, MTV stops selling gold-plated teeth and "cribs", and Donald Trump moves into the 4/3 next door to me ...

Mrs. Kanter, I appreciate that you took the time to write this article, but I see it as a warning of yet another economic disaster.

PS: There was a period of economic growth after the last recession because many of those middle managers started their own companies. I'm not certain that can happen again. I'm just an out-of-work programmer with 25 years of experience, and with my dependents I need middle-management responsibility and salary; I cannot compete with high school and college grads because they do not have houses, cars, families, insurance payments and so on.

Wednesday, June 27, 2012

Why is middle management on the decline again?

I really hate this subject because it affects me directly. Back when Reagan was President we had a huge recession. As a result, middle management was all but obliterated from corporate America, as management thought this was the way to survive. We learned two important lessons from this reaction: (1) the recession almost turned into a depression and corporate America was almost erased; (2) many middle managers spawned successful startup businesses. (The two are not offsetting, even with the dot-com boom and bust.)

Now, it seems, we are in economic hard times again; whether it's a recession or a depression is for future economists to decide. I do know that middle management is being erased again, and top managers are calling it lean or agile. As a result, the execs are making more money (as part of their "incentive" packages) and entry-level positions have become highly competitive as the job market shrinks without rational stimulation. I wish it were a fair competition, but it's not. It favors the young person. Not because they are smarter, faster, more knowledgeable, malleable, "agile" or even efficient, but because they are cheaper. They do not have houses, cars, families, college tuition, electricity, water, heat or A/C. And of course retirement considerations.

Top-down total-agile management might work in the EU, where just about everything is socialized and retirement is assured. But here in the US, where every 2 to 4 years we talk about how bankrupt the federal retirement system is, this obliteration of middle management and of a genuine career path is alarming. Companies that do not support middle management risk losing their knowledge base, fail to mentor or apprentice their line of succession, and are only interested in short-term benefits.

So here is my message to all senior and C-level executives: start thinking about your company as if it were to exist for another 200 years, and stop thinking about your own personal legacy. If you take care of the future, then the future will take care of you.

Tuesday, June 26, 2012

My wishes for Apple and iOS

I have been thinking about the number of PCs that I have recycled and the number that still need to be recycled. And then I think about my toddlers, who could benefit from a computer but are clearly not ready for anything that looks like Windows or OSX. What is amazing is that they have just about mastered iOS. They can play games, recognize and switch games by their icons, view pictures and navigate the albums.

So here's my wish. I want to recycle my old PCs by installing iOS on them so that my kids have something to play with. I just don't see myself buying a $750 iPad for a two-year-old, the iPad that I'm using is not ready to be recycled, and I don't have more than one iPad, so hand-me-downs are not an option.

Monday, June 25, 2012

Vizio is on the move

[Update 2012-06-26] Vizio just announced "CoStar". It's a GoogleTV appliance that costs $99 USD and has all of the basic features like video and radio from the various media suppliers. The price for the device is in line with AppleTV; now it's a matter of comparing the cost of the media.

It's been a news story recently: Vizio, an American company and maker of reasonably priced TVs, is branching into making computers. Their laptops and all-in-one desktops are aesthetically designed, with plenty of modern features supported out of the box (no custom device drivers) and none of the sponsor-ware. Which leads me to a number of concerns:

(1) Is the $899 starting price really the best price? If you look at the TV market and compare, Vizio was never at the utter bottom, but their prices and quality would put them on just about any gift list I can imagine. However, this price is closer to Sony and Lenovo than to Gateway and eMachines. I would like to get hands-on to see it in action. I would also like to know whether *nix or *BSD is viable.

(2) While the OS is supposed to be an out-of-the-OEM-box install of Windows, I find myself skeptical. Without actually having a machine to evaluate I cannot tell; however, one of the common themes over the last few years has been the general move to social data capture. If the average consumer had any idea how much information was included in the "phone home" feature of many of their installed and commercial applications, they would be concerned. On the one hand this might have offset the cost of the software, but what is the real cost? That's another story.

(3) Just how functional is an OEM install of Windows? My recollection is: not very. Once you get past the deeply embedded Internet Explorer and some of the basics (Control Panel, Minesweeper, Calc, Remote Desktop, Notepad and a dozen other tools and utilities) you're done. I would hope that, by now, Microsoft is offering some Express versions of its software by default as part of the OEM install in order to compete with OSX. On the one hand I'd rather install them separately so that I could decide whether or not to install MS Office, but on the other hand, when considering a computer like this for Grandma/pa, I'd rather complete their install with as little friction as possible.

Saturday, June 23, 2012

Freelance IT / Programmer Consumerism

A few days ago I came across this article, and as I ponder some of the projects I'm working on, my clients, and missed opportunities, I've realized a few important facts.

Every SOHO and small-to-medium business out there depends on computers and software just as much as it depends on electricity and the dial tone. So it's a wonder to me why many of these businesses do not hire companies like DockYard. Not to implement some critical or non-critical application at $120 - $250/hr or $4,000 - $7,000/wk or more, but as a resource to make sure that the high school kid, summer intern, or college CS major does a good job, or at least starts from a solid background or framework.

I'm not trying to take jobs away from this cadre of would-be future Zucks; I'm trying to say that you, as a business owner, depend on your systems and applications to keep your operating costs down and your productivity up. So when your systems are buggy, damaged, or down, you're losing money through failure to convert a customer, loss of a customer, or increased support costs. It is in your best interest to take that into consideration the next time you have to build and deploy a critical application on a hello-world level of experience.

PS: This editorial was a paid advertisement for the services of Florida Freelance IT LLC. I hope you'll consider me for your next project.

Sandboxing OSX apps is a good start

The idea of sandboxing OSX apps is not new or unique. Both OSX and Windows have features that prevent software, particularly 3rd-party apps, from accessing various physical and data resources, but it's not without its detractors, most of whom are just haters. What bothers me is that many in this vocal minority have an agenda, whether it's selling more anti-virus services or they're one of those users that just does not care.

The reality, however, is that system or computer security, whether in the form of built-in firewalls, Little Snitch, or sandboxing, has more to do with protecting the brand than with protecting the user's data. One other side effect is going to be the cost of support.

(1) The first thing you'll notice, whether you're installing software from the App Store or downloading directly from the vendor's website, is that the app is installed as a "shared" app, which means the user needs to be an administrator or have administrator access. And since the installer is built into the application, which has been promoted to administrator, it could install much more than just the application. (Think trojan horse.)

(2) Disk space is relatively cheap these days. Even though SSD is becoming more prevalent (and is more expensive than the mechanical alternative), prices are falling and it's still pretty efficient. So having multiple copies per user is not terrible.

(3) Sandboxing means that the user would install the app in their user folder(s) and that the app would only have access to its own data. On the whole this is a good idea, especially if you're talking about something like QuickBooks, where the application's data could be encrypted either by the sandbox or the application.

(4) At some point, however, applications will need the ability to bridge sandboxes. It seems to me that bridging is a permissions problem that the kernel is ideally suited for.

What does all of this really mean for the user experience? For one, I believe it's going to eliminate the biggest problem for most computer users: the dreaded "you need to reinstall the operating system and all of your applications".

On the one hand sandboxing is meant to protect the operating system from the user applications. On the other hand it's also meant to prevent one application from accessing other applications for either innocent or nefarious reasons.

Friday, June 22, 2012

Assertions in Java and Go

First and foremost, assertions do not exist in Go at all; the language designers had a very specific opinion about it. On the Java side, assert was converted to a keyword. This too is interesting, and yet maybe not so much.

When servers/daemons are designed in the Erlang way, where failure is an option, then things like assertions are OK. The application will crash, possibly generate a core file, definitely generate a log entry, and then restart. It could be the best of all worlds, especially when you have certain expectations.

Here's the thing. Unlike the Python and Erlang idioms, Java is strongly typed, so it almost makes sense to test input ranges and throw exceptions when things go bad. Assertions, on the other hand, do pretty much the same thing by setting the expectation during development and testing. The idea the Java language designers had was that assertions cost time and space, and if assertions can be removed from production then that speed and size are recouped. However, in production you then lose the validation that you had during testing and dev. And while TDD etc. are supposed to perform exhaustive testing, that's just never the case, code coverage or not.

If you think I talked myself around in circles just now... I agree. My head is spinning as a result. So here is the bottom line: forget the keyword. Use an if statement and generate your own AssertionError. Sadly you won't win any coding awards for beauty or brevity, but you have a chance at consistency and accuracy.
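The bottom line above is Java-flavored, but the pattern translates directly; here is a minimal Python sketch of the "if plus your own AssertionError" idea (the function and message are mine, purely illustrative):

```python
def average(values):
    # Explicit guard instead of the assert keyword: this check stays
    # active in production, unlike Python's assert under -O or
    # Java's assert without -ea.
    if len(values) == 0:
        raise AssertionError("average: values must be non-empty")
    return sum(values) / len(values)

print(average([2, 4, 6]))  # → 4.0
```

Not pretty, but the validation survives into production, which is the whole point.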

Thursday, June 21, 2012

It's never really the __END__ in Perl

One of the really neat constructs in Perl is the __END__ token. I do not know the origin, genus or species of this token, but I like it. Back in the day when I was a contractor at IBM, I was handed the tome called ISO-9001. I don't remember much of it anymore other than one or two facts.

(1) Every numbered page needs to be numbered thus: "1 of 10". The reason for numbering the document this way was to ensure that the reader would know when they reached the end of the document.

(2) Similarly, the author was required to identify when the reader had reached the end of the document. I do not remember what the ISO token was for the end of the document, but over the years I have adopted "# # #".

It's because of these rules that, out of habit or some deep-seated need to conform, I always put some sort of tag at the end of every source file to indicate the EOF. And since Perl has this tag already defined... and with the exception of COBOL and maybe Fortran (I have not checked Forth, Pascal, or Prolog), it just made sense to me and so I keep doing it.

It is interesting to note that, in Perl, __END__ marks the end of the code and not the end of the file. Oh well.

N-way file merge Perl, Python, Go and Lua [Java, Ruby]- Compared

[Update 2012-06-22] Here is the Java version of this assignment. It is/was awful. Once you stray from OO in Java, the code inflates like a sea monkey, and clearly OO is over-the-top overkill here.

[Update 2012-06-22] Here is the Ruby version of this assignment. I like its compactness, although that came at a steep price, as accessing hash elements meant clunky dereferencing and string comparisons were just awful. [That was an error on my part; it works as you'd expect.]

Not to beat a dead horse but I now have 4 example implementations in Perl, Python, Go, and Lua.

I did my complaining about Lua in a previous article; in summary, the Lua example is verbose and lacks consistency. I'm not expecting to reduce this to a single LOC (line of code), but I would have liked some additional APIs implementing more efficient algorithms based on internals knowledge, or at least well-documented idioms.

The Go example was fun because version 1.x of the toolset was simple to use. I would regularly execute "go run merge_tick_data_hash.go file1.csv file2.csv" and it would run like a champ. The only challenge is/was that simple errors that most dynamic languages permit until the code actually executes would cause the compiler to barf. Initially I had no idea they were compiler errors, but it was easy enough to get used to. The compiled version of the code was lightning fast to start up and execute, even though it was 1.4 MB in size.

The Perl version took some doing. I was able to reduce the LOC and optimize the code quite a bit. I think there is still some room for improvement based on the Python implementation, which was just a few lines smaller because it had the benefit of being written last. In this case I sacrificed adding the filename to the %ticks hash; that reduced a few LOC but added some dereferencing which "might" be optimized by a good JIT; I cannot say for certain.

I'd like to compare these implementations to a Ruby version, but I'm just not a fan of RVM this week. As for the remaining candidates, I have to admit that I really liked the Go version. However, I do have a complaint: while "Go" seemed like a good name for the project when it started (a prefix of Google), right now it makes Google searches hard. "Go" is such a small and common word that there is no way to optimize searches. One strong advantage is the static linking once the project is compiled.
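For comparison, here is a minimal Python sketch of the same assignment, leaning on the stdlib's heapq.merge, which lazily consumes presorted iterables and keeps only one pending line per file in memory (the function name is mine; the comment/edge-case handling from the assignment is omitted):

```python
import heapq

def merge_files(paths):
    """Lazily merge presorted text files, preserving sort order."""
    files = [open(p) for p in paths]
    try:
        # heapq.merge never loads a whole file; it pulls one line
        # per input at a time, so memory usage stays minimal.
        for line in heapq.merge(*files):
            yield line
    finally:
        for f in files:
            f.close()
```

Driving it from the command line would just be `for line in merge_files(sys.argv[1:]): print(line, end="")`, which is roughly the shape my Python implementation took.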

Wednesday, June 20, 2012

Things I hate about Lua!

A few days ago I was given a programming assignment. It was pretty simple. In fact, it was so simple that I still cannot decide whether the challenge was whether I could follow instructions, or whether I could see deeply enough into the assignment to find all of the traps... or were those traps meant for discussion later, or was the whole thing there to determine whether I knew enough Perl idioms to be a viable candidate?

So now that the assignment is complete and submitted, I decided to implement the assignment in Go and Lua. I might even do a Python and Ruby version, but not right now. Maybe tomorrow.
The assignment: write a program that takes as input any number of presorted files and merges them, preserving the sort order and keeping memory usage to a minimum. And you cannot use external shell commands.

That was pretty much it. The assignment missed a number of issues:

(1) what to do when no files are provided

(2) what to do when 1 file is provided (presumably return that file; but I did not do that)

(3) what happens when one or more filenames are repeated

(4) what happens when the file is either empty or only contains comments

(5) what about comments distributed randomly in the file

That's all I can come up with from memory. Now to my solution. Just so I've said it: I'm not a Lua expert, and for that matter I did not sleep at a Holiday Inn Express last night, so while I feel certain that there may be some optimizations here and there, my contention is that Lua, while interesting, has not made the cut in spite of WoW, LR, FS, etc...

(complaint #1) The number of ways to get the number of items in a table is troublesome. Especially if you are implementing an array with holes in the index list (a sparse table) like 1, 2, 4, 5, the typical #table may yield a result of 2, which is the wrong answer. And while everything is supposed to be a table, one would think that the same length functions would apply to strings. (See my len() function.)

(complaint #2) The string manipulation functions are limited to a set of primitives that can be used to implement the functions that are missing, for example trim(), rtrim(), ltrim(). The real issue here is that there is no indication as to which standard functions would implement these best. Part of that is implementation details, and the other part is keeping the source code small so that there is that much less to care for and feed.

(complaint #3) Use of the arg[] table is strictly non-standard. It looks nothing like C, Perl, Python or Ruby. The fact that it's a table (array) is only part of the problem. In order to count the number of user parameters you have to subtract 2 from the len() of the array. The problem here is that "lua" is the first parameter, and therefore if this were compiled instead of interpreted code, that item could be absent from the table and my len(arg)-2 would fail.

(complaint #4) The absence of a formatted print. Instead, Lua implements a formatted string.

(complaint #5) In the various loop structures there is no next, last, or continue; just a break.

(complaint #6) There is nothing special about Brazil that could possibly justify using ~= as the "not equals" operator.

(complaint #7) The file I/O is just plain clunky. The idea is that once you open a file, that file is considered the "current" file, and from then on you can perform certain I/O operations on the current file. But they do not provide decent examples of how to switch the current file. Instead I had to find examples of file I/O that mixed a couple of the function families: io.* and file:*.

(complaint #8) pairs() is a nice function, but it would also be nice to have a keys() function too.

(complaint #9) The commenting is similar to many SQL dialects, but it's still trouble; especially when the first line of the file is the #!/usr/bin/env line, and therefore you are mixing comment types. Unless there is some purist manual that says that the first line of any script file that includes an external directive is not really owned by the program... MEH.

(complaint #10) There are two ways to delete items from a table. If the table is indexed as an array, then table.remove() is the preferred method. If the table is a hash, then assigning nil to the value is preferred. And that's where I get really annoyed. "Preferred"? Are you kidding me? There is no other way. And I had to find this nugget of information after a search on a 3rd-party site.

(complaint #11) The docs. They are better than most, but they still suck. In some cases the grammar is terrible, and if you get past that, it's simply incomplete. Many of the functions are not described completely, whether it's input/output, a simple functional description, or an example. (See file:read.)

I think I have a few more complaints in me, but I'm done for now. I may write up my experience with Go in comparison, as it's not without its warts, but that is a topic for another day.

Compare table length in Perl and Lua

In the beginning there was Perl; and in Perl were the array and the hash. Later there was Lua; and in Lua there is the table. The table in Lua serves as both an array and a hash, where both are implemented as a hash. But let's start with Perl.

In Perl (don't panic; there are idioms that make this less straightforward):

[sourcecode language="perl"]
# length of an array
my @a = (4,5,6);
print scalar @a; #-- prints 3

# length of a hash
my %h = (four=>4, five=>5, six=>6);
print scalar keys %h; #-- prints 3
[/sourcecode]

In the array example, I'm thinking that the number of elements in the array is actually stored in the array, and that converting the array to a scalar causes Perl to return the stored value. I would really hate the idea of Perl counting every element in the array every time this was performed... and if it did, that might justify storing the length locally for the different uses, although this gets funky with multi-threading.

As for the hash version: the keys() function exports the keys held in the hash as an array, which is then converted to a scalar as in the array example.

In Lua (go ahead and panic as the code is ugly):

[sourcecode]
-- length of an array
local a = { 4, 5, 6}
count = 0
for _ in pairs(a) do count = count + 1 end
print(count)
print(#a)

-- length of a hash
local h = { four = 4, five = 5, six = 6}
count = 0
for _ in pairs(h) do count = count + 1 end
print(count)
print(#h)
[/sourcecode]

In the array example the values 4, 5, 6 are assigned to indexes 1, 2, 3 respectively. When counting with pairs() we are evaluating each entry in the table, just like the Perl hash function keys(). However, when we execute #a, unlike the Perl array functionality, Lua initializes an index counter at 1 and increments it so long as the value is not nil. So in the case of the array these two methods are synonymous.

In the hash example the count method works exactly the same; however, the #h fails completely.

Clearly the language designers did not think this was a big deal, whether because counting the pairs is efficient enough, or there is another, more efficient way, or the reader is actually expected to fill in the gaps with more robust code. After all, Lua is not made up of much code anyway.

I'm not certain that I really care, but it is frustrating, especially when modern languages like Python and Ruby seem to have done away with this particular issue. And as a side comment, we should do away with idiomatic programming.
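For contrast, here is what the same exercise looks like in Python, where a single len() covers both cases (the values mirror the Perl and Lua examples above):

```python
# length of a list
a = [4, 5, 6]
print(len(a))  # → 3

# length of a dict
h = {"four": 4, "five": 5, "six": 6}
print(len(h))  # → 3
```

One built-in, no counting loop, no sparse-array surprises.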

Friday, June 15, 2012

Ruby Contracts Gem - Contradiction?

I continue to buzz Ruby like an Army combat drone. Recently I came across a post that caught my attention. The basic idea is:
the addition of decorators to Ruby methods so that params and return values are checked against particular types.

To my knowledge, the absence of a "contract" is actually a strength of Ruby, Python and Perl. In fact, Python specifically recommends that one should not check param types, and that letting some sort of type exception be thrown is preferred. (It's in one of the best-practices docs.)
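That Python recommendation is usually labeled EAFP ("easier to ask forgiveness than permission"): skip the type check and let the natural TypeError surface. A minimal sketch (the function is mine, for illustration):

```python
def add(a, b):
    # No isinstance() gatekeeping; if the operands do not support +,
    # Python raises TypeError on its own, which is exactly the
    # behavior the best-practices advice prefers over hand-rolled
    # contracts.
    return a + b

print(add(2, 3))          # → 5
print(add("foo", "bar"))  # → foobar
```

The function works for numbers, strings, lists, anything with a + operator, which is precisely what a type contract would forbid.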

The only reason I can fathom for this "Contracts" gem is so that programmers coming from strongly typed languages (Java, C, C++) can transition more easily. The only problem is that there is no place for this sort of coding on a team of programmers experienced in that particular language. The establishment is going to have its way of doing things, and it will be on the new guy to assimilate.

Here is a perl edge case:

[sourcecode language="perl"]
sub add($$) {
    my ($a,$b) = @_;
    return $a + $b;
}
[/sourcecode]

And here is the preferred version:

[sourcecode language="perl"]
sub add {
    my ($a,$b) = @_;
    return $a + $b;
}
[/sourcecode]
Notice that the difference is that the ($$) has been removed from the first example. The reason is that the prototype is redundant: the first line of the function,

my ($a,$b) = @_;

sufficiently describes the function.

"Contracts" is less of a bridge and more of a beaver damn.

Wednesday, June 13, 2012

advert-ware reinvented - no free lunch

It used to be that when you bought your new PC (especially from IBM) or Mac, you received the hardware and the base operating system. Nothing more.

Then some slick marketing guys realized that they could subsidize cheap hardware by installing would-be freeware, and later adware, and then it got so bad (Packard Bell and eMachines) that they were installing hundreds of apps, leaving little room or overhead for your own. Many of the apps could not be removed. And when you did a fresh install of white-label MS Windows, there was always some driver missing.

In recent years a very similar thing has been happening in the browser market. Many browser makers get paid for directing your search queries to one search engine or another. The fact that some browsers give you a choice of search engines is not FREE. They are getting paid by all of them, or there would be no incentive. (I have yet to see DuckDuckGo installed on Chrome.)

A few months ago I was impressed that Twitter was integrated into my iPhone. At first I thought it was a cool idea; however, now that Facebook integration has been announced with iOS 6, I'm pissed. How long before all the sponsor-ware consumes enough resources that I cannot save that last family picture or favorite song? It's no wonder that Apple is moving everything to iCloud. They want all of the local storage for this new model.

We must wake up! There is no free lunch. Everything that you think you are getting for free, especially on the web, has a price. It may not be immediately obvious to you, but it's there; you're just not looking hard enough.

For example:

(1) browsers - already discussed.

(2) anything GPL - strictly encumbered with viral-like requirements

(3) Facebook - you are constantly advertised to, and your social network is worth more than gold

(4) AIM - AOL has targeted AIM for end-of-life, but its intent was market share and retention of their existing user base.

(5) GitHub or BitBucket - no secret there. These are businesses. At some point they should be making money from their paid subscriptions, but the freebies are techno-crack.

(6) XCode - If Apple did not offer a free toolset then someone would likely underprice them (recall Borland and the Turbo brand of compilers). Offering the free toolset lets them control the API. Microsoft lost this battle but is still trying to win the war. Now they are offering an express version of their development environment.

**Feel free to suggest a would-be "free" app and I'll try to locate the cost.

Sunday, June 10, 2012

MsgPack vs JSON - size vs readable

MsgPack as an RPC with an integrated cross-platform compressed payload is useful and interesting. However, when you strictly compare JSON's format to MsgPack's format, it's all about JSON.

One thing we already know about compression is that plaintext ZIPs to about 50%, and binary does not compress very well at all. So compressing a JSON payload will yield some good compression where MsgPack will not, and therefore we need to compare against its native format.
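The plaintext-compression claim is easy to spot-check with nothing but the stdlib; this sketch compresses a repetitive JSON payload (the sample data is mine, and exact ratios vary with the input):

```python
import json
import zlib

# A repetitive JSON payload, loosely shaped like a list of tick records.
payload = json.dumps(
    [{"symbol": "ABC", "price": 100 + i, "size": 10 * i} for i in range(200)]
).encode("utf-8")

compressed = zlib.compress(payload)

# Plaintext JSON shrinks a great deal under DEFLATE; a binary format
# like MsgPack starts out smaller but has less redundancy to squeeze.
print(len(payload), len(compressed))
```

On structured data like this, the compressed JSON lands well under half the original size.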

And after all that, I/we can read and write JSON documents easily by hand.

Thursday, June 7, 2012

Liability Insurance for the Freelance IT Professional

If you ask an attorney, corporate counsel or your insurance agent, I'm certain the answer would be the same: YES, you need insurance! And I suppose from a strictly paranoia or disaster point of view that's true. However, what about the other 99% of the time?

In the most fundamental projects, regardless of complexity or risk, there is an acceptance phase where the client takes ownership of and responsibility for the work product and its benefit. And so the risk transfers, or should transfer, to the client upon acceptance. So then what is the purpose of the liability insurance?

In reality, contractors are not charging enough. If they were, it's likely that more companies would hire full-time resources instead of contractors or consultants, except for the most extreme or specialized vertical knowledge.

Wednesday, June 6, 2012

integration, convergence, happy desktop - what's next?

I'm banging away on my development system: installing Fedora-17 and VMware tools for kicks, watching some YouTube, skimming some emails, reconfiguring iChat, iCal, Skype, MailApp and Sparrow; and thinking about what would make a really productive desktop. Windows, Mac, Linux, *BSD, console-mode, proprietary GUI, X-Windows? Is there a cloud or virtual platform that makes sense? What about ChromeOS or a ChromeBook?

There are just so many brands, platforms, tools, development environments, and purpose built tools that it's almost impossible to be a user and a developer at the same time.

As a user, project manager, and business person, I need access to word processors, spreadsheets, accounting software, some purpose-built applications for reporting, and IM.

As an architect, developer, and tester, I need access to the command line, the necessary desktop development tools like IDEs, various databases, remote systems, and IRC.

The list goes on and on, but the truth is that there is no one "slim" platform that offers all of this goodness. There are three and a half contenders: (1) Microsoft Windows + SkyDrive + Office, (2) Apple OSX + iCloud + iWork, (3) Chrome or ChromeOS + Google Drive + Google Apps, and (3.5) OpenOffice.

Microsoft, Apple and Google all have their own hardware platform. Microsoft tried some privacy limiting moves a few years ago and was slammed. Apple did it and people did not seem to notice. And with Google people don't seem to care.

Microsoft and Apple offer standalone apps for the different desktop functions, whereas Google offers a virtually separate app for each function; however, Google's load lightning fast and perform as needed most of the time. OpenOffice is built on Java, and as fast as the JVM is supposed to be, OpenOffice is just not there.

But when I put on my developer hat, the Google platform is more of a handcuff than a tool. Only Microsoft and Apple offer reasonable toolsets. Google will need to offer a sandbox for development if its platform is going to get the attention of devs.

Monday, June 4, 2012

Another tech question

Given a set of weighted values what combination(s) equal 10?

I coded this latest challenge here. It's fairly concise and it implements the challenge fairly cleanly, with a few loose ends. For example, I do not like the use of a global counter '$i'. I'm also bugged by tokens with a value of 0 (zero). But it's a start.
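For comparison, here is a minimal brute-force sketch of the same idea in Python (the function name and weights are my own illustration, not the code from the gist; zero-valued tokens would still need special handling, since they produce duplicate-sum combinations):

```python
from itertools import combinations

def subsets_with_sum(weights, target=10):
    """Enumerate every combination of weights whose members sum to target."""
    hits = []
    for r in range(1, len(weights) + 1):
        for combo in combinations(weights, r):
            if sum(combo) == target:
                hits.append(combo)
    return hits

# Hypothetical weight set for demonstration.
print(subsets_with_sum([1, 2, 3, 4, 5, 6, 7]))
```

This avoids any global counter at the cost of exponential enumeration, which is fine for small token sets.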

[Update 2012-06-04] Just to be clear, my concern with the code I presented was that the reduce() function had external side effects. And that's just yucky code. I've updated the code accordingly and I like it because it eliminates the side effects entirely, but it meant having to install List::MoreUtils.

[Update 2012-06-05] I forgot to mention that I had initially tried to code the sample in Python; however, I was not able to get a BigInt-like class installed. I did not try very hard, but hard enough. Perl's was there by default (or a simple install), and while I did not implement the code in Ruby, it [Ruby] has a Bignum implementation, so a direct translation should not be too difficult.

[Update 2012-06-05] This is the latest version. In this version I replaced the xdump() calls with Data::Dumper(). I've used this class before and I thought of using it... but as I moved farther away from the initial discussion, some details meant less than they did initially. Data::Dumper() is a good tool and I should have used it from the beginning. I'm not sure what the performance benefit is, but I changed sprintf("%d", $n) to "$n".

https://gist.github.com/2876097

Friday, June 1, 2012

Monty Hall Paradox

[Updated 2012-06-04] I added a Perl version of the same function.

I was recently taking part in a technical interview when the interviewer presented me with the Monty Hall problem. This is nothing like The Full Monty; it is something completely different. I started to solve the problem using a random DSL of my choosing... which turned out to be something akin to pseudocode, or maybe more like the detailed comments one might write for assembler code, 'cause most comments are getting a bad rap these days.

My intuition told me that if we started with 3 doors, 1 car and 2 goats, then once Monty removed one of the goat doors, my odds were going to change from 33% to 50% regardless of whether I changed my guess. Meaning that whether or not I did anything, I was going to have the same odds. (spoiler alert: I was wrong)

So on my flight home I decided to solve the problem or at least simulate it. (the solution is over my head)

[sourcecode language="python"]
#!/usr/bin/env python
import random

def monte_hall(change):
    car = random.randint(1, 3)
    guess = random.randint(1, 3)
    if change:
        # switching wins exactly when the first guess was wrong
        return guess != car
    return guess == car

def monte_run(change, trials):
    wins = reduce(lambda acc, trial: acc + monte_hall(change), xrange(trials), 0)
    print "trials (%d), wins (%d) ratio (%f)" % (trials, wins, (wins / (trials * 1.0)))

if __name__ == "__main__":
    trials = 10000
    monte_run(False, trials)
    monte_run(True, trials)
[/sourcecode]

The code has a few shortcuts in it, but those were only added after the brute-force version implemented it correctly. There may even be a way to reduce the code in the monte_hall() function, but it's small enough for me now. Using reduce() and a lambda were the fun parts, thanks to erlang.

As a result of the simulation here are a few things I learned:

(1) if you do not change your selection after Monte discards one goat/door then your odds of winning are 33%

(2) if you ALWAYS change your selection after Monte discards one goat/door then your odds of winning are 66%

(3) and while I did not actually focus on this, if you randomly (50:50) changed your selection, then for some strange reason your chances of winning were 50%. This choice has a few problems, mainly that there aren't enough trials on the actual show to give you a chance to win. So (2) would always seem to be the best strategy.
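The 50:50 case from (3) can be simulated the same way; here's a minimal sketch, assuming a fair coin decides whether to switch (written in Python 3 syntax rather than the Python 2 of the snippet above):

```python
import random

def random_switch_trial():
    car = random.randint(1, 3)
    guess = random.randint(1, 3)
    if random.random() < 0.5:      # coin flip: switch doors
        return guess != car        # switching wins iff the first guess was wrong
    return guess == car            # staying wins iff the first guess was right

trials = 10000
wins = sum(random_switch_trial() for _ in range(trials))
ratio = wins / float(trials)
print("trials (%d), wins (%d) ratio (%f)" % (trials, wins, ratio))
```

The "strange reason" is just arithmetic: half the time you stay (1/3 win rate) and half the time you switch (2/3 win rate), so 0.5 * 1/3 + 0.5 * 2/3 = 1/2.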

I decided to rewrite the code in Perl:

[sourcecode language="perl"]
#!/usr/bin/env perl
use List::Util qw(reduce);

sub monte_hall($) {
    my ($change) = @_;
    my $car = int(rand(3));
    my $guess = int(rand(3));
    return $guess != $car if $change;
    return $guess == $car;
}

sub monte_run($$) {
    my ($change, $trials) = @_;
    my $wins = reduce { $a + monte_hall($change) } 0, 1 .. $trials;
    printf("trials (%d), wins (%d) ratio (%f)\n", $trials, $wins, ($wins / ($trials * 1.0)));
}

my $trials = 10000;
monte_run(0, $trials);
monte_run(1, $trials);
[/sourcecode]

Coda2 weakness

Coda2 is actually pretty cool, but I do have a complaint. When you have a remote project and a local copy too, it's pretty easy to screw up the version control when you edit remote and local files... accidentally, you're now going to be merging on both systems. And that's a nuisance.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...