commit often?

[Update 2012.03.07] - I forgot to mention that forking, rebasing, and some other basic features make it easier to be a write-only developer. One of the critical paths is updating/merging your local repo with the remote base while you're actively developing. Yes, there are procedures to minimize the risk, like commit, then pull and merge, but this does not isolate your changes nicely. A subject for a completely different conversation: "one code repo" like Google, or "one code repo per project"?
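For what it's worth, that "commit, then pull and merge" procedure can be scripted. The following is only a sketch, shelling out to the git CLI from Python; the remote name "origin" and the branch name "master" are placeholders:

    # A minimal sketch of the "commit, then pull and merge" procedure,
    # driven through the git CLI. Remote "origin" and branch "master"
    # are placeholders.
    import subprocess

    def run(*args):
        """Run a git command and stop on the first failure."""
        subprocess.run(["git", *args], check=True)

    def sync_with_remote():
        # Commit the work in progress first so nothing is lost in the merge.
        # (Assumes there is something to commit; otherwise git exits
        # non-zero and the script stops here.)
        run("add", "--all")
        run("commit", "-m", "WIP: snapshot before syncing with remote")
        # Pull fetches and merges; conflicts still have to be resolved by
        # hand, which is exactly why this does not isolate your changes.
        run("pull", "origin", "master")

    if __name__ == "__main__":
        sync_with_remote()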

I have participated in a number of strategies for committing code to a repository. Each strategy has side effects and consequences, some unintended and some unimaginable. The common strategies are:

  • commit often

  • commit after a complete thought

  • commit daily regardless of state

  • commit after a complete feature


When beginning a new project I like to start with a fresh directory/folder; lay out the first set of project files, which could be created by a generator like Rails, Django, or an IDE; and then take a commit snapshot. This gives me a solid "initial commit/import" so that any false commits from this point forward can always be rolled back to right here.
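Purely as an illustration, that routine could be scripted along these lines; the Django generator and the project name "mysite" are stand-ins for whatever generator you actually use:

    # A rough sketch of "fresh directory, generate, then snapshot".
    # The generator command and the project name are placeholders.
    import subprocess

    def run(args, cwd=None):
        subprocess.run(args, check=True, cwd=cwd)

    def bootstrap(project="mysite"):
        # Let the framework lay out the first set of project files.
        run(["django-admin", "startproject", project])
        # Then take the snapshot everything else can be rolled back to.
        run(["git", "init"], cwd=project)
        run(["git", "add", "--all"], cwd=project)
        run(["git", "commit", "-m", "initial commit/import"], cwd=project)

    if __name__ == "__main__":
        bootstrap()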

Committing often protects the project from hardware and human error, and when you are in a team setting where the scope is narrow or there is a high level of mutual dependency, committing often seems like a good idea. The problem here is that rolling back commits can be a very painful procedure, especially when the commits are things like "updated a comment"; the value of those commits is just not on par with the pain of unwinding them.

Committing after a complete thought seems like another good idea. For me, however, a complete thought can take days to reach a commit. Typically there is a POC (proof of concept) and then a few cycles of refinement. I hate to commit the POC, if only because peer review can be brutal when the code is not taken in context.

Committing daily regardless of state is probably the worst idea ever. It is guaranteed to break the build. In the book Debugging the Development Process the author talks about not committing unless the code builds locally without errors and passes the regression test cases. PragProg has a book, long since forgotten, where they first introduced me to CI (continuous integration). Someone hooked up multicolor lava lamps to the CI machine, and any time the build broke the red lamp would light up. The person who committed the broken code became the librarian.
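One way to enforce the "builds clean and passes the tests or it does not get committed" rule locally is a git pre-commit hook. The sketch below is only a sketch: the make build and make test commands are placeholders for whatever your project actually uses. Save it as .git/hooks/pre-commit and make it executable:

    #!/usr/bin/env python3
    # Pre-commit hook sketch: refuse the commit if the build or the
    # regression tests fail. The commands below are placeholders.
    import subprocess
    import sys

    CHECKS = [
        ["make", "build"],   # assumed build step
        ["make", "test"],    # assumed regression test suite
    ]

    def main():
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print("pre-commit: '%s' failed; commit blocked." % " ".join(cmd))
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Hooks like this are advisory; git commit --no-verify skips them, so the CI machine and its lava lamps remain the real gate.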

Finally, committing after a complete feature. This is not unlike committing after each complete thought; however, in this case you have a better connection between the feature request, aka the story (I hate that usage), and the actual code. I don't really like the idea of one feature request having n pull requests. I remember a teammate/librarian; she was constantly pulling requests and merging them back into trunk. It was simply painful to watch.
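If you do want a one-to-one connection between a feature and its code, one sketch, assuming a "master" trunk and hypothetical branch and story names, is to squash the whole feature branch into a single commit when it lands:

    # A rough sketch of "one feature, one commit on trunk": do the work on
    # a branch, then squash-merge it back so trunk history shows a single
    # commit per feature. Branch, trunk, and story names are placeholders.
    import subprocess

    def run(*args):
        subprocess.run(["git", *args], check=True)

    def land_feature(branch, story_id):
        # Bring trunk up to date, then fold the branch into one commit.
        run("checkout", "master")
        run("pull")
        run("merge", "--squash", branch)
        run("commit", "-m", "%s: %s merged as a single feature commit" % (story_id, branch))

    if __name__ == "__main__":
        land_feature("feature/login-page", "STORY-123")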

As an aside: committing for the sake of code reviews or metrics is a mistake. Programmers will eventually find a way to game the system, whatever system you set up. LOC (lines of code) as a metric has long since been debunked.

So what is the best practice? It's probably somewhere in between. It depends on the team and its capabilities, and on the scope of the project or projects. The important thing, however, is to be flexible.

Comments

  1. Nice post. When to commit work is the daily dilemma most agile developers go through :) Striking a balance is important; to achieve it, one has to know the application and its dependencies well, and collaborate to deliver the right code consistently. It is hard to come by....
