
bootstrapping go web projects

At first I thought this was going to be a good idea: some sort of opinionated framework would bootstrap whatever project I might be working on... and voilà. But that was before I started looking at the details, and now I have a different opinion.

It's a bad idea to base my next project on this sort of framework.

Part of my opinion is intuition and the other part is experience. While the authors have made some excellent choices, and they will work in most cases, they are not going to work in all cases; without a more general plug-in strategy you are better off knowing and understanding the ideas and the glue, and implementing your own strategy.

The authors have clearly just glued a bunch of third-party packages together. That is not a bad thing, but you need to understand the code before you blindly incorporate it. The Go Authors are very clear on this point, as the stdlib is fairly feature-complete.

Here is their list, with my objections:

  • PostgreSQL is chosen for the database.
    • as much as I dislike CAP there are use-cases for document and key/value storage
  • bcrypt is chosen as the password hasher.
  • crypto is a huge risk for any system; bcrypt has been vetted by the OpenBSD team, so it's a good choice
  • Bootstrap Flatly is chosen for the UI theme.
    • can't argue with this as it's clearly pluggable
  • Session is stored inside encrypted cookie.
    • good
  • Static directory is located under /static.
    • ok
  • Model directory is located under /dal (Database Access Layer).
  • not certain this is a good idea. models, packages, and CRUD are strong ideas, but there is something to be said for generators, which are missing.
  • It does not use ORM nor installs one.
    • good
  • Test database is automatically created.
    • meh
  • A minimal Dockerfile is provided.
  • since this is a Go program it should have been built on the scratch container
  • is chosen to manage dependencies.
  • godep is the current gold standard, but gb is on its way
  • is chosen to connect to a database.
    • this is a clear winner
  • is chosen for a lot of the HTTP plumbings.
  • not a chance. This is a core and important system; you should implement your own.
  • is chosen as the middleware library.
    • see previous note
  • is chosen to enable graceful shutdown.
  • this cannot possibly work properly. Do it yourself and integrate your project with haproxy, Vulcand, etc...
  • is chosen as the database migration tool.
  • schema migration is the hardest part of any deployment. This might be a good tool, but it requires some investigation. Commercial ventures make a good living doing this sort of thing. An open-source version of the same quality and versatility would be a great win (depending on the licensing)
  • is chosen as the logging library.
There are a few missing packages: go-bindata and go-bindata-assetfs. And a good Makefile/Dockerfile. Taking a page from the Apcera gnatsd project, they provide a Dockerfile that is the makefile.

The last thing that is missing is a license dependency tree. Just what exactly are the challenges here? If there is a single dependency licensed under the AGPL, then you have a number of commercial licensing issues. Any corporate attorney would require this evaluation, so you are better off doing it for yourself before you get started.

Good luck.

** know your stack!!! 

