
the packs look nothing like what I bought

In my previous post I talked about the sales, marketing, and manufacturing of packs and what I might want next... looking closely at the Kumo and the Murmur.

Here are some other packs that I saw, liked and purchased.



I liked this pack, the Camelbak Arete 22, because it was the largest in the Camelbak line of packs: one deep pocket, 22L gross and 19L net capacity. One serious downside is that the bladder is not modern; there is no disconnect or drain.


The Marmot Kompressor Plus is a 20L pack. Unlike the Camelbak above, this Marmot is down to 18L when I add the 2L water bladder, although it does have a water drain. One interesting comparison with the Camelbak is the location of the frame pad: if you use a bladder with the Marmot, the bladder is separated from your body by just a piece of fabric, whereas the Camelbak's frame sits between you and the bladder.

The Klymit Stash 18 is just small, and while there is room for a bladder, using it means having to leave some gear behind. In the previous post I had already slimmed down to one night with barely enough food.

What is interesting and compelling about the marketing pictures is that they really help sell the product, but there are realities about the volume and amount of gear you actually want to carry. That said, this is what these packs really look like once they are packed:


They are simply not very attractive, and packed like this there is no way I would have bought them. So clearly it's a matter of forgetting the fashion and working on the function.

And so, with the biggest of my packs being the Camelbak Arete 22, I packed all my gear and there was plenty of room given the previous constraints. It does not look like the pretty sales material, but at least I know this is a solid 2-3 night pack now that I have extra room for food and a trowel.


Popular posts from this blog

Prometheus vs Bosun

In conclusion... while Bosun (B) is still not the ideal monitoring system, neither is Prometheus (P).

TL;DR

I am running Bosun in a Docker container hosted on CoreOS; fleet service/unit files keep it running. However, I have experienced at least one severe crash as a result of a disk-full condition. That it is implemented partly in golang, java, and python is an annoyance. The MIT license is about the only good thing.
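For reference, a minimal fleet unit file for that setup might look roughly like the sketch below; the image name (stackexchange/bosun), the port, and the restart policy are assumptions for illustration, not my exact file:

[Unit]
Description=Bosun in a Docker container (sketch)

[Service]
# Remove any stale container before starting; the leading "-" ignores failure.
ExecStartPre=-/usr/bin/docker rm -f bosun
ExecStart=/usr/bin/docker run --name bosun -p 8070:8070 stackexchange/bosun
ExecStop=/usr/bin/docker stop bosun
Restart=always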

I am trying to integrate Prometheus into my pipeline but I am losing steam fast. The Prometheus design seems to want you to keep your own cache of metrics inside your application and then let the server scrape that data; however, if your application's transient sessions are shorter than the interval between scrapes, then you need a gateway: a place to shuttle your data that will be a little more persistent (see the sketch after the list below).

(1) storing the data in my application might get me started more quickly
(2) getting the server to pull the data might be more secure
(3) using a push g…
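To make the push-gateway idea concrete, here is a minimal command-line sketch of pushing one metric to a Prometheus push gateway; the gateway address, job name, and metric name are all assumptions for illustration:

# Push a single untyped metric to a Pushgateway assumed to listen on localhost:9091.
# The body is Prometheus text exposition format; the job label comes from the URL path.
echo "ncx_requests_in_flight 42" | \
  curl --data-binary @- http://localhost:9091/metrics/job/ncx_batch

The Prometheus server then scrapes the gateway on its normal interval, so data from short-lived work outlives a single scrape window.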

Entry level cost for CoreOS+Tectonic

CoreOS and Tectonic start their pricing at 10 servers. Managed CoreOS starts at $1000 per month for those first 10 servers and Tectonic is $5000 for the same 10 servers. Annualized that is $85K, or at least one employee depending on your market. As a single-employee company I'd rather hire the employee, especially since I only have 3 servers.

The pricing is biased toward the largest servers with the largest capacities; my dual-core 32GB i5 Intel NUC can never be mistaken for a 96-CPU dual- or quad-core Dell.

If CoreOS does not figure out a different barrier to entry, they are going to follow the Borland path to obscurity.

Weave vs Flannel

While Weave and Flannel have some features in common, Weave includes DNS for service discovery and a wrapper process for capturing that info. In order to get some parity you'd need to add a DNS service like SkyDNS and then write your own script to weave the two together.
In Weave your fleet file might have some of this:
[Service]
. . .
ExecStartPre=/opt/bin/weave run --net=host --name bob ncx/bob
ExecStart=/usr/bin/docker attach bob
In SkyDNS + Flannel it might look like:
[Service]
. . .
ExecStartPre=docker run -d --net=host --name bob ncx/bob
ExecStartPre=etcdctl set /skydns/local/ncx/bob '{"host":"`docker inspect --format '{{ .NetworkSettings.IPAddress }}' bob`","port":8080}'
ExecStart=/usr/bin/docker attach bob
I'd like it to look like this:
[Service]
. . .
ExecStartPre=skyrun --net=host --name bob ncx/bob
ExecStart=/usr/bin/docker attach bob
That's the intent anyway. I'm not sure the exact commands will work and that's partly why we…
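As a rough illustration only, a skyrun wrapper along those lines could be a small shell script like the sketch below; it mirrors the example above, and the /skydns/local/ncx prefix, the port 8080, and the simplified argument handling are all assumptions, not a tested implementation:

#!/bin/bash
# skyrun (hypothetical): start a container and register it in etcd for SkyDNS.
# Argument handling is simplified to fit the example: skyrun --net=host --name bob ncx/bob
set -e

NAME=""
ARGS=()
while [ $# -gt 1 ]; do
  case "$1" in
    --name) NAME="$2"; ARGS+=("$1" "$2"); shift 2 ;;
    *)      ARGS+=("$1"); shift ;;
  esac
done
IMAGE="$1"

# Start the container detached, then ask Docker for its IP address.
docker run -d "${ARGS[@]}" "$IMAGE"
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$NAME")

# Publish the address where SkyDNS expects it so the name resolves.
etcdctl set "/skydns/local/ncx/$NAME" "{\"host\":\"$IP\",\"port\":8080}"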