
Posts

Showing posts from May, 2014

Another win for ChromeBox

I had my niece and nephew over for some pool and pizza today. They were playing on my daughters' ChromeBox and teaching them how to play games. It was a beautiful sight and a fantastic advertisement, and it was reassuring that I made the right purchase for them.

Mavericks 10.9.3 killed my keychain

I have no idea what just happened, but it appears that the latest update to OSX (10.9.3) ate my keychain. Not to be outdone, it also deleted my WiFi password, so while I was trying to enter my iCloud password the machine could not connect to the internet. Nothing meaningful worked until I could get past the iCloud password request, keep the OS running long enough to get the WiFi started, and then restart iCloud.

What a mess!

And to think I just had a conversation with my father about how bad Windows 8.1 was. I cannot imagine what sort of things he'd have to say if he went through this.

Flow Based Programming - groups

The Flow Based Programming model, at least in my version of it, is essentially sending strongly typed messages between nodes that run in separate threads. The entry and exit points of each node are typed and have queues; nodes can fan-out and fan-in.

When one node sends typed messages to another node, the messages can be consumed serially and without interference from other linked nodes. That means a fan-in input can consume an entire group from one fan-out before accepting input from other groups or other nodes.

The producer of the group needs to identify the group, make a connection to the target subscriber (one to one), and, if granted permission to start sending, send until it runs out of data, at which time it sends an end-of-group message.
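A minimal sketch of that group protocol in Go; the Packet type, the Kind markers, and the channel transport are my own illustrative names, not a real FBP library:

```go
package main

import "fmt"

// Kind marks whether a packet carries data or closes a group.
type Kind int

const (
	Data Kind = iota
	EndOfGroup
)

// Packet is a hypothetical strongly typed message with a group identifier.
type Packet struct {
	Kind    Kind
	GroupID string
	Value   int
}

// producer sends one group of packets and closes it with an EndOfGroup marker.
func producer(groupID string, values []int, out chan<- Packet) {
	for _, v := range values {
		out <- Packet{Kind: Data, GroupID: groupID, Value: v}
	}
	out <- Packet{Kind: EndOfGroup, GroupID: groupID}
}

// consumer drains one whole group before it would accept input from anyone else.
func consumer(in <-chan Packet, done chan<- struct{}) {
	for p := range in {
		if p.Kind == EndOfGroup {
			fmt.Printf("group %s complete\n", p.GroupID)
			break
		}
		fmt.Printf("group %s: %d\n", p.GroupID, p.Value)
	}
	done <- struct{}{}
}

func main() {
	pipe := make(chan Packet, 4) // the queue on the node's entry point
	done := make(chan struct{})

	go consumer(pipe, done)
	producer("g1", []int{1, 2, 3}, pipe)
	<-done
}
```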

Creating the pipes is the first challenge. (i) using the same pipe for grouped and ungrouped messages will be a bit messy; the channel itself cannot tell the difference between the mess…

SQLite4 is a great idea

There were a few interesting ideas in the design document: (a) SQLite4 is not a replacement for SQLite3; (b) they are implementing pluggable backends; (c) it's all one key/value store.

That 4 is not a replacement for 3 is nice in that I can be reasonably assured that my existing code has some running room before I have to find an alternative.

Pluggable backends, however, are causing me some concern. One thing I like about the folks at SQLite is that their code is highly reflective and opinionated, meaning they've thought about it long before they executed and put it in my toolbox. Now pluggable backends will provide a vector for all sorts of less considerate code to make it into my project.

Finally, that the engine is based on a key/value model really has me thinking. Why not just put some sort of wrapper around Redis and call it a day? (Actually, that's an epic for the reader.)
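To make the key/value idea concrete, here is a rough guess in Go at the kind of interface a pluggable backend would have to satisfy; the names are mine, not SQLite4's:

```go
package kvstore

// Store is a guess at the minimal surface a pluggable key/value backend
// would need to expose to an SQL engine; the method names are illustrative only.
type Store interface {
	Get(key []byte) (value []byte, err error)
	Put(key, value []byte) error
	Delete(key []byte) error
	// Scan visits key/value pairs in key order starting at the given prefix,
	// which is what the engine needs in order to walk a table or an index.
	Scan(prefix []byte, fn func(key, value []byte) error) error
}
```

An in-memory map, a LevelDB wrapper, or the Redis wrapper mentioned above could all sit behind the same interface, which is exactly where the "not so considerate code" worry comes from.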

PROLOGUE: I find it interesting that all manner of new languages are starting to make progress in …

Chrome Reading List

When is Google going to implement a "reading list"-like feature in their Chrome browser? I hope the reading list feature is not patented; it's just too close to the bookmark, which I also hope is not patented. The integrated reading list is really helpful and simply frictionless. While I have been able to use goo.gl and bit.ly, they are nowhere near as smooth as the Apple Reading List.

Comcast is digging up my lawn

I'm not sure exactly how Comcast can afford to provide Internet, phone, and cable service for half the price of the incumbent in my neighborhood. Advance Cable Communications is charging me $209 a month for an average product. Comcast is willing to charge me half that price for the exact same service plus far better network bandwidth. Being naturally skeptical, I find myself wondering when the other shoe is going to drop.
If I had the sort of clout required to investigate I would be very interested in knowing where the different subsidies are taking place in order to justify the price difference. I have a hard time believing that it is strictly based on the economy of scale or the margins that the incumbent is operating under.

Using defer robs your Go program of speed

The benchmarks are in, and even with the latest version of Go (1.3 beta) it appears that the defer statement is slower by a significant margin. Apparently the implementation of defer allocates memory which must be garbage collected, and that has performance side effects.
Sadly, I have a few hundred defer statements that must now be refactored.
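The shape of the benchmark is simple enough; a hypothetical micro-benchmark (the package and function names are mine) comparing a deferred unlock against an explicit one, which you can run with go test -bench=. in a _test.go file to see the gap on your own toolchain:

```go
package deferbench

import (
	"sync"
	"testing"
)

var mu sync.Mutex

// BenchmarkDeferredUnlock pays for a defer on every iteration.
func BenchmarkDeferredUnlock(b *testing.B) {
	for i := 0; i < b.N; i++ {
		func() {
			mu.Lock()
			defer mu.Unlock()
		}()
	}
}

// BenchmarkExplicitUnlock unlocks directly, avoiding defer entirely.
func BenchmarkExplicitUnlock(b *testing.B) {
	for i := 0; i < b.N; i++ {
		func() {
			mu.Lock()
			mu.Unlock()
		}()
	}
}
```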

Comment your code or Commit every change?

I do not know which is better; however, I believe that the two should be linked. Because the commit happens in a decoupled system, you have no direct visibility into the comment from the code, and when the comment lives in the code it tends to go stale. (Of course, there is always literate programming.)

UPDATE: my editor needs a proper blame view where the blame column might show commit comments instead of the person. (Literate programming is starting to feel better.) I read that Donald Knuth writes 2 to 3 programs a week in this way.

ChromeBox is a Success

You cannot tell a ChromeBox apart from a late-model laptop. And while "it's really important" (a jab at the Microsoft Surface 3) that some of the new tablets are also full of power and functionality, you have to ask yourself if it's really all that important.

My ChromeBox is currently playing the Spotify web client while I'm writing this post. I suppose I could have a few additional windows open, but to what end? Earlier this evening I had a few terminal sessions running and a few browser windows, and it was all working like a champ.

I suppose I would like a little more memory for the browser, but for the simple things in life, who cares? Right now all I need is a text editor and a terminal session and I'm in the sweet spot. (I suppose a Chrome Pixel would be nice if someone were to buy one for me, but for the moment this will do just fine.)

Life without a proper laptop

My MacBook Air has given up the ghost. After 4 complete reinstalls the machine refuses to encrypt the boot partition, which leads me to the conclusion that there must be a problem with the drive. It formats and accepts the installation of OSX, but that's as far as it goes. (This post is being written on my iPad mini, and while typing into the Blogger app is functional, it's just not pleasant.)
There was a time when I was hoping that my iPad was going to be a complete laptop replacement, but now that it's truth time I'm not certain it's going to work. And with Apple asking all of its app makers to sandbox their apps, I'm not sure the Mac experience is going to be any better than the iPad experience.
I do not think that the Nexus or the Kindle would be any better. I do have high hopes for the chromebox.

Secure Software Development Lifecycle

Justification:

While there are a number of obvious attack vectors for would-be black hats, most are never considered or defended against until there has been an incident. This is not to say that a huge investment is required from day one; as we learned from the copy-protection cat and mouse of the 1980s, that is expensive and yields diminishing returns. But if we do a few things up front and in the beginning, we raise the cost for the attacker and become a less desirable target.

Secure Software Development Lifecycle:

frameworks are good

References:

salted password hashing (a short sketch follows these references)
https://crackstation.net/hashing-security.htm

OWASP cheat sheets
https://www.owasp.org/index.php/Cheat_Sheets

Twenty-three Evergreen Developer Skills
http://blog.zeusprod.com/2014/02/twenty-three-evergreen-developer-skills.html?m=1

Google vs Facebook - trunk
http://paulhammant.com/2014/01/08/googles-vs-facebooks-trunk-based-development/

7 Habits of Dysfunctional Programmers
http://www.ganssle.com/articles/7ha…
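To make the salted password hashing reference above concrete, a minimal sketch in Go using the golang.org/x/crypto/bcrypt package, which generates and embeds a random salt for you; the helper names are mine:

```go
package auth

import "golang.org/x/crypto/bcrypt"

// HashPassword returns a salted bcrypt hash of the password.
// bcrypt generates a random salt and embeds it in the returned hash.
func HashPassword(password string) (string, error) {
	hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
	if err != nil {
		return "", err
	}
	return string(hash), nil
}

// CheckPassword reports whether the password matches the stored hash.
func CheckPassword(hash, password string) bool {
	return bcrypt.CompareHashAndPassword([]byte(hash), []byte(password)) == nil
}
```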

CoreOS - auto update goodness

So long as CoreOS does not embed any malware, is not compromised, and remains in business, it is the killer Linux distribution of 2014 and far superior to Project Atomic. The idea that the kernel is going to be updated either automatically or upon the next reboot is going to take some time to get used to. It also means that I have to keep my eyes glued to their site to make sure that any new changes are accounted for, that CoreOS must maintain backward compatibility forever, and that tools like etcd, fleet, locksmith, and systemd must always be backward compatible.

But for me... I have been waiting for Docker 0.11.0 to arrive. After a reboot this afternoon, there it was. Amazing.

What is Google Scale?

It's Mother's Day and I'm thinking about "Google Scale". People talk about the next big thing and what that means in terms of scale, and invariably the optimists continue on to world domination at scale. Nuts!

I'm thinking about scale because that's the place where I work and play. I'm always chasing the scale monster and right or wrong I think I've made a discovery.

The number of compute nodes you need to solve a problem is proportional to the population: (a) not everyone is online at the same time; (b) not all of the data is needed all of the time; (c) failure happens; (d) people are born with no data, and when they die most of their data loses value.

So if you want to Google scale your business there is going to be some magic number of hardware and other resources you'll need assuming that all users need instant access and that number can be massaged based on availability and the number of applications actually running... but in most cases I…
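To put a rough shape on that "magic number", a back-of-envelope sketch in Go; every figure in it is invented purely for illustration:

```go
package main

import "fmt"

func main() {
	// All of these figures are made up just to show the shape of the estimate.
	population := 300e6     // potential users
	onlineFraction := 0.10  // (a) not everyone is online at the same time
	hotDataFraction := 0.05 // (b) not all data is needed all of the time
	usersPerNode := 10000.0 // capacity of one compute node
	redundancy := 3.0       // (c) failure happens, so over-provision
	gbPerUser := 1.0        // rough per-user data footprint

	concurrent := population * onlineFraction
	nodes := concurrent / usersPerNode * redundancy
	hotStorageGB := population * gbPerUser * hotDataFraction

	fmt.Printf("concurrent users: %.0f\n", concurrent)
	fmt.Printf("compute nodes:    %.0f\n", nodes)
	fmt.Printf("hot storage (GB): %.0f\n", hotStorageGB)
}
```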

Flow Based Programming - toolchain

With the team at NoFlo going public with an early beta, I have had a chance to further develop some ideas that were previously just cloudy thought bubbles.

Assuming that the development team is morphing into a multidiscipline team of logic designers and component programmers, the question is: where does one begin with an empty palette? This is a particularly difficult question when the designer needs building blocks to connect and the programmer needs requirements in order to construct the components. The chicken-and-egg argument has never been so clear.

In my vision, everything is made up of DNA. There is a network DNA and a component DNA. Depending on signatures, instances of each can connect and interact (in a very high-school-biomechanics way). Therefore, the designer can lay out the network using very basic component and pathway definitions, and later refine the network with more precisely named channels and add individual requirements for each componen…
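Roughly what I have in mind, sketched in Go; the DNA structs and the signature check are illustrative names of mine, not an existing toolchain:

```go
package fbp

// PortSignature is the typed entry or exit point of a component.
type PortSignature struct {
	Name string
	Type string // e.g. "int", "CustomerRecord"
}

// ComponentDNA describes what a component offers: its ports and its requirements.
type ComponentDNA struct {
	Name         string
	Inputs       []PortSignature
	Outputs      []PortSignature
	Requirements []string // refined later, once the network takes shape
}

// Connection is one pathway in the network DNA.
type Connection struct {
	FromComponent, FromPort string
	ToComponent, ToPort     string
}

// NetworkDNA is the designer's layout: components and the pathways between them.
type NetworkDNA struct {
	Components  []ComponentDNA
	Connections []Connection
}

// Compatible reports whether an output port can legally feed an input port,
// i.e. whether the two strands of DNA can bind.
func Compatible(out, in PortSignature) bool {
	return out.Type == in.Type
}
```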

guests are like fish ... they smell after a few days

... or something like that. I've been using fishshell for a few months, and while I like the command line, history, color, and config, I really hate that it's not compatible with bash. As a result, none of the interesting tools I depend on daily, like GVM (the golang version manager), function properly, because the syntax is just too different. About the only thing I can do is launch fishshell from bash, but that's just a hairball.

Therefore, as much as I like some of the other features... fishshell stinks.

As much as it pains me, bash is still the strongest contender. (zsh and all of its candy is not as good as fishshell, but even so the syntax is different enough.) It's clearly time to upgrade bash, build a compatibility layer to the others, or build a new shell to rule them all. (tcl, lua, go, something else)