
Everything is in the cloud - it's all damage control now

There was a time when I was really concerned about what information was "out there". Then I tried to control that information. And finally I realized it was hopeless. Now it's just a matter of keeping it clean and doing ongoing damage control. Not that I've done anything wrong or embarrassing in my life...

The fact remains that the likes of Google, Yahoo, "social media", advertisers, and so on... while they might really want my demographic information, spending habits, income, neighbors, and surfing habits, they will accept my anonymous information just the same. And I give them plenty of both.

The way it works... you visit a website. The site owner has contracted with someone like Google for some sort of service. Whether it's advertising, sales, or just some general tracking is not important. That service drops a cookie on your browser. Later, you visit another site. That site contracts with the same service provider, which looks for a cookie it might have dropped previously. When it finds one, it can connect the dots.
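The dot-connecting can be sketched in a few lines. The following Python sketch is a hypothetical illustration (the `Tracker` class and the example domains are invented for this purpose, not any real ad network's API) of how a single third-party service correlates visits across unrelated sites through one cookie:

```python
import uuid

class Tracker:
    """A hypothetical third-party service embedded on many sites."""

    def __init__(self):
        self.profiles = {}  # cookie id -> list of sites visited

    def handle_request(self, site, cookie=None):
        """Called when a page on `site` loads the tracker's script or pixel.
        Returns the cookie value the browser should store for the tracker."""
        if cookie is None:
            cookie = str(uuid.uuid4())  # first sighting: drop a new cookie
        self.profiles.setdefault(cookie, []).append(site)
        return cookie

tracker = Tracker()

# The browser stores the cookie per tracker domain and sends it back on
# every later request, regardless of which site embedded the tracker.
browser_cookie = None
for site in ["news.example", "shop.example", "blog.example"]:
    browser_cookie = tracker.handle_request(site, browser_cookie)

print(tracker.profiles[browser_cookie])
# → ['news.example', 'shop.example', 'blog.example']
```

Three sites the visitor may believe are unrelated now share one browsing profile on the tracker's side; that is the whole trick.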

This sort of function is everywhere. It's in everything we see and do: the transponders in our cars, our cell phones, the GPS in our cameras. It's everywhere.

So why not put everything in the cloud? What is it that I think I can hide? I'm a lawful person and so I'm not going to advertise that I went through a red light, but someday there is going to be an intersection device that registers the infractions... committed by the car, not necessarily the person.

I do not care that I use GMail and Google Docs. That all of my music is in iTunes match/iCloud. That I backup everything to Dropbox and CrashPlan. The information privacy war is over. We lost. Now all we can do is damage control.


  1. Google Docs is the most popular web application for creating and storing documents online today. Because your documents are stored on Google’s cloud servers, your data is largely protected from the dangers of hard disk failure, sudden power outages or surges, and natural disasters in your area. For that reason, most users feel confident about the security of their files.
    However, studies show that this is not the case. According to a report from the IT Compliance Group, up to 20% of organizations experience 22 or more cases of sensitive data loss a year. This includes customer, employee and IT security data being lost or stolen. Half of these cases could have been avoided if not for the leading cause of data loss – human error. In his book “Normal Accidents: Living with High-Risk Technologies” (1984), noted organizational theorist and sociologist Charles Perrow observed that operators and the personnel handling data are blamed for disasters and data loss 60-80% of the time.

    Whether it’s deliberate or accidental, human error causes huge problems for small businesses and large enterprises alike because of how easy it is for something to go wrong. On Google Docs, a team member with access to your files may accidentally delete vital documents without any chance of recovery (even after the “Trash” folder has been emptied). This makes data loss unpredictable and almost inevitable. Because of this, keeping your Google Docs files backed up is of utmost importance.
    By using a service that keeps your Google Docs files synchronized (in real time!) and backed up, you can avoid the headache of lost data. With such a service, you can keep your Google Docs, Basecamp projects and Dropbox files backed up automatically so that you’ll always have a way to recover your files, whatever the disaster.

    [EDITED 2011.09.16] Selim provides some good information; however, the tail included spam. I removed the advert element.

