
Keeping secrets

There are so many ways to keep secrets but few ways to protect them.

HSMs, or Hardware Security Modules, are probably the most robust option because they typically combine physical security, network security, and access security. They also offer a way to implement a DR (disaster recovery) plan. The strategies are complex and expensive, and so are the devices.

Home-grown HSMs are interesting because DR is typically easier; however, that usually means the data is at rest somewhere, which makes it a little riskier.

Expiration dates are the best and the worst. If you've decided that access to the data MUST be cut off by some date, and that it's a universal policy for everything... and then someone approves an exception, all hell breaks loose as ops tries to manage the exceptions.

Taking that further, when deploying several million unique keys with expiration dates you simply cannot manage the exceptions, so teams typically fall back on one key to drive them all. And that makes the data vulnerable.
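As a sketch of how that fallback tends to look (every name here is hypothetical), the per-key expiry is honest right up until the master-key exception path is added, and from then on the master key is what actually protects everything:

```python
from datetime import datetime, timezone

# Hypothetical registry: millions of unique keys, each with its own expiry.
KEYS = {
    "customer-1234": {
        "key": b"\x00" * 32,
        "expires": datetime(2018, 1, 1, tzinfo=timezone.utc),
    },
    # ... several million more ...
}

MASTER_KEY = b"\xff" * 32  # the "one key to drive them all" exception

def get_key(key_id: str) -> bytes:
    entry = KEYS.get(key_id)
    if entry and entry["expires"] > datetime.now(timezone.utc):
        return entry["key"]
    # The exception path: rather than refusing access after expiry,
    # fall back to the master key -- which never expires.
    return MASTER_KEY
```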

Other mechanisms like Docker Secrets are interesting because the swarm replicates the secrets and only the container they are assigned to can see them. The problem here is that if you can log into a swarm manager you can see any secret by creating a simple container that mounts it. Docker Secrets are a very simple implementation and do not appear to have features like expiration dates or rolling keys. One other challenge is that the names of the secrets need to be provided on the CLI when deploying the service; if you have lots of secrets, that's a long command line.
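For context, a swarm service only sees the secrets it was explicitly granted, mounted as files under /run/secrets inside the container. A minimal Python sketch of how an app might read one (the name db_password is hypothetical, and the same name must also be listed with --secret on the docker service create command line):

```python
from pathlib import Path

SECRETS_DIR = Path("/run/secrets")  # where swarm mounts a service's secrets

def read_secret(name: str) -> str:
    """Read a Docker secret mounted into the container as a tmpfs file."""
    path = SECRETS_DIR / name
    if not path.exists():
        raise FileNotFoundError(f"secret {name!r} was not granted to this service")
    return path.read_text().strip()

# "db_password" is a hypothetical secret name for illustration.
db_password = read_secret("db_password")
```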

Then there are tools like HashiCorp's Vault. While it has features like rolling keys, cluster networking, and expiration dates, it still has plenty of weaknesses. Once you have access to any of the nodes in the cluster you can delete or overwrite the existing data, just like on swarm. And if you're already in the inner circle you'll find the various tokens and so on for becoming a client. This is especially obvious when you have access to the source.
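To make the "tokens are lying around" point concrete, here's a minimal sketch using the hvac client library (my choice of example, not something the post names); the KV mount and path are hypothetical:

```python
import os

import hvac  # HashiCorp's Python client for Vault (pip install hvac)

# The address and token typically sit in an environment variable, a config
# file, or a systemd unit on the node -- which is exactly the weakness:
# whoever can read them becomes a legitimate client.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],
)

# Read a secret from a KV v2 mount; the path "myapp/database" is hypothetical.
resp = client.secrets.kv.v2.read_secret_version(path="myapp/database")
print(resp["data"]["data"])  # e.g. {"username": "...", "password": "..."}
```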

Hey, but what about OpenPGP? Sure, that's great! OpenPGP is a set of tools, libraries, and algorithms for crypto functions, but in most cases those libraries are already linked into your tools/apps, and spawning a shell to use the CLI tools only creates a series of other vulnerabilities.
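As a concrete sketch of the shelling-out concern, even the popular python-gnupg wrapper (my example, not one the post names) spawns the gpg binary under the hood, so the plaintext and passphrase still cross a process boundary; the recipient, passphrase, and paths here are hypothetical:

```python
import gnupg  # python-gnupg: a thin wrapper that shells out to the gpg binary

gpg = gnupg.GPG(gnupghome="/home/app/.gnupg")  # hypothetical keyring location

# Even though this looks like "library" crypto, the plaintext is handed to a
# child gpg process -- one more place for it to leak (argv, fds, core files).
encrypted = gpg.encrypt("123-45-6789", recipients=["ops@example.com"])
print(encrypted.ok, str(encrypted))

decrypted = gpg.decrypt(str(encrypted), passphrase="hunter2")
print(decrypted.ok, str(decrypted))
```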

One attack vector not discussed is when the attacker manages to cause a core dump. A core dump is a file image of the process's memory at the moment it crashed. So if you have SSNs or credit card numbers in the clear in RAM, an attacker need only cause a core dump and scoop up the file to get their treasure. Keep in mind that even today POS devices rarely print your card number on receipts, yet that same number may be sitting in memory in the clear.
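One partial mitigation, disabling core dumps for the process that holds cleartext, is easy to sketch (Linux-only, and it only closes the "crash it and scoop the file" path; the secret still lives in RAM while in use):

```python
import resource

# Tell the kernel never to write a core file for this process, so a
# deliberately triggered crash doesn't leave an image of memory -- and any
# in-RAM card numbers or SSNs -- sitting on disk.
resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

card_number = "4111111111111111"  # hypothetical cleartext held in memory
```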

All of this gets more complicated when going full DevOps and trying to embed secrets in the containers, or when trying to deploy TEST actions in the pipeline. Anything that does not actually model production is a possible point of failure. My advice: know which risks you're willing to live with, and how you'll live with it when the DR plan fails.

UPDATE: let me add one other challenge, and that is version control of the secrets. That's about as big a deal as any.

