Friday, July 13, 2018

docker registry pushing latest

My latest CI/CD pipeline performs all the functions that a build and deploy system is supposed to. Sure, there are purists who talk about deploying everything, chaos monkey, and so on... but until you've trashed a financial database with millions of records, your opinion might not matter.

CI/CD can promote systems to production through either a push or a pull. There aren't many advantages either way, although some security people prefer a pull, and I understand that... but the last thing you want is to get your head wrapped around versions. I'm currently tagging my registry images with the pipeline ID, and as each pipeline completes I also push 'latest'.

Depending on the speed of the CI/CD runner, 'latest' can be assigned to the wrong image. Also, it really isn't the latest until it has been pushed all the way around the system. Now that I'm tagging with the pipeline ID instead of a single ID plus 'latest', I'm seeing that 'latest' can be promoted prematurely.
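To see why, note that 'latest' is just a mutable pointer and the last write wins. Here's a minimal sketch of the race in plain shell, with a file standing in for the registry tag (the pipeline numbers and the `push_latest` helper are made up for illustration):

```shell
# 'latest' is only a mutable pointer; whichever push lands last wins.
push_latest() { echo "$1" > latest.txt; }   # stand-in for `docker push app:latest`

push_latest "pipeline-101"   # newer pipeline on a fast runner finishes first
push_latest "pipeline-100"   # older pipeline on a slow runner finishes last
cat latest.txt               # 'latest' now points at the OLDER build
```

Two overlapping pipelines on runners of different speeds are all it takes for 'latest' to end up pointing at a stale image.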

Just look at any of the images on the hub and you'll see some crazy numbering schemes... it seems that at some point in these projects there is a branch, and the promotion of that branch with some config identifies the version. Make sense? It's just a lot of versioning. Just look at the golang package... it's crazy how many versions they publish at once. I think I get the why and the how, but as a best practice it means there is an army of high-priced SREs managing the bits.

Right now my project looks like:

The build task builds the system, packages it, and pushes it by ID and 'latest' to the registry. Staging is a QA instance, even though there is a DEV instance too. "Weston" is one prod instance and "Zurvita" is another. The challenge here is that we deploy from gitlab and there is no manual deploy. So at what point does the version become "latest", and does it really matter?
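The flow above could be sketched as a `.gitlab-ci.yml` fragment. This is only a sketch, not my actual pipeline: the stage names, `deploy.sh`, and image name are assumptions, while `$CI_REGISTRY_IMAGE` and `$CI_PIPELINE_ID` are real GitLab CI variables. The point is to push 'latest' only after the ID-tagged push succeeds, and to deploy by ID rather than by 'latest':

```yaml
# Hypothetical pipeline: tag by pipeline ID first; retag as 'latest'
# only after the image has actually been built and pushed by ID.
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID"
    # same bits, second name - the ID push already succeeded
    - docker tag "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging "$CI_PIPELINE_ID"   # deploy by ID, not 'latest'
```

Deploying by pipeline ID sidesteps the whole "when does it become latest" question: 'latest' ends up being advisory, and only the immutable ID tags matter.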

Wednesday, July 11, 2018

The New Cookies

I'm not sure what happened or what the motivation was, but recently I started to notice that just about every website I encountered asked me to accept some terms and conditions associated with cookies and related artifacts... In most cases the documentation would say something about the customer experience and how the user would benefit from allowing cookies. It seems to me that there might be some smoke and mirrors here, because these cookies are associated with cross-site usage: go to amazon, perform a search, and then go to facebook and see the same ads. Clearly the only way either property would know where you had been is if they shared data, but in this case they simply waved a hand at the EU and continued business as usual.

How is this what they intended?

Tuesday, July 10, 2018

Keeping secrets

There are so many ways to keep secrets but few ways to protect them.

HSMs, or Hardware Security Modules, are probably the most robust option because they typically combine physical security, network security, and access security. They also have a way to implement DR, or disaster recovery. The strategies are complex and expensive, and so are the devices.

Home-grown HSMs are interesting because the DR is typically easier; however, it usually means the data is at rest someplace, and so it's a little more risky.

Expiration dates are the best and the worst. If you've decided that access to the data MUST be cut off by some date, and that it's a universal policy for all things... and then someone approves an exception, all hell breaks loose as OPS tries to manage the exceptions.

Continuing on: when deploying several million unique keys with expiration dates, one simply cannot manage the exceptions, and so teams typically fall back on one key to rule them all. And that makes the data vulnerable.

Other systems like Docker Secrets are interesting because they replicate the secrets, and only the containers they are assigned to can see them. The problem here is that if you can log into a swarm manager, you can see the secrets by creating a simple container. Docker secrets are a very simple implementation and do not seem to have features like expiration dates or rolling keys. One challenge here is that the "names" of the secrets need to be provided on the CLI when deploying the service. If you have lots of secrets, that's a long command line.
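One way around the long command line is to declare the secrets in a compose file instead of on the CLI. A minimal sketch, with hypothetical names (`myapp`, `db_password`); the `/run/secrets` mount path is how swarm exposes secrets to a container:

```yaml
# Hypothetical stack file: only services that declare a secret get it
# mounted (at /run/secrets/db_password) - but note that anyone who can
# deploy to a manager can declare the same secret in their own service.
version: "3.7"
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
secrets:
  db_password:
    external: true   # created beforehand, e.g. `docker secret create db_password -`
```

Deployed with `docker stack deploy`, this keeps the secret names in version control while the secret values stay in the swarm's raft store.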

Then there are tools like HashiCorp's Vault. While it has features like rolling keys, cluster networks, and expiration dates, it still has plenty of weaknesses. Once you have access to any of the nodes in the cluster, you can delete or overwrite the existing data, just like on swarm. And if you're already in the inner circle, you'll find the various tokens, etc., for becoming a client. This is especially obvious when you have access to the source.
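Vault's main mitigation for the "inner circle" problem is scoping tokens with policies, so a leaked app token can at least only read its own paths. A sketch of such a policy, assuming the KV v2 engine mounted at `secret/` and a hypothetical app named `myapp`:

```hcl
# Hypothetical policy: app tokens may read their own secrets but never
# list, write, or delete anything - limiting the blast radius of a leak.
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
```

Of course, this only narrows the damage from a stolen client token; it does nothing about an attacker with node-level access, which is the weakness described above.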

Hey, but what about OpenPGP? Well, that's great! OpenPGP is a set of tools, libraries, and algorithms for doing crypto functions, but in most cases these libraries are already linked into your tools/apps, and spawning a shell to use the CLI tools only creates a series of other vulnerabilities.

One attack vector not discussed is when the attacker manages to cause a core dump. A core dump is a file image of a process's memory at the time the core was dumped. So if you have SSNs or credit card numbers in the clear in RAM, an attacker need only cause a core dump and scoop up the file to get their treasure. Keep in mind that even today POS devices rarely print your card number on receipts.
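A cheap, common hardening step against this vector is to forbid core files entirely for any process that holds secrets in the clear. A shell sketch:

```shell
# Forbid core files for this shell and every child process it launches:
# if the process crashes, no memory image is written for anyone to scoop up.
ulimit -c 0   # set the max core file size to 0 blocks
ulimit -c     # prints: 0
```

This doesn't stop an attacker who can already read the process's memory directly, but it closes the "crash it and grab the file" path.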

All of this gets more complicated when going all DEVOPS and trying to embed secrets in the containers, or when trying to deploy TEST actions in the pipeline. Anything that does not actually model production is a possible point of failure. My advice: know what risks you're willing to live with, and how you might live with a DR that fails.

UPDATE: let me add one other challenge and that is version control of the secrets. That's about as big of a deal as any.

Wednesday, July 4, 2018

different tarps

I cannot wait until my next overnight hiking trip into the Big Cypress Preserve. The weather, lately, has been very wet, hot, and steamy, so making the right tarp selection is important. The grey tarp did a fine job and kept me dry from the condensation, but in pouring rain I'm way too exposed because the poles are too tall and not adjustable. I think they were 48".

The green, Gossamer Gear Twinn Tarp, has a nice size and is meant to be close to the ground.

The black tarp is an option because, unlike the grey tarp, it might dry faster. Unlike the Twinn Tarp, this one (bearpaw wilderness designs) does not have a seam in the ridgeline, so I'm not concerned about the seams being sealed.

The Twinn Tarp has nice linelocs, line, grommets, etc...

I did some DIY from a poop bag roll and I have a grommet from Yama Mountain Gear (shown but not in use).

Lastly the conditions changed and the visible shade was obvious (unlike the previous pictures). The black tarp provided plenty of shade.

In conclusion, the choice seems clear to me. Where the Twinn seems to be meant for 2 people, has a seam down the center, pitches in just the one configuration, and has limited choices on the ridgeline... the flat tarp from bearpaw wilderness is just fine for one person, and since it's flat it's capable of all my configurations. The black means it'll dry faster, and it looks like it also gives more shade.

PS: both have bivy or net-tent loops just inside the tarp on the ridgeline.
