
Using (almost) FREE f1-micro for simple naked redirect

[UPDATE] Sadly, while I was collecting my notes and trying to reproduce the results I was getting from my n1-standard project, I was not able to get my /media/state partition to mount, and I have not been able to determine whether it has anything to do with the free tier or not. I will try my steps again with a larger system and see if that makes any difference. Keep in mind that the free f1-micro is awful for anything other than the simplest tasks anyway.

[UPDATE 2] I was able to get it to work. The debug session started after attempting to build the system about 10 times... then I started scanning the logs (sudo journalctl -f), but that did not bear any fruit. Finally I looked at the entire log file (sudo journalctl -a) and read it after several redeploys and reboots. I found a strange error message in the log: "Failed parsing user-data: Unrecognized user-data header: coreos:" I rebooted a few more times and did a few Google searches. Still nothing. Then I found an obscure reference in the cloud-config doc/spec which said that "#cloud-config" was the proper header. I had assumed #cloud-config was a comment, and so I had deleted it before trying to build my instance. I do not mind complaining about this fact: clearly a comment should not double as a required header. Not even in a YAML file.

I have a dedicated server at Rackspace that I once used to host a number of dedicated apps that have since been decommissioned or are now hosted via 3rd-party SaaS systems, and strangely enough none of these SaaS providers support naked domains.

(A naked domain is a domain with the hostname part absent: a regular domain name for a web server might be www.domain.com, while the naked domain would be domain.com. There are some historical reasons why this is the case, but it’s probably a good thing for now.)

My dedicated server simply intercepts the naked domain, and thanks to a wildcard DNS entry it also catches all of the typos, redirecting the user to the default server. So if the user entered fred.domain.com, the browser would be redirected to www.domain.com. That server costs me $15 a month and is just generating heat; the redirect is rarely used, and at this point I just need to make it go away.
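For reference, the redirect itself is only a few lines of nginx configuration. Here is a minimal sketch, assuming www.domain.com is the target (substitute your own domain); the catch-all server_name picks up the naked domain and anything the wildcard DNS sends over:

server {
    listen 80 default_server;
    # catch the naked domain plus any typo'd subdomains from the wildcard DNS
    server_name _;
    return 301 http://www.domain.com$request_uri;
}

Later in the setup this file lives under /media/state/etc/nginx on the host, which the container mounts as its sites-enabled directory.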

Enter Google’s f1-micro server. It’s an anemic server that is free to operate, and it will do the trick. The best part of this is going to be the use of CoreOS as the host OS. You’ll have to explore CoreOS for yourself, but what interests me is the upgrade process that just works, which means that every version is basically LTS (long-term support).

And so we begin…

Assuming you have a GCE (Google Compute Engine) account, these are the steps I used to deploy the project.

** You’ll need a file on your local system: cloud-config.yaml

#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/… your key goes here...
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: media-state.mount
      command: start
      content: |
        [Mount]
        What=/dev/disk/by-id/scsi-0Google_PersistentDisk_pdisk01
        Where=/media/state
        Type=ext4
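Given the header problem described in UPDATE 2 above, it may be worth validating this file before deploying. As far as I can tell, newer builds of coreos-cloudinit ship a validate flag (run this on a CoreOS host, and treat the exact flags as an assumption to be checked against coreos-cloudinit -help):

coreos-cloudinit -validate -from-file=cloud-config.yaml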

  1. install the GCE SDK and tools
  2. update the SDK
  3. set the default project
    1. gcloud config set project <project-id>
  4. allocate some disk if needed
    1. gcutil adddisk --size_gb=10 --zone=us-central1-a pdisk01
  5. capture the latest CoreOS
    1. gcutil addimage --description="CoreOS 298.0.0" coreos-v298-0-0 gs://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_gce.tar.gz
  6. create an instance
    1. gcutil addinstance --image=coreos-v298-0-0 --persistent_boot_disk --zone=us-central1-a --machine_type=f1-micro --metadata_from_file=user-data:cloud-config.yaml pcore1
  7. attach the disk to the instance
    1. gcutil attachdisk --disk=pdisk01 pcore1
  8. shell into the system and format the disk
    1. gcutil ssh pcore1
    2. sudo mkfs.ext4 -F /dev/sdb
  9. update the GCE firewall to let port 80 through (by default inbound port 80 is blocked)
  10. reboot, then verify the mount as shown below
    1. sudo reboot
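Once the instance comes back up, it’s worth confirming that the cloud-config parsed cleanly and the persistent disk actually mounted. Reading the full journal (sudo journalctl -a) is what finally surfaced the header problem described in the updates above; the grep just narrows the output:

gcutil ssh pcore1
# any cloud-config parse errors ("Failed parsing user-data: ...") show up in the journal
sudo journalctl -a | grep -i user-data
# confirm the mount unit ran and the partition is visible
systemctl status media-state.mount
df -h /media/state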

I made a mistake with my cloud-config.yaml. These two steps will help:

  1. get the fingerprint with this command
    1. gcutil getinstance pcore1
  2. update the config
    1. gcutil setinstancemetadata pcore1 --metadata_from_file=user-data:cloud-config.yaml --fingerprint=". . . fingerprint"

Here is a sample Dockerfile:

FROM debian:jessie

# Install nginx; appending "daemon off;" forces nginx to run in the foreground
# so Docker can supervise the process.
RUN apt-get update -q
RUN apt-get install -qy nginx-extras
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf

# Attach volumes.
VOLUME /etc/nginx/sites-available
VOLUME /etc/nginx/sites-enabled
VOLUME /var/log/nginx

# Set working directory.
WORKDIR /etc/nginx

EXPOSE 80
#ENTRYPOINT ["nginx"]
CMD nginx

Once the system has rebooted, you need to ssh back into it.

  1. create a folder in your home folder where you’ll store your Dockerfiles.
    1. mkdir -p ~/Dockerfiles
    2. cd ~/Dockerfiles
  2. create the Dockerfile with the contents shown above
    1. vim Dockerfile
  3. build the container (note that rbucker is my name; you should read up on the Docker registry in order to name your image properly)
    1. docker build -t=rbucker/nginx .
  4. run the container. This command does a few things. The ‘-v’ option mounts a host OS path onto a container path (here the nginx config directory is mounted read-only and the log directory read-write). The ‘-p’ option maps the container’s port to the host’s port. This command runs the container in the foreground, so you’ll have to open a new ssh session to execute the ‘docker stop’ command. You could also use the ‘-d’ option to run the container in the background.
    1. docker run -p 80:80 -v /media/state/etc/nginx:/etc/nginx/sites-enabled:ro -v /media/state/var/log/nginx:/var/log/nginx rbucker/nginx
  5. restarting the container later; note that ‘docker start’ takes the container name or ID (find it with ‘docker ps -a’), not the image tag
    1. sudo docker start <container-id>
  6. You can check whether the container is running with
    1. docker ps
  7. then load your host’s external IP address into a browser and give it a try; the default nginx page should display. You can also test with curl, as shown below.
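A command-line check looks something like this (203.0.113.10 standing in for your instance’s external IP):

curl -I http://203.0.113.10/

A 200 response means the default page is being served; once the redirect configuration from earlier is in place, you should instead see a 301 with a Location header pointing at www.domain.com.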

** CoreOS and Docker would prefer that you restart the container with systemd service commands instead of letting Docker auto-restart it. That is possible, and it is a topic for another post, but a minimal sketch follows.
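As a sketch of that approach (the unit name nginx-redirect.service, the container name, and the file location are my own assumptions; adjust to your layout), a file such as /etc/systemd/system/nginx-redirect.service lets systemd supervise the container:

[Unit]
Description=nginx naked-domain redirect container
Requires=docker.service media-state.mount
After=docker.service media-state.mount

[Service]
# remove any stale container from a previous run (the leading "-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f nginx-redirect
ExecStart=/usr/bin/docker run --name nginx-redirect -p 80:80 \
  -v /media/state/etc/nginx:/etc/nginx/sites-enabled:ro \
  -v /media/state/var/log/nginx:/var/log/nginx \
  rbucker/nginx
ExecStop=/usr/bin/docker stop nginx-redirect
Restart=always

[Install]
WantedBy=multi-user.target

Enable and start it with: sudo systemctl enable nginx-redirect.service && sudo systemctl start nginx-redirect.service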

