Commit 8b7f43a7 authored by Michiel de Jong's avatar Michiel de Jong

move deploy/ and doc/ folders from ../dev-scripts here

parent 5864814a
for i in `deploy/ $1`; do
  echo "Approving combined cert for $i";
  cp ../orchestration/TLS/combined/$i.pem ../orchestration/TLS/approved-certs/$i.pem;
  scp ../orchestration/TLS/approved-certs/$i.pem root@$1:/data/server-wide/haproxy/approved-certs/
done
if [ $# -eq 2 ]; then
  CA=$2
  echo "CA is $CA"
  echo Some information about cert ../orchestration/TLS/cert/$1.cert:
  openssl x509 -text -in ../orchestration/TLS/cert/$1.cert | head -50 | grep -v '^                 '
  #echo Some information about chain cert ../orchestration/TLS/chain/$2.pem:
  #openssl x509 -text -in ../orchestration/TLS/chain/$2.pem
  #echo Some information about key ../orchestration/TLS/key/$1.key:
  #openssl rsa -text -in ../orchestration/TLS/key/$1.key
  cat ../orchestration/TLS/cert/$1.cert ../orchestration/TLS/chain/$CA.pem ../orchestration/TLS/key/$1.key > ../orchestration/TLS/combined/$1.pem
  echo Running a test server on port 4433 on this server now \(please use your browser to check\):
  openssl s_server -cert ../orchestration/TLS/combined/$1.pem -www
fi
if [ $# -lt 1 ]; then
  echo "Usage: sh ./deploy/ server [branch [user]]"
  exit 1
fi
if [ $# -ge 2 ]; then
  BRANCH=$2
fi
if [ $# -ge 3 ]; then
  USER=$3
fi
if [ -e ../orchestration/per-server/$SERVER/default-site ]; then
  DEFAULTSITE=`cat ../orchestration/per-server/$SERVER/default-site`
fi
echo "Infrastructure branch is $BRANCH"
echo "Remote user is $USER"
echo "Default site is $DEFAULTSITE"
chmod -R go-w ../orchestration/deploy-keys
if [ -f ../orchestration/deploy-keys/authorized_keys ]; then
  scp -r ../orchestration/deploy-keys $USER@$SERVER:.ssh
fi
scp ./deploy/ $USER@$SERVER:
ssh $USER@$SERVER sudo mkdir -p /var/lib/coreos-install/
scp ../infrastructure/cloud-config $USER@$SERVER:/var/lib/coreos-install/user_data
cd ../orchestration/per-server/$SERVER/sites/
for i in * ; do
  echo "setting up site $i as `cat $i` on $SERVER";
  ssh $USER@$SERVER sudo mkdir -p /data/per-user/$i/
  scp ../../../TLS/approved-certs/$i.pem $USER@$SERVER:/data/server-wide/haproxy/approved-certs/$i.pem
  rsync -r ../../../../user-data/live/$SERVER/$i/ $USER@$SERVER:/data/per-user/$i/
  ssh $USER@$SERVER sudo sh /data/infrastructure/scripts/ $i `cat $i`
done
# Restart the default site now that its data has been rsync'ed in place:
ssh $USER@$SERVER sudo systemctl restart nginx\@$DEFAULTSITE
ssh-keygen -R $1
cd ../orchestration/per-server/$1/sites
for i in *; do
  echo $i
done
echo Starting etcd:
/usr/bin/coreos-cloudinit --from-file=/var/lib/coreos-install/user_data
echo Cloning the infrastructure repo into /data/infrastructure:
mkdir /data
cd /data
git clone
cd infrastructure
echo Checking out $1 branch of the IndieHosters infrastructure:
git checkout $1
git pull
echo Running the server setup script:
sh scripts/ $2
# Deploying a server
## Before you start
Make sure you have read [getting started]( first and have created your `indiehosters` folder structure somewhere
on your laptop.
### Prepare your orchestration data
* Get a CoreOS server, for instance from [RackSpace]( or [Vultr](
* If you didn't add your public ssh key during the order process (e.g. through your IaaS control panel or a cloud-config file), and unless it's already there from a previous server deploy job, copy your laptop's public ssh key (probably in `~/.ssh/`) to `indiehosters/orchestration/deploy-keys/authorized_keys`
* Give the new server a name (in this example, we call the server 'k3')
* Create an empty folder `indiehosters/orchestration/per-server/k3/sites` (replace 'k3' with your server's domain name)
* Add k3 to your /etc/hosts with the right IP address
* If you have used this name before, run `./deploy/ k3`
* From the `indiehosters/dev-scripts` folder, run `sh ./deploy/ k3`
* This will ask for the ssh password once; the rest should be automatic!
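
The laptop-side preparation in the list above can be sketched as follows (the server name `k3` and the IP address are placeholders; editing /etc/hosts needs root):

    # Placeholders: replace with your server's name and IP address
    SERVER=k3
    IP=198.51.100.7

    # Point the name at the new server's IP (needs root):
    echo "$IP $SERVER" | sudo tee -a /etc/hosts

    # If this name was used for an earlier server, forget its old host key:
    ssh-keygen -R "$SERVER"
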
### Adding a website to your server
* For each site you want to deploy on the server, e.g., do the following:
* Does already exist as a domain name?
* If yes, then find out to what extent it's currently in use (and needs to be migrated with care). There are a few options:
* Transfer the domain into your DNR account.
* Set up DNS hosting for it and ask the owner to set authoritative DNS to the DNS servers you control
* Ask the user to keep DNR and DNS control where it is currently, but to switch DNS when it's ready at the new server
* In any case, you will probably need access to the email address, for the StartSSL process *before*
the final DNS switch. You could also ask them to tell you the verification code that arrives there, but that has to be done
in real time, immediately when you click 'verify' in the StartSSL UI. If they forward the email the next day, then the token
will already have expired.
* If no, register it (at Namecheap or elsewhere).
* Decide which image to run as the user's main website software (check out `../dockerfiles/sites/` to see which ones can be used for this)
* Say you picked nginx, then create a text file containing just the word 'nginx' at
* If you already have some content that should go on there, and which is compatible with the image you chose,
put it in `indiehosters/user-data/` (replace 'nginx' with the actual image name you're using;
note that for wordpress it's currently a bit more complicated, as this relies on more than one image, so you
would then probably have to import both the user's wordpress folder and their mysql folder).
* Unless there is already a TLS certificate at `indiehosters/user-data/` get one
(from StartSSL or elsewhere) for and concatenate the certificate
and its unencrypted private key into `indiehosters/user-data/`
* Make sure the TLS certificate is valid (use `indiehosters/infrastructure/scripts/` for this), and if it is,
copy it from
to `indiehosters/orchestration/TLS/approved-certs/`.
* Now run `deploy/ k3` again. It will make sure the server is in the correct state, and scp the user data and the
approved cert into place, start a container running the image requested, update haproxy config, and restart the haproxy container.
* Test the site using your /etc/hosts. If you did not import data, there should be some default message there. For wordpress, be aware
that the site is installed in a state where any visitor can take control over it.
* Switch DNS and note down the current DNS situation in `indiehosters/orchestration/DNS/` (or if you're hosting
a subdomain of another domain, update whichever is the zone file you edited).
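
The certificate handling above boils down to concatenating three files in the right order and sanity-checking the result. A sketch, assuming the `cert/`, `chain/`, `key/` and `combined/` folders under `indiehosters/orchestration/TLS/`, with `example.org` as a placeholder domain and `ca` as a placeholder chain-cert name:

    # Placeholders: your customer's domain and the name of the CA chain cert
    DOMAIN=example.org
    CA=ca

    cd indiehosters/orchestration/TLS

    # haproxy wants server cert, CA chain, and unencrypted key in one pem, in that order:
    cat cert/$DOMAIN.cert chain/$CA.pem key/$DOMAIN.key > combined/$DOMAIN.pem

    # Sanity check: the moduli printed for the cert and the key must match
    openssl x509 -noout -modulus -in combined/$DOMAIN.pem
    openssl rsa  -noout -modulus -in combined/$DOMAIN.pem
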
## Deploying a mailserver
Right now, this is still a bit separate from the rest of the infrastructure - just get a server with Docker (it doesn't have to be CoreOS), and run:

    docker run -d -p 25:25 -p 443:443 indiehosters/yunohost /sbin/init
Then set up the mail domains and forwards through the web interface (with self-signed cert) on
Use Chrome for this, because Firefox will refuse to let you view the admin interface because of the invalid TLS cert.
The initial admin password is 'changeme' - change it on
# Developing Dockerfiles and infrastructure
## Developing Dockerfiles
To develop Dockerfiles, you can use a server that's not serving any live domains, use `docker` locally on your laptop, or use the `vagrant up` instructions to run the infrastructure inside vagrant.
## Developing infrastructure
To develop the infrastructure, create a branch on the infrastructure repo and specify that branch at the end of the deploy command. For instance:

    sh ./deploy/ k4 dev

will deploy a server at whatever IP address "k4" points to in your /etc/hosts, using the "dev" branch of
## Testing new Dockerfiles in the infrastructure
To test the infrastructure with a changed Dockerfile, you need to take several steps:
* Develop the new Dockerfiles as described above at "Developing Dockerfiles"
* When you're happy with the result, publish this new Dockerfile onto the docker hub registry under your own username (e.g. michielbdejong/haproxy-with-http-2.0)
* Now create a branch on the infrastructure repo (e.g. "dev-http-2.0")
* In this branch, grep for the Dockerfile you are updating, and replace its name with the experimental one everywhere:
* the `docker pull` statement in scripts/
* the `docker run` statement in the appropriate systemd service file inside unit-files/
* Push the branch to the repo
* Now deploy a server from your experimental infrastructure branch (which references your experimental Docker image), as described above at "Developing infrastructure"
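
The publish step above is just a build-and-push to the Docker Hub registry; a sketch, using the example image name from the list (run from the folder containing your modified Dockerfile):

    # Example experimental image name from above; use your own Docker Hub username
    IMAGE=michielbdejong/haproxy-with-http-2.0

    # Build the experimental image from the current folder's Dockerfile, then publish it:
    docker build -t "$IMAGE" .
    docker push "$IMAGE"
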
# Getting started as an IndieHosters hoster
# Prerequisites
Each IndieHoster is an entirely autonomous operator, without any infrastructural ties to other IndieHosters.
These scripts and docs will help you run and manage servers and services as an IndieHoster, whether you're
certified as a branch of the IndieHosters franchise or not. To get started, on your laptop machine,
create a folder structure as follows:
    indiehosters
    ├── billing
    ├── dev-scripts
    ├── dockerfiles
    ├── infrastructure
    ├── orchestration
    │   ├── deploy-keys
    │   ├── DNR
    │   ├── DNS
    │   ├── MON
    │   ├── per-server
    │   └── TLS
    │       ├── approved-certs
    │       ├── cert
    │       ├── chain
    │       ├── combined
    │       └── key
    └── user-data
        ├── backup
        └── live
The `infrastructure`, `dockerfiles`, and `dev-scripts` folders are the corresponding repos under
# Hoster data
The `orchestration` folder will contain your orchestration data (what *should* be happening on each server, at each domain name
registrar, and at each TLS certificate authority), and `billing` will contain
your billing data: data about your human customers, including contact info,
who is in control of which product, which products were/should be added/removed on which dates, the history of all tech support
issues of each customer, and, for paying customers, the billing and payment history.
If you're used to working with git as a versioning tool, then it's a good idea to make `indiehosters/orchestration` and
`indiehosters/billing` into (private!) git repos, so
that you can track changes over time, and search the history to resolve mysteries when they occur. You may also use a different
versioning system, or just take weekly and daily backups (but then it's probably a good idea to retain the weeklies for a couple
of years, and even then it will not be as complete as a history in a versioning system).
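
Turning those two folders into git repos is a one-time step; a minimal sketch (hosting them as *private* remotes is up to you):

    # Make orchestration and billing into local git repos, so every change is tracked:
    for folder in indiehosters/orchestration indiehosters/billing; do
      ( cd "$folder" &&
        git init &&
        git add -A &&
        git commit -m "initial snapshot" )
    done
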
The per-server orchestration data is about what a specific one of your servers *should* be doing at this moment.
This is fed into CoreOS (systemd -> etcd -> confd -> docker) to make sure the server actually starts and keeps doing these things,
and also into monitoring, to make sure you get alerted when a server misbehaves.
The DNR, TLS, MON, and DNS folders under orchestration are for you to keep track of Domain Name Registration, Transport
Layer Security, MONitoring, and Domain Name System services which you are probably getting from
third-party service providers, alongside the services which
you run on your own servers.
Note that although it's probably inevitable that you resell DNR and TLS services from some third party, and your monitoring would ideally
also run on a system that's decoupled from your actual servers, you may not need to resell DNS
hosting: if you host DNS for your customers on server-wide bind services that read their data directly from files in the per-user data folders,
then you don't need this folder, and DNS data will instead live under `indiehosters/user-data`.
The deploy-keys folder contains the authorized_keys file which is the first thing you scp to each server you add to your fleet.
# User data
Everything under `indiehosters/user-data` is data owned by one of your users. Which human owns which site is something you can administer
by hand somehow in the `indiehosters/billing` folder.
All user data is *untrusted* from your point of view: it is not owned by you as a hoster,
and users may change it at any time (and then probably contact you for a backup whenever they mess up!). It makes sense to give users
only read-only access to this data by default, and have a big "Are you sure? Warranty will be void!" warning before they can activate
write-access to their own data (and then probably trigger an extra backup run just before allowing them to edit their own raw data).
This is how some operating systems on user devices also deal with this.
But in the end, the user, and not you, owns this data, and they can do with it what they want, at their own risk.
Just like a mailman is not supposed to open and read letters, you also should treat each user's data as a closed envelope
which you never open up, unless in the following cases:
* There may be things you need to import from specific files on there (like a user-supplied TLS certificate or DNS zone)
* When running backups, you sometimes can't avoid seeing some of the modified filenames flying by (depending on the backup software)
* After explicit permission of the user, when this is useful for tech support (e.g. fix a corrupt mysql database for them)
# Backups
This folder structure contains all the critical data of your operations as an IndieHoster, from start to finish, so make sure you don't
ever lose it, no matter what calamity may strike. Once a month, put a copy of it on a USB stick, and put that in a physically safe place.
You may give a trusted person an emergency key to your infrastructure, in case you walk under a bus. Think about the risk of data loss and
establish an emergency recovery plan for when, for instance, the hard disk of your laptop or of one of your servers dies.
Make sure you often rsync the live data from each of your servers to indiehosters/user-data/live/{servername}/{domain} and store snapshots
regularly (for instance to indiehosters/user-data/backup). Users *will* contact you sooner or later asking for "the backup from last Tuesday"
and they will expect you to have one.
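
A sketch of such a backup run, assuming the server name `k3` and the `root` user; `cp -al` makes hard-linked snapshots, so unchanged files cost no extra disk space:

    SERVER=k3   # placeholder server name
    TODAY=$(date +%Y-%m-%d)

    # Mirror the live per-user data from the server:
    rsync -a root@$SERVER:/data/per-user/ indiehosters/user-data/live/$SERVER/

    # Keep a dated, hard-linked snapshot for "the backup from last Tuesday" requests:
    cp -al indiehosters/user-data/live/$SERVER \
           indiehosters/user-data/backup/$SERVER-$TODAY
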
# Basic digital hygiene
At the same time, be careful who may obtain access to your critical data. Is your laptop really safe? Does the NSA have access to the servers you run?
Someone may plant a Trojan on a computer in an internet cafe from where you access your Facebook account, access your gmail account
for which you use the same password, reset your RackSpace password and restore a backup from your Cloud Files to somewhere else.
Make a diagram of how your laptop talks to your USB sticks and your servers. Then make a diagram of the services you use and to which
email addresses they send password reset emails. Draw a perimeter of trust in both diagrams, and start taking some basic measures to
keep your laptop secure.
Don't mix accounts and email addresses which you may
use from other computers, and keep your IndieHosters passwords and accounts separate from your other passwords and accounts, and reset
them every few months. It might even
make sense to dual-boot your laptop or boot from a live disk which resets on boot to make sure everything you do with IndieHosters data
is done in a sterile environment.
Also: lock your screen when walking away from your laptop, and think about what someone could do with it if they were to steal your bag,
or your smartphone.
# Do I have to use this?
You can of course use any folder structure and scripts you want, as long as it doesn't change the format of each user-data folder, so that
your customers can still migrate at will between you and other IndieHosters. However, you might find some of the scripts in this repo
helpful at some point, and they (will) rely on
`../infrastructure`, `../dockerfiles`, and `../orchestration/per-server` to be where they are in the diagram above.
That's why it makes sense to create this folder structure now, and then continue to [deploying a server](! :)