.vagrant
docker-haproxy-confd
*.swp
# 1.2.0
* Add app admin email as argument in provision #189
* Handle git branches when provisioning #174
* REPO mode to retrieve application recipe #187
# 0.3.0
* adds automation script for user provisioning
* moves backup to duplicity
* big simplification
* some fixes
# 0.2.4
* improve tests
* wordpress version 4.1
* Internal modifications
* rename project
* rename images
* integrate dockerfiles to the project
* add hotfixes
# 0.2.3
* fixes backup
* better tests
* import dump.sql when relevant
# 0.2.2
* add Known as an application
# 0.2.1
* draft instructions for how to add an application (whether server-wide or per-user)
* several bugfixes
# 0.2.0
* a separation between /data/domains and /data/runtime, making site migration much easier
* the wordpress image and the mysql image it depends on
* the backup service which commits all user content, including a mysql dump, to a private git repo, and pushes that out to a remote destination every hour
* the nginx image from 0.1.0 split into static and static-git
# 0.1.0
* Static webhosting
* based on haproxy with nginx backends
* all running as Docker containers
* SNI-capable (multiple https domains on one single IPv4 address)
* pulls in content from any git repo, then updates every 10 minutes
* can be run redundantly in round-robin DNS setup
* email forwarder
* based on postfix
* stateless apart from simple configuration files
* can be run redundantly on multiple MX handlers
* automated administration
* Docker containers are orchestrated with etcd and systemd
* script to deploy it on a remote coreos server
* script for adding a site from a git repo
* script for adding an empty placeholder site
* docs describing how to use these scripts
* Vagrantfile for using it inside vagrant
# Instructions to install libre.sh
## Recommendation
- you'll need a Namecheap API key (if you want to automatically buy and configure domain names)
## Installation
These instructions depend a bit on your cloud provider.
### [Digital Ocean](https://m.do.co/c/1b468ce0671f)
1. Install [doctl](https://github.com/digitalocean/doctl/)
2. Issue the following command:
```
doctl compute droplet create libre.sh --user-data-file ./user_data --wait --ssh-keys $KEY_ID --size 1gb --region lon1 --image coreos-stable
```
### Provider with user_data support
If you use a cloud provider that supports `user_data`, like [Scaleway](http://scaleway.com/), just use [this user_data](https://raw.githubusercontent.com/indiehosters/libre.sh/master/user_data).
### Hetzner
You can also buy a bare-metal server at [Hetzner](https://serverboerse.de/index.php?country=EN) as they are among the cheapest options around. In that case, follow these [instructions](INSTALL_HETZNER.md).
### Provider without user_data support
Boot a live CD and issue these commands:
```
wget https://raw.github.com/coreos/init/master/bin/coreos-install
bash coreos-install -d /dev/sda -c user_data
```
And voila, your first libre.sh node is ready!
# Instructions to install libre.sh
## Recommendation
- ssd on /dev/sda
- hdd on /dev/sdb
- hdd on /dev/sdc
- a Namecheap API key (if you want to automatically buy domain names)
# Installation
First, you need a server.
We recommend [Hetzner](https://serverboerse.de/index.php?country=EN) as they are the cheapest options around.
You can filter servers with ssd.
These instructions can also work on any VM/VPS/Hardware.
## Install the system
```
IP=
ssh -o "StrictHostKeyChecking no" root@$IP
hostname=
ssh_public_key=""
fdisk -l #find your ssd
# Setup raid
cat > /etc/mdadm.conf << EOF
MAILADDR dev@null.org
EOF
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
cat > cloud-config.tmp << EOF
#cloud-config
hostname: "$hostname"
ssh_authorized_keys:
- $ssh_public_key
EOF
apt-get install gawk
wget https://raw.github.com/coreos/init/master/bin/coreos-install
bash coreos-install -d /dev/sda -c cloud-config.tmp
reboot
```
```
ssh core@$IP
#configure mdmonitor.
sudo su -
mdadm --examine --scan > /etc/mdadm.conf
vim /etc/mdadm.conf
#ADD your mail
MAILADDR xxx@xxx.org
# Start service
systemctl start mdmonitor.service
cat > /etc/systemd/system/data.mount << EOF
[Mount]
What=/dev/md0
Where=/data
Type=ext4
EOF
wget https://raw.githubusercontent.com/indiehosters/libre.sh/master/user_data -O /var/lib/coreos-install/user_data
coreos-cloudinit /var/lib/coreos-install/user_data
```
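The `data.mount` unit above has no `[Install]` section, so systemd will not pull it in at boot on its own. A minimal sketch to activate it (an assumption, not part of the original instructions; skip if your user_data already handles this):
```
systemctl daemon-reload
systemctl start data.mount
# To also mount at boot, add an [Install] section with
# WantedBy=local-fs.target and then: systemctl enable data.mount
```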
# Instructions to install libre.sh on Linux with systemd
## Recommendation
- a systemd distro (Ubuntu Server 18.04.3 or Debian 9)
# Installation
Here we basically reproduce what the user_data does for us.
Run everything below as root.
# configure sshd (optional)
Don't forget to create the `core` user and add your SSH key first.
You can also remove `AllowUsers core` and/or change the username.
```
cat > /etc/ssh/sshd_config <<EOF
UsePrivilegeSeparation sandbox
Subsystem sftp internal-sftp
PermitRootLogin no
AllowUsers core
PasswordAuthentication no
ChallengeResponseAuthentication no
EOF
chmod 600 /etc/ssh/sshd_config
systemctl restart sshd
```
# add kernel parameters (optional but recommended)
```
cat > /etc/sysctl.d/libresh.conf <<EOF
fs.aio-max-nr=1048576
vm.max_map_count=262144
vm.overcommit_memory=1
EOF
chmod 644 /etc/sysctl.d/libresh.conf
sysctl -p /etc/sysctl.d/libresh.conf
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```
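Note that the `echo never` above only lasts until reboot. One way to persist it (an assumption, not part of the original instructions) is a small oneshot unit:
```
cat > /etc/systemd/system/disable-thp.service <<EOF
[Unit]
Description=Disable transparent hugepages

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now disable-thp.service
```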
# define localhost (should not be needed, but just in case)
```
cat > /etc/hosts <<EOF
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
EOF
```
# define environment
```
cat > /etc/environment <<EOF
NAMECHEAP_URL="namecheap.com"
NAMECHEAP_API_USER="pierreo"
NAMECHEAP_API_KEY=
IP="curl -s http://icanhazip.com/"
FirstName="Pierre"
LastName="Ozoux"
Address=""
PostalCode=""
Country="Portugal"
Phone="+351.967184553"
EmailAddress="pierre@ozoux.net"
City="Lisbon"
CountryCode="PT"
BACKUP_DESTINATION=root@xxxxx:port
MAIL_USER=
MAIL_PASS=
MAIL_HOST=mail.indie.host
MAIL_PORT=587
EOF
```
# install docker
*Currently tested version: 19.03.5. See https://docs.docker.com/install/linux/docker-ce/ubuntu/.*
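One common way to install it is the upstream convenience script (a suggestion, not from the original instructions; verify against the linked docs and the version noted above):
```
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker --version            # expect something like 19.03.5 per the note above
systemctl enable --now docker
```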
# install docker-compose
*Note: this variant looks up the latest docker-compose release and downloads it.*
```
mkdir -p /opt/bin &&\
dockerComposeVersion=$(curl -s https://api.github.com/repos/docker/compose/releases/latest|grep tag_name|cut -d'"' -f4) &&\
curl -L https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose &&\
chmod +x /opt/bin/docker-compose
```
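To check the download worked (using the full path, since /opt/bin is only added to PATH further below):
```
/opt/bin/docker-compose --version
```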
# install Libre.sh
```
git clone https://lab.libreho.st/libre.sh/compose.libre.sh /libre.sh &&\
mkdir -p /{data,system} &&\
mkdir -p /data/trash &&\
cp /libre.sh/unit-files/* /etc/systemd/system && systemctl daemon-reload &&\
systemctl enable web-net.service &&\
systemctl start web-net.service &&\
mkdir -p /opt/bin &&\
cp /libre.sh/utils/* /opt/bin/
```
# add /opt/bin path
```
cat > /etc/profile.d/libre.sh <<EOF
export PATH=\$PATH:/opt/bin
EOF
chmod 644 /etc/profile.d/libre.sh
```
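To confirm the PATH change is picked up in a fresh login shell (assuming the `libre` helper was among the utils copied to /opt/bin above):
```
bash -l -c 'command -v libre'
# expected output: /opt/bin/libre
```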
# libre.sh Version 1.2
[![Backers on Open Collective](https://opencollective.com/libresh/backers/badge.svg)](#backers)
[![Sponsors on Open Collective](https://opencollective.com/libresh/sponsors/badge.svg)](#sponsors)
This repository contains the confd and bash scripts we use to control our servers.
It can run inside Vagrant (see below; FIXME: check whether these instructions currently work) or
[deploy to a server](doc/getting-started-as-a-hoster.md) (FIXME: update those instructions to
prescribe less folder structure, explain static https+smtp hosting, and check if they currently
work).
## Introduction
An ecosystem to ease free software hosting \o/
We are working on bootstrapping an ecosystem of tools to facilitate the hosting of free software.
Think of it as:
- [ISPconfig](https://www.ispconfig.org/)
- a FLOSS [cpanel](https://www.cpanel.net/products/)
- [cloudron](https://cloudron.io/) with email
* Libre.sh V1 (Stable) uses docker-compose.
* Libre.sh V2 (Alpha) uses [kubernetes](https://kubernetes.io/).
This ecosystem can be deployed on [Raspberries](https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8), on popular cloud providers, or anything in between. We can affirm that V2 scales globally because it is based on kubernetes, a tool developed from Google's experience of hosting containers at scale.
## Prerequisites to work on this project using vagrant:
- [vagrant](http://www.vagrantup.com/)
- [virtualbox](https://www.virtualbox.org/)
- optional: [vagrant-hostsupdater](https://github.com/cogitatio/vagrant-hostsupdater)
  - run `vagrant plugin install vagrant-hostsupdater` to install
## Get started:
```bash
vagrant up
```
## Installation
To install it, follow the instructions in [`INSTALL_LINUX.md`](https://lab.libreho.st/libre.sh/compose.libre.sh/blob/master/INSTALL_LINUX.md),
or run our installer script:
https://lab.libreho.st/libre.sh/compose.libre.sh/raw/master/install.linux.sh
### What is libre.sh
libre.sh is a little framework to host Docker applications. It is simple and modular, and respects the convention-over-configuration paradigm.
It is aimed at hosters managing a large number of different web applications, along with the related domain names, email, and so on.
It is currently installed in production at 3 different hosters, hosting ~20 different web applications with ~500 containers.
Once installed, with one bash command you'll be able to:
- buy a domain name
- configure DNS for it
- configure email for it
- configure dkim for that domain
- configure dmarc for that domain
- configure autoconfig for that domain
- install and start a web application on that domain (WordPress, Nextcloud, piwik...)
- provision a TLS cert on that domain
Amazing, right?
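For instance, combining the flags documented under Installation below, a single hypothetical invocation could cover most of that list (`-b` buys the domain, `-c` configures DNS, `-i` configures OpenDKIM, `-s` starts the app):
```bash
libre provision -a wordpress -u example.org -b -c -i -s
```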
### Modular
The PaaS is really modular: it contains only the bare necessities, and you'll probably then want to add `system` modules or `applications`.
It contains 2 [unit-files](https://lab.libreho.st/libre.sh/compose.libre.sh/tree/master/unit-files) to manage system modules and applications, start them at boot, and load the appropriate environment.
### Support
You can use the following channels to request community support:
- [mailinglist/forum](https://forum.indie.host/t/about-the-libre-sh-category/71)
- [chat](https://chat.indie.host/channel/libre.sh)
For paid support, just send an inquiry to support@libre.sh.
You can also watch the [FOSDEM talk about libre.sh](https://fosdem.org/2017/schedule/event/libre_sh/).
All of this is hosted by libre.sh :)
## System modules
Here is a list of modules supported:
- https proxy:
- [HAProxy](https://lab.libreho.st/libre.sh/compose/haproxy)
- [Nginx](https://lab.libreho.st/libre.sh/compose/nginx)
- [monitoring](https://lab.libreho.st/libre.sh/compose/monitoring)
- [git-puller](https://lab.libreho.st/libre.sh/compose/git-puller)
Go to their respective page for more details.
### To install and start a module:
```
cd /system/
git clone https://lab.libreho.st/libre.sh/compose/[module]
cd [module]
libre enable
libre start
```
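For example, to install and start the monitoring module from the list above:
```
cd /system/
git clone https://lab.libreho.st/libre.sh/compose/monitoring
cd monitoring
libre enable
libre start
```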
Wait for the provisioning to finish (~40mins), and go to your browser: https://indiehosters.dev
## Applications
### FIXME: this is outdated: If you want to add another nginx instance apart from indiehosters.dev:
- For e.g. example.dev, put a cert for it in /data/per-user/example.dev/combined.pem on
the host system.
- Test it with `openssl s_server -cert /data/per-user/example.dev/combined.pem -www`
### List of supported applications
| Application | Latest Version | Comments |
|--------------|---------------------------|------------|
| wordpress    | 5.9                       | Includes support for SMTP email through libre.sh variables |
| dolibarr     | 15.0.3                    | Needs manual deletion of install.lock to upgrade |
### Installation
To install application `wordpress` on `example.org`, first point example.org at your server IP, and then just run:
```bash
libre provision -a wordpress -u example.org -s
```
Available flags:
- -u [arg] URL to process. Required.
- -a [arg] Application to install (wordpress in REPO_MODE).
- -t [arg] Checkout a specific tag or branch from the application repo. Defaults to master.
- -e [arg] Specify the email of the application admin.
- -s Start the application right away.
- -b Buy the associated domain name.
- -i Configure OpenDKIM.
- -c Configure DNS if possible.
Inside Vagrant, you can instead run:
```bash
vagrant ssh
sudo sh /data/indiehosters/scripts/approve-user.sh example.dev wordpress
```
Then check https://example.dev in your browser!
### Cleaning up
To clean up stuff from previous runs of your VM, you can do:
```bash
vagrant ssh
rm -rf /etc/systemd/system/multi-user.wants/*
```
and then restart the VM with `vagrant reload`.
## To debug a module or an application:
```bash
libre ps
libre logs -f --tail=100
libre stop
libre restart
```
## Contributing
If you have any issue (something not working, missing doc), please do report an issue here! Thanks
This system is used in production at [IndieHosters](https://indiehosters.net/) so it is maintained. If you use it, please tell us, and we'll be really happy to update this README!
You can help us by:
- starring this project
- sending us a thanks email
- reporting bugs
- writing documentation/blog on how you got up and running in 5mins
- writing more documentation
- sending us cake :) We loove cake!
## Contributors
This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
<a href="https://github.com/indiehosters/libre.sh/graphs/contributors"><img src="https://opencollective.com/libresh/contributors.svg?width=890&button=false" /></a>
## Backers
Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/libresh#backer)]
<a href="https://opencollective.com/libresh#backers" target="_blank"><img src="https://opencollective.com/libresh/backers.svg?width=890"></a>
## Sponsors
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/libresh#sponsor)]
<a href="https://opencollective.com/libresh/sponsor/0/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/1/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/2/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/3/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/4/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/5/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/6/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/7/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/8/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/libresh/sponsor/9/website" target="_blank"><img src="https://opencollective.com/libresh/sponsor/9/avatar.svg"></a>
## Other projects
Simplifying web application hosting has always been a goal for a lot of other projects; here are some projects that share goals with libre.sh:
- Yunohost https://yunohost.org
- Sandstorm https://sandstorm.io/
- Cloudron https://git.cloudron.io/cloudron/box
# TL;DR
- k8s
- [ ] ceph
- [ ] flannel
- [ ] baremetal install
# Object
The aim of this document is to sketch the broad lines of the future of libre.sh.
# Version 1
The current version, let's call it 1, is a nice opinionated framework for running a single host with docker-compose.
It provides a list of packages and modules compatible with this framework.
The best features of this framework are:
- https only
- some integration between the tools (auto provisioning of emails for new applications)
- domain name buying (Namecheap api)
- dns configuration (Namecheap api)
# Version 2 - k8s
This roadmap discusses the migration to kubernetes (k8s).
## Distributions
There are various k8s distributions (Tectonic, Deis, OpenShift...), and the aim of libre.sh is not to become yet another distribution.
It would be nice if we could list them, evaluate them, and decide to use one of them or not.
## Installation/Operation
libre.sh should be opinionated about the way to install and operate the cluster.
It should provide easy steps to install on bare metal first. We aim for libre software, and as such, we can't rely
on cloud providers like gcloud, aws, or digital ocean.
As a second priority, we should give easy instructions to deploy on any cloud provider, as people are free to choose their chains :)
## Storage
One big challenge in a k8s cluster is to provide what the major cloud providers offer as [PersistentVolume](https://kubernetes.io/docs/user-guide/persistent-volumes/) implementations.
In a libre cluster, this function would be achieved by a distributed file system.
After some investigation, the choice would be to use ceph.
There is already some work done on this, like the [ceph-docker](https://github.com/ceph/ceph-docker/tree/master/examples) repo.
## Network
Another big challenge is the network. k8s is strongly opinionated about what the network configuration should be.
Ideally, we would use some form of IPsec to secure the links between machines in a context where we can't trust the network (like at Hetzner).
There are 3 options:
- zerotier
- tinc vpn
- flannel that might implement IPsec in a near future
The cheapest in terms of work would be to bet on flannel.
## Packages
There is now a standard way to create and distribute packages.
We can then remove the idea of modules and applications.
They will all be packages.
The k8s standard for that is [helm](http://helm.sh/). There is already a big list of packages.
As for libre.sh, the idea would be to contribute the missing packages there.
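For instance, with helm v2 (command syntax from that era; treat the exact invocations as assumptions to check against your helm version), installing an existing package looks like:
```bash
helm init                 # installs the tiller component into the cluster (helm v2)
helm search wordpress     # find the chart in the public repo
helm install stable/wordpress --name blog
```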
### opportunistic packages
libre.sh would then just be a repo of documentation on how to install, operate, and manage a k8s cluster on bare metal.
Still, there is one place where we can make a difference.
This idea is called the opportunistic package:
a package based on an official one.
Let's take the example of WordPress.
The libre.sh version of WordPress would be based on the official one.
But it will have some mechanisms to discover services available inside the cluster it is running on.
These services could be:
- ldap
- piwik
- email
So, when you install a new WordPress, it will opportunistically try to discover whether there is an LDAP service in the cluster,
and if so, configure WordPress to use this LDAP service.
This pattern will help make it happen:
https://github.com/kubernetes-incubator/service-catalog
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.5.0"
# Size of the CoreOS cluster created by Vagrant
$num_instances=1
# Official CoreOS channel from which updates should be downloaded
$update_channel='stable'
# Setting for VirtualBox VMs
$vb_memory = 1024
$vb_cpus = 1
BASE_IP_ADDR = ENV['BASE_IP_ADDR'] || "192.168.65"
HOSTNAME = ENV['HOSTNAME'] || "indiehosters.dev"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "coreos-%s" % $update_channel
  config.vm.box_version = ">= 308.0.1"
  config.vm.box_url = "http://%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json" % $update_channel

  (1..$num_instances).each do |i|
    config.vm.define "core-#{i}" do |core|
      core.vm.provider :virtualbox do |vb|
        vb.memory = $vb_memory
        vb.cpus = $vb_cpus
        # On VirtualBox, we don't have guest additions or a functional vboxsf
        # in CoreOS, so tell Vagrant that so it can be smarter.
        vb.check_guest_additions = false
        vb.functional_vboxsf = false
      end

      # plugin conflict
      if Vagrant.has_plugin?("vagrant-vbguest") then
        core.vbguest.auto_update = false
      end

      core.vm.hostname = HOSTNAME
      core.hostsupdater.aliases = ["example.dev"]
      core.vm.network :private_network, ip: "#{BASE_IP_ADDR}.#{i+1}"
      core.vm.synced_folder ".", "/data/indiehosters", id: "coreos-indiehosters", :nfs => true, :mount_options => ['nolock,vers=3,udp']
      core.vm.provision :file, source: "./cloud-config", destination: "/tmp/vagrantfile-user-data"
      core.vm.provision :shell, inline: "cp /data/indiehosters/scripts/unsecure-certs/*.pem /data/server-wide/haproxy/approved-certs"
      core.vm.provision :shell, path: "./scripts/setup.sh", args: [HOSTNAME]
      core.vm.provision :shell, inline: "etcdctl set /services/default '{\"app\":\"nginx\", \"hostname\":\"#{HOSTNAME}\"}'"
      core.vm.provision :shell, path: "./scripts/approve-user.sh", args: [HOSTNAME, "nginx"]
    end
  end
end
#cloud-config
coreos:
  update:
    reboot-strategy: best-effort
  etcd:
    addr: 172.17.42.1:4001
    peer-addr: 172.17.42.1:7001
  units:
    - name: etcd.service
      command: start
#!/bin/sh
# Approve the combined certs for all sites on server $1 and copy them into place.
for i in `deploy/list-sites.sh $1`; do
  echo "Approving combined cert for $i";
  cp ../orchestration/TLS/combined/$i.pem ../orchestration/TLS/approved-certs/$i.pem;
  scp ../orchestration/TLS/approved-certs/$i.pem root@$1:/data/server-wide/haproxy/approved-certs/
done
#!/bin/sh
# Combine the cert, CA chain, and key for domain $1 into one .pem, then test-serve it.
if [ $# -eq 2 ]; then
  CA=$2
else
  CA="startssl"
fi
echo "CA is $CA"
echo Some information about cert ../orchestration/TLS/cert/$1.cert:
openssl x509 -text -in ../orchestration/TLS/cert/$1.cert | head -50 | grep -v ^\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
#echo Some information about chain cert ../orchestration/TLS/chain/$2.pem:
#openssl x509 -text -in ../orchestration/TLS/chain/$2.pem
#echo Some information about key ../orchestration/TLS/key/$1.key:
#openssl rsa -text -in ../orchestration/TLS/key/$1.key
cat ../orchestration/TLS/cert/$1.cert ../orchestration/TLS/chain/$CA.pem ../orchestration/TLS/key/$1.key > ../orchestration/TLS/combined/$1.pem
echo Running a test server on port 4433 on this server now \(please use your browser to check\):
openssl s_server -cert ../orchestration/TLS/combined/$1.pem -www
#!/bin/sh
if [ $# -ge 1 ]; then
  SERVER=$1
else
  echo "Usage: sh ./deploy/deploy.sh server [branch [user]]"
  exit 1
fi
if [ $# -ge 2 ]; then
  BRANCH=$2
else
  BRANCH="master"
fi
if [ $# -ge 3 ]; then
  USER=$3
else
  USER="core"
fi
if [ -e ../orchestration/per-server/$SERVER/default-site ]; then
  DEFAULTSITE=`cat ../orchestration/per-server/$SERVER/default-site`
else
  DEFAULTSITE=$SERVER
fi
echo "Infrastructure branch is $BRANCH"
echo "Remote user is $USER"
echo "Default site is $DEFAULTSITE"
chmod -R go-w ../orchestration/deploy-keys
if [ -f ../orchestration/deploy-keys/authorized_keys ]; then
  scp -r ../orchestration/deploy-keys $USER@$SERVER:.ssh
fi
scp ./deploy/onServer.sh $USER@$SERVER:
ssh $USER@$SERVER sudo mkdir -p /var/lib/coreos-install/
scp ../infrastructure/cloud-config $USER@$SERVER:/var/lib/coreos-install/user_data
ssh $USER@$SERVER sudo sh ./onServer.sh $BRANCH $DEFAULTSITE
cd ../orchestration/per-server/$SERVER/sites/
for i in * ; do
  echo "setting up site $i as `cat $i` on $SERVER";
  ssh $USER@$SERVER sudo mkdir -p /data/per-user/$i/
  scp ../../../TLS/approved-certs/$i.pem $USER@$SERVER:/data/server-wide/haproxy/approved-certs/$i.pem
  rsync -r ../../../../user-data/live/$SERVER/$i/ $USER@$SERVER:/data/per-user/$i/
  ssh $USER@$SERVER sudo sh /data/infrastructure/scripts/activate-user.sh $i `cat $i`
done
# Restart the default site now that its data has been rsync'ed in place:
ssh $USER@$SERVER sudo systemctl restart nginx\@$DEFAULTSITE
#!/bin/sh
ssh-keygen -R $1
#!/bin/sh
cd ../orchestration/per-server/$1/sites
for i in *; do
  echo $i
done
#!/bin/sh
echo Starting etcd:
/usr/bin/coreos-cloudinit --from-file=/var/lib/coreos-install/user_data
echo Cloning the infrastructure repo into /data/infrastructure:
mkdir /data
cd /data
git clone https://github.com/indiehosters/infrastructure.git
cd infrastructure
echo Checking out $1 branch of the IndieHosters infrastructure:
git checkout $1
git pull
echo Running the server setup script:
sh scripts/setup.sh $2
# Deploying a server
## Before you start
Make sure you read [getting started](getting-started-as-a-hoster.md) first and have created your `indiehosters` folder structure somewhere
on your laptop.
### Prepare your orchestration data
* Get a CoreOS server, for instance from [RackSpace](https://www.rackspace.com) or [Vultr](https://www.vultr.com).
* If you didn't add your public ssh key during the order process (e.g. through your IaaS control panel or a cloud-config file), and unless it's already there from a previous server deploy job, copy your laptop's public ssh key (probably in `~/.ssh/id_rsa.pub`) to `indiehosters/orchestration/deploy-keys/authorized_keys`
* Give the new server a name (in this example, we call the server 'k3')
* Create an empty folder `indiehosters/orchestration/per-server/k3/sites` (replace 'k3' with your server's domain name)
* Add k3 to your /etc/hosts with the right IP address
* If you have used this name before, run `./deploy/forget-server-fingerprint.sh k3`
* From the `indiehosters/dev-scripts` folder, run `sh ./deploy/deploy.sh k3`
* This will ask for the ssh password once; the rest should be automatic!
### Adding a website to your server
* For each site you want to deploy on the server, e.g. example.com, do the following:
* Does example.com already exist as a domain name?
* If yes, then find out to what extent it's currently in use (and needs to be migrated with care). There are a few options:
* Transfer the domain into your DNR account.
* Set up DNS hosting for it and ask the owner to set authoritative DNS to the DNS servers you control
* Ask the user to keep DNR and DNS control where it is currently, but to switch DNS when it's ready at the new server
* In any case, you will probably need access to the hostmaster@example.com email address, for the StartSSL process *before*
the final DNS switch. You could also ask them to tell you the verification code that arrives there, but that has to be done
in real time, immediately when you click 'verify' in the StartSSL UI. If they forward the email the next day, then the token
will already have expired.
* If no, register it (at Namecheap or elsewhere).
* Decide which image to run as the user's main website software (check out `../dockerfiles/sites/` to see which ones can be used for this)
* Say you picked nginx, then create a text file containing just the word 'nginx' at
`indiehosters/orchestration/per-server/k3/sites/example.com`
* If you already have some content that should go on there, and which is compatible with the image you chose,
put it in `indiehosters/user-data/example.com/nginx/` (replace 'nginx' with the actual image name you're using;
note that for wordpress it's currently a bit more complicated, as this relies on more than one image, so you
would then probably have to import both the user's wordpress folder and their mysql folder).
* Unless there is already a TLS certificate at `indiehosters/user-data/example.com/tls.pem`, get one
(from StartSSL or elsewhere) for example.com and concatenate the certificate
and its unencrypted private key into `indiehosters/user-data/example.com/tls.pem` (see the sketch after this list).
* Make sure the TLS certificate is valid (use `indiehosters/infrastructure/scripts/check-cert.sh` for this), and if it is,
copy it from
`indiehosters/user-data/example.com/tls.pem`
to `indiehosters/orchestration/TLS/approved-certs/example.com.pem`.
* Now run `deploy/deploy.sh k3` again. It will make sure the server is in the correct state, and scp the user data and the
approved cert into place, start a container running the image requested, update haproxy config, and restart the haproxy container.
* Test the site using your /etc/hosts. If you did not import data, there should be some default message there. For wordpress, be aware
that the site is installed in a state where any visitor can take control of it.
* Switch DNS and note down the current DNS situation in `indiehosters/orchestration/DNS/example.com` (or if you're hosting
a subdomain of another domain, update whichever is the zone file you edited).
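A minimal sketch of the certificate step above (hypothetical file names; your CA's output may differ):
```bash
# Concatenate the cert and its unencrypted private key into tls.pem:
cat example.com.cert example.com.key > indiehosters/user-data/example.com/tls.pem
# Quick sanity check, as suggested elsewhere in these docs:
openssl s_server -cert indiehosters/user-data/example.com/tls.pem -www
```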
## Deploying a mailserver
Right now, this is still a bit separate from the rest of the infrastructure - just get a server with Docker (doesn't have to be coreos), and run:
```bash
docker run -d -p 25:25 -p 443:443 indiehosters/yunohost /sbin/init
```
Then set up the mail domains and forwards through the web interface (with self-signed cert) on https://server.com/.
Use Chrome for this, because Firefox will refuse to let you view the admin interface because of the invalid TLS cert.
The initial admin password is 'changeme' - change it on https://server.com/yunohost/admin/#/tools/adminpw
# Developing Dockerfiles and infrastructure
## Developing Dockerfiles
To develop Dockerfiles, you can use a server that's not serving any live domains, use `docker` locally on your laptop, or use the `vagrant up` instructions to run the infrastructure inside vagrant.
## Developing infrastructure
To develop the infrastructure, create a branch on the infrastructure repo and specify that branch at the end of the deploy command, for instance:
```bash
sh ./deploy/deploy.sh k4 dev
```
This will deploy a server at whatever IP address "k4" points to in your /etc/hosts, using the "dev" branch of https://github.com/indiehosters/infrastructure.
## Testing new Dockerfiles in the infrastructure
To test the infrastructure with a changed Dockerfile, you need to take several steps:
* Develop the new Dockerfiles as described above at "Developing Dockerfiles"
* When you're happy with the result, publish this new Dockerfile onto the docker hub registry under your own username (e.g. michielbdejong/haproxy-with-http-2.0)
* Now create a branch on the infrastructure repo (e.g. "dev-http-2.0")
* In this branch, grep for the Dockerfile you are updating, and replace its name with the experimental one everywhere:
* the `docker pull` statement in scripts/setup.sh
* the `docker run` statement in the appropriate systemd service file inside unit-files/
* Push the branch to the https://github.com/indiehosters/infrastructure repo
* Now deploy a server from your experimental infrastructure branch (which references your experimental Docker image), as described above at "Developing infrastructure"
Getting started as an IndieHosters hoster
===========
# Prerequisites
Each IndieHoster is an entirely autonomous operator, without any infrastructural ties to other IndieHosters.
These scripts and docs will help you run and manage servers and services as an IndieHoster, whether you're
certified as a branch of the IndieHosters franchise or not. To get started, create a folder structure
on your laptop as follows:
```
indiehosters --- billing
              |
              -- dev-scripts
              |
              -- dockerfiles
              |
              -- infrastructure
              |
              -- orchestration --- deploy-keys
              |                 |
              |                 -- DNR
              |                 |
              |                 -- DNS
              |                 |
              |                 -- MON
              |                 |
              |                 -- per-server
              |                 |
              |                 -- TLS --- approved-certs
              |                        |
              -- user-data --- backup  -- cert
                            |          |
                            -- live    -- chain
                                       |
                                       -- combined
                                       |
                                       -- key
```
The `infrastructure`, `dockerfiles`, and `dev-scripts` folders are the corresponding repos under https://github.com/indiehosters.
# Hoster data
The `orchestration` folder will contain your orchestration data (what *should* be happening on each server, at each domain name
registrar, and at each TLS certificate authority), and `billing` will contain
your billing data (data about your human customers, including contact info,
who is in control of which product, which products were/should be added/removed on which dates, and the history of all tech support
issues of this customer, plus, for paying customers, the billing and payment history).
If you're used to working with git as a versioning tool, then it's a good idea to make `indiehosters/orchestration` and
`indiehosters/billing` into (private!) git repos, so
that you can track changes over time, and search the history to resolve mysteries when they occur. You may also use a different
versioning system, or just take weekly and daily backups (but then it's probably a good idea to retain the weeklies for a couple
of years, and even then it will not be as complete as a history in a versioning system).
The per-server orchestration data is about what a specific one of your servers *should* be doing at this moment.
This is fed into CoreOS (systemd -> etcd -> confd -> docker) to make sure the server actually starts and keeps doing these things,
and also into monitoring, to make sure you get alerted when a server misbehaves.
The DNR, TLS, MON, and DNS folders under orchestration are for you to keep track of Domain Name Registration, Transport
Layer Security, MONitoring, and Domain Name System services which you are probably getting from
third-party service providers, alongside the services which
you run on your own servers.
Note that although it's probably inevitable that you resell DNR and TLS services from some third party, and your monitoring would ideally
also run on a system that's decoupled from your actual servers, you may not be reselling DNS
hosting. If you host DNS for your customer on server-wide bind services that directly read data from files on the per-user data folders,
then you don't need this folder, and instead DNS data will be under `indiehosters/user-data`.
The deploy-keys folder contains the authorized_keys file which is the first thing you scp to each server you add to your fleet.
# User data
Everything under `indiehosters/user-data` is data owned by one of your users. Which human owns which site is something you can administer
by hand somehow in the `indiehosters/billing` folder.
All user data is *untrusted* from your point of view: it is not owned by you as a hoster,
and users may change it at any time (and then probably contact you for a backup whenever they mess up!). It makes sense to give users
only read-only access to this data by default, and have a big "Are you sure? Warranty will be void!" warning before they can activate
write-access to their own data (and then probably trigger an extra backup run just before allowing them to edit their own raw data).
This is how some operating systems on user devices also deal with this.
But in the end, the user, and not you, owns this data, and they can do with it what they want, at their own risk.
Just like a mailman is not supposed to open and read letters, you should also treat each user's data as a closed envelope
which you never open, except in the following cases:
* There may be things you need to import from specific files in there (like a user-supplied TLS certificate or DNS zone)
* When running backups, you sometimes can't avoid seeing some of the modified filenames flying by (depending on the backup software)
* After explicit permission of the user, when this is useful for tech support (e.g. fix a corrupt mysql database for them)
# Backups
This folder structure contains all the critical data of your operations as an IndieHoster, from start to finish, so make sure you don't
ever lose it, no matter what calamity may strike. Once a month, put a copy of it on a USB stick, and put that in a physically safe place.
You may give a trusted person an emergency key to your infrastructure, in case you walk under a bus. Think about the risk of data loss and
establish an emergency recovery plan for when, for instance, the hard disk of your laptop or of one of your servers dies.
Make sure you often rsync the live data from each of your servers to indiehosters/user-data/live/{servername}/{domain} and store snapshots
regularly (for instance to indiehosters/user-data/backup). Users *will* contact you sooner or later asking for "the backup from last Tuesday"
and they will expect you to have one.
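A minimal sketch of such a backup run (hypothetical server name `k3` and domain, assuming the `core` user as elsewhere in these docs):
```bash
rsync -az core@k3:/data/per-user/example.com/ \
  indiehosters/user-data/live/k3/example.com/
# Keep dated snapshots as well:
cp -a indiehosters/user-data/live/k3 "indiehosters/user-data/backup/k3-$(date +%F)"
```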
# Basic digital hygiene
At the same time, be careful who may obtain access to your critical data. Is your laptop really safe? Does the NSA have access to the servers you run?
Someone may plant a Trojan on a computer in an internet cafe from where you access your Facebook account, access your gmail account
for which you use the same password, reset your RackSpace password and restore a backup from your Cloud Files to somewhere else.
Make a diagram of how your laptop talks to your USB sticks and your servers. Then make a diagram of the services you use and to which
email addresses they send password reset emails. Draw a perimeter of trust in both diagrams, and start taking some basic measures to
keep your laptop secure.
Don't mix accounts and email addresses which you may
use from other computers, and keep your IndieHosters passwords and accounts separate from your other passwords and accounts, and reset
them every few months. It might even
make sense to dual-boot your laptop or boot from a live disk which resets on boot to make sure everything you do with IndieHosters data
is done in a sterile environment.
Also: lock your screen when walking away from your laptop, and think about what someone could do with it if they were to steal your bag,
or your smartphone.
# Do I have to use this?
You can of course use any folder structure and scripts you want, as long as it doesn't change the format of each user-data folder, so that
your customers can still migrate at will between you and other IndieHosters. However, you might find some of the scripts in this repo
helpful at some point, and they (will) rely on
`../infrastructure`, `../dockerfiles`, and `../orchestration/per-server` to be where they are in the diagram above.
That's why it makes sense to create this folder structure now, and then continue to [deploying a server](deploying-a-server.md)! :)
#!/bin/bash
# Seed a default nginx site for $USER if none exists yet.
if [ ! -d "/data/per-user/$USER/nginx/data" ]; then
  mkdir -p /data/per-user/$USER/nginx/data/www-content
  echo Hello $USER > /data/per-user/$USER/nginx/data/www-content/index.html
  touch /data/per-user/$USER/nginx/.env
fi