#!/bin/sh
# Approve the combined cert for each site on server $1 and copy it into place on the server.
for i in `deploy/list-sites.sh $1`; do
  echo "Approving combined cert for $i";
  cp ../orchestration/TLS/combined/$i.pem ../orchestration/TLS/approved-certs/$i.pem;
  scp ../orchestration/TLS/approved-certs/$i.pem root@$1:/data/server-wide/haproxy/approved-certs/
done
#!/bin/sh
# Inspect a cert, build the combined .pem (cert + CA chain + key), and start a test TLS server for a manual check.
if [ $# -eq 2 ]; then
CA=$2
else
CA="startssl"
fi
echo "CA is $CA"
echo Some information about cert ../orchestration/TLS/cert/$1.cert:
openssl x509 -text -in ../orchestration/TLS/cert/$1.cert | head -50 | grep -v '^ \{16\}'   # skip the 16-space-indented hex dump lines
#echo Some information about chain cert ../orchestration/TLS/chain/$2.pem:
#openssl x509 -text -in ../orchestration/TLS/chain/$2.pem
#echo Some information about key ../orchestration/TLS/key/$1.key:
#openssl rsa -text -in ../orchestration/TLS/key/$1.key
cat ../orchestration/TLS/cert/$1.cert ../orchestration/TLS/chain/$CA.pem ../orchestration/TLS/key/$1.key > ../orchestration/TLS/combined/$1.pem
echo Running a test server on port 4433 on this server now \(please use your browser to check\):
openssl s_server -cert ../orchestration/TLS/combined/$1.pem -www
startssl
#!/bin/sh
# Deploy a server: copy the hoster data and cloud-config across, then run onServer.sh on it.
if [ $# -ge 1 ]; then
SERVER=$1
else
echo "Usage: sh ./deploy/deploy.sh server [folder [branch [user]]]"
exit 1
fi
if [ $# -ge 2 ]; then
FOLDER=$2
else
FOLDER="./data/"
fi
if [ $# -ge 3 ]; then
BRANCH=$3
else
BRANCH="master"
fi
if [ $# -ge 4 ]; then
USER=$4
else
USER="core"
fi
if [ -e "${FOLDER}server-wide/haproxy/approved-certs/${SERVER}.pem" ]; then
DEFAULTSITE=$SERVER
else
echo "Please make sure ${FOLDER}server-wide/haproxy/approved-certs/${SERVER}.pem exists, then retry"
exit 1
fi
echo "Hoster data folder is $FOLDER"
echo "Infrastructure branch is $BRANCH"
echo "Remote user is $USER"
echo "Default site is $DEFAULTSITE"
scp -r $FOLDER $USER@$SERVER:/data
scp ./deploy/onServer.sh $USER@$SERVER:
ssh $USER@$SERVER sudo mkdir -p /var/lib/coreos-install/
scp cloud-config $USER@$SERVER:/var/lib/coreos-install/user_data
ssh $USER@$SERVER sudo sh ./onServer.sh $BRANCH $DEFAULTSITE
#!/bin/sh
# Forget a server's old host key (e.g. after a reinstall under the same name).
ssh-keygen -R $1
#!/bin/sh
# List the sites configured for server $1.
cd ../orchestration/per-server/$1/sites
for i in *; do
echo $i
done
#!/bin/sh
echo Starting etcd:
/usr/bin/coreos-cloudinit --from-file=/var/lib/coreos-install/user_data
echo Cloning the indiehosters repo into /data/indiehosters:
mkdir -p /data
cd /data
git clone https://github.com/indiehosters/indiehosters.git
cd indiehosters
echo Checking out the $1 branch of the indiehosters repository:
git checkout $1
git pull
echo Running the server setup script:
sh scripts/setup.sh $2
# Architecture based on systemd, docker, haproxy, and some bash scripts
Our architecture revolves around a
[set of systemd unit files](https://github.com/indiehosters/indiehosters/tree/master/unit-files). They come in various types:
## Server-wide processes
The haproxy.* and postfix.* unit files correspond to two server-wide processes. They run Docker containers from images in the
[server-wide/ folder of our dockerfiles repo](https://github.com/indiehosters/dockerfiles/tree/master/server-wide).
The haproxy-confd.* unit starts a side-kick service for haproxy, which monitors `etcdctl ls /services` to see if any new backends were created, and updates the haproxy configuration, which lives in `/data/server-wide/haproxy/` on the host system. It is required by the haproxy.* unit, which means that when you run `systemctl start haproxy` and then run `docker ps` or `systemctl list-units`, you will see that systemd started not only the haproxy container but also the haproxy-confd container.
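For example (a sketch, assuming the stock unit and container names):

```bash
systemctl start haproxy    # systemd pulls in haproxy-confd via the unit dependency
docker ps                  # should list both the haproxy and the haproxy-confd container
etcdctl ls /services       # the keys that haproxy-confd watches for new backends
```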
There is currently no similar side-kick for updating `/data/server-wide/postfix/`, so you will have to update the configuration files in that folder manually, and then run `systemctl restart postfix`.
The `scripts/setup.sh` script takes care of setting up etcd, enabling and starting the haproxy and postfix services (as well as one haproxy backend, to serve the default site), and the haproxy-confd side-kick that listens for changes to the backends configuration in etcd, so that new backends are automatically added to the haproxy config as soon as their IP address is written into etcd.
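Condensed into commands, that sequence roughly amounts to the following (a sketch; see `scripts/setup.sh` for the real steps, the unit names here are assumptions):

```bash
# sketch of the setup sequence, not verbatim from scripts/setup.sh
systemctl enable haproxy.service postfix.service
systemctl start haproxy.service postfix.service   # haproxy-confd comes along as a side-kick
systemctl start nginx@$DEFAULTSITE.service        # one backend to serve the default site
```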
## HAProxy backends: nginx, wordpress
A per-user process is a haproxy backend for a specific domain name. At the time of writing we have two applications available: nginx and wordpress.
You will notice there are also some other units in the `unit-files/` folder of this repo, like the gitpuller and mysql ones. Whenever you start a wordpress unit, it requires a mysql service.
Whenever you start an nginx unit, it wants a gitpuller unit. In all three cases, an -importer unit and a -discovery unit are required.
This works through a
[`Requires=` directive](https://github.com/indiehosters/indiehosters/blob/0.1.0/unit-files/nginx@.service#L6-L7) which systemd interprets, so that if you start one service, its dependencies are also started (you can see that in `systemctl list-units`).
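For example, after starting a single unit you can watch the dependency chain come up (the domain name is a placeholder):

```bash
systemctl start nginx@example.com.service
systemctl list-units | grep example.com   # also shows the gitpuller, -importer and -discovery units
```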
## Discovery
The -discovery units find out the local IP address of the backend, check whether it is up by doing a `curl`, and if so, write the IP address into etcd. The `haproxy-confd` process notices this, and updates the haproxy config.
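In shell terms, a -discovery unit boils down to roughly this (a sketch, not the actual unit code; the container and key names are assumptions):

```bash
# probe the backend container and publish its IP address to etcd
IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' nginx-example.com)
if curl -s -o /dev/null "http://$IP/"; then
  etcdctl set "/services/example.com" "$IP"
fi
```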
## Import
The -import units check whether data exists and, if it doesn't, create the initial data state, for instance by doing a git clone, untarring the php files of a virgin wordpress installation, or both. So -import is actually a misnomer; -init would probably have been a better name.
Note that some initialization is also done by the Docker images themselves - for instance the wordpress image runs a [shell script](https://github.com/pierreozoux/tutum-docker-wordpress-nosql/blob/master/run-wordpress.sh) at container startup, that creates the initial mysql database if it didn't exist yet.
## Gitpuller
The -gitpuller unit is scheduled to run every 10 minutes by the .timer file, and is configured to run only if the GITURL file exists at the path specified in the .path file. When it runs, it does a git pull to update the website content of one of the haproxy backends from the git repository mentioned in the GITURL file.
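Stripped of the systemd .timer/.path machinery, the behaviour is roughly this (paths as used by the gitpuller script included in this repo):

```bash
# what the -gitpuller unit does every 10 minutes, as plain shell
if [ -e /data/per-user/$USER/$APP/data/GITURL ]; then
  cd /data/per-user/$USER/$APP/data/www-content && git pull
fi
```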
## Scripts
There are two important scripts you can run at your server. You can also run the commands they contain manually, then you just use them as a cheatsheet of how to [set up a new server](https://github.com/indiehosters/indiehosters/tree/master/scripts/setup.sh) or [activate a new user](https://github.com/indiehosters/indiehosters/tree/master/scripts/activate-user.sh), respectively.
There are also deploy scripts which do the same from a jump box, so you can orchestrate multiple servers from one central vantage point. They are in the
[deploy/](https://github.com/indiehosters/indiehosters/tree/master/deploy)
folder of this repo, and they are the scripts referred to in the 'how to deploy a server' document. They basically run the scripts from the scripts/ folder over ssh.
# Deploying a server
## Before you start
Make sure you read [getting started](getting-started-as-a-hoster.md) first.
### Prepare your orchestration data
* Get a CoreOS server, for instance from [RackSpace](rackspace.com) or [Vultr](vultr.com).
* If you didn't add your public ssh key during the order process (e.g. through your IaaS control panel or a cloud-config file),
scp your laptop's public ssh key (probably in `~/.ssh/id_rsa.pub`) to `.ssh/authorized_keys` for the remote user
you will be ssh-ing and scp-ing as (the default remote user of our deploy scripts is 'core').
* Give the new server a name (in this example, we call the server 'k3')
* Add k3 to your /etc/hosts with the right IP address
* If you have used this name before, run `./deploy/forget-server-fingerprint.sh k3`
* From the root folder of this repository, run `sh ./deploy/deploy.sh k3 ./data/ master root` (where `./data/` should contain
`server-wide/postfix/`
and `server-wide/haproxy/approved-certs/k3.pem`; see the existing folder `data/` in this repo for an example of what the email forwards and
TLS certificate files should look like).
* Add the default site by following the 'Adding a website to your server' instructions below with domain name k3 instead of example.com
* The rest should be automatic! (A condensed command walk-through follows below.)
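Condensed into commands, the walk-through for a fresh server 'k3' looks like this (the IP address is a placeholder):

```bash
echo "203.0.113.10 k3" | sudo tee -a /etc/hosts   # point the name at your new server
./deploy/forget-server-fingerprint.sh k3           # only needed if the name was used before
sh ./deploy/deploy.sh k3 ./data/ master root
```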
### Preparing backups
* ssh into your server, and run `ssh-keygen -t rsa`
* set up a backups server at an independent location (at least a different data center, but preferably also a different IaaS provider, the bu25 plan of https://securedragon.net/ is a good option at 3 dollars per month).
* set up a git server with one private git repo per domain by following http://www.git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server (instead of 'project.git' you can use 'domainname.com.git')
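Following that guide, creating one repo per domain on the backup server comes down to something like this (assuming a 'git' user and `/opt/git`, as in the linked instructions):

```bash
# on the backup server: one bare repo per domain
sudo -u git mkdir -p /opt/git/domainname.com.git
sudo -u git git init --bare /opt/git/domainname.com.git
```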
### Adding a website to your server
* For each site you want to deploy on the server, e.g. example.com, do the following:
* Does example.com already exist as a domain name?
* If yes, then find out to what extent it's currently in use (and needs to be migrated with care). There are a few options:
* Transfer the domain into your DNR account
* Set up DNS hosting for it and ask the owner to set authoritative DNS to the DNS servers you control
* Ask the user to keep DNR and DNS control where it is currently, but to switch DNS when it's ready at the new server, and every time
you add or remove an IP address (not a good idea, unless the user insists that they prefer this option)
* In any case, you will probably need access to the hostmaster@example.com email address, for the StartSSL process *before*
the final DNS switch. You could also ask them to tell you the verification code that arrives there, but that has to be done
in real time, immediately when you click 'verify' in the StartSSL UI. If they forward the email the next day, then the token
will already have expired.
* If no, register it (at Namecheap or elsewhere).
* Decide which image to run as the user's main website software (in version 0.1 only 'nginx' is supported)
* If you already have some content that should go on there, and which is compatible with the image you chose,
put it in a public git repository somewhere.
* Unless there is already a TLS certificate at `./data/server-wide/haproxy/example.com.pem`, get one
(from StartSSL or elsewhere) for example.com and concatenate the certificate
and its unencrypted private key into `indiehosters/user-data/example.com/tls.pem`
* Make sure the TLS certificate is valid (use `scripts/check-cert.sh` for this).
* Now run `deploy/add-site.sh k3 example.com ../hoster-data/TLS/example.com.pem nginx https://github.com/someone/example.com.git root`.
It will make sure the server is in the correct state, git pull and scp the user data and the
approved cert into place, start a container running the requested image, update the haproxy config, and restart the haproxy container.
* set up a git repo for the new site on the backup server (see http://www.git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server again), and for instance if you called the backup repo example.com.git and your backup server is in /etc/hosts on k3 as 'bu25', ssh into k3 and run:
echo "git@bu25:/opt/git/example.com.git" > /data/per-user/example.com/backup/BACKUPDEST
USER=example.com
/data/indiehosters/importers/backup-init.sh
* Test the site using your /etc/hosts; you should see the data from the git repo on both http and https (a curl sketch follows below).
* Switch DNS and monitoring.
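A quick smoke test from your laptop could look like this (assuming example.com resolves to the new server via your /etc/hosts):

```bash
curl -I http://example.com/    # expect content served from the backend
curl -I https://example.com/   # add -k if the signing CA is not in your local trust store
```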
# Developing Dockerfiles and infrastructure
## Developing Dockerfiles
To develop Dockerfiles, you can use a server that's not serving any live domains, use `docker` locally on your laptop, or use the `vagrant up` instructions to run the infrastructure inside vagrant.
## Developing infrastructure
To develop the infrastructure, create a branch on this repo and specify that branch at the end of the deploy command, for instance:
```bash
sh ./deploy/deploy.sh k4 ./data/ dev
```
That will deploy a server at whatever IP address "k4" points to in your /etc/hosts, using the "dev" branch of https://github.com/indiehosters/indiehosters.
## Testing new Dockerfiles in the infrastructure
To test the infrastructure with a changed Dockerfile, you need to take several steps:
* Develop the new Dockerfiles as described above at "Developing Dockerfiles"
* When you're happy with the result, publish this new Dockerfile onto the docker hub registry under your own username (e.g. michielbdejong/haproxy-with-http-2.0)
* Now create a branch on the infrastructure repo (e.g. "dev-http-2.0")
* In this branch, grep for the Dockerfile you are updating, and replace its name with the experimental one everywhere (see the one-liner below):
* the `docker pull` statement in scripts/setup.sh
* the `docker run` statement in the appropriate systemd service file inside unit-files/
* Push the branch to the https://github.com/indiehosters/indiehosters repo (if you don't have access to that, you will have to edit
`deploy/onServer.sh` to use a different repo, to which you do have access).
* Now deploy a server from your experimental infrastructure branch (which references your experimental Docker image), as described above at "Developing infrastructure"
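The grep-and-replace step can be done in one go, for example (the stock image name here is an assumption; the experimental name is from the example above):

```bash
# swap the stock image for the experimental one in the setup script and unit files
grep -rl 'indiehosters/haproxy' scripts/ unit-files/ \
  | xargs sed -i 's|indiehosters/haproxy|michielbdejong/haproxy-with-http-2.0|g'
```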
Getting started as an IndieHosters hoster
===========
# Prerequisites
Each IndieHoster is an entirely autonomous operator, without any infrastructural ties to other IndieHosters.
These scripts and docs will help you run and manage servers and services as an IndieHoster, whether you're
certified as a branch of the IndieHosters franchise or not.
# Hoster data
If you're used to working with git as a versioning tool, then it's a good idea to make `hoster-data` and
`billing` into (private!) git repos where you keep track of what you're doing, including e.g. TLS certificates, so
that you can track changes over time, and search the history to resolve mysteries when they occur. You may also use a different
versioning system, or just take weekly and daily backups (but then it's probably a good idea to retain the weeklies for a couple
of years, and even then it will not be as complete as a history in a versioning system).
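A minimal way to start such a history could look like this (a sketch; the folder layout is an assumption):

```bash
cd hoster-data
git init
git add -A && git commit -m "initial snapshot of hoster-data"
```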
The hoster-data is about what each one of your servers *should* be doing at this moment.
This is fed into CoreOS (systemd -> etcd -> confd -> docker) to make sure the server actually starts and keeps doing these things,
and also into monitoring, to make sure you get alerted when a server misbehaves.
You probably also want to keep track of Domain Name Registration, Transport
Layer Security, Monitoring, and Domain Name System services which you are probably getting from
third-party service providers, alongside the services which
you run on your own servers.
Note that although it's probably inevitable that you resell DNR and TLS services from some third party, and your monitoring would ideally
also run on a system that's decoupled from your actual servers, you may end up not reselling DNS
hosting: if you host DNS for your customers on server-wide bind services that read their data directly from files in the per-user data folders,
then DNS data counts as user-data for you.
# User data
User data is data owned by one of your users. Which human owns which site is something you can administer
by hand in the `billing` folder.
All user data is *untrusted* from your point of view: it is not owned by you as a hoster,
and users may change it at any time (and then probably contact you for a backup whenever they mess up!). It makes sense to give users
only read-only access to this data by default, and to show a big "Are you sure? Warranty will be void!" warning before they can activate
write-access to their own data (and then probably trigger an extra backup run just before allowing them to edit their own raw data).
This is how some operating systems on user devices deal with this, too.
But in the end the user, not you, owns this data, and they can do with it what they want, at their own risk.
Just like a mailman is not supposed to open and read letters, you also should treat each user's data as a closed envelope
which you never open up, unless in the following cases:
* There may be things you need to import from specific files on there (like a user-supplied TLS certificate or DNS zone)
* When running backups, you sometimes can't avoid seeing some of the modified filenames flying by (depending on the backup software)
* After explicit permission of the user, when this is useful for tech support (e.g. fix a corrupt mysql database for them)
In version 0.1 no user data exists, because the TLS cert is part of the hoster-data, and so are the secondary email address to forward
to and the git repository to pull the website content from. We don't need to back up users' websites, because they are already versioned
and backed up in the git repository from which we pull them.
# Backups
Your user-data, hoster-data, and billing folders together contain all the critical data
of your operations as an IndieHoster, from start to finish, so make sure you don't
ever lose it, no matter what calamity may strike. Once a month, put a copy of it on a USB stick, and put that in a physically safe place.
You may give a trusted person an emergency key to your infrastructure, in case you walk under a bus. Think about the risk of data loss and
establish an emergency recovery plan for when, for instance, the hard disk of your laptop or of one of your servers dies.
Make sure you often rsync the live data from each of your servers to somewhere else, and store snapshots of it
regularly. Users *will* contact you sooner or later asking for "the backup from last Tuesday"
and they will expect you to have one.
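For example (a sketch; the host name and paths are placeholders):

```bash
# pull a dated snapshot of a server's live data onto your backup machine
rsync -az root@k3:/data/ ~/snapshots/k3-$(date +%F)/
```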
# Basic digital hygiene
At the same time, be careful who may obtain access to your critical data. Is your laptop really safe? Does the NSA have access to the servers you run?
Someone may plant a Trojan on a computer in an internet cafe from where you access your Facebook account, access your gmail account
for which you use the same password, reset your RackSpace password and restore a backup from your Cloud Files to somewhere else.
Make a diagram of how your laptop talks to your USB sticks and your servers. Then make a diagram of the services you use and to which
email addresses they send password reset emails. Draw a perimeter of trust in both diagrams, and start taking some basic measures to
keep your daily operations secure.
Don't mix accounts and email addresses which you may use from other computers; keep your IndieHosters passwords and accounts separate
from your other passwords and accounts, and reset them every few months. It might even
make sense to dual-boot your laptop or boot from a live disk which resets on boot, to make sure everything you do with IndieHosters data
is done in a sterile environment. Ubuntu has a 'guest account' option on the login screen which may also be handy for this.
Also: lock your screen when walking away from your laptop, and think about what someone could do with it if they were to steal your bag,
or your smartphone.
# Do I have to use this?
As an IndieHoster you can of course use any infrastructure and scripts you want, as long as you don't change the format of each user-data folder, so that
your customers can still migrate at will between you and other IndieHosters. However, you might find some of the scripts in this repo
helpful at some point, and we hope you contribute any features you add back upstream to this repo.
Thanks for taking the time to read through these general considerations - the next topic is [deploying a server](deploying-a-server.md)! :)
#!/bin/bash -eux
# Initialize the git-based backup repo for $USER and push it to BACKUPDEST if one is configured.
echo initializing backups for $USER
mkdir -p /data/per-user/$USER/backup/mysql
mkdir -p /data/per-user/$USER/backup/www
git config --global user.email "backups@`hostname`"
git config --global user.name "`hostname` hourly backups"
git config --global push.default simple
cd /data/per-user/$USER/backup/
git init
echo "backups of $USER at IndieHosters server `hostname`" > README.md
git add README.md
git commit -m"initial commit"
if [ -e /data/per-user/$USER/backup/BACKUPDEST ]; then
cd /data/per-user/$USER/backup/
git remote add destination `cat /data/per-user/$USER/backup/BACKUPDEST`
git push -u destination master
fi
#!/bin/bash -eux
# Hourly backup: dump mysql databases and copy www content into the per-user backup repo, then commit and push.
if [ -e /data/per-user/$USER/mysql ]; then
echo backing up mysql databases for $USER
mkdir -p /data/per-user/$USER/backup/mysql/
cp /data/per-user/$USER/mysql/.env /data/per-user/$USER/backup/mysql/.env
/usr/bin/docker run --link mysql-$USER:db \
--env-file /data/per-user/$USER/mysql/.env \
indiehosters/mysql mysqldump --all-databases --events -u admin \
-p$(cat /data/per-user/$USER/mysql/.env | cut -d'=' -f2) \
-h db > /data/per-user/$USER/backup/mysql/dump.sql
fi
if [ -e /data/per-user/$USER/wordpress-subdir ]; then
echo backing up www from wordpress-subdir for $USER
mkdir -p /data/per-user/$USER/backup/www/wordpress-subdir/
cp /data/per-user/$USER/wordpress-subdir/.env /data/per-user/$USER/backup/www/wordpress-subdir/.env
rsync -r /data/per-user/$USER/wordpress-subdir/data/wp-content /data/per-user/$USER/backup/www/wordpress-subdir/wp-content
if [ -e /data/per-user/$USER/wordpress-subdir/data/GITURL ]; then
cp /data/per-user/$USER/wordpress-subdir/data/GITURL /data/per-user/$USER/backup/www/wordpress-subdir/GITURL
fi
fi
cd /data/per-user/$USER/backup/
git add *
git commit -m"backup $USER @ `hostname` - `date`"
if [ -e /data/per-user/$USER/backup/BACKUPDEST ]; then
git pull --rebase
git push
fi
#!/bin/bash
# Pull the latest website content for $USER's $APP backend from its git remote.
cd /data/per-user/$USER/$APP/data/www-content && git pull
#!/bin/bash -eux
# First run only: create the mysql data folder and generate a random admin password.
if [ ! -d "/data/per-user/$USER/mysql/data" ]; then
  mkdir -p /data/per-user/$USER/mysql/data
  echo MYSQL_PASS=$(echo $RANDOM $(date) | md5sum | base64 | cut -c-10) > /data/per-user/$USER/mysql/.env
fi
#!/bin/bash -eux
# First run only: clone the site content from GITURL, or create a placeholder index.html.
if [ ! -e "/data/per-user/$USER/nginx/data/www-content/index.html" ]; then
if [ -e "/data/per-user/$USER/nginx/data/GITURL" ]; then
git clone `cat /data/per-user/$USER/nginx/data/GITURL` /data/per-user/$USER/nginx/data/www-content
cd /data/per-user/$USER/nginx/data/www-content && git checkout master
else
mkdir -p /data/per-user/$USER/nginx/data/www-content
echo Hello $USER > /data/per-user/$USER/nginx/data/www-content/index.html
fi
fi
#!/bin/bash -eux
# First run only: unpack the wordpress blueprint, then derive DB_PASS from the mysql .env.
if [ ! -d "/data/per-user/$USER/wordpress/data" ]; then
cd /data/per-user/$USER/
tar xvzf /data/indiehosters/blueprints/wordpress.tgz
fi
cat /data/per-user/$USER/mysql/.env | sed s/MYSQL_PASS/DB_PASS/ > /data/per-user/$USER/wordpress/.env
#!/bin/bash
#This script is tested on Debian 12
#Current version of libre.sh to be installed
LIBRE_VERSION=1.2
# System env vars : can be overridden by a values.env file next to this install file
### CONFIG : Specify your template repo ROOT without trailing slash (optional), or comment out to supply the full url for apps
APP_REPO_URL="lab.libreho.st/libre.sh/compose"
## domain handling
### CONFIG : change to your domain vendor ( namecheap, ovh, scaleway )
DOMAIN_SERVER=namecheap
### Namecheap specific
NAMECHEAP_URL="namecheap.com"
NAMECHEAP_API_USER="pierreo"
NAMECHEAP_API_KEY=
### ovh specific (WIP)
OVH_URL="eu.api.ovh.com"
OVH_API_USER=""
OVH_API_KEY=
### Scaleway specific (WIP)
SCALEWAY_URL=""
SCALEWAY_API_USER=""
SCALEWAY_API_KEY=
### TODO : change your settings
IP="curl -s http://icanhazip.com/"
FirstName="Pierre"
LastName="Ozoux"
Address=""
PostalCode=""
Country="Portugal"
Phone="+351.967184553"
EmailAddress="pierre@ozoux.net"
City="Lisbon"
CountryCode="PT"
## Backup
BACKUP_DESTINATION=root@xxxxx:port
### CONFIG : Change your mail settings.
## SMTP
MAIL_USER=
MAIL_PASS=
MAIL_HOST=mail.indie.host
MAIL_PORT=587
MAIL_SECURITY=
# Default admin emails for apps
ADMIN_EMAIL=support@ekimia.fr
### TODO : source a settings file if present to override defaults
echo "-------- Welcome to libre.sh $LIBRE_VERSION installer"
echo "---- sourcing local values.env file if present"
[ -f values.env ] && source values.env
# STEP add kernel parameter
# STEP Define environment
echo "-------- setting up system variables"
echo "APP_REPO_URL=${APP_REPO_URL}" >> /etc/environment
echo "LIBRE_VERSION=${LIBRE_VERSION}" >> /etc/environment
echo "MAIL_USER=${MAIL_USER}" >> /etc/environment
echo "MAIL_PASS=${MAIL_PASS}" >> /etc/environment
echo "MAIL_HOST=${MAIL_HOST}" >> /etc/environment
echo "MAIL_PORT=${MAIL_PORT}" >> /etc/environment
echo "MAIL_SECURITY=${MAIL_SECURITY}" >> /etc/environment
echo "ADMIN_EMAIL=${ADMIN_EMAIL}" >> /etc/environment
# STEP Install Docker
name="docker.io"
# TODO : Fix a version for docker ?
if ! dpkg -s $name &> /dev/null
then
echo "$name not installed"
apt-get update
# curl -fsSL https://get.docker.com -o get-docker.sh
# sh get-docker.sh
apt install -y $name
echo "-------- Native docker installed "
else
echo "$name already installed"
fi
# STEP "install docker-compose"
echo "-------- Install native docker-compose "
# TODO : Fix a version for docker compose ?
#mkdir -p /opt/bin &&\
#dockerComposeVersion=$(curl -s https://api.github.com/repos/docker/compose/releases/latest|grep tag_name|cut -d'"' -f4) &&\
#curl -L https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose &&\
#chmod +x /opt/bin/docker-compose
apt install -y docker-compose
# STEP "install git"
echo "-------- Install git"
distro=$( ( lsb_release -ds || cat /etc/*release || uname -om ) 2>/dev/null | head -n1 | cut -d " " -f1)
if [[ "$distro" == "Ubuntu" || "$distro" == "Debian" ]]; then
apt-get install -y git
elif [[ "$distro" == "CentOS" || "$distro" == "AlmaLinux" || "$distro" == "Rocky" || "$distro" == "Fedora" ]]; then
yum install -y git
elif [[ "$distro" == "openSUSE" ]]; then
zypper install git
elif [[ "$distro" == "Arch" ]]; then
pacman -S git
elif [[ "$distro" == "Mageia" ]]; then
urpmi git
fi
# STEP install Libre.sh
echo " ---Removing previous install --- "
rm -rf /libre.sh
echo "-------- installing libre.sh"
git clone https://lab.libreho.st/libre.sh/compose.libre.sh.git /libre.sh
mkdir -p /{data,system}
mkdir -p /data/trash
mkdir -p /data/domains
cp /libre.sh/unit-files/* /etc/systemd/system && systemctl daemon-reload
systemctl enable web-net.service
systemctl start web-net.service
mkdir -p /opt/bin
cp /libre.sh/utils/* /opt/bin/
# STEP add /opt/bin path
echo "-------- updating PATH"
cat > /etc/profile.d/libre.sh <<'EOF'
export PATH=$PATH:/opt/bin
EOF
chmod 644 /etc/profile.d/libre.sh
bash /etc/profile.d/libre.sh
#TODO : reload profile to use libre right away
#!/bin/bash -eux
# Verify all oo-* folders are in sync with git; print the domain name of any that are not.
for oo in ./oo-*; do
cd $oo
if ! git diff --exit-code --quiet; then
echo $oo
fi
cd ..
done
# Update all oo-* instances
for oo in ./oo-*; do
cd $oo
libre update
cd ..
done
#!/bin/bash -eux
# Activate a site on this server: create its data folder, record the git URL, and start its service.
if [ $# -ge 2 ]; then
DOMAIN=$1
IMAGE=$2
else
echo "Usage: sh /data/indiehosters/scripts/activate-user.sh domain image [gitrepo]"
exit 1
fi
mkdir -p /data/per-user/$DOMAIN/$IMAGE/data
if [ $# -ge 3 ]; then
GITREPO=$3
echo $GITREPO > /data/per-user/$DOMAIN/$IMAGE/data/GITURL
fi
# Start service for new site (and create the user). This will also enable the git puller.
systemctl enable $IMAGE@$DOMAIN.service
systemctl start $IMAGE@$DOMAIN.service