Installing WordPress with NGINX on Debian or Ubuntu using EasyEngine

$ apt-get update

$ apt-get upgrade

it can also be useful to have the fromdos/todos utilities available to convert line endings between Windows and Unix formats:

$ apt-get install tofrodos
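For example, to convert a script that was edited on Windows to Unix line endings (the file name here is purely illustrative):

$ fromdos myscript.sh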
 

STEP 1

$ useradd -m username
$ passwd username
$ passwd root
to find out which shell a user is using (bash, zsh, sh...):
$ getent passwd root
root:x:0:0:root:/root:/bin/bash   # the last field shows that root uses bash
 
to change the shell used by the user:
$ chsh -s /bin/bash root
 
by default, the shell assigned at creation is sh:
$ getent passwd username
-> username:x:UID:GID::/home/username:/bin/sh
so we switch the user to the more capable bash shell:
$ chsh -s /bin/bash username

STEP 2: grant the user the right to use sudo

$ usermod -a -G sudo username
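A quick way to check that the user is now in the sudo group (a simple sanity check, not part of the original procedure):

$ groups username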

STEP 3

Generate an RSA key pair (private key and public key) with the PuTTY Key Generator: choose the SSH-2 RSA key type and a key size of, for example, 2048 or 4096 bits, then click Generate. When it is done, click Save public key and Save private key. Be careful: as its name indicates, the private key must remain secret...

STEP 4

$ su - username

STEP 5

$ mkdir .ssh
$ chmod 700 .ssh
$ nano .ssh/authorized_keys

STEP 6

paste the public key into .ssh/authorized_keys:
remove any comments (if present)
delete all unnecessary line breaks so the key sits on a single line
add ssh-rsa before the key (if you generated an RSA key, of course)
add user@your_server_name at the end of the key
the server name can be found with the hostname command
then Ctrl+X, then Y to confirm saving (see the sample line just below)
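A finished authorized_keys entry looks like this single line; the key material is shortened and purely illustrative:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...rest_of_the_public_key... username@your_server_name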

STEP 7

$ chmod 600 .ssh/authorized_keys
$ exit
 

STEP 8

$ nano /etc/ssh/sshd_config

-> PermitRootLogin no
-> Port <the number you chose>, 22 by default (otherwise pick a value between 1024 and 65535): in our case leave it at 22 so that EasyEngine can be installed easily
-> uncomment AuthorizedKeysFile %h/.ssh/authorized_keys
-> PermitEmptyPasswords no
-> PasswordAuthentication no
-> LoginGraceTime 30 (maximum time allowed to log in, in seconds)
Add the following lines:
-> ClientAliveInterval 600 (maximum idle time in seconds)
-> ClientAliveCountMax 0 (maximum number of keepalive messages the server may send without receiving a response)
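For clarity, here is a minimal sketch of what the relevant part of sshd_config looks like after these edits (assuming the default port 22 is kept):

PermitRootLogin no
Port 22
AuthorizedKeysFile %h/.ssh/authorized_keys
PermitEmptyPasswords no
PasswordAuthentication no
LoginGraceTime 30
ClientAliveInterval 600
ClientAliveCountMax 0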


STEP 9

$ systemctl restart ssh

STEP 10 Logging out and logging back in with the user account

To connect to our server, we now need to configure PuTTY to use our private key: in Category > Connection > SSH > Auth, under Private key file for authentication, click Browse... and select the private key file (.ppk)

STEP 11

$ wget -qO ee rt.cx/ee && sudo bash ee

STEP 12 Define the parameters used to create the web server

$ sudo nano /etc/ee/ee.conf

STEP 13

$ sudo ee site create example.com --php7
$ sudo ee site update example.com --mysql
$ sudo ee site update example.com --wp
$ sudo ee site update example.com --wpfc
$ sudo ee site update example.com --letsencrypt
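If you want to review what was created, EasyEngine v3 can display a site's configuration; the command below is an optional check (example.com stands for your own domain):

$ sudo ee site info example.com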
 

STEP 14 Force the use of HTTPS instead of HTTP with HTTP Strict Transport Security (HSTS)

$ sudo nano /etc/nginx/sites-available/your_site_name

insert the following line in the server block to force HTTPS connections for 100 years:

add_header Strict-Transport-Security 'max-age=3153600000; includeSubDomains; preload';
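For context, the directive sits inside the site's server block, roughly as in the sketch below (the server_name and the rest of the block come from your own EasyEngine-generated configuration; example.com is a placeholder):

server {
    server_name example.com www.example.com;
    # ... existing EasyEngine configuration ...
    add_header Strict-Transport-Security 'max-age=3153600000; includeSubDomains; preload';
}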



then we restart NGINX:

$ sudo service nginx restart

STEP 15 Setting up the firewall (iptables)

Check the firewall's default rules: 

$ sudo iptables -L

If no firewall rules have been defined then you get:
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

We then create a file that will contain the firewall rules:

$ sudo nano /etc/iptables.firewall.rules

Paste the following into it:

*filter

# Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT

# Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow all outbound traffic - you can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

# Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

# Allow SMTP access
-A INPUT -p tcp --dport 25 -j ACCEPT
-A INPUT -p tcp --dport 465 -j ACCEPT
-A INPUT -p tcp --dport 587 -j ACCEPT

# Allow POP and POPS connections
# -A INPUT -p tcp --dport 110 -j ACCEPT
# -A INPUT -p tcp --dport 995 -j ACCEPT

# Allow IMAP and IMAPS connections
-A INPUT -p tcp --dport 143 -j ACCEPT
-A INPUT -p tcp --dport 993 -j ACCEPT

# Allow SSH connections
# The --dport number should be the same port number you set in sshd_config
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

# Allow ping
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

# Drop all other inbound - default deny unless explicitly allowed policy
-A INPUT -j DROP
-A FORWARD -j DROP

COMMIT

Of course, the port numbers must be changed if they do not have the standard value (for example for the SSH port number).
Then activate the firewall rules with the following command:

$ sudo iptables-restore < /etc/iptables.firewall.rules
The new rules can then be checked again with the command:
$ sudo iptables -L
Then to make sure that the rules will be taken into account each time the server is restarted:
  • We create a new script:
    $ sudo nano /etc/network/if-pre-up.d/firewall
  • Copy and paste the following lines into it: 
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.firewall.rules
Save and exit: Ctrl+X, then Y to confirm
  • The script is given permission to run:
    $ sudo chmod +x /etc/network/if-pre-up.d/firewall

STEP 16 Installation of Fail2Ban

Fail2Ban is an application that protects against dictionary attacks. When Fail2Ban detects multiple failed login attempts from the same IP address, it creates temporary firewall rules that block traffic coming from the attacker's IP address. By default Fail2Ban only monitors SSH, but it can also be configured to monitor HTTP and SMTP.
To install it:

$ sudo apt-get install fail2ban
Once installed, Fail2Ban starts monitoring; as soon as an IP address exceeds the maximum number of authentication attempts, it is blocked (at the network layer) and the event is recorded in the log file /var/log/fail2ban.log

To see the jails: $ sudo fail2ban-client status
 
In our case we are interested in the jail for SSH connections.

We can see information about this jail (banned IPs, etc.) with:
$ sudo fail2ban-client status ssh
 
To adjust the default configuration of the Fail2Ban jails:
$ sudo nano /etc/fail2ban/jail.conf
You can then change the maximum number of connection attempts (maxretry) and the ban time expressed in seconds (bantime). Moreover, if you wish to receive an email warning you when an IP address is banned, you just have to set a recipient email address (destemail)... (an illustrative sketch of these settings follows)
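As an illustration, the settings mentioned above look roughly like this in the [DEFAULT] section (the values and the email address are placeholders; on recent Fail2Ban versions it is generally recommended to put such overrides in /etc/fail2ban/jail.local rather than editing jail.conf directly):

[DEFAULT]
bantime  = 600
maxretry = 5
destemail = admin@example.com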

Finally, for the new configuration to be taken into account:
$ sudo fail2ban-client reload
 
To check that fail2ban is active you can type:
$ ps -ef | grep fail2ban
 
STEP 17 Automatic renewal of the certificate
$ sudo crontab -e
 
MAILTO=aaaaaaa@bbbbb.com
0 0 * * 0 /usr/local/bin/ee site update --le=renew --all

for the crontab syntax you can use one of the following utilities: https://crontab.guru/ or https://crontab-generator.org/
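As a reminder, the five cron time fields are minute, hour, day of month, month and day of week, so the renewal line above runs every Sunday at 00:00:

# minute hour day-of-month month day-of-week  command
0 0 * * 0 /usr/local/bin/ee site update --le=renew --all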

STEP 18 Setting up a backup system

$ cd ~
$ mkdir backups
$ cd backups
$ nano backup_site.sh
 
 
#!/bin/sh

# work from the directory containing this script
script="$0"
basedir="$(dirname "$script")"

cd "$basedir"

mkdir -p files
mkdir -p files/sitebackup
mkdir -p files/dbbackup

THESITE="your_site.com"
THEDB="database_name"

THEDBUSER="database_user"
THEDBPW="database_password_for_this_user"
THEDATE=`date +%Y%m%d_%H%M%S`

# dump the database and compress it
mysqldump -u $THEDBUSER -p${THEDBPW} $THEDB | gzip > files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz

# archive the site files, then compress the archive
tar --absolute-names -cf files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar /var/www/$THESITE
gzip files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar

# keep only the last 5 days of local backups
find files/sitebackup/site* -mtime +5 -exec rm {} \;

find files/dbbackup/db* -mtime +5 -exec rm {} \;
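Before scheduling the script, it is worth making it executable and running it once by hand to check that the archives appear under files/ (assuming the script lives in ~/backups as above):

$ chmod +x ~/backups/backup_site.sh
$ ~/backups/backup_site.sh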

$ crontab -e

0 4 * * * /home/username/backups/backup_site.sh >/dev/null 2>&1

 

STEP 19 Transfer the backup files to another server, which will be called the backup server to avoid confusion.

To do this, we generate an SSH key pair for our web server by running the following command:

$ ssh-keygen -t rsa -b 8192

In the previous command we asked for the creation of an RSA key pair and specified its length, here 8192 bits. When it runs we are asked in which file to save the key; by default the private key is saved in ~/.ssh/id_rsa, which is what we want here because it is for our server. The public key is saved in ~/.ssh/id_rsa.pub. We are also asked for a passphrase, a password that will be required to use the private key; in our case we leave it blank.

Note: we could have used PuTTY to generate the key pair; we would then have saved the keys in the same locations as ssh-keygen does.

On the backup server we add the public key of the web server, so that the web server can connect to the backup server over SSH. To do this we log in to the backup server and then:

$ mkdir .ssh
$ chmod 700 .ssh
$ nano .ssh/authorized_keys
We paste in the content of the web server's id_rsa.pub file (everything except the user@host part, so just ssh-rsa followed by the key) and at the end of the line, after a space, we add:
nameOfTheBackupServerUserAccount@NameOfTheBackupServer
we save it and then: 
$ chmod 600 .ssh/authorized_keys
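At this point it is worth checking, from the web server, that key-based login to the backup server works (the account and host names are the placeholders used above):

$ ssh nameOfTheBackupServerUserAccount@NameOfTheBackupServer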
 
An SSH connection involves two checks: the server authenticates the client, and the client authenticates the server. In practice, however, it often happens that only the server verifies the identity of the client, while the client simply trusts that it is talking to the right server... If you do not want to verify the identity of the backup server, you can add the following lines to backup_site.sh:
 

scp -B -o StrictHostKeyChecking=no /home/username/backups/files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz user@backupServer:fullPathToTheDbbackupDestinationFolder

scp -B -o StrictHostKeyChecking=no /home/username/backups/files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz user@backupServer:fullPathToTheSitebackupDestinationFolder

 

If we want the web server (in this case the client) to verify that it is communicating with the authentic backup server then we need to add the public key of the backup server to the known host keys.

————————————-
SSH host behaviour: by default the host keys (dsa, ecdsa, ed25519, rsa) live in /etc/ssh/, for example /etc/ssh/ssh_host_ecdsa_key for the private key and the corresponding .pub file for the public key. If you want to change the location of the host keys, edit the SSH server configuration (/etc/ssh/sshd_config).
————————————-
SSH client behaviour: the key pair used can be defined in ~/.ssh/id_rsa (or id_dsa, id_ed25519, id_ecdsa), together with the corresponding .pub files.
————————————-
Allowing a client to connect to a host server with an SSH key: the client must first generate its SSH key pair with ssh-keygen, then copy its public key into the ~/.ssh/authorized_keys file of the hostUser@hostMachine account on the host server.
————————————-

How to add the server key to known_hosts: we will not edit the ~/.ssh/known_hosts file by hand, so we proceed as follows.
We start an SSH connection to the host with host key checking enabled (the default StrictHostKeyChecking=ask behaviour): ssh -o StrictHostKeyChecking=ask hostUser@hostMachine
The system then looks in known_hosts, does not find the server, and asks whether we are sure we want to add this host to the list. To help us decide, it shows the fingerprint of the server's key (and tells us which algorithm is used: dsa, ecdsa, ed25519 or rsa) so that we can check the server's authenticity manually.
Before agreeing to add the host to known_hosts, we check on the server itself that it is authentic; on the host server we type:

$ ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
Of course, adapt the previous command to the algorithm in use (dsa, ecdsa, ed25519 or rsa); if the fingerprint shown was for an RSA key, check against ssh_host_rsa_key.pub instead.
If the fingerprints match, then we are indeed talking to our host server and we can add it to known_hosts.
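As an optional shortcut (not part of the original procedure), ssh-keyscan can fetch the host key and append it to known_hosts directly; the fingerprint should still be verified as described above before trusting it:

$ ssh-keyscan -t ecdsa NameOfTheBackupServer >> ~/.ssh/known_hosts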

In this case, we can add the following lines to backup_site.sh :
 
scp -B -o StrictHostKeyChecking=yes /home/username/backups/files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz user@backupServer:fullPathToTheDbbackupDestinationFolder
scp -B -o StrictHostKeyChecking=yes /home/username/backups/files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz user@backupServer:fullPathToTheSitebackupDestinationFolder

STEP 20 Finally, we add the purge of the backup server's folders at the end of backup_site.sh

ssh -o StrictHostKeyChecking=yes backupUser@backupServer 'find fullPathToTheDbbackupDestinationFolder -mtime +5 -regextype posix-egrep -regex ".*\.(ok|gz)$" -exec rm -f {} \;'
ssh -o StrictHostKeyChecking=yes backupUser@backupServer 'find fullPathToTheSitebackupDestinationFolder -mtime +5 -regextype posix-egrep -regex ".*\.(ok|gz)$" -exec rm -f {} \;'


STEP 21 Improved file transfer to the backup server (steps 18 and 19): a more robust solution that checks that the important ssh and scp commands completed successfully and verifies a sha512 checksum

Note: if you do not want to verify the server's identity, it is enough, as before, to use StrictHostKeyChecking=no, but this is a security weakness.

#####################
#####################
# checksum of each archive before transfer
sha512_dbbackup_original=`sha512sum files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz | awk '{sub(/ .*$/,"")}1'`
sha512_sitebackup_original=`sha512sum files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz | awk '{sub(/ .*$/,"")}1'`
#####################
# transfer the database backup and stop if scp fails
scp -B -o StrictHostKeyChecking=yes files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz backupUser@backupServer:/home/username/backups/files/dbbackup
rc=$?
if [ $rc -ne 0 ]
then
exit $rc
fi
# recompute the checksum on the backup server and compare with the original
sha512_dbbackup_after_transfer=`ssh -o StrictHostKeyChecking=yes backupUser@backupServer sha512sum /home/username/backups/files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz | awk '{sub(/ .*$/,"")}1'`
if [ "$sha512_dbbackup_after_transfer" != "$sha512_dbbackup_original" ]
then
exit 111
fi
# mark the database transfer as verified
ssh -o StrictHostKeyChecking=yes backupUser@backupServer touch /home/username/backups/files/dbbackup/dbbackup_${THEDB}_${THEDATE}.bak.gz.ok
#####################
# same procedure for the site archive
scp -B -o StrictHostKeyChecking=yes files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz backupUser@backupServer:/home/username/backups/files/sitebackup
rc=$?
if [ $rc -ne 0 ]
then
exit $rc
fi
sha512_sitebackup_after_transfer=`ssh -o StrictHostKeyChecking=yes backupUser@backupServer sha512sum /home/username/backups/files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz | awk '{sub(/ .*$/,"")}1'`
if [ "$sha512_sitebackup_original" != "$sha512_sitebackup_after_transfer" ]
then
exit 222
fi
ssh -o StrictHostKeyChecking=yes backupUser@backupServer touch /home/username/backups/files/sitebackup/sitebackup_${THESITE}_${THEDATE}.tar.gz.ok
#####################
# purge backups older than 5 days on the backup server
ssh -o StrictHostKeyChecking=yes backupUser@backupServer 'find /home/username/backups/files/dbbackup -mtime +5 -regextype posix-egrep -regex ".*\.(ok|gz)$" -exec rm -f {} \;'
ssh -o StrictHostKeyChecking=yes backupUser@backupServer 'find /home/username/backups/files/sitebackup -mtime +5 -regextype posix-egrep -regex ".*\.(ok|gz)$" -exec rm -f {} \;'
#####################
#####################

STEP 22 Be notified by email once a day when packages need to be updated

$ sudo apt-get install apticron

Configure the recipient email address(es):

$ sudo nano /etc/apticron/apticron.conf
You can put a list of recipient emails (separated by a space)
 
EMAIL="mon.email@serveur_mail mon.email2@serveur_mail"

To manually launch the utility :

$ sudo /usr/sbin/apticron
For information, the crontab configuration of apticron can be found in /etc/cron.d/apticron

STEP 23 A little more security

Install Lynis (a security auditing tool):

$ sudo apt-get install lynis
Launch a security audit on the server:
$ sudo lynis --check-all
Then go through the security recommendations and apply the changes yourself if you think they are relevant.
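To come back to the findings later, Lynis also writes its results to /var/log/lynis.log and /var/log/lynis-report.dat (the default locations); for example, to list only the warnings and suggestions from the report file:

$ sudo grep -E '^(warning|suggestion)' /var/log/lynis-report.dat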