How to Install Ansible and Automate Your Ubuntu 22.04 Server Setup

Setting up new servers is tedious and time-consuming. In this article, I’ll show you how to skip all that, automate the entire process, and provision new servers in a matter of minutes with little to no intervention on your end.

Don’t get me wrong. As a web developer, there’s no better way to understand how web servers work than building your own from scratch. It’s a great learning experience, one that I recommend all WordPress developers undertake. Doing so will give you a greater understanding of the various components required to serve a website, not just the code you write. It can also broaden your knowledge on security and performance topics, which are often overlooked when you’re deep into coding.

However, once you are familiar with the process, setting up new servers is a task that you’re better off automating. Thankfully, you can do this using a tool called Ansible.

Why Ansible?

Ansible is an open-source automation tool for provisioning, application deployment (WordPress deployment in this case), and configuration management. Gone are the days of SSHing into your server to run a command or hacking together bash scripts to semi-automate painful tasks. Whether you’re managing a single server or an entire fleet, Ansible can simplify the process and save you time. So what makes Ansible so great?

Like SpinupWP, Ansible is completely agentless, meaning you don’t have to install any software on your remote servers (aka managed hosts). All commands are run through Ansible via SSH. If Ansible needs updating, you only need to update your single control machine and not any remote hosts. The only prerequisite to running Ansible commands is to have Python installed on your control machine.

Commands you execute via Ansible are idempotent, meaning they can be applied multiple times and will always result in the same outcome. This allows you to safely run commands against multiple hosts without anything being changed unless required. For example, let’s say you need to ensure Nginx is installed on all hosts. Run a single command and Ansible will install Nginx only on the hosts where it’s missing. All other hosts will remain untouched.
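As a quick taste of what that looks like (the command syntax is covered below in Running Commands), an ad-hoc run of the apt module against the production group from our inventory would only make changes on hosts where Nginx is missing:

ansible production -m apt -a "name=nginx state=present" -u root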

That’s enough of an introduction. Let’s see Ansible in action.

Installing Ansible

We need to set up a single control machine which we’ll use to execute our commands. I’m going to install Ansible locally on macOS, but any Unix-like platform with Python installed would also work (e.g., Ubuntu, Red Hat, CentOS, etc.). Currently, Ansible requires Python version 3.8 or newer. Windows is not supported at this time.

To install Ansible on macOS, first, install the Python package manager, pip.

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py

You may see a Homebrew deprecation warning about “Configuring installation scheme with distutils config files,” which points to a Homebrew issue. That issue explains this is a Python deprecation warning about functionality to be removed in Python 3.12, and that a full solution will land before then, so it’s safe to continue.

Then install Ansible using pip.

sudo pip install ansible

Once the installation has completed, you can verify that everything was installed correctly by issuing:

ansible --version

On Linux operating systems, it should be possible to install Ansible via the default package manager. In Ubuntu 21.10 and newer, you can install it using apt.

sudo apt install ansible

However, if you’re running Ubuntu 22.04 LTS and want the latest Ansible release (the version in the default repositories can lag behind), add the official Ansible PPA before installing. We’ll also install pip and the passlib library, which is used later for password hashing.

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
sudo apt install python3-pip
sudo pip install passlib

The Ansible docs have detailed instructions for other operating systems.

Now that Ansible is set up, we need a few servers to work with. For the purpose of this article, I’m going to fire up three small DigitalOcean droplets with Ubuntu 22.04 LTS x64 installed. I’ve also added my public key so that it will be copied to each host during the droplet creation. This will ensure we can SSH in via Ansible using the root user without providing a password later on.

Creating droplets in DigitalOcean.

Once they’ve finished provisioning, you’ll be presented with the IP addresses.

Droplets with IP addresses.

Make sure you have manually SSHed into each droplet as the root user to validate that the ECDSA key fingerprint is valid, and add it to the list of known hosts on your control machine.

ashley@macbook:~$ ssh root@138.68.188.195
The authenticity of host '138.68.188.195 (138.68.188.195)' can't be established.
ECDSA key fingerprint is SHA256:xCF0vWr/hG6qz2wSAdQRf1XB7ZdM2lOCnhvh2swe9LY.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
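If you’re provisioning many hosts, ssh-keyscan can pre-populate your known_hosts file in one go instead. Just be sure to verify the fingerprints it collects against your provider’s console out-of-band before trusting them:

ssh-keyscan -t ecdsa 138.68.188.195 159.65.57.233 159.65.31.8 >> ~/.ssh/known_hosts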

Inventory Setup

Ansible uses a simple inventory system to manage your hosts. This allows you to organize hosts into logical groups and negates the need to remember individual IP addresses or domain names. Want to run a command only on your staging servers? No problem. Pass the group name to the CLI command and Ansible will handle the rest.

Next, let’s create our inventory. Before doing so, we need to create a new directory to house our Ansible logic. Anywhere is fine, but I use my home directory.

mkdir ~/wordpress-ansible

The default location for the inventory file is /etc/ansible/hosts. However, we’re going to configure Ansible to use a different hosts file. Create a new plain text file called hosts in the new directory,

cd wordpress-ansible/
nano hosts

with the following contents:

[production]
138.68.188.195
159.65.57.233
159.65.31.8

The first line indicates the group name. The following lines are the servers we provisioned in DigitalOcean. Multiple groups can be created using the [group name] syntax and hosts can belong to multiple groups. For example:

[staging]
139.59.170.69

[production]
138.68.188.195
159.65.57.233
159.65.31.8

[wordpress]
139.59.170.69
139.59.170.70
139.59.170.79
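Ansible’s INI inventory also supports nesting groups via the :children suffix. As a sketch (the webservers group name is just for illustration), this lets you target several groups with a single name:

[webservers:children]
staging
production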

Now we need to configure Ansible to tell it where our hosts file is located. Create a new file called ansible.cfg

nano ansible.cfg

with the following contents.

[defaults]
inventory = hosts
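As an optional extra (not required for anything that follows), the same [defaults] section can also set a default remote user, saving you from passing -u root on every command:

[defaults]
inventory = hosts
remote_user = root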

Running Commands

With our inventory file populated we can start running basic commands on the hosts, but first let’s briefly look at modules. Modules are small plugins that are executed on the host and allow you to interact with the remote system as if you were logged in via SSH. Common modules include apt, service, file, and lineinfile. Ansible ships with hundreds of core modules, all of which are maintained by the core development team. Modules greatly simplify the process of running commands on your remote systems and cut down the need to manually write shell or bash scripts. Generally, most Unix commands have an associated module. If not, someone else has probably created one.

Let’s take a look at the ping module, which ensures we can connect to our hosts by returning a “pong” response if successful:

ansible production -m ping -u root

The syntax follows a simple pattern: provide the group (or host pattern), followed by the module to execute via the -m flag. We also need to provide the remote SSH user (by default, Ansible will attempt to connect using your local username). Assuming everything is set up correctly, you should receive three success responses.

ashley@macbook:~/wordpress-ansible$ ansible production -m ping -u root
138.68.188.195 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
159.65.57.233 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
159.65.31.8 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

You can also run any arbitrary command on the remote hosts using the -a flag. For example, to view the available memory on each host:

ansible all -a "free -m" -u root

This time I haven’t provided a group, but instead passed all which will run the command across every host in your inventory file.

ashley@macbook:~/wordpress-ansible$ ansible all -a "free -m" -u root
138.68.188.195 | CHANGED | rc=0 >>
            total       used        free    shared  buff/cache   available
Mem:            976         142         431         0       403         687
Swap:           0           0           0
159.65.57.233 | CHANGED | rc=0 >>
            total       used        free    shared  buff/cache   available
Mem:            976         142         431         0       402         688
Swap:           0           0           0
159.65.31.8 | CHANGED | rc=0 >>
            total       used        free    shared  buff/cache   available
Mem:            976         141         433         0       402         688
Swap:           0           0           0

Already you should start to see how much time Ansible can save you over manually SSHing to your server to run commands, but running single commands on your hosts will only get you so far. Often, you will want to perform a series of sequential actions to fully automate the process of provisioning, deploying, and maintaining your servers. Let’s take a look at playbooks.

Ansible Playbooks

Playbooks allow you to chain commands together, essentially creating a blueprint or set of procedural instructions. Ansible executes the playbook in sequence and ensures the state of each command is as desired before moving on to the next. If you cancel the playbook execution partway through and restart it later, tasks that previously completed will detect that their state is already correct and make no changes; only the outstanding work is applied.

Playbooks allow you to create truly complex instructions, but if you’re not careful they can quickly become unwieldy. This brings us to roles.

Roles add organization to Ansible playbooks. They allow you to split complex build instructions into smaller reusable chunks, much like functions or classes in object-oriented programming. This makes it possible to share roles across different playbooks without duplicating code. For example, you may have a role that installs Nginx and configures sensible defaults, which can be used across multiple hosting environments.

Provisioning a Modern Hosting Environment on Ubuntu 22.04

For the remainder of this article, I’m going to show you how to put together a playbook based on our How to Install WordPress on Ubuntu 22.04 guide. The provisioning process will take care of the following:

  • User setup
  • SSH hardening
  • Firewall setup

It will also install the following software:

  • Nginx
  • PHP 8.1
  • MySQL
  • Redis
  • WP-CLI

You can clone the completed playbook from GitHub and follow along, but I will explain how it works below.

Organization

Let’s take a look at how our playbook is organized.

├── ansible.cfg
├── hosts
├── provision.yml
└── roles
    ├── nginx
    │   ├── handlers
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── ...
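You can create these directories by hand, or scaffold each role with ansible-galaxy, which also generates a few extra directories (defaults, vars, meta, and so on) that you can delete if unused:

cd roles
ansible-galaxy init nginx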

The hosts and ansible.cfg files should be familiar, but let’s take a look at the provision.yml file.

---
- hosts: production
  user: root
  vars:
    username: ansible
    password: $6$rlLdG6wd1CT8v7i$7psP8l26lmaPhT3cigoYYXhjG28CtD1ifILq9KzvA0W0TH2Hj4.iO43RkPWgJGIi60Mz0CsxWbRVBSQkAY95W0
    public_key: ~/.ssh/id_rsa.pub
  roles:
   - common
   - ufw
   - user
   - nginx
   - php
   - mysql
   - wp-cli
   - ssh

We set the group of hosts from our inventory file, select the user to run the commands, specify a few variables used by our roles, and list the roles to execute. The variables instruct Ansible which user to create on the remote hosts. We provide the username, the hashed sudo password, and the path to our public key. You’ll notice that I’ve included the password here, but for a more secure solution you should look into Ansible Vault. Once each server has been provisioned you will need to SSH in with the specified user, as the root user will be disabled.
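As an aside, the hashed password above is a standard SHA-512 crypt hash. If you installed passlib earlier, you can generate your own with a one-liner along these lines (it prompts for the password rather than taking it as an argument):

python3 -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.hash(getpass.getpass()))"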

The roles mostly map to the tasks we need to perform and the software that needs to be installed. The common role performs simple actions that do not need additional configuration, for example installing Fail2Ban.

Let’s break down the Nginx role to see how roles are put together, as it contains the majority of modules used throughout the remainder of the playbook.

Handlers

Handlers contain logic that should be performed after a module has finished executing, and they work very similarly to notifications or events. For example, when the Nginx configurations have changed, run service nginx reload. It’s important to note that these events are only fired when the module state has changed. If the configuration file didn’t require any updates, Nginx will not be reloaded. Let’s take a look at the Nginx handler file:

---
- name: restart nginx
  service:
    name: nginx
    state: restarted

- name: reload nginx
  service:
    name: nginx
    state: reloaded

You’ll see we have two handlers: One to restart Nginx and one to reload the configuration files.

Tasks

Tasks contain the actual instructions to be carried out by the role. The Nginx role consists of the following steps:

---
- name: Add Nginx repo
  apt_repository:
    repo: ppa:ondrej/nginx

The first command adds the package repository maintained by Ondřej Surý that includes the latest Nginx stable packages (this is the equivalent of doing add-apt-repository in Ubuntu). Each command is formatted the same way: provide a name, the module we wish to execute, and any additional parameters. In the case of apt_repository, we just pass the repo we wish to add.

Next, we need to install Nginx.

- name: Install Nginx
  apt:
    name: nginx
    state: present
    force: yes
    update_cache: yes

The command is fairly self-explanatory, but state and update_cache are worth touching upon. The state parameter indicates the desired package state; in our case we want to ensure Nginx is installed, but you could pass latest to ensure the most current version is installed. Because we added a new repo in the prior task, we also need to run the equivalent of apt-get update, which the update_cache parameter handles. This ensures the package caches are refreshed so that Nginx is pulled from the new repository.
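A common refinement, though not one this playbook uses, is the apt module’s cache_valid_time parameter, which skips the cache update when it is already fresh:

- name: Install Nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
    cache_valid_time: 3600 # skip apt-get update if the cache is under an hour old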

You’ll definitely need to customize the Nginx configs for whatever you’re hosting, but that’s outside the scope of this article. If you’re hosting WordPress, or really any PHP-based app, I suggest taking a look at our Install WordPress on Ubuntu 22.04 guide and downloading the accompanying Nginx configs.


The file module allows us to symlink the default site into the sites-enabled directory, which configures a catch-all virtual host and ensures we only respond to enabled sites. You will also see that we notify the reload nginx handler for the changes to take effect.

- name: Symlink default site
  file:
    src: /etc/nginx/sites-available/default
    dest: /etc/nginx/sites-enabled/default
    state: link
  notify: reload nginx

Next, we use the lineinfile module to update our Nginx config. We search the /etc/nginx/nginx.conf file for a line beginning with user and replace it with user {{ username }};. The {{ username }} is an Ansible variable that refers to a value in our main provision.yml file.

Finally, we restart Nginx to ensure the new user is used for spawning processes.

- name: Set Nginx user
  lineinfile:
    dest: /etc/nginx/nginx.conf
    regexp: "^user"
    line: "user {{ username }};"
    state: present
  notify: restart nginx

That’s all there is to the Nginx role. Check out the other roles on the repo to get a feel for how they work.

Running the Playbook

To run the playbook, issue the following command:

ansible-playbook provision.yml
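If you want to be cautious, ansible-playbook also supports a dry run via --check (tasks that depend on earlier changes may not evaluate accurately) and can target a subset of hosts via --limit:

ansible-playbook provision.yml --check
ansible-playbook provision.yml --limit 138.68.188.195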

Assuming your hosts file is populated and the hosts are accessible, your servers should begin to provision.

ashley@macbook:~/wordpress-ansible$ ansible-playbook provision.yml

PLAY [production] *************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************
ok: [138.68.188.195]
ok: [159.65.57.233]
ok: [159.65.31.8]

TASK [common : Upgrade packages] **********************************************************************************************************************************************
ok: [159.65.31.8]
ok: [159.65.57.233]
ok: [138.68.188.195]

TASK [common : Install packages] **********************************************************************************************************************************************
changed: [159.65.57.233]
changed: [138.68.188.195]
changed: [159.65.31.8]

TASK [ufw : Enable firewall] **************************************************************************************************************************************************
changed: [159.65.57.233]
changed: [159.65.31.8]
changed: [138.68.188.195]

TASK [ufw : Allow HTTP] *******************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [ufw : Allow HTTPS] ******************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.31.8]
changed: [159.65.57.233]

TASK [ufw : Allow SSH] ********************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [user : Ensure sudo group is present] ************************************************************************************************************************************
ok: [138.68.188.195]
ok: [159.65.31.8]
ok: [159.65.57.233]

TASK [user : Ensure sudo group has sudo privileges] ***************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.57.233]
changed: [159.65.31.8]

TASK [user : Create default user] *********************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.57.233]
changed: [159.65.31.8]

TASK [user : Add authorized key] **********************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.57.233]
changed: [159.65.31.8]

TASK [nginx : Add Nginx repo] *************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [159.65.57.233]
changed: [138.68.188.195]

TASK [nginx : Install Nginx] **************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.31.8]
changed: [159.65.57.233]

TASK [nginx : Symlink default site] *******************************************************************************************************************************************
ok: [159.65.31.8]
ok: [159.65.57.233]
ok: [138.68.188.195]

TASK [nginx : Set Nginx user] *************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [php : Add PHP repo] *****************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.57.233]
changed: [159.65.31.8]

TASK [php : Install PHP] ******************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.31.8]
changed: [159.65.57.233]

TASK [php : Set PHP user] *****************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [php : Set PHP group] ****************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.31.8]
changed: [159.65.57.233]

TASK [php : Set PHP listen owner] *********************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [php : Set PHP listen group] *********************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [php : Set PHP upload max filesize] **************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [php : Set PHP post max filesize] ****************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [mysql : Install MySQL] **************************************************************************************************************************************************
changed: [159.65.31.8]
changed: [159.65.57.233]
changed: [138.68.188.195]

TASK [wp-cli : Install WP-CLI] ************************************************************************************************************************************************
changed: [138.68.188.195]
changed: [159.65.57.233]
changed: [159.65.31.8]

TASK [wp-cli : Install WP-CLI tab completions] ********************************************************************************************************************************
changed: [159.65.57.233]
changed: [159.65.31.8]
changed: [138.68.188.195]

TASK [ssh : Disable root login] ***********************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

TASK [ssh : Disable password authentication] **********************************************************************************************************************************
ok: [159.65.31.8]
ok: [159.65.57.233]
ok: [138.68.188.195]

RUNNING HANDLER [nginx : restart nginx] ***************************************************************************************************************************************
changed: [159.65.31.8]
changed: [159.65.57.233]
changed: [138.68.188.195]

RUNNING HANDLER [php : reload php] ********************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

RUNNING HANDLER [php : restart php] *******************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

RUNNING HANDLER [ssh : restart ssh] *******************************************************************************************************************************************
changed: [159.65.31.8]
changed: [138.68.188.195]
changed: [159.65.57.233]

PLAY RECAP ********************************************************************************************************************************************************************
159.65.31.8             : ok=32   changed=27   unreachable=0    failed=0    skipped=0   rescued=0   ignored=0   
159.65.57.233           : ok=32   changed=27   unreachable=0    failed=0    skipped=0   rescued=0   ignored=0   
138.68.188.195          : ok=32   changed=27   unreachable=0    failed=0    skipped=0   rescued=0   ignored=0  

The process should take roughly 5 minutes to complete across all three servers, which is extraordinary compared to the time it would take to provision them manually. Not only that, but if you’ve configured any of the roles incorrectly, you can fix the role, re-run the playbook, and any tasks that have already completed will be skipped. Once complete, the servers are ready to house your individual sites and should provide a good level of performance and security out of the box.

Conclusion

Manually running dozens of commands every time you need to add a new site becomes old quickly. You could script it, but that’s a lot of work, and before you know it your scripts are out-of-date. That gets old fast too.

SpinupWP uses Ansible to manage your server, and its scripts are always up-to-date, so you might want to give it a try. It also handles backups.

As I’m sure you can appreciate, Ansible is a very powerful tool and one which can save you a considerable amount of time.

Do you use Ansible for provisioning? What about other tools such as Puppet, Chef or Salt? Let us know in the comments below.

Complete Nginx Configuration Kit for WordPress

This is article 10 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter we set up server monitoring and discussed ongoing maintenance for our Ubuntu server. In this final chapter I offer a complete Nginx configuration optimized for WordPress sites.

In addition to amalgamating all the information from the previous chapters, I will be drawing upon best practices from my experience and various sources I’ve come across over the years. The following example domains are included, each demonstrating a different scenario:

  • ssl.com – WordPress on HTTPS
  • ssl-fastcgi-cache.com – WordPress on HTTPS with FastCGI page caching
  • multisite-subdomain.com – WordPress Multisite using subdomains
  • multisite-subdirectory.com – WordPress Multisite using subdirectories

The configuration files contain inline documentation throughout and are structured in a way to reduce duplicate directives, which are common across multiple sites. This should allow you to quickly create new sites with sensible defaults out of the box, which can be customized as required.

Usage

You can use these configs as a reference for creating your own configuration, or use them directly by copying them into your /etc/nginx directory. Follow the steps below to replace your existing Nginx configuration.

Backup any existing config:

sudo mv /etc/nginx /etc/nginx.backup

Copy one of the example configurations from sites-available to sites-available/yourdomain.com:

sudo cp /etc/nginx/sites-available/ssl.com /etc/nginx/sites-available/yourdomain.com

Edit the config as necessary, paying close attention to the server name and server paths. You will also need to create any directories used within the configuration and ensure Nginx has read/write permissions.

To enable the site, symlink the configuration into the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/yourdomain.com

Test the configuration:

sudo nginx -t

If the configuration passes, reload Nginx:

sudo service nginx reload

Nginx Config Preview

The following is the ssl.com Nginx configuration file that’s contained in the package. It should give you a good idea of what it’s like to use our configs.

Warning: The following Nginx config will not work on its own. You’ll notice there are several include statements which require files contained in the package. Download the Complete Nginx Config Package

server {
    # Ports to listen on.
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # Server name to listen for
    server_name ssl.com;

    # Path to document root
    root /sites/ssl.com/public;

    # Paths to certificate files.
    ssl_certificate /etc/letsencrypt/live/ssl.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ssl.com/privkey.pem;

    # Don't use outdated SSLv3 protocol. Protects against BEAST and POODLE attacks.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Use secure ciphers
    ssl_ciphers EECDH+CHACHA20:EECDH+AES;
    ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;
    ssl_prefer_server_ciphers on;

    # Define the size of the SSL session cache in MBs.
    ssl_session_cache shared:SSL:1m;

    # Define the time to cache SSL sessions.
    ssl_session_timeout 24h;

    # Tell browsers the site should only be accessed via https.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # File to be used as index
    index index.php index.html;

    # Overrides logs defined in nginx.conf, allows per site logs.
    access_log /sites/ssl.com/logs/access.log;
    error_log /sites/ssl.com/logs/error.log;

    # Default server block rules
    include global/server/defaults.conf;

    # SSL rules - ssl_certificate, etc
    include global/server/ssl.conf;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include global/fastcgi-params.conf;

        # Use the php-fpm pool defined in the upstream variable.
        # See global/php-pool.conf for definition.
        fastcgi_pass   $upstream;
    }

    # Rewrite robots.txt
    rewrite ^/robots.txt$ /index.php last;
}

# Redirect http to https
server {
    listen 80;
    listen [::]:80;
    server_name ssl.com www.ssl.com;

    return 301 https://ssl.com$request_uri;
}

# Redirect www to non-www
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.ssl.com;

    # Reuses the certificate above; it must cover www.ssl.com as a SAN.
    ssl_certificate /etc/letsencrypt/live/ssl.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ssl.com/privkey.pem;

    return 301 https://ssl.com$request_uri;
}

Download the Complete Nginx Configuration Kit

Enter your name and email below and we’ll email you a zip of the Nginx configuration files. I promise we will only use your email to send you the config files, notify you of updates to the config files & this guide in the future and share helpful tips for managing your own server.




That’s All Folks!

Job done! I encourage you to explore the config files further and read through the documented configuration to get a feel for what’s going on. It should feel familiar as it follows the same conventions used throughout this guide.

Over time I will improve the configuration and add new best practices as they emerge. If you have any improvements, please let me know.

That concludes this chapter and the guide as a whole. It’s been quite a journey, but hopefully you’ve learned a lot and are more confident managing a server than when you started.

Monitoring and Ongoing Maintenance

This is article 9 of 10 in the series “Hosting WordPress Yourself”

So you’ve followed our in-depth guide and built yourself a shiny new server that’s secure and tuned for optimal WordPress performance, but what’s next?

In this chapter I’m going to outline a few tasks that should be carried out on a regular basis to ensure that your server continues to run securely and perform well. We’ll look at performing software updates, upgrading PHP, and a few “gotchas” to watch out for that we’ve experienced ourselves.

But first, let’s start with server monitoring.

Server Monitoring

As I’m using DigitalOcean, monitoring server performance is a relatively simple process thanks to the built-in monitoring tools. For those not hosting with DigitalOcean, Netdata is a great alternative, or Pingdom Server Monitoring if you’d prefer not to host the monitoring yourself.

If you enabled monitoring during the Droplet creation, then you’re good to go! Otherwise, you’ll need to install DigitalOcean’s metrics agent manually.

Once installed, you should see the Droplet’s current resource utilization when visiting the Droplet dashboard.

Monitoring dashboard

Alert Policies

Alert policies are extremely useful, and you should configure them for all Droplets. Alerts reduce the need to manually check your server’s health by sending a notification when one of your chosen metrics reaches a certain threshold.

Head to the Monitoring tab and click Create alert policy.

Create alert policies

Here you can create as many alert policies as required. I like to set up alerts for CPU, memory and disk utilization with a threshold of 80%, which is a good starting point for most people.

Set up alert policies

Alert policies list

Investigating Alerts

A high resource usage alert can be the first sign that something is wrong with your server or the applications that it’s running. To investigate, SSH into your server:

ssh ashley@pluto.ashleyrich.com

Run htop to have a look at the system resource usage in real-time:

htop

Using htop to check server usage

The above screenshot is sorting by CPU usage, so if the alert you’re investigating was about high CPU usage, you should be able to see the processes that are eating a lot of your CPU at the top of the list. If you’re investigating an alert about memory, you’ll need to change the sort column by hitting the F6 key.

If it’s Nginx or PHP appearing at the top of the list, you should have a look at your access and error logs. If it’s MySQL, you’ll need to enable the slow query log. A much easier way to diagnose issues is to use an application monitoring tool like New Relic.
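Enabling the slow query log is a small MySQL config change followed by a restart. The log path and the one-second threshold below are typical values rather than anything specific to this guide:

slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1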

Recently, we were receiving high CPU alerts from our VPS provider. After performing some application monitoring using New Relic, we discovered that a database query was running slow, which was causing MySQL to spike CPU usage. It turns out we were missing an index on one of our database tables. Adding the index resulted in a big drop in CPU usage:

CPU spike

If the alert is for high disk utilization, you’ll need to check your disk space usage to find out which files are taking up a lot of space on your server.
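The usual tools here are df for an overview and du to drill down into a specific directory, for example:

df -h
sudo du -h --max-depth=1 /var | sort -hr | head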

That’s all there is to server monitoring, now onto maintaining your server!

Keep Plugins and Themes Updated

Let’s start with an easy one that isn’t just applicable to self-hosted WordPress installs. WordPress itself, WordPress themes, and all plugins should be regularly updated. No software is immune to vulnerabilities and updating often will ensure those inevitable vulnerabilities are patched.

While you’re at it, make sure you delete obsolete themes and plugins. There’s no reason to keep them around, other than to provide a potential entry point for malicious users. The majority of premium plugins and themes won’t receive updates via the WordPress dashboard if they’re deactivated. For that reason, it’s better to delete them than leave them lying around.

Check Backups are Running

It’s inevitable that things go wrong, sometimes horribly. Regular site backups are an integral part of any disaster recovery process. In chapter 6, I demonstrated how to set them up. However, you need to ensure that they are running as expected months down the line.

Earlier this year, we noticed that our site backups hadn’t made it to S3 for almost two months. The cause? Our backup script had stopped working due to a Python dependency issue. The moral of the story? Schedule a recurring time to verify that backups are being carried out. You should also periodically test the backups by importing the SQL into a test database to ensure they’re usable if things were to go wrong.
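A quick spot check can be as simple as listing the newest objects in your backup bucket. For example, assuming the AWS CLI is configured and your bucket is called your-backup-bucket (a placeholder), date-prefixed keys will sort chronologically:

aws s3 ls s3://your-backup-bucket/ --recursive | sort | tail -n 5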

Watch Those Log Files

Just like server monitoring, log files can help to identify issues early on before they become catastrophic. Every once in a while it’s worth checking the logs to see if anything unusual is happening. I would concentrate on the following:

  • Your sites’ Nginx access and error logs
  • The PHP-FPM error log
  • The MySQL error log (and slow query log, if enabled)
  • WordPress’s debug.log

We use Papertrail to make it easier to monitor those log files (especially debug.log, which we have permanently enabled). What makes Papertrail ideal is that you can easily send certain logs to Slack. We have an alert configured for any fatal errors that occur.

Set Up Persistent Debug.log

Enabling debug.log for live sites is often discouraged. This is because, by default, the log is stored in a publicly accessible location. This can expose potentially sensitive information about your server to would-be hackers. However, this log can be critical for helping track down obscure errors in production environments.

We can get the benefits of a persistent log while avoiding the risks by moving it to a new location and denying public access to all log files. To move the log files alongside other logs, update your wp-config.php with the following lines:

define( 'WP_DEBUG', true );
define( 'WP_DEBUG_DISPLAY', false );
define( 'WP_DEBUG_LOG', '/sites/pluto.ashleyrich.com/logs/debug.log' );

Next, configure Nginx to disallow access to .log files. Add the following Nginx location block to your site’s Nginx config file, in my case /etc/nginx/sites-available/pluto.ashleyrich.com. You should place this block inside the main server block, just before the location / block:

# Prevent access to certain file extensions
location ~ \.(ini|log|conf)$ {
    deny all;
} 

Enable Logrotate

Log files themselves can become problematic if left unchecked. One downside to most VPS providers is that they provide a limited amount of disk space, which means the logs can quickly fill the server’s storage. In the past, I’ve had a server completely fall over because a log file grew to over 25 GB in size.

One way to help prevent this is to enable log rotation, available by default on servers running Ubuntu 16.04 or higher through the logrotate command. This will rotate (rename), compress, and remove old log files to prevent them from eventually consuming all of your disk space. The default configuration located in /etc/logrotate.conf rotates logs weekly and removes log files older than 4 weeks. This configuration handles logs in the /var/log directory.

To create a site-specific logrotate configuration, add a new file to the /etc/logrotate.d/ directory, named after your specific site.

sudo nano /etc/logrotate.d/ashleyrich.com

For example, this is what the contents of /etc/logrotate.d/ashleyrich.com looks like:

/sites/ashleyrich.com/logs/*.log {
    daily
    rotate 14
    size 1M
    compress
    missingok
    notifempty
    dateext
    create 0664 pluto pluto
    sharedscripts
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}

Below are the details of each of the options used above:

  • daily: perform the rotation every day
  • rotate 14: remove the log files after 14 days
  • size 1M: only rotate logs that are a minimum of 1MB
  • compress: all rotated logs will also be compressed
  • missingok: prevent errors from being thrown if the log file is missing
  • notifempty: prevent empty logs from being rotated
  • dateext: use the date as a suffix of the rotated files
  • create: create a new log file with a specific name and permissions after rotating the old one

Finally, once the logs are rotated, any commands configured in the postrotate script will be run once at the end.
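Before relying on the new configuration, you can dry-run it. logrotate’s debug flag prints what would happen without actually rotating anything:

sudo logrotate --debug /etc/logrotate.d/ashleyrich.com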

Update Server Packages

If you followed along with this guide, you will have configured automatic security updates using unattended-upgrades, which applies security updates on a daily basis. However, it won’t install general software updates that contain new features and bug fixes. If you SSH into your server you’ll often be presented with the following:

Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-41-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:  https://landscape.canonical.com
 * Support:     https://ubuntu.com/advantage

  System information as of Fri Nov 11 19:43:40 GMT 2022

  System load:  0.0             Users logged in:    1
  Usage of /:   15.2% of 24.05GB   IPv4 address for eth0: ***.***.***.***
  Memory usage: 73%             IPv4 address for eth0: 10.20.0.11
  Swap usage:   0%              IPv4 address for eth1: 10.118.0.8
  Processes:    108

48 updates can be applied immediately.
To see these additional updates run: apt list --upgradable


*** System restart required ***

Those packages won’t automatically get updated because they don’t contain security fixes. To update them, run:

sudo apt update
sudo apt dist-upgrade

I recommend using apt dist-upgrade vs. apt upgrade because it will intelligently handle dependencies.

Before performing any upgrades, you should create a full system backup via your VPS provider (sometimes called snapshots).

Upgrade PHP

The previous step will update PHP through the point releases (e.g. 8.0.0 to 8.0.25), but it won’t upgrade PHP to a new major version (e.g. 8.1). Luckily, it’s a simple process and possible with little to no downtime.

If you are using the ppa:ondrej/php repository, you can safely install multiple major versions of PHP side-by-side, which we can use to our advantage. Remember that you only need to follow these steps if your server isn’t already running PHP 8.1. Otherwise, just run apt dist-upgrade.

sudo apt update
sudo apt install php8.1-cli php8.1-common php8.1-curl php8.1-dev php8.1-fpm php8.1-gd php8.1-imagick php8.1-imap php8.1-intl php8.1-mbstring php8.1-mysql php8.1-opcache php8.1-redis php8.1-soap php8.1-xml php8.1-zip

Next, you should configure the new version of PHP as demonstrated in chapter 2. Remember to also copy any modifications from your existing php.ini file, found at /etc/php/{PHP VERSION}/fpm/php.ini.

At this point, you’ll have multiple PHP versions installed, but Nginx will still pass requests to the older version. Next, update your site’s Nginx config so that the fastcgi_pass directive passes requests to PHP 8.1.

sudo nano /etc/nginx/sites-available/ashleyrich.com

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}

Test that your config changes are correct, then reload Nginx.

sudo nginx -t
sudo service nginx reload

Visit Site Health under Tools in your WordPress dashboard to ensure the site is now running on PHP 8.1. Once confirmed, repeat the changes for each site on the server. Ideally, you’ll perform these changes on a staging site and test the critical paths before updating any production sites.

Once you’re happy that everything is working as expected you can remove the old version, like so:

sudo apt purge php8.0-*

What About Updating Ubuntu?

Ubuntu releases a new LTS version every two years, the most recent of which is 22.04 (Jammy Jellyfish), released on April 21, 2022.

So should you update to a new LTS version as they’re released?

It’s a question often debated, and you’ll get different recommendations depending on who you ask. Personally, I don’t ever upgrade a server’s operating system (OS).

An OS upgrade can be a slow process to complete (think about the last time you updated your computer to a new major release of macOS, Windows, or Linux). During this time your sites will be completely inaccessible.

There’s also a good chance that some existing server packages won’t be compatible with the new version of the OS. This can further increase downtime as you scramble to rectify such issues.

Spinning up a fresh server is a much safer approach. Doing so allows you to test that everything is fully working as expected. A clean slate is also an excellent opportunity to reassess if your current setup still meets your needs or if a different server stack might be more suitable. You can safely try these changes without impacting your live sites.

Once you’re happy with the new setup, you can switch your DNS over with minimal downtime.

Action Plan

I’ve covered the various tasks that should be carried out on your server, but how often should these tasks be performed? I recommend that you perform the following tasks once a month. All together, it shouldn’t take you more than 30 minutes:

  • Perform WordPress updates, including themes and plugins
  • Ensure backups are running and that they’re usable
  • Check your server metrics to see if there were any unusual spikes
  • Quickly scan your server’s error logs for problems
  • Update server packages

As for upgrading to a new version of Ubuntu, you should definitely be aware of the end-of-life date of your version. At that point there will be no further updates of any kind, including security updates. You should move all your sites to a fresh server and decommission the old one before that date. We typically provision a fresh server roughly every two years, generally after a new LTS release.

That’s all for this chapter. In the next chapter you will be able to snag a complete set of Nginx configuration files optimized for WordPress that we’ve been building throughout this guide.

Migrating WordPress to a New Server

This is article 8 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter we enhanced security and performance with tweaks to the Nginx configuration. In this article, I’m going to walk you through the steps required to migrate an existing WordPress site to a new server.

There can be lots of reasons to migrate a site. Perhaps you’re moving from one web host to another. If you’re moving a site to a server you’ve set up with SpinupWP, the following guide will work but I recommend using our documentation on migrating a site to a SpinupWP server for more specific instructions. I promise it will save you time and headaches. 🙂

Another good reason to migrate a site is to retire a server. We do not recommend upgrading a server’s operating system (OS). That is, we don’t recommend upgrading Ubuntu even though Ubuntu might encourage it. The truth is a lot can go wrong upgrading the OS of a live server and it’s just not worth the trouble.

A much safer approach is to spin up a fresh server, migrate existing sites, and shut down the old server. This approach allows you to test everything is working on the new server before switching the DNS and directing traffic to it.

If you haven’t already completed the previous chapters to fire up a fresh new server, you should start at the beginning. (Interested in a super quick and easy way to provision new servers tuned for hosting WordPress? Check out how SpinupWP works.) Let’s get started!

Securely Copying Files

Before we begin migrating files, we need to figure out the best way to copy them to the new server. There are a couple of methods, including SFTP, but the safest and quickest route is to use SCP.

SCP will allow us to copy the files server-to-server, without first downloading them to our local machine. Under the hood, SCP uses SSH; therefore we need to generate a new SSH key so that we can connect to our old server from the new server. On the newly provisioned server, create a new SSH key using the following command:

ssh-keygen -t rsa -b 4096 -C "your_server_ip_or_hostname"

Then copy the public key to your clipboard. You can view the public key, like so:

cat ~/.ssh/id_rsa.pub

On the old server, add the public key to your authorized_keys file (sudo isn’t needed for your own home directory, and wouldn’t apply to the redirection anyway):

echo "public_key" >> ~/.ssh/authorized_keys
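If password authentication happens to still be enabled on the old server, ssh-copy-id automates this step from the new server:

ssh-copy-id -i ~/.ssh/id_rsa.pub ashley@pluto.ashleyrich.com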

Then verify that you’re able to connect to the old server from the new server using SSH.

ssh ashley@pluto.ashleyrich.com

If you’re unable to connect, go back and verify the previous steps before continuing.

File Migration

We’ll start by migrating the site’s files, which includes WordPress and any files in the web root. Issue the following command from the new server. Remember to substitute your old server’s IP address and the path to the site’s web root.

scp -r ashley@pluto.ashleyrich.com:~/ashleyrich.com ~/ashleyrich.com
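For large sites, rsync is a good alternative to SCP here: it’s resumable and only transfers changed files, which helps if you need to re-sync shortly before the final cutover (the trailing slashes sync the directory contents):

rsync -avz ashley@pluto.ashleyrich.com:~/ashleyrich.com/ ~/ashleyrich.com/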

With the files taken care of, it’s time to add the site to Nginx.

Nginx Configuration

There are a couple of ways you can add the site to Nginx:

  1. Create a fresh config based on chapter 3
  2. Copy the config from the old server

I recommend copying the existing configuration, as you know it works. However, starting afresh can be useful, especially if your virtual host file contains a lot of redundant directives. You can download a zip of complete Nginx configs as a fresh starting point.

In this example I’m going to copy the existing configuration. As we did with the site data, copy the file using SCP. Note that we give it a temporary name, since ~/ashleyrich.com is already the directory of site files we copied earlier:

scp ashley@pluto.ashleyrich.com:/etc/nginx/sites-available/ashleyrich.com ~/ashleyrich.com.conf

Next, move the file into place and ensure the root user owns it:

sudo mv ~/ashleyrich.com.conf /etc/nginx/sites-available/ashleyrich.com
sudo chown root:root /etc/nginx/sites-available/ashleyrich.com

The last step is to enable the site in Nginx by symlinking the virtual host into the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/ashleyrich.com /etc/nginx/sites-enabled/ashleyrich.com

Before testing if our configuration is good, we should copy over our SSL certificates.

SSL Certificates

Certificate file permissions are more locked down, so you will need to SSH to the old server and copy them to your home directory first.

sudo cp /etc/letsencrypt/live/ashleyrich.com/fullchain.pem ~/
sudo cp /etc/letsencrypt/live/ashleyrich.com/privkey.pem ~/

Then, ensure our SSH user has read/write access:

sudo chown ashley ~/*.pem

Back on the new server, copy the certificates.

scp -r ashley@pluto.ashleyrich.com:~/*.pem ~/

We’re going to generate fresh certificates using Let’s Encrypt once the DNS has switched over (see Finishing Up), so we’ll leave the certificate files in our home directory for the time being and update the Nginx configuration to reflect the new paths.

sudo nano /etc/nginx/sites-available/ashleyrich.com

You’ll need to update the ssl_certificate and ssl_certificate_key directives.

ssl_certificate /home/ashley/fullchain.pem;
ssl_certificate_key /home/ashley/privkey.pem;

To confirm the directives are correct, once again test the Nginx config:

sudo nginx -t

If everything looks good, reload Nginx:

sudo service nginx reload

Spoof DNS

It’s a good idea to test the new server as we go. We can do this by spoofing our local DNS, which will ensure the old server remains active for your visitors but allow you to test the new server. On your local machine add an entry to your /etc/hosts file, which points the new server’s IP address to the site’s domain:

46.101.3.65    ashleyrich.com

Once updated, if you refresh the site you should see “Error establishing a database connection” because we haven’t imported the database yet. Let’s handle that next.

Before continuing, remember that the domain now points to the new server’s IP address. If you usually SSH to the server using the hostname, this will no longer work. Instead, you should SSH to each server using their IP addresses until the migration is complete.

Database Import

Before we can perform the import, we need to create the database and database user. On the new server, log in to MySQL using the root user:

mysql -u root -p

Create the database:

CREATE DATABASE ashleyrich_com CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;

Then, create the database user with privileges for the new database:

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON ashleyrich_com.* TO 'username'@'localhost';
FLUSH PRIVILEGES;
EXIT;

With that taken care of, it’s time to export the data. We’re going to use mysqldump to perform the database export. If you need to do anything more complex, like exclude post types or perform a find and replace on the data, I would recommend using WP Migrate DB Pro.

To export the database, issue the following command from the old server, replacing the database credentials with those found in your wp-config.php file:

mysqldump --no-tablespaces -u DB_USER -p DB_NAME > ~/export.sql

Switch back to the new server and transfer the database export file:

scp -r ashley@pluto.ashleyrich.com:~/export.sql ~/

Finally, import the database:

mysql -u DB_USER -p DB_NAME < export.sql

If any of the database connection information is different from that of the old server you will need to update your wp-config.php file to reflect those changes. Refresh the site to confirm that the database credentials are correct. If everything is working, you should now see the site.

It’s Time to Test

You now have an exact clone of the live site running on the new server. It’s time to test that everything is working as expected.

For ecommerce sites, you should confirm that the checkout process and any other critical paths are working. Remember, this is only a clone of the live site, so anything saved to the database won’t persist, as we’ll be re-importing the data shortly.

Once you’re happy that everything is working as expected, it’s time to perform the migration.

Migrating with Minimum Downtime

On busy sites, it’s likely that the database will have changed since performing the previous export. For us to ensure data integrity, we need to prevent the live site from modifying the database while we carry out the migration. To do that we’ll perform the following actions:

  1. Update the live site to show a ‘Back Soon’ message
  2. Export the live database from the old server
  3. Import the live database to the new server
  4. Switch DNS to point to the new server

To stop the live site from modifying the database we’re going to show the following ‘Back Soon’ page:

<!doctype html>
<html>
    <head>
        <title>Back Soon</title>
        <style>
          body { text-align: center; padding: 150px; }
          h1 { font-size: 50px; }
          body { background-color: #e13067; font: 20px Helvetica, sans-serif; color: #fff; line-height: 1.5 }
          article { display: block; width: 650px; margin: 0 auto; }
        </style>
    </head>

    <body>
        <article>
            <h1>Back Soon!</h1>
            <p>
                We're currently performing server maintenance.<br>
                We'll be back soon!
            </p>
        </article>
    </body>
</html>

We’ll save this as an index.html page, upload it to the web root, and update Nginx to serve this file instead of index.php.
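
If you create the file locally, you can copy it up to the old server with scp. This is a sketch assuming the directory structure used throughout this series, where the site’s web root is ~/ashleyrich.com/public:

scp index.html ashley@pluto.ashleyrich.com:~/ashleyrich.com/public/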

On the old server, modify your site’s virtual host file:

sudo nano /etc/nginx/sites-available/ashleyrich.com

Ensure that the index directive looks like the one below, which will ensure that our ‘Back Soon’ page is loaded for all requests instead of WordPress:

index index.html index.php;

Once done, reload Nginx. Your live site will now be down. If you’re using Nginx FastCGI caching, any cached pages will continue to be served from the cache. However, requests to admin-ajax.php and the WordPress REST API will fail. Therefore, you will not be able to use plugins such as WP Migrate DB Pro to perform the migration.

Before continuing, you should confirm that your live site is indeed showing the ‘Back Soon’ page by checking it from another device or removing the entry from your /etc/hosts file, which we added earlier.

Flipping the Switch

Now that the live site is down it’s time to export and import the database once more (as we did above) so that any changes that occurred to the database while we were testing are migrated. However, this time you won’t need to create a database or database user.

Once the export/import is complete you may want to add the entry back into your /etc/hosts file (if you removed it) so that you can quickly check that the database migration was successful. Once you’re confident that everything is working as expected, log into your DNS control panel and update your A records to point to the new server. Modifying your DNS records will start routing traffic to your new server. However, keep in mind that DNS queries are cached, so anyone who has visited your site recently will likely still be routed to the old server and see the ‘Back Soon’ page. Once the user’s machine re-queries for the domain’s DNS entries they should be forwarded to the new server.
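
You can check what a resolver is currently returning for the domain using dig (here querying Cloudflare’s public resolver as an example). Once it returns the new server’s IP address, new visitors are being routed to the new server:

dig +short ashleyrich.com @1.1.1.1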

We use Cloudflare as our DNS provider, with a TTL of 300 seconds. This means that most users are routed to the new server quickly when we make a DNS change. However, if your DNS TTL is higher, I would recommend lowering it a few days prior to performing the migration. This will ensure DNS changes propagate more quickly to your users.

Finishing Up

Now that the new server is live, there are a few loose ends we need to take care of, but fortunately we’ve already covered them in previous chapters:

  1. Add a Unix cron
  2. Ensure automatic backups are running
  3. Generate a new SSL certificate using Let’s Encrypt

That’s everything there is to know about migrating a WordPress site to a new server. If you follow the steps outlined here, you should have a smooth migration with little downtime. In the final chapter we’ll cover how to keep your server and sites operational with ongoing maintenance and monitoring.

Nginx Security Hardening

Even after configuring HTTPS to encrypt connections between the browser and server, sites are still open to other areas of attack such as XSS, Clickjacking, and MIME sniffing. We’ll take a look at each of those and how to deal with them. You’ll also learn what a referrer policy is and how it can be useful.

This is article 7 of 10 in the series “Hosting WordPress Yourself”

In chapter 3 you learned how to add HTTPS sites to your server, but there is more we can do to improve HTTPS performance and security. We’ll also look at how we can minimize the risks from other types of attacks, such as XSS, Clickjacking, and MIME sniffing.

If you would find it easier to see the whole Nginx config at once, feel free to download the complete Nginx config kit now.

SSL Hardening

Although your site is configured to only handle HTTPS traffic, it still allows the client to attempt further HTTP connections. Adding the Strict-Transport-Security header to the server response will ensure all future connections enforce HTTPS. An article by Scott Helme gives a thorough overview of the Strict-Transport-Security header.

Open the main Nginx configuration file.

sudo nano /etc/nginx/nginx.conf

Add the following directive to the http block:

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

You may be wondering why the 301 redirect is still needed if this header automatically enforces HTTPS traffic: unfortunately the header isn’t supported by IE10 and below.

Test your Nginx config and if it’s ok, reload.

sudo nginx -t
sudo service nginx reload

Now if you perform a scan using the Qualys SSL Test tool you should receive a grade A+. Not bad!

SSL Performance

HTTPS connections are a lot more resource hungry than regular HTTP connections. This is due to the additional handshake procedure required when establishing a connection. However, it’s possible to cache the SSL session parameters, which will avoid the SSL handshake altogether for subsequent connections. Just remember that security is the name of the game, so you want clients to re-authenticate often. A happy medium of 10 minutes is usually a good starting point.

Open the main Nginx configuration file.

sudo nano /etc/nginx/nginx.conf

Add the following directives within the http block.

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

Test the Nginx configuration and reload if successful.

sudo nginx -t
sudo service nginx reload

Cross-Site Scripting (XSS)

The most effective way to deal with XSS is to ensure that you correctly validate and sanitize all user input in your code, including that within the WordPress admin areas. But most input validation and sanitization is out of your control when you consider third-party themes and plugins. You can however reduce the risk of XSS attacks by configuring Nginx to provide a few additional response headers.

Let’s assume an attacker has managed to embed a malicious JS file into the source code of your site, maybe through a comment form or something similar. By default, the browser will unknowingly load this external file and allow its contents to execute. Enter Content Security Policy, which allows you to define a whitelist of sources that are approved to load assets (JS, CSS, etc.). If the script isn’t on the approved list, it doesn’t get loaded.

Creating a Content Security Policy can require some trial and error, as you need to be careful not to block assets that should be loaded such as those provided by Google or other third party vendors. This sample policy will allow the current domain and a few sources from Google and WordPress.org:

default-src 'self' https://*.google-analytics.com https://*.googleapis.com https://*.gstatic.com https://*.gravatar.com https://*.w.org data: 'unsafe-inline' 'unsafe-eval';

Alternatively, some people opt to only block non-HTTPS assets, which although less secure is a lot easier to manage:

default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';

You can add this directive to nginx.conf or each site’s individual configuration file, depending on whether you want to share the policy across all sites. Personally, I specify a generic policy in the global config file and override it on a per-site basis as needed.

sudo nano /etc/nginx/nginx.conf

Add the following code within the http block:


##
# Security
##

add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';" always;

Some of you may have picked up on the fact that this only deals with external assets, but what about inline scripts? There are two ways you can handle this:

  1. Completely disable inline scripts by removing 'unsafe-inline' and 'unsafe-eval' from the Content-Security-Policy. However, this approach can break some third party plugins or themes, so be careful.
  2. Enable X-Xss-Protection which will instruct the browser to filter through user input and ensure suspicious code isn’t output directly to HTML. Although not bulletproof, it’s a relatively simple countermeasure to implement.

To enable the X-Xss-Protection filter add the following directive below the ‘Content-Security-Policy’ entry:

add_header X-Xss-Protection "1; mode=block" always;

These headers are no replacement for correct validation or sanitization, but they do offer another very strong line of defense against common XSS attacks. Only installing high quality plugins and themes from trusted sources is your best first line of defense.

Clickjacking

Clickjacking is an attack which fools the user into performing an action which they did not intend to, and is commonly achieved through the use of iframes. An article by Troy Hunt has a thorough explanation of clickjacking.

The most effective way to combat this attack vector is to completely disable frame embedding from third party domains. To do this, add the following directive below the X-Xss-Protection header:

add_header X-Frame-Options "SAMEORIGIN" always;

This will prevent all external domains from embedding your site directly into their own through the use of the iframe tag:

<iframe src="http://mydomain.com"></iframe>

MIME Sniffing

MIME sniffing can expose your site to attacks such as “drive-by downloads.” The X-Content-Type-Options header counters this threat by ensuring only the MIME type provided by the server is honored. An article by Microsoft explains MIME sniffing in detail.

To disable MIME sniffing add the following directive:

add_header X-Content-Type-Options "nosniff" always;

Referrer Policy

The Referrer-Policy header allows you to control which information is included in the Referrer header when navigating from pages on your site. While referrer information can be useful, there are cases where you may not want the full URL passed to the destination server, for example, when navigating away from private content (think membership sites).

In fact, since WordPress 4.9 any requests from the WordPress dashboard will automatically send a blank referrer header to any external destinations. Doing so makes it impossible to track these requests when navigating away from your site (from within the WordPress dashboard), which helps to prevent broadcasting the fact that your site is running on WordPress by not passing /wp-admin to external domains.

We can take this a step further by restricting the referrer information for all pages on our site, not just the WordPress dashboard. A common approach is to pass only the domain to the destination server, so instead of:

https://myawesomesite.com/top-secret-url

The destination would receive:

https://myawesomesite.com

You can achieve this using the following policy:

add_header Referrer-Policy "origin-when-cross-origin" always;

A full list of available policies can be found over at MDN.

Permissions Policy

The Permissions-Policy header allows a site to enable and disable certain browser features and APIs. This allows you to manage which features can be used on your own pages and anything that you embed.

A Permissions Policy works by specifying a directive and an allowlist. The directive is the name of the feature you want to control and the allowlist is a list of origins that are allowed to use the specified feature. MDN has a full list of available directives and allowlist values. Each directive has its own default allowlist, which will be the default behavior if they are not explicitly listed in a policy.

You can specify several features at the same time by using a comma-separated list of policies. In the following example, we allow geolocation across all contexts, we restrict the camera to the current page and the specified domain, and we block the microphone across all contexts:

add_header Permissions-Policy "geolocation=*, camera=(self 'https://example.com'), microphone=()";

Download the complete set of Nginx config files including these security directives.

That’s all of the suggested security headers implemented. Save and close the file by hitting CTRL + X followed by Y. Before reloading the Nginx configuration, ensure there are no syntax errors.

sudo nginx -t

If no errors are shown, reload the configuration.

sudo service nginx reload

After reloading your site you may see a few console errors related to external assets. If so, adjust your Content-Security-Policy as required.

You can confirm the status of your site’s security headers using SecurityHeaders.io, which is an excellent free resource created by Scott Helme. This, in conjunction with the SSL Server Test by Qualys SSL Labs, should give you a good insight into your site’s security.
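
You can also inspect the headers from the command line with curl, which is a quick way to confirm they’re being sent. The grep pattern below is just one way to filter out the relevant headers:

curl -sI https://ashleyrich.com | grep -iE 'strict-transport-security|content-security-policy|x-frame-options|x-content-type-options|referrer-policy|permissions-policy'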

That concludes this chapter. In the next chapter we’ll move a WordPress site from one server to another with minimal downtime.

Automated Remote Backups

This chapter is dedicated to implementing an automated, reliable way to create website backups. We cover how to automate backing up your site files and database, then dive into copying your backups to an offsite location using Amazon S3. Finally, we look at how to save costs on remote backup storage by implementing lifecycle rules that move your S3 backups to Amazon Glacier storage.

This is article 6 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter, I walked you through how to configure a WordPress server-level cron and set up outgoing email for your Ubuntu server. In this chapter, we’ll look at configuring automatic backups for your sites.

Performing backups on a regular basis is essential. It’s inevitable that at some point in the future you will need to restore data – whether that’s due to user error, corruption, or a security breach. You never know what could go wrong, so having a recent backup on hand can really make your life easier as a systems administrator.

There are generally two types of backups we recommend you perform. The first is a full system backup and the second is a backup of each individual site hosted on the server.

Full system backups are best performed by your VPS provider, but they are not usually enabled by default. Most VPS providers, including DigitalOcean, Akamai (formerly Linode), Google Cloud, and AWS, offer this service for a fee.

A full system backup is generally reserved for situations where you need to recover an entire server. For example, in the event of a rare, catastrophic failure where all the data on your server was lost. You won’t want to restore the entire system if only a single site needs restoration.

A single site backup saves the database and all files of the site, allowing you to restore just that site. For a WordPress site, you might think all you need to back up are the database and the uploads directory. After all, WordPress core files, themes, and plugins can be re-downloaded as needed. Maybe you’re even thinking of skipping backups for your uploads directory if you’re using a plugin like WP Offload Media, as the files are automatically sent to your configured storage provider when added to the media library. This approach to backups can lead to trouble down the line.

There are two reasons we recommend including all data and files in a single site backup.

First, some WordPress plugins may have functionality that stores files to the uploads directory, often in a separate location from the WordPress Media Library directory structure. Common examples are forms plugins that allow users to upload files from the front end. Your media offloading solution won’t move these files to the offsite storage provider. If you exclude the uploads directory from your backup, you won’t be able to restore them.

Second, if you only backup your database and uploads directory, you’ll have to manually download the WordPress core files and any themes or plugins. This is not ideal if you are hosting high traffic sites, like ecommerce, membership, or community sites. You need to recover from a failure quickly, or you will lose business.

Configuring Site Backups

A weekly backup should suffice for sites that aren’t updated often, but you may want to perform them more frequently. For example, you may want to perform backups for an ecommerce site every few hours, depending on how often new orders are received.

To set up backups for a site, first, create a new backups directory in the site’s root directory. This will store all your backup files.

cd /home/ashley/ashleyrich.com
mkdir backups

If you’ve been following the rest of this guide, the backups directory will sit alongside the existing cache, logs, and public directories.

ashley@pluto:~/ashleyrich.com$ ls 
backups  cache  logs  public
ashley@pluto:~/ashleyrich.com$

Next, we’ll create a new shell script called backup.sh.

nano backup.sh

Paste the following contents into the file.

#!/bin/bash

NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz

DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)

# Backup database
mysqldump --add-drop-table -u$DB_USER -p$DB_PASSWORD -h$DB_HOST $DB_NAME > ../backups/$SQL_BACKUP 2>&1

# Compress the database dump file
gzip ../backups/$SQL_BACKUP

# Backup the entire public directory
tar -zcf ../backups/$FILES_BACKUP .

What this script does:

  1. Configures the script to run as a bash script.
  2. Sets up a current date variable (NOW), a SQL filename variable (SQL_BACKUP) which includes the current date in the file name, and an archive file name variable (FILES_BACKUP), which also includes the current date.
  3. Retrieves the database credentials from the wp-config.php file and sets them up as variables to use in the mysqldump command, which exports the database to the SQL_BACKUP file in the backups directory. It also ensures that the SQL file includes the drop table MySQL command. This is useful when using this file to restore one database over another that has existing tables with the same name.
  4. Uses gzip to compress the SQL file so that it takes up less space. The resulting compressed filename looks something like this: 20211028191120_database.sql.gz.
  5. Creates an archive of the site’s public directory in the backups directory. The resulting archive filename looks something like this: 20211028191120_files.tar.gz.

You will also notice that any time we’re referring to the location of the backups directory, we’re using ../. This Linux file system syntax effectively means ‘go one directory above the current directory’ which we’re doing because we’re running the script from inside the public directory. We’ll also need to be aware of this when we set up the scheduled cron job later on.

Hit CTRL + X followed by Y to save the file.

The next step is to ensure the newly created script has execute permissions so that it can be run by a server cron job.

chmod u+x backup.sh

The last step is to schedule the backup script to run at a designated time. Begin by opening the crontab for the current user.

crontab -e

Add the following line to the end of the file.

0 5 * * 0 cd /home/ashley/ashleyrich.com/public/; /home/ashley/ashleyrich.com/backup.sh  >/dev/null 2>&1

This cron job will change the current directory to the site’s public directory, and then run the backup.sh script in the context of that directory, every Sunday morning at 05:00, server time.

If you would prefer to run the backup daily, you can edit the last cron date/time field.

0 5 * * * cd /home/ashley/ashleyrich.com/public/; /home/ashley/ashleyrich.com/backup.sh  >/dev/null 2>&1

Just remember, whichever option you use, you’ll need to add this crontab entry for each individual site you wish to back up.

WP-CLI Not Required

A little note about WP-CLI. You probably know that you could use the WP-CLI wp db export command, especially as we installed WP-CLI back in Chapter 2 and we use it in many of our WordPress tutorials.

However, it’s better to use mysqldump instead of WP-CLI, because it reduces dependencies and risk. For example, let’s say you update to a new version of PHP, but WP-CLI doesn’t work with that version. Your backups will be broken.

Cleaning Up Old Backups

Over time, this backup process is going to create a bunch of SQL and file archives in the backups directory, which can be a common reason for running out of server disk space. Depending on the data on your site, and how often it’s updated, you probably aren’t going to need to keep backups older than a month. So it would be a good idea to clean up old site backups you don’t need.

To remove old backups, add a line to the bottom of the backup.sh script.

# Remove backup files that are a month old
rm -f ../backups/$(date +%Y%m%d* --date='1 month ago').gz

This line uses a date command to get the date one month ago and creates a filename string with the wildcard character *. This will match any filename starting with the date of one month ago and ending in .gz, and removes those files. For example, if the script is running on July 24th, it will remove any backup files created on June 24th. Note that this only works reliably if the script runs every day; if you only run it weekly, backups created on dates that never coincide with a run date will linger, so you may want a daily schedule or a broader cleanup strategy.
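
To see what the cleanup glob expands to, you can run the date command on its own. Assuming a hypothetical run date of July 24th, 2023:

date "+%Y%m%d*" --date='1 month ago'
20230624*

The rm command then deletes any file in the backups directory matching 20230624*.gz.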

The updated backup script looks like this:

#!/bin/bash

NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz

DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)

# Backup database
mysqldump --add-drop-table -u$DB_USER -p$DB_PASSWORD -h$DB_HOST $DB_NAME > ../backups/$SQL_BACKUP 2>&1

# Compress the database dump file
gzip ../backups/$SQL_BACKUP

# Backup the entire public directory
tar -zcf ../backups/$FILES_BACKUP .

# Remove backup files that are a month old
rm -f ../backups/$(date +%Y%m%d* --date='1 month ago').gz

Configuring Remote Backups

One problem with the site backups we’ve just set up is that the backup files still reside on your VPS server. If the server goes down, it will take the backups with it. Therefore, it’s a good idea to store your individual site backups somewhere other than your server. One great option for this is to move them to an Amazon S3 bucket.

Creating an S3 Bucket

First, we’ll need to create a new S3 bucket to hold our backups.

Log in to the AWS Console and navigate to Services => S3. Click the Create bucket button, give the bucket a name and select a region. You’ll need to remember the region for a later step. You can leave the rest of the settings as their defaults.

Creating a new Amazon S3 bucket.

Scroll down and click the Create bucket button to create the bucket.

Saving the new bucket.

Setting Up an AWS User

Now that we have a bucket, we need a user with permission to upload to it. For details on how to do this, see our WP Offload Media documentation, but the TL;DR version is:

  1. Navigate to IAM users and create a user
  2. Assign the AmazonS3 access permissions
  3. Copy your Access Key ID and Secret Access Key

Be sure to hang onto your Access Keys as you will need them later.

Installing AWS CLI

Amazon offers an official set of command line tools for working with all its services including S3. They also provide detailed installation instructions. You may need to install the unzip utility first, which you can do with sudo apt install unzip. Once the AWS CLI is installed you can run aws from your command line terminal.
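
At the time of writing, Amazon’s documented install steps for the AWS CLI v2 on 64-bit Linux look like the following; check their installation instructions for other platforms or for the current commands:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install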

Uploading to S3

To upload to S3, we first need to configure the AWS CLI with the Access Keys of the user we created earlier, by running the aws configure command. Set the default region to the same region you chose for the S3 bucket and leave the default output format.

aws configure
AWS Access Key ID [None]: **************
AWS Secret Access Key [None]: ***************************
Default region name [None]: us-west-2
Default output format [None]:

Once this is done, it’s straightforward to upload a file to our S3 bucket, using the aws s3 cp command:

aws s3 cp ../backups/20211111122207_database.sql.gz s3://backups-ashleyrich.com/ --storage-class STANDARD

Now we need to add this to our backup script. At the bottom of the file, add the following to upload both the SQL backup and the files backup:

# Copy the files to the S3 bucket
aws s3 cp ../backups/$SQL_BACKUP.gz s3://backups-ashleyrich.com/ --quiet --storage-class STANDARD
aws s3 cp ../backups/$FILES_BACKUP s3://backups-ashleyrich.com/ --quiet --storage-class STANDARD

A Little Refactoring

Now that the basics of the backup script are in place, let’s review the script and see if we can improve it. It would be great if the script was more generic and could be used for any site.

  • Ideally, we should pass the S3 bucket name as an argument to the script
  • The script should make sure that the backups folder exists

Here is the updated version of the backup script, with those additions in place.

#!/bin/bash

# Get the bucket name from an argument passed to the script
BUCKET_NAME=${1-''}

if [ ! -d ../backups/ ]
then
    echo "This script requires a 'backups' folder 1 level up from your site files folder."
    exit
fi

NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz

DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)

# Backup database
mysqldump --add-drop-table -u$DB_USER -p$DB_PASSWORD -h$DB_HOST $DB_NAME > ../backups/$SQL_BACKUP 2>&1

# Compress the database dump file
gzip ../backups/$SQL_BACKUP

# Backup the entire public directory
tar -zcf ../backups/$FILES_BACKUP .

# Remove backup files that are a month old
rm -f ../backups/$(date +%Y%m%d* --date='1 month ago').gz

# Copy files to S3 if bucket given
if [ ! -z "$BUCKET_NAME" ]
then
    aws s3 cp ../backups/$SQL_BACKUP.gz s3://$BUCKET_NAME/ --quiet --storage-class STANDARD
    aws s3 cp ../backups/$FILES_BACKUP s3://$BUCKET_NAME/ --quiet --storage-class STANDARD
fi

Finally, it would be useful to move the backup.sh script out of the site directory. Because we’ve made sure the script could potentially be located anywhere, you could even move it to the server’s /usr/local/bin directory and make it available across the entire server. For our purposes, we’ll just move it to a scripts directory in the current user’s home directory.

mkdir /home/ashley/scripts
mv /home/ashley/ashleyrich.com/backup.sh /home/ashley/scripts/

In the cron job, we’ll update the path to the script and include the bucket name to copy the files to S3 like this:

0 5 * * * cd /home/ashley/ashleyrich.com/public/; /home/ashley/scripts/backup.sh backups-ashleyrich.com

If you don’t want to copy files to S3, you would omit the bucket name:

0 5 * * * cd /home/ashley/ashleyrich.com/public/; /home/ashley/scripts/backup.sh

Introducing Amazon Glacier

While sending your backups to S3 is a good start, it would be a good idea to configure Amazon Glacier. Amazon Glacier is an S3 storage class designed for data archiving and backup. While you can retrieve your data from the S3 Standard storage class almost instantly, data retrieval from Glacier can take several hours. The big advantage of Glacier is cost. For example, if you store your backups in the US West (Oregon) region, it costs only $0.0036 per GB per month if you choose the flexible retrieval option, whereas S3 Standard costs $0.023 per GB. This makes it perfect for backups.

If you read the documentation on the aws cp command you will see that all you need to do to implement the Glacier storage class is to change the --storage-class option from STANDARD to GLACIER. However, you should take into account that Glacier data retrieval can take several hours. In the event that you need to restore a site backup, it would be better to access backup files on the Standard storage class, as you could download those immediately. But wouldn’t it be great if you could keep the most recent backups on Standard storage, and then move them to Glacier after a set number of days?
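
For reference, an upload that goes straight to Glacier would look like this (shown for illustration only; we won’t use it here, for the retrieval-time reasons just mentioned):

aws s3 cp ../backups/20211111122207_database.sql.gz s3://backups-ashleyrich.com/ --storage-class GLACIER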

Fortunately, you can configure Amazon S3 Lifecycle rules on your S3 bucket objects. These rules allow you to transition your backup files between storage classes and even set expiration dates on them. The expiration option is great for cleaning outdated backups from your S3 bucket, saving you the cost of keeping these files around forever.

Configuring Amazon S3 Lifecycle Rules

For this guide, we’re going to configure an S3 Lifecycle rule that transitions the backup files to Glacier after one day and deletes them after one year. You might want to increase/decrease these values, depending on your requirements. For example, we move our site backups to Glacier after 90 days, and delete them after two years. It’s also worth noting that once an object has been moved to the Glacier storage class, there is a minimum storage duration of 90 days. This means if you delete any item in Glacier storage that’s been there for less than 90 days, you will still be charged for the 90 days.

To create an S3 Lifecycle rule, access your bucket in the AWS management console. If you have quite a few buckets, you can use the search box to filter by bucket name.

Search for bucket name in S3.

Click on the bucket name in the list to view the bucket details, then click on the Management tab. Click on either of the Create lifecycle rule buttons.

S3 Bucket settings showing Management tab.

Give the rule a name, and then choose the “Apply to all objects in the bucket” scope. Tick the checkbox to acknowledge that you understand that this rule will apply to all objects in the bucket.

S3 Lifecycle rule scope configuration.

In the “Lifecycle rule actions” area, tick to select the specific actions you want to apply. We want to use the “Move current versions of objects between storage classes” action and the “Expire current versions of objects” action.

Lifecycle rule action configuration.

We’re configuring both actions in one Lifecycle rule. However, there is nothing stopping you from creating one rule for the transition and another for the expiration.

The final step is to configure each of the actions.

For the transition rule I’ve selected Glacier for the “Choose storage class transitions” field and 1 for the “Days after object creation” field. This configuration will move the backup files to the Glacier storage class one day after they are copied to the bucket. Tick the checkbox to acknowledge that you understand that this will incur a one-time lifecycle request cost per object on small objects. When taking the Glacier Requests & data retrievals cost on the pricing page into account, the additional one-time cost per transition is still cheaper than leaving your files on Standard S3 storage.

For the expiration rule I’ve set 365 as the value for the “Days after object creation” field, which means it will expire any objects in the bucket after one year.

Lifecycle transitions configuration.

The bottom of the Lifecycle rule configuration page shows an overview of the actions you’ve configured. As you can see, current versions of objects are uploaded on day 0, moved to Glacier on day 1, and expired on day 365.

Reviewing Lifecycle rules before saving.

Noncurrent versions are only available if you’ve enabled object versioning on your bucket. This allows you to restore any files in case you delete them by accident, which isn’t something we need since we’re storing backups.

Click the Save button once you’re happy with your rules. If you’ve configured the rule correctly, after a day, you’ll see your backup files have moved from the Standard storage class to Glacier.

One day later, all objects transitioned to Glacier.

Conclusion

So there you have it, a fairly straightforward setup to back up your WordPress site and store the backups remotely. You may also want to consider using our WP Offload Media plugin to copy files to S3 as they are uploaded to the Media Library. Not only do you save disk space by storing those files in S3 instead of on your server, but you can configure Amazon CloudFront or another CDN to deliver them very fast. You can also enable versioning on the bucket so that all your files are restorable in case of accidental deletion.

That concludes this chapter. In the next chapter, we’ll improve the security of our server with tweaks to the Nginx configuration.

WordPress Cron and Email Sending

In this chapter, we’ll cover what cron is and how to get around some typical hurdles. Then we’ll set up automatic renewals of HTTPS certificates. Next, we discuss why we don’t set up an email server and step through the configuration of outgoing email sending.

This is article 5 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter, I walked you through WordPress caching. In this chapter I will demonstrate how to configure WordPress cron and set up outgoing email.

Cron

WordPress has built-in support for scheduled tasks, which allows certain processes to be performed in the background at designated times. Out-of-the-box WordPress performs the following scheduled tasks:

  • Automatic updates which are pushed out by the WordPress core team to fix security vulnerabilities
  • Check WordPress is running the latest stable release
  • Check for plugin updates
  • Check for theme updates
  • Publish any posts scheduled for future release

However, the cron system used by WordPress isn’t the most performant or accurate of implementations. Scheduled tasks in WordPress are triggered during the lifecycle of a page request, therefore if your site doesn’t receive any visits for a set period of time, no cron events will be triggered during this time.

This is especially true of sites that use page caching, such as Nginx FastCGI cache introduced in the previous chapter. With page caching enabled, WordPress is no longer processing each page request if the page cache is hit. This means that cron will not fire until the page cache expires. If you have configured the cache to expire after 60 minutes this may not be an issue, however, if you are caching for longer periods of time this may become problematic.

Using page requests to execute the cron is also problematic on sites without page caching that receive a lot of traffic. Checking if the cron needs to be executed on every page request is hard on server resources and several simultaneous requests could cause the cron to execute multiple times.

To overcome these issues cron should be configured using the operating system daemon (background process), available on Linux and all Unix-based systems. Because cron runs as a daemon it will run based on the server’s system time and no longer requires a user to visit the site.

Before configuring cron it’s recommended that you disable WordPress from automatically handling cron. Add the following line to your wp-config.php file:

define('DISABLE_WP_CRON', true);

Introducing Crontab

Scheduled tasks on a server are added to a text file called crontab and each line within the file represents one cron event. If your server hosts multiple sites you will need one entry per site.

Begin by connecting to your server.

ssh ashley@pluto.ashleyrich.com

Open the crontab using the following command. If this is the first time you have opened the crontab, you may be asked to select an editor. Option 2 (nano) is usually the easiest.

crontab -e

Crontab Editor

I’m not going to go into detail on the crontab syntax, but adding the following to the end of the file will trigger WordPress cron every 5 minutes. Remember to update the file path to point to your WordPress install and to repeat the entry for each site.

*/5 * * * * cd /home/ashley/ashleyrich.com/public; /usr/local/bin/wp cron event run --due-now >/dev/null 2>&1

Some articles suggest using wget or curl for triggering cron, but using WP-CLI is recommended. Both Wget and cURL make requests through Nginx and are subject to the same timeout limits as web requests. However, you may want your cron jobs to run for longer periods of time, for example if a plugin is uploading hundreds or thousands of media files to Amazon S3 as a background process. There is no timeout limit when running WordPress cron via WP-CLI, it will execute until complete.

The >/dev/null 2>&1 part ensures that no emails are sent to the Unix user account initiating the cron job.

Save the file by hitting CTRL + X followed by Y.

Cron is now configured using the Unix system cron tool, but I’ll demonstrate how to check it’s running correctly later on.
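
In the meantime, you can get a quick look at what WordPress has scheduled by listing the cron events with WP-CLI from the site’s public directory:

cd /home/ashley/ashleyrich.com/public
wp cron event list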

Email

Email servers are notoriously difficult to set up. Not only do you need to ensure that emails successfully hit recipient inboxes, but you also have to consider how you’ll handle spam and viruses (sent as email attachments). Installing the required software to run your own mail server can also eat up valuable system resources and potentially open up your server to more security vulnerabilities. This DigitalOcean article discusses in more detail why you may not want to host your own mail server.

I do not recommend that you configure your server to handle email and instead use a ‘best of breed’ service provider, such as Google Workspace. However, WordPress still needs to send outgoing emails:

  • Admin notifications
  • New user signups
  • Password resets
  • Auto update notifications

And that’s just WordPress core. Add plugins to the mix and the volume and importance of emails sent from your site can balloon. Think WooCommerce and order receipts.

Outgoing Email

Although it’s popular to configure WordPress to use SMTP for outgoing email, we do not recommend it. Instead, you should use a WordPress plugin that connects directly to an email sending service via an API.

Preferably a plugin that queues emails to be sent later when the API is unreachable instead of never sending them. WP Offload SES and WP Offload SES Lite are such plugins. In addition to simplifying the setup process WP Offload SES sends emails via Amazon SES, so you can be sure of high deliverability and low costs.

Testing Cron and Outgoing Email

In order to test that both cron and outgoing emails are working correctly, I have written a small plugin that will send an email to the admin user every 5 minutes. This isn’t something that you’ll want to keep enabled indefinitely, so once you have established that everything is working correctly, remember to disable the plugin!

Create a new file called cron-test.php within your plugins directory, with the following contents:

<?php
/**
 * Plugin Name: Cron & Email Test
 * Plugin URI: https://spinupwp.com/hosting-wordpress-yourself-cron-email-automatic-backups/
 * Description: WordPress cron and email test.
 * Author: SpinupWP
 * Version: 1.0
 * Author URI: http://spinupwp.com
 */

/**
 * Schedules
 *
 * @param array $schedules
 *
 * @return array
 */
function db_crontest_schedules( $schedules ) {
    $schedules['five_minutes'] = array(
        'interval' => 300,
        'display'  => 'Once Every 5 Minutes',
    );

    return $schedules;
}
add_filter( 'cron_schedules', 'db_crontest_schedules', 10, 1 );

/**
 * Activate
 */
function db_crontest_activate() {
    if ( ! wp_next_scheduled( 'db_crontest' ) ) {
        wp_schedule_event( time(), 'five_minutes', 'db_crontest' );
    }
}
register_activation_hook( __FILE__, 'db_crontest_activate' );

/**
 * Deactivate
 */
function db_crontest_deactivate() {
    wp_unschedule_event( wp_next_scheduled( 'db_crontest' ), 'db_crontest' );
}
register_deactivation_hook( __FILE__, 'db_crontest_deactivate' );

/**
 * Crontest
 */
function db_crontest() {
    wp_mail( get_option( 'admin_email' ), 'Cron Test', 'All good in the hood!' );
}
add_action( 'db_crontest', 'db_crontest' );

Upon activating the plugin, you should receive an email shortly after. If not, check your crontab configuration and WP Offload SES settings.

That concludes this chapter. In the next chapter we’ll look at configuring automatic backups for your sites.

Object Caching, Page Caching, and Other Speed Optimizations

We’ll start this chapter with a benchmark of site speed without caching and end it with a benchmark with caching enabled. We’ll install Redis and a companion WordPress plugin that work together to enable object caching. Then we’ll return to our Nginx config files and add a batch of directives to enable FastCGI caching and tell it what not to cache, including some directives for WooCommerce.

This is article 4 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter, I walked you through the process of configuring Nginx to serve your WordPress sites over HTTPS on your Linux server. However, we need to do more if we want our sites to feel snappy. In this chapter I will guide you through the process of caching a WordPress site. Caching will increase throughput (requests per second) and decrease response times (improve load times).

Initial Benchmarks: How Bad is WordPress Performance Without Caching?

I want to show you how this setup handles traffic prior to any caching. It’s difficult to simulate real web traffic. However, it is possible to send a large amount of concurrent requests to a server and track the time of responses. This gives you a rough indication of the amount of traffic a server can handle, but also allows you to measure the performance gains once you’ve implemented the optimizations.

The server I have set up for this series is a 1GB DigitalOcean Droplet running Ubuntu. I’m using Loader to send an increasing number of concurrent users to the server within a 60 second time period. The users scale, starting with 1 concurrent user and increasing to 50 concurrent users by the end of the test.

Initial benchmark results

The server was able to handle a total of 1,322 requests. You’ll see that as concurrent users increase, so does the site’s response time. Meaning the more visitors on the site, the slower it will load, which could eventually lead to timeouts. Based on the results, the server can theoretically handle 1,903,680 requests a day with an average response time of 1,134ms.

Monitoring the server’s resource usage shows that the load is split between both PHP and MySQL.

htop results

It’s time to optimize!

Object Cache

An object cache stores database query results so that instead of running the query again the next time the results are needed, the results are served from the cache. This greatly improves the performance of WordPress as there is no longer a need to query the database for every piece of data required to return a response.

Redis is an open-source option that is the latest and greatest when it comes to object caching. However, popular alternatives include Memcache and Memcached.

To get the latest stable version of Redis, you can use the official Redis package repository. First add the repository with the signing key and update the package lists:

curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt update

Then issue the following commands to install the Redis server and restart PHP-FPM:

sudo apt install redis-server -y
sudo service php8.0-fpm restart
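
To confirm the Redis server is up and accepting connections, you can ping it; it should reply with PONG:

redis-cli ping
PONG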

You could install the Redis module for Nginx on your server to perform simple caching, but in order for WordPress to use Redis as an object cache, you need to install a Redis object cache plugin. Redis Object Cache by Till Krüss is a good choice.

Object Cache - Plugins Screen

Once installed and activated, go to Settings > Redis to enable the object cache.

Object Cache - Enable

This is also the screen where you can flush the cache if required.

Object Cache - Flush

I’m not going to run the benchmarks again as the results won’t dramatically change. Although object caching reduces the average number of database queries on the front page from 22 to 2 (theme and plugin dependent), the database server is still being hit. Establishing a MySQL connection on every page request is one of the biggest bottlenecks within WordPress.

The benefit of object caching can be seen when you look at the average database query time, which has decreased from 2.1ms to 0.3ms. The average query times were measured using Query Monitor.

To see a big leap in performance and a big decrease in server resource usage, we must avoid a MySQL connection and PHP execution altogether.

Page Cache

Although an object cache can go a long way to improving your WordPress site’s performance, there is still a lot of unnecessary overhead in serving a page request. For many sites, content is rarely updated. It’s therefore inefficient to load WordPress, query the database, and build the desired page on every single request to the web server. Instead, you should serve a static HTML version of the requested page.

Nginx allows you to automatically cache a static HTML version of a page using the FastCGI module. Any subsequent requests to the page will receive the cached HTML version without ever hitting PHP or MySQL.

Setup requires a few changes to your Nginx server block. If you would find it easier to see the whole thing at once, feel free to download the complete Nginx config kit now. Otherwise, open your virtual host file:

sudo nano /etc/nginx/sites-available/ashleyrich.com

Add the following line before the server block, ensuring that you change the fastcgi_cache_path directive and keys_zone. You’ll notice that I store my cache within the site’s directory, on the same level as the logs and public directories.

fastcgi_cache_path /home/ashley/ashleyrich.com/cache levels=1:2 keys_zone=ashleyrich.com:100m inactive=60m;

You need to instruct Nginx to not cache certain pages. The following will ensure admin screens and pages for logged in users are not cached, plus a few others. This should go above the first location block.

set $skip_cache 0;

# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
    set $skip_cache 1;
}   
if ($query_string != "") {
    set $skip_cache 1;
}   

# Don’t cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $skip_cache 1;
}   

# Don’t use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}

Next, within the PHP location block add the following directives.

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache ashleyrich.com;
fastcgi_cache_valid 60m;

Download the complete set of Nginx config files

Notice how the fastcgi_cache directive matches the keys_zone set before the server block. In addition to changing the cache location, you can also specify the cache duration by replacing 60m with the desired duration in minutes. The default of 60 minutes is a good starting point for most people.

If you modify the cache duration, you should consider updating the inactive parameter in the fastcgi_cache_path line as well. The inactive parameter specifies the length of time cached data is allowed to live in the cache without being accessed before it is removed.

Save the configuration when you’re happy with it.

Next you need to add the following directives to your nginx.conf file.

sudo nano /etc/nginx/nginx.conf

Add the following below the gzip settings.

##
# Cache Settings
##

fastcgi_cache_key "$scheme$request_method$host$request_uri";
add_header Fastcgi-Cache $upstream_cache_status;

The first directive instructs the FastCGI module on how to generate key names. The second adds an extra header to server responses so that you can easily determine whether a request is being served from the cache.

Save the configuration and restart Nginx.

sudo service nginx restart

Now when you visit the site and view the headers, you should see an extra parameter.

Nginx - Response Headers

The possible return values are:

  • HIT – Page cached
  • MISS – Page not cached (refreshing should cause a HIT)
  • BYPASS – Page cached but not served (admin screens or when logged in)
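
You can also check the header from the command line with curl instead of the browser dev tools. Run it twice: the first request may be a MISS, and the second should be a HIT:

curl -sI https://ashleyrich.com | grep -i fastcgi-cache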

The final step is to install the Nginx Cache plugin, also by Till Krüss. This will automatically purge specific files from the FastCGI cache whenever the related WordPress content changes. You can manually purge the entire cache from the top bar in the WordPress dashboard.

You can also purge the entire cache by SSH’ing into your server and removing all the files in the cache folder:

sudo rm -Rf /home/ashley/ashleyrich.com/cache/*

This is especially handy when your WordPress dashboard becomes inaccessible, like if a redirect loop has been cached.

Once installed, navigate to Tools > Nginx Cache and define your cache zone path. This should match the value you specified for the fastcgi_cache_path directive in your Nginx hosts file.

WooCommerce FastCGI Cache Rules

Although page caching is desired for the majority of front-end pages, there are times when it can cause issues, particularly on ecommerce sites. For example, in most cases you shouldn’t cache the shopping cart, checkout, or account pages, as they are generally unique for each customer. You wouldn’t want customers seeing the contents of other customers’ shopping carts!

Additional cache exclusions can be added using conditionals and regular expressions (regex). The following example will work for the default pages (Cart, Checkout, and My Account) created by WooCommerce:

if ($request_uri ~* "/(cart|checkout|my-account)/*$") {
    set $skip_cache 1;
}

Open the configuration file for your chosen site, in my case:

sudo nano /etc/nginx/sites-available/ashleyrich.com

Add the new exclusion to the server directive, directly below the existing conditionals. Once you’re happy, save, test, and reload the configuration for the changes to take effect. You should now see that the “fastcgi-cache” response header is set to “BYPASS” when visiting any of the WooCommerce pages.

WooCommerce isn’t the only plugin to create pages that you should exclude from the FastCGI cache. Plugins such as Easy Digital Downloads, WP eCommerce, BuddyPress, and bbPress all create pages that you will need to exclude. Each plugin should have documentation on how to add caching rules to exclude its pages from caching.

Final Benchmarks: How Much Better is WordPress Performance With Caching?

With the caching configured, it’s time to perform a final benchmark. This time I’m going to up the maximum concurrent users from 50 to 750.

Final benchmark results

Not bad at all! The server was able to handle a total of 222,323 requests with an average response time of 101ms. You’ll notice that the response time doesn’t increase at the same rate as the number of concurrent users.

The server’s resource usage looks a little different too. Nginx is now solely causing the increased CPU usage spikes.

Final htop results

Performance optimization is much more difficult on highly dynamic sites where the content updates frequently, such as those that use bbPress or BuddyPress.

In these situations, you need to disable page caching on the dynamic sections of the site (the forums, for example). This is achieved by adding additional rules to the skip cache section within the Nginx server block, which will force those requests to always hit PHP and generate the page on the fly. Doing so will often mean you have to scale hardware sooner, thus increasing server costs. Another option is to implement micro caching.

Caching Plugins

At this point you may be wondering why I chose this route instead of installing a plugin such as WP Rocket, W3 Total Cache or WP Super Cache. First, not all plugins include an object cache. For those that do, you will often need to install additional software on the server (Redis for example) in order to take full advantage of the feature. Second, caching plugins don’t perform as well as server-based caching.

Offloading Media to the Cloud with WP Offload Media Lite

One significant way to reduce server requests is to use a plugin like WP Offload Media to move files that you upload to the server through the WordPress Media Library to cloud storage. The plugin will automatically rewrite the media URLs to serve the files from cloud storage.

WP Offload Media also allows you to configure a CDN to serve your media much faster, which means your pages load faster. This can lead to increased conversions and may even help improve your Google search engine rankings. Offloading your media will also mean your site’s media files don’t use up your server disk space.

Once you install the WP Offload Media Lite plugin, configure your storage provider settings. The plugin will guide you on doing this for the cloud storage providers it supports (Amazon S3, DigitalOcean Spaces, and Google Cloud Storage).

Configure storage provider

After configuring your storage settings, you can adjust your Delivery settings to take advantage of CDN benefits. Then you can start uploading your media to the library, and you’re rolling!

Configure CDN

That concludes this tutorial on caching and speed improvements. In the next chapter we’ll dig into cron and email sending.

The post Object Caching, Page Caching, and Other Speed<span class="no-widows"> </span>Optimizations appeared first on SpinupWP.

]]>
https://spinupwp.com/hosting-wordpress-yourself-server-monitoring-caching/feed/ 100
Configuring Nginx to Serve Your First Site Over HTTPS https://spinupwp.com/hosting-wordpress-yourself-setting-up-sites/ https://spinupwp.com/hosting-wordpress-yourself-setting-up-sites/#replybox Tue, 04 Apr 2023 08:25:40 +0000 https://spinupwp.com/?p=358 In this chapter we’ll discuss HTTPS and why it’s so important before updating our DNS and obtaining our first SSL certificate from Let’s Encrypt. Then we’ll add a new config file to Nginx for our first site complete with a redirect from HTTP to HTTPS. Next we’ll create a database for the site and we’ll use WP-CLI to install WordPress. We’ll wrap up with a discussion about creating more sites on the server.


This is article 3 of 10 in the series “Hosting WordPress Yourself”

In the previous chapter, I showed you how to install Nginx, PHP 8.0, WP-CLI, and MySQL, which formed the foundations of a working Linux web server. In this chapter, I will guide you through the process of deploying your first HTTPS-enabled WordPress site with HTTP/2 support.

HTTP/2

HTTP/2 is a major revision of the HTTP protocol, the communication protocol used whenever you interact with a website. HTTP/2 can provide a significant improvement to the load time of your sites. I wrote a complete article on HTTP/2, which explains the benefits in more detail. In short, there really is no reason not to enable HTTP/2; the only requirement is that the site must also be served over HTTPS.

HTTPS

HTTPS is an extension of HTTP that secures the communication between a server and a client. It ensures that all data sent between the devices is encrypted and that only the intended recipient can decrypt it. Without HTTPS, any data transmitted is sent in plain text, allowing anyone who might be eavesdropping to read the information.

HTTPS is especially important on sites that process credit card information, but it has gained widespread adoption across the web over the last few years. This is partly due to Google announcing it as a factor in ranking your website in search results, and also due to the introduction of Let's Encrypt, which provides free SSL certificates.

Obtaining an SSL Certificate

Before obtaining an SSL certificate you will need to ensure that you’ve added an A record to your DNS provider that points to the IP address of your server.
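
You can confirm the record has propagated before continuing, for example with dig (substitute your own domain). The command should print your server's IP address:

dig +short ashleyrich.com A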

Now let’s install Certbot, the free, open source tool for managing Let’s Encrypt certificates:

sudo apt install software-properties-common
sudo add-apt-repository universe
sudo apt update
sudo apt install certbot python3-certbot-nginx

To obtain a certificate, you can now use the Certbot Nginx plugin by issuing the following command. The certificate can cover multiple domains (100 maximum) by appending additional -d flags.

sudo certbot --nginx certonly -d ashleyrich.com -d www.ashleyrich.com

After entering your email address and agreeing to the terms and conditions, the Certbot client will generate the requested certificate. Make a note of where the certificate file fullchain.pem and key file privkey.pem are created, as you will need them later.

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/ashleyrich.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/ashleyrich.com/privkey.pem

Certbot will handle renewing all your certificates automatically, but you can test automatic renewals with the following command:

sudo certbot renew --dry-run

Nginx Server Block

Now we need to set up a server block so that Nginx knows how to handle requests for our domain. By default, Nginx will drop any connections it receives, because of the catch-all server block you created in the previous chapter. This ensures that the server only handles traffic for domain names that you explicitly define.

When we installed Nginx, you may remember we created an info.php file in the /var/www/html directory, because that is the default document root Nginx configures. However, we want to make use of a more manageable directory structure for our WordPress configuration.

If you’re not already there, navigate to your home directory.

cd ~/

For simplicity’s sake, all of the sites that you host are going to be located in your home directory and have the following structure:

Directory Structure

The logs directory is where the Nginx access and error logs will be stored, and the public directory will be the site’s root directory, which will be publicly accessible.

Begin by creating the required directories and setting the correct permissions:

mkdir -p ashleyrich.com/logs ashleyrich.com/public
chmod -R 755 ashleyrich.com

With the directory structure in place it’s time to create the server block in Nginx. Navigate to the sites-available directory:

cd /etc/nginx/sites-available

Create a new file to hold the site configuration. Naming this the same as the site’s root directory will make server management easier when hosting a number of sites:

sudo nano ashleyrich.com

Copy and paste the following configuration, ensuring that you change the server_name, access_log, error_log, and root directives to match your domain and file paths. You will also need to replace the file paths to the certificate and certificate key obtained in the previous step. The ssl_certificate directive should point to the fullchain.pem file, and the ssl_certificate_key directive should point to the privkey.pem file. Hit CTRL + X followed by Y to save the changes.

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name ashleyrich.com;

    ssl_certificate /etc/letsencrypt/live/ashleyrich.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ashleyrich.com/privkey.pem;

    access_log /home/ashley/ashleyrich.com/logs/access.log;
    error_log /home/ashley/ashleyrich.com/logs/error.log;

    root /home/ashley/ashleyrich.com/public/;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name www.ashleyrich.com;

    ssl_certificate /etc/letsencrypt/live/ashleyrich.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ashleyrich.com/privkey.pem;

    return 301 https://ashleyrich.com$request_uri;
}

server {
    listen 80;
    listen [::]:80;

    server_name ashleyrich.com www.ashleyrich.com;

    return 301 https://ashleyrich.com$request_uri;
}

Download the complete set of Nginx config files

This is a bare-bones server block that informs Nginx to serve the ashleyrich.com domain over HTTPS. The www subdomain will be redirected to ashleyrich.com and HTTP requests will be redirected to HTTPS.

The two location blocks essentially tell Nginx to pass any PHP files to PHP-FPM for processing. Requests for other file types are served directly from disk if the file exists; if it doesn't, the request is rewritten to index.php so WordPress can generate the page.

By default Nginx won’t load this configuration file. If you take a look at the nginx.conf file you created in the previous chapter, you will see the following lines:

##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

Only files within the sites-enabled directory are automatically loaded. This allows you to easily enable or disable sites by simply adding or removing a symbolic link (or symlink, think of it as a shortcut) in the sites-enabled directory, linked to the configuration file in sites-available.

To enable the newly created site, symlink the file that you just created into the sites-enabled directory, using the same filename:

sudo ln -s /etc/nginx/sites-available/ashleyrich.com /etc/nginx/sites-enabled/ashleyrich.com

In order for the changes to take effect, you must reload Nginx. However, before doing so you should check the configuration for any errors:

sudo nginx -t

If the test fails, recheck the syntax of the new configuration file. If the test passes, reload Nginx:

sudo service nginx reload
ashley@pluto:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
ashley@pluto:~$ sudo service nginx reload
 * Reloading nginx configuration nginx                                       [ OK ]
ashley@pluto:~$

With Nginx configured to serve the new site, it’s time to create the database so that WordPress can be installed.

Creating a Database

When hosting multiple sites on a single server, it’s good practice to create a separate database and database user for each individual site. You should also lock down the user privileges so that the user only has access to the databases that they require.

Log into MySQL with the root user.

mysql -u root -p

You’ll be prompted to enter the password which you created when setting up MySQL.

ashley@pluto:~$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.31-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Once logged in, create the new database, replacing ashleyrich_com with your chosen database name:

CREATE DATABASE ashleyrich_com CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;

Next, create the new user using the following command, remembering to substitute username and password with your own values:

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';

You then need to add the required privileges. To keep things simple, you can grant all privileges but restrict them to your database only, like so:

GRANT ALL PRIVILEGES ON ashleyrich_com.* TO 'username'@'localhost';

Alternatively, you can have more granular control and explicitly define the privileges the user should have:

GRANT SELECT, INSERT, UPDATE, DELETE ON ashleyrich_com.* TO 'username'@'localhost';

Be careful not to overly restrict permissions. Some plugins and major WordPress updates require heightened MySQL privileges (CREATE, DROP, ALTER, etc.), therefore revoking them could have adverse effects. The WordPress Codex has more information on MySQL privileges.
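
As a middle ground, you could grant the privileges that WordPress core updates and most plugins commonly need, while still restricting the user to a single database. Treat this as a sketch and adjust it to your own plugins' requirements:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX ON ashleyrich_com.* TO 'username'@'localhost';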

For the changes to take effect you must flush the MySQL privileges table:

FLUSH PRIVILEGES;

Finally, you can exit MySQL:

exit;

Now that you have a new database, it’s time to install WordPress.

Installing WordPress

You could install WordPress manually by using something like cURL or wget to download the latest.zip or latest.tar.gz archive, extract it, and then follow the WordPress installer in a web browser. As we already have WP-CLI installed, we’ll be using that instead.
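
For reference, the manual route would look something like this from inside the site's public directory, though we'll use WP-CLI instead:

curl -O https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz --strip-components=1
rm latest.tar.gz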

Start by navigating to the site’s public directory:

cd ~/ashleyrich.com/public

Then, using WP-CLI, download the latest stable version of WordPress into the working directory:

wp core download

You now need to create a wp-config.php file. Luckily, WP-CLI has you covered with the wp config create command. Make sure to use the database details you set up in the previous step:

wp config create --dbname=ashleyrich_com --dbuser=username --dbpass=password

Finally, with the wp-config.php file in place, you can install WordPress and set up the admin user in one fell swoop:

wp core install --url=https://ashleyrich.com --title='Ashley Rich' --admin_user=ashley --admin_email=hello@ashleyrich.com --admin_password=password

You should see the following message:

sh: 1: /usr/sbin/sendmail: not found
Success: WordPress installed successfully.

You can safely ignore the sendmail not found error. This occurs because we haven’t set up email sending yet. We’ll set up email sending in chapter 5.

You should now be able to visit the domain name in your browser and be presented with a default WordPress installation:

Blank WordPress Installation

Adding Additional Sites

Additional sites can be added to your server using the same procedure as above, and you should be able to fire up new sites within a couple of minutes. Here's a quick breakdown of how to add additional sites (a condensed shell sketch follows the list):

  1. Add the relevant DNS records to the domain.
  2. Obtain an SSL certificate.
  3. Navigate to your home directory and create the required directory structure for the new site (logs and public).
  4. Navigate to the sites-available directory within Nginx and copy an existing config file for the new server block. Ensure you change the relevant directives.
  5. Symlink the config file to the sites-enabled directory to enable the site, then test and reload Nginx.
  6. Create a new database and MySQL user.
  7. Navigate to the site’s public directory and download, configure and install WordPress using WP-CLI.
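
Condensed into shell commands, steps 2 through 5 for a hypothetical new site, example.com, might look like this (all names and paths are illustrative; adjust them to your own setup):

sudo certbot --nginx certonly -d example.com -d www.example.com
mkdir -p ~/example.com/logs ~/example.com/public
chmod -R 755 ~/example.com
sudo cp /etc/nginx/sites-available/ashleyrich.com /etc/nginx/sites-available/example.com
sudo nano /etc/nginx/sites-available/example.com # update server_name, root, log, and certificate paths
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo service nginx reload

From there, create the database and user in MySQL and install WordPress with WP-CLI exactly as before.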

You are free to add as many additional sites to your server as you like. The only limiting factors are available system resources (CPU, memory, and disk space) and any bandwidth restrictions imposed by your VPS provider, both of which can be overcome by upgrading your package. Caching will also greatly reduce system resource usage, which is something I will guide you through in the next chapter.

Install Nginx, PHP 8.0, WP-CLI, and MySQL

This chapter is all about setting up the software needed to run a WordPress site. First we'll install Nginx and configure it with better settings for our use. Next we'll install PHP and the packages required by WordPress, and configure PHP-FPM. Then we'll install WP-CLI and MySQL.


This is article 2 of 10 in the series “Hosting WordPress Yourself”

In chapter 1 of this guide, I took you through the initial steps of setting up and securing a virtual server on DigitalOcean using Ubuntu 22.04. In this chapter I will guide you through the process of setting up Nginx, PHP-FPM, and MySQL—which on Linux is more commonly known as a LEMP stack—that will form the foundations of a working web server.

Before moving on with this tutorial, you will need to open a new SSH connection to the server, if you haven’t already:

ssh ashley@pluto.ashleyrich.com
Ashley:~$ ssh ashley@pluto.ashleyrich.com
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-41-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Sep 21 19:30:02 BST 2022

  System load:  0.0               Processes:             112
  Usage of /:   44.2% of 4.67GB   Users logged in:       0
  Memory usage: 18%               IPv4 address for ens4: ***.***.***.***
  Swap usage:   0%

Last login: Wed Sep 21 19:19:37 2022 from ***.***.***.***
ashley@pluto:~$

Installing Nginx

Nginx has become the most popular web server software used on Linux servers, so it makes sense to use it rather than Apache. Although the official Ubuntu package repository includes Nginx packages, they’re often very outdated. Instead, I like to use the package repository maintained by Ondřej Surý that includes the latest Nginx stable packages.

First, add the repository and update the package lists:

sudo add-apt-repository ppa:ondrej/nginx -y
sudo apt update

There may now be some packages that can be upgraded. Let's do that now:

sudo apt dist-upgrade -y

Then install Nginx:

sudo apt install nginx -y

Once complete, you can confirm that Nginx has been installed with the following command:

nginx -v
ashley@pluto:~$ nginx -v
nginx version: nginx/1.22.0

Additionally, when visiting the Fully Qualified Domain Name (FQDN) pointing to your server’s IP address in the browser, you should see an Nginx welcome page.

Welcome to Nginx

Now that Nginx has been successfully installed, it's time to perform some basic configuration. Out of the box, Nginx is pretty well optimized, but there are a few basic adjustments to make. However, before opening the configuration file, you need to determine your server's CPU core count and open file limit.

Enter the following command to get the number of CPU cores your server has available. Take note of the number as we’ll use it in a minute:

grep processor /proc/cpuinfo | wc -l

Run the following to get your server’s open file limit and take note, we’ll need it as well:

ulimit -n

Next, open the Nginx configuration file, which can be found at /etc/nginx/nginx.conf:

sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

I’m not going to list every configuration directive but I am going to briefly mention those that you should change. If you would find it easier to see the whole thing at once, feel free to download the complete Nginx config kit now.

Start by setting the user directive to the username that you're currently logged in with. This will make managing file permissions much easier in the future, but it's only acceptable security-wise on a server accessed by a single user.

The worker_processes directive determines how many workers to spawn per server. The general rule of thumb is to set this to the number of CPU cores your server has available. In my case, this is 1.

The events block contains two directives. The first, worker_connections, should be set to your server's open file limit. This tells Nginx how many simultaneous connections can be opened by each worker_process. Therefore, if you have two CPU cores and an open file limit of 1024, your server can handle 2048 simultaneous connections. However, the number of connections doesn't directly equate to the number of users the server can handle, as the majority of web browsers open at least two connections per request. The multi_accept directive should be uncommented and set to on. This informs each worker_process to accept all new connections at a time, as opposed to accepting one new connection at a time.
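
For example, with one CPU core and an open file limit of 1024 (the values on my server), the relevant directives end up looking like this:

worker_processes 1;

events {
    worker_connections 1024;
    multi_accept on;
}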

Moving down the file you will see the http block. The first directive to add is keepalive_timeout. The keepalive_timeout determines how many seconds a connection to the client should be kept open before it’s closed by Nginx. This directive should be lowered, as you don’t want idle connections sitting there for up to 75 seconds if they can be utilized by new clients. I have set mine to 15. You can add this directive just above the sendfile on; directive:

http {

        ##
        # Basic Settings
        ##

        keepalive_timeout 15;
        sendfile on;

For security reasons, you should uncomment the server_tokens directive and ensure it is set to off. This will disable emitting the Nginx version number in error messages and response headers.

Underneath server_tokens add the client_max_body_size directive and set this to the maximum upload size you require in the WordPress Media Library. I chose a value of 64m.

Further down the http block, you will see a section dedicated to gzip compression. By default, gzip is enabled, but you should tweak these values further for better handling of static files. First, uncomment the gzip_proxied directive and set it to any, which will ensure all proxied request responses are gzipped. Second, uncomment the gzip_comp_level directive and set it to a value of 5. This controls the compression level of a response and can have a value in the range of 1–9. Be careful not to set this value too high, as it can have a negative impact on CPU usage. Finally, uncomment the gzip_types directive, leaving the default values in place. This will ensure that JavaScript, CSS, and other file types are gzipped in addition to HTML.
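
Once those directives are uncommented and adjusted, the active lines of the gzip section should read:

gzip on;
gzip_proxied any;
gzip_comp_level 5;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;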

That’s the basic Nginx configuration dealt with. Hit CTRL + X followed by Y to save the changes.

In order for Nginx to correctly serve PHP, you also need to ensure the fastcgi_param SCRIPT_FILENAME directive is set. Otherwise, you will receive a blank, white screen when accessing any PHP scripts. Open the fastcgi_params file:

sudo nano /etc/nginx/fastcgi_params

Ensure the following directive exists, if not add it to the file:

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;

To save the fastcgi_params file, hit CTRL + X followed by Y.

You must restart Nginx for the changes to take effect. Before doing so, ensure that the configuration files contain no errors by issuing the following command:

sudo nginx -t

If everything looks OK, go ahead and restart Nginx:

sudo service nginx restart
ashley@pluto:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
ashley@pluto:~$ sudo service nginx restart
 * Restarting nginx                                                           [ OK ]
ashley@pluto:~$

Install PHP 8.0

Just as with Nginx, the official Ubuntu package repository does contain PHP packages. However, they are not the most up-to-date. Again, I use one maintained by Ondřej Surý for installing PHP. Add the repository and update the package lists as you did for Nginx:

sudo add-apt-repository ppa:ondrej/php -y
sudo apt update

Then install PHP 8.0, as well as all the PHP packages you will require:

sudo apt install php8.0-fpm php8.0-common php8.0-mysql \
php8.0-xml php8.0-xmlrpc php8.0-curl php8.0-gd \
php8.0-imagick php8.0-cli php8.0-dev php8.0-imap \
php8.0-mbstring php8.0-opcache php8.0-redis \
php8.0-soap php8.0-zip -y

You’ll notice php-fpm in the list of packages being installed. FastCGI Process Manager (FPM) is an alternative PHP FastCGI implementation with some additional features that plays really well with Nginx. It’s the recommended process manager to use when installing PHP with Nginx.

After the installation has completed, confirm that PHP has installed correctly:

php-fpm8.0 -v
ashley@pluto:~$ php-fpm8.0 -v
PHP 8.0.23 (fpm-fcgi) (built: Sep 18 2022 10:25:06)
Copyright (c) The PHP Group
Zend Engine v4.0.23, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.23, Copyright (c), by Zend Technologies

Configure PHP 8.0 and PHP-FPM

Once Nginx and PHP are installed, you need to configure the user and group that the service will run under. As mentioned in the series introduction, this setup does not provide security isolation between sites by configuring separate PHP pools, so we will run a single PHP pool under your user account. If you require security isolation between sites, we don't recommend this approach; use SpinupWP to provision your servers instead.

Open the default pool configuration file:

sudo nano /etc/php/8.0/fpm/pool.d/www.conf

Change the following lines, replacing www-data with your username:

user = www-data
group = www-data

listen.owner = www-data
listen.group = www-data
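
For example, with the ashley user used throughout this series, the updated lines would read:

user = ashley
group = ashley

listen.owner = ashley
listen.group = ashley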

Next, you should adjust your php.ini file to increase the WordPress maximum upload size. Both this and the client_max_body_size directive within Nginx must be changed for the new maximum upload limit to take effect. Open your php.ini file:

sudo nano /etc/php/8.0/fpm/php.ini

Change the following lines to match the value you assigned to the client_max_body_size directive when configuring Nginx:

upload_max_filesize = 64M
post_max_size = 64M

Hit CTRL + X and Y to save the configuration. Before restarting PHP, check that the configuration file syntax is correct:

sudo php-fpm8.0 -t
ashley@server:~$ sudo php-fpm8.0 -t
[21-Sep-2022 19:44:24] NOTICE: configuration file /etc/php/8.0/fpm/php-fpm.conf test is successful

If the configuration test was successful, restart PHP using the following command:

sudo service php8.0-fpm restart

Now that Nginx and PHP have been installed, you can confirm that they are both running under the correct user by issuing the htop command:

htop

If you hit SHIFT + M, the output will be arranged by memory usage, which should bring the nginx and php-fpm8.0 processes into view.

Both processes will have one instance running under the root user. This is the main process that spawns each worker. The remainder should be running under the username you specified.

top - 08:55:43 up 29 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  97 total,   1 running,  94 sleeping,   2 stopped,   0 zombie
%Cpu(s):  0.0 us,  6.2 sy,  0.0 ni, 93.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :    981.2 total,    528.4 free,    129.4 used,    323.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    700.2 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    676 root      20   0  241232  35892  29620 S   0.0   3.6   0:00.11 php-fpm8.0
    680 root      20   0  630404  28408  15544 S   0.0   2.8   0:00.58 snapd
    675 root      20   0   29272  17952  10292 S   0.0   1.8   0:00.04 networkd-dispat
    341 root      19  -1   51464  13312  12308 S   0.0   1.3   0:00.11 systemd-journal
    760 ashley    20   0  241608  12916   6616 S   0.0   1.3   0:00.00 php-fpm8.0
    761 ashley    20   0  241608  12916   6616 S   0.0   1.3   0:00.00 php-fpm8.0
    888 root      20   0   13796   8916   7472 S   0.0   0.9   0:00.00 sshd
    863 root      20   0   12176   7408   6484 S   0.0   0.7   0:00.00 sshd
    566 systemd+  20   0   90228   6056   5292 S   0.0   0.6   0:00.03 systemd-timesyn
    998 ubuntu    20   0   13928   5992   4528 S   0.0   0.6   0:00.45 sshd
   1096 ashley    20   0   58988   5596   3756 S   0.0   0.6   0:00.00 nginx

If not, go back and check the configuration, and ensure that you have restarted both the Nginx and PHP-FPM services.

Test Nginx and PHP

To check that Nginx and PHP are working together properly, enable PHP in the default Nginx site configuration and create a PHP info file to view in your browser. You are welcome to skip this step, but it’s often handy to check that PHP files can be correctly processed by the Nginx web server.

First, you need to uncomment a section in the default Nginx site configuration which was created when you installed Nginx:

sudo nano /etc/nginx/sites-available/default

Find the section which controls the PHP scripts.

# pass PHP scripts to FastCGI server
#
#location ~ \.php$ {
#       include snippets/fastcgi-php.conf;
#
#       # With php-fpm (or other unix sockets):
#       fastcgi_pass unix:/run/php/php8.0-fpm.sock;
#       # With php-cgi (or other tcp sockets):
#       fastcgi_pass 127.0.0.1:9000;
#}

As we’re using php-fpm, we can change that section to look like this:

# pass PHP scripts to FastCGI server

location ~ \.php$ {
       include snippets/fastcgi-php.conf;

       # With php-fpm (or other unix sockets):
       fastcgi_pass unix:/run/php/php8.0-fpm.sock;
}

Save the file by using CTRL + X followed by Y. Then, as before, test to make sure the configuration file was edited correctly.

sudo nginx -t

If everything looks okay, go ahead and restart Nginx:

sudo service nginx restart

Next, create an info.php file in the default web root, which is /var/www/html.

cd /var/www/html
sudo nano info.php

Add the following PHP code to that info.php file, and save it by using the same CTRL + X, Y combination.

<?php
phpinfo();
?>

Lastly, because you set the user directive in your nginx.conf file to the user you're currently logged in with, give that user ownership of the info.php file.

sudo chown ashley info.php

Now, if you visit the info.php file in your browser, using the FQDN you set up in chapter 1, you should see the PHP info screen, which means Nginx can process PHP files correctly.

PHP info screen.

Once you’ve tested this, you can go ahead and delete the info.php file.

sudo rm /var/www/html/info.php

Catch-All Server Block

Currently, when you visit the server’s FQDN in a web browser you will see the Nginx welcome page. However, this usually isn’t the desired behavior. It would be better if the server returned an empty response for unknown domain names.

Begin by removing the following two default site configuration files:

sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default

Now you need to add a catch-all block to the Nginx configuration. Edit the nginx.conf file:

sudo nano /etc/nginx/nginx.conf

Towards the bottom of the file you’ll find a line that reads:

include /etc/nginx/sites-enabled/*;

Underneath that, add the following block. Nginx's non-standard 444 status code tells it to close the connection without sending a response:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;
}

Hit CTRL + X followed by Y to save the changes and then test the Nginx configuration:

sudo nginx -t

If everything looks good, restart Nginx:

sudo service nginx restart

Now when you visit the FQDN you should receive an error:

Browser error.

Here’s my final nginx.conf file, after applying all of the above changes. I have removed the mail block, as this isn’t something that’s commonly used.

user ashley;
worker_processes 1;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
    multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 64m;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 444;
    }
}

Download the complete set of Nginx config files

Installing WP-CLI

If you have never used WP-CLI before, it’s a command-line tool for managing WordPress installations, and greatly simplifies the process of downloading and installing WordPress (plus many other tasks).

Navigate to your home directory:

cd ~/

Using cURL, download WP-CLI:

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

You can then check that it works by issuing:

php wp-cli.phar --info

The command should output information about your current PHP version and a few other details.

To access the command-line tool by simply typing wp, you need to move it into a directory that's in your server's PATH and ensure that it has execute permissions:

chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

You can now access the WP-CLI tool by typing wp.

NAME

  wp

DESCRIPTION

  Manage WordPress through the command-line.

SYNOPSIS

  wp <command>

SUBCOMMANDS

  cache             Adds, removes, fetches, and flushes the WP Object Cache object.
  cap               Adds, removes, and lists capabilities of a user role.
  cli               Reviews current WP-CLI info, checks for updates, or views defined aliases.
  comment           Creates, updates, deletes, and moderates comments.
  config            Generates and reads the wp-config.php file.
  core              Downloads, installs, updates, and manages a WordPress installation.

Installing MySQL

The final package to install is MySQL. This time, the package in the official Ubuntu repository is up to date, so there's no need to add a third-party repository.

To install MySQL, issue the following command:

sudo apt install mysql-server -y

You can secure MySQL once it's installed. Luckily, there's a built-in script that will prompt you to change a few insecure defaults. However, you'll first need to change the root user's authentication method, because on Ubuntu installations the root user isn't configured to connect using a password by default. Without this change, the script will fail and get stuck in a loop that you can only escape by closing your terminal window.

First, open the MySQL prompt:

sudo mysql

Next, run the following command to change the root user’s authentication method to the secure caching_sha2_password method and set a password:

ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'password';

And then exit the MySQL prompt:

exit

Now we can safely run the security script:

sudo mysql_secure_installation

Follow the instructions and answer the questions. You’ll enter the password that you just set. Here are my answers:

ashley@server:~$ sudo mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root: ********

VALIDATE PASSWORD COMPONENT can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD component?

Press y|Y for Yes, any other key for No: Y

There are three levels of password validation policy:

LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 2
Using existing password for root.

Estimated strength of the password: 50
Change the password for root ? ((Press y|Y for Yes, any other key for No) : Y

New password: ********

Re-enter new password: ********

Estimated strength of the password: 100
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : Y
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : Y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.


Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Y
Success.

All done!

That’s all for this chapter. In the next chapter I will guide you through the process of setting up your first WordPress site and how to manage multiple WordPress installs.
