Install HAProxy with SSL Termination

These days I have been working on scaling solutions for a PHP framework. Previously I used Nginx as the load balancer; however, with the requirement of health checks and failover, I turned to HAProxy this time. So I am writing this entry as a note on installing HAProxy with SSL termination. Most of my machines run stock Ubuntu 16.04. My testing cluster comprises the following 4 machines:

  • SERVER_1 plays the HAProxy role (I will reuse it for ProxySQL later).
  • SERVER_2 and SERVER_3 act as 2 web servers: webserver-01 and webserver-02.
  • SERVER_4 is dedicated to a MySQL 5.7 database. No replication, group replication, or multiple DB servers in this article.

I am just quickly noting the steps, since this blog entry is for me, and why should I care if others are annoyed by it :D?

Install MySQL server on SERVER_4

  1. One line:
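     Something like this (assuming the stock Ubuntu 16.04 package, which is MySQL 5.7):

       sudo apt-get update && sudo apt-get install -y mysql-server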
  2. Do remember to create a user that can connect from your web application servers, and allow remote connections to MySQL from those servers. For testing, you can simply create a user with ‘%’ as the host and tell MySQL to listen on every network interface (add bind-address = 0.0.0.0 to the [mysqld] config section).
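     A sketch; appuser, apppassword, and appdb are just example names. In the MySQL console:

       CREATE USER 'appuser'@'%' IDENTIFIED BY 'apppassword';
       GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'%';
       FLUSH PRIVILEGES;

     On Ubuntu 16.04 the [mysqld] section lives in /etc/mysql/mysql.conf.d/mysqld.cnf:

       [mysqld]
       bind-address = 0.0.0.0

     Then restart MySQL with sudo service mysql restart.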

Install Nginx with PHP-FPM on SERVER_2 and SERVER_3

  1. Just starting with Nginx:
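     For example:

       sudo apt-get update && sudo apt-get install -y nginx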
  2. Next we will install PHP-FPM:
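     On Ubuntu 16.04 this pulls in php7.0-fpm (php-mysql is there so the app can talk to MySQL):

       sudo apt-get install -y php-fpm php-mysql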
  3. Edit /etc/nginx/sites-available/default to enable index.php and PHP for the default site. I am just testing, so why would I bother too much with the configuration? Remember to restart Nginx after this step.
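     A minimal sketch of the relevant parts, assuming the default php7.0-fpm socket path on Ubuntu 16.04:

       server {
           listen 80 default_server;
           root /var/www/html;
           index index.php index.html;
           server_name _;

           location / {
               try_files $uri $uri/ =404;
           }

           # Pass PHP scripts to PHP-FPM
           location ~ \.php$ {
               include snippets/fastcgi-php.conf;
               fastcgi_pass unix:/run/php/php7.0-fpm.sock;
           }
       }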
  4. Create a /var/www/html/info.php file with a simple phpinfo thing:
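     For example:

       <?php phpinfo();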
  5. Access SERVER_2/info.php and SERVER_3/info.php to be sure things are working well.

Install Let’s Encrypt and HAProxy to SERVER_1

  1. Now it’s time to install Let’s Encrypt:
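     A sketch using the certbot PPA, which was the common way to get a recent client on Ubuntu 16.04 (the stock letsencrypt package also works):

       sudo add-apt-repository ppa:certbot/certbot
       sudo apt-get update
       sudo apt-get install -y certbot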
  2. Then, install HAProxy and start and enable it:
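     For example:

       sudo apt-get install -y haproxy
       sudo systemctl enable haproxy
       sudo systemctl start haproxy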
  3. Edit /etc/haproxy/haproxy.cfg and add the following lines so that HAProxy listens on port 80, sends Let’s Encrypt requests to a backend at port 8888, and sends normal requests to the 2 web servers (SERVER_2 and SERVER_3):
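     A sketch of the frontend/backend definitions; SERVER_2_IP and SERVER_3_IP are placeholders, and fe-http / be-letsencrypt are just the names I picked (be-scaling is referenced later in this post):

       frontend fe-http
           bind *:80
           # Let's Encrypt HTTP-01 challenges go to the local standalone server on 8888
           acl letsencrypt-acl path_beg /.well-known/acme-challenge/
           use_backend be-letsencrypt if letsencrypt-acl
           default_backend be-scaling

       backend be-letsencrypt
           server certbot 127.0.0.1:8888

       backend be-scaling
           balance roundrobin
           server webserver-01 SERVER_2_IP:80 check
           server webserver-02 SERVER_3_IP:80 check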

    Then, service haproxy reload and access SERVER_1_IP/info.php several times to see different results (round-robin loading from the 2 different servers).

    • One note on this: if we want to access backend web servers with session persistence (so N requests from 1 user stick to 1 server), we need to define a cookie for HAProxy to use, as follows:
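       A sketch; SRV_ID is just the cookie name I picked:

         backend be-scaling
             balance roundrobin
             cookie SRV_ID insert indirect nocache
             server webserver-01 SERVER_2_IP:80 check cookie webserver-01
             server webserver-02 SERVER_3_IP:80 check cookie webserver-02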
    • If we do not want the HA server to listen for all traffic, we can use the be-scaling backend only for a specific domain, as in the following configuration:
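       A sketch, with MY_HA_DOMAIN as the placeholder domain:

         frontend fe-http
             bind *:80
             acl host-app hdr(host) -i MY_HA_DOMAIN
             use_backend be-scaling if host-app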

  4. Create a location for the HAProxy SSL certs and get a cert issued. I do not use port 80 this time, on the assumption that HAProxy is already running on it (so this also works if we install on an existing HAProxy-based system):
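     A sketch using certbot’s standalone mode on port 8888 (matching the be-letsencrypt backend above), then concatenating the cert and key into the single .pem file HAProxy expects:

       sudo certbot certonly --standalone --preferred-challenges http \
            --http-01-port 8888 -d MY_HA_DOMAIN
       sudo mkdir -p /etc/haproxy/certs
       sudo bash -c 'cat /etc/letsencrypt/live/MY_HA_DOMAIN/fullchain.pem \
            /etc/letsencrypt/live/MY_HA_DOMAIN/privkey.pem \
            > /etc/haproxy/certs/MY_HA_DOMAIN.pem'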
  5. Now we need to edit /etc/haproxy/haproxy.cfg and add the SSL listening directive. We will use 2 different frontends for HTTP and HTTPS so that we can pass some additional headers in the different cases (to avoid nginx infinite redirects in some cases):
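     A sketch of the two frontends; the X-Forwarded-Proto header is what tells the backend whether the original request was HTTP or HTTPS:

       frontend fe-http
           bind *:80
           http-request set-header X-Forwarded-Proto http
           acl letsencrypt-acl path_beg /.well-known/acme-challenge/
           use_backend be-letsencrypt if letsencrypt-acl
           default_backend be-scaling

       frontend fe-https
           bind *:443 ssl crt /etc/haproxy/certs/MY_HA_DOMAIN.pem
           http-request set-header X-Forwarded-Proto https
           default_backend be-scaling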

    Then, service haproxy reload and access https://MY_HA_DOMAIN/info.php to see the result.
  6. Finally, we need to set up renewal and schedule it to run monthly. We can create a new /root/haproxy-certbot-renewal.sh as follows:
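     A sketch; the --http-01-port flag matches the standalone setup above, and MY_HA_DOMAIN is still a placeholder:

       #!/bin/bash
       # Renew via the standalone plugin behind HAProxy on port 8888
       certbot renew --http-01-port 8888 --quiet
       # Rebuild the combined pem that HAProxy reads
       cat /etc/letsencrypt/live/MY_HA_DOMAIN/fullchain.pem \
           /etc/letsencrypt/live/MY_HA_DOMAIN/privkey.pem \
           > /etc/haproxy/certs/MY_HA_DOMAIN.pem
       service haproxy reload

     Then make it executable and schedule it monthly, e.g. in root’s crontab:

       chmod +x /root/haproxy-certbot-renewal.sh
       # crontab -e (as root): run at 00:00 on the 1st of every month
       0 0 1 * * /root/haproxy-certbot-renewal.sh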


Enable HAProxy Stats / Monitoring

  1. We can simply edit /etc/haproxy/haproxy.cfg and add the stats section as follows:
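     For example (the port, path, and credentials match the description below):

       listen stats
           bind *:8080
           stats enable
           stats uri /ha-monitor?stats
           stats auth haadmin:1234567890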

    With the above configuration, the stats server listens on port 8080 (so do remember to open that port on the HA server), the access path is /ha-monitor?stats, and the user is haadmin with password 1234567890.
  2. Reload HAProxy and access the port & path that we defined in the above step.

GlusterFS on each web server for file replication

  1. Add all web server IPs to /etc/hosts on each web server machine:
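     For example (the IPs are placeholders):

       SERVER_2_IP    webserver-01
       SERVER_3_IP    webserver-02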
  2. Open the necessary ports on each web server for GlusterFS (a firewall sketch follows these notes):
    • TCP and UDP ports 24007 and 24008 on all GlusterFS servers, plus port 2049 (TCP-only, from GlusterFS 3.4 and later) for portmapper.
    • One port for each brick, starting from port 49152. A brick is a filesystem that is mounted. In my case I only need one mount point for one web application, so I only need to open port 49152.
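     A sketch with ufw, assuming that is your firewall (run on each web server, once per peer IP; PEER_IP is a placeholder):

       sudo ufw allow from PEER_IP to any port 24007:24008 proto tcp
       sudo ufw allow from PEER_IP to any port 24007:24008 proto udp
       sudo ufw allow from PEER_IP to any port 2049 proto tcp
       sudo ufw allow from PEER_IP to any port 49152 proto tcp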
  3. Install GlusterFS on each web server machine:
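     A sketch; I assume a Gluster community PPA here since the stock Ubuntu 16.04 package is quite old (the stock glusterfs-server package also works):

       sudo add-apt-repository ppa:gluster/glusterfs-3.12
       sudo apt-get update
       sudo apt-get install -y glusterfs-server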
  4. Configure the trusted pool for GlusterFS:
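     For example, from webserver-01:

       sudo gluster peer probe webserver-02
       sudo gluster peer status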
  5. Set up a GlusterFS volume on each web server:
    1. On each web server, create the brick folder that will hold the app data:
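       Using the brick path referenced later in this post:

         sudo mkdir -p /data/bricks/gvapp0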
    2. On any web server, create a GlusterFS volume (I use the force param here since I am creating the GlusterFS volume inside the system root partition) and start it:
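       A sketch with a 2-way replica across the two web servers:

         sudo gluster volume create gvapp0 replica 2 \
              webserver-01:/data/bricks/gvapp0 \
              webserver-02:/data/bricks/gvapp0 force
         sudo gluster volume start gvapp0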
    3. Check with gluster volume info to see if the gvapp0 volume is properly started.
  6. On each web server, we need to mount the volume to the web application location:
    1. Install the attr package:
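       For example:

         sudo apt-get install -y attr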
    2. Mount the volume to the web app home:
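       For example, on webserver-01 (each server can mount from itself):

         sudo mkdir -p /home/hawebapp
         sudo mount -t glusterfs webserver-01:/gvapp0 /home/hawebapp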

      Remember that you need to specify the VOLUME_NAME (e.g. gvapp0), not the full path (/data/bricks/gvapp0) when mounting, otherwise you will get the error “failed to fetch volume file (key:/data/bricks/gvapp0)”.
    3. When you find that mounting works properly, simply edit /etc/fstab to include the mount point when the server starts:
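       For example, on webserver-01:

         webserver-01:/gvapp0  /home/hawebapp  glusterfs  defaults,_netdev  0  0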
  7. On each web server, configure Nginx to listen on port 80 with the web root of the HA domain pointing to /home/hawebapp/, and start serving your users.
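     A minimal sketch of the vhost change (same PHP location block as before; MY_HA_DOMAIN is a placeholder):

       server {
           listen 80;
           server_name MY_HA_DOMAIN;
           root /home/hawebapp;
           index index.php index.html;

           location ~ \.php$ {
               include snippets/fastcgi-php.conf;
               fastcgi_pass unix:/run/php/php7.0-fpm.sock;
           }
       }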

Enable NFS and mount client as NFS instead of Gluster

Even after enabling the metadata cache for Gluster volumes, the performance of mounting the volume with the gluster (FUSE) type is still bad for my web app (a legacy PHP app with more than 15k small files). I often see the glusterd and glusterfs services consume lots of CPU, so I turned to enabling NFS for the gluster volume. Even though this built-in NFS is marked as deprecated, it is still OK to use compared to the original gluster mount type. Of course, if we start with RedHat / CentOS, we should use NFS-Ganesha as per the guide. I am on Ubuntu, so I just use the deprecated built-in NFS with NFS v3.

  1. On any Gluster server, enable NFS for the volume:
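     For example:

       sudo gluster volume set gvapp0 nfs.disable off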
  2. On all Gluster servers, unmount the current volume and re-mount it as NFS:
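     A sketch; Gluster’s built-in NFS speaks NFS v3 over TCP, hence the mount options:

       sudo umount /home/hawebapp
       sudo apt-get install -y nfs-common
       sudo mount -t nfs -o vers=3,mountproto=tcp,nolock webserver-01:/gvapp0 /home/hawebapp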
  3. On all Gluster servers, edit the mount point in /etc/fstab:
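     For example, on webserver-01:

       webserver-01:/gvapp0  /home/hawebapp  nfs  defaults,_netdev,vers=3,mountproto=tcp,nolock  0  0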
  4. Put enough load on your web app to see the difference in terms of resource consumption.
