Creating Servers For Load Balancing
In this step-by-step guide we create the underlying server structure required for successful load balancing.
We will create a master server, which to the outside world will be just another server behind the load balancer. However, the master server node will have additional responsibilities, including:
- Serving as the main point for applications where uploads, installations, etc. are carried out.
- Carrying out synchronisations of data across all other slave/clone server nodes.
To begin, create a new node. Make sure to create the node within the same region as all other Rackspace services you use.
- Create the instance (here we use the smallest RAM size available, 1 GB).
Use SSH to log in to the instance via a terminal:
# ssh root@PUBLIC.IP.OF.MASTER.SERVER
Change root password:
# passwd
Create a new user for future regular usage:
# adduser mynewuser
Add the new user to the sudo group:
# usermod -a -G sudo mynewuser
Update the sudo group configuration to allow members to run as root:
# visudo
Add the following text at the bottom of the file opened:
%sudo ALL=(ALL) ALL
Save by using CTRL+X, Y and Enter.
Log out as the root user and log back in under the mynewuser account:
# ssh mynewuser@PUBLIC.IP.OF.MASTER.SERVER
# sudo su
Update the package manager apt-get, then upgrade the pre-installed packages:
# apt-get update
# apt-get upgrade
Set up a basic iptables firewall for the purposes of running an Apache web server. Enter the following rules at the command line, substituting where required:
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport ssh -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# iptables -A INPUT -j DROP
# iptables -I INPUT 1 -i lo -j ACCEPT
These rules allow existing connections to continue, accept localhost and SSH connections, and accept incoming connections on the default HTTP port (80) and secure HTTP port (443). Everything else is blocked at the firewall level.
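Before saving, you can confirm the rules and their order by listing the active ruleset:
# iptables -L INPUT -n --line-numbers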
Save the iptables rules and create a startup script to restore them on restart:
# iptables-save > /etc/iptables.rules
# nano /etc/network/if-pre-up.d/iptaload
Enter the following text in this new file:
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
Save and exit using CTRL+X, Y and Enter.
Create a script to run when the network is shut down so the rules are saved:
# nano /etc/network/if-post-down.d/iptasave
Enter the following text in this new file:
#!/bin/sh
iptables-save -c > /etc/iptables.rules
if [ -f /etc/iptables.downrules ]; then
    iptables-restore < /etc/iptables.downrules
fi
exit 0
Make both of these new files executable:
# chmod +x /etc/network/if-post-down.d/iptasave
# chmod +x /etc/network/if-pre-up.d/iptaload
Install NTP for time synchronization:
# apt-get install ntp
Prevent root login over SSH by editing the /etc/ssh/sshd_config file and amending the 'PermitRootLogin' value to 'no'. You may skip this part if you require root login, but remember to change it later. If you have amended the SSH config, restart ssh using:
# sudo service ssh restart
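If you prefer to script this edit, a one-liner such as the following should work, assuming the file contains an uncommented 'PermitRootLogin yes' line (check the file afterwards to confirm):
# sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config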
The previous steps were just a basic set-up for a newly created cloud node. Please look further into security and how to protect the server against external threats.
At this point the applications specific to the server should be installed:
# apt-get install apache2 libapache2-mod-php5 php5 php5-mysql mysql-client gcc -y
# apt-get install php5-imagick php5-mcrypt php5-gd php5-memcache php5-curl
The previous commands install the Apache web server, PHP 5, the MySQL extensions, the Memcache extensions, etc. You can add other extensions or options depending upon your own needs.
Set up the Apache configuration as per the existing server setup, or copy it across from the old server (whilst logged into your old server) using:
# scp -rp /etc/apache2/sites-available root@PUBLIC.IP.OF.MASTER.SERVER:/etc/apache2/
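Note that copying sites-available does not enable the virtual hosts on the new server. Assuming a copied vhost file named mysite (a hypothetical name, substitute your own), enable it and reload Apache:
# a2ensite mysite
# service apache2 reload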
Copy across the old www directories that are relevant to the server configuration:
# scp -rp /path/to/www/dir/ root@PUBLIC.IP.OF.MASTER.SERVER:/path/to/www/dir/
At this point create an image backup of the master server. It will later be used as a basis for the slave/clone server setup. Name it ‘BaseImg’.
You can now install phpMyAdmin on the master server if you wish to be able to maintain the cloud database via a web interface:
# apt-get install phpmyadmin
Update the following configuration file: /etc/phpmyadmin/config.inc.php. After the block of code that looks like:
if (!empty($dbname)) {
    /* a block of code */
}
Add the following:
/*
 * Cloud Database config added
 */
$i++;
/* Authentication type */
$cfg['Servers'][$i]['auth_type'] = 'cookie';
/* Server parameters */
$cfg['Servers'][$i]['host'] = 'PRIVATE HOSTNAME OF CLOUD DB INSTANCE';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['compress'] = false;
/* Select mysqli if your server has it */
$cfg['Servers'][$i]['extension'] = 'mysql';
Save the file and exit.
Moving away from the master server for the moment, use the Rackspace control panel to create a new server node. Use the ‘BaseImg’ saved earlier as the boot image. Make a note of the IP and password details provided by the control panel.
We will now create the load balancer and configure it to use the master server and clone server.
Use the Rackspace interface to create a Load balancer. Set the session persistence option to ‘on’ and the algorithm to ‘round robin’. Add the master server and clone server as part of the cluster available to the load balancer.
On the master server you will now create an SSH key which will be provided to the clone server so the master server can access the clone server without passwords.
Whilst logged into the master server, run:
# ssh-keygen
You will be asked to enter a filename; leave this blank and press Enter. This will create two files corresponding to a private and a public key. The public key is what we need to provide to the clone server; it is stored at /root/.ssh/id_rsa.pub.
Copy the key over to the clone server by running:
# ssh-copy-id root@PRIVATE.IP.OF.CLONE.SERVER
If the copying process was successful, you should be able to SSH log into the clone server from the master server without requiring a password.
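A quick way to verify this is to run a harmless command on the clone from the master; if no password prompt appears, the key exchange worked:
# ssh root@PRIVATE.IP.OF.CLONE.SERVER hostname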
You can re-secure SSH at this point on both the master and clone servers by restricting SSH root access (a scripted sketch follows this list):
- On the clone server, change PermitRootLogin no to PermitRootLogin without-password in /etc/ssh/sshd_config.
- On the master server, change PermitRootLogin yes to PermitRootLogin no in /etc/ssh/sshd_config.
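As a sketch, assuming the current values in each file match those quoted above, the same edits can be made with sed; restart ssh on each server afterwards. On the clone server:
# sed -i 's/^PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
# service ssh restart
And on the master server:
# sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
# service ssh restart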
Now that the master and clone servers can talk to each other without password prompts, we can install a tool for replication. This needs to take place on the master server.
Install Lsyncd on the master server:
# apt-get install lua5.1 liblua5.1-dev pkg-config rsync asciidoc make -y
# cd /var/tmp
# wget http://lsyncd.googlecode.com/files/lsyncd-2.1.4.tar.gz
# tar xzvf lsyncd-2.1.4.tar.gz
# cd lsyncd-2.1.4
# ./configure && make && make install
Create a startup script so Lsyncd automatically runs at boot. Put the block below inside "/etc/init/lsyncd.conf":
description "lsyncd file synchronizer"

start on (starting network-interface
          or starting network-manager
          or starting networking)
stop on runlevel [!2345]

expect fork
respawn
respawn limit 10 5

exec /usr/local/bin/lsyncd /etc/lsyncd.lua
Create the symbolic link:
# ln -s /lib/init/upstart-job /etc/init.d/lsyncd
Configure logging for Lsyncd:
# mkdir /var/log/lsyncd
Put the following block inside "/etc/logrotate.d/lsyncd":
/var/log/lsyncd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        if [ -f /var/lock/lsyncd ]; then
            /sbin/service lsyncd restart > /dev/null 2>/dev/null || true
        fi
    endscript
}
Create a configuration file for Lsyncd. Inside /etc/lsyncd.lua, place the following configuration:
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 20
}

sync {
    default.rsync,
    source = "/path/to/www/dir/",
    target = "PRIVATE.IP.OF.CLONE.SERVER:/path/to/www/dir/",
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
You can now start Lsyncd:
# start lsyncd
You can test the configuration by creating a file within the selected directory and checking whether it appears on the clone server. Use the 'touch' command to create a file:
# touch /path/to/www/dir/testfile.test
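Replication is not instantaneous, as Lsyncd batches changes, so allow a few seconds. Then check from the master and watch the Lsyncd log for activity:
# ssh root@PRIVATE.IP.OF.CLONE.SERVER ls /path/to/www/dir/
# tail /var/log/lsyncd/lsyncd.log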
The master server is now fully configured! Make an image for backup purposes called ‘MasterServerBackup’.
The final step is to configure the clone server with a reverse proxy. There may be certain tasks that must take place on the master server so that changes propagate across the clone servers via replication; the reverse proxy forwards such requests from the clone back to the master.
A good tool for this is the Varnish reverse proxy. Follow these steps to install and configure it:
# apt-get install varnish -y
Get ready to update the basic configuration by making a backup of the default configuration provided by the package:
# mv /etc/varnish/default.vcl /etc/varnish/default.vcl.backup
I’ve created a configuration just for redirection purposes; Varnish can be used to cache certain requests too. Create the following script within /etc/varnish/default.vcl:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

backend master {
    .host = "PRIVATE.IP.OF.MASTER.SERVER";
    .port = "80";
}

sub vcl_recv {
    if (!req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = client.ip;
    }
    if (req.restarts > 0 || req.url ~ "(phpmyadmin|cron|another-specific-url-string-that-should-go-to-master-server-pipe-separated)") {
        set req.backend = master;
        return(pass);
    }
    return(pass);
}

sub vcl_fetch {
    if (beresp.status == 404 && req.restarts == 0) {
        return(restart);
    }
}
Make varnish listen on port 80 so it is the first contact point for incoming requests:
# perl -pi -e 's/6081/80/;' /etc/default/varnish
As Varnish is now listening for outside requests, Apache must listen on a secondary port. Change the port that Apache listens on from '80' to '8080':
# sed -i 's/Listen 80/Listen 8080/g' /etc/apache2/ports.conf
Restart both apache and varnish so the changes take effect:
# service apache2 restart
# service varnish restart
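To confirm the new arrangement, you can request the clone locally; the response headers should include an X-Varnish header, indicating the request passed through Varnish rather than hitting Apache directly:
# curl -sI http://localhost/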
You can at this point make a final image backup for the clone server called ‘CloneFinalBackup’.
This image is central when you scale the servers in the future: create as many servers as required from the image, then add their private IPs to the load balancer pool. On the master server, you would then update the Lsyncd configuration so the new clones are included in replication tasks, as sketched below.
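Lsyncd takes one sync block per target, so each new clone gets its own block appended to /etc/lsyncd.lua. A sketch for one additional clone follows (PRIVATE.IP.OF.NEW.CLONE is a placeholder); you would also need to copy the master's public key to the new clone with ssh-copy-id as before, then restart Lsyncd:
sync {
    default.rsync,
    source = "/path/to/www/dir/",
    target = "PRIVATE.IP.OF.NEW.CLONE:/path/to/www/dir/",
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
# restart lsyncd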
The servers are now ready for incoming requests. If you update your DNS records for the associated domains to point to the public IP of the load balancer, everything should work in the following sequence:
- External HTTP request is routed to the Load balancer.
- Load balancer selects a server to fulfil the request based upon its pool and selection algorithm. If the visitor has been to a particular server before, the request is routed to the same server.
- If the request lands on a non-master server, Varnish checks the request against the URLs that should be routed to the master server. If a match is found, the request is forwarded from the clone server to the master server.
- Request is fulfilled.