Building Linux Load Balancing Cluster using Cent OS [Direct Routing]

Building a Joomla High Availability Cluster

On Fedora, CentOS, and Red Hat Enterprise Linux we can provide an IP load balancing solution using 'Piranha'. Piranha load balances incoming IP network traffic (requests) and distributes it among a farm of real servers. The technique used to load balance the IP traffic is based on the Linux Virtual Server (LVS) tools. This high availability solution is purely software based, and Piranha also provides the system administrator with a convenient graphical user interface for management.

 

The Piranha monitoring tool is responsible for the following functions:

  • Heartbeating between active and backup load balancers.
  • Checking availability of the services on each of the real servers.

Components of the Piranha cluster software: the IPVS kernel module, lvs (manages the IPVS routing table via the ipvsadm tool), nanny (monitors the servers and services on the real servers in a cluster), and pulse (controls the other daemons and handles failover between the IPVS routing boxes).

Let’s start configuring our setup:

We will configure our computers or nodes as follows:

Load balancing will be done using two Linux Virtual Server (LVS) nodes, on which we will install Piranha.

We will also install two web servers and two database server nodes for high availability. MySQL will be used as the database server, together with Heartbeat and DRBD (Distributed Replicated Block Device).

1 – First of all, stop all the services that we don't need to run on the nodes.

[root@lbnode01 ]# /etc/init.d/sendmail stop

[root@lbnode01 ~]# chkconfig --level 235 sendmail off
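
Depending on what is installed on your machines, there may be other services worth disabling as well; the list in the example below is only illustrative and should be adjusted to what is actually present on your nodes.

[root@lbnode01 ~]# for svc in cups bluetooth; do /etc/init.d/$svc stop; chkconfig --level 235 $svc off; done   ### example list only, adjust to your systems ###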

 

2 - We will modify the hosts file at /etc/hosts on each of the nodes in our setup; you can simply copy and paste the following configuration into the hosts file.

 

[root@lbnode01 /]# cat /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

##### Load Balancing Nodes IPs #####

192.168.0.1 lbnode01.shan.cz.cc lbnode01

192.168.0.2 lbnode02.shan.cz.cc lbnode02

##### Web Servers IPs #####

192.168.0.40 webnode01.shan.cz.cc webnode01

192.168.0.50 webnode02.shan.cz.cc webnode02

##### DB Servers IPs #####

192.168.0.60 dbnode01.shan.cz.cc dbnode01

192.168.0.61 dbnode02.shan.cz.cc dbnode02

########## Here is Virtual IP/Service IP of Webserver and MySQL DB ##########

192.168.0.3 www.shan.cz.cc www

192.168.0.4 db.shan.cz.cc db

 

3 – After copying the hosts file to all the nodes, we need to generate SSH keys.

[root@lbnode01 ]# ssh-keygen -t rsa

[root@lbnode01 ]# ssh-keygen -t dsa

[root@lbnode01 ]# cd /root/.ssh/

[root@lbnode01 ]# cat *.pub > authorized_keys

Now we can use the scp command to securely copy the SSH keys to the other nodes:

[root@lbnode01] scp -r /root/.ssh/ lbnode02:/root/

[root@lbnode01] scp -r /root/.ssh/ webnode01:/root/

[root@lbnode01] scp -r /root/.ssh/ webnode02:/root/

[root@lbnode01] scp -r /root/.ssh/ dbnode01:/root/

[root@lbnode01] scp -r /root/.ssh/ dbnode02:/root/

We can build up a global fingerprint list as follows:

[root@lbnode01] ssh-keyscan -t rsa lbnode01 lbnode02 webnode01 webnode02 dbnode01 dbnode02

[root@lbnode01] ssh-keyscan -t dsa lbnode01 lbnode02 webnode01 webnode02 dbnode01 dbnode02
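
Note that ssh-keyscan only prints the collected host keys to standard output, so to actually build a global fingerprint list we can append the output to the system-wide known hosts file and distribute it (the path /etc/ssh/ssh_known_hosts is the usual default; adjust it if your sshd_config says otherwise):

[root@lbnode01] ssh-keyscan -t rsa,dsa lbnode01 lbnode02 webnode01 webnode02 dbnode01 dbnode02 >> /etc/ssh/ssh_known_hosts

[root@lbnode01] for host in lbnode02 webnode01 webnode02 dbnode01 dbnode02; do scp /etc/ssh/ssh_known_hosts $host:/etc/ssh/; done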

 

4 – Now we will check the Network Time Protocol configuration to make sure the ntp package is installed.

[root@lbnode01 ]# rpm -qa | grep ntp

ntp-4.3.3p1-9.el5.centos

chkfontpath-1.20.1-1.1

[root@lbnode01]#

[root@lbnode01 ]# vim /etc/ntp.conf

###Configuration for NTP server###

restrict 127.0.0.1

server 127.127.1.0 # local clock

 

Type :wq to save and quit the ntp.conf file.

[root@lbnode01 ]#

[root@lbnode01]# /etc/init.d/ntpd restart

Shutting down ntpd: [ OK ]

Starting ntpd: [ OK ]

[root@lbnode01]#
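
It is also a good idea to make sure ntpd starts again after a reboot:

[root@lbnode01]# chkconfig --level 235 ntpd on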

5 – Now we will configure the client side in ntp.conf on the other nodes.

[root@dbnode01 /]# vim /etc/ntp.conf

 

#restrict 127.0.0.1

#restrict -6 ::1

server 192.168.0.1 ##Put Server IP here##

#server 0.centos.pool.ntp.org

#server 1.centos.pool.ntp.org

#server 2.centos.pool.ntp.org

#server 127.127.1.0 # local clock

#fudge 127.127.1.0 stratum 10

 

 

Type :wq to save and quit the ntp.conf file.

 

[root@dbnode01 /]# /etc/init.d/ntpd restart

Shutting down ntpd: [ OK ]

Starting ntpd: [ OK ]

[root@dbnode01 /]#

 

 

[root@dbnode01 /]#

[root@dbnode01 /]# ntpdate -u 192.168.0.1

13 Feb 19:14:11 ntpdate[13402]: step time server 192.168.0.1 offset -3.069414 sec

[root@dbnode01 /]#

Copy the same configuration file /etc/ntp.conf to the other client nodes (lbnode02, webnode01, webnode02, and dbnode02). After copying, restart the ntp service on these nodes.

Now we will update the time on all the nodes by typing the following command on each of them:

[root@dbnode01 /]# ntpdate -u 192.168.0.1
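
As an optional check, we can query the date on every node over SSH from lbnode01 (this relies on the SSH keys distributed earlier) and confirm that the clocks agree:

[root@lbnode01 ]# for host in lbnode02 webnode01 webnode02 dbnode01 dbnode02; do echo -n "$host: "; ssh $host date; done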

 

6 – Now we will set up our Linux Virtual Server by installing the Piranha package. We will perform the configuration on lbnode01 and lbnode02. Recall that Piranha includes ipvsadm, nanny, and the pulse daemon.

We will use Yum, the Linux package manager, to install Piranha on both nodes.

[root@lbnode01]# yum install piranha -y

[root@lbnode02 ]# yum install piranha -y

Now we will edit the Linux Virtual Server configuration file at /etc/sysconfig/ha/lvs.cf.

[root@lbnode01 ]# vim /etc/sysconfig/ha/lvs.cf

serial_no = 14

primary = 192.168.0.1

service = lvs

rsh_command = ssh

backup_active = 1

backup = 192.168.0.2

heartbeat = 1

heartbeat_port = 1050

keepalive = 2

 

deadtime = 10

network = direct ### use nat if the NAT method is used ###

debug_level = NONE

monitor_links = 1

virtual server1 {

active = 1

address = 192.168.0.3 eth0:1

port = 80

send = "GET / HTTP/1.1\r\n\r\n"

expect = "HTTP"

load_monitor = uptime

scheduler = rr

protocol = tcp

timeout = 10

reentry = 180

quiesce_server = 0

server webnode01 {

address = 192.168.0.40

active = 1

weight = 1

}

server webnode02 {

address = 192.168.0.50

active = 1

weight = 1

}

}

Type :wq to save and exit.
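
As an alternative to editing lvs.cf by hand, Piranha also ships a small web GUI (the piranha-gui service, which normally listens on port 3636) that can generate this file for you. If you want to try it, set a password for the interface and start the service:

[root@lbnode01 ]# piranha-passwd             ### set the password for the web interface ###

[root@lbnode01 ]# /etc/init.d/piranha-gui start

[root@lbnode01 ]# chkconfig piranha-gui on

The interface should then be reachable at http://lbnode01:3636/.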

Now we will copy this configuration file to lbnode02.

[root@lbnode01 ]# scp /etc/sysconfig/ha/lvs.cf lbnode02:/etc/sysconfig/ha/

Now add the following lines to /etc/sysctl.conf on lbnode01:

[root@lbnode01 ]# vim /etc/sysctl.conf

net.ipv4.ip_forward = 1

net.ipv4.conf.eth0.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.eth0.arp_announce = 2

Type :wq to save and exit.

 

[root@lbnode01 ~]# scp /etc/sysctl.conf lbnode02:/etc/

 

Run this command on both nodes

[root@lbnode01 ]#

[root@lbnode01 ]# sysctl -p

net.ipv4.ip_forward = 1

net.ipv4.conf.eth0.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.eth0.arp_announce = 2

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 4294967295

kernel.shmall = 268435456

[root@lbnode01 ]#

 

We will start httpd on both web servers.

 

[root@webnode01 ]#/etc/init.d/httpd start

[root@webnode02 ]#/etc/init.d/httpd start

 

We will start the pulse service on both load balancer nodes:

 

[root@lbnode01 ]# /etc/init.d/pulse start

Starting pulse:

[root@lbnode01 ]#

[root@lbnode01 ]# /etc/init.d/pulse restart

Shutting down pulse: [ OK ]

Starting pulse: [ OK ]

[root@lbnode01 ]# tail -f /var/log/messages

Feb 13 19:43:11 lbnode01 pulse[6363]: STARTING PULSE AS MASTER

Feb 13 19:43:11 lbnode01 pulse[6363]: partner dead: activating lvs

Feb 13 19:43:11 lbnode01 avahi-daemon[2940]: Registering new address record for 192.168.0.3 on eth0.

Feb 13 19:43:11 lbnode01 lvs[6367]: starting virtual service server1 active: 80

Feb 13 19:43:11 lbnode01 nanny[6376]: starting LVS client monitor for 192.168.0.3:80

Feb 13 19:43:11 lbnode01 lvs[6367]: create_monitor for server1/webnode01 running as pid 6376

Feb 13 19:43:11 lbnode01 nanny[6377]: starting LVS client monitor for 192.168.0.3:80

Feb 13 19:43:11 lbnode01 lvs[6367]: create_monitor for server1/webnode02 running as pid 6377

Feb 13 19:43:11 lbnode01 nanny[6376]: [ active ] making 192.168.0.40:80 available

Feb 13 19:43:11 lbnode01 nanny[6377]: [ active ] making 192.168.0.50:80 available

Feb 13 19:43:11 lbnode01 pulse[6369]: gratuitous lvs arps finished

 

As we can see from the output, everything is working fine.

 

7 – Now we will install and configure our web servers, PHP, and the arptables_jf package for direct routing.

[root@webnode01 ]# yum install httpd php php-mysql php-gd

[root@webnode01 ]# yum install arptables_jf

[root@webnode01 ]#echo "Shan 2012" > /var/www/html/lbs.html
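
To make it easy to see later which real server answered a request, you can optionally put slightly different content into the test page on each web node, for example:

[root@webnode01 ]#echo "Shan 2012 - webnode01" > /var/www/html/lbs.html

[root@webnode02 ]#echo "Shan 2012 - webnode02" > /var/www/html/lbs.html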

Now we will configure the loopback interfaces on both web server nodes.

[root@webnode01 ]# vim /etc/sysconfig/network-scripts/ifcfg-lo:0

DEVICE=lo:0

IPADDR=192.168.0.3

NETMASK=255.255.255.255

NETWORK=192.0.0.0

# If you're having problems with gated making 127.0.0.0/8 a martian,

# you can change this to something else (255.255.255.255, for example)

BROADCAST=192.255.255.255

ONBOOT=yes

NAME=loopback

[root@webnode01 ]#

[root@webnode01 ]#ifup lo:0

Now we will do it on the second web server node.

[root@webnode02 ]# vim /etc/sysconfig/network-scripts/ifcfg-lo:0

DEVICE=lo:0

IPADDR=192.168.0.3

NETMASK=255.255.255.255

NETWORK=192.0.0.0

# If you're having problems with gated making 127.0.0.0/8 a martian,

# you can change this to something else (255.255.255.255, for example)

BROADCAST=192.255.255.255

ONBOOT=yes

NAME=loopback

[root@webnode02 ]#

[root@webnode02 ]#ifup lo:0

 

Now we will configure arptables on the first web server node.

[root@webnode01 ]#arptables -A IN -d 192.168.0.3 -j DROP

[root@webnode01 ]#arptables -A OUT -d 192.168.0.3 -j mangle --mangle-ip-s 192.168.0.1

[root@webnode01 ]#arptables -A OUT -d 192.168.0.3 -j mangle --mangle-ip-s 192.168.0.2

[root@webnode01 ]#

[root@webnode01 ]# /etc/init.d/arptables_jf save

Saving current rules to /etc/sysconfig/arptables: [ OK ]

[root@webnode01 ]#

 

Let's do the same on the second web server node:

[root@webnode02 ]#arptables -A IN -d 192.168.0.3 -j DROP

[root@webnode02 ]#arptables -A OUT -d 192.168.0.3 -j mangle --mangle-ip-s 192.168.0.1

[root@webnode02 ]#arptables -A OUT -d 192.168.0.3 -j mangle --mangle-ip-s 192.168.0.2

 

[root@webnode02 ]#

[root@webnode02 ]# /etc/init.d/arptables_jf save

Saving current rules to /etc/sysconfig/arptables: [ OK ]

[root@webnode02 ]#
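
On both web nodes it is worth listing the rules and making sure arptables_jf comes back after a reboot, otherwise the ARP suppression for the virtual IP would be lost:

[root@webnode01 ]# arptables -L -n

[root@webnode01 ]# chkconfig --level 235 arptables_jf on

[root@webnode02 ]# chkconfig --level 235 arptables_jf on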

 

We will configure the lo:0 interface to be brought up at boot time.

[root@webnode01 ]# echo "ifup lo:0" >> /etc/rc.local

[root@webnode02 ]# echo "ifup lo:0" >> /etc/rc.local

 

We have managed to set up our LVS and web server nodes; now it's time to test whether everything is working fine up to now.

[root@lbnode01 ]# ipvsadm -L

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.shan.cz.cc:ht rr

-> webnode01.shan.cz.cc:h Route 1 0 0

-> webnode02.shan.cz.cc:h Route 1 0 0

[root@lbnode01 ]#

[root@lbnode01 ]# watch ipvsadm -Lcn
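
From a separate client machine on the network you can now fetch the test page through the virtual IP a few times and watch the connections alternate between the two real servers in the ipvsadm output (with the per-node test pages suggested earlier, the responses themselves also show which server replied):

### run on a client machine, not on the load balancers ###

for i in 1 2 3 4; do curl -s http://192.168.0.3/lbs.html; done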

8 – Now we will configure both database nodes and install DRBD and Heartbeat on both servers.

First we need to configure the partitions on both servers. We have 4 GB disks on both servers and we will create LVM partitions on them using the fdisk utility.

[root@dbnode01 ]# fdisk -l

[root@dbnode01 ]# fdisk /dev/sdb


[root@dbnode02 ]# fdisk /dev/sdb

#p is used to print the partition table

Command (m for help): p

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 1 522 4192933+ 8e Linux LVM

 

#d for deleting a partition

Command (m for help): d

Selected partition 1

#n for creating a new partition

Command (m for help): n

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-522, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-522, default 522): +4000M

Command (m for help): p

 

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdb1 1 487 3911796 83 Linux

 

Command (m for help): t

Selected partition 1

Hex code (type L to list codes): 8e

Changed system type of partition 1 to 8e (Linux LVM)

 

Command (m for help): p

 

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdb1 1 487 3911796 8e Linux LVM

 

Command (m for help):

 

Command (m for help): w

 

 

[root@dbnode01 ]# partprobe

 

Now we will create the physical volume on the LVM partition.

 

[root@dbnode01 ]# pvcreate /dev/sdb1

Now we will create the volume group:

 

[root@dbnode01 ~]# vgcreate vgdb /dev/sdb1

Now we will create the logical volumes:

[root@dbnode01 ~]# lvcreate -L 1000M -n lvdb vgdb

[root@dbnode01 ~]# lvcreate -L 256M -n lvmeta vgdb
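
Before moving on, it does not hurt to verify that the volume group and both logical volumes were created as expected:

[root@dbnode01 ~]# vgs vgdb

[root@dbnode01 ~]# lvs vgdb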

 

9 - We need to do the same on dbnode02. Once done with the partitioning, it's time to install DRBD.

Install DRBD using Yum.

[root@dbnode01 ]# yum install drbd82 kmod-drbd82 -y

[root@dbnode02 ]# yum install drbd82 kmod-drbd82 -y

[root@dbnode01 ]# modprobe drbd

[root@dbnode02 ]# modprobe drbd

[root@dbnode01 ]# echo "modprobe drbd" >> /etc/rc.local

[root@dbnode02 ]# echo "modprobe drbd" >> /etc/rc.local

 

Let's configure the drbd.conf file as follows.

[root@dbnode01 ~]# vim /etc/drbd.conf

global {

usage-count yes;

}

common {

syncer { rate 10M; }

}

resource r0 {

protocol C;

handlers {

pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";

pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";

local-io-error "echo o > /proc/sysrq-trigger ; halt -f";

outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";

}

startup {

}

disk {

on-io-error detach;

}

net {

after-sb-0pri disconnect;

after-sb-1pri disconnect;

after-sb-2pri disconnect;

rr-conflict disconnect;

}

syncer {

rate 10M;

al-extents 257;

}

 

on dbnode01.shan.cz.cc {

device /dev/drbd0;

disk /dev/vgdb/lvdb;

address 192.168.0.60:7788;

meta-disk /dev/vgdb/lvmeta[1];

}

 

on dbnode02.shan.cz.cc {

device /dev/drbd0;

disk /dev/vgdb/lvdb;

address 192.168.0.61:7788;

meta-disk /dev/vgdb/lvmeta[1];

}

}

Type :wq to save and exit.
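
A quick sanity check before copying the file around is to let drbdadm parse it; if there is a syntax error in drbd.conf it will be reported here rather than later when the resource is brought up:

[root@dbnode01 ]# drbdadm dump all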

 

[root@dbnode01 ]#scp /etc/drbd.conf dbnode02:/etc/

[root@dbnode01 ]#vim /etc/sysctl.conf

net.ipv4.conf.eth0.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.eth0.arp_announce = 2

Type :wq to save and exit.

 

[root@dbnode01 ]# sysctl -p

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.eth0.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.eth0.arp_announce = 2

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 4294967295

kernel.shmall = 268435456

[root@dbnode01 ]#

 


 

##### This will be run on both servers ######

 

[root@dbnode01 ]#drbdadm create-md r0

[root@dbnode02 ]#drbdadm create-md r0

[root@dbnode01 ]#drbdadm attach r0

[root@dbnode02 ]#drbdadm attach r0

[root@dbnode01 ]#drbdadm syncer r0

[root@dbnode02 ]#drbdadm syncer r0

[root@dbnode01 ]#drbdadm connect r0

[root@dbnode02 ]#drbdadm connect r0

The following command will be run on our primary node only:

[root@dbnode01 ]#drbdadm -- --overwrite-data-of-peer primary r0

 

The following commands will be run on both nodes:

[root@dbnode01 ]#drbdadm up all

[root@dbnode02 ]#drbdadm up all

The following command will be run on our primary node only:

[root@dbnode01 ]#drbdadm -- primary all #### ON Node one Only ####

[root@dbnode01 ]#watch cat /proc/drbd

[root@dbnode01 ]#mkfs.ext3 /dev/drbd0

[root@dbnode01 ]#mkdir /data/

[root@dbnode01 ]#mount /dev/drbd0 /data/

[root@dbnode01 ]#

[root@dbnode01 ]# df -hk

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

5967432 2625468 3033948 47% /

/dev/sda1 101086 12074 83793 13% /boot

tmpfs 257720 0 257720 0% /dev/shm

/dev/drbd0 4031516 107600 3719128 3% /data

[root@dbnode01 ]#

[root@dbnode01 ]# umount /data

 

Now we will create the /data directory on dbnode02.

[root@dbnode02 ]#mkdir /data

 

We will install the heartbeat packages on both database nodes using Yum.

[root@dbnode01 ]#yum install -y heartbeat heartbeat-pils heartbeat-stonith heartbeat-devel

Now we will create the file /etc/ha.d/ha.cf.

[root@dbnode01 ]#vim /etc/ha.d/ha.cf

#We will copy the following text to the ha.cf

logfacility local0

keepalive 2

#deadtime 30 # USE THIS!!!

deadtime 10

# we use one heartbeat link, broadcast on eth0 (the serial link is disabled)

bcast eth0

#serial /dev/ttyS0

baud 19200

auto_failback off

node dbnode01.shan.cz.cc

node dbnode02.shan.cz.cc

Type :wq to save and quit.
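
The same ha.cf must also exist on dbnode02, since Heartbeat expects an identical cluster definition on both nodes:

[root@dbnode01 ]# scp /etc/ha.d/ha.cf dbnode02:/etc/ha.d/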

 

The haresources file must be identical on both nodes; it names the preferred primary node followed by the resources Heartbeat will manage (the service IP, the DRBD disk, the filesystem, and MySQL):

[root@dbnode01 ]#vim /etc/ha.d/haresources

dbnode01.shan.cz.cc IPaddr::192.168.0.100/8/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 mysql

[root@dbnode02 ]#vim /etc/ha.d/haresources

dbnode01.shan.cz.cc IPaddr::192.168.0.100/8/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 mysql

 

On Both Servers:

 

[root@dbnode01 ~]#vim /etc/ha.d/authkeys

 

auth 3

3 md5 redhat ######### use a long random string as the password #########

On both nodes:

[root@dbnode01 ]#chmod 600 /etc/ha.d/authkeys

[root@dbnode01 ]#scp /etc/ha.d/authkeys dbnode02:/etc/ha.d/authkeys

[root@dbnode01 ]#chkconfig --level 235 heartbeat on

 

If you have problems mounting /dev/drbd0 on /data, check the status of the drbd service (service drbd status); if it is stopped, start it and try the mount again.

 

MySQL configuration (do this on both database nodes):

cp /etc/my.cnf /etc/my.cnf.orig

vim /etc/my.cnf

[mysqld]

# datadir=/var/lib/mysql

datadir=/data/mysql

#socket=/var/lib/mysql/mysql.sock

socket=/data/mysql/mysql.sock

# Default to using old password format for compatibility with mysql 3.x

# clients (those using the mysqlclient10 compatibility package).

old_passwords=1

[mysql.server]

user=mysql

#basedir=/var/lib

basedir=/data

[mysqld_safe]

log-error=/var/log/mysqld.log

pid-file=/var/run/mysqld/mysqld.pid

[mysql]

socket=/data/mysql/mysql.sock
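
Because my.cnf now points at /data/mysql, that directory has to exist on the DRBD device and contain an initialized MySQL data directory before mysqld can start. A minimal way to prepare it (assuming the mysql-server package is installed on both database nodes) and then bring everything up through Heartbeat could look like this:

### on dbnode01, which is still the DRBD primary ###

[root@dbnode01 ]# mount /dev/drbd0 /data

[root@dbnode01 ]# mkdir -p /data/mysql

[root@dbnode01 ]# chown -R mysql:mysql /data/mysql

[root@dbnode01 ]# mysql_install_db --datadir=/data/mysql --user=mysql

[root@dbnode01 ]# umount /data

### start heartbeat on both nodes; the active node should acquire the 192.168.0.100 service IP, mount /data, and start MySQL ###

[root@dbnode01 ]# /etc/init.d/heartbeat start

[root@dbnode02 ]# /etc/init.d/heartbeat start

[root@dbnode01 ]# ip addr show eth0

[root@dbnode01 ]# df -h /data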

Now it is time to add users/hosts to mysql server:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.40' IDENTIFIED BY 'password';

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.50' IDENTIFIED BY 'password';

mysql> FLUSH PRIVILEGES;

mysql>quit

 

10 - Now the final part is to install Joomla on our web servers.

 

[root@webnode01 ]# yum install php php-mysql php-gd -y

[root@webnode01 ]# ls

anaconda-ks.cfg Desktop install.log install.log.syslog Joomla_1.5.13-Stable-Full_Package

[root@webnode01 ]# cd Joomla_1.5.13-Stable-Full_Package/

[root@webnode01 Joomla_1.5.13-Stable-Full_Package]# ls

[root@webnode01 Joomla_1.5.13-Stable-Full_Package]# cp -avr * /var/www/html/

[root@webnode01 Joomla_1.5.13-Stable-Full_Package]# cd /var/www/html/

[root@webnode01 html]# ls

[root@webnode01 html]# cd ..

[root@webnode01 www]# ls

cgi-bin error html icons

[root@webnode01 www]# chown apache:apache html/ -R

[root@webnode01 www]#

[root@webnode01 www]# ls

cgi-bin error html icons

[root@webnode01 www]# cd html/

[root@webnode01 html]# ll

[root@webnode01 html]#

 

Now open a web browser (Internet Explorer, for example) and go to the following URL:

 

http://192.168.0.40

 

Follow the Joomla installation wizard, install the sample data as well, and use the following database settings:

Host: 192.168.0.100 (MySQL)

User: root

Password: password

Database: joomla

Copy all the Joomla code to webnode02:

[root@webnode01 html]# scp -r * webnode02:/var/www/html/

We need to open the Apache configuration file and add index.php to the DirectoryIndex directive (around line 391):

[root@webnode01 html]#vim /etc/httpd/conf/httpd.conf

DirectoryIndex index.html index.html.var index.php

Type :wq to save and quit.

 

[root@webnode01 /]#

[root@webnode01 ]# scp /etc/httpd/conf/httpd.conf webnode02:/etc/httpd/conf/

httpd.conf 100% 33KB 32.9KB/s 00:00

[root@webnode01 ]#

[root@webnode01 ]# /etc/init.d/httpd restart

Stopping httpd: [ OK ]

Starting httpd: [ OK ]

[root@webnode01 ]#

 

On Web Server 2

[root@webnode02 html]# chown apache:apache * -R

[root@webnode02 html]# ll

[root@webnode02 /]# yum install php php-mysql php-gd -y

[root@webnode02 ]# /etc/init.d/httpd restart

Stopping httpd: [ OK ]

Starting httpd: [ OK ]

[root@webnode02 ]#

 

Remember that each machine (LVS nodes and real servers) must have its default gateway set to the IP of the router connected to these servers.

 

[root@webnode01 ]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

192.168.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0

0.0.0.0 192.168.0.254 0.0.0.0 UG 0 0 0 eth0

[root@webnode01 ]#

 

Now you can test the whole setup by accessing the web site at http://192.168.0.3 or http://www.shan.cz.cc.

 

[root@lbnode01 ]# ipvsadm -L

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.shan.cz.cc:ht rr

-> webnode01.shan.cz.cc:h Route 1 0 0

-> webnode02.shan.cz.cc:h Route 1 0 0

[root@lbnode01 ]#

[root@lbnode01 ]# watch ipvsadm -Lcn

 

 
