Updating VMware Tools on Red Hat Enterprise/Scientific/CentOS Linux 6 for VMware ESXi 5

October 4, 2011

In a previous post I discussed installing the open source VMware tools for Red Hat Enterprise/Scientific/CentOS Linux 6 from a yum package repository provided by VMware, in contrast to using the version distributed directly with VMware ESX/ESXi, which is not provided as an RPM package for Linux guests. With the upgrade to ESXi 5, updating the tools installed from the repository is not very seamless because of some update issues as well as package changes.

First off, to get the VMware tools packages updated it is important to upgrade to the latest version of Red Hat/Scientific/CentOS Linux 6, because of issues with the yum version originally distributed with 6.0. VMware has disabled automatic updates because of this issue:

http://packages.vmware.com/tools/docs/engineering-release-notes/rhel-upgrade

I am using Scientific Linux 6; for Red Hat and CentOS the upgrade process should be similar, though it will most likely have a few differences. Be careful to test your installed software, because the following procedure results in a minor version upgrade (to 6.1 at the time of this writing):

https://www.scientificlinux.org/documentation/howto/upgrade.6x

Now it is time to get the VMware tools updated. First, remove the old kernel module package installed from the repository, assuming you followed a procedure similar to my original post. The other VMware tools packages should be removed automatically since they depend on this kernel module.

# yum remove vmware-open-vm-tools-kmod

Verify that all VMware tools packages are removed; RPM should return no results. Remove any additional VMware packages if necessary.

# rpm -qa | grep vmware

Now you’ll need to edit the original yum repo file created for the VMware tools repository:

# nano /etc/yum.repos.d/vmware-tools.repo

Modify the “baseurl” directive so that it looks like the line below.

baseurl=http://packages.vmware.com/tools/esx/5.0/rhel6/i386

I am using the 32-bit Linux version so I have i386 at the end; substitute x86_64 if using the 64-bit version.
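For reference, here is a rough sketch of what the complete vmware-tools.repo file might look like after the change. The repo id and the other directives are assumptions based on a typical setup from the original post, so keep whatever name/gpgkey lines your existing file already contains:

[vmware-tools]
name=VMware Tools for ESXi 5
baseurl=http://packages.vmware.com/tools/esx/5.0/rhel6/i386
enabled=1
gpgcheck=1
# keep the gpgkey line from your original vmware-tools.repo file here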

Now you need to clean out the cached info about the repositories so that it will read the package info from the modified VMware repo:

# yum clean all

Install the kernel module packages:

# yum install vmware-tools-esx-kmods

If you are using the PAE kernel you’ll need to append “-PAE” or “-pae” (with no space) to the end of the package name in the command above.
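For example, on a PAE system the command would look something like the following; the exact package name here is an assumption, so check it with “yum search vmware-tools” first:

# yum install vmware-tools-esx-kmods-PAE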

Time to install the new VMware tools. In my case I’ll install without the graphical components since my Linux servers don’t run a GUI:

# yum install vmware-tools-esx-nox

Or use this if your Linux VM is running with a graphical interface:

# yum install vmware-tools-esx

That should do it. Your VMware tools should now be updated to the latest version available with ESXi 5.


Configure HAProxy and Keepalived for Load Balancing and Reverse Proxy on Red Hat/Scientific/CentOS Linux 5/6

June 28, 2011

HAProxy is an open source load balancer/reverse proxy that can provide high availability for your network services. While generally used for web services, it can also be used to provide more reliability for services such as SMTP and terminal services. In addition, we can combine it with the Keepalived package to provide high availability/failover for the HAProxy server itself. HAProxy plus Keepalived is a good solution for high availability at a very low cost compared to proprietary hardware-based load balancers.

In this example I will configure 2 HAProxy/Keepalived servers (lb1/lb2) that will direct traffic to 2 Apache web servers (web1/web2). I will not detail the set up of the web servers. Here is a list of the server and IP address configuration scheme:

lb1  192.168.20.11
lb2  192.168.20.12
web1  192.168.20.21
web2  192.168.20.22

Configure on both lb1/lb2

First we need to activate the Extra Packages for Enterprise Linux (EPEL) repository, which hosts packages for the HAProxy and Keepalived software. Install the EPEL repo info into yum:

# rpm -ivh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm

Now we will install the HAProxy and Keepalived software:

# yum -y install haproxy keepalived

Configure Keepalived on lb1/lb2

Move the existing config because we will basically be starting from scratch.

# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Edit the Keepalived config file and make it look something like below:

# nano /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
    script "killall -0 haproxy"     # cheaper than pidof
    interval 2                      # check every 2 seconds
    weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101                    # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.20.20
    }
    track_script {
        chk_haproxy
    }
}

Make sure to change the priority under the vrrp_instance to 100 on the backup HAProxy server “lb2”. Other than that parameter, the keepalived.conf file should be identical on both lb1 and lb2. Also, if you want additional virtual IP addresses for other services/servers in your network, simply add them under the virtual_ipaddress directive, as shown below.
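For example, a second (purely hypothetical) virtual IP for another service would be added like this:

virtual_ipaddress {
    192.168.20.20
    192.168.20.30               # hypothetical additional VIP for another service
}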

Now we need to configure the system to allow HAProxy to bind to shared virtual IP addresses that are not currently assigned to the local interface. First make a backup of the sysctl.conf file:

# cp /etc/sysctl.conf /etc/sysctl.conf.bak

Now add this line to the end of the /etc/sysctl.conf file:

net.ipv4.ip_nonlocal_bind = 1

Now execute sysctl to apply the new parameter:

# sysctl -p
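You can verify that the setting took effect by querying it directly:

# sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1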

Extra iptables configuration is required for Keepalived; in particular, we must allow the multicast traffic that VRRP uses. If this doesn’t work you can always disable the iptables firewall for testing purposes. Of course this is not recommended in production environments!

Allow the multicast packets:

# iptables -I INPUT -d 224.0.0.0/8 -j ACCEPT

You may need to add this rule for the VRRP IP protocol:

# iptables -I INPUT -p 112 -j ACCEPT

In addition, insert a rule corresponding to the traffic that you are load balancing, in my case HTTP:

# iptables -I INPUT -p tcp --dport 80 -j ACCEPT

Finally, save the iptables config so it will be restored after a reboot, then start Keepalived:

# service iptables save
# service keepalived start

Check lb1

Now let’s check to see if Keepalived is listening on the virtual IP address that we specified:

# ip addr sh eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:50:56:87:00:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.11/24 brd 192.168.20.255 scope global eth0
    inet 192.168.20.20/32 scope global eth0
    inet6 fe80::250:56ff:fe87:f/64 scope link
       valid_lft forever preferred_lft forever

Note that the virtual IP address is bound to network interface eth0.

Check lb2

# ip addr sh eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:50:56:87:00:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.12/24 brd 192.168.20.255 scope global eth0
    inet6 fe80::250:56ff:fe87:22/64 scope link
       valid_lft forever preferred_lft forever

Note that the virtual IP address is not bound on the backup HAProxy server lb2. You can test failover by disconnecting lb1 from the network or shutting down the keepalived service on lb1.

Configure HAProxy on lb1/lb2

First make a backup of the HAProxy config file for good measure:

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

Basically you will want to keep the global and defaults configuration sections intact. Remove or comment out the frontend and backend sections below them. The frontend/backend syntax is the more modern way to configure proxying; however, I will take the more traditional approach and specify my options in a single listen section for each virtual IP address that I will proxy.

# nano /etc/haproxy/haproxy.cfg

listen webfarm 192.168.20.20:80
    mode http
    balance source
    cookie JSESSIONID prefix
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server web1 192.168.20.21:80 cookie A check
    server web2 192.168.20.22:80 cookie B check

You can read about the different configuration options available on the HAProxy website. There is a plethora of them! Basically, the “balance source” option hashes each client’s source IP so a client keeps connecting to the same web server unless that server fails. The “cookie JSESSIONID prefix” directive prefixes the server identifier to the application’s session cookie so returning clients are directed to the same server. HAProxy will also check whether a web server is available with “option httpchk” by requesting a file named check.txt from the web server root; you will want to create this file on each web server (see the example below). The “option httpclose” and “option forwardfor” directives are needed so the web server can log the actual source IP address of each client request; otherwise the HAProxy server’s IP will be logged because it is proxying the connection.
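As a quick sketch, assuming the default Apache document root of /var/www/html on web1/web2, the check file can be created like this:

# touch /var/www/html/check.txt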

Enable HAProxy and Keepalived to start automatically on system boot and start HAProxy:

# chkconfig haproxy on
# chkconfig keepalived on
# service haproxy start
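To confirm that HAProxy started and is listening on the virtual IP, something like this should show it bound to port 80 (output will vary):

# netstat -tlnp | grep haproxy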

Configure logging on web1/web2

This part is optional, but some configuration changes are needed on the web servers in order to have proper logging. In this example my web1/web2 servers run Apache on Red Hat/CentOS.

# nano /etc/httpd/conf/httpd.conf

First, since the HAProxy servers check for availability by requesting the check.txt file from the web server, we will want to disable logging of requests for this file. Otherwise every health check will show up in the access log.

Comment out any lines that begin with “CustomLog”. Then, at the end of the file, add these lines, which prevent logging of requests for check.txt:

SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog logs/access_log combined env=!dontlog

Additionally, you will want Apache to log the client IP address forwarded by HAProxy rather than the IP of the HAProxy server. Comment out the existing LogFormat directive (similar to the one commented below) and replace it with the line using the “X-Forwarded-For” header. Please note that the second line may appear wrapped here; it should be a single line below the commented one.

#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

Hopefully that should do it. You can test by shutting down or disconnecting the network on either of your web servers.

For planned maintenance you can delete the check.txt file on the web server you will be working on. With this configuration, new client requests will go to the server that still passes the check, while existing client sessions are maintained on the server whose check file was removed. This lets you wait until client connections have drained completely from that server before performing maintenance, minimizing downtime and preserving existing user sessions.
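For example, assuming the same document root as above, removing the check file on web1 before maintenance would look like this:

# rm /var/www/html/check.txt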

Good luck with HAProxy!

References

http://haproxy.1wt.eu/download/1.2/doc/architecture.txt

http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny


Compiz Fusion and Dell Inspiron 700m with Intel 855GM Video Chipset

June 23, 2011

The Dell Inspiron 700m is a mid-2000s vintage laptop with a Pentium M 1.6GHz CPU and an Intel 855GM video adapter. While antiquated by today’s standards, I have one as a spare and find it still works reasonably well for basic web surfing and word processing. One downside of the hardware’s age is that the Intel graphics adapter does not support many graphics capabilities of modern OSes, such as Aero on Windows Vista/7. For me Windows 7 is not very pretty without Aero.

However, aside from a few minor issues, I have found that the 700m is capable of 3D acceleration and Compiz Fusion under Linux. For many laptops there have been problems between the Intel 855 adapter and the current development of the Intel graphics driver on Linux; as a result, on distros such as Ubuntu 10.04/10.10 the intel driver is not used by default and the fbdev driver is loaded instead. See the Ubuntu wiki for details on these issues and some possible resolutions:

https://wiki.ubuntu.com/X/Bugs/Mavericki8xxStatus

In the case of the Inspiron 700m I have found that these workarounds were not needed with several Linux distributions. In particular, Fedora 14 seems to load the Intel driver, and Compiz can be enabled without any tweaking after a standard GNOME-based desktop install. One caveat is that I have not tested Fedora extensively, so there may still be issues that I am not aware of.

In addition, Ubuntu 10.10 is capable of Compiz desktop effects on the 700m. However, it will not work immediately after the initial GNOME desktop install. Here are the steps I needed to get Compiz working on Ubuntu 10.10:

Run a full system update. At the time of this writing the system is updated to Linux kernel version 2.6.35-28 and the Intel driver (xserver-xorg-video-intel) version is 2:2.12.0-1. When completed, reboot the system.
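If you prefer the terminal to the Update Manager, a full update can be performed with something like:

sudo apt-get update
sudo apt-get dist-upgrade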

Create an xorg.conf file. By default this is not present on Ubuntu. However, it seems to be necessary for Compiz desktop effects to activate successfully even though no modifications to the file are necessary. To make the xorg.conf:

At the GNOME desktop press Ctrl-Alt-F1.

Log in with your standard user account.

Shut down the display manager, entering your user password again when prompted:

sudo /etc/init.d/gdm stop

Run X with the -configure switch to create the xorg.conf file:

sudo X -configure

Now copy the newly generated file to the X11 directory:

sudo cp xorg.conf.new /etc/X11/xorg.conf

Now reboot the system:

sudo reboot

After the reboot, log back in, go to System > Preferences > Appearance, select the Visual Effects tab, and choose Normal or Extra. Hopefully your Compiz effects will activate.

One issue I have seen is that occasionally there are some small artifacts on the window borders of inactive windows inside GNOME. I have found that under the XFCE desktop environment these visual defects are not present. I have not tried using Emerald as the window decorator, so perhaps that could solve the issue as well.

At the moment I have not found any way to get Compiz to work with the Intel 855GM under Ubuntu 11.04. It is my understanding that the version of Compiz in Ubuntu 11.04 requires OpenGL 1.4 and the Intel 855GM is only capable of OpenGL 1.3. I have tried downgrading the Compiz version on Ubuntu 11.04, but I was still not able to get compositing to work correctly; all kinds of visual artifacts would appear. Please note that downgrading will also disable the Unity interface. If you’d like to give the downgrade a try, please see this guide:

http://www.webupd8.org/2011/05/how-to-downgrade-to-compiz-086-in.html

Good luck with your Compiz adventure!


Configure OpenSSH Public Key Encryption with Keychain for Passwordless SSH Logins

April 19, 2011

Public key encryption is a powerful tool that you can use with SSH to log in to remote hosts without entering a password every time. It can be much more secure than simple password authentication. It is also ideal for unattended scripting and automation, where a password cannot be entered interactively.

In this example I will authenticate between two SSH systems, aptly named “client1” and “server1”. Server1 is the host that we want to log in to without a password. I am using an RPM-based Red Hat Enterprise/CentOS/Scientific Linux system, but the process should be similar for most other Linux distributions and for Windows tools such as Cygwin.

First, on client1 run the ssh-keygen utility to generate a public/private key pair. On my system it defaults to an RSA key with a 2048-bit length. You will also want to enter a passphrase to protect your private key. It is a good idea to make the passphrase complex, with numbers and capital letters included. Avoid the temptation to leave the passphrase blank, because later we will set up keychain and the ssh-agent utility to avoid entering the passphrase every time we log in.

[aaron@client1 ~]$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/aaron/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/aaron/.ssh/id_rsa.
Your public key has been saved in /home/aaron/.ssh/id_rsa.pub.

Now you will want to copy the user’s public key to the server we want to authenticate to. One thing to note is that with the minimal server install of Red Hat Enterprise/Scientific Linux 6, the openssh-clients package does not get installed. This will prevent us from using secure copy (SCP) to copy the public key to the server. Since I will use SCP in the future, I will make sure the clients package is installed on the Linux SSH servers. You can run something like this from your client if it is on Red Hat/Scientific, or just do it locally on the server.

[aaron@client1 ~]$ ssh root@server1 "yum -y install openssh-clients"

Now we will copy the public key over:

[aaron@client1 ~]$ scp ~/.ssh/id_rsa.pub aaron@server1:

The authenticity of host 'server1 (192.168.1.101)' can't be established.
RSA key fingerprint is 5c:83...
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server1,192.168.1.101' (RSA) to the list of known hosts.
aaron@server1's password:
id_rsa.pub                                    100%  392     0.4KB/s   00:00

Now we need to log in to the remote server and create the “~/.ssh” directory, where we will place the public key in a special file that allows us to authenticate with it. Unfortunately I wasn’t able to remotely chain commands together to create this directory automatically, because SELinux denies this and the OpenSSH client software itself must create the directory. So we have to log in to the server and then run the ssh command to set it up the first time.

[aaron@client1 ~]$ ssh server1

aaron@server1's password:
Last login: Mon Mar 21 17:11:09 2011 from client1

Now run “ssh” and attempt to connect to any host to set up the ~/.ssh directory. We can simply answer “no” when asked whether to continue; we don’t need to actually connect to a host for the directory to be created.

[aaron@server1 ~]$ ssh localhost

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 10:01...
Are you sure you want to continue connecting (yes/no)? no

Now copy the user’s public key into the authorized_keys file under “~/.ssh”; we also need to set the permissions on the file so that only this user can read and write it:

[aaron@server1 ~]$ cat id_rsa.pub >> ~/.ssh/authorized_keys
[aaron@server1 ~]$ chmod 600 ~/.ssh/authorized_keys

And we’ll now exit to go back to the client:

[aaron@server1 ~]$ exit
logout
Connection to server1 closed.

Finally, let’s test to make sure the server uses the user’s public/private key pair for authentication rather than a password:

[aaron@client1 ~]$ ssh server1

Enter passphrase for key '/home/aaron/.ssh/id_rsa':
Last login: Tue Mar 22 14:40:52 2011 from client1

The prompt asked for the passphrase so it should now be using the public/private key pair.  Now exit again back to the client:

[aaron@server1 ~]$ exit

Keychain and SSH-Agent Configuration

Now we will use keychain along with ssh-agent to cache our decrypted private key in memory so that we won’t have to enter the passphrase each time we log in to the server with ssh. While there is a minor security risk in doing this, it is much more secure than using a private key with no passphrase.

First, ensure that the keychain package is installed; it is available for most Linux environments (including Cygwin). For Red Hat/CentOS/Scientific Linux it is available from the RPMForge repository. I am on Scientific Linux 6/32-bit:

[aaron@client1 ~]$ su -c 'rpm -ivh http://apt.sw.be/redhat/el6/en/i386/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.i686.rpm'

And install the keychain package:

[aaron@client1 ~]$ su -c 'yum -y install keychain'

Now test that keychain can launch ssh-agent, decrypt the private key, and load it into memory:

[aaron@client1 ~]$ /usr/bin/keychain ~/.ssh/id_rsa

* keychain 2.7.0 ~ http://www.funtoo.org
* Starting ssh-agent...
* Starting gpg-agent...
* Adding 1 ssh key(s): /home/aaron/.ssh/id_rsa
Enter passphrase for /home/aaron/.ssh/id_rsa:
* ssh-add: Identities added: /home/aaron/.ssh/id_rsa

Now we need to tell the current shell session to use the ssh-agent that keychain started:

[aaron@client1 ~]$ source ~/.keychain/${HOSTNAME}-sh > /dev/null

That should do it. You will now be able to authenticate without entering the passphrase every time:

[aaron@client1 ~]$ ssh server1
Last login: Tue Mar 22 14:45:56 2011 from client1
[aaron@server1 ~]$

Now edit the “.bash_profile” file and append the commands we ran manually above so they run whenever a login session starts. This will prompt you to decrypt the RSA private key with the passphrase at the next login if ssh-agent doesn’t already have the key decrypted in memory:

$ vi ~/.bash_profile

# Start Keychain
/usr/bin/keychain ~/.ssh/id_rsa
source ~/.keychain/${HOSTNAME}-sh > /dev/null

I have noticed that on Linux the ssh-agent will keep running with the private key decrypted until the client is rebooted. If the client is on Windows in an environment like Cygwin, the ssh-agent will stay running until the user fully logs out of their desktop session.

Reference

http://www.ibm.com/developerworks/library/l-keyc.html


Installing Windows Remote Management (WinRM) and PowerShell 2.0 on Windows Server 2003 / XP

March 31, 2011

Windows Remote Management (WinRM) and PowerShell 2.0 are two very versatile tools that can greatly increase the manageability of your Windows hosts. Unfortunately it has been somewhat difficult for me to locate the most up-to-date versions of this software. Basically, the package that installs PowerShell 2.0 also includes the WinRM 2.0 release. Also available at the link below are WinRM/PowerShell 2.0 releases for Windows Vista and Server 2008 R1.

There is a prerequisite that the computer be running Microsoft .NET Framework 2.0 SP1. I have included a link below to .NET 2.0 SP2:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=5b2c0358-915b-4eb5-9b1d-10e506da9d0f&displaylang=en

Now you can install the WinRM 2.0/PowerShell 2.0 Windows Management Framework package from here:

http://support.microsoft.com/kb/968930
