Lessons learned from a virtualized Oracle upgrade

So about a week ago, we did a rather massive upgrade at my main client to the Oracle E-Business infrastructure. The main things in this upgrade were:

- License the modules necessary for a full installation of Oracle HR
- Upgrade the Oracle database from 11.1.0.7 64-bit to 11.2.0.1 64-bit
- Apply all Critical Patch Update (CPU) security patches through April 2010
- Upgrade memory on the DB server from 8GB to 12GB
- Upgrade server-side Java from 1.6.0_16 to 1.6.0_20
- Upgrade client-side Java from 1.6.0_16 to 1.6.0_20b5 (see this link on why the special b5 version)
- Apply approximately 350 (not a typo) individual E-Business patches, covering the following:
  o Minimum Baseline Patch Requirements for Extended Support on Oracle E-Business Suite 11.5.10 (Note 883202.1)
  o Upgrading from Financials Family Pack F to Family Pack G (FIN_PF.G)
  o Recommended 11i Apps patches for all our products
  o Java-related patches
  o Latest DST v11 related patches (see here)
  o Implement WebADI

As you might gather from this list, it was a rather large upgrade. The Apps patches alone totaled about 10GB once merged into a single patch, and the backup directory for the merged patch ended up totaling 6GB. Test runs had the upgrade taking about 24 hours with 8 CPUs on some scratch disk storage I had in the SAN. As I mentioned in previous posts, we took VMware snapshots of our boxes at various points in the upgrade in case we needed to roll back or hit an unforeseen issue.
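
For anyone wondering how 330+ individual patches turn into one patch: merging Apps patches is done with Oracle's AD Merge Patch utility (admrgpch), and the combined patch is then applied in a single adpatch session. The directory names below are made up and the exact switches vary a bit by AD level, so treat this as a rough sketch rather than our exact command:

    # Merge every unzipped patch directory under 'src' into one combined patch in 'merged'
    # (hypothetical paths; check the admrgpch options for your AD patch level)
    cd /u01/patches
    admrgpch -s src -d merged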

One of the VMware best practices we follow is to split each VM’s boot “disk” and data “disk” into their own virtual disks. Aside from booting up or shutting down a VM, the boot disk generally sees very little traffic. So it’s pretty typical, especially with a replicated SAN system such as ours, to put the boot “disks” (VMDKs) for a bunch of VMs on one VMware datastore, possibly on slower drives, and the data “disks” (VMDKs) on another dedicated datastore. In our case, the boot disk datastore is a 2-disk RAID 1 (mirrored) set of Fibre Channel drives, and the data disk datastore is a 9-disk (8+1) RAID 5 datastore of SSDs (aka EFDs, aka super super fast disks).
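
To make that layout concrete, here is roughly what it looks like on disk (the datastore and VM names are made up, not our real ones):

    # Fibre Channel RAID 1 datastore: boot disk plus the VM's configuration file
    /vmfs/volumes/fc_boot_ds/oradb01/oradb01.vmx
    /vmfs/volumes/fc_boot_ds/oradb01/oradb01.vmdk           <- boot disk
    # SSD RAID 5 datastore: the busy data disk
    /vmfs/volumes/ssd_data_ds/oradb01/oradb01_data.vmdk     <- data disk

Note where the .vmx file sits; that detail matters in a minute.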

Although I had done multiple dry runs before the upgrade, one thing I failed to notice / realize is that by default VMware stores a VM’s snapshots where the VM lives, or more specifically, where the VM’s configuration (.vmx) file lives… in this case on my slowest disks.
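
Had I caught this beforehand, one option would have been to point the VM’s working directory (and therefore its snapshot delta files) at the faster datastore via the .vmx file. I haven’t tested this myself, so treat the entry below as an assumption based on VMware’s documentation, reusing the same hypothetical datastore name as above:

    # Hypothetical .vmx entry: create snapshot/redo files on the SSD datastore
    workingDir = "/vmfs/volumes/ssd_data_ds/oradb01"

Note that this reportedly also relocates the VM’s swap file unless you set sched.swap.dir separately, so test it on something unimportant first.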

This became painfully clear during our large merged patch of 330+ Apps patches: things got slower and slower. At that point, shutting down the VM and moving the snapshots wasn’t really an option; it was just a matter of suffering through and learning for next time. Luckily the business had planned on the patching taking the full 24 hours, even though I had expected us to finish in roughly half that time on the SSDs.

By the time the upgrade was done and the business analysts had finished their testing and called the upgrade good (and hence we were ready to delete the 5 sets of snapshots), the snapshots for my two VMs, which occupy about 450GB of space, had grown to about 200GB. It took about 5 hours for the snapshots to be merged back into the base VMDKs. Although the system was usable during that time, it was quite laggy. Luckily it was still the weekend for most of our users and they weren’t too inclined to use Oracle.

On the subject of VMware snapshot deletions, I recently came across two notes that should be of use to other VMware admins:
1) With the latest version of vSphere (4.0 Update 2), VMware has greatly improved the speed and efficiency of deleting all the snapshots for a VM. You can read more about it here. Unfortunately, at the time of my Oracle upgrade I was still on vSphere 4.0 Update 1.
2) When you delete a large snapshot, it will frequently appear to “hang” at 95%; check out this knowledge base article on how to monitor snapshot deletions.
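
In practice, that monitoring mostly boils down to watching the snapshot delta files on the datastore. From a console session on the host (paths hypothetical, matching the earlier sketch), something like:

    # Re-run this every minute or two: as long as the delta VMDKs keep changing
    # (and eventually disappearing), the deletion "stuck" at 95% is still making progress
    cd /vmfs/volumes/fc_boot_ds/oradb01
    ls -lh *-delta.vmdk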

Overall the upgrade was a success, and aside from the occasional user issue, Monday morning (the first business day after the upgrade) was pretty much a non-event.

These are the sorts of situations that make sending your people to training, or giving them the time and inclination to read manuals and blogs, so essential. Not as a result of this, but somewhat related: I’ll be attending the VMware vSphere Troubleshooting class in the next month or two and (assuming I pass the test) will be earning my VCP, and possibly trying for a VCAP-DCA by the end of the year.
