
Software. Hardware. Complete.

So recently I was browsing Walmart.com's Electronics section and was amazed at the selection they have.

You want to buy a computer? They’ve got it.

You want an operating system for that computer? They’ve got it.

You want to buy a network switch and cables to link multiple computers together? They’ve got it.

You want to buy 4TB of NAS storage? They’ve got it.

You can get them all from one vendor. The switches say they're certified with the OS. The computer says it's certified with the OS. Your storage is certified with your OS.

You can even install Oracle database on the hardware and be fully supported by Oracle (though not certified by Oracle, because Oracle doesn't certify 3rd party hardware).

Have you ever bought a wireless Microsoft keyboard and mouse that didn’t work right with your Microsoft Windows OS running on a PC with a sticker on it that said “Designed for Windows” ? It’s all from one vendor. Just one throat to choke, right?

So why isn’t most of your data center running off of what’s at Walmart?

Because those products might not be leaders in their category.

Because the technical support backing those products might be crappy.

Because the software might not be enterprise-ready.

Just because you can buy everything from one company doesn’t mean you should.

Lessons learned from a virtualized Oracle upgrade

So about a week ago, we did a rather massive upgrade at my main client to the Oracle E-Business infrastructure. The main things in this upgrade were:

Licensing modules necessary for us to have a full installation of Oracle HR
Upgrade Oracle database from 11.1.0.7 64-bit to 11.2.0.1 64-bit
Apply all CPU security patches through April 2010
Upgrade memory on DB server from 8G to 12G
Upgrade server side java from 1.6.0_16 to 1.6.0_20
Upgrade client side java from 1.6.0_16 to 1.6.0_20b5 (see this link on why the special b5 version)
Apply approximately 350 (not a typo) individual E-Business patches, for the following things:
- Minimum Baseline Patch Requirements for Extended Support on Oracle E-Business Suite 11.5.10 (Note 883202.1)
- Upgrading from Financials Family Pack F to Family Pack G (FIN_PF.G)
- Recommended 11i Apps patches for all our products
- Java related patches
- Latest DST v11 related patches (see here)
- Implement WebADI

As you might gather from this list, it was a rather large upgrade. The Apps patches alone totaled about 10GB once merged into one patch, and the backup directory for the merged patches ended up totaling 6GB. Test runs had the upgrade running about 24 hours with 8 CPUs on some scratch disk storage I had in the SAN. Like I mentioned in previous posts, we utilized VMware snapshots on our boxes at various points in the upgrade in case we needed to roll back or hit an unforeseen issue.

One of the VMware best practices we follow with our VMs is to break the boot “disk” and the data “disk” for each VM into their own virtual disks. Aside from booting up or shutting down a VM, the boot disk generally sees very little traffic. So it's pretty typical, especially with a replicated SAN system such as ours, to put the boot “disks” (VMDKs) for a bunch of VMs on one VMware datastore, possibly with slower disks, and the data “disks” (VMDKs) on another, dedicated datastore. In our case, the boot disk datastore is a 2-disk RAID 1 (mirrored) set of Fibre Channel drives and the data disk datastore is a 9-disk (8+1) RAID 5 datastore of SSDs (aka EFDs aka super super fast disks).
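For the curious, here's a rough sketch of how you could script that kind of sanity check with pyVmomi: list which datastore each of a VM's virtual disks actually lives on, then take a crash-consistent snapshot before the next upgrade step. Treat it purely as an illustration; the vCenter hostname, credentials, VM name, and snapshot label are placeholders, not anything from our environment.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details; validate certificates properly in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "oraprod-db")  # placeholder VM name

    # Each virtual disk's backing file shows up as "[datastore] path/to/disk.vmdk",
    # which makes it obvious which datastore every VMDK sits on.
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            print(dev.deviceInfo.label, "->", dev.backing.fileName)

    # Crash-consistent snapshot (no memory, no quiescing) before an upgrade step.
    task = vm.CreateSnapshot_Task(name="pre-upgrade",
                                  description="Before EBS patch run",
                                  memory=False, quiesce=False)
    WaitForTask(task)
finally:
    Disconnect(si)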

Although I had run multiple dry runs before the upgrade, one thing I failed to notice / realize is that by default VMware snapshots are stored where the VM lives, or more specifically, where the VM’s configuration file lives… in this case on my slowest disks.

This became extremely clear during our large merged patch of 330+ Apps patches: things got slower and slower. At that point, shutting down the VM and moving the snapshots wasn't really an option. It was just a matter of suffering through and learning for next time. Luckily the business had fully planned on the upgrade taking 24 hours for the patching, even though I expected us to finish in roughly half that time with the SSDs.
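The fix, in hindsight, is to tell the VM to keep its snapshot files somewhere faster. On ESX/ESXi 4.x that's the workingDir setting in the VM's .vmx file (check the VMware KB for the exact behavior on your version, since it also moves the swap file). Here's a minimal sketch of what flipping that setting could look like, with the VM powered off and both paths being placeholders:

# Sketch: point a VM's working directory (where snapshot delta files land)
# at a faster datastore by setting workingDir in the .vmx file.
# Run with the VM powered off; both paths below are placeholders.
vmx_path = "/vmfs/volumes/slow_fc_datastore/oraprod-db/oraprod-db.vmx"
fast_dir = "/vmfs/volumes/ssd_datastore/oraprod-db-snapshots"

with open(vmx_path) as f:
    lines = [line for line in f if not line.startswith("workingDir")]

lines.append('workingDir = "%s"\n' % fast_dir)

with open(vmx_path, "w") as f:
    f.writelines(lines)

After editing the file you'd reload or re-register the VM so the host picks up the change.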

By the time the upgrade was done and the business analysts had finished their testing and called the upgrade good (and hence we were ready to delete the 5 sets of snapshots), the snapshots for my two VMs, which use about 450GB of space, had grown to about 200GB. It took about 5 hours for the snapshots to be merged into the base VMDKs. Although the system was usable during that time, it was quite laggy. Luckily it was still the weekend for most of our users and they weren't too inclined to use Oracle.

On the subject of VMware snapshot deletions, I recently came across two notes that should be of use to other VMware admins:
1) With the latest version of vSphere (4.0 Update 2), VMware has greatly improved the speed and efficiency of deleting all the snapshots for a VM. You can read more about it here. Unfortunately, at the time of my Oracle upgrade I was on vSphere 4.0 Update 1.
2) When you delete a large snapshot, it will frequently appear to “hang” at 95%; check out this knowledge base article on how to monitor snapshot deletions.
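If you'd rather not stare at a task stuck at 95%, another option is to watch the delta files themselves shrink and disappear from the VM's directory on the datastore. A quick-and-dirty sketch along those lines (the directory path is a placeholder, and you'd run it wherever that datastore path is visible):

# Poll a VM's directory and report the total size of snapshot delta files
# while a "delete all snapshots" task runs. The path is a placeholder.
import glob
import os
import time

vm_dir = "/vmfs/volumes/ssd_datastore/oraprod-db"

while True:
    deltas = glob.glob(os.path.join(vm_dir, "*-delta.vmdk"))
    if not deltas:
        print("No delta files left; consolidation looks complete.")
        break
    total_gb = sum(os.path.getsize(f) for f in deltas) / (1024 ** 3)
    print("%d delta file(s), %.1f GB remaining" % (len(deltas), total_gb))
    time.sleep(60)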

Overall the upgrade was a success and, minus the occasional user issue, Monday morning (the first business day after the upgrade) was pretty much a non-event.

These are the sorts of situations that make sending your people to training, or giving them the time and inclination to read manuals and blogs, so essential. Not as a result of this, but somewhat related, I’ll be attending the VMware vSphere troubleshooting class in the next month or two and will be (assuming I pass the test) earning my VCP and possibly trying to earn a VCAP-DCA by end of year.

How virtualization can magnify your architecture problems

I recently started working with a new client whose hosting provider hosts their Oracle database on Linux under VMware. An excellent choice, but this client is experiencing major performance issues: forms taking a minute or more to bring up data is just one example.

As I learned more about their environment I found that virtualization (VMware in this case, though the issue isn’t specific to any particular virtualization vendor) actually made their system performance worse. I know, I’m a VMware groupie (heck a VMware vExpert!) and we’re all amazed I’d write such a thing, but alas, it’s true.

The database is around 80GB in size. Each day this hosting provider would take a full (level 0 incremental) backup of the Oracle database via RMAN. The hosting provider wrote this RMAN backup to the same mount point in the VM that the database uses.

Please take a moment to catch your breath and stop clenching your hands into fists over this very very bad idea.

So why is this such a bad idea? For a couple of reasons.

One is performance – you’re now greatly degrading the performance of your database by writing a full backup to the same disks that are trying to handle database requests. You have, at the least, doubled the amount of I/O going to those disks.

Two is the ability to recover. If your ESX host or your VM experiences an issue (running out of disk space, disk corruption, fire, whatever), you can no longer access the mount point in the VM where you backed up the data.

Best practice for implementing RMAN in a situation like this is to back up your database to another set of disks on another machine in another physical location. A typical example is to have an NFS export on your backup destination server (in another datacenter) and have RMAN write directly to that NFS mount. This way you aren't writing your backup to the same disks (thereby not impacting production performance much) and you're covered in the case of issues with the hardware or VM itself.
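As a rough illustration of that setup (the mount point, format strings, and the assumption of OS authentication are placeholders, not the client's actual configuration), driving a level 0 backup to a remote NFS mount could look something like this:

# Sketch: run an RMAN level 0 backup that writes to an NFS mount served by a
# separate backup server, rather than to the database's own disks.
# Assumes ORACLE_HOME/ORACLE_SID are set and OS authentication is allowed.
import subprocess

rman_script = """
run {
  backup incremental level 0 database
    format '/mnt/backup_nfs/%d_lvl0_%U.bkp';
  backup archivelog all
    format '/mnt/backup_nfs/%d_arch_%U.bkp';
}
exit
"""

subprocess.run(["rman", "target", "/"], input=rman_script, text=True, check=True)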

So where does VMware fit into this? I mentioned that the hosting provider was also performing VM-level backups. In particular, they were performing VM-level backups at the same time they were running RMAN backups. All to the same set of disks.

Now I’ve got the VMware Admins and the Oracle DBAs cringing.

When you initiate a VM level backup, VMware takes a snapshot of the VM. This means it makes a delta file on the same ESX datastore and stops writing to the VMDK(s) that make up the VM. All changes to the VM get written to the delta file instead. That delta file can grow (8 megs at a time) until it's the same size as the original VMDK.

When you are taking a VM level backup, you want to choose a time when you're not doing many writes to the VM. This way the delta file won't grow so big that you risk running out of space on the datastore (LUN), and the performance impact is reduced.

So here they are, writing their full 80GB Oracle backup out to a mount point inside their VM. That's 80GB of writes. VMware sees those writes and has to write them to its snapshot (delta file). So now not only are you serving up database queries on your disks, you're also scanning every block of your database on those disks for changes (this database did not employ Oracle Changed Block Tracking), writing a full RMAN backup to those disks, and VMware is copying all those writes into a delta file on those same disks.
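To put rough numbers on it (back-of-the-envelope only, ignoring normal production traffic, archived logs, and snapshot metadata overhead):

# Back-of-the-envelope I/O hitting the single set of disks because of one
# nightly backup taken while a VM snapshot is open.
db_size_gb = 80

backup_reads_gb  = db_size_gb  # RMAN scans every datafile block (no block change tracking)
backup_writes_gb = db_size_gb  # level 0 backup pieces, redirected into the snapshot delta file
merge_reads_gb   = db_size_gb  # deleting the snapshot later re-reads that delta...
merge_writes_gb  = db_size_gb  # ...and writes the changed blocks back into the base VMDK

total_gb = backup_reads_gb + backup_writes_gb + merge_reads_gb + merge_writes_gb
print("Roughly %d GB of extra I/O on the same disks, per backup cycle." % total_gb)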

Virtualization can be wonderful and solve or simplify many of the issues an administrator faces, but it can also magnify fundamental architecture flaws.