All posts by J

Software. Hardware. Complete.

So recently I was browsing Walmart.com's Electronics section and was amazed at the selection they have.

You want to buy a computer? They’ve got it.

You want an operating system for that computer? They’ve got it.

You want to buy a network switch and cables to link multiple computers together? They’ve got it.

You want to buy 4TB of NAS storage? They’ve got it.

You can get them all from one vendor. The switches say they're certified with the OS. The computer says it's certified with the OS. Your storage is certified with your OS.

You can even install Oracle database on the hardware and be fully supported by Oracle (though not certified by Oracle, because Oracle doesn't certify 3rd party hardware).

Have you ever bought a wireless Microsoft keyboard and mouse that didn’t work right with your Microsoft Windows OS running on a PC with a sticker on it that said “Designed for Windows” ? It’s all from one vendor. Just one throat to choke, right?

So why isn’t most of your data center running off of what’s at Walmart?

Because those products might not be leaders in their category.

Because the technical support backing those products might be crappy.

Because the software might not be enterprise ready.

Just because you can buy everything from one company doesn’t mean you should.

Webinars of interest to Oracle Apps DBAs on RedHat Linux with VMware

So recently I've been getting notifications about a number of interesting webinars. Since they're coming from all sorts of random sources, I'm sharing the links as a service to my readers.

I'm receiving no consideration from the companies mentioned; these are simply things I thought would be of interest to me, and perhaps to my readers.

Upcoming (all dates / times are based on Central US time zone)

Get The Facts: Oracle’s Unbreakable Enterprise Kernel for Linux Oct 26th 11am, put on by Oracle

EBS Workflow Purging – Best Practices Nov 10th 11am, put on by Oracle

How to use My Oracle Support for ATG issues Oct 27th 11am, put on by Oracle

E-Business Suite using a DB-Tier with RAC Oct 28th 11am, put on by Oracle

Top 10 Virtual Storage Mistakes Oct 28th 1pm, put on by Quest Software

On-Demand (already happened)

Oracle Virtualization Webcast put on by Embarcadero and VMware

RHCE Virtual Loopback: Unlocking the value of the Cloud put on by RedHat (pdf)

RHCE Virtual Loopback: Performance and Scalability RHEL5 -> RHEL6 put on by RedHat (pdf)

Managing Red Hat Enterprise Linux in an Increasingly Virtual World put on by RedHat

State of “Btrfs” File System for Linux put on by Oracle

Lower Your Storage Costs with Oracle Database 11g and Compression put on by Oracle

Linux Configuration and Diagnostic Tips & Tricks put on by Oracle

Feel free to share any other webinars you think may be of use in the comments!

Oracle VM Security: Sometimes you need hip waders

Have you ever read something and thought, “What a load of crap. I had better get my hip waders out”?

Well, as a cynical jaded DBA, I have that experience regularly.

Take this Oracle.com blog post on Oracle VM, where Rene Kundersma, a Technical Architect with Oracle, explains Oracle’s reasons for NOT shipping Oracle VM with a “fancy Gnome X-Window” environment:

“Oracle has it reasons to NOT ship Oracle VM with all the bells and whistles of a fancy Gnome X-Window environment. This has to do with vulnerabilities, not tested situations of software combination’s and whatever reason that makes Oracle VM not to behave as tested and intended.”

Vulnerabilities as the reason for Oracle VM not having a “fancy X-Window environment”. Vulnerabilities… really? But isn’t Oracle VM running on a special version of Oracle Unbreakable Linux (hint: yes – they’re both based off of RedHat Enterprise Linux)?

Want to get to the console of a VM running under Oracle VM? It uses VNC. Sure, you need to know the password to connect to the VNC Desktop, but guess what, the VNC traffic isn’t encrypted. The password is sent in cleartext.

Unbreakable indeed.

I find this all the more contradictory when one of Oracle’s talking points for why to use Oracle VM is Secure Live Migration which SSL encrypts the live migration (aka vMotion) traffic. My favorite line: “No need to purchase special hardware or deploy special secure networks. “

No need to deploy special secure networks! VLANs? Who needs them? We’ve got encrypted live migration!

Oh wait, in Oracle’s own Oracle Real Application Clusters in Oracle VM Environments guide, there’s this tidbit

“While Secure Live Migration is enabled by default, it should be considered that a secure connection to the remote guest (using –ssl) adds overhead to the Live Migration operation. It is therefore recommended to avoid such secure connections, if permitted. In general, a secure connection can be avoided, if the Live Migration network is inherently secure. “

Seriously Oracle, which is it?

But let’s get back to the main point Rene was trying to get across – that Oracle VM doesn’t come with a GUI to reduce vulnerabilities. Oracle’s October 2010 CPU (Critical Patch Update) was released on October 12th, 2010, and for the current version of Oracle VM (2.2.1) it lists 4 vulnerabilities, 3 of which have a base score of 9.0 (the scale runs from 0.0 to 10.0, with 10.0 representing the highest severity). All 3 of those 9.0-severity vulnerabilities have low access complexity (they’re easy to exploit) and result in complete access.

Oracle, thank you for not including a “fancy Gnome X-Window” with Oracle VM so as to reduce vulnerabilities. Given how insecure your product appears without a GUI, I shudder to think what things would be like with a GUI.

An alternative to scrambling data: Restricting access with Virtual Private Database (VPD)

Back in June, I wrote a blog post on scrambling HR data in our EBS instances. Although effective, it was a bit of a kludge – it involved an Excel spreadsheet, and giving everyone the same salary and banking info.

As we rolled this solution out to our development and test environments, we ran into issues where the scrambled salary data would totally screw up the benefits data, since benefits are calculated as a percentage of salary. The solution was effective at keeping the data secure, but it wasn’t optimal. After some investigation, we turned to Oracle VPD – Virtual Private Database – functionality. With this we are able to restrict access to certain columns (such as salary or national identifiers) to all but the necessary users. With an EBS database, where every connection connects as APPS, this poses special considerations.

I’ll cover the technical details of implementing VPD in an EBS environment. Then I’ll talk about the changes you need to make to keep things functional for your business analysts and yet keep the data secure.

First, it's necessary to create a policy function. Ours is very generic, basically just returning a constant predicate ('1=2'):

CREATE OR REPLACE FUNCTION "APPS"."LUM_HIDE_HR_COLS" (schema in varchar2, tab in varchar2)
return varchar2
as
  predicate varchar2(8) default '1=2';
begin
  return predicate;
end;
/

Next we add a policy on the column we want to restrict access to, pointing it at the policy function we just created:

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ALL_PEOPLE_F',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'NATIONAL_IDENTIFIER',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

In this case we’re ADDing a policy and ENABLing it, applying the LUM_HIDE_HR_COLS function on the NATIONAL_IDENTIFIER column of table PER_ALL_PEOPLE_F in the HR schema, preventing users from SELECTing data and stating that this is for ALL_ROWS.

Once we issued that, all users (besides SYS and SYSTEM) get NULLs when they select NATIONAL_IDENTIFIER from that table. That took care of our Social Security number concern.
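A quick sanity check is easy from SQL*Plus (just a sketch; DBA_POLICIES is the standard dictionary view for row-level security policies):

select object_owner, object_name, policy_name
from dba_policies
where policy_name = 'LUM_HIDE_HR_COLS';

-- spot check: a non-exempt user should see only NULLs here
select national_identifier from hr.per_all_people_f where rownum <= 5;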

We also set up additional policies on other data:

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_PAY_PROPOSALS',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'PROPOSED_SALARY_N',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to PROPOSED_SALARY_N column of HR.PER_PAY_PROPOSALS. That took care of our salary data concern.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PAY_EXTERNAL_ACCOUNTS',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'SEGMENT3,SEGMENT4',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the SEGMENT3 and SEGMENT4 columns of HR.PAY_EXTERNAL_ACCOUNTS. That took care of our banking data concern.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ADDRESSES',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'ADDRESS_LINE1,ADDRESS_LINE2',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/
The above policy restricts access to ADDRESS_LINE1 and ADDRESS_LINE2 columns of HR.PER_ADDRESSES. That took care of our concern of employees looking up addresses of other employees.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ALL_ASSIGNMENTS_F',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'GRADE_ID',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the GRADE_ID column of HR.PER_ALL_ASSIGNMENTS_F. That took care of our concern about employees looking up the pay grade of other employees.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PAY_ELEMENT_ENTRY_VALUES_F',
    policy_name => 'LUM_HIDE_HR_COLS', function_schema => 'APPS',
    policy_function => 'LUM_HIDE_HR_COLS', statement_types => 'select',
    update_check => FALSE, enable => TRUE, static_policy => FALSE,
    policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'SCREEN_ENTRY_VALUE',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the SCREEN_ENTRY_VALUE column of HR.PAY_ELEMENT_ENTRY_VALUES_F. That took care of our concern about salary being derived from insurance coverage amounts.

That's all there is to it. You can just issue the code above in an Apps 11i instance, and at that point no user besides SYS and SYSTEM can see that data.

Now that you've handled the technical details, there's the issue of cloning the instances and being able to test HR functionality in development environments while still restricting the data. Here's what we do.

In PROD, in addition to the function and policies listed above, we create a read-only user called APPSHR:

create user APPSHR identified by XXXXXXXXXXXXX;
GRANT ALTER SESSION TO "APPSHR";
GRANT CREATE DATABASE LINK TO "APPSHR";
GRANT CREATE PUBLIC SYNONYM TO "APPSHR";
GRANT CREATE SESSION TO "APPSHR";
GRANT CREATE SYNONYM TO "APPSHR";
GRANT EXECUTE ANY PROCEDURE TO "APPSHR";
GRANT SELECT ANY DICTIONARY TO "APPSHR";
GRANT SELECT ANY SEQUENCE TO "APPSHR";
GRANT SELECT ANY TABLE TO "APPSHR";
GRANT UNLIMITED TABLESPACE TO "APPSHR";
GRANT "RESOURCE" TO "APPSHR";
ALTER USER "APPSHR" DEFAULT ROLE ALL;
That APPSHR user now has the ability to select any data in the system, but it's read-only (no updating).

We then exempt the APPS and APPSHR users, in PROD ONLY, from the policies we created:

GRANT EXEMPT ACCESS POLICY to APPS;
GRANT EXEMPT ACCESS POLICY to APPSHR;

At this point only users who connect at the database level as APPS (that would be all forms based users) and APPSHR (our HR analysts) can see the restricted data. The APPS password in our PROD environment is known only to the DBAs. The APPSHR password is known only to the HR Business Analysts / HR Developers / DBAs. All other business analysts have access to another read-only account called APPSXXXX that is NOT exempt from the security policies. With that APPSXXXX account, the regular business analysts and developers can query the database directly for all but the restricted data and can access any data via the forms that their forms responsibilities allow.
When we clone an instance, we don’t have to do any scrambling. For all of our DEV, TEST and PSUP (Production Support) instances we merely have to change the APPS password to a commonly known password and issue

REVOKE EXEMPT ACCESS POLICY FROM APPS;

Now everyone can read and write data as APPS, but because APPS is now restricted by the policies we put in place, APPS can't see the sensitive data. The only time this causes a problem is for the HR Business Analysts or HR Developers who need a non-PROD place to work on issues or develop code. For them we created a special HR cloned instance with the same security setup as PROD, but with the APPS password known to the HR Analysts and HR Developers.
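If you ever need to confirm which accounts are still exempt from the policies in a given instance, the grant shows up in the standard DBA_SYS_PRIVS view (a quick sketch):

select grantee
from dba_sys_privs
where privilege = 'EXEMPT ACCESS POLICY';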

This solution has worked out much better for us than the scrambling. Give it a shot and let me know how it goes for you in the feedback!

It’s been too long

Wow, it's really been over a month since I posted last. Since then I've attended VMware VMworld, gone on a cycling and camping vacation for a week, attended Oracle Openworld, put in a week at work, and dealt with the death of a distant relative.

It’s been a bit busy.

During all that however, I’ve just had tons of blog post ideas floating around in my head that need to get out.

First, a housekeeping point. As Paul wrote in the comments of my previous post on Advanced Compression, using the compression and de-duplication aspects of Oracle SecureFiles DOES require licensing advanced compression. This is putting a big crimp in my plans for an end of year conversion to SecureFiles that was going to reduce our space usage, but you’ve got to stay legal.

On to other matters. VMware VMworld was awesome. It was my third VMworld, and the best I've attended. Major changes took place with scheduling sessions (now first come, first served) and the labs (all now on demand, and there were 38 of them comprising about 50 hours of training). It was extremely impressive just to see the 480 on-demand lab stations, and performance overall was good to excellent. I attended one session on the architecture of the lab systems and I was blown away. Super super impressive.

Oracle Openworld was huge this year – 41,000 people. It's funny just how different the two conferences feel. VMworld feels to me much more about the underlying technology: how to do cool things with the products, how to make things work faster or better… Openworld… well, it just has that big business feel to it. The sessions, although interesting, generally weren't covering much that was new, at least with regard to E-Business Suite.

More on these topics later, I need to get back to providing good technical data.

Oracle Advanced Compression Advisor

My main Oracle Applications Database has been growing steadily and is now around 270GB. In terms of databases this isn’t huge, but when you keep multiple development and test copies around on enterprise class storage, AND replicate it to your DR site, that can get expensive.

With Oracle 11g database, Oracle came out with two products to help manage space (and improve performance!) in your Oracle database – Oracle Advanced Compression and Oracle SecureFiles. Although both are for reducing disk usage, they are aimed at different areas of disk usage within an Oracle database.

SecureFiles is the next generation of storage for Oracle LOBs (Large OBjects). Oracle LOBs stored in the format that preceded SecureFiles are said to be stored as BasicFiles. SecureFiles is aimed at attachments to the database – CLOBs (Character LOBs), BLOBs (Binary LOBs), and NCLOBs (multi-byte character LOBs). SecureFiles offers a number of benefits over BasicFiles. Two are relevant to reducing space usage – de-duplication and compression. SecureFiles is a free feature of Enterprise Edition and has no additional licensing costs. As a result, it's the sort of low hanging fruit that should be of interest to any Oracle DBA out there – free improved performance and free reduced disk storage. What's not to like? Because this feature is free, we're actively testing with this in our environments and plan on rolling this out by end of year. I'll post a much longer blog post with our space savings and details of converting data from BasicFiles to SecureFiles later.
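For illustration, here's roughly what defining a LOB as a SecureFile with de-duplication and compression looks like at creation time (a sketch only; the table and column names are made up):

CREATE TABLE demo_attachments (
  doc_id   NUMBER,
  doc_name VARCHAR2(240),
  doc_body BLOB
)
LOB (doc_body) STORE AS SECUREFILE (
  DEDUPLICATE
  COMPRESS MEDIUM
);

Existing BasicFiles LOBs have to be physically moved to pick up the new format; I'll cover the conversion mechanics in that longer post.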

Advanced Compression is aimed at table data – compressing the data stored in your tables. This not only saves space on the file system, but can actually improve performance by reducing the amount of data that needs to be read from disk (reading data from disk is frequently the bottleneck with Oracle databases – which is why Oracle is so memory hungry and tries to cache much of the data it needs in the System Global Area (SGA)). Advanced Compression is an add-on feature to Enterprise Edition at a cost of $11,500 per x86 license (and remember it takes TWO x86 CORES to equal one x86 LICENSE) – and like everything Oracle, that is based on how many cores are in the box, not on what your database cpu_count is set to or what your VM (if you virtualize your Oracle database) utilizes.
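For a sense of what enabling it looks like, here's a rough sketch against a made-up table (the MOVE rewrites the existing rows into compressed blocks and leaves indexes unusable until they're rebuilt):

-- new inserts and updates get compressed from here on
alter table demo_order_lines compress for oltp;

-- rewrite the existing rows into compressed blocks
alter table demo_order_lines move compress for oltp;

-- indexes are marked unusable by the move and need a rebuild
alter index demo_order_lines_n1 rebuild;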

With Oracle Enterprise Manager (OEM) 11g, one of the new features is a Compression Advisor. You can read about other reasons to upgrade to OEM 11g at this blog post on OEM 11g new features. When run against an Oracle 11gR2 database, this advisor will analyze your database objects, estimate the compression ratio you’ll achieve and even make recommendations on the best compression settings for your environment. Although my largest database is 11gR2, I have a variety of other database versions on those same physical hosts (gotta love virtualization!) that aren’t 11gR2 and hence don’t have the DBMS_COMPRESSION package.

Luckily, I stumbled across a standalone version on Oracle Technology Network. This standalone version will work with Oracle 9iR2 (9.2.0.X) through 11gR1 (11.1.0.X) and can give you the data you need to convince business areas to upgrade to 11g database.

One thing to be aware of with this script: it will create a temporary table of the compressed data, so you may wish to create a tablespace specifically for storing that table and make it the default tablespace of the user executing the script. The temporary table gets dropped at the end.

Note: the example on the Oracle Technology Network link above is incorrect. It uses the DBMS_COMPRESSION package, which ships with the 11gR2 database and is NOT provided by this download. So on an 11gR2 database, use the DBMS_COMPRESSION package; on a 9iR2 through 11gR1 database, use the DBMS_COMP_ADVISOR package, as in my example below.
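For reference, on an 11gR2 database the equivalent estimate comes from DBMS_COMPRESSION.GET_COMPRESSION_RATIO. This is a minimal sketch, not a drop-in script – the scratch tablespace name is a placeholder you'd substitute, and the package copies sampled data into that tablespace while it works:

set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'SCRATCH_TBS',   -- placeholder scratch tablespace
    ownname        => 'INV',
    tabname        => 'MTL_TRANSACTION_ACCOUNTS',
    partname       => NULL,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated compression ratio: ' || round(l_cmp_ratio, 2));
end;
/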

Here's the output from running the DBMS_COMP_ADVISOR version against a 9.2.0.8 database, with a table OM_DATA in a schema called OO_MAIL. The table has 4.5 million rows and is 9.5GB in size. (The product that uses this database requires Oracle 9iR2, for those wondering.)

SQL> exec DBMS_COMP_ADVISOR.getratio('OO_MAIL','OM_DATA','OLTP',25);

Sampling table: OO_MAIL.OM_DATA

Sampling percentage: 25%

Compression Type: OLTP

Estimated Compression Ratio: 1.62

PL/SQL procedure successfully completed.


I also ran this against my largest table in my Oracle Applications (11gR2) instance (INV.MTL_TRANSACTION_ACCOUNTS) – a 2.5GB table with 14 million rows:


Sampling table: INV.MTL_TRANSACTION_ACCOUNTS

Sampling percentage: 25%
Compression Type: OLTP
Estimated Compression Ratio: 2.57

So that works out to about 3.64GB saved on the 9i database (9.5GB / 1.62 leaves roughly 5.86GB) and about 1.5GB saved in my 11gR2 database (2.5GB / 2.57 leaves roughly 0.97GB) – a total of about 5GB. Every database (and the data it contains) is different, so run the numbers against your own database to decide if Advanced Compression is worth it in your environment… and check out SecureFiles. It's free.

Oracle internal cloud session updates from VMworld Day 1

This week I’m at VMware VMworld in San Francisco. Yesterday was day one of the event and the Oracle related highlight for me was session

EA7061 Creating an Internal Oracle Database Cloud Using vSphere by Jeff Browning of EMC.

I’ve been to Jeff’s sessions before and always found them entertaining and informative. Below are some of my thoughts from what was covered at the session.

The most striking graphic was an X-Y graph where the X axis was scalability and the Y axis was availability. At the high end of both was Oracle RAC. At the low end of both were MS Access and MySQL. In the sweet spot was Oracle Standard Edition coupled with VMware vSphere HA clusters.

What does this say to the DBAs? What many of us already knew – not every workload is appropriate for being virtualized under VMware. If your system or the business it’s supporting cannot survive the downtime you’d have in the event of a host failure and subsequent HA restart, you should spend the $$ for Oracle RAC. However, Jeff pointed out that in his experience roughly 90% of systems can survive the downtime associated with a HA event – that’s 90% of the databases out there being good candidates for virtualizing Oracle under VMware vSphere.

One of Jeff's great examples of why to virtualize was to reduce database sprawl. He cited a Fortune 100 company with 824 physically booted Oracle databases that pays $228 million a year to support those machines.

To reduce this sprawl, you’ve got two approaches – according to Jeff, Oracle’s preferred way is to use RAC and come up with one global instance where you can put all your various products. Unfortunately that just doesn’t strike me as realistic in any sort of large company. I run primarily Oracle’s own products and even they can’t run on the same database version in many cases. Oracle E-Business requires Oracle 10g or Oracle 11gR2. Yet Oracle Email Center requires an Oracle 9i database (which needs RedHat 4). A global RAC instance just doesn’t make sense.

The other approach is to virtualize the machines – now I've got a RedHat 4 32-bit machine running an Oracle 9i database on the same hardware as a RedHat 5 64-bit machine running an 11gR2 database. There are significant cost savings to be gained with this approach, both in Oracle licensing and in reducing the amount of hardware.

One thing I hadn't really thought about that Jeff brought up with regards to VMware vSphere and Oracle is that the time to vMotion your Oracle database can be longer than with other types of virtual machines – sometimes as long as twenty minutes. The reason for this has to do with how vMotion works – it basically copies the contents of RAM for that VM to another server and then copies over the memory blocks that have changed since the first copy, over and over, until the delta is very small. Oracle heavily uses memory for its SGA (System Global Area), so for heavy-transaction OLTP systems, vMotions can take longer than expected.

The final thing I want to share from Jeff's presentation was the relative performance of different protocols and file systems with regard to Oracle and VMware. On the NAS (NFS) storage side, Jeff assigned a value of 95% efficiency when accessing database datafiles via Oracle's Direct NFS (DNFS) offering. Compare this to 65% efficiency running VMDKs over traditional NFS. That's a huge performance difference. As a result, Jeff recommends using traditional NFS only for a boot / OS disk and definitely not for your database files. On the SAN side, Jeff noted the best performance (100% relative efficiency) comes from using Oracle's Automatic Storage Management (ASM) with VMware Raw Device Mapping (RDM) containers. Compare this with 98% efficiency for ASM using VMware Virtual Machine Disk Format (VMDK) containers. This is another example of how Oracle DBAs need to communicate with the VMware administrators when planning out their environment. Many times DBAs don't even realize they're running in a virtual environment, and you can't expect a VMware admin to know about the performance benefits of Oracle DNFS or ASM.

Overall it was a great session and I’m definitely looking forward to applying what I learned to my environments when I get back home.

Too big to fail: Virtualizing Big Iron databases

Recently I was talking with a company in Houston, TX running Oracle E-Business Suite (EBS) 11i on Solaris with an Oracle 9i database. They run VMware for other parts of their business and wanted to leverage the features of VMware vSphere and Site Recovery Manager (SRM) to virtualize their EBS environment and gain the ability to quickly move it to their geographically diverse DR site in the event of a hurricane or other natural disaster bearing down on Houston.

This call had lots of moving parts. They were on an Oracle 9i database and wanted to move to Oracle 11g to ease their support costs. They wanted to move from Solaris hardware to commodity x86 hardware and RedHat Linux, also to ease their support costs. Their existing Oracle 9i database was running on 8 SPARC processors and using them at peak levels throughout the business day.

In VMware vSphere, the maximum number of processors you can make visible to a VM is 8 virtual x86 processors. Is a virtualized x86 processor as fast as a physical SPARC processor? Would their SQL run faster on Oracle 11g than it did on Oracle 9i? Is RedHat Linux going to allow the database to process requests as fast as Solaris? Will their SAN storage and LUN layout be fast enough? Will their file system be a limiter?

Besides building up the environment and just going for it in production, how can you know?

By leveraging some very cool tools from both Oracle and VMware.

For the Oracle database, Oracle offers an add-on called Real Application Testing, which has a feature called Database Replay. Database Replay allows you to capture the workload on your production database server and replay it in another environment to see if things are faster or slower. Although this was a new feature in the Oracle 11g database, Oracle made backports available for 9i and 10g databases – exactly for purposes like this (well, maybe not to aid in virtualizing to VMware, but you get my drift).
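On an 11g database, the capture side of Database Replay looks roughly like this (a sketch; the directory path and capture name are invented, and the backported capture mechanism on 9i/10g sources works a bit differently):

-- directory to hold the capture files (path is a placeholder)
create directory replay_capture_dir as '/u01/app/oracle/replay_capture';

-- start capturing the production workload
begin
  dbms_workload_capture.start_capture(
    name     => 'pre_migration_capture',
    dir      => 'REPLAY_CAPTURE_DIR',
    duration => NULL);   -- NULL = capture until finish_capture is called
end;
/

-- ...run a representative business period, then stop the capture
begin
  dbms_workload_capture.finish_capture;
end;
/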

Using Database Replay and Real Application Testing (a licensable option for Oracle Enterprise Edition) allows companies to test SAN changes, hardware changes, database upgrades, OS changes, etc., all with a production load, but without risking actual production issues.

Where does VMware fit into this? The way Real Application Testing and Database Replay work is by capturing all the transactions generated in production, massaging them a little bit, and then playing them back against a clone of production. That clone needs to be at the exact same point in time (or SCN – System Change Number in database speak) as PROD so that the replay runs against an exact replica of the database. Although setting up a clone at an exact SCN isn't hard for an Oracle DBA, it does require time – time to build the test system, time to restore a backup of the database, and time to apply archive logs and roll the database forward to match PROD's SCN.

Even in cases such as this where the production database isn't virtualized, by making the test system virtual we can not only test all these changes, but also leverage VMware snapshot technology to very quickly take our database back to the SCN we want to run Database Replay against, without having to continually restore the database. Using snapshots, you go through that setup effort once, take a snapshot, and then just keep rolling back to your snapshot as many times as necessary to test performance.

Of course, you may find that the 8-processor limit in VMware, or the OS, or the SAN can't handle your production load. Time to give up and stay physical? No. Starting in Oracle 10g, and further refined in Oracle 11g, Oracle has greatly improved the database's ability to help a DBA manage system load and even to tune itself. By leveraging features such as Advanced Compression and SecureFiles (to reduce physical I/O) plus Automatic Optimizer Statistics Collection and the Automatic SQL Tuning Advisor (to tune queries to use less CPU and/or disk resources), you can give your database more room to grow yet still stay on the same (or less!) hardware.
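As one concrete example of those self-tuning pieces, the Automatic SQL Tuning task can be enabled (or just verified) with a couple of statements; this is a sketch, assuming the standard 11g automated maintenance windows:

begin
  dbms_auto_task_admin.enable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);   -- NULL window = every maintenance window
end;
/

-- confirm the automated maintenance tasks and their status
select client_name, status
from dba_autotask_client;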

Kicking the tires on the CloudShare cloud

I'm extremely lucky as an Oracle Apps DBA – I've got literally tons of hardware to experiment and play on to learn.

But not everyone has hardware and software lying around, plus the time and ability to set it up. I may know my way around Oracle and RedHat Linux, but set up Windows with SQL Server? Active Directory? Sharepoint? Not a chance. That's what my (very nice and understanding) Windows Administrator coworkers are the experts in. But I came across a way to create a prebuilt environment with all this and more, for free.

Recently I stumbled across CloudShare and their free Pro offering. This isn’t a sales pitch, I promise.

Right now the service is in public beta, and while it is, it's free to use. All you need to do to sign up is provide your name and an email address.

With the free account you can create an environment of up to 6 VMs (Linux or Windows) at a time based on the available templates:

Xubuntu 8.04 Desktop

Windows XP With Office 2007

Windows XP Professional

Windows XP Pro with Office 2010

Windows Server 2008 Enterprise Edition x64

Windows Server 2008 Enterprise

Windows Server 2003 R2 Enterprise Edition

Windows 7 Pro

Windows 2008 with SQL Server 2008

Windows 2008 with Microsoft CRM Dynamics

Windows 2008 with Active Directory

Windows 2003 with Sharepoint 2007

Windows 2003 R2 with Oracle 11g

Ubuntu 8.04 Desktop

CentOS 5 With RubyOnRails

CentOS 5 with MySQL

CentOS 5 with KDE

CentOS 5

Sure, the machines aren't the most powerful things out there, but that's not the point. You want to go through an Oracle install? Go for it. Want to kick the tires on Windows 7 without buying it? 5 minutes and you're up and running. Want to mess with Sharepoint or Active Directory but avoid all those downloads, installs, and configuration? Poof – done. Want to set up a 6-VM environment and share it with friends over the internet? Done.

It’s actually pretty cool.

There's one feature they have that I especially like: FastUpload. With this you download one of their existing templates (which are good for 90 days), fire it up with VMware Workstation, make whatever changes you want, and then upload your VM back to their servers. The magic of this is that it doesn't re-upload the whole VM (gigabytes and gigabytes of data), just the changed blocks.

How cool is that? Short of having friends in the know, this is the only legal way I know of to download a prebuilt Windows VM (please note you should have your own Microsoft licenses) and be able to check it in after configuring it just so.

So now you've made this super cool environment and want to share it with your friends – you can send them an invitation to use your environment. You can send 10 invitations each month. When your friend opens their invitation, they see the last snapshot of your environment and can make whatever changes they want, without affecting your original. Your friend can decide to take ownership of the environment, and then they become the author of the copy for as long as they want. Think linked clones, in VMware speak.

So while your VM is in CloudShare's cloud, how do you access it? It's got an external address ( XXXXXXXX.env.cloudshare.com ) so you can access it from anywhere on the Internet, and also an internal IP so it can see the other VMs in your environment. Want to RDP to your private Windows XP Pro with Office 2010 box from your iPhone to open a document in your email? Go for it. Their platform supports http/https, ssh, RDP and any fat client that uses public IPs.

I spent a couple hours messing around in my environment this past weekend: I set up an Oracle 11gR1 database on a Windows box, downloaded some evaluation software from Oracle, and set up a little client-server environment involving 3 boxes. I was able to upgrade the CentOS box (it was CentOS 5.4 – latest is 5.5) and get it configured to talk to my Oracle server.

CloudShare has some big partners – notably, for most of my readers, VMware. There's even a quote from Paul Maritz (President and CEO of VMware) on the Partners page: “We see the CloudShare platform as an attractive technology that strengthens, supports and extends our ecosystem.”

Why Oracle VM isn’t enterprise ready

This week, Oracle publicly started really pushing its virtualization products. I attended a seminar on Tuesday covering the road map, and yesterday there was an all-day online virtualization forum.

Oracle's server virtualization is focused mainly on two products – Oracle VM for SPARC and Oracle VM for x86. I'm going to concentrate on Oracle VM for x86, as commodity x86 hardware is the big industry focus and it's where Oracle is really pushing Oracle VM versus VMware.

I'm here to tell you Oracle VM just isn't ready for the enterprise. Sure, there are large reference customers out there, but Oracle VM doesn't have the features I consider necessary to be called enterprise ready. I run VMware vSphere in my enterprise environments, so I'll compare Oracle VM to VMware vSphere, since I believe VMware vSphere is enterprise ready.

Load Balancing – with virtualization you can run many virtual servers on one physical server. Oracle VM's load balancing works by performing automated load balancing at each virtual machine power-on. Basically, that means when you start a VM it gets placed on the least busy (in terms of memory and CPU usage) physical server in your server pool. That's it. VMware's load balancing, called DRS (Distributed Resource Scheduler), not only does this but also checks the load on each host in the cluster every 5 minutes and (if you have it set to fully automated – the VMware best practice) automatically redistributes VMs for the best possible performance.

In my environments, and I suspect almost everyone’s, the workload on the servers changes throughout the day. During the business day, much of the system load is OLTP type loads – users entering data, querying data, placing orders, etc. After the primary business hours, the system load becomes much more batch intensive as things like reports are generated, statistics are gathered, and backups are performed. With Oracle VM, this isn’t taken into account. I could have some Oracle VM servers completely idle while others are overwhelmed. I believe overall system performance to be critical to a product being enterprise ready.

Snapshots – being able to take a snapshot of a VM is, in my view, critical to an enterprise virtualized environment. Oracle VM doesn't do snapshots. Simple as that. When I asked at Tuesday's road map seminar with Oracle if that would be available in the next version (officially due sometime in the next 12 months, though I suspect it might be released in the next month), I was told they couldn't answer yes or no. The fact is, Oracle VM doesn't have snapshots and VMware vSphere does. But what really is the big deal? Why do I want snapshots?

o Patching – enterprise systems frequently have patches and code changes and need a failback plan if something doesn't go right. With Oracle VM I'm out of luck. Sure, I can go back to the last full system backup I took, but we're probably talking hours of downtime if I need to fail back. With VMware vSphere, I take a snapshot of the VM before I start patching (something that takes only a couple of seconds – no exaggeration) and then patch away. If I need to fail back, I just go into the vSphere menu, choose “Revert to current snapshot”, and the VM restarts right back to where it was when I took the snapshot. You even have the option to “snapshot the virtual machine's memory”, meaning if you revert, your system won't be in a state as if it had just rebooted, but will have all the processes running as if the machine never stopped.

o Backups – with Oracle VM, if I want to take a backup of the entire VM, I have to use a software agent running inside the VM. As anyone who has ever dealt with Windows knows, you frequently have trouble backing up open files… you know, like an Oracle database or the OS itself. As that backup runs, something that frequently takes hours, files are changing and you're not getting a completely consistent image of the system. In VMware vSphere, there are many software packages, both from VMware and from third-party vendors, that utilize snapshots to take a consistent image of the system. To me, enterprise ready includes good backups. Maybe I'm too demanding.

o Cloning – with Oracle VM, if you want to clone a VM, you need to power it off. Yes, if I want to clone my production ERP system VMs, Oracle VM requires I turn the VMs off to perform a clone. It's on page 68 of the Oracle VM Manager 2.1.5 Manual. In VMware vSphere, I can clone with the VM up and available to users. In addition, with the latest version of vSphere, much of this work is pushed via the public vStorage APIs onto the SAN itself, thereby reducing and almost eliminating network traffic AND freeing up compute resources on your cluster.

Memory usage – one of the benefits of virtualizing is consolidation – putting many VMs onto one physical server and thereby getting better usage of your resources. Oracle VM offers no memory consolidation technologies to increase your consolidation ratios (how many VMs you can put on a physical server). VMware vSphere offers FOUR technologies to increase your consolidation – Transparent Page Sharing, Ballooning, Memory Compression and Swapping. If I'm going to virtualize to consolidate my infrastructure, why not use the product that allows the best consolidation?

There are many more scenarios where VMware vSphere is a much better and more mature product than Oracle VM, but that's not my point here. My point is that Oracle VM doesn't meet what I consider the bar for an enterprise ready product.