
Oracle listened, customers WIN! RAC supported on VMware

As I was flying home last night and downloading tweets before takeoff, I found out some amazing news. Ugh, not the time to have intermittent internet access! But eventually I got home, did the reading and confirmed the news.

Oracle RAC 11gR2 (11.2.0.2) is now supported by Oracle under VMware.

You can read the updated My Oracle Support (MOS) announcement yourself in note 249212.1 which now states:

NOTE:  Oracle has not certified any of its products on VMware.
For Oracle RAC, Oracle will only accept Service Requests as described in this note on
Oracle RAC 11.2.0.2 and later releases.

(Remember: Certified is different from Supported. Oracle doesn't certify hardware that isn't Oracle's own.)

This is simply fantastic news. I talked to a petroleum company in Houston earlier this year who wanted to virtualize their Oracle EBS system and move platforms from Sun Solaris to x86 architecture. Their big concern was that they were using 8 SPARC processors and they knew that 8 x86 CPUs is the limit for a virtual machine under VMware vSphere 4.1. We discussed various steps they could take to ensure their environment would thrive under this limitation, but now it's a non-issue. In the event they need more computing power, they can implement Oracle RAC under vSphere and start up another RAC instance as necessary.
I do need to point out that as of this moment, the 11.2.0.2 database is not certified or supported with Oracle Applications (Oracle EBS) 11i or R12. These certifications usually come out a few months after the initial database announcement (which was Sept 10th for 11.2.0.2). If you check out the blog of Steven Chan (a Senior Director in Oracle's Applications Technology Group - the group responsible for the Oracle E-Business Suite technology stack) and specifically these comments, you'll see that Steven wrote:

We haven't certified 11.2.0.2 with Oracle E-Business Suite Release
11i yet.  This project is underway now.  11.2.0.1 is the latest
certified database release for the E-Business Suite.
Oracle's Revenue Recognition rules prohibit us from discussing
certification and release dates, but you're welcome to monitor or
subscribe to this blog for updates, which I'll post as soon as
they're available.



So 11.2.0.2 database certification with EBS 11i and R12 is coming. My main client doesn't use RAC (our business can survive the downtime associated with an HA event and we aren't near the 8 CPU limitation of VMware vSphere 4.1), but knowing it's an option can only give upper management even more confidence that virtualizing our entire Oracle environment under VMware vSphere was the right thing to do.


For those wanting more information on Oracle RAC under VMware vSphere, I'd suggest watching this Oracle virtualization webcast put on by Embarcadero and VMware a few weeks ago. I'd also highly recommend following VirtualTodd on Twitter. Todd Muirhead was at Oracle OpenWorld in the VMware booth and presented some very interesting performance data from running RAC under VMware. I can't find a link to the presentation, but you can follow Todd's postings and perhaps find his testing results at his blog on the VMware communities site.

Think of the possibilities of combined Oracle RAC and VMware vSphere:

o  Multiple RAC nodes on different vSphere hosts means no database downtime during a hardware failure.

o  Combining multiple RAC databases on the same vSphere host to consolidate workloads but still segregate environments

o  Much faster provisioning of new RAC nodes with vSphere virtual machine cloning and VMware VAAI (vStorage APIs for Array integration)

o ... and many more I still need to wrap my head around

Webinars of interest to Oracle Apps DBAs on RedHat Linux with VMware

So recently I've been getting notifications about a number of interesting Webinars. Since they're coming from all sorts of random sources, I'm sharing the links as a service to my readers.

I'm receiving no consideration of any sort from the companies mentioned; these are simply things I thought would be of interest to me, and perhaps to my readers.

Upcoming (all dates / times are based on Central US time zone)

Get The Facts: Oracle’s Unbreakable Enterprise Kernel for Linux Oct 26th 11am, put on by Oracle

EBS Workflow Purging – Best Practices Nov 10th 11am, put on by Oracle

How to use My Oracle Support for ATG issues Oct 27th 11am, put on by Oracle

E-Business Suite using a DB-Tier with RAC Oct 28th 11am, put on by Oracle

Top 10 Virtual Storage Mistakes Oct 28th 1pm, put on by Quest Software

On-Demand (already happened)

Oracle Virtualization Webcast put on by Embarcadero and VMware

RHCE Virtual Loopback: Unlocking the value of the Cloud put on by RedHat (pdf)

RHCE Virtual Loopback: Performance and Scalability RHEL5 -> RHEL6 put on by RedHat (pdf)

Managing Red Hat Enterprise Linux in an Increasingly Virtual World put on by RedHat

State of “Btrfs” File System for Linux put on by Oracle

Lower Your Storage Costs with Oracle Database 11g and Compression put on by Oracle

Linux Configuration and Diagnostic Tips & Tricks put on by Oracle

Feel free to share any other Seminars you think may be of use in the comments!

An alternative to scrambling data: Restricting access with Virtual Private Database (VPD)

Back in June, I wrote a blog post on scrambling HR data in our EBS instances. Although effective, it was a bit of a kludge – it involved an Excel spreadsheet, and giving everyone the same salary and banking info.

As we went with this solution in our development and test environments, we ran into some issues where the salary data would totally screw up the benefits data, as benefits are calculated as a percentage of salary. The solution was effective at keeping the data secure, but it wasn't optimal. After some investigation, we turned to Oracle VPD – Virtual Private Database – functionality. With this we are able to restrict access to certain columns (such as salary or national identifiers) to all but necessary users. With an EBS database, where every connection connects as APPS, this poses special considerations.

I’ll cover the technical details of implementing VPD in an EBS environment. Then I’ll talk about the changes you need to make to keep things functional for your business analysts and yet keep the data secure.

First it was necessary to create a policy function. In our case this is very generic, basically just returning the predicate.

CREATE OR REPLACE FUNCTION "APPS"."LUM_HIDE_HR_COLS" (schema in varchar2, tab in varchar2)
return varchar2
as
  predicate varchar2(8) default '1=2';
begin
  return predicate;
end;
/

Next we add a policy on the column we want to restrict access to, pointing it at the policy function we created:

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ALL_PEOPLE_F', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'NATIONAL_IDENTIFIER,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;

/

In this case we’re ADDing a policy and ENABLing it, applying the LUM_HIDE_HR_COLS function on the NATIONAL_IDENTIFIER column of table PER_ALL_PEOPLE_F in the HR schema, preventing users from SELECTing data and stating that this is for ALL_ROWS.

Once we issued that, all users (besides SYS and SYSTEM) get NULLs when they select NATIONAL_IDENTIFIER from that table. That took care of our Social Security Number concern.
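
If you want to confirm the policy is in place (or see what a non-exempt user gets back), a couple of quick checks will do it. This is just a minimal sketch from a DBA-privileged session; adjust the policy name to match yours.

select object_owner, object_name, policy_name
  from dba_policies
 where policy_name = 'LUM_HIDE_HR_COLS';

select national_identifier
  from hr.per_all_people_f
 where rownum <= 5;   -- returns NULL in every row for a non-exempt user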

We also set up additional policies on other data:

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_PAY_PROPOSALS', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'PROPOSED_SALARY_N,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;

/

The above policy restricts access to PROPOSED_SALARY_N column of HR.PER_PAY_PROPOSALS. That took care of our salary data concern.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PAY_EXTERNAL_ACCOUNTS', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'SEGMENT3,SEGMENT4,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the SEGMENT3 and SEGMENT4 columns of HR.PAY_EXTERNAL_ACCOUNTS. That took care of our banking data concern.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ADDRESSES', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'ADDRESS_LINE1,ADDRESS_LINE2,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the ADDRESS_LINE1 and ADDRESS_LINE2 columns of HR.PER_ADDRESSES. That took care of our concern about employees looking up the addresses of other employees.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PER_ALL_ASSIGNMENTS_F', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'GRADE_ID,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the GRADE_ID column of HR.PER_ALL_ASSIGNMENTS_F. That took care of our concern about employees looking up the pay grade of other employees.

begin
  dbms_rls.add_policy(
    object_schema => 'HR', object_name => 'PAY_ELEMENT_ENTRY_VALUES_F', policy_name => 'LUM_HIDE_HR_COLS',
    function_schema => 'APPS', policy_function => 'LUM_HIDE_HR_COLS',
    statement_types => 'select,', update_check => FALSE, enable => TRUE,
    static_policy => FALSE, policy_type => dbms_rls.STATIC, long_predicate => FALSE,
    sec_relevant_cols => 'SCREEN_ENTRY_VALUE,', sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
end;
/

The above policy restricts access to the SCREEN_ENTRY_VALUE column of HR.PAY_ELEMENT_ENTRY_VALUES_F. That took care of our concern about people figuring out salary based on insurance coverage.

That’s all there is to it. You can just issue the code above in an Apps 11i instance and at that point no user besides SYS and SYSTEM can see that data.
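
If you ever need to back one of these policies out (say, in a sandbox where the data should be visible again), DBMS_RLS.DROP_POLICY removes it. A minimal sketch, using the NATIONAL_IDENTIFIER policy above as the example:

begin
  dbms_rls.drop_policy(
    object_schema => 'HR',
    object_name   => 'PER_ALL_PEOPLE_F',
    policy_name   => 'LUM_HIDE_HR_COLS');
end;
/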
Now that you’ve handled the technical details, there’s the issue of cloning the instances and being able to test HR functionality in development environments while still restricting the data. Here’s what we do.

In PROD, in addition to the function and policies listed above, we create a read-only user called APPSHR:

create user APPSHR identified by XXXXXXXXXXXXX;
GRANT ALTER SESSION TO "APPSHR";
GRANT CREATE DATABASE LINK TO "APPSHR";
GRANT CREATE PUBLIC SYNONYM TO "APPSHR";
GRANT CREATE SESSION TO "APPSHR";
GRANT CREATE SYNONYM TO "APPSHR";
GRANT EXECUTE ANY PROCEDURE TO "APPSHR";
GRANT SELECT ANY DICTIONARY TO "APPSHR";
GRANT SELECT ANY SEQUENCE TO "APPSHR";
GRANT SELECT ANY TABLE TO "APPSHR";
GRANT UNLIMITED TABLESPACE TO "APPSHR";
GRANT "RESOURCE" TO "APPSHR";
ALTER USER "APPSHR" DEFAULT ROLE ALL;

That APPSHR user now has the ability to select any data in the system but it’s read only (no updating).
We then exempt the APPS and APPSHR user in PROD ONLY from the policies we created:

GRANT EXEMPT ACCESS POLICY to APPS;
GRANT EXEMPT ACCESS POLICY to APPSHR;

At this point only users who connect at the database level as APPS (that would be all forms based users) and APPSHR (our HR analysts) can see the restricted data. The APPS password in our PROD environment is known only to the DBAs. The APPSHR password is known only to the HR Business Analysts / HR Developers / DBAs. All other business analysts have access to another read-only account called APPSXXXX that is NOT exempt from the security policies. With that APPSXXXX account, the regular business analysts and developers can query the database directly for all but the restricted data and can access any data via the forms that their forms responsibilities allow.
When we clone an instance, we don’t have to do any scrambling. For all of our DEV, TEST and PSUP (Production Support) instances we merely have to change the APPS password to a commonly known password and issue

REVOKE EXEMPT ACCESS POLICY FROM APPS;

Now everyone can read and write data as APPS, but because APPS is now restricted by the policies we put in place, APPS can't see the sensitive data. The only time this causes a problem is for the HR Business Analysts or HR Developers who need a non-PROD place to work issues or develop code. For them we created a special HR cloned instance with the same security setup as PROD but with the APPS password known to the HR Analysts and HR Developers.
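
One sanity check I find handy after a clone is confirming exactly who still holds the exemption. A quick query against the standard DBA_SYS_PRIVS view shows it:

select grantee
  from dba_sys_privs
 where privilege = 'EXEMPT ACCESS POLICY';

In PROD you should see APPS and APPSHR; in the DEV / TEST / PSUP clones, APPS should no longer appear after the REVOKE.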

This solution has worked out much better for us than the scrambling. Give it a shot and let me know how it goes for you in the feedback!

Oracle Advanced Compression Advisor

My main Oracle Applications Database has been growing steadily and is now around 270GB. In terms of databases this isn’t huge, but when you keep multiple development and test copies around on enterprise class storage, AND replicate it to your DR site, that can get expensive.

With Oracle 11g database, Oracle came out with two products to help manage space (and improve performance!) in your Oracle database – Oracle Advanced Compression and Oracle SecureFiles. Although both are for reducing disk usage, they are aimed at different areas of disk usage within an Oracle database.

SecureFiles is the next generation of storage for Oracle LOBs (Large OBjects). Oracle LOBs stored in the format before SecureFiles are said to be stored as BasicFiles. SecureFiles is aimed at attachments to the database – CLOBs (Character LOBs), BLOBs (Binary LOBs), and NCLOBs (multi-byte character LOBs). SecureFiles offers a number of benefits over BasicFiles. Two are relevant to reducing space usage – de-duplication and compression. SecureFiles is a free feature of Enterprise Edition and has no additional licensing costs. As a result, it's the sort of low hanging fruit that should be of interest to any Oracle DBA out there – free improved performance and free reduced disk storage. What's not to like? Because this feature is free, we're actively testing with this in our environments and plan on rolling this out by end of year. I'll post a much longer blog post with our space savings and details of converting data from BasicFiles to SecureFiles later.
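
To give a flavor of what this looks like, here's a minimal sketch of the 11g syntax for creating a LOB as a SecureFile with deduplication and compression enabled. The table and column names are made up for illustration; converting existing BasicFiles data is a separate exercise I'll cover in that later post.

CREATE TABLE xx_attachments (
  doc_id   NUMBER,
  doc_body BLOB
)
LOB (doc_body) STORE AS SECUREFILE (
  COMPRESS MEDIUM
  DEDUPLICATE
);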

Advanced Compression is aimed at table data – compressing the data stored in the tables. This not only saves space on the file system, but actually improves performance by reducing the amount of data that needs to be read from disk (reading data from disk is frequently the bottleneck with Oracle databases – which is why Oracle is so memory hungry and tries to cache much of the data it needs in the System Global Area (SGA)). Advanced Compression is an add-on feature to Enterprise Edition at a cost of $11,500 per x86 license (and remember it takes TWO x86 CORES to equal one x86 LICENSE) – and like everything Oracle, that is based on how many cores are in the box, not how many your database cpu_count is set to or how many your VM (if you virtualize your Oracle database) utilizes.
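
For reference, here's roughly what enabling it looks like on 11gR2. The table name is hypothetical; COMPRESS FOR OLTP is the 11.2 syntax, and the ALTER TABLE ... MOVE rebuilds an existing table compressed (an offline operation).

-- new table compressed for OLTP workloads
CREATE TABLE xx_orders (
  order_id NUMBER,
  payload  VARCHAR2(4000)
) COMPRESS FOR OLTP;

-- or rebuild an existing table compressed
ALTER TABLE xx_orders MOVE COMPRESS FOR OLTP;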

With Oracle Enterprise Manager (OEM) 11g, one of the new features is a Compression Advisor. You can read about other reasons to upgrade to OEM 11g at this blog post on OEM 11g new features. When run against an Oracle 11gR2 database, this advisor will analyze your database objects, estimate the compression ratio you’ll achieve and even make recommendations on the best compression settings for your environment. Although my largest database is 11gR2, I have a variety of other database versions on those same physical hosts (gotta love virtualization!) that aren’t 11gR2 and hence don’t have the DBMS_COMPRESSION package.
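
For the 11gR2 databases that do have it, the call looks roughly like this – a sketch only, so double-check the parameter list against the 11.2 documentation. SCRATCH_TS is a placeholder scratch tablespace.

set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    'SCRATCH_TS', 'INV', 'MTL_TRANSACTION_ACCOUNTS', NULL,
    dbms_compression.comp_for_oltp,
    l_blkcnt_cmp, l_blkcnt_uncmp, l_row_cmp, l_row_uncmp,
    l_cmp_ratio, l_comptype_str);
  dbms_output.put_line('Estimated compression ratio: ' || l_cmp_ratio);
end;
/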

Luckily, I stumbled across a standalone version on Oracle Technology Network. This standalone version will work with Oracle 9iR2 (9.2.0.X) through 11gR1 (11.1.0.X) and can give you the data you need to convince business areas to upgrade to 11g database.

One thing to be aware of with this script: it will create a temporary table of the compressed data, so you may wish to create a tablespace specifically for storing that temporary table and make it the default tablespace of the user executing the script. The temporary table gets dropped at the end.
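
A rough sketch of that prep work (the tablespace name, datafile path, size, and user are all placeholders; size it to hold a compressed copy of the sampled data):

CREATE TABLESPACE comp_advisor_ts
  DATAFILE '/u01/oradata/comp_advisor01.dbf' SIZE 2G;

ALTER USER comp_user DEFAULT TABLESPACE comp_advisor_ts;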

Note: The example on the Oracle Technology Network link above is incorrect. It uses the DBMS_COMPRESSION package, which ships with the 11gR2 database and is NOT what this download provides. So if you're using an 11gR2 database, use the DBMS_COMPRESSION package, but if you're on a 9iR2 thru 11gR1 database, use the DBMS_COMP_ADVISOR package as in my example below.

Here’s the output from running it against a 9.2.0.8 database with a table OM_DATA in a schema called OO_MAIL. The table has 4.5 million rows and is 9.5 GB in size. (The product that uses this database requires Oracle 9iR2, for those wondering)

SQL> exec DBMS_COMP_ADVISOR.getratio('OO_MAIL','OM_DATA','OLTP',25);

Sampling table: OO_MAIL.OM_DATA

Sampling percentage: 25%

Compression Type: OLTP

Estimated Compression Ratio: 1.62

PL/SQL procedure successfully completed.


I also ran this against my largest table in my Oracle Applications (11gR2) instance (INV.MTL_TRANSACTION_ACCOUNTS) – a 2.5GB table with 14 million rows:


Sampling table: INV.MTL_TRANSACTION_ACCOUNTS

Sampling percentage: 25%
Compression Type: OLTP
Estimated Compression Ratio: 2.57
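
To turn an estimated ratio into a savings figure, divide the current size by the ratio to get the compressed size, then subtract that from the original. A quick sanity check in SQL using the numbers from the two runs above:

select round(9.5 - 9.5/1.62, 2) as est_savings_9i_gb,
       round(2.5 - 2.5/2.57, 2) as est_savings_11gr2_gb
  from dual;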

So that works out to about 3.64GB of space saved on the 9i database and about 1.53GB in my 11gR2 database. A total of roughly 5GB saved. Every database (and the data it contains) is different, so run the numbers against your database to decide if Advanced Compression is worth it in your environment… and check out SecureFiles. It's free.

Oracle internal cloud session updates from VMworld Day 1

This week I’m at VMware VMworld in San Francisco. Yesterday was day one of the event and the Oracle related highlight for me was session

EA7061 Creating an Internal Oracle Database Cloud Using vSphere by Jeff Browning of EMC.

I’ve been to Jeff’s sessions before and always found them entertaining and informative. Below are some of my thoughts from what was covered at the session.

The most striking graphic was an X-Y chart where the X axis was scalability and the Y axis was availability. At the high end of both was Oracle RAC. At the low end of both were MS Access and MySQL. In the sweet spot was Oracle Standard Edition coupled with VMware vSphere HA clusters.

What does this say to the DBAs? What many of us already knew – not every workload is appropriate for being virtualized under VMware. If your system or the business it's supporting cannot survive the downtime you'd have in the event of a host failure and subsequent HA restart, you should spend the $$ for Oracle RAC. However, Jeff pointed out that in his experience roughly 90% of systems can survive the downtime associated with an HA event – that's 90% of the databases out there being good candidates for virtualizing Oracle under VMware vSphere.

One of Jeff's great examples of why to virtualize was to reduce database sprawl. He cited a Fortune 100 company with 824 physically booted Oracle databases that pays $228 million a year to support those machines.

To reduce this sprawl, you’ve got two approaches – according to Jeff, Oracle’s preferred way is to use RAC and come up with one global instance where you can put all your various products. Unfortunately that just doesn’t strike me as realistic in any sort of large company. I run primarily Oracle’s own products and even they can’t run on the same database version in many cases. Oracle E-Business requires Oracle 10g or Oracle 11gR2. Yet Oracle Email Center requires an Oracle 9i database (which needs RedHat 4). A global RAC instance just doesn’t make sense.

The other approach is to virtualize the machines – now I’ve got a RedHat 4 32-bit OS machine running Oracle 9i database on the same hardware as a RedHat 5 64-bit OS running a 11gR2 database. There’s lots of cost savings on both Oracle licensing and reducing the amount of hardware that one can gain with this approach.

One thing I hadn't really thought about that Jeff brought up with regard to VMware vSphere and Oracle is that the time to vMotion your Oracle database can be longer than with other types of virtual machines – sometimes as long as twenty minutes. The reason has to do with how vMotion works – it's basically copying the contents of RAM for that VM to another server and then copying over the memory blocks that have changed since the first copy, over and over, until the delta is very small. Oracle heavily uses memory for its SGA (System Global Area), so for heavy-transaction OLTP systems, vMotions can take longer than expected.

The final thing I want to share from Jeff's presentation was the relative performance of different protocols and file systems with regard to Oracle and VMware. On the NAS (NFS) storage side, Jeff assigned a value of 95% efficiency when accessing database datafiles via Oracle's Direct NFS (DNFS) offering. Compare this to 65% efficiency running VMDKs over traditional NFS. That's a huge performance difference. As a result, Jeff recommends using VMDKs over traditional NFS only for a boot / OS disk and definitely not for your database files. On the SAN side, Jeff noted the best performance (100% relative efficiency) comes from using Oracle's Automatic Storage Management (ASM) with VMware Raw Device Mapping (RDM) containers. Compare this with 98% efficiency for ASM using VMware Virtual Machine Disk Format (VMDK) containers. This is another example of how the Oracle DBAs need to communicate with the VMware administrators when planning out their environment. Many times DBAs don't even realize they're running in a virtual environment, and you can't expect a VMware admin to know about the performance benefits of Oracle DNFS or ASM.

Overall it was a great session and I’m definitely looking forward to applying what I learned to my environments when I get back home.

Scrambling of HR data in an Oracle EBS instance

With our most recent upgrade, we’ve implemented some of the Human Resources modules.

We’re now storing some sensitive employee data (stuff like salaries, Social Security Numbers, banking info, etc.) and one of the business mandates is to scramble this data in our cloned instances.

I knew from reading Steven Chan's blog (this article by Sanchit Jindal) that Oracle offers two plugins for Oracle Enterprise Manager (OEM) that might serve our needs. These are the Data Masking Pack and the Application Management Pack for E-Business Suite.

While reviewing which of these might be best for our environment, I made note of the following resources for those interested:

Application Management Pack for E-Business Suite
Data Sheet- http://www.oracle.com/technology/products/oem/pdf/apps_mgmt_ebiz.pdf
Download- My Oracle Support, Patches, Search for Patch Number- 8333939

Data Masking Pack
Data Sheet- http://www.oracle.com/technology/products/manageability/database/pdf/ds/data-masking-pack-11gr2-datasheet.pdf
Canned Demo- http://download.oracle.com/technology/products/oem/screenwatches/data_masking/index.html
Download- http://www.oracle.com/technology/software/products/oem/index.html

I also found the comparison matrix at the Steven Chan blog post above (direct link is http://blogs.oracle.com/stevenChan/images/comparisonmatrix.jpg) to be extremely useful.

For an EBS environment, AMP looks to be a better fit – It has pre-defined masking for EBS, which was exactly what I needed since I was going to be modifying data in pre-defined EBS tables / objects. AMP also allows you to hot-clone an Oracle EBS instance. Many newer Apps DBAs don’t have methods / scripts written to do this themselves.

In our environments, AMP would have required some significant and PROD performance impacting changes in order to use the data scrambling.

The data scrambling can only take place as part of a clone, and AMP only supports cloning thru RapidClone integration – it specifically does NOT support cloning from an RMAN backup. This means that utilizing AMP would require me to clone directly from PROD, causing a non-negligible performance impact.

Another issue was cost. AMP is licensed per processor at a list price of $7k per processor. The number of processor licenses needed is obtained by adding up your Database, Application Server and Web Server (if on a separate system from the Application Server) processors. In our case we utilize 8 Oracle Database licenses and 4 Oracle Application Server / Web Server licenses, so 12 * $7k, or $84,000 list price, plus yearly support fees.

As a result of these two issues with AMP, the business asked me to see if I could come up with a cheaper home grown solution that would allow us to utilize our existing cloning strategies and not cost tens of thousands of dollars.

We had 3 main things we needed to scramble: National Identifiers (aka Social Security Numbers or SSNs), Salary data, and banking direct deposit information like routing and account numbers.

National Identifiers (SSNs):

National identifiers are stored in the table per_all_people_f. The national identifier is NOT the primary key, however. The primary key for per_all_people_f is
(PERSON_ID, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE) and each time a person changes positions or leaves / comes back to the company, a new row is added with an appropriate effective date range. This means I had to change the SSN to the same value for each record for a person. For example, Bob might have 4 records in this table due to leaving and coming back or switching positions. In all those records in PROD, he’d have the same SSN of “123-45-6789” and I needed to ensure when I cloned the instance and scrambled the data that his new SSN was consistently the same (but of course different from PROD) in the clone.

First I needed to generate a bunch of random but valid SSNs. There are actually quite a few rules associated with this (for example, if you receive your SSN while living in Delaware, the first three digits must be 221 or 222) and luckily I found some scripts at fakenamegenerator.com which allowed me to create a long list of fake random SSNs. The fngssn.class.php file is GPL and is available at this link. I also modified their example.php script to call the fngssn.class.php file and output an SSN. I then just looped thru this program however many hundreds or thousands of times I needed, outputting and concatenating the SSNs to a dump file.

I then bring this list of SSNs into an Excel spreadsheet as column B.
Column A is update per_all_people_f set national_identifier = ZZ
Column B is my list of fake SSNs
Column C is ZZ where national_identifier = ZZ
Column D is my list of real SSNs – this can be obtained with the following SQL: select distinct(national_identifier) from per_all_people_f where national_identifier IS NOT NULL;
Column E is ZZ;

I then save this spreadsheet out to a plain text file and edit the file, replacing all the ZZ with ' (Excel handles ' weirdly, so I just use vi afterward to do a global replace of ZZ with ' – it's one command and takes a couple seconds).

At this point I'm left with a SQL file with commands like
update per_all_people_f set national_identifier = '567-42-3477' where national_identifier = '123-45-6789';

I just need to run these and poof, the SSNs are now scrambled.

Salary data:

This was the easiest to change. Some people may get paid hourly, some salary. In either case though, their salary or hourly rate is stored in per_pay_proposals.proposed_salary_n. For someone that gets a salary of $100k a year this field would have 100000, and for someone paid $10 an hour the field would have 10. It's perfectly ok for people to have the same salary in real life, and I figured for the purpose of our scrambled clone, I'd go with a socialist ideal and just give everyone the same value of 50. Admittedly, this means people who are salaried now get $50 a year while the hourly people get $50 an hour. It's good to be an hourly employee, at least in my scrambled instances, and maybe now the hourly employees can afford health care!

update per_pay_proposals set proposed_salary_n = 50 where proposed_salary_n IS NOT NULL;

Banking data:

There are two parts to the banking data: the routing number (think of it as the bank number) and the account number. These are the two sets of numbers at the bottom of your checks (there’s also the check number as the third set of numbers on your checks but that’s not relevant).

The pay_external_accounts table holds the routing and account number info in some of the segment columns – where exactly depends on your implementation. We only wanted to update this data for people’s personal accounts, so our SQL looks like this:

update pay_external_accounts
set segment6 = '44444444444', segment7 = '1111111111'
where external_account_id IN (select b.external_account_id
                                from pay_personal_payment_methods_f a, pay_external_accounts b
                               where a.external_account_id = b.external_account_id);


There you go, all done. I know it sounds like a lot of manual steps, but once you've built the Excel spreadsheet and generated your list of fake SSNs, all you really need to do is run the SQL for Column D and paste those values in, change the ZZ to ' and run the file. The SQL for salary and banking data take a few seconds each. Call it 10 or 15 minutes total per clone.

Now could this all be better? Sure. I could generate my fake SSNs via a SQL cursor and never need Excel or PHP. But for now, this works. Please feel free to improve on my system and I'll update the blog with the changes.
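
For anyone who wants to try that route, here's a rough sketch of what the cursor approach might look like. It just builds random xxx-xx-xxxx strings with DBMS_RANDOM (so unlike the fakenamegenerator scripts it doesn't enforce the state-prefix validity rules, and collisions are theoretically possible), but it keeps each person's fake SSN consistent across all their rows:

declare
  l_fake varchar2(11);
begin
  for r in (select distinct national_identifier as real_ssn
              from per_all_people_f
             where national_identifier is not null) loop
    -- build a random SSN-shaped value for this person
    l_fake := lpad(trunc(dbms_random.value(1, 899)), 3, '0') || '-' ||
              lpad(trunc(dbms_random.value(1, 99)), 2, '0') || '-' ||
              lpad(trunc(dbms_random.value(1, 9999)), 4, '0');
    -- apply the same fake value to every row that person has
    update per_all_people_f
       set national_identifier = l_fake
     where national_identifier = r.real_ssn;
  end loop;
  commit;
end;
/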

Why run Oracle E-Business on vSphere?

If I’ve got the dedicated hardware for my Oracle E-Business (EBS) install, why would I want to run it virtualized?

Most of the places I’ve seen typically have dedicated hardware for the Oracle EBS instances – so why virtualize it at all?

One major reason is the ability to roll back the entire environment in the event of a patching issue. My main client has an upcoming upgrade involving a database upgrade (11gR1 to 11gR2), OS-level patches (updates to RedHat 5.4), Java updates (going to 1.6.0_20b5) and roughly 300 Oracle EBS patches. Yes, 300. Although we're still staying at 11.5.10.2 ATG RUP 7, Oracle recently updated the minimum patch list you must be on to receive Extended Support on Apps 11i as of Nov. You can find the full, up-to-date list in Metalink document Minimum Baseline Patch Requirements for Extended Support on Oracle E-Business Suite 11.5.10 (Note 883202.1).

Without virtualization, I’d have to take a full backup of my Database and App Tiers before starting the upgrade and then be prepared to go back to these if we can’t finish the upgrade by a certain time. Depending on your environment, this could be hours, days or not even practical.

Instead, at the start of the upgrade I merely take a snapshot of each VM involved. If I do this with the VMs shut down (i.e. not capturing the memory state of the VM as well), this takes about 10 seconds. Done. In the event we decide to roll back all our upgrade changes, it's just a quick Revert to Snapshot (might take as long as a minute per VM – OMG, a whole minute!!), restart the Apps services and poof, you're done.

If only the upgrades themselves were so easy and foolproof.

Another reason to virtualize is of course getting the best usage out of your Oracle licenses. Oracle EBS requires Oracle Enterprise Edition. I've blogged previously here about how to do that, so I won't go into it again.

Avoid the RAC tax. According to Oracle's most recent Global Price List, RAC is an additional $23k per processor on top of Enterprise Edition. RAC makes sense for companies in two situations: 1) when their database must not go down, and 2) when the system is too large to run on one physical box. Sure, there are some installations where EBS requires more horsepower than can be run under 8 processors (the limit of a VM under vSphere 4.0 Update 2), but they are the vast minority. Many companies running EBS implement RAC because they want to be protected from hardware failures.

VMware has two features that protect you here – High Availability (HA), which will restart your VM on another vSphere host if the VM goes down, and the much cooler and more RAC-like feature, Fault Tolerance (FT). FT will actually run your VM on two ESX hosts simultaneously. They work in an active / passive arrangement – you aren't having users connect to both machines and load balance between the two, but instead all users are connecting to one machine, and in the event it fails, user sessions and other processes and connections are now active on the other node. Your users don't notice anything: no disconnections, no restarts, no autoconfig required. It just simply works.

The current drawback to FT is that it's limited to 1 CPU VMs – that's reasonable in my experience with an Apps 11i Forms/Web tier, but can frequently be a show stopper with a database server. However, if you're willing to leverage the performance tuning features of 11g, you may be able to get past that. It is also rumored that VMware is working on getting FT to work with multiple-processor VMs. When they do, it should really put a dent in RAC. Before I started leveraging the SQL tuning available in 11gR2 and SQL Profiles, my main client system ran with CPU as the constraint and had a pretty constant load of 4 CPUs. After tuning the SQL, CPU load occasionally spikes at over 1 CPU and typical CPU load is 0.5 processors. The system is now a prime candidate for FT. My limiter became disk I/O, which we addressed with more spindles.

Creating a clone of an EBS system for development or testing can be a big pain. Almost always you're talking about cloning 2 servers (DB and Apps tiers), running autoconfig, canceling scheduled concurrent jobs, etc. With VMware Lab Manager, you can have your developers / analysts create linked clones of your Production environment in minutes. No need to run autoconfig; the systems show up with their same instance names, same machine names, etc. This is done behind the scenes by putting the cloned VMs in their own private network. Instead of your copy of PROD taking up 100GB or so for EBS executables and another XXX GB for the database itself, it's merely the size of the changes between your production environment and your cloned VM. I have yet to do this in my own environment, so I can't speak from experience, but what I have seen has been impressive.

It's been a few years and your server hardware needs to be replaced due to expensive support costs? In the physical world that means building up new database / app servers and taking a downtime to copy everything over, reconfigure the networking (and possibly the machine names) and then hoping you don't run into some sort of unexpected problem (like the AMD time drift issue a few years back that caused kernel panics – see here)… or you could just install vSphere onto your new host, make it visible to your shared storage and vMotion your system live and in production to the new hardware. What's not to love?

Who really does run Oracle EBS under VMware though? Small little companies? No. Two of my favorite examples are VMware and EMC. Both run Oracle EBS virtualized.

I press Help for help… and get a 404?

A quick heads up to other Oracle Applications DBAs out there who apply the quarterly security (CPU) patches from Oracle to their Apps 11i instances – the patches break the online help for the end users. Luckily, Oracle has released a patch for this (yay!), but they haven't updated the CPU documents (which have a section just for that – Section 5, Document Modification History), nor have they re-released the patches or notified the people who downloaded them – all things that are easy to do and would help customers see better value in Oracle Support.

Failures like this by Oracle to keep their documentation in sync drive DBAs and clients crazy and make customers question why they pay 20% a year for Oracle Support.

For those looking to fix the issue, check out My Oracle Support document ID 1080465.1. It'll tell you that you need patch 9506302.

This was an issue caused by the OCT 09 CPU patches and was also in the JAN 10 CPU patches.  You can check for yourself that Oracle didn’t update the documents – Note IDs are 880170.1 and 985520.1 respectively.