Category: Database

February 22nd, 2017 by dbakevlar

Working for a heterogeneous company, I get a lot of questions about which database platforms are important, why, and whether we should invest in them.  Looking at some of the lists and some of the marketing, it can be a bit overwhelming.

Now I’m not going to promise to take all of the confusion out of the myriad of database platforms out there, but I’m hoping to help with what’s relevant.


So let’s take a look at the list of top database engines in the industry per db-engines.  The reason I use this site as my main “go-to” for information on database popularity has a lot to do with its method for collecting information.  There are other ways of calculating popularity, but I’ve found that ranking by dollar amount or by number of installations or instances introduces bias, and db-engines is just more reliable and telling of trends.

Categories of Database Platforms

The first thing you’ll notice is that, currently, RDBMS, (relational databases) rule the world.  Features, scalability and maturity have given relational a strong base and it continues today.  There are a number of databases that fall into the “big data” arena, including Cassandra, Hive, Solr and others, that bridge different platforms.  I still agree with the way that db-engines has classified them, and it should be noted that they’ve looked at the architecture, not the use of the database.


Oracle had a large lead in the field, but as you can see, it’s decreasing on a consistent basis.  Most of us in the Oracle realm are hoping that the cloud shift will start to show the success that they’re betting on.  Just as they made the shift from (I)nternet to (C)loud, we have faith in this giant.   In contrast, MySQL is consistently strengthening its #2 spot.  It’s still viewed as “open source” by many, and with the power behind its advanced features, including clustering in version 5.7, folks are finding it to be a strong alternative to Oracle while still staying with the power of the Oracle company.

SQL Server is #3, and with the Azure cloud offering, Microsoft is currently the second strongest cloud behind Amazon.  I’ve worked in SQL Server as long as Oracle, having started with MSSQL 6.0-7.0 back in 2000.  I was very passionate about database technology and didn’t feel a strong kinship to any one platform, so I was happy to work in Oracle, SQL Server, Sybase or others that came along in my early years of database administration.  Microsoft has invested heavily in business intelligence and data visualization with SQL Server 2016, and I’m enthralled with some of the newest features.

Document Stores

MongoDB is the only document store in the top 20.  It’s a love/hate relationship: those that came from the RDBMS world tend to have a negative view of the platform, while a number of web developers are loyal to it and have propagated it out to the technical world.

A document store is quite the “anti-RDBMS”, considering it violates so many of the rules of an RDBMS, with different columns per row and no schemas.  Amazon attempts to have its own version of any database, and for MongoDB, it’s DynamoDB, among other competitors.
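That schema freedom is easy to see in a toy sketch, with plain Python dicts standing in for documents, (no real MongoDB or DynamoDB client here, and the records are made up):

```python
# Conceptual sketch: a document store happily holds records with different
# "columns" per row, which a relational table with a fixed schema would reject.
documents = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},            # hypothetical data
    {"_id": 2, "name": "Grace", "twitter": "@grace", "tags": ["dba"]} # different fields entirely
]

# Queries must tolerate missing fields -- the flip side of schema freedom.
emails = [d.get("email") for d in documents]

def find(field, value):
    """Return documents where `field` equals `value`, skipping docs without it."""
    return [d for d in documents if d.get(field) == value]
```

Every query has to cope with the possibility that a field simply isn’t there, which is exactly the discipline an RDBMS schema would have enforced up front.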

Wide Column Stores

Cassandra and HBase, both via Apache, are the top choices.  These have similarities to DynamoDB and other NoSQL databases.  They are highly scalable thanks to their ability to span bare-metal hardware, but they require significant technical knowledge of hardware, software and development to architect an environment that functions seamlessly and efficiently.  A strong architect is essential, and a strong, collaborative team is important to this type of technology.
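The data model behind these engines can be sketched conceptually.  This is an illustration of the wide-column idea only, not the Cassandra or HBase API, and the row keys and columns are invented:

```python
# Conceptual wide-column layout: each row key maps to its own dynamic set of
# columns, so two rows in the same table can carry entirely different columns.
table = {}  # row_key -> {column_name: value}

def put(row_key, column, value):
    table.setdefault(row_key, {})[column] = value

put("sensor:1", "2017-02-01T00:00", 21.5)   # hypothetical time-series columns
put("sensor:1", "2017-02-01T00:05", 21.7)
put("sensor:2", "location", "rack 4")       # a completely different column set
```

The real engines add partitioning, replication and sorted storage on top, which is where the architectural expertise comes in.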

Key-Value Stores

Rounding out the top ten in the list is Redis, which is a key-value store database engine.  Redis is an in-memory data store that supports a wide range of development languages and has a strong open source community.  One of the interesting features in this database engine is its ability to house geospatial indexes and run radius queries against them.  I worked with spatial data for four years when I first became a DBA and was always fascinated with this type of data.  Redis has a command called GEORADIUS that makes working with this data much easier.
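Under the hood, a radius query is essentially a great-circle distance filter over stored longitude/latitude pairs.  A minimal sketch in plain Python, (the function names are mine; the coordinates are the Palermo/Catania examples from the Redis documentation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters (Earth radius matching Redis's constant)."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6372797.560856 * asin(sqrt(a))

# A toy "geo set" plus a radius query, mimicking GEOADD + GEORADIUS.
cities = {"Palermo": (13.361389, 38.115556), "Catania": (15.087269, 37.502669)}

def georadius(center_lon, center_lat, radius_m):
    return [name for name, (lon, lat) in cities.items()
            if haversine_m(center_lon, center_lat, lon, lat) <= radius_m]
```

In Redis itself this is just `GEOADD` followed by `GEORADIUS`, with the heavy lifting done server-side against a geohash-backed sorted set.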

Search Engines

When I was at PASS Summit, Elasticsearch was something that was mentioned often.  I also noticed that it was one of the platforms mentioned in Brent Ozar’s salary data, too.  It’s pushed on AWS, but I’m seeing it more commonly on Azure, and it has numerous followers due to its ability to do deep log analysis and then build custom dashboards with Kibana.

I was also surprised, as I think a few will be, that Splunk is listed.  It is a search engine that offers insight into log data.  As logs can consume considerable storage in a short time, they’ve wisely branched into a cloud offering.   Just as tracing and logs are key to database administrators, the value in log data can benefit anyone in Information Technology.

Not Listed

Now there are a few other database platforms, falling outside the top categories, that I’d like to mention.

Snowflake, (#169) and Redshift, (#69) are starting to gain momentum.  Why?  With migration to the cloud, less is more.  The ability to virtualize your data warehouse is a great way to make the shift easier, but also to lessen your cloud footprint, which means less storage usage.

This is a key part of what Delphix is educating customers on as well: we virtualize non-production databases for many of the platforms seen in the db-engines list, making it easier to migrate to the cloud and lessen the storage footprint.  This is an area that technology should be emphasizing more.  As with most monthly subscriptions, the smaller bill looks great at first, but when you add it up over the long haul, you realize how much you’re paying for the cloud, and it’s essential to recognize the opportunities to save on storage, compute and resources.
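The storage arithmetic makes the point.  With hypothetical numbers, (a 2 TB source, ten non-production copies, and an assumed 5% of changed blocks per copy) full physical clones versus virtual copies sharing one compressed image compare like this:

```python
# Hypothetical illustration of why virtual copies shrink a cloud footprint.
source_tb = 2.0          # production database size (made-up figure)
copies = 10              # dev/test/QA copies needed
changed_fraction = 0.05  # unique changed blocks per virtual copy (assumption)

physical_tb = source_tb * copies                                 # every copy fully materialized
virtual_tb = source_tb + source_tb * changed_fraction * copies   # one shared image plus deltas

print(physical_tb, virtual_tb)  # 20.0 vs 3.0 TB
```

Even with generous change rates, the shared-image model stays a fraction of the fully materialized footprint, and cloud storage bills scale with exactly that number.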


The second area I want to focus on is control: control over our data, control over what we can do with our data, and control over support when something goes wrong with our data.  We choose database platforms to have some control over the features and product support we have.  The dynamic has shifted as many companies now look to open source to gain control over what they produce, creating the features and product more on their own.  They feel limited by large technology companies and want more control over how much they can do and in what direction.

No matter what you decide, no matter what your favorite database platform is or what you use in your company, databases make the world go round.  The world continues to spin and databases continue to be the mechanism to deliver the data that we use every day to answer questions, build trends and analysis, and make the decisions that the world relies on.  We database administrators and developers are lucky enough to work in this exciting arena of technology.

Posted in Database

September 22nd, 2016 by dbakevlar

The Oak Table network is a closed membership of about 100 technical scientists who have a passion for technology and the world around them.  These men and women, representing some of the brightest minds from each of their companies, are individuals who have built companies and major technical innovations.

Initially, much of their research and work was presented at Oracle Open World.  Over time, Oracle Open World welcomed less and less of their participation.  They decided to create their own network and present their information at their own conference, held annually at the same time as OOW.  Over a decade later, the event now has company sponsors and a dedicated following of Oracle specialists who attend to receive the deepest and most interesting details of technical data in the industry.

There is a “cool factor” for those going to Oak Table World.  Most of the members are also part of Oracle’s famed ACE Program, (most have ACE Director status).  Some of the members work for Oracle; most work for companies that are Oracle partners.  The sessions have a tendency to leave attendees with their minds blown.


The conference this year was sponsored by Delphix, Pythian and Amazon Web Services and was held at the Children’s Creativity Museum.  The two-day schedule only allows for 13 one-hour sessions and 8 ten-minute “TED”-style talks.  This year the conference attracted over 400 attendees in two days. A majority of the talks were the deeper technical content that wasn’t deemed suitable for the more marketing-driven Oracle Open World event.
Because Delphix is one of the major sponsors, by default, it receives quite a bit of exposure at the event.  In addition to sponsoring, Delphix does the coordination for the event.  These opportunities provide us a way to create awareness with the primary technical thought leaders in the Oracle community.

In the next couple of weeks, you’ll see a post on Oak Table with links to the presenters’ incredible slides and to the recorded sessions. One session that wasn’t captured was Toon Koppelaars’ one-hour session on the first day. We know this was a session many considered a must-see, so I’ll be working with Laura Ramsey from OTN to set up a webinar so many can take advantage of Toon’s amazing contribution.

If you have questions, please feel free to reach out to me and I’ll happily share information on what we presented and how we will continue to support the Oak Table Network.  Thank you to each of the sponsors who, like Delphix, know the value of these incredible technologists.



Posted in Database, DBA Life

June 24th, 2016 by dbakevlar

I’ve been going through some SERIOUS training in just over a week.  This training has successfully navigated the “Three I’s”: it’s been Interesting, Interactive and Informative.  The offerings are very complete and the knowledge gained is limitless.

I’d also like to send a shout out to Steve Karam, Leighton Nelson and everyone else at Delphix who’s had a hand in designing the training, both for new employees and for those working with our hands-on labs.  I’ve had a chance to work with both, and they’re far above anything I’ve seen anywhere else.

The Challenge of Patching and Upgrades

Most DBAs know: if you attempt to take a shortcut in patching or upgrading, either by not testing or by assuming your environments are the same without verifying, shortcuts can go very wrong, very quickly.

Patching is also one of the most TEDIOUS tasks required of DBAs. The demand on the IT infrastructure for downtime to apply quarterly PSU patches, (not including emergency security patches) to all the development, test, QA and production databases is a task I’ve never looked forward to.  Even when utilizing Enterprise Manager 12c patch plans with the DBLM management pack, you still had to hope that you’d checked compliance for all environments and pray that your failure threshold wasn’t tripped, which would mean allocating a large amount of time to patching beyond just testing and building out patch plans.

This is Where Delphix Saves the Day

I bet most of you already knew you could virtualize your development and test environments from a single Delphix compressed copy, (referred to as a DSource) and create as many virtual copies, (referred to as VDBs) as your organization needs for development, testing, QA, backup and disaster recovery, (if you weren’t aware of this, you can thank me later… :))

What you may not know, (and what I learned this week) is that you can also do the following:

  • Test patches and upgrades on a VDB or a DSource to verify there aren’t any issues, instead of doing a full, manual clone, which is very time consuming.
  • Apply patches and upgrades to a DSource, and then patch and/or upgrade all the VDBs attached to that DSource by simply performing or scheduling a refresh.
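The second bullet can be modeled conceptually.  This toy sketch is an illustration of the idea, not the Delphix API, and the version strings are invented:

```python
# Toy model: a DSource tracks its current image version; a refresh simply
# re-bases a VDB on the DSource's latest version.
class DSource:
    def __init__(self, version):
        self.version = version
    def apply_patch(self, new_version):
        self.version = new_version

class VDB:
    def __init__(self, dsource):
        self.dsource = dsource
        self.version = dsource.version
    def refresh(self):
        self.version = self.dsource.version

ds = DSource("11.2.0.4")                 # hypothetical base version
vdbs = [VDB(ds) for _ in range(3)]       # dev, test, QA
ds.apply_patch("11.2.0.4 + PSU")         # patch once, on the DSource
for v in vdbs:
    v.refresh()                          # every VDB now runs the patched image
```

Patch once, refresh many: that is the whole time savings in a nutshell.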


Considering how much time and resource is saved by eliminating such a large portion of the patching and upgrade cycle, this alone is worth investing in Delphix for!

Want to learn more?  Check out the following links:

Testing Oracle Upgrade and Application Patching

Upgrade Oracle DSources After an Oracle Upgrade

Want to Demo Delphix? <– Click here!   

Posted in Database, DBA Life, Delphix

February 19th, 2016 by dbakevlar

I’m a Leaf on the Wind, Watch How I Soar.

This is one of my favorite lines from the movie Serenity.


Alan Tudyk’s character dying at the end of the movie aside, (sorry if I ruined it for anyone…) this is how a DBA should be in the database environment: skilled, reliable and maybe a little off our rockers. Our job is to protect the data, the database and everything around it.

With that said, I’m going to list my Ten Rules of Database Administration.

  1. Fixing a performance problem with hardware is the best way to guarantee the return of the problem in the near future.
  2. A Database Administrator is only as good as their last backup, (or database image, clone, flashback and other redundancy.)  It’s the only protection from ID10T errors- our own and others.
  3. The best performing database is one that has no users.  The best performing query is one that doesn’t have to be executed.
  4. Optimize what annoys the user vs. what annoys you and you’ll never have to worry about your job.
  5. Never assume, always research and double-check/triple-check your findings.  Data is the savior of the DBA.
  6. Performance issues are rarely simple.  If they were simple, the user could fix them and we’d be out of a job.
  7. If a database is up and running, then something has changed.  Don’t ever accept the answer that nothing’s changed; otherwise they’d be using paper and pen instead of the database.
  8. A developer’s goal is to have an application or procedure complete requirements. Your job is to make sure the code they produce does so without risk to data, database and does so efficiently.
  9. You can’t do your job as well as you could unless you understand what the application developer, the user and the business do.
  10. The database is always guilty until proven innocent, and by the way, you only have access to half the case evidence.  You’re its attorney.  Congratulations.

Happy Friday Folks!




Posted in Database, DBA Life

February 17th, 2016 by dbakevlar

There was a question posted on the Oracle-l forum today that deserves a blog post for easy lookup.  It’s regarding your Enterprise Manager repository database, (aka the OMR.)   This database has a restricted-use license, which means you can use it for the Enterprise Manager repository, but you can’t add partitioning, RAC or Data Guard features without licensing them.  You also can’t use the diagnostic and tuning pack features available in Enterprise Manager on the repository database without licensing them, outside of the EMDiagnostics tool.  You can view information about the license that is part of the OMR here.

No one wants to be open to an audit or have a surprise when inspecting what management packs they’re using.


To view which management packs you’re using for any given EMCC page, you can access it from the Setup menu in the console in EM12c or EM13c:


With that said, Hans Forbrich made a very valuable addition to the thread: how to disable EM management pack access in your OMR database.

Run the following via SQL*Plus as SYSDBA to disable it, using the control_management_pack_access initialization parameter:

ALTER SYSTEM SET control_management_pack_access='NONE' SCOPE=BOTH;


Other packs are disabled using EM Cloud Control with the appropriate privileges in the console, using the Setup menu, (on the appropriate patch level or higher):


The view can be changed from licensed databases to all databases; then you can go through, adjust management packs as licensed, and apply.


Don’t leave yourself open to an audit when Enterprise Manager can make it really easy to manage the management packs you’re accessing.

Posted in ASH and AWR, Database, EM13c, Enterprise Manager, Oracle

November 24th, 2015 by dbakevlar

After my AWR Warehouse session at DOAG in Nuremberg, Germany last week, an attendee asked me if he really needed to use trace files anymore, given the value provided by AWR, (Automatic Workload Repository) and ASH, (Active Session History.)  I responded that trace files are incredibly valuable and should never be discounted.

To use an analogy, when I was 22 years old, and entered a doctor’s office missing a small percentage of my visual field after an apparent migraine, doctors used a number of different “tools” at their disposal in an attempt to diagnose what had happened.  They first started with an Ophthalmoscope to determine the health of my retinas.  They assumed that I was simply experiencing visual disturbances due to a migraine and sent me home with a prescription strength case of Tylenol after verifying that my eyes were healthy.

After another two days passed with no change, the doctors proceeded with a Computed Tomography scan, aka a CAT/CT scan.  This is a standard first-line inspection of the brain and again, it resulted in no information as to what was causing the visual disturbance.  If it hadn’t been for a neurologist that was on duty at the hospital, I may very well have been sent home again.  He asked me about the severe migraine and the actual specifics of the experience, and suddenly realized he was hearing someone describe the symptoms of a stroke.  No one had considered asking what had happened and, since I was in my early twenties, no one had considered a stroke.  At this point, the doctor asked for me to have an MRI, (Magnetic Resonance Imaging) with contrast.  An injection of gadolinium contrast made certain tissues and abnormalities more clearly visible, and for me, it showed that I had experienced a break in the blood vessels in the back right of my brain, causing a small aneurysm, but only a little blood loss.  The main damage was to the brain tissue in the back of my brain which “translates” the information being sent from my optic nerve, and this was the cause of my left-side visual field loss.  With this new information, he was able to start the research that in the coming years saved my life.


This doctor, like a database administrator or developer, knew to use the right tool for the job once he knew what he was up against, and this is how we succeed when others fail in the technical industry.  To be told that you no longer need a tool such as tracing because valuable data is provided by AWR or ASH limits your ability to see the real culprits that those secondary tools weren’t designed to capture.

To know when to use one tool over the other is often the biggest challenge.  A number of years back, a customer was experiencing terrible, overall performance issues in an ETL process that no one had been able to pinpoint.  I always love a good challenge and began to look at it via AWR reports.  I noticed that there were waits that were unaccounted for in the SQL processing.

Execution Plan

| Id  | Operation                       | Name                       | Rows  | Bytes | Cost  | Pstart| Pstop |
|   0 | DELETE STATEMENT                |                            |       |       |   688 |       |       |
|   1 |  DELETE                         | _ITEM_MONTH_ROLLUP         |       |       |       |       |       |
|   2 |   NESTED LOOPS                  |                            |  1490 | 86420 |   688 |       |       |<--This one doesn't add up to time/value consumed.
|   3 |    VIEW                         | VW_NSO_1                   |     1 |    26 |    20 |       |       |
|   4 |     SORT UNIQUE                 |                            |     1 |    40 |    20 |       |       |
|   5 |      TABLE ACCESS BY INDEX ROWID| PROCESS_LOG_MASTER         |     1 |    40 |     4 |       |       |
|   6 |       INDEX RANGE SCAN          | PROCESS_LOG_MASTER_I7      |   132 |       |     1 |       |       |
|   7 |    PARTITION RANGE ITERATOR     |                            |  1490 | 47680 |   386 |   KEY |   KEY |
|   8 |     INDEX RANGE SCAN            | ITEM_MONTH_ROLLUP_I2       |  1490 | 47680 |   386 |   KEY |   KEY |


– dynamic sampling used for this statement (level=8)

– SQL profile “SYS_SQLPROF_013dad8465770005” used for this statement

CPU costing is off and there is a profile on this statement. Stats are up to date, so why am I seeing dynamic sampling at level 8?

I proceeded to capture a trace, and that’s when I was able to demonstrate that due to auto-DOP in 11.2, the optimizer chose to perform dynamic sampling that was adding over 80% to the execution time, impacting performance significantly.

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring blah, blah, blah, blah... :)


call     count       cpu    elapsed       disk      query    current rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1     28.60      48.52     322895      43776     489123        189
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     28.60      48.52     322895      43776     489123        189

Limiting your tools, whether you limit yourself to just tracing or to AWR/ASH, SLOB, Snapper or any other tool, is a poor choice.  Use the right tool for the situation, the one that provides answers to the questions the business relies on you for.  You wouldn’t trust a doctor that used a CAT scan to check a broken bone vs. an X-ray, so why would you trust an optimization specialist that limits themselves to one set of tools?

Posted in Database

October 13th, 2015 by dbakevlar

So I’ll be heading out for Oracle Open World 2015 in just over another week.  This is the first year in a while that I won’t be speaking, (my own fault, as dummy here hasn’t figured out how to submit sessions internally yet, duh!)   but I’m still going to support the demo grounds, meet with customers and speak at Oak Table World!


Per Steve Karam, if you need to know who’s speaking and why you should attend the sessions at the Children’s Museum, well here’s a few:

Jonathan Lewis – Yes. That Jonathan Lewis. Also an alien of extraordinary ability.
Tanel Poder – Doer of things you didn’t know SQL could do and the snappiest snappermaker
Alex Gorbachev – Jack of all trades, master of…well, all of them really.
Kellyn Pot’Vin-Gorman – Cloud Control control, techno wiz, and data lacquerer
Tim Gorman – Kellyn’s hype man. And world renowned Oracle expert.
Kevin Closson – Refusing to party like it’s 1999 since 1999
Gwen Shapira – A fifty petabyte machine learning datastore in human form
Cary Millsap – His advice has probably saved your database and career
Oak Table World – The best Oracle speakers outside of OpenWorld. Literally, not figuratively.

Now if that doesn’t sum it up, well, let’s be honest, there are over 60 members and more speaking than listed here, so if you have time, it’s the best “unconference” EVER.

With that said, we’ll get back to why everyone is there in the first place and it’s because of my awesome employer, Oracle!


I’m busy preparing my new sessions for after Oracle Open World, as I have three conferences in November, (San Antonio, Chicago and Germany) and ensuring the customers I’m in charge of for OOW preparation are ready to go when the stage lights go up!  It’s a fun kind of crazy chaos, the kind I commonly thrive under, so we’re all good!

With that said, there are a few events I’m looking forward to-

Sunday, Oct. 25th, 8:30am- Bridge Run with Crazy Oracle Peeps

Yeah, me and my arthritis will be walking that bridge, but it’s still a great event and I’ll be going.  I’m not doing the bay swim or the bike event.  Enough is enough, you crazy, exercise nuts! 🙂

Monday, Oct. 26th, 1:30pm- Managing at Hyper-Scale: Oracle Enterprise Manager as the Nerve Center of Oracle Cloud [CON9710], Moscone South–300

I have wonderful peers at Oracle and Akshai Duggal is one of them!  She’s going to be explaining how our SCP teams build out the REALLY massive Enterprise Manager environments, ensuring uptime, clear and concise monitoring, along with understanding pesky things like metrics and templates.  Check out her session with real use cases, how can you beat that?

Monday, Oct. 26th, 5pm- in the RAC Attack Area-  Enterprise Manager SIG from IOUG and Oracle.

Come meet up with all the top EM specialists in the world as we discuss the best topics surrounding Enterprise Manager and enjoy the hospitality of the OTN lounge area!


Monday, Oct. 26th, 6pm, the ODTUG Reception, (another invite only event…)

Another event I look forward to every year and even though I’m over booked this year, I’ll make sure to stop in for this, too!

Tuesday, Oct. 27th, 10-10:50am- Oak Table World-  Stewart Bryson and I are going to talk about OBIEE and AWR Warehouse.  Children’s Museum 

We just wanted to play around a bit and see how much trending and analysis we could do with the product and had a great time building this out.  Come check it out and then head on over for….

Tuesday, Oct. 27th, 11am-  General Session: Oracle Management Cloud—Real-Time Monitoring, Log, and IT Operations Analytics [GEN9778]  Moscone South—102

The big boss is talking, no not Larry, but my boss, Prakash Ramamurthy!  He’s going to be discussing the coolest of the cool in new stuff that I’m totally obsessed with right now and if you miss out on this session, you’re going to be missing out on the best!

Tuesday, Oct. 27th, 12:15pm, Oak Table Ted Talk!  Secret Topic! Children’s Museum

I can’t tell you what I’m going to talk about, but I’m going to tell you what I can’t tell you now.  Yeah, I really just said that… 🙂

Wednesday, Oct. 28th, 11am- Using Oracle Enterprise Manager Effectively to Become an Oracle Enterprise Manager Champion [CON9711] Moscone South 104

Courtney Llamas and a slew of great folks will be on stage talking about how to become an EM Champion!

Wednesday, Oct. 28th, 4:15pm- Hybrid Cloud—Pivot to the Cloud with Oracle Enterprise Manager Cloud Control [CON9712] Moscone South 300

I really enjoy working with Steve Steltling in our SCP team and he’ll be presenting with Phillip Brown on Hybrid Cloud!

Wednesday, Oct. 28th- Bloggers Meetup from Pythian…..I’m waiting….still waiting….no invite yet….Where is it?

This is an event that I always enjoy and wait for the invite from Pythian each year, but nothing yet and will be pestering Alex Gorbachev until it shows up… 🙂

Wednesday Oct. 28th, evening- NOT GOING TO TREASURE ISLAND

Yeah, you heard that right.  Every year, even when I did get a wrist band, I gave it to someone deserving and then Tim and I enjoy the quiet city for a dinner out in ANY restaurant we like… 🙂  We just aren’t much into concerts and love the opportunity to allow someone else to go that may not have by giving them Tim’s band, (now that I’m with Oracle, I no longer get one…)

Thursday, Oct. 29th, 12pm- Oracle Management Cloud Best Practices for Mining Oracle Enterprise Manager Data [CON9714] Marriott Marquis—Salon 10/11

Werner DeGruyter, our EM Yoda, and Shailesh Dwivedi are going to cover a great topic.  I love mining Oracle EM data, so if that’s something that interests you, don’t miss out on this session!

If you want to see all the best of the Enterprise Manager sessions, here’s the Oracle Open World Schedule specific to EM!

Friday is the flight home, pick up my dog, DaVinci from the pet hotel and find out if my youngest son, Josh, still knows who I am!



Posted in Database, DBA Life

May 4th, 2015 by dbakevlar

There may be a reason that one needs to re-add a database to the AWR Warehouse.  This is a new opportunity for me to learn and offer assistance, but I’m working off the grid to figure out a solution.  Luckily, it’s just a matter of creating a new process from pre-existing code and processes.

The process requires both the AWR Warehouse UI and a SQL*Plus connection.

1.  Disable the snapshots

2.  Extract the AWR for the Database

3.  Drop the database from the AWR Warehouse

4.  Add the database back into the AWR Warehouse with the retention time or any other configuration change.

5.  Update the configuration table in the source database to show the correct snapshots.

Disable Snapshot Uploads

Log into the AWR Warehouse console and locate the database that needs to be re-added:


From the AWR Warehouse UI, click on the database and then choose to disable the snapshot uploads for the database in question.



Extract AWR Data for the Database

From the AWR Warehouse host, log into the AWR Warehouse repository, (database) and extract the AWR data.  This uses the formal AWR extract script found in the $ORACLE_HOME/rdbms/admin directory, and yes, it works with the AWR Warehouse.

~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
SQL> @$ORACLE_HOME/rdbms/admin/awrextr.sql;

A list of databases will be displayed that you can choose from.  In the AWR Warehouse, all the source databases will be shown:

Databases in this Workload Repository schema
DB Id DB Name Host
------------ ------------ ------------
9352681 APPSB1 HOST1
4212609618 AASDB  HOST2
4212608695 AASDB HOST3
so on and so forth....

Enter in the DBID for the database from the list.  We’ll use the first AASDB from the list for our example:

Enter value for dbid: 4212607231
Using 4212607231 for Database ID
Specify the number of days of snapshots to choose from
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:
Listing all Completed Snapshots

As I didn’t enter a number for the number of days, you can see I want ALL of the AWR data for this database.  Now the extract script will ask for snapshot information.  Again, we want all of it, and the report will show us all of the snapshots.  If there are so many that you can’t see the first one, use the AWR Warehouse UI to find the first and last snapshot for this step.

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 1777
Begin Snapshot Id specified: 1777

Enter value for end_snap: 2303
End Snapshot Id specified: 2303
Specify the Directory Name
Directory Name Directory Path
------------------------------ -----------------------------------
DATA_PUMP_DIR /u01/admin/AWRWHSE/dpdump/
OPATCH_INST_DIR /u01/home/oracle/
OPATCH_LOG_DIR /u01/home/oracle/
OPATCH_SCRIPT_DIR /u01/home/oracle/

Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: DATA_PUMP_DIR
Using the dump directory: DATA_PUMP_DIR
Specify the Name of the Extract Dump File
The prefix for the default dump file name is awrdat_1777_2303.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name: awrw_AASDB
Using the dump file prefix: awrw_AASDB
End of AWR Extract
SQL> !ls -la /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
-rw-r----- 1 oracle oinstall 268779520 May 4 16:49 /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp

Now that the extract is completed and the data is safe, it’s time to return to the AWR Warehouse console.


Remove the Database from the AWR Warehouse

Click on the database in the AWR Warehouse and choose to remove it from the repository.



Once the database has been removed, go through the proper steps to add it again to the AWR Warehouse.  Once added, disable the upload of snapshots.  This prevents concurrency and other issues until we’re finished.

Reload the AWR Data

Log back into the AWR Warehouse repository database; it’s time to load the data back into the AWR Warehouse.


Specify the Name of the Dump File to Load
Please specify the prefix of the dump file (.dmp) to load: /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
Pressing <return> will result in the recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace: MGMT_TABLESPACE

As the space will be used at some point anyway, (the MGMT_TABLESPACE is expected to grow), there’s no concern here, and the temporary user utilized for the process will be dropped at the end.

Choose to use the standard TEMP tablespace and proceed forward past this step.  As the AWR objects are already built into the MGMT_TABLESPACE, there is no need to remap from the SYSAUX as the process in a standard database would require.

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "AWR_STAGE"."WRH$_SQL_PLAN"                 89.485 MB   329981 rows
. . imported "AWR_STAGE"."WRH$_SQLTEXT"                  192.34 MB    398276 rows
. . imported "AWR_STAGE"."WRH$_SYSMETRIC_SUMMARY"        232.12 KB    1129472 rows
	Append Data for "AWR_STAGE".WRH$_SGASTAT.

Update Metadata Table

Now that we’ve loaded the data back in, we need to update the metadata table to show that more than the initial data exists.

select * from dbsnmp.caw_load_metadata
where dbid=<dbid for database in question>;
aasdb3 oracle_database 4212607231 1777 1778
27-APR-15 27-APR-15 3 1

Update the table to reflect the new data; there should be at most one upload that’s been sent over. If there is more than one, change the update statement to reflect only the row with the max(end_snap_id), as the AWR Warehouse loads from the lowest snap_id to the most recent.

select * from dbsnmp.caw_load_metadata
where dbid=4212607231;
Update dbsnmp.caw_load_metadata
set end_snap_id=2303
where dbid=4212607231
and dump_id=<most recent dump_id>;

If there isn’t any data at all, then force one upload and then edit it.  This is easily done from the Console:



Proceed back and update the row in the metadata table, then check the UI to verify that the beginning and end snapshots look correct.  You should now have all the AWR data loaded back into the AWR Warehouse! And that’s how to reload AWR data from a database that must be removed and re-added to the AWR Warehouse.

This is a new process for me, so please, let me know if you note anything amiss and I’ll be glad to research it!


Posted in AWR Warehouse, Database, Oracle

April 9th, 2015 by dbakevlar

Here’s a quick guide to my schedule of sessions at IOUG Collaborate 2015 for this year.  I’m looking forward to seeing everyone next week, (I’ll be arriving on Saturday, so I really should say in a couple days) and have no doubt we’re up for another phenomenal conference at Las Vegas, this year at the Mandalay!

Additionally, there are some great opportunities in professional development at the IOUG Strategic Leadership Program.  I’ll be speaking with James Lui on how to improve your value with personal brands and social media.

I’m the WIT luncheon speaker on Monday, so for those of you that signed up for this great, yearly event for women in tech before it sold out, I’m looking forward to meeting as many of the powerful women in our industry, (and hopefully a few men, too!) before and after the event.

There are also a few SIG’s that I’ll be attending that aren’t on the schedule:

Tue, 12:30 p.m. – 1:00 p.m.: Cloud Computing and Virtualization SIG Meeting (ID: 943)

Wed, 12:30 p.m. – 1:00 p.m.: Oracle Enterprise Manager SIG Meeting (ID: 949)

Session 976, Sun. Apr. 12, 9:00 am – 1:00 pm, Palm D
Everything I Needed to Know About Enterprise Manager I Learned at COLLABORATE – Hands-on Lab (Oracle Enterprise Manager; Manageability)

Session 0, Mon. Apr. 13, 9:15 – 10:15 am, Banyan B
Zero to Manageability in One Hour: Build a Solid Foundation for Oracle Enterprise Manager 12c (Oracle Cloud – DBaaS/PaaS; Oracle Database; Oracle Enterprise Manager; Manageability)

Session 112, Mon. Apr. 13, 3:15 – 4:15 pm, Banyan B
The Power of the AWR Warehouse (Oracle Enterprise Manager; Manageability)

Session 967, Mon. Apr. 13, 4:30 – 5:30 pm, Banyan E
IOUG Strategic Leadership Program: Staying on Top of Key Trends and Growing Your Personal Brand with Social Media (Applicable to All; Professional Development)

Session 1003, Wed. Apr. 15, 2:00 – 3:00 pm, Mandalay K
OakTable: TED-Style Lightning Talks (Applicable to All; Database)

Session 986, Mon. Apr. 13, 12:45 – 1:45 pm, Banyan D
Twitter 101 for Oracle Professionals (Applicable to All; Professional Development)

Thanks to everyone at IOUG for the opportunity to be so involved with this great conference and see everyone soon!

Posted in Database, DBA Life, Enterprise Manager

October 27th, 2014 by dbakevlar

I’ve just returned from my honeymoon in Venice with the wonderful Tim Gorman, but before we enjoyed our week in this picturesque city, we spent a week doing joint keynotes and two sessions each on our own tech topics at these two incredible conferences in neighboring countries.

SIOUG, (Slovenia Oracle User Group) in Ljubljana, Slovenia held their 2014 conference Oct. 13th-14th at the Plaza Hotel.  Tim and I did a joint keynote, “Assumptions, the Death of First Dates and Tech Projects Around the World” which was great fun, as it’s loaded with a number of Pulp Fiction stills that transcends any language barrier and gets everyone finding humor in the obvious, but also understanding how important it is to never assume anything in the tech world.

As soon as we arrived via car, driving in from Venice, we were treated to a wonderful, traditional Slovenian dinner by Joze, @joc1954, and Lily Senegacnik, and started hanging out with Milena Gerova, @milenagerova and Nikolay Kovachev, (both from the Bulgarian Oracle User Group).  A great time was had by all and the conference was wonderful!  Throughout both conferences, Milena and Nikolay were in the middle of a number of technical and social events, always having a great time!


Once finished with the two day conference in Slovenia, we then took a wonderful drive down to Croatia to present the same sessions and keynote at the HROUG, (Croatian Oracle User Group) fall conference in Rovinj, Croatia.  This is a beautiful small town on the coast and we were not disappointed.

I’d met the HROUG president at the IOUC summit back at the end of September and seeing him in his “element” was well worth the time!  Davor is bigger than life and treated us as if we were all family.  He sang with the wonderful band that played at many of the events and ensured we had both a tour of the wonderful town of Rovinj, along with an incredible meal!


The actual 2014 conference was held at the Hotel Istra on the island near Rovinj.  It was an absolutely beautiful location for a keynote and tech sessions.  We were humbled by the incredibly positive feedback Tim and I received from both conferences’ keynote attendees and thrilled that it was received so well.  We also appreciated the support of Zoran Pavlovic, @ChallengeZoran and Maja Vaselica, @orapassion, (Serbian Oracle User Group) as we pointed them out as a fellow and rare Oracle couple.  It was wonderful to finally meet them and get to know these two wonderfully warm and technically bright folks!  I was also impressed by how many folks attended the advanced EM12c concepts session, and of course, the AWR Warehouse had a great turnout, as expected.


Both conferences are well worth the time and effort, my only recommendation is, if you decide to drive there with a rental car, just make sure that you purchase a Vignette BEFORE coming into the country, (or Switzerland, Austria or Bulgaria, too!) I’ll leave you folks to ask Tim what happens when you don’t! 🙂



Posted in Database, DBA Life, Eventify Events

January 1st, 2014 by dbakevlar

**Note-  This is an article I wrote about 4 years ago that, as far as I know, can only be found in hard copy.  Although I’ve changed my stance on statistics, etc. with 11g/12c, the main point of CTAS benefits still stands the test of time.

As a kid, did you ever watch Hanna-Barbera’s cartoon “The Jetsons” and see Elroy drop out of George Jetson’s flying saucer, going to school via his personal hyper-pod, and think, “forget the bus, I want to go to school in the hyper-pod!”?

For any developer or DBA working with large objects in an OLAP environment, updating and deleting large tables, even when partitioned, can feel like you are on that school bus again.  So how do you get off the school bus and ride on Elroy’s hyper-pod to get things done, “Jetson’s Style”?

In my time working with large objects, I’ve seen and discovered more ways to use the CTAS, (“create table as” for those who are new to the acronym!)  than I ever thought possible.  It should be a pact of any DBA or developer in a warehouse or mart environment to have all the ways this feature can be used added to their arsenal.

For our current day vs. Jetson’s futuristic comparison, we’re going to say our environment consists of data partitioned by company_id, residing mostly in five primary tables.  This partitioning method has been quite successful for how the business loads and reports on data, but your first challenge is to prune out all purchase orders older than five years from the system.   Once complete, the business has some reporting requirements for you to invest time into, so keep that mind open and ready to work through some challenges!

The tables involved have the following info, examples of columns and sizes:

COMPANY_ACCT, 253 columns,  296GB, 1000 partitions total.
C_ID (partitioning key)
… so on and so forth…

COMPANY_TRANS,120 columns, 312GB, 1000 partitions total.
C_ID(partitioning key),
… so on and so forth…

ID_COMP_DIM, 412 columns, 212GB, 1000 partitions total.
C_ID(partitioning key),
… so on and so forth…

GEO_COMP_DIM, 82 columns, 180GB, 1000 partitions total.
… so on and so forth…

The traditional OLTP processing method to tackle the removal would be to either:

  • Delete with a where clause against the tran_dt or last_act_dt, having an index on these columns.
  • Bulk process the deletes, using the same method but performing in parallel, using the c_id, (company id), to allow for the most efficient performance.

How could you perform the delete faster?   How could you accomplish this task without actually utilizing a delete statement at all?

Elroy Jetson would say “CTAS it!”  Anything that can be done, can be done faster, more efficiently and more suave when it’s done with a CTAS.  Pair up CTAS with the exchange partition feature and you have a match made in heaven- opportunities here that anyone can appreciate.

Now to return to our challenge at hand-  let’s take our first table, the COMPANY_ACCT table and remove all the accounts that haven’t been active for over five years.

The traditional way,  version 1:
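The code sample for version 1 did not survive the formatting of this post; a hedged sketch of the traditional delete, assuming the last_act_dt column described above, would look something like:

```sql
-- Traditional delete: prune accounts inactive for more than five years
delete from company_acct
 where last_act_dt < add_months(trunc(sysdate), -60);
commit;
```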


Or version 2, process via a procedure and pass in the company id and process multi-thread or in parallel:
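A sketch of version 2, processing per company id so that multiple worker sessions can each take a partition (the procedure name and predicate are illustrative):

```sql
-- Illustrative only: delete one company's rows at a time so workers can run in parallel
create or replace procedure prune_acct (p_c_id in number) as
begin
  delete from company_acct
   where c_id = p_c_id
     and last_act_dt < add_months(trunc(sysdate), -60);
  commit;
end;
/
-- Each worker session then calls prune_acct(<its company id>).
```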


After each of these steps, you will have stale statistics and considerable empty blocks-  blocks that will be scanned on every hash and table scan that are empty until you “coalesce” this partition.   This is something that you must keep in mind when you process data the “old-fashioned” way.
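To reclaim those empty blocks after an old-fashioned delete, the partition segment has to be reorganized. One way to do it, assuming a list partition and a local index as in our example (the index name is illustrative):

```sql
-- Rebuild the partition to release empty blocks, then fix the invalidated local index
alter table company_acct move partition p_1029;
alter index company_acct_pk rebuild partition p_1029;
```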

How much more efficient would it be to process this via a CTAS with an exchange partition?

Note**  No delete in the following statement!!
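The statement itself was lost from this post, so here is a sketch of the idea: create a table holding only the rows we want to KEEP (the column and date logic are illustrative):

```sql
-- Build the replacement segment with only the rows to keep: no delete involved
create table company_acct_229 nologging parallel 8 as
  select /*+ parallel(ca 8) */ *
    from company_acct partition (p_1029) ca
   where ca.last_act_dt >= add_months(trunc(sysdate), -60);
```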


Update statistics, and note that I use very, very small sample sizes for my estimate_percent.  I have very consistent data across my data sets, but very large amounts of data.  This gives me the luxury of collecting stats without the time required for larger sample sizes.  Due to the number of partitions, I am careful to gather only what I need so as not to impact the performance of my data dictionary.  I do not like seeing long ops on tables such as col$, tab$, ind$, etc.  If you see this, look into the amount of statistics being gathered-  are you gathering what you need, or just flooding the data dictionary with data that cannot give you the consistent and solid execution plans that are your goal?

Exec dbms_stats.gather_table_stats(ownname=>'OWNER', tabname=>'COMPANY_ACCT_229', estimate_percent=>.00001);

We are now ready to exchange our partition-
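The exchange is a near-instant data dictionary operation; a sketch, using the table we just built:

```sql
-- Swap the pruned copy in for the original partition's segment
alter table company_acct
  exchange partition p_1029 with table company_acct_229
  including indexes without validation;
```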


Your delete of the data for this partition is complete.  There is no fragmentation, you have updated the stale statistics, and you are as clean as possible for all processing going forward.

What is the cost savings?

Original delete process per partition, without the update of statistics, on average: 28 min.

CTAS and exchange partition, including index creation, minus the statistics update, on average: 7 min.

What if you’re challenged to reproduce the whole table, all the partitions, without the older-than-five-years data?  How can you complete this task with the CTAS method?

create table COMPANY_ACCT2 (
  C_ID                     NUMBER not null,
  C_NBR                    VARCHAR2(30) not null,
  ACCT_TYPE_CD             VARCHAR2(1) not null,
  ...   <- columns to be included in the new table
)
partition by list (C_ID)
( partition P_1001 values ('1001'),
  partition P_1022 values ('1022'),
  partition P_1023 values ('1023'),
  partition P_1024 values ('1024'),
  partition P_1025 values ('1025'),
  partition P_1026 values ('1026'),
  partition P_1029 values ('1029'),
  partition P_1031 values ('1031'),
  ... )  <- remainder of the partition list
<storage parameters, including parallel> PARALLEL (degree 8)
AS select /*+ PARALLEL(ca 8) */

Create the indexes that exist on the original, update the statistics on the secondary and then perform a rename-  renaming the original to “old” and the COMPANY_ACCT2 to the original name, COMPANY_ACCT.
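The rename swap at the end can be sketched as below (any indexes and constraints would need the same renaming treatment):

```sql
-- Retire the original and promote the pruned copy under the original name
alter table company_acct  rename to company_acct_old;
alter table company_acct2 rename to company_acct;
```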

What are the time savings, performing this way vs. original:

Delete Statement, in parallel, against all the partitions, WITHOUT update of statistics:  23 hrs, 12 min.

CTAS and rename, INCLUDING update of statistics:  7 hrs, 4 min.

This is nearly a 70% time savings-  that much less maintenance window out of the DBA’s personal time or outage to the business.  How can you say no to this?

Dimensional data the CTAS way…

As a special request for the business, a developer needs to create a table with multiple joins to three of the largest tables.   The developer makes multiple attempts, but must stop due to performance hits on the database.

SELECT  /*+ parallel(g 8) parallel(i 8) use_hash(g i) */
            DECODE (i.c_id,
                    0, 'N',
                    NVL2 (g.geo_tract, 'B', 'Z'))
               AS cmatch,
            g.next_col2,
            g.next_col3,
            g.next_col4,
            g.next_col5,
            ...
            g.next_col19
     FROM   geo_comp_dim g, id_comp_dim i
    WHERE   g.c_id = i.c_id;

The explain plan shows the impressive cost of this query, along with the outrageous amount of temporary tablespace required to perform the hash and sort.

Explain Plan (abridged): the hash join between the two tables required 2.4 TB of temporary tablespace.
What carries the heaviest weight in the cost of this query?

The hash join between the two tables-  but what does the query really need from the two tables?  It needs almost everything from GEO_COMP_DIM, which is a relatively small cost vs. ID_COMP_DIM, from which the developer needs only three columns of the total 412!

Those of us who have studied hash joins know to expect the hash join to require roughly 1.6 times the size of the table we lead in with for our hash.  What if we created a smaller lead table for the hash, then?

CTAS to the rescue again!
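A sketch of the narrow “dimensional” CTAS, assuming c_id plus the two other columns the query actually touches (the column names are illustrative):

```sql
-- Build a three-column copy of the 412-column dimension to lead the hash join
create table id_comp_cid_data nologging parallel 8 as
  select /*+ parallel(i 8) */ i.c_id, i.col_a, i.col_b
    from id_comp_dim i;
```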


Replace the original ID_COMP_DIM table in the query with ID_COMP_CID_DATA.  What is the performance improvement?


Explain Plan (abridged): the same shape, a HASH JOIN fed by PX RECEIVE operations, now against the narrow table.
Same explain plan, but a different, narrow table to work with.  The time savings is considerable, too:

Original:  7 hrs, 16 min.

Same Query off of CTAS “dimensional”  table:

Creation of ID_COMP_CID_DATA: 10 min

Query:  28 min.

Total:  38 min.

The business has now been asked to change how they report data.  Originally, the data was represented as yes or no, but it now needs to be presented as values requested by the client, who is no longer satisfied with Yes or No.

A simple CTAS of the table with a decode of the columns to the new values will create the new table in the format you need without having to perform a standard update.

create table COMP_BUS
 tablespace tbl_data1
    pctfree 10
    initrans 96
    maxtrans 255
 STORAGE (FREELISTS 48)  nologging as
 select /*+ parallel(i,8) */
         i.c_id
       , DECODE(i.sold_flag, 'Y', '1', '0') AS sold
       , i.pass_id AS p_id
       , TO_NUMBER(DECODE(i.sl_dt, null, null, i.sl_dt  - TO_DATE('01-JAN-1999', 'DD-MON-YYYY'))) AS sl_dt
from DM_OWNER_A.comp_bus_dim i ;

create index bus_cid_idx on DM_OWNER.comp_bus (c_id ) tablespace  BUS_INDX1 parallel 4;

exec dbms_stats.gather_table_stats…

ALTER INDEX bus_cid_idx noparallel;

None of the concern over empty blocks or performance issues that would be experienced with an update applies to the CTAS.  Create the table, create any indexes, update statistics and rename the table.  The steps may seem more involved, but the time savings and the CTAS’s ability to pay forward in performance benefits are significant.

Here’s where you have to ask yourself- Do you want to be on the yellow school bus or the hyper-pod?

Features that can be utilized in a CTAS that should be investigated for performance benefits:

  • No-logging
  • Parallel/parallel DML
  • Compression for space savings
  • Initial transaction settings
  • Partitioning/sub-partitioning
  • In some specialized cases-  freelists/freelist groups instead of ASSM.
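Several of those features can be stacked in a single statement; a hedged example combining nologging, parallel and compression (table names are illustrative):

```sql
-- Nologging, parallel, compressed CTAS in one pass
create table company_trans_new
  compress
  nologging
  parallel 8
as
  select /*+ parallel(t 8) */ *
    from company_trans t;
```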

In the world of data warehouses and marts, theory says anyone should be able to easily update or delete rows without issue, but in reality, there are more efficient ways to process large amounts of data.  We, as DBAs and developers, must be more pro-active, more futuristic-  more “Jetson”-like.

Posted in Database, Oracle

March 26th, 2012 by Kellyn Pot'Vin

Congratulations to my coworker, Andy Colvin who just received his Oracle ACE!! Just one more immensely technically gifted individual who is very deserving of this award at Enkitec and makes me proud to be part of such a great group of folks!

Posted in Database

March 17th, 2012 by Kellyn Pot'Vin

Originally submitted at REI

The REI Monorail Sling bag has the volume of a daypack and the features to meet all the challenges of the daily urban grind. Outdoor-inspired details make hauling the load comfortable and convenient.

Light, Sturdy, Roomy and Comfortable!!

By dbakevlar from Westminster, CO on 3/17/2012


5out of 5

Pros: Lightweight, Easy To Access Items, High Quality, Comfortable, Good Capacity, Easy To Carry, Stylish

Cons: No Zipper pocket in main

Best Uses: Weekend Trips

Describe Yourself: Modern, Stylish, Practical, Comfort-Oriented, Career

Travel Frequency: 6-10 Annual Trips

Primary use: Business

Was this a gift?: No

Going on both regional and international trips this year and needed a bag to carry both my ultrabook, tablet and accessories. This bag sits close to the body and is extremely comfortable. I have arthritis in my neck and shoulders, so for anyone needing extra comfort for long trips through airports, cities and towns- this is a great bag!!


Posted in Database

March 13th, 2012 by Kellyn Pot'Vin

I will be presenting for the first time as an Enkitec employee in Colorado Springs, March 22nd.  Due to a scheduling conflict, though, it’s on a Thursday evening for once!  All you SQL Server folks out there that want to come on out, here are the details:


Posted in Database

March 9th, 2012 by Kellyn Pot'Vin

I’ve been again, unfortunately, reminded what makes a poor work environment for a DBA or Developer.  Eddie Awad had sent this out and the second one really resonated with me:

I refer to this in the IT world as the “technical bully syndrome” and it’s quite common in the arena.  It’s not because it’s a male dominated group- I’ve seen bullies come in all shapes and sizes, along with being of both genders.  In fact, there are a couple studies out there that say women managers and leads are more likely to bully a subordinate than a male is.

Now there isn’t any law against having a bad boss/manager/lead.  I lucked out in that I actually have quite a good manager where I currently work, but even a good manager has to note the cost to the business when people report to someone who does bully.  The individual pinpointed as the bully will most likely get along very well with 90% of the folks around them.  They have a consistent need, though, to pick out an “outlet” for their aggressions and frustrations, and may have a small set of subordinates or peers that they switch between, bullying in intervals.  Often, as the bully’s target starts to escalate, or simply threatens to escalate, to management or HR, the bully will move their bulls-eye onto another target.

The cost?  Time, resources, revenue and productivity to the business.

It takes time to bully someone- time out of the bully’s day, but also time out of the target’s day when they could be productive.  The target can also be highly impacted post the incidents, feeling frustrated, disrespected and devalued.  The time lost then results in productivity loss to both parties, which in turn then costs the company revenue.  If left to continue for very long, resources are lost and high-turnover becomes the norm.

The target, who may have been highly productive in the past will start to show less results and seem to have conflicting requirements.  They may seem frustrated and angry, feeling unable to understand how to address the problem successfully or no matter what changes they make to their work style, without management intervention, they are unable to find a way to avoid the bully’s wrath.

The bully often chooses targets that are not weak, but strong.  Peers or subordinates that make them feel threatened, no matter if the threat is real or imagined.  They will continually find ways to undermine the target and make them seem insignificant.  If the target does decide to choose a defensive pose, becoming angry or responding in kind, this can escalate the behavior from the bully.  Often the bully does not see themselves as doing anything wrong!  They may be completely unaware of their behavior, having grown up with it and/or may live with similar disrespect in their personal relationships currently.

When I have been a target in the past, it was often either because my technical skills threatened the bully, who then focused on trying to convince everyone those skills were not sound and/or gave conflicting requests that made it impossible for me to meet requirements, or because I reminded them of someone, (often a wife or mother..) with whom they had long-time issues, and they felt it was acceptable to use me as an outlet when they became frustrated.

There are those that were equal-opportunity bullies.  It didn’t matter if you were male, female, what race you belonged to or your position- as long as you were below them, you were fair game and you simply did your best to avoid their bulls-eye.

I have stayed employed at a company where I was a target on and off for a couple years, until someone figured out what was going on and rid the company of the bully that was causing the high turnover and tumultuous work environment.  The DBA group at that company enjoyed quite peaceful, productive years post the bully’s departure.

I have worked for another company for a very short time, where I saw the bulls-eye aiming for me and left so fast that the bully’s head spun.

No one knows what a DBA does but a DBA.  No one knows what a good DBA is except other DBA’s, (and then some would argue that there are a few that are heavily followed that would not be if most DBA’s only knew… 🙂 )  As a DBA who has seen how well IT groups can work together when everyone respects each other and the skills each person offers, and how poorly the group performs and how high turnover becomes when a bully exists in the group, I hope more managers take notice.

Yes, it’s not against the law to have a bully in your midst, but it should be a crime to allow it to damage the productivity, growth, resources and revenue for the company.

Posted in Database, DBA Life

February 28th, 2012 by Kellyn Pot'Vin

Kellyn’s Schedule of Presentations, 2012, 2nd and 3rd Quarter
March 24th, 2012, Colorado Springs, CO
SQL Saturday
Oracle for SQL Server DBA’s

April 19th, 2012, Billund, Denmark
MOW 2012
ASH Analytics- Top Activity, the Next Generation

April 20th, 2012, Billund, Denmark
MOW 2012
EM12c: Making Oracle Enterprise Manager Work for You

May 30th, 2012, Hämeenlinna, Finland
OUG Harmony 2012
EM12c: Making Oracle Enterprise Manager Work for You

June 1st, 2012, Riga, Latvia
JUV Harmony 2012
EM12c: Making Oracle Enterprise Manager Work for You

June 26th, 2012, San Antonio, TX.
EM12c: Making Oracle Enterprise Manager Work for You

Posted in Database

February 27th, 2012 by Kellyn Pot'Vin

Yes, my dear Watson, Bind Peeking is still an issue in 11g….

We recently upgraded and came across a couple of interesting performance challenges. I do like this type of problem and have been quite busy working on the fixes for each one that has shown itself. This was one of the latest, requiring a solid, positive relationship between the developer and the DBA to solve. The DBA may have the answer, but the developer must be comfortable with the solution and make the code/design change that is required to correct the issue. I do work with a number of such developers, so this was a quick fix in our fast-paced environment.

Oracle’s Cost Based Optimizer makes its best decisions when the design and the code are in accordance with “best practices” as well. I know the term “best practices” is a bit over-used, so we will first discuss what this means.

In the example I’ll be using in my case below, the following are the design practices that should have been applied when the objects were first introduced to production, but were not. I wanted to include the example so that folks could see just how many different ways Oracle can slice and dice the data when deciding what path to take through the database to complete a process.
Best practice in this case would be:
1. Partitions with similar sizing.
2. No truncated partitions.
3. Single-column partitioning keys.
4. Stage tables that are utilized for a process and then promptly emptied after the completion of the process.

Performance challenge on one step in a feed process. This process utilizes the following steps:
1. Chooses a “partition ID”, which is the partitioning key on a stage table. There is no logic to the partitioning key, (p_id); it is simply chosen from a list of truncated partitions marked as ready to be utilized.
2. Inserts into this partition. Some loads can be quite small, (less than 1M), others quite large, (100G+).
3. A merge statement is then performed on the stage table to delete from the partition any duplicates and other non-loading indicators.
4. This partition is then used to join to the final table to decide what is inserted, updated and deleted from another partition. The final partition has the same design vulnerability of askew sizing in its partition data.
5. The partition in the staging table is then truncated.

First Step by the DBA:

Twelve different explain plans were found to have been used for the insert statement in question.
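One quick way to spot that kind of plan instability (how a count like twelve plans is found) is a query along these lines against the shared pool:

```sql
-- SQL statements that have executed under more than one plan
select sql_id, count(distinct plan_hash_value) as plan_count
  from v$sql
 group by sql_id
having count(distinct plan_hash_value) > 1
 order by plan_count desc;
```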

Second step by the DBA:
Checked table, partition and column stats.

Select count(column_name) Col_cnt, last_analyzed from dba_tab_columns
 where table_name='CUSTOMER_ACCOUNTS'
 group by Last_analyzed;
 27     1/24/2012 3:09:58 AM
 3      2/23/2012 12:41:23 PM

The columns with the newest date are the indexed columns, which is good in this environment. The bad is the number of histograms on columns: we have already researched and found our waits are on the data dictionary and the scheduler in 11g since moving to collecting histograms on tons of columns, on tons of partitions, that we do not use in any way for where clauses, etc., with no way to notify the database of this in the largely CTAS-type process code base.

So, we now know we do have new stats; the histograms were bloated, with method_opt set to auto, but they are still accurate and up to date. The performance issue started before the date we changed over to the new method option on the stats.

Question by the DBA: How does the database know what histograms to build, and how, when different logic is used depending on the code and a partition can be used by any one of the processes? This results in the CBO making poor decisions based on column stats.
This is quickly verified when column stats are compared between one partition and the next for large feeds using the same logic as the last use of the partition.

Unique keys are multi-column, and the partitioning key is askew on the final tables. This key is in the where clause, so we do have accurate information outside of the CTAS code/stats histogram issue. With more data involving histograms and partition data askew across the partitions, does this leave us vulnerable to bind peeking?

DBA test case for bind peeking:
After pinpointing the hash_value for the plan used by the poor-performing, large-processing version of the issue, detail the values that would have been used for both bind variables.

Replaced the bind variables, one at a time, with the actual values, and noted the change in execution plan.
Noted that when the partition was a literal vs. the bind variable, the execution plan changed drastically and contained valid values for the cost and row count for the process.
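The literal-vs-bind comparison can be sketched with explain plan (the count(*) query and bind name are illustrative; CTS1 and p_id are from this example). Note that explain plan never peeks at binds, which itself makes the generic plan easy to see:

```sql
-- With the bind variable: explain plan cannot peek, so the estimates are generic
explain plan for
  select count(*) from cts1 where p_id = :b2;
select * from table(dbms_xplan.display);

-- With the literal: the plan shows valid cost and row counts for that partition
explain plan for
  select count(*) from cts1 where p_id = 1029;
select * from table(dbms_xplan.display);
```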

Although the :B1 bind variable appears to be just as susceptible to the bind peek, it is the partitioning key for the staging table, (CTS1) that is the true culprit on large processing.

DBA recommendation:
Change from inserting into a partition to performing a CTAS for the logic involved, bypassing any deletes or merges; simply create the table with the data it requires, uniquely named.
Use this table in the insert’s subselect to eliminate the bind peek issue.
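A sketch of the recommended change, trading the shared partitioned stage for a uniquely named working table (the source table, load_id predicate and names are illustrative):

```sql
-- Each feed run builds its own stage: the stats the optimizer sees are sized to THIS load
create table cts_work_20120227 nologging as
  select * from feed_source where load_id = 42;

exec dbms_stats.gather_table_stats(user, 'CTS_WORK_20120227');

-- The insert's subselect now references the working table; with no bind on a
-- partitioning key, the bind peek issue disappears
insert /*+ append */ into customer_accounts
  select w.* from cts_work_20120227 w;
```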


The simple concept behind the “CTS1” table was that it was a “parking lot”, where a process could simply find an empty parking space, (partition), and proceed to utilize it.  The problem is that Oracle can’t figure out if it’s just parked a smart car or a semi in the spot.  To eliminate the issue, the parking lot was removed by going to a working table vs. the partitioned stage table, and the database then simply had to work with whatever size of “vehicle” it had been issued in the process, (post stats collection, of course!)

 SQL_ID 7rjk8mr85s2nq
 S.COL19, S.COL5, S.COL15, S.COL22,
 S.COL29, S.COL31, S.COL33,
 S.COL35, S.COL36, S.COL37,
 S.COL38, S.COL39, S.COL40,
 S.COL43, S.COL44, S.F1, S.F2, S.F3,
 S.F4, S.F5, S.F6, S.F7, S.F8, S.F9, S.F10,
 S.COL45, S.COL46, S.COL47,
 S.COL48, S.COL81
 FROM ( SELECT /*+ use_hash(cts ca) */
 COL43, :B5 AS COL44, CTS.F1, CTS.F2,
 CTS.F3, CTS.F4, CTS.F5, CTS.F6, CTS.F7, CTS.F8,
 CTS.F9, CTS.F10, CTS.COL45, :B4 AS COL46, :B3 AS
 AND CTS.COL72 = 'N' ) S ) X
 ( SELECT /*+ use_hash(ct) */ 1
 AND CT.COL3 = :B1 )
Plan hash value: 653243445
 | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | 1659 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 23 | 110K| 1659 | | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID | CT1 | 1 | 52 | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | 1 | KEY | KEY |
 | 5 | VIEW | | 23 | 109K| 1657 | | |
 | 6 | WINDOW SORT | | 23 | 4232 | 1657 | | |
 | 7 | HASH JOIN | | 23 | 4232 | 1644 | | |
 | 8 | VIEW | | 1 | 56 | 14 | | |
 | 9 | WINDOW SORT PUSHED RANK | | 1 | 52 | 14 | | |
 | 10 | FILTER | | | | | | |
 | 11 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 12 | TABLE ACCESS BY LOCAL INDEX ROWID| CA1 | 1 | 52 | 1 | KEY | KEY |
 | 13 | INDEX RANGE SCAN | CA1 _PK | 1 | | 1 | KEY | KEY |
 | 14 | PARTITION RANGE SINGLE | | 11M| 1346M| 1385 | KEY | KEY |
 | 15 | TABLE ACCESS FULL | CTS1 | 11M| 1346M| 1385 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 1492830811
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 23810 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 23 | 110K| | 23810 | | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID| CT1 | 1 | 52 | | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | | 1 | KEY | KEY |
 | 5 | VIEW | | 23 | 109K| | 23808 | | |
 | 6 | WINDOW SORT | | 23 | 4646 | | 23808 | | |
 | 7 | HASH JOIN | | 23 | 4646 | | 23795 | | |
 | 8 | PARTITION RANGE SINGLE | | 1 | 146 | | 20 | KEY | KEY |
 | 9 | TABLE ACCESS FULL | CTS1 | 1 | 146 | | 20 | KEY | KEY |
 | 10 | VIEW | | 14M| 753M| | 23623 | | |
 | 11 | WINDOW SORT PUSHED RANK | | 14M| 457M| 1293M| 23623 | | |
 | 12 | FILTER | | | | | | | |
 | 13 | PARTITION LIST SINGLE | | 14M| 457M| | 2267 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CA1 | 14M| 457M| | 2267 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 1588626705
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 3699K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 62M| 287G| 2977M| 3699K| | |
 | 2 | PARTITION LIST SINGLE | | 70M| 2165M| | 10018 | KEY | KEY |
 | 3 | INDEX FAST FULL SCAN | CTS1_PK | 70M| 2165M| | 10018 | KEY | KEY |
 | 4 | VIEW | | 62M| 285G| | 546K| | |
 | 5 | WINDOW SORT | | 62M| 9461M| 27G| 546K| | |
 | 6 | HASH JOIN | | 62M| 9461M| 2794M| 182K| | |
 | 7 | VIEW | | 43M| 2301M| | 75523 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 43M| 1315M| 3620M| 75523 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 43M| 1315M| | 13076 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 43M| 1315M| | 13076 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 62M| 6107M| | 5407 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CTS1 | 62M| 6107M| | 5407 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 1672751078
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 235K(100)| | | |
 | 1 | LOAD TABLE CONVENTIONAL | | | | | | | | |
 | 2 | HASH JOIN ANTI | | 1355 | 6522K| 632M| 235K (7)| 00:03:24 | | |
 | 3 | VIEW | | 135K| 631M| | 134K (10)| 00:01:57 | | |
 | 4 | WINDOW SORT | | 135K| 24M| 26M| 134K (10)| 00:01:57 | | |
 | 5 | HASH JOIN | | 135K| 24M| 18M| 132K (10)| 00:01:55 | | |
 | 6 | PARTITION RANGE SINGLE | | 135K| 16M| | 287 (32)| 00:00:01 | KEY | KEY |
 | 7 | TABLE ACCESS FULL | CTS1 | 135K| 16M| | 287 (32)| 00:00:01 | KEY | KEY |
 | 8 | VIEW | | 11M| 610M| | 90184 (13)| 00:01:19 | | |
 | 9 | WINDOW SORT PUSHED RANK| | 11M| 338M| 480M| 90184 (13)| 00:01:19 | | |
 | 10 | FILTER | | | | | | | | |
 | 11 | PARTITION LIST SINGLE| | 11M| 338M| | 38574 (9)| 00:00:34 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CA1 | 11M| 338M| | 38574 (9)| 00:00:34 | KEY | KEY |
 | 13 | PARTITION LIST SINGLE | | 20M| 840M| | 6426 (5)| 00:00:06 | KEY | KEY |
 | 14 | INDEX RANGE SCAN | CTS1_PK | 20M| 840M| | 6426 (5)| 00:00:06 | KEY | KEY |
Plan hash value: 1930179405
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 1572K(100)| | | |
 | 1 | LOAD TABLE CONVENTIONAL | | | | | | | | |
 | 2 | HASH JOIN RIGHT ANTI | | 3033K| 13G| 691M| 1572K (2)| 00:22:44 | | |
 | 3 | PARTITION LIST SINGLE | | 17M| 489M| | 4192 (6)| 00:00:04 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CTS1_PK | 17M| 489M| | 4192 (6)| 00:00:04 | KEY | KEY |
 | 5 | VIEW | | 5152K| 23G| | 246K (9)| 00:03:34 | | |
 | 6 | WINDOW SORT | | 5152K| 835M| 909M| 246K (9)| 00:03:34 | | |
 | 7 | HASH JOIN | | 5152K| 835M| 619M| 155K (11)| 00:02:15 | | |
 | 8 | PARTITION RANGE SINGLE | | 5152K| 560M| | 11305 (31)| 00:00:10 | KEY | KEY |
 | 9 | TABLE ACCESS FULL | CTS1 | 5152K| 560M| | 11305 (31)| 00:00:10 | KEY | KEY |
 | 10 | VIEW | | 11M| 608M| | 69790 (16)| 00:01:01 | | |
 | 11 | WINDOW SORT PUSHED RANK| | 11M| 325M| 478M| 69790 (16)| 00:01:01 | | |
 | 12 | FILTER | | | | | | | | |
 | 13 | PARTITION LIST SINGLE| | 11M| 325M| | 20402 (16)| 00:00:18 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CA1 | 11M| 325M| | 20402 (16)| 00:00:18 | KEY | KEY |
Plan hash value: 2157924392
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 267K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 4459K| 20G| 152M| 267K| | |
 | 2 | PARTITION LIST SINGLE | | 3204K| 116M| | 564 | KEY | KEY |
 | 3 | TABLE ACCESS FULL | CT1 | 3204K| 116M| | 564 | KEY | KEY |
 | 4 | VIEW | | 4459K| 20G| | 44503 | | |
 | 5 | WINDOW SORT | | 4459K| 723M| 1574M| 44503 | | |
 | 6 | HASH JOIN | | 4459K| 723M| 357M| 16818 | | |
 | 7 | VIEW | | 5516K| 294M| | 8663 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 5516K| 173M| 505M| 8663 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 5516K| 173M| | 492 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 5516K| 173M| | 492 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 3052K| 331M| | 329 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | C | 3052K| 331M| | 329 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 2900900913
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 306K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 35M| 164G| | 306K| | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID| CT1 | 1 | 52 | | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | | 1 | KEY | KEY |
 | 5 | VIEW | | 35M| 163G| | 278K| | |
 | 6 | WINDOW SORT | | 35M| 6046M| 16G| 278K| | |
 | 7 | HASH JOIN | | 35M| 6046M| 687M| 47011 | | |
 | 8 | VIEW | | 10M| 566M| | 17018 | | |
 | 9 | WINDOW SORT PUSHED RANK | | 10M| 343M| 971M| 17018 | | |
 | 10 | FILTER | | | | | | | |
 | 11 | PARTITION LIST SINGLE | | 10M| 343M| | 988 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CA1 | 10M| 343M| | 988 | KEY | KEY |
 | 13 | PARTITION RANGE SINGLE | | 14M| 1624M| | 1595 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CTS1 | 14M| 1624M| | 1595 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 2973217172
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 208K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 6346K| 29G| | 208K| | |
 | 2 | PARTITION LIST SINGLE | | 12908 | 630K| | 3 | KEY | KEY |
 | 3 | INDEX RANGE SCAN | CTS1_PK | 12908 | 630K| | 3 | KEY | KEY |
 | 4 | VIEW | | 6346K| 28G| | 85943 | | |
 | 5 | WINDOW SORT | | 6346K| 1373M| 2916M| 85943 | | |
 | 6 | HASH JOIN | | 6346K| 1373M| 533M| 33969 | | |
 | 7 | VIEW | | 8231K| 439M| | 15389 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 8231K| 282M| 754M| 15389 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 8231K| 282M| | 2423 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 8231K| 282M| | 2423 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 6346K| 1034M| | 1171 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CTS1 | 6346K| 1034M| | 1171 | KEY | KEY |
 - cpu costing is off (consider enabling it)
**** This was the plan hash value used in our poorest performer! ****
Plan hash value: 3005115193
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 89378 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 287K| 1347M| 912M| 89378 | | |
 | 2 | PARTITION LIST SINGLE | | 22M| 658M| | 2193 | KEY | KEY |
 | 3 | INDEX FAST FULL SCAN | CTS1_PK | 22M| 658M| | 2193 | KEY | KEY |
 | 4 | VIEW | | 287K| 1338M| | 63268 | | |
 | 5 | WINDOW SORT | | 287K| 41M| 92M| 63268 | | |
 | 6 | HASH JOIN | | 287K| 41M| 29M| 61637 | | |
 | 7 | PARTITION RANGE SINGLE | | 287K| 26M| | 25 | KEY | KEY |
 | 8 | TABLE ACCESS FULL | CTS1 | 287K| 26M| | 25 | KEY | KEY |
 | 9 | VIEW | | 27M| 1446M| | 42660 | | |
 | 10 | WINDOW SORT PUSHED RANK| | 27M| 852M| 2482M| 42660 | | |
 | 11 | FILTER | | | | | | | |
 | 12 | PARTITION LIST SINGLE| | 27M| 852M| | 2548 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CA1 | 27M| 852M| | 2548 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 3149367802
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 87386 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 1260K| 5917M| 210M| 87386 | | |
 | 2 | PARTITION LIST SINGLE | | 4603K| 158M| | 327 | KEY | KEY |
 | 3 | TABLE ACCESS FULL | CT1 | 4603K| 158M| | 327 | KEY | KEY |
 | 4 | VIEW | | 1260K| 5874M| | 22365 | | |
 | 5 | WINDOW SORT | | 1260K| 251M| 539M| 22365 | | |
 | 6 | HASH JOIN | | 1260K| 251M| 198M| 12826 | | |
 | 7 | PARTITION RANGE SINGLE | | 1260K| 183M| | 142 | KEY | KEY |
 | 8 | TABLE ACCESS FULL | CTS1 | 1260K| 183M| | 142 | KEY | KEY |
 | 9 | VIEW | | 4701K| 251M| | 7346 | | |
 | 10 | WINDOW SORT PUSHED RANK| | 4701K| 152M| 430M| 7346 | | |
 | 11 | FILTER | | | | | | | |
 | 12 | PARTITION LIST SINGLE| | 4701K| 152M| | 238 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CA1 | 4701K| 152M| | 238 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 3517833265
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | | 142K| | |
 | 1 | HASH JOIN ANTI | | 37130 | 174M| 173M| 142K| | |
 | 2 | VIEW | | 37130 | 173M| | 130K| | |
 | 3 | WINDOW SORT | | 37130 | 6563K| 13M| 130K| | |
 | 4 | HASH JOIN | | 37130 | 6563K| 4992K| 130K| | |
 | 5 | PARTITION RANGE SINGLE | | 37130 | 4532K| | 5 | KEY | KEY |
 | 6 | TABLE ACCESS FULL | CTS1 | 37130 | 4532K| | 5 | KEY | KEY |
 | 7 | VIEW | | 57M| 3089M| | 90806 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 57M| 1655M| 4859M| 90806 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 57M| 1655M| | 12448 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 57M| 1655M| | 12448 | KEY | KEY |
 | 12 | PARTITION LIST SINGLE | | 14M| 548M| | 2629 | KEY | KEY |
 | 13 | INDEX FAST FULL SCAN | CTS1_PK | 14M| 548M| | 2629 | KEY | KEY |
 - cpu costing is off (consider enabling it)
Plan hash value: 3518559863
 | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
 | 0 | INSERT STATEMENT | | | | 49 | | |
 | 1 | HASH JOIN ANTI | | 1 | 4924 | 49 | | |
 | 2 | VIEW | | 1 | 4887 | 32 | | |
 | 3 | WINDOW SORT | | 1 | 154 | 32 | | |
 | 4 | HASH JOIN | | 1 | 154 | 19 | | |
 | 5 | VIEW | | 1 | 56 | 14 | | |
 | 6 | WINDOW SORT PUSHED RANK | | 1 | 52 | 14 | | |
 | 7 | FILTER | | | | | | |
 | 8 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 9 | TABLE ACCESS BY LOCAL INDEX ROWID| CA1 | 1 | 52 | 1 | KEY | KEY |
 | 10 | INDEX RANGE SCAN | CA1 _PK | 1 | | 1 | KEY | KEY |
 | 11 | PARTITION RANGE SINGLE | | 23150 | 2215K| 4 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CTS1 | 23150 | 2215K| 4 | KEY | KEY |
 | 13 | PARTITION LIST SINGLE | | 1963K| 69M| 1 | KEY | KEY |
 | 14 | TABLE ACCESS BY LOCAL INDEX ROWID | CT1 | 1963K| 69M| 1 | KEY | KEY |
 | 15 | INDEX RANGE SCAN | CT1_IDX | 1333 | | 1 | KEY | KEY |
 - cpu costing is off (consider enabling it)

Posted in Database

November 22nd, 2011 by Kellyn Pot'Vin

As part of an 11g upgrade, it was found that a database environment could be built from one of the upgraded databases through a duplicate.  As this process had never been performed in this fashion before, a test was in order.  The test was an excellent chance to discover that the OS user performing the duplicate process was the proud owner of a .kshrc file with hard-coded Oracle variables set, which is an excellent choice if you want to really screw up a new duplicate database… 🙂
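The hazard is easy to reproduce. A small sketch (hypothetical paths and SIDs) of how a sourced profile with hard-coded variables silently clobbers the environment a duplicate script just set:

```shell
# A hypothetical .kshrc fragment with hard-coded Oracle variables:
cat > /tmp/demo_kshrc <<'EOF'
export ORACLE_SID=PRODDB
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
EOF

# The duplicate script sets the auxiliary's environment...
export ORACLE_SID=AUXDB

# ...but any later step that sources the profile quietly reverts it:
. /tmp/demo_kshrc
echo "ORACLE_SID is now: $ORACLE_SID"
```

Run it and the echo reports PRODDB, not AUXDB, which is exactly the mid-process "upset" described in the steps that follow.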


1. The duplicate started with the correct variables set.

2.  Subsequent shell scripts then “upset” the environment mid-process, leaving a failed duplicate with ASM files for the controlfiles created, but the DBID and dbname still set to the target database, not the auxiliary.

3.  Post the failure, the auxiliary, (duplicate) database can only be mounted, not opened.
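A quick way to confirm the mismatch from the mounted auxiliary (a sketch; V$DATABASE exposes both values):

```sql
-- If NAME and DBID still show the target's values, the duplicate
-- failed before the rename took effect.
SELECT name, dbid FROM v$database;
```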

After the duplicate failure, set your environment to the auxiliary database and you will see in the spfile that the db name is no longer the auxiliary database's.
1.  Shut down the auxiliary database:
 [oracle@dbs]$ sqlplus '/ as sysdba'
SQL> shutdown;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
2.  In a second screen, set your environment to the +ASM instance for the auxiliary and remove the controlfiles that were created under the target db's name, through the asmcmd command console:
[oracle@dbs]$ asmcmd
control1.ctl <– these are the controlfiles for the auxiliary, stuck with the dbname of the target; they can’t be renamed and the db can’t be mounted!
ASMCMD> rm control*
You may delete multiple files and/or directories.
Are you sure? (y/n) y
ASMCMD> quit
3.  Back on your original screen, restart the auxiliary with the pfile set to the correct dbname and start with this pfile:
SQL>  startup nomount pfile=’/u01/app/oracle/product/11.2.0/dbhome_2/dbs/initdb_h.ora’
ORACLE instance started.
Total System Global Area 7.6964E+10 bytes
Fixed Size                  2215704 bytes
Variable Size            3.0065E+10 bytes
Database Buffers         4.6708E+10 bytes
Redo Buffers              189513728 bytes
SQL> create spfile from pfile;
File created.
SQL> shutdown;
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 7.6964E+10 bytes
Fixed Size                  2215704 bytes
Variable Size            3.0065E+10 bytes
Database Buffers         4.6708E+10 bytes
Redo Buffers              189513728 bytes
Now you can restart the duplicate process and the database will again be recognized correctly.

Posted in Database
