Category: Database

February 19th, 2016 by dbakevlar

I’m a Leaf on the Wind, Watch How I Soar.

This is one of my favorite lines from the movie Serenity.

Without Alan Tudyk’s character dying and all at the end of this, (sorry if I ruined the movie for anyone…) this is how a DBA should be in the database environment-  skilled, reliable and maybe a little off our rockers. Our job is to protect the data, the database and everything in it.

With that said, I’m going to list my Ten Rules of Database Administration.

  1. Fixing a performance problem with hardware is the best way to guarantee the return of the problem in the near future.
  2. A Database Administrator is only as good as their last backup, (or database image, clone, flashback and other redundancy.)  It’s the only protection from ID10T errors- our own and others.
  3. The best performing database is one that has no users.  The best performing query is one that doesn’t have to be executed.
  4. Optimize what annoys the user vs. what annoys you and you’ll never have to worry about your job.
  5. Never assume, always research and double-check/triple-check your findings.  Data is the savior of the DBA.
  6. Performance issues are rarely simple.  If they were simple, the user could fix them and we’d be out of a job.
  7. If a database is up and running, then something has changed.  Don’t ever accept the answer that nothing’s changed.  They’d have to be using paper and pen instead of the database.
  8. A developer’s goal is to have an application or procedure complete requirements. Your job is to make sure the code they produce does so efficiently and without risk to the data or the database.
  9. You can’t do your job as well as you could if you don’t understand what the application developer, the user and the business do.
  10. The database is always guilty until proven innocent and, by the way, you only have access to half the case evidence.  You’re its attorney-  congratulations.

Happy Friday Folks!

Posted in Database, DBA Life

February 17th, 2016 by dbakevlar

There was a question posted on the Oracle-l forum today that deserves a blog post for easy lookup, regarding your Enterprise Manager repository database, (aka the OMR.)  This database has a restricted use license, which means you can use it for the Enterprise Manager repository, but you can’t add partitioning, RAC or Data Guard features to it without licensing those features.  You also can’t use the diagnostic and tuning pack features available in Enterprise Manager against the repository database without licensing them- the EMDiagnostics tool is the exception.  You can view information about the license that is part of the OMR here.

No one wants to be open to an audit or have a surprise when inspecting what management packs they’re using.

To view what management packs you’re using for any given EMCC page, you can use the console and access it from the Setup menu in EM12c or EM13c.

With that said, Hans Forbrich made a very valuable addition to the thread, adding how to disable management pack access in your OMR database.

Run the following to disable it via SQL*Plus as SYSDBA:

ALTER SYSTEM SET CONTROL_MANAGEMENT_PACK_ACCESS='NONE' scope=BOTH;
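
To verify the change took effect, here’s a quick check of the parameter, (a minimal sketch, run as SYSDBA in SQL*Plus- the expected value after the change is NONE):

SHOW PARAMETER control_management_pack_access

SELECT value FROM v$parameter
WHERE name = 'control_management_pack_access';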

Other packs are disabled in the EM Cloud Control console, with the appropriate privileges, via the Setup menu- available in 12.1.0.4 with a patch, or higher releases.

The view can be changed from licensed databases to all databases; go through and adjust the management packs to match what you’ve licensed, then apply.

Don’t make yourself open to an audit when Enterprise Manager can make it really easy to manage the management packs you are accessing.

Posted in ASH and AWR, Database, EM13c, Enterprise Manager, Oracle

November 24th, 2015 by dbakevlar

After my AWR Warehouse session at DOAG in Nuremberg, Germany last week, an attendee asked me if he really needed to use trace files anymore with the value provided by AWR, (Automatic Workload Repository) and ASH, (Active Session History.)  I responded that trace files were incredibly valuable and should never be discounted.

To use an analogy, when I was 22 years old, and entered a doctor’s office missing a small percentage of my visual field after an apparent migraine, doctors used a number of different “tools” at their disposal in an attempt to diagnose what had happened.  They first started with an Ophthalmoscope to determine the health of my retinas.  They assumed that I was simply experiencing visual disturbances due to a migraine and sent me home with a prescription strength case of Tylenol after verifying that my eyes were healthy.

After no change and another two days passed, the doctors then proceeded with a Computed Tomography scan, aka CAT/CT scan.  This is a standard first-line inspection of the brain and again, it resulted in no information as to what was causing the visual disturbance.  If it hadn’t been for a neurologist that was on duty at the hospital, I may very well have been sent home again.  He asked me about the severe migraine, the actual specifics of the experience, and suddenly realized he was hearing someone describe to him the symptoms of a stroke.  No one had thought to ask what had happened and, since I was in my early twenties, no one had considered this.  At this point, the doctor asked for me to have an MRI, (Magnetic Resonance Imaging) with contrast.  An injection of gadolinium contrast rendered certain tissues and abnormalities more clearly visible and for me, it showed that I had experienced a break in the blood vessels in the back right of my brain, causing a small aneurysm, but suffering only a little blood loss.  The main damage was to the brain tissue in the back area of my brain which “translates” the information being sent from my optic nerve, and this was the cause of my left-side visual field loss.  Due to this new information, he was able to start the research that in the coming years saved my life.

This doctor, like a database administrator or developer, knew to use the right tool for the job once he knew what he was up against, and this is how we succeed when others fail in the technical industry.  To be told that you no longer need a tool such as tracing because valuable data is provided by AWR or ASH limits the ability to see the real culprits that those secondary tools weren’t designed to capture.

To know when to use one tool over the other is often the biggest challenge.  A number of years back, a customer was experiencing terrible, overall performance issues in an ETL process that no one had been able to pinpoint.  I always love a good challenge and began to look at it via AWR reports.  I noticed that there were waits that were unaccounted for in the SQL processing.

Execution Plan

| Id  | Operation                       | Name                       | Rows  | Bytes | Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT                |                            |       |       |   688 |       |       |
|   1 |  DELETE                         | _ITEM_MONTH_ROLLUP         |       |       |       |       |       |
|   2 |   NESTED LOOPS                  |                            |  1490 | 86420 |   688 |       |       |<--This one doesn't add up to time/value consumed.
|   3 |    VIEW                         | VW_NSO_1                   |     1 |    26 |    20 |       |       |
|   4 |     SORT UNIQUE                 |                            |     1 |    40 |    20 |       |       |
|   5 |      TABLE ACCESS BY INDEX ROWID| PROCESS_LOG_MASTER         |     1 |    40 |     4 |       |       |
|   6 |       INDEX RANGE SCAN          | PROCESS_LOG_MASTER_I7      |   132 |       |     1 |       |       |
|   7 |    PARTITION RANGE ITERATOR     |                            |  1490 | 47680 |   386 |   KEY |   KEY |
|   8 |     INDEX RANGE SCAN            | ITEM_MONTH_ROLLUP_I2       |  1490 | 47680 |   386 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=8)
- SQL profile "SYS_SQLPROF_013dad8465770005" used for this statement


CPU Costing is off and there is a profile on this statement.  Stats are up to date, so why am I seeing dynamic sampling at level 8??

I proceeded to capture a trace and that’s when I was able to demonstrate that due to auto-DOP in 11.2, a choice was made to perform dynamic sampling that was adding over 80% to the execution time, impacting performance significantly.
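
For anyone who wants to reproduce this kind of capture, here’s a minimal sketch of enabling an extended SQL trace on a session via DBMS_MONITOR, (the SID and serial# below are placeholders for the session being traced):

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id=>1234, serial_num=>5678, waits=>TRUE, binds=>TRUE);
--  ...let the workload run, then turn the trace back off:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id=>1234, serial_num=>5678);

The trace file exposed the recursive dynamic sampling query: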

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring blah, blah, blah, blah... :)

 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1     28.60      48.52     322895      43776     489123        189
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     28.60      48.52     322895      43776     489123        189

Limiting your tools, no matter if you limit yourself to just using tracing or AWR/ASH, SLOB, Snapper or any other tool, is a poor choice.  Use the right tool for the situation, the one that provides answers to the questions the business relies on you for.  You wouldn’t trust a doctor that used a CAT scan to check a broken bone vs. an X-ray, so why would you trust an optimization specialist that limits themselves to one set of tools?

Posted in Database

October 13th, 2015 by dbakevlar

So I’ll be heading out for Oracle Open World 2015 in just over another week.  This is the first year in a while that I won’t be speaking, (my own fault, as dummy here hasn’t figured out how to submit sessions internally yet, duh!)   but I’m still going to support the demo grounds, meet with customers and speak at Oak Table World!

Per Steve Karam, if you need to know who’s speaking and why you should attend the sessions at the Children’s Museum, well here’s a few:

Jonathan Lewis – Yes. That Jonathan Lewis. Also an alien of extraordinary ability.
Tanel Poder – Doer of things you didn’t know SQL could do and the snappiest snappermaker
Alex Gorbachev – Jack of all trades, master of…well, all of them really.
Kellyn Pot’Vin-Gorman – Cloud Control control, techno wiz, and data lacquerer
Tim Gorman – Kellyn’s hype man. And world renowned Oracle expert.
Kevin Closson – Refusing to party like it’s 1999 since 1999
Gwen Shapira – A fifty petabyte machine learning datastore in human form
Cary Millsap – His advice has probably saved your database and career
Oak Table World – The best Oracle speakers outside of OpenWorld. Literally, not figuratively.

Now if that doesn’t sum it up, well, let’s be honest, there are over 60 members and more speaking than are listed here, so if you have time, it’s the best “unconference”, EVER.

With that said, we’ll get back to why everyone is there in the first place and it’s because of my awesome employer, Oracle!

I’m busy preparing my new sessions for after Oracle Open World, (I have three conferences in November- San Antonio, Chicago and Germany) and ensuring the customers I’m in charge of for OOW preparation are ready to go when the stage lights go up!  It’s a fun kind of crazy chaos, the kind I commonly thrive under, so we’re all good!

With that said, there are a few events I’m looking forward to-

Sunday, Oct. 25th, 8:30am- Bridge Run with Crazy Oracle Peeps

Yeah, me and my arthritis will be walking that bridge, but it’s still a great event and I’ll be going.  I’m not doing the bay swim or the bike event.  Enough is enough, you crazy, exercise nuts! 🙂

Monday, Oct. 26th, 1:30pm- Managing at Hyper-Scale: Oracle Enterprise Manager as the Nerve Center of Oracle Cloud [CON9710], Moscone South–300

I have wonderful peers at Oracle and Akshai Duggal is one of them!  She’s going to be explaining how our SCP teams build out the REALLY massive Enterprise Manager environments, ensuring uptime, clear and concise monitoring, along with understanding pesky things like metrics and templates.  Check out her session with real use cases, how can you beat that?

Monday, Oct. 26th, 5pm- in the RAC Attack Area-  Enterprise Manager SIG from IOUG and Oracle.

Come meet up with all the top EM specialists in the world as we discuss the best topics surrounding Enterprise Manager and enjoy the hospitality of the OTN lounge area!

Monday, Oct. 26th, 6pm, the ODTUG Reception, (another invite only event…)

Another event I look forward to every year and even though I’m over booked this year, I’ll make sure to stop in for this, too!

Tuesday, Oct. 27th, 10-10:50am- Oak Table World-  Stewart Bryson and I are going to talk about OBIEE and AWR Warehouse.  Children’s Museum 

We just wanted to play around a bit and see how much trending and analysis we could do with the product and had a great time building this out.  Come check it out and then head on over for….

Tuesday, Oct. 27th, 11am-  General Session: Oracle Management Cloud—Real-Time Monitoring, Log, and IT Operations Analytics [GEN9778]  Moscone South—102

The big boss is talking, no not Larry, but my boss, Prakash Ramamurthy!  He’s going to be discussing the coolest of the cool in new stuff that I’m totally obsessed with right now and if you miss out on this session, you’re going to be missing out on the best!

Tuesday, Oct. 27th, 12:15pm, Oak Table Ted Talk!  Secret Topic! Children’s Museum

I can’t tell you what I’m going to talk about, but I’m going to tell you what I can’t tell you now.  Yeah, I really just said that… 🙂

Wednesday, Oct. 28th, 11am- Using Oracle Enterprise Manager Effectively to Become an Oracle Enterprise Manager Champion [CON9711] Moscone South 104

Courtney Llamas and a slew of great folks will be on stage talking about how to become an EM Champion!

Wednesday, Oct. 28th, 4:15pm- Hybrid Cloud—Pivot to the Cloud with Oracle Enterprise Manager Cloud Control [CON9712] Moscone South 300

I really enjoy working with Steve Steltling in our SCP team and he’ll be presenting with Phillip Brown on Hybrid Cloud!

Wednesday, Oct. 28th- Bloggers Meetup from Pythian…..I’m waiting….still waiting….no invite yet….Where is it?

This is an event that I always enjoy and wait for the invite from Pythian each year, but nothing yet, so I will be pestering Alex Gorbachev until it shows up… 🙂

Wednesday Oct. 28th, evening- NOT GOING TO TREASURE ISLAND

Yeah, you heard that right.  Every year, even when I did get a wrist band, I gave it to someone deserving, and then Tim and I enjoy the quiet city for a dinner out in ANY restaurant we like… 🙂  We just aren’t much into concerts and love the opportunity to let someone go that otherwise may not have, by giving them Tim’s band, (now that I’m with Oracle, I no longer get one…)

Thursday, Oct. 29th, 12pm- Oracle Management Cloud Best Practices for Mining Oracle Enterprise Manager Data [CON9714] Marriott Marquis—Salon 10/11

Werner DeGruyter, our EM Yoda, and Shailesh Dwivedi are going to talk about a great topic.  I love mining Oracle EM data, so if that’s something that interests you, don’t miss out on this session!

If you want to see all the best of the Enterprise Manager sessions, here’s the Oracle Open World Schedule specific to EM!

Friday is the flight home, picking up my dog, DaVinci, from the pet hotel and finding out if my youngest son, Josh, still knows who I am!

Posted in Database, DBA Life

May 4th, 2015 by dbakevlar

There may be a reason that one needs to re-add a database to the AWR Warehouse.  This is a new opportunity for me to learn and offer assistance, but I’m working off the grid to figure out a solution.  Luckily, it’s just a matter of creating a new process from pre-existing code and processes.

The process requires both the AWR Warehouse UI and a SQL*Plus connection.

1.  Disable the snapshots

2.  Extract the AWR for the Database

3.  Drop the database from the AWR Warehouse

4.  Add the database back into the AWR Warehouse with the retention time or any other configuration change.

5.  Update the configuration table in the source database to show the correct snapshots.

Disable Snapshot Uploads

Log into the AWR Warehouse console and locate the database that needs to be re-added.

From the AWR Warehouse UI, click on the database and then choose to disable the snapshot uploads for the database in question.

Extract AWR Data for the Database

From the AWR Warehouse host, log into the AWR Warehouse repository, (database) and now extract the AWR data.  This will use the formal AWR data extract procedure, found in the $ORACLE_HOME/rdbms/admin directory and yes, it works with the AWR Warehouse.

~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> @$ORACLE_HOME/rdbms/admin/awrextr.sql;

A list of databases will be displayed that you can choose from.  In the AWR Warehouse, all the source databases will be shown:

Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id DB Name Host
------------ ------------ ------------
* 1027807260 AWRWHSE AWRWHSE_HOST
9352681 APPSB1 HOST1
4212607231 AASDB  HOST2
4212608695 AASDB HOST3
so on and so forth....

Enter the DBID for the database from the list.  We’ll use the first AASDB from the list for our example:

Enter value for dbid: 4212607231
Using 4212607231 for Database ID
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:
Listing all Completed Snapshots

As I didn’t enter a number for the number of days, you can see I want ALL of the AWR data for this database.  Now the extract script will ask for snapshot information.  Again, we want all of it, and the report will show us all of the snapshots.  If there are so many snapshots that you can’t see the first one, use the AWR Warehouse UI to find the first and last snapshot for this step.

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1777
Begin Snapshot Id specified: 1777

Enter value for end_snap: 2303
End Snapshot Id specified: 2303
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name Directory Path
------------------------------ -----------------------------------
DATA_PUMP_DIR /u01/admin/AWRWHSE/dpdump/
OPATCH_INST_DIR /u01/home/oracle/12.1.0.2/OPatch
OPATCH_LOG_DIR /u01/home/oracle/12.1.0.2/QOpatch
OPATCH_SCRIPT_DIR /u01/home/oracle/12.1.0.2/QOpatch

Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: DATA_PUMP_DIR
Using the dump directory: DATA_PUMP_DIR
Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_1777_2303.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name: awrw_AASDB
Using the dump file prefix: awrw_AASDB
End of AWR Extract
SQL> !ls -la /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
-rw-r----- 1 oracle oinstall 268779520 May 4 16:49 /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp

Now that the extract is completed and the data is safe, it’s time to return to the AWR Warehouse console.

Remove the Database from the AWR Warehouse

Click on the database in the AWR Warehouse and choose to remove it from the repository.

Once the database has been removed, go through the proper steps to add it again to the AWR Warehouse.  Once added, disable the upload of snapshots.  This will prevent concurrency and other issues until we’re finished.

Reload the AWR Data

Log back into the AWR Warehouse repository database; it’s time to load the data back into the AWR Warehouse.

SQL> @$ORACLE_HOME/rdbms/admin/awrload.sql

Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load: /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
Pressing <return> will result in the recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace: MGMT_TABLESPACE

As the space will be used at some point anyway, (the MGMT_TABLESPACE is expected to grow) there’s no concern here, and the temporary user utilized for the process will be dropped at the end.

Choose to use the standard TEMP tablespace and proceed forward past this step.  As the AWR objects are already built into the MGMT_TABLESPACE, there is no need to remap from the SYSAUX as the process in a standard database would require.

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "AWR_STAGE"."WRH$_SQL_PLAN"                 89.485 MB   329981 rows
. . imported "AWR_STAGE"."WRH$_SQLTEXT"                  192.34 MB    398276 rows
. . imported "AWR_STAGE"."WRH$_SYSMETRIC_SUMMARY"        232.12 KB    1129472 rows
.......
........
	Append Data for "AWR_STAGE".WRH$_SGASTAT.
INSERT /*+ APPEND */ INTO SYS.WRH$_SGASTAT (SNAP_ID, DBID, INSTANCE_NUMBER, NAME, POOL, BYTES) SELECT SNAP_ID,
3228342000, INSTANCE_NUMBER, NAME, POOL, BYTES FROM "AWR_STAGE".WRH$_SGASTAT WHERE DBID = :c_dbid

Update Metadata Table

Now that we’ve loaded the data back in, we need to update the metadata table to show that more than the initial data exists.

select * from dbsnmp.caw_load_metadata
where dbid=<dbid for database in question>;
aasdb3 oracle_database 4212607231 1777 1778
27-APR-15 27-APR-15 3 1

Update the table to reflect the new data; there should be at most one upload that’s been sent over.  If there is more than one, you’ll need to change the update statement to reflect only the row with the max(end_snap_id), as the AWR Warehouse loads from the lowest snap_id to the most recent.

select * from dbsnmp.caw_load_metadata
where dbid=4212607231;
Update dbsnmp.caw_load_metadata
set end_snap_id=2303
where dbid=4212607231
and dump_id=<most recent dump_id>;
commit;
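
To sanity-check the load, the snapshot range now held in the warehouse for the source DBID can be compared to the metadata table, (a simple verification against DBA_HIST_SNAPSHOT in the repository):

select min(snap_id), max(snap_id), count(*)
from dba_hist_snapshot
where dbid=4212607231;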

If there isn’t any data at all, then force one upload and then edit it.  This is easily done from the console.

Proceed back and update the row in the metadata table, then check the UI to verify that the beginning and end snapshots look correct.  You should now have all the AWR data loaded back into the AWR Warehouse!  And that’s how to reload AWR data for a database that must be removed and re-added to the AWR Warehouse.

This is a new process for me, so please, let me know if you note anything amiss and I’ll be glad to research it!

Posted in AWR Warehouse, Database, Oracle

April 9th, 2015 by dbakevlar

Here’s a quick guide to my schedule of sessions at IOUG Collaborate 2015 for this year.  I’m looking forward to seeing everyone next week, (I’ll be arriving on Saturday, so I really should say in a couple days) and have no doubt we’re up for another phenomenal conference at Las Vegas, this year at the Mandalay!

Additionally, there are some great opportunities in professional development at the IOUG Strategic Leadership Program.  I’ll be speaking with James Lui on how to improve your value with personal brands and social media.

I’m the WIT luncheon speaker on Monday, so for those of you that signed up for this great, yearly event for women in tech before it sold out, I’m looking forward to meeting as many of the powerful women in our industry as I can, (and hopefully a few men, too!) before and after the event.

There are also a few SIG’s that I’ll be attending that aren’t on the schedule:

Tue, 12:30 p.m. – 1:00 p.m. – Cloud Computing and Virtualization SIG Meeting (ID: 943)

Wed, 12:30 p.m. – 1:00 p.m. – Oracle Enterprise Manager SIG Meeting (ID: 949)

ID – Date and Time – Session Title (Product Line / Track) – Room

976 – Sun. Apr. 12, 9:00 am – 1:00 pm – Everything I Needed to Know About Enterprise Manager I Learned at COLLABORATE – Hands-on Lab (Oracle Enterprise Manager / Manageability) – Palm D

0 – Mon. Apr. 13, 9:15 – 10:15 am – Zero to Manageability in One Hour: Build a Solid Foundation for Oracle Enterprise Manager 12c (Oracle Cloud – DBaaS/PaaS, Oracle Database, Oracle Enterprise Manager / Manageability) – Banyan B

986 – Mon. Apr. 13, 12:45 – 1:45 pm – Twitter 101 for Oracle Professionals (Applicable to All / Professional Development) – Banyan D

112 – Mon. Apr. 13, 3:15 – 4:15 pm – The Power of the AWR Warehouse (Oracle Enterprise Manager / Manageability) – Banyan B

967 – Mon. Apr. 13, 4:30 – 5:30 pm – IOUG Strategic Leadership Program: Staying on Top of Key Trends and Growing Your Personal Brand with Social Media (Applicable to All / Professional Development) – Banyan E

1003 – Wed. Apr. 15, 2:00 – 3:00 pm – OakTable: TED-Style Lightning Talks (Applicable to All / Database) – Mandalay K

Thanks to everyone at IOUG for the opportunity to be so involved with this great conference and see everyone soon!

Posted in Database, DBA Life, Enterprise Manager

October 27th, 2014 by dbakevlar

I’ve just returned from my honeymoon in Venice with the wonderful Tim Gorman, but before we enjoyed our week in this picturesque city, we spent a week doing joint keynotes and two sessions each on our own tech topics at these two incredible conferences in neighboring countries.

SIOUG, (Slovenia Oracle User Group) in Ljubljana, Slovenia held their 2014 conference Oct. 13th-14th at the Plaza Hotel.  Tim and I did a joint keynote, “Assumptions, the Death of First Dates and Tech Projects Around the World” which was great fun, as it’s loaded with a number of Pulp Fiction stills that transcends any language barrier and gets everyone finding humor in the obvious, but also understanding how important it is to never assume anything in the tech world.

As soon as we arrived via car, driving in from Venice, we were treated to a wonderful, traditional Slovenian dinner by Joze, @joc1954, and Lily Senegacnik, and started hanging out with Milena Gerova, @milenagerova, and Nikolay Kovachev, (both from the Bulgarian Oracle User Group).  A great time was had by all and the conference was wonderful!  Throughout both conferences, Milena and Nikolay were in the middle of a number of technical and social events, always having a great time!

Once finished with the two day conference in Slovenia, we then took a wonderful drive down to Croatia to present the same sessions and keynote at the HROUG, (Croatian Oracle User Group) fall conference in Rovinj, Croatia.  This is a beautiful small town on the coast and we were not disappointed.

I’d met the HROUG president at the IOUC summit back at the end of September and seeing him in his “element” was well worth the time!  Davor is bigger than life and treated us as if we were all family.  He sang with the wonderful band that played at many of the events and ensured we had both a tour of the wonderful town of Rovinj, along with an incredible meal!

The actual 2014 conference was held at the Hotel Istra on the island near Rovinj.  It was an absolutely beautiful location to do a keynote and tech sessions.  We were humbled by the incredibly positive feedback Tim and I received from both conferences’ keynote attendees and thrilled that it was received so well.  We also appreciated the support of Zoran Pavlovic, @ChallengeZoran, and Maja Vaselica, @orapassion, (Serbian Oracle User Group) as we pointed them out as a fellow and rare Oracle couple.  It was wonderful to finally meet them and get to know these two wonderfully warm and technically bright folks!  I was also impressed by how many folks attended the advanced EM12c concepts session, and of course, the AWR Warehouse had a great turnout, as expected.

Both conferences are well worth the time and effort, my only recommendation is, if you decide to drive there with a rental car, just make sure that you purchase a Vignette BEFORE coming into the country, (or Switzerland, Austria or Bulgaria, too!) I’ll leave you folks to ask Tim what happens when you don’t! 🙂

Posted in Database, DBA Life, Eventify Events

January 1st, 2014 by dbakevlar

**Note-  This is an article I wrote about 4 years ago that, as far as I know, can only be found in hard copy.  Although I’ve changed my stance on statistics, etc. with 11g/12c, the main point of CTAS benefits still stands the test of time.

As a kid, did you ever watch Hanna-Barbera’s cartoon “The Jetsons”, see Elroy drop out of George Jetson’s flying saucer, going to school via his personal hyper-pod, and think, “forget the bus, I want to go to school in the hyper-pod!”?

For any developer or DBA working with large objects in an OLAP environment, updating and deleting large tables, even when partitioned, can feel like you are on that school bus again.  So how do you get off the school bus and ride on Elroy’s hyper-pod to get things done, “Jetson’s Style”?

In my time working with large objects, I’ve seen and discovered more ways to use the CTAS, (“create table as” for those who are new to the acronym!) than I ever thought possible.  Every DBA and developer in a warehouse or mart environment should have all the ways this feature can be used in their arsenal.

For our current day vs. Jetson’s futuristic comparison, we’re going to say our environment consists of data partitioned by company_id, residing mostly in five primary tables.  This partitioning method has been quite successful for how the business loads and reports on data, but your first challenge is to prune out all purchase orders older than five years from the system.   Once complete, the business has some reporting requirements for you to invest time into, so keep that mind open and ready to work through some challenges!

The tables involved have the following info, examples of columns and sizes:

COMPANY_ACCT, 253 columns,  296GB, 1000 partitions total.
C_ID (partitioning key)
C_NBR
ACCT_TYPE_CD
SUB_CID
DEPT_ID
ADDRESS
PO_BOX
CITY
STATE
ZIP
LAST_ACTIVE_DT
… so on and so forth…

COMPANY_TRANS,120 columns, 312GB, 1000 partitions total.
C_ID(partitioning key),
TRAN_NBR
TRAN_DT
… so on and so forth…

ID_COMP_DIM, 412 columns, 212GB, 1000 partitions total.
C_ID(partitioning key),
MOD_KEY,
COMP_MOD_KEY,
… so on and so forth…

GEO_COMP_DIM, 82 columns, 180GB, 1000 partitions total.
C_ID,
GEO_KEY,
TERR_CD,
… so on and so forth…

The traditional OLTP processing method  to tackle the removal would be to either:

  • Delete with where clause against the tran_dt or last_act_dt and having an index on these columns.
  • Bulk process the deletes, using the same method but performing in parallel, using the c_id, (company id) to allow for most efficient performance.

How could you perform the delete faster?   How could you accomplish this task without actually utilizing a delete statement at all?

Elroy Jetson would say “CTAS it!”  Anything that can be done, can be done faster, more efficiently and more suave when it’s done with a CTAS.  Combine this with the exchange partition feature and there are opportunities here that anyone can appreciate-  a match made in heaven.

Now to return to our challenge at hand-  let’s take our first table, the COMPANY_ACCT table and remove all the accounts that haven’t been active for over five years.

The traditional way,  version 1:

DELETE FROM COMPANY_ACCT
WHERE LAST_ACTIVE_DT < TO_DATE('01-JUN-2005','DD-MON-YYYY');

Or version 2, process via a procedure and pass in the company id and process multi-thread or in parallel:

DELETE FROM COMPANY_ACCT
WHERE C_ID=:b1
AND LAST_ACTIVE_DT < TO_DATE('01-JUN-2005','DD-MON-YYYY');

After each of these steps, you will have stale statistics and considerable empty blocks-  blocks that will be scanned on every hash and table scan, empty though they are, until you rebuild or “coalesce” this partition.   This is something that you must keep in mind when you process data the “old-fashioned” way.
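
For illustration, a hedged sketch of that cleanup, (the partition and index names here are placeholders): a partition move reclaims the empty blocks, and the local index partitions go unusable during the move, so they must be rebuilt afterward.

ALTER TABLE COMPANY_ACCT MOVE PARTITION P_229;
ALTER INDEX COMPANY_ACCT_IDX1 REBUILD PARTITION P_229;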

How much more efficient would it be to process this via a CTAS with an exchange partition?

Note**  No delete in the following statement!!

CREATE TABLE COMPANY_ACCT_229 AS
SELECT * FROM COMPANY_ACCT
WHERE C_ID=229
AND LAST_ACTIVE_DT > TO_DATE('01-JUN-2005','DD-MON-YYYY');
CREATE INDEX CA_229_idx1 on COMPANY_ACCT_229(LAST_ACTIVE_DT);

Update statistics and note that I use very, very small sample sizes for my estimate_percent.  I have very consistent data across my data sets, but very large amounts of it.  This gives me the luxury of collecting stats without the time required for larger sample sizes.  Due to the number of partitions, I am careful to gather only what I need so as to not impact the performance of my data dictionary.  I do not like seeing long ops on tables such as col$, tab$, ind$, etc…  If you see this, look into the amount of statistics-  are you gathering what you need or just flooding the data dictionary with data that cannot give you the consistent and solid execution plans that are your goal?

Exec dbms_stats.gather_table_stats(ownname=>'OWNER',tabname=>'COMPANY_ACCT_229', estimate_percent=>.00001);

We are now ready to exchange our partition-

ALTER TABLE COMPANY_ACCT EXCHANGE PARTITION P_229 WITH TABLE COMPANY_ACCT_229 UPDATE INDEXES;

Your delete of the data for this partition is complete.  There is no fragmentation, you have updated the stale statistics and you are as clean as possible for all processing going forward.

What is the cost savings?

Original delete process per partition, no update of statistics, on average: 28 min.

CTAS and exchange partition, including index creation, minus statistics update on average:  7 min.

What if you’re challenged to reproduce the whole table-  all the partitions, without the data older than five years?  How can you complete this task with the CTAS method?

CREATE TABLE DW_PROD.COMPANY_ACCTS2
(
  C_ID                     NUMBER not null,
  C_NBR                    VARCHAR2(30) not null,
  ACCT_TYPE_CD          VARCHAR2(1) not null,
  … <-- columns to be included in the new table
)
partition by list (C_ID)
(
 partition P_1001 values ('1001'),
partition P_1022 values ('1022'),
partition P_1023 values ('1023'),
partition P_1024 values ('1024'),
partition P_1025 values ('1025'),
partition P_1026 values ('1026'),
partition P_1029 values ('1029'),
partition P_1031 values ('1031'),
 … <-- partition list
)
<storage parameters, including parallel> PARALLEL (degree 8)
AS select /*+ PARALLEL(ca 8) */
ca.C_ID,
ca.C_NBR,
ca.ACCT_TYPE_CD
FROM COMPANY_ACCT ca
WHERE ca.LAST_ACTIVE_DT > TO_DATE('01-JUN-2005','DD-MON-YYYY');

Create the indexes that exist on the original, update the statistics on the secondary and then perform a rename-  renaming the original to “old” and COMPANY_ACCTS2 to the original name, COMPANY_ACCT.
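
The rename itself is a quick pair of DDL statements, (a minimal sketch- grants, synonyms and any constraints would also need to be carried over):

ALTER TABLE COMPANY_ACCT RENAME TO COMPANY_ACCT_OLD;
ALTER TABLE COMPANY_ACCTS2 RENAME TO COMPANY_ACCT;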

What are the time savings, performing this way vs. original:

Delete Statement, in parallel, against all the partitions, WITHOUT update of statistics:  23 hrs, 12 min.

CTAS and rename, INCLUDING update of statistics:  7 hrs, 4 min.

This is nearly a 70% time savings-  that much less maintenance window out of the DBA’s personal time or outage to the business.  How can you say no to this?

Dimensional data the CTAS way…

As a special request for the business, a developer needs to create a table with multiple joins to three of the largest tables.   The developer makes multiple attempts, but must stop due to performance hits on the database.

SELECT  /*+  parallel(g 8) parallel(i 8) use_hash(g i) */
            i.c_id,
            i.c_mod_key,
            i.comp_mod_id,
            DECODE (i.c_id,
                    0, 'N',
                    NVL2 (g.geo_tract, 'B', 'Z'))
               AS cmatch,
            g.next_col1,
            g.next_col2,
            g.next_col3,
            g.next_col4,
            g.next_col5,
            g.next_col6,
            ...
            g.next_col19
     FROM   geo_comp_dim g, id_comp_dim i
    WHERE   g.c_id = i.c_id;

The explain plan shows the impressive cost of this query, along with the outrageous amount of temporary tablespace required to perform the hash and sort.

Explain Plan:

Description                           Owner      Object          Cost      Rows        Temp
SELECT STATEMENT, GOAL = ALL_ROWS                                1145371   176422800
 PX COORDINATOR
  PX SEND QC (ORDER)                  SYS        :TQ10002        1145371   176422800
   SORT ORDER BY                                                 1145371   176422800   2.4 TB
    PX RECEIVE                                                   3035      176422800
     PX SEND RANGE                    SYS        :TQ10001        3035      176422800
      HASH JOIN                                                  3035      176422800
       PX RECEIVE                                                163       373294
        PX SEND BROADCAST             SYS        :TQ10000        163       373294
         PX BLOCK ITERATOR                                       163       373294
          TABLE ACCESS FULL           DM_OWNER   GEO_COMP_DIM    163       373294
       PX BLOCK ITERATOR                                         2730      176422800
        TABLE ACCESS FULL             DM_OWNER   ID_COMP_DIM     2730      176422800

What carries the heaviest weight in the cost of this query?

The hash join between the two tables- but what does it really need from the two tables?  The original query needs almost everything from GEO_COMP_DIM, but that’s a relatively small cost vs. ID_COMP_DIM, from which the developer only needs three columns of the total 412!

Those of us who have studied hash joins know that we should expect the hash join to require about 1.6 times the size of the table we lead in with for our hash.  What if we created a smaller lead table for the hash, then?

CTAS to the rescue again!

CREATE TABLE ID_COMP_CID_DATA AS
SELECT C_ID, C_MOD_KEY, COMP_MOD_ID FROM ID_COMP_DIM;

Replace the original ID_COMP_DIM table in the query with ID_COMP_CID_DATA.  What is the performance improvement?

Description                           Owner      Object             Cost     Rows        Temp
SELECT STATEMENT, GOAL = ALL_ROWS                                  41289    176422800
 PX COORDINATOR
  PX SEND QC (ORDER)                  SYS        :TQ10002          41289    176422800
   SORT ORDER BY                                                   41289    176422800   153G
    PX RECEIVE                                                     623      176422800
     PX SEND RANGE                    SYS        :TQ10001          623      176422800
      HASH JOIN                                                    623      176422800
       PX RECEIVE                                                  163      373294
        PX SEND BROADCAST             SYS        :TQ10000          163      373294
         PX BLOCK ITERATOR                                         163      373294
          TABLE ACCESS FULL           DM_OWNER   GEO_COMP_DIM      163      373294
       PX BLOCK ITERATOR                                           312      176422800
        TABLE ACCESS FULL             DM_OWNER   ID_COMP_CID_DATA  312      176422800

Same explain plan shape, but with a different, narrow table to work with…  The time savings is considerable, too:

Original:  7 hrs, 16 min.

Same Query off of CTAS “dimensional”  table:

Creation of ID_COMP_CID_DATA: 10 min

Query:  28 min.

Total:  38min.

The business has now been asked to change how they report data.  Originally, the data was represented as yes or no, but it now needs to be presented as the values requested by the client, who is no longer satisfied with Yes or No.

A simple CTAS of the table with a decode of the columns to the new values will create the new table in the format you need without having to perform a standard update.

create table COMP_BUS
 tablespace tbl_data1
    pctfree 10
    initrans 96
    maxtrans 255
 STORAGE (FREELISTS 48)  nologging as
 select /*+ parallel(I,8) */
         i.c_id
       , DECODE(i.sold_flag, 'Y', '1', '0') AS sold
       , i.pass_id AS p_id
       ,i.sess_id
       , TO_NUMBER(DECODE(i.sl_dt, null, null, i.sl_dt  - TO_DATE('01-JAN-1999', 'DD-MON-YYYY'))) AS sl_dt
       , i.c_dev_id
from DM_OWNER_A.comp_bus_dim i ;

create index bus_cid_idx on DM_OWNER.comp_bus (c_id ) tablespace  BUS_INDX1 parallel 4;

exec dbms_stats.gather_table_stats…

ALTER TABLE COMP_BUS noparallel;
ALTER INDEX bus_cid_idx noparallel;

None of the concerns over empty blocks or the performance issues that would be experienced with an update will be experienced with the CTAS.  Create the table, create any indexes, update statistics and rename the table.  The steps may seem more involved, but the time savings and the ability of the CTAS to pay forward in performance benefits are significant.

Here’s where you have to ask yourself- Do you want to be on the yellow school bus or the hyper-pod?

Features that can be utilized in a CTAS that should be investigated for performance benefits:

  • No-logging
  • Parallel/parallel DML
  • Compression for space savings
  • Initial transaction settings
  • Partitioning/sub-partitioning
  • In some specialized cases-  freelists/freelist groups instead of ASSM.

In the world of data warehouses and marts, theory would say, anyone should be able to easily update or delete rows without issue, but in reality, there are more efficient ways to process large amounts of data.  We, as DBAs and developers, must be more pro-active, more futuristic-  more ”Jetson”  like.

Posted in Database, Oracle

March 26th, 2012 by Kellyn Pot'Vin

Congratulations to my coworker, Andy Colvin, who just received his Oracle ACE!!  Just one more immensely technically gifted individual at Enkitec who is very deserving of this award, and who makes me proud to be part of such a great group of folks!

Posted in Database

March 17th, 2012 by Kellyn Pot'Vin

Originally submitted at REI

The REI Monorail Sling bag has the volume of a daypack and the features to meet all the challenges of the daily urban grind. Outdoor-inspired details make hauling the load comfortable and convenient.


Light, Sturdy, Roomy and Comfortable!!

By dbakevlar from Westminster, CO on 3/17/2012

5 out of 5

Pros: Lightweight, Easy To Access Items, High Quality, Comfortable, Good Capacity, Easy To Carry, Stylish

Cons: No Zipper pocket in main

Best Uses: Weekend Trips

Describe Yourself: Modern, Stylish, Practical, Comfort-Oriented, Career

Travel Frequency: 6-10 Annual Trips

Primary use: Business

Was this a gift?: No

Going on both regional and international trips this year and needed a bag to carry both my ultrabook, tablet and accessories. This bag sits close to the body and is extremely comfortable. I have arthritis in my neck and shoulders, so for anyone needing extra comfort for long trips through airports, cities and towns- this is a great bag!!

Posted in Database

March 13th, 2012 by Kellyn Pot'Vin

I will be presenting for the first time as an Enkitec employee in Colorado Springs, March 22nd.  Due to a scheduling conflict though, it’s on a Thursday evening for once!  All you SQL Server folks out there that want to come on out, here are the details:

SpringsSQL

Posted in Database

March 9th, 2012 by Kellyn Pot'Vin

I’ve been again, unfortunately, reminded what makes a poor work environment for a DBA or Developer.  Eddie Awad had sent this out and the second one really resonated with me:

http://www.washingtonpost.com/national/on-leadership/how-to-completely-utterly-destroy-an-employees-work-life/2012/03/05/gIQAxU3iuR_story.html?tid=sm_twitter_washingtonpost

I refer to this in the IT world as the “technical bully syndrome” and it’s quite common in the arena.  It’s not because it’s a male dominated group- I’ve seen bullies come in all shapes and sizes, along with being of both genders.  In fact, there are a couple studies out there that say women managers and leads are more likely to bully a subordinate than a male is.

Now there isn’t any law against having a bad boss/manager/lead.  I lucked out in that I actually have quite a good manager where I currently work, but even a good manager has to note the cost to the business when people report to someone who does bully.  The individual pinpointed as the bully will most likely get along very well with 90% of the folks around them.  They have a consistent need, though, to pick out an “outlet” for their aggressions/frustrations and may have a small set of subordinates or peers that they switch between, bullying in intervals.  Often the switch occurs as the bully’s target starts to, or simply threatens to, escalate to management or HR if the bullying does not cease; the bully will then move their bulls-eye onto another target.

The cost?  Time, resources, revenue and productivity to the business.

It takes time to bully someone- time out of the bully’s day, but also time out of the target’s day when they could be productive.  The target can also be highly impacted post the incidents, feeling frustrated, disrespected and devalued.  The time lost then results in productivity loss to both parties, which in turn then costs the company revenue.  If left to continue for very long, resources are lost and high-turnover becomes the norm.

The target, who may have been highly productive in the past, will start to show fewer results and seem to have conflicting requirements.  They may seem frustrated and angry, feeling unable to understand how to address the problem successfully-  no matter what changes they make to their work style, without management intervention they are unable to find a way to avoid the bully’s wrath.

The bully often chooses targets that are not weak, but strong.  Peers or subordinates that make them feel threatened, no matter if the threat is real or imagined.  They will continually find ways to undermine the target and make them seem insignificant.  If the target does decide to choose a defensive pose, becoming angry or responding in kind, this can escalate the behavior from the bully.  Often the bully does not see themselves as doing anything wrong!  They may be completely unaware of their behavior, having grown up with it and/or may live with similar disrespect in their personal relationships currently.

When I have been a target in the past, it was often either because my technical skills threatened the bully, (who then seemed completely focused on trying to convince everyone those skills were not sound, and/or gave conflicting requests to make it impossible for me to meet requirements) or because I reminded them of someone, (often a wife or mother..) that they had long-time issues with, and they felt it was acceptable to use me as an outlet when they became frustrated.

There are those that were equal-opportunity bullies.  It didn’t matter if you were male, female, what race you belonged to or your position- as long as you were below them, you were fair game and you simply did your best to avoid their bulls-eye.

I have stayed employed at a company where I was a target on and off for a couple of years, until someone figured out what was going on and rid the company of the bully that was causing the high turnover and tumultuous work environment.  The DBA group at that time enjoyed quite peaceful, productive years after the bully’s departure.

I have worked for another company for a very short time, where I saw the bulls-eye aiming for me and left so fast that the bully’s head spun.

No one knows what a DBA does but a DBA.  No one knows what a good DBA is except other DBAs, (and then some would argue that there are a few that are heavily followed that would not be if most DBAs only knew… 🙂)  As a DBA who has seen how well IT groups can work together when everyone respects each other and the skills each person offers, and how poorly the group will perform and how high turnover will be when a bully exists in the group, I hope more managers take notice.

Yes, it’s not against the law to have a bully in your midst, but it should be a crime to allow it to damage the productivity, growth, resources and revenue for the company.

Posted in Database, DBA Life

February 28th, 2012 by Kellyn Pot'Vin

Kellyn’s Schedule of Presentations, 2012, 2nd and 3rd Quarter
March 24th, 2012, Colorado Springs, CO
SQL Saturday
Oracle for SQL Server DBA’s

April 19th, 2012, Billund, Denmark
MOW 2012
ASH Analytics- Top Activity, the Next Generation

April 20th, 2012, Billund, Denmark
MOW 2012
EM12c: Making Oracle Enterprise Manager Work for You

May 30th, 2012, Hämeenlinna, Finland
OUG Harmony 2012
EM12c: Making Oracle Enterprise Manager Work for You

June 1st, 2012, Riga, Latvia
JUV Harmony 2012
EM12c: Making Oracle Enterprise Manager Work for You

June 26th, 2012, San Antonio, TX.
KSCOPE 2012
EM12c: Making Oracle Enterprise Manager Work for You

Posted in Database

February 27th, 2012 by Kellyn Pot'Vin

Yes, my dear Watson, Bind Peeking is still an issue in 11g….

We recently upgraded from 10.2.0.4 to 11.2.0.2 and came across a couple of interesting performance challenges. I do like this type of problem and have been quite busy working on the fixes for each one that has shown itself. This was one of the latest, requiring a solid, positive relationship between the developer and the DBA to solve- the DBA may have the answer, but the developer must be comfortable with the solution and make the code/design change that is required to correct the issue. I do work with a number of such developers, so this was a quick fix in our fast-paced environment…

Oracle’s Cost Based Optimizer makes the best decisions when the design and the code are in accordance with “best practices”. I know the term “best practices” is a bit over-used, so we will first discuss what this means.

In the example I’ll be using in my case below, the following are the design practices that should have been applied when the objects were first introduced to production, but were not. I wanted to include the example so that folks could see just how many different ways Oracle could slice and dice the data when deciding what path to take through the database to the completion of a process.
Best practices in this case would be:
1. Partitions with similar sizing.
2. No truncated partitions.
3. Single-column partitioning keys.
4. Stage tables that are utilized for a process and then promptly emptied after the completion of the process.

Scenario:
Performance challenge on one step in a feed process. This process utilizes the following steps:
1. Chooses a “partition ID”, which is the partitioning key on a stage table. There is no logic to the partitioning key, (p_id)- it’s simply chosen as part of the steps from a list marked as a truncated partition that is ready to be utilized.
2. Inserts into this partition. Some loads can be quite small, (less than 1M), others can be quite large, (100G+).
3. A merge statement is then performed on the stage table to delete from the partition any duplicates and other non-loading indicators.
4. This partition is then used to join to the final table to decide what is inserted, updated and deleted from another partition. The final partition has the same design vulnerability of skewed sizing in partition data.
5. The partition in the staging table is then truncated.

First Step by the DBA:
dbms_xplan.display_awr

Twelve different explain plans were found to have been used for the insert statement in question.
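
For reference, pulling every historical plan for a statement out of AWR looks something like this, (using the SQL_ID from the output further below):

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('7rjk8mr85s2nq'));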

Second step by the DBA:
Checked table, partition and column stats.

Select count(column_name) Col_cnt, last_analyzed from dba_tab_columns
 where table_name='CUSTOMER_ACCOUNTS'
 group by Last_analyzed;
COL_CNT LAST_ANALYZED
 27     1/24/2012 3:09:58 AM
 3      2/23/2012 12:41:23 PM

The columns with the newest date in question are the indexed columns, which is good in this environment. The bad is the number of histograms on columns: we have already researched and found that our waits are on the data dictionary and the scheduler in 11g since moving to collecting histograms on tons of columns, on tons of partitions, that we never use in where clauses, etc., with no way to notify the database of this in the largely CTAS-type process code base.

So, we now know we do have new stats; histograms were bloated, (method_opt was set to auto) but they are still accurate and up to date. The performance issue started before the date we changed over to that method option on the stats.

Question by DBA: How does the database know what and how to build histograms when different logic is used depending on the code, and when a partition can be used by any one of the processes? This results in the CBO making poor decisions on column stats.
This is quickly verified when column stats are compared between one partition and the next for large feeds using the same logic as the last use of the partition.
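
As a sketch, that comparison can be made straight from the data dictionary, (the partition names here are placeholders):

select partition_name, column_name, num_distinct, density, histogram
from dba_part_col_statistics
where table_name='CTS1'
and partition_name in ('P_112','P_113')
order by column_name, partition_name;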

Unique keys are multi-column, and the partitioning key is skewed on the final tables. This key is in the where clause, so we do have accurate information outside of the CTAS code/stats histogram issue. With more data involving histograms and partition data skewed across the partitions, does this leave us vulnerable to bind peeking?

DBA Test case for bind peeking:
After pinpointing the hash_value for the plan used by the poor performing, large processing version of the performance issue, determine what values would have been used for both bind variables.
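
The captured bind values themselves can be pulled from AWR, (a minimal sketch against DBA_HIST_SQLBIND, using the SQL_ID from the output below):

select snap_id, name, datatype_string, value_string
from dba_hist_sqlbind
where sql_id='7rjk8mr85s2nq'
order by snap_id, position;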

Replaced the bind variables, one at a time, with the actual values, noted the change in execution plan.
Noted that when the partition was a literal vs. the bind variable, the execution plan changed drastically and contained valid values for the cost and row count for the process.

Although the :B1 bind variable appears to be just as susceptible to the bind peek, it is the partitioning key for the staging table, (CTS1) that is the true culprit on large processing.

DBA recommendation:
Change from inserting into a partition to performing a CTAS for the logic involved, bypassing any deletes or merges, simply creating the table with the data it requires, uniquely named.
Use this table in the insert’s subselect to eliminate the bind peek issue.
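
A rough sketch of what that recommendation looks like, (the table and column names are hypothetical stand-ins for the real feed objects):

-- one uniquely named working table per feed, instead of a reused stage partition
create table stage_feed_1234 as
select /*+ parallel(s 8) */ *
from feed_source s
where s.feed_id = 1234
and s.load_flag = 'N';

-- gather stats so the CBO sees the true size of this load, then
-- reference stage_feed_1234 in the insert's subselect
exec dbms_stats.gather_table_stats(ownname=>'DW_USR', tabname=>'STAGE_FEED_1234');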

Summary:

The simple concept behind the “CTS1” table was that it was a “parking lot”, where a process could simply find an empty parking space,  (partition) and proceed to utilize it.  The problem is that Oracle can’t figure out if it’s just parked a smart car or a semi in the spot.  To eliminate the issue, the parking lot was removed by going to a working table vs. the partitioned stage table, and then the database simply had to work with whatever size of “vehicle” it had been issued in the process, (post stats collection, of course!)

DBMS_XPLAN.DISPLAY_AWR Output:
 PLAN_TABLE_OUTPUT
 SQL_ID 7rjk8mr85s2nq
 --------------------
 INSERT INTO DW_USR.CT1 (COL1,COL2,COL3,COL4,COL5,COL6,COL7.....
 COL18,COL19,COL20)
 SELECT * FROM ( SELECT S.COL1, S.COL2, S.COL3, ROW_NUMBER ()
 OVER (PARTITION BY S.COL5, S.COL6,S.COL7, S.COL8
 ORDER BY S.COL12) COL6,S.COL2, S.COL12, S.COL15,
 S.COL19, S.COL5, S.COL15, S.COL22,
 S.COL27, S.LAST_FILE_ID, S.COL28,
 S.COL29, S.COL31, S.COL33,
 S.COL35, S.COL36, S.COL37,
 S.COL38, S.COL39, S.COL40,
 S.COL43, S.COL44, S.F1, S.F2, S.F3,
 S.F4, S.F5, S.F6, S.F7, S.F8, S.F9, S.F10,
 S.COL45, S.COL46, S.COL47,
 S.COL48, S.COL81
 FROM ( SELECT /*+ use_hash(cts ca) */
 CTS.COL5, CA.COL3, CTS.COL6,
 CTS.COL76 AS COL6, CTS.COL2,
 CTS.COL12, CTS.COL15, NULL AS COL19,
 CTS.COL5, CTS.COL15, CTS.COL22,
 CTS.COL27, CTS.COL7 AS LFID,
 CTS.COL28, CTS.COL17, CTS.COL31,
 CTS.COL33, CTS.COL35, CTS.COL36,
 CTS.COL37, CTS.COL38,
 CTS.COL39, CTS.COL40 AS COL40, :B5 AS
 COL43, :B5 AS COL44, CTS.F1, CTS.F2,
 CTS.F3, CTS.F4, CTS.F5, CTS.F6, CTS.F7, CTS.F8,
 CTS.F9, CTS.F10, CTS.COL45, :B4 AS COL46, :B3 AS
 COL47, CTS.COL48, CTS.COL49, CTS.COL50,
 CTS.COL51, CTS.COL52,
 CTS.COL53, CTS.COL54,
 CTS.COL55, CTS.COL48,
 CTS.COL28
 FROM DW_USR.CTS1 CTS,
 (SELECT COL2, COL3, COL5
 FROM (SELECT COL2, COL3,
 COL5, ROW_NUMBER () OVER(PARTITION BY COL2, COL5
 ORDER BY COL43 DESC, IND_ID) C_RNK
 FROM DW_USR.CA1 WHERE COL2 = :B1 ) C
 WHERE C_RNK = 1) CA
 WHERE CTS.P_ID = :B2  AND CTS.COL2 = :B1  AND CTS.COL2 = CA.COL2 (+)
 AND CTS.COL5 = CA.COL5 (+)
 AND CTS.COL72 = 'N' ) S ) X
 WHERE X.COL3 IS NOT NULL
 AND NOT EXISTS
 ( SELECT /*+ use_hash(ct) */ 1
 FROM DW_USR.CT1 CT
 WHERE X.COL9 = CT.COL9
 AND X.COL4 = CT.COL4
 AND X.COL2 = CT.COL2
 AND X.COL2 = CT.COL3
 AND CT.COL3 = :B1 )
Plan hash value: 653243445
-------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | 1659 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 23 | 110K| 1659 | | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID | CT1 | 1 | 52 | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | 1 | KEY | KEY |
 | 5 | VIEW | | 23 | 109K| 1657 | | |
 | 6 | WINDOW SORT | | 23 | 4232 | 1657 | | |
 | 7 | HASH JOIN | | 23 | 4232 | 1644 | | |
 | 8 | VIEW | | 1 | 56 | 14 | | |
 | 9 | WINDOW SORT PUSHED RANK | | 1 | 52 | 14 | | |
 | 10 | FILTER | | | | | | |
 | 11 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 12 | TABLE ACCESS BY LOCAL INDEX ROWID| CA1 | 1 | 52 | 1 | KEY | KEY |
 | 13 | INDEX RANGE SCAN | CA1_PK | 1 | | 1 | KEY | KEY |
 | 14 | PARTITION RANGE SINGLE | | 11M| 1346M| 1385 | KEY | KEY |
 | 15 | TABLE ACCESS FULL | CTS1 | 11M| 1346M| 1385 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 1492830811
---------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 ---------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 23810 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 23 | 110K| | 23810 | | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID| CT1 | 1 | 52 | | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | | 1 | KEY | KEY |
 | 5 | VIEW | | 23 | 109K| | 23808 | | |
 | 6 | WINDOW SORT | | 23 | 4646 | | 23808 | | |
 | 7 | HASH JOIN | | 23 | 4646 | | 23795 | | |
 | 8 | PARTITION RANGE SINGLE | | 1 | 146 | | 20 | KEY | KEY |
 | 9 | TABLE ACCESS FULL | CTS1 | 1 | 146 | | 20 | KEY | KEY |
 | 10 | VIEW | | 14M| 753M| | 23623 | | |
 | 11 | WINDOW SORT PUSHED RANK | | 14M| 457M| 1293M| 23623 | | |
 | 12 | FILTER | | | | | | | |
 | 13 | PARTITION LIST SINGLE | | 14M| 457M| | 2267 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CA1 | 14M| 457M| | 2267 | KEY | KEY |
 ---------------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 1588626705
-------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 3699K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 62M| 287G| 2977M| 3699K| | |
 | 2 | PARTITION LIST SINGLE | | 70M| 2165M| | 10018 | KEY | KEY |
 | 3 | INDEX FAST FULL SCAN | CTS1_PK | 70M| 2165M| | 10018 | KEY | KEY |
 | 4 | VIEW | | 62M| 285G| | 546K| | |
 | 5 | WINDOW SORT | | 62M| 9461M| 27G| 546K| | |
 | 6 | HASH JOIN | | 62M| 9461M| 2794M| 182K| | |
 | 7 | VIEW | | 43M| 2301M| | 75523 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 43M| 1315M| 3620M| 75523 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 43M| 1315M| | 13076 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 43M| 1315M| | 13076 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 62M| 6107M| | 5407 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CTS1 | 62M| 6107M| | 5407 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 1672751078
------------------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
 ------------------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 235K(100)| | | |
 | 1 | LOAD TABLE CONVENTIONAL | | | | | | | | |
 | 2 | HASH JOIN ANTI | | 1355 | 6522K| 632M| 235K (7)| 00:03:24 | | |
 | 3 | VIEW | | 135K| 631M| | 134K (10)| 00:01:57 | | |
 | 4 | WINDOW SORT | | 135K| 24M| 26M| 134K (10)| 00:01:57 | | |
 | 5 | HASH JOIN | | 135K| 24M| 18M| 132K (10)| 00:01:55 | | |
 | 6 | PARTITION RANGE SINGLE | | 135K| 16M| | 287 (32)| 00:00:01 | KEY | KEY |
 | 7 | TABLE ACCESS FULL | CTS1 | 135K| 16M| | 287 (32)| 00:00:01 | KEY | KEY |
 | 8 | VIEW | | 11M| 610M| | 90184 (13)| 00:01:19 | | |
 | 9 | WINDOW SORT PUSHED RANK| | 11M| 338M| 480M| 90184 (13)| 00:01:19 | | |
 | 10 | FILTER | | | | | | | | |
 | 11 | PARTITION LIST SINGLE| | 11M| 338M| | 38574 (9)| 00:00:34 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CA1 | 11M| 338M| | 38574 (9)| 00:00:34 | KEY | KEY |
 | 13 | PARTITION LIST SINGLE | | 20M| 840M| | 6426 (5)| 00:00:06 | KEY | KEY |
 | 14 | INDEX RANGE SCAN | CTS1_PK | 20M| 840M| | 6426 (5)| 00:00:06 | KEY | KEY |
 ------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1930179405
------------------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
 ------------------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 1572K(100)| | | |
 | 1 | LOAD TABLE CONVENTIONAL | | | | | | | | |
 | 2 | HASH JOIN RIGHT ANTI | | 3033K| 13G| 691M| 1572K (2)| 00:22:44 | | |
 | 3 | PARTITION LIST SINGLE | | 17M| 489M| | 4192 (6)| 00:00:04 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CTS1_PK | 17M| 489M| | 4192 (6)| 00:00:04 | KEY | KEY |
 | 5 | VIEW | | 5152K| 23G| | 246K (9)| 00:03:34 | | |
 | 6 | WINDOW SORT | | 5152K| 835M| 909M| 246K (9)| 00:03:34 | | |
 | 7 | HASH JOIN | | 5152K| 835M| 619M| 155K (11)| 00:02:15 | | |
 | 8 | PARTITION RANGE SINGLE | | 5152K| 560M| | 11305 (31)| 00:00:10 | KEY | KEY |
 | 9 | TABLE ACCESS FULL | CTS1 | 5152K| 560M| | 11305 (31)| 00:00:10 | KEY | KEY |
 | 10 | VIEW | | 11M| 608M| | 69790 (16)| 00:01:01 | | |
 | 11 | WINDOW SORT PUSHED RANK| | 11M| 325M| 478M| 69790 (16)| 00:01:01 | | |
 | 12 | FILTER | | | | | | | | |
 | 13 | PARTITION LIST SINGLE| | 11M| 325M| | 20402 (16)| 00:00:18 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CA1 | 11M| 325M| | 20402 (16)| 00:00:18 | KEY | KEY |
 ------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2157924392
--------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 --------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 267K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 4459K| 20G| 152M| 267K| | |
 | 2 | PARTITION LIST SINGLE | | 3204K| 116M| | 564 | KEY | KEY |
 | 3 | TABLE ACCESS FULL | CT1 | 3204K| 116M| | 564 | KEY | KEY |
 | 4 | VIEW | | 4459K| 20G| | 44503 | | |
 | 5 | WINDOW SORT | | 4459K| 723M| 1574M| 44503 | | |
 | 6 | HASH JOIN | | 4459K| 723M| 357M| 16818 | | |
 | 7 | VIEW | | 5516K| 294M| | 8663 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 5516K| 173M| 505M| 8663 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 5516K| 173M| | 492 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 5516K| 173M| | 492 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 3052K| 331M| | 329 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | C | 3052K| 331M| | 329 | KEY | KEY |
 --------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 2900900913
---------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 ---------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 306K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 35M| 164G| | 306K| | |
 | 2 | PARTITION LIST SINGLE | | 1 | 52 | | 1 | KEY | KEY |
 | 3 | TABLE ACCESS BY LOCAL INDEX ROWID| CT1 | 1 | 52 | | 1 | KEY | KEY |
 | 4 | INDEX RANGE SCAN | CT1_IDX | 1 | | | 1 | KEY | KEY |
 | 5 | VIEW | | 35M| 163G| | 278K| | |
 | 6 | WINDOW SORT | | 35M| 6046M| 16G| 278K| | |
 | 7 | HASH JOIN | | 35M| 6046M| 687M| 47011 | | |
 | 8 | VIEW | | 10M| 566M| | 17018 | | |
 | 9 | WINDOW SORT PUSHED RANK | | 10M| 343M| 971M| 17018 | | |
 | 10 | FILTER | | | | | | | |
 | 11 | PARTITION LIST SINGLE | | 10M| 343M| | 988 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CA1 | 10M| 343M| | 988 | KEY | KEY |
 | 13 | PARTITION RANGE SINGLE | | 14M| 1624M| | 1595 | KEY | KEY |
 | 14 | TABLE ACCESS FULL | CTS1 | 14M| 1624M| | 1595 | KEY | KEY |
 ---------------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 2973217172
-------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 208K| | |
 | 1 | HASH JOIN RIGHT ANTI | | 6346K| 29G| | 208K| | |
 | 2 | PARTITION LIST SINGLE | | 12908 | 630K| | 3 | KEY | KEY |
 | 3 | INDEX RANGE SCAN | CTS1_PK | 12908 | 630K| | 3 | KEY | KEY |
 | 4 | VIEW | | 6346K| 28G| | 85943 | | |
 | 5 | WINDOW SORT | | 6346K| 1373M| 2916M| 85943 | | |
 | 6 | HASH JOIN | | 6346K| 1373M| 533M| 33969 | | |
 | 7 | VIEW | | 8231K| 439M| | 15389 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 8231K| 282M| 754M| 15389 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 8231K| 282M| | 2423 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 8231K| 282M| | 2423 | KEY | KEY |
 | 12 | PARTITION RANGE SINGLE | | 6346K| 1034M| | 1171 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CTS1 | 6346K| 1034M| | 1171 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
**** This was the plan hash value used in our poorest performer! ****
Plan hash value: 3005115193
-------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 89378 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 287K| 1347M| 912M| 89378 | | |
 | 2 | PARTITION LIST SINGLE | | 22M| 658M| | 2193 | KEY | KEY |
 | 3 | INDEX FAST FULL SCAN | CTS1_PK | 22M| 658M| | 2193 | KEY | KEY |
 | 4 | VIEW | | 287K| 1338M| | 63268 | | |
 | 5 | WINDOW SORT | | 287K| 41M| 92M| 63268 | | |
 | 6 | HASH JOIN | | 287K| 41M| 29M| 61637 | | |
 | 7 | PARTITION RANGE SINGLE | | 287K| 26M| | 25 | KEY | KEY |
 | 8 | TABLE ACCESS FULL | CTS1 | 287K| 26M| | 25 | KEY | KEY |
 | 9 | VIEW | | 27M| 1446M| | 42660 | | |
 | 10 | WINDOW SORT PUSHED RANK| | 27M| 852M| 2482M| 42660 | | |
 | 11 | FILTER | | | | | | | |
 | 12 | PARTITION LIST SINGLE| | 27M| 852M| | 2548 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CA1 | 27M| 852M| | 2548 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 3149367802
--------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 --------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 87386 | | |
 | 1 | HASH JOIN RIGHT ANTI | | 1260K| 5917M| 210M| 87386 | | |
 | 2 | PARTITION LIST SINGLE | | 4603K| 158M| | 327 | KEY | KEY |
 | 3 | TABLE ACCESS FULL | CT1 | 4603K| 158M| | 327 | KEY | KEY |
 | 4 | VIEW | | 1260K| 5874M| | 22365 | | |
 | 5 | WINDOW SORT | | 1260K| 251M| 539M| 22365 | | |
 | 6 | HASH JOIN | | 1260K| 251M| 198M| 12826 | | |
 | 7 | PARTITION RANGE SINGLE | | 1260K| 183M| | 142 | KEY | KEY |
 | 8 | TABLE ACCESS FULL | CTS1 | 1260K| 183M| | 142 | KEY | KEY |
 | 9 | VIEW | | 4701K| 251M| | 7346 | | |
 | 10 | WINDOW SORT PUSHED RANK| | 4701K| 152M| 430M| 7346 | | |
 | 11 | FILTER | | | | | | | |
 | 12 | PARTITION LIST SINGLE| | 4701K| 152M| | 238 | KEY | KEY |
 | 13 | TABLE ACCESS FULL | CA1 | 4701K| 152M| | 238 | KEY | KEY |
 --------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 3517833265
-------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | | 142K| | |
 | 1 | HASH JOIN ANTI | | 37130 | 174M| 173M| 142K| | |
 | 2 | VIEW | | 37130 | 173M| | 130K| | |
 | 3 | WINDOW SORT | | 37130 | 6563K| 13M| 130K| | |
 | 4 | HASH JOIN | | 37130 | 6563K| 4992K| 130K| | |
 | 5 | PARTITION RANGE SINGLE | | 37130 | 4532K| | 5 | KEY | KEY |
 | 6 | TABLE ACCESS FULL | CTS1 | 37130 | 4532K| | 5 | KEY | KEY |
 | 7 | VIEW | | 57M| 3089M| | 90806 | | |
 | 8 | WINDOW SORT PUSHED RANK| | 57M| 1655M| 4859M| 90806 | | |
 | 9 | FILTER | | | | | | | |
 | 10 | PARTITION LIST SINGLE| | 57M| 1655M| | 12448 | KEY | KEY |
 | 11 | TABLE ACCESS FULL | CA1 | 57M| 1655M| | 12448 | KEY | KEY |
 | 12 | PARTITION LIST SINGLE | | 14M| 548M| | 2629 | KEY | KEY |
 | 13 | INDEX FAST FULL SCAN | CTS1_PK | 14M| 548M| | 2629 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)
Plan hash value: 3518559863
-------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
 -------------------------------------------------------------------------------------------------------------------
 | 0 | INSERT STATEMENT | | | | 49 | | |
 | 1 | HASH JOIN ANTI | | 1 | 4924 | 49 | | |
 | 2 | VIEW | | 1 | 4887 | 32 | | |
 | 3 | WINDOW SORT | | 1 | 154 | 32 | | |
 | 4 | HASH JOIN | | 1 | 154 | 19 | | |
 | 5 | VIEW | | 1 | 56 | 14 | | |
 | 6 | WINDOW SORT PUSHED RANK | | 1 | 52 | 14 | | |
 | 7 | FILTER | | | | | | |
 | 8 | PARTITION LIST SINGLE | | 1 | 52 | 1 | KEY | KEY |
 | 9 | TABLE ACCESS BY LOCAL INDEX ROWID| CA1 | 1 | 52 | 1 | KEY | KEY |
 | 10 | INDEX RANGE SCAN | CA1_PK | 1 | | 1 | KEY | KEY |
 | 11 | PARTITION RANGE SINGLE | | 23150 | 2215K| 4 | KEY | KEY |
 | 12 | TABLE ACCESS FULL | CTS1 | 23150 | 2215K| 4 | KEY | KEY |
 | 13 | PARTITION LIST SINGLE | | 1963K| 69M| 1 | KEY | KEY |
 | 14 | TABLE ACCESS BY LOCAL INDEX ROWID | CT1 | 1963K| 69M| 1 | KEY | KEY |
 | 15 | INDEX RANGE SCAN | CT1_IDX | 1333 | | 1 | KEY | KEY |
 -------------------------------------------------------------------------------------------------------------------
Note
 -----
 - cpu costing is off (consider enabling it)

Posted in Database

November 22nd, 2011 by Kellyn Pot'Vin

As part of an 11g upgrade, it was found that a database environment could be built from one of the upgraded databases through an RMAN duplicate.  As this process had never been performed in this fashion before, a test was in order.  The test was an excellent chance to discover that the OS user performing the duplicate was the proud owner of a .kshrc file with hard-coded Oracle environment variables, which is an excellent choice if you want to really screw up a new duplicate database… 🙂

Scenario:

1.  The duplicate is started with the correct variables set.

2.  Subsequent shell scripts then “upset” the environment mid-process, leaving a failed duplicate with ASM files created for the controlfiles, but with the DBID and db name still set to the target database rather than the auxiliary.

3.  After the failure, the auxiliary, (duplicate) database can only be mounted, not opened.

After the duplicate failure, with your environment set to the auxiliary database, you’ll see in the spfile that the db name is no longer the auxiliary’s, it’s the target’s.  To recover:
1.  Shut down the auxiliary database:
[oracle@dbs]$ sqlplus '/ as sysdba'
SQL> shutdown;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
2.  In a second screen, set your environment to the auxiliary’s +ASM instance and remove the controlfiles that were created under the target’s db name, using the asmcmd command console:
[oracle@dbs]$ asmcmd
ASMCMD> ls
DATA_H/
DATA_RAM_H/
ASMCMD> cd DATA_H
ASMCMD> ls
DB_H/
ASMCMD> cd DB_H
ASMCMD> ls
CONTROLFILE/
ASMCMD> cd CONTROLFILE
ASMCMD> ls
control1.ctl  <-- these are the controlfiles for the auxiliary, stuck with the target’s db name; they can’t be renamed and the db can’t be mounted!
control2.ctl
control3.ctl
current.389.766743537
current.445.766743537
current.500.766743537
ASMCMD> rm control*
You may delete multiple files and/or directories.
Are you sure? (y/n) y
ASMCMD> quit
3.  Back in your original screen, restart the auxiliary, pointing at a pfile with the correct db name:
SQL>  startup nomount pfile=’/u01/app/oracle/product/11.2.0/dbhome_2/dbs/initdb_h.ora’
ORACLE instance started.
Total System Global Area 7.6964E+10 bytes
Fixed Size                  2215704 bytes
Variable Size            3.0065E+10 bytes
Database Buffers         4.6708E+10 bytes
Redo Buffers              189513728 bytes
SQL> create spfile from pfile;
File created.
SQL> shutdown;
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 7.6964E+10 bytes
Fixed Size                  2215704 bytes
Variable Size            3.0065E+10 bytes
Database Buffers         4.6708E+10 bytes
Redo Buffers              189513728 bytes
Now you can restart the duplicate process and the database will again be recognized correctly.
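
Before re-running the duplicate, it’s worth a quick sanity check that the instance now identifies itself as the auxiliary, (both commands work while in NOMOUNT):

SQL> select instance_name, status from v$instance;
SQL> show parameter db_name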

Posted in Database

October 20th, 2011 by Kellyn Pot'Vin

I had worked hard on a report, 47 SQL statements in all, tuning it down from 5 hours to under 30 minutes.  The first runs had been quite successful, so when a third run sent an alert on temp usage, I knew something was wrong.
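
The alert came from a temp usage check along these lines, (a minimal sketch joining v$tempseg_usage to v$session; the exact query and threshold in our monitoring job are assumptions):

select s.sid, s.process, s.machine, q.sql_text, s.sql_id,
       round(sum(t.blocks) * p.blk / 1024 / 1024) total_mb   -- temp blocks * db_block_size, in MB
  from v$tempseg_usage t
  join v$session s on s.saddr = t.session_addr
  join v$sql q on q.sql_id = s.sql_id and q.child_number = 0
 cross join (select to_number(value) blk from v$parameter
              where name = 'db_block_size') p
 group by s.sid, s.process, s.machine, q.sql_text, s.sql_id, p.blk
 order by total_mb desc;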

     SID PROCESS      MACHINE  SQL_TEXT    SQL_ID         TOTAL MB
-------- ------------ -------- ----------- ------------- ---------
     507 1137         HOST     SELECT ***  7t3muww36xhzn     45516
     600 1139         HOST     SELECT ***  7t3muww36xhzn     45516
     525 1132         HOST     SELECT ***  7t3muww36xhzn     45516
     509 1135         HOST     SELECT ***  7t3muww36xhzn     45516

I checked the stats first, as one of the fixes had been to ensure the staging tables in this process had statistics gathered after the initial feeds came in, but both tables involved showed valid statistics:

SQL> select num_rows, last_analyzed from dba_tab_partitions
  2  where table_name='<I_STAGE>'
  3  and partition_name='P170';

  NUM_ROWS LAST_ANAL
---------- ---------
 480900000 17-OCT-11
SQL> select last_analyzed from dba_tables
  2  where table_name='<SML_TBL>';

LAST_ANAL
---------
17-OCT-11

I ran a quick AWR report for the specific SQL_ID to see what I was dealing with, execution-plan-wise.
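
For reference, the SQL-specific AWR report comes from the packaged script, which prompts for the report format, the begin and end snapshot IDs, (46198 and 46200 here) and the SQL_ID:

SQL> @?/rdbms/admin/awrsqrpt.sql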


              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     46198 18-Oct-11 09:00:09       282       7.9
  End Snap:     46200 18-Oct-11 10:00:13       245       8.4
   Elapsed:               60.07 (mins)
   DB Time:            1,874.62 (mins)

 

SQL ID: 7t3muww36xhzn           DB/Inst: PRODUCTION/PROD  Snaps: 46198-46200
-> 1st Capture and Last Capture Snap IDs
   refer to Snapshot IDs within the snapshot range
-> SELECT ***

    Plan Hash           Total Elapsed                 1st Capture   Last Capture
#   Value                    Time(ms)    Executions       Snap ID        Snap ID
--- ---------------- ---------------- ------------- ------------- --------------
1   324636810               4,785,428             4         46199          46199
2   4097803110                  1,047             1         46200          46200
          -------------------------------------------------------------

Plan 1(PHV: 324636810)
----------------------

Plan Statistics                 DB/Inst: PRODUCTION/PROD  Snaps: 46198-46200
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                         4,785,428    1,196,357.0     4.3
CPU Time (ms)                             3,450,070      862,517.6     8.4
Executions                                        4            N/A     N/A
Buffer Gets                                 759,453      189,863.3     0.0
Disk Reads                                  683,619      170,904.8     2.8
Parse Calls                                      35            8.8     0.0
Rows                                              0            0.0     N/A
Execution Plan
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost  | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |       |       |     9 |       |       |        |      |
|   1 |  COUNT STOPKEY                |             |       |       |       |       |       |        |      |
|   2 |   PX COORDINATOR              |             |       |       |       |       |       |        |      |
|   3 |    PX SEND QC (ORDER)         | :TQ10002    |     1 |   240 |     9 |       |       |  Q1,02 | P->S | QC (ORDER)
|   4 |     VIEW                      |             |     1 |   240 |     9 |       |       |  Q1,02 | PCWP |
|   5 |      SORT ORDER BY STOPKEY    |             |     1 |   120 |     9 |       |       |  Q1,02 | PCWP |
|   6 |       PX RECEIVE              |             |     1 |   240 |       |       |       |  Q1,02 | PCWP |
|   7 |        PX SEND RANGE          | :TQ10001    |     1 |   240 |       |       |       |  Q1,01 | P->P | RANGE
|   8 |         SORT ORDER BY STOPKEY |             |     1 |   240 |       |       |       |  Q1,01 | PCWP |
|   9 |          HASH JOIN            |             |     1 |   120 |     5 |       |       |  Q1,01 | PCWP |
|  10 |           PX RECEIVE          |             |     1 |   109 |     3 |       |       |  Q1,01 | PCWP |
|  11 |            PX SEND BROADCAST  | :TQ10000    |     1 |   109 |     3 |       |       |  Q1,00 | P->P | BROADCAST
|  12 |             PX BLOCK ITERATOR |             |     1 |   109 |     3 |   KEY |   KEY |  Q1,00 | PCWC |
|  13 |              TABLE ACCESS FULL| I_STAGE     |     1 |   109 |     3 |   KEY |   KEY |  Q1,00 | PCWP |
|  14 |           PX BLOCK ITERATOR   |             |  5002 | 55022 |     2 |       |       |  Q1,01 | PCWC |
|  15 |            TABLE ACCESS FULL  | SML_TBL     |  5002 | 55022 |     2 |       |       |  Q1,01 | PCWP |
------------------------------------------------------------------------------------------------------------------------

This is the one that was eating up all the temp!  Note that even though I had checked the stats, and they were correct as of the previous day with no changes to the partition stats, the execution plan shows only one row.  Anyone who listens to Maria Colgan knows that’s just Oracle giving you the benefit of the doubt and saying, “I don’t think there are any rows in this object, (or sub-object in this case..) but I’ll give you 1 row for the fun of it!”

Second execution plan in the report is the one I desired:

Plan 2(PHV: 4097803110)
-----------------------

Plan Statistics                 DB/Inst: PRODUCTION/PROD  Snaps: 46198-46200
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                             1,047        1,046.7     0.0
CPU Time (ms)                                   967          967.0     0.0
Executions                                        1            N/A     N/A
Buffer Gets                                   2,007        2,007.0     0.0
Disk Reads                                        3            3.0     0.0
Parse Calls                                       9            9.0     0.0
Rows                                             40           40.0     N/A
User I/O Wait Time (ms)                           1            N/A     N/A
Execution Plan
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes |TempSpc| Cost  | Pstart| Pstop |    TQ  |IN-OUT| PQ
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |       |       |       |  2834 |       |       |        |      |
|   1 |  COUNT STOPKEY                |             |       |       |       |       |       |       |        |      |
|   2 |   PX COORDINATOR              |             |       |       |       |       |       |       |        |      |
|   3 |    PX SEND QC (ORDER)         | :TQ10002    |  1272K|   291M|       |  2834 |       |       |  Q1,02 | P->S | QC
|   4 |     VIEW                      |             |  1272K|   291M|       |  2834 |       |       |  Q1,02 | PCWP |
|   5 |      SORT ORDER BY STOPKEY    |             |  1272K|   152M|   389M|  2834 |       |       |  Q1,02 | PCWP |
|   6 |       PX RECEIVE              |             |    40 |  9600 |       |       |       |       |  Q1,02 | PCWP |
|   7 |        PX SEND RANGE          | :TQ10001    |    40 |  9600 |       |       |       |       |  Q1,01 | P->P | RA
|   8 |         SORT ORDER BY STOPKEY |             |    40 |  9600 |       |       |       |       |  Q1,01 | PCWP |
|   9 |          HASH JOIN            |             |  1272K|   152M|       |     7 |       |       |  Q1,01 | PCWP |
|  10 |           PX RECEIVE          |             |  5005 | 55055 |       |     2 |       |       |  Q1,01 | PCWP |
|  11 |            PX SEND BROADCAST  | :TQ10000    |  5005 | 55055 |       |     2 |       |       |  Q1,00 | P->P | BR
|  12 |             PX BLOCK ITERATOR |             |  5005 | 55055 |       |     2 |       |       |  Q1,00 | PCWC |
|  13 |              TABLE ACCESS FULL| SML_TBL     |  5005 | 55055 |       |     2 |       |       |  Q1,00 | PCWP |
|  14 |           PX BLOCK ITERATOR   |             |  1271K|   139M|       |     4 |   KEY |   KEY |  Q1,01 | PCWC |
|  15 |            TABLE ACCESS FULL  | I_STAGE     |  1271K|   139M|       |     4 |   KEY |   KEY |  Q1,01 | PCWP |
------------------------------------------------------------------------------------------------------------------------

So what changed? What impacted my statistics?

Upon investigation, I came to the conclusion that it was a combination of a “feature” and what I think is a bug in 10g dbms_stats.

A search of the stats processing showed that while the one process was executing against the P170 partition of I_STAGE, a number of other partitions in the same table were having stats gathered post-load:

declare
  v_stage_table_name varchar2(64);
begin
  select min(stage_table_name)
    into v_stage_table_name
    from stage_tables
   where stage_table_type_cd = '<I_STAGE>';
  dbms_stats.gather_table_stats(
    ownname          => 'dw_user',
    tabname          => v_stage_table_name,
    partname         => 'P450',
    estimate_percent => .01,
    granularity      => 'PARTITION',
    method_opt       => 'for all columns size 1',
    no_invalidate    => false,
    cascade          => false,
    degree           => 4);
end;

Now the key here in the statement above is:

no_invalidate=>false

If you read the description for this from Oracle:

no_invalidate: Does not invalidate the dependent cursors if set to TRUE. The procedure invalidates the dependent cursors immediately if set to FALSE. Use DBMS_STATS.AUTO_INVALIDATE to have Oracle decide when to invalidate dependent cursors; this is the default. The default can be changed using the SET_PARAM procedure.
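
For reference, AUTO_INVALIDATE is passed as a package constant rather than a boolean, (same call as above, trimmed to the relevant arguments):

begin
  dbms_stats.gather_table_stats(
    ownname       => 'dw_user',
    tabname       => '<I_STAGE>',   -- anonymized name from this post
    partname      => 'P450',
    granularity   => 'PARTITION',
    no_invalidate => dbms_stats.auto_invalidate);
end;
/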

The surmised bug is one where, even though the dbms_stats call performed by another process is partition level, the invalidation of the cursors occurs across all partitions of the object, causing them all to be invalidated and requiring the SQL to be re-parsed.  (There are a number of similar bugs already documented in 10.2.0.4 for partition-level statistics gathering…)

The feature that allows Oracle to re-parse and take advantage of the newest statistics in the data dictionary resulted in a performance problem in this instance, as the cursors were invalidated for a process that needed no changes to statistics.

I tested repeatedly against partitions, collecting stats with no_invalidate set to FALSE, TRUE and even AUTO_INVALIDATE to see what would occur, and it consistently impacted my cursors against other partitions.  I can find no documented bug, but as many know, I’m about to move everything to 11g in short order and expect it would be a waste of time to pursue it too far…
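
The repeat test itself was simple, (a sketch using the anonymized names from this post):

begin
  dbms_stats.gather_table_stats(
    ownname       => 'dw_user',
    tabname       => '<I_STAGE>',
    partname      => 'P450',
    granularity   => 'PARTITION',
    no_invalidate => true);   -- also run with false and dbms_stats.auto_invalidate
end;
/

-- then check whether cursors against OTHER partitions of the table took invalidations:
select sql_id, child_number, invalidations, loaded_versions
  from v$sql
 where sql_id = '7t3muww36xhzn';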

I notified the Java developer who owns this code to please update the call to no_invalidate=>true to correct the performance impact short term, and I look forward to 11g bugs replacing my exhaustion with 10g ones! 🙂

Posted in Database
