
New Temp Tablespace Group

I know I owe a few folks a blog post on temp tablespace groups. THANK YOU for your patience; these last couple of months I’ve been way too busy and haven’t been able to allocate the time I would have liked here…

Temp tablespace groups ROCK, plain and simple. This is a feature I have been working with extensively, and I have seen great performance improvements in my data warehouse environments when it is implemented correctly. My main focus has been striping temp tablespace groups across Fusion-io cards, but moving from a standard temp tablespace to a temp tablespace group on standard hardware shows significant improvement as well for systems with high read and write waits on temp.

For anyone who has huge temp waits for reads and writes, it’s painful to watch, especially if you have the ability to view it through OEM. The bands of blue and yellow wait lines that show up in I/O waits or in “OTHER” can be frustrating.

If you run an ASH report, you will see waits similar to what I experienced for the process that I will use as my example further into this post:

Event                      % Event   P1 Value, P2 Value, P3 Value   % Activity   Parameter 1          Parameter 2   Parameter 3
direct path read temp      43.27     "1001","3502","7"              0.01         file number          first dba     block cnt
PX Deq Credit: send blkd   26.18     "268501001","2","0"            1.50         sleeptime/senderid   passes        qref
                                     "268501001","1","0"            1.18
                                     "268501003","2","0"            1.06
direct path read            5.10     "50","11333","11"              0.01         file number          first dba     block cnt
direct path write temp      1.47     "1003","50949","7"             0.01         file number          first dba     block cnt

SQL ID          Planhash     % Activity   Event                      % Event   SQL Text
dhytpraqah5c5   2880106315   100.00       direct path read temp      43.27     create table report_tbl1…
                                          PX Deq Credit: send blkd   26.18
                                          CPU + Wait for CPU         23.93
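For reference, an ASH report like the ones excerpted above can be generated from SQL*Plus with the script Oracle ships in the database home, (it prompts for the report type and the begin/end time window):

SQL> @?/rdbms/admin/ashrpt.sql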

There are six things to remember when planning to move to a temp tablespace group:
1.  Plan your temp tablespace group carefully-  the tablespaces should be of the same size, with the same number of tempfiles, each tempfile the same size.
2.  Inconsistent sizes across the tablespaces can hurt the efficiency of the striping-  no auto-extend.
3.  The temp work will stripe across the tablespaces quite close to evenly, rather than filling the files of one tablespace at a time, so keep this in mind when looking at temp tablespace usage, (a query sketch for watching this follows the list).
4.  Lay your files out in differing orders so the same-numbered file never lands on the same drive as a file from another temp tablespace in the group-  this eliminates more I/O contention.
5.  Estimate the required size of each tablespace at approximately 40-60% of your current temp tablespace needs, depending on your database usage, (parallel processing, rowid range scans and types of sorts…)
6.  Do not use temp tablespace groups as a substitute for good ol’ fashioned tuning!  Temp tablespace usage that is larger than acceptable relative to the data set size the code is working with is a clear indicator that the code should be examined for tuning opportunities.
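A minimal query sketch for watching the striping from item 3, assuming you have SELECT access to the V$ views: V$TEMPSEG_USAGE reports the temp segments in use, so grouping it by tablespace while a big sort or hash is running shows how evenly the work is spread across the group.

SELECT tablespace,
       COUNT(*)     AS segments,
       SUM(extents) AS extents,
       SUM(blocks)  AS blocks
  FROM v$tempseg_usage
 GROUP BY tablespace
 ORDER BY tablespace;

If the group is striping as designed, the block counts should stay quite close to even across the member tablespaces.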

The testing process below is based on a process that is the largest consumer of temp tablespace in one of our databases. The process utilized 720GB of temp tablespace and had an execution time of 16 hrs, and my goals were the following:

  • Cut the execution time in half to re-allocate the resources to other processes.
  • Reduce temp usage by 60%.
  • Use temp tablespace groups to eliminate the temp read/write waits in the existing process.

Create the temp tablespace group, three temp tablespaces total, 200GB each—
CREATE TEMPORARY TABLESPACE TEMP_STDRD1 TEMPFILE '/u00/oradata/sid1/temp_stdrd1_01.dbf' SIZE 51200M,
'/u10/oradata/sid1/temp_stdrd1_02.dbf' SIZE 51200M,
'/u20/oradata/sid1/temp_stdrd1_03.dbf' SIZE 51200M,
'/u30/oradata/sid1/temp_stdrd1_04.dbf' SIZE 51200M
TABLESPACE GROUP STDRD_TEMP;

CREATE TEMPORARY TABLESPACE TEMP_STDRD2 TEMPFILE '/u30/oradata/sid1/temp_stdrd2_01.dbf' SIZE 51200M,
'/u20/oradata/sid1/temp_stdrd2_02.dbf' SIZE 51200M,
'/u10/oradata/sid1/temp_stdrd2_03.dbf' SIZE 51200M,
'/u00/oradata/sid1/temp_stdrd2_04.dbf' SIZE 51200M;
ALTER TABLESPACE TEMP_STDRD2 TABLESPACE GROUP STDRD_TEMP;


CREATE TEMPORARY TABLESPACE TEMP_STDRD3 TEMPFILE '/u30/oradata/sid1/temp_stdrd3_01.dbf' SIZE 51200M,
'/u10/oradata/sid1/temp_stdrd3_02.dbf' SIZE 51200M,
'/u00/oradata/sid1/temp_stdrd3_03.dbf' SIZE 51200M,
'/u20/oradata/sid1/temp_stdrd3_04.dbf' SIZE 51200M;
ALTER TABLESPACE TEMP_STDRD3 TABLESPACE GROUP STDRD_TEMP;
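To confirm the group picked up all three tablespaces, a quick check against the data dictionary, (DBA_TABLESPACE_GROUPS maps each group to its member tablespaces):

SELECT group_name, tablespace_name
  FROM dba_tablespace_groups
 ORDER BY group_name, tablespace_name;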

Switch the database and users over to the new temp tablespace group—
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE STDRD_TEMP;
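That covers everyone using the database default; for a schema that had the old temp tablespace assigned explicitly, point it at the group directly. A sketch, where rpt_owner is a hypothetical schema name:

-- rpt_owner is a placeholder; substitute the schema that runs the process
ALTER USER rpt_owner TEMPORARY TABLESPACE STDRD_TEMP;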

Break the original process up from one statement with complex aggregating into two CTAS statements, removing one sorting pass:

SQL> create table orpt_test tablespace tbl_data1
compress nologging parallel 8 as
SELECT /*+ full(c) parallel (c 8) */ -- note the parallel hint: all orders for all time!
c.member_id, ... PERCENT_RANK()
OVER (ORDER BY c.orders_by_amt) pct_rnk
FROM all_transactions c WHERE trans_type_cd != 'F';

Table created.
Elapsed: 03:42:41.38

The estimated run time the old way was 16 hrs.

SQL> CREATE INDEX orpt_tst_idx1
 on orpt_test(m_id, file_id) PARALLEL 6 nologging
 tablespace idx_data1;

Index created.
Elapsed: 01:12:49.62

Set the table to noparallel and gather stats—
ALTER TABLE orpt_test noparallel;

exec dbms_stats.gather_table_stats( ownname=>'OWNER',tabname=>'ORPT_TEST', ESTIMATE_PERCENT=>.00001, 
METHOD_OPT=> 'FOR ALL INDEXED COLUMNS', GRANULARITY=>'ALL',CASCADE=>TRUE, DEGREE=>6);

Create the secondary table, which utilizes the index to ease into the hash join and limits temp usage by reducing the amount of data the sort has to work through, compared to performing it all in one CTAS—

create table NEW_RPT_TBL
 tablespace tbl_data2 compress nologging pctfree 0 parallel 8
 as
 SELECT /*+ parallel(o,2) */
 b_id,
 MIN(t_dt),
 MAX(t_dt),
 COUNT(*),
 SUM(o_amt)
 -- ...and so on and so forth, more and more aggregation, hard to believe there could be so much... :)
 FROM
 (
 SELECT /*+ parallel(c,2) index(c,orpt_tst_idx1) use_hash(c cm mm m) */
 c.m_id,
 ...
 FROM orpt_test c, -- here's our new table, with much of the aggregation already performed!
 m_tbl1 mm,
 crm_mtbl2 cm,
 files_form3 m
 where c.m_id = mm.m_id
 AND mm.m_cd = cm.m_cd
 AND c.file_id = m.file_id (+)
 AND c.m_id = m.m_id (+)
 ) o
 GROUP BY m_id,
 c_cd,
 m_cd,
 l_name,
 f_id,
 b_id;
Table created.
Elapsed: 02:36:21.12

Drop the “work” table and indexes once the final table is created.

DROP TABLE ORPT_TEST PURGE;

NEW PROCESS COMPLETE…

Did I meet my goal?
Largest amount of temp space used:  178GB,  (Old way: 721GB)
Time to complete process:  7 hrs, 31 min., (Old way: 16 hrs)

Temp Tablespace Group Benefits to this entire tuning process:
Temp space usage without a temp tablespace group, new process: 423GB, (over twice as much!)
Process completion time of the new process without a temp tablespace group: 12 hrs, (4 1/2 hrs longer!)
Temp tablespace waits eliminated with temp tablespace groups from the original process:
Reads-  62%
Writes- 27%
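To quantify the same waits in your own environment, one simple sketch is to snapshot the cumulative temp wait events before and after the process and compare the deltas, (V$SYSTEM_EVENT counters are instance-wide since startup):

SELECT event, total_waits, time_waited_micro
  FROM v$system_event
 WHERE event IN ('direct path read temp', 'direct path write temp');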

Where are the improvements seen?
1.  I/O is distributed evenly across multiple datafiles instead of hammering one datafile at a time, as happens with only a single temp tablespace.
2.  Parallel sort performance increased, on average, over 20% for window and buffer sorts, (the improvement was not seen for sorts involving rowid range scans…)
3.  The database improved overall as resources were re-allocated to processes that had been hindered by the I/O bottleneck of heavy swapping to the temp tablespace.
4.  For large hash processes, where temp tablespace usage is unavoidable, temp tablespace groups offer significant improvement in I/O performance and better CPU utilization.

12 thoughts on “New Temp Tablespace Group”

  • Hello Bennie,
    Could you elaborate, please? I'm not positive what you are looking for here… 🙂

  • Kellyn,
    Excellent coverage on temp tablespace group with great details. Thanks for sharing the numbers.

    – Sanjay

  • Thank you for all the kind comments- hope the data benefits and I will provide more as I have more time! 🙂

  • Martin Berger

    Kellyn,
    Do you think Temp Tablespace Groups also make sense with ASM Diskgroups (with multiple disks)?
    I assume they should spread I/Os similarly over several disks.
    great post,
    Martin

  • I have only had limited experience with ASM, (I wish I had environments that would benefit more from what will soon be the standard from Oracle, but until then, I'll be managing my files at the OS level!), but considering how temp tablespace groups function, striping across each of the tablespaces involved in the group, I would expect excellent performance with ASM.
    I can see one benefit to manual management of OS datafiles vs. ASM, though: if you note in my example, I discuss how through testing I found the best performance when I placed files on different drives, carefully choosing the layout so no two files that would be written to at any given time would be on the same drive. I'd have to wonder if ASM would be able to determine this for temp tablespace groups. I know it stripes by OS I/O load, but could it foresee the I/O striping and usage of a temp tablespace group?

    Something to think about…and try out! 🙂
    Thanks!
    Kellyn

  • Virendra Kumar

    I loved the idea but have a question: do you think a temporary tablespace group will be helpful if all the tablespaces in the group are on the same mount, meaning all the datafiles are on the same mount?

    Regards,
    Virendra

  • “It depends” is the answer. If you are in a non-ASM environment, then I would say the chances are very, very high that you would greatly benefit from temporary tablespace groups, even if they were all on the same mount, due to how temp space is written to in a non-ASM environment. If it is ASM, then it depends on how you have configured your environment/users to use temp space.
    Thanks,
    Kellyn

  • This was a great blog about temp tablespace groups. So good I sent it to all my dba friends.
    And I'm going to ignore that you said there are 3 things to remember and then listed 6, with #4 used twice.
    A new fan.
    M
