
When PGA Size is Not Enough

I’ve come to realize that most folks don’t really understand the limitations on the allocation of PGA memory to hashing and sorting. They truly believe that if they set their PGA high enough, the database will allocate as much memory as the setting allows toward any hash or sort process. In truth, (and I am referencing 10g here…) there is a “threshold” to protect the database from allocating too much PGA memory to any one process, of 200MB. This ceiling applies to sorting and hashing, and it is why I am such a strong advocate of viewing temp space usage in execution plans.

Most developers and DBAs, especially anyone working in a data warehouse environment where hash joins are more prevalent, should be viewing temp space usage in their explain plans. It is important to know that once you hit this threshold, the database is going to “swap” to temp space for sorting and hashing, because that is most often when your performance goes out the window.
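As an aside, the per-workarea ceiling the instance is actually enforcing can be read straight from v$pgastat, under the statistic ‘global memory bound’ (the value is in bytes):

select name, round(value/1024/1024,1) mb
from v$pgastat
where name = 'global memory bound';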

Utilizing our “trusty” query to check for temp space usage, we drill down for sorting and hashing:

select vs.sid, vs.osuser, vs.process, vs.sql_id, vtu.segtype,
       ((vtu.blocks*8)/1024) mb,   -- assumes an 8KB block size; adjust for yours
       vtu.tablespace
from v$tempseg_usage vtu, v$session vs
where vtu.session_num = vs.serial#
and vtu.segtype in ('HASH','SORT')
order by vtu.blocks desc;

These are the sql_ids whose statements you should pull and flag for tuning exercises.
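To pull the statement text for a given sql_id, something along these lines against v$sql will do it, (&sql_id is a substitution variable here):

select sql_id, child_number, sql_fulltext
from v$sql
where sql_id = '&sql_id';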

If you pull the temp usage into an explain plan, it will look similar to the one below, (but hopefully in a more readable format than mine!):

Description                 Owner   Object_Name  Cost   Rows     Temp Usage
HASH JOIN ANTI                                   15806  75479    222663050
VIEW                        DW_ONR  Hash_1       10942  75479    220021285
WINDOW SORT                                      10942  75479    14718405
HASH JOIN                                        10386  75479
PARTITION RANGE SINGLE                           149    28815
TABLE ACCESS FULL           DW_ONR  Table_1      149    28815
VIEW                        DW_ONR               7216   4327766  242354896
WINDOW SORT PUSHED RANK                          7216   4327766  147144044
…and so on and so forth…
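If you’d rather get this straight from the database than build it by hand, DBMS_XPLAN reports estimated temp usage in its TempSpc column. A minimal sketch, (the table and column names are illustrative only):

explain plan for
select * from big_table order by some_col;

select * from table(dbms_xplan.display);
-- steps that spill show an estimate in the TempSpc column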

If you can’t tune below the threshold, there is another option that can be examined to see if it’s feasible for your environment: alter the session to NOT use automatic PGA management, and set the hash and sort area sizes to values appropriate to manage the process efficiently BEFORE the execution of the step in the procedure that performs the large hash or sort:


EXECUTE IMMEDIATE 'ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL';
EXECUTE IMMEDIATE 'ALTER SESSION SET SORT_AREA_SIZE = 536870912';  -- 512MB
EXECUTE IMMEDIATE 'ALTER SESSION SET HASH_AREA_SIZE = 536870912';  -- 512MB
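In context, the pattern looks something like the sketch below, (run_large_hash_step is a hypothetical procedure standing in for the heavy step; note the session is handed back to automatic management afterward):

begin
  -- bypass automatic PGA management for this session only
  execute immediate 'alter session set workarea_size_policy = manual';
  execute immediate 'alter session set sort_area_size = 536870912';  -- 512MB
  execute immediate 'alter session set hash_area_size = 536870912';  -- 512MB

  run_large_hash_step;  -- hypothetical: the step doing the large hash or sort

  -- return the session to automatic PGA management
  execute immediate 'alter session set workarea_size_policy = auto';
end;
/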

The third option, and this should only be taken with the approval of Oracle Support, is to actually change the threshold for hash and sort sizes in the PGA with undocumented parameters. All standard precautions with undocumented parameters apply. Below are the parameters, which do not appear in the standard list of parameters, along with how Oracle currently calculates them in any database when they are not set explicitly.

· _pga_max_size, which will override the 200MB allocation
· _smm_max_size, 5% of _pga_max_size
· _smm_px_max_size, defaults to 30% of pga_aggregate_target
· Degree of parallelism > CEILING(_smm_px_max_size/_smm_max_size)
· _pga_max_size = 2*_smm_max_size
· _smm_px_max_size = 50%*pga_aggregate_target
· When DOP <= 5, _smm_max_size is used.
· When DOP > 5, _smm_px_max_size/DOP is used.
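To check the live values of these hidden parameters, a query like the one below against the x$ structures works, (it must be run as SYS, and x$ layouts can change between versions):

select a.ksppinm parameter, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and (a.ksppinm like '\_smm%' escape '\'
     or a.ksppinm = '_pga_max_size');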

13 thoughts on “When PGA Size is Not Enough”

  • timurakhmadeev

    Hello, Kevin,

    this one
    >(and I am referencing 10g here…) there is a "threshold" to protect the database from allocating too much PGA memory to any one process, of 200MB
    is wrong. In 10g one can use 10% of the PAT for a single workarea; a good way to get the threshold value is to query v$pgastat and look for 'global memory bound'.
    Here is a quick example (sorry, don't know how to format here):
    drop table t1 cascade constraints;
    create table t1(x varchar2(500)) pctfree 0 pctused 99;
    insert /*+ append */ into t1
    with t as (select /*+ materialize */ null from all_objects where rownum <= 5000)
    select lpad(rownum, 400, 'x')
    from t t12, t t22
    where rownum <= 1e6;
    commit;
    exec dbms_stats.gather_table_stats(user, 't1');

    then execute a query

    select /*+ no_eliminate_oby(@q) */ count(*) from (select /*+ qb_name(q) */ * from t1 order by x);

    and monitor the v$sql_workarea_active:

    select sql_id
    , operation_type
    , trunc(work_area_size/1024/1024)
    , trunc(max_mem_used/1024/1024)
    , trunc(tempseg_size/1024/1024)
    from v$sql_workarea_active;

    You would see something like this:

    SQL_ID         OPERATION_TYPE  WORK_AREA_SIZE_MB  MAX_MEM_USED_MB  TEMPSEG_SIZE_MB
    -------------  --------------  -----------------  ---------------  ---------------
    anyvpqvwsm67p  SORT (v2)                     409              399

  • timurakhmadeev

    Sorry, I wanted to write Kellyn 🙂

  • Kellyn Pedersen

    No problem, I have one of those names…:)
    The query above shows the total PGA allocated per process, but not per hash or sort in the process, which can be seen more efficiently with an explain plan.
    I want to make sure I'm understanding what specifically you are disagreeing with. I've used different documents from Oracle and other DBAs as I've worked with this over the years, specifically Jonathan Lewis' drawn-out work on it as well: http://www.jlcomp.demon.co.uk/snark.html
    Let me know what you disagree on and I'll either correct or elaborate on where I haven't explained myself well.
    Thanks!
    Kellyn

  • timurakhmadeev

    Kellyn,

    my point is 200MB is not a limit for a process' workareas in 10gR2, and I've demonstrated that a single workarea uses ~400MB (with PAT=4GB) in 10.2.0.4.

    Jonathan's comments are of course correct, but they are applicable mostly to 9i, not 10g; and things usually change between major Oracle releases.

    Refer to Joze Senegacnik's presentation and paper for in-depth details on PGA memory management: http://tonguc.yilmaz.googlepages.com/JozeSenegacnik-PGAMemoryManagementvO.zip

  • Randolf

    Kellyn,

    regarding the manual workarea_size_policy: there is a nasty bug with SORT_AREA_SIZE that was introduced in 10.2.0.3; see e.g. Jonathan's blog post for more details.

    Randolf

  • Kellyn Pedersen

    I'll read up on the paper for 10g. Although this is the version I work in primarily, my experience in this area is old enough that I may be behind the times and need to update!
    I think my biggest concern is that many folks don't realize there is a limit at all on how much will fit per process in PGA. So many seem surprised when they "swap" to temp, scratching their heads when they are using temp to sort or hash.. 🙂
    And Randolf, that bug! The manual workarea_size_policy is in legacy code in my own environment, and I ran into that bug in version 10.2.0.4 right after applying the Oct 2009 PSU patch! I was not thrilled, and after seeing no issues during the six months I've been with the company, (although I prefer to tune the SQL before the idea of overriding anyway, it was still preferable to undocumented parameters!) I was even less thrilled with the bug!
    Thank you for the info!
    Kellyn

  • I’ve read this- it’s really good! As I’ve tuned PGA, I’ve learned there is one really important rule of thumb: no matter how much PGA you give, if you have a few queries/processes that are “swapping” out of PGA, as in there is no way they can fit within the percentage of the PGA area granted to a process, then you will always have processes that bring down your PGA cache hit ratio. Monitoring for temp usage and knowing what is “swapping” out of PGA during processing is a good indicator of poor performance.
    Good luck and thank you for pasting the link!
    ~Kellyn

  • Paul Muller

    If you intend to modify a session to use workarea_size_policy = MANUAL and then set HASH_AREA_SIZE to a value large enough to hold the expected hash operation, how is this number affected if the operation in question is going to be run in parallel? Do we set the hash_area_size for the total operation, or for the expected size divided by the expected number of parallel slaves?

  • Hi Paul,
    Be mindful of the version of Oracle you are running on. I’ve noted in 11.2.0.3 that poor performance is now seen when PGA is bypassed in this way, and I am working to move most everyone who has utilized it away from it; but in 10g it still does work for those that need a larger hash_area_size than would be allocated by the PGA.
    Now, with that being said, with parallel, each parallel slave will be allocated the hash_area_size amount that is set. The hash_area_size is a per-session allotment, so each slave would be granted this amount. You must consider this when configuring the SGA, (since PGA is a separate configuration and it’s a target, able to max out what is available on its own and adjust accordingly…) If using Auto DOP, (back into 11g again!) this would be very important to note and could be another reason NOT to go this route, and to work within the PGA allocation of hash instead.
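    To make the arithmetic concrete: a hash_area_size of 512MB at a DOP of 8 means the slave set alone can take 8 x 512MB = 4GB, none of it governed by pga_aggregate_target. A query along these lines, (one common join path, not the only one), shows what the slaves are actually taking:

    select s.sid, s.username,
           round(p.pga_alloc_mem/1024/1024) pga_alloc_mb
    from v$px_session x, v$session s, v$process p
    where x.sid = s.sid
    and s.paddr = p.addr
    order by p.pga_alloc_mem desc;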

  • Paul Muller

    Thanks for the reply! We are on Exadata (11.2.0.3) and have a process which sometimes runs fine, consuming 3-4 GB of temp space; but other times it consumes 10 times that much temp and fails with an ORA-01652 error.
    I suspect (but am not sure) that the failures happen at times when the optimizer allocates less PGA to the session workarea. Assuming this theory is correct, I was thinking about making the process run with workarea_size_policy = MANUAL and giving it a large HASH_AREA_SIZE; but am having second thoughts after your comment.
    So I guess I’m left with the following options:
    1. request a larger PGA (I doubt that will happen and further doubt that it would help much!)
    2. only run the process at less busy times.
    3. tune the process to use less temp.

    Any thoughts on how I might prove or disprove my theory? And if it’s correct, how might I prevent the failures?

  • I would take the time to clearly document what is occurring when it runs well and when it doesn’t. ORA-01652 means you have run out of temp space, which means something is going on and you need to see what.
    Is there a stats update that invalidates the objects during the execution?
    Is it using parallel, and is the number of slaves actually causing the sorts and hash processes to consume more temp at times?
    Is there duplicate data or a coding issue that could be causing the excess temp usage?

    Inspect which hash/sort is consuming the temp space and focus on this first.
    Look at what other processes occur at the same time as the poor performing one.

    I would, again, stay away from bypassing or raising PGA. Inspect your PGA cache hit ratio and see if you can allocate enough PGA to hit 100%; if not, it is very likely this query is one of those that falls outside of the allocation %, no matter what you raise it to.
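    A quick way to eyeball that, (I’m assuming v$pga_target_advice here, which is where the cache hit estimates live):

    select round(pga_target_for_estimate/1024/1024) target_mb,
           estd_pga_cache_hit_percentage,
           estd_overalloc_count
    from v$pga_target_advice
    order by 1;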

  • Pingback: Oracle SQL Developer: Extending a Tablespace
