Although Streams is a goner (GoldenGate taketh over…), my notes on performance considerations from working with Streams are still valuable.
These came from a number of blogs, notes from Oracle, and simple grief, so I thought I would share… 🙂
1. TYPE OF DATA REPLICATION
Due to the way this table is loaded, we need to be conscious of “queue spill”. Queue spill, per Oracle’s support documentation, occurs when LCRs are not processed from the buffered queue in a timely fashion,
causing the older LCRs to “spill” from memory out to disk, into the table aq$_<queue_table>_p.
If queue spill is experienced, first attempt the following:
alter table aq$_<queue_table>_p enable row movement;
alter table aq$_<queue_table>_p shrink space;
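Before shrinking, it is worth confirming that spill is actually happening. A minimal check against the v$buffered_queues fixed view (spill_msgs is the current spilled count, cspill_msgs the cumulative count since startup):
select queue_schema, queue_name, num_msgs, spill_msgs, cnum_msgs, cspill_msgs from v$buffered_queues; <– non-zero spill_msgs means LCRs are spilling to disk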
If the Streams pool is not large enough to hold the LCRs, they can also spill out to disk. In Oracle 10gR2 onward, there can also be apply spill on the target database. Large and long-running transactions coming off the queue
can hurt performance, even when the apply spill table is partitioned. LogMiner spill writes cached information out to the LogMiner spill tables. You may have to increase the amount of information the LogMiner cache can retain to address the problem.
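One way to give LogMiner more cache to work with is the hidden capture parameter _SGA_SIZE (value in MB, default 10); treat this as a sketch to verify with Oracle Support before using, as it is undocumented:
exec dbms_capture_adm.set_parameter('<capture_name>', '_SGA_SIZE', '50'); <– raise LogMiner memory from the 10 MB default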
Checkpoint intervals and elapsed time can also affect performance. This data is retained in a single table that the DBA can query if a performance impact is seen.
Checkpoint frequency and purge intervals can be configured to address any performance issue seen in the checkpoint feature,
especially if more than one concurrent capture process is configured.
Check dba_capture.required_checkpoint_scn or, if digging into LogMiner, system.logmnr_restart_ckpt$, if there is a concern with performance.
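For example, a quick look at each capture process’s checkpoint position and retention setting:
select capture_name, required_checkpoint_scn, checkpoint_retention_time from dba_capture;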
Changing the checkpoint frequency:
exec dbms_capture_adm.set_parameter('<capture_name>', '_checkpoint_frequency', '1000'); <– how often, in MB of redo mined
Older versions required a forced checkpoint:
execute dbms_capture_adm.set_parameter('<capture_name>', '_CHECKPOINT_FORCE', 'Y');
How much checkpoint data do you need to retain? The default can be up to 60 days; to change this:
exec dbms_capture_adm.alter_capture(capture_name=>'lsf_capture', checkpoint_retention_time=>7);
Streams performs extremely poorly on a table without a primary key.
Even a process such as the one above, replicated by partition, will still perform far more efficiently if the partitioning key is, or is part of,
the primary key.
Remove any unnecessary FKs or indexes as well; each one should be justified before being added to a Streams-replicated object.
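To spot replicated tables that lack a primary key, a simple sketch (the schema name is a placeholder):
select table_name from dba_tables where owner = '<SCHEMA>'
minus
select table_name from dba_constraints where owner = '<SCHEMA>' and constraint_type = 'P'; <– tables with no PK constraint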
5. JOB LATENCY
Reduce latency between jobs, set the hidden parameter:
alter system set "_job_queue_interval" = 1 scope=spfile; <– static parameter, requires a restart
Reduce latency on propagation as well; propagation latency is set in seconds and defaults to 60. Latency can be cut significantly by decreasing it to 1 second:
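The latency lives on the propagation schedule and can be changed with dbms_aqadm; a sketch, with the queue name and destination database link as placeholders:
exec dbms_aqadm.alter_propagation_schedule(queue_name=>'<source_queue>', destination=>'<dblink>', latency=>1); <– propagate as soon as messages arrive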
6. NETWORK BANDWIDTH
Increasing the session data unit (SDU) size over a WAN will offer better performance for Streams propagation, and it must be set in both the listener.ora and the tnsnames.ora.
Please research the best value for your network before making the change to your configuration.
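For illustration only, an SDU of 32767 (the 10g maximum) set on both sides; the service name and SID here are placeholders and the value should be tuned for your network:
tnsnames.ora: STRM = (DESCRIPTION = (SDU=32767) (ADDRESS = …) (CONNECT_DATA = …))
listener.ora: SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SDU=32767) (SID_NAME = <sid>) …))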
7. LARGE TRANSACTIONS AND WAITS ON APPLY
Two undocumented apply parameters can assist in performance.
_TXN_BUFFER_SIZE=80 <– start with this in 10.1 and 10.2
_TXN_BUFFER_SIZE=320 minimum for parallel apply in 11.2, where it is auto-tuned
_HASH_TABLE_SIZE=8000 for parallelism, or 100000 for 11.2; take care with this one, there is an unpublished bug for it on some versions… 🙁
If ITL contention is seen, raise the INITRANS value for the replicated table and its indexes so that it equals the number of concurrent transactions occurring against the object.
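For example, if roughly 10 concurrent transactions hit the object (object names are placeholders; note that raising INITRANS on a table only affects newly formatted blocks, while an index rebuild reformats every block):
alter table <table_name> initrans 10;
alter index <index_name> rebuild initrans 10;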
In any non-ASSM tablespaces (if performance issues are seen from freelist contention), PCTFREE should be set to 20:
alter table <table_name> pctfree 20;
alter index <index_name> rebuild pctfree 20;
Example for 10.2 and above:
For environments that consistently experience transactions of over 100000 LCRs, setting the transaction buffer size to accommodate them is a solid choice.
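A sketch of sizing the apply transaction buffer for that kind of workload (the apply name and value are illustrative, and the parameter is undocumented):
exec dbms_apply_adm.set_parameter('<apply_name>', '_txn_buffer_size', '320');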
If your workload consists of UPDATE statements that modify fewer than half the columns of any given row, consider using another hidden parameter:
exec dbms_apply_adm.set_parameter('<apply_name>', '_dynamic_stmts', 'y');
For all versions of Oracle, if there are a considerable number of dependency waits on replicated tables with constraints, increase the size of the dependency hash table; the value is the number of rows it will hold.
exec dbms_apply_adm.set_parameter('<apply_name>', '_hash_table_size', '10000000'); <– 10 million
Parameter Recommendations (per Oracle) for the Streams pool:
11g: >= 100M
10gR2: auto-tuned
10gR1: set in the init.ora according to the memory available.
8. HIDDEN PARAMETERS FOR STREAMS
In releases after 10gR1, remove this parameter and allow it to be managed dynamically by the queue-management background process.
9. Keep it simple!!
Avoid complex rules at all costs! Just as many other Oracle features work best when kept simple, so does Streams.
The less complexity in rules, the less overhead to the database.
- Watch the performance impact of supplemental logging.
- Do not parallelize the capture process; it is not recommended!
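To confirm capture parallelism is at its default, the parameter can be checked and, if needed, reset (the capture name is a placeholder):
select parameter, value from dba_capture_parameters where capture_name = '<CAPTURE_NAME>' and parameter = 'PARALLELISM';
exec dbms_capture_adm.set_parameter('<capture_name>', 'parallelism', '1');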
NOTE:238455.1 – Streams DML Types Supported and Supported Datatypes
NOTE:259609.1 – Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:273674.1 – Streams Configuration Report and Health Check Script
NOTE:290605.1 – Oracle Streams STRMMON Monitoring Utility
NOTE:298877.1 – 10gR1 Streams Recommended Configuration
NOTE:313748.1 – Using Automatic Statistics Collection In A Streams Environment
NOTE:377152.1 – Best Practices for automatic statistics collection on Oracle 10g
NOTE:780733.1 – Streams Propagation Tuning with Network Parameters