…also known as a follow-up post on “dropping unused columns…”
In my last post I discussed how to drop unused columns on a compressed, partitioned table. As I mentioned there, the task came about because a developer had a huge update that would have taken over 24 hours, but could be done in about 70 seconds if we utilized an exchange partition option instead.
The update on the two partitions needed to be performed BEFORE I could get the maintenance window I needed to drop the unused columns. Being the “too smart for my own britches” DBA that I am, I figured out a workaround that I thought I would share, and then also share the backlash of being too smart for your own britches… 🙂
To work around the compressed partition with unused columns, I was able to still utilize the exchange partition option with the following steps:
Instead of a CTAS, I had the developer create the table with the appropriate unused columns in the correct order, as they would have been originally. The actual names of the columns do not matter; it’s the original COLUMN #, along with the datatype, length and nullability, that’s important. SYS.COL$ shows this for the source table (replace OWNER and TABLE_NAME with your own):
select name, col#, type#, length, null$
from sys.col$
where obj# in
(select object_id from dba_objects
where owner = 'OWNER' and object_name = 'TABLE_NAME')
order by col#;
You will gather the following information on the Unused columns from your source table:
COLUMN NAME                     ORIGINAL COL#   DATATYPE   LENGTH   NULLABLE
SYS_C00017_09121318:40:16$      7               NUMBER     22       Y
SYS_C00028_09121318:40:46$      8               DATE                Y
SYS_C00029_09121318:41:04$      9               DATE                Y
create table OWNER.NEW_TAB4712
( MEMBER_ID NUMBER not null,
CUSTOMER_NBR VARCHAR2(30) not null,
IND_ID NUMBER not null,
COLUMN7 NUMBER, <–New columns in place with the right datatype and nullability!
COLUMN8 DATE,
COLUMN9 DATE )
tablespace TBL_DATA pctfree 10 initrans 1;
ALTER TABLE NEW_TAB4712 set unused column COLUMN7;
ALTER TABLE NEW_TAB4712 set unused column COLUMN8;
ALTER TABLE NEW_TAB4712 set unused column COLUMN9;
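To sanity-check that all three columns were registered as unused before going any further, you can query DBA_UNUSED_COL_TABS; a quick sketch using the table name from the example above:

```sql
-- one row per table with unused columns; expect COUNT = 3 for NEW_TAB4712
select owner, table_name, count
  from dba_unused_col_tabs
 where table_name = 'NEW_TAB4712';
```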
With the columns set to unused, they will be invisible to your transactions, but available to match the columns for the exchange partition.
If interested, you will see that the columns on your new table now match the columns in dba_tab_cols for your source partitioned table, including the unused columns, (not counting the unused column names, but remember, that’s not important):
select * from dba_tab_cols
where table_name = 'TABLE_NAME' <–replace with your table name here
and column_id is null;
Now, insert the data that was in your partition into your new table:
Insert into new_tab4712 select * from source_table
where member_id=4712; <–Our partitioning key
The developer then updated the rows he needed in NEW_TAB4712, preparing it for the exchange. Now commit the work and exchange the partition:
ALTER TABLE source_table EXCHANGE PARTITION P_4712 WITH TABLE new_tab4712 UPDATE INDEXES;
You will not receive the column mismatch error on the unused columns, and you are able to successfully exchange a partition on a compressed table with unused columns!
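A quick sanity check after the exchange, sketched with the same example names: the updated rows should now live in the partition, and the work table should hold the partition’s old contents:

```sql
-- rows that were in NEW_TAB4712 are now in partition P_4712:
select count(*) from source_table partition (P_4712);

-- and NEW_TAB4712 now holds whatever was in the partition before the swap:
select count(*) from new_tab4712;
```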
NOW, let’s discuss the backlash… 🙂
When I wrote out these steps for my developer as a learning exercise, as well as to show the benefits of exchange partition for fixing data issues, I didn’t tell him that after he exchanged the partition, (this developer is production support, so he is the one who actually does this work…) we would need to ensure that it was in the APPROPRIATE TABLESPACE!
Now this would not seem to be THAT big of a deal for most shops, but we build a new datamart each week, based off an RMAN recovery of this production system, with a list of tablespaces that are excluded, so as not to bring over unnecessary objects.
Yes, you guessed it, the two partitions he performed the earlier steps on and exchanged into the 400GB partitioned table were in a different tablespace than the rest of the table and not brought over in the initial mart build step.
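The check I should have had him run after the exchange; a sketch with a placeholder table name, showing which tablespace each partition of the 400GB table actually sits in:

```sql
-- any partition in an unexpected tablespace stands out immediately
select partition_name, tablespace_name
  from dba_tab_partitions
 where table_name = 'SOURCE_TABLE'
 order by partition_position;
```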
The mart build step that is sourced off this 400GB table failed with:
ERROR at line 1:
ORA-00376: file 167 cannot be read at this time
ORA-01110: data file 167: ‘/u01/app/oracle/product/10.2.0/db_1/dbs/MISSING00167’
Oh yeah, dumb DBA trick, (Kellyn strikes head against keyboard several times at this point!!) I was bound and determined that the other DBA, who is on call this week, was not going to have to start over on the mart build, already three days in before this was breached, so I started digging in and figuring out how to correct it. (I think the other DBA has started taking bets at this point on whether I’d be able to get myself out of this tangled mess, too! 🙂)
Similar steps to above, so I won’t go into full detail, but these were the steps:
1. Build an export/import script from production to the new mart that exports the two missing partitions and imports them back into the schema in the mart.
2. Drop the global index, (and yes, any indexes on the table with the missing partitions must be dropped, painful, painful, hurts, yes!)
3. Drop the two partitions from the table that were on the tablespace/datafiles not recovered in the initial step of the mart build, (with the index(es) gone, you will no longer receive the ORA-00376 error when attempting a drop partition.)
4. Create new, empty partitions with the appropriate values, SPECIFYING the correct tablespace that DOES exist and that they should have been in to begin with.
5. Run the shell script that exported and imported the partitions, with both partition imports set to ignore=y so it would simply load the rows back into the pre-created partitions from step 4.
6. Recreate the global index that was needed as part of the mart build processes.
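The recovery steps above, sketched in rough form. All object names are placeholders carried over from the earlier examples, the export/import script itself is omitted, and list partitioning on MEMBER_ID is assumed:

```sql
-- 2. drop the global index blocking the partition drops:
drop index owner.source_table_gidx;

-- 3. drop the partitions that point at the unrecovered datafiles
--    (no more ORA-00376 once the index is gone):
alter table source_table drop partition P_4712;
alter table source_table drop partition P_4713;

-- 4. recreate them empty, SPECIFYING a tablespace that DOES exist in the mart:
alter table source_table add partition P_4712 values (4712) tablespace TBL_DATA;
alter table source_table add partition P_4713 values (4713) tablespace TBL_DATA;

-- 5. (shell) import the exported partitions back into the empty ones,
--    with ignore=y so the "table exists" errors are skipped:
--    imp ... tables=(SOURCE_TABLE:P_4712,SOURCE_TABLE:P_4713) ignore=y

-- 6. recreate the global index needed by the mart build:
create index owner.source_table_gidx on source_table (customer_nbr);
```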
Problem fixed- the mart build was back on track, losing about 16 hours, 80% of that due to the global index creation, (remind me why I hate global indexes, whether we need them at times or not!)
Final step- MOVE THE DARN TWO PARTITIONS IN THE PRODUCTION SYSTEM to where they SHOULD BE, so I never, NEVER have to go through this again! 🙂
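That final move can be sketched like this, again with placeholder names. Note that on this 10.2 system, a MOVE PARTITION leaves dependent index partitions UNUSABLE, so rebuilds follow:

```sql
-- relocate the two stray partitions into the table's proper tablespace:
alter table source_table move partition P_4712 tablespace TBL_DATA;
alter table source_table move partition P_4713 tablespace TBL_DATA;

-- local index partitions go UNUSABLE after the move and need a rebuild:
alter index source_table_lidx rebuild partition P_4712;
alter index source_table_lidx rebuild partition P_4713;

-- any global index on the table needs a full rebuild:
alter index source_table_gidx rebuild;
```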