After upgrading to 10.2.0.5 on Linux, our Oracle Enterprise Manager would intermittently report that the OC4J was down:

Target Name=EnterpriseManager0.serv3r
Target type=Oracle Application Server
Host=mtlincoln
Occurred At=March 12, 2010 3:09:52 PM MDT
Message=The application server instance is down
Severity=Critical
Acknowledged=No
Notification Rule Name=Application Server Availability and Critical States
Notification Rule Owner=SYSMAN

If you immediately checked the status of the OEM, all responses reported acceptable:

./opmnctl status

Processes in Instance: EnterpriseManager0.serv3r
-------------------+--------------------+---------+---------
ias-component      | process-type       |     pid | status
-------------------+--------------------+---------+---------
HTTP_Server        | HTTP_Server        |   25823 | Alive
LogLoader          | logloaderd         |     N/A | Down
dcm-daemon         | dcm-daemon         |     N/A | Down
OC4J               | Home               |   25822 | Alive…
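If the console keeps flagging the instance, bouncing just the OC4J component is less disruptive than restarting the whole stack; a minimal sketch, assuming the component names in the status output above:

./opmnctl restartproc ias-component=OC4J
./opmnctl status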
-
I know I owe a few folks a blog on temp tablespace groups. THANK YOU for your patience; I know these last couple of months I’ve been way too busy and haven’t been able to allocate the time I would have liked here… Temp tablespace groups ROCK, plain and simple. This is a feature I have been working with extensively and have seen great performance improvements from in my data warehouse environments when implemented correctly. My main focus has been striping temp tablespace groups across fusion I/O cards, but even moving from a standard temp tablespace to a temp tablespace group on standard hardware shows significant improvement…
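For anyone who hasn’t worked with the feature yet, a minimal sketch of creating a group; tempfile paths, sizes, and names here are hypothetical:

-- A group springs into existence with its first member tablespace
CREATE TEMPORARY TABLESPACE temp_a
  TEMPFILE '/fioa/oradata/temp_a01.dbf' SIZE 8G
  TABLESPACE GROUP temp_grp;

CREATE TEMPORARY TABLESPACE temp_b
  TEMPFILE '/fiob/oradata/temp_b01.dbf' SIZE 8G
  TABLESPACE GROUP temp_grp;

-- Point a user (or the database default) at the group, not a single tablespace
ALTER USER dw_user TEMPORARY TABLESPACE temp_grp;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;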
-
Let’s say you have a database in which everything can be recreated just by re-running a process, so no nightly backups are required: the database would be quicker to recreate from its source database than to restore from a nightly backup, or than to take the performance hit of running one. Let’s say you also have three tablespaces on fusion I/O cards that aren’t protected by hardware mirroring, and losing those three could be more of a challenge than the business would like to have. What options could you come up with to protect those tablespaces? Oracle Streams? CTAS…
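As one low-tech option on that list, a scheduled CTAS copy of the critical objects into a tablespace that is protected; a sketch with hypothetical schema, table, and tablespace names:

-- Copy a critical table off the unmirrored fusion I/O tablespace
CREATE TABLE stage_copy.big_table_bkp
  TABLESPACE safe_data
  NOLOGGING PARALLEL 8
AS SELECT * FROM mart.big_table;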
-
Interesting issue… I was working on an interrupted distributed (remote) transaction that I’d come across in one of our smaller reporting systems, one that had been attempting to rerun since 2008 against a decommissioned database (yes, I said 2008…). This is a very “sublime” issue, as Oracle thinks it’s just taking care of a problem behind the scenes; but, like I said, this database no longer existed and was taken down while the remote process was running. The problem involved three databases: the reporting database, we’ll call it REPRT_1; the current production warehouse, PROD_1; and the older production database, PROD_X. Oracle keeps submitting…
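For anyone chasing a similar ghost, the pending transaction and its retry state show up in the data dictionary, and once you’ve confirmed the remote side is truly gone it can be purged; a sketch, assuming DBA privileges and a hypothetical transaction id:

-- See what RECO keeps retrying
SELECT local_tran_id, global_tran_id, state, fail_time
  FROM dba_2pc_pending;

-- Once certain the remote database no longer exists:
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.21.17');
COMMIT;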
-
Tanel Poder had a great post about bind variable peeking, http://blog.tanelpoder.com/2010/02/02/bind-variable-peeking-execution-plan-inefficiency/ I run into this exact issue more often than I would like to, and I have found a few tricks to keep it from happening in code that is “vulnerable” to the situation. Truthfully, there isn’t much you can do after Oracle has gone in, performed a bind peek, and chosen a poor value to peek at. As a DBA, you simply know there is an issue when the explain plan states that the CBO has determined the cost is exceptionally low to perform a nested loop on a…
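One quick way to confirm the peek is to pull the actual plan from the shared pool with the peeked bind values shown; a sketch with a hypothetical sql_id:

-- The 'ADVANCED' format includes the Peeked Binds section
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR('4g2q8kb1mxzft', NULL, 'ADVANCED'));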
-
The Lead Developer and I have been working very, very hard on tuning the main process that produces much of the revenue for our company. The tuning exercise has been very successful, but we’ve run into some challenges in the pre-processing that occurs each week to build the mart that the newly tuned process then runs on. Due to this, I’ve been granted some time to monitor the build process a little more closely than before, and although I have a lot of tuning to do, some of the largest challenges come in the form of queries that the analysts…
-
I’ve come to realize that most folks don’t really understand the limitations on the allocation of PGA memory to hashing and sorting. They truly believe that if they set their PGA target high enough, the database will just allocate as much memory as the setting allows toward any hash or sort process, when in truth (and I am referencing 10g here…) there is a “threshold” to protect the database from allocating too much PGA memory to any one process. This ceiling applies to sorting and hashing, and it is why I am such a strong advocate of viewing temp…
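You can watch that ceiling bite by checking whether work areas are spilling out of memory; a sketch of the views I’d start with:

-- Optimal vs. one-pass vs. multi-pass executions since startup
SELECT low_optimal_size/1024/1024  AS low_mb,
       high_optimal_size/1024/1024 AS high_mb,
       optimal_executions, onepass_executions, multipasses_executions
  FROM v$sql_workarea_histogram
 WHERE total_executions > 0;

-- Active work areas and their temp usage right now
SELECT sid, operation_type,
       actual_mem_used/1024/1024 AS mem_mb,
       tempseg_size/1024/1024    AS temp_mb
  FROM v$sql_workarea_active;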
-
I’m preparing to build our new 11g environment, and as I “study up” on all that is new in 11g, there are a few features that stand out over the rest (at least in my book!). I/O Calibration: I am challenging the uppermost limits of my hardware with how much I/O I can produce. I have spent much of my time since I started with my new company correcting the I/O issues manually (yes, ASM failed me, but I’m sure it will live up to my expectations someday! :)) The CALIBRATE_IO procedure that comes as part of the DBMS_RESOURCE_MANAGER package…
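A minimal sketch of invoking it, assuming 11g and, hypothetically, 8 physical disks behind the datafiles with a 20 ms latency target:

SET SERVEROUTPUT ON
DECLARE
  l_max_iops   PLS_INTEGER;
  l_max_mbps   PLS_INTEGER;
  l_actual_lat PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 8,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||
    ' max_mbps='||l_max_mbps||' latency='||l_actual_lat);
END;
/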
-
After a lovely week off, I came back into work and brought up the OEM console to take a quick look at our databases. I noticed that all the databases from one server were shown as timing out on their agent uploads to Grid Control in the “All Targets” view. I logged onto the server and attempted to upload them manually from the $AGENT_HOME:

emctl upload agent

I received the following message:

ERROR-400 Data will be rejected for upload from agent ‘https://:/emd/main/’, max size limit for direct load exceeded [6827354/5242880]

This error is due to a file limit on the…
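Those [6827354/5242880] numbers point at a 5 MB ceiling on the OMS side. If memory serves, that ceiling is a loader property in emoms.properties; a sketch of the change, hedged since your release may keep it elsewhere:

# On the OMS host, in $OMS_HOME/sysman/config/emoms.properties
# raise the direct-load ceiling (bytes; 5242880 = 5 MB default)
em.loader.maxDirectLoadFileSz=20971520

# then bounce the OMS and retry the upload from the agent
$OMS_HOME/bin/emctl stop oms
$OMS_HOME/bin/emctl start oms
$AGENT_HOME/bin/emctl upload agent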
-
Two weeks ago, after a new mart had performed excellently for 24 hours, a sudden and severe decline in performance set in. I’d been trying to track this issue for almost two months, but had only seen it in two or three processes total, and never at this level. I had some history from previous DBAs, but they didn’t know what was causing it, and when a mart has a limited lifespan before you are dropping and recreating it from a 6TB warehouse, any DBA knows this is not going to be easy. The main goal of the mart is to…
-
…also known as a follow-up post on “dropping unused columns…” In my last post I discussed how to drop unused columns on a compressed, partitioned table. The reason I had come across this task, as I noted in that post, was that a developer had a huge update that would have taken over 24 hours, which we were able to complete in 70 seconds by utilizing an exchange partition. This update on the two partitions needed to be performed BEFORE I could get the maintenance window I needed to drop the unused columns. Being the “too smart for…
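For reference, the two-step pattern the maintenance window was for, with hypothetical names; marking the columns is a dictionary-only change, while the physical drop is the piece that demands the outage:

-- Immediate; the columns vanish from queries but not from disk
ALTER TABLE mart.big_fact SET UNUSED (old_col1, old_col2);

-- The maintenance-window piece; CHECKPOINT limits undo held per N rows
ALTER TABLE mart.big_fact DROP UNUSED COLUMNS CHECKPOINT 10000;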
-
A developer needed to make some serious data changes to a partition. I thought I’d be a helpful DBA and suggested he do a CTAS of the one partition, make the change, and then we’d exchange the partition. He was thrilled when the update statement that I’d killed after 24 hours of no successful runs was cut down to 70 seconds with the CTAS; then all we needed to do was have me exchange it back in:

ALTER TABLE TAB1 EXCHANGE PARTITION P4706 WITH TABLE TAB_CPY_4706;

ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE…
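The usual culprit with ORA-14097 is a hidden or unused column lurking on one side of the exchange; comparing the two tables with hidden columns included tends to expose it. A sketch, assuming the table names above:

SELECT table_name, column_name, column_id, data_type,
       data_length, hidden_column
  FROM dba_tab_cols
 WHERE table_name IN ('TAB1', 'TAB_CPY_4706')
 ORDER BY column_id, table_name;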
-
A developer friend of mine sent me an email today, frustrated that none of the DBAs he worked with could explain this error to him, so he sent it on to me. I thought I would get a post out with this one, as I think it trips up a lot of folks: “I have a report query that worked fine on Wednesday and today produced a ‘single-row query returns too many rows’ error. So when I went to track it back through Toad, the query just flat out stopped working.” Here was my explanation and a few hints to correct…
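The error behind his paraphrase is ORA-01427 (single-row subquery returns more than one row); a minimal, made-up reproduction of why a query can work fine one day and break the next:

-- Works while DEPT_MGR holds one row per dept; breaks the day a second
-- manager row appears for any department the outer query touches
SELECT e.emp_id,
       (SELECT m.mgr_name
          FROM dept_mgr m
         WHERE m.dept_id = e.dept_id) AS mgr_name
  FROM emp e;

-- One common fix: make the subquery provably single-row
SELECT e.emp_id,
       (SELECT MAX(m.mgr_name)
          FROM dept_mgr m
         WHERE m.dept_id = e.dept_id) AS mgr_name
  FROM emp e;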
-
The Challenge: You want one process in your application to use a separate temp tablespace created on faster “ram disks,” even though it executes the same packages on the same objects as the other applications. The complication in the challenge is that TEMP is not like other resources such as the default tablespace. Even though you might create a new user with a separate default temp tablespace, the database will allocate temp according to the owner of the packages/objects involved. The object owner’s temp tablespace will then be used for all temp tables, hashing, and sorting. Now I’ve…
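The plumbing itself is simple enough; a sketch with hypothetical paths and user names, leaving aside the allocation question above:

-- A temp tablespace carved from the ram-disk mount
CREATE TEMPORARY TABLESPACE temp_ram
  TEMPFILE '/ramdisk/oradata/temp_ram01.dbf' SIZE 4G;

-- Assign it to the user the fast process should run as
ALTER USER fast_proc_user TEMPORARY TABLESPACE temp_ram;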
-
This is a subject that I see a lot of debate on, but as a DBA who works mostly in large data warehouse and mart environments, I am a pro-rebuild DBA. Many of the large systems I work on are home-grown in design. This means that there are either design or code issues that lend themselves to poor choices, which can leave high fragmentation and ridiculously over-sized indexes. I have rarely seen this in smaller OLTP environments, but offer me a large table in a warehouse that somebody has mistakenly decided to stage data to, and I will bet you…
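Even as a pro-rebuild DBA, I’d want evidence first; a sketch of the classic (if intrusive) check, using a hypothetical index name:

-- VALIDATE STRUCTURE locks the index while it runs, so use off-hours
ANALYZE INDEX mart.big_fact_ix VALIDATE STRUCTURE;

-- INDEX_STATS holds one row, from the last ANALYZE in this session
SELECT name, height, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 2) AS pct_deleted
  FROM index_stats;

-- If the numbers justify it:
ALTER INDEX mart.big_fact_ix REBUILD ONLINE PARALLEL 8 NOLOGGING;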
-
You have a statement that somebody, without thinking about resources, requested 96 total parallel processes for. They have now come to you, the DBA, and want to know why they aren’t getting the parallelism they requested. The statement starts out something like this:

create table table_too_large_data
  tablespace mart_data1
  compress nologging pctfree 0
  parallel 16
as
SELECT /*+ use_hash(i itmaff) parallel(i, 16) parallel(itmaff, 16) */
  last_name
  ,…

The developer who wrote it did not take the parameter PARALLEL_THREADS_PER_CPU into consideration, so they didn’t even realize they had requested pretty much everything above times two. As a DBA, you should first query…
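Picking up that thread, the checks I’d reach for first, as a sketch:

-- What the instance will actually hand out
SELECT name, value
  FROM v$parameter
 WHERE name IN ('cpu_count', 'parallel_threads_per_cpu',
                'parallel_max_servers', 'parallel_adaptive_multi_user');

-- How many PX slaves each running statement really received
SELECT qcsid, COUNT(*) AS slave_count
  FROM v$px_session
 WHERE qcsid <> sid
 GROUP BY qcsid;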
-
OEM reports that a server load is critical, and top confirms this:

  PID USER      PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+  COMMAND
14721 oracle    15   0 4243m  201m  196m S  1.7  0.6   0:43.83  oracle
14998 oracle    15   0 8338m   47m   42m S  1.3  0.1   0:00.44  oracle
14713 oracle    15   0 4248m  2.8g  2.8g S  0.7  8.8   0:53.50  oracle
14715 oracle    15   0 4243m  2.8g  2.8g S  0.7  9.1   0:55.10  oracle
13734 oracle    16   0 4241m  693m  690m D  0.3  2.2   0:02.96  oracle
13744 oracle    15   0 4241m  697m  693m S  0.3  2.2   0:02.97  oracle

Upon inspecting the database,…
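From there, tying a hungry OS PID back to its database session is the first step; a sketch using one of the PIDs above:

SELECT s.sid, s.serial#, s.username, s.sql_id, s.event
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE p.spid = '14721';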
-
method_opt=>'for all columns size 1'

This is the method option for DBMS_STATS.GATHER_TABLE_STATS that is commonly recommended: “Let Oracle look at all the columns and decide what kind of statistics, histograms, etc. need to be gathered on each…” I’ve never been really thrilled about it, especially in a data warehouse or mart environment where we rarely utilize more than a few columns per table in our where clauses. Really, why does Oracle need so much specific info on columns it’s not doing anything with other than selecting? We have a set of columns that we consistently use in our where clauses; these…
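What I lean toward instead is naming the predicate columns explicitly; a sketch with hypothetical schema, table, and column names:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MART',
    tabname          => 'BIG_FACT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    -- basic stats everywhere; histograms only on the where-clause columns
    method_opt       => 'FOR ALL COLUMNS SIZE 1 '||
                        'FOR COLUMNS sale_dt SIZE 254 region_id SIZE 254',
    cascade          => TRUE);
END;
/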