I’ve decided it’s probably a good thing that Blogspot doesn’t show the *exact* time I publish my posts… 🙂 I rarely write or publish anything before 10pm, and, perfectionist at heart, I’ve been saddened to become aware that I’ve made a couple of mistakes in my blog. My ADD brain has either raced faster than my fingers could keep up on the keyboard (meaning the clarity of technical facts just isn’t there when I read through a post later on) or I’ve edited an example script so that the actual output wouldn’t have resulted in the error that…
-
I’ve come to realize that most folks don’t really understand the limitations on the allocation of PGA memory to hashing and sorting. They truly believe that if they set their PGA target high enough, the database will allocate as much memory as the setting allows toward any hash or sort process, when in truth (and I am referencing 10g here…) there is a “threshold” to protect the database from allocating too much PGA memory to any one process. This ceiling applies to sorting and hashing, and it is why I am such a strong advocate of viewing temp…
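As a hedged illustration of that ceiling (the view names are standard, but the exact limits vary by version and hidden parameters), these two queries show the instance-wide memory bound alongside the active sort/hash workareas that are spilling to temp:

```sql
-- Global PGA figures, including the per-workarea "global memory bound":
SELECT name, ROUND(value/1024/1024, 1) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'global memory bound',
                'total PGA allocated');

-- Active workareas: compare requested size vs. memory actually used,
-- and watch tempseg_size grow when a workarea hits the ceiling:
SELECT sid,
       operation_type,
       ROUND(work_area_size/1024/1024, 1)  AS wa_mb,
       ROUND(actual_mem_used/1024/1024, 1) AS used_mb,
       ROUND(tempseg_size/1024/1024, 1)    AS temp_mb
FROM   v$sql_workarea_active;
```

When `temp_mb` is non-null for a big hash join, that’s the threshold in action: the workarea was capped and the rest went to the temp tablespace.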
-
I’m preparing to build our new 11g environment, and as I “study up” on all that is new in 11g, a few features stand out over the rest (at least in my book!). I/O Calibration: I’m challenging the uppermost limits of my hardware with how much I/O I can produce. I have spent much of my time since I started with my new company correcting I/O issues manually (yes, ASM failed me, but I’m sure it will live up to my expectations someday! :)) The CALIBRATE_IO procedure that comes as part of the DBMS_RESOURCE_MANAGER package…
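A minimal sketch of calling it (11g; the disk count and latency target below are placeholders you would tune to your own storage, and the run requires TIMED_STATISTICS and asynchronous I/O to be enabled):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 12,   -- placeholder: your spindle count
    max_latency        => 10,   -- placeholder: target latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || l_latency);
END;
/
```

It drives real I/O against your datafiles, so treat it like a load test and run it off-hours.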
-
After a lovely week off, I came back into work and brought up the OEM console to take a quick look at our databases. I noticed all our databases from one server were timing out on their uploads of the agent to grid control in the “All Targets” view. I logged onto the server and attempted to upload them manually from the $AGENT_HOME: emctl upload agent. I received the following message: ERROR-400 Data will be rejected for upload from agent ‘https://:/emd/main/’, max size limit for direct load exceeded [6827354/5242880]. This error is due to a file limit on the…
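For context, the 5242880 in the brackets is a 5MB cap: the agent had accumulated more upload data (6827354 bytes) than the direct-load limit allows. A sketch of the workaround I would expect, assuming the agent’s UploadMaxBytesXML property in emd.properties is the limit in play (verify the property name and units against your agent version before changing it):

```shell
# On the affected server, as the agent owner:
cd $AGENT_HOME/sysman/config
grep UploadMax emd.properties     # inspect the current upload limits

# Raise UploadMaxBytesXML in emd.properties, then bounce and retry:
$AGENT_HOME/bin/emctl stop agent
$AGENT_HOME/bin/emctl start agent
$AGENT_HOME/bin/emctl upload agent
```

If the backlog is large, clearing old XML files from the agent’s upload directory (after confirming they’re safe to discard) is the other common remedy.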
-
Two weeks ago, after a new mart had performed excellently for 24 hours, a sudden and detrimental decline in performance was seen. I’d been trying to track this issue for almost two months, but had only seen it in two or three processes total and never at this level. I had some history from previous DBAs, but they didn’t know what was causing it, and when a mart has a limited lifespan before you are dropping and recreating it from a 6TB warehouse, any DBA knows this is not going to be easy. The main goal of the mart is to…
-
…also known as the follow-up post on “dropping unused columns…” In my last post I discussed how to drop unused columns on a compressed, partitioned table. The reason I had come across this task, as I had listed in the post, was that a developer had a huge update that would have taken over 24 hours, which we could do in 70 seconds if we utilized an exchange-partition option. This update on the two partitions needed to be performed BEFORE I could get the maintenance window I needed to drop the unused columns. Being the “too smart for…
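The general shape of that maintenance looks like this; BIG_FACT and OLD_COL are hypothetical names standing in for the real table and column:

```sql
-- Step 1 (cheap, online): mark the column unused so queries stop seeing it.
ALTER TABLE big_fact SET UNUSED COLUMN old_col;

-- Step 2 (the maintenance-window part): physically reclaim the space.
-- CHECKPOINT keeps undo usage bounded on a very large partitioned table.
ALTER TABLE big_fact DROP UNUSED COLUMNS CHECKPOINT 10000;

-- Verify nothing is left pending:
SELECT * FROM dba_unused_col_tabs WHERE table_name = 'BIG_FACT';
```

Step 2 is the expensive rewrite, which is exactly why it has to wait for a window while Step 1 can be done right away.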
-
A developer needed to do some serious data changes to a partition. I thought I’d be a helpful DBA and suggested he do a CTAS of the one partition, make the change, and we’d exchange the partition. He was thrilled when the update statement that I’d killed after 24 hours of no successful runs was cut down to 70 seconds with the CTAS, and then all we needed to do was have me exchange it back in: ALTER TABLE TAB1 EXCHANGE PARTITION P4706 WITH TABLE TAB_CPY_4706; ERROR at line 1: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE…
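When ORA-14097 shows up, a useful first diagnostic is comparing the two tables’ column lists including hidden and unused columns, which a plain DESCRIBE won’t show (a CTAS copy never inherits a source table’s unused columns, so the physical column lists can differ even when DESCRIBE looks identical):

```sql
-- Compare physical column definitions between the partitioned table
-- and the exchange candidate, hidden/unused columns included:
SELECT table_name, column_name, data_type, data_length, hidden_column
FROM   user_tab_cols
WHERE  table_name IN ('TAB1', 'TAB_CPY_4706')
ORDER  BY table_name, internal_column_id;
```

Any row present for TAB1 but missing for TAB_CPY_4706 (or differing in type or length) is a candidate cause of the mismatch.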
-
As I’ve spent two “tours of duty” as a Lead DBA in my career, I often keep in mind what makes a great DBA team, noting both when companies realize what it takes and when they don’t. I’ve watched great DBAs walk out the door because, during the interview process, someone decided the candidate didn’t have the EXACT skills of another DBA, or they just weren’t sure what they were looking for, since the DBA role can have such a wide range of skill demands, when the manager really needed to look at what skills were…
-
A developer friend of mine sent me an email today, frustrated that none of the DBAs he worked with could explain this error to him, and sent it on to me. I thought I would get a post out with this one, as I think it trips up a lot of folks: “I have a report query that worked fine on Wednesday and today produced a ‘single-row subquery returns more than one row’ error. So when I went to track it back through Toad, the query just flat out stopped working.” Here was my explanation and a few hints to correct…
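A minimal reconstruction of the failure mode (EMPS and DEPTS are made-up tables): a scalar subquery that happened to return one row on Wednesday returns two once a duplicate key appears in the data, and Oracle raises ORA-01427:

```sql
-- Breaks the day a second DEPTS row shows up for some dept_id:
SELECT e.last_name,
       (SELECT d.dept_name
        FROM   depts d
        WHERE  d.dept_id = e.dept_id) AS dept_name
FROM   emps e;

-- One defensive rewrite: join and collapse any duplicates explicitly,
-- so changing data can't turn a working report into an error:
SELECT e.emp_id, e.last_name, MAX(d.dept_name) AS dept_name
FROM   emps e
LEFT   JOIN depts d ON d.dept_id = e.dept_id
GROUP  BY e.emp_id, e.last_name;
```

The key point for the developer: nothing changed in the query; the *data* changed, and the scalar subquery was only ever safe by accident.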
-
The Challenge: You want one process in your application to use a separate temp tablespace created on faster “ram disks,” even though it executes the same packages on the same objects as the other applications. The complication in the challenge is that TEMP is not like other resources such as the default tablespace. Even though you might create a new user with a separate temporary tablespace, the database will allocate temp according to the default temporary tablespace of the owner of the packages/objects involved. The object owner’s temp tablespace will then be used for all temp tables, hashing, and sorting. Now I’ve…
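The setup half of the challenge is straightforward; the tablespace, path, and user names below are hypothetical:

```sql
-- Create the fast temp tablespace on the ram-disk mount:
CREATE TEMPORARY TABLESPACE temp_fast
  TEMPFILE '/ramdisk/temp_fast01.dbf' SIZE 8G;

-- The catch described above: assigning it to the *calling* user
-- isn't enough when the work runs inside the owner's packages.
-- It's the package owner's temp tablespace that gets used:
ALTER USER app_owner TEMPORARY TABLESPACE temp_fast;
```

Which, of course, moves *every* caller of that owner’s code onto the fast temp, not just the one process, and that is exactly the problem the post goes on to tackle.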
-
OK, who else out there has a 3.8TB tablespace? Anyone, anyone? Bueller, Bueller?? I know, I know: Oracle can support terabytes and terabytes of data in one tablespace, so what is my complaint? Most developers, both database and application, do not have the technology to access that data efficiently enough, and DBAs rarely have the disk read and write hardware that will support the kind of pressure that results from large tablespaces. I am moving more and more of my hardest-hit data to ram disk, but it’s expensive, and all of us know the challenges at budget time…
-
This is a subject that I see a lot of debate on, but as a DBA who works mostly in large data warehouse and mart environments, I am a pro-rebuild DBA. Many of the large systems I work on are home-grown in design. This means there are either design or code issues that lend themselves to poor choices, which can leave high fragmentation and ridiculously over-sized indexes. I have rarely seen this in smaller OLTP environments, but offer me a large table in a warehouse that somebody has mistakenly decided to stage data to, and I will bet you…
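One hedged way to put numbers behind a rebuild decision (BIG_FACT_IX1 is a hypothetical index; note that VALIDATE STRUCTURE takes a lock and INDEX_STATS holds only the most recently analyzed index, so run this in a quiet window):

```sql
ANALYZE INDEX big_fact_ix1 VALIDATE STRUCTURE;

SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 1) AS pct_deleted
FROM   index_stats;
```

A high percentage of deleted leaf rows on an index over a staged-and-purged table is the classic signature of the fragmentation described above.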
-
So I’ve been accepted to present at the 2010 RMOUG Conference, and I am understandably nervous. This is the first large presentation I’ve given, and I’m thrilled to be doing it. My topic is the power of parallel processing, something I’ve been working in heavily as of late, and I feel I have a lot to offer on the subject matter. I’ve already written and submitted my abstract; now comes the daunting task of putting together a presentation that is beneficial and that everyone can follow (vs. how my ADHD brain works, running rampant from one aspect of the challenging subject…
-
You have a statement for which somebody, without thinking about resources, requested 96 total parallel processes. They have now come to you, the DBA, and want to know why they aren’t getting the parallelism they requested. The statement starts out something like this: create table table_too_large_data tablespace mart_data1 compress nologging pctfree 0 parallel 16 as SELECT /*+ use_hash(i itmaff) parallel(i, 16) parallel(itmaff, 16) */ last_name ,… The developer who wrote it did not take the parameter PARALLEL_THREADS_PER_CPU into consideration, so they didn’t even realize they had requested pretty much everything, times two. As a DBA, you should first query…
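The first queries I would reach for here are against the parallel-related parameters and the PX sessions themselves; a sketch:

```sql
-- What will the instance actually grant?
SELECT name, value
FROM   v$parameter
WHERE  name IN ('cpu_count',
                'parallel_threads_per_cpu',
                'parallel_max_servers',
                'parallel_adaptive_multi_user');

-- Who is holding PX slaves right now, and how many each?
SELECT qcsid, COUNT(*) AS slave_count
FROM   v$px_session
WHERE  sid <> qcsid
GROUP  BY qcsid;
```

Comparing the requested degree against PARALLEL_MAX_SERVERS and the slaves already in use usually answers the “why am I not getting my parallel?” question on the spot.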
-
OEM reports that a server load is critical, and top confirms this:

PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
14721 oracle 15  0 4243m 201m 196m S  1.7  0.6 0:43.83 oracle
14998 oracle 15  0 8338m  47m  42m S  1.3  0.1 0:00.44 oracle
14713 oracle 15  0 4248m 2.8g 2.8g S  0.7  8.8 0:53.50 oracle
14715 oracle 15  0 4243m 2.8g 2.8g S  0.7  9.1 0:55.10 oracle
13734 oracle 16  0 4241m 693m 690m D  0.3  2.2 0:02.96 oracle
13744 oracle 15  0 4241m 697m 693m S  0.3  2.2 0:02.97 oracle

Upon inspecting the database,…
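From here, a standard next step is mapping the hot PIDs from top back to their database sessions via V$PROCESS.SPID; a sketch, with :os_pid bound to a PID from the top output:

```sql
-- Which session (and which SQL) belongs to this OS process?
SELECT s.sid, s.serial#, s.username, s.sql_id, s.event
FROM   v$process p
JOIN   v$session s ON s.paddr = p.addr
WHERE  p.spid = :os_pid;
```

Once you have the SID and SQL_ID, you know whether you’re looking at one runaway statement or the aggregate load of many ordinary ones.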
-
method_opt => 'for all columns size 1' This is the method option for DBMS_STATS.GATHER_TABLE_STATS that is commonly recommended: “Let Oracle look at all the columns and decide what kind of statistics, histograms, etc. need to be gathered on each…” I’ve never been really thrilled about it, especially in a data warehouse or mart environment where we rarely utilize more than a few columns per table in our where clauses. Really, why does Oracle need so much specific info on columns it’s not doing anything with other than selecting? We have a set of columns that we consistently use in our where clauses; these…
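If you follow that line of thinking, the method_opt can say so explicitly: base stats everywhere, histograms only on the predicate columns. Schema, table, and column names here are hypothetical:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'MART',
    tabname    => 'BIG_FACT',
    -- size 1 (no histograms) on everything, then histograms
    -- only on the columns that appear in our where clauses:
    method_opt => 'FOR ALL COLUMNS SIZE 1 ' ||
                  'FOR COLUMNS SIZE 254 sale_dt region_id',
    cascade    => TRUE);
END;
/
```

This keeps the gather fast and the optimizer’s attention on the columns that actually drive access paths.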