## EM12c- Managing Incidents, Stopping the Insanity, Part III

Nothing is more annoying than getting alerted on things that are not critical to you, or that you already know are occurring and there is not a darn thing you can do about it.

I’ve also been frustrated to wake up in the morning and see my inbox flooded with a ton of alerts from numerous EM12c systems, and really, none of them are truly critical, but they can appear simply overwhelming!

How do we steer the inbox from madness to manageable?

The answer to this question is not just managing metric settings and thresholds; there are a number of different ways to control what ends up in your inbox or alerts you at every waking and sleeping hour.

This part of the series is focused on fine-grained rule sets. This post will show you how in a few simple steps, in the hope that you can recreate this, as simple or as complex as you wish, to make your EM12c server work better for you.

Note: This post, as I said, is about changing a rule set. Remember: changing a rule set is a change to ALL targets utilizing the rule set, vs. changing a metric policy or threshold at the target level, so think through what you want to change and how you want to make the change BEFORE you make it. All options I show you here can be performed in many, many different ways to address the problem. Rationally think about the change you need to make, test it out and use common sense.

So for our email today, one of my client’s systems has started to send this incident to my inbox at night:

Host=hostxyz.client7.com
Target type=Oracle Management Service
Target name=hostxyz.client7.com:4892_Management_Service
Categories=Performance
Message=Loader Throughput (rows per second) for Loader_D exceeded the critical threshold (75). Current value: 65.08
Severity=Critical
Event reported time=Dec 4, 2013 8:50:10 AM CST
Operating System=Linux
Platform=x86_64
Associated Incident Id=51218
Associated Incident Status=New
Associated Incident Owner=SYSMAN
Associated Incident Acknowledged By Owner=No
Associated Incident Priority=Very High
Associated Incident Escalation Level=0
Metric value=65.08
Rule Name=Incident Management Ruleset,Incident creation Rule for metric alerts.
Rule Owner=SYSMAN
Update Details:
Loader Throughput (rows per second) for Loader_D exceeded the critical threshold (75). Current value: 65.08
Incident created by rule (Name = Incident Management Ruleset, Incident creation Rule for metric alerts.; Owner = SYSMAN).

I couldn’t care less about this and am already receiving this information once it hits the warning threshold for review. I don’t want to be woken up in the middle of the night, nor is it something that I can really address as a critical outage for this client.

Let’s edit our rule set by taking the information offered us in the email from the “Rule Name” section:

Rule Name=Incident Management Ruleset,Incident creation Rule for metric alerts.

Incident Rule Sets

Log into EM12c and click on “Setup”, “Incidents” and then “Incident Rules”.

You should see your Incident Rule Sets listed.

We’ll take the following information from the Incident email we received:

Rule Name=Incident Management Ruleset,Incident creation Rule for metric alerts.

and then one more section, “Categories”, (remember, some Incidents can belong to more than one category):

Categories=Performance

Taking just these two lines above tells us which incident rule is alerting us. What we may not realize is that by default, critical metric alerts notify on ALL categories, and you then distinguish this by rule set, by target, by group, etc. This is where EM12c again proves itself to be a self-service product, giving the administrator the power to receive, or NOT receive, notifications on any incident in the way that they want.

Armed with this information, we are now going to take this example for our client and edit the rule set-

We’ve highlighted the rule set that matches the first part of the Rule Name “combination” and clicked on Edit.

Then upon entering the rule set, we’ll need to edit the actual rule, which is the second part of the combination offered in the “Rule Name” from the email. Click on the Rules tab, highlight the rule we wish to edit, then click on “Edit”.

This will take you into the rule’s basic information, showing the requirements it uses to trigger an incident.

By default, most rules are created to be triggered by very simple choices-

• Severity, in this case: Critical

All other granularity has been left wide open, but you can change this and finely tune the granularity.

Now in the above email, we are told that the Category involved is “Performance”, but we really don’t want this waking us up in the middle of the night, as-

1. This may be a server that regularly has high resource usage.

2. It’s not a critical “pending outage” issue, but an issue that we would need to investigate in the morning, or may already be a known issue that is scheduled to be addressed.

To address the emails, we are going to make two changes.

1. Only email on mission-critical outage issues for Metric Alerts.

2. Create a new rule that will create an incident for any categories outside of what we want to be notified of for metric alerts.

As you can see in the above example, I’ve added a check-mark for the Category and chosen the following:

• Availability
• Capacity
• Error
• Fault

I can then click on Next and Save the change.

We still have the other categories that are important to us; we just don’t want them emailing anymore. I want them to create an incident, and I’ll review them when I review my Incident Manager, as I’ve been a strong proponent of using Incident Manager in this manner.
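Conceptually, what we’ve just configured behaves like a simple filter: email only on Critical events whose category is in our chosen list, and let everything else become a reviewable incident. Here is a minimal Python sketch of that logic; the names and event values are purely illustrative, not EM12c internals:

```python
# Categories we chose in the rule wizard for email notification.
NOTIFY_CATEGORIES = {"Availability", "Capacity", "Error", "Fault"}

def should_email(severity, categories):
    """Email only if the event is Critical AND touches a category we
    flagged; anything else just becomes an incident to review later."""
    if severity != "Critical":
        return False
    # Remember: an incident can belong to more than one category.
    return any(c in NOTIFY_CATEGORIES for c in categories)

# The Loader Throughput event from the email: Critical, but Performance only.
print(should_email("Critical", ["Performance"]))                  # False: incident only
print(should_email("Critical", ["Availability", "Performance"]))  # True: email
```

The Performance-only event that was waking us up now fails the category check, while a genuine availability problem still pages.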

Create a New Rule for Category Level Metric Alert Coverage

We have now returned to the Rules that make up the Rule Set. Click on “Create” to create a new rule. We are going to create a rule very similar to the one we just edited, (just in case you need an example to use as a reference…) but we are going to choose the other categories and have the rule handle these categories of Metric Alert incidents differently.

Choose the default radio button, “Incoming events and updates to events” and click Continue, which will take you to the rule wizard.

For this rule, we choose the following:

• Severity: Critical
• Category: All the categories we didn’t choose for our rule that is still in place.

Click on Next to proceed to the next step in the wizard, setting the actions taken when the metric is triggered:

We want EM to ALWAYS perform the actions when a metric alert for these categories is triggered, so leave the default here.

To save us time and energy, I believe in automating the assignment of metric alerts and other common incidents.

• Assign to SYSMAN or a user created for this purpose in EM12c. You can even assign a specific email address if one person or group is in charge of addressing these types of incidents, (more options, more options…)
• This is still critical, so assign the priority to “Urgent” or “Very High”.

We want these categories to NOT email, so skip the email section.

Choose to clear events permanently, unless you wish to retain this data.

Proceed to the next section of the wizard, where you can review your Condition and Action Summary.

Click to proceed to the next screen and add a meaningful name for your rule and a meaningful description-

Click on Save and then you will need to click on OK to save the rule to your existing rule set.

By following this process, we have:

1. Removed specific categories from emailing from a specific rule, so that only critical “possible” production outage incidents go to email/paging.

2. Added a second rule to handle the categories no longer in the original, creating incidents for review at an appropriate time.

This could easily be built out so that a unique user is created with an email address that pages uniquely, assigned only to mission-critical production OUTAGE alerts.

One rule could handle this for just the production admin group, one business line, etc., or fire only after hours by editing the notification schedule.

You have the power in your hands to build out your Enterprise Manager 12c environment in the way you need it to support you and to do what your business needs to be more productive.

## EM12c- Managing Incidents, Stopping the Insanity, Part II

So what do you do when network hiccups and other “small” issues start to send you incident notifications that a target is down, when in fact, it was really about a target just being delayed in communicating with the Oracle Management Server, (OMS)?

This is just one more way that “white noise” can drive a DBA to pull their hair out.

This post will discuss ways you can eliminate this type of EM12c “white noise”.

Let’s say we are receiving this incident notification almost every night. We’ve done some research and discovered that heavy network traffic at this time is causing just enough delay in the connection to cause an availability incident notification to be sent. Our goal is to have the notifications stopped, as the network traffic is outside our control.

Host=hostxyz@domain.com
Target type=Oracle Authorization Policy Manager
Target name=/EMGC_GCDomain/GCDomain/EMGC_OMS1/oracle.security.apm(11.1.1.3.0)
Categories=Availability
Message=The J2EE Application is down
Severity=Fatal
Event reported time=Dec 6, 2013 12:10:54 AM CST
Operating System=Linux
Platform=x86_64
Associated Incident Id=51782
Associated Incident Status=New
Associated Incident Owner=SYSMAN
Associated Incident Acknowledged By Owner=No
Associated Incident Priority=Very High
Associated Incident Escalation Level=0
Event Type=Target Availability
Event name=Status
Availability status=Down
Rule Name=Incident Management Ruleset,Incident creation Rule for target down.
Rule Owner=SYSMAN
Update Details:
The J2EE Application is down
Incident created by rule (Name = Incident Management Ruleset, Incident creation Rule for target down.; Owner = SYSMAN).

What we want is to dial the “sensitivity” down a bit and have EM12c double-check before it alerts. To do so, we are going to click on the link:

Target name=/EMGC_GCDomain/GCDomain/EMGC_OMS1/oracle.security.apm(11.1.1.3.0)

This link will take you to the “oracle.security.apm” target page.

On this page, click on the Drop down below the description of the feature, (in this case, it’s the Authorization Policy Manager) click on “Monitoring” and then “Metric and Collection Settings”.

This will take you to the specific metric settings for this feature that are triggering the incident-

There are two areas we are going to edit- The first, is accessible by clicking on the “pencil” icon to the very right.

This will take you to the following page:

Notice that we are alerting on the following:

1. Â If the target shows that it is down.

2. It occurs once

3. We check this every 1 minute.

We are going to make the following change:

Now we are only going to create an incident and alert if it has occurred three times in a row, and as we don’t want our global monitoring templates overriding this for this particular target, we have check-marked the box at the bottom for the template override option.

Now click on the “Every 1 Minute” link, which will take you to the interval settings for this metric:

The interval’s default setting lets me know that the repeat interval is set quite low and is causing extra load. A change to every 3 minutes, upon applying, shows it’s still quite low, so for this server I’ve decided I’m going to move it to every 5 minutes:

I now can continue and review my metric setting changes and if satisfied, save them and verify that the confirmation of the change has been received to the OMS.

This now means that this incident notification will ONLY be triggered if the incident occurs 3 times in a row, and the collection of this information, (checking for an outage) will only occur once every five minutes. During heavy traffic times, I will now offer my OMS a bit of a break: less stress on it, and false availability incident creations eliminated.
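The two settings we changed are easy to reason about with a little model. The sketch below is purely illustrative (not how the OMS implements availability evaluation); it shows why a two-sample network blip no longer raises an incident while a sustained outage still does:

```python
# Each element of `statuses` is one collection sample (now every 5 minutes).
OCCURRENCES_BEFORE_ALERT = 3

def incidents_raised(statuses):
    """Count incidents a '3 consecutive occurrences' rule would raise
    for a list of 'Up'/'Down' availability samples."""
    raised = 0
    streak = 0
    for s in statuses:
        streak = streak + 1 if s == "Down" else 0
        if streak == OCCURRENCES_BEFORE_ALERT:
            raised += 1   # fires once, on the third consecutive miss
    return raised

# A brief network blip (two missed checks) no longer pages anyone:
print(incidents_raised(["Up", "Down", "Down", "Up", "Up"]))      # 0
# A real outage spanning three or more collections still does:
print(incidents_raised(["Up", "Down", "Down", "Down", "Down"]))  # 1
```

With the 5-minute interval, three consecutive misses also means the target has been unreachable for roughly 15 minutes before anyone is paged, which is the trade-off to weigh for your own targets.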

## EM12c- Managing Incidents, Stopping the Insanity, Part I

As many folks know, “white noise” or having incident alerts that don’t offer value is something that I just refuse to tolerate.

The metric alert “Listener response to a TNS ping is xxxx msecs” is valuable, as it represents how many milliseconds it takes the listener to respond to a network request, (i.e. ping).

In the EM12c metric settings, this is set to a default threshold of 400 to signal a warning and 1000 for critical. For most Listener targets, this is a solid set of thresholds, but that isn’t always the case. There are advanced logic capabilities involved with EM12c metrics, allowing anyone to set the threshold for a given metric to a unique value, and even to require the threshold to be reached X number of times before initiating the response set in the incident rule set.

This offers the EM12c administrator customized settings for targets that regularly experience delays in a network request. Instead of having “white noise”, (i.e. constant and annoying incident notifications), you can customize the metric settings so they only notify if there is a real issue.

If you are receiving an incident notification for this type of metric, then it’s going to look very similar to the following:

Host=host.domain.com
Target type=Listener
Target name=LISTENER_HOST
Message=Listener response to a TNS ping is 4,380 msecs
Severity=Critical
Event reported time=Nov 22, 2013 10:04:56 AM CST
Operating System=Linux
Platform=x86_64
Associated Incident Id=47082
Associated Incident Status=New
Associated Incident Owner=SYSMAN
Associated Incident Acknowledged By Owner=No
Associated Incident Priority=Very High
Associated Incident Escalation Level=0
Event name=Response:tnsPing
Metric Group=Response
Metric=Response Time (msec)
Metric value=4380
Key Value=
Rule Name=[client]_Incident Management Ruleset,Incident creation Rule for metric alerts.
Rule Owner=SYSMAN
Update Details:
Listener response to a TNS ping is 4,380 msecs
Incident created by rule (Name = TCS Incident Management Ruleset, Incident creation Rule for metric alerts.; Owner = SYSMAN).

To address an incident that is not offering significant value, you have multiple ways to pursue it, but we’re going to work through this with the concept that we want to review historical data and make an intelligent decision about what the thresholds should be set to for this metric.

1. Click on the link in the “Message”, which will take you to the Listener target summary page:

Don’t be surprised if there isn’t an incident shown in the “Incidents” section. The intermittent network response issue is already gone and the incident is cleared. To locate this incident and see how often it occurs, click on the drop-down in the upper left, “Oracle Listener”, then “Monitoring” and “Alert History”:

This will then show you the alert history for just this target. Now the first thing you are going to want to do is get a solid view of the history. The default view is for the last 24 hrs. In the upper right-hand corner, change the view so that it displays the last 31 days:

You will now see a more complete view of how often the metric has alerted, both for warning and critical:

You can now determine if this metric, at its current settings, is offering you the best value in information. For the above example, we’re dissatisfied with the amount of notification generation, and due to increased network traffic, the decision is to increase the values so that it only alerts when the ping is at a level where there really is an issue to respond to.

First, I’ll investigate the incident history collected by EM12c. This will let me view the actual highs and lows over periods of time and justify what the warning and critical thresholds should be for this server’s Listener response time. To do this, (there are a number of ways to reach this, but remember, we’re in the mode of logical investigation and response steps…) I move my cursor to one of the critical alert sections in the Response time bar, (either a yellow or red line you see in the graphic above…) and double-click on it, which then brings me to the incident history data:

We can see that some of the response times in just the last week have been quite high. Look for what values make sense for warning and critical. Now it’s time to view the actual metric settings and see what Enterprise Manager has set for thresholds.

Click on “Oracle Listener”, “Monitoring” and “Metric and Collection Settings”:

Please Note: This will take you to the metric and collection settings for just THIS listener. Keep in mind that when editing this section you are ONLY editing the listener you are currently working on; no other listener on any other target will be affected.

Default settings of 400 for warning and 1000 for critical are currently in place. We can now update this by clicking on the “pencil” icon to the right of the metric thresholds:

Scroll to the bottom of the page and you will find the “Threshold Suggestion” feature. This feature can take what you’ve already noted in your investigation and test the new thresholds against the target’s history. The default display will show you the current settings and their impact, so you’ll be offered another view of how effective or ineffective the current metric thresholds are. Change the values for warning and critical to what you believe will offer you the best informational results, and change the “View Data” to 31 days so you can compare against the maximum amount of history:

As you can see, these new values offer thresholds that make sense for the demand on the host and will hit the warning threshold once during our historical view. If you aren’t satisfied with the response in the test, continue to change the metric thresholds until you see the response desired.
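Conceptually, the Threshold Suggestion feature is replaying history against candidate thresholds. Here is a rough Python sketch of that idea; the sample values are made up for illustration (only the 4,380 msec reading comes from the notification above), not real collection data:

```python
def count_breaches(samples, warning, critical):
    """Count how many historical samples would land in the warning
    band and how many would cross the critical threshold."""
    warn = sum(1 for v in samples if warning <= v < critical)
    crit = sum(1 for v in samples if v >= critical)
    return warn, crit

# Hypothetical 31-day history of TNS ping response times (msec).
history_msec = [120, 250, 410, 380, 990, 4380, 620, 300]

# The default thresholds (400/1000) flood us with alerts:
print(count_breaches(history_msec, 400, 1000))   # (3, 1)
# Raised thresholds trip the warning only once in the same window:
print(count_breaches(history_msec, 3000, 6000))  # (1, 0)
```

This is exactly the comparison the console is doing for you: pick candidates, replay the history, and keep adjusting until the breach counts match how often you actually want to be told.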

Once satisfied, scroll back up to the top and set the actual thresholds for the metric.

Notice that I’ve also asked Enterprise Manager NOT to respond unless this threshold has been reached, by either warning or critical, THREE TIMES in the “Number of Occurrences” section, and I’ve also check-marked the box to stop any Template Override. We’ve already investigated and decided the metric threshold for this target needed to be higher, so why would I want a monitoring template to override it and undo all my hard work?

Once I’ve made these changes, I click the “Continue” button. I’m then returned to the Listener Metric and Collection Settings page. The actual changes are not saved until I hit “OK” on this page.

This metric has now been updated to effectively notify with thresholds that make sense. Congratulations: white noise eliminated from this target…

## Easy EM12c Agent Deployment on Windows

One of the greatest advancements in EM12c over previous Enterprise Manager versions is the auto-deployment. I have a number of clients with Windows environments, and upon another recent search on Google, I found a consistent and solid recommendation to install Cygwin, (or another shell emulator) to utilize the auto-deploy. There is a large amount of work required to complete the prerequisites before you can take advantage of the auto-deploy, so I’m thrilled when I see someone recommend using the silent installation for Windows. I’m aware that I, along with others, have created posts that demonstrate silent deployments, but not JUST for Windows, so here it is!

The Software Library

A silent deploy requires a software library to download the agent zip file to. This must be configured beforehand.

To configure the software library, log into the EM12c environment and go to the software library:

Enter a name and location for the software library; ensure you’ve already created the folder names in Windows to ensure success.

Click on OK

You will see the software library in the list:

You will now use this directory to download the agent deployment zip files to.

Deployment is done with the Enterprise Manager Command Line Interface, (EMCLI). The emcli ships with EM12c release 2, but you need to set up the environment to support it. JAVA_HOME must be set before your Windows emcli will work correctly. Either set it in the environment variables or set it at the command line. It must be Java 1.6 or greater to work with emcli.

set JAVA_HOME=C:\Progra~2\Java\jre7

Log into the EMCLI as SYSMAN or a user with agent deployment privs:

E:\app\oracle\em12c\oms\BIN>emcli login -username=sysman
Login successful

Synchronize the EMCLI so that it is up to date with the repository:

E:\app\oracle\em12c\oms\BIN>emcli sync
Synchronized successfully

Query the repository to see which agent platforms are available:

E:\app\oracle\em12c\oms\BIN>emcli get_supported_platforms
Getting list of platforms ...
Check the logs at E:\app\oracle\em12c\gc_inst\em\EMGC_OMS1\sysman\emcli\setup/.emcli/agent.log
About to access self-update code path to retrieve the platforms list..
Getting Platforms list ...
-----------------------------------------------
Version = 12.1.0.2.0
Platform = Microsoft Windows x64 (64-bit)
-----------------------------------------------
Platforms list displayed successfully.

Deploy the Agent

OK, time to download it to the software library. The syntax asks for the destination, which we list, along with the exact name of the platform of the agent that we want and the version:

emcli get_agentimage -destination=E:/sw_lib/12.2 -platform="Microsoft Windows x64 (64-bit)" -version=12.1.0.2.0
Platform:Microsoft Windows x64 (64-bit)
Destination:e:\swlib
=== Partition Detail ===
Space free : 20 GB
Space required : 1 GB
Check the logs at E:\app\oracle\em12c\gc_inst\em\EMGC_OMS1\sysman\emcli\setup/.emcli/get_agentimage_2013-10-24_12-55-31-PM.log

Copy this file over to the target server that you wish to deploy the EM12c agent to and unzip the file:

You will see a list of files, (depending on version of EM12c, output may vary some…) but there are two files you are mainly interested in:

agentDeploy.bat     Batch file to deploy the agent.

agent.rsp           Agent response file that requires configuration.

The agent.rsp requires the following changes made:

OMS_HOST=OMS_ORCL.com #OMS Host Server name
AGENT_INSTANCE_HOME=E:\app\agent12c #Installation directory on new target
AGENT_PORT=3872 #Agent port
b_startAgent=true
ORACLE_HOSTNAME=TRGT_ORCL.com #Name of target host
s_agentHomeName=TRGT_ORCL #Name of the target in EM12c
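If you deploy to many Windows targets, generating the response file can itself be scripted. A small sketch, assuming the same property names as the agent.rsp above (the host names and paths are the same placeholders used in this example, not real targets):

```python
def write_agent_rsp(path, props):
    """Write key=value pairs in the simple format agent.rsp expects."""
    with open(path, "w") as f:
        for key, value in props.items():
            f.write("%s=%s\n" % (key, value))

# Placeholder values matching the edits described above.
write_agent_rsp("agent.rsp", {
    "OMS_HOST": "OMS_ORCL.com",
    "AGENT_INSTANCE_HOME": r"E:\app\agent12c",
    "AGENT_PORT": "3872",
    "b_startAgent": "true",
    "ORACLE_HOSTNAME": "TRGT_ORCL.com",
    "s_agentHomeName": "TRGT_ORCL",
})
```

Loop this over a list of target hosts and you have one response file per server, ready for agentDeploy.bat.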

Save the changes and execute the batch file to deploy the agent:

agentDeploy.bat AGENT_BASE_DIR=E:\app RESPONSE_FILE=E:\app\12.1.0.2.0_AgentCore\agent.rsp
E:\12.1.0.2.0_AgentCore_233>agentDeploy.bat AGENT_BASE_DIR=E:\app RESPONSE_FILE=E:\12.1.0.2.0_AgentCore_233\agent.rsp
E:\12.1.0.2.0_AgentCore_233
Present working directory:E:\12.1.0.2.0_AgentCore_233
Archive location:E:\12.1.0.2.0_AgentCore_233 directory
AGENT_BASE_DIR
AGENT_BASE_DIR
E:\app
Agent base directory:E:\app
E:\app
RESPONSE_FILE
E:\12.1.0.2.0_AgentCore_233\agent.rsp
Agent base directory:E:\app
OMS Host:
Agent image loc : "E:\12.1.0.2.0_AgentCore_233"
E:\app configonlyfalse
1 file(s) copied.
This is the version 12.1.0.2.0
This is the type core
This is the aru id 233
"E:\app\core\12.1.0.2.0"
"Executing command : E:\app\core\12.1.0.2.0\jdk\bin\java -classpath E:\app\core\12.1.0.2.0\jlib\agentInstaller.jar:E:\app\core\12.1.0.2.0\oui\jlib\OraInstaller.jar oracle.sysman.agent.installer.AgentInstaller E:\app\core\12.1.0.2.0 "E:\12.1.0.2.0_AgentCore
Validating oms host & port with url: http://OMS_ORCL.com:4800/empbs/genwallet
Validating oms host & port with url: https://OMS_ORCL.com:4800/empbs/genwallet
Return status:3
"E:\12.1.0.2.0_AgentCore_233"\12.1.0.2.0_PluginsOneoffs_233.zip
"Executing command : E:\app\core\12.1.0.2.0\jdk\bin\java -classpath E:\app\core\12.1.0.2.0\jlib\OraInstaller.jar:E:\app\core\12.1.0.2.0\sysman\jlib\emInstaller.jar:E:\app\core\12.1.0.2.0\jlib\xmlparserv2.jar:E:\app\core\12.1.0.2.0\jlib\srvm.jar:E:\app\core
Executing agent install prereqs...
Executing command: E:\app\core\12.1.0.2.0\oui\bin\setup.exe -ignoreSysPrereqs -prereqchecker -silent -ignoreSysPrereqs -waitForCompletion -prereqlogloc E:\app\core\12.1.0.2.0\cfgtoollogs\agentDeploy -entryPoint oracle.sysman.top.agent_Complete -detailedEx
Prereq Logs Location:E:\app\core\12.1.0.2.0\cfgtoollogs\agentDeploy\prereq<timestamp>.log
Agent install prereqs completed successfully
Cloning the agent home...
Executing command: E:\app\core\12.1.0.2.0\oui\bin\setup.exe -ignoreSysPrereqs -clone -forceClone -silent -waitForCompletion -nowait ORACLE_HOME=E:\app\core\12.1.0.2.0 -responseFile E:\12.1.0.2.0_AgentCore_233\agent.rsp AGENT_BASE_DIR=E:/app AGENT_BASE_DIR
Clone Action Logs Location:C:\Program Files\Oracle\Inventory\logs\cloneActions<timestamp>.log
Cloning of agent home completed successfully
Attaching sbin home...
Executing command: E:\app\core\12.1.0.2.0\oui\bin\setup.exe -ignoreSysPrereqs -attachHome -waitForCompletion -nowait ORACLE_HOME=E:\app\sbin ORACLE_HOME_NAME=sbin12c1 -force
Attach Home Logs Location:E:\app\core\12.1.0.2.0\cfgtoollogs\agentDeploy\AttachHome<timestamp>.log
Attach home for sbin home completed successfully.
Updating home dependencies...
Executing command: E:\app\core\12.1.0.2.0\oui\bin\setup.exe -ignoreSysPrereqs -updateHomeDeps -waitForCompletion HOME_DEPENDENCY_LIST={E:\app\sbin:E:\app\core\12.1.0.2.0,} -invPtrLoc E:\app\core\12.1.0.2.0\oraInst.loc -force
Update Home Dependencies Location:E:\app\core\12.1.0.2.0\cfgtoollogs\agentDeploy\UpdateHomeDeps<timestamp>.log
Update home dependency completed successfully.
Performing the agent configuration...
Executing command: E:\app\core\12.1.0.2.0\oui\bin\runConfig.bat ORACLE_HOME=E:\app\core\12.1.0.2.0 RESPONSE_FILE=E:\app\core\12.1.0.2.0\agent.rsp ACTION=configure MODE=perform COMPONENT_XML={oracle.sysman.top.agent.11_1_0_1_0.xml} RERUN=true
Configuration Log Location:E:\app\core\12.1.0.2.0\cfgtoollogs\cfgfw\CfmLogger<timestamp>.log
Agent Configuration completed successfully
Agent deployment log location:
E:\app\core\12.1.0.2.0\cfgtoollogs\agentDeploy\agentDeploy_2013-10-24_13-26-41-PM.log
Agent deployment completed successfully.
E:\12.1.0.2.0_AgentCore_233>E:\12.1.0.2.0_AgentCore_233

A couple of times, you will see the Oracle Installer come up and you may see an error with an opportunity to close the program or repair. Choose not to close and let it continue; it’s an issue caused by the Installer attempting to log in without direct admin privileges, but it does recover.

You have now deployed the agent successfully, and you can log back into your EM12c and see the target host you just deployed to. If you have auto-discovery, the secondary targets will be discovered and configured, or you can manually discover targets from this point.

This is all there is to deploying in Windows and not one Cygwin installation in sight!

## EMCLI Rel 3, Post 1

So as the book gets under way on the Enterprise Manager Command Line Interface, (EMCLI) I’m starting to move away from the introduction statements I was commonly required to repeat to folks, (“it’s the return to the golden age of the DBA 1.0- command line, baby!”) and am now onto what has changed in release 3.

The first things we want to talk about are the cool new features with Jython scripting and the formatted output using JSON, (JavaScript Object Notation). These may not seem like much to most, but for those of us that have been using the EMCLI for some time, they are major enhancements that take the interface to a whole new level. The scripting option offers the DBA the ability to build production-grade Jython modules for Enterprise Manager. This is cool because JSON text format is syntactically identical to the code for creating JavaScript objects. Due to this similarity, instead of requiring a parser, a JavaScript program can use the built-in eval() function to execute JSON data and produce native JavaScript objects.

Now I know I’ve heard a couple folks complain, “here we go into the JavaScript maelstrom!” but do give JSON a chance. I haven’t yet found anyone who has worked with JSON and not found it easy to learn, use and implement in an environment. This has created an instant following with developers, and the success of the format is impressive.

The JSON format is simple and clean. Often used in an array, (also lists and maps, too) JSON is a collection of names and values set up in pairs. To understand why JSON has become so popular, here’s a comparison between JSON and XML for the same example:

## Example in JSON

{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}

## Same Example in XML

<!DOCTYPE glossary PUBLIC "-//OASIS//DTD DocBook V3.1//EN">
<glossary><title>example glossary</title>
<GlossDiv><title>S</title>
<GlossList>
<GlossEntry ID="SGML" SortAs="SGML">
<GlossTerm>Standard Generalized Markup Language</GlossTerm>
<Acronym>SGML</Acronym>
<Abbrev>ISO 8879:1986</Abbrev>
<GlossDef>
<para>A meta-markup language, used to create markup
languages such as DocBook.</para>
<GlossSeeAlso OtherTerm="GML">
<GlossSeeAlso OtherTerm="XML">
</GlossDef>
<GlossSee OtherTerm="markup">
</GlossEntry>
</GlossList>
</GlossDiv>
</glossary>
And here is a second, shorter JSON example, a menu definition:

{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "menuitem": [
      {"value": "New", "onclick": "CreateNewDoc()"},
      {"value": "Open", "onclick": "OpenDoc()"},
      {"value": "Close", "onclick": "CloseDoc()"}
    ]
  }
}}

You can see the full example of comparisons here, but the smaller amount of code and the simplicity make JSON the obvious choice of the two for those that want clean, simple and effective code.
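The point about JSON needing no special parser is easy to demonstrate outside JavaScript too: a trimmed version of the glossary example above loads straight into native structures with Python’s standard library json module.

```python
import json

# Abbreviated glossary example (same structure, fewer fields).
glossary_json = """
{"glossary": {"title": "example glossary",
  "GlossDiv": {"title": "S",
    "GlossList": {"GlossEntry": {
      "ID": "SGML",
      "GlossTerm": "Standard Generalized Markup Language",
      "GlossSeeAlso": ["GML", "XML"]}}}}}
"""

# One call, and it's plain dicts and lists:
doc = json.loads(glossary_json)
entry = doc["glossary"]["GlossDiv"]["GlossList"]["GlossEntry"]
print(entry["GlossTerm"])      # Standard Generalized Markup Language
print(entry["GlossSeeAlso"])   # ['GML', 'XML']
```

The same name/value pairs that a JavaScript program could eval() become native dictionaries here, which is exactly why the EMCLI’s JSON output is so scripting-friendly.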

## Installation

To install the EMCLI kit, you now have an option in the Setup drop-down menu, (at the top right of your screen…). Before installing, you must ensure your JAVA_HOME is properly set and that it’s part of your PATH. If you don’t wish to use the console to install the EMCLI, you can download it directly from the OMS host and then copy the jar file to where you would like to install it.

Installing from the command line is as simple as the following command:

$ <path to where jar installation file was copied to>/java -jar emcliadvancedkit.jar client -install_dir=<input directory if not where jar file resides>

Once you complete the installation, you will need to synchronize with the OMS repository to download the help content. In previous versions this was done with the setup, but no longer. To perform the sync, run the following command:

emcli sync -url=https://<EMHost>:<port>/em -username=sysman -trustall

After inputting the password, you should see a successful completion of the sync and you should be ready to go. Once you’ve set everything up, it’s time to log in and start to script.

## Scripting

./emcli login -username=<username>
Enter password : ******

For our example today, we’ll start with something simple- a report on failed jobs:

```python
#chk_failed_jobs.py
#Import all EMCLI verbs into the current program
from emcli import *
#Set the OMS URL to connect to
set_client_property('EMCLI_OMS_URL','http://emrephost_orcl.com:7801/em')
#Accept all the certificates
set_client_property('EMCLI_TRUSTALL','true')
#Log in to the OMS
login(username='sysman')
#Status ID 4 = failed jobs
l_jobs = get_jobs(status_id=4)
for jobs in l_jobs.out()['data']:
    JN = jobs['NAME']
    TY = jobs['TYPE']
    JI = jobs['JOB_ID']
    SD = jobs['SCHEDULED']
    ST = jobs['STATUS']
    ON = jobs['OWNER']
    TT = jobs['TARGET TYPE']
    TN = jobs['TARGET NAME']
    print "Job Name= " + JN + " Job Type= " + TY + " JobID= " + JI + " Scheduled= " + SD + " Status= " + ST + " Owner= " + ON + " Target Type= " + TT + " Target Name= " + TN
```

The output will then show a failed job- I’ve created a dummy job to fail for this demonstration:

```
./emcli @chk_failed_jobs.py
Enter password : ******
Job Name= TEST_JOB_2 Job Type= SQLScript JobID= E8F8F923012D7516E043200B14ACFD07 Scheduled= 2013-10-17 18:14:01 Status= Failed Owner= SYSMAN Target Type= oracle_database Target Name= emrep12.orcl.com
Logout successful
```

If you’d like to learn more, I would recommend the Oracle Screen Watches and read up on the EMCLI documentation for release 3.

## Abstracts, Reviews and Conferences, Oh My!

So yes, I’ve been involved in a number of conferences and in a number of different roles. I started out presenting, then volunteering at conferences, then reviewing abstracts, then serving as a track lead, and now as a conference director for RMOUG. This year I am also the database track lead for ODTUG’s KSCOPE for the second year in a row.

A lot of folks have asked me recently what they need to know to submit a great abstract, how to get accepted, and why they may not have been selected. It’s often a lot more complicated than it may first seem, and I hope to shed some light here.

I’ll begin by recommending a post from Gwen Shapira on Pythian’s site, as it is an excellent blog post from a very established and gifted presenter. I think the biggest change is to the opening line- no longer is October the abstract-writing month, as many conferences have started to push for earlier and earlier abstract submission windows, hoping to capture presenters before the others do. We opened RMOUG’s abstract submissions in early July, and a number of conferences were right on our tails!

Like Gwen, I recommend starting with a sentence or two to capture the reviewer/attendee’s interest, then digging deeper into why they will want to attend.

## Abstract Writers: Topics

Find a main area that is of strong interest, but I recommend finding an interesting twist on it- something unique that makes your abstract stand out. Try to stay in this area; don’t jump around too much or try to fit too much into one presentation. The more features you are covering in one session, the higher level the session will have to be.

Do

1. Be clear about what the attendee will gain by coming to your session. What value does this offer to their career and/or their day-to-day job?
If it’s not in the standard core knowledge for their jobs, why would they be interested in learning about your topic?
2. Always list out at least three “take-away” items that the attendee will gain. These also give the reviewer something concrete to weigh in the score they submit for your abstract review.
3. Do not write a “novel” regarding the topic in your abstract. If your abstract is so long that it causes issues with the abstract review software, you have a problem. Keep it to 175 words or less. Most abstract review forms will tell you the maximum word count for a submission. If you require more than the maximum listed, then you are presenting at the wrong conference…:)

Additionally

1. Fill out all areas of the abstract submission form. Do not be surprised if some reviewers score an abstract lower when the biography or other areas aren’t filled out by the submitter. Remember, many would like to know a bit about the presenter, not just the abstract.
2. It’s alright to paste the abstract into a summary area as well, but I would recommend taking the time to fill in a simple, clear “interest capture” statement that will look nice in the handout and/or mobile app and attract attendees to your session. This is what most attendees will see when they make that last-minute decision about which session to attend.
3. If you don’t get accepted to a conference, email and ask for any feedback/advice that might assist you with your next abstract submission. Many conferences are more than happy to share overall score information and any comments about your abstract.

## Recommendations for Reviewers

The reviewer is a valued volunteer. Many conferences rely on reviewers’ expertise and time to gather valuable scores on abstracts, eliminating much wasted time when choosing the final abstracts to accept. Being the best reviewer possible means knowing the needs of the conference, so you can offer the most educated reviews.

Questions to ask

1.
How many overall sessions are there, and what is the breakdown for each track?
2. What percentage do you commonly see for each score level? (5-5%, 4-25%, 3-60%, etc…)
3. What tracks are there, and which ones should I be reviewing?
4. What are the rules about reviewing my own abstract, or those from my own company, etc.?
5. Who do I contact if I have a question about a review?
6. Are there any per-company speaker limits? Are there any per-company minimum accepted slots?

One more recommendation: if you aren’t [really] knowledgeable about a subject, DON’T REVIEW IT. No one should ever ask you to review an abstract whose content you aren’t comfortable with.

## The Limitations of Conferences

Conferences, in general, often have different requirements that they have to fill. Where one can offer an acceptance, another can’t. Why?

• Limited session slots.
• Limited number of compensated speakers per track vs. open slots.
• Limits on accepted speakers per company.
• Low interest in a topic causing low scoring on abstracts.
• Too many abstract submissions for any one topic.

How do you increase your chance of acceptance to a conference?

1. Introduce yourself to the conference director. For RMOUG, this will not improve the score of your abstracts (we have around 50 reviewers), but if you are a new speaker, I’d like to be aware that you are interested in an introduction to present to our members.
2. Offer to volunteer. Help the conference out with abstract reviews, be an ambassador, or find another way to volunteer and help out.
3. Present at smaller venues, get your name out there, network with some of the better-known speakers, and introduce yourself to those involved in the bigger conferences. If folks see someone who presents well, word will get around.

What You Shouldn’t Do

1. Do NOT try to get your abstract in by coercion or by continually attempting to go around the conference director or track/chair lead.
I have seen those who push to get things their way succeed for maybe one or two conferences before it starts to impact their reputation.
2. Do not be impolite to the folks supporting the conference. This means all the volunteers, conference staff, etc. A conference is hard work, and these folks deserve all our support and courtesy.

Following these guidelines won’t guarantee you a slot at any conference, but it may help you get speaking opportunities at a few. Good luck this conference year to all the great seasoned speakers, the new speakers and all the great content to come!

## Tuning for Time- CTAS and Views

Inspecting work area usage for memory is an important aspect of my job when I’m performing a tuning exercise. This is especially true when we are talking about specific processes vs. the overall database level. Often, along with SQL enhancements, the choice of source objects for a process can improve performance drastically. Most developers, especially with tight deadlines, are focused on the results of a query rather than its performance, and the DBA can often assist them in succeeding at both.

The process for our example today is a CTAS, (create table as select) with a number of joins, a union, and aggregation on a number of large tables- one of which, (WRK_TBL3 in the example below) is itself a CTAS from earlier processing that, as part of the build process, has no stats collected post-creation.
The key areas will be explained as we go further into the post:

```sql
create table T1 nologging as
select a11.S_ID,
       sum(a11.SCOL3_QTY) WJXBFS1,
       sum(a11.SCOL4)     WJXBFS2,
       sum(a11.SCOL7_AMT) WJXBFS3
from   TBL1 a11
       join TBL2 a12
         on (a11.WEEK_ID = a12.WEEK_ID)
       join TBL2_RM_TBL_VW1 a13
         on (a11.S_ID = a13.S_ID)
where  ((a11.S_ID) in
        (((select ps21.S_ID
           from   WRK_TBL3 ps21
                  join TBL1_VW2 s22
                    on (ps21.S_ID = s22.S_ID)
           where  (s22.STAT in ('A', 'D', 'H')
                   and ps21.FLG3_1 = 1))
          union
          (select ps21.S_ID
           from   WRK_TBL3 ps21
                  join TBL2 s22
                    on (ps21.S_ID = s22.S_ID)
           where  (s22.STAT in ('I')
                   and ps21.FLG6_1 = 1))))
        and a13.COL7 in (9382)
        and a12.W_ID in (201319, 201320, 201321, 201322, 201323, 201324,
                         201325, 201326, 201327, 201328, 201329, 201330, 201331))
group by a11.S_ID;
```

Looking at the execution plan for the query, there are a few things you will note:

1. A call to a remote table through a view exists.
2. The hash group by that is using temp also shows the most elapsed time in the process, and it’s on the view joined to the remote table.
3. Dynamic sampling is shown, as it was required on WRK_TBL3 for the CBO to make some type of intelligent decision, since stats are not gathered as part of the CTAS process.
SQL> select * from table(dbms_xplan.display_awr('4d7bz2bmscqz4'));

```
| Id | Operation                               | Name          | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     |
|  0 | CREATE TABLE STATEMENT                  |               |       |       |         |   14M (100) |          |
|  1 |  LOAD AS SELECT                         |               |       |       |         |             |          |
|  2 |   HASH GROUP BY                         |               |  4420 |       |         |     14M (1) | 48:45:27 |
|  3 |    FILTER                               |               |       |       |         |             |          |
|  4 |     HASH JOIN OUTER                     |               |  4420 |       |         |     14M (1) | 48:45:27 |
|  5 |      NESTED LOOPS                       |               |  3839 |  296K |         |     14M (1) | 48:45:26 |
|  6 |       HASH JOIN                         |               |  3839 |  273K |         |     14M (1) | 48:45:26 |
|  7 |        VIEW                             | VW_NSO_1      |    68 |   884 |         |      78 (3) | 00:00:01 |
|  8 |         SORT UNIQUE                     |               |    68 |  2312 |         |     78 (52) | 00:00:01 |
|  9 |          UNION-ALL                      |               |       |       |         |             |          |
| 10 |           NESTED LOOPS                  |               |       |       |         |             |          |
| 11 |            NESTED LOOPS                 |               |    34 |  1156 |         |      37 (0) | 00:00:01 |
| 12 |             TABLE ACCESS FULL           | WRK_TBL3      |    34 |   884 |         |       3 (0) | 00:00:01 |
| 13 |             INDEX UNIQUE SCAN           | TBL2_PK       |     1 |       |         |       0 (0) |          |
| 14 |            TABLE ACCESS BY INDEX ROWID  | TBL2          |     1 |     8 |         |       1 (0) | 00:00:01 |
| 15 |           NESTED LOOPS                  |               |       |       |         |             |          |
| 16 |            NESTED LOOPS                 |               |    34 |  1156 |         |      37 (0) | 00:00:01 |
| 17 |             TABLE ACCESS FULL           | WRK_TBL3      |    34 |   884 |         |       3 (0) | 00:00:01 |
| 18 |             INDEX UNIQUE SCAN           | TBL2_PK       |     1 |       |         |       0 (0) |          |
| 19 |            TABLE ACCESS BY INDEX ROWID  | TBL2          |     1 |     8 |         |       1 (0) | 00:00:01 |
| 20 |       MERGE JOIN                        |               | 1357K |   77M |         |     14M (1) | 48:45:25 |
| 21 |        SORT JOIN                        |               |   76M | 3641M |         |     14M (1) | 48:45:25 |
| 22 |         VIEW                            | TBL1_VW2      |   76M | 3641M |         |     14M (1) | 48:45:25 |
| 23 |          HASH GROUP BY                  |               |   76M |   53G |     64G |     12M (1) | 42:34:04 |
| 24 |           VIEW                          | TBL1_VW1      |   76M |   53G |         |    770K (1) | 02:34:08 |
| 25 |            UNION-ALL                    |               |       |       |         |             |          |
| 26 |             PARTITION RANGE ALL         |               | 1667K |  254M |         |   10762 (1) | 00:02:10 |
| 27 |              TABLE ACCESS FULL          | TBL1          | 1667K |  254M |         |   10762 (1) | 00:02:10 |
| 28 |             PARTITION RANGE ALL         |               |   74M |   12G |         |    342K (2) | 01:08:36 |
| 29 |              TABLE ACCESS FULL          | TBL1          |   74M |   12G |         |    342K (2) | 01:08:36 |
| 30 |        SORT JOIN                        |               |    11 |   110 |         |      12 (9) | 00:00:01 |
| 31 |         INLIST ITERATOR                 |               |       |       |         |             |          |
| 32 |          TABLE ACCESS BY INDEX ROWID    | TBL2          |    11 |   110 |         |      11 (0) | 00:00:01 |
| 33 |           INDEX UNIQUE SCAN             | TBL2_IDX1     |    13 |       |         |       2 (0) | 00:00:01 |
| 34 |      INDEX UNIQUE SCAN                  | TBL2_PK       |     1 |     6 |         |       0 (0) |          |
| 35 |     REMOTE                              | RM_TBL_FRM_VW | 17916 |  454K |         |      67 (0) | 00:00:01 |
```

If we inspect the work area while the process is running, we can also view how many resources are being used, even if it’s not a complete picture until the process completes:

```sql
select vst.sql_text, swa.sql_id, swa.sid, swa.tablespace,
       swa.operation_type,
       trunc(swa.work_area_size/1024/1024) "PGA MB",
       trunc(swa.max_mem_used/1024/1024)   "Mem MB",
       trunc(swa.tempseg_size/1024/1024)   "Temp MB"
from v$sql_workarea_active swa, v$session vs, v$sqltext vst
where swa.sid=vs.sid
and swa.sql_id=vs.sql_id
and vs.sql_id='4d7bz2bmscqz4'
and vs.sql_id=vst.sql_id
and vst.piece=0
order by swa.sql_id;
```
```
SQL_TEXT                             SQL_ID          SID OPERATION_TYPE      PGA MB  Mem MB  Temp MB
------------------------------------ ------------- ----- ------------------- ------- ------- --------
create table T1 nologging as select  4d7bz2bmscqz4  1135 GROUP BY (HASH)          3       0
create table T1 nologging as select  4d7bz2bmscqz4  1135 HASH-JOIN                3       1
create table T1 nologging as select  4d7bz2bmscqz4  1135 LOAD WRITE BUFFERS       0       0
create table T1 nologging as select  4d7bz2bmscqz4  1135 SORT (v1)                8      97      100
create table T1 nologging as select  4d7bz2bmscqz4  1135 HASH-JOIN                3       0
create table T1 nologging as select  4d7bz2bmscqz4  1135 GROUP BY (HASH)         87    1051    21842
```
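If you want to pick through output like this programmatically, here’s a minimal sketch in plain Python, (not EMCLI- the row dictionaries and their keys simply mirror the query’s column aliases and are assumptions) that finds the work area consuming the most temp:

```python
# Sketch: find the dominant temp consumer among active work areas.
# Each dict mirrors one row of the query output above; the keys are
# assumptions based on the column aliases in the query.

def temp_hotspot(rows):
    """Return the row with the largest Temp MB value."""
    return max(rows, key=lambda r: r.get('TEMP_MB') or 0)

rows = [
    {'OPERATION_TYPE': 'SORT (v1)',       'TEMP_MB': 100},
    {'OPERATION_TYPE': 'GROUP BY (HASH)', 'TEMP_MB': 21842},
    {'OPERATION_TYPE': 'HASH-JOIN',       'TEMP_MB': None},
]

hot = temp_hotspot(rows)
print(hot['OPERATION_TYPE'])  # GROUP BY (HASH)
```

The same idea scales to polling the view in a loop while a long build runs, so you catch the hotspot before the process completes.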

We already know the GROUP BY (HASH) is our challenge- its temp usage and its elapsed time dominate the process- but how can we minimize this in a build process?

```
SQL> select count(*) from dba_tab_columns
  2  where table_name='TBL1_VW1';

  COUNT(*)
----------
        61
```

Now the view that we’re calling comes from a remote table, combined with the local view we just queried columns on. So this is a view, upon a view, to a table. I often see this kind of call “confuse” the optimizer, and I often find myself moving to materialized views or simplifying the design to ensure the CBO isn’t left in this position. The CBO is currently pulling all 61 columns of this data into the hash group by.
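Why does the column count matter so much? The hash group by has to carry every projected column through its work area. A rough, back-of-the-envelope sketch in plain Python- the 76M row estimate comes from the plan above, but the 20-byte average column width is purely an assumption for illustration:

```python
# Rough, illustrative estimate of the payload a hash group by must carry:
# rows x columns x average column width. The average width is an assumption.

def payload_gb(rows, cols, avg_col_bytes=20):
    """Approximate work-area payload in GB."""
    return round(rows * cols * avg_col_bytes / 1024**3, 1)

ROWS = 76_000_000  # row estimate from the execution plan

print(payload_gb(ROWS, 61))  # the 61-column view
print(payload_gb(ROWS, 6))   # a 6-column alternative
```

Even with made-up widths, the order-of-magnitude gap shows why trimming the projected columns shrinks temp so dramatically.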

Look at the query above- of those 61 total columns in the view, which ones are we really interested in?

The a13.COL7 is what’s important from the table; it will then need to join on the S_ID to complete the process.

First, we look to see what other tables contain this column:

```
SQL> select table_name, column_name from dba_tab_columns
  2  where column_name='COL7'
  3  and table_name in (select table_name from dba_tab_columns
  4  where column_name='S_ID');

1222 rows selected.
```

Well, it looks like we have a lot of opportunities to find another choice for the join that could result in better performance just by changing where we source from.
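The dictionary search above can be mirrored in plain Python. The sketch below works over hypothetical (table, column) pairs- none of these names come from the actual data dictionary- just to show the shape of the “which tables hold both columns” check:

```python
# Sketch: find tables containing both columns of interest- the same idea
# as the dba_tab_columns query above. Table names here are hypothetical
# placeholders, not objects from the post's schema.

def tables_with_both(pairs, col_a, col_b):
    """Given (table, column) pairs, return tables holding both columns."""
    by_table = {}
    for table, column in pairs:
        by_table.setdefault(table, set()).add(column)
    return sorted(t for t, cols in by_table.items()
                  if col_a in cols and col_b in cols)

pairs = [('VW_A', 'COL7'), ('VW_A', 'S_ID'),
         ('TBL_B', 'S_ID'), ('TBL_C', 'COL7')]

print(tables_with_both(pairs, 'COL7', 'S_ID'))  # ['VW_A']
```

With 1222 candidate rows coming back from the dictionary, narrowing to tables that hold both join columns is the first cut before inspecting row counts and widths.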

After inspecting the view with the remote table, a new view was created based off the following existing view that also satisfied multiple processes:

```
SQL> desc TBL1_VW4
 D_ID     NOT NULL DATE
 S_ID     NOT NULL NUMBER(8)
 S_NO     NOT NULL NUMBER(4)
 S_ID              NUMBER
 S_CNT             NUMBER
 S_I_CNT           NUMBER
```

Six columns is a tenth of the columns the other view possessed. A hash group by over a full scan of this view would provide much better performance- again, we’re only interested in two columns. A new view was then created, joining to the remote table, then tested:

```
SQL_TEXT                             SQL_ID          SID OPERATION_TYPE      PGA MB  Mem MB  Temp MB
------------------------------------ ------------- ----- ------------------- ------- ------- --------
create table T1 nologging as select  78fqd7xtxpybv  4162 GROUP BY (HASH)         18    1005     3295
create table T1 nologging as select  78fqd7xtxpybv  4162 HASH-JOIN                3       1
create table T1 nologging as select  78fqd7xtxpybv  4162 LOAD WRITE BUFFERS       0       0
create table T1 nologging as select  78fqd7xtxpybv  4162 SORT (v1)               34      97      697
create table T1 nologging as select  78fqd7xtxpybv  4162 GROUP BY (HASH)         12       0
```

Upon inspection of the new execution plan, we can see an almost 10-times improvement, with elapsed time decreasing from over 48 minutes to just 5 1/2 minutes. There is no other change at this time, but note the difference in temp usage and the introduction of bloom filters:

SQL> select * from table(dbms_xplan.display_awr('78fqd7xtxpybv'));

```
| Id | Operation                               | Name          | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     |
|  0 | CREATE TABLE STATEMENT                  |               |       |       |         | 1643K (100) |          |
|  2 |  HASH GROUP BY                          |               |  4290 |  385K |   8272K |   1643K (1) | 05:28:44 |
|  3 |   FILTER                                |               |       |       |         |             |          |
|  4 |    HASH JOIN                            |               | 83635 | 7514K |         |   1642K (1) | 05:28:34 |
|  5 |     JOIN FILTER CREATE                  | :BF0000       |   626 |  6260 |         |      15 (0) | 00:00:01 |
|  6 |      TABLE ACCESS FULL                  | TBL1_VW       |   626 |  6260 |         |      15 (0) | 00:00:01 |
|  7 |     FILTER                              |               |       |       |         |             |          |
|  8 |      JOIN FILTER USE                    | :BF0000       | 83635 | 6697K |         |   1642K (1) | 05:28:33 |
|  9 |       MERGE JOIN OUTER                  |               | 83635 | 6697K |         |   1642K (1) | 05:28:33 |
| 10 |        MERGE JOIN                       |               |   76M | 4078M |         |   1642K (1) | 05:28:33 |
| 11 |         SORT JOIN                       |               |   76M | 3641M |         |   1642K (1) | 05:28:32 |
| 12 |          VIEW                           | TBL1_VW2      |   76M | 3641M |         |   1642K (1) | 05:28:32 |
| 13 |           HASH GROUP BY                 |               |   76M | 4223M |   5575M |   1502K (1) | 05:00:27 |
| 14 |            VIEW                         | TBL_VW1_NW    |   76M | 4223M |         |    432K (1) | 01:26:30 |
| 15 |             UNION-ALL                   |               |       |       |         |             |          |
| 16 |              PARTITION RANGE ALL        |               | 1667K |   49M |         |   10721 (1) | 00:02:09 |
| 17 |               TABLE ACCESS FULL         | TBL1_VW2      | 1667K |   49M |         |   10721 (1) | 00:02:09 |
| 18 |              PARTITION RANGE ALL        |               |   74M | 2279M |         |    341K (1) | 01:08:13 |
| 19 |               TABLE ACCESS FULL         | TBL1_VW2      |   74M | 2279M |         |    341K (1) | 01:08:13 |
| 20 |         SORT JOIN                       |               | 39362 |  230K |         |      31 (7) | 00:00:01 |
| 21 |          INDEX FAST FULL SCAN           | TBL2_PK       | 39362 |  230K |         |      29 (0) | 00:00:01 |
| 22 |        SORT JOIN                        |               | 17916 |  454K |         |      69 (3) | 00:00:01 |
| 23 |         REMOTE                          | RM_TBL_FRM_VW | 17916 |  454K |         |      67 (0) | 00:00:01 |
| 24 |      SORT UNIQUE                        |               |     2 |    68 |         |     14 (58) | 00:00:01 |
| 25 |       UNION-ALL                         |               |       |       |         |             |          |
| 26 |        NESTED LOOPS                     |               |     1 |    34 |         |       5 (0) | 00:00:01 |
| 27 |         TABLE ACCESS BY INDEX ROWID     | TBL2          |     1 |     8 |         |       2 (0) | 00:00:01 |
| 28 |          INDEX UNIQUE SCAN              | TBL2_PK       |     1 |       |         |       1 (0) | 00:00:01 |
| 29 |         TABLE ACCESS FULL               | WRK_TBL3      |     1 |    26 |         |       3 (0) | 00:00:01 |
| 30 |        NESTED LOOPS                     |               |     1 |    34 |         |       5 (0) | 00:00:01 |
| 31 |         TABLE ACCESS BY INDEX ROWID     | TBL2_VW2      |     1 |     8 |         |       2 (0) | 00:00:01 |
| 32 |          INDEX UNIQUE SCAN              | TBL2_PK       |     1 |       |         |       1 (0) | 00:00:01 |
| 33 |         TABLE ACCESS FULL               | WRK_TBL3      |     1 |    26 |         |       3 (0) | 00:00:01 |

Remote SQL Information (identified by operation id):

   23 - SELECT "S_ID" FROM "RM_TBL_FRM_VW" "B" (accessing 'RM_DBLINK')

Note
-----
   - dynamic sampling used for this statement (level=2)
```

Here’s a clear example of how a small change can make a great difference in performance. The view available to the developer didn’t support performance, and now we can continue to work on improvements to statistics and coding.
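To put rough numbers on the gain, here’s a quick back-of-the-envelope in plain Python- the elapsed figures come from the Time column of the two plans above, and the temp figures from the two work area listings:

```python
# Quick ratio check on the before/after runs shown above.

def improvement(before, after):
    """How many times better the 'after' figure is."""
    return round(before / after, 1)

# Plan elapsed time: 48:45:27 vs 05:28:44 (converted to seconds)
before_s = 48 * 3600 + 45 * 60 + 27
after_s = 5 * 3600 + 28 * 60 + 44
print(improvement(before_s, after_s))  # 8.9

# GROUP BY (HASH) temp usage: 21842 MB vs 3295 MB
print(improvement(21842, 3295))  # 6.6
```

Roughly a 9x drop in elapsed time and a 6-7x drop in temp for the dominant hash group by- all from changing the source view.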

## Being Friends with Larry Ellison… :)

So I tried to finish out some of my OOW posts, but I ended up talking about this with a friend and thought I would share how I became Facebook friends with Larry Ellison instead….

After returning from Oracle Open World 2011, I sat one day at my desk, staring in disbelief, (as were many Oracle folks busy in the social media world during OOW that day) at a friend request from Larry Ellison’s Facebook account. I accepted it, and my fellow DBAs and I then reviewed the account to verify that this was Larry’s official account- and yes, he’d friended little ol’ me.

We joked about it from time to time in meetings or in conversation, “but then again, Kellyn is friends with Larry Ellison…” but I didn’t do anything with it, as what would I say to the man, really, (or whoever runs his Facebook account…)?

A number of months went by, and it came time to do one last, huge promotion for RMOUG Training Days 2012. I went on Facebook, started inviting people to the event via the “Invite Friends” option, and went down my list of friends. I invited everyone I could think of who would benefit from attending the conference, and then came to Larry Ellison’s name.

What should I do? Â Should I invite him? Â I decided the heck with it and marked the box and off went the invite.

And that is how, in approximately one day, I went from being friends with him to unfriended. I’m sure if someone asked him why he unfriended me, he would have a very confused look on his face and wonder who the heck I was, (like Larry really manages his own Facebook account…) but there are three lessons from this story:

1. Always take a chance- accept friendships, send invites.

2. If nothing else, you’ll always have a great story to tell.

3. Don’t invite CEOs of major companies to local events….

## #OOW, Tuesday Followup

Great day today with a second WIT session, all the interviews, and the General Session for Enterprise Manager 12c. If you are looking for the ebook, you can download it from Oracle here.

A couple of people caught on that I was tweeting while discussing the benefits of EM12c, even capturing me in a photo, (will never live it down…) They played the trailer for the new Cloud Odyssey movie Oracle has been working on to promote EM12c and cloud. It’s been a lot of fun, as they’ve talked about how they came up with the idea based off the original article, and I think Leighton Nelson has even changed his profile picture to his doppelganger in the movie. The android jokes have gotten a bit out of hand with some of the folks around the conference, but I and the rest of the authors are enjoying the great opportunity to be a small “part” of the movie. If you are interested, here’s a link to the Facebook page, and the 20-minute movie should be out at different Oracle events in a couple of months.

I promised someone I would put the link to the EMCLI verb documentation out on my blog, and here it is! Use it wisely, use it well…

I really appreciated all the positive feedback and collaboration at the WIT session. I appreciate the men and women who attended, considering the fantastic technical content going on at OOW- and just in the other room at OTW! For the women who have contacted me regarding mentoring opportunities, I will be in touch with you soon.

Tomorrow brings my technical sessions- ASH with JB, and Exadata with ODI and Parallel- before the Blogger’s Meetup. If you are interested in attending these sessions, here are the details!

## Oracle Open World- Sunday!

The ACE Director briefings are done, and now we are on to Sunday sessions. I’ve already had a couple of meetings with folks today, and I’m wondering why I didn’t wear flats at 2pm- but there are important necessities that must be taken care of, like when I am speaking this week, so here’s the schedule:

If you want to meet up to chat, it’s best to hit me up on Twitter @DBAKevlar. Loving Oracle Open World 2013 already!