How many times have you had maintenance or a release complete, with everyone sure that everything has been put back the way it should be, all t’s crossed and all i’s dotted, only to release it to the customers and find out that NOPE, something was forgotten in the moving parts of technology? As the Database Administrator, you can do a bit of CYA and not be the one who has to say-
Having the ability to compare targets is a powerful feature in Enterprise Manager 13c, (and 12c, don’t feel left out there…:)) The comparison feature is the first of three options that encompass Configuration Management-
Upon entering Configuration Management in EM13c, you will be offered the option to create a one-time comparison, or use a pre-existing comparison as your base. You can access the Configuration Management utility via the Enterprise drop down in EM13c Cloud Control:
For our example today and due to the small environment I possess in my test environment, we’re going to compare two database targets. The Configuration Management utility is fantastic at comparing targets to see if changes have occurred and I recommend collecting “baseline” templates to have available for this purpose, but know that the tool is an option to perform other comparisons, such as:
For our example today, we’re going to be working on a CDB, and then, as the “Lead DBA”, we’ll use the Configuration Management Comparison tool to discover changes that weren’t reverted as part of our maintenance.
We first need to set up the comparison “baseline”, so to do this, I’m going to make a copy of the default template, Database Instance Template. It’s just good practice to make copies and leave the locked templates, in case there are times where we find there are areas we need to watch for changes in any environment that may not have been turned on by default.
Once you enter the main dashboard, click on the bottom icon on the left, which, when highlighted, shows that it is for Templates.
Scroll down till you see Database Instance Template, highlight it and click on Create Like at the top menu. You will need to name your new copy of the original template. For mine, I’ve named it CDB_Compare:
Click OK and you will now be brought to the template with all its comparison values displayed. If there are any areas you want compared immediately, make sure there is a check mark in the box for that change. For our example, let’s say that we have a process in this CDB where, when quarterly maintenance is complete, the pluggable database must be brought back up, but sometimes it’s a step that the DBAs forget to complete. By default, the configuration template is checking for this, but if it weren’t, I would place the check mark in the appropriate box and save the template before proceeding.
Now that I have my template ready, I can use it to do a comparison. On the far left, click on the top icon, (of a bar graph) that will take you to the Overview page or the One Time Comparison Results, both of which will offer you an opportunity to create the baseline of the CDB that you want to compare against.
Click on Create Comparison and fill in the following information:
Click on Submit and as expected, no differences are found, (we just compared the environment against itself using the new CDB_Compare template, that checks everything out) but we now have our baseline.
Our maintenance has been completed, now our database is ready to be released to the users, but we want to verify that the changes performed should have been performed and no steps were missed that would hinder it from being ready for production use.
We perform another comparison, this time against our baseline and choose to only show differences-
Per the report, the database is in read only, and if we log in via SQL*Plus, we can quickly verify this:
SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDBKELLY  READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDBKELLYN                      MOUNTED
PDBK_CL1                       READ ONLY
So instead of mistakenly releasing the database back to the users, we can run the following and know we’ve verified that we are safe to:
ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
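Once the open has been issued, a quick follow-up query, (the same view used above) can confirm the PDB is actually open before the database is handed back:

```
-- Verify the PDB is now open before releasing to the users
select name, open_mode from v$pdbs where name = 'PDBKELLYN';
```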
Well, that’ll save us from having to explain how that was missed…Whew!
Per Oracle documentation, you’re able to upgrade from EM12c to EM13c, including the OMR, (Oracle Management Repository) to DB12c and the OMS, (Oracle Management Service) to EM13c, while leaving the agent software at a version of 126.96.36.199 or higher; upgrades of the agents can be performed later on using the Gold Agent Image, removing a lot of work for the administrator.
This is a great option for many customers, but what if your existing EM environment isn’t healthy and you need to start fresh? What options are left for you then?
I’ve covered, (and no, it isn’t supported by Oracle) the ability to discover a target by doing a silent deployment push from the target host back to the OMS using a 188.8.131.52 software image, simply adding the -ignoreprereqs argument. This does work and will deploy at least the last version of the EM12c agent to the EM13c environment. What I don’t know is whether this works if you have an installation of software that was upgraded to a supported version in the past, (i.e. 184.108.40.206 upgraded to 220.127.116.11, etc.) I haven’t tested this and can’t guarantee it, but it’s worth a try. Same goes for an unsupported OS version, but I think if you choose to push the deploy from the target and ignore the prerequisite checks, it may successfully add the target.
If you have an earlier version of the EM12c agent, 18.104.22.168 to 22.214.171.124, that can’t be updated to EM13c, there is still hope outside of what I’ve proposed. The word on the streets is that with the release of 13.2, there will be backward support for earlier versions of agent software, and that WILL be fully supported by Oracle.
That also offers a silver lining for those that may be considering going to EM13c, won’t be upgrading and want to take advantage of redirecting existing targets with EM12c agent software to the new installation. I’m assuming they’ll simply run the following command to redirect those targets, (or something close!):
emctl secure agent <new EM_URL and credential information>
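Filled in with hypothetical values, (the OMS host, port and the exact arguments here are illustrative; check emctl help secure in your release) the redirect for each agent might look something like:

```
# Re-secure an existing EM12c agent against the new EM13c OMS
# (hypothetical host/port; you'll be prompted for or supply the
# agent registration password of the new OMS)
emctl secure agent -emdWalletSrcUrl https://newem13c.example.com:4903/em
```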
I have high hopes for this option being available in 13.2 and will cross my fingers along with you.
Enterprise Manager does a LOT. Sometimes it may do too much. Customers on forums, on support or via email and social media may come to us asking how to address something they view as not working right and the truth is, we could simply answer their question, but they aren’t using the right tool to accomplish what they’re attempting.
A customer was frustrated as he was performing scheduled exports using the export utility that can be found under the Database, Schema drop down. He wanted to perform these exports more than once per hour and was running into limitations in how often they could run and in the naming convention. The scheduling mechanism wasn’t quite as robust as he needed, so I understood his frustration, but I also realized that he was using the wrong tool for the job. It seemed so natural to me that he should be using the EM Job System, but he didn’t really understand why he should use that when exports was right there in the drop down.
Even though he can do the following:
It can be very confusing if you don’t know that we commonly have about 10 ways to skin a cat in Enterprise Manager, and it’s important to review your requirements before choosing which one will meet them, even if the naming convention tells you there is a specific feature for it. An infrastructure feature built out to support advanced functionality may be the correct choice for all you have to accomplish vs. one specific requirement.
I’m a command line DBA, so I wasn’t even aware of the Export utility in the drop down menu. I rarely, if ever, look at the database administration offerings. I took the time this morning to run the export utility in EM13c against one of my databases so that I knew what it offered, (along with what it didn’t…)
Please, don’t ask me if EM Express offers this. I really couldn’t tell you, (inside joke… :))
This last week I presented at Great Lakes Oracle Conference, (GLOC16) and the discussion on monitoring of non-Oracle databases came up while we were on the topic of management packs, how to monitor usage and which ones were required to monitor non-Oracle databases. I didn’t realize how confusing the topic could be until I received an email while on a layover in Chicago relaying what the attendee had taken away from it. I was even more alarmed when I read the email again, planning to blog about it today after a full night’s sleep!
You’ll often hear me refer to EM13c as the single-pane of glass when discussing hybrid cloud management, or performance management when concerning the AWR Warehouse and such, but it can also make a multi-platform environment easier to manage, too.
The difference between managing many Oracle features with EM13c and non-Oracle database platforms is that we need to shift the discussion from Management Packs to Plug-ins. I hadn’t really thought too much of it when I’d been asked what management packs were needed to manage Microsoft SQL Server, Sybase or DB2. My brain was solely focused on the topic of management packs and I told the audience how they could verify management packs on any page in EM, (while on the page, click on Settings, Management Packs, Packs Used for This Page) for any database they were monitoring:
As demonstrated in the image above, there aren’t any management packs utilized to access information about the MSSQL_2014 Microsoft SQL Server. You can quickly see each user database’s status, CPU usage, and read and write IO, along with errors, and even control the agent from this useful EM dashboard.
I can do the same for a DB2_unit6024 database environment:
You’ll note that the DB2 database dashboard is different from the SQL Server one, displaying the pertinent data for that database platform.
Now, you may be saying, Kellyn’s right, I don’t need to have any management packs, (which is true) but then you click on Settings, Extensibility, Plug-ins and you’ll then locate the Database Plug-ins used to add each one of these databases to the Enterprise Manager.
These plug-ins are often offered by third parties and must be licensed through them. There are often charges from these providers, and I should have been more in tune with the true discussion instead of stuck on the topic of management packs.
Luckily for me, there is a small amount of explanation on the very bottom of the management pack documentation that should clear up any questions. Hope this offers some insight and thank you to everyone who came to my sessions at GLOC!
Change is difficult for technical folks. Our world is always moving at blinding speed, so if you start changing things that we don’t think need to be changed, even if you improve upon them, we’re not always appreciative.
As requests came in for me to write on the topic of Configuration Management, I found the EM13c documentation very lacking, and had to push back to the EM 126.96.36.199 documentation to fill in a lot of missing areas. There were also changes to the main interface that you use to work with the product.
Now I’m going to explain to you why this change is good. In Enterprise Manager 188.8.131.52, (on the left) you can see that the Comparison feature of the Configuration Management has a different drop down option than in Enterprise Manager 184.108.40.206.
You might think it was better to have direct access to Compare, Templates and Job Activity via the drop downs, but it all really is *still directly* accessible; only the interface has changed.
When you accessed Configuration Management in EM12c, you would click on Comparison Templates and reach the following window:
You can see all the templates and access them quickly, but what if you then want to perform a comparison? Intuition would tell you to click on Actions and then Create. This, unfortunately, only allows you to create a Comparison Template, not a One-Time Comparison.
To create a one-time comparison in EM12c, you would have to start over, click on the Enterprise menu, Configuration and then Comparison. This isn’t very user friendly and can be frustrating for the user, even if they’ve become accustomed to the user interface.
EM13c has introduced a new interface for Configuration Management. The initial interface dashboard is the Overview:
You can easily create a One-Time Comparison, a Drift Management definition or Consistency Management right from the main Overview screen. All interfaces for the Configuration Manager now include tab icons on the left so that you can easily navigate from one feature of the Configuration Management utility to another.
In EM13c, if you are in the Configuration Templates, you can easily see the tabs to take you to the Definitions, the Overview or even the One-Time Comparison.
No more returning to the Enterprise drop down and starting from the beginning to simply access another aspect of Configuration Management.
See? Not all change is bad… 🙂 If you’d like to learn more about this cool feature, (before I start to dig into it fully with future blog posts) start with the EM12c documentation. There’s a lot more to understanding the basics in this documentation.
With the addition of the Configuration Management from OpsCenter to Enterprise Manager 13c, there are some additional features to ease the management of changes and drift in Enterprise Manager, but I’m going to take these posts in baby steps, as the feature can be a little daunting. We want to make sure that you understand this well, so we’ll start with the configuration searches and search history first.
To access the Configuration Management feature, click on Enterprise and Configuration.
Click on Search to begin your journey into Configuration Management.
From the Search Dashboard, click on Actions, Create and History. You’ll be taken to the History wizard and you’ll need to fill in the following information:
And then click on Schedule and Notify to build out a schedule to check the database for configuration changes.
For our example, we’ve chosen to run our job once every 10 minutes, set up a grace period and once satisfied, click on Schedule and Notify. Once you’ve returned to the main screen, click on Save.
Now when we click on Enterprise, Configuration, Search, we see the Search we created in the list of Searches. The one we’ve created is both runnable AND MODIFIABLE. The ones that come with EM13c are locked down and should be considered templates to be used with Create Like options.
The job runs every 10 minutes, so if we wait long enough after a change, we can then click on the search from the list and click on Run from the menu above the list:
As I’ve made a change to the database, it shows immediately in the job, and if I had set this up to notify, it would email me via the settings for the user who owns the configuration:
Highlight a row and click on See Real-Time Observations. This will take you to the reports showing that each of the pluggable databases wasn’t brought back up to an open mode post maintenance and needs to be returned to an open status before it will match the original historical configuration.
We can quickly verify that the databases aren’t open. In fact, one is read only and the other is only mounted:
SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDBKELLY  READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDBKELLYN                      MOUNTED
PDBK_CL1                       READ ONLY
So let’s open our PDBs and then we’ll be ready to go :
ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
ALTER PLUGGABLE DATABASE PDBK_CL1 CLOSE;
ALTER PLUGGABLE DATABASE PDBK_CL1 OPEN;
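If several PDBs were left behind after maintenance, they don’t have to be opened one at a time; a single statement, (a sketch, assuming you want every PDB open read write) handles them all:

```
-- Open every pluggable database in the CDB in one statement
-- (the seed is not affected by ALL)
ALTER PLUGGABLE DATABASE ALL OPEN;
```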
Ahhhh, much better.
There are two ways to compare one database to another in the AWR Warehouse. I covered the ADDM Comparison Report here, and now we’ll go through the second one, which is much more involved: taking two AWR Warehouse reports and comparing two databases to each other.
Once the AWR Warehouse is set up, databases that are already monitored targets in your EM12c or EM13c environment can be added and will upload all of their AWR snapshots to this central repository.
The second AWR Warehouse comparison reporting option is accessible from the drop down menu in the AWR Warehouse dashboard:
Once you click on Compare Period Report, you’re offered the choice of a baseline or snapshots from the list for the databases you wish to compare:
In my example, I simply chose the DNT database, with a one hour snapshot window to compare to an OMR, (Oracle Management Repository) database for another one hour snapshot interval. Clicking on Generate Report will then create an HTML formatted report.
In the report summary, not only does the report show that I’m comparing two different databases from two different hosts, but any differences in the main configuration will be displayed. We can see that although I’m comparing the same amount of time, the average number of users is twice as high and the DB Time is extensively different between the two databases.
The report will then start comparing the high level information, including the host, the memory and I/O configuration-
The Top Ten Foreground events are displayed for each environment, ensuring nothing is missed that could confuse the comparison. With more similar databases, (let’s say test against production, or old production vs. a newly consolidated environment) there will be more overlap, and you’d be able to see how the workload had changed between systems.
Each section contains values for each specific database and then the differences, saving the DBA considerable time manually calculating what has changed. Once you get to the Top SQL, the report updates its format again to display the SQL in order, overall, by elapsed time, CPU, etc., and then breaks down the times for each environment, whether run in both or not, and the difference.
After breaking down the SQL in every way possible, as commonly seen in an AWR report, but with the added benefit of comparisons between two different AWR reports and databases, the report digs into each of the Activity Stats and compares all of those:
The report then does comparisons for SGA, PGA, interconnects and even IO:
Once completed with these, it then digs into the objects and tablespaces to see if there are any outliers or odd differences in what objects are being called by both or either database.
As with all AWR reports, it also pulls up all Initialization Parameters and performs a clear comparison of what is set for each database so you can view if there is anything amiss that would cause performance impacts.
This is an incredibly valuable report for those that want to perform a deep analysis comparison between two databases for time periods around performance, workload, migration or consolidation. The comparison reports are one of the top features of the AWR Warehouse and are so infrequently considered a selling point of the product, (and if you already have the diagnostic and tuning packs, heck, it comes with its own limited EE license like the RMAN catalog and Enterprise Manager repository database) so what are you waiting for??
A lot of my ideas for blog posts come from questions emailed to me or asked via Twitter. Today’s blog is no different, as I was asked by someone in the community what the best method of comparing databases using features within AWR when migrating from one host and OS to another.
There is a lot of planning that must go into a project to migrate a database to another host or consolidate to another server, but when we introduce added changes, such as a different OS, new applications, workload or other demands, these need to be taken into consideration. How do you plan for this and what kind of testing can you perform to eliminate risk to performance and the user experience once you migrate over?
I won’t lie to any of you, this is where the AWR Warehouse just puts it all to shame. The ability to compare AWR data is the cornerstone of this product and it’s about to shine here again. For a project of this type, it may very well be a consideration to deploy one and load the AWR data into the warehouse, especially if you’re taking on a consolidation.
There are two main comparison reports, one focused on AWR, (Automatic Workload Repository) data and the other on ADDM, (Automatic Database Diagnostic Monitor).
From the AWR Warehouse, once you highlight a database from the main dashboard, you’ll have the option to run either report and the coolest part of these reports is that you don’t just get to compare time snapshots from the same database, but you can compare one snapshot from a database source in the AWR Warehouse to ANOTHER database source that resides in the warehouse!
This report is incredibly valuable and offers the comparisons to pinpoint many of the issues that create the pain-points of a migration. The crucial, “just the facts” information about what is different, what has changed and what doesn’t match the “base” for the comparison is displayed very effectively.
When you choose this report, the option to compare from any snapshot interval for the current database is offered, but you can then click on the magnifying glass icon for the Database to compare to and change to compare to any database that is loaded into the AWR Warehouse-
For our example, we’re going to use a day difference, same timeline to use as our Base Period. Once we fill in these options, we can click Run to request the report.
The report is broken down into three sections-
Comparing the two activity graphs, we can clearly see that there were more commit waits during the base period, along with more user I/O in the comparison period. During a crisis situation, these graphs can be very beneficial when you need to show waits to less technical team members.
The Configuration tab below the activity graphs will quickly display differences in OS, initialization parameters, host and other external influences on the database. The Findings tab then goes into the performance comparison differences. Did the SQL perform better or degrade? In the below table, the SQL ID is displayed, along with detailed information about the performance change.
Resources are the last tab to display graphs about the important area of resource usage. Was there an impact difference to CPU usage between one host and the other?
Was there swapping or other memory issues?
In our example, we can clearly see the extended data reads and for Exadata consolidations, the ever valuable single block read latency is shown-
Now for those on engineered systems and in RAC environments, you’re going to want to know the interconnect waits. Again, these are simply and clearly compared, then displayed in graph form.
This report will offer very quick answers to
“What Happened at XXpm?”
The value this report provides is easy to see, but when you can compare one database to another, even across different hosts, the AWR Warehouse offers something that even the Consolidation Planner can’t.
Next post, I’ll go over the AWR Warehouse AWR Comparison Period Report.
The OMS Patcher is a newer patching mechanism for the OMS specifically, (I know, the name kind of gave it away…) Although there are a number of similarities to Oracle’s infamous OPatch, I’ve been spending a lot of time on OTN’s support forums and via email, assisting folks as they apply the first system patch to 220.127.116.11.0. Admit it, we know how much you like patching…
The patch we’ll be working with is the following:
./emctl stop oms
$ORACLE_HOME should be set to the OMS home, and an omspatcher variable should point to the OMSPatcher executable:
export omspatcher=$OMS_HOME/OMSPATCHER/omspatcher
export ORACLE_HOME=/u01/app/oracle/13c
$ omspatcher apply -analyze -property_file <location of property file>
I’d recommend running the following instead, which is a simplified command and will result in success if you’ve set up your environment:
omspatcher apply <path to your patch location>/22920724 -analyze
If this returns with a successful test of your patch, then simply remove the “-analyze” from the command and it will then apply the patch:
omspatcher apply <path to your patch location>/22920724
You’ll be asked a couple of questions, so be ready with the information, including verifying that you can log into your Weblogic console.
Verify that the Weblogic domain URL and username is correct or type in the correct one, enter the weblogic password
Choose to apply the patch by clicking “Y”
Patch should proceed.
The output of the patch will look like the following:
OMSPatcher log file: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

Please enter OMS weblogic admin server URL(t3s://adc00osp.us.oracle.com:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>

Do you want to proceed? [y|n]
y
User Responded with: Y

Applying sub-patch "22589347" to component "oracle.sysman.si.oms.plugin" and version "18.104.22.168.0"...
Applying sub-patch "22823175" to component "oracle.sysman.emas.oms.plugin" and version "22.214.171.124.0"...
Applying sub-patch "22823156" to component "oracle.sysman.db.oms.plugin" and version "126.96.36.199.0"...

Log file location: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

OMSPatcher succeeded.
Note the sub-patch information. It’s important to know that this is contained in the log, for if you need to roll back a system patch, it must be done per sub-patch, using the identifiers listed here.
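Using the sub-patch identifiers from the log, a rollback would be run one sub-patch at a time; a hedged sketch, (run the -analyze pass first, just as with apply):

```
# Roll back one sub-patch at a time, using the identifiers from the deploy log
omspatcher rollback -id 22589347 -analyze
omspatcher rollback -id 22589347
```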
If you attempted to rollback the system patch, using the system patch identifier, you’d receive an error:
$ /u01/app/oracle/13c/OMSPatcher/omspatcher rollback -id 22920724 -analyze

OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation. All rights reserved.
......
"22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.

OMSRollbackSession failed: "22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.
Once the system patch has completed successfully, you’ll need to add the agent patch. Best practice is to use a patch plan, apply it to one agent, make that the current gold agent image, and then apply it to all the agents subscribed to it. If you need more information on how to use Gold Agent Images, just read up on it in this post.
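The gold agent image flow can also be driven from emcli. A rough sketch, (verb names and parameters from memory; verify them against emcli help in your environment, and the image, version and agent names here are made up):

```
# Create an image version from the patched agent, promote it to Current,
# subscribe the remaining agents, then update them from the image
emcli create_gold_agent_image -image_name="AGT13C" -version_name="V1" -source_agent="host1.example.com:3872"
emcli promote_gold_agent_image -version_name="V1" -maturity="Current"
emcli subscribe_agents -image_name="AGT13C" -agents="host2.example.com:3872"
emcli update_agents -version_name="V1" -agents="host2.example.com:3872"
```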
Monitoring templates are an essential feature of a basic Enterprise Manager environment, ensuring consistent monitoring across groups and target types. There’s an incredibly vast group of experts in the EM community, and to demonstrate this, Brian Williams from Blue Medora, a valuable partner of Oracle’s in the Enterprise Manager space, is going to provide a guest blog post on how simply and efficiently you can monitor even PostgreSQL databases with EM13c using custom monitoring templates!
Guest Blogger: Brian Williams
Oracle Enterprise Manager 13c is a premier database monitoring platform for your enterprise. With Enterprise Manager 13c, users have access to many database-level metric alerting capabilities, but how do we standardize these threshold values across your database environment? The answer is simple: by creating and deploying Oracle Enterprise Manager’s monitoring templates.
Monitoring templates allow you to standardize monitoring settings across your enterprise by specifying the monitoring settings and metric thresholds once and applying them to your monitored targets. You can save, edit, and apply these templates across multiple targets or groups. A monitoring template is specified for a particular target type and can only be applied to targets of the same type. A monitoring template will have configurable values for metrics, thresholds, metric collection schedules, and corrective actions.
Today we are going to walk through the basic steps of creating a custom monitoring template and apply that template to select database targets. In this example, I will be creating templates for my newly added PostgreSQL database targets monitored with the Oracle Enterprise Manager Plugin for PostgreSQL from Blue Medora.
To get started, log in to your Oracle Enterprise Manager 13c Cloud Control Console. Navigate to the Enterprise menu, select Monitoring and then Monitoring Templates. From this view, we can see a list of all monitoring templates on the system. To begin creating a new monitoring template, select Create from this view. If you are not logged in as a super admin account, you may need to be granted the resource privilege Create Monitoring Template.
Figure 1 – Monitoring Templates Management Page
From the Create Monitoring Template page, select the Target Type radio button. In the Target Type Category drop down, select Databases. In the Target Type drop down, select PostgreSQL Database, or the target type of your choice. Click Continue.
The next screen presented will be the Create Monitoring Template page. Name your new template, give a description, and then click the Metric Thresholds tab. From the Metric Thresholds tab, we can begin defining our metric thresholds for our template.
You will be presented with many configurable metric thresholds. Find your desired metrics and from the far right column named Edit, click the Pencil Icon to edit the collection details and set threshold values. After setting the threshold values, click Continue to return to the Metric Thresholds view and continue to configure additional metric thresholds as needed. After all metrics have been configured, click OK to finish the creation of the monitoring template.
The final step to make full use of your newly created template is to apply the template to your selected target databases. From the Monitoring Templates screen, highlight your template, select Actions, and then Apply. Select the apply option to completely replace all metric settings in the target to use only metrics configured in your template. Click the Add button and select all database targets desired for the application. After the targets are added to the list, click Select All to mark targets for final application. Click OK to process the application. The deployment can be tracked by watching the Pending, Passed, or Failed number for the Apply Status box on the Monitoring Templates page.
Figure 2 – Apply Monitoring Template to Destination Targets
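For larger fleets, the same application can be scripted; EM ships an emcli verb for this, (a sketch from memory; confirm the exact parameters with emcli help apply_template, and the template and target names here are hypothetical):

```
# Apply a saved monitoring template to multiple targets in one call
emcli apply_template -name="PostgreSQL_Prod_Template" -targets="pgdb1:postgresql_database;pgdb2:postgresql_database"
```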
Now that I have the newly created template applied, I can navigate back to my database target home page and view top-level critical alerts based on my configurations.
Figure 3 – Target Home Page and PostgreSQL Overview
Although your database targets will eventually alert with issues, there is a solution available to give you at-a-glance visibility into PostgreSQL high availability via replication monitoring; check out the Oracle Enterprise Manager Plug-in for PostgreSQL by visiting Blue Medora’s website for product and risk-free trial information. For more walkthroughs on creating and applying monitoring templates, refer to the Enterprise Manager Cloud Control Administrator’s Guide, Chapter 7 Using Monitoring Templates.
Brian Williams is a Solutions Architect at Blue Medora specializing in Oracle Enterprise Manager and VMware vRealize Operations Manager. He has been with Blue Medora for over three years, also holding positions in software QA and IT support. Blue Medora creates platform extensions designed to provide further visibility into cloud system management and application performance management solutions.
Licensing can be a confusing topic for many, but additional stress can be felt for those that use tools that cover multiple products and features that can span more than one management pack.
I’ve demonstrated how you can see what management packs are used and how to control this via EM13c, (also available in EM12c) but that can take you away from the task at hand. That’s where turning on Annotations for Management Packs may be beneficial.
What are annotations and why use them?
For Enterprise Manager 13c, this results in initials for management packs placed after feature drop down menus throughout the interface.
Turning this feature on is very easy. Click on Settings, Management Pack and then Annotations.
Once this is enabled, your drop down menus will look a little different than they did previously, as the annotations will be added for the management pack(s) used by each feature. This applies both to the upper menus, from Enterprise through Settings, and to the lower target menu, from the main Target Type across.
To give an example, let’s say we’ve logged into a database target and clicked on Performance. We’d now see the annotations for the management packs used for the first section of options:
We quickly recognize the Database Diagnostics, (DD) and Database Tuning, (DT) Pack annotations next to each of the features.
Let’s take one of the drop downs from the Enterprise Menu with the annotations turned on. From Enterprise, Configuration, can you tell what management packs are being used outside of DBLM, (Database Lifecycle Management) in the list below?
There are a lot of acronyms and initials there, and hint, hint… I already showed you how to find out this information earlier, so take your time, I’ll wait right here…. 🙂
Have a great weekend!
Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager? I mean, it’s a GUI, (graphical user interface) so the logs should be unnecessary to the administrator.”
You just explained why we receive so many emails from database experts stuck on issues with EM, thinking it’s “just a GUI”.
Yes, there are a lot of logs involved with Enterprise Manager. With the introduction of the agent back in EM10g, there were more, and with the WebLogic tier in EM11g, we added more still. EM12c added functionality never dreamed of before and with it, MORE logs, but don’t despair, because we’ve also tried to streamline those logs, and where we weren’t able to streamline, we at least came up with a directory path naming convention that keeps you from having to search for information so often.
The most important EM logs live in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.
Now, in many threads on Oracle Support and in blogs, you’ll hear about the emctl.log, but today I’m going to spend some time on the emoms properties, trace and log files. The EMOMS naming convention means just what you would think: the Enterprise Manager Oracle Management Service, aka EMOMS.
After all that talk about logs, we’re going to jump into the configuration files first. The emoms.properties file is found over in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.
Now, in EM12c, this file, along with the emomslogging.properties file, was very important to the configuration of the OMS and its logging; without it, we wouldn’t have any trace or log files, or at least the OMS wouldn’t know what to do with the output data it collected! If you look in the emoms.properties/emomslogging.properties files for EM13c, you’ll receive the following header:
#NOTE #---- #1. EMOMS(LOGGING).PROPERTIES FILE HAS BEEN REMOVED
Yes, the file is simply a place holder and you now use EMCTL commands to configure the OMS and logging properties.
There are, actually, very helpful commands listed in the property file to tell you HOW to update your EM OMS properties! If you can’t remember an emctl property command, this is a good place to look to find the command and its usage.
Trace files are recognized by any DBA: these files trace a process, and the emoms*.trc files are the trace files for EM OMS processes, including the Oracle Management Service itself. Know that a “warning” isn’t always something to be concerned about. Sometimes it’s just letting you know what’s going on in the system, (yeah, I know, shouldn’t they just classify that INFO then?)
2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started
These files do contain more information than the standard log file, but it may be more than what a standard EM administrator is going to search through. They’re most helpful when working with MOS and I recommend uploading the corresponding trace files if there is a log that support has narrowed in on.
Most of the time, you’re going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well. If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you’re investigating.
The format of the logs is important to understand, and I know I’ve blogged about this in the past, but we’ll just do a quick, high-level review. Taking the following entry:
2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info
We can see that the log entry starts with the timestamp, then the thread, the status, (ERROR, WARN, INFO) the module, and finally the message detail. This simplifies reading these logs and knowing how one would parse them into a log analysis program.
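Since the layout is fixed, these entries are straightforward to pull apart programmatically. Here’s a rough sketch; the regex and field names are my own assumptions based on the sample entries above, not an official format specification:

```python
import re

# Rough pattern for an emoms.log entry: timestamp, [thread], level,
# module, then the free-text message. Field names are my own labels.
LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<thread>.*)\] "
    r"(?P<level>ERROR|WARN|INFO) "
    r"(?P<module>\S+) "
    r"(?P<message>.*)$"
)

def parse_emoms_line(line):
    """Return a dict of fields for one log line, or None if it doesn't match."""
    m = LOG_LINE.match(line.strip())
    return m.groupdict() if m else None

# The sample ERROR entry shown above.
entry = parse_emoms_line(
    "2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: "
    "'weblogic.kernel.Default (self-tuning)'] ERROR "
    "deploymentservice.OMSInfo logp.251 - Failed to get all oms info"
)
print(entry["level"])   # ERROR
print(entry["module"])  # deploymentservice.OMSInfo
```

A parser like this would let you filter on the level field before uploading anything to a log analysis tool, though any production use would need to handle multi-line stack traces as well.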
There are other emoms log files, specializing in loader processing and startup. Each of these commonly contains more detailed information about the data it’s in charge of tracing.
If you want to learn more, I’d recommend reading up on EM logging from Oracle.
The change in EM13c is that it supports multiple proxies, but you may still not know how to set up a proxy, use it with your MOS credentials, and then assign your CSIs to targets.
To do this, click on Settings, Proxy Settings, My Oracle Support. Click on Manual Proxy Setting and then type in your proxy host entry, (sans the HTTPS, that’s already provided for you) and the port to be used:
Once entered, click on Test and if successful, then click on Apply. If it fails, make sure to check the settings with your network administrator and test the new ones offered. Once you have a proxy that works, you’ll receive the following message:
Next, you’ll need to submit your MOS credentials to be used with the EM environment. Keep in mind, the credentials used for this account, (let’s say you’re logged in as SYSMAN) will be identified with this EM login unless updated or removed.
Click on Settings, My Oracle Support, My Credentials. Enter the credentials to be used with this login and click Apply.
You’ve now configured MOS credentials to work with the main features of EM13c.
Under the same location as the one you set up your MOS credentials, you’ll notice the following drop down: Support Identifier Assignment.
This option allows you to verify and assign CSIs to the targets in Oracle Enterprise Manager. It’s a nice inventory feature in EM that can save you time as you work with MOS and SR support, too.
As you can see from my setup above, I only have a few targets in this EM environment, and I was able to do a search of the CSI that is connected to my MOS credentials and then assign it to each of these targets, (whited out.) If you have more than one CSI, you can assign the appropriate one to the targets it belongs to after searching by the target names or target types you wish to locate.
And that’s the 101 on Proxy, MOS and CSI Setup in EM13c!
How much do you know about the big push to BI Publisher reports from Information Publisher reporting in Enterprise Manager 13c? Be honest now, Pete Sharman is watching…. 🙂
I promise, there won’t be a quiz at the end of this post, but it’s important for everyone to start recognizing the power behind the new reporting strategy. Pete was the PM over the big push in EM13c and has a great blog post with numerous resource links, so I’ll leave him to quizzing everyone!
IP Reports are incredibly powerful and I don’t see them going away soon, but they have a lot of limitations, too. With the “harder” push to BI Publisher with EM13c, users receive a more robust reporting platform that is able to support the functionality that is required of an IT Infrastructure tool.
You can access the BI Publisher in EM13c from the Enterprise drop down menu-
There’s a plethora of reports already built out for you to utilize! These reports access only the OMR, (Oracle EM Management Repository) and cover numerous categories:
Note: Please be aware that the license for BI Publisher included with Enterprise Manager only covers reporting against the OMR and not any other targets DIRECTLY. If you decide to build reports against data residing in targets outside the repository, it will need to be licensed for each.
Once you click on one of the reports, you’ll be taken from the EM13c interface to the BI Publisher one. Don’t panic when that screen changes- it’s supposed to do that.
You’ll be brought to the Home page, where you’ll have access to your catalog of reports, (it will mirror the reports in the EM13c reporting interface) the ability to create New reports, open reports that you may have drafts of or that are local to your machine, (not uploaded to the repository) and authentication information.
In the left hand side bar, you will have menu options that duplicate some of what is in the top menu, plus access to tips to help you get more acquainted with BI Publisher-
This is where you’ll most likely access the catalog, create reports and download local BIP tools to use on your desktop.
Running a standard, pre-created report is pretty easy. This is a report that’s already had the template format created for you and the data sources linked. Oracle has tried to create a number of reports in categories it thought most IT departments would need, but let’s just run two to demonstrate.
Let’s say you want to know about Database Group Health. Now there’s not a lot connected to my small development environment, (four databases, three in the Oracle Public Cloud and one on-premise) and this is currently aimed at my EM repository. This limits the results, but as you can see, it shows the current availability, the current number of incidents and compliance violations.
We could also take a look at what kinds of targets exist in the Enterprise Manager environment:
Or who has powerful privileges in the environment:
Now these are just a couple of the dozens of reports available to you that can be run, copied, edited and sourced for your own environment’s reporting needs out of the BI Publisher. I’d definitely recommend that if you haven’t checked out BI Publisher, spend a little time on it and see how much it can do!
While at Collaborate 2016, a number of us were surprised that people still aren’t using Corrective Actions, and in EM13c there are a number of cool ones built into the system to make your life easier. In this post, we’ll use the very valuable Add Space to Tablespace corrective action to ensure the DBA is no longer woken up at night, automating the tedious task of adding logical space by extending or adding data files.
So let’s talk about how you can stop databases from making you get out of bed to add tablespace… 🙂
Most of this corrective action is already built out for you. All you have to do is create the corrective action from the built-in in the corrective action library. Start by clicking on Enterprise -> Monitoring -> Corrective Action Library. Select the Add Space to Tablespace job and click Go.
Name your new Corrective Action and select the Event Type ‘Metric Alert’. This ensures that the corrective action only runs when it meets the metric threshold.
Next, click on the Parameters tab to customize how the space should be added, the location of the space and other pertinent information. Once you’ve finished filling in all of this criteria, click on Save to Library.
This Corrective Action isn’t production yet; as with any full development life cycle, the Corrective Action is now set to Draft, and you will have to Publish it before it’s available for use.
Highlight the new Corrective Action and click on Publish. That’s all there is to it.
Now we need to assign the Corrective Action to a target to be used when it meets the metric threshold criteria.
Click on Targets -> Databases and choose one of your databases that you wish to set up to use the corrective action. Once you come to the Oracle Database Home page, click on Oracle Database -> Monitoring -> Metric & Collection Settings.
In the list of metrics, scroll down till you see Tablespace Full. Choose to edit the metric Tablespace Used (%).
At the bottom of the page, you’ll see a section that says All Others with a radio button, click on Edit. In the Corrective Actions section, Click on Add next to the Warning row.
Leave the default to use the Library and choose your Corrective Action you created earlier to add space. Ensure you add the correct Named Credentials that will be able to add the space, just as you would use to perform this task in the UI or as a DBA on the host and click Continue.
We’ve set this to perform the Corrective Action at Warning, so it should never reach Critical and hit an out-of-space issue before the Corrective Action is able to complete, ensuring everyone is able to rest without worry!
Ensure you Continue and Save the Changes to your metric settings and that’s all there is to it. To test the Corrective Action, you can add a small test tablespace to a database, limited size and create a table that maxes it out. It should add space automatically when it reaches the warning threshold.
Sweet Dreams from Kellyn and your Enterprise Manager 13c!
On my previous post, I submitted a job to create a test master database in my test environment. Now my test environment is a sweet offering of containers that simulate a multi-host scenario, but in reality, it’s not.
I noted that after the full copy started, my EMCC partially came down, as did my OMR, requiring both to be logged into and restarted.
Upon inspection of TOP on my “host” for my containers, we can see that there is some serious CPU usage from process 80:
PID    USER    PR  NI  VIRT  RES  SHR  S  %CPU   %MEM  TIME+      COMMAND
80     root    20   0     0    0    0  R  100.0   0.0  134:16.02  kswapd0
24473  oracle  20   0     0    0    0  R  100.0   0.0  40:36.39   kworker/u3+
and that this Linux process is managing the swap, (good job, Kellyn! :))
$ ps -ef | grep 80
root        80     2  0 Feb16 ?        02:14:36 [kswapd0]
The host is very slow to respond and is working hard. Now what jobs are killing it so? Is it all the test master creation?
Actually, no, remember, this is Kellyn’s test environment, so I have four databases that are loading to a local AWR Warehouse and these are all container environments sharing resources.
I have two failed AWR extract jobs due to me overwhelming the environment and can no longer get to my database home page to even remove them. I had to wait for a bit for the main processing to complete before I could even get to this.
As it got closer to completing the work of the clone, I finally did log into the AWR Warehouse, removed two databases and then shut them down to free up resources and space. We can then see the new processes for the test master, owned by 500 instead of showing as the oracle OS user, as they’re running in a different container than the one I’m running the top command from:
24473  root  20   0        0       0       0  R  100.0  0.0  52:23.17  kworker/u3+
15682  500   20   0  2456596   58804   51536  R   98.0  0.2  50:07.48  oracle_248+
 5034  500   20   0  2729304  190600  181552  R   91.1  0.8   8:59.36  ora_dbw0_e+
 2946  500   20   0  2802784  686440  626148  R   86.8  2.8   3:15.62  ora_j019_e+
 5041  500   20   0  2721952   19644   17612  R   68.6  0.1   6:36.20  ora_lg00_e+
It looks a little better though, as it starts to recover from me multi-tasking too much at once. After a certain amount of time, the test master was finished and up:
I did get it to the point during the clone where there was no swap left:
KiB Mem : 24423596 total,   178476 free,  10595080 used,  13650040 buff/cache
KiB Swap:  4210684 total,        0 free,   4210684 used.   1148572 avail Mem
They really should just cut me off some days… 🙂
Note: Followup on this. Found out upon comparing the host environment to my other environments, the swap was waaaaaay too low. This is a good example of what will happen if you DON’T have enough swap to perform these types of tasks, like cloning!
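A quick sanity check before kicking off a big clone would have caught this. Here’s a minimal sketch of the idea; the 10% threshold is an arbitrary assumption of mine, not an Oracle recommendation, and on a real host you would feed it the values from `free` or /proc/meminfo:

```python
def swap_health(swap_total_kib, swap_free_kib, min_free_ratio=0.10):
    """Flag a host whose swap is exhausted or nearly so.

    Thresholds are illustrative assumptions, not Oracle recommendations.
    Returns one of: 'no swap configured', 'exhausted', 'low', 'ok'.
    """
    if swap_total_kib == 0:
        return "no swap configured"
    if swap_free_kib == 0:
        return "exhausted"
    if swap_free_kib / swap_total_kib < min_free_ratio:
        return "low"
    return "ok"

# The figures from the top output above: 4210684 KiB total, 0 KiB free.
print(swap_health(4210684, 0))        # exhausted
print(swap_health(4210684, 2000000))  # ok
```

Anything other than "ok" before a clone or test master creation is a good cue to add swap (or wait) first.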
I get a lot of questions starting with, “Where do I find…” and end with “in the Oracle Management Repository, (OMR)?”
The answer to this is one that most DBAs are going to use, as it’s no different than locating objects in most databases, just a few tricks to remember when working with the OMR.
Using this information, you can answer a lot of questions when trying to figure out a command you’ve seen but don’t have the specific syntax for, and need to know where to get it.
As a working example, someone asked me today how they would locate what platform # is used for their version of Linux? The documentation referred to a command that listed one, but they couldn’t be sure if it was the same one that they were deploying.
So how would we find this?
./emcli <insert command here> -platform=?
select table_name from dba_tab_columns where owner='SYSMAN' and table_name like 'MGMT%' and column_name='PLATFORM_NAME';
This is going to return five rows and, trust me, pretty much all of them are going to have the PLATFORM_ID along with the PLATFORM_NAME in one way or another. There are a few that stand out that, with a little logic, make sense:
TABLE_NAME
--------------------------------------------------------------------------------
MGMT_ARU_PLATFORMS_E
MGMT$ARU_PLATFORMS
MGMT$EM_LMS_ACT_DATA_GUARD_VDB
MGMT_ARU_PLATFORMS
MGMT_CCR_HOST_INFO
SQL> select distinct(platform_name), platform_id from sysman.mgmt$aru_platforms
  2  order by platform_id;
PLATFORM_NAME                            PLATFORM_ID
---------------------------------------- -----------
HP OpenVMS Alpha                                  89
Oracle Solaris on x86 (32-bit)                   173
HP-UX Itanium                                    197
Microsoft Windows Itanium (64-bit)               208
IBM: Linux on System z                           209
IBM S/390 Based Linux (31-bit)                   211
IBM AIX on POWER Systems (64-bit)                212
Linux Itanium                                    214
Linux x86-64                                     226
IBM: Linux on POWER Systems                      227
FreeBSD - x86                                    228
The person who posted the question was looking for the Platform_ID for Linux x86-64, which happens to be 226.
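If you script against the EMCLI, it can be handy to keep that mapping close by rather than re-querying the OMR each time. A minimal sketch using a few of the rows returned above; the dict is hand-copied from the query output, not an API:

```python
# A few PLATFORM_NAME -> PLATFORM_ID pairs copied from the
# sysman.mgmt$aru_platforms query output above.
PLATFORMS = {
    "Linux x86-64": 226,
    "Linux Itanium": 214,
    "IBM AIX on POWER Systems (64-bit)": 212,
    "Oracle Solaris on x86 (32-bit)": 173,
}

def platform_id(name):
    """Look up the ARU platform ID for a platform name, or None if unknown."""
    return PLATFORMS.get(name)

print(platform_id("Linux x86-64"))  # 226
```

The lookup value can then be dropped straight into the `-platform=` argument of the emcli command you’re building.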
I’d always recommend checking the views before counting on the data, as some may be held in reserve for plugins or management packs that haven’t been deployed or used yet, but there’s a lot you can find out even if it isn’t in the GUI.
We’re DBAs, we love data and there’s plenty of that in the OMR for EM13c.
The Gold Agent Image is going to simplify agent management in EM13c, something a lot of folks are going to appreciate.
The first step to using this new feature is to create an image to be used as your gold agent standard. This should be the newest, most up to date and patched agent that you would like your other agents to match.
You can access this feature via your cloud control console from the Setup menu, Manage Cloud Control, Gold Agent Images.
If it’s the first time you’re accessing this, you’ll want to click on Manage all Images button in the middle, right hand side to begin.
The first thing you’ll do is click on Create, and this will begin the steps to build out the shell for your gold image.
The naming convention requires underscores between words and can accept periods, which is great to keep release versions straight.
Type in a description, choose the Platform, which pulls from your software library and then click Submit.
You’ve now created your first Gold Agent Image for the platform you chose from the drop down before clicking Submit.
Now let’s return to Gold Agent Images by clicking on the link that you see above on the left hand side of the above screen.
As this environment only has one agent to update, it matches what I have in production and says everything is on the gold agent image.
You may want to know where you go from here- There are a number of ways to manage and use Gold Agent Images for provisioning. I’ve covered much of it in this post.
You may be less than enthusiastic about all this clicking in the user interface. We can avoid that with incorporating the Enterprise Manager Command Line Interface, (EMCLI) into the mix. The following commands can be issued from any host with the EMCLI installed.
The syntax to subscribe the agents on two hosts to the existing Gold Agent Image from my example above would be:
$<OMS_HOME>/bin/emcli subscribe_agents -image_name="AgentLinux131000" -agents="host1.us.oracle.com:1832,host2.us.oracle.com:1832"
Or if the agents belong to an Admin group, then I could deploy the Gold Agent Image to all the agents in a group by running the following command from the EMCLI on the OMS host:
$<OMS_HOME>/bin/emcli subscribe_agents -image_name="AgentLinux131000" -groups="Admin_dev1,Admin_prod1"
The syntax to provision the new gold agent image to a host(s) is:
<ORACLE_HOME>/bin/emcli update_agents -gold_image_series="Agent13100" -version_name="V1" -agents="host1.us.oracle.com:1832,host2…"
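If you drive these verbs from a script, a small helper keeps the comma-separated argument lists straight. This sketch only assembles the argv list; the emcli path, image name, and host names below are placeholders taken from the examples above:

```python
def build_subscribe_cmd(emcli_path, image_name, agents=None, groups=None):
    """Assemble an `emcli subscribe_agents` invocation as an argv list.

    Exactly one of `agents` or `groups` should be given, matching the
    two forms shown above. Paths and names here are just examples.
    """
    cmd = [emcli_path, "subscribe_agents", f"-image_name={image_name}"]
    if agents:
        cmd.append("-agents=" + ",".join(agents))
    if groups:
        cmd.append("-groups=" + ",".join(groups))
    return cmd

cmd = build_subscribe_cmd(
    "/u01/oms/bin/emcli", "AgentLinux131000",
    agents=["host1.us.oracle.com:1832", "host2.us.oracle.com:1832"],
)
print(" ".join(cmd))
```

The resulting list could be handed to something like subprocess.run on the host where the EMCLI is installed.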
Statuses of provisioning jobs can be checked via the EMCLI, as can other tasks. Please see Oracle’s documentation for more cool ways to use the command line with the Gold Agent Image feature!