May 26th, 2016 by dbakevlar

Per Oracle documentation, you’re able to upgrade from EM12c to EM13c, including the OMR, (Oracle Management Repository) to DB12c and the OMS, (Oracle Management Service) to EM13c, while leaving the agent software at version 12.1.0.3 or higher.  Upgrades of the agents can then be performed later on using the Gold Agent Image, removing a lot of work for the administrator.


This is a great option for many customers, but what if your existing EM environment isn’t healthy and you need to start fresh?  What options are left for you then?

I’ve covered, (and no, it isn’t supported by Oracle) the ability to discover a target by doing a silent deployment push from the target host back to the OMS using a 12.1.0.5 software image, simply by adding the -ignorePrereqs argument.  This does work and will deploy at least the last version of the EM12c agent to the EM13c environment.  What I don’t know is whether it works for an installation that was upgraded to a supported version in the past, (i.e. 12.1.0.2 upgraded to 12.1.0.5, etc.)  I haven’t tested that and can’t guarantee it, but it’s worth a try.  The same goes for an unsupported OS version, but I think if you push the deploy from the target and choose to ignore the prerequisite checks, it may successfully add the target.

If you have an earlier version of the EM12c agent, 12.1.0.3 to 12.1.0.5, that can’t be updated to EM13c, there is still hope outside of what I’ve proposed.  The word on the street is that with the release of 13.2, there will be backward support for earlier versions of agent software, and that WILL be fully supported by Oracle.

That also offers a silver lining for those who may be considering a fresh EM13c installation rather than an upgrade and want to redirect existing targets running EM12c agent software to the new installation.  I’m assuming they’ll simply run the following command to redirect those targets, (or something close!):

emctl secure agent <new EM_URL and credential information>
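A minimal sketch of what that redirect could look like, assuming the standard emctl secure agent syntax, (registration password plus a wallet source URL pointing at the new OMS) - the exact arguments Oracle will support for this in 13.2 are my assumption here:

# Run from the agent home on each target host; the password is the agent
# registration password defined on the NEW OMS, and the URL is its upload URL.
$AGENT_HOME/bin/emctl secure agent <AGENT_REG_PASSWORD> -emdWalletSrcUrl https://newoms.example.com:4903/em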

I have high hopes for this option being available in 13.2 and will cross my fingers along with you.

Posted in EM13c, Enterprise Manager Tagged with: , ,

May 23rd, 2016 by dbakevlar

Enterprise Manager does a LOT.  Sometimes it may do too much.  Customers on forums, in support or via email and social media come to us asking how to address something they view as not working right, and the truth is, we could simply answer their question, but often they aren’t using the right tool to accomplish what they’re attempting.


The Export Feature in Enterprise Manager

A customer was frustrated as he was performing scheduled exports using the export utility that can be found under the Database, Schema drop down.  He wanted to perform these exports more than once per hour and was running into issues due to limitations in how often the job could run and in the file naming convention.  The scheduling mechanism isn’t quite as robust as he needed, so I understood his frustration, but I also realized that he was using the wrong tool for the job.  It seemed so natural to me that he should be using the EM Job System, but he really didn’t understand why he should use that when exports were right there in the drop down.

Even though he can do the following:

  1. Export a schema or database.
  2. Schedule the export to happen immediately or later and set it to repeat.
  3. Use variables in the file naming.
  4. Work from a simple GUI interface.

Limitations Include:

  • It was never meant to replace the job system; it was simply enhanced to offer the ability to schedule and repeat jobs.
  • It doesn’t offer all the bells and whistles you’d get by scripting with shell, perl or another scripting language from the command line, (see the sketch below for the kind of control scripting gives you).
  • It has no success notification or alerting for failure in the job interface.
  • There is no template support like the Job Library.
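As an example of the kind of flexibility the drop down utility can’t match, here’s a minimal shell sketch of a scripted export with a timestamped file name, (the connect string, schema and directory object are placeholders):

#!/bin/sh
# Timestamped Data Pump export - the sort of naming control scripting gives you
STAMP=$(date +%Y%m%d_%H%M)
expdp system@ORCL schemas=HR directory=DATA_PUMP_DIR \
    dumpfile=hr_${STAMP}.dmp logfile=hr_${STAMP}.log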

It can be very confusing if you don’t know that we commonly have about 10 ways to skin a cat in Enterprise Manager, and it’s important to review your requirements before choosing which one will meet them, even if the naming convention tells you there is a specific feature for the task.  An infrastructure feature, built out to support advanced functionality for everything you have to accomplish, may be the better choice than one aimed at a single specific requirement.

I’m a command line DBA, so I wasn’t even aware of the Export utility in the drop down menu.  I rarely, if ever, look at the database administration offerings.  I took the time this morning to run the export utility in EM13c against one of my databases so that I knew what it offered, (along with what it didn’t…)

Please, don’t ask me if EM Express offers this.  I really couldn’t tell you, (inside joke… :))

 

Posted in EM13c Tagged with: ,

May 20th, 2016 by dbakevlar

This last week I presented at the Great Lakes Oracle Conference, (GLOC16) and the discussion of monitoring non-Oracle databases came up while we were on the topic of management packs-  how to monitor usage and which packs were required to monitor non-Oracle databases.  I didn’t realize how confusing the topic could be until I received an email, while on a layover in Chicago, relaying what the attendee had taken away from it.  I was even more alarmed when I read the email again, planning to blog about it today after a full night’s sleep!


You’ll often hear me refer to EM13c as the single pane of glass when discussing hybrid cloud management, or performance management concerning the AWR Warehouse and such, but it can also make a multi-platform environment easier to manage, too.

The difference between managing the many Oracle features with EM13c and managing non-Oracle database platforms is that we need to shift the discussion from Management Packs to Plug-ins.  I hadn’t really thought too much of it when I’d been asked what management packs were needed to manage Microsoft SQL Server, Sybase or DB2.  My brain was solely focused on the topic of management packs, and I told the audience how they could verify management packs on any page in EM, (while on the page, click on Settings, Management Packs, Packs Used for This Page) for any database they were monitoring:

em13c_mssql

As easily demonstrated in the image above, there aren’t any management packs utilized to access information about the MSSQL_2014 Microsoft SQL Server target, and you can quickly see each user database’s status, CPU usage, IO for reads and writes, along with errors, and even control the agent from this useful EM dashboard.

I can do the same for a DB2_unit6024 database environment:

em13c_sybase

You’ll note that the DB2 database dashboard is different from the SQL Server one, displaying the pertinent data for that database platform.

Now, you may be saying, “Kellyn’s right, I don’t need any management packs,” (which is true) but then you click on Settings, Extensibility, Plug-ins and you’ll locate the database plug-ins used to add each one of these databases to Enterprise Manager.

em13c_dbplugins
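If you prefer the command line, the EMCLI can list what’s deployed, (a quick sketch - verify the verbs and parameters with emcli help on your release, and the agent name below is a placeholder):

# Plug-ins deployed to the OMS
$OMS_HOME/bin/emcli list_plugins_on_server

# Plug-ins deployed to a specific agent
$OMS_HOME/bin/emcli list_plugins_on_agent -agent_names="host1.example.com:3872"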

These plug-ins are often offered by third parties and must be licensed through them.  There are often charges from these providers, and I should have been more in tune with the true discussion instead of being stuck on the topic of management packs.

Luckily for me, there is a small amount of explanation at the very bottom of the management pack documentation that should clear up any questions.  I hope this offers some insight, and thank you to everyone who came to my sessions at GLOC!

Posted in EM13c, Enterprise Manager Tagged with: , , , , ,

May 18th, 2016 by dbakevlar

Change is difficult for technical folks.  Our world is always moving at blinding speed, so if you start changing things that we don’t think need to be changed, even if you improve upon them, we’re not always appreciative.


Configuration Management, EM12c to EM13c

As requests came in for me to write on the topic of Configuration Management, I found the EM13c documentation very lacking and had to fall back to the EM 12.1.0.5 documentation to fill in a lot of missing areas.  There were also changes to the main interface that you use to work with the product.

When comparing the drop down menus, you can see the changes.

config_chng1

Now I’m going to explain to you why this change is good.  In Enterprise Manager 12.1.0.5, (on the left) you can see that the Comparison feature of Configuration Management has different drop down options than in Enterprise Manager 13.1.0.0.

EM12c Configuration Management

You might think it is better to have direct access to Compare, Templates and Job Activity via the drop downs, but they really are *still directly* accessible-  only the interface has changed.

When you accessed Configuration Management in EM12c, you would click on Comparison Templates and reach the following window:

config_c5

You can see all the templates and access them quickly, but what if you want to then perform a comparison?  Intuition would tell you to click on Actions and then Create.  Unfortunately, this only allows you to create a Comparison Template, not a One-Time Comparison.

To create a one-time comparison in EM12c, you would have to start over, click on the Enterprise menu, Configuration and then Comparison.  This isn’t very user friendly and can be frustrating for the user, even if they’ve become accustomed to the user interface.

EM13c Configuration Management Overview

EM13c has introduced a new interface for Configuration Management.  The initial interface dashboard is the Overview:

config_c4

You can easily create a One-Time Comparison, a Drift Management definition or a Consistency Management definition right from the main Overview screen.  All interfaces for Configuration Management now include tab icons on the left so that you can easily navigate from one feature of the Configuration Management utility to another.

In EM13c, if you are in the Configuration Templates, you can easily see the tabs to take you to the Definitions, the Overview or even the One-Time Comparison.

config_c6

No more returning to the Enterprise drop down and starting from the beginning to simply access another aspect of Configuration Management.

See?  Not all change is bad… 🙂  If you’d like to learn more about this cool feature, (before I start to dig into it fully with future blog posts) start with the EM12c documentation.  There’s a lot more in there to help you understand the basics.

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

May 17th, 2016 by dbakevlar

With the addition of the Configuration Management from OpsCenter to Enterprise Manager 13c, there are some additional features to ease the management of changes and drift in Enterprise Manager, but I’m going to take these posts in baby steps, as the feature can be a little daunting.  We want to make sure that you understand this well, so we’ll start with the configuration searches and search history first.


To access the Configuration Management feature, click on Enterprise and Configuration.

Click on Search to begin your journey into Configuration Management.

confs2

From the Search Dashboard, click on Actions, Create and History.  You’ll be taken to the History wizard and you’ll need to fill in the following information:

config_hist2

And then click on Schedule and Notify to build out a schedule to check the database for configuration changes.

config_hist1

For our example, we’ve chosen to run our job once every 10 minutes and set up a grace period; once satisfied, click on Schedule and Notify.  Once you’ve returned to the main screen, click on Save.

Now when we click on Enterprise, Configuration, Search, we see the Search we created in the list of Searches.  The one we’ve created is both runnable AND MODIFIABLE.  The ones that come with EM13c are locked down and should be considered templates to be used with Create Like options.

The job runs every 10 minutes, so if we wait long enough after a change, we can then click on the search from the list and click on Run from the menu above the list:

confs4

As I’ve made a change to the database, it shows immediately in the job, and if I had set this up to notify, it would email me via the settings for the user who owns the configuration:

confs5

If you highlight a row and click on See Real-Time Observations, you’ll be taken to the reports showing that each of the pluggable databases wasn’t brought back up to an open mode post-maintenance and that they need to be returned to an open status before they will match the original historical configuration.

We can quickly verify that the databases aren’t open.  In fact, one is read only and the other is only mounted:

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDBKELLY  READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDBKELLYN                      MOUNTED
PDBK_CL1                       READ ONLY

So let’s open our PDBs and then we’ll be ready to go:

-- PDBKELLYN was only mounted, so a simple open works
ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
-- PDBK_CL1 was open read only, so close it first, then open it read write
ALTER PLUGGABLE DATABASE PDBK_CL1 CLOSE;
ALTER PLUGGABLE DATABASE PDBK_CL1 OPEN;

Ahhhh, much better.

Posted in EM13c, Enterprise Manager Tagged with: ,

May 3rd, 2016 by dbakevlar

Monitoring templates are an essential feature of a basic Enterprise Manager environment, ensuring consistent monitoring across groups and target types.  There’s an incredibly vast group of experts in the EM community, and to demonstrate this, Brian Williams from Blue Medora, a valuable partner of Oracle’s in the Enterprise Manager space, is going to provide a guest blog post on how simply and efficiently you can monitor even PostgreSQL databases with EM13c using custom monitoring templates!


Creating and Applying Custom Monitoring Templates in Oracle Enterprise Manager 13c

Guest Blogger:  Brian Williams

Oracle Enterprise Manager 13c is a premier database monitoring platform for your enterprise.  With Enterprise Manager 13c, users have access to many database-level metric alerting capabilities, but how do we standardize these threshold values across your database environment? The answer is simple: by creating and deploying Oracle Enterprise Manager’s monitoring templates.

Monitoring templates allow you to standardize monitoring settings across your enterprise by specifying the monitoring settings and metric thresholds once and applying them to your monitored targets. You can save, edit, and apply these templates across multiple targets or groups. A monitoring template is specified for a particular target type and can only be applied to targets of the same type. A monitoring template has configurable values for metrics, thresholds, metric collection schedules, and corrective actions.

Today we are going to walk through the basic steps of creating a custom monitoring template and applying that template to selected database targets. In this example, I will be creating templates for my newly added PostgreSQL database targets monitored with the Oracle Enterprise Manager Plugin for PostgreSQL from Blue Medora.

To get started, log in to your Oracle Enterprise Manager 13c Cloud Control Console. Navigate to the Enterprise menu, select Monitoring and then Monitoring Templates. From this view, we can see a list of all monitoring templates on the system. To begin creating a new monitoring template, select Create from this view.  If you are not logged in as a super admin account, you may need to be granted the resource privilege Create Monitoring Template.

bw_im1

Figure 1 – Monitoring Templates Management Page

From the Create Monitoring Template page, select the Target Type radio button. In the Target Type Category drop down, select Databases. In the Target Type drop down, select PostgreSQL Database, or the target type of your choice. Click Continue.

The next screen presented will be the Create Monitoring Template page. Name your new template, give a description, and then click the Metric Thresholds tab. From the Metric Thresholds tab, we can begin defining our metric thresholds for our template.

You will be presented with many configurable metric thresholds. Find your desired metrics and from the far right column named Edit, click the Pencil Icon to edit the collection details and set threshold values. After setting the threshold values, click Continue to return to the Metric Thresholds view and continue to configure additional metric thresholds as needed. After all metrics have been configured, click OK to finish the creation of the monitoring template.

The final step to make full use of your newly created template is to apply it to your selected target databases. From the Monitoring Templates screen, highlight your template, select Actions, and then Apply. Select the apply option that completely replaces all metric settings in the target so that only the metrics configured in your template are used. Click the Add button and select all of the database targets you want the template applied to. After the targets are added to the list, click Select All to mark them, then click OK to process the apply operation. The deployment can be tracked by watching the Pending, Passed, or Failed numbers in the Apply Status box on the Monitoring Templates page.

bw_im2

Figure 2 – Apply Monitoring Template to Destination Targets
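The same apply step can also be scripted with the EMCLI, which is handy when rolling a template out to many targets at once.  A minimal sketch using the apply_template verb, (the template name, target names and the PostgreSQL target type string are placeholders for whatever your plug-in registers - check emcli help apply_template for the exact parameters):

# Apply a saved monitoring template to two database targets
emcli apply_template \
    -name="PostgreSQL_Custom_Template" \
    -targets="pgdb1:postgresql_database;pgdb2:postgresql_database"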

Now that I have the newly created template applied, I can navigate back to my database target home page and view top-level critical alerts based on my configurations.

bw_im3

Figure 3 – Target Home Page and PostgreSQL Overview

Although your database targets will eventually alert with issues, there is a solution available to give you at-a-glance visibility into PostgreSQL high availability via replication monitoring; check out the Oracle Enterprise Manager Plug-in for PostgreSQL by visiting Blue Medora’s website for product and risk-free trial information. For more walkthroughs on creating and applying monitoring templates, refer to the Enterprise Manager Cloud Control Administrator’s Guide, Chapter 7 Using Monitoring Templates.

 

Brian Williams is a Solutions Architect at Blue Medora specializing in Oracle Enterprise Manager and VMware vRealize Operations Manager. He has been with Blue Medora for over three years, also holding positions in software QA and IT support. Blue Medora creates platform extensions designed to provide further visibility into cloud system management and application performance management solutions.

 

 

 

Posted in EM13c, Enterprise Manager, Guest Blogger Tagged with: , ,

April 27th, 2016 by dbakevlar

Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager?  I mean, it’s a GUI, (graphical user interface) so the logs should be unnecessary to the administrator.”


You just explained why we receive so many emails from database experts stuck on issues with EM, thinking it’s “just a GUI”.

Log Files

Yes, there are a lot of logs involved with Enterprise Manager.  With the introduction of the agent back in EM10g, there were more, and with the weblogic tier in EM11g, we added more.  EM12c added functionality never dreamed of before and with it, MORE logs, but don’t despair, because we’ve also tried to streamline those logs, and where we weren’t able to streamline, we at least came up with a directory path naming convention that keeps you from having to search so often for information.

The directory structure for the most important EM logs is the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.

Now, in many threads on Oracle Support and in blogs, you’ll hear about the emctl.log, but today I’m going to spend some time on the emoms properties, trace and log files.  The EMOMS naming convention is just what you would think it’s about-  the Enterprise Manager Oracle Management Service, aka EMOMS.

The PROPERTIES File

After all that talk about logs, we’re going to jump into the configuration files first.  The emoms.properties file is a couple of directories over, in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.

In EM12c, this file, along with the emomslogging.properties file, was very important to the configuration of the OMS and its logging-  without it, we wouldn’t have any trace or log files, or at least the OMS wouldn’t know what to do with the output data it collected!  If you look in the emoms.properties/emomslogging.properties files for EM13c, you’ll see the following header:

#NOTE
#----
#1. EMOMS(LOGGING).PROPERTIES FILE HAS BEEN REMOVED

Yes, the file is simply a place holder and you now use EMCTL commands to configure the OMS and logging properties.

There are, actually, very helpful commands listed in the property file that tell you HOW to update your EM OMS properties!  Know that if you can’t remember an emctl property command, this is a good place to look to find the command and its usage.

The TRACE Files

Trace files are recognized by any DBA-  these files trace a process, and the emoms*.trc files are the trace files for EM OMS processes, including the one for the Oracle Management Service.  Know that a “warning” isn’t always something to be concerned about.  Sometimes it’s just letting you know what’s going on in the system, (yeah, I know, shouldn’t they just classify that as INFO then?)

2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started

These files do contain more information than the standard log file, but it may be more than what a standard EM administrator is going to search through.  They’re most helpful when working with MOS and I recommend uploading the corresponding trace files if there is a log that support has narrowed in on.

The LOG Files

Most of the time, you’re going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well.  If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you’re investigating.

The format of the logs is important to understand, and I know I’ve blogged about this in the past, but we’ll just do a quick, high level review.  Take the following entry:

2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info

We can see that the log entry starts with the timestamp, followed by the executing thread, the status, (ERROR, WARN, INFO) the component and the message detail.  This simplifies reading these logs and makes it clear how one would parse them in a log analysis program.
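As a quick illustration, a one-liner like the sketch below pulls just the ERROR entries, with their timestamps, out of emoms.log, (the path is the one referenced earlier - adjust it for your own install):

# Show only ERROR entries from the OMS log, most recent last
grep " ERROR " $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log/emoms.log | tail -20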

There are other emoms log files as well, specializing in loader processing and startup.  Each of these commonly contains more detailed information about the process it’s in charge of tracing.

If you want to learn more, I’d recommend reading up on EM logging from Oracle.

 

 

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 25th, 2016 by dbakevlar

The change in EM13c is that it supports multiple proxies, but you may still not know how to set up a proxy, use it with your MOS credentials and then assign your CSIs to targets.


Proxies

To do this, click on Settings, Proxy Settings, My Oracle Support.  Click on Manual Proxy Setting and then type in your proxy host entry, (sans the HTTPS, that’s already provided for you) and the port to be used:

proxy1

Once entered, click on Test and, if successful, click on Apply.  If it fails, check the settings with your network administrator and test the new ones they offer.  Once you have a proxy that works, you’ll receive the following message:

proxy2

My Oracle Support Credentials

Next, you’ll need to submit your MOS credentials to be used with the EM environment.  Keep in mind, the credentials used for this account, (let’s say you’re logged in as SYSMAN)  will be identified with this EM login unless updated or removed.

Click on Settings, My Oracle Support, My Credentials.  Enter the credentials to be used with this login and click Apply.

proxy3

You’ve now configured MOS credentials to work with the main features of EM13c.

Support Identifier Assignment

Under the same location as the one you set up your MOS credentials, you’ll notice the following drop down:  Support Identifier Assignment.

This option allows you to verify and assign CSIs to the targets in Oracle Enterprise Manager.  It’s a nice inventory feature in EM that can save you time as you work with MOS and SR support, too.

proxy4

As you can see from my setup above, I only have a few targets in this EM environment, and I was able to do a search of the CSI that is connected to my MOS credentials and then assign it to each of these targets, (whited out.)  If you have more than one CSI, you can assign the appropriate one to the targets it belongs to after searching by target name or by the target types you wish to locate.

And that’s the 101 on Proxy, MOS and CSI Setup in EM13c!

 

Posted in EM13c, Enterprise Manager Tagged with: , , , ,

April 20th, 2016 by dbakevlar

How much do you know about the big push to BI Publisher reports from Information Publisher reporting in Enterprise Manager 13c?  Be honest now, Pete Sharman is watching…. 🙂


I promise, there won’t be a quiz at the end of this post, but it’s important for everyone to start recognizing the power behind the new reporting strategy.  Pete was the PM over the big push in EM13c and has a great blog post with numerous resource links, so I’ll leave the quizzing to him!

IP Reports are incredibly powerful and I don’t see them going away soon, but they have a lot of limitations, too.  With the “harder” push to BI Publisher with EM13c, users receive a more robust reporting platform that is able to support the functionality that is required of an IT Infrastructure tool.

BI Publisher

You can access the BI Publisher in EM13c from the Enterprise drop down menu-

bippub4

There’s a plethora of reports already built out for you to utilize!  These reports access only the OMR, (Oracle Management Repository) and cover numerous categories:

  • Target information and status
  • Cloud
  • Security
  • Resource and consolidation planning
  • Metrics, incidents and alerting

bipub3

Note: Please be aware that the license for BI Publisher included with Enterprise Manager only covers reporting against the OMR and not any other targets DIRECTLY.  If you decide to build reports against data residing in targets outside the repository, it will need to be licensed for each.

Many of the original reports that were converted over from IP Reports were done so by a wonderful Oracle partner, Blue Medora, who are well known for their VMware plugins for Enterprise Manager.

BI Publisher Interface

Once you click on one of the reports, you’ll be taken from the EM13c interface to the BI Publisher one.  Don’t panic when that screen changes-  it’s supposed to do that.

bipub4

 

You’ll be brought to the Home page, where you’ll have access to your catalog of reports, (it will mirror the reports in the EM13c reporting interface) the ability to create New reports, the ability to open reports that you may have drafts of or that are local to your machine, (not uploaded to the repository) and authentication information.

In the left hand side bar, you have menu options that duplicate some of what is in the top menu, plus access to tips to help you get more acquainted with BI Publisher-

bipub7

This is where you’ll most likely access the catalog, create reports and download local BIP tools to use on your desktop.

Running Standard Reports

 

Running a standard, pre-created report is pretty easy.  This is a report that’s already had the template format created for you and the data sources linked.  Oracle has tried to create a number of reports in the categories it thought most IT departments would need, but let’s just run a few to demonstrate.

Let’s say you want to know about Database Group Health.  Now there’s not a lot connected to my small development environment, (four databases, three in the Oracle Public Cloud and one on-premise) and this is currently aimed at my EM repository.  This limits the results, but as you can see, it shows the current availability, the current number of incidents and compliance violations.

bipub1

We could also take a look at what kinds of targets exist in the Enterprise Manager environment:

bipub11

Or who has powerful privileges in the environment:

bipub10

Now these are just a couple of the dozens of reports available to you that can be run, copied, edited and sourced for your own environment’s reporting needs out of BI Publisher.  I’d definitely recommend that if you haven’t checked out BI Publisher, you spend a little time on it and see how much it can do!

 

 

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 7th, 2016 by dbakevlar

It’s that time of year again and the massive undertaking of the Collaborate conference is upon us.  This yearly conference, a collaboration between Quest, Oracle Applications User Group, (OAUG) and Independent Oracle User Group, (IOUG) is one of the largest conferences in the world for those that specialize in all areas of the Oracle database.

The conference is held in different cities, but recently it’s been sticking to the great destination of Las Vegas, NV.  We’ll be at the Mandalay Bay, which like many casinos, is its own little self-contained city within a city.


The week will be crazy and I’ll start right out, quite busy with my partners in crime, Courtney Llamas and Werner De Gruyter with Sunday’s pre-conference hands on lab. “Everything I Ever Wanted to Know About Enterprise Manager I Learned at Collaborate” was a huge hit last year, so we’re repeating it this year, but we’ve updated it to the newest release, EM13c.  For those that are able to gain a coveted spot in this HOL, it will be a choice event.  We’re going to not just cover the new user interface, but some of the coolest need-to-know features of the new release.

Sunday evening is the Welcome Reception and Awards Ceremony.  This year I’m receiving the Ken Jacobs award for my contributions to the user community as an Oracle employee.  I’m very honored to be receiving this and thank everyone at IOUG for recognizing the importance that even as an Oracle employee, you can do a lot to help make the community great!

Throughout the week, I’ll have a number of technical sessions:

Monday

My Database as a Service session is up first for the week on Monday, 9:15am in Palm B, but I’m going to warn you- since this abstract was submitted very early on, it isn’t as descriptive as I wanted.  Know that this is a DBaaS session and I’ll be covering on-premise, private cloud and even Oracle Public Cloud!  Come learn how easy it can be and forget all those datapump, transportable tablespace and other silly commands people are telling you that you have to run to provision… ☺

Right  after my DBaaS session, 10:15, same room, (Palm B) we’ll have a special session covering the new product that so many of us have put so much energy, time and vision into-  The Oracle Management Cloud, (OMC)!  Read more about this session here.

The Welcome Reception in the Exhibit Hall is from 5:30-8pm.  Don’t miss out on getting there first and see all the cool exhibitors.  I’ll be at the EM13c booth, so come say hi!

Tuesday

So Tuesday morning, the 12th, I’m back in Palm B at noon for the first of my certification sessions, covering 30 minutes of Enterprise Manager 13c New Features.

Wednesday

Wednesday, at noon, I’m back in my favorite room, Palm B to finish the second part of the certification sessions on new features with Enterprise Manager 13c.

I’ll be presenting at Oak Table World at Collaborate at 2pm in the Mandalay Bay Ballroom.  I’ll be doing my newest session on Enterprise Manager 13c and DB12c.  It’s always a great venue when we have Oakies at conferences, and I almost squeaked out of it this year, but was dragged back in at the last minute!

The Enterprise Manager SIG is right afterwards at 3-4  in the South Seas Ballroom E.  This is where we meet and geek out over everything Enterprise Manager, so don’t miss out on that!

Thursday

For the last day, Thursday at 9:45am, I’ll be in- wait for it….  Palm B!  Yes, I know it’s a surprise for both of us, but I’ll be using my experience helping customers Upgrade to Enterprise Manager 13c and sharing it with everyone at Collaborate.  This is another certification session, so collect those certificates and get the most out of your conference!

I’ve made a lot of updates with new material to my slides recently, so I promise to upload my slides to SlideShare after the conference, too!

See you next week in Las Vegas!

 

 

Posted in ASH and AWR, EM13c, Oracle, Oracle Management Cloud Tagged with: , , , ,

April 1st, 2016 by dbakevlar

I get a lot of questions starting with, “Where do I find…” and end with “in the Oracle Management Repository, (OMR)?”


The answer to this is one that most DBAs are going to use, as it’s no different than locating objects in most databases, just a few tricks to remember when working with the OMR.

  1. SYSMAN is the main schema owner you’ll be querying in an OMR, (although there are others, such as SYSMAN_RO.)
  2. Most views you will be engaging with when querying the OMR start with MGMT or MGMT$.
  3. DBA_TAB_COLUMNS is your friend.
  4. Know the power of _GUID and _ID columns in joins.

Using this information, you can answer a lot of questions when trying to figure out a command you’ve seen but don’t have the specific syntax for, and need to know where to get it.
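As a quick illustration of point 4, a join on TARGET_GUID is usually all you need to stitch the SYSMAN views together.  Here’s a sketch assuming the MGMT$TARGET and MGMT$TARGET_PROPERTIES views, (use DBA_TAB_COLUMNS, as in point 3, to confirm the column names in your release):

-- List each database target along with its collected property values
select t.target_name, p.property_name, p.property_value
  from sysman.mgmt$target t
  join sysman.mgmt$target_properties p
    on t.target_guid = p.target_guid
 where t.target_type = 'oracle_database';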

Getting Info

As a working example, someone asked me today how they would locate the platform # used for their version of Linux.  The documentation referred to a command that listed one, but they couldn’t be sure if it was the same one that they were deploying.

So how would we find this?

./emcli <insert command here>
 -platform=?

 

select table_name from dba_tab_columns
where owner='SYSMAN'
and table_name like 'MGMT%'
and column_name='PLATFORM_NAME';

This is going to return 5 rows and trust me, pretty much all of them are going to have the PLATFORM_ID along with that PLATFORM_NAME in one way or another.  There are a few that stand out that, with a little logic, make sense:

TABLE_NAME
--------------------------------------------------------------------------------
MGMT_ARU_PLATFORMS_E
MGMT$ARU_PLATFORMS
MGMT$EM_LMS_ACT_DATA_GUARD_VDB
MGMT_ARU_PLATFORMS
MGMT_CCR_HOST_INFO

SQL> select distinct(platform_name), platform_id from sysman.mgmt$aru_platforms
  2  order by platform_id;

PLATFORM_NAME                            PLATFORM_ID
---------------------------------------- -----------
HP OpenVMS Alpha                                  89
Oracle Solaris on x86 (32-bit)                   173
HP-UX Itanium                                    197
Microsoft Windows Itanium (64-bit)               208
IBM: Linux on System z                           209
IBM S/390 Based Linux (31-bit)                   211
IBM AIX on POWER Systems (64-bit)                212
Linux Itanium                                    214
Linux x86-64                                     226
IBM: Linux on POWER Systems                      227
FreeBSD - x86                                    228

The person who posted the question was looking for the Platform_ID for Linux x86-64, which happens to be 226.

Summary

I’d always recommend checking the views before counting on the data, as some may be held in reserve for plugins or management packs that haven’t been deployed or used yet, but there’s a lot you can find out even if it isn’t in the GUI.

We’re DBAs, we love data and there’s plenty of that in the OMR for EM13c.

 

 

 

 

 

Posted in EM13c Tagged with: ,

March 29th, 2016 by dbakevlar

The Gold Agent Image is going to simplify agent management in EM13c, something a lot of folks are going to appreciate.


The first step to using this new feature is to create an image to be used as your gold agent standard.  This should be the newest, most up to date and patched agent, the one that you would like your other agents to match.

Managing Gold Images

You can access this feature via your cloud control console from the Setup menu, Manage Cloud Control, Gold Agent Images.

If it’s the first time you’re accessing this, you’ll want to click on the Manage All Images button in the middle, right hand side to begin.

The first thing you’ll do is click on Create, which will begin the steps to build out the shell for your gold image.

agent2

The naming convention requires underscores between words and can accept periods, which is great to keep release versions straight.

agent1

Type in a description, choose the Platform, which pulls from your software library and then click Submit.

You’ve now created your first Gold Agent Image for the platform you chose from the drop down before clicking Submit.

agent4
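If you’d rather script this step, the EMCLI has a verb for it as well.  A sketch assuming the create_gold_agent_image verb with a placeholder source agent, (verify the parameters with emcli help create_gold_agent_image on your release):

# Create a gold agent image version from an existing, fully patched agent
$<OMS_HOME>/bin/emcli create_gold_agent_image -image_name="AgentLinux131000" -version_name="V1" -source_agent="host1.us.oracle.com:1832"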

The Gold Agent Dashboard

Now let’s return to Gold Agent Images by clicking on the link on the left hand side of the screen shown above.

As this environment only has one agent to update, it matches what I have in production and says everything is on the gold agent image.

gaig

You may want to know where you go from here- There are a number of ways to manage and use Gold Agent Images for provisioning.  I’ve covered much of it in this post.

You may be less than enthusiastic about all this clicking in the user interface.  We can avoid that by incorporating the Enterprise Manager Command Line Interface, (EMCLI) into the mix.  The following commands can be issued from any host with the EMCLI installed.

Subscribing and Provisioning Via the EMCLI

The syntax to subscribe agents to an existing Gold Agent Image from my example from above to two hosts, would be:

$<OMS_HOME>/bin/emcli subscribe_agents -image_name="AgentLinux131000" -agents="host1.us.oracle.com:1832,host2.us.oracle.com:1832"

Or, if the agents belong to an Admin group, I could subscribe all the agents in the group by running the following command from the EMCLI on the OMS host:

$<OMS_HOME>/bin/emcli subscribe_agents -image_name="AgentLinux131000" -groups="Admin_dev1,Admin_prod1"

The syntax to provision the new gold agent image to a host(s) is:

$<OMS_HOME>/bin/emcli update_agents -gold_image_series="Agent13100" -version_name="V1" -agents="host1.us.oracle.com:1832,host2…"

Statuses of provisioning jobs can be checked via the EMCLI, as can other tasks.  Please see Oracle’s documentation for more cool ways to use the command line with the Gold Agent Image feature!

Posted in EM13c, Enterprise Manager Tagged with: , ,

March 23rd, 2016 by dbakevlar

This issue can be seen in either EM12c or EM13c AWR Warehouse environments.  It occurs when there is an outage on the AWR Warehouse and/or the source database that is to upload to it.


The first indication of the problem is that databases appear not to have uploaded once the environments are back up and running.

awrw5

The best way to see an upload from beginning to end is to highlight the database you want to load manually, (click in the center of the row-  if you click on the database name, you’ll be taken from the AWR Warehouse to the source database’s performance home page.)  Then click on Actions, Upload Snapshots Now.

A job will be submitted and you’ll be aware of it by a notification at the top of the console:

awrw1

Click on the View Job Details and you’ll be taken to the job that will run all steps of the AWR Warehouse ETL-

  1. Inspect what snapshots are required by comparing the metadata table vs. those in the source database.
  2. Perform a datapump export of those snapshots from the AWR schema and update the metadata tables.
  3. Perform an agent to agent push of the file from the source database server to the AWR Warehouse server.
  4. Run the datapump import of the database data into the AWR Warehouse repository, partitioning by DBID, snapshot ID or a combination of both.
  5. Update support tables in the Warehouse showing status and success.

Now note the steps where metadata and successes are updated.  We’re inspecting the job we just submitted, but instead of success, we see the following in the job logs:

awrw2

We can clearly see that the extract, (ETL step on the source database to datapump the AWR data out)  has failed.

Scrolling down to the Output, we can see the detailed log to see the error that was returned on this initial step:

awrw3

ORA-20137: NO NEW SNAPSHOTS TO EXTRACT.

Per the source database, in step 1, where it compares the database snapshot information to the metadata table, it has returned no new snapshots to extract.  The problem is that we know on the AWR Warehouse side, (seen in the alerts in section 3 of the console) there are snapshots that haven’t been uploaded in a timely manner.

How to Troubleshoot

First, let’s verify what the AWR Warehouse believes is the last and latest snapshot that was loaded to the warehouse via the ETL:

Log into the AWR Warehouse via SQL*Plus or SQL Developer and run the following query using the CAW_DBID_MAPPING table, which resides in the DBSNMP schema:

SQL> select target_name, new_dbid from caw_dbid_mapping;
TARGET_NAME
--------------------------------------------------------------------------------
NEW_DBID
----------
DNT.oracle.com
3695123233
cawr
1054384982
emrep
4106115278

and what’s the max snapshot that I have for the database DNT, the one in question?

SQL> select max(dhs.snap_id) from dba_hist_snapshot dhs, caw_dbid_mapping cdm
2 where dhs.dbid=cdm.new_dbid
3 and cdm.target_name='DNT.oracle.com';
MAX(DHS.SNAP_ID)
----------------
501

The Source

These next steps require querying the source database, as we’ve already verified the latest snapshot in the AWR Warehouse, and the error occurred in the source environment at that step in the ETL process.

Log into the database using SQL*Plus or another query tool.

We will again need privileges to the DBSNMP schema and the DBA_HIST views.

SQL> select table_name from dba_tables
where owner='DBSNMP' and table_name like 'CAW%';
TABLE_NAME
--------------------------------------------------------------------------------
CAW_EXTRACT_PROPERTIES
CAW_EXTRACT_METADATA

These are the two tables that hold information about the AWR Warehouse ETL process in the source database.

There are a number of ways we could inspect the extract data, but the first thing we’ll do is get the last load information from the metadata table, which will tell us the most recent snapshots the ETL believes it has extracted:

SQL> select begin_snap_id, end_snap_id, start_time, end_time, filename
from caw_extract_metadata 
where extract_id=(select max(extract_id) 
from caw_extract_metadata);
502 524
23-MAR-16 10.43.14.024255 AM
23-MAR-16 10.44.27.319536 AM
1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

So we can see that per the metadata table, the ETL BELIEVES it’s already loaded the snapshots from 502-524.
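If you want to see the gap directly, a quick comparison of the AWR high water mark on the source against what the extract metadata claims has been handled makes it obvious, (a sketch using the tables already referenced above - run it on the source database):

-- Highest snapshot in AWR vs. highest snapshot the ETL metadata has recorded
select (select max(snap_id) from dba_hist_snapshot) max_awr_snap,
       (select max(end_snap_id) from dbsnmp.caw_extract_metadata) max_extracted_snap
from dual;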

We’ll now query the PROPERTIES table that tells us where our dump files are EXTRACTED TO:

SQL> select * from caw_extract_properties
 2 where property_name='dump_dir_1';
dump_dir_1
/u01/app/oracle/product/agent12c/agent_inst
ls /u01/app/oracle/product/agent12c/agent_inst/*.dmp
1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

So here is our problem.  We have a dump file that was created, but the agent to agent push and the load to the AWR Warehouse never happened.  Since the metadata table on the source was already updated with these rows, the extract now believes there is nothing new and fails to load them.

Steps to Correct

  1. Clean up the dump file from the datapump directory
  2. Update the METADATA table
  3. Rerun the job
cd /u01/app/oracle/product/agent12c/agent_inst
rm 1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

Note: You can also choose to rename the file’s extension if you wish to retain it until you are comfortable that everything is loading successfully, but be aware of size constraints in your $AGENT_HOME directory.  I’ve seen issues due to space constraints.

Log into the database and remove the latest row update in the metadata table:

select extract_id from caw_extract_metadata
where begin_snap_id=502 and end_snap_id=524;
101
delete from caw_extract_metadata where extract_id=101;
1 row deleted.
commit;

Log into your AWR Warehouse dashboard and run the manual Upload Snapshots Now for the database again.

awrw4

Posted in AWR Warehouse, EM13c, Enterprise Manager Tagged with: , , ,

March 21st, 2016 by dbakevlar

I appreciate killing two birds with one stone.  I’m all about efficiency and if I can satisfy more than one task with a simple, productive process, then I’m going to do it.  Today, I’m about to:

  1. Show you why you should have a backup copy of previous agent software and how to do this.
  2. Create a documented process to restore previous images of an agent to a target host.
  3. Create the content section for the Collaborate HOL on Gold Images and make it reproducible.
  4. Create a customer demonstration of Gold Agent Image
  5. AND publish a blog post on how to do it all.


I have a pristine Enterprise Manager 13c environment that I’m working in.  To “pollute” it with a 12.1.0.5 or earlier agent seems against what anyone would want to do in a real world EM, but there may very well be reasons for having to do so:

  1.  A plugin or bug in the EM13c agent requires a previous agent version to be deployed.
  2. A customer wants to see a demo of the EM13c gold agent image and this would require a host being monitored by an older, 12c agent.

Retaining Previous Agent Copies

It would appear to be a simple process.  Let’s say you have the older version of the agent you wish to deploy in your software repository.  You can access the software versions in your software library by clicking on Setup, Extensibility, Self-Update.

extensibl1

Agent Software is the first in our list, so it’s already highlighted, but otherwise, click in the center of the row, where there’s no link and then click on Actions and Open to access the details on what Agent Software you have downloaded to your Software Library.

If you scroll down, considering all the versions of agent there are available, you can see that the 12.1.0.5 agent for Linux is already in the software library.  If we try to deploy it from Cloud Control, we notice that no version is offered, only platform, which means the latest, 13.1.0.0.0 will be deployed, but what if we want to deploy an earlier one?

Silent Deploy of an Agent

The Enterprise Manager Command Line Interface, (EMCLI) offers us a lot more control over what we can request, so let’s try to use the agent from the command line.

Log into the CLI from the OMS host, (or another host with EMCLI installed.)

[oracle@em12 bin]$ ./emcli login -username=sysman
Enter password :
Login successful

First get the information about the agents that are stored in the software library:

[oracle@em12 bin]$ ./emcli get_supportedplatforms
Error: The command name "get_supportedplatforms" is not a recognized command.
Run the "help" command for a list of recognized commands.
You may also need to run the "sync" command to synchronize with the current OMS.
[oracle@em12 bin]$ ./emcli get_supported_platforms
-----------------------------------------------
Version = 12.1.0.5.0
 Platform = Linux x86-64
-----------------------------------------------
Version = 13.1.0.0.0
 Platform = Linux x86-64
-----------------------------------------------
Platforms list displayed successfully.

I already have the 13.1.0.0.0 version.  I want to export the 12.1.0.5.0 to a zip file to be deployed elsewhere:

[oracle@em12 bin]$ ./emcli get_agentimage -destination=/home/oracle/125 -platform="Platform = Linux x86-64" -version=12.1.0.5.0
ERROR:You cannot retrieve an agent image lower than 13.1.0.0.0. Only retrieving an agent image of 13.1.0.0.0 or higher is supported by this command.

OK, so much for that idea!

So what have we learned here?  Use this process to “export” a copy of your previous version of the agent software BEFORE upgrading Enterprise Manager to a new version.

Now, lucky for me, I have multiple EM environments and had an EM 12.1.0.5 environment to export the agent software from, using the steps that I outlined above.  I’ve SCP’d it over to the EM13c host to use for deployment and will retain that copy for future endeavors, but remember, we just took care of task number one on our list.

  1.  Show you why you should have a backup copy of previous agent software and how to do this.

Silent Deploy of Previous Agent Software

If we look in our folder, we can see our zip file:

[oracle@osclxc ~]$ ls
12.1.0.5.0_AgentCore_226.zip
20299023
p20299023_121020_Linux-x86-64.zip
p6880880_121010_Linux-x86-64.zip

I’ve already copied it over to the folder I’ll deploy from:

scp 12.1.0.5.0_AgentCore_226.zip oracle@host3.oracle.com:/home/oracle/.

Now I need to unzip it and update the entries in the response file, (agent.rsp):

OMS_HOST=OMShostname.oracle.com
EM_UPLOAD_PORT=4890                       # get this from running: emctl status oms -details
AGENT_REGISTRATION_PASSWORD=<password>    # you can set a new one in the EMCC if you don't know it
AGENT_INSTANCE_HOME=/u01/app/oracle/product/agent12c
AGENT_PORT=3872
b_startAgent=true
ORACLE_HOSTNAME=host.oracle.com
s_agentHomeName=<display name for target>

Now run the shell script, including the argument to ignore the version prerequisite, along with our response file:

$./agentDeploy.sh -ignorePrereqs AGENT_BASE_DIR=/u01/app/oracle/product RESPONSE_FILE=/home/oracle/agent.rsp

The script should deploy the agent successfully, resulting in the following output at the end of the run:

Agent Configuration completed successfully
The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
 /u01/app/oracle/core/12.1.0.5.0/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
Agent Deployment Successful.

Check that an upload is possible and check the status:

[oracle@fs3 bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 12.1.0.5.0
OMS Version : 13.1.0.0.0
Protocol Version : 12.1.0.1.0
Agent Home : /u01/app/oracle/product/agent12c
Agent Log Directory : /u01/app/oracle/product/agent12c/sysman/log
Agent Binaries : /u01/app/oracle/product/core/12.1.0.5.0
Agent Process ID : 2698
Parent Process ID : 2630
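To confirm the upload itself, (the status output above only shows the agent is up and pointed at the right OMS) you can force one from the same agent home:

[oracle@fs3 bin]$ ./emctl upload agent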

You should see your host in your EM13c environment now.

fs31

OK, that takes care of Number two task:

2.  Create a documented process to restore previous images of an agent to a target host.

Using a Gold Agent Image

From here, we can then demonstrate the EM13c Gold Agent Image effectively.  Click on Setup, Manage Cloud Control, Gold Agent Image:

Now, I’ve already created a Gold Agent Image in this post.  It’s time to manage subscriptions-  you’ll see the link at the center of the page, to the right side.  Click on it, and then subscribe hosts by clicking on “Subscribe” and adding them to the list, (by using the shift or ctrl key, you can choose more than one at a time.)

gai1

As you can see, I’ve added all my agents to the Gold Agent Image as subscriptions, and now it will go through, check the version and add each one to be managed by the Gold Agent Image.  This includes my new host on the 12.1.0.5.0 agent.  Keep in mind that a blackout is part of this process for each of these agents as they’re added, so be aware of this step as you refresh and monitor the additions.

Once the added host(s) update to show that they’re now available for update, click on the agent you wish to update, (you can even choose one that’s already on the current version…) and click on Update, Current Version.  This will use the current version gold image it’s subscribed to and deploy it via an EM job-

agent_upd

The job will run for a period of time as it checks everything out, deploys the software and updates the agent, including a blackout so as not to alarm everyone as you work on this task.  Once complete, the agent will be upgraded to the same release as the gold agent image you created!

gaig

Well, with that step, I believe I’ve taken care of the next three items on my list!  If you’d like to know more about Gold Agent Images, outside of the scenic route I took you on today, check out the Oracle documentation.

Posted in EM13c, Oracle Tagged with: , ,

March 1st, 2016 by dbakevlar

With EM13c, DBaaS has never been easier.  No matter if your solution is on-premise, hybrid, (on-premise to the cloud and back) or all cloud, you’ll find that it eases the DevOps challenges and the demands placed on you as the DBA, who is so often viewed as the source of the contention.


On-Premise Cloning

In EM13c, on-premise clones are built in by default and are easier to manage than they were before.  The one pre-requisite I’ll ask of you is that you set up your database and host preferred credentials for the location you’ll be creating any databases in.  After logging into the EMCC and going to our Database Home Page, we can choose a database that we’d like to clone.  There are a number of different kinds of clones-

  • Full Clones from RMAN Backups, standby, etc.
  • Thin Clones with or without a test master database
  • CloneDB for DB12c

For this example, we’ll take advantage of a thin clone, so a little setup will be in order, but as you’ll see, it’s so easy, that it’s just crazy not to take advantage of the space savings that can be offered with a thin clone.

What is a Thin Clone?

A thin clone is a virtual copy of a database that, in DevOps terms, uses a test master database, (a full copy of the source database) as a “conduit” to create an unlimited number of thin clone databases, saving up to 90% of the storage that a separate full clone for each would need.

testmaster

One of the cool features of a test master is that you can perform data masking on the test master so that there is no release of sensitive production data to the clones.  You also have the ability to rewind-  in other words, let’s say a tester is doing some high risk testing on a thin clone and gets to a point of no return.  Instead of asking for a new clone, they can simply rewind to a snapshot in time before the problem occurred.  Very cool stuff…. 🙂

Creating a Test Master Database

From our list of databases in cloud control, we can right click on the database that we want to clone and proceed to create a test master database for it:

clone2

The wizard will take us through the steps required to create the test master properly.  This test master will reside on an on-premise host, so there’s no need for a cloud resource pool.

clone3

As stated earlier, it will pay off if you have your logins set up as preferred credentials.  The wizard will allow you to set those up as “New” credentials, but if there is a failure and they aren’t tested and true, it’s nice to know you already have this out of the way.

Below the Credentials section, you can decide at what point you want to recover from.  It can be at the time the job is deployed or from a point in time.

You have the choice to name your database anything.  I left the default, using the naming convention based off the source, with the addition of tm, for Test Master and the number 1.   If this was a standard database, you might want to make it a RAC or RAC one node.

Then comes the storage.  As this is on-premise, I chose the same Oracle Home that I’m using for another database on the nyc host and used the same preferred credentials for normal database operations.  You would want to place your test master database on storage separate from your production database so as not to create a performance impact.

clone4

The default location for storage of datafiles is offered, but I do have the opportunity to use OFA or ASM for my options.  I can set up Flashback, too.  Whatever listeners are discovered for the host will be offered up, and then I can decide on a security model.  Set up the password model that best suits your policies and, if you have a larger database to clone, you may want to increase the number of parallel threads that will be used to create the test master database.  I always caution those that would attempt to max the number out, thinking more means better.  Parallel can be throttled by a number of factors and those should be taken into consideration.  You will find with practice that there’s a “sweet spot” for this setting.  In your environment, 8 may be the magic number due to network bandwidth or IO resource limitations.  You may find it can be as high as 32, but do take some time to test and get to know your environment.

clone5

Now come the spfile settings.  You control these; although the default spfile for a test master is used here, for a standard clone you may want to update the settings to limit the resources allocated to a test or development copy.

If you have special scripts that were run as part of your old manual cloning process, you can still add them here, both BEFORE and AFTER the clone.  For SQL scripts, you also need to specify the database user to run them as.

If you started a standard clone and meant to create a test master database, no fear!  You still have the opportunity to change it into a Test Master at this step, and you can create a profile to add to your catalog if you realize this would be a nice clone process to make repeatable.

clone7

The EM job that will create the clone is the next step.  You can choose to run it immediately and decide what kind of notifications you'd like to receive via your EM profile, (remember, the user logged into the EMCC to create this clone is the one whose notification settings will be used….)  You can also choose to perform the clone later.

clone8

The scheduling feature is simple to use, allowing you to choose the date and time when the clone job will run with the least impact on your environment.

clone9

Next, review the options you’ve chosen and if satisfied, click on Clone.  If not, click on Back and change any options that didn’t meet your expectations.

If you chose to run the job immediately, the progress dashboard will be brought up after clicking Clone.

clone10

Procedure Activity is just another term for an EM job, and you'll find this job listed in Job Activity.  It's easier to watch the progress from here; as checkmarks appear in the right-hand column, each step for your test master or clone has completed successfully.

Once the clone is complete, remember that this new database is not automatically monitored by EM13c unless you've set up Automatic Discovery and Automatic Promotion.  If not, you'll need to discover it manually, which you can do by following this blog post.  Also keep in mind that you need to wait until the clone is finished so you can set the DBSNMP user status to unlocked/open and ensure the password is secure.
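For reference, here's what that post-clone DBSNMP step might look like from the command line.  This is only a sketch; the SID and password below are placeholders for your own values:

# Run on the clone host once the clone job has finished.
# The SID and password are placeholders - substitute your own.
export ORACLE_SID=npstm1
sqlplus -s / as sysdba <<'EOF'
ALTER USER dbsnmp ACCOUNT UNLOCK;
ALTER USER dbsnmp IDENTIFIED BY "Str0ng_Passw0rd_Here";
EOF

Once DBSNMP is open and secured, manual discovery can proceed and the new test master shows up as a monitored target.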

Now that we’ve created our test master database, in the next post, we’ll create a thin clone.

 

Posted in DBaaS, EM13c, Oracle Tagged with: , , ,

February 23rd, 2016 by dbakevlar

Let's say you're on call and you're woken from a deep, delightful sleep by the pager, stating that Enterprise Manager Cloud Control isn't available.

whywakeme

You log into the host and check the status; it tells you that the Weblogic server is up and everything else is down.  The host logs show that the servers were restarted unexpectedly, so you want a clean shutdown before bringing Enterprise Manager back up.  You shut it down and then attempt a clean start:

$ ./emctl start oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting Oracle Management Server...
WebTier Could Not Be Started.
Error Occurred: WebTier Could Not Be Started.
Please check /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log/emctl.log for 
error details

Well, of course you’re going to follow the recommendations and look at the log for errors!

2016-02-22 00:18:32,342 [main] INFO ctrl_extn.EmctlCtrlExtnLoader logp.251 - 
2016-02-22 00:18:32,342 [main] INFO ctrl_extn.EmctlCtrlExtnLoader logp.251 -
2016-02-22 00:18:32,360 [main] INFO ctrl_extn.EmctlCtrlExtnLoader logp.251 - 
Connection refused
2016-02-22 00:18:32,360 [main] INFO commands.BaseCommand printMessage.
426 - extensible_sample rsp is 1 message is JVMD Engine is Down

The error we notice is that the connection is refused.  This is odd, and it really doesn't give us a lot to go on.  Logs are our friends, but this time we're going to move from a log to a message file that may assist us further: the emctl.msg file.  There's not a lot of data in this message file, but the health monitor does direct us to what we need:

HealthMonitor Feb 22, 2016 12:18:32 AM JobDispatcher error: Could
not connect to repository
/u01/app/oracle/gc_inst/user_projects/domains/GCDomain/servers/EMGC_OMS1/
logs/EMGC_OMS1.out

This points us to the Weblogic domain log directory and to an output file that will offer us the insight we need:

view /u01/app/oracle/gc_inst/user_projects/domains/GCDomain/servers/
EMGC_OMS1/logs/EMGC_OMS1.out

Unlike the message file, this output file contains a LOT of information, and there are some really cool Node Manager entries in here to be aware of, including environment paths used for each service/process, usernames used for logging in, IP addresses, resource allocation, ports used AND process information.

Sure enough though, if we view the out file that coincides with the log entries, we see the following:

<Feb 22, 2016 00:18:32 AM PST> <INFO> <NodeManager> <The server 'EMGC_OMS1' 
with process id 3016 is no longer alive; waiting for the process 
to die.>

These errors are due to the inability of the OMS, (Oracle Management Service) and JVMD to connect to the Weblogic tier processes.  Even after a clean shutdown and restart, the OMS still can't spawn the Weblogic processes, because secondary processes left over from the unexpected restart are still hanging around.

There are a couple of ways to look for these old processes and clean them up:

  1.  If your EM environment runs as a separate OS user, grep for that user to identify the processes and kill them, (see the sketch just after this list.)
  2. Look at the path of the middleware home to identify something unique to grep for: /u01/app/oracle/13c/ohs/bin/, (in this case, 13c and ohs are our best bet.)
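For the first option, a sketch along these lines does the trick, assuming the stack runs under its own OS account; the username em13c below is purely illustrative:

# List every process owned by a dedicated EM OS account (illustrative name).
$ pgrep -l -u em13c
$ ps -fu em13c

In this environment the install shares the oracle account with the repository database and agent, so the middleware path from option 2 is the more reliable handle.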

Perform the search for the processes:

$ ps -ef | grep 13c
oracle 3016 1 0 Jan28 ? 00:02:38 /u01/app/oracle/13c/ohs/bin/httpd.worker 
oracle 3019 3016 0 Jan28 ? 00:01:27 /u01/app/oracle/13c/ohs/bin/odl_rotatelogs  
oracle 3020 3016 0 Jan28 ? 00:01:10 /u01/app/oracle/13c/ohs/bin/odl_rotatelogs

Bad, BAD Web Tier!  Look at you leaving all those orphan processes after the reboot! The easiest way to address this is to kill these processes manually, but make sure you kill only these and not your Oracle Management Repository, (the database), agent or other processes such as the listener.  You are looking for the OHS or Weblogic processes here.

Note that these processes are running with a start date of Jan. 28th, even though all of EM is down.  By killing the parent, 3016, I'll remove the other two child processes as well, but it's always good to check whether any were orphaned.

$ kill -9 3016
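Before restarting anything, it's worth a quick check that the children went down with the parent and nothing was re-parented to init; a minimal sketch using the same middleware path:

# Confirm no Web Tier processes survived the kill - this should return nothing.
$ ps -ef | grep '/u01/app/oracle/13c/ohs/' | grep -v grep
# Any survivor will now show a parent PID of 1 and can be killed individually.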

Once I’ve verified that everything is clean and no orphaned processes for the EM tiers exist, restart Enterprise Manager:

$ ./emctl start oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting Oracle Management Server...
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server ...
BI Publisher Server Successfully Started
BI Publisher Server is Up

All happy again!

The Weblogic log directory holds historical information, too, so don't despair if you're looking into something that happened before the last restart.  Eight out files are retained in total: the current one carries the plain .out extension, while the rotated copies are numbered, counting back to .out00001 for the oldest.

$ ls -ltr *.out*
-rw-r----- 1 oracle dba 28261 Jan 12 14:30 EMGC_OMS1.out00001
-rw-r----- 1 oracle dba 5120147 Jan 19 06:31 EMGC_OMS1.out00002
-rw-r----- 1 oracle dba 2568825 Jan 22 23:16 EMGC_OMS1.out00003
-rw-r----- 1 oracle dba 25591 Jan 22 23:16 EMGC_OMS1.out00004
-rw-r----- 1 oracle dba 5121593 Feb 4 10:16 EMGC_OMS1.out00005
-rw-r----- 1 oracle dba 4215122 Feb 10 13:21 EMGC_OMS1.out00006
-rw-r----- 1 oracle dba 224605 Feb 22 00:52 EMGC_OMS1.out00007
-rw-r----- 1 oracle dba 233190 Feb 22 16:00 EMGC_OMS1.out
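Since the rotated copies sit side by side in the same directory, it's easy to chase a message across every restart in one pass, for example:

# Which output files, current or rotated, mention the Node Manager message?
$ cd /u01/app/oracle/gc_inst/user_projects/domains/GCDomain/servers/EMGC_OMS1/logs
$ grep -l "no longer alive" EMGC_OMS1.out*
# And the matching lines themselves, with line numbers:
$ grep -n "no longer alive" EMGC_OMS1.out*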

 

Posted in EM13c, Enterprise Manager Tagged with: , ,

February 18th, 2016 by dbakevlar

This question was posted on Twitter by @matvarsh30, who asked, "How can I display CPU usage over different periods of time for databases in Enterprise Manager?"

Everyone loves their trusty Top Activity, but the product's functionality is limited when it comes to custom views, and that is what our user had run into.  There are numerous ways to display this data, but I'm going to focus on one of my favorite features in the product, the one created to replace Top Activity: ASH Analytics.

Retaining AWR Data for ASH Analytics

Note: This process to display CPU graphs will work for EM12c and EM13c.  Other than the location of the target menu, not much else has changed.

The default display covers one hour, and because ASH Analytics depends on AWR data, viewing the default 8 days of detail is easy, but it's important to set the retention in the source, (target) databases appropriately to ensure you're able to view and research beyond AWR's default 8-day retention.  I am a firm believer that if you have the Diagnostics and Tuning packs for your EE databases, you should get the most out of these tools and increase the retention time from the default by running the following command via SQL*Plus with the appropriate privileges:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 86400,        -- In minutes; 86400 minutes is 60 days
    interval  => 30);          -- In minutes; the default of 60 is usually sufficient, a shorter interval like 30 is mainly useful for workload tests
END;
/

Now you have not just the EM metric data that rolls up, but also the AWR data that ASH Analytics needs for deep analysis and reporting.
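To confirm the repository picked up the new settings, a quick check of the workload repository control view does the job; a minimal sketch:

# Verify the current AWR snapshot interval and retention on the target database.
sqlplus -s / as sysdba <<'EOF'
SELECT snap_interval, retention
FROM   dba_hist_wr_control;
EOF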

With this in place, let’s create some graphs that answer the question – “How do I display CPU usage for a database over one week, 10 days, 30 days, 60 days?”

Default to Power User View

Once logged into any database target, you can access ASH Analytics from the Performance drop down.  If you have an 11g or earlier database, you may have to install the package that creates the EMCC views, but that one-time step is required to utilize this powerful tool in Enterprise Manager.  ASH Analytics works with database versions 10.2.0.4.0 and above.

Opening ASH Analytics displays data for the instance for the last hour; to change to a one-week view, simply click on "Week" and then stretch the selection window on the bottom graph to encompass the entire week:

ashan1

Using this example, you can see that I'm now showing a graph similar to Top Activity, but for a whole week and without the aggregation that Top Activity sometimes suffers from.

ashan2

We're not going to stick with this view, though.  Leaving the graph on "Activity", click on the Wait Class selector, go to Resource Consumption and choose Wait Event, (it's set to Wait Class by default.)

As you can see on the right side, there is an overlap in the legend that needs to be fixed, (I’ll submit an ER for it, I promise!)  but luckily, we’re focused on CPU and we all know, in EM, CPU is green!

ashan4

When we hover over the green, it turns bright yellow.  Once you've highlighted it with your cursor, double click to choose it, and the graph will update to display only CPU:

ashan5

You now possess a graph that displays all CPU usage over the last week vs. total activity across wait classes.  You can see the overall percentage of activity in the table on the left-hand side, and the bottom right even breaks it down by top user sessions.  You can also see the total CPU cores for the machine, which offers a clear perspective on how CPU resources are being used.
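If you ever want to sanity-check what the graph is showing, a rough equivalent can be pulled straight from the AWR ASH history.  This is only a sketch, and it assumes the Diagnostics Pack licensing already discussed:

# Approximate CPU seconds per day over the last week from AWR's ASH samples
# (each DBA_HIST sample represents roughly 10 seconds of activity).
sqlplus -s / as sysdba <<'EOF'
SELECT TRUNC(sample_time)  AS sample_day,
       COUNT(*) * 10       AS approx_cpu_seconds
FROM   dba_hist_active_sess_history
WHERE  session_state = 'ON CPU'
AND    sample_time  >= SYSDATE - 7
GROUP  BY TRUNC(sample_time)
ORDER  BY sample_day;
EOF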

Now you may want to see this data without the “white noise”.  We can uncheck the “Total Activity” box to remove this information and only display CPU:

ashan6

We could also choose to remove the CPU Cores and just display what is being used:

ashan7

By removing the top core line, we see the usage patterns and the cores actually consumed much more clearly.  We could also decide to view all the wait classes again, without the CPU Cores or Total Activity.  The only drawback is the overlap in the legend, (I so hate this bug in the browser display….)

ashan8

Now, as requested, how would you do this for 10, 30 and 60 days?  As shown in the top view, you're offered views by hour, day, week, month and custom.  Since many months have 31 days, you may choose the custom view for all three of those requests, and a custom request is quite simple:

ashan9

Yep, just put in the dates you want and click OK.  If you've already stretched the selection window from beginning to end on the lower view, don't be surprised if it retains that selection and shows you all the data, provided, of course, your database was actually active during that time… 🙂  The database I chose, being from one of our Oracle test environments, was pretty inactive during the Christmas and January period.

ashan10

And that's how you create custom CPU activity reports with ASH Analytics in Enterprise Manager!

 

Posted in ASH and AWR, EM12c Performance, EM13c, Enterprise Manager Tagged with: , , , , ,

February 17th, 2016 by dbakevlar

There was a question posted on the Oracle-l forum today that deserves a blog post for easy lookup.  It concerns your Enterprise Manager repository database, (aka the OMR.)  This database has a restricted-use license, which means you can use it for the Enterprise Manager repository, but you can't add Partitioning, RAC or Data Guard features without licensing them.  You also can't use the Diagnostics and Tuning pack features available in Enterprise Manager against the repository database, outside of the EMDiagnostics tool, without licensing them.  You can view information about the license that comes with the OMR here.

No one wants to be open to an audit or have a surprise when inspecting what management packs they’re using.

horrified

To view which management packs you're using for any given EMCC page, you can use the console and access the check from the Setup menu in EM12c or EM13c:

mgmt_pack

With that said, Hans Forbrich made a very valuable addition to the thread, explaining how to disable management pack access in your OMR database.

Run the following to disable it via SQL*Plus as SYSDBA:

ALTER SYSTEM SET CONTROL_MANAGEMENT_PACK_ACCESS='NONE' scope=BOTH;
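A quick way to check the setting before and after the change, for example:

# Show the current management pack access setting on the repository database.
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER control_management_pack_access
EOF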

Other packs are disabled in the EM Cloud Control console, with the appropriate privileges, from the Setup menu in 12.1.0.4, (with a patch) or higher:

mgnt_pck_acs

The view can be changed from licensed databases to all databases; from there you can go through, adjust the management packs to match what you've licensed, and then apply.

mngt_acs_2

Don't leave yourself open to an audit when Enterprise Manager makes it easy to manage which management packs you're accessing.

Posted in ASH and AWR, Database, EM13c, Enterprise Manager, Oracle Tagged with: , ,
