Category: Oracle

July 11th, 2016 by dbakevlar

Delphix Express offers a virtual environment to work with all the cool features like data virtualization and data masking on just a workstation or even a laptop.  The product has an immense offering, so no matter how hard Kyle, Adam and the other folks worked on this labor of love, there are bound to be some manual configurations required to ensure you get the most from the product.  This is where I thought I’d help and offer a virtual hug to go along with the virtual images…:)


If you’re already set on installing and working with Delphix Express, you will find the following Vimeo videos, importing the VMs and configuring Delphix Express, quite helpful. Adam Bowen did a great job with these videos to get you started, but below I’ll go through some technical details a bit deeper to give folks added arsenal in case they’ve missed a step or are challenged just starting out with VMWare.

Note- Delphix Express requires VMWare Fusion, which you can download after purchasing a license ($79.99), but it’s well worth the investment.

Resource Usage

Issue- Not enough memory to run all three VMs required as part of Delphix Express, (after an upgrade, Delphix Express uses over 6GB.)

Different laptops/workstations have different amounts of memory, CPU and space available.  Memory is the most common constraint with today’s PCs.  Although the VMs are configured for optimal performance, the target and source environments can have their memory trimmed to 2GB each and still perform when resources are constrained.

The VM must be shut down for this configuration change to be implemented.  After stopping or before starting the VM, click on Virtual Machine, Settings.  Click on Processors and Memory and then you can configure the memory usage via a slider option as seen below:


Move the slider to 2GB or under for the VM in question, then close the configuration window and start the VM.  Perform this for each VM, (the Delphix Engine VM should already be at 2GB.)
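If you’d rather confirm the allocation from the command line, VMWare records it in the VM’s .vmx file as the memsize setting, in MB.  A quick sketch against a stand-in file, (the path and values here are made up for illustration):

```shell
# a minimal stand-in for a VM's .vmx configuration file
cat > /tmp/sample.vmx <<'EOF'
.encoding = "UTF-8"
displayName = "Delphix Target"
memsize = "2048"
numvcpus = "2"
EOF

# check the configured memory, in MB (2048 = 2GB)
grep '^memsize' /tmp/sample.vmx
```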


Issue- Population of sources and targets is empty after successful configuration.

Checking the Log

After starting the target and source VMs, a console with a command line login opens, and you can log in right from VMWare.  VirtualBox would require a terminal opened to the desktop, but either way, you can get to the command line without using Putty or another terminal from your workstation.

On the target VM command line, log in as the delphix user.  The target VM has a python script that runs in the background upon startup and checks for a delphix engine once every minute; if it locates one, it will run the configuration.  You can view this in the crontab:

crontab -l
@reboot ..... /home/delphix/

It writes to the following log file:


You can view this file however you’re comfortable viewing the end of a file, (tail it, cat it, etc.)  I prefer to view just the last ten lines, so I’ll run:

tail -10 landshark_setup.log
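If you’d like to see exactly what that gives you, here’s a stand-in example, (a scratch file in place of landshark_setup.log, since the real one lives on the target VM):

```shell
# build a 20-line scratch file standing in for landshark_setup.log
seq 1 20 | sed 's/^/setup step /' > /tmp/landshark_demo.log

# view only the last ten lines, just as you would with the real log
tail -10 /tmp/landshark_demo.log
```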

If the configuration is having issues locating the Delphix engine, it will show in this log file.  Once confirmed, we have a couple of things to check:

The first is a VMWare issue where one of the virtual machines isn’t visible to the others.  Each VM needs to be able to communicate and interact with the others.  When importing each VM, the step that makes the VM “host aware” with the Mac may not have occurred.  If the delphix engine VM isn’t viewable to the target or the source, you can check the log and then verify in the following way.

Click on Virtual Machine, Settings and then click on Network Adapter.  Verify that the top radio option is selected for “Share with my Mac”:


Verify that this is configured for EACH of the three virtual machines involved.  If this hasn’t corrected the issue and the configuration still doesn’t populate the virtual environments in the Delphix interface, then it’s time to look at the configuration for the target machine.

Get IP Address

While SSH connected to the target machine, type in the following:

ifconfig
Use the IP address shown, (the inet addr value) and open a browser on your PC, adding the port used for the target configuration file, (port 8000 by default):

http://<target IP address>:8000
You should be shown the configuration file for your target server that is used to run the delphix engine configuration.  There are options to update the values for different parameters.  The ones you should focus on are:


linux_source_ip= make sure this matches the source VM’s ip address when you type in “ifconfig”.


engine_address= ip address for the delphix engine VM when you type in ifconfig on the host

engine_password= should match the password you set for delphix_admin when you went through the configuration.  Update it to match if it doesn’t- I’ve seen some folks not set it to “landshark” as demonstrated in the videos, so of course the setup fails when the file doesn’t match the password set by the user.


oracle_xe = If you set oracle_xe to true, then don’t set the 11g or 12c options to true.  To conserve workstation resources, choose only one database type.
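If you want to double-check those IP values from a script, you can filter the ifconfig output down to just the inet address.  A quick sketch against a sample block, (the address here is made up for illustration):

```shell
# a sample of the ifconfig output you're matching against (address is made up)
sample='eth0   Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:CC
          inet addr:172.16.180.130  Bcast:172.16.180.255  Mask:255.255.255.0'

# pull out just the IP to compare against linux_source_ip or engine_address
echo "$sample" | grep -o 'inet addr:[0-9.]*' | cut -d: -f2
```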

Once you’ve made all the changes you want to the page, click on Submit Changes.

Screen Shot 2016-07-11 at 3.37.21 PM

You need to run the reconfiguration manually now.  Remember, this runs in the background each minute, but when it does, you can’t see what’s going on, so I recommend killing the running process and running it manually.

Manual Runs of the Landshark Setup

From the target host, type in the following:

ps -ef | grep landshark_setup

Kill the running processes, using the process IDs from the grep output:

kill -9 <process ID>
Check for any running processes, just to be safe:

ps -ef | grep landshark_setup
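If more than one copy turns up, you can find and kill them all in a single pipeline.  Here’s a sketch using a dummy sleep process in place of landshark_setup, so it’s safe to try anywhere:

```shell
# start a dummy background process standing in for landshark_setup
sleep 300 &

# the [s]leep trick makes the pattern match the process but not the grep itself
ps -ef | grep "[s]leep 300"

# kill every matching PID in one pass (-r tells GNU xargs to skip kill if nothing matched)
ps -ef | grep "[s]leep 300" | awk '{print $2}' | xargs -r kill
```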

Once you’ve confirmed that none are running, let’s run the script manually from the delphix user home:


Verify that the configuration runs, monitoring it as it works through each step:


Since this is the first time you’ve performed these steps, expect that a refresh won’t be performed, but a creation will.  You should now see the left panel of your Delphix Engine UI populated:


Now we’ve come to the completion of the initial configuration.  In my next post on Delphix Express, I’ll discuss the dSource and target database configurations for different target types.  Working with these files and configurations is great practice for learning about Delphix, even if you’re amazed at how easy this all was.

If you want to see more about Delphix Express, check out the following links from Kyle Hailey or our very own Oracle Alchemist, Steve Karam….:)







Posted in Delphix, Delphix Express, Oracle Tagged with: ,

June 6th, 2016 by dbakevlar

How many times have you had maintenance or a release complete, with everyone sure that everything’s been put back the way it should be, all t’s crossed, all i’s dotted, and then you release it to the customers only to find out that NOPE, something was forgotten in the moving parts of technology?  As the Database Administrator, you can do a bit of CYA and not be the one who has to say-


Having the ability to compare targets is a powerful feature in Enterprise Manager 13c, (and 12c, don’t feel left out there…:))  The comparison feature is the first of three options that encompass Configuration Management-


Building Comparison Baselines

Upon entering Configuration Management in EM13c, you will be offered the option to create a one-time comparison, or use a pre-existing comparison as your base.  You can access the Configuration Management utility via the Enterprise drop down in EM13c Cloud Control:


For our example today, due to the small environment I possess for testing, we’re going to compare two database targets.  The Configuration Management utility is fantastic at comparing targets to see if changes have occurred, and I recommend collecting “baseline” templates to have available for this purpose, but know that the tool can also perform other comparisons, such as:

  • Investigating a change in a target.
  • Researching drift or consistency.

Maintenance Use Case

For our example today, we’re going to be working on a CDB; then, as the “Lead DBA”, we’ll discover changes that weren’t reverted as part of our maintenance, using the Configuration Management Comparison tool.

We first need to set up the comparison “baseline”; to do this, I’m going to make a copy of the default template, the Database Instance Template.  It’s just good practice to make copies and leave the locked templates intact, in case we later find areas we need to watch for changes that may not have been turned on by default.

Once you enter the main dashboard, click on the bottom icon on the left, which, when highlighted, will show you it’s for Templates.


Scroll down till you see Database Instance Template, highlight it and click on Create Like at the top menu.  You will need to name your new copy of the original template.  For mine, I’ve named it CDB_Compare:


Click OK and you will now be brought to the template with all its comparison values displayed.  If there are any areas you want to compare on immediately, make sure there is a check mark in the box for that change.  For our example, let’s say we have a process in this CDB where, when quarterly maintenance is complete, the pluggable database must be brought back up, but sometimes it’s a step that the DBAs forget to complete.  By default, the configuration template checks for this, but if it didn’t, I would place the check mark in the appropriate box and save the template before proceeding.


Now that I have my template ready, I can use it to do a comparison.  On the far left, click on the top icon, (of a bar graph) that will take you to the Overview page or the One Time Comparison Results, both of which will offer you an opportunity to create the baseline of the CDB  that you want to compare against.

Click on Create Comparison and fill in the following information:


Click on Submit and, as expected, no differences are found, (we just compared the environment against itself using the new CDB_Compare template, which checks everything) but we now have our baseline.

Perform Maintenance and Compare

Our maintenance has been completed, now our database is ready to be released to the users, but we want to verify that the changes performed should have been performed and no steps were missed that would hinder it from being ready for production use.

We perform another comparison, this time against our baseline and choose to only show differences-



Per the report, the database is in read only mode, and if we log in via SQL*Plus, we can quickly verify this:

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------

So instead of mistakenly releasing the database back to the users, we can run the following and know we’ve verified that we are safe to do so:


Well, that’ll save us from having to explain how that was missed…Whew!


Posted in EM13c, Oracle Tagged with: , ,

June 2nd, 2016 by dbakevlar

I know Werner DeGruyter will like the title of this post, so here’s a post dedicated to him as my last week at Oracle is off to a busy start…. 🙂


As I attempt to wrap up any open tasks at Oracle, I’m still the Training Days 2017 Conference Director for RMOUG, have a planning meeting for the 800 member Girl Geek Dinner Boulder/Denver Meetup group that I own, am designing the booth and building out all the projects for the MakerFaire event at the Denver Science Museum next weekend, and have now taken on the Summer Quarterly Education Workshop at the Denver Aquarium at the end of July.  This is a bit much as I start a new job at the end of June, but there are things that need to be done for community organizations to survive, and often not enough people doing them.

As I know that many other user groups are in the same boat, I come to you with pleading, open arms and say to you, as part of your community, volunteer your time.  If all of us give a little, it really adds up to a lot in the end.  When attendees at conferences, events and meetups ask what happened to this or that group and wonder why they don’t have activities any longer, it seems to always boil down to commitment from its volunteers.  If a group isn’t fed and cared for with time and attention, it won’t survive.  I have this conversation at almost every user group conference and hear similar stories from meetup groups and other event groups that you might think are nowhere related.  It all comes back to the passion and commitment of those involved, along with the support of those that may not be giving as much, but ensure those that are, are well cared for.

So here are the rules for the survival of a group:

  1. Everyone do a little so no one is doing too much and you end up losing those valuable resources.  I see volunteer groups and boards of directors that are commonly unbalanced, and that is expected- we all have lives and demands- but if it’s always like this with no checks and balances, then it’s time for people to step up and put in some time.
  2. If you aren’t doing, you don’t complain.  If you do see something that isn’t working well and want it fixed, please be prepared to volunteer some time to help out vs. volunteering others to do the work.  That’s just flaky and do you really want to be that flaky whiner at the event? 🙂
  3. Support the people who volunteer their time to your groups and defend them within an inch of your life.  Without them you don’t have a user group, you don’t have new members, you age out and your user group dies a slow, quiet death.  If you doubt me, just look at the strewn carcasses of user groups that were once impressive specimens in the event world.
  4. There is no magic formula.  What works for one group might not work for another.  There are some very impressive individuals that are working hard to create the newest, shiniest events that attract speakers, sponsors and attendees.  This is how it always will be. The groups that have been around awhile just need to keep updating with the new and improved to stay on top of the game and compete.

RMOUG has an incredible board of directors and our volunteers are SECOND to NONE!!  This has served us well all these years.  I don’t know how I would survive the demands of Training Days if it wasn’t for the volunteers and those on the board that help me when the going gets tough.  I’m quite aware of this need in other user groups as well.

So here’s the challenge for those of you out there-  Reach out to your local user group and consider volunteering a little time to one of its events.

  • Help with event check in.
  • Help with setup.
  • Consider sponsorship with your company.
  • Consider helping with the board, (we have members at large on our board, which is a good way to test out the waters as a board member.)
  • Look at becoming a member of the board of directors.

Ask the user group or meetup what they could use help with and DO IT.

Here is the list of regional user groups from Oracle and from IOUG.  Find yours and volunteer in your community.  It’s worth your time, valuable to your career, and it’s the only way these groups can continue to be successful.

Posted in DBA Life, Oracle Tagged with: ,

May 18th, 2016 by dbakevlar

Change is difficult for technical folks.  Our world is always moving at blinding speed, so if you start changing things that we don’t think need to be changed, even if you improve upon them, we’re not always appreciative.


Configuration Management, EM12c to EM13c

As requests came in for me to write on the topic of Configuration Management, I found the EM13c documentation very lacking, and had to fall back on earlier EM documentation to fill in a lot of missing areas.  There were also changes to the main interface that you use to work with the product.

When comparing the drop downs, you can see the changes.

Now I’m going to explain to you why this change is good.  Comparing the two, you can see that the Comparison feature of Configuration Management has a different drop down option in EM13c than it did in EM12c.

EM12c Configuration Management

You might think it was better to have direct access to Compare, Templates and Job Activity via the drop downs, but it all really is *still directly* accessible- only the interface has changed.

When you accessed Configuration Management in EM12c, you would click on Comparison Templates and reach the following window:


You can see all the templates and access them quickly, but what if you want to then perform a comparison?  Intuition would tell you to click on Actions and then Create.  This, unfortunately, only allows you to create a Comparison Template, not a One-Time Comparison.

To create a one-time comparison in EM12c, you would have to start over, click on the Enterprise menu, Configuration and then Comparison.  This isn’t very user friendly and can be frustrating for the user, even if they’ve become accustomed to the user interface.

EM13c Configuration Management Overview

EM13c has introduced a new interface for Configuration Management.  The initial interface dashboard is the Overview:


You can easily create a One-Time Comparison, a Drift Management definition or a Consistency Management definition right from the main Overview screen.  All interfaces for Configuration Management now include tab icons on the left so that you can easily navigate from one feature of the Configuration Management utility to another.

In EM13c, if you are in the Configuration Templates, you can easily see the tabs to take you to the Definitions, the Overview or even the One-Time Comparison.


No more returning to the Enterprise drop down and starting from the beginning to simply access another aspect of Configuration Management.

See?  Not all change is bad… 🙂  If you’d like to learn more about this cool feature, (before I start to dig into it fully with future blog posts) start with the EM12c documentation.  There’s a lot in there to help you understand the basics.

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

May 9th, 2016 by dbakevlar

A lot of my ideas for blog posts come from questions emailed to me or asked via Twitter.  Today’s blog is no different, as I was asked by someone in the community about the best method of comparing databases using features within AWR when migrating from one host and OS to another.


There is a  lot of planning that must go into a project to migrate a database to another host or consolidate to another server, but when we introduce added changes, such as a different OS, new applications, workload or other demands, these need to be taken into consideration.  How do you plan for this and what kind of testing can you perform to eliminate risk to performance and the user experience once you migrate over?

AWR Warehouse

I won’t lie to any of you, this is where the AWR Warehouse just puts it all to shame.  The ability to compare AWR data is the cornerstone of this product and it’s about to shine here again.  For a project of this type, it may very well be a consideration to deploy one and load the AWR data into the warehouse, especially if you’re taking on a consolidation.

There are two main comparison reports, one focused on AWR, (Automatic Workload Repository) data and the other on ADDM, (Automatic Database Diagnostic Monitor).


From the AWR Warehouse, once you highlight a database from the main dashboard, you’ll have the option to run either report and the coolest part of these reports is that you don’t just get to compare time snapshots from the same database, but you can compare one snapshot from a database source in the AWR Warehouse to ANOTHER database source that resides in the warehouse!

ADDM Comparison Period

This report is incredibly valuable and offers the comparisons to pinpoint many of the issues that are going to create the pain-points of a migration.  It displays “just the facts”- the crucial information about what is different, what has changed and what doesn’t match the “base” for the comparison- very effectively.

When you choose this report, the option to compare from any snapshot interval for the current database is offered, but you can then click on the magnifying glass icon for the Database to compare to and change to compare to any database that is loaded into the AWR Warehouse-



For our example, we’re going to use a day difference, same timeline to use as our Base Period.  Once we fill in these options, we can click Run to request the report.

The report is broken  down into three sections-

  • A side by side comparison of activity by wait event.
  • Details of differences via tabs and tables.
  • Resource usage graphs, separated by tabs.


Comparing the two periods of activity, we can clearly see that there were more commit waits during the base period, along with more user I/O in the comparison period.  During a crisis situation, these graphs can be very beneficial when you need to show waits to less technical team members.


The Configuration tab below the activity graphs will quickly display differences in OS, initialization parameters, host and other external influences on the database.  The Findings tab then goes into the performance comparison differences.  Did the SQL perform better or degrade?  In the below table, the SQL ID, along with detailed information about the performance change, is displayed.

Resources is the last tab, displaying graphs about the important area of resource usage.  Was there a difference in CPU usage impact between one host and the other?


Was there swapping or other memory issues?


In our example, we can clearly see the extended data reads and for Exadata consolidations, the ever valuable single block read latency is shown-


Now for those in engineered systems and RAC environments, you’re going to want to know waits for interconnect.  Again, these are simply and clearly compared, then displayed in graph form.


This report will offer very quick answers to

“What Changed?”

“What’s different?”

“What Happened at XXpm?”

The value this report provides is easy to see, but when you’re offered the ability to compare one database to another, even across different hosts, you can see how valuable the AWR Warehouse becomes- it offers insight that even the Consolidation Planner can’t.

Next post, I’ll go over the AWR Warehouse AWR Comparison Period Report.







Posted in ASH and AWR, AWR Warehouse, Oracle Tagged with: , , ,

May 4th, 2016 by dbakevlar

The OMS Patcher is a newer patching mechanism for the OMS specifically, (I know, the name kind of gave it away…)  Although there are a number of similarities to Oracle’s infamous OPatch, I’ve been spending a lot of time on OTN’s support forums and via email, assisting folks as they apply the first system patch to EM13c.  Admit it, we know how much you like patching…


The patch we’ll be working with is system patch 22920724.

Before undertaking this patch, you’ll need to log into your weblogic console as the weblogic admin, (you do still remember the URL and the login/password, right? :))  as this will be required as part of the patching process.
Once you’ve verified this information, you’ll just need to download the patch, unzip it and read the README.txt to get an understanding of what you’re patching.
Per the instructions, you’ll need to shut down the OMS (only).
./emctl stop oms
Take the time to ensure your environment is set up properly.  The ORACLE_HOME will need to be switched over from the database installation home, (if the OMS and OMR are sharing the same host, the ORACLE_HOME is most likely set incorrectly for the patch requirements.)
As an example, this is my path environment on my test server:
/u01/app/oracle/13c/bin <–Location of my bin directory for my OMS executables.
/u01/app/oracle/13c/OMSPatcher/omspatcher <– location of the OMSPatcher executable.

$ORACLE_HOME should be set to the OMS home, and an omspatcher variable set to the OMSPatcher executable:

export omspatcher=$OMS_HOME/OMSPatcher/omspatcher
export ORACLE_HOME=/u01/app/oracle/13c
If you return to the README.txt, you’ll be there awhile, as the instructions start to offer you poor advice once you get to the following:
$ omspatcher apply -analyze  -property_file <location of property file>
This command will result in a failure on the patch and annoy those attempting to apply it.

I’d recommend running the following instead, which is a simplified command and will result in success if you’ve set up your environment correctly:

omspatcher apply <path to your patch location>/22920724 -analyze

If this returns with a successful test of your patch, then simply remove the “-analyze” from the command and it will then apply the patch:

omspatcher apply <path to your patch location>/22920724

You’ll be asked a couple of questions, so be ready with the information, including verifying that you can log into your Weblogic console.

Verify that the Weblogic domain URL and username is correct or type in the correct one, enter the weblogic password
Choose to apply the patch by clicking “Y”
Patch should proceed.
The output of the patch will look like the following:

OMSPatcher log file: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

Please enter OMS weblogic admin server URL(t3s://>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>

Do you want to proceed? [y|n]
User Responded with: Y

Applying sub-patch "22589347 " to component "" and version ""...

Applying sub-patch "22823175 " to component "oracle.sysman.emas.oms.plugin" and version ""...

Applying sub-patch "22823156 " to component "oracle.sysman.db.oms.plugin" and version ""...

Log file location: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

OMSPatcher succeeded.


Note the sub-patch information.  It’s important to know that this is contained in the log, for if you needed to roll back a system patch, it must be done per sub-patch, using the identifiers listed here.

If you attempted to rollback the system patch, using the system patch identifier, you’d receive an error:

$ /u01/app/oracle/13c/OMSPatcher/omspatcher rollback -id 22920724 -analyze
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation. All rights reserved.

"22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.

OMSRollbackSession failed: "22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.
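Those sub-patch identifiers can be pulled straight out of the deploy log if you don’t want to hunt for them by eye.  A sketch against the “Applying sub-patch” lines shown above:

```shell
# the sub-patch lines as they appear in the omspatcher deploy output
log='Applying sub-patch "22589347 " to component "" and version ""...
Applying sub-patch "22823175 " to component "oracle.sysman.emas.oms.plugin" and version ""...
Applying sub-patch "22823156 " to component "oracle.sysman.db.oms.plugin" and version ""...'

# list just the sub-patch IDs you would hand to omspatcher rollback -id
echo "$log" | grep -o 'sub-patch "[0-9]*' | grep -o '[0-9]*'
```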

Once the system patch has completed successfully, you’ll need to add the agent patch.  Best practice is to use a patch plan, apply it to one agent, make that the current gold agent image, and then apply it to all of your agents that are subscribed to it.  If you need more information on how to use Gold Agent Images, just read up on it in this post.


Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 27th, 2016 by dbakevlar

Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager?  I mean, it’s a GUI, (graphical user interface) so the logs should be unnecessary to the administrator.”


You just explained why we receive so many emails from database experts stuck on issues with EM, thinking it’s “just a GUI”.

Log Files

Yes, there are a lot of logs involved with Enterprise Manager.  With the introduction of the agent back in EM10g, there were more, and with EM11g and the weblogic tier, we added more.  EM12c added functionality never dreamed of before and with it, MORE logs, but don’t despair, because we’ve also tried to streamline those logs, and where we weren’t able to streamline, we at least came up with a directory path naming convention that saves you from having to search for information so often.

The most important EM logs live in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.

Now in many threads on Oracle Support and in blogs, you’ll hear about the emctl.log, but today I’m going to spend some time on the emoms properties, trace and log files.  Now the EMOMS naming convention is just what you would think it’s about-  the Enterprise Manager Oracle Management Service, aka EMOMS.


After all that talk about logs, we’re going to jump into the configuration files first.  The file is in a couple of directory locations, including the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.

Now in EM12c, this file, along with the file, was very important to the configuration of the OMS and its logging- without these, we wouldn’t have any trace or log files, or at least the OMS wouldn’t know what to do with the output data it collected!  If you look in the files for EM13c, you’ll receive the following header:


Yes, the file is now simply a place holder, and you use EMCTL commands to configure the OMS and logging properties.

There are, actually, very helpful commands listed in the property file to tell you HOW to update your EM OMS properties!  So if you can’t remember an emctl property command, this is a good place to look to find the command/usage.

The TRACE Files

Trace files are recognized by any DBA- these files trace a process, and the emoms*.trc files are the trace files for the EM OMS processes, including the Oracle Management Service itself.  Know that a “warning” isn’t always something to be concerned about.  Sometimes it’s just letting you know what’s going on in the system, (yeah, I know, shouldn’t they just classify it as INFO then?)

2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started

These files do contain more information than the standard log file, but it may be more than what a standard EM administrator wants to search through.  They’re most helpful when working with MOS, and I recommend uploading the corresponding trace files if there’s a log that support has narrowed in on.

The LOG Files

Most of the time, you’re going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well.  If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you’re investigating.

The format of the logs is important to understand, and I know I’ve blogged about this in the past, but we’ll just do a quick and high level review.  Take the following entry:

2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info

We can see that the log entry starts with the timestamp, then the thread, the status, (ERROR, WARN, INFO) the module, and the error message.  This simplifies reading these logs, and shows how one would parse them into a log analysis program.
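Because the fields are positional, a little shell is all it takes to pull an entry apart for quick analysis.  A sketch against a sample line, (thread detail abbreviated):

```shell
# a sample emoms.log entry in the format discussed above
line='2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info'

# timestamp: the first two whitespace-delimited fields
echo "$line" | awk '{print $1, $2}'

# status: the first ERROR/WARN/INFO token in the line
echo "$line" | grep -oE 'ERROR|WARN|INFO' | head -1

# message: everything after the " - " separator
echo "${line#* - }"
```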

There are other emoms log files, specializing in loader processing and startup.  Each of these commonly has a corresponding trace file with more detailed information about the data it’s in charge of tracing.

If you want to learn more, I’d recommend reading up on EM logging from Oracle.



Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 20th, 2016 by dbakevlar

How much do you know about the big push to BI Publisher reports from Information Publisher reporting in Enterprise Manager 13c?  Be honest now, Pete Sharman is watching…. 🙂


I promise, there won’t be a quiz at the end of this post, but it’s important for everyone to start recognizing the power behind the new reporting strategy.  Pete was the PM over the big push in EM13c and has a great blog post with numerous resource links, so I’ll leave the quizzing to him!

IP Reports are incredibly powerful and I don’t see them going away soon, but they have a lot of limitations, too.  With the “harder” push to BI Publisher with EM13c, users receive a more robust reporting platform that is able to support the functionality that is required of an IT Infrastructure tool.

BI Publisher

You can access the BI Publisher in EM13c from the Enterprise drop down menu-


There’s a plethora of reports already built out for you to utilize!  These reports access only the OMR, (Oracle EM Management Repository) and cover numerous categories:

  • Target information and status
  • Cloud
  • Security
  • Resource and consolidation planning
  • Metrics, incidents and alerting


Note: Please be aware that the license for BI Publisher included with Enterprise Manager only covers reporting against the OMR and not any other targets DIRECTLY.  If you decide to build reports against data residing in targets outside the repository, each of those targets will need to be licensed.

Many of the original reports that were converted over from IP Reports were done so by a wonderful Oracle partner, Blue Medora, who are well known for their VMware plugins for Enterprise Manager.

BI Publisher Interface

Once you click on one of the reports, you’ll be taken from the EM13c interface to the BI Publisher one.  Don’t panic when that screen changes-  it’s supposed to do that.



You’ll be brought to the Home page, where you’ll have access to your catalog of reports, (it will mirror the reports in the EM13c reporting interface) the ability to create New reports, open reports that you may have drafts of or are local to your machine, (not uploaded to the repository) and authentication information.

In the left hand sidebar, you will have menu options that duplicate some of what is in the top menu, along with access to tips to help you get more acquainted with BI Publisher-


This is where you’ll most likely access the catalog, create reports and download local BIP tools to use on your desktop.

Running Standard Reports


Running a standard, pre-created report is pretty easy.  This is a report that’s already had its template format created for you and its data sources linked.  Oracle has tried to create a number of reports in categories it thought most IT departments would need, but let’s just run two to demonstrate.

Let’s say you want to know about Database Group Health.  Now there’s not a lot connected to my small development environment, (four databases, three in the Oracle Public Cloud and one on-premise) and this report is currently aimed at my EM repository. This limits the results, but as you can see, it shows the current availability, the current number of incidents and compliance violations.

We could also take a look at what kinds of targets exist in the Enterprise Manager environment:


Or who has powerful privileges in the environment:


Now these are just a couple of the dozens of reports available to you that can be run, copied, edited and sourced for your own environment’s reporting needs out of BI Publisher.  I’d definitely recommend that if you haven’t checked out BI Publisher, you spend a little time on it and see how much it can do!



Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 7th, 2016 by dbakevlar

It’s that time of year again and the massive undertaking of the Collaborate conference is upon us.  This yearly conference, a collaboration between Quest, Oracle Applications User Group, (OAUG) and Independent Oracle User Group, (IOUG) is one of the largest conferences in the world for those that specialize in all areas of the Oracle database.

The conference is held in different cities, but recently it’s been sticking to the great destination of Las Vegas, NV.  We’ll be at the Mandalay, which like many casinos, is its own little self-contained city within a city.


The week will be crazy and I’ll start right out, quite busy with my partners in crime, Courtney Llamas and Werner De Gruyter, with Sunday’s pre-conference hands on lab. “Everything I Ever Wanted to Know About Enterprise Manager I Learned at Collaborate” was a huge hit last year, so we’re repeating it this year, but we’ve updated it to the newest release, EM13c.  For those that are able to gain a coveted spot in this HOL, it will be a choice event.  We’re going to cover not just the new user interface, but some of the coolest need-to-know features of the new release.

Sunday evening is the Welcome Reception and Awards Ceremony.  This year I’m receiving the Ken Jacobs award for my contributions to the user community as an Oracle employee.  I’m very honored to be receiving this and thank everyone at IOUG for recognizing the importance that even as an Oracle employee, you can do a lot to help make the community great!

Throughout the week, I’ll have a number of technical sessions:


Now my Database as a Service session is up first for the week on Monday, 9:15am in Palm B, but I’m going to warn you: since this abstract was submitted very early on, it isn’t as descriptive as I wanted.  Know that this is a DBaaS session and I’ll be covering on-premise, private cloud and even Oracle Public Cloud!  Come learn how easy it can be and forget all those datapump, transportable tablespace and other silly commands people are telling you you have to do to provision… ☺

Right after my DBaaS session, at 10:15 in the same room, (Palm B) we’ll have a special session covering the new product that so many of us have put so much energy, time and vision into-  the Oracle Management Cloud, (OMC)!  Read more about this session here.

The Welcome Reception in the Exhibit Hall is from 5:30-8pm.  Don’t miss out on getting there first and see all the cool exhibitors.  I’ll be at the EM13c booth, so come say hi!


So Tuesday morning, the 12th, I’m back in Palm B at noon for the first of my certification sessions, covering 30 minutes of Enterprise Manager 13c New Features.


Wednesday, at noon, I’m back in my favorite room, Palm B to finish the second part of the certification sessions on new features with Enterprise Manager 13c.

I’ll be presenting at Oak Table World at Collaborate at 2pm in the Mandalay Bay Ballroom.  I’ll be doing my newest session on Enterprise Manager 13c and DB12c.  It’s always a great venue when we have Oakies at conferences and I almost squeaked out of it this year, but was dragged back in at the last minute!

The Enterprise Manager SIG is right afterwards, from 3-4 in the South Seas Ballroom E.  This is where we meet and geek out over everything Enterprise Manager, so don’t miss out on that!


For the last day, Thursday at 9:45am, I’ll be in- wait for it….  Palm B!  Yes, I know it’s a surprise for both of us, but I’ll be using my experience helping customers Upgrade to Enterprise Manager 13c and sharing it with everyone at Collaborate.  This is another certification session, so collect those certificates and get the most out of your conference!

I’ve made a lot of updates with new material to my slides recently, so I promise to upload my slides to SlideShare after the conference, too!

See you next week in Las Vegas!



Posted in ASH and AWR, EM13c, Oracle, Oracle Management Cloud Tagged with: , , , ,

March 28th, 2016 by dbakevlar

Even though I didn’t have the “official” prerequisite classes of HTML and CSS for the JavaScript class offered by sister Meetup group, Girls Develop it, I decided on Friday that I wanted to take the weekend class and signed up.

The class was held at the Turing Development school, a great downtown location.  It’s very centralized, no matter if you’re North, South, East or West of the city, and since the venue is a school, it’s set up with plenty of power, WiFi and a projector with multiple screens.  It had started another spring time snow, so I was one of the first ones in the class that morning, but we quickly got situated-  about 25 female students all there to learn JavaScript!  I don’t think I’ve seen that many women in one technical class in my life and no matter how much I love hanging out with the guys that I do in the Oracle realm, this was a refreshing change.  The room was filled with women of all ages, all walks of life and no, I was not the only woman in the room with brightly colored streaks in her hair, tattoos or multiple piercings.


Now if you haven’t already done so, I recommend joining Meetup and checking out the groups in your areas of interest.  I run three groups, (RMOUG Women in Technology, Raspberry Pi and STEM, and Girl Geek Dinners of Boulder/Denver) and I’m also part of a number of other groups, including the Big Data, Women Who Code and Girls Develop It Meetups, the last of which offered this one day class.  At $80, it was a great opportunity to dig into a new language and gain a strong introduction to a computer language, even if you didn’t have any previous experience.

Through the day, we learned how to build out a main page, test code through the console log, incorporate JavaScript into our pages and the best practices of beginning JavaScript.

Now there are two things I will share with you that I feel are great tips from this class, available to everyone.  They’re two sites offering opportunities to continue your web design/JavaScript education:

  1.  CodePen–  This site demonstrates different examples of web code, broken down between HTML, CSS and JavaScript, (any combination of 1, 2 or all 3…) and you can make changes to the code to see how it impacts the outcome of the graphics and framework.  It really demonstrates how these three interact to build out impressive web designs and where you would use one over another.
  2. Exercism–  This site gives you real world coding problems, allows you to code a solution and submit it for valuable feedback.  It’s important to use what you learn every day to improve upon it, and this site gives you that opportunity.


Posted in DBA Life, Oracle, WIT Tagged with: ,

March 21st, 2016 by dbakevlar

I appreciate killing two birds with one stone.  I’m all about efficiency and if I can satisfy more than one task with a simple, productive process, then I’m going to do it.  Today, I’m about to:

  1. Show you why you should have a backup copy of previous agent software and how to do this.
  2. Create a documented process to restore previous images of an agent to a target host.
  3. Create the content section for the Collaborate HOL on Gold Images and make it reproducible.
  4. Create a customer demonstration of Gold Agent Image
  5. AND publish a blog post on how to do it all.


I have a pristine Enterprise Manager 13c environment that I’m working in.  To “pollute” it with an earlier agent seems against what anyone would want to do in a real world EM, but there may very well be reasons for having to do so:

  1.  A plugin or bug in the EM13c agent requires a previous agent version to be deployed.
  2. A customer wants to see a demo of the EM13c gold agent image and this would require a host being monitored by an older, 12c agent.

Retaining Previous Agent Copies

It would appear to be a simple process.  Let’s say you have the older version of the agent you wish to deploy in your software repository.  You can access the software versions in your software library by clicking on Setup, Extensibility, Self-Update.


Agent Software is the first in our list, so it’s already highlighted; otherwise, click in the center of the row, where there’s no link, then click on Actions and Open to access the details on what Agent Software you have downloaded to your Software Library.

If you scroll down, considering all the versions of the agent that are available, you can see that the agent for Linux is already in the software library.  If we try to deploy it from Cloud Control, we notice that no version is offered, only platform, which means the latest will be deployed.  But what if we want to deploy an earlier one?

Silent Deploy of an Agent

The Enterprise Manager Command Line Interface, (EMCLI) offers us a lot more control over what we can request, so let’s try to use the agent from the command line.

Log into the CLI from the OMS host, (or another host with EMCLI installed.)

[oracle@em12 bin]$ ./emcli login -username=sysman
Enter password :
Login successful

First get the information about the agents that are stored in the software library:

[oracle@em12 bin]$ ./emcli get_supportedplatforms
Error: The command name "get_supportedplatforms" is not a recognized command.
Run the "help" command for a list of recognized commands.
You may also need to run the "sync" command to synchronize with the current OMS.
[oracle@em12 bin]$ ./emcli get_supported_platforms
Version =
 Platform = Linux x86-64
Version =
 Platform = Linux x86-64
Platforms list displayed successfully.
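If you wanted to script around this, a hypothetical helper like the one below could reduce that output to one version|platform line per image, ready to feed into get_agentimage.  The “Version = / Platform = ” shape is assumed from the run above; this is just a sketch, not part of EMCLI itself:

```shell
#!/bin/sh
# Collapse `emcli get_supported_platforms` output into "version|platform"
# pairs, one per line. The "Version = " / "Platform = " layout is assumed
# from the output shown above.
parse_platforms() {
  awk -F' = ' '
    /^Version = /    { ver = $2 }
    /^ *Platform = / { print ver "|" $2 }
  ' "$1"
}
```

From there a loop over each line could drive the export of every stored agent image before an upgrade.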

I already have the version I need.  I want to export it to a zip file to be deployed elsewhere:

[oracle@em12 bin]$ ./emcli get_agentimage -destination=/home/oracle/125 -platform="Platform = Linux x86-64" -version=
ERROR:You cannot retrieve an agent image lower than Only retrieving an agent image of or higher is supported by this command.

OK, so much for that idea!

So what have we learned here?  Use this process to “export” a copy of your previous version of the agent software BEFORE upgrading Enterprise Manager to a new version.

Now, lucky for me, I have multiple EM environments and had an EM from which to export the agent software using the steps that I outlined above.  I’ve SCP’d it over to the EM13c host to deploy from and will retain that copy for future endeavors.  But remember, we just took care of task number one on our list:

  1.  Show you why you should have a backup copy of previous agent software and how to do this.

Silent Deploy of Previous Agent Software

If we look in our folder, we can see our zip file:

[oracle@osclxc ~]$ ls

I’ve already copied it over to the folder I’ll deploy from:


Now I need to unzip it and update the entries in the response file, (agent.rsp)
 EM_UPLOAD_PORT=4890 <--get this from running emctl status oms -details
 AGENT_REGISTRATION_PASSWORD=<password> You can set a new one in the EMCC if you don't know this information.
 s_agentHomeName=<display name for target>

Now run the shell script, including the argument to ignore the version prerequisite, along with our response file:

$./ -ignorePrereqs AGENT_BASE_DIR=/u01/app/oracle/product RESPONSE_FILE=/home/oracle/agent.rsp

The script should deploy the agent successfully, which will result in the end output from the run:

Agent Configuration completed successfully
The following configuration scripts need to be executed as the "root" user.
#Root script to run
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
Agent Deployment Successful.

Check that an upload is possible and check the status:

[oracle@fs3 bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Agent Version :
OMS Version :
Protocol Version :
Agent Home : /u01/app/oracle/product/agent12c
Agent Log Directory : /u01/app/oracle/product/agent12c/sysman/log
Agent Binaries : /u01/app/oracle/product/core/
Agent Process ID : 2698
Parent Process ID : 2630

You should see your host in your EM13c environment now.


OK, that takes care of Number two task:

2.  Create a documented process to restore previous images of an agent to a target host.

Using a Gold Agent Image

From here, we can then demonstrate the EM13c Gold Agent Image effectively.  Click on Setup, Manage Cloud Control, Gold Agent Image:

Now I’ve already created a Gold Agent Image in this post.  It’s time to Manage Subscriptions- you’ll see a link at the center of the page, to the right side.  Click on it, then subscribe hosts by clicking on “Subscribe” and adding them to the list, (by using the shift or ctrl key, you can choose more than one at a time.)


As you can see, I’ve added all my agents to the Gold Agent Image as subscriptions and now it will go through, check the version and add each to be managed by the Gold Agent Image.  This includes my new host with the earlier agent.  Keep in mind that a blackout is part of this process for each of these agents as they’re added, so be aware of this step as you refresh and monitor the additions.

Once the added host(s) update to show that they’re now available for update, click on the agent you wish to update, (you can even choose one that’s already on the current version…) and click on Update, Current Version.  This will use the Current version gold image that it’s subscribed to and deploy it via an EM job-


The job will run for a period of time as it checks everything out, deploys the software and updates the agent, including a blackout so as not to alarm everyone as you work on this task. Once complete, the agent will be upgraded to the same release as your gold agent image you created!


Well, with that step, I believe I’ve taken care of the next three items on my list!  If you’d like to know more about Gold Agent Images, outside of the scenic route I took you on today, check out the Oracle documentation.

Posted in EM13c, Oracle Tagged with: , ,

March 15th, 2016 by dbakevlar

As we migrate to the cloud, secured credentials are starting to become a standard in most DBAs’ worlds.  If you don’t take the time to understand SSH keys and secured credentials, well, you’re not going to get very far in the cloud.


Now SSH keys are a way to authenticate to a host without the need for a password.  It’s far more secure, but takes a little time to set up, along with some comprehension of public versus private keys and the impact of pass phrases.

Private keys shouldn’t be shared, as anyone with access to the private key can gain access to your host.  The public key is the half that gets copied to a secondary host, where it’s used to authenticate the user connecting from the host that holds the private key.

As RSA is the most common type and what we will require here for EM13c and OPC setup, we’ll focus this post on that type of SSH key.

Generating an SSH Key

As the user that needs to authenticate to the secondary host or cloud provider, you’ll need to generate the key.  This is a simple process on Unix and Linux, and on Windows, PuTTY provides a great PuTTY Key Generator program that can be used to perform the task.

In a putty console, again, as the OS User that will need to authenticate, run the following command:

$ssh-keygen -b 2048 -t rsa

The command calls the SSH key generator, asking for a 2048 bit key, (-b) of type, (-t) RSA.

The command will ask for a directory in which to create the files; if none is specified, it defaults to the .ssh directory under the user’s home directory.

The key pair is generated, and you’ll see files like the following in that directory, (known_hosts is populated as you connect to remote hosts):

-rw-------+ 1 oracle dba 1675 Mar 11 15:01 id_rsa
-rw-r--r--+ 1 oracle dba 398 Mar 11 15:01
-rw-r--r--+ 1 oracle dba 1989 Mar 14 14:42 known_hosts

Note the read write permissions on these files and protect them as you see fit, especially the private key, (the one without an extension… :))
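In fact, ssh will refuse to use a private key that’s readable by group or others, so it’s worth locking the file down explicitly.  A quick sketch, (GNU stat syntax, and the path you pass is illustrative):

```shell
#!/bin/sh
# Restrict a private key to its owner and echo the resulting mode.
# ssh ignores keys that group/other can read, so 600 is the safe choice.
# `stat -c` is GNU coreutils; on macOS use `stat -f %Lp` instead.
lock_key() {
  chmod 600 "$1" && stat -c '%a' "$1"
}
```

Run it as `lock_key ~/.ssh/id_rsa` and you should get 600 back.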

Now I’ve discussed how to create a named credential in my previous post, but I want to discuss how to use these keys to make sure that they’re set up correctly to log into the OPC and other hosts.

You can validate the ssh key by performing the following:

ssh -i ./id_rsa remote_OSUser@remote_hostname

If you can’t log into the remote server, your named credentials creation has an issue and you need to check for typos or incorrect key usage.

Using an SSH Key to Connect to a Remote Host

Once you create a Named Credential, the key is then added to that remote host so that no password is required to authenticate.

On that remote host, you’ll now see the .ssh folder just as you have one on your local host:

-rw------- 1 opc opc 221 Mar 15 12:46 .bash_history
-rw-r--r-- 1 opc opc 18 Oct 30 2014 .bash_logout
-rw-r--r-- 1 opc opc 176 Oct 30 2014 .bash_profile
-rw-r--r-- 1 opc opc 124 Oct 30 2014 .bashrc
-rw-r--r-- 1 opc opc 500 Oct 10 2013 .emacs
drwx------ 2 opc oinstall 4096 Mar 14 17:43 .ssh

In this folder, there is only one file, called “authorized_keys”, until you create SSH keys on the remote host itself.  The authorized_keys file holds all the public keys that are authorized to authenticate.  You can view this file, and you should see the entry for the local host you created an SSH key and named credential for earlier:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAy+LQteSnuVmxquHXrA0Vfk6Vi8I4aW/vAgmm1BNpMoIe...........== oracle@orig_hostname

This entry will match the .pub file entry on the host for the OS user that you generated and used for your Named Credential earlier.  This is all that’s done when a named credential with an SSH key is used for a target-  it simply adds this information to the authorized_keys file.
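In other words, the named credential is doing roughly the equivalent of the append below.  This sketch is illustrative, (paths and the key string are hypothetical) and idempotent, so running it twice won’t duplicate the entry:

```shell
#!/bin/sh
# Append a public key to an authorized_keys file, skipping it if the exact
# line is already present -- effectively what the named-credential setup
# does for you on the remote host.
authorize_key() {
  pubkey="$1" authfile="$2"
  touch "$authfile" && chmod 600 "$authfile"
  grep -qxF "$pubkey" "$authfile" || printf '%s\n' "$pubkey" >> "$authfile"
}
```

Something like `authorize_key "$(cat id_rsa.pub)" ~/.ssh/authorized_keys` would do it by hand.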

I hope this short post about RSA SSH keys adds to your knowledge base on EM13c, cloud based authentication and other remote host authentication challenges that may require a little more explanation, and until next post, keep curious.


Posted in Oracle

March 14th, 2016 by dbakevlar

This is going to be a multi-post series, (I have so many of those going, you’d hope I’d finish one vs. going onto another one and coming back to others, but that’s just how I roll…:))

As I now have access to the Oracle Public Cloud, (OPC) I’m going to start by building out some connectivity to one of my on premise Enterprise Manager 13c environments.  I had some difficulty getting this done, which may sound strange for someone who’s done projects with EM12c and DBaaS.


It’s not THAT hard to do; it’s just a matter of locating the proper steps when there are SO many different groups talking about Database as a Service and Hybrid Cloud from Oracle.  In this post, we’re talking the best and greatest one-  Enterprise Manager 13c’s Database as a Service.

Generate Public and Private Keys

This is required for authentication in our cloud environment, so on our Oracle Management Service, (OMS) environment, let’s create our SSH keys as our Oracle user, (or the owner of the OMS installation):

ssh-keygen -b 2048 -t rsa

Choose where you would like to store the key files and choose not to use a passphrase.

Global Named Credential for the Cloud

We’ll then use the ssh key as part of our new named credential that will be configured with our cloud targets.

Click on Setup, Security and then Named Credentials.  Click on Create under the Named Credentials section and then proceed to follow along with these requirements for the SSH secured credential:


Now most instructions will tell you that you need to “Choose File” to load your SSH private and public keys into the credential properties, but you can also open each file and just copy and paste its contents into the sections.  It works the same way.  Ensure you choose “Global” for the Scope, as we don’t have a target to assign this to yet.

Once you’ve entered this information, click on Save, as you won’t be able to test it yet.  I will tell you, if you don’t paste in ALL of the information from each of the public and private key files into the properties section, the checks for the headers and footers will cause it to throw an error, (you can see the “****BEGIN RSA PRIVATE KEY****” and “ssh-rsa” in the ones I pasted into mine.)

Create a Hybrid Cloud Agent

Any existing agent can be used for this step and will then serve two purposes: it will be both the local host agent and an agent for the cloud, which is why it’s referred to as a hybrid agent.

We’ll be using EM CLI, (the command line tool for EM) to perform this step.  I’m going to use the OMS host’s agent, but I’d commonly recommend using other hosts and creating a few to ensure higher availability.

 $ ./emcli login -username=sysman
 Enter password :

Login successful
 $ ./emcli register_hybridgateway_agent -hybridgateway_agent_list=''
 Successfully registered list of agents as hybridgateways.

Make sure to restart the agent after you’ve performed this step.  Deployments to the cloud can fail if you haven’t cycled the agent you’ve converted to a hybrid gateway before performing a deployment.

Create Database Services in OPC

Once that’s done, you’ll need to create some services to manage in your OPC, so create a database service to begin.  I have three to test with my on premise EM13c environment that we’re going to deploy a hybrid agent to.


Now that we have a couple of database services created, I’ll need to add the information regarding each new target to the /etc/hosts file on the on premise Enterprise Manager host.

Adding the DNS Information

You can capture this information from your OPC cloud console by clicking the left upper menu, Oracle Compute Cloud Service.

For each service you add, the Oracle Compute Cloud Service provides the information for the DNS entry you’ll need to add to your /etc/hosts file, along with public IP addresses and other pertinent information.


Once you’ve gathered this, then as a user with SUDO privs on your OMS box, add these entries to your hosts file:

$sudo vi /etc/hosts
# ######################################
localhost.localdomain loghost localhost
IP Address   Host Name    Short Name
So on, and so forth....

Save the changes to the file and that’s all that’s required; otherwise you’d have to use the IP addresses to connect to these environments.

Now, let’s use our hybrid gateway agent and deploy to one or more of our new targets on the Oracle Public Cloud.

Manual Target Additions

We’ll add a target manually from the Setup menu, and choose to add a host target:


We’ll fill out the standard information of agent installation directory and the sudo command, but we’ll also choose to use the cloud credentials we created earlier, and then we need to check the box for Optional Details and check mark that we’re going to configure a Hybrid Cloud Agent.  If your OS user doesn’t have sudo to root, no problem- you’ll just need to run the script manually to complete the installation.


Notice that I have a magnifying glass I can click on to choose the agent that I’ve made my hybrid cloud agent.  One of the tricks for the proxy port is to remove the default and let the installation deploy to the port that it finds is open.  It eliminates the need to guess and the default isn’t always correct.

Click on Next once you’ve filled out these sections and, if satisfied, click on Deploy Agent.  Once the job finishes, the deployment to the cloud is complete.

Next post we’ll discuss the management of cloud targets and hybrid management.





Posted in DBaaS, EM13c, Oracle Tagged with: , , ,

March 8th, 2016 by dbakevlar

I’ve been working on a test environment consisting of multiple containers in a really cool little setup.  The folks that built it create the grand tours for Oracle and were hoping I’d really kick the tires on it, as it’s a new setup and I’m known for doing major damage to resource consumption… 🙂  No fear, it didn’t take too long before I ran into an interesting scenario that we’ll call the “Part 2” of my Snap Clone posts.


Environment after Kellyn has kicked the tires.

In EM13c, if you run into errors, you need to know where to properly start troubleshooting and which logs provide the most valuable data.  For a snap or thin clone job, there are some distinct steps you should follow.

The Right Logs in EM13c

The error you receive via the EMCC should direct you first to the OMS management log.  This can be found in the $OMS_BASE/EMGC_OMS1/sysman/log directory.  View the emoms.log first; for the time you issued the clone, there should be some high level information about what happened:

2016-03-01 17:31:04,143 [130::[ACTIVE] ExecuteThread: '16' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN clone.CloneDatabasesModel logp.251 - Error determining whether the database target is enabled for thin provisioning: null

From the example, we can see that our test master wasn’t enabled for thin provisioning as part of its setup.

If we log into EMCC, go to our source database, (BOFA) and then navigate from Database, Cloning, Clone Management, we can see that although we had requested this to be a test master database, when I overwhelmed the environment something went wrong, and this full clone hadn’t become a test master for BOFA:


Even though the database that should be the test master is visible and highlighted on the BOFA database Clone Management page, I’m unable to Enable as a Test Master or choose the Remove option.  I could delete it, and I’d only be prompted for the credentials needed to perform the process.


For this post, we’re going to say that I was also faced with no option to delete the database from the EMCC.  In that case, I’d need to go to the command line interface for EM13c.

EM CLI to the Rescue

As we can’t fix our broken test master via the console, we’ll take care of it with the command line interface, (EM CLI.)

First we need to know information about the database we’re having problems with, so log into the OMR, (Oracle Management Repository, the database behind EM13c)  via SQL*Plus as a user with access to the sysman schema and get the TARGET_GUID for the database in question:

select display_name, target_name, target_guid 
from mgmt_targets where target_name like 'tm1%';


Ignore the system entry and focus on BOFAtm1.  It’s the target that’s having issues in our Clone Management.

We need to create an entry file with the following parameters to be used by our input file argument-

vi /home/oracle/delete_cln.prop

Next, log into the EM CLI as the sysman user, (or if you’ve set up yours with proper EM CLI logins, then use that…)

$ ./emcli login -username=sysman
 Enter password :
Login successful
./emcli delete_database -input_file=data:/home/oracle/delete_cln.prop
Submitting delete database procedure...
Deployment procedure submitted successfully

Notice the output from the run: “…procedure SUBMITTED successfully”.  This isn’t an instantaneous execution and it will take a short while for the deletion and removal of the datafiles to take place.

There are a ton of EM CLI verbs for creating, managing and automating DBaaS; this post just demonstrates the use of one of them when I ran into an issue caused by resource constraints on my test environment.  You can find most of them here.

After some investigation of host processes, I noted that the swap was undersized, and after resizing it, the job completed successfully.
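Since undersized swap was the culprit here, a tiny check like the one below can be run before submitting resource-hungry clone jobs.  It just reads the /proc/meminfo format; the 4Gb threshold in the usage note is illustrative, not an Oracle requirement:

```shell
#!/bin/sh
# Report SwapTotal (in kB) from a meminfo-format file. Defaults to
# /proc/meminfo; pass a file argument to test against a sample.
swap_total_kb() {
  awk '/^SwapTotal:/ {print $2}' "${1:-/proc/meminfo}"
}
```

For example: `[ "$(swap_total_kb)" -ge 4194304 ] || echo "swap under 4Gb - clone jobs may struggle"`.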


Posted in DBaaS, EM13c, Enterprise Manager, Oracle Tagged with: ,

March 4th, 2016 by dbakevlar

I fly out on Sunday for HotSos and am looking forward to giving a joint keynote with Jeff Smith, as well as giving two brand new sessions on Enterprise Manager New Features.  IOUG’s Collaborate is just a month afterwards, so the spring conference chaos is officially under way.


With running the RMOUG conference, Feb. 9th-11th, I think you can imagine what my response was like when I realized how much content I had to produce: two sessions for HotSos and then another four for Collaborate, plus a Hands on Lab.


As focused as I’ve been on day job demands for a new product, Oracle Management Cloud, which I’m sure you’ve heard of as it goes through trials, I found myself furiously building out everything I needed for my Enterprise Manager 13c environment.  At the same time, we needed to build and test out the HOL container environment and then Brian Spendolini was kind enough to give me access to the Oracle Public Cloud to test out the new Database as a Service with Hybrid Cloud offering.

I know all of it is going to be awesome, but my brain works like a McDonalds with 256 open drive thrus, so until it comes together at the end, I’m sure it looks pretty chaotic.


With that said, everything is starting to come together, first with HotSos and then with Collaborate, really well.

HotSos Symposium 2016

This will be my fourth year presenting at HotSos Symposium and where other conferences may have mixed content, this is all about performance.  It’s my favorite topic and I really get to discuss the features that I love-  AWR, ASH, EM Metrics, SQL Monitor, AWR Warehouse.  It’s all technical, all the time and I really enjoy the personal feel of the conference that the HotSos group put into it, as well as the quality of the attendees that are there with such a focused objective on what they want to learn.

That Jeff and I are going to do our keynote on Social Media at HotSos really demonstrates its value to a techie career.  Social Media is assumed to come naturally to those that are technically fluent, but to be honest, it can be a very foreign concept.  Hopefully those in attendance will gain value in professional branding and how it can further their career.

IOUG Collaborate 2016

Collaborate is another conference I enjoy speaking at immensely.  The session attendance is high, allowing you to reach a large user base, and the locations often change from year to year, offering you some place new to visit.  The venue this year is the Mandalay in Las Vegas.  There’s so much to do during the event that it’s almost impossible to step outside the hotel, (can you call these monstrosities in Las Vegas just a “hotel”? :)) and I know I only went outside once back in 2014 after arriving.


Joe Diemer did a great job putting together a page to locate all the great Enterprise Manager and Oracle Management Cloud content at Collaborate this year.  Make sure to bookmark this and use those links to fulfill your Collaborate scheduler so you don’t miss out on any of it!  This includes incredible presenters and I know I’ll be using it to try and see sessions for a change!

Along with my four technical sessions, I’ll be doing a great HOL with Courtney Llamas and Werner DeGruyter.  We’re updating last year’s session, (OK, we’re pretty much writing a whole new HOL…) to EM13c and we’re going to cover all the latest and coolest new features, so don’t miss out on this great pre-conference hands on lab!

Hopefully I’ll see you either this next week at HotSos or in April at Collaborate!



Posted in DBA Life, Oracle Tagged with: , , ,

March 1st, 2016 by dbakevlar

With EM13c, DBaaS has never been easier.  Whether your solution is on-premise, hybrid, (on-premise to the cloud and back) or all cloud, you’ll find that it helps you take on DevOps challenges and eases the demands on you as the DBA, who is so often viewed as the source of much of the contention.


On-Premise Cloning

In EM13c, on-premise clones are built in by default and easier to manage than they were before.  The one pre-requisite I’ll ask of you is that you set up your database and host preferred credentials for the location where you’ll be creating any databases.  After logging into the EMCC and going to our Database Home Page, we can choose a database that we’d like to clone.  There are a number of different kinds of clones-

  • Full Clones from RMAN Backups, standby, etc.
  • Thin Clones with or without a test master database
  • CloneDB for DB12c

For this example, we’ll take advantage of a thin clone, so a little setup will be in order, but as you’ll see, it’s so easy, that it’s just crazy not to take advantage of the space savings that can be offered with a thin clone.

What is a Thin Clone?

A thin clone is a virtual copy of a database that, in DevOps terms, uses a test master database, (a full copy of the source database) as a “conduit” to create an unlimited number of thin clone databases, saving up to 90% of the storage that a separate full clone for each would need.


One of the cool features of a test master is that you can perform data masking on it, so no sensitive production data is released to the clones.  You also have the ability to rewind- let’s say a tester is doing some high-risk testing on a thin clone and gets to a point of no return.  Instead of asking for a new clone, they can simply rewind to a snapshot taken before the issue occurred.  Very cool stuff…. 🙂

Creating a Test Master Database

From our list of databases in cloud control, we can right click on the database that we want to clone and proceed to create a test master database for it:


The wizard will take us through the proper steps to perform to create the test master properly.  This test master will reside on an on-premise host, so no need for a cloud resource pool.


As stated earlier, it will pay off if you have your logins set up as preferred credentials.  The wizard will allow you to set those up as “New” credentials, but if there is a failure because they aren’t tried and true, it’s nice to know you already have this out of the way.

Below the Credentials section, you can decide at what point you want to recover from.  It can be at the time the job is deployed or from a point in time.

You have the choice to name your database anything.  I left the default, using the naming convention based off the source, with the addition of tm, for Test Master, and the number 1.  If this were a standard database, you might want to make it RAC or RAC One Node.

Then comes the storage.  As this is an on-premise clone, I chose the same Oracle Home that I’m using for another database on the nyc host and used the same preferred credentials for normal database operations.  You would want to place your test master database on a storage location separate from your production database so as not to create a performance impact.


The default location for storage of datafiles is offered, but I do have the opportunity to use OFA or ASM as my options, and I can set up Flashback, too.  Whatever listeners are discovered for the host will be offered up, and then I can decide on a security model.  Set up the password model that best suits your policies, and if you have a larger database to clone, you may want to up the parallel threads that will be used to create the test master database.  I always caution those that would attempt to max the number out, thinking more means better.  Parallel can be throttled by a number of factors, and those should be taken into consideration.  You will find with practice that there’s a “sweet spot” for this setting.  In your environment, 8 may be the magic number due to network bandwidth or IO resource limitations.  You may find it can be as high as 32, but do take some time to test and get to know your environment.


Now come the spfile settings.  You control these, and although the default spfile for a test master is used here, for a standard clone you may want to update the settings to limit the resources allocated to a test or development clone.

If you have special scripts that were part of your old manual cloning process, you can still add them here, both BEFORE and AFTER the clone.  For SQL scripts, you also need to specify a database user to run the script as.

If you started a standard clone and meant to create a test master database, no fear!  You still have the opportunity to change this into a Test Master at this step and you can create a profile to add to your catalog options if you realize that this would be a nice clone process to make repeatable.


The EM Job that will create the clone is the next step.  You can choose to run it immediately and decide what kind of notifications you’d like to receive via your EM profile, (remember, the notifications go to the EM user logged into the EMCC who creates this clone….)  You can also choose to perform the clone later.


The scheduling feature is simple to use, allowing you to choose a date and time that makes the clone job schedule as efficient as possible.


Next, review the options you’ve chosen and if satisfied, click on Clone.  If not, click on Back and change any options that didn’t meet your expectations.

If you chose to run the job immediately, the progress dashboard will be brought up after clicking Clone.


Procedure Activity is just another term for an EM Job, and you’ll find this job listed in Job Activity.  It’s easier to watch the progress from here, and as checkmarks appear in the right-hand column, each step has completed successfully for your test master or clone.

Once the clone is complete, remember that this new database is not automatically monitored by EM13c unless you’ve set up Automatic Discovery and Automatic Promotion.  If not, you’ll need to manually discover it, which you can do following this blog post.  Also keep in mind, you need to wait until the clone is finished before you can set the DBSNMP user status to unlocked/open and ensure the password is secure.
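The DBSNMP step can be done from SQL*Plus on the clone host once the job finishes.  A minimal sketch, assuming SYSDBA access- the password below is just a placeholder you’d replace with your own secure value:

```shell
# Unlock the DBSNMP monitoring account on the new clone and set a
# secure password, (run as SYSDBA; the password is a placeholder).
sqlplus / as sysdba <<'EOF'
ALTER USER dbsnmp IDENTIFIED BY "Chang3MePlease" ACCOUNT UNLOCK;
EXIT;
EOF
```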

Now that we’ve created our test master database, in the next post, we’ll create a thin clone.


Posted in DBaaS, EM13c, Oracle Tagged with: , , ,

February 25th, 2016 by dbakevlar

I thought this was kind of a cool feature- the ability to send a message to appear to specific or all users in the Cloud Control Console.  I have to admit that I used to like a similar feature in Microsoft/MSSQL to send network broadcast messages to desktops that offered one more way to get information to users that they might be less inclined to miss.


If you’ve already deployed or upgraded to Enterprise Manager 13c and tried to search for how to use this feature, you’ll have found it’s not well documented, so I’m going to blog about it and hopefully assist those that would like to put this great little feature to use.

First off, know that the broadcast message is issued by the administrator from the Enterprise Manager command line, (EMCLI.)  There isn’t a current cloud control interface mechanism to perform this.

If you look in the documentation, you’ll most likely search, (like I did) for a verb with a naming convention of %broadcast%, but come up with nothing.  The reason you can’t find anything is that the verb is wrong in the docs, and I’ve submitted a document bug to have this corrected, (thanks to Pete Sharman, who had a previous example of the execution and realized it didn’t match what he had in his examples…)

In the docs, you’ll find the entry for this under the verb: publish_message

The correct verb for this feature is: send_system_broadcast

I ended up pinging Pete because I was concerned that the verb didn’t exist, and it took a search for the right keyword after dumping all the EMCLI verbs out to a text file to find it.  It’s a good idea to know how to do this- simply type in the following to gather all the verbs from the library and redirect them to a file that’s easier to parse through with an editor:

$ ./emcli help > emcli.list

You can then view this list and in it, you’ll find the correct verb name:

System Broadcast Verbs
send_system_broadcast — Send a System Broadcast to users logged into the UI
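Rather than scrolling through the whole file in an editor, a quick grep will pull the verb out for you.  A sketch- the printf line just builds a stand-in emcli.list so the example runs anywhere; in practice you’d already have the real file from “./emcli help > emcli.list”:

```shell
# Build a stand-in emcli.list, (in practice: ./emcli help > emcli.list)
printf '%s\n' \
  'System Broadcast Verbs' \
  '  send_system_broadcast -- Send a System Broadcast to users logged into the UI' \
  > emcli.list
# Case-insensitive search, with line numbers, for the verb we're after
grep -in 'broadcast' emcli.list
```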

Once you know the verb, then you can request detailed information from the EMCLI verb help command:

$ ./emcli help send_system_broadcast
 emcli send_system_broadcast
 [-toOption="ALL|SPECIFIC" (default is ALL)]
 [-to="comma separated user names"]
 [-messageType="INFO|CONF|WARN|ERROR|FATAL" (default is INFO)]
 -message="message details"

 -toOption: Enter the value ALL to send to all users logged into the Enterprise Manager UI; enter SPECIFIC to send to a specific EM user.
 -to: Comma-separated list of users.  This is only used if -toOption is set to SPECIFIC.
 -messageType: Type of System Broadcast; it can be one of the types listed above.
 -message: Message to be sent in the System Broadcast.  It must be a maximum of 200 characters.
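Note that 200-character cap on -message.  If you build broadcast text in a script, a small guard keeps you under the limit- a sketch, with the message text and variable names purely illustrative:

```shell
# Hypothetical guard: trim a scripted message to the 200-character limit
# before passing it to the emcli verb.
long_msg="Applying EM patch at 6pm MST. The console will be unavailable for roughly thirty minutes while the OMS restarts. Please save your work and log out beforehand, and watch your email for the all-clear once the maintenance window has closed."
msg="${long_msg:0:200}"   # enforce the 200-character maximum
echo "Message length after trim: ${#msg}"
# ./emcli send_system_broadcast -toOption="ALL" -messageType="WARN" -message="${msg}"
```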

EM CLI verbs can be issued two different ways- as a single command from the host command line interface, or internal to the EM CLI utility.  For a command to execute successfully, you must first log in to the EM CLI; otherwise, you’ll receive an unauthorized error like the one below:

$ ./emcli send_system_broadcast -toOption="ALL" -message="System Maintenance, 6pm"
 Status:Unauthorized 401
$ ./emcli login -username=sysman
 Enter password :

Login successful
$ ./emcli send_system_broadcast -toOption="ALL" -messageType="WARN" -message="System Maintenance, 6pm"
 Successfully requested to send System Broadcast to users.

Note: If you upgraded your EM12c to EM13c, ensure you synchronize your EM CLI library, (the sync verb) before attempting to use a new verb from the 13c library.

I wasn’t as satisfied with the internal CLI utility.  The error messages weren’t as helpful as when commands were issued from the host command line, and there were odd ones like the one below:

 emcli>send_system_broadcast (
 ... toOption="ALL"
 ... ,messageType="WARN"
 ... ,message="Applying EM Patch at 6pm MST, 3/1/2016"
 ... )
 com.sun.jersey.api.client.ClientHandlerException: oracle.sysman.emCLI.omsbrowser.OMSBrowserException

So I found that issuing it from the host command line offered much better results:

$ ./emcli send_system_broadcast -toOption="ALL" -message="Testing"
 Successfully requested to send System Broadcast to users.
$ ./emcli send_system_broadcast -toOption="ALL" -messageType="WARN" -message="Hello EM Users, Maintenance Outage at 6pm MST"
 Successfully requested to send System Broadcast to users.

The message shows at the top right of the screen and will continue to be displayed until the user clicks on Close-


Now, not that I’m advocating sending bogus or silly messages, but you can have some fun with this feature and send messages to individual users using the SPECIFIC toOption, calling out any EMCC user:

$./emcli send_system_broadcast  -toOption="SPECIFIC" -to="KPOTVIN" -messageType="WARN" -message="Get off my EM13c Console, NOW!!"

What does the message look like?


And no, you can’t send a specific user message to the SYSMAN user:

Following users are inactive/invalid. Cannot send System Broadcast to them: sysman.

Mean ol’ Enterprise Manager… 🙂

Posted in EM13c, Enterprise Manager, Oracle Tagged with: ,

February 17th, 2016 by dbakevlar

There was a question posted on the Oracle-l forum today that deserves a blog post for easy lookup.  It concerns your Enterprise Manager repository database, (aka the OMR.)  This database has a restricted-use license, which means you can use it for the Enterprise Manager repository, but you can’t add partitioning, RAC or Data Guard features without licensing them.  You also can’t use the Diagnostics and Tuning pack features available in Enterprise Manager against the repository database, outside of the EMDiagnostics tool, without licensing them.  You can view information about the license that is part of the OMR here.

No one wants to be open to an audit or have a surprise when inspecting what management packs they’re using.


To view what management packs you’re using for any given EMCC page, you can use the console and access it from the Setup menu from EM12c or EM13c:


With that said, Hans Forbrich made a very valuable addition to the thread and added how to disable EM management pack access in your OMR database-

Run the following to disable it via SQL*Plus as SYSDBA:
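A sketch of what that looks like, assuming the statement referenced is the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter, which governs Diagnostics and Tuning pack access- do verify against the documentation for your version before running it:

```shell
# Disable Diagnostics and Tuning pack access on the repository database,
# (valid values are NONE, DIAGNOSTIC, or DIAGNOSTIC+TUNING).
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET control_management_pack_access='NONE' SCOPE=BOTH;
EXIT;
EOF
```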


Other packs are disabled using EM Cloud Control with the appropriate privileges in the console, via the Setup menu, (with a patch or higher):


The view can be changed from licensed databases to all databases; you can then go through, adjust the management packs as licensed, and apply.


Don’t make yourself open to an audit when Enterprise Manager can make it really easy to manage the management packs you are accessing.

Posted in ASH and AWR, Database, EM13c, Enterprise Manager, Oracle Tagged with: , ,
