Category: DBaaS

April 4th, 2016 by dbakevlar

In my previous post, I submitted a job to create a test master database in my test environment.  Now, my test environment is a sweet offering of containers that simulates a multi-host scenario, but in reality, it’s all one host.


I noted that after the full copy started, my EMCC partially came down, as did my OMR, requiring me to log into both and restart them.

Upon inspection of top on the “host” for my containers, we can see that there is some serious CPU usage from process 80:

 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
 80 root 20 0 0 0 0 R 100.0 0.0 134:16.02 kswapd0
24473 oracle 20 0 0 0 0 R 100.0 0.0 40:36.39 kworker/u3+

and that this Linux process is managing the swap, (good job, Kellyn! :))

$ ps -ef | grep 80
root 80 2 0 Feb16 ? 02:14:36 [kswapd0]

The host is very slow to respond and is working hard.  Now, what jobs are killing it so?  Is it all from the test master creation?

[Screenshot: the test master creation job in progress]

Actually, no.  Remember, this is Kellyn’s test environment, so I have four databases loading to a local AWR Warehouse, and these are all container environments sharing resources.

I have two failed AWR extract jobs due to me overwhelming the environment, and I can no longer get to my database home page to even remove them.  I had to wait a bit for the main processing to complete before I could even get to this.

As it got closer to completing the work of the clone, I finally did log into the AWR Warehouse, removed two databases and then shut them down to free up resources and space.  We can then see the new processes for the test master, owned by UID 500 instead of showing as the oracle OS user, as they’re running in a different container than the one I’m running the top command from:

24473 root 20 0 0 0 0 R 100.0 0.0 52:23.17 kworker/u3+
15682 500 20 0 2456596 58804 51536 R 98.0 0.2 50:07.48 oracle_248+
 5034 500 20 0 2729304 190600 181552 R 91.1 0.8 8:59.36 ora_dbw0_e+
 2946 500 20 0 2802784 686440 626148 R 86.8 2.8 3:15.62 ora_j019_e+
 5041 500 20 0 2721952 19644 17612 R 68.6 0.1 6:36.20 ora_lg00_e+

It looks a little better though, as it starts to recover from me multi-tasking too much at once.  After a certain amount of time, the test master was finished and up:

[Screenshot: the completed test master database]

I did get it to the point during the clone where there was no swap left:

KiB Mem : 24423596 total, 178476 free, 10595080 used, 13650040 buff/cache
KiB Swap: 4210684 total, 0 free, 4210684 used. 1148572 avail Mem

They really should just cut me off some days… 🙂

Note:  Follow-up on this.  Upon comparing the host environment to my other environments, I found out the swap was waaaaaay too low.  This is a good example of what will happen if you DON’T have enough swap to perform these types of tasks, like cloning!
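If you want to catch this before it bites you, checking and growing swap only takes a minute.  Here’s a minimal sketch, (the swap file path and the 8GB size are just examples, size it for your own host):

# Check current memory and swap before kicking off a big clone job
free -m
swapon -s

# If swap is undersized, add a swap file, (sized as an example here)
sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab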

Posted in DBaaS, EM13c

March 14th, 2016 by dbakevlar

This is going to be a multi-post series, (I have so many of those going, you’d hope I’d finish one vs. going onto another one and coming back to others, but that’s just how I roll…:))

As I now have access to the Oracle Public Cloud, (OPC) I’m going to start by building out some connectivity to one of my on-premise Enterprise Manager 13c environments.  I had some difficulty getting this done, which may sound strange for someone who’s done projects with EM12c and DBaaS.


It’s not THAT hard to do, it’s just locating the proper steps when there are SO many different groups talking about Database as a Service and Hybrid Cloud from Oracle.  In this post, we’re talking about the best and greatest one-  Enterprise Manager 13c’s Database as a Service.

Generate Public and Private Keys

This is required for authentication in our cloud environment, so on our Oracle Management Service, (OMS) environment, let’s create our SSH keys as our Oracle user, (or the owner of the OMS installation):

ssh-keygen -b 2048 -t rsa

Choose where you would like to store the key files and choose not to use a passphrase.
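The session will look something like this, (the file path shown is simply the default for the oracle user):

$ ssh-keygen -b 2048 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.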

Global Named Credential for the Cloud

We’ll then use the ssh key as part of our new named credential that will be configured with our cloud targets.

Click on Setup, Security and then Named Credentials.  Click on Create under the Named Credentials section and then proceed to follow along with these requirements for the SSH secured credential:

[Screenshot: the Create Credential page for the SSH key named credential]

Now, most instructions will tell you that you need to “Choose File” to load your SSH private and public keys into the credential properties, but you can also choose to open each file and just copy and paste the information into the sections.  It works the same way.  Ensure you choose “Global” for the Scope, as we don’t have a target to assign this to yet.

Once you’ve entered this information, click on Save, as you won’t be able to test it.  I will tell you, if you don’t paste in ALL of the information from each of the public and private key files in the properties section, there are checks for the headers and footers that will cause it to throw an error, (you can see the “****BEGIN RSA PRIVATE KEY****” and “ssh-rsa” in the ones I pasted into mine.)

Create a Hybrid Cloud Agent

Any existing agent can be used for this step and will then serve two purposes.  It will be both the local host agent, as well as an agent for the cloud, which is why it’s referred to as a hybrid agent.

We’ll be using EM CLI, (the command line tool for EM) to perform this step.  I’m going to use the OMS host’s agent, but I’d commonly recommend using other hosts’ agents and creating a few to ensure higher availability.

 $ ./emcli login -username=sysman
 Enter password :

Login successful
 $ ./emcli register_hybridgateway_agent -hybridgateway_agent_list='agentname.oracle.com:1830'
 Successfully registered list of agents as hybridgateways.

Make sure to restart the agent after you’ve performed this step.  Deployments to the cloud can fail if you haven’t cycled the agent you’ve converted to a hybrid gateway before performing a deployment.
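Cycling the agent is the standard emctl stop and start from the agent home, (your agent home path will differ, of course):

$ $AGENT_HOME/bin/emctl stop agent
$ $AGENT_HOME/bin/emctl start agent
$ $AGENT_HOME/bin/emctl status agent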

Create Database Services in OPC

Once that’s done, you’ll need to create some services to manage in your OPC, so create a database service to begin.  I have three to test out with my EM13c on-premise environment that we’re going to deploy a hybrid agent to.

[Screenshot: database services in the Oracle Public Cloud]

Now that we have a couple of database services created, I’ll need to add the information regarding each new target to the /etc/hosts file on the on-premise Enterprise Manager host.

Adding the DNS Information

You can capture this information from your OPC cloud console by clicking the upper left menu and choosing Oracle Compute Cloud Service.

For each service you add, the Oracle Compute Cloud Service provides the information for the DNS entry you’ll need to add to your /etc/hosts file, along with public IP addresses and other pertinent information.

[Screenshot: the Oracle Compute Cloud Service console showing DNS and public IP information]

Once you’ve gathered this, as a user with SUDO privs on your OMS box, add these entries to your hosts file:

$ sudo vi /etc/hosts
# ###################################### #
127.0.0.1 localhost.localdomain loghost localhost
# <public IP address>   <full host name>   <short name>
# ...one entry per database service, so on, and so forth....

Save the changes to the file, and that’s all that’s required; otherwise, you’ll have to use the IP addresses for these environments to connect.

Now, let’s use our hybrid gateway agent and deploy to one or more of our new targets on the Oracle Public Cloud.

Manual Target Additions

We’ll add a target manually from the Setup menu, and choose to add a host target:

[Screenshot: Add Targets Manually, choosing to add a host target]

We’ll fill out the standard information, (agent installation directory, the sudo command to run) but we’ll also choose to use the cloud credentials we created earlier, and then we need to expand Optional Details and check mark that we’re going to configure a Hybrid Cloud Agent.  If your OS user doesn’t have sudo to root, no problem, you’ll just need to run the root.sh script manually to complete the installation.

[Screenshot: agent deployment details with the Hybrid Cloud Agent option checked]

Notice that I have a magnifying glass I can click on to choose the agent that I’ve made my hybrid cloud agent.  One of the tricks for the proxy port is to remove the default and let the installation deploy to the port that it finds is open.  It eliminates the need to guess, and the default isn’t always correct.

Click on Next once you’ve filled out these sections and, if satisfied, click on Deploy Agent.  Once that finishes, the deployment to the cloud is complete.

Next post we’ll discuss the management of cloud targets and hybrid management.

Posted in DBaaS, EM13c, Oracle

March 8th, 2016 by dbakevlar

I’ve been working on a test environment consisting of multiple containers in a really cool little setup.  The folks that built it create the grand tours for Oracle and were hoping I’d really kick the tires on it, as it’s a new setup and I’m known for doing major damage to resource consumption… 🙂  No fear, it didn’t take too long before I ran into an interesting scenario that we’ll call the “Part 2” of my Snap Clone posts.

[Image: Environment after Kellyn has kicked the tires.]

In EM13c, if you run into errors, you need to know how to properly start troubleshooting and which logs provide the most valuable data.  For a snap or thin clone job, there are some distinct steps you should follow.

The Right Logs in EM13c

The error you receive via the EMCC should direct you first to the OMS management log.  This can be found in the $OMS_BASE/EMGC_OMS1/sysman/log directory.  View the emoms.log first; for the time you issued the clone, there should be some high level information about what happened:


2016-03-01 17:31:04,143 [130::[ACTIVE] ExecuteThread: '16' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN clone.CloneDatabasesModel logp.251 - Error determining whether the database target is enabled for thin provisioning: null

For this example, we can see that our test master is shown as not enabled for thin provisioning as part of its setup.
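If you don’t want to page through the whole log, a quick grep around the clone’s timeframe will pull out the relevant entries, (a sketch, assuming the same log location as above):

$ cd $OMS_BASE/EMGC_OMS1/sysman/log
$ grep -i "thin provisioning" emoms.log
$ grep "2016-03-01 17:3" emoms.log     # narrow to the time the clone was issued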

If we log into EMCC, log into our source database, (BOFA) and then go from Database, Cloning, Clone Management, we can see that although we had requested this to be a test master database, when I overwhelmed the environment, something went wrong and this full clone hadn’t become a test master for BOFA:

[Screenshot: the Clone Management page for BOFA]

Even though the database that should be the test master is visible and highlighted on the BOFA database Clone Management page, I’m unable to Enable as a Test Master or choose the Remove option.  I could delete it, and I’d only be prompted for the credentials needed to perform the process.

[Screenshot: the delete database credential prompt]

For this post, we’re going to say that I was also faced with no option to delete the database from the EMCC.  In that case, I’d need to go to the command line interface for EM13c.

EM CLI to the Rescue

As we can’t fix our broken test master via the console, we’ll take care of it with the command line interface, (EM CLI.)

First we need to know information about the database we’re having problems with, so log into the OMR, (Oracle Management Repository, the database behind EM13c)  via SQL*Plus as a user with access to the sysman schema and get the TARGET_GUID for the database in question:

select display_name, target_name, target_guid 
from mgmt_targets where target_name like 'tm1%';
 DISPLAY_NAME
 --------------------------------------------------------------------------------
 TARGET_NAME
 --------------------------------------------------------------------------------
 TARGET_GUID
 --------------------------------
 BOFAtm1_sys
 BOFAtm1_sys
 EF9FC557D210477B439EAC24B0FDA5D9
 
 BOFA_TestMaster-03-01-2016-1
 BOFAtm1
 893EC50F6050B95012EAFA9B7B7EF005

Ignore the system entry and focus on the BOFAtm1.  It’s our target that’s having issues from our Clone Management.
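If the wrapped SQL*Plus output bugs you, a little column formatting puts each target on one line, (just a sketch):

col display_name format a30
col target_name format a12
col target_guid format a34
select display_name, target_name, target_guid
from mgmt_targets where target_name like 'tm1%';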

We need to create an entry file with the following parameters to be used by our input file argument-

vi /home/oracle/delete_cln.prop
DB_TARGET_GUID=893EC50F6050B95012EAFA9B7B7EF005
HOST_CREDS=HOST-ORACLE:SYSMAN
HOST_NAME=nyc.oracle.com
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
DBNAME=BOFAtm1
DB_SID=BOFAtm1
DB_TARGET_NAME=BOFAtm1

Next, log into the EM CLI as the sysman user, (or if you’ve set up yours with proper EM CLI logins, then use that…)

$ ./emcli login -username=sysman
 Enter password :
Login successful
./emcli delete_database -input_file=data:/home/oracle/delete_cln.prop
Submitting delete database procedure...
2D146F323DB814BAE053027AA8C09DCB
Deployment procedure submitted successfully

Notice the output from the run: “…procedure SUBMITTED successfully”.  This isn’t an instantaneous execution; it will take a short while for the deletion and removal of the datafiles to take place.
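You can keep an eye on the submitted deployment procedure from EM CLI, too, (a sketch; check the EM CLI reference for the exact procedure type name in your release):

$ ./emcli get_instances -type=DBPROV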

There are a ton of EM CLI verbs for creating, managing and automating DBaaS; this is just demonstrating the use of one of them when I ran into an issue due to resource constraints causing a failure on my test environment.  You can find most of them here.

After some investigation of host processes, I noted that the swap was undersized, and after resizing it, the job completed successfully.

Posted in DBaaS, EM13c, Enterprise Manager, Oracle

March 1st, 2016 by dbakevlar

With EM13c, DBaaS has never been easier.  No matter if your solution is on-premise, hybrid, (on-premise to the cloud and back) or all cloud, you’ll find the ability to take on DevOps challenges and ease the demands on you as the DBA, who is so often viewed as the source of much of the contention.


On-Premise Cloning

In EM13c, on-premise clones are built in by default and easier to manage than they were before.  The one pre-requisite I’ll ask of you is that you set up your database and host preferred credentials for the location where you’ll be creating any databases.  After logging into the EMCC and going to our Database Home Page, we can choose a database that we’d like to clone.  There are a number of different kinds of clones-

  • Full Clones from RMAN Backups, standby, etc.
  • Thin Clones with or without a test master database
  • CloneDB for DB12c

For this example, we’ll take advantage of a thin clone, so a little setup will be in order, but as you’ll see, it’s so easy that it’s just crazy not to take advantage of the space savings that can be offered with a thin clone.

What is a Thin Clone?

A thin clone is a virtual copy of a database that, in DevOps terms, uses a test master database, (a full copy of the source database) as a “conduit” to create an unlimited number of thin clone databases, saving up to 90% of the storage that a separate full clone for each would need.  As a quick example, ten full clones of a 1TB database would need 10TB of storage, while a 1TB test master plus ten thin clones that only store changed blocks can come in at little more than the test master itself.

[Diagram: a test master database serving multiple thin clones]

One of the cool features of a test master is that you can perform data masking on it so that there is no release of sensitive production data to the clones.  You also have the ability to rewind.  In other words, let’s say a tester is doing some high risk testing on a thin clone and gets to a point of no return.  Instead of asking for a new clone, they can simply rewind to a snapshot in time from before the issue occurred.  Very cool stuff…. 🙂

Creating a Test Master Database

From our list of databases in cloud control, we can right click on the database that we want to clone and proceed to create a test master database for it:

[Screenshot: the right click menu to create a test master database]

The wizard will take us through the steps to create the test master properly.  This test master will reside on an on-premise host, so there’s no need for a cloud resource pool.

[Screenshot: the test master wizard, credentials section]

As stated earlier, it will pay off if you have your logins set up as preferred credentials.  The wizard will allow you to set those up as “New” credentials, but if there is a failure and they aren’t tested and true, it’s nice to know you already have this out of the way.

Below the Credentials section, you can decide what point you want to recover from.  It can be the time the job is deployed or a point in time.

You have the choice to name your database anything.  I left the default, using the naming convention based off the source, with the addition of tm, for Test Master, and the number 1.  If this were a standard database, you might want to make it a RAC or RAC One Node.

Then comes the storage.  As this is on-premise, I chose the same Oracle Home that I’m using for another database on the nyc host and used the same preferred credentials for normal database operations.  You would want to place your test master database on a storage location separate from your production database so as not to create a performance impact.

[Screenshot: the test master wizard, database and Oracle Home locations]

The default location for storage of datafiles is offered, but I do have the opportunity to use OFA or ASM for my options.  I can set up Flashback, too.  Whatever listeners are discovered for the host will be offered up, and then I can decide on a security model.  Set up the password model that best suits your policies, and if you have a larger database to clone, you may want to up the parallel threads that will be used to create the test master database.

I always caution those that would attempt to max the number out, thinking more means better.  Parallel can be throttled by a number of factors, and those should be taken into consideration.  You will find with practice that there is a “sweet spot” for this setting.  In your environment, 8 may be the magic number due to network bandwidth or IO resource limitations.  You may find it can be as high as 32, but do take some time to test and get to know your environment.

[Screenshot: the test master wizard, listener, security and parallelism settings]

Now come the spfile settings.  You control these, and although the default spfile for a test master is used here, for a standard clone you may want to update the settings to limit the resources allocated to a test or development clone.

Now, if you have special scripts that need to be run as part of your old manual process of cloning, you can still add them here.  That includes BEFORE and AFTER the clone.  For the SQL scripts, you need to specify a database user to run the script as, too.

If you started a standard clone and meant to create a test master database, no fear!  You still have the opportunity to change this into a test master at this step, and you can create a profile to add to your catalog options if you realize that this would be a nice clone process to make repeatable.

[Screenshot: the test master wizard, custom scripts and profile options]

The EM job that will create the clone is the next step.  You can choose to run it immediately and decide on what kind of notifications you’d like to receive via your EM profile, (remember, the user logged into the EMCC creating this clone owns the credentials that will be used for notification….)  You can also choose to perform the clone later.

[Screenshot: the test master wizard, schedule and notification options]

The scheduling feature is simple to use, allowing you to choose a date and time that makes the clone job schedule as efficient as possible.

[Screenshot: the clone job scheduling page]

Next, review the options you’ve chosen and if satisfied, click on Clone.  If not, click on Back and change any options that didn’t meet your expectations.

If you chose to run the job immediately, the progress dashboard will be brought up after clicking Clone.

[Screenshot: the clone procedure activity progress dashboard]

Procedure Activity is just another term for an EM job, and you’ll find this job listed in Job Activity.  It’s easier to watch the progress from here, and as checkmarks show in the right hand column, each step has completed successfully for your test master or clone.

Once the clone is complete, remember that this new database is not automatically monitored by EM13c unless you’ve set up Automatic Discovery and Automatic Promotion.  If not, you’ll need to manually discover it.  You can do that following this blog post.  Also keep in mind, you need to wait until the clone is finished so you can set the DBSNMP user status to unlocked/open and ensure the password is secure.
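Unlocking DBSNMP on the new clone is quick once you’re connected, (the password below is just a placeholder, choose your own secure one):

$ sqlplus / as sysdba

SQL> alter user dbsnmp identified by "Sup3rS3cret#1" account unlock;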

Now that we’ve created our test master database, in the next post, we’ll create a thin clone.

Posted in DBaaS, EM13c, Oracle

April 23rd, 2014 by dbakevlar

The last couple weeks I’ve been lucky enough to get time with a new ZFSSA Simulator, (it will be out soon for everyone, so you’ll have to just play with the current one available, patience grasshopper! :)) and spent it checking out the newest features available with Database as a Service, (DBaaS) Snapclone via Enterprise Manager 12c.  I’m really thrilled with Snapclone.  In two previous positions, I spent considerable time finding new ways of speeding up RMAN duplicates to ease the stress of weekly datamart builds that were sourced off of production, and this feature would have been a lifesaver back then!

As Oracle’s DBaaS offering is so full featured, I think it’s easy to have misconceptions about the product or find blog posts and documentation on earlier releases that lead to misconceptions.  Due to this, we thought it might help if I tried to dispel some of the myths, letting more folks realize just how incredible Snapclone really is!

Misconception #1- Snapclone only works with the ZFS Storage Appliance

DBaaS does offer a fantastic hardware solution that requires a hardware NAS like NetApp or ZFS/ZFSSA, but that’s not the only option.  There is also the software solution, which can work on ANY storage.  There’s no requirement on where the test master database, (used to track changes and save considerable space vs. a traditional cloning method…) must reside, and that means it can be on different storage than production.

There are benefits to both the hardware and software solutions of Snapclone.  Keep in mind, Oracle prefers to support hardware and software that are engineered to work together, but they realize that not all customers will have a ZFS or NetApp solution in place, so they ensure that Snapclone solutions are available for any storage a customer may have in their shop.  Some of those benefits and requirements are listed below, but you can clearly see, Snapclone does not have a requirement of ZFS or NetApp:

[Table: benefits and requirements of the Snapclone hardware and software solutions]

Misconception #2-  Snapclone Requires a Cloud Implementation

It is true that Snapclone requires database pools to be configured and built, but this is in the environment’s best interest, ensuring that it is properly governed and that placement is enforced.  This feature is separate from an actual “cloud” implementation, and the two shouldn’t be confused.  There are often dual definitions for terms, and cloud is no different.  We have private cloud, public cloud, the over-all term for cloud technology, and then we have database pools that are part of Snapclone, which are configured in the cloud settings of DBaaS.  They should not be confused with having to implement “cloud”.

Misconception #3- Unable to Perform Continuous Refresh on Test Master Database

There are a couple easy ways to accomplish a refresh of the test master database used to clone from production.  Outside of the traditional profiles that would schedule a snapshot refresh, DBAs can set up active or physical Data Guard or can use storage replication technologies that they already have in place.  To see how these look from a high level, let’s first look at a traditional refresh of a test master database:

[Diagram: a traditional refresh of a test master database from production]

You’ll notice that the diagram states the test master is “regularly” refreshed with a current data set from production.  If you inspect the diagram below, you will see an example of a software or hardware refresh scenario to the test master database, (using data masking and subsetting if required) before creating Snap Clones.

[Diagram: refreshing the test master via software or hardware and creating Snap Clones]

Now, as I said earlier, you can use a standby database, too, to perform a clone.  The following diagram shows the simplicity of a standby with Data Guard or GoldenGate.  Notice where Snap Clone takes over-  it’s at the standby database tier, so you still get the benefit of the feature, but can utilize comfortable technology such as Data Guard or GoldenGate:

[Diagram: Snap Clone sourced from a standby database]

Misconception #4- Snapclone Can Only be Used with DB12c

Snapclone works with any supported version of the database from 10gR2 to 12c.  Per the DBaaS team of experts, it may work on earlier versions; they just haven’t certified it, so if there are any guinea pigs out there that want to test it out, we’d love to hear about your experience!

The process of Snapclone is very simple once the environment is set up.  With the Rapid Start option, it’s pretty much the “Easy button” for setting up the DBaaS environment, and the process is solidified once service templates are built and the service request has been submitted in the Self Service Portal.  There isn’t any more confusion surrounding where an Oracle home installation should be performed, what prefix is used for the database naming convention, or other small issues that can end up costing an environment in unnecessary complexity later on.

Misconception #5- Snapclone doesn’t Support Exadata

A couple folks have asked me if Snapclone is supported on Exadata, and in truth, Exadata with Snapclone offers a unique opportunity with consolidations and creating a private cloud for the business.  I’ll go into it in depth in another blog post, as it deserves its own post, but the following diagram does offer a high level view of how Snapclone can offer a really cool option with Exadata:

[Diagram: Snapclone with Exadata consolidations]

There are so many features provided by Snapclone that it’s difficult to keep on top of everything, but trying to dispel the misconceptions is important so people don’t miss out on this impressive opportunity to save companies time, money and storage.  I know my whitepaper on DBaaS was over 40 pages and I only focused on NetApp and ZFS, so I do understand how easy it is to get lost, but hopefully this first post will get people investigating DBaaS options more!

Posted in Cloud, DBaaS, Enterprise Manager, Oracle
