Category: Cloud

January 30th, 2017 by dbakevlar

I don’t want to alarm you, but there’s a new Delphix trial on AWS!  It uses your own AWS account and, with a simple setup, lets you deploy a trial Delphix environment.  Yes, you heard me right-  with just a couple of steps, you could have your own setup to work with Delphix!

There’s documentation to make it simple to deploy and simple to understand, along with use cases tailored to each role (IT Architect, Developer, Database Administrator, etc.).

This was a huge undertaking and I’m incredibly proud of Delphix for offering this to the community!

So get out there and check this trial out!  All you need is an AWS account, and if you don’t have one, it only takes a few minutes to create and set one up, with just a final verification before you can get started.  If you have any questions or feedback about the trial, don’t hesitate to email me at dbakevlar at gmail.

Posted in AWS Trial, Cloud, Delphix

January 9th, 2017 by dbakevlar

We DBAs have a tendency to overthink everything.  I don’t know if the tendency to overthink is found only in DBAs or if we see it in other technical positions, too.

I believe it explains some of why we become loyal to one technology or platform.  We become accustomed to how something works and it’s difficult for us to let go of that knowledge and branch out into something new.  We also hate asking questions-  we should just be able to figure it out, which is why we love blog posts and simple documentation.  Just give us the facts and leave us alone to do our job.

Take the cloud-  many DBAs were quite hesitant to embrace it.  There was a fear of security issues, challenges with networking and, more than anything, a learning curve.  As is common, hindsight is 20/20.  Once you start working in the cloud, you often realize that it’s much easier than you first thought it would be and your frustration is your own worst enemy.

So today we’re going to go over some basic skills the DBA requires to manage a cloud environment, using Amazon Web Services (AWS) as our example, and the small changes required to do what we once did on-premise.

In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.

Understanding Locations, Regions and Zones

EC2 is built out into regions and zones.  Knowing which region you’re working in is important, as it allows you to “silo” the work you’re doing and, in some ways, isn’t much different than a data center.  Inside each of these regions are availability zones, which isolate services and features even further, allowing you to secure resources at a precise level and share them only when you deem they should be shared.

Just as privileges granted inside a database can be both a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.

Amazon provides a number of links with detailed information on this topic, but here are the tips I think are important for a DBA to know:

  1.  Before setting up anything that is part of a complete solution requiring multiple setup page configurations, ALWAYS check the region in the upper right corner.  I was surprised when it would change from page to page or after a login.

  2.  If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to which region under the Resources section.  You can also check from the command line, as sketched below.
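This is only a minimal sketch, assuming the AWS CLI is installed and configured, and the region name is just an example:

# Show the default region configured for the CLI profile
aws configure get region

# List the instance IDs deployed in a specific region
aws ec2 describe-instances --region us-west-1 --query "Reservations[].Instances[].InstanceId"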

Understanding Security Keys

Public key cryptography makes the EC2 world go round.  Without this valuable 2048-bit SSH-2 RSA key encryption, you can’t communicate with or log into your EC2 host securely.  A key pair, a combination of a private and a public key, should be part of your setup for your cloud environment.

Using EC2’s mechanism to create these is easy and eases management.  It’s not the only way, but it does simplify things and, as you can see above in the resource information from the dashboard, it also offers you a one-stop shop for everything you need.

When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using.  It’s important that you keep track of it, as there’s no way to recreate the private key and you’ll need it to connect to the EC2 host.

Your key pair is easy to create: access your EC2 dashboard, scroll down on the left side and click on “Key Pairs”.  From this console, you’ll have the opportunity to create a key pair, import a pre-existing key or manage the ones already in EC2.
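If you’d rather script it, the AWS CLI can create the key pair and save the private key in one step.  A minimal sketch, assuming the CLI is configured; the key name is hypothetical:

# Create the key pair and save the private key locally (hypothetical key name)
aws ec2 create-key-pair --key-name dba-trial-key --query "KeyMaterial" --output text > dba-trial-key.pem

# Lock the file down or ssh will refuse to use it
chmod 400 dba-trial-key.pem

# Confirm the key pair is registered in the region you expect
aws ec2 describe-key-pairs --region us-west-1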

Before creating one, always verify the region you’re working in, as we discussed in the previous section.  If you’re experiencing issues with your key, check for typographical errors and confirm that the private key file you’re using matches the key pair name listed in the console.

If more than one group is managing the EC2 environment, think carefully before deleting a key pair.  I’ve experienced the pain caused by a key removal that created a production outage.  Creating a new key pair is simple; rolling a new key pair out across application and system tiers after one that was still needed has been removed is not.

Understanding Roles and Security

Security Groups are siloed for a clear reason and nowhere is this more apparent than in the cloud.  To ensure that the cloud is secure, setting clear and defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.

As we discussed with Key Pairs, our Security Groups are also listed by region under Resources, so we know at a high level that they exist.  If we click on the Security Groups link under Resources in the EC2 Dashboard, we go from seeing a count of five security group members to viewing the full list of security groups.

If you need to prove that these belong to the N. California (us-west-1) region, click on the region in the top right corner and change to a different region.  For our example, I switched to Ohio (us-east-2): the previous security groups aren’t listed and only the default security group for the Ohio region is displayed.

Security groups should be treated in the cloud the same way we treat privileges inside a database-  granting the least privileges required is best practice.
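The same least-privilege review can be scripted with the AWS CLI.  A minimal sketch; the group ID, port and CIDR below are only hypothetical examples:

# List the security groups in a region along with their inbound rules
aws ec2 describe-security-groups --region us-west-1

# Allow SSH only from a single admin workstation rather than the whole internet (hypothetical group ID and CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32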

Understanding How to SSH to a Host

You’re a DBA, which means you’re most likely comfortable at the command line.  Logging into a box via SSH is as natural as walking, and everything we’ve gone through so far was to prepare you for this next step.

Whichever command line tool is your favorite, PuTTY or Terminal, if you’ve set up everything in the previous sections correctly, then you’re ready to log into the host, aka the instance.

  1.  Ensure your downloaded private key is saved in an easily accessible spot for you to use to log in, or that you know the username/password (keys just make this easier…)
  2.  Gather the information about “instances” by clicking on the EC2 dashboard, then clicking on Instances.
  3.  The Public DNS and the Public IP are displayed; note the region, too.

You can use this information to ssh into the host.  The Public DNS already includes the region and the compute.amazonaws.com suffix, so the command looks like this:

ssh -i "<keypair_name>.pem" <osuser>@<public dns or ip address>

Once logged in as the OS user, you can SU over to the application or database user and proceed as you would on any other host.

If you attempt to log into a region with a key pair from another region, it will state that the key pair can’t be found-  another example of why regions matter.
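To put it all together, here’s what a login might look like with hypothetical values (your own values will differ, and the OS user depends on the AMI; ec2-user is the default for Amazon Linux):

# hypothetical key file, OS user and host
chmod 400 dba-trial-key.pem
ssh -i "dba-trial-key.pem" ec2-user@ec2-54-183-0-10.us-west-1.compute.amazonaws.com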

Understanding How to SCP a File

This is the last area I’ll cover today.  (I know, a few of you are saying, “Good, I’ve already got too much in my head to keep straight, Kellyn…”)

With just about any cloud offering, you can bring your own license.  Although there are a ton of pre-built images (AMIs in AWS, VHDs in Azure, etc.), you may need to use a bare metal OS image and load your own software, or, like most DBAs, bring over patches to maintain the database you have running out there.  Just because you’re in the cloud doesn’t mean you don’t have a job to do.

Change over to the directory that contains the file that you need to copy and then run the following:

scp -i <keypair>.pem <file name to be transferred> <osuser>@<public dns or ip address>:/<directory you wish to place the file in>/.
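For example, pushing a patch file up to a staging directory might look like the sketch below; the file, key, host and directory names are all hypothetical:

# hypothetical file, key pair, host and target directory
scp -i dba-trial-key.pem patchfile.zip ec2-user@ec2-54-183-0-10.us-west-1.compute.amazonaws.com:/u01/stage/.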

If you try to use a key pair from one region to SCP a file to a host (instance) in another region, you won’t receive an error, but it will act as if you skipped the “-i” and the key pair, and you’ll be prompted for the user’s password:

<> password: 

pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip             100%   20MB  72.9KB/s   04:36

This is a good start for the DBA getting into the cloud without over-thinking it.  I’ll be posting more in the upcoming weeks that will assist not only those already in the cloud, but also those wanting to find a way to invest more in their own cloud education!

Posted in Cloud, Delphix, Delphix Express

April 7th, 2016 by dbakevlar

At Collaborate on Monday, Oracle has been offered a spot to highlight the Oracle Management Cloud.  OMC is a suite of next-generation monitoring and management cloud services designed for heterogeneous environments, all powered by Oracle’s own cloud!  The first three OMC services (Application Performance Monitoring, Log Analytics and IT Analytics) were launched at OpenWorld 2015.  Both Log Analytics and IT Analytics contain must-have capabilities for DBAs.  This is the only OMC session that will be available at Collaborate and it’s a not-to-be-missed session with an all-star cast presenting it.  Manning the slides will be Brian Hengen and Yuri Grinshteyn, while Courtney Llamas and I will be sharing our expertise during the demonstrations.

omc1

 

We’ll be covering how to grapple with resource usage in large environments, identifying what really needs your time (versus what you can just let go!) and how you can help your manager manage your infrastructure better.  We all know how much damage false assumptions can cause to your ability to respond to priorities and demands.  IT Analytics has the intelligence you need built in, so no more guesswork based on capacity planning scripts, Excel spreadsheets and graphing to prove to the business that you have a handle on where your energy is most needed.

We’ll also discuss how to extract valuable insight from mountains of logs.  We DBAs know how valuable log data is, but we also know how voluminous it is.  Having it reduced to human scale and correlated seamlessly with the analytics discussed above is an impressive feat, and you’ll see how powerful log data can be when displayed this way for the business.  No more “going down rabbit holes” trying to digest log data on one server, matching it to output from a user interface from a tool on another server and finally verifying that it actually had anything to do with the problem you were investigating in the first place.

So ensure you’ve added this session to your Collaborate schedule for Monday at 10:15 in room Palm B.  An impressive group of folks from Oracle have made sure it’s going to be an interesting, informative and empowering session on the Oracle Management Cloud.  I have a feeling the room may be packed, so make sure to register beforehand!

Posted in Cloud, Oracle Management Cloud

February 4th, 2016 by dbakevlar

I’ve been discussing the importance of the network to database performance for years, especially once I started working on VLDBs (Very Large Databases), but it’s a topic that is often disregarded.  Now that I’m working more and more in the cloud, the importance of the network to our survival has become even more evident.

In each and every cloud project I’ve been involved in, there are inevitably multiple challenges that turn to the network administrator for a solution.  I don’t blame the administrator in any way when he becomes exasperated by our requests.  Just as it is my solemn duty to protect the database, the network administrator is the sole protector of the network.  You’ll hear a frustrated DBA say, “just open the &^$# network up!  Let’s just get this connected to our cloud provider!”  I have to admit that this request must be akin to someone asking a DBA to provide SYSDBA to a developer in production.

vizag-real-estate-is-not-happening

So yes, there are a lot of moving parts in a cloud environment.  No, not all of them are at the database level, but many of them could be at the network level.  This means that your new cloud environment must connect past firewalls, proxies, blocked ports and authentication steps that may not have been required back in the purely on-premise days.

hybrid_cloud_agent_to_oms_comm_ha

Yeah, there’s a bit more to the network than demonstrated in the picture above.

The database needs a secure connection through the firewall and may require proxy configuration to be accessed via a web browser.  The application interfaces used to manage these environments may require browser proxy settings, which may be controlled by automated processes rather than a manual proxy setting.  You may have network configurations that differ from one local office to another.  And we’ve only discussed configuration-  we haven’t even considered speed, packet size and bandwidth.

So here is my recommendation-  make friends with your network administrator.  In fact, take the ol’ chap out for a beer or two.  Learn about what it takes to master, protect and secure the company’s network from the threats outside.  Learning about the network will provide you with incredible value as a cloud administrator and you may get a great friend out of the venture, too.  For those of you that don’t make friends with your network admin, I don’t want to be hearing about any mishaps with phenobarbital to get the information, OK? 🙂

JGLCheers

 

 

 

Posted in Cloud

September 15th, 2015 by dbakevlar

So I’m going to start this post with an admission-  I don’t have access to a cloud environment to test this out.  But I know what I would do first if I experienced slow response time on database creation or cloning via EM12c to the cloud, and I’d like to at LEAST post what I would do, to give others the chance to test it out and see if it offers them some help.

Knowing what is causing slow performance is essential to troubleshooting any problem.  I’ve had a few folks come to me with complaints that their hybrid cloud clones or database creations in the cloud are slow, but from the little data I’ve seen in the exchanges, the telltale signs are that the cloud isn’t the issue.  There are some Enterprise Manager features in 12.1.0.5 that may be able to assist, and that’s what we’ll discuss today.

Page Performance

Although log data and diagnostic utilities are crucial when I’m troubleshooting anything, simple visuals based on that data can be very helpful.  The Page Performance view for the Repository is one of those features, but there is one caveat-  the page that is experiencing slow performance (i.e. the database creation wizard in Cloud Control to the PaaS in the cloud, etc.) must have been run at least twice in the last 24 hours to be captured in this tool.

The data provided by this tool is based on a considerable amount of what we collect via the EM Diagnostics utilities, so although it doesn’t provide as deep a diagnosis as the utility does, it may shed some high-level light on what you’re facing.

To access the page, you need to be in the OMR (Repository) target, which is commonly accessed via Setup, Manage Cloud Control, Repository; from there, click on OMS and Repository, Monitoring, Page Performance.  You’ll see that the second tab in is Page Level Performance, which holds the data we’re about to go deeper into today.

pageperf1

Upon accessing the dashboard, you’ll quickly notice that many pages have very different requirements to produce the output that you view in the console pages, (i.e. not all plugins and collections are created the same… :))

pageperf2

Now I want you to focus on the headers to note the data collected:

pageperf3

Cloud Control breaks down the page processing time for a number of different areas-

  • Total Page Processing Time in Seconds
  • Page Processing Time in the OMS, (Oracle Management Service)
  • Page Processing Time in the OMR, (Oracle Management Repository)
  • Processing Time in Browser/Network, Multi-Request Pages
  • Requests per Page, (very good when it comes to efficiency)
  • Number of SQL/PLSQL Executions per Page

The one I’m going to focus on is the Processing Time in Browser or Network.  As you’ll note, there are sort options for each of the Avg and Max totals that are collected in each column.

If we sort by Max Processing Time in Browser/Network, we can choose the first entry in the list that contains an active report (the others are stale, so those pages aren’t links you can click on…):

pageperf4

Again, I don’t have any cloud access to demonstrate this on, so as you can see, the RAC Top Activity page is what we’ll use for this example of how we can inspect wait information on page response.

pageperf5

pageperf6

The top of the page clearly shows the time distribution across Database, Java, Agent and Network/Browser.  In the lower section (actually to the far right in the console) you can see the number of seconds allocated to each type of task by the process, which for our example is mostly Java time.  If there is SQL involved, it also breaks down the execution plans and wait events.

For this example, there is a small amount of network time, (this is the “High Availability” environment at my work, so it’s a pretty buff setup) but it does display the light pink section that shows there are some waits for the network and/or browser.

If WE WERE inspecting a cloud usage process, we should see the time distribution for the network, and I’m pretty sure we would see that displayed clearly in these pages (again, this is an assumption on my part, as I haven’t had first-hand experience to investigate it…).
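If you want a rough, independent check of network time to the console from a client machine, outside of EM’s own instrumentation, even curl’s timing output can help.  This is only a sketch-  the OMS host and port below are hypothetical, so point it at whatever console URL you normally use:

# Time DNS lookup, TCP connect and total response for the console page (hypothetical host and port)
curl -k -s -o /dev/null -w "dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n" https://oms-host.example.com:7802/em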

Summary

If you are working in the Oracle cloud and are experiencing slowness, run the cloud process via the console a couple of times (even if it does time out) and consider using the Page Performance feature to make a quick inspection of the report.  Although the example above clearly shows how much time is being spent on Java, you may find that for the cloud you’re dealing with network slowness that requires some investigation into the firewall, DNS resolution and other challenges-  but this report may quickly show the Oracle cloud as the innocent party.

Posted in Cloud, EM12c Performance, Enterprise Manager, Oracle

June 23rd, 2015 by dbakevlar

The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training on the Hybrid Cloud.  There were many take-aways from the days we spent in the office (which is saying a lot-  most of you likely know how much I hate working anywhere but from home… :)) and I thought I would share a bit of this information with those who are aching for more details on this new and impressive offering in release 12.1.0.5.

If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.

Cloud Control Cloning Options

Cloud Control now includes a number of new options for database targets in EM12c.  These new drop-down options include cloning, to ease access to the new hybrid cloning.  Once you’ve logged into Cloud Control, go to Targets, then Databases, and choose a database you wish to implement cloning features for.  Right click on the target and the drop-downs will take you to the cloning options under Oracle Database, then Cloning.

tm_creation

There will be the following choices from this drop down:

  • Clone to Oracle Cloud-  ability to directly clone to the cloud from an on-premise database.
  • Create Full Clone-  Full clone from an image copy or an RMAN backup.
  • Create Test Master- Create a read-only test master database from the source target.
  • Enable as a Test Master- Use the database target as a test master, which will render it read-only and it would rarely be an option for a production database.
  • Clone Management-  Manage existing cloning options.

Using a test master is essential for snap clones, which offer significant space savings and eliminate the time required for standard cloning processes.  The test master is in read-only mode, so it will need to be refreshed or recreated with an up-to-date copy for new cloning procedures (at which point another option, “Disable Test Master”, appears in the drop-down).

Snapshot Clone

For the example today, we’ll use the following production database:

tm1

We’ll use an existing test master database to perform our clone from:

tm4

We can right click on the database and choose to create a clone.  This clone is going to be created via snapclone, so keep this in mind as we inspect the times and results of this process.

tm8

We choose to create a snapshot clone of the pluggable database.  Each snapshot clone is essentially just a copy of the file headers, with only the changed blocks consuming space on the read-only or read-write clone.

Fill out the pertinent data for the clone, using the correct preferred credentials with SYSDBA privileges: name the new pluggable database, provide the name you’d like displayed in Cloud Control, and enter the PDB administration credentials, the password and the password confirmation.  Once that’s done, choose whether you’d like to clone it to a different container (CDB) than the one the source resides on and then add the database host and ASM credentials.

Once you click next, you have more advanced options to view and/or setup:

tm10

The advanced options allow you to change the sparse disk group to create the clone on, storage limits and database options.

tm11

You can then choose data masking to protect any sensitive data, but keep in mind that once you do so, you will no longer be using a snapclone due to the masking; the option to implement it is simply available at this step.  You can also set up any pre, post or SQL scripts that need to be run as part of the clone.  This could include resetting sequences, user passwords, etc.

The next step allows you to schedule the clone in the future or run immediately.

tm12

You can also choose the type of notifications, as this is simply an EM Job that is submitted to perform the cloning process.  Once you’ve reviewed the cloning steps chosen via the wizard, you can then submit.

tm13

Once the job’s been submitted, the submitter can monitor the job steps:

tm14

 

Success

Once the clone has completed, you can view each of the steps, including the time each took.

tm15

The source database was over 500 GB and was cloned in less than one minute!  You will also see the new cloned database in the targets list:

tm16
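If you prefer to confirm the new target from the command line instead of the console, EM CLI can list it once it’s registered.  A minimal sketch, assuming emcli is installed and configured-  the clone name here is hypothetical, and the exact target type string for a PDB can vary by release, so check emcli help get_targets:

emcli login -username=sysman
emcli get_targets -targets="%CLONE_PDB%:%"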

If curious, note that this is a fully cloned database that is on ASM, which you can view, just as you would for any other database.

Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.

tm17

More to come soon and thanks to Oracle for letting us get our hands on the new 12.1.0.5 hybrid cloning!

Posted in Cloud, Enterprise Manager

June 18th, 2015 by dbakevlar

Last week’s release of 12.1.0.5 was a pleasant surprise for everyone out in the Oracle world.  This release hit the bulls-eye for another cloud target of Oracle’s, announcing Enterprise Manager 12c’s offering of single pane of glass management for the hybrid cloud.  The EM12c team has been trained on and has been testing out the new features of this release with great enthusiasm, and I have to admit, it’s pretty cool stuff, folks!

Why is the Hybrid Cloud Important?

Many companies are still a bit hesitant to embrace the cloud, or, due to sensitive data and security requirements, aren’t able to take advantage of cloud offerings for their production systems.  Possessing a powerful tool like Enterprise Manager to help guide them to the cloud could make all the difference-

hybcld1

 

You’re going to start hearing the EM folks use the term “Single Pane of Glass” a lot in the upcoming months.  It’s part of the overall move away from the perception that EM is still just a DBA tool, and toward getting everyone to embrace the truth that EM12c has grown into an infrastructure tool.

What is the hybrid cloud?

We’ve discussed the baby steps that many companies are taking with the hybrid cloud (vs. others that are jumping in, feet first! :)).  With the hybrid cloud, a company can uphold those requirements and maintain their production systems within on-premise sites, while enforcing data masking and subsetting so that sensitive data is never presented outside the production database (including to the test master database that is used to track the changes for the snapclone copies…).  This then allows them, with Database as a Service, to clone development, test and QA environments to a less expensive cloud storage platform without exposing any sensitive data.

clone_datamask

Once the data masking or any other pre-clone data cleansing/subsetting is performed, the Test Master database is created and can be used to create as many snap clones as needed.  These snaps can be used for development, QA or testing.  The space savings continue to increase as snapclone copies are added, since each clone only consumes space for its changed blocks while the full data set lives in the test master database.  This can add up to a 90% storage savings over traditional full database copies.
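To put rough, purely illustrative numbers on that: say the source, and therefore the test master, is 500 GB and you need ten copies for development, test and QA.  Ten traditional full clones would consume roughly 10 x 500 GB = 5 TB.  With snap clones, you pay for the 500 GB test master once and each clone only consumes space for its changed blocks, so if each copy drifts by about 10 GB, ten clones land at roughly 500 GB + (10 x 10 GB) = 600 GB, close to a 90% reduction compared to the 5 TB of full copies.  The per-clone change rate is an assumption on my part and will vary with your workload.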

Hybrid Cloning

The power of hybrid cloning is the Hybrid Cloud Gateway, a secure SSH tunnel that allows seamless communication between on-premise systems and the cloud.

hybcld3

Types of Clones

There are four types of clones currently offered with Hybrid cloning-

  • On-premise source cloned to the cloud.
  • Cloud source, cloned to on-premise.
  • Cloud source cloned in the cloud.
  • Migrate from a schema in a database to a PDB in the cloud.

Simplicity is Key

The user interface is simple to engage: use it to create a clone or clones, save off templates and build out a catalog to be used for a self-service portal.  When cloning, the status dashboard is a great quick view of the success of each cloning step:

hybcld4

If deeper investigation of any single step needs to be performed, the logging is no different than inspecting an EM job log (because an EM job is exactly what it is… :)):

hybcld5

I’ll be returning from Europe soon and hope to do more with the product, digging into this great new feature, but until then, here’s a great overview of 12.1.0.5’s brand new star!

 

Posted in Cloud, Enterprise Manager

January 13th, 2015 by dbakevlar

Every job comes with tasks that no one likes to perform, and database administration is no exception.  Patching is one of those necessary tasks, and when we are expected to do more with less every day, patching another host, another agent, another application is something no one looks forward to.  It’s not that it goes wrong, but that it’s just tedious, and many DBAs know there are a lot of other tasks that could be a better use of their time.  Patching is still an essential and important task, we all know that.  OPatch and other patching utilities from Oracle make patching easy, but it can still remove a lot of time from a resource’s day.

Enterprise Manager 12c’s automated patching and provisioning, using the Database Lifecycle Management Pack, is gaining more appreciation from the IT community, as it assists the DBA with features to search for recommended patches, create patch plans, review for conflicts and share and re-use patch plans.

Configuring a Database for Online or Offline Patching

After logging into a target database, you can click on Setup and go to the Offline Patching setup:

patching22

You can then choose to use Online patching with MOS credentials:

patching1

or use Offline Credentials and configure the patching catalog, ensuring you upload all the XMLs for the catalog, which will now be stored locally on a workstation.  Once the upload is complete, run the Refresh From My Oracle Support job.

patching2

The Online configuration is recommended and works with the software library.  It’s what we’ll be talking about today.

Also ensure that you’ve set up the correct privileges to perform patching.  Provisioning and patching include steps that require privileges to run root scripts, so ensure that the credentials used for the patching allow you to sudo to root or use PBrun.

Database Patch Plans

To set up a patch plan for a database, there are a number of steps, but the patch plan wizard makes this very easy to do.  For our example, we’ll choose to patch 11.2.0.4 databases to the latest recommended patches.

First, let’s do a search to find out what patches we’ll need to apply to our 11.2.0.4 databases in our EM environment.

patching3

The Enterprise menu takes us to Provisioning and Patching, then Patches and Updates.

From this console page, we can view what patch plans are already created in case we can reuse one:

patching4

As there isn’t an existing plan that fits what we need to do, we are going to first search for what patches are recommended with the Recommended Patch Advisor:

patching10

We’ve chosen to perform a search for recommended patches for 11.2.0.4.0 databases on Linux x86-64.  This will return the following four patches:

patching11

We can click on the first Patch Name, which will take us to the patch information, including what bugs are addressed in this patch, along with the option to download or create a patch plan.  For the purpose of this post, we’ll choose to create a patch plan:

patching12

We’ll create a new patch plan for this, as our existing ones currently do not include an 11g database patch plan that would be feasible to add to.  We can see our list of patches on the left, too, so this helps as we proceed to build onto our patch plans.

After clicking on Add to New, we come to the following:

patching13

Name your patch plan something meaningful (I chose to name it for a single instance, “SI”, plus the patch number and the fact that it’s for 11.2.0.4) and then choose the database from the list you wish to apply the patch to.  You can hold down the CTRL key to choose more than one database and, when finished, click on Create Plan.

The patch plan wizard will then check to see if any other targets monitored by Cloud Control will be impacted and asks you to either add them to the patch plan or to cancel the patch plan for further investigation:

patching14

If you are satisfied with the additions, you can click on Add All to Plan to proceed.  The wizard then checks for any conflicts introduced by the additions and will report them:

patching15

In our example above, I’ve added an 11.2.0.3 instance home to show that the wizard notes it and offers to either ignore the warnings and add it or (more appropriately) cancel the patch plan and correct the mistake.
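If you ever want to double-check a conflict outside of the patch plan wizard, OPatch can run the same kind of check directly against the Oracle home.  A minimal sketch, run from the unzipped patch directory-  the staging path is hypothetical:

# From the unzipped patch directory, check for conflicts against this Oracle home (hypothetical staging path)
cd /u01/stage/19121551
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./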

Adding to Patch Plans

In our recommended patch list, we had four recommended patches.  Once we’ve created our first patch plan, we can now choose to add to it with the subsequent patches from the list:

patching16

This allows us to create one patch plan for all four patches and EM will apply them in the proper order as part of the patch deployment process.

Patch Plan Review and Deploy

Once a patch plan is created, the next step is to review and deploy it.  Choose the patch plan that we created earlier from the list:

patching18

Double clicking on it will bring up the validation warnings, if any exist:

patching17

We can then analyze the validations required and correct any open issues as we review the patch plan, before deploying:

patching29

We can see in the above checks that we are missing credentials required for our patches to be successful.  These can now be set by clicking to the right of “Not Set”, and we can then proceed with the review of our patch plan.

patching20

Next we add any special scripts that are required (none here…), notifications on the patching process so we aren’t in the dark while the patch is being applied, rollback options and conflict checks.

These steps give the database administrator a true sense of comfort that allows them to automate, yet have notifications and options that they would choose if they were running the patch interactively.

Once satisfied with the plan, choose the Deploy button and your patch is ready to be scheduled.

patching21

Once the patching job completes, or if it experiences an issue and ends up executing the logic placed in the conflict/rollback steps above, the DBA can view the output log to see what issues occurred before correcting and rescheduling.

Output Log 
Step is being run by operating system user : 'ptch_em_user' 
 
Run privilege of the step is : Normal  

This is Provisioning Executor Script
…
Directive Type is SUB_Perl
…
The output of the directive is:
…
Tue Jan 6 00:15:40 2015 - Found the metadata files; '19121551' is an patch
…
Tue Jan 6 00:15:40 2015 - OPatch from '/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch.pl' 
  will be used to apply the Interim Patch.
…
Tue Jan 6 00:15:52 2015 - Invoking OPatch 11.2.0.4.7
…
Following patches will be rolled back from Oracle Home on application of the patches in the given list :
   4612895
…
Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
OPatch continues with these patches:  6458921  

Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y

Running prerequisite checks...
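Once the job reports success, it doesn’t hurt to confirm from the Oracle home itself.  A quick sketch-  the patch number is the one from the log above and the paths depend on your environment:

# Confirm the patch now shows up in the home's inventory
$ORACLE_HOME/OPatch/opatch lsinventory | grep 19121551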

This is high level, but really, it’s quite easy and the more you automate provisioning and patching, the easier it’ll get and you’ll wonder why you waited so long!

 

 

Posted in Cloud, Enterprise Manager

April 23rd, 2014 by dbakevlar

The last couple of weeks I’ve been lucky enough to get time with a new ZFSSA Simulator (it will be out soon for everyone, so you’ll have to play with the current one available for now-  patience, grasshopper! :)) and I’ve spent it checking out the newest features available with Database as a Service (DBaaS) Snapclone via Enterprise Manager 12c.  I’m really thrilled with Snapclone-  in two previous positions, I spent considerable time finding new ways of speeding up RMAN duplicates to ease the stress of weekly datamart builds that were sourced off of production, and this feature would have been a lifesaver back then!

As Oracle’s DBaaS offering is so full featured, I think it’s easy to have misconceptions about the product, or to find blog posts and documentation on earlier releases that lead to misconceptions.  Due to this, we thought it might help if I tried to dispel some of the myths, letting more folks realize just how incredible Snapclone really is!

Misconception #1- Snapclone only works with the ZFS Storage Appliance

DBaaS does offer a fantastic hardware solution that requires a hardware NAS like NetApp or ZFS/ZFSSA, but that’s not the only option.  There is also the software solution, which can work on ANY storage.  There’s no requirement on where the test master database (used to track changes and save considerable space vs. a traditional cloning method) must reside, which means it can be on different storage than production.

There are benefits to both the hardware and software solutions for Snapclone.  Keep in mind, Oracle prefers to support hardware and software that are engineered to work together, but they realize that not all customers will have a ZFS or NetApp solution in place, so they ensure that Snapclone solutions are available for whatever storage a customer may have in their shop.  Some of those benefits and requirements are listed below, but you can clearly see that Snapclone does not have a requirement of ZFS or NetApp:

tm_profile_2

Misconception #2-  Snapclone Requires a Cloud Implementation

It is true that Snapclone requires database pools to be configured and built, but this is in the environment’s best interest, ensuring that it is properly governed and that placement is enforced.  This feature is separate from an actual “cloud” implementation and the two shouldn’t be confused.  There are often dual definitions for terms, and cloud is no different.  We have private cloud, public cloud and the overall term for cloud technology, and then we have database pools that are part of Snapclone and are configured in the cloud settings of DBaaS.  They should not be confused with having to implement “cloud”.

Misconception #3- Unable to Perform Continuous Refresh on Test Master Database

There are a couple of easy ways to accomplish a refresh of the test master database used to clone from production.  Outside of the traditional profiles that would schedule a snapshot refresh, DBAs can set up Active or physical Data Guard, or can use storage replication technologies that they already have in place.  To see how these look from a high level, let’s first look at a traditional refresh of a test master database:

tm_profile_1

 

Now you’ll notice that the diagram states the test master is “regularly” refreshed with a current data set from production.  If you inspect the diagram below, you will see an example of a software or hardware refresh scenario to the test master database (using data masking and subsetting if required), followed by the creation of Snap Clones.

 

tm_using_testmaster

Now, as I said earlier, you can use a standby database, too, to perform a clone.  The following diagram shows the simplicity of a standby with Data Guard or GoldenGate.  Notice where Snap Clone takes over-  it’s at the standby database tier, so you still get the benefit of the feature, but can utilize comfortable technology such as Data Guard or GoldenGate:

tm_using_standby

Misconception #4- Snapclone Can Only be Used with DB12c

Snapclone works with any supported version of the database from 10gR2 to 12c.  Per the DBaaS team of experts, it may work on earlier versions, they just haven’t certified it, so if there are any guinea pigs out there who want to test it out, we’d love to hear about your experience!

The process of Snapclone is very simple once the environment is set up.  With the Rapid Start option, it’s pretty much the “Easy button” for setting up the DBaaS environment, and the process is solidified once service templates are built and the service request has been submitted in the Self Service Portal.  There isn’t any more confusion surrounding where an Oracle home installation should be performed, what prefix is used for the database naming convention, or other small issues that can end up costing an environment unnecessary complexity later on.

Misconception #5- Snapclone doesn’t Support Exadata

A couple of folks have asked me if Snapclone is supported on Exadata and, in truth, Exadata with Snapclone offers a unique opportunity for consolidations and creating a private cloud for the business.  I’ll go into it in depth in another blog post, as it deserves its own post, but the following diagram does offer a high-level view of how Snapclone can offer a really cool option with Exadata:

tm_exadata

There are so many features provided by Snapclone that it’s difficult to keep on top of everything, but trying to dispel the misconceptions is important so people don’t miss out on this impressive opportunity to save companies time, money and storage.  I know my whitepaper on DBaaS was over 40 pages and I only focused on NetApp and ZFS, so I do understand how easy it is to get lost in the details, but hopefully this first post will get people investigating DBaaS options more!

Posted in Cloud, DBaaS, Enterprise Manager, Oracle
