I’ve been going through some SERIOUS training in just over a week. This training has successfully navigated the “Three I’s”: it’s been Interesting, Interactive and Informative. The offerings are thorough and the knowledge gained is immense.
I’d also like to send a shout out to Steve Karam, Leighton Nelson and everyone else at Delphix who’s had a hand in designing the training, both for new employees and for those working with our hands-on labs. I’ve had a chance to work with both and they’re far above anything I’ve seen anywhere else.
Most DBAs know: if you attempt to take a shortcut in patching or upgrading, either by not testing or by hoping your environments are identical without verifying, shortcuts can go very wrong, very quickly.
Patching is also one of the most TEDIOUS tasks required of DBAs. The demands on the IT infrastructure for downtime to apply quarterly PSU patches (not including emergency security patches) to all the development, test, QA and production databases is a task I’ve never looked forward to. Even when utilizing Enterprise Manager 12c Patch Plans with the DBLM management pack, you still had to hope you’d checked compliance for all environments and pray that your failure threshold wasn’t tripped, which meant a large amount of your time had to be allocated to patching beyond just testing and building out patch plans.
I bet most of you already knew you could virtualize your development and test environments from a single Delphix compressed copy (referred to as a dSource) and create as many virtual copies (referred to as VDBs) as your organization needs for development, testing, QA, backup and disaster recovery. (If you weren’t aware of this, you can thank me later… :))
What you may not know (and what I learned this week) is that you can also do the following:
Considering how much time and resources are saved just by eliminating such a large portion of the time required for patching and upgrading, this alone is worth the investment in Delphix!
Want to learn more? Check out the following links:
Want to Demo Delphix? <– Click here!
The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training in Hybrid Cloud. There were many takeaways from the days we spent in the office (which is saying a lot, as most of you likely know how much I hate working anywhere but from home… :)) and I thought I would share a bit of this information with those who are aching for more details on this new and impressive offering from release 5.
If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.
Cloud Control now includes a number of new options for database targets in EM12c. These new drop-down options include cloning features that ease access to the new hybrid cloning. Once you’ve logged into Cloud Control, go to Targets –> Databases and then choose a database you wish to implement cloning features for. Right click on the target and the drop-downs will take you to the cloning options under Oracle Database –> Cloning.
There will be the following choices from this drop down:
Using a test master is essential for snap clones, which offer great space savings and eliminate the time required for standard cloning processes. The test master is in read-only mode, so it will need to be refreshed or recreated with an up-to-date copy (which will then surface another option in the drop-down, “Disable Test Master”) for new cloning procedures.
For the example today, we’ll use the following production database:
We’ll use an existing test master database to perform our clone from:
We can right click on the database and choose to create a clone. This clone will be created via a snap clone, so keep this in mind as we inspect the times and results of this process.
We choose to create a snapshot clone of the pluggable database. Each snapshot clone is just a copy of the file header, with block changes tracked on the read-only or read-write clone.
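The copy-on-write idea behind this can be illustrated with a minimal sketch. This is a hypothetical block-level model for illustration only, not Oracle’s or the storage array’s actual implementation: the clone shares the source’s blocks and stores only the blocks it changes, which is why it starts out consuming almost no space.

```python
class BaseImage:
    """Read-only source blocks shared by all thin clones."""
    def __init__(self, blocks):
        self._blocks = list(blocks)

    def read(self, i):
        return self._blocks[i]


class ThinClone:
    """A thin clone stores only the blocks it has changed (copy-on-write);
    unchanged blocks are read straight from the shared base image."""
    def __init__(self, base):
        self._base = base
        self._delta = {}          # block index -> modified contents

    def read(self, i):
        return self._delta.get(i, self._base.read(i))

    def write(self, i, data):
        self._delta[i] = data     # only now does the clone consume space

    def space_used(self):
        return len(self._delta)   # blocks owned by the clone, not the base size


base = BaseImage(["blk%d" % i for i in range(1000)])
clone = ThinClone(base)
clone.write(3, "changed")       # one changed block; everything else is shared
```

In this model, a clone of a 500 GB source costs only the blocks it later modifies, which matches the near-instant clone times shown below.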
Once you fill out the pertinent data for the clone, using the correct preferred credentials with SYSDBA privileges, name the new pluggable database, provide the name you’d like displayed in Cloud Control, and enter the PDB administration credentials, password and password confirmation. Once that’s done, choose whether you’d like to clone it to a different container (CDB) than the one the source resides on, and then add the database host and ASM credentials.
Once you click next, you have more advanced options to view and/or setup:
The advanced options allow you to change the sparse disk group to create the clone on, storage limits and database options.
You can then choose to apply data masking to protect any sensitive data, but keep in mind that once you do so, you will no longer be using a snap clone due to the masking; the option is still available to implement at this step, though. You can also set up any pre, post or SQL scripts that need to be run as part of the clone. This could include resetting sequences, user passwords, etc.
The next step allows you to schedule the clone in the future or run immediately.
You can also choose the type of notifications, as this is simply an EM Job that is submitted to perform the cloning process. Once you’ve reviewed the cloning steps chosen via the wizard, you can then submit.
Once the job’s been submitted, the submitter can monitor the job steps:
Once the clone has completed, you can view each of the steps, including the time each took.
The source database was over 500 GB and was cloned in less than one minute! You also will see the new cloned database in the targets list:
If you’re curious, note that this is a fully cloned database on ASM, which you can view just as you would any other database.
Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.
More to come soon, and thanks to Oracle for letting us get our hands on the new hybrid cloning!
Oracle OEM 12c introduces a new feature that enables the creation of Oracle database thin clones by leveraging file system snapshot technologies from either ZFS or NetApp. OEM adds a graphical interface to the process of making database thin clones. The feature that enables database thin cloning in OEM is called Snap Clone and is part of OEM’s Cloud Control self service for data cloning. Snap Clone is available via the Database as a Service (DBaaS) feature and leverages the copy-on-write technologies available in some storage systems. Support is initially available for NAS storage, specifically ZFS Storage and NetApp storage.
In order to use Snap Clone, the source database’s data files must reside on a ZFS or NetApp storage array, with the storage managed by agents on a Linux machine; thin clones of the data files can then be created on that same storage array.
Snap Clone offers role-based access, so storage admins can log in with access only to the areas they are responsible for, and end users’ access to source databases, clones and resources can be limited as well.
The prerequisites for getting started with Snap Clone are available storage on a ZFS Storage Appliance or NetApp storage array, plus access to a test master database. A test master database holds a sanitized version of a production database: a subset, a masked copy, or both. The test master database has to be registered with OEM. After the test master is registered with OEM, Snap Clone can be set up. To set up Snap Clone, log into Oracle Cloud Control 12c with the “cloud administrator” role and “storage administrator” privilege, or as a super administrator, and register the storage. To register the storage navigate to “Setup -> Provisioning and Patching -> Storage Registration”.
Figure 1. The entry page in OEM 12c Cloud Control. From here go to the top right and choose Setup, then Provisioning and Patching, then Storage Registration, as shown above.
To set up Snap Clone, navigate to Storage Registration via the menus “Setup -> Provisioning and Patching -> Storage Registration”.
Figure 2. A zoom into the menus to choose.
Figure 3. Storage Registration Page
Figure 4. Choose the type of storage array
Once on the Storage Registration page, choose “Register” and then choose the storage, either Netapp or Sun ZFS.
Figure 5. Storage Registration Page
To register the storage, supply the following information:
All of these fields are documented in the Cloud Administration Guide.
Then define the agents used to manage the storage. Agents are required to run on a Linux host. More than one agent can be defined to provide redundancy. The agents are the path by which OEM communicates with the storage. For each agent, supply the following information:
Finally, define the frequency with which the agent synchronizes with the storage to gather storage hardware details such as information on aggregates, shares and volumes.
After the storage information, agent information and agent synchronization information have been filled out, hit the “Submit” button in the top right. Hitting the Submit button will return the UI to the “Storage Registration” page. There, click on the storage appliance listed in the top, then click on the Contents tab in the bottom half of the page. This will list all the volumes and aggregates in the storage appliance.
Figure 6. Editing storage ceilings by clicking on an aggregate and hitting the “Edit Storage Config” tab.
For each aggregate you can set storage ceilings. Click on the aggregate or FlexVol, then click the “Edit Storage Ceilings” tab.
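Conceptually, a storage ceiling is just an allocation cap on the aggregate: clone requests that would push usage past the ceiling are refused. A minimal sketch (the `Aggregate` class and GB accounting are assumptions for illustration; OEM enforces this internally against the ceilings you set here):

```python
class Aggregate:
    """Hypothetical model of a storage aggregate with a ceiling,
    as configured via the Edit Storage Ceilings tab."""
    def __init__(self, ceiling_gb):
        self.ceiling_gb = ceiling_gb
        self.used_gb = 0

    def allocate(self, gb):
        """Grant a clone's space request only if it fits under the ceiling."""
        if self.used_gb + gb > self.ceiling_gb:
            raise RuntimeError("request exceeds storage ceiling")
        self.used_gb += gb


agg = Aggregate(ceiling_gb=100)
agg.allocate(60)          # fits: 60 of 100 GB now in use
```

A second request for 50 GB against this aggregate would be rejected, since 60 + 50 exceeds the 100 GB ceiling.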
The Databases tab lists databases that can be used for cloning. OEM detects databases automatically on the hosts it is managing, and automatically correlates databases that have storage on the arrays added during storage registration; OEM looks for all databases that have files on the registered storage. Click on a database, then the Show Files tab, which will show the files and volumes for this database.
Figure 7. List of files by volume for a database.
Figure 8. Enable Snap Clone for databases that will be used as test masters.
Nominating a database as a test master requires enabling Snap Clone. To enable Snap Clone for a database, click on the chosen database, then click the “Enable Snap Clone” tab just above the list of databases. This will automatically validate that all the volumes are FlexClone-enabled (in the case of NetApp).
The next step is to configure zones, which can be used to organize cloud resources.
Choose the menu option “Enterprise -> Cloud -> Middleware and Database Home”.
Figure 9. Navigate first to “Cloud -> Middleware and Database Home”
Figure 10. Middleware and Database Cloud page.
In order to see the zones defined, click on the number next to the title “PaaS Infrastructure Zones” in the top left under General Information.
Figure 11. PaaS Infrastructure Zones
To create a zone, click the tab “Create”.
Figure 12. First page of the wizard to create a PaaS Infrastructure Zone: give a meaningful name and description of the zone and define maximum CPU utilization and memory allocation.
In the first page of the “PaaS Infrastructure Zone”, give zones a meaningful name and description. Define constraints such as maximum host CPU and memory allocations.
Figure 13. Second page of the “PaaS Infrastructure Zone” wizard, add hosts that are available to the zone.
Next, define hosts that are members of the zone and provide credentials that operate across all members of this zone.
Figure 14. Third page of the “PaaS Infrastructure Zone” wizard, limit which roles can see the zone.
Next, define which roles can see and access this zone. The visibility of the zone can be limited to a certain class of users via roles such as Dev, QA, etc.
Figure 15. Final review page for “PaaS Infrastructure Zone” wizard
Finally, review the settings and click Submit.
Figure 16. Confirmation that the PaaS Infrastructure Zone has been successfully created.
The remaining step required to enable Snap Clone is to create a database pool, which is a collection of servers or nodes that have database software installed. This part of the setup is done by a different user, the administrator for Database as a Service.
Log in as DBAAS_ADMIN.
For the next part navigate to the menu “Setup -> Cloud -> Database”.
Figure 17. Middleware and Database Cloud page
Figure 18. Navigate to “Setup -> Cloud -> Database”.
Figure 19. Database Cloud Self Service Portal Setup. To create a database pool choose the “Create” button in the center of the page and from the pull down, choose “For Database”.
To create a new pool, click on the “Create” button in the center of the page, and choose “For Database” from the pull-down menu that appears.
Figure 20. Choose “Create -> For Database”
Figure 21. Edit Pool page. Provide a meaningful name and description of the pool. Add Oracle home directories in the bottom of the page. At the very bottom of the page set a constraint on the number of database instances that can be created in the pool. On the top right, set the host credentials.
In the “Edit Pool” page, at the top left of the screen, provide a meaningful name and description for the pool. In the middle of the screen add the Oracle homes that will be used for database provisioning. Every member of a database pool is required to be homogeneous: the platform and Oracle version must be the same across all the hosts and Oracle homes in the pool, and all the Oracle installations have to be of the same type, either single instance or RAC. In the top right, for the Oracle homes added, provide the Oracle credentials and root credentials. Finally, at the bottom of the page, a constraint can be set on the number of database instances that can be started in this pool.
Figure 22. Set request limits on the pool
The next page sets the request settings. The first restriction sets how far in advance a request can be made. The second restricts how long a request is kept, which is the archive retention; after the archive retention time, requests are deleted. Finally, there is the request duration, which is the maximum duration for which a request can be made.
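The first and third of these settings amount to a simple admission check on each incoming request. A minimal sketch (the function name and parameters are hypothetical, chosen to mirror the settings described above; OEM performs the equivalent validation itself):

```python
from datetime import datetime, timedelta

def request_allowed(now, start, duration_hours,
                    max_advance_days, max_duration_hours):
    """Hypothetical check mirroring a pool's request settings:
    - the request may not be scheduled too far in advance;
    - the requested database may not be kept longer than the pool allows."""
    if start - now > timedelta(days=max_advance_days):
        return False          # scheduled too far in advance
    if duration_hours > max_duration_hours:
        return False          # kept longer than the maximum request duration
    return True


now = datetime(2016, 1, 1, 9, 0)
soon = now + timedelta(days=2)
far = now + timedelta(days=30)
```

With a 7-day advance window and a 72-hour maximum duration, a request starting in 2 days for 24 hours passes, while one scheduled 30 days out, or one asking for 100 hours, is refused.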
Figure 23. Set memory and storage quotas per role for the pool. The quotas cover memory, storage, database requests and schema requests.
The above page configures quotas. A quota is allocated to each and every self service user, and controls the amount of resources users have access to. Quotas are assigned to a role, and users inherit quota values from the role. Click “Create” in the middle of the screen.
Figure 24. Editing the quotas on a pool for a role.
The popup dialogue has fields for these entries:
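Role-based quota inheritance can be sketched as a lookup keyed by role. The role names, quota values, and the most-generous-wins merge rule below are all assumptions for illustration; check OEM’s documentation for how it actually combines quotas when a user holds several roles:

```python
# Hypothetical per-role quotas, as configured in the quota dialogue.
ROLE_QUOTAS = {
    "DEV": {"memory_gb": 8,  "storage_gb": 50,  "db_requests": 2},
    "QA":  {"memory_gb": 16, "storage_gb": 100, "db_requests": 5},
}

def quota_for(user_roles):
    """A self service user inherits quota values from their roles.
    This sketch assumes the most generous value wins across roles."""
    merged = {}
    for role in user_roles:
        for key, value in ROLE_QUOTAS.get(role, {}).items():
            merged[key] = max(merged.get(key, 0), value)
    return merged
```

A user holding both DEV and QA would, under this assumed rule, get the QA-level 100 GB of storage and 5 database requests.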
Figure 25. Profiles and Service Templates
Profiles and service templates
A profile is used to capture information about the source database which can then be used for provisioning.
A service template is a standardized service definition for a database configuration that is offered to the self service users. A collection of service templates forms the service catalogue. A service template will provision a database with or without seed data. To capture an ideal configuration, the easiest thing to do is to point at an existing database and fetch the information of interest from that database; this information is captured using a profile.
To create a profile, click on the “Create” button under “Profiles”.
Figure 26. Specify a reference target for a profile.
Click the magnifying glass to search for a source database.
Figure 27. Search for a reference target database
Pick a reference target by clicking on it, then click the “Select” button in the bottom right.
Figure 28. Creating a Database Provisioning Profile
To pick a database for use in thin cloning, choose the “Data Content” check box, with the “Structured Data” sub-option selected, then the “Create” sub-option, then the “Storage Snapshot” sub-option. This option is only enabled when the “enable snapshot” option is enabled on the Storage Registration page. Disable the option to capture the Oracle home.
Provide credentials for the host machines Oracle account and for the database login.
The “Content Option” step is not needed for the above selections.
Figure 29. Give the profile a meaningful name and description.
Next provide credentials for the Oracle home and Oracle database, then provide a meaningful name for the profile as well as a location. The profile will be useful when creating a service template.
Figure 30. Review of create profile options.
Next, review the summary and click Submit, which will connect to the storage and take snapshots.
Figure 31. A zoom into the menus to choose.
To create a new service template, choose a profile; in this case use the “thin provisioning for reference DB” profile. Then click “Create” and choose “For Database”. Service templates are part of the service catalogue and are exposed to the self service users.
Figure 32. Provide a meaningful name and description for the service template.
Provide a meaningful name and description. For the rest of the service template, provide information about the databases that will be created from the snapshots, such as the database type (RAC or single instance; for RAC, provide the number of nodes). Provide the SID prefix used to generate the SIDs for the clones, the domain name and the port.
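One plausible way the SID prefix could be combined with a generated sequence number is sketched below. The naming scheme, padding, and helper name are assumptions for illustration (OEM’s actual generated names may differ); the 8-character cap reflects Oracle’s usual SID length limit:

```python
def make_sid(prefix, n, max_len=8):
    """Hypothetical clone SID generator: the template's SID prefix plus a
    zero-padded sequence number, kept within the usual 8-character limit."""
    sid = "%s%04d" % (prefix, n)
    if len(sid) > max_len:
        raise ValueError("SID prefix too long for generated SID: %s" % sid)
    return sid
```

For example, a prefix of "soe" would yield names like soe0001, soe0002, and so on, while an over-long prefix would be rejected up front rather than producing an invalid SID.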
Figure 33. Provide a storage area for writes to use.
The cloning operation only creates a read-only copy, so write space must be provided elsewhere in order to allow writing to the thin clone.
Click on the volume, then click the Edit button.
Figure 34. Set the directory and maximum space usage for the write location.
Provide the mount point prefix and the amount of writable space you wish to allocate.
Users of the thin clone databases can also be allowed to take further snapshots. These snapshots can be used as a mechanism to roll back changes. The number of these snapshots can be limited just below the storage size section:
Figure 35. Set the number of snapshots that can be taken of a thin clone.
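The snapshot limit and rollback behavior can be modeled as a bounded stack. This is a conceptual sketch only (the class and its methods are hypothetical, not an OEM API): taking a snapshot pushes a restore point, rolling back returns to the most recent one, and the configured limit caps how many can exist at once.

```python
class CloneSnapshots:
    """Hypothetical model of per-clone snapshots: a user may hold up to
    max_snapshots restore points, and rollback reverts to the newest one."""
    def __init__(self, max_snapshots):
        self.max_snapshots = max_snapshots
        self._stack = []

    def take(self, label):
        if len(self._stack) >= self.max_snapshots:
            raise RuntimeError("snapshot limit reached")
        self._stack.append(label)

    def rollback(self):
        # Revert the clone to its newest snapshot, consuming it.
        return self._stack.pop()


snaps = CloneSnapshots(max_snapshots=2)
snaps.take("before-schema-change")
snaps.take("before-data-load")
```

With the limit set to 2, a third snapshot is refused until one is consumed by a rollback.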
Figure 36. Set the initial passwords for database accounts on the thin clone.
Next, provide credentials for the administrative accounts.
For all other non-administrative accounts, you can choose to leave the passwords as they are or change them all to one password. You can also modify certain init.ora parameters, for example memory settings.
Figure 37. Modify any specific init.ora parameters for the thin clone
Figure 38. Set any pre and post provision scripts to be run at the creation of a thin clone.
Custom scripts can be provided as pre or post creation steps. This can be very useful if you want to register databases with OID or perform other actions that are specific to your organization.
Figure 39. Set the zone and pool for the thin clone
Figure 40. Set the roles that can use the service template.
You associate this service template with a zone and a pool. This ensures that the service template can actually work on the group of resources you have identified, and you can limit the visibility of the service template using roles.
Figure 41. Review page for the service template creation.
Finally, review the summary and click Submit.
Figure 42. 12c Cloud Control
Figure 43. 12c Cloud Control Self Service Portal
Contents of the Database Cloud Self Service Portal screen:
Figure 44. To clone a database, choose the “Request” then “Database” menu.
Figure 45. From the list choose a Self Service Template. In this case “SOEDB Service Template”
Figure 46. Fill out the clone Service Request
The request wizard asks for:
Users do not get system access to the databases, but instead get a slightly less privileged user who becomes the owner of the database.
Figure 47. The new cloned database.
http://www.youtube.com/watch?v=9VK1z6nU1PU – provisioning