Category: cloning

February 1st, 2017 by dbakevlar

Delphix focuses on virtualizing non-production environments, easing the pressure on DBAs, resources and budget, but there is a second use case for the product that we don't discuss nearly enough.

Protection from data loss.

Jamie Pope, one of the great guys who works in our pre-sales engineering group, sent Adam and me an article on one of those situations that makes any DBA (or an entire business, for that matter) cringe.  GitLab.com was performing some simple maintenance and someone deleted the wrong directory, removing over 300G of production data from their system.  It appears they were first going to use the PostgreSQL "vacuum" feature to clean up the database, but decided they had extra time to clean up some directories, and that's where it all went wrong.  To complicate matters, the onsite backups had failed, so they had to go to offsite ones (and every reader moans...)

Even this morning, you can view the status tweets about the database copy and feel the pain of this organization as they try to put right a simple mistake.

Users are down as they work to get the system back up.  Just getting the data copied before they're able to perform the restore is painful, and as a DBA, I feel for the folks involved.

How could Delphix have saved the day for GitLab?  Virtual databases (VDBs) are read/write copies derived from a recovered image that is compressed, deduplicated and then kept in a state of perpetual recovery, with the transactional data applied to the Delphix Engine source at a specific interval (commonly once every 24 hrs).  We support a large number of database platforms (Oracle, SQL Server, Sybase, SAP, etc.) and are able to virtualize the applications that are connected to them, too.  The interval at which we update the Delphix Engine source is configurable, so depending on network and resources, it can be decreased to apply changes more often, depending on how up to date the VDBs need to be vs. production.

With this technology, we've come into a number of situations where customers suffered a cataclysmic failure in production.  Where traditionally they would be dependent upon a full recovery from a physical backup via tape (which might be offsite), or scrambling to even find a backup that fit within a backup-to-tape window, they suddenly discovered that Delphix could spin up a brand new virtual database from the last refresh of the Delphix source before the incident, and then use a number of viable options to get them up and running quickly.

  1. Switch the users and application to point to the new VDB, recovered to the point in time (PIT) before the incident occurred.  Meanwhile, IT is able to take their time recovering the production database from the physical backup, with little outage to the business.
  2. Create a VDB at the PIT before the failure, then create a connection between production and the VDB and copy the lost data back to production (see the SQL sketch below).
  3. If there was dire loss (i.e. disk, etc.), create a VDB at the PIT before the failure and perform what's called a V2P, or virtual to physical, rehydrating the virtual data to become the new physical database.
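
For option 2, here is a minimal SQL sketch of the copy-back step, assuming hypothetical names (a database link called vdb_recovery pointing at the PIT VDB, and an orders table keyed on order_id); the actual connection details, schema and keys will differ in your environment:

    -- On production: create a link to the recovered VDB (hypothetical names).
    CREATE DATABASE LINK vdb_recovery
      CONNECT TO app_owner IDENTIFIED BY "app_password"
      USING 'vdb_host:1521/VDB_PIT';

    -- Copy back only the rows that were lost, e.g. from an accidentally purged table.
    INSERT INTO app_owner.orders
      SELECT o.*
      FROM   app_owner.orders@vdb_recovery o
      WHERE  NOT EXISTS (SELECT 1
                         FROM   app_owner.orders p
                         WHERE  p.order_id = o.order_id);

    COMMIT;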

This type of situation happens more often than we'd like to admit.  Many times resources have been working long shifts and make a mistake due to exhaustion; other times someone unfamiliar, with access to something they shouldn't have, simply makes a dire mistake.  These things happen, and this is why DBAs are always requesting two or three methods of backups.  We learn quite quickly that we're only as good as our last backup, and if we can't protect the data, well, we won't have a job for very long.

Interested in testing it out for yourself?  We have a really cool free Delphix trial via the Amazon cloud that uses your AWS account.  There's a source host and databases, along with a virtual host and databases, so you can create VDBs, blow away tables, recover via a VDB, and create a V2P (virtual to physical), all on your own.

 

Posted in AWS Trial, cloning, Delphix

April 23rd, 2014 by Kyle Hailey

About a year or more ago, Oracle came out with a way to create thin clone copies of a database in EM 12c called "Snap Clone".

Screen Shot 2014-04-11 at 9.53.57 AM

Not sure this makes working with data any sunnier, and it certainly doesn't add any sunlight to the cloud movement. Snap Clone technology has seen little adoption AFAIK; I'm not aware of a single customer reference yet, besides internal usage at Oracle. Why? Because Snap Clone doesn't solve the real problem. Snap Clone is complex, depends on cobbling other technologies together, and lacks crucial features. The real solution is simplicity and end-to-end integration, from data source syncing all the way to data provisioning. Snap Clone will provision database thin clones using specialized storage snapshots limited to ZFS and Netapp, but is no help for customers without ZFS or Netapp. Even if you have Netapp or ZFS, there is no integrated solution for syncing with source data. Oracle's technology is a storage snapshot management tool for Netapp and ZFS, and simple storage snapshot technology has been around for almost 20 years with little adoption compared to the potential. Snapshots are like the fuel. A data virtualization appliance is like the car.

The real solution harnesses thin cloning technology in a fully integrated and automated platform called a data virtualization appliance. A data virtualization appliance revolutionizes thin cloning technology in the same way the web browser revolutionized the internet.  Before the web browser, internet usage was limited to academics using ftp, email and chat rooms. After the browser came out, everyone started using the internet. Similarly, thin cloning has been around for over a decade, but its usage is exploding now that Delphix has provided a data virtualization appliance.  In less than 4 years, the Delphix agile data platform's customers have grown to include Walmart, HP, IBM, Facebook, Ebay, Comcast, Macys, Prudential, Kroger, SAP, Procter and Gamble, Gap, New York Life, McDonalds, Wells Fargo, Bank of America, etc.

Why has Delphix been so successful? Because these customers are seeing improvements like 2x application development output, 10x cost reduction, thousands of hours of DBA work saved a year, and petabytes of disk freed. Delphix is about virtualizing all of enterprise data, be it in a database from Oracle or Microsoft or in flat files, be it on premises or in the cloud. Delphix makes both the data as well as copies of that data immediately available anywhere. It provides solutions in a few mouse clicks to problems that were extremely difficult and time consuming before. Delphix is free of hardware constraints, running on any storage and commodity Intel hardware.  With this kind of technology Delphix can do cloud migrations, datacenter migrations, DR, regulatory compliance... the sky's the limit. For example:

  • Data Branching – branch data, databases and application stacks to provide immediate full environments to QA based on the latest development environment, and do it all in a space- and time-efficient manner.  No longer does QA have to wait to build environments; they can now be made in minutes. With branching, one can maintain multiple versions of application environments to support development teams who may need to patch previous versions while working on the current version, or support multiple development teams who are working concurrently on different release versions.
  • Synchronized clones – one of the hardest projects IT teams have is provisioning multiple databases at the same point in time for data integration, development and testing. For example, financial closes often require synchronizing financial information from several databases into a central financial system. If there are discrepancies, then it requires getting copies of all the systems at the same point in time – the financial close time – and isolating and correcting the discrepancies. Provisioning multiple databases at the same point in time is a daunting task, but with Delphix it is simple, out-of-the-box functionality.
  • Application Stack Cloning – Delphix will not only virtualize the database but also the application stack, and even Oracle binaries. For example, Delphix will provide thin clones of the full Oracle EBS stack including reconfiguring the EBS stack for the new clone environment. 
  • Live data archive – ability to archive many historical synchronized copies of full databases, data and application stacks stored in a fraction of the space and accessible in minutes. This capability is crucial for auditing and compliance support. 
  • Support for RAC, SQL Server, Postgres – linking and provisioning RAC databases is as easy as a single instance. Integrated automated support is already there for SQL Server and Postgres with Sybase in beta and more databases to come.

Delphix is as simple to install as a VMware VM. Everything is run in a few mouse clicks from a simple web UI.  After an initial link to source data, in a few minutes users can start provisioning Oracle databases, SQL Server databases, or application data and binaries.

 

Screen Shot 2014-04-17 at 10.20.29 AM

 

 

 

Screen Shot 2014-04-23 at 4.27.12 PM

*1 One Oracle blog suggests using Data Guard as the test master, which means adding another technology stack to the mix without any automated management or integration, not to mention requiring more Oracle licensing.

*2 Snap Clone may claim to be storage agnostic, but to use something like EMC VNX or IBM storage, for example, requires installing Solaris on a machine, attaching an EMC LUN to the Solaris box, having Solaris map ZFS onto the storage, and then sharing the storage with EM 12c.

 

You don’t need to take my advice. Instead, ask them to prove it:

  • Ask for references.  Find at least three references and ask them about their experiences.
  • Prove it.  Test setup time.  Test linking time. Test provisioning time. Test replication of VDBs from one site/cloud to another site/cloud and across heterogeneous storage. Test cross-platform cloning from UNIX to Linux (Delphix does this automatically). Provision clones of MS SQL databases. Provision a full EBS stack (Delphix does this automatically). Provision a SAP sandbox. Test on SSD storage like Pure Storage, Violin, XtremIO. Provision two or more separate databases at the same point in time, which supports the case of an application that uses more than one database. Branch multiple versions of the same master VDB. Make branches of the VDB branches.

Delphix has years of industry production time and many Fortune 500 customers, including the Fortune #1, who can attest to the robustness and power of the solution. Delphix has an expert and rapid support team that has won praise from its customers. Delphix is the industry standard for data and database virtualization.

 

Posted in cloning

April 21st, 2014 by Kyle Hailey


Performance testing requires full, fresh data

Many organizations don’t even attempt to test performance until very late in their development cycle because it is only in the UAT (or equivalent) environment that they have a full copy of their production data set.  Errors and inefficiencies found at this stage are expensive to fix and are often promoted to production due to pressures from the business to meet release schedules.

Delphix customers give each developer, or team of developers, a full, fresh copy of the database where they can validate the performance of their code in the UNIT TESTING phase of their projects.  Poorly written queries are identified by the developer and can be fixed immediately, before their code is submitted for integration with the rest of the application.  The test/fix iteration is much tighter and results in higher-quality, better-performing application releases.

How does Delphix enable this?

VDBs created by Delphix have many advantages over a physical database, and therefore can be used in unique ways.  Consider the following:

  • Self service.  Delphix automates all the complexity required to make changes to a database, allowing developers and testers to get what they need without waiting on associated support organizations.
  • Fast provisioning.  VDBs require no data movement at the time they are created, so even large databases can be created in a few minutes.
  • Easy data refresh.  Refreshing a VDB with the latest data from production can be done with 3 mouse clicks.  Never test against synthetic data.
  • Data rewind/reset.  Delphix tracks all changes made to a VDB and can rewind the state of the database to any point in time at the request of the user.  Run a test, rewind, change parameters, run the test again.
  • Efficient use of infrastructure.  VDBs run in a tiny storage footprint, allowing teams to run many more database environments in parallel.
  • Efficient use of licenses.  Turning VDBs on and off is trivial.  Test environments can be spun up as needed and suspended when testing is finished.  Suspended VDBs use no resources of the DBMS.
  • Database relocation.  VDBs are easily moved between database hosts, even across datacenters.

Following are examples of performance changes that can easily be tested in VDBs:

Database Configuration

  • changes to initialization parameters (e.g. optimizer_index_cost_adj, optimizer_index_caching, etc.; see the sketch after this list)
  • changes to redo size, parameters
  • DBMS version and patch set
  • SGA size
  • CPU type, speed (move VDB between database hosts)
  • Different DB statistics, statistics gathering methods
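
As a simple illustration of the first bullet, here is a sketch of trying an optimizer parameter change on a VDB and comparing plans before and after; the schema, table and predicate are hypothetical:

    -- Baseline plan on the VDB (hypothetical table and predicate).
    EXPLAIN PLAN FOR
      SELECT * FROM app_owner.orders WHERE customer_id = 42;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Change the parameter for this session only; on a VDB you could also use
    -- ALTER SYSTEM and bounce the clone without touching production.
    ALTER SESSION SET optimizer_index_cost_adj = 25;

    -- Re-check the plan after the change.
    EXPLAIN PLAN FOR
      SELECT * FROM app_owner.orders WHERE customer_id = 42;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);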

Data Modeling

  • Index changes
  • SQL Profiles – Like the old stored outlines, you can set up a complex system of profiles on a VDB and test different explain plans
  • Run complex and potentially debilitating queries on a VDB to minimize impact; use TKPROF and heavy tracing you can't do elsewhere (see the tracing sketch after this list)
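
As a sketch of that last point, extended SQL trace can be turned on for a session against the VDB and the resulting trace file processed with TKPROF on the VDB host; the query and file names below are hypothetical:

    -- Tag and enable extended SQL trace for the current session on the VDB.
    ALTER SESSION SET tracefile_identifier = 'vdb_perf_test';
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

    -- Run the potentially debilitating query here, against the VDB only.
    SELECT /* heavy report */ COUNT(*) FROM app_owner.order_lines;

    -- Turn tracing back off.
    ALTER SESSION SET EVENTS '10046 trace name context off';

    -- Then, on the VDB host (hypothetical file names):
    --   tkprof VDB1_ora_12345_vdb_perf_test.trc report.out sys=no sort=exeela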

Application Configuration

  • Testing application server connection pool sizes/limits
  • network bandwidth testing for multi-hop/firewall configuration
  • theoretical maximums for concurrent batch jobs (not just at the DB, but the app tier as well)
  • testing database monitoring solutions/thresholds/configuration impact
  • Oracle trace event impact when turned on (deviation from a baseline)

Enabling a physical UAT environment with Delphix

As mentioned above, many Delphix customers will still maintain a final testing environment that matches the production setup exactly.  They will have fibre channel (or equivalent) connections to the SAN directly from their DBMS host.  Even in this environment, which bypasses the Delphix Engine, our software can provide great benefit to the testing process.

The V2P feature can be used to migrate any data set contained within Delphix to a physical storage environment.  That means any data set captured from production, or any data set modified by a VDB can be pushed to UAT in an automated fashion by Delphix.  Running a V2P operation is not as fast as creating/refreshing a VDB because it requires data movement, but it is faster than restoring a traditional database backup and automates all the instance creation and configuration tasks.

Bringing it all together

The high level life cycle of performance testing on the Delphix Agile Data Platform looks something like the following:

  1. Create and/or refresh development environments with the latest full data set from production.
  2. Use VDBs to iterate quickly on unit tests of new code, data modeling changes, DBMS configuration changes.
  3. Integrate and test all changes in a highly parallelized QA environment, using VDBs to minimize the setup time between test runs.
  4. Run V2P to migrate release candidates to UAT for final performance verification.
  5. Promote changes to production.

 


Posted in cloning

November 7th, 2013 by Kyle Hailey

 



Oracle 12c introduces the new Pluggable Database (PDB) functionality into the Oracle database. What's the advantage of PDBs? PDBs eliminate the heavy memory overhead of starting up a full Oracle instance, which requires a new SGA and a full set of background processes. Instead, we start up one container database (CDB) and the PDBs all share the resources of the CDB. With a CDB and PDBs, there is one instance, one set of background processes, and one SGA, and these are shared among the PDBs. Starting a PDB only requires about 100MB of memory, as opposed to the half a GB normally required to start an Oracle instance. Because of this reduced memory cost, one can go from running 50 instances of Oracle in 20GB of memory to running 250 PDBs in that same 20GB of memory (numbers given by Oracle benchmarks).

 

One of the main use cases for PDBs is cloning a database for use in development and QA. With PDBs, one can easily give every developer in a development team their own copy of a database, except for one thing. Each of those PDBs, even though it takes up hardly any memory, still requires a full set of datafiles. Creation of these datafiles is slow and costly. That's where Delphix comes in.
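
For reference, a plain 12c PDB clone without Delphix looks something like the sketch below (names and paths are hypothetical); note that the CREATE PLUGGABLE DATABASE step copies the full set of datafiles for the new PDB, which is exactly the slow, costly part:

    -- Run from the CDB root as a privileged common user.
    -- Open the source PDB read only so it can be cloned (12.1 behavior).
    ALTER PLUGGABLE DATABASE dev_master CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE dev_master OPEN READ ONLY;

    -- Clone it; this physically copies every datafile to the new location.
    CREATE PLUGGABLE DATABASE dev_clone1 FROM dev_master
      FILE_NAME_CONVERT = ('/u01/oradata/dev_master/', '/u01/oradata/dev_clone1/');

    ALTER PLUGGABLE DATABASE dev_clone1 OPEN;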

 

With Delphix, having 10 copies of the same database takes up less space than the size of the original database! How is that done? It's done by compressing the original copy and then sharing all the duplicate blocks in that copy among all the clones.

 

Thus where PDBs give you databases for free at the memory level, Delphix gives you databases for free at the disk footprint level, and the combination of the two gives you practically free databases!

 


 

Oracle marketing slide showing that 50 separate database instances can run in 20 GB of RAM while 250 PDBs can run in the same 20 GB of RAM: "5x more scalable."

 

 

 

Screen Shot 2014-04-25 at 8.42.32 AM

Video example of how easy it is with Delphix to link to a PDB in one CDB and then provision it, as a thin clone, to another CDB on a different machine.

Posted in cloning, PDB

September 4th, 2013 by Kyle Hailey



Oracle OEM 12c introduces a new feature that enables the creation of Oracle database thin clones by leveraging file system snapshot technologies from either ZFS or Netapp.  OEM adds a graphical interface to the process of making database thin clones. The feature that enables database thin cloning in OEM is called Snap Clone and is part of OEM's Cloud Control self service for data cloning. Snap Clone is available via the Database as a Service (DBaaS) feature. Snap Clone leverages the copy-on-write technologies available in some storage systems for database cloning.  Support is initially available for NAS storage, specifically ZFS Storage and NetApp storage.

In order to use Snap Clone, the source database has to be installed such that its data files are on a ZFS Storage Appliance or NetApp array, and the storage has to be managed by agents on a Linux machine; one can then thin clone data files on that same storage array.

Snap Clone offers role-based access, so storage admins can log in and only have access to the areas they are responsible for, as well as limiting end users' access to source databases, clones and resources.

Setting up Snap Clone

 

The prerequisites for getting started with Snap Clone are having available storage on a ZFS Storage Appliance or NetApp storage array, as well as having access to a master test database. A master test database is a database that holds a sanitized version of a production database, such that it is either a subset and/or masked. The test master database has to be registered with OEM.  After the test master is registered with OEM, Snap Clone can be set up. To set up Snap Clone, come into Oracle Cloud Control 12c as a user with the "cloud administrator" role and the "storage administrator" privilege, or as a super administrator, and register the storage. To register the storage, navigate to "Setup -> Provisioning and Patching -> Storage Registration".

  • Navigate to "Setup -> Provisioning and Patching -> Storage Registration"
  • Click the "Register" tab and choose the storage, either Netapp or ZFS
    • Supply storage information
      • Name: storage array name registered in DNS
      • Vendor
      • Protocol: http or https
      • Storage credentials: credentials for interacting with the storage
  • Install agents on a separate Linux machine to manage the Netapp or ZFS storage. An agent has to run on a Linux host to manage the storage. Supply the
    • Agent host
    • Host credentials
  • Pick a database to make the test master
    • Put the test master on ZFS storage or Netapp storage
    • Register the ZFS storage or Netapp storage with OEM
    • Enable Snap Clone for the test master database
  • Set up a zone – set max CPU and memory for a set of hosts and the roles that can see these zones
  • Set up a pool – a pool is a set of machines where databases can be provisioned
  • Set up a profile – a source database that can be used for thin cloning
  • Set up a service template – reference values, such as an init.ora, for the databases to be created

Screen Shot 2013-06-09 at 10.08.47 PM

Figure 1. Shows the entry page in OEM 12c Cloud Control. From here go to the top right and choose Setup, then Provisioning and Patching, then Storage Registration, as shown above.

 Navigate to storage registration

To set up Snap Clone, navigate to Storage Registration by choosing the menus "Setup -> Provisioning and Patching -> Storage Registration".

Screen Shot 2013-06-09 at 10.10.44 PM

Figure 2 Shows a zoom into the menus to choose

Screen Shot 2013-05-31 at 10.09.02 PM

Figure 3. Storage Registration Page

Screen Shot 2013-06-09 at 10.15.04 PM

Figure 4. Choose the type of storage array

Once on the Storage Registration page, choose “Register” and then choose the storage, either Netapp or Sun ZFS.

Register the Storage

Screen Shot 2013-06-09 at 10.17.02 PM

Figure 5. Storage Registration Page

To register the storage, supply the following information:

  • Name: Storage array name registered in DNS
  • Vendor
  • Protocol: http or https
  • Storage Credentials: credentials for interacting with storage

All of this is documented in the Cloud Administration Guide.

Define agents used to manage storage

Then define the agents used to manage the storage. Agents are required to run on a Linux host.  More than one agent can be defined to provide redundancy.  The agents will be the path by which OEM communicates with the storage. For each agent, supply the following information:

  • Host name
  • Credential type
  • Credentials

And finally, define the frequency with which the agent synchronizes with the storage to gather storage hardware details, such as information on aggregates, shares and volumes.

After the storage information, agent information and agent synchronization information have been filled out, hit the "Submit" button in the top right. Hitting the Submit button will return the UI to the "Storage Registration" page. On the "Storage Registration" page, click on the storage appliance listed at the top, then click on the Contents tab in the bottom half of the page. This will list all the volumes and aggregates in the storage appliance.

Looking at volumes on the storage array

Screen Shot 2013-06-09 at 10.18.48 PM

Figure 6. Editing storage ceilings by clicking on an aggregate and hitting the "Edit Storage Config" tab.

For each aggregate one can set storage ceilings. Click on the aggregate or FlexVol and then click the "Edit Storage Ceilings" tab.

Choosing a Database Test Master

On the database tab is a list of databases that can be used for cloning. OEM detects databases automatically on the hosts it is managing. OEM will also automatically correlate databases that have storage on the storage arrays added in storage registration.  OEM looks for all databases that have files on the registered storage.  Click on a database, then the Show Files tab, which will show the files and volumes for that database.

Screen Shot 2013-06-09 at 10.20.00 PM

Figure 7. List of files by volume for a database

Screen Shot 2013-06-09 at 10.20.07 PM

Figure 8. Enable Snap Clone for databases that will be used as test masters.

Nominating a database as a test master requires enabling Snap Clone. To enable Snap Clone for a database, click on the chosen database, then click the "Enable Snap Clone" tab just above the list of databases. This will automatically validate that all the volumes are FlexClone enabled (in the case of Netapp).

 

Setting up Zones

The next step is to configure zones, which can be used to organize cloud resources.

Choose the menu option "Enterprise -> Cloud -> Middleware and Database Home".

 

Screen Shot 2013-06-09 at 10.23.11 PM

Figure 9. Navigate first to “Cloud -> Middleware and Database Home”

Middleware and Database Cloud page

Screen Shot 2013-06-09 at 10.24.23 PM

Figure 10. Middleware and Database Cloud page

Setting up a Zone

In order to see the zones defined, click on the number next to the title "PaaS Infrastructure Zones" in the top left under General Information.

Screen Shot 2013-06-09 at 10.26.24 PM

Figure 11. PaaS Infrastructure Zones

To create a zone, click the tab “Create”.

Screen Shot 2013-06-09 at 10.27.33 PM

Figure 12. First page of the wizard to create a PaaS Infrastructure Zone. Give a meaningful name and description of the zone and define maximum CPU utilization and memory allocation.

In the first page of the "PaaS Infrastructure Zone" wizard, give the zone a meaningful name and description. Define constraints such as maximum host CPU and memory allocations.

Screen Shot 2013-06-09 at 10.28.56 PM

Figure 13. Second page of the “PaaS Infrastructure Zone” wizard, add hosts that are available to the zone.

Next, define the hosts that are members of the zone and provide credentials that operate across all members of this zone.

Screen Shot 2013-06-09 at 10.30.11 PM

Figure 14. Third page of the “PaaS Infrastructure Zone” wizard, limit which roles can see the zone.

Next, define what roles can see and access this zone. The visibility of the zone can be limited to a certain class of users via roles like Dev, QA, etc.

Screen Shot 2013-06-09 at 10.31.18 PM

Figure 15. Final review page for “PaaS Infrastructure Zone” wizard

Finally, review the settings and click Submit.

Screen Shot 2013-06-09 at 10.32.29 PM

Figure 16. Confirmation that the PaaS Infrastructure Zone has been successfully created.

 

Creating Database Pool and Profiles

The remaining step required to enable Snap Clone is to create a database pool, which is a collection of servers or nodes that have database software installed.  This remaining part of the setup is done by a different user, who is the administrator for Database as a Service.

Log in as  DBAAS_ADMIN.

For the next part navigate to the menu “Setup -> Cloud -> Database”.

Screen Shot 2013-06-09 at 10.33.24 PM

Figure 17. Middleware and Database Cloud page

Screen Shot 2013-06-09 at 10.35.59 PM

Figure 18. Navigate to “Setup -> Cloud -> Database”.

Screen Shot 2013-06-09 at 10.36.56 PM

Figure 19. Database Cloud Self Service Portal Setup. To create a database pool choose the “Create” button in the center of the page and from the pull down, choose “For Database”.

To create a new pool, click on the "Create" button in the center of the page, and choose "For Database" from the pull-down menu that appears.

Screen Shot 2013-06-09 at 10.38.16 PM

Figure 20. Choose “Create -> For Database”

Screen Shot 2013-06-09 at 10.39.10 PM

Figure 21. Edit pool page. Provide a meaningful name and description of the pool. Add Oracle home directories at the bottom of the page. At the very bottom of the page, set a constraint on the number of database instances that can be created in the pool. On the top right, set the host credentials.

Set

  • Name and description
  • Oracle Home
  • Maximum number of databases per host
  • Credentials

In the "Edit Pool" page, at the top left of the screen, provide a meaningful name and description for the pool. In the middle of the screen, add the Oracle homes that will be used for database provisioning.  Every member of a database pool is required to be homogeneous, which means that the platform and Oracle version are the same across all the hosts and Oracle homes in the pool. All the Oracle installations also have to be of the same type, either single instance or RAC. In the top right, for the Oracle home, provide Oracle credentials and root credentials. Finally, at the bottom of the page, a constraint can be set on the number of database instances that can be started in this pool.

Screen Shot 2013-06-09 at 10.41.03 PM

Figure 22. Set request limits on the pool

The next page sets the request settings. The first restriction sets how far in advance a request can be made. The second restricts how long a request can be kept, which is the archive retention; after the archive retention time the requests will be deleted.  Finally there is the request duration, which is the maximum duration for which a request can be made.

Screen Shot 2013-06-09 at 10.42.12 PM

Figure 23. Set memory  and storage quotas per role for the pool. The quotas cover memory, storage, database requests and schema requests.

 

The above page configures quotas.  A quota is allocated to each and every self service user. Quotas control the amount of resources users have access to. Quotas are assigned to a role, and users inherit quota values from the role. Click "Create" in the middle of the screen.

Screen Shot 2013-06-09 at 10.43.03 PM

Figure 24. Editing the quotas on a pool for a  role.

The popup dialogue has these entries:

  • Role name
  • Memory (GB)
  • Storage (GB)
  • Number of database requests

Screen Shot 2013-06-09 at 10.43.56 PM

Figure 25. Profiles and Service Templates

Profiles and service templates

A profile is used to capture information about the source database, which can then be used for provisioning.

A service template is a standardized service definition for a database configuration that is offered to the self service users. A collection of service templates forms the service catalogue. A service template will provision databases with or without seed data. To capture an ideal configuration, the easiest thing to do is to point at an existing database and fetch the information of interest from that database.  The information from the database can be captured using a profile.

To create a profile, click on the "Create" button under "Profiles".

Creating a profile

Screen Shot 2013-06-09 at 10.45.07 PM

Figure 26. Specify a reference target for a profile.

Click the magnifying glass to search for a source database.

Screen Shot 2013-06-09 at 10.46.46 PM

Figure 27. Search for a reference target database

 

Pick a reference target by clicking on it, then click the "Select" button in the bottom right.

Screen Shot 2013-06-09 at 10.47.31 PM

Figure 28. Creating a Database Provisioning Profile

To pick a database for use in thin cloning, choose the check box "Data Content", with the sub-option "Structured Data" selected, the sub-option "Create" selected, and the sub-option "Storage Snapshot" selected. This option is only enabled when the "enable snapshot" option is enabled on the storage registration page. Disable the option to capture the Oracle home.

Provide credentials for the host machine's Oracle account and for the database login.

 

The “Content Option” step is not needed for the above selections.

Screen Shot 2013-06-09 at 10.48.16 PM

Figure 29. Give the profile a meaningful name and description.

 

Next, provide credentials for the Oracle home and Oracle database, then provide a meaningful name for the profile as well as a location. The profile will be useful when creating a service template.

Screen Shot 2013-06-09 at 10.49.12 PM

Figure 30. Review of create profile options.

 

Next, review the summary and click Submit, which will connect to the storage and take snapshots of the storage.


Screen Shot 2013-06-09 at 10.50.46 PM

Figure 31. Shows a zoom into the menus to choose

 

To create a new service template, choose a profile; in this case use the "thin provisioning for reference DB" profile.  Now, to create a new service template, click "Create" and choose "For Database". Service templates are part of the service catalogue and are exposed to the self service users.

Screen Shot 2013-06-09 at 10.51.42 PM

Figure 32. Provide a meaningful name and description for the service template.

 

Provide a meaningful name and description. For the rest of the service template, provide information about the databases that will be created from the snapshots, such as the database type, RAC or single instance (for RAC, provide the number of nodes). Provide the SID prefix to be appended to the SIDs generated for the clones, the domain name and the port.

Screen Shot 2013-06-09 at 10.52.38 PM

Figure 33. Provide a storage area for writes to use.

 

The cloning operation only creates a read-only copy, thus it is required to provide write space elsewhere in order to allow writing to the thin clone.

Click on the edit button.

Click on the volume, then the edit button.

Screen Shot 2013-06-09 at 10.53.29 PM

Figure 34. Set the directory and maximum space usage for the write location.

 

Provide the mount point prefix and the amount of writeable space you wish to allocate.

 

Users of the thin clone databases can also be allowed to take further snapshots. These snapshots can be used as a mechanism to roll back changes. The number of these snapshots can be limited just below the storage size section:

Screen Shot 2013-06-09 at 10.54.18 PM

Figure 35. Set the number of snapshots that can be taken of a thin clone.

Screen Shot 2013-06-09 at 10.55.15 PM

Figure 36. Set the initial passwords for database accounts on the thin clone.

Next, provide credentials for the administrative accounts:

  • SYS
  • SYSMAN
  • DBSNMP

All other non-administrative accounts can be left as is, or changed to one password. You can also modify certain init.ora parameters, for example memory.

Screen Shot 2013-06-09 at 10.55.59 PM

Figure 37. Modify any specific init.ora parameters for the thin clone

 

Screen Shot 2013-06-09 at 10.57.13 PM

Figure 38. Set any pre and post provision scripts to be run at the creation of a thin clone.

 

Custom scripts can be provided as pre or post creation steps. This can be very useful if you want to register databases with OID or perform certain actions that are specific to your organization.

Screen Shot 2013-06-09 at 10.58.05 PM

Figure 39. Set the zone and pool for the thin clone

Screen Shot 2013-06-09 at 10.59.07 PM

Figure 40. Set the roles that can use the service template.

You associate this service template with a zone and a pool. This ensures that the service template can actually work on the group of resources that you have identified, and you can limit the visibility of the service template using roles.

Screen Shot 2013-06-09 at 11.00.17 PM

Figure 41. Review of the service template creation.

Finally, review the summary and click Submit.

Creating a Thin Clone

Screen Shot 2013-06-09 at 11.06.14 PM

Figure 42. 12c Cloud Control

Screen Shot 2013-06-09 at 11.07.06 PM

Figure 43. 12c Cloud Control Self Service Portal

Contents of the Database Cloud Self Service Portal screen:

  • Left hand side
    • Notification – any instances that are about to expire
    • Usage
      • Databases (number provisioned out of maximum)
      • Schema services
      • Memory
      • Storage
  • Right side
    • Top
      • Databases
    • Bottom
      • Requests – the requests that created the database services and the database instances

Screen Shot 2013-06-09 at 11.08.02 PM

Figure 44. To clone a database, choose the “Request” then “Database” menu.

Screen Shot 2013-06-09 at 11.08.40 PM

Figure 45. From the list choose a Self Service Template, in this case "SOEDB Service Template".

Options are

  • RMAN backups, which are full clones
  • Empty databases
  • Snap Clone, which are thin clones

Screen Shot 2013-06-09 at 11.09.42 PM

Figure 46. Fill out the clone Service Request

The request wizard asks for:

  • request name
  • select zone – collection of servers
  • select a start and end time
  • provide a user name and password (a new user and password)

Users do not get system access to the databases but instead get a slightly less privileged user who becomes the owner of the database.

Hit Submit

Screen Shot 2013-06-09 at 11.10.25 PM

Figure 47. Shows the new clone database.

 

References

http://www.youtube.com/watch?v=J7fnfLS5Dxg&feature=youtu.be – setup

http://www.youtube.com/watch?v=9VK1z6nU1PU – provisioning

http://www.oracle.com/technetwork/oem/cloud-mgmt/dbaas-snap-netapp-2041775.mp4

http://www.slideshare.net/kellynpotvin/dbaas-database-as-a-service-in-a-dbas-world#!

 

Posted in cloning, em, Oracle
