Delphix focuses on virtualizing non-production environments, easing the pressure on DBAs, resources and budget, but there is a second use case for the product that we don't discuss nearly enough.
Protection from data loss.
Jamie Pope, one of the great guys who works in our pre-sales engineering group, sent Adam and me an article on one of those situations that makes any DBA (or an entire business, for that matter) cringe. GitLab.com was performing some simple maintenance and someone deleted the wrong directory, removing over 300 GB of production data from their system. It appears they were first going to use PostgreSQL's "vacuum" feature to clean up the database, but decided they had extra time to clean up some directories, and that's where it all went wrong. To complicate matters, the onsite backups had failed, so they had to go to offsite ones (and every reader moans...)
Even this morning, you can view the status tweets on the database copy and feel the pain of this organization as they try to put right a simple mistake.
Users are down as they work to get the system back up. Just getting the data copied before they’re able to perform the restore is painful and as a DBA, I feel for the folks involved:
How could Delphix have saved the day for GitLab? Virtual databases (VDBs) are read/write copies derived from a recovered image that is compressed and deduplicated, then kept in a state of perpetual recovery, with transactional data applied to the Delphix Engine source at a set interval (commonly once every 24 hours). We support a large number of database platforms (Oracle, SQL Server, Sybase, SAP, etc.) and are able to virtualize the applications connected to them, too. The interval at which we update the Delphix Engine source is configurable, so depending on network and resources, it can be decreased to apply changes more often, depending on how up to date the VDBs need to be vs. production.
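To make that interval trade-off concrete, here is a minimal sketch in plain Python (a hypothetical helper, not the Delphix API): the freshest data a new VDB can carry is simply the most recent source sync taken before the incident.

```python
from datetime import datetime, timedelta

def last_recoverable_point(sync_times, incident):
    """Return the most recent source sync at or before the incident.

    A VDB provisioned after a production failure can only be as fresh
    as the last sync applied to the Delphix Engine source.
    """
    candidates = [t for t in sync_times if t <= incident]
    return max(candidates) if candidates else None

# Compare a 24-hour sync interval with a 1-hour one: the shorter
# interval bounds the worst-case data-loss window.
start = datetime(2017, 1, 30)
daily = [start + timedelta(hours=24 * i) for i in range(3)]
hourly = [start + timedelta(hours=i) for i in range(49)]
incident = datetime(2017, 1, 31, 23, 30)

print(incident - last_recoverable_point(daily, incident))   # 23:30:00 behind
print(incident - last_recoverable_point(hourly, incident))  # 0:30:00 behind
```

The dates and interval values are made up for illustration; the point is only that the sync interval bounds the worst-case data loss.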
With this technology, we've seen a number of situations where customers suffered a cataclysmic failure in production. Traditionally they would be dependent upon a full recovery from a physical backup via tape (which might be offsite), or would scramble to even find a backup that fit within a backup-to-tape window. Instead, they discovered that Delphix could spin up a brand new virtual database from the last refresh before the incident, and then use a number of viable options to get them up and running quickly.
This type of situation happens more often than we'd like to admit. Many times resources have been working long shifts and make a mistake due to exhaustion; other times someone unfamiliar, with access to something they shouldn't have, simply makes a dire mistake. These things happen, and this is why DBAs are always requesting two or three methods of backup. We learn quite quickly that we're only as good as our last backup, and if we can't protect the data, well, we won't have a job for very long.
Interested in testing it out for yourself? We have a really cool free Delphix trial via the Amazon cloud that uses your AWS account. There's a source host and databases, along with a virtual host and databases, so you can create VDBs, blow away tables, recover via a VDB, and create a V2P (virtual to physical), all on your own.
About a year or more ago, Oracle came out with a way to create thin clone copies of a database in EM 12c, called "Snap Clone".
Not sure this makes working with data any sunnier, and it certainly doesn't add any sunlight to the cloud movement. Snap Clone technology has seen little adoption as far as I know; I'm not aware of a single customer reference yet, besides internal usage at Oracle. Why? Because Snap Clone doesn't solve the real problem. Snap Clone is complex, depends on cobbling other technologies together, and lacks crucial features. The real solution is simplicity and end-to-end integration, from data source syncing all the way to data provisioning. Snap Clone will provision database thin clones using specialized storage snapshots limited to ZFS and Netapp, but is no help for customers without ZFS or Netapp. Even if you have Netapp or ZFS, there is no integrated solution for syncing with source data. Oracle's technology is a storage snapshot management tool for Netapp and ZFS, and simple storage snapshot technology has been around for almost 20 years with little adoption compared to its potential. Snapshots are like the fuel; a data virtualization appliance is like the car.
The real solution harnesses thin cloning technology in a fully integrated and automated platform called a data virtualization appliance. A data virtualization appliance revolutionizes thin cloning technology in the same way the web browser revolutionized the internet. Before the web browser, internet usage was limited to academics using ftp, email and chat rooms. After the browser came out, everyone started using the internet. Similarly, thin cloning has been around for over a decade, but its usage is exploding now that Delphix has provided a data virtualization appliance. In less than 4 years, the Delphix agile data platform's customers have grown to include Walmart, HP, IBM, Facebook, Ebay, Comcast, Macy's, Prudential, Kroger, SAP, Procter & Gamble, Gap, New York Life, McDonald's, Wells Fargo, Bank of America, etc.
Why has Delphix been so successful? Because these customers are seeing improvements like 2x application development output, 10x cost reduction, thousands of hours of DBA work saved a year, and petabytes of disk freed. Delphix is about virtualizing all of enterprise data, be it in a database by Oracle or Microsoft or in flat files, be it on premises or in the cloud. Delphix makes both the data and copies of that data immediately available anywhere. It provides solutions in a few mouse clicks to problems that were extremely difficult and time consuming before. Delphix is free of hardware constraints, running on any storage and commodity Intel hardware. With this kind of technology, Delphix can do cloud migrations, datacenter migrations, DR, regulatory compliance... the sky's the limit. For example:
Delphix is as simple to install as a VMware VM. Everything is run in a few mouse clicks from a simple web UI. After an initial link to source data, in a few minutes users can start provisioning Oracle databases, SQL Server databases, or application data or binaries.
*1 One Oracle blog suggests using Data Guard as the test master, which means adding another technology stack to the mix without any automated management or integration, not to mention requiring more Oracle licensing.
*2 Snap Clone may claim to be storage agnostic, but to use something like EMC VNX or IBM storage, for example, requires installing Solaris on a machine, attaching an EMC LUN to the Solaris box, having Solaris map ZFS onto the storage, and then sharing the storage with EM 12c.
You don’t need to take my advice. Instead, ask them to prove it:
Delphix has years of industry production time and many Fortune 500 customers, including the Fortune #1 who can attest to the robustness and power of the solution. Delphix has an expert and rapid support team that has won praises from their customers. Delphix is the industry standard for data and database virtualization.
Posted in cloning
Performance testing requires full, fresh data
Many organizations don’t even attempt to test performance until very late in their development cycle because it is only in the UAT (or equivalent) environment that they have a full copy of their production data set. Errors and inefficiencies found at this stage are expensive to fix and are often promoted to production due to pressures from the business to meet release schedules.
Delphix customers give each developer, or team of developers, a full, fresh copy of the database where they can validate the performance of their code in the UNIT TESTING phase of their projects. Poorly written queries are identified by the developer and can be fixed immediately, before their code is submitted for integration with the rest of the application. The test/fix iteration is much tighter and results in higher quality, better performing application releases.
How does Delphix enable this?
VDBs created by Delphix have many advantages over a physical database, and therefore can be used in unique ways. Consider the following:
Following are examples of performance changes that can easily be tested in VDBs:
Enabling a physical UAT environment with Delphix
As mentioned above, many Delphix customers will still maintain a final testing environment that matches the production setup exactly. They will have fibre channel (or equivalent) connections to the SAN directly from their DBMS host. Even in this environment, which bypasses the Delphix Engine, our software can provide great benefit to the testing process.
The V2P feature can be used to migrate any data set contained within Delphix to a physical storage environment. That means any data set captured from production, or any data set modified by a VDB can be pushed to UAT in an automated fashion by Delphix. Running a V2P operation is not as fast as creating/refreshing a VDB because it requires data movement, but it is faster than restoring a traditional database backup and automates all the instance creation and configuration tasks.
Bringing it all together
The high level life cycle of performance testing on the Delphix Agile Data Platform looks something like the following:
Oracle 12c introduces the new Pluggable Database (PDB) functionality into the Oracle database. What's the advantage of PDBs? PDBs eliminate the heavy memory overhead of starting up a full Oracle instance, which requires a new SGA and a full set of background processes. Instead, we start up one container database (CDB) and the PDBs all share the resources of the CDB. With a CDB and PDBs, there is one instance, one set of background processes, and one SGA, shared among the PDBs. Starting a PDB only requires about 100 MB of memory, as opposed to the half a GB normally required to start an Oracle instance. Because of this reduced memory cost, one can go from running 50 instances of Oracle in 20 GB of memory to running 250 PDBs in that same 20 GB of memory (numbers given by Oracle benchmarks).
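The arithmetic behind that claim is easy to check. Using the rough per-instance and per-PDB costs quoted above (the absolute counts depend on the exact overheads, but the roughly 5x ratio falls out directly):

```python
# Back-of-envelope math for the consolidation claim above, using the
# rough figures quoted in the text: ~512 MB to start a full instance
# (SGA + background processes), ~100 MB marginal cost per PDB.
RAM_MB = 20 * 1024
INSTANCE_COST_MB = 512
PDB_COST_MB = 100

max_instances = RAM_MB // INSTANCE_COST_MB   # 40
max_pdbs = RAM_MB // PDB_COST_MB             # 204

print(max_instances, max_pdbs, max_pdbs / max_instances)  # 40 204 5.1
```

Oracle's benchmark numbers (50 and 250) are a bit more generous than this naive division, but the "5x more scalable" ratio is the same.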
One of the main use cases for PDBs is cloning a database for use in development and QA. With PDBs, one can easily give every developer in a development team their own copy of a database, except for one thing. Each of those PDBs, even though they take up hardly any memory, still require a full set of datafiles. Creation of these datafiles is slow and costly. That’s where Delphix comes in.
With Delphix, having 10 copies of the same database takes up less than the size of the original database! How is that done? It’s done by compressing the original copy and then sharing all the duplicate blocks in that copy among all the clones.
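A toy content-addressed store (plain Python, not Delphix code) shows the block-sharing idea: every duplicate block is stored once, so ten clones cost little more than one compressed copy.

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: identical blocks are kept only once."""
    def __init__(self):
        self.blocks = {}                      # hash -> block bytes

    def put(self, block: bytes) -> str:
        key = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(key, block)    # duplicate blocks are shared
        return key

store = BlockStore()
source = [b"data" * 256, b"index" * 256, b"data" * 256]   # toy datafile blocks

# Ten "clones" of the same datafile all reference the same stored blocks.
clones = [[store.put(b) for b in source] for _ in range(10)]

logical = sum(len(b) for b in source) * 10             # what 10 full copies would use
physical = sum(len(b) for b in store.blocks.values())  # what is actually stored
print(len(store.blocks), logical // physical)  # 2 unique blocks, ~14x savings
```

The savings ratio here is an artifact of the toy data; the mechanism (shared, deduplicated blocks behind many read/write views) is the point.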
Thus where PDBs give you databases for free at the memory level, Delphix gives you free databases at the disk footprint level, and the combination of the two gives you practically free databases!
Oracle marketing slide showing that 50 separate databases can run in 20 GB of RAM while 250 PDBs can run in the same 20 GB of RAM: "5x more scalable".
Video example of how easy it is with Delphix to link to a PDB in one CDB and then provision it, as a thin clone, to another CDB on a different machine:
Oracle OEM 12c introduces a new feature that enables the creation of Oracle database thin clones by leveraging file system snapshot technologies from either ZFS or Netapp. OEM adds a graphic interface to the process of making database thin clones. The feature that enables database thin cloning in OEM is called Snap Clone and is part of OEM's Cloud Control Self Service for data cloning. Snap Clone is available via the Database as a Service (DBaaS) feature. Snap Clone leverages the copy-on-write technologies available in some storage systems for database cloning. Support is initially available for NAS storage, specifically ZFS Storage and NetApp Storage.
In order to use Snap Clone, one has to install the source database such that its data files are on a ZFS storage or Netapp array and have the storage managed by agents on a Linux machine; one can then thin clone data files on that same storage array.
Snap Clone offers role-based access, so a storage admin can log in and have access only to the areas they are responsible for, and access to source databases, clones and resources by end users can be limited.
The prerequisites for getting started with Snap Clone are having available storage on a ZFS Storage Appliance or Netapp storage array, as well as having access to a test master database. A test master database is a database that holds a sanitized version of a production database, such that it is a subset and/or masked. The test master database has to be registered with OEM. After the test master is registered with OEM, Snap Clone can be set up. To set up Snap Clone, log in to Oracle Cloud Control 12c with the "cloud administrator" role and "storage administrator" privilege (or as super administrator) and register the storage. To register the storage, navigate to "Setup -> Provisioning and Patching -> Storage Registration".
Figure 1. The entry page in OEM 12c Cloud Control. From here, go to the top right and choose Setup, then Provisioning and Patching, then Storage Registration, as shown above.
To set up Snap Clone, navigate to Storage Registration via the menus "Setup -> Provisioning and Patching -> Storage Registration".
Figure 2. A zoom into the menus to choose
Figure 3. Storage Registration Page
Figure 4. Choose the type of storage array
Once on the Storage Registration page, choose “Register” and then choose the storage, either Netapp or Sun ZFS.
Figure 5. Storage Registration Page
To register the storage, supply the following information:
All of this is documented in the Cloud Administration Guide.
Then define the agents used to manage the storage. Agents are required to run on a Linux host. More than one agent can be defined to provide redundancy. The agents will be the path by which OEM communicates with the storage. For each agent, supply the following information:
And finally, define the frequency with which the agent synchronizes with the storage to gather storage hardware details, such as information on aggregates, shares and volumes.
After the storage information, agent information and agent synchronization information have been filled out, hit the "Submit" button in the top right. Hitting the Submit button will return the UI to the "Storage Registration" page. On the "Storage Registration" page, click on the storage appliance listed in the top half, then click on the Contents tab in the bottom half of the page. This will list all the volumes and aggregates in the storage appliance.
Figure 6. Editing storage ceilings by clicking on an aggregate and hitting the "Edit Storage Config" tab.
For each aggregate, one can set storage ceilings. Click on the aggregate or FlexVol, and then click the "Edit Storage Ceilings" tab.
On the Databases tab is a list of databases that can be used for cloning. OEM detects the databases automatically on the hosts it is managing. OEM will also automatically correlate databases that have storage on the storage array added during storage registration: it looks for all databases that have files on the registered storage. Click on a database, then the "Show Files" tab, which will show the files and volumes for this database.
Figure 7. List of files by volume for a database
Figure 8. Enable Snap Clone for databases that will be used as test masters.
Nominating a database as a test master requires enabling Snap Clone. To enable Snap Clone for a database, click on the chosen database, then click the "Enable Snap Clone" tab just above the list of databases. This will automatically validate that all the volumes are FlexClone enabled (in the case of Netapp).
The next step is to configure zones, which can be used to organize cloud resources.
Choose the menu option "Enterprise -> Cloud -> Middleware and Database Home".
Figure 9. Navigate first to “Cloud -> Middleware and Database Home”
Figure 10. Middleware and Database Cloud page
In order to see the zones defined, click on the number next to the title "PaaS Infrastructure Zones" in the top left under General Information.
Figure 11. PaaS Infrastructure Zones
To create a zone, click the tab “Create”.
Figure 12. First page of the wizard to create a PaaS Infrastructure Zone: give a meaningful name and description for the zone and define maximum CPU utilization and memory allocation.
In the first page of the "PaaS Infrastructure Zone" wizard, give the zone a meaningful name and description. Define constraints such as maximum host CPU and memory allocations.
Figure 13. Second page of the “PaaS Infrastructure Zone” wizard, add hosts that are available to the zone.
Next, define the hosts that are members of the zone and provide credentials that operate across all members of this zone.
Figure 14. Third page of the “PaaS Infrastructure Zone” wizard, limit which roles can see the zone.
Next, define which roles can see and access this zone. The visibility of the zone can be limited to a certain class of users via roles like Dev, QA, etc.
Figure 15. Final review page for “PaaS Infrastructure Zone” wizard
Finally, review the settings and click Submit.
Figure 16. Confirmation that the PaaS Infrastructure Zone has been successfully created.
The remaining step required to enable Snap Clone is to create a database pool, which is a collection of servers or nodes that have database software installed. This part of the setup is done by a different user, the administrator for Database as a Service.
Log in as DBAAS_ADMIN.
For the next part navigate to the menu “Setup -> Cloud -> Database”.
Figure 17. Middleware and Database Cloud page
Figure 18. Navigate to “Setup -> Cloud -> Database”.
Figure 19. Database Cloud Self Service Portal Setup. To create a database pool, choose the "Create" button in the center of the page and, from the pull-down, choose "For Database".
To create a new pool, click on the "Create" button in the center of the page, and choose "For Database" from the pull-down menu that appears.
Figure 20. Choose “Create -> For Database”
Figure 21. Edit Pool page. Provide a meaningful name and description of the pool. Add Oracle home directories in the bottom of the page. At the very bottom of the page, set a constraint on the number of database instances that can be created in the pool. On the top right, set the host credentials.
In the "Edit Pool" page, at the top left of the screen, provide a meaningful name and description for the pool. In the middle of the screen, add the Oracle homes that will be used for database provisioning. Every member of a database pool is required to be homogeneous: the platform and Oracle version must be the same across all the hosts and Oracle homes in the pool, and all the Oracle installations have to be of the same type, either single instance or RAC. In the top right, provide the Oracle credentials and root credentials for the Oracle home. Finally, at the bottom of the page, a constraint can be set on the number of database instances that can be started in this pool.
Figure 22. Set request limits on the pool
The next page sets the request settings. The first restriction sets how far in advance a request can be made. The second restricts how long a request is kept, which is the archive retention; after the archive retention time, the requests will be deleted. Finally there is the request duration, which is the maximum duration for which a request can be made.
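As a sketch of how those three limits combine (hypothetical policy values and function names, not OEM's implementation):

```python
from datetime import datetime, timedelta

# Hypothetical policy values mirroring the three request settings above.
MAX_ADVANCE = timedelta(days=30)        # how far in advance a request can be made
ARCHIVE_RETENTION = timedelta(days=90)  # how long a completed request is kept
MAX_DURATION = timedelta(days=14)       # maximum duration of a request

def validate_request(now, start, end):
    """Check a clone request against the pool's request settings."""
    if start - now > MAX_ADVANCE:
        return "rejected: too far in advance"
    if end - start > MAX_DURATION:
        return "rejected: duration exceeds maximum"
    return "accepted"

def is_expired(completed_at, now):
    """A completed request is purged once past the archive retention."""
    return now - completed_at > ARCHIVE_RETENTION

now = datetime(2014, 6, 1)
print(validate_request(now, now + timedelta(days=2), now + timedelta(days=9)))
print(validate_request(now, now + timedelta(days=60), now + timedelta(days=61)))
```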
Figure 23. Set memory and storage quotas per role for the pool. The quotas cover memory, storage, database requests and schema requests.
The above page configures quotas. A quota is allocated to each and every self service user. Quotas control the amount of resources users have access to. Quotas are assigned to a role, and users inherit quota values from the role. Click "Create" in the middle of the screen.
Figure 24. Editing the quotas on a pool for a role.
The popup dialog has fields for these entries:
Figure 25. Profiles and Service Templates
Profiles and service templates
A profile is used to capture information about the source database, which can then be used for provisioning.
A service template is a standardized service definition for a database configuration that is offered to the self service users. A collection of service templates forms the service catalogue. A service template will provision a database with or without seed data. To capture an ideal configuration, the easiest thing to do is to point at an existing database and fetch the information of interest from that database. The information from the database can be captured using a profile.
To create a profile, click on the "Create" button under "Profiles".
Figure 26. Specify a reference target for a profile.
Click the magnifying glass to search for a source database.
Figure 27. Search for a reference target database
Pick a reference target by clicking on it, then click the "Select" button in the bottom right.
Figure 28. Creating a Database Provisioning Profile
To pick a database for use in thin cloning, check the "Data Content" box, with the sub-option "Structured Data" selected, its sub-option "Create" selected, and its sub-option "Storage Snapshot" selected. This option is enabled only when the "enable snapshot" option is enabled on the storage registration page. Disable the "Capture Oracle Home" option.
Provide credentials for the host machine's Oracle account and for the database login.
The “Content Option” step is not needed for the above selections.
Figure 29. Give the profile a meaningful name and description
Next, provide credentials for the Oracle home and the Oracle database, then provide a meaningful name for the profile as well as a location. The profile will be useful when creating a service template.
Figure 30. Review of create profile options.
Next, review the summary and click Submit, which will connect to the storage and take snapshots of the storage.
Figure 31. Shows a zoom into the menus to choose
To create a new service template, choose a profile; in this case use the "thin provisioning for reference DB" profile. Now, to create the new service template, click "Create" and choose "For Database". Service templates are part of the service catalogue and are exposed to the self service users.
Figure 32. Provide a meaningful name and description for the service template.
Provide a meaningful name and description. For the rest of the service template, provide information about the databases that will be created from the snapshots, such as the database type, RAC or single instance (for RAC, provide the number of nodes). Provide the SID prefix used for the SIDs generated for the clones, the domain name and the port.
Figure 33. Provide a storage area for writes to use.
The cloning operation creates only a read-only copy, thus it is required to provide write space elsewhere in order to allow writing to the thin clone.
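This read-only base plus private write space is the copy-on-write pattern; a toy Python model (not the actual storage layer) of what the writeable area does:

```python
class ThinClone:
    """Toy copy-on-write view: reads fall through to the shared read-only
    base image; writes land in this clone's private write space."""
    def __init__(self, base):
        self.base = base       # shared, read-only snapshot blocks
        self.writes = {}       # per-clone writeable layer

    def read(self, block_no):
        return self.writes.get(block_no, self.base[block_no])

    def write(self, block_no, data):
        self.writes[block_no] = data   # the base is never modified

base = {0: b"SYSTEM", 1: b"USERS"}
c1, c2 = ThinClone(base), ThinClone(base)
c1.write(1, b"USERS v2")

# Clone 1 sees its own write; clone 2 and the base are untouched.
print(c1.read(1), c2.read(1), base[1])
```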
Click on the "Edit" button.
Click on the volume, then the "Edit" button.
Figure 34. Set the directory and maximum space usage for the write location.
Provide the mount point prefix and the amount of writeable space you wish to allocate.
Users of the thin clone databases can also be allowed to take further snapshots. These snapshots can be used as a mechanism to roll back changes. The number of these snapshots can be limited just below the storage size section:
Figure 35. Set the number of snapshots that can be taken of a thin clone.
Figure 36. Set the initial passwords for database accounts on the thin clone.
Next, provide credentials for the administrative accounts.
For all other non-administrative accounts, you can choose to leave the passwords as is or change them all to one password. You can also modify certain init.ora parameters, for example memory settings.
Figure 37. Modify any specific init.ora parameters for the thin clone
Figure 38. Set any pre and post provision scripts to be run at the creation of a thin clone.
Custom scripts can be provided as pre- or post-creation steps. This can be very useful if you want to register databases with OID or perform certain actions that are specific to your organization.
Figure 39. Set the zone and pool for the thin clone
Figure 40. Set the roles that can use the service template.
You associate this service template with a zone and a pool. This ensures that the service template can actually work on the group of resources that you have identified, and you can limit the visibility of the service template using roles.
Figure 41. Review of the service template creation request
Finally, review the summary and click Submit.
Figure 42. 12c Cloud Control
Figure 43. 12c Cloud Control Self Service Portal
Contents of the Database Cloud Self Service Portal screen
Figure 44. To clone a database, choose the “Request” then “Database” menu.
Figure 45. From the list choose a Self Service Template. In this case “SOEDB Service Template”
Figure 46. Fill out the clone Service Request
The request wizard asks for:
Users do not get SYSTEM access to the databases; instead they get a slightly less privileged user who becomes the owner of the database.
Figure 47. Shows new clone database
http://www.youtube.com/watch?v=9VK1z6nU1PU – provisioning