I’ve been busy reading and testing everything I can with Delphix whenever I get a chance. I’m incredibly fascinated by copy data management, and the idea of doing this with Exadata is nothing new, as Oracle has its own version with sparse copies. The main challenge is that Exadata’s version is clunky and doesn’t have the management user interface that Delphix offers.
An Exadata comes with a lot of disk, not just CPU, network bandwidth, and memory. You can’t utilize storage offloading with a virtualized database, but you may not need to. The goal is to create a private cloud in which small storage silos serve virtualized environments. We all know that copy data management is a huge issue for IT these days, so why not make the most of your Exadata, too?
With Delphix, you can even take an external source and provision a copy to an Exadata in a matter of minutes, utilizing very little storage. You can also refresh, roll back, version, and branch through the user interface provided.
I simulated two different architecture designs for how Delphix would work with Exadata: the first with the Virtual Databases (VDBs) hosted on a second Exadata, and the second with the VDBs on standard hardware and only the dSource coming from an Exadata.
VDBs On A Second Exadata
- Production on EXADATA
- Standard RMAN sync to Delphix
- VDBs hosted on EXADATA DB compute nodes
- 10Gb NFS is standard connectivity on EXADATA
VDBs on Standard Storage, Source on Exadata
- Production on EXADATA, standard RMAN sync to Delphix
- VDBs hosted on commodity x86 servers
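In both designs, the 10Gb NFS link is what presents the Delphix-managed storage to the VDB hosts. As a rough sketch, a share exported by the Delphix Engine might be mounted on an Exadata compute node (or a commodity x86 host) with the NFS options Oracle commonly documents for database files; the engine name and paths below are hypothetical, illustrative values only:

```shell
# Illustrative config fragment -- hostname and paths are made up.
# Mounts a Delphix-exported share using typical Oracle-on-NFS options
# (NFSv3 over TCP, hard mounts, 1 MB read/write sizes).
mount -t nfs \
  -o rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600 \
  delphix-engine:/domain0/group1/vdb1 /u02/app/oracle/oradata/vdb1
```

In practice Delphix handles these mounts for you during provisioning; the fragment just shows what the connectivity bullet above translates to at the OS level.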
How Does It All Work?
Now we need to capture our gold copy to use for the dSource. This requires space, but Delphix compresses the data, so the dSource will be considerably smaller than the original database it’s using for the data source.
If we then add the storage utilized by ALL the VDBs to that of the dSource, you’d see that together they use only about the same amount of space as the original database! Each of these VDBs interacts with the user independently, just as a standard database copy would. They can be at different points in time, track different snapshots, have different hooks (pre- or post-scripts run for that copy), and contain different data, which is just different blocks, so changed blocks are the only additional space consumed. Pretty cool if you ask me!
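A toy back-of-the-envelope version of that storage math (every number here is my own assumption for illustration, not a Delphix measurement) shows why the dSource plus all the VDBs can land near the size of the original database:

```python
# Hypothetical figures -- a sketch of the shared-block storage math only.
source_db_gb = 1000          # size of the production database
compression_ratio = 0.33     # assumed dSource compression (~3:1)
num_vdbs = 5                 # VDBs provisioned from the dSource
changed_gb_per_vdb = 30      # assumed unique changed blocks per VDB

dsource_gb = source_db_gb * compression_ratio        # 330 GB
vdb_delta_gb = num_vdbs * changed_gb_per_vdb         # 150 GB of deltas
total_gb = dsource_gb + vdb_delta_gb                 # 480 GB in all

full_copies_gb = num_vdbs * source_db_gb             # 5000 GB the old way

print(f"dSource + {num_vdbs} VDBs: {total_gb:.0f} GB")
print(f"{num_vdbs} full copies:    {full_copies_gb:.0f} GB")
```

With these assumed numbers, five virtual copies plus the compressed dSource come in under half the size of the single source database, versus five full terabyte-scale copies the traditional way.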