Now that I’ve loaded a ton of transactions and generated a heavy workload on my source database with the SH sample schema and Swingbench, and noted how little impact the different cloud tools had on the databases (more on that in a few later posts), I’m going to show you how easy it is to create a new VDB from all of this, WITH the new SH data included. During all of this time, the primary users of my Delphix VDBs (virtualized databases) would have been working in the previous image, but someone wants that new SH schema now that my testing has completed.
To do this, I open up my Delphix admin console (using the IP address for the Delphix Engine from the AWS Trial build output), log in as delphix_admin, and open up the Source group to access the Employee Oracle 11g database (aka ORCL).
I know my new load is complete on the ORCL database, and I need to take a new snapshot to update the Delphix Engine outside of the standard refresh interval (I’m set at the default of every 24 hrs). To do this, I click on the Configuration tab, and to take a snapshot, I simply click on the camera icon.
A snapshot takes only a couple of seconds, as this is a very, very small database (2.3G). You can then click on Timeflow to view the new snapshot available for use. Ensure the new snapshot is chosen by moving the slider all the way to the right and checking the timestamp, making sure it’s the latest, matching your recent one.
Click on Provision. The wizard defaults to the Source host, so change it to the target, update to a new, preferred database name (if you don’t like the default), and then you may have to scroll down to see the Next button to go through the subsequent steps in the wizard. I know my MacBook has a smaller screen and I do have to scroll to see the Next button. After you’ve made any other changes, click on Finish and let the job run. Don’t be surprised by the speed at which a VDB is provisioned. I know it’s really fast, but it really did create a new VDB!
Now that we have it, let’s connect to it from SQL*Plus and prove that we got the new SH schema over.
Using the IP address for the Linux target that was given to us in our AWS Trial build, let’s connect:
ssh delphix@<linuxtarget IP Address>
Did you really just create a whole new VDB?
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix   1148  1131  0 18:57 pts/0    00:00:00 grep pmon
delphix  16825     1  0 Mar09 ?        00:00:06 ora_pmon_devdb
delphix  16848     1  0 Mar09 ?        00:00:06 ora_pmon_qadb
delphix  31479     1  0 18:30 ?        00:00:00 ora_pmon_VEmp6C0
Yep, there it is…
Now let’s connect to it.
Set our environment:
Set the ORACLE_SID to the new VDB
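On the Linux target, that environment setup might look like the following sketch (VEmp6C0 is the VDB name visible in the pmon listing above; substitute whatever name you chose when provisioning):

```shell
# Point the environment at the new VDB (SID taken from the pmon listing)
export ORACLE_SID=VEmp6C0
export ORAENV_ASK=NO
. oraenv          # sets ORACLE_HOME and PATH for this SID
```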
Connect to SQL*Plus as our SH user, using the password we set when the schema was created on the source database, ORCL:
$ sqlplus sh
Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE         COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16

8 rows selected.

SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%'
  3  group by table_name
  4  order by table_name;

TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826

12 rows selected.
Well, lookie there, the same as the source database we loaded earlier... 🙂 Next, we’ll go into the stuff that always gets my interest- performance data, the cloud and visualization tools.
Most people know I like to do things the hard way… 🙂
When it comes to learning things, there’s often no better way to gain a strong understanding than to have to take something apart and put it back together. I wanted to use Swingbench on my AWS deployment and needed the Sales History (SH) sample schema. I don’t have an interface to perform the installation via the configuration manager, so I set out to install it manually.
Surprise: the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing. There are a couple of options to solve this dilemma. Mine was to first get the sample schemas. You can retrieve them from a handy GitHub repository maintained by Gerald Venzl.
I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.
scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.
Next, I extracted the files to the $ORACLE_HOME/demo directory.
The unzip creates a directory called db-sample-schemas-master, which is fine with me, as I like to retain the previous one. (I’m a DBA, so keeping copies of data until I’m sure I don’t need them is my life.)
mv schema schema_kp
mv db-sample-schemas-master schema
With that change, everything is now as it should be. One thing you’ll find out is that the download is for 12c, and I’m alright with this, as the Swingbench I’m using is expecting SH for 12c, too. Not that I expect any differences, but as Jeff Smith was all too happy to remind me on Facebook, I’m using a decade-old version of Oracle on my image here.
There are a lot of scripts in the sales_history folder, but all you’ll need to run is sh_main.sql from SQL*Plus as sysdba to create the SH schema.
sh_main.sql prompts for several parameter values when you create the SH schema manually, and the expected values aren’t always obvious from the prompt wording. As I’ve seen very little written about it (even after all these years of this existing), this may help others out:
specify password for SH as parameter 1:
Self-explanatory: what password would you like the SH user to have?
specify default tablespace for SH as parameter 2:
What tablespace do you want this created in? I chose Users, as this is just my play database.
specify temporary tablespace for SH as parameter 3:
Temp was mine and is the common value for this prompt.
specify password for SYS as parameter 4:
This is the password for SYSTEM, not SYS, btw.
specify directory path for the data files as parameter 5:
This is not the path to Oracle datafiles; it’s the path to your SH directory ($ORACLE_HOME/demo/schemas/sales_history/) for access to the control files and dat files for SQL*Loader. Remember to have a slash at the end of the path name.
writeable directory path for the log files as parameter 6:
A directory for log files. I put this in the same directory; remember to use a slash at the end, or your log files will have the previous directory as the beginning of the file name and will save one directory up.
specify version as parameter 7:
This isn’t the version of the database, but the version of the sample schema; the one from GitHub is “v3”.
specify connect string as parameter 8:
Pretty clear: the service name or connect string for the database that the schema is being created in.
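Putting those prompts together, a sample run might look like this (the paths and passwords are illustrative only; note the trailing slashes on both directory parameters):

```shell
# Run sh_main.sql from the sales_history directory as sysdba.
# The answers below are example values, not required ones.
cd $ORACLE_HOME/demo/schema/sales_history
sqlplus / as sysdba @sh_main.sql
# 1: SH user password      -> MySHPassw0rd
# 2: default tablespace    -> USERS
# 3: temporary tablespace  -> TEMP
# 4: SYSTEM password       -> MySysPassw0rd
# 5: data file directory   -> $ORACLE_HOME/demo/schema/sales_history/  (trailing slash!)
# 6: log file directory    -> $ORACLE_HOME/demo/schema/sales_history/  (trailing slash!)
# 7: version               -> v3
# 8: connect string        -> ORCL
```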
I then ran into some errors, but it was pretty easy to view the log and then the script and see why:
SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"
Well, the scripts (psh_v3.sql, lsh_v3.sql, and csh_v3.sql) called by sh_main.sql are being looked for under that placeholder path, so we need to get rid of the subdirectory paths that don’t exist in the 11g environment.
Viewing sh_main.sql, you’ll see three paths to update. Below is an example of one section of the script; the portion to be removed is the __SUB__CWD__/sales_history/ prefix:
REM Post load operations
REM =======================================================
DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs
@&vscript
The DEFINE will now look like the following, so it looks in the current sales_history directory if you haven’t been pointing to the $ORACLE_HOME/demo directory:
DEFINE vscript = psh_&vrs
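If you’d rather not edit the three DEFINEs by hand, a sed one-liner can strip the placeholder prefix. This is a sketch that assumes the __SUB__CWD__/sales_history/ prefix seen in the error above; back up the script first:

```shell
cd $ORACLE_HOME/demo/schema/sales_history
cp sh_main.sql sh_main.sql.bak                # keep a copy (DBA habit)
# Remove the placeholder prefix from all three DEFINE lines at once
sed -i 's|__SUB__CWD__/sales_history/||g' sh_main.sql
```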
Once you’ve saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before it does the create. If no other changes need to be made to your parameters, just execute sh_main.sql again; if you need to change the values you entered, it’s quickest to exit from SQL*Plus and reconnect to unset them.
Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file, and if all was successful, connect as the SH user with SQL*Plus and check the schema:
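A quick way to scan that log is to grep for the standard Oracle and SQL*Plus error prefixes (no output means a clean run):

```shell
# Print any Oracle (ORA-) or SQL*Plus (SP2-) errors from the build log
grep -E 'ORA-|SP2-' $RUN_HOME/sh_v3.log
```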
SQL> select object_type, count(*) from user_objects
  2  group by object_type;
OBJECT_TYPE         COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;
TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826
And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data. I’ll try to blog on Swingbench connections and data collection next time.
See everyone at UTOUG Spring Training Days on Monday, March 13th, and at the end of the week I’ll be in Iceland at SQL Saturday #602!