Most people know I like to do things the hard way… 🙂
When it comes to learning, there's often no better way to gain a strong understanding than to take something apart and put it back together. I wanted to use Swingbench on my AWS deployment and needed the Sales History (SH) sample schema. I don't have an interface to perform the installation via the configuration manager, so I set out to install it manually.
Surprise: the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing. There are a couple of options to solve this dilemma; mine was to first get the sample schemas. You can retrieve them from the handy GitHub repository maintained by Gerald Venzl.
I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.
scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.
Next, I extracted the files to the $ORACLE_HOME/demo directory.
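Assuming the zip landed in /home/oracle (as in the scp above), the extraction looks something like this:

```shell
# Assumes the zip was copied to /home/oracle as shown above
# and that ORACLE_HOME is set in the environment.
cd $ORACLE_HOME/demo
unzip /home/oracle/db-sample-schemas-master.zip
```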
The unzip creates a directory named db-sample-schemas-master, which is fine with me, as I like to retain the previous one. (I'm a DBA, so keeping copies of data until I'm sure I don't need them is my life.)
mv schema schema_kp
mv db-sample-schemas-master schema
With that change, everything is now as it should be. The one thing you'll find out is that the download is for 12c, and I'm alright with this, as the Swingbench I'm using expects the 12c SH schema, too. Not that I expect any differences, but as Jeff Smith was all too happy to remind me on Facebook, I'm using a decade-old version of Oracle on my image here.
There are a lot of scripts in the sales_history folder, but all you need to run is sh_main.sql from SQL*Plus as sysdba to create the SH schema.
sh_main.sql prompts for parameter values whose meanings aren't always obvious from the prompt text, and since I've seen very little written about them (even after all these years of this existing), this walkthrough may help others out:
specify password for SH as parameter 1:
Self-explanatory: the password you'd like the SH user to have.
specify default tablespace for SH as parameter 2:
What tablespace do you want the schema created in? I chose USERS, as this is just my play database.
specify temporary tablespace for SH as parameter 3:
TEMP was mine, and it's the common value for this prompt.
specify password for SYS as parameter 4:
This is the password for SYSTEM, not SYS, by the way.
specify directory path for the data files as parameter 5:
This is not the Oracle datafile location; it's the path to your SH directory ($ORACLE_HOME/demo/schema/sales_history/) so the script can access the control files and .dat files for SQL*Loader. Remember to put a slash at the end of the path name.
writeable directory path for the log files as parameter 6:
A directory for the log files. I put this in the same directory; remember to use a slash at the end, or your log files will have the previous directory as the beginning of the file name and be saved one directory up.
specify version as parameter 7:
This isn’t the version of the database, but the version of the sample schema; the one from GitHub is “v3”.
specify connect string as parameter 8:
Pretty clear: the service name or connect string for the database the schema is being created in.
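Since SQL*Plus fills the &1 through &8 substitution variables from script arguments, the answers can also be passed positionally on the command line instead of at the prompts. A sketch, with example values for my sandbox (passwords, paths, and the connect string are placeholders; substitute your own):

```shell
cd $ORACLE_HOME/demo/schema/sales_history

# Parameter order: SH password, default tablespace, temp tablespace,
# SYSTEM password, data file path, log file path, schema version,
# connect string. Trailing slashes on the paths matter.
sqlplus / as sysdba @sh_main.sql sh_pwd USERS TEMP sys_pwd \
  $ORACLE_HOME/demo/schema/sales_history/ \
  $ORACLE_HOME/demo/schema/sales_history/ \
  v3 orcl
```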
I then ran into some errors, but it was pretty easy to view the log and then the script and see why:
SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"
Well, the scripts (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called by sh_main.sql are being looked for under a path prefix, so we need to get rid of the subdirectory paths that don’t exist in the 11g environment.
If you view sh_main.sql, you’ll see three paths to update. Below is an example of one section of the script; the part to be removed is the __SUB__CWD__/sales_history/ prefix on the DEFINE:
REM Post load operations
REM =======================================================
DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs
@&vscript
After the edit, the DEFINE will look like the following, so the script is found in the sales_history directory you’re running from:
DEFINE vscript = psh_&vrs
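Rather than editing the three DEFINEs by hand, a sed one-liner can strip the placeholder prefix in one pass. A sketch, assuming you're working in the renamed $ORACLE_HOME/demo/schema/sales_history directory; back the file up first:

```shell
cd $ORACLE_HOME/demo/schema/sales_history

# Keep a copy of the original before editing (DBA habit).
cp sh_main.sql sh_main.sql.orig

# Remove the literal __SUB__CWD__/sales_history/ template prefix
# from all three DEFINE lines at once.
sed -i 's|__SUB__CWD__/sales_history/||g' sh_main.sql
```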
Once you’ve saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before the create. If no changes need to be made to your parameter values, just execute sh_main.sql again; if you do need to change them, it’s quickest to exit SQL*Plus and reconnect, which unsets the values.
Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and, if all was successful, connect as the SH user with SQL*Plus and check the schema:
SQL> select object_type, count(*) from user_objects
  2  group by object_type;
OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX PARTITION            196
TABLE PARTITION             56
LOB                          2
DIMENSION                    5
MATERIALIZED VIEW            2
INDEX                       30
VIEW                         1
TABLE                       16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;
TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV                        48
CHANNELS                                   5
COSTS                                 164224
COUNTRIES                                 23
CUSTOMERS                              55500
FWEEK_PSCAT_SALES_MV                   11266
PRODUCTS                                  72
PROMOTIONS                               503
SALES                                1837686
SALES_TRANSACTIONS_EXT                916039
SUPPLEMENTARY_DEMOGRAPHICS              4500
TIMES                                   1826
And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data. I’ll try to blog on Swingbench connections and data collection next time.
See everyone at UTOUG for Spring Training Days on Monday, March 13th, and at the end of the week I’ll be in Iceland at SQL Saturday #602!