July 31st, 2017 by dbakevlar

This is Part III in a four-part series on how to:

  1.  Enable VNC Viewer access on Amazon EC2 hosts.
  2.  Install DB12c and upgrade a Dsource for Delphix from 11g to 12c, (12.1)
  3.  Update the Delphix Configuration to point to the newly upgraded 12c database and the new Oracle 12c home.
  4.  Install DB12c and upgrade target VDBs for Delphix residing on AWS to 12.1 from the newly upgraded source.

In Part II, we finished upgrading the Dsource database, but now we need to get it configured on the Delphix side.

Log into the Delphix Admin console to make the changes required to recognize the Dsource is now DB12c and has a new Oracle home.

Log into the Delphix console as the Delphix_Admin user and go to the Manage –> Environments.

Click on the Refresh button and let the system recognize the new Oracle Home for DB12c:

Once complete, you should see the 12.1 installation we performed on the Linux Source now listed in the Environments list.

Click on Manage –> Datasets and find the Dsource 11g database and click on it.

Click on the Configuration tab and click on the Upgrade icon, (a small up arrow in the upper right.)

Update to the new Oracle Home that will now be listed in the dropdown and scroll down to save.

Now click on the camera icon to take a snap sync to ensure everything is functioning properly.  This should only take a minute to complete.

The DSource is now updated in the Delphix Admin console and we can turn our attention to the Linux target and our VDBs that source from this host.  In Part IV we’ll dig into the other half of the source/target configuration and how I upgraded Delphix environments with a few surprises!

 

Posted in AWS, Delphix, Oracle

July 26th, 2017 by dbakevlar

I’m finally getting back to upgrading the Linux Source for a POC I’m doing with some folks, picking up from where we left off in Part I.

Address Display Issue

Now that we have our VNC Viewer working on our Amazon host, the first thing we’ll try is to run the Oracle installer, (unzipped location –> database –> runInstaller) but it’s going to fail because we’re missing the xdpyinfo utility.  To verify this, you’ll need to open up a terminal from Application –> System Tools –> Terminal:

$ ls -l /usr/bin/xdpyinfo
 ls: /usr/bin/xdpyinfo: No such file or directory

We’ll need to install this with yum:

$ sudo yum -y install xorg-x11-utils

Once we’ve completed this, let’s verify our display:

$ echo $DISPLAY

:1.0 <– (an empty host before the colon means local, and the first number is the display number, just as ipaddress:display for your VNC Viewer connection.)
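
If the variable comes back empty or pointing at the wrong display, you can set it for the session.  This is just a quick sketch assuming display 1 from our VNC setup:

$ export DISPLAY=:1.0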

If it’s correct, you can test it by executing xclock:

$ xclock

The clock should appear on the screen if the display is set correctly.

Install Oracle 12c

Run the installer:

$ ./runInstaller

The installer will come up for Oracle 12c and you can choose to enter your information, but I chose to stay uninformed… 🙂  I chose to install AND upgrade the database to DB12c from 11g.

I also chose to ignore the warnings for swap and a few libraries by clicking Ignore All, then proceeded with the installation.

Root.sh and the Trace Analyzer

Once the installation of the new Oracle Home is complete, choose to run the root.sh script when prompted:

$ sudo /u01/app/oracle/product/12.1/db_1/root.sh

Choose to overwrite all files when prompted by the script.  Installing the Oracle Trace File Analyzer is up to you, but I chose to install it so I can check it out at a later date.  You’ll then be prompted to choose the database to upgrade.  We’re going to upgrade our source database, ORCL in this example.

Upgrade Our Oracle DSource(Database)

Choose to proceed with the upgrade on the database, but know that you’ll require more space for the archive logs that are generated during the upgrade.  The check will tell you how much to add, but I’d add another 1GB to ensure you’re prepared for the other steps we’ll run as we go through the preparation.

Log into SQL Plus as SYSDBA to perform this step:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=8100M;
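If you’d like to see how the recovery area is currently sized and used before bumping the value, a quick sketch using the standard v$recovery_file_dest view looks like this:

SELECT name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb
FROM v$recovery_file_dest;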

Go through any warnings, but steps like stats collection and grants on triggers will have to be performed after the upgrade.

Drop the OLAP catalog:

$ sqlplus / as sysdba

@$ORACLE_HOME/olap/admin/catnoamd.sql

exit

Remove the OEM catalog for Enterprise Manager, first shutting down the console from the terminal:

$ emctl stop dbconsole

Copy emremove.sql from the 12c Oracle home’s rdbms/admin directory and place it in the same location in the 11g home.  Log into SQL Plus as SYSDBA:
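Just as a sketch, assuming the 12c home used in this series and a standard 11g home location, the copy looks something like:

$ cp /u01/app/oracle/product/12.1/db_1/rdbms/admin/emremove.sql /u01/app/oracle/product/11.2.0/db_1/rdbms/admin/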

SET ECHO ON;

SET SERVEROUTPUT ON;

@$ORACLE_HOME/rdbms/admin/emremove.sql

Empty the recyclebin post these steps:

purge recyclebin;

The assumption is that you have a backup prepared, or that you can use flashback with the resources you’ve allocated, before proceeding with the upgrade.

Choose to upgrade the 11g listener and choose to install EM Express if you’d like to have that for monitoring.  Keep the default checks on the following window so everything we need is updated and stats are collected before the upgrade runs, ensuring it proceeds efficiently through all required objects.

Choose to proceed with the upgrade and, if you’ve followed these instructions, you should find a successful installation of DB12c and upgrade of the database.  Keep in mind, we’re not going multi-tenant in this upgrade example, so if you were looking for those steps, the POC I’m building isn’t going to take that on in this set of blog posts.

Post Upgrade Steps:

Update your environment variables, including copying the 11g.env to a new profile called 12c.env and updating the Oracle Home.  Now set your environment and log into SQL Plus as SYSDBA to the upgraded database.
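A rough sketch of what that looks like, assuming the env files live in the oracle user’s home directory and the 12c home path used earlier:

$ cp ~/11g.env ~/12c.env
$ vi ~/12c.env     <– update ORACLE_HOME to /u01/app/oracle/product/12.1/db_1
$ . ~/12c.env
$ sqlplus / as sysdba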

Update all the necessary dictionary and fixed stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

Now, for our next post, we’ll need to perform the same VNC Viewer configuration on our Amazon target host that we did for the Source and then install Oracle DB12c on the target server as we did in this post.  Then we’ll discuss how to get all our Delphix VDBs, (virtual databases) upgraded to match our source in no time!

 

Posted in AWS, Delphix, Oracle

July 13th, 2017 by dbakevlar

For a POC that I’m working on with the DBVisit guys, I needed a quick 12c environment to work on and have at our disposal as required.  I knew I could build out an 11g one in about 10 minutes with our trusty free trial, but would then need to upgrade it to 12c.

Disable snapshots to Delphix Engine

This is a simple prerequisite before you upgrade an Oracle source database.  It takes pressure off the system and avoids confusion while the database is upgraded to the new Oracle home.

Simply log into the Delphix Admin console, click on the source group that the source database belongs to and, under Configuration on the right hand side, you’ll see a slider that needs to be moved to the “disable” position to stop taking interval snapshots.

Configure GUI for Simplified Oracle Installation

EC2 doesn’t come default with the GUI interface, so we just need to install it on the host to make life a little easier for the upgrade:

  •  Check for updates:
[delphix@linuxsource database]$ sudo yum update -y

….

  xfsprogs.x86_64 0:3.1.1-20.el6                                                
  yum.noarch 0:3.2.29-81.el6.centos                                             
  yum-plugin-fastestmirror.noarch 0:1.1.30-40.el6                               
Replaced:
  python2-boto.noarch 0:2.41.0-1.el6                                            
Complete!
  • Install the desktop:
[delphix@linuxsource database]$ sudo yum groupinstall -y "Desktop"

  xorg-x11-xkb-utils.x86_64 0:7.7-12.el6                                        
  xulrunner.x86_64 0:17.0.10-1.el6.centos                                       
  zenity.x86_64 0:2.28.0-1.el6                                                  
Complete!
  • Install dependencies like fonts needed:
[delphix@linuxsource database]$ sudo yum install -y pixman pixman-devel libXfont
[delphix@linuxsource database]$ sudo yum -y install tigervnc-server

Each of the above should show completed successfully.

  • Set the password for the VNC:
[delphix@linuxsource database]$ vncpasswd
Password:
Verify:
  • Restart the SSHD Service:

[delphix@linuxsource database]$ sudo service sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]
  • Configure VNC Server properties with SUDO privs:
[delphix@linuxsource database]$ sudo vi /etc/sysconfig/vncservers
VNCSERVERS="1:delphix"
VNCSERVERARGS[1]="-geometry 1280x1024"

Save and exit the vncservers configuration file.

  • Start VNC Server:
[delphix@linuxsource database]$ sudo service vncserver start

….

Log file is /home/delphix/.vnc/linuxsource.delphix.local:1.log
                                                           [  OK  ]
  • I’m the only one who will be accessing this host to perform these types of tasks, so I’ll use port 5901 and add a firewall rule:
[delphix@linuxsource database]$ sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT
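If you want the firewall rule to survive a reboot, (a sketch, assuming the stock iptables service on this CentOS/RHEL 6 style host) save the running rules:

[delphix@linuxsource database]$ sudo service iptables save

Display 1 listens on port 5901, display 2 on 5902 and so on, so the VNC Viewer connection will be <public IP>:5901 or simply <public IP>:1.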

You can now use the VNC Viewer to access the GUI for the Linux Source and install/upgrade Oracle.  I’m assuming you already have it, but if you don’t, download it and do a quick install.  Keep in mind, to install Oracle via the GUI on the Linux Target, I’ll need to perform these steps on that target, too.

Let’s check and verify that we can get to the Linux Source desktop.  Configure a new connection in the VNC Viewer and remember to use the public IP and “:1” for the display you configured for your user.  Save and log into the Linux Source.

In the next post, I’ll update the Linux Source Oracle database and we’ll proceed with upgrading Delphix source and target databases on Amazon.

 

Posted in Oracle

June 22nd, 2017 by dbakevlar

I know that right sidebar on my blog has an AWFUL lot of Microsoft events on it.  There are so many, I’ve begun to use the hashtag #MySummerOfSQL because of it.  For those of you that follow me with Oracle, it doesn’t mean that I’m leaving the Oracle community- not even close.  I’m as dedicated as ever to Oracle and hope to dig back into my performance roots on both platforms, but the summer is the quiet time for Oracle user group events, so I’ll be keeping myself busy with SQL Saturdays and the AWESOME preview to the annual Pass Summit conference, (for the Oracle peeps, think of an Oracle Open World for Microsoft folks, sans the sales folks… :)) which is a series of worldwide webinars called the 24 HOP, (24 Hours of Pass).

I want to thank the Microsoft SQL Pass community for embracing me and letting me regain my footing since my time as a SQL Server DBA ended back with the release of SQL Server 2012, and I’m really loving all the enhancements in SQL Server 2014, 2016 and now, 2017!

For those on the Oracle side of the house, hopefully the Oracle Open World acceptances will come out in the next two weeks and I’m crossing my fingers I’ll get to speak either on my own or even better, with one of the fantastic co-presenters I’m hoping to partner up with-  Gurcan Orhan and Mike Donovan of DB Visit.

I’m busy prepping my slides for the last HUGE Oracle conference before the summer break, KSCOPE, in San Antonio this next week, but I’ll try to get one more blog post out this week.  Of course, it’s going to be more on the SQL Server/Oracle optimizer comparisons.

So see my Oracle peeps in San Antonio next week for the ever AWESOME KSCOPE 2017 and help celebrate their 20th birthday!

Posted in DBA Life

April 13th, 2017 by dbakevlar

I was in a COE, (Center of Excellence) meeting yesterday and someone asked me, “Kellyn, is your blog correct?  Are you really speaking at a Blockchain event??”  Yeah, I’m all over the technical map these days and you know what?

I love the variety of technology, the diversity of attendance and the differences in how the conferences are managed.  Now that last one might seem odd and you might think that they’d all be similar, but it’s surprising how different they really are.

Getting to Know You

Today I’m going to talk about an aspect of conferences that’s very near to my heart, which is networking via events.  For women in technology, there are some unique challenges when it comes to networking.  Men have concerns about approaching women to network, such as fear of accusations of inappropriate interaction, and women have the challenge that a lot of networking opportunities occur outside of the workplace, in social situations that we may not be comfortable in.  No matter who you are, no matter what your intentions, there’s a lot of wariness and in the end, women often just lose out when it comes to building their network.  I’ve been able to breach this pretty successfully, but I have seen where it’s backfired and have found myself on more than one occasion defending both genders who’ve ended up on the losing side of the situation.

With that said, conferences and other professional events can assist with helping us geeks build our networks and it’s not all about networking events.  I noticed a while back that the SQL Server community appeared to be more networked among their members.  I believe part of this is due to the long history of their event software and some of its features.

Using the SQL Pass website, specifically the local user group event management software, notice that it’s all centralized.  Unlike the significantly independent Oracle user groups, SQL Server user groups are able to use a centralized repository for their event management, speaker portal, scheduling, etc.  It’s not to say that there aren’t any events outside of Pass Summit and SQL Saturdays, there’s actually a ton, but this is the portal for the regional user groups, creating the spoke that bridges out to the larger community.

Centralized System

Outside of submitting my abstract proposals to as many SQL Saturdays worldwide from one portal, I also can maintain one speaker biography, information about my blog, Twitter, Linkedin and other social media in this one location.

The second benefit of this simplicity, is that these biographies and profiles “feed” the conference schedules and event sites.  You have a central location for management, but hundreds of event sites where different members can connect.  After abstracts have been approved and the schedule built, I can easily go into an event’s schedule and click on each speaker biography and choose to connect with anyone listed who has entered their social media information in their global profile.

Using my profile as an example, you’ll notice the social media icons under my title are available with a simple click of the mouse:

This gives me both an easy way to network with my fellow speakers, but also an excuse to network with them!  I can click on each one of the social media buttons and choose to follow each of the speakers on Twitter and connect with them on Linkedin.  I send a note with the Linkedin connection telling the speaker that we’re both speaking at the event and due to this, I’d like to add them to my network.

As you can join as many regional and virtual user groups as you like, (and your Pass membership is free) I joined the three in Colorado, (Denver, Boulder and Colorado Springs.) Each one of those offers the ability to also connect with the board members using a similar method, (now going to use Todd and David as my examples from the Denver SQL Server user group.)

The Oracle user groups have embraced adding twitter links to most speaker bios and some board groups, but I know for RMOUG, many still hesitate or aren’t using social media to the extent they could.  I can’t stress enough how impressed I am when I see events incorporate Linkedin and Twitter into their speaker and management profiles, knowing the value they bring to technical careers, networks and the community.

New Kids on the Block

Although the SQL Server community is a good example, they aren’t the only ones.  I’m also speaking at new events on emergent technologies, like Data Platforms 2017.  I’ll be polite and expose my own profile page, but I’m told I’m easy to find in the sea of male speakers… 🙂 Along with my picture, bio and session information, there are links to my social media connections, allowing people to connect with me:

Yes, the Bizzabo software, (same software package that RMOUG will be using for our 2018 conference, along with a few other Oracle events this coming year) is aesthetically appealing, but more importantly, it incorporates important networking features that in the past just weren’t as essential as they are in today’s business world.

I first learned the networking tactic of connecting with people I was speaking with from Jeff Smith and I think it’s a great skill that everyone should take advantage of, no matter if you’re speaking or just attending.  For women, I think it’s essential to your career to take advantage of opportunities to network outside of the traditional ways we’ve been taught in the past and this is just one more way to work around that glass ceiling.

 

 

Posted in DBA Life, Oracle, SQLServer

April 11th, 2017 by dbakevlar

I’m itching to dig more into the SQL Server 2016 optimizer enhancements, but I’m going to complete my comparison of indices between the two platforms before I get myself into further trouble with my favorite area of database technology.


Index Organized Tables

Index Organized Tables, (IOT) are just another variation of a primary b-tree index, but unlike a standard table with an index simply enforcing uniqueness, the index IS the table.  The data is stored in primary key order, in a clustered state, to improve performance.

This is the closest to a clustered index in SQL Server that Oracle will ever get, so it makes sense that a comparison in performance and fragmentation is the next step after I’ve performed standard index and primary key index comparisons to Oracle.

Let’s create a new copy of our Oracle objects, but this time, update to an Index Organized Table:

CREATE TABLE ora_tst_iot(
        c1 NUMBER, 
        c2 varchar2(255),
        CREATEDATE timestamp DEFAULT CURRENT_TIMESTAMP,
        CONSTRAINT pk_ora_iot PRIMARY KEY (c1))
    ORGANIZATION INDEX 
    TABLESPACE users
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE users;

CREATE SEQUENCE C1_iot_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER C1_iot_BIR 
BEFORE INSERT ON ORA_TST_IOT
FOR EACH ROW
BEGIN
  SELECT C1_iot_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

The PCTTHRESHOLD can be anywhere between 1 and 50, but I chose 20 for this example.  I didn’t add any compression, as C1 is a simple sequence that won’t be able to take advantage of it, and I added the same supporting objects, a sequence and a trigger, just as I did for the previous test on the standard Oracle table.

First Data Load

Now we’ll insert the rows from ORA_INDEX_TST into ORA_TST_IOT

SQL> insert into ora_tst_iot(c2) select c2 from ora_index_tst;

995830 rows created.

Elapsed: 00:00:04:01

There won’t be any fragmentation in the current table-  it was directly loaded-  no deletes, no updates.  Although it won’t be shown in the examples, I will collect stats at regular intervals and flush the cache to ensure I’m not impacted in any of my tests.

SQL> ANALYZE INDEX PK_INDEXPS VALIDATE STRUCTURE;
Index analyzed.

Elapsed: 00:00:00.37

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT
SQL> analyze index pk_ora_iot validate structure;

Index analyzed.

SQL> analyze index pk_ora_iot compute statistics;

Index analyzed.

SQL> SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS, USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_ORA_IOT';

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
     32115       7996           0  224394262         88

Well, the table IS THE INDEX.  We collect stats on the table and now let’s remove some data, rebuild and then see what we can do to this IOT-

SQL> select * from ora_tst_iot
  2  where c1=994830;

    994830

SBTF02LYEQDFGG2522Q3N3EA2N8IV7SML1MU1IMEG2KLZA6SICGLAVGVY2XWADLZSZAHZOJI5BONDL2L
0O4638IK3JQBW7D92V2ZYQBON49NHJHZR12DM3JWJ1SVWXS76RMBBE9OTDUKRZJVLTPIBX5LWVUUO3VU
VWZTXROKFWYD33R4UID7VXT2NG5ZH5IP9TDOQ8G0

10-APR-17 03.09.52.115290 PM

SQL> delete from ora_tst_iot
  2  where c1 >=994820           
  3  and c1 <=994830;

11 rows deleted.

SQL> commit;

Now we see what the data originally looked like-  C2 is a large column whose data was consuming significant space.

What if we now disable our trigger for our sequence and reinsert the rows with smaller values for c2, rebuild and then update with larger values again?

ALTER TRIGGER C1_IOT_BIR DISABLE;

INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994820, 'A');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994821, 'B');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994822, 'C');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994823, 'D');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994824, 'E');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994825, 'F');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994826, 'G');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994827, 'H');

so on and so forth till we reach 994830…

COMMIT and then let’s rebuild our table…

Rebuild or Move

ALTER TABLE ORA_TST_IOT MOVE;

What happens to the table, (IOT) when we’ve issued this command?  It’s moving all the rows back to fill up each block up to the pct free.  For an IOT, we can’t simply rebuild the index, as the index IS THE TABLE.

Now we’ve re-organized our IOT so the blocks are only taking up the space that it would have when it was first inserted into.  So let’s see what happens now that we issue an UPDATE to those rows-

SQL> update ora_tst_iot set c2=dbms_random.string('B',200)
  2  where c1 >=994820
  3  and c1 <=994830;

11 rows updated.

So how vulnerable are IOTs to different storage issues?

Chained Rows

What do chained rows look like after moving the data and then updating to larger values than the originals, with only 10% free on each block?

Just 11 rows show the pressure:

SQL> SELECT 'Chained or Migrated Rows = '||value
 FROM v$sysstat
 WHERE name = 'table fetch continued row';  

Chained or Migrated Rows = 73730
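Keep in mind that v$sysstat is cumulative for the whole instance, so that number isn’t purely from our 11 rows.  A quick sketch for isolating what your own session contributes, (check it before and after touching the rows) is:

SQL> SELECT s.value
  2  FROM v$mystat s, v$statname n
  3  WHERE n.statistic# = s.statistic#
  4  AND n.name = 'table fetch continued row';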

Data Unload

Let’s delete and update more rows using DML like the following:

SQL> delete from ora_tst_iot
  2  where c2 like '%300%';

4193 rows deleted.

Insert rows containing ‘300’ with varying lengths, delete more, rinse and repeat with updates and deletes…

So what has this done to our table as we insert, update, delete and then insert again?

SQL> SELECT table_name, iot_type, iot_name FROM USER_TABLES
     WHERE iot_type IS NOT NULL;  

TABLE_NAME                     IOT_TYPE       IOT_NAME
------------------------------ ------------   ------------------------------
SYS_IOT_OVER_88595             IOT_OVERFLOW   ORA_TST_IOT
ORA_TST_IOT                    IOT

This is where a clustered index and an IOT are very different.  There is a secondary management object involved when there is overflow.  If you look up at my creation, yes, I chose to create an overflow.  Even if I drop the IOT properly, the overflow table will go into the recycle bin, (unless I’ve configured the database without it.)
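If you want to see how much space the overflow segment is consuming compared to the IOT itself, a quick sketch, (using the segment names from the output above) is:

SQL> SELECT segment_name, segment_type, bytes/1024/1024 mb
  2  FROM user_segments
  3  WHERE segment_name IN ('PK_ORA_IOT','SYS_IOT_OVER_88595');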

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> select blocks, height, br_blks, lf_blks from index_stats;

    BLOCKS     HEIGHT BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768     3       45        32115

We can see from the block counts that there aren’t many rows per leaf block-  this is a new table without a lot of DML, but even with the current configuration, few rows are returned per leaf block.

When we select from the IOT, the index is in full use and we can see that with the proper pct free/pct used, the index is still in pretty good shape:

SQL> select * from table(dbms_xplan.display_awr('fbhfmn88tq99z'));


select c1, c2, createdate from ora_tst_iot
Plan hash value: 3208808379

-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |       |       |  8709 (100)|          |
|   1 |  INDEX FAST FULL SCAN| PK_ORA_IOT |   995K|   206M|  8709   (1)| 00:01:45 |
-----------------------------------------------------------------------------------

13 rows selected.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT  BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768     3       45         32058

SQL> select pct_used from index_stats;

  PCT_USED
----------
 88

I Did a Bad

So now what happens, if like our original test, we shrink down the percentage of what can be used and reorganize, (and please don’t do this in production…or test….or dev….or ever! 🙂)?

SQL> alter table ora_tst_iot move pctfree 90;
Table altered.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT  BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
    172928     3      228         165630

Well, that’s a few more leaf blocks, eh?  Insert after enabling trigger again-

SQL> BEGIN
   FOR i in 1..1000000 LOOP
    INSERT INTO ORA_TST_IOT(c2) VALUES(i);
   END LOOP;
 COMMIT;
END;
 /

Now we have our elapsed times for the SQL Server, (Azure) inserts of 1 million records with 100% and 10% fill factors.  Let’s compare them to our IOT.  The IOT move command to fill the blocks to 100% was quite fast.  Of course, the reverse, only allowing for 10%, (90% free) took F.O.R.E.V.E.R…, (OK, it sure felt like it…why didn’t I just truncate it?  Oh, yeah, I wanted it to be a real test, not simply an easy test..)

Note: For this test, we’ll rebuild after updating the pctfree each time.

 

10% Fill Factor in SQL Server and 1 million insert: Elapsed time: 23 minutes, 18 seconds

90% PCTFree in Oracle and 1 million insert:  7 min, 12 seconds

100% Fill Factor in SQL Server and 1 million insert: Elapsed Time: 4 minutes, 43 seconds

0% PCTFree in Oracle and 1 million insert: 1 min, 8 seconds

REBUILD of the Oracle IOT to make it 90% free in each block?  Elapsed Time:  8 hrs, 21 minutes, 12 seconds 

…along with four backups of the archive logs it generated, which filled up the archive dest… 🙂  Now, the AWS trial is meant to test out the Delphix product, not index performance in a high insert/delete/update scenario, so I’ve been asking a lot of it with these challenges, but it was still a great way to build this out quickly and then compare.

 

Results

In this test, these were the overall results:

  1. Less overhead during transactional processing when it comes to inserts, updates and deletes to the IOT.
  2. Improved performance on any selects that require the data to be sorted in sequential order.
  3. Similar performance to SQL Server clustered indexes on complex queries and less sort temp usage.
  4. A more limited use case than SQL Server clustered indexes, as clustered indexes are quite common while IOTs are less used in the Oracle space.
  5. More maintenance upkeep, as we will need to reorganize the IOT if it’s used with processing that includes a lot of inserts, updates and deletes.
  6. DBCC rebuilds of a clustered index use fewer resources and don’t impact the transaction log the way a reorganization impacts Oracle’s rollback and archive logs.  It was easier to build the table with a high pct free storage configuration, insert the data and drop the old table than to do an “alter move” command.

Now there’s more to do comparisons on, so I’m going to dig in more on the SQL Server side, but here’s to Oracle Index Organized Tables, (IOTs)!

Posted in Oracle, SQLServer

April 5th, 2017 by dbakevlar

I just uploaded my slides from Collaborate 2017 to Slideshare, but also, the findings missing from my slide deck I used for Oak Table World on War of the Indices- Oracle and SQL Server.  Feel free to download them here.

As I tested out Oracle vs. SQL Server index performance, I ran across this great script from Franck Pachot to check for fragmentation.  You’ll just need to update the script to declare the table and index name, or edit the script as is, adding those two values correctly before running it.

The outcome when run against my ugly index in the testing of the good, the bad and the ugly, was fun, to say the least:

SQL> @index_frag.sql;

        C1 ->         C1  rows/block  bytes/block  %free space  blocks  free
----------    ----------  ----------  -----------  -----------  ------  -----
     30041 ->     130221          35          560           93    2851  oooo
    130256 ->     230433          35          560           93    2851  oooo
    230468 ->     330654          35          560           93    2851  oooo
    330689 ->     430874          35          560           93    2851  oooo
    430909 ->     531073          35          560           93    2851  oooo
    531108 ->     631264          35          560           93    2850  oooo
    631299 ->     731429          35          560           93    2850  oooo
    731464 ->     831547          35          560           93    2850  oooo
    831583 ->     931711          35          560           93    2850  oooo
    931746 ->    1030047          34          560           93    2850  oooo

If you have just one or a couple indexes that you want to perform some deep analysis to view fragmentation, Franck’s script does an awesome job.

Just my share for the day…. 🙂

Posted in Oracle

March 10th, 2017 by dbakevlar

Most people know I like to do things the hard way… 🙂

When it comes to learning things, there’s often no better way to get a strong understanding than to have to take something apart and put it back together.  I wanted to use swingbench on my AWS deployment and needed the Sales History, (SH) sample schema.  I don’t have an interface to perform the installation via the configuration manager, so I was going to install it manually.

Surprise, the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing.  There are a couple options to solve this dilemma.  Mine was to first get the sample schemas.  You can retrieve them from a handy GitHub repository found here, maintained by Gerald Venzl.

I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.

scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.

Next, I extracted the files to the $ORACLE_HOME/demo directory.

unzip db-sample-schemas-master.zip

Now the unzip creates a directory named db-sample-schemas-master, which is fine with me, as I like to retain the previous one, (I’m a DBA, so keeping copies of data until I’m sure I don’t need them is my life.)

mv schema schema_kp
mv db-sample-schemas-master schema

With that change, everything is now as it should be.  One thing you’ll find out is that the download is for 12c, and I’m alright with this, as the swingbench I’m using is expecting SH for 12c, too.  Not that I expect any differences, but as Jeff Smith was all too happy to remind me on Facebook, I’m using a decade-old version of Oracle on my image here.

There are a lot of scripts in the sales_history folder, but all you’ll need to run is sh_main.sql from SQL*Plus as SYSDBA to create the SH schema.

You’ll enter parameter values to create the SH schema manually, and they may not be what you’d assume from the prompt wording.  As I’ve seen very little written on it, (even after all these years of this existing) this may help others out, and a sample invocation follows the list below:

specify password for SH as parameter 1:

Self-explanatory- what password would you like the SH user to have.

specify default tablespace for SH as parameter 2:

What tablespace do you want this created in?  I chose Users, as this is just my play database.

specify temporary tablespace for SH as parameter 3:

Temp was mine and is the common value for this prompt.

specify password for SYS as parameter 4:

This is the password for SYSTEM, not SYS, btw.

specify directory path for the data files as parameter 5:  

This is not Oracle datafiles, this is the path to your SH directory, ($ORACLE_HOME/demo/schema/sales_history/) for access to the control files and dat files for SQL Loader.  Remember to have a slash at the end of the path name.

writeable directory path for the log files as parameter 6:

A directory for the log files-  I put this in the same directory.  Remember to use a slash at the end or your log files will have the previous directory as the beginning of the file name and save to one directory up.

specify version as parameter 7:

This isn’t the version of the database, but the version of the sample schema-  the one from Github is “v3”.

specify connect string as parameter 8:

Pretty clear- the service or connect string for the database that the schema is being created in.
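
Putting it together, a sample run looks something like this.  It’s just a sketch-  the passwords and connect string are placeholders, and prompts 5 and 6 need fully spelled-out paths, (SQL*Plus won’t expand $ORACLE_HOME for you):

$ cd $ORACLE_HOME/demo/schema/sales_history
$ sqlplus / as sysdba
SQL> @sh_main.sql
  parameter 1:  <SH password>
  parameter 2:  USERS
  parameter 3:  TEMP
  parameter 4:  <SYSTEM password>
  parameter 5:  <full path to Oracle home>/demo/schema/sales_history/
  parameter 6:  <full path to a writeable log directory>/
  parameter 7:  v3
  parameter 8:  ORCL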

Error Will Robinson, Error!

I then ran into some errors, but it was pretty easy to view the log and then the script and see why:

SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"

Well, the scripts, (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called in sh_main.sql are looking in the sales_history directory, so we need to get rid of the sub directory paths that don’t exist in the 11g environment.

View sh_main.sql and you’ll see three paths to update.  Below is an example of one section of the script; the path prefix to be removed is the __SUB__CWD__/sales_history/ portion:

REM Post load operations
REM =======================================================
DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs

@&vscript

The DEFINE will now look like the following, so the script is picked up from the sales_history directory you’re running from:

DEFINE vscript = psh_&vrs

Once you’ve saved your changes, you can simply re-run sh_main.sql, as it does a drop of the sample schema before the create.  If no other changes need to be made to your parameters, just execute sh_main.sql again; if you need to change the parameter values you entered, it’s quickest to exit from SQL*Plus and reconnect to unset them.
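If you’d rather not edit sh_main.sql by hand, the GitHub repository’s README, (at least in later versions of it, so verify against the copy you downloaded) describes a perl one-liner to substitute the current working directory for that token across all the scripts, something along these lines:

$ cd $ORACLE_HOME/demo/schema
$ perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat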

Are We There Yet?

Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and if all was successful, then connect as the SH user with SQL*Plus and check the schema:

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX PARTITION            196
TABLE PARTITION             56
LOB                          2
DIMENSION                    5
MATERIALIZED VIEW            2
INDEX                       30
VIEW                         1
TABLE                       16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;

TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV                        48
CHANNELS                                   5
COSTS                                 164224
COUNTRIES                                 23
CUSTOMERS                              55500
FWEEK_PSCAT_SALES_MV                   11266
PRODUCTS                                  72
PROMOTIONS                               503
SALES                                1837686
SALES_TRANSACTIONS_EXT                916039
SUPPLEMENTARY_DEMOGRAPHICS              4500
TIMES                                   1826

And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data.  I’ll try to blog on Swingbench connections and data collection next time.

See everyone at UTOUG for Spring Training Days on Monday, March 13th and at the end of the week I’ll be in Iceland at SQL Saturday #602!

Posted in Cloud, Oracle

January 13th, 2017 by dbakevlar

I receive about 20-30 messages a week from women in the industry.  I take my role in the Oracle community as a role model for women in technology quite seriously and I’ve somehow ended up speaking up a number of times, upon request from different groups.


Although it’s not the first time the topic has come up, I was asked last week for some recommendations regarding Oracle’s CEO, Safra Catz, and her opportunity to be on President Elect Trump’s transition team.

I wanted to ask your opinion about Safra not taking a leave of absence to help with Trump’s transition team? I think she should take a leave and as one of the top women in IT I think it shows poor judgment. Could the WITs write her a letter? Thoughts?

After some deep thought, I decided the topic required a good, solid answer and a broader audience.  As with anything involving the topic of WIT, the name of the source who asked the question doesn’t matter and anyone who asks you to give names isn’t really interested in education, but persecution.

It took me some time to think through the complexities of the situation.  Everyone will have some natural biases when a topic bridges so many uncomfortable areas of discussion:

  • Women’s Roles
  • Politics
  • Previous Employer

After putting my own bias aside and thinking through the why and what, here’s my thoughts-

No, I don’t think Safra should take a leave of absence. We have significantly few women in c-level positions.  As of April 2016, only 4% of CEOs for Fortune 500 companies were women, (of which Safra is one.)  I have a difficult time believing we’d be asking most men to give up the opportunity to be on a presidential transition team or take a leave of absence.  Some of the most challenging and difficult times in our career are also the most rewarding and this may be one of those times in Safra’s life.  Anyone who’s friends with me, especially on Facebook, would know, I’m not a big fan of Donald Trump, but in no way should we ask Safra not to try to be part of the solution.

No, I don’t think Safra should refrain from being on the transition team.  As much as we discuss the challenges of getting more women into technology, it’s an even larger challenge in politics.  Women have less than 25% of the seats in Congress and even less at local government levels.  We are over 50% of the workforce and 50% of the US population.  How can we ever have our voices heard if we aren’t participating in our own government?  Having more representation is important, not less, and not because my politics don’t mesh with hers.

So what should the discussion really be about if we don’t want Safra to take a leave of absence or remove herself from the transition team?

  1. We want to know that there are clear policies in place to deter from conflict of interest.  We need to know that if improprieties do occur, that accountability will result.
  2. We need to not limit Safra in opportunities or over-scrutinize her the way we do so many women who don’t fit inside the nice, neat little expectations society still has of them.
  3. We shouldn’t hold Safra accountable for what Donald Trump represents, his actions or if we don’t agree with his politics.

We also need to discuss what is really bothering many when a woman or person of color enters into the lion’s den, aka a situation that is clearly not very welcoming to us due to gender, race or orientation.  It can bring out feelings of betrayal, concerns that the individual is “working for the enemy.”  We want to know that Safra will stand up for our rights as the under-represented.  We want to know that she would tell Donald that she doesn’t condone his behavior or actions towards women, race and culture.

One of the biggest challenges I had to overcome when I started my career, was recognizing that every individual has their own path in this world.  Their path may be very different than mine, but through change comes growth and to expect someone to do what may not be in their capabilities can be just as limiting as not letting them do what they do best.  This wouldn’t be allowing Safra to do what she does best.

I’ve never viewed Safra as a role-model when it comes to the protection and advancement of women’s roles in technology or our world.  She’s never historically represented this, any more than those expecting it from Marissa Mayer.  It’s just not part of their unique paths, no matter how much the media likes to quote either of them, (especially Marissa, which consistently makes me cringe.)  It doesn’t mean they aren’t capable of accomplishing great feats-  just not feats in the battle for equality.  It also doesn’t mean they aren’t a source of representation.  The more women that are in the space, the better.  That’s how we overcome some of the bias we face.

Regarding those that do support women in more ways than just adding to the overall count of women in technology and politics, I’d rather put my time into Sheryl Sandberg, Grace Hopper, Meg Whitman and others who have the passion to head up equality issues.  I both welcome and am thankful for the discussion surrounding writing the letter and applaud the woman who asked me about the topic-  it’s a difficult one.

For those of you who are still learning about why equality is so important, here’s a few historical references of great women who’ve advanced our rights.  We wouldn’t be where we are today without them.

Thank you to everyone for the great beginning to 2017 and thank you for continuing to trust me to lead so many of these initiatives.  I hope I can continue to educate and help the women in our technical community prosper.

 

Posted in Oracle, WIT

January 4th, 2017 by dbakevlar

How was 2016 for me?

It was a surprisingly busy year-  blogging, speaking, working and doing.

I posted just under 100 posts to my blog this year.  After I changed jobs, the “3 per week” quickly declined to “4 per month” after I was inundated with new challenges and the Delphix learning curve.  That will change for 2017, along with some new initiatives that are in the works, so stay tuned.

For 2016, the most popular posts and pages for my website followed a similar trend from the last year.  My emulator for RPI is still a popular item and I have almost as many questions on RPI as I do WIT-  Raspberry Pi is everywhere and you’ll see a regained momentum from me with some smart home upgrades.

My readers for 2016 came from just about every country.  There were only a few that weren’t represented, but the largest numbers were from the expected:

I also write from time to time on Linkedin.  Linkedin has become the home for my Women in Technology posts and it’s led me to receive around 30 messages a week from women looking for guidance, sharing their stories or just reaching out.  I appreciate the support and the value it’s provided to those in the industry.

RMOUG Conference Director- No Escape!

The 2016 conference was a great success for RMOUG, but much of it was due to budget cuts and changes that were made as we went along and addressed trends.  I’ve been collecting the data from evaluations and it really does show why companies are so interested in the value their data can provide them.  I use what I gather each year to make intelligent decisions about where RMOUG should take the conference direction each year-  what works, what doesn’t and when someone throws an idea out there, you can either decide to look into it or have the data to prove that you shouldn’t allocate resources to an endeavor.

A New Job

I wasn’t into the Oracle cloud like a lot of other folks.  It just wasn’t that interesting to me and I felt that Oracle, as much as they were putting into their cloud investment, deserved someone who was behind it.  I’d come to Oracle to learn everything I could about Oracle and Enterprise Manager, an on-premise solution, and it was no longer in the company focus.  When Kyle and I spoke about an opportunity to step into a revamped version of his position at Delphix, a company that I knew a great deal about and admired, it was a no-brainer.  I started with this great, little company in June and there are some invigorating initiatives that I look forward to becoming part of for 2017!

2 Awards

In February, I was awarded RMOUG’s Lifetime Achievement award.  I kind of thought this would mean I could ride off into the sunset as the conference director, but as my position ended at Oracle, which had been a significant fight to keep me managing the conference as an Oracle employee, (transitioning me to a non-voting member to keep within the by-laws) not many were surprised to see me take on a sixth year of managing the conference.

In April I was humbly awarded the Ken Jacobs award from IOUG.  This is an award I’m very proud of, as Oracle employees are the only ones eligible and I was awarded it in just the two years I was employed at the red O.

3 Makers Events

I haven’t had much time for my Raspberry Pi projects the last number of months, but it doesn’t mean I don’t still love them.  I gained some recognition as 2nd ranking in the world for RPI klout score back in July, which took me by surprise.  I love adding a lot of IOT stories into my content and it had caught the attention of social media engines.  Reading and content is one thing, but it was even more important to do- I had a blast being part of the impressive Colorado’s Maker Faire at the Denver Museum of Nature and Science earlier in 2016.  I also was part of two smaller makers faires in Colorado, allowing me to discuss inexpensive opportunities for STEM education for schools using Raspberry Pis, Python coding and 4M kits.

Speaking Engagements

Even though I took a number of months off to focus on Delphix initiatives, I still spoke at 12 events and organized two, (Training Days and RMOUG’s QEW.)

February:  RMOUG– Denver, CO, (Director and speaker)

March: HotSos– Dallas, TX, (Keynote)

April: IOUG Collaborate– Las Vegas, NV

May: GLOC– Cleveland, OH, NoCOUG– San Jose, CA

June: KSCOPE– Chicago, IL

July: RMOUG Quarterly Education Workshop– Denver, CO, (Organizer)

September: Oracle Open World/Oak Table World– San Francisco, CA

October: UNYOUG– Buffalo, NY, Rocky Mountain DataCon & Denver Testing Summit–  Denver, CO

November: MOUS– Detroit, MI, (Keynote) ECO– Raleigh, NC

New Meetup Initiatives and Growth

I took over the Denver/Boulder Girl Geek Dinners meetup last April.  The community had almost 650 members at the time and although it wasn’t as big as the Girls Develop It or Women Who Code, I was adamant about keeping it alive.  Come the new year and thanks to some fantastic co-organizers assisting me, (along with community events in the technical arena) we’re now on our way to 1100 members for the Denver/Boulder area.

The Coming Year

I’m pretty much bursting with anticipation due to all that is on my plate for the coming year.  I know the right hand side bar is a clear indication that I’ll be speaking more, meaning more travel and a lot of new content.  With some awesome new opportunities from Delphix and the organizations I’m part of, I look forward to a great 2017!

 

Posted in DBA Life, Oracle

December 27th, 2016 by dbakevlar

The only thing that remains constant is change.   ~Heraclitus


This is often more apparent in technology than anywhere else, even within a group of Oracle evangelists and enthusiasts like the ACE program, which offers the opportunity for peers to recognize those that contribute to the community and help make it a better place.

I’ve [often quietly] assisted many around me in finding their path into the ACE program, even when I was an employee at Oracle and even now that I have ACE alumni status.  It was once a rare thing to have this status, but I was also considered an oddity- after spending a couple years learning all that I could about Enterprise Manager, I moved on to Delphix for my next opportunity for growth.  I departed Oracle at a time the ACE program was going through a major overhaul, with new requirements and rules for being part of the program.

For some in the Oracle community, it simply doesn’t make sense for them to be part of the program any longer, but it can also make it easy for those of the “reality TV” generation to try to find drama and gossip in it all.  Being nominated to the ACE program is something to be proud of-  your peers have reached out to recognize your contributions.  Rules were changed and re-evaluated while I was traversing the questions about how I would re-enter the ACE program.  I received a large amount of emails of support, but at the same time, recommendations that it would be best to just stay the course and do what I do best, so I’ve made the decision, like many others, not to pursue re-activating my ACE Director status.  At this time, 155 out of almost 700 members in the directory are listed as having alumnus status, so I’m suddenly far from alone.

There are many reasons individuals in the Oracle community would still wish to pursue the ACE program and I will continue to support anyone who wishes assistance with this goal.  I also recognize how difficult the decision was for those that left the program and believe the earned recognition should continue.  Just as the ACE program recognizes them as ACE Alumni in the ACE program registry, RMOUG Training Days has updated the speaker biography with an option to be listed as an ACE Alumni.

If you are a speaker with our conference and an ACE Alumni, you have the option to update your profile and be recognized for your earned and continued contributions to the Oracle community.

I look forward to seeing everyone in February and Happy Holidays!

Posted in DBA Life

November 14th, 2016 by dbakevlar

OK, so I’m all over the map, (technology wise) right now.  One day I’m working with data masking on Oracle, the next it’s SQL Server or MySQL, and the next it’s DB2.  After almost six months of this, the chaos of feeling like a fast food drive thru with 20 lanes open at all times is starting to make sense and my brain is starting to find efficient ways to siphon all this information into the correct “lanes”.  No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂


One of the areas that I’ve been spending some time on is the optimizer and differences in Microsoft SQL Server 2016.  I’m quite adept on the Oracle side of the house, but for MSSQL, the cost based optimizer was *formally* introduced in SQL Server 2000 and filtered statistics weren’t even introduced until 2008.  While I was digging into the deep challenges of the optimizer during this time on the Oracle side, with MSSQL, I spent considerable time looking at execution plans via dynamic management views, (DMVs) to optimize for efficiency.  It simply wasn’t at the same depth as Oracle until the subsequent releases and has grown tremendously in the SQL Server community.

Compatibility Mode

As SQL Server 2016 takes hold, the community is starting to embrace an option that Oracle folks have done historically-  When a new release comes out, if you’re on the receiving end of significant performance degradation, you have the choice to set the compatibility mode to the previous version.

I know there are a ton of Oracle folks out there that just read that and cringed.

Compatibility in MSSQL is now very similar to Oracle.  We allocate the optimizer features by a release version value, which for each platform corresponds to the following:

Database               Version Value
Oracle 11.2.0.4        11.2.0.x
Oracle 12.1            12.1.0.0.x
Oracle 12c release 2   12.1.0.2.0
MSSQL 2012             110
MSSQL 2014             120
MSSQL 2016             130

 

SQL Server has had this for some time, as you can see by the following table:

Product              Database Engine Version   Compatibility Level Designation   Supported Compatibility Level Values
SQL Server 2016      13                        130                               130, 120, 110, 100
SQL Database         12                        120                               130, 120, 110, 100
SQL Server 2014      12                        120                               120, 110, 100
SQL Server 2012      11                        110                               110, 100, 90
SQL Server 2008 R2   10.5                      100                               100, 90, 80
SQL Server 2008      10                        100                               100, 90, 80
SQL Server 2005      9                         90                                90, 80
SQL Server 2000      8                         80                                80

These values can be viewed in each database using queries for the corresponding command line tool.

For Oracle:

SELECT name, value, description from v$parameter where name='compatible';

Now if you’re on database 12c and multi-tenant, then you need to ensure you’re in the correct container first.  Also keep in mind that COMPATIBLE is a static parameter, so the change goes to the spfile and requires a restart to take effect:

ALTER SESSION SET CONTAINER = <pdb_name>;
ALTER SYSTEM SET COMPATIBLE = '12.1.0.0.0' SCOPE=SPFILE;

For MSSQL:

SELECT databases.name, databases.compatibility_level from sys.databases 
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120
GO

Features

How many of us have heard, “You can call it a bug or you can call it a feature”?  Microsoft has taken a page from Oracle’s book and refers to the need to set the database to the previous compatibility level as the Compatibility Level Guarantee.  It’s a very positive sounding “feature”, and those who upgrade and are suddenly faced with a business meltdown, whether from a surprise impact once they do upgrade or simply from a lack of testing, are going to find this to be a feature.

So what knowledge, due to many years of experience with this kind of feature, can the Oracle side of the house offer to the MSSQL community on this?

I think anyone deep into database optimization knows that “duct taping” around a performance problem like this, by moving the compatibility back to the previous version, is fraught with long term issues.  This is not a unique query or even a few transactional processes being addressed with this fix.  Although this should be a short term fix before you launch to production, [we hope] experience has taught us on the Oracle side that databases can exist for years in a different compatibility version than the release version.  Many DBAs have databases where they are creating workarounds and applying one-off patch fixes because the compatibility either can’t or won’t be raised to the release version.  This is a database level way of holding the optimizer at the previous version.  The WHOLE database.

You’re literally saying, “OK kid, [database], we know you’re growing, so we upgraded you to latest set of pants, but now we’re going to hem and cinch them back to the previous size.”  Afterwards we say, “Why aren’t they performing well? After all, we did buy them new pants!”

So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?

  • No 10,000 foreign key or referential constraints for you, no, you’re back to 253.
  • Parallel update of statistic samples
  • New Cardinality Estimator, (CE)
  • Sublinear threshold for statistics updates
  • A slew of miscellaneous enhancements that I won’t list here.

Statistics Collection

Now there is a change I don’t like, but I do prefer how Microsoft has addressed it in the architecture.  Trace flag 2371 controls, via on or off, whether the threshold for automatic statistics updates scales down as row counts grow, instead of staying at the fixed mark of roughly 20% of the rows changing.  This is now on by default with MSSQL 2016 compatibility 130.  If it’s set to off, large objects can go a very long time before their statistics are automatically updated.  There are a number of ways to control this in Oracle, but it’s getting more difficult with dynamic sampling enhancements that put the power of statistics internal to Oracle and less in the hands of the Database Administrator.  Locking this down requires about 6 parameter changes in Oracle and, as a DBA who’s attempted to lock down stats collection, it’s a lot easier said than done.  There were still ways that Oracle was able to override my instructions at times.
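Before compatibility level 130 made this the default behavior, the dynamic threshold was enabled instance-wide with the trace flag.  A quick sketch of turning it on globally, (and checking it) looks like this:

DBCC TRACEON (2371, -1);
GO
DBCC TRACESTATUS (2371);
GO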

Optimizer Changes and Hot Fixes

There is also a flag to apply optimizer hot fixes, which I think is a solid feature in MSSQL that Oracle could benefit from, (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating the value for it…)  Trace flag 4199 granted the DBA the power to enable new optimizer fixes, but, just like Oracle, with the introduction of SQL Server 2016 this is now controlled with the compatibility mode.  I’m sorry MSSQL DBAs, it looks like this is one of those features from Oracle that, (in my opinion) I wish would have infected cross platform in reverse.

As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact that Oracle DBAs have experienced for multiple releases that optimizer compatibility control has been part of our database world.  We have databases living in the past.  Databases that are continually growing, but can’t take advantage of the “new clothes” they’ve been offered.  Fixes that we can’t take advantage of because we’d need to update the compatibility to do so and the pain of doing so is too risky.  Nothing like being a tailor that can only hem and cinch.  As the tailors responsible for the future of our charges, there is a point where we need to ensure our voices are heard, to ensure that we are not one of the complacent bystanders, offering stability at the cost of watching the world change around us.

Posted in Oracle, SQLServer Tagged with: , ,

August 19th, 2016 by dbakevlar

This is a living document that I will continue to update and will add new database platforms to as I go along in my career.  I spend a lot of time translating database platforms I’ve worked in for all tiers of the technical business.  It just seems natural that we might need a location on the web where that information is kept.


I’ll add some diagrams at a later date, but we’ll start with a simple spreadsheet of common terms and questions and how each of the terms or tasks in Oracle, (the current #1 RDBMS) translates in other platforms.  If there isn’t a similar term or task, I’ve marked not applicable, (N/A).  If you have any additions or suggestions on how to improve the list, please feel free to comment or email me at DBAKevlar at Gmail.

As I’m still working on the best viewable format, keep in mind, you can export the table into a number of formats or even print it!

Thanks!

The Rosetta Stone for Database Platforms


Posted in DBA Life, SQLServer Tagged with: , , , , , ,

November 24th, 2015 by dbakevlar

After my AWR Warehouse session at DOAG in Nuremberg, Germany last week, an attendee asked me if he really needed to use trace files anymore with the value provided by AWR, (Automatic Workload Repository) and ASH, (Active Session History.)  I responded that trace files were incredibly valuable and should never be discounted.

To use an analogy, when I was 22 years old, and entered a doctor’s office missing a small percentage of my visual field after an apparent migraine, doctors used a number of different “tools” at their disposal in an attempt to diagnose what had happened.  They first started with an Ophthalmoscope to determine the health of my retinas.  They assumed that I was simply experiencing visual disturbances due to a migraine and sent me home with a prescription strength case of Tylenol after verifying that my eyes were healthy.

After no change and another two days passed, the doctors then proceeded with a Computed Tomography Scan, aka CAT/CT scan.  This is a standard first line inspection of the brain and again, it resulted in no information as to what was causing the visual disturbance.  If it hadn’t been for a neurologist that was on duty at the hospital, I may very well have been sent home again.  He asked me about the severe migraine, the actual specifics of the experience, and suddenly realized he was hearing someone describe to him the symptoms of a stroke.  No one had thought to ask what had actually happened and, since I was in my early twenties, no one had considered it.  At this point, the doctor asked for me to have an MRI, (Magnetic Resonance Imaging) with contrast.  An injection of gadolinium contrast made certain tissues and abnormalities more clearly visible, and for me, it showed that I had experienced a break in the blood vessels in the back right of my brain, resulting in a small aneurysm, but only a little blood loss.  The main damage was to the brain tissue in the back area of my brain which “translates” the information being sent from my optic nerve, and this was the cause of my left side visual field loss.  Due to this new information, he was able to start the research that in the coming years saved my life.


This doctor, like a database administrator or developer, knew to use the right tool for the job once he knew what he was up against and this is how we succeed when others fail in the technical industry.  To be told that you no longer need a tool such as tracing because valuable data is provided from AWR or ASH limits the ability to see real culprits that the secondary tools discussed weren’t really designed to capture.

To know when to use one tool over the other is often the biggest challenge.  A number of years back, a customer was experiencing terrible, overall performance issues in an ETL process that no one had been able to pinpoint.  I always love a good challenge and began to look at it via AWR reports.  I noticed that there were waits that were unaccounted for in the SQL processing.
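
For anyone following along at home, the usual route for one of these reports is running awrrpt.sql from SQL*Plus, but you can also pull one straight from the packaged function; the DBID and snapshot IDs below are placeholders for illustration:

-- Generate a text AWR report for a snapshot range:
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(1234567890, 1, 5150, 5151));
       -- arguments: dbid, instance number, begin snap id, end snap id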

Execution Plan

| Id  | Operation                       | Name                       | Rows  | Bytes | Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT                |                            |       |       |   688 |       |       |
|   1 |  DELETE                         | _ITEM_MONTH_ROLLUP         |       |       |       |       |       |
|   2 |   NESTED LOOPS                  |                            |  1490 | 86420 |   688 |       |       |<--This one doesn't add up to time/value consumed.
|   3 |    VIEW                         | VW_NSO_1                   |     1 |    26 |    20 |       |       |
|   4 |     SORT UNIQUE                 |                            |     1 |    40 |    20 |       |       |
|   5 |      TABLE ACCESS BY INDEX ROWID| PROCESS_LOG_MASTER         |     1 |    40 |     4 |       |       |
|   6 |       INDEX RANGE SCAN          | PROCESS_LOG_MASTER_I7      |   132 |       |     1 |       |       |
|   7 |    PARTITION RANGE ITERATOR     |                            |  1490 | 47680 |   386 |   KEY |   KEY |
|   8 |     INDEX RANGE SCAN            | ITEM_MONTH_ROLLUP_I2       |  1490 | 47680 |   386 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------

Note
—–

– dynamic sampling used for this statement (level=8)

– SQL profile “SYS_SQLPROF_013dad8465770005” used for this statement


CPU Costing is off and there is a profile on this statement.  Stats are up to date, so why am I seeing dynamic sampling level 8??

I proceeded to capture a trace and that’s when I was able to demonstrate that due to auto-DOP in 11.2, a choice was made to perform dynamic sampling that was adding over 80% to the execution time, impacting performance significantly.
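
The trace itself can be captured a number of ways; a minimal sketch using DBMS_MONITOR against the offending session looks something like the following, where the SID and SERIAL# are placeholders:

-- Turn on extended SQL trace (waits and binds) for the session running the ETL step:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234,
                                    serial_num => 56789,
                                    waits      => TRUE,
                                    binds      => TRUE);
END;
/

-- Reproduce the slow step, then turn tracing off and run tkprof on the trace file:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);
END;
/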

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring blah, blah, blah, blah... :)

 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1     28.60      48.52     322895      43776     489123        189
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     28.60      48.52     322895      43776     489123        189

Limiting your toolset, whether you restrict yourself to just using tracing or to AWR/ASH, SLOB, Snapper or any other single tool, is a poor choice.  Use the right tool for the situation, the one that provides answers to the questions the business relies on you for.  You wouldn’t trust a doctor that used a CAT scan to check a broken bone vs. an X-ray, so why would you trust an optimization specialist that limits themselves to one set of tools?

Posted in Database Tagged with: , ,

October 8th, 2015 by dbakevlar

I’ve been a bit busy traveling and speaking, but I wanted to discuss the great events I’ve been lucky enough to be part of the last couple weeks.

Indianapolis Oracle User Group

I spoke at the INOUG back on September 17th.  The user group in Indiana is incredibly welcoming and I had a great time with the wonderful attendees for my three back-to-back sessions.  We discussed AWR Warehouse, ASH and AWR Mining and a new session on Optimization for EM12c and the Hybrid Cloud.

East Coast Oracle Conference

September 21st-23rd was my first visit to the great city of Raleigh, NC and ECO!  I’ve always wanted to visit the location and had a great time, not just because I got to hang out with friends like Jeff Smith, Ric Van Dyke, Kent Graziano, Scott Spendolini and Tim St. Hilaire, but because I also received an incredibly warm welcome from the ECO attendees!  Linda Hoover and her team put on a great conference and the interaction from those in the sessions was phenomenal, both during and afterwards.


I had a great joint keynote with Jeff Smith on “Empowering Your Technical Career with Social Media”.  It was incredibly well received and even though the room felt like it was -50F, everyone stayed for the entire keynote.  Some of the best feedback was when a woman who was working in the hotel serving food came up afterwards to tell me how much she enjoyed it and asked me some questions about what I did for a living and how to get into the tech industry.


An attendee also came up to me in the restaurant to let me know she’d only stayed for two keynotes her entire career and that ours was one of them!  These kind words from these two women and the other attendees who approached us meant a lot to Jeff and me.  We really appreciate the support!  I also have to thank Monty Latiolais and Ray Smith for letting Jeff and me rework their social media profiles and personal branding.  It was great to see it all come together.  For those that attended my technical session, which was a deep dive into AWR and ASH, thank you for giving me a standing-room-only crowd in the largest room they had at the conference and for all the interaction.

 IOUG Master Class at Sports Authority Stadium

I raced home at midnight on the 23rd to ensure I was available to do the keynote on the 24th for the IOUG Master class at….OK, let’s just say it, I’m a Denverite, it will always be Mile-High Stadium.  🙂


The event included a good amount of RMOUG folks, along with some new folks and I got to speak with Graham Thornton from EMC, too!  It was a great day to be at the stadium and a great time was had by all as I spoke on Oracle Hybrid Cloud.

Oracle Education Foundation

Everyone knows I’ve been all about educating with small computers like the Raspberry Pi.  Thanks to Jake Kuramoto, I was offered the opportunity to come be part of the coaching team with the Oracle Education Foundation, specializing in teaching Raspberry Pi with Python.  I helped to build some of the content and it was an awesome experience to work with the high school kids.


I only got to spend a week out of the two week project, because if I didn’t get home at least one week in October, Tim was going to come and get me!  It was a dream come true for me and I look forward to volunteering with the Oracle Education Foundation in the future.

On Thursday, Jake and his team, including the impressive Mark Vilrokx, came in to demonstrate a great nerf gun powered by an Arduino micro-computer and a sensor connected to the internet.  It used voice activated commands to shoot nerf ammo, even knowing how many nerf bullets to shoot.

 

I’ll return home next week and enjoy some time with my family, my home and my dog before returning for Oracle Open World and Oak Table World for the end of October, so until then, check out the Arduino controlled nerf gun Twitter account, IOT Nerf and a shot of the great class I worked with for the week!


 

 

Posted in DBA Life, Oracle, Raspberry Pi Tagged with: , ,

July 8th, 2015 by dbakevlar

Happy Birthday to me!  So for my birthday, I give a present to you…  As I want all DBAs to sleep better at night, here are the top ten features you can use in Enterprise Manager Cloud Control to get a good night’s rest at night instead of catching up on sleep during the day at your desk… 🙂


1.  Disable the Default Rule Sets Shipped with Cloud Control.

Yes, you heard me.  I believe you should use them as a starting point or an example, but don’t put them into production.  These were examples created by development to show everything you could be notified on, but what you need to be woken up for should be anything mission critical that will SUFFER an outage if you DON’T respond.  Anything that can wait till the morning SHOULD wait till the morning.


Make copies of the default rules and disable the originals.  Plan on making as many copies and edits as necessary to ensure that you are only being notified on the appropriate targets, life cycle status and line of business that YOU are responsible for ensuring is up and available to the business.

2.  Implement Monitoring Templates and Default for Important Target Types.

Monitoring templates ensure that you are monitoring each target in the same way and against the same metric thresholds.  This ensures you start with metric thresholds that make sense for the target and should be applied to all targets of that target type.  Creating monitoring templates is easy when you configure one target as an example and use it as the source of your template.

3.  Use Metric Thresholds for Individual Targets and Set Them to Not Be Overridden by Monitoring Templates

Now this might sound like a complete 180 from #2 on this list, but it’s not.  This is just like #1: break down and specialize for unique targets that have unique challenges.  This means, if you have a target backup drive that fills up to 97% each night, you shouldn’t be woken up for it.  This is expected behavior, and for this target ONLY you can set either a static threshold specific to the target or an adaptive threshold that won’t be overridden by the monitoring template.

4.  Utilize Administration Groups


Administration Groups offer advanced features and scalability for your Cloud Control environment that standard groups, and to a lesser extent, Dynamic groups, do not.  Line of business and life cycle management capabilities let you break down notification groups, rule sets and other settings, and they pair with more advanced offerings like Database as a Service to allow you to do more with less.  The natural life of a database environment is one of growth, so thinking ahead one, five and ten years is a great way to add value to the business as a database administrator.

5.  Create Metric Extensions for Unique Monitoring Scenarios

Enterprise Manager 12c is a self-service product.  So often there are unique situations that the business needs monitored, or that the DBA notes can create an issue or outage, but that aren’t, by default, covered by a metric that comes with EM12c.  It’s easy enough to create a metric extension and take the concern and worry out of the situation, creating more value to the business.
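
As a purely hypothetical illustration of the kind of query a metric extension might wrap, something as simple as a count of long-blocked sessions can become a monitored, alertable metric; this isn’t a shipped metric, just a sketch:

-- Hypothetical metric extension query: sessions blocked for more than 60 seconds.
-- The returned count becomes the metric value that thresholds and alerts are set against.
SELECT COUNT(*) AS blocked_sessions
FROM   v$session
WHERE  blocking_session IS NOT NULL
AND    seconds_in_wait > 60;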

6.  Add Corrective Actions, (Jobs)

Often, when a problem occurs, a DBA has a simple shell script or SQL they run and it corrects the problem.  If this is the case, why not have Cloud Control monitor for the issue, create an incident in the Incident Manager, send an email, then run the SQL or script as a Corrective Action?  The DBA will still know the problem occurred the next morning, but no one needs to be woken up to do what can be automated in the system.
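
As a purely hypothetical example, a corrective action attached to a tablespace-space alert might simply add a datafile, so the incident is still logged and emailed but nobody is paged to type the command at 3am; the tablespace name, path and sizes below are placeholders:

-- Hypothetical corrective action SQL for a tablespace-full alert:
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/ORCL/app_data05.dbf'
  SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 16G;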


7.  Use Patch Plans and Automate Patching

I understand, really.  Something could somehow, somewhere, some rare time go wrong, but the patch plans you can create in Enterprise Manager are surprisingly robust and full featured. If you’re still doing patching the old fashioned way and not patching environments in the more automated and global patch plan way, you’re wasting time and let’s face it-  DBAs rarely have time to waste.  You are a resource that could be utilized for more important tasks and quarterly PSU patching is just not one of those.

8.  Centralize Database Jobs to EM Jobs

The common environment is structured with multiple DBAs, often with one DBA as primary to a database environment and the others playing catch up to figure out how the primary has the database set up.  My favorite DBA to work with once told me, “Kellyn, love your shell scripts.  They make the world go ‘round.  I just don’t want to try to figure out how you write shell at 3am in the morning or what kind of scheduler is used on all the OS’s you support!”  I realized that I owed it to him to centralize all my environments with an interface that made it easy for ANYONE to manage them.  No one had to look at cron, the task scheduler or a third party scheduling tool anymore.  Everything was in Enterprise Manager, and no matter what operating system, it all looked very similar, with the logs in the same place, found in the same tab of the UI.  Think about it- this is one you do for the team, so move those jobs to inside Enterprise Manager, too…

9.  Put Compliance Framework into Place

Compliance is one of those things that seems a mystery to most.  I’m often asked why environments really need it and whether it makes sense.  It can seem overwhelming at first, but knowing which database environments, hosts and other targets are out of compliance helps you get your environment set up to ensure that business best practices are in place-  you have a baseline of compliance standards for configuration settings, installation and real-time monitoring to view globally via EM12c.

10.  Plug-ins to Offer a Single Pane of Glass View

A database is a database, or that’s how the business sees it.  I have almost as many years in SQL Server as I do in Oracle.  I’ve worked in Sybase, Informix, Postgres and MySQL.  After being hired for my Oracle DBA skills in every job I’ve held, it never failed-  within 6 weeks, a mission critical database environment on a secondary database platform was discovered, one that a group, often outside of IT, had implemented and that now needed critical support.  Enterprise Manager offers plug-ins to support all of the above database platforms and more.  It offers plug-ins for engineered systems, storage arrays and other hardware that the DBA is now expected to manage, too.  Why manage all of this from multiple systems when you can easily create a single pane to ensure you’re covered?

So there you have it, my top ten list.  There are, of course, hundreds of other great features in EM12c, but make sure you are taking advantage of these in the list!


Posted in Enterprise Manager, Oracle Tagged with: , , ,

June 22nd, 2015 by dbakevlar

I’m glad to be home after a couple weeks in Europe, speaking at both the Harmony 15 conference in beautiful Tallinn, Estonia and then as a keynote speaker at AOUG in lovely Vienna, Austria the week after.  I get to pretty much stay close to home for the next two months, traveling only a bit, but I want to go over the upcoming conferences that I’ll be speaking at the next couple months.

July 23rd, 2015, Georgia Oracle User Group’s Quarterly Tech Symposium

I’ll be speaking, along with my peer, Loc Nhan at the event.  I’m looking forward to seeing the Atlanta folks, like Bobby Curtis, Danny Bryant and Stewart Bryson.  You can find specifics here.

August 7th, 2015, Rocky Mtn. Oracle User Group’s Quarterly Education Workshop

August has a number of local events, along with all the work that I need to perform for RMOUG, so I’m sticking close to home!  I’ll be the main speaker and will be helping coordinate a new challenge for the RMOUG Quarterly Education Workshop-  a Family Coding Day!  We’ll start out the day at Elitches, before it opens, with hands on demos from DeVoxx4Kids and Lady Coders, and I’ll do a lab with the kids using Raspberry Pi.  After the park opens and the kids go wild on the rides, we’ll continue with an EM12c day, introducing Hybrid Cloud and a hands on lab with the AWR Warehouse!  All this will be on August 7th and should be up on the rmoug.org site soon, so keep an eye out for it!

September 22nd-23rd, 2015, East Coast Oracle Conference, (ECO)

I’ll be doing a few tech sessions, along with a joint keynote with Jeff Smith of SQL Developer fame.  We’ll be educating the tech masses on the value of social media to their careers, and details can be found here.

October 25th-29th, 2015, Oracle Open World

It’s hard to believe it’s almost that time again, but yes, the biggest Oracle event in the world will be happening at the end of October this year and I’ll be ready, will you? 🙂

November 2nd-5th, 2015, CMG Performance and Capacity Conference

This is a new one for me and I can’t wait!  The event will have me talking about why performance is important with Oracle and how to achieve it as a feature!  This is being held in the great city of San Antonio, Tx and it should be a lovely time to be on the river walk in November!  Details about this conference can be found here.

I am booked up through this time next year, but we’ll just stop at this event and I’ll update you more as the time gets closer!  Thanks to everyone who’s granted me these great opportunities, and I look forward to seeing everyone at these incredible events!

Posted in DBA Life Tagged with: , ,

March 3rd, 2015 by dbakevlar

I’m at HotSos Symposium 2015 speaking this week, so I thought I would blog about the results of the conference I’m the director of, which finished up just two weeks ago.  I’m not admitting to being overwhelmed by OEM questions here, as I’m rather enjoying it.  I love seeing folks so into Enterprise Manager and look forward to more!  Keep the ideas for more blog posts coming!  I’ll write on all these great topics in upcoming posts.

Rocky Mountain Oracle User Group, (RMOUG) Training Days 2015 is over for another year, but the conference is a task that encompasses approximately 10-11 months of planning and anywhere from 120-200 hrs per year of volunteer work from me as the Training Days Director.  This in no way includes the countless hours that are contributed by Team YCC, our conference connection who helps us manage the conference, or the invaluable volunteers from our membership that assist us with registration, driving speakers to and from the airport, as well as serving as ambassadors for our 120+ technical sessions.

After each conference I direct, I compile a ton of data that assists me in planning for the next year’s conference.  This starts immediately after the current year’s conference and comprises not only the feedback that is offered to me in the conference evaluations, but spoken and email feedback that attendees and speakers are kind enough to share with me.  I find this data crucial to my planning for the next year, but there is an important set of rules I utilize to ensure I get the most from the data.  I manage the conference very similarly to the way I manage database environments, and data is king.  I’m going to share just a small bit of this data with you today, but it will give you an idea of the detail I get into when identifying the wins, the opportunities and the challenges for next year’s conference.

One of the major changes I implemented in the evaluations was based off a conversation with many of the Oak Table members on the value the rating choices offer.  When given the opportunity to grade a venue, speaker or event on the following:

  • Very Satisfied
  • Satisfied
  • Unsatisfied

Reviewers were more inclined to choose “Satisfied” of the three options.  It was easy and didn’t demand a lot of thought about their choice, whereas a 1-10 scale would result in more valuable data in my evaluations.  Being the glutton for punishment I am, and finding logic in the conversation, I chose to update our evaluations to the 1-10 scale vs. the above choices or 1-5.

It’s been a very “interesting” and positive change.  Not only did it bring up our scoring from “Satisfied”, which was an average rating, to higher marks overall, but we received more constructive feedback that can be used to make the conference even better next year.

Colorado Convention Center

Although we are continually searching for the best venue for the conference, we receive positive feedback on the Colorado Convention Center.  Our attendees appreciate the central location and the opportunities to enjoy all the restaurants, entertainment and such in the downtown area.  The Colorado Convention Center offers us the world-class venue that a conference of our size deserves.  Our speakers find little challenge in gaining funding to travel and speak because of the location, too.

[chart: Colorado Convention Center evaluation scores]

Notice that we don’t have any scores from 1-10 that are under 8!  Our average score was 9.57, so pretty impressive.  That was the overall average even counting how often they let us down on coffee/tea and other refreshments between sessions, (yeah, I’ll be talking to them about this, as I feel it’s very important to have during those breaks.)

Project O.W.L.

This was part of the new marketing initiative I put together this year for RMOUG.  New additions at conferences are always unnerving.  We had RAC Attack last year, but when you create a new exhibitor area, new sponsorships and attendee participation opportunities, you hope every group will get what they need out of the initiative.  We added Rep Attack, (replication from DBVisit), Clone Attack, (from Delphix), a hardware display from Oracle and a Stump the Expert panel from OTN, who also sponsored our RAC Attack area.

We did pretty well with Project O.W.L., (which stands for Oracle Without Limits) but we learned from our evaluations that our attendees really wanted all those “attack” opportunities on the first ½ day, during our deep dives and hands on labs.

[chart: Project O.W.L. evaluation scores]

The reviewers didn’t complain about any of the “Attack” sessions or hardware displays, but gave lower scores, (down to 6, on a scale of 1-10) due to scheduling changes they really wanted to see for this new event offering.

Length of Conference

We have offered a 2 ½ day conference for the last couple of years, after previously having a 2 day conference plus an additional ½ day of hands on labs.  Last year we started offering a single day pass for those few who were unable to join us for the entire event.  This offered additional flexibility for our attendees, and we noticed that only about 35 people take advantage of it, but it removed the challenge we had with shared badges, which had impacted our “true attendance count” when working with the Colorado Convention Center on our next year’s contract.  It also increased the number of folks who asked for a longer conference:

[chart: preferred conference length]

Currently, 65% of our attendees who’ve filled out their evaluations would like to extend the conference to a full three days.  I’ve also seen some benefits in separating the hands on labs/deep dive sessions into development-focused vs. database-focused to get the most out of the three days.  This would mean that during the DBA sessions, we would have the development centric deep dives and HOL, and vice-versa for the DBA deep dives/HOL.  This scheduling would also allow us to add another track; our current comments list an interest in DevOps and VMWare or hardware.

Amount of Sessions

We consistently have comments saying, “Not enough DB12c” and the next might say, “Too much DB12c”.  Below that is a comment asking for more development sessions, followed by someone asking for less development.  This is expected and actually tells me when I’m in my “sweet spot” of session scheduling.  Our tracks closely match the roles our attendees designate, so we know we are doing well with our schedule.

Percentage of Sessions for Each Track

[chart: percentage of sessions for each track]

Roles that Attendees Hold

[chart: roles that attendees hold]

If you match up the percentage of sessions in each track vs. the roles that our attendees hold, you will see that we have an excellent balance of sessions relative to the number of attendees likely to be interested in them.

Women Attending Training Days

The reason I started Women in Tech at RMOUG was that I did a count, (we don’t collect information on the gender of our attendees, but I can either tell by name or by knowing the person, which allows me to count about 97% of our attendees) and realized that we only had 7% attendance by women.  With the introduction of the WIT sessions, we have now increased that attendance to over 22%.

[chart: women attending Training Days]

New Attendance

We do ask our attendees how many Training Days they have attended.  I noted a number of folks who felt the people they’d always seen at the conference were no longer attending, and we’d noticed that, as with many Oracle User Group conferences, the attendees are “aging out”.  Gaining new attendees through new Oracle customers, startups and new technologies is essential.  Knowing whether you are successful at it is important, too.

[chart: number of Training Days attended]

Currently, 40% of our attendance have attended four or fewer Training Days, which tells us we are making headway in introducing our conference and RMOUG to the area.  We are still retaining 59% of our long-term attendees, (we do have some who have attended most of our 26 conferences, too!)  Keeping both groups satisfied is also a big challenge, so again, this data shows us that we are doing a very good job.


We had an average rating of 8.85 on session quality on a scale of 1-10, and most of the complaints came when anyone, and I do mean anyone, thought they could get away with marketing in their sessions.  No matter how often we let people know that marketing is very frowned upon by our attendees, with abstract reviewers offering low scores for any abstract that appears to contain marketing, someone still tries to push the marketing card.  The session-level evaluations won’t be out for a little while, but I already fear for those that were called out at the conference level for marketing or sales in their technical session; those were the ones that created the majority of the 7 scores.

Session Quality

We couldn’t have the great speakers, topics and quality of sessions without our great abstract reviewers and committee.  We have around 50 reviewers, made up of local attendees, ACEs and Oak Table members.  This provides us with the best overall scoring.  We ask people to only review those tracks that they are knowledgeable in and to never review their own abstracts or those that may be considered a conflict of interest.  Even my own abstracts are submitted for review, and then I pull all of mine, knowing that I’ll be onsite and, if I need a last minute replacement, it comes in handy to slip one of mine or Tim Gorman’s in, as we have a few that have been approved.  I’m commonly quite busy and prefer to give as many speakers as possible an opportunity to speak, so I have no problem pulling mine from the schedule unless absolutely required.

[chart: session quality scores]

We achieved an average score of 9.57 on session quality on a scale of 1-10, so this tells you just how effective our abstract review and selection process is.  I applaud and recognize our abstract reviewers and thank them for making my job so easy when it comes to not only choosing the abstracts for our conference, but also when someone asks why they weren’t selected; the scores and comments, (sans the reviewer names, those remain between the committee and myself) offer feedback to assist the speaker in how they might change their abstract submission in the future for a better chance of getting accepted.  We do receive over 300 abstracts per year and can only accept around 100, so we are forced to say no to 2/3 of the abstracts submitted.

Overall, our registration count for paying attendees was up, which is a rare thing for user group conferences.  Our number of volunteers also increased, (which is crucial to our conference success.)  RMOUG is a non-profit that relies on the power of our great volunteer base.  These volunteers drive many of our speakers to and from the airport, register attendees and serve as ambassadors to each and every session.  Our exhibitor area was 40% larger than it’s been in previous years, which brings additional revenue that RMOUG depends on for Quarterly Education Workshops, Special Interest Groups, the RMOUG newsletter SQL>Update and other yearly expenditures.  RMOUG couldn’t survive without the contributions of so many different groups, community participation and sponsorship.  This user group is powerful because of its community, and that support deserves a round of applause for making another Training Days conference a success!

Posted in DBA Life, Oracle Tagged with: , ,
