Category: Oracle

April 18th, 2017 by dbakevlar

For over a year I’ve been researching cloud migration best practices.  Consistently, there’s one red flag that trips me up when I’m reviewing recommended migration paths.  No matter what you read, just about all of them include the following high-level steps:

As we can see from above, the scope of the project is identified, requirements laid out and a project team is allocated.

The next step in the project is to choose one or more clouds and the first environments to test out in the cloud, along with addressing security concerns and application limitations.  DBAs are tested repeatedly as they try to keep up with the demand of refreshing the cloud environments and keeping them in sync with on-prem, and the cycle continues until a cutover date is issued.  The migration go or no-go occurs, and either the non-production environments or all of the environments are migrated to the cloud.

As someone who works for Delphix, I focus on the point of failure where DBAs can’t keep up with full clones and data refreshes in cloud migrations, or where development and testing aren’t able to complete the steps they could if the company were using virtualization.  From a security standpoint, I’m concerned with how few companies are investing in masking, given the sheer quantity of breaches in the news, but as a DBA, there is a whole different scenario that really makes me question the steps many companies are using to migrate to the cloud.

Now here’s where they lose me every time- the last step in most cloud migration plans is to optimize.

I’m troubled by optimization being viewed as the step you take AFTER you migrate to the cloud.  Yes, I believe that there will undoubtedly be unknowns that no one can take into consideration before the physical migration to a cloud environment, but to take databases, “as is” when an abundance of performance data is already known about the database that could and will impact performance, seems to be inviting unwarranted risk and business impact.

So here’s my question to those investing in a cloud migration or who have already migrated to the cloud-  Did you streamline and optimize your database/applications BEFORE migrating to the cloud or AFTER?

 

 

Posted in AWS, Azure, Oracle, SQLServer

April 13th, 2017 by dbakevlar

I was in a COE, (Center of Excellence) meeting yesterday and someone asked me, “Kellyn, is your blog correct?  Are you really speaking at a Blockchain event??”  Yeah, I’m all over the technical map these days and you know what?

I love the variety of technology, the diversity of attendance and the differences in how the conferences are managed.  Now, that last one might seem odd- you might think they’d all be similar- but it’s surprising how different they really are.

Getting to Know You

Today I’m going to talk about an aspect of conferences that’s very near to my heart: networking via events.  For women in technology, there are some unique challenges when it comes to networking.  Men have concerns about approaching women to network, such as fear of accusations of inappropriate interaction, and women face the challenge that a lot of networking opportunities occur outside the workplace, in social situations we may not be comfortable in.  No matter who you are and no matter what your intentions, there’s a lot of wariness, and in the end, women often just lose out when it comes to building their network.  I’ve been able to bridge this pretty successfully, but I have seen where it’s backfired, and I’ve found myself on more than one occasion defending both genders who’ve ended up on the losing side of the situation.

With that said, conferences and other professional events can help us geeks build our networks, and it’s not all about dedicated networking events.  I noticed a while back that the SQL Server community appeared to be more networked among its members.  I believe part of this is due to the long history of their event software and some of its features.

Using the SQL Pass website, specifically the local user group event management software, notice that it’s all centralized.  Unlike the significantly independent Oracle user groups, SQL Server user groups are able to use a centralized repository for their event management, speaker portal, scheduling, etc.  That’s not to say there aren’t events outside of Pass Summit and SQL Saturdays- there are actually a ton- but this is the portal for the regional user groups, creating the hub that bridges out to the larger community.

Centralized System

Besides being able to submit my abstract proposals to as many SQL Saturdays worldwide as I like from one portal, I can also maintain one speaker biography, along with information about my blog, Twitter, LinkedIn and other social media, in this one location.

The second benefit of this simplicity is that these biographies and profiles “feed” the conference schedules and event sites.  You have a central location for management, but hundreds of event sites where different members can connect.  After abstracts have been approved and the schedule built, I can easily go into an event’s schedule, click on each speaker biography and choose to connect with anyone listed who has entered their social media information in their global profile.

Using my profile as an example, you’ll notice the social media icons under my title are available with a simple click of the mouse:

This gives me not only an easy way to network with my fellow speakers, but also an excuse to do so!  I can click each of the social media buttons to follow a speaker on Twitter or connect with them on LinkedIn.  With the LinkedIn connection, I send a note telling the speaker that we’re both speaking at the event and that, because of this, I’d like to add them to my network.

As you can join as many regional and virtual user groups as you like, (and your Pass membership is free) I joined the three in Colorado, (Denver, Boulder and Colorado Springs.)  Each one offers the ability to connect with the board members using a similar method, (here I’ll use Todd and David from the Denver SQL Server user group as my examples.)

The Oracle user groups have embraced adding Twitter links to most speaker bios and some board profiles, but I know that for RMOUG, many still hesitate or aren’t using social media to the extent they could.  I can’t stress enough how impressed I am when I see events incorporate LinkedIn and Twitter into their speaker and management profiles, knowing the value they bring to technical careers, networks and the community.

New Kids on the Block

Although the SQL Server community is a good example, they aren’t the only ones.  I’m also speaking at new events on emergent technologies, like Data Platforms 2017.  I’ll be polite and expose my own profile page, but I’m told I’m easy to find in the sea of male speakers… 🙂 Along with my picture, bio and session information, there are links to my social media connections, allowing people to connect with me:

Yes, the Bizzabo software, (same software package that RMOUG will be using for our 2018 conference, along with a few other Oracle events this coming year) is aesthetically appealing, but more importantly, it incorporates important networking features that in the past just weren’t as essential as they are in today’s business world.

I first learned the networking tactic of connecting with the people I was speaking alongside from Jeff Smith, and I think it’s a great skill that everyone should take advantage of, whether you’re speaking or just attending.  For women, I think it’s essential to your career to network outside of the traditional ways we’ve been taught, and this is just one more way to work around that glass ceiling.

 

 

Posted in DBA Life, Oracle, SQLServer

April 11th, 2017 by dbakevlar

I’m itching to dig more into the SQL Server 2016 optimizer enhancements, but I’m going to complete my comparison of indices between the two platforms before I get myself into further trouble with my favorite area of database technology.


Index Organized Tables

Index Organized Tables, (IOT) are just another variation of a primary b-tree index, but unlike a standard table with an index simply enforcing uniqueness, the index IS the table.  The data is physically stored in primary key order, which improves performance for ordered and range-based access.

This is the closest to a SQL Server clustered index that Oracle will ever get, so it makes sense that a comparison of performance and fragmentation is the next step after the standard index and primary key comparisons I’ve already performed between the two platforms.

Let’s create a new copy of our Oracle objects, but this time, update to an Index Organized Table:

CREATE TABLE ora_tst_iot(
        c1 NUMBER, 
        c2 varchar2(255),
        CREATEDATE timestamp DEFAULT CURRENT_TIMESTAMP,
        CONSTRAINT pk_ora_iot PRIMARY KEY (c1))
    ORGANIZATION INDEX 
    TABLESPACE users
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE users;

CREATE SEQUENCE C1_iot_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER C1_iot_BIR 
BEFORE INSERT ON ORA_TST_IOT
FOR EACH ROW
BEGIN
  SELECT C1_iot_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

PCTTHRESHOLD can be set anywhere between 1 and 50, and I chose 20 for this example.  I didn’t add any compression, as C1 is a simple sequence that won’t benefit from it, and I added the same supporting objects- a sequence and a trigger- just as I did for the previous test on the standard Oracle table.
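As a sketch of how PCTTHRESHOLD and the overflow segment interact, (this demo table and its INCLUDING clause are illustrative only, not part of the test):

```sql
-- Rows whose non-key columns would consume more than PCTTHRESHOLD percent
-- of an index block are split, with the trailing columns pushed to the
-- overflow segment.  INCLUDING names the last column kept in the index
-- block; everything after it always goes to overflow.
CREATE TABLE iot_overflow_demo (
  id   NUMBER PRIMARY KEY,
  note VARCHAR2(4000)
) ORGANIZATION INDEX
  PCTTHRESHOLD 20
  INCLUDING id
  OVERFLOW TABLESPACE users;
```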

First Data Load

Now we’ll insert the rows from ORA_INDEX_TST into ORA_TST_IOT:

SQL> insert into ora_tst_iot(c2) select c2 from ora_index_tst;

995830 rows created.

Elapsed: 00:00:04.01

There won’t be any fragmentation in the current table-  it was directly loaded, with no deletes and no updates.  Although it won’t be shown in the examples, I collect stats at regular intervals and flush the cache to ensure earlier runs don’t impact any of my tests.
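The stats collection and cache flush between runs looked roughly like this, (a sketch- flushing the caches is strictly a test-system tactic):

```sql
-- Refresh optimizer statistics on the IOT between test runs:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ORA_TST_IOT', cascade => TRUE);

-- Flush the buffer cache and shared pool so earlier runs don't
-- skew timings with cached blocks or parsed plans:
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;
```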

SQL> ANALYZE INDEX PK_INDEXPS VALIDATE STRUCTURE;
Index analyzed.

Elapsed: 00:00:00.37

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT
SQL> analyze index pk_ora_iot validate structure;

Index analyzed.

SQL> analyze index pk_ora_iot compute statistics;

Index analyzed.

SQL> SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS,USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_ORA_IOT';

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
     32115       7996           0  224394262         88

Well, the table IS THE INDEX, so collecting stats on the index covers the table.  Now let’s remove some data, rebuild and see what we can do to this IOT-

SQL> select * from ora_tst_iot
  2  where c1=994830;

    994830

SBTF02LYEQDFGG2522Q3N3EA2N8IV7SML1MU1IMEG2KLZA6SICGLAVGVY2XWADLZSZAHZOJI5BONDL2L
0O4638IK3JQBW7D92V2ZYQBON49NHJHZR12DM3JWJ1SVWXS76RMBBE9OTDUKRZJVLTPIBX5LWVUUO3VU
VWZTXROKFWYD33R4UID7VXT2NG5ZH5IP9TDOQ8G0

10-APR-17 03.09.52.115290 PM

SQL> delete from ora_tst_iot
  2  where c1 >=994820           
  3  and c1 <=994830;

11 rows deleted.

SQL> commit;

Now we see what the data originally looked like-  C2 holds large string data that was consuming significant space.

What if we now disable our trigger for our sequence and reinsert the rows with smaller values for c2, rebuild and then update with larger values again?

ALTER TRIGGER C1_IOT_BIR DISABLE;

INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994820, 'A');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994821, 'B');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994822, 'C');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994823, 'D');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994824, 'E');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994825, 'F');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994826, 'G');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994827, 'H');

so on and so forth till we reach 994830…

COMMIT and then let’s rebuild our table…

Rebuild or Move

ALTER TABLE ORA_TST_IOT MOVE;

What happens to the table, (IOT) when we’ve issued this command?  It moves all the rows back, filling each block up to the PCTFREE setting.  For an IOT, we can’t simply rebuild the index, as the index IS THE TABLE- the table itself has to be moved.

Now we’ve re-organized our IOT, so the blocks are only taking up the space they would have when first inserted into.  So let’s see what happens when we issue an UPDATE to those rows-

SQL> update ora_tst_iot set c2=dbms_random.string('B',200)
  2  where c1 >=994820
  3  and c1 <=994830;

11 rows updated.

So how vulnerable are IOTs to different storage issues?

Chained Rows

What do chained rows look like after deleting, re-inserting smaller data and then updating to larger data values than the original, with only 10% free on each block?

Even just 11 updated rows show the pressure, (keep in mind v$sysstat is cumulative for the instance, so this counter reflects all activity since startup):

SQL> SELECT 'Chained or Migrated Rows = '||value
 FROM v$sysstat
 WHERE name = 'table fetch continued row';  

Chained or Migrated Rows = 73730
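Since v$sysstat counts the whole instance, a per-table count can be pulled with ANALYZE … LIST CHAINED ROWS, (a sketch- for an IOT the chained-rows catch table needs the UROWID version created by utlchn1.sql):

```sql
-- Build the catch table from $ORACLE_HOME/rdbms/admin/utlchn1.sql first,
-- then list this table's chained/migrated rows:
ANALYZE TABLE ora_tst_iot LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) AS chained_cnt
FROM   chained_rows
WHERE  table_name = 'ORA_TST_IOT';
```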

Data Unload

Let’s delete and update more rows using DML like the following:

SQL> delete from ora_tst_iot
  2  where c2 like '%300%';

4193 rows deleted.

Insert rows containing ‘300’ with varying lengths, delete more, rinse and repeat- update and delete…

So what has this done to our table as we insert, update, delete and then insert again?

SQL> SELECT table_name, iot_type, iot_name FROM USER_TABLES
     WHERE iot_type IS NOT NULL;  

TABLE_NAME                     IOT_TYPE     IOT_NAME
------------------------------ ------------ ------------------------------
SYS_IOT_OVER_88595             IOT_OVERFLOW ORA_TST_IOT
ORA_TST_IOT                    IOT

This is where a clustered index and an IOT are very different.  There is a secondary management object involved when there is overflow.  If you look back at my CREATE statement, yes, I chose to create an overflow.  Even if I drop the IOT properly, the overflow table will go into the recycle bin, (unless I’ve configured the database without it.)
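When the time comes to clean up, a sketch of dropping the IOT and its overflow segment without leaving either in the recycle bin:

```sql
-- PURGE drops the IOT and its SYS_IOT_OVER_% overflow segment outright,
-- bypassing the recycle bin:
DROP TABLE ora_tst_iot PURGE;

-- Or, if it was already dropped without PURGE, empty the bin:
PURGE RECYCLEBIN;
```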

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> select blocks, height, br_blks, lf_blks from index_stats;

    BLOCKS     HEIGHT    BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768          3         45      32115

We can see from the block counts that there aren’t many rows per leaf block-  this is a new table without a lot of DML, but even with the current configuration, few rows are returned per leaf block.

When we select from the IOT, the index is in full use and we can see that with the proper pct free/pct used, the index is still in pretty good shape:

SQL> select * from table(dbms_xplan.display_awr('fbhfmn88tq99z'));

select c1, c2, createdate from ora_tst_iot

Plan hash value: 3208808379

------------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |       |       |  8709 (100)|          |
|   1 |  INDEX FAST FULL SCAN| PK_ORA_IOT |   995K|   206M|  8709   (1)| 00:01:45 |
------------------------------------------------------------------------------------

13 rows selected.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT    BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768          3         45      32058

SQL> select pct_used from index_stats;

  PCT_USED
----------
 88

I Did a Bad

So now what happens, if like our original test, we shrink down the percentage of what can be used and reorganize, (and please don’t do this in production…or test….or dev….or ever! 🙂)?

SQL> alter table ora_tst_iot move pctfree 90;
Table altered.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT    BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
    172928          3        228     165630

Well, that’s a few more leaf blocks, eh?  After re-enabling the trigger, insert again-

SQL> ALTER TRIGGER C1_IOT_BIR ENABLE;

Trigger altered.

SQL> BEGIN
  2    FOR i IN 1..1000000 LOOP
  3      INSERT INTO ORA_TST_IOT(c2) VALUES(i);
  4    END LOOP;
  5    COMMIT;
  6  END;
  7  /

Now we have our elapsed times for the SQL Server inserts of 1 million records at 100% and 10% fill factor.  Let’s compare them to our IOT.  The IOT move command to fill the blocks to 100% was quite fast.  Of course, the reverse, only allowing for 10%, (90% free) took F.O.R.E.V.E.R…, (OK, it sure felt like it…why didn’t I just truncate it?  Oh yeah, I wanted it to be a real test, not simply an easy test.)

Note: For this test, we’ll rebuild after updating the pctfree each time.
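The note above, sketched out- each timed run reset PCTFREE with a MOVE before the insert:

```sql
-- Pack the blocks full, (0% free) before one timed run:
ALTER TABLE ora_tst_iot MOVE PCTFREE 0;

-- ...run the 1,000,000-row insert and note the elapsed time...

-- Then reorganize to leave 90% of each block free for the next run:
ALTER TABLE ora_tst_iot MOVE PCTFREE 90;
```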

 

10% Fill Factor in SQL Server and 1 million insert: Elapsed time: 23 minutes, 18 seconds

90% PCTFree in Oracle and 1 million insert:  7 min, 12 seconds

100% Fill Factor in SQL Server and 1 million insert: Elapsed Time: 4 minutes, 43 seconds

0% PCTFree in Oracle and 1 million insert: 1 min, 8 seconds

REBUILD of the Oracle IOT to make it 90% free in each block?  Elapsed Time:  8 hrs, 21 minutes, 12 seconds 

…along with four backups of the archive logs it generated, which filled up the archive dest… 🙂  Now, the AWS Trial is meant for testing out the Delphix product, not for testing index performance in a high insert/delete/update scenario, so I’ve been asking for some of these challenges, but it was still a great way to build this out quickly and then compare.
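In hindsight, one way to tame the archive log flood during a big reorganization, (a sketch- NOLOGGING trades away recoverability, so a fresh backup is mandatory afterward):

```sql
-- Generate minimal redo during the move; the segment can't be recovered
-- from archive logs until it's backed up again:
ALTER TABLE ora_tst_iot MOVE PCTFREE 90 NOLOGGING;

-- Re-enable logging once the reorganization is done:
ALTER TABLE ora_tst_iot LOGGING;
```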

 

Results

In this test, these were the overall results:

  1. Less overhead during transactional processing for inserts, updates and deletes to the IOT.
  2. Improved performance on any selects that require the data to be sorted in sequential order.
  3. Similar performance to SQL Server clustered indexes on complex queries, with less sort temp usage.
  4. A more limited use case than SQL Server clustered indexes- clustered indexes are quite common, while IOTs are less used in the Oracle space.
  5. More maintenance upkeep, as we’ll need to reorganize the IOT if it’s used with processing that includes a lot of inserts, updates and deletes.
  6. DBCC rebuilds of a clustered index use fewer resources and don’t impact the transaction log the way an IOT reorganization hits Oracle’s undo and archive logs.  It was easier to build a new table with the high PCTFREE storage configuration, insert the data and drop the old table than to do an “alter move” command.

There are more comparisons to do, so I’m going to dig in more on the SQL Server side, but here’s to Oracle Index Organized Tables, (IOTs)!

Posted in Oracle, SQLServer

April 5th, 2017 by dbakevlar

I just uploaded my slides from Collaborate 2017 to Slideshare, along with the findings that were missing from the slide deck I used for Oak Table World on the War of the Indices- Oracle and SQL Server.  Feel free to download them here.

As I tested Oracle vs. SQL Server index performance, I ran across this great script from Franck Pachot to check for fragmentation.  You’ll just need to update the script to declare the table and index names, adding those two values correctly before running it.

The outcome when run against my ugly index in the testing of the good, the bad and the ugly, was fun, to say the least:

SQL> @index_frag.sql;

        C1 ->         C1 rows/block bytes/block %free space     blocks free
---------- -- ---------- ---------- ----------- ----------- ---------- -----
     30041 ->     130221         35         560          93       2851 oooo
    130256 ->     230433         35         560          93       2851 oooo
    230468 ->     330654         35         560          93       2851 oooo
    330689 ->     430874         35         560          93       2851 oooo
    430909 ->     531073         35         560          93       2851 oooo
    531108 ->     631264         35         560          93       2850 oooo
    631299 ->     731429         35         560          93       2850 oooo
    731464 ->     831547         35         560          93       2850 oooo
    831583 ->     931711         35         560          93       2850 oooo
    931746 ->    1030047         34         560          93       2850 oooo

If you have just one or a couple of indexes on which you want to perform some deep fragmentation analysis, Franck’s script does an awesome job.

Just my share for the day…. 🙂

Posted in Oracle

April 5th, 2017 by dbakevlar

This post has a lot of the support code and data for my Oak Table Talk that I’ll be giving at IOUG Collaborate 2017 in Las Vegas on April 5th, 2017.  

One of the Iceland 2017 SQL Saturday sessions got me thinking about indexing and how similar and different it all is in Oracle vs. SQL Server.  There were some really fun, (well, at least what I call fun…) test cases built out and referenced by Paul Randal.  After looking through some of them, I decided it might be interesting to try to replicate them in Oracle, (as closely as possible) and compare how the two database platforms deal with index storage- specifically, SQL Server’s Fill Factor vs. Oracle’s PCTFREE.

B-tree indexing is the cornerstone of physically optimizing searches on data.  No consensus exists on what the “B” stands for, (some think it’s from Bayer, one of the main researchers, and many more believe it’s for Boeing, the research center where the work was done.)

The choices in how the data is organized- leaves and such- are pretty standard, but database platforms have created some unique indexing that enhances queries on an RDBMS vs. just having heap tables.

Using Oracle and SQL Server as our choice for a comparison today, there are a few translations I need for readers of this blog:

Oracle                                  SQL Server                        Description
Index Organized Table, (IOT)            Clustered Index                   Physical index storing the data by its key values. In SQL Server, there can be only one clustered index per table.
Pctfree of block                        FillFactor of page                Percent of storage that is allowed to be filled. Each platform applies this at different times.
Sequence                                IDENTITY                          Ability to populate data with a sequential number
dbms_random.string                      REPLICATE                         Ability to populate data with string values
Block                                   Page                              Unit of storage
Automatic Workload Repository, (AWR)    Dynamic Management Views, (DMV)   Performance data collection

Now that we have that out of the way, you can use this trusty little table for common terms that require a “translation” from one database platform to the other.

The next thing to remember is that PCTFREE and FillFactor aren’t adhered to at all times.  Appending a row to an index is different than updating a row in an index, and each platform has its own set of criteria to decide whether it follows the rule of how full a block or page may be.
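A minimal side-by-side sketch of where each setting is declared- both only take effect when the index is created or rebuilt, not on every subsequent insert:

```sql
-- Oracle: PCTFREE is honored at index build/rebuild time; ordinary
-- right-hand appends to a sequence-driven key pack leaf blocks regardless:
ALTER INDEX pk_indexps REBUILD PCTFREE 10;

-- SQL Server: FILLFACTOR likewise applies only at create/rebuild time:
ALTER INDEX PK_CLINDX_TST ON dbo.SQL_INDEX_TST
    REBUILD WITH (FILLFACTOR = 90);
```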

The Test

The steps of this test:

  1. Create a table with three columns and two indexes-  SQL Server having its trusted clustered index.
  2. Populate data of different sizes to each of the tables.
  3. Check the storage of our index “health”.
  4. Remove data.
  5. Repeat step 2 and also remove data.
  6. Check the storage again to see how it has changed-  page splits in SQL Server, leaf block splits in Oracle.

The goals:

  1. Inspect the differences and similarities of indexing in both platforms
  2. Weigh the pros and cons of how index data is stored and used in both platforms

Oracle Code for Creation of Objects and Support

  • Table and PK with constraint
  • Alter index statement to decrease pctfree
  • Sequence to populate c1 column
  • Trigger to do this in simple way
  • Rinse, repeat, more “shampoo”, and do it all again… 🙂

Yes, I could have just used RowNum, but I was trying to kill a second bird, (testing task) with this one stone, so a trigger with a sequence it is… 🙂

CREATE TABLE ORA_INDEX_TST
(
C1 NUMBER NOT NULL
,C2 VARCHAR2(255)
,CREATEDATE TIMESTAMP
);

CREATE INDEX PK_INDEXPS ON ORA_INDEX_TST (C1);
ALTER TABLE ORA_INDEX_TST ADD CONSTRAINT OIT_PK PRIMARY KEY(C1) USING INDEX PK_INDEXPS;
CREATE UNIQUE INDEX IDX_INDEXPS ON ORA_INDEX_TST(C2);
ALTER INDEX PK_INDEXPS REBUILD PCTFREE 90 INITRANS 5;
ALTER INDEX IDX_INDEXPS REBUILD PCTFREE 90 INITRANS 5;
CREATE SEQUENCE C1_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER C1_BIR 
BEFORE INSERT ON ORA_INDEX_TST 
FOR EACH ROW
BEGIN
  SELECT C1_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

We’ll need to manually insert just enough data to fill up one block, which is 8KB in this database, (and the default.)

INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('A', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('B', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('C', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('D', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('E', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('F', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('G', 200), SYSDATE);
COMMIT;

We’ll now verify that our data is inserted into one block:

SQL> ANALYZE INDEX PK_INDEXPS VALIDATE STRUCTURE;

SQL> SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS,USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_INDEXPS';
   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
         1       7924           0       1491         19

Code for SQL Server Objects and Support.  Since I didn’t have the same secondary project request, this one will appear simpler:

  • Table with PK with constraint
  • alter index and change fill factor
  • Second Table to use for data population

CREATE TABLE SQL_INDEX_TST (c1 INT NOT NULL, c2 CHAR (255), createdate datetime);
CREATE INDEX CL2_INDEX_TST ON SQL_INDEX_TST(C2);
GO

ALTER TABLE SQL_INDEX_TST 
ADD CONSTRAINT PK_CLINDX_TST PRIMARY KEY CLUSTERED (c1);
GO

First, in SQL Server, a page will hold around 8KB of data, so let’s test out our index storage:

INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (1, 'a');
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (2, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (3, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (4, 'a');
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (5, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (6, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (7, 'a'); 
GO

We now have officially “filled” the first page as much as possible, and we should see this if we query the index DMVs:

SELECT
OBJECT_SCHEMA_NAME(ios.object_id) + '.' + OBJECT_NAME(ios.object_id) as table_name, i.name as index_name, leaf_allocation_count, nonleaf_allocation_count, fill_factor, type_desc
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.SQL_INDEX_TST'),NULL, NULL) ios
INNER JOIN sys.indexes i ON i.object_id = ios.object_id AND i.index_id = ios.index_id;
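To confirm how full the page really is, here’s a quick sketch, (DETAILED mode reads every page, so keep it to test systems):

```sql
-- avg_page_space_used_in_percent shows actual page fullness per level:
SELECT index_id, index_level, avg_page_space_used_in_percent, page_count
FROM   sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.SQL_INDEX_TST'), NULL, NULL, 'DETAILED');
```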
 

Data Loads

Oracle:

SQL> Begin
For IDS in 1..1000000
Loop
INSERT INTO ORA_INDEX_TST (C2) 
VALUES (dbms_random.string('X', 200));
Commit;
End loop;
End; 
  /

10% PCT Free- Time Elapsed 2 minutes, 12 seconds

90% PCT Free- Time Elapsed 7 minutes, 3 seconds

I’ll have both the initial test data and the new 1,000,000 rows I’ve added:

SQL> select count(*) from ora_index_tst;

  COUNT(*)
----------
     1000008

Let’s delete some of this data load to create deleted leaf blocks:

SQL> delete from ora_index_tst
2 where c2 like '%200%';

4179 rows deleted.

SQL> commit;
Commit complete.

Now let’s analyze and take a look at the stats again:

SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS,USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_INDEXPS';

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
     41227       7924         121  212596009         19

There’s a substantial difference in the number of leaf blocks vs. when the pct_used is allowed to fill up:

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
      2004       7996         531   15985741        100

Oracle wasn’t impacted by PCTFREE that much, but there was some impact.  Rebuilds helped clean up some waste, but they weren’t a true “requirement”- more of a preference after consistent deletes, updates that resized the data and poor storage choices.  The differences in performance weren’t that significant.

Now that we know we have deleted rows, let’s do the same on the SQL Server side:

SQL Server

declare @id int

select @id = 9 --already inserted 8 rows
while @id >= 0 and @id <= 1000000
begin
   insert into sql_index_tst (c1,c2) values(@id, 'DKUELKJ' + convert(varchar(7), @id))
   select @id = @id + 1
end

Default Fill Factor- Elapsed Time: 4 minutes, 43 seconds

10% Fill Factor- Elapsed time: 23 minutes, 18 seconds

Delete some rows to test similar to Oracle:

DELETE FROM SQL_INDEX_TST WHERE c2 LIKE '%200%';

Now there are a few ways we can look at how the indexes were impacted.  We’ll first check for page splits, which as we’ve discussed, cause extra work to the transaction log and fragmentation in the index:

SELECT
OBJECT_SCHEMA_NAME(ios.object_id) + '.' + OBJECT_NAME(ios.object_id) as table_name
,i.name as index_name
,leaf_allocation_count
,nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.SQL_Index_tst'),NULL, NULL) ios
INNER JOIN sys.indexes i ON i.object_id = ios.object_id AND i.index_id = ios.index_id;

Next, we’ll look at the physical fragmentation of the index:

SELECT
OBJECT_SCHEMA_NAME(ips.object_id) + '.' + OBJECT_NAME(ips.object_id) as table_name
,ips.avg_fragmentation_in_percent
,ips.fragment_count
,ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SQL_Index_tst'), NULL, NULL, 'DETAILED') ips;
GO

There’s significant fragmentation and it also impacted performance as we viewed above.

Index Maintenance

USE AS_test;  

GO

DBCC DBREINDEX ('SQL_INDEX_TST', 'CL2_INDEX_TST', 100);
DBCC DBREINDEX ('SQL_INDEX_TST', 'PK_CLINDX_TST', 100);
GO
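DBCC DBREINDEX is deprecated, by the way; the same rebuilds in the newer ALTER INDEX form would look roughly like:

```sql
-- Rebuild both indexes, packing pages to 100% like the DBCC calls above:
ALTER INDEX CL2_INDEX_TST ON dbo.SQL_INDEX_TST
    REBUILD WITH (FILLFACTOR = 100);

ALTER INDEX PK_CLINDX_TST ON dbo.SQL_INDEX_TST
    REBUILD WITH (FILLFACTOR = 100);
```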

We’ve now rebuilt our indexes and moved the fill factor to 100%.  Queries using each index column in the where clause improved by over 20%.  Inserts and updates increased to perform similarly to Oracle, unless…

Sorts on the C1 column with a clustered index in SQL Server improved dramatically and out-performed Oracle’s PK.  Only IOTs could compete, and the use case where they were beneficial was very small.
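A sketch of the kind of query where the clustered index wins- the leaf level is already in C1 order, so no sort operator is needed in the plan:

```sql
-- Range scan on the clustering key returns rows pre-sorted:
SELECT c1, c2
FROM   dbo.SQL_INDEX_TST
WHERE  c1 BETWEEN 1000 AND 2000
ORDER  BY c1;
```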

And The Winner Is…

So who won out in my comparison at Oak Table World?  As we always hear from the DBA,

It Depends…

Some of the benefits of clustered indexes in SQL Server are superior to Oracle:

  • Data is physically sorted in the clustered index
  • Optimizer usage specific- clustered index seek
  • Work best with sequential data, identity columns and order dates
  • An option to randomize the writes on the index can deter hot spots.

There are negatives that leave this debate still open for me:

  • Vulnerable to fragmentation.
  • Inserts are added at the end and can introduce the “Last Page Insert Latch Contention” wait event.
  • Subject to hot spots, (see above…)
  • Page splits hit performance hard, especially the transaction log, (a split requires much more work than a standard insert.)
  • Fillfactor can be a hit-or-miss configuration for some systems.

There were a lot of other tests and queries beyond what is presented here, but this is the main focus of the test.  I need to thank those who contributed the deep index knowledge that made me want to research on my own.  Shout out to Richard Foote, Mr. Oracle Index, and to Paul Randal and Jason Strate for the SQL Server expertise!

Posted in Oracle, SQLServer

March 30th, 2017 by dbakevlar

Every year I make the trek to Vegas for the large Collaborate conference and 2017 is no different!

It’s a busy schedule for me at Delphix, so buckle up and hang on for this next week’s events!

Sunday, April 2nd, 9am-1pm

Pre-Conference Hands on Lab

Location: South Seas C  Session# 666

Registration is required for this special conference event, and it will give registered attendees a chance to get hands-on experience with database virtualization.  This is an important skill set for database administrators, as they’ll discover how easy it is to migrate to the cloud with virtualized environments, and for developers and testers, as they’ll discover how to get those pesky, monstrous databases to move at the same speed as DevOps.

Monday, April 3rd, 1:30-2:30pm

Database Patching and Upgrading Using Virtualization, (IOUGenius Session)

Location:  Jasmine C  Session# 149

Tuesday, April 4th, ALL DAY– OK, for the entire time the exhibitor area is up! 🙂

Delphix Booth, doing demos and running amok!  

Location: Booth 1210

Wednesday, April 5th, 9:45-10:45am

Virtualization 101 and Q&A

Location: Jasmine C  Session# 201

Wednesday, April 5th, ALL DAY

Oak Table World

Location: South Seas C

1:15 Session from me!

Oracle vs. SQL Server-  Page Splits, Leaf Blocks and Other Wars

If you haven’t noticed, when you come to an Oak Table World session, (and really, why would you miss any of the phenomenal Oakies speaking??) you can put your name in for a raffle for a (difficult to find) NES system.  Yeah, entertain me, it took me a bit to track this game console down for the event!

For a complete list of Delphix sessions, they’re gathered in the schedule below:

  • 9:00 AM – 1:00 PM – South Seas C: Data Virtualization Hands-On Lab (pre-registration required) – Kellyn Pot’Vin-Gorman and Leighton Nelson
  • 1:30 PM – 2:30 PM – Jasmine C: Database Patching and Upgrading Using Virtualization – Kellyn Pot’Vin-Gorman
  • 9:30 AM – 4:15 PM – Exhibitor Showcase
  • 4:15 PM – 5:15 PM – Palm D: Linux/UNIX Tools for Oracle DBAs – Tim Gorman
  • 9:45 AM – 10:45 AM – Jasmine C: Virtualization 101 and Q&A – Kellyn Pot’Vin-Gorman
  • 5:15 PM – 8:00 PM – Exhibitor Showcase
  • 2:45 PM – 3:45 PM – Banyan E: OAUG Archive & Purge SIG Session – Brian Bent
  • 10:45 AM – 4:15 PM – Exhibitor Showcase
  • 8:30 AM – 9:30 AM – Palm A: Migration Enterprise Applications to the Cloud – Leighton Nelson
  • 5:15 PM – 7:00 PM – Exhibitor Showcase

See you next week in Vegas!!

 

Posted in Oracle

March 30th, 2017 by dbakevlar

Azure is the second most popular cloud platform to date, so it’s naturally the second platform Delphix will support on our road to the cloud.  As I start to work with the options for deploying Delphix, there are complexities in Azure I need to educate myself on.  As we’re just starting out, there’s a lot to learn and a lot of automation we can take advantage of.  It’s an excellent time for me to get up to speed with this cloud platform, so hopefully everyone will learn right along with me!

We’ll be using Terraform to deploy to Azure, just as we prefer to use it for our AWS deployments.  It’s open source, very robust and has significant support in the community, so we’ll switch from cloud setup to Terraform prep in many of these posts.  Before we can do that, we need to set up our Azure environment after we’ve registered our subscription with Azure.

Azure Consoles

Azure has both a New and a Classic console, but there are also items inside the modern, New console that are marked “Classic” yet aren’t part of the actual Classic console.  I found this a bit confusing, so it’s good to make the distinction.

Azure’s “New” Portal, with its modern, sleek design

Azure’s “Classic” management interface, with its pale blue and white color scheme, which still serves a very significant purpose

Once you’ve created your Azure account, you’ll find that you need access to the Classic console to perform many of the basic setup tasks, where the Modern console is better for advanced administration.

Preparation is Key

There are a number of steps you’ll need to perform to prepare for deploying Delphix to Azure.  The Delphix Engine, a source and a target are our goal, so we’ll start simple and work our way out.  Let’s see how much I can figure out and how much I may need to look to others more experienced to get me through.  No matter what, you’ll need both consoles, so keep the links above handy; I’ll refer to the consoles as “New” and “Classic” to distinguish them as I go along.  Know that in this post, we’ll spend most of our time in the Classic console.

Set up an Account and Add Web App

If you don’t already have one, Microsoft will let you set up an account and even give you $200 in free credits to use.  Once you sign up, then you need to know where to go next.  This is where the “Classic” console comes in, as you need to set up your “application” that will be used for your deployment.

Log into the “Classic” console and click on Active Directory and the Default Directory highlighted in blue.  This will open a new page and you will have the opportunity to click Add at the bottom of the page to add a new Active Directory application.

  • Name the Application, (open up a text editor, copy and paste the name of the app into it, you’ll need this data later)
  • The application type is web app or api
  • Enter a URL/URI and yes, they can be made up.  They don’t have to be real.
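Since several of these values are shown only once, it can help to script the note-taking this post keeps recommending. A minimal sketch; the file name and every placeholder value below are made up, so substitute your own:

```shell
# Scratch file for the values the Terraform provider block will need later.
# Every value below is a made-up placeholder.
cat > azure_notes.txt <<'EOF'
app_name:      Web_K_Terra
client_id:     <paste from the Configure tab>
client_secret: <paste immediately after saving the key>
tenant_id:     <paste from View Endpoints>
EOF

# Sanity check: all four entries recorded
grep -c ':' azure_notes.txt
```

Keeping all four values in one place pays off at the end of the post, when they all land in the Terraform provider configuration.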

Client and Client Secret

Now that your application is created, you’ll see a tab called Configure.  Click on this tab and you’ll see the client ID displayed.  Copy the Client ID and add that to your text editor, also for later.

Scroll down and you’ll see a section called Keys.  Click on the button that says “Select Duration” and choose 1 or 2 years.  At the very bottom of the screen, you’ll see a Save button, click it and then the Secret passcode will be displayed for you to copy and paste into your text editor.  Do this now, as you won’t be able to get to it later.

Tenant ID

To the left of the Save button, you’ll see “View Endpoints”.  Click on this and you’ll see a number of entries.  The tenant ID is the value that repeats in each of the entries.  An example is shown below:

Copy and paste this into your text editor under a heading of tenant ID.
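Since the tenant ID is simply the GUID embedded in each endpoint URL, you can also extract it with a one-liner. A sketch, using an entirely made-up endpoint and GUID:

```shell
# Hypothetical OAuth endpoint copied from "View Endpoints" (GUID is invented)
endpoint="https://login.microsoftonline.com/9a8b7c6d-1234-5678-9abc-def012345678/oauth2/token"

# The tenant ID is the GUID segment repeated in every endpoint URL
tenant_id=$(printf '%s\n' "$endpoint" |
  grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}')
echo "$tenant_id"
```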

Add Application to the Active Directory

Now that you’ve created this framework, you need to grant permissions to use it all.  In the Configure tab, scroll to the very bottom where it says “Permissions to Other Applications” and click on Add Application.  Choose the Azure Service Management API from the list, (if you have a new account, you won’t have much to choose from) and click on the checkmark in the lower right corner of the pane.  This will return you to the previous page.  Click on the delegated permissions and choose to grant it Access Azure Service Management as organization users, then save.

Subscription Data

Now, log into the New portal and click on Subscriptions on the left hand side.  Click on the Subscription and it will open up to display your Subscription ID, which you’ll need to copy and paste into your text editor.

Click on Access Control, (IAM) and click on Add.  You may only see your username, but the applications are there; they just aren’t displayed by default.  Type in the application name you saved to your text editor, (for example, mine is Web_K_Terra.)  Reminder: you must type the name of your app exactly as you entered it when you created it, (it’s case sensitive, etc.)  Grant the reader and contributor roles from the role list, saving between each additional role.

You should now see your user in the list with both roles assigned to it like the example below for Web_K_Terra app:

Our configuration is complete and ready to go onto the networking piece.

The first part of my terraform template is ready, too.  All the pertinent data that I required from my build out has been added to it and it looks something like the following:

provider "azurerm" {
  subscription_id = "gxxxxxxx-db34-4gi7-xxxxx-9k31xxxxxxxxxp2"
  client_id       = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
  client_secret   = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="
  tenant_id       = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
}

This is a great start to getting us out on Azure.  In part II, we’ll talk about setting up connectivity between your desktop and Azure for remote access, along with recommendations for tools to access it locally.

 

Posted in Azure, Oracle

March 21st, 2017 by dbakevlar

I ended up speaking at two events this last week.  Now if timezones and flights weren’t enough to confuse someone, I was speaking at both an Oracle AND a SQL Server event- yeah, that’s how I roll these days.

Utah Oracle User Group, (UTOUG)

I arrived last Sunday in Salt Lake, which is like a slightly milder-weather, more conservative version of Colorado, to speak at UTOUG’s Spring Training Days conference.  I love this location, and the weather was remarkable; even with the warm temps, skiing was still only a half-hour drive from the city.  Many of the speakers and attendees took advantage of the opportunity to do just that while visiting.  I chose to hang out with Michelle Kolbe and Lori Lorusso.  I had a great time, and although I was only onsite for 48 hours, I really like this event so close to my home state.

I presented on Virtualization 101 for DBAs and it was a well attended session.  I really loved how many questions I received and how curious the database community has become about how this is the key to moving to the cloud seamlessly.

There are significant take-aways from UTOUG.  The user group, although small, is well cared for and the event is using some of the best tools to ensure that they get the best bang for the buck.  It’s well organized and I applaud all that Michelle does to keep everyone engaged.  It’s not an easy endeavor, yet she takes this challenge on with gusto and with much success.

SQL Saturday Iceland

After spending Wednesday at home, I was back at the airport to head to Reykjavik, Iceland for their SQL Saturday.  I’ve visited Iceland a couple of times now, and if you aren’t aware, IcelandAir offers up to 7-day layovers in Iceland before you continue on to your final destination.  Tim and I took advantage of this perk on one of our trips to OUGN, (Norway) and it was a great way to visit some of this incredible country.  When the notification arrived for SQL Saturday Iceland, I promptly submitted my abstracts and crossed my fingers.  Lucky for me, they accepted one, and I was offered the chance to speak with this great SQL Server user group.

After arriving before 7am on Friday morning at Keflavik airport, I realized that I wouldn’t have a hotel room ready for me, no matter how much I wanted to sleep.  Luckily there is a great article on the “I Love Reykjavik” site offering inside info on what to do if you do show up early.  I was able to use the FlyBus to get a shuttle directly to and from my hotel, (all you have to do is ask the front desk to call them the night before you’re leaving and they’ll pick you back up in front of your hotel 3 hrs before your flight.)  Once I arrived, I was able to check in my bags with their front desk and headed out into town.

I stayed at Hlemmur Square, which was central to the town and the event and next to almost all of the buses throughout the city.  The main street in front of it, Laugavegur, is one of the main streets running east-west and is very walkable.  Right across this street from the hotel was a very “memorable” museum, the Phallological Museum.  I’m not going to link to it or post any pictures, but if you’re curious, I’ll warn you, it’s NSFW, even if it’s very, uhm…educational.  It was recommended by a few folks on Twitter, and it did ensure I stayed awake after only 2 hours of sleep in 24!

As I wandered about town, I noted a few things about Iceland: the graffiti murals are really awesome, and Icelandic folks like good quality products; the stores house local and international goods, often made from wool, wood, quality metals and such.  The city’s parliament building is easily accessible, right across from the main shopping area and new city development.

On Saturday, I was quick to arrive at Iceland’s SQL Saturday, as I had a full list of sessions I wanted to attend.  I was starting to feel the effects of Iceland weather on my joints, but I was going to make sure I got the most out of the event.  I had connected with a couple of the speakers at the dinner the night before, but with jet lag, you hope you’ll make a better impression on the day of the event.

I had the opportunity to learn about the most common challenges with SQL Server 2016, and that Dynamic Data Masking isn’t an enterprise solution.  Due to its lack of discovery tools, the ability to join to non-masked objects, and common values, (e.g., if 80% of the data is local, the most common location value is easily identified) the confidential data behind masked objects can still be identified.

I also enjoyed an introduction to containers with SQL Server and security challenges.  The opening slide from Andy says it all:

Makes you proud to be an American, doesn’t it? 🙂

My session was in the afternoon, and we not only had excellent discussions on how to empower database environments with virtualization, but I even did a few quick demonstrations of the ease of cloud management with AWS and Oracle…yes, to SQL Server DBAs.  I performed all validations of data refreshes from the command line, so there was no doubt I was working in Oracle, yet the refreshes were done in AWS with the Delphix Admin console.

I made it through the last session on the introduction to containers with SQL Server, which included a really interesting demonstration of a SQL Server container sans an OS installation, allowing it to run with very limited resource requirements on a Mac.  After this session was over, I was thankful that two of my fellow presenters were willing to drop me off at my hotel and I promptly collapsed in slumber, ready to return home.  I was sorry to miss out on the after event dinner and drinks, but learned that although I love Iceland, a few days and some extra recovery time may be required.

Thank you to everyone at Utah Oracle User Group and Iceland’s SQL Server User Group for having me as a guest at your wonderful events.  If you need me, I’ll be taking a nap… 🙂

 

Posted in DBA Life, Oracle, SQLServer

March 20th, 2017 by dbakevlar

Now that I’ve loaded a ton of transactions and run a workload on my source database with the SH sample schema and Swingbench, and noted how little impact the different cloud tools have on the databases, (which will come in a few later posts) I’m going to show you how easy it is to create a new VDB from all of this, WITH the new SH data included.  During all of this time, the primary users of my Delphix VDBs, (virtualized databases) would have been working from the previous image, but someone wants that new SH schema now that my testing is complete.

To do this, I open up my Delphix admin console, (using the IP address for the Delphix Engine from the AWS Trial build output), log in as delphix_admin and open up the Source group to access the Employee Oracle 11g database, (aka ORCL.)

I know my new load is complete on the ORCL database, and I need to take a new snapshot to update the Delphix Engine outside of the standard refresh interval, (I’m set at the default of every 24 hrs.)  Access this by clicking on the Configuration tab; to take a snapshot, simply click on the camera icon.

A snapshot will take a couple seconds, as this is a very, very small database, (2.3G) and then you can click on Timeflow to view the new snapshot available for use.  Ensure the new snapshot is chosen by moving the slider all the way to the right and look at the timestamp, ensuring it’s the latest, matching your recent one.

Click on Provision and it will default to the source host.  Change it to the target, update to a new, preferred database name, (if you don’t like the default) and then you may have to scroll down to see the Next button to step through the rest of the wizard.  My Macbook has a smaller screen, and I do have to scroll to see the Next button.  After you’ve made any other changes, click on Finish and let the job run.  Don’t be surprised by the speed with which the VDB is provisioned.  I know it’s really fast, but it really did create a new VDB!

Now that we have it, let’s connect to it from SQL*Plus and prove that we got the new SH schema over.

Using the IP address for the Linux target given to us in our AWS Trial build, let’s connect:

ssh delphix@<linuxtarget IP Address>

Did you really just create a whole new VDB?

[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix   1148  1131  0 18:57 pts/0    00:00:00 grep pmon
delphix  16825     1  0 Mar09 ?        00:00:06 ora_pmon_devdb
delphix  16848     1  0 Mar09 ?        00:00:06 ora_pmon_qadb
delphix  31479     1  0 18:30 ?        00:00:00 ora_pmon_VEmp6C0

Yep, there it is…

Now let’s connect to it.

Set our environment:

. 11g.env

Set the ORACLE_SID to the new VDB

export ORACLE_SID=VEmp6C0

Connect to SQL*Plus as our SH user using the password used in our creation on the source database, ORCL:

$ sqlplus sh

Enter password: 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE       COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16

8 rows selected.

SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%'
  3  group by table_name
  4  order by table_name;

TABLE_NAME       SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826

12 rows selected.

Well, lookie there, the same as the source database we loaded earlier... 🙂  Next, we’ll go into the stuff that always gets my interest- performance data, the cloud and visualization tools.

 

Posted in AWS Trial, Oracle

March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I loaded into my Oracle source database, and I hadn’t used Swingbench in quite a while, and then only from the command line, (my databases always seem to come with a load on them!) so it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my Macbook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a much simpler change than most DBAs first assume.

When first accessed, Swingbench will prompt you to choose which pre-configured workload you’d like to utilize.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then had to perform a few simple configurations to get it to run.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<service name>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>

I chose the default 16 connections to start out, but you can add more if you’d like.  You can also configure stats collection and snapshot collection before and after the workload.

I set autoconnect to true, but by default the load won’t start until you hit the green arrow button.  The load will then execute the workload with the requested number of connections until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window:

Next post we’ll discuss what you’ll see when running a Swingbench on a source database, the Delphix Engine host and subsequently refreshes to a VDB, (virtual database.)  We’ll also discuss other tools that can grant you visibility to optimization opportunities in the cloud.

 

 

Posted in AWS Trial, Cloud, Oracle

March 10th, 2017 by dbakevlar

Most people know I like to do things the hard way… 🙂

When it comes to learning things, there’s often no better way to get a strong understanding than to have to take something apart and put it back together.  I wanted to use swingbench on my AWS deployment and needed the Sales History, (SH) sample schema.  I don’t have an interface to perform the installation via the configuration manager, so I was going to install it manually.

Surprise, the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing.  There are a couple of options to solve this dilemma.  Mine was to first get the sample schemas.  You can retrieve them from a handy GitHub repository maintained by Gerald Venzl.

I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.

scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.

Next, I extracted the files to the $ORACLE_HOME/demo directory.

unzip db-sample-schemas-master.zip

The unzip creates a directory named db-sample-schemas-master, which is fine with me, as I like to retain the previous one, (I’m a DBA, so keeping copies of data until I’m sure I don’t need it is my life.)

mv schema schema_kp
mv db-sample-schemas-master schema

With that change, everything is now as it should be.  One thing you’ll find out, though, is that the download is for 12c.  I’m alright with this, as the Swingbench workload I’m using expects the 12c SH schema, too.  Not that I expect many differences, but as Jeff Smith was all too happy to remind me on Facebook, I’m using a decade-old version of Oracle on my image here.

There are a lot of scripts in the Sales_History folder, but all you’ll need to run is the sh_main.sql from SQL*Plus as sysdba to create the SH schema.

You’ll be prompted for several parameter values when creating the SH schema manually, and their meanings aren’t always what the prompt text suggests.  As I’ve seen very little written about them, (even after all these years of this existing) this may help others out:

specify password for SH as parameter 1:

Self-explanatory: the password you’d like the SH user to have.

specify default tablespace for SH as parameter 2:

What tablespace do you want this created in?  I chose Users, as this is just my play database.

specify temporary tablespace for SH as parameter 3:

Temp was mine and is the common value for this prompt.

specify password for SYS as parameter 4:

This is the password for SYSTEM, not SYS, btw.

specify directory path for the data files as parameter 5:  

This is not asking for Oracle datafiles; it’s the path to your sales_history directory, ($ORACLE_HOME/demo/schema/sales_history/) for access to the control files and .dat files for SQL*Loader.  Remember to have a slash at the end of the path name.

writeable directory path for the log files as parameter 6:

A directory for the log files.  I put this in the same directory; remember to use a slash at the end, or your log files will have the previous directory name as the beginning of the file name and save one directory up.

specify version as parameter 7:

This isn’t the version of the database, but the version of the sample schema-  the one from Github is “v3”.

specify connect string as parameter 8:

Pretty clear: the service name/connect string for the database in which the schema is being created.
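The reason parameters 5 and 6 insist on the trailing slash is that the scripts concatenate the directory straight onto the file names. A quick shell illustration of what goes wrong without it (the paths are hypothetical):

```shell
# sh_main.sql appends file names directly onto the directory parameters
dir_ok="/u01/app/oracle/product/11.2.0.4/db_1/demo/schema/sales_history/"
dir_bad="/u01/app/oracle/product/11.2.0.4/db_1/demo/schema/sales_history"

echo "${dir_ok}sh_v3.log"    # well-formed path
echo "${dir_bad}sh_v3.log"   # ...sales_historysh_v3.log - mangled name, one directory up
```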

Error Will Robinson, Error!

I then ran into some errors, but it was pretty easy to view the log and then the script and see why:

SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"

Well, the scripts, (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called by sh_main.sql are referenced through a path prefix that doesn’t exist in the 11g environment, so we need to get rid of it.

View sh_main.sql and you’ll see three paths to update.  Below is an example of one section of the script; the _SUB_CWD_/sales_history/ prefix is the part to remove:

REM Post load operations
REM =======================================================
DEFINE vscript = _SUB_CWD_/sales_history/psh_&vrs

@&vscript

The DEFINE will now look like the following, so it resolves within the sales_history directory you’re running from:

DEFINE vscript = psh_&vrs
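Rather than editing the three DEFINEs by hand, the prefix can be stripped in one pass with sed. The sketch below works on a stand-in file so it’s self-contained; in practice you’d run the sed line against the real sh_main.sql, and since the prefix appears with two spellings in the log and the script (__SUB__CWD__ and _SUB_CWD_), both are covered:

```shell
# Demo on a stand-in file; in practice, point this at
# $ORACLE_HOME/demo/schema/sales_history/sh_main.sql
printf 'DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs\n' > sh_main.sql

cp sh_main.sql sh_main.sql.bak    # keep a copy first, DBA habit

# Strip the substitution prefix from every DEFINE, both spellings
sed -i.orig -e 's|__SUB__CWD__/sales_history/||g' \
            -e 's|_SUB_CWD_/sales_history/||g' sh_main.sql

grep 'DEFINE vscript' sh_main.sql    # now: DEFINE vscript = psh_&vrs
```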

Once you’ve saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before it does the create.  If no other changes need to be made to your parameters, just execute sh_main.sql; if you need to change the parameter values you entered, it’s quickest to exit SQL*Plus and reconnect to unset them.

Are We There Yet?

Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and if all was successful, then connect as the SH user with SQL*Plus and check the schema:

SQL> select object_type, count(*) from user_objects
  2  group by object_type;
OBJECT_TYPE       COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;
TABLE_NAME       SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV         48
CHANNELS                   5
COSTS                      164224
COUNTRIES                  23
CUSTOMERS                  55500
FWEEK_PSCAT_SALES_MV       11266
PRODUCTS                   72
PROMOTIONS                 503
SALES                      1837686
SALES_TRANSACTIONS_EXT     916039
SUPPLEMENTARY_DEMOGRAPHICS 4500
TIMES                      1826

And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data.  I’ll try to blog on Swingbench connections and data collection next time.

See everyone at UTOUG Spring Training Days on Monday, March 13th, and at the end of the week I’ll be in Iceland at SQL Saturday #602!

Posted in Cloud, Oracle

February 20th, 2017 by dbakevlar

We’ve been working hard to create an incredible new trial version of Delphix that runs on AWS and is built with the open source product Terraform.  Terraform is a tool anyone can use to build, version and manage infrastructure effectively and seamlessly across a number of clouds.  We’re currently using it to deploy to AWS, but there’s a bright future for these types of open source products, and I’m really impressed with how easy it has made deploying compute instances, the Delphix Engine and supporting architecture on AWS EC2.  If you’d like to read up on Terraform, check out their website.

The Delphix Admin Console and Faults

After building out the Delphix environment with the Engine and a Linux source and target, the first step for many is to log into the Delphix Admin console.  You can view any faults from the build in the upper right corner under Faults.  One error I’ve noticed coming up after a successful build is the following:

AWS Console to the Rescue

By logging into your AWS EC2 console, you can view the instances being used.  As you’ll note, the error says that the Delphix Engine is using an unsupported instance type, m4.large.  Yet in our EC2 console, we can see from the Delphix Engine, (last in the list, with the name ending in “DE”) that, no, actually it isn’t.

It’s actually an m4.xlarge instance type.  What’s even more interesting is that the Linux Target, (LT) and Linux Source, (LS) are both m4.large instance types, yet no warning was issued for either of these instances as unsupported.

AWS EC2 Supported Instance Types

You can locate the instance types supported for AWS EC2 at the following link.  On that page, we can see that both the m4.large and the m4.xlarge instance types ARE supported.

Having validated that the instance type is supported, I can safely ignore the fault and proceed through the trial without worry.

If you’re planning on deploying a production Delphix Engine on AWS, inspect the following document to ensure you build it with the proper configuration.

Nothing to see here and thought I better let everyone know before someone lumps Amazon with CNN… 🙂

Posted in AWS Trial, Oracle

February 10th, 2017 by dbakevlar

I ran across a 2013 article from Straight Talk on Agile Development by Alex Kuznetsov, and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.

Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as we recognize how similar it is to a standard environment, we stop over-thinking it, and it becomes quite simple to implement agile with even petabytes of data in a relational environment without slow and archaic processes.

The second effect of this is to realize that we may start to acquire secondary responsibilities and take on ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.

A Virtual Can Be Virtualized

Don’t worry-  I’m going to show you that it’s not that difficult, and virtualization makes it really easy to do all of this, especially when you have products like Delphix to support your IT environment.  For our example, we’re going to use our trusty AWS Trial environment, in which we have already provisioned a virtual QA database.  We want to create a copy of our development virtualized web application to test some changes we’ve made, and connect it to this new QA VDB.

From the Delphix Admin Console, go to Dev Copies and expand to view those available.  Click on Employee Web Application, Dev VFiles Running.  Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval.  Click on one and click on Provision.

Now this is where you need the information about your virtual database that you wish to connect to:

  1.  Switch the provision target from the source host to the Linux target.

Currently the default is to connect to the existing development database, but we want to connect to the new QA VDB we wish to test on.  You can ssh as delphix@<ipaddress for linuxtarget> to connect and gather this information.

2.  Gathering Information When You Didn’t Beforehand

I’ve created a new VDB to test against, with the idea, that I wouldn’t want to confiscate an existing VDB from any of my developers or testers.  The new VDB is called EmpQAV1.  Now, if you’re like me, you’re not going to have remembered to grab the info about this new database before you went into the wizard to begin the provisioning.  No big deal, we’ll just log into the target and get it:

[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix  14816     1  0 17:23 ?        00:00:00 ora_pmon_devdb
delphix  14817     1  0 17:23 ?        00:00:00 ora_pmon_qadb
delphix  17832     1  0 17:32 ?        00:00:00 ora_pmon_EmpQAV1
delphix  19935 19888  0 18:02 pts/0    00:00:00 grep pmon

I can now set my ORACLE_SID:

[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1

Now, let’s gather the rest of the information we’ll need to connect to the new database by connecting to the database and gathering what we need.

[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
  Instance "EmpQAV1", status READY, has 1 handler(s) for this service...
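With the SID set and the service confirmed by the listener, the pieces the provisioning wizard will ask for can be assembled into a connect string.  A quick sketch, (the host name, port and service here are the values from this trial environment- substitute your own):

```shell
# Values gathered from the target above; adjust for your environment.
HOST=linuxtarget      # the Linux Target host
PORT=1521             # listener port from the lsnrctl output
SERVICE=EmpQAV1       # service name of the new VDB

# EZConnect-style string to plug into the provisioning wizard:
CONN="${HOST}:${PORT}/${SERVICE}"
echo "$CONN"
```

The same host/port/service trio is what you’ll type into the wizard fields in the next step.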

Provision Your VFile

Fill in all the values required in the next section of the provisioning setup:

Click Next and add any requirements to match the vfile configuration you had for the existing environment.  For this one, there aren’t any, (additional NFS mount points, etc.)  Then click Next and Finish.

The VFile creation should take a couple of minutes at most, and you should then see an environment that looks similar to the following:

This is a fully functional copy of your web application, created from another virtual copy and connected to a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!

Why would you choose to do anything less?

 

 

 

Posted in AWS Trial, Delphix, Oracle

February 8th, 2017 by dbakevlar

There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrives to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped for a successful deployment.

The EC2 Dashboard View

After you’ve selected your location and set up your security user/group and key pairs, there’s a quick way to see, (at least at a high level) if you’re ready to deploy the AWS Trial to the zone in question.

Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:

Notice in the dashboard that the key pairs, (1) and the expected Security Groups, (3) are displayed, which tells us that we’re ready to deploy to this zone.  If we double click on the Key Pair, we’ll see that it matches the one we downloaded locally and will use in our configuration with Terraform:

How Terraform Communicates with AWS

These are essential to deploying in an AWS zone and are configured as part of your .tfvars file for Terraform.  You’ll note in the example below that we’ve designated both the correct zone and the key pair belonging to that zone, which we’ll use to authenticate:


#VERSION=004
#this file should be named terraform.tfvars
# ENTER INPUTS BELOW
access_key="XXXXXXX"
secret_key="XXXXXXXXXX"
aws_region="us-east-1"
your_ip="xxx.xx.xxx.xxx"
key_name="Delphix_east1" #don't include .pem in the key name
instance_name="Delphix_AWS"
community_username="xxx@delphix.com"
community_password="password"

Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration file, (tfvars) that we use with the Delphix deployment via Terraform into AWS.
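For context on how those inputs get consumed- every entry in terraform.tfvars needs a matching variable declaration on the Terraform side, and the AWS provider block is where the credentials and region come together.  A minimal sketch, (variable names assumed to mirror the file above; the trial’s actual .tf files may declare more than this):

```hcl
# Declarations that pair with the entries in terraform.tfvars (sketch only)
variable "access_key" {}
variable "secret_key" {}
variable "aws_region" {}
variable "your_ip" {}
variable "key_name" {}
variable "instance_name" {}

# The AWS provider consumes the credentials and region:
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.aws_region}"
}
```

If the key_name doesn’t match a key pair in the chosen region, the deploy will fail at apply time- which is exactly why the dashboard check above matters.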

 

Posted in AWS Trial, Delphix, Oracle

January 31st, 2017 by dbakevlar

I’ve been at Delphix for just over six months now.  In that time, I worked with a number of great people on initiatives surrounding competitive intelligence, the company roadmap and more.  With the introduction of our new CEO, Chris Cook, new CMO, Michelle Kerr, and other pivotal positions within this growing company, it became apparent that we’d be redirecting our focus toward Delphix’s message and connections within the community.

I was still quite involved in the community, even though my speaking had been trimmed down considerably by the other demands at Delphix.  Even though I wasn’t submitting abstracts to many of the big events as I had in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management and Agile communities, along with a re-introduction into the SQL Server community.

As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, will now be upwards of 80%, as I become the Technical Evangelist for the Office of the CTO at Delphix.  I’m thrilled that I’m going to be speaking, engaging and blogging with the community at a level I’ve never done before.  I’ll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor, and we’re the first members of this new group at Delphix.  I would like to thank all those who supported me in gaining this position, and the management for their vision in seeing the value of those in the community who make technology successful day in and day out.

I’ve always been impressed with organizations that recognize the power of grassroots evangelism in the industry.  What will Adam and I be doing?  Our CEO, Chris Cook, said it best in his announcement:

As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events.  They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed.  They will connect identified trends back into Marketing and Engineering to help shape our message and product direction.  In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings.

As many of you know, I’m persistent but rarely patient, so I’ve already started to fulfill my role- be prepared for some awesome new content, events I’ll be speaking at and new initiatives.  The first on our list was releasing the new Delphix Trial via the Amazon Cloud.  You’ll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you’re brand new to the cloud.  In the upcoming months, watch for new features, stories and platforms that we’ll introduce you to.  This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix with their own AWS account and start to learn the power of Delphix, with use case studies directed to their role in the IT organization.

Posted in AWS Trial, DBA Life, Delphix, Oracle

January 18th, 2017 by dbakevlar

So Brent Ozar’s group of geeks did something that I highly support- a survey of data professionals’ salaries.  Anyone who knows me knows I live by data and I’m all about transparency.  The data from the survey is available for download from the site, and they’re encouraging app developers to download the Excel spreadsheet of the raw data and work with it.

Now I’m a bit busy with work as the Technical Intelligence Manager at Delphix and a little conference that I’m the director for, called RMOUG Training Days, which is less than a month from now, but I couldn’t resist the temptation to load the data into one of my XE databases on a local VM and play with it a bit...just a bit.

It was easy to save the data as a CSV and use SQL*Loader to load it into Oracle XE.  I could have used BCP to load it into SQL Server, too, (I know, I’m old school) but I had a quick VM with XE on it, so I just grabbed that to give me a database to query from.  I did edit the CSV, removing both the “looking” column and the headers.  If you choose to keep them, make sure you add the column back into the control file and change “options ( skip=0 )” to “options ( skip=1 )” so the column headers aren’t loaded as a row in the table.

The control file to load the data has the following syntax:

--Control file for data --
options ( skip=0 )
load data
 infile 'salary.csv'
 into table salary_base
fields terminated by ','
optionally enclosed by '"'
 (TIMEDT DATE "MM-DD-YYYY HH24:MI:SS"
 , SALARYUSD
 , COUNTRY
 , PRIMARYDB
 , YEARSWDB
 , OTHERDB
 , EMPSTATUS
 , JOBTITLE
 , SUPERVISE
 , YEARSONJOB
 , TEAMCNT
 , DBSERVERS
 , EDUCATION
 , TECHDEGREE
 , CERTIFICATIONS
 , HOURSWEEKLY
 , DAYSTELECOMMUTE
 , EMPLOYMENTSECTOR)

and the table creation is the following:

create table SALARY_BASE(TIMEDT TIMESTAMP not null,
SALARYUSD NUMBER not null,
COUNTRY VARCHAR(40),
PRIMARYDB VARCHAR(35),
YEARSWDB NUMBER,
OTHERDB VARCHAR(150),
EMPSTATUS VARCHAR(100),
JOBTITLE VARCHAR(70),
SUPERVISE VARCHAR(80),
YEARSONJOB NUMBER,
TEAMCNT VARCHAR(15),
DBSERVERS VARCHAR(50),
EDUCATION VARCHAR(50),
TECHDEGREE VARCHAR(75),
CERTIFICATIONS VARCHAR(40),
HOURSWEEKLY NUMBER,
DAYSTELECOMMUTE VARCHAR(40),
EMPLOYMENTSECTOR VARCHAR(35));
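With the table created and the control file saved, (I’m assuming a filename of salary.ctl here- use whatever you named yours, and your own credentials) the load itself is a single sqlldr command.  Building it as a string first, just to show the shape of it:

```shell
# Hypothetical sqlldr invocation -- substitute your own credentials and paths.
# Assumes 11g.env (or your XE equivalent) has been sourced so sqlldr is on the PATH.
CTL=salary.ctl
LOG=salary.log
CMD="sqlldr userid=system/manager control=${CTL} log=${LOG}"
echo "$CMD"   # run this against the XE instance, then check the log for rejects
```

The log file is worth a glance afterward- date-format mismatches against the TIMEDT mask show up there as rejected rows.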

I used Excel to create some simple graphs from my results and queried the data from SQL Developer, (Jeff would be so proud of me for not using the command line… :))

Here’s what I queried and found interesting in the results.

We Are What We Eat, err Work On

The database flavors we work on may be a bit more diverse than most assume.  Now this one was actually difficult, as the field could be freely typed into and there were some misspellings, combinations of capital and small letters, etc.  The person who wrote “postgress”, yeah, we’ll talk… 🙂

The data was still heavily skewed toward the MSSQL crowd.  Over 2700 respondents were SQL Server, and only 169 listed their primary database platform as something else- among those, Oracle was the majority:
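A query along these lines produces that breakdown, (this is my reconstruction, not the exact query I ran- the UPPER/TRIM is one way to tame the free-form spellings mentioned above):

```sql
-- Platform counts and averages, normalizing case and stray whitespace
SELECT UPPER(TRIM(PRIMARYDB)) AS platform,
       COUNT(*)               AS respondents,
       ROUND(AVG(SALARYUSD))  AS avg_salary_usd
  FROM SALARY_BASE
 GROUP BY UPPER(TRIM(PRIMARYDB))
 ORDER BY respondents DESC;
```

Even normalized, a few variants like “postgress” survive, so a CASE or REPLACE cleanup pass would be needed for a truly tidy chart.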

You Get What You Pay For

Now the important stuff for a lot of people is the actual salary.  Many folks think that Oracle DBAs make a lot more than those who specialize in SQL Server, but I haven’t found that to be true, and as this survey demonstrated, the averages were pretty close here, too.  No matter if you’re Oracle or SQL Server, we ain’t making as much as that Amazon DBA…:)

Newbies, Unite

Many of those who filled out the survey haven’t been in the field that long, (less than five years).  There’s still a considerable number of folks who’ve been in the industry since its inception.

We’ve Found Our Place

Of the 30% of us who don’t have degrees in our chosen field, most stopped after getting a bachelor’s to find our path in life:

There’s still a few of us, (just under 200) out there who accumulated a lot of school loans getting a master’s or a Doctorate/PhD before figuring out that tech was the place to be…:)

Location, Location, Location

The last quick look I did was to see, by country, the top and bottom average salaries for DBAs-

 

Not too bad, Switzerland and Denmark… 🙂

Data Is King

I wish there’d been more respondents to the survey, but I’m very happy with the data that was provided.  I’m considering doing a survey of my own, just to get more people from the Oracle side, but until then, here’s a little something to think about as we prep for the new year and another awesome year in the database industry!

 

 

Posted in DBA Life, Oracle

January 13th, 2017 by dbakevlar

I receive about 20-30 messages a week from women in the industry.  I take my role as a role model for women in technology in the Oracle community quite seriously, and I’ve somehow ended up speaking on the topic a number of times, upon request from different groups.


Although it’s not the first time the topic has come up, I was asked last week for some recommendations regarding Oracle’s CEO, Safra Catz, and her opportunity to be on President-Elect Trump’s transition team.

I wanted to ask your opinion about Safra not taking a leave of absence to help with Trump’s transition team? I think she should take a leave and as one of the top women in IT I think it shows poor judgment. Could the WITs write her a letter? Thoughts?

After some deep thought, I decided the topic required a good, solid answer and a broader audience.  As with anything involving the topic of WIT, the name of the source who asked the question doesn’t matter, and anyone who asks you to give names isn’t really interested in education, but persecution.

It took me some time to think through the complexities of the situation.  Everyone will have some natural biases when a topic bridges so many uncomfortable areas of discussion:

  • Women’s Roles
  • Politics
  • Previous Employer

After putting my own biases aside and thinking through the why and what, here are my thoughts-

No, I don’t think Safra should take a leave of absence.  We have so few women in C-level positions- as of April 2016, only 4% of Fortune 500 CEOs were women, (Safra being one of them.)  I have a difficult time believing we’d ask most men to give up the opportunity to be on a presidential transition team or take a leave of absence.  Some of the most challenging and difficult times in our careers are also the most rewarding, and this may be one of those times in Safra’s life.  Anyone who’s friends with me, especially on Facebook, knows I’m not a big fan of Donald Trump, but in no way should we ask Safra not to try to be part of the solution.

No, I don’t think Safra should refrain from being on the transition team.  As much as we discuss the challenges of getting more women into technology, it’s an even larger challenge in politics.  Women hold less than 25% of the seats in Congress and even fewer at local government levels.  We are over 50% of the workforce and 50% of the US population.  How can we ever have our voices heard if we aren’t participating in our own government?  Having more representation is important, not less- and not because my politics don’t mesh with hers.

So what should the discussion really be about if we don’t want Safra to take a leave of absence or remove herself from the transition team?

  1. We want to know that there are clear policies in place to deter conflicts of interest.  We need to know that if improprieties do occur, accountability will result.
  2. We need to not limit Safra in opportunities or over-scrutinize her the way we do so many women who don’t fit inside the nice, neat little expectations society still has of them.
  3. We shouldn’t hold Safra accountable for what Donald Trump represents, his actions or if we don’t agree with his politics.

We also need to discuss what is really bothering many when a woman or person of color enters the lion’s den, aka a situation that is clearly not very welcoming to us due to gender, race or orientation.  It can bring out feelings of betrayal, and concerns that the individual is “working for the enemy.”  We want to know that Safra will stand up for our rights as the under-represented.  We want to know that she would tell Donald that she doesn’t condone his behavior or actions towards women, race and culture.

One of the biggest challenges I had to overcome when I started my career was recognizing that every individual has their own path in this world.  Their path may be very different from mine, but through change comes growth, and expecting someone to do what may not be in their capabilities can be just as limiting as not letting them do what they do best.  Asking this of Safra wouldn’t be allowing her to do what she does best.

I’ve never viewed Safra as a role model when it comes to the protection and advancement of women’s roles in technology or our world.  She’s never historically represented this, any more than those expecting it from Marissa Mayer.  It’s just not part of their unique paths, no matter how much the media likes to quote either of them, (especially Marissa, which consistently makes me cringe.)  It doesn’t mean they aren’t capable of accomplishing great feats- just not feats in the battle for equality.  It also doesn’t mean they aren’t a source of representation.  The more women in the space, the better.  That’s how we overcome some of the bias we face.

Regarding those who do support women in more ways than just adding to the overall count of women in technology and politics, I’d rather put my time into Sheryl Sandberg, Grace Hopper, Meg Whitman and others who have the passion to head up equality issues.  I both welcome and am thankful for the discussion surrounding writing the letter, and I applaud the woman who asked me about the topic- it’s a difficult one.

For those of you who are still learning about why equality is so important, here’s a few historical references of great women who’ve advanced our rights.  We wouldn’t be where we are today without them.

Thank you to everyone for the great beginning to 2017 and thank you for continuing to trust me to lead so many of these initiatives.  I hope I can continue to educate and help the women in our technical community prosper.

 

Posted in Oracle, WIT

January 4th, 2017 by dbakevlar

How was 2016 for me?

It was a surprisingly busy year-  blogging, speaking, working and doing.

I posted just under 100 posts to my blog this year.  After I changed jobs, the “3 per week” quickly declined to “4 per month” once I was inundated with new challenges and the Delphix learning curve.  That will change for 2017, along with some new initiatives that are in the works, so stay tuned.

For 2016, the most popular posts and pages on my website followed a similar trend to last year’s.  My emulator post for RPI is still a popular item, and I receive almost as many questions on RPI as I do on WIT- Raspberry Pi is everywhere, and you’ll see regained momentum from me with some smart home upgrades.

My readers for 2016 came from just about every country.  There were only a few that weren’t represented, but the largest numbers were from the expected:

I also write from time to time on LinkedIn.  LinkedIn has become the home for my Women in Technology posts, and it’s led me to receive around 30 messages a week from women looking for guidance, sharing their stories or just reaching out.  I appreciate the support and the value it’s provided to those in the industry.

RMOUG Conference Director- No Escape!

The 2016 conference was a great success for RMOUG, and much of it was due to budget cuts and changes that were made as we went along and addressed trends.  I’ve been collecting the data from evaluations, and it really does show why companies are so interested in the value their data can provide.  I use what I gather each year to make intelligent decisions about where RMOUG should take the conference-  what works, what doesn’t, and when someone throws an idea out there, you can either decide to look into it or have the data to prove you shouldn’t allocate resources to the endeavor.

A New Job

I wasn’t into the Oracle cloud like a lot of other folks.  It just wasn’t that interesting to me, and I felt that Oracle, with as much as they were putting into their cloud investment, deserved someone who was behind it.  I’d come to Oracle to learn everything I could about Oracle and Enterprise Manager, and as an on-premise solution, Enterprise Manager wasn’t in the company focus.  When Kyle and I spoke about an opportunity to step into a revamped version of his position at Delphix, a company that I knew a great deal about and admired, it was a no-brainer.  I started with this great little company in June, and there are some invigorating initiatives that I look forward to becoming part of in 2017!

2 Awards

In February, I was awarded RMOUG’s Lifetime Achievement award.  I kind of thought this would mean I could ride off into the sunset as the conference director, but as my position at Oracle ended, (it had been a significant fight to keep me managing the conference as an Oracle employee, transitioning me to a non-voting member to keep within the by-laws) not many were surprised to see me take on a sixth year of managing the conference.

In April I was humbled to receive the Ken Jacobs award from IOUG.  This is an award I’m very proud of, as only Oracle employees are eligible, and I was awarded it within just the two years I was employed at the red O.

3 Makers Events

I haven’t had much time for my Raspberry Pi projects over the last number of months, but it doesn’t mean I don’t still love them.  I gained some recognition as 2nd ranking in the world for RPI Klout score back in July, which took me by surprise.  I love adding a lot of IoT stories to my content, and it had caught the attention of the social media engines.  Reading and content are one thing, but it was even more important to do- I had a blast being part of the impressive Colorado Maker Faire at the Denver Museum of Nature and Science earlier in 2016.  I was also part of two smaller maker faires in Colorado, allowing me to discuss inexpensive opportunities for STEM education in schools using Raspberry Pis, Python coding and 4M kits.

Speaking Engagements

Even though I took a number of months off to focus on Delphix initiatives, I still spoke at 12 events and organized two, (Training Days and RMOUG’s QEW.)

February:  RMOUG– Denver, CO, (Director and speaker)

March: HotSos– Dallas, TX, (Keynote)

April: IOUG Collaborate– Las Vegas, NV

May: GLOC– Cleveland, OH, NoCOUG– San Jose, CA

June: KSCOPE– Chicago, IL

July: RMOUG Quarterly Education Workshop– Denver, CO, (Organizer)

September: Oracle Open World/Oak Table World– San Francisco, CA

October: UNYOUG– Buffalo, NY, Rocky Mountain DataCon & Denver Testing Summit–  Denver, CO

November: MOUS– Detroit, MI, (Keynote), ECO– Raleigh, NC

New Meetup Initiatives and Growth

I took over the Denver/Boulder Girl Geek Dinners meetup last April.  The community had almost 650 members at the time, and although it wasn’t as big as Girl Develop It or Women Who Code, I was adamant about keeping it alive.  Come the new year, and thanks to some fantastic co-organizers assisting me, (along with community events in the technical arena) we’re now on our way to 1100 members for the Denver/Boulder area.

The Coming Year

I’m pretty much bursting with anticipation over all that is on my plate for the coming year.  The right-hand sidebar is a clear indication that I’ll be speaking more, meaning more travel and a lot of new content.  With some awesome new opportunities from Delphix and the organizations I’m part of, I look forward to a great 2017!

 

Posted in DBA Life, Oracle
