Category: Oracle

March 30th, 2017 by dbakevlar

Every year I make the trek to Vegas for the large Collaborate conference and 2017 is no different!

It’s a busy schedule for me at Delphix, so buckle up and hang on for this next week’s events!

Sunday, April 2nd, 9am-1pm

Pre-Conference Hands on Lab

Location: South Seas C  Session# 666

Registration is required for this special conference event, and it gives registered attendees a chance to get hands-on experience with database virtualization.  This is an important skill set for database administrators, who will discover how easy it is to migrate to the cloud with virtualized environments, and for developers and testers, who will discover it’s how to get those pesky, monstrous databases moving at the same speed as DevOps.

Monday, April 3rd, 1:30-2:30pm

Database Patching and Upgrading Using Virtualization, (IOUGenius Session)

Location:  Jasmine C  Session# 149

Tuesday, April 4th, ALL DAY– OK, for the entire time the exhibitor area is up! 🙂

Delphix Booth, doing demos and running amok!  

Location: Booth 1210

Wednesday, April 5th, 9:45-10:45am

Virtualization 101 and Q&A

Location: Jasmine C  Session# 201

Wednesday, April 5th, ALL DAY

Oak Table World

Location: South Seas C

1:15 Session from me!

Oracle vs. SQL Server-  Page Splits, Leaf Blocks and Other Wars

If you haven’t noticed, when you come to an Oak Table World session, (and really, why would you miss any of the phenomenal Oakies speaking??) you can put your name in for a raffle for a (difficult to find) NES system.  Yeah, entertain me-  it took me a bit to track this game console down for the event!

For a complete list of Delphix sessions, see the schedule below:

9:00 AM – 1:00 PM – South Seas C
Data Virtualization Hands-On Lab (pre-registration required)
Presenters: Kellyn Pot’Vin-Gorman and Leighton Nelson

1:30 PM – 2:30 PM – Jasmine C
Database Patching and Upgrading Using Virtualization
Presenter: Kellyn Pot’Vin-Gorman

9:30 AM – 4:15 PM – Exhibitor Showcase

4:15 PM – 5:15 PM – Palm D
Linux/UNIX Tools for Oracle DBAs
Presenter: Tim Gorman

9:45 AM – 10:45 AM – Jasmine C
Virtualization 101 and Q&A
Presenter: Kellyn Pot’Vin-Gorman

5:15 PM – 8:00 PM – Exhibitor Showcase

2:45 PM – 3:45 PM – Banyan E
OAUG Archive & Purge SIG Session
Presenter: Brian Bent

10:45 AM – 4:15 PM – Exhibitor Showcase

8:30 AM – 9:30 AM – Palm A
Migration Enterprise Applications to the Cloud
Presenter: Leighton Nelson

5:15 PM – 7:00 PM – Exhibitor Showcase

See you next week in Vegas!!

 

Posted in Oracle

March 30th, 2017 by dbakevlar

Azure is the second most popular cloud platform to date, so it’s naturally the second platform Delphix will support on our road to the cloud.  As I start to work through the options for deploying Delphix, there are complexities in Azure I need to educate myself on.  As we’re just starting out, there’s a lot to learn and a lot of automation we can take advantage of.  It’s an excellent time for me to get up to speed with this cloud platform, so hopefully everyone will learn right along with me!

We’ll be using Terraform to deploy to Azure, just as we prefer to use it for our AWS deployments.  It’s open source, very robust and has significant support in the community, so we’ll switch from cloud setup to Terraform prep in many of these posts.  Before we can do that, we need to set up our Azure environment after we’ve registered our subscription with Azure.

Azure Consoles

Azure has a New console and a Classic console, but the modern New console also contains items marked “Classic” that aren’t part of the actual Classic console.  I found this a bit confusing, so it’s good to make the distinction.

Azure’s “New” portal, with its modern, sleek design

Azure’s “Classic” management interface, with its pale blue and white color scheme, which still serves a very significant purpose

Once you’ve created your Azure account, you’ll find that you need access to the Classic console to perform many of the basic setup tasks, where the Modern console is better for advanced administration.

Preparation is Key

There are a number of steps you’ll need to perform in preparation for deploying Delphix to Azure.  The Delphix Engine, a source and a target are our goal, so we’ll start simple and work our way out.  Let’s see how much I can figure out and how much I may need to lean on others more experienced to get me through.  No matter what, you’ll need both consoles, so keep the links above handy; I’ll refer to the consoles as “New” and “Classic” to help distinguish them as I go along.  Know that in this post, we’ll spend most of our time in the Classic console.

Set up an Account and Add Web App

If you don’t already have one, Microsoft will let you set up an account and even give you $200 in free credits to use.  Once you sign up, then you need to know where to go next.  This is where the “Classic” console comes in, as you need to set up your “application” that will be used for your deployment.

Log into the “Classic” console and click on Active Directory and the Default Directory highlighted in blue.  This will open a new page and you will have the opportunity to click Add at the bottom of the page to add a new Active Directory application.

  • Name the Application, (open up a text editor, copy and paste the name of the app into it, you’ll need this data later)
  • The application type is web app or API
  • Enter a URL/URI and yes, they can be made up.  They don’t have to be real.

Client and Client Secret

Now that your application is created, you’ll see a tab called Configure.  Click on this tab and you’ll see the client ID displayed.  Copy the Client ID and add that to your text editor, also for later.

Scroll down and you’ll see a section called Keys.  Click on the button that says “Select Duration” and choose 1 or 2 years.  At the very bottom of the screen, you’ll see a Save button, click it and then the Secret passcode will be displayed for you to copy and paste into your text editor.  Do this now, as you won’t be able to get to it later.

Tenant ID

To the left of the Save button, you’ll see “View Endpoints”.  Click on this and you’ll see a number of entries.  The tenant ID is the value repeated at the end of each of the endpoint entries.  An example is shown below:

Copy and paste this into your text editor under a heading of tenant ID.

Add Application to the Active Directory

Now that you’ve created this framework, you need to grant permissions to use it all.  In the Configure tab, scroll to the very bottom where it says “Permissions to Other Applications” and click on Add Application.  From the list, (if you have a new account, you won’t have much to choose from) choose the Azure Service Management API and click on the checkmark in the lower right corner of the pane.  This will return you to the previous page.  Click on the delegated permissions, choose to grant it Access Azure Service Management as organization, and then save.

Subscription Data

Now, log into the New portal and click on Subscriptions on the left hand side.  Click on the Subscription and it will open up to display your Subscription ID, which you’ll need to copy and paste into your text editor.

Click on Access Control, (IAM) and click on Add.  You may only see your username at first, but the applications are there-  they just aren’t displayed by default.  Type in the application name you saved to your text editor, (for example, mine is Web_K_Terra.)  Reminder-  you must type the name of your app exactly as you created it, (it’s case-sensitive.)  Grant the reader and contributor roles from the role list, saving between each additional role.

You should now see your user in the list with both roles assigned to it like the example below for Web_K_Terra app:

Our configuration is complete and ready to go onto the networking piece.

The first part of my Terraform template is ready, too.  All the pertinent data from my build out has been added to it, and it looks something like the following:

# the provider block names the Terraform plugin, (azurerm) not your AD application
provider "azurerm" {
  subscription_id = "gxxxxxxx-db34-4gi7-xxxxx-9k31xxxxxxxxxp2"
  client_id       = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
  client_secret   = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="
  tenant_id       = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
}
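With that saved to a .tf file, it’s worth a quick sanity check that the credentials authenticate before building anything.  A minimal sketch, assuming Terraform is installed locally and you run it from the template’s directory:

# Initialize the working directory, (newer Terraform releases fetch the azurerm plugin here)
terraform init

# Dry run only-  authenticates with the IDs above and reports what would be built
terraform plan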

This is a great start to getting us out on Azure.  In part II, we’ll talk about setting up connectivity between your desktop and Azure for remote access, along with recommendations for tools to access it locally.

 

Posted in Azure, Oracle

March 21st, 2017 by dbakevlar

I ended up speaking at two events this last week.  Now if timezones and flights weren’t enough to confuse someone, I was speaking at both an Oracle AND a SQL Server event- yeah, that’s how I roll these days.

Utah Oracle User Group, (UTOUG)

I arrived last Sunday in Salt Lake City, which is like a slightly milder, more conservative version of Colorado, to speak at UTOUG’s Spring Training Days conference.  I love this location, and the weather was remarkable; even with the warm temps, skiing was still only a half-hour drive from the city.  Many of the speakers and attendees took advantage of the opportunity by doing just that while visiting.  I chose to hang out with Michelle Kolbe and Lori Lorusso.  I had a great time at the event, and although I was only onsite for 48 hours, I really like having this event so close to my home state.

I presented on Virtualization 101 for DBAs, and it was a well-attended session.  I really loved how many questions I received and how curious the database community has become about virtualization as the key to moving to the cloud seamlessly.

There are significant take-aways from UTOUG.  The user group, although small, is well cared for and the event is using some of the best tools to ensure that they get the best bang for the buck.  It’s well organized and I applaud all that Michelle does to keep everyone engaged.  It’s not an easy endeavor, yet she takes this challenge on with gusto and with much success.

SQL Saturday Iceland

After spending Wednesday at home, I was back at the airport to head to Reykjavik, Iceland for their SQL Saturday.  I’ve visited Iceland a couple times now, and if you aren’t aware of this, IcelandAir offers up to 7 day layovers in Iceland before you continue on to your final destination.  Tim and I took advantage of this perk on one of our trips to OUGN, (Norway) and it was a great way to visit some of this incredible country.  When the notification arrived for SQL Saturday Iceland, I promptly submitted my abstracts and crossed my fingers.  Lucky for me, they accepted one, and I was offered the chance to speak with this great SQL Server user group.

After arriving before 7am on Friday morning at Keflavik airport, I realized that I wouldn’t have a hotel room ready for me, no matter how much I wanted to sleep.  Luckily, there’s a great article on the “I Love Reykjavik” site offering inside info on what to do if you show up early.  I was able to use the FlyBus to get a shuttle directly to and from my hotel, (all you have to do is ask the front desk to call them the night before you leave and they’ll pick you back up in front of your hotel 3 hrs before your flight.)  Once I arrived, I checked my bags with the front desk and headed out into town.

I stayed at Hlemmur Square, which was central to the town and the event and next to almost all of the buses throughout the city.  The main street in front of it, Laugavegur, is one of the main streets running east-west and is very walkable.  Right across this street from the hotel was a very “memorable” museum, the Phallological Museum.  I’m not going to link to it or post any pictures, but if you’re curious, I’ll warn you, it’s NSFW, even if it’s very, uhm…educational.  It was recommended by a few folks on Twitter and it did ensure I stayed awake after only two hours of sleep in the last 24!

As I wandered about town, I noted a few things about Iceland-  the graffiti murals are really awesome, and Icelandic folks like good quality products-  the stores housed local and international goods often made from wool, wood, quality metal and such.  The city’s parliament building is easily accessible, right across from the main shopping area and new city development.

On Saturday, I was quick to arrive at Iceland’s SQL Saturday, as I had a full list of sessions I wanted to attend.  I was starting to feel the effects of Iceland weather on my joints, but I was going to make sure I got the most out of the event.  I had connected with a couple of the speakers at the dinner the night before, but with jet lag, you hope you’ll make a better impression on the day of the event.

I had the opportunity to learn about the most common challenges with SQL Server 2016, and that Dynamic Data Masking isn’t an enterprise solution.  Between lacking discovery tools, the ability to join masked objects to non-masked ones, and common values, (i.e. if 80% of the data is local, the most common location value is easily identified) the confidential data behind masked objects can still be identified.

I also enjoyed an introduction to containers with SQL Server and security challenges.  The opening slide from Andy says it all:

Makes you proud to be an American, doesn’t it? 🙂

My session was in the afternoon, and we not only had excellent discussions on how to empower database environments with virtualization, but I even did a few quick demonstrations of the ease of cloud management with AWS and Oracle…yes, to SQL Server DBAs.  It was interesting for them to see how easily Oracle could be managed through the interface.  I performed all validations of the data refreshes from the command line, so there was no doubt that I was working in Oracle, yet the refreshes and such were done in AWS with the Delphix Admin console.

I made it through the last session on the introduction to containers with SQL Server, which included a really interesting demonstration of a SQL Server container sans an OS installation, allowing it to run with very limited resource requirements on a Mac.  After this session was over, I was thankful that two of my fellow presenters were willing to drop me off at my hotel and I promptly collapsed in slumber, ready to return home.  I was sorry to miss out on the after event dinner and drinks, but learned that although I love Iceland, a few days and some extra recovery time may be required.

Thank you to everyone at Utah Oracle User Group and Iceland’s SQL Server User Group for having me as a guest at your wonderful events.  If you need me, I’ll be taking a nap… 🙂

 

Posted in DBA Life, Oracle, SQLServer

March 20th, 2017 by dbakevlar

Now that I’ve loaded a ton of transactions and run a significant workload on my source database with the SH sample schema and Swingbench, (noting how little impact the different cloud tools had on the databases, which will come up in a few later posts) I’m going to show you how easy it is to create a new VDB from all of this, WITH the new SH data included.  During all of this time, the primary users of my Delphix VDBs, (virtualized databases) would have been working in the previous image, but someone wants that new SH schema now that my testing has completed.

To do this, I open up my Delphix admin console, (using the IP address for the Delphix Engine from the AWS Trail build output), log in as delphix_admin and open up the Source group to access the Employee Oracle 11g database, (aka ORCL.)

I know my new load is complete on the ORCL database, and I need to take a new snapshot to update the Delphix Engine outside of the standard refresh interval, (I’m set at the default of every 24 hrs.)  To do this, I click on the Configuration tab, and to take the snapshot, I simply click on the camera icon.

A snapshot takes a couple seconds, as this is a very, very small database, (2.3G) and then you can click on Timeflow to view the new snapshot available for use.  Ensure the new snapshot is chosen by moving the slider all the way to the right and checking the timestamp, making sure it’s the latest, matching your recent one.

Click on Provision and it will default to the Source host; change it to the target, update to a new, preferred database name, (if you don’t like the default) and then you may have to scroll down to see the Next button to go through the subsequent steps in the wizard.  I know on my Macbook’s smaller screen, I do have to scroll to see the Next button.  After you’ve made any other changes, click on Finish and let the job run.  Don’t be surprised by the speed with which the VDB is provisioned-  I know it’s really fast, but it really did create a whole new VDB!

Now that we have it, let’s connect to it from SQL*Plus and prove that we got the new SH schema over.

Using the IP address for the Linux target that was given to us in our AWS Trial build, let’s connect:

ssh delphix@<linuxtarget IP Address>

Did you really just create a whole new VDB?

[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix   1148  1131  0 18:57 pts/0    00:00:00 grep pmon
delphix  16825     1  0 Mar09 ?        00:00:06 ora_pmon_devdb
delphix  16848     1  0 Mar09 ?        00:00:06 ora_pmon_qadb
delphix  31479     1  0 18:30 ?        00:00:00 ora_pmon_VEmp6C0

Yep, there it is…

Now let’s connect to it.

Set our environment:

. 11g.env

Set the ORACLE_SID to the new VDB

export ORACLE_SID=VEmp6C0

Connect to SQL*Plus as our SH user using the password used in our creation on the source database, ORCL:

$ sqlplus sh

Enter password: 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE         COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16

8 rows selected.

SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%'
  3  group by table_name
  4  order by table_name;

TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826

12 rows selected.

Well, lookie there, the same as the source database we loaded earlier... 🙂  Next, we’ll go into the stuff that always gets my interest- performance data, the cloud and visualization tools.

 

Posted in AWS Trial, Oracle

March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I loaded into my Oracle source database, and as I hadn’t used Swingbench outside of the command line in quite a while, (my databases seem to always come with a load on them!) it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my Macbook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a lot simpler a change than most DBAs first consider.

When first launched, Swingbench will prompt you to choose which pre-configured workload you’d like to utilize.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then performed a few simple configurations to get it to run.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<services name>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>

I chose the default of 16 connections to start out, but you can add more if you’d like.  You can also configure stats collection, as well as snapshot collection before and after the workload.

I set autoconnect to true, but by default the load won’t start until you hit the green arrow button.  Swingbench will then execute the workload with the number of connections requested until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window:
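If you’d rather skip the UI entirely, Swingbench also ships with a command line front end, charbench.  A rough sketch, where the config file name, password, IP and service name are placeholders for your own values:

# Run the Sales History workload headless: 16 sessions for 10 minutes
./charbench -c ../configs/<sales history config>.xml \
  -u sh -p <sh password> \
  -cs //<AWS instance IP>:1521/<service name> \
  -uc 16 -rt 0:10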

Next post we’ll discuss what you’ll see when running a Swingbench on a source database, the Delphix Engine host and subsequently refreshes to a VDB, (virtual database.)  We’ll also discuss other tools that can grant you visibility to optimization opportunities in the cloud.

 

 

Posted in AWS Trial, Cloud, Oracle

March 10th, 2017 by dbakevlar

Most people know I like to do things the hard way… 🙂

When it comes to learning things, there’s often no better way to gain a strong understanding than to take something apart and put it back together.  I wanted to use Swingbench on my AWS deployment and needed the Sales History, (SH) sample schema.  I don’t have an interface to perform the installation via the configuration manager, so I was going to install it manually.

Surprise, the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing.  There are a couple options to solve this dilemma.  Mine was to first get the sample schemas.  You can retrieve them from a handy GitHub repository found here, maintained by Gerald Venzl.

I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.

scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.

Next, I extracted the files to the $ORACLE_HOME/demo directory.

unzip db-sample-schemas-master.zip

Now the unzip will name the directory db-sample-schemas-master, which is fine with me, as I like to retain the previous one, (I’m a DBA-  keeping copies of data until I’m sure I don’t need them is my life.)

mv schema schema_kp
mv db-sample-schemas-master schema

With that change, everything is now as it should be.  One thing you’ll find out is that the download is for 12c, and I’m alright with this, as the Swingbench build I’m using is expecting the 12c SH schema, too.  Not that I expect any differences, but as Jeff Smith was all too happy to remind me on Facebook, I’m using a decade-old version of Oracle on my image here.

There are a lot of scripts in the Sales_History folder, but all you’ll need to run is the sh_main.sql from SQL*Plus as sysdba to create the SH schema.

There are parameter values you’ll enter to create the SH schema manually, and you may assume they mean something different than the prompt terms suggest.  As I’ve seen very little written on them, (even after all these years of these scripts existing) this may help others out:

specify password for SH as parameter 1:

Self-explanatory-  the password you’d like the SH user to have.

specify default tablespace for SH as parameter 2:

What tablespace do you want this created in?  I chose Users, as this is just my play database.

specify temporary tablespace for SH as parameter 3:

Temp was mine and is the common value for this prompt.

specify password for SYS as parameter 4:

This is the password for SYSTEM, not SYS, btw.

specify directory path for the data files as parameter 5:  

These are not Oracle datafiles; this is the path to your SH directory, ($ORACLE_HOME/demo/schema/sales_history/) for access to the control files and dat files for SQL*Loader.  Remember to have a slash at the end of the path name.

writeable directory path for the log files as parameter 6:

A directory for log files-  I put this in the same directory.  Remember to use a slash at the end here, too, or your log files will have the previous directory name prepended to the file name and save one directory up.

specify version as parameter 7:

This isn’t the version of the database, but the version of the sample schema-  the one from Github is “v3”.

specify connect string as parameter 8:

Pretty clear-  the service name or connect string for the database the schema is being created in.
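Pulling that all together, here’s a hypothetical run, with every answer below made up for illustration-  adjust each value to your own environment:

cd $ORACLE_HOME/demo/schema/sales_history
sqlplus / as sysdba @sh_main.sql

# Answers typed at the eight prompts, in order:
#  1. <SH user's password>
#  2. USERS
#  3. TEMP
#  4. <SYSTEM password>
#  5. $ORACLE_HOME/demo/schema/sales_history/   (trailing slash!)
#  6. $ORACLE_HOME/demo/schema/sales_history/   (trailing slash!)
#  7. v3
#  8. <connect string, e.g. orcl>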

Error Will Robinson, Error!

I then ran into some errors, but it was pretty easy to view the log and then the script and see why:

SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"

Well, the scripts, (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called by sh_main.sql are referenced through a __SUB__CWD__/sales_history/ prefix that doesn’t exist in the 11g environment, so we need to get rid of it.

View sh_main.sql and you’ll see three paths to update.  Below is an example of one section of the script; the __SUB__CWD__/sales_history/ prefix is the part to remove:

REM Post load operations
REM =======================================================
DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs

@&vscript

The DEFINE will now look like the following, so the script is found in the sales_history directory you’re running from:

DEFINE vscript = psh_&vrs
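If you’d rather script the change than edit the three DEFINEs by hand, a one-liner like this should do it, (a sketch-  verify the literal prefix against your copy of sh_main.sql first):

# Keep a .bak copy, then strip the bogus prefix from all three vscript DEFINEs
sed -i.bak 's|__SUB__CWD__/sales_history/||g' sh_main.sql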

Once you’ve saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before it creates it.  If no changes need to be made to your parameter values, just execute sh_main.sql again; if you do need to change them, it’s quickest to exit SQL*Plus and reconnect to unset the values.

Are We There Yet?

Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and if all was successful, then connect as the SH user with SQL*Plus and check the schema:

SQL> select object_type, count(*) from user_objects
  2  group by object_type;
OBJECT_TYPE       COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;
TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826

And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data.  I’ll try to blog on Swingbench connections and data collection next time.

See everyone at UTOUG Spring Training Days on Monday, March 13th, and at the end of the week I’ll be in Iceland for SQL Saturday #602!

Posted in Cloud, Oracle

February 20th, 2017 by dbakevlar

We’ve been working hard to create an incredible new trial version of Delphix that runs on AWS and is built with the open source product Terraform.  Terraform is a tool that anyone can use to build, version and manage a product effectively and seamlessly in a number of clouds.  We are currently using it to deploy to AWS, but there’s a bright future for these types of open source products, and I’m really impressed with how easy it’s made deploying compute instances, the Delphix Engine and supporting architecture on AWS EC2.  If you’d like to read up on Terraform, check out their website.

The Delphix Admin Console and Faults

After building out the Delphix environment with the Engine and a Linux source/target, the first step for many is to log into the Delphix Admin console.  You can view any faults from the build in the upper right corner under Faults.  One error that I’ve noticed comes up after a successful build is the following:

AWS Console to the Rescue

By logging into your AWS EC2 console, you can view the instances that are being used.  As you’ll note, the error says that the Delphix Engine is using an unsupported instance type, m4.large.  Yet in our EC2 console, we can see from the Delphix Engine, (last in the list, with the name ending in “DE”) that no, actually, it isn’t.

It’s actually an m4.xlarge instance type.  What’s even more interesting is that the Linux Target, (LT) and Linux Source, (LS) are both m4.large instance types, yet no warning was issued for either of those instances as unsupported.
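If you prefer the command line over the console, the AWS CLI reports the same thing.  A sketch, assuming the CLI is configured for your region and you substitute the engine’s actual instance ID:

# Print the instance type of the Delphix Engine instance
aws ec2 describe-instances \
  --instance-ids <engine instance id> \
  --query 'Reservations[].Instances[].InstanceType' \
  --output text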

AWS EC2 Supported Instance Types

You can see which instance types are supported for AWS EC2 at the following link.  On that page, we can see that both the m4.large and the m4.xlarge instance types ARE SUPPORTED.

Having validated that the instance type is supported, I can safely ignore the fault and proceed to work through the trial without worry.

If you’re planning on deploying a production Delphix Engine on AWS, inspect the following document to ensure you build it with the proper configuration.

Nothing to see here-  I just thought I’d better let everyone know before someone lumps Amazon in with CNN… 🙂

Posted in AWS Trial, Oracle

February 10th, 2017 by dbakevlar

I ran across a 2013 article from Straight Talk on Agile Development by Alex Kuznetsov, and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.

Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as we recognize how similar it is to a standard environment, we stop over-thinking it, and it becomes quite simple to implement agile with even petabytes of data in a relational environment without slow and archaic processes.

The second effect of this is to realize that we may start to acquire secondary responsibilities and take on ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.

A Virtual Can Be Virtualized

Don’t worry-  I’m going to show you that it’s not that difficult, and virtualization makes it really easy to do all of this, especially when you have products like Delphix supporting your IT environment.  For our example, we’re going to use our trusty AWS Trial environment, where we have already provisioned a virtual QA database.  We want to create a copy of our development virtualized web application to test some changes we’ve made, and connect it to this new QA VDB.

From the Delphix Admin Console, go to Dev Copies and expand to view those available.  Click on Employee Web Application, Dev VFiles Running.  Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval.  Click on one and click on Provision.

Now this is where you need the information about your virtual database that you wish to connect to:

  1.  You will want to switch from provisioning to the source over to the Linux Target.

Currently the default is to connect to the existing development database, but we want to connect to the new QA VDB we wish to test against.  You can ssh as delphix@<ipaddress for linuxtarget> to connect and gather this information.

2.  Gathering Information You Didn’t Grab Beforehand

I’ve created a new VDB to test against, with the idea that I wouldn’t want to confiscate an existing VDB from any of my developers or testers.  The new VDB is called EmpQAV1.  Now, if you’re like me, you won’t have remembered to grab the info about this new database before you went into the wizard to begin the provisioning.  No big deal, we’ll just log into the target and get it:

[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix  14816     1  0 17:23 ?        00:00:00 ora_pmon_devdb
delphix  14817     1  0 17:23 ?        00:00:00 ora_pmon_qadb
delphix  17832     1  0 17:32 ?        00:00:00 ora_pmon_EmpQAV1
delphix  19935 19888  0 18:02 pts/0    00:00:00 grep pmon

I can now set my ORACLE_SID:

[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1

Now let’s gather the rest of the information we’ll need by connecting to the new database:

[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
  Instance "EmpQAV1", status READY, has 1 handler(s) for this service...

Provision Your VFile

Fill in all the values required in the next section of the provisioning setup:

Click Next and add any requirements to match your vfile configuration that you had for the existing environment.  For this one, there aren’t any, (additional NFS Mount points, etc.)  Then click Next and Finish.

The VFile creation should take a couple minutes max and you should now see an environment that looks similar to the following:

This is a fully functional copy of your web application, created from another virtual copy that can test against a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!

Why would you choose to do anything less?

 

 

 

Posted in AWS Trial, Delphix, Oracle

February 8th, 2017 by dbakevlar

There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrives to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped to deploy successfully.

The EC2 Dashboard View

After you’ve selected your location and set up your security user/group and key pairs, there’s a quick way to see, (at least at a high level) if you’re ready to deploy the AWS Trial to the zone in question.

Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:

Notice in the dashboard that the key pairs, (1) and the expected security groups, (3) are displayed, which tells us that we’re ready to deploy to this zone.  If we double click on the key pair, we’ll see that it’s a match to the one we downloaded locally and will use in our configuration with Terraform:
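You can run the same key pair check from the AWS CLI, if you have it configured for the region in question.  A sketch, using the key name from my tfvars file below:

# Show the key pair's name and fingerprint to compare against your local .pem file
aws ec2 describe-key-pairs --key-names Delphix_east1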

How Terraform Communicates with AWS

These are essential to deploying in an AWS zone and are configured as part of your .tfvars file for Terraform.  You’ll note in the example below that we’ve designated both the correct zone and the key pair we’ll use to authenticate:


#VERSION=004

#this file should be named terraform.tfvars

# ENTER INPUTS BELOW

access_key="XXXXXXX"

secret_key="XXXXXXXXXX"

aws_region="us-east-1"

your_ip="xxx.xx.xxx.xxx"

key_name="Delphix_east1" #don't include .pem in the key name 

instance_name="Delphix_AWS"

community_username="xxx@delphix.com"

community_password="password"
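Terraform automatically reads a file named terraform.tfvars from the working directory, so no extra flags are needed when you kick off the build.  A minimal sketch, run from the directory holding the trial’s templates:

# terraform.tfvars is picked up automatically; review the plan, then build
terraform init
terraform plan
terraform apply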

Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration file, (tfvars) file that we use with the Delphix deployment via Terraform into AWS.

 

Posted in AWS Trial, Delphix, Oracle

January 31st, 2017 by dbakevlar

I’ve been at Delphix for just over six months now.  In that time, I’ve worked with a number of great people on initiatives surrounding competitive positioning, the company roadmap and some new projects.  With the introduction of our new CEO, Chris Cook, new CMO, Michelle Kerr, and other pivotal hires within this growing company, it became apparent that we’d be redirecting our focus on Delphix’s message and connections within the community.

I was still quite involved in the community, even though my speaking had been trimmed down considerably by the other demands at Delphix.  Even though I wasn’t submitting abstracts to as many of the big events as I had in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management and Agile communities, along with a re-introduction into the SQL Server community.

As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, is now going to be upwards of 80% as the Technical Evangelist for the Office of the CTO at Delphix.  I’m thrilled that I’m going to be speaking, engaging and blogging with the community at a level I’ve never done before.  I’ll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor and as the first members of this new group at Delphix.  I would like to thank all those that supported me to gain this position and the vision of the management to see the value of those in the community that make technology successful day in and day out.

I’ve always been impressed with the organizations who recognize the power of grassroots evangelism and the power it has in the industry.  What will I and Adam be doing?  Our CEO, Chris Cook said it best in his announcement:

As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events.  They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed.  They will connect identified trends back into Marketing and Engineering to help shape our message and product direction.  In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings. []

As many of you know, I’m persistent, but rarely patient, so I’ve already started to fulfill my role and be prepared for some awesome new content, events that I’ll be speaking at and new initiatives.  The first on our list was releasing the new Delphix Trial via the Amazon Cloud.  You’ll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you’re brand new to the cloud.  In the upcoming months, watch for new features, stories and platforms that we’ll introduce you to. This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix, with their own AWS account and start to learn the power of Delphix with use case studies that are directed to their role in the IT organization.

Posted in AWS Trial, DBA Life, Delphix, Oracle

January 18th, 2017 by dbakevlar

So Brent Ozar’s group of geeks did something that I highly support-  a survey of data professionals’ salaries.  Anyone who knows me knows I live by data and I’m all about transparency.  The data from the survey is available for download from their site, and they’re encouraging app developers to download the Excel spreadsheet of the raw data and work with it.

Now I’m a bit busy with work as the Technical Intelligence Manager at Delphix and a little conference that I’m the director for, called RMOUG Training Days, which is less than a month from now, but I couldn’t resist the temptation to load the data into one of my XE databases on a local VM and play with it a bit...just a bit.

It was easy to save the data as a CSV and use SQL*Loader to load it into Oracle XE.  I could have used BCP and loaded it into SQL Server, too, (I know, I’m old school) but I had a quick VM with XE on it, so I just grabbed that to give me a database to query from.  I did edit the CSV, removing both the “looking” column and the headers.  If you choose to keep them, make sure you add the column back into the control file and update “options ( skip=0 )” to “options ( skip=1 )” so you don’t load the column headers as a row in the table.

The control file to load the data has the following syntax:

--Control file for data --
options ( skip=0 )
load data
 infile 'salary.csv'
 into table salary_base
fields terminated by ','
optionally enclosed by '"'
 (TIMEDT DATE "MM-DD-YYYY HH24:MI:SS"
 , SALARYUSD
 , COUNTRY
 , PRIMARYDB
 , YEARSWDB
 , OTHERDB
 , EMPSTATUS
 , JOBTITLE
 , SUPERVISE
 , YEARSONJOB
 , TEAMCNT
 , DBSERVERS
 , EDUCATION
 , TECHDEGREE
 , CERTIFICATIONS
 , HOURSWEEKLY
 , DAYSTELECOMMUTE
 , EMPLOYMENTSECTOR)

and the table creation is the following:

create table SALARY_BASE(TIMEDT TIMESTAMP not null,
SALARYUSD NUMBER not null,
COUNTRY VARCHAR(40),
PRIMARYDB VARCHAR(35),
YEARSWDB NUMBER,
OTHERDB VARCHAR(150),
EMPSTATUS VARCHAR(100),
JOBTITLE VARCHAR(70),
SUPERVISE VARCHAR(80),
YEARSONJOB NUMBER,
TEAMCNT VARCHAR(15),
DBSERVERS VARCHAR(50),
EDUCATION VARCHAR(50),
TECHDEGREE VARCHAR(75),
CERTIFICATIONS VARCHAR(40),
HOURSWEEKLY NUMBER,
DAYSTELECOMMUTE VARCHAR(40),
EMPLOYMENTSECTOR VARCHAR(35));
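With the table created and the control file saved, (I’ll call it salary.ctl here, a name I’ve made up) the load itself is a single SQL*Loader call.  A sketch with placeholder credentials:

# Load salary.csv into SALARY_BASE; review salary.log afterwards for rejected rows
sqlldr userid=<user>/<password>@XE control=salary.ctl log=salary.log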

I used Excel to create some simple graphs from my results and queried the data from SQL Developer, (Jeff would be so proud of me for not using the command line… :))

Here’s what I queried and found interesting in the results.

We Are What We Eat, err Work On

The database flavors we work on may be a bit more diverse than most assume.  Now this one was actually difficult to aggregate, as the field allowed free-form entry and there were misspellings, mixes of capital and small letters, etc.  The person who wrote “postgress”, yeah, we’ll talk… 🙂

The data was still heavily skewed towards the MSSQL crowd.  Over 2700 respondents listed SQL Server, and only 169 listed their primary database platform as something else, but among those, Oracle was the majority:

You Get What You Pay For

Now the important stuff for a lot of people: the actual salary.  Many folks think Oracle DBAs make a lot more than those who specialize in SQL Server, but I haven’t found that, and as this survey demonstrated, the averages were pretty close here, too.  No matter if you’re Oracle or SQL Server, we ain’t making as much as that Amazon DBA…:)
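For the curious, the comparison boils down to a query along these lines, (my actual handling of the free-form platform names was messier than a simple UPPER and TRIM, but it gives the idea):

# Average salary and respondent count by roughly normalized platform
sqlplus -s <user>/<password>@XE <<'SQL'
SELECT UPPER(TRIM(primarydb)) AS platform,
       COUNT(*)               AS respondents,
       ROUND(AVG(salaryusd))  AS avg_salary_usd
  FROM salary_base
 GROUP BY UPPER(TRIM(primarydb))
 ORDER BY avg_salary_usd DESC;
SQL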

Newbies, Unite

Many of those who filled out the survey haven’t been in the field that long, (less than five years).  There’s still a considerable number of folks who’ve been in the industry since its inception.

We’ve Found Our Place

Of the 30% of us that don’t have degrees in our chosen field, most of us stopped after getting a bachelor’s to find our path in life:

There are still a few of us, (just under 200) out there who accumulated a lot of school loans getting a master’s or a doctorate/PhD before we figured out that tech was the place to be…:)

Location, Location, Location

The last quick look I did was by country-  the top and bottom average salaries for DBAs:

 

Not too bad, Switzerland and Denmark… 🙂

Data Is King

I wish there’d been more respondents to the survey, but I’m very happy with the data that was provided.  I’m considering doing a survey of my own, just to get more people from the Oracle side, but until then, here’s a little something to think about as we prep for the new year and another awesome one in the database industry!

 

 

Posted in DBA Life, Oracle

January 13th, 2017 by dbakevlar

I receive about 20-30 messages a week from women in the industry.  I take my role in the Oracle community as a role model for women in technology quite seriously, and I’ve somehow ended up speaking on these topics a number of times, upon request from different groups.


Although it’s not the first time the topic has come up, I was asked last week for some thoughts regarding Oracle’s CEO, Safra Catz, and her opportunity to be on President-Elect Trump’s transition team.

I wanted to ask your opinion about Safra not taking a leave of absence to help with Trump’s transition team? I think she should take a leave and as one of the top women in IT I think it shows poor judgment. Could the WITs write her a letter? Thoughts?

After some deep thought, I decided the topic required a good, solid answer and a broader audience.  As with anything involving the topic of WIT, the name of the source who asked the question doesn’t matter and anyone who asks you to give names isn’t really interested in education, but persecution.

It took me some time to think through the complexities of the situation.  Everyone will have some natural biases when a topic bridges so many uncomfortable areas of discussion:

  • Women’s Roles
  • Politics
  • Previous Employer

After putting my own bias aside and thinking through the why and what, here are my thoughts-

No, I don’t think Safra should take a leave of absence.  We have significantly few women in C-level positions.  As of April 2016, only 4% of CEOs of Fortune 500 companies were women, (Safra is one of them.)  I have a difficult time believing we’d ask most men to give up the opportunity to be on a presidential transition team or take a leave of absence.  Some of the most challenging and difficult times in our careers are also the most rewarding, and this may be one of those times in Safra’s life.  Anyone who’s friends with me, especially on Facebook, would know I’m not a big fan of Donald Trump, but in no way should we ask Safra not to try to be part of the solution.

No, I don’t think Safra should refrain from being on the transition team.  As much as we discuss the challenges of getting more women into technology, it’s an even larger challenge in politics.  Women hold less than 25% of the seats in Congress and even less at local government levels.  We are over 50% of the workforce and 50% of the US population.  How can we ever have our voices heard if we aren’t participating in our own government?  Having more representation is important, not less-  and that doesn’t change just because my politics don’t mesh with hers.

So what should the discussion really be about if we don’t want Safra to take a leave of absence or remove herself from the transition team?

  1. We want to know that there are clear policies in place to deter from conflict of interest.  We need to know that if improprieties do occur, that accountability will result.
  2. We need to not limit Safra in opportunities or over-scrutinize her the way we do so many women who don’t fit inside the nice, neat little expectations society still has of them.
  3. We shouldn’t hold Safra accountable for what Donald Trump represents, his actions or if we don’t agree with his politics.

We also need to discuss what is really bothering many when a woman or person of color enters into the lions den, aka a situation that is clearly not very welcoming to us due to gender, race or orientation.  It can bring out feelings of betrayal, concerns that the individual is “working for the enemy.”  We want to know that Safra will stand up for our rights as the under-represented.  We want to know that she would tell Donald that she doesn’t condone his behavior or actions towards women, race and culture.

One of the biggest challenges I had to overcome when I started my career, was recognizing that every individual has their own path in this world.  Their path may be very different than mine, but through change comes growth and to expect someone to do what may not be in their capabilities can be just as limiting as not letting them do what they do best.  This wouldn’t be allowing Safra to do what she does best.

I’ve never viewed Safra as a role-model when it comes to the protection and advancement of women’s roles in technology or our world.  She’s never historically represented this, any more than those expecting it from Marissa Mayer.  It’s just not part of their unique paths, no matter how much the media likes to quote either of them, (especially Marissa, which consistently makes me cringe.)  It doesn’t mean they aren’t capable of accomplishing great feats-  just not feats in the battle for equality.  It also doesn’t mean they aren’t a source of representation.  The more women that are in the space, the better.  That’s how we overcome some of the bias we face.

Regarding those who do support women in more ways than just representing the overall count of women in technology and politics, I’d rather put my time into Sheryl Sandberg, Grace Hopper, Meg Whitman and others who have the passion to head up equality issues.  I both welcome and am thankful for the discussion surrounding writing the letter, and I applaud the woman who asked me about the topic-  it’s a difficult one.

For those of you who are still learning about why equality is so important, here are a few historical references of great women who’ve advanced our rights.  We wouldn’t be where we are today without them.

Thank you to everyone for the great beginning to 2017 and thank you for continuing to trust me to lead so many of these initiatives.  I hope I can continue to educate and help the women in our technical community prosper.

 

Posted in Oracle, WIT

January 4th, 2017 by dbakevlar

How was 2016 for me?

It was a surprisingly busy year-  blogging, speaking, working and doing.

I posted just under 100 posts to my blog this year.  After I changed jobs, the “3 per week” quickly declined to “4 per month” once I was inundated with new challenges and the Delphix learning curve.  That will change for 2017, along with some new initiatives that are in the works, so stay tuned.

For 2016, the most popular posts and pages on my website followed a similar trend to last year’s.  My emulator for RPI is still a popular item, and I field almost as many questions on RPI as I do on WIT-  Raspberry Pi is everywhere, and you’ll see regained momentum from me with some smart home upgrades.

My readers for 2016 came from just about every country.  There were only a few that weren’t represented, but the largest numbers were from the expected:

I also write from time to time on LinkedIn.  LinkedIn has become the home for my Women in Technology posts, and it’s led me to receive around 30 messages a week from women looking for guidance, sharing their stories or just reaching out.  I appreciate the support and the value it’s provided to those in the industry.

RMOUG Conference Director- No Escape!

The 2016 conference was a great success for RMOUG, but much of it was due to budget cuts and changes that were made as we went along and addressed trends.  I’ve been collecting the data from evaluations and it really does show why companies are so interested in the value their data can provide them.  I use what I gather each year to make intelligent decisions about where RMOUG should take the conference direction each year-  what works, what doesn’t and when someone throws an idea out there, you can either decide to look into it or have the data to prove that you shouldn’t allocate resources to an endeavor.

A New Job

I wasn’t into the Oracle cloud like a lot of other folks.  It just wasn’t that interesting to me, and I felt that Oracle, with as much as they were putting into their cloud investment, deserved someone who was behind it.  I’d come to Oracle to learn everything I could about Oracle and Enterprise Manager, and as the on-premise solution it was, EM wasn’t in the company focus.  When Kyle and I spoke about an opportunity to step into a revamped version of his position at Delphix, a company that I knew a great deal about and admired, it was a no-brainer.  I started with this great little company in June, and there are some invigorating initiatives that I look forward to becoming part of in 2017!

2 Awards

In February, I was awarded RMOUG’s Lifetime Achievement award.  I kind of thought this would mean I could ride off into the sunset as the conference director, but as my position at Oracle ended, (it had been a significant fight to keep me managing the conference as an Oracle employee, transitioning me to a non-voting member to stay within the by-laws) not many were surprised to see me take on a sixth year of managing the conference.

In April I was humbly awarded the Ken Jacobs award from IOUG.  This is an award I’m very proud of, as Oracle employees are the only ones eligible, and I was awarded it within just the two years I was employed at the red O.

3 Makers Events

I haven’t had much time for my Raspberry Pi projects these last number of months, but it doesn’t mean I don’t still love them.  I gained some recognition, ranking 2nd in the world for RPI Klout score back in July, which took me by surprise.  I love adding IOT stories into my content, and it caught the attention of the social media engines.  Reading and content is one thing, but it’s even more important to do-  I had a blast being part of the impressive Colorado Maker Faire at the Denver Museum of Nature and Science earlier in 2016.  I was also part of two smaller maker faires in Colorado, allowing me to discuss inexpensive opportunities for STEM education in schools using Raspberry Pis, Python coding and 4M kits.

Speaking Engagements

Even though I took a number of months off to focus on Delphix initiatives, I still spoke at 12 events and organized two, (Training Days and RMOUG’s QEW.)

February:  RMOUG– Denver, CO, (Director and speaker)

March: HotSos– Dallas, TX, (Keynote)

April: IOUG Collaborate– Las Vegas, NV

May: GLOC– Cleveland, OH, NoCOUG– San Jose, CA

June: KSCOPE– Chicago, IL

July: RMOUG Quarterly Education Workshop– Denver, CO, (Organizer)

September: Oracle Open World/Oak Table World– San Francisco, CA

October: UNYOUG– Buffalo, NY, Rocky Mountain DataCon & Denver Testing Summit–  Denver, CO

November: MOUS– Detroit, MI, (Keynote) ECO– Raleigh, NC

New Meetup Initiatives and Growth

I took over the Denver/Boulder Girl Geek Dinners meetup last April.  The community had almost 650 members at the time, and although it wasn’t as big as Girl Develop It or Women Who Code, I was adamant about keeping it alive.  Come the new year, thanks to some fantastic co-organizers assisting me, (along with community events in the technical arena) we’re now on our way to 1100 members for the Denver/Boulder area.

The Coming Year

I’m pretty much bursting with anticipation due to all that is on my plate for the coming year.  I know the right hand side bar is a clear indication that I’ll be speaking more, meaning more travel and a lot of new content.  With some awesome new opportunities from Delphix and the organizations I’m part of, I look forward to a great 2017!

 

Posted in DBA Life, Oracle

December 12th, 2016 by dbakevlar
UPDATE-  I want to thank everyone who supported this post and reached out to me about the challenges of managing and working with user group events.  This post, as with others I’ve written on different topics, (I have quite the knack for saying what most just think, don’t I? :)) struck a nerve in the community, but it was also very timely with a mission that Nichole Scott, Oracle’s North America User Group Senior Manager, was working to address.  Nichole was kind enough to reach out to me after reading the blog post, and through a good discussion on what occurred, (both historically and currently) she was able to find a way to get RMOUG sponsorship and helped us reach out to another marketing team about the conflicting Cloud marketing event in Denver.  I really appreciate the great support from Nichole, and her efforts went a long way in resolving some of the frustrations RMOUG’s board was feeling.

This is one of those posts where my hat comes off as a previous Oracle employee, same with my Delphix hat.  I am here only as a board member and representative of my Oracle REGIONAL user group community and have a bone to pick with a few people at Oracle.


I’ve been the conference director for Rocky Mountain Oracle User Group’s, (RMOUG) Training Days conference since 2012.  This is a demanding volunteer position within the board of directors for any regional user group, made even more demanding at RMOUG for three reasons:

  1. RMOUG’s Training Days conference is the largest regional conference in the US.
  2. We have a board already strained by demands on its volunteers’ lives, so resources are limited.
  3. User groups are going through challenges as the Oracle community matures and changes.

After five conferences, you’d think it’d just be down to a science, but there’s always an unexpected challenge or opportunity to take on, plus as a non-profit, I’m required to do it in a way that has little cost or justifies itself with a pay off.

In the last two years, Oracle really stepped up and was one of our top sponsors.  With that sponsorship came a number of demands from them in the way of attendee lists, free tickets and extra cost to our membership.  There were always a few people inside the company that would bleed us dry in an attempt to get everything out of the event till it would cost us more than the money invested.

Although I’d feel some frustration with this, there were always incredible people inside Oracle who saw the value in the Oracle user group community and would try to make it worth it.  On the other hand, the secondary support from Oracle Technology Network, (OTN) through Laura Ramsey has been an incredible benefit to our community.  Although we spend every penny of the sponsorship to make sure their area is provided for, they always ensure to help us by marketing our event through social media and with the community.  I’ll state again, what OTN does is separate from Oracle sponsorship and as I’ve told folks-  Oracle is like 10,000 little companies under one name, so what one does, shouldn’t be put in comparison to another.

This year, the standard routine of speaking with the Oracle hardware group about sponsorship was underway, and they were very excited about RMOUG Training Days 2017.  Then we received word this last week that it wouldn't be happening and that Oracle wouldn't be offering RMOUG any sponsorship at all.  Oracle simply didn't see much value in sponsoring Oracle user groups any longer.

Something else to be aware of: RMOUG, like UKOUG, doesn't receive any compensation for ACE Director speakers.  We are one of the few groups penalized as "too successful" as of 2013, informed that despite the good we do in the community, ACE Directors would no longer be reimbursed for travel expenses to speak at our event.  ACE Directors have still submitted abstracts and hoped to be accepted.  We at RMOUG go out of our way to make it worth their while, finding drivers to bring speakers from the airport, (someone once joked that whoever picked out the location for the Denver International Airport must have liked Kansas.)  We also host an incredible welcome reception for our speakers and volunteers, and we offer our volunteers incentives to help keep everything running like a well-oiled machine.  All of this is done by a non-profit facing rising costs from management companies and event centers, while user group memberships are at an all-time low.

Then, this last week, Oracle contacted us to let us know they would be holding a Cloud day event just 10 days before our 2017 conference.  There are Oracle employees who serve as liaisons on our board, and at no time did anyone in Oracle marketing communicate with them as the event was being planned, or ask whether it would impact the local Oracle user group community.

Their request- They wanted RMOUG’s support in marketing this event and promoting it to our membership to raise attendance.

Nope.  Not going to happen.  I’m not going to EVEN add a link to it here in my post.

Oracle can’t continue to stomp on the local user groups and expect them to do your marketing, provide you with attendees to your events and expect them to survive.

I’m quite aware how many user groups are in a battle for their survival right now.  UTOUG has started to plead with their membership-  letting them know that unless someone steps up and helps, their ability to offer meetups and keep the group going will be impacted.  NoCOUG has repeatedly gone to their membership and groups outside looking for support-  fearful that they won’t survive to see another year.  RMOUG dropped one 2016 newsletter and almost dropped a quarterly event due to lacking support, volunteers and attendance.  These are just three of the top five in the US this last quarter-  I know from my speaking engagements from the last two years, there are many other stories I could share.

The Oracle ACE program now has a point system that requires those in the community to speak at events, but if the regional user groups fail, where will people speak?  If Oracle doesn't support the groups, where will they go?  I know I'm doing more with meetups, DataCons and other events.

Most directors on boards expect yearly changes in who they deal with at Oracle, as there is someone new every year.  Even the Oracle events calendar is missing a majority of the US events, and I won't even get started on those outside the country.  Members aren't volunteering their time to the community, Oracle isn't supporting the regional user groups, and now we have competition not just from meetups, local events and companies, but from ORACLE, TOO?

There’s no shortcut to technical quality that can be had with bright lights and a lot of marketing swag.  It takes everyone to built the user community brand and the more that Oracle tries to bundle it up into a fast food type of drive thru experience, the more the user community will starve for real content.  That’s where the regional user groups come in and provide sustenance.


Posted in Oracle

November 30th, 2016 by dbakevlar

I love technology-   ALL TECHNOLOGY.  That includes loving my Mac Air and loving my Microsoft Surface Pro 4.  I recently went back to a Mac when I joined Delphix, giving up the power I had on the Surface Pro 4, knowing the content I was producing would need to run on hardware with fewer resources.

With the release of Microsoft SQL Server 2016 on Linux, I jumped in with both feet and wanted to install it on one of the Linux VMs I have "discovered" with my Delphix engine and its Oracle environment.  The VM ran CentOS, which isn't a supported OS, but the yum-based install worked with a few changes once I upgraded from CentOS 6.6 to 7.
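If you want to attempt the same install, it boils down to a few yum steps.  Here's a minimal sketch assuming Microsoft's packages.microsoft.com repo layout for RHEL/CentOS 7, (early preview builds may use a different repo URL):

# Register Microsoft's SQL Server repo, then install and configure the engine
sudo curl -o /etc/yum.repos.d/mssql-server.repo \
    https://packages.microsoft.com/config/rhel/7/mssql-server.repo
sudo yum install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup   # accepts the EULA and sets the sa password
sudo systemctl status mssql-server     # confirm the service is running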

After upgrading and installing SQL Server 2016, I became aware that the VM's memory requirements, previously trimmed down to less than 1.5GB, were now at least 3.5GB.  My Mac Air has only 8GB of memory, and to run a Delphix environment you need the standalone Delphix engine VM, (a simple software appliance) plus a "Source" environment and a "Target" environment.  Running three environments with that much of an increase in resource demands for the source and target was a bit too much for the little machine.


OVA Move

Moving a VM from a Mac to a PC should be as easy as copying the .ova file and importing it on the new PC.  Upon doing so, though, I noticed the import was of the original version of the VM and didn't include my OS upgrade or the MSSQL installation.


I was able to quickly see this by comparing the images with the following command:

uname -a

I took a new snapshot, brought the new file over and imported again, but nothing changed.  So I decided to test just how portable, (and how dependent on their surroundings) VMs really are.

Ensuring the VM was DOWN, I copied the actual VM’s folder on my Mac Air to a jump drive.  It was almost 18 GB.


I then switched over to the Surface and, with the VM down, renamed the original folder, copied the one from my Mac into the same directory, and renamed the copy to match the original, (I had to remove the .vmwarevm extension from the folder) so that it mimicked the folder it was replacing.  Here's the folder with the .vmwarevm extension on the jump drive:

[Screenshot: the copied VM folder on the jump drive, still carrying its .vmwarevm extension]

And here’s how I renamed the original folder to “old” and then copied the Mac version to the same directory and removed the extension.  Notice that the name now matches what it would have been for the original, which will also match what is in the Windows registry:

[Screenshot: the renamed folders side by side, with the Mac copy now matching the original name]
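If you'd rather script the move than drag folders around, the Mac side is a single copy.  A rough sketch with entirely hypothetical paths, (the Windows side is just the rename-and-copy dance described above):

# On the Mac, with the VM shut down, copy the whole .vmwarevm bundle to the jump drive
cp -R "$HOME/Documents/Virtual Machines.localized/LinuxSource.vmwarevm" /Volumes/JUMP/
# On Windows: rename the original VM folder to *_old, copy the bundle from the
# jump drive into the same directory, then strip the .vmwarevm extension so the
# folder name matches what VMware and the registry expect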

I restarted the VM, checked that everything came up and verified that the image contained the correct OS and MSSQL installation:

[Screenshots: the restored VM showing the upgraded OS and the MSSQL installation]

Ahhh, much better…. 🙂  Now my upgraded VM with the addition of MSSQL 2016 has some room to move and grow!

 

Posted in Oracle, VMWare

November 23rd, 2016 by dbakevlar

The challenge with coding is that you sometimes need to be aware of what can change outside your code and make the code itself appear guilty.


I wrote a blog post a short while back about my first attempt at scripting with AppleScript, with the goal of automating a weekly status report of my OmniFocus tasks to my new manager.  After seeing some cool things one of my peers did with their OmniFocus setup, I built mine out to do something similar.  What I didn't realize was that my code was heavily dependent on my original setup, and after the change, no new updates were being generated.

Now that I know a little bit more about AppleScript and OmniFocus, I chose to do the following:

  1. Use OmniFocus' Perspectives to give me a better status report output.
  2. Simplify my code to use the perspective vs. "scraping" the data in OmniFocus.

For the database folks out there, a Perspective is just like a view in a database: an alias over a selection of data that answers the query, "What have I completed in the last month?  List the project and my status notes, and organize it by date completed."

Create Perspective

You can create a new Perspective quite easily:

[Screenshot: creating the new perspective in OmniFocus]

Create the perspective, (I've named mine "Weekly Report") and then build the "view" that will populate our report properly.

[Screenshot: the "Weekly Report" perspective settings]

Apple Script

Now, we’ll need to build our code to match what our perspective is done:

--Set up code to match the perspective
set thePerspective to "Weekly Report" --the exact Perspective name
set theSubjectOfMessage to "Task Report for Kellyn" --the subject of the email
set theSender to "Kellyn Gorman"
set POSIXpath to POSIX path of "/Users/pathname/Library/Containers/com.omnigroup.OmniFocus2/Data/Documents/tasklist.txt"

--Switch OmniFocus to the perspective and save its contents to a text file
tell front document of application "OmniFocus"
    tell front document window
        set perspective name to thePerspective
        save in POSIXpath as "public.text"
    end tell
end tell

--Create the email, attach the exported task list and send it
tell application "Mail"
    set theMessage to make new outgoing message with properties {subject:theSubjectOfMessage, sender:theSender}
    tell content of theMessage
        make new attachment with properties {file name:POSIXpath} at after last paragraph
    end tell
    tell theMessage
        make new to recipient at end of to recipients with properties {address:"emailaddress@.com"} --add the address for the email
    end tell
    send theMessage
end tell

And test your code…always… 🙂  If you've set everything up correctly, you should have an awesome weekly status report sent to your manager, telling him how awesome you are.
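If you'd rather not kick the report off from Script Editor each week, the saved script can also be run from Terminal, (or from a scheduled job) with osascript; the path here is hypothetical:

osascript "$HOME/Scripts/weekly_report.applescript"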

Posted in DBA Life, Oracle

November 14th, 2016 by dbakevlar

OK, so I’m all over the map, (technology wise) right now.  One day I’m working with data masking on Oracle, the next it’s SQL Server or MySQL, and the next its DB2.  After almost six months of this, the chaos of feeling like a fast food drive thru with 20 lanes open at all times is starting to make sense and my brain is starting to find efficient ways to siphon all this information into the correct “lanes”.  No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂


One of the areas that I’ve been spending some time on is the optimizer and differences in Microsoft SQL Server 2016.  I’m quite adept on the Oracle side of the house, but for MSSQL, the cost based optimizer was *formally* introduced in SQL Server 2000 and filtered statistics weren’t even introduced until 2008.  While I was digging into the deep challenges of the optimizer during this time on the Oracle side, with MSSQL, I spent considerable time looking at execution plans via dynamic management views, (DMVs) to optimize for efficiency.  It simply wasn’t at the same depth as Oracle until the subsequent releases and has grown tremendously in the SQL Server community.

Compatibility Mode

As SQL Server 2016 takes hold, the community is starting to embrace an option Oracle folks have used historically: when a new release comes out and you're on the receiving end of significant performance degradation, you have the choice of setting the compatibility mode back to the previous version.

I know there are a ton of Oracle folks out there that just read that and cringed.

Compatibility in MSSQL is now very similar to Oracle's.  Optimizer features are allocated by a release version value, which for each platform corresponds to the following:

Database          Version Value
Oracle 11.2.0.4   11.2.0.x
Oracle 12.1       12.1.0.0.x
Oracle 12.1.0.2   12.1.0.2.0
MSSQL 2012        110
MSSQL 2014        120
MSSQL 2016        130

 

SQL Server has had this for some time, as you can see by the following table:

Product               Database Engine Version   Compatibility Level Designation   Supported Compatibility Level Values
SQL Server 2016       13                        130                               130, 120, 110, 100
SQL Database (Azure)  12                        120                               130, 120, 110, 100
SQL Server 2014       12                        120                               120, 110, 100
SQL Server 2012       11                        110                               110, 100, 90
SQL Server 2008 R2    10.5                      100                               100, 90, 80
SQL Server 2008       10                        100                               100, 90, 80
SQL Server 2005       9                         90                                90, 80
SQL Server 2000       8                         80                                80

These values can be viewed in each database with a query from the corresponding command-line tool.

For Oracle:

SELECT name, value, description from v$parameter where name='compatible';

Now if you’re in database 12c and multi-tenant, then you need to ensure you’re correct database first:

ALTER SESSION SET CONTAINER = <pdb_name>;
ALTER SYSTEM SET COMPATIBLE = '12.1.0.0.0';

For MSSQL:

SELECT databases.name, databases.compatibility_level from sys.databases 
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120
GO

Features

How many of us have heard, "You can call it a bug or you can call it a feature"?  Microsoft has taken a page from Oracle's book and refers to the ability to set the database to the previous compatibility level as the Compatibility Level Guarantee.  It's a very positive-sounding "feature", and for those who upgrade and are suddenly faced with a business meltdown, whether from a surprise impact or simply from a lack of testing, it will certainly feel like one.

So what knowledge, earned through many years of experience with this kind of feature, can the Oracle side of the house offer the MSSQL community?

I think anyone deep into database optimization knows that "duct taping" around a performance problem this way, by moving the compatibility back to the previous version, is wrought with long-term issues.  This fix isn't scoped to a unique query or even a few transactional processes.  Although it should be a short-term measure before you launch to production, experience on the Oracle side has taught us, [we hope] that databases can live for years at a compatibility version below their release version.  Many DBAs maintain workarounds and one-off patch fixes for databases whose compatibility either can't or won't be raised to the release version.  This is a database-level way of holding the optimizer at the previous version.  The WHOLE database.

You’re literally saying, “OK kid, [database], we know you’re growing, so we upgraded you to latest set of pants, but now we’re going to hem and cinch them back to the previous size.”  Afterwards we say, “Why aren’t they performing well? After all, we did buy them new pants!”

So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?

  • No 10,000 foreign key or referential constraints for you, no, you’re back to 253.
  • Parallel update of statistic samples
  • New Cardinality Estimator, (CE)
  • Sublinear threshold for statistics updates
  • A slew of miscellaneous enhancements that I won’t list here.

Statistics Collection

Now, here's a change I don't like, though I do prefer how Microsoft has addressed it in the architecture.  Trace flag 2371 changes the threshold at which statistics are automatically updated, from a fixed change of about 20% of row count to a dynamic, sublinear threshold that kicks in sooner on large tables.  That behavior is now on by default under MSSQL 2016 compatibility 130; on older compatibility levels without the flag, you're back to the fixed 20% threshold.  There are a number of ways to manage the equivalent in Oracle, but it's getting more difficult as dynamic sampling enhancements put the power over statistics inside Oracle and less in the hands of the database administrator.  Locking down stats collection in Oracle requires about six parameter changes, and as a DBA who's attempted it, it's easier said than done-  there were still times Oracle found ways to override my instructions.
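For reference, here's roughly what checking and toggling that behavior looks like on an older compatibility level, using sqlcmd with hypothetical connection details:

# Is the dynamic statistics threshold already enabled?
sqlcmd -S localhost -U sa -Q "DBCC TRACESTATUS (2371);"
# Enable it server-wide on pre-2016 compatibility levels (-1 = global)
sqlcmd -S localhost -U sa -Q "DBCC TRACEON (2371, -1);"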

Optimizer Changes and Hot Fixes

There's also a trace flag for applying optimizer hotfixes, which I think is a solid MSSQL feature that Oracle could benefit from, (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating its value…)  Trace flag 4199 granted the DBA the power to enable new optimizer fixes, but, just as in Oracle, with the introduction of SQL Server 2016 this is now controlled by the compatibility mode.  I'm sorry, MSSQL DBAs-  it looks like this is one of those Oracle features that, (in my opinion) I wish had infected cross-platform in the other direction.
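Under the old model, turning those hotfixes on server-wide was a one-liner; a sketch, again with hypothetical connection details:

# Enable optimizer hotfixes globally via trace flag 4199 (pre-2016 behavior)
sqlcmd -S localhost -U sa -Q "DBCC TRACEON (4199, -1);"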

As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact Oracle DBAs have experienced over the many releases that optimizer compatibility control has been part of our database world.  We have databases living in the past-  databases that are continually growing but can't take advantage of the "new clothes" they've been offered, and fixes we can't use because raising the compatibility is too painful and risky.  Nothing like being a tailor who can only hem and cinch.  As the tailors responsible for the future of our charges, there comes a point where we need to make our voices heard, to ensure we aren't complacent bystanders, offering stability at the cost of watching the world change around us.

Posted in Oracle, SQLServer

August 15th, 2016 by dbakevlar

I’ve been involved in two data masking projects in my time as a database administrator.  One was to mask and secure credit card numbers and the other was to protect personally identifiable information, (PII) for a demographics company.  I remember the pain, but it was better than what could have happened if we hadn’t protected customer data….


Times have changed, and now, as part of a company with a serious market focus on data masking, my role includes time allocated to researching data protection, data masking and the technical requirements behind them.

Reasons to Mask

The percentage of companies holding data that SHOULD be masked is much higher than most would think.


The amount of data that should be masked vs. what actually is masked can be quite different.  There was a great study by the Ponemon Institute, (that's Ponemon, you Pokemon Go freaks…:)) showing that 23% of data was masked to some level and 45% of data was significantly masked by 2014.  That still left over 30% of data at risk.

The Mindset Around Securing Data

We also don’t think very clearly about how and what to protect.  We often silo our security-  The network administrators secure the network.  The server administrators secure the host, but doesn’t concern themselves with the application or the database and the DBA may be securing the database, but the application that’s accessing it, may be open to accessing data that shouldn’t be available to those involved.  We won’t even start about what George in accounting is doing.

We need to move beyond thinking only of disk encryption and start thinking about data encryption and application encryption, with key data stores that protect all of the data-  the goal of the entire project.  It's not as if we expect to see people running out of a building with a server, but seriously, it doesn't just happen in the movies: people have stolen drives, jump drives and even printouts of spreadsheets holding incredibly important data.

As I’ve been learning what is essential to masking data properly, along with what makes our product superior, is that it identifies potential data that should be masked, along with ongoing audits to ensure that data doesn’t become vulnerable over time.


Identifying sensitive data can be the largest consumer of resources in any data masking project, so I was really impressed with this area of Delphix data masking.  It's really easy to use: if you don't understand the ins and outs of DBMS_CRYPTO, or you're unfamiliar with java.util.Random syntax, no worries-  the Delphix product makes it easy to mask data and has a centralized key store to manage everything.
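For contrast, here's roughly what the hand-rolled route can look like: hashing a sensitive column in place with DBMS_CRYPTO from a shell script.  The schema, table and connection string are entirely hypothetical, and the account needs EXECUTE on DBMS_CRYPTO:

# Irreversibly hash a sensitive column with SHA-256 (DBMS_CRYPTO.HASH_SH256, 12c and up)
sqlplus -s masking_user/password@devdb <<'EOF'
UPDATE customers
   SET card_no = RAWTOHEX(DBMS_CRYPTO.HASH(
       UTL_I18N.STRING_TO_RAW(card_no, 'AL32UTF8'), DBMS_CRYPTO.HASH_SH256));
COMMIT;
EOF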


It doesn’t matter if the environment is on-premise or in the cloud.  Delphix, like a number of companies these days, understands that hybrid management is a requirement, so efficient masking and ensuring that at no point is sensitive data at risk is essential.

The Shift

How many data breaches do we need to hear about before we all pay more attention to this?  Security topics at conferences have diminished compared to when I started attending less than a decade ago; it wasn't that long ago that security appeared to matter more to us, even as the issue itself has only grown more important.


Research also found that only 7-19% of companies actually know where all of their sensitive data is located-  which leaves over 80% of companies with sensitive data vulnerable to a breach.  I don't know about the rest of you, but after finishing that little bit of research, I understood why many feel better not knowing, and why it's better to simply accept this and address masking needs so we're not one of the vulnerable ones.

Automated discovery of vulnerable data can significantly reduce risk, as well as the demands on those who manage the data but don't always know what the data is for.  I've always said that the best DBAs know the data, but how much can we really understand it and still do our jobs?  It's often the users who understand it, yet they may not comprehend the technical requirements to safeguard it.  Automated solutions remove the need for that skill to exist in human form, allowing all of us to do our jobs better.  I thought it was really cool that our data masking tool considers this and takes the pressure off of us, letting the tool do the heavy lifting.

Along with a myriad of database platforms, we also know people are bound and determined to export data to Excel, MS Access and other flat-file formats, creating more vulnerabilities that seem out of our control.  The Delphix data masking tool considers this and supports many of these formats as well.  George, the new smarty-pants in accounting, wrote his own XML pull of customers and credit card numbers?  No problem, we've got you covered… 🙂


So now, along with telling you how to automate a script that emails George to change his password from "1234" in production, I can make recommendations on how to keep him from printing out a spreadsheet with all the customers' credit card numbers and leaving it on the printer…:)

Happy Monday, everyone!


Posted in Data Masking, Delphix, Oracle
