Category: Delphix

February 10th, 2017 by dbakevlar

So you thought you were finished configuring your AWS target, eh?  I already posted previously on how to address a fault with the RMEM setting, but now we're onto the WMEM.  Wait, WM-what?

No, I fear a DBA's work is never over and when it comes to the cloud, our skill set has just expanded from what it was when we worked on-premise!

Our trusty Delphix Admin Console keeps track of settings on all our sources and targets, informing us when the settings aren't what is recommended, so that we'll be aware of any less than optimal parameters that could affect performance.

As we address latency in cloud environments, network settings become more important.

How WMEM Differs from RMEM

RMEM = receive

WMEM = send

Where RMEM is quite easy to remember as the receive settings, WMEM is the write, (send) buffer settings for TCP connections.
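
Before changing anything, it's worth a quick look at what the target currently has in place; the same command will also show the RMEM values from the earlier fault:

$ sysctl net.ipv4.tcp_wmem net.ipv4.tcp_rmem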

 

As root, we’ll add another line to the sysctl.conf file to reflect values other than defaults:

$ echo 'net.ipv4.tcp_wmem = 10240 4194304 12582912' >> /etc/sysctl.conf

Reload the values into the system:

$ sysctl -p /etc/sysctl.conf

Verify the settings are now active:

$ sysctl -a | grep net.ipv4.tcp_wmem

net.ipv4.tcp_wmem = 10240 4194304 12582912

That’s all there is to it.  Now you can mark the fault as resolved in the Delphix Admin Console.

Posted in AWS Trial, Delphix Tagged with: , ,

February 8th, 2017 by dbakevlar

There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrive to test out the incredibly cool AWS Trial for Delphix, I'll add my rendition of what to look for to know your AWS setup is prepped to successfully deploy.

The EC2 Dashboard View

After you’ve selected your location, set up your security user/group and key pairs, there’s a quick way to see, (at least high level) if you’re ready to deploy the AWS Trial to the zone in question.

Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:

Notice in the dashboard, you can see that the key pairs, (1) and the expected Security Groups, (3) are displayed, which tells us that we're ready to deploy to this zone.  If we double click on the Key Pair, we'll see that it's a match to the one we downloaded locally and will use in our configuration with Terraform:

How Terraform Communicates with AWS

These are essential to deploying in an AWS zone that’s configured as part of your .tfvars file for terraform.  You’ll note in the example below, we have both designated the correct zone and the key pair that is part of the zone we’ll be using to authenticate:


#VERSION=004

#this file should be named terraform.tfvars

# ENTER INPUTS BELOW

access_key="XXXXXXX"

secret_key="XXXXXXXXXX"

aws_region="us-east-1"

your_ip="xxx.xx.xxx.xxx"

key_name="Delphix_east1" #don't include .pem in the key name 

instance_name="Delphix_AWS"

community_username="xxx@delphix.com"

community_password="password"
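
With the tfvars file populated, Terraform handles the deployment itself.  A rough sketch of the workflow, run from the directory containing the trial's .tf files, (the trial's own documentation is the authority on the exact steps):

$ terraform plan     # preview the AWS resources that will be created
$ terraform apply    # deploy the Delphix trial environment
$ terraform destroy  # tear everything down when you're finished with the trial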

Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration, (tfvars) file that we use with the Delphix deployment via Terraform into AWS.

 

Posted in AWS Trial, Delphix, Oracle Tagged with: ,

February 1st, 2017 by dbakevlar

Delphix focuses on virtualizing non-production environments, easing the pressure on DBAs, resources and budget, but there is a second use case for the product that we don't discuss nearly enough.

Protection from data loss.

Jamie Pope, one of the great guys that works in our pre-sales engineering group, sent Adam and me an article on one of those situations that makes any DBA, (or an entire business, for that matter) cringe.  GitLab.com was performing some simple maintenance and someone deleted the wrong directory, removing over 300G of production data from their system.  It appears they were first going to use PostgreSQL's "vacuum" feature to clean up the database, but decided they had extra time to clean up some directories and that's where it all went wrong.  To complicate matters, the onsite backups had failed, so they had to go to offsite ones, (and every reader moans…)

Even this morning, you can view the tweets of the status for the database copy and feel the pain of this organization as they try to put right the simple mistake.

Users are down as they work to get the system back up.  Just getting the data copied before they're able to perform the restore is painful and, as a DBA, I feel for the folks involved.

How could Delphix have saved the day for GitLab?  Virtual databases, (VDBs) are read/write copies derived from a recovered image that is compressed and de-duplicated, then kept in a state of perpetual recovery, with the transactional data applied to the Delphix Engine source at a specific interval, (commonly once every 24 hrs).  We support a large number of database platforms, (Oracle, SQL Server, Sybase, SAP, etc.) and are able to virtualize the applications that are connected to them, too.  The interval of how often we update the Delphix Engine source is configurable, so depending on network and resources, it can be decreased to apply changes more often, depending on how up to date the VDBs need to be vs. production.

With this technology, we’ve come into a number of situations where customers suffered a cataclysmic failure situation in production.  While traditionally, they would be dependent upon a full recovery from a physical backup via tape, (which might be offsite) or scrambling to even find a backup that fit within a backup to tape window, they suddenly discovered that Delphix could spin up a brand new virtual database with the last refresh before the incident from the Delphix source and then use a number of viable options to get them up and running quickly.

  1. Switch the users and application to point to the new VDB that was recovered to the point in time, (PIT) before the incident occurred.  Meanwhile, IT is able to take their time recovering the production database with the physical backup, with little outage to the business.
  2. Create a VDB to the PIT before the failure and then create a connection between the production and the VDB, making a copy back to production of the data that was lost.
  3. If there was dire loss, (i.e. disk, etc.)  create a VDB to the PIT before the failure and perform what’s called a V2P, or virtual to physical, rehydrating the virtual data to become the new physical database.

This type of situation happens more often than we'd like to admit.  Many times resources have been working long shifts and make a mistake due to exhaustion; other times someone unfamiliar, with access to something they shouldn't have, simply makes a dire mistake.  These things happen and this is why DBAs are always requesting two or three methods of backups.  We learn quite quickly that we're only as good as our last backup and if we can't protect the data, well, we won't have a job for very long.

Interested in testing it out for yourself?  We have a really cool free Delphix trial via the Amazon cloud that uses your AWS account.  There's a source host and databases, along with a virtual host and databases, so you can create VDBs, blow away tables, recover via a VDB and create a V2P, (virtual to physical) all on your own.

 

Posted in AWS Trial, cloning, Delphix Tagged with: , ,

January 31st, 2017 by dbakevlar

I've been at Delphix for just over six months now.  In that time, I was working with a number of great people on initiatives surrounding competitive analysis, the company roadmap and some new projects.  With the introduction of our CEO, Chris Cook, new CMO, Michelle Kerr and other pivotal positions within this growing company, it became apparent that we'd be redirecting our focus on Delphix's message and connections within the community.

I was still quite involved in the community, even though my speaking had been trimmed down considerably with the other demands at Delphix.  Even though I wasn't submitting abstracts to many of the big events I had in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management and Agile communities, along with a re-introduction into the SQL Server community.

As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, is now going to be upwards of 80% as the Technical Evangelist for the Office of the CTO at Delphix.  I'm thrilled that I'm going to be speaking, engaging and blogging with the community at a level I've never done before.  I'll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor, and we'll be the first members of this new group at Delphix.  I would like to thank all those that supported me in gaining this position, and the management for the vision to see the value of those in the community that make technology successful day in and day out.

I've always been impressed with the organizations who recognize grassroots evangelism and the power it has in the industry.  What will Adam and I be doing?  Our CEO, Chris Cook said it best in his announcement:

As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events.  They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed.  They will connect identified trends back into Marketing and Engineering to help shape our message and product direction.  In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings. []

As many of you know, I'm persistent, but rarely patient, so I've already started to fulfill my role-  be prepared for some awesome new content, events that I'll be speaking at and new initiatives.  The first on our list was releasing the new Delphix Trial via the Amazon Cloud.  You'll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you're brand new to the cloud.  In the upcoming months, watch for new features, stories and platforms that we'll introduce you to.  This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix with their own AWS account and start to learn the power of Delphix through use case studies directed at their role in the IT organization.

Posted in AWS Trial, DBA Life, Delphix, Oracle Tagged with: ,

January 30th, 2017 by dbakevlar

I don't want to alarm you, but there's a new Delphix trial on AWS!  It uses your own AWS account and with a simple set up, allows you to deploy a trial Delphix environment.  Yes, you heard me right-  with just a couple of steps, you could have your own setup to work with Delphix!

There’s documentation to make it simple to deploy, simple to understand and then use cases for individuals determined by their focus, (IT Architect, Developer, Database Administrator, etc.)

This was a huge undertaking and I’m incredibly proud of Delphix to be offering this to the community!

So get out there and check this trial out!  All you need is an AWS account and if you don't have one, it only takes a few minutes to create and set one up, with just a final verification before you can get started!  If you have any questions or feedback about the trial, don't hesitate to email me at dbakevlar at gmail.

Posted in AWS Trial, Cloud, Delphix Tagged with: , ,

January 24th, 2017 by dbakevlar

So you’ve deployed targets with Delphix on AWS and you receive the following error:

It's only a warning, but it states that your default of 87380 is below the recommended second value for the net.ipv4.tcp_rmem parameter.  Is this really an issue and do you need to resolve it?  As usual, the answer is "it depends" and it's all about how important performance is to you.

What is net.ipv4.tcp_rmem?

To answer this question, we need to understand network performance.  I'm no network admin, so I am far from an expert on this topic, but as I've worked more often in the cloud, it's become evident to me that the network is the new bottleneck for many organizations.  Amazon has even built a transport, (the Snowmobile) to bypass this challenge.

The network parameter settings in question have to do with the TCP window sizes for the cloud host and how the window behaves over WAN links.  We're on AWS for this environment and the Delphix Admin Console was only the messenger, letting us know that the settings currently in place for this target are less than optimal.

Each time the sender hits this limit, they must wait for a window update before they can continue and you can see how this could hinder optimal performance for the network.

Validation First

To investigate this, we're going to log into our Linux target and SU over to root, which is the only user who has the privileges to edit this important file:

$ ssh delphix@<IP Address for Target>
$ su root

As root, let’s first confirm what the Delphix Admin Console has informed us of by running the following command:

$ sysctl -a | grep net.ipv4.tcp_rmem 

net.ipv4.tcp_rmem = 4096 87380 4194304

There are three values displayed in the results:

  • The first value is the minimum amount of receive window that will be set for each TCP connection, even when the system is overwhelmed.
  • The second is the default value allocated to each TCP connection.
  • The third is the maximum that can be allocated to any TCP connection.

To translate what this second value corresponds to-  this is the size of data in flight any sender can communicate via TCP to the cloud host before having to receive a window update.

So why do faster networks make this matter more?  The faster the network and the longer the round trip, the more data has to be in flight at once to keep the link busy.  If there's a significant delay, due to a low setting on the default of how much data can be placed on the "wire", then the receive window won't be used optimally and the sender spends its time waiting for window updates instead of sending.
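
To put a rough number on it, here's a quick back-of-the-envelope bandwidth-delay product calculation.  The link speed and round trip time below are made-up example values, not measurements from this environment-  the result is roughly how many bytes need to be in flight to keep the link full, and the default receive window should be in that neighborhood or larger:

# Example only:  1 Gbit/s link and a 10 ms round trip time
# BDP = bandwidth in bytes/sec * round trip time in seconds
$ echo $(( (1000000000 / 8) * 10 / 1000 ))
1250000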

This will require us to update our parameter file and either edit or add the following lines:

net.ipv4.tcp_window_scaling = 1

net.core.rmem_max = 16777216

net.ipv4.tcp_rmem = 4096 12582912 16777216
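
One way, (as root) to get these lines into place is to append them to the end of /etc/sysctl.conf-  if the parameters already exist in the file, edit the values in place instead:

$ cat <<'EOF' >> /etc/sysctl.conf
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
EOF
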
I'm using the values recommended by Brendan Gregg's blog post on tuning EC2 instances.  This leaves a pretty narrow difference between the minimum and maximum for the receive window, but it is now within the recommended range for enhanced performance.

After you've updated the sysctl.conf file, you'll need to reload it and then verify the new settings with the following commands:

$ sysctl -p /etc/sysctl.conf

$ sysctl -a | grep net.ipv4.tcp_rmem

net.ipv4.tcp_rmem = 4096 12582912 16777216

Ahhh, that looks much better… 🙂

Posted in AWS Trial, Delphix Tagged with: , ,

January 9th, 2017 by dbakevlar

We DBAs have a tendency to overthink everything.  I don't know if the trait to overthink is just found in DBAs or if we see it in other technical positions, too.

I believe it corresponds to some of why we become loyal to one technology or platform.  We become accustomed to how something works and it’s difficult for us to let go of that knowledge and branch out into something new.  We also hate asking questions-  we should just be able to figure it out, which is why we love blog posts and simple documentation.  Just give us the facts and leave us alone to do our job.

Take the cloud-  many DBAs were quite hesitant to embrace it.  There was a fear of security issues, challenges with the network and, more than anything, a learning curve.  As is common, hindsight is always 20/20.  Once you start working in the cloud, you often realize that it's much easier than you first thought it would be and your frustration is your own worst enemy.

So today we’re going to go over some basic skills the DBA requires to manage a cloud environment, using Amazon, (AWS) as our example and the small changes required to do what we once did on-premise.

In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.

Understanding Locations, Regions and Zones

EC2 is built out into regions and zones.  Knowing what region you're working in is important, as it allows you to "silo" the work you're doing and in some ways, isn't much different than a data center.  Inside each of these regions are availability zones, which isolate services and features even more, allowing definitive security at a precise level, with resources shared only when you deem they should be.

Just as privileges granted inside a database can both be a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.

Amazon provides a number of links with detailed information on this topic, but here are the tips I think are important for a DBA to know:

  1.  Before setting anything up that is part of a complete solution requiring multiple setup page configurations, ALWAYS check the region in the upper right corner.  I was surprised when it would change from page to page or after a login-

2.  If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to what region under the resources section:

Understanding Security Keys

Public key cryptography makes the EC2 world go round.  Without this valuable 2048-bit SSH-2 RSA key encryption, you can't communicate with or log into your EC2 host securely.  A key pair, a combination of a private and a public key, should be part of your setup for your cloud environment.

Using EC2's mechanism to create these is easy to do and eases management.  It's not the only way, but it does simplify things and, as you can see above in the resource information from the dashboard, it also offers you a one-stop shop for everything you need.

When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using and it’s important that you keep track of it, as there’s no way to recreate the private key that will be required to connect to the EC2 host.

Your key pair is easy to create-  first access your EC2 dashboard, then scroll down on the left side and click on "Key Pairs".  From this console, you'll have the opportunity to create a new key pair, import a pre-existing key or manage the ones already in EC2:

Before creating one, always verify the region you're working in, as we discussed in the previous section, and if you're experiencing issues with your key, check for typographical errors and confirm that the local private key file matches the name listed for identification.

If more than one group is managing the EC2 environment, carefully consider before deleting a key pair.  I’ve experienced the pain caused by a key removal that created a production outage.  Creation of a new key pair is simpler to manage than implementation of a new key pair across application and system tiers after the removal of one that was necessary.

Understanding Roles and Security

Security Groups are silo'd for a clear reason and nowhere is this more apparent than in the cloud.  To ensure that the cloud is secure, setting clear and defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.

As we discussed in Key Pairs, our Security Groups are also listed by region under resources so we know they exist at a high level.  If we click on the Security Groups link under Resources in the EC2 Dashboard, we’ll go from seeing 5 security group members:

To viewing the list of security groups:

If you need to prove that these are for N. California, (i.e. US-West-1) region, click on the region up in the top right corner and change to a different region.  For our example, I switched to Ohio, (us-east-2) and the previous security groups aren’t listed and just the default security group for Ohio region is displayed:

Security groups should be treated in the cloud the same way we treat privileges inside a database-  granting the least privileges required is best practice.

Understanding How to SSH to a Host

You’re a DBA, which means you’re most likely most comfortable at the command line.  Logging in via SSH on a box is as natural as walking and everything we’ve gone through so far was to prepare you for this next step.

No matter if your favorite command line tool is Putty or Terminal, if you've set up everything in the previous sections correctly, then you're ready to log into the host, aka the instance.

  1.  Ensure your downloaded private key is saved in an easily accessible spot for you to use to log in or that you know the username/password, (keys just make this easier…)
  2. Gather the information about “instances” by clicking on the EC2 dashboard, then click on Instances.
  3. The Public DNS and the Public IP are displayed-  note the region, too:

You can use this information to then ssh into the host:

ssh -i "<keypair_name>.pem" <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com
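
As a concrete, (and entirely made-up) example, using the key pair from earlier and a hypothetical OS user and public DNS name gathered from the dashboard, the command looks something like this:

ssh -i "Delphix_east1.pem" ec2-user@ec2-54-23-101-12.us-east-1.compute.amazonaws.com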

Once logged in as the OS user, you can SU over to the application or database user and proceed as you would on any other host.

If you attempt to log into a region with a key pair from another region, it will state that the key pair can't be found-  yet another aspect showing the importance of regions.

Understanding How to SCP a File

This is the last area I’ll cover today, (I know, a few of you are saying, “good, I’ve already got too much in my head to keep straight, Kellyn…)

With just about any Cloud offering, you can bring your own license.  Although there are a ton of images, (AMIs in AWS, VHDs in Azure, etc.) pre-built, you may need to use a bare metal OS image and load your own software or, as most DBAs do, bring over patches to maintain the database you have running out there.  Just because you're in the cloud doesn't mean you don't have a job to do.

Change over to the directory that contains the file that you need to copy and then run the following:

scp -i <keypair>.pem <file name to be transferred> <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com:/<directory you wish to place the file in>/.
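
Again, a made-up example with hypothetical patch file, user and host names, copying a patch zip into a patches directory on the instance:

scp -i Delphix_east1.pem p12345678_112040_Linux-x86-64.zip ec2-user@ec2-54-23-101-12.us-east-1.compute.amazonaws.com:/u01/app/patches/.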

If you try to use a key pair from one region to SCP a file to a host, (instance) in another region, you won't receive an error, but it will act like you skipped the "-i" and the key pair, and you'll be prompted for the user's password:

<> password: 

pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip             100%   20MB  72.9KB/s   04:36

This is a good start to working as a DBA in the cloud without over-thinking it.  I'll be posting more in the upcoming weeks that will not only assist those already in the cloud, but those wanting to find a way to invest more into their own cloud education!

Posted in Cloud, Delphix, Delphix Express

December 16th, 2016 by dbakevlar


On the first day with Delphix, I provisioned with glee, an IT Manager Happy.

On the second day with Delphix, I provisioned with glee, two SAP ASE and an IT Manager Happy.

On the third day with Delphix, I provisioned with glee, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the fourth day with Delphix, I provisioned with glee, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the fifth day with Delphix, I provisioned with glee…

Five Cloud Migrations!

Four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the sixth day with Delphix, I provisioned with glee,

Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the seventh day with Delphix, I provisioned with glee,

Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the eighth day with Delphix, I provisioned with glee,

Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the ninth day with Delphix, I provisioned with glee,

Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the tenth day with Delphix, I provisioned with glee,

Ten DevOps leading, Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the eleventh day with Delphix, I provisioned with glee,

Eleven DB2 humming, Ten DevOps leading, Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the twelfth day with Delphix, I provisioned with glee,

Twelve databases masking, Eleven DB2 humming, Ten DevOps leading, nine applications applying, eight testers testing, seven developers coding, six SQL Servers running, five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.


Posted in DBA Life, Delphix Tagged with: ,

November 17th, 2016 by dbakevlar

I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present-  SQL Server release for Linux.


I work for a company that supports Oracle and SQL Server, so I wanted to know how *real* this release was.  I first wanted to test it out on a new build and, as they recommend, along with a link to an Ubuntu install, I created a new VM and started from there.


Ubuntu Repository Challenge

There were a couple of packages missing until the repository list is updated to pull from universe by adding the repository locations into the sources.list file:

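A minimal sketch of one way to do that, assuming Ubuntu 16.04, ("xenial")-  adjust the release name to match your version, then refresh the package lists:

$ echo "deb http://archive.ubuntu.com/ubuntu xenial universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update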

There is also a carriage return at the end of the MSSQL repository entry when it's added to the sources.list file.  Remove this before you save.

Once you do this, if you've chosen to share your network connection with your Mac, you should be able to install successfully when running the commands found on the install page from Microsoft.
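
For reference, the Ubuntu commands on Microsoft's install page at the time looked roughly like the following-  treat this as a sketch, since the repository URLs and package names may well have changed since this was written:

$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list
$ sudo apt-get update
$ sudo apt-get install -y mssql-server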

CentOS And MSSQL

The second install I did was on a VM using CentOS 6.7 that was pre-discovered as a source for one of my Delphix engines.  The installation failed as soon as I ran it, complaining about the openssl package.


Even attempting to work around this wasn’t successful and the challenge was that the older openssl wasn’t going to work with the new SQL Server installation.  I decided to simply upgrade to CentOS 7.

CentOS 7

The actual process of upgrading is pretty easy, but there are some instructions out there that are incorrect, so here are the proper steps:

  1.  First, take a backup of your image, (snapshot) before you begin.
  2. Edit the yum repository configuration to prep it for the upgrade by creating the following file: /etc/yum.repos.d/upgrade.repo
    1. Add the following information to the file:
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0

Save this file and then run the following:

yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant

You may see that one package states it won't install as newer ones are available-  that's fine.  As long as you have at least the newer packages, you're fine.  Now run the pre-upgrade check:

preupg

The final log output may not get written, either.  If you're able to verify the run outside of this and it says that it completed successfully, know that the pre-upgrade was successful as a whole.

Once this is done, import the GPG Key:

rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

After the key is imported, then you can start the upgrade:

/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64

Once done, then you’ll need to reboot before you run your installation of SQL Server:

reboot

MSSQL Install

Once the VM has cycled, you can run the installation using the Red Hat repository configuration as root, (my delphix user doesn't have the rights and I decided to have MSSQL installed under root for this first test run):

su
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo

Now run the install:

sudo yum install -y mssql-server

Once it's completed, it's time to set up your MSSQL admin user and password:

sudo /opt/mssql/bin/sqlservr-setup

One more reboot and you’re done!

reboot

You should then see your SQL Server service running with the following command:

systemctl status mssql-server

You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.

OK, you linux fans, go MSSQL! 🙂

 

Posted in Delphix, SQLServer Tagged with: , ,

October 20th, 2016 by dbakevlar

I’ll be attending my very first Pass Summit next week and I’m really psyched!  Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix ware.)

It's an amazing event and for those of you who are my Oracle peeps wondering what Summit is, think Oracle Open World for the Microsoft SQL Server expert folks.


I was a strong proponent of immersing in different database and technology platforms early on.  You never know when the knowledge you gain in an area that you never thought would be useful ends up saving the day.

Just Goin to Take a Look

Yesterday this philosophy came into play again.  A couple of folks were having some challenges with a testing scenario for a new MSSQL environment and asked other Delphix experts for assistance via Slack.  I am known for multi-tasking, so I thought, while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work.  As soon as I logged into the web session, the guys welcomed me and said, "Maybe Kellyn knows what's causing this error…"

Me- “Whoops, guess I gotta pay attention…”

SQL Server, unlike Oracle, has always been multi-tenant for the broader database world.  This translates to a historical architecture that has a server level login AND a user database level username.  The Login ID, (login name) is linked to a user ID, (and as such, a user name) in the user database, (akin to a schema).  Oracle is starting to migrate to a similar architecture with Database version 12c, moving away from schemas within a database and towards multi-tenant, where the pluggable database, (PDB) serves as the schema.

I didn't recognize the initial error that arose from the clone process, but that's not uncommon, as error messages can change with versions and with proprietary code.  I also have worked very little, to none, on MSSQL 2014.  When the guys clicked in Management Studio on the target user database and were told they didn't have access, it wasn't lost on anyone to look at the login and user mapping, which showed the login didn't have a mapping to a username for this particular user database.  What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed and failed.

Old School, New Fixes

This is where “old school” MSSQL knowledge came into play.  Most of my database knowledge for SQL Server is from versions 6.5 through 2008.  Along with a lot of recovery and migrations, I also performed a process very similar to the option in Oracle to plug or unplug a PDB, in MSSQL terminology referred to as “attach and detach” of a MSSQL database.  You could then easily move the database to another SQL Server, but you very often would have what is called “orphaned users.”  This is where the login ID’s weren’t connected to the user names in the database and needed to be resynchronized correctly.  To perform this task, you could dynamically create a script to pull the logins if they didn’t already exist, run it against the “target” SQL Server and then create one that ran a procedure to synchronize the logins and user names.

-- Remap the orphaned database user to the matching server login
USE <user_dbname>
GO
EXEC sp_change_users_login 'Update_One','<loginname>','<username>'
GO
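
If you're not sure which users are orphaned in the first place, the same, (now old school and since deprecated) procedure also has a report mode.  A minimal sketch, run in the user database in question:

-- List orphaned users and their SIDs in the current database
USE <user_dbname>
GO
EXEC sp_change_users_login 'Report'
GO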

For the problem experienced above, it was simply the delphix user that wasn't linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.

This wasn't the long term solution, but it pointed to where the break was in the clone design, which can now be addressed.  It also shows that experience, no matter how benign having it may seem, can come in handy later in our careers.


I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.

See you next week in Seattle!


Posted in Delphix, SQLServer Tagged with: , , ,

October 4th, 2016 by dbakevlar

My sabbatical from speaking is about to end in another week and it will return with quite the big bang.


Oct. 14th

First up is Upper NY Oracle User Group, (UNYOUG) for a day of sessions in Buffalo, NY.  I’ll be doing three talks:

  1.  Virtualization 101
  2.  The Limitless DBA
  3.  AWR and ASH with Database 12c

 


Oct 25th-28th

I’ll be at Pass Summit!  I’ve been wanting to attend this conference since I was managing MSSQL 7 databases!  I finally get to go, but as I’m newly back in the MSSQL saddle, no speaking sessions for me.  I do have a number of peers on the MSSQL side of the house, so hoping to have folks show me around and if you have the time to introduce yourself or introduce me to events and people at this fantastic event, please do!

Nov. 2nd

Next, I head into November with quite the number of talks.  I'll start out on Nov. 2nd in Detroit, MI at the Michigan Oracle User Summit, (MOUS) doing a keynote, "The Power in the Simple Act of Doing" and then a technical session, "Virtualization 101".

Nov. 3rd

I’ll fly out right after I finish my second talk so I can make my way down to Raleigh, NC for the East Coast Oracle conference, (ECO), where I’ll also be doing a couple presentations on Nov. 3rd.

Nov. 9th

The next week I get to stay close to home for the Agile Test and Test Automation Summit.  This is a brand new event in the Denver area.  I'll be doing a new talk here on Test Data Management, a hot buzzword, but one whose complexities and automation people rarely understand.

Nov. 10th

The next day, I'm back downtown in Denver, where I'll present at the Rocky Mountain DataCon, (RMDC) event.


The RMDC is a newer event and it’s really been picking up traction in the Denver area.  I’ll be speaking on “The Limitless DBA”, focusing on the power of virtualization.  Kent Graziano, the Data Warrior and evangelist for Snowflake will be there, too.  I’m glad to see this new local event taking off in the Denver area, as the Denver/Boulder area consistently ranks high as one of best places to be if you’re in the tech industry.

I’m working to take it easy during the month of December, as I’ll have enough to do just catching up at work at Delphix and then with RMOUG duties with the upcoming Training Days 2017 conference in February 2017!

 

Posted in DBA Life, Delphix Tagged with: ,

September 30th, 2016 by dbakevlar

The topics of DevOps and Agile are everywhere, but how often do you hear Source Control thrown into the mix?  Not so much in public, but behind the closed doors of technical and development meetings when Agile is in place, it's a common theme.  When source control isn't part of the combination, havoc ensues and a lot of DBAs end up working nights on production with broken hearts.


Control Freaks

So what is source control and why is it such an important part of DevOps?  The official definition of source control is:

A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large web sites, and other collections of information.

Delphix, with its ability to provide developers with as many virtual copies of databases as they need, including masked sensitive data, is a no-brainer when ensuring development, and then test, have the environments to do their jobs properly.  The added features of bookmarking and branching are the impressive part that creates full source control.

Branching and Bookmarks

Using the diagram below, note how easy it is to mark each iteration of development with a bookmark, making it easy to then lock and deliver a consistent image to test via a virtual database, (VDB.)

  • Note the feature branches, but every pull and checkout should be a test of the build, including the data.
  • How do we include the data?  We connect the source databases, (even when the source was multi-terabytes originally) to Delphix and now we have production data in version control, synchronized from all sources.
  • This is then a single timeline representing all sources from which to develop, branch and test.
  • After each subsequent development deployment, a branch is created for test in the form of a VDB.  The VDB’s are all read/write copies, so full testing can be performed, even destructive testing.  It’s simple to reverse a destructive test with Delphix Timeflow.
  • After each test succeeds, a merge can be performed or if a problem occurs in the testing, a bookmark can be performed to preserve the use case for closer examination upon delivery of the VDB image to development.
  • The Delphix engine can keep the environment sync'd near real-time with production to deter any surprises that a static physical refresh might create.
  • Each refresh only takes a matter of minutes vs. days or weeks with a physical duplicate or refresh process.  VDBs save over 70% on storage space allocation, too.

Delphix is capable of all of this, while implementing Agile data masking to each and every development and test environment to protect all PII and PCI data from production in non-production environments.

Delphix, DevOps and Source Control-  a match made in heaven.

Posted in Delphix, devops Tagged with: , ,

September 15th, 2016 by dbakevlar

Along with the deep learning I’ve been allowed to do about data virtualization, I’ve learned a great deal about Test Data Management.  Since doing so, I’ve started to do some informal surveys of the DBAs I run into and ask them, “How do you get data to your testers so that they can perform tests?”  “How do you deliver code to different environments?”


As a basic skill for a database administrator, we're taught how to use export and import tools, along with cloning options, to deliver data where it's needed for various development and, in succession, testing activities.  If DBAs didn't deliver on time, due to resource constraints or inability, then developers would often find their own ways to manually create the data they needed to test new code.  Integration testing teams would need to manufacture data to validate complicated end-to-end functional testing scenarios, and performance & scalability testing teams would need to manufacture data that could stress their solutions at scale.  Rarely were these means successful and the deployment, along with the company, often felt the pain.

Failure and Data

As the volume of application projects increased, larger IT organizations recognized the opportunity to gain efficiencies of scale and searched out opportunities to streamline processes, speed up data refreshes and even synthesize data!  However, developers and testers still had little ability to self-service their needs and often found synthesized data incapable of meeting requirements, leading to floundering deployments once in production.  IT organizations were able to achieve some efficiencies and cost savings, but significant savings related to development and testing productivity and quality remained elusive.

DevOps To the Rescue

With the adoption of DevOps, a heightened focus on automation and speed of delivery occurred across IT.  Advanced Test Data Management solutions are starting to become a reality.  Companies are starting to realize the importance of data distribution, self-service and data security when data is delivered to non-production environments.
I truly believe that no IT organization can accuse development or testing departments of lacking delivery if those groups aren't offered the environments and data quality required to deliver a quality product.  One of the ways this can be accomplished is via virtualized database and application environments.  Simply virtualize test and development, eliminating up to 90% of the storage required for physical databases and yet still offering all the same data that is available in production.  If data security is a concern, this can all be done with data masking, built right into a proper Test Data Management product.

Test Drive TDM

If you’re interested in taking a proper Test Data Management product for a test drive, right on your laptop, try out Delphix Express.

 

Posted in Delphix, Delphix Express, devops, Test Data Management, Uncategorized Tagged with: ,

August 31st, 2016 by dbakevlar

Oracle Open World 2016 is almost here…where did the summer go??


With this upon us, there is something you attendees need, and that's to know about the awesome sessions at Oracle Open World from the Delphix team!  I gave my speaking options up, as is the tragedy of switching companies from Oracle in late spring, but you can catch some great content on how to reach the next level in data management with Tim Gorman, (my phenomenal hubby, duh!) and Brian Bent from Delphix.

After absorbing all this great content on Sunday, you can come over to Oak Table World at the Children’s Creativity Museum on Monday and Tuesday to see the Oak Table members present their latest technical findings to the world.  The schedule and directions to the event are all available in the link above.

If you’re looking for where the cool kids will be on Thursday, check out the Delphix Sync event!  There’s still time to register if you want to join us and talk about how cool data virtualization is.

If you’re a social butterfly and want to get more involved with the community, check out some of the great activities that, and I do mean THAT Jeff Smith and the SQL Developer team have been planning for Oracle Open World, like the Open World Bridge Run.

 

Posted in DBA Life, Delphix Tagged with: , , ,

August 26th, 2016 by dbakevlar

I've been busy reading and testing everything I can with Delphix, whenever I get a chance.  I'm incredibly fascinated by copy data management and the idea of doing this with Exadata is nothing new, as Oracle has its own version with sparse copy.  The main challenge is that Exadata's version of this is kind of clunky and really doesn't have the management user interface that Delphix offers.


There is a lot of disk that comes with an Exadata, not just CPU, network bandwidth and memory.  Now, you can't utilize offloading with a virtualized database, but you may not be interested in doing so.  The goal is to create a private cloud in which you can use small storage silos for virtualized environments.  We all know that copy data management is a huge issue for IT these days, so why not make the most of your Exadata, too?

With Delphix, you can even take an external source and provision a copy to an Exadata in just a matter of minutes, utilizing very little storage.  You can even refresh, roll back, version and branch through the user interface provided.

I simulated two different architecture designs for how Delphix would work with Exadata.  The first had the production source on one Exadata with the VDBs hosted on a second Exadata, and the second had the source on Exadata with the VDBs hosted on standard x86 hardware.

VDBs On A Second Exadata

  • Production on EXADATA,
  • Standard RMAN sync to Delphix
  • VDBs hosted on EXADATA DB compute nodes
  • 10Gb NFS is standard connectivity on EXADATA

 


VDBs on Standard Storage, Source on Exadata

  • Production on EXADATA, standard RMAN sync to Delphix
  • VDBs hosted on commodity x86 servers


How Does it All Work

Now we need to capture our gold copy to use for the DSource, which will require space, but Delphix does use compression, so it will be considerably smaller than the original database it's using for the data source.

If we then add the storage utilized by ALL of the VDBs to the storage used by the DSource, you'd see that together they only use about the same amount of space as the original database!  Each of these VDBs will interact with its user independently, just as a standard database copy would.  They can be at different points in time, track different snapshots, have different hooks, (pre or post scripts to be run for that copy) and contain different data, (which is just different blocks, so that change would be the only additional space used.)  Pretty cool if you ask me!

Save a server, save space and your sanity, clone with Delphix.

Posted in Copy Data Management, Delphix Tagged with: , ,

August 24th, 2016 by dbakevlar

While chatting on slack the other day, one of my peers asked if I’d seen that ESG Global had done a write up on Veritas Velocity.  Velocity is a product that won’t be available until the end of 2016 and is considered “Alpha”.  I was surprised that anyone allowed to participate in Alpha was able to publish a description on the product, but that’s not my call to make.


What I found interesting about the article, written by ESG, is how it describes Veritas Velocity as "… combining its sophisticated approaches to data management with its broader ability to deliver superior data protection and information availability in order to offer something revolutionary."

Revolutionary?

I found this statement to be quite odd, as what they’re doing is simply using the same technology that Delphix has utilized for years to perform what Delphix has implemented at our customer sites since 2008.  They are simply hopping on the bandwagon, (along with a number of other companies) in an attempt to take advantage of the newest buzz word, “Copy Data Management”.

There’s nothing revolutionary about what we do.  It was revolutionary back in 2008 and may be seen as revolutionary to the customers who haven’t embraced the power of virtualized environments yet, but to say what they’ve created is revolutionary isn’t true.

If we inspect (at a high level) what Veritas Velocity does:

  1. A self-contained VM appliance to manage storage and thin cloning.
  2. A configuration of storage with an NFS mount presented as VMs.
  3. Hybrid management to the cloud.
  4. The VMs are then presented as targets to be used for thin clones.
  5. Eliminating copies of data by virtualizing database environments, focused on the cloud.
  6. A User interface to manage it all.

I could replace the lead-in to the above list with Delphix and it would describe the Delphix Engine, as well.  We also offer a mature user interface, advanced scripting capabilities and heterogeneous support.


There are a lot of companies out there making claims that they have revolutionized new capabilities like “data virtualization”,  “copy data management “ and “test data management”. Delphix has been in this space since the beginning and as the Gartner reports prove, will continue to be the driving force behind what other companies are striving to achieve in their products.

 

Want to learn how many solutions Delphix virtualization can provide for your company’s data?  Try out Delphix Express, a simple Virtualbox or VMware open source version for your workstation to check out who’s been doing it right all along and before it was cool!

Posted in Delphix, Delphix Express Tagged with: , , , , ,

August 15th, 2016 by dbakevlar

I’ve been involved in two data masking projects in my time as a database administrator.  One was to mask and secure credit card numbers and the other was to protect personally identifiable information, (PII) for a demographics company.  I remember the pain, but it was better than what could have happened if we hadn’t protected customer data….


Times have changed and now, as part of a company that has a serious market focus on data masking, my role has time allocated to research on data protection, data masking and understanding the technical requirements.

Reasons to Mask

The percentage of companies that contain data that SHOULD be masked is much higher than most would think.


The amount of data that should be masked vs. what is masked can be quite different.  There was a great study done by the Ponemon Institute, (that says Ponemon, you Pokemon Go freaks…:)) that showed 23% of data was masked to some level and 45% of data was significantly masked by 2014.  This still left over 30% of data at risk.

The Mindset Around Securing Data

We also don't think very clearly about how and what to protect.  We often silo our security-  the network administrators secure the network.  The server administrators secure the host, but don't concern themselves with the application or the database, and the DBA may be securing the database, but the application that's accessing it may be open to accessing data that shouldn't be available to those involved.  We won't even start about what George in accounting is doing.

We need to change from thinking just of disk encryption and start thinking about data encryption and application encryption with key data stores that protect all of the data-  the goal of the entire project.  It's not like we're going to see people running out of a building with a server, but seriously, it doesn't just happen in the movies-  people have stolen drives, jump drives and even printouts of spreadsheets with incredibly important data residing on them.

As I've been learning, what is essential to masking data properly, along with what makes our product superior, is that it identifies potential data that should be masked, along with ongoing audits to ensure that data doesn't become vulnerable over time.


This can be the largest consumption of resources in any data masking project, so I was really impressed with this area of Delphix data masking.  It's really easy to use, so if you don't understand the ins and outs of DBMS_CRYPTO or are unfamiliar with the java.util.Random syntax, no worries-  the Delphix product makes it really easy to mask data and has a centralized key store to manage everything.


It doesn’t matter if the environment is on-premise or in the cloud.  Delphix, like a number of companies these days, understands that hybrid management is a requirement, so efficient masking and ensuring that at no point is sensitive data at risk is essential.

The Shift

How many data breaches do we need to hear about to make us all pay more attention to this?  Security topics at conferences have diminished vs. when I started attending less than a decade ago, so it wasn't that long ago that security appeared to be more important to us, and yet it has only become a bigger issue.


Research was also performed that found only 7-19% of companies actually knew where all their sensitive data was located.  That leaves over 80% of companies with sensitive data vulnerable to a breach.  I don't know about the rest of you, but upon finishing up that little bit of research, I understood why many feel better about not knowing, and why it's better just to accept this and address masking needs to ensure we're not one of the vulnerable ones.

Automated solutions to discover vulnerable data can significantly reduce risks and reduce the demands on those that often manage the data, but don't know what the data is for.  I've always said that the best DBAs know the data, but how much can we really understand it and still do our jobs?  It's often the users that understand it, but they may not comprehend the technical requirements to safeguard it.  Automated solutions remove that skill requirement from having to exist in human form, allowing us all to do our jobs better.  I thought it was really cool that our data masking tool considers this and takes this pressure off of us, letting the tool do the heavy lifting.

Along with a myriad of database platforms, we also know that people are bound and determined to export data to Excel, MS Access and other flat file formats, resulting in more vulnerabilities that seem out of our control.  The Delphix data masking tool considers this and supports many of these applications, as well.  George, the new smarty-pants in accounting, wrote out his own XML pull of customers and credit card numbers?  No problem, we got you covered… 🙂


So, along with telling you how to automate a script to email George to change his password from "1234" in production, I can now make recommendations on how to keep him from having the ability to print out a spreadsheet with all the customers' credit card numbers on it and leave it on the printer…:)

Happy Monday, everyone!


Posted in Data Masking, Delphix, Oracle Tagged with: ,

July 25th, 2016 by dbakevlar

A number of the emails I received about trying out Delphix Express were regarding VMWare.  Many of my followers had used Virtualbox for a long time and we all know, no one likes change, (OK, maybe me, but we know how abnormal I am anyway… :))


Adding a VM in VMWare

Importing an OVA is pretty simple in VMWare.  In the VMWare Fusion application, click on File, Import and accept the defaults.  The process will take as long as it needs, depending on the size of the VM, and if anything happens to the VM you imported, the great thing about a VM is that you just have to DELETE AND IMPORT AGAIN to erase that which you have destroyed. 🙂

How Do I Remove a VM?

Open the VM Control Console, click on the VM you want to delete, then click on Remove.  Remember to click on delete files if you’d like that space back on your hard drive, too!  The utility will take just a moment to clean up the VM and you can then proceed with work or re-import in the OVA file.

How to Check my IP Address?

I know, if you're using Delphix Express, the IP address for the machine is displayed when it's first started, but I also know that we DBAs are a curious lot and known for snooping around every chance we get.  Due to this, you may not have noted the IP address and now need it to log into a terminal window, or you want a second terminal to run or check items.

Knowing how to return the IP address is a good thing to know, so here are all the ways depending on what OS you’re on:

Linux-  type ifconfig from the terminal and you'll see the IP address listed as the inet address for the eth0 interface, (the common default configuration.)

Windows-  run ipconfig /all from the Command prompt, or click on the Window icon, type in "Network" to go to Network and Ethernet settings and then click on Ethernet.  Your IP address is listed in the IPv4 Address setting.

Mac-  run ifconfig from the terminal, or click on the Apple up at the top left corner of the screen, click on System Preferences, click on Network, then, if you're using WiFi, click on it and then TCP/IP to view your actual IP address listed as the IPv4 address.
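
For the terminal route, a quick sketch-  the interface names here, (eth0 on Linux, en0 for WiFi on a Mac) are the common defaults and may differ on your machine:

# Linux
$ ifconfig eth0
$ ip addr show eth0     # newer distributions may not ship ifconfig
# Mac
$ ifconfig en0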

Control the Cursor

If the VMware screen is blank, (no text or image on the screen) or you've lost your cursor, the best way to get control back is to press Ctrl+Command, (on a Mac) to retrieve cursor control or return your screen to an active state.

Check for VMWare Updates

Every software product has updates and, just like the other software we support, updates to VMware are important.  We may not utilize our VMs as often as we think, so it's good to get into the practice of checking for updates when you first log into VMware by clicking on VMWare Fusion and Check for Updates.  It only takes a moment and hopefully it will report that you're already up to date after it's gone out to check.


Don’t Sell the Farm

By that, I mean to remember that you're on one PC and you're running virtual PCs on it.  Don't give so many resources to your VMs that your PC doesn't have enough to do its own job.  A VM should be pretty conservative with its resources and it's important to look at the configuration and see if you can dial down any usage that isn't necessary.

To do this, the VM must be shut down, (not just suspended); then click on Virtual Machine, Settings.  In your settings, there are a couple of areas that need to be considered for resource usage:


The first, obviously, is to look at Processors & Memory.  Ensure you’ve left enough memory for your PC and as PCs come with quad-core or higher processors these days, a single core is often sufficient for the processing on a VM.


The amount of space that is being used by a VM is a consideration, too.  If a VM is so large that you need to purchase an external drive to run it on, then that’s a better choice vs. using up all your local disk or it may be time to build out Delphix just to virtualize the environment to start! 🙂

Verify that all disks for the VM are actually in use.  I've seen cases where there are drives created for future growth but never used, or extra space that was allocated that just needs to be shrunk down.  This can be accomplished by clicking on Virtual Machine, Settings and then, from there, clicking on each of the disks in use by the VM, shrinking any that may have been over-allocated.  This is another task that can only be performed when the VM is shut down.

Well, there's a start to getting comfortable with VMWare Fusion.  Do you have any tips or tricks that you can add to this?  Comment and let us know, and have a great Monday!


Posted in Delphix, VMWare Tagged with: ,
