Category: Delphix Express

January 9th, 2017 by dbakevlar

We DBAs have a tendency to overthink everything.  I don’t know if the tendency to overthink is found only in DBAs or if we see it in other technical positions, too.

I believe this is part of why we become loyal to one technology or platform.  We become accustomed to how something works and it’s difficult for us to let go of that knowledge and branch out into something new.  We also hate asking questions-  we should just be able to figure it out, which is why we love blog posts and simple documentation.  Just give us the facts and leave us alone to do our job.

Take the cloud-  many DBAs were quite hesitant to embrace it.  There was a fear of security issues, challenges with networking and, more than anything, a learning curve.  As is common, hindsight is 20/20.  Once you start working in the cloud, you often realize that it’s much easier than you first thought it would be and that your frustration is your own worst enemy.

So today we’re going to go over some basic skills the DBA requires to manage a cloud environment, using Amazon Web Services, (AWS) as our example, and the small changes required to do what we once did on-premises.

In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.

Understanding Locations, Regions and Zones

EC2 is built out into regions and zones.  Knowing what region you’re working in is important, as it allows you to “silo” the work you’re doing and, in some ways, isn’t much different than a data center.  Inside each of these regions are availability zones, which isolate services and features even further, allowing definitive security at a precise level, with resources shared only when you deem they should be.

Just as privileges granted inside a database can both be a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.

Amazon provides a number of links with detailed information on this topic, but here are the tips I think are important for a DBA to know:

  1.  Before setting anything up that is part of a complete solution requiring multiple setup page configurations, ALWAYS check the region in the upper right corner.  I was surprised when it would change from page to page or after a login.

  2.  If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to which region under the Resources section.
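
If you also use the AWS command line tools, here’s a quick, minimal sketch for double-checking where you’re pointed and what’s deployed, (this assumes the AWS CLI is installed and configured; the region name is just an example):

# show the default region your CLI profile is configured for
aws configure get region

# list the instance IDs deployed in a specific region
aws ec2 describe-instances --region us-west-1 --query 'Reservations[].Instances[].InstanceId' --output text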

Understanding Security Keys

Public key cryptography makes the EC2 world go round.  Without this valuable 2048-bit SSH-2 RSA key encryption, you can’t communicate with or log into your EC2 host securely.  Key pairs, a combination of a private and a public key, should be part of your setup for your cloud environment.

Using EC2’s mechanism to create these is easy to do and eases management.  It’s not the only way, but it does simplify things and, as we saw above in the resource information on the dashboard, it also offers you a one-stop shop for everything you need.

When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using and it’s important that you keep track of it, as there’s no way to recreate the private key that will be required to connect to the EC2 host.

Your key pair is easy to create: access your EC2 dashboard, scroll down on the left side and click on “Key Pairs”.  From this console, you’ll have the opportunity to create a new key pair, import a pre-existing key or manage the ones already in EC2.
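
If you prefer the command line over the console, here’s a hedged sketch of the same step with the AWS CLI, (the key name is just an example):

# create the key pair and save the private key locally
aws ec2 create-key-pair --key-name my-ec2-keypair --query 'KeyMaterial' --output text > my-ec2-keypair.pem

# lock the file down so ssh will accept it later
chmod 400 my-ec2-keypair.pem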

Before creating one, always verify the region you’re working in, as we discussed in the previous section, and if you’re experiencing issues with your key, check for typographical errors and confirm that the private key file on your workstation matches the key name listed for identification.

If more than one group is managing the EC2 environment, think carefully before deleting a key pair.  I’ve experienced the pain caused by a key removal that created a production outage.  Creating a new key pair is simple; rolling a replacement key out across application and system tiers after removing one that was still needed is not.

Understanding Roles and Security

Security groups are siloed for a clear reason, and nowhere is this more apparent than in the cloud.  To ensure that the cloud is secure, setting clear and defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.

As we discussed with key pairs, our security groups are also listed by region under Resources, so we know at a high level that they exist.  If we click on the Security Groups link under Resources in the EC2 dashboard, we go from seeing a count of security group members to viewing the full list of security groups.

If you need to prove that these are for the N. California, (i.e. us-west-1) region, click on the region up in the top right corner and change to a different region.  For our example, I switched to Ohio, (us-east-2), and the previous security groups aren’t listed; only the default security group for the Ohio region is displayed.

Security groups should be treated in the cloud the same way we treat privileges inside a database-  granting the least privileges required is best practice.
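
As a hedged example of that least-privilege approach using the AWS CLI, you can open SSH to a single workstation address rather than the whole internet, (the group ID and address below are placeholders):

# allow SSH only from one workstation's public IP
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32

# review what the group currently allows
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0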

Understanding How to SSH to a Host

You’re a DBA, which means you’re probably most comfortable at the command line.  Logging into a box via SSH is as natural as walking, and everything we’ve gone through so far was to prepare you for this next step.

Open your favorite command line tool, whether it’s PuTTY or Terminal.  If you’ve set up everything in the previous sections correctly, then you’re ready to log into the host, aka the instance.

  1.  Ensure your downloaded private key is saved in an easily accessible spot for you to use to log in or that you know the username/password, (keys just make this easier…)
  2. Gather the information about “instances” by clicking on the EC2 dashboard, then click on Instances.
  3. The Public DNS and the Public IP are displayed; note the region, too.

You can use this information to then ssh into the host:

ssh -i "<keypair_name>.pem" <osuser>@<public DNS or IP address>

Once logged in as the OS user, you can su over to the application or database user and proceed as you would on any other host.
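
To make that concrete, here’s a hedged example with made-up values, (the key file must be readable only by you or ssh will refuse to use it):

# tighten permissions on the downloaded private key
chmod 400 my-ec2-keypair.pem

# connect using the instance's public DNS name
ssh -i "my-ec2-keypair.pem" ec2-user@ec2-52-53-1-10.us-west-1.compute.amazonaws.com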

If you attempt to log into an instance in one region with a key pair from another region, it will state that the key pair can’t be found, which is another example of the importance of regions.

Understanding How to SCP a File

This is the last area I’ll cover today, (I know, a few of you are saying, “good, I’ve already got too much in my head to keep straight, Kellyn…”)

With just about any cloud offering, you can bring your own license.  Although there are a ton of pre-built images, (AMIs in AWS, VHDs in Azure, etc.) you may need to use a bare OS image and load your own software or, as most DBAs will, bring over patches to maintain the database you have running out there.  Just because you’re in the cloud doesn’t mean you don’t have a job to do.

Change over to the directory that contains the file that you need to copy and then run the following:

scp -i <keypair>.pem <file to be transferred> <osuser>@<public DNS or IP address>:/<directory you wish to place the file in>/.
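
To make the copy concrete, here’s a hedged example with placeholder file names and paths, including pulling a file back down from the instance:

# push a patch up to a staging directory on the instance
scp -i my-ec2-keypair.pem mypatch.zip ec2-user@ec2-52-53-1-10.us-west-1.compute.amazonaws.com:/u01/stage/.

# pull a log file back to the workstation
scp -i my-ec2-keypair.pem ec2-user@ec2-52-53-1-10.us-west-1.compute.amazonaws.com:/u01/stage/install.log .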

If you try to use a key pair from one region to scp to a host, (instance) in another region, you won’t receive an error, but it will act like you skipped the “-i” and the key pair, and you’ll be prompted for the password for the user:

<> password: 

pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip             100%   20MB  72.9KB/s   04:36

This is a good start for the DBA getting going in the cloud without over-thinking it.  I’ll be posting more in the upcoming weeks that will assist not only those already in the cloud, but also those wanting to find a way to invest more in their own cloud education!

Posted in Cloud, Delphix, Delphix Express

September 15th, 2016 by dbakevlar

Along with the deep dive I’ve been able to do into data virtualization, I’ve learned a great deal about Test Data Management.  Since then, I’ve started doing informal surveys of the DBAs I run into, asking them, “How do you get data to your testers so that they can perform tests?” and “How do you deliver code to different environments?”


As a basic skill for a database administrator, we’re taught how to use export and import tools, along with cloning options, to deliver data where it’s needed for various development and, in succession, testing activities.  If DBAs didn’t deliver on time, due to resource constraints or inability, then developers would often find their own ways to manually create the data they needed to test new code.  Integration testing teams would need to manufacture data to validate complicated end-to-end functional testing scenarios, and performance and scalability testing teams would need to manufacture data that could stress their solutions at scale.  Rarely were these efforts successful, and the deployment, along with the company, often felt the pain.

Failure and Data

As the volume of application projects increased, larger IT organizations recognized the opportunity to gain efficiencies of scale and searched out opportunities to streamline processes and speed up data refreshes, even synthesizing data!  However, developers and testers still had little ability to self-service their needs and often found synthesized data incapable of meeting requirements, leading to floundering deployments once in production.  IT organizations were able to achieve some efficiencies and cost savings, but significant savings related to development and testing productivity and quality remained elusive.

DevOps To the Rescue

With the adoption of DevOps, a heightened focus on automation and speed of delivery occurred across IT.  Advanced Test Data Management solutions are starting to become a reality.  Companies are starting to realize the importance of data distribution, self-service and data security when data is delivered to non-production environments.
I truly believe that no IT organization can accuse development or testing departments of lacking delivery if those groups aren’t offered the environments and data quality required to deliver a quality product.  One of the ways this can be accomplished is via virtualized database and application environments.  Simply virtualize test and development, eliminating up to 90% of the storage required for physical copies, while still offering all the same data that is available in production.  If data security is a concern, this can all be done with data masking, built right into a proper Test Data Management product.

Test Drive TDM

If you’re interested in taking a proper Test Data Management product for a test drive, right on your laptop, try out Delphix Express.

 

Posted in Delphix, Delphix Express, devops, Test Data Management, Uncategorized

August 24th, 2016 by dbakevlar

While chatting on Slack the other day, one of my peers asked if I’d seen that ESG Global had done a write-up on Veritas Velocity.  Velocity is a product that won’t be available until the end of 2016 and is considered “Alpha”.  I was surprised that anyone allowed to participate in an Alpha was able to publish a description of the product, but that’s not my call to make.


What I found interesting about the article, written by ESG, is how it claims that Veritas Velocity “… is combining its sophisticated approaches to data management with its broader ability to deliver superior data protection and information availability in order to offer something revolutionary.”

Revolutionary?

I found this statement to be quite odd, as what they’re doing is simply using the same technology that Delphix has implemented at our customer sites since 2008.  They are simply hopping on the bandwagon, (along with a number of other companies) in an attempt to take advantage of the newest buzzword, “Copy Data Management”.

There’s nothing revolutionary about what we do.  It was revolutionary back in 2008, and it may seem revolutionary to customers who haven’t embraced the power of virtualized environments yet, but to say that what they’ve created is revolutionary isn’t true.

If we inspect (at a high level) what Veritas Velocity does:

  1. A self-contained VM appliance to manage storage and thin cloning.
  2. A configuration of storage with an NFS mount presented as VMs.
  3. Hybrid management to the cloud.
  4. The VMs are then presented as targets to be used for thin clones.
  5. Eliminating copies of data by virtualizing database environments, focused on the cloud.
  6. A User interface to manage it all.

I can replace the lead-in to the above list with Delphix and it describes the Delphix Engine as well.  We also offer a mature user interface, advanced scripting capabilities and heterogeneous support.


There are a lot of companies out there claiming they have revolutionized capabilities like “data virtualization”, “copy data management” and “test data management”.  Delphix has been in this space since the beginning and, as the Gartner reports prove, will continue to be the driving force behind what other companies are striving to achieve in their products.

 

Want to learn how many solutions Delphix virtualization can provide for your company’s data?  Try out Delphix Express, a simple VirtualBox or VMware open source version for your workstation, and check out who’s been doing it right all along- and before it was cool!

Posted in Delphix, Delphix Express

July 18th, 2016 by dbakevlar

For those of you that downloaded and are starting to work with Delphix Express, (because you’re the cool kids… :)) you may have noticed that there is an Oracle 11g Express Edition database out there you could use as a Dsource, (a source database for Delphix to clone and virtualize…)


If you’d like to work with this free version in your Delphix Express environment, these are the steps I performed to utilize it.  My setup is as follows:

  • VMware Fusion 8
  • Chrome Browser
  • Delphix is upgraded to the latest VM release

Although we start the environment from the Delphix Engine VM, the Target VM contains the discovery scripts/configuration files and the Dsource VM has the 11g XE environment we wish to add.

Configure and Start the Dsource

Log into a terminal session on your Dsource VM.  You can log in as the delphix user, (default password is delphix) and then ‘su’ over to the oracle user.  You now need to check to see if the XE environment is running:

ps -ef | grep pmon

If the database is up and running, you’re done over here on your Dsource, but if it isn’t, then you need to start it.

First, you’ll need to set the environment, which is just standard for any database administrator:

Check each of the environment settings for the following:

$ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe

$ORACLE_BASE=/u01/app/oracle

$ORACLE_SID=XEXE
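
If the database isn’t already running, here’s a minimal sketch of setting that environment and starting XE by hand, (this assumes the values above, that you’re logged in as the oracle user, and that the listener also needs to come up):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_BASE=/u01/app/oracle
export ORACLE_SID=XEXE
export PATH=$ORACLE_HOME/bin:$PATH

# start the listener, then start and open the database
lsnrctl start
sqlplus / as sysdba <<EOF
startup
exit
EOF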

Once these check out, the database should start without any errors, (if you followed my last post, you configured this as part of the setup).

Reconfigure the Engine

By default, these aren’t configured as Dsources or targets, to keep the Dsource and Target VMs from consuming too much space.  Needless to say, you’ll need to tell the Delphix Engine that it’s alright to use them now.

Open up a terminal to the Target for your Delphix Express and get the IP Address:

ifconfig

Take this address and type it into a web browser window and add the port to it:

Example: 172.15.190.129:8000
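
If you’d rather confirm from the command line that the page is answering before you open a browser, here’s a quick hedged check, (using the example address above):

# fetch the first few lines of the configuration page
curl -s http://172.15.190.129:8000 | head -20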

The landshark configuration file will come up, and you’ll need to ensure the relevant settings are set to true.


We need to tell the Delphix discovery script that we want to enable the Dsources, (Source VM) and VDBs, (Target VM) for configuration/discovery, and then which ones we will be working with, (the oracle_xe entry, which is the XEXE database we checked out on the Dsource VM.)

Remember to submit your changes before exiting out of the configuration, otherwise you’ll just have to do it all over again and you know how much I hate it when anyone does things more than once! 🙂

Run the Setup Script

Return to the terminal window for the Target VM.  Check and see if any setup scripts are attempting to run:

ps -ef | grep setup

You should only see the expected startup processes running, and you’re good.


Run the setup to configure the new Dsources and VDBs to your Delphix Express environment as the delphix OS user on the Target VM:

./landshark_setup.py

You’ll note that some of the configuration was completed previously and is skipped, but there are also some additions to your environment now that you’ve requested these areas be configured.  It doesn’t take long, (note the timings in the output written to landshark_setup.log).


And by Grabthar’s hammer, you’ll now have those Dsources in your Delphix Express environment to work with.


Next post I’ll talk more about the actual cloning and VDBs- I promise… 🙂  Have a good week!

 

Posted in Delphix Express, Oracle, VMWare

July 11th, 2016 by dbakevlar

Delphix Express offers a virtual environment to work with all the cool features like data virtualization and data masking on just a workstation or even a laptop.  The product has an immense offering, so no matter how hard Kyle, Adam and the other folks worked on this labor of love, there’s bound to be some manual configuration required to ensure you get the most from the product.  This is where I thought I’d help and offer a virtual hug to go along with the virtual images… :)


If you’re already set on installing and working with Delphix Express, you will find the Vimeo videos on importing the VMs and configuring Delphix Express quite helpful.  Adam Bowen did a great job with these videos to get you started, but below I’ll go through some technical details a bit deeper, to give folks an added arsenal in case they’ve missed a step or are challenged just starting out with VMware.

Note- Delphix Express requires VMware Fusion, which you can download after purchasing a license, ($79.99) but it’s well worth the investment.

Resource Usage

Issue- Not enough memory to run all three VMs required as part of Delphix Express; after an upgrade, Delphix Express can use over 6GB.

Different laptops/workstations have different amounts of memory, CPU and space available.  Memory is the most common constraint with today’s PCs.  Although the VMs are configured for optimal performance, the target and source environments can have their memory trimmed to 2GB each and still perform when resources are constrained.

The VM must be shut down for this configuration change to be implemented.  After stopping or before starting the VM, click on Virtual Machine, Settings.  Click on Processors and Memory, and then you can configure the memory usage via a slider option.


Move the slider to 2GB for the VM in question, then close the configuration window and start the VM.  Perform this for each VM, (the Delphix Engine VM should already be at 2GB.)
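
If you’d rather make the same change from the command line, the memory setting lives in the VM’s .vmx file- here’s a hedged sketch with a hypothetical file name, to be run only while the VM is shut down:

# memsize is in MB, so 2048 = 2GB
grep '^memsize' LandsharkTarget.vmx
sed -i.bak 's/^memsize = .*/memsize = "2048"/' LandsharkTarget.vmx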

Configuration

Issue- Population of sources and targets is empty after successful configuration.

Checking the Log

After starting the target and source VMs, a console window with a command line opens and you can log in right from VMware.  VirtualBox would require a terminal opened to the desktop, but either way, you can get to the command line without using PuTTY or another desktop terminal from your workstation.

On the target VM command line, log in as the delphix user.  The target VM has a python script that runs in the background upon startup, checks for a Delphix Engine once every minute and, if it locates one, runs the configuration.  You can view this in the crontab:

crontab -l
@reboot ..... /home/delphix/landshark_setup.py

It writes to the following log file:

/home/delphix/landshark_setup.log

You can view this file, (or tail it or cat it, whatever you are comfortable doing to view the end of the file…)  I prefer just to look at the last ten lines:

tail -10 landshark_setup.log

If the configuration is having issues locating the Delphix Engine, it will show in this log file.  Once that’s confirmed, we have a couple of things to check:

VMware issue: one of the virtual machines isn’t visible to another.  Each VM needs to be able to communicate and interact with the others.  When importing each VM, the step that makes the VM “host aware” with the Mac may not have occurred.  If the Delphix Engine VM isn’t visible to the target or the source, you can check the log and then verify in the following way.

Click on Virtual Machine, Settings and then click on Network Adapter.  Verify that the top radio option, “Share with my Mac”, is selected.


Verify that this is configured for EACH of the three virtual machines involved.  If this hasn’t corrected the issue and the configuration still doesn’t populate the virtual environments in the Delphix interface, then it’s time to look at the configuration for the target machine.

Get IP Address

While SSH connected to the target machine, type in the following:

ifconfig

Use the IP address shown, (inet address) and open a browser on your PC, adding the port used for the target configuration file, (port 8000 by default):

<ipaddress>:8000

You should be shown the configuration file for your target server that is used to run the Delphix Engine configuration.  There are options to update the values for different parameters.  The ones you should focus on are:

Environments

linux_source_ip= make sure this matches the source VM’s ip address when you type in “ifconfig”.

Engine

engine_address= ip address for the delphix engine VM when you type in ifconfig on the host

engine_password= should match the password that you updated your delphix_admin to when you went through the configuration.  Update it to match if it doesn’t; I’ve seen some folks not set it to “landshark” as demonstrated in the videos, and of course the setup will fail when the file doesn’t match the password set by the user.

Content

oracle_xe = If you set oracle_xe to true, then don’t set the 11g or 12c entries to true.  To conserve workstation resources, choose only one database type.
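
Pulled together, the handful of settings above might end up looking something like this- an illustrative sketch only, with hypothetical addresses, (use the values from your own VMs):

linux_source_ip = 172.15.190.130
engine_address = 172.15.190.128
engine_password = landshark
oracle_xe = true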

Once you’ve made all the changes you want to the page, click on Submit Changes.


You need to run the reconfiguration manually now.  Remember, this runs in the background each minute, but when it does that, you can’t see what’s going on, so I recommend killing the running process and running it manually.

Manual Runs of the Landshark Setup

From the target host, type in the following:

ps -ef | grep landshark_setup

Kill the running processes:

killall landshark_setup.py

Check for any running processes, just to be safe:

ps -ef | grep landshark_setup

Once you’ve confirmed that none are running, let’s run the script manually from the delphix user home:

./landshark_setup.py

Verify that the configuration runs, monitoring it as it works through each step.


This is the first time you’ve performed these steps, so expect that a refresh won’t be performed, but a creation will.  You should now see the left panel of your Delphix Engine UI populated.


Now we’ve come to the completion of the initial configuration.  In my next post on Delphix Express, I’ll discuss the Dsource and target database configurations for different target types.  Working with these files and configurations is great practice for learning about Delphix, even if you’re amazed at how easy this all was.

If you want to see more about Delphix Express, check out the posts from Kyle Hailey or our very own Oracle Alchemist guy, Steve Karam… :)


Posted in Delphix, Delphix Express, Oracle
