We DBAs have a tendency to overthink everything. I don't know if the tendency to overthink is unique to DBAs or if we see it in other technical positions, too.
I believe it's part of why we become loyal to one technology or platform. We become accustomed to how something works and it's difficult for us to let go of that knowledge and branch out into something new. We also hate asking questions- we should just be able to figure it out, which is why we love blog posts and simple documentation. Just give us the facts and leave us alone to do our job.
Take the cloud- many DBAs were quite hesitant to embrace it. There was a fear of security issues, challenges with networking and, more than anything, a learning curve. As is common, hindsight is 20/20. Once you start working in the cloud, you often realize that it's much easier than you first thought and that your own frustration is your worst enemy.
So today we're going to go over some basic skills a DBA requires to manage a cloud environment, using Amazon Web Services, (AWS) as our example, and the small changes required to do what we once did on-premises.
In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.
EC2 is built out into regions and availability zones. Knowing what region you're working in is important, as it allows you to "silo" the work you're doing and, in some ways, a region isn't much different than a data center. Inside each of these regions are availability zones, which isolate services and features even further, allowing security to be defined at a precise level, with resources shared only when you deem they should be.
Just as privileges granted inside a database can both be a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.
Amazon provides a number of links with detailed information on this topic, but here are the tips I think are important for a DBA to know:
If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to which region under the Resources section:
Public key cryptography makes the EC2 world go round. Without this valuable 2048-bit SSH-2 RSA key encryption, you can't communicate with or log into your EC2 host securely. Key pairs, a combination of a private and a public key, should be part of your setup for your cloud environment.
Using EC2's mechanism to create these is easy and eases management. It's not the only way, but it does simplify things and, as you can see above in the resource information from the dashboard, it also offers you a one-stop shop for everything you need.
When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using and it’s important that you keep track of it, as there’s no way to recreate the private key that will be required to connect to the EC2 host.
Your key pair is easy to create: access your EC2 dashboard, scroll down on the left side and click on "Key Pairs". From this console, you'll have the opportunity to create a new key pair, import a pre-existing key or manage the ones already in EC2:
Before creating one, always verify the region you're working in, as we discussed in the previous section, and if you're experiencing issues with your key, check for typographical errors and confirm that the name of the private key file matches the name listed for identification.
If more than one group is managing the EC2 environment, carefully consider before deleting a key pair. I've experienced the pain caused by a key removal that created a production outage. Creating a new key pair is far simpler than rolling a replacement out across the application and system tiers after a key that was still needed has been removed.
Security groups are siloed for a clear reason and nowhere is this more apparent than in the cloud. To keep the cloud secure, setting clear, defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.
As we discussed in Key Pairs, our Security Groups are also listed by region under resources so we know they exist at a high level. If we click on the Security Groups link under Resources in the EC2 Dashboard, we’ll go from seeing 5 security group members:
To viewing the list of security groups:
If you need to prove that these are for the N. California, (us-west-1) region, click on the region up in the top right corner and change to a different region. For our example, I switched to Ohio, (us-east-2); the previous security groups aren't listed and just the default security group for the Ohio region is displayed:
Security groups should be treated in the cloud the same way we treat privileges inside a database- granting the least privileges required is best practice.
You're a DBA, which means you're most likely comfortable at the command line. Logging into a box via SSH is as natural as walking, and everything we've gone through so far was to prepare you for this next step.
Open your favorite command line tool, whether it's PuTTY or Terminal; if you've set up everything in the previous sections correctly, then you're ready to log into the host, aka the instance.
You can use this information to then ssh into the host:
ssh -i "<keypair_name>.pem" <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com
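As a filled-in sketch, (the key pair name, OS user and public DNS below are hypothetical placeholders- the default OS user varies by AMI, e.g. ec2-user on Amazon Linux) the command might look like:
ssh -i "mykeypair.pem" ec2-user@ec2-54-183-0-10.us-west-1.compute.amazonaws.com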
Once logged in as the OS user, you can su over to the application or database user and proceed as you would on any other host.
If you attempt to log into an instance in one region with a key pair from another region, it will state that the key pair can't be found- another reminder of the importance of regions.
This is the last area I'll cover today, (I know, a few of you are saying, "good, I've already got too much in my head to keep straight, Kellyn…")
With just about any cloud offering, you can bring your own license. Although there are a ton of pre-built images, (AMIs in AWS, VHDs in Azure, etc.) you may need to use a bare metal OS image and load your own software or, as most DBAs do, bring over patches to maintain the database you have running out there. Just because you're in the cloud doesn't mean you don't have a job to do.
Change over to the directory that contains the file that you need to copy and then run the following:
scp -i <keypair>.pem <file name to be transferred> <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com:/<directory you wish to place the file in>/.
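As a hypothetical example, copying a patch file to /tmp on an instance, (again, the key pair, file and host names are just placeholders) might look like:
scp -i mykeypair.pem patchfile.zip ec2-user@ec2-54-183-0-10.us-west-1.compute.amazonaws.com:/tmp/.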
If you try to use a key pair from one region to SCP to a host, (instance) in another region, you won't receive an error, but it will act as if you skipped the "-i" and the key pair, and you'll be prompted for the user's password:
<> password:
pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip   100%   20MB   72.9KB/s   04:36
This is a good start to working as a DBA in the cloud without over-thinking it. I'll be posting more in the upcoming weeks that will not only assist those already in the cloud, but also those wanting to invest more in their own cloud education!
I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present- SQL Server release for Linux.
I work for a company that supports both Oracle and SQL Server, so I wanted to know how *real* this release was. I first wanted to test it out on a new build and, as they recommend and link to an Ubuntu install, I created a new VM and started from there-
There were a couple of packages missing until the repository list was updated to pull from universe by adding repository locations to the sources.list file:
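As a rough sketch, (assuming Ubuntu 16.04 "xenial"- substitute your own release codename) the universe entries in /etc/apt/sources.list look something like the following, followed by a sudo apt-get update:
deb http://archive.ubuntu.com/ubuntu/ xenial main universe
deb http://archive.ubuntu.com/ubuntu/ xenial-updates main universe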
There is also a carriage return at the end of the MSSQL repository entry when it's added to the sources.list file. Remove this before you save.
Once you do this, if you've chosen to share your network connection with your Mac, you should be able to install successfully when running the commands found on the install page from Microsoft.
The second install I did was on a VM using CentOS 6.7 that was pre-discovered as a source for one of my Delphix engines. The installation failed upon running it, which you can see here:
Even attempting to work around this wasn’t successful and the challenge was that the older openssl wasn’t going to work with the new SQL Server installation. I decided to simply upgrade to CentOS 7.
The actual process of upgrading is pretty easy, but there are some instructions out there that are incorrect, so here are the proper steps. First, create a new yum repository file under /etc/yum.repos.d/ containing the following:
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
Save this file and then run the following:
yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant
You may see that one of the packages states it won't install because newer versions are already available- that's fine. As long as you have the newer packages, you're good. Now run the pre-upgrade check:
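The preupgrade-assistant package provides the preupg utility, (verify the exact binary name against your own install) which is run as root and writes its results out for review:
preupg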
The final log output may not get written, either. If you can verify the runs outside of this and it says that it completed successfully, know that the pre-upgrade was successful as a whole.
Once this is done, import the GPG Key:
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
After the key is imported, then you can start the upgrade:
/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
Once done, you'll need to reboot before you run your installation of SQL Server:
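A simple reboot, (as root or via sudo) does the trick:
reboot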
Once the VM has cycled, you can run the installation, using the Red Hat instructions, as root, (my delphix user doesn't have the rights, and I decided to have MSSQL installed as root for this first test run):
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo
Now run the install:
sudo yum install -y mssql-server
Once it's completed, it's time to set up your MSSQL admin and password:
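In current mssql-server packages this is handled by the mssql-conf utility, (older preview builds shipped a different setup script, so check what your package provides):
sudo /opt/mssql/bin/mssql-conf setup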
One more reboot and you’re done!
You should then see your SQL Server service running with the following command:
systemctl status mssql-server
You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.
OK, you linux fans, go MSSQL! 🙂
I’ll be attending my very first Pass Summit next week and I’m really psyched! Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix ware.)
It's an amazing event and, for those of you who are my Oracle peeps wondering what Summit is, think Oracle Open World for the Microsoft SQL Server expert folks.
I was a strong proponent of immersing in different database and technology platforms early on. You never know when the knowledge you gain in an area that you never thought would be useful ends up saving the day.
Yesterday this philosophy came into play again. A couple of folks were having some challenges with a testing scenario in a new MSSQL environment and asked other Delphix experts for assistance via Slack. I am known for multi-tasking, so I thought that, while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work. As soon as I logged into the web session, the guys welcomed me and said, "Maybe Kellyn knows what's causing this error…"
Me- “Whoops, guess I gotta pay attention…”
SQL Server, for the broader database world, has always been, unlike Oracle, multi-tenant. This translates to a historical architecture that has a server-level login AND a user database-level username. The login ID, (login name) is linked to a user ID, (and corresponding user name) in the user database, (akin to a schema). Oracle is starting to migrate to a similar architecture with database version 12c, moving away from schemas within a database and towards multi-tenant, where the pluggable database, (PDB) serves as the schema.
I didn't recognize the initial error that arose from the clone process, but that's not uncommon, as error messages can change with versions and with proprietary code. I also have worked very little, if at all, on MSSQL 2014. When the guys clicked on the target user database in Management Studio and were told they didn't have access, it wasn't lost on anyone to check the login and user mapping, which showed the login didn't have a mapping to a username for this particular user database. What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed and failed.
This is where "old school" MSSQL knowledge came into play. Most of my database knowledge for SQL Server is from versions 6.5 through 2008. Along with a lot of recovery and migrations, I also performed a process very similar to the option in Oracle to plug or unplug a PDB, referred to in MSSQL terminology as the "attach and detach" of a database. You could then easily move the database to another SQL Server, but you very often would end up with what are called "orphaned users." This is where the login IDs weren't connected to the user names in the database and needed to be resynchronized correctly. To perform this task, you could dynamically create a script to pull the logins if they didn't already exist, run it against the "target" SQL Server and then create one that ran a procedure to synchronize the logins and user names.
USE <user_dbname>
GO
EXEC sp_change_users_login 'Update_One','<loginname>','<username>'
GO
For the problem experienced above, it was simply the delphix user that wasn't linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.
This wasn't the long-term solution, but it pointed to where the break was in the clone design, which can now be addressed. It also shows that experience, no matter how benign it may seem, can come in handy later in our careers.
I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.
See you next week in Seattle!
My sabbatical from speaking is about to end in another week and it will return with quite the big bang.
First up is Upper NY Oracle User Group, (UNYOUG) for a day of sessions in Buffalo, NY. I’ll be doing three talks:
I'll be at Pass Summit! I've been wanting to attend this conference since I was managing MSSQL 7 databases! I finally get to go, but as I'm newly back in the MSSQL saddle, there are no speaking sessions for me. I do have a number of peers on the MSSQL side of the house, so I'm hoping to have folks show me around, and if you have the time to introduce yourself or introduce me to events and people at this fantastic event, please do!
Next, I head into November with quite the number of talks. I'll start out on Nov. 2nd in Detroit, MI at the Michigan Oracle User Summit, (MOUS) doing a keynote, "The Power in the Simple Act of Doing" and then a technical session, "Virtualization 101".
I’ll fly out right after I finish my second talk so I can make my way down to Raleigh, NC for the East Coast Oracle conference, (ECO), where I’ll also be doing a couple presentations on Nov. 3rd.
The next week I get to stay close to home for the Agile Test and Test Automation Summit. This is a brand new event in the Denver area. I'll be doing a new talk here on Test Data Management- a hot buzzword, but one whose complexities and automation people rarely understand.
The next day, I'm back downtown, where I'll present at the Rocky Mountain DataCon, (RMDC) event in Denver.
The RMDC is a newer event and it’s really been picking up traction in the Denver area. I’ll be speaking on “The Limitless DBA”, focusing on the power of virtualization. Kent Graziano, the Data Warrior and evangelist for Snowflake will be there, too. I’m glad to see this new local event taking off in the Denver area, as the Denver/Boulder area consistently ranks high as one of best places to be if you’re in the tech industry.
I’m working to take it easy during the month of December, as I’ll have enough to do just catching up at work at Delphix and then with RMOUG duties with the upcoming Training Days 2017 conference in February 2017!
The topics of DevOps and Agile are everywhere, but how often do you hear source control thrown into the mix? Not so much in public, but behind the closed doors of technical and development meetings where Agile is in place, it's a common theme. When source control isn't part of the combination, havoc ensues and a lot of DBAs end up working nights on production with broken hearts.
So what is source control and why is it such an important part of DevOps? The official definition of source control is:
A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large web sites, and other collections of information.
Delphix, with its ability to provide developers with as many virtual copies of databases as they need, including masked sensitive data, is a no-brainer for ensuring development and test have the environments to do their jobs properly. The added features of bookmarking and branching are the impressive part that creates full source control.
Using the diagram below, note how easy it is to mark each iteration of development with a bookmark, making it easy to then lock and deliver a consistent image to test via a virtual database, (VDB.)
Delphix is capable of all of this, while implementing Agile data masking to each and every development and test environment to protect all PII and PCI data from production in non-production environments.
Along with the deep learning I’ve been allowed to do about data virtualization, I’ve learned a great deal about Test Data Management. Since doing so, I’ve started to do some informal surveys of the DBAs I run into and ask them, “How do you get data to your testers so that they can perform tests?” “How do you deliver code to different environments?”
Oracle Open World 2016 is almost here…where did the summer go??
With this upon us, there's something you attendees need to know- what awesome sessions the Delphix team has at Oracle Open World! I gave my sessions up, as is the tragedy of switching companies from Oracle in late spring, but you can catch some great content on how to reach the next level in data management with Tim Gorman, (my phenomenal hubby, duh!) and Brian Bent from Delphix:
After absorbing all this great content on Sunday, you can come over to Oak Table World at the Children’s Creativity Museum on Monday and Tuesday to see the Oak Table members present their latest technical findings to the world. The schedule and directions to the event are all available in the link above.
If you’re looking for where the cool kids will be on Thursday, check out the Delphix Sync event! There’s still time to register if you want to join us and talk about how cool data virtualization is.
If you’re a social butterfly and want to get more involved with the community, check out some of the great activities that, and I do mean THAT Jeff Smith and the SQL Developer team have been planning for Oracle Open World, like the Open World Bridge Run.
I've been busy reading and testing everything I can with Delphix, whenever I get a chance. I'm incredibly fascinated by copy data management and the idea of doing this with Exadata is nothing new, as Oracle has its own version with sparse copy. The main challenge is that Exadata's version of this is kind of clunky and really doesn't have the management user interface that Delphix offers.
There is a lot of disk that comes with an Exadata, not just CPU, network bandwidth and memory. Now, you can't utilize offloading with a virtualized database, but you may not be interested in doing so. The goal is to create a private cloud in which small storage silos can be used for virtualized environments. We all know that copy data management is a huge issue for IT these days, so why not make the most of your Exadata, too?
With Delphix, you can take an external source and provision a copy to an Exadata in just a matter of minutes, utilizing very little storage. You can even refresh, roll back, version and branch through the user interface provided.
I simulated two different architecture designs for how Delphix would work with Exadata. The first used standard hardware with the virtual databases, (VDBs) on the Exadata, and the second had both the DSource and the VDBs on an Exadata.
Now we need to capture our gold copy to use for the DSource, which will require space, but Delphix does use compression, so it will be considerably smaller than the original database it’s using for the data source.
If we then add the storage utilized by ALL the VDBs to the storage utilized by the DSource, you'd see that together they only use about the same amount of space as the original database! Each of these VDBs is going to interact with the user independently, just as a standard database copy would. They can be at different points in time, track different snapshots and have different hooks, (pre- or post-scripts to be run for that copy) with different data, (which is just different blocks, so that would be the only additional space outside of other changes.) Pretty cool if you ask me!
While chatting on slack the other day, one of my peers asked if I’d seen that ESG Global had done a write up on Veritas Velocity. Velocity is a product that won’t be available until the end of 2016 and is considered “Alpha”. I was surprised that anyone allowed to participate in Alpha was able to publish a description on the product, but that’s not my call to make.
What I found interesting about the article, written by ESG, is how it describes Veritas Velocity: "… is combining its sophisticated approaches to data management with its broader ability to deliver superior data protection and information availability in order to offer something revolutionary."
I found this statement to be quite odd, as what they’re doing is simply using the same technology that Delphix has utilized for years to perform what Delphix has implemented at our customer sites since 2008. They are simply hopping on the bandwagon, (along with a number of other companies) in an attempt to take advantage of the newest buzz word, “Copy Data Management”.
There’s nothing revolutionary about what we do. It was revolutionary back in 2008 and may be seen as revolutionary to the customers who haven’t embraced the power of virtualized environments yet, but to say what they’ve created is revolutionary isn’t true.
If we inspect (at a high level) what Veritas Velocity does:
I can replace the lead-in to the above list with Delphix and that describes the Delphix Engine, as well. We also offer a mature user interface, advanced scripting capabilities and heterogeneous support.
There are a lot of companies out there making claims that they have revolutionized new capabilities like "data virtualization", "copy data management" and "test data management". Delphix has been in this space since the beginning and, as the Gartner reports prove, will continue to be the driving force behind what other companies are striving to achieve in their products.
Want to learn how many solutions Delphix virtualization can provide for your company’s data? Try out Delphix Express, a simple Virtualbox or VMware open source version for your workstation to check out who’s been doing it right all along and before it was cool!
I’ve been involved in two data masking projects in my time as a database administrator. One was to mask and secure credit card numbers and the other was to protect personally identifiable information, (PII) for a demographics company. I remember the pain, but it was better than what could have happened if we hadn’t protected customer data….
Times have changed and now, as part of a company that has a serious market focus on data masking, my role has time allocated to research on data protection, data masking and understanding the technical requirements.
The percentage of companies that contain data that SHOULD be masked is much higher than most would think.
The amount of data that should be masked vs. what is masked can be quite different. There was a great study done by the Ponemon Institute, (that says Ponemon, you Pokemon Go freaks…:)) that showed 23% of data was masked to some level and 45% of data was significantly masked by 2014. This still left over 30% of data at risk.
We also don't think very clearly about how and what to protect. We often silo our security- the network administrators secure the network. The server administrators secure the host, but don't concern themselves with the application or the database, and the DBA may be securing the database, but the application accessing it may be open to accessing data that shouldn't be available to those involved. We won't even start on what George in accounting is doing.
We need to change from thinking just of disk encryption and start thinking about data encryption and application encryption with key data stores that protect all of the data- the goal of the entire project. It's not like we're going to see people running out of a building with a server, but seriously, it doesn't just happen in the movies- people have stolen drives, jump drives and even printouts of spreadsheets with incredibly important data residing on them.
As I've been learning what's essential to masking data properly, I've found that part of what makes our product superior is that it identifies potential data that should be masked, along with performing ongoing audits to ensure that data doesn't become vulnerable over time.
This can be the largest consumer of resources in any data masking project, so I was really impressed with this area of Delphix data masking. It's really easy to use, so if you don't understand the ins and outs of DBMS_CRYPTO or are unfamiliar with the java.util.Random syntax, no worries- the Delphix product makes it really easy to mask data and has a centralized key store to manage everything.
It doesn’t matter if the environment is on-premise or in the cloud. Delphix, like a number of companies these days, understands that hybrid management is a requirement, so efficient masking and ensuring that at no point is sensitive data at risk is essential.
How many data breaches do we need to hear about to make us all pay more attention to this? Security topics at conferences have diminished vs. when I started attending less than a decade ago, so it wasn't that long ago that security appeared to be more important to us, and yet it's only become a more important issue.
Research also found that only 7-19% of companies actually knew where all their sensitive data was located. That leaves over 80% of companies with sensitive data vulnerable to a breach. I don't know about the rest of you, but upon finishing that little bit of research, I understood why many feel better not knowing, and why it's better just to accept this and address masking needs to ensure we're not one of the vulnerable ones.
Automated solutions to discover vulnerable data can significantly reduce risks and reduce the demands on those that often manage the data, but don't know what the data is for. I've always said that the best DBAs know the data, but how much can we really understand it and still do our jobs? It's often the users who understand it, but they may not comprehend the technical requirements to safeguard it. Automated solutions remove that skill requirement from having to exist in human form, allowing us all to do our jobs better. I thought it was really cool that our data masking tool considers this and takes this pressure off of us, letting the tool do the heavy lifting.
Along with a myriad of database platforms, we also know that people are bound and determined to export data to Excel, MS Access and other flat file formats, resulting in more vulnerabilities that seem out of our control. The Delphix data masking tool considers this and supports many of these applications, as well. George, the new smarty-pants in accounting, wrote out his own XML pull of customers and credit card numbers? No problem, we got you covered… 🙂
So now, along with telling you how to automate a script to email George to change his password from "1234" in production, I can make recommendations on how to keep him from printing out a spreadsheet with all the customers' credit card numbers on it and leaving it on the printer…:)
Happy Monday, everyone!
A number of the emails I received about trying out Delphix Express were regarding VMware. Many of my followers had used VirtualBox for a long time and, as we all know, no one likes change, (OK, maybe me, but we know how abnormal I am anyway… :))
Importing an OVA is pretty simple in VMware. In the VMware Fusion application, click on File, Import and accept the defaults. Depending on the size of the VM, the process will take the time needed to import, and if anything happens to the VM you imported, the great thing about a VM is that you just have to DELETE AND IMPORT AGAIN to erase that which you have destroyed. 🙂
Open the VM Control Console, click on the VM you want to delete, then click on Remove. Remember to click on delete files if you’d like that space back on your hard drive, too! The utility will take just a moment to clean up the VM and you can then proceed with work or re-import in the OVA file.
I know, if using Delphix Express, the IP address for the machine is displayed when it's first started, but I also know that we DBAs are a curious lot, known for snooping around every chance we get. Due to this, you may not have noted the IP address and now need it to log in from a terminal window, or you may want a second terminal to run or check items.
Knowing how to find the IP address is a good thing, so here are the ways to do it depending on what OS you're on:
Linux- type ifconfig in the terminal and you'll see the IP address listed as the inet address for the eth0 configuration, (the common default interface name.)
Windows- run ipconfig /all from the command prompt, or click on the Windows icon, type in "Network" to go to Network and Ethernet settings and then click on Ethernet. Your IP address is listed in the IPv4 Address setting.
Mac- run ifconfig from the terminal, or click on the Apple at the top left corner of the screen, click on System Preferences, then Network; if you're using WiFi, click on it and then TCP/IP to view your actual IP address listed as the IPv4 address.
If the VMware screen is blank, (no text or image on the screen) or you've lost your cursor, the best way to get control back is to click Ctrl/Command on the Mac to retrieve cursor control or return your screen to an active state.
All software has updates and, just like the other software we support, updates to VMware are important. We may not utilize our VMs as often as we think, so it's good to get into the practice of checking for updates when you first log into VMware by clicking on VMware Fusion and Check for Updates. It only takes a moment and hopefully you'll see the following after it's gone out to check:
By that, I mean remember that you're on one PC and you're running virtual PCs on it. Don't allocate so many resources to your VMs that your PC doesn't have enough to do its job. A VM should be pretty conservative with its resources, and it's important to look at the configuration and see if you can dial down any usage that isn't necessary.
To do this, the VM must be shut down, (not just suspended); then click on Virtual Machine, Settings. In the settings, there are a couple of areas that need to be considered for resource usage:
The first, obviously, is to look at Processors & Memory. Ensure you’ve left enough memory for your PC and as PCs come with quad-core or higher processors these days, a single core is often sufficient for the processing on a VM.
The amount of space that is being used by a VM is a consideration, too. If a VM is so large that you need to purchase an external drive to run it on, then that’s a better choice vs. using up all your local disk or it may be time to build out Delphix just to virtualize the environment to start! 🙂
Verify that all disks for the VM are actually in use. I've seen drives created for future growth but never used, or extra allocated space that just needs to be shrunk down. This can be accomplished by clicking on Virtual Machine, Settings and then, from there, clicking on each of the disks in use by the VM and shrinking any that may have been over-allocated. This is another task that can only be performed when the VM is shut down.
Well, there's a start to getting comfortable with VMware Fusion. Do you have any tips or tricks that you can add to this? Comment and let us know, and have a great Monday!
Delphix Express offers a virtual environment to work with all the cool features like data virtualization and data masking on just a workstation or even a laptop. The product has an immense offering, so no matter how hard Kyle, Adam and the other folks worked on this labor of love, there’s bound to be some manual configurations that are required to ensure you get the most from the product. This is where I thought I’d help and offer a virtual hug to go along with the virtual images…:)
If you're already set on installing and working with Delphix Express, you will find the following Vimeo videos- importing the VMs and configuring Delphix Express- quite helpful. Adam Bowen did a great job with these videos to get you started, but below I'll go through some technical details a bit deeper to give folks added arsenal in case they've missed a step or are challenged just starting out with VMware.
Note- Delphix Express requires VMware Fusion, which you can download after purchasing a license, ($79.99) but it's well worth the investment.
Issue- Not enough memory to run all three VMs required as part of Delphix Express, or after an upgrade, Delphix Express uses over 6GB.
Different laptops/workstations have different amounts of memory, CPU and space available. Memory is the most common constraint with today's PCs. Although the VMs are configured for optimal performance, the target and source environments can have their memory trimmed to 2GB each and still perform when resources are constrained.
The VM must be shut down for this configuration change to be implemented. After stopping or before starting the VM, click on Virtual Machine, Settings. Click on Processors and Memory and then you can configure the memory usage via a slider option as seen below:
Move the slider to 2GB or under for the VM in question, then close the configuration window and start the VM. Perform this for each VM, (the Delphix Engine VM should already be at 2GB.)
Issue- Population of sources and targets is empty after successful configuration.
After starting the target and source VMs, a UI with a command line is opened and you can log in right from VMware. VirtualBox would require a terminal opened to the desktop, but either way, you can get to the command line interface without using PuTTY or another desktop terminal from your workstation.
On the target VM command line, log in as the delphix user. The target VM has a python script that runs in the background upon startup; it checks for a Delphix engine once every minute and, if it locates one, runs the configuration. You can view this in the crontab:
crontab -l
@reboot ..... /home/delphix/landshark_setup.py
It writes to the landshark_setup.log log file.
You can view this file, (or tail it or cat it- whatever you're comfortable doing to view the end of the file.) I prefer to look at JUST the last ten lines, so I'll run the following:
tail -10 landshark_setup.log
If the configuration is having issues locating the Delphix engine, it will show in this log file. Once confirmed, we have a couple of steps to check:
A VMware issue where one of the virtual machines isn't visible to another. Each VM needs to be able to communicate and interact with the others. When importing each VM, the ability for the VM to be "host aware" with the Mac may not have been set. If the Delphix engine VM isn't viewable to the target or the source, you can check the log and then verify in the following way.
Click on Virtual Machine, Settings and then click on Network Adapter. Verify that the top radio option is selected for “Share with my Mac”:
Verify that this is configured for EACH of the three virtual machines involved. If this hasn't corrected the issue and the configuration still doesn't populate the virtual environments in the Delphix interface, then it's time to look at the configuration for the target machine.
Get IP Address
While connected to the target machine via SSH, type in the following:
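ifconfig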
Use the IP address shown, (inet address) and open a browser on your PC, adding the port used for the target configuration file, (port 8000 by default):
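As a hypothetical example, if the target's inet address were 172.16.180.130, the URL would be:
http://172.16.180.130:8000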
You should be shown the configuration file for your target server that is used to run the Delphix engine configuration. There are options to update the values for different parameters. The ones you should focus on are:
linux_source_ip= make sure this matches the source VM’s ip address when you type in “ifconfig”.
engine_address= ip address for the delphix engine VM when you type in ifconfig on the host
engine_password= should match the password that you set for delphix_admin when you went through the configuration. Update it to match if it doesn't; I've seen some folks not set it to "landshark" as demonstrated in the videos, and of course the setup fails when the file doesn't match the password set by the user.
oracle_xe= if you set oracle_xe to true, then don't set the 11g or 12c options to true. To conserve workstation resources, choose only one database type.
Once you've made all the changes you want to the page, click on Submit Changes.
You need to run the reconfiguration manually now. Remember, this runs in the background each minute, but when it does that, you can’t see what’s going on, so I recommend killing the running process and running it manually.
From the target host, type in the following:
ps -ef | grep landshark_setup
Kill the running processes:
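A minimal sketch- substitute the process ID(s) returned by the ps command above:
kill <pid>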
Check for any running processes, just to be safe:
ps -ef | grep landshark_setup
Once you’ve confirmed that none are running, let’s run the script manually from the delphix user home:
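Assuming the script is the /home/delphix/landshark_setup.py shown in the crontab entry earlier, running it manually looks something like this, (use python landshark_setup.py if it isn't marked executable):
cd /home/delphix
./landshark_setup.py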
Verify that the configuration runs, monitoring as it steps through each step:
Since this is the first time you've performed these steps, expect that a refresh won't be performed, but a creation will. You should now see the left panel of your Delphix Engine UI populated:
Now we've come to the completion of the initial configuration. In my next post on Delphix Express, I'll discuss the DSource and target database configurations for different target types. Working with these files and configurations is great practice for learning about Delphix, even if you're amazed at how easy this all was.
I've been going through some SERIOUS training in just over a week. This training has successfully navigated the "Three I's", as in it's been Interesting, Interactive and Informative. The offerings are very complete and the knowledge gained is limitless.
I'd also like to send a shout out to Steve Karam, Leighton Nelson and everyone else at Delphix who's had a hand in designing the training, both for new employees and for those working with our hands-on labs. I've had a chance to work with both and they're far above anything I've seen anywhere else.
Most DBAs know- If you attempt to take a shortcut in patching or upgrading, either by not testing or hoping that your environments are the same without verifying, shortcuts can go very wrong, very quickly.
Patching is also one of the most TEDIOUS tasks required of DBAs. The demand on the IT infrastructure for downtime to apply quarterly PSU patches, (not including emergency security patches) to all the development, test, QA and production databases is something I've never looked forward to. Even when utilizing Enterprise Manager 12c Patch Plans with the DBLM management pack, you still had to hope that you checked compliance for all environments and prayed that your failure threshold wasn't tripped, which meant a large amount of your time had to be allocated to patching outside of just testing and building out patch plans.
I bet most of you already knew you could virtualize your development and test environments from a single Delphix compressed copy, (referred to as a DSource) and create as many virtual copies, (referred to as VDBs) as your organization needs for development, testing, QA, backup and disaster recovery, (if you weren't aware of this, you can thank me later… :))
What you may not know, (and what I learned this week) is that you can also do the following:
Considering how much time and how many resources are saved by eliminating such a large portion of the time required for patching and upgrading, it's worth investing in Delphix for this alone!
Want to learn more? Check out the following links:
Want to Demo Delphix? <– Click here!
Back in 2012, when I started to build a reputation as a mentor, the goal was not just to create my own path and set it afire, but for others to desire to make their own path before my footsteps cooled.
This week I joined Delphix. Many acted as if this was pre-ordained and simply part of my destiny. Due to my technical knowledge, they assumed I would be working in the same group as my husband and virtualization was just a natural fit for my skills. Although I love working just a few feet from my husband, (which we’ve done for years now) I actually joined the Product Management and Marketing team, which is a surprise to many.
All new jobs come with new challenges, but when you also change paths, it can be like walking on hot coals. You can find yourself anxious, having doubts about your skills and whether you're up to the challenge. I accepted this challenge because I wanted to have more impact on the direction of the technology I was working with. I wanted to help the business make intelligent decisions with the powerful knowledge I have about technology, customers and product, and this is something that Delphix is keen on letting me be a part of. I'm pretty fearless, but even I have to remember to not let my fears or frustrations get the best of me. As I always say, there is incredible power in the simple act of doing, so just do, and it's surprising how quickly you'll be successful.
In just the few days I've been here, I've already begun to build the documentation that will help determine what content will be directed to which audiences, chosen a few members of the community to do guest blogs, done some really great training and been introduced to some incredible people.
I’m learning how to manage my time a little differently than I did before, as things moved a lot slower at Oracle, but I love how my skills are more in line with what the company needs from me. Even though I’ve been here less than a week and am in a position that I’ve never held before, I know exactly what my purpose is.
I want to thank my new peers and managers for helping me quickly get up to speed. My beloved Microsoft Surface has been migrated to my secondary desk and my new work computer is set up (I’m on a Mac Air, don’t everyone gasp at once and start taking bets on how long the keyboard will last… :)) As I stated earlier, the work is new and interesting, which is why Delphix was at the top of my list for companies to join. Like my new position, Delphix technology removes many of the tedious tasks and automates much of what once was a manual process so as to get onto more interesting and rewarding adventures.
I’ll continue to update everyone on how my new world is shaking out at Delphix and hopefully will convince a couple of you to join me. It wouldn’t be the first time that’s happened. The coals are warm, but will you follow in my footsteps before they cool?
So after over two years at Oracle, I’m moving on. Yes, for those of you who haven’t seen the tweets and the posts, you heard right.
OK, everyone- cleansing breath.
I worked with great people and did some awesome things in the community, blogged everything Enterprise Manager and talked over 1/2 the Oracle community into buying and doing projects with Raspberry Pi while I was at it!
Many folks thought I was a product manager or a technical consultant, but my title was Consulting Member of Technical Staff with the Strategic Customer Program in the Enterprise Manager and Oracle Management Cloud group. I know I was part of a select group at Oracle, but I believe the opportunity to work at Oracle was an important step in my career and I'd recommend it to anyone for the experience it provides.
There is a huge difference between working for Oracle and being in the Oracle community, even as an Oracle ACE Director. I was utterly amazed being part of the Oracle machine. One of the most amazing experiences was observing how releases came together. It was a completely different experience as an employee vs. a customer. Being part of a massive undertaking such as a product release, impressively building out software to be released to its customer base, is pretty astounding. Understanding what it takes to move the machine, and once it gets moving, how important it is for anyone in its path to get out of the way, is key to understanding how a successful product is created.
I learned a lot in just over two years and I have to admit- many of the negatives that people said would be present at Oracle, I just didn't experience. I had great mentors and contacts inside of Oracle. It's easy to assimilate into a big company environment when you have people like Pete Sharman, Tyler Muth, Mary Melgaard and others looking out for you. I'll be sad to leave all the great people that I worked with at Oracle, too- Steve, Courtney, Scott, Werner, Andrew, Joe, Pramod and Will. At the same time, I look forward to opportunities to learn new skills with the awesome folks that have so readily embraced my quirky self at my new company. I learned a great deal in my two years at Oracle and this is knowledge that I'm able to take with me as I move forward to my new adventure.
With that said, I've been offered an incredible opportunity to stretch my legs a bit and try something new, and I am excited to move on to this new challenge. I'll still be speaking at conferences, but will also direct technology in a way that should be very constructive to my technical style.
There have been a lot of rumors about where I'm off to. Some of you have guessed correctly on where I'm going, but I know none of you guessed what I'll be doing. I will be focusing more on my multi-platform skills, so for those of you that thought I would be leaving all those years of experience in database and OS platforms, it's going to be just the opposite.
I’m very excited to announce that, as of Monday, June 13th, I’m the new Technical Intelligence Manager at Delphix.
Buckle up, Baby! This is going to be good.