When tearing down an AWS Delphix Trial, we run the following command with Terraform:
I’ve mentioned before that every time I execute this command, I suddenly feel like I’m in control of the Death Star in Star Wars:
As this runs outside of the AWS EC2 web interface, you may see some odd information in your dashboard. In our example, we’ve run “terraform destroy” and the tear down was successful:
So you may go to your volumes and verify that yes, no volumes exist:
The instances may still show the three instances that were created as part of the trial, (Delphix engine, source and target.)
These are simply “ghosts of instances past.” The tear down was completely successful and there’s simply a delay before the instance names are removed from the dashboard. Notice that they are no longer listed with a public DNS or IP address. This is a clear indication that these instances aren’t running, don’t exist and, more importantly, aren’t being charged for.
Just one more [little] thing to be aware of… 🙂
Now, for most of us, we’re living in a mobile world, which means as our laptop travels, our office moves and our IP address changes. This can be a bit troubling for those working in the cloud when the configuration for that cloud relies on locating us via the same IP address we had at our previous location.
What happens if your IP address changes from what you have in your configuration file, (in Terraform’s case, your terraform.tfvars file) for the Delphix AWS Trial? I purposefully set my IP address incorrectly in the file to demonstrate what happens after running the terraform apply command:
It’s not the most descriptive error, but that I/O timeout should tell you right away that Terraform can’t connect back to your machine.
Now, we’ll tell you to capture your current IP address and update the IP address in the TFVARS file that resides in the Delphix_demo folder, but I know some of you are wondering why we didn’t just build out the product to adjust for an IP address change.
The truth is, you can set a static IP address for your laptop OR just alias your laptop with the IP address you wish to have. There are a number of ways to handle this, but looking at the most common, let’s dig into how we would update the IP address vs. updating the file.
You can go to System Preferences or Control Panel, (depending on which OS you’re on) click on Network, configure your TCP/IP setting to manual and type in an IP address there. The common preference is to choose a non-competing IP address, (often the one that was dynamically assigned will do as the manual one to retain) and save the settings. Restart the PC and you can then add that address to your configuration files. Is that faster than just updating the TFVARS file? Nope.
The second way to do this is to create an alias IP address, avoiding the challenge of each location/WiFi move assigning a new one automatically.
Just as above, we often use the IP address that was assigned dynamically and simply keep it as the one you’ll use each time. If you’re unsure of what your IP is, there are a couple of ways to collect this information:
Open up a browser and type in “What is my IP Address”
or from a command prompt, with “en0” being your WiFi connection, gather your IP Address one of two ways:
$ dig +short myip.opendns.com @resolver1.opendns.com
$ ipconfig getifaddr en0
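If you’d rather script the tfvars update than edit the file by hand, something like the sketch below works. Assumptions: the file is named terraform.tfvars in the current directory, the parameter is your_ip, and you’re on GNU sed, (on macOS use `sed -i ''`); the fallback address is just a placeholder for when you’re offline:

```shell
# A sketch -- TFVARS and the fallback IP below are placeholders
TFVARS=${TFVARS:-terraform.tfvars}

# For illustration only: create a sample file if one doesn't already exist
[ -f "$TFVARS" ] || echo 'your_ip="10.0.0.1"' > "$TFVARS"

# Grab the current public IP, (falls back to a placeholder when offline)
NEW_IP=$(dig +short +time=2 +tries=1 myip.opendns.com @resolver1.opendns.com 2>/dev/null)
NEW_IP=${NEW_IP:-203.0.113.5}

# Swap the your_ip value in place, (GNU sed; on macOS use sed -i '')
sed -i "s/^your_ip=.*/your_ip=\"${NEW_IP}\"/" "$TFVARS"
grep '^your_ip=' "$TFVARS"
```

Run it before terraform apply after each move and the I/O timeout above shouldn’t bite you again.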
Then set the IP address and cycle your WiFi connection:
$ sudo ipconfig set en0 INFORM <IP Address>
$ sudo ifconfig en0 down
$ sudo ifconfig en0 up
You can also click on your WiFi icon and reset it from there. Don’t be surprised if this takes a while to reconnect; renewing and releasing IP addresses can take a bit across the network, so allow for the time required.
The exact steps depend on which OS you’re on. Using the IP address from your tfvars file, set it as an alias with the following command:
$ sudo ifconfig en0 alias <IP Address> 255.255.255.0
Password: <admin password for workstation>
If you need to unset it later on:
sudo ifconfig en0 -alias <IP Address>
I found this to be an adequate option- the alias was always there, (like any other alias, it just forwards everything on to the address that the file recognizes you at) but it may add time to the build, (still gathering data to confirm this.) With this addition, I shouldn’t have to update my configuration file, (for the AWS Trial, that means setting it in our terraform.tfvars in the YOUR_IP parameter.)
The browser method of gathering your IP address works the same way, but if you want to change it via the command line, the commands are different for Windows PCs:
netsh interface ipv4 show config
You’ll see your IP Address in the configuration. If you want to change it, then you need to run the following:
netsh interface ipv4 set address name="Wi-Fi" static <IP Address> 255.255.255.0 <Gateway>
netsh interface ipv4 show config
You’ll see that the IP Address for your Wi-Fi has updated to the new address. If you want to set it to DHCP, (dynamic) again, run the following:
netsh interface ipv4 set address name="Wi-Fi" source=dhcp
Now you can go wherever you darn well please, set an alias and run whatever terraform commands you wish. All communication will complete without any errors caused by a new IP address.
Ain’t that just jiffy? OK, it may be quicker to just gather the IP address and update the tfvars file, but just in case you wanted to know what could be done and why we may not have built it into the AWS Trial, here it is! 🙂
I ran across an article from 2013 from Straight Talk on Agile Development by Alex Kuznetsov and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.
Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as you recognize how similar it is to a standard environment, you stop over-thinking it, and it becomes quite simple to implement agile with even petabytes of data in a relational environment without using slow and archaic processes.
The second effect of this is to realize that we may start to acquire secondary responsibilities and take on ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.
Don’t worry- I’m going to show you that it’s not that difficult and virtualization makes it really easy to do all of this, especially when you have products like Delphix to support your IT environment. For our example, we’re going to use our trusty AWS Trial environment, where we have already provisioned a virtual QA database. We want to create a copy of our development virtualized web application to test some changes we’ve made and connect it to this new QA VDB.
From the Delphix Admin Console, go to Dev Copies and expand to view those available. Click on Employee Web Application, Dev VFiles Running. Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval. Click on one and click on Provision.
Now this is where you need the information about your virtual database that you wish to connect to:
Currently the default is to connect to the existing development database, but we want to connect to the new QA VDB we wish to test on. You can ssh as delphix@<ipaddress for linuxtarget> to connect and gather this information.
2. Gathering Information When You Didn’t Beforehand
I’ve created a new VDB to test against, with the idea that I wouldn’t want to confiscate an existing VDB from any of my developers or testers. The new VDB is called EmpQAV1. Now, if you’re like me, you won’t have remembered to grab the info about this new database before you went into the wizard to begin the provisioning. No big deal, we’ll just log into the target and get it:
[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/188.8.131.52/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix 14816     1 0 17:23 ?     00:00:00 ora_pmon_devdb
delphix 14817     1 0 17:23 ?     00:00:00 ora_pmon_qadb
delphix 17832     1 0 17:32 ?     00:00:00 ora_pmon_EmpQAV1
delphix 19935 19888 0 18:02 pts/0 00:00:00 grep pmon
I can now set my ORACLE_SID:
[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1
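If you’d rather script this than eyeball the ps output, the SID can be pulled straight off the pmon process name. A small sketch, using one line captured from the output above:

```shell
# One line captured from `ps -ef | grep pmon` above
line='delphix 17832     1 0 17:32 ?     00:00:00 ora_pmon_EmpQAV1'

# The instance name is whatever follows the ora_pmon_ prefix
sid=${line##*ora_pmon_}
echo "$sid"    # -> EmpQAV1

export ORACLE_SID="$sid"
```

The same prefix-stripping trick works in a loop over the full ps output if you have several instances to pick from.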
Now, let’s gather the rest of the information we’ll need to connect to the new database by connecting to the database and gathering what we need.
[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 184.108.40.206.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
Instance "EmpQAV1", status READY, has 1 handler(s) for this service...
Fill in all the values required in the next section of the provisioning setup:
Click Next and add any requirements to match your vfile configuration that you had for the existing environment. For this one, there aren’t any, (additional NFS Mount points, etc.) Then click Next and Finish.
The VFile creation should take a couple minutes max and you should now see an environment that looks similar to the following:
This is a fully functional copy of your web application, created from another virtual copy that can test against a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!
Why would you choose to do anything less?
So you thought you were finished configuring your AWS target, eh? I posted previously on how to address a fault with the RMEM, but now we’re onto the WMEM. Wait, WM-what?
No, I fear a DBA’s work is never over, and when it comes to the cloud, our skill set has just expanded from what it was when we worked on-premise!
Our trusty Delphix Admin Console keeps track of settings on all our sources and targets, informing us when the settings aren’t set to what is recommended, so that we’ll be aware of any less-than-optimal parameters that could affect performance.
As we address latency in cloud environments, network settings become more important.
Where RMEM is quite easy to remember as the receive settings, WMEM is the write, (send) side of the equation.
As root, we’ll add another line to the sysctl.conf file to reflect values other than defaults:
$ echo 'net.ipv4.tcp_wmem= 10240 4194304 12582912' >> /etc/sysctl.conf
Reload the values into the system:
$ sysctl -p /etc/sysctl.conf
Verify the settings are now active:
$ sysctl -a | grep net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 10240 4194304 12582912
That’s all there is to it. Now you can mark the fault as resolved in the Delphix Admin Console.
There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrives to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped to successfully deploy.
After you’ve selected your location, set up your security user/group and key pairs, there’s a quick way to see, (at least high level) if you’re ready to deploy the AWS Trial to the zone in question.
Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:
Notice in the dashboard, you can see that the key pairs, (1) and the expected Security Groups, (3) are displayed, which tells us that we’re ready to deploy to this zone. If we double click on the Key Pair, we’ll see that it matches the one we downloaded locally and will use in our configuration with Terraform:
These are essential to deploying in an AWS zone that’s configured as part of your .tfvars file for terraform. You’ll note in the example below, we have both designated the correct zone and the key pair that is part of the zone we’ll be using to authenticate:
#VERSION=004
#this file should be named terraform.tfvars
# ENTER INPUTS BELOW
access_key="XXXXXXX"
secret_key="XXXXXXXXXX"
aws_region="us-east-1"
your_ip="xxx.xx.xxx.xxx"
key_name="Delphix_east1" #don't include .pem in the key name
instance_name="Delphix_AWS"
community_username="email@example.com"
community_password="password"
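A quick way to catch a missing input before Terraform complains is to check the file for each required key. This is just a sketch; the key list mirrors the sample file above and may differ in newer versions of the trial:

```shell
# Hypothetical sanity check -- the key list matches the sample tfvars above
TFVARS=${TFVARS:-terraform.tfvars}
if [ ! -f "$TFVARS" ]; then
  echo "no $TFVARS found in this directory"
else
  for key in access_key secret_key aws_region your_ip key_name \
             instance_name community_username community_password; do
    # Flag any input that was left out of the file entirely
    grep -q "^${key}=" "$TFVARS" || echo "missing: ${key}"
  done
fi
```

Silence means every key is present; anything printed needs to be filled in before you run terraform apply.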
Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration, (tfvars) file that we use with the Delphix deployment via Terraform into AWS.
Ah, yes, it’s that RMOUG Training Days time of the year again!
As a techie, I didn’t put a lot of time into my slides when I first started presenting, thinking I’d simply dazzle them with my amazing knowledge.
What I found is that slide deck skills don’t just help your presentation, they help you tell the story better, which should be the goal of every presenter. No matter how technical we are, we most likely haven’t had any formal Powerpoint training and as I’ve been upping my “PPT skills” this last year, I thought I’d share with the rest of the class…:)
Presenter View is something that would have really helped me out when I first started. To not have to remember every little detail and to have my notes displayed at the bottom of my screen would have been a stellar advancement.
Having notes at the bottom upped my presentation game, as I found I removed much of the text on my slides and instead used reference links to blog posts, articles and white papers to fully engage the attendee in my sessions vs. receiving minimal thoughts displayed on a slide.
I found that I went to more full screen graphics with a single phrase and was able to simply talk with the audience, having my notes keep me on topic, without having a slide full of discussion points that were more for me than those in attendance.
A picture is worth a thousand words, but a moving picture? That’s a whole story and the reason people go to movies, watch TV and why even adults like cartoons. We can display an idea with movement more completely than a still image.
Powerpoint makes animations quite simple and it just takes the ability to group, highlight and animate with the animation toolbar:
All the basic and even a number of advanced animations are available to make your presentations pop and the ability to create a complex animation to get your point across!
Why is it great to have every other or every third slide a full screen graphic? It gives the audience a rest between the data they’ve been taking in and with technical or complex topic presentations, your attendee is likely to appreciate the break.
No matter if you’ve just discussed the algorithm used for a specific data masking feature or like the slide above, discussing networking for geeky introverts, a slide like this makes an impact and the audience will more likely remember intricate details when combined with a humorous phrase and memorable picture.
Always use stock images, and if you end up using an image from someone else’s site, ask permission and give credit.
I used to update all my slides to a new template by copying them over in slide view, but now I know that I can switch them to a new slide template and switch out individual slides to new single template slides with the Layout menu in Powerpoint.
Having more control about the layout- everything from text, to graphics, etc. saves me time from manually updating everything in a single slide. If there is a common format that you use, you can make a copy of the template and save it off to use in the future, too.
Well, it’s time for me to get more tasks in preparation for the conference done! We’re looking forward to seeing everyone at RMOUG Training Days 2017!
Wednesday: Introduction for Connor McDonald from Oracle, 2017 Keynote- 9:45am, Main Ballroom
Lunch with the Experts- 12:30pm, Main Ballroom
Women in Technology Round Table with Komal Goyal and Rene Antunez– 1:15pm, room 1A
Social Media, the Next Generation with Rene Antunez– 4pm, room 1A
Welcome Reception Host- 5:30pm in the Main Ballroom
Thursday: Delphix Hands on Lab- 9:00am in the OTN Area
Lunch with the Experts– 12:30pm, Main Ballroom
Closing Session for the OTN Area– 2:00pm
Virtualization and the Cloud– 4:00pm- room 4F
I will be found at the Registration Desk, the OTN area doing interviews, taking pictures and videos throughout the other times of the conference!
Delphix focuses on virtualizing non-production environments, easing the pressure on DBAs, resources and budget, but there is a second use case for the product that we don’t discuss nearly enough.
Protection from data loss.
Jamie Pope, one of the great guys that works in our pre-sales engineering group, sent Adam and me an article on one of those situations that makes any DBA, (or an entire business, for that matter) cringe. GitLab.com was performing some simple maintenance and someone deleted the wrong directory, removing over 300G of production data from their system. It appears they were first going to use PostgreSQL’s “vacuum” feature to clean up the database, but decided they had extra time to clean up some directories, and that’s where it all went wrong. To complicate matters, the onsite backups had failed, so they had to go to offsite ones, (and every reader moans…)
Even this morning, you can view the tweets of the status for the database copy and feel the pain of this organization as they try to put right the simple mistake.
Users are down as they work to get the system back up. Just getting the data copied before they’re able to perform the restore is painful and as a DBA, I feel for the folks involved:
How could Delphix have saved the day for GitLab? Virtual databases, (VDBs) are read/write copies derived from a recovered image that is compressed and deduplicated, then kept in a state of perpetual recovery, with the transactional data applied at a specific interval, (commonly once every 24 hrs) to the Delphix Engine source. We support a large number of database platforms, (Oracle, SQL Server, Sybase, SAP, etc.) and are able to virtualize the applications that are connected to them, too. The interval of how often we update the Delphix Engine source is configurable, so depending on network and resources, it can be decreased to apply more often, depending on how up to date the VDBs need to be vs. production.
With this technology, we’ve come into a number of situations where customers suffered a cataclysmic failure situation in production. While traditionally, they would be dependent upon a full recovery from a physical backup via tape, (which might be offsite) or scrambling to even find a backup that fit within a backup to tape window, they suddenly discovered that Delphix could spin up a brand new virtual database with the last refresh before the incident from the Delphix source and then use a number of viable options to get them up and running quickly.
This type of situation happens more often than we’d like to admit. Many times resources have been working long shifts and make a mistake due to exhaustion; other times someone unfamiliar, with access to something they shouldn’t have, simply makes a dire mistake. These things happen, and this is why DBAs are always requesting two or three methods of backup. We learn quite quickly that we’re only as good as our last backup, and if we can’t protect the data, well, we won’t have a job for very long.
Interested in testing it out for yourself? We have a really cool free Delphix trial via the Amazon cloud that uses your AWS account. There’s a source host and databases, along with a virtual host and databases, so you can create VDBs, blow away tables, recover via a VDB and create a V2P, (virtual to physical) all on your own.
I’ve been at Delphix for just over six months now. In that time, I was working with a number of great people on initiatives surrounding competitive intelligence, the company roadmap and some new projects. With the introduction of our CEO, Chris Cook, new CMO, Michelle Kerr and other pivotal positions within this growing company, it became apparent that we’d be redirecting our focus on Delphix’s message and connections within the community.
I was still quite involved in the community, even though my speaking had been trimmed down considerably with the other demands at Delphix. Even though I wasn’t submitting abstracts to many of the big events I’d done so in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management, Agile and re-introduction into the SQL Server communities.
As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, is now going to be upwards of 80% as the Technical Evangelist for the Office of the CTO at Delphix. I’m thrilled that I’m going to be speaking, engaging and blogging with the community at a level I’ve never done before. I’ll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor, and we’ll be the first members of this new group at Delphix. I would like to thank all those that supported me in gaining this position, and the management for their vision in seeing the value of those in the community who make technology successful day in and day out.
I’ve always been impressed with the organizations who recognize the power of grassroots evangelism and the influence it has in the industry. What will Adam and I be doing? Our CEO, Chris Cook said it best in his announcement:
As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events. They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed. They will connect identified trends back into Marketing and Engineering to help shape our message and product direction. In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings. 
As many of you know, I’m persistent, but rarely patient, so I’ve already started to fulfill my role and be prepared for some awesome new content, events that I’ll be speaking at and new initiatives. The first on our list was releasing the new Delphix Trial via the Amazon Cloud. You’ll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you’re brand new to the cloud. In the upcoming months, watch for new features, stories and platforms that we’ll introduce you to. This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix, with their own AWS account and start to learn the power of Delphix with use case studies that are directed to their role in the IT organization.
I don’t want to alarm you, but there’s a new Delphix trial on AWS! It uses your own AWS account and with a simple set up, allows you to deploy a trial Delphix environment. Yes, you heard me right- with just a couple of steps, you could have your own setup to work with Delphix!
There’s documentation to make it simple to deploy, simple to understand and then use cases for individuals determined by their focus, (IT Architect, Developer, Database Administrator, etc.)
This was a huge undertaking and I’m incredibly proud of Delphix to be offering this to the community!
So get out there and check this trial out! All you need is an AWS account on Amazon and if you don’t have one, it only takes a few minutes to create one and set it up, just waiting for a final verification before you can get started! If you have any questions or feedback about the trial, don’t hesitate to email me at dbakevlar at gmail.
So you’ve deployed targets with Delphix on AWS and you receive the following error:
It’s only a warning, but it states that your default of 87380 is below the recommended second value for the ipv4.tcp.rmem property. Is this really an issue and do you need to resolve it? As usual, the answer is “it depends” and it’s all about how important performance is to you.
To answer this question, we need to understand network performance. I’m no network admin, so I am far from an expert on this topic, but as I’ve worked more often in the cloud, it’s become evident to me that the network is the new bottleneck for many organizations. Amazon has even built a transport, (the Snowmobile) to bypass this challenge.
The network parameter settings in question have to do with network window sizes for the cloud host, specifically how the TCP window behaves over WAN links. We’re on AWS for this environment, and the Delphix Admin Console is only the messenger, letting us know that the settings currently provided for this target are less than optimal.
Each time the sender hits this limit, they must wait for a window update before they can continue and you can see how this could hinder optimal performance for the network.
To investigate this, we’re going to log into our Linux target and su over to root, which is the only user who has the privileges to edit this important file:
$ ssh delphix@<IP Address for Target>
$ su root
As root, let’s first confirm what the Delphix Admin Console has informed us of by running the following command:
$ sysctl -a | grep net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 87380 4194304
There are three values displayed in the results: the minimum, default and maximum TCP receive buffer size, in bytes.
To translate what the second value, the default, corresponds to- this is the size of data in flight any sender can communicate via TCP to the cloud host before having to receive a window update.
So why are faster networks better? Literally, the faster the network, the closer the bits and the more data that can be transferred. If there’s a significant delay, due to a low setting on the default of how much data can be placed on the “wire”, then the receive window won’t be used optimally.
This will require us to update our parameter file and either edit or add the following line:
net.ipv4.tcp_rmem = 4096 12582912 16777216
$ sysctl -p /etc/sysctl.conf
$ sysctl -a | grep net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 12582912 16777216
Ahhh, that looks much better… 🙂
So Brent Ozar’s group of geeks did something that I highly support- a survey of data professionals’ salaries. Anyone who knows me knows I live by data and I’m all about transparency. The data from the survey is available for download from the site, and they’re encouraging app developers to download the Excel spreadsheet of the raw data and work with it.
Now I’m a bit busy with work as the Technical Intelligence Manager at Delphix and a little conference that I’m the director for, called RMOUG Training Days, which is less than a month from now, but I couldn’t resist the temptation to load the data into one of my XE databases on a local VM and play with it a bit...just a bit.
It was easy to save the data as a CSV and use SQL*Loader to dump it into Oracle XE. I could have used BCP and loaded it into SQL Server, too, (I know, I’m old school) but I had a quick VM with XE on it, so I just grabbed that to give me a database to query from. I did edit the CSV, removing both the “looking” column and the headers. If you choose to keep them, make sure you add the column back into the control file and update “options ( skip=0 )” to “options ( skip=1 )” so the column headers aren’t loaded as a row in the table.
The control file to load the data has the following syntax:
--Control file for data --
options ( skip=0 )
load data
infile 'salary.csv'
into table salary_base
fields terminated by ',' optionally enclosed by '"'
(TIMEDT DATE "MM-DD-YYYY HH24:MI:SS"
, SALARYUSD
, COUNTRY
, PRIMARYDB
, YEARSWDB
, OTHERDB
, EMPSTATUS
, JOBTITLE
, SUPERVISE
, YEARSONJOB
, TEAMCNT
, DBSERVERS
, EDUCATION
, TECHDEGREE
, CERTIFICATIONS
, HOURSWEEKLY
, DAYSTELECOMMUTE
, EMPLOYMENTSECTOR)
and the table creation is the following:
create table SALARY_BASE
(TIMEDT TIMESTAMP not null,
SALARYUSD NUMBER not null,
COUNTRY VARCHAR(40),
PRIMARYDB VARCHAR(35),
YEARSWDB NUMBER,
OTHERDB VARCHAR(150),
EMPSTATUS VARCHAR(100),
JOBTITLE VARCHAR(70),
SUPERVISE VARCHAR(80),
YEARSONJOB NUMBER,
TEAMCNT VARCHAR(15),
DBSERVERS VARCHAR(50),
EDUCATION VARCHAR(50),
TECHDEGREE VARCHAR(75),
CERTIFICATIONS VARCHAR(40),
HOURSWEEKLY NUMBER,
DAYSTELECOMMUTE VARCHAR(40),
EMPLOYMENTSECTOR VARCHAR(35));
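With the table created and the control file saved, the load itself is a one-liner. The file name salary.ctl and the scott/tiger credentials below are placeholders; substitute your own schema login. It’s guarded so it only fires where the Oracle client is actually in the PATH:

```shell
# Hypothetical sqlldr invocation -- control file name and credentials are placeholders
if command -v sqlldr >/dev/null 2>&1; then
  sqlldr userid=scott/tiger control=salary.ctl log=salary.log bad=salary.bad
else
  echo "sqlldr not in PATH -- run this on the XE host"
fi
```

Check salary.log afterwards for the rows-loaded count, and salary.bad for any rows that were rejected.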
I used Excel to create some simple graphs from my results and queried the data from SQL Developer, (Jeff would be so proud of me for not using the command line… :))
Here’s what I queried and found interesting in the results.
The database flavors we work on may be a bit more diverse than most assume. Now this one was actually difficult, as the field could be freely typed into, and there were some misspellings, combinations of capital and small letters, etc. The person who wrote “postgress”, yeah, we’ll talk… 🙂
The data was still heavily skewed towards the MSSQL crowd. Over 2700 respondents were SQL Server and only 169 listed their primary database platform as something else, but of those, Oracle was the majority:
Now the important stuff for a lot of people is the actual salary. Many folks think that Oracle DBAs make a lot more than those that specialize in SQL Server, but I haven’t found that and as this survey demonstrated, the averages were pretty close here, too. No matter if you’re Oracle or SQL Server, we ain’t making as much as that Amazon DBA…:)
Many of those who filled out the survey haven’t been in the field that long, (less than five years). There’s still a considerable number of folks who’ve been in the industry since its inception.
Of the 30% of us that don’t have degrees in our chosen field, most of us stopped after getting a bachelor’s to find our path in life:
There’s still a few of us, (just under 200) out there who had to accumulate a lot of school loans getting a master’s or a Doctorate/PhD, before we figured out that tech was the place to be…:)
The last quick look I did was to see by country, what were the top and bottom average salaries for DBAs-
Not too bad, Switzerland and Denmark… 🙂
I wish there’d been more respondents to the survey, but I’m very happy with the data that was provided. I’m considering doing one of my own, just to get more people from the Oracle side, but until then, here’s a little something to think about as we prep for the new year and another awesome year in the database industry!
I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present- SQL Server release for Linux.
I work for a company that supports both Oracle and SQL Server, so I wanted to know how *real* this release was. I first wanted to test it out on a new build and, as they recommend along with a link to an Ubuntu install, I created a new VM and started from there-
There were a couple packages that were missing until the repository is updated to pull universe by adding repository locations into the sources.list file:
There is also a carriage return at the end of the MSSQL repository entry when it’s added to the sources.list file. Remove this before you save.
Once you do this, and if you’ve chosen to share your network connection with your Mac, you should be able to install successfully when running the commands found on the install page from Microsoft.
The second install I did was on a VM using CentOS 6.7 that was pre-discovered as a source for one of my Delphix engines. The installation failed upon running it, which you can see here:
Even attempting to work around this wasn’t successful and the challenge was that the older openssl wasn’t going to work with the new SQL Server installation. I decided to simply upgrade to CentOS 7.
The actual process of upgrading is pretty easy, but there are some instructions out there that are incorrect, so here are the proper steps:
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
Save this file and then run the following:
yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant
You may see that one package states it won’t install because newer ones are available- that’s fine. As long as you have the newer packages, you’re fine. Now run the preupgrade.
The final log output may not be written, either. If you’re able to verify the individual runs outside of the log and they report completing successfully, know that the pre-upgrade was successful as a whole.
Once this is done, import the GPG Key:
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
After the key is imported, then you can start the upgrade:
/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
Once done, then you’ll need to reboot before you run your installation of SQL Server:
Once the VM has cycled, you can run the installation using the Red Hat instructions as root, (my delphix user doesn’t have the rights and I decided to have MSSQL installed under root for this first test run):
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo
Now run the install:
sudo yum install -y mssql-server
Once it’s completed, it’s time to set up your MSSQL admin password, (in the release I tested, via a setup script under /opt/mssql/bin):
One more reboot and you’re done!
You should then see your SQL Server service running with the following command:
systemctl status mssql-server
You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.
OK, you Linux fans, go MSSQL! 🙂
I’ll be attending my very first PASS Summit next week and I’m really psyched! Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix wear.)
It’s an amazing event, and for those of you who are my Oracle peeps wondering what Summit is, think Oracle Open World for the Microsoft SQL Server expert folks.
I was a strong proponent of immersing in different database and technology platforms early on. You never know when the knowledge you gain in an area that you never thought would be useful ends up saving the day.
Yesterday this philosophy came into play again. A couple of folks were having some challenges with a testing scenario in a new MSSQL environment and asked other Delphix experts for assistance via Slack. I am known for multi-tasking, so I thought that while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work. As soon as I logged into the web session, the guys welcomed me and said, “Maybe Kellyn knows what’s causing this error…”
Me- “Whoops, guess I gotta pay attention…”
SQL Server, unlike Oracle, has always been multi-tenant. This translates to a historical architecture that has a server-level login AND a user-database-level username. The login ID, (login name) is linked to a user ID, (and thus a user name) in the user database, (aka schema). Oracle is starting to migrate to a similar architecture with database version 12c, moving away from schemas within a database and toward multi-tenant, where the pluggable database, (PDB) serves as the schema.
I didn’t recognize the initial error that arose from the clone process, but that’s not uncommon, as error messages can change with versions and with proprietary code. I’ve also worked very little, if at all, on MSSQL 2014. When the guys clicked in Management Studio on the target user database and were told they didn’t have access, it wasn’t lost on anyone to check the login and user mapping, which showed the login didn’t have a mapping to a username for this particular user database. What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed, and the attempt failed.
This is where “old school” MSSQL knowledge came into play. Most of my database knowledge for SQL Server is from versions 6.5 through 2008. Along with a lot of recovery and migrations, I also performed a process very similar to the option in Oracle to plug or unplug a PDB, referred to in MSSQL terminology as “attach and detach” of a MSSQL database. You could then easily move the database to another SQL Server, but you would very often end up with what are called “orphaned users.” This is where the login IDs weren’t connected to the user names in the database and needed to be resynchronized correctly. To perform this task, you could dynamically create a script to pull the logins if they didn’t already exist, run it against the “target” SQL Server, and then create one that ran a procedure to synchronize the logins and user names:
USE <user_dbname>
GO
EXEC sp_change_users_login 'Update_One','<loginname>','<username>'
GO
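As an illustrative sketch of the “dynamically create a script” part, (the database and user names below are placeholders, not anything from the actual incident), a few lines of shell can generate the sync script for a list of orphaned users:

```shell
# Generate a sync script for a list of orphaned users, (all names are placeholders)
DB='MyUserDB'
OUT=/tmp/fix_orphans.sql
printf 'USE %s\nGO\n' "$DB" > "$OUT"
for u in delphix app_user; do
  printf "EXEC sp_change_users_login 'Update_One','%s','%s'\nGO\n" "$u" "$u" >> "$OUT"
done
# Then run it against the target server, e.g.: sqlcmd -S <server> -i /tmp/fix_orphans.sql
```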
For the problem experienced above, it was simply the delphix user that wasn’t linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.
This wasn’t the long-term solution, but it pointed to where the break was in the clone design, which can now be addressed. It also shows that experience, no matter how benign having it may seem, can come in handy later on in our careers.
I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.
See you next week in Seattle!
The topics of DevOps and Agile are everywhere, but how often do you hear source control thrown into the mix? Not so much in public, but behind the closed doors of technical and development meetings where Agile is in place, it’s a common theme. When source control isn’t part of the combination, havoc ensues, along with a lot of DBAs working nights on production with broken hearts.
So what is source control and why is it such an important part of DevOps? The official definition of source control is:
A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large web sites, and other collections of information.
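For anyone who hasn’t touched it, here’s a tiny sketch of what that definition means in practice, (using plain git in a scratch directory; the file name and commit messages are made up):

```shell
# A throwaway repository to show revisions being tracked and recoverable
REPO=/tmp/sc_demo
rm -rf "$REPO" && mkdir -p "$REPO" && cd "$REPO"
git init -q
git config user.email 'demo@example.com'
git config user.name 'Demo'
echo 'CREATE TABLE customers (id INT);' > schema.sql
git add schema.sql && git commit -qm 'initial schema'
echo 'ALTER TABLE customers ADD name VARCHAR(50);' >> schema.sql
git commit -qam 'add name column'
git log --oneline    # every change is recorded, attributed and recoverable
```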
Delphix, with its ability to provide developers with as many virtual copies of databases as they need, including masked sensitive data, is a no-brainer for ensuring development, and then test, have the environments to do their jobs properly. The added features of bookmarking and branching are the impressive part that creates full source control.
Using the diagram below, note how easy it is to mark each iteration of development with a bookmark, making it easy to then lock and deliver a consistent image to test via a virtual database, (VDB.)
Delphix is capable of all of this, while implementing Agile data masking to each and every development and test environment to protect all PII and PCI data from production in non-production environments.
Along with the deep learning I’ve been allowed to do about data virtualization, I’ve learned a great deal about Test Data Management. Since doing so, I’ve started to do some informal surveys of the DBAs I run into and ask them, “How do you get data to your testers so that they can perform tests?” “How do you deliver code to different environments?”
Oracle Open World 2016 is almost here…where did the summer go??
With this upon us, there is something you attendees need, and that’s to know about the awesome sessions at Oracle Open World from the Delphix team! I gave up my own sessions, as is the tragedy of switching companies from Oracle in late spring, but you can catch some great content on how to reach the new level in data management with Tim Gorman, (my phenomenal hubby, duh!) and Brian Bent from Delphix:
After absorbing all this great content on Sunday, you can come over to Oak Table World at the Children’s Creativity Museum on Monday and Tuesday to see the Oak Table members present their latest technical findings to the world. The schedule and directions to the event are all available in the link above.
If you’re looking for where the cool kids will be on Thursday, check out the Delphix Sync event! There’s still time to register if you want to join us and talk about how cool data virtualization is.
If you’re a social butterfly and want to get more involved with the community, check out some of the great activities that, and I do mean THAT Jeff Smith and the SQL Developer team have been planning for Oracle Open World, like the Open World Bridge Run.
I’ve been busy reading and testing everything I can with Delphix, whenever I get a chance. I’m incredibly fascinated by copy data management, and the idea of doing this with Exadata is nothing new, as Oracle has its own version with sparse copy. The main challenge is that Exadata’s version of this is kind of clunky and really doesn’t have the management user interface that Delphix offers.
There is a lot of disk that comes with an Exadata, not just CPU, network bandwidth and memory. Now, you can’t utilize offloading with a virtualized database, but you may not be interested in doing so. The goal is to create a private cloud in which you can use small storage silos for virtualized environments. We all know that copy data management is a huge issue for IT these days, so why not make the most of your Exadata, too?
With Delphix, you can even take an external source and provision a copy to an Exadata in just a matter of minutes, utilizing very little storage. You can even refresh, roll back, version and branch through the user interface provided.
I simulated two different architecture designs for how Delphix would work with Exadata: the first with standard hardware and the Virtual Databases, (VDBs) on the Exadata, and the second with both the DSource and the VDBs on another Exadata.
Now we need to capture our gold copy to use for the DSource, which will require space, but Delphix does use compression, so it will be considerably smaller than the original database it’s using for the data source.
If we then add up the storage utilized by ALL the VDBs and the DSource, you’d see that together they use only about the same amount of space as the original database! Each of these VDBs will interact with the user independently, just as a standard database copy would. They can be at different points in time, track different snapshots, and have different hooks, (pre or post scripts to be run for that copy) with different data, (which is just different blocks, so that would be the only additional space outside of other changes.) Pretty cool if you ask me!
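To put rough numbers on that claim, (every figure below is a made-up assumption for illustration, not a measurement from my environment), the back-of-envelope math looks something like this:

```shell
# Back-of-envelope sketch; all numbers are assumptions, not measurements
SOURCE_GB=1000                          # original source database
DSOURCE_GB=$(( SOURCE_GB / 3 ))         # compressed DSource gold copy, assuming ~3x
VDB_GB=10                               # changed blocks per VDB, assumed
VDB_COUNT=5
TOTAL=$(( DSOURCE_GB + VDB_COUNT * VDB_GB ))
# Five full physical copies would have needed 5000 GB of storage
echo "DSource + ${VDB_COUNT} VDBs: ${TOTAL} GB vs ${SOURCE_GB} GB for the source alone"
```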