Category: Delphix

May 10th, 2017 by dbakevlar

I love questions-  They give me something to write about that I don’t have to come up with from my experience or challenges…:)

So in my last post, Paul asked:

I am not sure what happens to the other changes which happened while the release was happening? Presumably they are also lost? Presumably the database has to go down while the data is reverted?

The Setup

In our scenario to answer this question, I’m going to perform the following on the VEmp_826 virtualized database:

  1. Add a table
  2. Add an index
  3. Include transactions, both inserts and deletes
  4. Rewind the database using the Admin Console

As I’m about to make these changes to my database, I take a snapshot which is then displayed in the Delphix Admin console using the “Camera” icon in the Configuration pane.

Note the SCN Range listing on each of them.  Those are the SCNs available in that snapshot and the timestamp is listed, as well.

Now I log into my target host that the VDB resides on.  Even though this is hosted on AWS, it really is no different for me than logging into any remote box.  I set my environment and log in as the schema owner to perform the tasks we’ve listed above.

Create Table

So we’ll create a table, index and some support objects for our test:

CREATE TABLE REW_TST
(
C1 NUMBER NOT NULL
,C2 VARCHAR2(255)
,CDATE TIMESTAMP
);

CREATE INDEX PK_INX_RT ON REW_TST (C1);
CREATE SEQUENCE RT_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER RW_BIR
BEFORE INSERT ON REW_TST
FOR EACH ROW
BEGIN
  SELECT RT_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/
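
If you want to sanity check the trigger before moving on, here’s a quick throwaway test, (not part of the original run, and rolled back so it won’t skew the row counts later in this post) showing the sequence populating C1:

SQL> INSERT INTO REW_TST (C2, CDATE) VALUES ('trigger smoke test', SYSTIMESTAMP);
SQL> SELECT C1 FROM REW_TST;

        C1
----------
         1

SQL> ROLLBACK;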

Now that it’s all created, we’ll take ANOTHER snapshot.

Add Data to Kinder Table

This snapshot takes just a couple of seconds, lands about 10 minutes after our first one and contains the changes that were made to the database since that initial snapshot.

Now I’ll add a few rows in another transaction to the KINDER_TBL from yesterday’s post:

INSERT INTO KINDER_TBL
VALUES (1,dbms_random.string('A', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (2,dbms_random.string('B', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (3,dbms_random.string('C', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (4,dbms_random.string('D', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (5,dbms_random.string('E', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (6, dbms_random.string('F', 200), SYSDATE);
INSERT INTO  KINDER_TBL
VALUES (7,dbms_random.string('G', 200), SYSDATE);
COMMIT;

We’ll take another snapshot:

Add Data to the New Table

Now let’s add a ton of rows to the new table we’ve created:

SQL> BEGIN
       FOR ids IN 1..10000 LOOP
         INSERT INTO REW_TST (C2)
         VALUES (dbms_random.string('X', 200));
         COMMIT;
       END LOOP;
     END;
     /
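
A quick count, (a throwaway check, not in the original post) confirms the load before the next snapshot:

SQL> SELECT COUNT(*) FROM REW_TST;

  COUNT(*)
----------
     10000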

And take another snapshot.

Now that I have all of my snapshots for the critical times in the release, there is a second option available.

Snapshots at the DBA Level

As I pointed out earlier, there is a range of SCNs in each snapshot.  Notice that I can now provision by time or SCN from the Admin Console:

So I could easily go back to the beginning or ending SCN of any snapshot, or I could click on the up/down arrows or type the exact SCN I want to pinpoint for the recovery.  Once I’ve decided on the correct SCN to recover to, I click on Refresh VDB and it will go to that SCN, just like doing a recovery via RMAN, but without having to type out the commands and the time constraints-  this makes for an incredibly quick recovery.
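
For comparison, here’s roughly what the equivalent manual point-in-time recovery would look like in RMAN-  a sketch only, not what the Delphix Engine literally runs under the covers:

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> run {
  set until scn 2258846;
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;

The SCN here is the same one we’ll see later in the VDB alert log; with Delphix the restore, recover and resetlogs steps are all handled for you.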

Notice that I can go back to any of my snapshots, too.  For the purpose of this test, we’re going to go back to just before I added the data to the new table by clicking on the Selected SCN and clicking Rewind VDB.

Note that this is now the final snapshot shown in the system; the one taken after we inserted the 10K rows into REW_TST is no longer displayed.

If we look at the SQL*Plus session we used for our 10K row check on the new table, we’ll see that it’s no longer connected:

And if I connect back in, what do I have for my data in the tables I worked on?

Pssst-  there are fourteen rows instead of seven because I inserted another 7 yesterday when I created the table… 🙂
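
For reference, the checks behind that screenshot boil down to a couple of counts-  a sketch, since your numbers will depend on what you’ve inserted:

SQL> SELECT COUNT(*) FROM KINDER_TBL;

  COUNT(*)
----------
        14

SQL> SELECT COUNT(*) FROM REW_TST;

  COUNT(*)
----------
         0

KINDER_TBL keeps its fourteen rows because those inserts happened before the snapshot we rewound to, while REW_TST is back to being empty.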

I think that answers a lot of the questions posed by Paul, but I’m going to jump in a little deeper on one-

Database Outage During a Rewind

Yes, the database did experience an outage as the VDB was put back to the point in time or SCN requested through the Delphix user interface or command line interface.  You can see this from querying the database:

SQL> select to_char(startup_time,'DD-MM-YYYY HH24:MI:SS') startup_time
 from v$instance;
STARTUP_TIME
-------------------
10-05-2017 12:11:49

The entire database is restored back to this time.  The dSource, (the database the VDB is sourced from, which keeps track of everything going on in all VDBs) has pushed the VDB back, yet the snapshots from before this time still exist, (tracked by the Delphix Engine.)

If you view the alert log for the VDB, you’ll also see the tail of the recovery steps, including our requested SCN:

alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover datafile list
 1 , 2 , 3 , 4
Completed: alter database recover datafile list
alter database recover if needed
 start until change 2258846
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 2 slaves
Wed May 10 12:11:05 2017
Recovery of Online Redo Log: Thread 1 Group 3 Seq 81 Reading mem 0
  Mem# 0: /mnt/provision/VEmp_826/datafile/VEMP_826/onlinelog/o1_mf_3_dk3r6nho_.log
Incomplete recovery applied all redo ever generated.
Recovery completed through change 2258846 time 05/10/2017 11:55:30
Media Recovery Complete (VEmp826)

Any other changes around the change that you’re tracking are impacted by a rewind, so if two developers are working on the same database, they could impact each other-  but with the small footprint of a VDB, why wouldn’t you just give them each their own VDB and merge the changes at the end of the development cycle?  The glorious reason for adopting virtualization technology is the ability to work in 2 week sprints and be more agile than our older, waterfall methods that are laden with problems.

Let me know if you have any more questions-  I live for questions that offer me some incentive to go look at what’s going on under the covers!

Posted in Database, Delphix, Oracle

May 9th, 2017 by dbakevlar

Different combinations in the game of tech sometimes create a winning roll of the dice and other times create a loss.  Better communication between teams offers a better opportunity to avoid holes in your development cycle tooling, especially when DevOps is the solution you’re striving for.

It doesn’t hurt to have a map to help guide you.  This interactive map from XebiaLabs can help offer a little clarity to the solutions, but there are definitely some holes in multiple places that could be clarified a bit more.

The power of this periodic table of DevOps tools isn’t just that the tools are broken up by type, but that you’re able to easily filter by open source, freemium, paid or enterprise level tools.  This assists in reaching the goals of your solution.  As we all get educated, I’ll focus horizontally in future posts, but today we’ll take a vertical look at the Database Management tier, where I specialize, first.

Apples aren’t Apfel

When comparisons are made, it’s common to be unable to compare apples to apples.  Focusing on the Database Management toolsets, such as Delphix, I can tell you that Redgate is the only one I view as a competitor, and this only happened recently with their introduction of SQL Clone.  The rest of the products shown don’t offer any virtualization, (our strongest feature) in their product and we consider Liquibase and Datical partners in many use cases.

Any tool is better than nothing, even one that helps you choose tools.  So let’s first start to discern what the “Database Management”  tools are supposed to accomplish and then create one of our own.  The overall goal appears to be version control for your database, which is a pretty cool concept.

DBMaestro

The first product on the list is something I do like, because of the natural “control issues” I have as a DBA.  You want to know that changes to a database occurred in a controlled, documented and organized manner.  DBMaestro allows for this and has some pretty awesome ways of doing it.  Considering that DevOps is embracing agility at an ever increasing rate, having version control capabilities that work with both Oracle SQL Developer and Microsoft Visual Studio is highly attractive.  The versioning is still at a code change level and not at the data level, but it’s still worthy of discussion.

That it offers all of this through a simple UI is very attractive to the newer generation of developers, and DBAs will still want to be part of it.

Liquibase

This is the first of the two companies we partner with that are in the list.  It’s very different from DBMaestro, as it’s the only open source option among the database management products and works with XML, JSON, SQL and other formats.  You can build just about anything you require and the product has an extensive support community, so if you need to find an example, it’s pretty simple to do so online.

I really like the fact that Liquibase takes compliance into consideration and has the capability to delay SQL transactions from performing without the approval from the appropriate responsible party.  It may not be version control of your data, but at least you can closely monitor and time out the changes to it.

Where Liquibase partners with Delphix is that we can perform continuous delivery via Liquibase and Delphix can then version control the data tier.  We can be used for versioning, branching and refreshing from a previous snapshot if there was a negative outcome in a development or test scenario, making continuous deployment a reality without requiring “backup scripts” for the data changes.

Redgate SQL Control

Everybody loves a great user interface and, like most, there’s a pretty big price tag that goes along with the ease of use when adding up all the different products that are offered.  There’s just a ton that you can do with Redgate and you can do most of it for Oracle, MSSQL and MySQL, which is way cool.  Monitor, develop, virtualize-  but the main recognition in the periodic table of DevOps tools is for version control and comparisons.  This comes from the SQL Control product from Redgate, which offers quite a full suite of products for the developer and the DBA.

Datical

This is another product that we’ve partnered with repeatedly.  The idea that we, as DBAs can review and monitor any and all changes to a database is very attractive to any IT shop. Having it simplified into a tool is incredibly beneficial to any business who wants to deliver continuously and when implemented with Delphix, then the data can be delivered as fast as the rest of the business.

Idera

Idera’s DB Change Manager can give IT a safety net to ensure that the changes intended are the changes that happen in the environment.  Idera, just like many of the others on the list, supports multiple database platforms, which is a keen feature of a database change control tool, but no virtualization or copy data management, (CDM) tool exists-  or at least, not one that exists any longer.

Fitting in

So where does Delphix fit in with all of these products?  We touched on it a little bit as I mentioned each of these tools.  Delphix is recognized for the ability to deploy, and that it does so as part of continuous delivery is awesome, but as I stated, it’s not a direct apples to apples comparison, as we not only offer version control, but we do so at the data level.

Delphix Jet Stream

So let’s create an example-

We can do versioning and track changes in releases in the way of our Jet Stream product.  Jet Stream is the much loved product for our developers and testers.

I’ve always appreciated any tool set that teaches others to fish instead of me fishing for them.  Offering the developer or tester access to an administration console meant for a DBA can only set them up to fail.

Jet Stream’s interface is really clean and easy to use.  It has a clear left hand panel with options to access and the interaction is direct on what the user will be doing.  I can create bookmarks and name versions, which gives me the ability to mark each milestone in a release.

If a developer is using Jet Stream, they would make changes as part of a release and once complete, create a bookmark, (a snapshot in time) of their container, (A container here is made up of the database, application tier and anything else we want included that Delphix can virtualize.)

We’ve started our new test run of a new development deployment.  We’ve made an initial bookmark signaling the beginning of the test and then a second bookmark to say the first set of changes was completed.

At this time, there’s a script that removes 20 rows from a table.  The check queries all verified that this is the delete statement that should remove the 20 rows in question.

SQL> delete from kinder_tbl  where c2 like '%40%';

143256 rows deleted.

SQL> commit;

SQL> insert into kinder_tbl values (...

When the tester performs the release and hits this step, the catastrophic change to the data occurs.

Whoops, that’s not 20 rows.

Now, the developer could grovel to the DBA to use archaic processing like flashback database or, worse, import the data back into a second table and merge the missing rows, etc.  There are a few ways to skin this cat, but what if the developer could recover the data himself?
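
For contrast, the flashback database route means asking the DBA to bounce the entire database and run something like the following-  a sketch, and it assumes flashback logging was even enabled in the first place:

SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to timestamp (systimestamp - interval '15' minute);
SQL> alter database open resetlogs;

That’s an outage for everyone on the database and a conversation with the DBA, versus a developer clicking a bookmark.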

This developer was using Jet Stream and can simply go into the console, where they’ve been taking that extra couple seconds to bookmark each milestone during the release to the database, which INCLUDES marking the changes to the data!

If we inspect the bookmarks, we can see that the second of three occurred before the delete of data.  This makes it simple to use the “rewind” option, (bottom right icon next to the trash can for removing the bookmark) to revert the data changes.  Keep in mind that this will revert the database back to the point in time when the bookmark was performed, so ALL changes will be reverted to that point in time.

Once that is done, we can verify quickly that our data is returned-  no need to bother the DBA and no concern that a catastrophic change to data has impacted the development or test environment.

SQL> select count(*) from kinder_tbl
  where c2 like '%40%';

  COUNT(*)
----------
    143256

I plan on going through different combinations of tools in the periodic table of DevOps tools and showing what strengths and drawbacks there are to the choices in implementation in upcoming posts, so until the next post, have a great week!

Posted in Delphix, devops

March 6th, 2017 by dbakevlar

I had the opportunity to attend Delphix‘s Company Kick Off last week in San Francisco.  This was my first time at an event of this nature and it was incredibly successful.  As Delphix had just promoted Adam Bowen and me to the Office of CTO, along with promoting Eric Schrock to the CTO position, I was in an enviable position, but not just because of the role change.

For some reason, my badge stated I was in Product Management, (I wasn’t) and as I was transitioning from my previous role in Technical Product Marketing, (I was the full-on techie on the team) I now belonged to a group that hadn’t existed when they planned the event.  I decided to become a “free agent” for the week and took advantage of not belonging to any one group.  As the company split into their designated groups-  Sales, Engineering and Professional Services, I started to move between the department events.  This resulted in some incredible perspectives on our company and why the diversity of the roles and groups serves the company as a whole.

Doing the Kick Off

First off-  kudos to the entire marketing team for the event.  It was incredibly well done.  The venue and location were excellent.  Having everyone in one place, considering how many of us are remote, really made a huge difference.  All of this had to be coordinated-  event, people, travel, content and commonly three separate events stitched into one!  As a remote employee, it was great to finally put faces to names.  Networking, community and social events gave us more than enough time to get to know those we only knew virtually and even in three days, I felt more connected to the teams.

The company kick off started with a talk from John Foley, the Blue Angels pilot, who discussed his Glad to be Here movement, (#gladtobehere).  It was very inspirational and I gained significant knowledge about the value of the feedback phase of projects, which I believe is a weakness of mine as a mentor.  Our CEO Chris Cook and others from the C-Level then presented Delphix goals for the next year in a very energized way that provided a unified focus for all of us.  With our heads filled with a full picture of where we were headed, we sat down to a dinner and the big company party to follow.

Shout Out to the Cool Kids

What did I learn that I may not have known before?  That our CEO, Chris Cook is really fantastic.  He knows the business, is very personable and took the time to visit with as many Delphixers as he could during the week.  He’s also not afraid to have fun and knows that downtime is essential to a healthy work environment.

Our CEO, Chris Cook, reliving the 80’s for our Company Kick Off party.  I look like I’m signing for someone to call me.  The 80’s were not my best years… 🙂

The company message was on point, “Delphix moves data at the speed of business.”  As a DBA, I understand how long the database has been viewed as the bottleneck and to remove this through virtualization is incredibly powerful.

I learned that we have a very clear vision of where we are starting to fit with the huge direction change for many businesses, with so many companies migrating to the cloud and so many others that claim to do what we do, (but very rarely, and even more rarely, do it successfully.)  I started to recognize how all of our groups fit together.  We are all moving parts that are necessary to make Delphix the success it is.

Delphix’s CFO, Stewart Grierson and Chris Cook on Stage at the CKO

For me, the free agent, it was shocking to go from the buzz and big lights of the Sales/Pre-sales Kick off, to the very personalized interaction of the Professional Services, to the quiet, small groups at the hackathon for Engineering.  I’m a very adaptable personality, but admit that I’m more introverted than most people realize.  I’m drained by loud events and need to re-energize with downtime when I get the chance, so this bouncing from space to space, (each with their own different vibe) really worked well for me.

That week I learned how much our sales staff does-  how many regions are handled by so few people and how much they accomplish with the help of some awesome pre-sales folks!  You also get to find out how your contributions make an impact.  Content you produced that you thought was just gathering dust-  someone takes the time to thank you and tell you how vital it is to their day to day job.

I discovered how the Professional Services, (PS) team closely works with those that provide our documentation and community support.  Where Sales was all about recognizing what had been accomplished this year, PS was looking to the year ahead and how they would solve some of the critical challenges.  Sitting in granted me the opportunity to proudly hear the announcement of my husband’s promotion to Technical Manager in the Professional Services group.  It also gave me a chance to visit with my old friend and co-author Leighton Nelson, who is doing fantastic work in Steve Karam’s group.

Getting My Geek On

At breakfast on the last day, I happened to sit down with the engineers.  I’ve become accustomed to speaking over everyone’s head in marketing these days, so it was this odd sense of relief as I listened and discussed some of the interesting ways this group was taking on syntax differences between the languages being utilized in our product.  We chatted about IntelliJ, Python and Oracle Cloud complexities and the hackathon that was going on in their group for the day.  When I sat in on the Engineering group’s kick off later on, I was granted the opportunity to speak to Nathan Jolly, a SQL Server DBA at Delphix that I’ve only had the pleasure of speaking to in meetings.  He’s on our Australian team and there aren’t many opportunities to be in the same timezone.

Quieter than in the party, people chat it up

The quiet was also a nice break after evenings filled with great food, conversations and company.  The party was the first night, but each evening brought more dinners and community events, (I was part of a group that assembled over 40 wheelchairs for charity!) and the last evening, I ended up closing down the bar with a group that [somehow] talked me into an arm wrestling match.  Now, as many know, I have about a good week at low altitude where I’m taking advantage of the extra oxygen.  This results in little impact from alcohol and for the particular evening, me winning an arm wrestling match against one of the guys.  Nothing more needs to be said about this outside of me refusing to give up and that both my elbows are bruised from the match.

A Successful Preparation for the Upcoming Year

I may be speaking for the rest of the teams, but I left with renewed energy, direction and connection to this great company we call Delphix.  I am amazed at how much we’ve grown and changed in just the 9 months I’ve been here and look forward to the upcoming year.  I have no doubt 2018’s kick off will be just as awesome and that we’ll need a bigger venue to hold all the new employees!

I quickly put together a scrapbook video story of my pictures, enjoy!  March is also a busy month for events, both Oracle and SQL Server.  Here’s where I’ll be this month:

March 8th:  IOUG Town hall webinar

March 13th:  UTOUG Training Days, Salt Lake City, UT.

March 18th:  Iceland SQL Saturday, Reykjavik, Iceland

March 28th:  Colorado SQL Saturday, Colorado Springs, CO

Posted in Delphix

February 23rd, 2017 by dbakevlar

This was received by one of our Delphix AWS Trial customers and he wasn’t sure how to address it.  If any others experience it, this is why it occurs and how you can correct it.

You’re logged into your Delphix Administration Console and you note there is a fault displayed in the upper right hand console.  Upon expanding, you see the following warning for the databases from the Linux Target to the Sources they pull updates from:

It’s only a warning, and the reason it’s only a warning is that it doesn’t stop the user from performing a snapshot and provisioning, but it does impact the timeflow, which could be a consideration if a developer were working and needed to go back in time during this lost window.

How do you address the error?  It’s one that’s well documented by Delphix, so simply proceed to the following link, which describes the issue, how to investigate and how to resolve it.

Straightforward, right?  Not so much in my case.  When the user attempted to ssh into the linux target, he received an IO error:

$ ssh delphix@xx.xxx.xxx.x2

$ ssh: connect to host xx.xxx.xxx.x2 port 22: Operation timed out

I asked the user to then log into Amazon EC2 dashboard and click on Instances.  The following displayed:

Oh-oh….

By highlighting the instance, the following is then displayed at the lower part of the dashboard, showing that there is an issue with this instance:

Amazon is quickly on top of this once I refresh the instances and all is running once more.  Once this is corrected and all instances show a green check mark for the status, I’m able to SSH into the console without issue:

$ ssh delphix@xx.xxx.xxx.x2

Last login: Tue Feb 21 18:05:28 2017 from landsharkengine.delphix.local

[delphix@linuxtarget ~]$

Does this resolve the issue in the Delphix Admin Console?  It depends…  The documentation linked states that the problem is commonly due to a problem with the network or the source system, but in this case it was the target system that suffered the issue in AWS.

As this is a trial and not a non-production system that’s actively in use, we will skip recovering the logs to the target system and proceed with taking a new snapshot.  The document also goes into excellent ways to avoid experiencing this type of outage in the future.  Again, this is just a trial, so we won’t put these into practice for an environment that we can easily drop and recreate in less than an hour.

Tips For Real Delphix Environments

There are a few things to remember when you’re working with the CLI:

-If you want to inspect any admin information, (such as these logs, as shown in the document) you’ll need to be logged in as the delphix_admin@DOMAIN@<DE IP Address>.

So if you don’t see the “paths” as you work through the commands, it’s because you may have logged in as sysadmin instead of the delphix_admin.

-If you’ve marked the fault as “Resolved”, there’s no way to resolve the timeflow issue, so you’ll receive the following:

ip-10-0-1-10 timeflow oracle log> list timeflow='TimeFlow' missing=true

No such Timeflow ''TimeFlow''.

-If the databases are down, it’s going to be difficult for Delphix to do anything with the target database.  Consider updating the target to auto restart on an outage.  To do so, click on the database in question, click on Configuration and change it to “On” for the Auto VDB Restart.

Here are a few more tidbits to make you more of an expert with Delphix.  Want to try out the free trial with AWS?  All you need is a free Amazon account and it’s only about 38 cents an hour to play around with a great copy of our Delphix Engine, including a deployed Source and Target.

Just click on this link to get started!

Posted in AWS Trial, Delphix

February 13th, 2017 by dbakevlar

When tearing down an AWS Delphix Trial, we run the following command with Terraform:

>terraform destroy

I’ve mentioned before that every time I execute this command, I suddenly feel like I’m in control of the Death Star in Star Wars:

As this runs outside of the AWS EC2 web interface, you may see some odd information in your dashboard.  In our example, we’ve run “terraform destroy” and the tear down was successful:

So you may go to your volumes and verify that yes, no volumes exist:

The instances view, however, may still show the three instances that were created as part of the trial, (delphix engine, source and target.)

These are simply “ghosts of instances past.”  The tear down was completely successful and there’s simply a delay before the instance names are removed from the dashboard.  Notice that they are no longer listed with a public DNS or IP address.  This is a clear indication that these aren’t currently running, don’t exist and, more importantly, aren’t being charged for.

Just one more [little] thing to be aware of… 🙂

Posted in AWS Trial, Delphix

February 13th, 2017 by dbakevlar

Now, for most of us, we’re living in a mobile world, which means as our laptop travels, our office moves and our IP address changes.  This can be a bit troubling for those working in the cloud when the configuration to our cloud relies on locating us via an IP address that is the same as it was in our previous location.

What happens if your IP address changes from what you have in your configuration file, (in Terraform’s case, your terraform.tfvars file) for the Delphix AWS Trial?  I set my IP address purposefully incorrect in the file to demonstrate what would happen after I run the terraform apply command:

It’s not the most descriptive error, but that I/O timeout should tell you right away that terraform can’t connect back to your machine.

Addressing the IP Address Issue

Now, we’ll tell you to capture your current IP address and update the IP address in the TFVARS file that resides in the Delphix_demo folder, but I know some of you are wondering why we didn’t just build out the product to adjust for an IP address change.

The truth is, you can set a static IP Address for your laptop OR just alias your laptop with the IP Address you wish to have.  There are a number of different ways to address this, but looking into the most common, let’s dig into how we would update the IP address vs. updating the file.

Static IP Address

You can go to System Preferences or Control Panel, (depending on which OS you’re on) click on Network and configure your TCP/IP setting to manual, then type in an IP address there.  The preference is commonly to choose a non-conflicting IP address, (often the one that was dynamically set will do as the manual one to retain) and save the settings.  Restart the PC and you can then add that to your configuration files.  Is that faster than just updating the TFVARS file?  Nope.

Setting the IP Address- Mac/Linux

The second way to do this is to create an Alias IP address to deter from the challenge of each location/WiFi move having it automatically assigned.

Just as above, we’ll often use the IP address that was assigned dynamically and simply keep it each time.  If you’re unsure of what your IP is, there are a couple of ways to collect this information:

Open up a browser and type in “What is my IP address”

or from a command prompt, with “en0” being your WiFi connection, gather your IP Address one of two ways:

$ dig +short myip.opendns.com @resolver1.opendns.com
$ ipconfig getifaddr en0

Then set the IP address and cycle your WiFi connection:


$ sudo ipconfig set en0 INFORM <IP Address>
$ sudo ifconfig en0 down 
$ sudo ifconfig en0 up

You can also click on your WiFi icon and reset it that way.  Don’t be surprised if this takes a while to reconnect.  Releasing and renewing IP addresses can take a bit across the network, so allow for the time required.

An Alias on Mac/Linux

The syntax varies a little depending on which OS you’re on.  Using the IP address from your tfvars file, set it as an alias with the following command:

$ sudo ifconfig en0 alias <IP Address> 255.255.255.0

Password: <admin password for workstation>

If you need to unset it later on:

sudo ifconfig en0 -alias <IP Address>

I found this to be an adequate option-  the alias was always there, (like any other alias, it just forwards everything on to the address that you’re recognized at in the file) but it may add time to the build, (still gathering data to confirm this.)  With this addition, I shouldn’t have to update my configuration file, (for the AWS Trial, that means setting it in our terraform.tfvars in the YOUR_IP parameter.)

Setting your IP Address on Windows

The browser commands to gather your IP Address work the same way, but if you want to change it via the command line, the commands are different for Windows PC’s:

netsh interface ipv4 show config

You’ll see your IP Address in the configuration.  If you want to change it, then you need to run the following:

netsh interface ipv4 set address name="Wi-Fi" static <IP Address> 255.255.255.0 <Gateway>
netsh interface ipv4 show config

You’ll see that the IP Address for your Wi-Fi has updated to the new address.  If you want to set it to DHCP, (dynamic) again, run the following:

netsh interface ipv4 set address name="Wi-Fi" source=dhcp

Now you can go wherever you darn well please, set an alias and run whatever terraform commands you wish.  All communication will just complete without any error due to a challenging new IP address.

Ain’t that just jiffy? OK, it may be quicker to just gather the IP address and update the tfvars file, but just in case you wanted to know what could be done and why we may not have built it into the AWS Trial, here it is! 🙂

Posted in AWS Trial, Delphix

February 10th, 2017 by dbakevlar

I ran across an article from 2013 from Straight Talk on Agile Development by Alex Kuznetsov and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.

Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as we recognize how similar it is to a standard environment, we stop over-thinking it, and it becomes quite simple to implement agile with even petabytes of data in a relational environment without using slow and archaic processes.

The second effect of this is to realize that we may start to acquire secondary responsibilities and take on ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.

A Virtual Can Be Virtualized

Don’t worry-  I’m going to show you that it’s not that difficult and virtualization makes it really easy to do all of this, especially when you have products like Delphix to support your IT environment.  For our example, we’re going to use our trusty AWS Trial environment, where we have already provisioned a virtual QA database.  We want to create a copy of our development virtualized web application to test some changes we’ve made and connect it to this new QA VDB.

From the Delphix Admin Console, go to Dev Copies and expand to view those available.  Click on Employee Web Application, Dev VFiles Running.  Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval.  Click on one and click on Provision.

Now this is where you need the information about your virtual database that you wish to connect to:

  1.  You will want to switch the provision target from the source to the Linux Target.

Currently the default is to connect to the existing development database, but we want to connect to the new QA we wish to test on.  You can ssh in as delphix@<ipaddress for linuxtarget> to gather this information.

2.  Gathering Information When You Didn’t Beforehand

I’ve created a new VDB to test against, with the idea that I wouldn’t want to confiscate an existing VDB from any of my developers or testers.  The new VDB is called EmpQAV1.  Now, if you’re like me, you won’t have remembered to grab the info about this new database before you went into the wizard to begin the provisioning.  No big deal, we’ll just log into the target and get it:

[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix  14816     1  0 17:23 ?        00:00:00 ora_pmon_devdb
delphix  14817     1  0 17:23 ?        00:00:00 ora_pmon_qadb
delphix  17832     1  0 17:32 ?        00:00:00 ora_pmon_EmpQAV1
delphix  19935 19888  0 18:02 pts/0    00:00:00 grep pmon

I can now set my ORACLE_SID:

[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1

Now, let’s gather the rest of the information we’ll need to connect to the new database:

[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
  Instance "EmpQAV1", status READY, has 1 handler(s) for this service...
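
With the service registered, a quick EZConnect test from the target, (a sketch-  substitute your own schema credentials for the placeholders) confirms the values we’re about to plug into the wizard:

$ sqlplus <schema_user>/<password>@//localhost:1521/EmpQAV1

SQL> select sysdate from dual;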

Provision Your VFile

Fill in all the values required in the next section of the provisioning setup:

Click Next and add any requirements to match your vfile configuration that you had for the existing environment.  For this one, there aren’t any, (additional NFS Mount points, etc.)  Then click Next and Finish.

The VFile creation should take a couple minutes max and you should now see an environment that looks similar to the following:

This is a fully functional copy of your web application, created from another virtual copy that can test against a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!

Why would you choose to do anything less?

Posted in AWS Trial, Delphix, Oracle

February 10th, 2017 by dbakevlar

So you thought you were finished configuring your AWS target, eh?  I already posted previously on how to address a fault with the RMEM, but now we’re onto the WMEM.  Wait, WM-what?

No, I fear a DBA’s work is never over, and when it comes to the cloud, our skill set has just expanded from what it was when we worked on-premises!

Our trusty Delphix Admin Console keeps track of settings on all our sources and targets, informing us when the settings aren’t set to what is recommended, so that we’ll be aware of any less than optimal parameters that could affect performance.

As we address latency in cloud environments, network settings become more important.

How WMEM Differs from RMEM

RMEM = receive

WMEM = send

Where RMEM is easy to remember as the receive settings, WMEM covers the send side of each TCP connection.

As root, we’ll add another line to the sysctl.conf file to reflect values other than defaults:

$ echo 'net.ipv4.tcp_wmem= 10240 4194304 12582912' >> /etc/sysctl.conf

Reload the values into the system:

$ sysctl -p /etc/sysctl.conf

Verify the settings are now active:

$ sysctl -a | grep net.ipv4.tcp_wmem

net.ipv4.tcp_wmem = 10240 4194304 12582912

That’s all there is to it.  Now you can mark the fault as resolved in the Delphix Admin Console.

Posted in AWS Trial, Delphix

February 8th, 2017 by dbakevlar

There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrives to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped to successfully deploy.

The EC2 Dashboard View

After you’ve selected your location, set up your security user/group and key pairs, there’s a quick way to see, (at least high level) if you’re ready to deploy the AWS Trial to the zone in question.

Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:

Notice in the dashboard, you can see that the key pairs, (1) and the expected Security Groups, (3) are displayed, which tells us that we’re ready to deploy to this zone.  If we double click on the Key Pair, we’ll see that it’s a match to the one we downloaded locally and will use in our configuration with Terraform:

How Terraform Communicates with AWS

These are essential to deploying in an AWS zone that’s configured as part of your .tfvars file for terraform.  You’ll note in the example below, we have both designated the correct zone and the key pair that is part of the zone we’ll be using to authenticate:


#VERSION=004
#this file should be named terraform.tfvars
# ENTER INPUTS BELOW

access_key="XXXXXXX"
secret_key="XXXXXXXXXX"
aws_region="us-east-1"
your_ip="xxx.xx.xxx.xxx"
key_name="Delphix_east1" #don't include .pem in the key name
instance_name="Delphix_AWS"
community_username="xxx@delphix.com"
community_password="password"
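
With the terraform.tfvars file filled in and sitting in the Delphix_demo folder, the deployment itself is just the standard Terraform workflow, (a sketch-  run it from the same folder that holds the trial’s .tf files):

$ cd Delphix_demo
$ terraform init      # initialize providers, (needed on newer Terraform releases)
$ terraform plan      # sanity check what will be created
$ terraform apply     # deploy the Delphix Engine, source and target
$ terraform destroy   # tear it all back down when you're finished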

Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration, (tfvars) file that we use with the Delphix deployment via Terraform into AWS.

Posted in AWS Trial, Delphix, Oracle

February 1st, 2017 by dbakevlar

Delphix focuses on virtualizing non-production environments, easing the pressure on DBAs, resources and budget, but there is a second use case for the product that we don’t discuss nearly enough.

Protection from data loss.

Jamie Pope, one of the great guys that works in our pre-sales engineering group, sent Adam and me an article on one of those situations that makes any DBA, (or an entire business, for that matter) cringe.  GitLab.com was performing some simple maintenance and someone deleted the wrong directory, removing over 300G of production data from their system.  It appears they were first going to use PostgreSQL’s “vacuum” feature to clean up the database, but decided they had extra time to clean up some directories and that’s where it all went wrong.  To complicate matters, the onsite backups had failed, so they had to go to offsite ones, (and every reader moans…)

Even this morning, you can view the tweets of the status for the database copy and feel the pain of this organization as they try to put right the simple mistake.

Users are down as they work to get the system back up.  Just getting the data copied before they’re able to perform the restore is painful and as a DBA, I feel for the folks involved:

How could Delphix have saved the day for GitLab?  Virtual databases, (VDBs) are read/write copies derived from a recovered image that is compressed, de-duplicated and then kept in a state of perpetual recovery, with the transactional data applied at a specific interval, (commonly once every 24 hrs) to the Delphix Engine source.  We support a large number of database platforms, (Oracle, SQL Server, Sybase, SAP, etc.) and are able to virtualize the applications that are connected to them, too.  The interval of how often we update the Delphix Engine source is configurable, so depending on network and resources, this interval can be decreased to apply more often, depending on how up to date the VDBs need to be vs. production.

With this technology, we’ve come into a number of situations where customers suffered a cataclysmic failure situation in production.  While traditionally, they would be dependent upon a full recovery from a physical backup via tape, (which might be offsite) or scrambling to even find a backup that fit within a backup to tape window, they suddenly discovered that Delphix could spin up a brand new virtual database with the last refresh before the incident from the Delphix source and then use a number of viable options to get them up and running quickly.

  1. Switch the users and application to point to the new VDB that was recovered to the point in time, (PIT) before the incident occurred.  Meanwhile, IT is able to take their time recovering the production database with the physical backup, with little outage to the business.
  2. Create a VDB to the PIT before the failure and then create a connection between the production and the VDB, making a copy back to production of the data that was lost.
  3. If there was dire loss, (i.e. disk, etc.)  create a VDB to the PIT before the failure and perform what’s called a V2P, or virtual to physical, rehydrating the virtual data to become the new physical database.

This type of situation happens more often than we’d like to admit.  Many times resources have been working long shifts and make a mistake due to exhaustion; other times someone unfamiliar, with access to something they shouldn’t have, simply makes a dire mistake.  These things happen, and this is why DBAs are always requesting two or three methods of backups.  We learn quite quickly that we’re only as good as our last backup, and if we can’t protect the data, well, we won’t have a job for very long.

Interested in testing it out for yourself?  We have a really cool free Delphix trial via the Amazon cloud that uses your AWS account.  There’s a source host and databases, along with a virtual host and databases, so you can create VDBs, blow away tables, recover via a VDB and create a V2P, (virtual to physical) all on your own.

Posted in AWS Trial, cloning, Delphix

January 31st, 2017 by dbakevlar

I’ve been at Delphix for just over six months now.  In that time, I was working with a number of great people on a number of initiatives surrounding competitive analysis, the company roadmap and some new initiatives.  With the introduction of our CEO, Chris Cook, new CMO, Michelle Kerr and other pivotal positions within this growing company, it became apparent that we’d be redirecting our focus on Delphix’s message and connections within the community.

I was still quite involved in the community, even though my speaking had been trimmed down considerably with the other demands at Delphix.  Even though I wasn’t submitting abstracts to many of the big events as I’d done in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management and Agile communities, along with a re-introduction into the SQL Server community.

As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, is now going to be upwards of 80% as the Technical Evangelist for the Office of the CTO at Delphix.  I’m thrilled that I’m going to be speaking, engaging and blogging with the community at a level I’ve never done before.  I’ll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor and as the first members of this new group at Delphix.  I would like to thank all those that supported me to gain this position and the vision of the management to see the value of those in the community that make technology successful day in and day out.

I’ve always been impressed with the organizations who recognize the power of grassroots evangelism and the power it has in the industry.  What will I and Adam be doing?  Our CEO, Chris Cook said it best in his announcement:

As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events.  They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed.  They will connect identified trends back into Marketing and Engineering to help shape our message and product direction.  In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings. []

As many of you know, I’m persistent, but rarely patient, so I’ve already started to fulfill my role-  be prepared for some awesome new content, events that I’ll be speaking at and new initiatives.  The first on our list was releasing the new Delphix Trial via the Amazon Cloud.  You’ll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you’re brand new to the cloud.  In the upcoming months, watch for new features, stories and platforms that we’ll introduce you to.  This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix, with their own AWS account, and start to learn the power of Delphix with use case studies that are directed to their role in the IT organization.

Posted in AWS Trial, DBA Life, Delphix, Oracle

January 30th, 2017 by dbakevlar

I don’t want to alarm you, but there’s a new Delphix trial on AWS!  It uses your own AWS account and with a simple set up, allows you to deploy a trial Delphix environment.  Yes, you heard me right-  with just a couple of steps, you could have your own setup to work with Delphix!

There’s documentation to make it simple to deploy, simple to understand and then use cases for individuals determined by their focus, (IT Architect, Developer, Database Administrator, etc.)

This was a huge undertaking and I’m incredibly proud of Delphix to be offering this to the community!

So get out there and check this trial out!  All you need is an AWS account on Amazon and if you don’t have one, it only takes a few minutes to create one and set it up, just waiting for a final verification before you can get started!  If you have any questions or feedback about the trial, don’t hesitate to email me at dbakevlar at gmail.

Posted in AWS Trial, Cloud, Delphix

January 24th, 2017 by dbakevlar

So you’ve deployed targets with Delphix on AWS and you receive the following error:

It’s only a warning, but it states that your default of 87380 is below the recommended second value for the ipv4.tcp_rmem property.  Is this really an issue and do you need to resolve it?  As usual, the answer is “it depends” and it’s all about how important performance is to you.

What is net.ipv4.tcp_rmem?

To answer this question, we need to understand network performance.  I’m no network admin, so I am far from an expert on this topic, but as I’ve worked more often in the cloud, it’s become evident to me that the network is the new bottleneck for many organizations.  Amazon has even built a transport, (the Snowmobile) to bypass this challenge.

The network parameter settings in question have to do with the network window sizes for the cloud host in question-  how the TCP window behaves across WAN links.  We’re on AWS for this environment and the Delphix Admin Console was only the messenger, letting us know that the settings currently provided for this target are less than optimal.

Each time the sender hits this limit, they must wait for a window update before they can continue and you can see how this could hinder optimal performance for the network.

Validation First

To investigate this, we’re going to log into our Linux target and su over to root, which is the only user who has the privileges to edit this important file:

$ ssh delphix@<IP Address for Target>
$ su root

As root, let’s first confirm what the Delphix Admin Console has informed us of by running the following command:

$ sysctl -a | grep net.ipv4.tcp_rmem 

net.ipv4.tcp_rmem = 4096 87380 4194304

There are three values displayed in the results:

  • The first value is the minimum receive window that will be set for each TCP connection, even when the system is overwhelmed.
  • The second is the default value allocated to each TCP connection.
  • The third is the maximum that can be allocated to any TCP connection.

To translate what this second value corresponds to-  this is the size of data in flight any sender can communicate via TCP to the cloud host before having to receive a window update.

So why are faster networks better?  Literally, the faster the network, the closer the bits and the more data that can be transferred.  If there’s a significant delay, due to a low setting on the default of how much data can be placed on the “wire”, then the receive window won’t be used optimally.
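
To put some rough numbers on it, here’s the back-of-the-envelope math, assuming a 10 ms round trip to the cloud host-  per connection throughput is capped at roughly the receive window divided by the round-trip time:

# max throughput per connection ~ receive window / round-trip time
#   default window:      87380 bytes    / 0.010 s  ~  8.7 MB/s
#   recommended default: 12582912 bytes / 0.010 s  ~  1.2 GB/s

The real numbers vary with latency and congestion, but this shows why that default second value is the one the Delphix Admin Console complains about.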

This will require us to update our parameter file and either edit or add the following lines:

net.ipv4.tcp_window_scaling = 1

net.core.rmem_max = 16777216

net.ipv4.tcp_rmem = 4096 12582912 16777216

I’m using the values recommended by Brendan Gregg’s blog post on tuning EC2 instances.  This leaves a pretty narrow difference between the minimum and maximum for the receive window, but it is now within the recommended range for enhanced performance.

After you’ve updated the sysctl.conf file, you’ll need to reload it and verify the settings are active:

$ sysctl -p /etc/sysctl.conf
$ sysctl -a | grep net.ipv4.tcp_rmem

net.ipv4.tcp_rmem = 4096 12582912 16777216

Ahhh, that looks much better… 🙂

Posted in AWS Trial, Delphix

January 9th, 2017 by dbakevlar

We, DBAs, have a tendency to over think everything.  I don’t know if the trait to over think is just found in DBAs or if we see it in other technical positions, too.

I believe it corresponds to some of why we become loyal to one technology or platform.  We become accustomed to how something works and it’s difficult for us to let go of that knowledge and branch out into something new.  We also hate asking questions-  we should just be able to figure it out, which is why we love blog posts and simple documentation.  Just give us the facts and leave us alone to do our job.

Take the cloud-  many DBAs were quite hesitant to embrace it.  There was a fear of security issues, challenges with the network and, more than anything, a learning curve.  As is common, hindsight is always 20/20.  Once you start working in the cloud, you often realize that it’s much easier than you first thought it would be and your frustration is your worst enemy.

So today we’re going to go over some basic skills the DBA requires to manage a cloud environment, using Amazon, (AWS) as our example and the small changes required to do what we once did on-premise.

In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.

Understanding Locations, Regions and Zones

EC2 is built out into regions and zones.  Knowing what region you’re working in is important, as it allows you to “silo” the work you’re doing and, in some ways, isn’t much different than a data center.  Inside of each of these regions are availability zones, which isolate services and features even more, allowing definitive security at a precise level, with resources shared only when you deem they should be.

Just as privileges granted inside a database can both be a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.

Amazon provides a number of links with detailed information on this topic, but here are the tips I think are important for a DBA to know:

  1.  Before setting anything up that is part of a complete solution requiring multiple setup page configurations, ALWAYS check the region in the upper right corner.  I was surprised when it would change from page to page or after a login-

2.  If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to what region under the resources section:
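
If you prefer the command line to clicking around the dashboard, the AWS CLI, (assuming you have it installed and configured) offers the same sanity check on regions:

$ aws configure get region
us-west-1

$ aws ec2 describe-regions --output table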

Understanding Security Keys

Public key cryptography makes the EC2 world go round.  Without this valuable 2048-bit SSH-2 RSA key encryption, you can’t communicate or log into your EC2 host securely.  Key pairs, a combination of a private and public key should be part of your setup for your cloud environment.

Using EC2’s mechanism to create these is easy to do and eases management.  It’s not the only way, but it does simplify things and, as you can see above in the resource information from the dashboard, it also offers you a one-stop shop for everything you need.

When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using and it’s important that you keep track of it, as there’s no way to recreate the private key that will be required to connect to the EC2 host.

Your key pair is easy to create by first accessing your EC2 dashboard and then scroll down on the left side and click on “Key Pairs”.  From this console, you’ll have the opportunity to create, import a pre-existing key or manage the ones already in EC2:

Before creating one, always verify the region you’re working in, as we discussed in the previous section, and if you’re experiencing issues with your key, check for typographical errors and confirm that the location of the private key file matches the name listed for identification.

If more than one group is managing the EC2 environment, carefully consider before deleting a key pair.  I’ve experienced the pain caused by a key removal that created a production outage.  Creation of a new key pair is simpler to manage than implementation of a new key pair across application and system tiers after the removal of one that was necessary.
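
The key pair creation can also be scripted with the AWS CLI if you’d rather skip the console-  a sketch, with <keypair_name> being whatever name you choose:

$ aws ec2 create-key-pair --key-name <keypair_name> --query 'KeyMaterial' --output text > <keypair_name>.pem
$ chmod 400 <keypair_name>.pem   # ssh will refuse a private key that's world readable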

Understanding Roles and Security

Security Groups are silo’d for a clear reason and nowhere is this more apparent than in the cloud.  To ensure that the cloud is secure, setting clear and defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.

As we discussed in Key Pairs, our Security Groups are also listed by region under resources so we know they exist at a high level.  If we click on the Security Groups link under Resources in the EC2 Dashboard, we’ll go from seeing 5 security group members:

To viewing the list of security groups:

If you need to prove that these are for N. California, (i.e. US-West-1) region, click on the region up in the top right corner and change to a different region.  For our example, I switched to Ohio, (us-east-2) and the previous security groups aren’t listed and just the default security group for Ohio region is displayed:

Security groups should be treated in the cloud the same way we treat privileges inside a database-  granting the least privileges required is best practice.

Understanding How to SSH to a Host

You’re a DBA, which means you’re most likely most comfortable at the command line.  Logging in via SSH on a box is as natural as walking and everything we’ve gone through so far was to prepare you for this next step.

Your favorite command line tool will do, no matter if it’s PuTTY or Terminal; if you’ve set up everything in the previous sections correctly, then you’re ready to log into the host, aka instance.

  1.  Ensure your downloaded private key is saved in an easily accessible spot for you to use to log in or that you know the username/password, (keys just make this easier…)
  2. Gather the information about “instances” by clicking on the EC2 dashboard, then click on Instances.
  3. The Public DNS and the Public IP are displayed- note the region, too:

You can use this information to then ssh into the host, (the Public DNS already includes the region, in the form ec2-xx-xx-xx-xx.<region>.compute.amazonaws.com):

ssh -i "<keypair_name>.pem" <osuser>@<public DNS or IP address>

Once logged in as the OS user, you can SU over to the application or database user and proceed as you would on any other host.
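
Putting it together, here's a minimal sketch, assuming the hypothetical key pair my-ec2-key.pem from earlier, a made-up host name, the Amazon Linux default ec2-user account and an oracle OS account on the instance:

ssh -i "my-ec2-key.pem" ec2-user@ec2-203-0-113-25.us-west-1.compute.amazonaws.com
sudo su - oracle     # switch to the database or application owner once you're on the host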

If you attempt to log into a region with a key pair from another region, it will state that the key pair can't be found- another aspect that shows the importance of regions.

Understanding How to SCP a File

This is the last area I'll cover today, (I know, a few of you are saying, "good, I've already got too much in my head to keep straight, Kellyn…")

With just about any cloud offering, you can bring your own license.  Although there are a ton of pre-built images, (AMIs in AWS, VHDs in Azure, etc.), you may need to use a bare metal OS image and load your own software or, as most DBAs do, bring over patches to maintain the database you have running out there.  Just because you're in the cloud doesn't mean you don't have a job to do.

Change over to the directory that contains the file that you need to copy and then run the following:

scp -i <keypair>.pem <file name to be transferred> <osuser>@<public DNS or IP address>:/<directory you wish to place the file in>/.
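
For example, here's a minimal sketch using the same hypothetical key and host as in the SSH section, copying a patch zip into a hypothetical /u01/stage directory on the instance:

scp -i my-ec2-key.pem patchfile.zip \
  ec2-user@ec2-203-0-113-25.us-west-1.compute.amazonaws.com:/u01/stage/.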

If you try to use a key pair from one region to SCP to a host, (instance) in another region, you won't receive an error, but it will act as if you skipped the "-i" and the key pair, and you'll be prompted for the user's password:

<> password: 

pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip             100%   20MB  72.9KB/s   04:36

This is a good start to getting going as a DBA in the cloud without over-thinking it.  I'll be posting more in the upcoming weeks that will not only assist those already in the cloud, but also those wanting to find a way to invest more in their own cloud education!

Posted in Cloud, Delphix, Delphix Express

December 16th, 2016 by dbakevlar


On the first day with Delphix, I provisioned with glee, an IT Manager Happy.

On the second day with Delphix, I provisioned with glee, two SAP ASE and an IT Manager Happy.

On the third day with Delphix, I provisioned with glee, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the fourth day with Delphix, I provisioned with glee, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the fifth day with Delphix, I provisioned with glee…

Five Cloud Migrations!

Four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the sixth day with Delphix, I provisioned with glee,

Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the seventh day with Delphix, I provisioned with glee,

Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the eighth day with Delphix, I provisioned with glee,

Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the ninth day with Delphix, I provisioned with glee,

Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the tenth day with Delphix, I provisioned with glee,

Ten DevOps leading, Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the eleventh day with Delphix, I provisioned with glee,

Eleven DB2 humming, Ten DevOps leading, Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.

On the twelfth day with Delphix, I provisioned with glee,

Twelve databases masking, Eleven DB2 humming, Ten DevOps leading, Nine applications applying, Eight testers testing, Seven developers coding, Six SQL Servers running, Five Cloud Migrations, four EBS Clones, three Oracle Databases, two SAP ASE and an IT Manager Happy.


Posted in DBA Life, Delphix

November 17th, 2016 by dbakevlar

I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present-  SQL Server release for Linux.


I work for a company that supports Oracle and SQL Server, so I wanted to know how *real* this release was.  I first wanted to test it out on a new build and, since the instructions recommend and link to an Ubuntu install, I created a new VM and started from there-


Ubuntu Repository Challenge

There were a couple of packages missing until the repository list was updated to pull from universe by adding the repository locations to the sources.list file:


There is also a carriage return at the end of the MSSQL repository entry when it's added to the sources.list file.  Remove this before you save.
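
In case it helps, here's a minimal command line sketch of that repository change, assuming Ubuntu 16.04, (xenial)- adjust the codename and mirror for your release:

# enable the universe component, then refresh the package index
sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu xenial universe"
sudo apt-get update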

Once you do this, if you've chosen to share your network connection with your Mac, you should be able to install successfully when running the commands found on the install page from Microsoft.

CentOS And MSSQL

The second install I did was on a VM using CentOS 6.7 that was pre-discovered as a source for one of my Delphix engines.  The installation failed upon running it, which you can see here:


Even attempting to work around this wasn't successful; the challenge was that the older openssl version wasn't going to work with the new SQL Server installation.  I decided to simply upgrade to CentOS 7.

CentOS 7

The actual process of upgrading is pretty easy, but there are some instructions out there that are incorrect, so here are the proper steps:

  1.  First, take a backup, (snapshot) of your image before you begin.
  2. Edit the yum repository configuration to prep it for the upgrade by creating the following file: /etc/yum.repos.d/upgrade.repo
    1. Add the following information to the file:
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0

Save this file and then run the following:

yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant

You may see that one of the packages states it won't install because a newer version is already available-  that's fine.  As long as the newer packages are in place, you're fine.  Now run the pre-upgrade check:

preupg

The final log output may not be written, either.  If you can verify the run outside of the log and it reports that it completed successfully, you can consider the pre-upgrade successful as a whole.

Once this is done, import the GPG Key:

rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

After the key is imported, then you can start the upgrade:

/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64

Once done, then you’ll need to reboot before you run your installation of SQL Server:

reboot

MSSQL Install

Once the VM has cycled, you can run the installation as root using the Red Hat repository configuration, (my delphix user doesn't have the rights and I decided to have MSSQL installed under root for this first test run):

su
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo

Now run the install:

sudo yum install -y mssql-server

Once it's completed, it's time to set up your MSSQL admin account and password:

sudo /opt/mssql/bin/sqlservr-setup

One more reboot and you’re done!

reboot

You should then see your SQL Server service running with the following command:

systemctl status mssql-server

You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.
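
If you want a quick smoke test in the meantime, here's a minimal sketch with sqlcmd, assuming you've also installed the mssql-tools package from Microsoft's repository and substituting your own SA password:

sqlcmd -S localhost -U SA -P '<YourSAPassword>' -Q "SELECT @@VERSION;"
sqlcmd -S localhost -U SA -P '<YourSAPassword>' -Q "CREATE DATABASE TestDB;"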

OK, you linux fans, go MSSQL! 🙂

Posted in Delphix, SQLServer

October 20th, 2016 by dbakevlar

I’ll be attending my very first Pass Summit next week and I’m really psyched!  Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix ware.)

It's an amazing event, and for those of you who are my Oracle peeps wondering what Summit is, think Oracle Open World for the Microsoft SQL Server expert folks.


I was a strong proponent of immersing myself in different database and technology platforms early on.  You never know when knowledge gained in an area you never thought would be useful ends up saving the day.

Just Goin to Take a Look

Yesterday this philosophy came into play again.  A couple of folks were having some challenges with a testing scenario in a new MSSQL environment and asked other Delphix experts for assistance via Slack.  I am known for multi-tasking, so I thought that, while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work.  As soon as I logged into the web session, the guys welcomed me and said, "Maybe Kellyn knows what's causing this error…"

Me- “Whoops, guess I gotta pay attention…”

SQL Server, unlike Oracle, has always been multi-tenant as far as the broader database world is concerned.  This translates to a historical architecture that has a server level login AND a user database level username.  The login ID, (login name) is linked to a user ID, (and thus a user name) in the user database, (think schema).  Oracle is starting to migrate to a similar architecture with Database version 12c, moving away from schemas within a database and towards multi-tenant, where the pluggable database, (PDB) serves as the schema.
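
As a simple illustration of that split, here's a minimal T-SQL sketch, (the login, database and user names are hypothetical):

-- server level principal
CREATE LOGIN delphix_login WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
-- database level principal, mapped to the server login
USE sales_db;
GO
CREATE USER delphix_user FOR LOGIN delphix_login;
GO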

I didn't recognize the initial error that arose from the clone process, but that's not uncommon, as error messages can change with versions and with proprietary code.  I have also worked very little, if at all, on MSSQL 2014.  When the guys clicked on the target user database in Management Studio and were told they didn't have access, it wasn't lost on anyone to check the login and user mapping, which showed the login didn't have a mapping to a username in this particular user database.  What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed and failed.

Old School, New Fixes

This is where "old school" MSSQL knowledge came into play.  Most of my database knowledge for SQL Server is from versions 6.5 through 2008.  Along with a lot of recovery and migrations, I also performed a process very similar to the option in Oracle to plug or unplug a PDB, referred to in MSSQL terminology as an "attach and detach" of a database.  You could then easily move the database to another SQL Server, but you would very often end up with what are called "orphaned users."  This is where the login IDs weren't connected to the user names in the database and needed to be resynchronized correctly.  To perform this task, you could dynamically create a script to pull the logins that didn't already exist, run it against the "target" SQL Server and then run a procedure to synchronize the logins and user names:

Use  <user_dbname>
go
exec sp_change_users_login 'Update_One','<loginname>','<username>'
go
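
If you're not sure which users are orphaned in the first place, the same procedure has a report mode that lists database users whose SIDs no longer match a server login:

Use  <user_dbname>
go
exec sp_change_users_login 'Report'
go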

For the problem experienced above, it was simply the delphix user that wasn't linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.

This wasn't the long term solution, but it pointed to where the break was in the clone design, which can now be addressed.  It also shows that experience, no matter how benign it may seem at the time, can come in handy later in our careers.


I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.

See you next week in Seattle!

Posted in Delphix, SQLServer

October 4th, 2016 by dbakevlar

My sabbatical from speaking is about to end in another week, and I'll be returning with quite the big bang.


Oct. 14th

First up is the Upper NY Oracle User Group, (UNYOUG) for a day of sessions in Buffalo, NY.  I'll be doing three talks:

  1.  Virtualization 101
  2.  The Limitless DBA
  3.  AWR and ASH with Database 12c

Oct 25th-28th

I'll be at Pass Summit!  I've been wanting to attend this conference since I was managing MSSQL 7 databases!  I finally get to go, but as I'm newly back in the MSSQL saddle, no speaking sessions for me.  I do have a number of peers on the MSSQL side of the house, so I'm hoping to have folks show me around, and if you have the time to introduce yourself or introduce me to events and people at this fantastic event, please do!

Nov. 2nd

Next, I head into November with quite the number of talks.  I'll start out on Nov. 2nd in Detroit, MI at the Michigan Oracle User Summit, (MOUS) doing a keynote, "The Power in the Simple Act of Doing" and then a technical session, "Virtualization 101".

Nov. 3rd

I’ll fly out right after I finish my second talk so I can make my way down to Raleigh, NC for the East Coast Oracle conference, (ECO), where I’ll also be doing a couple presentations on Nov. 3rd.

Nov. 9th

The next week I get to stay close to home for the Agile Test and Test Automation Summit.  This is a brand new event in the Denver area.  I'll be doing a new talk here on Test Data Management, a hot buzzword, but one whose complexities and automation people rarely understand.

Nov. 10th

The next day, I'm back downtown in Denver, where I'll present at the Rocky Mountain DataCon, (RMDC) event.


The RMDC is a newer event and it's really been picking up traction in the Denver area.  I'll be speaking on "The Limitless DBA", focusing on the power of virtualization.  Kent Graziano, the Data Warrior and evangelist for Snowflake, will be there, too.  I'm glad to see this new local event taking off, as the Denver/Boulder area consistently ranks high as one of the best places to be if you're in the tech industry.

I'm working to take it easy during the month of December, as I'll have enough to do just catching up at work at Delphix and with RMOUG duties for the upcoming Training Days 2017 conference in February!

Posted in DBA Life, Delphix
