Category: AWS

August 7th, 2017 by dbakevlar

It’s finally time to upgrade our Linux Target!  OK, so we’re not going to upgrade the way a DBA would normally upgrade a database server, since we’re working with virtualization.

So far, we’ve completed:

  1. Updated our instances so that we’ll have a GUI interface if we need one.
  2. Installed Oracle on the Linux Source and upgraded our DSource database to 12c.


Now we’re done with our Linux Source and on to our Linux Target.

Install and Configure VNC and Oracle

We’ll run through the VNC Viewer installation and configuration just as we did in Part I and Part II. We’ll also install Oracle, but this time it’s a software-only installation.

We’ll install the Enterprise Edition, making sure to use the same path as on our Linux Source (/u01/app/oracle/product/12.1/db_1). We’re not installing multi-tenant, as we didn’t configure this on our source, either.

Once that is complete, it’s time to get our VDBs upgraded.

The first thing you need to remember is that the VDBs are simply virtual images of our DSource, which is already UPGRADED.

Add the New Oracle Home to the Linux Target

Log into the Delphix Admin Console and click on Environments.

Click on the Linux Target and then click on the Refresh button.

Click on the Databases tab and you’ll see the DB12c Oracle home is now present in the list.

Prep the VDBs for the Switch to the New Home

Copy your environment profile from 11g.env to 12c.env, update the Oracle home in the copy to point to the new 12c home, and save the file.
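From the command line, that might look like this (a minimal sketch; I’m assuming the profiles live in the delphix user’s home directory and that the 12c path matches the software-only install above):

$ cp ~/11g.env ~/12c.env
$ vi ~/12c.env

Inside 12c.env, point the Oracle home at the new install:

# paths assume the install locations used in this series
export ORACLE_HOME=/u01/app/oracle/product/12.1/db_1
export PATH=$ORACLE_HOME/bin:$PATH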

Now I have three VDBs on this target:

[delphix@linuxtarget ~]$ ps -ef | grep pmon

delphix   7501     1  0 Jul12 ?        00:01:17 ora_pmon_devdb
delphix   8301     1  0 Jul06 ?        00:01:49 ora_pmon_VEmp6
delphix  16875     1  0 Jul05 ?        00:01:57 ora_pmon_qadb

Log into the Linux Target and, from the command line, set the environment, then log into each database via SQL Plus and shut it down.

. 11g.env

export ORACLE_SID=VEmp6f
sqlplus / as sysdba
shutdown immediate;
exit;

and so on and so forth…. 🙂
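If you’d rather not repeat that by hand for each database, a quick loop does the same thing (a sketch only; the SID list below is from my target, so yours will differ):

. 11g.env
# SIDs from my target; replace with yours
for sid in devdb VEmp6f qadb; do
  export ORACLE_SID=$sid
  # shut each VDB down cleanly before the home switch
  sqlplus -s / as sysdba <<EOF
shutdown immediate;
exit;
EOF
done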

Copy all the init files for the VDBs from the 11g Oracle Home’s dbs directory over to the 12c Oracle Home’s dbs directory.
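Something like the following (a sketch; I’m assuming the 11g home is /u01/app/oracle/product/11.2/db_1, so adjust to your actual 11g path):

# adjust the 11g path to match your environment
$ cp /u01/app/oracle/product/11.2/db_1/dbs/init*.ora /u01/app/oracle/product/12.1/db_1/dbs/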

And this is where it all went wrong for two of the VDBs…

Back on the Delphix Admin Console, click on Manage –> Datasets

Click on each of the VDBs involved.  Click on Configuration –> Upgrade (the up arrow icon) and say yes to upgrading.  Update the Oracle home in the drop-down from the 11g home to the new 12c Oracle home and click the gold check mark to confirm.

OK, so this is what I WOULD have done for all three VDBs if I’d been working with JUST VDBs, but this is where it gets a bit interesting and I had to go off the beaten path for a solution.  I’m unsure if this is documented anywhere inside Delphix (Delphix Support is awesome, so I’m sure they already know this), but for my own sanity, here’s the situation and the answer.  The environment I’m working on is built off of AWS AMIs that consist of Delphix containers.  Containers are very powerful, allowing you to “package up” a database, an application, and other tiers of virtualized environments, offering the ability to manage it all as one piece.  This was the situation for the qadb and devdb instances.

Due to this, I couldn’t run the upgrade inside the Delphix Admin Console, since these two VDBs were part of Delphix “data pods.”  The following steps address this change.

Remove the Containers (and Subsequently the VDBs as Well!)

  1. Log into Delphix’s Jet Stream.
  2. In the upper right hand corner, click on Usage Overview.
  3. Scroll down and click on Employee Application (it’s the template for the VDBs in question).
  4. At the bottom of this page, you’ll see the two containers that include the VDBs.  To the right of each is a trash can icon for delete.  (The reason deleting is an option is that I have a template built for this container, so it will be very simple to recreate a VDB and vFile to repopulate it, a matter of minutes, max.)
  5. Delete the two containers that control the administration of the two VDBs still pointing to the 11g home.

Create the New VDBs and Virtualized Application (vFiles)

Now, log into the Delphix Admin console.

  1. Provision two VDBs from the orcl source database, one as devdb and the other as qadb, just as before, both on the Linux Target.
  2. Provision two vFiles of the web application, one as QA_Web and the other as DEV_Web, on ports 1080 and 2080, keeping all other defaults.
  3. Once complete (a couple of minutes, max), let’s return to Jet Stream and create the containers that will house the system.

Create the New Containers From the Template

In Jet Stream:

  1. Click on Data Management in the upper right hand corner.
  2. You’ll be brought to the Templates tab; click on the Employee Application template.
  3. Click on Add Container.
  4. Name it “Dev 12c Container”, set the Owner to Dev, choose devdb and DEV_Web for the sources, then complete the container creation.
  5. Click again on Add Container.
  6. Name it “QA 12c Container”, set the Owner to QA, choose qadb and QA_Web for the sources, then complete the container creation.

The containers take just a moment to create, and that’s all there is to it.  You now have two DB12c environments that are completely containerized and upgraded from their previous 11g state.

Our source shows we’re using our upgraded DB12c database.
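You can also confirm this from SQL Plus on the host (a quick sanity check; the exact banner text will vary by patch level):

$ sqlplus / as sysdba

SELECT banner FROM v$version;

The banner should now report Oracle Database 12c Enterprise Edition.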

And we can also see everything is upgraded and happy in our Delphix Administration Console.  

It may have taken a little longer to upgrade with the complexity of the containers introduced, but the power of data pods is exceptional when we’re managing our data, the database, and the application as one piece anyway.  Shouldn’t we treat it as one?


Posted in AWS, Delphix, Oracle

July 31st, 2017 by dbakevlar

This is Part III in a four-part series on how to:

  1. Enable VNC Viewer access on Amazon EC2 hosts.
  2. Install DB12c and upgrade a DSource for Delphix from 11g to 12c (12.1).
  3. Update the Delphix configuration to point to the newly upgraded 12c database and the new Oracle 12c home.
  4. Install DB12c and upgrade target VDBs for Delphix residing on AWS to 12.1 from the newly upgraded source.

In Part II, we finished upgrading the DSource database, but now we need to get it configured on the Delphix side.

Log into the Delphix Admin Console to make the changes required to recognize that the DSource is now DB12c and has a new Oracle home.

Log into the Delphix console as the Delphix_Admin user and go to Manage –> Environments.

Click on the Refresh button and let the system recognize the new Oracle Home for DB12c.

Once complete, you should see the 12.1 installation we performed on the Linux Source now listed in the Environments list.

Click on Manage –> Datasets and find the Dsource 11g database and click on it.

Click on the Configuration tab and click on the Upgrade icon (a small up arrow in the upper right).

Update to the new Oracle Home that will now be listed in the dropdown and scroll down to save.

Now click on the camera icon to take a snap sync to ensure everything is functioning properly.  This should only take a minute to complete.

The DSource is now updated in the Delphix Admin Console and we can turn our attention to the Linux Target and the VDBs that source from this host.  In Part IV, we’ll dig into the other half of the source/target configuration and how I upgraded Delphix environments, with a few surprises!


Posted in AWS, Delphix, Oracle

July 26th, 2017 by dbakevlar

I’m finally getting back to upgrading the Linux Source for a POC I’m doing with some folks, picking up from where we left off in Part I.

Address the Display Issue

Now that we have our VNC Viewer working on our Amazon host, the first thing we’ll try is to run the Oracle installer (unzipped location –> database –> runInstaller), but it’s going to fail because we’re missing the xdpyinfo file.  To verify this, you’ll need to open up a terminal from Application –> System Tools –> Terminal:

$ ls -l /usr/bin/xdpyinfo
 ls: /usr/bin/xdpyinfo: No such file or directory

We’ll need to install this with yum:

$ sudo yum -y install xorg-x11-utils

Once we’ve completed this, let’s verify our display:

$ echo $DISPLAY

:1.0 <– (an empty hostname means local; the first number is the display and the second is the screen, just as ipaddress:display identifies your VNC Viewer connection.)

If it’s correct, you can test it by executing xclock:

$ xclock

The clock should appear on the screen if the display is set correctly.
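If the clock doesn’t appear, DISPLAY may simply be unset or pointing at the wrong display; set it to match your VNC session (display 1 in this example) and retest:

$ export DISPLAY=:1.0   # match the display number from your VNC session
$ xclock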

Install Oracle 12c

Run the installer:

$ ./runInstaller

The installer will come up for Oracle 12c and you can choose to enter your information, but I chose to stay uninformed… 🙂  I chose to install AND upgrade the database to DB12c from 11g.

I chose to ignore the warnings for swap and the few libraries by clicking Ignore All, and proceeded with the installation.

Root.sh and the Trace Analyzer

Once the installation of the new Oracle Home is complete, choose to run the root.sh script when prompted:

$ sudo /u01/app/oracle/product/12.1/db_1/root.sh

Overwrite all files when prompted by the script.  Installing the Oracle Trace File Analyzer is up to you, but I chose to include it so I can check it out at a later date.  You’ll then be prompted to choose the database to upgrade.  We’re going to upgrade our source database, ORCL in this example.

Upgrade Our Oracle DSource (Database)

Choose to proceed with the upgrade on the database, but know that you’ll require more space for the archive logs generated during the upgrade.  The check will tell you how much to add, but I’d add another 1GB to ensure you’re prepared for the other steps we’ll run as we go through the preparation.

Log into SQL Plus as SYSDBA to perform this step:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=8100M;
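If you want to see where the recovery area stands before you resize it, a quick query shows the current limit and usage (a minimal check; I’ve trimmed the columns for readability):

SELECT name,
       space_limit/1024/1024 AS limit_mb,
       space_used/1024/1024  AS used_mb
FROM   v$recovery_file_dest;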

Go through any warnings, but steps like stats collection and grants on triggers will have to be performed post-upgrade.

Drop the OLAP catalog:

$ sqlplus / as sysdba

@$ORACLE_HOME/olap/admin/catnoamd.sql

exit

Remove the OEM catalog for Enterprise Manager, first shutting down the console from the terminal:

$ emctl stop dbconsole

Copy emremove.sql from the 12c Oracle Home’s rdbms/admin directory and place it in the same location under the 11g home.
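In shell terms, something like this (a sketch; I’m again assuming the 11g home is /u01/app/oracle/product/11.2/db_1, so adjust to your path):

# adjust the 11g path to match your environment
$ cp /u01/app/oracle/product/12.1/db_1/rdbms/admin/emremove.sql /u01/app/oracle/product/11.2/db_1/rdbms/admin/

With the script in place, log into SQL Plus as SYSDBA: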

SET ECHO ON;

SET SERVEROUTPUT ON;

@$ORACLE_HOME/rdbms/admin/emremove.sql

Empty the recycle bin after these steps:

purge recyclebin;

The assumption is that you have a backup prepared, or that you can use Flashback with the necessary resources allocated, before proceeding with the upgrade.
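If you’re going the Flashback route, a guaranteed restore point taken now makes for an easy fallback (the restore point name here is just illustrative):

-- illustrative name; requires the recovery area configured as above
CREATE RESTORE POINT pre_12c_upgrade GUARANTEE FLASHBACK DATABASE;

If the upgrade succeeds, remember to drop it afterward (DROP RESTORE POINT pre_12c_upgrade;) so it doesn’t keep holding recovery area space.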

Choose to upgrade the 11g listener, and choose to install EM Express if you’d like that for monitoring.  Make sure to keep the default checks in the following window so everything we need is updated and stats are collected before the upgrade runs, ensuring it proceeds efficiently through all required objects.

Choose to proceed with the upgrade and, if you’ve followed these instructions, you should find a successful installation of DB12c and upgrade of the database.  Keep in mind, we’re not going multi-tenant in this upgrade example, so if you were looking for those steps, the POC I’m building isn’t going to take that on in this set of blog posts.

Post-Upgrade Steps

Update your environment variables, copying 11g.env to a new profile called 12c.env and updating the Oracle home.  Now set your environment and log into the upgraded database via SQL Plus as SYSDBA.
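In shell terms (again a sketch; the paths assume the install locations used in this series, so adjust to your host):

$ cp ~/11g.env ~/12c.env
$ vi ~/12c.env    # set ORACLE_HOME to /u01/app/oracle/product/12.1/db_1
$ . ~/12c.env
$ sqlplus / as sysdba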

Update all the necessary dictionary and fixed stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

For our next post, we’ll set up the same installations and VNC Viewer configuration on our Amazon target host as we did for the Source, and then install Oracle DB12c on our target server as we did in this post.  Then we’ll discuss how to get all our Delphix VDBs (virtual databases) upgraded to match our source in no time!


Posted in AWS, Delphix, Oracle

April 28th, 2017 by dbakevlar

I did a couple of great sessions yesterday for the awesome Dallas Oracle User Group (DOUG).  It was the first time I presented my thought leadership piece on Making Sense of the Cloud, and it was a great talk, with some incredible questions from the DOUG attendees!

This points me to a great [older] post on things IT can do to help guarantee tech projects are more successful.  DevOps is a standard in most modern IT shops, and DBAs are expected to find ways to be part of this valuable solution.  If you inspect the graph, which displays the ROI of different project types versus how often those projects run over budget and schedule, the results may surprise you.

Where non-software projects are concerned, the project rarely runs over schedule but, in the way of benefits, often comes up short.  When we’re dealing with software, 33% of projects run over time, but the ROI is exceptionally high and worthy of the investment.  You have to wonder how much of that over-allocation in time feeds into the percentage increase in cost.  If this could be deterred, think about how much more valuable these projects would become.

The natural life of a database is growth.  Very few databases stay a consistent size; as companies prosper, critical data valuable to the company requires a secure storage location, and a logical structure to report on that data is necessary for the company’s future.  This is where relational databases come in, and they can become both the blessing and the burden of any venture.  Database administrators are both respected and despised for the necessity of managing the database environment, as the health of the database is an important part of the IT infrastructure and, with the move to the cloud, a crucial part of any viable cloud migration project.

How much of the time, money, and delay shown in those projects is due to the sheer size and complexity of the database tier?  Our source data shows how often companies just aren’t able to hold it together due to lacking skills, poor time estimates, and other unknowns that come back to bite us.

I can’t stress enough why virtualization is key to removing a ton of the overhead, time and money that ends up going into software projects that include a database.

Virtualizing non-production databases results in:

  1. The ability to deliver full copies of production to developers without extensive demands on storage.
  2. The ability to deliver those databases in a matter of minutes vs. days or weeks.
  3. The ability to refresh databases as needed for any project.
  4. A self-service user interface, so developers and testers can recover from a catastrophic issue in a database without having to grovel to a DBA to restore a virtual database.
  5. The ability to branch a VDB and do versioning, which is awesome for both developers and testers (I know, we DBAs care very little about this feature… :))
  6. In cloud migrations, the ability to migrate databases in short periods of time and to limit the storage footprint, delivering the savings the cloud promised; savings that, with traditional database scenarios, most companies are finding don’t materialize in the long run.

It’s definitely something to think about, and if you don’t believe me, test it yourself with a free trial!  Not enough people are embracing virtualization, and it takes so much of the headache out of RDBMS management.

Posted in AWS, Azure, Cloud, Oracle, SQLServer

April 18th, 2017 by dbakevlar

For over a year, I’ve been researching cloud migration best practices.  Consistently, there was one red flag that trips me up as I review recommended migration paths.  No matter what you read, just about all of them include the following high-level steps:

As we can see from the above, the scope of the project is identified, requirements are laid out, and a project team is allocated.

The next step in the project is to choose one or more clouds and the first environments to test out in the cloud, along with addressing security concerns and application limitations.  DBAs are tested repeatedly as they try to keep up with the demand of refreshing and ensuring the cloud environments stay in sync with on-prem, and the cycle continues until a cutover date is issued.  The migration go or no-go occurs, and either the non-production environments or the entire environment is migrated to the cloud.

As someone who works for Delphix, I focus on the point of failure where DBAs can’t keep up with full clones and data refreshes in cloud migrations, or where development and testing can’t complete the steps that they could if the company were using virtualization.  From a security standpoint, I’m concerned with how few companies are investing in masking, given the sheer quantity of breaches in the news, but as a DBA, there is a whole different scenario that really makes me question the steps many companies are using to migrate to the cloud.

Now here’s where they lose me every time: the last step in most cloud migration plans is to optimize.

I’m troubled by optimization being viewed as the step you take AFTER you migrate to the cloud.  Yes, I believe there will undoubtedly be unknowns that no one can take into consideration before the physical migration to a cloud environment, but to take databases “as is”, when an abundance of performance data is already known about the database that could and will impact performance, seems to be inviting unwarranted risk and business impact.

So here’s my question to those investing in a cloud migration, or those who have already migrated to the cloud: did you streamline and optimize your databases/applications BEFORE migrating to the cloud, or AFTER?


Posted in AWS, Azure, Oracle, SQLServer
