April 28th, 2017 by dbakevlar

I did a couple of great sessions yesterday for the awesome Dallas Oracle User Group (DOUG). It was the first time I presented my thought leadership piece on Making Sense of the Cloud, and it was a great talk, with some incredible questions from the DOUG attendees!

This points me to a great [older] post on things IT can do to help guarantee tech projects are more successful. DevOps is a standard in most modern IT shops, and DBAs are expected to find ways to be part of this valuable solution.  If you inspect the graph, which compares the ROI of different project types against how often those projects run over budget and schedule, the results may surprise you.

Where non-software projects are concerned, the project rarely runs over schedule, but in the way of benefits it often comes up short.  When we’re dealing with software, 33% of projects run over schedule, but the ROI is extraordinarily high and worth the investment.  You have to wonder how much of that overrun in time feeds into the percentage increase in cost.  If it could be curbed, think about how much more valuable these projects would become.

The natural life of a database is growth.  Very few databases stay a consistent size.  As companies prosper, critical data requires a secure storage location, and a logical structure to report on that data is necessary for the company’s future.  This is where relational databases come in, and they can become both the blessing and the burden of any venture.  Database administrators are both respected and despised for the necessity of managing the database environment: the health of the database is an important part of the IT infrastructure and, with the move to the cloud, a crucial part of any viable cloud migration project.

How much of the time, money and delay shown in those projects is due to the sheer size and complexity of the database tier?  Our source data shows how often companies just aren’t able to hold it together due to missing skills, poor time estimates and other unknowns that come back to bite us.

I can’t stress enough that virtualization is key to removing a ton of the overhead, time and money that end up going into software projects that include a database.

Virtualizing non-production databases results in:

  1. Ability to deliver full copies of production for developers without extensive demands on storage.
  2. Ability to deliver those databases in a matter of minutes vs. days or weeks.
  3. Ability to refresh databases as needed for any project.
  4. Self-service user-interface so developers and testers can recover from a catastrophic issue in a database without having to grovel to a DBA to restore a virtual database.
  5. Ability to branch the VDB and do versioning, which is awesome for both developers and testers, (I know, we DBAs care very little about this feature… :))
  6. In migrations, and cloud migrations in particular, the ability to move databases in a short period of time and to limit the storage footprint, delivering the savings the cloud promised; most companies are finding, in the long run, that those savings don’t materialize with traditional database scenarios.

It’s definitely something to think about, and if you don’t believe me, test it yourself with a free trial!  Not enough people are embracing virtualization, and it takes so much of the headache out of RDBMS management.

Posted in AWS, Azure, Cloud, Oracle, SQLServer

April 18th, 2017 by dbakevlar

For over a year I’ve been researching cloud migration best practices, and consistently there is one red flag that trips me up as I review recommended migration paths.  No matter what you read, just about all of them include the following high-level steps:

As we can see from the above, the scope of the project is identified, requirements are laid out and a project team is allocated.

The next step in the project is to choose one or more clouds and the first environments to test out in the cloud, along with working through security concerns and application limitations.  DBAs are tested repeatedly as they try to keep up with the demand of refreshing the cloud environments and ensuring they stay in sync with on-prem, and the cycle continues until a cutover date is issued.  The migration go or no-go occurs, and either the non-production environments or the entire environment is migrated to the cloud.

As someone who works for Delphix, I focus on the point of failure where DBAs can’t keep up with full clones and data refreshes in cloud migrations, or where development and testing aren’t able to complete the necessary steps that they could if the company were using virtualization.  From a security standpoint, I am concerned with how few companies are investing in masking given the sheer quantity of breaches in the news, but as a DBA, there is a whole different scenario that really makes me question the steps that many companies are using to migrate to the cloud.

Now here’s where they lose me every time: the last step in most cloud migration plans is to optimize.

I’m troubled by optimization being viewed as the step you take AFTER you migrate to the cloud.  Yes, I believe there will undoubtedly be unknowns that no one can take into consideration before the physical migration to a cloud environment, but to move databases “as is”, when an abundance of performance data is already known about the database that could and will impact performance, seems to invite unwarranted risk and business impact.

So here’s my question to those investing in a cloud migration, or who have already migrated to the cloud:  did you streamline and optimize your database and applications BEFORE migrating to the cloud, or AFTER?


Posted in AWS, Azure, Oracle, SQLServer

March 30th, 2017 by dbakevlar

Azure is the second most popular cloud platform to date, so it’s naturally the second platform Delphix is going to support on our road to the cloud.  As I start to work through the options for deploying Delphix, there are complexities in Azure I need to educate myself on.  As we’re just starting out, there’s a lot to learn and a lot of automation we can take advantage of.  It’s an excellent time for me to get up to speed with this cloud platform, so hopefully everyone will learn right along with me!

We’ll be using Terraform to deploy to Azure, just as we prefer to use it for our AWS deployments.  It’s open source, very robust and has significant support in the community, so we’ll switch between cloud setup and Terraform prep in many of these posts.  Before we can do that, though, we need to register our subscription with Azure and set up our Azure environment.

Azure Consoles

There are the New and the Classic consoles for Azure, but there are also items inside the modern, New console that are marked as “Classic” yet aren’t part of the actual Classic console.  I found this a bit confusing, so it’s good to make the distinction.

Azure’s “New” Portal, with its modern, sleek design

Azure’s “Classic” management interface, with its pale blue and white scheme, which still serves a very significant purpose

Once you’ve created your Azure account, you’ll find that you need access to the Classic console to perform many of the basic setup tasks, while the New console is better for advanced administration.

Preparation is Key

There are a number of steps you’ll need to perform in preparation for deploying Delphix to Azure.  The Delphix engine, a source and a target are our goal, so we’ll start simple and work our way out.  Let’s see how much I can figure out and how much I may need to lean on others more experienced to get me through.  No matter what, you’ll need both consoles, so keep the links above handy; I’ll refer to the consoles as “New” and “Classic” to distinguish them as I go along.  Know that in this post, we’ll spend most of our time in the Classic console.

Set up an Account and Add Web App

If you don’t already have an account, Microsoft will let you set one up and will even give you $200 in free credits to use.  Once you sign up, you need to know where to go next.  This is where the “Classic” console comes in, as you need to set up the “application” that will be used for your deployment.

Log into the “Classic” console and click on Active Directory, then on the Default Directory highlighted in blue.  This will open a new page, and you will have the opportunity to click Add at the bottom of the page to add a new Active Directory application.

  • Name the application, (open up a text editor and copy and paste the name of the app into it; you’ll need this data later)
  • The application type is web app or API
  • Enter a URL/URI; yes, they can be made up.  They don’t have to be real.

Client and Client Secret

Now that your application is created, you’ll see a tab called Configure.  Click on this tab and you’ll see the Client ID displayed.  Copy the Client ID and add it to your text editor, also for later.

Scroll down and you’ll see a section called Keys.  Click on the button that says “Select Duration” and choose 1 or 2 years.  At the very bottom of the screen you’ll see a Save button; click it, and the secret key will be displayed for you to copy and paste into your text editor.  Do this now, as you won’t be able to get to it later.

Tenant ID

To the left of the Save button, you’ll see “View Endpoints”.  Click on this and you’ll see a number of entries.  The tenant ID is the value repeated at the end of each of the entries.

Copy and paste this into your text editor under a heading of tenant ID.

Add Application to the Active Directory

Now that you’ve created this framework, you need to grant permissions to use it all.  In the Configure tab, scroll to the very bottom where it says “Permissions to Other Applications” and click on Add Application.  Choose Azure Service Management API from the list of applications, (if you have a new account, you won’t have much to choose from) and click on the checkmark in the lower right corner of the pane.  This will return you to the previous page.  Click on the delegated permissions drop-down, choose Access Azure Service Management as organization, and then save.

Subscription Data

Now, log into the New portal and click on Subscriptions on the left hand side.  Click on the Subscription and it will open up to display your Subscription ID, which you’ll need to copy and paste into your text editor.

Click on Access Control (IAM) and click on Add.  You may only see your username at first, but the applications are there; they just aren’t displayed by default.  Type in the application name you saved in your text editor, (for example, mine is Web_K_Terra.)  Reminder:  you must type the name of your app exactly as you did when you created it, (it is case sensitive.)  Grant the Reader and Contributor roles from the role list, saving after each role you add.

You should now see your user in the list with both roles assigned to it like the example below for Web_K_Terra app:

Our configuration is complete, and we’re ready to move on to the networking piece.

The first part of my Terraform template is ready, too.  All the pertinent data I needed from the build-out has been added to it, and it looks something like the following:

provider "azurerm" {
  # The provider block itself is always "azurerm"; the application name
  # (Web_K_Terra) only lives in the portal. The masked values below are the
  # ones collected in the text editor along the way.
  subscription_id = "gxxxxxxx-db34-4gi7-xxxxx-9k31xxxxxxxxxp2"
  client_id       = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
  client_secret   = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="
  tenant_id       = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
}
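Before handing those values to Terraform, it’s worth confirming they can actually authenticate.  Below is a minimal sketch, assuming Python with the requests library, that posts to the standard Azure AD client credentials token endpoint; the masked placeholders above stand in for the real IDs, so substitute your own.  If it prints an expiry, the client ID, client secret and tenant ID are good to go.

# Minimal check that the service principal credentials can obtain a token.
# The IDs are the masked placeholders from the provider block above.
import requests

tenant_id = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
client_id = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
client_secret = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="

resp = requests.post(
    "https://login.microsoftonline.com/{0}/oauth2/token".format(tenant_id),
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://management.azure.com/",
    },
)
resp.raise_for_status()
print("Token acquired, expires in", resp.json()["expires_in"], "seconds")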

This is a great start to getting us out on Azure.  In part II, we’ll talk about setting up connectivity between your desktop and Azure for remote access, along with recommendations for tools to access it locally.


Posted in Azure, Oracle

March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I loaded into my Oracle source database, and since I haven’t used Swingbench outside of the command line in quite a while, (my databases seem to always come with a load on them!) it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my MacBook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a much simpler change than most DBAs first assume.

When it first starts, Swingbench will prompt you to choose which pre-configured workload you’d like to use.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then had to perform a few simple configurations to get it running.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<service name>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>

I chose the default 16 connections to start out, but you can add more if you’d like. You can also configure statistics collection and snapshot collection before and after the workload.

I set my autoconnect to true, but the default is not to start the load until you hit the green arrow button.  The load will then execute the workload with the number of connections requested until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window:
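If the users never log in, the connect string is the first thing to double-check.  Here’s a minimal sketch, assuming Python with the cx_Oracle driver installed and using made-up host and password values, that confirms the SH schema is reachable from your workstation before you blame Swingbench:

# Quick connectivity check for the SH schema before starting the load.
# Host, port, service name and password below are placeholders.
import cx_Oracle

dsn = cx_Oracle.makedsn("ec2-xx-xx-xx-xx.compute.amazonaws.com", 1521,
                        service_name="ORCL")
conn = cx_Oracle.connect("sh", "your_sh_password", dsn)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM sales")
print("Rows in SH.SALES:", cur.fetchone()[0])
conn.close()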

Next post, we’ll discuss what you’ll see when running a Swingbench workload against a source database, the Delphix engine host and, subsequently, refreshes to a VDB, (virtual database.)  We’ll also discuss other tools that can give you visibility into optimization opportunities in the cloud.


Posted in AWS Trial, Cloud, Oracle

March 2nd, 2017 by dbakevlar

There are a lot of people and companies starting to push the same old myth regarding the death of the database administrator role.  On the Oracle side, it started with the release of Oracle 7, and now it’s being proposed again with the introduction of the cloud.  Hopefully my post will help ease the minds of those out there with concerns.  There are a number of OBVIOUS reasons this is simply not true, but I’m going to write a few posts over the next year on some of the less obvious ones that will ensure DBAs stay employed for the long haul.

The first, and to some less obvious, reason that DBAs are going to continue to be a necessary role in Information Technology with the cloud is that almost all databases use a cost-based optimizer, (CBO).

I’m not going to go into when it was introduced in the different platforms, but over 90% of database platforms used in the market today have a CBO.  This grants the database the ability to make performance decisions based on cost rather than strict rules, delivering, (in theory and in most instances) better performance.

Articles, Bugs and Challenges, (ABCs)

There was an interesting thread on Oracle-L about an I/O hit in an EBS environment due to extended statistics.  There were links in the conversation to Jonathan Lewis’ blog that take you to some incredibly interesting investigations of adaptive plans, as well as other posts on configuration recommendations and bugs involving extended statistics.
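If you haven’t crossed paths with extended statistics before, here’s a minimal sketch of what creating a column-group extension looks like, run through Python and cx_Oracle with hypothetical SH schema columns; the DBMS_STATS calls are the part that matters:

# Create a column-group extension so the CBO can see the correlation
# between two columns, then regather statistics so the extension is used.
# Connection details and the (cust_id, prod_id) column group are made up.
import cx_Oracle

conn = cx_Oracle.connect("sh", "your_sh_password", "dbhost:1521/ORCL")
cur = conn.cursor()

# Returns the generated extension (virtual column) name.
cur.execute("""
    SELECT dbms_stats.create_extended_stats(user, 'SALES', '(cust_id, prod_id)')
    FROM dual
""")
print("Extension created:", cur.fetchone()[0])

# Regather so the new column group actually has statistics behind it.
cur.callproc("dbms_stats.gather_table_stats", ["SH", "SALES"])
conn.close()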

With the introduction of the CBO, the DBA was supposed to have less to worry about in the way of performance.  The database was supposed to have automated statistics gathering that would then be used, along with the type of process, kernel settings and parameters, to make intelligent decisions without human intervention.  The capability allowed the engine to take advantage of advanced features beyond simple rules, (if an index exists for the WHERE clause columns, then use it, etc.)

Some CBOs perform with more consistency than others, but many times the question of why a database chose a plan is lost on the DBA due to the complexity of the decisions involved.  The one thing the DBA thought they could count on was the database engine using up-to-date statistics on objects, calls and parameters to make the decision.  DBAs began to tear apart the algorithms behind every table and index scan, the cost of each process, and the limits of each memory and I/O feature.  As their knowledge increased, IT shops became more dependent upon those skills to take the CBO to the level required to ensure customers received the data they needed when they needed it.  We learned when to ignore the cost on a query or transaction and how to force the database to choose the better plan.

Dynamic Sampling or Is It Dynamic Pain?

I am a database administrator who HATED Oracle dynamic sampling, and I still find the cost far outweighing the benefit.  There were few cases where it served a DBA like me, with strong CBO and statistics knowledge, and having Oracle make choices for me, (especially with SQL that already had carefully controlled hints included in the statements) drove me to find new ways to disable it any way I could.  I had dreams of the feature maturing into something that would serve my needs, instead of being woken from those dreams to address another challenge where none should have been present.

If you managed as many multi-TB databases as I did, extensive dynamic sampling, especially on large objects, could come back to haunt you.  I performed a number of traces on processes where an Exadata was being accused of a configuration problem when, in truth, it was 8 minutes of dynamic sampling out of 9 minutes of DB time.  In each instance I proved dynamic sampling was to blame via trace file evidence, and in each instance the developers and application folks involved would ask why dynamic sampling was even considered a feature.  I did see the feature’s uses and benefits, but rarely for the very large databases I managed.
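For reference, once the trace files pointed at dynamic sampling, these were the two levers I reached for.  A minimal sketch through Python and cx_Oracle with hypothetical connection details; the ALTER SESSION and the OPT_PARAM hint are the actual point:

# Two ways to rein in dynamic sampling while investigating a plan.
# Connection string and the SALES query are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("sh", "your_sh_password", "dbhost:1521/ORCL")
cur = conn.cursor()

# 1. Turn dynamic sampling off (or down) for this session only.
cur.execute("ALTER SESSION SET optimizer_dynamic_sampling = 0")

# 2. Or scope it to a single statement with the OPT_PARAM hint.
cur.execute("""
    SELECT /*+ opt_param('optimizer_dynamic_sampling' 0) */ COUNT(*)
    FROM sales
""")
print(cur.fetchone()[0])
conn.close()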

Adaptive Plans

The next logical step in Oracle’s mind for enhancing features like dynamic sampling was to add adaptive plans. This is another feature Oracle introduced to benefit query and transactional performance: allow the plan to adapt to the run in question. But if you’ve read the thread and the links included in the first part of this post, you’ll know that it often performs less than optimally.

Silver Bullets- NOT

In the end, on-premises databases required extensive knowledge of internal database workings and metrics, along with strong research skills, to guarantee the most consistent performance from any enterprise database engine.

All DBAs have experienced the quick-fix solutionist, (not even a word, but I’m making it up here!) who would make recommendations like:

“Oh, it’s eating up CPU?  Let’s get more/faster CPU!”

“I/O waits?  Just get faster disk!”

“We need more compute?  Just throw more at it!”

As DBAs, we knew this was the quick and, honestly, temporary fix.  To quote Cary Millsap, “You can’t hardware your way out of a software problem.”  It’s one of my favorites, as I often found myself explaining why adding hardware was only a short-term solution. To answer why it’s short-term, we have to ask ourselves, “What is the natural life of a database?”

Growth.

Either in design, processes, users or code, (especially with poorly written code.)  If you didn’t correct the poor foundation causing the heavy usage on the system by ensuring it ran more efficiently, you would only find yourself in the same place in six months, or if lucky two years, explaining why the “database sucks” again.  That required research, testing and traditional optimization techniques, not enabling the system by granting it more resources to eat up in the future.

The Cloud is the Key

At a very high level, any cloud is really just running all of these same product features and database engines on somebody else’s computer.  How does that let complex features that required expertise to manage suddenly be bypassed?

Beyond initial project startups or quick development spin-ups, do we really think companies are just going to continue to pay for more and more compute and I/O?

I would be willing to bet it’s more cost effective to have people who know how to do more with less.  At what point does that graph of price vs. demand hit the point where having people who know what they’re doing with a database makes a difference?  I think it’s a lot lower than the threshold many companies assume when they make statements like, “You won’t need a database administrator anymore, just standard administrators and developers!”

Tell me what you think!


Posted in Cloud, DBA Life

February 4th, 2016 by dbakevlar

I’ve been discussing the importance of the network to database performance for years, especially once I started working on VLDBs, (Very Large Databases) but it’s a topic that is often disregarded.  Now that I’m working more and more in the cloud, the importance of the network to our survival has become even more evident.

For each and every cloud project I’ve been involved in, there are inevitably multiple challenges that turn to the network administrator for a solution.  I don’t blame the administrator in any way when he becomes exasperated by our requests.  Just as it is my solemn duty to protect the database, the network administrator is the sole protector of the network.  You’ll hear a frustrated DBA say, “Just open the &^$# network up!  Let’s just get this connected to our cloud provider!” I have to admit that this request must be akin to someone asking a DBA to provide SYSDBA to a developer in production.


So yes, there are a lot of moving parts in a cloud environment.  No, not all of them are at the database level, but many of them could be at the network level.  This means that your new cloud environment must connect past firewalls, proxies, blocked ports and authentication steps that may not have been required back in the on-premises-only days.

[Diagram: hybrid cloud agent to OMS communication]

Yeah, there’s a bit more to the network than demonstrated in the picture above.

The database needs a secure connection past the firewall and may require proxy configuration to be accessed via a web browser.  The application interfaces used to manage the environment may require browser proxy settings, which may themselves be managed by automated processes rather than a manual proxy setting.  You may have network configurations that differ from one local office to another.  And we’ve only discussed configuration; we haven’t even considered speed, packet size and bandwidth.
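For what it’s worth, the first thing I run when a new cloud connection “just doesn’t work” is a plain TCP check from the client side.  A minimal sketch in Python, with a made-up host and the default listener port:

# Confirm the listener port is reachable through firewalls and proxies
# before anyone starts blaming the database. Host and port are placeholders.
import socket

host, port = "mydb.cloudprovider.example.com", 1521
try:
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection to {0}:{1} succeeded".format(host, port))
except OSError as e:
    print("Cannot reach {0}:{1} - {2}".format(host, port, e))

If that fails, it’s a conversation for the network administrator, which brings me to my next point.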

So here is my recommendation:  make friends with your network administrator.  In fact, take the ol’ chap out for a beer or two.  Learn about what it takes to master, protect and secure the company’s network from outside threats.  Learning about the network will provide you with incredible value as a cloud administrator, and you may get a great friend out of the venture, too.  For those of you that don’t make friends with your network admin, I don’t want to be hearing about any mishaps with phenobarbital to get the information, OK? 🙂

Cheers!


Posted in Cloud

June 23rd, 2015 by dbakevlar

The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training on Hybrid Cloud.  There were many take-aways from the days we spent in the office, (which is saying a lot; most of you likely know how much I hate working anywhere but from home… :)) and I thought I would share a bit of this information with those who are aching for more details on this new and impressive offering in EM12c Release 5.

If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.

Cloud Control Cloning Options

Cloud Control now includes a number of new options for database targets in EM12c.  These new drop-down options include cloning, to ease access to the new hybrid cloning.  Once you’ve logged into Cloud Control, go to Targets –> Databases and then choose a database you wish to implement cloning features for.  Right-click on the target and the drop-downs will take you to the cloning options under Oracle Database –> Cloning.


There will be the following choices from this drop down:

  • Clone to Oracle Cloud-  ability to directly clone to the cloud from an on-premise database.
  • Create Full Clone-  Full clone from an image copy or an RMAN backup.
  • Create Test Master- Create a read-only test master database from the source target.
  • Enable as a Test Master- Use the database target as a test master, which will render it read-only, so it would rarely be an option for a production database.
  • Clone Management-  Manage existing cloning options.

Using a test master is essential for snap clones, which offer great space savings and eliminate the time required for standard cloning processes.  The test master is in read-only mode, so for new cloning procedures it will need to be refreshed or recreated with an up-to-date copy, (at which point another option, “Disable Test Master”, appears in the drop down.)

Snapshot Clone

For the example today, we’ll use an existing production database as our source, and we’ll perform our clone from an existing test master database created from it.

We can right-click on the database and choose to create a clone.  This is going to be created via a snap clone, so keep that in mind as we inspect the times and results of this process.

We’ll choose to create a snapshot clone of a pluggable database.  Each snapshot clone is essentially a copy of the file headers, with only the changed blocks stored for the read-only or read-write clone.

Fill out the pertinent data for the clone: use the correct preferred credentials with SYSDBA privileges, name the new pluggable database, provide the name you’d like it displayed as in Cloud Control, and enter the PDB administration credentials, password and password confirmation.  Once that’s done, choose whether you’d like to clone it to a different container, (CDB) than the one the source resides on, and then add the database host and ASM credentials.

Once you click Next, you have more advanced options to view and/or set up.

The advanced options allow you to change the sparse disk group the clone is created on, storage limits and database options.


You can then choose to apply data masking to protect any sensitive data.  Keep in mind that once you do so, you will no longer be using a snap clone, due to the masking, but the option to implement it is available at this step.  You can also set up any pre-clone, post-clone or SQL scripts that need to be run as part of the clone.  This could include resetting sequences, user passwords, etc.

The next step allows you to schedule the clone for the future or run it immediately.


You can also choose the type of notifications, as this is simply an EM job that is submitted to perform the cloning process.  Once you’ve reviewed the cloning steps chosen via the wizard, you can submit it.


Once the job has been submitted, the submitter can monitor the job steps.


Success

Once the clone has completed, you can view each of the steps, including the time each took.


The source database was over 500 GB and was cloned in less than one minute!  You will also see the new cloned database in the targets list.


If you’re curious, note that this is a fully cloned database residing on ASM, which you can view just as you would any other database.

Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.


More to come soon and thanks to Oracle for letting us get our hands on the new 12.1.0.5 hybrid cloning!

Posted in Cloud, Enterprise Manager
