March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I'd loaded into my Oracle source database, and since I hadn't used Swingbench outside of the command line in quite a while, (my databases seem to always come with a load on them!) it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my MacBook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a lot simpler a change than most DBAs first consider.

When you first launch it, Swingbench will prompt you to choose which pre-configured workload you'd like to use.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then only had to perform a few simple configuration steps to get it running.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<service name>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>
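For example, with purely hypothetical values, (your AWS public IP, port, service name and Oracle home will differ) the filled-in settings might look like the following.  The JDBC thin driver behind Swingbench accepts either the colon-separated SID form or the //host:port/service form for the connect string:

Username:        sh
Password:        <your sh password>
Connect String:  54.210.100.25:1521:ORCL     (SID form)
                 //54.210.100.25:1521/orcl.example.com     (service name form)
ORACLE_HOME:     /u01/app/oracle/product/12.1.0/dbhome_1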

I chose the default 16 connections to start out, but you can add more if you'd like.  You can also configure stats collection and snapshot collection before and after the workload.

I set autoconnect to true, but the default is to not start the load until you hit the green arrow button.  The load will then execute the workload with the number of connections requested until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window.
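If you prefer to skip the GUI entirely, the same workload can be driven from the command line with charbench, which ships with Swingbench.  This is a rough sketch only, using the same hypothetical connection values as above; the exact flags can vary between Swingbench releases, so verify them with charbench -h for your version:

./charbench -c <path to the Sales History config xml> \
            -u sh -p <your sh password> \
            -cs //54.210.100.25:1521/orcl.example.com \
            -uc 16 \
            -rt 0:30 \
            -v users,tpm,tps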

In the next post, we'll discuss what you'll see when running Swingbench against a source database, against the Delphix Engine host, and during subsequent refreshes of a VDB, (virtual database.)  We'll also discuss other tools that can give you visibility into optimization opportunities in the cloud.

Posted in AWS Trial, Cloud, Oracle

March 2nd, 2017 by dbakevlar

There are a lot of people and companies starting to push the same old myth regarding the death of the database administrator role.  On the Oracle side, it started with the release of Oracle 7 and is now being proposed again with the introduction of the cloud.  Hopefully this post will help ease the minds of those out there with concerns.  There are a number of OBVIOUS reasons this is simply not true, but I'm going to write a few posts over the next year on some of the less obvious ones that will ensure DBAs stay employed for the long haul.

The first, and to some the less obvious, reason that DBAs are going to continue to be a necessary role in Information Technology with the cloud is that almost all databases use a Cost Based Optimizer, (CBO).

I'm not going to go into when it was introduced in the different platforms, but over 90% of the database platforms used in the market today have a CBO.  This gives the database the ability to make execution decisions based on cost rather than strict rules, resulting, (in theory and in most instances) in better performance.
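As a quick illustration of what "cost" means here, this is a generic Oracle sketch, (any table you have access to will do; SH.SALES is used purely as an example) showing where the optimizer's estimate surfaces:

EXPLAIN PLAN FOR
  SELECT cust_id, SUM(amount_sold)
  FROM   sh.sales
  WHERE  time_id >= DATE '2001-01-01'
  GROUP  BY cust_id;

SELECT * FROM TABLE(dbms_xplan.display);
-- The COST column in the plan output is the optimizer's estimate,
-- derived from object statistics rather than from a fixed rule.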

Articles, Bugs and Challenges, (ABCs)

There was an interesting thread on Oracle-l about an IO hit in an EBS environment due to extended statistics.  There were links in the conversation to Jonathan Lewis' blog that take you to some incredibly interesting investigations on adaptive plans, as well as other posts on configuration recommendations and bugs involved with extended statistics.
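For anyone who hasn't worked with them, extended statistics, (column group statistics) are created through DBMS_STATS.  A minimal sketch, using SH.CUSTOMERS purely as a hypothetical example of two correlated columns:

-- Create a column group so the optimizer knows the two columns are correlated.
SELECT dbms_stats.create_extended_stats(
         ownname   => 'SH',
         tabname   => 'CUSTOMERS',
         extension => '(CUST_STATE_PROVINCE, COUNTRY_ID)')
FROM   dual;

-- Regather statistics so the new column group is populated.
EXEC dbms_stats.gather_table_stats('SH', 'CUSTOMERS', method_opt => 'FOR ALL COLUMNS SIZE AUTO');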

With the introduction of the CBO, the DBA was supposed to have less to worry about in the way of performance.  The database was supposed to have automated statistics gathering that would then be used, along with the type of process, kernel settings and parameters, to make intelligent decisions without human intervention.  This capability allowed the engine to take advantage of advanced features beyond simple rules, (if an index exists on the WHERE clause columns, use it, etc.)

Some CBOs perform with more consistency than others, but many times the question of why a database chose a plan is lost on the DBA due to the complexity involved in making these decisions.  The one thing the DBA thought they could count on was the database engine using up-to-date statistics on objects, calls and parameters to make the decision.  DBAs began to tear apart the algorithms behind every table/index scan, the cost of each process and the limits of each memory and IO feature.  As their knowledge increased, IT shops became more dependent upon their skills to take the CBO to the level required to ensure customers received the data they needed, when they needed it.  We learned when to ignore the cost reported for a query or transaction and how to force the database to choose the better plan.
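That dependence on current statistics is also why the first thing many of us check is whether they actually are current.  A hedged sketch of that routine, using the SH schema as a stand-in:

-- Are the statistics the optimizer will use actually up to date?
SELECT table_name, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SH'
ORDER  BY last_analyzed;

-- Refresh anything flagged as stale (or let the maintenance window handle it).
EXEC dbms_stats.gather_schema_stats('SH', options => 'GATHER STALE');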

Dynamic Sampling or Is It Dynamic Pain?

I am a database administrator who HATED Oracle dynamic sampling, and I still find the cost far outweighing the benefit.  There were few cases where it served a DBA like me, who possessed strong CBO and statistics knowledge, and having Oracle make choices for me, (especially with SQL that already had carefully controlled hints in the statements) drove me to find ways to disable it any way I could.  I had dreams of the feature maturing into something that would serve my needs, instead of being woken from those dreams to address another challenge where none should have been present.

If you managed as many multi-TB databases as I did, extensive dynamic sampling, especially on large objects, could come back to haunt you.  I performed a number of traces on processes where an Exadata was being accused of a configuration problem when, in truth, it was 8 minutes of dynamic sampling out of 9 minutes of db time.  In each instance, I proved dynamic sampling was to blame via trace file evidence, and in each instance, the developers and application folks involved would ask why dynamic sampling was even considered a feature.  I did see the feature's uses and benefits, but rarely for the very large databases I managed.
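A hedged sketch of that routine, (session-level examples; adjust to your version and change control process) for confirming the time sink and then reining it in:

-- Trace the suspect session; the recursive dynamic sampling queries
-- are easy to spot in the raw 10046 trace file.
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- Switch dynamic sampling off at the session level...
ALTER SESSION SET optimizer_dynamic_sampling = 0;

-- ...or only for a statement that already carries controlled hints.
SELECT /*+ dynamic_sampling(s 0) */ COUNT(*)
FROM   sh.sales s;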

Adaptive Plans

The next logical step in Oracle's mind for enhancing features like dynamic sampling was to add Adaptive Plans.  This is another feature Oracle introduced to benefit query and transactional performance in databases: the plan is allowed to adapt to the run in question.  But if you've read the thread and the links included in the first part of this post, you'll know that it often performs less than optimally.
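A hedged sketch of how you might spot adaptive plans in action and, if necessary, switch them off.  Note the parameter name differs by release: 12.1 uses optimizer_adaptive_features, while 12.2 onward splits it into optimizer_adaptive_plans and optimizer_adaptive_statistics:

-- Which cursors actually resolved an adaptive plan?
SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  is_resolved_adaptive_plan = 'Y';

-- Show the full adaptive plan, including the inactive operations, for one of them.
SELECT * FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL, '+ADAPTIVE'));

-- Disable adaptive plans system-wide if they are misbehaving (12.2+ parameter name shown).
ALTER SYSTEM SET optimizer_adaptive_plans = FALSE;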

Silver Bullets- NOT

In the end, on-premises databases required extensive knowledge of internal database workings and metrics, along with strong research skills, to guarantee the most consistent performance from any enterprise database engine.

All DBAs have experienced the quick-fix solutionist, (not even a word, but I'm making it up here!) who would make recommendations like:

“Oh, it’s eating up CPU?  Let’s get more/faster CPU!”

“I/O waits?  Just get faster disk!”

“We need more compute?  Just throw more at it!”

As DBAs, we knew that this was the quick and, honestly, temporary fix.  To quote Cary Millsap, "You can't hardware your way out of a software problem."  It's one of my favorites, as I often found myself explaining why adding hardware was only a short-term solution.  To answer why it's short-term, we have to ask ourselves, "What is the natural life of a database?"

Growth.

Either in design, processes, users or code, (especially with poorly written code.)  If you didn't correct the poor foundation that was causing the heavy usage on the system by ensuring it ran more efficiently, you would only find yourself in the same place in six months, or if lucky, two years, explaining why the "database sucks" again.  That required research, testing and traditional optimization techniques, not enabling the problem by granting it more resources to eat up in the future.

The Cloud is the Key

Consider that, from a very high-level view, any cloud is really just running all of these same product features and database engines on somebody else's computer.  How does that allow complex features that required expertise to manage to suddenly be bypassed?

Outside of initial project startups or quick development spin-ups, do we really think companies are just going to continue to pay for more and more compute and IO?

I would be willing to bet it's more cost effective to have people who know how to do more with less.  At what point does that graph of price vs. demand hit the point where having people who know what they're doing with a database makes a difference?  I think it's a lot lower than the threshold many companies assume when they make statements like, "You won't need a Database Administrator anymore-  just a standard administrator and developers!"

Tell me what you think!

Posted in Cloud, DBA Life

February 4th, 2016 by dbakevlar

I've been discussing the importance of the network to database performance for years, especially once I started working on VLDBs, (Very Large Databases) but it's a topic that is often disregarded.  Now that I'm working more and more in the cloud, the importance of the network to our survival has become even more evident.

On each and every cloud project I've been involved in, there are inevitably multiple challenges that turn to the network administrator for a solution.  I don't blame the administrator in any way when he becomes exasperated by our requests.  Just as it is my solemn duty to protect the database, the network administrator is the sole protector of the network.  You'll hear a frustrated DBA say, "just open the &^$# network up!  Let's just get this connected to our cloud provider!"  I have to admit that this request must be akin to someone asking a DBA to grant SYSDBA to a developer in production.


So yes, there are a lot of moving parts in a cloud environment.  No, not all of them are at the database level, but many of them could be at the network level.  This means that your new cloud environment must connect past firewalls, proxies, blocked ports and authentication steps that may not have been required back in the purely on-premises days.

[Diagram: hybrid cloud Agent-to-OMS communication with high availability]

Yeah, there’s a bit more to the network than demonstrated in the picture above.

The database needs a secure connection past the firewall, and accessing it via a web browser may require proxy configuration.  The application interfaces used to manage it may require browser proxy settings, which are often controlled by automated processes rather than a manual proxy setting.  You may have network configurations that differ from one local office to another.  And we've only discussed configuration; we haven't even considered speed, packet size and bandwidth.
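Before anyone starts pointing fingers, a few quick checks from the database or application host will tell you whether the path to the cloud endpoint is even open.  A hedged sketch with hypothetical host, port and alias; substitute your own:

# Is the listener port reachable from here at all?
nc -zv db.mycloud.example.com 1521

# Does Oracle name resolution get you to the right place?  (MYCLOUD_DB is a hypothetical tnsnames.ora alias.)
tnsping MYCLOUD_DB

# Rough feel for round-trip latency; packet loss shows up quickly, too.
ping -c 10 db.mycloud.example.com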

So here is my recommendation-  make friends with your network administrator.  In fact, take the ol' chap out for a beer or two.  Learn about what it takes to master, protect and secure the company's network from outside threats.  Learning about the network will provide you with incredible value as a cloud administrator, and you may get a great friend out of the venture, too.  For those of you that don't make friends with your network admin, I don't want to be hearing about any mishaps with phenobarbital to get the information, OK? 🙂


Posted in Cloud

June 23rd, 2015 by dbakevlar

The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training on Hybrid Cloud.  There were many takeaways from the days we spent in the office, (which is saying a lot-  most of you likely know how much I hate working anywhere but from home… :)) and I thought I would share a bit of this information with those who are aching for more details on this new and impressive offering in release 5.

If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.

Cloud Control Cloning Options

Cloud Control now includes a number of new options for database targets in EM12c.  These new drop-down options include cloning features that ease access to the new hybrid cloning.  Once you've logged into Cloud Control, go to Targets –> Databases and choose a database you wish to implement cloning features for.  Right click on the target and the drop-downs will take you to the cloning options under Oracle Database –> Cloning.

[Screenshot: cloning options in the database target drop-down menu]

You'll see the following choices in this drop-down:

  • Clone to Oracle Cloud-  ability to directly clone to the cloud from an on-premise database.
  • Create Full Clone-  Full clone from an image copy or an RMAN backup.
  • Create Test Master- Create a read-only test master database from the source target.
  • Enable as a Test Master- Use the database target as a test master, which will render it read-only, so it would rarely be an option for a production database.
  • Clone Management-  Manage existing cloning options.

Using a test master is essential for snap clones, which offer significant space savings and eliminate the time required by standard cloning processes.  The test master is in read-only mode, so it will need to be refreshed or recreated with an up-to-date copy for new cloning procedures, (at which point another option, "Disable Test Master," appears in the drop-down.)

Snapshot Clone

For the example today, we’ll use the following production database:

[Screenshot: the source production database in Cloud Control]

We’ll use an existing test master database to perform our clone from:

[Screenshot: the existing test master database]

We can right click on the database and choose to create a clone.  This one is going to be created via a snap clone, so keep that in mind as we inspect the times and results of the process.

[Screenshot: choosing the snapshot clone option]

We choose to create a snapshot clone of the pluggable database.  Each snapshot clone is essentially just a copy of the file headers, with only the changed blocks stored for the read-only or read-write clone.

Fill out the pertinent data for the clone, using the correct preferred credentials with SYSDBA privileges: name the new pluggable database, give the name you'd like it displayed as in Cloud Control, and supply the PDB administration credentials, entering and confirming the password.  Once that's done, choose whether you'd like to clone it to a different container database, (CDB) than the one the source resides in, and then add the database host and ASM credentials.

Once you click Next, there are more advanced options to view and/or set up:

[Screenshot: advanced options for the clone]

The advanced options allow you to change the sparse disk group the clone is created on, the storage limits and the database options.

[Screenshot: data masking and script options]

You can then choose to apply data masking to protect any sensitive data, but keep in mind that once you do so, you will no longer be using a snap clone, due to the masking; the option is simply available at this step.  You can also set up any pre-clone, post-clone or SQL scripts that need to be run as part of the clone.  This could include resetting sequences, user passwords, etc.

The next step allows you to schedule the clone for the future or run it immediately.

[Screenshot: scheduling options for the clone job]

You can also choose the type of notifications, as this is simply an EM job that is submitted to perform the cloning process.  Once you've reviewed the cloning steps chosen via the wizard, you can then submit it.

[Screenshot: review and submit]

Once the job's been submitted, the submitter can monitor the job steps:

[Screenshot: the clone job's execution steps]

Success

Once the clone has completed, you can view each of the steps, including the time each took.

[Screenshot: completed job steps with timings]

The source database was over 500 GB and was cloned in less than one minute!  You will also see the new cloned database in the targets list:

[Screenshot: the cloned database in the targets list]

If you're curious, note that this is a fully functional cloned database residing on ASM, which you can view just as you would any other database.

Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.

[Screenshot: the cloned database's storage details]
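If you want to poke at the result outside of Cloud Control, a quick hedged check from SQL*Plus against the container database, (SH_CLONE is a hypothetical PDB name; use whatever you named your clone) shows the new pluggable database and its datafiles out on ASM:

-- Connected to the CDB root as a privileged user.
SELECT name, open_mode, total_size/1024/1024/1024 AS size_gb
FROM   v$pdbs;

-- Datafiles for the cloned PDB, sitting in the ASM disk group.
SELECT file_name, bytes/1024/1024 AS mb
FROM   cdb_data_files
WHERE  con_id = (SELECT con_id FROM v$pdbs WHERE name = 'SH_CLONE');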

More to come soon and thanks to Oracle for letting us get our hands on the new 12.1.0.5 hybrid cloning!

Posted in Cloud, Enterprise Manager
