Category: Cloud

May 26th, 2017 by dbakevlar

I’m in sunny Phoenix this week at the Data Platforms 2017 Conference and looking forward to a break in the heat when I return to Colorado this evening.

As this event is focused on big data, I expected to present on how big data could benefit from virtualization, but was surprised to find that I didn't have much luck locating customers utilizing us for this reason, (yet).  As I've discussed in previous presentations, I was aware of what a "Swiss Army knife" virtualization is, resolving numerous issues across a myriad of environments, even if it often goes unrecognized.

The Use Case

To find my use case, I went out to the web and found a great article, "The Case for Flat Files in Big Data Projects", by Time.com interactive graphics editor, Chris Wilson.  The discussion surrounds data created as part of the ACA and used for another article, "How Much Money Does Your Doctor Get From Medical Companies".  The data behind the interactive graphs in that article is publicly available from cms.gov, and Chris discusses the challenges it created and how they overcame them.

Upon reading it, there was a clear explanation of why Chris' team did what they did to consume the data in a way that complemented their skill set.  It will resonate with anyone who works in an IT shop and how we end up with technical choices that we're left to justify later on.  While observing conversations at the conference this week, I lost count of how often I heard that no one is purely a "Hadoop" shop or a "Hive" shop; everyone has a group of varied solutions that make up their environment, and if you don't have something today, don't count it out-  Pig, Python, Kafka or others could show up tomorrow.

This results in a more open and accepting technical landscape, in which I, a "legacy data store" technologist, was welcomed.  When I explained my upcoming talk and my experience, no one turned up their nose at any technology I admitted to having an interest in.

Along with the use case, the data itself was also online.  As part of the policies in the ACA, the cms.gov site (the Centers for Medicare and Medicaid Services) gives you access to all of this valuable data, which can offer incredible insight into these valuable programs.  The Time article focuses only on the payments to doctors from medical companies, but the data that is collected, separated out by area and then zipped up by year, is quite extensive; as noted in a third article, as anticipated as the data was, it was cumbersome and difficult to use.

The Requirements

I proceeded to take this use case and imagine it as part of an agile environment, with this data as part of the pipeline to produce revenue by providing information to the consumer.  What would be required and how could virtualization not only enhance what Chris Wilson’s team had built, but how could the entire pipeline benefit from Delphix’s “swiss army knife” approach?

  1.  I can’t assume this is the main data store.  These flat files are a supplement to legacy data stores.
  2. There would be a standard development environment-  development and testing would need their own environments, not just a production copy of these files, applications, etc.
  3. If it's providing data to a consumer, and data is in perpetual motion in the age of the internet, an agile development method would need to be in place: a short development cycle with many small, "scrum-like" development groups from different departments working on tasks.
  4. Automation and seamless deployment would reduce human intervention and resource demands, along with producing more successful deployments.

Solution

There were four areas I focused on to solve or eliminate bottlenecks that I either experienced or foresaw an organization experiencing when having this data as part of their environment.

  1. Eliminate the need for multiple copies of the files and the slow, manual process of propagating files to targets for development, test, etc. by using Delphix's vFiles option; this would include any applications or other non-relational tiers involved in the scenario.
  2. Eliminate the legacy data store copies and refreshes that the big data environment depends on, creating VDBs for all development, test and reporting.
  3. Protect all non-production environments by masking non-production databases and flat files.
  4. Containerize environments for easy deployment, delivery, testing and cloud migrations.

vFiles

Each of the files was just over 500MB compressed and 15-18GB uncompressed.  Each took over 4 minutes to transfer to a host and could add up to considerable space.

I used Delphix vFiles to virtualize the files.  This means there is a single "gold copy" of the files hosted at the Delphix engine, and an NFS mount "projects" file access to each target, providing unique copies to as many development, test and reporting environments as needed.
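If you want to confirm that a target is only projecting the files rather than holding its own full copy, a quick look at the mounts on the target host tells the story.  This is just a sketch; the mount point below is a hypothetical example, as the actual path depends on how your Delphix environment is configured.

# List the NFS mounts projected from the Delphix engine onto this target
mount -t nfs

# Browse the projected files; they appear local but live on the engine
# (the /mnt/provision/aca_files path is a hypothetical example)
ls -lh /mnt/provision/aca_files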

Fig. 1- Creating vFiles from the dSource that the flat files are sourced on.

If a refresh is required, the parent is refreshed and an automated refresh of all the "children" can be performed.  Changes can be made at the child level and, if something catastrophic happens, Delphix allows for easy recovery, providing data version control, not just code version control, throughout the development and testing process.

Fig. 2- Demonstration of a target vFile, showing the NFS mount and the files available (created in less than 10 seconds), and how easily it can be disabled to prove the files are a "projection".

It's a pretty cool feature and one that is very valuable in the big data arena.  I heard countless stories of how often, due to lack of storage, data scientists and developers were testing against subsets of data, only to find once in production that their code wouldn't complete or would fail against the full data set.  Having the FULL files without taking up more space for multiple environments would be incredibly beneficial and shorten development cycles.

Virtualize

Most big data shops are still dependent on legacy data stores.  These are legendary roadblocks due to their size, complexity and demands for refreshes.  I proposed that those be virtualized so that each developer could have a copy and an instant refresh without storage demands, again easing development deadline pressures and allowing full access to the data for development success.

Protect

Most people know we mask relational databases, but did you know we have Agile Data Masking for flat files?  If these files are going to be pushed to non-production systems, especially with as much as we're starting to hear in the US about GDPR (the EU's General Data Protection Regulation), shouldn't we mask outside of the database, too?

What kind of files can be masked?

  • Multi-record
  • CSV
  • XML
  • Word
  • Excel
  • PowerPoint
  • Unstructured
  • EDI

That's a pretty considerable and cool list.  The ability to go in and mask data in flat files is a HUGE benefit to big data folks.  Many of them were looking at file security only at the permissions and encryption level, so the ability to render the data benign to risk is a fantastic improvement.

Containerize

The last step is simple recognition that big data is complex and consists of a ton of moving parts.  It should be expected that much of it is home-built or open source, consisting of legacy data stores, flat files, applications and other dependent tiers.

Fig. 3- A Container, created on-prem, then moved to the cloud and to as many environments as required for the development cycle to meet the business needs.

Delphix has the ability to create templates of our sources (aka dSources), which is nothing more than creating a container.  In my use case enhancement, I took all of these legacy data stores, applications (including any Ajax code) and flat files, and then created a template from them for simple refreshes and deployments via Jenkins, Chef jobs or other DevOps automation.  The ability to then take these templates and deploy them to the cloud would make a migration from on-prem to the cloud, or from one cloud vendor to another, a simpler process.
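As a rough sketch of how a template refresh might be wired into that automation, the Jenkins-style build step below assumes a hypothetical wrapper script, refresh_container.sh, that calls the Delphix API; the script name, arguments and host names are illustrative only, not an actual Delphix utility.

# Hypothetical pipeline step: refresh the containerized environment on a
# test target before deploying the latest application code to it.
# refresh_container.sh is an illustrative wrapper around the Delphix API.
./refresh_container.sh --engine delphix01.example.com \
                       --template aca_pipeline_template \
                       --target test-env-03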

Fig. 4- A look at the full scenario-  Delphix engines masking files, databases, creating containers and deploying it all on-prem and to the cloud.

The end story is that this use case could be any big data shop or start up in the world today.  So many of these companies are hindered by data and Delphix virtualization could easily let their data move at the speed of business.

I want to thank Data Platforms 2017 and all the people who were so receptive of my talk.  If you’d like access to the slide deck, it’s been uploaded to Slideshare. I had a great time in Phoenix and hope I can come back soon!

 

 

Posted in big data, Cloud, Data Masking, Oracle

April 28th, 2017 by dbakevlar

I did a couple of great sessions yesterday for the awesome Dallas Oracle User Group (DOUG).  It was the first time I did my thought leadership piece on Making Sense of the Cloud and it was a great talk, with some incredible questions from the DOUG attendees!

This points me to a great [older] post on things IT can do to help guarantee tech projects are more successful.  DevOps is a standard in most modern IT shops and DBAs are expected to find ways to be part of this valuable solution.  If you inspect the graph displaying the ROI of different types of projects versus how often those projects run over budget and schedule, the results may surprise you.

Where non-software projects are concerned, the project rarely runs over schedule, but in the way of benefits, it often comes up short.  When we're dealing with software, 33% of projects run over time, but the ROI is impressively high and worthy of the investment.  You have to wonder how much of that over-allocation in time feeds into the percentage increase in cost.  If this could be deterred, think about how much more valuable these projects would become.

The natural life of a database is growth.  Very few databases stay a consistent size; as companies prosper, critical data requires a secure storage location, and a logical structure to report on that data is necessary for the company's future.  This is where relational databases come in, and they can become both the blessing and the burden of any venture.  Database administrators are both respected and despised for their necessity to manage the database environment, as the health of the database is an important part of the IT infrastructure and, with the move to the cloud, a crucial part of any viable cloud migration project.

How much of the time, money and delay shown in those projects is due to the sheer size and complexity of the database tier?  Our source data shows how often companies just aren't able to hold it together due to lacking skills, inaccurate time estimates and other unknowns that come back to bite us.

I can’t stress enough why virtualization is key to removing a ton of the overhead, time and money that ends up going into software projects that include a database.

Virtualizing non-production databases results in:

  1. Ability to deliver full copies of production for developers without extensive demands on storage.
  2. Ability to deliver those databases in a matter of minutes vs. days or weeks.
  3. Ability to refresh databases as needed for any project.
  4. Self-service user-interface so developers and testers can recover from a catastrophic issue in a database without having to grovel to a DBA to restore a virtual database.
  5. Ability to branch the VDB and do versioning, which is awesome for both developers and testers, (I know, we DBAs care very little about this feature… :))
  6. In migrations, including cloud migrations, the ability to move databases in short periods of time and to limit the storage footprint, delivering the cost savings the cloud promised, savings that most companies are finding, in the long run, don't occur with traditional database scenarios.

It’s definitely something to think about and if you don’t believe me, test it yourself with a free trial!  Not enough people are embracing virtualization and it takes so much of the headache out of RDBMS management.

Posted in AWS, Azure, Cloud, Oracle, SQLServer

March 16th, 2017 by dbakevlar

I love valuable metric data and I wanted to see what’s offered from different cloud providers.  Metrics are data and data ensures that when performance is impacted, we have the information we need to answer questions and resolve issues.

For my first adventure, we'll start with a high-level review of Amazon.  CloudWatch is one of the main tools for inspecting the workload on EC2 instances; I'm using the AWS Trial while Swingbench runs the Sales History workload.

There are additional charges for this feature, but they're pretty minimal per instance vs. other tools, so I'll talk about the basics here and you can make up your own mind.

Once you enter this console, the graph area will be empty and you’ll need to add metrics to the graph.  Since I’m using EC2, I click on EC2 in the Metrics area below the empty graph and each of the metrics available for each of my instances will be shown in a list.

Click on the instance and metric you're interested in and a list of options will show.  Click on Add to graph to add it to the graph above the list and have it start collecting metric data.

You can also click on the check box at the top of All Metrics to add them all.

Once you've completed adding the metrics for all the instances you're interested in, your graph will look something like this:

As you can see, you can highlight the line on the graph to match it to the metric/instance name below the graph or vice-versa.

As most of the work for Swingbench goes to the source and then to the Delphix Engine, which tracks the changes to be used by the targets, those first two instances and their network activity will show the highest usage.  I haven't played with the Graphed Metrics tab yet and will leave that for another day.

If there's a metric for an instance you've added that isn't providing any value to the chart, it's simplest to click on the Graphed Metrics tab, then click on the checkmark for it in the list, and it will stop displaying in the graph.
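If you'd rather pull the same numbers from the command line, the AWS CLI can return the raw datapoints behind the graph; the instance ID and time window below are placeholders.

# Average CPU utilization for one EC2 instance, in 5-minute periods
# (instance ID, start and end times are placeholders)
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistics Average \
    --period 300 \
    --start-time 2017-03-16T00:00:00Z \
    --end-time 2017-03-16T01:00:00Z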

If you haven’t had an opportunity to play with the free AWS Trial of Delphix, check it out!  It’s a great way to become familiar with Delphix and AWS-  all in one shot!

 

Posted in AWS Trial, Cloud

March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I loaded into my Oracle source database, and I haven't used Swingbench outside of the command line since quite a while back (my databases always seem to come with a load on them already!), so it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my MacBook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a much simpler change than most DBAs first assume.

When you first open it, Swingbench will prompt you to choose which pre-configured workload you'd like to use.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then had to perform a few simple configurations to get it to run.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<service name>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>
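As an example, the filled-in values might look like the following; the IP address, port, SID and Oracle home are placeholders for your own AWS instance, and the sqlplus call is just an optional sanity check that the credentials and connect details work before launching the workload.

#   Username:       sh
#   Password:       <password for the sh user>
#   Connect String: 54.0.0.10:1521:orcl
#   ORACLE_HOME:    /u01/app/oracle/product/11.2.0/dbhome_1
# Optional: confirm the connection details from any host with a SQL*Plus client
sqlplus sh/<password>@//54.0.0.10:1521/orcl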

I chose the default 16 connections to start out, but you can add more if you'd like.  You can also configure stats collection and snapshot collection before and after the workload.

I set my autoconnect to true, but the default is to not start the load until you hit the green arrow button.  Swingbench will then execute the workload with the number of connections requested until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window:

Next post we’ll discuss what you’ll see when running a Swingbench on a source database, the Delphix Engine host and subsequently refreshes to a VDB, (virtual database.)  We’ll also discuss other tools that can grant you visibility to optimization opportunities in the cloud.

 

 

Posted in AWS Trial, Cloud, Oracle

March 10th, 2017 by dbakevlar

Most people know I like to do things the hard way… 🙂

When it comes to learning things, there's often no better way to gain a strong understanding than to take something apart and put it back together.  I wanted to use Swingbench on my AWS deployment and needed the Sales History (SH) sample schema.  I don't have an interface to perform the installation via the configuration manager, so I was going to install it manually.

Surprise: the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing.  There are a couple of options to solve this dilemma.  Mine was to first get the sample schemas.  You can retrieve them from a handy GitHub repository maintained by Gerald Venzl.

I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.

scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.

Next, I extracted the files to the $ORACLE_HOME/demo directory.

unzip db-sample-schemas-master.zip

The unzip creates a directory called db-sample-schemas-master, which is fine with me, as I like to retain the previous one (I'm a DBA; keeping copies of data until I'm sure I don't need them is my life).

mv schema schema_kp
mv db-sample-schemas-master schema

With that change, everything is now as it should be.  The one thing you'll find out is that the download is for 12c, and I'm alright with this, as the Swingbench configuration I'm using expects the 12c SH schema, too.  Not that I expect many differences, but as Jeff Smith was all too happy to remind me on Facebook, I'm using a decade-old version of Oracle on my image here.

There are a lot of scripts in the sales_history folder, but all you'll need to run is sh_main.sql from SQL*Plus as SYSDBA to create the SH schema.

When creating the SH schema manually, you'll be prompted for parameter values that don't always mean what the prompt wording suggests, and as I've seen very little written on them (even after all these years of this existing), this may help others out; a sample non-interactive run follows the prompt descriptions below:

specify password for SH as parameter 1:

Self-explanatory- what password would you like SH user to have.

specify default tablespace for SH as parameter 2:

What tablespace do you want this created in?  I chose Users, as this is just my play database.

specify temporary tablespace for SH as parameter 3:

Temp was mine and is the common value for this prompt.

specify password for SYS as parameter 4:

This is the password for SYSTEM, not SYS, btw.

specify directory path for the data files as parameter 5:  

These are not Oracle datafiles; this is the path to your sales_history directory ($ORACLE_HOME/demo/schema/sales_history/), which gives access to the control files and .dat files for SQL*Loader.  Remember to include a slash at the end of the path name.

writeable directory path for the log files as parameter 6:

A directory for the log files.  I put this in the same directory; remember to use a slash at the end, or your log files will have the previous directory name prepended to the file name and be saved one directory up.

specify version as parameter 7:

This isn’t the version of the database, but the version of the sample schema-  the one from Github is “v3”.

specify connect string as parameter 8:

Pretty clear: the service name or connect string for the database the schema is being created in.
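Pulling those answers together, a non-interactive run might look like the sketch below.  The passwords, log directory and connect string are example values only, and the directory path assumes the rename performed earlier; you can just as easily answer the prompts interactively.

# Answer the eight prompts in order: SH password, default tablespace,
# temporary tablespace, SYSTEM password (despite the prompt wording),
# data file directory, log directory, schema version, connect string.
# All values below are examples.
cd $ORACLE_HOME/demo/schema/sales_history
sqlplus / as sysdba @sh_main.sql <<EOF
MyShPassword1
USERS
TEMP
MySystemPassword1
$ORACLE_HOME/demo/schema/sales_history/
/home/oracle/sh_logs/
v3
orcl
EOF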

Error Will Robinson, Error!

I then ran into some errors, but it was pretty easy to view the log and then the script and see why:

SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"

Well, the scripts (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called by sh_main.sql are referenced with a placeholder subdirectory path that doesn't exist in this 11g environment, so we need to get rid of that prefix and let the scripts be found in the sales_history directory.

If you view sh_main.sql, you'll see three paths to update.  Below is an example of one section of the script; the portion to remove is the _SUB_CWD_/sales_history/ prefix:

REM Post load operations
REM =======================================================
DEFINE vscript = _SUB_CWD_/sales_history/psh_&vrs

@&vscript

The DEFINE will now look like the following, so the script is picked up from the current sales_history directory:

DEFINE vscript = psh_&vrs

Once you've saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before it does the create.  If no other changes need to be made to your parameters, just execute sh_main.sql; if you need to change the parameter values you entered, it's quickest to exit SQL*Plus and reconnect to unset them.

Are We There Yet?

Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and if all was successful, then connect as the SH user with SQL*Plus and check the schema:

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE          COUNT(*)
------------------- ---------
INDEX PARTITION           196
TABLE PARTITION            56
LOB                         2
DIMENSION                   5
MATERIALIZED VIEW           2
INDEX                      30
VIEW                        1
TABLE                      16

SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;

TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV                        48
CHANNELS                                   5
COSTS                                 164224
COUNTRIES                                 23
CUSTOMERS                              55500
FWEEK_PSCAT_SALES_MV                   11266
PRODUCTS                                  72
PROMOTIONS                               503
SALES                                1837686
SALES_TRANSACTIONS_EXT                916039
SUPPLEMENTARY_DEMOGRAPHICS              4500
TIMES                                   1826

And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data.  I’ll try to blog on Swingbench connections and data collection next time.

See everyone at UTOUG for Spring Training Days on Monday, March 13th, and at the end of the week I'll be in Iceland at SQL Saturday #602!

Posted in Cloud, Oracle

March 2nd, 2017 by dbakevlar

There are a lot of people and companies starting to push the same old myth regarding the death of the database administrator role.  On the Oracle side, it started with the release of Oracle 7, and now it's being proposed with the introduction of the cloud.  Hopefully my post will help ease the minds of those out there with concerns.  There are a number of OBVIOUS reasons this is simply not true, but I'm going to write a few posts over the next year on some of the less obvious ones that will ensure DBAs stay employed for the long haul.

The first, and to some less obvious, reason that DBAs are going to continue to be a necessary role in Information Technology with the cloud is that almost all databases use a cost-based optimizer (CBO).

I’m not going to go into when it was introduced in the different platforms, but over 90% of database platforms used in the market today have a CBO.  This grants the database the ability to make performance decisions based on cost vs. strict rules, granting, (in theory and in most instances) better performance.

Articles, Bugs and Challenges, (ABCs)

There was an interesting thread on Oracle-l about an IO hit in an EBS environment due to extended statistics.  There were links in the conversation to Jonathan Lewis' blog that bring you to some incredibly interesting investigations of adaptive plans, and other posts on configuration recommendations and bugs involving extended statistics.

With the introduction of the CBO, the DBA was supposed to have less to worry about in the way of performance.  The database was supposed to have automated statistics gathering that would then be used, along with type of process, kernel settings and parameters to make intelligent decisions without human intervention.  The capability allowed the engine to take advantage of advanced features outside of simple rules, (if index for where clause columns exist, then use, etc.)

Some CBOs perform with more consistency than others, but many times the challenge of why a database chose a plan is lost on the DBA due to the complexity required to make these decisions.  The one thing the DBA thought they could count on was the database engine using up to date statistics on objects, calls and parameters to make the decision.  DBAs began to tear apart the algorithms behind every table/index scan and the cost of each process and limits for each memory and IO feature.  As their knowledge increased, IT shops became more dependent upon their skills to take the CBO to the level required to ensure customers received the data they needed when they needed it.  We learned to know when to ignore the cost on a query or transaction and how to force the database to choose the improved plan.

Dynamic Sampling or Is It Dynamic Pain?

I am a database administrator who HATED Oracle dynamic sampling and I still find the cost far outweighing the benefit.  There were few cases where it served a DBA like me, with strong CBO and statistics knowledge; having Oracle make choices for me (especially with SQL that already had carefully controlled hints in the statements) caused me to find new ways to disable it any way I could.  I had dreams of the feature maturing into something that would serve my needs, instead of being woken from those dreams to address another challenge where none should have been present.

If you managed as many multi-TB databases as I did, extensive dynamic sampling, especially on large objects, could come back to haunt you.  I performed a number of traces on processes where an Exadata was being accused of a configuration problem when, in truth, it was 8 minutes of dynamic sampling out of 9 minutes of DB time.  In each instance, I proved dynamic sampling was to blame via trace file evidence, and in each instance the developers and application folks involved would ask why dynamic sampling was even considered a feature.  I did see the feature's uses and benefits, but rarely for the very large databases I managed.

Adaptive Plans

The next logical step in Oracle's mind for enhancing features like dynamic sampling was to add adaptive plans.  This is another feature that Oracle introduced to benefit query and transactional performance.  The idea is to allow the plan to adapt to the run in question, but if you've read the thread and the links included in the first part of this post, you'll know that it often performs less than optimally.

Silver Bullets- NOT

In the end, on-prem databases required extensive knowledge of internal database workings and metrics, along with strong research skills, to guarantee the most consistent performance from any enterprise database engine.

All DBAs have experienced the quick fix solutionist, (not even a word, but I’m making it up here!) that would make recommendations like:

“Oh, it’s eating up CPU?  Let’s get more/faster CPU!”

“I/O waits?  Just get faster disk!”

“We need more compute?  Just throw more at it!”

As DBAs, we knew that this was the quick and, honestly, temporary fix.  To quote Cary Millsap, "You can't hardware your way out of a software problem."  It's one of my favorites, as I often found myself explaining why adding hardware was only a short-term solution.  To answer why it's short-term, we have to ask ourselves, "What is the natural life of a database?"

Growth.

Either in design, processes, users or code, (especially with poorly written code.)  If you didn’t correct the poor foundation that was causing the heavy usage on the system by ensuring it ran more efficiently, you would only find yourself in the same place in six months or if lucky, two years, explaining why the “database sucks” again.  This required research, testing and traditional optimization techniques, not enabling by granting it more resources to eat up in the future.

The Cloud is the Key

Consider that, at a very high level, any cloud is really just running all of these same product features and database engines on somebody else's computer.  How does that allow the complex features that required expertise to manage to be bypassed?

Unlike initial project startups or quick development spin ups, do we think companies are just going to continue to pay for more and more compute and IO?

I would be willing to bet it's more cost effective to have people who know how to do more with less.  At what point does that graph of price vs. demand hit the point where having people who know what they're doing with a database makes a difference?  I think it's a lot lower than the threshold many companies assume when making statements like, "You won't need a database administrator anymore-  just a standard administrator and developers!"

Tell me what you think!

 

 

 

 

 

Posted in Cloud, DBA Life

March 1st, 2017 by dbakevlar

No, this isn’t a title for a future Star Wars movie, but our own future, foreseen by me, (as well as many others) from experience, research and discussions everyday.

No, it’s not this dark and menacing…no sith lords.

The Cloud

We know who the main players in the current cloud arena are and how much they hold of the cloud market.

Many are betting that they can make a dent in that market and as much as it looks like some companies have the cloud all “wrapped up”, it may not be as clean a win as you might think.

One to Many

Most companies foresee having one primary cloud vendor, yet that may be true today and not tomorrow.  As a DBA, I was told over and over again, "We're hiring you just for your Oracle skills.  We won't have any need for your SQL Server or other database platform skills."  Within six weeks, a mission-critical system would be discovered that ran on another database platform, and my skills were needed first to recover from whatever cataclysmic situation had occurred and then to centralize its management under IT.

How many of you want to take bets on this happening with the cloud?  IT is often viewed as a roadblock by many companies, so the business, when it needs something, will find a way to get what it needs.  Historically, this meant getting a server, putting it under someone's desk and having them purchase or develop the product outside of the IT department.  Now, with the ease of the cloud, someone will simply create what they need, hosting in the cloud something that will become critical to the business at a point in the future, and then the IT organization will need to take responsibility for it, secure it and manage it.

The How of Clouds

This leaves IT folks with some new challenges.  Instead of having to consolidate to company standards for servers or migrate databases or data centers, they will have to migrate between clouds.

This type of need, along with a demand for business migrating into the cloud, will create cloud price wars.  They will be very similar to what we’re experiencing with our mobile providers, first introduced among the big four providers when T-Mobile did away with contracts and transfer fees.  Verizon, Sprint and AT&T were quick to follow with their own versions to entice customers and make it easier to move from one provider to another.

This is another reason why I'm at Delphix.  I see how important it's going to be for us to help customers:

  1. Move easily into the cloud from on-prem.
  2. Migrate from one cloud provider to another when they desire.

Although Amazon and Azure are rulers of the roost today, there are other companies that may be trailing in the arena that may rule it tomorrow.  If there’s one thing we know is constant, it’s change.  There was a time when we all laughed at the geeks and their smartphones, yet now we all own one.  I wouldn’t count anybody out of the race yet and it might be pertinent to start betting on those that enable those in the race.

 

 

Posted in Cloud, DBA Life

January 30th, 2017 by dbakevlar

I don't want to alarm you, but there's a new Delphix trial on AWS!  It uses your own AWS account, and with a simple setup it allows you to deploy a trial Delphix environment.  Yes, you heard me right-  with just a couple of steps, you could have your own setup to work with Delphix!

There’s documentation to make it simple to deploy, simple to understand and then use cases for individuals determined by their focus, (IT Architect, Developer, Database Administrator, etc.)

This was a huge undertaking and I’m incredibly proud of Delphix to be offering this to the community!

So get out there and check this trial out!  All you need is an AWS account on Amazon, and if you don't have one, it only takes a few minutes to create and set one up, with just a final verification before you can get started!  If you have any questions or feedback about the trial, don't hesitate to email me at dbakevlar at gmail.

Posted in AWS Trial, Cloud, Delphix

January 9th, 2017 by dbakevlar

We, DBAs, have a tendency to over think everything.  I don’t know if the trait to over think is just found in DBAs or if we see it in other technical positions, too.

I believe it corresponds to some of why we become loyal to one technology or platform.  We become accustomed to how something works and it’s difficult for us to let go of that knowledge and branch out into something new.  We also hate asking questions-  we should just be able to figure it out, which is why we love blog posts and simple documentation.  Just give us the facts and leave us alone to do our job.

Take the cloud-  many DBAs were quite hesitant to embrace it.  There was a fear of security issues, challenges with the network and, more than anything, a learning curve.  As is common, hindsight is always 20/20.  Once you start working in the cloud, you often realize that it's much easier than you first thought it would be and that your own frustration is your worst enemy.

So today we’re going to go over some basic skills the DBA requires to manage a cloud environment, using Amazon, (AWS) as our example and the small changes required to do what we once did on-premise.

In Amazon, we’re going to be working on EC2, also referred to as the Elastic Compute Cloud.

Understanding Locations, Regions and Zones

EC2 is built out into regions and zones.  Knowing what region you're working in is important, as it allows you to "silo" the work you're doing and, in some ways, isn't much different than a data center.  Inside each of these regions are availability zones, which isolate services and features even more, allowing definitive security at a precise level, with resources shared only when you deem they should be.

Just as privileges granted inside a database can both be a blessing and a curse, locations and regions can cause challenges if you don’t pay attention to the location settings when you’re building out an environment.

Amazon provides a number of links with detailed information on this topic, but here’s the tips I think are important for a DBA to know:

  1.  Before setting anything up that is part of a complete solution requiring multiple setup page configurations, ALWAYS check the region in the upper right corner.  I was surprised when it would change from page to page or after a login-

  2.  If you think you may have set something up in the wrong region, the dashboard can tell you what is deployed to which region under the Resources section:
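The AWS CLI can also confirm which regions and availability zones you have to work with, which is a quick way to double-check yourself; the region below is just an example.

# List all regions available to your account
aws ec2 describe-regions --output table

# List the availability zones within one region (us-west-1 as an example)
aws ec2 describe-availability-zones --region us-west-1 --output table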

Understanding Security Keys

Public key cryptography makes the EC2 world go round.  Without this valuable 2048-bit SSH-2 RSA key encryption, you can't communicate with or log into your EC2 host securely.  Key pairs, a combination of a private and a public key, should be part of your setup for your cloud environment.

Using EC2's mechanism to create these is easy and eases management.  It's not the only way, but it does simplify things, and as you can see above in the resource information from the dashboard, it also offers you a one-stop shop for everything you need.

When you create one in the Amazon cloud, the private key downloads automatically to the workstation you’re using and it’s important that you keep track of it, as there’s no way to recreate the private key that will be required to connect to the EC2 host.

Your key pair is easy to create: access your EC2 dashboard, scroll down on the left side and click on "Key Pairs".  From this console, you'll have the opportunity to create a new key, import a pre-existing key or manage the ones already in EC2:

Before creating one, always verify the region you're working in, as we discussed in the previous section, and if you're experiencing issues with your key, check for typographical errors and confirm that the private key file's location matches the name listed for identification.

If more than one group is managing the EC2 environment, consider carefully before deleting a key pair.  I've experienced the pain caused by a key removal that created a production outage.  Creating a new key pair is far simpler than rolling a replacement key out across application and system tiers after the removal of one that was still needed.
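If you prefer the command line to the console, the same key pair can be created with the AWS CLI; the key name and region below are placeholders, and the private key still needs to be protected once it lands on your workstation.

# Create a key pair in a specific region and save the private key locally
# (key name and region are examples)
aws ec2 create-key-pair --key-name my-dba-key --region us-west-1 \
    --query 'KeyMaterial' --output text > my-dba-key.pem

# Restrict permissions so ssh will accept the key
chmod 400 my-dba-key.pem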

Understanding Roles and Security

Security groups are silo'd for a clear reason, and nowhere is this more apparent than in the cloud.  To ensure that the cloud is secure, setting clear and defined boundaries of accessibility for roles and groups is important to keep infiltrators out of environments they have no business accessing.

As we discussed in Key Pairs, our Security Groups are also listed by region under resources so we know they exist at a high level.  If we click on the Security Groups link under Resources in the EC2 Dashboard, we’ll go from seeing 5 security group members:

To viewing the list of security groups:

If you need to prove that these are for N. California, (i.e. US-West-1) region, click on the region up in the top right corner and change to a different region.  For our example, I switched to Ohio, (us-east-2) and the previous security groups aren’t listed and just the default security group for Ohio region is displayed:

Security groups should be treated in the cloud the same way we treat privileges inside a database-  granting the least privileges required is best practice.
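As an example of least privilege in practice, opening only the database listener port to a single known network looks like this with the AWS CLI; the security group ID, port and CIDR block are placeholders.

# Allow only the Oracle listener port from one office network
# (security group ID and CIDR are placeholders)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 1521 \
    --cidr 203.0.113.0/24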

Understanding How to SSH to a Host

You're a DBA, which means you're most likely comfortable at the command line.  Logging into a box via SSH is as natural as walking, and everything we've gone through so far was to prepare you for this next step.

No matter whether your favorite command line tool is PuTTY or Terminal, if you've set up everything in the previous sections correctly, then you're ready to log into the host, aka the instance.

  1.  Ensure your downloaded private key is saved in an easily accessible spot for you to use to log in or that you know the username/password, (keys just make this easier…)
  2. Gather the information about “instances” by clicking on the EC2 dashboard, then click on Instances.
  3. The Public DNS and the Public IP are displayed; note the region, too:

You can use this information to then ssh into the host:

ssh -i "<keypair_name>.pem" <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com

Once logged in as the OS user, you can SU over to the application or database user and proceed as you would on any other host.

If you attempt to log into a region with a key pair from another region, it will state that the key pair can't be found-  another aspect showing the importance of regions.

Understanding How to SCP a File

This is the last area I'll cover today (I know, a few of you are saying, "good, I've already got too much in my head to keep straight, Kellyn…").

With just about any Cloud offering, you can bring your own license.  Although there are a ton of images, (AMIs in AWS, VHDs in Azure, etc.) pre-built, you may need to use a bare metal OS image and load your own software or as most DBAs, bring over patches to maintain the database you have running out there.  Just because you’re in the cloud doesn’t mean you don’t have a job to do.

Change over to the directory that contains the file that you need to copy and then run the following:

scp -i <keypair>.pem <file name to be transferred> <osuser>@<public dns or ip address>.<region>.compute.amazonaws.com:/<directory you wish to place the file in>/.

If you try to use a key pair from one region to SCP to a host (instance) in another region, you won't receive an error, but it will act as if you skipped the "-i" and the key pair, and you'll be prompted for the user's password:

<> password: 

pxxxxxxxxxxxx_11xxxx_Linux-x86-64.zip             100%   20MB  72.9KB/s   04:36

This is a good start for a DBA getting going in the cloud without over-thinking it.  I'll be posting more in the upcoming weeks that will not only assist those already in the cloud, but also those wanting to invest more into their own cloud education!

Posted in Cloud, Delphix, Delphix Express

April 7th, 2016 by dbakevlar

At Collaborate on Monday, Oracle has been offered a spot to highlight the Oracle Management Cloud.  OMC is a suite of next-generation monitoring and management cloud services designed for heterogeneous environments, all empowered by Oracle's own cloud!  The first three OMC services (Application Performance Monitoring, Log Analytics and IT Analytics) were launched at OpenWorld 2015.  Both Log Analytics and IT Analytics contain must-have capabilities for DBAs.  This is the only OMC session that will be available at Collaborate and it's a not-to-miss session with an all-star cast presenting it.  Manning the slides will be Brian Hengen and Yuri Grinshteyn, while Courtney Llamas and I will be sharing our expertise during the demonstrations.


 

We'll be covering how to grapple with resource usage in large environments, identifying what really needs your time allocated to it (versus what you can just let go!) and how you can help your manager manage your infrastructure better.  We all know how much damage false assumptions can cause to your ability to respond to priorities and demands.  IT Analytics has the intelligence you need built in, so no more guesswork based on capacity planning scripts, Excel spreadsheets and graphing to prove to the business that you have a handle on where your energy is most needed.

We'll also discuss how to extract valuable insight from mountains of logs.  We DBAs know how valuable log data is, but we also know how voluminous it is.  Having it reduced to human scale and correlated seamlessly with the analytics discussed above is an impressive feat, and you'll see how powerful log data can be when displayed this way for the business.  No more "going down rabbit holes" trying to digest log data on one server, matching it to output from a user interface for a tool on another server, and finally verifying that it actually had anything to do with the problem you were investigating in the first place.

So ensure you’ve added this session to your Collaborate schedule on Monday, at 10:15 in room Palm B. An impressive group of folks from Oracle have made sure it’s going to be an interesting, informative and empowering session on  the Oracle Management Cloud. I have a feeling the room may be packed, so make sure to register beforehand!

Posted in Cloud, Oracle Management Cloud

February 4th, 2016 by dbakevlar

I've been discussing the importance of the network to database performance for years, especially once I started working on VLDBs (Very Large Databases), but it's a topic that is often disregarded.  Now that I'm working more and more in the cloud, the importance of the network to our survival has become even more evident.

For each and every cloud project I've been involved in, there are inevitably multiple challenges that turn to the network administrator for a solution.  I don't blame the administrator in any way when he becomes exasperated by our requests.  Just as it is my solemn duty to protect the database, the network administrator is the sole protector of the network.  You'll hear a frustrated DBA say, "Just open the &^$# network up!  Let's just get this connected to our cloud provider!"  I have to admit that this request must be akin to someone asking a DBA to grant SYSDBA to a developer in production.


So yes, there are a lot of moving parts in a cloud environment.  No, not all of them are at the database level, but many of them could be at the network level.  This means that your new cloud environment must connect past firewalls, proxies, blocked ports and authentication steps that may not have been required back in the sole on-premise days.

[Diagram: hybrid cloud agent to OMS communication, high availability]

Yeah, there’s a bit more to the network than demonstrated in the picture above.

The database connection needs a secure path past the firewall and may require proxy configuration to be accessed via a web browser.  The application interface used to manage these environments may require browser proxy settings, which may in turn be controlled by automated processes rather than a manual proxy setting.  You may have network configurations that differ from one local office to another.  And we've only discussed configuration; we haven't even considered speed, packet size and bandwidth.
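Before escalating to the network administrator, a couple of quick checks from the database or application host can at least narrow down whether a blocked port or a proxy is the problem; the host names, port and proxy below are placeholders.

# Is the listener port on the cloud database host reachable at all?
nc -zv clouddb.example.com 1521

# Does an HTTPS request make it out through the corporate proxy?
curl -I --proxy http://proxy.example.com:8080 https://cloud.example.com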

So here is my recommendation-  make friends with your network administrator.  In fact, take the ol’ chap out for a beer or two.  Learn about what it takes to master, protect and ensure the company’s network from the threats outside.  Learning about the network will provide you with incredible value as a cloud administrator and you may get a great friend out of the venture, too.  For those of you that don’t make friends with your network admin, I don’t want to be hearing about any mishaps with phenobarbital to get the information, OK? 🙂


 

 

 

Posted in Cloud

September 15th, 2015 by dbakevlar

So I'm going to start this post with an admission: I don't have access to a cloud environment to test this out.  But I know what I would do first if I experienced slow response time on database creation or cloning via EM12c to the cloud, and I would like to at LEAST post what I would do, to give others the chance to test it out and see if it offers them some help.

Knowing what is causing slow performance is essential to troubleshooting any problem.  I've had a few folks come to me with complaints that their hybrid cloud clones or database creations in the cloud are slow, but the little data I've seen in those exchanges is starting to show telltale signs that the cloud isn't the issue.  There are some Enterprise Manager features in 12.1.0.5 that may be able to assist, and that's what we'll discuss today.

Page Performance

Although log data and diagnostic utilities are crucial when I'm troubleshooting anything, simple visuals based on this data can be very helpful.  Page Performance for the Repository is one of those features, but there is one caveat: the page that is experiencing slow performance (i.e. the database creation wizard from Cloud Control to PaaS in the cloud, etc.) must be run at least two times in the last 24 hours to be captured by this tool.

The data provided by this tool is based on a considerable amount of what we collect via the EM Diagnostics utilities, so although it doesn't provide as deep a diagnosis as the utility does, it may shed some high-level light on what you're facing.

To access the page, you need to be in the OMR (Repository) target, which is commonly accessed via Setup, Manage Cloud Control, Repository; from there, click on OMS and Repository, Monitoring, Page Performance.  You'll see the second tab in is Page Level Performance, which holds the data we're about to go deeper into today.


Upon accessing the dashboard, you’ll quickly notice that many pages have very different requirements to produce the output that you view in the console pages, (i.e. not all plugins and collections are created the same… :))


Now I want you to focus on the headers to note the data collected:


Cloud Control breaks down the page processing time for a number of different areas-

  • Total Page Processing Time in Seconds
  • Page Processing Time in the OMS, (Oracle Management Service)
  • Page Processing Time in the OMR, (Oracle Management Repository)
  • Processing Time in Browser/Network, Multi-Request Pages
  • Requests per Page, (very good when it comes to efficiency)
  • Number of SQL/PLSQL Executions per Page

The one I’m going to focus on is the Processing Time in Browser or Network.  If you note, there are sort options for each of the Avg and Max totals that are collected in each column.

If we sort by Max Processing Time in Browser/Network, we'll choose the first entry in the list that contains an active report (the others are stale, so those pages aren't links you can click on):


Again, I don't have any cloud access to demonstrate this on, so as you can see, the RAC Top Activity page is what we'll use for this example of how to inspect wait information for page response.


The top of the page clearly shows the time distribution across Database, Java, Agent and Network/Browser.  In the lower section (actually to the far right in the console), you can see the number of seconds allocated to each type of task by the process, which for our example is mostly Java time.  If there is SQL involved, it also breaks down the execution plans and wait events.

For this example, there is a small amount of network time, (this is the “High Availability” environment at my work, so it’s a pretty buff setup) but it does display the light pink section that shows there are some waits for the network and/or browser.

If WE WERE inspecting a cloud usage process, we would want to see the time distribution for the network, and I'm pretty sure it would be displayed clearly in these pages (again, this is an assumption on my part, as I haven't had the first-hand experience to investigate it).

Summary

If you are working in the Oracle cloud and are experiencing slowness, run the cloud process via the console a couple times, (even if it does timeout) and consider using the Page Performance feature to make a quick inspection of the report.  Although the example above clearly shows how much time is being spent on Java, you may find for the cloud, you’re dealing with network slowness that requires some investigation into firewall, DNS resolution and other challenges, but this report may quickly show the Oracle cloud as the innocent party.

Posted in Cloud, EM12c Performance, Enterprise Manager, Oracle

June 23rd, 2015 by dbakevlar

The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training in hybrid cloud.  There were many take-aways from the days we spent in the office (which is saying a lot-  most of you likely know how much I hate working anywhere but from home… :)), and I thought I would share a bit of this information with those who are aching for more details on this new and impressive offering in release 12.1.0.5.

If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.

Cloud Control Cloning Options

Cloud Control now includes a number of new options for database targets in EM12c.  These new drop-down options include cloning, to ease access to the new hybrid cloning.  Once you've logged into Cloud Control, go to Targets –> Databases and choose the database you wish to implement cloning features for.  Right-click on the target and the drop-downs will take you to the cloning options under Oracle Database –> Cloning.

[Screenshot: the Oracle Database –> Cloning drop-down menu]

There will be the following choices from this drop down:

  • Clone to Oracle Cloud-  ability to directly clone to the cloud from an on-premise database.
  • Create Full Clone-  Full clone from an image copy or an RMAN backup.
  • Create Test Master- Create a read-only test master database from the source target.
  • Enable as a Test Master- Use the database target as a test master, which will render it read-only and it would rarely be an option for a production database.
  • Clone Management-  Manage existing cloning options.

Using a test master is essential for snap clones, which offer great space savings and eliminate the time required by standard cloning processes.  The test master is in read-only mode, so for new cloning procedures it will need to be refreshed or recreated with an up-to-date copy (at which point another option, "Disable Test Master", appears in the drop-down).

Snapshot Clone

For the example today, we’ll use the following production database:


We’ll use an existing test master database to perform our clone from:


We can right-click on the database and choose to create a clone.  This will be created via a snap clone, so keep that in mind as we inspect the times and results of this process.


We choose to create a snapshot clone of the pluggable database.  Each snapshot clone is just a copy of the file headers, with the block changes tracked for the read-only or read-write clone.

Fill out the pertinent data for the clone using the correct preferred credentials with SYSDBA privileges: name the new pluggable database, provide the name you'd like displayed in Cloud Control, and enter the PDB administration credentials, password and password confirmation.  Once that's done, choose whether you'd like to clone it to a different container (CDB) than the one the source resides on, and then add the database host and ASM credentials.

Once you click next, you have more advanced options to view and/or setup:


The advanced options allow you to change the sparse disk group to create the clone on, storage limits and database options.


You can then choose to apply data masking to protect any sensitive data, but keep in mind that once you do so, you will no longer be using a snap clone, due to the masking; still, the option is available at this step.  You can also set up any pre-clone, post-clone or SQL scripts that need to be run as part of the clone.  This could include resetting sequences, user passwords, etc.

The next step allows you to schedule the clone in the future or run immediately.


You can also choose the type of notifications, as this is simply an EM Job that is submitted to perform the cloning process.  Once you’ve reviewed the cloning steps chosen via the wizard, you can then submit.


Once the job's been submitted, the submitter can monitor the job steps:


Success

Once the clone has completed, you can view each of the steps, including the time each took.


The source database was over 500 GB and was cloned in less than one minute!  You will also see the new cloned database in the targets list:


If curious, note that this is a fully cloned database that is on ASM, which you can view, just as you would for any other database.

Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.


More to come soon and thanks to Oracle for letting us get our hands on the new 12.1.0.5 hybrid cloning!

Posted in Cloud, Enterprise Manager

June 18th, 2015 by dbakevlar

Last week's release of 12.1.0.5 was a pleasant surprise for everyone out in the Oracle world.  This release hit the bulls-eye for another cloud target of Oracle's, announcing Enterprise Manager 12c's single pane of glass management of the hybrid cloud.  The EM12c team has been trained on and has been testing out the new features of this release with great enthusiasm, and I have to admit, it's pretty cool stuff, folks!

Why is the Hybrid Cloud Important?

Many companies are still a bit hesitant to embrace the cloud or, due to sensitive data and security requirements, aren't able to take advantage of cloud offerings for their production systems.  Possessing a powerful tool like Enterprise Manager to help guide them to the cloud could make all the difference-


 

You’re going to start hearing the EM folks use the term “Single Pane of Glass” a lot in the upcoming months.  It’s part of the overall effort to move Enterprise Manager past the perception that it’s still just a DBA tool and to get everyone to embrace the truth that EM12c has grown into an infrastructure tool.

What is the hybrid cloud?

We’ve discussed the baby-steps that many companies are taking with the hybrid cloud, (vs. others that are jumping in, feet first! :)).  With it, a company can uphold those requirements and keep their production systems on-premise, while enforcing data masking and sub-setting so that the sensitive data is never presented outside the production database, (including to the test master database that is used to track the changes in the snapclone copies…)  Database as a Service then allows them to clone development, test and QA environments to a less expensive cloud storage platform without exposing any sensitive data.

clone_datamask

Once the data masking or any other pre-clone data cleansing/subsetting is performed, the Test Master database is created and can be used to create as many snap clones as needed.  These snaps can be used for development, QA or testing.  The space savings continue to increase as snapclone copies are added, since each clone only consumes space for its block changes against the test master database.  This can add up to a 90% storage savings over traditional full database copies.
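
As a crude illustration of the masking step, (the real work would normally be done with the Data Masking pack and its format library, and the table and column here are made up) a pre-clone scrub of the test master source could be as simple as:

-- Hypothetical table/column; replace identifiers with a deterministic hash
UPDATE payments.physician_payments
   SET physician_npi = LPAD(ORA_HASH(physician_npi, 999999999), 9, '0');
COMMIT;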

Hybrid Cloning

The power of hybrid cloning is the Hybrid Cloud Gateway, a secure SSH tunnel that allows seamless communication between on-premise systems and the cloud.

hybcld3

Types of Clones

There are four types of clones currently offered with hybrid cloning:

  • On-premise source cloned to the cloud.
  • Cloud source, cloned to on-premise.
  • Cloud source cloned in the cloud.
  • Migrate from a schema in a database to a PDB in the cloud.

Simplicity is Key

The user interface is simple to engage:  use it to create a clone or clones, save off templates, and build out a catalog to be used for a self-service portal.  When cloning, the status dashboard is a great quick view of the success of each cloning step:

hybcld4

If deeper investigation of any single step needs to be performed, the logging is no different than inspecting an EM job log, (because an EM job is exactly what it is… :)):

hybcld5

I’ll be returning from Europe soon and hope to do more with the product, digging into this great new feature, but until then, here’s a great overview of 12.1.0.5’s brand new star!

 

Posted in Cloud, Enterprise Manager

January 13th, 2015 by dbakevlar

Every job comes with tasks that no one likes to perform, and database administration is no exception.  Patching is one of those necessary tasks, and when we’re expected to do more with less every day, patching another host, another agent, another application is something no one looks forward to.  It’s not that it goes wrong, it’s just tedious, and many DBAs know there are plenty of other tasks that would be a better use of their time.  Still, patching is essential and has to be done, we all know that.  OPatch and other patching utilities from Oracle make patching easy, but it can still consume a lot of a resource’s day.

Enterprise Manager 12c’s automated patching and provisioning, using the Database Lifecycle Management Pack, is gaining more appreciation from the IT community, as it assists the DBA with features to search recommended patches, create patch plans, review for conflicts, and share and re-use patch plans.

Configuring a Database for Online or Offline Patching

After logging into a target database, you can click on Setup and go to the Offline Patching setup:

patching22

You can then choose to use Online patching with MOS credentials:

patching1

or use Offline Credentials and configure the patching catalog, ensuring you upload all the XMLs for the catalog, which will now be stored locally on a workstation.  Once the upload is complete, run the Refresh From My Oracle Support job.

patching2

The Online configuration is recommended and works with the software library.  It’s what we’ll be talking about today.

Also ensure that you’ve set up the correct privileges to perform patching.  Provisioning and patching involve steps that will require privileges to run root scripts, so ensure that the credentials used for patching are allowed to sudo to root or use PBrun.

Database Patch Plans

To set up a patch plan for a database, there are a number of steps, but the patch plan wizard makes this very easy to do.  For our example, we’ll choose to patch 11.2.0.4 databases to the latest recommended patches.

First, let’s do a search to find out what patches we’ll need to apply to our 11.2.0.4 databases in our EM environment.

patching3

Our Enterprise menu takes us to Provisioning and Patching, then Patches and Updates.

From this console page, we can view what patch plans are already created in case we can reuse one:

patching4

As there isn’t an existing plan that fits what we need to do, we are going to first search for what patches are recommended with the Recommended Patch Advisor:

patching10

We’ve chosen to perform a search for recommended patches for 11.2.0.4.0 databases on Linux x86-64.  This will return the following four patches:

patching11

We can click on the first Patch Name, which will take us to the patch information, including what bugs are addressed in this patch, along with the option to download or create a patch plan.  For the purpose of this post, we’ll choose to create a patch plan:

patching12

We’ll create a new patch plan for this, as our existing ones currently do not include an 11g database patch plan that would be feasible to add to.  We can see our list of patches on the left, too, so this helps as we proceed to build onto our patch plans.

After clicking on the Add to New, we come to the following:

patching13

Name your patch plan something meaningful, (I chose to name it for a single instance, “SI”, the patch number and that it’s for 11.2.0.4) and then choose the database from the list you wish to apply the patch to.  You can hold down the CTRL key to choose more than one database and, when finished, click on Create Plan.

The patch plan wizard will then check to see if any other targets monitored by Cloud Control will be impacted and asks you to either add them to the patch plan or to cancel the patch plan for further investigation:

patching14

If you are satisfied with the additions, you can click on Add All to Plan to proceed.  The wizard then checks for any conflicts introduced by the additions and will report them:

patching15

In our example above, I’ve added an 11.2.0.3 instance home to show that the wizard notes it and offers to either ignore the warnings and add it or (more appropriately) cancel the patch plan and correct the mistake.

Adding to Patch Plans

In our recommended patch list, we had four recommended patches.  Once we’ve created our first patch plan, we can now choose to add to it with the subsequent patches from the list:

patching16

This allows us to create one patch plan for all four patches and EM will apply them in the proper order as part of the patch deployment process.

Patch Plan Review and Deploy

Once a patch plan is created, the next step is to review and deploy it.  Choose the patch plan from the list that we created earlier:

patching18

Double clicking on it will bring up any validation warnings that exist:

patching17

We can then analyze the required validations and resolve any open issues as we review the patch plan, before deploying:

patching29

We can see in the above checks that we are missing credentials required for our patches to be successful.  These can be set by clicking to the right of “Not Set”, after which we proceed with the review of our patch plan.

patching20

Next we add any special scripts that are required, (none here…) and any notifications on the patching process so we aren’t in the dark while the patch is being applied, along with rollback options and conflict checks.

These steps give the database administrator a true sense of comfort that allows them to automate, yet have notifications and options that they would choose if they were running the patch interactively.

Once satisfied with the plan, choose the Deploy button and your patch is now ready to be scheduled.

patching21

Once the patching job completes, or if it hits an issue and executes the logic placed in the conflict/rollback steps above, the DBA can view the output log to see what occurred before correcting and rescheduling.

Output Log 
Step is being run by operating system user : 'ptch_em_user' 
 
Run privilege of the step is : Normal  

This is Provisioning Executor Script
…
Directive Type is SUB_Perl
…
The output of the directive is:
…
Tue Jan 6 00:15:40 2015 - Found the metadata files; '19121551' is an patch
…
Tue Jan 6 00:15:40 2015 - OPatch from '/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch.pl' 
  will be used to apply the Interim Patch.
…
Tue Jan 6 00:15:52 2015 - Invoking OPatch 11.2.0.4.7
…
Following patches will be rolled back from Oracle Home on application of the patches in the given list :
   4612895
…
Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
OPatch continues with these patches:  6458921  

Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y

Running prerequisite checks...
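
Once the EM job finishes, one quick way to confirm the patch actually registered in the database is to check the registry history from SQL*Plus, (the rows you see will obviously depend on the patch applied):

-- Standard dictionary view; lists patch/bundle actions recorded in the database
SELECT action_time, action, version, comments
  FROM dba_registry_history
 ORDER BY action_time;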

This is high level, but really, it’s quite easy, and the more you automate provisioning and patching, the easier it gets-  you’ll wonder why you waited so long!

 

 

Posted in Cloud, Enterprise Manager

April 23rd, 2014 by dbakevlar

The last couple of weeks I’ve been lucky enough to get time with a new ZFSSA simulator, (it will be out soon for everyone, so you’ll have to just play with the current one available-  patience, grasshopper! :)) and I’ve spent it checking out the newest features available with Database as a Service, (DBaaS) Snapclone via Enterprise Manager 12c.  I’m really thrilled with Snapclone-  in two previous positions, I spent considerable time finding new ways of speeding up RMAN duplicates to ease the stress of weekly datamart builds sourced off of production, and this feature would have been a lifesaver back then!

As Oracle’s DBaaS offering is so full featured, I think it’s easy to have misconceptions about the product, or to find blog posts and documentation on earlier releases that create them.  Due to this, we thought it might help if I tried to dispel some of the myths, letting more folks realize just how incredible Snapclone really is!

Misconception #1- Snapclone only works with the ZFS Storage Appliance

DBaaS does offer a fantastic hardware solution that requires a hardware NAS like NetApp or ZFS/ZFSSA, but that’s not the only option.  There is also a software solution which can work on ANY storage.  There’s no requirement on where the test master database, (used to track changes and save considerable space vs. a traditional cloning method…) must reside, which means it can be on different storage than production.
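
As one concrete example of the “any storage” point, a software-only path can lean on Direct NFS CloneDB, which from the database side is little more than an init parameter, (this is a sketch of the enabling step only, not the full Snapclone workflow):

-- Requires a Direct NFS enabled home and a restart to take effect
ALTER SYSTEM SET clonedb = TRUE SCOPE = SPFILE;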

There are benefits to both the hardware and software solutions of Snapclone.  Keep in mind, Oracle prefers to support hardware and software that are engineered to work together, but they realize that not all customers will have a ZFS or NetApp solution in place, so they ensure that Snapclone solutions are available for whatever storage a customer may have in their shop.  Some of those benefits and requirements are listed below, and you can clearly see that Snapclone does not have a requirement of ZFS or NetApp:

tm_profile_2

Misconception #2-  Snapclone Requires a Cloud Implementation

It is true that Snapclone requires database pools to be configured and built, but this is in the environment’s best interest, ensuring that it is properly governed and that placement is enforced.  This feature is separate from an actual “cloud” implementation and the two shouldn’t be confused.  There are often dual definitions for terms and cloud is no different.  We have private cloud, public cloud, the overall term for cloud technology, and then we have database pools that are part of Snapclone, which are configured in the cloud settings of DBaaS.  They should not be confused with having to implement “cloud”.

Misconception #3- Unable to Perform Continuous Refresh on Test Master Database

There are a couple of easy ways to accomplish a refresh of the test master database used to clone from production.  Outside of the traditional profiles that would schedule a snapshot refresh, DBAs can set up an active or physical Data Guard standby, or use storage replication technologies they already have in place.  To see how these look from a high level, let’s first look at a traditional refresh of a test master database:

tm_profile_1

 

Now you’ll notice that the diagram states the test master is “regularly” refreshed with a current data set from production.  If you inspect the diagram below, you will see an example of a software or hardware refresh scenario to the test master database, (using data masking and subsetting if required) and then creating Snap Clones.

 

tm_using_testmaster

Now, as I said earlier, you can also use a standby database to perform a clone.  The following diagram shows the simplicity of a standby with Data Guard or GoldenGate.  Notice where Snap Clone takes over-  it’s at the standby database tier, so you still get the benefit of the feature, but can utilize comfortable technology such as Data Guard or GoldenGate:

tm_using_standby
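
To make the standby path in the diagram above a little more concrete, the refresh window conceptually reduces to pausing redo apply, taking the snapshot for the test master, and resuming apply.  The sketch below simplifies the choreography and is run on the physical standby:

-- Confirm this is the standby acting as the refresh source
SELECT database_role, open_mode FROM v$database;
-- Pause redo apply for a consistent snapshot window
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
-- ... take the storage snapshot / refresh the test master here ...
-- Resume real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;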

Misconception #4- Snapclone Can Only be Used with DB12c

Snapclone works with any supported version of the database from 10gR2 to 12c.  Per the DBaaS team of experts, it may work on earlier versions; they just haven’t certified it, so if there are any guinea pigs out there who want to test it out, we’d love to hear about your experience!

The process of Snapclone is very simple once the environment is set up.  With the Rapid Start option, it’s pretty much the “Easy button” for setting up the DBaaS environment, and the process is solidified once service templates are built and the service request has been submitted in the Self Service Portal.  There isn’t any more confusion surrounding where an Oracle home installation should be performed, what prefix is used for the database naming convention, or other small issues that can end up costing an environment in unnecessary complexity later on.

Misconception #5- Snapclone doesn’t Support Exadata

A couple of folks have asked me if Snapclone is supported on Exadata, and in truth, Exadata with Snapclone offers a unique opportunity for consolidations and creating a private cloud for the business.  I’ll go into it in depth in another blog post, as it deserves its own post, but the following diagram does offer a high level view of how Snapclone can offer a really cool option with Exadata:

tm_exadata

There are so many features provided by Snapclone that it’s difficult to keep on top of everything, but trying to dispel the misconceptions is important so people don’t miss out on this impressive opportunity to save companies time, money and storage.  I know my whitepaper on DBaaS was over 40 pages and I only focused on NetApp and ZFS, so I do understand how easy it is to get lost in it all, but hopefully this first post will get people investigating DBaaS options more!

Posted in Cloud, DBaaS, Enterprise Manager, Oracle
