
First Week at Oracle

I was warned it would be difficult.  I was warned that it would be time-consuming and painful, but I survived without so much as a scratch on me.

This last week was my first week at Oracle and it was really, really great.  The onboarding process wasn’t difficult at all.

Maybe it was because I was told to prepare for a difficult and challenging week, and it turned out to be much easier than I imagined.

Maybe it was because I had new challenges and interesting new environments to work in, which made the week go by fast and the onboarding tasks seem minimal.

Maybe it was because I have an excellent manager and peers who made sure I had everything I needed.

Maybe it was because I had excellent advisers and support inside so when I had a question, Tyler, Jeff, Courtney, Pete, Werner and others were there to quickly help me out.

Maybe it’s just that Oracle has a lot of processes, applications and good people in place that make coming on board a pretty pleasant experience.

On to week two! :)

This is the second post on ASH Analytics Detail.   You can find the first post here.

In this post we’re going to work through more Activity data within ASH Analytics, focusing on Resource Consumption, which you can find once you are in ASH Analytics, (you may want to stop refreshes, which can be done at the upper right in the refresh settings) under the Activity button.

ash_0321_resource_consum

We’ll be working with the last three options in this menu marked off in red, as the Wait Class is just a similar view of what we already get in the main Activity graph.

Wait Class

I’m sure you’ve already noticed, but ASH Analytics has a tendency to arrange all of its graphs from the heaviest usage at the bottom to the lightest at the top, which makes them easier to read.  This will be the case for pretty much all the activity graphs, while Load Maps arrange largest to smallest, left to right.

ash_0321_main

Nope, not much new here-  We are still seeing the resource consumption, but as wait events.  If we compare it to the standard wait events view, it’s not much different.  Where we would really find value in this view is if we had IORM, (IO Resource Manager) enabled.  Since we don’t, we’re going to skip it for now and it will get its own blog post in the future… :)

Wait Event

When we filter by Wait Event, we get a very different view of our data than we did by Wait Class.  Note the actual categories the graph is broken up by, listed down the right hand side.

ash_0321_wait_event

Scanning from the bottom –> up on the right hand side, we can see the largest resource consumption is “LNS Wait on SENDREQ” and the second highest consumer is “LGWR- LNS Wait on Channel”.  You are also seeing log file sync and log file parallel write.  All of this adds up to a clear understanding of what we are dealing with here… Why?  This is a Data Guard environment and the LNS* type waits are quite common, occurring when-

  • LGWR writes redo to the online redo log on this primary database, (when LGWR SYNC is not used, user commits are acknowledged once this step completes, except when the parameter COMMIT NOWAIT is used).
  • The Data Guard LNS process on this primary database performs a network send to the Data Guard RFS process on the standby database. For redo write sizes larger than 1MB, LNS will issue multiple network sends to the RFS process on the standby.
  • LNS posts LGWR that all the redo has been successfully received and written to disk by the standby, which is a heavy consumer, as well.

On the primary database, extended resource consumption is represented by the “log file parallel write” wait event.  We then note the repeated “LNS wait on SENDREQ” wait event, too.  You can further divide the “LNS wait on SENDREQ” wait event into network time and RFS I/O time by subtracting the “RFS write” wait event obtained on the standby.  These wait events can be assessed on the standby by using the queries in the “Data Guard Specific Wait Events” section for a physical standby, or by using AWR for a logical standby, if you need further proof.
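If you’d like to sanity check this same breakdown outside of the console, a quick query against ASH works, too.  This is just a minimal sketch against V$ACTIVE_SESSION_HISTORY, assuming a one hour window-  adjust the window to match your own timeline:

-- sketch:  top wait events sampled in the last hour, heaviest first
SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND event IS NOT NULL
 GROUP BY event
 ORDER BY samples DESC;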

Object

I’ve hidden the exact object names involved in this database, but you’ll note they weren’t involved in much of the resource consumption anyway.  On an environment with heavy IO, this would change significantly and you would see a lot more of one or more objects being the focus of this type of graph.  This is where having the Top Activity box checked is helpful, as it clearly shows you that object IO is not much to focus on for this environment.  The black line showing the upper max of activity for any given time in the activity graph gives you a clear baseline to compare with.

ash_0321_object

Blocking Session

This is where many of you will become very interested again.  We all want to know about blocking sessions and how they may be impacting the resource consumption in our environment.  Although there was no *true* impact to production processing, there were some blocking sessions that we could inspect in our current example.

Only sessions that are experiencing some type of blocking are displayed in this view.  No matter if it’s transaction, (TX- Tran Lock contention or TM- DML enqueue contention, etc.) or UL, (user defined) these sessions will be shown in this graph.
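If you’d rather pull the same information with a query, here’s a minimal sketch against V$ACTIVE_SESSION_HISTORY, (the one hour window is just an example):

-- sketch:  sessions most often sampled as blockers in the last hour
SELECT blocking_session, blocking_session_serial#, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND blocking_session IS NOT NULL
 GROUP BY blocking_session, blocking_session_serial#
 ORDER BY samples DESC;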

ash_0321_blocking_sess

I’ve included the bottom portion of the ASH Analytics section so you’re able to see the SQL_IDs involved in the sessions and then to the right, you can also view the session information, displayed by Instance ID, (for RAC) session ID, (sid) and serial number, (serial#).

You can then click on a session in the lower section, which will filter the ASH graph by the session and the wait class to give you another view, demonstrating in our example that we are waiting on commits and then our favorite wait, “other”…. :)

ash_0321_blocking_sess_det

What’s helpful is that the SQL_ID is clearly shown on the left and you’re able to click on any of these, (although I’d highly recommend clicking on the top one that is five times the concern vs. the others. :))

This completes the review of the Resource Consumption Menu for the ASH Analytics Activity Page.  We’ll continue on in Part III later this week!

I have a request to hit on some EM performance blog posts, so we’re going to start with breaking down some of the ASH Analytics areas.  ASH Analytics is not your grandfather’s “Top Activity” and I recommend everyone begin to embrace it, as it is the future of performance activity in Enterprise Manager.  The idea that we will be able to pull directly from ASH and AWR to present our performance data via EM12c is exciting, to say the least.  The added accuracy and value of the aggregated historical data must be recognized as well.

The standard output for ASH Analytics looks quite similar to Top Activity in the way of focusing on activity graphed out by wait events, but I’m going to dig into different ways to present the activity data, as it may answer questions that simply won’t show via wait event graphing.

When you first enter the ASH Analytics interface, you’ll be presented with the section at the top, which will display the timeline for your examination and then below, the actual wait event graph, as seen here in our example:

ash_standard

You’ll note that we haven’t used any filters, so we’re viewing all activity data and the standard wait classes we are accustomed to viewing, i.e. I/O, (dark blue) CPU, (kelly green) and System, (light blue).  We can see the system is rather busy during multiple intervals, crossing the CPU cores line, displayed in red across the image.

Now let’s display the data in a different format.  This is done by changing the Wait Class, shown under the Activity button, to another filtering type.

ash_menu

The section highlighted in red is where we will be spending our time today.  I’ll display the same section of time we saw in the standard wait event view, but focus on the SQL-defined displays, and we’ll go over how they might assist you in troubleshooting an issue.

SQL_ID

Now this one might seem a bit self-explanatory, but displaying the data from the activity pane appears very different than, let’s say, the load map:

ash_bld_out

Now let’s look at it from the Activity View defined by SQL_ID:

ash_by_sqlid

I find the load map an excellent way to get my point across to non-technical folks, but I’m a DBA-  I’m used to spending my time looking at the data in terms of activity, so it’s a comfortable transition for me to move from seeing the data in terms of wait events to seeing the percentage of activity allocated to each SQL_ID.

We can click on the SQL_ID displayed on the right to go to its detailed ASH page, which will show you all data pertaining to that SQL_ID and the wait information involved with it.  By clicking on different sections, we’re not just digging down into more detailed information-  keep in mind, we are also “filtering” out more data that could have been masking valuable information.
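For those who like to double check the graph with a query, this sketch approximates the SQL_ID view, (again, the one hour window is only an example):

-- sketch:  percentage of sampled activity per SQL_ID in the last hour
SELECT sql_id, COUNT(*) AS samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct_activity
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND sql_id IS NOT NULL
 GROUP BY sql_id
 ORDER BY samples DESC;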

Top SQL_ID

You may also think that this one is not going to look any different than the last, but that’s where you’re wrong.  This is not the SQL_ID that has resources allocated to it in activity, but the top level SQL_ID that is executing the active SQL_ID.  This is helpful if you are trying to locate what packages and procedures should be first on the list for code review, or if you want to quickly locate the package or procedure responsible for a specific SQL_ID.

ash_top_level_sqlid

These can then be tied to sessions by clicking on links and then traced to users, hosts, etc.  If I click on one of the top SQL_IDs, it will take me to all SQL_IDs involved in that Top SQL_ID and all the wait events, displayed in the same graph timeline.  From there, I can then dig down into the data pertaining to the waits, the SQL_IDs involved as part of the Top SQL_ID, or even switch to other views, such as a load map, to present the data in another format for peers to view more easily.
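Under the covers, this view maps to the TOP_LEVEL_SQL_ID column in ASH, so a rough sketch of the same rollup would look like this, (one hour window assumed):

-- sketch:  which top level statements own the active SQL_IDs
SELECT top_level_sql_id, sql_id, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND top_level_sql_id IS NOT NULL
 GROUP BY top_level_sql_id, sql_id
 ORDER BY samples DESC;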

Force Matching Signature

Force matching signature is a way of ensuring that SQL is using the same plan/profile, etc. even when it has literals in it.  It’s kind of like setting cursor_sharing=FORCE, but in a “hinted” way throughout a database.  This can be both good and bad, as let’s say that it forces the value to “1”, where the value “1” really only makes up 2% of the rows and it would have been better if the optimizer knew what it was working with.

ash_force_matching
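A quick way to see literals collapsing under one signature is to count distinct SQL_IDs per FORCE_MATCHING_SIGNATURE in ASH.  A sketch, with the usual example window:

-- sketch:  signatures hiding many literal-only variants of the same statement
SELECT force_matching_signature,
       COUNT(DISTINCT sql_id) AS literal_variants,
       COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND force_matching_signature <> 0
 GROUP BY force_matching_signature
 ORDER BY samples DESC;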

SQL Plan Hash Value

You should be seeing this next image [hopefully] as a mirror of the SQL_ID one.  We do like plan stability, and the idea that we have a ton of plan hash values changing erratically is enough to make most of us feel queasy.  Having a quick visual that we’re not experiencing a lot of plan changes can be quite helpful.  It’s not foolproof-  again, this is a visual representation of the data, but it’s helpful.

ash_sql_plan_hash
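If you want the same check as a query rather than a graph, a sketch like this flags any SQL_ID sampled with more than one plan in the window:

-- sketch:  SQL_IDs sampled with more than one plan hash value in the last hour
SELECT sql_id, COUNT(DISTINCT sql_plan_hash_value) AS plan_count
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND sql_id IS NOT NULL
   AND sql_plan_hash_value <> 0
 GROUP BY sql_id
HAVING COUNT(DISTINCT sql_plan_hash_value) > 1
 ORDER BY plan_count DESC;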

SQL Plan Operation

If you’re a DBA managing a data warehouse, you know when any DML is running and on what schedule.  The SQL Plan Operation view can give you a quick verification if something is amiss and there are updates, deletes or inserts happening that shouldn’t be.  The percentage of activity can also quickly tell you that a change has occurred vs. the normal database activity.

ash_sql_oper

You can also see just how much of your activity is going to a certain type of processing.

SQL Plan Operation Line

In the plan operation line view, you can see the operation type for the process, along with the description.  The process is then broken down by both the SQL_ID and the step the statement is performing.

ash_sql_plan_op_line

If you hover over any SQL_ID listed, it will also show this to you in the highlighted area:

spec_load_type_op

SQL OpCode

This one is self-explanatory.  We are simply looking at the activity level per statement type.  It is easy to see that during our busy intervals, queries were around a third of the activity, as were inserts.  This view can be helpful if you are retaining screenshots at different snapshot intervals for comparison.

ash_sql_op_code
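The same breakdown by statement type is available in ASH via the SQL_OPNAME column, (11gR2 onward), so a sketch for cross-checking the graph would be:

-- sketch:  sampled activity by operation type, (SELECT, INSERT, UPDATE, etc.)
SELECT sql_opname, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
   AND sql_opname IS NOT NULL
 GROUP BY sql_opname
 ORDER BY samples DESC;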

Top Level SQL Opcode

You can also view the data simply by operation code.  Notice that these are displayed by the Instance number, (if on RAC), Session ID, (sid) and Serial number, (serial#).  This is a global view of the operations that are occurring in the environment and can offer a clear view of activity, but at an operation type level.

 

ash_sql_op_code

We’ve just touched on what the SQL section of the ASH Analytics Activity view can offer.  Stay tuned and we’ll dig into each section and each area of this great interface for performance monitoring and analysis.


I’m in the midst of transferring over to my travel laptop to run all my VMs on and passing my previous ASUS work laptop down to my youngest son.  I was surprised to find out that not ALL laptops are set up to run virtual environments these days.

1.  Virtualization may not be enabled in the BIOS, (i.e. On-Boot UEFI in the Lenovo Yoga 11s.)

enable_vm

Once this is enabled, save the configuration and reboot, allowing you to now run a VM on your laptop.

If you are importing appliances, make sure you have set the virtual disks to import to the appropriate location, especially if you are using an external disk to house your VMs.

Before you click “Import”, look at the very bottom, where the location of the virtual disks will reside, and verify that is where you want them to live.  The default is commonly the data location for the PC user on Windows or /home/user for Linux users.

imp_disk

 

Now onto the VM

2.  All VM images are not the same.  If you are using an image file or have created the image from scratch, be prepared to do some work to get the image ready before you can successfully install Oracle on it and/or any other product such as Enterprise Manager, etc.

The nicest part is that Oracle will let you know what is wrong in its prereq checks during the installation.  It’s going to let you know if your OS and virtual host have what it takes to support the installation-

vm_updates

 

Most of these are pretty straightforward for anyone who’s installed Oracle for a while, but maybe not for those that are newer to the product.

1.  Most VM images have a small /tmp and/or it can fill quickly during installation.

  • clear out files from /tmp after each installation or in this case, failed install.
  • create a soft link, (or set the temp environment variables) to point to a second temp directory that has more space and make it the default temporary directory, as in the sketch below.
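Here’s a minimal sketch of that second option-  the paths are examples only, so place the staging area wherever you actually have the space:

# example only-  create a roomier temp area and point the installer at it
mkdir -p /u01/tmp
chmod 1777 /u01/tmp
# set these in the installing user's shell, (or profile) before running the installer
export TMP=/u01/tmp
export TMPDIR=/u01/tmp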

2.  Increase the swap space.  The swapfile can be created in any location where you have space to reserve and read/write swap to.  On our VM, we’re kind of limited at the moment, so we’re just going to create a swapfile off of /u01 and give it 2GB of space:

dd if=/dev/zero of=/u01/swapfile bs=1024 count=2097152   # 2GB of zeroed 1K blocks
mkswap /u01/swapfile                                     # format the file as swap
swapon /u01/swapfile                                     # enable it immediately
swapon -a                                                # re-enable all swap areas

3.  Hard limit/soft limit-  These can be fixed by the following inside your VM.  Most of the steps must be performed as ROOT, but verify whether each is meant to be run as ROOT or checked as the OS user-

Next, we’re alerted that the max open file descriptors is an issue, so let’s look at those for the system, the user and then increase them:

cat /proc/sys/fs/file-max        # system-wide limit, check as ROOT
ulimit -Hn                       # hard limit-  run as the OS user, not ROOT!
4096
ulimit -Sn                       # soft limit-  run as the OS user, not ROOT!
1024

We’ve now verified that they are smaller than the requested amount, (in our installation, it requested 65536) so we’ll set this and then persist the change in the file-

sysctl -w fs.file-max=65536      # set the new value immediately, (as ROOT)
vi /etc/sysctl.conf              # then make it persistent-  append, (as ROOT):
fs.file-max = 100000

Now save and verify with the following:

sysctl -p
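Keep in mind, fs.file-max only raises the system-wide ceiling-  the per-user hard and soft values we checked with ulimit come from /etc/security/limits.conf.  A sketch of the entries, (as ROOT, assuming the install user is oracle):

# /etc/security/limits.conf-  per-user open file limits, (example values)
oracle   soft   nofile   65536
oracle   hard   nofile   65536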

Log out and log back in to set the new values.

Next post we’ll take on all those darn lib files… :)

 

I love my Lenovo Yoga 11s special build ultrabook.  It has 16G of memory, an Intel Core i7 and a 256GB SSD, but that’s nowhere near the amount of space that I’m going to require to be running numerous virtual environments on it.  To help me out, I went on a fun buying and testing spree with a number of external disk solutions to find what worked best for my needs.

The contenders, (yes, I was all over the board on these)-

The goal was to come up with a solid combination of performance, portability and storage.

I’m not going to go over any benchmarks as you can see the claims by the manufacturer and the reviews from folks who purchased them on the links, but I can tell you what ended up working for me.

The My Book was just too large and clunky.  I didn’t get the speed increases with the external power supply, (there were recommendations to get an external with added power, as my Yoga doesn’t offer a whole lot in its tiny package…)  I think it would make a good backup drive, but not good for running VMs, and no way was I hauling the literally “book” sized external drive overseas and to conferences!

The Toshiba Canvio, along with two other 5400rpm drives I already had, were solid, with capacities from 500GB to 1.7TB, but they didn’t show the performance I required and the USB was 2.0 on the older ones.  It was good, but not good enough, so the Toshiba Canvio lost due to size vs. price.

The PNY 128 GB flash drive was fast enough, but it’s just too darn small.  The price is really great at $50, (and for those Best Buy shoppers, Amazon has it for a third of what you’re paying at Best Buy!) so it is only good enough to be used as an external flash drive to SUPPORT the VM environment, (which I’ll explain in a little bit.)

Which leads me to the Patriot 256GB SuperSonic-  This 256GB, blazing fast flash drive is just big enough and plenty fast enough to run your VM for demos and webinars with the speed you want, but the price is the same as what you’d be paying for the Best Buy PNY version.  This is a fantastic deal for a 256GB fast USB 3.0 flash drive.

Then comes in the Touro 1TB 7200rpm, USB 3.0 external drive.  It outperformed all the other drives and the price was fantastic.  The combination of this for my standard VMs and the Patriot 256GB SuperSonic flash drive for demos is a nice pairing.  I also kept the 128GB PNY flash drive-  why?  As you work with VMs, you come across the need to download and copy files as you build.  It’s nice to have a fast flash drive with plenty of space to download to.

The end setup is portable enough that I can easily travel and present, but fast enough that I can avoid some of the technical difficulties we’ve all run into when attempting to demonstrate via a VM.  A special thanks to Leighton Nelson, Tyler Muth and Connor McDonald for their recommendations.  This saved me a lot of time and I was able to test out just what fell inside that range to build out the setup I needed! :)

20140316_192114 

Yes, you heard that right-  DBA Goth Cowgirl is getting an upgrade to Enterprise Manager, (EM12c) Goth Cowgirl!  I will be starting as Oracle’s Consulting Member of the Technical Staff for the Strategic Customers Program, specializing in Enterprise Manager on March 17th.  The Strategic Customer Program is a group that rolls up under the Systems Management product line and is one of four primary product lines, (the other three being Database, Fusion Middleware and Applications.)  We comprise the whole of Enterprise Manager and OpsCenter.

This opportunity has been long in the making and I look forward to focusing on the Enterprise Manager product, both as part of the development team and presenting at conferences.  It’s been made quite clear to me that both my technical and marketing/presentation skills will be highly sought after in my new role and I’m looking forward to being an integral part of the EM team.  I will primarily work remotely, but will travel for short term technical engagements and marketing efforts whenever requested.

So for those of you who want to continue to master EM12c with me, EM12c Goth Cowgirl, continue to follow me, as I’ll be offering invaluable expertise as I deep dive into the source of all that is EM12c with Oracle!

I want to thank all those that helped me in some way with this opportunity, including Dan Koloski, Pete Sharman, Tyler Muth, Mary Melgaard, Wendy Delmolino, Pramod Chowbey, Will Scelzo and Adilson Jardim.  These are all incredible folks within Oracle and their guidance and support is tremendously appreciated now and in the future.

em12c_img

The New Chapter

Today is my last day at Enkitec and I look back on a whirlwind two years.  As many people know, I never seem to stop moving, (I hear my mother’s voice from when I was a small child saying, “Kellyn, sit down!  Kellyn, don’t climb!  Please, sit still!” :))  Nope, still doesn’t work, but I also never seem to stop growing and it’s time to take that next step in my career.

These last two years have been phenomenal.  Starting out as a new Oracle ACE with a flurry of conference appearances, it’s culminated in me speaking at 16 conferences, two challenging years as the conference director for RMOUG’s Training Days, database track lead for ODTUG’s ever popular KSCOPE conference, earning my ACE Director and being inducted into the OAK Table Network.  Somewhere in all of this, I was able to co-author two books, (Pro SQL Server 2012 and Expert EM12c) and start a third, (The Enterprise Manager Command Line Interface, (EM CLI)).

I’ve thoroughly enjoyed supporting my wonderful Enkitec clients and working with my peers who’ve been part of the Remote DBA team, (shout out to Mike M., Bobby N., Katy, Lance, Gary, Greg and those that have come and gone from the remote team….)  I wish continued success to those that I’ve been friends with long before Enkitec and have recently been able to call coworkers-  Martin Bach, Bobby Curtis, Frits Hoogland, Karl Arao, Alex Fatkulin and Andy Klock, (poor Alex and Andy, stuck with me a second time around! :))  I appreciate the support of my direct leads who, in a place of very flat hierarchy, made sure I had what I needed to be successful with Enkitec clients the last two years-  Jon Adams, Andy Colvin, Mike Moehlman and Martin Paynter.

After a couple weeks of transitioning my clients over to the competent hands of my teammates, I will be taking the next couple weeks off to concentrate on the Enterprise Manager 12c CLI book.  It’s rather amazing we’ve gotten as far as we have, considering how busy Ray Smith, Seth Miller and I are with day jobs, user group, community and conference demands.  These guys rock to work with on a book and I’m very proud of how well this little project of ours has come together.  Ray and Seth are stand up guys.

You will see me speak this next week at HotSos and then at the beginning of April at Norway’s OUGN, (Tim and I will be doing a joint keynote, along with our own tech sessions, and I believe Heli and I have the makings of a great WIT session!)  Tim and I will fly back from Norway and then a couple days later, head right back out to Las Vegas for a full Collaborate schedule.  I have three speaker sessions, a number of panels and other opportunities to network and speak to everyone.  My tech sessions for each of these will be focused on mastering EM12c, DBaaS and effectively utilizing ASH and AWR.

Thank you everyone who has helped me get to where I am and have supported me in my next opportunity to shine.  A special thanks goes out to Tim Gorman, my partner, mentor and biggest supporter.  I will be announcing officially where you can find me very soon, so stay tuned!

A number of folks have told me how much they enjoy the content that I “buffer” out throughout a 24 hr automated period, but hate how much they often miss vs. getting “spammed” with all of it at once and just being overwhelmed.  I’m trying to find the best of both worlds and have started to push this content out to my very own Flipboard magazine.

If you love Flipboard and would like to find the stories of interest I push out on Twitter, Facebook and LinkedIn, you can now subscribe to DBA Kevlar Ammo and get all the ammo you need… :)

There is also a link at the top of this blog that can take you to the Flipboard magazine to subscribe, too:  dbakevlar_ammo

Enjoy and I’ll keep the content up to date, pushing to my buffer as well as “flipping it” to my magazine! :)

I know, I know… Just answering questions that I keep receiving from folks repeatedly, so if you know this one, love ya, if you need this answer, here it is! :)

To secure/unsecure or to resynchronize an agent, you need the agent registration password.  This would have been created when you performed the installation and configuration of Enterprise Manager 12c.  If you didn’t perform the installation or you’ve just started to support it, you are most likely scratching your head and asking where could so-and-so have put that password?!?!

No fear-  you can add one and use it immediately.

Log into your EM12c Console and then click on Setup…

regi_pass

 

Once you click on Registration Passwords, you’ll be taken to the Registration Management page:

init_reg_pass

 

You will notice that the Agent Registration Password created at the time of the install is displayed.  You have two choices-

1.  Edit the existing one or

2.  Create a new one using the Add Registration Password button at the upper right:

add_regis_pass


The reasons for not editing an existing one?

1.  Other administrators may be using it and just haven’t shared the information with you.

2.  I have never removed one and am not sure if it would lead to some agents being blocked and requiring a resync.  In a large environment, I’m not sure I even want to take the chance, (I will test on one of my own VMs at a future date, promise!)

Needless to say, I commonly just add a new one, so click on “Add Registration Password”.

Add in the information regarding your new password.  Now, if you enter the same password as the default Agent Registration Password?  EM12c recognizes this and will NOT add the new one… :)

new_reg_pass1

 

Click on OK.  You will now see the new Agent Registration Password, which can be used in securing an agent, etc.

 

new_agent_pass2

 

>emctl secure agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Agent is already stopped…   Done.
Securing agent…   Started.
Enter Agent Registration Password : <Enter Your New Agent Registration Password Here>
EMD gensudoprops completed successfully
Securing agent…   Successful.

 

When an agent reports that it’s blocked and needs to be resynced, most DBAs are going to log into the Enterprise Manager 12c console and attempt a resynchronization, only to have it fail.  A resync isn’t required very often, but if you do run into “Agent Blocked”, here are the initial steps that should be performed to have a resync complete successfully.

Log onto the server that is reporting it’s blocked.
If a MS Windows server, then open a command prompt “as an administrator” and go to the agent home, (this can be seen in the console under “home location” and has the word, “core” in the path)
So for our example:
E:\app\oracle\agent12c\core\12.1.0.2.0\bin>
If Linux/Unix, go to the $AGENT_HOME, ensuring you are in the “CORE” directory and proceed to bin.
1.  Stop the agent:
>emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
The Oracleagent12c1Agent service is stopping………….
The Oracleagent12c1Agent service was stopped successfully.
 
2.  Secure the agent:
>emctl secure agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Agent is already stopped…   Done.
Securing agent…   Started.
Enter Agent Registration Password : <– Enter the Registration Password, if unknown, create a new one in the Console.
EMD gensudoprops completed successfully
Securing agent…   Successful.
 
3.  Restart the agent:
>emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
The Oracleagent12c1Agent service is starting……………………
The Oracleagent12c1Agent service was started successfully.
 
4.  Verify that it’s secured by trying to upload, but still blocked, which we expect, (the unblock has to be done AFTER a resecure):
>emctl upload agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
—————————————————————
EMD upload error:full upload has failed: The agent is blocked by the OMS. Agent is out-of-sync with repository. This most likely means that the agent was reinstalled or recovered. Please contact an EM administrator to unblock the agent by performing an agent resync from the console. (AGENT_BLOCKED)
 
5.  Now, Submit a resync from the console to complete the task:
Click on Targets, All Targets. In the search menu, type in the name of the host that is experiencing the issue.  All targets that are part of that host will come up.  Notice the one that says “Agent”.  Click on it and it will bring you to the Agent console.
Below the host target name on the upper left, you will notice it says “Agent”.  Click on this and then, in the drop down menu, click on “Resynchronization”.
Follow through the defaults, then click on the job that is submitted to perform the resync.  You can monitor it till it’s complete, (sometimes it can take up to a 1/2 hour to clear out all the issues).
Once it says “Succeeded”, you can log in and successfully upload, and the status should be green for all targets connected to this agent.