We went over a few of the Java “tuning” options last time, so let’s move on to the OMS tier for this post.
Location, Location, Location
High-latency issues between the OMS (service) and the OMR (repository) are common when the two are separated geographically. When designing the Enterprise Manager environment, it’s important to keep your OMS hosts close to the repository hosts. Your agents can be global, with minor network considerations, but the OMS and OMR should always be planned for a single geographic location, preferably one datacenter.
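As a quick sanity check, you can measure round-trip latency from a prospective OMS host to the repository host before committing to a layout. This is just a sketch; the host name below is a placeholder you would replace with your own:

```shell
# Placeholder - substitute your actual repository (OMR) host.
OMR_HOST=omr-host.example.com

# Average ICMP round-trip time from the OMS host to the OMR host;
# a consistently high average suggests the two tiers are too far apart.
ping -c 10 "$OMR_HOST" | awk -F/ '/^rtt|^round-trip/ {print "avg rtt (ms): " $5}'
```

If an Oracle client is installed on the OMS host, `tnsping` against the repository connect string gives you a SQL*Net-level number as well.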
Number of Users
Sizing the OMS based on the number of concurrent users might not seem like something many need to worry about. I mean, really, only the DBAs will be using it, right?
If you are looking at middleware (WebLogic) or application-tier support, along with the much-desired XaaS (Everything as a Service), this question is never out of line during requirements gathering.
So how do you tune an OMS for concurrent users?
OMS and Java Heap Size
The Java heap size can impact the OMS, so once again we’ll look at this setting. It is handled differently depending on whether you’re on an older, non-64-bit OS or a newer one, which is all 64-bit.
The default is 1.5 GB, but we can change this by doing the following:
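The exact mechanics vary by release, but on 12.1.0.3 and later one common approach is to set the heap arguments as an OMS property with `emctl`. The sizes here are illustrative only, and should be validated against your own OS and OMS statistics:

```shell
# Sketch only - property name per the EM 12.1.0.3+ documentation; the
# heap values are illustrative, not a recommendation for every site.
cd $OMS_HOME/bin

# Raise the OMS Java heap from the 1.5 GB default to 4 GB:
./emctl set property -name JAVA_EM_MEM_ARGS -value "-Xms1024m -Xmx4096m"

# The change takes effect after an OMS restart:
./emctl stop oms
./emctl start oms
```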
Our resident Yoda, Werner de Gruyter, advises NOT to go over 4 GB without checking all of the OS and OMS stats beforehand, young padawan… 🙂
Off to Work We Go- Task Workers
Task worker threads are in charge of picking up all the DBMS Scheduler jobs that EM12c issues to collect metrics, roll them up, and so on. Some of these jobs take a bit of time, more time than the standard allocation of task worker threads is sized to handle.
Due to this, we recommend checking if there is a backlog of tasks:
>repvfy verify repository -test 1001
The test returns values showing whether you have a backlog. If you do, run the following to collect the data the system needs in order to optimize:
>repvfy dump task_health
By running this, data is collected that then can be used with the following to tune the task worker threads:
>repvfy execute optimize
Now, this should address the problem, but sometimes you’ll find the capture didn’t cover the window when the problem actually occurred, and you STILL have a backlog. You can force the number of task worker threads by running the following in a SQL*Plus session as the SYSMAN user:
SQL> exec gc_diag2_ext.SetWorkerCounts(<value 2-4>);
The command won’t accept anything larger than 4, so keep that in mind.
Patching
Yes, I know it’s an evil word for most, but know that we on this side are working very hard, day in and day out, to make it easier.
I can honestly say that about 90% of the issues people run into are corrected in the quarterly patches. My recommendation when you experience an issue with the EM12c environment, whether with the OMR, OMS, WebLogic, or agents: make sure you are patched to the latest patch release. Also, for agents, always set up patch plans to take the manual intervention out of the way. You deserve to have this automated… 🙂
Now, as we all love to patch, know that it has been a topic of sincere and extensive discussion here on the EM team, and I foresee impressive improvements from the incredible team I work with.
There’s a lot more to cover on the OMS, and then we’ll cover agent tuning, so know that Part III will be up on my blog soon!