Category: Oracle

January 18th, 2017 by dbakevlar

So Brent Ozar's group of geeks did something that I highly support-  a survey of data professionals' salaries.  Anyone who knows me knows I live by data and I'm all about transparency.  The data from the survey is available for download from the site, and they're encouraging folks to download the Excel spreadsheet of the raw data and work with it.

Now I’m a bit busy with work as the Technical Intelligence Manager at Delphix and a little conference that I’m the director for, called RMOUG Training Days, which is less than a month from now, but I couldn’t resist the temptation to load the data into one of my XE databases on a local VM and play with it a bit...just a bit.

It was easy to save the data as a CSV and use SQL*Loader to dump it into Oracle XE.  I could have used BCP and loaded it into SQL Server, too, (I know, I'm old school) but I had a quick VM with XE on it, so I just grabbed that to give me a database to query from.  I did edit the CSV, removing both the "looking" column and the headers.  If you choose to keep them, make sure you add the column back into the control file and update "options ( skip=0 )" to "options ( skip=1 )" so the column headers aren't loaded as a row in the table.

The control file to load the data has the following syntax:

--Control file for data --
options ( skip=0 )
load data
 infile 'salary.csv'
 into table salary_base
fields terminated by ','
optionally enclosed by '"'
 (TIMEDT DATE "MM-DD-YYYY HH24:MI:SS"
 , SALARYUSD
 , COUNTRY
 , PRIMARYDB
 , YEARSWDB
 , OTHERDB
 , EMPSTATUS
 , JOBTITLE
 , SUPERVISE
 , YEARSONJOB
 , TEAMCNT
 , DBSERVERS
 , EDUCATION
 , TECHDEGREE
 , CERTIFICATIONS
 , HOURSWEEKLY
 , DAYSTELECOMMUTE
 , EMPLOYMENTSECTOR)

and the table creation is the following:

create table SALARY_BASE(TIMEDT TIMESTAMP not null,
SALARYUSD NUMBER not null,
COUNTRY VARCHAR(40),
PRIMARYDB VARCHAR(35),
YEARSWDB NUMBER,
OTHERDB VARCHAR(150),
EMPSTATUS VARCHAR(100),
JOBTITLE VARCHAR(70),
SUPERVISE VARCHAR(80),
YEARSONJOB NUMBER,
TEAMCNT VARCHAR(15),
DBSERVERS VARCHAR(50),
EDUCATION VARCHAR(50),
TECHDEGREE VARCHAR(75),
CERTIFICATIONS VARCHAR(40),
HOURSWEEKLY NUMBER,
DAYSTELECOMMUTE VARCHAR(40),
EMPLOYMENTSECTOR VARCHAR(35));
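
With the table in place, the load itself is a single sqlldr call- a sketch, assuming the control file above is saved as salary.ctl and a local XE schema login, (swap in your own credentials):

sqlldr userid=scott/tiger@XE control=salary.ctl log=salary.log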

I used Excel to create some simple graphs from my results and queried the data from SQL Developer, (Jeff would be so proud of me for not using the command line… :))

Here’s what I queried and found interesting in the results.

We Are What We Eat, err Work On

The database flavors we work on may be a bit more diverse than most assume.  This one was actually difficult, as the field could be freely typed into, so there were misspellings, mixed capitalization, etc.  The person who wrote "postgress", yeah, we'll talk… 🙂

The data was still heavily skewed towards the MSSQL crowd. Over 2700 respondents listed SQL Server, and only 169 listed their primary database platform as something else, with Oracle the majority of that group.
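
If you want to tidy up those free-form platform names before counting, a little UPPER and TRIM goes a long way.  A quick sketch against my SALARY_BASE table, (the CASE branches catch the misspellings I spotted- yours may differ):

select platform, count(*) as respondents
from (select case
             when upper(trim(primarydb)) like 'POSTGRES%' then 'PostgreSQL'
             when upper(trim(primarydb)) like '%SQL SERVER%' then 'SQL Server'
             else initcap(trim(primarydb))
             end as platform
      from salary_base)
group by platform
order by respondents desc;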

You Get What you Pay For

Now the important stuff for a lot of people: the actual salary.  Many folks think that Oracle DBAs make a lot more than those who specialize in SQL Server, but I haven't found that, and as this survey demonstrated, the averages were pretty close here, too. No matter if you're Oracle or SQL Server, we ain't making as much as that Amazon DBA…:)
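
The comparison itself is a one-liner once the data is loaded- a sketch against my SALARY_BASE table:

select primarydb, round(avg(salaryusd)) as avg_salary, count(*) as respondents
from salary_base
group by primarydb
order by avg_salary desc;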

Newbies, Unite

Many of those who filled out the survey haven't been in the field that long, (less than five years).  There's still a considerable number of folks who've been in the industry since its inception.

We’ve Found Our Place

Of the 30% of us that don't have degrees in our chosen field, most of us stopped after getting a bachelor's degree to find our path in life.

There are still a few of us, (just under 200) out there who had to accumulate a lot of school loans getting a master's or a doctorate/PhD before we figured out that tech was the place to be…:)

Location, Location, Location

The last quick look I did was to see, by country, the top and bottom average salaries for DBAs.
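
The query behind it, for anyone playing along at home- flip the ORDER BY to ASC for the bottom of the list, and the HAVING clause skips countries with only a respondent or two:

select *
from (select country, round(avg(salaryusd)) as avg_salary
      from salary_base
      group by country
      having count(*) >= 5
      order by avg_salary desc)
where rownum <= 10;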

Not too bad, Switzerland and Denmark… 🙂

Data Is King

I wish there'd been more respondents to the survey, but I'm very happy with the data that was provided.  I'm considering doing a survey of my own, just to get more people from the Oracle side, but until then, here's a little something to think about as we prep for the new year and another awesome year in the database industry!

Posted in DBA Life, Oracle

January 13th, 2017 by dbakevlar

I receive about 20-30 messages a week from women in the industry.  I take my role in the Oracle community as a role model for women in technology quite seriously and I’ve somehow ended up speaking up a number of times, upon request from different groups.


Although it's not the first time the topic has come up, I was asked last week for my thoughts on Oracle's CEO, Safra Catz, and her opportunity to be on President-elect Trump's transition team.

I wanted to ask your opinion about Safra not taking a leave of absence to help with Trump’s transition team? I think she should take a leave and as one of the top women in IT I think it shows poor judgment. Could the WITs write her a letter? Thoughts?

After some deep thought, I decided the topic required a good, solid answer and a broader audience.  As with anything involving the topic of WIT, the name of the source who asked the question doesn’t matter and anyone who asks you to give names isn’t really interested in education, but persecution.

It took me some time to think through the complexities of the situation.  Everyone will have some natural biases when a topic bridges so many uncomfortable areas of discussion:

  • Women’s Roles
  • Politics
  • Previous Employer

After putting my own bias aside and thinking through the why and what, here’s my thoughts-

No, I don't think Safra should take a leave of absence. We have significantly few women in C-level positions.  As of April 2016, only 4% of CEOs for Fortune 500 companies were women, (and Safra is one of them.)  I have a difficult time believing we'd be asking most men to give up the opportunity to be on a presidential transition team or take a leave of absence.  Some of the most challenging and difficult times in our careers are also the most rewarding, and this may be one of those times in Safra's life.  Anyone who's friends with me, especially on Facebook, would know I'm not a big fan of Donald Trump, but in no way should we ask Safra not to try to be part of the solution.

No, I don't think Safra should refrain from being on the transition team.  As much as we discuss the challenges of getting more women into technology, it's an even larger challenge in politics.  Women have less than 25% of the seats in Congress and even less at local government levels.  We are over 50% of the workforce and 50% of the US population.  How can we ever have our voices heard if we aren't participating in our own government?  Having more representation is important, not less- and not because my politics don't mesh with hers.

So what should the discussion really be about if we don’t want Safra to take a leave of absence or remove herself from the transition team?

  1. We want to know that there are clear policies in place to deter conflicts of interest.  We need to know that if improprieties do occur, accountability will result.
  2. We need to not limit Safra in opportunities or over-scrutinize her the way we do so many women who don’t fit inside the nice, neat little expectations society still has of them.
  3. We shouldn’t hold Safra accountable for what Donald Trump represents, his actions or if we don’t agree with his politics.

We also need to discuss what is really bothering many when a woman or person of color enters the lion's den, aka a situation that is clearly not very welcoming to us due to gender, race or orientation.  It can bring out feelings of betrayal, concerns that the individual is "working for the enemy."  We want to know that Safra will stand up for our rights as the under-represented.  We want to know that she would tell Donald that she doesn't condone his behavior or actions towards women, race and culture.

One of the biggest challenges I had to overcome when I started my career, was recognizing that every individual has their own path in this world.  Their path may be very different than mine, but through change comes growth and to expect someone to do what may not be in their capabilities can be just as limiting as not letting them do what they do best.  This wouldn’t be allowing Safra to do what she does best.

I’ve never viewed Safra as a role-model when it comes to the protection and advancement of women’s roles in technology or our world.  She’s never historically represented this, any more than those expecting it from Marissa Mayer.  It’s just not part of their unique paths, no matter how much the media likes to quote either of them, (especially Marissa, which consistently makes me cringe.)  It doesn’t mean they aren’t capable of accomplishing great feats-  just not feats in the battle for equality.  It also doesn’t mean they aren’t a source of representation.  The more women that are in the space, the better.  That’s how we overcome some of the bias we face.

Regarding those that do support women in more ways than just representing the overall count of women in technology and politics, I'd rather put my time into Sheryl Sandberg, Grace Hopper, Meg Whitman and others who have the passion to head up equality issues.  I both welcome and am thankful for the discussion surrounding writing the letter, and applaud the woman who asked me about the topic-  it's a difficult one.

For those of you who are still learning about why equality is so important, here are a few historical references to great women who've advanced our rights.  We wouldn't be where we are today without them.

Thank you to everyone for the great beginning to 2017 and thank you for continuing to trust me to lead so many of these initiatives.  I hope I can continue to educate and help the women in our technical community prosper.

Posted in Oracle, WIT

January 4th, 2017 by dbakevlar

How was 2016 for me?

It was a surprisingly busy year-  blogging, speaking, working and doing.

I posted just under 100 posts to my blog this year.  After I changed jobs, the “3 per week” quickly declined to “4 per month” after I was inundated with new challenges and the Delphix learning curve.  That will change for 2017, along with some new initiatives that are in the works, so stay tuned.

For 2016, the most popular posts and pages for my website followed a similar trend from the last year.  My emulator for RPI is still a popular item and I have almost as many questions on RPI as I do WIT-  Raspberry Pi is everywhere and you’ll see a regained momentum from me with some smart home upgrades.

My readers for 2016 came from just about every country.  There were only a few that weren't represented, but the largest numbers were from the expected countries.

I also write from time to time on LinkedIn.  LinkedIn has become the home for my Women in Technology posts, and it's led me to receive around 30 messages a week from women looking for guidance, sharing their stories or just reaching out.  I appreciate the support and the value it's provided to those in the industry.

RMOUG Conference Director- No Escape!

The 2016 conference was a great success for RMOUG, much of it due to budget cuts and changes that we made as we went along and addressed trends.  I've been collecting the data from evaluations, and it really does show why companies are so interested in the value their data can provide them.  I use what I gather each year to make intelligent decisions about where RMOUG should take the conference-  what works, what doesn't- and when someone throws an idea out there, you can either decide to look into it or have the data to prove that you shouldn't allocate resources to an endeavor.

A New Job

I wasn't into the Oracle cloud like a lot of other folks.  It just wasn't that interesting to me, and I felt that Oracle, with as much as they were putting into their cloud investment, deserved someone who was behind it.  I'd come to Oracle to learn everything I could about Oracle and Enterprise Manager, and as an on-premise solution, it wasn't in the company focus.  When Kyle and I spoke about an opportunity to step into a revamped version of his position at Delphix, a company that I knew a great deal about and admired, it was a no-brainer.  I started with this great little company in June, and there are some invigorating initiatives that I look forward to becoming part of in 2017!

2 Awards

In February, I was awarded RMOUG's Lifetime Achievement award.  I kind of thought this would mean I could ride off into the sunset as the conference director, but as my position at Oracle ended, (it had been a significant fight to keep me managing the conference as an Oracle employee, transitioning me to a non-voting member to keep within the by-laws) not many were surprised to see me take on a sixth year of managing the conference.

In April I was humbly awarded the Ken Jacobs award from IOUG.  This is an award I'm very proud of, as Oracle employees are the only ones eligible, and I was awarded it in just the two years I was employed at the red O.

3 Makers Events

I haven't had much time for my Raspberry Pi projects the last number of months, but it doesn't mean I don't still love them.  I gained some recognition as 2nd ranking in the world for RPI Klout score back in July, which took me by surprise.  I love adding a lot of IoT stories into my content, and it had caught the attention of the social media engines.  Reading and content is one thing, but it was even more important to do: I had a blast being part of the impressive Colorado Maker Faire at the Denver Museum of Nature and Science earlier in 2016.  I was also part of two smaller Maker Faires in Colorado, allowing me to discuss inexpensive opportunities for STEM education in schools using Raspberry Pis, Python coding and 4M kits.

Speaking Engagements

Even though I took a number of months off to focus on Delphix initiatives, I still spoke at 12 events and organized two, (Training Days and RMOUG’s QEW.)

February:  RMOUG– Denver, CO, (Director and speaker)

March: HotSos– Dallas, TX, (Keynote)

April: IOUG Collaborate– Las Vegas, NV

May: GLOC– Cleveland, OH, NoCOUG– San Jose, CA

June: KSCOPE– Chicago, IL

July: RMOUG Quarterly Education Workshop– Denver, CO, (Organizer)

September: Oracle Open World/Oak Table World– San Francisco, CA

October: UNYOUG– Buffalo, NY, Rocky Mountain DataCon & Denver Testing Summit–  Denver, CO

November: MOUS– Detroit, MI, (Keynote) ECO– Raleigh, NC

New Meetup Initiatives and Growth

I took over the Denver/Boulder Girl Geek Dinners meetup last April.  The community had almost 650 members at the time, and although it wasn't as big as Girl Develop It or Women Who Code, I was adamant about keeping it alive.  Come the new year, thanks to some fantastic co-organizers assisting me, (along with community events in the technical arena) we're now on our way to 1100 members for the Denver/Boulder area.

The Coming Year

I'm pretty much bursting with anticipation due to all that is on my plate for the coming year.  I know the right-hand sidebar is a clear indication that I'll be speaking more, meaning more travel and a lot of new content.  With some awesome new opportunities from Delphix and the organizations I'm part of, I look forward to a great 2017!

Posted in DBA Life, Oracle

December 12th, 2016 by dbakevlar
UPDATE-  I want to thank everyone who supported this post and reached out to me about the challenges of managing and working with user group events.  This post, as with others I've written on different topics, (I have quite the knack for saying what most just think, don't I? :)) struck a nerve in the community, but it was also very timely with a mission that Nichole Scott, Oracle's North America User Group Senior Manager, was working to address.  Nichole was kind enough to reach out to me after reading the blog post, and through a good discussion of what occurred, (both historically and currently) we were able to find a way to get RMOUG sponsorship and to reach out to another marketing team about the conflicting Cloud marketing event in Denver.  I really appreciate the great support from Nichole, and her efforts went a long way in resolving some of the frustrations RMOUG's board was feeling.

This is one of those posts where my hat comes off as a previous Oracle employee, same with my Delphix hat.  I am here only as a board member and representative of my Oracle REGIONAL user group community and have a bone to pick with a few people at Oracle.


I’ve been the conference director for Rocky Mountain Oracle User Group’s, (RMOUG) Training Days conference since 2012.  This is a demanding volunteer position within the board of directors for any regional user group, made even more demanding at RMOUG for three reasons:

  1. RMOUG’s Training Days conference is the largest regional conference in the US.
  2. We have a board that is strained by demands, so resources are limited by the lives of the volunteers.
  3. User groups are going through challenges as the Oracle community matures and changes.

After five conferences, you'd think it'd just be down to a science, but there's always an unexpected challenge or opportunity to take on; plus, as a non-profit, I'm required to do it in a way that has little cost or justifies itself with a payoff.

In the last two years, Oracle really stepped up and was one of our top sponsors.  With that sponsorship came a number of demands from them in the way of attendee lists, free tickets and extra cost to our membership.  There were always a few people inside the company that would bleed us dry in an attempt to get everything out of the event till it would cost us more than the money invested.

Although I’d feel some frustration with this, there were always incredible people inside Oracle who saw the value in the Oracle user group community and would try to make it worth it.  On the other hand, the secondary support from Oracle Technology Network, (OTN) through Laura Ramsey has been an incredible benefit to our community.  Although we spend every penny of the sponsorship to make sure their area is provided for, they always ensure to help us by marketing our event through social media and with the community.  I’ll state again, what OTN does is separate from Oracle sponsorship and as I’ve told folks-  Oracle is like 10,000 little companies under one name, so what one does, shouldn’t be put in comparison to another.

This year, the standard routine of speaking with the Oracle Hardware group for sponsorship was underway and they were very excited about RMOUG Training Days 2017.  We received word this last week that this wouldn’t be happening and that Oracle wouldn’t be offering any sponsorship for RMOUG.  Oracle just didn’t see it very valuable to sponsor Oracle user groups any longer.

Something also to be aware of is that RMOUG, like UKOUG, doesn't receive any compensation for ACE Director speakers.  We are one of the few that were penalized as "too successful" as of 2013 and informed that, despite the good that we do in the community, ACE Director speakers would no longer be eligible to be reimbursed for travel expenses.  ACE Directors have still submitted abstracts and hoped to be accepted to speak. We at RMOUG went out of our way to help make it worth their while by finding drivers for speakers from the airport, (someone once joked that whoever picked out the location for the Denver International Airport must have liked Kansas.)  We also have an incredible welcome reception for our speakers and volunteers, and offer our volunteers incentives to help us keep everything running like a well-oiled machine.  This is all done by a non-profit with rising costs from management companies and event centers, while memberships in user groups are at an all-time low.

We were then contacted by Oracle this last week to let us know that they would be having a Cloud day event just 10 days before our 2017 conference.  There are Oracle employees who are liaisons on our board, and at no time did anyone in Oracle marketing communicate with them as the event was being planned on the calendar, or consider whether it would impact the local Oracle user group community.

Their request- They wanted RMOUG’s support in marketing this event and promoting it to our membership to raise attendance.

Nope.  Not going to happen.  I’m not going to EVEN add a link to it here in my post.

Oracle can't continue to stomp on the local user groups, expect them to do its marketing and provide attendees for its events, and still expect them to survive.

I’m quite aware how many user groups are in a battle for their survival right now.  UTOUG has started to plead with their membership-  letting them know that unless someone steps up and helps, their ability to offer meetups and keep the group going will be impacted.  NoCOUG has repeatedly gone to their membership and groups outside looking for support-  fearful that they won’t survive to see another year.  RMOUG dropped one 2016 newsletter and almost dropped a quarterly event due to lacking support, volunteers and attendance.  These are just three of the top five in the US this last quarter-  I know from my speaking engagements from the last two years, there are many other stories I could share.

The Oracle ACE program now has a point system that requires those in the community to speak at events, but if the regional user groups fail, where will people speak?  If Oracle doesn’t support groups, where will they go?  I know I’m doing more with meetups, DataCons and other events.

Most directors on boards expect yearly changes to who they will be dealing with at Oracle, as there is someone new every year.  Even the Oracle events calendar is missing a majority of the US events, and I won't even get started on those outside of the country.  Community members aren't volunteering their time, Oracle isn't supporting the regional user groups, and now we have competition not just from meetups, other local events and companies, but from ORACLE, TOO?

There's no shortcut to technical quality that can be had with bright lights and a lot of marketing swag.  It takes everyone to build the user community brand, and the more that Oracle tries to bundle it up into a fast-food type of drive-thru experience, the more the user community will starve for real content.  That's where the regional user groups come in and provide sustenance.

Posted in Oracle

November 30th, 2016 by dbakevlar

I love technology-   ALL TECHNOLOGY.  This includes loving my Mac Air and loving my Microsoft Surface Pro 4.  I recently went back to a Mac when I joined Delphix, trimming down the power I had on my Surface Pro 4, knowing the content I was providing would be required to run on hardware with lesser resources.

With the release of Microsoft SQL Server 2016 on Linux, I jumped in with both feet and wanted to install it on one of the Linux VMs that I have "discovered" with my Delphix engine and its Oracle environment.  The VM's OS was CentOS, which isn't a supported OS, but the yum commands did work with a few changes and an upgrade from CentOS 6.6 to 7.

After upgrading and installing SQL Server 2016, I became aware that the memory requirements, once trimmed down on the VM to less than 1.5GB, now came to at least 3.5GB to run.  My Mac Air has only 8GB of memory, and to run a Delphix environment, you have the standalone Delphix engine, (a simple software appliance) VM, a "Source" environment and a "Target" environment.  Running three environments with that much of an increase in resource demands for the source and target was a bit too much for the little machine.


OVA Move

Moving a VM from a Mac to a PC is easy: copy the .ova file and import it on the new PC.  Upon doing so, though, I noted that mine was from the original version and didn't include my OS upgrade or the MSSQL installation.


I was able to quickly see this by comparing the images with the following command:

uname -a

I took a new snapshot, brought the new file over and imported again, but no change occurred.  I then decided to do some testing of how robust, and how portable, VMs really are.

Ensuring the VM was DOWN, I copied the actual VM’s folder on my Mac Air to a jump drive.  It was almost 18 GB.

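If you prefer the command line for the copy, it's a single cp from the Mac side, (the paths here are hypothetical- adjust for your own VM bundle and jump drive names):

cp -R ~/Documents/Virtual\ Machines.localized/LinuxTarget.vmwarevm /Volumes/JUMPDRIVE/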

I then switched over to the Surface and, with the VM down, renamed the original folder and copied the one from my Mac into the same directory.  I then renamed it to be the same as the original, (I had to remove the .vmwarevm extension on the folder) which then mimicked the original it was replacing.  Here's the folder with the .vmwarevm extension on the jumpdrive:

[screenshot: the copied folder on the jumpdrive, still carrying the .vmwarevm extension]

And here's how I renamed the original folder to "old", then copied the Mac version into the same directory and removed the extension.  Notice that the name now matches what it would have been for the original, which will also match what is in the Windows registry:

[screenshot: the renamed "old" folder and its replacement in the Windows VM directory]

I restarted the VM, checked that everything came up and verified that the image contained the correct OS and MSSQL installation:

[screenshots: verifying the upgraded OS and the MSSQL installation]

Ahhh, much better…. 🙂  Now my upgraded VM with the addition of MSSQL 2016 has some room to move and grow!

Posted in Oracle, VMWare

November 23rd, 2016 by dbakevlar

The challenge of coding is that you sometimes need to be aware of what, outside of your code, can change and make your code appear guilty.


I wrote a blog post a short time back on my first time scripting with AppleScript, with a goal of automating weekly status reports to my new manager of my tasks in OmniFocus.  After seeing some cool stuff one of my peers did with their setup in OmniFocus, I built out mine to do something similar.  What I didn't realize was that my code was significantly dependent on my original setup, so no new updates were created after the change.

Now that I know a little bit more about AppleScript and OmniFocus, I chose to do the following:

  1. Use OmniFocus' Perspectives to give me a better status report output.
  2. Simplify my code to use the perspective vs. me "scraping" the data in OmniFocus.

For the database folks out there, a Perspective is just like a view in a database.  It's just an alias for a selection of data that answers the query, "What have I completed in the last month?"- listing the project and the notes on my status, organized by date completed.
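
In database terms, (with hypothetical table and column names) the same idea would look something like this:

-- roughly what my "Weekly Report" perspective asks for, expressed as a view
create view weekly_report as
select project, status_notes, completed_date
from tasks
where completed_date >= add_months(sysdate, -1)
order by completed_date;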

Create Perspective

You can create a new Perspective quite easily.


Create the perspective, (I've named mine "Weekly Report") and then create the "view" that will populate our report properly.


AppleScript

Now we'll need to build our code to match what our perspective has defined:

--Set up code to match the perspective

set thePerspective to "Weekly Report" --The exact Perspective name
set theSubjectOfMessage to "Task Report for Kellyn" --The Subject of Email
set theSender to "Kellyn Gorman"
set POSIXpath to POSIX path of "/Users/pathname/Library/Containers/com.omnigroup.OmniFocus2/Data/Documents/tasklist.txt"
tell front document of application "OmniFocus"
    tell front document window
        set perspective name to thePerspective
        save in POSIXpath as "public.text"
    end tell
end tell

tell application "Mail"
    set theMessage to make new outgoing message with properties {subject:theSubjectOfMessage, sender:theSender}
    tell content of theMessage
        make new attachment with properties {file name:POSIXpath} at after last paragraph
    end tell
    tell theMessage
        make new to recipient at end of to recipients with properties {address:"emailaddress@.com"} --add address for email
    end tell
    send theMessage
end tell

And test your code…always… 🙂  If you’ve set everything up correctly, then you should have an awesome weekly status report sent to your manager telling him how awesome you are.

Posted in DBA Life, Oracle

November 14th, 2016 by dbakevlar

OK, so I'm all over the map, (technology-wise) right now.  One day I'm working with data masking on Oracle, the next it's SQL Server or MySQL, and the next it's DB2.  After almost six months of this, the chaos of feeling like a fast food drive-thru with 20 lanes open at all times is starting to make sense, and my brain is starting to find efficient ways to siphon all this information into the correct "lanes".  No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂


One of the areas I've been spending some time on is the optimizer and its differences in Microsoft SQL Server 2016.  I'm quite adept on the Oracle side of the house, but for MSSQL, the cost-based optimizer was *formally* introduced in SQL Server 2000, and filtered statistics weren't even introduced until 2008.  While I was digging into the deep challenges of the optimizer on the Oracle side during this time, with MSSQL I spent considerable time looking at execution plans via dynamic management views, (DMVs) to optimize for efficiency.  It simply wasn't at the same depth as Oracle until subsequent releases, and it has grown tremendously in the SQL Server community.

Compatibility Mode

As SQL Server 2016 takes hold, the community is starting to embrace an option that Oracle folks have used historically-  when a new release comes out, if you're on the receiving end of significant performance degradation, you have the choice to set the compatibility mode to the previous version.

I know there are a ton of Oracle folks out there that just read that and cringed.

Compatibility in MSSQL is now very similar to Oracle.  We allocate the optimizer features by release version value, so for each platform it corresponds to the following:

Database               Version Value
Oracle 11.2.0.4        11.2.0.x
Oracle 12c Release 1   12.1.0.0.x
Oracle 12c Release 1, patchset 1   12.1.0.2.0
MSSQL 2012             110
MSSQL 2014             120
MSSQL 2016             130


SQL Server has had this for some time, as you can see by the following table:

Product             Database Engine Version   Compatibility Level Designation   Supported Compatibility Level Values
SQL Server 2016     13                        130                               130, 120, 110, 100
SQL Database        12                        120                               130, 120, 110, 100
SQL Server 2014     12                        120                               120, 110, 100
SQL Server 2012     11                        110                               110, 100, 90
SQL Server 2008 R2  10.5                      100                               100, 90, 80
SQL Server 2008     10                        100                               100, 90, 80
SQL Server 2005     9                         90                                90, 80
SQL Server 2000     8                         80                                80

These values can be viewed in each database using queries for the corresponding command line tool.

For Oracle:

SELECT name, value, description from v$parameter where name='compatible';

Now, if you're on Database 12c and multi-tenant, then you need to ensure you're in the correct container first:

ALTER SESSION SET CONTAINER = <pdb_name>;
ALTER SYSTEM SET COMPATIBLE = '12.1.0.0.0' SCOPE=SPFILE;

(COMPATIBLE is a static parameter, so the change lands in the spfile and takes effect at the next restart.)

For MSSQL:

SELECT databases.name, databases.compatibility_level from sys.databases 
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120
GO

Features

How many of us have heard, "You can call it a bug or you can call it a feature"?  Microsoft has taken a page from Oracle's book and refers to the ability to set the database to the previous compatibility level as the Compatibility Level Guarantee.  It's a very positive-sounding "feature", and those who upgrade and are suddenly faced with a business meltdown- due to a surprise impact once they do upgrade, or simply from a lack of testing- are going to find it to be one.

So what knowledge, due to many years of experience with this kind of feature, can the Oracle side of the house offer to the MSSQL community on this?

I think anyone deep into database optimization knows that "duct taping" around a performance problem like this- by moving the compatibility back to the previous version- is fraught with long-term issues.  This is not a unique query or even a few transactional processes being addressed with this fix.  Although this should be a short-term fix before you launch to production, [we hope] experience has taught us on the Oracle side that you end up with databases that exist for years in a different compatibility version than the release version.  Many DBAs have databases that they are creating workarounds and applying one-off patch fixes for, because the compatibility either can't or won't be raised to the release version.  This is a database-level way of holding the optimizer at the previous version.  The WHOLE database.

You’re literally saying, “OK kid, [database], we know you’re growing, so we upgraded you to latest set of pants, but now we’re going to hem and cinch them back to the previous size.”  Afterwards we say, “Why aren’t they performing well? After all, we did buy them new pants!”

So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?

  • No 10,000 foreign key or referential constraints for you, no, you’re back to 253.
  • Parallel update of statistic samples
  • New Cardinality Estimator, (CE)
  • Sublinear threshold for statistics updates
  • A slew of miscellaneous enhancements that I won’t list here.

Statistics Collection

Now there is a change I don't like, but I do prefer how Microsoft has addressed it in the architecture.  There is a trace flag, 2371, that controls, via on or off, whether statistics are automatically updated at about a 20% change in row count values.  This is now on by default with MSSQL 2016 compatibility 130.  If it's set to off, then statistics at the object level aren't automatically updated.  There are a number of ways to do this in Oracle, but it's getting more difficult with dynamic sampling enhancements that put the power of statistics internal to Oracle and less in the hands of the database administrator.  This requires about six parameter changes in Oracle, and as a DBA who's attempted to lock down stats collection, it's easier said than done.  There were still ways that Oracle was able to override my instructions at times.
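
If you're running at an older compatibility level and want that behavior back, the flag can still be turned on instance-wide- a sketch, (test it somewhere safe first):

DBCC TRACEON (2371, -1);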

Optimizer Changes and Hot Fixes

There is also a flag to apply hot fixes, which I think is a solid feature in MSSQL that Oracle could benefit from, (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating the value for it…)  Trace flag 4199 granted the DBA the power to enable new optimizer features but, just like Oracle, with the introduction of SQL Server 2016, this is now controlled with the compatibility mode.  I'm sorry, MSSQL DBAs- it looks like this is one of those features from Oracle that, (in my opinion) I wish would have infected cross-platform in reverse.
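
For reference, opting into the optimizer hot fixes pre-2016 looked like this, (the -1 makes it instance-wide; under compatibility level 130 these fixes are on by default):

DBCC TRACEON (4199, -1);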

As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact that Oracle DBAs have experienced over the multiple releases that optimizer compatibility control has been part of our database world.  We have databases living in the past.  Databases that are continually growing, but can't take advantage of the "new clothes" they've been offered.  Fixes that we can't take advantage of, because we'd need to update the compatibility to do so and the pain of doing so is too risky.  Nothing like being a tailor that can only hem and cinch.  As the tailors responsible for the future of our charges, there is a point where we need to ensure our voices are heard, to ensure that we are not complacent bystanders, offering stability at the cost of watching the world change around us.

Posted in Oracle, SQLServer

August 15th, 2016 by dbakevlar

I’ve been involved in two data masking projects in my time as a database administrator.  One was to mask and secure credit card numbers and the other was to protect personally identifiable information, (PII) for a demographics company.  I remember the pain, but it was better than what could have happened if we hadn’t protected customer data….


Times have changed and now, as part of a company that has a serious market focus on data masking, my role has time allocated to research on data protection, data masking and understanding the technical requirements.

Reasons to Mask

The percentage of companies that contain data that SHOULD be masked is much higher than most would think.


The amount of data that should be masked vs. what is actually masked can be quite different.  There was a great study done by the Ponemon Institute, (that says Ponemon, you Pokemon Go freaks…:)) that showed 23% of data was masked to some level and 45% of data was significantly masked by 2014.  This still left over 30% of data at risk.

The Mindset Around Securing Data

We also don't think very clearly about how and what to protect.  We often silo our security-  the network administrators secure the network, the server administrators secure the host but don't concern themselves with the application or the database, and the DBA may be securing the database while the application that's accessing it may be open to accessing data that shouldn't be available to those involved.  We won't even start about what George in accounting is doing.

We need to change from thinking just of disk encryption and start thinking about data encryption and application encryption with key data stores that protect all of the data-  the goal of the entire project.  It's not like we're going to see people running out of a building with a server, but seriously, it doesn't just happen in the movies: people have stolen drives, jump drives and even printouts of spreadsheets with incredibly important data residing on them.

As I've been learning, what is essential to masking data properly- along with what makes our product superior- is that it identifies potential data that should be masked, along with ongoing audits to ensure that data doesn't become vulnerable over time.


This can be the largest consumer of resources in any data masking project, so I was really impressed with this area of Delphix data masking.  It's really easy to use, so if you don't understand the ins and outs of DBMS_CRYPTO or are unfamiliar with java.util.Random syntax, no worries- the Delphix product makes it really easy to mask data and has a centralized key store to manage everything.
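
For contrast, here's the sort of hand-rolled masking the tool saves you from- a sketch that one-way hashes a hypothetical CUSTOMERS.CARD_NO column, (assumes EXECUTE on DBMS_CRYPTO and a column wide enough for the 40 hex characters SHA-1 produces):

-- hypothetical table/column; note the hash is deterministic, not format-preserving
begin
  update customers
     set card_no = rawtohex(dbms_crypto.hash(
                     utl_i18n.string_to_raw(card_no, 'AL32UTF8'),
                     dbms_crypto.hash_sh1));
  commit;
end;
/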


It doesn’t matter if the environment is on-premise or in the cloud.  Delphix, like a number of companies these days, understands that hybrid management is a requirement, so efficient masking and ensuring that at no point is sensitive data at risk is essential.

The Shift

How many data breaches do we need to hear about to make us all pay more attention to this?  Security topics at conferences have diminished vs. when I started attending less than a decade ago, so it wasn't that long ago that this appeared to be more important to us- and yet it only seems to be becoming a bigger issue.


Research also found that only 7-19% of companies actually knew where all their sensitive data was located.  That leaves over 80% of companies with sensitive data vulnerable to a breach.  I don't know about the rest of you, but upon finishing up that little bit of research, I understood why many feel better about not knowing- and why it's better to just accept this and address masking needs to ensure we're not one of the vulnerable ones.

Automated solutions to discover vulnerable data can significantly reduce risks and reduce the demands on those that often manage the data but don't know what the data is for.  I've always said that the best DBAs know the data, but how much can we really understand it and do our jobs?  It's often the users who understand it, but they may not comprehend the technical requirements to safeguard it.  Automated solutions remove that skill requirement from having to exist in human form, allowing us all to do our jobs better.  I thought it was really cool that our data masking tool considers this and takes this pressure off of us, letting the tool do the heavy lifting.

Along with a myriad of database platforms, we also know that people are bound and determined to export data to Excel, MS Access and other flat-file formats, resulting in more vulnerabilities that seem out of our control.  The Delphix data masking tool considers this and supports many of these applications as well.  George, the new smarty-pants in accounting, wrote out his own XML pull of customers and credit card numbers?  No problem, we got you covered… 🙂


So now, along with telling you how to automate a script to email George to change his password from "1234" in production, I can make recommendations on how to keep him from having the ability to print out a spreadsheet with all the customers' credit card numbers on it and leave it on the printer…:)

Happy Monday, everyone!

Posted in Data Masking, Delphix, Oracle

August 2nd, 2016 by dbakevlar


Anyone who’s anyone knows to search out OakTable World at major events in the US and Europe, and Oracle Open World 2016 is no different!

OakTable World 2016, (#otw16) will be held at the Children's Creativity Museum again this year during the week of Oracle Open World.  The Oak Table members will be discussing their latest technical obsessions and research on Monday and Tuesday, (September 19th-20th).  The truth is, folks-  the Oak Table experts are an AWESOME group, (if I do say so myself! :)) and we could have easily done another day of incredible sessions, but alas, two days is all we have available for this year's event.

This year’s sponsors to make sure the Oakies have a place to rest their weary laptops are no slouches themselves in the technical world:

Delphix, Pythian and Amazon Web Services

Schedule-  A Work in Progress

I'll continue to formalize the schedule as the session titles fill in, and expect a few more ten-minute TED-style talks to be added to the schedule as well.  Each session is 50 minutes, so there will be a 10-minute break between each session in this packed schedule!

The Great Dane

Mogens Norgaard will be opening Oak Table World on Monday, at 8:30am.  Be prepared to be awe-inspired by all he has to share with you, (which may go hand-in-hand with the amount of coffee we can provide to him…)


Location, Location, Location

If you’re unsure of how to get to Oak Table World, I’ve included a map below of San Francisco’s Moscone Center, where Oracle Open World will be.  OTW will be held in the Creativity Theater, which is in the museum behind the carousel, around the Southeast side of the building.

[map: the Moscone Center, San Francisco]

Oak Table World is FREE to the PUBLIC!  We don’t require an Oracle Open World badge to attend, so bring a friend and they’ll owe you big time!

Food + Beer, Yes, I SAID BEER

So currently, instead of the formal breakfast and lunch that's been previously offered, we're going to try something more…"Oakie" and crowd-friendly.  As the Executive Goth Girl, (my own designation) I am going for good coffee and awesome donuts in the mornings to fuel the attendees.  As lunch is offered with registration to OOW for 90%, (or more) of our participants, I'm spending the money that would have gone to lunch- following the requirements for an event per the ABC, (California's Alcoholic Beverage Control)- on beer tastings from some of the local breweries.  We'll have some fantastic munchies if you get hungry, but as we all know, beer is really a carb.  You can thank me later… 🙂

Video Streaming

For those of you wondering if we’ll be doing any recordings of the sessions-  OTN and Laura Ramsey have agreed to do the live streaming for the event, (because they ROCK!!) so be prepared for some seriously impressive content streaming your way from their channel after the event if you’re unable to attend.

I’ll keep this page updated with new information as Oak Table World 16 gets closer and thank you to everyone for their support!  Have questions or ideas?  Email Kellyn at dbakevlar at Gmail.

Posted in DBA Life, Oracle, Social Media

July 18th, 2016 by dbakevlar

For those of you that downloaded and are starting to work with Delphix Express, (because you're the cool kids… :)) you may have noticed that there is an Express Edition Oracle 11g database out there you could use as a Dsource, (a source database for Delphix to clone and virtualize…)


If you’d like to work with this free version with your Delphix Express environment, these are the steps that I performed to allow me to utilize it.  My setup is as follows:

  • VMware Fusion 8
  • Chrome Browser
  • Delphix is upgraded to the latest VM release

Although we start the environment from the Delphix Engine VM, the Target VM contains the discovery scripts/configuration files and the Dsource VM has the 11g XE environment we wish to add.

Configure and Start the Dsource

Log into a terminal session to your DSource VM.  You can login as the delphix user, (default password is delphix) and then ‘su’ over to the oracle user.  You now need to check to see if the XE environment is running:

ps -ef | grep pmon

If the database is up and running, you're done over here on your Dsource, but if it isn't, then you need to start it.

First, you’ll need to set the environment, which is just standard for any database administrator:

Set, (or verify) each of the following environment variables:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe

export ORACLE_BASE=/u01/app/oracle

export ORACLE_SID=XEXE
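
With those set, starting XE is the standard routine- a sketch, assuming the oracle user can connect as SYSDBA:

sqlplus / as sysdba
SQL> startup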

Reconfigure the Engine

Once these check out, you should be able to start the database without any errors, (if you followed my last post, you set it up to be configured as part of the setup.)

Now, by default, these aren't configured as Dsources or Targets, to deter the Dsource and Target VMs from consuming too much space.  Needless to say, you'll need to tell the Delphix engine that it's alright now to use them.

Open up a terminal to the Target for your Delphix Express and get the IP Address:

ifconfig

Take this address and type it into a web browser window and add the port to it:

Example: 172.15.190.129:8000

The landshark configuration file will come up and you'll need to check the following to ensure they are set to true:

[screenshot: the landshark configuration flags set to true]

We need to tell the Delphix discovery script that we want to enable the Dsources, (Source VM) and VDBs, (Target VM) for configuration/discovery, and then which ones we will be working with, (the oracle_xe, which is the XEXE database we checked out on the Dsource VM.)

Remember to submit your changes before exiting the configuration, otherwise you'll just have to do it all over again, and you know how much I hate it when anyone does things more than once! 🙂

Run the Setup Script

Return to the terminal window for the Target VM.  Check and see if any setup scripts are attempting to run:

ps -ef | grep setup

You should only see the following running from the startup and you're good:

[screenshot: the expected process running from the startup]

Run the setup to configure the new Dsources and VDBs to your Delphix Express environment as the delphix OS user on the Target VM:

./landshark_setup.py

You'll note that some of the configuration was completed previously and skipped, but that there are also some additions to your environment now that you've requested these areas be configured.  It doesn't take long, (note the time in the output from the landshark_setup.log):

[screenshot: landshark_setup.log output]

And by Grabthar’s hammer, you’ll have those Dsources now in your Delphix Express environment to work with:

[screenshot: the new Dsources in the Delphix Express interface]

Next post I’ll talk more about the actual cloning and VDBs- I promise… 🙂  Have a good week!

Posted in Delphix Express, Oracle, VMWare

July 11th, 2016 by dbakevlar

Delphix Express offers a virtual environment to work with all the cool features like data virtualization and data masking on just a workstation or even a laptop.  The product has an immense offering, so no matter how hard Kyle, Adam and the other folks worked on this labor of love, there’s bound to be some manual configurations that are required to ensure you get the most from the product.  This is where I thought I’d help and offer a virtual hug to go along with the virtual images…:)


If you're already set on installing and working with Delphix Express, you will find the Vimeo videos on importing the VMs and configuring Delphix Express quite helpful. Adam Bowen did a great job with these videos to get you started, but below I'll go through some technical details a bit deeper, to give folks added arsenal in case they've missed a step or are challenged just starting out with VMware.

Note- Delphix Express requires VMware Fusion, which you can download after purchasing a license, ($79.99) but it's well worth the investment.

Resource Usage

Issue- Not enough memory to run all three VMs required as part of Delphix Express, or, after an upgrade, Delphix Express uses over 6GB.

Different laptops/workstations have different amounts of memory, CPU and space available.  Memory is the most common constraint with today's PCs.  Although the VMs are configured for optimal performance, the target and source environments can have their memory trimmed to 2GB each and still perform when resources are constrained.

The VM must be shut down for this configuration change to be implemented.  After stopping or before starting the VM, click on Virtual Machine, Settings.  Click on Processors and Memory and then you can configure the memory usage via a slider option as seen below:

[screenshot: the memory slider in VMware's Processors & Memory settings]

Move the slider to 2GB or under for the VM in question, then close the configuration window and start the VM.  Perform this for each VM, (the Delphix Engine VM should already be at 2GB.)
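
If you'd rather skip the GUI, the same setting lives in the VM's .vmx file, (your bundle and file names will vary) and can be edited while the VM is down:

memsize = "2048"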

Configuration

Issue- Population of sources and targets is empty after successful configuration.

Checking the Log

After starting the target and source VMs, a UI interface with a command line is opened and you can log in right from VMware.  VirtualBox would require a terminal opened to the desktop, but either way, you can get to the command line interface without using PuTTY or another desktop terminal from your workstation.

On the target VM command line, log in as the delphix user.  The target VM has a python script that runs in the background upon startup that checks for a delphix engine once every minute and, if it locates one, runs the configuration.  You can view it in the cron:

crontab -l
@reboot ..... /home/delphix/landshark_setup.py

It writes to the following log file:

/home/delphix/landshark_setup.log

You can view this file, (or tail it or cat it, whatever you are comfortable doing to view the end of the file…)  I prefer just to view the last ten lines, so I’ll run a command to look at JUST the last ten lines:

tail -10 landshark_setup.log

If the configuration is having issues locating the Delphix engine, it will show in this log file.  Once confirmed, then we have a couple steps to check:

A VMware issue can leave one of the virtual machines not visible to another.  Each VM needs to be able to communicate and interact with the others.  When importing each VM, the ability for the VM to be "host aware" with the Mac may not have been set.  If the delphix engine VM isn't viewable to the target or the source, you can check the log and then verify in the following way.

Click on Virtual Machine, Settings and then click on Network Adapter.  Verify that the top radio option is selected for “Share with my Mac”:

[screenshot: Network Adapter set to "Share with my Mac"]

Verify that this is configured for EACH of the three virtual machines involved.  If this hasn't corrected it and the configuration doesn't populate the virtual environments in the Delphix interface, then it's time to look at the configuration for the target machine.

Get IP Address

While SSH connected to the target machine, type in the following:

ifconfig

Use the IP address shown, (inet address) and open a browser on your PC, adding the port used for the target configuration file, (port 8000 by default):

<ipaddress>:8000

You should be shown the configuration file for your target server that is used to run the delphix engine configuration.  There are options to update the values for different parameters.  The ones you should focus on are:

Environments

linux_source_ip= make sure this matches the source VM’s ip address when you type in “ifconfig”.

Engine

engine_address= ip address for the delphix engine VM when you type in ifconfig on the host

engine_password= should match the password that you updated your delphix_admin user to when you went through the configuration.  Update it to match if it doesn't; I've seen some folks not set it to "landshark" as demonstrated in the videos, so of course the setup fails when the file doesn't match the password set by the user.

Content

oracle_xe = If you set oracle_xe to true, then don't set the 11g or 12c to true.  To conserve workstation resources, choose only one database type.

Once you've made all the changes you want to the page, click on Submit Changes.


You need to run the reconfiguration manually now.  Remember, this runs in the background each minute, but when it does that, you can’t see what’s going on, so I recommend killing the running process and running it manually.

Manual Runs of the Landshark Setup

From the target host, type in the following:

ps -ef | grep landshark_setup

Kill the running processes:

killall landshark_setup.py

Check for any running processes, just to be safe:

ps -ef | grep landshark_setup

Once you’ve confirmed that none are running, let’s run the script manually from the delphix user home:

./landshark_setup.py

Verify that the configuration runs, monitoring as it steps through each step:

[screenshot: landshark_setup.py stepping through the configuration]

This is the first time you've performed these steps, so expect that a refresh won't be performed, but a creation will.  You should now see the left panel of your Delphix Engine UI populated:

[screenshot: the populated left panel of the Delphix Engine UI]

Now we've come to the completion of the initial configuration.  In my next post on Delphix Express, I'll discuss the Dsource and Target database configurations for different target types.  Working with these files and configurations is great practice for learning about Delphix, even if you're amazed at how easy this all was.

If you want to see more about Delphix Express, check the following links from Kyle Hailey or our very own Oracle Alchemist guy, Steve Karam….:)

Posted in Delphix, Delphix Express, Oracle

June 6th, 2016 by dbakevlar

How many times have you had maintenance or a release complete, everyone is sure that everything's been put back the way it should have been, all t's crossed, all i's dotted, and then you release it to the customers only to find out that NOPE, something was forgotten in the moving parts of technology?  As the database administrator, you can do a bit of CYA and not be the one who has to say-


Having the ability to compare targets is a powerful feature in Enterprise Manager 13c, (and 12c, don't feel left out there…:))  The comparison feature is the first of three options that encompass Configuration Management:

[screenshot: the three Configuration Management options]

Building Comparison Baselines

Upon entering Configuration Management in EM13c, you will be offered the option to create a one-time comparison, or use a pre-existing comparison as your base.  You can access the Configuration Management utility via the Enterprise drop down in EM13c Cloud Control:

[screenshot: the Enterprise drop-down in EM13c Cloud Control]

For our example today, and due to the small environment I possess for testing, we're going to compare two database targets.  The Configuration Management utility is fantastic at comparing targets to see if changes have occurred, and I recommend collecting "baseline" templates to have available for this purpose, but know that the tool is also an option for other comparisons, such as:

  • investigating a change in a target.
  • researching drift or consistency

Maintenance Use Case

For our example today, we're going to be working on a CDB; then, as the "Lead DBA", we'll discover changes that weren't reverted as part of our maintenance, using the Configuration Management comparison tool.

We first need to set up the comparison "baseline", so to do this, I'm going to make a copy of the default template, Database Instance Template.  It's just good practice to make copies and leave the locked templates alone, in case we find areas we need to watch for changes that may not have been turned on by default.

Once you enter the main dashboard, click on the bottom icon on the left, which, when highlighted, will show you it's for Templates.

config11

Scroll down till you see Database Instance Template, highlight it and click on Create Like at the top menu.  You will need to name your new copy of the original template.  For mine, I’ve named it CDB_Compare:

config10

Click OK and you will now be brought to the template with all its comparison values displayed.  If there are any areas you want compared immediately, make sure there's a check mark in the box for that change.  For our example, let's say we have a process in this CDB where, when quarterly maintenance is complete, the pluggable database must be brought back up, but sometimes it's a step the DBAs forget to complete.  By default, the configuration template checks for this, but if it didn't, I would place the check mark in the appropriate box and save the template before proceeding.

config13

Now that I have my template ready, I can use it to do a comparison.  On the far left, click on the top icon, (the bar graph) which will take you to the Overview page or the One-Time Comparison Results, both of which offer you an opportunity to create the baseline of the CDB that you want to compare against.

Click on Create Comparison and fill in the following information:

config14

Click on Submit and, as expected, no differences are found, (we just compared the environment against itself using the new CDB_Compare template, which checks everything out) but we now have our baseline.

Perform Maintenance and Compare

Our maintenance has been completed and our database is ready to be released to the users, but we want to verify that only the intended changes were performed and no steps were missed that would hinder it from being ready for production use.

We perform another comparison, this time against our baseline and choose to only show differences-

config15-1024x270

Per the report, the database hasn't been brought back the way we expect, and if we log in via SQL*Plus, we can quickly verify this:

SQL> select name, open_mode from v$database;

NAME OPEN_MODE
--------- --------------------
CDBKELLY READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDBKELLYN MOUNTED
PDBK_CL1 READ ONLY

So instead of mistakenly releasing the database back to the users, we can run the following and know we've verified that we're safe to do so:

ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
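
If you'd like a belt-and-suspenders check before handing the database back, re-running the earlier query should now show the pluggable database open:

SQL> select name, open_mode from v$pdbs where name = 'PDBKELLYN';

NAME OPEN_MODE
------------------------------ ----------
PDBKELLYN READ WRITE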

Well, that’ll save us from having to explain how that was missed…Whew!

Posted in EM13c, Oracle Tagged with: , ,

June 2nd, 2016 by dbakevlar

I know Werner DeGruyter will like the title of this post, so here’s a post dedicated to him as my last week at Oracle is off to a busy start…. 🙂

yoda

As I attempt to wrap up any open tasks at Oracle, I'm still the Training Days 2017 Conference Director for RMOUG, have a planning meeting for the 800-member Girl Geek Dinner Boulder/Denver Meetup group that I own, am designing the booth and building out all the projects for the MakerFaire event at the Denver Science Museum next weekend, and have now taken on the Summer Quarterly Education Workshop at the Denver Aquarium at the end of July.  This is a bit much as I start a new job at the end of June, but there are things that need to be done for community organizations to survive, and often not enough people doing them.

As I know that many other user groups are in the same boat, I come to you with pleading, open arms and say to you, as part of your community: volunteer your time.  If all of us give a little, it really adds up to a lot in the end.  When attendees at conferences, events and meetups ask what happened to this or that group and wonder why they don't have activities any longer, it always seems to boil down to commitment from its volunteers.  If a group isn't fed and cared for with time and attention, it won't survive.  I have this conversation at almost every user group conference, and I hear similar stories from meetup groups and other event groups that you might think are nowhere near related.  It all comes back to the passion and commitment of those involved, along with the support of those who may not be giving as much, but who ensure those that are, are well cared for.

So here are the rules for the survival of a group:

  1. Everyone should do a little, so no one is doing too much and you don't end up losing those valuable resources.  I see volunteer groups and boards of directors that are commonly unbalanced, and that's expected-  we all have lives and demands-  but if it's always like this, with no checks and balances, then it's time for people to step up and put in some time.
  2. If you aren't doing, don't complain.  If you do see something that isn't working well and want it fixed, please be prepared to volunteer some time to help out vs. volunteering others to do the work.  That's just flaky, and do you really want to be that flaky whiner at the event? 🙂
  3. Support the people who volunteer their time to your groups and defend them within an inch of your life.  Without them you don’t have a user group, you don’t have new members, you age out and your user group dies a slow, quiet death.  If you doubt me, just look at the strewn carcasses of user groups that were once impressive specimens in the event world.
  4. There is no magic formula.  What works for one group might not work for another.  There are some very impressive individuals that are working hard to create the newest, shiniest events that attract speakers, sponsors and attendees.  This is how it always will be. The groups that have been around awhile just need to keep updating with the new and improved to stay on top of the game and compete.

RMOUG has an incredible board of directors and our volunteers are SECOND to NONE!!  This has served us well all these years.  I don’t know how I would survive the demands of Training Days if it wasn’t for the volunteers and those on the board that help me when the going gets tough.  I’m quite aware of this need in other user groups as well.

So here's the challenge for those of you out there-  Reach out to your local user group and consider volunteering a little time to one of its events.

  • Help with event check in.
  • Help with setup.
  • Consider sponsorship with your company.
  • Consider helping with the board, (we have members at large on our board, which is a good way to test the waters as a board member.)
  • Look at how to become a member of the board of directors.

Ask the user group or meetup what they could use help with and DO IT.

Here is the list of regional user groups from Oracle and from IOUG.  Find yours and volunteer in your community.  It's worth your time, valuable to your career, and it's the only way these groups can continue to be successful.

Posted in DBA Life, Oracle Tagged with: ,

May 18th, 2016 by dbakevlar

Change is difficult for technical folks.  Our world is always moving at blinding speed, so if you start changing things that we don’t think need to be changed, even if you improve upon them, we’re not always appreciative.

change

Configuration Management, EM12c to EM13c

As requests came in for me to write on the topic of Configuration Management, I found the EM13c documentation very lacking, and had to push back to the EM 12.1.0.5 documentation to fill in a lot of missing areas.  There were also changes to the main interface that you use to work with the product.

When comparing the drop downs, you can see the changes.

config_chng1

Now I’m going to explain to you why this change is good.  In Enterprise Manager 12.1.0.5, (on the left)  you can see that the Comparison feature of the Configuration Management has a different drop down option than in Enterprise Manager 13.1.0.0.

EM12c Configuration Management

You might think it's better to have direct access to Compare, Templates and Job Activity via the drop downs, but all of it is *still directly* accessible-  only the interface has changed.

When you accessed Configuration Management in EM12c, you would click on Comparison Templates and reach the following window:

config_c5

You can see all the templates and access them quickly, but what if you then want to perform a comparison?  Intuition would tell you to click on Actions and then Create.  Unfortunately, this only allows you to create a Comparison Template, not a One-Time Comparison.

To create a one-time comparison in EM12c, you would have to start over: click on the Enterprise menu, then Configuration, then Comparison.  This isn't very user friendly and can be frustrating, even for users who've become accustomed to the interface.

EM13c Configuration Management Overview

EM13c has introduced a new interface for Configuration Management.  The initial interface dashboard is the Overview:

config_c4

You can easily create a One-Time Comparison, a Drift Management definition or Consistency Management right from the main Overview screen.  All interfaces for Configuration Management now include tab icons on the left so that you can easily navigate from one feature of the utility to another.

In EM13c, if you are in the Configuration Templates, you can easily see the tabs to take you to the Definitions, the Overview or even the One-Time Comparison.

config_c6

No more returning to the Enterprise drop down and starting from the beginning to simply access another aspect of Configuration Management.

See?  Not all change is bad… 🙂  If you'd like to learn more about this cool feature, (before I start to dig into it fully with future blog posts) start with the EM12c documentation.  There's a lot more there to help you understand the basics.

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

May 9th, 2016 by dbakevlar

A lot of my ideas for blog posts come from questions emailed to me or asked via Twitter.  Today's blog is no different, as someone in the community asked me the best method of comparing databases using AWR features when migrating from one host and OS to another.

Don't_know_wat

There is a lot of planning that must go into a project to migrate a database to another host or consolidate it to another server, and when we introduce added changes, such as a different OS, new applications, workload or other demands, these need to be taken into consideration.  How do you plan for this, and what kind of testing can you perform to eliminate risk to performance and the user experience once you migrate?

AWR Warehouse

I won't lie to any of you, this is where the AWR Warehouse just puts it all to shame.  The ability to compare AWR data is the cornerstone of this product, and it's about to shine here again.  For a project of this type, it may very well be worth deploying one and loading the AWR data into the warehouse, especially if you're taking on a consolidation.

There are two main comparison reports, one focused on AWR, (Automatic Workload Repository) data and the other on ADDM, (Automatic Database Diagnostic Monitor).

addm_compare1

From the AWR Warehouse, once you highlight a database on the main dashboard, you'll have the option to run either report, and the coolest part of these reports is that you don't just get to compare time snapshots from the same database-  you can compare a snapshot from one database source in the AWR Warehouse to ANOTHER database source that resides in the warehouse!

ADDM Comparison Period

This report is incredibly valuable and offers the comparisons to pinpoint many of the issues that will become the pain points of a migration.  The "just the facts", crucial information about what is different, what has changed and what doesn't match the "base" of the comparison is displayed very effectively.

When you choose this report, you're offered the option to compare from any snapshot interval for the current database, but you can then click on the magnifying glass icon for the "Database to compare to" and change it to any database that is loaded into the AWR Warehouse-

compare2

For our example, we're going to use a one-day difference with the same time window as our Base Period.  Once we fill in these options, we can click Run to request the report.

The report is broken down into three sections-

  • a side-by-side comparison of activity by wait event
  • details of differences, via tabs and tables
  • resource usage graphs, separated by tabs

compare3

Comparing the two periods of activity, we can clearly see that there were more commit waits during the base period, along with more user I/O in the comparison period.  During a crisis situation, these graphs can be very beneficial when you need to show waits to less technical team members.

compare4

The Configuration tab below the activity graphs quickly displays the differences in OS, initialization parameters, host and other external influences on the database.  The Findings tab then goes into the performance comparison differences.  Did the SQL perform better or degrade?  In the table below, the SQL ID is displayed, along with detailed information about the performance change.

Resources is the last tab, displaying graphs for the important area of resource usage.  Was there a difference in impact to CPU usage between one host and the other?

compare6

Was there swapping or other memory issues?

compare7

In our example, we can clearly see the extended data reads, and for Exadata consolidations, the ever-valuable single block read latency is shown-

compare8

Now, for those on engineered systems and in RAC environments, you're going to want to know the waits for the interconnect.  Again, these are simply and clearly compared, then displayed in graph form.

compare9

This report will offer very quick answers to

“What Changed?”

“What’s different?”

“What Happened at XXpm?”

The value this report provides is easy to see, but when it offers to compare one database to another, even when they're on different hosts, you can see how valuable the AWR Warehouse becomes-  this is something even the Consolidation Planner can't offer.

Next post, I'll go over the AWR Warehouse AWR Comparison Period Report.
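
Until then, if you'd like a command-line taste of period comparisons, the database itself ships an AWR Compare Periods script that prompts you for the two snapshot ranges, (this is plain SQL*Plus against a single database's AWR, though-  none of the warehouse's cross-database magic):

SQL> @?/rdbms/admin/awrddrpt.sql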

Posted in ASH and AWR, AWR Warehouse, Oracle Tagged with: , , ,

May 4th, 2016 by dbakevlar

The OMS Patcher is a newer patching mechanism specifically for the OMS, (I know, the name kind of gave it away…)  Although there are a number of similarities to Oracle's infamous OPatch, I've been spending a lot of time on OTN's support forums and via email, assisting folks as they apply the first system patch to 13.1.0.0.0.  Admit it, we know how much you like patching…

nooo

The patch we'll be working with is system patch 22920724.

Before undertaking this patch, you’ll need to log into your weblogic console as the weblogic admin, (you do still remember the URL and the login/password, right? :))  as this will be required as part of the patching process.
Once you’ve verified this information, you’ll just need to download the patch, unzip it and read the README.txt to get an understanding of what you’re patching.
Per the instructions, you’ll need to shut down the OMS (only).
./emctl stop oms
Take the time to ensure your environment is set up properly.  The ORACLE_HOME will need to be switched over from the database installation home, (if the OMS and OMR are sharing the same host, the ORACLE_HOME is most likely set incorrectly for the patch requirements.)
As an example, this is my path environment on my test server:
/u01/app/oracle/13c/bin <- location of the bin directory for my OMS executables.
/u01/app/oracle/13c/OMSPatcher/omspatcher <- location of the OMSPatcher executable.

$ORACLE_HOME should be set to the OMS home, and an omspatcher variable set to point at the OMSPatcher executable:

export omspatcher=$OMS_HOME/OMSPatcher/omspatcher
export ORACLE_HOME=/u01/app/oracle/13c
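
Before going further, a quick sanity check of the setup is worth the few seconds, (OMSPatcher supports a version command, similar to OPatch):

$omspatcher version

If that prints the OMSPatcher version rather than "command not found", your environment variables are in good shape.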
If you return to the README.txt, you’ll be there awhile, as the instructions start to offer you poor advice once you get to the following:
$ omspatcher apply -analyze  -property_file <location of property file>
This command will result in a failure on the patch and annoy those attempting to apply it.

I'd recommend running the following instead, which is a simplified command and will result in success if you've set up your environment properly:

omspatcher apply <path to your patch location>/22920724 -analyze

If this returns with a successful test of your patch, then simply remove the “-analyze” from the command and it will then apply the patch:

omspatcher apply <path to your patch location>/22920724

You’ll be asked a couple of questions, so be ready with the information, including verifying that you can log into your Weblogic console.

  • Verify that the Weblogic domain URL and username are correct, or type in the correct values, then enter the weblogic password.
  • Choose to apply the patch by answering "Y".
  • The patch should proceed.
The output of the patch will look like the following:

OMSPatcher log file: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

Please enter OMS weblogic admin server URL(t3s://adc00osp.us.oracle.com:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>

Do you want to proceed? [y|n]
y
User Responded with: Y

Applying sub-patch "22589347 " to component "oracle.sysman.si.oms.plugin" and version "13.1.1.0.0"...

Applying sub-patch "22823175 " to component "oracle.sysman.emas.oms.plugin" and version "13.1.1.0.0"...

Applying sub-patch "22823156 " to component "oracle.sysman.db.oms.plugin" and version "13.1.1.0.0"...

Log file location: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

OMSPatcher succeeded.

Note the sub-patch information.  It's important to know that this is contained in the log, for if you need to roll back a system patch, it must be done via each sub-patch, using the identifiers listed here.

If you attempt to roll back the system patch using the system patch identifier, you'll receive an error:

$ /u01/app/oracle/13c/OMSPatcher/omspatcher rollback -id 22920724 -analyze
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation. All rights reserved.

......
"22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.

OMSRollbackSession failed: "22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.
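
Instead, roll back each sub-patch individually, using the identifiers captured in the deploy log above.  As a sketch, using the first sub-patch from my run, (do an -analyze pass first, then run it again without -analyze to perform the actual rollback):

$OMS_HOME/OMSPatcher/omspatcher rollback -id 22589347 -analyze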

Once the system patch has completed successfully, you'll need to apply the agent patch.  Best practice is to use a patch plan to apply it to one agent, make that the current gold agent image, and then apply it to all the agents subscribed to the image.  If you need more information on how to use Gold Agent Images, just read up on it in this post.

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 27th, 2016 by dbakevlar

Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager?  I mean, it’s a GUI, (graphical user interface) so the logs should be unnecessary to the administrator.”

luke

You just explained why we receive so many emails from database experts stuck on issues with EM, thinking it's "just a GUI".

Log Files

Yes, there are a lot of logs involved with Enterprise Manager.  With the introduction of the agent back in EM10g, there were more, and with the weblogic tier in EM11g, we added more.  EM12c added functionality never dreamed of before and, with it, MORE logs.  But don't despair, because we've also tried to streamline those logs, and where we weren't able to streamline, we at least came up with a directory path naming convention that saves you from having to search for information so often.

The directory structure for the most important EM logs is the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.

Now, in many threads on Oracle Support and in blogs, you'll hear about the emctl.log, but today I'm going to spend some time on the emoms properties, trace and log files.  The EMOMS naming convention is just what you'd think it's about-  the Enterprise Manager Oracle Management Service, aka EMOMS.

The PROPERTIES File

After all that talk about logs, we're going to jump into the configuration files first.  The emoms.properties file is a couple of directories over, in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.

Now, in EM12c, this file, along with the emomslogging.properties file, was very important to the configuration of the OMS and its logging-  without it, we wouldn't have any trace or log files, or at least the OMS wouldn't know what to do with the output data it collected!  If you look in the emoms.properties/emomslogging.properties files for EM13c, you'll see the following header:

#NOTE
#----
#1. EMOMS(LOGGING).PROPERTIES FILE HAS BEEN REMOVED

Yes, the file is simply a place holder and you now use EMCTL commands to configure the OMS and logging properties.

There are, actually, very helpful commands listed in the property file telling you HOW to update your EM OMS properties!  Know that if you can't remember an emctl property command, this is a good place to look for the command and its usage.
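
As a quick illustration of the pattern, (the property name below is purely an example-  check the file header or the documentation for the ones you actually need) getting and setting an OMS property looks like this:

$OMS_HOME/bin/emctl get property -name "em.notification.emails_per_minute"
$OMS_HOME/bin/emctl set property -name "em.notification.emails_per_minute" -value "75"

You'll be prompted for the SYSMAN password unless you pass it along with the command.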

The TRACE Files

Trace files are recognized by any DBA.  These files trace a process, and the emoms*.trc files are the trace files for EM OMS processes, including the one for the Oracle Management Service.  Know that a "warning" isn't always something to be concerned about.  Sometimes it's just letting you know what's going on in the system, (yeah, I know, shouldn't they just classify that INFO then?)

2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started

These files do contain more information than the standard log file, but it may be more than what a standard EM administrator is going to search through.  They're most helpful when working with MOS, and I recommend uploading the corresponding trace files if there's a log that support has narrowed in on.

The LOG Files

Most of the time, you're going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well.  If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you're investigating.

The format of the logs is important to understand, and I know I've blogged about this in the past, but we'll just do a quick, high-level review.  Take the following entry:

2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info

We can see that the log entry starts with a timestamp, then the thread, the status, (ERROR, WARN, INFO) the module and the error message.  This simplifies reading these logs, and knowing the layout helps if you need to parse them into a log analysis program.
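
Because the severity lands in a predictable spot in each entry, even a simple grep gives you quick triage, (the path assumes the standard EM13c layout discussed above):

grep " ERROR " $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log/emoms.log | tail -20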

There are other emoms log files as well, specializing in loader processing and startup.  Each of these commonly contains more detailed information about the data it's in charge of tracing.

If you want to learn more, I’d recommend reading up on EM logging from Oracle.

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,

April 20th, 2016 by dbakevlar

How much do you know about the big push to BI Publisher reports from Information Publisher reporting in Enterprise Manager 13c?  Be honest now, Pete Sharman is watching…. 🙂

george

I promise, there won't be a quiz at the end of this post, but it's important for everyone to start recognizing the power behind the new reporting strategy.  Pete was the PM over the big push in EM13c and has a great blog post with numerous resource links, so I'll leave the quizzing to him!

IP Reports are incredibly powerful and I don’t see them going away soon, but they have a lot of limitations, too.  With the “harder” push to BI Publisher with EM13c, users receive a more robust reporting platform that is able to support the functionality that is required of an IT Infrastructure tool.

BI Publisher

You can access the BI Publisher in EM13c from the Enterprise drop down menu-

bippub4

There’s a plethora of reports already built out for you to utilize!  These reports access only the OMR, (Oracle EM Management Repository) and cover numerous categories:

  • Target information and status
  • Cloud
  • Security
  • Resource and consolidation planning
  • Metrics, incidents and alerting

bipub3

Note: Please be aware that the license for BI Publisher included with Enterprise Manager only covers reporting against the OMR, not any other targets DIRECTLY.  If you decide to build reports against data residing in targets outside the repository, each of those targets will need to be licensed.

Many of the original reports that were converted over from IP Reports were done so by a wonderful Oracle partner, Blue Medora, who are well known for their VMware plugins for Enterprise Manager.

BI Publisher Interface

Once you click on one of the reports, you'll be taken from the EM13c interface to the BI Publisher one.  Don't panic when the screen changes-  it's supposed to do that.

bipub4

You'll be brought to the Home page, where you'll have access to your catalog of reports, (it will mirror the reports in the EM13c reporting interface) the ability to create New reports, the ability to open reports you have drafts of or that are local to your machine, (not uploaded to the repository) and authentication information.

In the left-hand side bar, you'll find menu options that duplicate some of what is in the top menu, along with access to tips to help you get more acquainted with BI Publisher-

bipub7

This is where you’ll most likely access the catalog, create reports and download local BIP tools to use on your desktop.

Running Standard Reports

Running a standard, pre-created report is pretty easy.  This is a report that's already had its template format created for you and its data sources linked.  Oracle has tried to create a number of reports in the categories it thought most IT departments would need, but let's just run a couple to demonstrate.

Let's say you want to know about Database Group Health.  Now, there's not a lot connected to my small development environment, (four databases, three in the Oracle Public Cloud and one on-premise) and this report is currently aimed at my EM repository.  This limits the results, but as you can see, it shows the current availability, the current number of incidents and compliance violations.

bipub1

We could also take a look at what kinds of targets exist in the Enterprise Manager environment:

bipub11
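
Under the covers, a report like this boils down to a data model query against the repository views.  If you decide to build your own, a minimal sketch against the documented MGMT$TARGET view might look like the following, (your report will likely want different columns and joins):

select target_name, target_type, host_name
  from sysman.mgmt$target
 order by target_type, target_name;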

Or who has powerful privileges in the environment:

bipub10

Now, these are just a couple of the dozens of reports available to you that can be run, copied, edited and sourced for your own environment's reporting needs out of BI Publisher.  I'd definitely recommend that if you haven't checked out BI Publisher, you spend a little time on it and see how much it can do!

Posted in EM13c, Enterprise Manager, Oracle Tagged with: , ,
