Category: Oracle

September 21st, 2017 by dbakevlar

The network has often been viewed as “no man’s land” for the DBA.  Our tools may identify network latency, but rarely do they go into any detail, designating the network as outside our jurisdiction.

As we work through data gravity, i.e. the weight of data and the pull of applications, services, etc. toward data sources, we have to inspect what connects them to the data and slows everything down.  Yes, the network.

We can’t begin to investigate the network without spending some time on Shannon’s law, also known as the Shannon-Hartley theorem.  The equation gives the maximum capacity, (transmission bit rate) that can be achieved over a given channel with certain noise characteristics and bandwidth.

This theorem has roots going back quite some time in the telephony world; a related concept for increasing the capacity of transmission lines was first patented in 1903 by W. M. Miner.  Over the years, multiplexing and quantizers were introduced, but the main computation has stayed the same:

  • A given communication, (or data) system has a maximum rate of information C known as the channel capacity
  • If the transmission information rate R is less than C, then the data transmission in the presence of noise can be made to happen with arbitrarily small error probabilities by using intelligent coding techniques
  • To get lower error probabilities, the encoder has to work on longer blocks of signal data. This entails longer delays and higher computational requirements

In layman’s terms-  the data is only going to go as fast as it can without hitting an error threshold.
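For reference, the theorem boils down to a single equation relating capacity, bandwidth and noise:

C = B * log2(1 + S/N)

  • C is the channel capacity, (the maximum transmission bit rate) in bits per second
  • B is the bandwidth of the channel in hertz
  • S/N is the signal-to-noise ratio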

As DBAs, we always inspect waits in the form of latency, and latency is really just a measure of time; you should always tune for time or you’re just wasting it.  Latency is the closest measure we have to speed when you compare it to the distance between two points, which, when discussing a data source and an application, are your two points.  This is where it gets interesting.  Super low latency networks aren’t necessarily huge bandwidth networks; InfiniBand, which is common in engineered systems like Exadata, is a good example.  In comparison, standard networks can carry much higher volume, but they can’t talk as “fast” on a packet-by-packet basis.  These types of networks compete by providing extensive parallel lanes, but as we know, an individual lane simply won’t be able to keep up.

Now I’m not going to go into the further areas of this theorem, including the Shannon limit, but the network, especially with the introduction of the cloud, has reared its ugly head as the newest bottleneck.  There’s a very good reason cloud providers like AWS have come up with Snowmobile.  On every cloud project I’ve been on, the network has had a significant impact on its success.  My advice to all DBAs is to enhance your knowledge with information about networking.  If you didn’t respect your network administrator before, you will after you do a little research… 🙂 It will serve you well as you embrace the cloud.

Posted in Cloud, DBA Life, Oracle, SQLServer

September 20th, 2017 by dbakevlar

As we come upon Oracle Open World at the end of the month, I’m busy with a number of events and tasks.

I spoke at the Microservices, Containers and DevOps Summit in Denver yesterday and will be traveling to San Diego, California to speak at SQL Saturday #661 this weekend.  I love the Microsoft events, but not sure my family loves the loss of my weekend time with them half as much.

At the end of the month, Delphix will be publishing my DevOps book, based on my series of blog posts and webinars on DevOps and the DBA.  It will be available as an eBook on Delphix’s website, and a hardcopy will be available at a few events in the coming year, including at Oak Table World 2017 at Oracle Open World.  If you do have the opportunity to read the book, I would love some feedback for the upcoming second edition, where I really want to get more into the specifics of how to do DevOps with database development.  Most books focus on the application tier, while most DevOps environments scope across the app, database and OS tiers.

I’m also looking forward to Oracle Open World this year.  I’m finished with my Microsoft PASS Summit slides and presentation, but still working on the ones for Oracle Open World!  You’d think with Summit a month out after OOW, it would be the other way around, but that’s just not how it’s played out.  This year at OOW, I’m presenting on Super Sunday with Mike Donovan from DBVisit.  We’re going to be virtualizing and replicating to the cloud with our demo, so it’s been a lot of fun creating the demo environment.  Our story is intact, it’s just a matter of getting the time and a common timezone to put everything together, (Mike is in New Zealand…:))

I’ll also be presenting at the Oracle Women in Tech event on Sunday.  Thanks to Laura Ramsey, who’s ensured this has taken more precedence at Oracle; last year we had a great panel session in the midst of the chaos!  Laura’s the bomb and has made sure the importance of including women in the Oracle technical industry is shared with the community at large.

On Monday and Tuesday, I’ll spend most of my time at Oak Table World, which, as many of you know, is the event we Oakies put on next door to the big event, showcasing the sessions that may not be of interest to Oracle but are of interest to us geeks.  We’ll be at the Children’s Creativity Museum next to the Carousel, and Jeremiah Wilton has done an awesome job organizing this year’s event.  Having done it last year, I know how much work it is and I don’t envy him, so if you see him, make sure to give him a pat on the back!

As always, there are a ton of events, parties and get togethers at Oracle Open World.  It is the largest annual Oracle event in the world.  I look forward to seeing many friends and if you are one of those that won’t be able to make it this year, know that you will be missed…:(

 

Posted in Oracle

September 15th, 2017 by dbakevlar

I planned on finishing up and publishing a different post today, but after a number of conversations with DBAs in the community, this topic took precedence.

Yesterday, along with the Oracle earnings call, the following announcement came out:

“In a couple of weeks, we will announce the world’s first fully autonomous database cloud service,” said Oracle Chairman and CTO, Larry Ellison. “Based on machine learning, the latest version of Oracle is a totally automated “self-driving” system that does not require human beings to manage or tune the database. Using AI to eliminate most sources of human error enables Oracle to offer database SLA’s that guarantee 99.995% reliability while charging much less than AWS.”
For those of us DBAs that have been around a while, we know the idea that you don’t need a DBA has been around since Oracle 7, the “self-healing database”.  I received numerous emails and messages, and even a phone call, from folks in the community concerned about the announcement, along with the rebranding of Oracle Technology Network to Oracle Development Community.

The Role is Changing

I’ve been presenting on the topic of the changing role of the DBA for almost a year now.  It’s not an unknown, folks.  We all realize that the set of influencers has broadened, and for cloud initiatives, what better way to introduce the cloud into the business than to give it to developers and let them build out environments that require very little input or guidance from operations?

The demand on the development cycle has increased exponentially.  None of us can deny that, and as much as we know data friction is the cause of much of the delay, many times we DBAs felt fulfilled by that friction.  The best DBAs all have a few control issues, and the ability to control the environment was essential to ensuring stability and consistency of design, platform, etc.  We may not like to hear it, but it’s just how it is.  It was part of my job to slow things down when I noted issues that could impact uptime, performance or accessibility.

With the introduction of cloud, particularly SaaS, the need for operations, (DBA, administrative and network support) is little to none.  IaaS is still a powerful opportunity for the business to retain more control over the environment and expertise, but it’s going to look awfully attractive to companies to pay more monthly for a SaaS offering when the business is sold on the idea, “You won’t need all those DBAs, server and network administrators any longer.”  The extra cost, when billed monthly, is easier to “swallow” than annual licensing fees and salaries.

DBAs are now being advised to look to cloud providers to utilize their skills, or to move into a development role.  I’ve been proposing a third option of DevOps, as I’ve stated time and time again.  The DBA role is changing and we just need to admit that we’re not needed for much of what we once were.
Or are we?

History

Oracle is not the only one to claim that you won’t need a DBA, nor have they done it as successfully as Microsoft did with SQL Server.  At the release of SQL Server 7.0, there was a marketing push stating that a DBA was no longer necessary.  Windows admins and developers were installing SQL Server and taking over management of the database tier.  It resulted in a temporary lull in available positions for SQL Server DBAs, and as I worked for a company whose largest environment resided on SQL Server, I remember interviewing candidates who couldn’t tell me what an “LSN” was, how lock escalation worked or even how data was written to the SQL Server transaction log.  We quickly exhausted candidates while searching for a qualified DBA.  Luckily, Microsoft learned the error of their ways, and within two years all those databases installed by non-DBAs came to a breaking point and DBAs were re-introduced in mass quantity.  The SQL Server DBA community is thriving and has some incredibly skilled administrators that understand the database engine, design and code.

I foresee this as a potential scenario for the Oracle database community now, and here is why.

Clouding Your Way Out of a Software Problem

Any DBA who specializes in optimization knows that hardware offers around 15% overall opportunity for improvement.  My favorite quote from Cary Millsap, “You can’t hardware your way out of a software problem”, is quite fitting, too.  A hardware upgrade can offer a quick increase in performance, only for the problem to seemingly return after a period of time.  As we’ve discussed in previous posts, the natural life of a database is growth:  growth in data, growth in processing, growth in users.  This growth requires more resources, and if the environment is not performing as optimally and efficiently as possible, more resources will always be required.
When we attack an optimization project at the code, design and application level, we have over an 80% opportunity for improvement.  The improvements are long term and can serve future projects, as well.
Having this information in our pockets, let’s now add the reality of the cloud.  Depending on the provider, we purchase compute and storage “packages” to serve the needs of the environment.  It may not be the most optimal configuration, but if we need more, it’s easy enough to scale up.  All of this comes with a cost.  Anyone who thinks they’re going to save money going to the cloud really shouldn’t make that the reason for going.  One of the largest surprises for companies who migrated to the cloud early on was that, although the monthly costs looked promising, the overall annual cost realized no savings and often increased expenditures.  The benefit of the cloud is easy access and easy scaling when needed; for most primary systems, it won’t result in savings.  The real question is, “should you scale and cost the business money just because it’s there?”
As I stated earlier, you can’t hardware your way out of a software problem, but can you cloud your way out of a software problem?  Cloud vendors will be all too happy to scale you to the next higher cloud package offering.  They will be very happy to do it transparently for you, but this will be a required, regular process if you don’t have someone optimizing your environment.
Are you really expecting your developers to be skilled in identifying performance issues and optimization of code and design? 

Long Live the DBA

Developers are expected to do more, in a shorter cycle and with less, every day.  Agile is here, and with the introduction of DevOps, there is structure around agile development to demand even more from them.  The skills and the depth of their development knowledge are already vast, and they will be stretched just fulfilling the demands of standard development tasks.

This will result in a high demand for DBAs’ knowledge of the database engine, the optimizer and how to optimize environments.  Those DBAs with advanced skills in these areas will have plenty of work to keep them busy, and if Larry is successful with the bid to rid companies of their DBAs for a period of time, they’ll be very busy cleaning up the mess afterwards.

Posted in DBA Rants, Oracle

September 13th, 2017 by dbakevlar

It was a really busy summer and ended with me returning after a week of vacation in Singapore.  What should I do after a 17hr flight and jet lag?  Two webinars and a SQL Saturday event!  What better way to get over jet lag and get my game back on and just jump back in!

I started out by giving a webinar this morning on “DBA to DevOps to DataOps- the Revolution.”  I had a feeling that with the jet lag I’d be done faster than I’d hoped, but with the number of questions from the over 400 attendees, it was an awesome hour with everyone.  I focused on the important topic of data gravity and how the role of the DBA can evolve to be more productive for the business.

There were reference links that I knew were important and the PDF slide deck doesn’t include them, so please refer to the links below to catch up with all the Delphix blog posts I’ve written on this topic:

Blog Posts-

FYI-  there are two more blog posts that will be published shortly on delphix.com, so stay tuned for those.

Webinar Recordings

On Thursday, I’ll be presenting “The DBA Diaries” with Oracle, focused on the cloud.  It should be a great conversation on where DBAs are in the scheme of the cloud and how our role is evolving.

To round up the week, I’ll be presenting at SQL Saturday Denver, my local SQL Saturday event for the SQL Server community!  Delphix is sponsoring this awesome event and I’m looking forward to presenting, (as is Tim at this event.)

Sunday–  I SLEEP!  No, I lie… I’ll be uploading all my code, video and content for ODTUG’s Geekathon.   Then I sleep. 🙂

 

Posted in DBA Life, Oracle

September 5th, 2017 by dbakevlar

Data gravity and the friction it causes within the development cycle is an incredibly obvious problem in my eyes.

Data gravity suffers from the von Neumann bottleneck, a basic limitation on how fast computers can be.  Pretty simple: it states that the speed at which data can move between where it resides and where it’s processed is the limiting factor in computing speed.

OLAP, DSS and VLDB DBAs are constantly in battle with this challenge:  how much data is being consumed by a process, how much must be brought from disk, and will the processing required to create the results end up “spilling” to disk vs. completing in memory?
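As a rough illustration, Oracle exposes this spill-to-disk behavior in V$SQL_WORKAREA; a quick check for sort and hash operations that couldn’t complete in memory might look something like this:

-- work areas that had to spill to temp (one-pass or multi-pass)
select operation_type,
       policy,
       optimal_executions,
       onepass_executions,
       multipasses_executions,
       max_tempseg_size
  from v$sql_workarea
 where onepass_executions > 0
    or multipasses_executions > 0;

Anything showing one-pass or multi-pass executions paid the price of moving work area data to and from temp, which is data gravity in miniature.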

Microsoft researcher Jim Gray spent most of his career looking at the economics of data, which is one of the most accurate terms for this area of technical study.  He started working at Microsoft in 1995 and, although passionate about many areas of technology, his research on large databases and transactional processing speeds is held in great respect in my world.

Now some may say this has little to do with being a database administrator, but how many of us spend significant time on the cost-based optimizer?  Moving or retrieving data has a cost, so economics of data it is.

And this is the fundamental principle of data gravity and why DBAs get the big bucks.

If you’re interested in learning more about data gravity, DevOps and the future of DBAs, register for the upcoming webinar.

Posted in big data, Database, DBA Life, Delphix, Oracle, SQLServer

August 30th, 2017 by dbakevlar

Delphix Engineering and Support are pretty amazing folks.  They continue to pursue solutions, no matter how much time it takes, despite the complex challenges of supporting heterogeneous environments, hardware configurations and customer needs.

This post is in support of the effort from our team that brought stability to a previously impacted Solaris 11.2 cluster configuration.  The research, patching, testing and resulting certification from Oracle was a massive undertaking from our team, and I hope this information serves the community, but it is in no way a recommendation from Delphix.  It’s simply what was done to resolve the problem, after logical decisions by our team about how the system is used.

Challenge

Environment:  Solaris 11.3 (with SRU 17.5) + Oracle 12.2 RAC + ESX 5.5
Situation:
After the upgrade to 12.2, environments were experiencing significant cluster instability and memory starvation due to the new memory demands that came with the upgrade.
Upon inspection, it was found that numerous features required more memory than before and the system simply didn’t have the resources to support it.  As ours was a Solaris environment running 12.2, there was a documented patch we needed to request from Oracle for RAC performance and node evictions.  Even with that patch, the environment was still experiencing node evictions, and the data showed that we’d have to triple the memory on each node to continue using the environment as before.  Our folks aren’t ones to give up that easily, so secondary research was performed to find out whether some of the memory use could be trimmed down.
What we discovered is that what is old can become new again.  My buddy and fellow Oakie, Marc Fielding, had blogged, (along with links to other posts, including credit to another Oakie, Jeremy Schneider) about how he’d limited resources back in 2015 after patching to 12.1.0.2, and this post really helped the engineers at Delphix get past the last hump on the environment, even after implementing the patch to address a memory leak.  Much of what you’re going to see here came from that post, focused on its use in a development/test system, (Delphix’s sweet spot.)

Research

Kernel memory out of control
Starting with kernel memory usage, the mdb -k command can be used to inspect it at a percentage level:
$ echo "::memstat" | mdb -k
Page Summary            Pages          MB    %Tot
------------           ------        -----   ----
Kernel                 151528         3183    24%
Anon                   185037         1623    12%
...

We can also look at it a second way, breaking down the kernel memory caches with kmastat:

::kmastat

cache                        buf    buf    buf    memory     alloc alloc 
name                        size in use  total    in use   succeed  fail 
------------------------- ------ ------ ------ --------- --------- ----- 
kmem_magazine_1               16   3371   3556     57344      3371     0 
kmem_magazine_3               32  16055  16256    524288     16055     0 
kmem_magazine_7               64  29166  29210   1884160     29166     0 
kmem_magazine_15             128   6711   6741    876544      6711     0 
...

Oracle ZFS ARC Cache

Next- Oracle ZFS has a very smart cache layer, also referred to as the ARC, (Adaptive Replacement Cache).  Both a blessing and a curse, the ARC consumes as much memory as is available, but is supposed to free up memory for other applications when it’s needed.  This memory is used to make up for slow disk I/O.  When inspecting our environment, a significant amount was being over-allocated to the ARC.  This may be due to the newness of Oracle 12.2, but in a cluster, memory starvation is a common cause of node eviction.

We can inspect the size stats for the ARC in the following file:

view /proc/spl/kstat/zfs/arcstats

That path comes from ZFS on Linux, where the ARC stats are exposed through the SPL kstat interface under /proc, so your arcstats may live in a different location, (on Solaris you can pull the same counters with kstat, as shown just after this list.)  Wherever you read them from, review the following values:

  • c is the target size of the ARC in bytes
  • c_max is the maximum size of the ARC in bytes
  • size is the current size of the ARC in bytes
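On Solaris itself, a minimal way to pull the same ARC counters is with kstat, (adjust the instance number if needed for your system):

$ kstat -p zfs:0:arcstats:size
$ kstat -p zfs:0:arcstats:c
$ kstat -p zfs:0:arcstats:c_max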

Ours was eating up everything that was left, taking 100% of the remaining memory, as we’ll discuss in the next section of this post.

Oracle Clusterware Memory

The Oracle Clusterware was a third area investigated for frivolous memory usage that could be trimmed down.  There are some clearly documented steps from Oracle for investigating misconfigurations and feature issues that can assist in identifying many of these.

So, post upgrade and patching, what can you do to trim down memory usage to avoid memory upgrades to support the cluster upgrade?

Changes

Of the features and installations that weren’t offering a benefit to a development/test environment, these are the ones that made the cut and why:
Updates were made to the /etc/system file, (this requires a reboot and must be performed as root):
  • Added set user_reserve_hint_pct=80
    • This change was made to limit how much memory ZFS can claim for the ARC cache, (the setting reserves the stated percentage of memory for applications.)  There was a significant issue for the customer when CRS processes weren’t able to allocate memory.  80% was the highest this could be set to without a node reboot being experienced, something we all prefer not to happen.
  • Stopped the Cluster Health Monitor, (CHM) process.  This Clusterware background process collects OS workload data, which is significantly more valuable in a production environment, but in development and test?  It can easily be a drain on CPU and memory that could be better put to use for more virtual databases, (a quick verification is shown after this list.)
  •  To perform this, the following commands were used as the root user:
$ crsctl stop res ora.crf -init
$ crsctl delete res ora.crf -init
  • Removed the Trace File Analyzer Collector, (TFA, managed via tfactl).  This background process collects the many trace files Oracle generates into a single location.  Handy for troubleshooting, but it’s Java-based, has a significant memory footprint and is subject to Java heap issues.
  • It was uninstalled with the following command as the $ORACLE_HOME owner on each node of the cluster:
$ tfactl uninstall
  • Engineering stopped and disabled the Cluster Verification Utility, (CVU).  In previous versions this was a utility that could be manually added to the installation or run post-install by an admin to troubleshoot issues.  This is another feature that simply eats up resources that could be reallocated to dev and test environments, so it was time to stop and disable it with the following:
$ srvctl stop cvu
$ srvctl disable cvu
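If you want a quick sanity check afterwards, something like this will confirm the CHM resource really is gone and that CVU is disabled:

$ crsctl stat res ora.crf -init
$ srvctl status cvu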

Additional Changes

  • Reduced memory allocation for the ASM instance.
    • The ASM instance in 12.2 now uses 1GB of memory, where it previously used 256MB.  That’s a huge change that can impact other features dependent on that memory.
    • Upon research, it was found that 750MB was adequate, so if more memory reallocation is required, consider lowering the memory on each node to 750MB.
  • To make this instance-level parameter change, run the following on any one of the nodes, then restart each node in turn until the whole cluster has been cycled to put the change into effect, (a quick check follows the commands below):
$ export ORACLE_HOME=<Grid Home>

$ export ORACLE_SID=<Local ASM SID>

$ sqlplus / as sysasm
alter system set "_asm_allow_small_memory_target"=true scope=spfile;
alter system set memory_target=750m scope=spfile;
alter system set memory_max_target=750m scope=spfile;
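Once the cluster has been cycled, a simple sanity check on any node, (nothing fancy, just confirming the spfile change took effect) is:

$ sqlplus / as sysasm
show parameter memory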

Features with high CPU usage can be troubling for most DBAs, but when they show up on development and test databases, which are often granted fewer resources than production to begin with, a change can often enhance the stability and longevity of these environments.

  • Disabled high-res time ticks in all databases, including ASM DBs, regular DBs, and the Grid Infrastructure Management Repository DB (GIMR, SID is -MGMTDB).  High-res ticks are a new feature in 12c, and they seem to cause a lot of CPU usage from cluster time-keeping background processes like VKTM.  Here’s the SQL to disable high-res ticks (must be run once in each DB):
alter system set "_disable_highres_ticks"=TRUE scope=spfile;
The team, after all these changes, found the Solaris kernel was still consuming more memory than before the upgrade, but it was more justifiable:
  • Solaris Kernel: 1GB of RAM
  • ARC Cache: between 1-2GB
  • Oracle Clusterware: 3GB

Memory Upgrade

We did add memory, but not as much as we expected to.
After all the adjustments, we were still using over 5GB of memory for these three areas, so we upped each node from 8GB to 16GB to ensure there were enough resources to support all dev and test demands post upgrade.  We wanted to provision as many virtual databases, (VDBs) as the development and test groups needed, so having more than 3GB free for databases was going to be required!
The Solaris cluster, as of this writing, has experienced no more kernel panics, node evictions or unexpected reboots, which we have to admit is the most important outcome.  It’s far more difficult to explain an outage to users than it is to explain to Oracle why we shut down and uninstalled unused features…. 🙂

Posted in Delphix, Oracle

August 24th, 2017 by dbakevlar

There was a great post by Noel Yuhanna on how he estimates the number of DBAs required in a database environment by size and number of databases.  That staffing challenge has created a situation where data platforms are searching for ways to remove the roadblock and eliminate the skills needed to manage the database tier.

I’ve been a DBA for almost two decades now, (“officially”, as my start date with an official title and my years of experience differ by a couple of years…)  When I started, we commonly used import and export utilities to move data from one data source to another.  Tim Hall just wrote up a quick review of enhancements to Oracle 12.2 DataPump, but I felt dread as I read it, realizing that it continues to hold DBAs back against the challenge of data gravity.

Data movement utilities may go through updates over the years and they do have their purpose, but I don’t feel the common challenge they’re being used for today is the right one.  Taking my two primary database platforms as an example:  for Oracle, we went from Import/Export to SQL*Loader to DataPump, and for SQL Server, we went from BCP, (Bulk Copy Program) to Bulk Insert to a preference for the SQL Server Import/Export Wizard.

Enhancements?

Each of these products has introduced GUI interfaces, (as part of a supporting or third-party product), pipe functions, parallel processing and other enhancements.  They’ve added advanced error handling and automatic restart.  Oracle introduced the powerful transportable tablespaces and SQL Server went after filegroup moves, (very similar concepts, grouping objects by logical and physical naming to ease management.)

Now, with the few enhancements that have been added to data movement utilities, I want you to consider this-  If we focus on data provided by Forbes,

  • there is 1.7MB of data, per person, per second generated in the world today.
  • That data has to be stored somewhere.

No matter if it’s relational or big data or another type of data store, SOME of that data is going to be in the two RDBMSs I used in my example.  The natural life of a database is GROWTH.  The enhancements to these archaic data movement utilities haven’t kept up with the demands of data growth and never will.  Why are we still advocating their use to migrate from one database to another?  Why are we promoting them for cloud migrations?

This Is(n’t) How We Do It

I’m seeing this recommendation all too often in product documentation and best practices.  Oracle’s migration steps for 11g to the cloud demonstrate this:

  • DataPump with conventional export/import
  • DataPump transportable tablespace
  • RMAN Transportable tablespace
  • RMAN CONVERT transportable tablespace with DataPump

These tools have been around for quite some time and yes, they’ve been trusted sidekicks that we know will save the day, but we have a new challenge when going to the cloud:  along with data gravity, we have network latency and network connectivity issues.

Depending on the size, (the weight) of the data that has to be transferred, Database as a Service can turn into a service for no one.  Failures, requiring us to perform a 10046 trace to try to diagnose a failed DataPump job, with the weight of data gravity behind it, can delay projects and cause project creep in a way that many in IT aren’t willing to wait for, and the role of the DBA comes to a critical threshold again.

I’m not asking DBAs to go bleeding edge, but I am asking you to embrace tools that other areas of IT have already recognized as the game changer.  Virtualize, containerize and, for the database, that means data pods.  Migrate faster, moving all data sources, applications, etc. as one, then deliver a virtual environment while giving yourself the time to “rehydrate” to physical without other resources waiting on you, the DBA, so often viewed as the roadblock.  Be part of the answer, not part of the problem that archaic import/export tools introduce because they aren’t the right tool for the job.

Posted in Cloud, DBA Rants, Delphix, Oracle

August 23rd, 2017 by dbakevlar

Even though my social media profile is pretty public on Twitter and LinkedIn, I’m significantly conservative with other personal and financial data online.  The reversal of the Internet Privacy Rule, (I’ve linked to a Fox News article, as there was so much negative news on this one…) had everyone pretty frustrated, but then we need to look at the security of personal information, especially financial data, and as we can see from the security breaches so far in 2017, we all have reason to be concerned.

The EU has taken the opposite approach with the right to be forgotten, along with the General Data Protection Regulation, (GDPR.)  Where we seem to be taking a lesser, more bizarre path to security, the rest of the world is tightening up.

For the database engineer, we are

Responsible for the data, the data access and all of the database, so help me God.

As the gatekeepers for the company’s data, security had better be high on our list and central to our careers.  There are a lot of documents and articles telling us to protect our environment, but often when we go to the business, the high cost of these products makes them hesitate to invest in them.

My Example

I’m about to use just one of the top 15 security breaches of all time as an example, but seriously, Sony PlayStation Network, this one was pretty terrifying and an excellent example of why we need to think more deeply about data security.

Date of Discovery: April, 2011
How many Users Impacted: 77 million PlayStation Network individual accounts were hacked.

How it went down:  The Sony PlayStation Network breach is viewed as the worst gaming community data breach in history.  Hackers were able to make off with 12 million unencrypted credit card numbers as part of the data they accessed.  They also retrieved account users’ full names, passwords, e-mails and home addresses, along with their purchase history and PSN/Qriocity logins and passwords.  There was an estimated loss of $171 million in revenue while the site was down for over a month.

As a customer, I limit where I submit my data online, and my kids always wonder why.  So often companies offer me the option to pay with or store my credit card information in their system, and I won’t.  The above is a great example of why I don’t.  The convenience isn’t worth the high cost of lacking or unknown security measures.

John Linkous of eIQnetworks stated, “It’s enough to make every good security person wonder, ‘If this is what it’s like at Sony, what’s it like at every other multi-national company that’s sitting on millions of user data records?'”

Since it was only certain environments that weren’t protected, and only specific ones that lacked encryption, it’s a reminder to those in IT security to identify and apply security controls consistently across environments and organizations.

How to Protect Data

There are some pretty clear rules of thumb when protecting data-

  • Roles, Privileges and Grants

Utilize the database’s and application’s full security features to ensure that least-privileged access is granted to each user.  As automation and advanced features come in to offer you more time to allocate toward the important topic of security, build out a strong security foundation to ensure you’ve protected your data to the highest degree.

  • Audit regularly

There are full auditing features to ensure compliance and verify who has what access and privileges.  You should know who has access to what, if any privileges change and if changes are made by users other than the appropriate ones.
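A small, hedged example of what that regular review can look like in Oracle, (the privilege list here is purely illustrative) is a periodic check of who holds powerful system privileges:

-- who holds powerful system privileges, and with ADMIN option?
select grantee, privilege, admin_option
  from dba_sys_privs
 where privilege in ('ALTER SYSTEM', 'GRANT ANY PRIVILEGE', 'SELECT ANY TABLE')
 order by grantee;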

  • Encrypt production

Use powerful encryption methods to secure your production system.  Encryption changes the data to an unreadable format until a key is submitted to return the data to its original, readable format.  Encryption can be reversed, but strong encryption methods can offer advanced security against breaches.  Auditing should also show who is accessing the data and alert upon a suspected breach.
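For Oracle shops, Transparent Data Encryption, (TDE) is one way to do this at the tablespace level.  A minimal sketch, assuming a TDE keystore is already configured and open, and using a placeholder datafile path:

-- datafile path below is a placeholder; requires an open TDE keystore
CREATE TABLESPACE secure_data
  DATAFILE '/u01/app/oracle/oradata/orcl/secure_data01.dbf' SIZE 500M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

Any segments created in that tablespace are then encrypted at rest without application changes.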

  • Mask Non-production

Often 80% of our data is non-production copies.  Most users, stakeholders and developers may not recognize the risk to the company as they would with the production environment.  Remove the responsibility and unintentional risk by masking the data with a masking tool that contains a full auto-discovery process and templates to make it easily repeatable and dynamic.

In 2014, Sony agreed to a preliminary $15 million settlement in a class-action lawsuit over the breach, which brings the grand total to just over $186 million in losses for the Sony PlayStation Network.

If you think encryption and masking products are expensive, recognize how expensive a breach is.

Posted in Data Masking, Delphix, Oracle

August 10th, 2017 by dbakevlar

In my latest blog post on the Delphix site, I continue my conversation with why DevOps is the next step for DBAs and how DBAs can embrace this next step in their evolution.

This is an extensive series of blog posts, (four so far) to be followed by an ebook, a podcast and two webinars.  One webinar, to be announced soon with Oracle, is called “The DBA Diaries”, and the other will be from Delphix, titled “The Revolution:  From Databases and DevOps to DataOps“.

The goal for all of this is to ease the transition for the database community as the brutal shift to the cloud, now underway, changes our day-to-day lives.  Development continues to move at an ever-accelerating pace, and yet the DBA is standing still, waiting for the data to catch up with it all.  This is a concept that many refer to as “data gravity“.

The term was coined just a few years ago by a senior VP platform engineer, Dave McCrory.  It began as an open discussion aimed at understanding how data impacts the way technology behaves when connected with network, software and compute.

He discusses the basic principle that “the speed with which information can get from memory (where data is stored) to computing (where data is acted upon) is the limiting factor in computing speed,” known as the von Neumann bottleneck.

These are essential concepts that I believe all DBAs and developers should understand, as data gravity impacts all of us.  It’s the reason for many enhancements to database, network and compute power.  It’s the reason optimization specialists are in such demand.  Other roles such as backup, monitoring and error handling can be automated, but no matter how much logic we drive into programs, nothing is as good as true optimization skill when it comes to eliminating data gravity issues.  Less data, less weight; it’s as simple as that.

We all know the cloud discussions are coming, and with them, even bigger challenges will be felt from the gravity of data.  Until then, let’s just take a step back and recognize that we need some new goals and some new skills.  If you’d like to learn more about data gravity, but don’t have time to take it all in at once, consider following it on Twitter, which is curated by Dave McCrory.

I’m off to Jacksonville, Fl. tomorrow to speak at SQL Saturday #649!

 
Posted in Database, DBA Life, Delphix, Oracle, SQLServer

August 7th, 2017 by dbakevlar

It’s finally time to upgrade our Linux Target!  OK, so we’re not going to upgrade the way a DBA would normally upgrade a database server when we’re working with virtualization.

So far, we’ve completed:

  • 1.  Updated our instances so that we’ll have a GUI interface if we need one.
  • 2.  Installed Oracle on the Linux Source and upgraded our Dsource database to 12c.

 

Now we’re done with our Linux Source and onto our Linux Target.

Install and Configure VNC and Oracle

We’ll run through the install and configuration of the VNC Viewer requirements just like we did in Part I and Part II.  We’ll also install Oracle, but this time we’ll perform a software-only installation.

We’ll install the Enterprise Edition and we’ll make sure to install it in the same path as we did on our Linux Source, (/u01/app/oracle/product/12.1/db_1).  We’re not installing multi-tenant, as we didn’t configure this on our source, either.

Once that is complete, it’s time to get our VDBs upgraded.

The first thing you need to remember is that the VDBs are simply virtual images of our Dsource, which is already UPGRADED.

Add the New Oracle Home to the Linux Target

Log into Delphix Admin Console and click on Environments.

Click on the Linux Target and then click on the refresh button:

Click on the Databases tab and you’ll now see the DB12c Oracle home is now present in the list:

Prep VDBs for switch to new home

Copy your environment profile from 11g.env to 12c.env.  Update the Oracle home in the new file to point to the 12c home and save it.
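Something along these lines works; a minimal sketch, (assuming the profiles live in the delphix user’s home directory):

$ cp ~/11g.env ~/12c.env
$ vi ~/12c.env    # set ORACLE_HOME=/u01/app/oracle/product/12.1/db_1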

Now I have three VDBs on this target:

[delphix@linuxtarget ~]$ ps -ef | grep pmon

delphix   7501     1  0 Jul12 ?        00:01:17 ora_pmon_devdb
delphix   8301     1  0 Jul06 ?        00:01:49 ora_pmon_VEmp6
delphix  16875     1  0 Jul05 ?        00:01:57 ora_pmon_qadb

Log into the Linux Target and from the command line, set the environment and log into each database via SQL Plus and shut it down.

. 11g.env

export ORACLE_SID=VEmp6f
sqlplus / as sysdba
shutdown immediate;
exit;

and so on and so forth…. 🙂

Copy all the init files for the VDBs from the 11g Oracle Home/dbs over to the 12c Oracle Home/dbs.
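A minimal sketch of that copy, (the 11g home path here is just a placeholder, adjust it for your own install):

$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/init*.ora /u01/app/oracle/product/12.1/db_1/dbs/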

And this is where it all went wrong for two of the VDBs…

Back on the Delphix Admin Console, click on Manage –> Datasets

Click on each of the VDBs involved.  Click on Configuration –> Upgrade, (up arrow icon) and say yes to upgrading.  Update the Oracle home from the 11g in the drop down to the new 12c Oracle home and click the gold check mark to confirm.

OK, so this is what I WOULD have done for all three VDBs if I’d been working with JUST VDBs, but this is where it gets a bit interesting and I had to go off the beaten path for a solution.  I’m unsure if this is documented anywhere inside Delphix, (Delphix Support is awesome, so I’m sure they already know it, but for my own sanity) so here’s the situation and the answer.  The environment I’m working on is built off of AWS AMIs that consist of Delphix containers.  Containers are very powerful, allowing you to “package” up a database, application and other tiers of virtualized environments, offering the ability to manage it all as one piece.  This was the situation for the qadb and the devdb instances.

Due to this, I couldn’t run the upgrade inside the Delphix Admin console since these two VDBs were part of Delphix “data pods.”  The following are the steps to then address this change.

Remove the Containers, (Subsequently the VDBs as Well!)

  1. Log into the Delphix’s Jet Stream.
  2. Upper right hand corner, click on Usage Overview
  3. Scroll down and click on Employee Application, (it’s the template for the VDBs in question.)
  4. At the bottom of this page, you’ll see the two containers that possess the VDBs as part of them.  To the right, there is a trash can icon for delete.  (The reason this is an option is that I have a template built for this container, and it will be very simple to recreate a VDB and vFile to repopulate it, a matter of minutes, max.)

  5. Delete the two containers that are controlling the administration of these two VDBs still pointing to the 11g home.

Create the New VDBs and Virtualized Application, (Vfiles)

Now, log into the Delphix Admin console.

  1. Provision two VDBs from the orcl source db, one as devdb and another as qadb, just as it was before, both on the Linux target.
  2. Provision two vfiles of the Web Application, one as QA_Web and the other as DEV_Web, port numbers 1080 and 2080, keeping all other defaults.
  3. Once completed, (a couple of minutes, max) then let’s return to Jet Stream and create the containers that will house the system.

Create the New Containers From the Template

In Jet Stream

  1. Click on Data Management in the upper right hand corner
  2. You will be brought to the Templates tab, click on the Employee Application template
  3. Click on Add Container

  4. Name:  “Dev 12c Container”, Owner: Dev, and choose the devdb and the DEV_Web for the sources, then complete the container creation.
  5. Click again on Add Container
  6. Name: “QA 12c Container”, Owner: QA, and choose the qadb and the QA_Web for the sources, then complete the container creation.

This will take just a moment to finish creating and that’s all there is to it.  You now have two DB12c environments that are completely containerized and upgraded from their previous 11g state.

Our source shows we’re using our DB12c upgraded database:

And we can also see everything is upgraded and happy in our Delphix Administration Console.  

It may have taken a little longer for me to upgrade with the complexity of the containers introduced, but the power of data pods is exceptional when we’re managing our data, the database and the application as one piece anyway.  Shouldn’t we treat it as one?

Posted in AWS, Delphix, Oracle

August 3rd, 2017 by dbakevlar

I’ve been asked what it takes to be a successful evangelist, and I’ve realized that pinning down what makes one successful at it is like holding sand in your hands; no matter how tightly you close your fists, it’s difficult to contain the grains.

The term evangelist is one that receives either very positive or very negative responses.  I’m not a fan of the term, but whether you use it or call them advocates, representatives or influencers, it doesn’t matter; they are essential to the business, product or technology that they become the voice for.

Those that I view as successful evangelists in the communities that I am part of?

There are a number of folks I’m sure I’ve missed whom I also admire as I interact with them and observe their contributions, but these are a few that come to mind when I think of fellow evangelists.

What makes an evangelist successful?  It may not be what you think.

1. It’s Not Just About the Company

Most companies think they hire an evangelist to promote and market the company, and yet, when all you do is push out company info and company marketing, people STOP listening to you.  What you say, do and are interested in should drive people to want to know more about you, including the company you work for and what that company does.

All of these folks talk about interests outside of work.  They post about their lives, their interests and contribute to their communities.  This is what it means to be really authentic and to set an example.  People want to be more like them because they see the value they add to the world beyond just talking points.

2.  They’re Authentic

Authenticity is something most find very elusive.  If you’re just copying what another person does, there’s nothing authentic about that.  There’s nothing wrong with finding a tip or tidbit that someone else is doing and adopting it, but it has to WORK for you.  I was just part of a conversation yesterday where Jeff and I were discussing that he doesn’t use Buffer, (a social media scheduling tool) whereas I live by it.  It doesn’t work for Jeff and there’s nothing wrong with that.  We are individuals, and what makes us powerful evangelists is that we figured out what works for each of us.

3.  In the Know

As a technical evangelist, you can’t just read the docs and think you’re going to be received well.  Theory is not practice, and I’ve had a couple of disagreements with managers explaining why I needed to work with the product.  I’ve had to battle for hardware to build out what I’ve been expected to talk on, and the one time I didn’t fight for it, I paid for it drastically.  I won’t write on a topic unless I can test it out on my own.  Being in the trenches provides you a point of view no document can provide.

Documentation is secondary to experience.

4.  Your View is Outward

This is a difficult one for most companies when they’re trying to create evangelists from internal employees.  Those that may be deeply involved at the company level may interact well with others, but won’t redirect to an external view.  I’ve had people ask me why my husband isn’t doing as much as I am in the community.  Due to his position, he must be more internally and customer facing.  My job is very separate from my fellow employees.  I must always be focused outward and interact at least 95% of my time with the community.  You’ll notice all of the folks listed are continually interacting with people outside of their company and are considered very “approachable.”

We volunteer our time in the community- user groups, board of directors, events and partnering with companies.  We socialize, as we know our network is essential to the companies we represent.

5.  We Promote

I wish I did more public promotion like I see some of these other folks doing.  I’m like my parents; I stand up for others and support them on initiatives and goals.  I do a lot of mentoring, but less of it when I’m blogging.  My mother was never one for empty compliments, and I did take after her on this.  I’m just not very good at remembering to compliment people on social media and feel I lack in this area, but I continually watch others do this for folks in the community, and it is so important.

We make sure to work with those that may need introductions in our network or support in the community, and we reach out to offer our help.  In the public view this is quite transparent, so when others pay it forward or return the favor, it can appear that people just bend over backwards for us, but often we have been there for the folks in question in the past, with no expectations, and people remembered this.

We do promote our company, but for the right reasons:  the company has done something good for the community or has something special going on.  Rarely do we push out anything that’s pure marketing, as it just doesn’t come across very well from us.  It’s not authentic.

Additional Recommendations

  • Refrain from internet arguments, social media confrontations

I’m not saying to be a pushover.  I literally have friends muted and even blocked.  There’s nothing wrong with NOT being connected to individuals that have very different beliefs or social media behavior.  You shouldn’t take it personally– this is professional and you should treat it as such.

You may find, (especially for women and people of color) that certain individuals will challenge you on ridiculous topics and battle you on little details.  This is just the standard over-scrutinizing that we go through and if it’s not too bad, I tell people to just ignore it and not respond.  If it escalates, don’t hesitate to mute or block the person.  You’re not there to entertain them and by removing your contributions from their feed- “out of sight, out of mind”, offering peace to both of you… 🙂

  • Use automation tools and send out content that INTERESTS YOU.

Contribute what you want, limit what your company wants to a certain percentage, and be authentic.  Find your own niche and space and don’t send out “noise.”

There are a ton of tools out there.  Test out Buffer, Hootsuite, Klout or SumAll to make social media contributions easier.  If you don’t have a blog, create one and show what you’re working on, and don’t worry about the topic.  You’ll be surprised how much value people find in your contributions if you just write about challenges you’re facing, how you’ve solved a problem you’ve come across, or a topic you couldn’t find a solution to online.

  • Interact and be receptive of others

Have fun with social media and have real conversations.  People do appreciate honesty with respect.  Answer comments and questions on your blog.  Respond to questions on forums for your product and promote other people’s events and contributions.

When people approach you at an event or send you a direct message, try to engage with them and thank them for having the guts to come up and speak with you.  It’s not easy for most people to approach someone they don’t know.

  • Volunteer and Contribute

We used to be part of our local communities, and as our world has changed with technology, the term community has changed too.  These communities wouldn’t exist without the contributions of people.  Volunteer to help with user groups, events and forums.  Don’t just volunteer to be on a board of directors and then not do anything; it’s not something to just put on your CV to feel like you’re contributing.  There is incredible power in the simple act of doing, so DO.  Provide value and ask how you can help.  Kent has been a board member, a volunteer and even a president of user groups.  Jeff has run content selections and run events even though he’s limited in what he’s allowed to do as an Oracle employee, and Rie promotes information about every woman speaker at SQL Saturday events along with all she does to run the Atlanta SQL Saturday, (the largest one in the US!)  I won’t even try to name all the different contributions that Grant is part of, including the new attendees event at PASS Summit, (Microsoft’s version of Oracle Open World for my Oracle peeps!)

For those companies that are thinking, “I hired an evangelist, so I want them to be all about me and all invested in the company”:  if they are, you’ll never have the successful evangelist that will be embraced by the community and able to promote your product/company in a powerful, grassroots way.  If their eyes are always looking inward, they will miss everything going on outside, and as we all know, technology moves fast.  Look away and you’ll miss it.

 

Posted in DBA Rants, Delphix, Oracle, SQLServer

July 31st, 2017 by dbakevlar

This is Part III in a four-part series on how to:

  1.  Enable VNC Viewer access on Amazon EC2 hosts.
  2.  Install DB12c and upgrade a Dsource for Delphix from 11g to 12c, (12.1)
  3.  Update the Delphix Configuration to point to the newly upgraded 12c database and the new Oracle 12c home.
  4.  Install DB12c and upgrade target VDBs for Delphix residing on AWS to 12.1 from the newly upgraded source.

In Part II, we finished upgrading the Dsource database, but now we need to get it configured on the Delphix side.

Log into the Delphix Admin console to make the changes required to recognize the Dsource is now DB12c and has a new Oracle home.

Log into the Delphix console as the Delphix_Admin user and go to Manage –> Environments.

Click on the Refresh button and let the system recognize the new Oracle Home for DB12c:

Once complete, you should see the 12.1 installation we performed on the Linux Source now listed in the Environments list.

Click on Manage –> Datasets and find the Dsource 11g database and click on it.

Click on the Configuration tab and click on the Upgrade icon, (a small up arrow in the upper right.)

Update to the new Oracle Home that will now be listed in the dropdown and scroll down to save.

Now click on the camera icon to take a snap sync to ensure everything is functioning properly.  This should only take a minute to complete.

The DSource is now updated in the Delphix Admin console and we can turn our attention to the Linux target and the VDBs that source from this host.  In Part IV we’ll dig into the other half of the source/target configuration and how I upgraded the Delphix environments, with a few surprises!

 

Posted in AWS, Delphix, Oracle

July 26th, 2017 by dbakevlar

I’m finally getting back to upgrading the Linux Source for a POC I’m doing with some folks, picking up from where we left off in Part I.

Address Display Issue

Now that we have our VNC Viewer working on our Amazon host, the first thing we’ll try is to run the Oracle installer, (unzipped location –> database –> runInstaller) but it’s going to fail because we’re missing the xdpyinfo file.  To verify this, you’ll need to open up a terminal from Application –> System Tools –> Terminal:

$ ls -l /usr/bin/xdpyinfo
 ls: /usr/bin/xdpyinfo: No such file or directory

We’ll need to install this with yum:

$ sudo yum -y install xorg-x11-utils

Once we’ve completed this, let’s verify our display:

$ echo $DISPLAY

:1.0 <– (no host before the colon means the display is local; the first number is the display, just as ipaddress:display for your VNC Viewer connection, and the .0 is the screen.)

If it’s correct, you can test it by executing xclock:

$ xclock

The clock should appear on the screen if the display is set correctly.

Install Oracle 12c

Run the installer:

$ ./runInstaller

The installer will come up for Oracle 12c and you can choose to enter in your information, but I chose to stay uninformed… 🙂  I chose to install AND upgrade the database to DB12c from 11g.

I also chose to ignore the warnings for swap and the few libraries by clicking Ignore All, and proceeded with the installation.

Root.sh and the Trace Analyzer

Once the installation of the new Oracle Home is complete, choose to run the root.sh script when prompted:

$ sudo /u01/app/oracle/product/12.1/db_1/root.sh

Overwrite all files when prompted by the script.  It’s up to you, but I chose to install the Oracle Trace File Analyzer so I can check it out at a later date.  You’ll then be prompted to choose the database to upgrade.  We’re going to upgrade our source database, ORCL in this example.

Upgrade Our Oracle DSource(Database)

Choose to proceed forward with the upgrade of the database, but know that you’ll require more space for the archive logs that are generated during the upgrade.  The check will tell you how much to add, but I’d add another 1GB to ensure you’re prepared for the other steps you have to run as we go through the preparation.

Log into SQL Plus as SYSDBA to perform this step:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=8100M;

Go through any warnings, but steps like stats collection and grants on triggers will have to be performed post the upgrade.

Drop the OLAP catalog:

$ sqlplus / as sysdba

@$ORACLE_HOME/olap/admin/catnoamd.sql

exit

Remove the OEM catalog for Enterprise Manager, first shutting down the console from the terminal:

$ emctl stop dbconsole

Copy the emremove.sql from the 12c Oracle Home/rdbms/admin and place it in the same location for 11g home.  Log into SQL Plus as SYSDBA:

SET ECHO ON;

SET SERVEROUTPUT ON;

@$ORACLE_HOME/rdbms/admin/emremove.sql

Empty the recyclebin post these steps:

purge recyclebin;

The assumption is that you have a backup prepared, or that you can use Flashback with the resources allocated, and can now proceed forward with the upgrade.

Choose to upgrade the 11g listener and choose to install EM Express if you’d like to have that for monitoring.  Make sure to keep the default checks for the following window to update everything we need and collect stats before the upgrade runs to ensure it proceeds efficiently through all objects required.

Choose to proceed with the upgrade and if you’ve followed these instructions, you should find a successful installation of DB12c and upgrade of the database.  Keep in mind, we’re not going to go multi-tenant in this upgrade example, so if you were looking for those steps, my POC I’m building isn’t going to take that on in this set of blog posts.

Post Upgrade Steps:

Update your environment variables, including copying the 11g.env to a new profile called 12c.env and updating the Oracle Home.  Now set your environment and log into SQL Plus as SYSDBA to the upgraded database.

Update all the necessary dictionary and fixed stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
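
It's also worth confirming that every component came through the upgrade cleanly, a quick sketch against the standard registry view:

SELECT comp_id, version, status
FROM   dba_registry
ORDER BY comp_id;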

Now, for our next post, we'll need to set up the same installations and VNC Viewer configuration on our Amazon target host that we did for the Source, and then install Oracle DB12c on that target server as we did in this post.  Then we'll discuss how to get all our Delphix VDBs, (virtual databases) upgraded to match our source in no time!

 

Posted in AWS, Delphix, Oracle Tagged with: , , ,

July 19th, 2017 by dbakevlar

There is a plethora of mishaps in the early space program that prove the need for DevOps, but fifty-five years ago this month, there was one in particular that is often used as an example for all.  This simple human error almost ended the whole American space program, and it serves as a strong example of why DevOps is essential as agile speeds up the development cycle.

The Mariner I space probe was a pivotal point in the space race between the United States and the Soviet Union.  It was the first of a series of large, sophisticated interplanetary missions, all carrying the Mariner moniker.  For this venture to launch, (pun intended) it depended on a huge, and new, development project for a powerful booster rocket called the Atlas-Centaur.  That development program ran into so many testing failures that NASA ended up dropping the initial project and going with a less sophisticated booster to meet the release date, (i.e. features dropped from the project.)  The new probe designs were based off the previously used Ranger moon probes, so less testing was thought to be needed, and the Atlas-Agena B booster was chosen, bringing the Mariner project down to a meager cost of $80 million.

The goal of the Mariner program was to perform unmanned missions to Mars, Venus and Mercury, and Mariner I was the first attempt, aimed at Venus.  It was equipped with solar cells on its wings to power the voyage, all new technology, but the booster, required to escape Earth's gravity, was an essential part of the project.  Because the booster was based on older technology than many of the newer features, it wasn't given the same attention while testing was being performed.

On July 22nd, 1962, Mariner I lifted off, but approximately four minutes in, it veered off course.  NASA made the fateful decision to terminate the flight, destroying millions of dollars of equipment to ensure the spacecraft didn't crash on its own into populated areas.

As has already been well documented, the guidance program that was supposed to correct Mariner 1's flight had a single transcription error in the entire code.  A hyphen, in this case representing the overbar that indicates smoothed, averaged values, was missing from the instructions that adjusted the flight path: where the equations called for "R-dot-bar sub n", the coded version effectively read "R-dot sub n".  Without the smoothing, the program over-corrected for small velocity changes and sent erratic steering commands to the spacecraft.

This missing hyphen caused a loss of millions of dollars in the space program and is considered the most expensive hyphen in history.

How does this feed into the DevOps scenario?  

Missing release dates for software can cost companies millions of dollars, but so can the smallest typos.  Reusing code and automating builds, along with proper policies, processes and collaboration throughout the development cycle, ensures that code isn't just well written but, even in these shortened development cycles, reviewed and tested fully before it's released.  When releases are done as smaller increments with test scenarios around them, a feedback loop is created so that errors are caught early and are far less likely to reach production.

Posted in devops, Oracle, SQLServer Tagged with:

July 13th, 2017 by dbakevlar

For a POC that I'm working on with the DBVisit guys, I needed a quick 12c environment to work on and have at our disposal as required.  I knew I could build out an 11g one in about 10 minutes with our trusty free trial, but I would then need to upgrade it to 12c.

Disable snapshots to Delphix Engine

This is a simple prerequisite before you upgrade an Oracle source database.  It takes pressure off the system and avoids confusion while the database and its Oracle home are being upgraded.

Simply log into the Delphix Admin console, click on the source group that the source database belongs to and, under Configuration on the right hand side, move the slider to the "disable" position so that interval snapshots are no longer taken.

Configure GUI for Simplified Oracle Installation

EC2 doesn't come with a GUI by default, so we just need to install one on the host to make life a little easier for the upgrade:

  •  Check for updates:
[delphix@linuxsource database]$ sudo yum update -y

….

  xfsprogs.x86_64 0:3.1.1-20.el6                                                
  yum.noarch 0:3.2.29-81.el6.centos                                             
  yum-plugin-fastestmirror.noarch 0:1.1.30-40.el6                               
Replaced:
  python2-boto.noarch 0:2.41.0-1.el6                                            
Complete!
  • Install the desktop:
[delphix@linuxsource database]$ sudo yum groupinstall -y "Desktop"

  xorg-x11-xkb-utils.x86_64 0:7.7-12.el6                                        
  xulrunner.x86_64 0:17.0.10-1.el6.centos                                       
  zenity.x86_64 0:2.28.0-1.el6                                                  
Complete!
  • Install dependencies like fonts needed:
[delphix@linuxsource database]$ sudo yum install -y pixman pixman-devel libXfont
[delphix@linuxsource database]$ sudo yum -y install tigervnc-server

Each of the above should show completed successfully.

  • Set the password for the VNC:
[delphix@linuxsource database]$ vncpasswd
Password:
Verify:
  • Restart the SSHD Service:

[delphix@linuxsource database]$ sudo service sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]
  • Configure VNC Server properties with SUDO privs:
[delphix@linuxsource database]$ sudo vi /etc/sysconfig/vncservers
VNCSERVERS="1:delphix"
VNCSERVERARGS[1]="-geometry 1280x1024"

Save and exit the vncservers configuration file.

  • Start VNC Server:
[delphix@linuxsource database]$ sudo service vncserver start

….

Log file is /home/delphix/.vnc/linuxsource.delphix.local:1.log
                                                           [  OK  ]
  • I’m the only one who will be accessing this host to perform these types of tasks, so I’ll use port 5901 and add a firewall rule:
[delphix@linuxsource database]$ sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT
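
If you want a quick sanity check that the server is listening, and want the firewall rule to survive a reboot, here's a short sketch, assuming the stock netstat and iptables service on this CentOS/RHEL 6-style image:

[delphix@linuxsource database]$ sudo netstat -tlnp | grep 5901      # should show Xvnc listening
[delphix@linuxsource database]$ sudo service iptables save          # persist the rule across restarts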

You can now use the VNC Viewer to access the GUI for the Linux Source and install/upgrade Oracle.  I’m assuming you already have it, but if you don’t, download it and do a quick install.  Keep in mind, to install Oracle via the GUI on the Linux Target, I’ll need to perform these steps on that target, too.

Let's check and verify that we can get to the Linux Source desktop.  Configure a new connection in the VNC Viewer, remembering to use the host's public IP with ":1" appended, the display we configured for the delphix user, which maps to port 5901.  Save the connection and log into the Linux Source.

In the next post, I’ll update the Linux Source Oracle database and we’ll proceed with upgrading Delphix source and target databases on Amazon.

 

Posted in Oracle Tagged with: , , ,

July 3rd, 2017 by dbakevlar

Database Administrators, (DBAs) through their own self-promotion, will tell you they're the smartest people in the room and, being such, will avoid buzzwords that create cataclysmic shifts in technology the way DevOps has.  One of our main roles is to maintain consistent availability, which is always threatened by change, and DevOps seems to oppose this with its focus on methodologies like agile, continuous delivery and lean development.

Residing a step or more behind the bleeding edge has never fazed the DBA.  We were the cool kids by being retro, the ones refusing to fall for the latest trend or the coolest new feature, knowing that with the bleeding edge comes risk, and that a DBA who takes risks is a DBA out of work.  So we put up the barricades and refused the radical claims and cultural shift of DevOps.

As I travel to multiple events focused on the numerous platforms the database is crucial to, I'm faced with peers frustrated with DevOps and considerable conversation dedicated to how it's the end of the database administrator.  It may be my imagination, but I've been hearing this same story for a while, with the blame assigned elsewhere-  either it's Agile, DevOps, the cloud or even the latest release of the actual database platform.  The story's the same-  the end of the Database Administrator.

The most alarming and obvious point in all of this is that in each of these scenarios, the Database Administrator ended up more of a focal point than they were when it began.  When it comes to DevOps, the specific challenges of the goal needed the DBA more than any of these storylines admit.  As development hurtled at top speed to deliver what the business required, the DBA, and operations as a whole, delivered the security, the stability and the methodologies to build automation at a level that the other groups simply never needed previously.

Powerful DBAs, with skills not just in scripting but in efficiency and logic, were able to take complicated, multi-tier environments and break them down into strategies that could be easily adopted.  Having overcome the challenges of the database being central to, and blamed for, everything in the IT environment, they were able to dissect and build out complex management and monitoring of end-to-end DevOps.  As essential as system, network and server administration is to the operations group, the Database Administrator possesses a hybrid of developer and operations skills that makes them a natural fit for DevOps.

The truth is, the DBA is not ruined by DevOps, but the role is revolutionized.

Thanks to this awesome post from 2012 by Alex Tatiyants, which resonated so well with the DBAs I speak to every day, even in 2017.

Posted in DBA Life, devops, Oracle Tagged with: ,

June 28th, 2017 by dbakevlar

I've been at KSCOPE 2017 all week and it's been a busy schedule, even with only two sessions.  It's just one of those conferences that has so much going on all the time that the days just speed by at 140MPH.

As with most major conferences, KSCOPE abstract submission was about 9 months ago.  That was a time when I was just coming to grips with how much Delphix could solve in the IT environment, and it may have been one of the first abstracts I submitted as a Delphix employee.  I wasn't too thrilled with my original choice, and I thank KSCOPE for still accepting me as a speaker, but the end product that I presented was very different from what I submitted.

I was in the DBA track, but after arriving, I started to push out on my network that I was building the presentation from a developer perspective using our development/tester interface, Jet Stream.  One of the challenges I experienced as a developer, a release manager and in my many years as a database administrator was releasing to multiple tiers.  This complexity has become an increasing pain point with the introduction of agile methodologies.

The demonstration example was an agile development team made up of a web developer, a tester and a database administrator.  They were to bulk load new employee data to save the business from entering it manually, while ensuring that the employee ID numbering remained sequential.

I proceeded to build out a cloud environment this week that would represent such a team's setup.  It consisted of our common employee web application, an Oracle database to store the employee information, some structured flat files of employee information and some scripts to load this information from the files.  These were all then used to create development and test environments, with containers created to track changes via Jet Stream as each new step of the development and testing occurred.

In newer agile development shops, the DBA may also be an integral part of the development team, and this scenario demonstrated how the way we solve a problem in the database can conflict with how an application was designed to function, causing downtime.  With a container of Virtual Databases, (VDBs) and virtual file directories, (vFiles)-  virtualized development environments that are complete read/write copies to use for development and test-  we were able to use Jet Stream to version control not just the code changes, but the data changes, too.

In my demo, I showed how the bulk load process, designed by the DBA, had created a sequence and trigger on the employees table to populate the employee ID as data was loaded from the structured flat file, then proceeded to load it.  The data loaded without issue and the employee ID was now sequential-  requirement solved and job complete.  It was then simple to create a bookmark in the timeflow, noting what had been done in that iteration.
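
To give an idea of the kind of change involved, here's a minimal sketch of a sequence and trigger like the one in the demo, (the object names and starting value are hypothetical; the demo's actual scripts aren't reproduced here):

CREATE SEQUENCE employees_seq START WITH 207 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER employees_bi
  BEFORE INSERT ON employees
  FOR EACH ROW
BEGIN
  -- overrides whatever employee_id the application supplies
  :NEW.employee_id := employees_seq.NEXTVAL;
END;
/

A trigger like this guarantees sequential IDs for the bulk load, but if the application manages employee IDs itself, silently overriding its values is exactly the sort of change that breaks the app tier.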

The problem was, after the bulk load of the data, the change actually broke the application and no new employees could be added through the interface.  We proved this by attempting to add an employee in the application and then querying the database to verify that it wasn't just the application failing to display the new employee addition.

I was able to demonstrate in Jet Stream that, instead of using rollback scripts and backing out files to previous versions, I could quickly "rewind" to the "Before 2.7 release" bookmark, and all tiers of the container were reset to before the data load, saving considerable time and resources for the agile team.

If you’d like to learn more about Jet Stream, Templates, Containers or how this can provide incredible value in our hectic agile DBA and development lives, check out the following links:

Delphix Jet Stream PDF

Valuable Jet Stream Concepts

Jet Stream Container Overview

 

 

Posted in Delphix, Oracle Tagged with: , ,

June 19th, 2017 by dbakevlar

So you're going to see a lot of posts from me in the coming months on topics shared by Oracle and SQL Server.  These posts offer me the opportunity to re-engage with my Oracle roots and will focus on enhancing my SQL Server knowledge of the 2014 and 2016, (and soon enough, 2017) features, which I'm behind on.

I'm going to jump right in with both feet on the topic of hints.  The official, (and generic) definition of a SQL hint is:

“A hint is an addition to a SQL statement that instructs the database engine on how to execute the statement.”

Hints are most often discussed in the context of queries, but they can influence the performance of inserts, updates and deletes, too.  What you'll find is that the actual terminology for hints is pretty much the same for Oracle and SQL Server, but the syntax is different.

The Optimizer and Oracle

Oracle hints were quite common during the infancy of the Oracle Cost Based Optimizer, (CBO).  It could be frustrating for a database administrator accustomed to the Rule Based Optimizer, (rules, people!  If there's an index, use it!) to give up control of performance to a feature that simply wasn't taking the shortest route to the results.  As time passed from Oracle 9i to 10g, we used hints less, trusting the CBO, and by Oracle 11g hinting started to be frowned upon unless you had a very strong use case.  I was in the latter scenario, as my first Oracle 11g environment required not just new data, but a new database weekly, along with a requirement for me to guarantee performance.  I knew pretty much every optimal plan for every SQL statement in the system and it was my responsibility to make sure each new database chose the most optimal plan, so I had incorporated complex hints, (and then SQL profiles as we upgraded…)

With the introduction of Oracle 12c, using hints effectively became a sought-after skill again, as many new optimizer features, (often with the words "dynamic" or "automated" in their names) started to impact performance beyond what was acceptable.

SQL Server’s Query Optimizer

SQL Server's optimizer took a big jump in features and functionality in SQL Server 2014.  With that jump, and then the introduction of SQL Server 2016, we started to see a new era of SQL Server performance experts moving even further into optimization expertise-  not only collecting performance data via dynamic management views and functions, (DMVs/DMFs) but also influencing the SQL Server Query Optimizer to make intelligent decisions with advanced statistics features and elevated hinting.

Hints have a more convoluted history in the SQL Server world than in the Oracle one.  I have to send some love and attention to Kendra Little after I found the cartoon she drew about her frustration with the use of ROWLOCK hints.

After reading this, my plan is still to go deeper into a number of areas of performance, including the optimizers, but today we'll just stick to a high-level look at the differences in hinting queries.

Hints

In our examples, we'll focus on forcing a HASH join instead of a nested loop, using an index on a specific table, and forcing a MERGE join.  Let's say we want a hash join against Employees and a merge join against the Job_History table.  We also want to make sure we use the primary key for one of the employee ID joins, because a less optimal index would otherwise be chosen for its lower cost, even though its real performance isn't as good due to concurrency.

The query would look like the following in Oracle:

SELECT   /*+ LEADING(e2 e1) USE_HASH(e1) INDEX(e1 emp_emp_id_pk) 
           USE_MERGE(j) FULL(j) */
         e1.first_name, e1.last_name, j.job_id, sum(e2.salary) total_sal
FROM     employees e1, employees e2, job_history j
WHERE    e1.employee_id = e2.manager_id
AND      e1.employee_id = j.employee_id
AND      e1.hire_date = j.start_date
GROUP BY e1.first_name, e1.last_name, j.job_id
ORDER BY total_sal;

If there were a subquery as part of this statement, we could add a second set of hints for it, as each query block supports its own hints placed right after its SELECT keyword.
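
Just to illustrate the placement, here's a small sketch with a hint in both the outer query and the subquery, (the hint choices here are arbitrary, purely to show where they go):

SELECT /*+ USE_HASH(e1) */ e1.last_name
FROM   employees e1
WHERE  e1.department_id IN
       (SELECT /*+ FULL(d) */ d.department_id
        FROM   departments d
        WHERE  d.location_id = 1700);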

If we were to take the same statement in SQL Server, the hints would look a bit different.  Yeah, the following is about as close as I could get to “apples to apples” in hints and in TSQL, so please forgive me if it ain’t as pretty as I would have preferred it to be:

SELECT   e1.Name, j.JobID, SUM(e2.Salary) AS Total_Salary
FROM     Employees AS e1
         INNER MERGE JOIN Job_History AS j
           ON e1.EmployeeID = j.EmployeeID
          AND e1.HireDate   = j.StartDate
         LEFT OUTER HASH JOIN Employees AS e2
           WITH (FORCESEEK (emp_emp_id_pk (EmployeeID)))
           ON e1.EmployeeID = e2.ManagerID
GROUP BY e1.Name, j.JobID
ORDER BY Total_Salary;

In a TSQL statement, each hint is placed with the object in the statement that it refers to.  The join hints are written out as keywords in the statement itself, (vs. the comment-style hint syntax required in Oracle) and the FORCESEEK table hint directs a seek on the primary key index for Employees.

As you can see, Oracle signals a hint with a comment block that begins with /*+ and ends with */.  Each approach requires some syntax and advanced performance knowledge, but all in all, the goal is the same-  influence the optimizer to behave in a specific way and [hopefully] choose the optimal plan.

Please let me know if you'd like to see more in this series, either by sending an email to dbakevlar at Gmail or commenting on this post.  Now I'm going to go start preparing for KSCOPE 2017-  someone explain to me how it's already the end of June!! 🙂

 

 

Posted in Oracle, SQLServer Tagged with: ,
