Category: Delphix

September 5th, 2017 by dbakevlar

Data gravity and the friction it causes within the development cycle is an incredibly obvious problem in my eyes.

Data gravity suffers from the von Neumann bottleneck, a basic limitation on how fast computers can be.  Put simply, the speed at which data can move between where it resides and where it’s processed is the limiting factor in computing speed.

OLAP, DSS and VLDB DBAs are constantly battling this challenge:  how much data is consumed by a process, how much must be brought in from disk, and whether the processing required to produce the results will “spill” to disk rather than complete in memory.

Microsoft researcher Jim Gray spent most of his career looking at the economics of data, which is one of the most accurate terms for this area of technical study.  He started working at Microsoft in 1995 and, although he was passionate about many areas of technology, his research on large databases and transaction processing speeds is held in great respect in my world.

Now some may say this has little to do with being a database administrator, but how many of us spend significant time on the cost-based optimizer?  Moving or retrieving data has a cost- so economics of data it is.

And this is the fundamental principle of data gravity and why DBAs get the big bucks.

If you’re interested in learning more about data gravity, DevOps and the future of DBAs, register for the upcoming webinar.

Posted in big data, Database, DBA Life, Delphix, Oracle, SQLServer Tagged with:

August 30th, 2017 by dbakevlar

Delphix Engineering and Support are pretty amazing folks.  They continue to pursue solutions no matter how much time it takes, despite the complex challenges of supporting heterogeneous environments, hardware configurations and customer needs.

This post is in support of the effort from our team that resulted in stability for a previously impacted Solaris cluster configuration.  The research, patching, testing and resulting certification from Oracle were a massive undertaking for our team, and I hope this information serves the community, but it is in no way a Delphix recommendation.  It’s simply what was done to resolve the problem, after our team made logical decisions based on how the system is used.

Challenge

Environment:  Solaris 11.3 (with SRU 17.5) + Oracle 12.2 RAC + ESX 5.5
Situation:
After an upgrade to 12.2, the environments were experiencing significant cluster instability and memory starvation due to the new memory demands of the release.
Upon inspection, it was found that numerous features required more memory than before and the system simply didn’t have the resources to support it.  As ours was a Solaris environment running 12.2, there was a documented patch we needed to request from Oracle for RAC performance and node evictions.  Even with the patch, the environment was still experiencing node evictions, and the data showed we’d have to triple the memory on each node to continue using the environment as we had before.  Our folks aren’t ones to give up that easily, so secondary research was performed to find out if some of the memory use could be trimmed down.
What we discovered is that what is old can become new again.  My buddy and fellow Oakie, Marc Fielding, had blogged, (along with links to other posts, including credit to another Oakie, Jeremy Schneider) about how he’d limited resources back in 2015 after patching to 12.1.0.2, and that post really helped the engineers at Delphix get past the last hump on the environment, even after implementing the patch to address a memory leak.  Much of what you’re going to see here came from that post, focused on its use in a development/test system, (Delphix’s sweet spot.)

Research

Kernel memory out of control
Starting with kernel memory usage, the mdb -k command can be used to inspect at a percentage level:
$ echo "::memstat" | mdb -k
Page Summary              Pages                MB  %Tot
------------   ----------------  ----------------  ----
Kernel                   151528              3183   24%
Anon                     185037              1623   12%
...

We can also look at it a second way, breaking down the kernel memory areas with the ::kmastat dcmd:

$ echo "::kmastat" | mdb -k

cache                        buf    buf    buf    memory     alloc alloc 
name                        size in use  total    in use   succeed  fail 
------------------------- ------ ------ ------ --------- --------- ----- 
kmem_magazine_1               16   3371   3556     57344      3371     0 
kmem_magazine_3               32  16055  16256    524288     16055     0 
kmem_magazine_7               64  29166  29210   1884160     29166     0 
kmem_magazine_15             128   6711   6741    876544      6711     0 
...

Oracle ZFS ARC Cache

Next- Oracle ZFS has a very smart cache layer, referred to as the ARC (adaptive replacement cache).  Both a blessing and a curse, the ARC consumes as much memory as is available, but is supposed to release memory back to other applications when it’s needed.  This memory is used to compensate for slow disk I/O.  When inspecting our environment, a significant amount of memory was being over-allocated to the ARC.  This may be due to the newness of Oracle 12.2, but in a cluster, memory starvation is a common cause of node eviction.

We can inspect the size stats for the ARC in the following file:

view /proc/spl/kstat/zfs/arcstats

This path assumes the ZFS kstat statistics are exposed under /proc, so your actual arcstats file may reside in a different location than shown above, (on Solaris you can pull the same counters with kstat, as sketched a little further down).  Inside the file, review the following values:

  • c is the target size of the ARC in bytes
  • c_max is the maximum size of the ARC in bytes
  • size is the current size of the ARC in bytes

Ours was eating up everything that remained, taking 100% of the leftover memory, as we’ll discuss in the next section of this post.
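On the Solaris hosts themselves, the same counters can be pulled with kstat.  A minimal sketch, assuming the standard zfs:0:arcstats kstat is present, (adjust the instance number if yours differs):

$ kstat -p zfs:0:arcstats:size     # current ARC size in bytes
$ kstat -p zfs:0:arcstats:c        # target ARC size in bytes
$ kstat -p zfs:0:arcstats:c_max    # maximum ARC size in bytes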

Oracle Clusterware Memory

Oracle Clusterware was the third area investigated for frivolous memory usage that could be trimmed down.  There are some clearly documented steps from Oracle for investigating misconfigurations and feature issues that can assist in identifying many of these.

So, post upgrade and patching, what can you do to trim down memory usage and avoid having to add memory just to support the cluster upgrade?

Changes

From the list of features and installations that weren’t offering a benefit to a development/test environment, here’s what made the list and why:
Updates were made to the /etc/system file, (this requires a reboot and must be performed as root):
  • Added set user_reserve_hint_pct=80
    • This change limits how much memory ZFS can claim for the ARC cache, (sketched at the end of this list).  There had been a significant issue for the customer when CRS processes weren’t able to allocate memory.  80% was the highest value this could be set to without experiencing a node reboot, something we all prefer not to happen.
  • Stopped the Cluster Health Monitor, (CHM) process.  This background process in 12c Clusterware collects workload data, which is significantly more valuable in a production environment, but in development and test?  It can easily be a drain on CPU and memory that could be better put to use for more virtual databases.
  •  To perform this, the following commands were used as the root user:
$ crsctl stop res ora.crf -init
$ crsctl delete res ora.crf -init
  • Removed the Trace File Analyzer Collector (tfactl).  This background process collects the many trace files Oracle generates into a single location.  Handy for troubleshooting, but it’s Java-based, has a significant memory footprint and is subject to Java heap issues.
  • It was uninstalled with the following command as the $ORACLE_HOME owner on each node of the cluster:
$ tfactl uninstall
  • Engineering stopped and disabled the Cluster Verification Utility, (CVU).  In previous versions this was a utility that could be added to the installation manually or run post-install by an admin to troubleshoot issues.  This is another feature that simply eats up resources that could be reallocated to dev and test environments, so it was time to stop and disable it with the following:
$ srvctl stop cvu
$ srvctl disable cvu
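Going back to the first item in this list, here’s a minimal sketch of the /etc/system change and one way to confirm it after the reboot.  The mdb check is an illustration on my part of how to read the value back, not a step taken from the engineering notes:

# as root; the 80% value is the one that worked for this environment
echo "set user_reserve_hint_pct=80" >> /etc/system

# reboot the node, then confirm the kernel picked up the value:
echo "user_reserve_hint_pct/D" | mdb -k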

Additional Changes

  • Reduced memory allocation for the ASM instance.
    • The ASM instance in 12.2 now uses 1GB of memory, where previously it used 256MB.  That’s a huge change that can impact other features dependent on that memory.
    • Upon research, it was found that 750MB was adequate, so if more memory needs to be reclaimed, consider lowering the memory target on each node to 750MB.
  • To perform this instance-level parameter change, run the following on any of the nodes, then restart each node until the cluster has been cycled to put the change into effect:
$ export ORACLE_HOME=<Grid Home>
$ export ORACLE_SID=<Local ASM SID>
$ sqlplus / as sysasm

alter system set "_asm_allow_small_memory_target"=true scope=spfile;
alter system set memory_target=750m scope=spfile;
alter system set memory_max_target=750m scope=spfile;

Features with high CPU usage can be troubling for most DBAs, but when they’re experienced on development and test databases, which are often granted fewer resources than production to begin with, a change can often enhance the stability and longevity of these environments.

  • Disabled high-res time ticks in all databases, including ASM DBs, regular DBs, and the Grid Infrastructure Management Repository DB (GIMR, SID is -MGMTDB).  High-res ticks are a new feature in 12c, and they seem to cause a lot of CPU usage from cluster time-keeping background processes like VKTM.  Here’s the SQL to disable high-res ticks (must be run once in each DB):
alter system set "_disable_highres_ticks"=TRUE scope=spfile;
After all these changes, the team found the Solaris kernel was still consuming more memory than before the upgrade, but the footprint was more justifiable:
  • Solaris Kernel: 1GB of RAM
  • ARC Cache: between 1 and 2GB
  • Oracle Clusterware: 3GB

Memory Upgrade

We did add memory, but not as much as we expected to.
After all the adjustments, we were still using over 5GB of memory for these three areas, so we upped each node from 8GB to 16GB to ensure enough resources to support all dev and test demands post-upgrade.  We wanted to provision as many virtual databases, (VDBs) as the development and test groups needed, so having more than 3GB free for databases was going to be required!
The Solaris cluster, as of this writing, has experienced no more kernel panics, node evictions or unexpected reboots, which we have to admit is the most important outcome.  It’s far easier to explain to Oracle why we shut down and uninstalled unused features than it is to explain an outage to users…. 🙂

Posted in Delphix, Oracle Tagged with: , ,

August 24th, 2017 by dbakevlar

There was a great post by Noel Yuhanna on how he estimates the number of DBAs required in a database environment by size and number of databases.  This challenge has created a situation where data platforms are searching for ways to remove the roadblock and eliminate the specialized skills needed to manage the database tier.

I’ve been a DBA for almost two decades now, (“officially”, as the date I received an official title and my years of experience differ by a couple of years…)  When I started, we commonly used import and export utilities to move data from one data source to another.  Tim Hall just wrote up a quick review of enhancements to Oracle 12.2 Data Pump, but I felt dread as I read it, realizing that it continues to hold DBAs back with the challenge of data gravity.

Data movement utilities may go through updates over the years and they do have their purpose, but I don’t feel the common challenge they’re used to address is the right one.  Taking my two primary database platforms as an example: for Oracle, we went from Import/Export to SQL*Loader to Data Pump, and for SQL Server, we went from BCP, (Bulk Copy Program) to Bulk Inserts to a preference for the SQL Server Import/Export Wizard.

Enhancements?

Each of these products has introduced GUI interfaces, (as part of a support product or third-party product), pipe functions, parallel processing and other enhancements.  They’ve added advanced error handling and automatic restart.  Oracle introduced the powerful transportable tablespaces and SQL Server went after filegroup moves, (very similar concepts, grouping objects by logical and physical naming to ease management.)

Now, with the few enhancements that have been added to data movement utilities, I want you to consider this-  If we focus on data provided by Forbes,

  • there is 1.7MB of data, per person, per second generated in the world today.
  • That data has to be stored somewhere.

No matter if it’s relational, big data or another type of data store, SOME of that data is going to end up in the two RDBMSs I used in my example.  The natural life of a database is GROWTH.  The enhancements to these archaic data movement utilities haven’t kept up with the demands of data growth, and they never will.  So why are we still advocating their use to migrate from one database to another?  Why are we promoting them for cloud migrations?

This Is(n’t) How We Do It

I’m seeing this recommendation all too often in documentation for products and best practice.  Oracle’s Migration steps for 11g to the cloud demonstrates this-

  • DataPump with conventional export/import
  • DataPump transportable tablespace
  • RMAN Transportable tablespace
  • RMAN CONVERT transportable tablespace with DataPump

These tools have been around for quite some time and yes, they’re trusted sidekicks that we know will save the day, but we have a new challenge when going to the cloud-  along with data gravity, we have network latency and network connectivity issues.

Depending on the size, (the weight) of the data that has to be transferred, Database as a Service can turn into a service for no one.  Failures, requiring us to run a 10046 trace to try to diagnose a failed Data Pump job, with the weight of data gravity behind it, can delay projects and cause project creep in a way that many in IT aren’t willing to wait for, and the role of the DBA reaches a critical threshold again.

I’m not asking DBAs to go bleeding edge, but I am asking you to embrace tools that other areas of IT have already recognized as game changers.  Virtualize, containerize and, for the database, that means data pods.  Migrate faster- all data sources, applications, etc. as one- and then deliver a virtual environment, giving yourself the time to “rehydrate” to physical without other resources waiting on you- the DBA, so often viewed as the roadblock.  Be part of the answer, not part of the problem that archaic import/export tools introduce because they aren’t the right tool for the job.

Posted in Cloud, DBA Rants, Delphix, Oracle Tagged with: , ,

August 23rd, 2017 by dbakevlar

Even though my social media presence is pretty public on Twitter and LinkedIn, I’m significantly conservative with other personal and financial data online.  The reversal of the Internet Privacy Rule, (I’ve linked to a Fox News article, as there was so much negative news on this one…) had everyone pretty frustrated, but then we need to look at the security of personal information, especially financial data, and as we can see from the security breaches so far in 2017, we all have reason to be concerned.

The EU has taken the opposite approach with the right to be forgotten, along with the General Data Protection Regulation, (GDPR.)  Where we seem to be taking a lesser, bizarre path to security, the rest of the world is tightening up.

For the database engineer, we are

Responsible for the data, the data access and all of the database, so help me God.

As the gatekeepers of the company’s data, security had better be high on our list throughout our careers.  There are a lot of documents and articles telling us to protect our environments, but often when we go to the business, the high cost of security products can make them hesitate to invest in them.

My Example

I’m about to use just one of the top 15 security breaches of all time as an example, but seriously, the Sony PlayStation Network breach was pretty terrifying and an excellent example of why we need to think more deeply about data security.

Date of Discovery: April, 2011
How many Users Impacted: 77 million PlayStation Network individual accounts were hacked.

How it went down:  The Sony PlayStation Network breach is viewed as the worst gaming community data breach in history.  Hackers made off with 12 million unencrypted credit card numbers as part of the data they accessed.  They also retrieved users’ full names, passwords, e-mails and home addresses, along with their purchase history and PSN/Qriocity logins and passwords.  There was an estimated loss of $171 million in revenue while the site was down for over a month.

As a customer, my kids always wonder why I limit where I submit my data online.  So often companies offer me the option to store my credit card information in their system for payment, and I won’t.  The above is a great example of why.  The convenience isn’t worth the high cost of lax or unknown security measures.

John Linkous of eIQnetworks stated, “It’s enough to make every good security person wonder, ‘If this is what it’s like at Sony, what’s it like at every other multi-national company that’s sitting on millions of user data records?’”

As only certain environments were left unprotected, and only specific ones lacked encryption, it’s a reminder to those in IT security to identify and apply security controls consistently across environments and organizations.

How to Protect Data

There are some pretty clear rules of thumb when protecting data-

  • Roles, Privileges and Grants

Utilize the database’s and application’s full security features to ensure that the least privileged access is granted to each user.  As automation and advanced features free up more time to allocate to the important topic of security, build out a strong foundation of security features to ensure you’ve protected your data to the highest degree, (a combined sketch of the first three items follows this list.)

  • Audit regularly

The database offers full auditing features to ensure compliance and verify who has which access and privileges.  You should know who has access to what, whether any privileges change and whether changes are made by users other than the appropriate ones.

  • Encrypt production

Use powerful encryption methods to secure your production system.  Encryption changes the data to an unreadable format until a key is submitted to return the data to its original, readable format.  Encryption can be reversed, but strong encryption methods can offer advanced security against breaches.  Auditing should also show who is accessing the data and alert upon a suspected breach.

  • Mask Non-production

Often 80% of our data resides in non-production copies.  Most users, stakeholders and developers may not recognize the risk to the company the way they would with the production environment.  Remove the responsibility and unintentional risk by masking the data with a masking tool that includes a full auto-discovery process and templates, making it easily repeatable and dynamic.
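To make the first three items a little more concrete, here’s a minimal Oracle-flavored sketch.  The role, user, table, column and tablespace names are all hypothetical, and the encryption statements assume a TDE keystore/wallet is already configured and open:

$ sqlplus / as sysdba

-- Least privilege: a narrow role instead of broad grants
CREATE ROLE app_read_only;
GRANT CREATE SESSION TO app_read_only;
GRANT SELECT ON hr.employees TO app_read_only;
GRANT app_read_only TO report_user;

-- Audit regularly: review who holds powerful roles and direct system privileges
SELECT grantee, granted_role FROM dba_role_privs
 WHERE granted_role IN ('DBA', 'IMP_FULL_DATABASE');
SELECT grantee, privilege FROM dba_sys_privs
 WHERE grantee NOT IN ('SYS', 'SYSTEM') ORDER BY grantee, privilege;

-- Encrypt production: TDE at the column or tablespace level
ALTER TABLE hr.employees MODIFY (ssn ENCRYPT USING 'AES256');
CREATE TABLESPACE secure_data DATAFILE '/u02/oradata/secure_data01.dbf' SIZE 100M
 ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);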

In 2014, Sony agreed to a preliminary $15 million settlement in a class action lawsuit over the breach, which brings the grand total to just over $186 million in losses for the Sony PlayStation Network.

If you think encryption and masking products are expensive, recognize how expensive a breach is.

Posted in Data Masking, Delphix, Oracle Tagged with: ,

August 16th, 2017 by dbakevlar

I’m off to Columbus, Ohio tomorrow for a full day of sessions on Friday for the Ohio Oracle User Group.  The wonderful Mary E. Brown and her group have set up a great venue and a fantastic schedule.  Next week, I’m off to SQL Saturday Vancouver to present on DevOps for the DBA to a lovely group of SQL Server attendees.  It’s my first time in Vancouver, British Columbia and, as it’s one of the cities on our list of potential future places to live, I’m very excited to visit.

Speaking of SQL Server-  Delphix‘s own SQL Server COE, (Center of Excellence) meets twice a month to discuss various topics surrounding our much-loved Microsoft offering.  This week, one of the topics was a change made to the permissions of the Backup Operator role between SQL Server 2008R2 and SQL Server 2012.  The feature in question, referred to as “File Share Scoping”, was unique to 2008R2 clusters and no longer exists.

Now many may say, “but this is such an old version.  We’ve got SQL Server 2017, right?”  The challenge is, there are folks out there still running 2008 instances, and it’s good to know about these little changes that can have a big impact on your dependent products.  This change affected products that use shared backup file systems and, as we know, having access to a backup can offload a lot of potential load from a system.

Now, for my product, Delphix, we depend on read access to backup files for the initial creation of the “golden copy” that we source everything from.  The change in SQL Server 2012 from the previous File Share Scoping in 2008R2 applies only to Microsoft failover clusters: access is now offered only to those with Administrator rights, where previously anyone with the Backup Operator role could attain access, too.

Our documentation clearly states that during configuration of a Delphix engine for validated sync, (creation of the golden copy) the customer must grant read access to the backup shares for the Delphix OS user, and it doesn’t say to grant Backup Operator.  As with everything, routine can spell failure: the Backup Operator role previously provided this access on 2008R2, and it was easy to assume the configuration was complete once the database-level role grants were done.

Using PowerShell from the command line, note that in the newer release you can’t view the root of the shared drive with just the Backup Operator file server role.

PS C:\Users\user> Get-SmbShareAccess -name "E$" | ft -AutoSize

Name ScopeName   AccountName              AccessControlType AccessRight
---- ---------   -----------              ----------------- -----------
E$   USER1-SHARE BUILTIN\Administrators   Allow             Full
E$   *           BUILTIN\Administrators   Allow             Full
E$   *           BUILTIN\Backup Operators Allow             Full
E$   *           NT AUTHORITY\INTERACTIVE Allow             Full

If you’d like to read more details on backup and recovery changes from SQL Server 2008R2 to 2012, check out the documentation from Microsoft here.

 

Posted in Database, Delphix, SQLServer

August 10th, 2017 by dbakevlar

In my latest blog post on the Delphix site, I continue my conversation with why DevOps is the next step for DBAs and how DBAs can embrace this next step in their evolution.

This is an extensive series of blog posts, (four so far) to be followed by an ebook, a podcast and two webinars.  One will be announced soon from Oracle, called “The DBA Diaries”, and the other will be from Delphix, titled “The Revolution:  From Databases and DevOps to DataOps”.

The goal of all of this is to ease the transition for the database community as the brutal shift to the cloud, now underway, changes our day-to-day lives.  Development continues to move at an ever-accelerating pace, and yet the DBA is standing still, waiting for the data to catch up with it all.  This is the concept that many refer to as “data gravity”.

The concept was first coined just a few years ago by a Senior VP Platform Engineer, Dave McCrory.  It was an open discussion aimed at understanding how data impacted the way technology changed when connected with network, software and compute.

He discusses the basic principle that “the speed with which information can get from memory (where data is stored) to computing (where data is acted upon) is the limiting factor in computing speed,” known as the von Neumann bottleneck.

These are essential concepts that I believe all DBAs and developers should understand, as data gravity impacts all of us.  It’s the reason for many enhancements to database, network and compute power.  It’s the reason optimization specialists are in such demand.  Other tasks such as backup, monitoring and error handling can be automated, but no matter how much logic we drive into programs, nothing is as good as true optimization skill when it comes to eliminating data gravity issues.  Less data, less weight-  it’s as simple as that.

We all know the cloud discussions are coming, and with them, even bigger challenges from data gravity.  Until then, let’s take a step back and recognize that we need some new goals and some new skills.  If you’d like to learn more about data gravity, but don’t have time to take it all in at once, consider following the topic on Twitter, where it’s curated by Dave McCrory.

I’m off to Jacksonville, Fl. tomorrow to speak at SQL Saturday #649!

 

 

 

Posted in Database, DBA Life, Delphix, Oracle, SQLServer Tagged with: , , ,

August 7th, 2017 by dbakevlar

It’s finally time to upgrade our Linux Target!  OK, so we’re not going to upgrade the way a DBA would normally upgrade a database server when we’re working with virtualization.

So far, we’ve completed:

  • 1.  Updated our instances so that we’ll have a GUI interface if we need one.
  • 2.  Installed Oracle on the Linux Source and upgraded our Dsource database to 12c.

 

Now we’re done with our Linux Source and onto our Linux Target.

Install and Configure VNC and Oracle

We’ll run through the VNC Viewer installation and configuration requirements just like we did in Part I and Part II.  We’ll also install Oracle, but this time it’s a software-only installation.

We’ll install the Enterprise Edition and we’ll make sure to install it in the same path as we did on our Linux Source, (/u01/app/oracle/product/12.1/db_1)  We’re not installing the multi-tenant, as we didn’t configure this on our source, either.

Once that is complete, it’s time to get our VDBs upgraded.

The first thing you need to remember is that the VDBs are simply virtual images of our Dsource, which is already UPGRADED.

Add the New Oracle Home to the Linux Target

Log into Delphix Admin Console and click on Environments.

Click on the Linux Target and then click on the refresh button:

Click on the Databases tab and you’ll see the DB12c Oracle home is now present in the list:

Prep VDBs for switch to new home

Copy your environment profile from 11g.env to 12c.env.  Update the Oracle home to point to the new 12c home and save the file.
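A minimal sketch of that step, (the file names follow this series’ convention; the Oracle home path shown is the one used earlier in the series, so adjust it to your environment):

$ cp 11g.env 12c.env
$ vi 12c.env
# update the Oracle home entries to the new 12.1 install, e.g.:
#   export ORACLE_HOME=/u01/app/oracle/product/12.1/db_1
#   export PATH=$ORACLE_HOME/bin:$PATH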

Now I have three VDBs on this target:

[delphix@linuxtarget ~]$ ps -ef | grep pmon

delphix   7501     1  0 Jul12 ?        00:01:17 ora_pmon_devdb
delphix   8301     1  0 Jul06 ?        00:01:49 ora_pmon_VEmp6
delphix  16875     1  0 Jul05 ?        00:01:57 ora_pmon_qadb

Log into the Linux Target and, from the command line, set the environment, log into each database via SQL Plus and shut it down.

. 11g.env

export ORACLE_SID=VEmp6f
sqlplus / as sysdba
shutdown immediate;
exit;

and so on and so forth…. 🙂
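If you’d rather script the “so on and so forth”, here’s a minimal sketch- the SID list is purely illustrative, so match it to your own pmon output above:

$ . 11g.env
$ for sid in devdb qadb VEmp6f; do
    export ORACLE_SID=$sid
    echo "shutdown immediate;" | sqlplus -s / as sysdba
  done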

Copy all the init files for the VDBs from the 11g Oracle Home’s dbs directory over to the 12c Oracle Home’s dbs directory.
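For the copy itself, a minimal sketch, (the 11g home path here is an assumption- adjust both paths to match your environment):

$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/init*.ora /u01/app/oracle/product/12.1/db_1/dbs/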

And this is where it all went wrong for two of the VDBs…

Back on the Delphix Admin Console, click on Manage –> Datasets

Click on each of the VDBs involved.  Click on Configuration –> Upgrade, (up arrow icon) and say yes to upgrading.  Update the Oracle home from the 11g in the drop down to the new 12c Oracle home and click the gold check mark to confirm.

OK, so this is what I WOULD have done for all three VDBs if I’d been working with JUST VDBs, but this is where it gets a bit interesting and I had to go off the beaten path for a solution.  I’m unsure if this is documented anywhere inside Delphix, (Delphix Support is awesome, so I’m sure they already know this) but for my own sanity, here’s the situation and the answer.  The environment I’m working on is built off of AWS AMIs that consist of Delphix containers.  Containers are very powerful, allowing you to “package” up a database, application and other tiers of a virtualized environment, offering the ability to manage it all as one piece.  This was the situation for the qadb and devdb instances.

Due to this, I couldn’t run the upgrade inside the Delphix Admin console since these two VDBs were part of Delphix “data pods.”  The following are the steps to then address this change.

Remove the Containers, (Subsequently the VDBs as Well!)

  1. Log into Delphix’s Jet Stream.
  2. In the upper right hand corner, click on Usage Overview.
  3. Scroll down and click on Employee Application, (it’s the template for the VDBs in question.)
  4. At the bottom of this page, you’ll see the two containers that include the VDBs.  To the right, there is a trash can icon for delete.  (The reason deleting is an option is that I have a template built for this container, and it will be very simple to recreate a VDB and vfile to repopulate it- a matter of minutes, max.)

  5. Delete the two containers that control the administration of these two VDBs, which are still pointing to the 11g home.

Create the New VDBs and Virtualized Application, (Vfiles)

Now, log into the Delphix Admin console.

  1. Provision two VDBs from the orcl source db, one as devdb and another as qadb, just as it was before, both on the Linux target.
  2. Provision two vfiles of the Web Application, one as QA_Web and the other as DEV_Web, port numbers 1080 and 2080, keeping all other defaults.
  3. Once completed, (a couple of minutes, max) let’s return to Jet Stream and create the containers that will house the system.

Create the New Containers From the Template

In Jet Stream

  1. Click on Data Management in the upper right hand corner
  2. You will be brought to the Templates tab, click on the Employee Application template
  3. Click on Add Container

  1. Name:  “Dev 12c Container”, Owner: Dev and choose the devdb and the DEV_Web for the sources, then complete the container creation.
  2. Click again on Add Container
  3. Name: “QA 12c Container”, Owner: QA and choose the qadb and the QA_Web for the sources, then complete the container creation.

This will take just a moment to finish creating and that’s all there is to it.  You now have two DB12c environments that are completely containerized and upgraded from their previous 11g state.

Our source shows we’re using our DB12c upgraded database:

And we can also see everything is upgraded and happy in our Delphix Administration Console.  

It may have taken a little longer for me to upgrade with the complexity of the containers introduced, but the power of data pods is exceptional when we’re managing our data, the database and the application as one piece anyway.  Shouldn’t we treat it as one?

 

 

 

 

 

 

 

Posted in AWS, Delphix, Oracle

August 3rd, 2017 by dbakevlar

I’ve been asked what it takes to be a successful evangelist, and I’ve realized that defining what makes one successful at it is often like holding sand in your hands- no matter how tightly you close your fists, it’s difficult to contain the grains.

The term evangelist is one that receives either very positive or very negative responses.  I’m not a fan of the term, but whether you use it or call them advocates, representatives or influencers-  it doesn’t matter; they are essential to the business, product or technology they become the voice for.

Those that I view as successful evangelists in the communities that I am part of?

There are a number of folks I’m sure I’ve missed whom I also admire as I interact with them and observe their contributions, but these are a few that come to mind when I think of fellow evangelists.

What makes an evangelist successful?  It may not be what you think.

1. It’s Not Just About the Company

Most companies think they hire an evangelist to promote and market the company, and yet, when all you do is push out company info and company marketing- people STOP listening to you.  What you say, do and are interested in should drive people to want to know more about you, including the company you work for and what that company does.

All of these folks talk about interests outside of work.  They post about their lives and their interests and contribute to their communities.  This is what it means to be really authentic and to set an example.  People want to be more like them because they see the value they add to the world beyond just talking points.

2.  They’re Authentic

Authenticity is something most find very elusive.  If you’re just copying what someone else does, there’s nothing authentic about that.  There’s nothing wrong with finding a tip or tidbit that someone else is doing and adopting it, but it has to WORK for you.  I was just part of a conversation yesterday where Jeff and I were discussing that he doesn’t use Buffer, (a social media scheduling tool) whereas I live by it.  It doesn’t work for Jeff and there’s nothing wrong with that.  We are individuals, and what makes us powerful evangelists is that we’ve figured out what works for each of us.

3.  In the Know

As a technical evangelist, you can’t just read the docs and think you’re going to be received well.  Theory is not practice, and I’ve had a couple of disagreements with managers explaining why I needed to work with the product.  I’ve had to battle for hardware to build out what I was expected to talk on; only once did I not fight for it, and I paid for it drastically.  I won’t write on a topic unless I can test it out on my own.  Being in the trenches provides a point of view no document can provide.

Documentation is secondary to experience.

4.  Your View is Outward

This is a difficult one for most companies when they’re trying to create evangelists from internal employees.  Those who may be deeply involved at the company level may interact well with others, but won’t redirect to an external view.  I’ve had people ask me why my husband isn’t doing as much as I am in the community.  Due to his position, he must be more internally focused and customer facing.  My job is very separate from that of my fellow employees.  I must always be focused outward and interact at least 95% of my time with the community.  You’ll notice all of the folks listed are continually interacting with people outside of their company and are considered very “approachable.”

We volunteer our time in the community- user groups, board of directors, events and partnering with companies.  We socialize, as we know our network is essential to the companies we represent.

5.  We Promote

I wish I did more public promotion like I see from some of these other folks.  I’m like my parents-  I stand up for others and support them on initiatives and goals.  I do a lot of mentoring, but less of it when I’m blogging.  My mother was never about empty compliments and I take after her on this.  I’m just not very good at remembering to compliment people on social media and feel I’m lacking in this area, but I continually watch others do this for folks in the community and it is so important.

We make sure to work with those who may need introductions in our network or support in the community, and we reach out to offer our help.  In the public view, this is quite transparent, so when others pay it forward or return the favor, it can appear that people just bend over backwards for us, but often we have been there for the folks in question in the past, with no expectations, and people remembered this.

We do promote our companies, but for the right reasons- the company has done something good for the community or has something special going on.  Rarely do we push out pure marketing, as it just doesn’t come across very well from us.  It’s not authentic.

Additional Recommendations

  • Refrain from internet arguments, social media confrontations

I’m not saying to be a pushover.  I literally have friends muted and even blocked.  There’s nothing wrong with NOT being connected to individuals that have very different beliefs or social media behavior.  You shouldn’t take it personally– this is professional and you should treat it as such.

You may find, (especially for women and people of color) that certain individuals will challenge you on ridiculous topics and battle you on little details.  This is just the standard over-scrutinizing that we go through, and if it’s not too bad, I tell people to just ignore it and not respond.  If it escalates, don’t hesitate to mute or block the person.  You’re not there to entertain them, and by removing your contributions from their feed- “out of sight, out of mind”- you offer peace to both of you… 🙂

  • Use automation tools and send out content that INTERESTS YOU.

Contribute what you want, limit company content to a certain percentage of what you send out, and be authentic.  Find your own niche and space and don’t send out “noise.”

There are a ton of tools out there.  Test out Buffer, Hootsuite, Klout or SumAll to make social media contributions easier.  If you don’t have a blog, create one and show what you’re working on, and don’t worry about the topic.  You’ll be surprised: if you just write about the challenges you’re facing, how you’ve solved a problem you’ve come across, or a topic you couldn’t find a solution for online, people will find value in your contributions.

  • Interact and be receptive of others

Have fun with social media and have real conversations.  People do appreciate honesty with respect.  Answer comments and questions on your blog.  Respond to questions on forums for your product and promote other people’s events and contributions.

When people approach you at an event or send you a direct message, try to engage with them and thank them for having the guts to come up and speak with you.  It’s not easy for most people to approach someone they don’t know.

  • Volunteer and Contribute

We used to be part of our communities and, as our world has changed with technology, the term community has changed.  These communities wouldn’t exist without the contributions of people.  Volunteer to help with user groups, events and forums.  Don’t just volunteer to be on a board of directors and not do anything.  It’s not something to just put on your CV and think you’re contributing.  There is incredible power in the simple act of doing, so DO.  Provide value and ask how you can help.  Kent has been a board member, a volunteer and even a president of user groups.  Jeff has run content selection and run events even though he’s limited in what he’s allowed to do as an Oracle employee, and Rie promotes information about every woman speaker at SQL Saturday events, along with all she does to run the Atlanta SQL Saturday, (the largest one in the US!)  I won’t even try to name all the different contributions that Grant is part of, including the new attendees event at PASS Summit, (Microsoft’s version of Oracle Open World for my Oracle peeps!)

For those companies thinking, “I hired an evangelist, so I want them to be all about me and fully invested in the company”-  if that’s the approach, you’ll never have a successful evangelist who is embraced by the community and able to promote your product and company in a powerful, grassroots way.  If their eyes are always looking inside, they will miss everything going on outside, and as we all know, technology moves fast.  Look away and you’ll miss it.

 

Posted in DBA Rants, Delphix, Oracle, SQLServer Tagged with:

July 31st, 2017 by dbakevlar

This is Part III in a four-part series on how to:

  1.  Enable VNC Viewer access on Amazon EC2 hosts.
  2.  Install DB12c and upgrade a Dsource for Delphix from 11g to 12c, (12.1)
  3.  Update the Delphix Configuration to point to the newly upgraded 12c database and the new Oracle 12c home.
  4.  Install DB12c and upgrade target VDBs for Delphix residing on AWS to 12.1 from the newly upgraded source.

In Part II, we finished upgrading the Dsource database, but now we need to get it configured on the Delphix side.

Log into the Delphix Admin console to make the changes required to recognize the Dsource is now DB12c and has a new Oracle home.

Log into the Delphix console as the Delphix_Admin user and go to the Manage –> Environments.

Click on the Refresh button and let the system recognize the new Oracle Home for DB12c:

Once complete, you should see the 12.1 installation we performed on the Linux Source now listed in the Environments list.

Click on Manage –> Datasets and find the Dsource 11g database and click on it.

Click on the Configuration tab and click on the Upgrade icon, (a small up arrow in the upper right.)

Update to the new Oracle Home that will now be listed in the dropdown and scroll down to save.

Now click on the camera icon to take a snap sync to ensure everything is functioning properly.  This should only take a minute to complete.

The DSource is now updated in the Delphix Admin console and we can turn our attention to the Linux target and our VDBs that source from this host.  In Part IV we’ll dig into the other half of the source/target configuration and how I upgraded the Delphix environments, with a few surprises!

 

Posted in AWS, Delphix, Oracle Tagged with: , , ,

July 26th, 2017 by dbakevlar

I’m finally getting back to upgrading the Linux Source for a POC I’m doing with some folks, picking up from where we left off in Part I.

Address Display Issue

Now that we have our VNC Viewer working on our Amazon host, the first thing we’ll try is to run the Oracle installer, (unzipped location –> database –> runInstaller) but it’s going to fail because we’re missing the xdpyinfo utility.  To verify this, you’ll need to open up a terminal from Applications –> System Tools –> Terminal:

$ ls -l /usr/bin/xdpyinfo
 ls: /usr/bin/xdpyinfo: No such file or directory

We’ll need to install this with yum:

$ sudo yum -y install xorg-x11-utils

Once we’ve completed this, let’s verify our display:

$ echo $DISPLAY

:1.0 <– (an empty host portion means local, and the first number is the display- just as ipaddress:display is used for your VNC Viewer connection.)

If it’s correct, you can test it by executing xclock:

$ xclock

The clock should appear on the screen if the display is set correctly.

Install Oracle 12c

Run the installer:

$ ./runInstaller

The installer for Oracle 12c will come up and you can choose to enter your information, but I chose to stay uninformed… 🙂  I chose to install AND upgrade the database from 11g to DB12c.

I chose to ignore the warnings for swap and the few libraries by clicking Ignore All, and proceeded with the installation.

Root.sh and the Trace Analyzer

Once the installation of the new Oracle Home is complete, choose to run the root.sh script when prompted:

$ sudo /u01/app/oracle/product/12.1/db_1/root.sh

Overwrite all files when prompted by the script.  It’s up to you, but I chose to install the Oracle Trace File Analyzer so I can check it out at a later date.  You’ll then be prompted to choose the database to upgrade.  We’re going to upgrade our source database, ORCL, in this example.

Upgrade Our Oracle DSource (Database)

Choose to proceed with the upgrade of the database, but know that you’ll require more space for the archive logs generated during the upgrade.  The check will tell you how much to add, but I’d add another 1GB to ensure you’re prepared for the other steps we have to run as we go through the preparation.

Log into SQL Plus as SYSDBA to perform this step:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=8100M;

Go through any warnings, but steps like stats collection and grants on triggers will have to be performed post the upgrade.

Drop the OLAP catalog:

$ sqlplus / as sysdba

@$ORACLE_HOME/olap/admin/catnoamd.sql

exit

Remove the OEM catalog for Enterprise Manager, first shutting down the console from the terminal:

$ emctl stop dbconsole

Copy emremove.sql from the 12c Oracle Home’s rdbms/admin directory and place it in the same location in the 11g home.  Log into SQL Plus as SYSDBA:

SET ECHO ON;

SET SERVEROUTPUT ON;

@$ORACLE_HOME/rdbms/admin/emremove.sql

Empty the recyclebin post these steps:

purge recyclebin;

The assumption is that you have a backup prepared, or that you can use Flashback with the resources allocated, before proceeding with the upgrade.

Choose to upgrade the 11g listener, and choose to install EM Express if you’d like to have it for monitoring.  Make sure to keep the default checkboxes in the following window to update everything we need and to collect stats before the upgrade runs, ensuring it proceeds efficiently through all required objects.

Choose to proceed with the upgrade and, if you’ve followed these instructions, you should end up with a successful installation of DB12c and upgrade of the database.  Keep in mind, we’re not going multi-tenant in this upgrade example, so if you were looking for those steps, the POC I’m building isn’t going to take that on in this set of blog posts.

Post Upgrade Steps:

Update your environment variables, including copying the 11g.env to a new profile called 12c.env and updating the Oracle Home.  Now set your environment and log into SQL Plus as SYSDBA to the upgraded database.

Update all the necessary dictionary and fixed stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
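One optional sanity check at this point, (not a required step, just a common post-upgrade habit) is to confirm that every component in the registry reports the new version and a VALID status:

SELECT comp_id, version, status FROM dba_registry ORDER BY comp_id;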

Now, for our next post, we’ll need to set up the same Amazon host and VNC Viewer configuration we did for the Source, then install Oracle DB12c on our target server as we did in this post.  After that, we’ll discuss how to get all our Delphix VDBs, (virtual databases) upgraded to match our source in no time!

 

Posted in AWS, Delphix, Oracle Tagged with: , , ,

June 28th, 2017 by dbakevlar

I’ve been at KSCOPE 2017 all week and it’s been a busy schedule even with only two sessions.  It’s just one of those conferences with so much going on all the time that the days speed by at 140MPH.

As with most major conferences, KSCOPE abstract submission was about nine months ago.  That was a time when I was just coming to grips with how much Delphix could solve in the IT environment, and this may have been one of the first abstracts I submitted as a Delphix employee.  I wasn’t too thrilled with my choice, and I thank KSCOPE for still accepting me as a speaker, because the end product I presented was very different from what I submitted.

I was in the DBA track, but after arriving, I started to push out on my network that I was building a developer-perspective presentation around our development/tester interface, Jet Stream.  One of the challenges I experienced as a developer, as a release manager and in my many years as a database administrator was releasing to multiple tiers.  This complexity has become an increasing pain point with the introduction of agile methodologies.

The demonstration example was an agile development team made up of a web developer, a tester and a database administrator.  They were to bulk load new employee data to save the business from having to enter the data manually, and to ensure that the employee ID numbering stayed sequential.

I proceeded to build out an environment in the cloud this week to represent this scenario.  It consisted of our common employee web application, an Oracle database to store the employee information, some structured flat files of employee data and some scripts to load this information from the files.  These were then used to create development and test environments, and containers were created to track changes via Jet Stream as each new step of development and testing occurred.

In newer agile development shops, the DBA may also be an integral part of the development team, and this scenario demonstrated how the way we solve problems in the database can conflict with how an application was designed to function, causing downtime.  With a container made up of a virtual database, (VDB) and virtual file directories, (vfiles)-  virtualized development environments that are complete read/write copies to use for development and test-  we were able to use Jet Stream to version control not just the code changes, but the data changes, too.

In my demo, I showed how the bulk load process, designed by the DBA, had created a sequence and trigger on the employees table to populate it with data from the structured flat file, and then proceeded to load it.  The data loaded without issue and the employee ID was now sequential-  requirement solved and job complete.  It was then simple to create a bookmark in the timeflow, noting what had been done in that iteration.

The problem was that, after the bulk load of the data, the change actually broke the application and no new employees could be added through the interface.  We proved this by attempting to add an employee in the application and then querying the database to verify that it wasn’t simply the application failing to display the new employee.

I was able to demonstrate in Jet Stream that, instead of using rollback scripts and backing files out to previous versions, I could quickly “rewind” to the “Before 2.7 release” bookmark, and all tiers of the container were reset to before the data load, saving considerable time and resources for the agile team.

If you’d like to learn more about Jet Stream, Templates, Containers or how this can provide incredible value in our hectic agile DBA and development lives, check out the following links:

Delphix Jet Stream PDF

Valuable Jet Stream Concepts

Jet Stream Container Overview

 

 

Posted in Delphix, Oracle Tagged with: , ,

May 10th, 2017 by dbakevlar

I love questions-  They give me something to write about that I don’t have to come up with from my experience or challenges…:)

So in my last post, Paul asked:

I am not sure what happens to the other changes which happened while the release was happening? Presumably they are also lost? Presumably the database has to go down while the data is reverted?

The Setup

In our scenario to answer this question, I’m going to perform the following on the VEmp_826 virtualized database:

  1. Add a table
  2. Add an index
  3. Include transactions, both inserts and deletes
  4. Rewind the database using the Admin Console

As I’m about to make these changes to my database, I take a snapshot which is then displayed in the Delphix Admin console using the “Camera” icon in the Configuration pane.

Note the SCN Range listed on each of them.  Those are the SCNs available in that snapshot, and the timestamp is listed as well.

Now I log into my target host that the VDB resides on.  Even though this is hosted on AWS, it really is no different for me than logging into any remote box.  I set my environment and log in as the schema owner to perform the tasks we’ve listed above.

Create Table

So we’ll create a table, index and some support objects for our test:

CREATE TABLE REW_TST
(
C1 NUMBER NOT NULL
,C2 VARCHAR2(255)
,CDATE TIMESTAMP
);

CREATE INDEX PK_INX_RT ON REW_TST (C1);
CREATE SEQUENCE RT_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER RW_BIR
BEFORE INSERT ON REW_TST
FOR EACH ROW
BEGIN
  SELECT RT_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

Now that it’s all created, we’ll take ANOTHER snapshot.

Add Data to Kinder Table

This snapshot takes just a couple of seconds, comes about 10 minutes after our first one, and contains the changes made to the database since the initial snapshot.

Now I’ll add a few rows in another transaction to the KINDER_TBL from yesterday:

INSERT INTO KINDER_TBL
VALUES (1,dbms_random.string('A', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (2,dbms_random.string('B', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (3,dbms_random.string('C', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (4,dbms_random.string('D', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (5,dbms_random.string('E', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (6, dbms_random.string('F', 200), SYSDATE);
INSERT INTO  KINDER_TBL
VALUES (7,dbms_random.string('G', 200), SYSDATE);
COMMIT;

We’ll take another snapshot:

Add Data to the New Table

Now let’s add a ton of rows to the new table we’ve created:

SQL> BEGIN
  FOR ids IN 1..10000 LOOP
    INSERT INTO REW_TST (C2)
    VALUES (dbms_random.string('X', 200));
    COMMIT;
  END LOOP;
END;
/

And take another snapshot.

Now that I have all of my snapshots for the critical times in the release change, there is a secondary option that is available.

Snapshots at the DBA Level

As I pointed out earlier, there is a range of SCNs in each snapshot.  Notice that I can now provision by time or SCN from the Admin Console:

So I could easily go back to the beginning or ending SCN of any snapshot, OR I could click the up/down arrows or type the exact SCN I want to pinpoint for the recovery.  Once I’ve decided on the correct SCN to recover to, I click on Refresh VDB and it will go to that SCN- just like doing a recovery via RMAN, but instead of having to type out the commands and work within the time constraints, this is an incredibly quick recovery.
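For comparison, the hand-typed RMAN point-in-time equivalent of that single click would look something like the sketch below.  Everything here is illustrative, (the SCN is the one that shows up in the alert log excerpt further down) and it is not what Delphix generates under the covers:

$ rman target /

run {
  startup force mount;
  set until scn 2258846;
  restore database;
  recover database;
  sql 'alter database open resetlogs';
}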

Notice that I can go back to any of my snapshots, too.  For the purpose of this test, we’re going to go back to just before I added the data to the new table by clicking on the Selected SCN and clicking Rewind VDB.

Note that now this is the final snapshot shown in the system, no longer displaying the one that was taken after we inserted the 10K rows into REW_TST.

If we look at our SQL*Plus connection to the database, we’ll see that it’s no longer connected from our 10K row check on our new table:

And if I connect back in, what do I have for my data in the tables I worked on?

Pssst-  there are fourteen rows instead of seven because I inserted another 7 yesterday when I created the table… 🙂

I think that answers a lot of the questions posed by Paul, but I’m going to jump in a little deeper on one-

Database Outage During a Rewind

Yes, the database did experience an outage as the VDB was put back to the point in time or SCN requested via the Delphix user interface or command line interface.  You can see this by querying the database:

SQL> select to_char(startup_time,'DD-MM-YYYY HH24:MI:SS') startup_time
 from v$instance;
STARTUP_TIME
-------------------
10-05-2017 12:11:49

The entire database is restored back to this time.  The Dsource- the database the VDB is sourced from, which keeps track of everything going on in all the VDBs- has pushed the database back, yet the snapshots from before this time still exist, (tracked by the Delphix Engine.)

If you view the alert log for the VDB, you’ll also see the tail of the recovery steps, including our requested SCN:

alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover datafile list
 1 , 2 , 3 , 4
Completed: alter database recover datafile list
alter database recover if needed
 start until change 2258846
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 2 slaves
Wed May 10 12:11:05 2017
Recovery of Online Redo Log: Thread 1 Group 3 Seq 81 Reading mem 0
  Mem# 0: /mnt/provision/VEmp_826/datafile/VEMP_826/onlinelog/o1_mf_3_dk3r6nho_.log
Incomplete recovery applied all redo ever generated.
Recovery completed through change 2258846 time 05/10/2017 11:55:30
Media Recovery Complete (VEmp826)

Any other changes around the one you’re tracking are impacted by a rewind, so if two developers are working on the same database, they could impact each other- but with the small footprint of a VDB, why wouldn’t you just give them each their own VDB and merge the changes at the end of the development cycle?  One of the glorious reasons for adopting virtualization technology is the ability to work in two-week sprints and be more agile than our older, waterfall methods that are laden with problems.

Let me know if you have any more questions-  I live for questions that offer me some incentive to go look at what’s going on under the covers!

Posted in Database, Delphix, Oracle Tagged with:

May 9th, 2017 by dbakevlar

Different combinations in the game of tech sometimes create a winning roll of the dice and other times create a loss.  Better communication between teams offers a better chance of avoiding holes in your development cycle tooling, especially when DevOps is the solution you’re striving for.

It doesn’t hurt to have a map to help guide you.  This interactive map from XebiaLabs can help offer a little clarity to the solutions, but there are definitely some holes in multiple places that could be clarified a bit more.

The power of this periodic table of DevOps tools isn’t just that the tools are broken up by type, but that you’re able to easily filter by open source, freemium, paid or enterprise-level tools.  This assists in reaching the goals of your solution.  As we all get educated, I’ll focus horizontally in future posts, but today we’ll take a vertical look at the Database Management tier, where I specialize, first.

Apples aren’t Apfel

When comparisons are made, it's common to be unable to compare apples to apples.  Focusing on the Database Management toolsets, such as Delphix, I can tell you that Redgate is the only one I view as a competitor, and that only happened recently with their introduction of SQL Clone.  The rest of the products shown don't offer any virtualization, (our strongest feature) and we consider Liquibase and Datical partners in many use cases.

Any tool is better than nothing, even one that helps you choose tools.  So let's first discern what the "Database Management" tools are supposed to accomplish and then walk through an example of our own.  The overall goal appears to be version control for your database, which is a pretty cool concept.

DBMaestro

The first product on the list is something I do like, thanks to the natural "control issues" I have as a DBA.  You want to know that changes to a database occurred in a controlled, documented and organized manner.  DBMaestro allows for this and has some pretty awesome ways of doing it.  Considering that DevOps is embracing agility at an ever-increasing rate, having version control capabilities that work with both Oracle SQL Developer and Microsoft Visual Studio is highly attractive.  The versioning is still at the code-change level and not at the data level, but it's still worthy of discussion.

That it offers all of this through a simple UI is very attractive to the newer generation of developers, and DBAs will still want to be part of it.

Liquibase

This is the first of two companies we partner with that are on the list.  It's very different from DBMaestro, as it's the only open source option among the database management products, and it works with XML, JSON, SQL and other formats.  You can build just about anything you require, and the product has an extensive support community, so if you need to find an example, it's pretty simple to do so online.

I really like the fact that Liquibase takes compliance into consideration and can hold SQL changes from executing until the appropriate responsible party approves them.  It may not be version control of your data, but at least you can closely monitor and gate the changes to it.

Where Liquibase partners with Delphix is that we can perform continuous delivery via Liquibase and Delphix can then version control the data tier.  We can be used for versioning, branching and refreshing from a previous snapshot if there was a negative outcome in a development or test scenario, making continuous deployment a reality without requiring “backup scripts” for the data changes.
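As a rough illustration, here's what a single Liquibase changeset can look like in its formatted-SQL flavor; the table, column and author names below are made up for the example rather than taken from any real project:

--liquibase formatted sql

--changeset devuser:1
create table release_notes (id number, note varchar2(4000));
--rollback drop table release_notes;

Pair a changelog like this with a Delphix snapshot or bookmark taken before the release, and you have versioning of both the schema change and the data underneath it.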

Redgate SQL Control

Everybody loves a great user interface, and like most, there's a pretty big price tag that goes along with the ease of use once you add up all the different products offered.  There's just a ton that you can do with Redgate, and you can do most of it for Oracle, MSSQL and MySQL, which is way cool.  Monitor, develop, virtualize: but the main recognition Redgate gets in the periodic table of DevOps tools is for version control and comparisons.  This comes from Redgate's SQL Control product, which offers quite a full suite of tools for the developer and the DBA.

Datical

This is another product that we've partnered with repeatedly.  The idea that we, as DBAs, can review and monitor any and all changes to a database is very attractive to any IT shop.  Having it simplified into a tool is incredibly beneficial to any business that wants to deliver continuously, and when implemented with Delphix, the data can be delivered as fast as the rest of the business.

Idera

Idera's DB Change Manager can give IT a safety net to ensure that the changes intended are the changes that actually happen in the environment.  Idera, just like many of the others on the list, supports multiple database platforms, which is a keen feature of a database change control tool, but it has no virtualization or copy data management, (CDM) tool, or at least not one that exists any longer.

Fitting in

So where does Delphix fit in with all of these products?  We touched on it a little as I mentioned each of these tools.  Delphix is recognized for its ability to deploy, and that it does so as part of continuous delivery is awesome, but as I stated, it's not a direct apples-to-apples comparison: we not only offer version control, we do so at the data level.

Delphix Jet Stream

So let’s create an example-

We can do versioning and track changes across releases by way of our Jet Stream product.  Jet Stream is the much-loved product for our developers and testers.

I've always appreciated any tool set that lets others fish for themselves instead of me fishing for them.  Offering a developer or tester access to an administration console meant for a DBA can only set them up to fail.

Jet Stream's interface is really clean and easy to use.  It has a clear left-hand panel with options to access, and the interaction is direct about what the user will be doing.  I can create bookmarks and name versions, which gives me the ability to mark each milestone during a release.

If a developer is using Jet Stream, they would make changes as part of a release and once complete, create a bookmark, (a snapshot in time) of their container, (A container here is made up of the database, application tier and anything else we want included that Delphix can virtualize.)

We've started a new test run of a new development deployment.  We've made an initial bookmark signifying the beginning of the test and then a second bookmark to say the first set of changes was completed.

At this time, there’s a script that removes 20 rows from a table.  The check queries all verified that this is the delete statement that should remove the 20 rows in question.

SQL> delete from kinder_tbl  where c2 like '%40%';

143256 rows deleted.

SQL> commit;

SQL> insert into kinder_tbl values (...

When the tester performs the release and hits this step, the catastrophic change to the data occurs.

Whoops, that's not 20 rows.

Now, the developer could grovel to the DBA to use archaic processes like flashback database or, worse, import the data back into a second table and merge the missing rows, etc.  There are a few ways to skin this cat, but what if the developer could recover the data themselves?

This developer was using Jet Stream and can simply go into the console, where they’ve been taking that extra couple seconds to bookmark each milestone during the release to the database, which INCLUDES marking the changes to the data!

If we inspect the bookmarks, we can see that the second of three occurred before the delete of data.  This makes it simple to use the “rewind” option, (bottom right icon next to the trash can for removing the bookmark) to revert the data changes.  Keep in mind that this will revert the database back to the point in time when the bookmark was performed, so ALL changes will be reverted to that point in time.

Once that is done, we can quickly verify that our data has returned, with no need to bother the DBA and no concern that a catastrophic change to the data has impacted the development or test environment.

SQL> select count(*) from kinder_tbl
  where c2 like '%40%';

  COUNT(*)
----------
    143256

I plan on going through different combinations of tools from the periodic table of DevOps tools in upcoming posts and showing the strengths and drawbacks of different implementation choices, so until the next post, have a great week!


Posted in Delphix, devops Tagged with: ,

March 6th, 2017 by dbakevlar

I had the opportunity to attend Delphix‘s Company Kick Off last week in San Francisco.  This was my first time at an event of this nature and it was incredibly successful.  As Delphix had just promoted Adam Bowen and me to the Office of CTO, along with promoting Eric Schrock to the CTO position, I was in an enviable position, but not just because of the role change.

For some reason, my badge stated I was in Product Management, (I wasn't) and as I was transitioning from my previous role in Technical Product Marketing, (I was the full-on techie on the team) I now belonged to a group that hadn't existed when they planned the event.  I decided to become a "free agent" for the week and took advantage of not belonging to any one group.  As the company split into its designated groups- Sales, Engineering and Professional Services- I started to move between the department events.  This resulted in some incredible perspectives on our company and why the diversity of roles and groups serves the company as a whole.

Doing the Kick Off

First off-  kudos to the entire marketing team for the event.  It was incredibly well done.  The venue and location were excellent.  Having everyone in one place, considering how many of us are remote, really made a huge difference.  All of this had to be coordinated-  event, people, travel, content-  and it was essentially three separate events stitched into one!  As a remote employee, it was great to finally put faces to names.  Networking, community and social events gave us more than enough time to get to know those we only knew virtually, and even in three days, I felt more connected to the teams.

The company kick off started with a talk from John Foley, the Blue Angels pilot, who discussed his Glad to be Here movement, (#gladtobehere).  It was very inspirational, and I gained significant insight into the value of the feedback phase of projects, which I believe is a weakness of mine as a mentor.  Our CEO Chris Cook and others from the C-level then presented Delphix's goals for the next year in a very energized way that provided a unified focus for all of us.  With our heads filled with a full picture of where we were headed, we sat down to dinner and the big company party that followed.

Shout Out to the Cool Kids

What did I learn that I may not have known before?  That our CEO, Chris Cook is really fantastic.  He knows the business, is very personable and took the time to visit with as many Delphixers as he could during the week.  He’s also not afraid to have fun and knows that downtime is essential to a healthy work environment.

Our CEO, Chris Cook, reliving the 80's for our Company Kick Off party.  I look like I'm signaling for someone to call me.  The 80's were not my best years… 🙂

The company message was on point, “Delphix moves data at the speed of business.”  As a DBA, I understand how long the database has been viewed as the bottleneck and to remove this through virtualization is incredibly powerful.

I learned that we have a very clear vision of where we are starting to fit amid the huge direction change for so many businesses, with so many companies migrating to the cloud and so many others that claim to do what we do, (rarely, and even more rarely successfully).  I started to recognize how all of our groups fit together.  We are all moving parts that are necessary to make Delphix the success it is.

Delphix’s CFO, Stewart Grierson and Chris Cook on Stage at the CKO

For me, the free agent, it was shocking to go from the buzz and big lights of the Sales/Pre-sales Kick off, to the very personalized interaction of the Professional Services, to the quiet, small groups at the hackathon for Engineering.  I’m a very adaptable personality, but admit that I’m more introverted than most people realize.  I’m drained by loud events and need to re-energize with downtime when I get the chance, so this bouncing from space to space, (each with their own different vibe) really worked well for me.

That week I learned about how much our sales staff does-  how many regions are handled by so few people and how much they accomplish with the help of some awesome pre-sales folks!  You also get to find out how your contributions make an impact.  Content you produced that you thought was just gathering dust?  Someone takes the time to thank you and tell you how vital it is to their day-to-day job.

I discovered how the Professional Services, (PS) team closely works with those that provide our documentation and community support.  Where Sales was all about recognizing what had been accomplished this year, PS was looking to the year ahead and how they would solve some of the critical challenges.  Sitting in granted me the opportunity to proudly hear the announcement of my husband’s promotion to Technical Manager in the Professional Services group.  It also gave me a chance to visit with my old friend and co-author Leighton Nelson, who is doing fantastic work in Steve Karam’s group.

Getting My Geek On

At breakfast on the last day, I happened to sit down with the engineers.  I've become accustomed to speaking over everyone's head in marketing these days, so it was an odd sense of relief as I listened and discussed some of the interesting ways this group was taking on syntax differences between the languages being utilized in our product.  We chatted about IntelliJ, Python and Oracle Cloud complexities and the hackathon that was going on in their group for the day.  When I sat in on the Engineering group's kick off later on, I was granted the opportunity to speak to Nathan Jolly, a SQL Server DBA at Delphix that I'd only had the pleasure of speaking to in meetings.  He's on our Australian team and there aren't many opportunities to be in the same timezone.

Quieter than in the party, people chat it up

The quiet was also a nice break after evenings filled with great food, conversations and company.  The party was the first night, but each evening brought more dinners and community events, (I was part of a group that assembled over 40 wheelchairs for charity!) and the last evening, I ended up closing down the bar with a group that [somehow] talked me into an arm wrestling match.  Now, as many know, I have about a good week at low altitude where I’m taking advantage of the extra oxygen.  This results in little impact from alcohol and for the particular evening, me winning an arm wrestling match against one of the guys.  Nothing more needs to be said about this outside of me refusing to give up and that both my elbows are bruised from the match.

A Successful Preparation for the Upcoming Year

I may be speaking for the rest of the teams, but I left with renewed energy, direction and connection to this great company we call Delphix.  I am amazed at how much we’ve grown and changed in just the 9 months I’ve been here and look forward to the upcoming year.  I have no doubt 2018’s kick off will be just as awesome and that we’ll need a bigger venue to hold all the new employees!

I quickly put together a scrapbook video story of my pictures, enjoy!  March is also a busy month for events, both Oracle and SQL Server.  Here’s where I’ll be this month:

March 8th:  IOUG Town hall webinar

March 13th:  UTOUG Training Days, Salt Lake City, UT.

March 18th:  Iceland SQL Saturday, Reykjavik, Iceland

March 28th:  Colorado SQL Saturday, Colorado Springs, CO


Posted in Delphix

February 23rd, 2017 by dbakevlar

The warning below was received by one of our Delphix AWS Trial customers, and he wasn't sure how to address it.  If any others experience it, this is why it occurs and how you can correct it.

You’re logged into your Delphix Administration Console and you note there is a fault displayed in the upper right hand console.  Upon expanding, you see the following warning for the databases from the Linux Target to the Sources they pull updates from:

 

It's only a warning, and the reason it's only a warning is that it doesn't stop the user from taking a snapshot or provisioning, but it does impact the TimeFlow, which could be a consideration if a developer were working and needed to go back to a point within this lost time.

How do you address the error?  It's one that's well documented by Delphix, so simply proceed to the following link, which describes the issue and how to investigate and resolve it.

Straightforward, right?  Not so much in my case.  When the user attempted to ssh into the Linux target, he received an I/O error:

$ ssh delphix@xx.xxx.xxx.x2

ssh: connect to host xx.xxx.xxx.x2 port 22: Operation timed out

I asked the user to then log into the Amazon EC2 dashboard and click on Instances.  The following was displayed:

Oh-oh….

By highlighting the instance, the following is then displayed at the lower part of the dashboard, showing that there is an issue with this instance:

Amazon was quickly on top of this; once I refreshed the instances, all was running once more.  Once this was corrected and all instances showed a green check mark for their status, I was able to SSH in without issue:

$ ssh delphix@xx.xxx.xxx.x2

Last login: Tue Feb 21 18:05:28 2017 from landsharkengine.delphix.local

[delphix@linuxtarget ~]$

Does this resolve the issue in the Delphix Admin Console?  It depends… the linked documentation states that the problem is commonly due to a problem with the network or the source system, but in this case, it was the target system that suffered the outage in AWS.

As this is a trial, not even a non-production system that's currently in use, we'll skip recovering the logs to the target system and proceed with taking a new snapshot.  The document also goes into excellent ways to avoid experiencing this type of outage in the future.  Again, this is just a trial, so we won't put those into practice for an environment that we can easily drop and recreate in less than an hour.

Tips For Real Delphix Environments

There are a few things to remember when you’re working with the CLI:

-If you want to inspect any admin information, (such as these logs, as shown in the document) you’ll need to be logged in as the delphix_admin@DOMAIN@<DE IP Address>.

So if you don’t see the “paths” as you work through the commands, it’s because you may have logged in as sysadmin instead of the delphix_admin.
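As an example, a CLI session to the engine as the admin user can be started over ssh with something like the following; the IP simply mirrors the example prompt further down, so treat it as a placeholder for your own Delphix Engine address:

$ ssh delphix_admin@DOMAIN@10.0.1.10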

-If you’ve marked the fault as “Resolved”, there’s no way to resolve the timeflow issue, so you’ll receive the following:

ip-10-0-1-10 timeflow oracle log> list timeflow='TimeFlow' missing=true

No such Timeflow ''TimeFlow''.

-If the databases are down, it’s going to be difficult for Delphix to do anything with the target database.  Consider updating the target to auto restart on an outage.  To do so, click on the database in question, click on Configuration and change it to “On” for the Auto VDB Restart.

Those are a few more tidbits to make you more of an expert with Delphix.  Want to try out the free trial with AWS?  All you need is a free Amazon account, and it's only about 38 cents an hour to play around with a great copy of our Delphix Engine, including a deployed Source and Target.

Just click on this link to get started!

Posted in AWS Trial, Delphix

February 13th, 2017 by dbakevlar

When tearing down an AWS Delphix Trial, we run the following command with Terraform:

>terraform destroy

I’ve mentioned before that every time I execute this command, I suddenly feel like I’m in control of the Death Star in Star Wars:

As this runs outside of the AWS EC2 web interface, you may see some odd information in your dashboard.  In our example, we’ve run “terraform destroy” and the tear down was successful:

So you may go to your volumes and verify that, yes, no volumes exist:

Yet the instances view may still show the three instances that were created as part of the trial, (Delphix engine, source and target.)

These are simply "ghosts of instances past."  The teardown was completely successful; there's simply a delay before the instance names are removed from the dashboard.  Notice that they are no longer listed with a public DNS or IP address.  This is a clear indication that they aren't currently running, don't exist any longer and, more importantly, aren't being charged for.
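If you'd rather confirm from a terminal than trust the dashboard, the AWS CLI, (assuming it's installed and configured for the same account and region) can report the state of each instance; the trial instances should show up as terminated:

$ aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table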

Just one more [little] thing to be aware of… 🙂

Posted in AWS Trial, Delphix Tagged with: ,

February 13th, 2017 by dbakevlar

Now, most of us are living in a mobile world, which means that as our laptop travels, our office moves and our IP address changes.  This can be a bit troubling when we're working in the cloud and our cloud configuration relies on locating us via an IP address that's the same as it was in our previous location.

What happens if your IP address changes from what you have in your configuration file, (in Terraform's case, your terraform.tfvars file) for the Delphix AWS Trial?  I purposely set my IP address incorrectly in the file to demonstrate what happens after I run the terraform apply command:

It’s not the most descriptive error, but that I/O timeout should tell you right away that terraform can’t connect back to your machine.

Addressing the IP Address Issue

Now, we’ll tell you to capture your current IP address and update the IP address in the TFVARS file that resides in the Delphix_demo folder, but I know some of you are wondering why we didn’t just build out the product to adjust for an IP address change.
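For reference, the relevant line in terraform.tfvars is just a simple variable assignment.  This is a hypothetical excerpt, assuming the variable is named as described later in this post, (YOUR_IP) and using a placeholder address:

YOUR_IP = "203.0.113.25"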

The truth is, you can set a static IP address for your laptop OR just alias your laptop with the IP address you wish to have.  There are a number of different ways to handle this, but looking at the most common, let's dig into how we would manage the IP address itself vs. updating the file.

Static IP Address

You can go to System Preferences or Control Panel, (depending on which OS you're on) click on Network, configure your TCP/IP setting to manual and type in an IP address there.  The preference is commonly to choose a non-conflicting IP address, (often the one that was dynamically assigned will do as the manual one to retain) and save the settings.  Restart the PC and you can then add that to your configuration files.  Is that faster than just updating the TFVARS file?  Nope.

Setting the IP Address- Mac/Linux

The second way to do this is to create an alias IP address, to avoid the challenge of a new address being automatically assigned with each location/WiFi move.

Just as above, we'll often use the IP address that was assigned dynamically and simply choose to keep it each time.  If you're unsure what your IP is, there are a couple of ways to collect this information:

Open up a browser and type in "What is my IP Address"

or from a command prompt, with “en0” being your WiFi connection, gather your IP Address one of two ways:

$ dig +short myip.opendns.com @resolver1.opendns.com
$ ipconfig getifaddr en0

Then set the IP address and cycle your WiFi connection:


$ sudo ipconfig set en0 INFORM <IP Address>
$ sudo ifconfig en0 down 
$ sudo ifconfig en0 up

You can also click on your WiFi icon and reset it from there.  Don't be surprised if it takes a while to reconnect.  Releasing and renewing IP addresses can take a bit across the network, and the time required should be expected.

An Alias on Mac/Linux

The commands differ a bit depending on which OS you're on.  Using the IP address from your tfvars file, set it as an alias with the following command:

$ sudo ifconfig en0 alias <IP Address> 255.255.255.0

Password: <admin password for workstation>

If you need to unset it later on:

$ sudo ifconfig en0 -alias <IP Address>

I found this to be an adequate option-  the alias is always there, (like any other alias, it just forwards everything on to the address the file recognizes you at) but it may add time to the build, (still gathering data to confirm this.)  With this addition, I shouldn't have to update my configuration file, (for the AWS Trial, that means the YOUR_IP parameter in our terraform.tfvars.)

Setting your IP Address on Windows

The browser trick to gather your IP address works the same way, but if you want to change the address via the command line, the commands are different for Windows PCs:

netsh interface ipv4 show config

You’ll see your IP Address in the configuration.  If you want to change it, then you need to run the following:

netsh interface ipv4 set address name="Wi-Fi" static <IP Address> 255.255.255.0 <Gateway>
netsh interface ipv4 show config

You’ll see that the IP Address for your Wi-Fi has updated to the new address.  If you want to set it to DHCP, (dynamic) again, run the following:

netsh interface ipv4 set address name="Wi-Fi" source=dhcp

Now you can go wherever you darn well please, set an alias and run whatever terraform commands you wish.  All communication will just complete without any errors caused by a changed IP address.

Ain’t that just jiffy? OK, it may be quicker to just gather the IP address and update the tfvars file, but just in case you wanted to know what could be done and why we may not have built it into the AWS Trial, here it is! 🙂

Posted in AWS Trial, Delphix Tagged with: , , ,

February 10th, 2017 by dbakevlar

I ran across an article from 2013 from Straight Talk on Agile Development by Alex Kuznetsov, and it reminded me how long we've been battling for easier ways of doing agile in RDBMS environments.

Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as we recognize how similar it is to a standard environment, we stop over-thinking it, and it becomes quite simple to implement agile with even petabytes of data in a relational environment without resorting to slow and archaic processes.

The second effect of this is to realize that we may start to acquire secondary responsibilities and take on ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.

A Virtual Can Be Virtualized

Don't worry-  I'm going to show you that it's not that difficult, and virtualization makes it really easy to do all of this, especially when you have products like Delphix to support your IT environment.  For our example, we're going to use our trusty AWS Trial environment, and we have already provisioned a virtual QA database.  We want to create a copy of our development virtualized web application to test some changes we've made and connect it to this new QA VDB.

From the Delphix Admin Console, go to Dev Copies and expand to view those available.  Click on Employee Web Application, Dev VFiles Running.  Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval.  Click on one and click on Provision.

Now this is where you need the information about your virtual database that you wish to connect to:

1.  You will want to switch the provision target from the source environment to the Linux Target.

Currently the default is to connect to the existing development database, but we want to connect to the new QA VDB we wish to test against.  You can ssh in as delphix@<ip address for linuxtarget> to gather this information.

2.  Gathering Information When You Didn’t Beforehand

I've created a new VDB to test against, with the idea that I wouldn't want to confiscate an existing VDB from any of my developers or testers.  The new VDB is called EmpQAV1.  Now, if you're like me, you're not going to have remembered to grab the info about this new database before you went into the wizard to begin the provisioning.  No big deal, we'll just log into the target and get it:

[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix  14816     1  0 17:23 ?        00:00:00 ora_pmon_devdb
delphix  14817     1  0 17:23 ?        00:00:00 ora_pmon_qadb
delphix  17832     1  0 17:32 ?        00:00:00 ora_pmon_EmpQAV1
delphix  19935 19888  0 18:02 pts/0    00:00:00 grep pmon

I can now set my ORACLE_SID:

[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1

Now let's gather the rest of the information we'll need to connect to the new database by checking the listener for its registered service:

[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
  Instance "EmpQAV1", status READY, has 1 handler(s) for this service...
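With the service registered with the listener, a quick connection test from the target ties the pieces together.  This is just a sketch: the username is a placeholder for whichever schema you plan to test with, and EZConnect is used with the service name shown above.

[delphix@linuxtarget ~]$ sqlplus test_user@//localhost:1521/EmpQAV1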

Provision Your VFile

Fill in all the values required in the next section of the provisioning setup:

Click Next and add any requirements to match your vfile configuration that you had for the existing environment.  For this one, there aren’t any, (additional NFS Mount points, etc.)  Then click Next and Finish.

The VFile creation should take a couple minutes max and you should now see an environment that looks similar to the following:

This is a fully functional copy of your web application, created from another virtual copy that can test against a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!

Why would you choose to do anything less?


Posted in AWS Trial, Delphix, Oracle Tagged with: ,
