Category: Oracle

May 26th, 2017 by dbakevlar

I’m in sunny Phoenix this week at the Data Platforms 2017 Conference and looking forward to a break in the heat when I return to Colorado this evening.

As this event is for big data, I expected to present on how big data could benefit from virtualization, but was surprised to find I didn’t have much luck locating customers using us for that reason, (yet).  As I’ve discussed in previous presentations, I know what a “swiss army knife” virtualization is, resolving numerous issues across a myriad of environments, yet often going unrecognized.

The Use Case

To find my use case, I went out to the web and found a great article, “The Case for Flat Files in Big Data Projects“, by Time.com interactive graphics editor, Chris Wilson.  The discussion centers on data created as part of the ACA and used for another article, “How Much Money Does Your Doctor Get From Medical Companies“.  The data behind the interactive graphs in the article is publicly available from cms.gov, and Chris discusses the challenges it created and how they overcame them.

Upon reading it, there was a clear explanation of why Chris’ team did what they did to consume the data in a way that complemented their skill set.  It will resonate with anyone who works in an IT shop, where we end up with technical choices that we’re left to justify later on.  While observing conversations at the conference this week, I lost count of how often I was reminded that there isn’t a pure “Hadoop” shop or a “Hive” shop-  everyone has a varied group of solutions that make up their environment, and if you don’t have a given tool today, don’t count it out-  Pig, Python, Kafka or others could show up tomorrow.

This results in a more open and accepting technical landscape, one in which I, a “legacy data store” technologist, was welcome.  When I explained my upcoming talk and my experience, no one turned up their nose at any technology I admitted to having an interest in.

Along with the use case, the data itself was also available online.  As part of the policies in the ACA, the cms.gov site, (The Centers for Medicare and Medicaid Services) gives you access to all of this valuable data, which can offer incredible insight into these programs.  The Time article focuses only on the payments to doctors from medical companies, but the data that is collected, separated out by area and then zipped up by year, is quite extensive.  As noted in a third article, as anticipated as the data was, it was cumbersome and difficult to use.

The Requirements

I proceeded to take this use case and imagine it as part of an agile environment, with this data as part of the pipeline to produce revenue by providing information to the consumer.  What would be required and how could virtualization not only enhance what Chris Wilson’s team had built, but how could the entire pipeline benefit from Delphix’s “swiss army knife” approach?

  1.  I can’t assume this is the main data store.  These flat files are a supplement to legacy data stores.
  2. There would be a standard development environment-  development and testing would need their own environments, not just a production copy of these files, applications, etc.
  3. If it’s providing data to a consumer and data is in perpetual motion in the age of the internet, an agile development method would need to be in place, which means a short development cycle with many, small, “scrum like” development groups from different departments working on tasks.
  4. Automation and seamless deployment would mean less human intervention, lower resource demands and more successful deployments.

Solution

There were four areas I focused on to solve or eliminate bottlenecks that I either experienced or foresaw an organization experiencing with this data as part of their environment.

  1. Eliminate the need for multiple copies of the files and the slow, manual process of propagating them to targets for development, test, etc. with Delphix’s vFile option-  this would include any applications or other non-relational tier included in the scenario.
  2. Eliminate any legacy data store copies and refreshes that the big data environment depends on, and create VDBs for all development, test and reporting.
  3. Protect all non-production environments by masking non-production databases and flat files.
  4. Containerize environments for easy deployment, delivery, testing and cloud migrations.

vFiles

Each of the files was just over 500M compressed and 15-18G uncompressed.  Each took over 4 minutes to transfer to a host and could add up to considerable space.

I used Delphix vFiles to virtualize the files.  This means there is a single “gold copy” of the files at the Delphix engine, and an NFS mount “projects” the file access to each target, which can then serve as a unique copy for as many development, test and reporting environments as needed.

Fig. 1- Creating vFiles from dSource that flat files are sourced on.

If a refresh is required, the parent is refreshed and an automated refresh of all the “children” can be performed.  Changes can be made at the child level and, if something goes catastrophically wrong, Delphix allows for easy recovery-  data version control, not just code version control, throughout the development and testing process.

Fig. 2- Demonstration of target vFile, showing NFS Mount, files available, (created in less than 10 seconds) and how easily disabled and proven to be “Projection” of files.

It’s a pretty cool feature and one that is very valuable to the big data arena.  I heard countless stories of how often, due to lack of storage, data scientists and developers were testing against subsets of data, only to find out once in production that their code wouldn’t complete or would fail against the full data set.  Having the FULL files without taking up more space for multiple environments would be incredibly beneficial and would shorten development cycles.

Virtualize

Most big data shops are still dependent on legacy data stores.  These are legendary roadblocks due to their size, complexity and demands for refreshes.  I proposed that these be virtualized so each developer could have their own copy with instant refreshes and without additional storage demands-  again easing development deadline pressure and giving developers full access to the data they need to succeed.

Protect

Most people know we mask relational databases, but did you know we have Agile Data Masking for flat files?  If these files are going to be pushed to non-production systems, especially with how much we’re now hearing in the US about the EU’s GDPR, (General Data Protection Regulation) shouldn’t we mask outside of the database as well?

What kind of files can be masked?

  • Multi-record
  • CSV
  • XML
  • Word
  • Excel
  • PowerPoint
  • Unstructured
  • EDI

That’s a pretty considerable and cool list.  The ability to go in and mask data in flat files is a HUGE benefit to big data folks.  Many of them were looking at file security only at the permissions and encryption level, so the ability to render the data itself benign is a fantastic improvement.

Containerize

The last step is a simple recognition that big data is complex and consists of a ton of moving parts.  It should be expected that much of it is home-built or open source, consisting of legacy data stores, flat files, application tiers and other dependent tiers.

Fig. 3- A Container, created on-prem, then moved to the cloud and to as many environments as required for the development cycle to meet the business needs.

Delphix has the ability to create templates of our sources, (aka dSources) which is really nothing more than creating a container.  In my use case enhancement, I took all of these legacy data stores, applications, (including any Ajax code) and flat files and created a template from them for simple refreshes and deployments via Jenkins, Chef jobs or other DevOps automation.  The ability to then take these templates and deploy them to the cloud would make a migration from on-prem to the cloud-  or from one cloud vendor to another-  a simpler process.

Fig. 4- A look at the full scenario-  Delphix engines masking files, databases, creating containers and deploying it all on-prem and to the cloud.

The end story is that this use case could be any big data shop or start up in the world today.  So many of these companies are hindered by data and Delphix virtualization could easily let their data move at the speed of business.

I want to thank Data Platforms 2017 and all the people who were so receptive of my talk.  If you’d like access to the slide deck, it’s been uploaded to Slideshare. I had a great time in Phoenix and hope I can come back soon!

 

 

Posted in big data, Cloud, Data Masking, Oracle

May 10th, 2017 by dbakevlar

I love questions-  They give me something to write about that I don’t have to come up with from my experience or challenges…:)

So in my last post, Paul asked:

I am not sure what happens to the other changes which happened while the release was happening? Presumably they are also lost? Presumably the database has to go down while the data is reverted?

The Setup

In our scenario to answer this question, I’m going to perform the following on the VEmp_826 virtualized database:

  1. Add a table
  2. Add an index
  3. Include transactions, both inserts and deletes
  4. Rewind the database using the Admin Console

As I’m about to make these changes to my database, I take a snapshot which is then displayed in the Delphix Admin console using the “Camera” icon in the Configuration pane.

Note the SCN Range listed on each of them.  Those are the SCNs available in that snapshot, and the timestamp is listed as well.
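
If you want to double-check where the source database currently sits relative to those SCN ranges, a quick query, (just a sketch, assuming access to v$database) does the trick:

-- Sketch: note the current SCN and time to compare against a snapshot's SCN range
SELECT current_scn,
       TO_CHAR(SYSDATE, 'DD-MM-YYYY HH24:MI:SS') AS right_now
  FROM v$database;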

Now I log into my target host that the VDB resides on.  Even though this is hosted on AWS, it really is no different for me than logging into any remote box.  I set my environment and log in as the schema owner to perform the tasks we’ve listed above.

Create Table

So we’ll create a table, index and some support objects for our test:

CREATE TABLE REW_TST
(
C1 NUMBER NOT NULL
,C2 VARCHAR2(255)
,CDATE TIMESTAMP
);

CREATE INDEX PK_INX_RT ON REW_TST (C1);
CREATE SEQUENCE RT_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER RW_BIR
BEFORE INSERT ON REW_TST
FOR EACH ROW
BEGIN
  SELECT RT_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/
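
A quick sanity check before taking the snapshot doesn’t hurt-  something like this, (just a sketch against the objects created above) confirms everything compiled and is in place:

-- Sketch: verify the new table, index, sequence and trigger all exist and are valid
SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name IN ('REW_TST', 'PK_INX_RT', 'RT_SEQ', 'RW_BIR');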

Now that it’s all created, we’ll take ANOTHER snapshot.

Add Data to Kinder Table

This snapshot takes just a couple of seconds, is about 10 minutes after our first one and contains the changes that were made to the database since we took the initial snapshot.

Now I’ll add a couple rows from another transaction into the KINDER_TBL from yesterday:

INSERT INTO KINDER_TBL
VALUES (1,dbms_random.string('A', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (2,dbms_random.string('B', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (3,dbms_random.string('C', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (4,dbms_random.string('D', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (5,dbms_random.string('E', 200), SYSDATE);
INSERT INTO KINDER_TBL
VALUES (6, dbms_random.string('F', 200), SYSDATE);
INSERT INTO  KINDER_TBL
VALUES (7,dbms_random.string('G', 200), SYSDATE);
COMMIT;

We’ll take another snapshot:

Add Data to the New Table

Now let’s add a ton of rows to the new table we’ve created:

SQL> Begin
For IDS in 1..10000
Loop
INSERT INTO REW_TST (C2)
VALUES (dbms_random.string('X', 200));
Commit;
End loop;
End; 
  /

And take another snapshot.

Now that I have all of my snapshots for the critical times in the release change, there is a secondary option that is available.

Snapshots at the DBA Level

As I pointed out earlier, there is a range of SCNs in each snapshot.  Notice that I can now provision by time or SCN from the Admin Console:

So I could easily go back to the beginning or ending SCN of any snapshot, OR I could click on the up/down arrows or type the exact SCN I want to pinpoint for the recovery. Once I’ve decided on the correct SCN to recover to, I click on Rewind VDB and it will go to that SCN-  just like doing a recovery via RMAN, but instead of having to type out the commands and work within the time constraints, this is an incredibly quick recovery.
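
If you need to confirm which wall-clock time a given SCN corresponds to before rewinding, the database can tell you, (a sketch with an example SCN-  keep in mind the SCN/time mapping is only retained for a limited window):

-- Sketch: translate between SCNs and timestamps to help pick the right rewind point
SELECT scn_to_timestamp(2258846)        AS time_for_scn,
       timestamp_to_scn(SYSDATE - 1/24) AS scn_an_hour_ago
  FROM dual;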

Notice that I can go back to any of my snapshots, too.  For the purpose of this test, we’re going to go back to just before I added the data to the new table by clicking on the Selected SCN and clicking Rewind VDB.

Note that now this is the final snapshot shown in the system, no longer displaying the one that was taken after we inserted the 10K rows into REW_TST.

If we look at our SQL*Plus connection to the database, we’ll see that it’s no longer connected from our 10K row check on our new table:

And if I connect back in, what do I have for my data in the tables I worked on?
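
Something as simple as the following, (a sketch using the two tables from this exercise) answers that:

-- Sketch: compare row counts in both tables after the rewind
SELECT 'KINDER_TBL' AS table_name, COUNT(*) AS row_count FROM KINDER_TBL
UNION ALL
SELECT 'REW_TST', COUNT(*) FROM REW_TST;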

Pssst-  there are fourteen rows instead of seven because I inserted another 7 yesterday when I created the table… 🙂

I think that answers a lot of the questions posed by Paul, but I’m going to jump in a little deeper on one-

Database Outage During a Rewind

Yes, the database did experience an outage as the VDB was put back to the point in time or SCN requested via the Delphix user interface or command line interface.  You can see this by querying the database:

SQL> select to_char(startup_time,'DD-MM-YYYY HH24:MI:SS') startup_time
 from v$instance;
STARTUP_TIME
-------------------
10-05-2017 12:11:49

The entire database is restored back to this time.  The dSource-  the database the VDB is sourced from, which keeps track of everything going on in all its VDBs-  has pushed the database back, yet the snapshots from before this time still exist, (tracked by the Delphix Engine.)

If you view the alert log for the VDB, you’ll also see the tail of the recovery steps, including our requested SCN:

alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover datafile list
 1 , 2 , 3 , 4
Completed: alter database recover datafile list
alter database recover if needed
 start until change 2258846
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 2 slaves
Wed May 10 12:11:05 2017
Recovery of Online Redo Log: Thread 1 Group 3 Seq 81 Reading mem 0
  Mem# 0: /mnt/provision/VEmp_826/datafile/VEMP_826/onlinelog/o1_mf_3_dk3r6nho_.log
Incomplete recovery applied all redo ever generated.
Recovery completed through change 2258846 time 05/10/2017 11:55:30
Media Recovery Complete (VEmp826)

Any other changes around the point you’re rewinding to are impacted by a rewind, so if there are two developers working on the same database, they could impact each other.  But with the small footprint of a VDB, why wouldn’t you just give them each their own VDB and merge the changes at the end of the development cycle?  One of the glorious reasons for adopting virtualization technology is the ability to work in 2 week sprints and be more agile than our older, waterfall methods that are laden with problems.

Let me know if you have any more questions-  I live for questions that offer me some incentive to go look at what’s going on under the covers!

Posted in Database, Delphix, Oracle

April 28th, 2017 by dbakevlar

I did a couple great sessions yesterday for the awesome Dallas Oracle User Group, (DOUG.)  It was the first time I did my thought leadership piece on Making Sense of the Cloud and it was a great talk, with some incredible questions from the DOUG attendees!

This points me to a great [older] post on things IT can do to help guarantee tech projects are more successful. DevOps is a standard in most modern IT shops and DBAs are expected to find ways to be part of this valuable solution.  If you inspect the graph, which displays the ROI of different types of projects vs. how often they run over budget and time, the results may be surprising.

Where non-software projects are concerned, the project rarely runs over schedule, but in the way of benefits, it often comes up short.  When we’re dealing with software, 33% of projects run over time, but the ROI is exceptionally high and worthy of the investment.  You have to wonder how much of that over-allocation in time feeds into the percentage increase in cost.  If this could be deterred, think about how much more valuable these projects would become.

The natural life of a database is growth.  Very few databases stay a consistent size, as companies prosper, critical data valuable to the company requires a secure storage location and a logical structure to report on that data is necessary for the company’s future.  This is where relational databases come in and they can become the blessing and the burden of any venture.  Database administrators are both respected and despised for their necessity to manage the database environment as the health of the database is an important part of the IT infrastructure and with the move to the cloud, a crucial part of any viable cloud migration project.

How much of the time, money and delay shown in those projects is due to the sheer size and complexity of the database tier?  Our source data shows how often companies just aren’t able to hold it together due to lacking skills, inaccurate time estimates and other unknowns that come back to bite us.

I can’t stress enough why virtualization is key to removing a ton of the overhead, time and money that ends up going into software projects that include a database.

Virtualizing non-production databases results in:

  1. Ability to deliver full copies of production for developers without extensive demands on storage.
  2. Ability to deliver those databases in a matter of minutes vs. days or weeks.
  3. Ability to refresh databases as needed for any project.
  4. Self-service user-interface so developers and testers can recover from a catastrophic issue in a database without having to grovel to a DBA to restore a virtual database.
  5. Ability to branch the VDB and do versioning, which is awesome for both developers and testers, (I know, we DBAs care very little about this feature… :))
  6. In migrations and cloud migrations, the ability to move databases in short periods of time and limit the storage footprint-  saving the company the money the cloud was promised to deliver, which many are finding, in the long run, doesn’t materialize with traditional database approaches.

It’s definitely something to think about and if you don’t believe me, test it yourself with a free trial!  Not enough people are embracing virtualization and it takes so much of the headache out of RDBMS management.

Posted in AWS, Azure, Cloud, Oracle, SQLServer

April 18th, 2017 by dbakevlar

For over a year I’ve been researching cloud migration best practices.  Consistently, there was one red flag that trips me up whenever I view recommended migration paths.  No matter what you read, just about all of them include the following high level steps:

As we can see from above, the scope of the project is identified, requirements laid out and a project team is allocated.

The next step in the project is to choose one or more clouds and the first environments to test out in the cloud, along with identifying security concerns and application limitations.  DBAs are tested repeatedly as they try to keep up with the demand of refreshing, or ensuring the cloud environments stay in sync with on-prem, and the cycle continues until a cutover date is issued.  The migration go or no-go occurs and either the non-production portion or all of the environment is migrated to the cloud.

As someone who works for Delphix, I focus on the point of failure where DBAs can’t keep up with full clones and data refreshes in cloud migrations, or where development and testing aren’t able to complete steps that they could have if the company were using virtualization.  From a security standpoint, I am concerned with how few companies are investing in masking given the sheer quantity of breaches in the news, but as a DBA, there is a whole different scenario that really makes me question the steps many companies are using to migrate to the cloud.

Now here’s where they lose me every time-  the last step in most cloud migration plans is to optimize.

I’m troubled by optimization being viewed as the step you take AFTER you migrate to the cloud.  Yes, I believe that there will undoubtedly be unknowns that no one can take into consideration before the physical migration to a cloud environment, but to take databases, “as is” when an abundance of performance data is already known about the database that could and will impact performance, seems to be inviting unwarranted risk and business impact.

So here’s my question to those investing in a cloud migration or have already migrated to the cloud-  Did you streamline and optimize your database/applications BEFORE migrating to the cloud or AFTER?

 

 

Posted in AWS, Azure, Oracle, SQLServer

April 13th, 2017 by dbakevlar

I was in a COE, (Center of Excellence) meeting yesterday and someone asked me, “Kellyn, is your blog correct?  Are you really speaking at a Blockchain event??”  Yeah, I’m all over the technical map these days and you know what?

I love the variety of technology, the diversity of attendance and the differences in how the conferences are managed.  Now that last one might seem odd and you might think that they’d all be similar, but it’s surprising how different they really are.

Getting to Know You

Today I’m going to talk about an aspect of conferences that’s very near to my heart, which is networking via events.  For women in technology, there are some unique challenges when it comes to networking.  Men have concerns about approaching women to network-  such as fear of accusations of inappropriate interaction-  and women face the challenge that a lot of networking opportunities occur outside of the workplace, in social situations we may not be comfortable in.  No matter who you are, no matter what your intentions, there’s a lot of wariness and in the end, women often just lose out when it comes to building their network.  I’ve been able to bridge this pretty successfully, but I have seen where it’s backfired and have found myself on more than one occasion defending both genders who’ve ended up on the losing side of the situation.

With that said, conferences and other professional events can assist with helping us geeks build our networks and it’s not all about networking events.  I noticed a while back that the SQL Server community appeared to be more networked among their members.  I believe part of this is due to the long history of their event software and some of its features.

Take the SQL PASS website, specifically the local user group event management software-  notice that it’s all centralized.  Unlike the significantly independent Oracle user groups, SQL Server user groups are able to use a centralized repository for their event management, speaker portal, scheduling, etc.  That’s not to say there aren’t any events outside of PASS Summit and SQL Saturdays-  there are actually a ton-  but this is the portal for the regional user groups, creating the hub that bridges out to the larger community.

Centralized System

Besides being able to submit my abstract proposals to as many SQL Saturdays worldwide as I like from one portal, I can also maintain one speaker biography, along with information about my blog, Twitter, LinkedIn and other social media, in this one location.

The second benefit of this simplicity is that these biographies and profiles “feed” the conference schedules and event sites.  You have a central location for management, but hundreds of event sites where different members can connect.  After abstracts have been approved and the schedule built, I can easily go into an event’s schedule, click on each speaker biography and choose to connect with anyone listed who has entered their social media information in their global profile.

Using my profile as an example, you’ll notice the social media icons under my title are available with a simple click of the mouse:

This gives me both an easy way to network with my fellow speakers and an excuse to do so!  I can click on each of the social media buttons to follow each speaker on Twitter and connect with them on LinkedIn.  I send a note with the LinkedIn connection telling the speaker that we’re both speaking at the event and, due to this, I’d like to add them to my network.

As you can join as many regional and virtual user groups as you like, (and your Pass membership is free) I joined the three in Colorado, (Denver, Boulder and Colorado Springs.) Each one of those offers the ability to also connect with the board members using a similar method, (now going to use Todd and David as my examples from the Denver SQL Server user group.)

The Oracle user groups have embraced adding Twitter links to most speaker bios and some board profiles, but I know for RMOUG, many still hesitate or aren’t using social media to the extent they could.  I can’t stress enough how impressed I am when I see events incorporate LinkedIn and Twitter into their speaker and management profiles, knowing the value they bring to technical careers, networks and the community.

New Kids on the Block

Although the SQL Server community is a good example, they aren’t the only ones.  I’m also speaking at new events on emergent technologies, like Data Platforms 2017.  I’ll be polite and expose my own profile page, but I’m told I’m easy to find in the sea of male speakers… 🙂 Along with my picture, bio and session information, there are links to my social media connections, allowing people to connect with me:

Yes, the Bizzabo software, (same software package that RMOUG will be using for our 2018 conference, along with a few other Oracle events this coming year) is aesthetically appealing, but more importantly, it incorporates important networking features that in the past just weren’t as essential as they are in today’s business world.

I first learned the networking tactic of connecting with the people I’m speaking alongside from Jeff Smith, and I think it’s a great skill that everyone should take advantage of, whether you’re speaking or just attending.  For women, I think it’s essential to your career to take advantage of opportunities to network outside of the traditional ways we’ve been taught in the past, and this is just one more way to work around that glass ceiling.

 

 

Posted in DBA Life, Oracle, SQLServer

April 11th, 2017 by dbakevlar

I’m itching to dig more into the SQL Server 2016 optimizer enhancements, but I’m going to complete my comparison of indices between the two platforms before I get myself into further trouble with my favorite area of database technology.


Index Organized Tables

Index Organized Tables, (IOTs) are just another variation of a primary key b-tree index, but unlike a standard table with an index simply enforcing uniqueness, the index IS the table.  The data is arranged in primary key order to improve performance, in a clustered, primary key state.

This is the closest to a SQL Server clustered index that Oracle will ever get, so it makes sense that a comparison of performance and fragmentation is the next step after the standard index and primary key index comparisons I’ve already performed between the two platforms.

Let’s create a new copy of our Oracle objects, but this time, update to an Index Organized Table:

CREATE TABLE ora_tst_iot(
        c1 NUMBER, 
        c2 varchar2(255),
        CREATEDATE timestamp DEFAULT CURRENT_TIMESTAMP,
        CONSTRAINT pk_ora_iot PRIMARY KEY (c1))
    ORGANIZATION INDEX 
    TABLESPACE users
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE users;

CREATE SEQUENCE C1_iot_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER C1_iot_BIR 
BEFORE INSERT ON ORA_TST_IOT
FOR EACH ROW
BEGIN
  SELECT C1_iot_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

PCTTHRESHOLD can be anywhere between 1 and 50, and I chose 20 for this example.  I didn’t add any compression, as C1 is a simple sequence value that won’t be able to take advantage of it, and I also added the additional support objects of a sequence and a trigger, just as I did for the previous test on the Oracle table.
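
To confirm the settings took, the data dictionary can be checked, (a sketch-  PCT_THRESHOLD and PCT_FREE are exposed through the standard index views):

-- Sketch: confirm the IOT's primary key index picked up the threshold and free space settings
SELECT index_name, index_type, pct_threshold, pct_free, tablespace_name
  FROM user_indexes
 WHERE table_name = 'ORA_TST_IOT';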

First Data Load

Now we’ll insert the rows from ORA_INDEX_TST into ORA_TST_IOT

SQL> insert into ora_tst_iot(c2) select c2 from ora_index_tst;

995830 rows created.

Elapsed: 00:00:04:01

There won’t be any fragmentation in the current table-  it was directly loaded-  no deletes, no updates.  Although it won’t be shown in the examples, I will collect stats at regular intervals and flush the cache to ensure I’m not impacted in any of my tests.
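
For the curious, the housekeeping between runs looks something like this, (a sketch of what I run between tests-  flushing the buffer cache is for isolated test systems only, never production):

-- Sketch: gather fresh optimizer stats and clear the caches between test runs
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORA_TST_IOT', cascade => TRUE);
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;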

SQL> ANALYZE INDEX PK_INDEXPS VALIDATE STRUCTURE;
Index analyzed.

Elapsed: 00:00:00.37

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT
SQL> analyze index pk_ora_iot validate structure;

Index analyzed.

SQL> analyze index pk_ora_iot compute statistics;

Index analyzed.

SQL> SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS, USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_ORA_IOT';

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
     32115       7996           0  224394262         88

Well, the table IS THE INDEX.  We collect stats on the table and now let’s remove some data, rebuild and then see what we can do to this IOT-

SQL> select * from ora_tst_iot
  2  where c1=994830;

    994830

SBTF02LYEQDFGG2522Q3N3EA2N8IV7SML1MU1IMEG2KLZA6SICGLAVGVY2XWADLZSZAHZOJI5BONDL2L
0O4638IK3JQBW7D92V2ZYQBON49NHJHZR12DM3JWJ1SVWXS76RMBBE9OTDUKRZJVLTPIBX5LWVUUO3VU
VWZTXROKFWYD33R4UID7VXT2NG5ZH5IP9TDOQ8G0

10-APR-17 03.09.52.115290 PM

SQL> delete from ora_tst_iot
  2  where c1 >=994820           
  3  and c1 <=994830;

11 rows deleted.

SQL> commit;

Now we see what the data originally looked like-  C2 is a large column that was consuming significant space.

What if we now disable our trigger for our sequence and reinsert the rows with smaller values for c2, rebuild and then update with larger values again?

ALTER TRIGGER C1_IOT_BIR DISABLE;

INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994820, 'A');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994821, 'B');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994822, 'C');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994823, 'D');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994824, 'E');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994825, 'F');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994826, 'G');
INSERT INTO ORA_TST_IOT(C1,C2) VALUES (994827, 'H');

so on and so forth till we reach 994830…

COMMIT and then let’s rebuild our table…

Rebuild or Move

ALTER TABLE ORA_TST_IOT MOVE;

What happens to the table, (IOT) when we’ve issued this command?  It’s moving all the rows back to fill up each block up to the pct free.  For an IOT, we can’t simply rebuild the index, as the index IS THE TABLE.

Now we’ve re-organized our IOT so the blocks are only taking up the space they would have when first inserted into.  So let’s see what happens when we issue an UPDATE to those rows-

SQL> update ora_tst_iot set c2=dbms_random.string('B',200)
  2  where c1 >=994820
  3  and c1 <=994830;

11 rows updated.

So how vulnerable are IOTs to different storage issues?

Chained Rows

How do chained rows look after updating, moving the data and then updating to larger values than the original, with only 10% free on each block?

Just 11 updated rows show the pressure:

SQL> SELECT 'Chained or Migrated Rows = '||value
 FROM v$sysstat
 WHERE name = 'table fetch continued row';  

Chained or Migrated Rows = 73730

Data Unload

Let’s delete and update more rows using DML like the following:

SQL> delete from ora_tst_iot
  2  where c2 like '%300%';

4193 rows deleted.

Insert more rows containing ‘300’ with varying lengths, delete more, rinse and repeat with updates and deletes…

So what has this done to our table as we insert, update, delete and then insert again?

SQL> SELECT table_name, iot_type, iot_name FROM USER_TABLES
     WHERE iot_type IS NOT NULL;  

TABLE_NAME               IOT_TYPE           IOT_NAME
------------------------------ ------------ ------------------------------
SYS_IOT_OVER_88595       IOT_OVERFLOW       ORA_TST_IOT
ORA_TST_IOT              IOT

This is where a clustered index and an IOT are very different.  There is a secondary management object involved when there is overflow.  If you look up at my creation statement, yes, I chose to create an overflow segment.  Even if I drop the IOT properly, the overflow table will go into the recycle bin, (unless I’ve configured the database without it.)
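
To see how much space the IOT itself vs. its overflow segment consumes, something like this works, (a sketch-  the SYS_IOT_OVER name comes from the query above and will differ in every database):

-- Sketch: compare the space used by the IOT's primary key segment and its overflow segment
SELECT segment_name, segment_type, blocks, ROUND(bytes/1024/1024) AS mb
  FROM user_segments
 WHERE segment_name IN ('PK_ORA_IOT', 'SYS_IOT_OVER_88595');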

SQL> select index_name from dba_indexes
  2  where table_name='ORA_TST_IOT';

INDEX_NAME
------------------------------
PK_ORA_IOT

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> select blocks, height, br_blks, lf_blks from index_stats;

    BLOCKS     HEIGHT BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768     3       45        32115

We can see that the rows per leaf block aren’t too many-  this is a new table without a lot of DML, but we still see that with the current configuration, there aren’t many rows stored per leaf block.
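
To put a number on that, INDEX_STATS can do the math for us, (a sketch using the stats just validated):

-- Sketch: average number of index entries per leaf block for the IOT's primary key
SELECT lf_rows, lf_blks, ROUND(lf_rows/lf_blks, 1) AS rows_per_leaf_blk
  FROM index_stats
 WHERE name = 'PK_ORA_IOT';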

When we select from the IOT, the index is in full use and we can see that with the proper pct free/pct used, the index is still in pretty good shape:

SQL> select * from table(dbms_xplan.display_awr('fbhfmn88tq99z'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
select c1, c2, createdate from ora_tst_iot

Plan hash value: 3208808379

------------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |       |       |  8709 (100)|          |
|   1 |  INDEX FAST FULL SCAN| PK_ORA_IOT |   995K|   206M|  8709   (1)| 00:01:45 |
------------------------------------------------------------------------------------

13 rows selected.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT  BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
     32768     3       45         32058

SQL> select pct_used from index_stats;

  PCT_USED
----------
 88

I Did a Bad

So now what happens, if like our original test, we shrink down the percentage of what can be used and reorganize, (and please don’t do this in production…or test….or dev….or ever! 🙂)?

SQL> alter table ora_tst_iot move pctfree 90;
Table altered.

SQL> analyze index pk_ora_iot validate structure;
Index analyzed.

SQL> SELECT blocks, height, br_blks, lf_blks FROM index_stats;

    BLOCKS     HEIGHT  BR_BLKS    LF_BLKS
---------- ---------- ---------- ----------
    172928     3      228         165630

Well, that’s a few more leaf blocks, eh?  Now insert again after re-enabling the trigger-

SQL> BEGIN
   FOR i in 1..1000000 LOOP
    INSERT INTO ORA_TST_IOT(c2) VALUES(i);
   END LOOP;
 COMMIT;
END;
 /

Now we have our elapsed times for the Azure-hosted SQL Server inserts of 1 million records at 100% and 10% fill factor.  Let’s compare them to our IOT.  The IOT move command to fill the blocks to 100% was quite fast.  Of course, the reverse, only allowing for 10%, (90% free) took F.O.R.E.V.E.R…, (OK, it sure felt like it…why didn’t I just truncate it?  Oh, yeah, I wanted it to be a real test, not simply an easy test..)

Note: For this test, we’ll rebuild after updating the pctfree each time.
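
For reference, the reorganization between runs looked something like this, (a sketch mirroring the move command used earlier in the post):

-- Sketch: repack the IOT's blocks to the desired fullness, then validate the structure again
ALTER TABLE ORA_TST_IOT MOVE PCTFREE 0;     -- pack blocks as full as possible
-- ALTER TABLE ORA_TST_IOT MOVE PCTFREE 90; -- or leave 90% of each block free for the sparse test
ANALYZE INDEX PK_ORA_IOT VALIDATE STRUCTURE;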

 

10% Fill Factor in SQL Server and 1 million insert: Elapsed time: 23 minutes, 18 seconds

90% PCTFree in Oracle and 1 million insert:  7 min, 12 seconds

100% Fill Factor in SQL Server and 1 million insert: Elapsed Time: 4 minutes, 43 seconds

0% PCTFree in Oracle and 1 million insert: 1 min, 8 seconds

REBUILD of the Oracle IOT to make it 90% free in each block?  Elapsed Time:  8 hrs, 21 minutes, 12 seconds 

…along with the four archive log backups it generated that filled up the archive destination… 🙂  Now, the AWS Trial is meant for testing out the Delphix product, not for testing index performance in a high insert/delete/update scenario, so I was asking for some of these challenges, but it was still a great way to build this out quickly and then compare.

 

Results

In this test, these were the overall results:

  1. Less overhead during transactional processing when it comes to inserts, updates and deletes to the IOT.
  2. Improved performance on any selects that require the data to be sorted in sequential order.
  3. Similar performance to SQL Server clustered indexes on complex queries and less sort temp usage.
  4. A more limited use case than SQL Server clustered indexes, as clustered indexes are quite common while IOTs are less used in the Oracle space.
  5. More maintenance upkeep, as we will need to reorganize the IOT if it’s used with processing that includes a lot of inserts, updates and deletes.
  6. DBCC rebuilds of a clustered index use fewer resources and don’t impact the transaction log the way an IOT rebuild impacts Oracle’s rollback and archive logs.  It was easier to build a new table with a high PCTFREE storage configuration, insert the data and drop the old table than to do an “alter table … move” command.

Now there’s more to do comparisons on, so I’m going to dig in more on the SQL Server side, but here’s to Oracle Index Organized Tables, (IOTs)!

Posted in Oracle, SQLServer

April 5th, 2017 by dbakevlar

I just uploaded my slides from Collaborate 2017 to Slideshare, along with the findings that were missing from the slide deck I used for Oak Table World on the War of the Indices-  Oracle and SQL Server.  Feel free to download them here.

As I tested Oracle vs. SQL Server index performance, I ran across this great script from Franck Pachot to check for fragmentation.  You’ll simply need to update the script to declare the table and index name, adding those two values correctly before running it.

The outcome when run against my ugly index in the testing of the good, the bad and the ugly, was fun, to say the least:

SQL> @index_frag.sql

        C1 ->         C1 rows/block bytes/block %free space     blocks free
---------- -- ---------- ---------- ----------- ----------- ---------- -----
     30041 ->     130221         35         560          93       2851 oooo
    130256 ->     230433         35         560          93       2851 oooo
    230468 ->     330654         35         560          93       2851 oooo
    330689 ->     430874         35         560          93       2851 oooo
    430909 ->     531073         35         560          93       2851 oooo
    531108 ->     631264         35         560          93       2850 oooo
    631299 ->     731429         35         560          93       2850 oooo
    731464 ->     831547         35         560          93       2850 oooo
    831583 ->     931711         35         560          93       2850 oooo
    931746 ->    1030047         34         560          93       2850 oooo

If you have just one or a couple of indexes on which you want to perform some deep fragmentation analysis, Franck’s script does an awesome job.

Just my share for the day…. 🙂

Posted in Oracle

April 5th, 2017 by dbakevlar

This post has a lot of the support code and data for my Oak Table Talk that I’ll be giving at IOUG Collaborate 2017 in Las Vegas on April 5th, 2017.  

One of the Iceland 2017 SQL Saturday sessions got me thinking about indexing and how similar and different it all is in Oracle vs. SQL Server.  There were some really fun, (well, at least what I call fun…) test cases built out and referenced by Paul Randal.  After looking through some of them, I decided it might be interesting to try to replicate them in Oracle, (as closely as possible) and compare how the two database platforms deal with index storage-  specifically, SQL Server’s Fill Factor vs. Oracle’s PCTFREE percentage of each block left unfilled.

B-tree indexing is the cornerstone of physically optimizing searches on data.  No consensus exists on what the “B” stands for, (some think it’s from Bayer, one of the main gentlemen who did the research, and many more believe it’s for Boeing, the research center where the work was done.)

The choices in how the data is organized, leaf blocks and such, are pretty standard, but database platforms have created some unique indexing that enhances queries on an RDBMS vs. just having heap tables.

Using Oracle and SQL Server as our choice for a comparison today, there are a few translations I need for readers of this blog:

Oracle                                 SQL Server                        Description
Index Organized Table, (IOT)           Clustered Index                   Physical index storing the data in its key values; SQL Server allows only one clustered index per table
Pctfree of block                       FillFactor of page                Percent of storage that is allowed to be filled; each platform applies it at different times
Sequence                               TOP                               Ability to populate data with a sequential number
dbms_random.string                     Replicate                         Ability to populate data with string values
Block                                  Page                              Unit of storage
Automatic Workload Repository, (AWR)   Dynamic Management Views, (DMV)   Performance data collection
Now that we have that out of the way, you can use this trusty little table for common terms that require a “translation” from one database platform to the other.

The next thing to remember is that PCTFREE and FillFactor aren’t adhered to at all times.  Appending a row to an index is different than updating a row in an index, and each platform has its own set of criteria to decide whether it follows the rule for what percentage of a block or page to fill.
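
For reference, this is where each setting is declared, (a sketch-  both statements mirror objects created later in this post):

-- Sketch: declaring the free space setting on each platform
-- Oracle: leave 90% of each index block free
ALTER INDEX PK_INDEXPS REBUILD PCTFREE 90;
-- SQL Server equivalent: fill each index page only 10% full
-- ALTER INDEX PK_CLINDX_TST ON SQL_INDEX_TST REBUILD WITH (FILLFACTOR = 10);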

The Test

The steps of this test:

  1. Create a table with three columns and two indexes-  SQL Server having its trusted clustered index.
  2. Populate data of different sizes to each of the tables.
  3. Check the storage “health” of our indexes
  4. Remove data
  5. Repeat step 2 and also remove data
  6. Check the storage again to see how it has changed-  page splits in SQL Server, leaf block splits in Oracle

The goals are:

  1. Inspect the differences and similarities of indexing in both platforms
  2. The pros and cons of how index data is stored and used in both platforms

Oracle Code for Creation of Objects and Support

  • Table and PK with constraint
  • Alter index statement to decrease pctfree
  • Sequence to populate c1 column
  • Trigger to do this in simple way
  • Rinse, repeat, more “shampoo”, and do it all again… 🙂

Yes, I could have just used RowNum, but I was trying to kill a second bird, (testing task) with this one stone, so a trigger with a sequence it is… 🙂

CREATE TABLE ORA_INDEX_TST
(
C1 NUMBER NOT NULL
,C2 VARCHAR2(255)
,CREATEDATE TIMESTAMP
);

CREATE INDEX PK_INDEXPS ON ORA_INDEX_TST (C1);
ALTER TABLE ORA_INDEX_TST ADD CONSTRAINT OIT_PK PRIMARY KEY(C1) USING INDEX PK_INDEXPS;
CREATE UNIQUE INDEX IDX_INDEXPS ON ORA_INDEX_TST(C2);
ALTER INDEX PK_INDEXPS REBUILD PCTFREE 90 INITRANS 5;
ALTER INDEX IDX_INDEXPS REBUILD PCTFREE 90 INITRANS 5;
CREATE SEQUENCE C1_SEQ START WITH 1;

CREATE OR REPLACE TRIGGER C1_BIR 
BEFORE INSERT ON ORA_INDEX_TST 
FOR EACH ROW
BEGIN
  SELECT C1_SEQ.NEXTVAL
  INTO   :new.C1
  FROM   DUAL;
END;
/

We’ll need to manually insert just enough data to fill up one block, which is 8KB in this database, (and default.)

INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('A', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('B', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('C', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('D', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('E', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('F', 200), SYSDATE);
INSERT INTO ORA_INDEX_TST (C2, CREATEDATE) 
VALUES (dbms_random.string('G', 200), SYSDATE);
COMMIT;

We’ll now verify that our data is inserted into one block:

SQL> ANALYZE INDEX PK_INDEXPS VALIDATE STRUCTURE;

SQL> SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS,USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_INDEXPS';
   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
  1        7924       0           1491        19

Code for SQL Server Objects and Support.  Since I didn’t have the same secondary project request, this one will appear simpler:

  • Table with PK with constraint
  • alter index and change fill factor
  • Second Table to use for data population
CREATE TABLE SQL_INDEX_TST (c1 INT NOT NULL, c2 CHAR (255), createdate datetime);
CREATE INDEX CL2_INDEX_TST ON SQL_INDEX_TST(C2);
GO

ALTER TABLE SQL_INDEX_TST 
ADD CONSTRAINT PK_CLINDX_TST PRIMARY KEY CLUSTERED (c1);
GO

First, in SQL Server, a page will hold around 8KB of data, so let’s test out our index storage:

INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (1, 'a');
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (2, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (3, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (4, 'a');
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (5, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (6, 'a'); 
INSERT INTO SQL_INDEX_TST(c1,c2) VALUES (7, 'a'); 
GO

We now have officially “filled” the first page as much as possible, and we should see this if we query the index DMVs:

SELECT
OBJECT_SCHEMA_NAME(ios.object_id) + '.' + OBJECT_NAME(ios.object_id) as table_name, i.name as index_name, leaf_allocation_count, nonleaf_allocation_count, fill_factor, type_desc
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.SQL_INDEX_TST'),NULL, NULL) ios
INNER JOIN sys.indexes i ON i.object_id = ios.object_id AND i.index_id = ios.index_id;
 

Data Loads

Oracle:

SQL> Begin
For IDS in 1..1000000
Loop
INSERT INTO ORA_INDEX_TST (C2) 
VALUES (dbms_random.string('X', 200));
Commit;
End loop;
End; 
  /

10% PCT Free- Time Elapsed 2 minutes, 12 seconds

90% PCT Free- Time Elapsed 7 minutes, 3 seconds

I’ll have both the initial test data and the new 1,000,000 rows I’ve added:

SQL> select count(*) from ora_index_tst;

  COUNT(*)
----------
     1000008

Let’s delete some of this data load to create deleted leaf blocks:

SQL> delete from ora_index_tst
2 where c2 like '%200%';

4179 rows deleted.

SQL> commit;
Commit complete.

Now let’s analyze and take a look at the stats again:

SELECT LF_BLKS, LF_BLK_LEN, DEL_LF_ROWS, USED_SPACE, PCT_USED FROM INDEX_STATS where NAME='PK_INDEXPS';

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
     41227       7924         121  212596009         19

There’s a substantial difference in the number of leaf blocks vs. when the blocks are allowed to fill up, (PCT_USED of 100):

   LF_BLKS LF_BLK_LEN DEL_LF_ROWS USED_SPACE   PCT_USED
---------- ---------- ----------- ---------- ----------
      2004       7996         531   15985741        100

Oracle wasn’t impacted by PCTFREE that much, but there was some impact.  Rebuilds were required to clean up some waste, but it wasn’t a true “requirement”-  just a preference when there are consistent deletes, updates where the data is sized differently than the original, and poor storage choices.  The differences in performance weren’t that significant.

Now that we know we have deleted rows, let’s do the same on the SQL Server side:

SQL Server

declare @id int

select @id = 9 --already inserted 8 rows
while @id >= 0 and @id <= 1000000
begin
   insert into sql_index_tst (c1,c2) values(@id, 'DKUELKJ' + convert(varchar(7), @id))
   select @id = @id + 1
end

Default Fill Factor- Elapsed Time: 4 minutes, 43 seconds

10% Fill Factor- Elapsed time: 23 minutes, 18 seconds

Delete some rows to test similar to Oracle:

DELETE FROM SQL_INDEX_TST WHERE c2 LIKE '%200%';

Now there are a few ways we can look at how the indexes were impacted.  We’ll first check for page splits, which as we’ve discussed, cause extra work to the transaction log and fragmentation in the index:

SELECT
OBJECT_SCHEMA_NAME(ios.object_id) + '.' + OBJECT_NAME(ios.object_id) as table_name
,i.name as index_name
,leaf_allocation_count
,nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.SQL_Index_tst'),NULL, NULL) ios
INNER JOIN sys.indexes i ON i.object_id = ios.object_id AND i.index_id = ios.index_id;

Next, we’ll look at the physical fragmentation of the index:

SELECT
OBJECT_SCHEMA_NAME(ips.object_id) + '.' + OBJECT_NAME(ips.object_id) as table_name
,ips.avg_fragmentation_in_percent
,ips.fragment_count
,ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SQL_Index_tst'), NULL, NULL, 'LIMITED') ips;
GO

There’s significant fragmentation and it also impacted performance as we viewed above.

Index Maintenance

USE AS_test;  

GO

DBCC DBREINDEX ('SQL_INDEX_TST', CL2_INDEX_TST,100);
DBCC DBREINDEX ('SQL_INDEX_TST', PK_CLINDX_TST,100);
GO

We’ve now rebuilt our indexes and moved the fill factor to 100%.  Queries using each index column in the where clause improved by over 20%.  Inserts and updates improved to perform similarly to Oracle, unless….

Sorts on the C1 column with a clustered index in SQL Server improved dramatically and out-performed Oracle’s primary key.  Only IOT tables could compete, and the use case where they were beneficial was very small.

And The Winner Is…

So who won out in my comparison at Oak Table World?  As we always hear from the DBA,

It Depends…

Some of the benefits of clustered indexes in SQL Server are superior to Oracle:

  • Data is physically sorted in the clustered index
  • Optimizer usage specific- clustered index seek
  • Work best with sequential data, identity columns and order dates
  • Option to randomize the writes on the index can deter from hot spots.

There are negatives that leave this debate still open for me:

  • Vulnerable to fragmentation
  • Inserts are added at end and introduce “Last Page Insert Latch Contention” wait event.
  • Subject to hot spots, (see above…)
  • Page Splits-  these hit performance hard, especially the transaction log, (a split requires much more work than a standard insert.)
  • Fillfactor can be a hit or miss configuration for some systems.

There were a lot of other tests and queries I used beyond what is presented here, but this was the main focus of the test.  I need to thank those who have contributed the deep index knowledge that inspired me to research on my own.  Shout out to Richard Foote, Mr. Oracle Index, and to Paul Randal and Jason Strate for the SQL Server expertise!

Posted in Oracle, SQLServer

March 30th, 2017 by dbakevlar

Every year I make the trek to Vegas for the large Collaborate conference and 2017 is no different!

It’s a busy schedule for me at Delphix, so buckle up and hang on for this next week’s events!

Sunday, April 2nd, 9am-1pm

Pre-Conference Hands on Lab

Location: South Seas C  Session# 666

Registration is required for this special conference event and it will give the registered attendees a chance to get hands on experience with databases with virtualization.  This is an important skill set for Database Administrators as they discover how easy it is to migrate to the cloud with virtualized environments and for developers and testers, as they discover this is how to get those pesky, monstrous databases to move at the same speed as DevOps.

Monday, April 3rd, 1:30-2:30pm

Database Patching and Upgrading Using Virtualization, (IOUGenius Session)

Location:  Jasmine C  Session# 149

Tuesday, April 4th, ALL DAY– OK, for the entire time the exhibitor area is up! 🙂

Delphix Booth, doing demos and running amok!  

Location: Booth 1210

Wednesday, April 5th, 9:45-10:45am

Virtualization 101 and Q&A

Location: Jasmine C  Session# 201

Wednesday, April 5th, ALL DAY

Oak Table World

Location: South Seas C

1:15 Session from me!

Oracle vs. SQL Server-  Page Splits, Leaf Blocks and Other Wars

In case you haven’t noticed, if you come to an Oak Table World session, (and really, why would you miss any of the phenomenal Oakies speaking??) you can put your name in for a raffle for a (difficult to find) NES system.  Yeah, entertain me, it took me a bit to track this game console down for the event!

For a complete list of Delphix sessions, see the following schedule:

9:00 AM – 1:00 PM – South Seas C
Data Virtualization Hands-On Lab (pre-registration required)
Presenters: Kellyn Pot’Vin-Gorman and Leighton Nelson

1:30 PM – 2:30 PM – Jasmine C
Database Patching and Upgrading Using Virtualization
Presenter: Kellyn Pot’Vin-Gorman

9:30 AM – 4:15 PM – Exhibitor Showcase

4:15 PM – 5:15 PM – Palm D
Linux/UNIX Tools for Oracle DBAs
Presenter: Tim Gorman

9:45 AM – 10:45 AM – Jasmine C
Virtualization 101 and Q&A
Presenter: Kellyn Pot’Vin-Gorman

5:15 PM – 8:00 PM – Exhibitor Showcase

2:45 PM – 3:45 PM – Banyan E
OAUG Archive & Purge SIG Session
Presenter: Brian Bent

10:45 AM – 4:15 PM – Exhibitor Showcase

8:30 AM – 9:30 AM – Palm A
Migration Enterprise Applications to the Cloud
Presenter: Leighton Nelson

5:15 PM – 7:00 PM – Exhibitor Showcase

See you next week in Vegas!!

 

Posted in Oracle

March 30th, 2017 by dbakevlar

Azure is the second most popular cloud platform to date, so it’s naturally the second platform Delphix is going to support on our road to the cloud.  As I start to work through the options for deploying Delphix, there are complexities in Azure I need to educate myself on.  As we’re just starting out, there’s a lot to learn and a lot of automation we can take advantage of.  It’s an excellent time for me to get up to speed with this cloud platform, so hopefully everyone will learn right along with me!

We’ll be using Terraform to deploy to Azure, just as we prefer to use it for our AWS deployments.  It’s open source, very robust and has significant support in the community, so we’ll switch from cloud setup to Terraform prep in many of these posts.  Before we can do that, we need to set up our Azure environment after we’ve registered our subscription with Azure.

Azure Consoles

There are the New and the Classic consoles for Azure, but there are also items in the modern, New console that are marked “Classic” yet aren’t part of the actual Classic console.  I found this a bit confusing, so it’s good to make the distinction.

Azure’s “New” Portal, with its modern, sleek design

Azure’s “Classic” management interface, with its pale blue and white color scheme, which still serves a very significant purpose

Once you’ve created your Azure account, you’ll find that you need access to the Classic console to perform many of the basic setup tasks, where the Modern console is better for advanced administration.

Preparation is Key

There are a number of steps you’ll need to perform in preparation for deploying Delphix to Azure.  The Delphix engine, a source and a target are our goal, so we’ll start simple and work our way out.  Let’s see how much I can figure out and how much I may need to lean on others more experienced to get me through.  No matter what, you’ll need both consoles, so keep the links above handy; I’ll refer to the consoles as “New” and “Classic” to help distinguish them as I go along.  Know that in this post, we’ll spend most of our time in the Classic console.

Set up an Account and Add Web App

If you don’t already have one, Microsoft will let you set up an account and even give you $200 in free credits to use.  Once you sign up, then you need to know where to go next.  This is where the “Classic” console comes in, as you need to set up your “application” that will be used for your deployment.

Log into the “Classic” console and click on Active Directory and the Default Directory highlighted in blue.  This will open a new page and you will have the opportunity to click Add at the bottom of the page to add a new Active Directory application.

  • Name the Application, (open up a text editor, copy and paste the name of the app into it, you’ll need this data later)
  • The application type is web app or api
  • Enter a URL/URI and yes, they can be made up.  They don’t have to be real.

Client and Client Secret

Now that your application is created, you’ll see a tab called Configure.  Click on this tab and you’ll see the client ID displayed.  Copy the Client ID and add that to your text editor, also for later.

Scroll down and you’ll see a section called Keys.  Click on the button that says “Select Duration” and choose 1 or 2 years.  At the very bottom of the screen, you’ll see a Save button, click it and then the Secret passcode will be displayed for you to copy and paste into your text editor.  Do this now, as you won’t be able to get to it later.

Tenant ID

To the left of the Save button, you’ll see “View Endpoints”.  Click on this and you’ll see a number of entries.  The tenant ID is the value repeated at the end of each of the entries.  An example is shown below:

Copy and paste this into your text editor under a heading of tenant ID.

Add Application to the Active Directory

Now that you’ve created this framework, you need to grant permissions to use it all.  In the Configure tab, scroll to the very bottom where it says “Permissions to Other Applications” and click on Add Application.  Choose the Azure Service Management API from the list, (if you have a new account, you won’t have much to choose from) and click on the checkmark in the lower right corner of the pane.  This will return you to the previous page.  Click on the delegated permissions and choose to grant it “Access Azure Service Management as organization users”, then save.

Subscription Data

Now, log into the New portal and click on Subscriptions on the left-hand side.  Click on your subscription and it will open to display your Subscription ID, which you’ll need to copy and paste into your text editor.

Click on Access Control, (IAM) and then click on Add.  You may only see your username at first, but the applications are there-  they just aren’t displayed by default.  Type in the application name that you saved in your text editor, (for example, mine is Web_K_Terra.)  Reminder-  you must type the name of your app exactly as you entered it when you created it, (it is case sensitive.)  Grant the Reader and Contributor roles from the role list, saving between each additional role.

You should now see your application in the list with both roles assigned to it, like the example below for the Web_K_Terra app:
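For the CLI route, the subscription ID and both role assignments can be scripted as well, (a sketch only- substitute the client ID and subscription ID from your text editor):

az account show --query id --output tsv
az ad sp create --id <client ID>   # only needed if the app doesn't already have a service principal
az role assignment create --assignee <client ID> --role Reader --scope /subscriptions/<subscription ID>
az role assignment create --assignee <client ID> --role Contributor --scope /subscriptions/<subscription ID>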

Our configuration is complete and ready to go onto the networking piece.

The first part of my Terraform template is ready, too.  All the pertinent data that I gathered from the build out has been added to the azurerm provider block, and it looks something like the following:

provider "azurerm" {
  subscription_id = "gxxxxxxx-db34-4gi7-xxxxx-9k31xxxxxxxxxp2"
  client_id       = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
  client_secret   = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="
  tenant_id       = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
}
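One note on the template- rather than pasting the client secret straight into the provider block, Terraform lets you declare variables and keep the sensitive values in a separate terraform.tfvars file.  A minimal sketch, assuming the azurerm provider:

variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}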

This is a great start to getting us out on Azure.  In part II, we’ll talk about setting up connectivity between your desktop and Azure for remote access, along with recommendations for tools to access it locally.

 

Posted in Azure, Oracle Tagged with: , ,

March 21st, 2017 by dbakevlar

I ended up speaking at two events this last week.  Now if timezones and flights weren’t enough to confuse someone, I was speaking at both an Oracle AND a SQL Server event- yeah, that’s how I roll these days.

Utah Oracle User Group, (UTOUG)

I arrived last Sunday in Salt Lake, which is like a slightly milder-weather, more conservative version of Colorado, to speak at UTOUG’s Spring Training Days Conference.  I love this location and the weather was remarkable, yet even with the warm temps, skiing was still only a half-hour drive from the city.  Many of the speakers and attendees took advantage of this opportunity by doing just that while visiting.  I chose to hang out with Michelle Kolbe and Lori Lorusso.  I had a great time at the event, and although I was only onsite for 48 hours, I really like having this event so close to my home state.

I presented on Virtualization 101 for DBAs and it was a well-attended session.  I really loved how many questions I received and how curious the database community has become about how virtualization is the key to moving to the cloud seamlessly.

There are significant take-aways from UTOUG.  The user group, although small, is well cared for, and the event uses some of the best tools to ensure they get the best bang for the buck.  It’s well organized, and I applaud all that Michelle does to keep everyone engaged.  It’s not an easy endeavor, yet she takes on this challenge with gusto and with much success.

SQL Saturday Iceland

After spending Wednesday at home, I was back at the airport to head to Reykjavik, Iceland for their SQL Saturday.  I’ve visited Iceland a couple of times now, and if you aren’t aware of this, IcelandAir offers up to 7-day layovers in Iceland before you continue on to your final destination.  Tim and I took advantage of this perk on one of our trips to OUGN, (Norway) and it was a great way to visit some of this incredible country.  When the notification arrived for SQL Saturday Iceland, I promptly submitted my abstracts and crossed my fingers.  Lucky for me, they accepted my abstract and I was offered the chance to speak with this great SQL Server user group.

After arriving before 7am on Friday morning at Keflavik airport, I realized that I wouldn’t have a hotel room ready for me, no matter how much I wanted to sleep.  Luckily, there’s a great article on the “I Love Reykjavik” site offering inside info on what to do if you show up early.  I was able to use the FlyBus to get a shuttle directly to and from my hotel, (all you have to do is ask the front desk to call them the night before you leave and they’ll pick you up in front of your hotel 3 hrs before your flight.)  Once I arrived, I was able to check my bags with the front desk and head out into town.

I stayed at Hlemmur Square, which was central to the town and the event and next to almost all of the buses throughout the city.  The main street in front of it, Laugavegur, is one of the main streets that runs East-West and is very walkable.  Right across the street from the hotel was a very “memorable” museum, the Phallological Museum.  I’m not going to link to it or post any pictures, but if you’re curious, I’ll warn you, it’s NSFW, even if it’s very, uhm…educational.  It was recommended by a few folks on Twitter and it did ensure I stayed awake after only 2 hours of sleep in 24 hours!

As I wandered about town, I noted a few things about Iceland-  the graffiti murals are really awesome, and Icelandic folks like good quality products-  the stores housed local and international goods, often made from wool, wood, quality metals and such.  The city’s parliament building is easily accessible, right across from the main shopping area and new city development.

On Saturday, I was quick to arrive at Iceland’s SQL Saturday, as I had a full list of sessions I wanted to attend.  I was starting to feel the effects of Iceland weather on my joints, but I was going to make sure I got the most out of the event.  I had connected with a couple of the speakers at the dinner the night before, but with jet lag, you hope you’ll make a better impression on the day of the event.

I had the opportunity to learn about the most common challenges with SQL Server 2016, including that Dynamic Data Masking isn’t an enterprise masking solution.  Due to the lack of discovery tools, the ability to join to non-masked objects, and common values, (i.e. if 80% of the data is local, the most common location value is easily identified) the confidential data behind masked objects can still be identified.

I also enjoyed an introduction to containers with SQL Server and security challenges.  The opening slide from Andy says it all:

Makes you proud to be an American, doesn’t it? 🙂

My session was in the afternoon, and we not only had excellent discussions on how to empower database environments with virtualization, but I even did a few quick demonstrations of the ease of cloud management with AWS and Oracle…yes, to SQL Server DBAs.  It was interesting for them to see not just the ease of cloud management, but how easy it was for me to manage Oracle through the interface.  I performed all validations of data refreshes from the command line, so there was no doubt that I was working in Oracle, yet the refreshes and such were done in AWS with the Delphix admin console.

I made it through the last session on the introduction to containers with SQL Server, which included a really interesting demonstration of a SQL Server container running sans an OS installation, allowing it to run with very limited resource requirements on a Mac.  After this session was over, I was thankful that two of my fellow presenters were willing to drop me off at my hotel, where I promptly collapsed in slumber, ready to return home.  I was sorry to miss the after-event dinner and drinks, but learned that although I love Iceland, a few days and some extra recovery time may be required.

Thank you to everyone at Utah Oracle User Group and Iceland’s SQL Server User Group for having me as a guest at your wonderful events.  If you need me, I’ll be taking a nap… 🙂

 

Posted in DBA Life, Oracle, SQLServer Tagged with: , ,

March 20th, 2017 by dbakevlar

Now that I’ve loaded a ton of transactions and run a bunch of workload on my source database with the SH sample schema and Swingbench, and noted how little impact there was to the databases using different cloud tools, (which will come in a few later posts) I’m going to show you how easy it is to create a new VDB from all of this, WITH the new SH data included.  During all of this time, the primary users of my Delphix VDBs, (virtualized databases) would have been working from the previous image, but someone wants that new SH schema now that my testing has completed.

To do this, I open up my Delphix admin console, (using the IP address for the Delphix Engine from the AWS Trial build output), log in as delphix_admin and open up the Source group to access the Employee Oracle 11g database, (aka ORCL.)

I know my new load is complete on the ORCL database, and I need to take a new snapshot to update the Delphix Engine outside of the standard refresh interval, (I’m set at the default of every 24 hrs.)  I access this by clicking on the Configuration tab, and to take a snapshot, I simply click on the camera icon.

A snapshot takes only a couple of seconds, as this is a very, very small database, (2.3G.)  You can then click on TimeFlow to view the new snapshot available for use.  Ensure the new snapshot is chosen by moving the slider all the way to the right and checking the timestamp, making sure it’s the latest and matches your recent snapshot.

Click on Provision and it will default to the Source host.  Change it to the target, update to a new, preferred database name, (if you don’t like the default) and then you may have to scroll down to see the Next button to step through the wizard.  I know my Macbook has a smaller screen and I do have to scroll to see the Next button.  After you’ve made any other changes, click on Finish and let the job run.  Don’t be surprised by the speed with which a VDB is provisioned-  I know it’s really fast, but it really did create a new VDB!

Now that we have it, let’s connect to it from SQL*Plus and prove that we got the new SH schema over.

Using the IP address for the Linux target that was given to us in our AWS Trial build, let’s connect:

ssh delphix@<linuxtarget IP Address>

Did you really just create a whole new VDB?

[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix   1148  1131  0 18:57 pts/0    00:00:00 grep pmon
delphix  16825     1  0 Mar09 ?        00:00:06 ora_pmon_devdb
delphix  16848     1  0 Mar09 ?        00:00:06 ora_pmon_qadb
delphix  31479     1  0 18:30 ?        00:00:00 ora_pmon_VEmp6C0

Yep, there it is…

Now let’s connect to it.

Set our environment:

. 11g.env

Set the ORACLE_SID to the new VDB

export ORACLE_SID=VEmp6C0

Connect with SQL*Plus as our SH user, using the password we set when creating the schema on the source database, ORCL:

$ sqlplus sh

Enter password: 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select object_type, count(*) from user_objects
  2  group by object_type;

OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16

8 rows selected.

SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%'
  3  group by table_name
  4  order by table_name;

TABLE_NAME                     SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV             48
CHANNELS                       5
COSTS                          164224
COUNTRIES                      23
CUSTOMERS                      55500
FWEEK_PSCAT_SALES_MV           11266
PRODUCTS                       72
PROMOTIONS                     503
SALES                          1837686
SALES_TRANSACTIONS_EXT         916039
SUPPLEMENTARY_DEMOGRAPHICS     4500
TIMES                          1826

12 rows selected.

Well, lookie there, the same as the source database we loaded earlier... 🙂  Next, we’ll go into the stuff that always gets my interest- performance data, the cloud and visualization tools.

 

Posted in AWS Trial, Oracle Tagged with: ,

March 13th, 2017 by dbakevlar

Swingbench is one of the best choices for putting an easy load on a database.  I wanted to use it against the SH sample schema I loaded into my Oracle source database, and I haven’t used Swingbench outside of the command line in quite a while, (my databases always seem to come with a load on them already!) so it was time to update my Swingbench skills and catch up with the user interface.  Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.

After granting the application rights to run on my Macbook Pro, I was impressed by the clean and complete interface.  I wanted to connect it to my AWS instance, and as we often discuss, the cloud is a much simpler change than most DBAs first consider.

When first launched, Swingbench prompts you to choose which pre-configured workload you’d like to use.  I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then had to perform a few simple configurations to get it to run.

Username: sh

Password: <password for your sh user>

Connect String: <IP Address for AWS Instance>:<DB Port>:<SID>, (or <IP Address for AWS Instance>:<DB Port>/<service name> if you’re connecting by service name)

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME  <Oracle Home>

I chose the default 16 connections to start out, but you can add more if you’d like.  You can also configure stats collection and snapshot collection before and after the workload.

I set autoconnect to true, but the default is to not start the load until you hit the green arrow button.  The load will then execute the workload with the number of connections requested until you hit the red stop button.  You should see the users logged in at the bottom right and in the events window:
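If you’d rather skip the GUI, the same workload can be kicked off with charbench, which ships in the swingbench bin directory.  This is only a sketch-  the config file path, connect string, user count and runtime are placeholder values, and it’s worth double-checking the flags against the Swingbench documentation for your version:

./charbench -c <path to your Sales History config .xml> \
  -cs //<IP Address for AWS Instance>:<DB Port>/<service name> \
  -u sh -p <password for your sh user> \
  -uc 16 -rt 0:30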

Next post we’ll discuss what you’ll see when running a Swingbench on a source database, the Delphix Engine host and subsequently refreshes to a VDB, (virtual database.)  We’ll also discuss other tools that can grant you visibility to optimization opportunities in the cloud.

 

 

Posted in AWS Trial, Cloud, Oracle Tagged with: , ,

March 10th, 2017 by dbakevlar

Most people know I like to do things the hard way… 🙂

When it comes to learning things, there’s often no better way to get a strong understanding than to have to take something apart and put it back together.  I wanted to use swingbench on my AWS deployment and needed the Sales History, (SH) sample schema.  I don’t have an interface to perform the installation via the configuration manager, so I was going to install it manually.

Surprise, the scripts in the $ORACLE_HOME/demo/schema/sh directory were missing.  There are a couple options to solve this dilemma.  Mine was to first get the sample schemas.  You can retrieve them from a handy GitHub repository found here, maintained by Gerald Venzl.

I downloaded the entire demo scripts directory and then SCP’d them up to my AWS host.

scp db-sample-schemas-master.zip delphix@<IP Address>:/home/oracle/.

Next, I extracted the files to the $ORACLE_HOME/demo directory.

unzip db-sample-schemas-master.zip

The unzip creates a directory named db-sample-schemas-master, which is fine with me, as I like to retain the previous one, (I’m a DBA-  keeping copies of data until I’m sure I don’t need it is my life.)

mv schema schema_kp
mv db-sample-schemas-master schema

With that change, everything is now as it should be.  One thing you’ll find out is that the download is for 12c, and I’m alright with this, as the Swingbench workload I’m using expects the 12c SH schema, too.  Not that I expect any differences, but as Jeff Smith was all too happy to remind me on Facebook, I’m using a decade-old version of Oracle on my image here.

There are a lot of scripts in the sales_history folder, but all you’ll need to run is sh_main.sql from SQL*Plus as SYSDBA to create the SH schema.
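For reference, the run itself is nothing more than the following, (a minimal sketch, assuming you renamed the download as described above and start from the sales_history directory so the scripts and .dat files are found):

cd $ORACLE_HOME/demo/schema/sales_history
sqlplus / as sysdba
SQL> @sh_main.sql

sh_main.sql will then prompt you for the eight parameters described below.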

You’ll enter parameter values to create the SH schema manually, and some of the prompts may not mean what you’d assume.  Since I’ve seen very little written about them, (even after all these years of this existing) this may help others out:

specify password for SH as parameter 1:

Self-explanatory-  what password would you like the SH user to have.

specify default tablespace for SH as parameter 2:

What tablespace do you want this created in?  I chose Users, as this is just my play database.

specify temporary tablespace for SH as parameter 3:

Temp was mine and is the common value for this prompt.

specify password for SYS as parameter 4:

This is the password for SYSTEM, not SYS, btw.

specify directory path for the data files as parameter 5:  

This is not the path to Oracle datafiles; it’s the path to your SH directory, ($ORACLE_HOME/demo/schema/sales_history/) for access to the control files and .dat files for SQL*Loader.  Remember to have a slash at the end of the path name.

writeable directory path for the log files as parameter 6:

A directory for the log files-  I put this in the same directory.  Remember to use a slash at the end, or your log files will have the previous directory name prepended to the file name and will save one directory up.

specify version as parameter 7:

This isn’t the version of the database, but the version of the sample schema-  the one from Github is “v3”.

specify connect string as parameter 8:

Pretty clear-  the service name or connect string for the database that the schema is being created in.

Error Will Robinson, Error!

I then ran into some errors, but it was pretty easy to view the log and then the script and see why:

SP2-0310: unable to open file "__SUB__CWD__/sales_history/psh_v3"

Well, the scripts, (psh_v3.sql, lsh_v3.sql and csh_v3.sql) called by sh_main.sql are referenced with a __SUB__CWD__/sales_history/ prefix, a placeholder path that doesn’t exist in this 11g environment, so we need to get rid of it.

View sh_main.sql and you’ll see three paths to update.  Below is an example of one section of the script; the portion to remove is the __SUB__CWD__/sales_history/ prefix:

REM Post load operations
REM =======================================================
DEFINE vscript = __SUB__CWD__/sales_history/psh_&vrs

@&vscript

The DEFINE will now look like the following, so the scripts are found in the sales_history directory you’re running sh_main.sql from:

DEFINE vscript = psh_&vrs
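If you’d rather not edit the three DEFINE lines by hand, a quick sed against sh_main.sql does the same thing, (a sketch, run from the sales_history directory; the -i.bak keeps a backup of the original):

sed -i.bak 's|__SUB__CWD__/sales_history/||g' sh_main.sql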

Once you’ve saved your changes, you can simply re-run sh_main.sql, as it drops the sample schema before it creates it.  If no other changes need to be made to your parameters, just execute sh_main.sql again; if you need to change the parameter values you entered, it’s quickest to exit SQL*Plus and reconnect to unset them.

Are We There Yet?

Verify that there weren’t any errors in your $RUN_HOME/sh_v3.log file and if all was successful, then connect as the SH user with SQL*Plus and check the schema:

SQL> select object_type, count(*) from user_objects
  2  group by object_type;
OBJECT_TYPE       COUNT(*)
------------------- ----------
INDEX PARTITION     196
TABLE PARTITION     56
LOB                 2
DIMENSION           5
MATERIALIZED VIEW   2
INDEX               30
VIEW                1
TABLE               16
SQL> select table_name, sum(num_rows) from user_tab_statistics
  2  where table_name not like 'DR$%' --Dimensional Index Transformation
  3  group by table_name
  4  order by table_name;
TABLE_NAME       SUM(NUM_ROWS)
------------------------------ -------------
CAL_MONTH_SALES_MV         48
CHANNELS                   5
COSTS                      164224
COUNTRIES                  23
CUSTOMERS                  55500
FWEEK_PSCAT_SALES_MV       11266
PRODUCTS                   72
PROMOTIONS                 503
SALES                      1837686
SALES_TRANSACTIONS_EXT     916039
SUPPLEMENTARY_DEMOGRAPHICS 4500
TIMES                      1826

And now we’re ready to run Swingbench Sales History against our AWS instance to collect performance data.  I’ll try to blog on Swingbench connections and data collection next time.

See everyone at UTOUG for Spring Training Days on Monday, March 13th, and at the end of the week I’ll be in Iceland at SQL Saturday #602!

Posted in Cloud, Oracle Tagged with: , ,

February 20th, 2017 by dbakevlar

We’ve been working hard to create an incredible new trial version of Delphix that runs on AWS and is built with the open source product Terraform.  Terraform is a tool that anyone can use to build, version and manage infrastructure effectively and seamlessly across a number of clouds.  We are currently using it to deploy to AWS, but there’s a bright future for these types of open source products, and I’m really impressed with how easy it’s made it to deploy compute instances, the Delphix Engine and supporting architecture on AWS EC2.  If you’d like to read up on Terraform, check out their website.

The Delphix Admin Console and Faults

After building out the Delphix environment with the Engine and a Linux source/target, the first step for many is to log into the Delphix admin console.  You can view any faults from the build in the upper right corner under Faults.  One error that I’ve noticed comes up after a successful build is the following:

AWS Console to the Rescue

By logging into your AWS EC2 console, you can view the instances that are being used.  As you’ll note, the error says that the Delphix Engine is using an unsupported instance type, m4.large.  Yet in our EC2 console, we can see from the Delphix Engine, (last in the list and with the name ending in “DE”) that no, actually it isn’t.

It’s actually an m4.xlarge instance type.  What’s even more interesting is that the Linux target, (LT) and Linux source, (LS) are both m4.large instance types, yet no warning was issued for either of these instances as unsupported.
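You can confirm the instance types outside the console as well; a quick sketch with the AWS CLI, (assuming your credentials and default region are already configured) lists each instance’s Name tag alongside its type:

aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0], InstanceType]' \
  --output table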

AWS EC2 Supported Instance Types

You can locate which instance types are supported for AWS EC2 at the following link.  On that page, we can see that both the m4.large and the m4.xlarge instance types ARE SUPPORTED.

Knowing that the instance type is validated as supported means I can safely ignore the fault and proceed to work through the trial without worry.

If you’re planning on deploying a production Delphix Engine on AWS, inspect the following document to ensure you build it with the proper configuration.

Nothing to see here, and I thought I’d better let everyone know before someone lumps Amazon in with CNN… 🙂

Posted in AWS Trial, Oracle Tagged with: ,

February 10th, 2017 by dbakevlar

I ran across a 2013 article from Straight Talk on Agile Development by Alex Kuznetsov, and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.

Getting comfortable with a virtualized environment can be an odd experience for most DBAs, but as soon as we recognize how similar it is to a standard environment, we stop over-thinking it, and it becomes quite simple to implement agile, even with petabytes of data in a relational environment, without using slow and archaic processes.

The second effect of this is realizing that we may start to acquire secondary responsibilities, taking on the job of ensuring that all tiers of the existing environment are consistently developed and tested, not just the database.

A Virtual Can Be Virtualized

Don’t worry-  I’m going to show you that it’s not that difficult, and virtualization makes it really easy to do all of this, especially when you have products like Delphix to support your IT environment.  For our example, we’re going to use our trusty AWS Trial environment, where we have already provisioned a virtual QA database.  We want to create a copy of our development virtualized web application to test some changes we’ve made and connect it to this new QA VDB.

From the Delphix Admin Console, go to Dev Copies and expand to view those available.  Click on Employee Web Application, Dev VFiles Running.  Under TimeFlow, you will see a number of snapshots that have been taken on a regular interval.  Click on one and click on Provision.

Now this is where you need the information about your virtual database that you wish to connect to:

  1.  Switch the provision target from the Source host to the Linux Target.

Currently the default is to connect to the existing development database, but we want to connect to the new QA VDB we wish to test on.  You can ssh as delphix@<ipaddress for linuxtarget> to connect and gather this information.

2.  Gathering Information When You Didn’t Beforehand

I’ve created a new VDB to test against, with the idea that I wouldn’t want to confiscate an existing VDB from any of my developers or testers.  The new VDB is called EmpQAV1.  Now, if you’re like me, you didn’t remember to grab the info about this new database before you went into the wizard to begin the provisioning.  No big deal-  we’ll just log into the target and get it:

[delphix@linuxtarget ~]$ . 11g.env
[delphix@linuxtarget ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/db_1
[delphix@linuxtarget ~]$ ps -ef | grep pmon
delphix  14816     1  0 17:23 ?        00:00:00 ora_pmon_devdb
delphix  14817     1  0 17:23 ?        00:00:00 ora_pmon_qadb
delphix  17832     1  0 17:32 ?        00:00:00 ora_pmon_EmpQAV1
delphix  19935 19888  0 18:02 pts/0    00:00:00 grep pmon

I can now set my ORACLE_SID:

[delphix@linuxtarget ~]$ export ORACLE_SID=EmpQAV1

Now let’s gather the rest of the information we’ll need to connect to the new database by checking the listener services:

[delphix@linuxtarget ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-FEB-2017 18:13:06
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "EmpQAV1" has 1 instance(s).
  Instance "EmpQAV1", status READY, has 1 handler(s) for this service...

Provision Your VFile

Fill in all the values required in the next section of the provisioning setup:

Click Next and add any requirements to match your vfile configuration that you had for the existing environment.  For this one, there aren’t any, (additional NFS Mount points, etc.)  Then click Next and Finish.

The VFile creation should take a couple minutes max and you should now see an environment that looks similar to the following:

This is a fully functional copy of your web application, created from another virtual copy that can test against a virtual database, ensuring that all aspects of a development project are tested thoroughly before releasing to production!

Why would you choose to do anything less?

 

 

 

Posted in AWS Trial, Delphix, Oracle Tagged with: ,

February 8th, 2017 by dbakevlar

There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrive to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped for a successful deployment.

The EC2 Dashboard View

After you’ve selected your location and set up your security user/group and key pairs, there’s a quick way to see, (at least at a high level) if you’re ready to deploy the AWS Trial to the zone in question.

Go to your EC2 Dashboard and to the location, (Zone) that you plan to deploy your trial to and you should see the following:

Notice in the dashboard that the key pairs, (1) and the expected security groups, (3) are displayed, which tells us that we’re ready to deploy to this zone.  If we double-click on the key pair, we’ll see that it matches the one we downloaded locally and will use in our configuration with Terraform:

How Terraform Communicates with AWS

These are essential to deploying in an AWS region that’s configured as part of your .tfvars file for Terraform.  You’ll note in the example below that we have designated both the correct region and the key pair belonging to it that we’ll use to authenticate:


#VERSION=004
#this file should be named terraform.tfvars
# ENTER INPUTS BELOW
access_key="XXXXXXX"
secret_key="XXXXXXXXXX"
aws_region="us-east-1"
your_ip="xxx.xx.xxx.xxx"
key_name="Delphix_east1" #don't include .pem in the key name
instance_name="Delphix_AWS"
community_username="xxx@delphix.com"
community_password="password"
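Once terraform.tfvars is filled in, the deployment itself is just a couple of commands from the directory holding the trial template, (a sketch-  newer Terraform releases will also want a terraform init first):

terraform plan    # shows what will be built from your tfvars inputs
terraform apply   # deploys the Delphix Engine, source and target to AWS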

Hopefully this is a helpful first step in understanding how zones, key pairs and security groups interact to support the configuration file, (tfvars) file that we use with the Delphix deployment via Terraform into AWS.

 

Posted in AWS Trial, Delphix, Oracle Tagged with: ,

January 31st, 2017 by dbakevlar

I’ve been at Delphix for just over six months now.  In that time, I worked with a number of great people on initiatives surrounding competitive analysis, the company roadmap and some new projects.  With the introduction of our CEO, Chris Cook, new CMO, Michelle Kerr, and other pivotal positions within this growing company, it became apparent that we’d be redirecting our focus on Delphix’s message and connections within the community.

I was still quite involved in the community, even though my speaking had been trimmed down considerably with the other demands at Delphix.  Even though I wasn’t submitting abstracts to as many of the big events as I had in previous years, I still spoke at 2-3 events each month during the fall and made clear introductions into the Test Data Management and Agile communities, along with a re-introduction into the SQL Server community.

As of yesterday, my role was enhanced so that evangelism, which was previously 10% of my allocation, is now upwards of 80%, as the Technical Evangelist for the Office of the CTO at Delphix.  I’m thrilled that I’ll be speaking, engaging and blogging with the community at a level I’ve never done before.  I’ll be joined by the AWESOME Adam Bowen, (@CloudSurgeon on Twitter) in his role as Strategic Advisor, as the first members of this new group at Delphix.  I would like to thank all those who supported me in gaining this position, and the vision of the management team in seeing the value of those in the community who make technology successful day in and day out.

I’ve always been impressed with organizations that recognize grassroots evangelism and the power it has in the industry.  What will Adam and I be doing?  Our CEO, Chris Cook, said it best in his announcement:

As members of the [Office of CTO], Adam and Kellyn will function as executives with our customers, prospects and at market facing events.  They will evangelize the direction and values of Delphix; old, current, and new industry trends; and act as a customer advocate/sponsor, when needed.  They will connect identified trends back into Marketing and Engineering to help shape our message and product direction.  In this role, Adam and Kellyn will drive thought leadership and market awareness of Delphix by representing the company at high leverage, high impact events and meetings. []

As many of you know, I’m persistent but rarely patient, so I’ve already started to fulfill my role-  be prepared for some awesome new content, events that I’ll be speaking at and new initiatives.  The first on our list was releasing the new Delphix Trial via the Amazon cloud.  You’ll have the opportunity to read a number of great posts to help you feel like an Amazon guru, even if you’re brand new to the cloud.  In the upcoming months, watch for new features, stories and platforms that we’ll introduce you to.  This delivery system, using Terraform, (thanks to Adam) is the coolest and easiest way for anyone to try out Delphix with their own AWS account and start to learn the power of Delphix with use case studies directed to their role in the IT organization.

Posted in AWS Trial, DBA Life, Delphix, Oracle Tagged with: ,
