There are plenty of mishaps from the early space program that demonstrate the need for DevOps, but one in particular, fifty-five years ago this month, is often held up as the example for all of them. A simple human error almost ended the entire American space program, and it serves as a strong example of why DevOps is essential as agile speeds up the development cycle.
The Mariner I space probe marked a pivotal point in the space race between the United States and the Soviet Union. It was the first of a series of large, sophisticated interplanetary missions, all to carry the Mariner moniker. For the venture to launch (pun intended), it depended on a huge new development project for a powerful booster rocket, the Atlas-Centaur. That development program ran into so many testing failures that NASA ended up dropping the initial project and going with a less sophisticated booster to meet the release date (i.e., features dropped from the project). The new probe designs were based on the previously used Ranger moon probes, so less testing was thought necessary, and the Atlas-Agena B booster was born, bringing the Mariner project down to a comparatively meager cost of $80 million.
The goal of the Mariner program was to perform unmanned missions to Mars, Venus and Mercury. The probe was equipped with solar cells on its wings to power the voyage, all new technology, but the booster, required to escape Earth's gravity, was an essential part of the project. Because the booster was based on older technology than many of the newer features, it didn't receive the same attention during testing.
On July 22nd, 1962, the Mariner I lifted off, but approximately four minutes in, it veered off course. NASA made the fateful decision to terminate the flight, destroying millions of dollars of equipment to ensure the rocket didn't crash on its own into a populated area.
As has already been well documented, the guidance system that was supposed to correct Mariner 1's flight had a single typo in the entire coded program: a missing overbar, popularly retold as a hyphen, required for the instructions that adjusted the flight path. Where it should have read "R-dot-bar sub n" (the smoothed value), it instead read "R-dot sub n" (the raw value). This minor change caused the program to over-correct for small velocity changes and send erratic steering commands to the spacecraft.
This missing hyphen caused a loss of millions of dollars in the space program and is considered the most expensive hyphen in history.
How does this feed into the DevOps scenario?
Missing release dates for software can cost companies millions of dollars, but so can the smallest typo. Reusing code and automating the build, along with proper policies, processes and collaboration throughout the development cycle, ensures that code isn't just well written but, even in shortened development cycles, fully reviewed and tested before release. When releases are delivered in smaller increments, a feedback loop ensures errors are caught early, before they make it into production.
Doing three or four webinars in a month doesn’t seem like a big deal until you actually try to do it…and present at two or three events and make sure you do everything for your job outside of that, too. Suddenly you find yourself scrambling to keep up, but I’m known for taking on a few too many things at once… 🙂
Tomorrow I'll be presenting at 24HOP, also known as the 24 Hours of PASS, the webinar preview of the PASS Summit conference. This is a free web event for anyone who registers, so whether you're an Oracle or a SQL Server person, I highly recommend taking advantage of this awesome opportunity to get a taste of what's coming from Microsoft's annual event this fall in Seattle!
On July 25th, Delphix will be hosting a webinar with me presenting on DevOps for the DBA. This session is different from the one I'll be doing at 24HOP or have given at other events. I like to continually update and add to my content, and this time I'm taking direct feedback from those previous sessions and building out the talk to answer the pesky questions the DBA community has been asking.
Needless to say, if you're a DBA, these are two free, worthwhile online events in the next two weeks where you can gain more knowledge!
Database Administrators (DBAs), through their own self-promotion, will tell you they're the smartest people in the room, and being such, will avoid buzzwords that create cataclysmic shifts in technology, as DevOps has. One of our main roles is to maintain consistent availability, which is always threatened by change, and DevOps opposes this with its focus on methodologies like agile, continuous delivery and lean development.
Residing a step or more behind the bleeding edge has never fazed the DBA. We were the cool kids by being retro, refusing to fall for the latest trend or the coolest new feature, knowing that with the bleeding edge comes risk, and that a DBA who takes risks is a DBA out of work. So we put up the barricades and refused the radical claims and cultural shift of DevOps.
As I travel to events focused on the numerous platforms the database is crucial to, I'm faced with peers frustrated by DevOps and considerable conversation dedicated to how it's the end of the database administrator. It may be my imagination, but I've been hearing this same story for years, with the blame assigned elsewhere: it's Agile, DevOps, the cloud, or even the latest release of the database platform itself. The story's always the same: the end of the database administrator.
The most alarming and obvious flaw in this storyline is that in each of these scenarios, the database administrator ended up more of a focal point than when it began. When it comes to DevOps, its specific challenges need the DBA more than any of these storylines admit. As development hurtled at top speed to deliver what the business required, the DBA, and operations as a whole, delivered the security, the stability and the methodologies to build automation at a level the other groups simply never needed previously.
Powerful DBAs, with skills not just in scripting but in efficiency and logic, were able to take complicated, multi-tier environments and break them down into strategies that could be easily adopted. Having overcome the challenge of the database being central to, and blamed for, everything in the IT environment, they were able to dissect and build out complex end-to-end DevOps management and monitoring. As essential as system, network and server administration is to the operations group, the database administrator possesses the advanced skills of a hybrid of developer and operations personnel, making them a natural fit for DevOps.
Thanks go to this awesome 2012 post from Alex Tatiyants, which still resonates with the DBAs I speak to every day, even in 2017.
Different combinations in the game of tech sometimes create a winning roll of the dice, and other times a loss. Better communication between teams offers a better chance of avoiding holes in your development cycle tooling, especially when DevOps is the solution you're striving for.
It doesn't hurt to have a map to help guide you. This interactive map from XebiaLabs can offer a little clarity about the solutions available, though there are definitely holes in multiple places that could be clarified a bit more.
The power of this periodic table of DevOps tools isn't just that the tools are broken up by type, but that you can easily filter by open source, freemium, paid or enterprise-level tools, which helps in reaching the goals of your solution. As we all get educated, I'll work through it horizontally in future posts, but today we'll take a vertical look at the Database Management tier, where I specialize.
When comparisons are made, it's common to be unable to compare apples to apples. Focusing on the database management toolsets, such as Delphix, I can tell you that I view only Redgate as a competitor, and that only happened recently with their introduction of SQL Clone. The rest of the products shown don't offer any virtualization (our strongest feature), and we consider Liquibase and Datical partners in many use cases.
Any tool is better than nothing, even one that helps you choose tools. So let's first discern what the "Database Management" tools are supposed to accomplish, and then work through an example of our own. The overall goal appears to be version control for your database, which is a pretty cool concept.
The first product on the list is something I like because of the natural "control issues" I have as a DBA. You want to know that changes to a database occur in a controlled, documented and organized manner. DBMaestro allows for this and has some pretty awesome ways of doing it. Considering that DevOps is embracing agility at an ever-increasing rate, version control capabilities that work with both Oracle SQL Developer and Microsoft Visual Studio are highly attractive. The versioning is still at the code change level and not at the data level, but it's still worthy of discussion.
That it offers all of this through a simple UI is very attractive to the newer generation of developers, and DBAs will still want to be part of it.
This is the first of the two companies on the list that we partner with. Liquibase is very different from DBMaestro: it's the only open source option among the database management products, and it works with XML, JSON, SQL and other formats. You can build just about anything you require, and the product has an extensive support community, so if you need an example, it's easy to find one online.
I really like the fact that Liquibase takes compliance into consideration and can hold SQL changes from executing until approval is given by the appropriate responsible party. It may not be version control of your data, but at least you can closely monitor and time the changes to it.
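As a minimal sketch of how changes are described in Liquibase, one of the formats mentioned above is a SQL-formatted changelog; the author, changeset ID and table here are hypothetical examples, not from any real deployment:

```sql
--liquibase formatted sql

--changeset kellyn:add-kinder-table
-- A hypothetical changeset; each changeset is tracked so it runs exactly once,
-- giving the schema change a versioned, auditable history.
create table kinder_tbl (
    c1  number,
    c2  varchar2(100)
);
--rollback drop table kinder_tbl;
```

Because every changeset carries an explicit rollback, a failed release in development or test can be stepped back in a controlled, documented way.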
Where Liquibase partners with Delphix is that continuous delivery can be performed via Liquibase while Delphix version-controls the data tier. Delphix can be used for versioning, branching and refreshing from a previous snapshot if there's a negative outcome in a development or test scenario, making continuous deployment a reality without requiring "backup scripts" for the data changes.
Everybody loves a great user interface, and as with most, there's a pretty big price tag that goes along with the ease of use once you add up all the different products offered. There's a ton you can do with Redgate, and you can do most of it for Oracle, MSSQL and MySQL, which is way cool. Monitor, develop, virtualize; but the main recognition you're getting in the periodic table of DevOps tools is for version control and comparisons. This comes from Redgate's SQL Source Control product, part of quite a full suite of products for the developer and the DBA.
This is another product we've partnered with repeatedly. The idea that we, as DBAs, can review and monitor any and all changes to a database is very attractive to any IT shop. Having it simplified into a tool is incredibly beneficial to any business that wants to deliver continuously, and when implemented with Delphix, the data can be delivered as fast as the rest of the business.
Idera's DB Change Manager gives IT a safety net to ensure that the changes intended are the changes that happen in the environment. Idera, like many others on the list, supports multiple database platforms, which is a key feature of a database change control tool, but it has no virtualization or copy data management (CDM) tool, or at least no longer has one.
So where does Delphix fit in with all of these products? We touched on it a little as I mentioned each of the tools. Delphix is recognized for its ability to deploy, and that it does so as part of continuous delivery is awesome, but as I stated, it's not a direct apples-to-apples comparison: we not only offer version control, we do so at the data level.
So let's create an example.
We can do versioning and track changes in releases by way of our Jet Stream product, which is much loved by our developers and testers.
I've always appreciated tool sets that teach others to fish instead of leaving me to fish for them. Offering a developer or tester access to an administration console meant for a DBA can only set them up to fail; Jet Stream gives them an interface of their own.
Jet Stream's interface is really clean and easy to use, with a clear left-hand panel of options, and the interaction maps directly to what the user will be doing. I can create bookmarks and name versions, which allows me to mark each milestone in a release.
If a developer is using Jet Stream, they make changes as part of a release and, once complete, create a bookmark (a snapshot in time) of their container. (A container here is made up of the database, the application tier and anything else we want included that Delphix can virtualize.)
We've started a test run of a new development deployment. We made an initial bookmark signaling the beginning of the test, and then a second bookmark to say the first set of changes was completed.
At this time, there’s a script that removes 20 rows from a table. The check queries all verified that this is the delete statement that should remove the 20 rows in question.
SQL> delete from kinder_tbl where c2 like '%40%';

143256 rows deleted.

SQL> commit;
SQL> insert into kinder_tbl values (...
When the tester performs the release and hits this step, the catastrophic change to the data occurs.
Whoops, that's not 20 rows.
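The culprit is the wildcard pattern: '%40%' matches any value containing the substring "40", not just the literal value. A hedged sketch of the check that would have exposed the overly broad predicate before the release (column values here are hypothetical):

```sql
-- '%40%' matches any c2 containing "40" anywhere: '140', '4099', '2040'...
-- Counting against the same predicate before deleting reveals the problem:
select count(*) from kinder_tbl where c2 like '%40%';

-- A precise predicate touches only the intended rows, e.g.:
select count(*) from kinder_tbl where c2 = '40';
```

Comparing the two counts before running the delete would have flagged that the script was about to remove far more than the expected 20 rows.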
Now, the developer could grovel to the DBA to use archaic processing like Flashback, or worse, import the data back into a second table and merge the missing rows. There are a few ways to skin this cat, but what if the developer could recover the data themselves?
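For comparison, the traditional DBA-driven route might look something like this (a sketch only, assuming Flashback is available, row movement is permitted on the table, and the undo retention window still covers the bad delete; the 15-minute interval is hypothetical):

```sql
-- DBA-style recovery: flash the table back to just before the delete.
-- Requires row movement enabled and sufficient undo retention.
alter table kinder_tbl enable row movement;

flashback table kinder_tbl
  to timestamp systimestamp - interval '15' minute;
```

Note that this depends entirely on database-side configuration the developer can't see or control, which is exactly why a self-service bookmark is more attractive.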
This developer was using Jet Stream, so they can simply go into the console, where they've been taking the extra couple of seconds to bookmark each milestone during the release, which INCLUDES marking the changes to the data!
If we inspect the bookmarks, we can see that the second of three occurred before the delete. This makes it simple to use the "rewind" option (the bottom-right icon, next to the trash can for removing the bookmark) to revert the data changes. Keep in mind that this reverts the database to the point in time when the bookmark was taken, so ALL changes after it will be reverted.
Once that's done, we can quickly verify that our data has returned, with no need to bother the DBA and no concern that a catastrophic data change has impacted the development or test environment.
SQL> select count(*) from kinder_tbl where c2 like '%40%';

  COUNT(*)
----------
    143256
I plan on going through different combinations of tools in the periodic table of DevOps tools in upcoming posts, showing the strengths and drawbacks of each implementation choice, so until the next post, have a great week!
The topics of DevOps and Agile are everywhere, but how often do you hear source control thrown into the mix? Not so much in public, but behind the closed doors of technical and development meetings where Agile is in place, it's a common theme. When source control isn't part of the combination, havoc ensues, along with a lot of DBAs working nights on production with broken hearts.
So what is source control and why is it such an important part of DevOps? The official definition of source control is:
A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large web sites, and other collections of information.
Delphix, with its ability to provide developers with as many virtual copies of databases as needed, including masked sensitive data, is a no-brainer for ensuring development and then test have the environments to do their jobs properly. The added features of bookmarking and branching are the impressive part that creates full source control.
Using the diagram below, note how easy it is to mark each iteration of development with a bookmark, making it easy to then lock and deliver to test a consistent image via a virtual database (VDB).
Delphix is capable of all of this while applying agile data masking to each and every development and test environment, protecting all PII and PCI data from production in non-production environments.
Along with the deep learning I've been allowed to do about data virtualization, I've learned a great deal about test data management. Since then, I've started doing informal surveys of the DBAs I run into, asking, "How do you get data to your testers so they can perform tests?" and "How do you deliver code to different environments?"
The demand for healthcare application development has been exploding over the past couple of years, but developing applications for healthcare requires the data to be masked. Why does masking matter, and matter especially for healthcare? If patient information gets out, it can be quite damaging. One heuristic for the value of healthcare information is that on the black market, an individual's healthcare records tend to sell for 100x their credit card information. Imagine someone who needs health coverage swiping someone else's healthcare information, giving themselves free treatment; the value of that "free treatment" can well exceed the maximums on a credit card. Also imagine the havoc it causes for the original individual when someone jumps onto their healthcare: important information like blood type can be logged incorrectly, or the person may have HIV logged against them when they themselves are clear. It can take years to repair the damage, or it may never be repaired if the damage is fatal.
“What do Britney Spears, George Clooney, Octomom (Nadya Suleman), and the late Farah Fawcett have in common? They are all victims of medical data breaches! … How much would a bookie pay to know the results of a boxer’s medical checkup before a title bout? What would a tabloid be willing to pay to be the first to report a celebrity’s cancer diagnosis? Unfortunately it doesn’t stop there and the average citizen is equally a target. “
When data gets to untrusted parties, it is called leakage. To avoid leakage, companies use masking: a form of data mediation or transformation that replaces sensitive data with equally valid fabricated data. Masking can be more work on top of the already significant work of provisioning copies of a source database to development and QA. Development and QA can get these database copies in minutes for almost no storage overhead using Delphix (as has been explained extensively in previous blogs), but by default these copies, or virtual databases (VDBs), are not masked. Without Delphix, masking database copies in development and QA would require masking every single copy; with Delphix, one can provision a single VDB, mask that VDB, and then clone, in minutes and for almost no storage, as many masked copies of that first masked VDB as needed.
In the above graphic, Delphix links to a source database and keeps a compressed version along with a rolling time window of changes from the source. With this data, Delphix can spin up a clone of the source database anywhere in that time window. The clone spins up in a few minutes and takes almost no storage because it initially shares all duplicate blocks on Delphix. This first VDB can be masked, and then clones of the masked VDB can be made in minutes for almost no extra storage.
With Delphix in the architecture, making masked copies is fast, easy and efficient. The first VDB that is masked will take up some extra storage for the changed data. This VDB then becomes the basis for all other development and QA masked copies, so there is no need to worry about whether a development or QA database is masked. Because the source for all development and QA copies is masked, there is no way for an unmasked copy to make it into development or QA. Without this secure architecture, it becomes more complicated to verify and enforce that each copy is indeed masked. By consolidating the origins of all the downstream copies into a single set of masked shared data blocks, we can rest assured that all the downstream versions are also masked. The cloning interface in Delphix also logs all cloning activity, and chain-of-custody reports can be run.
How do we actually accomplish the masking? It can be done with a number of technologies available in the industry. With Delphix, these technologies can be run on a VDB in the same manner they are currently used with regular physical clone databases. Alternatively, Delphix has provisioning hooks where masking tools can be invoked before the VDB is fully provisioned out.
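As a minimal sketch of what such a masking pass might do when run against that first VDB (the table, columns and substitution rules here are hypothetical illustrations, not a Delphix or partner API):

```sql
-- Hypothetical masking pass: replace sensitive values with equally valid
-- fabricated data before the VDB is cloned for development and QA.
update patients
   set ssn        = lpad(trunc(dbms_random.value(0, 999999999)), 9, '0'),
       last_name  = 'PATIENT_' || patient_id,
       birth_date = trunc(birth_date, 'YYYY')  -- keep year, drop exact date
 where 1 = 1;

commit;
```

The key property is that the fabricated values remain valid for the application (correct lengths, formats and referential behavior), so development and QA can test realistically without ever touching real patient data.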
Delphix has partnered with Axis Technology to streamline and automate the masking process with virtual databases. Look for upcoming blog posts to go into more detail about Axis and Delphix.