
Oracle to Azure Refactoring Go or No-Go Indicators

When migrating an Oracle database to another platform, the common indicators and discussion topics are PL/SQL conversions, data types, application rewrites, etc., as the roadblocks to refactoring, but success also has to do with the SIZE of the workload coming from Oracle.  I find this is often dismissed, even though it is one of the quickest ways to identify whether an ENTIRE Oracle database (not just a schema or subset of it) can run on a Platform as a Service (PaaS) solution.

The following post covers the limits for each Azure PaaS solution and how to tell quickly, from the Oracle workload assessment, whether a PaaS solution can handle the Oracle workload.  Many of you will be surprised that vCores aren't in this list, but I rarely have challenges with the CPU requirements for Oracle workloads in the cloud.  I'm going to skip that area and focus on the areas that will ABSOLUTELY kill your project to refactor from Oracle to another database platform based on the workload size.
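To ground the checks that follow, it helps to pull the raw numbers from the Oracle side first. Below is a minimal sketch, assuming the python-oracledb driver, SELECT access on the DBA_/V$ views, and the Diagnostics Pack license for the AWR history view; the connection details and user are placeholders, not part of the original post.

```python
# Sketch: pull the Oracle-side numbers used in the go/no-go checks below.
# Assumes the python-oracledb driver, SELECT access on the DBA_/V$ views,
# and the Diagnostics Pack license for DBA_HIST_SYSMETRIC_SUMMARY (AWR).
import oracledb

conn = oracledb.connect(user="assess", password="***", dsn="orahost/ORCLPDB1")  # placeholders
cur = conn.cursor()

# Allocated segment size in TB (add temp, undo and archived logs as needed).
cur.execute("SELECT ROUND(SUM(bytes)/1024/1024/1024/1024, 2) FROM dba_segments")
size_tb = cur.fetchone()[0]

# Memory the workload actually uses today: SGA plus allocated PGA, in GB.
cur.execute("SELECT ROUND(SUM(value)/1024/1024/1024, 1) FROM v$sga")
sga_gb = cur.fetchone()[0]
cur.execute("SELECT ROUND(value/1024/1024/1024, 1) FROM v$pgastat "
            "WHERE name = 'total PGA allocated'")
pga_gb = cur.fetchone()[0]

# Average and peak I/O throughput (MBps) over the AWR retention window.
cur.execute("""
    SELECT metric_name,
           ROUND(AVG(average)/1024/1024, 1) AS avg_mbps,
           ROUND(MAX(maxval)/1024/1024, 1)  AS max_mbps
    FROM   dba_hist_sysmetric_summary
    WHERE  metric_name IN ('Physical Read Total Bytes Per Sec',
                           'Physical Write Total Bytes Per Sec')
    GROUP  BY metric_name""")
throughput = cur.fetchall()

print(f"Size: {size_tb} TB, SGA: {sga_gb} GB, PGA: {pga_gb} GB")
for name, avg_mbps, max_mbps in throughput:
    print(f"{name}: avg {avg_mbps} MBps, peak {max_mbps} MBps")
```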

Database Size Limits

If the Oracle database is being taken over "as is", with tables simply migrated and data imported into the new platform, it needs to be under the following sizes.  For Hyperscale (Citus), the required redesign of the database will increase the size, too.  A quick size go/no-go sketch follows the list.

Azure Managed Instance: 16 TB

Azure SQL, Gen5 (General Purpose): 1.536 TiB

Azure SQL, Gen5 (Compute): up to 4 TiB, depending on the DTU and vCore count

Hyperscale, Gen4: 100 TB

PostgreSQL, Single Server: 1 TB

PostgreSQL, Flex Server: 16 TB

Hyperscale (Citus): up to 2 TiB per worker/coordinator node and up to 20 worker nodes; considerable redesign has to occur for this migration.  See Nodes – Hyperscale (Citus) – Azure Database for PostgreSQL | Microsoft Docs
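As a quick illustration of the size check, here is a minimal sketch that compares an assessed Oracle database size against the ceilings listed above. The dictionary values are simply the figures from this list (treating TB and TiB loosely for a first pass), and the 22.5 TB input is a hypothetical example.

```python
# Sketch: compare the assessed Oracle database size against the PaaS size
# ceilings listed above (figures in TB, TB/TiB treated loosely for a first pass).
SIZE_LIMIT_TB = {
    "Azure Managed Instance": 16,
    "Azure SQL, Gen5 (General Purpose)": 1.536,
    "Azure SQL, Gen5 (Compute)": 4,      # up to, depending on DTU and vCore count
    "Hyperscale, Gen4": 100,
    "PostgreSQL, Single Server": 1,
    "PostgreSQL, Flex Server": 16,
    "Hyperscale (Citus)": 2 * 20,        # ~2 TiB per worker, up to 20 workers; redesign required
}

def size_go_no_go(oracle_size_tb: float) -> None:
    """Print a go/no-go per service based purely on database size."""
    for service, limit_tb in SIZE_LIMIT_TB.items():
        verdict = "GO" if oracle_size_tb <= limit_tb else "NO-GO"
        print(f"{service:35} limit {limit_tb:>7} TB -> {verdict}")

size_go_no_go(oracle_size_tb=22.5)   # hypothetical 22.5 TB Oracle database
```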

Memory

The memory usage may decrease (or increase) once the system is redesigned for the new platform, but the memory that was required to do the work in Oracle, usually tuned while it ran on the original platform, should be considered when deciding whether the workload can be migrated to a PaaS solution.  A rough memory fit check follows the list below.

The limits for memory allocation, per platform and per VM, are as follows:

Azure Managed Instance: up to 13.6 GB per vCore

Azure SQL: up to 24 GB

Hyperscale: Undocumented

PostgreSQL, Flex Server: up to 8 GB (other documents show support for up to the E64dsv4, which has 432 GB of memory on the host)

Hyperscale (Citus): 4 GB per coordinator node and 8 GB per worker node
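For the memory side, a rough fit check is just arithmetic: the per-vCore ratio of the tier you are pricing multiplied by the vCore count, compared against the SGA and PGA the Oracle workload allocates today. A minimal sketch, using the 5.1 GB-per-vCore figure from the Managed Instance standard-series table later in this post and hypothetical Oracle numbers:

```python
# Sketch: rough memory fit check. Total memory on the PaaS side is roughly
# (GB per vCore) x (vCore count); compare that against the SGA + PGA the
# Oracle workload allocates today (see the query sketch earlier in the post).
def memory_fits(oracle_sga_gb: float, oracle_pga_gb: float,
                gb_per_vcore: float, vcores: int) -> bool:
    """True if the PaaS tier offers at least as much memory as Oracle uses now."""
    paas_memory_gb = gb_per_vcore * vcores
    oracle_memory_gb = oracle_sga_gb + oracle_pga_gb
    print(f"Oracle uses {oracle_memory_gb} GB; the PaaS tier offers {paas_memory_gb} GB")
    return paas_memory_gb >= oracle_memory_gb

# Hypothetical example: 180 GB SGA + 40 GB PGA against Managed Instance
# standard-series (5.1 GB per vCore) at 32 vCores = 163.2 GB -> does not fit.
memory_fits(180, 40, gb_per_vcore=5.1, vcores=32)
```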

IO Latency

Azure SQL: 5-10ms for read and write

Azure SQL, Business Critical: 1-2ms

Hyperscale: 1-2ms, with higher latency for data retrieved from page servers.

PostgreSQL Flex Server: 5-7ms

Hyperscale (Citus): Undocumented

Throughput (MBps) Limitations

Since an Oracle workload, even when refactored, won't change its throughput demands, a very high-throughput workload (i.e., 1000 MBps+) needs to be considered carefully before looking at a PaaS solution with published throughput information.  Throughput is where I'll focus, as this is the biggest pain point for most high-IO workloads.  This goes for SQL Server, DB2, and Sybase refactoring, not just Oracle.  Of all the decision factors involved when looking at an opportunity to refactor off of Oracle and move to a PaaS solution in Azure, look at the MBps limit for the solution.  It is often the first workload demand that stops the move and keeps the customer running Oracle on Azure IaaS as their cloud solution.

For most services, every database file gets dedicated IOPS and throughput that depend on the file size; larger files get more IOPS and throughput. The values below are the MAX throughput (MBps) that can be achieved if the storage is sized and configured optimally.  Since IOPS and MBps don't translate directly, a conversion sketch follows this list.

Azure Managed Instance: 450 MBps

Azure SQL: up to 20K IOPS, which comes to about 500 MBps (not a direct translation)

Hyperscale: 100 MBps

PostgreSQL, Flex Server: 750 MBps

Hyperscale (Citus): up to 20 worker nodes of 2 TiB each can get around 123K IOPS, which comes to about 3100 MBps (IOPS to MBps doesn't translate directly)
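Because IOPS and MBps only translate through an average I/O size, it's worth doing the arithmetic explicitly; Oracle frequently issues large multiblock reads, while published IOPS figures tend to assume small I/Os. A minimal sketch of the conversion (the I/O sizes here are illustrative assumptions):

```python
# Sketch: IOPS and MBps only translate through an assumed average I/O size.
# Oracle frequently reads in large multiblock chunks (e.g., 1 MB), while
# published IOPS figures tend to assume small (e.g., 8 KB) I/Os, which is
# why "20K IOPS" and "500 MBps" are not the same statement.
def iops_to_mbps(iops: float, io_size_kb: float) -> float:
    """Throughput in MBps for a given IOPS rate and average I/O size."""
    return iops * io_size_kb / 1024

def mbps_to_iops(mbps: float, io_size_kb: float) -> float:
    """IOPS needed to sustain a given MBps at a given average I/O size."""
    return mbps * 1024 / io_size_kb

print(iops_to_mbps(20_000, io_size_kb=8))     # ~156 MBps if the I/Os are 8 KB
print(iops_to_mbps(20_000, io_size_kb=25.6))  # 500 MBps, matching the figure above
print(mbps_to_iops(1_000, io_size_kb=8))      # a 1000 MBps workload needs ~128K 8 KB IOPS
```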

Region Availability

Managed Instance: Resource limits – Azure SQL Managed Instance | Microsoft Docs

Hyperscale (Citus): Limits and limitations – Hyperscale (Citus) – Azure Database for PostgreSQL | Microsoft Docs

These links also cover the maximum number of vCores for dev and test and the vCore limitations of a given service, so please refer to them to find out more.

Concurrent Session Count and other Limits

Single database vCore resource limits – Azure SQL Database | Microsoft Docs

Limits – Azure Database for PostgreSQL – Single Server | Microsoft Docs

Limits – Azure Database for PostgreSQL – Flexible Server | Microsoft Docs

Limits and limitations – Hyperscale (Citus) – Azure Database for PostgreSQL | Microsoft Docs

Hardware Generation Characteristics

Information is out there to explain all the hardware and software that makes up a given service in Azure, and it's worth the time to understand what options are available and how to make the best decision.  This is where Microsoft documentation really pays off: there are clear documents, along with links to other documents, that give you the understanding you need of exactly what you're migrating to.

Azure Managed Instance

Hardware generations have different characteristics, as described in the following table:

HARDWARE GENERATION CHARACTERISTICS

Standard-series (Gen5)
CPU: Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors
Number of vCores (vCore = 1 LP, hyper-thread): 4-80
Max memory (memory/vCore ratio): 5.1 GB per vCore; add more vCores to get more memory
Max In-Memory OLTP memory: instance limit 0.8 – 1.65 GB per vCore
Max instance reserved storage: General Purpose up to 16 TB; Business Critical up to 4 TB

Premium-series (preview)
CPU: Intel® 8370C (Ice Lake) 2.8 GHz processors
Number of vCores (vCore = 1 LP, hyper-thread): 4-80
Max memory (memory/vCore ratio): 7 GB per vCore
Max In-Memory OLTP memory: instance limit 1.1 – 2.3 GB per vCore
Max instance reserved storage: General Purpose up to 16 TB; Business Critical up to 5.5 TB

Memory optimized premium-series (preview)
CPU: Intel® 8370C (Ice Lake) 2.8 GHz processors
Number of vCores (vCore = 1 LP, hyper-thread): 4-64
Max memory (memory/vCore ratio): 13.6 GB per vCore
Max In-Memory OLTP memory: instance limit 2.2 – 4.5 GB per vCore
Max instance reserved storage: General Purpose up to 16 TB; Business Critical up to 16 TB

 

Comparing an IaaS solution to PaaS is always a challenge, but the benefit of a PaaS solution is that there is less to manage.  For smaller Oracle workloads or individual schemas from a larger Oracle workload, as well as greenfield projects, I absolutely see tremendous value in a PaaS solution.  It's important to make the right choice and to know what the workload is demanding of Oracle to create a successful opportunity for the data estate, and that's what I'm about: how do you make the most of ALL OF YOUR DATA IN AZURE?

 

Kellyn

http://about.me/dbakevlar
