SAPonPower

An ongoing discussion about SAP infrastructure

Scale-up vs. scale-out architectures for SAP HANA – part 2

S/4HANA is enabled for scale-out up to 4 nodes plus one hot-standby.  Enablement, however, does not mean that scale-out is easy or advisable.  SAP states clearly: “We recommend using scale-up configurations as long as this is economically justifiable, taking operational costs and drawbacks into account.”[i]  The same note goes on to say: “Limited knowledge about S/4HANA customer scenarios using scale-out is currently available.”

For very large customers, e.g. those for which an S/4HANA system’s memory is predicted to exceed the current 24TB maximum, scale-out may be the best option.  “Best,” of course, implies that there may be other options, which will be discussed later in this post.

It is reasonable to ask why SAP offers such conditional advice.  We can only speculate, since SAP does not provide a direct explanation, but some insight may be gained by reading the SAP note on scale-out sizing.[ii]  Unlike analytical applications such as BW/4HANA, partitioning of individual S/4HANA tables across nodes is not permitted.  Instead, all tables of a particular module are grouped together, and the entire group must be placed on a single node in the cluster.

Let’s consider a simple example with three commonly used modules: FI, MM and SD (Financial Accounting, Materials Management, Sales and Distribution).  The tables associated with each module belong to their respective groups.  Placing each group on a different node may help to minimize the size of any one node, but several issues arise.

  • Each group will probably be a different size.  This is fully supported, but the uneven load distribution may result in one node running at high utilization while another is barely using any capacity (see the sketch after this list).  Not only does this waste compute capacity, power and cooling, it could also result in inferior performance on the hot node.
  • Since most customers prefer to size all nodes in a cluster identically, considerable memory overcapacity might result, further driving up infrastructure costs.
  • Transactions often do not fit comfortably within a single module; e.g. a sales order might result in financial tables being updated with billing, accounts receivable and revenue data, and materials tables being adjusted with a decrement of available stock.  If a transaction is running on node 1 (the master node) and needs to access or update tables on nodes 2 and 3, those communications run across a network.  As with the BW example in the previous blog post, each communication is at least 30 times slower across a network than across memory.
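
To illustrate the first two bullets, here is a minimal sketch of the placement problem in Python.  The group sizes are hypothetical, chosen only to show how uneven groups drive both imbalance and overcapacity when every node is sized for the largest group:

    # Hypothetical table-group sizes in TB; real values would come from a sizing report.
    groups = {"FI": 4.0, "MM": 1.5, "SD": 5.5}

    # One group per node (the S/4HANA rule: a group cannot be split across nodes),
    # with every node sized to hold the largest group.
    node_size = max(groups.values())
    total_provisioned = node_size * len(groups)
    total_used = sum(groups.values())

    for name, size in groups.items():
        print(f"{name}: {size:.1f} TB on a {node_size:.1f} TB node "
              f"({size / node_size:.0%} utilized)")
    print(f"Provisioned {total_provisioned:.1f} TB for {total_used:.1f} TB of data "
          f"({total_used / total_provisioned:.0%} overall utilization)")

With these made-up numbers, the MM node sits at 27% utilization while the SD node runs full, and a third of the provisioned memory is never used.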

It is important to consider that every transaction that comes into an S/4HANA system connects to the index server on the master node, which distributes queries to the index servers on the appropriate nodes.  This means that every transaction not handled directly by the master node involves at least one send and one receive, with the associated 30-times-slower network latency.
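
A back-of-the-envelope model in Python shows why this matters.  The absolute latencies below are illustrative assumptions, not measurements; only the roughly 30x ratio between a network hop and a memory access is taken from the discussion above:

    # Illustrative latencies; real values depend on hardware and interconnect.
    memory_access_us = 0.1                      # ~100 ns per local memory access
    network_hop_us = memory_access_us * 30      # the ~30x cross-node penalty

    def transaction_latency_us(local_accesses, cross_node_hops):
        # Crude model: local work plus one round trip per remote table access.
        return local_accesses * memory_access_us + cross_node_hops * network_hop_us

    # A sales order touching SD locally plus FI and MM tables on other nodes:
    scale_up = transaction_latency_us(local_accesses=50, cross_node_hops=0)
    scale_out = transaction_latency_us(local_accesses=50, cross_node_hops=2)
    print(f"scale-up: {scale_up:.1f} us, scale-out: {scale_out:.1f} us "
          f"({scale_out / scale_up:.1f}x slower)")

Even two remote hops per transaction more than double the latency in this toy model, and real transactions may require far more round trips.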

Some cross-node latency may be reduced by collocating appropriate groups, resulting in fewer total nodes, and/or by replicating some tables.  Unfortunately, replicating a table would break a fundamental SAP rule noted in SAP Note 2408419 (see footnote #1 below): all tables of a group must be located on the same node.

As with the BW example, what works well for one scenario may not work well for another.  One of the significant advantages of S/4HANA over Business Suite 7 is the consolidation and dramatic reduction of tables, resulting in fewer, much larger tables.  Conversely, this makes table distribution in a scale-out cluster much more challenging.  It is not hard to imagine that performance management could be quite a task in a scale-out scenario.

So, if scale-out is not an option for many/most customers, what should be done if approaching a significant memory barrier?  Options include:

  • Cleanup, use of hybrid LOBs, index optimization, etc.
  • Archiving data to reduce the size of the system
  • Eliminating duplicate data or easily reproduced data, e.g. iDocs, data from Hadoop
  • Usage of Data Aging[iii]
  • Sizing memory smaller than predicted
  • Requesting an exception to size the system larger than officially supported

Cleaning up your system and getting rid of various unnecessary memory consumers should be the first approach undertaken.[iv]  Remember, what might have been important with a conventional DB may not be needed with S/4HANA, or a better technique may exist.  The expected memory reduction is usually shown as part of an ERP sizing report.
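
As a starting point for such a cleanup, the largest column-store memory consumers can be listed from the M_CS_TABLES monitoring view.  Below is a minimal sketch using SAP’s hdbcli Python driver; the host, port and credentials are placeholders to adjust for your own system:

    from hdbcli import dbapi  # SAP's Python driver for HANA

    conn = dbapi.connect(address="hana-host", port=30015,
                         user="MONITORING_USER", password="***")
    cur = conn.cursor()
    # Top 10 column-store tables by total memory footprint.
    cur.execute("""
        SELECT TOP 10 SCHEMA_NAME, TABLE_NAME,
               ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 1) AS SIZE_GB
        FROM M_CS_TABLES
        ORDER BY MEMORY_SIZE_IN_TOTAL DESC
    """)
    for schema, table, size_gb in cur.fetchall():
        print(f"{schema}.{table}: {size_gb} GB")
    conn.close()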

Archiving is another obvious approach, but since the archived data is kept on very slow media compared to in-memory data and cannot be changed, the decisions as to what to archive and where to place it can be very challenging for some organizations.

iDocs are, by definition, intermediate documents and are used primarily for sending and receiving documents to/from third parties, e.g. sales orders, purchase orders, invoices, shipping notices.  Every iDoc sent or received should have a corresponding transaction within the SAP system which means that it is essentially a duplicate record once processed by the SAP system.  Many customers keep these documents indefinitely just in case any disputes occur with those third parties.  Often, these iDocs just sit around collecting digital dust and may be prime candidates for deletion or archival.  Likewise, data from an external source, e.g. Hadoop, should still exist in that source and could potentially be deleted from HANA.
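
To gauge how much of this digital dust has accumulated, iDocs can be counted by age in the EDIDC control table.  Another minimal hdbcli sketch; the connection details and the "SAPABAP1" schema name are placeholders:

    from hdbcli import dbapi

    conn = dbapi.connect(address="hana-host", port=30015,
                         user="MONITORING_USER", password="***")
    cur = conn.cursor()
    # EDIDC is the iDoc control table; CREDAT holds the creation date (YYYYMMDD).
    cur.execute("""
        SELECT COUNT(*) FROM "SAPABAP1"."EDIDC"
        WHERE CREDAT < '20160101'
    """)
    print(f"iDocs created before 2016: {cur.fetchone()[0]} "
          f"(candidates for archiving or deletion)")
    conn.close()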

Data Aging only covers a subset of data objects and requires some effort to utilize.[v]  By default, the ABAP server adds “WITH RANGE_RESTRICTION (‘CURRENT’)” to all queries to prevent unintended access to aged or cold partitions, which means that to access aged data, a query must specify which aged partition(s) to access.  This implies special transactions, or at least different training for users, to access aged data.  Data Aging does allow aged data to be updated, so it may be more desirable than archiving in some cases.  Aged data is stored on storage devices, which means access is many orders of magnitude slower than memory; however, this can be mitigated, to some extent, by faster media, e.g. NVMe drives on PCIe cards.  Unfortunately, Data Aging has not been implemented by many customers, meaning a potentially steep learning curve.
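
At the SQL level, the restriction and its override look roughly like the sketch below.  The connection details are placeholders, BSEG is merely an example of a Data-Aging-enabled table, and the exact date semantics of the range restriction depend on the partition layout of your system, so treat this as an outline rather than a recipe:

    from hdbcli import dbapi

    conn = dbapi.connect(address="hana-host", port=30015,
                         user="APP_USER", password="***")
    cur = conn.cursor()
    # What the ABAP server adds implicitly: only the current (hot) partition is read.
    cur.execute("""SELECT COUNT(*) FROM "SAPABAP1"."BSEG"
                   WITH RANGE_RESTRICTION ('CURRENT')""")
    print("current rows:", cur.fetchone()[0])
    # Explicitly reaching back into aged partitions from a given date onward.
    cur.execute("""SELECT COUNT(*) FROM "SAPABAP1"."BSEG"
                   WITH RANGE_RESTRICTION ('2015-01-01')""")
    print("rows including aged data back to 2015:", cur.fetchone()[0])
    conn.close()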

Deliberately undersizing a system is not recommended by SAP, and I am not recommending it either.  That said, if an implementation is approaching a memory boundary and scaling to a larger VM or platform is not possible (physically, politically or financially), then this technique may be considered.  It comes with some risk, however, so it should be considered a last resort only.  HANA enables “lazy loading” of columns,[vi] whereby columns are not loaded into memory until needed.  If your system has a large number of columns which consume space on disk but are never or rarely accessed, the memory reserved for those columns will likewise go unused or underused.  HANA will also attempt to unload columns when the system runs out of allocatable memory, based on a least recently used algorithm.  Unless a problem occurs, a system configured with less memory than the sizing report predicts will start without problems and unload columns when needed.  The penalty comes when columns that are not memory resident are accessed, at which time other column(s) must first be unloaded and the entire requested column loaded, i.e. significant latency is incurred on first access.  As mentioned earlier, this should be considered only in a worst case scenario and only if scaling up/out is not desired or not an option.
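
To estimate how much of a sizing report’s predicted footprint is actually resident, the load state of column-store columns can be inspected in the M_CS_COLUMNS monitoring view.  A final hdbcli sketch with placeholder connection details:

    from hdbcli import dbapi

    conn = dbapi.connect(address="hana-host", port=30015,
                         user="MONITORING_USER", password="***")
    cur = conn.cursor()
    # Column counts and in-memory size per load state (TRUE / FALSE / PARTIALLY).
    # Columns with LOADED=FALSE occupy disk but, per lazy loading, little or no memory.
    cur.execute("""
        SELECT LOADED, COUNT(*),
               ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 1)
        FROM M_CS_COLUMNS
        GROUP BY LOADED
    """)
    for loaded, cols, size_gb in cur.fetchall():
        print(f"LOADED={loaded}: {cols} columns, {size_gb} GB in memory")
    conn.close()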

Lastly, requesting an exception from SAP to allow a system size greater than officially supported may be a viable choice for customers that are expected to exceed current maximums.  This may not be without difficulty, as when you embark on a journey where few or none have gone before, you will inevitably run into obstacles that others have not yet encountered.  Dispatch mechanisms, delta merge operations, transaction log latency, savepoint I/O throughput, system startup times, backup/recovery and system replication are among the more significant areas that would be stressed, and some might break.

My advice: use scale-up in all S/4HANA cases unless the predicted memory for the immediate planning horizon exceeds the official SAP maximum supported size.  Before considering scale-out solutions, use every available tool to reduce the size of the system, and ask SAP for an exception if the resulting size is still above the maximum.  Lastly, remember that SAP and its hardware partners are constantly working to enable larger HANA system sizes.  If the size required today fits within the largest supported system but is expected to exceed the limit over time, it may be reasonable to start your implementation or migration effort today with the expectation that the maximum will be increased by the time you need it.  Admittedly, this is taking a risk, but one that may be tolerable, and if the limit is not raised in time, scale-out is still an option.

[i] 2408419 – SAP S/4HANA – Multi-Node Support
[ii] 2428711 – S/4HANA Scale-Out Sizing
[iii] 2416490 – FAQ: SAP HANA Data Aging in SAP S/4HANA
[iv] 1999997 – FAQ: SAP HANA Memory, FAQ 5
[v] 1872170 – Business Suite on HANA and S/4HANA sizing report
[vi] https://www.sap.com/germany/documents/2016/08/205c8299-867c-0010-82c7-eda71af511fa.html

July 16, 2018 - Posted by | Uncategorized


