Scale-up vs. scale-out architectures for SAP HANA – part 2
S/4HANA is enabled for scale-out up to 4 nodes plus one hot-standby. Enablement does not mean it is easy or advisable. SAP states clearly: “We recommend using scale-up configurations as long as this is economically justifiable, taking operational costs and drawbacks into account.”[i] This same note goes on to say: “Limited knowledge about S/4HANA customer scenarios using scale-out is currently available.”
For very large customers, e.g. those whose S/4HANA system memory is predicted to exceed the current 24TB limit, scale-out may be the best option. Best, of course, implies that there are other options, which will be discussed later in this post.
It is reasonable to ask why SAP offers such conditional advice. We can only speculate, since SAP does not provide a direct explanation, but some insight may be gained by reading the SAP note on scale-out sizing.[ii] Unlike analytical applications such as BW/4HANA, S/4HANA does not permit partitioning of individual tables across nodes. Instead, all tables of a particular module are grouped together and the entire group must be placed on a single node in the cluster.
Let’s consider a simple example of three commonly used modules, FI, MM and SD (Financial, Materials, Sales). The tables associated with each module belong to their respective groups. Placing each group on a different node may help to minimize the size of any one node, but several issues arise.
- Each group will probably be a different size. This is fully supported, but the uneven load distribution may result in one node running at high utilization while another barely uses any capacity. Not only does this waste compute capacity, power and cooling, it could also result in inferior performance on the hot node.
- Since most customers prefer to size all nodes in a cluster identically, considerable memory overcapacity might result, further driving up infrastructure costs.
- Transactions often do not fit comfortably within a single module, e.g. a sales order might result in financial tables being updated with billing, accounts receivable and revenue data and materials tables being adjusted with a decrement of available stock. If a transaction is running on node 1 (the master node) and needs to access/update tables on nodes 2 and 3, those communications run across a network. As with the BW example in the previous blog post, each communication is at least 30 times slower across a network than across memory.
It is important to consider that every transaction that comes into an S/4HANA system connects to the index server on the master node, which then distributes queries to the appropriate index server. This means that every transaction not handled directly by the master node involves at least one send and one receive, with the associated roughly 30-times-higher latency.
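To put a rough number on that penalty, here is a minimal back-of-the-envelope sketch (purely illustrative) using the approximate latency figures cited in this series; the real cost depends on the network gear and on how many round trips a given statement actually generates.

```python
# A rough model of the cross-node penalty described above (not a benchmark),
# using the approximate figures from this post: ~120ns for a local in-memory
# access vs. ~3.6us for a tuned 10Gb Ethernet round trip.
MEM_ACCESS_NS = 120        # intra-node (memory) access, nanoseconds
NET_ROUND_TRIP_NS = 3600   # cross-node send + receive, nanoseconds

def added_latency_us(remote_accesses: int) -> float:
    """Extra time a transaction pays when some of its table accesses must
    travel to another node instead of hitting local memory."""
    return remote_accesses * (NET_ROUND_TRIP_NS - MEM_ACCESS_NS) / 1000.0

for n in (1, 10, 100):
    print(f"{n:>3} cross-node accesses -> ~{added_latency_us(n):.1f} us added latency")
```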
Some cross-node latency may be reduced by co-locating appropriate groups, resulting in fewer total nodes, and/or by replicating some tables. Unfortunately, replicating a table would break a fundamental SAP rule noted in SAP Note 2408419 (see footnote #1 below): all tables of a group must be located on the same node.
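As an illustration of the kind of check this implies, a sketch along the following lines could show how table groups currently map to hosts. It assumes the hdbcli Python driver and HANA's TABLE_GROUPS and M_CS_TABLES monitoring views, so treat the view and column names as assumptions to verify on your own revision rather than a supported procedure.

```python
# Sketch: review where each table group currently resides in a scale-out
# landscape. Connection details are placeholders; verify the monitoring view
# and column names against your HANA revision before relying on this.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-master.example.com", port=30015,
                     user="MONITOR_USER", password="********")
cur = conn.cursor()
cur.execute("""
    SELECT tg.group_name, ct.host,
           COUNT(*)                                               AS tables,
           ROUND(SUM(ct.memory_size_in_total)/1024/1024/1024, 1)  AS gb_in_memory
    FROM   sys.table_groups AS tg
    JOIN   sys.m_cs_tables  AS ct
      ON   ct.schema_name = tg.schema_name AND ct.table_name = tg.table_name
    GROUP  BY tg.group_name, ct.host
    ORDER  BY gb_in_memory DESC
""")
for group, host, tables, gb in cur.fetchall():
    # A group spread over more than one host would violate the placement rule.
    print(f"{group or '<ungrouped>':<25} {host:<20} {tables:>6} tables {gb:>8} GB")
```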
As with the BW example, what works well for one scenario may not work well for another. One of the significant advantages of S/4HANA over Business Suite 7 is the consolidation and dramatic reduction of tables resulting in fewer, much larger tables. Conversely, this makes table distribution in a scale-out cluster much more challenging. It is not hard to imagine that performance management could be quite a task in a scale-out scenario.
So, if scale-out is not an option for many/most customers, what should be done if approaching a significant memory barrier? Options include:
- Cleanup, use of hybrid LOBs, index optimization, etc.
- Archiving data to reduce the size of the system
- Eliminating duplicate data or easily reproduced data, e.g. iDocs, data from Hadoop
- Usage of Data Aging[iii]
- Sizing memory smaller than predicted
- Request an exception to size the system larger than officially supported
Cleaning up your system and getting rid of unnecessary memory consumers should be the first approach undertaken.[iv] Remember, what might have been important with a conventional DB may no longer be needed with S/4HANA, or a better technique may exist. The expected memory reduction is usually shown as part of an ERP sizing report.
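As a starting point, a query along the following lines, run via HANA Studio, hdbsql or the hdbcli Python driver, can surface the largest in-memory consumers worth investigating. The monitoring view and column names should be verified against your HANA revision; the sizing report remains the authoritative source.

```python
# Illustrative HANA SQL for a cleanup exercise, shown here as a string: list
# the column-store tables holding the most memory. Assumes the M_CS_TABLES
# monitoring view; execute with your tool of choice (HANA Studio, hdbsql,
# or the hdbcli Python driver).
top_consumers_sql = """
    SELECT TOP 20
           schema_name,
           table_name,
           ROUND(SUM(memory_size_in_total)/1024/1024/1024, 2) AS gb_in_memory,
           SUM(record_count)                                  AS records
    FROM   m_cs_tables
    GROUP  BY schema_name, table_name
    ORDER  BY gb_in_memory DESC
"""
```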
Archiving is another obvious approach, but since archived data is kept on very slow media compared to in-memory data and cannot be changed, the decision as to what to archive and where to place it can be very challenging for some organizations.
iDocs are, by definition, intermediate documents and are used primarily for sending and receiving documents to/from third parties, e.g. sales orders, purchase orders, invoices, shipping notices. Every iDoc sent or received should have a corresponding transaction within the SAP system, which means that it is essentially a duplicate record once processed. Many customers keep these documents indefinitely just in case any disputes occur with those third parties. Often, these iDocs just sit around collecting digital dust and may be prime candidates for deletion or archival. Likewise, data from an external source, e.g. Hadoop, should still exist in that source and could potentially be deleted from HANA.
Data Aging only covers a subset of data objects and requires some effort to utilize.[v] By default, the ABAP server adds “WITH RANGE RESTRICTION (‘CURRENT’)” to all queries to prevent unintended access to aged or cold partitions, which means that to access aged data, a query must specify which aged partition to access. This implies special transactions or at least different training for users who need aged data. Data Aging does allow aged data to be updated, so it may be more desirable than archiving in some cases. Aged data resides on storage devices, which makes it many orders of magnitude slower than memory, although this can be mitigated to some extent by faster media, e.g. NVMe drives on PCIe cards. Unfortunately, Data Aging has not been implemented by many customers, meaning a potentially steep learning curve.
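To illustrate what that access restriction looks like in practice, here is a hypothetical pair of statements; the table and column names are invented for the example, and the exact clause syntax should be checked against your release.

```python
# Purely illustrative HANA SQL (hypothetical table and column names) showing
# the Data Aging access behaviour described above.

# What the ABAP server effectively issues by default: only the current (hot)
# partition is read; aged partitions on disk are never touched.
current_only_sql = """
    SELECT doc_number, posting_date, amount
    FROM   my_schema.sales_documents
    WHERE  company_code = '1000'
    WITH RANGE_RESTRICTION ('CURRENT')
"""

# To include aged data, the query must state how far back into the cold
# partitions it wants to reach, e.g. everything aged since the start of 2015.
include_aged_sql = """
    SELECT doc_number, posting_date, amount
    FROM   my_schema.sales_documents
    WHERE  company_code = '1000'
    WITH RANGE_RESTRICTION ('2015-01-01')
"""
```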
Deliberately undersizing a system is not recommended by SAP, and I am not recommending it either. That said, if an implementation is approaching a memory boundary and scaling to a larger VM or platform is not possible (physically, politically or financially), then this technique may be considered. It comes with some risk, however, so it should be considered a last resort only. HANA enables “lazy loading” of columns,[vi] whereby columns are not loaded into memory until needed. If your system has a large number of columns which consume space on disk but are never or rarely accessed, the memory reserved for these columns will likewise go unused or underused. HANA will also attempt to unload columns when the system runs out of allocable memory, based on a least-frequently-used algorithm. Unless a problem occurs, a system configured with less memory than the sizing report predicts will start without problems and unload columns when needed. The penalty comes when columns that are not memory resident are accessed: other column(s) must first be unloaded and the entire requested column loaded, i.e. significant latency is incurred on the first access. As mentioned earlier, this should be considered only in a worst-case scenario and only if scaling up/out is not desired or not an option.
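If you do go down this path, it is worth watching how often HANA is actually forced to evict columns. The sketch below is one way to do that, assuming the hdbcli Python driver and the M_CS_COLUMNS and M_CS_UNLOADS monitoring views; names, reason codes and connection details are placeholders to verify on your own system.

```python
# Sketch: observe how an undersized system is behaving. Frequent low-memory
# unloads are the signal that the reload penalty described above is being
# paid regularly. Connection details are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host.example.com", port=30015,
                     user="MONITOR_USER", password="********")
cur = conn.cursor()

# How much of the column store is actually resident right now?
cur.execute("SELECT loaded, COUNT(*) FROM m_cs_columns GROUP BY loaded")
print(dict(cur.fetchall()))   # e.g. {'TRUE': ..., 'PARTIALLY': ..., 'FALSE': ...}

# Which tables were evicted in the last day because memory ran short?
cur.execute("""
    SELECT table_name, COUNT(*) AS evictions
    FROM   m_cs_unloads
    WHERE  reason = 'LOW MEMORY'
      AND  unload_time > ADD_DAYS(CURRENT_TIMESTAMP, -1)
    GROUP  BY table_name
    ORDER  BY evictions DESC
""")
for table, evictions in cur.fetchall():
    print(f"{table:<30} {evictions}")
```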
Lastly, requesting an exception from SAP to allow a system size greater than officially supported may be a viable choice for customers expected to exceed current maximums. This is not without difficulty: when you embark on a journey where few or none have gone before, you will inevitably run into obstacles that others have not yet encountered. Dispatch mechanisms, delta merge operations, transaction log latency, savepoint I/O throughput, system startup times, backup/recovery and system replication are among the more significant areas that would be stressed, and some might break.
My advice: Scale-up only in all S/4HANA cases unless the predicted memory for the immediate planning horizon exceeds the official SAP maximum supported size. Before considering scale-out solutions, use every available tool to reduce the size of the system and ask SAP for an exception if the resulting size is still above the maximum. Lastly, remember that SAP and its hardware partners are constantly working to enable larger HANA system sizes. If the size required today fits within the largest supported system but is expected to exceed the limit over time, it may be reasonable to start your implementation or migration effort today with the expectation that the maximum will be increased by the time you need it. Admittedly, this is taking a risk, but one that may be tolerable and if the limit is not raised in time, scale-out is still an option.
[i] 2408419 – SAP S/4HANA – Multi-Node Support
[ii] 2428711 – S/4HANA Scale-Out Sizing
[iii] 2416490 – FAQ: SAP HANA Data Aging in SAP S/4HANA
[iv] 1999997 – FAQ: SAP HANA Memory (FAQ 5)
[v] 1872170 – Business Suite on HANA and S/4HANA sizing report
[vi] https://www.sap.com/germany/documents/2016/08/205c8299-867c-0010-82c7-eda71af511fa.html
Scale-up vs. scale-out architectures for SAP HANA – part 1
Dozens of articles, blog posts, how-to guides and SAP notes have been written about this subject. One of the best was by John Appleby, now Global Head of DDM/HANA COEs @ SAP.[i] Several others have been written by vendors with a vested interest in the proposed option. The vendor for which I work, IBM, offers excellent solutions for both options, so my perspective is based on my own experience and that of our many customers, some of whom have chosen one option or the other, or in some cases both.
Scale-out for BW is well established, well understood, fully supported by SAP and can be cost effective from the perspective of systems acquisition costs. Scale-out for S/4HANA, by comparison, is in use by very few customers and not well understood, yet it is supported by SAP for configurations of up to 4 nodes. Does this mean that a scale-out architecture should always be used for BW and that scale-up is the only viable choice for S/4HANA? This blog post will discuss only BW and similar analytical environments, including BW/4HANA, data marts, data lakes, etc. The next will discuss S/4HANA, and the third in the series will discuss vendor selection and where one vendor might have an advantage over the others.
Scale-out has 3 key advantages over scale-up:
- Every vendor can participate, so competitive bidding of “commodity”-level systems can result in optimal pricing.
- High availability using host auto-failover requires nothing more than n+1 systems, as the hot-standby node can take over the role of any other node (some customers choose n+2 or group nodes with standby nodes).
- Some environments are simply too large to fit in even the largest supported scale-up systems.
Scale-up, likewise, has 3 key advantages over scale-out:
- Performance is, inevitably, better as joins across memory are always faster than joins across a network
- Management is much simpler as query analysis and data distribution decisions need not be performed on a regular basis plus fewer systems are involved with the corresponding decrease in monitoring, updating, connectivity, etc.
- TCO can be lower when the costs of systems, storage, network and basis management are included.
Business requirements, as always, should drive the decision as to which to use. As mentioned, when an environment is simply too large, unless a customer is willing to ask for an exception from SAP (and SAP is willing to grant it), then scale-out may be the only option. Currently, SAP supports BW configurations of up to 6TB on many 8-socket Intel Skylake based systems (up to 12TB on HPE’s 16-socket system) and up to 16TB on IBM Power Systems.
The next most important issue is usually cost. Let’s take a simple example of an 8TB BW HANA requirement. With scale-out, 4 @ 2TB nodes may be used with a single 2TB node for hot standby for a total of 10TB of memory. If scale-up is used, the primary system must be 8TB and the hot-standby another 8TB for a total of 16TB of memory. Considering that memory is the primary driver of the cost of acquisition, 16TB, from any vendor, will cost more than 10TB. If the analysis stops there, then the decision is obvious. However, I would strongly encourage all customers to examine all costs, not just TCA.
In the above example, 5 systems are required for the scale-out configuration vs. 2 for scale-up. The scale-out config could be reduced to 4 systems if 3TB nodes are used, with 1TB left unused, although the total memory requirement would rise to 12TB. At a minimum, twice the management activity, troubleshooting and connectivity would be required. Also remember that prod rarely exists on its own: some semblance of the configuration usually exists in QA, often in DR and sometimes in other non-prod instances.
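The memory arithmetic behind those numbers is simple enough to write down; the following sketch (illustrative only, not a sizing method) reproduces the 10TB, 12TB and 16TB totals discussed above.

```python
# Quick arithmetic behind the node-count comparison above, using this post's
# example figures for an 8TB BW requirement.
def total_memory_tb(node_tb, worker_nodes, standby_nodes=1):
    """Memory purchased across all nodes, including the hot-standby node(s)."""
    return node_tb * (worker_nodes + standby_nodes)

configs = {
    "scale-out: 4 x 2TB workers + 2TB standby": total_memory_tb(2, 4),
    "scale-out: 3 x 3TB workers + 3TB standby": total_memory_tb(3, 3),
    "scale-up:  1 x 8TB         + 8TB standby": total_memory_tb(8, 1),
}
for name, tb in configs.items():
    print(f"{name:<42} {tb:>4.0f} TB total")
# -> 10, 12 and 16 TB respectively, matching the figures discussed above.
```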
The other set of activities is much more intensive. To distribute load among the systems, the data must first be distributed. Some data must reside on the master node, e.g. all row-store tables, ABAP tables and general operations tables. Other data, such as Fact, DataStore Object (DSO) and Persistent Staging Area (PSA) tables, is distributed evenly across the slave nodes based on the desired partitioning specification, e.g. hash, round robin or range. There are also more complex options where specifications can be mixed to get around hash or range limitations and create a multi-level partitioning plan, and of course different tables can be partitioned using different specifications. Which set of distribution specifications you use is highly dependent on how data is accessed, and this is where it gets really complicated. Most customers start with a simple specification, then monitor placement using the table distribution editor and performance using ST03N, plus feedback from end users (read that as complaints to the help desk). After some period of time and analysis of performance, many customers elect to redistribute data using a better or more complex set of specifications. Unfortunately, what is good for one query, e.g. distributing data by month, is bad for another which looks for data by zipcode, customer name or product number. Some customers report that this set of activities can consume part or all of one or more FTEs.
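For readers who have not seen them, the partitioning specifications mentioned above look roughly like the following; the table and column names are invented for illustration and the exact DDL should be confirmed against your HANA revision.

```python
# Illustrative examples of HANA partitioning specifications, collected as SQL
# strings (hypothetical tables and columns; verify DDL syntax for your release).
partitioning_examples = {
    # Hash: spread rows evenly by hashing a key column over four partitions.
    "hash": """
        CREATE COLUMN TABLE sales_fact (doc_id INT, calmonth NVARCHAR(6), amount DECIMAL(15,2))
        PARTITION BY HASH (doc_id) PARTITIONS 4
    """,
    # Round robin: no suitable key, so rows are dealt out in turn.
    "round robin": """
        CREATE COLUMN TABLE staging_psa (rec_id BIGINT, payload NVARCHAR(1000))
        PARTITION BY ROUNDROBIN PARTITIONS 4
    """,
    # Range: slice by month -- good for time-based queries, poor for queries
    # filtering on zipcode, customer or product.
    "range": """
        CREATE COLUMN TABLE sales_fact_r (doc_id INT, calmonth NVARCHAR(6), amount DECIMAL(15,2))
        PARTITION BY RANGE (calmonth)
            (PARTITION '201601' <= VALUES < '201701', PARTITION OTHERS)
    """,
    # Multi-level: hash across nodes at the first level, range within a node at
    # the second, mixing specifications to work around single-spec limitations.
    "multi-level": """
        CREATE COLUMN TABLE sales_fact_ml (doc_id INT, calmonth NVARCHAR(6), amount DECIMAL(15,2))
        PARTITION BY HASH (doc_id) PARTITIONS 4,
                     RANGE (calmonth) (PARTITION '201601' <= VALUES < '201701', PARTITION OTHERS)
    """,
}
```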
Back to the above example: 10TB vs. 16TB, which for the sake of argument we will assume is replicated in QA and DR, i.e. the scale-up solution requires 18TB more memory. If the price per TB is $35,000, then the difference in TCA would be $630,000. The average cost of a senior basis administrator (required for this sort of complex task) in most western countries is in the $150,000 range, or roughly $750,000 over 5 years. That means that over the course of 5 years, the TCO of the scale-up solution, considering only TCA and basis admin costs, would be roughly equivalent to that of the scale-out solution. Systems, storage and network administration costs could push the TCO of the scale-out solution up further relative to the scale-up solution.
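Written out explicitly, the comparison looks like this (all figures are the illustrative assumptions above, not quotes or list prices):

```python
# The same back-of-the-envelope TCO comparison, with the assumptions explicit.
PRICE_PER_TB = 35_000            # assumed memory-driven acquisition cost, USD/TB
BASIS_ADMIN_PER_YEAR = 150_000   # assumed senior basis admin cost, USD/year
YEARS = 5

extra_memory_tb = (16 - 10) * 3                     # prod delta mirrored in QA and DR
extra_tca_scale_up = extra_memory_tb * PRICE_PER_TB
extra_admin_scale_out = BASIS_ADMIN_PER_YEAR * YEARS

print(f"Extra TCA for scale-up:                ${extra_tca_scale_up:,}")     # $630,000
print(f"Extra 5-year admin cost for scale-out: ${extra_admin_scale_out:,}")  # $750,000
```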
And then there is performance. Some very high performance network adapter companies have been able to drive TCP latency across 10Gb Ethernet down to 3.6µs, which sounds really good until you consider that memory latency is around 120ns, i.e. roughly 30 times faster. Joining tables across nodes is not only substantially slower, but also results in more CPU and memory overhead.[ii] A retailer in Switzerland, Coop Group, reported 5 times quicker analytics while using 85% fewer cores after migrating from an 8-node x86 scale-out BW HANA cluster with 320 total cores to a single scale-up 96-core IBM Power Systems server.[iii] While various benchmarks suggest 2x or better per-core performance of Power Systems vs. x86, the results suggest far more, much of which can, no doubt, be attributed to the effect of using a scale-up architecture.
Of course, performance is relative. BW queries run with scale-out HANA will usually outperform BW on a conventional DB by an order of magnitude or more. If this is sufficient for business purposes, then it may be hard to build a case for why faster is required. But end users have a tendency to soak up additional horsepower once they understand what is possible. They do this in the form of more what-if analyses, interactive drill downs, more frequent mock-closes, etc.
If the TCO is similar or better and a scale-up approach delivers superior performance with many fewer headaches and calls to the help desk for intermittent performance problems, then it would be very worthwhile to investigate this option.
To recap: for BW HANA and similar analytical environments, scale-out architectures usually offer the lowest TCA plus scalability beyond the largest scale-up system, while scale-up architectures offer significantly easier administration, much better performance and competitive to superior TCO.
[i] https://blogs.saphana.com/2014/12/10/sap-hana-scale-scale-hardware/
[ii] https://launchpad.support.sap.com/#/notes/2044468 (see FAQ 8)
[iii] https://www.ibm.com/case-studies/coop-group-technical-reference
ASUG Webinar next week – Scale-Up Architecture Makes Deploying SAP HANA® Simple
On October 4, 2016, Joe Caruso, Director – ERP Technical Architecture at Pfizer, will join me in presenting an ASUG webinar. Pfizer is not only a huge pharmaceutical company but, more importantly, has implemented SAP throughout their business, including just about every module and component that SAP has to offer in their industry. Almost 2 years ago, Pfizer decided to begin their journey to HANA, starting with BW. Pfizer is a leader in their industry and in the world of SAP and has never been afraid to try new things, including a large-scale PoC to evaluate scale-out vs. scale-up architectures for BW. After completing this PoC, Pfizer made a decision regarding which one worked better, proceeded to implement BW HANA and went live just recently. Please join us to hear about this fascinating journey. For those that are ASUG members, simply follow this link.
If you are an employee of an ASUG member company, either an Installation or Affiliate member, but not registered for ASUG, you can follow this link to join at no cost. That link also offers the opportunity for companies to join ASUG, a very worthwhile organization that offers chapter meetings all over North America, a wide array of presentations at their annual meeting during Sapphire, the BI+Analytics conference coming up in New Orleans, October 17 – 20, 2016, hundreds of webinars, not to mention networking opportunities with other member companies and the ability to influence SAP through their combined power.
This session will be recorded and made available at a later date for those not able to attend. When the link to the recording is made available, I will amend this blog post with that information.
SAP HANA on Power support expands dramatically
SAP’s release of HANA SPS11 marks a critical milestone for SAP/IBM customers. About a year ago, I wrote that there was Hope for HoP, HANA on Power. Some considered this wishful thinking, little more than a match struck in the Windy City. In August, that hope became a pilot light with SAP’s announcement of General Availability of Scale-up BW HANA running on the Power Systems platform. Still, the doubters questioned whether Power could make a dent in a field already populated by dozens of x86 vendors with hundreds of supported appliances and thousands of installed customers. With almost 1 new customer per business day deciding to implement HANA on Power since that time, the pilot light has quickly evolved into a nice strong flame on a stove.
In November 2015, SAP unleashed a large assortment of support for HoP. First, they released first-of-a-kind support for running more than one production instance on a single system using virtualization.[1] For those that don’t recall, SAP limits systems running HANA in production on VMware to one[2], count that as 1, total VM on the entire system. Yes, non-prod can utilize VMware to its heart’s content, but is it wise to mess with best practices and run different stacks for prod and non-prod, much less deal with restrictions that limit the number of virtual processors to 64, i.e. 32 real processors not counting VMware overhead, and memory to 1TB? Power now supports up to 4 resource pools on E870 and E880 systems and 3 on systems below this level. One of those resource pools can be a “shared pool” supporting many VMs of any kind and any supported OS, as long as none of them run production HANA instances. Any production HANA instance must run in a dedicated or dedicated-donating partition: when production HANA needs CPU resources, it gets them without any negotiation or delay, and when it does not require all of its resources, it allows partitions in the shared pool to utilize the unused capacity. This is ideal for HANA, which is often characterized by wide variations in load, generally low utilization and very low utilization on non-prod, HA and DR systems, resulting in much better flexibility and resource utilization (read that as reduced cost).
But SAP did not stop there. Right before the US Thanksgiving holiday, SAP released support for running Business Suite on HANA on Power, specifically ERP 6.0 EHP7, CRM 7.0 EHP3 and SRM 7.0 EHP3, along with SAP Landscape Transformation Replication Server 2.0, HANA dynamic tiering, BusinessObjects Business Intelligence platform 4.1 SP03, HANA smart data integration 1.0 SP02, HANA spatial SPS 11, controlled availability of BPC[3] and scale-out BW[4] using the TDI model with up to 16 nodes. SAP plans to update the application support note as each additional application passes customer and/or internal tests, with support rolling out rapidly in the next few months.
Not enough? Well, SAP took the next step and increased the memory-per-core ratio on high-end systems, i.e. the E870 and E880, to 50GB/core for BW workloads, thereby increasing the total memory supported in a scale-up configuration to 4.8TB.[5]
What does this mean for SAP customers? It means that the long wait is over. Finally, a robust, reliable, scalable and flexible platform is available to support a wide variety of HANA environments, especially those considered to be mission critical. Those customers that were waiting for a bet-your-business solution need wait no more. In short order, the match jumped to a pilot light, then to a flame on a full cooktop. Just wait until S/4HANA, SCM and LiveCache are supported on HoP, likely not a long wait at this rate, and the flame will have jumped to one of those jet burners used for crawfish boils in my old home town of New Orleans! Sorry, did I push the metaphor too far? 🙂
[1] 2230704 – SAP HANA on IBM Power Systems with multiple LPARs per physical host
[2] 1995460 – Single SAP HANA VM on VMware vSphere in production
[3] 2218464 – Supported products when running SAP HANA on IBM Power Systems and http://news.sap.com/customers-choose-sap-hana-to-run-their-business/
[4] BW Scale-out support restriction that was previously present has been removed from 2133369 – SAP HANA on IBM Power Systems: Central Release Note for SPS 09 and SPS 10
[5] 2188482 – SAP HANA on IBM Power Systems: Allowed Hardware