SAPonPower

An ongoing discussion about SAP infrastructure

Optane DC Persistent Memory – Proven, industrial strength or full of hype – Detail, part 2

If the performance considerations from part 1 were the only issues, a reasonable case could be made for the potential value of doing a PoC with this technology.  But, of course, those are not the only issues.  One of the reasons that NVDIMMs have longer latencies than DRAM is their persistence and the consequent need to encrypt data placed on these components.  Encryption and decryption take a lot of computational power and can have a substantial impact on latency and bandwidth.  The funny thing is that encryption on these NVDIMMs can be turned off if desired, presumably with a resulting improvement in performance.  But what kind of customer would be willing to turn off this vital security technology?

Another desirable trait of modern, in-memory platforms is advanced memory protection, which allows a system to continue to operate in the event of a DIMM failure.  This often starts with basic ECC, but then progresses to SDDC, DDDC (Chipkill or Lockstep), ADDDC (Skylake and beyond only) and IBM’s unique Chipkill plus chip-sparing technology.  ADDDC is not available for NVDIMMs, but DDDC is.  The downside of DDDC is that it comes with a significant performance penalty. No performance numbers have been provided for NVDIMMs configured with DDDC, but previous generations saw 20% to 40% degradation when using this mode.[i][ii]

What kind of customer would be willing to disable key security features or run critical systems without the best available reliability technologies?  I would certainly advise customers to use encryption and advanced reliability technologies in most circumstances.  Only those customers that can scramble business-critical, PII and/or HIPAA data should ever consider disabling persistent memory encryption.  I searched, using every option that I could imagine, and failed to find a single website that recommended ever disabling NVDIMM encryption.

SAP benchmark results posted on the external website do not show how security and reliability configuration parameters were set.  It is therefore impossible to say whether HPE enabled or disabled these protection features.  In my many years of experience and extensive discussions with benchmarking experts, I can share that every single one, at every vendor, used every tool or technology that did not violate official rules to enhance results.  It would not be too much of a leap to project that HPE, and other vendors posting results with NVDIMMs, have likely disabled anything that might diminish their results in any way.  (HPE, if you would like to share your configuration details, I would be happy to post them, and if I have mischaracterized how you ran these benchmarks, I will also post a retraction.) As a result, these BWH results may be relevant to only a small subset of potential workloads and may also represent an unacceptable exposure to any company that has high single-system availability requirements or has one of those unreasonable security departments which thinks that data protection is actually worthwhile.

And then there are OLTP customers.  Based on the lack of benchmark testing of Suite on HANA, S/4HANA or C/4HANA, combined with the Lenovo data discussed in part 1 about the massive reduction in bandwidth and associated huge increase in latency for OLTP-like, mixed read/write workloads, it would be MOST unwise to place any of these types of environments on systems with NVDIMMs without extensive testing of real customer workloads to ensure that internal performance SLAs can be met.

Certain types of workloads may perform decently with NVDIMMs.  BW environments where the primary use is predictable and repeatable queries and reports may see only moderate performance degradation compared to DRAM-based systems, yet still deliver orders of magnitude better performance than AnyDB systems, which merely cache recently used data in memory and keep most data on external storage.  BW Extension nodes, S/4 data aging objects and other types of archival systems that take older, less frequently used data and place it on other tiers of storage or systems could certainly benefit from NVDIMMs.  Non-prod workloads which are not in the critical path to production, e.g. dev, test and sandbox, might also make sense to place on systems with NVDIMMs.  All of these depend on an acceptance of potential performance issues and of the hardware/firmware/software fixes that inevitably come once customers start playing with version 1.0 of any new technology.

Based on likely performance issues, inferior RAS technology and the above-mentioned “fix” dilemma, I would strongly advise that critical systems like production, QA, pre-prod, HA and DR stay on DRAM-based systems until bleeding-edge customers prove the value of NVDIMMs and are willing to publicly share their journey.

The question then becomes whether the benefit to a subset of the environments is so substantial that it makes sense to select a vendor for HANA systems based on their ability to utilize NVDIMMs, even when this technology might not be used for the most critical workloads and their associated critical-path and HA/DR systems. This gets into the subjects of cost reduction and restart speeds, which will be covered in part 3 of this series.

[i]https://lenovopress.com/lp0048.pdf

[ii]https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-broadwell-ex-memory-performance-ww-en.pdf

May 27, 2019 | Uncategorized

Optane DC Persistent Memory – Proven, industrial strength or full of hype – Detail, part 1

Several non-Intel sites suggest that Intel’s storage class memory (Lenovo abbreviates these as DCPMM, while many others refer to them with the more generic term NVDIMM) delivers a read latency roughly 5 times slower than DRAM, e.g. 350 nanoseconds for NVDIMM vs. 70 nanoseconds for DRAM.[i]  A much better analysis comes from Lenovo, which examined a variety of load conditions and published the results in a white paper.[ii]  Here are some of the results:

  • A fully populated socket with 6 DCPMMs could deliver up to 40GB/s read throughput and 15GB/s write throughput
  • Each additional pair of DCPMMs delivered proportional increases in throughput
  • Random reads had a load-to-use latency roughly 50% higher than sequential reads
  • Random reads had a max per-socket (6x DCPMM) throughput of between 10 and 13GB/s, compared to 40 to 45GB/s for sequential reads
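
Before moving on, it is worth translating those per-socket numbers into rough per-pair figures.  The sketch below is nothing more than illustrative arithmetic on the values quoted from the Lenovo paper, under the proportional-scaling observation above; it adds no new measured data.

```python
# Back-of-envelope arithmetic on the Lenovo per-socket figures quoted above,
# assuming the reported proportional scaling per DCPMM pair holds.
# These are illustrative derived numbers, not additional measured data.

dcpmm_pairs_per_socket = 3                  # 6 DCPMMs = 3 pairs per socket
seq_read_gbs, seq_write_gbs = 40.0, 15.0    # per fully populated socket
rand_read_low, rand_read_high = 10.0, 13.0  # random read range per socket

print(f"Sequential read per DCPMM pair:  ~{seq_read_gbs / dcpmm_pairs_per_socket:.1f} GB/s")
print(f"Sequential write per DCPMM pair: ~{seq_write_gbs / dcpmm_pairs_per_socket:.1f} GB/s")
print(f"Random read is only ~{rand_read_low / seq_read_gbs:.0%}-"
      f"{rand_read_high / seq_read_gbs:.0%} of the sequential read throughput")
```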

The most interesting quote from this section was: “Overall, workloads that are more read intensive and sequential in nature will see the best performance.”  This echoes the quote from SAP’s NVRAM white paper: “From the perspective (of) read accesses, sequential scans fare better in NVRAM than point reads: the cache line pre-fetch is expected to mitigate the higher latency.”[iii]

The next section is even more interesting.  Some of its results comparing the performance differences of DRAM to DCPMM were:

  • Almost 3x better max sequential read bandwidth
  • Over 5x better max random read bandwidth
  • Over 5x better max sequential 2:1 R/W bandwidth
  • Over 8x better max random 2:1 R/W bandwidth
  • Latencies for DCPMM in the random 2:1 R/W test hit a severe knee of the curve and showed max latencies over 8x that of DRAM at very light bandwidth loads
  • DRAM, by comparison, continued to deliver significantly increasing bandwidth with only a small amount of latency degradation until it hit a knee of the curve at over 10x the max DCPMM bandwidth
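
Combining these ratios with the absolute DCPMM per-socket figures from the first set of results gives a rough sense of the DRAM bandwidths they imply.  Again, this is only illustrative arithmetic; the white paper reports the measured DRAM values directly.

```python
# Implied per-socket DRAM bandwidths, derived from the DCPMM absolutes above
# and the "almost 3x" / "over 5x" ratios (rough, illustrative arithmetic only).

dcpmm_seq_read = (40.0, 45.0)    # GB/s per socket, sequential reads
dcpmm_rand_read = (10.0, 13.0)   # GB/s per socket, random reads

dram_seq_read = tuple(3 * x for x in dcpmm_seq_read)     # "almost 3x better"
dram_rand_read = tuple(5 * x for x in dcpmm_rand_read)   # "over 5x better"

print(f"Implied DRAM sequential read: ~{dram_seq_read[0]:.0f}-{dram_seq_read[1]:.0f} GB/s per socket")
print(f"Implied DRAM random read:     at least ~{dram_rand_read[0]:.0f}-{dram_rand_read[1]:.0f} GB/s per socket")
```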

Unfortunately, this is not a direct indication of how an application like HANA might perform.  For that, we have to look at available benchmarks. To date, none of the SD benchmarks have utilized NVDIMMs.  Lenovo published a couple of BWH results, one with and one without NVDIMMs, but used different numbers of records, so they are not directly comparable.  HPE, on the other hand, published a couple of BWH results using the exact same systems and numbers of records.[iv]  Remarkably, only a small, 6% performance degradation occurred in the parallel query execution phase of the benchmark when going from an all-DRAM 3TB configuration to a mixed 768GB DRAM/3TB NVDIMM configuration.  The exact configuration is not shown on the public web site, but we can assume something about the config based on SAP Note 2700084 – FAQ: SAP HANA Persistent Memory: “To achieve highest memory performance, all DIMM slots have to be used in pairs of DRAM DIMMs and persistent memory DIMMs, i.e. the system must be equipped with one DRAM DIMM and one NVDIMM in each memory channel.”  Vendors submitting benchmark results do not have to follow these guidelines, but if HPE did, then they used 24@32GB DRAM DIMMs and 24@128GB NVDIMMs.  Also, following other guidelines in the same SAP Note and the SAP HANA Administration Guide, HPE most likely placed the column store on NVDIMMs with row store, caches, intermediate and final results calculations on DRAM DIMMs.
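
A quick arithmetic check of that assumed DIMM mix is shown below.  To be clear, the 24/24 split is an assumption drawn from the SAP Note pairing guidance, not from any configuration details HPE has published.

```python
# Sanity check of the assumed HPE BWH memory configuration
# (an assumption based on SAP Note 2700084 pairing guidance, not published details).

dram_dimms, dram_gb = 24, 32        # 24 x 32GB DRAM DIMMs
nvdimms, nvdimm_gb = 24, 128        # 24 x 128GB Optane DCPMMs

dram_total = dram_dimms * dram_gb       # working memory: row store, caches, results
nvdimm_total = nvdimms * nvdimm_gb      # persistent memory: column store main

print(f"DRAM total:   {dram_total} GB")                                  # 768 GB
print(f"NVDIMM total: {nvdimm_total} GB (~{nvdimm_total/1024:.0f} TB)")  # 3072 GB, i.e. ~3 TB
print(f"Slots used:   {dram_dimms + nvdimms} (one DRAM + one NVDIMM per channel)")
```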

BWH is a benchmark composed of 1.3 billion records, which can easily be loaded into a 1TB system with room to spare.  To achieve larger configurations, vendors can load the same 1.3B records a second, third or more times, which HPE did a total of 5 times to get to 6.5B records.  The column compression dictionary tables only grow with unique data, i.e. they do not grow when you repeat the same data set, regardless of the number of times it is added.
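
For readers less familiar with dictionary compression, here is a toy sketch (not SAP’s implementation) showing why reloading the same record set inflates the stored row count but leaves the dictionaries unchanged after the first load:

```python
# Toy dictionary encoding of a column: repeated loads of the same data set
# add rows (value IDs) but add nothing to the dictionary of unique values.

def dictionary_encode(values):
    dictionary = {}      # unique value -> integer value ID
    encoded = []         # the column, stored as value IDs
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        encoded.append(dictionary[v])
    return dictionary, encoded

base_load = ["DE", "US", "FR", "US", "DE"]               # hypothetical column values
dictionary, encoded = dictionary_encode(base_load * 5)   # same data loaded 5 times

print(f"Rows stored:     {len(encoded)}")      # 25 -- grows with every reload
print(f"Dictionary size: {len(dictionary)}")   # 3  -- fixed after the first load
```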

BWH includes 3 phases: a load phase, which represents data ingestion from ERP; a parallel query phase; and a sequential, single-user complex query phase.  Some have focused on the ingestion and complex query phases because they show the most degradation in performance vs. DRAM.  While that is tempting, I believe the parallel query phase is of the most relevance.  During this phase, 385 queries of low, medium and high complexity (no clue as to how SAP defines those complexities, what their SQL looks like or how many of each type are included) are run, in parallel and randomly.  After an hour, the total count of queries completed is reported. In theory, the larger the database, the fewer the queries that could be run per hour, as each query would have more data to traverse.  However, that is not what we see in these results.

Lenovo, once again, provides the best insights here.  With Skylake processors, they reported two results.  On the first, they loaded 1.3B records; on the second, 5.2B records, or 4 times the number of rows with only twice the memory.  One might predict that queries per hour would be 4 times or more worse considering the non-proportionate increase in memory.  The results, however, show only a little over a 2x decrease in Query/hr. Dell reported a similar set of results, this time with Cascade Lake, also with only real memory and also with only around a 2x decrease in Query/hr for a 4x larger number of records.
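
The gap between the naive expectation and the reported behavior is easier to see as simple ratios.  The sketch below uses only the record counts and the rough “2x decrease” described above; the absolute Query/hr figures are in the published certifications and are not reproduced here.

```python
# Naive scaling expectation vs. the reported Query/hr behavior (ratios only).

records_small, records_large = 1.3e9, 5.2e9
data_ratio = records_large / records_small      # 4x the rows

naive_throughput = 1 / data_ratio               # ~0.25x, if Query/hr fell with data volume
reported_throughput = 1 / 2                     # ~0.5x, "a little over a 2x decrease"

print(f"Data volume grew by:        {data_ratio:.0f}x")
print(f"Naive Query/hr expectation: {naive_throughput:.2f}x of the small-data result")
print(f"Reported Query/hr:          ~{reported_throughput:.2f}x of the small-data result")
```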

What does that tell us? It is impossible to say for sure. From the SAP NVRAM white paper referenced earlier, “One can observe that some of the queries are more sensitive to the latency of the persistent memory than others. This can be explained by multiple factors:

  1. Does the query exhibit a memory access pattern that can easily be prefetched by the hardware prefetchers?
  2. Is the working set of queries small enough to fit in CPU cache and hence agnostic to persistent memory latency?
  3. Is processing of the query compute or latency bound?”

SAP stores results in the static result cache. “The static result cache is particularly helpful in the following scenario:  Complex query based on a view; Rather small result set; Limited amount of changes in the underlying tables.  The static result cache can provide the following advantages: Reduction of CPU consumption; Reduction of SAP HANA thread utilization; Performance improvements”[v]

Other areas like delta storage, caches, intermediate result sets and row store remain solely in dynamic RAM (DRAM), not NVDIMMs.[vi]

The data in BWH is completely static.  Some queries are complex and presumably based on views.   Since the same queries execute over and over again, prefetchers may become especially effective.  Some or many of the 385 queries in BWH may be hitting the results cache in DRAM.  In other words, after the first set of queries run, a decent percentage of accesses may be hitting only the DRAM portion of memory, masking much of the latency and bandwidth issues of NVRAM.  Put differently, this benchmark may actually be testing CPU power against a set of results cached in working memory more than actual query speed against the column store.
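
To illustrate the masking effect, here is a toy sketch, purely a memoization analogy with no claim about HANA internals, showing how a result cache hides a slow data tier once the same queries start repeating:

```python
# Toy result-cache analogy: only the first execution of a repeated "query"
# pays the slow-tier cost; every repeat is answered from the cache.

import time
from functools import lru_cache

def scan_column_store(query_id):
    """Stand-in for a query that must traverse data on a slower memory tier."""
    time.sleep(0.01)                     # simulated slow-tier access cost
    return f"result-{query_id}"

@lru_cache(maxsize=None)                 # stand-in for a result cache held in DRAM
def cached_query(query_id):
    return scan_column_store(query_id)

queries = [q % 5 for q in range(50)]     # the same 5 queries repeated, BWH-style

start = time.perf_counter()
for q in queries:
    cached_query(q)
elapsed_ms = (time.perf_counter() - start) * 1000

# Only 5 of the 50 executions ever touch the slow tier.
print(f"50 repeated queries: {elapsed_ms:.0f} ms "
      f"(vs ~{len(queries) * 10} ms if every execution paid the slow-tier cost)")
```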

So, let us now consider the HPE benchmark with NVDIMMs.  On the surface, 6% degradation with NVDIMMs vs. all DRAM seems improbable considering the higher latency and lower bandwidth of NVDIMMs.  But after considering the above caching, repetitive data and repeating query set, it should not be much of a shock that this sort of benchmark could be masking the real performance effects.  Then we should consider the quote from Lenovo’s white paper above, which said that read-intensive, sequential workloads will see the best performance with NVDIMMs.

Taken together, while not definitive, these observations suggest that a real workload using more varied and random reads against a non-repeating set of records might see substantially different query throughput than demonstrated by this benchmark.

Believe it or not, there is even more detail on this subject, which will be the focus of a part 2 post.


[i]https://www.pcper.com/news/Storage/Intels-Optane-DC-Persistent-Memory-DIMMs-Push-Latency-Closer-DRAM

[ii]https://lenovopress.com/lp1083.pdf

[iii]http://www.vldb.org/pvldb/vol10/p1754-andrei.pdf

[iv]https://www.sap.com/dmc/exp/2018-benchmark-directory/#/bwh

[v]https://launchpad.support.sap.com/#/notes/2336344

[vi]https://launchpad.support.sap.com/#/notes/2700084

May 20, 2019 | Uncategorized

Lintel for SAP App Servers – The right choice

Or is it? Running SAP application servers on IBM Power Systems with Linux results in a lower TCA than using x86 systems with Linux and VMware. Usually, I don’t start a blog post with the conclusion, but I was so amazed by the results of this analysis that I could not help myself.

For several years now, I have seen many customers move older legacy app servers to x86 systems using Linux and VMware, as well as implement new SAP app servers on the same. When asked why, the answers boil down to cost, skills and standards. Historically, Lintel servers were not just perceived to cost less; the cost differences could be easily demonstrated. Students emerging from college have worked with Linux far more often than with UNIX, and despite the fact that learning UNIX and how it is implemented in actual production environments involves very little difference in real effort/cost, the perception of Linux skills being more plentiful and consequently less expensive persists. Many companies and government entities have decided to standardize on Linux. For some legacy IBM Power Systems customers, a complicating factor, or perhaps a compelling factor, in the analysis has been the comparison of partitions on large enterprise-class systems against low-cost 2-socket x86 servers. And so, increasingly, enterprises have defaulted to Lintel as the app server of choice.

Something has changed, however, and it is completely overturning the conventional wisdom discussed above. There is a new technology which takes advantage of all of those Linux skills on the market, obeys the Linux standards mandate and costs less than virtualized Lintel systems. What is this amazing new technology? Surprise: it is a descendant of the technology introduced in 1990 by IBM as the RS/6000, in the form of new Linux-only POWER8 systems. (OK, since you know that I am an IBM Power guy and I gave away the conclusion at the start of this post, that was probably not much of a surprise.) At least, this is what the marketing guys have been telling us, and they have some impressive consultant studies and internal analyses that back up their claims.

For those of you who have been following this blog for a while, you know that I am skeptical of consultant analyses and even more so of internal analyses. So, instead of depending on those, I set out to prove, or disprove, this assertion. The journey began with setting reasonable assumptions. Actually, I went a little overboard and gave every benefit of the doubt to x86 and did the opposite for Power.

Overhead – The pundits, both internal and external, seem to suggest that 10% or more overhead for VMware is reasonable. Even VMware’s best practices guide for HANA suggests an overhead of 10%. However, I have heard some customers claim that 5% is possible. So, I decided to use the most favorable number and settled on 5% overhead. PowerVM does have overhead, but it is already baked into all benchmarks and sizings since it is built into the embedded hypervisor, i.e. it is there even when you are running a single virtual machine on a system.

Utilization – Many experts have suggested that average utilization of VMware systems falls in the 20% to 30% range. I found at least one analyst who said that the best-run shops can drive their VMware systems up to 45%. I selected 45%, once again to give all of the benefit of the doubt to Lintel systems. By comparison, many experts suggest that 85% utilization is reasonable for PowerVM-based systems, but I selected 75% simply to avoid giving Power the same benefit of the doubt that I was giving to x86.

SAPS – Since we are talking about SAP app servers, it is logical to use SAP benchmarks. The best result that I could find for a 2-socket Linux Intel Haswell-EP system was posted by Dell @ 90,120 SAPS (1). A similar 2-socket server from IBM was posted @ 115,870 SAPS (2).

IBM has internal sizing tables, as does every vendor, in which it estimates the SAPS capacity of different servers based on different OSs. One of those servers, the Power S822L, a 20-core Linux-only system, is estimated to attain roughly 35% less SAPS than the benchmark result for its slightly larger cousin running AIX; this takes into consideration differences in MHz, number of cores and small differences due to the compilers used for SAP Linux binaries.

For our hypothetical comparison, let us assume that a customer needs approximately the SAPS capacity that can be attained with three Lintel systems running VMware, including the 5% overhead mentioned above, a sustained utilization of 45% and 256GB per server. Extrapolating the IBM numbers, including no additional PowerVM overhead and a sustained utilization of 75%, results in a requirement of two S822L systems, each with 384GB.
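
The effective-capacity arithmetic behind that sizing can be laid out in a short sketch. The overhead, utilization and ~35% Linux adjustment figures are the assumptions stated above, not vendor-published sizings.

```python
# Effective SAPS per box = rated SAPS x (1 - hypervisor overhead) x target utilization.

x86_saps = 90_120            # Dell 2-socket Haswell-EP SD benchmark result (1)
x86_overhead = 0.05          # assumed VMware overhead (favorable to x86)
x86_util = 0.45              # assumed sustainable utilization

power_saps = 115_870         # IBM Power S824 SD benchmark result (2)
power_linux_adj = 0.65       # S822L estimated ~35% below its AIX cousin (see above)
power_util = 0.75            # assumed sustainable utilization (conservative for Power)

x86_effective = x86_saps * (1 - x86_overhead) * x86_util
power_effective = power_saps * power_linux_adj * power_util

print(f"3 x Lintel: {3 * x86_effective:,.0f} effective SAPS")   # ~115,600
print(f"2 x S822L:  {2 * power_effective:,.0f} effective SAPS") # ~113,000
```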

Lenovo, HP and Dell all offer easy-to-use configurators on the web. I ran through the same configuration for each: 2 @ Intel Xeon Processor E5-2699 v3 18C 2.3GHz 45MB Cache 2133MHz 145W, 16 @ 16GB x4 DIMMs, 1 @ dual-port 10GB Base-T Ethernet adapter, 2 @ 300GB 10K RPM disks (2 @ 1TB 7200 RPM for Dell) and 24x7x4-hour 3-year warranty upgrades (3). Both the Lenovo and HP sites show an almost identical number for Red Hat Enterprise Linux with unlimited guests (Dell’s was harder to decipher since they apply discounts to the prices shown), so for consistency, I used the same price for RHEL including a 3-yr premium subscription and support. VMware also offers their list prices on the web and the same numbers were used for each system, i.e. version 5.5, 2-socket, premium support, 3yr (4).

The configuration for the S822L was created using IBM’s eConfig tool: 2 @ 10-core 3.42 GHz POWER8 processor cards, 12 @ 32GB x4 DIMMs, 1 @ dual-port 10GB Base-T Ethernet adapter, 2 @ 300GB 10K RPM disks, a 24x7x4-hour 3-year warranty upgrade, RHEL with unlimited guests and 3-yr premium subscription and support, and PowerVM with unlimited guests and 3-yr 24×7 support (SWMA). Quick disclaimer: I am not a configuration expert with IBM’s products, much less those from other companies, which means there may be small errors, so don’t hold me to these numbers as being exact. In fact, if anyone with more expertise would like to comment on this post and provide more accurate numbers, I would appreciate that. You will see, however, that all three x86 systems fell in the same basic range, so small errors are likely of limited consequence.

The best list price among the Lintel vendors came in at $24,783 including the warranty upgrade. RHEL 7 came in at $9,259 and VMware @ $9,356, for a grand total of $43,398, and for 3 systems, $130,194. For the IBM Power System, the hardware list was $33,136 including the warranty upgrade, PowerVM for Linux $10,450 and RHEL 7 $6,895, for a grand total of $51,109, and for 2 systems, $102,218.

So, for equivalent effective SAPS capacity, Lintel systems cost around $130K vs. $102K for Power … and this is before we consider the reliability and security advantages, not to mention scalability, peak workload handling characteristics, reduced footprint, power and cooling. Just to meet the list price of the Power System, the Lintel vendors would have to deliver a minimum of roughly 22% discount including RHEL and VMware.
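
For anyone who wants to check the math, the break-even discount can be computed directly from the list-price totals above (the per-system Lintel sum reproduces the $43,398 figure; the Power total is taken as stated).

```python
# Break-even discount on the Lintel stack, using the list prices quoted above.

lintel_per_system = 24_783 + 9_259 + 9_356   # hardware + RHEL + VMware = $43,398
lintel_total = 3 * lintel_per_system         # $130,194 for three systems
power_total = 102_218                        # two S822L systems, as totaled above

breakeven = 1 - power_total / lintel_total
print(f"Lintel list total: ${lintel_total:,}")
print(f"Power list total:  ${power_total:,}")
print(f"Discount needed on the Lintel stack just to break even: {breakeven:.1%}")  # ~21.5%
```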

Conclusions:
For customers making HANA decisions, it is important to note that the app server does not go away, and SAP fully supports heterogeneous configurations, i.e. it does not matter if the app server is on a different platform or even a different OS than the HANA DB server. This means that Linux-based Power boxes are the perfect companion to HANA DB servers, regardless of vendor.

For customers that are refreshing older Power app servers, the comparisons can be a little more complicated, in that there is a reasonable case to be made for running app servers on enterprise-class systems potentially also housing database servers, in terms of higher effective utilization, higher reliability, the ability to run app servers in an IFL (Integrated Facility for Linux) at very attractive prices, increased efficiencies and improved speeds through use of virtual Ethernet for app-to-DB communications. That said, any analysis should start with like for like, e.g. two-socket scale-out Linux servers, and then consider any additional value that can be gained through the use of AIX (with active memory expansion) and/or enterprise-class servers with or without IFLs. As such, this post makes a clear point that, in a worst-case scenario, scale-out Linux-only Power Systems are less expensive than x86. In a best-case scenario, the TCO, reliability and security advantages of enterprise-class Power Systems make the value proposition of IBM Power even more compelling.

For customers that have already made the move to Lintel, the message is clear. You moved for sound reasons based on economics, skills and standards. When it is time to refresh your app servers or add additional ones for growth or other purposes, those same reasons should drive you to choose IBM Power Systems for your app servers. Any customer that wishes to pursue such an option is welcome to contact me, your local IBM rep or an IBM business partner.

Footnotes:
1. Dell PowerEdge R730 – 2 Processors / 36 Cores / 72 Threads, 16,500 users, Red Hat Enterprise Linux 7, SAP ASE 16, SAP enhancement package 5 for SAP ERP 6.0, Intel Xeon Processor E5-2699 v3, 2.3 GHz, 262,144MB, Cert # 2014033, 9/10/2014

2. IBM Power System S824, 4 Processors / 24 Cores / 192 Threads, 21,212 users, AIX 7.1, DB2 10.5, SAP enhancement package 5 for SAP ERP 6.0, POWER8, 3.52 GHz, 524,288MB, Cert # 2014016, 4/28/2014

3. https://www-01.ibm.com/products/hardware/configurator/americas/bhui/flowAction.wss?_eventId=launchNIConfigSession&CONTROL_Model_BasePN=5462AC1&_flowExecutionKey=_cF5B38036-BD56-7C78-D1F7-C82B3E821957_k34676A10-590F-03C2-16B2-D9B5CE08DCC9
http://configure.us.dell.com/dellstore/config.aspx?c=us&cs=04&fb=1&l=en&model_id=poweredge-r730&oc=pe_r730_1356&s=bsd&vw=classic
http://h71016.www7.hp.com/MiddleFrame.asp?view=std&oi=E9CED&BEID=19701&SBLID=&AirTime=False&BaseId=45441&FamilyID=3852&ProductLineID=431

4. http://store.vmware.com/store/vmware/en_US/pd/productID.288070900&src=WWW_eBIZ_productpage_vSphere_Enterprise_Buy_US

January 23, 2015 | Uncategorized