SAPonPower

An ongoing discussion about SAP infrastructure

SAP increases support for HANA on Power, now up to 16 concurrent production VMs with IBM PowerVM

On March 1, 2019, SAP updated SAP Note 2230704 – SAP HANA on IBM Power Systems with multiple LPARs per physical host.  Previously, up to 8 concurrent HANA production VMs were supported on the Power E880 system with 16 sockets.  Now, the new POWER9-based E980, also with 16 sockets, is supported with up to 16 concurrent HANA production VMs.  As was the case prior to this update, each VM must have a minimum of 4 cores and 128GB of memory and can grow as large as 16TB for OLAP and 24TB for OLTP.  The maximum VM count is reduced by 1 if a shared pool is desired for one or more non-production HANA, other SAP or non-SAP workloads.  From an SAP perspective there is no restriction on the number of VMs that can run in a shared pool, but practical physical limits are usually hit before any PowerVM architectural limits.  CPU capacity not used by the production VMs may be shared, temporarily, with VMs in the shared pool using a technology called dedicated-donating, where the production VM, which owns the CPU capacity, may loan part of it to the shared pool and reclaim it immediately when needed for the production workload.
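To make those rules concrete, here is a minimal sketch (my own framing in Python, not any SAP or IBM tool) that checks a proposed landscape against the E980 limits summarized above:

```python
# Sketch (my own framing, not an SAP or IBM tool): check a proposed landscape
# against the 16-socket E980 limits described in SAP Note 2230704 as summarized above.

MAX_PROD_VMS = 16                # concurrent production HANA VMs on a 16-socket E980
MIN_CORES, MIN_MEM_GB = 4, 128   # per-VM minimums
MAX_MEM_GB = {"OLAP": 16 * 1024, "OLTP": 24 * 1024}

def validate_landscape(prod_vms, shared_pool=False):
    """prod_vms: list of (workload_type, cores, mem_gb). Returns a list of violations."""
    errors = []
    limit = MAX_PROD_VMS - (1 if shared_pool else 0)   # a shared pool costs one VM slot
    if len(prod_vms) > limit:
        errors.append(f"{len(prod_vms)} production VMs exceeds limit of {limit}")
    for i, (wtype, cores, mem_gb) in enumerate(prod_vms):
        if cores < MIN_CORES or mem_gb < MIN_MEM_GB:
            errors.append(f"VM {i}: below the 4-core/128GB minimum")
        if mem_gb > MAX_MEM_GB[wtype]:
            errors.append(f"VM {i}: exceeds the {wtype} memory cap")
    return errors

# A 12-instance landscape plus a shared pool fits comfortably:
print(validate_landscape([("OLTP", 8, 512)] * 12, shared_pool=True))  # -> []
```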

Most customers were quite happy with 8 concurrent VMs, so why should anyone care about 16?  Turns out, some customers have really complex landscapes.  I recently had discussions with a customer that has around 12 current and planned production HANA instances.  They were debating whether to use HANA in a multi-tenant configuration.  The problem is that all HANA tenants in a multi-tenant VM are tightly bound, i.e. when the VM, OS or SAP software needs to be updated or reconfigured, all tenants are affected simultaneously.  While not impossible to deal with, this introduces operational complexity.  If those same 12 instances were placed as separate VMs on a new POWER server, that operational complexity could be eliminated.

As much as this might benefit the edge customers with a large number of instances, it will really benefit cloud vendors that utilize Power, giving them greater flexibility, more sharing of resources and lower management and infrastructure costs.  Also, isolation between cloud clients is essential, so multi-tenancy is rarely an effective option.  PowerVM, on the other hand, offers very strong isolation, making it an excellent option for cloud providers even when different clients share the same infrastructure.

This announcement also closes a perceived gap: VMware could already run up to 16 concurrent VMs on an 8-socket system.  The caveat to that support was that, when running at the 16-VM level, the minimum and maximum size of each VM was ½ socket.  Of course, you could play the mix-and-match game with some VMs at the ½-socket level and others at the full or multi-socket level, but neither option provides very good granularity.  For systems with 28-core sockets, the granularity per VM is 14 cores, 28 cores and then multiples of 28 cores up to 112 cores.  For VMs configured at ½ socket, if there is no other workload to consume the other half of the socket, that capacity is simply wasted.  Memory, likewise, has granularity limitations.  According to VMware’s Best Practice Guide, “When an SAP HANA VM needs to be larger than a single NUMA node, allocate all resource of this NUMA node; and to maintain a high CPU cache hit ratio, never share a NUMA node for production use cases with other workloads. Also, avoid allocation of more memory than a single NUMA node has connected to it because this would force it to access memory that is not local to the SAP HANA processes scheduled on the NUMA node.”  In other words, any memory not consumed by the HANA VM(s) on a particular socket/node is simply wasted, since other nodes should not utilize memory across nodes.
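The granularity difference is easy to see in a quick sketch.  The numbers below (28-core sockets, half-socket minimum, 4-socket maximum, and PowerVM's 1-core production granularity) come from this post; the function names are mine:

```python
# Allowed per-VM core sizes under the two schemes discussed above.
# Figures from the text: 28-core sockets, half-socket minimum, whole-socket
# multiples up to 4 sockets for VMware; 1-core steps for PowerVM production HANA.

def vmware_vm_core_options(cores_per_socket=28, max_sockets=4):
    # Half a socket, then whole-socket multiples: the only supported sizes.
    return [cores_per_socket // 2] + [cores_per_socket * n
                                      for n in range(1, max_sockets + 1)]

def powervm_vm_core_options(total_cores=112):
    # PowerVM production HANA granularity: any whole number of cores.
    return list(range(1, total_cores + 1))

print(vmware_vm_core_options())        # [14, 28, 56, 84, 112]
print(len(powervm_vm_core_options()))  # 112 possible sizes over the same range
```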

By comparison, production HANA workloads running on PowerVM may be adjusted by 1 core at a time with memory granularity measured in MB, not GB or TB.

In an upcoming blog post, I will give some practical examples of landscapes and how PowerVM and VMware virtualization would utilize resources.

With this enhanced level of support from SAP, IBM Power Systems with PowerVM is once again the clear leader in terms of virtualization for HANA environments.

March 11, 2019 | Uncategorized

VMware pushes past 4TB SAP HANA limit

Excuse me while I yawn.  Why such a lack of enthusiasm you might ask?  Well, it is hard to know where to start.  First, the “new” limit of 6TB is only applicable to transactional workloads like SoH and S/4HANA.  BW workloads can now scale to an amazing size of, wait for it, 3TB.  And granularity is still largely at the socket level.  Let’s dissect this a bit.

VMware VMs at 6TB for OLTP/3TB for OLAP are just slightly behind IBM Power Systems’ current limits of 24TB OLTP/16TB OLAP … by a factor of 4 or more.  (Yes, I can divide: 16/3 = 5.3.)

But kudos to VMware. At least now customers with over 4TB but under 6TB requirements (including 3 to 5 years growth of course) can now consider VMware, right?

With this latest announcement (not really an announcement but an SAP note, which is pretty typical for SAP), VMware is now supported as a “real” virtualization solution for HANA.  Oh, how much they wish that was true.  VMware is now supported at two levels: min and max size of a “shared” socket at ½ of the socket’s capacity, or a min of 1 dedicated socket/max of 4 sockets with a granularity of 1 socket.  At ½ socket, the socket may be shared with another HANA production workload, but SAP suggests an overhead of 14% in this scenario, and it is not clear whether they mean 14% total or 14% in addition to the 10% incurred when using VMware in the first place.  Even at this level, the theoretical capacity is so small as to be of interest for only the very smallest demands.  At the dedicated socket level, VMware has achieved a fundamental breakthrough … physical partitioning.  Let me reach back through the way-back machine … way, way back … to the 1990s (oh, you Millennials, you have no idea how primitive life used to be), when some of us used to subdivide systems along infrastructure boundaries and thought we were doing something really special.  (HP figured out how to do this about 5 years later, and is still doing it today, but let’s give them credit for trying and not criticize them for being a little slow … after all, not everyone can be a C student.)

So, now, almost 30 years later, VMware is able to partition on physical boundaries of a socket for production HANA workloads.  That is so cool that if any of you are similarly impressed, I have this incredibly sophisticated device that will calculate a standard deviation with the press of a button (yes, Millennials, we used to have specialized tools to help us add and subtract called calculators which were a massive improvement over slide rules, so back off!   What, you have never heard of a slide rule … OMG!)

A little Power Systems 101 for HANA:  PowerVM (IBM’s hardware/firmware virtualization manager) can subdivide a physical system at the logical level of a core, not a physical socket, for HANA production workloads.  You can also add (or subtract) capacity in increments of 1 core for HANA production and increments of 1/20th of a core for non-prod and other workloads.  You can even add memory without cores, even if that memory is not physically attached to the socket on which that core resides.  But there is more.  During this special offer, only if you call in the next 1,000,000 minutes, at no extra charge, PowerVM will throw in the ability to move workloads and/or memory around as needed within a system or to another system in its cluster, share capacity unused by production workloads with a shared pool of VMs for application servers and non-prod, and even host a variety of non-HANA, non-SAP, Linux, AIX or IBM i workloads on the same physical system, with up to 64TB of memory shared amongst all workloads on a logical basis.

Shall we dive a little deeper into SAP’s support for HANA on VMware?  I think we shall!  So, we are all giddy about hosting a 6TB S/4 instance on VMware.  The HANA appliance spec for a 6TB instance on Skylake is 4 sockets @ 28 cores/socket with hyperthreading enabled.  A VMware 6.7 VM is supported with up to 128 virtual processors (vps).  4 x 28 x 2 = 224 vps.

I know, we have a small problem with math here.  128 is less than 224, which means either the math is wrong … or you can’t use all of the cores with VMware.  To be precise, with hyperthreading enabled you can only use 16 cores per socket, or about 57% of the cores used in the certified appliance test.  And that is before you consider the minimum 10% overhead noted in the SAP note.  So, we are asked to believe that 128 * 0.9 = 115 vps will perform as well as 224 vps.  As a career-long salesman from Louisiana, I have to say: please call me, because I know where I can get ahold of some prime real estate in the Atchafalaya Basin (you can look it up, but it is basically swamp land) to sell to you for a really good price.

Alternately, we can disable hyperthreading and use all 4 sockets for a single HANA DB workload.  Once again, the math gets in the way.  VMware estimates that hyperthreading increases core throughput by about 15%, so logically, disabling it has the opposite effect, and that 10% overhead still comes into play, meaning even more performance degradation: 0.85 * 0.9 ≈ 77% of the certified appliance capacity.
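For anyone who wants to check the arithmetic in the two scenarios above, here it is as a small sketch (the 15% hyperthreading uplift and 10% VMware overhead figures are the ones quoted in this post):

```python
# The vps arithmetic from the two scenarios above, using the figures quoted
# in the text: 4 sockets x 28 cores x 2 threads, a 128-vps VMware cap,
# ~15% throughput from hyperthreading, and a minimum 10% VMware overhead.

appliance_vps = 4 * 28 * 2          # 224 vps in the certified appliance config
vmware_vps_cap = 128

# Scenario 1: hyperthreading on, capped at 128 vps, minus the 10% overhead
ht_effective = vmware_vps_cap * 0.9             # ~115 vps vs the appliance's 224

# Scenario 2: hyperthreading off: lose ~15% throughput, then the 10% overhead
no_ht_fraction = 0.85 * 0.9                     # ~0.765 of appliance capacity

print(round(ht_effective), round(no_ht_fraction, 3))   # 115 0.765
```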

By the way, brand new in this updated SAP note is a security warning about using VMware.  I have to admit some surprise here, as I believe this is the first time SAP has recognized that using a virtualization solution on Intel systems comes with some degree of risk … despite the National Vulnerability Database showing 1009 hits when searching on VMware.  A similar search on PowerVM returns 0 results.

This new announcement does little to change the playing field.  Intel-based solutions using VMware to host production HANA environments will deliver stunted, physically partitioned systems with almost no sharing of resources other than perhaps some I/O, plus a nice hefty bill for software and maintenance from VMware.  In other words, as before, most customers will be confronted with a simple choice: choose bare-metal Intel systems for production HANA and use VMware only for non-prod servers which do not require the same stack as production, such as development or sandbox; or choose IBM Power Systems and fully exploit the server consolidation capabilities it offers, continuing the journey toward improved datacenter efficiency through virtualized infrastructure that most customers have been on since the early 2000s.

September 26, 2018 | Uncategorized

TDI Phase 5 – SAPS based sizing bringing better TCO to new and existing Power Systems customers

SAP made a fundamental and incredibly important announcement this week at SAP TechEd in Las Vegas: TDI Phase 5 – SAPS based sizing for HANA workloads.  Since its debut, HANA has been sized based on a strict memory to core ratio determined by SAP based on workloads and platform characteristics, e.g. generation of processor, MHz, interconnect technology, etc.  This might have made some sense in the early days when much was not known about the loads that customers were likely to experience and SAP still had high hopes for enabling all customer employees to become knowledge workers with direct access to analytics.  Over time, with very rare exception, it turned out that CPU loads were far lower than the ratios might have predicted.

I have only run into one customer in the past two years that was able to drive a high utilization of their HANA systems and that was a customer running an x86 BW implementation with an impressively high number of concurrent users at one point in their month.  Most customers have experienced just the opposite, consistently low utilization regardless of technology.

For many customers, especially those running x86 systems, this has not been an issue.  First, it is not a significant departure from what many have experienced for years, even those running VMware.  Second, to compensate for relatively low memory and socket-to-socket bandwidth combined with high latency interconnects, many x86 systems work best with an excess of CPU.  Third, many x86 vendors have focused on HANA appliances which are rarely utilized with virtualization and are therefore often single instance systems.

IBM Power Systems customers, by comparison, have been almost universal in their concern about poor utilization.  These customers have historically driven high utilization, often over 65%.  Power has up to 5 times the memory bandwidth per socket of x86 systems (without compromising reliability) and very wide and parallel interconnect paths with very low latencies.  HANA has never been offered as an appliance on Power Systems, instead being offered only using a Tailored Datacenter Infrastructure (TDI) approach.  As a result, customers view on-premise Power Systems as a sort of utility, i.e. that they should be able to use them as they see fit and drive as much workload through them as possible while maintaining the Service Level Agreements (SLA) that their end users require.  The idea of running a system at 5%, or even 25%, utilization is almost an affront to these customers, but that is what they have experienced with the memory to core restrictions previously in place.

IBM’s virtualization solution, PowerVM, enabled SAP customers to run multiple production workloads (up to 8 on the largest systems) or a mix of production workloads (up to 7) with a shared pool of CPU resources within which an almost unlimited mix of VMs could run including non-prod HANA, application servers, as well as non-SAP and even other OS workloads, e.g. AIX and IBM i.  In this mixed mode, some of the excess CPU resource not used by the production workloads could be utilized by the shared-pool workloads.  This helped drive up utilization somewhat, but not enough for many.

These customers would like to do what they have historically done.  They would like to negotiate response time agreements with their end user departments then size their systems to meet those agreements and resize if they need more capacity or end up with too much capacity.

The newly released TDI Overview document (http://bit.ly/2fLRFPb) describes the new methodology: “SAP HANA quicksizer and SAP HANA sizing reports have been enhanced to provide separate CPU and RAM sizing results in SAPS”.  I was able to verify Quicksizer showing SAPS, but not the sizing reports.  An SAP expert I ran into at TechEd suggested that getting the sizing reports to determine SAPS would be a tall order, since they would have to include a database of SAPS capacity for every system on the market as well as the number of cores and MHz of each one.  (In a separate blog post, I will share how IBM can help customers calculate utilized SAPS on existing systems.)  Customers are instructed to work with their hardware partner to determine the number of cores required based on the SAPS projected above.  The document goes on to state: “The resulting HANA TDI configurations will extend the choice of HANA system sizes; and customers with less CPU intensive workloads may have bigger main memory capacity compared to SAP HANA appliance based solutions using fixed core to memory sizing approach (that’s more geared towards delivery of optimal performance for any type of a workload).”

Using a SAPS-based methodology will be a good start and may result in fewer cores required for the same workload than would previously have been calculated from a memory/core ratio.  Customers that wish to allocate more or less CPU to those workloads will now have this option, meaning that even more significant reductions in CPU may be possible.  This will likely result in much more efficient use of CPU resources, more capacity available to other workloads and/or the ability to size systems with fewer resources to drive down their cost.  Either way, TCO improves by reducing the number and size of systems along with the associated datacenter and personnel costs.

Existing Power customers will undoubtedly be delighted by this news.  Those customers will be able to start experimenting with different core allocations and most will find they are able to decrease their current HANA VM sizes substantially.  With the resources no longer required to support production, other workloads currently implemented on external systems may be consolidated to the newly, right sized, system.  Application servers, central services, Hadoop, HPC, AI, etc. are candidates to be consolidated in this way.

Here is a very simple example:  A hypothetical customer has two production workloads, BW/4HANA and S/4HANA, which require 4TB and 3TB respectively.  For each, HA is required, as are Dev/Test, Sandbox and QA.  Prior to TDI Phase 5, using Power Systems, the 4TB BW system would require roughly 82 cores due to the 50GB/core ratio and the S/4 workload roughly 32 cores due to the 96GB/core ratio.  Including HA and non-prod, the systems might look something like:

TDI Phase 4

Note the relatively small number of cores available in the shared pool (might be less than optimal) and the total number of cores in the system. Some customers may have elected to increase to an even larger system or utilize additional systems as a result.  As this stood, this was already a pretty compelling TCO and consolidation story to customers.
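For those who want to reproduce the Phase 4 core counts, the arithmetic is a straight memory/core division (ratios as quoted above; the rounding used by actual sizing tools may differ slightly):

```python
import math

# Core counts implied by the pre-TDI-Phase-5 memory/core ratios quoted above:
# 50GB/core for BW and 96GB/core for S/4 on this Power generation.

def cores_for(mem_gb, gb_per_core):
    # Round up: you cannot allocate a fraction of the required capacity.
    return math.ceil(mem_gb / gb_per_core)

bw_cores = cores_for(4 * 1024, 50)   # 4TB BW  -> 82 cores
s4_cores = cores_for(3 * 1024, 96)   # 3TB S/4 -> 32 cores
print(bw_cores, s4_cores)            # 82 32
```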

With SAPS-based sizing, the BW workload may require only 70 cores and S/4 only 21 cores (both are guesses based on early sizing examples; proper analysis of the SAP sizing reports and the per-core SAPS ratings of the servers is required to determine actual core requirements).  The resulting architecture could look like:

TDI Phase 5 est

Note the smaller core count in each system.  By switching to this methodology, lower-cost CPU sockets may be employed and processor activation costs decreased by 24 cores per system.  But the number of cores in the shared pool remains the same, so this could still be improved a bit.

During a landscape session at SAP TechEd in Las Vegas, an SAP expert stated that customers will be responsible for performance, and CPU allocation will not be enforced by SAP through HWCCT as it has been in the past.  This means that customers will be able to determine the number of cores to allocate to their various instances.  It is conceivable that some customers will find that instead of the 70 cores in the above example, 60, 50 or fewer cores may suffice for BW, with decreased requirements for S/4HANA as well.  A customer taking this more aggressive, hypothetical approach might see the following:

TDI Phase 5 hyp

Note how the number of cores in the shared pool has increased substantially, allowing more workloads to be consolidated onto these systems.  This further decreases costs by eliminating those external systems, consolidating more SAN and network cards, decreasing computer room space and reducing energy/cooling requirements.

A reasonable question is whether these same savings would accrue to an x86 implementation.  The answer is: not necessarily.  Yes, fewer cores would also be required, but to take advantage of a similar type of consolidation, VMware must be employed.  And if VMware is used, then a host of caveats must be taken into consideration:

1) Overhead, reportedly 12% or more, must be added to the capacity requirements.

2) I/O throughput must be tested to ensure load times, log writes, savepoints, snapshots and backup speeds are acceptable to the business.

3) Limits must be understood, e.g. max memory in a VM is 4TB, which means the 4TB BW instance above cannot grow by even 1KB.

4) Socket isolation is required, as SAP does not permit the sharing of a socket in a HANA production/VMware environment, meaning that reducing core requirements may not result in fewer sockets, i.e. this may not eliminate underutilized cores in an Intel/VMware system.

5) Non-prod workloads can’t take advantage of capacity not used by production, in part because SAP does not permit sharing of sockets between prod and non-prod VMs, not to mention the reluctance of many customers to mix prod and non-prod under a software hypervisor such as VMware even if SAP permitted it.

Bottom line: most customers, through an abundance of caution or actual experience with VMware, choose to place production on bare metal and non-prod, which does not require the same stack as prod, on VMware.  Workloads which do require the same stack as prod, e.g. QA, are also usually placed on bare metal.  After closer evaluation, this means that TDI Phase 5 will have limited benefits for x86 customers.

This announcement is the equivalent of finally being allowed to use 5th gear on your car after having been limited to only 4 for a long time.  HANA on IBM Power Systems already had the fastest adoption in recent SAP history with roughly 950 customers selecting HANA on Power in just 2 years. TDI Phase 5 uniquely benefits Power Systems customers which will continue the acceleration of HANA on Power.  Those individuals that recommended or made decisions to select HANA on Power will look like geniuses to their CFOs as they will now get the equivalent of new systems capacity at no cost.

September 29, 2017 | Uncategorized

Update – SAP HANA support for VMware, IBM Power Systems and new customer testimonials

The week before Sapphire, SAP unveiled a number of significant enhancements.  VMware 6.0 is now supported for a production VM (notice the lack of a plural); more on that below.  Hybris Commerce and a number of apps surrounding SoH and S/4HANA are now supported on IBM Power Systems.  Yes, you read that right.  The Holy Grail of SAP, S/4 (or more specifically, 1511 FPS 02), is now supported on HoP.  Details, as always, can be found in SAP note: 2218464 – Supported products when running SAP HANA on IBM Power Systems.  The importance of this announcement, or should I say non-announcement, lies in the fact that the above SAP note, which I watch on an almost daily basis because it changes so often, was the only place where it was mentioned.  This is not a dig at SAP; this is their characteristic way of releasing updates on availability of previously suggested intentions, and it is consistent with how they non-announced VMware 6.0 support as well.  Hasso Plattner and various SAP executives and employees, in Sapphire keynotes and other sessions, mentioned support for IBM Power Systems in an almost nonchalant manner, clearly demonstrating that HANA on Power has moved from being a niche product to mainstream.

Also of note, Pfizer delivered an ASUG session during Sapphire including significant discussion of their use of IBM Power Systems.  I was particularly struck by how Joe Caruso, Director, ERP Technical Architecture at Pfizer, described how Pfizer tested a large BW environment on both a single scale-up Power System with 50 cores and 5TB of memory and a 6-node x86 scale-out cluster (tested with two different vendors, not named in the session, though probably not critical as their performance differences were negligible), with 60 cores on each node: 1 master node and 4 worker nodes plus a hot-standby.  After appropriate tuning, including utilizing table partitioning on all systems, including Power, the results were pretty astounding: both environments performed almost identically, executing Pfizer’s sample set of 75+ queries in 5.7 seconds, an impressive 6 to 1 performance advantage for Power on a per-core basis, not counting the hot-standby node.  What makes this incredible is that the official BW-EML benchmark shows an advantage of only 1.8 to 1 vs. the best-of-breed x86 competitor, and another set of BW-EML benchmark results published by another x86 competitor shows scale-out to be only 15% slower than scale-up.  For anyone that has studied the Power architecture, especially POWER8, you probably know that it has intrinsics suggesting it should handle mixed, complex and very large workloads far better than x86, but it takes a customer executing against their real data with their own queries to show what this platform can really do.  Consider benchmarks to be the rough equivalent of a NASCAR race car, with the best of engineering, mechanics and analytics, vs. customer workloads, which, in this analogy, involve transporting varied precious cargo in traffic, on the highway and on sub-par road conditions.
Pfizer decided that the performance demonstrated in this PoC was compelling enough to substantiate their decision to implement using IBM Power Systems, with an expected go-live later this year.  Also of interest, Pfizer evaluated the reliability characteristics of Power, based in part on their use of Power Systems for conventional database systems over the past few years, and decided that a hot-standby node for Power was unnecessary, further improving the overall TCO for their BW project.  I had not previously considered this option, but it makes sense considering how rarely Power Systems are unable to handle predictable, or even unpredictable, faults without interrupting running workloads.  Add to this that, for many, the loss of an analytical environment is unlikely to result in significant economic loss.

Also in a Sapphire session, Steve Parker, Director Application Development, Kennametal, shared a very interesting story about their journey to HANA on Power.  Though they encountered quite a few challenges along the way, not the least being that they started down the path to Suite on HANA and S/4HANA prior to it being officially supported by SAP, they found the Power platform to be highly stable and its flexibility was of critical importance to them.  Very impressively, they reduced response times compared to their old database, Oracle, by 60% and reduced the run-time of a critical daily report from 4.5 hours to just 45 minutes, an 83% improvement and month end batch now completes 33% faster than before.  Kennametal was kind enough to participate in a video, available on YouTube at: https://www.youtube.com/watch?v=8sHDBFTBhuk as well as a write up on their experience at: http://www-03.ibm.com/software/businesscasestudies/us/en/gicss67sap?synkey=W626308J29266Y50.

As I mentioned earlier, SAP snuck in a non-announcement about VMware the week prior to Sapphire: a single production VM is now supported with VMware 6.0.  SAP note 2315348 describes how a customer may run a single SAP HANA VM on VMware vSphere 6 in production.  One might reasonably question why anyone would want to do this.  I will withhold any observations on the mindset of such an individual and instead focus on what is, and is not, possible with this support.  What is not possible: running multiple production VMs on a system or mixing production and non-prod.  What is possible: utilizing up to 128 virtual processors and 4TB of memory for a production VM, utilizing vMotion and DRS for that VM, and delivering DRAMATICALLY worse performance than would be possible with a bare-metal 4TB system.  Why?  Because 128 vps with hyperthreading enabled (which just about everyone uses) maps to 64 cores.  To support 6TB today, a bare-metal Haswell-EX system with 144 cores is required; extrapolating that requirement to 4TB gives 96 cores.  Remember, SAP previously noted a minimum overhead of 12% for a VM vs. bare metal, i.e. those 64 cores under VMware 6.0 would operate, at best, like 56 cores on bare metal, or 42% less capacity than required.  Add to this the fact that you can’t recover any capacity left over on that system and you are left with a hobbled HANA VM and lots of leftover CPU resources.  So, vMotion is the only thing of real value to be gained?  Isn’t HANA System Replication with a controlled failover a much more viable way of moving from one system to another?  Even if vMotion might be preferred, does vMotion move memory pages from source to target system using the EXACT same layout as on the source system?  I suspect the answer is no, as vMotion is designed to work even if other VMs are currently running on the target system, i.e. it will fill memory pages based on availability, not location.  As a result, all of the wonderful CPU/memory affinity that HANA so carefully established on the source system would likely be lost, with a potentially huge impact on performance.
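The capacity arithmetic in that paragraph can be verified in a few lines (figures from this post: a 128-vps cap, 2 threads per core, 144 cores for 6TB bare metal, and a 12% minimum VMware overhead):

```python
# Capacity arithmetic for the single-VM VMware 6.0 scenario described above,
# using the figures quoted in the text.

vps_cap = 128
cores_used = vps_cap // 2                    # hyperthreading: 2 vps per core -> 64 cores
bare_metal_cores_4tb = 144 * 4 / 6           # extrapolate 6TB @ 144 cores -> 4TB @ 96

effective_cores = round(cores_used * (1 - 0.12))   # 12% overhead -> ~56 core-equivalents
shortfall = 1 - effective_cores / bare_metal_cores_4tb

print(effective_cores, f"{shortfall:.0%}")   # 56 42%
```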

So, to summarize, this new VMware 6.0 support promises bad performance, incredibly poor utilization in return for the potential to not use System Replication and suffer even more performance degradation upon movement of a VM from one system to another using vMotion.  Sounds awesome but now I understand why no one at the VMware booth at Sapphire was popping Champagne or in a celebratory mood.  (Ok, I just made that up as I did not exactly sit and stare at their booth.)

May 26, 2016 | Uncategorized

SAP support for multiple production HANA VMs with VMware

Recently, SAP updated their SAP notes regarding the ability to run multiple production HANA VMs with VMware.  On the surface, this sounds like VMware has achieved parity with IBM’s PowerVM, but the reality could not be much farther away from that perception.  This is not to say that users of VMware for HANA will see no improvement.  For a few customers, this will be a good option, but as always, the devil is in the details and, as always, I will play the part of the devil.

Level of VMware supported: 5.5 … still.  VMware 6.0 is supported only for non-production.[i]  If VMware 6.0 is so wonderful and they are such “great” partners with SAP, it seems awfully curious why a product announced on Feb 3, 2015 is still not supported by SAP.

Maximum size of each production HANA instance: 64 virtual processors and 1TB of memory; however, this translates to 32 physical processors with hyperthreading enabled, and sizing guidelines must still be followed.  Currently, BW HANA is sized @ 2TB for 4 Haswell chips (72 cores), i.e. 28.4GB/core, which translates to a maximum size of 910GB for a 32-core/64-vp VM, so slightly less than the 1TB supported.  Suite on HANA on Intel is supported at a 1.5x higher memory ratio than BW, but since the size of the VM is limited to 1TB, this point is largely moot.
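A quick sketch of where the 28.4GB/core and 910GB figures come from (assuming 18-core Haswell chips, consistent with the 18-core socket mentioned in the notes quoted below):

```python
# The BW sizing arithmetic from the paragraph above: a 2TB certified config
# on 4 Haswell chips (assumed 18 cores each), and the 64-vp / 32-core VM cap.

cores_certified = 4 * 18                       # 72 cores in the certified config
gb_per_core = 2 * 1024 / cores_certified       # ~28.4 GB/core
max_vm_gb = 32 * gb_per_core                   # a 32-core VM -> ~910GB, under the 1TB cap

print(round(gb_per_core, 1), round(max_vm_gb))   # 28.4 910
```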

Performance impact: At a minimum, SAP estimates a 12% performance degradation compared to bare metal (upon which most benchmarks are run and from which most sizings are derived), so one would logically conclude that the memory/CPU ratio should be reduced by the same amount.  The 12% performance impact, but not the reduced sizing effect that I believe should result from it, is detailed in an SAP note.[ii]  That note goes on to state: “However, there are around 100 low-level performance tests in the test suite exercising various HANA kernel components that exhibit a performance degradation of more than 12%. This indicates that there are particular scenarios which might not be suited for HANA on VMware.”  Only 100?  When you consider that the only like-for-like published benchmarks using VMware and HANA[iii] showed a 12% degradation (coincidence? I think not) for a single-VM HANA system vs. bare metal, it leaves one to wonder what sort of degradation might occur in a multiple-VM HANA environment.  No guidance is provided on this, which should make anyone other than a bleeding-edge customer with no regard for SLAs VERY cautious.  Another SAP note[iv] states: “For optimal VM performance, VMware recommends to size the VMs within the NUMA node boundaries of the specific server system (CPU cores and local NUMA node memory).”  How much impact?  Not provided here.  So, either you size your VMs to fit within NUMA building blocks, i.e. a single 18-core socket, or you suffer an undefined performance penalty.  It is also interesting to note what VMware said in Performance Best Practices for VMware vSphere® 5.5: “Be careful when using CPU affinity on systems with hyper-threading.  Because the two logical processors share most of the processor resources, pinning vCPUs, whether from different virtual machines or from a single SMP virtual machine, to both logical processors on one core (CPUs 0 and 1, for example) could cause poor performance.”  That certainly gives me the warm and fuzzy!
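If one accepts my argument that sizing should shrink along with throughput, the effect looks roughly like this.  The 12% figure is SAP's stated minimum; the reduced-sizing step is my inference, not anything SAP publishes, and the numbers are illustrative.

```python
# Sketch of the sizing argument above: if VMware costs at least 12% of
# bare-metal throughput, a consistent sizing would shave the memory/core
# ratio by the same factor. The reduced-sizing step is my inference (as
# argued above), not an SAP-published rule; numbers are illustrative.
bare_metal_gb_per_core = 28.4        # BW sizing from the paragraph above
vmware_penalty = 0.12                # SAP's stated minimum degradation

effective_gb_per_core = bare_metal_gb_per_core * (1 - vmware_penalty)
effective_max_vm_gb = 32 * effective_gb_per_core   # on a 32-core/64-vp VM

print(round(effective_gb_per_core, 1), round(effective_max_vm_gb))  # 25.0 800
```

In other words, a consistently sized 64-vp VM would top out around 800GB for BW, not 910GB.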

Multiple VM support: Yes, you can now run multiple production HANA VMs on a system[v].  HOWEVER, “The vCPUs of a single VM must be pinned to physical cores, so the CPU cores of a socket get exclusively used by only one single VM. A single VM may span more than one socket, however. CPU and Memory overcommitting must not be used.”  This is NOT the value of virtualization but of physical partitioning, a wonderful technology if we were living in the 1990s.  So, if you have an 8-socket system, you can run up to 8 simultaneous production VMs as long as all VMs are smaller than 511GB for BW or 767GB for SoH.  Need 600GB for BW?  Well, that will cost you a second full socket even though you only need a few of its cores, thereby reducing the maximum number of VMs you can support on the system.  And this is before we take the 12% performance impact detailed above into consideration, which could further limit the memory per core and the number of VMs supported.
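The socket-granularity effect reduces to a ceiling function.  The 511GB/767GB per-socket figures are those quoted above; `sockets_needed` is an invented helper name, purely for illustration.

```python
import math

# Sketch of the socket-granularity effect described above: since a socket
# may serve only one production VM, a VM's socket cost is
# ceil(memory / per-socket limit), no matter how few cores it needs.
# 511GB (BW) and 767GB (SoH) per socket are the figures quoted above.

def sockets_needed(vm_gb, gb_per_socket):
    return math.ceil(vm_gb / gb_per_socket)

print(sockets_needed(600, 511))   # 2: a 600GB BW VM consumes two sockets
print(sockets_needed(500, 511))   # 1
# On an 8-socket box, that 600GB BW VM leaves room for at most 6 more VMs.
```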

Support for mixed production HANA VMs and non-production: Not included in any of the above-mentioned SAP notes.  One can infer from those notes that this is not permitted, meaning that there is no way to harvest unused cycles from production for the use of ANY other workload, whether non-prod or a non-HANA DB.

Problem resolution: SAP Note 1995460 details the process by which problems may be resolved, and while they are guided via SAP’s OSS system, there is a transfer of ownership of problems when a known VMware-related fix is not available.  The exact words are: “For all other performance related issues, the customer will be referred within SAP’s OSS system to VMware for support. VMware will take ownership and work with SAP HANA HW/OS partner, SAP and the customer to identify the root cause. Due to the abstraction of hardware that occurs when using virtualization, some hardware details are not directly available to SAP HANA support.”  And, of a little more concern: “SAP support may request that additional details be gathered by the customer or the SAP HANA HW partner to help with troubleshooting issues or VMware to reproduce the issue on SAP HANA running in a bare metal environment.”

My summary of the above: One or more production HANA instances may be run under VMware 5.5 with a maximum of 64 vp/32 pp (assuming Hyperthreading is enabled) and a minimum of 12% performance degradation with a potentially proportionate impact on sizing, with guidance that particular scenarios “might not be suited for HANA on VMware”, with potential performance issues when VMs cross socket boundaries, with physical partitioning at the socket level, no sharing of CPU resources, no support for running non-production on the same system to harvest unused cycles and a potential requirement to reproduce issues on a bare metal system if necessary.

Yes, that was a long, run-on sentence.  But it raises the question of just when VMware would be a good choice for hosting one or more production HANA instances.  My take is that unless you have very small instances which are unsuitable for HANA MDC (multitenancy), or you are a cloud provider for very small companies, there is simply no value in this solution.  For those potential cloud providers, the target customer set would include companies with very small HANA requirements and a willingness to accept an SLA that is very flexible on performance targets while using a shared infrastructure in which a problem in one VM could cause the whole system to fail, impacting multiple customers simultaneously.

And in case anyone is concerned that I am simply the bearer of bad news, let me remind the reader that IBM Power Systems with PowerVM is supported by SAP with up to 4 production HANA VMs (on the E870 and E880; 3 on all other HANA-supported Power Systems) with granularity at the core level, no restrictions on NUMA boundaries, the ability to have a shared pool in place of one of the above production VMs with any number of non-production HANA VMs, up to the limits of PowerVM, which can utilize unused cycles from the production VMs, no performance penalties, no caveats about what types of workloads are well suited for PowerVM, excellent partition isolation preventing the vast majority of problems that could occur in one VM from affecting any other and no problem-resolution handoffs or ownership changes.

In other words, if customers want to continue the journey of virtualization and server consolidation that they started in the early 2000s and want a very flexible infrastructure which can grow as they move to SoH, shrink as they move to S/4, grow as they consolidate more workloads into their primary instance and shrink again as they roll off data using data tiering, data aging or perhaps Hadoop, all without having to take significant system outages or throw away investment and purchase additional systems, IBM Power Systems with PowerVM can support this; VMware cannot.


———————

[i] 1788665 – SAP HANA Support for virtualized / partitioned (multi-tenant) environments

[ii] 1995460 – Single SAP HANA VM on VMware vSphere in production

[iii] Benchmark detail for bare metal and VMware 5.5 based runs from http://global.sap.com/solutions/benchmark/bweml-results.htm:

06/02/2014 HP 2,000,000,000 111,850 SuSE Linux Enterprise Server 11 on VMWARE ESX 5.5 SAP HANA 1.0 SAP NetWeaver 7.30 1 database server: HP DL580 Gen8, 4 processors / 60 cores / 120 threads, Intel Xeon Processor E7-4880 v2, 2.50 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 30 MB L3 cache per processor, 1024 GB main memory

03/26/2014 HP 2,000,000,000 126,980 SuSE Linux Enterprise Server 11 SAP HANA 1.0 SAP NetWeaver 7.30 1 database server: HP DL580 Gen8, 4 processors / 60 cores / 120 threads, Intel Xeon Processor E7-4880 v2, 2.50 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 30 MB L3 cache per processor, 1024 GB main memory

[iv] 2161991 – VMware vSphere configuration guidelines

[v] 2024433 – Multiple SAP HANA VMs on VMware vSphere in production

April 5, 2016 | Posted in Uncategorized | 4 Comments

Rebuttal to “Why choose x86” for SAP blog posting

I was intrigued by a recent blog post entitled “Part 1: SAP on VMware: Why choose x86” (https://communities.vmware.com/blogs/walkonblock/2014/02/06/part-1-sap-on-vmware-why-choose-x86).  I will get to the credibility of the author in just a moment.  First, however, I felt it might be interesting to review the points that were made and discuss them, point by point.

  1. No Vendor Lock-in: “When it comes to x86 world, there is no vendor lock-in as you can use any vendor and any make and model as per your requirements”.  Interesting that the author did not discuss the vendor lock-in on chip, firmware or hypervisor.  Intel or, to a very minor degree, AMD is required for all x86 systems.  This would be like being able to choose any car as long as the engine was manufactured by Toyota (a very capable manufacturer, but one with a lock on the industry might not offer the best price or innovation).  As any customer knows, each x86 system has its own unique BIOS and/or firmware.  Sure, you can switch from one vendor to another or add a second vendor, but lacking proper QA, training and potentially different operational procedures, this can result in problems.  And then there is the hypervisor, with VMware clearly the preference of the author, as it is for most SAP x86 virtualization customers.  No lock-in there?

SAP certifies multiple different OS and hypervisor environments for their code.  Customers can utilize one or more at any given time.  As all logic is written in 3rd and 4th GL languages, i.e. ABAP and JAVA, and is contained within the DB server, customers can move from one OS, HW platform and/or hypervisor to another and only have to, wait for it, do proper QA, training and modify operational procedures as appropriate.  So, SAP has removed lock-in regardless of OS, HW or hypervisor.

Likewise, Oracle, DB2 and Sybase support most OS’s, HW and hypervisors (with some restrictions).  Yes, a migration is required for movement between dissimilar stacks, but this could be said for moving from Windows to Linux and any move between different stacks still requires all migration activities to be completed with the potential exception of data movement when you “simply” change the HW vendor.

  2. Lower hardware & maintenance costs: “x86 servers are far cheaper than non-x86 servers. This also includes the ongoing annual maintenance costs (AMC) as well.”  Funny, however, that the author only compared HW and maintenance costs and conveniently forgot about OS and hypervisor costs.  Also interesting that the author forgot about utilization of systems.  If one system is ½ the cost of another but you can only drive, effectively, ½ the workload, then the cost is the same per unit of work.  Industry analysts have suggested that 45% utilization is the maximum to be sustained by VMware SAP systems, with most seeing far less.  By the same token, those analysts say that 85% or higher is to be expected of Power Systems.  Also interesting to note that the author did not say which systems were being compared, as new systems and options from IBM Power Systems offer close to price parity with x86 systems when HW, OS, hypervisor and 3 years of maintenance are included.
  3. Better performance: “Some of the models of x86 servers can actually out-perform the non-x86 servers in various forms.”  Itanium is one of the examples cited, which is a no-duh for anyone watching published benchmarks.  The other example is a Gartner paper sponsored by Intel which does not quote a single SAP benchmark.  Too bad, since the author suggested this was a discussion of SAP.  Last I checked (today, 2/10/14), IBM Power Systems can deliver almost 5 times the SAPS performance of the largest x86 server (as measured by the 2-tier SD benchmark).  On a SAPS/core basis, Power delivers almost 30% more SAPS/core compared to Windows systems and almost 60% more than Linux/x86 systems.  Likewise, on the 3-tier benchmark, the latest Power result is almost 4.5 times that of the latest x86 result.  So much for point 3.
  4. Choice of OS: “You have choice of using any OS of your choice and not forced to choose a specific OS.”  Yes, it really sucks that with Power you are forced to choose AIX … or IBM i for Business … or SUSE Linux … or Red Hat Linux, which is so much worse than being forced to choose Microsoft Windows … or Oracle Solaris … or SUSE Linux … or Red Hat Linux.
  5. Disaster Recovery: “You can use any type of hardware, make and model when it comes to disaster recovery (DR). You don’t need to maintain hardware from same vendor.”  Oh, really?  First, I have not met any customers that use one stack for production and a totally different one for DR, but that is not to say it can’t be done.  Second, remember the discussion about BIOS and firmware?  There can be different patches, prerequisites and workarounds for different stacks.  Few customers want to spend all of the money they “saved” on a separate QA cycle for DR.  Even fewer want to take a chance on DR not working when they can least afford it, i.e. when there is a disaster.  Interestingly, Power actually supports this better than x86, as the stack is identical regardless of which generation, model or MHz is used.  You can even run in Power6 mode on a Power7+ server, further enabling complete compatibility regardless of chip type, meaning you can use older systems in DR to back up brand new systems in production.
  6. Unprecedented scalability: “You can now scale the x86 servers the way you want, TB’s of RAM’s , more than 64 cores etc is very much possible/available in x86 environment.”  Yes, any way that you want as long as you don’t need more capacity than is available with the current 80-core systems.  Any way that you want as long as you are not running with VMware, which limits partitions to 128 threads, which equates to 64 cores.  Any way that you want except that VMware suggests you contain partitions within a NUMA block, which means a max of 40 cores (http://blogs.vmware.com/apps/sap).  Any way that you want as long as you recognize that VMware partitions are further limited in terms of scalability, which results in an effective limit of 32 threads/16 cores, as I have discussed previously in this blog.
  7. Support from Implementation Vendor: “If you check with your implementation vendor/partner, you will find that almost all of them can certify/support implementation of SAP on x86 environment. The same is the case if you are thinking about migrating from non-x86 to x86 world.”  No clue what point is being made here, as all vendors support SAP on all of their supported systems and OSs.
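The utilization argument in the “Lower hardware & maintenance costs” point above can be sketched as cost per unit of sustained work.  The 45%/85% sustained-utilization figures are the analyst numbers quoted above; the prices and SAPS capacities below are invented purely for illustration.

```python
# Sketch of the cost-per-unit-of-work argument: a server that costs half
# as much but sustains roughly half the utilization delivers about the
# same cost per unit of useful work. The 45% / 85% utilization figures
# are the analyst numbers quoted above; prices and SAPS are made up.

def cost_per_work(price, capacity_saps, sustained_util):
    return price / (capacity_saps * sustained_util)

x86   = cost_per_work(price=100_000, capacity_saps=100_000, sustained_util=0.45)
power = cost_per_work(price=170_000, capacity_saps=100_000, sustained_util=0.85)
print(round(x86, 3), round(power, 3))   # 2.222 2.0
```

On these made-up numbers, the system with a 1.7x higher list price comes out slightly cheaper per unit of sustained work, which is the point the list-price-only comparison misses.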

The author referred to my blog as part of the proof of his/her theories, which is the only reason I noticed this blog in the first place.  The author describes him/herself as “Working with Channel Presales of an MNC”.  Interesting that he/she hides behind “MNC”, because the “MNC” that I work for believes that transparency and honesty are required in all internet postings.  That said, the author writes about nothing but VMware, so you will have to draw your own conclusions as to where this individual works or with which “MNC” his/her biases lie.

The author, in the reference to my posting, completely misunderstood the point I made regarding the use of 2-tier SAP benchmark data in projecting the requirements of database-only workloads and apparently did not even read the “about me” page which shows up by default when you open my blog.  I do not work for SAP, and nothing that I say can be considered to represent them in any way.

Fundamentally, the author’s bottom-line comment, “x86 delivers compelling total cost of ownership (TCO) while considering SAP on x86 environment”, is neither supported by the facts that he/she shared nor by those shared by others.  IBM Power Systems continues to offer very competitive costs with significantly superior operational characteristics for SAP and non-SAP customers.

February 17, 2014 | Posted in Uncategorized | 2 Comments

Is VMware ready for HANA production?

Virtualizing SAP HANA with VMware in productive environments is not supported at this time but, according to Arne Arnold of SAP in his blog post of Nov 5 (http://www.saphana.com/community/blogs/blog/2013/11/05/just-a-matter-of-time-sap-hana-virtualized-in-production), they are working hard in that direction.

Clearly, memory can be assigned to individual partitions with VMware, and to a more limited extent CPU resources may also be assigned, although the latter may be somewhat less effective.  The issues that SAP will have to overcome, however, are inherent limitations in the scalability of VMware partitions, I/O latency and potential contention between partitions for CPU resources.

As I discussed in my blog post late last year, https://saponpower.wordpress.com/2012/10/23/sap-performance-report-sponsored-by-hp-intel-and-vmware-shows-startling-results/, VMware 5.0 was proven, by VMware, HP and Intel, to have severe performance limitations and scalability constraints.  In the lab test, a single partition achieved only 62.5% scalability overall, but what is more startling is the scalability between each measured interval.  From 4 to 8 threads, they were able to double the number of users, thereby demonstrating 100% scalability, which is excellent.  From 8 to 16 threads, they were only able to handle 66.7% more users despite doubling the number of threads.  From 16 to 32 threads, the number of users supported increased only 50%.  Since that study was published, VMware has released vSphere 5.1 with an architected limit of 64 threads per partition and 5.5 with an architected limit of 128 threads per partition.  Notice my careful wording of architected limit, not the official VMware wording of “scalability”.  Scaling implies that with each additional thread, additional work can be accomplished.  Linear scaling implies that each time you double the number of threads, you can accomplish twice the amount of work.  Clearly, vSphere 5.0 was unable to attain anything close to linear scaling.  But now, with the increased number of threads supported, can they achieve more work?  Unfortunately, there are no SAP proof points to answer this question.  All that we can do is extrapolate from their earlier published results, assuming the only change is the architected thread limit.
If we use the straightforward Microsoft Excel “trendline” function to project results using a polynomial of order 2 (no, it has been way too long since I took statistics in college to explain what this means, but I trust Microsoft (lol)), we see that a VMware partition is unlikely to ever achieve much more throughput, without a major change in the VMware kernel, than it achieved with only 32 threads.  Here is a graph that I was able to create in Excel using the data points from the above white paper.
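For anyone who wants to reproduce the extrapolation without Excel, here is a sketch using an order-2 polynomial fit.  The user counts are reconstructed from the interval scaling ratios discussed above, starting from 600 users at 4 threads in the ESG paper, so treat them as illustrative rather than measured values.

```python
import numpy as np

# Reconstructed data: 600 users at 4 threads, then the 100% / 66.7% / 50%
# interval scaling described above. Illustrative points, not measurements.
threads = np.array([4, 8, 16, 32])
users   = np.array([600, 1200, 2000, 3000])

coeffs = np.polyfit(threads, users, 2)   # order-2 polynomial, as in Excel
trend = np.poly1d(coeffs)

peak_threads = -coeffs[1] / (2 * coeffs[0])   # vertex of the parabola
print(round(peak_threads))                    # 39
print(trend(64) < trend(32))                  # True: 64 threads projects LOWER
```

The fitted curve peaks around 39 threads and turns down, which is the basis for the claim that adding vCPUs beyond roughly 32 buys no additional throughput.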

VMware scalability curve

Remember, at 32 threads, with Intel Hyperthreading, this represents only 16 cores.  As a 1TB BW HANA system requires 80 cores, it is rather difficult to imagine how a VMware partition could ever handle this sort of workload, much less how it would respond to larger workloads.  Remember, 1TB = 512GB of data space which, at a 4 to 1 compression ratio, equals 2TB of data.  VMware looks more and more inadequate as data size increases.

And if a customer were misled enough by VMware or one of their resellers, they might think that using VMware in non-prod was a good idea.  Has SAP or a reputable consultant ever recommended using one architecture and stack in non-prod and a completely different one in prod?

So, in which case would virtualizing HANA be a good idea?  As far as I can tell, only if you are dealing with very small HANA databases.  How small?  Let’s do the math: assuming linear scalability (which we have already shown above is not even close to what VMware can achieve), 32 threads = 16 cores, which is only 20% of the capacity of an 80-core system.  20% of 2TB = 400GB of uncompressed data.  At the 62.5% scalability described above, this would diminish further to 250GB.  There may be some side-car applications for which a large enterprise might replicate only 250GB of data, but do you really want to size for the absolute maximum throughput and have no room for growth other than chucking the entire system and moving to newer processor versions each time they come out?  There might also be some very small customers whose data can currently fit into this small a space, but once again, why architect for no growth and potential failure?  Remember, this was a discussion only about scalability, not response time.  Is it likely that response time also degrades as VMware partitions increase in size?  Silly me!  I forgot to mention that the above white paper showed response time increasing from .2 seconds @ 4 threads to 1 second @ 32 threads, a 400% increase in response time.  Isn’t the goal of HANA to deliver improved performance?  Kind of defeats the purpose if you virtualize it using VMware!
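The “how small?” arithmetic above can be sketched directly, under the stated assumptions (80-core/1TB appliance, half of memory as data space, 4:1 compression, 62.5% observed scalability).  The exact figures round to the 400GB and 250GB quoted above.

```python
# Illustrative only: the "how small?" math from the paragraph above.
vm_cores = 32 // 2                   # 32 threads with hyperthreading
share = vm_cores / 80                # fraction of an 80-core appliance: 0.2
data_space_gb = 1024 / 2             # 1TB appliance -> 512GB data space
uncompressed_gb = data_space_gb * 4  # 4:1 compression -> 2TB of source data

ideal_gb = share * uncompressed_gb           # ~410GB ("400GB" above)
realistic_gb = ideal_gb * 0.625              # 62.5% scalability -> ~256GB
print(round(ideal_gb), round(realistic_gb))  # 410 256
```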

November 20, 2013 | Posted in Uncategorized | 3 Comments

Head in the cloud? Keep your feet on the ground with IBM cloud computing for SAP.

Cloud means many things to many people.  One definition, popularized by various internet based organizations refers to cloud as a repository of web URLs, email, documents, pictures, videos, information about items for sale, etc. on a set of servers maintained by an internet provider where any server in that cluster may access and make available to the end user the requested object.  This is a good definition for those types of services, however SAP does not exist as a set of independent objects that can be stored and made available on such a cloud.


Another definition involves the dynamic creation, usage and deletion of system images on a set of internet based servers hosted by a provider.   Those images could contain just about anything including SAP software and customer data.  Security of customer data, both on disk and in transit across the internet, service level agreements, reliability, backup/recovery and government compliance (where appropriate) are just a few of the many issues that have to be addressed in such implementations.  Non-production systems are well suited for this type of cloud since many of the above issues may be less of a concern than for production systems.  Of course, that is only the case when no business data or intellectual property, e.g. developed ABAP or Java code, is stored on such servers in which case these systems become more and more sensitive, like production.  This type of public cloud may offer a low cost for infrequently accessed or low utilization environments.  Those economics can often change dramatically as usage increases or if more controls are desired.


Yet another definition utilizes traditional data center hosting providers that offer robust security, virtual private networks, high speed communications, high availability, backup/recovery and thorough controls.  The difference between conventional static hosting and cloud hosting is that the resources utilized for a given customer or application instance may be hosted on virtual rather than dedicated systems, available on demand, may be activated or removed via a self-service portal and may be multi-tenant, i.e. multiple customers may be hosted on a shared cloud.  While more expensive than the above cloud, this sort of cloud is usually more appropriate for SAP production implementations and is often less expensive than building a data center, staffing it with experts, acquiring the necessary support infrastructure, etc.


As many customers already own data centers, have large staffs of experts and host their own SAP systems today, another cloud alternative is often required: a Private Cloud.  These customers often wish to reduce the cost of systems by driving higher utilization, shared use of infrastructure among various workloads, automatic load balancing, improvements in staff productivity and potentially even self-service portals for on demand systems with charge back accounting to departments based on usage.


Utilizing a combination of tools from IBM and SAP, customers can implement a private cloud and achieve as many of the above goals as desired.  Let’s start with SAP.  SAP made its first foray into this area several years ago with its Adaptive Computing Controller (ACC).  Leveraging SAP application virtualization, it allowed basis administrators to start, stop and relocate SAP instances.  This helped SAP to gain a much deeper appreciation for customer requirements, which enabled them to develop SAP NetWeaver Landscape Virtualization Management (LVM).  SAP, very wisely, realized that attempting to control infrastructure resources directly would require a huge effort and continuous updates as partner technology changed, not to mention an almost unlimited number of testing and support scenarios.  Instead, SAP developed a set of business workflows to allow basis admins to perform a wide array of common tasks.  They also developed an API and invited partners to write interfaces to their respective cloud-enabling solutions.  In this way, while governing a workflow, SAP LVM simply has to request a resource, for example, from the partner’s systems or storage manager, and once that resource is delivered, continue with the rest of the workflow at the SAP application level.


IBM was an early partner with SAP ACC and has continued that partnership with SAP LVM.  By integrating storage management, the solution and enablement in the IBM Power Systems environment is particularly thorough and is probably the most complete of its kind on the market.  IBM offers two types of systems managers, IBM Systems Director (SD) and IBM Flex Systems Manager (FSM).  SD is appropriate for rack-based systems, including conventional Power Systems, in addition to IBM’s complete portfolio of systems and storage.  As part of that solution, customers can manage physical and virtual resources, maintain operating systems, consolidate error management, control high availability and even optimize data center energy utilization.  FSM is a manager specifically for IBM’s new PureSystems family of products, including several Power Systems nodes.  FSM is focused on the management of the components delivered as part of a PureSystems environment, where SD is focused on the entire data center including PureSystems, storage and rack-based systems.  Otherwise, the functions in an LVM context are largely the same.  FSM may be used with SD in a data center either side by side or with FSM feeding certain types of information up to SD.  IBM also offers a storage management solution called Tivoli Storage FlashCopy Manager (FCM).  This solution drives the non-disruptive copying of filesystems on appropriate storage subsystems such as IBM’s XIV, as well as virtually any IBM or non-IBM storage subsystem through the IBM SAN Volume Controller (SVC) or V7000 (basically an SVC packaged with its own HDD and SSD).


Using the above, SAP LVM can capture OS images including SAP software, find resources on which to create new instances, rapidly deploy images, move them around as desired to load balance systems or when preventative maintenance is desired, monitor SAP instances, provide advanced dashboards, a variety of reports, and make SAP system/db copies, clones or refreshes including the SAP relevant post-copy automation tasks.


What makes the IBM Power Systems implementation unique is the integration between all of the pieces of the solution.  Using LVM on Power Systems with either SD, FSM or both, a basis admin can see and control both physical and virtual resources, as PowerVM is built in and is part of every Power System automatically.  This means that when, for instance, a physical node is added to an environment, SD and FSM can see it immediately, meaning that LVM can also see it and start using it.  In the x86 world, there are two supported configurations for LVM: native and virtualized.  Clearly, a native installation is limited by its very definition, as all of the attributes of resource sharing and movement and some management features that come with virtualization are not present in a native installation.


According to SAP Note 1527538 – SAP NetWeaver Landscape Virtualization Management 1.0, currently only VMware is supported for virtualized x86 environments.  LVM with VMware/x86-based implementations relies on VMware vCenter, meaning it can control only virtual resources.  Depending on the customer implementation, systems admins may have to use a centralized systems management tool for installation, network, configuration and problem management, i.e. the physical world, and vCenter for the virtual world.  This contrasts with SD or FSM, which can manage the entire Power Systems physical and virtual environment plus all of the associated network and chassis management, where appropriate.


LVM with Power Systems and FCM can drive full database copy/clone/refresh activity through disk subsystems.  Disk subsystems, such as IBM XIV, can make copies very fast in a variety of ways.  Some make pointer-based copies, which means that only changed blocks are duplicated and a “copy” is made available almost immediately for further processing by LVM.  In some situations and/or with some disk subsystems, a full copy process, in which every block is duplicated, might be utilized, but this happens at the disk subsystem or SAN level without involving a host system, so it is not only reasonably fast but also does not consume host system resources.  In fact, a host system in this configuration does not even need to stop processing but merely places the source DB into “logging only” mode, which is resumed into normal operating mode a short time later, after the copy is initiated.
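The pointer-based, “changed block only” copy described above boils down to copy-on-write.  The toy sketch below illustrates the idea only; it is not XIV’s actual implementation, and the class and method names are invented for illustration.

```python
# Toy sketch of a pointer-based ("changed block only") copy. A snapshot
# starts out sharing every block with its source and stores a private
# copy of a block only when that block is changed. Just the copy-on-write
# idea, not XIV's actual implementation; names are invented.

class CowSnapshot:
    def __init__(self, source):
        self.source = source     # shared, unchanged blocks live here
        self.own = {}            # only changed blocks are materialized

    def read(self, block_id):
        # Prefer the snapshot's private copy, fall back to the source.
        return self.own.get(block_id, self.source[block_id])

    def write(self, block_id, data):
        self.own[block_id] = data    # copy-on-write: store only the delta

src = {0: b"aaaa", 1: b"bbbb"}       # source volume: block_id -> data
snap = CowSnapshot(src)
snap.write(1, b"BBBB")               # change one block in the snapshot

print(snap.read(0), snap.read(1), len(snap.own))  # b'aaaa' b'BBBB' 1
```

Because only the one changed block is materialized, the “copy” is available immediately and consumes storage proportional to the changes, which is why such clones are both fast and space-efficient.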


LVM with x86 offers two options.  Option 1: utilize VMware and its storage copy service.  Option 2: utilize LVM, native or with VMware, and use a separate plugin from a storage subsystem vendor.  Option 2 works pretty much the same as the Power/FCM solution described above, except that only certain vendors are supported, and any integration of plugins from different companies, not to mention any troubleshooting, is a customer task.  It might be worthwhile to consider the number of companies that might be required to solve a problem in this environment, e.g. SAP, VMware, the storage subsystem vendor, the OS vendor and the systems vendor.


For Option 1, VMware drives copies via vCenter using a host-only process.  According to the above-mentioned SAP Note, “virtualization based cloning is only supported with Offline Database.”  This might be considered a bit disruptive by some and impossible to accommodate by others.  Even though it might be theoretically possible to use a VMware snapshot, a SID rename process must be employed for a clone, and every table must be read in and then out again with changes to the SID.  (That said, for some other LVM activities not involving a full clone, a VMware snapshot might be used.)  As a result, VMware snapshots may quickly take on the appearance of a full copy, so they may not be the best technology to use, both because of the overhead on the system and because VMware itself does not recommend keeping database snapshots around for more than a few days at most; the clone process therefore typically uses the full copy option.  When the full copy is initiated by VMware, every block must be read into VMware and then back out.  Not only is this process slow for large databases, but it places a large load on the source system, potentially resulting in poor performance for other partitions during this time.  Since a full copy is utilized, a VMware-based copy/clone will also take radically more disk storage than a Power/XIV-based clone, which is fully supported with a “changed block only” copy.


Of course, the whole discussion of using LVM with vCenter may be moot.  After all, the assumption is that one would be utilizing VMware for database systems.  Many customers choose not to do this for a variety of reasons, from multiple single points of failure, to scaling, to database vendor support, to potential issues in problem resolution due to the use of a multi-layer, multi-vendor stack, e.g. hardware from one vendor with proprietary firmware from another vendor, a processor chip from another vendor, virtualization software from VMware, and an OS from Microsoft, SUSE, Red Hat or Oracle, not to mention high availability and other potential issues.  Clearly, this would not be an issue if one eliminated database systems from the environment, but that is where some of the biggest benefits of LVM are realized.


LVM, as sophisticated as it is currently, does not address all of the requirements that some customers might have for a private cloud.  The good news is that it doesn’t have to.  IBM supplies a full range of cloud-enabling products under the brand name IBM Smart Cloud.  These tools range from an “Entry” product, suitable for adding a simple self-service portal, some additional automation and some accounting features, to a full-feature “Enterprise” version.  Those tools call SD or FSM functions to manage the environment, which is quite fortunate, as any changes made by those tools are immediately visible to LVM, thereby completing the circle.


SAP and IBM collaborated to produce a wonderful and in-depth document that details the IBM/Power solution:  https://scn.sap.com/docs/DOC-24822


A blogger at SAP has also written extensively on the topic of Cloud for SAP.  You can see his blog at: http://blogs.sap.com/cloud/2012/07/24/sap-industry-analyst-base-camp-a-recap-of-the-sap-cloud-strategy-session/

January 4, 2013 Posted by | Uncategorized | Leave a comment

SAP performance report sponsored by HP, Intel and VMware shows startling results

Not often does a sponsored study show the opposite of what was intended, but this one does.  An astute blog reader alerted me to a white paper sponsored by HP, VMware and Intel from an organization called Enterprise Strategy Group (ESG).  The white paper is entitled “Lab Validation Report – HP ProLiant DL980, Intel Xeon, and VMware vSphere 5 SAP Performance Analysis – Effectively Virtualizing Tier-1 Application Workloads – By Tony Palmer, Brian Garrett, and Ajen Johan – July 2011.”  The words that they use to describe the results are, as expected, highly complimentary to HP, Intel and VMware.  In this paper, ESG points out that almost 60% of the respondents to their study have not yet virtualized “tier 1” applications like SAP but expect a rapid increase in the use of virtualization.  We can only assume that they surveyed only x86 customers, as 100% of Power Systems customers are virtualized since the PowerVM hypervisor is baked into the hardware and firmware of every system and can’t be removed.  Nevertheless, it is encouraging that customers are moving in the right direction and that there is so much potential for the increased use of virtualization.


ESG provided some amazing statistics regarding scalability.  ESG probably does not realize just how bad this makes VMware and HP look; otherwise, they probably would not have published it.  They ran an SAP ECC 6.0 workload which they describe as “real world” but for which they provide no backup as to what this workload was comprised of, so it is possible that a given customer’s workload may be even more intensive than the one tested.  They ran a single VM with 4 vcpu, then 8, 16 and 32.  They show both the number of users supported as well as the IOPS and dialog response time.  Then, in their conclusions, they state that scaling was nearly linear.  This data shows that when scaling from 4 to 32 vcpu, an 8x increase, the number of users supported increased from 600 to 3,000, a 5x increase.  Put a different way, 5/8 = .625 or 62.5% scalability.  Not only is this not even remotely close to linear scaling, but it is an amazingly poor level of scalability.  IOPS, likewise, increased from 140 to 630, demonstrating 56.3% scalability, and response time went from .2 seconds to 1 second, which, while respectable, was 5 times that of the 4 vcpu VM.
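The scalability arithmetic above can be checked with a few lines of Python (a sketch; the user and IOPS figures are the ESG numbers quoted above):

```python
# Scaling efficiency = actual speedup divided by ideal (linear) speedup.
def scaling_efficiency(base_value, scaled_value, base_units, scaled_units):
    actual = scaled_value / base_value
    ideal = scaled_units / base_units
    return actual / ideal

# ESG figures quoted above: 4 vcpu -> 600 users, 32 vcpu -> 3,000 users
users = scaling_efficiency(600, 3000, 4, 32)   # 0.625, i.e. 62.5%
iops = scaling_efficiency(140, 630, 4, 32)     # 0.5625, i.e. the ~56.3% in the text
print(f"User scalability: {users:.1%}")
print(f"IOPS scalability: {iops:.1%}")
```

Linear scaling would give an efficiency of 1.0; anything well below that means each added vcpu contributes progressively less.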


ESG also ran a non-virtualized test with 32 physical cores.  In this test, they achieved only 4,400 users/943 IOPS.  Remember, VMware is limited to 32 vcpu which works out to the equivalent of 16 cores.  So, with twice the number of effective physical cores, they were only able to support 46.7% more users and 49.7% more IOPS.  To make matters much worse, response time almost doubled to 1.9 seconds.
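The same kind of quick check applies to the bare-metal comparison (a sketch using the user and IOPS counts quoted above):

```python
# 32 vcpu VM vs. 32 physical cores, per the ESG figures quoted above.
virt_users, phys_users = 3000, 4400
virt_iops, phys_iops = 630, 943

extra_users = phys_users / virt_users - 1   # ~0.467 -> 46.7% more users
extra_iops = phys_iops / virt_iops - 1      # ~0.497 -> 49.7% more IOPS
print(f"Bare metal: {extra_users:.1%} more users, {extra_iops:.1%} more IOPS")
```

With twice the effective physical cores, one would hope for something approaching 100% more throughput, not under 50%.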


ESG went on to make the following statement: “Considering that the SAP workload tested utilized only half of the CPU and one quarter of the available RAM installed in the DL980 tested, it is not unreasonable to expect that a single DL980 could easily support a second virtualized SAP workload at a similarly high utilization level and/or multiple less intensive workloads driven by other applications.”  If response time is already borderline poor with VMware managing only a single workload, is it reasonable to assume that response time will go down, not up, if you add a second workload?  If IOPS are not even keeping pace with the poor scalability of vcpu, is it reasonable to assume that IOPS will suddenly start improving faster?  If you have not tested the effect of running a second workload, is it reasonable to speculate about what might happen under drastically different conditions?  This is like saying that on a hot summer day, an air conditioner was able to maintain a cool temperature in a sunny room with half of the chairs occupied, and that it is therefore not “unreasonable” to assume that it could do the same with all chairs occupied.  That might be the case, but there is absolutely no evidence to support such speculation.


ESG further speculates that because this test utilized default values for BIOS, OS, SAP and SQL Server, performance would likely be higher with tuning.  … And my car will probably go faster if I wash it and add air to the tires, but by how much?  In summary, and I am paraphrasing, ESG says that VMware, Intel processors and HP servers are ready for SAP primetime, providing reliability and performance while simplifying operations and lowering costs.  It is interesting that they talk about reliability, yet they, once again, provide no supporting evidence and did not mention a single thing about reliability earlier in the paper other than to say that the HP DL980 G7 delivers “enhanced reliability”.  I certainly believe every marketing claim that a company makes without data to back it up, don’t you?


There are three ways that you can read this white paper.

  1. ESG has done a thorough job of evaluating HP x86 systems, Intel and VMware and has proven that this environment can handle SAP workloads with ease
  2. ESG has proven that VMware has either incredibly poor scalability or high overhead or both
  3. ESG has limited credibility as they make predictions for which they have no data to support their conclusions


While I might question how ESG makes predictions, I don’t believe that they do a poor job of performance testing.  They seem to operate like economists, i.e. they are very good at collecting data but make predictions based on past experience, not hard data.  When was the last time that economists correctly predicted market fluctuations?  If they did, they would all be incredibly rich!


I think it would be irresponsible to say that VMware based environments are incapable of handling SAP workloads.  On the contrary, VMware is quite capable, but there are significant caveats.  VMware does best with small workloads, e.g. 4 to 8 vcpu, not with larger workloads, e.g. 16 to 32 vcpu.  This means that if a customer utilizes SAP on VMware, they will need more, and smaller, images than they would on excellent scaling platforms like IBM Power Systems, which drives up management costs substantially and reduces flexibility.  By way of comparison, published SAP SD 2-tier benchmark results for IBM Power Systems utilizing POWER7 technology show 99% scalability when comparing the performance of a 16-core to a 32-core system at the same MHz, and 89.3% scalability when comparing a 64-core to a 128-core system with a 5% higher MHz, which, when normalized to the same MHz, shows 99% scalability even at this extremely high performance level.


The second caveat for VMware and HP/Intel systems is in the area that ESG brushed over as if it were a foregone conclusion, i.e. reliability.  Solitaire Interglobal examined data from over 40,000 customers and found that Linux based x86 systems suffer 3 or more times as many system outages as Power Systems, and Windows based x86 systems up to 10 times as many.  They also found radically higher outage durations for both Linux and Windows compared to Power, and much lower overall availability when looking at both planned and unplanned outages, in general: http://ibm.co/strategicOS and specifically in virtualized environments: http://ibm.co/virtualizationplatformmatters.  Furthermore, as noted in my post from late last year, https://saponpower.wordpress.com/2011/08/29/vsphere-5-0-compared-to-powervm/, VMware introduces a number of single points of failure when mission critical applications demand just the opposite, i.e. the elimination of single points of failure.


I am actually very happy to see this ESG white paper, as it has proven how poorly VMware scales for large workloads like SAP in ways that few other published studies have ever exposed.  Power Systems continues to set the bar very high when it comes to delivering effective virtualization for large and small SAP environments while offering outstanding, mission critical reliability.  As noted in https://saponpower.wordpress.com/2011/08/15/ibm-power-systems-compared-to-x86-for-sap-landscapes/, IBM does this while maintaining a similar or lower TCO when all production, HA and non-production systems, 3 years of 24x7x365 hardware maintenance, licenses and 24x7x365 support for Enterprise Linux and vSphere 5.0 Enterprise Plus are included … and that analysis was done back when I did not have ESG’s lab report showing how poorly VMware scales.  I may have to revise my TCO estimates based on this new data.

October 23, 2012 Posted by | Uncategorized | 3 Comments

Is the SAP 2-tier benchmark a good predictor of database performance?

Answer: Not even close, especially for x86 systems.  Sizings for x86 systems based on the 2-tier benchmark can be as much as 50% smaller for database-only workloads than would be predicted by the 3-tier benchmark.  Bottom line: I recommend that any database-only sizing for x86 systems or partitions be at least doubled to ensure that enough capacity is available for the workload.  At the same time, IBM Power Systems sizings are extremely conservative and have built-in allowances for reality vs. hypothetical 2-tier benchmark based sizings.  What follows is a somewhat technical and detailed analysis, but this topic cannot, unfortunately, be boiled down into a simple set of assertions.


The details: The SAP Sales and Distribution (S&D) 2-tier benchmark is absolutely vital to SAP sizings, as workloads are measured in SAPS (SAP Application Performance Standard)[i], a unit of measurement based on the 2-tier benchmark.  The goal of this benchmark is to be hardware independent and useful for all types of workloads, but the reality of this benchmark is quite different.  The capacity required for the database server portion of the workload is 7% to 9% of the total capacity, with the remainder used by multiple instances of dialog/update servers and a message/enqueue server.  This contrasts with the real world, where the ratio of app to DB servers is more in the 4-to-1 range for transactional systems and 2-to-1 or even 1-to-1 for BW.  In other words, this benchmark is primarily an application server benchmark with a relatively small database server.  Even if a particular system or database software delivered 50% higher performance for the DB server compared to what would be predicted by the 2-tier benchmark, the result on the 2-tier benchmark would only change by .07 * .5 = 3.5%.
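To see just how little the DB tier can move a 2-tier result, the back-of-the-envelope calculation above can be written out (a sketch; the 7% DB share and 50% speedup are the figures used in the text):

```python
def overall_change(db_share, db_speedup):
    """Fractional change in the total 2-tier result when only the DB tier
    gets faster; the app tier (the remaining capacity) is unchanged."""
    return db_share * db_speedup

# A DB server 50% faster than predicted, at a 7% share of the workload,
# moves the overall benchmark result by a mere 3.5%.
print(f"{overall_change(0.07, 0.50):.1%}")
```

This is why two systems with wildly different database capability can post nearly identical 2-tier numbers.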


How then is one supposed to size database servers when the SAP Quicksizer shows the capacity requirements based on 2-tier SAPS?   A clue may be found by examining another, closely related SAP benchmark, the S&D 3-tier benchmark.  The workload used in this benchmark is identical to that used in the 2-tier benchmark, the difference being that in the 2-tier benchmark, all instances of DB and app servers must be located within one operating system (OS) image, whereas with the 3-tier benchmark, DB and app server instances may be distributed across multiple different OS images and servers.  Unfortunately, the unit of measurement is still SAPS, but here it represents the total SAPS handled by all servers working together.  Fortunately, 100% of the SAPS must be funneled through the database server, i.e. this SAPS measurement, which I will call DB SAPS, represents the maximum capacity of the DB server.


Now, we can compare different SAPS and DB SAPS results or sizing estimates for various systems to see how well 2-tier and 3-tier SAPS correlate with one another.  Turns out, this is easier said than done as there are precious few 3-tier published results available compared to the hundreds of results published for the 2-tier benchmark.  But, I would not be posting this blog entry if I did not find a way to accomplish this, would I?  I first wanted to find two results on the 3-tier benchmark that achieved similar results.  Fortunately, HP and IBM both published results within a month of one another back in 2008, with HP hitting 170,200 DB SAPS[ii] on a 16-core x86 system and IBM hitting 161,520 DB SAPS[iii] on a 4-core Power system.


While the stars did not line up precisely, it turns out that 2-tier results were published by both vendors just a few months earlier with HP achieving 17,550 SAPS[iv] on the same 16-core x86 system and IBM achieving 10,180 SAPS[v] on a 4-core and slightly higher MHz (4.7GHz or 12% faster than used in the 3-tier benchmark) Power system than the one in the 3-tier benchmark.


Notice that the HP 2-tier result is 72% higher than the IBM result, even though the IBM result was achieved on the faster IBM processor.  Clearly, this lead would have been even higher had IBM published a result on the slower processor.  SAP benchmark rules do not allow vendors to estimate results from a slower to a faster processor, and even though I am posting this as an individual, not on behalf of IBM, I will err on the side of caution and give you only the formula, not the estimated result:  17,550 / (10,180 * 4.2 / 4.7) = the ratio of the published HP result to the projected slower IBM processor.  At the same time, HP achieved only a 5.4% higher 3-tier result.  How does one go from almost twice the performance to essentially tied?  Easy answer: the IBM system was designed for database workloads, with a whole boatload of attributes that go almost unused in application server workloads, e.g. extremely high I/O throughput and advanced cache coherency mechanisms.


One might point out that Intel has really turned up its game since 2008 with the introduction of the Nehalem and Westmere chips and closed the gap, somewhat, against IBM’s Power Systems.  There is some truth in that, but let’s take a look at a more recent result.  In late 2011, HP published a 3-tier result of 175,320 DB SAPS[vi].  A direct comparison of old and new results shows that the new result delivered 3% more performance than the old with 12 cores instead of 16, which works out to about 37% more performance per core.  Admittedly, this is not completely correct, as the old benchmark utilized SAP ECC 6.0 with ASCII and the new one used SAP ECC 6.0 EP4 with Unicode, which is estimated to be a 28% higher resource workload, so in reality this new result is closer to 76% more performance per core.  By comparison, a slightly faster DL380 G7[vii], but otherwise almost identical system to the BL460c G7, delivered 112% more SAPS/core on the 2-tier benchmark compared to the BL680c G5, and almost 171% more SAPS/core once the 28% factor mentioned above is taken into consideration.  Once again, one would need to adjust these numbers based on differences in MHz, and the formula for that would be: either of the above numbers * 3.06/3.33 = estimated SAPS/core.
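Working the per-core numbers from the cited certifications through explicitly (a sketch; the 1.28 Unicode adjustment factor is the estimate given in the text):

```python
def per_core(saps, cores):
    return saps / cores

UNICODE_FACTOR = 1.28  # text's estimate of the EhP4/Unicode workload increase

# 3-tier: BL460c G7 (175,320 DB SAPS, 12 cores) vs. BL680c G5 (170,200, 16 cores)
raw_3tier = per_core(175_320, 12) / per_core(170_200, 16)
adj_3tier = raw_3tier * UNICODE_FACTOR

# 2-tier: DL380 G7 (27,880 SAPS, 12 cores) vs. BL680c G5 (17,550, 16 cores)
raw_2tier = per_core(27_880, 12) / per_core(17_550, 16)
adj_2tier = raw_2tier * UNICODE_FACTOR

print(f"3-tier gain/core: {raw_3tier - 1:.0%} raw, {adj_3tier - 1:.0%} Unicode-adjusted")
print(f"2-tier gain/core: {raw_2tier - 1:.0%} raw, {adj_2tier - 1:.0%} Unicode-adjusted")
```

The per-core gain on the 2-tier benchmark (112% raw, ~171% adjusted) dwarfs the gain on the 3-tier benchmark (37% raw, ~76% adjusted), which is the disparity the paragraph above is pointing at.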


After one does this math, one finds that the improvement in 2-tier results was almost 3 times the improvement in 3-tier results, further questioning whether the 2-tier benchmark has any relevance to the database tier.  And there is just one more complicating factor: how vendors interpret SAP Quicksizer output.  The Quicksizer conveniently breaks down the amount of workload required of both the DB and app tiers.  Unfortunately, experience shows that this breakdown does not work in reality, so vendors can make modifications to the ratios based on their experience.  Some, such as IBM, have found that DB loads are significantly higher than the Quicksizer estimates and have made sure that this tier is sized higher.  Remember, while app servers can scale out horizontally, the DB server cannot unless a parallel DB is used, so making sure that you don’t run out of capacity is essential.  What happens when you compare the sizing from IBM to that of another vendor?  That is hard to say, since each can use whatever ratio they believe is correct.  If you don’t know what ratio the different vendors use, you may be comparing apples and oranges.


Great!  So what is a customer to do, now that I have completely destroyed any illusion that database sizing based on 2-tier SAPS is even remotely close to reality?


One option is to say, “I have no clue” and simply add a fudge factor, perhaps 100%, to the database sizing.  One could not be faulted for such a decision, as there is no other simple answer.  But one could also not be certain that this sizing was correct.  For example, how does I/O throughput fit into the equation?  It is possible for a system to be able to handle a certain amount of processing but not be able to feed data in at the rate necessary to sustain that processing.  Some virtualization managers, such as VMware, have to transfer data first to the hypervisor and then to the partition, or in the other direction to the disk subsystem.  This causes additional latency and overhead and may be hard to estimate.


A better option is to start with IBM.  IBM Power Systems is the “gold standard” for SAP open systems database hosting.  A huge population of very large SAP customers, some of which have decided to utilize x86 systems for the app tier, use Power for the DB tier.  This has allowed IBM to gain real world experience in how to size DB systems, which has been incorporated into its sizing methodology.  As a result, customers should feel a great deal of trust in the sizing that IBM delivers, and once you have this sizing, you can work backwards into what an x86 system should require.  Then you can compare this to the sizing delivered by the x86 vendor and have a good discussion about why there are differences.  How do you work backwards?  A fine question, for which I will propose a methodology.


Ideally, IBM would have a 3-tier benchmark for a current system from which you could extrapolate, but that is not the case.  Instead, you could extrapolate from the published result for the Power 550 mentioned above using IBM’s rperf, an internally developed estimate of relative performance for database intensive environments which is published externally.  The IBM Power Systems Performance Report[viii] includes rperf ratings for current and past systems.  If we multiply the size of the database system as estimated by the IBM ERP sizer by the ratio of per-core performance of IBM and x86 systems, we should be able to estimate how much capacity is required on the x86 system.  For simplicity, we will assume the sizer has determined that the database requires 10 cores of a 16-core IBM Power 740 3.55GHz.  Here is the proposed formula:


Power 550 DB SAPS x 1/1.28 (old SAPS to new SAPS conversion) x rperf of 740 / rperf of 550

161,520 DB SAPS x 1/1.28 x 176.57 / 36.28 = estimated DB SAPS of 740 @ 16 cores

Then we can divide the above number by the number of cores to get a per-core DB SAPS estimate.  By the same token, you can divide the published HP BL460c G7 DB SAPS number by its number of cores.  Then:

Estimated Power 740 DB SAPS/core / Estimated BL460c G7 DB SAPS/core = ratio to apply to sizing

The result is a ratio of 2.6, e.g. if a workload requires 10 IBM Power 740 3.55GHz cores, it would require 26 BL460c G7 cores.  This contrasts with the per-core estimated SAPS based on the 2-tier benchmark, which suggests that the Power 740 would have just 1.4 times the performance per core.   In other words, a 2-tier based sizing would suggest that the x86 system requires just 14 cores, where the 3-tier comparison suggests it actually needs almost twice that.  This assumes, of course, that the I/O throughput is sufficient.  It also assumes that both systems have the same target utilization.  In reality, where x86 systems are usually sized for no more than 65% utilization, Power Systems are routinely sized for up to 85% utilization.
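The extrapolation above can be worked through end to end (a sketch; the rperf values, core counts and 1.28 conversion factor are those quoted in the text):

```python
# Step 1: estimate DB SAPS of a 16-core Power 740 from the Power 550 result.
p550_db_saps = 161_520                 # published Power 550 3-tier result
saps_conversion = 1 / 1.28             # old (ASCII) SAPS -> new (Unicode) SAPS
rperf_740, rperf_550 = 176.57, 36.28   # from the IBM Power Systems Performance Report

est_740_db_saps = p550_db_saps * saps_conversion * rperf_740 / rperf_550
est_740_per_core = est_740_db_saps / 16

# Step 2: per-core DB SAPS of the published BL460c G7 result.
bl460c_per_core = 175_320 / 12

# Step 3: the sizing ratio, applied to the 10-core example workload.
ratio = est_740_per_core / bl460c_per_core
x86_cores = 10 * ratio
print(f"Ratio: {ratio:.1f}; 10 Power 740 cores ~ {round(x86_cores)} BL460c G7 cores")
```

Doubling the core count for VMware's vcpu accounting (26 cores, i.e. 52 vcpus) then follows directly, as the next paragraph notes.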


If this workload were planned to run under VMware, the number of vcpus must be considered, which is twice the number of cores, i.e. this workload would require 52 vcpus, which is over the 32 vcpu limit of VMware 5.0.  Even when VMware can handle 64 vcpus, the overhead of VMware and its ability to sustain the high I/O of such a workload must be included in any sizing.


Of course, technology moves on, and Intel is into its Gen8 processors.  So, you may have to adjust what you believe to be the effective throughput of the x86 system based on its relative performance to the BL460c G7 above, but now, at least, you have a frame of reference for doing the appropriate calculations.  Clearly, we have shown that 2-tier is an unreliable benchmark by which to size database-only systems or partitions and can easily be off by 100% for x86 systems.



[ii] 170,200 SAPS/34,000 users, HP ProLiant BL680c G5, 4 Processor/16 Core/16 Thread, E7340, 2.4 Ghz, Windows Server 2008 Enterprise Edition, SQL Server 2008, Certification # 2008003

[iii] 161,520 SAPS/32,000 users, IBM System p 550, 2 Processor/4 Core/8 Thread, POWER6, 4.2 Ghz , AIX 5.3,  DB2 9.5, Certification # 2008001

[iv] 17,550 SAPS/3,500 users , HP ProLiant BL680c G5, 4 Processor/16 Core/16 Thread, E7340, 2.4 Ghz, Windows Server 2008 Enterprise Edition, SQL Server 2008, Certification # 2007055

[v] 10,180 SAPS/2,035 users, IBM System p 570, 2 Processor/4 Core/8 Thread, POWER6, 4.7 Ghz, AIX 6.1, Oracle 10G, Certification # 2007037

[vi] 175,320 SAPS/32,125 users, HP ProLiant BL460c G7, 2 Processor/12 Core/24 Thread, X5675, 3.06 Ghz,  Windows Server 2008 R2 Enterprise on VMware ESX 5.0, SQL Server 2008, Certification # 2011044

[vii] 27,880 SAPS/5,110 users, HP ProLiant DL380 G7, 2 Processor/12 Core/24 Thread, X5680, 3.33 Ghz, Windows Server 2008 R2 Enterprise, SQL Server 2008, Certification # 2010031

July 30, 2012 Posted by | Uncategorized | 1 Comment