SAPonPower

An ongoing discussion about SAP infrastructure

Optane DC Persistent Memory – Proven, industrial strength or full of hype – Detail, part 3

In this final post of a three-part series, we will explore the two other major “benefits” claimed for Optane DIMMs: fast restart and lower TCO.

Fast restart

HANA, as an in-memory database, must be loaded into memory to perform well.  Intel has, for years and apparently up to the present, suffered from a major bottleneck in its I/O subsystem.  As a result, loading a single terabyte of data into memory could take 10 to 20 minutes in a best-case scenario.  Anecdotally, some customers have remarked that placing superfast, all-flash subsystems, such as IBM’s FlashSystem 9100, behind an Intel HANA system resulted in little improvement in load times compared to mid-range SSD subsystems.  For customers attempting to bring up a 10TB storage/20TB memory HANA system, this could result in load times measured in hours.  As a result, a faster way of getting a HANA system up and running was sorely needed.

This did not appear to be a problem for customers using IBM’s Power Systems.  Not only has Power delivered roughly twice the I/O bandwidth of Intel systems for years, but with POWER9, IBM introduced PCIe Gen4, further extending its leadership in this area.  The bottleneck is actually in the storage subsystem and the number of paths it can drive, not in the processor.  To prove this, IBM ran a test with 10 NVMe cards in PCIe slots and was able to drive load speeds into HANA of almost 1TB/min.[i]  In other words, to improve restart times, Power Systems customers need only move to faster storage subsystems and/or add more or faster paths.
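To put those rates in perspective, here is a back-of-the-envelope sketch.  The throughput figures are simply the ones quoted above, not measurements of any particular configuration:

```python
# Rough comparison of HANA column-store load times at different I/O rates.
# Throughput figures are the approximate numbers discussed above, not benchmarks.

def load_minutes(db_size_tb: float, rate_tb_per_min: float) -> float:
    """Minutes to load db_size_tb of column data at the given rate."""
    return db_size_tb / rate_tb_per_min

db_size_tb = 10.0   # e.g. the 10TB-storage / 20TB-memory system mentioned above
scenarios = {
    "typical Intel I/O (~1TB per 15 min)": 1.0 / 15,
    "Power + 10 NVMe paths (~1TB/min)":    1.0,
}
for label, rate in scenarios.items():
    minutes = load_minutes(db_size_tb, rate)
    print(f"{label}: {minutes:.0f} min ({minutes / 60:.1f} h)")
```

The difference between a coffee break and a multi-hour outage window is entirely a function of how fast the storage paths can feed the column store.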

This suggests that Intel’s motivation for NVDIMMs may be to solve a problem of its own making.  But it also raises a question about their understanding of HANA.  If a customer is running a transactional workload such as Suite on HANA, S/4 or C/4, and is using HANA System Replication, wouldn’t at least one of the pair of nodes be available at all times?  SAP supports near-zero-downtime upgrades[ii], so systems, firmware, OS or even HANA itself may be updated on one of the pair of nodes while the other continues to operate, followed by a synchronization of changed data and a controlled failover so that the first node can then be updated.  In this way, cold restarts of HANA, where a fast restart option might make a big difference, may be driven down to a very rare occurrence.  In other words, wouldn’t this be a better option than degrading the performance of everything with DIMMs that are radically slower than DRAM, as discussed in gory detail in the previous two posts of this series?

HANA also offers a quick restart option whereby HANA can be started and the database made available within minutes even though not all of the columns have yet been loaded into memory.  Yes, performance will be poor until all columns are loaded, but for non-production and non-mission-critical systems this might be an acceptable option.  Lastly, with HANA 2.0 SPS04, SAP now supports a fast restart option with conventional memory.[iii]  This only works when the OS stays up and running, i.e. it can’t be used when the system, firmware or OS is being updated, but it can be used for the vast majority of required restarts, e.g. HANA upgrades, patches and restarts when a bounce of the HANA environment is needed.  Though it is not mentioned in the help documentation, it may even be possible to patch the Linux kernel while using the fast restart option if SUSE SLES is used with its “Live Patching” function.[iv]
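For the curious, the fast restart option boils down to carving out tmpfs filesystems, one per NUMA node, and pointing HANA at them so the column store main fragments survive a HANA restart.  Below is a minimal sketch of what that configuration looks like; the SID, paths and node count are placeholders, and the mount options and parameter name should be verified against the SAP help page in note [iii] before use:

```python
# Sketch: generate the tmpfs mounts and global.ini entry used by the HANA
# Fast Restart option (HANA 2.0 SPS04+). SID, paths and NUMA node count are
# illustrative placeholders -- verify against SAP's documentation [iii].

sid = "HDB"      # hypothetical SAP system ID
numa_nodes = 4   # NUMA node count of the host (illustrative)

mount_cmds = [
    f"mount tmpfs{sid}{n} -t tmpfs -o mpol=prefer:{n} /hana/tmpfs{n}/{sid}"
    for n in range(numa_nodes)
]
basepath = ";".join(f"/hana/tmpfs{n}/{sid}" for n in range(numa_nodes))

print("\n".join(mount_cmds))
print("\n# global.ini -> [persistence]")
print(f"basepath_persistent_memory_volumes = {basepath}")
```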

TCO

Optane DIMMs are less expensive than DRAM DIMMs.  List prices appear to be about 40% lower when comparing same-size DIMMs.  Effective prices, however, may show a much smaller delta, since competition exists for DRAM, meaning discounts may be much deeper than for the NVDIMMs from Intel, currently the only source.  This also assumes full utilization of those NVDIMMs, which may prove to be a drastically bad assumption.  Sizing guidance from SAP[v] shows that the ratio of DRAM to PMEM (their term for NVDIMMs) capacity can be anything from 2:1 to 1:4, but it provides no guidance as to where a given workload might fall or what sort of performance impact might result.  This means that a customer might purchase NVDIMMs at a capacity ratio of 1:2, e.g. 1TB DRAM:2TB PMEM, but might end up being able to utilize only 512GB or 1TB of PMEM due to negative performance results.  In that case, the effective cost of the NVDIMMs would instantly double or quadruple and would, effectively, be higher than that of DRAM DIMMs.
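A simple worked example shows how utilization changes the economics.  The prices below are invented placeholders purely for illustration; only the roughly 40% list-price delta mirrors the discussion above:

```python
# Illustrative only: how NVDIMM utilization changes effective cost per usable GB.
# Prices are invented placeholders; only the ~40% list-price delta mirrors the text.

dram_price_per_gb = 10.0                       # hypothetical DRAM price
pmem_price_per_gb = dram_price_per_gb * 0.6    # ~40% lower list price

purchased_pmem_gb = 2048                       # 2TB PMEM purchased (1TB DRAM : 2TB PMEM)
for usable_gb in (2048, 1024, 512):            # full use vs. performance-limited use
    effective = pmem_price_per_gb * purchased_pmem_gb / usable_gb
    print(f"usable {usable_gb:>4} GB -> effective ${effective:.2f}/GB "
          f"(DRAM at ${dram_price_per_gb:.2f}/GB)")
```

At full utilization the PMEM is cheaper per GB; at half or a quarter utilization the effective price overtakes DRAM, exactly the doubling or quadrupling described above.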

But let us assume the best rather than the worst.  Even if only a 2:1 ratio works relatively well, the cost of the NVDIMMs, if sized for that ratio, would be somewhat lower than the equivalent cost of DRAM DIMMs. The problem is that memory, while a significant portion of the cost of systems, is but one element in the overall TCO of a HANA landscape.  If reducing TCO is the goal, shouldn’t all options be considered?

Virtualization has been in heavy use by most customers for years, helping to drive up system utilization, resulting in the need for fewer systems, fewer network and SAN ports, reduced floor space and power/cooling and, perhaps most importantly, a lower cost of IT management.  Unfortunately, few high-end customers, other than those using IBM Power Systems, can take advantage of this technology in the HANA world due to the many reasons identified in the latest of many previous posts.[vi]  Put another way, if a customer utilizes an industrial-strength and proven virtualization solution for HANA, i.e. IBM PowerVM, they may be able to reduce TCO considerably[vii] and potentially much more than the relatively small improvement due to NVDIMMs.

But if driving down memory costs is the only goal, there are a couple of ideas less radical than NVDIMMs that are worth investigating.  Depending on RTO requirements, some workloads might need an HA option but might not require it to be ready in minutes.  If this is the case, then a cold standby server running other workloads, which could be killed in the event of a system outage, could be utilized, e.g. QA, Dev, Test, Sandbox, Hadoop.  Since no incremental memory would be required, memory costs would be substantially lower than those required for System Replication, even if NVDIMMs are used.  IBM offers a tool called VM Recovery Manager which can instrument and automate such a configuration.

Another option worth considering, though only for non-production workloads, is a feature of IBM PowerVM called Memory Deduplication.  After different VMs are started using a shared memory pool, the hypervisor builds a logical memory map.  It then scans the pages of each VM looking for identical memory pages, at which point it uses the logical memory map to point each VM to the same real memory page, thereby freeing up the redundant memory pages for use by other workloads.  If a page is subsequently changed by one of the VMs, the hypervisor simply creates a unique real memory page for that VM.  The upshot of this feature is that the total quantity of DRAM may be reduced substantially for workloads that are relatively static and have large amounts of duplication between them.  The reason this should not be used for production is that when the VMs start, the hypervisor has not yet had the chance to deduplicate the memory pages and, if the sum of the logical memory of all VMs is larger than the total real memory, paging will occur.  This will subside over time and may be of little consequence to non-production workloads, but the risk to performance for production might be considered unacceptable and, besides, “Memory over-commitment must not be used” for production HANA according to SAP.
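For those who like to see the mechanism, here is a toy sketch of the basic idea, identical pages across VMs collapsing onto one real page.  This is purely conceptual and not PowerVM code:

```python
# Toy illustration of page deduplication: identical pages across VMs are
# collapsed onto a single real page. Conceptual sketch only, not PowerVM code.
from collections import defaultdict

def deduplicate(vm_pages: dict) -> tuple:
    """Collapse identical pages onto shared real pages; return (real page count, map)."""
    real_pages = {}                    # page content -> real page id
    page_map = defaultdict(dict)       # vm -> logical page index -> real page id
    for vm, pages in vm_pages.items():
        for i, content in enumerate(pages):
            real_pages.setdefault(content, len(real_pages))
            page_map[vm][i] = real_pages[content]
    return len(real_pages), dict(page_map)

vms = {
    "QA":      [b"os-page", b"app-page", b"data-A"],
    "Dev":     [b"os-page", b"app-page", b"data-B"],
    "Sandbox": [b"os-page", b"app-page", b"data-A"],
}
real_count, _ = deduplicate(vms)
total = sum(len(p) for p in vms.values())
print(f"{total} logical pages collapse to {real_count} real pages")
```

The more static and similar the VMs, the larger the collapse, which is exactly why this works well for fleets of non-production systems cloned from the same templates.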

Summary

Faster restarts than may be possible with traditional Intel systems can be achieved by using near-zero-downtime HANA upgrades with System Replication, the HANA fast restart option, or by switching to a system with a radically faster I/O subsystem, e.g. IBM Power Systems.  TCO may be reduced with tried and proven virtualization technologies as provided with IBM PowerVM, cold standby systems or memory deduplication, rather than experimenting with version 1.0 of a new technology with no track record, unknown reliability, poor sizing guidance and potentially huge impacts to performance.

 

[i] https://www.ibm.com/downloads/cas/WQDZWBYJ

[ii] https://launchpad.support.sap.com/#/notes/1984882

[iii] https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/ce158d28135147f099b761f8b1ee43fc.html

[iv] https://launchpad.support.sap.com/#/notes/1984787

[v] https://launchpad.support.sap.com/#/notes/2786237

[vi] https://saponpower.wordpress.com/2018/09/26/vmware-pushes-past-4tb-sap-hana-limit/

[vii] https://www.ibm.com/downloads/cas/M7X2YXZD


June 3, 2019

SAP HANA support for HPE nPar on Superdome Flex update

In addition to the outstanding support for virtualization technologies like PowerVM for HANA and the lukewarm support for VMware by SAP, SAP also supports other technologies that allow larger systems to be subdivided into smaller nodes.  Note that I did not say virtualization, but subdivision.  Physical partitioning (PPAR) is a technology invented in the 1990s and only allows components, e.g. boards or NUMA nodes, to be allocated to a separate workload from others on the same physical system.

On October 22, 2018, SAP updated its SAP Note for HPE nPar technology.[i]  With this update, SAP now supports nPars with Superdome Flex.  Granularity is incredibly fine (not).  As noted in the SAP note, “Via nPartitions, the following  partition sizes are supported in terms of the number of sockets:

    • Skylake based architecture: ScaleUp 16s, 12s, 8s, 4s; ScaleOut 4s, 8s, 16s

Or, to put it in terms of cores, each socket has 28 cores, so granularity is 112 cores.  You need only 20 cores?  No problem, you get to consume 112.  You need 113 cores?  Also no problem, you get to consume 224.  But, on the positive side, these nPars are “electrically isolated”, which has two really important implications.  First, the only way to isolate one or more Superdome Flex drawers into a separate nPar is to physically change the mesh wiring of the entire system.  That means that if you decide to change the configuration of nPars, dynamic changes are the exact opposite of what is supported.  In fact, according to customer reports, HPE requires a Statement of Work service contract to come out and rewire the system, and it takes multiple days … one customer reported multiple weeks.  The second implication is that all resources on the node(s) in an nPar are dedicated to that nPar.  In the above example, if you need 20 cores, you probably require around ½ TB of memory for BW or 1TB of memory for S/4.  It is possible to configure an nPar with as little as 1.5TB of memory, which means that you might waste an entire TB if you only need ½ TB.  Alternately, if you have other workloads on other nPars that require more cores and memory and you want to keep all drawers consistent to allow for future changes, you might actually have up to 6TB per drawer, meaning much more wasted memory if you only require ½ TB for a particular workload.  By the way, the only other elements that are shared when a system is broken up into physically isolated nPars are the frame(s), power supplies and the RMC – Rack Management Controller.  PCIe cards cannot be shared due to the physical isolation, so by using nPars, you essentially take a very expensive system and carve it into a bunch of smaller and very expensive, isolated systems which are difficult to reconfigure.  Alternately, if you really must use HPE technology for smaller workloads, you could purchase smaller systems at much lower prices.
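To quantify the waste, here is a quick sketch using the core and memory figures from the paragraph above:

```python
# Rough illustration of allocation waste with 4-socket (112-core) nPar granularity,
# using the figures discussed above.
import math

def npar_allocation(needed_cores: int, cores_per_npar: int = 112) -> int:
    """Cores actually consumed when allocation is rounded up to whole nPars."""
    return math.ceil(needed_cores / cores_per_npar) * cores_per_npar

for needed in (20, 112, 113):
    got = npar_allocation(needed)
    print(f"need {needed:>3} cores -> consume {got} cores ({got - needed} cores idle)")

# Memory side: a 20-core BW workload needing ~0.5TB on an nPar with a 1.5TB minimum
needed_tb, min_npar_tb = 0.5, 1.5
print(f"memory wasted: {min_npar_tb - needed_tb:.1f} TB of the {min_npar_tb} TB minimum nPar")
```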

I have really been scratching my head trying to understand why anyone would want this type of 1990s-era partitioning technology.  HPE certainly does, because it results in higher profits from selling larger systems with more aggregate capacity while giving the false appearance of flexibility.  For customers, on the other hand, it offers massive waste and very limited flexibility.

My advice: Don’t be a sucker and get taken in by HPE’s misdirection play.  Either purchase appropriately sized systems for each workload, or purchase systems that offer real virtualization, such as IBM Power Systems, with fine-grained allocation of resources, sharing of components such as PCIe adapters and true server consolidation; but don’t purchase one of these massive HPE systems and then eliminate any perceived value of using such a large system by cutting it up into smaller systems.

 

 

[i] 2103848 – SAP HANA on HPE nPartitions in production

 

March 25, 2019

SAP increases support for HANA on Power, now up to 16 concurrent production VMs with IBM PowerVM

On March 1, 2019, SAP updated SAP Note 2230704 – SAP HANA on IBM Power Systems with multiple LPARs per physical host.  Previously, up to 8 concurrent HANA production VMs were supported on the Power E880 system with 16 sockets.  Now, the new POWER9-based E980, also with 16 sockets, is supported with up to 16 concurrent HANA production VMs.  As was the case prior to this update, each VM must have a minimum of 4 cores and 128GB of memory and can grow as large as 16TB for OLAP and 24TB for OLTP.  The maximum VM count may be reduced by 1 if a shared pool is desired for one or more non-production HANA, other SAP or non-SAP workloads.  There is no restriction from an SAP perspective on the number of VMs that can run in a shared pool, but practical physical limits are usually hit before any PowerVM architectural limits.  CPU capacity not used by the production VMs may be shared, temporarily, with VMs in the shared pool using a proprietary technology called dedicated-donating, where the production VM, which owns the CPU capacity, may loan part of it to the shared pool and get it back immediately when needed for that production workload.

Most customers were quite happy with 8 concurrent VMs, so why should anyone care about 16?  It turns out some customers have really complex landscapes.  I recently had discussions with a customer that has around 12 current and planned production HANA instances.  They were debating whether to use HANA in a multi-tenant configuration.  The problem is that all HANA tenants in a multi-tenant VM are tightly bound, i.e. when the VM, OS or SAP software needs to be updated or reconfigured, all tenants are affected simultaneously.  While not impossible to deal with, this introduces operational complexity.  If those same 12 instances were placed in separate VMs on a new POWER server, that operational complexity could be eliminated.

As much as this might benefit edge-case customers with a large number of instances, it will really benefit cloud vendors that utilize Power, with greater flexibility, more sharing of resources and lower management and infrastructure costs.  Also, isolation between cloud clients is essential, so multi-tenancy is rarely an effective option.  PowerVM, on the other hand, offers very strong isolation, so this is an excellent option for cloud providers even when different clients share the same infrastructure.

This announcement also closes a perceived gap, as VMware could already run up to 16 concurrent VMs on an 8-socket system.  The caveat to that support was that, when running at the 16-VM level, the minimum and maximum size of each VM was ½ socket.  Of course, you could play the mix-and-match game with some VMs at the ½-socket level and others at the full- or multi-socket level, but neither of these options provides very good granularity.  For systems with 28-core sockets, the granularity per VM is 14 cores, 28 cores and then multiples of 28 cores up to 112 cores.  For VMs configured at ½ socket, if there is no other workload to consume the other ½ socket, the capacity is simply wasted.  Memory, likewise, has granularity limitations.  According to VMware’s Best Practice Guide, “When an SAP HANA VM needs to be larger than a single NUMA node, allocate all resource of this NUMA node; and to maintain a high CPU cache hit ratio, never share a NUMA node for production use cases with other workloads. Also, avoid allocation of more memory than a single NUMA node has connected to it because this would force it to access memory that is not local to the SAP HANA processes scheduled on the NUMA node.”  In other words, any memory not consumed by the HANA VM(s) on a particular socket/node is simply wasted, since other nodes should not utilize memory across nodes.

By comparison, production HANA workloads running on PowerVM may be adjusted by 1 core at a time with memory granularity measured in MB, not GB or TB.
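A small sketch makes the granularity difference concrete.  The 14- and 28-core increments are the ones described above for 28-core sockets; the single-core increment is the PowerVM behavior just mentioned:

```python
# Rounding a required core count up to each platform's allocation increment.
import math

def vmware_cores(needed: int) -> int:
    """Granularity per the text above: 14 cores (half socket), 28, then multiples of 28."""
    return 14 if needed <= 14 else math.ceil(needed / 28) * 28

def powervm_cores(needed: int) -> int:
    """PowerVM production HANA VMs can be sized core by core."""
    return needed

for needed in (10, 20, 30, 60):
    print(f"need {needed:>2} cores -> VMware {vmware_cores(needed):>3}, "
          f"PowerVM {powervm_cores(needed):>3}")
```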

In an upcoming blog post, I will give some practical examples of landscapes and how PowerVM and VMware virtualization would utilize resources.

With this enhanced level of support from SAP, IBM Power Systems with PowerVM is once again the clear leader in terms of virtualization for HANA environments.

March 11, 2019

Power Systems – Delivering best of breed scalability for SAP HANA

SAP quietly revised an SAP Note last week, but it certainly made a loud sound for some.  Version 47 of https://launchpad.support.sap.com/#/notes/2188482 now says that OLTP workloads, such as Suite on HANA or S/4HANA, are supported on IBM Power Systems up to 24TB.  OLAP workloads, like BW on HANA, may be implemented on IBM Power Systems with up to 16TB for a single scale-up instance.  As noted in https://launchpad.support.sap.com/#/notes/2055470, scale-out BW is supported with up to 16 nodes, bringing the maximum supported BW environment to a whopping 256TB.

As impressive as those stats are, it should also be noted that SAP provided new core-to-memory (CTM) guidance, with the 24TB OLTP system sized at 176 cores, which works out to 140GB/core, up from the previous 113.7GB/core at 16TB.  The 16TB OLAP system, sized at 192 cores, translates to 85.3GB/core, up from the previous 50GB/core for 4-socket and larger systems.

By comparison, the maximum supported sizes for Intel Skylake systems are 6TB for OLAP and 12TB for OLTP, which correlates to 27.4GB/core OLAP and 54.9GB/core OLTP.  In other words, SAP has published numbers which suggest Power Systems can handle workloads that are 2.7x (OLAP) and 2x (OLTP) the size of the maximum supported Skylake systems.  On the CTM side, this works out to a maximum of 3.1x (OLAP) and 2.6x (OLTP) better performance per core for Power Systems over Skylake.
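The arithmetic behind those ratios is simple enough to check.  Sizes and core counts are the ones quoted above, with 8 x 28-core sockets assumed for the Skylake maximums:

```python
# Core-to-memory (CTM) arithmetic using the supported sizes quoted above.
GB_PER_TB = 1024

configs = {
    "Power OLTP   (24TB / 176 cores)": (24, 176),
    "Power OLAP   (16TB / 192 cores)": (16, 192),
    "Skylake OLTP (12TB / 224 cores)": (12, 224),   # 8 x 28-core sockets
    "Skylake OLAP  (6TB / 224 cores)": (6, 224),
}
for name, (tb, cores) in configs.items():
    print(f"{name}: {tb * GB_PER_TB / cores:.1f} GB/core")

print(f"capacity advantage: OLAP {16 / 6:.1f}x, OLTP {24 / 12:.0f}x")
```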

Full disclosure: these numbers do not represent the highest-scaling Intel systems.  In order to find those, you must look at the previous generation of systems.  Some may consider them obsolete, but for customers that must scale beyond 6TB/12TB (OLAP/OLTP) and are unwilling or unable to consider Power Systems, an immediately sunk investment may be their only choice.  (Note to customers in this undesirable predicament: if you really want an independent, third-party verification of potential obsolescence, ask your favorite leasing companies, not associated with or owned by the vendor, what residual value they would assume after 1 year for these systems vs. what they would assume for similar Skylake systems after 1 year.)

The previous “generation” of HPE Superdome, “X”, which as discussed in my last blog post shares 0% of its technology with the Skylake-based HPE Superdome “Flex”, was supported up to 8TB/16TB (OLAP/OLTP) with 384 cores for both, resulting in CTM of 21.3GB/42.7GB per core.  The SGI-derived HPE MC990 X, which is the real predecessor to the new “Flex” system, was supported up to 4TB OLAP with 192 cores and 20TB OLTP with 480 cores.

Strangely, “Flex” is only supported for HANA with 2 nodes or chassis, where the “MC990 X” was supported with up to 5 nodes.  It has been over 4 months since “Flex” was announced, and at announcement HPE loudly proclaimed that “Flex” could support 48TB with 8 chassis/32 sockets https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/.  Since that time, some HPE reps have been telling customers that 32TB support with HANA was imminent.  One has to wonder what the hold-up is.  First it took a couple of months just to get 128GB DIMM support.  Now, it is taking even longer to get more than 2-node support for HANA.  If I were a potential HPE customer, I would be very curious and asking my rep about these delays (and I would have my BS detector set to high sensitivity).

Customers have now been presented with a stark contrast.  On one side, Power Systems has been on a roll: growing market share in HANA, regular increases in supported memory sizes, the ability to handle the largest single-image HANA memory sizes of any vendor, outstanding mainframe-derived reliability and radically better flexibility, with built-in virtualization and support for a maximum of 8 concurrent production HANA instances, or 7 production plus many dozens of non-prod HANA, application servers, non-HANA DBs and/or a wide variety of other applications supported in a shared pool, all at competitive price points.

On the other hand, Intel-based HANA systems seem to be stuck in a rut, with decreased maximum memory sizes (admittedly, this may be temporary), anemic increases in CTM, improved RAS that is still not in the league of Power Systems, and VMware-based virtualization support that is filled with caveats, limitations, overhead and poor, at best, sharing of resources.

March 28, 2018

HPE Superdome is dead, but HPE marketing continues its deceptive ways.

Today, 11/6/17, HPE announced the “New” Superdome Flex.  If you did not look too closely, you would think that this was some sort of descendant of Superdome.  After all, the Integrity Superdome took the original Superdome and replaced PA-RISC chips and the SX1000 cell controller with Itanium chips and a faster SX2000 cell controller.  Superdome 2 took this further by upgrading to the latest Itanium chips and an even faster SX3000 cell controller and by moving from a cell-board to a blade configuration.  Superdome X changed out the Itanium chips for Intel Xeon chips, which it upgraded over several generations.  So, it would be only natural to think that Superdome Flex did something similar, and that is exactly what HPE wants you to think.

Except this is not even remotely like any prior Superdome and has inherited almost nothing from it.  In fact, it is a very straightforward descendant of the SGI UV 300H, which HPE renamed the MC990 X after the acquisition.  A glance at the front of the “new” system shows the same basic design, a 4-socket, 5U chassis, even down to the unique diagonal handles on the fans, but they apparently moved the NUMAlink fabric ports (no longer called that; renamed Superdome Flex ports) from the back to the front, perhaps to get rid of a little of the rat’s nest of cables which defined the SGI UV 300H.  This means there is no SX3000 or crossbar switch in the Flex, and the blade design is gone.  Even the memory DIMMs are different, which implies that nothing could be moved from an old Superdome X to a new “Flex” other than perhaps some old PCIe adapters.

So, if the entire design is based on an SGI-acquired technology and it shares nothing with its “namesake”, one would need to have avoided that course in ethics in high school or college to find it appropriate to suggest to customers that this is a related technology.  Imagine if Honda changed the engine, frame, transmission, trim and body style but called their new car an Accord “Flex” because it used the same bumper and tire sizes; would you feel as if they were trying to manipulate you?

Back to the more important topic, Superdome is now dead!  I have been saying this for a while and blogged about this several months ago.  I suggested that any customer considering investing in this technology view it as instantly obsolete and a sunk investment.  I pointed out the huge investment in ccNUMA interconnect technologies and how it was hard to imagine how HPE could afford to invest in 2 different ones at the same time, so only one system was likely to survive.  I explained that the SGI technology offered more space and power to host the new, larger, higher wattage and heat dissipating Skylake processors.  It appears that my projections were correct.  For customers that ignored that advice, I just hope you got a really great price and don’t mind paying a lot for old technology for any upgrades or dumping your old systems at a huge financial loss.  For any customer still considering a Superdome X, the writing is no longer on the wall.  It is on HPE’s web site.  https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/

Currently, no white papers have been published showing the architecture and detailed specs of this “new” system, only a relatively high-level spec sheet.  Perhaps HPE is too embarrassed to publish them, since they would likely resemble the SGI UV 300H in way too many ways, including the old rat’s nest of 4-bit-wide interconnect cables.  Once they do, I will investigate and will likely publish a separate post to share what I find.

On the SAP front, new HANA appliance specs have been published for “Flex”.  It is interesting, and again embarrassing for HPE, that only configs up to 8 sockets are shown, with less BWoH memory support, at a 6TB maximum, than the old, and now obsolete, Superdome X.  Even more interesting is the lack of SoH and S/4 configs, and I have a suspicion as to why.  It turns out that the spec sheet does have one interesting point after all.  It shows that the maximum-size memory DIMMs are 64GB and the number of DIMM slots is 48, with a maximum supported memory of 3TB per chassis, i.e. half of what is necessary to support the 6TB per 4 sockets that other competing Intel vendors support.
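The per-chassis ceiling follows directly from the spec-sheet figures just quoted:

```python
# Per-chassis memory ceiling from the spec-sheet figures quoted above.
dimm_slots, max_dimm_gb = 48, 64
per_chassis_tb = dimm_slots * max_dimm_gb / 1024
print(f"max memory per 4-socket Flex chassis: {per_chassis_tb:.0f} TB")            # 3 TB
print(f"vs. 6 TB per 4 sockets elsewhere: {per_chassis_tb / 6:.0%} of competing support")
```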

So, if you need a supported HANA configuration today with current-generation processors for BWoH beyond 6TB, look at any vendor other than HPE with 8-socket Skylake systems, or at IBM Power Systems.  If you need a supported SoH or S/4 configuration with current-generation processors, look at any vendor other than HPE; and beyond 12TB, only IBM Power Systems is supported at this level.

November 6, 2017

Support for HANA on Power with RedHat is finally here!!

SAP made a very exciting announcement this past Friday.  While it took a bit longer than expected, support for RHEL 7.3 with HANA on Power was announced by SAP in their usual overwhelming manner, i.e. they updated an SAP Note: 2235581 – SAP HANA: Supported Operating Systems.  RHEL is only supported on Power in Little Endian mode, i.e. it only works with HANA 2.0.  This support is incredibly important for customers that have established RHEL as their standard for Linux and were either reluctant to introduce a different Linux distribution or outright forbidden to by their corporate standards.  Taken together with the TDI Phase 5 SAPS-based sizing announcement, yet another element that was inhibiting the already explosive growth of HANA on Power has been removed.  I described that announcement as allowing the use of 5th gear after being limited to only 4.  Taking this metaphor a step further, RHEL support is like disengaging the parking brake.  I should mention that IBM does not develop a Linux variant of its own, nor does it endorse any particular variety.  As such, I will not suggest any advantage of running RHEL or SLES for HANA and recommend, if a company has no firm policy either way, that you ask each distro partner to explain why they think theirs is better for HANA than the other.  At this time, both are supported only with PowerVM on Power Systems and with the exact same limits and multi-tenant/multi-VM flexibility.

October 9, 2017

HPE, still playing fast and loose with the facts about SAP HANA on Power

Writing a blog post would be so much simpler if IBM permitted me to lie, but that is prohibited.  That is clearly not the case at HPE, see this recent blog post: https://community.hpe.com/t5/Alliances/SAP-HANA-runs-best-on-x86-Period/ba-p/6971659#.WZxDPa01TxW

It contains so many lies, it is hard to know where to start.  Let’s start with the biggest one: the claim that there is a 10x difference in the performance KPIs required by SAP to certify and ship a HANA appliance vs. a solution certified for TDI only.

You really have to love those lies that are refuted by such easily obtained facts from documentation that is apparently not used by HPE, called SAP Notes.  SAP Note 1943937 specifically states: “All HWCCT tests of appliances (compute servers) certified with scenario HANA-HWC-AP SU 1.1 or HANA-HWC-AP RH 1.1 must use HWCCT of SAP HANA SPS10 or higher or a related SAP HANA revision.”  Interesting that appliances must pass the same HWCCT tests against SAP’s KPIs as TDI.  So, based on HPE’s blog post, does this mean that if an HPE appliance compute server is used for TDI, it will perform 10x worse than if it is used in an appliance?  That would imply that the secret sauce of HPE’s appliances is so incredible that it acts like a dual turbocharger on a car!

The blog post goes on to say, “It (Power) ‘works’ … but it is just held to a ~10x lower standard without any of the performance optimizations attributed to SAP’s co-innovation efforts with Intel.”  OK, two lies in one sentence; obviously going for the gold here.  As we have already discussed the 10x lie, let us just look at the second one, i.e. performance optimizations.  Specifically, the blog calls out AVX and TSX as the performance optimizations for the Intel platform.  They are correct that those exact optimizations don’t exist on Power; instead of AVX (Advanced Vector Extensions), Power has two fully symmetric vector pipelines accessed via VSX (Vector-Scalar eXtension) instructions, which HANA has been “optimized” to use in the same manner as AVX.  And TSX, a.k.a. Transactional Synchronization Extensions, came out after POWER8 Transactional Memory, and HANA was optimized for both at the same time.

The blog post also stated, “Intel E7v3 CPUs for HANA (TSX and AVX) that offer a 5x performance boost over older Intel E7 or Power 8 CPUs.“  Awesome, but where is the proof behind this statement?  Perhaps a benchmark?  Nope, not one published on SAP’s site; not even the old SD benchmark backs up this claim (it shows, by the way, almost the same SAPS/core for E7v3 vs. E7v2 and way less than POWER8, though maybe that benchmark does not use those optimizations?).  OK, then maybe the sizing certifications show this?  Nope.  A 4-socket CS500 Ivy Bridge system (E7v2) is published as supporting up to a 2TB SoH solution, where a 4-socket CS500 Haswell system (E7v3) is shown as supporting up to a 3TB SoH solution.  So far, that is just 50% more, not 5x, but perhaps HPE can’t tell the difference between 0.5 and 5.0?  But didn’t Haswell have more cores per socket?  Yes, it had 18 cores/socket vs. 15 cores/socket for Ivy Bridge, i.e. 20% more cores/socket.  So, E7v3-based systems could actually host 25% more memory, and associated workload, per core than E7v2-based systems.  Of course, I am sure that the switch from DDR3 to DDR4 from E7v2 to E7v3 had nothing to do with this improvement.  So, the 5x performance boost is clearly nothing but a big fat lie.
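The arithmetic, using the certification figures just cited, shows how far from 5x this really is:

```python
# Per-core supported memory, E7v2 (Ivy Bridge) vs. E7v3 (Haswell), CS500 figures above.
ivy_tb, ivy_cores = 2, 4 * 15    # CS500 E7v2: 2TB SoH on 4 x 15-core sockets
has_tb, has_cores = 3, 4 * 18    # CS500 E7v3: 3TB SoH on 4 x 18-core sockets

ivy = ivy_tb * 1024 / ivy_cores  # ~34.1 GB/core
has = has_tb * 1024 / has_cores  # ~42.7 GB/core
print(f"E7v2: {ivy:.1f} GB/core, E7v3: {has:.1f} GB/core, "
      f"gain per core: {has / ivy - 1:.0%}")   # ~25%, a long way from 5x
```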

And the hits just keep coming.  The next statement is just lovely: “Naturally, this only matters if you want to be able to call SAP support to get help on nuance performance issues impacting your productive SAP HANA deployment.“  I guess he is trying to suggest that SAP won’t help you with performance problems if you are running any TDI solution, including Power Systems, except this is contradicted by all of the SAP Notes about TDI and HWCCT, not to mention the experience of customers who have implemented HANA on Power.

“IBM wants to be the king of legacy businesses like mainframe and UNIX. That’s pretty much the only platforms they have left. So now that they can state that HANA “works” on Power, they can make a case to their AIX/Power customers that they should stay on AIX/Power for SAP and HANA and avoid what IBM claims to be an ‘oh so painful’ Unix to x86 migration.“  HPE, in suggesting that you can run HANA on AIX/Power, when HANA only runs on Linux on Power, might not be telling a lie but might just be expressing ignorance and an inability to use sophisticated and obscure search tools like Google.  As for the suggestion that IBM has called a SAP heterogeneous migration “oh so painful”, this ignores the fact that we have done hundreds of such migrations, from HP-UX among others.  Perhaps the author is reflecting HPE’s own migration experience with what every other migration provider sees as a very well understood and fully supported SAP process.  As to IBM’s motivation, HPE is trying to suggest that IBM is only in the HANA business to support its legacy SAP on AIX/Power customers.  Just looking at the thriving business of HANA on Power, with over 850 HANA on Power wins since becoming a supported HANA provider, might suggest otherwise.  How about the complete absence of any quotes, marketing materials or other documentation showing that IBM is in this market for any reason other than that it is the future of SAP and IBM intends to remain a premier partner of SAP and our customers?

Taken together, this blog post shows that HPE must be using the advanced Skylake processors in their Superdome X and MC990 X (SGI UV 300H) to generate lies at an astounding pace.  Oh wait, I forgot (not really) that HPE still has not announced support for Skylake in their high-end systems and is only certified for BWoH, not SoH or S/4HANA, with Skylake, and only up to 3TB at that … one month after the announcement of Skylake!  Wow, I guess this shows simultaneously how much their pace of technology innovation has slowed and how much their partnership with SAP has decreased!

And let’s end on their last amazing sentence: “Every day you put off that UNIX to x86 migration, you are running HANA in a performance degraded mode with production support limitations.“  Yes, HPE seems to be recommending that you migrate your UNIX-based systems, i.e. those running Business Suite 7, to x86.  In other words, do two migrations: once from UNIX to x86 and a second time to HANA.  Sounds like a total disconnect from business reality!  As to the second part of that sentence, it simply does not make sense, so we are not going to attribute that to a lie, but to a simple logic error.

What are we to conclude about this blog post?  Taken on its own, it is the work of a rogue employee.  Taken with the deluge of other, similarly misleading statements and outright lies emerging from HPE, we see a trend.  HPE has gone from being a well-respected systems supplier to a struggling company that promotes and condones lies and hires less-than-competent individuals to propagate misinformation.  Sounds more like a failing communist state than a company worthy of a customer’s trust.

August 22, 2017

Is your company ready to put S/4HANA into the cloud? – Part 3

The third of a four-part discussion about corporate requirements in support of S/4HANA and the questions to be asked of cloud providers when considering placing this landscape in the cloud.

  • What backups must be performed? Some cloud providers might include daily, weekly, incremental or no backups.  They may include raw image backups vs. database aware backups. Just make sure that whatever backups you require are supported and included in the price for the cloud services.
    • Should your corporate backup solution be used? For flexibility as well as visibility reasons, you may prefer to use the backup solution that you have already tested and approved, provided it works in the cloud environment.  Or, perhaps, it is an audit requirement.
    • Are extra server(s) required for backup solution? Your security and audit departments may not permit your backups to share infrastructure, including the network, with any other clients in a provider’s cloud environment.  One or more servers with or without dedicated network infrastructure may be required.
    • How quickly must backups be performed and restored, and what is the RTO after database corruption? SAP HANA backups can generally take their time, as long as the aggregate transfer rate is sufficient to back up the entire database prior to the next backup.  Well, that is unless you want to be able to restore to the prior day in the event of database corruption, in which case you may want the backup finished prior to a specific time on the same day.  Just make sure you think about this and have the infrastructure necessary to meet your required backup speed priced (a rough sizing sketch follows this list).  Even more important is how quickly the backup can be restored, as well as what services are offered to restore the backups and roll forward any logs created since that backup was initiated, i.e. the RTO for getting back up and running.
    • Will backups be available to DR systems? Not to be overlooked are backups in DR.  Not only will you want to be able to take backups in DR, but you would also need to be able to restore from a backup in the primary site to the DR site, not so easily done if the primary site is truly down and unavailable.  This means that you would need the backup server to have bi-directional replication with DR site as well as testing to ensure this works correctly.  What incremental costs are required for the replicated backup bandwidth?
  • Security – First a disclaimer. I am not a security expert, so may be only addressing a subset of the real requirements.
    • How will corporate single sign-on operate with the cloud solution? Whether you use Microsoft Active Directory, CA SSO, IBM Tivoli Access Manager or one of the dozens of other products on the market, you are probably using this solution to authenticate and authorize users in SAP.  Make sure it can integrate with the potential S/4HANA system in the cloud.  Make sure that your security administrators can control policies, assign and revoke privileges and audit as necessary.
    • Must communications to/from cloud be encrypted and what solution will be used? We all know that hackers want to access your data for malicious reasons, financial gain or industrial espionage.  Do you want your key strokes and data to and from the cloud to transmit in clear text?  If not, which solution will you use and how might the use of that solution impact performance?  How about between application servers and database servers at the cloud provider?
    • How will data stored in cloud be secured? It is one thing to have your personal email stored on storage devices shared with millions of other users, but do your corporate polices allow for corporate databases to be located on storage devices that are shared with other customers?  If not, do you require dedicated devices, of what kind and at what cost?
    • How will backups be secured? We touched on backups earlier, but this is now specific to the physical media on which those backups are stored, not to mention replicated to the DR site, as well as any external media that you might require, e.g. tapes, DVDs or removable disks.  How can you be assured that no one makes a copy, removes a disk, etc.?
  • What are the non-production requirements? All of the above was just talking about production, but most customers have an even more extensive non-production landscape.  Many, if not most, of those same questions can be applied to non-production.  Remember, there are few employees that command a higher salary than your developers, whether internal or external.  They create corporate intellectual property and often work with copies of production data.  Their workloads vary based on project demands, phases of implementation or problems to be addressed.  Many customers utilize DR capacity or underutilized capacity on HA systems to address non-prod requirements, however this may not be an option in a cloud environment, or if it is, at what cost?
    • How will images be created/copied, managed, isolated and secured? You may use SAP LaMa (Landscape Management, previously known as Landscape Virtualization Management (LVM)), backup/restore, disk replication, TDMS, BDLS and/or custom scripts to populate non-prod systems.  Will those tools and techniques work in the cloud, and at what cost?
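As promised above, here is a rough backup-window sketch.  The database size and throughput figures are purely illustrative placeholders, not a statement about any provider’s offering:

```python
# Rough backup-window estimate: how long does a full backup take at a given
# aggregate throughput? Numbers are illustrative placeholders only.

def backup_hours(db_size_tb: float, throughput_mb_per_s: float) -> float:
    """Hours needed to stream a full backup at a given aggregate throughput."""
    return db_size_tb * 1024 * 1024 / throughput_mb_per_s / 3600

db_size_tb = 10.0                       # illustrative database size
for mb_per_s in (500, 1000, 2000):      # illustrative aggregate backup bandwidth
    print(f"{mb_per_s:>4} MB/s -> full backup in {backup_hours(db_size_tb, mb_per_s):.1f} h")
```

Run the same arithmetic against your own database size, backup deadline and the bandwidth a provider is actually willing to commit to, and you will quickly see whether their standard offering fits or whether dedicated backup infrastructure must be priced in.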

 

The last part of this discussion will deal with migration challenges when moving to the cloud and lastly, a few of the reasons that are often used to justify a move to the cloud.

May 5, 2017

Is your company ready to put S/4HANA into the cloud? – Part 2

And now, the details and rationale behind the questions posed in Part 1.

  • What is the expected memory size for the HANA DB? Your HANA instances may fit comfortably within the provider’s offerings, or may force a bare-metal option, or may not be offered at all.  Equally important is expected growth as you may start within one tier and end in another or may be unable to fit in a provider’s cloud environment.
  • What are your performance objectives and how will they be measured/enforced? This may not be that important for some non-production environments, but production is used to run part, or all, of a company.  The last thing you want is to find out that transaction performance is not measured, or that no enforcement exists for missing an objective.  Even worse, what happens if these are measured, but only up to the edge of the provider’s cloud, not inclusive of WAN latency?  Sub-second response time is usually required, but if the WAN adds 0.5 seconds, your end users may not find this acceptable.  How about if the WAN latency varies?  The only thing worse than poor performance is unpredictable performance.
    • Who is responsible for addressing any performance issues? No one wants finger pointing so is the cloud provider willing to be responsible for end-user performance including WAN latency and at what cost?
    • Is bare-metal required or, if shared, how much overhead and how much over-commitment? One of the ways that some cloud providers offer a competitive price is by using shared infrastructure, virtualized with VMware or PowerVM for example.  Each of these has different limits and overhead, with VMware noted by SAP as having a minimum of 12% overhead and PowerVM with 0%, as the benchmarks were run under PowerVM to start with.  Likewise, VMware environments are limited to 4TB per instance, and often multiple different instances may not run on shared infrastructure, based on a very difficult-to-understand set of rules from SAP.  PowerVM has no such limits or rules and allows up to 8 concurrent production instances, each up to 16TB for S/4 or SoH, up to the physical limits of the system.  If the cloud provider is offering a shared environment, are they running under SAP’s definition of “supported” or are they taking the chance and running “unsupported”?  Lastly, if it is a shared environment, is it possible that your performance or security may suffer because of another client’s use of that shared infrastructure?
  • What availability is required? 99.8%?  99.9%?  99.95%?  Four nines or higher?  Not all cloud providers can address the higher limits, so you should be clear about what your business requires (see the downtime sketch after this list).
  • Is HA mandatory? HA is usually an option, at a higher price.  The type of HA you desire may, or may not, be offered by each cloud provider.  Testing of that HA solution periodically may, or may not be offered so if you need or expect this, make sure you ask about it.
    • For HA, what are the RPO, RTO and RTP time limits? Not all HA solutions are created equal.  How much data loss is acceptable to your business and how quickly must you be able to get back up and running after a failure?  RTP is a term that you may not have heard too often; it refers to “Return to Processing”, i.e. it is not enough to get the system back to a point of full data integrity and ready to work, but the system must be at a point that the business expects, with a clear understanding of which transactions have or have not been committed.  Imagine a situation where a customer places an order or pays a bill, but it gets lost, or where you pay a supplier and mistakenly pay them a second time.
  • Is DR mandatory and what are the RPO, RTO and RTP time limits? The same rationale applies here as for HA; once again, DR, when available, is always offered at an additional charge and is highly dependent on the type of replication used, with disk-based replication usually less expensive than HANA System Replication but with a longer RTO/RTP.
    • Incremental costs for DR replication bandwidth? Often overlooked is the network costs for replicating data from the primary site to the DR site, but clearly a line item that should not be overlooked.  Some customers may decide to use two different cloud providers for primary and DR in which case not only may pricing for each be different but WAN capacity may be even more critical and pricey.
    • Disaster readiness assessment, mock drills or full, periodic data center flips? Having a DR site available is wonderful provided when you actually need it, everything works correctly.  As this is an entire discussion unto itself, let it be said that every business recovery expert will tell you to plan and test thoroughly.  Make sure you discuss this with a potential cloud provider and have the price to support whatever you require included in their bid.
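To make the availability targets above concrete, here is a quick conversion into allowable downtime per year:

```python
# Annual downtime implied by common availability targets.
HOURS_PER_YEAR = 365 * 24

for availability in (99.8, 99.9, 99.95, 99.99):
    downtime = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% availability -> {downtime:.1f} hours of downtime per year")
```

The difference between 99.8% and four nines is the difference between roughly two working days of outage per year and under an hour, which is why the precise number in the contract matters so much.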

 

I said this would be a two part post, but there is simply too much to include in only 2 parts, so the parts will go on until I address all of the questions and issues.

May 4, 2017

Is your company ready to put S/4HANA in the cloud? – part 1

Cloud computing for SAP is of huge interest to SAP clients as well as providers of cloud services including IBM and SAP.  Some are so bought into the idea of cloud that for any application requirement “the answer is cloud; now what was the question?”  As my friend Bob Gagnon wrote in a recent LinkedIn blog post[i], is S/4HANA ready for the cloud? The answer is Yes!  But is your organization ready to put S/4HANA in the cloud?  The answer to that question is the subject of this two part blog post.

Bob brings up several excellent points in his blog post, a few of which I will discuss here, plus a few others.  These are questions that every customer should either be asking their potential cloud providers or asking of themselves.  They are not listed in any particular order, nor are they intended to cast cloud in a negative light; they merely point out operational requirements that must be addressed whether S/4HANA, or other SAP or non-SAP applications, are placed in a cloud or hosted on-premise.  In parts 2, 3 and 4, I will discuss the background and rationale for each question.  Part 4 also discusses some of the points that cloud proponents often tout, with some comments from me about ways of achieving similar benefits with on-premise solutions.

  • What is the expected memory size for the HANA DB?
  • What are your performance objectives and how will they be measured/enforced?
    • Is bare-metal required or if shared, how much overhead, how much over-commitment?
  • What availability is required?
  • Is HA mandatory?
    • What are the RPO, RTO and RTP time limits?
  • Is DR mandatory?
    • What are the RPO, RTO and RTP time limits?
    • Incremental costs for DR replication bandwidth?
    • Disaster readiness assessment, mock drills or full, periodic data center flips?
  • What backups must be performed?
    • Should corporate backup solution be used?
    • Are extra server(s) required for backup solution?
    • How quickly must backups be performed and restored, and what is the RTO?
    • Will backups be available to DR systems?
    • What incremental costs are required for WAN copies?
  •  Security
    • How will corporate single sign-on operate with cloud solution?
    • Must communications to/from cloud be encrypted and what solution will be used?
    • How will data stored in cloud be secured?
    • How will backups be secured?
  • What are non-prod requirements, how will images be created/copied, managed, isolated, secured?
  • How will you get from your current on-premise SAP landscape to the cloud?
    • Can a migration be completed within the outage window that your business allows?
    • At what cost, complexity and risk?

[i] https://www.linkedin.com/pulse/s4haha-is-cloud-ready-robert-gagnon?trk=v-feed&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BwtSy49CsXhCWw7ppB5yiaA%3D%3D

 

 

 

May 3, 2017