SAPonPower

An ongoing discussion about SAP infrastructure

HPE Superdome is dead, but HPE marketing continues its deceptive ways.

Today, 11/6/17, HPE announced the “New” Superdome Flex.  If you did not look too closely, you might think that this was some sort of descendant of Superdome.  After all, the Integrity Superdome took the original Superdome and replaced PA-RISC chips and the SX1000 cell controller with Itanium chips and a faster SX2000 cell controller.  Superdome 2 took this further by upgrading to the latest Itanium chips and an even faster SX3000 cell controller and by moving from a cell board to a blade configuration.  Superdome X changed out the Itanium chips for Intel Xeon chips, which it upgraded over several generations.  So, it would be only natural to think that Superdome Flex did something similar, and that is exactly what HPE wants you to think.

Except, this is not even remotely like any prior Superdome and has inherited almost nothing from it.  In fact, this is a very straightforward descendant of the SGI UV 300H, which HPE renamed the MC990 X after the acquisition.  A glance at the front of the “new” system shows the same basic design, a 4-socket, 5U chassis, even down to the unique diagonal handles on the fans, but they apparently moved the NUMAlink fabric ports (no longer called that; renamed Superdome Flex ports) from the back to the front, perhaps to get rid of a little of the rat’s nest of cables which defined the SGI UV 300H.  This means there is no SX3000 or crossbar switch in the Flex, and the blade design is gone.  Even the memory DIMMs are different, which implies that nothing could be moved from an old Superdome X to a new “Flex” other than perhaps some old PCIe adapters.

So, if the entire design is based on SGI-acquired technology and it shares nothing with its “namesake”, one would need to have skipped that ethics course in high school or college to find it appropriate to suggest to customers that this is a related technology.  Imagine if Honda changed the engine, frame, transmission, trim and body style but called their new car an Accord “Flex” because it used the same bumper and tire sizes.  Would you feel as if they were trying to manipulate you?

Back to the more important topic: Superdome is now dead!  I have been saying this for a while and blogged about it several months ago.  I suggested that any customer considering investing in this technology view it as instantly obsolete and a sunk investment.  I pointed out the huge investment required for ccNUMA interconnect technologies and how hard it was to imagine HPE affording to invest in two different ones at the same time, meaning only one system was likely to survive.  I explained that the SGI technology offered more space and power to host the new, larger, higher-wattage and heat-dissipating Skylake processors.  It appears that my projections were correct.  For customers that ignored that advice, I just hope you got a really great price and don’t mind paying a lot for old technology for any upgrades or dumping your old systems at a huge financial loss.  For any customer still considering a Superdome X, the writing is no longer on the wall.  It is on HPE’s web site.  https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/

Currently, no white papers have been published showing the architecture and detailed specs of this “new” system, only a relatively high-level spec sheet.  Perhaps HPE is too embarrassed to publish them since the design would likely resemble the SGI UV 300H in way too many ways, including the old rat’s nest of 4-bit wide interconnect cables.  Once they do, I will investigate and will likely publish a separate post to share what I find.

On the SAP front, new HANA appliance specs have been published for “Flex”.  It is interesting, and again embarrassing for HPE, that only up to 8-socket configs are shown, with less BWoH memory support (6TB max) than the old, and now obsolete, Superdome X.  Even more interesting is the lack of SoH and S/4 configs, and I have a suspicion as to why.  It turns out that the spec sheet does have one interesting point after all.  It shows that the maximum memory DIMM size is 64GB and the number of DIMM slots is 48, for a maximum supported memory of 3TB per chassis, i.e. half of what is necessary to support the 6TB per 4 sockets that other competing Intel vendors support.

So, if you need a supported HANA configuration today with current-generation processors for BWoH beyond 6TB, look at any vendor other than HPE with 8-socket Skylake systems, or at IBM Power Systems.  If you need a supported SoH or S/4 configuration with current-generation processors, look at any vendor other than HPE, and beyond 12TB, only IBM Power Systems is supported at that level.


November 6, 2017

TDI Phase 5 – SAPS based sizing bringing better TCO to new and existing Power Systems customers

SAP made a fundamental and incredibly important announcement this week at SAP TechEd in Las Vegas: TDI Phase 5 – SAPS-based sizing for HANA workloads.  Since its debut, HANA has been sized based on a strict memory-to-core ratio determined by SAP based on workloads and platform characteristics, e.g. processor generation, MHz, interconnect technology, etc.  This might have made some sense in the early days, when little was known about the loads that customers were likely to experience and SAP still had high hopes for enabling all customer employees to become knowledge workers with direct access to analytics.  Over time, with very rare exception, it turned out that CPU loads were far lower than the ratios might have predicted.

I have only run into one customer in the past two years that was able to drive a high utilization of their HANA systems and that was a customer running an x86 BW implementation with an impressively high number of concurrent users at one point in their month.  Most customers have experienced just the opposite, consistently low utilization regardless of technology.

For many customers, especially those running x86 systems, this has not been an issue.  First, it is not a significant departure from what many have experienced for years, even those running VMware.  Second, to compensate for relatively low memory and socket-to-socket bandwidth combined with high latency interconnects, many x86 systems work best with an excess of CPU.  Third, many x86 vendors have focused on HANA appliances which are rarely utilized with virtualization and are therefore often single instance systems.

IBM Power Systems customers, by comparison, have been almost universal in their concern about poor utilization.  These customers have historically driven high utilization, often over 65%.  Power has up to 5 times the memory bandwidth per socket of x86 systems (without compromising reliability) and very wide and parallel interconnect paths with very low latencies.  HANA has never been offered as an appliance on Power Systems, instead being offered only using a Tailored Datacenter Infrastructure (TDI) approach.  As a result, customers view on-premise Power Systems as a sort of utility, i.e. that they should be able to use them as they see fit and drive as much workload through them as possible while maintaining the Service Level Agreements (SLA) that their end users require.  The idea of running a system at 5%, or even 25%, utilization is almost an affront to these customers, but that is what they have experienced with the memory to core restrictions previously in place.

IBM’s virtualization solution, PowerVM, enabled SAP customers to run multiple production HANA workloads (up to 8 on the largest systems), or a smaller mix of production workloads (up to 7) plus a shared pool of CPU resources within which an almost unlimited mix of VMs could run, including non-prod HANA, application servers, as well as non-SAP and even other OS workloads, e.g. AIX and IBM i.  In this mixed mode, some of the excess CPU resource not used by the production workloads could be utilized by the shared-pool workloads.  This helped drive up utilization somewhat, but not enough for many.

These customers would like to do what they have historically done.  They would like to negotiate response time agreements with their end user departments then size their systems to meet those agreements and resize if they need more capacity or end up with too much capacity.

The newly released TDI Overview document (http://bit.ly/2fLRFPb) describes the new methodology: “SAP HANA quicksizer and SAP HANA sizing reports have been enhanced to provide separate CPU and RAM sizing results in SAPS”.  I was able to verify Quicksizer showing SAPS, but not the sizing reports.  An SAP expert I ran into at TechEd suggested that getting the sizing reports to determine SAPS would be a tall order since they would have to include a database of SAPS capacity for every system on the market as well as the number of cores and MHz for each one.  (In a separate blog post, I will share how IBM can help customers calculate utilized SAPS on existing systems.)  Customers are instructed to work with their hardware partner to determine the number of cores required based on the SAPS projected above.  The document goes on to state: “The resulting HANA TDI configurations will extend the choice of HANA system sizes; and customers with less CPU intensive workloads may have bigger main memory capacity compared to SAP HANA appliance based solutions using fixed core to memory sizing approach (that’s more geared towards delivery of optimal performance for any type of a workload).”
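To make the new methodology concrete, here is a minimal sketch, in Python, of the arithmetic a hardware partner would perform once the sizing tools hand back a SAPS figure.  The per-core SAPS ratings below are purely illustrative assumptions, not published results; actual values must come from benchmark data for the specific server model, frequency and configuration.

```python
import math

# Hypothetical per-core SAPS ratings -- illustrative placeholders, not published benchmark values.
SAPS_PER_CORE = {
    "power_server_example": 5500,
    "x86_server_example": 3000,
}

def cores_needed(required_saps: int, saps_per_core: int, headroom: float = 0.0) -> int:
    """Convert a sized SAPS requirement into a core count, with optional headroom."""
    return math.ceil(required_saps * (1.0 + headroom) / saps_per_core)

# Example: assume the sizing exercise reports a 150,000 SAPS requirement for a HANA workload.
required = 150_000
for server, rating in SAPS_PER_CORE.items():
    print(server, "->", cores_needed(required, rating, headroom=0.10), "cores incl. 10% headroom")
```

The point is simply that the core count now follows the measured workload rather than the memory footprint, which is what makes the savings discussed below possible.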

Using a SAPS-based methodology will be a good start and may result in fewer cores being required for the same workload than would previously have been calculated based on a memory/core ratio.  Customers that wish to allocate more or less CPU to those workloads will now have this option, meaning that an even more significant reduction of CPU may be possible.  This will likely result in much more efficient use of CPU resources, more capacity available to other workloads and/or the ability to size systems with fewer resources to drive down the cost of those systems.  Either way helps drive much better TCO by reducing the number and size of systems along with the associated datacenter and personnel costs.

Existing Power customers will undoubtedly be delighted by this news.  Those customers will be able to start experimenting with different core allocations, and most will find they are able to decrease their current HANA VM sizes substantially.  With the resources no longer required to support production, other workloads currently implemented on external systems may be consolidated onto the newly right-sized system.  Application servers, central services, Hadoop, HPC, AI, etc. are candidates to be consolidated in this way.

Here is a very simple example:  A hypothetical customer has two production workloads, BW/4HANA and S/4HANA, which require 4TB and 3TB of memory respectively.  For each, HA is required, as are Dev/Test, Sandbox and QA.  Prior to TDI Phase 5, using Power Systems, the 4TB BW system would require roughly 82 cores due to the 50GB/core ratio and the S/4 workload would require roughly 33 cores due to the 96GB/core ratio.  Including HA and non-prod, the systems might look something like:

[Figure: TDI Phase 4 landscape]

Note the relatively small number of cores available in the shared pool (might be less than optimal) and the total number of cores in the system. Some customers may have elected to increase to an even larger system or utilize additional systems as a result.  As this stood, this was already a pretty compelling TCO and consolidation story to customers.
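For reference, the 82- and 33-core figures in the example come straight from the fixed memory-to-core ratios.  A quick back-of-the-envelope check (my own arithmetic, using the ratios cited above):

```python
import math

GB_PER_TB = 1024

def ratio_based_cores(memory_tb: float, gb_per_core: int) -> int:
    """Pre-TDI-Phase-5 sizing: core count dictated purely by the memory-to-core ratio."""
    return math.ceil(memory_tb * GB_PER_TB / gb_per_core)

print("BW/4HANA:", ratio_based_cores(4, 50), "cores")  # 4TB at 50GB/core -> 82 cores
print("S/4HANA: ", ratio_based_cores(3, 96), "cores")  # 3TB at 96GB/core -> 32 cores (roughly 33 with rounding headroom)
```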

With SAPS-based sizing, the BW workload may require only 70 cores and S/4 only 21 cores (both are guesses based on early sizing examples; proper analysis of the SAP sizing reports and the per-core SAPS ratings of servers is required to determine actual core requirements).  The resulting architecture could look like:

[Figure: TDI Phase 5 estimated landscape]

Note the smaller core count in each system.  By switching to this methodology, lower-cost CPU sockets may be employed and processor activation costs decreased by 24 cores per system.  But the number of cores in the shared pool remains the same, so there is still room for improvement.

During a landscape session at SAP TechEd in Las Vegas, an SAP expert stated that customers will be responsible for performance and that CPU allocation will not be enforced by SAP through HWCCT as had been the case in the past.  This means that customers will be able to determine the number of cores to allocate to their various instances.  It is conceivable that some customers will find that instead of the 70 cores in the above example, 60, 50 or fewer cores may be required for BW, with decreased requirements for S/4HANA as well.  A customer choosing this more aggressive, hypothetical approach might see the following:

[Figure: TDI Phase 5 hypothetical landscape]

Note how the number of cores in the shared pool has increased substantially, allowing more workloads to be consolidated onto these systems, further decreasing costs by eliminating those external systems as well as by consolidating more SAN and network cards, decreasing computer room space and reducing energy/cooling requirements.

A reasonable question is whether these same savings would accrue to an x86 implementation.  The answer is: not necessarily.  Yes, fewer cores would also be required, but to take advantage of a similar type of consolidation, VMware must be employed, and if VMware is used, then a host of caveats must be taken into consideration:

  1) Overhead, reportedly 12% or more, must be added to the capacity requirements.
  2) I/O throughput must be tested to ensure load times, log writes, savepoints, snapshots and backup speeds that are acceptable to the business.
  3) Limits must be understood, e.g. the maximum memory in a VM is 4TB, which means that the 4TB BW instance above cannot grow by even 1KB.
  4) Socket isolation is required, as SAP does not permit the sharing of a socket in a HANA production/VMware environment, meaning that reducing core requirements may not result in fewer sockets, i.e. this may not eliminate underutilized cores in an Intel/VMware system.
  5) Non-prod workloads can’t take advantage of capacity not used by production, for several reasons, not the least of which is that SAP does not permit sharing of sockets between prod and non-prod VMs, not to mention the reluctance of many customers to mix prod and non-prod under a software hypervisor such as VMware even if SAP permitted this.

The bottom line is that most customers, through an abundance of caution or actual experience with VMware, choose to place production on bare metal and non-prod, which does not require the same stack as prod, on VMware.  Workloads which do require the same stack as prod, e.g. QA, are also usually placed on bare metal.  After closer evaluation, this means that TDI Phase 5 will have limited benefits for x86 customers.
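To illustrate how caveats 1 and 4 interact, here is a rough sketch of the arithmetic; the 12% overhead figure is the one cited above, while the cores-per-socket value is simply an illustrative assumption for a Skylake-class part, not a recommendation.

```python
import math

def x86_prod_allocation(sized_cores: int, overhead: float = 0.12, cores_per_socket: int = 28) -> dict:
    """Apply hypervisor overhead, then round up to whole sockets (no socket sharing for prod HANA under VMware)."""
    cores_after_overhead = math.ceil(sized_cores * (1 + overhead))
    sockets = math.ceil(cores_after_overhead / cores_per_socket)
    allocated = sockets * cores_per_socket
    return {
        "cores_after_overhead": cores_after_overhead,
        "sockets_required": sockets,
        "cores_allocated": allocated,
        "cores_stranded": allocated - cores_after_overhead,
    }

# Example: a SAPS-based sizing calls for 40 cores.
print(x86_prod_allocation(40))
# -> 45 cores after overhead, 2 whole sockets, 56 cores allocated, 11 cores stranded
#    that cannot be shared with non-prod under the production/VMware socket rules.
```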

This announcement is the equivalent of finally being allowed to use 5th gear on your car after having been limited to only 4 for a long time.  HANA on IBM Power Systems already had the fastest adoption in recent SAP history, with roughly 950 customers selecting HANA on Power in just 2 years.  TDI Phase 5 uniquely benefits Power Systems customers, which will continue the acceleration of HANA on Power.  Those individuals that recommended or made the decision to select HANA on Power will look like geniuses to their CFOs, as they will now get the equivalent of new system capacity at no cost.

September 29, 2017

Intel Skylake has been announced and the self-described HANA “market leader”, HPE, is curiously trailing the field

Intel announced general availability of their “Skylake” processor on the “Purley” platform last week.  Soon after, SAP posted certified HANA configurations for Lenovo and Fujitsu up to 8 sockets and 12TB memory for Suite on HANA (SoH) and S/4HANA (S4) and 6TB for BW on HANA (BWoH).  They also posted certified configurations for Dell and Cisco up to 4-socket systems with 6TB SoH/S4 and 3TB BWoH.  The certified configurations posted for HPE, which describes itself as the HANA market leader, only included up to 4-socket/3TB BWoH configurations, with no configurations for SoH/S4 and nothing for any larger systems.

It is still early and more certified configurations will no doubt emerge over time, but these early results do raise the question, “what is going on with HPE?”  I checked the most recent press releases from HPE and they did not even mention the Skylake debut, much less any SAP HANA certification.  If you Google the keywords HPE, Skylake and HANA, you may find a few discussions about HPE’s acquisition of SGI and my previous blog posts speculating about Superdome’s demise and HPE’s misleading of customers about this impending event, but nothing from HPE.

So, I will share a little more speculation as to what this slow start for HPE in the Skylake space might portend.

Option 1 – HPE is not investing the funds necessary to certify all of their possible configurations and SoH/S4.  Anyone that has been involved with the HANA certification process will tell you that it is very time consuming and expensive.  As you can see from HPE’s primary Intel-based competitors, they are all very eager to increase their market share and acted quickly.  Is HPE becoming complacent?  Are they facing financial restrictions that have not been made public?

Option 2 – HPE’s technology limitations are becoming apparent.  The Converged System 500 is based on ProLiant DL560/580 systems which support a maximum of 4 sockets.  These systems utilize Intel QPI and now UPI interconnect technologies, i.e. no custom ASICs or ccNUMA switches are required.  The CS900, based on the Superdome X, and the MC990 X (SGI UV 300H) utilize custom ASICs and, in the case of Superdome X, a set of ccNUMA switches.  As I speculated previously, Superdome X is probably at end of life, so it may never see another certification on SAP’s HANA site.  As to the MC990 X, the crystal ball is a bit hazier.  Perhaps HPE is trying to shoot for the moon and hit a number beyond the 20TB for SoH/S4 that is currently supported, meaning a much longer and more complex set of certification tests.  Or perhaps they are running into technical challenges with the new ASICs required to support UPI.

Option 3 – The MC990 X is going to officially become HPE’s only high-end offering to support Skylake and subsequent processors, and Superdome X is going to be announced as end of life.  If this were to happen, it would mean that anyone that had recently purchased such a system would have purchased a system that is immediately obsolete.

If Option 1 turns out to be true, one would have to be concerned about HPE’s future in the HANA space.  If Option 2 turns out to be true, one would have to be really concerned about HPE’s future in the HANA space.  And if Option 3 turns out to be true, why would HPE be waiting?  The answer may be inventory.  If HPE has a substantial inventory of “old” Broadwell-based blades and Superdome X chassis, they will undoubtedly want to unload these at the highest price possible, and they know that the value of obsolete systems after such an announcement would drop below the cost of manufacturing.

So, you pick the most likely scenario.  Worst case for HPE is that they are just a little slow or shooting too high.  Worst case for customers is that they purchase a HANA system based on Superdome X and end up with a few hundred thousand dollar boat anchor.  If you work for a company considering the purchase of an HPE Superdome X solution, you may want to ask about its future and, if you find it is at end of life, select another solution for your SAP HANA requirements.

Inevitably, more systems will be published on SAP’s certification page, https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/appliances.html#viewcount=100&categories=certified%2CIntel%20Skylake%20SP .  When that happens, especially if any of my predictions turn out to be true or if they are all wrong and another scenario emerges, I will post an update.

July 20, 2017

HPE, in an act of desperation, is spreading misinformation about SAP HANA on IBM Power Systems

Misinformation is too mild a characterization of HPE’s behavior.  HPE, or some of its employees, are showing customers charts with a variety of statements which are simply untrue.  By any normal definition, this is called a lie.  This is unethical and unprofessional.  I will repeat what it says in my profile: these are my opinions, not a reflection of those of IBM.

You may have seen the blog post from Vicente Moranta: https://www.linkedin.com/pulse/truth-wild-lies-being-told-hanaonpower-vicente-moranta or my own post on IBM Systems Blog: https://www.ibm.com/blogs/systems/can-you-tell-hana-on-ibm-power-systems-fact-from-fiction/ . At IBM, we have this set of ethical rules called “IBM Business Conduct Guidelines” (BCG).  This 15 to 20 page document is required reading every year with a mandatory test to ensure comprehensive understanding of these rules.  I can boil down one of the most important themes into two words: DON’T LIE!

For those of you who have been reading this blog for a while, you may question whether I am too verbose, and that may be fair, as I thoroughly research each subject and include attribution for claims, usually with direct links to their sources.  I would never think of making up “facts” and, on the rare occasion when a reader has informed me of a mistake, I have always corrected it and included a comment to that effect.

Some background:  A few weeks ago, a customer sent us a list of questions about SAP HANA on IBM Power Systems.  At first, the questions seemed bizarre as they included some very pointed misunderstandings about HANA and SAP in general and IBM’s role with SAP in particular.  As I read them more thoroughly, I realized that someone or some entity had coached the customer.  This was confirmed when I received a copy of an HPE presentation from a completely different source with almost identically worded statements.  By the way, back to the BCG: IBM employees are not allowed to view, much less share, information from a competitor marked confidential, and this presentation was not marked with anything, meaning it was being shown to customers with, or without, HPE management’s official knowledge.

Some of the lies it shares:

  • HPE has 99%+ share of the HANA market. It is kind of funny to note that this claim is contradicted in the same table, where it shows 80% share for Intel.  I guess they are confusing the SAP and SAP HANA markets, which is misleading at best.  More importantly, SAP does not release market share information, and even if they did, I think Lenovo, Cisco, Dell and Fujitsu might together claim more than 1% of the market.
  • IBM, not SAP, “delivers” HANA code to customers because they have access to SAP code and have created a “special” version of SAP HANA. Wow, it is hard to figure out where to start here.  SAP owns HANA and only SAP distributes its code.  They refused to support operating systems other than Linux, including AIX, for the very reason of wanting a common code tree for all platforms.  HPE is correct that IBM works closely with SAP to optimize HANA code, a fact which should be lauded, not criticized.  Apparently, HPE must not have such a relationship and is jealous?  What HPE does not understand is that regardless of who contributes code to SAP or suggests modifications, whether IBM, Intel or anyone else, SAP makes all decisions regarding that code, including support, and incorporates it into the common code tree, meaning all platforms can benefit if the code is not related to a specific, proprietary instruction set.  When Intel contributed code for TSX, Power HANA was not able to use this code, but with appropriate modifications, SAP was able to add code to call IBM’s similar “Transactional Memory” facility.  Now, simple logic ensures the appropriate call is made depending on the underlying processor architecture.  Likewise, when IBM saw that the huge number of threads in its architecture might push limits in HANA, it worked with SAP to improve the thread and workload dispatch mechanisms in HANA.  When Intel released their Broadwell-EX 24-core chips and SAP approved larger socket counts, those systems would have hit the same threading issues, but with the new mechanisms already in place, they were able to benefit from IBM and SAP’s joint effort.  Maybe HPE means that SAP has to compile the same code used for Intel systems on the Power platform.  Well duh, it is a different chip architecture, so this is computer science 101, but hardly a different “version”.
  • Release priority – #1 Intel, #2 Power. Wrong again, HPE!  HANA 2.0 was released simultaneously on Intel and Power, as were S/4HANA 1610 on-premise edition, support for SoH with HANA 2.0, etc.  Where do you get your misinformation, HPE?  This information is widely available on SAP’s Service Marketplace and the SAP PAM.
  • Sizes supported – HPE shows Power support of “only” 4.8TB for BW and 9TB for SoH vs. 24TB for Intel, and no scale-out HANA on Power. I will give HPE the benefit of the doubt on the 4.8TB statement, as 6TB just came out, but the “only” part is strange in that the same table shows “only” 4TB support on Intel.  As to 9TB SoH and the lack of scale-out HANA, both are wrong and have been for a while, with 16TB SoH available since December 4th, 2016 (see SAP Note 2188482) and scale-out HANA since November 2015.  As to the 24TB claim for Intel, the largest supported HANA appliance is 20TB, so HPE, once again, seems to be making up facts.

There were other lies, but I think you get the idea.  Here are a few suggestions:

To HPE management: Shame on you for permitting such behavior or, if done with your knowledge, for encouraging it.  If you have any “integrity” (pun intended), you will fire the employees and managers responsible for knowingly spreading lies and will print a retraction in appropriate press sources and on your web site.  If you don’t, then you are demonstrating, loud and clear, that your company is not to be trusted.

To HPE employees: Unless your management takes the above suggestions to heart with appropriate action to rectify this wrong, I am not sure how you can sleep well working for a company that considers truth to be something to be sacrificed at their convenience.  Hope they are truthful about your benefits.

To SAP customers: I can only speak for myself; when a restaurant, retailer or manufacturer lies to me and/or the public, I refuse to ever do business with them again.  The old saying applies: “fool me once, shame on you; fool me twice, shame on me.”  When you consider the minimal differences, if any, in cost of acquisition between all of the HANA system providers on the market, including IBM Power Systems, ask yourself whether a vendor that treats the truth this way deserves your business.

May 15, 2017

Is your company ready to put S/4HANA into the Cloud? – part 4

This is the 4th and final installment on this topic.  Sorry for the length of each part, but the issues surrounding placement of corporate application environments cannot be boiled down into simple statements like “always think cloud first” or “cloud is no place for a corporate application”.

  • How will you get from your current on-premise SAP landscape to the cloud? As mentioned in a recent blog post[i], database conversions from a conventional database or Suite on HANA to S/4HANA are not trivial to start with.  Now add the complexity of doing that across a WAN with system characteristics and technologies which you may not be able to control and you have just made a difficult task even more so.
    • Can a migration be completed within the outage window that your business allows? Fundamentally, the business will only allow outages which result in little to no lost business or financial penalties.  While you may be able to use dedicated 1Gb or 10Gb Ethernet or even faster internal networks (in the case of Power Systems), unless you are able to purchase temporary, massive WAN bandwidth, you may be faced with an outage that is longer than the business will allow (see the rough transfer-time sketch after this list).
    • At what cost, complexity and risk? If such a migration would take longer than allowable, there are strategies and solutions to deal with this, e.g. SAP MDS (Minimized Downtime Service), IBM CDC (InfoSphere Change Data Capture), SNP Transformation Backbone or Dell Shareplex, but these add cost, require much more planning and testing and might impose some additional risk, especially across WAN communications; see the discussion on security across the WAN in part 3.
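To put some rough numbers on the bandwidth point raised in the outage-window question above, here is a minimal transfer-time sketch.  The database sizes, link speeds and efficiency factor are illustrative assumptions only, and a real migration also spends time on export/import, conversion, validation and backups, so treat these figures as a floor rather than an estimate.

```python
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_tb terabytes over a link_gbps link at a sustained efficiency factor."""
    data_bits = data_tb * 1024**4 * 8             # TB -> bits
    effective_bps = link_gbps * 1e9 * efficiency  # nominal Gbps -> sustained bits/second
    return data_bits / effective_bps / 3600

for tb in (2, 5, 10):
    for gbps in (1, 10):
        print(f"{tb}TB over {gbps}Gbps: ~{transfer_hours(tb, gbps):.1f} hours of raw transfer time")
# e.g. 5TB over a 1Gbps WAN at 70% efficiency is roughly 17 hours before any
# conversion, validation or backup steps are even considered.
```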

Lest you feel that this post is overly focused on issues which might prevent you from moving to the cloud, there are good reasons to move as well.  As I am not an expert on that part of the story, I will refer you to some pretty good articles on the subject.[ii]  The common theme across these sites is that cloud can a) result in cost savings, b) improve agility, c) provide more elasticity and scaling, and d) move you from a CapEx model to OpEx.  Let’s take these one at a time.

a) cost savings – For customers that are growing rapidly, are startups or have never implemented a complex ERP system, cloud certainly can offer major cost avoidance.  For customers with existing data centers, Linux- or Unix-trained support staffs, UPS and diesel generator power units, storage, security and operations standards, investments and teams, and backup and recovery solutions, it may be more challenging to figure out exactly what sort of savings would result from a move to the cloud, unless that move, along with other potential moves, allows a large portion of those staffs to be laid off and the data centers sold to another company.  Once you address all of your corporate requirements, discussed in detail in parts 2 and 3 of this blog, a new price for the cloud services to support your SAP S/4HANA environment may emerge, and then you can start the process of determining what sort of cost savings are likely to be forthcoming.  From my personal experience with customers, it often turns out that little to no cost savings actually result.

b) improve agility – This one is more clear cut.  When on-premise systems are purchased and your requirements change, you may often find that you under- or over-bought, and that adjusting capacity, starting up or shutting down systems, or simply running power, planning for cooling and running network and storage cables, to name just a few tasks, can take weeks or months.  Cloud data centers often pre-provision technology to be ready for growth and changes in demand, plus this is the business they are in, so they tend to be very good at keeping ahead of the demand for their services.  Admittedly, some customers are also excellent at this, and those that have chosen IBM Power Systems with PowerVM find that making adjustments to systems is so easy that agility is not a major issue.  I know of some customers that purchase larger systems than initially required, with large amounts of Capacity on Demand CPU and memory, so that growth can be accommodated without any need for physical changes, simply logical activations, to deal with this very issue.

c) elasticity and scaling – Elasticity is usually considered in a cost context, i.e. pay for what you use, which cloud models do very well with utility pricing and shared infrastructure, plus the ability to charge per unit regardless of what size systems are required, meaning nothing is lost if you start on one size system and have to move to another.  Scaling usually refers to the ability to add an almost unlimited number of additional servers very quickly and easily, once again because cloud providers focus on this and are very good at rapid provisioning.  Whether this is required for S/4HANA is a more important question.  Even after going through a proper sizing, just about all customers get something wrong.  A study by Solitaire Interglobal [iii] a few years ago revealed that customers using x86 systems for SAP, on average, were quoted a starting price that was no more than 40% of the eventual cost.  I have seen this personally with undersized offerings or ones that “answered the mail” but did not address necessary project requirements.  Customers that have experienced this sort of cost overrun will find a cloud option especially attractive because of the ability to seamlessly move between systems or scale out as necessary.  By comparison, that same Solitaire study showed that customers that purchased IBM Power Systems for SAP were quoted a starting price of 85% to 90% of the eventual cost.  Once again, that is because we ask the right questions up front, so system sizes are much more accurate, most project requirements have been accounted for and overruns are less common.  These sorts of customers may not find cloud quite as much of a boon for scaling.  On the elasticity front, Power Systems offer a pay-as-you-go model with capacity on demand or flexible financing, so this issue can also be addressed for on-premise implementations.

d) CapEx vs. OpEx – Cloud is all OpEx.  Some customers’ CFOs have decided that this is necessary even though the rationale is not always clear to those of us without a finance or business degree.  Leases for on-premise systems can be structured to be mostly or all OpEx.  Of course, that only accounts for the systems, so data center infrastructure would likely fall more under CapEx.  If those are sunk assets, however, then unless they are to be sold, depreciation under CapEx will continue whether SAP systems are moved to the cloud or not.

I am sure there are plenty of other reasons to move to the cloud.  I would simply encourage customers to get informed about the challenges of migration; the costs once real corporate requirements are included; the security and control, or lack thereof, that you will have over your mission-critical systems; and the options you can utilize to resolve some of the issues driving you to cloud today.  For any customer that would like to have a discussion with me about these issues, costs and solutions, please respond to this blog or send me an email: afreude@us.ibm.com

[i] https://wordpress.com/post/saponpower.wordpress.com/524
[ii] https://doublehorn.com/why-move-to-the-cloud/
http://www.belden.com/blog/datacenters/6-reasons-why-enterprises-are-moving-to-the-cloud.cfm
https://www.salesforce.com/uk/blog/2015/11/why-move-to-the-cloud-10-benefits-of-cloud-computing.html
https://www.cardinalsolutions.com/blog/2016/08/top-reasons-for-moving-to-the-cloud
https://www.l-tron.com/top-10-reasons-to-move-your-enterprise-to-the-cloud/
[iii] http://sil-usa.com/index.php?route=product/product&product_id=54

May 8, 2017

Is your company ready to put S/4HANA into the cloud? – Part 3

The third of a 4-part discussion about corporate requirements in support of S/4HANA and the questions to be asked of cloud providers when considering placing this landscape in the cloud.

  • What backups must be performed? Some cloud providers might include daily, weekly, incremental or no backups.  They may include raw image backups vs. database aware backups. Just make sure that whatever backups you require are supported and included in the price for the cloud services.
    • Should corporate backup solution be used? For flexibility reasons as well as visibility, you may prefer that a backup solution that you have tested and approved works in the cloud environment.  Or, perhaps, it is an audit requirement.
    • Are extra server(s) required for backup solution? Your security and audit departments may not permit your backups to share infrastructure, including the network, with any other clients in a provider’s cloud environment.  One or more servers with or without dedicated network infrastructure may be required.
    • How quickly must backups be performed and restored, and what is the RTO after database corruption? SAP HANA backups can generally take their time as long as the aggregate transfer rate is sufficient to back up the entire database prior to the next backup.  Well, that is unless you want to be able to restore to the prior day in the event of database corruption, in which case you may want the backup finished prior to a specific time on the same day.  Just make sure you think about this and have the infrastructure necessary to meet your required backup speed priced (a rough throughput sketch follows this list).  Even more important is how quickly a backup can be restored, as well as what services are offered to restore backups and roll forward any logs created since that backup was initiated, i.e. the RTO for getting back up and running.
    • Will backups be available to DR systems? Not to be overlooked are backups in DR.  Not only will you want to be able to take backups in DR, but you would also need to be able to restore from a backup in the primary site to the DR site, not so easily done if the primary site is truly down and unavailable.  This means that you would need the backup server to have bi-directional replication with DR site as well as testing to ensure this works correctly.  What incremental costs are required for the replicated backup bandwidth?
  • Security – First a disclaimer: I am not a security expert, so I may be addressing only a subset of the real requirements.
    • How will corporate single sign-on operate with the cloud solution? Whether you use Microsoft Active Directory, CA SSO, IBM Tivoli Access Manager or one of the dozens of other products on the market, you are probably using this solution to authenticate and authorize users in SAP.  Make sure it can integrate with the potential S/4HANA system in the cloud.  Make sure that your security administrators can control policies, assign and revoke privileges and audit as necessary.
    • Must communications to/from the cloud be encrypted and what solution will be used? We all know that hackers want to access your data for malicious reasons, financial gain or industrial espionage.  Do you want your keystrokes and data to and from the cloud to be transmitted in clear text?  If not, which solution will you use and how might the use of that solution impact performance?  How about between application servers and database servers at the cloud provider?
    • How will data stored in the cloud be secured? It is one thing to have your personal email stored on storage devices shared with millions of other users, but do your corporate policies allow for corporate databases to be located on storage devices that are shared with other customers?  If not, do you require dedicated devices, of what kind and at what cost?
    • How will backups be secured? We touched on backups earlier, but this is now specific to the physical media on which those backups are stored, not to mention replicated to the DR site, as well as any external media that you might require, e.g. tapes, DVDs or removable disks.  How can you be assured that no one makes a copy, removes a disk, etc.?
  • What are the non-production requirements? All of the above addressed only production, but most customers have an even more extensive non-production landscape.  Many, if not most, of those same questions can be applied to non-production.  Remember, there are few employees that command a higher salary than your developers, whether internal or external.  They create corporate intellectual property and often work with copies of production data.  Their workloads vary based on project demands, phases of implementation or problems to be addressed.  Many customers utilize DR capacity or underutilized capacity on HA systems to address non-prod requirements; however, this may not be an option in a cloud environment, or if it is, at what cost?
    • How will images be created/copied, managed, isolated and secured? You may use SAP LaMa (Landscape Management, previously known as Landscape Virtualization Management (LVM)), backup/restore, disk replication, TDMS, BDLS and/or custom scripts to populate non-prod systems.  Will those tools and techniques work in the cloud and at what cost?
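Picking up the throughput sketch promised in the backup-speed question above: the arithmetic below is a minimal illustration only, with database sizes, backup windows, restore rates and log-replay rates that are assumptions rather than measured values; real numbers must come from testing on the provider’s actual infrastructure.

```python
def required_backup_mbps(db_size_tb: float, window_hours: float) -> float:
    """Sustained MB/s needed to complete a full backup within the allowed window."""
    return db_size_tb * 1024**2 / (window_hours * 3600)   # TB -> MB, hours -> seconds

def estimated_rto_hours(db_size_tb: float, restore_mbps: float,
                        log_gb: float, replay_gb_per_hour: float) -> float:
    """Rough RTO: restore the full backup, then roll forward the accumulated logs."""
    restore_h = db_size_tb * 1024**2 / restore_mbps / 3600
    replay_h = log_gb / replay_gb_per_hour
    return restore_h + replay_h

print(f"Backing up 3TB in a 4 hour window needs ~{required_backup_mbps(3, 4):.0f} MB/s sustained")
print(f"Restore RTO estimate: ~{estimated_rto_hours(3, 500, 200, 100):.1f} hours")
```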


The last part of this discussion will deal with migration challenges when moving to the cloud and lastly, a few of the reasons that are often used to justify a move to the cloud.

May 5, 2017

Is your company ready to put S/4HANA into the cloud? – Part 2

And now, the details and rationale behind the questions posed in Part 1.

  • What is the expected memory size for the HANA DB? Your HANA instances may fit comfortably within the provider’s offerings, or may force a bare-metal option, or may not be offered at all.  Equally important is expected growth as you may start within one tier and end in another or may be unable to fit in a provider’s cloud environment.
  • What are your performance objectives and how will they be measured/enforced? This may not be that important for some non-production environments, but production is used to run part, or all, of a company.  The last thing you want is to find out that transaction performance is not measured or that no enforcement exists when an objective is missed.  Even worse, what happens if these are measured, but only up to the edge of the provider’s cloud, not inclusive of WAN latency?  Sub-second response time is usually required, but if the WAN adds .5 seconds, your end users may not find this acceptable.  How about if the WAN latency varies?  The only thing worse than poor performance is unpredictable performance.
    • Who is responsible for addressing any performance issues? No one wants finger pointing, so is the cloud provider willing to be responsible for end-user performance, including WAN latency, and at what cost?
    • Is bare-metal required or, if shared, how much overhead and how much over-commitment? One of the ways that some cloud providers offer a competitive price is by using shared infrastructure, virtualized with VMware or PowerVM for example.  Each of these has different limits and overhead, with VMware noted by SAP as having a minimum of 12% overhead and PowerVM with 0%, as the benchmarks were run under PowerVM to start with.  Likewise, VMware environments are limited to 4TB per instance, and often multiple different instances may not run on shared infrastructure based on a very difficult to understand set of rules from SAP.  PowerVM has no such limits or rules and allows up to 8 concurrent production instances, each up to 16TB for S/4 or SoH, up to the physical limits of the system.  If the cloud provider is offering a shared environment, are they running under SAP’s definition of “supported” or are they taking a chance and running “unsupported”?  Lastly, if it is a shared environment, is it possible that your performance or security may suffer because of another client’s use of that shared infrastructure?
  • What availability is required? 99.8%?  99.9%?  99.95%?  4 nines or higher?  Not all cloud providers can address the higher limits, so you should be clear about what your business requires.
  • Is HA mandatory? HA is usually an option, at a higher price.  The type of HA you desire may, or may not, be offered by each cloud provider.  Periodic testing of that HA solution may, or may not, be offered, so if you need or expect this, make sure you ask about it.
    • For HA, what are the RPO, RTO and RTP time limits? Not all HA solutions are created equal.  How much data loss is acceptable to your business and how quickly must you be able to get back up and running after a failure?  RTP is a term that you may not have heard too often and refers to “Return to Processing”, i.e. it is not enough to get the system back to a point of full data integrity and ready to work; the system must be at a point that the business expects, with a clear understanding of which transactions have or have not been committed.  Imagine a situation where a customer places an order or pays a bill, but it gets lost, or where you paid a supplier and mistakenly pay them a second time.
  • Is DR mandatory and what are the RPO, RTO and RTP time limits? The same rationale applies for these questions as for HA; once again, DR, when available, is always offered at an additional charge and is highly dependent on the type of replication used, with disk-based replication usually less expensive than HANA System Replication but with a longer RTO/RTP.
    • Incremental costs for DR replication bandwidth? Often overlooked are the network costs for replicating data from the primary site to the DR site, but this is clearly a line item that should not be missed.  Some customers may decide to use two different cloud providers for primary and DR, in which case not only may pricing for each be different, but WAN capacity may be even more critical and pricey.
    • Disaster readiness assessment, mock drills or full, periodic data center flips? Having a DR site available is wonderful, provided that when you actually need it, everything works correctly.  As this is an entire discussion unto itself, let it be said that every business recovery expert will tell you to plan and test thoroughly.  Make sure you discuss this with a potential cloud provider and have the price to support whatever you require included in their bid.


I said this would be a two part post, but there is simply too much to include in only 2 parts, so the parts will go on until I address all of the questions and issues.

May 4, 2017

Is your company ready to put S/4HANA in the cloud? – part 1

Cloud computing for SAP is of huge interest to SAP clients as well as providers of cloud services including IBM and SAP.  Some are so bought into the idea of cloud that for any application requirement “the answer is cloud; now what was the question?”  As my friend Bob Gagnon wrote in a recent LinkedIn blog post[i], is S/4HANA ready for the cloud? The answer is Yes!  But is your organization ready to put S/4HANA in the cloud?  The answer to that question is the subject of this two part blog post.

Bob brings up several excellent points in his blog post, a few of which I will discuss here, plus a few others.  These questions are ones that every customer should either be asking their potential cloud providers or asking of themselves.  These questions are not listed in any particular order nor intended to cast cloud in a negative light, merely to point out operational requirements that must be addressed whether S/4HANA, or other SAP or non-SAP applications, are placed in a cloud or hosted on-premise.  In parts 2, 3 and 4, I will discuss the background and rationale for each question.  Part 4 also discusses some of the points that cloud proponents often tout, with some comments from me on ways of achieving similar benefits with on-premise solutions.

  • What is the expected memory size for the HANA DB?
  • What are your performance objectives and how will they be measured/enforced?
    • Is bare-metal required or if shared, how much overhead, how much over-commitment?
  • What availability is required?
  • Is HA mandatory?
    • What are the RPO, RTO and RTP time limits?
  • Is DR mandatory?
    • What are the RPO, RTO and RTP time limits?
    • Incremental costs for DR replication bandwidth?
    • Disaster readiness assessment, mock drills or full, periodic data center flips?
  • What backups must be performed?
    • Should corporate backup solution be used?
    • Are extra server(s) required for backup solution?
    • How quickly must backups be restored and RTO occur?
    • Will backups be available to DR systems?
    • What incremental costs are required for WAN copies?
  •  Security
    • How will corporate single sign-on operate with cloud solution?
    • Must communications to/from cloud be encrypted and what solution will be used?
    • How will data stored in cloud be secured?
    • How will backups be secured?
  • What are non-prod requirements, how will images be created/copied, managed, isolated, secured?
  • How will you get from your current on-premise SAP landscape to the cloud?
    • Can a migration be completed within the outage window that your business allows?
    • At what cost, complexity and risk?

[i] https://www.linkedin.com/pulse/s4haha-is-cloud-ready-robert-gagnon?trk=v-feed&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BwtSy49CsXhCWw7ppB5yiaA%3D%3D


May 3, 2017

What should you do when your LoBs say they are not ready for S/4HANA – part 2, Why choice of Infrastructure matters

Conversions to S/4HANA usually take place over several months and involve dozens to hundreds of steps.  With proper care and planning, these projects can run on time, within budget and result in a final Go-Live that is smooth and occurs within an acceptable outage window.  Alternately, horror stories abound of projects delayed, errors made, outages far beyond what was considered acceptable by the business, etc.  The choice of infrastructure to support a conversion may be the last thing on anyone’s mind, but it can have a dramatic impact on achieving a successful outcome.

A conversion includes running many pre-checks[i], which can run for quite a while[ii], which implies that they can drive CPU utilization to high levels for a significant duration and impact other running workloads.  As a result, consultants routinely recommend that you make a copy of any system, especially production, against which you will run these pre-checks.  SAP recommends that you run these pre-checks against every system to be converted, e.g. Development, Test, Sandbox, QA, etc.  If those systems are being used for on-going work, it may be advisable to also make copies of them and run those copies on other systems or within virtual machines which can be limited to avoid causing performance issues with other co-resident virtual machines.

In order to find issues and correct them, conversion efforts usually involve a phased approach with multiple conversions of support systems, e.g. Dev, Test, Sandbox and QA, using a tool such as SAP’s Software Update Manager with the Database Migration Option (SUM w/DMO).  One of the goals of each run is to figure out how long it will take and what actions need to be taken to ensure the Go-Live production conversion completes within the required outage window, including any post-processing, performance tuning, validation and backups.

In an attempt to keep expenses low, many customers will choose to use existing systems or VMs in addition to a new “target” system, or systems if HA is to be tested.  This means that the customer’s network will likely be used in support of these connections.  Taken together, the use of shared infrastructure components means that events among those shared components can impact these tests and activities.  For example, if a VM is used but not enough CPU or network bandwidth is provided, the duration of the test may extend well beyond what is planned, meaning more cost for per-hour consulting, and may not provide the insight into what needs to be fixed and how long the actual migration may take.  How about if you have plenty of CPU capacity, or even a dedicated system, but the backup group decides to initiate a large database backup at the same time and on the same network as your migration test is using?  Or maybe you decide to run a test at a time that another group, e.g. operations, needs to test something that impacts it, or when new equipment or firmware is being installed and modifications to shared infrastructure are occurring, etc.  Of course, you can have good change management and carefully arrange when your conversion tests will occur, which means that you may have restricted windows of opportunity at times that are not always convenient for your team.

Let’s not forget that the application/database conversion is only one part of a successful conversion.  Functional validation tests are often required which could overwhelm limited infrastructure or take it away from parallel conversion tasks.  Other easily overlooked but critical tasks include ensuring all necessary interfaces work; that third party middleware and applications install and operate correctly; that backups can be taken and recovered; that HA systems are tested with acceptable RPO and RTO; that DR is set up and running properly also with acceptable RPO and RTO.  And since this will be a new application suite with different business processes and a brand new Fiori interface, training most likely will be required as well.

So, how can the choice of infrastructure make a difference to this almost overwhelming set of issues and requirements?  It comes down to flexibility.  Infrastructure which is built on virtualization allows many of these challenges to be easily addressed.  I will use an existing IBM Power Systems customer running Oracle, DB2 or Sybase to demonstrate how this would work.

The first issue dealt with running pre-checks on existing systems.  If those existing systems are Power Systems and enough excess capacity is available, PowerVM, the IBM virtualization hypervisor, allows a VM to be started with an exact copy of production, passed through normal post-processing such as BDLS to create a non-production copy, or cloned and placed behind a network firewall.  This VM could be fenced off logically and throttled such that production running on the same system would always be given preference for CPU resources.  By comparison, a similar database located on an x86 system would likely not be able to use this process, as the database is usually running on a bare-metal system for which no VM can be created.

Alternately, for Power Systems, the exact same process could be utilized to carve out a VM on a new HANA target system, and this is where the real value starts to emerge.  Once a copy or clone is available on the target HANA on Power system, as much capacity can be allocated to the various pre-checks and related tasks as needed, without any concern for the impact on production or the need to throttle these processes, thereby optimizing the duration of these tasks.  On the same system, HANA target VMs may be created.  As mock conversions take place, an internal virtual network may be utilized.  Not only is such a network faster by a factor of 2 or more, but it is dedicated to this single purpose and completely unaffected by anything else going on within the datacenter network.  No coordination is required beyond the conversion team, which means that there is no externally imposed delay to begin a test, nor constraints on how long such a test may take or, for that matter, how many times it may be run.

The story only gets better.  Remember, SAP suggests you run these pre-checks on all non-prod landscapes.  With PowerVM, you may fire up any number of different copy/clone VMs and/or HANA VMs.  This means that as you move from one phase to the next, from one instance to the next, from conversion to production, run a static validation environment while other tasks continue, conduct training classes, or run many different phases for different projects at the same time, PowerVM enables the system to respond to your changing requirements.  This helps you avoid purchasing extra interim systems and lets you buy when needed, not significantly ahead of time due to the inflexibility of other platforms.  You can even simulate an HA environment to test your HA strategy without needing a second system, up to the physical limits of the system, of course.  This is where a tool like SAP’s TDMS, Test Data Migration Server, might come in very handy.

And when it comes time for the actual Go-Live conversion, the running production database VM may be moved live from the “old” system to the “new” system without any downtime, and the migration may then proceed using the virtual, in-memory network at the fastest possible speed and with all external factors removed.  Of course, if the “old” system is based on POWER8, it may then be used or upgraded for other HANA purposes.  Prior Power Systems, as well as current-generation POWER8 systems, can be used for a wide variety of other purposes, both SAP and non-SAP.

Bottom line: The choice of infrastructure can help you eliminate external influences that cause delays and complications to your conversion project, optimize your spend on infrastructure, and deliver the best possible throughput and lowest outage window when it comes to the Go-Live cut-over.  If complete control over your conversion timeline was not enough, avoidance of delays keeps costs for non-fixed cost resources to a minimum.  For any SAP customer not using Power Systems today, this flexibility can provide enormous benefits, however the process of moving between systems would be somewhat different.  For any existing Power Systems customer, this flexibility makes a move to HANA on Power Systems almost a no-brainer, especially since IBM has so effectively removed TCA as a barrier to adoption.

[i] https://blogs.sap.com/2017/01/20/system-conversion-to-s4hana-1610-part-2-pre-checks/

[ii] https://uacp.hana.ondemand.com/http.svc/rc/PRODUCTION/pdfe68bfa55e988410ee10000000a441470/1511%20001/en-US/CONV_OP1511_FPS01.pdf page 19

April 26, 2017

What should you do when your LoBs say they are not ready for S/4HANA – part 1

A recent survey by ASUG revealed that over 60% of the surveyed SAP customers have yet to purchase S/4HANA.  Of those that did, only 8% were live.[i]  It did not ask whether those that had purchased it but were not live were actively pursuing a conversion or whether the software was sitting on a shelf, or for that matter, whether some might have been mistakenly speaking of some form of HANA but not S/4HANA.  I was surprised to see that the numbers were even that high as most of the large enterprises that I work with on a regular basis are either still on conventional Business Suite or, at best, moving to Suite on HANA.  Regardless, this means a lot of customers are waiting.  This should not be news to you and it certainly was not to me as I have had conversations with dozens of customers that said the same thing, i.e. that their business leaders were either unconvinced of the business value of S/4HANA or that they planned to move eventually, just were not ready now.  Unless those customers have other HANA plans, this is often the end of the discussion.

A few days ago, Bob Gagnon, a friend of mine and a brilliant consultant, but don’t tell him I said so, wrote a short piece on LinkedIn about this subject.[ii]  He makes a persuasive point that waiting has its costs.  Specifically, he writes about the “technical debt” your company creates because any customization you do to your current environment while you wait must either, at worst, be completely redone in S/4 or at least must be tested and qualified a second time when you do implement S/4.  He goes on to mention the increased complexity of your roadmap as you have to consider the institutional dependencies and impact on your future conversion.  Furthermore, you have no choice and must convert or move off of SAP at some point, so you can spend the money today or spend it tomorrow, but either way, you have to spend it.

As I thought about this, I realized there are additional factors to be considered.  As is now common knowledge, SAP has indicated that support for conventional database systems for SAP will end in 2025.  What may be a surprise to some, as Bob mentioned, is that SAP may withdraw support for Business Suite on HANA at the same time.  Compound this with SAP’s assertion that “customers typically migrate 10%-15% of their productive landscape per year, which means that customers that begin in early 2017 can fully migrate a complete traditional SAP Business Suite landscape to SAP S/4HANA by the end of maintenance for SAP Business Suite.”[iii]  This means that for each year that you wait, you have less time to get your entire landscape converted prior to end of support.  Rarely is rushing an advisable method of reliably achieving a desired outcome, as any of us with teenagers or young adults have, no doubt, shared countless times.
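A minimal sketch of that timeline arithmetic, assuming SAP’s quoted 10%-15% per year migration pace holds and that support really does end in 2025:

```python
import math

SUPPORT_END = 2025

def years_needed(pct_per_year: float) -> int:
    """Years required to migrate a full (100%) landscape at a given annual pace."""
    return math.ceil(100 / pct_per_year)

for start_year in (2017, 2019, 2021):
    available = SUPPORT_END - start_year
    print(f"Start {start_year}: {available} years available vs. "
          f"{years_needed(15)}-{years_needed(10)} years needed at 15%-10% per year")
# Starting in 2017 leaves 8 years for a 7-10 year effort; each year of delay removes
# a year of slack and pushes the project toward the rushed end of the range.
```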

How about the availability of skilled resources?  Long before 2025 approaches, the pool of well qualified and trained S/4HANA developers and migration consultants will be stretched thinner and thinner.  When demand is high and supply is low, prices go up, meaning that those who delay will either pay far more to convert to S/4HANA or have to settle for less experienced help.  Inevitably, a new crop of low-skilled and potentially unqualified, or at least not very experienced, individuals will rush into the vacuum.  Those companies that hire systems integrators with these sorts of “low cost” resources may run into foreseeable problems, delays or failed conversions.

Other factors to consider include skills and innovation.  Even if your LoB executives have not concluded there is enough value in S/4HANA currently, since they have not made plans to move away from SAP, there is clearly some value there.  If you make plans to convert to S/4HANA now, you can get your personnel up to speed, educated and working toward innovations enabled by S/4HANA sooner rather than later.  If even 20% of what SAP says about business process improvements, concurrent analytics, geographical analysis, executive boardrooms and drill-down detail is true, then that is value that you will get eventually, so why defer getting those benefits?

So, pay now or pay more tomorrow.  Plan carefully and thoroughly and execute that plan at a reasonable pace or rush and hope to avoid potentially catastrophic mistakes.  Incur a technical debt or leverage the inevitability of 2025 to enable innovation today.

Why, you might ask, is an SAP infrastructure subject matter expert talking about conversions to S/4HANA?  That is indeed a fine question and will be addressed in part two of this blog post at a later date.


[i] https://discuss.asug.com/docs/DOC-46613

[ii] https://www.linkedin.com/pulse/s4hana-waiting-has-negative-impact-robert-gagnon?trk=v-feed&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3B%2BFCPwehGKCg0YU5uNYpKYA%3D%3D

[iii] http://sapinsider.wispubs.com/Assets/Articles/2017/January/SPI-Making-the-Move-to-SAP-S4HANA

April 25, 2017