SAPonPower

An ongoing discussion about SAP infrastructure

TDI Phase 5 – SAPS based sizing bringing better TCO to new and existing Power Systems customers

SAP made a fundamental and incredibly important announcement this week at SAP TechEd in Las Vegas: TDI Phase 5 – SAPS based sizing for HANA workloads.  Since its debut, HANA has been sized based on a strict memory-to-core ratio set by SAP according to workload and platform characteristics, e.g. generation of processor, MHz, interconnect technology, etc.  This might have made some sense in the early days, when little was known about the loads that customers were likely to experience and SAP still had high hopes for enabling all customer employees to become knowledge workers with direct access to analytics.  Over time, with very rare exception, it turned out that CPU loads were far lower than the ratios might have predicted.

I have only run into one customer in the past two years that was able to drive a high utilization of their HANA systems and that was a customer running an x86 BW implementation with an impressively high number of concurrent users at one point in their month.  Most customers have experienced just the opposite, consistently low utilization regardless of technology.

For many customers, especially those running x86 systems, this has not been an issue.  First, it is not a significant departure from what many have experienced for years, even those running VMware.  Second, to compensate for relatively low memory and socket-to-socket bandwidth combined with high latency interconnects, many x86 systems work best with an excess of CPU.  Third, many x86 vendors have focused on HANA appliances which are rarely utilized with virtualization and are therefore often single instance systems.

IBM Power Systems customers, by comparison, have been almost universal in their concern about poor utilization.  These customers have historically driven high utilization, often over 65%.  Power has up to 5 times the memory bandwidth per socket of x86 systems (without compromising reliability) and very wide and parallel interconnect paths with very low latencies.  HANA has never been offered as an appliance on Power Systems, instead being offered only using a Tailored Datacenter Infrastructure (TDI) approach.  As a result, customers view on-premise Power Systems as a sort of utility, i.e. that they should be able to use them as they see fit and drive as much workload through them as possible while maintaining the Service Level Agreements (SLA) that their end users require.  The idea of running a system at 5%, or even 25%, utilization is almost an affront to these customers, but that is what they have experienced with the memory to core restrictions previously in place.

IBM’s virtualization solution, PowerVM, enabled SAP customers to run multiple production workloads (up to 8 on the largest systems), or a mix of up to 7 production workloads plus a shared pool of CPU resources within which an almost unlimited mix of VMs could run, including non-prod HANA, application servers, non-SAP workloads and even other OS workloads, e.g. AIX and IBM i.  In this mixed mode, some of the excess CPU resource not used by the production workloads could be utilized by the shared-pool workloads.  This helped drive up utilization somewhat, but not enough for many.

These customers would like to do what they have historically done.  They would like to negotiate response time agreements with their end user departments then size their systems to meet those agreements and resize if they need more capacity or end up with too much capacity.

The newly released TDI Overview document (http://bit.ly/2fLRFPb) describes the new methodology: “SAP HANA quicksizer and SAP HANA sizing reports have been enhanced to provide separate CPU and RAM sizing results in SAPS”.  I was able to verify Quicksizer showing SAPS, but not the sizing reports.  An SAP expert I ran into at TechEd suggested that getting the sizing reports to determine SAPS would be a tall order, since they would have to include a database of SAPS capacity for every system on the market as well as the number of cores and MHz of each one.  (In a separate blog post, I will share how IBM can help customers calculate utilized SAPS on existing systems.)  Customers are instructed to work with their hardware partner to determine the number of cores required based on the SAPS projected above.  The document goes on to state: “The resulting HANA TDI configurations will extend the choice of HANA system sizes; and customers with less CPU intensive workloads may have bigger main memory capacity compared to SAP HANA appliance based solutions using fixed core to memory sizing approach (that’s more geared towards delivery of optimal performance for any type of a workload).”

Using a SAPS based methodology is a good start and may result in fewer cores being required for the same workload than would previously have been calculated based on a memory/core ratio.  Customers that wish to allocate more or less CPU to those workloads will now have this option, meaning that an even more significant reduction of CPU may be possible.  This will likely result in much more efficient use of CPU resources, more capacity available to other workloads and/or the ability to size systems with fewer resources to drive down the cost of those systems.  Either way helps drive much better TCO by reducing the number and size of systems along with the associated datacenter and personnel costs.
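To make the arithmetic concrete, here is a minimal sketch of what SAPS-based CPU sizing looks like.  The SAPS requirement and the per-core rating below are hypothetical placeholders; in practice the sizing report or Quicksizer supplies the SAPS figure and your hardware partner supplies the per-core rating for the specific server model.

```python
import math

def cores_for_saps(required_saps: float, saps_per_core: float) -> int:
    """Cores needed to satisfy a CPU sizing expressed in SAPS."""
    return math.ceil(required_saps / saps_per_core)

# Hypothetical example: a workload sized at 65,000 SAPS on a server rated
# at roughly 5,000 SAPS per core needs 13 cores; memory is sized separately.
print(cores_for_saps(65_000, 5_000))   # -> 13
```

The key difference from the old approach is that memory no longer dictates the core count, so CPU and RAM can be tuned independently.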

Existing Power customers will undoubtedly be delighted by this news.  Those customers will be able to start experimenting with different core allocations, and most will find they are able to decrease their current HANA VM sizes substantially.  With the resources no longer required to support production, other workloads currently implemented on external systems may be consolidated to the newly right-sized system.  Application servers, central services, Hadoop, HPC, AI, etc. are candidates to be consolidated in this way.

Here is a very simple example:  A hypothetical customer has two production workloads, BW/4HANA and S/4HANA, which require 4TB and 3TB respectively.  For each, HA is required, as are Dev/Test, Sandbox and QA.  Prior to TDI Phase 5, using Power Systems, the 4TB BW system would require roughly 82 cores due to the 50GB/core ratio and the S/4 workload would require roughly 33 cores due to the 96GB/core ratio.  Including HA and non-prod, the systems might look something like:

[Figure: TDI Phase 4 configuration]

Note the relatively small number of cores available in the shared pool (which might be less than optimal) and the total number of cores in the system.  Some customers may have elected to move to an even larger system or utilize additional systems as a result.  As it stood, this was already a pretty compelling TCO and consolidation story for customers.
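For reference, here is a minimal sketch of the pre-Phase 5 ratio arithmetic behind those core counts.  The 50GB/core and 96GB/core ratios are the ones used in the example above; the small difference on the S/4 figure comes from rounding in the original sizing.

```python
import math

def cores_by_ratio(memory_gb: float, gb_per_core: float) -> int:
    """Pre-TDI Phase 5 sizing: the memory-to-core ratio dictates the core count."""
    return math.ceil(memory_gb / gb_per_core)

print(cores_by_ratio(4096, 50))   # BW/4HANA: 4TB at 50GB/core -> 82 cores
print(cores_by_ratio(3072, 96))   # S/4HANA:  3TB at 96GB/core -> 32 cores (roughly 33 above)
```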

With SAPS based sizing, the BW workload may require only 70 cores and S/4 only 21 cores (both are guesses based on early sizing examples; proper analysis of the SAP sizing reports and the per-core SAPS ratings of servers is required to determine actual core requirements).  The resulting architecture could look like:

[Figure: TDI Phase 5 estimated configuration]

Note the smaller core count in each system.  By switching to this methodology, lower cost CPU sockets may be employed and processor activation costs decreased by 24 cores per system.  But the number of cores in the shared pool remains the same, so there is still room for improvement.

During a landscape session at SAP TechEd in Las Vegas, an SAP expert stated that customers will be responsible for performance and that CPU allocation will not be enforced by SAP through HWCCT as had been the case in the past.  This means that customers will be able to determine the number of cores to allocate to their various instances.  It is conceivable that some customers will find that instead of the 70 cores in the above example, 60, 50 or fewer cores may be required for BW, with decreased requirements for S/4HANA as well.  A customer taking this more aggressive, hypothetical approach might see the following:

[Figure: TDI Phase 5 hypothetical configuration]

Note how the number of cores in the shared pool has increased substantially, allowing more workloads to be consolidated to these systems, further decreasing costs by eliminating those external systems as well as consolidating more SAN and network cards, decreasing computer room space and reducing energy/cooling requirements.

A reasonable question is whether these same savings would accrue to an x86 implementation.  The answer is: not necessarily.  Yes, fewer cores would also be required, but to take advantage of a similar type of consolidation, VMware must be employed, and if VMware is used, a host of caveats must be taken into consideration:
1) Overhead, reportedly 12% or more, must be added to the capacity requirements.
2) I/O throughput must be tested to ensure load times, log writes, savepoints, snapshots and backup speeds are acceptable to the business.
3) Limits must be understood, e.g. the maximum memory in a VM is 4TB, which means the BW instance above cannot grow by even 1KB.
4) Socket isolation is required, as SAP does not permit the sharing of a socket in a HANA production/VMware environment, meaning that reducing core requirements may not result in fewer sockets, i.e. this may not eliminate underutilized cores in an Intel/VMware system.
5) Non-prod workloads can’t take advantage of capacity not used by production for several reasons, not the least of which is that SAP does not permit sharing of sockets between prod and non-prod VMs, not to mention the reluctance of many customers to mix prod and non-prod under a software hypervisor such as VMware even if SAP permitted it.
The bottom line is that most customers, through an abundance of caution or actual experience with VMware, choose to place production on bare metal and non-prod, which does not require the same stack as prod, on VMware.  Workloads which do require the same stack as prod, e.g. QA, are also usually placed on bare metal.  After closer evaluation, this means that TDI Phase 5 will have limited benefits for x86 customers.
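As a rough illustration of caveats 1 and 3 above, here is a minimal sketch assuming the reported 12% overhead figure and the 4TB-per-VM limit; the helper and the SAPS figure are hypothetical, not a formal VMware sizing method.

```python
def vmware_sizing_check(required_saps: float, vm_memory_tb: float,
                        overhead: float = 0.12, vm_memory_limit_tb: float = 4.0):
    """Add the reported hypervisor overhead to the CPU requirement and flag a VM
    that is already at the per-VM memory limit (illustrative only)."""
    adjusted_saps = required_saps * (1 + overhead)
    no_headroom = vm_memory_tb >= vm_memory_limit_tb
    return adjusted_saps, no_headroom

saps, at_limit = vmware_sizing_check(65_000, 4.0)
print(round(saps))   # -> 72800 SAPS once the ~12% overhead is included
print(at_limit)      # -> True: a 4TB BW instance has no room to grow
```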

This announcement is the equivalent of finally being allowed to use 5th gear on your car after having been limited to only 4 for a long time.  HANA on IBM Power Systems already had the fastest adoption in recent SAP history, with roughly 950 customers selecting HANA on Power in just 2 years.  TDI Phase 5 uniquely benefits Power Systems customers, which will only continue that acceleration.  Those individuals who recommended or made the decision to select HANA on Power will look like geniuses to their CFOs, as they will now get the equivalent of new system capacity at no cost.


September 29, 2017 Posted by | Uncategorized | 1 Comment

Is your company ready to put S/4HANA into the cloud? – Part 3

The third of a 4 part discussion about corporate requirements for S/4HANA and the questions to ask of cloud providers before placing this landscape in the cloud.

  • What backups must be performed? Some cloud providers might include daily, weekly, incremental or no backups.  They may include raw image backups vs. database aware backups. Just make sure that whatever backups you require are supported and included in the price for the cloud services.
    • Should the corporate backup solution be used? For flexibility as well as visibility, you may prefer to use a backup solution that you have already tested and approved in the cloud environment.  Or, perhaps, it is an audit requirement.
    • Are extra server(s) required for backup solution? Your security and audit departments may not permit your backups to share infrastructure, including the network, with any other clients in a provider’s cloud environment.  One or more servers with or without dedicated network infrastructure may be required.
    • How quickly must backups be performed and restored, and what is the RTO after database corruption? SAP HANA backups can generally take their time as long as the aggregate transfer rate is sufficient to back up the entire database prior to the next backup.  That is, unless you want to be able to restore to the prior day in the event of database corruption, in which case you may want the backup finished prior to a specific time on the same day.  Just make sure you think about this and have the infrastructure necessary to meet your required backup speed included in the price (see the sketch after this list for a rough way to estimate the backup window).  Even more important is how quickly the backup can be restored, as well as what services are offered to restore the backup and roll forward any logs created since that backup was initiated, i.e. the RTO for getting back up and running.
    • Will backups be available to DR systems? Not to be overlooked are backups in DR.  Not only will you want to be able to take backups in DR, but you would also need to be able to restore from a backup taken at the primary site to the DR site, not so easily done if the primary site is truly down and unavailable.  This means that you would need the backup server to have bi-directional replication with the DR site, as well as testing to ensure this works correctly.  What incremental costs are required for the replicated backup bandwidth?
  • Security – First a disclaimer. I am not a security expert, so may be only addressing a subset of the real requirements.
    • How will corporate single sign-on operate with the cloud solution? Whether you use Microsoft Active Directory, CA SSO, IBM Tivoli Access Manager or one of the dozens of other products on the market, you are probably using this solution to authenticate and authorize users in SAP.  Make sure it can integrate with the potential S/4HANA system in the cloud.  Make sure that your security administrators can control policies, assign and revoke privileges and audit as necessary.
    • Must communications to/from cloud be encrypted and what solution will be used? We all know that hackers want to access your data for malicious reasons, financial gain or industrial espionage.  Do you want your key strokes and data to and from the cloud to transmit in clear text?  If not, which solution will you use and how might the use of that solution impact performance?  How about between application servers and database servers at the cloud provider?
    • How will data stored in the cloud be secured? It is one thing to have your personal email stored on storage devices shared with millions of other users, but do your corporate policies allow for corporate databases to be located on storage devices that are shared with other customers?  If not, do you require dedicated devices, of what kind and at what cost?
    • How will backups be secured? We touched on backups earlier, but this is now specific to the physical media on which those backups are stored, not to mention replicated to the DR site, as well as any external media that you might require, e.g. tapes, DVDs or removable disks.  How can you be assured that no one makes a copy, removes a disk, etc.?
  • What are the non-production requirements? All of the above was just talking about production, but most customers have an even more extensive non-production landscape.  Many, if not most, of those same questions can be applied to non-production.  Remember, there are few employees that command a higher salary than your developers, whether internal or external.  They create corporate intellectual property and often work with copies of production data.  Their workloads vary based on project demands, phases of implementation or problems to be addressed.  Many customers utilize DR capacity or underutilized capacity on HA systems to address non-prod requirements, however this may not be an option in a cloud environment, or if it is, at what cost?
    • How will images be created/copied, managed, isolated and secured? You may use SAP LaMa (Landscape Manager, previously known as Landscape Virtualization Manager, or LVM), backup/restore, disk replication, TDMS, BDLS and/or custom scripts to populate non-prod systems.  Will those tools and techniques work in the cloud, and at what cost?
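On the backup-window question raised above, here is a minimal back-of-the-envelope sketch.  The database size and throughput figures are hypothetical; substitute the sustained rate your provider can actually commit to.

```python
def backup_window_hours(db_size_tb: float, throughput_gb_per_s: float) -> float:
    """Rough backup window: database size divided by sustained backup throughput."""
    return (db_size_tb * 1024) / throughput_gb_per_s / 3600

# A 4TB HANA database at a sustained 1 GB/s takes a little over an hour;
# at 0.2 GB/s it takes well over five, which may miss a same-day deadline.
print(round(backup_window_hours(4, 1.0), 1))   # -> 1.1 hours
print(round(backup_window_hours(4, 0.2), 1))   # -> 5.7 hours
```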


The last part of this discussion will deal with migration challenges when moving to the cloud and, lastly, a few of the reasons that are often used to justify a move to the cloud.

May 5, 2017 Posted by | Uncategorized | Leave a comment

Is your company ready to put S/4HANA into the cloud? – Part 2

And now, the details and rationale behind the questions posed in Part 1.

  • What is the expected memory size for the HANA DB? Your HANA instances may fit comfortably within the provider’s offerings, or may force a bare-metal option, or may not be offered at all.  Equally important is expected growth as you may start within one tier and end in another or may be unable to fit in a provider’s cloud environment.
  • What are your performance objectives and how will they be measured/enforced? This may not be that important for some non-production environments, but production is used to run part, or all, of a company.  The last thing you want is to find out that transaction performance is not measured, or that no enforcement exists for missing an objective.  Even worse, what happens if these are measured, but only up to the edge of the provider’s cloud, not inclusive of WAN latency?  Sub-second response time is usually required, but if the WAN adds 0.5 seconds, your end users may not find this acceptable.  How about if the WAN latency varies?  The only thing worse than poor performance is unpredictable performance.
    • Who is responsible for addressing any performance issues? No one wants finger pointing so is the cloud provider willing to be responsible for end-user performance including WAN latency and at what cost?
    • Is bare-metal required or, if shared, how much overhead and how much over-commitment? One of the ways that some cloud providers offer a competitive price is by using shared infrastructure, virtualized with VMware or PowerVM for example.  Each of these has different limits and overhead: SAP notes VMware as having a minimum of 12% overhead, while PowerVM is effectively 0% since the benchmarks were run under PowerVM to begin with.  Likewise, VMware environments are limited to 4TB per instance, and multiple different instances often may not run on shared infrastructure based on a very difficult to understand set of rules from SAP.  PowerVM has no such limits or rules and allows up to 8 concurrent production instances, each up to 16TB for S/4 or SoH, up to the physical limits of the system.  If the cloud provider is offering a shared environment, are they running under SAP’s definition of “supported” or are they taking the chance and running “unsupported”?  Lastly, if it is a shared environment, is it possible that your performance or security may suffer because of another client’s use of that shared infrastructure?
  • What availability is required? 99.8%?  99.9%?  99.95%? 4 nines or higher?  Not all cloud providers can address the higher limits, so you should be clear about what your business requires (see the sketch after this list for what these levels mean in downtime per year).
  • Is HA mandatory? HA is usually an option, at a higher price.  The type of HA you desire may, or may not, be offered by each cloud provider.  Periodic testing of that HA solution may, or may not, be offered, so if you need or expect this, make sure you ask about it.
    • For HA, what are the RPO, RTO and RTP time limits? Not all HA solutions are created equal.  How much data loss is acceptable to your business and how quickly must you be able to get back up and running after a failure?  RTP is a term that you may not have heard too often and refers to “Return to Processing”, i.e. it is not enough to get the system back to a point of full data integrity and ready to work; the system must be at a point that the business expects, with a clear understanding of which transactions have or have not been committed.  Imagine a situation where a customer places an order or pays a bill but it gets lost, or where you pay a supplier and mistakenly pay them a second time.
  • Is DR mandatory and what are the RPO, RTO and RTP time limits? Same rationale for these questions as for HA; once again, DR, when available, is always offered at an additional charge and is highly dependent on the type of replication used, with disk based replication usually less expensive than HANA System Replication but with a longer RTO/RTP.
    • Incremental costs for DR replication bandwidth? Often overlooked are the network costs of replicating data from the primary site to the DR site, but this is clearly a line item that should not be missed.  Some customers may decide to use two different cloud providers for primary and DR, in which case not only may pricing for each be different, but WAN capacity may be even more critical and pricey.
    • Disaster readiness assessment, mock drills or full, periodic data center flips? Having a DR site available is wonderful, provided that when you actually need it, everything works correctly.  As this is an entire discussion unto itself, let it be said that every business recovery expert will tell you to plan and test thoroughly.  Make sure you discuss this with a potential cloud provider and have the price to support whatever you require included in their bid.
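As promised in the availability question above, here is a quick sketch of what those availability percentages translate to in allowable downtime per year (simple arithmetic, no allowance for planned maintenance windows):

```python
def downtime_hours_per_year(availability_pct: float) -> float:
    """Allowed downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * 365 * 24

for level in (99.8, 99.9, 99.95, 99.99):
    print(level, round(downtime_hours_per_year(level), 1), "hours/year")
# 99.8 -> 17.5, 99.9 -> 8.8, 99.95 -> 4.4, 99.99 -> 0.9
```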


I said this would be a two part post, but there is simply too much to include in only 2 parts, so the parts will go on until I address all of the questions and issues.

May 4, 2017 Posted by | Uncategorized | Leave a comment

What should you do when your LoBs say they are not ready for S/4HANA – part 2, Why choice of Infrastructure matters

Conversions to S/4HANA usually take place over several months and involve dozens to hundreds of steps.  With proper care and planning, these projects can run on time, within budget and result in a final Go-Live that is smooth and occurs within an acceptable outage window.  Alternately, horror stories abound of projects delayed, errors made, outages far beyond what was considered acceptable by the business, etc.  The choice of infrastructure to support a conversion may be the last thing on anyone’s mind, but it can have a dramatic impact on achieving a successful outcome.

A conversion includes running many pre-checks[i], which can run for quite a while[ii], meaning they can drive CPU utilization to high levels for a significant duration and impact other running workloads.  As a result, consultants routinely recommend that you make a copy of any system, especially production, against which you will run these pre-checks.  SAP recommends that you run these pre-checks against every system to be converted, e.g. Development, Test, Sandbox, QA, etc.  If those systems are being used for on-going work, it may be advisable to also make copies of them and run those copies on other systems or within virtual machines which can be limited to avoid causing performance issues with other co-resident virtual machines.

In order to find issues and correct them, conversion efforts usually involve a phased approach with multiple conversions of supporting systems, e.g. Dev, Test, Sandbox, QA, using a tool such as SAP’s Software Update Manager with the Database Migration Option (SUM w/DMO).  One of the goals of each run is to figure out how long it will take and what actions need to be taken to ensure the Go-Live production conversion completes within the required outage window, including any post-processing, performance tuning, validation and backups.

In an attempt to keep expenses low, many customers will choose to use existing systems or VMs in addition to a new “target” system, or systems if HA is to be tested.  This means that the customer’s network will likely be used in support of these connections.  Taken together, the use of shared infrastructure components means that events among those shared components can impact these tests and activities.  For example, if a VM is used but not enough CPU or network bandwidth is provided, the duration of the test may extend well beyond what is planned, meaning more cost for per-hour consulting, and may not provide the insight into what needs to be fixed and how long the actual migration may take.  How about if you have plenty of CPU capacity or even a dedicated system, but the backup group decides to initiate a large database backup at the same time and on the same network that your migration test is using?  Or maybe you decide to run a test at a time that another group, e.g. operations, needs to test something that impacts it, or when new equipment or firmware is being installed and modifications to shared infrastructure are occurring, etc.  Of course, you can have good change management and carefully arrange when your conversion tests will occur, which means that you may have restricted windows of opportunity at times that are not always convenient for your team.

Let’s not forget that the application/database conversion is only one part of a successful conversion.  Functional validation tests are often required which could overwhelm limited infrastructure or take it away from parallel conversion tasks.  Other easily overlooked but critical tasks include ensuring all necessary interfaces work; that third party middleware and applications install and operate correctly; that backups can be taken and recovered; that HA systems are tested with acceptable RPO and RTO; that DR is set up and running properly also with acceptable RPO and RTO.  And since this will be a new application suite with different business processes and a brand new Fiori interface, training most likely will be required as well.

So, how can the choice of infrastructure make a difference to this almost overwhelming set of issues and requirements?  It comes down to flexibility.  Infrastructure which is built on virtualization allows many of these challenges to be easily addressed.  I will use an existing IBM Power Systems customer running Oracle, DB2 or Sybase to demonstrate how this would work.

The first issue dealt with running pre-checks on existing systems.  If those existing systems are Power Systems and enough excess capacity is available, PowerVM, the IBM virtualization hypervisor, allows a VM to be started with an exact copy of production, either passed through normal post-processing such as BDLS to create a non-production copy or cloned and placed behind a network firewall.  This VM could be fenced off logically and throttled such that production running on the same system would always be given preference for CPU resources.  By comparison, a similar database located on an x86 system would likely not be able to use this process as the database is usually running on bare-metal systems for which no VM can be created.

Alternately, for Power Systems, the exact same process could be utilized to carve out a VM on a new HANA target system, and this is where the real value starts to emerge.  Once a copy or clone is available on the target HANA on Power system, as much capacity can be allocated to the various pre-checks and related tasks as needed, without any concern for the impact on production or the need to throttle these processes, thereby optimizing the duration of these tasks.  On the same system, HANA target VMs may be created.  As mock conversions take place, an internal virtual network may be utilized.  Not only is such a network faster by a factor of 2 or more, but it is dedicated to this single purpose and completely unaffected by anything else going on within the datacenter network.  No coordination is required beyond the conversion team, which means that there is no externally imposed delay to begin a test, or constraints on how long such a test may take or, for that matter, how many times such a test may be run.

The story only gets better.  Remember, SAP suggests you run these pre-checks on all non-prod landscapes.  With PowerVM, you may fire up any number of different copy/clone VMs and/or HANA VMs.  This means that as you move from one phase to the next, from one instance to the next, and from conversion to production, you can run a static validation environment while other tasks continue, conduct training classes, or run many different phases for different projects at the same time; PowerVM enables the system to respond to your changing requirements.  This helps avoid the need to purchase extra interim systems: you buy when needed, not significantly ahead of time as the inflexibility of other platforms often requires.  You can even simulate an HA environment to allow you to test your HA strategy without needing a second system, up to the physical limits of the system, of course.  This is where a tool like SAP’s TDMS, Test Data Migration Server, might come in very handy.

And when it comes time for the actual Go-Live conversion, the running production database VM may be moved live from the “old” system, without any downtime, to the “new” system and the migration may now proceed using the virtual, in-memory network at the fastest possible speed and with all external factors removed.  Of course, if the “old” system is based on POWER8, it may then be used/upgraded for other HANA purposes.  Prior Power Systems as well as current generation POWER8 systems can be used for a wide variety of other purposes, both SAP and those that are not.

Bottom line: The choice of infrastructure can help you eliminate external influences that cause delays and complications to your conversion project, optimize your spend on infrastructure, and deliver the best possible throughput and lowest outage window when it comes to the Go-Live cut-over.  If complete control over your conversion timeline was not enough, avoidance of delays keeps costs for non-fixed cost resources to a minimum.  For any SAP customer not using Power Systems today, this flexibility can provide enormous benefits, however the process of moving between systems would be somewhat different.  For any existing Power Systems customer, this flexibility makes a move to HANA on Power Systems almost a no-brainer, especially since IBM has so effectively removed TCA as a barrier to adoption.

[i] https://blogs.sap.com/2017/01/20/system-conversion-to-s4hana-1610-part-2-pre-checks/

[ii] https://uacp.hana.ondemand.com/http.svc/rc/PRODUCTION/pdfe68bfa55e988410ee10000000a441470/1511%20001/en-US/CONV_OP1511_FPS01.pdf page 19

April 26, 2017 Posted by | Uncategorized | Leave a comment

SAP HANA on Power – status update

This entry has been superseded by a new one: https://saponpower.wordpress.com/2014/06/06/there-is-hop-for-hana-hana-on-power-te-program-begins/


After Vishal Sikka’s announcement that SAP was investigating the potential of HANA on IBM Power Systems, it seemed that all that was needed for this concept to become a reality was for IBM to invest in the resources to aid SAP in porting and optimization of SAP HANA on Power (HoP) and for customers to weigh in on their desire for such a solution.

Many, very large customers told us that they did let SAP know of their interest in HoP.  IBM and SAP made the necessary investments for a proof of concept with HoP.  This successful effort was an example of the outstanding results that happen when two great companies cooperate and put some of their best people together.  However, there are still no commitments to deliver HoP in 2013.  SAP apparently has not ruled out such a solution at some point in the future.  So, why should you care since HANA already runs on x86?

Simple answer.  Are you ready to bet your business on x86?  

Do Intel systems offer the scalability that your business requires and can those systems react fast enough to changing business conditions?  Power scales far higher than x86, has no artificial limitations and responds to changing demands almost instantly.

Are x86 systems reliable enough?   Power Systems inherited a wide array of self correcting and fault tolerant features from the mainframe, still the standard for reliability in the industry.

Are x86 systems secure enough?   Despite the best attempts by hackers, PowerVM has still never been breached.

Can you exploit virtualization or will you have to go back to a 1990s concept of islands of automation?  The PowerVM hypervisor is part of every Power system, so it is virtualized by default and the journey that most customers have been on for most of this millennium can continue unabated.

What can you do about this?  Speak up!!  Call your SAP Account Executive and send them notes.  Let them know that you are unwilling to take a chance on allowing your SAP Business Suite database systems to be placed on anything less than the most reliable, scalable, secure and flexible systems available, i.e. IBM Power Systems.    Remind SAP that Business Suite DB already runs very well on current Power Systems and that until SAP is willing to support this platform for HANA, there is very little compelling reason for you to consider a move to HANA.

Sapphire is just a week away.  This may be the best opportunity for you to deliver this message as most of SAP’s leadership will be present in Orlando.  If they hear this message from enough customers, it is unlikely that they will simply ignore it.

May 6, 2013 Posted by | Uncategorized | 1 Comment

The top 3 things that SAP needs are memory, memory and I can’t remember the third. :-) A review of the IBM Power Systems announcements with a focus on the memory enhancements.

While this might not exactly be new news, it is worthwhile to consider the value of the latest Power Systems announcements for SAP workloads.  On October 12, 2011, IBM released a wide range of enhancements to the Power Systems family.  The ones that might have received the most publicity, not to mention new model numbers, were valuable but not the most important part of the announcement, from my point of view.  Yes, the new higher MHz Power 770 and 780 and the ability to order a 780 with 2 chips per socket thereby allowing the system to grow to 96 cores were certainly very welcome additions to the family.  Especially nice was that the 3.3 GHz processors in the new MMC model of the 770 came in at the same price as the 3.1 GHz processors in the previous MMB model.  So, 6.5% more performance at no additional cost.

For SAP, however, raw performance often takes second fiddle to memory.  The old rule is that for SAP workloads, we run out of memory long before we run out of CPU.  IBM started to address this issue in 2010 with the announcement of the Active Memory Expansion (AME) feature of POWER7 systems.  This feature allows for dynamic compression/decompression of memory pages, thereby making memory appear to be larger than it really is.  The administrator of a system can select the target “expansion” and the system will then build a “compressed” pool in memory into which pages are compressed and placed, starting with those pages less frequently accessed and moving toward those more frequently accessed.  As pages are touched, they are uncompressed and moved into the regular memory pool from which they are accessed normally.  Applications run unchanged as AIX performs all of the moves without any interaction or awareness required by the application.  The point at which response time or throughput degrades, or a large amount of CPU overhead starts to occur, is the “knee of the curve”; the expansion target should be set slightly below that point.  A tool called AMEPAT allows the administrator to “model” the workload prior to turning AME on, or for that matter on older hardware, as long as the OS level is AIX 6.1 TL4 SP2 or later.

Some workloads will see more benefit than others.  For instance, during internal tests run by IBM, the 2-tier SD benchmark showed outstanding opportunities for compression and hit 111% expansion, e.g. 10GB of real memory appears to be 21GB to the application, before response time or throughput showed any negative effect from the compression/decompression activity.  During testing of a retail BW workload, 160% expansion was reached.  Even database workloads tend to benefit from AME.  DB2 databases, which already feature outstanding compression, have seen another 30% or 40% expansion.  The reason for this difference comes from the different approaches to compression.  In DB2, if 1,000 residences or businesses have an address on Main Street, Austin, Texas (had to pick a city, so selected my own), DB2 replaces Main Street, Austin, Texas in each row with a pointer to another table that has a single row entitled Main Street, Austin, Texas.  AME, by comparison, is more of an inline compression, e.g. if it sees a repeating pattern, it can replace that pattern with a symbol that represents the pattern and how often it repeats.  Oracle recently announced that they would also support AME.  The amount of expansion with AME will likely vary from something close to DB2, if Oracle Advanced Compression is used, to significantly higher if Advanced Compression is not used, since many more opportunities for compression will likely exist.
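A minimal sketch of the expansion arithmetic described above; the expansion percentages are the ones quoted in this post, and the 10GB real memory size is just an example:

```python
def apparent_memory_gb(real_gb: float, expansion_pct: float) -> float:
    """Apparent memory size when AME runs at a given expansion percentage."""
    return real_gb * (1 + expansion_pct / 100)

print(round(apparent_memory_gb(10, 111), 1))   # -> 21.1  (the 2-tier SD benchmark example)
print(round(apparent_memory_gb(10, 160), 1))   # -> 26.0  (the retail BW workload)
print(round(apparent_memory_gb(10, 35), 1))    # -> 13.5  (a DB2 database at roughly 30-40%)
```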

So, AME can help SAP workloads close the capacity gap between memory and CPU.  Another way to view this is that this technology can decrease the cost of Power Systems by either allowing customers to purchase less memory or to place more workloads on the same system, thereby driving up utilization and decreasing the cost per workload.  It is worthwhile to note that many x86 systems have also tried to address this gap, but as none offer anything even remotely close to AME, they have instead resorted to more DIMM slots.  While this is a good solution, it should be noted that twice the number of DIMMs requires twice the amount of power and cooling and suffers from twice the failures, i.e. TANSTAAFL: there ain’t no such thing as a free lunch.

In the latest announcements, IBM introduced support for the new 32GB DIMMs.  This effectively doubled the maximum memory on most models, from the 710 through the 795.  Combined with AME, this decreases or eliminates the gap between memory capacity and CPU and makes these models even more cost effective, since more workloads can share the same hardware.  Two other systems received similar enhancements recently, but these were not part of the formal announcement.  The two latest blades in the Power Systems portfolio, the PS703 and the PS704, were announced earlier this year with twice the number of cores but the same memory as the PS701 and PS702 respectively.  Now, using 16GB DIMMs, the PS703/PS704 can support up to 256GB/512GB of memory, making these blades very respectable especially for application server workloads.  Add to that, with the Systems Director Management Console (SDMC), AME can be implemented for blades allowing for even more effective memory per blade.  Combined, these enhancements have closed the price difference even further compared to similar x86 blades.

One last memory related announcement may have been largely overlooked by many because it involved an enhancement to the Active Memory Sharing (AMS) feature of PowerVM.  AMS has historically been a technology that allowed for overcommitment of memory.  While CPU overcommitment is now routine, memory overcommitment means that some percentage of memory pages will have to be paged out to solid state or other types of disk.  The performance penalty is well understood, making this not appropriate for production workloads but potentially beneficial for many other non-prod, HA or DR workloads.  That said, few SAP customers have implemented this technology due to the complexity and performance variability that can result.  The new announcement introduces Active Memory™ Deduplication for AMS implementations.  Using this new technology, PowerVM will scan partitions after they finish booting and locate identical pages within and across all partitions on the system.  When identical pages are detected, all copies, except one, will be removed and all memory references will point to the same “first copy” of the page.  Since PowerVM is doing this, even the OSs can be unaware of this action.  Instead, as this post processing proceeds, the PowerVM free memory counter will increase until a steady state has been reached.  Once enough memory is freed up in this manner, new partitions may be started.  It is quite easy to imagine that a large number of pages are duplicates, e.g. each instance of an OS has many read-only pages which are identical, and multiple instances of an application, e.g. SAP app servers, will likewise have executable pages which are identical.  The expectation is that another 30% to 40% effective memory expansion will occur for many workloads using this new technology.  One caveat, however: since the scan occurs after a partition boots, operationally it will be important to have a phased booting schedule to allow the dedupe process to free up pages prior to starting more partitions, thereby avoiding the possibility of paging.  Early testing suggests that the dedupe process should arrive at a steady state approximately 20 minutes after partitions are booted.

The bottom line is that with the larger DIMMS, AME and AMS Memory Deduplication, IBM Power Systems are in a great position to allow customers to fully exploit the CPU power of these systems by combining even more workloads together on fewer servers.  This will effectively drive down the TCA for customers and remove what little difference there might be between Power Systems and systems from various x86 vendors.

November 29, 2011 Posted by | Uncategorized | 4 Comments

Excellent PowerVM for SAP document

About 3 years ago, the IBM SAP Competency Center in Germany produced a very good document that took the reader through the reasons and rationale for virtualizing SAP landscapes and then explained all of the technologies available on the Power Systems platform to allow users to accomplish that goal.  As many improvements have been introduced in the Power Systems line as well as with its Systems Software, a new updated version was needed.  The Competency Center rose to the task and produced this completely refreshed document.

http://www.redbooks.ibm.com/abstracts/sg247564.html?Open


Here is the table of contents to give you a small taste of what it covers.

Chapter 1. From a non-virtualized to a virtualized infrastructure

Chapter 2. PowerVM virtualization technologies

Chapter 3. Best practice implementation example at a customer site

Chapter 4. Hands-on management tasks

Chapter 5. Virtual I/O Server

Chapter 6. IBM PowerVM Live Partition Mobility

Chapter 7. Workload partitions

Chapter 8. SAP system setup for virtualization

Chapter 9. Monitoring

Chapter 10. Support statements by IBM and SAP


It is not what one might call “light reading”, but it is a comprehensive and well written guide to the leading edge virtualization technologies offered by IBM on Power Systems and how SAP landscapes can benefit from them.

October 26, 2011 Posted by | Uncategorized | Leave a comment

vSphere 5.0 compared to PowerVM

Until recently, VMware partitions suffered from a significant scalability limitation. Each partition could scale to a maximum of 8 virtual processors (vp) with vSphere 4.1 Enterprise Edition. For many customers and uses, this did not pose much of an issue as some of the best candidates for x86 virtualization are the thousands of small, older servers which can easily fit within a single core of a modern Intel or AMD chip. For SAP customers, however, the story was often quite different. Eight vp does not equate to 8 cores; it equates to 8 processor threads. Starting with Nehalem, Intel offered HyperThreading which allowed each core to run two different OS threads simultaneously. This feature boosted throughput, on average, by about 30% and just about all benchmarks since that time have been run with HyperThreading enabled. Although it is possible to disable it, few customers elect to do so as it removes that 30% increased throughput from the system. With HyperThreading enabled, 8 VMware vp utilize 4 cores/8 threads, which can be as little as 20% of the cores on a single chip. Put in simple terms, this can be as little as 5,000 SAPS depending on the version and MHz of the chip. Many SAP customers routinely run their current application servers at 5,000 to 10,000 SAPS, meaning moving these servers to VMware partitions would result in the dreaded hotspot, i.e. bad performance and a flood of calls to the help desk. By comparison, PowerVM (IBM’s Power Systems virtualization technology) partitions may scale as large as the underlying hardware allows and, if that limit is reached, may be migrated live to a larger server, assuming one exists in the cluster, allowing the partition to continue operating without interruption at a much higher partition size.
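A minimal sketch of the thread/core arithmetic above. The per-core SAPS rating here is a hypothetical placeholder chosen only to reproduce the "as little as 5,000 SAPS" figure; actual ratings depend on the processor version and MHz.

```python
def vp_capacity(virtual_processors: int, threads_per_core: int = 2,
                saps_per_core: float = 1_250):
    """With HyperThreading enabled, each vCPU maps to a hardware thread, so the
    physical core count is vp / threads_per_core; SAPS follows from the per-core rating."""
    cores = virtual_processors / threads_per_core
    return cores, cores * saps_per_core

print(vp_capacity(8))   # -> (4.0, 5000.0): 8 vp occupy 4 cores, roughly 5,000 SAPS
```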

VMware recently introduced vSphere 5.0. Among a long list of improvements is the ability to utilize 32 vp for a single partition. On the surface, this would seem to imply that VMware can scale to all but a very few large demands. Once you dig deeper, several factors emerge. As vSphere 5.0 is very new, there are not many benchmarks and even less customer experience. There is no such thing as a linearly scalable server, despite benchmarks that seem to imply this, even from my own company. All systems have a scalability knee of the curve. While some workloads, e.g. AIM7, when tested by IBM showed up to 7.5 times the performance with 8 vp compared to 1 vp on a Xeon 5570 system with vSphere 4.0 update 1, it is worthwhile to note that this was only achieved when no other partitions were running, clearly not the reason why anyone would utilize VMware. In fact, one would expect just the opposite, that an overcommitment of CPU resources would be utilized to get the maximum throughput of a system. On another test, DayTrader2.0 in JDBC mode, a scalability maximum of 4.67 times the performance of a single thread was reached with 8 vp, once again while running no other VMs. It would be reasonable to assume that VMware has done some scaling optimization, but it would be premature and quite unlikely to assume that 32 vp will scale even remotely close to 4 times the performance of an 8 vp VM. When multiple VMs run at the same time, VMware overhead and thread contention may reduce effective scaling even further. For the time being, a wise customer would be well advised to wait until more evidence is presented before assuming that all scaling issues have been resolved.

But this is just one issue and, perhaps, not the most important one. SAP servers are by their very nature mission critical. For database servers, any downtime can have severe consequences. For application servers, depending on how customers implement their SAP landscapes and the cost of downtime, some outages may not have as large a consequence. It is important to note that when an application server fails, the context for each user’s session is lost. In a best case scenario, the users can recall all necessary details to re-run the transactions in flight after logging back on to another application server. This means that the only loss is the productivity of that user multiplied by the number of users previously logged on and doing productive work on that server. Assuming 500 users and 5 minutes to get logged back on and take each transaction from initiation through to completion, this is 2,500 minutes of lost productivity, which at a loaded cost of $75,000 per employee per year is a total loss to the company of roughly $1,500 per occurrence. With one such occurrence per application server per year, this would result in roughly $7,500 of cost over 5 years and should be included in any comparison of TCO. Of course, this does not take into consideration any IT staff time required to fix the server, any load on the help desk to help resolve issues, nor any political cost to IT if failures happen too frequently. But what happens if the users are unable to recall all of the details necessary to re-run the transactions, or what happens if tight integration with production requires that manufacturing be suspended until all users are able to get back to where they had been?  The costs can escalate very quickly.
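The productivity-loss arithmetic above, as a minimal sketch. The loaded cost and work-year assumptions are the ones implied by the example; adjust them to your own figures.

```python
def outage_cost(users: int, minutes_lost_per_user: float,
                loaded_cost_per_year: float = 75_000,
                work_minutes_per_year: float = 2_080 * 60) -> float:
    """Lost end-user productivity from one application-server failure."""
    cost_per_minute = loaded_cost_per_year / work_minutes_per_year
    return users * minutes_lost_per_user * cost_per_minute

# 500 users losing 5 minutes each: roughly $1,500 per occurrence,
# or about $7,500 over 5 years at one failure per year.
print(round(outage_cost(500, 5)))   # -> 1502
```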

So, what is my point? All x86 hypervisors, including VMware 4.1 and 5.0, are software layers on top of the hardware. In the event of an uncorrectable error in the hardware, the hypervisor usually fails and, in turn, takes down all VMs that it is hosting. Furthermore, problems are not just confined to the CPU, but could be caused by memory, power supplies, fans or a large variety of other components. I/O is yet another critical issue. VMware provides shared I/O resources to partitions, but it does this sharing within the same hypervisor. A device driver error, physical card error or, in some cases, even an external error in a cable, for example, might result in a hypervisor critical error and resulting outage. In other words, the hypervisor becomes a very large single point of failure. In order to avoid the sort of costs described above, most customers try to architect mission critical systems to reduce single points of failure not introduce new ones.

PowerVM takes the opposite approach. First, it is implemented in hardware and firmware. As the name implies, hardware is hardened meaning it is inherently more reliable and far less code is required since many functions are built into the chip.

Second, PowerVM acts primarily as an elegant dispatcher. In other words, it decides which partition executes next in a given core, but then it gets out of the way and allows that partition to execute natively in that core with no hypervisor in the middle of it. This means that if an uncorrectable error were to occur, an exceedingly rare event for Power Systems due to the wide array of fault tolerant components not available in any x86 server, in most situations the error would be confined to a single core and the partition executing in that core at that moment.

Third, sharing of I/O is done through the use of a separate partition called the Virtual I/O (VIO) server. This is done to remove this code from the hypervisor, thereby making the hypervisor more resilient, and also to allow for extra redundancy. In most situations, IBM recommends that customers utilize more than one VIO server and spread I/O adapters across those servers with redundant virtual connections to each partition. This means that if an error were to occur in a VIO server, once again a very rare event, only that VIO server might fail; the other VIO servers would not fail and there would be no impact on the hypervisor since it is not involved in the sharing of I/O at all. Furthermore, partitions would not fail since they would be multipathing virtual devices across more than one VIO server.

So even if VMware can scale beyond 8vp, the question is how much of your enterprise are you ready to place on a single x86 server? 500 users? 1,000 users? 5,000 users? Remember, 500 users calling the help desk at one time would result in long delays. 1,000 at the same time would result in many individuals not waiting and calling their LOB execs instead.

In the event that this is not quite enough of a reason to select Power and PowerVM over x86 with VMware, it is worthwhile to consider the security exposure differences. This has been covered already in a prior blog entry comparing Power to x86 servers, but is worth noting again. PowerVM has no known vulnerabilities according to the National Vulnerability Database, http://nvd.nist.gov. By comparison, a search on that web site for VMware results in 119 hits. Admittedly, this includes older versions as well as workstation versions, but it is clear that hackers have historically found weaknesses to exploit. VMware has introduced vShield with vSphere 5.0, a set of technologies intended to make VMware more secure, but it would be prudent to wait and see whether this closes all holes or opens new ones.

Also covered in the prior blog entry, the security of the hypervisor is only one piece of the equation. Equally, or perhaps more, important is the security of the underlying OSs. Likewise, AIX is among the least vulnerable OSs, with Linux and Windows having an order of magnitude more vulnerabilities. Also covered in that blog was a discussion about problem isolation, determination and vendor ownership of problems to drive them to successful resolution. With IBM, almost the entire stack is owned by IBM and supported for mission critical computing, whereas with x86, the stack is a hodgepodge of vendors with different support agreements, capabilities and views on who should be responsible for a problem, often resulting in finger pointing.

There is no question that VMware has tremendous potential for applications that are not mission critical as well as being an excellent fit for many non-production environments. For SAP, the very definition of mission critical, a more robust, more scalable and better secured environment is needed and Power Systems with PowerVM does an excellent job of delivering on these requirements.

Oh, I did not mention cost. With the new memory based pricing model for vSphere 5.0, applications such as SAP, which demand enormous quantities of memory, may easily exceed the new limits for memory pool size, forcing the purchase of additional VMware licenses. Those extra license costs, and their associated maintenance, can easily add enough cost that any price difference between Power and x86 narrows to the point of being almost meaningless.

August 29, 2011 Posted by | Uncategorized | 5 Comments

IBM Power Systems compared to x86 for SAP landscapes

It seems like every other day, someone asks me to help them justify why a customer should select IBM Power Systems over x86 alternatives for new or existing SAP customers. Here is a short summary of the key attributes that most customers require and the reasons why Power Systems excels or conversely, where x86 systems fall short.

TCO – Total Cost of Ownership is usually at the top of everyone’s list. Often this is confused with TCA or Total Cost of Acquisition. TCA can be very important for some individuals within customer organizations, especially when those individuals are only responsible for capital acquisition costs and not operational costs such as maintenance, power, cooling, floor space, personnel, software and other assorted costs. TCA can also be important when only capital budgets are restricted. For most customers, however, TCO is far more important. Some evaluators compare systems, one for one. While this might seem to make sense, would it be reasonable to compare a pickup truck and an 18-wheeler semi? Obviously not, so, to do a fair job of comparing TCO, a company must look at all aspects, purposes and effects of different choices. For instance, with IBM Power Systems, customers routinely utilize PowerVM, the IBM Power virtualization technology, to combine many different workloads including ERP, CRM, BW, EP, SCM, SRM and other production database and application servers, high availability servers, backup/recovery servers and non-production servers onto a single, small set of servers. While some of this is possible with x86 virtualization technologies, it is rarely done, partly due to “best practices” separation of workloads and also due to support restrictions by some software products, such as Oracle database, when used in a virtualized x86 environment. This typically results in a requirement for many more servers. Likewise, many Power Systems customers routinely drive their utilization to 80% or higher, where the best of x86 virtualization customers rarely drive to even 50% utilization. Taken together, it is very common to see 2 or 3 times the number of systems for x86 customers than for equivalently sized Power Systems customers and I provided only two reasons of the many frequently experienced by SAP customers. So, where an individual Power System might be slightly higher in cost than the equivalent x86 server, full SAP landscapes on Power Systems often require far fewer systems. Between a potentially lower cost of acquisition and the associated lower cost of management, less power, cooling, floor space and often lower cost of third party software, customers can see a significantly lower TCO with IBM Power Systems.

For customers which are approaching the limits on their data centers, either in terms of floor space, power or cooling, x86 horizontal proliferation may drive the need for data center expansion that could cost into the many millions of dollars. Power Systems may help customers to achieve radically higher levels of consolidation through its far more advanced virtualization and much higher scalability thereby potentially avoiding the need for that data center expansion. The savings, in this event, would make the other savings seem trivial by comparison.

Reliability – A system which is low cost but suffers relatively high numbers of outages may not be the best option for mission critical systems such as SAP. IBM Power Systems feature an impressive array of reliability technologies that are not available on any x86 system. This starts with failure detection circuitry which is built into the entire system including the processor chips and is called First Failure Data Capture (FFDC). FFDC has been offered and improved upon since the mid-90’s for Power Systems and its predecessors. This unique technology captures soft and hard errors from within the hardware allowing the service processor, standard with every system, to predict failures which could impact application availability and take preventive action such as dynamically deallocating components from adapter cards to memory and cache lines and even processor cores. Intel, starting with Nehalem-EX, offers Machine Check Architecture Recovery (MCA), their first version of a similar concept. As a first version, it is doubtful that it can approach the much more mature FFDC technology from IBM. Even more important is the “architecture” which, once errors are detected, passes that information, not to a service processor, but to the Operating System or Virtualization Manager with the “option” for that software to fix the problem in the hardware. This is like your car telling you that your braking system has a problem. Even if you have the mechanical ability to run advanced diagnostics, remove and replace parts, bleed the system, etc., this would involve a significant outage and most certainly could not be done on the fly. Likewise, it is extremely doubtful that Microsoft, for instance, is going to invest in software to fix a problem in an Intel processor especially since this area is likely going to change and only addresses one potential area of reliability. Furthermore, does Microsoft actually want to take on responsibility for hardware reliability? This is just one example, of many, that affect uptime, but without which SAP systems can be exposed.

Equally important is what happens if a problem does occur. Unless you are very lucky, you have experienced the Blue Screen of Death at least once, or a hundred times, in your past. This is one of those wonderful things that can occur when you don’t have a comprehensive reliability architecture such as that of IBM Power Systems. With x86 systems, essentially, the OS reports that a problem has occurred which could be related to the CPU, system hardware, OS, device driver, firmware, memory, application software, adapter cards, etc. and that your best course of action is to remove the last thing you installed and reboot your system. When you call your system vendor, they might suggest that you contact your OS vendor, which might suggest you contact your virtualization vendor, which might suggest the problem lies in your BIOS, and on and on. Who takes responsibility and ownership and drives the problem to resolution? With IBM Power Systems, IBM develops and supports its own CPU, firmware, system hardware, virtualization, device drivers, OS (assuming AIX or i for Business), memory controllers and buffer chips, and has a comprehensive set of rules and detection circuitry for third party hardware and software. This means that in the very rare event that an intermittent or hard to identify error occurs which is not detected and corrected automatically, IBM takes ownership and resolves the problem unless it is determined that a third party piece of hardware or software caused the problem. In that case, IBM works diligently with its partners to resolve it, which includes IBM personnel who work on site at many of their partner locations such as Oracle and SAP.

Security – Often an afterthought, but potentially an extremely expensive one, security should be carefully considered. PowerVM has never been successfully hacked, as noted at http://nvd.nist.gov. AIX has approximately 0% of Critical and High vulnerabilities and 2% of all OS vulnerabilities, compared with 73% and 27% for Microsoft, respectively, and 16% and 31% for Linux, respectively (X-Force report, Mid-year 2010: http://www-935.ibm.com/services/us/iss/xforce/trendreports/). A successful hack could result in a mere inconvenience for the IT staff, the loss of systems and/or, in a worst case scenario, the theft of proprietary and/or personal data. SAP systems usually hold the crown jewels of an enterprise customer and should be among the best protected of any customer systems.

Bottom line – Where individual x86 systems may have a lower price tag than the equivalent Power System, full SAP landscapes will often require far fewer systems with Power Systems resulting in a lower TCO. Add to that much better reliability, fault detection, comprehensive problem resolution and ownership and rock solid security and the case for IBM Power Systems for SAP landscapes is pretty overwhelming.

August 15, 2011 Posted by | Uncategorized | 6 Comments