SAPonPower

An ongoing discussion about SAP infrastructure

How to ensure Business Suite on HANA infrastructure is mission critical ready

Companies that plan on running Business Suite on HANA (SoH) require systems that are at least as fault tolerant as their current mission critical database systems.  Actually, the case can be made that these systems have to exceed current reliability design specifications due to the intrinsic conditions of HANA, most notably, but not limited to, extremely large memory sizes.  Other factors that will further exacerbate this include MCOD, MCOS, Virtualization and the new SPS09 feature, Multi-Tenancy.

A customer with 5TB of data in their current uncompressed Suite database will most likely see a reduction due to HANA compression (SAP note 1793345 and the HANA cookbook²), bringing their system size, including HANA work space, to roughly 3TB.  That same customer may have previously been using a database buffer of 100GB +/- 50GB.  At a current buffer size of 100GB, their new HANA system will require 30 times as much memory as the conventional database did.  All else being equal, 30x of any component will result in 30x failures.  In 2009, Google engineers wrote a white paper in which they noted that 8% of DIMMs experienced errors every year, most of them hard errors, and that when a correctable error occurred in a DIMM, there was a much higher chance that another would occur in that same DIMM, leading, potentially, to uncorrectable errors.¹  As memory technology has not changed much since then, other than getting denser, which could make errors from cosmic rays and other sources even more likely, the risk has likely not decreased.  As a result, unless companies wish to take chances with their most critical asset, they should elect to use the most reliable memory available.
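
Spelled out as a quick calculation, this is a back-of-the-envelope sketch using only the figures quoted above; real sizing belongs to SAP's sizing reports and note 1793345:

```python
source_db_gb   = 5 * 1024      # uncompressed Suite database
hana_memory_gb = 3 * 1024      # compressed data + HANA work space (per the text)
old_buffer_gb  = 100           # typical conventional DB buffer cache

print(f"effective compression: {source_db_gb / hana_memory_gb:.2f}:1")
print(f"RAM exposed to errors: {hana_memory_gb / old_buffer_gb:.0f}x "
      f"the old buffer")       # ~31x, i.e. the ~30x cited in the text
```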

IBM provides exactly that, best-of-breed open systems memory reliability, not as an option at a higher cost, but included with every POWER8 system, from the one and two socket scale-out systems to even more advanced capabilities with the 4 & 8-socket systems, some of which will scale to 16 sockets (announced as a Statement of Direction for 2015).  This memory protection is represented in multiple discrete features that work together to deliver unprecedented reliability.  The following gets into quite a bit of technical detail, so if you don’t have your geek hat on (mine can’t be removed as it was bonded to my head when I was reading Heinlein in 6th grade; yes, I know that dates me), then you may want to jump to the conclusions at the end.

Chipkill – Essentially a RAID-like technology that spans data and ECC recovery information across multiple memory chips such that, in the event of a chip failure, operations may continue without interruption.  Using x8 chips, Chipkill provides Single Device Data Correction (SDDC) and, with x4 chips, Double Device Data Correction (DDDC) due to the way in which data and ECC are spread across more chips simultaneously.
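
To make the RAID analogy concrete, here is a deliberately simplified sketch of the idea: stripe a cache line across several chips plus a parity chip, then rebuild a failed chip’s contribution from the survivors.  Real Chipkill uses symbol-based ECC codes rather than simple XOR parity, so treat this purely as an illustration of the concept:

```python
from functools import reduce

def stripe(line: bytes, n_chips: int):
    """Split a cache line across n_chips data chips and compute XOR parity."""
    chunk = len(line) // n_chips
    chips = [line[i * chunk:(i + 1) * chunk] for i in range(n_chips)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chips)
    return chips, parity

def rebuild(chips, parity, failed: int):
    """Reconstruct a failed chip by XORing the parity with the survivors."""
    survivors = [c for i, c in enumerate(chips) if i != failed] + [parity]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

line = bytes(range(64))                # one 64-byte cache line
chips, parity = stripe(line, 8)        # eight data chips + one parity chip
assert rebuild(chips, parity, failed=3) == chips[3]   # chip 3 "fails"
print("single-chip failure corrected")
```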

Spare DRAM modules – Each rank of memory (4 ranks per card on scale-out systems, 8 ranks per card on enterprise systems) contains an extra memory chip.  This chip is used to automatically rebuild the data that was held, previously, on the failed chip in the above scenario.  This happens transparently and automatically.  The effect is two-fold:  One, once the recovery is complete, no additional processing is required to perform Chipkill recovery allowing performance to return to pre-failure levels; Two, maintenance may be deferred as desired by the customer as Chipkill can, yet again, allow for uninterrupted operations in the event of a second memory chip failure and, in fact, IBM does not even make a call out for repair until a second chip fails.

Dynamic memory migration and Hypervisor memory mirroring – These are unique technologies only available on IBM’s Enterprise E870 and E880 systems.  In the event that a DIMM experiences errors that cannot be permanently corrected using sparing capability, the DIMM is called out for replacement.  If the ECC is capable of continuing to correct the errors, the call out is known as a predictive callout indicating the possibility of a future failure.  In such cases, if an E870 or E880 has unlicensed or unassigned DIMMs with sufficient capacity, logical memory blocks using memory from a predictively failing DIMM will be dynamically migrated to the spare/unused capacity.  When successful, this allows the system to continue to operate until the failing DIMM is replaced, without concern as to whether the failing DIMM might cause any future uncorrectable error.  Hypervisor memory mirroring is a selective mirroring technology for the memory used by the hypervisor, which means that even a triple chip failure in a memory DIMM would not affect the operations of the hypervisor as it would simply start using the mirror.

L4 cache – Instead of the conventional parity or ECC protected memory buffers used by other vendors, IBM utilizes special eDRAM (a more reliable technology to start with) which not only offers dramatically better performance but also includes advanced techniques to delete cache lines for persistent recoverable and non-recoverable fault scenarios, as well as to deallocate portions of the cache spanning multiple cache lines.

Extra memory lane – The connection to memory DIMMs or cards is made up of dozens of “lanes” which we can see visually as “pins”.  POWER8 systems feature an extra lane on each POWER8 chip.  In the event of an error, the system will attempt to retry the transfer and use ECC correction; if the error is determined by the service processor to be a hard error (as opposed to a soft/transient error), the system can deallocate the failing lane and allocate the spare lane to take its place.  As a result, no downtime is incurred and planned maintenance may be scheduled at a time that is convenient for the customer since all lanes, including the “replaced” one, are still fully protected by ECC.

L2 and L3 caches likewise have an array of protection technologies, including both cache line delete and cache column repair in addition to ECC, plus special hardening called “soft latches” which makes these caches less susceptible to soft error events.

As readers of my blog know, I rarely point out one side of the equation without the other, and in this case the contrast to existing HANA capable systems could not be more dramatic, making the symbol between the two sides a very big “>”; details to follow.

Intel offers a variety of protection technologies for memory but leaves the decision as to which to employ up to customers.  This ranges from “performance mode” which has the least protection to “RAS mode” which has more protection at the cost of reduced performance.

Let’s start with the exclusives for IBM:  eDRAM L4 cache with its inherent superior protection and performance over conventional memory buffer chips, dynamic memory migration and hypervisor memory mirroring available on IBM Enterprise class servers, none of which are available in any form on x86 servers.  If these were the only advantages for Power Systems, this would already be compelling for mission critical systems, but this is only the start:

Lock Step – Intel includes technology similar to Chipkill in all of their chips, which they call Lock Step.  Lock Step utilizes two DIMMs behind a single memory buffer chip to store a 64-byte cache line + ECC data instead of the standard single DIMM, providing 1x or 2x 8-bit error detection and 8-bit error correction within a single x8 or x4 DRAM respectively (with x4 modules, this is known as Double Device Data Correction or DDDC and is similar to standard POWER Chipkill with x4 modules).  Lock Step is only available in RAS mode, which incurs a penalty relative to performance mode.  Fujitsu released a performance white paper³ describing the results of the STREAM memory bandwidth benchmark, in which Lock Step memory ran at only 57% of the speed of performance mode memory.

Lock Step is certainly an improvement over standard or performance mode in that most single device events can be corrected on the fly (and two such events serially for x4 DIMMs), but correction incurs a performance penalty above and beyond that incurred from being in Lock Step mode in the first place.  After the first such failure, for x8 DIMMs, the system cannot withstand a second failure in that Lock Step pair of DIMMs, and a callout for repair (read this as: make a planned shutdown as soon as possible) must be made to prevent a second and fatal error.  For x4 DIMMs, assuming the performance penalty is acceptable, the planned shutdown could be postponed to a more convenient time.  Remember, with the POWER spare DRAMs, no such immediate action is required.

Memory sparing – Since taking an emergency shutdown is unacceptable for a SoH system, Lock Step memory is therefore insufficient since it handles only the emergency situation but does not eliminate the need for a repair action (as the POWER memory spare does), and it incurs a performance penalty due to having to “lash” together two cards to act as one (as compared to POWER, which achieves superior reliability with a single memory card).  Some x86 systems offer memory sparing in which one rank per memory channel is configured as a spare.  For instance, with the Lenovo System x x3850, each memory channel supports 3 DIMMs or ranks.  If sparing is used, the effective memory throughput of the system is reduced by 1/3 since one of every 3 DIMMs is no longer available for normal operations, and the memory that must be purchased is increased by 50%.  In other words, 1TB of usable memory requires 1.5TB of installed memory.  The downside of sparing is that it is a predictive failure technology, not a reactive one.  According to the IBM X6 Servers: Technical Overview Redbook: “Sparing provides a degree of redundancy in the memory subsystem, but not to the extent of mirroring. In contrast to mirroring, sparing leaves more memory for the operating system. In sparing mode, the trigger for failover is a preset threshold of correctable errors. When this threshold is reached, the content is copied to its spare. The failed rank is then taken offline, and the spare counterpart is activated for use.”  In other words, this works best when you can see it coming, not after a part of the memory has failed.  When I asked a gentleman manning the Lenovo booth at TechEd && d-code about sparing, he first looked at me as if I had a horn sticking out of my head and then replied that almost no one uses this technology.  Now, I think I understand why.  This is a good option, but at a high cost, and it still falls short of POWER8 memory protection, which is both predictive and reactive and dynamically responds to unforeseen events.  By comparison, memory sparing requires a threshold to be reached and then enough time to be available to complete a full rank copy, even if only a single chip is showing signs of imminent failure.

Memory mirroring – This technology utilizes a complete second set of memory channels and DIMMs to maintain a second copy of memory at all times.  This allows for a chip or an entire DIMM to fail with no loss of data as the second copy immediately takes over.  This option, however, does require that you double the amount of memory in the system, utilize plenty of system overhead to keep the pairs synchronized and give up half of the memory bandwidth (which goes to maintaining the copy).  This option may perform better than the memory sparing option because reads occur from both copies in an interleaved manner, but writes have to occur to both synchronously.
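
Pulling the x86 options together in one place, here is a small sketch using only figures already quoted in this post (the 57% Lock Step and 69% mirroring numbers are the Fujitsu STREAM measurements; sparing follows the 3-DIMMs-per-channel example above).  These are illustrative planning numbers, not vendor specifications:

```python
# mode: (usable fraction of installed memory, relative memory throughput)
modes = {
    "performance mode": (1.0, 1.00),
    "lock step (RAS)":  (1.0, 0.57),  # Fujitsu STREAM result cited above
    "sparing":          (2/3, 2/3),   # 1 of every 3 DIMMs held in reserve
    "mirroring":        (0.5, 0.69),  # Fujitsu STREAM result cited below
}

installed_tb = 1.5
for mode, (usable, throughput) in modes.items():
    print(f"{mode:17s} usable: {installed_tb * usable:.2f} TB, "
          f"throughput: {throughput:.0%} of performance mode")
```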

Conclusions:

Memory mirroring for x86 systems is the closest option to the continuous memory availability that POWER8 delivers.  Of course, having to purchase 2TB of memory in order to have proper protection of 1TB of effective memory adds a significant cost to the system and takes away substantial memory bandwidth.  HANA utilizes memory as few other systems do.

The problem is that x86 vendors won’t tell customers this.  Why?  Now, I can only speculate, but that is why I have a blog.  The x86 market is extremely competitive.  Most customers ask multiple vendors to bid on HANA opportunities.  It would put a vendor at a disadvantage to include this sort of option if the customer has not required it of all vendors.  In turn, x86 vendors don’t want to even insinuate that they might need such additional protection, as that would imply a lack of reliability to meet mission critical standards.

So, let’s take this to the next logical step.  If a company is planning on implementing SoH using the above protection, they will need to double their real memory.  Many customers will need 4TB, 8TB or even some in the 12TB to 16TB range, with a few even larger.  For the 4TB example, an 8TB system would be required which, as of the writing of this blog post, is not currently certified by SAP.  For the 8TB example, 16TB would be required, which exceeds most x86 vendors’ capabilities.  At 12TB, only two vendors have even announced the intention of building a system to support 24TB and, at 16TB, no vendor has currently announced plans to support 32TB of memory.

Oh, by the way, Fujitsu, in the above referenced white paper, measured the memory throughput of a system with memory mirroring and found it to be 69% that of a performance optimized system.  Remember, HANA demands extreme memory throughput, and benchmarks typically use the fastest memory, not necessarily the most reliable, meaning that if sizings are based on benchmarks, they may require adjustment when more reliable memory options are utilized.  Would larger core counts then be required to drive the necessary memory bandwidth?
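
Restated as a sketch, the two penalties compound: mirroring doubles the installed memory for a given effective size, and the Fujitsu measurement suggests planning for roughly 69% of unmirrored throughput (the certified-size remarks above reflect the limits as of this post’s writing):

```python
MIRROR_FACTOR = 2.0       # installed = 2x effective memory
MIRROR_THROUGHPUT = 0.69  # Fujitsu STREAM, mirrored vs. performance mode

for effective_tb in (4, 8, 12, 16):
    print(f"{effective_tb:2d} TB effective -> "
          f"{effective_tb * MIRROR_FACTOR:2.0f} TB installed, "
          f"~{MIRROR_THROUGHPUT:.0%} of unmirrored memory throughput")
```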

Clearly, until SAP writes new rules to accommodate this necessary technology or vendors run realistic benchmarks showing just how much CPU and memory capacity is needed to support a properly mirrored memory subsystem on an x86 box, customers will be on their own to figure out what to do.

That guesswork will be removed once HANA on Power GAs, as it already includes the mission critical level of memory protection required for SoH and does so without any performance penalty.

Many thanks to Dan Henderson, IBM RAS expert extraordinaire, from whose latest POWER8 RAS whitepaper¹¹ I liberally borrowed some of the more technically accurate sentences in this post, and who reviewed this post to make sure that I properly represented both IBM and non-IBM RAS options.

¹ http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
² https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/hana-database-administration/monitoring-landscape/memory-usage/
³ http://docs.ts.fujitsu.com/dl.aspx?id=8ff6579c-966c-4bce-8be0-fc7a541b4a02
¹¹ http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=WH&infotype=SA&appname=STGE_PO_PO_USEN&htmlfid=POW03133USEN&attachment=POW03133USEN.PDF

November 19, 2014

Questions about Linux on Power and HANA on Power

A curious reader just posed some questions that I suspect many more of you also have.  Here are the questions posed, and my answers:


“I remember at a trade show some years ago asking on the IBM stand how you ran Linux on a mainframe. I was told that whilst the SLES distribution had to be recompiled you could actually run compiled x86 Linux binaries on the mainframe. I thought that was pretty clever as getting all the ISVs to recompile for Linux on a mainframe would be a nightmare.

Coming to Linux on Power the web site is unclear whether you can run compiled x86 Linux binaries on Power. I suspected that the PowerVM hypervisor may be able to emulate the Intel instructions and run the x86 binaries, but it isn’t very clear.”

Both RedHat and SUSE Linux are operating systems which have been compiled to run on the Power platform.  The vast majority of the operating system code is identical across the Power and x86 versions, with only the low level code that directly interacts with the hardware being unique to the specific platform.  Currently, both run in big endian (BE) mode, i.e. the byte order used in memory and on disk, as opposed to x86 systems, which run in little endian (LE) mode; there is no positive or negative effect of either, it is simply a design choice.  As such, most applications running in those environments today are compiled natively, not run under any emulation.  IBM did offer a “binary translation” function in 2011 called Lx86 which allowed x86 Linux applications to run unmodified on Linux on Power, but it was not widely adopted and was later removed from marketing in 2013.  In May 2014, IBM announced that it would support the software based KVM as an alternative to the hardware/firmware hypervisor.  This allows customers that want to have a single set of administrative tools and skills to utilize KVM on both Power and x86 environments.  It also enables more OSs to run on Power since, with KVM, the system may be run in LE mode.  Canonical Ubuntu is now available on Power and is the first Linux OS to run in LE mode.  Both RedHat and SUSE are also available to run under KVM; however, they currently run only in BE mode, with SUSE planning to deliver an LE version (SLES 12) in the near future.  Debian and openSUSE are also reportedly working on LE versions for the Power platform.  Currently, LE is only supported under KVM and the entire system must run in the same mode.  In the future, IBM plans on supporting mixed mode, allowing some partitions to run in one mode and others in the other, as well as allowing LE partitions to run under PowerVM.  Please read Jeff Scheel’s blog if you would like to know more about this subject:  https://www.ibm.com/developerworks/community/blogs/fe313521-2e95-46f2-817d-44a4f27eba32/entry/just_the_faqs_about_little_endian?lang=en
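
If the endianness distinction seems abstract, the following few lines show all that it amounts to: the same 32-bit integer laid out in the two byte orders.  It is a porting and compilation concern, not a performance one:

```python
import struct

value = 0x0A0B0C0D
print("big endian (BE)   :", struct.pack(">I", value).hex())  # 0a0b0c0d
print("little endian (LE):", struct.pack("<I", value).hex())  # 0d0c0b0a
```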

“And that brings me nicely to HoP. Is the HANA code recompiled or can it take advantage of some form of emulation? “

One of the key attributes of HANA is its incredible performance.  As such, even if it were possible to run with emulation, it would defeat the purpose of HANA to run in any sort of degraded mode.  One of the ways that HANA delivers its speed is through the use of SIMD (Single Instruction Multiple Data – http://www.saphana.com/community/blogs/blog/2014/03/19/hana-meets-the-sims-simd-simplicity-and-simulation).  On the Intel platform, SIMD is exposed through the SSE instruction set and is implemented as a single pipeline vector unit within each processor core.  IBM offers a similar vector unit within each of the Power cores, called Altivec, which now supports dual pipeline vector operations.  Each type of unit is utilized by HANA in the same way, but requires platform specific code.  As such, SSE based code would not work, even in emulation, on an Altivec based system.  HANA was originally coded for SSE based operations in LE mode on the Intel platform.  SAP has modified their code to support Altivec based operations in BE mode on the Power platform, and that code was subsequently compiled to run natively on the Power platform under PowerVM.

October 2, 2014

IBM @ TechEd && d-code

IBM usually has a significant presence at TechEd && d-code and this year is no different.  Of course, there will be the usual array of experts ranging from Systems and Software to HANA, Cloud and Consulting services at our booth.  In addition, IBM as well as some of our best customers will be hosting many sessions:


Thursday, October 23, 10:30 AM – 11:30 AM

Session ID: MOB105 – Bellini 2105 Level 2

Title:  Apple + IBM: Evolving to the SAP Enabled Individual Enterprise

Speaker:  Scott Geddes

Description:  What’s next, now that you’ve done your first waves of transformation with SAP? How do you empower end users in ways never possible before and unleash the power of your SAP implementation?  In this session we will explore how Apple + IBM are working together to change the way people work and create new, never before seen capabilities.

EXPERT NETWORKING SESSION:

Thursday, October 23

12:00pm – 1:00pm  Lounge #3

Apple + IBM: Evolving to the SAP Enabled Individual Enterprise (IBM and Apple alliance discussion cont’d)

Scott Geddes, IBM SAP Global Business Services – Mobility

Chuck Kichler, IBM SAP iCoC CTO

Tuesday, October 21, 2:00 PM – 3:00 PM

Session ID: DMM137

Title:  IBM’s Recommended Approach for Optimizing Different Kinds of SAP Workloads

Speaker: Guersad Kuecuek

Description:  Today, customers face various requirements to effectively deal with different kinds of workloads. Key aspects are high Service Level Agreements while maintaining optimal performance for analytical (OLAP) and transactional (OLTP) workloads. Find out how customers like Audi, Balluff, and Coca-Cola have mastered these challenging requirements.

Tuesday, October 21, 3:15 PM – 4:15 PM

Session ID:  DMM142

Title:   SAP HANA on IBM Power – Value, Performance and Experience

Speaker:  Alfred Freudenberger

Description:  With the announcement of the testing and evaluation program for SAP HANA on IBM Power Systems at SAPPHIRE NOW in 2014, a new option for SAP HANA deployments will soon be available. Why should SAP clients consider this option? For which environments is it well-suited? What have IBM and SAP learned during development, testing, and evaluation?

EXPERT NETWORKING SESSION:

Wednesday, October 22

11:30am – 12:00pm  Lounge #4

SAP HANA on IBM Power – Value, Performance and Experience

Alfred Freudenberger, IBM Leader NA SAP on Power

Tuesday, October 21, 4:30 PM – 5:30 PM

Session ID: DMM145

Title:  Simplify IBM Database Performance Tuning with the DBA Cockpit

Speaker:  Thomas Rech

Description:  In today’s IT world, it is crucial to maintain high SAP system performance to meet demanding Service Level Agreements. The DBA Cockpit for IBM DB2 Linux, Unix, and Windows is an easy, fully integrated solution for database monitoring and administration with SAP. Learn about the design concept, the capabilities, and discuss customer use cases.

Wednesday, October 22, 11:45 AM – 12:45 PM

Session ID ITM220

Title:  Business Continuity for SAP HANA-Based Applications – Shared Experiences

Speaker:  Irene Hopf

Description:  Learn about the options to keep business continuously running when you migrate SAP application landscapes to SAP HANA. High availability and disaster recovery are essential for business-critical applications. Discuss experiences with your peers and learn how other customers have implemented it.

Wednesday, October 22, 5:45 PM – 6:45 PM

Session ID INT206

Title:  Integrating Shop-Floor with Enterprise in Real-Time – SAP MII In Action

Speaker:  Dipankar Saha

Description:  How to integrate heterogeneous shop-floor systems with SAP ERP by SAP Manufacturing Integration and Intelligence (SAP MII) using custom frameworks with various industry case-studies. This includes: manufacturing integration use cases, real-time integration using SAP MII, and architecture and case studies of integration using the frameworks.

Thursday, October 23, 8:00 AM – 9:00 AM

Session ID UXP117

Title: Experience with Google Glass and Business Applications

Speaker:  Markus van Kempen

Description:  Google Glass presents a mobile form-factor which allows for new possibilities. This session discusses examples of user experiences, including the disconcerting experience of “wearing” a camera all the time, reactions from others, and navigation challenges. We show how to design for Google Glass and demonstrate business applications.

Thursday, October 23, 10:45 AM – 11:45 AM

Session ID ITM235

Title:  Establishing Architectural Patterns for SAP in the Cloud at CokeOne +

Speaker:  Michael Ryan

Description:  The CokeOne + migration to cloud for their non-production SAP environments included the establishment of architectural patterns to take advantage of the services provided by cloud computing. This session focuses on establishing the architectural patterns needed to transform businesses by moving business systems and processes to a cloud model.

Thursday, October 23, 2:00 PM – 3:00 PM

Session ID DMM127

Title:  Streamline SAP HANA Solution with Near-Line Storage Solution by PBS and IBM

Speaker:  Elke Hartmann-Bakan

Description:  Streamline your SAP HANA solution by keeping only hot data in memory and moving warm data to near-line storage (NLS). This allows you to maintain a lean SAP HANA database and sustain high performance. The PBS and IBM NLS solution offers near real-time speed on NLS, ultra fast load time from the online database to the NLS, and extreme compression.

Thursday, October 23, 4:30 PM – 5:30 PM

Session ID ITM123

Title: Planning Your Company’s SAP Systems Migration to the Cloud

Speaker:  Michael Ryan

Description:  The opportunity to move the SAP infrastructure to cloud is a game changer. Businesses are offered a level of speed and agility that has not been available in the past. However, moving to cloud does not solve basic issues that we experience in the IT world. We take a look at some of the key issues and think about the impact across enterprises.

EXPERT NETWORKING SESSION

Tuesday, October 21

2:30pm – 3:00pm  Lounge #4

SAP Applications on IBM Cloud – from self-service to fully managed

Keith Murray, Global Offerings Manager SAP on SoftLayer, IBM SmartCloud Services

Wolfgang Knobloch,  IBM GTS Global Offering Manager, SAP Cloud Offerings

I look forward to seeing you there.

October 1, 2014

IBM Power Day @ TechEd && d-code

In years past, IBM held a by-invitation-only IBM i day prior to the start of TechEd.  This year, the event has been expanded to all Power customers and is no longer by invitation only.  Please read on for the details on this event.


Register now and reserve your seat for our IBM POWER Technology for SAP Day (Session and Dinner) at TechEd && d-code 2014 in Las Vegas.

IBM and SAP experts will provide the latest news and solution updates for AIX, IBM i and POWER Linux for SAP environments. There will also be an overview of the new SAP HANA on POWER solution which was announced at SAPPHIRE NOW.

Don’t miss this opportunity to have your questions answered by experts and to engage in lively discussions with your peers. The business meeting will be followed by a group dinner.

Please note:  You do not need a badge to attend.

Location
The Venetian/Palazzo Congress Center

3355 Las Vegas Boulevard South
Las Vegas, NV 89109
Room # Veronese 2404

Date:

Monday, October 20, 2014

Agenda

Refreshments & Registration:      1:00pm – 1:30pm

Welcome:                                  1:30pm – 2:00pm

IBM POWER Update:                  2:00pm – 3:00pm

Refreshment break:                   3:00pm – 3:15pm

Breakout Sessions

AIX/POWER Linux breakout session:    3:15pm – 4:45pm (Veronese meeting room # 2404)
IBM i breakout session:                      3:15pm – 4:45pm (Titian meeting room #2201B)

Refreshment break:                   4:45pm – 5:00pm

Preview of “IBM Hot Topics”

at SAP TechEd && d-code:     5:00pm – 5:15pm

Q & A:                                                  5:15pm – 5:45pm

Reception/Dinner:                      6:00pm – 9:00pm

Register today to reserve your seat! Seating is limited.

October 1, 2014

There is HoP for HANA – HANA on Power T&E program begins

At SAPPHIRE NOW 2014 in Orlando, Bernd Leukert, (member of the SAP Executive Board with global responsibility for development and delivery of all products) made an announcement for which most of the community of IBM Power Systems customers, sellers and integrators have been waiting for a long time.  He announced the testing and evaluation program for HANA on IBM Power Systems (HoP).  As SAP has done with virtually every product they have introduced in the past, they will work with a select group of customers that have the interest, skills, experience and willingness to dedicate the time to thoroughly put a product through its paces to ensure that it will deliver the value necessary to customers, operate efficiently and without error and perform at an acceptable level.  Often, this is called Ramp-up and usually, unless critical problems are found, it is followed by an announcement of General Availability (GA).  During this announcement, Mr. Leukert mentioned that customers are welcome to request a full disclosure of SAP’s plans to support HANA on Power.


Many people came through our booth, joined us in meetings and used other channels to ask one simple question: “Once HoP reaches GA, why should a customer prefer this solution over one from the plethora of x86 based vendors?”  I will boil down this subject into 5 key points: 1) Performance, 2) Reliability, 3) Flexibility, 4) Efficiency and 5) Cost.


1)      Performance – HANA, as most know, is an in-memory database.  It replaces complex and cumbersome indexes and aggregates with highly compressed columns of data which can be traversed very quickly (full column scans, in effect) and in ways that don’t have to be considered ahead of time by architects and DBAs.  The speed comes at the cost of memory and can be limited by the bandwidth of the memory subsystem in addition to the throughput and number of processor cores in a system.  IBM recently announced 1 and 2 socket POWER8 systems capable of supporting a maximum of 1TB of memory (with both sockets populated).  A radically faster backplane and up to 128MB of L4 cache (with all memory slots populated) result in a maximum bandwidth of 384GB/s.   This compares to a two socket IvyBridge EX system (with 30 cores) which has a maximum bandwidth of 68GB/s in full RAS mode (85GB/s in the benchmark only, no sane customer would ever run HANA with partial RAS mode).  In other words, the 2-socket Power System can deliver 5.6 times the memory bandwidth.  Not bad, but not the whole story.  High memory throughput without high CPU performance results in a simple shifting of bottlenecks.


Each POWER8 core supports a robust cache hierarchy with twice the L1, L2 and L3 cache compared to Ivy Bridge systems (in full RAS mode), up to 8 simultaneous threads (SMT8) and dual SIMD (Single Instruction Multiple Data) pipelines, compared to 2 threads (Hyperthreading) and a single SIMD pipeline on IvyBridge EX.  HANA takes full advantage of lots of cache, threads and SIMD pipelines.


As HoP is not yet GA, no formal benchmarks are available.  That said, the Power S824 (POWER8, 2 socket with 24 cores, 4U system, Cert # 2014016) running AIX achieved 21,212 users and 116,727 SAPS on the SD 2-tier benchmark.  Not bad, especially when you consider that the best IvyBridge EX 2-socket, 30-core system SD 2-tier benchmark result to date achieved 12,280 users and 67,020 SAPS (Cisco UCS B260 M4, Cert # 2014018).  On this benchmark, the Power System delivered almost 73% more performance per system and 116% more performance per core.  While performance with a conventional DB and app servers cannot be correlated with predicted HANA performance, it is a reasonable measurement of the relative performance of the two types of systems.
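
For those who want to check the arithmetic, the ratios fall straight out of the published figures quoted above (small differences from the percentages in the text are rounding):

```python
power = {"saps": 116_727, "cores": 24, "mem_bw_gbs": 384}  # Power S824
x86   = {"saps":  67_020, "cores": 30, "mem_bw_gbs":  68}  # IvyBridge EX, full RAS mode

print(f"memory bandwidth: {power['mem_bw_gbs'] / x86['mem_bw_gbs']:.1f}x")  # ~5.6x
print(f"SAPS per system:  {power['saps'] / x86['saps'] - 1:.0%} more")      # ~74%
per_core = (power['saps'] / power['cores']) / (x86['saps'] / x86['cores'])
print(f"SAPS per core:    {per_core - 1:.0%} more")                         # ~118%
```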


2)      Reliability – As I have written about in previous blog entries, IBM Power Systems have a long history of providing mission critical reliability.  Data analysis companies have evaluated extensive outage data from customers and determined that Power Systems, on average, have experienced up to 60% fewer outages than x86 systems, with up to 5 times faster recovery from outages when they do occur.  That said, Intel has made many strides forward in reliability features.  If we were to assume equivalent reliability at the processor core level (an Intel claim, not yet backed up by customer data), there is still a whole host of components that can result in outages, not to mention a major difference in the underlying RAS architecture for problem detection and resolution.  As this has been a discussion in previous blog entries, let me instead focus on the new improvements in POWER8 which leapfrog x86 systems.  As mentioned earlier, HANA loves, eats and breathes memory.  So, let’s consider memory reliability.  POWER8 memory cards on these new scale-out systems include features previously available only on Enterprise class systems such as the Power 770.  Specifically, each rank of memory features an inactive chip that is included at no extra cost and can take the place of a failing chip on the fly.  This is in addition to Chipkill (ECC on steroids), a recovery mechanism that spans chips, not just cells within each one.  By comparison, x86 systems may offer Chipkill or even double Chipkill, but to avoid “recovery” after a chip failure, x86 systems would require memory mirroring (available on most high end x86 systems), which effectively reduces memory capacity by 50% as ALL memory is mirrored.  This means that a 1.5TB, 30-core IvyBridge EX system would only be able to deliver 768GB of memory with similar memory redundancy to POWER8 with a full complement of 1TB of memory.


Faced with this huge penalty, most customers will stick with non-mirrored memory at a considerably lower reliability level.  But the story does not end there.  POWER8 adds extra protection on the memory bus with built in redundancy and automatic failover in the event of a memory lane failure.


Taken together, some estimates suggest that POWER8 systems will experience 2.5 to 8 times fewer system outages caused by the memory subsystem than similarly configured x86 systems.  Put another way, if we were to assume that a 1TB x86 system would experience one outage every 10 years (just a random number for comparison purposes, not based on any actual data), then 10 such systems would result in 1 outage every year based on memory alone.  Maybe not a huge problem for BW systems with scale-out/HA configs, non-mission critical analytics systems or non-prod systems, but it could be a big problem for mission critical Business Suite systems on separate nodes, much less larger ones that place multiple SAP components on one node (yes, SAP seems to be envisioning a future of MCOD, or Multiple Components on One DB).  Remember, however, that this covers only memory subsystem related failures, and there are many other potential areas of system failure for which Power has its long track record of superior protection and recovery.
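
The fleet math behind that illustration is trivial but worth seeing; the per-system rate below is the same made-up one-outage-per-decade number used above, purely for comparison:

```python
per_system_rate = 1 / 10   # hypothetical memory outages per system per year
fleet = 10

x86_fleet_rate = per_system_rate * fleet
print(f"x86 fleet of {fleet}: ~{x86_fleet_rate:.1f} memory outages per year")

for advantage in (2.5, 8.0):   # estimated POWER8 improvement range cited above
    print(f"POWER8 fleet at {advantage}x fewer: "
          f"~{x86_fleet_rate / advantage:.2f} per year")
```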


3)      Flexibility – POWER has virtualization built in.  It is included by default at the hardware/firmware level (unless one were to remove the PowerVM hypervisor and replace it with PowerKVM, not the intended hypervisor for HANA).  At SAPPHIRE, SAP announced GA support for HANA production on VMware vSphere 5.5.  Just because you CAN do something does not mean that you SHOULD.  You could run across a busy highway without the assistance of lights or overpasses, but most people would consider you crazy to do so.  Likewise, vSphere 5.5 still utilizes the same I/O infrastructure that it has in the past, resulting in all I/O writes having to pass from a VM through its virtual device driver to the memory space of the vSphere hypervisor and then on to the I/O subsystem through another “real” device driver, with the same, albeit reverse, path on reads.  If that were not bad enough to dissuade all but the most ardent VMware proponents from placing any large scale transactional DB on VMware, VMware’s own published study shows only 65% scaling on 32 virtual processors, which, when extrapolated to 64 or 128 vps, shows no incremental benefit at all.  Can you then imagine an incredibly memory intensive HANA system running with a ton of memory but only being able to utilize 32 threads/16 cores effectively?  Would this break the KPIs for HANA?


Another area of flexibility comes from configuration options.  We have recommended a TDI approach (Tailored Datacenter Integration: http://www.saphana.com/docs/DOC-4380) where SAP defines the KPIs for HANA and allows customers to determine how they will satisfy those KPIs.  In other words, if a customer wants to use an LPAR on a POWER7+ or POWER8 system, their choice of storage based on their current standards and install base (e.g. EMC, HDS, NetApp, IBM), an existing network infrastructure, on-board or off-board flash drives, XFS or GPFS, etc., a TDI approach would allow for the flexibility to select the components and configurations that work best for them.


4)      Efficiency – If running VMware is not a good option, as suggested above, then customers would be forced to employ islands of automation: multiple different HANA implementations on separate servers, each with its own dedicated HA and DR servers, plus ABAP and JAVA app servers on their own systems, probably virtualized, but still separate from the HANA systems.  Yes, you might virtualize non-prod, but if you virtualize QA or a stress test system, then you have violated one of the fundamental rules in system architecture, i.e. never utilize a stack in the critical QA path that is different from the one used in production.


By comparison, Power has no such limitations. Anything can be mixed and matched as desired, with obvious constraints of CPU, Memory and I/O considered, of course.  In other words, customers will have the flexibility to put together anything in almost any combination based on their unique needs.  Higher performance and comprehensive virtualization may result in fewer systems being required since it allows customers to place many different partitions (VMs) together, including HANA, app servers, non-prod and HA, into a single cluster using shared, virtualized resources.  As a result, data center footprint, power, cooling, administration, problem resolution, etc. can all see substantial reductions.


5)      Cost – With the introduction of POWER8 and its Linux only models, IBM entered a new era on pricing.  Full disclosure: I am not a wizard at configurations, so I am certain that these would not pass a proper design review, but they should suffice for this comparison.  I selected the IBM Power S822L, a Linux only system that includes 2 sockets, 24 cores @ 3.026 GHz, 1TB of memory, PowerVM, a pair of 146GB drives (just for boot and paging), a pair of dual-port 40GB Ethernet adapters, 3 years of 24/7 warranty and Linux support.  The price came out a little over $90K.  I then went to the HP web site and selected a DL580 Gen8 with a pair of E7-4480v2 2.8GHz/15 core processors, 1TB of memory, a couple of 120GB disk drives and 4 dual port 10GB Ethernet adapters, since they apparently don’t support the newer 40GB ones.  Before I even added OS and virtualization, this config was already close to $62K.  Add VMware vSphere 5.5 and a 4 guest, 2 socket RedHat license with 3 yr 24/7 support and the total closes in on $78K.  So, for a little over 15% higher cost, at list, you could get a system with over 50% more performance, radically better virtualization and far better reliability.  The reliability advantage alone, especially for Suite on HANA customers, would justify a far higher price tag.  Of course, this is just the first wave of systems and many customers will need far larger configurations.  As IBM has not announced those as of yet, I can’t speculate where they will come in, price wise, but if you consider how far things have come in a very short time for Power, the future looks awfully bright.


One thing that I did not discuss above, other than in passing, is GPFS.  Not that GPFS is to be ignored, but remember, this will hopefully be an option, not a requirement.  This amazing scale-out technology is shared with our System x brethren.  It is a game changer for customers, offering not only outstanding performance but built in redundancy for data and log volumes for HANA.  It has been extensively tested by SAP, IBM and customers.  It is already certified to scale out to 56 nodes with System x, and SAP has tested it in configurations with over 100 nodes.  If GPFS were added to the mix for scale-out configurations, this would add substantial additional value over similar x86 configurations, other than those from IBM System x.


HANA on Power will enable not just freedom of choice for customers, but the mission critical reliability and performance that may have been holding them back from trying Suite on HANA.  We are looking forward to working with SAP and our customers to explore this exciting new offering.


Many thanks to my colleague, Bob Wolf, for his outstanding editing.

June 6, 2014

Rebuttal to “Why choose x86” for SAP blog posting

I was intrigued by a recent blog post entitled “Part 1: SAP on VMware: Why choose x86”:  https://communities.vmware.com/blogs/walkonblock/2014/02/06/part-1-sap-on-vmware-why-choose-x86.  I will get to the credibility of the author in just a moment.  First, however, I felt it might be interesting to review the points that were made and discuss them, point by point.

  1. “No Vendor Lock-in: When it comes to x86 world, there is no vendor lock-in as you can use any vendor and any make and model as per your requirements.”  Interesting that the author did not discuss the vendor lock-in on chip, firmware or hypervisor.  Intel, or to a very minor degree, AMD, is required for all x86 systems.  This would be like being able to choose any car as long as the engine was manufactured by Toyota (a very capable manufacturer, but with a lock on the industry it might not offer the best price or innovation).  As any customer knows, each x86 system has its own unique BIOS and/or firmware.  Sure, you can switch from one vendor to another or add a second vendor, but lacking proper QA, training, and potentially different operational procedures, this can result in problems.  And then there is the hypervisor, with VMware clearly the preference of the author as it is for most SAP x86 virtualization customers.  No lock-in there?

SAP certifies multiple different OS and hypervisor environments for their code.  Customers can utilize one or more at any given time.  As all logic is written in 3rd and 4th GL languages, i.e. ABAP and JAVA, and is contained within the DB server, customers can move from one OS, HW platform and/or hypervisor to another and only have to, wait for it, do proper QA, training and modify operational procedures as appropriate.  So, SAP has removed lock-in regardless of OS, HW or hypervisor.

Likewise, Oracle, DB2 and Sybase support most OS’s, HW and hypervisors (with some restrictions).  Yes, a migration is required for movement between dissimilar stacks, but this could be said for moving from Windows to Linux and any move between different stacks still requires all migration activities to be completed with the potential exception of data movement when you “simply” change the HW vendor.

  2. “Lower hardware & maintenance costs: x86 servers are far better than cheaper than non-x86 servers. This also includes the ongoing annual maintenance costs (AMC) as well.”  Funny, however, that the author only compared HW and maintenance costs and conveniently forgot about OS and hypervisor costs.  Also interesting that the author forgot about utilization of systems.  If one system is ½ the cost of another, but you can only drive, effectively, ½ the workload, then the cost is the same per unit of work.  Industry analysts have suggested that 45% utilization is the maximum sustained to be expected out of VMware SAP systems, with most seeing far less.  By the same token, those analysts say that 85% or higher is to be expected of Power Systems.  Also interesting to note that the author did not say which systems were being compared, as new systems and options from IBM Power Systems offer close to price parity with x86 systems when HW, OS, hypervisor and 3 years of maintenance are included.
  3. “Better performance: Some of the models of x86 servers can actually out-perform the non-x86 servers in various forms.”  Itanium is one of the examples, which is a no-duh for anyone watching published benchmarks.  The other example is a Gartner paper sponsored by Intel which does not quote a single SAP benchmark.  Too bad, since the author suggested this was a discussion of SAP.  Last I checked (today, 2/10/14), IBM Power Systems can deliver almost 5 times the SAPS performance of the largest x86 server (as measured by the 2-tier SD benchmark).  On a SAPS/core basis, Power delivers almost 30% more SAPS/core compared to Windows systems and almost 60% more than Linux/x86 systems.  Likewise, on the 3-tier benchmark, the latest Power result is almost 4.5 times that of the latest x86 result.  So much for point 3.
  4. “Choice of OS: You have choice of using any OS of your choice and not forced to choose a specific OS.”  Yes, it really sucks that with Power, you are forced to choose AIX … or IBM i for Business … or SUSE Linux … or RedHat Linux, which is so much worse than being forced to choose Microsoft Windows … or Oracle Solaris … or SUSE Linux … or RedHat Linux.
  5. “Disaster Recovery: You can use any type of hardware, make and model when it comes to disaster recovery (DR). You don’t need to maintain hardware from same vendor.”  Oh, really?  First, I have not met any customers that use one stack for production and a totally different one in DR, but that is not to say that it can’t be done.  Second, remember the discussion about BIOS and firmware?  There can be different patches, prerequisites and workarounds for different stacks.  Few customers want to spend all of the money they “saved” on a separate QA cycle for DR.  Even fewer want to take a chance on DR not working when they can least afford it, i.e. when there is a disaster.  Interestingly, Power actually supports this better than x86, as the stack is identical regardless of which generation, model or MHz is used.  You can even run in POWER6 mode on a POWER7+ server, further enabling complete compatibility regardless of chip type, meaning you can use older systems in DR to back up brand new systems in production.
  6. “Unprecedented scalability: You can now scale the x86 servers the way you want, TB’s of RAM’s, more than 64 cores etc is very much possible/available in x86 environment.”  Yes, any way that you want as long as you don’t need more capacity than is available with the current 80 core systems.  Any way that you want as long as you are not running with VMware, which limits partitions to 128 threads, which equates to 64 cores.  Any way that you want except that VMware suggests you contain partitions within a NUMA block, which means a max of 40 cores.  http://blogs.vmware.com/apps/sap  Any way that you want as long as you recognize that VMware partitions are further limited in terms of scalability, which results in an effective limit of 32 threads/16 cores, as I have discussed in this blog previously.
  7. “Support from Implementation Vendor: If you check with your implementation vendor/partner, you will find they that almost all of them can certify/support implementation of SAP on x86 environment. The same is the case if you are thinking about migrating from non-x86 to x86 world.”  No clue what point is being made here, as all vendors support SAP on all of their supported systems and OSs.

The author referred to my blog as part of the proof of his/her theories which is the only reason why I noticed this blog in the first place.  The author describes him/herself as “Working with Channel Presales of an MNC”.  Interesting that he/she hides him/herself behind “MNC” because the “MNC” that I work for believes that transparency and honesty are required in all internet postings.  That said, the author writes about nothing but VMware, so you will have to draw your own conclusions as to where this individual works or with which “MNC” his/her biases lie.

The author, in the reference to my posting, completely misunderstood the point that I made regarding the use of 2-tier SAP benchmark data in projecting the requirements of database only workloads and apparently did not even read the “about me” which shows up by default when you open my blog.  I do not work for SAP and nothing that I say can be considered to represent them in any way.

Fundamentally, the author’s bottom line comment, “x86 delivers compelling total cost of ownership (TCO) while considering SAP on x86 environment” is neither supported by the facts that he/she shared nor by those shared by others.  IBM Power Systems continues to offer very competitive costs with significantly superior operational characteristics for SAP and non-SAP customers.

February 17, 2014

Is VMware ready for HANA production?

Virtualizing SAP HANA with VMware in productive environments is not supported at this time, but according to Arne Arnold with SAP, based on his blog post of Nov 5,  http://www.saphana.com/community/blogs/blog/2013/11/05/just-a-matter-of-time-sap-hana-virtualized-in-production, they are working hard in that direction.

Clearly, memory can be assigned to individual partitions with VMware and, to a more limited extent, CPU resources may also be assigned, though with less effectiveness.  The issues that SAP will have to overcome, however, are inherent limitations in the scalability of VMware partitions, I/O latency and potential contention between partitions for CPU resources.

As I discussed in my blog post late last year, http://saponpower.wordpress.com/2012/10/23/sap-performance-report-sponsored-by-hp-intel-and-vmware-shows-startling-results/, VMware 5.0 was shown, by VMware, HP and Intel, to have severe performance limitations and scalability constraints.  In the lab test, a single partition achieved only 62.5% scalability overall, but what is more startling is the scalability between each measured interval.  From 4 to 8 threads, they were able to double the number of users, thereby demonstrating 100% scalability, which is excellent.  From 8 to 16 threads, they were only able to handle 66.7% more users despite doubling the number of threads.  From 16 to 32 threads, the number of users supported increased only 50%.  Since the study was published, VMware has released vSphere 5.1 with an architected limit of 64 threads per partition and 5.5 with an architected limit of 128 threads per partition.  Notice my careful wording: architected limit, not the official VMware wording of “scalability”.  Scaling implies that with each additional thread, additional work can be accomplished.  Linear scaling implies that each time you double the number of threads, you can accomplish twice the amount of work.  Clearly, vSphere 5.0 was unable to attain anything close to linear scaling.  But now, with the increased number of threads supported, can they achieve more work?  Unfortunately, there are no SAP proof points to answer this question.  All that we can do is extrapolate from the earlier published results, assuming the only change is the limit on the number of architected threads.  If we use the straightforward Microsoft Excel “trendline” function to project results using a polynomial of order 2 (no, it has been way too long since I took statistics in college to explain what this means, but I trust Microsoft (lol)), we see that a VMware partition is unlikely to ever achieve much more throughput, without a major change in the VMware kernel, than it achieved with only 32 threads.  Here is a graph that I was able to create in Excel using the data points from the above white paper.

VMware scalability curve
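
For anyone who wants to reproduce the trendline exercise without Excel, the sketch below fits the same order-2 polynomial with numpy.  The user counts are normalized (4 threads = 1.0) from the scaling percentages quoted above, since the paper’s absolute figures are not reproduced here:

```python
import numpy as np

threads = np.array([4, 8, 16, 32])
users = np.array([1.0, 2.0, 2.0 * 5 / 3, 2.0 * 5 / 3 * 1.5])  # +100%, +66.7%, +50%

a, b, c = np.polyfit(threads, users, 2)  # order-2 polynomial, as in the post

peak_threads = -b / (2 * a)              # where the fitted curve tops out
peak_users = np.polyval([a, b, c], peak_threads)
print(f"fitted curve peaks near {peak_threads:.0f} threads "
      f"at ~{peak_users:.1f}x the 4-thread throughput")
# ~39 threads at ~5.2x: barely above the measured 32-thread result, which is
# the point -- beyond ~32 threads the fit projects no meaningful gain.
```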

Remember, at 32 threads, with Intel Hyperthreading, this represents only 16 cores.  As a 1TB BW HANA system requires 80 cores, it is rather difficult to imagine how a VMware partition could ever handle this sort of workload, much less how it would respond to larger workloads.  Remember, 1TB = 512GB of data space which, at a 4 to 1 compression ratio, equals 2TB of data.  VMware starts to look more and more inadequate as data size increases.

And if a customer were misled enough by VMware or one of their resellers, they might think that using VMware in non-prod was a good idea.  Has SAP or a reputable consultant ever recommended using one architecture and stack in non-prod and a completely different one in prod?

So, in which case would virtualizing HANA be a good idea?  As far as I can tell, only if you are dealing with very small HANA databases.  How small?  Let’s do the math: assuming linear scalability (which we have already shown above is not even close to what VMware can achieve), 32 threads = 16 cores, which is only 20% of the capacity of an 80 core system.  20% of 2TB = 400GB of uncompressed data.  At the 62.5% scalability described above, this would diminish further to 250GB.  There may be some side-car applications for which a large enterprise might replicate only 250GB of data, but do you really want to size for the absolute maximum throughput and have no room for growth other than chucking the entire system and moving to newer processor versions each time they come out?  There might also be some very small customers whose data can currently fit into this small a space, but once again, why architect for no growth and potential failure?  Remember, this was a discussion only about scalability, not response time.  Is it likely that response time also degrades as VMware partitions increase in size?  Silly me!  I forgot to mention that the above white paper showed response time increasing from .2 seconds @ 4 threads to 1 second @ 32 threads, a 400% increase in response time.  Isn’t the goal of HANA to deliver improved performance?  Kind of defeats the purpose if you virtualize it using VMware!
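
Here is that math, spelled out with the post’s own assumptions (80 cores per 1TB BW HANA node, half of the terabyte as data space, 4:1 compression, a 16-core effective ceiling per partition and the observed 62.5% scalability):

```python
hana_cores = 80        # cores certified for a 1TB BW HANA node
vm_core_ceiling = 16   # 32 threads / 2 with hyperthreading
data_space_gb = 512    # half of 1TB is data space
compression = 4        # 4:1 source-to-HANA compression

ideal_gb = (vm_core_ceiling / hana_cores) * data_space_gb * compression
realistic_gb = ideal_gb * 0.625   # apply the observed 62.5% scalability
print(f"ideal ceiling:     ~{ideal_gb:.0f} GB uncompressed")  # ~410 (rounded to 400 above)
print(f"realistic ceiling: ~{realistic_gb:.0f} GB")           # ~256 (rounded to 250 above)
```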

November 20, 2013

High end Power Systems customers have a new option for SAP app servers that is dramatically less expensive than x86 Linux solutions

Up until recently, if you were expanding the use of your SAP infrastructure or had some older Power Systems that you were considering replacing with x86 Linux systems, I could give you a TCO argument that showed how you could see roughly equivalent TCO using lower end Power servers.  Of course, some people might not buy into all of the assumptions or might state that Linux was their new standard such that AIX was no longer an acceptable option.  Recently, IBM made an announcement which has changed the landscape so dramatically that you can now obtain the needed capacity using high end server “dark cores” with Linux, not at an equivalent TCO, but at a dramatically lower TCA.

The new offering is called IFL, which stands for Integrated Facility for Linux.  This concept originated with System z (aka mainframe) several years ago.  It allows customers that have existing Power 770, 780 or 795 servers with capacity on demand “dark cores”, i.e. cores on which no workload currently runs and for which the licenses to use the hardware, virtualization and OS software have not been activated, to turn on a group of cores and memory specifically to be used for Linux only workloads.  A Power IFL is composed of 4 cores with 32GB of memory and has a list price of $8,591.

In the announcement materials from IBM Marketing, an example is provided of a customer that would need to add the equivalent of 16 cores @ 80% utilization and 128GB of memory to an existing Power 780 4.4GHz system, or would need the equivalent capacity using a 32-core HP DL560 2.7GHz system running at 60% utilization.  They used SPECint_rate as the basis of this comparison.  Including a 3 year license for PowerVM, Linux subscription and support, 24×7 hardware maintenance and the above mentioned Power activations, the estimated street price would be approximately $39,100.  By comparison, the above HP system plus Linux subscription and support, VMware vSphere and 24×7 hardware maintenance would come in at an estimated street price of approximately $55,200.

Already sounds like a good deal, but I am a skeptic, so I needed to run the numbers myself.  I find SPECint_rate to be a good indicator of performance for almost no workloads and an incredibly terrible indicator of performance for SAP workloads.  So, I took a different approach.  I found a set of data from an existing SAP customer of IBM which I then used to extrapolate capacity requirements.  I selected the workloads necessary to drive 16 cores of a Power 780 3.8GHz system @ 85% utilization.  Why 85%?  Because we, and independent sources such as Solitaire Interglobal, have data from many large customers that report routinely driving their Power Systems to a sustained utilization of 85% or higher.  I then took those exact same workloads and modeled them onto x86 servers assuming that they would be virtualized using VMware.  Once again, Solitaire Interglobal reports that almost no customers are able to drive a sustained utilization of 45% in this environment and that 35% would be more typical, but I chose a target utilization of 55% instead to make this as optimistic for the x86 servers as possible.  I also applied only a 10% VMware overhead factor although many sources say that is also optimistic.  It took almost 6 systems with each hosting about 3 partitions to handle the same workload as the above 16-core IFL pool did.

Once again, I was concerned that some of you might be even more optimistic about VMware, so I reran the model using a 65% target utilization (completely unattainable in my mind, but I wanted to work out the ultimate, all-stars-aligned, best-admins-on-the-planet, tons-of-time-to-tune scenario) and a 5% VMware overhead (I don’t know anyone who believes VMware overhead to be this low).  With each system hosting 3 to 4 partitions, I was able to fit the workloads on 5 systems.  If we just go crazy with unrealistic assumptions, I am sure you could imagine these workloads fitting onto 4 systems.
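
If you would like to play with the sizing arithmetic yourself, here is a minimal sketch of the model in Python.  The capacity units and the per-host x86 capacity are illustrative assumptions, back-solved so the sketch lands on the same host counts as above; they are not published rperf, SAPS or SPECint_rate figures.  Only the utilization ceilings and VMware overhead factors come from the scenarios just described.

```python
import math

# Illustrative sizing sketch; capacity figures are assumptions, not benchmarks.
POWER_UNITS = 16 * 10.0 * 0.85   # 16 Power 780 cores at 85% sustained utilization
                                 # (10.0 "capacity units" per core is an arbitrary scale)
X86_HOST_UNITS = 50.0            # per 16-core x86 host; back-solved for illustration

def x86_hosts_needed(workload, host_capacity, target_util, vmware_overhead):
    """Hosts required after applying the utilization ceiling and hypervisor tax."""
    usable = host_capacity * target_util * (1.0 - vmware_overhead)
    return math.ceil(workload / usable)

print(x86_hosts_needed(POWER_UNITS, X86_HOST_UNITS, 0.55, 0.10))  # -> 6 systems
print(x86_hosts_needed(POWER_UNITS, X86_HOST_UNITS, 0.65, 0.05))  # -> 5 systems
```

The point is not the absolute numbers but the compounding: the lower sustainable utilization ceiling and the hypervisor tax multiply together, which is why the x86 host count grows so quickly.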

Next, I wanted to determine an accurate price for those x86 systems.  I used HP’s handy online ordering web site to price some systems.  Instead of the DL560 that IBM Marketing used, I chose the DL360e Gen8 system, with 2 @ 8-core 1.8GHz processors, 64GB of memory, a pair of 7200rpm 500GB hard drives, VMware Enterprise for 2 processors with a 3-year subscription, RH Enterprise Linux 2 socket/4 guest with a 3-year subscription, 3-year 24×7 ProCare Service and HP installation services.  The total price comes to $27,871, which, after an estimated 25% discount on everything (probably not realistic), results in a street price of $20,903.

Let’s do the math.  Depending on which x86 scenario you believe is reasonable, it takes either 6 systems at a cost of $125,419, 5 systems @ $104,515 or 4 systems @ $83,612 to handle the same load as a 4 IFL/16-core pool of partitions on a 780 at a cost of $39,100.  So, even in the most optimistic case for x86, you would still have to pay $44,512 more.  It does not take a rocket scientist to realize that using Power IFLs results in a far less expensive solution with far better reliability and flexibility characteristics, not to mention better performance, since communication to/from the DB servers would utilize the radically faster backplane instead of an external TCP/IP network.

But wait, you say.  There is a better solution.  You could use bigger x86 systems with more partitions on each one.  You are correct; thanks for bringing that up.  It turns out that, just as with Power Systems, if you put more partitions on each VMware system, the aggregate peaks never add up to the sum of the individual peaks.  Using 32-core DL560s @ 2.2GHz, 5% VMware overhead and a 65% target utilization, you would only need 2 systems.  I priced them on the HP web site with RH Linux 4 socket/unlimited guests 3-year subscription, VMware Enterprise 4 socket/3-year, 24×7 ProCare and HP installation service and found the price to be $70,626 per system, i.e. $141,252 for two systems, or $105,939 after the same, perhaps unattainable, 25% discount.  Clearly, 2 systems are more elegant than 4 to 6, but this solution is still $66,839 more expensive than the IFL solution.
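
For anyone who wants to check my arithmetic, here is the whole comparison reduced to a few lines of Python.  All of the dollar figures are the street-price estimates quoted above, and the 25% discount is the same, perhaps unattainable, assumption; totals may differ from the in-text figures by a dollar of rounding.

```python
IFL_POOL = 39_100        # 4 IFLs / 16 cores, incl. PowerVM, Linux and maintenance
DISCOUNT = 0.25          # assumed street discount on the HP configurations

dl360e = 27_871 * (1 - DISCOUNT)   # ~$20,903 per DL360e
dl560 = 70_626 * (1 - DISCOUNT)    # ~$52,970 per DL560

scenarios = ((6, dl360e, "DL360e"), (5, dl360e, "DL360e"),
             (4, dl360e, "DL360e"), (2, dl560, "DL560"))
for count, price, name in scenarios:
    total = count * price
    print(f"{count} x {name}: ${total:,.0f} (${total - IFL_POOL:,.0f} more than the IFLs)")
```

Even with every assumption tilted toward x86, the cheapest scenario still costs more than twice the $39,100 IFL pool.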

I set out to prove that IBM Marketing was being overly optimistic and ended up realizing that they were highly conservative.  The business case for using IFLs for SAP app servers on an existing IBM high-end system with unutilized dark cores, compared to net new VMware/Linux/x86 systems, is overwhelming.  As many customers have decided to utilize high-end Power servers for DB due to their reliability, security, flexibility and performance characteristics, the introduction of IFLs for app servers is almost a no-brainer.

Configuration details:

HP ProLiant DL360e Gen8 8 SFF Configure-to-order Server (Energy Star) 661189-ESC   $11,435.00

HP ProLiant DL360e Gen8 Server
HP DL360e Gen8 Intel® Xeon® E5-2450L (1.8GHz/8-core/20MB/70W) Processor FIO Kit x 2
HP 32GB (4x8GB) Dual Rank x4 PC3L-10600 (DDR3-1333) Reg CAS-9 LP Memory Kit x 2
HP Integrated Lights Out 4 (iLO 4) Management Engine
HP Embedded B120i SATA Controller
HP 8-Bay Small Form Factor Drive Cage
HP Gen8 CPU1 Riser Kit with SAS Kit + SAS License Kit
HP 500GB 6G SATA 7.2K rpm SFF (2.5-inch) SC Midline 1yr Warranty Hard Drive x 2
HP 460W Common Slot Platinum Plus Hot Plug Power Supply
HP 1U Small Form Factor Ball Bearing Gen8 Rail Kit
3-Year Limited Warranty Included

3yr, 24×7 4hr ProCare Service $1,300.00

HP Install HP ProLiant $225.00

Red Hat Enterprise Linux 2 Sockets 4 Guest 3 Year Subscription 24×7 Support No Media Lic E-LTU $5,555.00

VMware vSphere Enterprise 1 Processor 3 yr software $4,678.00 x 2 = $9,356.00

DL360e Total price $27,871.00

 

ProLiant DL560 Gen8 Configure-to-order Server (Energy Star) 686792-ESC    $29,364.00

HP ProLiant DL560 Gen8 Configure-to-order Server
HP DL560 Gen8 Intel® Xeon® E5-4620 (2.2GHz/8-core/16MB/95W) Processor FIO Kit
HP DL560 Gen8 Intel® Xeon® E5-4620 (2.2GHz/8-core/16MB/95W) Processor Kit x3
HP 16GB (2x8GB) Dual Rank x4 PC3L-10600 (DDR3-1333) Reg CAS-9 LP Memory Kit x 4
ENERGY STAR® qualified model
HP Embedded Smart Array P420i/2GB FBWC Controller
HP 500GB 6G SAS 7.2K rpm SFF (2.5-inch) SC Midline 1yr Warranty Hard Drive x 2
HP iLO Management Engine(iLO 4)
3 years parts, labor and onsite service (3/3/3) standard warranty. Certain restrictions and exclusions apply.

HP 3y 4h 24×7 ProCare Service   $3,536.00

Red Hat Enterprise Linux 4 Sockets Unlimited Guest 3 Yr Subscription 24×7 Support No Media Lic E-LTU     $18,519.00

VMware vSphere Enterprise 1 Processor 3 yr software $4,678.00 x 4 = $18,712.00

HP Install DL560 Service   $495.00

DL560 Total price:   $70,626.00

October 21, 2013 Posted by | Uncategorized | Leave a comment

IBM @ SAP TechEd 2013 in Las Vegas

IBM will, yet again, have a strong presence at TechEd.  I have included a list of sessions at which IBM or IBM customers will be presenting.  In addition to the “cloud” session listed below, I will also be participating in the Information Management session with Martin Mezger, and I look forward to seeing everyone at both of those sessions.  For those of you interested in SAP Landscape Virtualization Management, consider attending the PG&E session for a real-world example of how this offering from SAP can bring real benefits to the operations of an organization.  Please also stop by the IBM Let’s Build A Smarter Planet booth, #129 on the showroom floor.

| Title | Location | Date | Time | Speaker |
| --- | --- | --- | --- | --- |
| Successful Deployment of SAP Finance Rapidmart on HANA Platform at Lilly | Bellini Room 2105 | Wednesday, October 23 | 08:00 a.m. | Kiran Yelamaneni |
| Renovate to Innovate with IBM and SAP Cloud | Bellini Room 2105 | Wednesday, October 23 | 10:30 a.m. | Chuck Kichler |
| IBM Information Management – Optimized Solutions for Customers | Bellini Room 2105 | Wednesday, October 23 | 04:30 p.m. | Martin Mezger |
| The BPM Imperative – How to Change Project Thinking to Process Thinking | Bellini Room 2105 | Thursday, October 24 | 08:00 a.m. | Parag Karkhanis |
| Avoiding Bumps in the Night with SAP HANA, High Availability, Disaster Recovery & more | Bellini Room 2105 | Thursday, October 24 | 09:15 a.m. | Rich Travis |
| Cloud Benefits – SAP NetWeaver Landscape Virtualization Management and IBM PureSystem | Bellini Room 2105 | Thursday, October 24 | 10:30 a.m. | Alfred Freudenberger |
| Next Generation Database Technology for SAP Applications and Big Data | Bellini Room 2105 | Thursday, October 24 | 02:00 p.m. | Guersad Kuecuek |
| Virtualize SAP HANA Systems with VMware and IBM | Bellini Room 2105 | Thursday, October 24 | 03:15 p.m. | Oliver Rettig, Bob Goldsand |
| Accelerate Your Agile Transformation with Confidence | L8 | Tuesday, October 22 | 05:45 p.m. | James Hunter |
| SAP HANA – IBM GPFS: Architecture, Concepts, and Best Practices | L23 | Wednesday, October 23 | 04:30 p.m. | Tomas Krojzl |
| How IBM Overcame Application Lifecycle Complexity | L10 | Wednesday, October 23 | 05:45 p.m. | James Hunter |
| SAP Self-Service and Provisioning at PG&E Based on SAP NetWeaver LVM with IBM SmartCloud | L9 | Thursday, October 24 | 08:00 a.m. | Danial Khan |
| Creating Services for Mobile Applications Using SAP NetWeaver Gateway OData Channel | L21 | Thursday, October 24 | 11:45 a.m. | Sandeep Mandloi |

October 7, 2013 Posted by | Uncategorized | Leave a comment

The hype and the reality of HANA

Can you imagine walking into a new car dealership and, before you can say anything about your current vehicle and needs, a salesperson immediately offers to show you the latest, greatest and most popular new car?  Of course you can, since this is what that person gets paid to do.  Now, imagine the above scenario where the salesperson instead asks “how is your current car not meeting your needs?” and follows it up with “I don’t want you to buy anything from me unless it brings you substantial value.”  After smelling salts have been administered, you might recover enough to act like a cartoon character checking your ears to make sure they are functioning properly and ask the salesperson to repeat what he or she said.

The first scenario occurs constantly, with SAP account execs, systems integrators and consultants playing the role of the new car salesperson.  The second rarely happens, but that is exactly the role I will play in this blog post.

The hype around HANA could not be much louder or deeper than it is currently.  As bad as the hype might be, the FUD (Fear, Uncertainty and Doubt) is worse.  The hype suggests that HANA can do everything except park your car, since that is a future capability (not really, I just made that up).  At its worst, the hype suggests a vision for the future that, while not solving world hunger or global warming, might improve the operations and profitability of companies.  The FUD is more insidious.  It suggests that unless you act like lambs and follow the lead of the individual telling the tale, you will be like a lost sheep: out of support and further out of the mainstream.

I will address the second issue first.  As of today, the beginning of August, SAP has made absolutely no statement indicating it will discontinue support for any platform, OS or DB.  In fact, a review of SAP notes shows support for most OSs with no end date, and even DB2 9.7 has an end-of-support date that is several years past that of direct standard support from IBM!  So, what gives???  Is SAP saying one thing internally and another externally?  I have been working with SAP far too long and know their business practices too well to believe that they would act in such a two-faced manner, not to mention expose themselves to another round of expensive and draining lawsuits.  Instead, I place the arrow of shame squarely on those rogue SAP account execs who are perpetuating this story.  The next time one of them makes this sort of suggestion, turn the tables.  Ask them to provide you with a statement, in writing, backed up with official press releases or SAP notes, showing that this is the case.  If they can’t, it is reasonable to conclude that they are simply using the age-old FUD tactic to get you to spend more money with them now rather than waiting until/if SAP actually decides to stop supporting a particular type of HW, OS or DB.

And now for the first issue: the hype around HANA.  HANA offers dramatic benefits to some SAP customers, and some incarnation of HANA may indeed be inevitable for the vast majority.  However, the suggestion that HANA is the end-all be-all flies in the face of many other solutions on the market, many of which are radically less expensive and often carry dramatically lower risk.  Here is a very simple example.

Most customers would like to reduce the time and resources required to run batch jobs.  It seems as if there is not a CFO anywhere who does not want to reduce the month-end/quarter close from multiple days down to a day or less.  CFOs are not the only ones with that desire, as certain functions must come to a halt during a close and/or the availability requirements go sky high during this period, requiring higher IT investments.  SAP has suggested that HANA can achieve exactly this, however it is not quite clear whether this will require BW HANA, Suite on HANA, some combination of the two, or another as-yet-unannounced HANA variant.  I am sure that if you ask a dozen consultants, you will get a dozen different answers on how to achieve these goals with HANA, and it is entirely possible that each of them is correct in their own way.  One thing is certain, however: it won’t come cheaply.  Not only will a company have to buy HANA HW and SW, but it will also have to pay for a migration and a boatload of consulting services.

It will also not come without risk.  BW HANA and Suite on HANA require a full migration, and those systems become the exclusive repository of business-critical data.  HANA is currently in its 58th revision in a little over two years.  HA, DR and backup/recovery tools are still evolving.  No benchmarks for Suite on HANA have been published, which means that sizing guidelines are based purely on the size of the DB, not on throughput or even users.  Good luck finding extensive large-scale customer references, or even medium-sized ones, in your industry.  To make matters worse, a migration to HANA is a one-way path; there is no published migration methodology to move from HANA back to a conventional DB.  It is entirely possible that Suite on HANA will be much more stable than BW HANA was, that these systems will scream on benchmarks, that all of those HA, DR, backup/recovery and associated tools will mature in short order, and that monkeys will fly.  Had the word risk not been invented previously, Suite on HANA would probably be the first definition in the dictionary for it.

So, is there another way to achieve those goals, maybe one that is less expensive and does not require a migration, software licenses or consulting services?  Of course not, because that would be as impossible to believe as the above-mentioned flying monkeys.  Well, strap on your red shoes and welcome to Oz, because it is not only possible but many customers are already achieving exactly those gains.  How?  By utilizing high-performance flash storage subsystems like the IBM FlashSystem.  Where transaction processing typically accesses a relatively small amount of data cached in database buffers, batch, month-end and quarter-close jobs tend to be very disk intensive.  A well-tuned disk subsystem can deliver access times of around 5 milliseconds.  SSDs can drop this to about 1 millisecond.  A FlashSystem can deliver incredible throughput while accessing data in as little as 100 microseconds.  Many customers have seen batch times reduced to a third or less of what they experienced before implementing FlashSystem.  Best of all, there are no efforts around migration, recoding or consulting, and no software license costs; a FlashSystem is “just another disk subsystem” to SAP.  If an IBM SVC (SAN Volume Controller) or V7000 is placed in front of a FlashSystem, data can be transparently replicated from a conventional disk subsystem to FlashSystem without even a system outage.  If the subsystem does not produce the expected results, it can be repurposed or, if tried out via a POC, simply returned at no cost.  To date, few, if any, customers have returned a FlashSystem after completing a POC, as these units have universally delivered such incredible results that the typical outcome is an order for more of them.
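
To see why storage latency matters so much for batch, here is a hypothetical back-of-the-envelope model in Python.  The 10-hour baseline and the 80% I/O-wait fraction are assumptions picked purely for illustration; only the access times are the rough figures cited above.

```python
def batch_hours(base_hours, io_fraction, old_latency_us, new_latency_us):
    """Scale only the I/O-bound portion of a batch job by the latency ratio;
    the CPU-bound portion is unaffected by faster storage."""
    io = base_hours * io_fraction
    return (base_hours - io) + io * (new_latency_us / old_latency_us)

BASE_HOURS = 10.0   # assumed batch run time on well-tuned spinning disk (~5 ms reads)
IO_FRACTION = 0.8   # assumed share of elapsed time spent waiting on those reads

print(batch_hours(BASE_HOURS, IO_FRACTION, 5_000, 1_000))  # SSD at ~1 ms    -> 3.6 hours
print(batch_hours(BASE_HOURS, IO_FRACTION, 5_000, 100))    # flash at ~100 us -> 2.16 hours
```

Under these assumptions, flash cuts the run to roughly a fifth of the original; allow for jobs that are less I/O-bound and you arrive at the one-third-or-better reductions customers report.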

Another super simple, no-risk option is to consider using the old 2-tier approach to SAP systems.  In this arrangement, instead of utilizing separate database and application server systems/partitions, database and app server instances are housed within a single OS system/partition.  Some customers don’t realize how “chatty” app servers are, with an amazing number of very small queries and results running back and forth to DB servers.  As fast as Ethernet is, it is as slow as molasses compared to the speed of inter-process communication within an OS.  As crazy as it may seem, simply by consolidating DB and app servers into a single OS, batch and close activity may speed up dramatically.  And here is the no-risk part: most customers have QA systems, and from an SAP architecture perspective, there is no difference between running an app server in the same OS as the DB and running it in a separate one.  As a result, customers can simply give it a shot and see what happens.  No pain other than a little time to set up and test the environment.  Yes, this is the salesman telling you not to spend any money with him.
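
A hypothetical sketch of why the chattiness matters: assume a batch step that makes five million small DB calls, a network round trip of about 200 microseconds, and a local inter-process round trip of about 20 microseconds.  All three numbers are assumptions for illustration, not measurements.

```python
CALLS = 5_000_000      # assumed number of small app-to-DB round trips in a batch step
NETWORK_RTT_US = 200   # assumed round trip over Ethernet, in microseconds
IPC_RTT_US = 20        # assumed local inter-process round trip within one OS

for label, rtt_us in (("over Ethernet", NETWORK_RTT_US), ("via local IPC", IPC_RTT_US)):
    minutes = CALLS * rtt_us / 1_000_000 / 60   # microseconds -> minutes
    print(f"{label}: ~{minutes:.0f} minutes of pure round-trip wait")
```

Roughly a quarter hour of dead time per batch step disappears simply because the calls never leave the box.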

This is not the only business case for HANA.  Others involve improving reporting or even doing away with reporting in favor of real-time analytics.  Here is the interesting part: before Suite on HANA or even BW HANA became available, SAP had introduced real-time replication into side-car HANA appliances.  With these devices, the source of business-critical data is kept on conventional databases, those archaic old systems that are reliable, secure and scalable, around which you have built a best-practices environment, and for which you have already purchased a DB license and are simply paying maintenance.  Perhaps naively, I call this the 95-5 rule, not 80-20.  You may be able to achieve 95% of your business goals with such a side-car without risking a migration or the integrity of your data.  Also, since you will be dealing with a subset of data, the cost of the SW license for such a device will likely be a small fraction of the cost of an entire DB.  Even better, as an appliance, if it fails, you just replace the appliance, as the data source has not been changed.  Sounds too good to be true?  Ask your SAP AE and see what sort of response you get.  Or make it a little more interesting: suggest that you may be several years away from being ready to go to Suite on HANA but could potentially do a side-car in the short term, and observe the way the shark smells blood in the water.  By the way, you have to be on current levels of SAP software in order to migrate to Suite on HANA, and reportedly 70% of customers in North America are not current (no idea about the rest of the world), so this may not even be much of a stretch.

And I have not even mentioned DB2 BLU yet but will leave that for a later blog posting.

August 5, 2013 Posted by | Uncategorized | 4 Comments
