I was intrigued by a recent blog post entitled "Part 1: SAP on VMware: Why choose x86" (https://communities.vmware.com/blogs/walkonblock/2014/02/06/part-1-sap-on-vmware-why-choose-x86). I will get to the credibility of the author in just a moment. First, however, I felt it might be interesting to review the points that were made and discuss them, point by point.
- No Vendor Lock-in: “When it comes to x86 world, there is no vendor lock-in as you can use any vendor and any make and model as per your requirements”. Interesting that the author did not discuss the vendor lock-in on chip, firmware or hypervisor. Intel, or to a very minor degree, AMD, is required for all x86 systems. This would be like being able to choose any car as long as the engine was manufactured by Toyota (a very capable manufacturer, but one which, with a lock on the industry, might not offer the best price or innovation). As any customer knows, each x86 system has its own unique BIOS and/or firmware. Sure, you can switch from one vendor to another or add a second vendor, but without proper QA, training and potentially different operational procedures, this can result in problems. And then there is the hypervisor, with VMware clearly the preference of the author, as it is for most SAP x86 virtualization customers. No lock-in there?
SAP certifies multiple different OS and hypervisor environments for its code. Customers can utilize one or more at any given time. As all logic is written in 3rd and 4th generation languages, i.e. ABAP and Java, and is contained within the DB server, customers can move from one OS, HW platform and/or hypervisor to another and only have to, wait for it, do proper QA, training and modify operational procedures as appropriate. So, SAP has removed lock-in regardless of OS, HW or hypervisor.
Likewise, Oracle, DB2 and Sybase support most OS’s, HW and hypervisors (with some restrictions). Yes, a migration is required for movement between dissimilar stacks, but the same could be said for moving from Windows to Linux, and any move between different stacks still requires all migration activities to be completed, with the potential exception of data movement when you “simply” change the HW vendor.
- Lower hardware & maintenance costs: “x86 servers are far better than cheaper than [sic] non-x86 servers. This also includes the ongoing annual maintenance costs (AMC) as well.” Funny, however, that the author only compared HW and maintenance costs and conveniently forgot about OS and hypervisor costs. Also interesting that the author forgot about utilization of systems. If one system is ½ the cost of another, but you can only drive, effectively, ½ the workload, then the cost is the same per unit of work. Industry analysts have suggested that 45% is the maximum sustained utilization to be expected of VMware SAP systems, with most seeing far less. By the same token, those analysts say that 85% or higher is to be expected of Power Systems. Also interesting to note that the author did not say which systems were being compared, as new systems and options from IBM Power Systems offer close to price parity with x86 systems when HW, OS, hypervisor and 3 years of maintenance are included.
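The utilization argument reduces to simple arithmetic. Here is a minimal sketch; the prices and capacity units are purely illustrative assumptions, and the only figures taken from above are the 45% and 85% sustained utilization numbers:

```python
# Cost per unit of delivered work: price / (capacity x sustained utilization).
# Prices and the capacity figure below are illustrative, not vendor quotes.
def cost_per_unit_of_work(price, capacity_units, utilization):
    return price / (capacity_units * utilization)

# x86 box at half the price, but limited to 45% sustained utilization
x86 = cost_per_unit_of_work(price=50_000, capacity_units=100, utilization=0.45)
# Power box at twice the price, driven at 85% sustained utilization
power = cost_per_unit_of_work(price=100_000, capacity_units=100, utilization=0.85)

print(round(x86, 2), round(power, 2))
```

With these illustrative numbers the two per-unit-of-work costs land within about 6% of each other, which is the point: acquisition price alone says very little about cost per unit of work.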
- Better performance: “Some of the models of x86 servers can actually out-perform the non-x86 servers in various forms.” Itanium is one of the examples, which is a no-duh for anyone watching published benchmarks. The other example is a Gartner paper sponsored by Intel which does not actually quote a single SAP benchmark. Too bad the author suggested this was a discussion of SAP. Last I checked (today, 2/10/14), IBM Power Systems can deliver almost 5 times the SAPS performance of the largest x86 server (as measured by the 2-tier SD benchmark). On a SAPS/core basis, Power delivers almost 30% more SAPS/core compared to Windows systems and almost 60% more than Linux/x86 systems. Likewise, on the 3-tier benchmark, the latest Power result is almost 4.5 times that of the latest x86 result. So much for point 3.
- Choice of OS: “You have choice of using any OS of your choice and not forced to choose a specific OS.” Yes, it really sucks that with Power, you are forced to choose AIX … or IBM i for Business … or SUSE Linux … or Red Hat Linux, which is so much worse than being forced to choose Microsoft Windows … or Oracle Solaris … or SUSE Linux … or Red Hat Linux.
- Disaster Recovery: “You can use any type of hardware, make and model when it comes to disaster recovery (DR). You don’t need to maintain hardware from same vendor.” Oh, really? First, I have not met any customers that use one stack for production and a totally different one for DR, but that is not to say that it can’t be done. Second, remember the discussion about BIOS and firmware? There can be different patches, prerequisites and workarounds for different stacks. Few customers want to spend all of the money they “saved” on a separate QA cycle for DR. Even fewer want to take a chance on DR not working when they can least afford it, i.e. when there is a disaster. Interestingly, Power actually supports this better than x86, as the stack is identical regardless of which generation, model or MHz is used. You can even run in Power6 mode on a Power7+ server, further enabling complete compatibility regardless of chip type, meaning you can use older systems in DR to back up brand new systems in production.
- Unprecedented scalability: “You can now scale the x86 servers the way you want, TB’s of RAM’s , more than 64 cores etc is very much possible/available in x86 environment.” Yes, any way that you want as long as you don’t need more capacity than is available with the current 80-core systems. Any way that you want as long as you are not running with VMware, which limits partitions to 128 threads, which equates to 64 cores. Any way that you want except that VMware suggests that you contain partitions within a NUMA block, which means a max of 40 cores (http://blogs.vmware.com/apps/sap). Any way that you want as long as you recognize that VMware partitions are further limited in terms of scalability, which results in an effective limit of 32 threads/16 cores, as I have discussed previously in this blog.
- Support from Implementation Vendor: “If you check with your implementation vendor/partner, you will find they that almost all of them can certify/support implementation of SAP on x86 environment. The same is the case if you are thinking about migrating from non-x86 to x86 world.” No clue what point is being made here as all vendors on all supported systems and OSs support SAP on their systems.
The author referred to my blog as part of the proof of his/her theories which is the only reason why I noticed this blog in the first place. The author describes him/herself as “Working with Channel Presales of an MNC”. Interesting that he/she hides him/herself behind “MNC” because the “MNC” that I work for believes that transparency and honesty are required in all internet postings. That said, the author writes about nothing but VMware, so you will have to draw your own conclusions as to where this individual works or with which “MNC” his/her biases lie.
The author, in the reference to my posting, completely misunderstood the point that I made regarding the use of 2-tier SAP benchmark data in projecting the requirements of database only workloads and apparently did not even read the “about me” which shows up by default when you open my blog. I do not work for SAP and nothing that I say can be considered to represent them in any way.
Fundamentally, the author’s bottom line comment, “x86 delivers compelling total cost of ownership (TCO) while considering SAP on x86 environment” is neither supported by the facts that he/she shared nor by those shared by others. IBM Power Systems continues to offer very competitive costs with significantly superior operational characteristics for SAP and non-SAP customers.
Virtualizing SAP HANA with VMware in productive environments is not supported at this time, but according to Arne Arnold with SAP, based on his blog post of Nov 5, http://www.saphana.com/community/blogs/blog/2013/11/05/just-a-matter-of-time-sap-hana-virtualized-in-production, they are working hard in that direction.
Clearly memory can be assigned to individual partitions with VMware and to a more limited extent, CPU resources may also be assigned although this may be a bit more limited in its effectiveness. The issues that SAP will have to overcome, however, are inherent limitations in scalability of VMware partitions, I/O latency and potential contention between partitions for CPU resources.
As I discussed in my blog post late last year, http://saponpower.wordpress.com/2012/10/23/sap-performance-report-sponsored-by-hp-intel-and-vmware-shows-startling-results/, VMware 5.0 was shown, by VMware, HP and Intel, to have severe performance limitations and scalability constraints. In the lab test, a single partition achieved only 62.5% scalability overall, but what is more startling is the scalability between each measured interval. From 4 to 8 threads, they were able to double the number of users, thereby demonstrating 100% scalability, which is excellent. From 8 to 16 threads, they were only able to handle 66.7% more users despite doubling the number of threads. From 16 to 32 threads, the number of users supported increased only 50%. Since the study was published, VMware has released vSphere 5.1 with an architected limit of 64 threads per partition and 5.5 with an architected limit of 128 threads per partition. Notice my careful wording: “architected limit,” not the official VMware wording of “scalability.” Scaling implies that with each additional thread, additional work can be accomplished. Linear scaling implies that each time you double the number of threads, you can accomplish twice the amount of work. Clearly, vSphere 5.0 was unable to attain anything close to linear scaling. But now, with the increased number of threads supported, can they achieve more work? Unfortunately, there are no SAP proof points to answer this question. All that we can do is extrapolate from their earlier published results, assuming the only change is the limit on the number of architected threads.
If we use the straightforward Microsoft Excel “trendline” function to project results using a polynomial of order 2 (no, it has been way too long since I took statistics in college to explain what this means, but I trust Microsoft (lol)), we see that a VMware partition is unlikely to ever achieve much more throughput, without a major change in the VMware kernel, than it achieved with only 32 threads. Here is a graph that I was able to create in Excel using the data points from the above white paper.
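For anyone who wants to check my reading of the white paper’s numbers, the interval and overall scalability figures can be reproduced from the quoted percentages alone. The user counts below are normalized (set to 1.0 at 4 threads), since the paper’s absolute values are not repeated here:

```python
# Relative users supported at each thread count, reconstructed from the
# interval scaling quoted above: 4->8 threads +100%, 8->16 +66.7%, 16->32 +50%.
threads = [4, 8, 16, 32]
users = [1.0]
for gain in (1.0, 2/3, 0.5):          # +100%, +66.7%, +50%
    users.append(users[-1] * (1 + gain))

# Perfect linear scaling from 4 to 32 threads would deliver 8x the users.
linear = users[0] * (threads[-1] / threads[0])
overall = users[-1] / linear
print(f"overall scalability: {overall:.1%}")
```

The result is the 62.5% overall scalability figure cited above.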
Remember, at 32 threads with Intel Hyper-Threading, this represents only 16 cores. As a 1TB BW HANA system requires 80 cores, it is rather difficult to imagine how a VMware partition could ever handle this sort of workload, much less how it would respond to larger workloads. Remember, 1TB = 512GB of data space which, at a 4 to 1 compression ratio, equals 2TB of data. VMware starts to look more and more inadequate as data size increases.
And if a customer were misled enough by VMware or one of their resellers, they might think that using VMware in non-prod was a good idea. Has SAP or a reputable consultant ever recommended using one architecture and stack in non-prod and a completely different one in prod?
So, in which case would virtualizing HANA be a good idea? As far as I can tell, only if you are dealing with very small HANA databases. How small? Let’s do the math: assuming linear scalability (which we have already shown above is not even close to what VMware can achieve), 32 threads = 16 cores, which is only 20% of the capacity of an 80-core system. 20% of 2TB = 400GB of uncompressed data. At the 62.5% scalability described above, this would diminish further to 250GB. There may be some side-car applications for which a large enterprise might replicate only 250GB of data, but do you really want to size for the absolute maximum throughput and have no room for growth other than chucking the entire system and moving to newer processor versions each time they come out? There might also be some very small customers whose data can currently fit into this small a space, but once again, why architect for no growth and potential failure? Remember, this was a discussion only about scalability, not response time. Is it likely that response time also degrades as VMware partitions increase in size? Silly me! I forgot to mention that the above white paper showed response time increasing from .2 seconds @ 4 threads to 1 second @ 32 threads, a 400% increase in response time. Isn’t the goal of HANA to deliver improved performance? Kind of defeats the purpose if you virtualize it using VMware!
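To make that sizing arithmetic easy to check, here is the same logic as a sketch. All inputs are the figures already cited: the 16-core effective VMware limit, the 80-core/1TB BW HANA reference configuration, the 2:1 RAM-to-data-space ratio, 4:1 compression, and the observed 62.5% scalability.

```python
# Back-of-envelope sizing for a hypothetical VMware-hosted HANA partition.
total_cores = 80                  # cores required for a 1TB BW HANA system
vmware_effective_cores = 16       # effective VMware partition limit (32 threads)
ram_tb = 1.0

data_space_tb = ram_tb / 2            # 1TB of RAM -> 512GB of data space
uncompressed_tb = data_space_tb * 4   # 4:1 compression -> 2TB of raw data

fraction = vmware_effective_cores / total_cores     # 20% of the reference system
max_uncompressed_tb = uncompressed_tb * fraction    # 0.4TB, i.e. ~400GB
at_observed_scaling = max_uncompressed_tb * 0.625   # ~250GB at 62.5% scalability

print(max_uncompressed_tb, at_observed_scaling)
```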
High end Power Systems customers have a new option for SAP app servers that is dramatically less expensive than x86 Linux solutions
Up until recently, if you were expanding the use of your SAP infrastructure or have some older Power Systems that you were considering replacing with x86 Linux systems, I could give you a TCO argument that showed how you could see roughly equivalent TCO using lower end Power Servers. Of course, some people might not buy into all of the assumptions or might state that Linux was their new standard such that AIX was no longer an acceptable option. Recently, IBM made an announcement which has changed the landscape so dramatically that you can now obtain the needed capacity using high end server “dark cores” with Linux, not at an equivalent TCO, but at a dramatically lower TCA.
The new offering is called IFL which stands for Integrated Facility for Linux. This concept originated with System Z (aka mainframe) several years ago. It allows customers that have existing Power 770, 780 or 795 servers with capacity on demand “dark cores”, i.e. for which no workload currently runs and the license to use the hardware, virtualization and OS software have not been activated, to turn on a group of cores and memory specifically to be used for Linux only workloads. A Power IFL is composed of 4 cores with 32GB of memory and has a list price of $8,591.
In the announcement materials provided by IBM Marketing, an example is provided of a customer that would need to add the equivalent of 16 cores @ 80% utilization and 128GB of memory to an existing Power 780 4.4GHz system or would need the equivalent capacity using a 32-core HP DL560 2.7GHz system running at 60% utilization. They used SPECint_rate as the basis of this comparison. Including 3 year license for PowerVM, Linux subscription and support, 24×7 hardware maintenance and the above mentioned Power activations, the estimated street price would be approximately $39,100. By comparison, the above HP system plus Linux subscription and support, VMware vSphere and 24×7 hardware maintenance would come in at an estimated street price of approximately $55,200.
Already sounds like a good deal, but I am a skeptic, so I needed to run the numbers myself. I find SPECint_rate to be a good indicator of performance for almost no workloads and an incredibly terrible indicator of performance for SAP workloads. So, I took a different approach. I found a set of data from an existing SAP customer of IBM which I then used to extrapolate capacity requirements. I selected the workloads necessary to drive 16 cores of a Power 780 3.8GHz system @ 85% utilization. Why 85%? Because we, and independent sources such as Solitaire Interglobal, have data from many large customers that report routinely driving their Power Systems to a sustained utilization of 85% or higher. I then took those exact same workloads and modeled them onto x86 servers assuming that they would be virtualized using VMware. Once again, Solitaire Interglobal reports that almost no customers are able to drive a sustained utilization of 45% in this environment and that 35% would be more typical, but I chose a target utilization of 55% instead to make this as optimistic for the x86 servers as possible. I also applied only a 10% VMware overhead factor although many sources say that is also optimistic. It took almost 6 systems with each hosting about 3 partitions to handle the same workload as the above 16-core IFL pool did.
Once again, I was concerned that some of you might be even more optimistic about VMware, so I reran the model using a 65% target utilization (completely unattainable in my mind, but I wanted to work out the ultimate, all stars aligned, best admins on the planet, tons of time to tune systems, scenario) and 5% VMware overhead (I don’t know anyone that believes VMware overhead to be this low). With each system hosting 3 to 4 partitions, I was able to fit the workloads on 5 systems. If we just go crazy with unrealistic assumptions, I am sure there is a way that you could imagine these workloads fitting onto 4 systems.
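The model behind those server counts can be sketched as follows. The 85%, 55%/10% and 65%/5% figures are from the scenarios above; the 0.3 per-core performance ratio is a hypothetical value used purely for illustration (the real model used customer workload data not reproduced in this post), though it happens to yield the same 6- and 5-server outcomes:

```python
import math

# Usable capacity per server = cores x per-core performance ratio
#                              x target utilization x (1 - hypervisor overhead)
def servers_needed(required_work, cores, per_core_ratio, utilization, overhead):
    usable = cores * per_core_ratio * utilization * (1 - overhead)
    return math.ceil(required_work / usable)

required = 16 * 0.85   # work delivered by 16 Power cores at 85% utilization

# PER_CORE_RATIO is a hypothetical x86-vs-Power per-core figure, chosen
# only so this sketch reproduces the outcomes described above.
PER_CORE_RATIO = 0.3

optimistic = servers_needed(required, 16, PER_CORE_RATIO, 0.55, 0.10)
very_optimistic = servers_needed(required, 16, PER_CORE_RATIO, 0.65, 0.05)
print(optimistic, very_optimistic)
```

Even under the "all stars aligned" 65%/5% assumptions, the utilization and overhead terms alone keep the server count well above one.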
Next, I wanted to determine the accurate price for those x86 systems. I used HP’s handy on-line ordering web site to price some systems. Instead of the DL560 that IBM Marketing used, I chose the DL360e Gen8 system, with 2@8-core 1.8GHz processors, 64GB of memory, a pair of 7200rpm 500GB hard drives, VMware Enterprise for 2 processors with 3 yr subscription, RH Enterprise Linux 2 socket/4 guest with 3 yr subscription, 3yr 24×7 ProCare Service and HP installation services. The total price comes to $27,871 which after an estimated discount of 25% on everything (probably not realistic), results in a street price of $20,903.
Let’s do the math. Depending on which x86 scenario you believe is reasonable, it either takes 6 systems at a cost of $125,419, 5 systems @ $104,515 or 4 systems @ $83,612 to handle the same load as a 4 IFL/16-core pool of partitions on a 780 at a cost of $39,100. So, in the most optimistic case for x86, you would still have to pay $44,512 more. It does not take a rocket scientist to realize that using Power IFLs would result in a far less expensive solution with far better reliability and flexibility characteristics not to mention better performance since communication to/from the DB servers would utilize the radically faster backplane instead of an external TCP/IP network.
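The dollar arithmetic is easy enough to verify. List prices come from the HP configurations detailed later in this post; the 25% discount is my estimate, and small rounding differences from the figures above are possible:

```python
ifl_pool = 39_100                    # 4 IFLs / 16-core pool, estimated street price
dl360e_list = 27_871                 # DL360e list price, configured as described below
street = round(dl360e_list * 0.75)   # after the estimated 25% discount (~$20,903)

for servers in (6, 5, 4):
    total = street * servers
    print(f"{servers} servers: ${total:,} (${total - ifl_pool:,} more than the IFL pool)")
```

Even the most optimistic 4-server case comes out roughly $44,500 above the IFL pool.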
But wait, you say. There is a better solution. You could use bigger x86 systems with more partitions on each one. You are correct. Thanks for bringing that up. It turns out, just as with Power Systems, that if you put more partitions on each VMware system, the aggregate peaks never add up to the sum of the individual peaks. Using 32-core DL560s @ 2.2GHz, 5% VMware overhead and a 65% target utilization, you would only need 2 systems. I priced them on the HP web site with RH Linux 4 socket/unlimited guests 3yr subscription, VMware Enterprise 4 socket/3yr, 24×7 ProCare and HP installation service and found the price to be $70,626 per system, i.e. $141,252 for two systems, or $105,939 after the same, perhaps unattainable, 25% discount. Clearly, 2 systems are more elegant than 4 to 6, but this solution is still $66,839 more expensive than the IFL solution.
I started off to try and prove that IBM Marketing was being overly optimistic and ended up realizing that they were highly conservative. The business case for using IFLs for SAP app servers on an existing IBM high end system with unutilized dark cores compared to net new VMware/Linux/x86 systems is overwhelming. As many customers have decided to utilize high end Power servers for DB due to their reliability, security, flexibility and performance characteristics, the introduction of IFLs for app servers is almost a no-brainer.
HP ProLiant DL360e Gen8 8 SFF Configure-to-order Server – (Energy Star)661189-ESC $11,435.00
HP ProLiant DL360e Gen8 Server
HP DL360e Gen8 Intel® Xeon® E5-2450L (1.8GHz/8-core/20MB/70W) Processor FIO Kit x 2
HP 32GB (4x8GB) Dual Rank x4 PC3L-10600 (DDR3-1333) Reg CAS-9 LP Memory Kit x 2
HP Integrated Lights Out 4 (iLO 4) Management Engine
HP Embedded B120i SATA Controller
HP 8-Bay Small Form Factor Drive Cage
HP Gen8 CPU1 Riser Kit with SAS Kit + SAS License Kit
HP 500GB 6G SATA 7.2K rpm SFF (2.5-inch) SC Midline 1yr Warranty Hard Drive x 2
HP 460W Common Slot Platinum Plus Hot Plug Power Supply
HP 1U Small Form Factor Ball Bearing Gen8 Rail Kit
3-Year Limited Warranty Included
3yr, 24×7 4hr ProCare Service $1,300.00
HP Install HP ProLiant $225.00
Red Hat Enterprise Linux 2 Sockets 4 Guest 3 Year Subscription 24×7 Support No Media Lic E-LTU $5,555.00
VMware vSphere Enterprise 1 Processor 3 yr software $4,678.00 x 2 = $9,356.00
DL360e Total price $27,871.00
ProLiant DL560 Gen8 Configure-to-order Server (Energy Star) 686792-ESC $29,364.00
HP ProLiant DL560 Gen8 Configure-to-order Server
HP DL560 Gen8 Intel® Xeon® E5-4620 (2.2GHz/8-core/16MB/95W) Processor FIO Kit
HP DL560 Gen8 Intel® Xeon® E5-4620 (2.2GHz/8-core/16MB/95W) Processor Kit x3
HP 16GB (2x8GB) Dual Rank x4 PC3L-10600 (DDR3-1333) Reg CAS-9 LP Memory Kit x 4
ENERGY STAR® qualified model
HP Embedded Smart Array P420i/2GB FBWC Controller
HP 500GB 6G SAS 7.2K rpm SFF (2.5-inch) SC Midline 1yr Warranty Hard Drive x 2
HP iLO Management Engine(iLO 4)
3 years parts, labor and onsite service (3/3/3) standard warranty. Certain restrictions and exclusions apply.
HP 3y 4h 24×7 ProCare Service $3,536.00
Red Hat Enterprise Linux 4 Sockets Unlimited Guest 3 Yr Subscription 24×7 Support No Media Lic E-LTU $18,519.00
VMware vSphere Enterprise 1 Processor 3 yr software $4,678.00 x 4 = $18,712.00
HP Install DL560 Service $495.00
DL560 Total price: $70,626.00
IBM will, yet again, have a strong presence at TechEd. I have included a list of sessions at which IBM or customers of IBM will be presenting topics. In addition to the “cloud” session listed below, I will also be participating in the Information Management session with Martin Mezger. I look forward to seeing everyone at both of those sessions. For those of you interested in SAP Landscape Virtualization Management, consider attending the PG&E session for a real world example of how this offering from SAP can bring real benefits to the operations of an organization. Please also stop by the IBM Let’s Build A Smarter Planet booth, # 129 on the showroom floor.
Session | Room | Date | Time | Speaker
Successful Deployment of SAP Finance Rapidmart on HANA Platform at Lilly | Bellini Room 2105 | Wednesday, October 23 | 08:00 a.m. | Kiran Yelamaneni
Renovate to Innovate with IBM and SAP Cloud | Bellini Room 2105 | Wednesday, October 23 | 10:30 a.m. | Chuck Kichler
IBM Information Management – Optimized Solutions for Customers | Bellini Room 2105 | Wednesday, October 23 | 04:30 p.m. | Martin Mezger
The BPM Imperative – How to Change Project Thinking to Process Thinking | Bellini Room 2105 | Thursday, October 24 | 08:00 a.m. | Parag Karkhanis
Avoiding Bumps in the Night with SAP HANA, High Availability, Disaster Recovery & more | Bellini Room 2105 | Thursday, October 24 | 09:15 a.m. | Rich Travis
Cloud Benefits – SAP NetWeaver Landscape Virtualization Management and IBM PureSystem | Bellini Room 2105 | Thursday, October 24 | 10:30 a.m. | Alfred Freudenberger
Next Generation Database Technology for SAP Applications and Big Data | Bellini Room 2105 | Thursday, October 24 | 02:00 p.m. | Guersad Kuecuek
Virtualize SAP HANA Systems with VMware and IBM | Bellini Room 2105 | Thursday, October 24 | 03:15 p.m. | Oliver Rettig / Bob Goldsand
Accelerate Your Agile Transformation with Confidence | L8 | Tuesday, October 22 | 05:45 p.m. | James Hunter
SAP HANA – IBM GPFS: Architecture, Concepts, and Best Practices | L23 | Wednesday, October 23 | 04:30 p.m. | Tomas Krojzl
How IBM Overcame Application Lifecycle Complexity | L10 | Wednesday, October 23 | 05:45 p.m. | James Hunter
SAP Self-Service and Provisioning at PG&E Based on SAP NetWeaver LVM with IBM SmartCloud | L9 | Thursday, October 24 | 08:00 a.m. | Danial Khan
Creating Services for Mobile Applications Using SAP NetWeaver Gateway OData Channel | L21 | Thursday, October 24 | 11:45 a.m. | Sandeep Mandloi
Can you imagine walking into a new car dealership and, before you can say anything about your current vehicle and needs, a salesperson immediately offers to show you the latest, greatest and most popular new car? Of course you can, since this is what that person gets paid to do. Now, imagine the above scenario where the salesperson instead asks, “How is your current car not meeting your needs?” and follows it up with, “I don’t want you to buy anything from me unless it brings you substantial value.” After smelling salts have been administered, you might recover enough to act like a cartoon character checking your ears to make sure they are functioning properly and ask the salesperson to repeat what he or she said.
The first scenario is occurring constantly with SAP account execs, systems integrators and consultants playing the above role of new car salesperson. The second rarely happens, but that is exactly the role that I will play in this blog post.
The hype around HANA could not be much louder or deeper than it is currently. As bad as the hype might be, the FUD (Fear, Uncertainty and Doubt) is worse. The hype suggests that HANA can do everything except park your car, since that is a future capability (not really, I just made that up). At worst, this hype suggests a vision for the future that, while not solving world hunger or global warming, might improve the operations and profitability of companies. The FUD is more insidious. It suggests that unless you act like a lamb and follow the lead of the individual telling this tale, you will be like a lost sheep, out of support and far out of the mainstream.
I will address the second issue first. As of today, the beginning of August, SAP has made absolutely no statement indicating it will discontinue support for any platform, OS or DB. In fact, a review of SAP notes shows support for most OS’s with no end date, and even DB2 9.7 has an end of support date that is several years past that of direct standard support from IBM! So, what gives??? Is SAP saying one thing internally and another externally? I have been working with SAP for far too long and know their business practices too well to believe that they would act in such a two-faced manner, not to mention expose themselves to another round of expensive and draining lawsuits. Instead, I place the arrow of shame squarely on those rogue SAP account execs who are perpetuating this story. The next time one of them makes this sort of suggestion, turn the tables on them. Ask them to provide you with a statement, in writing, backed up by official press releases or SAP notes, showing that this is the case. If they can’t, it is reasonable to conclude that they are simply trying to use the age-old FUD tactic to get you to spend more money with them now rather than waiting until/if SAP actually decides to stop supporting a particular type of HW, OS or DB.
And now for the first issue; the hype around HANA. HANA offers dramatic benefits to some SAP customers. Some incarnation of HANA may indeed be inevitable for the vast majority. However, the suggestion that HANA is the end-all be-all flies in the face of many other solutions on the market, many of which are radically less expensive and often have dramatically lower risk. Here is a very simple example.
Most customers would like to reduce the time and resources required to run batch jobs. It seems as if there is not a CFO anywhere who does not want to reduce the month-end/quarter close from multiple days down to a day or less. CFOs are not the only ones with that desire, as certain functions must come to a halt during a close and/or availability requirements go sky high during this time period, requiring higher IT investments. SAP has suggested that HANA can achieve exactly this, however it is not quite clear whether this will require BW HANA, Suite on HANA, some combination of the two or even another as yet unannounced HANA variant. I am sure that if you ask a dozen consultants, you will get a dozen different answers as to how to achieve these goals with HANA, and it is entirely possible that each of them is correct in its own way. One thing is certain, however: it won’t come cheaply. Not only will a company have to buy HANA HW and SW, but it will have to pay for a migration and a boatload of consulting services. It will also not come without risk. BW HANA and Suite on HANA require a full migration. Those systems become the exclusive repository of business critical data. HANA is currently in its 58th revision in a little over two years. HA, DR and backup/recovery tools are still evolving. No benchmarks for Suite on HANA have been published, which means that sizing guidelines are based purely on the size of the DB, not on throughput or even users. Good luck finding extensive large scale customer references, or even medium sized ones, in your industry. To make matters worse, a migration to HANA is a one way path. There is no published migration methodology to move from HANA back to a conventional DB. It is entirely possible that Suite on HANA will be much more stable than BW HANA was, that these systems will scream on benchmarks, that all of those HA, DR, backup/recovery and associated tools will mature in short order and that monkeys will fly.
Had the word risk not been invented previously, Suite on HANA would probably be the first definition in the dictionary for it.
So, is there another way to achieve those goals, maybe one that is less expensive and does not require a migration, software licenses or consulting services? Of course not, because that would be as impossible to believe as the above mentioned flying monkeys. Well, strap on your red shoes and welcome to Oz, because it is not only possible but many customers are already achieving exactly those gains. How? By utilizing high performance flash storage subsystems like the IBM FlashSystem. Where transaction processing typically accesses a relatively small amount of data cached in database buffers, batch, month-end and quarter close jobs tend to be very disk intensive. A well-tuned disk subsystem can deliver access speeds of around 5 milliseconds. SSDs can drop this to about 1 millisecond. A FlashSystem can deliver incredible throughput while accessing data in as little as 100 microseconds. Many customers have seen batch times reduced to a third or less of what they experienced before implementing FlashSystem. Best of all, there are no efforts around migration, recoding or consulting, and no software license costs. A FlashSystem is “just another disk subsystem” to SAP. If an IBM SVC (SAN Volume Controller) or V7000 is placed in front of a FlashSystem, data can be transparently replicated from a conventional disk subsystem to FlashSystem without even a system outage. If the subsystem does not produce the expected results, the system can be repurposed or, if tried out via a POC, simply returned at no cost. To date, few, if any, customers have returned a FlashSystem after completing a POC, as they have universally delivered such incredible results that the typical outcome is an order for more units.
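The latency arithmetic alone explains most of the effect. A rough sketch for a purely I/O-bound batch run follows; the I/O count is a hypothetical round number, and real jobs overlap I/O with CPU work, so treat this as the upper bound of the storage-side gain:

```python
# Storage wait time for a batch job issuing serial random reads, at the
# access times cited above: ~5 ms tuned disk, ~1 ms SSD, ~100 us FlashSystem.
ios = 10_000_000   # hypothetical number of random reads in one batch run

for name, latency_s in [("disk", 5e-3), ("SSD", 1e-3), ("flash", 100e-6)]:
    hours = ios * latency_s / 3600
    print(f"{name}: {hours:.1f} hours of storage wait")
```

At these access times, the same I/O load drops from roughly 14 hours of storage wait on disk to well under an hour on flash, which is consistent with the batch-time reductions customers report.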
Another super simple, no risk option is to consider using the old 2-tier approach to SAP systems. In this situation, instead of utilizing separate database and application server systems/partitions, database and app server instances are housed within a single OS system/partition. Some customers don’t realize how “chatty” app servers are with an amazing number of very small queries and data running back and forth to DB servers. As fast as Ethernet is, it is as slow as molasses compared to the speed of an inter-process communication within an OS. As crazy as it may seem, simply by consolidating DB and app servers into a single OS, batch and close activity may speed up dramatically. And here is the no risk part. Most customers have QA systems and from an SAP architecture perspective, there is no difference in having app servers within a single OS compared to on separate OSs. As a result, customers can simply give it a shot and see what happens. No pain other than a little time to set up and test the environment. Yes, this is the salesman telling you not to spend any money with him.
This is not the only business case for HANA. Others involve improving reporting or even doing away with reporting in favor of real-time analytics. Here is the interesting part. Before Suite on HANA or even BW HANA became available, SAP had introduced real-time replication into side-car HANA appliances. With these devices, the source of business critical data is kept on conventional databases. You remember those archaic old systems that are reliable, secure and scalable, around which you have built a best-practices environment, and for which you have already purchased a DB license and are simply paying maintenance. Perhaps naively, I call this the 95-5 rule, not 80-20. You may be able to achieve 95% of your business goals with such a side-car without risking a migration or the integrity of your data. Also, since you will be dealing with a subset of data, the cost of the SW license for such a device will likely be a small fraction of the cost of an entire DB. Even better, as an appliance, if it fails, you just replace the appliance as the data source has not been changed. Sounds too good to be true? Ask your SAP AE and see what sort of response you get. Or make it a little more interesting and suggest that you may be several years away from being ready to go to Suite on HANA but could potentially do a side-car in the short term, and observe the way the shark will smell blood in the water. By the way, since you have to be on current levels of SAP software in order to migrate to Suite on HANA and reportedly 70% of customers in North America are not current (no idea about the rest of the world), this may not even be much of a stretch.
And I have not even mentioned DB2 BLU yet but will leave that for a later blog posting.
This third party take on HANA is intriguing and funny but certainly makes you think. I rarely just post a link to another blog without editorial, but this one stands on its own.
After Vishal Sikka’s announcement that SAP was investigating the potential of HANA on IBM Power Systems, it seemed that all that was needed for this concept to become a reality was for IBM to invest in the resources to aid SAP in porting and optimization of SAP HANA on Power (HoP) and for customers to weigh in on their desire for such a solution.
Many, very large customers told us that they did let SAP know of their interest in HoP. IBM and SAP made the necessary investments for a proof of concept with HoP. This successful effort was an example of the outstanding results that happen when two great companies cooperate and put some of their best people together. However, there are still no commitments to deliver HoP in 2013. SAP apparently has not ruled out such a solution at some point in the future. So, why should you care since HANA already runs on x86?
Simple answer. Are you ready to bet your business on x86?
Do Intel systems offer the scalability that your business requires and can those systems react fast enough to changing business conditions? Power scales far higher than x86, has no artificial limitations and responds to changing demands almost instantly.
Are x86 systems reliable enough? Power Systems inherited a wide array of self correcting and fault tolerant features from the mainframe, still the standard for reliability in the industry.
Are x86 systems secure enough? Despite the best attempts by hackers, PowerVM has still never been breached.
Can you exploit virtualization or will you have to go back to a 1990s concept of islands of automation? The PowerVM hypervisor is part of every Power system, so it is virtualized by default and the journey that most customers have been on for most of this millennium can continue unabated.
What can you do about this? Speak up!! Call your SAP Account Executive and send them notes. Let them know that you are unwilling to take a chance on allowing your SAP Business Suite database systems to be placed on anything less than the most reliable, scalable, secure and flexible systems available, i.e. IBM Power Systems. Remind SAP that Business Suite DB already runs very well on current Power Systems and that until SAP is willing to support this platform for HANA, there is very little compelling reason for you to consider a move to HANA.
Sapphire is just a week away. This may be the best opportunity for you to deliver this message as most of SAP’s leadership will be present in Orlando. If they hear this message from enough customers, it is unlikely that they will simply ignore it.
For those of you attending the SAPPHIRE and/or ASUG Annual Conference in Orlando, IBM will have a big presence. IBM will be presenting at 15 different ASUG and SAPPHIRE breakout sessions and will have a number of additional featured sessions. The IBM booth will have IBM experts on all facets of solutions around SAP. We will get you in contact with the right people who can answer any questions you may have. You can also check out all of the new solutions IBM has for SAP at the IBM booth. I will personally be manning the IBM booth with respect to SAP solutions on z. Whether you are running SAP on z or not, it would be great to see you. Please stop by and say hello.
The following link http://www.ibm.com/solutions/sap/us/en/landing/N844815N07692Q43-3.html has lots of information on all of the different activities IBM will have at the SAPPHIRE and ASUG conference. For your convenience, some general information is listed below in section A. Some key topics covered in the Experiential Zone are covered below in section B. The 15 different breakout sessions are covered in section C below.
A) Join IBM at the SAPPHIRE NOW and ASUG Annual Conference, May 14 – 16 in Orlando
For more than four decades, IBM and SAP have worked together to deliver superior ROI through tens of thousands of successful implementations to help companies innovate, adapt, and compete in the Era of SMART.
Visit IBM at booth #1017 to see how businesses successfully integrate IBM products and solutions
–Front Office & Mobility Solutions: Improve Productivity Anywhere – Everywhere
–Cloud Computing: Reduce costs and improve flexibility
–SAP Analytics and SAP HANA
–Enterprise Application Services, Line of Business & Industry Solutions: Optimize SAP Investments
–Breakthrough Technologies: World Class Infrastructure, IBM DB2 and Middleware
Tuesday, May 14 1p – 1:40p: Under Armour speeds real time decisions to maximize product availability
Wednesday, May 15 4p – 4:40p: GM’s Big-Bang Service Parts Transformation
Tuesday, May 14 12p – 12:45p: Micro forum – Energize Your SAP Software Investment with IBM DB2 10.5 BLU Acceleration
B) Talk with our experts on key topics at the Experiential Zones:
What happens if you can’t make your product? How much money do you lose? What is the impact to an automotive enterprise and its customers, consumers, dealers, wholesalers and shareholders? You rely on equipment, machines and plants to produce the products you sell, and the Preventative Maintenance Solution from SAP has been developed to keep you from having any disruption. Our unique and powerful solution is the marriage of IBM’s and SAP’s innovative technologies. We have combined IBM Research, our Watson artificial intelligence technology, SAP HANA, SAP BusinessObjects, SAP Mobility & Syclo and IBM hardware. This innovative new solution will allow automotive OEMs, suppliers and dealers to more accurately predict their maintenance strategy for each piece of equipment in their enterprise.
Learn how to decrease unplanned equipment / machine/ plant down time
Learn how to reduce planned downtime
Learn how to reduce replenishment stock
Learn how to get maximum value from parts and repair / replace before failure
Learn how to reduce work order mistakes
The SAP Predictive Analytics in a Connected Health Care solution establishes critical components of a closed loop analytics environment. This solution establishes an industry-leading data model that captures front office data (implant device, health and wellness, other 3rd party patient data) and integrates it with core back-office data (inventory, customer, sales, manufacturing) to provide connected patient, device, payer and provider analytics. In addition, advanced predictive analytics are used to evaluate the data captured in the data model, providing critical insight into patient and device health and ultimately driving lower health care costs. The connected care solution will demonstrate the power of integrating the front office and back office with real time analytics through HANA, an approach that can be used for other med device and life sciences companies, as well as for other industries dealing with similar ecosystem challenges.
The “IBM Loyalty Management Concept Application” connects seamlessly into IBM’s Enhanced Loyalty Management Solution for SAP and enables loyalty program members to connect and transact like never before. This comprehensive app shows real-time reward updates, loyalty account information, purchase history, and personalized promotions computed by IBM Research’s unique analytics engine. The mobile application provides for mCommerce and rewards redemption capabilities.
The use of the application results in increased customer loyalty, transaction volume, and promotional conversion rates.
Manage loyalty accounts – review your account points, status levels, and upcoming rewards
Get benefits and share – get personalized offers, share them with your social network, and add them to your loyalty card for future use
Find stores – use the store locator to search for directions and details
“Lay away”- a method to borrow points for future transactions
C) IBM break-out sessions at ASUG and SAPPHIRE
1) IBM and SAP Transportation Management – Experiences in the Transportation Industry
ASUG session 4505
Within IBM, SAP Transportation Management is a strategic focus and IBM has teamed with SAP to deliver a next generation transportation management solution for DHL. This session will describe the project scope, status, and IBM’s role and relationship with SAP and Transportation Management.
2) Portal/ESS From Blueprint and Workshops to WDA/ABAP Configuration, Security, and Second Level PIN Authentication to an Employee Self-Service Portal
ASUG session 2101
This presentation will cover ESS implementation experience with SAP NetWeaver Portal 7.3 and EhP5 ECC. Presenters will demonstrate the portal functionality of ESS time and pay statements and the integration of a second level authentication. They’ll discuss an implementation approach of blueprinting via prototyping; show solutions to Web Dynpro ABAP configuration challenges; demo some ABAP enhancements for Info Type integration; and explain some of the security and onboarding process challenges. Presenters will also demonstrate the under-the-hood functionality of the second level PIN authentication, including PIN reset and how the PIN is stored on an Info Type. They will show documents, approaches and configuration in the session that are useful to anyone thinking of implementing ESS with NW 7.3 and EhP5+. This session will offer lessons learned on an NW 7.3 Portal and ECC EhP5 implementation.
3) Energize Your SAP Software Investment with IBM DB2 10.5 BLU Acceleration
SAPPHIRE session 88564
Whether you’re an SAP and DB2 veteran or want to explore superior alternatives to an incumbent non-DB2 database, join to discuss how exciting new joint innovations make an SAP software and DB2 LUW combination the best-of-breed business solution, including BLU Acceleration.
4) Case Study: How First Solar Achieved Real-Time Analysis of Supplier and Delivery Performance Metrics Using SAP HANA Enterprise Solution.
ASUG Session 1203
First Solar’s IT organization was challenged to keep up with changing data needs driven by solar energy industry trends. Business users want reliable, real-time reporting capabilities to make informed, critical business decisions. At the same time, First Solar wants to enable self-service analytical capabilities for the business on the SAP HANA platform to address current and future information needs. The goal of the SAP HANA Enterprise project implementation was to move business analytics development from IT to the business users, provide business users access to more data, and deliver a robust BI reporting solution for supply chain management. Due to lengthy BW and ECC project timelines in the past as well as the aforementioned challenges, SAP HANA provided an ideal platform and better real-time analytics to assist business users with quick decision making. Key highlights of the project: the SAP HANA platform provided a scalable solution for supply chain analysis, particularly for the procurement team; established a foundation for big data and real-time information delivery requirements across the organization and greatly reduced query response times; enabled better analysis of elements of the business that leverage product and vendor master data, and purchasing analytics for supplier and delivery performance; and reduced the burden on IT for reporting requests while providing a platform for quicker time-to-value analysis and automated information delivery.
5) BC Hydro’s Integrated Project and Portfolio Management Enterprise Solution
ASUG session 2803
BC Hydro’s Project and Portfolio Management solution (PPM) is an integrated solution for delivering capital projects, programs, and portfolios. The solution consists of: Primavera P6: scheduling; SAP Project Systems: WBS & cost management; SAP Enterprise Project Connection: Primavera P6 and SAP PS integration; SAP BW and P6 Reporting Database; Microsoft SharePoint: project document, issue, risk, and change management; and IBM Rational: practice reference tool. This case study presentation describes the PPM solution: its implementation, sustainment, challenges, and critical success factors to ensure its successful adoption as an enterprise solution.
6) Sell, Deliver, and Invoice Bundles of Products and Services
SAPPHIRE session 94914
This discussion details ways to create customer invoices based on consumption of ordered bundles. Learn why IBM turned to SAP solutions for consumption and invoicing now that professional services firms require companies to sell, deliver, and invoice product bundles.
7) Implementation Scenarios & Architectural Considerations for SAP MII Implementations
ASUG session 1105
SAP MII is a composition and performance management platform for manufacturing integration and visualization, which needs specific architectural considerations to address requirements related to integration and messaging, data persistency, user interface development, deployment, and security. This session will take you through the different architectural scenarios and decisions that you may need to take while implementing SAP MII along with industry-specific scenarios of SAP MII implementations. The following architectural scenarios and industry case studies will be explained in the presentation: SAP MII Deployment Architecture, SAP MII Data Persistency Architecture, SAP MII Integration & Messaging Architecture, SAP MII User Interface Options & Architecture, SAP MII Security Architecture, Typical Industry Scenarios for SAP MII, Implementations Process & Methods to Ease SAP MII Implementation, and IBM Cross Industry Solutions Assets on MII
8) Localizing the Global Template – A Global SAP Transformation Challenge
ASUG session 1505
More and more multinational companies are either consolidating their world-wide SAP instances or embarking on green-field, global SAP-based transformation projects. These implementations are often complex and difficult. Typically, the integration of global operations running on a single SAP platform requires careful and systematic consideration of all business requirements, both global and local, as they may sometimes conflict with each other, be confusing, or be missed completely. In many cases, the consequences will be long lasting, if not permanent. But this does not have to happen. How do we bring all these puzzle pieces together while keeping two basic principles intact: 1) enabling the creation of a global template with common core processes and 2) successfully staging deployment (only adding incremental legal and regulatory requirements) so as to manage project costs, risks, and return on investment (ROI)? Among other things, presenters will discuss a robust approach and look at corners of the world where legal and regulatory requirements are most challenging and need to be considered upfront during the global blueprint. They will explore localization accelerators and requirement databases. Presenters will also have a lively discussion on the intricacies of global cultures and how to strategically optimize the strength of each of these teams from Blueprint to Go-Live.
9) At IBM, Complex Offerings Just Got a Lot Less Complicated
ASUG Session 3506
The first rule of business: don’t sell products, sell solutions. Well, easier said than done, particularly for industries dealing with complex products and services. You might wonder, then, how IBM configures, prices, and quotes its complex offerings with such remarkable efficiency. Simple. IBM uses SAP Solution Sales Configuration to address issues surrounding complex configurations and solution selling. With this application, your organization will be able to: cross-sell and up-sell across multiple brands and product lines, provide a consistent experience and support for all audiences across multiple channels, and offer integrated real-time services for configuration and master data, so orders are filled accurately each time. Find out how SAP Solution Sales Configuration can shorten your sales cycle by generating quotes quickly, so your entire team can work more efficiently and productively every day.
10) SAP Enterprise Application Strategy in the Era of SAP HANA, Infrastructure, Platforms, Software and Everything as a Service
ASUG Session 2309
What are the forces that will shape your enterprise application landscape? How will SAP’s strategy of “on-premise” and “off-premise” impact your company’s strategy? What are the rules that can make your company’s strategy successful and what are the pitfalls? What major shifts in technology and business might derail your strategy? Presenters will discuss several major companies’ strategies to date. They’ll discuss how to use private, hosted, and public cloud successfully today. Finally, they’ll give attendees several tools to help any company navigate the complex future.
11) SAP Enabled Procurement Transformation Lessons Learned at General Motors
ASUG Session 1212
This presentation will describe how GM (General Motors) was able to use SAP procurement technology to vastly improve and transform their procurement business processes. Attendees will learn how GM was able to implement SAP SRM 7.01 (Supplier Relationship Management), Sourcing 7.0, and BI Reporting into a complex, global procurement organization. The presentation includes both the technical challenges and how they were overcome, as well as the business challenges and how they were addressed. The presentation covers the project from its inception, through a successful pilot deployment, and subsequent roll-outs. This presentation includes a frank review of the challenges faced and provides the specifics of how they were addressed. Each attendee will leave with a deep understanding on how to utilize SAP tools to transform complex procurement organizations.
12) IBM’s “Predictive” Maintenance Solution for SAP Software
SAPPHIRE session 87335
This new innovative solution will allow automotive OEMs, suppliers, and dealers to more accurately predict their maintenance strategy for each piece of equipment in their enterprise, leveraging IBM Research, the SAP HANA platform, SAP BusinessObjects software, SAP mobile apps and Syclo, and IBM hardware.
13) Portal: How to Deal with Role-Based Navigation Models for Different Countries and Languages
ASUG Session 2213
In this session, presenters will discuss the lessons learned from global SAP Portal implementations. They will share their experience with implementing global multi-language portals. They will demonstrate how they deal with multi-role navigation models, language preferences, and third party integration based on country requirements. Presenters will show the method to build navigation models, integrate help functionality, and address some of the ESS and MSS process challenges such as position-based Personnel Change Requests (PCR), Organizational Change Requests (OCR), or people-based PCR/OCR. MSS personnel change requests and organizational change requests are part of the demonstration, as well as policy and help desk (shared service) integration.
14) See How Waters Used Webdynpro to Integrate an End-to-End Process Around Document Review and Approval
ASUG Session 2914
Waters has made significant gains with its global application of electronic document review and approval processes. These processes today support Product Project Development, Software Tool Validation, ISO procedures and policies, as well as integration with preliminary review of Service and Operating Manuals released with Engineering Change. Using Web Dynpro, the casual SAP user as well as the seasoned veteran involved with document creation, review and approval is now able to access documentation from a variety of access points for general inquiry as well as for formal review and approval actions, including: Product Development Collaboration Rooms, Portal & Portal UWL, and even Lotus Notes. Further near-term simplifications are aimed at 2-D text markups and integration with IBM’s Rational tool suite, giving Waters an expandable DMS platform that will meet its document management needs for many years to come.
15) Single View of the Truth: How a Canadian Federal Government Agency Optimized Their SAP and non-SAP Deployment with IBM Rational Solutions
ASUG Session 3314
Discover how one of the largest Canadian Federal Government agencies optimized their SAP and non-SAP ERP solutions with IBM Rational, by delivering a comprehensive and automated approach for managing their enterprise architecture, requirements, quality, and change management. By using IBM Rational’s solution, they are now able to manage change quickly and efficiently with cost-effective Rational software, processes, and services for SAP and non-SAP solutions under one “single view of the truth”. This offers several ongoing benefits: increased quality and speed of business processes deployed to SAP and non-SAP environments; centralized management of both SAP and non-SAP assets in a single “vanilla” ERP solution; centralization of all SAP and non-SAP business processes, linked to their respective business requirements and test plans; and the ability to manage and test SAP and non-SAP projects in a unified way. Three initiatives were used with this Canadian Federal Government agency to help optimize their SAP and non-SAP deliveries: enterprise planning for SAP and non-SAP, Application Lifecycle Management for SAP and non-SAP, and quality management for SAP and non-SAP.
This content was created by Bob Wolf, North America Sales Exec – SAP on System z Solutions.
Before you get the wrong impression, SAP has not announced the availability of HANA on Power and in no way should you interpret this posting as any sort of pre-announcement. This is purely a discussion about why you should care whether SAP decides to support HANA on Power.
As you may be aware, during SAP’s announcement of the availability of HANA for Business Suite DB for ramp-up customers in early January, Vishal Sikka, Chief Technology Officer and member of the Executive Board at SAP, stated: “We have been working heavily with [IBM]. All the way from lifecycle management and servers to services, even Cognos, Business Intelligence on top of HANA – and also evaluating the work that we have been doing on POWER. As to see how far we can go with POWER – the work that we have been doing jointly at HPI. This is a true example of open co-innovation that we have been working on.” Ken Tsai, VP of SAP HANA product marketing, later added in an interview with IT Jungle, “Power is something that we’re looking at very closely now.” http://www.itjungle.com/fhs/fhs011513-story01.html. And from Amit Sinha, head of database and technology product marketing: “[HANA] on Power is a research project currently sponsored at Hasso Plattner Institute. We await results from that to take next meaningful steps jointly with IBM.” Clearly, something significant is going on. So, why should you care?
Very simply, the reasons why customers chose Power Systems (and perhaps HP Integrity and Oracle/Fujitsu SPARC/Solaris) for SAP DBs in the past, i.e. scalability, reliability and security, are just as relevant with HANA as with conventional databases, perhaps even more so. Why more so? Because once the promise of real time analytics on an operational database is realized, not necessarily in version 1.0 of the product but undoubtedly in the future, the value delivered by this capability would be lost just as surely if the system were unavailable or could not respond with the speed that real time analytics demands.
A little known fact is that HANA for Business Suite DB is currently limited to a single node. This means that scale-out configurations, common in the BW HANA space and elsewhere, are not an option for this implementation of the product. Until that changes, customers that wish to host large databases may require a larger number of cores than x86 vendors currently offer.
A second known but often overlooked fact is that parallel transactional database systems for SAP are often complex, expensive and so limited that only two types of customers consider this option: those which need continuous or near continuous availability, and those that want to move away from a robust UNIX solution and realize that to attain the same level of uptime as a single node UNIX system with conventional HA, an Oracle RAC or DB2 PureScale cluster is required. Why is it so complex? Without getting into too much detail, we need to look at the way SAP applications work and interact with the database. As most are aware, when a user logs on to SAP, they are connected to a unique application server and remain connected to that server until they log off. Each application server is, in turn, connected to one node of a parallel DB cluster. Each request to read or write data is sent to that node, and if the data is local, i.e. in the memory of that node, the processing occurs very rapidly. If, on the other hand, the data is on another node, that data must be moved from the remote node to the local node. Oracle RAC and DB2 PureScale use two different approaches: Oracle RAC uses Cache Fusion to move the data across an IP network, while DB2 PureScale uses Remote DMA to move the data across the network without an IP stack, thereby improving speed and reducing overhead. Though there may be benefits of one over the other, this posting is not intended to debate that point, but instead to point out that even with the fastest, lowest overhead transfer on an InfiniBand network, access to remote memory is still thousands of times slower than access to local memory.
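The locality penalty compounds quickly. This sketch computes the expected access time per data request as a function of how often the data is found on the local node; both timings are illustrative assumptions (local DRAM on the order of 100 nanoseconds, a remote-node transfer in the hundreds of microseconds), not measurements of RAC or PureScale.

```python
# Expected access time in a parallel DB cluster as a weighted average
# of local and remote accesses. Both timings are illustrative.

LOCAL_NS  = 100       # assumed local memory access, nanoseconds
REMOTE_NS = 500_000   # assumed remote-node access over the interconnect

def expected_access_ns(local_fraction: float) -> float:
    """Average access time for one request given a data-locality rate."""
    return local_fraction * LOCAL_NS + (1.0 - local_fraction) * REMOTE_NS

for f in (1.0, 0.99, 0.9, 0.5):
    print(f"{f:.0%} local: {expected_access_ns(f):>9,.0f} ns average")
```

Under these assumptions, even 99% locality makes the average access roughly fifty times slower than the all-local case, which is why a workload that cannot direct requests to the node holding the data struggles on a parallel cluster.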
Some applications are “cluster aware”, i.e. application servers connect to multiple DB nodes at the same time and direct traffic based on data locality, which is only possible if the DB and app servers work cooperatively to communicate which data is located where. SAP Business Suite is not currently cluster aware, meaning that without a major change in the NetWeaver stack, replacing a conventional DB with a HANA DB will not result in cluster awareness, and the HANA DB for Business Suite may need to remain a single node implementation for some time.
Reliability and security have been the subject of previous blog posts and will be reviewed in some detail in an upcoming post. Clearly, where some level of outage may be tolerable for application servers due to an n+1 architecture, few customers consider outages of a DB server acceptable; even with a parallel cluster, outages may be mitigated but are still not considered tolerable. Also, as mentioned above, in order to achieve this, one must deal with the complexity, cost and limitations of a parallel DB. Since HANA for Business Suite is a single node implementation, at least for the time being, an outage or security intrusion would result in a complete outage of that SAP instance, perhaps more depending on interaction and interfaces between SAP components. Power Systems has a proven track record among medium and large enterprise SAP customers of delivering the lowest level of both planned and unplanned outages and security vulnerabilities of any open system.
Virtualization and partition mobility may also be important factors to consider. As all Power partitions are by definition “virtualized”, it should be possible to dynamically resize a HANA DB partition, host multiple HANA DB partitions on the same system and even move those partitions around using Live Partition Mobility. By comparison, an x86 environment lacking VMware or similar virtualization technology could do none of the above. Though, in theory, SAP might support x86 virtualization at some point for production HANA Business Suite DBs, they don’t currently, and there are a host of reasons why they should not, which are the same reasons why any production SAP databases should not be hosted on VMware, as I discussed in my blog posting: http://saponpower.wordpress.com/2011/08/29/vsphere-5-0-compared-to-powervm/ Lacking x86 virtualization, a customer might conceivably need a DB/HA pair of physical machines for each DB instance, compared to potentially a single DB/HA pair for a Power based virtualized environment.
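The footprint difference is simple arithmetic. In this sketch the instance count is an arbitrary assumption, and the virtualized case assumes the shared pair has enough capacity to host every instance as a partition.

```python
# Physical server count for N production DB instances, each of which
# needs an HA standby. N and the capacity assumption are illustrative.

N_INSTANCES = 6   # assumed number of production DB instances

# Without virtualization: a dedicated DB/HA pair of machines per instance.
bare_metal_servers = N_INSTANCES * 2

# With a virtualized pair: all instances run as partitions on a single
# DB/HA pair, assuming the pair has the capacity to host them.
virtualized_servers = 2

print(f"bare metal:  {bare_metal_servers} servers")
print(f"virtualized: {virtualized_servers} servers")
```

The gap widens linearly with every additional instance, which is why losing virtualization feels like a return to 1990s islands of automation.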
And now a point of pure speculation: with a conventional database, basis administrators and DBAs weigh the cost/benefit of different levels in a storage hierarchy including main memory, flash and HDDs. Usually, main memory is sized to contain upwards of 95% of commonly accessed data, with flash used for logs and some hot data files and HDDs for everything else. For some customers, 30% to 80% of an SAP database is utilized so infrequently that keeping aged items in memory makes little sense and would add cost without any associated benefit. Unlike conventional DBs, with HANA there is no choice: 100% of an SAP database must reside in memory, with flash used for logs and HDDs used for a copy of the data in memory. Not only does this mean radically larger amounts of memory must be used, but as a DB grows, more memory must be added over time. Also, more memory means more DIMMs, with an associated increase in DIMM failure rates, power consumption and heat dissipation. Here Power Systems once again shines. First, IBM offers Power Systems with much larger memory capacities and also offers Memory on Demand on Power 770 and above systems. With this, customers can pay for just the memory they need today and incrementally and non-disruptively add more as they need it. That is not speculation, but the following is. Power Systems running AIX offers Active Memory Expansion (AME), a unique feature which allows infrequently accessed memory pages to be placed into a compressed pool which occupies much less space than uncompressed pages. AIX then transparently moves pages between uncompressed and compressed pools based on page activity, using a hardware accelerator in POWER7+. In theory, a HANA DB could take advantage of this in an unprecedented way. Where tests with DB2 have shown a 30% to 40% expansion rate (i.e. 10GB of real memory looks like 13GB to 14GB to the application), since potentially far more of a HANA DB would have low-use patterns, it may be possible to size the memory of a HANA DB at a small fraction of the actual data size and consequently at a much lower cost, with correspondingly lower DIMM failure rates, power consumption and heat dissipation.
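The AME arithmetic can be sketched as a two-pool model. The 30% to 40% DB2 expansion figure above pins down one data point; the cold-data fraction and compression ratio used here are assumptions chosen to reproduce it, and the HANA case is, as stated, pure speculation.

```python
# Two-pool model of Active Memory Expansion: hot pages stay
# uncompressed, cold pages sit in a compressed pool that holds
# compression_ratio times more data per byte of physical memory.

def effective_memory_gb(physical_gb: float,
                        cold_fraction: float,
                        compression_ratio: float) -> float:
    """Apparent memory size presented to the application."""
    hot_gb = physical_gb * (1.0 - cold_fraction)
    cold_gb = physical_gb * cold_fraction
    return hot_gb + cold_gb * compression_ratio

# Assumptions tuned to match the DB2 observation above:
# 10GB physical appearing as roughly 13.5GB.
db2_like = effective_memory_gb(10, cold_fraction=0.5, compression_ratio=1.7)

# Pure speculation: a HANA DB with a far larger cold-data fraction.
hana_guess = effective_memory_gb(10, cold_fraction=0.8, compression_ratio=3.0)

print(f"DB2-like case:    10GB physical ~ {db2_like:.1f}GB effective")
print(f"Speculative HANA: 10GB physical ~ {hana_guess:.1f}GB effective")
```

Flipping the model around gives the sizing claim in the text: if most of a HANA DB compresses well, the physical memory purchased could be a fraction of the nominal data size.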
If you feel that these potential benefits make sense and that you would like to see a HoP option, it is important that you share this desire with SAP, as they are the only ones who can make the decision to support Power. Sharing your desire does not imply that you are ready to pull the trigger or that you won't consider all available options, simply that you would like to be informed about SAP's plans. In this way, SAP can gauge customer interest and you can have the opportunity to find out which of the benefits suggested above might actually be part of a HoP implementation, or even get SAP to consider supporting one or more that you consider important. Customers interested in receiving more detailed information on the HANA on Power effort should approach their local SAP Account Executive in writing, requesting disclosure information on this platform technology effort.
Cloud means many things to many people. One definition, popularized by various internet-based organizations, refers to cloud as a repository of web URLs, email, documents, pictures, videos, information about items for sale, etc. on a set of servers maintained by an internet provider, where any server in that cluster may access the requested object and make it available to the end user. This is a good definition for those types of services; however, SAP does not exist as a set of independent objects that can be stored and made available on such a cloud.
Another definition involves the dynamic creation, usage and deletion of system images on a set of internet-based servers hosted by a provider. Those images could contain just about anything, including SAP software and customer data. Security of customer data, both on disk and in transit across the internet, service level agreements, reliability, backup/recovery and government compliance (where appropriate) are just a few of the many issues that have to be addressed in such implementations. Non-production systems are well suited for this type of cloud, since many of the above issues may be less of a concern than for production systems. Of course, that is only the case when no business data or intellectual property, e.g. developed ABAP or Java code, is stored on such servers; once it is, these systems become as sensitive as production. This type of public cloud may offer a low cost for infrequently accessed or low-utilization environments. Those economics can often change dramatically as usage increases or if more controls are desired.
Yet another definition utilizes traditional data center hosting providers that offer robust security, virtual private networks, high speed communications, high availability, backup/recovery and thorough controls. The difference between conventional static hosting and cloud hosting is that the resources utilized for a given customer or application instance may be hosted on virtual rather than dedicated systems, available on demand, may be activated or removed via a self-service portal and may be multi-tenant, i.e. multiple customers may be hosted on a shared cloud. While more expensive than the public cloud described above, this sort of cloud is usually more appropriate for SAP production implementations and is often less expensive than building a data center, staffing it with experts, acquiring the necessary support infrastructure, etc.
As many customers already own data centers, have large staffs of experts and host their own SAP systems today, another cloud alternative is often required: a Private Cloud. These customers often wish to reduce the cost of systems by driving higher utilization, shared use of infrastructure among various workloads, automatic load balancing, improvements in staff productivity and potentially even self-service portals for on demand systems with charge back accounting to departments based on usage.
Utilizing a combination of tools from IBM and SAP, customers can implement a private cloud and achieve as many of the above goals as desired. Let's start with SAP. SAP made its first foray into this area several years ago with its Adaptive Computing Controller (ACC). Leveraging SAP application virtualization, it allowed basis administrators to start, stop and relocate SAP instances. This gave SAP a much deeper appreciation of customer requirements, which enabled it to develop SAP NetWeaver Landscape Virtualization Management (LVM). SAP, very wisely, realized that attempting to control infrastructure resources directly would require a huge effort and continuous updates as partner technology changed, not to mention an almost unlimited number of testing and support scenarios. Instead, SAP developed a set of business workflows to allow basis admins to perform a wide array of common tasks. It also developed an API and invited partners to write interfaces to their respective cloud-enabling solutions. In this way, while governing a workflow, SAP LVM simply has to request a resource from, for example, the partner's systems or storage manager, and once that resource is delivered, continue with the rest of the workflow at the SAP application level.
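The delegation pattern described above can be sketched in miniature. This is a hypothetical illustration only; the interface and names below are invented for the sketch and are not SAP's LVM API, but they show the shape of the design: the workflow engine owns the SAP-level steps and delegates resource provisioning through a narrow partner interface.

```python
# Illustrative sketch (invented names) of a workflow engine delegating
# resource provisioning to a partner-supplied manager, then continuing
# with application-level steps, as the LVM/partner-API design does.
from typing import Protocol

class ResourceManager(Protocol):
    """The narrow interface a partner (e.g. a systems manager) implements."""
    def provision_host(self, cpus: int, mem_gb: int) -> str: ...

class DemoManager:
    """Stand-in for a partner systems manager."""
    def provision_host(self, cpus: int, mem_gb: int) -> str:
        return f"host-{cpus}cpu-{mem_gb}gb"

def deploy_instance(mgr: ResourceManager, sid: str) -> list:
    steps = []
    # Resource acquisition is delegated; the workflow does not care how
    # the partner delivers the host, only that it is delivered.
    host = mgr.provision_host(cpus=4, mem_gb=64)
    steps.append(f"provisioned {host}")
    # SAP-level steps continue once the resource is available.
    steps.append(f"deployed image for {sid} on {host}")
    steps.append(f"started instance {sid}")
    return steps

for line in deploy_instance(DemoManager(), "PRD"):
    print(line)
```

The design choice this illustrates is why SAP avoided an unlimited support matrix: only the thin interface must be certified per partner, while the workflows above it remain unchanged.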
IBM was an early partner with SAP ACC and has continued that partnership with SAP LVM. By integrating storage management, the solution and enablement in the IBM Power Systems environment is particularly thorough and is probably the most complete of its kind on the market. IBM offers two types of systems managers, IBM Systems Director (SD) and IBM Flex Systems Manager (FSM). SD is appropriate for rack-based systems, including conventional Power Systems, in addition to IBM's complete portfolio of systems and storage. As part of that solution, customers can manage physical and virtual resources, maintain operating systems, consolidate error management, control high availability and even optimize data center energy utilization. FSM is a manager specifically for IBM's new PureSystems family of products, including several Power Systems nodes. FSM is focused on the management of the components delivered as part of a PureSystems environment, whereas SD is focused on the entire data center including PureSystems, storage and rack-based systems. Otherwise, the functions, in an LVM context, are largely the same. FSM may be used with SD in a data center, either side by side or with FSM feeding certain types of information up to SD. IBM also offers a storage management solution called Tivoli Storage FlashCopy Manager (FCM). This solution drives the non-disruptive copying of filesystems on appropriate storage subsystems such as IBM's XIV, as well as virtually any IBM or non-IBM storage subsystem through the IBM SAN Volume Controller (SVC) or V7000 (basically an SVC packaged with its own HDD and SSD).
Using the above, SAP LVM can capture OS images including SAP software; find resources on which to create new instances; rapidly deploy images; move them around as desired to load-balance systems or when preventative maintenance is required; monitor SAP instances; provide advanced dashboards and a variety of reports; and make SAP system/DB copies, clones or refreshes, including the SAP-relevant post-copy automation tasks.
What makes the IBM Power Systems implementation unique is the integration between all of the pieces of the solution. Using LVM with Power Systems and either SD, FSM or both, a basis admin can see and control both physical and virtual resources, as PowerVM is built in and is part of every Power System automatically. This means that when, for instance, a physical node is added to an environment, SD and FSM can see it immediately, meaning that LVM can also see it and start using it. In the x86 world, there are two supported configurations for LVM, native and virtualized. Clearly, a native installation is limited by its very definition, as all of the attributes of resource sharing, movement and some management features that come with virtualization are not present in a native installation.
According to SAP Note 1527538 – SAP NetWeaver Landscape Virtualization Management 1.0, currently only VMware is supported for virtualized x86 environments. VMware/x86-based LVM implementations rely on VMware vCenter, meaning they can control only virtual resources. Depending on the customer implementation, systems admins may have to use a centralized systems management tool for installation, network, configuration and problem management, i.e. the physical world, and vCenter for the virtual world. This contrasts with SD or FSM, which can manage the entire Power Systems physical and virtual environment plus all of the associated network and chassis management, where appropriate.
LVM with Power Systems and FCM can drive full database copy/clone/refresh activity through disk subsystems. Disk subsystems such as IBM XIV can make copies very fast in a variety of ways. Some make pointer-based copies, which means that only changed blocks are duplicated and a “copy” is made available almost immediately for further processing by LVM. In some situations and/or with some disk subsystems, a full copy process, in which every block is duplicated, might be utilized, but this happens at the disk subsystem or SAN level without involving a host system, so it is not only reasonably fast but also does not consume host system resources. In fact, a host system in this configuration does not even need to stop processing, but merely places the source DB into “logging only” mode and resumes normal operating mode a short time later, after the copy is initiated.
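The pointer-based copy mechanism described above can be illustrated with a toy model. This is a sketch of the general copy-on-write idea only, not XIV's or FCM's actual implementation; the class and block names are invented for the example.

```python
# Toy model of a pointer-based ("changed block only") copy: the clone
# shares every block with its source and stores only blocks written
# after the copy was taken.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block_id -> data

class PointerCopy:
    """A 'copy' available immediately; it duplicates nothing up front."""
    def __init__(self, source):
        self.source = source
        self.overrides = {}             # only changed blocks live here

    def read(self, block_id):
        # Fall through to the source for any block not yet changed.
        return self.overrides.get(block_id, self.source.blocks[block_id])

    def write(self, block_id, data):
        self.overrides[block_id] = data  # duplicate only on change

src = Volume({0: "sap-data", 1: "more-data"})
clone = PointerCopy(src)                 # usable immediately
clone.write(1, "refreshed")              # one block diverges
print(clone.read(0), clone.read(1), len(clone.overrides))
```

The point of the model: the clone is usable the instant it is created, and its storage cost grows only with the blocks that actually change, which is why such a copy can be handed to LVM almost immediately.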
LVM with x86 offers two options. Option 1: utilize VMware and its storage copy service. Option 2: utilize LVM natively or with VMware and use a separate plugin from a storage subsystem vendor. Option 2 works much the same as the Power/FCM solution described above, except that only certain vendors are supported, and any integration of plugins from different companies, not to mention any troubleshooting, is a customer task. It might be worthwhile to consider the number of companies that might be required to solve a problem in this environment, e.g. SAP, VMware, the storage subsystem vendor, the OS vendor and the systems vendor.
For Option 1, VMware drives copies via vCenter using a host-only process. According to the above-mentioned SAP Note, “virtualization based cloning is only supported with Offline Database.” This might be considered a bit disruptive by some and impossible to accommodate by others. Even though it might be theoretically possible to use a VMware snapshot, a SID rename process must be employed for a clone, and every table must be read in and then out again with changes to the SID. (That said, for some other LVM activities not involving a full clone, a VMware snapshot might be used.) As a result, VMware snapshots quickly take on the appearance of a full copy, so they may not be the best technology to use, both because of the overhead on the system and because VMware itself does not recommend keeping database snapshots around for more than a few days at most; the clone process therefore typically uses the full copy option. When a full copy is initiated by VMware, every block must be read into VMware and then back out. Not only is this process slow for large databases, but it places a large load on the source system, potentially resulting in poor performance for other partitions during this time. Since a full copy is utilized, a VMware-based copy/clone will also consume radically more disk storage than a Power/XIV-based clone, which is fully supported with a “changed block only” copy.
Of course, the whole discussion of using LVM with vCenter may be moot. After all, the assumption is that one would be utilizing VMware for database systems. Many customers choose not to do this for a variety of reasons, from multiple single points of failure, to scaling, to database vendor support, to potential issues in problem resolution due to the use of a multi-layer, multi-vendor stack, e.g. hardware from one vendor, proprietary firmware from another, the processor chip from yet another, virtualization software from VMware, and the OS from Microsoft, SUSE, Red Hat or Oracle, not to mention high availability and other potential issues. Clearly, this would not be an issue if one eliminated database systems from the environment, but that is where some of the biggest benefits of LVM are realized.
LVM, as sophisticated as it is currently, does not address all of the requirements that some customers might have for a private cloud. The good news is that it doesn't have to. IBM supplies a full range of cloud-enabling products under the brand name of IBM Smart Cloud. These tools range from an “Entry” product suitable for adding a simple self-service portal, some additional automation and some accounting features, to a full-featured “Enterprise” version. Those tools call SD or FSM functions to manage the environment, which is quite fortunate, as any changes made by those tools are immediately visible to LVM, thereby completing the circle.
SAP and IBM collaborated to produce a wonderful and in-depth document that details the IBM/Power solution: https://scn.sap.com/docs/DOC-24822
A blogger at SAP has also written extensively on the topic of Cloud for SAP. You can see his blog at: http://blogs.sap.com/cloud/2012/07/24/sap-industry-analyst-base-camp-a-recap-of-the-sap-cloud-strategy-session/