Why SAP HANA on IBM Power Systems
This entry has been superseded by a new one: https://saponpower.wordpress.com/2014/06/06/there-is-hop-for-hana-hana-on-power-te-program-begins/
Before you get the wrong impression, SAP has not announced the availability of HANA on Power, and you should in no way interpret this posting as any sort of pre-announcement. This is purely a discussion of why you should care whether SAP decides to support HANA on Power.
As you may be aware, during SAP’s announcement of the availability of HANA as the Business Suite DB for ramp-up customers in early January, Vishal Sikka, Chief Technology Officer and member of the Executive Board at SAP, stated “We have been working heavily with [IBM]. All the way from lifecycle management and servers to services even Cognos, Business Intelligence on top of HANA – and also evaluating the work that we have been doing on POWER. As to see how far we can be go with POWER – the work that we have been doing jointly at HPI. This is a true example of open co-innovation, that we have been working on.” Ken Tsai, VP of SAP HANA product marketing, later added in an interview with IT Jungle, “Power is something that we’re looking at very closely now.” http://www.itjungle.com/fhs/fhs011513-story01.html. And from Amit Sinha, head of database and technology product marketing: “[HANA] on Power is a research project currently sponsored at Hasso Plattner Institute. We await results from that to take next meaningful steps jointly with IBM.” Clearly, something significant is going on. So, why should you care?
Very simply, the reasons customers chose Power Systems (and perhaps HP Integrity and Oracle/Fujitsu SPARC/Solaris) for SAP DBs in the past, i.e. scalability, reliability and security, are just as relevant with HANA as they were with conventional databases, perhaps even more so. Why more so? Because once the promise of real-time analytics on an operational database is realized, perhaps not in version 1.0 of the product but undoubtedly in the future, then any outage of that system, or any slowdown below the speed real-time analytics demands, would translate directly into lost business value.
A little known fact is that HANA as the Business Suite DB is currently limited to a single node. This means that the scale-out options common in the BW HANA space and elsewhere are not available for this implementation of the product. Until they become available, customers that wish to host large databases may require a larger number of cores than x86 vendors currently offer.
A second known but often overlooked fact is that parallel transactional database systems for SAP are complex, expensive and carry so many limitations that only two types of customers consider this option: those which need continuous or near-continuous availability, and those that want to move away from a robust UNIX solution and realize that, to attain the same level of uptime as a single-node UNIX system with conventional HA, an Oracle RAC or DB2 PureScale cluster is required. Why is it so complex? Without getting into too much detail, we need to look at the way SAP applications work and interact with the database. As most are aware, when a user logs on to SAP, they are connected to a unique application server and remain connected to that server until they log off. Each application server is, in turn, connected to one node of a parallel DB cluster. Each request to read or write data is sent to that node and, if the data is local, i.e. in the memory of that node, the processing occurs very rapidly. If, on the other hand, the data resides on another node, it must be moved from the remote node to the local node. Oracle RAC and DB2 PureScale take two different approaches: Oracle RAC uses Cache Fusion to move the data across an IP network, while DB2 PureScale uses Remote DMA to move the data across the network without traversing an IP stack, thereby improving speed and reducing overhead. Though there may be benefits of one over the other, this posting is not intended to debate that point, but instead to point out that even with the fastest, lowest-overhead transfer on an InfiniBand network, access to remote memory is still thousands of times slower than access to local memory.
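A quick back-of-envelope model shows why even a modest remote hit rate dominates average access time. The latency figures below are illustrative assumptions for this sketch, not measurements of Oracle RAC or DB2 PureScale:

```python
# Illustrative model of local vs. remote data access in a parallel DB
# cluster. LOCAL_NS and REMOTE_NS are assumed order-of-magnitude values,
# not vendor benchmarks.

LOCAL_NS = 100      # assumed local memory access, nanoseconds
REMOTE_NS = 2_000   # assumed RDMA fetch from a remote node, nanoseconds

def avg_access_ns(remote_fraction):
    """Average latency when some fraction of lookups hit a remote node."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

# Even with only 10% of requests going remote, average latency nearly
# triples versus a purely local (single-node) access pattern.
for frac in (0.0, 0.1, 0.5):
    print(f"{frac:.0%} remote -> {avg_access_ns(frac):,.0f} ns average")
```

With real RDMA latencies in the microsecond range versus ~100 ns for local DRAM, the ratio only gets worse, which is the core of the argument above.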
Some applications are “cluster aware”, i.e. their application servers connect to multiple DB nodes at the same time and direct traffic based on data locality, which is only possible if the DB and app servers work cooperatively to communicate which data is located where. SAP Business Suite is not currently cluster aware, meaning that without a major change in the NetWeaver stack, replacing a conventional DB with a HANA DB will not result in cluster awareness, and the HANA DB for Business Suite may need to remain a single-node implementation for some time.
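The difference can be sketched in a few lines. The node names and ownership map below are hypothetical; this is only a conceptual illustration of routing, not SAP or database client code:

```python
# Conceptual sketch: fixed connection vs. "cluster aware" routing.
# OWNER is a hypothetical map of which cluster node holds which table.

OWNER = {"ORDERS": "node1", "CUSTOMERS": "node2"}  # assumed placement

def fixed_node_route(table, my_node="node1"):
    """Non-cluster-aware app server: always talks to its connected node.
    Returns (node contacted, whether a remote data transfer is needed)."""
    return my_node, OWNER[table] != my_node

def cluster_aware_route(table):
    """Cluster-aware client: sends the request straight to the owner,
    so the access is always local to that node."""
    return OWNER[table], False

print(fixed_node_route("CUSTOMERS"))     # node1 contacted, remote transfer
print(cluster_aware_route("CUSTOMERS"))  # node2 contacted, local access
```

Since the NetWeaver stack routes like `fixed_node_route`, every request for remotely held data pays the transfer cost described above.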
Reliability and security have been the subject of previous blog posts and will be reviewed in some detail in an upcoming one. Where some level of outage may be tolerable for application servers thanks to an n+1 architecture, few customers consider outages of a DB server acceptable; a parallel cluster can mitigate such outages, but, as mentioned above, only by taking on the complexity, cost and limitations of a parallel DB. Since HANA for Business Suite is a single-node implementation, at least for the time being, an outage or security intrusion would result in a complete outage of that SAP instance, perhaps more depending on the interactions and interfaces between SAP components. Power Systems has a proven track record among medium and large enterprise SAP customers of delivering the lowest rates of planned outages, unplanned outages and security vulnerabilities of any open system.
Virtualization and partition mobility may also be important factors to consider. As all Power partitions are, by definition, virtualized, it should be possible to dynamically resize a HANA DB partition, host multiple HANA DB partitions on the same system and even move those partitions between systems using Live Partition Mobility. By comparison, an x86 environment lacking VMware or similar virtualization technology could do none of the above. Though, in theory, SAP might support x86 virtualization at some point for production HANA Business Suite DBs, they do not currently, and there is a host of reasons why they should not, the same reasons why any production SAP database should not be hosted on VMware, as I discussed in my blog posting: https://saponpower.wordpress.com/2011/08/29/vsphere-5-0-compared-to-powervm/. Lacking x86 virtualization, a customer might conceivably need a DB/HA pair of physical machines for each DB instance, compared to potentially a single DB/HA pair for a Power based virtualized environment.
And now a point of pure speculation: with a conventional database, Basis administrators and DBAs weigh the cost/benefit of different levels in a storage hierarchy including main memory, flash and HDDs. Usually, main memory is sized to contain upwards of 95% of commonly accessed data, with flash used for logs and some hot data files and HDDs for everything else. For some customers, 30% to 80% of an SAP database is used so infrequently that keeping aged items in memory makes little sense and would add cost without any associated benefit. With HANA, unlike conventional DBs, there is no choice: 100% of the SAP database must reside in memory, with flash used for logs and HDDs holding a copy of the in-memory data. Not only does this mean radically larger amounts of memory, but as the DB grows, more memory must be added over time. More memory also means more DIMMs, with an associated increase in DIMM failure rates, power consumption and heat dissipation. Here Power Systems once again shines. First, IBM offers Power Systems with much larger memory capacities, and also offers Memory on Demand on Power 770 and above systems, with which customers can pay for just the memory they need today and incrementally, non-disruptively add more as they need it. That is not speculation, but the following is. Power Systems running AIX offers Active Memory Expansion (AME), a unique feature which places infrequently accessed memory pages into a compressed pool that occupies much less space than uncompressed pages. AIX then transparently moves pages between the uncompressed and compressed pools based on page activity, using a hardware accelerator on POWER7+. In theory, a HANA DB could take advantage of this in an unprecedented way. Tests with DB2 have shown a 30% to 40% expansion rate (i.e. 10GB of real memory looks like 13GB to 14GB to the application); since potentially far more of a HANA DB would have low-use patterns, it may be possible to size the memory of a HANA DB at a small fraction of the actual data size, and consequently at a much lower cost, with correspondingly lower DIMM failure rates and less power and cooling.
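The arithmetic is simple enough to make concrete. The 30%–40% rates below are the DB2 figures cited above; whether a HANA DB could match or exceed them is, as noted, speculation, and the 1 TB sizing example is purely hypothetical:

```python
# Worked example of Active Memory Expansion (AME) sizing, using the
# 30%-40% expansion rates reported for DB2 tests. The 1 TB dataset and
# the assumption that the DB2 ratio carries over to HANA are hypothetical.

def effective_memory_gb(real_gb, expansion):
    """Memory the application sees, given real memory and an expansion
    rate (0.30 means 10 GB of real memory presents as 13 GB)."""
    return real_gb * (1 + expansion)

print(effective_memory_gb(10, 0.30))  # 13.0 - matches the DB2 figure
print(effective_memory_gb(10, 0.40))  # 14.0

def real_needed_gb(effective_gb, expansion):
    """Inverse: real memory required to present a given effective size."""
    return effective_gb / (1 + expansion)

# Hypothetical: presenting a 1024 GB HANA dataset at a 40% expansion rate
# would need only ~731 GB of real memory.
print(round(real_needed_gb(1024, 0.40), 1))
```

Fewer real gigabytes means fewer DIMMs, which is where the failure-rate, power and cooling benefits in the paragraph above come from.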
If you feel that these potential benefits make sense and you would like to see a HANA on Power (HoP) option, it is important that you share this desire with SAP, as they are the only ones who can make the decision to support Power. Sharing your desire does not imply that you are ready to pull the trigger or that you won’t consider all available options, simply that you would like to be informed about SAP’s plans. In this way, SAP can gauge customer interest, and you have the opportunity to find out which of the above suggested benefits might actually be part of a HoP implementation, or even to get SAP to consider supporting one or more that you consider important. Customers interested in receiving more detailed information on the HANA on Power effort should approach their local SAP Account Executive in writing, requesting disclosure information on this platform technology effort.