HANA on Power hits the Trifecta!
Actually, a trifecta would imply only 3 big wins at the same time, and HANA on Power Systems just hit 4 such big wins.
Win 1 – HANA 2.0 was announced by SAP with availability on Power Systems at the same time as on Intel-based systems.[i] Previous announcements by SAP had indicated that Power was now on an even footing with Intel for HANA from an application support perspective; however, until this announcement, some customers may have still been unconvinced. I noticed this on occasion when presenting to customers: I would make such an assertion and see a little disbelief on some faces. This announcement leaves no doubt.
Win 2 – HANA 2.0 is only available on Power Systems with SUSE SLES 12 SP1 in Little Endian (LE) mode. Why, you might ask, is this a “win”? Because true database portability is now a reality. In LE mode, it is possible to pick up a HANA database built on Intel, make no modifications at all, and drop it on a Power box. This removes a major barrier for customers that might have considered a move but were unwilling to deal with the hassle, time requirements, effort and cost of an export/import. Of course, the destination will be HANA 2.0, so an upgrade from HANA 1.0 to 2.0 on the source system will be required prior to a move to Power, among various other migration options. This subject will likely be covered in a separate blog post at a later date. This also means that customers that want to test how HANA will perform on Power compared to an incumbent x86 system will have a far easier time running such a PoC.
Win 3 – Support for BW on the IBM E850C @ 50GB/core, allowing this system to now support 2.4TB.[ii] The previous limit was 32GB/core, meaning a maximum size of 1.5TB. This is a huge 56% improvement, which means that this already very competitive platform has become even stronger.
Win 4 – Saving the best for last, SAP announced support for Suite on HANA (SoH) and S/4HANA of up to 16TB with 144 cores on IBM Power E880 and E880C systems.[ii] Several very large customers were already pushing the previous 9TB boundary and/or had run the SAP sizing tools and realized that more than 9TB would be required to move to HANA. This announcement now puts IBM Power Systems on an even footing with HPE Superdome X. Only the lame duck SGI UV 300H has support for a larger single image size @ 20TB, but not by much. Also notice that to get to 16TB, only 144 cores are required for Power, which means that there are still 48 cores unused in a potential 192-core system, i.e. room for growth to a future limit once appropriate KPIs are met. Consider that the HPE Superdome X requires all 16 sockets to hit 16TB … which makes you wonder how they will achieve a higher size prior to a new chip from Intel.
Win 5 – Oops, did I say there were only 4 major wins? My bad! It turns out there is a hidden win in the prior announcement, easily overlooked. Prior to this new, higher memory support, a maximum of 96GB/core was allowed for SoH and S/4HANA workloads. If one divides 16TB by 144 cores, the new ratio works out to 113.8GB/core, an 18.5% increase. Let’s do the same for HPE Superdome X: 16 sockets times 24 cores/socket = 384 cores, and 16TB / 384 cores = 42.7GB/core. This implies that a POWER8 core can handle 2.7 times the workload of an Intel core for this type of workload. Back in July, I published a two-part blog post on scaling up large transactional workloads.[iii] In that post, I noted that transactional workloads access data primarily in rows, not in columns, meaning they traverse columns that are typically spread across many cores and sockets. Clearly, being able to handle more memory per core and per socket means that less traversing is necessary, resulting in a high probability of significantly better performance with HANA on Power compared to competing platforms, especially when one takes into consideration their radically higher ccNUMA latencies and dramatically lower ccNUMA bandwidth.
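For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python; the figures are taken straight from the numbers quoted above, nothing else is assumed:

```python
# Back-of-the-envelope check of the memory-per-core arithmetic above.
TB = 1024  # GB per TB

power_cores = 144
power_gb_per_core = 16 * TB / power_cores            # ~113.8 GB/core

superdome_cores = 16 * 24                             # 16 sockets x 24 cores/socket = 384
superdome_gb_per_core = 16 * TB / superdome_cores     # ~42.7 GB/core

print(f"POWER8:      {power_gb_per_core:.1f} GB/core")
print(f"Superdome X: {superdome_gb_per_core:.1f} GB/core")
print(f"Ratio:       {power_gb_per_core / superdome_gb_per_core:.1f}x")             # ~2.7x
print(f"Increase over the old 96 GB/core limit: {power_gb_per_core / 96 - 1:.1%}")  # ~18.5%
```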
Taken together, these announcements have catapulted HANA on IBM Power Systems from being an outstanding option for most customers, albeit with a few annoying restrictions and limits, especially for larger customers, to being a best-of-breed option for all customers, even those pushing much higher limits than the typical customer does.
[i] https://launchpad.support.sap.com/#/notes/2235581
[ii] https://launchpad.support.sap.com/#/notes/2188482
[iii] https://saponpower.wordpress.com/2016/07/01/large-scale-up-transactional-hana-systems-part-1/
Large scale-up transactional HANA systems – part 2
Part 1 of this subject detailed the challenges of sizing large scale-up transactional HANA environments. This part dives into the details and methodology by which customers may select a vendor in the absence of an independent transactional HANA benchmark.
Past history with large transactional workloads
Before I start down this path, it would first be useful to understand why it is relevant. HANA transaction processing utilizes many of the same techniques as a conventional database. It accesses rows; although each column is physically separate, the transaction does not know this and gathers all of the data together in one place prior to presenting the results to the dialog calling it. Likewise, a write must follow ACID properties, including the rule that only one update against a piece of data can occur at any time, which requires cache coherency mechanisms to ensure this. And a write to a log must occur in addition to the write to the memory location of the data being changed or updated. This sounds an awful lot like a conventional DB, which is why past history handling these sorts of transactional workloads makes plenty of sense as a proof point.
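To make that parallel concrete, here is a minimal, purely illustrative Python sketch of the generic write pattern just described: a single-writer guarantee plus a log entry alongside the in-memory change before a commit is acknowledged. This is not HANA's (or any vendor's) actual implementation, just the shared pattern.

```python
import threading

class TinyTransactionalStore:
    """Illustrative only: one-writer-at-a-time updates plus a commit log,
    mimicking the generic pattern described above (not HANA internals)."""

    def __init__(self):
        self._data = {}                # in-memory "table"
        self._log = []                 # stand-in for the persistent redo log
        self._lock = threading.Lock()  # stand-in for coherent, exclusive update access

    def update(self, key, value):
        with self._lock:               # only one update to a piece of data at a time
            self._log.append(("UPDATE", key, value))  # log write accompanies the change
            self._data[key] = value                   # change the in-memory location
        return True                    # commit acknowledged only after both writes

store = TinyTransactionalStore()
store.update("account:42", 1000)
print(store._data, store._log)
```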
HPE has a long history with large-scale transactional workloads and Superdome systems, but this was primarily based on Integrity Superdome systems using Itanium processors and HP-UX, not on Intel x86 systems and Linux. Among the Fortune 100, approximately 20 customers utilized HPE’s systems for their SAP database workloads, almost entirely based on Oracle with HP-UX. Not bad, and coming in second place to IBM Power Systems with approximately 40 of the Fortune 100 customers that use SAP. SGI has exactly 0 of those customers. Intel x86 systems represent 8 of that customer set, with 2 being on Exadata, which is not even close to a standard x86 implementation with its Oracle RAC and highly proprietary storage environment. Three of the remaining x86 systems are utilized by vendors whose very existence is dependent on x86, so running on anything else would be contradictory to their mission, and these customers must make this solution work no matter what the expense and complexity might be. That leaves 3 customers, none of which utilize Superdome X technology for their database systems. To summarize, IBM Power has a robust set of high-end current SAP transactional customers; HPE a smaller set, entirely based on a different chip and OS than is offered with Superdome X; SGI has no experience in this space whatsoever; and x86 in general has limited experience confined to designs that have nothing in common with today’s high-end x86 technology.
Industry Standard Benchmarks
A bit of background. Benchmarks are lab experiments, open to optimization and exploitation by experts in the area, and have little resemblance to reality. Unfortunately, they are the only third-party metric by which systems can be compared. Benchmarks fall into two general categories: those that are horrible and those that are not horrible (note I did not say good). Horrible ones sometimes test nothing but the speed of CPUs by placing the entire running code in instruction cache and the entire read-only dataset upon which the code executes in data cache, meaning no network or disk I/O, much less any memory access or cache coherency traffic. SPEC benchmarks such as SPECint2006 and SPECint_rate2006 fall into this category. They are uniquely suited for ccNUMA systems as there is absolutely no communication between any sockets, meaning this represents the best-case scenario for a ccNUMA system.
It is therefore revealing that SGI, with 32 sockets and 288 cores, was only able to achieve 11,400 on this ideal ccNUMA benchmark, slightly beating HP Superdome X’s result of 11,100, also with 288 cores. By comparison, the IBM Power Systems E880 with only 192 cores, i.e. 2/3 of the cores, achieved 14,400, i.e. 26% better performance.
In descending order from horrible to not as bad, there are other benchmarks which can be used to compare systems. The list includes SAP SD 2-tier, SAP BW-EML, TPC-C and SAP SD 3-tier. Of those, the SD 2-tier has the most participation among vendors and includes real SAP code and a real database, but suffers from the database being a tiny percentage of the workload, approximately 6 to 8%, meaning that on ccNUMA systems multiple app servers can be placed on each system board, with only database communication going across the pretty darned fast network represented by the ccNUMA fabric. SGI is a no-show on this benchmark. HPE did show with Superdome X @ 288 cores and achieved 545,780 SAPS (100,000 users, Ref# 2016002), which still stands as the world record. IBM Power showed up with the E870, an 80-core system (28% of the number of cores of the HPE system), and achieved 436,100 SAPS (79,750 users, Ref# 2014034), i.e. 80% of the SAPS of the HPE system. Imagine what IBM would have been able to achieve with this almost linearly scalable benchmark had they attempted to run it on the E880 with 192 cores (probably close to 436,100 * 192/80, although no vendor is allowed to publish the “results” of any extrapolations of SAP benchmarks, but no one can stop a customer from inputting those numbers into a calculator).
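For what it's worth, the arithmetic in that parenthetical looks like this; it is purely illustrative reader math, not a published or certified SAP benchmark result:

```python
# Illustrative arithmetic only; SAP rules do not allow vendors to publish extrapolated
# benchmark results, so treat this as the reader's own calculator exercise.

hpe_saps, hpe_cores = 545_780, 288     # Superdome X, Ref# 2016002
ibm_saps, ibm_cores = 436_100, 80      # Power E870, Ref# 2014034

print(f"HPE SAPS/core: {hpe_saps / hpe_cores:,.0f}")
print(f"IBM SAPS/core: {ibm_saps / ibm_cores:,.0f}")

# Near-linear scaling assumption from the 80-core E870 to a 192-core E880:
print(f"Extrapolated E880 (192 cores): {ibm_saps * 192 / 80:,.0f} SAPS")
```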
BW-EML was SAP’s first benchmark designed for HANA, although not restricted to it. As the name implies, it is a BW benchmark, so it is difficult to derive any correlation to transaction processing, but at least it does show some aspect of performance with HANA, analytics if nothing else, and concurrent analytics is one of the core value propositions of HANA. HPE was a frequent contributor to this benchmark, but always with something other than Superdome X. It is important to note that Superdome X is the only Intel-based system to utilize RAS mode, or Intel Lockstep, by default rather than as an option. That mode has a memory throughput impact of 40% to 60% based on published numbers from a variety of vendors, but, to date, no published benchmarks of any sort have been run in this mode. As a result, it is impossible to predict how well Superdome X might perform on this benchmark. Still, kudos to HPE for their past participation. Much better than SGI, which is, once again, a no-show on this benchmark. IBM Power Systems, as you might predict, still holds the record for best performance on this benchmark with the 40-core E870 system @ 2 billion rows.
TPC-C was a transaction processing benchmark that, at least for some time period, had good participation, including from HP Superdome. That is, until IBM embarrassed HPE by delivering 50% more performance with half the number of cores. After this, HPE never published another result on Superdome … and that was back in the 2007/2008 time frame. TPC-C was certainly not a perfect benchmark, but it did have real transactions with real updates, and about 10% of the benchmark involved remote accesses. Still, SGI was a no-show, and HPE stopped publishing on this class of system in 2007, while IBM continued publishing through 2010 until there was no one left to challenge their results. A benchmark is only interesting when multiple vendors are vying for the top spot.
Last, but certainly not least, is the SAP SD 3-tier benchmark. In this one, the database was kept on a totally separate server and there was almost no way to optimize it to remove any ccNUMA effects. Only IBM had the guts to participate in this benchmark at a large scale with a 64-core POWER7+ system (the previous generation to POWER8). There was no submission from HPE that came even remotely close and, once again, SGI was MIA.
Architecture
Where IBM Power Systems utilizes a “glueless” interconnect up to 16 sockets, meaning all processor chips connect to each other directly, without the use of specialized hub chips or switches, Intel systems beyond 8 sockets utilize a “glued” architecture. Currently, only HPE and SGI offer solutions beyond 8 sockets. HPE is using a very old architecture in the Superdome X, first deployed for PA-RISC (remember those?) in the Superdome introduced in 2000. Back then, they were using a cell controller (a.k.a. hub chip) on each system board. When they introduced the Itanium processor in 2002, they replaced this hub chip with a new one called the SX1000, basically an ASIC that connected the various components on the system board together and to the central switch by which it communicates with other system boards. Since 2002, HPE has moved through three generations of ASICs and is now using the SX3000, which features considerably faster speeds, better reliability, some ccNUMA enhancements and connectivity to multiple interconnect switches. Yes, you read that correctly; where Intel has delivered a new generation of x86 chips just about every year over the last 14 years, HPE has delivered 3 generations of hub chips. Pace of innovation is clearly tied to volume, and Superdome has never achieved sufficient volume on its own, nor use by other vendors, to increase the speed of innovation. This means that while HPE may have delivered a major step forward at a particular point in time, it suffers from a long lag and diminishing returns as time and Intel chip generations progress. The important thing to understand is that every remote access, from either of the two Intel EX chips on each system board, to cache, memory or I/O connected to another system board, must pass through 8 hops at a minimum, i.e. from calling socket, to SX3000, to central switch, to remote SX3000, to remote socket, and the same trip in return, and that assumes the data was resident in an on-board cache.
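Counting the hops in that minimum path explicitly (this simply restates the sequence described above; it is not a latency model):

```python
# The minimum round-trip path described above, counted explicitly.
one_way = ["calling socket", "local SX3000", "central switch", "remote SX3000", "remote socket"]
hops_one_way = len(one_way) - 1        # 4 hops out
round_trip_hops = 2 * hops_one_way     # and 4 hops back = 8 hops minimum
print(" -> ".join(one_way))
print(f"Minimum round-trip hops: {round_trip_hops}")
```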
SGI, the other player in the beyond-8-socket space, is using a totally different approach, derived from their experience in the HPC space. They are also using a hub chip, called a HARP ASIC, but rather than connecting through one or more central switches, in the up-to-32-socket UV 300H system each system board, featuring 4 Intel EX chips and a proprietary ASIC per memory riser, includes two hub chips which are linked directly to each of the other hub chips in the system. This mesh is hand wired, with a separate physical cable for every single connection. Again, you read that correctly, hand wired. This means that not only are physical connections made for every hub-chip-to-hub-chip link, with the inherent potential for an insertion or contact problem on each end of that wire, but as implementation size increases, say from 8 sockets/2 boards to 16 sockets/4 boards or to 32 sockets/8 boards, the number of physical, hand-wired connections grows roughly with the square of the number of hub chips. OK, assuming that does not make you just a little bit apprehensive, consider this: where HPE uses a memory protection technology called Double Device Data Correction + 1 (DDDC+1) in their Superdome X system, basically the ability to handle not just a single memory chip failure but at least 2 (not at the same time), SGI utilizes SDDC, i.e. Single Device Data Correction. This means that after detection of the first failure, customers must rapidly decide whether to shut down the system and replace the failing memory component (assuming it has been accurately identified), or hope their software-based page deallocation technology works fast enough to avert a catastrophic system failure due to a subsequent memory failure. Even with that software, if a memory fault occurs in a different page, the SGI system would still be exposed. My personal opinion is that memory protection is so important in any system, but especially in large scale-up HANA systems, that anything short of true enterprise memory protection of at least DDDC is doing nothing other than increasing customer risk.
Summary
SGI is asking customers to accept their assertion that SAP’s certification of the SGI UV 300H at 20TB implies it can scale better than any other platform and perform well at that level, but they are providing no evidence in support of that claim. SAP does not publish the criteria with which it certifies a solution, so it is possible that SGI has been able to “prove” addressability at 20TB, the ability to initialize a HANA system, and maybe even the ability to handle a moderate number of transactions. Lacking any sort of independent, auditable proof via a benchmark, any reasonable body of customers (even one would be nice) driving high transaction volumes with HANA or a conventional database, or anything other than a 4-bit wide, hand-wired ccNUMA nest that would seem prone to low throughput and high error rates, especially with substandard memory protection, it is hard to imagine why anyone would find this solution appealing.
HPE, by comparison, does have some history with transactional systems at high transaction volumes, but with a completely different CPU, OS and memory architecture, and nothing with Superdome X. HPE has a few benchmark results, however poor, once again on systems from long ago, plus mediocre results with the current generation and an architecture that requires a minimum of 8 hops round trip for every remote access. On the positive side, at least HPE gets it regarding proper memory protection, but does not address how much performance degradation results from this protection. Once again, SAP’s certification at 16TB for Superdome X must be taken with the same grain of salt as SGI’s.
IBM Power Systems has an outstanding history with transactional systems at very high transaction volumes using current-generation POWER8 systems. Power also dominates the benchmark space and continued to deliver better and better results until no competitor dared risk the fight. Lastly, POWER8 is the latest generation of a chip designed from the ground up with ccNUMA optimization in mind and with reliability as its cornerstone, i.e. the results already include any overhead necessary to support this level of RAS. Yes, POWER8 is only supported at 9TB today for SAP SoH and S/4HANA, but lest we forget, it is the new competitor in the HANA market, and the other guys only achieved their higher supported numbers after extensive customer and internal benchmark testing, both of which are underway with Power.
Large scale-up transactional HANA systems – part 1
Customers that require Suite on HANA (SoH) and S/4HANA systems with 6TB of memory or less will find a wide variety of available options. Those options do not require any specialized type of hardware, just systems that can scale up to 8 sockets with Intel-based systems and up to 64 cores with IBM Power Systems (socket count depends on the number of active cores per socket, which varies by system). If you require 6TB or less, or can’t imagine ever needing more, then sizing is a fairly easy process, i.e. look at the sizing matrix from SAP and select a system which meets your needs. If you need to plan for more than 6TB, this is where it gets a bit more challenging. The list of options narrows to 5 vendors between 6TB and 8TB (IBM, Fujitsu, HPE, SGI and Lenovo) and gets progressively smaller beyond that.
All systems with more than one socket today are ccNUMA, i.e. remote cache, memory and I/O accesses are delivered with more latency and lower bandwidth than accesses local to the processor. HANA is highly optimized for analytics, which most of you probably already know. The way it is optimized may not be as obvious. Most tables in HANA are columnar, i.e. every column in a table is kept in its own structure with its own dictionary, and the elements of the column are replaced with very short dictionary pointers, resulting in outstanding compression in most cases. Each column is placed in as few memory pages as possible, which means that queries that scan through a column can run at crazy fast speeds, as all of the data in the column is kept as “close” together as possible. This columnar structure is beautifully suited for analytics on ccNUMA systems, since different columns will typically be placed behind different sockets, which means that only queries that cross columns, and joins, will have to access columns that may not be local to a socket, and, even then, usually only the results have to be sent across the ccNUMA fabric. There was a key word in the previous sentence that might have easily been missed: “analytics”. Where analytical queries scan down columns, transactional queries typically go across rows in which, due to the structure of a columnar database, every element is located in a different column, potentially spanning the entire ccNUMA system. As a result, minimized latency and high cross-system bandwidth may be more important than ever.
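To illustrate the difference between scanning a column and reconstructing a row in a dictionary-encoded column store, here is a deliberately simplified Python sketch; the structures and names are mine and bear no resemblance to HANA's actual storage engine:

```python
# Minimal sketch of a dictionary-encoded columnar table (illustrative only).
# Each column keeps its own dictionary and stores one short dictionary pointer per row.

class Column:
    def __init__(self):
        self.dictionary = []   # distinct values for this column
        self.codes = []        # compressed column: one small integer per row

    def append(self, value):
        if value not in self.dictionary:
            self.dictionary.append(value)
        self.codes.append(self.dictionary.index(value))

    def value_at(self, row):
        return self.dictionary[self.codes[row]]

# A three-column table; on a ccNUMA box each column could sit behind a different socket.
table = {"customer": Column(), "region": Column(), "amount": Column()}
for customer, region, amount in [("A001", "TX", 120), ("A002", "TX", 95), ("A003", "CA", 120)]:
    table["customer"].append(customer)
    table["region"].append(region)
    table["amount"].append(amount)

# Analytic query: scans one column end to end, so all accesses stay within that column.
total = sum(table["amount"].value_at(i) for i in range(3))

# Transactional access: rebuilding a single row touches every column (and its dictionary),
# so each lookup may cross the ccNUMA fabric if the columns live behind different sockets.
row_1 = {name: col.value_at(1) for name, col in table.items()}
print(total, row_1)
```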
Let me stop here and give an example so that I don’t lose the readers that aren’t system nerds like myself. I will use a utility company as an example, as everyone is a utility customer. For analytics, an executive might want to know the average usage of electricity on a given day at a given time, meaning the query is composed of three elements, all contained in one table: usage, date and time. Unless these columns are enormous, i.e. over 2 billion rows, they are very likely stored behind a single socket with no remote accesses required. Now, take that same company’s customer care center, where a utility consumer wants to turn on service, report an outage or find out what their last few months or years of bills have been. In this case, all sorts of information is required to populate the appropriate screens: first name, last name, street address, city, state, meter number, account number, usage, billed amount and on and on. Scans of columns are not required and a simple index lookup suffices, but every element is located in a different column, each of which has to be resolved by an independent dictionary lookup/replacement of the compressed elements, meaning several or several dozen communications across the system, as the columns are most likely distributed across it. While an individual remote access may take longer, almost 5x in a worst-case scenario[i], we are still talking nanoseconds here, and even 100 of those still results in a “delay” of only 50 microseconds. I know, what are you going to do while you are waiting! Of course, a utility company is more likely to have hundreds, thousands or tens of thousands of transactions in flight at any given point in time, and there is the problem. An increased latency of 5x for every remote access may severely diminish the scalability of the system.
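Putting those numbers into code (the ~500ns per remote access is an assumption backed out of the "100 accesses, 50 microseconds" figure above, not a measured value):

```python
# Illustrative latency arithmetic for the example above. The ~500 ns remote-access figure
# is an assumption derived from "100 remote accesses -> ~50 microseconds"; real latencies
# vary by system and are not quoted here.

remote_access_ns = 500                    # assumed worst-case remote access
local_access_ns = remote_access_ns / 5    # remote is "almost 5x" a local access

accesses = 100                            # remote accesses for one transactional lookup
print(f"All local:  {accesses * local_access_ns / 1000:.0f} microseconds")
print(f"All remote: {accesses * remote_access_ns / 1000:.0f} microseconds")
```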
Does this mean that it is not possible to scale up a HANA transactional environment? Not at all, but it does take more than being able to physically place a lot of memory in a system to be able to utilize it in a productive manner with good scalability. How can you evaluate vendor claims, then? Unfortunately, the old tried and true SAP SD benchmark has not been made available to run in HANA environments. Lacking that, you could flip a coin, believe vendor claims without proof or demand proof. Clearly, demanding proof is the most reasonable approach, but what proof? There are three types of proof to look at: past history with large transactional workloads, industry standard benchmarks and architecture.
In the over-8TB HANA space, there are three competitors: HPE Superdome X, SGI UV 300H and IBM Power Systems E870/E880. I will address those systems and these three proof points in part 2 of this blog post.