SAPonPower

An ongoing discussion about SAP infrastructure

SAP HANA support for HPE nPar on Superdome Flex update

In addition to SAP’s outstanding support for virtualization technologies like PowerVM for HANA and its lukewarm support for VMware, SAP also supports other technologies that allow larger systems to be subdivided into smaller nodes. Note that I did not say virtualization, but subdivision. Physical partitioning (PPAR) is a technology invented in the 1990s that only allows components, e.g. boards or NUMA nodes, to be allocated to a workload separate from the others on the same physical system.

On October 22, 2018, SAP updated its SAP Note for HPE nPar technology.[i] With this update, SAP now supports nPars with Superdome Flex. Granularity is incredibly fine (not). As noted in the SAP note, “Via nPartitions, the following partition sizes are supported in terms of the number of sockets:

    • Skylake based architecture: ScaleUp 16s, 12s, 8s, 4s; ScaleOut 4s, 8s, 16s

Or to put it in terms of cores, each socket has 28 cores, so the granularity is 112 cores. You need only 20 cores? No problem, you get to consume 112. You need 113 cores? Also no problem, you get to consume 224 cores. But, on the positive side, these nPars are “electrically isolated”, which has 2 really important implications. First, the only way to isolate one or more Superdome Flex drawers into a separate nPar is to physically change the mesh wiring of the entire system. That means that if you decide to change the configuration of nPars, dynamic changes would be the exact opposite of what is supported. In fact, according to customer reports, HPE requires a Statement of Work service contract to come out and rewire the system and it takes multiple days … one customer reported multiple weeks. The second implication is that all resources on the node(s) in an nPar are dedicated to that nPar. In the above example, if you need 20 cores, you probably require around ½ TB of memory for BW or 1TB of memory for S/4. It is possible to configure an nPar with as little as 1.5TB of memory, which means that you might waste an entire TB if you only need ½ TB. Alternatively, if you have other workloads on other nPars that require more cores and memory and you want to keep all drawers consistent to allow for future changes, you might actually have up to 6TB per drawer, meaning much more wasted memory if you only require ½ TB for a particular workload. By the way, the only other elements that are shared when a system is broken up into physically isolated nPars are the frame(s), power supplies and the RMC – Rack Management Controller. PCIe cards cannot be shared due to the physical isolation, so by using nPars, you essentially take a very expensive system and carve it into a bunch of smaller and very expensive, isolated systems which are difficult to reconfigure. Alternatively, if you really must use HPE technology for smaller workloads, you could purchase smaller systems at much lower prices.
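
For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The 28-core sockets, 4-socket drawers and 1.5TB minimum memory per drawer come from the discussion above; everything else is purely illustrative.

    # Rough sketch of nPar granularity math (illustrative only; figures from the text above)
    CORES_PER_SOCKET = 28          # Skylake SP as used in Superdome Flex
    SOCKETS_PER_DRAWER = 4         # one drawer = one 4-socket chassis
    CORES_PER_STEP = CORES_PER_SOCKET * SOCKETS_PER_DRAWER   # 112-core granularity

    def cores_consumed(cores_needed):
        """Round the requirement up to the next whole drawer."""
        drawers = -(-cores_needed // CORES_PER_STEP)          # ceiling division
        return drawers * CORES_PER_STEP

    print(cores_consumed(20))      # 112 cores consumed for a 20-core requirement
    print(cores_consumed(113))     # 224 cores consumed for a 113-core requirement

    # Memory stranding example: a 20-core BW workload needing ~0.5TB
    MIN_DRAWER_MEMORY_TB = 1.5
    print(MIN_DRAWER_MEMORY_TB - 0.5)   # ~1TB of memory wasted in that nPar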

I have been scratching my head trying to understand why anyone would want this type of 1990s-era partitioning technology. HPE certainly does, because it results in higher profits from selling larger systems with more aggregate capacity while giving the false appearance of flexibility. For customers, on the other hand, it offers massive waste and very limited flexibility.

My advice: Don’t be a sucker and get taken in by HPE’s misdirection play. Either purchase appropriately sized systems for each workload, or purchase systems that offer real virtualization, such as IBM Power Systems, with fine-grained allocation of resources, sharing of components such as PCIe adapters, and true server consolidation; but don’t purchase one of these massive HPE systems and then eliminate any perceived value of using such a large system by cutting it up into smaller systems.

 

 

[i] 2103848 – SAP HANA on HPE nPartitions in production

 


March 25, 2019

Power Systems – Delivering best of breed scalability for SAP HANA

SAP quietly revised a SAP Note last week, but it certainly made a loud sound for some. Version 47 of https://launchpad.support.sap.com/#/notes/2188482 now says that OLTP workloads, such as Suite on HANA or S/4HANA, are supported on IBM Power Systems up to 24TB. OLAP workloads, like BW on HANA, may be implemented on IBM Power Systems with up to 16TB for a single scale-up instance. As noted in https://launchpad.support.sap.com/#/notes/2055470, scale-out BW is supported with up to 16 nodes, bringing the maximum supported BW environment to a whopping 256TB.

As impressive as those stats are, it should also be noted that SAP provided new core-to-memory (CTM) guidance along with them: the 24TB OLTP system is sized at 176 cores, which works out to about 140GB/core, up from the previous 113.7GB/core at 16TB. The 16TB OLAP system, sized at 192 cores, translates to 85.3GB/core, up from the previous 50GB/core for 4-socket and above systems.

By comparison, the maximum supported sizes for Intel Skylake systems are 6TB for OLAP and 12TB for OLTP, which correlates to 27.4GB/core OLAP and 54.9GB/core OLTP. In other words, SAP has published numbers which suggest Power Systems can handle workloads that are 2.7x (OLAP) and 2x (OLTP) the size of the maximum supported Skylake systems. On the CTM side, this works out to a maximum of 3.1x (OLAP) and 2.6x (OLTP) better performance per core for Power Systems over Skylake.
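
The ratios above are straightforward to reproduce. The sketch below assumes the 8-socket, 28-core-per-socket Skylake configuration that the published 27.4GB/core and 54.9GB/core figures imply; the Power figures come straight from the SAP note numbers quoted above.

    # Reproducing the capacity and core-to-memory (CTM) comparisons above
    GB = 1024  # GB per TB

    # IBM Power figures from SAP Note 2188482 v47, as quoted in the text
    power_oltp_gb_core = 24 * GB / 176      # ~139.6 GB/core, quoted as 140
    power_olap_gb_core = 16 * GB / 192      # ~85.3 GB/core

    # Intel Skylake maximums (assumes 8 sockets x 28 cores = 224 cores)
    skylake_cores = 8 * 28
    skylake_oltp_gb_core = 12 * GB / skylake_cores   # ~54.9 GB/core
    skylake_olap_gb_core = 6 * GB / skylake_cores    # ~27.4 GB/core

    print(24 / 12, 16 / 6)                               # 2.0x OLTP and ~2.7x OLAP capacity
    print(power_oltp_gb_core / skylake_oltp_gb_core)     # ~2.55x OLTP CTM (quoted as 2.6x)
    print(power_olap_gb_core / skylake_olap_gb_core)     # ~3.1x OLAP CTM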

Full disclosure: these numbers do not represent the highest scaling Intel systems. In order to find them, you must look at the previous generation of systems. Some may consider them obsolete, but for customers that must scale beyond 6TB/12TB (OLAP/OLTP) and are unwilling or unable to consider Power Systems, an immediate sunk investment may be their only choice. (Note to customers in this undesirable predicament: if you really want an independent, third-party verification of potential obsolescence, ask your favorite leasing companies, not associated with or owned by the vendor, what residual value they would assume after 1 year for these systems vs. what they would assume for similar Skylake systems after 1 year.)

The previous “generation” of HPE Superdome, “X”, which as discussed in my last blog post shares 0% technology with the Skylake based HPE Superdome “Flex”, was supported up to 8TB/16TB (OLAP/OLTP) with 384 cores for both, resulting in CTM of 21.3GB/42.7GB per core. The SGI derived HPE MC990 X, which is the real predecessor to the new “Flex” system, was supported up to 4TB OLAP with 192 cores and 20TB OLTP with 480 cores.

Strangely, “Flex” is only supported for HANA with 2 nodes or chassis, whereas the “MC990 X” was supported with up to 5 nodes. It has been over 4 months since “Flex” was announced, and at announcement date HPE loudly proclaimed that “Flex” could support 48TB with 8 chassis/32 sockets: https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/. Since that time, some HPE reps have been telling customers that 32TB support with HANA was imminent. One has to wonder what the hold-up is. First it took a couple of months just to get 128GB DIMM support. Now, it is taking even longer to get more than 2-node support for HANA. If I were a potential HPE customer, I would be very curious and asking my rep about these delays (and I would have my BS detector set to high sensitivity).

Customers have now been presented with a stark contrast. On one side, Power Systems has been on a roll: growing market share in HANA, regular increases in supported memory sizes, the ability to handle the largest single-image HANA memory sizes of any vendor, outstanding mainframe-derived reliability, and radically better flexibility with built-in virtualization, supporting a maximum of 8 concurrent production HANA instances, or 7 production instances plus many dozens of non-prod HANA instances, application servers, non-HANA DBs and/or a wide variety of other applications in a shared pool, all at competitive price points.

On the other hand, Intel based HANA systems seem to be stuck in a rut, with decreased maximum memory sizes (admittedly, this may be temporary), anemic increases in CTM, improved RAS that is still not in the league of Power Systems, and very questionable VMware based virtualization support filled with caveats, limitations, overhead and, at best, poor sharing of resources.

March 28, 2018

HPE Superdome is dead, but HPE marketing continues its deceptive ways.

Today, 11/6/17, HPE announced the “New” Superdome Flex. If you did not look too closely, you would think that this was some sort of descendant of Superdome. After all, the Integrity Superdome took the original Superdome and replaced the PA-RISC chips and original cell controller with Itanium chips and the faster SX1000, and later SX2000, cell controllers. Superdome 2 took this further by upgrading to the latest Itanium chips, an even faster SX3000 cell controller and a move from a cell board to a blade configuration. Superdome X changed out the Itanium chips for Intel Xeon chips, which it upgraded over several generations. So, it would be only natural to think that Superdome Flex did something similar, and that is exactly what HPE wants you to think.

Except, this is not even remotely like any prior Superdome and has inherited almost nothing from it. In fact, this is a very straightforward descendant of the SGI UV 300H, which HPE renamed the MC990 X after the acquisition. A glance at the front of the “new” system shows the same basic design, a 4-socket, 5U chassis, even down to the unique diagonal handles on the fans, but they apparently moved the NUMAlink fabric ports (no longer called that; renamed Superdome Flex ports) from the back to the front, perhaps to get rid of a little of the rat’s nest of cables which defined the SGI UV 300H. This means there is no SX3000 or crossbar switch in the Flex and the blade design is gone. Even the memory DIMMs are different, which implies that nothing could be moved from an old Superdome X to a new “Flex” other than perhaps some old PCIe adapters.

So, if the entire design is based on an SGI acquired technology and it shares nothing with its “namesake”, one would need to have skipped that ethics course in high school or college to find it appropriate to suggest to customers that this is a related technology. Imagine if Honda changed the engine, frame, transmission, trim and body style but called their new car an Accord “Flex” because it used the same bumper and tire sizes; would you not feel as if they were trying to manipulate you?

Back to the more important topic, Superdome is now dead!  I have been saying this for a while and blogged about this several months ago.  I suggested that any customer considering investing in this technology view it as instantly obsolete and a sunk investment.  I pointed out the huge investment in ccNUMA interconnect technologies and how it was hard to imagine how HPE could afford to invest in 2 different ones at the same time, so only one system was likely to survive.  I explained that the SGI technology offered more space and power to host the new, larger, higher wattage and heat dissipating Skylake processors.  It appears that my projections were correct.  For customers that ignored that advice, I just hope you got a really great price and don’t mind paying a lot for old technology for any upgrades or dumping your old systems at a huge financial loss.  For any customer still considering a Superdome X, the writing is no longer on the wall.  It is on HPE’s web site.  https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/

Currently, no white papers have been published showing the architecture and detailed specs of this “new” system, only a relatively high level “Spec” sheet. Perhaps HPE is too embarrassed to publish this since it would likely resemble the SGI UV 300H in way too many ways, including the old rat’s nest of 4-bit wide interconnect cables. Once they do, I will investigate and will likely publish a separate post to share what I find.

On the SAP front, new HANA appliance specs have been published for “Flex”. It is interesting, and again embarrassing for HPE, that only up to 8-socket configs are shown, with less BWoH memory support @ 6TB max than the old, and now obsolete, Superdome X. Even more interesting is the lack of SoH and S/4 configs, and I have a suspicion as to why. It turns out that the spec sheet does have one interesting point after all. It shows that the maximum memory DIMM size is 64GB and the number of DIMM slots is 48, for a maximum supported memory of 3TB per chassis, i.e. half of what is necessary to support the 6TB per 4 sockets that other competing Intel vendors support.

So, if you need a supported HANA configuration today with current generation processors for BWoH beyond 6TB, look at any other vendor’s 8-socket Skylake systems or at IBM Power Systems. If you need a supported SoH or S/4 configuration with current gen processors, likewise look at any vendor other than HPE; and beyond 12TB, only IBM Power Systems is supported at that level.

November 6, 2017

HPE, still playing fast and loose with the facts about SAP HANA on Power

Writing a blog post would be so much simpler if IBM permitted me to lie, but that is prohibited.  That is clearly not the case at HPE, see this recent blog post: https://community.hpe.com/t5/Alliances/SAP-HANA-runs-best-on-x86-Period/ba-p/6971659#.WZxDPa01TxW

It contains so many lies, it is hard to know where to start. Let’s start with the biggest one: the claim that there is a 10x difference in performance KPIs required by SAP to certify and ship a HANA appliance vs. a solution certified for TDI only.

You really have to love those lies that are refuted by such easily obtained facts from documentation that is apparently not used by HPE, called SAP Notes. SAP Note 1943937 specifically states: “All HWCCT tests of appliances (compute servers) certified with scenario HANA-HWC-AP SU 1.1 or HANA-HWC-AP RH 1.1 must use HWCCT of SAP HANA SPS10 or higher or a related SAP HANA revision”. Interesting that appliances must use the same HWCCT test of SAP KPIs as used by TDI. So, based on HPE’s blog post, does this mean that if an HPE appliance compute server is used for TDI, it will perform 10x worse than if it is used in an appliance? That would imply that the secret sauce of HPE’s appliances is so incredible that it acts like a dual turbocharger on a car!

The blog post goes on to say “It (Power) ‘works’ … but it is just held to a ~10x lower standard without any of the performance optimizations attributed to SAP’s co-innovation efforts with Intel.”  OK, two lies in one sentence, obviously going for the gold here. As we have already discussed the 10x lie, let us just look at the second one, i.e. performance optimizations. Specifically, the blog calls out AVX and TSX as the performance optimizations for the Intel platform. They are correct that those specific optimizations don’t work on Power; but instead of AVX (Advanced Vector Extensions), Power has two fully symmetric vector pipelines driven by VSX (Vector-Scalar eXtension) instructions, which HANA has been “optimized” to use in the same manner as AVX. And TSX, a.k.a. Transactional Synchronization Extensions, came out after POWER8’s Transactional Memory, and HANA was optimized for both at the same time.

The blog post also stated “Intel E7v3 CPUs for HANA (TSX and AVX) that offer a 5x performance boost over older Intel E7 or Power 8 CPUs.“  Awesome, but where is the proof behind this statement? Perhaps a benchmark? Nope, not one published on SAP’s site; not even the old SD benchmark backs up this claim (it shows, by the way, almost the same SAPS/core for E7v3 vs. E7v2 and way less than POWER8, but maybe that benchmark does not use those optimizations?). OK, then maybe the sizing certifications show this? Nope. A 4-socket CS500 Ivy Bridge system (E7v2) is published as supporting up to a 2TB SoH solution, whereas a 4-socket CS500 Haswell system (E7v3) is shown as supporting up to a 3TB SoH solution. So far, that is just 50% more, not 5x, but perhaps HPE can’t tell the difference between 0.5 and 5.0? But didn’t Haswell have more cores per socket? Yes, it had 18 cores/socket vs. 15 cores/socket for Ivy Bridge, i.e. 20% more cores/socket. So, E7v3 based systems could actually host 25% more memory and associated workload than E7v2 based systems per core. Of course, I am sure that the switch from DDR3 to DDR4 from E7v2 to E7v3 had nothing to do with this improvement. So, a 5x performance boost is clearly nothing but a big fat lie.
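
A quick sanity check of the per-core arithmetic in that paragraph, using only the figures quoted above (2TB on a 4-socket, 15-core/socket E7v2 CS500 vs. 3TB on a 4-socket, 18-core/socket E7v3 CS500); this is an illustration of the math, not an HPE sizing statement.

    # Per-core memory supported: E7v2 vs. E7v3 CS500 figures quoted above
    GB = 1024
    e7v2_gb_per_core = 2 * GB / (4 * 15)    # 2TB over 60 cores -> ~34.1 GB/core
    e7v3_gb_per_core = 3 * GB / (4 * 18)    # 3TB over 72 cores -> ~42.7 GB/core

    print(e7v3_gb_per_core / e7v2_gb_per_core)   # ~1.25, i.e. 25% more per core -- nowhere near 5x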

And the hits just keep coming.  The next statement is just lovely “Naturally, this only matters if you want to be able to call SAP support to get help on nuance performance issues impacting your productive SAP HANA deployment.“  I guess he is trying to suggest that SAP won’t help you with performance problems if you are running any TDI solution including Power Systems, except this is contradicted by all of the SAP notes about TDI and HWCCT, not to mention the experience of customers who have implemented HANA on Power.

The blog post continues: “IBM wants to be the king of legacy businesses like mainframe and UNIX. That’s pretty much the only platforms they have left. So now that they can state that HANA ‘works’ on Power, they can make a case to their AIX/Power customers that they should stay on AIX/Power for SAP and HANA and avoid what IBM claims to be an ‘oh so painful’ Unix to x86 migration.“  HPE, by suggesting that you can run HANA on AIX/Power when HANA only runs on Linux on Power, might not be telling a lie but might just be expressing ignorance and the inability to use sophisticated and obscure search tools like Google. The suggestion that IBM has called a SAP heterogeneous migration “oh so painful” ignores the fact that we have done hundreds of such migrations from HP-UX, among others. Perhaps the author is reflecting HPE’s migration experience with what every other migration provider sees as a very well understood and fully supported SAP process. As to IBM’s motivation, HPE is trying to suggest that IBM is only in the HANA business to support its legacy SAP on AIX/Power base. Just looking at the thriving business of HANA on Power, with over 850 HANA on Power wins since becoming a supported HANA provider, might suggest otherwise. How about the complete absence of any quotes, marketing materials or other documentation showing that IBM is in this market for any reason other than that it is the future of SAP and IBM intends to remain a premier partner of SAP and our customers?

Taken together, this blog post shows that HPE must be using the advanced Skylake processors in their Superdome X and MC990 X (SGI UV 300H) to generate lies at an astounding pace. Oh wait, I forgot (not really) that HPE still has not announced support for Skylake in their high end systems and is only certified for BWoH, not SoH or S/4HANA, with Skylake, and only up to 3TB at that … one month after the announcement of Skylake! Wow, I guess this shows simultaneously how much their pace of technology innovation has slowed down and how much their partnership with SAP has decreased!

And, let’s end on their last amazing sentence: “Every day you put off that UNIX to x86 migration, you are running HANA in a performance degraded mode with production support limitations.“  Yes, HPE seems to be recommending that you migrate your UNIX based systems, i.e. those running Business Suite 7, to x86. In other words, do two migrations, one from UNIX to x86 and a second to HANA. Sounds like a total disconnect from business reality! As to the second part of that sentence, it simply does not make sense, so we are not going to attribute that to a lie, but to a simple logic error.

What are we to conclude about this blog post? Taken on its own, it could be dismissed as the work of a rogue employee. Taken together with the deluge of other similar misleading statements and outright lies emerging from HPE, we see a trend. HPE has gone from being a well-respected systems supplier to a struggling company that promotes and condones lies and hires less than competent individuals to propagate misinformation. Sounds more like a failing communist state than a company worthy of a customer’s trust.

August 22, 2017

Intel Skylake has been announced and the self-described HANA “market leader”, HPE, is curiously trailing the field

Intel announced general availability of their “Skylake” processor on the “Purley” platform last week. Soon after, SAP posted certified HANA configurations for Lenovo and Fujitsu up to 8 sockets and 12TB memory for Suite on HANA (SoH) and S/4HANA (S4) and 6TB for BW on HANA (BWoH). They also posted certified configurations for Dell and Cisco up to 4-socket systems with 6TB SoH/S4 and 3TB BWoH. The certified configurations posted for HPE, which describes itself as the HANA market leader, only included up to 4-socket/3TB BWoH configurations, no configurations for SoH/S4 and nothing for any larger systems.

It is still early and more certified configurations will no doubt emerge over time, but these early results do beg the question, “what is going on with HPE?”  I checked the most recent press releases for HPE and they did not even mention the Skylake debut much less their certification with SAP HANA.  If you Google using the keywords, HPE, Skylake and HANA, you may find a few discussions about HPE’s acquisition of SGI and my previous blog posts with my speculation about Superdome’s demise and HPE’s misleading of customers about this impending event, but nothing from HPE.

So, I will share a little more speculation as to what this slow start for HPE in the Skylake space might portend.

Option 1 – HPE is not investing the funds necessary to certify all of their possible configurations and SoH/S4. Anyone that has been involved with the HANA certification process will tell you that it is very time consuming and expensive. As you can see from HPE’s primary Intel based competitors, they are all very eager to increase their market share and acted quickly. Is HPE becoming complacent? Are they facing financial constraints that have not been made public?

Option 2 – HPE’s technology limitations are becoming apparent.  The Converged System 500 is based on Proliant DL560/580 systems which support a maximum of 4 sockets.  These systems utilize Intel QPI and now UPI interconnect technologies, i.e. no custom ASICs or ccNUMA switches are required.  The CS900 based on the Superdome X and the MC990 X (SGI UV 300H) utilize custom ASICs and, in the case of Superdome X, a set of ccNUMA switches.  As I speculated previously, Superdome X is probably at end of life, so it may never see another certification on SAP’s HANA site.  As to the MC990 X, the crystal ball is a bit more hazy.  Perhaps HPE is trying to shoot for the moon and hit a number beyond the 20TB for SoH/S4 that is currently supported meaning a much longer and more complex set of certification tests.  Or perhaps they are running into technical challenges with the new ASICs required to support UPI.

Option 3 – MC990 X is going to officially become HPE’s only high end offering to support Skylake and subsequent processors and Superdome X is going to be announced at end of life.  If this were to happen, it would mean that anyone that had recently purchased such a system would have purchased a system that is immediately obsolete.

If Option 1 turns out to be true, one would have to be concerned about HPE’s future in the HANA space. If Option 2 turns out to be true, one would have to be really concerned about HPE’s future in the HANA space. And if Option 3 turns out to be true, why would HPE be waiting? The answer may be inventory. If HPE has a substantial inventory of “old” Broadwell based blades and Superdome X chassis, they will undoubtedly want to unload these at the highest price possible, and they know that the value of obsolete systems after such an announcement would drop below the cost of manufacturing.

So, you pick the most likely scenario.  Worst case for HPE is that they are just a little slow or shooting too high.  Worst case for customers is that they purchase a HANA system based on Superdome X and end up with a few hundred thousand dollar boat anchor.  If you work for a company considering the purchase of an HPE Superdome X solution, you may want to ask about its future and, if you find it is at end of life, select another solution for your SAP HANA requirements.

Inevitably, more systems will be published on SAP’s certification page, https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/appliances.html#viewcount=100&categories=certified%2CIntel%20Skylake%20SP .  When that happens, especially if any of my predictions turn out to be true or if they are all wrong and another scenario emerges, I will post an update.

July 20, 2017

There they go again, HPE misleading customers about Superdome X futures

When the acquisition of SGI by HPE was announced last year, I was openly skeptical about HPE’s motives and wrote a blog post to discuss the potential reasons and implications behind this decision. I felt pretty certain that HPE would not retain both Superdome X (SD-X) and the high end SGI system, now called MC990 X as they play in the same space. I speculated that the MC990 X would not be the winner because it has inferior memory reliability technology and uses a hand wired matrix to interconnect different nodes compared to SD-X which uses a backplane to connect nodes to switches for connectivity to other nodes.

At SapphireNow this past week, I learned, from a couple of different sources, that Intel’s next generation platform, “Purley”, using “Skylake” processors, includes so many significant changes that SD-X will very likely not be able to support them. Those changes include a new chip-level interconnect technology, “UltraPath” (UPI), as a follow-on to QPI, faster memory and a larger footprint which, according to pictures available on the internet, appears to be at least 25% bigger. This makes sense since the high end Skylake processor will have up to 28 cores versus Broadwell’s 24 cores and includes 25% more L3 cache but uses the same 14nm lithography. Though specs are limited, my sources also believe this new platform will require more power and cooling than the “Broadwell” chips used in SD-X today. SD-X cell boards are already pretty compact nodes, which made sense when they were trying to deliver a “converged” solution, but which means that any increase in footprint of any component could push these boards beyond their limits. Clearly, the UPI change alone would require HPE’s XNC2 cell controller to be redesigned even if no power or cooling limits were exceeded. Considering how few of these chips are used per year, it would be impractical for HPE to maintain a development program for both the XNC2 (and the associated SX3000 switch) and the SGI-acquired NUMAlink 7 cell controllers. The speculation is therefore that SD-X will, in fact, be the loser and HPE will utilize the MC990 X as the go-forward, high end platform.

If this turns out to be a correct conclusion, then a lot of very large customers are likely to be in for a major shock since they have purchased SD-X with the expectation of being able to upgrade it and add on like architecture systems for their SAP HANA workloads as they grow over time as well as for other large memory applications. Had they been informed about the upcoming obsolescence of SD-X, many of these customers may have made a different purchasing decision. I am not convinced they would have purchased the MC990 X instead for a variety of reasons, not the least of which is the lack of robust memory protection, available with SD-X, known as RAS, Lockstep or DDDC+ mode, but not with MC990 X. Lacking this, the failure of a chip on a memory DIMM could result in a HANA failure, system failure or the need to immediately shut down the system to diagnose and fix the failed DIMM. Most SAP customers running Suite on HANA or S/4HANA are unlikely to find this level of protection to be acceptable, especially when they are running massive HANA transactional systems.

On the subject of reliability, one of the highest exposures that any system has are the physical connections and the MC990 X has a lot of them to form the ccNUMA mesh. Each cable has a connector on each end which are then inserted into the appropriate port on each pair of node controllers. Any time a cable is inserted into any sort of receptacle, there is a possibility that it will fail due to the pressure required to insert it or corrosion. A system with 2 nodes has 8 such cables, 2 between the pair of controllers on each node and 4 between the nodes. A system with 5 nodes has approximately 50 such cables. While not necessarily catastrophic, a failure of one of these cables would require a planned outage as soon as possible as there would be a performance impact from the loss of the connection.
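
The cable counts above follow directly from the wiring pattern described (2 cables between the controller pair on each node, plus 4 between every pair of nodes). The little sketch below simply generalizes that pattern, so treat it as an illustration of how the cabling scales rather than as an SGI/HPE specification.

    from math import comb

    def mc990x_cables(nodes):
        """Cable count using the pattern described above:
        2 cables within each node's controller pair, 4 between every pair of nodes."""
        return 2 * nodes + 4 * comb(nodes, 2)

    print(mc990x_cables(2))   # 8 cables for a 2-node system
    print(mc990x_cables(5))   # 50 cables for a 5-node system
    print(mc990x_cables(8))   # 128 cables for a full 8-node / 32-socket system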

Another limitation of the MC990 X is the lack of virtualization technology. As each node in an MC990 X contains 4 sockets, the granularity of this system is so coarse that very poor utilization is likely to result, and a massive increase in footprint will be required to handle the same set of workloads that might previously have been handled by SD-X. Physical partitioning is available, but since it is based on one or more 4-socket nodes, it is so wasteful as to be completely irrelevant.

If this speculation proves to be true, and HPE has not been sharing this roadmap with customers, then those customers have been convinced to invest in an obsolete platform and they should be furious. For those in the process of making a decision, they should be asking HPE the right questions about SD-X product futures and then taking that into account as well as the product weaknesses of MC990 X.

I should note that IBM has but one go forward architecture for Power Systems based on the on-chip interconnect technology that has been included and evolving since POWER4. Customers that invested in Power technology have seen a consistent 3 to 4 year cycle to the next generation with those who purchased high end models often having the option to upgrade from one generation to the next.

If you read my blog post last week, you may recall that I pointed out how HP was lying to customers about HANA on IBM Power Systems.  Now we learn that they may also be doing the same sort of thing about their own products.  Me thinks a trend is emerging here!?!?

May 23, 2017

HANA on Power hits the Trifecta!

Actually, trifecta would imply only 3 big wins at the same time and HANA on Power Systems just hit 4 such big wins.

Win 1 – HANA 2.0 was announced by SAP with availability on Power Systems simultaneous with Intel based systems.[i]  Previous announcements by SAP had indicated that Power was now on an even footing with Intel for HANA from an application support perspective; however, until this announcement, some customers may have still been unconvinced. I noticed this on occasion when presenting to customers: I would make such an assertion and see a little disbelief on some faces. This announcement leaves no doubt.

Win 2 – HANA 2.0 is only available on Power Systems with SUSE SLES 12 SP1 in Little Endian (LE) mode.  Why, you might ask, is this a “win”?  Because true database portability is now a reality.  In LE mode, it is possible to pick up a HANA database built on Intel, make no modifications at all, and drop it on a Power box.  This removes a major barrier to customers that might have considered a move but were unwilling to deal with the hassle, time requirements, effort and cost of an export/import.  Of course, the destination will be HANA 2.0, so an upgrade from HANA 1.0 to 2.0 on the source system will be required prior to a move to Power among various other migration options.   This subject will likely be covered in a separate blog post at a later date.  This also means that customers that want to test how HANA will perform on Power compared to an incumbent x86 system will have a far easier time doing such a PoC.

Win 3 – Support for BW on the IBM E850C @ 50GB/core, allowing this system to now support 2.4TB.[ii]  The previous limit was 32GB/core, meaning a maximum size of 1.5TB. This is a huge, 56% improvement, which means that this already very competitive platform has become even stronger.

Win 4 – Saving the best for last, SAP announced support for Suite on HANA (SoH) and S/4HANA of up to 16TB with 144 cores on IBM Power E880 and E880C systems.[ii]  Several very large customers were already pushing the previous 9TB boundary and/or had run the SAP sizing tools and realized that more than 9TB would be required to move to HANA. This announcement now puts IBM Power Systems on an even footing with HPE Superdome X. Only the lame duck SGI UV 300H has support for a larger single image size @ 20TB, but not by much. Also notice that to get to 16TB, only 144 cores are required for Power, which means that there are still 48 cores unused in a potential 192-core system, i.e. room for growth to a future limit once appropriate KPIs are met. Consider that the HPE Superdome X requires all 16 sockets to hit 16TB … which makes you wonder how they will achieve a higher size prior to a new chip from Intel.

Win 5 – Oops, did I say there were only 4 major wins?  My bad!  Turns out there is a hidden win in the prior announcement, easily overlooked.  Prior to this new, higher memory support, a maximum of 96GB/core was allowed for SoH and S/4HANA workloads.  If one divides 16TB by 144 cores, the new ratio works out to 113.8GB/core or an 18.5% increase.  Let’s do the same for HPE Superdome X.  16 sockets times 24 core/socket = 384 cores.  16TB / 384 cores = 42.7GB/core.  This implies that a POWER8 core can handle 2.7 times the workload of an Intel core for this type of workload.  Back in July, I published a two-part blog post on scaling up large transactional workloads.[iii]  In that post, I noted that transactional workloads access data primarily in rows, not in columns, meaning they traverse columns that are typically spread across many cores and sockets.  Clearly, being able to handle more memory per core and per socket means that less traversing is necessary resulting in a high probability of significantly better performance with HANA on Power compared to competing platforms, especially when one takes into consideration their radically higher ccNUMA latencies and dramatically lower ccNUMA bandwidth.
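
The 113.8GB/core and 2.7x figures are simple division on the numbers above; the sketch below just spells the calculation out (the 16 sockets x 24 cores for Superdome X comes from the text).

    # Core-to-memory comparison from the figures above
    GB = 1024
    power_gb_per_core = 16 * GB / 144         # ~113.8 GB/core on the Power E880
    sdx_cores = 16 * 24                       # Superdome X: 16 sockets x 24 cores = 384
    sdx_gb_per_core = 16 * GB / sdx_cores     # ~42.7 GB/core

    print(power_gb_per_core)                    # 113.78
    print(power_gb_per_core / 96)               # ~1.185, the 18.5% increase over 96 GB/core
    print(power_gb_per_core / sdx_gb_per_core)  # ~2.67, i.e. roughly 2.7x per core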

Taken together, these announcements have catapulted HANA on IBM Power Systems from being an outstanding option for most customers, but with a few annoying restrictions and limits especially for larger customers, to being a best-of-breed option for all customers, even those pushing much higher limits than the typical customer does.

[i] https://launchpad.support.sap.com/#/notes/2235581

[ii] https://launchpad.support.sap.com/#/notes/2188482

[iii] https://saponpower.wordpress.com/2016/07/01/large-scale-up-transactional-hana-systems-part-1/

December 6, 2016

HPE acquisition of SGI implications to HANA customers

Last week, HP announced their intention to acquire SGI for $275M, a 30% premium over its market cap prior to the announcement. This came as a surprise to most people, including me, both that HP would want to do this and that SGI was worth so little, a fact that eWeek called “Embarrassing”: http://www.eweek.com/innovation/hpe-buys-sgi-for-275-million-how-far-the-mighty-have-fallen.html. This raises a whole host of questions that HANA customers might want to consider.

First and foremost, there would appear to be three possible reasons why HP made this acquisition: 1) eliminate a key competitor and undercut Dell and Cisco at the same time, 2) acquire market share, or 3) obtain access to superior technology and resources and/or keep them out of the hands of a competitor, or some combination of the above.

If HP considered SGI a key competitor, it means that HP was losing a large number of deals to SGI, a fact that has not been released publicly but which would imply that customers are not convinced of Superdome X’s strength in this market.  As many are aware, both Dell and Cisco have resell agreements with SGI for their high end UV HANA systems.  It would seem unlikely that either Dell or Cisco would continue such a relationship with their arch nemesis and as such, this acquisition will seriously undermine the prospects of Dell and Cisco to compete for scale-up HANA workloads such as Suite on HANA and S/4HANA among customers that may need more than 4TB, in the case of Dell, and 6TB, in the case of Cisco.

On the other hand, market share is a reasonable goal, but SGI’s total revenue in the year ending 6/26/15 was only $512M, which would barely be a rounding error on HP’s revenue of $52B for the year ending 10/31/15.  Hard to imagine that HP could be this desperate for a potential 1% increase in revenue, assuming they had 0% overlap in markets.  Of course, they compete in the same markets, so the likely revenue increase is considerably less than even that paltry number.

That brings us to the third option: that the technology of SGI is so good that HP wanted to get their hands on it. If that is the case, then HP would be admitting that the technology in Superdome X is inadequate for the demands of Big Data and Analytics. I could not agree more and made such a case in a recent post on this blog. In that post, I noted the latency inherent in HP’s minimum 8-hop round trip access to any off-board resources (remote memory accesses add another two hops); remember, there are only two Intel processors per board in a Superdome X system, which can accommodate up to 16 processors. Scale-up transactional workloads typically access data in rows dispersed across NUMA aligned columns, i.e. they will constantly be traversing this high latency network. Of course, this is not surprising since the architecture used in this system is INCREDIBLY OLD, having been developed in the early 2000s, i.e. way before the era of Big Data. But the surprising thing is that this would imply that HP believes SGI’s architecture is better than their own. Remember, SGI’s UV system uses a point to point, hand wired, 4-bit wide mesh of wires between every two NUMA ASICs in the system, which, for the potential 32 sockets in a single cabinet system, means 16 ASICs and 136 wires, if I have done the math correctly. HP has been critical of the memory protection employed in systems like SGI’s UV, which is based on SDDC (Single Device Data Correction). In HP’s own words about their DDDC+1 memory protection: “This technology delivers up to a 17x improvement in the number of DIMM replacements versus those systems that use only Single-Chip Sparing technologies. Furthermore, DDDC +1 significantly reduces the chances of memory related crashes compared to systems that only have Single-Chip Sparing capabilities” (HP Integrity Superdome X system architecture and RAS whitepaper). What is really interesting is that SDDC does not imply even Single-Chip Sparing.

The only one of those options which makes sense to me is the first one, but of course, I have no way of knowing which of the above is correct. One thing is certain: customers considering implementing HANA on a high-end scale-up architecture from either HP or SGI, and Dell and Cisco by extension, are going to have to rethink their options. HP has not stated which architecture will prevail or if they will keep both, which is hard to imagine but not out of the question either. Without concrete direction from HP, it is possible that a customer decision for either architecture could result in almost immediate obsolescence. I would not enjoy being in the board room of a company, or meeting with my CFO, explaining how I made a decision for a multi-million dollar solution which is dead-ended, worth 1/2 or less overnight and for which a complete replacement will be required sooner rather than later. Likewise, it would be hard to imagine basing an upcoming decision for a strategic system that runs the company’s entire business on a single flip of a coin.

Now I am going to sound like a commercial for IBM, sorry. There is an alternative which is rock solid, increasing, not decreasing, in value, and has a strong and very clear roadmap: IBM Power Systems. POWER8 for HANA has been seeing one of the fastest market acceptance rates of any solution in recent memory, with hundreds of customers implementing HANA on Power Systems and/or purchasing systems in preparation for such an implementation, ranging from medium sized businesses in high growth markets to huge brand names. The roadmap was revealed earlier this year at the OpenPOWER Summit, http://www.nextplatform.com/2016/04/07/ibm-unfolds-power-chip-roadmap-past-2020/. This roadmap was further backed up by Google’s announcement of their plans for POWER9, http://www.nextplatform.com/2016/04/06/inside-future-google-rackspace-power9-system/, the UK STFC’s plans, http://www.eweek.com/database/ibm-powers-uk-big-data-research.html, worth £313M (roughly $400M based on current exchange rates), and the US DOE’s decision to base its Summit and Sierra supercomputers on IBM POWER9 systems, http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers, a $375M investment, either of which is worth more than the entire value of SGI, interestingly enough. More importantly, these two major wins mean IBM is now contractually obligated to deliver POWER9, thereby ensuring the Power roadmap for a long time to come. And, of course, IBM has a long history of delivering mission critical systems to customers, evolving and improving the scaling and workload handling characteristics over time while simultaneously improving systems availability.

August 15, 2016

Large scale-up transactional HANA systems – part 2

Part 1 of this subject detailed the challenges when sizing large scale-up transactional HANA environments. This part will dive into the details and the methodology by which customers may select a vendor in the absence of an independent transactional HANA benchmark.

Past history with large transactional workloads

Before I start down this path, it would be useful to understand why it is relevant. HANA transaction processing utilizes many of the same techniques as a conventional database. It accesses rows; although each column is physically separate, the transaction does not know this and gathers all of the data together in one place prior to presenting the results to the dialog process calling it. Likewise, a write must follow ACID properties, including the requirement that only one update against a piece of data can occur at any time, which means cache coherency mechanisms must be employed to ensure this. And a write to a log, in addition to the memory location of the data to be changed or updated, must occur. This sounds an awful lot like a conventional DB, which is why past history handling these sorts of transactional workloads makes plenty of sense.

HPE has a long history with large scale transactional workloads and Superdome systems, but this was primarily based on Integrity Superdome systems using Itanium processors and HP-UX, not on Intel x86 systems and Linux. Among the Fortune 100, approximately 20 customers utilized HPE’s systems for their SAP database workloads, almost entirely based on Oracle with HP-UX. Not bad, but second place to IBM Power Systems, which has approximately 40 of the Fortune 100 customers that use SAP. SGI has exactly 0 of those customers. Intel x86 systems represent 8 of that customer set, with 2 being on Exadata, not even close to a standard x86 implementation with its Oracle RAC and highly proprietary storage environment. Three of the remaining x86 systems are utilized by vendors whose very existence is dependent on x86, so running on anything else would contradict their mission, and these customers must make this solution work no matter what the expense and complexity might be. That leaves 3 customers, none of which utilize Superdome X technology for their database systems. To summarize: IBM Power has a robust set of high end current SAP transactional customers; HPE a smaller set, entirely based on a different chip and OS than is offered with Superdome X; SGI has no experience in this space whatsoever; and x86 in general has limited experience, confined to designs that have nothing in common with today’s high end x86 technology.

Industry Standard Benchmarks

A bit of background. Benchmarks are lab experiments open to optimization and exploitation by experts in the area and have little resemblance to reality. Unfortunately, they are the only third party metric by which systems can be compared. Benchmarks fall into two general categories: those that are horrible and those that are not horrible (note I did not say good). Horrible ones sometimes test nothing but the speed of CPUs by placing the entire running code in instruction cache and the entire read-only dataset upon which the code executes in data cache, meaning no network or disk I/O, much less any memory access or cache coherency traffic. SPEC benchmarks such as SPECint2006 and SPECint_rate2006 fall into this category. They are uniquely suited for ccNUMA systems as there is absolutely no communication between any sockets, meaning this represents the best case scenario for a ccNUMA system.

It is therefore revealing that SGI, with 32 sockets and 288 cores, was only able to achieve 11,400 on this ideal ccNUMA benchmark, slightly beating HP Superdome X’s result of 11,100, also with 288 cores.  By comparison, the IBM Power Systems E880 with only 192 cores, i.e. 2/3 of the cores, achieved 14,400, i.e. 26% better performance.
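
To put those published results on a per-core basis (the raw numbers are the ones quoted above; the per-core normalization is mine):

    # SPECint_rate2006 results quoted above, normalized per core
    sgi_uv_per_core = 11400 / 288     # ~39.6 per core (32-socket UV)
    hpe_sdx_per_core = 11100 / 288    # ~38.5 per core (Superdome X)
    ibm_e880_per_core = 14400 / 192   # 75.0 per core

    print(ibm_e880_per_core / sgi_uv_per_core)   # ~1.9x per core, on top of 26% better aggregate with 2/3 the cores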

In descending order from horrible to not as bad, there are other benchmarks which can be used to compare systems. The list includes SAP SD 2-tier, SAP BW-EML, TPC-C and SAP SD 3-tier. Of those, the SD 2-tier has the most participation among vendors and includes real SAP code and a real database, but suffers from the database being a tiny percentage of the workload, approximately 6 to 8%, meaning that on ccNUMA systems, multiple app servers can be placed on each system board, resulting in only database communication going across a pretty darned fast network represented by the ccNUMA fabric. SGI is a no-show on this benchmark. HPE did show up with Superdome X @ 288 cores and achieved 545,780 SAPS (100,000 users, Ref# 2016002), and it is still the world record holder. IBM Power showed up with the E870, an 80-core system (28% of the cores of the HPE system), and achieved 436,100 SAPS (79,750 users, Ref# 2014034), 80% of the SAPS of the HPE system. Imagine what IBM would have been able to achieve with this almost linearly scalable benchmark had they attempted to run it on the E880 with 192 cores (probably close to 436,100 * 192/80, although no vendor is allowed to publish the “results” of any extrapolation of SAP benchmarks, but no one can stop a customer from inputting those numbers into a calculator).
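
Since the text mentions the extrapolation a customer could do on their own calculator, here it is spelled out. It is of course not a published SAP benchmark result, just arithmetic on the two certified results cited above.

    # Per-core comparison and the customer-side extrapolation mentioned above
    hpe_saps_per_core = 545780 / 288          # ~1,895 SAPS/core (Superdome X, Ref# 2016002)
    ibm_saps_per_core = 436100 / 80           # ~5,451 SAPS/core (E870, Ref# 2014034)

    extrapolated_e880_saps = 436100 * 192 / 80    # ~1,046,640 SAPS if scaling were near linear
    print(hpe_saps_per_core, ibm_saps_per_core, extrapolated_e880_saps)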

BW-EML was SAP’s first benchmark designed for HANA, although not restricted to it. As the name implies, it is a BW benchmark, so it is difficult to derive any correlation to transaction processing, but at least it does show some aspect of performance with HANA, analytic if nothing else, and concurrent analytics is one of the core value propositions of HANA. HPE was a frequent contributor to this benchmark, but always with something other than Superdome X. It is important to note that Superdome X is the only Intel based system to utilize RAS mode, or Intel Lockstep, by default rather than as an option. That mode has a memory throughput impact of 40% to 60% based on published numbers from a variety of vendors but, to date, no published benchmarks, of any sort, have been run in this mode. As a result, it is impossible to predict how well Superdome X might perform on this benchmark. Still, kudos to HPE for their past participation. Much better than SGI, which is, once again, a no-show on this benchmark. IBM Power Systems, as you might predict, still holds the record for best performance on this benchmark with the 40-core E870 system @ 2 billion rows.

TPC-C was a transaction processing benchmark that, at least for some time period, had good participation, including from HP Superdome. That is, until IBM embarrassed HPE by delivering 50% more performance with ½ the number of cores. After that, HPE never published another result on Superdome … and that was back in the 2007/2008 time frame. TPC-C was certainly not a perfect benchmark, but it did have real transactions with real updates, and about 10% of the benchmark involved remote accesses. Still, SGI was a no-show, and HPE stopped publishing on this class of system in 2007 while IBM continued publishing through 2010 until there was no one left to challenge their results. A benchmark is only interesting when multiple vendors are vying for the top spot.

Last, but certainly not least, is the SAP SD 3-tier benchmark.  In this one, the database was kept on a totally separate server and there was almost no way to optimize it to remove any ccNUMA effects.  Only IBM had the guts to participate in this benchmark at a large scale with a 64-core POWER7+ system (the previous generation to POWER8).  There was no submission from HPE that came even remotely close and, once again, SGI was MIA.

Architecture

Where IBM Power Systems utilizes a “glueless” interconnect up to 16 sockets, meaning all processor chips connect to each other directly without the use of specialized hub chips or switches, Intel systems beyond 8 sockets utilize a “glued” architecture. Currently, only HPE and SGI offer solutions beyond 8 sockets. HPE is using a very old architecture in the Superdome X, first deployed for PA-RISC (remember those?) in the Superdome introduced in 2000. Back then, they were using a cell controller (a.k.a. hub chip) on each system board. When they introduced the Itanium processor in 2002, they replaced this hub chip with a new one called SX1000; basically an ASIC that connected the various components on the system board together and to the central switch by which it communicates with other system boards. Since 2002, HPE has moved through three generations of ASICs and is now using the SX3000, which features considerably faster speeds, better reliability, some ccNUMA enhancements and connectivity to multiple interconnect switches. Yes, you read that correctly; where Intel has delivered a new generation of x86 chips just about every year over the last 14 years, HPE has delivered 3 generations of hub chips. Pace of innovation is clearly directly tied to volume, and Superdome has never achieved sufficient volume alone, nor use by other vendors, to increase the speed of innovation. This means that while HPE may have delivered a major step forward at a particular point in time, it suffers from a long lag and diminishing returns as time and Intel chip generations progress. The important thing to understand is that every remote access, from either of the two Intel EX chips on each system board, to cache, memory or I/O connected to another system board, must pass through 8 hops at a minimum, i.e. from calling socket, to SX3000, to central switch, to remote SX3000, to remote socket, and the same trip in return, and that is assuming the data was resident in an on-board cache.

SGI, the other player in the beyond-8-socket space, is using a totally different approach, derived from their experience in the HPC space. They are also using a hub chip, called a HARP ASIC, but rather than connecting through one or more central switches, in the up-to-32-socket UV 300H system each system board, featuring 4 Intel EX chips and a proprietary ASIC per memory riser, includes two hub chips which are linked directly to each of the other hub chips in the system. This mesh is hand wired, with a separate physical cable for every single connection. Again, you read that correctly: hand wired. This means that not only are physical connections made for every hub chip to hub chip link, with the inherent potential for an insertion or contact problem on each end of that wire, but as implementation size increases, say from 8 sockets/2 boards to 16 sockets/4 boards or to 32 sockets/8 boards, the number of physical, hand wired connections grows roughly with the square of the number of boards. OK, assuming that does not make you just a little bit apprehensive, consider this: where HPE uses a memory protection technology called Double Device Data Correction + 1 (DDDC+1) in their Superdome X system, basically the ability to handle not just a single memory chip failure but at least 2 (not at the same time), SGI utilizes SDDC, i.e. Single Device Data Correction. This means that after detection of the first failure, customers must rapidly decide whether to shut down the system and replace the failing memory component (assuming it has been accurately identified), or hope their software based page deallocation technology works fast enough to avert a catastrophic system failure due to a subsequent memory failure. Even with that software, if a memory fault occurs in a different page, the SGI system would still be exposed. My personal opinion is that memory protection is so important in any system, but especially in large scale-up HANA systems, that anything short of true enterprise memory protection of at least DDDC does nothing other than increase customer risk.

Summary

SGI is asking customers to accept their assertion that SAP’s certification of the SGI UV 300H at 20TB implies it can scale better than any other platform and perform well at that level, but they are providing no evidence in support of that claim. SAP does not publish the criteria with which it certifies a solution, so it is possible that SGI has been able to “prove” addressability at 20TB, the ability to initialize a HANA system and maybe even the ability to handle a moderate number of transactions. Lacking any sort of independent, auditable proof via a benchmark, or any reasonable body of customers (one would be nice, at least) driving high transaction volumes with HANA or a conventional database, and offering nothing other than a 4-bit wide, hand wired ccNUMA nest that would seem prone to low throughput and high error rates, especially with substandard memory protection, it is hard to imagine why anyone would find this solution appealing.

HPE, by comparison, does have some history with transactional systems at high transaction volumes, but with a completely different CPU, OS and memory architecture, and nothing with Superdome X. HPE has a few benchmarks, however poor, once again on systems from long ago, plus mediocre results with the current generation and an architecture that has a minimum 8-hop round trip for every remote access. On the positive side, at least HPE gets it regarding proper memory protection, but does not address how much performance degradation results from this protection. Once again, SAP’s certification at 16TB for Superdome X must be taken with the same grain of salt as SGI’s.

IBM Power Systems has an outstanding history with transactional systems at very high transaction volumes using current generation POWER8 systems. Power also dominates the benchmark space and continued to deliver better and better results until no competitor dared risk the fight. Lastly, POWER8 is the latest generation of a chip designed from the ground up with ccNUMA optimization in mind and with reliability as its cornerstone, i.e. the results already include any overhead necessary to support this level of RAS. Yes, POWER8 is only supported at 9TB today for SAP SoH and S/4HANA, but lest we forget, it is the new competitor in the HANA market, and the other guys only achieved their higher supported numbers after extensive customer and internal benchmark testing, both of which are underway with Power.

July 7, 2016

Large scale-up transactional HANA systems – part 1

Customers that require Suite on HANA (SoH) and S/4HANA systems with 6TB of memory or less will find a wide variety of available options. Those options do not require any specialized type of hardware, just systems that can scale up to 8 sockets with Intel based systems and up to 64 cores with IBM Power Systems (socket count depends on the number of active cores per socket, which varies by system). If you require 6TB or less, or can’t imagine ever needing more, then sizing is a fairly easy process, i.e. look at the sizing matrix from SAP and select a system which meets your needs. If you need to plan for more than 6TB, this is where it gets a bit more challenging. The list of options narrows to 5 vendors between 6TB and 8TB (IBM, Fujitsu, HPE, SGI and Lenovo) and gets progressively smaller beyond that.

All systems with more than one socket today are ccNUMA, i.e. remote cache, memory and I/O accesses are delivered with more latency and lower bandwidth than accesses local to the processor. HANA is highly optimized for analytics, which most of you probably already know. The way it is optimized may not be as obvious. Most tables in HANA are columnar, i.e. every column in a table is kept in its own structure with its own dictionary, and the elements of the column are replaced with a very short dictionary pointer, resulting in outstanding compression in most cases. Each column is placed in as few memory pages as possible, which means that queries which scan through a column can run at crazy fast speeds, as all of the data in the column is kept as “close” together as possible. This columnar structure is beautifully suited for analytics on ccNUMA systems, since different columns will typically be placed behind different sockets, which means that only queries that cross columns and joins will have to access columns that may not be local to a socket and, even then, usually only the results have to be sent across the ccNUMA fabric. There was a key word in the previous sentence that might have easily been missed: “analytics”. Where analytical queries scan down columns, transactional queries typically go across rows in which, due to the structure of a columnar database, every element is located in a different column, potentially spanning the entire ccNUMA system. As a result, minimized latency and high cross-system bandwidth may be more important than ever.
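
As a toy illustration of the dictionary-plus-pointer idea described above (the general columnar technique, not HANA’s actual internals):

    # Toy dictionary encoding of one column (the general technique, not HANA internals)
    city_column = ["Austin", "Dallas", "Austin", "Houston", "Dallas", "Austin"]

    dictionary = sorted(set(city_column))                    # ["Austin", "Dallas", "Houston"]
    encoded = [dictionary.index(v) for v in city_column]     # short integer pointers: [0, 1, 0, 2, 1, 0]

    # An analytic scan touches only this one compact structure...
    austin_rows = [i for i, v in enumerate(encoded) if dictionary[v] == "Austin"]

    # ...whereas reconstructing a full row for a transaction needs one such lookup
    # per column, and each column may be homed behind a different socket.
    print(dictionary, encoded, austin_rows)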

Let me stop here and give an example so that I don’t lose the readers that aren’t system nerds like myself. I will use a utility company as an example, as everyone is a utility customer. For analytics, an executive might want to know the average usage of electricity on a given day at a given time, meaning the query is composed of three elements, all contained in one table: usage, date and time. Unless these columns are enormous, i.e. over 2 billion rows, they are very likely stored behind a single socket with no remote accesses required. Now, take that same company’s customer care center, where a utility consumer wants to turn on service, report an outage or find out what their last few months or years of bills have been. In this case, all sorts of information is required to populate the appropriate screens: first name, last name, street address, city, state, meter number, account number, usage, billed amount and on and on. Scans of columns are not required and a simple index lookup suffices, but every element is located in a different column which has to be resolved by an independent dictionary lookup/replacement of the compressed elements, meaning several or several dozen communications across the system, as the columns are most likely distributed across it. While an individual remote access may take longer, almost 5x in a worst case scenario[i], we are still talking nanoseconds here, and even 100 of those still results in a “delay” of only 50 microseconds. I know, what are you going to do while you are waiting! Of course, the utility company is likely to have hundreds, thousands or tens of thousands of such transactions in flight at any given point in time, and there is the problem. An increased latency of 5x for every remote access may severely diminish the scalability of the system.
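
The 50-microsecond figure works out if one assumes something on the order of a ~100ns local access inflated 5x to ~500ns for a worst case remote access; the sketch below just makes that assumption explicit, since the text only gives the 5x ratio and the end result.

    # Illustrative only: the 5x ratio is from the cited paper; the ~100ns local
    # access time is an assumption chosen so that the numbers match the text above.
    local_access_ns = 100
    remote_access_ns = 5 * local_access_ns          # worst case ccNUMA remote access

    print(100 * remote_access_ns / 1000)            # 100 remote accesses -> 50.0 microseconds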

Does this mean that it is not possible to scale up a HANA transactional environment? Not at all, but it does take more than being able to physically place a lot of memory in a system to be able to utilize it in a productive manner with good scalability. How can you evaluate vendor claims then? Unfortunately, the old tried and true SAP SD benchmark has not been made available to run in HANA environments. Lacking that, you could flip a coin, believe vendor claims without proof or demand proof. Clearly, demanding proof is the most reasonable approach, but what proof? There are three types of proof to look at: past history with large transactional workloads, industry standard benchmarks and architecture.

In the over 8TB HANA space, there are three competitors; HPE Superdome X, SGI UV 300H and IBM Power Systems E870/E880.  I will address those systems and these three proof points in part 2 of this blog post.

[i] http://www.vldb.org/pvldb/vol8/p1442-psaroudakis.pdf

July 1, 2016