SAPonPower

An ongoing discussion about SAP infrastructure

Scale-up vs. scale-out architectures for SAP HANA – part 1

Dozens of articles, blog posts, how-to guides and SAP notes have been written about this subject.  One of the best was by John Appleby, now Global Head of DDM/HANA COEs @ SAP.[i]  Several others have been written by vendors with a vested interest in the proposed option.  The vendor for which I work, IBM, offers excellent solutions for both options, so my perspective is based on my own experience and that of our many customers, some of whom have chosen one option, the other or, in some cases, both.

Scale-out for BW is well established, well understood, fully supported by SAP and can be cost effective from the perspective of systems acquisition costs.  Scale-out for S/4HANA, by comparison, is in use by very few customers and not well understood, yet is supported by SAP for configurations up to 4 nodes.  Does this mean that a scale-out architecture should always be used for BW and that scale-up is the only viable choice for S/4HANA?  This blog post will discuss only BW and similar analytical environments, including BW/4HANA, data marts, data lakes, etc.  The next will discuss S/4HANA, and the third in the series will discuss vendor selection and where one might have an advantage over the others.

Scale-out has 3 key advantages over scale-up:

  • Every vendor can participate, so competitive bidding of “commodity”-level systems can result in optimal pricing.
  • High availability using host auto-failover requires nothing more than n+1 systems, as the hot standby node can take over the role of any other node (some customers choose n+2 or group nodes and standby nodes).
  • Some environments are simply too large to fit in even the largest supported scale-up systems.

Scale-up, likewise, has 3 key advantages over scale-out:

  • Performance is, inevitably, better, as joins across memory are always faster than joins across a network.
  • Management is much simpler, as query analysis and data distribution decisions need not be performed on a regular basis, and fewer systems are involved, with a corresponding decrease in monitoring, updating, connectivity, etc.
  • TCO can be lower when the costs of systems, storage, network and basis management are included.

Business requirements, as always, should drive the decision as to which to use.  As mentioned, when an environment is simply too large, unless a customer is willing to ask for an exception from SAP (and SAP is willing to grant it), then scale-out may be the only option.  Currently, SAP supports BW configurations of up to 6TB on many 8-socket Intel Skylake based systems (up to 12TB on HPE’s 16-socket system) and up to 16TB on IBM Power Systems.

The next most important issue is usually cost.  Let’s take a simple example of an 8TB BW HANA requirement.  With scale-out, 4 @ 2TB nodes may be used with a single 2TB node for hot standby for a total of 10TB of memory.  If scale-up is used, the primary system must be 8TB and the hot-standby another 8TB for a total of 16TB of memory.  Considering that memory is the primary driver of the cost of acquisition, 16TB, from any vendor, will cost more than 10TB.  If the analysis stops there, then the decision is obvious. However, I would strongly encourage all customers to examine all costs, not just TCA.

In the above example, 5 systems are required for the scale-out configuration vs. 2 for scale-up.  The scale-out config could be reduced to 4 systems if 3TB nodes are used, with 1TB left unused, although the total memory requirement would go up to 12TB.  At a minimum, twice the management activity, troubleshooting and connectivity would be required.  Also remember that prod rarely exists on its own; some semblance of the configuration usually exists in QA, often in DR and sometimes in other non-prod instances.

The other set of activities is much more intensive.  To distribute load amongst the systems, data must first be distributed.  Some data must reside on the master node, e.g. all row-store tables, ABAP tables and general operations tables.  Other data, such as Fact, DataStore Object (DSO) and Persistent Staging Area (PSA) tables, is distributed evenly across the slave nodes based on the desired partitioning specification, e.g. hash, round robin or range.  There are also more complex options where specifications can be mixed to get around hash or range limitations and create a multi-level partitioning plan.  And, of course, you can partition different tables using different specifications.  Which set of distribution specifications you use is highly dependent on how data is accessed, and this is where it gets really complicated.  Most customers start with a simple specification, then begin monitoring placement using the table distribution editor and performance using ST03N, plus getting feedback from end users (read that as complaints to the help desk).  After some period of time and analysis of performance, many customers elect to redistribute data using a better or more complex set of specifications.  Unfortunately, what is good for one query, e.g. distribute data based on month, is bad for another which looks for data based on zipcode, customer name or product number.  Some customers report that the above set of activities can consume part or all of one or more FTEs.
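To make that trade-off concrete, here is a minimal Python sketch of the idea.  The node count, column names and partitioning function are hypothetical stand-ins for illustration only, not HANA syntax or internals:

```python
# Purely illustrative sketch: a toy model of why a distribution specification
# that helps one query pattern can hurt another. Node count, column names and
# functions are hypothetical, not HANA internals.
SLAVE_NODES = 4

def range_partition_by_month(month: int) -> int:
    """Range spec: three calendar months per slave node."""
    return (month - 1) // 3 % SLAVE_NODES   # months 1-3 -> node 0, 4-6 -> node 1, ...

rows = [
    {"month": 1,  "zipcode": "78701"},
    {"month": 2,  "zipcode": "10001"},
    {"month": 7,  "zipcode": "78701"},
    {"month": 11, "zipcode": "60601"},
]

# A query restricted to month = 2 prunes down to a single node...
month_query_nodes = {range_partition_by_month(r["month"]) for r in rows if r["month"] == 2}
# ...but a query on zipcode = '78701' has no locality and touches multiple nodes,
# forcing cross-node scans and joins.
zip_query_nodes = {range_partition_by_month(r["month"]) for r in rows if r["zipcode"] == "78701"}

print(month_query_nodes)  # {0}
print(zip_query_nodes)    # {0, 2}
```

The same tension exists with hash and multi-level specifications: any single layout favors the predicates it was designed around.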

Back to the above example: 10TB vs. 16TB, which we will assume is replicated in QA and DR for the sake of argument, i.e. the scale-up solution requires 18TB more memory.  If the price per TB is $35,000, then the difference in TCA would be $630,000.  The average cost of a senior basis administrator (required for this sort of complex task) in most western countries is in the $150,000 per year range.  That means that over the course of 5 years, the TCO of the scale-up solution, considering only TCA and basis admin costs, would be roughly equivalent to the cost of the scale-out solution.  Systems, storage and network administration costs could push the TCO of the scale-out solution up relative to the scale-up solution.
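For readers who want to check the arithmetic, here is a minimal sketch of the back-of-the-envelope comparison above.  The $35,000/TB and $150,000/year figures come from the text; the three identical environments (prod, QA, DR) and the one full FTE for data distribution work are simplifying assumptions you would replace with your own numbers:

```python
# Back-of-the-envelope TCO sketch for the 8TB BW example above.
scale_out_tb = 10          # 4 x 2TB worker nodes + 1 x 2TB hot standby
scale_up_tb  = 16          # 8TB primary + 8TB hot standby
environments = 3           # prod, QA, DR (assumed identical for simplicity)
price_per_tb = 35_000      # USD, figure from the text

extra_memory_tb = (scale_up_tb - scale_out_tb) * environments       # 18 TB
extra_tca       = extra_memory_tb * price_per_tb                    # $630,000

basis_admin_per_year = 150_000   # USD, senior basis admin, figure from the text
admin_fte            = 1         # assumption: ~1 FTE on data distribution work
years                = 5
extra_scale_out_admin = basis_admin_per_year * admin_fte * years    # $750,000

print(f"Extra memory cost of scale-up:      ${extra_tca:,}")
print(f"Extra admin cost of scale-out (5y): ${extra_scale_out_admin:,}")
```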

And then there is performance.  Some very high performance network adapter companies have been able to drive TCP latency across 10Gb Ethernet down to 3.6us, which sounds really good until you consider that memory latency is around 120ns, i.e. roughly 30 times faster.  Joining tables across nodes is not only substantially slower, but also results in more CPU and memory overhead.[ii]  A retailer in Switzerland, Coop Group, reported 5 times quicker analytics while using 85% fewer cores after migrating from an 8-node x86 scale-out BW HANA cluster with 320 total cores to a single scale-up 96-core IBM Power System.[iii]  While various benchmarks suggest 2x or better per-core performance of Power Systems vs. x86, these results suggest far higher, much of which can, no doubt, be attributed to the effect of using a scale-up architecture.

Of course, performance is relative.  BW queries run with scale-out HANA will usually outperform BW on a conventional DB by an order of magnitude or more.  If this is sufficient for business purposes, then it may be hard to build a case for why faster is required.  But end users have a tendency to soak up additional horsepower once they understand what is possible.  They do this in the form of more what-if analyses, interactive drill downs, more frequent mock-closes, etc.

If the TCO is similar or better and a scale-up approach delivers superior performance with many fewer headaches and calls to the help desk for intermittent performance problems, then it would be very worthwhile to investigate this option.

 

To recap: for BW HANA and similar analytical environments, scale-out architectures usually offer the lowest TCA and scalability beyond the largest scale-up environment.  Scale-up architectures offer significantly easier administration, much better performance and competitive to superior TCO.

[i]https://blogs.saphana.com/2014/12/10/sap-hana-scale-scale-hardware/

[ii]https://launchpad.support.sap.com/#/notes/2044468 (see FAQ 8)

[iii]https://www.ibm.com/case-studies/coop-group-technical-reference

July 9, 2018 | Uncategorized | 3 Comments

Power Systems – Delivering best of breed scalability for SAP HANA

SAP quietly revised an SAP Note last week, but it certainly made a loud sound for some.  Version 47 of https://launchpad.support.sap.com/#/notes/2188482 now says that OLTP workloads, such as Suite on HANA or S/4HANA, are supported on IBM Power Systems up to 24TB.  OLAP workloads, like BW HANA, may be implemented on IBM Power Systems with up to 16TB for a single scale-up instance.  As noted in https://launchpad.support.sap.com/#/notes/2055470, scale-out BW is supported with up to 16 nodes, bringing the maximum supported BW environment to a whopping 256TB.

As impressive as those stats are, it should also be noted that SAP provided new core-to-memory (CTM) guidance, with the 24TB OLTP system sized at 176 cores, which results in 140GB/core, up from the previous 113.7GB/core at 16TB.  The 16TB OLAP system, sized at 192 cores, translates to 85.3GB/core, up from the previous 50GB/core for 4-socket and larger systems.

By comparison, the maximum supported sizes for Intel Skylake systems are 6TB for OLAP and 12TB for OLTP, which correlates to 27.4GB/core OLAP and 54.9GB/core OLTP.  In other words, SAP has published numbers which suggest Power Systems can handle workloads that are 2.7x (OLAP) and 2x (OLTP) the size of the maximum supported Skylake systems.  On the CTM side, this works out to a maximum of 3.1x (OLAP) and 2.6x (OLTP) better performance per core for Power Systems over Skylake.
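The ratios quoted above can be reproduced with a few lines of arithmetic.  In the sketch below, the 224-core count for an 8-socket Skylake system (8 x 28 cores) is my assumption; the supported sizes and Power core counts are the ones quoted above, and small rounding differences against the figures in the text are expected:

```python
TB = 1024  # GB per TB

power   = {"olap_gb": 16 * TB, "olap_cores": 192, "oltp_gb": 24 * TB, "oltp_cores": 176}
skylake = {"olap_gb":  6 * TB, "olap_cores": 224, "oltp_gb": 12 * TB, "oltp_cores": 224}

for wl in ("olap", "oltp"):
    size_ratio = power[f"{wl}_gb"] / skylake[f"{wl}_gb"]
    power_ctm  = power[f"{wl}_gb"] / power[f"{wl}_cores"]
    sky_ctm    = skylake[f"{wl}_gb"] / skylake[f"{wl}_cores"]
    print(f"{wl.upper()}: {size_ratio:.1f}x max size, "
          f"{power_ctm:.1f} vs {sky_ctm:.1f} GB/core ({power_ctm / sky_ctm:.1f}x)")

# OLAP: 2.7x max size, 85.3 vs 27.4 GB/core (3.1x)
# OLTP: 2.0x max size, 139.6 vs 54.9 GB/core (2.5x)
```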

Full disclosure: these numbers do not represent the highest scaling Intel systems.  In order to find those, you must look at the previous generation of systems.  Some may consider them obsolete, but for customers that must scale beyond 6TB/12TB (OLAP/OLTP) and are unwilling or unable to consider Power Systems, an immediate sunk investment may be their only choice.  (Note to customers in this undesirable predicament: if you really want an independent, third-party verification of potential obsolescence, ask your favorite leasing companies, not associated with or owned by the vendor, what residual value they would assume after 1 year for these systems vs. what they would assume for similar Skylake systems after 1 year.)

The previous “generation” of HPE Superdome, “X”, which, as discussed in my last blog post, shares 0% technology with the Skylake-based HPE Superdome “Flex”, was supported up to 8TB/16TB (OLAP/OLTP) with 384 cores for both, resulting in CTM of 21.3GB/42.7GB per core.  The SGI-derived HPE MC990 X, which is the real predecessor to the new “Flex” system, was supported up to 4TB OLAP with 192 cores and up to 20TB OLTP with 480 cores.

Strangely, “Flex” is only supported for HANA with 2 nodes or chassis, where the “MC990 X” was supported with up to 5 nodes.  It has been over 4 months since “Flex” was announced, and at announcement time HPE loudly proclaimed that “Flex” could support 48TB with 8 chassis/32 sockets: https://news.hpe.com/hewlett-packard-enterprise-unveils-the-worlds-most-scalable-and-modular-in-memory-computing-platform/.  Since that time, some HPE reps have been telling customers that 32TB support with HANA was imminent.  One has to wonder what the holdup is.  First it took a couple of months just to get 128GB DIMM support.  Now it is taking even longer to get more than 2-node support for HANA.  If I were a potential HPE customer, I would be very curious and asking my rep about these delays (and I would have my BS detector set to high sensitivity).

Customers have now been presented with a stark contrast.  On one side, Power Systems has been on a roll: growing market share in HANA, regular increases in supported memory sizes, the ability to handle the largest single-image HANA memory sizes of any vendor, outstanding mainframe-derived reliability and radically better flexibility with built-in virtualization.  That virtualization supports a maximum of 8 concurrent production HANA instances, or 7 production instances plus many dozens of non-prod HANA instances, application servers, non-HANA DBs and/or a wide variety of other applications in a shared pool, all at competitive price points.

On the other hand, Intel based HANA systems seem to be stuck in a rut with decreased maximum memory sizes (admittedly, this may be temporary), anemic increases in CTM, improved RAS that is still not in the league of Power Systems, and a very questionable VMware-based virtualization support filled with caveats, limitations, overhead and poor, at best, sharing of resources.

March 28, 2018 | Uncategorized | Leave a comment

TDI Phase 5 – SAPS based sizing bringing better TCO to new and existing Power Systems customers

SAP made a fundamental and incredibly important announcement this week at SAP TechEd in Las Vegas: TDI Phase 5 – SAPS based sizing for HANA workloads.  Since its debut, HANA has been sized based on a strict memory to core ratio determined by SAP based on workloads and platform characteristics, e.g. generation of processor, MHz, interconnect technology, etc.  This might have made some sense in the early days when much was not known about the loads that customers were likely to experience and SAP still had high hopes for enabling all customer employees to become knowledge workers with direct access to analytics.  Over time, with very rare exception, it turned out that CPU loads were far lower than the ratios might have predicted.

I have only run into one customer in the past two years that was able to drive a high utilization of their HANA systems and that was a customer running an x86 BW implementation with an impressively high number of concurrent users at one point in their month.  Most customers have experienced just the opposite, consistently low utilization regardless of technology.

For many customers, especially those running x86 systems, this has not been an issue.  First, it is not a significant departure from what many have experienced for years, even those running VMware.  Second, to compensate for relatively low memory and socket-to-socket bandwidth combined with high latency interconnects, many x86 systems work best with an excess of CPU.  Third, many x86 vendors have focused on HANA appliances which are rarely utilized with virtualization and are therefore often single instance systems.

IBM Power Systems customers, by comparison, have been almost universal in their concern about poor utilization.  These customers have historically driven high utilization, often over 65%.  Power has up to 5 times the memory bandwidth per socket of x86 systems (without compromising reliability) and very wide and parallel interconnect paths with very low latencies.  HANA has never been offered as an appliance on Power Systems, instead being offered only using a Tailored Datacenter Infrastructure (TDI) approach.  As a result, customers view on-premise Power Systems as a sort of utility, i.e. that they should be able to use them as they see fit and drive as much workload through them as possible while maintaining the Service Level Agreements (SLA) that their end users require.  The idea of running a system at 5%, or even 25%, utilization is almost an affront to these customers, but that is what they have experienced with the memory to core restrictions previously in place.

IBM’s virtualization solution, PowerVM, enabled SAP customers to run multiple production workloads (up to 8 on the largest systems) or a mix of production workloads (up to 7) with a shared pool of CPU resources within which an almost unlimited mix of VMs could run including non-prod HANA, application servers, as well as non-SAP and even other OS workloads, e.g. AIX and IBM i.  In this mixed mode, some of the excess CPU resource not used by the production workloads could be utilized by the shared-pool workloads.  This helped drive up utilization somewhat, but not enough for many.

These customers would like to do what they have historically done.  They would like to negotiate response time agreements with their end user departments then size their systems to meet those agreements and resize if they need more capacity or end up with too much capacity.

The newly released TDI Overview document, http://bit.ly/2fLRFPb, describes the new methodology: “SAP HANA quicksizer and SAP HANA sizing reports have been enhanced to provide separate CPU and RAM sizing results in SAPS”.  I was able to verify Quicksizer showing SAPS, but not the sizing reports.  An SAP expert I ran into at TechEd suggested that getting the sizing reports to determine SAPS would be a tall order, since they would have to include a database of SAPS capacity for every system on the market as well as the number of cores and MHz for each one.  (In a separate blog post, I will share how IBM can help customers calculate utilized SAPS on existing systems.)  Customers are instructed to work with their hardware partner to determine the number of cores required based on the SAPS projected above.  The document goes on to state: “The resulting HANA TDI configurations will extend the choice of HANA system sizes; and customers with less CPU intensive workloads may have bigger main memory capacity compared to SAP HANA appliance based solutions using fixed core to memory sizing approach (that’s more geared towards delivery of optimal performance for any type of a workload).”

Using a SAPS based methodology will be a good start and may result in fewer cores required for the same workload than would previously have been calculated based on a memory/core ratio.  Customers that wish to allocate more or less CPU to those workloads will now have that option, meaning that an even more significant reduction of CPU may be possible.  This will likely result in much more efficient use of CPU resources, more capacity available to other workloads and/or the ability to size systems with fewer resources to drive down the cost of those systems.  Either way helps drive much better TCO by reducing the number and size of systems along with the associated datacenter and personnel costs.

Existing Power customers will undoubtedly be delighted by this news.  Those customers will be able to start experimenting with different core allocations and most will find they are able to decrease their current HANA VM sizes substantially.  With the resources no longer required to support production, other workloads currently implemented on external systems may be consolidated to the newly, right sized, system.  Application servers, central services, Hadoop, HPC, AI, etc. are candidates to be consolidated in this way.

Here is a very simple example:  A hypothetical customer has two production workloads, BW/4HANA and S/4HANA, which require 4TB and 3TB respectively.  For each, HA is required, as are Dev/Test, Sandbox and QA.  Prior to TDI Phase 5, using Power Systems, the 4TB BW system would require roughly 82 cores due to the 50GB/core ratio and the S/4 workload would require roughly 33 cores due to the 96GB/core ratio.  Including HA and non-prod, the systems might look something like:

TDI Phase 4

Note the relatively small number of cores available in the shared pool (which might be less than optimal) and the total number of cores in the system.  Some customers may have elected to increase to an even larger system or utilize additional systems as a result.  Even as it stood, this was already a pretty compelling TCO and consolidation story for customers.

With SAPS based sizing, the BW workload may require only 70 cores and S/4 only 21 cores (both are guesses based on early sizing examples; proper analysis of the SAP sizing reports and the per-core SAPS ratings of servers is required to determine actual core requirements).  The resulting architecture could look like:

TDI Phase 5 est

Note the smaller core count in each system.  By switching to this methodology, lower cost CPU sockets may be employed and processor activation costs decreased by 24 cores per system.  But the number of cores in the shared pool remains the same, so it could still be improved a bit.
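For those who want to reproduce the core counts behind the two figures above, here is a minimal sketch of both sizing approaches.  The 50GB/core and 96GB/core ratios are from the text; the SAPS figures in the second half are purely hypothetical placeholders for what a sizing report and a server’s per-core SAPS rating might provide:

```python
from math import ceil

GB_PER_TB = 1024

def cores_by_ratio(memory_tb: float, gb_per_core: float) -> int:
    """Pre-TDI Phase 5: cores follow directly from the memory-to-core ratio."""
    return ceil(memory_tb * GB_PER_TB / gb_per_core)

print(cores_by_ratio(4, 50))   # BW/4HANA at 50 GB/core -> 82 cores
print(cores_by_ratio(3, 96))   # S/4HANA  at 96 GB/core -> 32 cores ("roughly 33" above)

def cores_by_saps(required_saps: int, saps_per_core: int) -> int:
    """TDI Phase 5: cores follow from the SAPS sizing, independent of memory."""
    return ceil(required_saps / saps_per_core)

# Hypothetical values, for illustration of the method only:
print(cores_by_saps(133_000, 1_900))   # -> 70 cores for the BW workload
print(cores_by_saps(39_000, 1_900))    # -> 21 cores for the S/4 workload
```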

During a landscape session at SAP TechEd in Las Vegas, an SAP expert stated that customers will be responsible for performance and that CPU allocation will not be enforced by SAP through HWCCT as had been the case in the past.  This means that customers will be able to determine the number of cores to allocate to their various instances.  It is conceivable that some customers will find that instead of the 70 cores in the above example, 60, 50 or fewer cores may be required for BW, with decreased requirements for S/4HANA as well.  A customer choosing this more aggressive, hypothetical approach might see the following:

TDI Phase 5 hyp

Note how the number of cores in the shared pool has increased substantially, allowing for more workloads to be consolidated to these systems, further decreasing costs by eliminating those external systems as well as consolidating more SAN and network cards, decreasing computer room space and reducing energy/cooling requirements.

A reasonable question is whether these same savings would accrue to an x86 implementation.  The answer is not necessarily.  Yes, fewer cores would also be required, but to take advantage of a similar type of consolidation, VMware must be employed.  And if VMware is used, then a host of caveats must be taken into consideration:

  1. Overhead, reportedly 12% or more, must be added to the capacity requirements.
  2. I/O throughput must be tested to ensure load times, log writes, savepoints, snapshots and backup speeds that are acceptable to the business.
  3. Limits must be understood, e.g. max memory in a VM is 4TB, which means that the BW system above cannot grow by even 1KB.
  4. Socket isolation is required, as SAP does not permit the sharing of a socket in a HANA production/VMware environment, meaning that reducing core requirements may not result in fewer sockets, i.e. this may not eliminate underutilized cores in an Intel/VMware system.
  5. Non-prod workloads can’t take advantage of capacity not used by production, for several reasons, not the least of which is that SAP does not permit sharing of sockets between prod and non-prod VM instances, not to mention the reluctance of many customers to mix prod and non-prod using a software hypervisor such as VMware even if SAP permitted this.

The bottom line is that most customers, through an abundance of caution or actual experience with VMware, choose to place production on bare metal and non-prod, which does not require the same stack as prod, on VMware.  Workloads which do require the same stack as prod, e.g. QA, are also usually placed on bare metal.  After closer evaluation, this means that TDI Phase 5 will have limited benefits for x86 customers.

This announcement is the equivalent of finally being allowed to use 5th gear on your car after having been limited to only 4 for a long time.  HANA on IBM Power Systems already had the fastest adoption in recent SAP history with roughly 950 customers selecting HANA on Power in just 2 years. TDI Phase 5 uniquely benefits Power Systems customers which will continue the acceleration of HANA on Power.  Those individuals that recommended or made decisions to select HANA on Power will look like geniuses to their CFOs as they will now get the equivalent of new systems capacity at no cost.

September 29, 2017 | Uncategorized | 3 Comments

Intel Skylake has been announced and the self-described HANA “market leader”, HPE, is curiously trailing the field

Intel announced general availability of their “Skylake” processor on the “Purley” platform last week.  Soon after, SAP posted certified HANA configurations for Lenovo and Fujitsu up to 8 sockets and 12TB memory for Suite on HANA (SoH) and S/4HANA (S4) and 6TB for BW on HANA (BWoH).  They also posted certified configurations for Dell and Cisco up to 4-socket systems with 6TB SoH/S4 and 3TB BWoH.  The certified configurations posted for HPE, which describes itself as the HANA market leader, only included up to 4-socket/3TB BWoH configurations, with no configurations for SoH/S4 and nothing for any larger systems.

It is still early and more certified configurations will no doubt emerge over time, but these early results do beg the question: what is going on with HPE?  I checked the most recent press releases from HPE and they did not even mention the Skylake debut, much less their certification with SAP HANA.  If you Google the keywords HPE, Skylake and HANA, you may find a few discussions about HPE’s acquisition of SGI and my previous blog posts speculating about Superdome’s demise and HPE’s misleading of customers about this impending event, but nothing from HPE.

So, I will share a little more speculation as to what this slow start for HPE in the Skylake space might portend.

Option 1 – HPE is not investing the funds necessary to certify all of their possible configurations, including SoH/S4.  Anyone that has been involved with the HANA certification process will tell you that it is very time consuming and expensive.  As you can see from HPE’s primary Intel based competitors, they are all very eager to increase their market share and acted quickly.  Is HPE becoming complacent?  Are they under financial constraints that have not been made public?

Option 2 – HPE’s technology limitations are becoming apparent.  The Converged System 500 is based on ProLiant DL560/580 systems, which support a maximum of 4 sockets.  These systems utilize Intel QPI and now UPI interconnect technologies, i.e. no custom ASICs or ccNUMA switches are required.  The CS900, based on the Superdome X, and the MC990 X (SGI UV 300H) utilize custom ASICs and, in the case of Superdome X, a set of ccNUMA switches.  As I speculated previously, Superdome X is probably at end of life, so it may never see another certification on SAP’s HANA site.  As to the MC990 X, the crystal ball is a bit hazier.  Perhaps HPE is trying to shoot for the moon and hit a number beyond the 20TB for SoH/S4 that is currently supported, meaning a much longer and more complex set of certification tests.  Or perhaps they are running into technical challenges with the new ASICs required to support UPI.

Option 3 – MC990 X is going to officially become HPE’s only high end offering to support Skylake and subsequent processors and Superdome X is going to be announced at end of life.  If this were to happen, it would mean that anyone that had recently purchased such a system would have purchased a system that is immediately obsolete.

If Option 1 turns out to be true, one would have to be concerned about HPE’s future in the HANA space.  If Option 2 turns out to be true, one would have to be really concerned about HPE’s future in the HANA space.  And if Option 3 turns out to be true, why would HPE be waiting?  The answer may be inventory.  If HPE has a substantial inventory of “old” Broadwell based blades and Superdome X chassis, they will undoubtedly want to unload these at the highest price possible, and they know that the value of obsolete systems after such an announcement would drop below the cost of manufacturing.

So, you pick the most likely scenario.  Worst case for HPE is that they are just a little slow or shooting too high.  Worst case for customers is that they purchase a HANA system based on Superdome X and end up with a few hundred thousand dollar boat anchor.  If you work for a company considering the purchase of an HPE Superdome X solution, you may want to ask about its future and, if you find it is at end of life, select another solution for your SAP HANA requirements.

Inevitably, more systems will be published on SAP’s certification page, https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/appliances.html#viewcount=100&categories=certified%2CIntel%20Skylake%20SP .  When that happens, especially if any of my predictions turn out to be true or if they are all wrong and another scenario emerges, I will post an update.

July 20, 2017 | Uncategorized | Leave a comment

HANA on Power hits the Trifecta!

Actually, trifecta would imply only 3 big wins at the same time and HANA on Power Systems just hit 4 such big wins.

Win 1 – HANA 2.0 was announced by SAP with availability on Power Systems simultaneous with Intel based systems.[i]  Previous announcements by SAP had indicated that Power was now on an even footing with Intel for HANA from an application support perspective; however, until this announcement, some customers may have still been unconvinced.  I noticed this on occasion when presenting to customers: I would make such an assertion and see a little disbelief on some faces.  This announcement leaves no doubt.

Win 2 – HANA 2.0 is only available on Power Systems with SUSE SLES 12 SP1 in Little Endian (LE) mode.  Why, you might ask, is this a “win”?  Because true database portability is now a reality.  In LE mode, it is possible to pick up a HANA database built on Intel, make no modifications at all, and drop it on a Power box.  This removes a major barrier for customers that might have considered a move but were unwilling to deal with the hassle, time requirements, effort and cost of an export/import.  Of course, the destination will be HANA 2.0, so an upgrade from HANA 1.0 to 2.0 on the source system will be required prior to a move to Power, among various other migration options.  This subject will likely be covered in a separate blog post at a later date.  This also means that customers that want to test how HANA will perform on Power compared to an incumbent x86 system will have a far easier time doing such a PoC.
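For readers unfamiliar with endianness, the short sketch below (standard Python, nothing HANA-specific) shows why byte order matters for binary compatibility: the same integer is laid out differently in memory on little-endian and big-endian platforms, which is why matching x86’s little-endian layout is what makes the “pick it up and drop it on Power” scenario possible.

```python
import struct

value = 1024
# Little-endian layout (x86 and POWER in LE mode): least significant byte first.
print(struct.pack("<i", value).hex())   # '00040000'
# Big-endian layout (traditional POWER BE mode): most significant byte first.
print(struct.pack(">i", value).hex())   # '00000400'
```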

Win 3 – Support for BW on the IBM E850C @ 50GB/core, allowing this system to now support 2.4TB.[ii]  The previous limit was 32GB/core, meaning a maximum size of 1.5TB.  This is a huge, 56% improvement, which means that this already very competitive platform has become even stronger.

Win 4 – Saving the best for last, SAP announced support for Suite on HANA (SoH) and S/4HANA of up to 16TB with 144 cores on IBM Power E880 and E880C systems.[ii]  Several very large customers were already pushing the previous 9TB boundary and/or had run the SAP sizing tools and realized that more than 9TB would be required to move to HANA.  This announcement now puts IBM Power Systems on an even footing with HPE Superdome X.  Only the lame duck SGI UV 300H has support for a larger single image size @ 20TB, but not by much.  Also notice that to get to 16TB, only 144 cores are required for Power, which means that there are still 48 cores unused in a potential 192-core system, i.e. room for growth to a future limit once appropriate KPIs are met.  Consider that the HPE Superdome X requires all 16 sockets to hit 16TB … makes you wonder how they will achieve a higher size prior to a new chip from Intel.

Win 5 – Oops, did I say there were only 4 major wins?  My bad!  Turns out there is a hidden win in the prior announcement, easily overlooked.  Prior to this new, higher memory support, a maximum of 96GB/core was allowed for SoH and S/4HANA workloads.  If one divides 16TB by 144 cores, the new ratio works out to 113.8GB/core, an 18.5% increase.  Let’s do the same for HPE Superdome X: 16 sockets times 24 cores/socket = 384 cores, and 16TB / 384 cores = 42.7GB/core.  This implies that a POWER8 core can handle 2.7 times the workload of an Intel core for this type of workload.  Back in July, I published a two-part blog post on scaling up large transactional workloads.[iii]  In that post, I noted that transactional workloads access data primarily in rows, not in columns, meaning they traverse columns that are typically spread across many cores and sockets.  Clearly, being able to handle more memory per core and per socket means that less traversing is necessary, resulting in a high probability of significantly better performance with HANA on Power compared to competing platforms, especially when one takes into consideration their radically higher ccNUMA latencies and dramatically lower ccNUMA bandwidth.
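Here is a minimal sketch of the Win 5 arithmetic, using only the figures quoted above:

```python
TB = 1024  # GB

power_gb_per_core = 16 * TB / 144        # E880/E880C at 16TB with 144 cores
hpe_gb_per_core   = 16 * TB / (16 * 24)  # Superdome X: 16 sockets x 24 cores

print(round(power_gb_per_core, 1))                     # 113.8 GB/core
print(round(power_gb_per_core / 96 - 1, 3))            # 0.185 -> 18.5% over the old 96 GB/core
print(round(hpe_gb_per_core, 1))                       # 42.7 GB/core
print(round(power_gb_per_core / hpe_gb_per_core, 1))   # 2.7x memory per core
```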

Taken together, these announcements have catapulted HANA on IBM Power Systems from being an outstanding option for most customers, but with a few annoying restrictions and limits especially for larger customers, to being a best-of-breed option for all customers, even those pushing much higher limits than the typical customer does.

[i] https://launchpad.support.sap.com/#/notes/2235581

[ii] https://launchpad.support.sap.com/#/notes/2188482

[iii] https://saponpower.wordpress.com/2016/07/01/large-scale-up-transactional-hana-systems-part-1/

December 6, 2016 | Uncategorized | 3 Comments

ASUG Webinar next week – Scale-Up Architecture Makes Deploying SAP HANA® Simple

On October 4, 2016, Joe Caruso, Director – ERP Technical Architecture at Pfizer, will join me presenting on an ASUG Webinar.  Pfizer is not only a huge pharmaceutical company but, more importantly, has implemented SAP throughout their business including just about every module and component that SAP has to offer in their industry.  Almost 2 years ago, Pfizer decided to begin their journey to HANA starting with BW.  Pfizer is a leader in their industry and in the world of SAP and has never been afraid to try new things including a large scale PoC to evaluate scale-out vs. scale-up architectures for BW.  After completion of this PoC, Pfizer made a decision regarding which one worked better, proceeded to implement BW HANA and had their go-live just recently.  Please join us to hear about this fascinating journey.  For those that are ASUG members, simply follow this link.

If you are an employee of an ASUG member company, either an Installation or Affiliate member, but not registered for ASUG, you can follow this link to join at no cost.  That link also offers the opportunity for companies to join ASUG, a very worthwhile organization that offers chapter meetings all over North America, a wide array of presentations at their annual meeting during Sapphire, the BI+Analytics conference coming up in New Orleans, October 17 – 20, 2016, hundreds of webinars, not to mention networking opportunities with other member companies and the ability to influence SAP through their combined power.

This session will be recorded and made available at a later date for those not able to attend.  When the link to the recording is made available, I will amend this blog post with that information.

September 29, 2016 | Uncategorized | 5 Comments

HoP keeps Hopping Forward – GA Announcement and New IBM Solution Editions for SAP HANA

Almost two years ago, I speculated about the potential value of a HANA on Power solution.  In June, 2014, SAP announced a Test and Evaluation program for Scale-up BW HANA on Power.  That program shifted into high gear in October, 2014 and roughly 10 customers got to start kicking the tires on this solution.  Those customers had the opportunity to push HANA to its very limits.  Remember, where Intel systems have 2 threads per core, POWER8 has up to 8 threads per core.  Where the maximum size of most conventional Intel systems can scale to 240 threads, the IBM POWER E870 can scale to an impressive 640 threads and the E880 system can scale to 1536 threads.  This means that IBM is able to provide an invaluable test bed for system scalability to SAP.  As SAP’s largest customers move toward Suite “4” HANA (S4HANA), they need to have confidence in the scalability of HANA and IBM is leading the way in proving this capability.

A Ramp-up program began in March with approximately 25 customers around the world being given the opportunity to have access to GA level code and start to build out BW PoC and production environments.  This brings us forward to the announcement by SAP this week @ SapphireNow in Orlando of the GA of HANA on Power.  SAP announced that customers will have the option of choosing Power for their BW HANA platform, initially to be used in a scale-up mode, with plans to support scale-out BW, Suite on HANA and the full complement of side-car applications over the next 12 to 18 months.

Even the most loyal IBM customer knows the comparative value of other BW HANA solutions already available on the market.  To this end, IBM announced new “solution editions”.  A solution edition is simply a packaging of components, often with special pricing, to match expectations of the industry for a specific type of solution.  “Sounds like an appliance to me,” says the guy with a Monty Python type of accent and intonation (no, I am not making fun of the English and am, in fact, a huge fan of Cleese and company).  True, if one were to look only at the headline and ignore the details.  In reality, IBM is looking at these as starting points, not end points, and most certainly not as any sort of implied limitation.  Remember, IBM Power Systems are based on the concept of Logical Partitions using PowerVM virtualization.  As a result, a Power “box” is simply that: a physical container within which one or multiple logical systems reside, and the size of each “system” is completely arbitrary, based on customer requirements.

So, a “solution edition” simply defines a base configuration designed to be price competitive with the industry while allowing customers to flexibly define “systems” within it to meet their specific requirements and add incremental capability above that minimum as appropriate for their business needs.  While a conventional x86 system might have 1TB of memory to support a system that requires 768GB, leaving the rest unutilized, a Power System provides for that 768GB system and allows the rest of the memory to be allocated to other virtual machines.  Likewise, HANA is often characterized by periods of 100% utilization, in support of the instantaneous response time demanded of ad-hoc queries, followed by unfathomably long periods (in computer terms) of little to no activity.  Many customers might consider this to be a waste of valuable computing resource and look forward to being able to harness it for the myriad of other business purposes that their businesses actually depend on.  This is the promise of Power.  Put another way, the appliance model results in islands of automation like we saw in the 1990s, whereas Power continues the model of server consolidation and virtualization that has become the modus operandi of the 2000s.

But, says the pitchman for a made-for-TV product, if you call right now, we will double the offer.  If you believe that, then you are probably not reading my blog.  If a product were that good, they would not have to give you more for the same price.  Power, on the other hand, takes a different approach.  Where conventional BW HANA systems offer a maximum size of 2TB for a single node, Power has no such inherent limitation.  To handle larger sizes, conventional systems must “scale out” with a variety of techniques, potentially significantly increased costs and complexity.  Power offers the potential to simply “scale up”.  Future IBM Power solutions may be able to scale up to 4TB, 8TB or even 16TB.  In a recent post to this blog, I explained that to match the built-in redundancy for mission critical reliability of memory in Power, x86 systems would require memory mirroring, doubling the amount of memory with an associated increase in CPU overhead and reduction in memory bandwidth.  SAP is pushing the concepts of MCOS, MCOD and multi-tenancy, meaning that customers are likely to have even more of their workloads consolidated on fewer systems in the future.  This will result in demand for very large scaling systems with unprecedented levels of availability.  Only IBM is in position to deliver systems that meet this requirement in the near future.

Details on these solution editions can be found at http://www-03.ibm.com/systems/power/hardware/sod.html
In the last few days, IBM and other organizations have published information about the solution editions and the value of HANA on Power.  Here are some sites worth visiting:

Press Release: IBM Unveils Power Systems Solutions to Support SAP HANA
Video: The Next Chapter in IBM and SAP Innovation: Doug Balog announces SAP HANA on POWER8
Case study: Technische Universität München offers fast, simple and smart hosting services with SAP and IBM 
Video: Technische Universität München meet customer expectations with SAP HANA on IBM POWER8 
Analyst paper: IBM: Empowering SAP HANA Customers and Use Cases 
Article: HANA On Power Marches Toward GA

Selected SAP Press
ComputerWorld: IBM’s new Power Systems servers are just made for SAP Hana
eWEEK, IBM Launches Power Systems for SAP HANA
ExecutiveBiz, IBM Launches Power Systems Servers for SAP Hana Database System; Doug Balog Comments
TechEYE.net: IBM and SAP work together again
ZDNet: IBM challenges Intel for space on SAP HANA
Data Center Knowledge: IBM Stakes POWER8 Claim to SAP Hana Hardware Market
Enterprise Times: IBM gives SAP HANA a POWER8 boost
The Platform: IBM Scales Up Power8 Iron, Targets In-Memory

Also a planning guide for HANA on Power has been published at http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102502 .

May 7, 2015 | Uncategorized | 3 Comments

The hype and the reality of HANA

Can you imagine walking into a new car dealership and, before you can say anything about your current vehicle and needs, a salesperson immediately offers to show you the latest, greatest and most popular new car?  Of course you can, since this is what that person gets paid to do.  Now, imagine the above scenario where the salesperson says “how is your current car not meeting your needs?” and follows it up with “I don’t want you to buy anything from me unless it brings you substantial value”.  After smelling salts have been administered, you might recover enough to act like a cartoon character checking your ears to make sure they are functioning properly and ask the salesperson to repeat what he or she said.

The first scenario is occurring constantly with SAP account execs, systems integrators and consultants playing the above role of new car salesperson.  The second rarely happens, but that is exactly the role that I will play in this blog post.

The hype around HANA could not be much louder or deeper than it is currently.  As bad as that might be, the FUD (Fear, Uncertainty and Doubt) is worse.  The hype suggests that HANA can do everything except park your car, since that is a future capability (not really, I just made that up).  At the very worst, this hype suggests a vision for the future that, while not solving world hunger or global warming, might improve the operations and profitability of companies.  The FUD is more insidious.  It suggests that unless you act like lambs and follow the lead of the individual telling this tale, you will be like a lost sheep, out of support and further out of the mainstream.

I will address the second issue first.  As of today, the beginning of August, SAP has made absolutely no statement indicating they will discontinue support for any platform, OS or DB.  In fact, a review of SAP notes shows support for most OSs with no end date, and even DB2 9.7 has an end of support date that is several years past that of direct standard support from IBM!  So, what gives?  Is SAP saying one thing internally and another externally?  I have been working with SAP for far too long and know their business practices too well to believe that they would act in such a two-faced manner, not to mention exposing themselves to another round of expensive and draining lawsuits.  Instead, I place the arrow of shame squarely on those rogue SAP account execs that are perpetuating this story.  The next time that one of them makes this sort of suggestion, turn the tables on them.  Ask them to provide you with a statement, in writing, backed up with official press releases or SAP notes, showing that this is the case.  If they can’t, it is reasonable to conclude that they are simply trying to use the age-old FUD tactic to get you to spend more money with them now rather than waiting until/if SAP actually decides to stop supporting a particular type of HW, OS or DB.

And now for the first issue; the hype around HANA.  HANA offers dramatic benefits to some SAP customers.  Some incarnation of HANA may indeed be inevitable for the vast majority.  However, the suggestion that HANA is the end-all be-all flies in the face of many other solutions on the market, many of which are radically less expensive and often have dramatically lower risk.  Here is a very simple example.

Most customers would like to reduce the time and resources required to run batch jobs.  It seems as if there is not a CFO anywhere that does not want to reduce the month end/quarter close from multiple days down to a day or less.  CFOs are not the only ones with that desire, as certain functions must come to a halt during a close and/or the availability requirements go sky high during this time period, requiring higher IT investments.  SAP has suggested that HANA can achieve exactly this; however, it is not quite clear whether this will require BW HANA, Suite on HANA, some combination of the two or even another as yet unannounced HANA variant.  I am sure that if you ask a dozen consultants, you will get a dozen different answers on how to achieve these goals with HANA, and it is entirely possible that each of them is correct in their own way.  One thing is certain however: it won’t come cheaply.  Not only will a company have to buy HANA HW and SW, but they will have to pay for a migration and a boatload of consulting services.  It will also not come without risk.  BW HANA and Suite on HANA require a full migration.  Those systems become the exclusive repository of business critical data.  HANA is currently in its 58th revision in a little over two years.  HA, DR and backup/recovery tools are still evolving.  No benchmarks for Suite on HANA have been published, which means that sizing guidelines are based purely on the size of the DB, not on throughput or even users.  Good luck finding extensive large scale customer references, or even medium sized ones, in your industry.  To make matters worse, a migration to HANA is a one way path.  There is no published migration methodology to move from HANA back to a conventional DB.  It is entirely possible that Suite on HANA will be much more stable than BW HANA was, that these systems will scream on benchmarks, that all of those HA, DR, backup/recovery and associated tools will mature in short order and that monkeys will fly.  Had the word risk not been invented previously, Suite on HANA would probably be the first definition in the dictionary for it.

So, is there another way to achieve those goals, maybe one that is less expensive and does not require a migration, software licenses or consulting services?  Of course not, because that would be as impossible to believe as the above mentioned flying monkeys.  Well, strap on your red shoes and welcome to Oz, because it is not only possible but many customers are already achieving exactly those gains.  How?  By utilizing high performance flash storage subsystems like the IBM FlashSystem.  Where transaction processing typically accesses a relatively small amount of data cached in database buffers, batch, month end and quarter close jobs tend to be very disk intensive.  A well-tuned disk subsystem can deliver access times of around 5 milliseconds.  SSDs can drop this to about 1 millisecond.  A FlashSystem can deliver incredible throughput while accessing data in as little as 100 microseconds.  Many customers have seen batch times reduced to a third or less of what they experienced before implementing FlashSystem.  Best of all, there are no efforts around migration, recoding or consulting, and no software license costs.  A FlashSystem is “just another disk subsystem” to SAP.  If an IBM SVC (SAN Volume Controller) or V7000 is placed in front of a FlashSystem, data can be transparently replicated from a conventional disk subsystem to FlashSystem without even a system outage.  If the subsystem does not produce the results expected, the system can be repurposed or, if tried out via a PoC, simply returned at no cost.  To date, few, if any, customers have returned a FlashSystem after completing a PoC, as they have universally delivered such incredible results that the typical outcome is an order for more units.
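A toy model helps explain why batch times drop to roughly a third rather than by the full factor of the latency improvement: only the portion of the job actually waiting on storage shrinks.  The 70% I/O-bound figure and 8-hour window below are my assumptions, not measurements:

```python
def batch_hours(total_hours: float, io_bound_fraction: float, latency_speedup: float) -> float:
    """Simple model: CPU time is unchanged, storage wait time shrinks with latency."""
    io_hours  = total_hours * io_bound_fraction
    cpu_hours = total_hours - io_hours
    return cpu_hours + io_hours / latency_speedup

# An 8-hour batch window, 70% storage-bound, moving from ~5 ms disk to ~100 us flash:
print(round(batch_hours(8, 0.70, 5_000 / 100), 1))   # ~2.5 hours, roughly a third
```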

Another super simple, no-risk option is to consider using the old 2-tier approach to SAP systems.  In this situation, instead of utilizing separate database and application server systems/partitions, database and app server instances are housed within a single OS system/partition.  Some customers don’t realize how “chatty” app servers are, with an amazing number of very small queries and data running back and forth to DB servers.  As fast as Ethernet is, it is as slow as molasses compared to the speed of an inter-process communication within an OS.  As crazy as it may seem, simply by consolidating DB and app servers into a single OS, batch and close activity may speed up dramatically.  And here is the no-risk part.  Most customers have QA systems, and from an SAP architecture perspective, there is no difference between having app servers within a single OS and having them on separate OSs.  As a result, customers can simply give it a shot and see what happens.  No pain other than a little time to set up and test the environment.  Yes, this is the salesman telling you not to spend any money with him.

This is not the only business case for HANA.  Others involve improving reporting or even doing away with reporting in favor of real-time analytics.  Here is the interesting part.  Before Suite on HANA or even BW HANA became available, SAP had introduced real-time replication into side-car HANA appliances.  With these devices, the source of business critical data is kept on conventional databases.  You remember those archaic old systems that are reliable, secure and scalable, around which you have built a best practices environment, and for which you have already purchased a DB license and are simply paying maintenance.  Perhaps naively, I call this the 95-5 rule, not 80-20.  You may be able to achieve 95% of your business goals with such a side-car without risking a migration or the integrity of your data.  Also, since you will be dealing with a subset of data, the cost of the SW license for such a device will likely be a small fraction of the cost of an entire DB.  Even better, as an appliance, if it fails, you just replace the appliance, as the data source has not been changed.  Sounds too good to be true?  Ask your SAP AE and see what sort of response you get.  Or make it a little more interesting and suggest that you may be several years away from being ready to go to Suite on HANA but could potentially do a side-car in the short term, and observe the way the shark will smell blood in the water.  By the way, since you have to be on current levels of SAP software in order to migrate to Suite on HANA and reportedly 70% of customers in North America are not current (no idea about the rest of the world), this may not even be much of a stretch.

And I have not even mentioned DB2 BLU yet but will leave that for a later blog posting.

August 5, 2013 | Uncategorized | 4 Comments

Oracle Exadata for SAP revisited

Oracle’s Exadata, Exalogic and Exalytics systems have failed to take the market by storm, but that has not stopped Oracle from pushing them as much as possible at every opportunity.  Recently, an SAP customer started to investigate the potential of an Exadata system for a BW environment.  I was called in to explain the issues surrounding such an implementation.  A couple of disclaimers before I start: I am not an Oracle expert, nor have I placed hands on an Exadata system, so what I present here is the result of my effort to get educated on this topic.  Thanks go to some brilliant people in IBM that are incredible Oracle and SAP experts and whose initials are R.B., M.C., R.K. and D.R.

My first question is: why would any customer implement BW on a non-strategic platform such as Exadata when BW HANA is available?  It turns out there are some reasons, albeit a bit of a stretch.  Some customers may feel that BW HANA is immature and lacks the ecosystem and robust tools necessary for use in production today.  This is somewhat valid and, from my experience, many customers tend to wait a year or so after V1.0 of any product to consider it for production.  That said, even prior to the GA of BW HANA, SAP reported that HANA sales were very strong, presumably for non-BW purposes.  Some customers may be abandoning the V1.0 principle in cases where there may be no other way, or very limited ways, of accomplishing the task at hand, e.g. COPA.  The jury is out on BW HANA, as there are valid and viable solutions today, including BW with conventional DBs and BWA.  Another reason revolves around sweetheart deals where Oracle gives 80% or larger discounts to get the first footprint in a customer’s door.  Of course, sweetheart deals usually apply only to the first installation, rarely to upgrades or additional systems, which may result in an unpleasant surprise at that time.  Oracle has also signed a number of ULAs (Unlimited License Agreements) with some customers that include an Exadata as part of that agreement.  Some IT departments have learned about this only when systems actually arrived on their loading docks, not always something they were prepared to deal with.

Besides the above, what are the primary obstacles to implementing Exadata?  Most of these considerations are not limited to SAP.  Let’s consider them one at a time.

Basic OS installation and maintenance.  It turns out that despite looking like a single system to the end user, Exadata operates like two distinct clusters to the administrator and DBA.  One is the RAC database cluster, which involves a minimum of two servers in a quarter rack of the “EP” nodes or a full rack of “EX” nodes, and up to 8 servers in a full rack of the “EP” nodes.  Each node must not only have its own copy of Oracle Enterprise Linux, but also a copy of the Oracle database software, Oracle Grid Infrastructure (CRS + ASM) and any Oracle tools that are desired, of which the list can be quite significant.  The second is the storage cluster, which involves a minimum of 3 storage servers for a quarter rack, 7 for a half rack and 14 for a full rack.  Each of these nodes has its own copy of Oracle Enterprise Linux and Exadata Storage software.  So, for a half rack of “EP” nodes, a customer would have 4 RAC nodes, 7 storage nodes and 3 InfiniBand switches, which may require their own unique updates.  I am told that the process for applying an update is complex, manual and typically sequential.  Updates typically come out about once a month, sometimes more often.  Most updates can be applied while the Exadata server is up, but storage nodes must be brought down, one at a time, to apply maintenance.  When a storage node is taken down for maintenance, apparently its data may not be preserved, i.e. it must be wiped clean, which means that after a patchset is applied the data must be copied back from one of its ASM-created copies.

The SAP Central Instance (CI) may be installed on an Exadata server, but if this is done, several issues must be considered.  One, the CI must be installed on every RAC node, individually.  The same goes for any updates.  When storage nodes are updated, the SAP/Exadata best practices manual states that the CI must be tested afterwards, i.e. you have to bring down the CI and consequently must incur an outage of the SAP environment.

Effective vs. configured storage.  Exadata offers no hardware RAID for storage, only ASM software-based RAID10, i.e. it stripes the data across all available disks and mirrors those stripes to a minimum of one other storage server, unless you are using SAP, in which case the best practices manual states that you must mirror across 3 storage servers in total.  This offers effectively the same protection as RAID5 with a spare, i.e. if you lose a storage server, you can fail over access to the storage behind that storage server, which in turn is protected by a third server.  But this comes at the cost of the effective amount of storage, which is 1/3 of the total installed.  So, for every 100TB of installed disk, you only get 33TB of usable space, compared to RAID5 with a 6+1+1 configuration, which results in 75TB of usable space.  Not only is the ASM triple copy a waste of space, but every spinning disk consumes energy and creates heat which must be removed, and increases the number of potential failures which must be dealt with.
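The usable-capacity arithmetic above is easy to reproduce; the 6+1+1 RAID5 layout is the comparison used in the text:

```python
raw_tb = 100

asm_triple_mirror_usable = raw_tb / 3        # every stripe is kept three times
raid5_6_1_1_usable       = raw_tb * 6 / 8    # 6 data disks out of every 8 installed (6+1+1)

print(round(asm_triple_mirror_usable), round(raid5_6_1_1_usable))   # 33 75
```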

Single points of failure.  Each storage server has not one, but over a dozen single points of failure.  The infiniband controller, the disk controller and every single disk in the storage server (12 per storage server) represent single points of failure.  Remember, data is striped across every disk, which means that if a disk is lost, the stripe cannot be repaired and another storage server must fulfill that request.  No problem, as you usually have 1 or 2 other storage servers to which that data has been replicated.  Well, actually a big problem, in that the data is striped not just across the disks within a storage server, but across all available storage servers.  In other words, while a single database request might access data behind a single storage server, complex or large requests will have data spread across all available storage servers.  This is terrific in normal operations as it optimizes parallel read and write operations, but when a storage server fails and another picks up its duties, the one that picks up those duties now has twice the amount of storage to manage, resulting in more contention for its disks, cache, infiniband and disk controllers, i.e. the tuning for that node is pretty much wiped out until the failed storage node can be fixed.
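
To see why the tuning falls apart, consider a purely illustrative model of how the I/O share per storage server changes when one server fails and a single surviving mirror partner absorbs its work:

```python
# Illustrative model only: I/O striped evenly across N storage servers, with
# a failed server's work absorbed by one surviving mirror partner.

def io_share_after_failure(total_servers: int, failed: int = 1):
    """Return (normal share, share on the server absorbing the failed
    server's duties), as fractions of total I/O."""
    normal = 1.0 / total_servers
    absorbing = normal * (1 + failed)   # its own share plus the failed server's
    return normal, absorbing

# Half-rack example from above: 7 storage servers.
normal, absorbing = io_share_after_failure(total_servers=7)
print(f"Normal share per storage server : {normal:.1%}")
print(f"Share on the absorbing server   : {absorbing:.1%}  (2x the contention)")
```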

Smart scans, not always so smart.  Like many other specialized data warehouse appliance solutions, including IBM’s Netezza, Exadata does some very clever things to speed up queries.  For instance, Exadata uses a range “index” to describe the minimum and maximum values of each column for a selected set of rows.  In theory, this means that if a “where” clause requests data that is not contained in a certain set of rows, those rows will not be retrieved.  Likewise, “smart scan” retrieves only the columns that are requested, not all columns in the table, for the selected query.  Sounds great, and several documents have explained when and why this works and does not work, so I will not try to do so here.  Instead, I will point out the operational difficulties.  The storage “index” is not a real index and works only with a “brute force” full table scan.  It is not a substitute for an intelligent partitioning and indexing strategy.  In fact, the term that Oracle uses is misleading, as it is not a database index at all.  Likewise, smart scans are brute-force full table scans and don’t work with indexes.  This makes them useful for only a small subset of queries, those that would normally do a full table scan.  Neither of these is well suited for OLTP, as OLTP typically deals with individual rows and uses indexes to determine the row to be queried or updated.  This means that these Exadata technologies are useful primarily for data warehouse environments.
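
The storage “index” behaves much like a zone map: per-region min/max values that are consulted only during a full scan.  Here is a minimal conceptual sketch of the idea, not Oracle’s implementation; the table, column name and region boundaries are made up for illustration:

```python
# Conceptual sketch of the storage-"index" idea described above: per-region
# min/max values let a full scan skip regions, but it is not a real index.
# This is NOT Oracle's implementation; the data is invented for illustration.

regions = [
    # (min_order_date, max_order_date, row_ids in this storage region)
    ("2011-01-01", "2011-03-31", list(range(0, 1000))),
    ("2011-04-01", "2011-06-30", list(range(1000, 2000))),
    ("2011-07-01", "2011-09-30", list(range(2000, 3000))),
]

def full_scan_with_zone_map(lo, hi):
    """Scan only regions whose [min, max] range overlaps the predicate."""
    touched = []
    for rmin, rmax, rows in regions:
        if rmax < lo or rmin > hi:
            continue                 # whole region skipped without reading rows
        touched.extend(rows)         # still a brute-force scan of this region
    return touched

# A "where order_date between '2011-05-01' and '2011-05-31'" style predicate
# reads only the second region; a single-row lookup by key gets no such help.
rows = full_scan_with_zone_map("2011-05-01", "2011-05-31")
print(f"Rows scanned: {len(rows)} of 3000")
```

Note that the skipping only helps when the data happens to be clustered on the predicate column, which is exactly why it is no substitute for a deliberate partitioning and indexing strategy.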

So, let’s consider SAP BW.  Customers of SAP BW may have ad-hoc queries enabled, where data is accessed in an unstructured and often poorly tuned way.  For these types of queries, smart scans may be very useful.  But those same customers may have dozens of reports and “canned” queries which are very specific about what they are designed to do and have dozens of well-constructed indexes to enable fast access.  Those types of queries would see little or no benefit from smart scans.  Furthermore, SAP offers BWA and HANA, which do an amazing job of delivering outstanding performance on ad-hoc queries.

Exadata also uses Hybrid Columnar Compression (HCC), which is quite effective at reducing the size of tables; Oracle claims about a 10 to 1 reduction.  This works very well at reducing the amount of space required on disk and in the solid state disk caches, but at a price that some customers may be unaware of.  One of the “costs” is that to enable HCC, processing must be done during construction of the table, meaning that importing data may take substantially longer.  Another “cost” is the voids that are left when data is inserted or deleted.  HCC works best for infrequent bulk-load updates, e.g. remove the entire table and reload it with new data, not daily or more frequent inserts and deletes.  In addition to the voids that it leaves, for each insert, update or delete, the “compression unit” (CU) must first be uncompressed and then recompressed, with the entire CU written out to disk, as the solid state caches are for reads only.  This can be a time-consuming process, once again making this technology unsuitable for OLTP and problematic even for DW/BW databases with regular update processes.  HCC is unique to Exadata, which means that data backed up from an Exadata system may only be recovered to an Exadata system.  That is fine if Exadata is the only type of system used, but not so good if a customer has a mixed environment with Exadata for production and, perhaps, conventional Oracle DB systems for other purposes, e.g. disaster recovery.
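
The update penalty is easy to visualize with a toy model of a compression unit.  This is a conceptual sketch only, using zlib as a stand-in compressor, not Oracle’s actual HCC on-disk format: any single-row change forces the whole CU to be decompressed, rewritten and recompressed.

```python
# Toy model of the compression-unit (CU) update penalty described above.
# Conceptual sketch only; zlib is a stand-in, not Oracle's HCC format.
import json
import zlib

def build_cu(rows):
    """Bulk-load path: compress a whole batch of rows into one CU."""
    return zlib.compress(json.dumps(rows).encode())

def update_row_in_cu(cu, index, new_row):
    """Single-row update: the ENTIRE CU is decompressed, modified and
    recompressed, and the whole unit must then be written back to disk."""
    rows = json.loads(zlib.decompress(cu))
    rows[index] = new_row
    return build_cu(rows)

rows = [{"id": i, "qty": i % 7, "status": "OPEN"} for i in range(10_000)]
cu = build_cu(rows)
print(f"Compressed CU size: {len(cu):,} bytes for {len(rows):,} rows")

# One changed row triggers work proportional to the whole CU, not the row.
cu = update_row_in_cu(cu, 42, {"id": 42, "qty": 3, "status": "CLOSED"})
```

Frequent inserts and deletes also leave the voids mentioned above, so the effective compression ratio degrades over time unless the table is periodically rebuilt.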

Speaking of backup, it is interesting to note that Oracle only supports their own Infiniband-attached backup system.  The manuals state that other “light weight” backup agents are supported, but apparently third-party products like Tivoli Storage Manager, EMC’s Legato NetWorker or Symantec NetBackup are not considered “light weight” and, consequently, are not supported.  Perhaps you use a typical split-mirror or “flash” backup image that allows you to attach a static copy of the database to another system for backup purposes with minimal interruption to the production environment.  This sort of copy is often kept around for 24 hours in case of data corruption, allowing for a very fast recovery.  Sorry, but not only can’t you use whatever storage standard you may have in your enterprise, since Exadata has its own internal storage, but you can’t use that sort of backup methodology either.  Same goes for DR, where you might use storage replication today.  Not an option; only Oracle Data Guard is supported for DR.

Assuming you are still unconvinced, there are a few other “minor” issues.  SAP is not “RAC aware”, as has been covered in a previous blog posting.  This means that Exadata performance is limited in two ways: first, a single RAC node represents the maximum possible capacity for a given transaction or query, since no parallel queries are issued by SAP.  Second, for data requested by an OLTP transaction, such as one issued by ECC or CRM, unless the application server uniquely associated with a particular RAC node requests data that is hosted on that same RAC node, the data will have to be transferred across the infiniband network within the Exadata system at speeds that are 100,000 times slower than local memory accesses.  Exadata supports no virtualization, meaning that you have to go back to the 1990s concept of separate systems for separate purposes.  While some customers may get “sweetheart” deals on the purchase of their first Exadata system, unless they are unprecedentedly brilliant negotiators, and better than Oracle at that, it is unlikely that those “sweetheart” conditions will last, meaning that upgrades may be much more expensive than the first expenditure.  Next is granularity.  An Exadata system may be purchased in a ¼ rack, ½ rack or full rack configuration.  While storage nodes may be increased separately from RAC nodes, these upgrades are also not very granular.  I spoke with a customer recently that wanted to upgrade their system from 15 cores to 16 on an IBM server.  As they had a Capacity on Demand server, this was no problem.  Try adding just 6.25% CPU capacity to an Exadata system when the minimum granularity is 100%!!  And the next level of granularity is 100% on top of the first, assuming you went from ¼ to ½ to full rack.
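
The granularity gap is stark when you put numbers on it.  A trivial illustration using the core counts from the example above:

```python
# Trivial illustration of the upgrade-granularity point made above.

# Capacity on Demand: activate one more core on a 15-core system,
# i.e. one core out of the resulting 16 (the 6.25% figure cited above).
cod_step = (16 - 15) / 16
print(f"CoD upgrade step     : {cod_step:.2%}")

# Exadata: the smallest compute upgrade path described above is a whole
# rack step, e.g. quarter rack -> half rack, which doubles the configuration.
exadata_step = (2 - 1) / 1
print(f"Exadata upgrade step : {exadata_step:.0%}")
```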

Also consider best practices for High Availability.  Of course, we want redundancy among nodes, but we usually want to separate components as much as possible.  Many customers that I have worked with place each node in an HA cluster in separate parts of their datacenter complex, often in separate buildings on their campus, or even with geographic separation.  A single Exadata system, while offering plenty of internal redundancy, does not protect against the old “water line” break, a fire in that part of the datacenter, or someone hitting the big red button.  Of course, you can address that by adding another ¼ rack or larger Exadata system, but that comes with more storage that you may or may not need and a mountain of expensive software.  Remember, when you utilize conventional HA for Oracle, Oracle’s terms and conditions allow your Oracle licenses to transfer, temporarily, to that backup server so that additional licenses are not required.  No such provision exists for Exadata.

How about test, dev, sandbox and QA?  Well, either you create multiple separate clusters within each Exadata system, each with a minimum of 2 RAC nodes and 3 storage nodes, or you have to combine different purposes and share environments that your internal best practices suggest should be separated.  The result is that either multiple non-prod systems or larger systems with considerable excess capacity may be required.  Costs, of course, go up proportionately or, worse, may not be part of the original deal and may receive a different level of discount.  This compares to a virtualized Power Systems box, which can host partitions for dev, test, QA and DR replication servers simultaneously and without the need for any incremental hardware, beyond perhaps memory.  In the event of a disaster declaration, capacity is automatically shifted toward production, but dev, test and QA don’t have to be shut down unless the memory for those partitions is needed for production.  Instead, those partitions simply get the “left over” cycles that production does not require.

Bottom line:  Exadata is largely useful only for infrequently updated DW environments, not the typical SAP BW environment; provides acceleration for only a subset of typical queries; is not useful for OLTP like ECC and CRM; is inflexible, lacking virtualization and offering poor granularity; can be very costly once a proper HA environment is constructed; requires non-standard and potentially duplicative backup and DR environments; is a potential maintenance nightmare; and is not strategic to SAP.

I welcome comments and will update this posting if anyone points out any factual errors that can be verified.

 

I just found a blog that has a very detailed analysis of the financials surrounding Exadata.  It is interesting to note that the author came to conclusions similar to mine, albeit from a completely different perspective.  http://wikibon.org/wiki/v/The_Limited_Value_of_Oracle_Exadata

May 17, 2012 Posted by | Uncategorized | 6 Comments