SAPonPower

An ongoing discussion about SAP infrastructure

IBM’s HANA solution

IBM’s implementation of an SAP HANA appliance is nothing short of a technological coup de grâce over the competition. Those are words you would never normally see me write, as I am not one for superlatives, but in this case they are appropriate. This is not to say that I am contradicting anything I said in my posting last year, where I discussed when and where HANA is the right choice: https://saponpower.wordpress.com/2011/09/18/get-all-the-benefits-of-sap-hana-2-0-today-with-no-risk-and-at-a-fraction-of-the-cost/

As expected, HANA is being used for more applications and is about to reach general availability for BW. SAP reports brisk sales of systems and our experience echoes this, especially for proof of concept systems. Not all of the warts of a V1.0 product have been overcome, but the challenges are being met by a very determined development team at SAP, backed by a corporate mandate. It is only a matter of time before customers move more and more HANA applications into productive environments. That said, conventional databases and systems are not going away any time soon. Many systems, such as ERP, CRM and SCM, run brilliantly on conventional databases; this is a tribute to the strong architecture SAP delivered in the past. There are pieces of those systems that are “problem” areas, and SAP is rapidly deploying fixes for them, often as HANA-based point solutions, e.g. CO-PA. SAP envisions a future in which HANA replaces conventional databases, but not only are there challenges to be overcome, there are also simply not that many problems with databases built on DB2, Oracle, SQL Server or Sybase, and therefore not yet a compelling business case for change.

Of course, I am not trying to suggest that HANA is inappropriate for some customers or that it does not deliver outstanding results in many cases. In many situations, consolidating a BW and BWA pair of solutions onto HANA makes tremendous sense. As we move into the future, HANA will make progressively more sense to more and more customers. And this brings me back to my original “superlative”-laden statement: how does IBM deliver what none of the competitors do?

Simply put (yes, I know, I rarely put anything simply), IBM’s HANA appliance utilizes a single, high-performance stack regardless of how small or large a customer’s environment is. And with IBM’s solution, as demonstrated at Sapphire two weeks ago, 100TB across 100 nodes is neither a challenge nor a limit of the architecture. By the way, Hasso Plattner unveiled this solution in his keynote speech at Sapphire and, surprisingly, called out and commended IBM. Let us delve a little deeper and get a little less simple.

At the heart of the IBM solution is GPFS, the General Parallel File System. This is, strangely enough, a product of IBM Power Systems. It allows striping of data across local SSDs and HDDs, spanning of data across clustered systems, replication of data, high availability and disaster recovery. On a single-node system, using the same number of drives, GPFS can be expected to deliver up to twice the IOPS of an ext3 file system. When, not if, a customer grows and needs either larger systems or multiple systems, the file system stays the same and simply adapts to the new requirements.

GPFS owes its roots to high performance computing. Customers trying to find the answer (not 42, for those of you geeky enough to understand what that means) to complex problems often required dozens, hundreds or even thousands of nodes, all of which had to be connected to a single set of data. As these solutions would often run for ridiculous periods of time, sometimes counted in months or even years, the file system upon which they relied simply could not break, no matter what the underlying hardware did. ASCI, the US Accelerated Strategic Computing Initiative, which focused on, among other things, simulating the effects of decay on stored nuclear weapons, drove this requirement. The same requirements exist in many other “grand challenge” problems, whether Deep Blue or Watson and their ability to play games better than humans, or “simple” problems such as unfolding DNA, figuring out how weather systems work, or making airplane wings fly more efficiently. More modestly, allowing thousands of scientists and engineers to collaborate using a single file system, or thousands of individuals to access a file system without regard to location or the boundaries of the underlying storage technology, e.g. IBM SONAS, are results of this technology. Slightly less modestly, GPFS is the file system used by DB2 pureScale and, historically, by Oracle RAC; even though Oracle ASM is now supported on all platforms, GPFS is still frequently used despite being an additional cost over ASM, which is included with the RAC license. The outcome is an incredibly robust, fault-resilient and consistent file system.
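To make the striping idea a bit more concrete, here is a minimal, purely illustrative Python sketch. This is not GPFS code and not its actual placement algorithm; the node names and block size are invented for illustration. It simply shows how a file’s blocks might be spread round-robin across the local drives of several nodes so that reads and writes are served by many drives in parallel:

    # Illustrative only: a toy model of striping a file's blocks across
    # the local drives of several cluster nodes, GPFS-style. Real GPFS
    # placement, block sizes and failure groups are far more sophisticated.

    BLOCK_SIZE_MB = 1                                # hypothetical stripe width
    NODES = ["node1", "node2", "node3", "node4"]     # hypothetical cluster


    def stripe(file_size_mb, nodes=NODES, block_mb=BLOCK_SIZE_MB):
        """Return {node: [block indices]} for a simple round-robin layout."""
        layout = {n: [] for n in nodes}
        num_blocks = -(-file_size_mb // block_mb)    # ceiling division
        for block in range(num_blocks):
            layout[nodes[block % len(nodes)]].append(block)
        return layout


    if __name__ == "__main__":
        layout = stripe(10)    # a 10 MB file striped across 4 nodes
        for node, blocks in layout.items():
            print(f"{node}: blocks {blocks}")
        # Every node holds roughly a quarter of the blocks, so a sequential
        # read can pull data from all four nodes' local drives at once.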

Why did I go down that rabbit hole? Because it is important to understand that whether a customer utilizes one HANA node with a total of 128GB of memory or a thousand nodes with 2PB of memory, the technology does not have to be developed to support this; it has already been done with GPFS.

Now for the really amazing part of this solution: all drives, whether HDD or SSD, are located within the individual nodes but, through the magic of GPFS, are available to every node in the system. This means there is no SAN, no NAS, no NFS, no specialized switches, no gateways, just simple 10Gb Ethernet over which GPFS communicates among its nodes. Replication is built in, so the data and logs physically located on each node can be duplicated to one or more other nodes. This provides automatic HA for the file system: even if a node fails, all data can still be accessed from the other nodes, and HANA can restart the failed node’s workload on a standby system.
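To illustrate the replication and HA behaviour described above, here is another small, hypothetical Python sketch. Again, this is not GPFS code; the two-copy policy and node names are assumptions for illustration. It shows that when each block has a second copy on a different node, the loss of any single node’s local drives leaves every block readable:

    # Illustrative only: toy 2-way replication across node-local storage.
    # GPFS itself manages replicas, failure groups and recovery; this just
    # demonstrates why losing one node does not lose access to any data.

    NODES = ["node1", "node2", "node3", "node4"]     # hypothetical cluster


    def place_with_replica(num_blocks, nodes=NODES):
        """Give each block a primary node and a replica on the next node."""
        placement = {}
        for block in range(num_blocks):
            primary = nodes[block % len(nodes)]
            replica = nodes[(block + 1) % len(nodes)]
            placement[block] = (primary, replica)
        return placement


    def unreadable_blocks(placement, failed_node):
        """Blocks with no surviving copy after one node fails."""
        return [b for b, copies in placement.items()
                if all(node == failed_node for node in copies)]


    if __name__ == "__main__":
        placement = place_with_replica(12)
        for failed in NODES:
            lost = unreadable_blocks(placement, failed)
            print(f"if {failed} fails, unreadable blocks: {lost}")
        # With two copies on two different nodes, the list is always empty,
        # so a standby node can take over the failed node's HANA workload.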

Some other features of IBM’s implementation are worth noting. IBM offers two different types of systems for HANA: the x3690 X5, a 1- or 2-socket system, and the x3950 X5, a 2-, 4- or 8-socket system. Either system may be used standalone or for scale-out, but currently the x3690 is only certified to scale to 4 nodes, while the x3950 is certified to scale to 16 nodes. While it is not possible to mix and match these node types in a scale-out configuration, it is possible to use one in production and the other in non-production, for example, as they run identical stacks. Another valuable feature is worth pointing out: the x3950 is capable of scaling to 8 sockets, but customers don’t have to purchase an 8-socket system up front. Each “drawer” of an x3950 supports up to 4 sockets, and a second drawer may be added at any time to upgrade a system to 8 sockets. Taken all together, an 8-socket standalone system costs roughly twice what a 4-socket standalone system does, a two-node scale-out implementation costs roughly twice what a single-node standalone system does and, for that matter, a 16-node scale-out implementation costs roughly 8 times what a 2-node scale-out implementation does.
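Using the rough ratios stated above (and they are only rough ratios, not price-list figures), the point about proportional cost can be expressed as a trivial calculation, with the cost of a single node taken as 1 unit:

    # Rough illustration of proportional (linear) cost scaling, using the
    # approximate ratios described in the text; these are not list prices.

    COST_PER_NODE = 1.0    # one standalone node = 1 arbitrary cost unit


    def scale_out_cost(num_nodes, cost_per_node=COST_PER_NODE):
        """If cost is proportional to node count, total cost is simply linear."""
        return num_nodes * cost_per_node


    if __name__ == "__main__":
        for nodes in (1, 2, 4, 8, 16):
            print(f"{nodes:2d} node(s): ~{scale_out_cost(nodes):4.1f}x the cost of one node")
        # e.g. 16 nodes come to roughly 8x the cost of 2 nodes, matching the ratio above.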

How does this compare with implementations from HP, Dell, Cisco, Fujitsu, Hitachi and NEC? For HP, a customer must choose up front between a 4-socket and an 8-socket system, and pay accordingly, as there is no upgrade path from the DL580 4-socket system to the DL980 8-socket system. Many more drives are also required, e.g. a 128GB configuration requires 24 15K HDDs, compared to an equivalent IBM solution which requires only 8 10K drives. Next, if one decides to start standalone and later move to a parallel implementation, one must move from the ext3 and xfs file systems to NFS. Unfortunately, HP has not certified either of those systems for scale-out, so customers must move to the BL680c for scale-out implementations, but the BL680c is only available as a 4-socket/512GB node. A standalone implementation uses internal disks plus, frequently, a disk drawer in order to deliver the high IOPS that HANA requires, but a scale-out implementation requires one HP P6500 EVA for every 4 nodes and one HP X9300 Network Storage Gateway for every 2 nodes. The result is that not only does the stack change from standalone to scale-out, but the systems, enclosures and interconnections change as well, and as more nodes are added, complexity grows dramatically. Cost is also not proportional, but instead grows at an ever-increasing rate as more nodes are added.
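Based on the figures just cited (one P6500 EVA per four nodes and one X9300 gateway per every two nodes; these are the numbers from the comparison above, not an HP configuration guide), a quick back-of-the-envelope calculation shows how the extra components pile up as an HP scale-out cluster grows:

    import math

    # Back-of-the-envelope count of extra storage components in an HP-style
    # scale-out layout, using the ratios cited in the text (one P6500 EVA
    # per 4 nodes, one X9300 gateway per 2 nodes). Not an official sizing.


    def hp_extra_components(num_nodes):
        evas = math.ceil(num_nodes / 4)        # one storage array per 4 nodes
        gateways = math.ceil(num_nodes / 2)    # one NAS gateway per 2 nodes
        return evas, gateways


    if __name__ == "__main__":
        for nodes in (2, 4, 8, 16):
            evas, gateways = hp_extra_components(nodes)
            print(f"{nodes:2d} nodes: {evas} P6500 EVA(s), {gateways} X9300 gateway(s)")
        # The IBM layout described above needs none of these extra boxes;
        # each node's own drives are shared by GPFS over 10Gb Ethernet.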

The other vendors’ implementations all share similar characteristics with HP’s, with, of course, different types of nodes, file systems and storage architectures, e.g. Cisco uses EMC’s VNX5300 and the MPFS parallel file system for scale-out.

At the time of this writing, only Cisco and Dell were not certified for the 1TB HANA configuration, as SAP requires 8 sockets for a system of this size based on its sizing rules, not based on any limitation of the systems themselves in supporting 1TB. Also, only IBM was certified for a scale-out configuration using 1TB nodes, and for up to 16 of those nodes.

The lead that IBM has over the competition is almost unprecedented. In fact, in the x86 space, I am not aware of this wide a gap at any time in IBM’s history. If you would like the technical details behind IBM’s HANA implementation, please visit http://IBM-SAP.com/HANA. And to see a short video on the subject from Rich Travis, one of IBM’s foremost experts on HANA and the person I credit with most of my knowledge of HANA, but none of the mistakes in this blog post, please visit: http://www.youtube.com/watch?v=Au-P28-oZvw&feature=youtu.be


May 30, 2012 | Posted in Uncategorized
