SAPonPower

An ongoing discussion about SAP infrastructure

Get all the benefits of SAP HANA 2.0+ today with no risk and at a fraction of the cost!

I know, that sounds like a guy on an infomercial trying to sell you a set of knives, but it may surprise you that many of the benefits planned for the eventual HANA 2.0 transactional database from SAP are available today, with tried and proven technology and no rewriting of any SAP applications.

Let’s take a quick step back.  I just got back from SAP TechEd 2011 in Las Vegas.  As with Sapphire, just about every other word out of SAP employees’ mouths was HANA or in-memory computing.  SAP is moving rapidly on this exciting project.  They have now released two new HANA applications, Smart Meter Analytics and the CO-PA Accelerator.  SAP shared that BW on HANA would be entering ramp-up in November, and they are busy coding other solutions as well.  Some people have the misconception that HANA will be just like BWA, i.e. plug and play: select the data you want moved to it, and it is transparent to the application other than being faster.  But that is not how HANA works.  Applications have to be rewritten for HANA, not ported from existing SAP systems, and data must be modeled based on specific application requirements.  Though the benefits can be huge, the effort to get there is not trivial.

It is going to be a gradual process by which specific applications are rolled out on HANA.  BW is an obvious next step, since many customers have both a BW system and a BWA.  BW on HANA promises a single device with radical improvements in speed, not just for pre-selected InfoCubes but for the entire database.  As a side note, HANA provides an additional benefit for text queries, as it does not have the 60-character limitation of BW.  It is less clear whether customers will be willing to pay the price, as this approach places even old, infrequently accessed data in memory, with incremental costs for systems, memory and SAP software to carry that aged data.

While this may be obvious, it is worthwhile to summarize the basic benefits of HANA.  HANA, an in-memory database, has three major benefits: 1) all data and indexes reside in memory, eliminating disk access for query purposes and resulting in dramatic gains in query speeds; 2) application code executes on the same device as the database, eliminating data transfers between application servers and database servers; 3) near-realtime replication of data not just from SAP systems, but from just about any other data source a customer might choose, which is indeed a great thing.

So, great goals and, eventually, something that should benefit not just query-based applications but a wide variety of applications, including transactional processing.  As mentioned above, it is not simply a question of exporting the data from the current ERP database, for instance, and dropping it on HANA.  Every one of the 80,000 or so tables must be modeled into HANA.  All code, including thousands of ABAP and Java programs, must be rewritten to run on HANA.  And, as mentioned in a previous blog entry, SAP must significantly enhance HANA to deal with cache coherency, transactional integrity, lock management and discrete data recovery, to name just a few items on a long laundry list of challenges.  In other words, despite some overenthusiastic individuals’ assertions or innuendos that the transactional in-memory database will be available in 2 to 3 years, in reality it will likely take far longer.

This means you have to wait to attain those benefits, right?  Wrong.  The technology exists today, with no code changes, no data modeling and full support, to place entire databases, including indexes, in memory alongside application servers, thereby achieving two of the three goals of HANA.  Let’s explore that a little further.  HANA, today, delivers roughly 5:1 compression of most uncompressed data, but to make HANA work, an amount of memory equal to the compressed data must be allocated as temporary work space.  For example, a 1TB uncompressed database should require about 200GB for data and another 200GB for work space, for a total memory requirement of 400GB.  A 10TB uncompressed database would require 4TB of memory, but since the supported configurations allow for only 1TB per node, this would require a cluster of 4 @ 1TB systems.  Fortunately, IBM provides the System x3850, which is certified for exactly this configuration.  But remember, we are talking about a future in-memory transactional system, not today’s HANA.
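The HANA sizing rule of thumb above can be sketched in a few lines of Python.  This is illustrative only: the 5:1 compression ratio, the "work space equal to compressed data" rule and the 1TB-per-node ceiling are the rough figures from this post, not vendor guarantees.

```python
import math

def hana_memory_tb(uncompressed_tb, compression_ratio=5.0, node_cap_tb=1.0):
    """Rough HANA memory sizing: returns (total memory in TB, node count).

    Assumes ~5:1 compression plus work space equal to the compressed
    data size, with each node limited to node_cap_tb of memory.
    """
    compressed = uncompressed_tb / compression_ratio   # data held in memory
    total = compressed * 2                             # data + work space
    nodes = math.ceil(total / node_cap_tb)             # nodes needed at the cap
    return total, nodes

print(hana_memory_tb(1))    # 1 TB source: 0.4 TB total, fits on 1 node
print(hana_memory_tb(10))   # 10 TB source: 4.0 TB total, a 4-node cluster
```

Plugging in the post’s examples reproduces its numbers: 400GB total for a 1TB source database, and a 4 @ 1TB cluster for a 10TB source database.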

DB2 offers data compression of approximately 60% today, which means that a 10TB database, if buffered completely in memory, would require 4TB of memory.  A tall requirement, but achievable, since IBM offers a 4TB system called the Power 795.  However, IBM also offers a feature called Active Memory Expansion (AME), available only with Power Systems and AIX, which uses memory compression to make real memory appear larger to applications than it really is.  DB2 is fully supported with this feature and can see an additional 20% to 35% compression using AME.  In other words, that same 10TB database may fit within 3TB or less of real memory.
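The DB2 arithmetic works the same way and can be sketched alongside it.  Again, the ~60% DB2 compression figure and the 20% to 35% AME gain are this post’s rough planning numbers (the sketch assumes 25% for AME), not measured results for any particular workload.

```python
def db2_memory_tb(uncompressed_tb, db2_compression=0.60, ame_compression=0.25):
    """Memory (TB) to buffer a whole database, before and after AME.

    Assumes ~60% DB2 row compression, then an additional 20-35%
    (default 25%) from Active Memory Expansion on Power/AIX.
    """
    after_db2 = uncompressed_tb * (1 - db2_compression)  # e.g. 10 TB -> 4 TB
    after_ame = after_db2 * (1 - ame_compression)        # e.g. 4 TB -> 3 TB
    return after_db2, after_ame

db2_tb, ame_tb = db2_memory_tb(10)
print(f"in-memory footprint: {db2_tb:.1f} TB, with AME: {ame_tb:.1f} TB")
```

For the 10TB example this yields the post’s figures: 4TB after DB2 compression, and about 3TB of real memory once AME is factored in.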

Some customers might not need such a large system from a processing perspective, so two options exist.  First, with Capacity on Demand, customers can activate a small fraction of the available processors while still having access to all available memory.  This would significantly reduce the cost of processor activations as well as of AIX and related software licenses and maintenance.  For customers who purchase their DB2 software on a per-core basis, this would further reduce those costs, though clearly it has no effect on customers who purchase the SAP OEM edition of DB2.

A second option is DB2 pureScale, now a certified database option available to approved pilot customers of SAP.  With this option, using the same AME feature, a customer could cluster 3 @ 1TB Power Systems or 6 @ 500GB systems.

By the same token, SAP application servers can co-reside in the same OS instance or instances as DB2.  While this adds to the memory requirement, ABAP servers benefit even more from AME, with compression of 50% or higher.

So, it is entirely possible to build a full in-memory database today, using IBM Power Systems and DB2, housing both the database and application server, with no code changes and no data modeling, on an established, proven database system that already handles all of the transactional requirements noted above.  Assuming you already have a DB2 license, you would not even see any incremental software cost unless you move to DB2 pureScale, which comes at an additional cost of 2% of SAV.  For those with licenses for Oracle or SQL Server, while I am no expert on DB2 cost studies, those that I have seen show very good ROI over relatively short periods, and that is before you consider the ability to achieve near-HANA-like performance as described in this blog post.  Lastly, this solution is available under existing SAP license agreements, unlike HANA, which comes with a pretty significant premium.

September 18, 2011 - Posted in Uncategorized
