SAPonPower

An ongoing discussion about SAP infrastructure

HANA – Implications for UNIX systems

Ever since SAP announced HANA, I have received the occasional question about what this product means to UNIX systems, but the pace of those questions picked up significantly after Sapphire. Let me address the three phases of SAP in-memory database computing as I understand them.

HANA, the High-performance ANalytic Appliance, is the first in-memory database application. According to SAP, with very little effort, a company can extract large sets of data from its current SAP and non-SAP systems and, in near real time, keep that data extract up to date, at least for SAP systems. The data is placed into in-memory columns, which are not only highly compressible but also very fast for ad-hoc searches. Though Hasso Plattner talked about 10 to 1 compression, individuals I have talked to with direct experience of the current technology tell me that 5 to 1 is more likely. Even at 5 to 1, a 1TB conventional DB would fit into 200GB using HANA. The goal is not necessarily to replicate entire databases, including aged data that might be best archived, but to replicate only data that is useful in analyzing the business and developing new opportunities for driving revenue or reducing expenses. The promise is that analyses for which constructing the underlying systems and database schemas would have been prohibitively expensive and time consuming will now be very affordable. If true, companies could extend innovation potential to just about anyone in the company with a good idea, rather than just the elite few analysts who perform this work at the direction of top executives. This solution is currently based on Intel-based systems running Linux from a pretty decent set of SAP technology partners. Though SAP has not ruled out support for any other type of system, it has also not indicated a plan to support one.
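To make the columnar claim concrete, here is a minimal sketch of dictionary encoding, one common technique behind column-store compression. This is purely illustrative, not SAP's implementation, and the sample data is invented.

```python
# Minimal illustration of dictionary encoding, a common column-store
# compression technique. Not SAP's implementation; sample data is invented.

def dictionary_encode(column):
    """Replace repeated values with small integer codes plus a lookup table."""
    dictionary = {}   # distinct value -> integer code
    codes = []        # one small code per row instead of the full value
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary)
        codes.append(dictionary[value])
    return dictionary, codes

# A low-cardinality column (e.g., country codes) compresses dramatically:
# six million rows reduce to six million tiny codes plus a 3-entry dictionary.
column = ["DE", "US", "DE", "DE", "FR", "US"] * 1_000_000
dictionary, codes = dictionary_encode(column)
print(len(dictionary))  # 3 distinct values

# Ad-hoc scans are fast for the same reason: an equality predicate becomes
# one integer comparison over a contiguous array of codes.
matches = sum(1 for code in codes if code == dictionary["DE"])
print(matches)  # 3000000
```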

I picked up hints of the next phase of in-memory database technology from various conversations and presentations at Sapphire. Two major areas were discussed. The first deals entirely with BW. The implication was that BWA and HANA are likely to be combined into a single set of technology with the ability to run the entire BW database stack, thereby eliminating the need for a separate BW database server. I can imagine a lot of customers that already have BWAs, or are planning on HANA, finding this to be a very useful direction. The lack of transactional updates in such an environment makes this a very doable goal. Once again, SAP made no statements of support, or elimination of support, for any platform or technology.

The second area involves a small but historically troublesome portion of SAP transactions: those with much longer run times and/or large amounts of data transferred back and forth between database and application servers, which consequently consume much larger amounts of resources. Though SAP was not specific, the goal is to use in-memory database technology to run the sets of SAP transactions that have these characteristics. Consider this a sort of coprocessor, similar to the way that BWA acted as a back-end database coprocessor for BW. Other than faster performance, this would be transparent to the end user. Programmers would see it, but perhaps just as an extension of the ABAP language for these sorts of transactions. Not all customers experience problems in this area; on the other hand, some customers deal with these pesky performance issues quite regularly and would therefore be prime candidates for such a technology. Developing this sort of coprocessor is also technically quite a bit more complex, so I would envision it coming out somewhat later than the in-memory BW database technology described above.
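Though SAP described no interfaces, the coprocessor pattern itself is simple to sketch: a dispatcher routes the small set of long-running, data-heavy operations to an in-memory engine and sends everything else down the normal path. Every name below is hypothetical; this is a sketch of the pattern, not of anything SAP has announced.

```python
# Hypothetical sketch of the coprocessor dispatch pattern described above.
# None of these names correspond to real SAP or ABAP interfaces.

class Engine:
    """Stand-in for either the classic database or an in-memory engine."""
    def __init__(self, name):
        self.name = name

    def run(self, operation, payload):
        return f"{operation} executed on {self.name}"

# The small set of historically troublesome, data-heavy operations.
HEAVY_OPERATIONS = {"material_ledger_close", "mass_pricing_update"}

def execute(operation, payload, in_memory, classic):
    """Callers see one interface; routing is transparent to the end user."""
    if operation in HEAVY_OPERATIONS:
        # Run the data-intensive work next to the data, avoiding the
        # transfer back and forth between database and application server.
        return in_memory.run(operation, payload)
    return classic.run(operation, payload)

print(execute("mass_pricing_update", {}, Engine("in-memory"), Engine("classic DB")))
print(execute("create_sales_order", {}, Engine("in-memory"), Engine("classic DB")))
```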

The last phase, pushed strongly by Hasso Plattner but barely mentioned by anyone else at SAP, involves a full transactional in-memory database. This would act as a full replacement for Oracle, DB2 and SQL Server databases. Strangely, no one representing those companies seemed to be very concerned about this, so, naturally, it sparked my interest. When I asked some database experts, I was given a little rudimentary education. Transactional databases are fundamentally different from primarily read-only databases that are populated from other databases. At the most basic level, a query in a read-only database can examine any data element with no regard for any other query that might be doing the same. A transactional database must determine whether a data element that may be changed by a transaction is locked by another transaction and, if so, what to do about it, e.g. wait, steal the lock, abandon the task, etc. At a slightly more advanced level, if an update to a read-only database fails, the data can simply be repopulated from the source. If an update fails in a transactional database, real data loss with potentially profound implications can result. Backup, recovery, roll forward, roll back, security, high availability, disaster recovery and dozens of other technologies have been developed by the database companies over time to ensure comprehensive database integrity. These companies therefore believe that if SAP goes down this path, it will not be an easy or quick one and may be fraught with complications.
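A tiny sketch makes the difference concrete. In a read-only store, a query just reads; a transactional engine must first negotiate a lock and decide what to do when it loses, e.g. wait or abandon. This is my own illustration, not any vendor's code, and it omits everything else on the integrity list (logging, recovery, and so on).

```python
import threading

# Illustration only (no vendor's code): the extra negotiation a transactional
# engine performs before an update. A read-only query would skip all of this.

row_locks = {"row_42": threading.Lock()}

def update_row(row_id, strategy="wait"):
    lock = row_locks[row_id]
    if strategy == "wait":
        acquired = lock.acquire(timeout=1.0)    # block briefly, then give up
    else:
        acquired = lock.acquire(blocking=False) # abandon immediately if locked
    if not acquired:
        return "abandoned: row is locked by another transaction"
    try:
        # ... apply the change and write the log record that makes
        # roll-forward and roll-back possible if something fails ...
        return "committed"
    finally:
        lock.release()

print(update_row("row_42"))  # commits: no contention in this toy example
```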

And then there is the matter of cost. The software portion of HANA is not inexpensive today. If SAP were to maintain a similar pricing model for the substantially more complicated transactional database of the future, customers could face database licensing costs twice or more what they pay currently for SAP OEM editions of DB2 or SQL Server, both licensed at 8% of SAV (SAP Application Value), or Oracle, licensed at 11% of SAV (but announced as growing to 15% this month, August).
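Putting those percentages into numbers: with a purely hypothetical SAV of $10 million, the gap looks like this.

```python
# Worked example of the SAV-based licensing figures quoted above.
# The $10M SAV is hypothetical; the percentages come from the post.
sav = 10_000_000  # SAP Application Value, in dollars (made-up figure)

db2_or_sqlserver = 0.08 * sav   # 8% of SAV
oracle_today     = 0.11 * sav   # 11% of SAV
oracle_planned   = 0.15 * sav   # announced 15% of SAV

print(f"DB2 / SQL Server: ${db2_or_sqlserver:,.0f}")      # $800,000
print(f"Oracle today:     ${oracle_today:,.0f}")          # $1,100,000
print(f"Oracle at 15%:    ${oracle_planned:,.0f}")        # $1,500,000
print(f"Twice DB2/SQL:    ${2 * db2_or_sqlserver:,.0f}")  # $1,600,000
```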

This raises the question: what is broken today that an SAP in-memory transactional database would fix? If you can maintain virtually all of your valuable data in a read-only copy on HANA and perform all of the analyses your heart desires, what will a single transactional and analytical repository do that you can't do today with separate databases? Ten years ago, having two copies of a 10TB database would have required a big investment in disk subsystems. Now, 20TB is incredibly inexpensive, almost a rounding error in many IT budgets.

Bottom line: HANA looks like a real winner. Phase two has a lot of promise. Phase three looks like a solution in search of a problem. So, for the UNIX fans out there, database and application server demands will continue to be met primarily by existing technology solutions for a long time to come.


August 5, 2011 | Uncategorized
