SAPonPower

An ongoing discussion about SAP infrastructure

SAP Extends support for Business Suite 7 to 2027 and beyond … but the devil is always in the detail

The SAP world has been all abuzz since SAP’s announcement on February 4, 2020 about their extension of support for Business Suite 7 (BS7), which many people know as ECC 6.0 and/or related components.  According to the press release[i], customers with existing maintenance contracts will be able to continue using and getting support for BS7 through the end of 2027, and will be able to purchase extended maintenance support through the end of 2030 at a premium of two percentage points over their current contract price.

It is clear that SAP blinked first although, in an interview[ii], SAP positions this as a “proactive step”, not as a reaction to customer pushback.   Many tweets and articles have already come out talking about how customers now have breathing room and have clarity on support for BS7.  And if I just jumped on the bandwagon here, those of you who have been reading my blog for years would be sorely disappointed.

And now for the rest of the story

Most of you are aware that BS7 is the application suite which can use one of several 3rd party database options.  Historically, the most popular database platform for medium to large customers was Oracle DB, followed by IBM Db2.  BS7 can also run on HANA and in that context is considered Suite on HANA (SoH).

What was not mentioned in this latest announcement is the support for the underlying databases.  For this, one must access the respective SAP Notes for Oracle[iii] and Db2[iv].

This may come as a surprise to some, but if you are running Oracle 12.2.0.1, you only have until November of this year to move to Oracle 19c (or Oracle 18c, but that would seem pretty pointless as its support ends in June of 2021).  But it gets much more fun: Oracle 19c is only supported under normal maintenance until March 2023 and under extended support until March 2026.  In theory, there might be another version or dot release supported beyond this time, but that is not detailed in any SAP Note.  In the best-case scenario, Oracle 12 customers will have to upgrade to 19c and then later to an as-yet-unannounced later version, which may be more transition than many customers are willing to accept.

Likewise, for Db2 customers, 10.5, 11.1 and 11.5 are all supported through December, 2025.  The good news is that no upgrades are required through the end of 2025.

For both, however, what happens if later versions of either DB are not announced as being supported by SAP?  Presumably, a heterogeneous migration to Suite on HANA would be required.  In other words, unless SAP provides clarity on the DB support picture, customers using either Oracle DB or IBM Db2 may be faced with an expensive, time-consuming and disruptive migration to SoH near the end of 2025.  Most customers have expressed that they are unwilling to do back-to-back migrations, so being required to migrate to SoH in 2025 and then to S/4HANA in 2027 is simply too close for comfort.

Lacking any further clarification from SAP, it still seems as if it would be best to complete your conversion to S/4HANA by the end of 2025.  Alternately, you may want to ask SAP for a commitment to support your current and/or planned DB for BS7 through the end of 2027, see how they respond and how much they will charge.

[i] https://news.sap.com/2020/02/sap-s4hana-maintenance-2040-clarity-choice-sap-business-suite-7/
[ii] https://news.sap.com/2020/02/interview-extending-maintenance-for-sap-s4hana/
[iii] https://launchpad.support.sap.com/#/notes/1174136
[iv] https://launchpad.support.sap.com/#/notes/1168456

February 6, 2020

The hype and the reality of HANA

Can you imagine walking into a new car dealership and before you can say anything about your current vehicle and needs, a salesperson  immediately offers to show you the latest, greatest and most popular new car!  Of course you can since this is what that person gets paid to do.  Now, imagine the above scenario where the salesperson says “how is your current car not meeting your needs?” and following it up with “I don’t want you to buy anything from me unless it brings you substantial value”.  After smelling salts have been administered, you might recover enough to act like a cartoon character trying to check your ears to make sure they are functioning properly and ask the salesperson to repeat what he or she said.

The first scenario is occurring constantly with SAP account execs, systems integrators and consultants playing the above role of new car salesperson.  The second rarely happens, but that is exactly the role that I will play in this blog post.

The hype around HANA could not be much louder or deeper than it is currently.  As bad as the hype might be, the FUD (Fear, Uncertainty and Doubt) is worse.  The hype suggests that HANA can do everything except park your car, since that is a future capability (not really, I just made that up).  At the very worst, this hype suggests a vision for the future that, while not solving world hunger or global warming, might improve the operations and profitability of companies.  The FUD is more insidious.  It suggests that unless you act like lambs and follow the lead of the individual telling this tale, you will be like a lost sheep, out of support and further out of the mainstream.

I will address the second issue first.  As of today, the beginning of August, SAP has made absolutely no statement indicating they will discontinue support for any platform, OS or DB.  In fact, a review of SAP notes shows support for most OSs with no end date, and even DB2 9.7 has an end of support date that is several years past that of direct standard support from IBM!  So, what gives???  Is SAP saying one thing internally and another externally?  I have been working with SAP for far too long and know their business practices too well to believe that they would act in such a two-faced manner, not to mention exposing themselves to another round of expensive and draining lawsuits.  Instead, I place the arrow of shame squarely on those rogue SAP account execs that are perpetuating this story.  The next time that one of them makes this sort of suggestion, turn the tables on them.  Ask them to provide you with a statement, in writing, backed up with official press releases or SAP notes, showing that this is the case.  If they can’t, it is reasonable to conclude that they are simply trying to use the age-old FUD tactic to get you to spend more money with them now rather than waiting until/if SAP actually decides to stop supporting a particular type of HW, OS or DB.

And now for the first issue: the hype around HANA.  HANA offers dramatic benefits to some SAP customers.  Some incarnation of HANA may indeed be inevitable for the vast majority.  However, the suggestion that HANA is the end-all be-all flies in the face of many other solutions on the market, many of which are radically less expensive and often carry dramatically lower risk.  Here is a very simple example.

Most customers would like to reduce the time and resources required to run batch jobs.  It seems as if there is not a CFO anywhere who does not want to reduce the month-end/quarter close from multiple days down to a day or less.  CFOs are not the only ones with that desire, as certain functions must come to a halt during a close and/or the availability requirements go sky high during this time period, requiring higher IT investments.  SAP has suggested that HANA can achieve exactly this, however it is not quite clear whether this will require BW HANA, Suite on HANA, some combination of the two or even another as yet unannounced HANA variant.  I am sure that if you ask a dozen consultants, you will get a dozen different answers on how to achieve these goals with HANA, and it is entirely possible that each of them is correct in their own way.

One thing is certain, however: it won’t come cheaply.  Not only will a company have to buy HANA HW and SW, but they will have to pay for a migration and a boatload of consulting services.  It will also not come without risk.  BW HANA and Suite on HANA require a full migration.  Those systems become the exclusive repository of business critical data.  HANA is currently in its 58th revision in a little over two years.  HA, DR and backup/recovery tools are still evolving.  No benchmarks for Suite on HANA have been published, which means that sizing guidelines are based purely on the size of the DB, not on throughput or even users.  Good luck finding extensive large scale customer references, or even medium sized ones, in your industry.  To make matters worse, a migration to HANA is a one-way path.  There is no published migration methodology to move from HANA back to a conventional DB.  It is entirely possible that Suite on HANA will be much more stable than BW HANA was, that these systems will scream on benchmarks, that all of those HA, DR, backup/recovery and associated tools will mature in short order and that monkeys will fly.  Had the word risk not been invented previously, Suite on HANA would probably be the first definition in the dictionary for it.

So, is there another way to achieve those goals, maybe one that is less expensive and does not require a migration, software licenses or consulting services?  Of course not, because that would be as impossible to believe as the above-mentioned flying monkeys.  Well, strap on your red shoes and welcome to Oz, because it is not only possible but many customers are already achieving exactly those gains.  How?  By utilizing high performance flash storage subsystems like the IBM FlashSystem.  Whereas transaction processing typically accesses a relatively small amount of data cached in database buffers, batch, month-end and quarter close jobs tend to be very disk intensive.  A well-tuned disk subsystem can deliver access speeds of around 5 milliseconds.  SSDs can drop this to about 1 millisecond.  A FlashSystem can deliver incredible throughput while accessing data in as little as 100 microseconds.  Many customers have seen batch times reduced to a third or less of what they experienced before implementing FlashSystem.  Best of all, there are no efforts around migration, recoding or consulting, and no software license costs.  A FlashSystem is “just another disk subsystem” to SAP.  If an IBM SVC (SAN Volume Controller) or V7000 is placed in front of a FlashSystem, data can be transparently replicated from a conventional disk subsystem to FlashSystem without even a system outage.  If the subsystem does not produce the results expected, the system can be repurposed or, if tried out via a POC, simply returned at no cost.  To date, few, if any, customers have returned a FlashSystem after completing a POC, as the results have universally been so compelling that the typical outcome is an order for more units.
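Those latency figures lend themselves to a quick back-of-the-envelope model.  The sketch below is purely illustrative: the I/O count and CPU share are assumptions I made up for this example, and real batch jobs overlap some I/O with computation, but it shows why cutting per-access latency from ~5 milliseconds to ~100 microseconds can compress an I/O-bound batch run so dramatically.

```python
# Back-of-the-envelope model: runtime of an I/O-bound batch job under
# different storage latencies.  The I/O count, CPU share and fully
# serialized I/O are illustrative assumptions, not measurements.

def batch_hours(io_ops, latency_s, cpu_hours):
    """Total runtime = CPU time + serialized I/O wait (worst case, no overlap)."""
    return cpu_hours + (io_ops * latency_s) / 3600.0

IO_OPS = 50_000_000   # assumed random reads issued by the batch run
CPU_HOURS = 1.0       # assumed CPU-bound portion of the job

for name, latency in [("spinning disk", 0.005),    # ~5 ms per access
                      ("SSD", 0.001),              # ~1 ms
                      ("FlashSystem", 0.0001)]:    # ~100 microseconds
    print(f"{name:13s}: {batch_hours(IO_OPS, latency, CPU_HOURS):6.1f} hours")
```

Under these made-up numbers, the disk-based run takes roughly 70 hours and the FlashSystem run under 2.5, so even a real-world "reduced to a third" result is conservative relative to the raw latency ratio.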

Another super simple, no-risk option is to consider using the old 2-tier approach to SAP systems.  In this situation, instead of utilizing separate database and application server systems/partitions, database and app server instances are housed within a single OS system/partition.  Some customers don’t realize how “chatty” app servers are, with an amazing number of very small queries and results running back and forth to DB servers.  As fast as Ethernet is, it is as slow as molasses compared to the speed of an inter-process communication within an OS.  As crazy as it may seem, simply by consolidating DB and app servers into a single OS, batch and close activity may speed up dramatically.  And here is the no-risk part.  Most customers have QA systems, and from an SAP architecture perspective, there is no difference between running the DB and app server within a single OS and running them on separate OSs.  As a result, customers can simply give it a shot and see what happens.  No pain other than a little time to set up and test the environment.  Yes, this is the salesman telling you not to spend any money with him.
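A toy calculation illustrates why that "chattiness" matters.  The round-trip counts and latencies below are ballpark assumptions, not benchmarks, but the principle holds: the same number of small calls costs far less over local IPC than over the LAN.

```python
# Illustrative sketch: cumulative wait time for a "chatty" workload that
# issues millions of small DB calls.  Latencies are assumed ballpark
# figures for illustration, not measurements.

def wait_minutes(round_trips, rtt_seconds):
    """Cumulative time spent just waiting on round trips, in minutes."""
    return round_trips * rtt_seconds / 60.0

ROUND_TRIPS = 2_000_000   # assumed small DB calls issued during a close run

print(f"Ethernet (~200us RTT): {wait_minutes(ROUND_TRIPS, 200e-6):.1f} min of wait")
print(f"local IPC (~10us RTT): {wait_minutes(ROUND_TRIPS, 10e-6):.1f} min of wait")
```

Minutes of pure network wait per job may not sound like much, but close runs chain thousands of such jobs, which is where the dramatic speedups come from.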

This is not the only business case for HANA.  Others involve improving reporting or even doing away with reporting in favor of real-time analytics.  Here is the interesting part.  Before Suite on HANA or even BW HANA became available, SAP had introduced real-time replication into side-car HANA appliances.  With these devices, the source of business critical data is kept on conventional databases.  You remember those archaic old systems that are reliable, secure and scalable, around which you have built a best practices environment, and for which you have already purchased a DB license and are simply paying maintenance.  Perhaps naively, I call this the 95-5 rule, not 80-20.  You may be able to achieve 95% of your business goals with such a side-car without risking a migration or the integrity of your data.  Also, since you will be dealing with a subset of data, the cost of the SW license for such a device will likely be a small fraction of the cost of an entire DB.  Even better, as an appliance, if it fails, you just replace the appliance, as the data source has not been changed.  Sounds too good to be true?  Ask your SAP AE and see what sort of response you get.  Or make it a little more interesting: suggest that you may be several years away from being ready to go to Suite on HANA but could potentially do a side-car in the short term, and observe the way the shark smells blood in the water.  By the way, you have to be on current levels of SAP software in order to migrate to Suite on HANA, and reportedly 70% of customers in North America are not current (no idea about the rest of the world), so this may not even be much of a stretch.

And I have not even mentioned DB2 BLU yet but will leave that for a later blog posting.

August 5, 2013

The top 3 things that SAP needs are memory, memory and I can’t remember the third. :-) A review of the IBM Power Systems announcements with a focus on the memory enhancements.

While this might not exactly be new news, it is worthwhile to consider the value of the latest Power Systems announcements for SAP workloads.  On October 12, 2011, IBM released a wide range of enhancements to the Power Systems family.  The ones that might have received the most publicity, not to mention new model numbers, were valuable but not the most important part of the announcement, from my point of view.  Yes, the new higher MHz Power 770 and 780 and the ability to order a 780 with 2 chips per socket thereby allowing the system to grow to 96 cores were certainly very welcome additions to the family.  Especially nice was that the 3.3 GHz processors in the new MMC model of the 770 came in at the same price as the 3.1 GHz processors in the previous MMB model.  So, 6.5% more performance at no additional cost.

For SAP, however, raw performance often takes second fiddle to memory.  The old rule is that for SAP workloads, we run out of memory long before we run out of CPU.  IBM started to address this issue in 2010 with the announcement of the Active Memory Expansion (AME) feature of POWER7 systems.  This feature allows for dynamic compression/decompression of memory pages, thereby making memory appear to be larger than it really is.  The administrator of a system can select the target “expansion” and the system will then build a “compressed” pool in memory into which pages are compressed and placed, starting from those pages less frequently accessed and moving to those more frequently accessed.  As pages are touched, they are uncompressed and moved into the regular memory pool from which they are accessed normally.  Applications run unchanged, as AIX performs all of the moves without any interaction or awareness required by the application.  The point at which response time or throughput suffers, or a large amount of CPU overhead starts to occur, is the “knee of the curve”; the expansion target should be set slightly below that point.  A tool called AMEPAT allows the administrator to “model” the workload prior to turning AME on, or for that matter on older hardware, as long as the OS level is AIX 6.1 TL4 SP2 or later.

Some workloads will see more benefit than others.  For instance, during internal tests run by IBM, the 2-tier SD benchmark showed outstanding opportunities for compression and hit 111% expansion, e.g. 10GB of real memory appears to be 21GB to the application, before response time or throughput showed any negative effect from the compression/decompression activity.  During testing of a retail BW workload, 160% expansion was reached.  Even database workloads tend to benefit from AME.  DB2 databases, which already feature outstanding compression, have seen another 30% or 40% expansion.  The reason for this difference comes from the different approaches to compression.  In DB2, if 1,000 residences or businesses have an address on Main Street, Austin, Texas (had to pick a city, so I selected my own), DB2 replaces Main Street, Austin, Texas in each row with a pointer to another table that has a single row entitled Main Street, Austin, Texas.  AME, by comparison, is more of an inline compression, e.g. if it sees a repeating pattern, it can replace that pattern with a symbol that represents the pattern and how often it repeats.  Oracle recently announced that they would also support AME.  The amount of expansion with AME will likely vary from something close to DB2’s, if Oracle Advanced Compression is used, to significantly higher if Advanced Compression is not used, since many more opportunities for compression will likely exist.
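The expansion percentages above translate directly into effective memory.  A minimal sketch of the arithmetic, using the figures quoted above:

```python
# AME arithmetic: an "expansion" percentage maps real memory to the larger
# effective memory seen by applications.
# Figures from the text: 111% on the 2-tier SD benchmark, 160% on a retail BW test.

def effective_gb(real_gb, expansion_pct):
    """Effective memory seen by applications given an AME expansion percentage."""
    return real_gb * (1 + expansion_pct / 100.0)

print(f"{effective_gb(10, 111):.1f} GB")   # 10GB real appears as ~21GB (SD benchmark)
print(f"{effective_gb(10, 160):.1f} GB")   # 10GB real appears as 26GB (retail BW test)
```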

So, AME can help SAP workloads close the capacity gap between memory and CPU.  Another way to view this is that this technology can decrease the cost of Power Systems by either allowing customers to purchase less memory or to place more workloads on the same system, thereby driving up utilization and decreasing the cost per workload.  It is worthwhile to note that many x86 systems have also tried to address this gap, but as none offer anything even remotely close to AME, they have instead resorted to more DIMM slots.  While this is a good solution, it should be noted that twice the number of DIMMs requires twice the amount of power and cooling and suffers twice the failures, i.e. TANSTAAFL: there ain’t no such thing as a free lunch.

In the latest announcements, IBM introduced support for the new 32GB DIMMs.  This effectively doubled the maximum memory on most models, from the 710 through the 795.  Combined with AME, this decreases or eliminates the gap between memory capacity and CPU and makes these models even more cost effective since more workloads can share the same hardware.  Two other systems received similar enhancements recently, but these were not part of the formal announcement.  The two latest blades in the Power Systems portfolio, the PS703 and the PS704, were announced earlier this year with twice the number of cores but the same memory as the PS701 and PS702 respectively.  Now, using 16GB DIMMs, the PS703/PS704 can support up to 256GB/512GB of memory, making these blades very respectable, especially for application server workloads.  Add to that, with the Systems Director Management Console (SDMC), AME can be implemented for blades, allowing for even more effective memory per blade.  Combined, these enhancements have closed the price difference even further compared to similar x86 blades.

One last memory-related announcement may have been largely overlooked by many because it involved an enhancement to the Active Memory Sharing (AMS) feature of PowerVM.  AMS has historically been a technology that allowed for overcommitment of memory.  While CPU overcommitment is now routine, memory overcommitment means that some percentage of memory pages will have to be paged out to solid state or other types of disk.  The performance penalty is well understood, making this not appropriate for production workloads but potentially beneficial for many non-prod, HA or DR workloads.  That said, few SAP customers have implemented this technology due to the complexity and performance variability that can result.  The new announcement introduces Active Memory™ Deduplication for AMS implementations.  Using this new technology, PowerVM will scan partitions after they finish booting and locate identical pages within and across all partitions on the system.  When identical pages are detected, all copies except one will be removed and all memory references will point to the same “first copy” of the page.  Since PowerVM is doing this, even the OSs can be unaware of this action.  Instead, as this post processing proceeds, the PowerVM free memory counter will increase until a steady state has been reached.  Once enough memory is freed up in this manner, new partitions may be started.  It is quite easy to imagine that a large number of pages are duplicates, e.g. each instance of an OS has many read-only pages which are identical, and multiple instances of an application, e.g. SAP app servers, will likewise have executable pages which are identical.  The expectation is that another 30% to 40% effective memory expansion will occur for many workloads using this new technology.
One caveat, however: since the scan happens after a partition boots, it will be important operationally to use a phased boot schedule, allowing the dedupe process to free up pages before more partitions are started and thereby avoiding the possibility of paging.  Early testing suggests that the dedupe process should arrive at a steady state approximately 20 minutes after partitions are booted.
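The core idea behind deduplication can be sketched in a few lines: hash each page's contents and keep a single copy per unique hash.  This is only a toy model with made-up page contents (real PowerVM works on physical pages beneath the OS images), but it shows how identical read-only OS pages shared across partitions turn into freed memory:

```python
# Toy illustration of content-based page deduplication, the idea behind
# Active Memory Deduplication: hash each page's contents, keep one copy
# per unique hash.  Page contents are invented for this sketch.
import hashlib

def dedupe_savings(pages):
    """Return (total_pages, unique_pages) after content-hash deduplication."""
    unique = {hashlib.sha256(p).hexdigest() for p in pages}
    return len(pages), len(unique)

# Three "partitions" sharing identical read-only OS pages plus private data pages
os_pages = [b"kernel-text-%d" % i for i in range(100)]
partitions = [os_pages + [b"private-%d-%d" % (n, i) for i in range(50)]
              for n in range(3)]
all_pages = [page for part in partitions for page in part]

total, unique = dedupe_savings(all_pages)
print(f"{total} pages -> {unique} after dedup ({100 * (1 - unique / total):.0f}% freed)")
```

In this contrived mix (two thirds shared OS pages, one third private), dedup frees over 40% of the pages, which is in the same ballpark as the 30% to 40% expectation quoted above.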

The bottom line is that with the larger DIMMS, AME and AMS Memory Deduplication, IBM Power Systems are in a great position to allow customers to fully exploit the CPU power of these systems by combining even more workloads together on fewer servers.  This will effectively drive down the TCA for customers and remove what little difference there might be between Power Systems and systems from various x86 vendors.

November 29, 2011

Get all the benefits of SAP HANA 2.0+ today with no risk and at a fraction of the cost!

I know, that sounds like a guy on an infomercial trying to sell you a set of knives, but it may surprise you that many of the benefits planned for the eventual HANA 2.0 transactional database from SAP are available today, with tried and proven technology and no rewriting of any SAP applications.

Let’s take a quick step back.  I just got back from SAP TechEd 2011 in Las Vegas.  As with Sapphire, just about every other word out of SAP employees’ mouths was HANA or in-memory computing.  SAP is moving rapidly on this exciting project.  They have now released two new HANA applications, Smart Meter Analytics and the COPA Accelerator.  SAP shared that BW on HANA would be entering ramp-up in November.  They are busy coding other solutions as well.  Some people have the misconception that HANA is going to be just like BWA, i.e. plug and play: just select the data you want moved to it, transparent to the application other than being faster.  But that is not how HANA works.  Applications have to be rewritten for HANA, not ported from existing SAP systems.  Data must be modeled based on specific application requirements.  Though the benefits can be huge, the effort to get there is not trivial.

It is going to be a gradual process by which specific applications are rolled out on HANA.  BW is an obvious next step since many customers have both a BW system and a BWA.  BW on HANA promises the ability to have a single device with radical improvements in speed and not just for pre-selected Infocubes, but for the entire database.  Side note, HANA provides additional benefit for text queries as it does not have the 60 character limitation of BW.  It is not quite so clear if customers will be willing to pay the price as this would place even old, infrequently accessed data in memory along with the incremental price for systems, memory and SAP software for this aged data.

While this may be obvious, it is worthwhile summarizing the basic benefits of HANA.  HANA, an in-memory database, has three major benefits: 1) all data and indexes reside in memory, eliminating any disk access for query purposes and resulting in dramatic gains in query speeds; 2) application code executes on the same device as the database, thereby eliminating data transfers between application servers and database servers; 3) near real-time replication of data from not just SAP systems but just about any other data source a customer might choose, which is indeed a great thing.

So, great goals and eventually, something that should benefit not just query based applications, but a wide variety of applications including transactional processing.  As mentioned above, it is not simply a question of exporting the data from the current ERP database, for instance, and dropping it on HANA.  Every one of the 80,000 or so tables must be modeled into HANA.  All code, including thousands of ABAP and JAVA programs, must be rewritten to run on HANA.  And, as mentioned in a previous blog entry, SAP must significantly enhance HANA to be able to deal with cache coherency, transactional integrity, lock management and discrete data recovery, to name just a few of a long laundry list of challenges.  In other words, despite some overenthusiastic individuals’ assertions or innuendos that the transactional in-memory database will be available in 2 to 3 years, in reality, it will likely take far longer.

This means you have to wait to attain those benefits, right?  Wrong.  The technology exists today, with no code change, no data modeling and fully supported, to place entire databases, including indexes, in memory along with application servers, thereby achieving two of the three goals of HANA.  Let’s explore that a little further.  HANA, today, allows for 5 to 1 compression of most uncompressed data, but to make HANA work, an amount of memory equal to the compressed data must be allocated for temporary work space.  For example, a 1TB uncompressed database should require about 200GB for data and another 200GB for work space, resulting in a total memory requirement of 400GB.  A 10TB uncompressed database would require 4TB of memory, but since the supported configurations only allow for a total of 1TB per node, this would require a cluster of 4 @ 1TB systems.  Fortunately, IBM provides the System x3850, which is certified for exactly this configuration.  But remember, we are talking about a future in-memory transactional system, not today’s HANA.
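The HANA sizing arithmetic above can be captured in a few lines, using only the 5:1 compression and equal-work-space assumptions stated in the text:

```python
# HANA sizing arithmetic from the text: data compresses ~5:1, but an equal
# amount of memory must be reserved for work space, so
# required memory = 2 * (uncompressed size / 5).

def hana_memory_tb(uncompressed_tb, compression_ratio=5, workspace_factor=2):
    """Memory needed to hold a database in HANA, in TB."""
    return workspace_factor * uncompressed_tb / compression_ratio

print(f"{hana_memory_tb(1):.1f} TB")    # 1TB uncompressed  -> 0.4TB (400GB)
print(f"{hana_memory_tb(10):.1f} TB")   # 10TB uncompressed -> 4.0TB
```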

DB2 offers data compression of approximately 60% today which means that a 10TB database, if buffered completely in memory, would require 4TB of memory.  A tall requirement, but achievable since IBM offers a 4TB system called the Power 795.  However, IBM also offers a feature called Active Memory Expansion (AME), available only with Power Systems and AIX, which can make real memory appear to applications as if it was larger than it really is using memory compression.  DB2 is fully supported with this feature and can see an additional 20% to 35% compression using AME.   In other words, that same 10TB database may be able to fit within 3TB or less of real memory.
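The DB2-plus-AME sizing above follows the same kind of arithmetic, using only the compression figures quoted in the text (60% DB2 compression, and a 25% midpoint of the 20% to 35% AME range, which is my assumption for illustration):

```python
# DB2 + AME sizing arithmetic from the text: DB2 compression of ~60% leaves
# 40% of the data, and AME may squeeze that by a further 20-35% (25% assumed
# here as a midpoint).

def db2_memory_tb(uncompressed_tb, db2_compression=0.60, ame_compression=0.25):
    """Memory needed to fully buffer a DB2 database with AME, in TB."""
    after_db2 = uncompressed_tb * (1 - db2_compression)
    return after_db2 * (1 - ame_compression)

print(f"{db2_memory_tb(10, ame_compression=0.0):.1f} TB")  # no AME: 4.0TB fully buffered
print(f"{db2_memory_tb(10):.1f} TB")                       # with ~25% AME: 3.0TB
```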

Some customers might not need such a large system from a processing perspective, so two options exist.  One, with Capacity on Demand, customers can turn on a small fraction of the available processors while still having access to all available memory.  This would significantly reduce the cost of processor activations as well as AIX and related software licenses and maintenance.  For those customers that purchase their DB2 software on a per core basis, this would further reduce those costs, but clearly would have no effect on customers who purchase the SAP OEM edition of DB2.

A second option is to use DB2 PureScale, now a certified database option available for approved pilot customers of SAP.  With this option, using the same AME feature, a customer could cluster 3 @ 1TB Power Systems or 6 @ 500GB systems.

By the same token, SAP application servers can co-reside in the same OS instance or instances with DB2.  While this would add to the memory requirement, ABAP servers get even more benefit, with 50% or more compression under AME.

So, it is entirely possible and conceivable to build a full, in-memory database, using IBM Power Systems and DB2, housing both the database and application server, with no code changes, no data modeling, and with an established database system that is proven and handles all of the transactional database requirements noted above, today.  Assuming you already have a DB2 license, you would not even see any incremental software cost unless you move to DB2 PureScale, which does come at an additional cost of 2% SAV.  For those with licenses for Oracle or SQLserver, while I am no expert on DB2 cost studies, those that I have seen show very good ROI over relatively short periods, and that is before you consider the ability to achieve near-HANA-like performance described in this blog post.  Lastly, this solution is available under existing SAP license agreements, unlike HANA, which comes with a pretty significant premium.

September 18, 2011

HANA – Implications for UNIX systems

Ever since SAP announced HANA, I have received the occasional question about what this product means to UNIX systems, but the pace of those questions picked up significantly after Sapphire. Let me address the three phases of SAP in-memory database computing as I understand them.

HANA, the High-performance ANalytic Appliance, is the first in-memory database application. According to SAP, with very little effort, a company can extract large sets of data from their current SAP and non-SAP systems and, in near real time, keep that data extract up to date, at least for SAP systems. The data is placed into in-memory columns which are not only highly compressible but also very fast for ad-hoc searches. Though Hasso Plattner talked about 10 to 1 compression, individuals I have talked to who have direct experience with the current technology tell me that 5 to 1 is more likely. Even at 5 to 1, a 1TB conventional DB would fit into 200GB using HANA. The goal is not necessarily to replicate entire databases, including aged data that might be best archived, but to replicate only data that is useful in analyzing the business and developing new opportunities for driving revenue or reducing expenses. The promise is that analyses for which constructing the underlying systems and database schemas would have been prohibitively expensive and time consuming will now be very affordable. If true, companies could extend innovation potential to just about anyone in the company with a good idea rather than just the elite few analysts that perform this work at the direction of only top executives. This solution is currently based on Intel based systems running Linux from a pretty decent set of SAP technology partners. Though SAP has not eliminated any other type of systems from being considered for support, they have also not indicated a plan for support of any other type of system.

The next phase of in-memory database technology I picked up from various conversations and presentations at Sapphire. Two major areas were discussed. The first deals entirely with BW. The insinuation was that BWA and HANA are likely to be combined into a single set of technology and have the ability to run the entire BW database stack, thereby eliminating the need for a separate BW database server. I can imagine a lot of customers that already have BWAs or are planning on HANA finding this to be a very useful direction. The lack of transactional updates in such an environment makes this a very doable goal. Once again, SAP made no statements of support or elimination of support for any platform or technology.

The second area involves a small but historically troublesome portion of SAP transactions which involve much longer run times and/or large amounts of data transfer back and forth between database and application servers and consequently consume much larger amounts of resources. Though SAP was not specific, its goal is to use in-memory database technology to run the sets of SAP transactions that have these characteristics. Consider this a sort of coprocessor, similar to the way that BWA acted as a back-end database coprocessor for BW. Other than faster performance, this would be transparent to the end user. Programmers would see it, but perhaps just as an extension of the ABAP language for these sorts of transactions. Not all customers experience problems in this area. On the other hand, some customers deal with these pesky performance issues quite regularly and would therefore be prime candidates for such a technology. It is also, technically, quite a bit more complex to develop this sort of coprocessor. I would envision it coming out somewhat later than the in-memory BW database technology described above.

The last phase, pushed strongly by Hasso Plattner but barely mentioned by anyone else at SAP, involves a full transactional in-memory database. This would act as a full replacement for Oracle, DB2 and SQLserver databases. Strangely, no one representing those companies seemed very concerned about this, so, naturally, this sparked my interest. When I asked some database experts, I was given a little rudimentary education. Transactional databases are fundamentally different from primarily read-only databases populated by other databases. At the most basic level, a query in a read-only database can examine any data element with no regard for any other query that might be doing the same. A transactional database must determine whether a data element that may be changed by a transaction is locked by another transaction and, if so, what to do about it, e.g. wait, steal the lock, abandon the task, etc. At a slightly more advanced level, if an update to a read-only database fails, the data can simply be repopulated from the source. If an update fails in a transactional database, real data loss with potentially profound implications can result. Backup, recovery, roll forward, roll back, security, high availability, disaster recovery and dozens of other technologies have been developed by the database companies over time to ensure comprehensive database integrity. Those companies therefore believe that if SAP goes down this path, it will not be an easy or quick one and may be fraught with complications.
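The lock-conflict choices described above (wait vs. abandon the task) can be sketched with a toy lock manager. This is a hypothetical illustration of the concept, not any real database's implementation; the class and method names are my own.

```python
# Illustrative sketch of lock-conflict handling in a transactional database:
# when a transaction finds a row locked, it can wait (block) or abandon (abort).
# Hypothetical toy lock manager, not any real database's implementation.
import threading

class RowLockManager:
    def __init__(self):
        self._locks = {}             # row_id -> threading.Lock
        self._guard = threading.Lock()

    def _lock_for(self, row_id):
        with self._guard:
            return self._locks.setdefault(row_id, threading.Lock())

    def acquire(self, row_id, wait=True, timeout=1.0):
        """Return True if the row lock was obtained.

        wait=True  -> block up to `timeout` seconds (the 'wait' strategy)
        wait=False -> fail immediately (the 'abandon the task' strategy)
        """
        lock = self._lock_for(row_id)
        if wait:
            return lock.acquire(timeout=timeout)
        return lock.acquire(blocking=False)

    def release(self, row_id):
        self._lock_for(row_id).release()

mgr = RowLockManager()
mgr.acquire("row-42")                     # transaction A locks the row
print(mgr.acquire("row-42", wait=False))  # transaction B abandons: False
mgr.release("row-42")
print(mgr.acquire("row-42", wait=False))  # lock is free again: True
```

A read-only analytical store needs none of this machinery, which is exactly why the database vendors argue that bolting transactional integrity onto an in-memory analytical engine is neither easy nor quick.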

And then there is the matter of cost. The software portion of HANA is not inexpensive today. If SAP were to maintain a similar pricing model for the substantially more complicated transactional database of the future, customers could face database licensing costs twice or more what they pay currently for SAP OEM editions of DB2 or SQLserver, both licensed at 8% of SAV (SAP Application Value), or Oracle, licensed at 11% of SAV (but announced as growing to 15% this month, August).
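A quick worked example of those percentages. The $10M SAV figure is purely hypothetical; only the percentages come from the text above.

```python
# Worked example of the SAV-based license percentages quoted above.
# The $10M SAV figure is hypothetical; the percentages are from the text.
sav = 10_000_000  # hypothetical SAP Application Value in dollars

db2_or_sqlserver = 0.08 * sav   # OEM DB2/SQLserver: 8% of SAV
oracle_now       = 0.11 * sav   # Oracle today: 11% of SAV
oracle_announced = 0.15 * sav   # Oracle's announced increase: 15% of SAV

print(f"DB2/SQLserver: ${db2_or_sqlserver:,.0f}")
print(f"Oracle (11%):  ${oracle_now:,.0f}")
print(f"Oracle (15%):  ${oracle_announced:,.0f}")
```

Even before any HANA premium, the spread between 8% and 15% of SAV is material, which is why a "twice or more" HANA price would get customers' attention.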

This raises the question: what is broken today that an SAP in-memory transactional database would fix? If you can maintain virtually all of your valuable data in a read-only copy on HANA and perform all of the analyses your heart desires, what will a single transactional and analytical repository do that you cannot do today with the separate databases? Ten years ago, having two copies of a 10TB database would have required a big investment in disk subsystems. Now, 20TB is incredibly inexpensive and almost a rounding error in many IT budgets.

Bottom line: HANA looks like a real winner. Phase two has a lot of promise. Phase three looks like a solution looking for a problem. So, for those UNIX fans, database and application server demands will continue to be met primarily by existing technology solutions for a long time to come.

August 5, 2011 Posted by | Uncategorized