An ongoing discussion about SAP infrastructure

Oracle Exadata for SAP revisited

Oracle’s Exadata, Exalogic and Exalytics systems have failed to take the market by storm, but that has not stopped Oracle from pushing them as much as possible at every opportunity.  Recently, an SAP customer started to investigate the potential of an Exadata system for a BW environment.  I was called in to explain the issues surrounding such an implementation.  A couple of disclaimers before I start: I am not an Oracle expert, nor have I laid hands on an Exadata system, so what I present here is the result of my effort to get educated on this topic.  Thanks go to some brilliant people in IBM who are incredible Oracle and SAP experts and whose initials are R.B., M.C., R.K. and D.R.

My first question is: why would any customer implement BW on a non-strategic platform such as Exadata when BW HANA is available?  Turns out, there are some reasons, albeit a bit of a stretch.  Some customers may feel that BW HANA is immature and lacks the ecosystem and robust tools necessary for production use today.  This is somewhat valid and, from my experience, many customers tend to wait a year or so after V1.0 of any product to consider it for production.  That said, even prior to the GA of BW HANA, SAP reported that HANA sales were very strong, presumably for non-BW purposes.  Some customers may be abandoning the V1.0 principle, which makes sense for many HANA environments where there may be no other way, or only very limited ways, of accomplishing the task at hand, e.g. COPA.  The jury is still out on BW HANA, as there are valid and viable solutions today, including BW with conventional DBs and BWA.  Another reason revolves around sweetheart deals, where Oracle gives 80% or larger discounts to get the first footprint in a customer’s door.  Of course, sweetheart deals usually apply only to the first installation, rarely to upgrades or additional systems, which may result in an unpleasant surprise at that time.  Oracle has also signed a number of ULAs (Unlimited License Agreements) with some customers that include an Exadata as part of that agreement.  Some IT departments have learned about this only when systems actually arrived on their loading docks, not always something they were prepared to deal with.

Besides the above, what are the primary obstacles to implementing Exadata?  Most of these considerations are not limited to SAP.  Let’s consider them one at a time.

Basic OS installation and maintenance.  Turns out that despite the system looking like a single system to the end user, it operates like two distinct clusters to the administrator and DBA.  One is the RAC database cluster, which involves a minimum of two servers in a quarter rack of the “EP” nodes or a full rack of “EX” nodes, and up to 8 servers in a full rack of the “EP” nodes.  Each node must not only have its own copy of Oracle Enterprise Linux, but also a copy of the Oracle database software, Oracle Grid Infrastructure (CRS + ASM) and any Oracle tools that are desired, of which the list can be quite significant.  The second is the storage cluster, which involves a minimum of 3 storage servers for a quarter rack, 7 for a half rack and 14 for a full rack.  Each of these nodes has its own copy of Oracle Enterprise Linux and Exadata Storage software.  So, for a half rack of “EP” nodes, a customer would have 4 RAC nodes, 7 storage nodes + 3 InfiniBand switches, which may require their own unique updates.  I am told that the process for applying an update is complex, manual and typically sequential.  Updates typically come out about once a month, sometimes more often.  Most updates can be applied while the Exadata server is up, but storage nodes must be brought down, one at a time, to apply maintenance.  When a storage node is taken down for maintenance, apparently its data is not preserved, i.e. the node is wiped clean, which means that after a patchset is applied the data must be copied back from one of its ASM-created mirror copies.
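The sequential patch-and-resync flow described above can be sketched as follows.  This is purely my own illustration of the process as I understand it; none of the function names correspond to real Oracle utilities.

```python
# Illustrative sketch of sequential storage-node patching: each node is
# patched while offline, then must resync its (wiped) data from an ASM
# mirror copy before the next node can be taken down. All names are
# stand-ins, not real Oracle tooling.

def rolling_update(storage_nodes, apply_patch, resync):
    """Patch one node at a time, strictly in sequence."""
    for node in storage_nodes:
        apply_patch(node)   # node is offline while the patchset is applied
        resync(node)        # data copied back from an ASM mirror copy

log = []
rolling_update(
    ["cell01", "cell02", "cell03"],
    apply_patch=lambda n: log.append(f"patched {n}"),
    resync=lambda n: log.append(f"resynced {n}"),
)
print(log)  # each node fully patched and resynced before the next starts
```

The point of the sketch is simply that every node's downtime and resync is serialized, which is why a half rack with 7 storage nodes makes for a long maintenance window.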

The SAP Central Instance (CI) may be installed on an Exadata server, but if this is done, several issues must be considered.  One, the CI must be installed on every RAC node, individually, and the same goes for any updates.  When storage nodes are updated, the SAP/Exadata best practices manual states that the CI must be tested after the storage nodes are updated, i.e. you have to bring down the CI and consequently incur an outage of the SAP environment.

Effective vs. configured storage.  Exadata offers no hardware RAID for storage, only ASM software-based RAID10, i.e. it stripes the data across all available disks and mirrors those stripes to a minimum of one other storage server, unless you are using SAP, in which case the best practices manual states that you must mirror across 3 storage servers in total.  This offers effectively the same protection as RAID5 with a spare, i.e. if you lose a storage server, you can fail over access to the storage behind that server, which in turn is protected by a third server.  But this comes at a cost in the effective amount of storage, which is 1/3 of the total installed.  So, for every 100TB of installed disk, you get only 33TB of usable space, compared to RAID5 in a 6+1+1 configuration, which yields 75TB of usable space.  Not only is the ASM triple copy a waste of space, but every spinning disk consumes energy and creates heat which must be removed, and increases the number of potential failures which must be dealt with.
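As a quick sanity check on the arithmetic above (illustrative figures only):

```python
def usable_tb(installed_tb, data_fraction):
    """Usable capacity given the fraction of raw disk holding unique data."""
    return installed_tb * data_fraction

installed = 100  # TB of raw disk

# ASM triple mirror (SAP best practice): every extent is stored three times
asm_triple = usable_tb(installed, 1 / 3)

# RAID5 in a 6+1+1 layout: 6 data disks, 1 parity, 1 hot spare per group of 8
raid5 = usable_tb(installed, 6 / 8)

print(f"ASM triple-mirror usable: {asm_triple:.1f} TB")  # 33.3 TB
print(f"RAID5 6+1+1 usable:       {raid5:.1f} TB")       # 75.0 TB
```

In other words, for the same usable capacity the triple-mirror scheme needs more than twice the raw disk, with the corresponding power, cooling and failure-rate overhead.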

Single points of failure.  Each storage server has not one but over a dozen single points of failure.  The InfiniBand controller, the disk controller and every single disk in the storage server (12 per storage server) represent single points of failure.  Remember, data is striped across every disk, which means that if a disk is lost, the stripe cannot be repaired and another storage server must fulfill that request.  No problem, as you usually have 1 or 2 other storage servers to which that data has been replicated.  Well, big problem, in that the tuning of the system is based on striping not just across the disks within a storage server, but across all available storage servers.  In other words, while a single database request might access data behind a single storage server, complex or large requests will have data spread across all available storage servers.  This is terrific in normal operations, as it optimizes parallel read and write operations, but when a storage server fails and another picks up its duties, the one that picks up those duties now has twice the amount of storage to manage, resulting in more contention for its disks, cache, InfiniBand and disk controllers, i.e. the tuning for that node is pretty much wiped out until the failed storage node can be fixed.

Smart scans, not always so smart.  Like many other specialized data warehouse appliance solutions, including IBM’s Netezza, Exadata does some very clever things to speed up queries.  For instance, Exadata uses a range “index” to describe the minimum and maximum values for each column in a table for a selected set of rows.  In theory, this means that if a “where” clause requests data that is not contained in a certain set of rows, those rows will not be retrieved.  Likewise, “smart scan” retrieves only the columns that are requested, not all columns in a table for the selected query.  Sounds great, and several documents have explained when and why this works and does not work, so I will not try to do so in this document.  Instead, I will point out the operational difficulties.  The storage “index” is not a real index and works only with a “brute force” full table scan.  It is not a substitute for an intelligent partitioning and indexing strategy.  In fact, the term that Oracle uses is misleading, as it is not a database index at all.  Likewise, smart scans are brute force full table scans and don’t work with indexes.  This makes them useful for a small subset of queries that would normally do a full table scan.  Neither of these is well suited for OLTP, as OLTP typically deals with individual rows and utilizes indexes to determine the row in question to be queried or updated.  This means that these Exadata technologies are useful primarily for data warehouse environments.
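The min/max pruning idea behind the storage “index” is easy to show with a toy model.  This is my own simplification, not Oracle’s implementation; the region layout and names are invented for illustration.

```python
# Toy model of min/max pruning: each storage region records the minimum and
# maximum value of a column, and a region is skipped during a full scan when
# the predicate cannot possibly match anything inside it.

regions = [
    {"min": 1,   "max": 100, "rows": "region A"},
    {"min": 101, "max": 200, "rows": "region B"},
    {"min": 201, "max": 300, "rows": "region C"},
]

def regions_to_scan(predicate_value):
    """Return only regions whose [min, max] range could contain the value."""
    return [r["rows"] for r in regions
            if r["min"] <= predicate_value <= r["max"]]

print(regions_to_scan(150))  # ['region B'] -- the other regions are pruned
```

Note the catch this toy makes visible: pruning only pays off when values cluster into regions, and it only applies during a full scan, which is exactly why it is no substitute for real partitioning and indexing.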

So, let’s consider SAP BW.  Customers of SAP BW may have ad-hoc queries enabled where data is accessed in an unstructured and often poorly tuned way.  For these types of queries, smart scans may be very useful.  But those same customers may have dozens of reports and “canned” queries which are very specific about what they are designed to do and have dozens of well constructed indexes to enable fast access.  Those types of queries would see little or no benefit from smart scans.  Furthermore, SAP offers BWA and HANA that do an amazing job of delivering outstanding performance on ad-hoc queries.

Exadata also uses Hybrid Columnar Compression (HCC), which is quite effective at reducing the size of tables; Oracle claims about a 10 to 1 reduction.  This works very well at reducing the amount of space required on disk and in the solid state disk caches, but at a price that some customers may be unaware of.  One of the “costs” is that to enable HCC, processing must be done during construction of the table, meaning that the time required to import data may be substantially longer.  Another “cost” is the voids that are left when data is inserted or deleted.  HCC works best for infrequent bulk load updates, e.g. remove the entire table and reload it with new data, not daily or more frequent inserts and deletes.  In addition to the voids that it leaves, for each insert, update or delete, the “compression unit” (CU) must first be uncompressed and then recompressed, with the entire CU written out to disk, as the solid state caches are for reads only.  This can be a time consuming process, once again making this technology unsuitable for OLTP, much less for DW/BW databases with regular update processes.  HCC is unique to Exadata, which means that data backed up from an Exadata system may only be recovered to an Exadata system.  That is fine if Exadata is the only type of system used, but not so good if a customer has a mixed environment with Exadata for production and, perhaps, conventional Oracle DB systems for other purposes, e.g. disaster recovery.
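The CU rewrite cost is easy to see in a toy model.  This sketch uses ordinary zlib compression over plain text rows; real HCC is far more sophisticated, so treat this purely as an illustration of why changing one row touches the whole unit.

```python
import zlib

# Toy "compression unit": all rows are compressed together as one blob, so
# changing a single row forces the entire unit to be inflated, modified and
# recompressed, then written back out in full.

rows = [f"row-{i},value-{i}" for i in range(1000)]
cu = zlib.compress("\n".join(rows).encode())

def update_row(cu_blob, index, new_row):
    data = zlib.decompress(cu_blob).decode().split("\n")  # whole unit inflated
    data[index] = new_row                                 # one row changed
    return zlib.compress("\n".join(data).encode())        # whole unit rewritten

cu = update_row(cu, 42, "row-42,updated")
```

One changed row out of a thousand still costs a full decompress/recompress cycle of the unit, which is why HCC suits bulk reloads far better than steady trickle updates.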

Speaking of backup, it is interesting to note that Oracle only supports their own InfiniBand-attached backup system.  The manuals state that other “light weight” backup agents are supported, but apparently third parties like Tivoli Storage Manager, EMC’s Legato NetWorker or Symantec NetBackup are not considered “light weight” and are consequently not supported.  Perhaps you use a typical split-mirror or “flash” backup image that allows you to attach a static copy of the database to another system for backup purposes with minimal interruption to the production environment.  This sort of copy is often kept around for 24 hours in case of data corruption, allowing for a very fast recovery.  Sorry, but not only can you not use whatever storage standard you may have in your enterprise, since Exadata has its own internal storage, you can’t use that sort of backup methodology either.  The same goes for DR, where you might use storage replication today.  Not an option: only Oracle Data Guard is supported for DR.

Assuming you are still unconvinced, there are a few other “minor” issues.  SAP is not “RAC aware”, as has been covered in a previous blog posting.  This means that Exadata performance is limited in two ways.  First, a single RAC node represents the maximum possible capacity for a given transaction or query, as no parallel queries are issued by SAP.  Secondly, for data requested by an OLTP transaction, such as may be issued by ECC or CRM, unless the application server that is uniquely associated with a particular RAC node requests data that is hosted on that same node, the data will have to be transferred across the InfiniBand network within the Exadata system at speeds that are 100,000 times slower than local memory accesses.  Exadata supports no virtualization, meaning that you have to go back to a 1990s concept of separate systems for separate purposes.  While some customers may get “sweetheart” deals on the purchase of their first Exadata system, unless those customers are unprecedentedly brilliant negotiators, and better at it than Oracle, these “sweetheart” conditions are unlikely to last, meaning that upgrades may be much more expensive than the first expenditure.  Next is granularity.  An Exadata system may be purchased in a ¼ rack, ½ rack or full rack configuration.  While storage nodes may be increased separately from RAC nodes, these upgrades are also not very granular.  I spoke with a customer recently that wanted to upgrade their system from 15 cores to 16 on an IBM server.  As they had a Capacity on Demand server, this was no problem.  Try adding just 6.25% cpu capacity to an Exadata system when the minimum granularity is 100%!!  And the next level of granularity is 100% on top of the first, assuming you go from ¼ to ½ to full rack.
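To put the granularity comparison in numbers (figures taken from the example above; illustrative only):

```python
# Upgrade granularity: Capacity on Demand lets you add a single core,
# whereas Exadata's smallest upgrade doubles the system (1/4 -> 1/2 rack).

cod_step = (16 - 15) / 16   # one extra core on a 16-core CoD server
rack_step = 1.0             # quarter rack to half rack doubles capacity

print(f"CoD upgrade step:     {cod_step:.2%}")   # 6.25%
print(f"Exadata upgrade step: {rack_step:.0%}")  # 100%
```

A 6.25% increment versus a 100% increment means an Exadata customer whose workload grows modestly must either run out of headroom or pay for capacity they cannot yet use.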

Also consider best practices for High Availability.  Of course, we want redundancy among nodes, but we usually want to separate components as much as possible.  Many customers that I have worked with place each node in an HA cluster in separate parts of their datacenter complex, often in separate buildings on their campus or even geographic separation.  A single Exadata system, while offering plenty of internal redundancy, does not protect against the old “water line” break, fire in that part of the datacenter, or someone hitting the big red button.  Of course, you can add that by adding another ¼ or larger Exadata rack, but that comes with more storage that you may or may not need and a mountain of expensive software.    Remember, when you utilize conventional HA for Oracle, Oracle’s terms and conditions allow for your Oracle licenses to transfer, temporarily, to that backup server so that additional licenses are not required.   No such provision exists for Exadata.

How about test, dev, sandbox and QA?  Well, either you create multiple separate clusters within each Exadata system, each with a minimum of 2 RAC nodes and 3 storage nodes, or you have to combine different purposes and share environments that your internal best practices suggest should be separated.  The result is that either multiple non-prod systems or larger systems with considerable excess capacity may be required.  Costs, of course, go up proportionately or, worse, may not be part of the original deal and may receive a different level of discount.  This compares to a virtualized Power Systems box which can host partitions for dev, test, QA and DR replication servers simultaneously and without the need for any incremental hardware, beyond memory perhaps.  In the event of a disaster declaration, capacity is automatically shifted toward production, but dev, test and QA don’t have to be shut down unless the memory for those partitions is needed for production.  Instead, those partitions simply get the “leftover” cycles that production does not require.

Bottom line:  Exadata is largely useful only for infrequently updated DW environments, not the typical SAP BW environment, provides acceleration for only a subset of typical queries, is not useful for OLTP like ECC and CRM, is inflexible lacking virtualization and poor granularity, can be very costly once a proper HA environment is constructed, requires non-standard and potentially duplicative backup and DR environments, is a potential maintenance nightmare and is not strategic to SAP.

I welcome comments and will update this posting if anyone points out any factual errors that can be verified.


I just found a blog that has a very detailed analysis of the financials surrounding Exadata.  It is interesting to note that the author came to similar conclusions as I did, albeit from a completely different perspective.


May 17, 2012 Posted by | Uncategorized | 6 Comments

IBM PureSystems for SAP

On April 11, 2012, IBM announced a new family of converged architecture systems called PureSystems.  IBM does not use the term “converged architecture” in the materials released with this announcement, preferring the term “Expert Integrated Systems” because the offering goes well beyond the traditional definition of converged infrastructure.  Other companies have offered converged infrastructure in one form or another for a few years now.  HP introduced this concept several years ago, but in my countless meetings with customers, I have yet to hear a single customer mention it.  They talk about HP blades frequently, but nothing about the converged solution.  Cisco UCS, on the other hand, is much more often mentioned in this context.  While Oracle might try to suggest that they offer a set of converged infrastructure solutions, I believe that would be a stretch, as each of the Exa offerings stands on its own, each with its own management, storage and network framework.  The Exa solutions might better be described as special purpose appliances with a converged hardware/software stack.  Dell’s converged solution is basically a management layer on top of their existing systems.  That would be like IBM trying to suggest that IBM Systems Director is a converged infrastructure, which has never been the case.


IBM learned from the missteps and mistakes of our competitors and designed a solution that takes a leadership position.  Let’s take a short journey through this new set of offerings during which I will attempt to illustrate how it is superior to competitive offerings.  A more comprehensive look at PureSystems can be found at:


Converged infrastructures generally include servers, storage, networking, virtualization and management.  Efficient utilization of resources is at the cornerstone of the value proposition in that businesses can deliver significantly more application environments with fewer personnel, lower hardware costs, greater datacenter density and lower environmental costs.  Whether and how well companies deliver on these promises is where the story gets interesting.


Issue #1: open vs. closed.  Some, such as Oracle’s Exa systems, are so closed that existing storage, servers, virtualization or software that you may have from a company other than Oracle, with rare exceptions, cannot be part of an Exa system.  Others suggest openness, but rapidly become more closed as you take a closer look.  Cisco UCS is open as long as you want only x86 systems, networking and SAN switches only from Cisco, and virtualization only from VMware or Microsoft Hyper-V.  VCE takes UCS further and limits choices by including only EMC V-Max or Clariion CX4-480 and VMware.  By comparison, PureSystems are built on openness, starting with the choice of nodes, x86 and Power, and OSs: Microsoft Windows, Red Hat and SUSE Linux, AIX and IBM i.  Supported virtualization offerings include VMware, Hyper-V, KVM and PowerVM.  Storage can be almost anything that is supported by the IBM V7000, which includes most EMC, HDS, HP, NetApp and, of course, all IBM storage subsystems.  Networking is built into each PureSystems chassis, but supports network adapters from Broadcom, Emulex and Mellanox, Fibre Channel adapters from QLogic, Emulex and Brocade, plus both QDR and FDR InfiniBand adapters.  Top of rack (TOR) switching can be provided by just about any network technology of your choosing.  Management of the nodes, networking, storage and chassis is provided by IBM, but is designed to be compatible with IBM Systems Director, Tivoli and a variety of upstream managers.


Issue #2: management interface.  IBM spent a great many person-years developing a consistent, intuitive and integrated management environment for PureSystems.  Among a wide variety of cross-system management features, this new interface provides a global search feature allowing an administrator to quickly identify where a virtual resource is located in the physical world.  Ask any administrator and you will find this is a lot more difficult than it sounds.  Cisco does a great job of demonstrating UCS, based on an impressive level of prep work.  They show how easily images can be cloned and provisioned, and this is indeed a significant accomplishment.  The problem is that a significant amount of prep work is required.  Likewise, when changes occur in the underlying environment, e.g. a new storage subsystem is attached or expanded, or a new network environment is added, a different set of management tools must be utilized, each with its own interface and some less intuitive than others.  VCE offers a more consistent and intuitive interface, but at the cost of a very rigid set of components and software.  For instance, “Vblocks”, the term for VCE systems, must be implemented in large chunks, not granularly based on customer demands, must be “approved” for SW or firmware updates by VCE, even emergency fixes for known security issues, and do not allow any sort of outside components at all.


Issue #3: the network is the computer.  This is a bit tongue in cheek, as that was the slogan of the old Sun company (anyone remember them?).  Cisco’s architecture seems to be an echo of this old and outdated concept.  PureSystems, as noted above, provides an integrated network but allows a wide variety of adapters and upstream devices.  By choice, customers can directly integrate multiple chassis together without the need for a top of rack switch until and unless they want to communicate with external networks.  For instance, should an SAP application server have to use a TOR switch to talk to an SAP DB server?  Should a DB2 PureScale cluster have to use a TOR switch to talk among its nodes and to a central lock manager?  Should an Oracle RAC cluster have to incur additional latency when communicating with its distributed lock manager?  IBM believes the answer to all of the above is that it is up to the customer.  If you want to use a TOR switch, you can, but that should be your choice, not a mandate.  After all, IBM’s goal is to provide an excellent computing infrastructure, not sell switches.  By comparison, Cisco’s architecture is dependent on very expensive 6100 and similar interconnects.  In fact, Cisco even suggests that customers utilize VM-FEX technology, as they claim that it greatly simplifies network management.  What some customers may not realize is that to utilize this technology, you must disable the virtual switch used by VMware.  This switch allows different VMs on a single system to communicate at near memory speeds.  Using VM-FEX, that switch is disabled and all traffic, even between adjacent VMs, must flow via TOR switches; instead of interconnect latencies measured in 100s of nanoseconds, those latencies can be several orders of magnitude greater.


For SAP, it is reasonable to ask whether a converged infrastructure solution is required.  Clearly, the answer is no, as customers have been successfully implementing SAP on everything from single, massive, 2-tier virtualized servers to large arrays of 3-tier, small, non-virtualized systems and everything in between for many years now.  There is nothing on the SAP roadmap that specifies or certifies such technology.  But is there value in such a solution?  The answer, obviously, is yes.


While consolidation of many different SAP instances on large, 2-tier virtualized systems offers tremendous value to customers, there are a variety of reasons why customers chose to utilize a landscape with multiple, smaller servers.  Cost of acquisition is usually the biggest factor and is almost always less when small servers are utilized.  The desire to not have all eggs in one basket is another.  Some customers prefer to keep production and non-production separate.  Yet others are uncomfortable with the use of virtualization for some systems, e.g. Oracle database systems under VMware.  This is not intended to be a comprehensive list as there may be many other factors that influence the use of such an architecture.


If multiple systems are utilized, it is very easy to get into a situation in which their utilization is low, the number of systems is multiplying like rabbits, the cost of management is high, flexibility is low, the space required is ever increasing and power/cooling is a growing concern.  In this situation, a suitably flexible converged infrastructure solution may be the optimal solution to these problems.


PureSystems may be the best solution for many SAP customers.  For existing Power Systems customers, it allows for a very smooth and completely binary compatible path to move into a converged architecture.  Both 2-socket and 4-socket Power nodes are available, the p260 and p460.  PureSystems features an improved airflow design and a higher power capacity than IBM’s BladeCenter, which allows for nodes that can be outfitted with the latest processors running at their nominal frequency and with full memory complements.  As a result, these new nodes feature performance that is very close to the standalone Power 740 and 750 systems respectively.  With a very fast 10Gb/sec Ethernet backbone, these new nodes are ideal for virtualized application and non-production servers.  The p460 offers additional redundancy and support for dual VIO servers, which makes it an excellent platform for all types of servers, including database.  One, or many, of these nodes can be used as part of an SAP landscape featuring existing rack-mount servers and BladeCenter blades.  Live Partition Mobility is supported between any of these nodes, assuming compatible management devices, e.g. HMC, SDMC or PureFlex Manager.


Entire SAP landscapes can be hosted completely within one or more Pure Systems chassis.  Not only would such a configuration result in the most space efficient layout, but it would provide for optimized management and the fastest, lowest latency possible connections between app and DB servers and between various SAP components.


Some SAP customers feel that a hybrid approach, e.g. using Power Systems for database and x86 systems for application servers, is the right choice for them.  Once again, PureSystems delivers.  Power and x86 nodes may coexist in the same chassis, using the same management environment, the same V7000 virtualized data storage and, of course, the same network environment.  Clearly, the OSs, virtualization stacks and system characteristics are dependent on the underlying type of node, but regardless, they are all designed to work seamlessly together.


Yet other customers may prefer a 100% x86 landscape.  This is also completely supported and offers benefits similar to the 100% Power environment described above, with the inherent advantages or disadvantages of each respective platform, which have been discussed at some length in my other blog postings.


There are many good blogs that have discussed PureSystems.  Here are but a few that you may wish to check out:


May 8, 2012 Posted by | Uncategorized | 5 Comments

Oracle Exadata for SAP

On June 10, 2011, Oracle announced that SAP applications had been certified for use with their Exadata Database Machine. I was intrigued as to what this actually meant, what was included, what requirement was this intended to address and what limitations might be imposed by such a system. First the meaning: Did this mean that you could actually run SAP applications on an Exadata system? Absolutely not! Exadata is a database machine. It runs Oracle RAC. Exadata has been on the market for almost 2 years. Oracle 11G RAC has been certified to run SAP databases for well over a year now. Now, there is a formal statement of support for running SAP databases on Exadata. So, the obvious question, at least to me, is why did it take so long? What is fundamentally different about Oracle RAC on Exadata vs. Oracle RAC on any x86 cluster from an SAP perspective? To the best of my knowledge, SAP sees only a RAC cluster, not an Exadata system. I offer no conclusion, just an observation that this “certification” seems to have taken an awfully long time.

What was included? As mentioned before, you can’t run SAP applications on Exadata, which means that you must purchase other systems for application servers. Thankfully, you can run the CI on Exadata and can use Oracle Clusterware to protect it. In the FAQ and white papers published by Oracle, there is no mention of OracleVM or any other type of virtualization. While you can run multiple databases on a single Exadata system, all would have to reside in the same set of OS images.  This could involve multiple Oracle instances, whether RAC or not, under one OS, multiple databases under one Oracle instance, or even running different database instances on different nodes, for example.  Many customers choose to have one OS image per database instance to give them the flexibility of upgrading one instance at a time. Apparently, that is not an option when a customer chooses to use Exadata, so if a customer has this requirement, they may need to purchase additional Exadata systems. So, it might seem natural to assume that all of the software required or recommended to support this environment would be included in the SAP OEM edition of Oracle, but that would be wrong. Since Exadata is based on Oracle RAC, the RAC license must be obtained either through an additional cost for the OEM license from SAP or directly through Oracle. Active Data Guard and Real Application Testing, optional components but considered by many to be important when RAC is utilized, are also not included and must be purchased separately. Lastly, the Oracle Exadata Storage Server software must be purchased separately.

So, what problem is this intended to solve? Scalability? IBM’s System X, as well as several other x86 vendors, have published SAP 2-tier benchmarks in excess of 100,000 SAPS not to mention over 680,000 for IBM’s Power Systems. Using the typical 1:4 ratio of database to application server SAPS, this means that you could support an SAP requirement of at least 500,000 SAPS with a single high end x86 database server. Perhaps 1% or less of all SAP implementations need more capacity than this, so this can’t be the requirement for Exadata. How about high availability? Oracle RAC was already available on a variety of systems, so this is not unique to Exadata. A primary requirement of HA is to physically separate systems but Exadata places all of the nodes in a single rack, unless you get up to really huge multi-rack configurations, so using Exadata would go contrary to HA best practices.
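The sizing arithmetic behind that claim is straightforward (figures from the benchmarks cited above):

```python
# Applying the typical 1:4 database-to-application-server SAPS ratio to a
# published 2-tier benchmark result of 100,000 SAPS for a high-end x86 box.

db_saps = 100_000         # capacity of a single high-end x86 database server
app_saps = db_saps * 4    # application-server load that DB tier can sustain
total_saps = db_saps + app_saps

print(f"Total supportable SAP requirement: {total_saps} SAPS")  # 500000
```

Since very few SAP installations exceed half a million SAPS, a single conventional database server already covers all but the most extreme scalability requirements.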
Let us not forget about limitations. Already mentioned is the lack of virtualization. This can be a major issue for customers with more than one database. But what about non-production? For a customer that requires a database server for each of development, test and QA, not to mention pre-prod, post-prod or any of a variety of other purposes, this could drive the need for multiple other Exadata systems, each with radically more capacity than a customer could reasonably be expected to utilize. What if a customer has an existing storage standard? Exadata supports only its own storage, so it must be managed separately. Many customers utilize a static and/or space-efficient image of the database for backup purposes, but that requires a storage system that supports such a capability, not to mention the ability to mount that image on a separate server, both of which are not possible with Exadata.  A workaround might involve the use of Active Data Guard to create a synchronous copy on another system which can be utilized for backup purposes, but not only does this come at additional cost, it is also more complex, is not space efficient and might require additional systems capacity.  And then there are the known limitations of RAC for SAP.  While SAP is RAC enabled, it is not RAC aware. In other words, each SAP application server must be uniquely associated with a single RAC node and all traffic directed to that node. If data is located in the memory of another RAC node, that data must be moved, through cache fusion, to the requesting node, at speeds over 100,000 times slower than moving data through main memory. This is but one of the many tuning issues related to RAC, not intended as a knock on RAC, just a reality check. For customers that require the highest possible Oracle database availability, RAC is the best choice, but it comes at the cost of tuning and other limitations.

I am sure that I must be missing something, but I can’t figure out why any customer would need an Exadata system for SAP.

July 27, 2011 Posted by | Uncategorized | Leave a comment