After Vishal Sikka’s announcement that SAP was investigating the potential of HANA on IBM Power Systems, it seemed that all that was needed for this concept to become a reality was for IBM to invest the resources to help SAP port and optimize SAP HANA on Power (HoP), and for customers to weigh in on their desire for such a solution.
Many very large customers told us that they did let SAP know of their interest in HoP. IBM and SAP made the necessary investments for a proof of concept with HoP. This successful effort was an example of the outstanding results that happen when two great companies cooperate and put some of their best people together. However, there are still no commitments to deliver HoP in 2013, although SAP apparently has not ruled out such a solution at some point in the future. So, why should you care, since HANA already runs on x86?
Simple answer. Are you ready to bet your business on x86?
Do Intel systems offer the scalability that your business requires, and can those systems react fast enough to changing business conditions? Power scales far higher than x86, has no artificial limitations, and responds to changing demands almost instantly.
Are x86 systems reliable enough? Power Systems inherited a wide array of self-correcting and fault-tolerant features from the mainframe, still the industry standard for reliability.
Are x86 systems secure enough? Despite the best attempts by hackers, PowerVM has still never been breached.
Can you exploit virtualization or will you have to go back to a 1990s concept of islands of automation? The PowerVM hypervisor is part of every Power system, so it is virtualized by default and the journey that most customers have been on for most of this millennium can continue unabated.
What can you do about this? Speak up!! Call your SAP Account Executive and send them notes. Let them know that you are unwilling to take a chance on allowing your SAP Business Suite database systems to be placed on anything less than the most reliable, scalable, secure and flexible systems available, i.e. IBM Power Systems. Remind SAP that Business Suite DB already runs very well on current Power Systems and that until SAP is willing to support this platform for HANA, there is very little compelling reason for you to consider a move to HANA.
Sapphire is just a week away. This may be the best opportunity for you to deliver this message as most of SAP’s leadership will be present in Orlando. If they hear this message from enough customers, it is unlikely that they will simply ignore it.
For those of you who will be attending the SAPPHIRE and/or ASUG annual conference in Orlando, IBM will have a big presence. IBM will be presenting at 15 different ASUG and SAPPHIRE breakout sessions and will have a number of additional featured sessions. The IBM booth will have IBM experts on all facets of solutions around SAP, and we will get you in contact with the right people who can answer any questions you may have. You can also check out all of the new solutions IBM has for SAP at the IBM booth. I will personally be manning the IBM booth with respect to SAP solutions on z. Whether you are running SAP on z or not, it would be great to see you. Please stop by and say hello.
The following link http://www.ibm.com/solutions/sap/us/en/landing/N844815N07692Q43-3.html has lots of information on all of the different activities IBM will have at the SAPPHIRE and ASUG conference. For your convenience, some general information is listed below in section A. Some key topics covered in the Experiential Zone are covered below in section B. The 15 different breakout sessions are covered in section C below.
A) Join IBM at the SAPPHIRE NOW and ASUG Annual Conference, May 14 – 16 in Orlando
For more than four decades, IBM and SAP have worked together to deliver superior ROI through tens of thousands of successful implementations, helping companies innovate, adapt, and Compete in the Era of SMART.
Visit IBM at booth #1017 to see how businesses successfully integrate IBM products and solutions
–Front Office & Mobility Solutions: Improve Productivity Anywhere – Everywhere
–Cloud Computing: Reduce costs and improve flexibility
–SAP Analytics and SAP HANA
–Enterprise Application Services, Line of Business & Industry Solutions: Optimize SAP Investments
–Breakthrough Technologies: World Class Infrastructure, IBM DB2 and Middleware
Tuesday, May 14 1p – 1:40p: Under Armour speeds real time decisions to maximize product availability
Wednesday, May 15 4p – 4:40p: GM’s Big-Bang Service Parts Transformation
Tuesday, May 14 12p – 12:45p: Micro forum – Energize Your SAP Software Investment with IBM DB2 10.5 BLU Acceleration
B) Talk with our experts on key topics at the Experiential Zones:
What happens if you can’t make your product? How much money do you lose? What is the impact on an automotive enterprise and its customers, consumers, dealers, wholesalers, and shareholders? You rely on equipment, machines, and plants to produce the products you sell, and the Preventative Maintenance Solution from SAP has been developed to keep you from having any disruption. Our unique and powerful solution is the marriage of IBM’s and SAP’s innovative technologies. We have combined the IBM research department, our Watson artificial intelligence technology, SAP HANA, SAP BusinessObjects, SAP Mobility & Syclo, and IBM hardware. This innovative new solution will allow automotive OEMs, suppliers, and dealers to predict their maintenance strategy more accurately for each piece of equipment in their enterprise.
Learn how to decrease unplanned equipment / machine/ plant down time
Learn how to reduce planned downtime
Learn how to reduce replenishment stock
Learn how to get maximum value from parts and repair / replace before failure
Learn how to reduce work order mistakes
The SAP Predictive Analytics in a Connected Health Care solution establishes critical components of a closed-loop analytics environment. This solution establishes an industry-leading data model that captures front-office data (implant device, health and wellness, other 3rd-party patient data) and integrates it with core back-office data (inventory, customer, sales, manufacturing) to provide connected patient, device, payer, and provider analytics. In addition, advanced predictive analytics are used to evaluate the data captured in the data model, providing critical insight into patient and device health and ultimately driving lower health care costs. The connected care solution will demonstrate the power of integrating the front office and back office with real-time analytics through HANA, an approach that can be used for other medical device and life sciences companies, as well as for other industries dealing with similar ecosystem challenges.
The “IBM Loyalty Management Concept Application” connects seamlessly into IBM’s Enhanced Loyalty Management Solution for SAP and enables loyalty program members to connect and transact like never before. This comprehensive app shows real-time reward updates, loyalty account information, purchase history, and personalized promotions computed by IBM Research’s unique analytics engine. The mobile application provides for mCommerce and rewards redemption capabilities.
The use of the application results in increased customer loyalty, transaction volume, and promotional conversion rates.
Manage loyalty accounts – review your account points, status levels, and upcoming rewards
Get benefits and share – get personalized offers, share them with your social network, and add them to your loyalty card for future use
Find stores – use the store locator to search for directions and details
“Lay away”- a method to borrow points for future transactions
C) IBM break-out sessions at ASUG and SAPPHIRE
1) IBM and SAP Transportation Management – Experiences in the Transportation Industry
ASUG session 4505
Within IBM, SAP Transportation Management is a strategic focus and IBM has teamed with SAP to deliver a next generation transportation management solution for DHL. This session will describe the project scope, status, and IBM’s role and relationship with SAP and Transportation Management.
2) Portal/ESS From Blueprint and Workshops to WDA/ABAP Configuration, Security, and Second Level PIN Authentication to an Employee Self-Service Portal
ASUG session 2101
This presentation will cover ESS implementation experience with SAP NetWeaver Portal 7.3 and EhP5 ECC. Presenters will demonstrate the portal functionality of ESS time and pay statements and the integration of a second-level authentication. They’ll discuss an implementation approach of blueprinting via prototyping; show solutions to Web Dynpro ABAP configuration challenges; demo some ABAP enhancements for Info Type integration; and explain some of the security and onboarding process challenges. Presenters will also demonstrate the under-the-hood functionality of the second-level PIN authentication, including PIN reset and how it is stored on an Info Type. They will share documents, approaches, and configurations that are useful to anyone thinking of implementing ESS with NW 7.3 and EhP5+. This session will offer lessons learned from an NW 7.3 Portal and ECC EhP5 implementation.
3) Energize Your SAP Software Investment with IBM DB2 10.5 BLU Acceleration
SAPPHIRE session 88564
Whether you’re an SAP and DB2 veteran or want to explore superior alternatives to an incumbent non-DB2 database, join us to discuss how exciting new joint innovations, including BLU Acceleration, make an SAP software and DB2 LUW combination the best-of-breed business solution.
4) Case Study: How First Solar Achieved Real-Time Analysis of Supplier and Delivery Performance Metrics Using SAP HANA Enterprise Solution.
ASUG Session 1203
First Solar’s IT organization was challenged to keep pace with changing data needs driven by solar energy industry trends. Business users want reliable, real-time reporting capabilities to make informed and critical business decisions. At the same time, First Solar wants to enable business self-service analytical capabilities with the SAP HANA platform to address current and future information needs. The goal of the SAP HANA Enterprise project implementation was to move business analytics development from IT to the business users, provide business users access to more data, and deliver a robust BI reporting solution for supply chain management. Due to lengthy BW and ECC project timelines in the past as well as the aforementioned challenges, SAP HANA provided an ideal platform and better real-time analytics to assist business users with quick decision making. Key highlights of the project include: the SAP HANA platform provided a scalable solution for supply chain analysis, particularly for the procurement team; established a foundation for big data and real-time information delivery requirements across the organization and greatly reduced query response times; enabled better analysis of the elements of the business that leverage product and vendor master data, plus purchasing analytics for supplier and delivery performance; and reduced the burden on IT for reporting requests while providing a platform for quicker time-to-value analysis and automated information delivery methods.
5) BC Hydro’s Integrated Project and Portfolio Management Enterprise Solution
ASUG session 2803
BC Hydro’s Project and Portfolio Management (PPM) solution is an integrated solution for delivering capital projects, programs, and portfolios. The solution consists of: Primavera P6: scheduling; SAP Project Systems: WBS & cost management; SAP Enterprise Project Connection: Primavera P6 and SAP PS integration; SAP BW and P6 Reporting Database; Microsoft SharePoint: project document, issue, risk, and change management; and IBM Rational: practice reference tool. This case study presentation describes the PPM solution: its implementation, sustainment, challenges, and the critical success factors to ensure its successful adoption as an enterprise solution.
6) Sell, Deliver, and Invoice Bundles of Products and Services
SAPPHIRE session 94914
This discussion details ways to create customer invoices based on consumption of ordered bundles. Learn why IBM turned to SAP solutions for consumption and invoicing now that professional services firms require companies to sell, deliver, and invoice product bundles.
7) Implementation Scenarios & Architectural Considerations for SAP MII Implementations
ASUG session 1105
SAP MII is a composition and performance management platform for manufacturing integration and visualization, which requires specific architectural considerations to address requirements related to integration and messaging, data persistency, user interface development, deployment, and security. This session will take you through the different architectural scenarios and decisions that you may need to make while implementing SAP MII, along with industry-specific scenarios of SAP MII implementations. The following architectural scenarios and industry case studies will be explained in the presentation: SAP MII Deployment Architecture, SAP MII Data Persistency Architecture, SAP MII Integration & Messaging Architecture, SAP MII User Interface Options & Architecture, SAP MII Security Architecture, Typical Industry Scenarios for SAP MII, Implementation Process & Methods to Ease SAP MII Implementation, and IBM Cross-Industry Solution Assets on MII.
8) Localizing the Global Template – A Global SAP Transformation Challenge
ASUG session 1505
More and more multinational companies are either consolidating their worldwide SAP instances or embarking on green-field, global SAP-based transformation projects. These implementations are often complex and difficult. Typically, the integration of global operations running on a single SAP platform requires careful and systematic consideration of all business requirements, both global and local, as they may conflict with each other, be confusing, or be missed completely. In many cases, the consequences will be long lasting, if not permanent. But this does not have to happen. How do we bring all these puzzle pieces together while keeping two basic principles intact: 1) enabling the creation of a global template with common core processes, and 2) successfully staging deployment (only adding incremental legal and regulatory requirements) so as to manage project costs, risks, and return on investment (ROI)? Among other things, presenters will discuss a robust approach and look at corners of the world where legal and regulatory requirements are most challenging and need to be considered upfront during the global blueprint. They will explore localization accelerators and requirement databases. Presenters will also have a lively discussion on the intricacies of global cultures and how to strategically optimize the strength of each of these teams from Blueprint to Go-Live.
9) At IBM, Complex Offerings Just Got a Lot Less Complicated
ASUG Session 3506
The first rule of business: don’t sell products, sell solutions. Well, easier said than done, particularly for industries dealing with complex products and services. You might wonder, then, how IBM configures, prices, and quotes its complex offerings with such remarkable efficiency. Simple. IBM uses SAP Solution Sales Configuration to address issues surrounding complex configurations and solution selling. With this application, your organization will be able to: cross-sell and up-sell across multiple brands and product lines, provide a consistent experience and support for all audiences across multiple channels, and offer integrated real-time services for configuration and master data, so orders are filled accurately each time. Find out how SAP Solution Sales Configuration can shorten your sales cycle by generating quotes quickly, so your entire team can work more efficiently and productively every day.
10) SAP Enterprise Application Strategy in the Era of SAP HANA, Infrastructure, Platforms, Software and Everything as a Service
ASUG Session 2309
What are the forces that will shape your enterprise application landscape? How will SAP’s strategy of “on-premise” and “off-premise” impact your company’s strategy? What are the rules that can make your company’s strategy successful, and what are the pitfalls? What major shifts in technology and business might derail your strategy? Presenters will discuss several major companies’ strategies to date. They’ll discuss how to use private, hosted, and public cloud successfully today. Finally, they’ll give attendees several tools to help any company navigate the complex future.
11) SAP Enabled Procurement Transformation Lessons Learned at General Motors
ASUG Session 1212
This presentation will describe how GM (General Motors) was able to use SAP procurement technology to vastly improve and transform their procurement business processes. Attendees will learn how GM was able to implement SAP SRM 7.01 (Supplier Relationship Management), Sourcing 7.0, and BI Reporting into a complex, global procurement organization. The presentation includes both the technical challenges and how they were overcome, as well as the business challenges and how they were addressed. The presentation covers the project from its inception, through a successful pilot deployment, and subsequent roll-outs. This presentation includes a frank review of the challenges faced and provides the specifics of how they were addressed. Each attendee will leave with a deep understanding of how to utilize SAP tools to transform complex procurement organizations.
12) IBM’s “Predictive” Maintenance Solution for SAP Software
SAPPHIRE session 87335
This innovative new solution will allow automotive OEMs, suppliers, and dealers to more accurately predict their maintenance strategy for each piece of equipment in their enterprise, leveraging the IBM research department, the SAP HANA platform, SAP BusinessObjects software, SAP mobile apps and Syclo, and IBM hardware.
13) Portal: How to Deal with Role-Based Navigation Models for Different Countries and Languages
ASUG Session 2213
In this session, presenters will discuss the lessons learned from global SAP Portal implementations. They will share their experience with implementing global multi-language portals. They will demonstrate how they deal with multi-role navigation models, language preferences, and third-party integration based on country requirements. Presenters will show the method to build navigation models, integrate help functionality, and address some of the ESS and MSS process challenges such as position-based Personnel Change Requests (PCR), Organizational Change Requests (OCR), or people-based PCR/OCR. MSS personnel change requests and organizational change requests are part of the demonstration, as well as policy and help desk (shared service) integration.
14) See How Waters Used Webdynpro to Integrate an End-to-End Process Around Document Review and Approval
ASUG Session 2914
Waters has made significant gains with its global application of electronic document review and approval processes. These processes today support Product Project Development, Software Tool Validation, ISO procedures and policies, as well as integration with preliminary review of Service and Operating Manuals released with Engineering Change. Using Webdynpro, the casual SAP user as well as the seasoned veteran involved with document creation, review, and approval is now able to access documentation from a variety of access points for general inquiry as well as for formal review and approval actions, including: Product Development Collaboration Rooms, Portal & Portal UWL, and even Lotus Notes. Further near-term simplifications target 2-D text markups and integration with IBM’s Rational tool suite, giving Waters an expandable DMS platform that will meet its document management needs for many years to come.
15) Single View of the Truth: How a Canadian Federal Government Agency Optimized Their SAP and non-SAP Deployment with IBM Rational Solutions
ASUG Session 3314
Discover how one of the largest Canadian Federal Government agencies optimized their SAP and non-SAP ERP solutions with IBM Rational, by delivering a comprehensive and automated approach for managing their enterprise architecture, requirements, quality, and change management. By using IBM Rational’s solution, they are now able to manage change quickly and efficiently with cost-effective Rational software, processes, and services for SAP and non-SAP solutions under one “single view of the truth”. This offers several ongoing benefits: increased quality and speed of deployed business processes to SAP and non-SAP environments; centralized management of both SAP and non-SAP assets in a single “vanilla” ERP solution; centralization of all SAP and non-SAP business processes, linked to their respective business requirements and test plans; and an ability to manage and test SAP and non-SAP projects in a unified way. Three initiatives were used with this Canadian Federal Government agency to help optimize their SAP and non-SAP deliveries: enterprise planning for SAP and non-SAP, application lifecycle management for SAP and non-SAP, and quality management for SAP and non-SAP.
This content was created by Bob Wolf, North America Sales Exec – SAP on System z Solutions.
Before you get the wrong impression, SAP has not announced the availability of HANA on Power, and in no way should you interpret this posting as any sort of pre-announcement. This is purely a discussion about why you should care whether SAP decides to support HANA on Power.
As you may be aware, during SAP’s announcement in early January of the availability of HANA for Business Suite DB for ramp-up customers, Vishal Sikka, Chief Technology Officer and member of the Executive Board at SAP, stated: “We have been working heavily with [IBM]. All the way from lifecycle management and servers to services even Cognos, Business Intelligence on top of HANA – and also evaluating the work that we have been doing on POWER. As to see how far we can be go with POWER - the work that we have been doing jointly at HPI. This is a true example of open co-innovation, that we have been working on.” Ken Tsai, VP of SAP HANA product marketing, later added in an interview with IT Jungle, “Power is something that we’re looking at very closely now.” http://www.itjungle.com/fhs/fhs011513-story01.html. And from Amit Sinha, head of database and technology product marketing: “[HANA] on Power is a research project currently sponsored at Hasso Plattner Institute. We await results from that to take next meaningful steps jointly with IBM.” Clearly, something significant is going on. So, why should you care?
Very simply, the reasons why customers chose Power Systems (and perhaps HP Integrity and Oracle/Fujitsu SPARC/Solaris) for SAP DBs in the past, i.e. scalability, reliability, and security, are just as relevant with HANA as they were with conventional databases, perhaps even more so. Why more so? Because once the promise of real-time analytics on an operational database is realized, not necessarily in version 1.0 of the product but undoubtedly in the future, the value delivered by that capability would be lost just as surely if the system were unavailable or could not respond with the speed necessary for real-time analytics.
A little known fact is that HANA for Business Suite DB is currently limited to a single node. This means that scale-out configurations, common in the BW HANA space and others, are not an option for this implementation of the product. Until that changes, customers that wish to host large databases may require a larger number of cores than x86 vendors currently offer.
A second known but often overlooked fact is that parallel transactional database systems for SAP are often complex, expensive, and so limited that only two types of customers consider this option: those that need continuous or near-continuous availability, and those that want to move away from a robust UNIX solution and realize that to attain the same level of uptime as a single-node UNIX system with conventional HA, an Oracle RAC or DB2 PureScale cluster is required. Why is it so complex? Without getting into too much detail, we need to look at the way SAP applications work and interact with the database. As most are aware, when a user logs on to SAP, they are connecting to a unique application server and, until they log off, will remain connected to that server. Each application server is, in turn, connected to one node of a parallel DB cluster. Each request to read or write data is sent to that node and, if the data is local, i.e. in the memory of that node, the processing occurs very rapidly. If, on the other hand, the data is on another node, that data must be moved from the remote node to the local node. Oracle RAC and DB2 PureScale use two different approaches: Oracle RAC uses Cache Fusion to move the data across an IP network, while DB2 PureScale uses Remote DMA to move the data across the network without using an IP stack, thereby improving speed and reducing overhead. Though there may be benefits of one over the other, this posting is not intended to debate that point, but instead to point out that even with the fastest, lowest overhead transfer on an InfiniBand network, access to remote memory is still thousands of times slower than accessing local memory.
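To put rough numbers on that last point, here is a back-of-the-envelope model. The latency figures are purely illustrative assumptions (local DRAM on the order of 100 ns, an RDMA round trip on the order of 200 microseconds), not measurements of any particular product, but they show how even a small fraction of remote accesses dominates the average access time:

```python
# Back-of-the-envelope model of why data locality dominates parallel DB
# performance. Latency figures are illustrative assumptions, not
# measurements of any specific product or network.

LOCAL_NS = 100        # assumed local DRAM access: ~100 ns
REMOTE_NS = 200_000   # assumed RDMA round trip over InfiniBand: ~200 us

def effective_latency_ns(remote_fraction: float) -> float:
    """Average access latency when a fraction of lookups hit a remote node."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

for remote in (0.0, 0.01, 0.10, 0.50):
    avg = effective_latency_ns(remote)
    print(f"{remote:4.0%} remote -> {avg:>9,.0f} ns average "
          f"({avg / LOCAL_NS:,.1f}x local)")
```

With these assumed figures, just 1% of accesses going remote makes the average access roughly 21 times slower than pure local access, which is why a non-cluster-aware application pays such a heavy price on a parallel DB.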
Some applications are “cluster aware”, i.e. application servers connect to multiple DB nodes at the same time and direct traffic based on data locality, which is only possible if the DB and app servers work cooperatively to communicate which data is located where. SAP Business Suite is not currently cluster aware, meaning that without a major change in the NetWeaver stack, replacing a conventional DB with a HANA DB will not result in cluster awareness, and the HANA DB for Business Suite may need to remain a single-node implementation for some time.
Reliability and security have been the subject of previous blog posts and will be reviewed in some detail in an upcoming post. Clearly, where some level of outage may be tolerable for application servers due to an n+1 architecture, few customers consider outages of a DB server acceptable; even with a parallel cluster, outages may be mitigated but are still not considered tolerable, and as mentioned above, achieving this means dealing with the complexity, cost, and limitations of a parallel DB. Since HANA for Business Suite is a single-node implementation, at least for the time being, an outage or security intrusion would result in a complete outage of that SAP instance, perhaps more depending on the interaction and interfaces between SAP components. Power Systems has a proven track record among medium and large enterprise SAP customers of delivering the lowest level of both planned and unplanned outages and security vulnerabilities of any open system.
Virtualization and partition mobility may also be important factors to consider. As all Power partitions are by their very definition “virtualized”, it should be possible to dynamically resize a HANA DB partition, host multiple HANA DB partitions on the same system, and even move those partitions around using Live Partition Mobility. By comparison, an x86 environment lacking VMware or similar virtualization technology could do none of the above. Though, in theory, SAP might support x86 virtualization at some point for production HANA Business Suite DBs, they don’t currently, and there are a host of reasons why they should not, which are the same reasons why any production SAP databases should not be hosted on VMware, as I discussed in my blog posting: http://saponpower.wordpress.com/2011/08/29/vsphere-5-0-compared-to-powervm/ Lacking x86 virtualization, a customer might conceivably need a DB/HA pair of physical machines for each DB instance, compared to potentially a single DB/HA pair for a Power based virtualized environment.
And now a point of pure speculation. With a conventional database, basis administrators and DBAs weigh the cost/benefit of different levels in a storage hierarchy including main memory, flash, and HDDs. Usually, main memory is sized to contain upwards of 95% of commonly accessed data, with flash used for logs and some hot data files and HDDs for everything else. For some customers, 30% to 80% of an SAP database is utilized so infrequently that keeping aged items in memory makes little sense and would add cost without any associated benefit. Unlike conventional DBs, with HANA there is no choice: 100% of an SAP database must reside in memory, with flash used for logs and HDDs used for a copy of the data in memory. Not only does this mean radically larger amounts of memory must be used, but as a DB grows, more memory must be added over time. Also, more memory means more DIMMs, with an associated increase in DIMM failure rates, power consumption, and heat dissipation. Here Power Systems once again shines. First, IBM offers Power Systems with much larger memory capacities and also offers Memory on Demand on Power 770 and above systems. With this, customers can pay for just the memory they need today and incrementally and non-disruptively add more as they need it. That is not speculation, but the following is. Power Systems running AIX offers Active Memory Expansion (AME), a unique feature which allows infrequently accessed memory pages to be placed into a compressed pool which occupies much less space than uncompressed pages. AIX then transparently moves pages between the uncompressed and compressed pools based on page activity, using a hardware accelerator in POWER7+. In theory, a HANA DB could take advantage of this in an unprecedented way. Tests with DB2 have shown a 30% to 40% expansion rate (i.e. 10GB of real memory looks like 13GB to 14GB to the application); since potentially far more of a HANA DB would have low-use patterns, it may be possible to size the memory of a HANA DB at a small fraction of the actual data size and consequently at a much lower cost, plus associated lower rates of DIMM failures and less power and cooling.
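To illustrate the arithmetic behind this speculation, consider the sketch below. The 1.3x and 1.4x factors are the DB2 expansion results cited above; the 2.0x factor and the 2 TB database size are purely hypothetical, and nothing here is an SAP-supported sizing:

```python
# Illustrative sizing arithmetic for Active Memory Expansion (AME).
# The 1.3x/1.4x factors are the DB2 expansion results cited above;
# the 2.0x factor and 2 TB database size are hypothetical.

def physical_memory_needed(db_size_gb: float, expansion_factor: float) -> float:
    """Physical RAM required if AME makes each real GB look like
    expansion_factor GB to the application."""
    return db_size_gb / expansion_factor

db_size_gb = 2048   # hypothetical 2 TB in-memory database
for factor in (1.0, 1.3, 1.4, 2.0):
    need = physical_memory_needed(db_size_gb, factor)
    print(f"expansion {factor:.1f}x -> {need:,.0f} GB physical "
          f"({db_size_gb - need:,.0f} GB saved)")
```

Even at the conservative 1.3x factor, the hypothetical 2 TB database would need roughly 1,575 GB of physical memory rather than 2,048 GB, with the savings compounding at higher factors.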
If you feel that these potential benefits make sense and that you would like to see a HoP option, it is important that you share this desire with SAP, as they are the only ones that can make the decision to support Power. Sharing your desire does not imply that you are ready to pull the trigger or that you won’t consider all available options, simply that you would like to be informed about SAP’s plans. In this way, SAP can gauge customer interest, and you can have the opportunity to find out which of the above suggested benefits might actually be part of a HoP implementation, or even get SAP to consider supporting one or more of them that you consider to be important. Customers interested in receiving more detailed information on the HANA on Power effort should approach their local SAP Account Executive in writing, requesting disclosure information on this platform technology effort.
Cloud means many things to many people. One definition, popularized by various internet-based organizations, refers to the cloud as a repository of web URLs, email, documents, pictures, videos, information about items for sale, etc. on a set of servers maintained by an internet provider, where any server in that cluster may access and make available to the end user the requested object. This is a good definition for those types of services; however, SAP does not exist as a set of independent objects that can be stored and made available on such a cloud.
Another definition involves the dynamic creation, usage, and deletion of system images on a set of internet-based servers hosted by a provider. Those images could contain just about anything, including SAP software and customer data. Security of customer data, both on disk and in transit across the internet, service level agreements, reliability, backup/recovery, and government compliance (where appropriate) are just a few of the many issues that have to be addressed in such implementations. Non-production systems are well suited for this type of cloud, since many of the above issues may be less of a concern than for production systems. Of course, that is only the case when no business data or intellectual property, e.g. developed ABAP or Java code, is stored on such servers; once it is, these systems become increasingly sensitive, like production. This type of public cloud may offer a low cost for infrequently accessed or low-utilization environments. Those economics can often change dramatically as usage increases or if more controls are desired.
Yet another definition utilizes traditional data center hosting providers that offer robust security, virtual private networks, high speed communications, high availability, backup/recovery and thorough controls. The difference between conventional static hosting and cloud hosting is that the resources utilized for a given customer or application instance may be hosted on virtual rather than dedicated systems, available on demand, may be activated or removed via a self-service portal and may be multi-tenant, i.e. multiple customers may be hosted on a shared cloud. While more expensive than the above cloud, this sort of cloud is usually more appropriate for SAP production implementations and is often less expensive than building a data center, staffing it with experts, acquiring the necessary support infrastructure, etc.
As many customers already own data centers, have large staffs of experts and host their own SAP systems today, another cloud alternative is often required: a Private Cloud. These customers often wish to reduce the cost of systems by driving higher utilization, shared use of infrastructure among various workloads, automatic load balancing, improvements in staff productivity and potentially even self-service portals for on demand systems with charge back accounting to departments based on usage.
Utilizing a combination of tools from IBM and SAP, customers can implement a private cloud and achieve as many of the above goals as desired. Let’s start with SAP. SAP made its first foray into this area several years ago with its Adaptive Computing Controller (ACC). Leveraging SAP application virtualization, it allowed basis administrators to start, stop, and relocate SAP instances. This helped SAP gain a much deeper appreciation for customer requirements, which enabled them to develop SAP NetWeaver Landscape Virtualization Management (LVM). SAP, very wisely, realized that attempting to control infrastructure resources directly would require a huge effort and continuous updates as partner technology changed, not to mention an almost unlimited number of testing and support scenarios. Instead, SAP developed a set of business workflows to allow basis admins to perform a wide array of common tasks. They also developed an API and invited partners to write interfaces to their respective cloud-enabling solutions. In this way, while governing a workflow, SAP LVM simply requests a resource from, for example, the partner’s systems or storage manager and, once that resource is delivered, continues with the rest of the workflow at the SAP application level.
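Conceptually, the division of labor looks something like the sketch below. SAP publishes the actual LVM partner API only to partners, so every class, method, and value here is invented for illustration; the point is the pattern: LVM owns the SAP-level workflow and delegates each infrastructure step to a partner-supplied adapter.

```python
# Hypothetical sketch of the LVM workflow pattern described above: the
# workflow engine owns the SAP-level steps and delegates infrastructure
# steps to a partner adapter. All names are invented for illustration.

from abc import ABC, abstractmethod

class InfrastructureAdapter(ABC):
    """What a partner's systems/storage manager plugin would implement."""
    @abstractmethod
    def provision_host(self, cpu: int, mem_gb: int) -> str: ...
    @abstractmethod
    def clone_storage(self, source_volume: str) -> str: ...

class FakeAdapter(InfrastructureAdapter):
    """Stand-in for a real partner plugin, for demonstration only."""
    def provision_host(self, cpu: int, mem_gb: int) -> str:
        return f"host-{cpu}cpu-{mem_gb}gb"
    def clone_storage(self, source_volume: str) -> str:
        return f"{source_volume}-clone"

def system_copy(adapter: InfrastructureAdapter, source_sid: str,
                target_sid: str, volume: str) -> None:
    """LVM-style system copy: infrastructure steps go to the adapter,
    SAP-level post-copy steps stay with the workflow engine."""
    host = adapter.provision_host(cpu=8, mem_gb=256)   # partner's job
    clone = adapter.clone_storage(volume)              # partner's job
    print(f"attach {clone} to {host}")                 # workflow continues
    print(f"rename SID {source_sid} -> {target_sid}; run post-copy automation")

system_copy(FakeAdapter(), "PRD", "QAS", "vol-prd-data")
```

The design benefit of this split is exactly the one SAP identified: the workflow engine never needs to know how a given vendor provisions hosts or clones storage, so partner technology can evolve behind the adapter without changes to the workflows.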
IBM was an early partner with SAP ACC and has continued that partnership with SAP LVM. By integrating storage management, the solution and enablement in the IBM Power Systems environment is particularly thorough, probably the most complete of its kind on the market. IBM offers two types of systems managers: IBM Systems Director (SD) and IBM Flex System Manager (FSM). SD is appropriate for rack-based systems, including conventional Power Systems, in addition to IBM’s complete portfolio of systems and storage. As part of that solution, customers can manage physical and virtual resources, maintain operating systems, consolidate error management, control high availability, and even optimize data center energy utilization. FSM is a manager specifically for IBM’s new PureSystems family of products, including several Power Systems nodes. FSM is focused on the management of the components delivered as part of a PureSystems environment, whereas SD is focused on the entire data center, including PureSystems, storage, and rack-based systems. Otherwise, the functions, in an LVM context, are largely the same. FSM may be used with SD in a data center, either side by side or with FSM feeding certain types of information up to SD. IBM also offers a storage management solution called Tivoli Storage FlashCopy Manager (FCM). This solution drives the non-disruptive copying of filesystems on appropriate storage subsystems such as IBM’s XIV, as well as virtually any IBM or non-IBM storage subsystem through the IBM SAN Volume Controller (SVC) or V7000 (basically an SVC packaged with its own HDD and SSD).
Using the above, SAP LVM can capture OS images including SAP software, find resources on which to create new instances, rapidly deploy images, move them around as desired to load balance systems or when preventative maintenance is desired, monitor SAP instances, provide advanced dashboards, a variety of reports, and make SAP system/db copies, clones or refreshes including the SAP relevant post-copy automation tasks.
What makes the IBM Power Systems implementation unique is the integration between all of the pieces of the solution. Using LVM with Power Systems and either SD, FSM, or both, a basis admin can see and control both physical and virtual resources, as PowerVM is built in and part of every Power System automatically. This means that when, for instance, a physical node is added to an environment, SD and FSM can see it immediately, which means LVM can also see it and start using it. In the x86 world, there are two supported configurations for LVM: native and virtualized. Clearly, a native installation is limited by its very definition, as all of the attributes of resource sharing, movement, and some management features that come with virtualization are not present in a native installation.
According to SAP Note 1527538 – SAP NetWeaver Landscape Virtualization Management 1.0, currently only VMware is supported for virtualized x86 environments. LVM with VMware/x86 based implementations rely on VMware vCenter meaning they can control only virtual resources. Depending on customer implementation, systems admins may have to use a centralized systems management tool for installation, network, configuration and problem management, i.e. the physical world, and vCenter for the virtual world. This contrasts with SD or FSM which can manage the entire Power Systems physical and virtual environment plus all of the associated network and chassis management, where appropriate.
LVM with Power Systems and FCM can drive full database copy/clone/refresh activity through disk subsystems. Disk subsystems such as IBM XIV can make copies very quickly in a variety of ways. Some make pointer-based copies, which means that only changed blocks are duplicated and a “copy” is made available almost immediately for further processing by LVM. In some situations and/or with some disk subsystems, a full copy process, in which every block is duplicated, might be utilized, but this happens at the disk subsystem or SAN level without involving a host system, so it is not only reasonably fast but also does not consume host system resources. In fact, a host system in this configuration does not even need to stop processing; it merely places the source DB into “logging only” mode and resumes normal operating mode a short time later, once the copy has been initiated.
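For readers who have not encountered pointer-based copies before, here is the general idea in miniature. This is a toy model of the copy-on-write technique, not the actual XIV or FlashCopy Manager implementation: the “copy” shares every unmodified block with its source and stores only the blocks changed afterwards.

```python
# Toy model of a pointer-based ("changed block only") copy. This is the
# general copy-on-write idea, not the actual XIV or FCM implementation.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class PointerCopy:
    """A 'copy' that shares unmodified blocks with its source volume."""
    def __init__(self, source: Volume):
        self.source = source
        self.changed = {}                 # block index -> private copy

    def read(self, i):
        return self.changed.get(i, self.source.blocks[i])

    def write(self, i, data):
        self.changed[i] = data            # only changed blocks use new space

vol = Volume([b"A", b"B", b"C", b"D"])
snap = PointerCopy(vol)                   # "copy" available immediately
snap.write(1, b"B2")
print(snap.read(0), snap.read(1))         # b'A' is shared, b'B2' is private
print(f"extra blocks stored: {len(snap.changed)} of {len(vol.blocks)}")
```

Because creating the copy is just creating the pointer structure, it is near-instant regardless of database size, and the space consumed grows only with subsequent changes.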
LVM with x86 offers two options. Option 1: utilize VMware and its storage copy service. Option 2: utilize LVM natively or with VMware and use a separate plugin from a storage subsystem vendor. Option 2 works pretty much the same as the Power/FCM solution described above, except that only certain vendors are supported, and any integration of plugins from different companies, not to mention any troubleshooting, is a customer task. It might be worthwhile to consider the number of companies that might be required to solve a problem in this environment, e.g. SAP, VMware, the storage subsystem vendor, the OS vendor, and the systems vendor.
For Option 1, VMware drives copies via vCenter using a host-only process. According to the above mentioned SAP Note, “virtualization based cloning is only supported with Offline Database.” This might be considered a bit disruptive by some and impossible to accommodate by others. Even though it might be theoretically possible to use a VMware snapshot, a SID rename process must be employed for a clone, and every table must be read in and then out again with changes to the SID. (That said, for some other LVM activities not involving a full clone, a VMware snapshot might be used.) As a result, VMware snapshots may quickly take on the appearance of a full copy, so they may not be the best technology to use, both for the overhead on the system and the fact that VMware itself does not recommend keeping database snapshots around for more than a few days at most; the clone process therefore typically uses the full copy option. When the full copy is initiated by VMware, every block must be read into VMware and then back out. Not only is this process slow for large databases, but it places a large load on the source system, potentially resulting in poor performance for other partitions during this time. Since a full copy is utilized, a VMware based copy/clone will also take radically more disk storage than a Power/XIV based clone, which is fully supported with a “changed block only” copy.
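Some rough arithmetic shows why a host-based full copy hurts for large databases. The database size and throughput below are illustrative assumptions, not VMware measurements:

```python
# Rough arithmetic for a host-based full copy. The database size and
# throughput are illustrative assumptions, not VMware measurements.

db_tb = 2              # hypothetical 2 TB source database
throughput_mb_s = 400  # assumed sustained copy throughput through the host

# Every block is read into the host and written back out, so roughly
# twice the database size moves through the host.
data_moved_mb = db_tb * 1024 * 1024 * 2
hours = data_moved_mb / throughput_mb_s / 3600
print(f"{db_tb} TB full copy at {throughput_mb_s} MB/s: "
      f"~{hours:.1f} hours of sustained load on the source host")
# A pointer-based clone, by contrast, is available almost immediately
# and consumes space only for blocks changed afterwards.
```

Under these assumptions, the hypothetical 2 TB clone ties up the source host for roughly three hours, which is exactly the load and delay that a disk-subsystem-level, changed-block-only copy avoids.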
Of course, the whole discussion of using LVM with vCenter may be moot. After all, the assumption is that one would be utilizing VMware for database systems. Many customers choose not to do this for a variety of reasons, from multiple single points of failure to scaling, database vendor support, and potential issues in problem resolution due to the use of a multi-layer, multi-vendor stack, e.g. hardware from one vendor with proprietary firmware from another vendor, a processor chip from yet another vendor, virtualization software from VMware, an OS from Microsoft, SUSE, Red Hat, or Oracle, not to mention high availability and other potential issues. Clearly, this would not be an issue if one eliminated database systems from the environment, but that is where some of the biggest benefits of LVM are realized.
LVM, as sophisticated as it is, does not address all of the requirements that some customers might have for a private cloud. The good news is that it doesn’t have to. IBM supplies a full range of cloud-enabling products under the brand name IBM SmartCloud. These tools range from an “Entry” product suitable for adding a simple self-service portal, some additional automation, and some accounting features, to a full-feature “Enterprise” version. Those tools call SD or FSM functions to manage the environment, which is quite fortunate, as any changes made by those tools are immediately visible to LVM, thereby completing the circle.
SAP and IBM collaborated to produce a wonderful and in-depth document that details the IBM/Power solution: https://scn.sap.com/docs/DOC-24822
A blogger at SAP has also written extensively on the topic of Cloud for SAP. You can see his blog at: http://blogs.sap.com/cloud/2012/07/24/sap-industry-analyst-base-camp-a-recap-of-the-sap-cloud-strategy-session/
SAP TechEd 2012 in Las Vegas is now finished, and what a great conference it was. The conference started off with a bang for IBM Power Systems customers using SAP on the “i for Business” platform. The attendance for these “i” sessions was about 50% higher than last year, which shows the continued interest in the high performance and low management overhead offered by this platform.
IBM’s presence on the showroom floor was quite interesting. The main booth featured experts in many areas and a unique virtual hardware device which allowed pretty much any IBM hardware solution to be viewed in 3D, interactively. In addition, IBM was present in the Intel booth, featuring both our HANA solution and IBM PureSystems. IBM also participated in the SAP HANA technology showcase, and an IBM partner, Bluefin Solutions, competed in the “SAP HANA: Real Time Race” against a Cisco/EMC partner, Optimal Solutions, http://scn.sap.com/community/hana-in-memory/blog/2012/10/16/the-sap-hana-real-time-race-is-on-at-sap-teched-las-vegas. In that race, each contestant used a 4-node HANA system in a contest with two distinct phases: migration of a massive amount of data from an existing SAP system, and then processing and visualizing the results. Bluefin smoked Optimal on the import, finishing before Optimal could even get their import started. Optimal won the race overall due to their performance in the visualization phase. From my perspective, this test proved two things: IBM’s HANA solution showed its superior performance and ease of use in the first phase, and Optimal won the second phase based on their knowledge of and experience with data visualization. This is not to imply that Optimal would have won against another integrator such as IBM GBS, simply that in this test they demonstrated better visualization results than Bluefin.
During the main conference, IBM personnel delivered 14 different sessions, including ones on HANA, Cloud, Security, Process Optimization, Mobility, Risk, and Database, and last, hopefully not least, the second of my two sessions on my favorite topic, i.e. why IBM Power Systems offers advantages over x86 systems for SAP. Many of those sessions were followed up with Expert Networking Sessions on the showroom floor. Those sessions were designed to allow for a more free-flowing conversation and question-and-answer format to address any issues not covered in the formal sessions. For my two sessions, I found that there were far more attendees that had not attended the formal sessions and even more that stopped to listen as they walked around the exhibit hall.
Three of the sessions that IBM delivered were videotaped and have now been made available on the web. The links are below. The third of these is the session that I delivered on Cloud enablement of SAP environments on IBM Power Systems. This will also be the subject of a future blog post, but for the time being, I encourage you to take a look and invite comments based on the video.
IBM STG has three SAP TechEd Las Vegas replays available for IBM, customers, system integrators, and business partners:
TEC 226: Managed SAP Virtualization on Intel Xeon-based IBM PureSystem with LVM
You can also find it on SAP TechEd Online at:
TEC217: Head in the Cloud? Learn how to Keep Your Feet on the Ground (Power Cloud & PureFlex)
You can also find it on SAP TechEd Online at:
TEC221: IBM and SAP HANA: Scalable, Available, and Simpler with Disaster Recovery
You can also find it on the virtual platform at:
Not often does a sponsored study show the opposite of what was intended, but this one does. An astute blog reader alerted me to a white paper sponsored by HP, VMware, and Intel and produced by an organization called Enterprise Strategy Group (ESG). The white paper is entitled “Lab Validation Report – HP ProLiant DL980, Intel Xeon, and VMware vSphere 5 SAP Performance Analysis – Effectively Virtualizing Tier-1 Application Workloads – By Tony Palmer, Brian Garrett, and Ajen Johan – July 2011.” The words that they used to describe the results are, as expected, highly complimentary to HP, Intel, and VMware. In this paper, ESG points out that almost 60% of the respondents to their study have not yet virtualized “tier 1” applications like SAP but expect a rapid increase in the use of virtualization. We can only assume that they surveyed x86 customers exclusively, as 100% of Power Systems customers are virtualized since the PowerVM hypervisor is baked into the hardware and firmware of every system and can’t be removed. Nevertheless, it is encouraging that customers are moving in the right direction and that there is so much potential for the increased use of virtualization.
ESG provided some amazing statistics regarding scalability. ESG probably does not realize just how bad this makes VMware and HP look; otherwise, they probably would not have published it. They ran an SAP ECC 6.0 workload which they describe as “real world” but for which they provide no backup as to what the workload comprised, so it is possible that a given customer’s workload may be even more intensive than the one tested. They ran a single VM with 4 vcpu, then 8, 16, and 32. They show the number of users supported as well as the IOPS and dialog response time. Then, in their conclusions, they state that scaling was nearly linear. This data shows that when scaling from 4 to 32 vcpu, an 8x increase, the number of users supported increased from 600 to 3,000, a 5x increase. Put a different way, 5/8 = .625, or 62.5% scalability. Not only is this not even remotely close to linear scaling, it is an amazingly poor level of scalability. IOPS, likewise, increased from 140 to 630, demonstrating 56.3% scalability, and response time went from .2 seconds to 1 second, which, while respectable, was 5 times that of the 4 vcpu VM.
ESG also ran a non-virtualized test with 32 physical cores. In this test, they achieved only 4,400 users and 943 IOPS. Remember, VMware is limited to 32 vcpu, which works out to the equivalent of 16 physical cores. So, with twice the number of effective physical cores, they were only able to support 46.7% more users and 49.7% more IOPS. To make matters much worse, response time almost doubled to 1.9 seconds.
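The scalability figures above fall out of a one-line calculation: divide the throughput ratio by the resource ratio, where 100% would be perfectly linear scaling. Re-running the ESG numbers:

```python
# Scaling efficiency = (throughput ratio) / (resource ratio); 100% is linear.

def efficiency(v_small, v_big, r_small, r_big):
    return (v_big / v_small) / (r_big / r_small)

# Virtualized, 4 vcpu -> 32 vcpu (figures from the ESG report):
print(f"users: {efficiency(600, 3000, 4, 32):.2%}")   # 62.50%
print(f"IOPS:  {efficiency(140, 630, 4, 32):.2%}")    # 56.25%

# Native 32 physical cores vs. the 32 vcpu VM (~16 physical cores):
print(f"native user gain for 2x the cores: {4400 / 3000 - 1:.1%}")   # 46.7%
print(f"native IOPS gain for 2x the cores: {943 / 630 - 1:.1%}")     # 49.7%
```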
ESG went on to make the following statement: “Considering that the SAP workload tested utilized only half of the CPU and one quarter of the available RAM installed in the DL980 tested, it is not unreasonable to expect that a single DL980 could easily support a second virtualized SAP workload at a similarly high utilization level and/or multiple less intensive workloads driven by other applications.” If response time is already borderline poor with VMware managing only a single workload, is it reasonable to assume that response time will improve rather than degrade if you add a second workload? If IOPS are not even keeping pace with the poor scalability of vcpu, is it reasonable to assume that IOPS will suddenly start improving faster? If you have not tested the effect of running a second workload, is it reasonable to speculate about what might happen under drastically different conditions? This is like saying that on a hot summer day, an air conditioner was able to maintain a cool temperature in a sunny room with half of the chairs occupied, and therefore it is not “unreasonable” to assume that it could do the same with all chairs occupied. That might be the case, but there is absolutely no evidence to support such speculation.
ESG further speculates that because this test utilized default values for BIOS, OS, SAP, and SQL Server, performance would likely be higher with tuning. … And my car will probably go faster if I wash it and add air to the tires, but by how much? In summary, and I am paraphrasing, ESG says that VMware, Intel processors, and HP servers are ready for SAP primetime, providing reliability and performance while simplifying operations and lowering costs. Interesting that they talk about reliability, yet they once again provide no supporting evidence and did not mention a single thing about reliability earlier in the paper, other than to say that the HP DL980 G7 delivers “enhanced reliability”. I certainly believe every marketing claim that a company makes without data to back it up, don’t you?
There are three ways that you can read this white paper.
- ESG has done a thorough job of evaluating HP x86 systems, Intel and VMware and has proven that this environment can handle SAP workloads with ease
- ESG has proven that VMware has either incredibly poor scalability or high overhead or both
- ESG has limited credibility as they make predictions for which they have no data to support their conclusions
While I might question how ESG makes predictions, I don’t believe that they do a poor job of performance testing. They seem to operate like economists: very good at collecting data, but making predictions based on past experience rather than hard data. When was the last time that economists correctly predicted market fluctuations? If they could, they would all be incredibly rich!
I think it would be irresponsible to say that VMware based environments are incapable of handling SAP workloads. On the contrary, VMware is quite capable, but there are significant caveats. VMware does best with small workloads, e.g. 4 to 8 vcpu, not with larger workloads, e.g. 16 to 32 vcpu. This means that if a customer utilizes SAP on VMware, they will need more, smaller images than they would on an excellent scaling platform like IBM Power Systems, which drives up management costs substantially and reduces flexibility. By way of comparison, published SAP SD 2-tier benchmark results for IBM Power Systems utilizing POWER7 technology show 99% scalability when comparing the performance of a 16-core to a 32-core system at the same MHz, and 89.3% scalability when comparing a 64-core to a 128-core system with a 5% higher MHz, which when normalized to the same MHz shows 99% scalability even at this extremely high performance level.
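The clock normalization in that comparison works as shown below. The SAPS and MHz values in this sketch are illustrative placeholders chosen to be consistent with the percentages above, not the published benchmark figures themselves:

```python
# Clock-normalized scaling efficiency: throughput ratio divided by the
# core ratio and the clock ratio. SAPS and MHz values are placeholders,
# not the published benchmark figures.

def normalized_efficiency(saps_a, saps_b, cores_a, cores_b, mhz_a, mhz_b):
    return (saps_b / saps_a) / ((cores_b / cores_a) * (mhz_b / mhz_a))

# 16 -> 32 cores at the same clock, ~99% scaling:
print(f"{normalized_efficiency(50_000, 99_000, 16, 32, 3800, 3800):.1%}")

# A hypothetical 64 -> 128 core comparison where the larger system runs
# a 5% higher clock; dividing out the clock bump isolates per-core scaling:
print(f"{normalized_efficiency(100_000, 207_900, 64, 128, 3800, 3990):.1%}")
```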
The second caveat for VMware and HP/Intel systems is in the area that ESG brushed over as if it were a foregone conclusion, i.e. reliability. Solitaire Interglobal examined data from over 40,000 customers and found that Linux based x86 systems suffer 3 times or more as many system outages as Power Systems, and Windows based x86 systems up to 10 times more. They also found radically higher outage durations for both Linux and Windows compared to Power, and much lower overall availability when looking at both planned and unplanned outages in general: http://ibm.co/strategicOS and specifically in virtualized environments: http://ibm.co/virtualizationplatformmatters. Furthermore, as noted in my post from late last year, http://saponpower.wordpress.com/2011/08/29/vsphere-5-0-compared-to-powervm/, VMware introduces a number of single points of failure when mission critical applications demand just the opposite, i.e. the elimination of single points of failure.
I am actually very happy to see this ESG white paper, as it has proven how poorly VMware scales for large workloads like SAP in ways that few other published studies have ever exposed. Power Systems continues to set the bar very high when it comes to delivering effective virtualization for large and small SAP environments while offering outstanding, mission critical reliability. As noted in http://saponpower.wordpress.com/2011/08/15/ibm-power-systems-compared-to-x86-for-sap-landscapes/, IBM does this while maintaining a similar or lower TCO when all production, HA, and non-production systems, 3 years of 24x7x365 hardware maintenance, licenses, and 24x7x365 support for Enterprise Linux and vSphere 5.0 Enterprise Plus are included … and that analysis was done before I had ESG’s lab report showing how poorly VMware scales. I may have to revise my TCO estimates based on this new data.
IBM will have a huge presence at SAP TechEd 2012 in Las Vegas. Please take a look at the information from my colleague, Bob Wolf, regarding what you can expect this year. I will be presenting two sessions, one on the value of Power Systems for SAP compared to x86 and another on Infrastructure solutions from IBM for SAP Cloud Computing. I look forward to seeing all of you there.
For those of you headed off to SAP TechEd in mid-October, just wanted to let you know about a number of sessions and events that IBM will be leading.
Even before TechEd starts, IBM and SAP will be co-hosting the IBM i and SAP Solution Update and dinner on Monday Oct 15th at 2pm. This has been an annual tradition for a number of years and is a great way to get up to date on everything related to SAP on i. Additionally, there will be a dinner afterwards where you can network with other companies running SAP on i as well as connect with IBM and SAP experts on i. If your SAP systems are running on i (formerly known as the AS/400), this should be a very valuable session. Please register ahead of time.
IBM will also be co-hosting an event with Intel on Wednesday night called Meet the Experts. The session is Wednesday night between 7pm and 9pm in the Opium room at the Venetian. In spite of the name of the room, I am absolutely certain that no opium will be served at this IBM event. Space is limited and you must register in advance to participate.
IBM will also be presenting a baker’s dozen of break-out sessions on a broad range of topics. For example, my friend Steve Bergmann and his client the Standard Bank of South Africa (SBSA) will be co-presenting a session on Banking Solutions for the 21st century. SBSA is a large bank running SAP on z. Alfred Freudenberger, one of my friends and counterparts on the AIX side, will be presenting two topics, one on UNIX and SAP, and the other on Cloud technologies for SAP. There will be two IBM sessions on HANA including one by my good friend Rich Travis. Another friend, Skip Garvin, will be presenting IBM’s experiences in helping customers reduce the risks in doing cross-platform SAP migrations. Many of the other sessions focus on key topics like Process Optimization, Single Sign-on, Value Based Archiving, Compliance Auditing and Governance, DB2, Managed SAP Virtualization, and even a topic on back propagation for Artificial Neural Networks as it relates to SAP and Mobility. Included in section C below are the descriptions, times, and locations of each of these sessions.
IBM will also have a booth (#315) at SAP TechEd where you can meet the experts and get any questions answered.
Have a safe trip to TechEd.
A) IBM i and SAP Solution Update and Dinner
Register now and reserve your seat for our annual IBM i and SAP Solution Update and Dinner at TechEd 2012 Las Vegas.
IBM and SAP experts will provide the latest news and solution updates for IBM i and SAP environments. Attend this session and learn about IBM PureFlex Systems, SAP solution updates, and exciting new possibilities with cloud computing and in-memory computing.
Don’t miss this opportunity to have your questions answered by experts and to engage in lively discussions with your peers. The business meeting will be followed by a group dinner.
The Venetian/Palazzo Congress Center
3355 Las Vegas Boulevard South
Las Vegas, NV 89109
Room # Toscana 3605 & 3609
Refreshments and registration, 1:30 – 2:00 p.m.
Business meeting kickoff, 2:00 – 5:45 p.m.
Group dinner, 6:00 p.m.
IBM i running SAP Solutions – New News from IBM
Customer presentation, “Migrating SAP from Windows to IBM i”
SAP trends 2012
Business Partner presentation, “Leveraging the POWER of IBM for SAP”
In-Memory Computing for SAP Landscapes
Customer Feedback Forum – Expert Panel
End of Business Session – Group Dinner
Register today to reserve your seat! Seating is limited.
We look forward to seeing you at SAP TechEd 2012 Las Vegas!
B) IBM and Intel sponsored Meet the Experts reception
Wednesday, October 17, 2012
7:00pm – 9:00pm
Opium Room at TAO in the Venetian Hotel
Please join IBM and Intel for cocktails and hors d’oeuvres as we renew our commitment to delivering best in class SAP solutions and celebrate our new IBM Client Center: Lab for SAP Solutions. Get your questions answered by IBM and Intel experts on hot topics such as SAP HANA, Mobility and Cloud Computing. Discover how IBM and Intel continue to deliver innovative solutions to clients around the globe and across a wide array of industries.
This is a private, by-invitation-only event. For quick entry, bring your confirmation and TechEd badge.
RSVP as soon as possible, space is limited.
C) IBM Sessions at TechEd
1) Banking Solutions for the 21st Century – Standard Bank of South Africa
To gain competitive advantages in today’s financial services environment, banking institutions must be able to quickly introduce new products and business strategies to drive revenue while radically lowering costs. To do so, they rely not only on their custom-designed legacy applications but also must leverage and modernize preexisting solutions, while continuing to expand a flexible performance-demanding infrastructure serving hybrid best in class solutions. This session will highlight how the Standard Bank of South Africa partnered with SAP for banking solutions, and with IBM for the agile infrastructure and middleware, to create a competitive and integrated banking services model that will meet Standard Bank of South Africa’s rapid business growth projections into the year 2015. The major discussion topics of the presentation will be centered on the business benefits derived by leveraging SAP banking solutions, the ability to rapidly deploy new products by implementing a service-oriented architecture, and optimizing operational costs while meeting performance, security, availability, scalability, and cloud objectives. This session is a joint customer presentation between the Standard Bank of South Africa and IBM.
Speaker: Steve Bergmann
Date: Wednesday October 17th
Time: 3:15 pm – 4:15 pm
Room name: Titan 2305
2) UNIX for SAP software is dead! Long live UNIX for SAP software!
Everyone “knows” that x86 servers cost less than UNIX servers, but is that a correct perception when the UNIX server is an IBM Power System? Should AIX be relegated to the role of SAP database server? Is the SAP sales and distribution 2-tier benchmark a reasonable barometer of database performance? How important is the reliability of application servers and how do IBM Power Systems deliver superior uptime? And is parallel database technology the best way to deliver optimal uptime for database servers? This session will reveal how IBM is able to deliver not just low TCO, but very competitive costs of acquisition for SAP landscapes while enabling maximum flexibility and minimizing downtime.
Speaker: Alfred Freudenberger
Date: Thursday October 18th
Time: 9:15 am – 10:15 am
Room name: Titan 2305
3) Head in the cloud – Learn how to keep your feet on the ground
Cloud is one of the big buzz words in the SAP community, but what does it mean? Is cloud the right answer for every problem? How can IBM help you to achieve your cloud goals? What can you do today with IBM Power Systems and SAP NetWeaver Landscape Virtualization Management? Where does IBM’s new PureFlex System fit? What are the other cloud alternatives for SAP systems? This session will show how IBM, with the inclusion of SAP technologies, is implementing the different flavors of cloud computing for SAP application environments.
Speaker: Alfred Freudenberger
Date: Wednesday October 17th
Time: 9:15 am – 10:15 am
Room name: Titan 2303
4) IBM HANA – Scalability, Availability and Simplicity with a Disaster Recovery Strategy
IBM System Solution for SAP HANA: Extreme Scalability, Availability and Simplicity with a Disaster Recovery Strategy – the IBM Systems Solution for SAP HANA delivers it all via the General Parallel File System. Richard Travis will discuss the design of the IBM Systems Solution for SAP HANA and the critical role that the General Parallel File System (GPFS) plays in the solution, including how the IBM solution with GPFS can grow from a stand-alone configuration to a very large and extremely reliable cluster in a dynamic and non-disruptive fashion. Rich will also discuss disaster recovery options, from simple to complex and synchronous to asynchronous, and how these solutions will be enhanced by GPFS and its future extensions.
Speaker: Rich Travis
Date: Wednesday, October 17th
Time: 10:30 am – 11:30 am
Room name: Titan 2303
5) Minimizing Risk in Cross-Platform SAP Migrations…Lessons Learned
This session is designed to help you understand the challenges inherent when migrating SAP workloads and will provide you with valuable insight into the do’s, don’ts, lessons learned, and best practices gleaned from hundreds of migrations completed by IBM’s Migration Factory – from non-IBM platforms, x86-based systems, mainframes, and even from earlier versions of POWER servers to IBM POWER7, xSeries, zSeries, and iSeries platforms.
Speaker: Skip Garvin
Date: Tuesday, October 16th
Time: 3:15 pm – 4:15 pm
Room Name: Titan 2303
6) Energize Your SAP Software Investment with Database Innovations
Learn how the latest SAP and IBM innovations can help you run your business with even lower total cost of ownership (TCO). In this session, you will also learn how joint innovations make an SAP and DB2 combination the best of breed business solution. Catch the latest in how you can use the rich set of new capabilities in SAP Solution Manager 7.1 to maximize your efficiency as a database administrator and learn about the new features in DB2 10. In this session, we will also cover how SAP takes advantage of innovation with DB2 along with many other topics. Regardless of whether you are an SAP and DB2 veteran, or you are exploring superior alternatives to your incumbent non-DB2 database, we will cover new and exciting technologies for everyone. See how innovations can help you to make tangible dollar savings.
Speaker: Ralf Wagner
Date: Thursday October 18th
Time: 3:15 pm – 4:15 pm
Room name: Titan 2305
7) SAP Process Optimization & Innovation Techniques
In this session you’ll learn about the latest process innovation techniques and how they apply to an SAP environment. The discussion will include details around how IBM used Process Innovation for its own SAP implementation and how lessons learned from this and other SAP implementations can be applied to a broad spectrum of SAP process optimization opportunities.
Speaker: Joe Kaczmarek & Parag Karkhanis
Date: Tuesday October 16th
Time: 4:30 pm – 5:30 pm
Room name: Titan 2305
8) The Backprop approach to Mobility – how to embed intelligence in your method
Artificial Neural Networks (ANNs) can be used to extract patterns and detect trends that are too complex to be noticed by other computing techniques. One of the first, and still most popular, ways to train ANNs is backpropagation – a dynamic system optimization technique. So what does backprop have to do with mobility? In this session, you will learn how the fundamental principles of backprop can be applied as an approach to tackling mobility projects. Embodying those principles in your method brings distinct advantages, especially in mobility. We will also discuss how to balance a continuously adaptive approach with the practical need for stability. Specific use cases will be provided to illustrate the point and show that this is an empirically achievable method.
Speaker: Matt Schwartz & Parag Karkhanis
Date: Tuesday October 16th
Time: 10:15 am – 11:15 am
Room name: Titan 2305
9) Implementing Single Sign On and securing communication channels
In this session, participants will learn about the options for implementing Single Sign-On across various SAP components, including SAP NetWeaver Portal, SAP ECC, and SAP BusinessObjects software. This session will go into detail on leveraging various Single Sign-On options including Kerberos, certificate based authentication, SAP SSO2, Security Assertion Markup Language (SAML), and secure network communications (SNC). Participants will also learn how to secure the communication between front end and back end using SNC and HTTPS, and what factors to consider when selecting a type of Single Sign-On and securing communications. We will conclude by sharing lessons learned from our implementation experience, including what worked for us, and a short demo.
Speaker: Vinod Lambda
Date: Thursday October 18th
Time: 5:45 pm – 6:45 pm
Room name: TBD
10) Visualizing Predictive Analytics on SAP HANA
SAP has come out with SAP Visual Intelligence and SAP Predictive Analysis recently. This session will give an introduction on how to combine SAP HANA with predictive analytics, and how to visualize the results using the new SAP Visual Intelligence.
Speaker: Vijay Vijayasankar
Date: Thursday October 18th
Time: 8:00 am – 9:00 am
Room name: TBD
11) Value-Based Archiving and Governance – Reducing Time, Cost, and Complexity
In today’s world of accelerating data volume, variety, and input velocity, organizations are grappling with isolated islands of application-specific information. While most would agree that the “keep it forever” model died years ago, killed by governance requirements and escalating infrastructure costs, it is estimated that 70% of business information is still unnecessarily retained. Join key IBM experts for an interactive session to discuss IBM’s SAP-focused “Value-based Archiving” (VBA) strategy and Smart Archive software for managing and archiving not only SAP documents and data, but virtually “any data, any content” from almost any source. You’ll also hear about IBM customers using VBA today to enable ‘Defensible Disposal’ – keeping only what’s necessary to finally tame escalating storage, server, and operating costs – saving them up to $50M over five years.
Speaker: Jerry Bower
Date: Thursday October 18th
Time: 5:45 pm – 6:45 pm
Room name: TBD
12) Guardium – Compliance Auditing for Heterogeneous SAP Environments
Have you ever wondered how you could enhance your organization’s IT security, or how to better tie your heterogeneous IT security to SAP systems? If so, come and learn how the upcoming collaborative IBM and SAP solution will be not only robust but also the simplest available. With IBM InfoSphere Guardium optimized for SAP applications, you can be assured of the privacy and integrity of trusted information in your data center and reduce overall costs by automating the entire compliance auditing process.
Speaker: Martin Mezger
Date: Thursday, October 18
Time: 10:30 am – 11:30 am
Room Name: Titan 2305
13) Managed SAP Virtualization on PureSystems with LVM
Over the last decade, SAP landscapes have been growing and getting more and more complex. With the support of virtualized SAP systems on Intel based servers and the adoption of virtualization for SAP workloads, the physical growth of the infrastructure may have been contained, or at least mitigated. However, we now see a growing number of SAP virtual machines on top of a large population of physical systems. At the same time, virtualization has been adopted in storage and networking, each typically growing and operating within its own silo. This session shows how to optimize daily operations of virtualized SAP landscapes on systems with integrated management spanning different virtualization solutions.
Speakers: Paul Henter (IBM) & Rob Shiveley (Intel)
Date: Wednesday, October 17
Time: 2:00 pm – 3:00 pm
Room Name: Titan 2303
Can the SAP SD 2-tier benchmark be used to size database servers? Not even close, especially for x86 systems. Sizings for x86 systems based on the 2-tier benchmark can be as much as 50% smaller for database-only workloads than would be predicted by the 3-tier benchmark. Bottom line: I recommend that any database-only sizing for x86 systems or partitions be at least doubled to ensure that enough capacity is available for the workload. At the same time, IBM Power Systems sizings are extremely conservative and have built-in allowances for reality vs. hypothetical 2-tier benchmark based sizings. What follows is a somewhat technical and detailed analysis, but this topic cannot, unfortunately, be boiled down to a simple set of assertions.
The details: The SAP Sales and Distribution (S&D) 2-tier benchmark is absolutely vital to SAP sizings, as workloads are measured in SAPS (SAP Application Performance Standard)[i], a unit of measurement based on the 2-tier benchmark. The goal of this benchmark is to be hardware independent and useful for all types of workloads, but the reality is quite different. The capacity required for the database server portion of the workload is 7% to 9% of the total, with the remainder used by multiple instances of dialog/update servers and a message/enqueue server. This contrasts with the real world, where the ratio of app to DB capacity is more in the 4-to-1 range for transactional systems and 2-to-1 or 1-to-1 for BW. In other words, this is primarily an application server benchmark with a relatively small database server. Even if a particular system or database software delivered 50% higher performance for the DB server than the 2-tier benchmark would predict, the overall 2-tier result would only change by 0.07 * 0.5 = 3.5%.
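To see just how insensitive the overall result is to the database tier, consider this trivial sketch. The 7% share and 50% improvement are the figures from the paragraph above; the function name is mine.

```python
def two_tier_delta(db_share, db_improvement):
    """Approximate change in an overall 2-tier SAPS result when only the
    database tier gets faster and everything else is held constant."""
    return db_share * db_improvement

# A DB tier that is 50% faster, at ~7% of total benchmark capacity,
# moves the overall 2-tier result by only:
print(f"{two_tier_delta(0.07, 0.50):.1%}")   # -> 3.5%
```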
How then is one supposed to size database servers when the SAP Quicksizer shows capacity requirements based on 2-tier SAPS? A clue may be found by examining another, closely related SAP benchmark: the S&D 3-tier benchmark. The workload used in this benchmark is identical to that of the 2-tier benchmark; the difference is that in the 2-tier benchmark all DB and app server instances must be located within one operating system (OS) image, whereas in the 3-tier benchmark DB and app server instances may be distributed across multiple OS images and servers. Unfortunately, the unit of measurement is still SAPS, here representing the total SAPS handled by all servers working together. Fortunately, 100% of the SAPS must be funneled through the database server, i.e. this SAPS measurement, which I will call DB SAPS, represents the maximum capacity of the DB server.
Now we can compare 2-tier SAPS and 3-tier DB SAPS results, or sizing estimates, for various systems to see how well the two correlate. This turns out to be easier said than done, as there are precious few published 3-tier results compared to the hundreds published for the 2-tier benchmark. But I would not be posting this blog entry if I had not found a way to accomplish this, would I? I first wanted to find two 3-tier results that achieved similar numbers. Fortunately, HP and IBM both published results within a month of one another back in 2008, with HP hitting 170,200 DB SAPS[ii] on a 16-core x86 system and IBM hitting 161,520 DB SAPS[iii] on a 4-core Power system.
While the stars did not line up precisely, it turns out that 2-tier results were published by both vendors just a few months earlier, with HP achieving 17,550 SAPS[iv] on the same 16-core x86 system and IBM achieving 10,180 SAPS[v] on a 4-core Power system running at a slightly higher MHz (4.7GHz, or 12% faster) than the one used in the 3-tier benchmark.
Notice that the HP 2-tier result is 72% higher than the IBM result, even though the IBM 2-tier system used the faster processor. Clearly, this lead would have been even higher had IBM published a result on the slower processor. While SAP benchmark rules do not allow vendors to estimate results from slower to faster processors, and even though I am posting this as an individual, not on behalf of IBM, I will err on the side of caution and give you only the formula, not the estimated result: 17,550 / (10,180 * 4.2 / 4.7) = the ratio of the published HP result to the projected slower IBM processor. At the same time, HP achieved only a 5.4% higher 3-tier result. How does one go from almost twice the performance to essentially tied? Easy answer: the IBM system was designed for database workloads, with a whole boatload of attributes that go almost unused in application server workloads, e.g. extremely high I/O throughput and advanced cache coherency mechanisms.
One might point out that Intel has really turned up its game since 2008 with the introduction of the Nehalem and Westmere chips and closed the gap, somewhat, against IBM’s Power Systems. There is some truth in that, but let’s take a look at a more recent result. In late 2011, HP published a 3-tier result of 175,320 DB SAPS[vi]. A direct comparison of old and new results shows that the new result delivered 3% more performance than the old with 12 cores instead of 16, which works out to about 37% more performance per core. Admittedly, this is not completely correct, as the old benchmark utilized SAP ECC 6.0 with ASCII and the new one used SAP ECC 6.0 EP4 with Unicode, which is estimated to be a 28% heavier workload; in reality, then, the new result is closer to 76% more performance per core. By comparison, the DL380 G7[vii], a slightly faster but otherwise almost identical system to the BL460c G7, delivered 112% more SAPS/core on the 2-tier benchmark compared to the BL680c G5, and almost 171% more SAPS/core once the 28% factor mentioned above is taken into consideration. Once again, one would need to adjust these numbers based on differences in MHz, and the formula for that would be: either of the above numbers * 3.06/3.33 = estimated SAPS/core.
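As a quick back-of-the-envelope check, the 3-tier per-core figures above can be reproduced from the published results alone; the 1.28 factor is the Unicode workload estimate cited earlier, and the variable names are mine.

```python
# 3-tier DB SAPS per core: BL680c G5 (16 cores, 2008) vs. BL460c G7 (12 cores, 2011)
old_per_core = 170_200 / 16      # ~10,638
new_per_core = 175_320 / 12      # ~14,610

print(f"raw improvement:   {new_per_core / old_per_core - 1:.0%}")          # ~37%
print(f"Unicode adjusted:  {new_per_core * 1.28 / old_per_core - 1:.0%}")   # ~76%
```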
After one does this math, one finds that the improvement in 2-tier results was almost 3 times the improvement in 3-tier results, further questioning whether the 2-tier benchmark has any relevance to the database tier. And just one more complicating factor: how vendors interpret SAP Quicksizer output. The Quicksizer conveniently breaks down the amount of workload required of both the DB and app tiers. Unfortunately, experience shows that this breakdown does not hold in reality, so vendors may modify the ratios based on their own experience. Some, such as IBM, have found that DB loads are significantly higher than the Quicksizer estimates and have made sure that this tier is sized accordingly. Remember, while app servers can scale out horizontally, the DB server cannot unless a parallel database is used, so making sure that you don’t run out of capacity is essential. What happens when you compare a sizing from IBM to that of another vendor? That is hard to say, since each can use whatever ratio they believe is correct. If you don’t know what ratios the different vendors use, you may be comparing apples and oranges.
Great! So what is a customer to do, now that I have completely destroyed any illusion that database sizing based on 2-tier SAPS is even remotely close to reality?
One option is to say, “I have no clue” and simply add a fudge factor, perhaps 100%, to the database sizing. One could not be faulted for such a decision, as there is no other simple answer, but one could also not be certain that the sizing was correct. For example, how does I/O throughput fit into the equation? It is possible for a system to be able to handle a certain amount of processing but be unable to feed data in at the rate necessary to sustain that processing. Some virtualization managers, such as VMware, have to transfer data first to the hypervisor and then to the partition, and likewise in the other direction on the way to the disk subsystem. This adds latency and overhead that may be hard to estimate.
A better option is to start with IBM. IBM Power Systems is the “gold standard” for SAP open systems database hosting. A huge population of very large SAP customers, some of which have decided to utilize x86 systems for the app tier, use Power for the DB tier. This has given IBM real world experience in sizing DB systems, which has been incorporated into its sizing methodology. As a result, customers can place a great deal of trust in the sizing that IBM delivers, and once you have this sizing, you can work backwards to what an x86 system should require. Then you can compare this to the sizing delivered by the x86 vendor and have a good discussion about why there are differences. How do you work backwards? A fine question, for which I will propose a methodology.
Ideally, IBM would have a 3-tier benchmark result for a current system from which you could extrapolate, but that is not the case. Instead, you can extrapolate from the published result for the Power 550 mentioned above using IBM’s rperf, an internal estimate of relative performance for database intensive environments which is published externally. The IBM Power Systems Performance Report[viii] includes rperf ratings for current and past systems. If we multiply the size of the database system, as estimated by the IBM ERP sizer, by the ratio of per-core performance between the IBM and x86 systems, we can estimate how much capacity is required on the x86 system. For simplicity, we will assume the sizer has determined that the database requires 10 cores of a 16-core IBM Power 740 3.55GHz. Here is the proposed formula:
Power 550 DB SAPS x 1/1.28 (old SAPS to new SAPS conversion) x rperf of 740 / rperf of 550
161,520 DB SAPS x 1/1.28 x 176.57 / 36.28 = estimated DB SAPS of 740 @ 16 cores
Then we divide the above number by the number of cores to get a per-core DB SAPS estimate. By the same token, you can divide the published HP BL460c G7 DB SAPS number by its core count. Then:
Estimated Power 740 DB SAPS/core / Estimated BL460c G7 DB SAPS/core = ratio to apply to sizing
The result is a ratio of 2.6, e.g. if a workload requires 10 IBM Power 740 3.55GHz cores, it would require 26 BL460c G7 cores. This contrasts with the per-core estimate based on the 2-tier benchmark, which suggests that the Power 740 would deliver just 1.4 times the performance per core. In other words, a 2-tier based sizing would suggest that the x86 system requires just 14 cores, where the 3-tier comparison suggests it actually needs almost twice that. This assumes the I/O throughput is sufficient. It also assumes that both systems have the same target utilization; in reality, where x86 systems are usually sized for no more than 65% utilization, Power Systems are routinely sized for up to 85% utilization.
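For those who like to check the arithmetic, here is a minimal sketch of the whole calculation, using only the published and estimated figures quoted above. The variable names are mine; the rperf values come from the IBM Power Systems Performance Report.

```python
# Work backwards from the published 4-core Power 550 3-tier result (2008)
power550_db_saps = 161_520
unicode_factor = 1.28                 # old (ASCII) SAPS -> new (Unicode) SAPS
rperf_740, rperf_550 = 176.57, 36.28  # relative performance ratings

# Estimated DB SAPS for a 16-core Power 740 3.55GHz
power740_db_saps = power550_db_saps / unicode_factor * (rperf_740 / rperf_550)

power740_per_core = power740_db_saps / 16
bl460c_per_core = 175_320 / 12        # published BL460c G7 3-tier result

ratio = power740_per_core / bl460c_per_core
print(f"per-core ratio: {ratio:.1f}")                        # ~2.6
print(f"10 Power 740 cores ~= {10 * ratio:.0f} x86 cores")   # ~26
```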
If this workload were planned to run under VMware, the number of vcpus must also be considered, which is twice the number of cores, i.e. this workload would require 52 vcpus, well over the 32-vcpu limit of VMware vSphere 5.0. Even when VMware can handle 64 vcpus, the overhead of VMware and its ability to sustain the high I/O of such a workload must be included in any sizing.
Of course, technology moves on, and Intel is now into its Gen8 processors. So you may have to adjust what you believe to be the effective throughput of the x86 system based on its performance relative to the BL460c G7 above, but now, at least, you have a frame of reference for doing the appropriate calculations. Clearly, we have shown that the 2-tier benchmark is an unreliable basis for sizing database-only systems or partitions and can easily be off by 100% for x86 systems.
The HP Integrity family has taken hits from a wide range of software vendors, including Red Hat, Microsoft and Oracle, and now it appears that SAP is at last getting on that bandwagon as well. In a blog called People, Process and Technology, http://peopleprocesstech.com/2012/06/05/why-the-hp-superdome-is-as-dead-as-a-dodo/, John Appleby wrote about the fact that SAP BusinessObjects is no longer supported on HP’s Integrity systems. As I don’t spend much time with BusinessObjects, I was not aware of this, but I was able to quickly verify that it is correct by checking SAP’s PAM (Product Availability Matrix), which clearly states support for a variety of OSs, including Windows, Red Hat and SUSE Linux, AIX and Solaris, with no mention of HP/UX. There is no such issue with NetWeaver as of yet, but at this point it is a matter of when, not if, SAP will pull support in this area as well.
HP/UX based systems have provided very strong infrastructure for SAP customer landscapes over many years. Those customers have come to expect high end features such as capacity on demand, partition isolation at the hardware level, very high scalability, a proven track record for handling very large database environments, availability measured at 99.9% or better, robust high availability packages, the ability to add or remove processor boards without taking down partitions or the system, a strong ecosystem of systems support software from the vendor and third party suppliers, and a very extensive community of peer companies. Considering the apparent inevitability of the demise of HP’s UNIX systems, current HP customers will have to figure out the best path forward. As Solaris is declining almost as fast as HP/UX, and Exadata is a very poor solution for most SAP environments, as noted in my previous blog entries, customers are left with only a few choices. They could consider Microsoft, but few large customers do, which means they would be in an exclusive and largely untested large systems environment. Linux is another option, but once again the population of customers at the high end of this space is extremely small, and many of the above characteristics are not available on the Intel platform. Only IBM Power Systems and its big brother, IBM System z, offer the sort of characteristics that HP customers have taken for granted for a long time.
In fact, for each of the above, Power Systems provides a “plus” version, e.g. instead of being designed for 99.9% availability, Power Systems are designed for 99.99% availability. Instead of offering scalability to 128 cores, Power Systems scale to 256 cores, and with DB2 PureScale they can scale out well beyond those numbers. Partition isolation is available in the world of Power at a finer granularity than CPU-level partitioning, i.e. npar/vpar. These are just a few of the areas in which IBM Power Systems deliver “plus” capabilities.
Although my peers from System z would say they offer “plus” capabilities over Power Systems, an assertion that I will not deny, most HP/UX customers feel that Linux or another UNIX such as AIX are the only choices they are willing to consider. In other words, the most logical and lowest risk path forward from HP/UX for large SAP customers is IBM Power Systems.
IBM’s implementation of an SAP HANA appliance is nothing short of a technological coup de grace over the competition. Those are words you would never normally see me write, as I am not one to use superlatives, but in this case they are appropriate. This is not to say that I am contradicting anything in my posting last year, where I discussed when and where HANA is the right choice: http://saponpower.wordpress.com/2011/09/18/get-all-the-benefits-of-sap-hana-2-0-today-with-no-risk-and-at-a-fraction-of-the-cost/
As expected, HANA is being utilized for more applications and is about to reach general availability for BW. SAP reports brisk sales, and our experience echoes this, especially for proof of concept systems. Even though not all of the warts of a V1.0 product have been overcome, the challenges are being met by a very determined development team at SAP following a corporate mandate. It is only a matter of time before customers move more and more HANA applications into production. That said, conventional databases and systems are not going away any time soon. Many systems, such as ERP, CRM and SCM, run brilliantly with conventional databases, a tribute to the strong architecture SAP delivered in the past. There are pieces of those systems which are “problem” areas, and SAP is rapidly deploying solutions to fix those problems, often with HANA based point solutions, e.g. COPA. SAP envisions a future in which HANA replaces conventional databases, but not only are there challenges to be overcome, there are simply not that many problems with databases based on DB2, Oracle, SQL Server or Sybase, and thus no compelling business case for change, as of yet.
Of course, I am not trying to suggest that HANA is not appropriate for some customers or that it does not deliver outstanding results in many cases. In many situations, the consolidation of a BW and BWA set of solutions makes tremendous sense. As we evolve into the future, HANA will make progressively more sense to more and more customers. And this brings me back to my original “superlative” laden statement. How does IBM deliver what none of the competitors do?
Simply put (yes, I know, I rarely put anything simply), IBM’s HANA appliance utilizes a single, high performance stack regardless of how small or large a customer’s environment is. And with IBM’s solution, as demonstrated at Sapphire two weeks ago with 100TB across 100 nodes, scale is neither a challenge nor a limit of the architecture. By the way, Hasso Plattner unveiled this solution in his keynote speech at Sapphire and, surprisingly, called out and commended IBM. Let us delve a little deeper and get a little less simple.
At the heart of the IBM solution is GPFS, the General Parallel File System. This is, strangely enough, a product of IBM Power Systems. It allows for striping of data across local SSDs and HDDs as well as spanning of data across clustered systems, replication of data, high availability and disaster recovery. On a single node system utilizing the same number of drives, up to twice the IOPS of an ext3 file system can be expected with GPFS. When, not if, a customer grows and needs either larger systems or multiple systems, the file system stays the same and simply adapts to the new requirements. GPFS owes its roots to high performance computing. Customers trying to find the answer (not 42, for those of you geeky enough to understand what that means) to complex problems often required dozens, hundreds or even thousands of nodes, all connected to a single set of data. As these solutions would often run for ridiculous periods of time, sometimes counted in months or even years, the file system upon which they relied simply could not break, no matter what the underlying hardware did. ASCI, the US Accelerated Strategic Computing Initiative, which focused on, among other things, simulating the effects of nuclear weapons storage decay, drove this requirement. The same requirements exist in many other “grand challenge” problems, whether Deep Blue or Watson and their ability to play games better than humans, or “simple” problems like unfolding DNA, figuring out how weather systems work or how airplane wings can fly more efficiently. More modestly, allowing thousands of scientists and engineers to collaborate using a single file system, or thousands of individuals to access a file system without regard to location or the boundaries of the underlying storage technology, e.g. IBM SONAS, are results of this technology. Slightly less modestly, GPFS is the file system used by DB2 PureScale and, historically, Oracle RAC; even though Oracle ASM is now supported on all platforms, GPFS is still frequently used despite being an additional cost over ASM, which is included with the RAC license. The outcome is an incredibly robust, fault resilient and consistent file system.
Why did I go down that rabbit hole? Because it is important to understand that whether a customer utilizes one HANA node with 128GB of memory or a thousand nodes with 2PB of memory, the technology does not have to be developed to support this; it already exists in GPFS.
Now for the really amazing part of this solution: all drives, whether HDD or SSD, are located within the nodes themselves but, through the magic of GPFS, are available to all nodes within the system. This means there is no SAN, no NAS, no NFS, no specialized switches, no gateways; just simple 10Gb Ethernet for GPFS to communicate among its nodes. Replication is built in, so the data and logs physically located on each node can be duplicated to one or more other nodes. This provides automatic HA for the file system: even if a node fails, all data can be accessed from the other nodes and HANA can restart the failed node’s workload on a standby system.
Some other features of IBM’s implementation are worth noting. IBM offers two different types of systems for HANA: the x3690 X5, a 1- or 2-socket system, and the x3950 X5, a 2-, 4- or 8-socket system. Either system may be used standalone or for scale-out, but currently the x3690 is only certified to scale to 4 nodes while the x3950 is certified to scale to 16 nodes. While it is not possible to mix and match these nodes in a scale-out configuration, it is possible to utilize one type in production and the other in non-production, for example, as they have identical stacks. Another valuable feature: the x3950 is capable of scaling to 8 sockets, but customers don’t have to purchase an 8-socket system up front, because each “drawer” of an x3950 supports up to 4 sockets and a second drawer may be added, at any time, to upgrade the system to 8 sockets. Taken all together, an 8-socket standalone system costs roughly twice what a 4-socket standalone system does, a two node scale-out implementation costs roughly twice what a single node standalone system does, and, for that matter, a 16 node scale-out implementation costs 8 times what a 2 node scale-out implementation does. In other words, cost scales essentially linearly with capacity.
How does this compare with implementations from HP, Dell, Cisco, Fujitsu, Hitachi and NEC? For HP, a customer must choose up front between a 4-socket and an 8-socket system, as there is no upgrade path from the DL580 4-socket system to the DL980 8-socket system. Many more drives are required, e.g. a 128GB configuration requires 24 15K HDDs, compared to an equivalent IBM solution which requires only 8 10K drives. Next, if one decides to start standalone and then move to a parallel implementation, one must move from ext3 and xfs file systems to NFS. Unfortunately, HP has not certified either of those systems for scale-out, so customers must move to the BL680c for scale-out implementations, but the BL680c is only available as a 4-socket/512GB node. A standalone implementation utilizes internal disks plus, frequently, a disk drawer in order to deliver the high IOPS that HANA requires, but a scale-out implementation requires one HP P6500 EVA for every 4 nodes and one HP X9300 Network Storage Gateway for every 2 nodes. The result is that not only does the stack change from standalone to scale-out, but the systems, enclosures and interconnections change as well, and as more nodes are added, complexity grows dramatically. Also, cost is not proportional but instead grows at an ever increasing rate as more nodes are added.
The other vendors’ implementations all share similar characteristics with HP’s with, of course, different types of nodes, file systems and storage architecture, e.g. Cisco uses EMC’s VNX5300 and MPFS parallel file system for scale-out.
At the time of writing, only Cisco and Dell were not yet certified for the 1TB HANA configuration, as SAP requires 8 sockets for a system of this size based on its sizing rules, not because of any limitation in the systems’ ability to support 1TB. Also, only IBM was certified for a scale-out configuration utilizing 1TB nodes, and up to 16 of those nodes.
The lead that IBM has over the competition is almost unprecedented; in fact, in the x86 space, I am not aware of this wide a gap at any time in IBM’s history. If you would like the technical details behind IBM’s HANA implementation, please visit http://IBM-SAP.com/HANA. And to see a short video on the subject from Rich Travis, one of IBM’s foremost experts on HANA and the person that I credit with most of my knowledge of HANA, but none of the mistakes in this blog post, please visit: http://www.youtube.com/watch?v=Au-P28-oZvw&feature=youtu.be