An ongoing discussion about SAP infrastructure

Is your company ready to put S/4HANA into the Cloud? – part 4

This is the 4th and final installment on this topic.  Sorry for the length of each part, but the issues surrounding placement of corporate application environments cannot be boiled down into simple statements like “always think cloud first” or “cloud is no place for a corporate application”.

  • How will you get from your current on-premise SAP landscape to the cloud? As mentioned in a recent blog post[i], database conversions from a conventional database or Suite on HANA to S/4HANA are not trivial to start with.  Now add the complexity of doing that across a WAN, with system characteristics and technologies which you may not be able to control, and you have just made a difficult task even more difficult.
    • Can a migration be completed within the outage window that your business allows? Fundamentally, the business will only allow outages which result in little to no lost business or financial penalties.  Where internally you may be able to use dedicated 1Gb or 10Gb Ethernet, or even faster networks in the case of Power Systems, unless you are able to purchase temporary, massive WAN bandwidth, you may be faced with an outage that is longer than the business will allow.
    • At what cost, complexity and risk? If such a migration would take longer than allowable, there are strategies and solutions to deal with this, e.g. SAP MDS (Minimized Downtime Service), IBM CDC (InfoSphere Change Data Capture), SNP Transformation Backbone or Dell Shareplex, but these add cost, require much more planning and testing and might impose some additional risk, especially across WAN communications; see the discussion on security across the WAN in part 3.
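To get a feel for the bandwidth problem, a back-of-envelope calculation helps.  The numbers below (database size, link speeds, efficiency factor) are illustrative assumptions of my own, not measurements from any particular migration:

```python
def transfer_hours(db_size_tb, link_gbps, efficiency=0.7):
    """Rough time to push a database export across a network link.

    efficiency is an assumed factor for protocol overhead and
    contention; real links rarely sustain their rated speed.
    """
    size_bits = db_size_tb * 1e12 * 8              # decimal TB -> bits
    effective_bps = link_gbps * 1e9 * efficiency   # usable bits/second
    return size_bits / effective_bps / 3600        # seconds -> hours

# A hypothetical 10 TB database over a dedicated internal 10Gb link
lan = transfer_hours(10, 10)     # a few hours
# ...versus a shared 500 Mb/s WAN: days, not hours
wan = transfer_hours(10, 0.5)
print(f"LAN: {lan:.1f} h, WAN: {wan:.1f} h")
```

Even before adding export, import and validation time, the WAN leg alone can blow past a weekend outage window, which is why the temporary-bandwidth and near-zero-downtime options above come into play.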

Lest you feel that this post is overly focused on issues which might prevent you from moving to the cloud, there are good reasons to move as well.  As I am not an expert on that part of the story, I will refer you to some pretty good articles on the subject.[ii]  The common theme across these sites is that cloud can a) result in cost savings, b) improve agility, c) provide more elasticity and scaling, and d) move from a CapEx model to OpEx.  Let's take these one at a time.

a) cost savings – For customers that are growing rapidly, are startups or have never implemented a complex ERP system, cloud certainly can offer major cost avoidance.  For customers with existing data centers, Linux or UNIX trained support staffs, UPS and diesel generator power units, and established storage, security, operations and backup/recovery standards, investments and teams, it may be much harder to figure out exactly what savings would result from a move to the cloud, unless that move, along with other potential moves, would allow a large portion of those staffs to be laid off and the data centers to be sold to another company.  Once you address all of your corporate requirements, discussed in detail in parts 2 and 3 of this blog, a new price for the cloud services to support your SAP S/4HANA environment may emerge, and then you can start the process of determining what sort of cost savings are likely to be forthcoming.  From my personal experience with customers, it often turns out that little to no cost savings actually result.

b) improve agility – This one is more clear cut.  When on-premise systems are purchased and your requirements change, you may often find that you under- or over-bought, and that adjusting capacity, starting up or shutting down systems, or simply running power, planning for cooling and running network and storage cables, to name just a few tasks, can take weeks or months.  Cloud data centers often pre-provision technology to be ready for growth and changes in demand, plus this is the business they are in, so they tend to be very good at keeping ahead of the demand for their services.  Admittedly, some customers are also excellent at this, and those that have chosen IBM Power Systems with PowerVM find that making adjustments to systems is so easy that agility is not a major issue.  I know of some customers that purchase larger systems than initially required, with large amounts of Capacity on Demand CPUs and memory, so that growth can be accommodated without any need for physical changes, simply logical activations.

c) elasticity and scaling – Elasticity is usually considered in a cost context, i.e. pay for what you use, which cloud models do very well through utility pricing and shared infrastructure, plus the ability to charge per unit regardless of what size systems are required, meaning nothing is lost if you start on one size system and have to move to another.  Scaling usually refers to the ability to add an almost unlimited number of additional servers very quickly and easily, once again because cloud providers focus on this and are very good at rapid provisioning.  Whether this is even required for S/4HANA is a more important question.  Even after going through a proper sizing, just about all customers get something wrong.  A study by Solitaire Interglobal[iii] a few years ago revealed that customers using x86 systems for SAP, on average, were quoted a starting price that was no more than 40% of the eventual cost.  I have seen this personally with undersized offerings or ones that “answered the mail” but did not address necessary project requirements.  Customers that have experienced this sort of cost overrun will find a cloud option especially attractive because of the ability to seamlessly move between systems or scale out as necessary.  By comparison, that same Solitaire study showed that customers that purchased IBM Power Systems for SAP were quoted a starting price of 85% to 90% of the eventual cost.  Once again, that is because we ask the right questions up front, so system sizes are much more accurate, most project requirements have been accounted for and overruns are less common.  These sorts of customers may not find cloud quite as much of a boon for scaling.  On the elasticity front, Power Systems offer a pay-as-you-go model with capacity on demand or flexible financing, so this issue can also be addressed for on-premise implementations.
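The Solitaire figures above translate into a simple overrun multiplier: if the initial quote covers only a fraction of the eventual cost, the final bill is the reciprocal of that fraction times the quote.  A quick sketch using the percentages cited above (the 87.5% figure is simply the midpoint of the 85–90% range):

```python
def overrun_multiplier(quoted_fraction):
    """If the quote is only this fraction of the eventual cost,
    the final bill is 1/fraction times the original quote."""
    return 1 / quoted_fraction

x86_overrun = overrun_multiplier(0.40)     # quote at 40% of cost -> 2.5x the quote
power_overrun = overrun_multiplier(0.875)  # quote at ~87.5% -> roughly 1.14x
```

Seen this way, the difference between a 40% quote and an 85–90% quote is the difference between a budget that more than doubles and one that slips by a tenth.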

d) CapEx vs. OpEx – Cloud is all OpEx.  Some customers’ CFOs have decided that this is necessary even though the rationale is not always clear to those of us without a finance or business degree.  Leases for on-premise systems can be structured to be mostly or all OpEx.  Of course, that only accounts for the systems, so data center infrastructure would likely fall more under CapEx.  If those are sunk assets, however, then unless they are to be sold, depreciation under CapEx will continue whether SAP systems are moved to the cloud or not.

I am sure there are plenty of other reasons to move to the cloud.  I would simply encourage customers to get informed about the challenges of migration; the costs once real corporate requirements are included; the security and control, or lack thereof, you will have over your mission critical systems; and the options you can utilize to resolve some of the issues driving you to cloud today.  For any customer that would like to have a discussion with me about these issues, costs and solutions, please respond to this blog or send me an email:


May 8, 2017

Is your company ready to put S/4HANA into the cloud? – Part 3

The third of a 4 part discussion about corporate requirements in support of S/4HANA and questions to be asked of cloud providers in support of placing this landscape in the cloud.

  • What backups must be performed? Some cloud providers might include daily, weekly, incremental or no backups.  They may include raw image backups vs. database aware backups. Just make sure that whatever backups you require are supported and included in the price for the cloud services.
    • Should your corporate backup solution be used? For flexibility as well as visibility reasons, you may prefer to use a backup solution that you have already tested and approved in the cloud environment.  Or, perhaps, it is an audit requirement.
    • Are extra server(s) required for backup solution? Your security and audit departments may not permit your backups to share infrastructure, including the network, with any other clients in a provider’s cloud environment.  One or more servers with or without dedicated network infrastructure may be required.
    • How quickly must backups be performed and restored, and what is the RTO after database corruption? SAP HANA backups can generally take their time, as long as the aggregate transfer rate is sufficient to back up the entire database prior to the next backup.  Well, that is unless you want to be able to restore to the prior day in the event of database corruption, in which case you may want the backup finished prior to a specific time on the same day.  Just make sure you think about this and have the infrastructure necessary to meet your backup speed priced.  Even more important is how quickly the backup can be restored, as well as what services are offered to restore the backups and roll forward any logs that have been created since that backup was initiated, i.e. the RTO for getting back up and running.
    • Will backups be available to DR systems? Not to be overlooked are backups in DR.  Not only will you want to be able to take backups in DR, but you would also need to be able to restore a backup from the primary site to the DR site, which is not so easily done if the primary site is truly down and unavailable.  This means that the backup server would need bi-directional replication with the DR site, as well as testing to ensure this works correctly.  What incremental costs are required for the replicated backup bandwidth?
  • Security – First a disclaimer. I am not a security expert, so I may be addressing only a subset of the real requirements.
    • How will corporate single sign-on operate with the cloud solution? Whether you use Microsoft Active Directory, CA SSO, IBM Tivoli Access Manager or one of the dozens of other products on the market, you are probably using this solution to authenticate and authorize users in SAP.  Make sure it can integrate with the potential S/4HANA system in the cloud.  Make sure that your security administrators can control policies, assign and revoke privileges and audit as necessary.
    • Must communications to/from cloud be encrypted and what solution will be used? We all know that hackers want to access your data for malicious reasons, financial gain or industrial espionage.  Do you want your key strokes and data to and from the cloud to transmit in clear text?  If not, which solution will you use and how might the use of that solution impact performance?  How about between application servers and database servers at the cloud provider?
    • How will data stored in cloud be secured? It is one thing to have your personal email stored on storage devices shared with millions of other users, but do your corporate polices allow for corporate databases to be located on storage devices that are shared with other customers?  If not, do you require dedicated devices, of what kind and at what cost?
    • How will backups be secured? We touched on backups earlier, but this is now specific to the physical media on which those backups are stored not to mention replicated to the DR site as well as any external media that you might require, e.g. tapes, DVDs or removable disks.  How can you be ensured that no one makes a copy, removes a disk, etc?
  • What are the non-production requirements? All of the above was just talking about production, but most customers have an even more extensive non-production landscape.  Many, if not most, of those same questions can be applied to non-production.  Remember, there are few employees that command a higher salary than your developers, whether internal or external.  They create corporate intellectual property and often work with copies of production data.  Their workloads vary based on project demands, phases of implementation or problems to be addressed.  Many customers utilize DR capacity or underutilized capacity on HA systems to address non-prod requirements, however this may not be an option in a cloud environment, or if it is, at what cost?
    • How will images be created/copied, managed, isolated and secured? You may use SAP LaMa (Landscape Management, previously known as Landscape Virtualization Management (LVM)), backup/restore, disk replication, TDMS, BDLS and/or custom scripts to populate non-prod systems.  Will those tools and techniques work in the cloud, and at what cost?
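Returning to the backup-window question above: whether a provider's infrastructure can meet your window comes down to sustained throughput, which is easy to estimate.  The database size and window below are illustrative assumptions, not figures from any specific customer:

```python
def required_mb_per_s(db_size_tb, window_hours):
    """Minimum sustained throughput (decimal MB/s) to back up
    a database of the given size within the given window."""
    return db_size_tb * 1e6 / (window_hours * 3600)

# A hypothetical 8 TB HANA database that must finish backing up
# in 6 hours needs roughly 370 MB/s sustained to the backup target.
rate = required_mb_per_s(8, 6)
print(f"{rate:.0f} MB/s sustained")
```

A number like this is worth putting in front of a cloud provider directly: can the offered backup infrastructure, including any shared network, actually sustain it, and at what price?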


The last part of this discussion will deal with migration challenges when moving to the cloud and lastly, a few of the reasons that are often used to justify a move to the cloud.

May 5, 2017

Is your company ready to put S/4HANA into the cloud? – Part 2

And now, the details and rationale behind the questions posed in Part 1.

  • What is the expected memory size for the HANA DB? Your HANA instances may fit comfortably within the provider’s offerings, or may force a bare-metal option, or may not be offered at all.  Equally important is expected growth as you may start within one tier and end in another or may be unable to fit in a provider’s cloud environment.
  • What are your performance objectives and how will they be measured/enforced? This may not be that important for some non-production environments, but production is used to run part, or all, of a company.  The last thing you want is to find out that transaction performance is not measured, or that no enforcement exists for missing an objective.  Even worse, what happens if these are measured, but only up to the edge of the provider’s cloud, not inclusive of WAN latency?  Sub-second response time is usually required, but if the WAN adds 0.5 seconds, your end users may not find this acceptable.  How about if the WAN latency varies?  The only thing worse than poor performance is unpredictable performance.
    • Who is responsible for addressing any performance issues? No one wants finger pointing so is the cloud provider willing to be responsible for end-user performance including WAN latency and at what cost?
    • Is bare-metal required or, if shared, how much overhead and how much over-commitment? One of the ways that some cloud providers offer a competitive price is by using shared infrastructure, virtualized with VMware or PowerVM for example.  Each of these has different limits and overhead, with VMware noted by SAP as having a minimum of 12% overhead and PowerVM with 0%, as the benchmarks were run under PowerVM to start with.  Likewise, VMware environments are limited to 4TB per instance, and often multiple different instances may not run on shared infrastructure based on a very difficult to understand set of rules from SAP.  PowerVM has no such limits or rules and allows up to 8 concurrent production instances, each up to 16TB for S/4 or SoH, up to the physical limits of the system.  If the cloud provider is offering a shared environment, are they running under SAP’s definition of “supported” or are they taking the chance and running “unsupported”?  Lastly, if it is a shared environment, is it possible that your performance or security may suffer because of another client’s use of that shared infrastructure?
  • What availability is required? 99.8%?  99.9%?  99.95%? 4 nines or higher?  Not all cloud providers can address the higher limits, so you should be clear about what your business requires.
  • Is HA mandatory? HA is usually an option, at a higher price.  The type of HA you desire may, or may not, be offered by each cloud provider.  Testing of that HA solution periodically may, or may not be offered so if you need or expect this, make sure you ask about it.
    • For HA, what are the RPO, RTO and RTP time limits? Not all HA solutions are created equal.  How much data loss is acceptable to your business, and how quickly must you be able to get back up and running after a failure?  RTP is a term that you may not have heard too often; it refers to “Return to Processing”, i.e. it is not enough to get the system back to a point of full data integrity and ready to work, but the system must be at a point that the business expects, with a clear understanding of which transactions have or have not been committed.  Imagine a situation where a customer places an order or pays a bill but it gets lost, or where you pay a supplier and mistakenly pay them a second time.
  • Is DR mandatory and what are the RPO, RTO and RTP time limits? The same rationale applies to these questions as for HA; once again, DR, when available, is always offered at an additional charge and is highly dependent on the type of replication used, with disk based replication usually less expensive than HANA System Replication but with a longer RTO/RTP.
    • Incremental costs for DR replication bandwidth? Often overlooked is the network costs for replicating data from the primary site to the DR site, but clearly a line item that should not be overlooked.  Some customers may decide to use two different cloud providers for primary and DR in which case not only may pricing for each be different but WAN capacity may be even more critical and pricey.
    • Disaster readiness assessment, mock drills or full, periodic data center flips? Having a DR site available is wonderful provided when you actually need it, everything works correctly.  As this is an entire discussion unto itself, let it be said that every business recovery expert will tell you to plan and test thoroughly.  Make sure you discuss this with a potential cloud provider and have the price to support whatever you require included in their bid.
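The availability percentages discussed above translate directly into allowable downtime per year, which is a useful sanity check when comparing provider SLAs.  A quick illustration:

```python
def downtime_minutes_per_year(availability_pct):
    """Annual downtime implied by an availability percentage."""
    return (100 - availability_pct) / 100 * 365 * 24 * 60

for pct in (99.8, 99.9, 99.95, 99.99):
    print(f"{pct}%: {downtime_minutes_per_year(pct):.0f} minutes/year")
```

The gap between tiers is larger than the percentages suggest: 99.8% permits roughly 17.5 hours of downtime a year, while four nines permits under an hour.  Whether a single maintenance window would already consume a year's allowance is exactly the kind of question to put to a provider.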


I said this would be a two part post, but there is simply too much to include in only 2 parts, so the parts will go on until I address all of the questions and issues.

May 4, 2017

Is your company ready to put S/4HANA in the cloud? – part 1

Cloud computing for SAP is of huge interest to SAP clients as well as providers of cloud services including IBM and SAP.  Some are so bought into the idea of cloud that for any application requirement “the answer is cloud; now what was the question?”  As my friend Bob Gagnon wrote in a recent LinkedIn blog post[i], is S/4HANA ready for the cloud? The answer is Yes!  But is your organization ready to put S/4HANA in the cloud?  The answer to that question is the subject of the rest of this blog series.

Bob brings up several excellent points in his blog post, a few of which I will discuss here, plus a few others.  These questions are ones that every customer should either be asking their potential cloud providers or asking of themselves.  These questions are not listed in any particular order, nor are they intended to cast cloud in a negative light, merely to point out operational requirements that must be addressed whether S/4HANA, or other SAP or non-SAP applications, are placed in a cloud or hosted on-premise.  In parts 2, 3 and 4, I will discuss the background and rationale for each question.  Part 4 also discusses some of the points that cloud proponents often tout, with some comments by me around ways of achieving similar benefits with on-premise solutions.

  • What is the expected memory size for the HANA DB?
  • What are your performance objectives and how will they be measured/enforced?
    • Is bare-metal required or if shared, how much overhead, how much over-commitment?
  • What availability is required?
  • Is HA mandatory?
    • What are the RPO, RTO and RTP time limits?
  • Is DR mandatory?
    • What are the RPO, RTO and RTP time limits?
    • Incremental costs for DR replication bandwidth?
    • Disaster readiness assessment, mock drills or full, periodic data center flips?
  • What backups must be performed?
    • Should corporate backup solution be used?
    • Are extra server(s) required for backup solution?
    • How quickly must backups be restored, and what is the RTO?
    • Will backups be available to DR systems?
    • What incremental costs are required for WAN copies?
  •  Security
    • How will corporate single sign-on operate with cloud solution?
    • Must communications to/from cloud be encrypted and what solution will be used?
    • How will data stored in cloud be secured?
    • How will backups be secured?
  • What are non-prod requirements, how will images be created/copied, managed, isolated, secured?
  • How will you get from your current on-premise SAP landscape to the cloud?
    • Can a migration be completed within the outage window that your business allows?
    • At what cost, complexity and risk?





May 3, 2017

Head in the cloud? Keep your feet on the ground with IBM with cloud computing for SAP.

Cloud means many things to many people.  One definition, popularized by various internet-based organizations, refers to cloud as a repository of web URLs, email, documents, pictures, videos, information about items for sale, etc., on a set of servers maintained by an internet provider, where any server in that cluster may access the requested object and make it available to the end user.  This is a good definition for those types of services; however, SAP does not exist as a set of independent objects that can be stored and made available on such a cloud.


Another definition involves the dynamic creation, usage and deletion of system images on a set of internet based servers hosted by a provider.   Those images could contain just about anything including SAP software and customer data.  Security of customer data, both on disk and in transit across the internet, service level agreements, reliability, backup/recovery and government compliance (where appropriate) are just a few of the many issues that have to be addressed in such implementations.  Non-production systems are well suited for this type of cloud since many of the above issues may be less of a concern than for production systems.  Of course, that is only the case when no business data or intellectual property, e.g. developed ABAP or Java code, is stored on such servers in which case these systems become more and more sensitive, like production.  This type of public cloud may offer a low cost for infrequently accessed or low utilization environments.  Those economics can often change dramatically as usage increases or if more controls are desired.


Yet another definition utilizes traditional data center hosting providers that offer robust security, virtual private networks, high speed communications, high availability, backup/recovery and thorough controls.  The difference between conventional static hosting and cloud hosting is that the resources utilized for a given customer or application instance may be hosted on virtual rather than dedicated systems, available on demand, may be activated or removed via a self-service portal and may be multi-tenant, i.e. multiple customers may be hosted on a shared cloud.  While more expensive than the above cloud, this sort of cloud is usually more appropriate for SAP production implementations and is often less expensive than building a data center, staffing it with experts, acquiring the necessary support infrastructure, etc.


As many customers already own data centers, have large staffs of experts and host their own SAP systems today, another cloud alternative is often required: a Private Cloud.  These customers often wish to reduce the cost of systems by driving higher utilization, shared use of infrastructure among various workloads, automatic load balancing, improvements in staff productivity and potentially even self-service portals for on demand systems with charge back accounting to departments based on usage.


Utilizing a combination of tools from IBM and SAP, customers can implement a private cloud and achieve as many of the above goals as desired.  Let’s start with SAP.  SAP made its first foray into this area several years ago with its Adaptive Computing Controller (ACC).  Leveraging SAP application virtualization, it allowed for the start, stop and relocation of SAP instances under the control of basis administrators.  This helped SAP to gain a much deeper appreciation of customer requirements, which enabled them to develop SAP NetWeaver Landscape Virtualization Management (LVM).  SAP, very wisely, realized that attempting to control infrastructure resources directly would require a huge effort and continuous updates as partner technology changed, not to mention an almost unlimited number of testing and support scenarios.  Instead, SAP developed a set of business workflows to allow basis admins to perform a wide array of common tasks.  They also developed an API and invited partners to write interfaces to their respective cloud enabling solutions.  In this way, while governing a workflow, SAP LVM simply has to request a resource, for example, from the partner’s systems or storage manager, and once that resource is delivered, continue with the rest of the workflow at the SAP application level.


IBM was an early partner with SAP ACC and has continued that partnership with SAP LVM.  By integrating storage management, the solution and enablement in the IBM Power Systems environment is particularly thorough and is probably the most complete of its kind on the market.  IBM offers two types of systems managers, IBM Systems Director (SD) and IBM Flex Systems Manager (FSM).  SD is appropriate for rack based systems including conventional Power Systems, in addition to IBM’s complete portfolio of systems and storage.  As part of that solution, customers can manage physical and virtual resources, maintain operating systems, consolidate error management, control high availability and even optimize data center energy utilization.  FSM is a manager specifically for IBM’s new PureSystems family of products, including several Power Systems nodes.  FSM is focused on the management of the components delivered as part of a PureSystems environment, whereas SD is focused on the entire data center including PureSystems, storage and rack based systems.  Otherwise, the functions in an LVM context are largely the same.  FSM may be used with SD in a data center, either side by side or with FSM feeding certain types of information up to SD.  IBM also offers a storage management solution called Tivoli Storage FlashCopy Manager (FCM).  This solution drives the non-disruptive copying of filesystems on appropriate storage subsystems such as IBM’s XIV, as well as virtually any IBM or non-IBM storage subsystem, through the IBM SAN Volume Controller (SVC) or V7000 (basically an SVC packaged with its own HDD and SSD).


Using the above, SAP LVM can capture OS images including SAP software, find resources on which to create new instances, rapidly deploy images, move them around as desired to load balance systems or when preventative maintenance is desired, monitor SAP instances, provide advanced dashboards, a variety of reports, and make SAP system/db copies, clones or refreshes including the SAP relevant post-copy automation tasks.


What makes the IBM Power Systems implementation unique is the integration between all of the pieces of the solution.  Using LVM on Power Systems with either SD, FSM or both, a basis admin can see and control both physical and virtual resources, as PowerVM is built in and is part of every Power System automatically.  This means that when, for instance, a physical node is added to an environment, SD and FSM can see it immediately, meaning LVM can also see it and start using it.  In the x86 world, there are two supported configurations for LVM: native and virtualized.  Clearly, a native installation is limited by its very definition, as all of the attributes of resource sharing, movement and some management features that come with virtualization are not present in a native installation.


According to SAP Note 1527538 – SAP NetWeaver Landscape Virtualization Management 1.0, currently only VMware is supported for virtualized x86 environments.  LVM with VMware/x86 based implementations rely on VMware vCenter meaning they can control only virtual resources.  Depending on customer implementation, systems admins may have to use a centralized systems management tool for installation, network, configuration and problem management, i.e. the physical world,  and vCenter for the virtual world.  This contrasts with SD or FSM which can manage the entire Power Systems physical and virtual environment plus all of the associated network and chassis management, where appropriate.


LVM with Power Systems and FCM can drive full database copy/clone/refresh activity through disk subsystems.  Disk subsystems such as IBM XIV, can make copies very fast in a variety of ways.  Some make pointer based copies which means that only changed blocks are duplicated and a “copy” is made available almost immediately for further processing by LVM.  In some situations and/or with some disk subsystems, a full copy process, in which every block is duplicated, might be utilized but this happens at the disk subsystem or SAN level without involving a host system so is not only reasonably fast but also does not take host system resources.  In fact, a host system in this configuration does not even need to stop processing but merely place the source DB into “logging only” mode which is then resumed into normal operating mode a short time later after the copy is initiated.
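The pointer-based copy behavior described above can be illustrated with a tiny copy-on-write sketch.  This is purely conceptual Python of my own, not any vendor's actual implementation: a "copy" duplicates only the pointers to blocks, so it is available almost immediately, and a block only occupies extra space once one side overwrites it.

```python
class Volume:
    """Toy model of a disk volume as a list of block contents."""
    def __init__(self, blocks):
        self._blocks = blocks

    def snapshot(self):
        # The "copy" duplicates pointers only, not block data, so it
        # completes almost immediately regardless of volume size.
        return Volume(list(self._blocks))

    def write(self, index, data):
        # Writing to the source replaces one pointer; the snapshot
        # keeps referencing the original block, so only changed
        # blocks end up duplicated.
        self._blocks[index] = data

source = Volume(["block-A", "block-B", "block-C"])
snap = source.snapshot()
source.write(1, "block-B-modified")
print(snap._blocks[1])   # the snapshot still sees "block-B"
```

Scaled up from three blocks to terabytes, this is why a pointer-based copy at the disk subsystem level is both near-instant and frugal with storage, while a host-driven full copy must move every block.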


LVM with x86 offers two options.  Option 1: utilize VMware and its storage copy service.  Option 2: utilize LVM natively or with VMware and use a separate plugin from a storage subsystem vendor.  Option 2 works pretty much the same as the Power/FCM solution described above, except that only certain vendors are supported, and any integration of plugins from different companies, not to mention any troubleshooting, is a customer task.  It might be worthwhile to consider the number of companies that might be required to solve a problem in this environment, e.g. SAP, VMware, the storage subsystem vendor, the OS vendor and the systems vendor.


For Option 1, VMware drives copies via vCenter using a host-only process.  According to the above mentioned SAP Note, “virtualization based cloning is only supported with Offline Database.”  This might be considered a bit disruptive by some and impossible to accommodate by others.  Even though it might be theoretically possible to use a VMware snapshot, a SID rename process must be employed for a clone, and every table must be read in and then out again with changes to the SID.  (That said, for some other LVM activities not involving a full clone, a VMware snapshot might be used.)  As a result, VMware snapshots may quickly take on the appearance of a full copy and may not be the best technology to use, both because of the overhead on the system and because VMware itself does not recommend keeping database snapshots around for more than a few days at most; the clone process therefore typically uses the full copy option.  When the full copy is initiated by VMware, every block must be read into VMware and then back out.  Not only is this process slow for large databases, but it places a large load on the source system, potentially resulting in poor performance for other partitions during this time.  Since a full copy is utilized, a VMware based copy/clone will also take radically more disk storage than a Power/XIV based clone, which is fully supported with a “changed block only” copy.


Of course, the whole discussion of using LVM with vCenter may be moot.  After all, the assumption is that one would be utilizing VMware for database systems.  Many customers choose not to do this for a variety of reasons, from multiple single points of failure, to scaling, to database vendor support, to potential issues in problem resolution due to the use of a multi-layer, multi-vendor stack, e.g. hardware from one vendor with proprietary firmware from another vendor, a processor chip from yet another vendor, virtualization software from VMware, and an OS from Microsoft, SUSE, Red Hat or Oracle, not to mention high availability and other potential issues.  Clearly, this would not be an issue if one eliminates database systems from the environment, but that is where some of the biggest benefits of LVM are realized.


LVM, as sophisticated as it currently is, does not address all of the requirements that some customers might have for a private cloud.  The good news is that it doesn’t have to.  IBM supplies a full range of cloud-enabling products under the IBM SmartCloud brand.  These tools range from an “Entry” product, suitable for adding a simple self-service portal, some additional automation and some accounting features, to a full-featured “Enterprise” version.  Those tools call SD or FSM functions to manage the environment, which is quite fortunate, as any changes made by those tools are immediately visible to LVM, thereby completing the circle.


SAP and IBM collaborated to produce a wonderful and in-depth document that details the IBM/Power solution:


A blogger at SAP has also written extensively on the topic of Cloud for SAP.  You can see his blog at:

January 4, 2013 · Posted in Uncategorized · Leave a comment

IBM @ SAP TechEd 2012

IBM will have a huge presence at SAP TechEd 2012 in Las Vegas.  Please take a look at the information from my colleague, Bob Wolf, regarding what you can expect this year.  I will be presenting two sessions, one on the value of Power Systems for SAP compared to x86 and another on Infrastructure solutions from IBM for SAP Cloud Computing.   I look forward to seeing all of you there.



For those of you headed off to SAP TechEd in mid-October, just wanted to let you know about a number of sessions and events that IBM will be leading.

Even before TechEd starts, IBM and SAP will be co-hosting the IBM i and SAP Solution Update and Dinner on Monday, Oct 15th, starting at 2pm. This has been an annual tradition for a number of years and is a great way to get up to date on everything related to SAP on i. Afterwards, there will be a dinner where you can network with other companies running SAP on i and connect with IBM and SAP experts. If your SAP systems are running on i (formerly known as the AS/400), this should be a very valuable session. Please register ahead of time.

IBM will also be co-hosting an event with Intel on Wednesday night called Meet the Experts. It runs from 7pm to 9pm in the Opium Room at the Venetian. In spite of the name of the room, I am absolutely certain that no opium will be served at this IBM event. Space is limited and you must register in advance to participate.

IBM will also be presenting a baker’s dozen of break-out sessions on a broad range of topics. For example, my friend Steve Bergmann and his client, the Standard Bank of South Africa (SBSA), will be co-presenting a session on Banking Solutions for the 21st Century. SBSA is a large bank running SAP on z. Alfred Freudenberger, one of my friends and counterparts on the AIX side, will be presenting two topics, one on UNIX and SAP, and the other on cloud technologies for SAP. There will be two IBM sessions on HANA, including one by my good friend Rich Travis. Another friend, Skip Garvin, will be presenting IBM’s experiences in helping customers reduce the risks in cross-platform SAP migrations. Many of the other sessions focus on key topics like Process Optimization, Single Sign-On, Value-Based Archiving, Compliance Auditing and Governance, DB2, Managed SAP Virtualization, and even a topic on back propagation for Artificial Neural Networks as it relates to SAP and Mobility. Included in section C below are the descriptions, times, and locations of each of these sessions.

IBM will also have a booth (#315) at SAP TechEd where you can meet the experts and get any questions answered.

Have a safe trip to TechEd.


A) IBM i and SAP Solution Update and Dinner

Register now and reserve your seat for our annual IBM i and SAP Solution Update and Dinner at TechEd 2012 Las Vegas.
IBM and SAP experts will provide the latest news and solution updates for IBM i and SAP environments. Attend this session and learn about the IBM PureFlex System, SAP solution updates, and exciting new possibilities with cloud computing and in-memory computing.
Don’t miss this opportunity to have your questions answered by experts and to engage in lively discussions with your peers. The business meeting will be followed by a group dinner.
The Venetian/Palazzo Congress Center
3355 Las Vegas Boulevard South
Las Vegas, NV 89109
Room # Toscana 3605 & 3609
Refreshments and registration, 1:30 – 2:00 p.m.
Business meeting kickoff, 2:00 – 5:45 p.m.
Group dinner, 6:00 p.m.


IBM i running SAP Solutions – New News from IBM

Customer presentation, “Migrating SAP from Windows to IBM i”

SAP trends 2012

Business Partner presentation, “Leveraging the POWER of IBM for SAP”

In-Memory Computing for SAP Landscapes

Customer Feedback Forum – Expert Panel

End of Business Session – Group Dinner
Register today to reserve your seat! Seating is limited.
We look forward to seeing you at SAP TechEd 2012 Las Vegas!


B) IBM and Intel sponsored Meet the Experts reception

Wednesday, October 17, 2012
7:00pm – 9:00pm
Opium Room at TAO in the Venetian Hotel

Please join IBM and Intel for cocktails and hors d’oeuvres as we renew our commitment to delivering best in class SAP solutions and celebrate our new IBM Client Center: Lab for SAP Solutions. Get your questions answered by IBM and Intel experts on hot topics such as SAP HANA, Mobility and Cloud Computing. Discover how IBM and Intel continue to deliver innovative solutions to clients around the globe and across a wide array of industries.
This is a private, by-invitation-only event. For quick entry, bring your confirmation and TechEd badge.
RSVP as soon as possible, space is limited.

C) IBM Sessions at TechEd
1) Banking Solutions for the 21st Century – Standard Bank of South Africa
To gain competitive advantages in today’s financial services environment, banking institutions must be able to quickly introduce new products and business strategies to drive revenue while radically lowering costs. To do so, they rely not only on their custom-designed legacy applications but also must leverage and modernize preexisting solutions, while continuing to expand a flexible performance-demanding infrastructure serving hybrid best in class solutions. This session will highlight how the Standard Bank of South Africa partnered with SAP for banking solutions, and with IBM for the agile infrastructure and middleware, to create a competitive and integrated banking services model that will meet Standard Bank of South Africa’s rapid business growth projections into the year 2015. The major discussion topics of the presentation will be centered on the business benefits derived by leveraging SAP banking solutions, the ability to rapidly deploy new products by implementing a service-oriented architecture, and optimizing operational costs while meeting performance, security, availability, scalability, and cloud objectives. This session is a joint customer presentation between the Standard Bank of South Africa and IBM.
Speaker:  Steve Bergmann
Date:  Wednesday October 17th
Time:  3:15 pm – 4:15 pm
Room name:  Titan 2305
2) UNIX for SAP software is dead! Long live UNIX for SAP software!
Everyone “knows” that x86 servers cost less than UNIX servers, but is that a correct perception when the UNIX server is an IBM Power System? Should AIX be relegated to the role of SAP database server? Is the SAP sales and distribution 2-tier benchmark a reasonable barometer of database performance? How important is the reliability of application servers and how do IBM Power Systems deliver superior uptime? And is parallel database technology the best way to deliver optimal uptime for database servers? This session will reveal how IBM is able to deliver not just low TCO, but very competitive costs of acquisition for SAP landscapes while enabling maximum flexibility and minimizing downtime.
Speaker:  Alfred Freudenberger
Date:  Thursday October 18th
Time:  9:15 am – 10:15 am
Room name:  Titan 2305

3) Head in the cloud – Learn how to keep your feet on the ground
Cloud is one of the big buzz words in the SAP community, but what does it mean? Is cloud the right answer for every problem? How can IBM help you to achieve your cloud goals? What can you do today with IBM Power Systems and SAP NetWeaver Landscape Virtualization Management? Where does IBM’s new PureFlex System fit? What are the other cloud alternatives for SAP systems? This session will show how IBM, with the inclusion of SAP technologies, is implementing the different flavors of cloud computing for SAP application environments.
Speaker:  Alfred Freudenberger
Date:  Wednesday October 17th
Time:  9:15 am – 10:15 am
Room name:  Titan 2303
4) IBM HANA – Scalability, Availability and Simplicity with a Disaster Recovery
IBM System Solution for SAP HANA: extreme scalability, availability and simplicity with a disaster recovery strategy, delivered via the General Parallel File System (GPFS). Richard Travis will discuss the design of the IBM Systems Solution for SAP HANA and the critical role that GPFS plays in the solution, including how the solution can grow from a stand-alone configuration to a very large and extremely reliable cluster in a dynamic and non-disruptive fashion. Rich will also discuss disaster recovery options, from simple to complex and synchronous to asynchronous, and how these solutions will be enhanced by GPFS and its future extensions.
Speaker:  Rich Travis
Date:  Wednesday, October 17th
Time:  10:30 am – 11:30 am
Room name:  Titan 2303

5) Minimizing Risk in Cross-Platform SAP Migrations…Lessons Learned
This session is designed to help you understand the challenges inherent when migrating SAP workloads and will provide you with valuable insight into the do’s, don’ts, lessons learned, and best practices gleaned from hundreds of migrations completed by IBM’s Migration Factory – from non-IBM platforms, x86-based systems, mainframes, and even from earlier versions of POWER servers to IBM POWER7, xSeries, zSeries, and iSeries platforms.
Speaker:  Skip Garvin
Date:  Tuesday, October 16th
Time:  3:15 pm – 4:15 pm
Room Name: Titan 2303

6) Energize Your SAP Software Investment with Database Innovations
Learn how the latest SAP and IBM innovations can help you run your business with even lower total cost of ownership (TCO). In this session, you will also learn how joint innovations make the SAP and DB2 combination a best-of-breed business solution. Catch the latest on how you can use the rich set of new capabilities in SAP Solution Manager 7.1 to maximize your efficiency as a database administrator, and learn about the new features in DB2 10.  We will also cover how SAP takes advantage of innovation with DB2, along with many other topics. Regardless of whether you are an SAP and DB2 veteran, or you are exploring superior alternatives to your incumbent non-DB2 database, we will cover new and exciting technologies for everyone. See how these innovations can help you achieve tangible dollar savings.
Speaker:  Ralf Wagner
Date:  Thursday October 18th
Time:  3:15 pm – 4:15 pm
Room name:  Titan 2305

7) SAP Process Optimization & Innovation Techniques
In this session you’ll learn about the latest process innovation techniques and how they apply to an SAP environment. The discussion will include details around how IBM used Process Innovation for its own SAP implementation and how lessons learned from this and other SAP implementations can be applied to a broad spectrum of SAP process optimization opportunities.
Speaker:  Joe Kaczmarek & Parag Karkhanis
Date:  Tuesday October 16th
Time:  4:30 pm – 5:30 pm
Room name:  Titan 2305

8) The Backprop approach to Mobility-how to embed intelligence in your method
Artificial Neural Networks (ANN’s) can be used to extract patterns and detect trends that are too complex to be noticed by other computer techniques. One of the first, and still popular ways, to train ANN’s is backpropagation – a dynamic system optimization technique. So what does backprop have to do with mobility? In this session, you will learn how the fundamental principles of backprop can be applied in the form of an approach to tackle mobility projects. Embodying those principles in your method will bring distinct advantages, especially in mobility. We will also discuss how to balance a continuously adaptive approach with the practical need for stability. Specific use cases will be provided to help illustrate the point so it will become an empirically achievable method.
Speaker:  Matt Schwartz & Parag Karkhanis
Date:  Tuesday October 16th
Time:  10:15 am – 11:15 am
Room name:  Titan 2305

9) Implementing Single Sign On and securing communication channels
In this session, participants will learn about the options for implementing Single Sign-On in various SAP modules, including SAP NetWeaver Portal, SAP ECC, and SAP BusinessObjects software. This session will go into detail on leveraging various Single Sign-On options including Kerberos, certificate-based, SAP SSO2, Security Assertion Markup Language (SAML), and secure network communications (SNC). Participants will also learn how to secure the communication between front end and back end using SNC and HTTPS, and what factors to consider while selecting a type of Single Sign-On and securing communications. We will conclude by sharing lessons learned from our implementation experience, including what worked for us, and a short demo.
Speaker:  Vinod Lambda
Date:  Thursday October 18th
Time:  5:45 pm – 6:45 pm
Room name:  TBD

10) Visualizing Predictive Analytics on SAP HANA
SAP has come out with SAP Visual Intelligence and SAP Predictive Analysis recently. This session will give an introduction on how to combine SAP HANA with predictive analytics, and how to visualize the results using the new SAP Visual Intelligence.
Speaker:  Vijay Vijayasankar
Date:  Thursday October 18th
Time:  8:00 am – 9:00 am
Room name:  TBD

11) Value-Based Archiving and Governance – Reducing Time, Cost, and Complexity
In today’s world of accelerating data volume, variety, and input velocity, organizations are grappling with isolated islands of application-specific information. While most would agree that the “keep it forever” model died years ago, killed by governance requirements and escalating infrastructure costs, it is estimated that 70% of business information is still unnecessarily retained.  Join key IBM experts for an interactive session to discuss IBM’s SAP-focused “Value-based Archiving” (VBA) strategy and Smart Archive software for managing and archiving not only SAP documents and data, but virtually “any data, any content” from almost any source. You’ll also hear about IBM customers using VBA today to enable ‘Defensible Disposal’ – keeping only what’s necessary to finally tame escalating storage, server, and operating costs – saving them up to $50M over five years.
Speaker:  Jerry Bower
Date:  Thursday October 18th
Time:  5:45 pm – 6:45 pm
Room name:  TBD

12) Guardium – Compliance Auditing for Heterogeneous SAP Environments
Have you ever wondered how you could enhance your organization’s IT security, or how to better tie your heterogeneous IT security to your SAP systems?  If so, come and learn how the upcoming collaborative solution from IBM and SAP is also going to be the simplest. With IBM InfoSphere Guardium optimized for SAP applications, you can be assured of the privacy and integrity of trusted information in your data center and reduce overall costs by automating the entire compliance auditing process.
Speaker:  Martin Mezger
Date:  Thursday, October 18
Time:  10:30 am – 11:30 am
Room Name:  Titan 2305

13) Managed SAP Virtualization on PureSystems with LVM
Over the last decade, SAP landscapes have grown and become more and more complex. With the support of virtualized SAP systems on Intel-based servers and the adoption of virtualization for SAP workloads, the physical growth of the infrastructure might have been contained, or at least mitigated.  However, we now see a growing number of SAP virtual machines on top of a large population of physical systems. At the same time, virtualization has been adopted in storage and networking, all typically growing and operating within their own silos. This session shows how to optimize daily operations of virtualized SAP landscapes on systems with integrated management that can operate different virtualization solutions.
Speakers:  Paul Henter IBM & Rob Shiveley Intel
Date:  Wednesday, October 17
Time:  2:00 pm – 3:00 pm
Room Name:  Titan 2303

October 3, 2012 · Posted in Uncategorized · Leave a comment