Speeding up the Internet

Posted by Joerg Nalik Oct 18, 2012

We are somewhere in the 2nd decade of mass Internet adoption, but one thing hasn’t changed much since the beginning: many websites still need a few seconds or even tens of seconds to load, and downloading large documents can take minutes. The famous Moore’s Law predicts a performance doubling about every 18 months, so since the beginning of the Internet roughly a 2^10 ≈ 1000-fold improvement should have had time to happen. That this didn’t apply to web traffic, and is unlikely to in the future, is due to two facts:

 

  • Electric signals can only travel at the speed of light or less, and that causes latency delays for communications over longer distances.
  • The prevalent standard Internet communication protocol, TCP/IP, is very sensitive to these unavoidable network latencies due to its “chattiness”: a sender spends a lot of time waiting for receipt acknowledgments from the receiver for rather small data packages.

 

These two points are both exempt from Moore’s Law progress, hence the enduring slowness of the Internet.

 

Point 1 is immovable, being a law of nature. The second point is man-made but equally difficult to overcome, since changing a standard as widely adopted as the Internet communication protocol would risk major disruptions from incompatibilities during a transition period. Therefore a switch to a more efficient standard hasn’t happened so far.

 

 

Still, solutions for overcoming the slowness of the Internet do exist, and SAP NetWeaver AccAD is one of them. So-called WAN accelerators work by being deployed as a pair on the sending and the receiving side of Internet traffic and by transforming the inefficient standard protocol of the Internet into a far more efficient, but proprietary, method of data transmission over wide area networks (WAN). They do so in a way that is transparent to the application and client end-points, so no changes to your applications or front-end tools like browsers are required. These acceleration technologies yield big response-time improvements by eliminating latency effects almost entirely and by reducing bandwidth usage through advanced compression algorithms. Their benefits come at the price of having to deploy yet another network technology in datacenters as well as in branch offices and nowadays even on various types of mobile devices.

 

 

While WAN accelerators like SAP NetWeaver AccAD are designed mostly for their performance and network capacity saving benefits, there are other network technologies which focus more on the security and reliability aspects of the network for SAP and other applications. Those network technology sets are often summarized under the term application delivery controllers (ADCs, vs. the aforementioned WAN optimization controllers, WOCs).

 

 

As so often, combining two complementary technologies like ADCs and WOCs can lead to greater benefits than just the sum of both. In particular, when you chain network services like ADCs and WOCs, a significant amount of configuration has to be done between the two, which is an opportunity for simplification by bundling both products into one. In general, a lot of additional benefits from simplified deployment as well as operations can be realized over time when consolidating network products.

 

 

Therefore, the SAP AccAD product team reached out to SAP’s network technology partners with a proposal to cooperate. Radware, one of the previously SAP-certified network technology partners, engaged with us, and we tested their ADC in combination with the SAP NetWeaver AccAD component in SAP’s Co-Innovation Lab. The results are quite excellent; you can find details in our recent publication https://scn.sap.com/docs/DOC-32706. Radware offers AccAD as part of their FastView™ solution, which bundles network reliability, security and performance capabilities for the remote use of SAP solutions. With the new AccAD version included in the FastView product, acceleration of HTTP-based traffic as well as of SAP’s own SAPGUI front-end traffic is possible. Besides our whitepaper, you can visit us at SAP TechEd 2012 in Las Vegas and Madrid to learn more. AccAD is presented in sessions TEC111 and TEC812, and Radware is exhibiting at TechEd.

When you clicked on this blog I’m sure you had already heard a lot about Software as a Service (SaaS) and SAP’s On Demand solutions, so I won’t give you a long introduction to those terms. I’m generally interested in infrastructure topics, with infrastructure loosely defined as the technology layers below SAP platforms, typically provided by SAP’s technology partners. Therefore, I’d like to describe in this blog what SaaS adoption means for the infrastructure of SAP customers.

 

At first sight this might appear odd: the whole idea of SaaS is that a customer does not need to worry about any sort of hardware and software because it is all bundled together as a SaaS solution. I’d say this is mostly true at the first stage of SaaS adoption, when each SaaS application provides enough value to be adopted as an island solution in itself. Island means that such a solution is not integrated with any other solution; there are just end-users connecting via the Internet to a SaaS island application. The only infrastructure needed on the customer side is technical Internet connectivity, which pretty much any company built up a long time ago. The only infrastructure concern a customer might have is to add Internet connectivity capacity to support increased SaaS usage by their employees.

 

For SAP customers the second stage of SaaS adoption is to integrate the SAP On Demand (OD) solution islands with their On Premise (OP) SAP Business Suite solutions to gain even more value out of both the OD and the OP side. There is an increasing number of use cases for building out “hybrid” On Demand – On Premise applications. The blog “What is a cloud and on-premise hybrid solution by SAP?” gives a good introduction to the business drivers for such solutions and the technologies provided by SAP to build them. If you are visiting the TechEd 2012 conferences, TEC102 SAP Cloud Strategy & Road Map, by Sven Denecken, gives you a great overview, too.

 

The implementation of hybrid solutions might follow three basic steps:

 

  • Determine business drivers for an OD/OP integration and make an implementation decision.
  • Design and build the “application connectivity”.
  • Design and build the “technical connectivity”.

 

Application connectivity and technical connectivity are two distinct tasks for OD/OP solution integration. Application connectivity is concerned with application APIs and, in some cases, solution-specific content for SAP integration components like SAP PI and SAP Data Services (DS). A customer would need to decide whether to leverage possibly pre-existing On Premise integration components or to use their respective SAP On Demand integration solutions; both are possible in principle. Typically, the integration components are general-use “containers” for integration “content” which is specific to each particular hybrid business process running across your OD/OP solutions. Content refers to the particular transformation and message handling rules for a usage scenario, which have to be performed by the integration components. Setting up application connectivity on the OP side is a task for the SAP application IT departments of SAP customers.

 

Technical connectivity is about building and configuring the infrastructure, in particular the network, to provide qualities such as reliability, security and good performance over the wide area network (WAN), i.e. the Internet, between the OD and OP datacenters. This task is typically performed by the network IT group at the On Premise datacenter of an SAP customer. On the On Demand side, technical connectivity is provided by SAP and is part of the On Demand solution offering.

 

[Figure 1]

 

Fig. 1: Application connectivity can be provided by SAP PI OD and SAP DS OD in conjunction with an SAP On-Premise agent. The technical network connectivity consists of network infrastructure components in the DMZs of the OD and OP datacenter sites.

Here are a few network connectivity guidelines I’d like to recommend:

 

  • NC-1: No direct network connection from public networks to OP SAP applications shall be allowed.
  • NC-2: All business application network traffic transmitted via Wide Area Networks shall be encrypted.

 

The background of the 1st guideline is that a customer wants to keep control over who can access or post what information in their On Premise backend systems. Therefore, it is out of the question to just connect an application server network port to the Internet; a control needs to be put between the backend application and the Internet. In the simplest case this is a so-called proxy instance sandwiched between firewalls, as shown in Fig. 2. The firewalls control lower-level network protocol access and define and guard the borders of the different network security zones. The network zone between the Internet-facing firewall and the innermost application network security zone is commonly called the De-Militarized Zone, or DMZ. The proxy inside the DMZ further restricts communications to only one Internet URL, which is then mapped to an internal application server port. Thereby, internal application server IPs and port addresses are hidden from the outside world. Proxies should be used in both of the following cases; a small illustrative sketch of the second, reverse proxy case follows the list:

 

  • The On Premise application makes a call to the On Demand application. For such outbound requests from the On Premise side, so-called forward proxies are used.
  • The On Demand application has to send a request to the customer’s On Premise system. In this case the customer needs to provide a public URL for accessing the backend application and routes traffic to that URL through a so-called reverse proxy in its DMZ. The reverse proxy then forwards the traffic to the backend application server and may perform a number of other network operations for security and other qualities.
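
To make the reverse proxy idea more concrete, here is a minimal sketch in Python. It is an illustration only, not a recommendation to hand-roll a proxy: the internal backend address and the single published path are made-up examples, and a real DMZ deployment would use a certified ADC product or SAP Web Dispatcher with TLS. The point it demonstrates is that only one URL is exposed and the internal server addresses stay hidden.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND = "http://10.0.0.15:8000"   # hypothetical internal application server, never exposed
PUBLISHED_PATH = "/sap/inbound"     # hypothetical single path published to the Internet
SKIP_HEADERS = {"transfer-encoding", "connection", "date", "server"}

class ReverseProxy(BaseHTTPRequestHandler):
    """Forward only the published path to the internal backend; reject everything else."""

    def do_GET(self):
        if not self.path.startswith(PUBLISHED_PATH):
            self.send_error(403, "Path not exposed")
            return
        with urlopen(Request(BACKEND + self.path)) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            for key, value in upstream.getheaders():
                if key.lower() not in SKIP_HEADERS:
                    self.send_header(key, value)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # In a real DMZ this listener would sit behind the Internet-facing firewall and terminate TLS.
    HTTPServer(("0.0.0.0", 8443), ReverseProxy).serve_forever()

An ADC adds load balancing, encryption off-loading and request inspection on top of this basic routing function, which is why dedicated products are used in practice.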

 

[Figure 2]

 

Fig. 2: A proxy instance in the DMZ allows controlled and safeguarded access from the Internet to the customer’s On Premise backend systems.

 

The simple traffic routing function of proxies is just the minimum service required for technical OP/OD connectivity and is part of a wider set of functions provided by so-called application delivery controllers, or ADCs. ADCs provide proxy functions but might also include encryption/decryption, load balancing, bandwidth capacity management and other network services. A collection of reference information about some network technologies can be found at http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/7447

 

The 2nd network connectivity rule takes into account that your business data is confidential and therefore needs to be encrypted when sent over the Internet. Encryption/decryption can be provided by the SAP application server or be off-loaded to most proxies or, better, ADCs, which provide that capability themselves. Application groups should consult with their network and security colleagues to decide whether encryption/decryption should be done in the application server, by an ADC in the DMZ, or even by both.

 

When choosing a proxy or ADC, check out SAP’s listing of technology partners with SAP-certified products at

http://www.sap.com/partners/directories/searchsolution.epx. With the search terms ESOA-AW-PO, ESOA-AW-RA and ESOA-AW-SEC you can find DMZ network products, and many of those include the needed proxy features and more. A second option is to consider the SAP NetWeaver Web Dispatcher product.

 

To narrow down these choices even more you might try the following approach:

 

  • When you need to design your technical connectivity implementation, contact your network group and ask what type of proxy solution they might already have implemented in the On Premise datacenter DMZ.
  • Check if that product is certified by SAP. If so, go ahead and use it.
  • If no suitable proxy solution is deployed already, consider SAP Web Dispatcher as an easily available solution that is already included in the customer’s SAP NetWeaver license.

 

Re-using the existing proxy technologies of your network group is usually the easiest and most cost-effective way to go; it saves you a lot of time discussing the introduction of new technologies into your DMZ. Special-purpose-built network hardware appliances can usually handle dozens or even hundreds of proxy instances, so new hardware for added capacity is usually not required, and adding one more proxy is a simple configuration task on existing equipment. However, in case no such solution is available, you would need either to introduce one or to do a software installation and configuration of SAP NetWeaver Web Dispatcher on a small server in the DMZ.

 

For the configuration of the proxy you’d need to give your network group some information about internal application server IP addresses and SAP’s On Demand solution URLs, and they’d need to understand how to handle security certificates for the traffic encryption/decryption configuration. SAP can’t possibly describe such configuration procedures for all proxy technologies available to customers. Rather, we decided to describe the process using SAP Web Dispatcher, which is the SAP proxy reference implementation. If you use other proxy technologies, your network group simply needs to perform the equivalent configuration steps on the proxy product they prefer. The SAP Web Dispatcher reference implementation is described in detail in https://service.sap.com/~sapidb/011000358700000894852012E/SAP_OD_TCG_FINAL_V12.pdf

(A customer or partner SAP Service Marketplace user account is required to retrieve this document.)

 

I’ll be visiting SAP TechEd 2012 in Las Vegas, and you can meet me during my expert session “Connecting SAP Applications” on Tuesday, 10/16/2012, at 2pm in Lounge 7, Hall C, to discuss network technology for SAP solutions further.

SAP’s offerings today include so-called On Premise (OP) applications, meant to run inside enterprises’ datacenters, and On Demand (OD) applications, which are SAP’s Software as a Service (SaaS) cloud solutions. The great opportunity ahead of SAP’s customers is to integrate their existing OP solutions with new, cloud-based functional extensions. A common concern with cloud usage is its High Availability (HA), and how and whether it will be specified through service level agreements (SLAs). Unplanned outages of big public cloud providers always attract a lot of attention. Therefore, I’d like to make some simple comparisons of the HA of cloud solutions vs. the HA of enterprise datacenter applications.

 

Unplanned failure situations are hopefully very rare, random occurrences in either case; they are analyzed with statistical methods. Availability is typically defined as the ratio of uptime to total time per year. The availability of a certain component, step or other element might be 99%, meaning that on average the application runs on 99 days out of 100. Sometimes availability is even promised to be 99.999%. For such “5 nines” availability the unplanned downtime would be only 86.4 seconds within 100 days of operation. It takes some effort and budget to make applications really highly available, so HA and Total Cost of Ownership (TCO) are a trade-off decision: each unplanned downtime causes losses to your business, and therefore the higher TCO of higher availability should be balanced against the losses from expected unplanned downtimes.
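
As a quick sanity check of those numbers, the downtime budget for a given availability over the 100-day example period can be computed in a couple of lines of Python (nothing here goes beyond the arithmetic in the paragraph above):

PERIOD_SECONDS = 100 * 24 * 3600  # the 100-day example period used above

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * PERIOD_SECONDS
    print(f"{availability:.3%} available -> {downtime:,.1f} s unplanned downtime per 100 days")

# 99% allows 86,400 s (a full day); "5 nines" allows only 86.4 s per 100 days.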

 

Now, how do OP and OD solutions compare in regard to HA? Here is a 1st example:

  • Let’s assume there are 500 On Premise customers. Each of them made the investments to have their applications 99% available.
  • Then assume 500 customers are using an On Demand solution, which also shall be specified to be 99% available.

 

Do both cases have the same level of availability? The answer is yes, but there are still differences if you ask some follow-up questions:

 

How likely is it that all 500 OP customer systems are up and running well at the same time?

 

To answer this question one needs to know that the individual probabilities have to be multiplied to calculate the overall probability. So the overall probability that all 500 OP customers are running well is:

 

0.99^500 ≈ 0.0066 = 0.66%

0.66% might strike you as surprisingly low, but trust me, that is the correct number. It means that it almost never happens that all 500 customers are running well simultaneously, despite their individual 99% availability. How does this make sense? If availability is 99%, then on average 1% of the 500 customers, i.e. 5 customers, are down and 495 customers on average are running well.
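
Here is a tiny Python check of both numbers from this example (purely the arithmetic already stated in the text):

N = 500   # number of OP customer systems
A = 0.99  # availability of each individual system

print(f"P(all {N} systems up at the same time) = {A ** N:.2%}")                     # ~0.66%
print(f"Expected number of systems down at any moment: {(1 - A) * N:.0f} of {N}")   # ~5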

 

In contrast, 99% of the time all 500 OD (cloud) customers are simultaneously up and running well! That sounds much better than the 0.66% in the OP case, but it isn’t really: when the 1% OD downtime hits, all 500 OD customers experience downtime simultaneously, many more than the average of 5 out of 500 customers in the OP case.

 

So the real difference between OP and OD solutions is that OP downtimes usually are a trickle that never makes the news, whereas OD outages are a short, sharp pain for everybody, which, because it is rare, draws attention and makes it into the news headlines. In terms of damage avoidance, both cases in this example are equal.

 

In a second example I’d like to look at a really complex business scenario which might consist of many “things” that can go wrong:

  • There are numerous hardware components like servers, network routers, firewalls, load balancers, storage …
  • There are multiple software components, like applications: ERP, CRM …, middleware components like PI, databases …..
  • Not least, there are multiple processing steps in an overall business process, which all need to function all the time.

A large number of “things” all need to function simultaneously to complete a business process. So that we can reuse the math from above, assume there is a total of 500 “things” involved in a complex business scenario. Each individual thing is a potential “single point of failure”, breaking the business process as a whole when it breaks itself. Therefore, each thing needs to be considered with regard to the availability of the business process.

 

If you had 99% availability for each of the 500 things, the above calculation applies in the same way as before: the chance of running through a business scenario with 500 things, each 99% available, is only 0.66%. So practically, performing the whole scenario without at least one error almost never happens! And this is true for OP and OD solutions alike, because I only assumed 500 things involved and made no assumption about whether they are OP or OD provided. Having complex business scenarios almost never succeed is obviously a big problem, and there are two solutions to it:

First, you could increase the individual component availability and step up HA from 99% to, let’s say, 99.99%. Then the overall success rate becomes:

 

0.9999^500 ≈ 0.951 = 95.1%

 

This would be much better. If an almost 5% failure rate is still too much, just add another 9 to get “5 nines” (99.999%) availability, but remember that this drives up your TCO even further. The High Availability way to get to 99.99% availability when individual things have 99% availability is to provide a redundant setup as shown in the picture below. Since availability is 99%, the unavailability of one component is 1%. For 2 redundant components the unavailability probabilities have to be multiplied to get the overall unavailability:

 

0.01 × 0.01 = 0.0001, so availability = 1 − 0.0001 = 0.9999 = 99.99%

So with the HA redundant set-up you can increase the availability of things tremendously: two 99% available components together have 99.99% availability. However, you’d need to double everything, and therefore your TCO will about double as well.
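
The same redundancy arithmetic in a few lines of Python; the generalization to more than two replicas is my addition, but it follows directly from multiplying the unavailabilities:

def redundant_availability(a: float, replicas: int = 2) -> float:
    """Availability of `replicas` identical components where any single one is sufficient."""
    return 1 - (1 - a) ** replicas

print(f"{redundant_availability(0.99):.4f}")     # 0.9999   -> two 99% components give "4 nines"
print(f"{redundant_availability(0.99, 3):.6f}")  # 0.999999 -> triple redundancy gives "6 nines"
print(f"{0.9999 ** 500:.3f}")                    # 0.951    -> a chain of 500 such pairs succeeds ~95% of the time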

 

[Figure 1]

Figure 1: High Availability is usually achieved through a redundant set-up of each component such that any single point of failure is avoided. Special attention needs to be paid to the x-shaped inter-connectivity of components A and B so that every single failure of a component A or B can be bypassed without loss of functionality. High Availability about doubles TCO compared to non-HA systems. A resiliency set-up avoids doubling TCO but needs more investment in built-in error recovery mechanisms; see text.

 

 

So what is the second alternative? It is “Resiliency”, which is the ability to recover from temporary failures, for example through some explicit error handling and error correction. Like before, in the 99% availability case only a small number of steps will fail on average when performing a business scenario: you’d pass on average 495 “things” successfully and only 5 would go wrong.

 

Let’s look at one failed step: How likely is it that it would fail twice when executed two times in a row?

 

As before, the probabilities multiply: the chance of a 99%-available step failing twice in a row is 0.01 × 0.01 = 0.0001. So if you perform a step a second time whenever you experience an error the first time, the overall availability rises from 99% to 99.99%. A minor catch is that about 1% of the things need to be re-done, which increases overall processing times by roughly 1% on average. In most cases this is far too little to matter at all.

 

A bigger issue is that for each thing you’d need to implement re-try capabilities, and that has some prerequisites. The first prerequisite is that you link the things together into a whole business process in a “loosely coupled” way. By that I mean that you build in mechanisms to re-try a failed linkage at some later time. If a link partner is not reachable, then its 99% availability tells you that it should be available again after a short wait. If the things are network routers connected to each other, some network protocols do the re-try automatically for you. An example in the application-to-application (A2A) integration world would be so-called “asynchronous” communications, which are expressed in the following official BBA guideline (see http://wiki.sdn.sap.com/wiki/display/BBA/Loose+Coupling+Through+Web+Services):

SOA-WS-1

SAP recommends implementing remote consumption of business functionality using loosely coupled, asynchronous, stateless communication using web services. ….

 

 

 

You’d need asynchronous communications to de-couple two subsequent steps from each other so that individual re-tries can be done without the previous step needing to be redone as well. Once you implement loose coupling there are implicit prerequisites: asynchronous communications need buffers or queues on both the sending and the receiving end, which is more effort to program and drives up operating cost through higher server main memory demands as well. But compared to doubling everything in the HA setup case, there seems to be a smaller price tag on resiliency, right? A minimal sketch of such a retrying, queue-based sender follows below.
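
As an illustration of the resiliency pattern only (this is not how SAP PI or any SAP component implements it), here is a minimal Python sketch of a loosely coupled, retrying sender: the business step just puts its message into a local queue, and a delivery worker retries later if the partner happens to be in its 1% downtime window. The send_to_partner function is a made-up stand-in for whatever asynchronous web service call would really be used, and a real landscape would of course use a persistent queue in the integration middleware rather than an in-memory one.

import queue
import random
import time

outbox = queue.Queue()   # local buffer decoupling the sender from the receiver

def send_to_partner(message: dict) -> None:
    """Made-up stand-in for an asynchronous web service call; fails roughly 1% of the time."""
    if random.random() < 0.01:
        raise ConnectionError("partner temporarily unreachable")

def submit(message: dict) -> None:
    # The business step only enqueues; it never blocks on the partner being available.
    outbox.put(message)

def delivery_worker(retry_delay_seconds: float = 5.0) -> None:
    while True:
        message = outbox.get()
        try:
            send_to_partner(message)
        except ConnectionError:
            time.sleep(retry_delay_seconds)   # the partner is 99% available, so a short wait usually suffices
            outbox.put(message)               # re-queue and retry later instead of failing the whole process
        finally:
            outbox.task_done()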

 

High Availability and Resiliency are two different methods to reach the same goal of, let’s call it, high “Reliability” of business process execution.

Which one is better depends on your total cost of development (TCD) vs. total cost of ownership. As said, loose coupling needs more effort on the application development side; tight coupling, like synchronous ABAP RFC calls, is just so much easier for programmers to use. If you can afford the higher development costs of loose coupling for integrations, then the resiliency approach is much better. If hardware and operation costs are cheap, it might be better to focus on an end-to-end high availability setup and save on development costs.

 

In traditional enterprise datacenter OP systems, High Availability is often preferred; the number of things to double is usually small. The TCD vs. TCO trade-off is decided by different corporations: TCD matters to the software vendor, while TCO matters to their customers. This makes it a bit more difficult to find the right TCD/TCO trade-off point.

 

For OD or SaaS providers, TCD and TCO are both on their side and folded together into their customers’ subscription fees. Therefore a SaaS vendor has a better opportunity to decide on the best TCD vs. TCO trade-off and can better decide between Resiliency and High Availability approaches. The mixed case of integrating On Premise with On Demand applications for overarching business scenarios is again a bit more complicated: hybrid, cloud-spanning business processes tend to be more complex and therefore error-prone in the first place, and they involve distributed ownership of “things” between the OP customer and the OD vendor. What would be particularly difficult is to implement support services like holistic end-to-end monitoring across OP and OD systems. Therefore, I’d think any steps between OP and OD systems should be loosely coupled and designed for Resiliency. Imagine you wanted High Availability between OP and OD applications: strictly speaking you’d need two SaaS vendors for the same application to guarantee fail-over capabilities, which is not very feasible.

 

Is resiliency good enough to satisfy the business users?

 

I’d think this is the case in most instances. Let’s look at one last example: expense reporting. In the overall process of expense reporting, an employee might enter his or her expense data in an OD cloud application. Then the OD application sends the expense data asynchronously to an OP finance system. If the communication between the OD and OP sites fails, the end user would not even notice it; he or she would still be able to enter expense data. Only if the OP-OD link were broken permanently would they wonder when their expenses will be reimbursed to their bank account. But if the link is broken only temporarily, the process completes eventually. Even a temporary outage of a whole day wouldn’t matter to most end-users in this use case.

 

In summary, I’d recommend considering the different approaches of High Availability and Resiliency with regard to total cost of development and total cost of ownership. In complex business scenarios you’ll likely have a large number of potential single points of failure, and you might decide on a mix of HA and Resiliency approaches. For OP-to-OD integrations the resiliency approach should be chosen, due to the very high cost or lack of OD provider options for High Availability.

Web-enabled applications, SOA-based application-to-application and business-to-business integration, On-Premise, On-Demand, On-Device integration, virtualization/clouds, SAP NetWeaver Gateway, Sybase mobility solutions … the list goes on and on and shows the inescapable trend towards more and more distributed deployment, use and processing of business applications.

The progress and innovations of network technologies are key enablers for extending the reach of SAP applications to more and more users and usages. To no small part, the network provides the reliability, security and good performance for distributed application environments.

I have followed this trend for a few years now and have had the privilege to work with a lot of leading network vendors and SAP internal teams on the topic of modern network technologies for SAP applications. Most fascinating are “Network Edge Services” (NES), which are, in essence, all the network services that sit between your datacenter- or cloud-based application and its human or machine clients. NES include load balancing, wide area network performance accelerators, VPN tunnels, and intrusion and denial-of-service attack defenses, to name just a few.

These services are very close to the reliability, security and performance means provided by the SAP application platforms themselves, and therefore application and network experts should work closely together to optimize SAP landscapes end to end. SAP and its network technology partners already cooperate closely, for instance through SAP’s network product certification or through joint efforts to integrate network partner products with SAP’s AccAD WAN optimization solution.

I’m looking forward to TechEd to meet and hear from you about your network-related experiences and to discuss SAP’s and partners’ network technology solutions with you. You can meet me at the following events at TechEd in Las Vegas:

Tuesday 9/13/11, 10:00-10:30am: COIL Expert Session: Modern Network Technologies for SAP Applications

Tuesday 9/13/11, 2:30-3:00pm: TechEd LV Expert Session: Connecting Distributed SAP Applications, Lounge 4

Wednesday 9/14/11, 10am-12pm: TEC-P12, AccAD Pod session, exhibition floor Pod #12

Thursday 9/15/11, 12pm-2pm: TEC-P12, AccAD Pod session, exhibition floor Pod #12

You can also meet the following NES vendors at TechEd Las Vegas:

  • Bluecoat: Booth 107
  • Citrix: Booth 115
  • Radware: Booth 407

The trend is clear: in order to further lower the costs of IT while adding new and more application services, infrastructure technologies and business application software have to be looked at together for cross-technology-layer optimizations.

Over the last decade we saw data center consolidation, outsourcing of running business applications, and now a beginning trend to run business applications in “clouds”. While that happened, the number of business application users grew, and they evolved from working at a company’s headquarters datacenter location to working from anywhere in the world. Users now increasingly exchange their desktop or laptop computers for mobile devices. And more and more IT-automated business processes between companies and between a company and its customers are being used.

A key enabler for these trends is the network, or to be more precise, a number of network technology layers from pure physical lines to very intelligent network services. The most intelligent, upper layers of the network technology stack interact very closely with the business application technologies and are thus one example of a piece of infrastructure which should be optimized together with the applications.

The globally interconnected business world demands reliability, high performance and security for its business activities, and those three requirements can only be fulfilled by optimizing the application and network technology layers together. Consequently, it is no accident that a lot of the most advanced network technologies sit at the “edges” of data centers, branch locations and end-user devices themselves: they sit in the network traffic path between the application servers and the end-user or application client.

Due to the importance of those “Network Edge Services” (NES) for overall application reliability, performance and security, SAP has cooperated and co-innovated with its network vendor partners for a few years now. SAP certifies NES solutions, provides best practices born out of proof-of-concept projects in SAP’s Co-Innovation Lab, and more.

[Figure: Network Edge Services Solution Map]

To learn more about NES we recommend you visit our session:

ALM209 Modern Network Technologies for SAP Applications

at SAP TechEd 2010 in Berlin and Las Vegas.

If you’d like to discuss your needs and experiences with modern network technologies, you might also join our expert sessions in:

Berlin: EXP474 Modern Network Technologies for SAP Applications, Wednesday 10/13/2010, 2:30pm

Las Vegas: EXP475 Modern Network Technologies for SAP Applications, Thursday 10/21/2010, 3:30pm

I am looking forward to seeing you at SAP TechEd 2010.

Server, storage and network virtualization (vLANs) is becoming more commonplace as the real benefits of Cloud Computing slowly but surely advance from hype to reality. Expectations and promises of big cost savings and efficiency gains are huge, and yet essential details of how to handle security, performance, high availability, QA, backups and many other common IT deployment and operations tasks continue to lag.

Yes, it is relatively easy to move a simple, self-contained application into, let’s say, a public cloud, but how would you know it is ready for productive use? There are a lot of IT operations puzzle pieces to be worked out for the new virtualization and cloud trends to make them really useful. Luckily the SAP Co-Innovation Lab (COIL) (http://coil.sap.com/) is a place where SAP’s partners and SAP can join together to work out new best practices for the emerging IT environments.

For a recent Proof of Concept (POC) project we tackled two tasks at once with SAP partners HP and Shunra. The POC topic was the stress testing of an application and the performance of applications across a wide area network. For stress testing, the HP LoadRunner tool, which is also resold by SAP, is very well known; but in this instance, we added two more twists to it:

 

  • That HP LoadRunner can be deployed and run from within a virtual machine sounds likely, but would it yield the same stress test results as HP LoadRunner installed directly on a physical server? Would the measured response times and error rates be the same?
  • What about the latest HP LoadRunner WAN emulation features provided through integration with Shunra’s technology (note: the integration became available with HP LoadRunner version 9.5)? Does the software-based WAN emulation work when deployed inside a virtual machine? Again, would the test results be the same as with the hardware-appliance-based WAN emulation Shunra also offers?

Such questions are important to answer simply because, if you move your application into a virtualized environment, you don’t want to lose efficiency again by having your test tools deployed directly on physical servers or as dedicated appliances, physically wired into your datacenter. If you plan to move applications into an off-premise cloud, then you might not even have the option of physically deploying software and hardware solutions any longer!

Why pay attention to WAN emulation? First of all, even for conventionally deployed and operated business applications, most end-users no longer work in the same location (local area network, LAN) where the application is running. As we’ve described in an earlier paper, the long-distance connectivity over a wide area network (WAN) has a very significant performance impact on business applications. With the next stage of evolution – SaaS and collaborative apps like LinkedIn or SAP Streamworks – the notion of application and infrastructure as separately installed, operated and, most importantly, performance- and cost-optimized entities begins to break down. Only if an application and its infrastructure are treated together as one combined product can real optimization be achieved and yield a profit for SaaS offerings. Consequently, with the continuous proliferation of virtualization leading to clouds, combined testing of an application plus its wide area network is required, and the HP LoadRunner/Shunra combination is one tool which can help (see also Shunra’s blog).

To make the rest short: We explored testing of a simple SAP application scenario with HP and Shunra in COIL and found:

 

  • There is no detectable difference in test results using physical vs. virtual deployments of HP and Shunra testing products!
  • We derived a best-practice testing methodology not only for finding problems in a combined application/WAN infrastructure solution but also for verifying solutions to, in particular, the WAN-induced performance issues. Application and network groups can now use the same test set-up and environment to optimize a business solution together, cutting out a lot of the redundancy of those groups working separately.

Of course we documented our findings in a white paper, which you can find here: http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90679c93-6653-2d10-a0af-faf3fdc22883

Writing down “SAP Guidelines” is an ambitious task. Who can possibly know everything that matters in the SAP world?  To make a start, the Best-Built Applications (BBA) team took the initiative to assemble the guidelines for our partners who develop applications which interact with SAP’s Business Suite solutions.

 

Since the task at hand is huge, we decided to deliver in stages. While we develop and deliver the guidelines, we also continuously gather feedback from our partners to ensure that we meet expectations. For PKOM 2010 we are proud to have delivered the first three chapters of the Best-Built Apps guideline book (SAP Guidelines for Best-Built Applications):

 
  • An introduction explaining terminologies, methodologies and scope
  • A comprehensive listing of all guidelines from different areas
  • The first detailed chapter about the Application Life-cycle Management topic
 

Further topic chapters are in the pipeline for more 2010 updates:

 
  • Process orchestration and service-oriented architecture
  • User interface and user experience
  • Information management
  • Data tools
  • Application development
  • Governance and security
 

Interestingly, while we are working on more and more detailed background for our guidelines, new topic area ideas keep popping up from SAP internal and external sources. For instance, somebody suggested we include output management. I would never have thought of that topic myself: printing business documents is such old-fashioned, ancient functionality in SAP products that it usually doesn’t get much attention. However, the feedback that output management is important is a more than valid point, and of course there is more to it than just printing, if you think about it.

 

I love such feedback! At PKOM 2010 we will represent the BBA initiative. Maybe check http://bestbuiltapps.sap.com before PKOM and prepare your comments and questions. I look forward to many fruitful chat discussions.

If you are an ISV or are otherwise involved in the development of complementary solutions for the SAP Business Suite, you might already have heard about SAP’s new “Best-Built Applications Guidelines for ISVs” (BBA) initiative. The first publication of our guidelines took place at TechEd 2009; the latest version can always be found at http://bestbuiltapps.sap.com/.

The overwhelming feedback we got from ISVs and others was: “good initiative but much more is needed.” That wasn’t unexpected; in fact, the BBA development team at SAP is working on an iterative rollout of the guidelines. In succession, we plan to publish additional chapters for BBA and revise older chapters based on internal and external feedback. Developing BBA is very much an SAP internal and external community-driven process.

Only about two months after TechEd, we are ready to publish SAP Guidelines for Best-Built Applications, which includes a chapter on Application Life-Cycle Management (ALM). ALM is detailed in 7 sections:

 

  • SAP Solution Manager overview
  • Requirements
  • Design
  • Build and test
  • Deploy
  • Operate
  • Optimize 

 

While SAP Solution Manager is SAP’s tool for supporting all aspects of ALM, the other six sections are aligned with the ALM phases defined by the ITIL standard.

As usual, our new guideline chapter is based on existing SAP best practices in the ALM area and as such has received many contributions and SAP-internal reviews from SAP’s ALM experts. In addition, we have started to include the SAP Mentor initiative, a group of the most active contributors to SAP’s community programs, in our review cycles. We are particularly grateful for the valuable insights we got from the mentors. We will continue to collaborate with the mentors on the next chapters of BBA, which will be released in the coming year in rapid succession.

Having said that, we’d love to hear from any ISV, but also from SIs and of course customers about what they think about BBA in general or about any detailed point we are describing. In this way, we’ll be able to improve and extend BBA continuously with every new version.

Wouldn’t it be great if business applications from SAP’s independent software vendor (ISV) partners, which integrate with SAP Business Suite, would “work just like” SAP Business Suite solutions?  Of course, as SAP employees we would like to see this very much. But seriously, our customers run large SAP IT shops. Having complementary ISV solutions, which they can administer and upgrade using their established SAP practices, makes customer adoption of such solutions so much easier.

 

This idea is the theme of a new set of SAP guidelines for ISVs, which will see its first publication in October 2009. I say first publication because there is so much to say that we couldn’t cover everything we wanted in the first version, and we didn’t want to set a publication date in the distant future.

 

Rather, we’d like to offer the guidelines to ISVs immediately and let customers see them as well. For SAP customers, these guidelines are helpful when selecting SAP partner solutions. The guidelines work toward putting customers, partners, and SAP all on the same page when it comes to integrating partner solutions with SAP Business Suite solutions at customer sites.

 

The guidelines will be updated annually so that they can be extended and will continue to tightly align with the latest product developments on SAP’s solution road map. This enables solution partners to make effective choices among technology and integration options based on the most current innovations available from SAP Business Suite.

 

The SAP guidelines are grouped into six major focus areas:

 
  • Application life-cycle management
  • Process orchestration and service-oriented architecture
  • User interface and user experience
  • Data and information management
  • Application development
  • Governance and security
 

The recommendations we give are applicable to different approaches ISVs may take to developing their solutions:

 
  • Develop and run on the SAP NetWeaver platform using either the ABAP or Java stack or
  • Migrate a Java application to the NetWeaver Java platform or
  • Choose any other platform (.NET, Ruby, C/C++, ….) and connect to SAP
 

All these approaches are valid and are covered by guidelines which consider them individually. As the SAP Business Suite evolves and incorporates new innovations, some older technologies are naturally phased out. Therefore, some guidelines discourage, or explicitly recommend against, certain technology usage. This way ISVs can better align with SAP’s product roadmap.

 

Now, what does a guideline look like? Here’s an example:

 

Namespaces  

SAP recommends that ISVs name software components uniquely to avoid name collisions with SAP software and with software components from other SAP partner companies.

 

Explanation

In Java, use package names to specify the namespace. You can either request a unique namespace from SAP, or you can use a namespace that is very unlikely to be used by another company, such as com.mycompany.myapplication (assuming that your company owns the domain name mycompany.com). An ABAP namespace, which must be registered with SAP, is three to eight capital letters bounded by slashes (for example, /MYAPP/).

Pointer

To request and register namespaces and for more information, go to http://service.sap.com/namespaces.

 

As simple as this particular guideline is, when developing a business solution there are a really large number of business as well as IT requirements to fulfill, and having a comprehensive list of how to fulfill them is a big asset and help to SAP partners, leading to better products for customers in terms of functionality, quality and TCO. Guidelines are structured as shown above: we focus on what to do and provide only links to how to do it. This way, our guidelines complement the great wealth of how-to information already provided on SDN, the SAP Service Marketplace, help.sap.com, and other sites by giving structured advice on what has to be done to develop best-built applications.

 

The guidelines are deeply rooted in SAP’s decades of experience in delivering enterprise-class business solutions. They are focused on architecture and drawn from industry standards as well as SAP’s own product standards, and last but not least from SAP’s experience of providing excellent support to customers. In fact, our support actually extends to our partners, as the following picture shows.

 

[Figure: Partner Support Integration]

 

We do recommend that partners integrate with SAP Solution Manager, which can greatly streamline incident handling from customers, avoiding double effort in opening trouble tickets and having them processed by SAP and partner support. This is just one more example of the kind of guidelines we are providing.

 

Our Next Steps: 

We are rolling out SAP guidelines for Best-Built applications through multiple channels:

 
  • You can always find up-to-date versions on our web site: SAP Guidelines for Best-Built Applications
  • A free downloadable PDF version of our guidelines
  • A wiki version of the guidelines on SDN
  • Visit us at the TechEd 2009 conferences in Phoenix, Vienna or Bangalore. We present in session SOA206 as well as at the SAP PartnerEdge pod.
Your next steps: 

We’d like to hear your feedback about our guideline initiative. By using SDN as our staging ground, we have some easy ways to collaborate: provide your feedback as a comment to this blog or, if it is about some particular detail, add comments to our wiki, or simply email us at bestbuiltapps@sap.com.

While SOA promises the business value of agility and reuse to SAP customers, this practice also adds IT risk, due to the complexity of maintaining application integrity across workflows that span multiple technologies. How can companies achieve functional and performance testing strategies that are complete enough to cover SOA? Replicating full test landscapes for development and testing activities, with all the components and data needed to validate a rich business process, can be expensive and difficult to create and maintain. Services might not be fully developed, or they might be owned by external providers and accessed over the Internet. iTKO’s LISA solution allows teams to test incomplete component landscapes, or even just individual components, by simulating the full environment around them. In SAP’s Co-Innovation Lab, iTKO and SAP engaged in joint testing with LISA’s Virtual Services acting as a service provider as well as a service consumer in an SAP landscape (Figure). With this blog I’d like to point you to our new white paper, which reports details on how the team applied both approaches to achieve successful SOA quality coverage with less manual effort.

 

How many mechanics do you take with you on a ride in your car? Maybe none. But can you operate a data center for SAP applications without engineers? Well, this provocative comparison might be unfair, because an application data center is a more complex beast than a car. Nevertheless, it would be nice to take an extra step towards deployment and operations automation in IT.

While IT engineers are typically very specialized in just certain applications, network technologies and so on, in the end all technologies used in a data center need to function well together. This means that different technologies need to be integrated with each other, which in turn forces different IT expert groups to communicate with each other across an entire IT organization. Such cross-IT-silo communication can be slow and error prone. Savings in time and money could be substantial if such cross-silo work could be reduced through cross-technology-layer automation.

A simple but not uncommon example of the problem above is configuring and operating a load balancer or, better, an application delivery controller (ADC), which is a generalization of a load balancer with additional network service features. ADCs are useful for SAP applications with multiple instances for scalability and high availability. “When I reconfigure an SAP application, it takes me three months to get my network colleagues to change the load balancer,” is what I once heard from a customer. Surely an extreme example, but you get the point.

In order to help provide new solutions to our customers, collaboration has been taking place through the Enterprise Services Community (ES Community), our collaborative, cross-industry program that brings together thought leaders from diverse industries to share ideas and innovations in enterprise services. One Community Definition Group (CDG) – titled “PCDG 97 NetWeaver Infrastructure APIs for Network Solutions” – is focused on automating network-application integrated configuration and operation. As the group title implies, the SAP NetWeaver technology platform includes APIs which ADCs (load balancers) can use to auto-configure themselves as proxies for multi-instance SAP application systems. If the APIs are polled on a regular basis, maybe every five minutes, ADCs become capable of reacting to SAP application instance changes during production runtime. If another application instance is brought up, let’s say to provide more computing capacity for an increasing end-user load, or if an instance is brought down temporarily for maintenance, the ADC can adjust load balancing automatically without any manual administrator intervention. No three-month waiting times, not even e-mails from the application group to the network group, are needed any longer. A simplified sketch of this polling idea follows below.
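
To illustrate the polling idea only, here is a minimal Python sketch. The instance-list endpoint, its JSON response format and the ADC update call are all hypothetical placeholders, not the actual SAP NetWeaver or ADC APIs discussed in the group:

import json
import time
from urllib.request import urlopen

INSTANCE_LIST_URL = "http://sapms.example.com:8100/instances"  # hypothetical instance-list endpoint
POLL_INTERVAL_SECONDS = 300                                    # poll roughly every five minutes

def fetch_active_instances() -> set:
    """Return the set of 'host:port' entries the SAP system currently reports as active."""
    with urlopen(INSTANCE_LIST_URL) as response:
        instances = json.load(response)
    return {f"{i['host']}:{i['port']}" for i in instances if i.get("status") == "GREEN"}

def update_adc_pool(members: set) -> None:
    # Placeholder: a real ADC would be reconfigured through its own management interface.
    print("ADC server pool is now:", sorted(members))

def run() -> None:
    known = set()
    while True:
        active = fetch_active_instances()
        if active != known:               # an instance was started or stopped since the last poll
            update_adc_pool(active)
            known = active
        time.sleep(POLL_INTERVAL_SECONDS)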

Auto-configuration and auto-operation were the main topics of the ES Community group, which could lead in the near future to extended functionalities in partners’ products. Technical details have been discussed, and a certification test scenario for a new SAP certification of these features has been developed.

There is even a plan to go much further, and for that, I’d like to go back to a mechanical engineering example.

According to Wikipedia, steam engines have a long history, going back 2,000 years. Early devices were not practical power producers. This changed as James Watt invented the centrifugal, or fly-wheel, governor: a component that automatically adjusts the engine’s output power to a changing workload (http://en.wikipedia.org/wiki/James_Watt). Instead of an operator fiddling with a steam valve, this was now done automatically through the fly-wheel governor. The consequence: James Watt’s steam engines needed 75 percent less fuel than their competitors.

Green IT is currently a hot discussion topic, and in that regard, a 75 percent reduction in data-center resource consumption with similar savings in power and cooling energy would surely be appreciated. I don’t know if the results of one SAP ES Community group can lead to such great savings, but maybe we can make a contribution towards advancements in Green IT.

The key elements of the modern steam engine are the steam valve and the fly-wheel governor, which together with the rest of the machine form a closed-circuit, self-regulating system. This system keeps the steam engine working at a constant pace under varying workloads. All that we need to do now is to invent closed-circuit, self-regulating controllers for IT systems.

With the Adaptive Computing Controller tool, a “steam valve” for SAP systems is provided. An Adaptive Computing Controller API was part of our discussions in the ES Community group. The tool allows instances of one SAP system to be started and stopped; therefore, the Adaptive Computing Controller can be used to adjust SAP processing capacity to changing workload needs.

 

A modern application delivery controller sits in the perfect spot to play the role of the fly-wheel governor. The ADC could measure the response time performance of the SAP application in the same way that the fly-wheel governor measures the revolutions per minute of a steam engine. If performance degrades below a certain threshold or exceeds another level, the ADC could send commands to the Adaptive Computing Controller to add or shut down SAP system instances. Of course the ADC should not overreact, and therefore also needs to check when instance changes, which take some time to execute, are completed.

The up-to-date status of instances can be retrieved with the same mechanism needed for auto-configuration and auto-operation as described above. By combining the capabilities of the SAP application system with the Adaptive Computing Controller and ADC capabilities, it would be possible to build a closed-circuit controlled SAP system which uses only as many resources as needed for a changing workload at any point in time. The computing resources spent on over-sizing systems for occasional peak loads could be saved, which is analogous to saving fuel with James Watt’s steam engine improvements. For end users, it also makes the SAP system a little bit more like driving a car without a mechanic on board. A sketch of such a control loop follows below.
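
Purely as an illustration of the closed-loop idea, a “fly-wheel governor” for an SAP system could be sketched in Python as below. The measurement and scaling functions are hypothetical placeholders, not Adaptive Computing Controller or ADC APIs, and the thresholds are invented numbers:

import random
import time

SLOW_THRESHOLD_MS = 800   # above this average response time, add an instance
FAST_THRESHOLD_MS = 200   # below this average response time, release an instance

def measure_avg_response_ms() -> float:
    """Placeholder for the ADC's response time measurement (random here, for demonstration only)."""
    return random.uniform(100, 1000)

def start_instance() -> None:
    """Placeholder for an 'add capacity' request towards the Adaptive Computing Controller."""

def stop_instance() -> None:
    """Placeholder for a 'remove capacity' request towards the Adaptive Computing Controller."""

def scaling_change_in_progress() -> bool:
    """Placeholder: True while a previously requested instance change is still being executed."""
    return False

def governor_loop(check_interval_seconds: int = 60) -> None:
    while True:
        if not scaling_change_in_progress():   # do not overreact while a change is still pending
            avg_ms = measure_avg_response_ms()
            if avg_ms > SLOW_THRESHOLD_MS:
                start_instance()               # workload grew: add an SAP application instance
            elif avg_ms < FAST_THRESHOLD_MS:
                stop_instance()                # workload shrank: release resources again
        time.sleep(check_interval_seconds)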

I hope you like this little blog. Please let me know if you have any comments or ideas you’d like to discuss on data-center IT automation. If you’d like more details, you can also visit my ASUG session, “Simplifying SAP NetWeaver Application Server Operations,” which I’ll present at the ASUG and SAPPHIRE 2009 conference in Orlando in May.

Network Infrastructure for SAP Application-based Landscapes

The "Network", the Big Unknown

"The "network" is coming out of a little wall socket in my office, like electrical power.  Why should I care? I'm busy finding out why end users in some branch offices are complaining about the performance of our applications."

Well, think again. Since end users are more and more often physically located away from an application's datacenter, the network is not only a local area network (LAN) but also a wide area network (WAN), which can be a beast. It introduces new obstacles for the delivery of applications to end users and might often be the root cause of a problem rather than the application. Unattended, the WAN can introduce latency delays, transmission delays if bandwidth is low, and a number of new errors caused by the network side. If you use the Internet as a WAN, your application web site is also open to attack, and overall application security becomes a concern.

Add to this mix the rise of enterprise service-oriented architecture (enterprise SOA)-based application landscapes. When an enterprise SOA is in use, applications not only receive requests from end users through the web or other user interfaces, but they also receive requests from other applications, often through web service calls. Enterprise SOA is a means for application integration. Many companies are now facing the task of integrating applications that are housed in different locations, for instance in business scenarios which span multiple companies in a supplier scenario. Application-to-application (A2A) network traffic often has to be routed over a WAN in these cases, and concerns similar to those for end-user-related traffic exist.

Now that I have scared you, I have good news to share. There are many technical solutions available to counter the WAN constraints that impact SAP applications. First of all, the SAP NetWeaver Application Server has a lot of strong, built-in features that can help reduce the negative impacts of a WAN; examples are gzip compression to reduce the data transfer volume and use of the HTTPS protocol for encrypted, secure communications. In addition, a large number of network vendors have come up with new technologies that can help applications in many different ways. If you follow the developments in the network industry, you might notice a lot of innovation, mergers, acquisitions and growth. It is a very vibrant industry, driven by the networking demands of globalization.

Figure 1 provides an overview of the basic elements of a distributed enterprise SOA landscape embedded in a network infrastructure. There might be multiple data centers which host application components (1). They are connected to each other and to end users via a WAN (6).

 

[Figure 1]

Figure 1. Network services deployed in datacenter DMZs in front of SAP NetWeaver Application Servers. See text for numbers.

 

The network appliances (or hosted network services, which also exist) sit as proxies in front of the application servers in a so-called "demilitarized zone" or DMZ. End users located in branch offices might also have a small DMZ on their side of the WAN. The network functions in a DMZ are like an electrical power transformation station: the DMZ transforms network traffic between a LAN and an outside WAN so it is optimized on either side for the very different properties of both. Main features of a DMZ might include a load balancer (2), a key feature for scalability and high availability of applications; a special compression and caching appliance or WAN accelerator, which is deployed symmetrically at both WAN end points (3, 7); and a security gateway (4), which filters out malicious incoming requests by analyzing the content of message requests. In addition, a firewall appliance (5) functions as a first layer of defense, blocking unwanted network protocol connections such as telnet and more.

 

SAP's Network Vendor Ecosystem

 

Through the evolution of SAP applications - from real-time business solution to web-enabled application to enterprise SOA-based landscape - the use of wide area networks has grown dramatically. In many cases, the traditional end-user community that reaches applications over a local network has become a fast-shrinking minority. It is therefore only natural for network vendors and SAP to collaborate, first on joint blueprints for an overall application/network infrastructure and then on researching further joint application/network optimization opportunities, up to and including co-innovation.

 

A little more than one year ago, network vendors and SAP came together under the Enterprise Services Community (ES Community) umbrella.

 


http://esc.sap.com/

 

The Enterprise Services Community (ES Community) is a program that gathers customers, systems integrators, ISVs and infrastructure vendors around business themes so they can define enterprise services and other solutions that will service-enable and support the SAP platform.

 

The first ES Community advisory group was launched by seven network vendors and SAP.  It had the modest goal of producing an overview white paper for customers' network and application IT groups. The paper provides detailed information about network technologies and their benefits to SAP applications. Click on the following link to access this white paper: http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/805d8c2d-0e01-0010-a694-a94109e88f2a.

 

Another important outcome of this advisory group was the launch of two new groups, one of which organized a joint test lab facility at SAP Labs in Palo Alto, Calif. Establishing this lab jointly, sponsored by network vendors and SAP, allowed us to test network solutions quickly, efficiently and at tremendous cost savings. All network vendors and SAP are very grateful to HP and Shunra for their contributions to the lab: HP allowed the use of its LoadRunner tool, and Shunra provided its Virtual Enterprise WAN emulation appliance. The WAN emulator allowed us to test intranet and Internet conditions without ever leaving the lab room.

 

Every network vendor in this group had a chance to test solutions in the lab. By late spring of 2007, a total of eight vendors had participated in this program. You might only see results from seven vendors because one vendor decided to acquire one of the others (something I've seen happen before!). As noted previously, the network industry is very much in motion, which is exciting to watch and be a part of.

 

Many outstanding outcomes have resulted from the work of SAP's network vendor ecosystem - results that benefit you and our customers. You might have seen a number of press releases issued by vendors highlighting the improvements in network response times. The reported 90+% response time improvements over the WAN are indeed possible in some cases.

 

As of today, the following papers have been completed and co-published on SDN and on the vendors' own web sites. More papers are planned to be issued during the next weeks. Current ones can be found at the following direct links:

 

 

 

 

 

 

 

If you would like to learn more about our lab testing results, please visit the TechEd '07 presentation LCM 222 in Las Vegas or Munich, Germany in October.

 

Due to the success of the joint test lab, SAP decided to establish a much larger and permanent test lab facility for business software and infrastructure technology vendors. This lab was launched in June 2007 and was named the Co-Innovation Lab or COIL. Additional information can be found in the launch press release: http://www.sap.com/company/press/press.epx?pressid=7860.

 

With the great support of our COIL sponsors, this lab aims to demonstrate the most modern datacenter technologies available today for operating enterprise SOA-based solutions and other SAP applications. With network vendors on board at COIL, you can expect a continuous flow of new tests and solution blueprints for networks used with SAP solutions.

 

In anticipation of many more blueprints and white papers to come, we decided to establish a new top-level SDN Wiki section called "Enterprise SOA Infrastructure" (http://www.sdn.sap.com/irj/sdn/wiki), under which you will find two branches - Virtualization and Network - the two infrastructure subjects for enterprise SOA that currently receive the most attention and see the most innovation. Expect the network wiki (http://www.sdn.sap.com/irj/sdn/wiki?path=/display/esoainfrastructure/networks+for+enterprise+soa+based+solutions&) to grow fast during the next several months. I'll link all published network-related papers into this wiki section as a permanent repository of network technology information related to SAP solutions.

 

At this point, we do not have a discussion forum about network and/or application-related questions. Maybe we can start with this blog. If you send me your comments and like the idea of a network/application forum, I'll see if I can convince my SDN colleagues to open one.

Certification: As mentioned above, two new groups were launched out of the first ES Community advisory group. I've already discussed the second group, the lab group, above. The third group was established to address network product and services certifications administered by SAP. When I suggested this new group, the network vendors' interest and support were great right from the beginning. Together we were certain that customers would appreciate a certification that provides proof of the integration capabilities between networks and SAP applications.

The result of this group was the definition of certification terms and rules as well as test scenarios, which were developed and implemented in a test landscape by SAP. The experience from the second group's lab tests was a great help in designing this program, too.

 

To date, five pilot network product certifications have been completed and many more are scheduled. The certification process has been handed over to the SAP Integration and Certification Center (SAP ICC). Network vendors can now apply for certification through the ICC.

 


 

The SAP ICC and COIL continue close cooperation for the network certification program. The COIL facility maintains the landscape for the certification tests. For more information about the SAP ICC, please check the following link: http://www.sdn.sap.com/irj/sdn/icc.

 

An always up-to-date list of certified network products can be found at:

 

http://www.sap.com/partners/directories/SearchSolution.epx.

 

Search for the SAP-Defined Integration Scenarios ESOA-AW-PO (network performance certification), ESOA-AW-RA (reliability and availability) and ESOA-AW-SEC (network security). "AW" stands for "Application delivery and WAN optimization", the two network technology segments for which SAP offers this certification. Once the TechEd conferences have concluded, we will try to publish more details about the network certification testing.

 

Next Steps

 

I hope you like this blog, and I encourage you to send your comments. The network vendors and SAP have made a start at providing you with a lot of application-network integration information that helps optimize user-to-application as well as typical enterprise SOA application-to-application connectivity. Your feedback and input from the SDN and BPX communities will help steer our next big network/SAP application activities. Together, the network vendors and SAP will surely continue to use the ES Community, the Co-Innovation Lab, certification, SDN and BPX to provide further valuable information and solutions that benefit your IT and business.

With this Weblog I'd like to break the ice for a lively and diverse community developing around the subject matter of SAP software performance. Having good software performance is great, but what exactly do you mean by that? Ask this question and often the stuttering begins. Ask business folks at a company what performance they expect from an SAP solution. They might answer: processing a million sales documents in minutes on your well-matured 2-processor server. End users might like sub-second response times for everything around the globe ... Yeah right, get real! This forum intends to help you define performance in comprehensible terms. We can discuss which questions are the most useful to ask and how to manage expectations when discussing performance across diverse teams of developers, consultants, project managers, business folks, end users and more. You may want to ask "How much would you like to invest in performance?" rather than "What performance levels do you want?". You can break performance down into (our favorite) terms - response times, volume throughput/hardware sizing, scalability, stability and overload robustness - in order to define performance more precisely. These are just some simple warm-up examples on the subject of performance.

OK, now that we know how to define performance and how to talk about it, we are put in charge of achieving good software performance. I bet the performance engineering "how to" questions will become the biggest part of this forum. There are so many details along the whole SAP application software life cycle which need to be considered for achieving good performance. Luckily, there is a really good chance that somebody else has already found a solution to a particular performance problem you might have. So give it a shot and post your question in this forum. We have a fair number of moderators with strong and diverse software performance backgrounds who will answer your questions. Over time we hope that other performance specialists will join this forum and help us answer the questions too.

I think I covered the main theme of software performance in the two paragraphs above. Instead of going on and on explaining more details, I'll just list a loose, unstructured and incomplete set of software-performance-related keywords to give you some ideas and leave the rest to your questions and answers: performance management, hardware capacity management, IT governance (for performance), ABAP/Java/NetWeaver/SOA/ESA performance, EP/KM/Web Dynpro... performance, performance testing, stress testing, load testing, benchmarking, performance test methodology, performance test tools, performance requirements, performance tuning, LAN/WAN performance, intranet/Internet performance, network impact on application performance, performance service level agreements, production monitoring for performance, design and planning for performance, developing/implementing for performance, project management for performance, performance milestones....

Happy posting from the Performance Forum moderators.
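As a small illustration of how those terms can be pinned down in numbers, here is a minimal Python sketch that turns a list of measured response times into figures a team can actually agree on; the sample measurements and the measurement interval are invented for illustration:

import statistics

# Invented sample: response times in seconds for ten dialog steps
# measured during a load test over a 60-second interval.
response_times = [0.42, 0.38, 0.55, 0.61, 0.47, 1.20, 0.50, 0.44, 0.58, 0.49]
test_duration_s = 60.0
completed_requests = len(response_times)

avg = statistics.mean(response_times)
p95 = statistics.quantiles(response_times, n=20)[-1]   # 95th-percentile cut point
throughput = completed_requests / test_duration_s

print("average response time: {:.2f} s".format(avg))
print("95th percentile:       {:.2f} s".format(p95))   # outliers matter more to end users than the average
print("throughput:            {:.2f} requests/s".format(throughput))

Agreeing on a percentile target rather than an average, plus a throughput figure for sizing, already makes a performance discussion far more concrete.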
