Sebastian Kruk





Balancing the Load

Why you need to constantly monitor application performance

A question that every online application provider will eventually face is: Does my application scale? Can I add an extra 100 users and still ensure the same user experience? If the application architecture is designed properly, the easiest way to handle more traffic is to put an additional server behind the load balancer.
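To make the idea concrete, here is a minimal round-robin balancer sketch in Python. The class, method names, and server addresses are hypothetical and only illustrate the scale-out step of registering an extra backend; real deployments would use a hardware or software load balancer rather than code like this.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin balancer sketch (hypothetical names)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._pool = cycle(self.servers)

    def add_server(self, server):
        # Scaling out: register an additional backend to absorb more traffic.
        self.servers.append(server)
        self._pool = cycle(self.servers)

    def route(self):
        # Each request is handed to the next backend in turn.
        return next(self._pool)

balancer = RoundRobinBalancer(["x.x.x.154", "x.x.x.156"])
balancer.add_server("x.x.x.155")
assignments = [balancer.route() for _ in range(6)]
```

With three registered backends, six consecutive requests are spread evenly, two per server.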

In this article we recount an incident at one of our clients in which the cause of poor application performance was eventually traced to problems with load balancing across the application servers.

HTTP Server (500) Errors Go Through the Roof
Around 8 am the Operations team at Rendoosia Inc. (name changed for commercial reasons) got an alert from the APM tool that one of three SharePoint servers was generating many HTTP Server (500) errors. All three servers were behind a load balancer, so the team decided to analyze the overall performance of all three servers with the report presented in Figure 1.

Figure 1: Overview of the three SharePoint servers behind one load balancer with some KPIs: usage, response time and number of errors; two servers show performance problems

The Operations team noticed the following issues:

  1. The x.x.x.155 server (row marked with the blue box) was under significantly lower load than the other two (7k operations compared to almost 30k on each of the others). The load and the number of users were shared equally between the other two servers: x.x.x.154 and x.x.x.156.
  2. Although server x.x.x.155 had the lowest user count, it was reporting the longest processing time.
  3. Server x.x.x.156 was reporting a high number of HTTP 5xx errors (marked with the red box).
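The kind of triage the team did by eye in Figure 1 can be sketched as a simple outlier check over per-server KPIs. The numbers below are illustrative stand-ins modeled loosely on the figures quoted above, and the thresholds are arbitrary assumptions, not values from the APM tool.

```python
# Hypothetical KPI snapshot (illustrative numbers, not real measurements).
servers = {
    "x.x.x.154": {"operations": 29_500, "resp_time_s": 1.2, "http_5xx": 12},
    "x.x.x.155": {"operations": 7_000,  "resp_time_s": 4.8, "http_5xx": 3},
    "x.x.x.156": {"operations": 29_800, "resp_time_s": 1.4, "http_5xx": 2_140},
}

def flag_outliers(stats, load_ratio=0.5, error_threshold=100):
    """Flag servers whose load falls far below the mean or whose 5xx count spikes."""
    mean_ops = sum(s["operations"] for s in stats.values()) / len(stats)
    flags = {}
    for name, s in stats.items():
        issues = []
        if s["operations"] < load_ratio * mean_ops:
            issues.append("under-loaded (possibly detached from the balancer)")
        if s["http_5xx"] > error_threshold:
            issues.append("high 5xx error count")
        if issues:
            flags[name] = issues
    return flags

flags = flag_outliers(servers)
```

Run against the sample snapshot, this flags x.x.x.155 as under-loaded and x.x.x.156 for its 5xx spike, mirroring observations 1 and 3 above.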

The team charted the HTTP server errors and the load, counted as the number of transactions, for all three servers over time (see Figure 2) to get a better understanding of the situation.

Figure 2: Distribution of the number of server errors and transaction counts over time for all three servers; one server shows a lower load

The team's first observation, based on these reports, was that the x.x.x.155 server, with the lowest number of users, was most likely not connected to the load balancer. To determine the cause of the high response time on this server, the team analyzed two reports:

  • The response time for x.x.x.155, broken down into network, server and redirect times, indicated that almost all the time was spent on the server (see Figure 3).
  • A drill-down into the operations report to analyze the load on the server (see Figure 4) showed that one particular transaction took a long time to complete, resulting in low application performance and a poor user experience.

Figure 3: Response time breakdown for x.x.x.155: most of the time is spent on the server

Figure 4: Drill down in the context of the x.x.x.155 server shows main KPIs per transactions executed on this server; one transaction is affected by performance problems

Next, the team analyzed the 5xx errors produced by the x.x.x.156 server. They drilled down to a PurePath of one of the transactions that were reporting these errors and learned that the problem was caused by a malfunctioning database connection pool (see Figure 5).
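To illustrate how a misbehaving connection pool surfaces as HTTP 500s, here is a minimal sketch of a bounded pool. The class and connection names are hypothetical; the point is only that once all connections are checked out (for example, because a caller leaks them), further requests fail and the application layer typically translates that failure into a server error.

```python
import queue

class ConnectionPool:
    """Minimal bounded connection pool sketch (hypothetical, for illustration)."""

    def __init__(self, size):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(f"conn-{i}")

    def acquire(self, timeout=0.1):
        try:
            return self._free.get(timeout=timeout)
        except queue.Empty:
            # Exhaustion like this is what ends up reported as an HTTP 500.
            raise RuntimeError("connection pool exhausted")

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(size=2)
a = pool.acquire()
b = pool.acquire()          # pool is now empty; nothing was released
try:
    pool.acquire()          # third request cannot get a connection
    status = 200
except RuntimeError:
    status = 500            # request fails with a server error
```

Releasing connections promptly (e.g., in a `finally` block) keeps the pool from draining this way.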

Figure 5: Drilldown through PurePaths to the Error details reveals that the 5xx errors are caused by database connection pool usage

The Operations team was also curious how the 5xx errors produced by the x.x.x.156 server were affecting actual user experience. Were user operations distributed equally between the two servers connected to the load balancer, or were users who were unlucky enough to be served by the x.x.x.156 server stuck on that server? This kind of question is hard to answer by looking at a single SharePoint server, so the Operations team used the APM tool to answer it.

Figure 6: Users remain on the server at which they have started their session

The report in Figure 6 shows that users were usually served by the same application server. Therefore those who started their session on the x.x.x.156 server remained there, resulting in a constantly poor experience due to the bad performance of that server.
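The stickiness observed in Figure 6 is typical of hash-based session affinity, where a stable identifier (client IP, session cookie) is hashed to pick a backend. A minimal sketch, assuming a hypothetical `sticky_route` function and the two balanced servers from the incident:

```python
import hashlib

SERVERS = ["x.x.x.154", "x.x.x.156"]

def sticky_route(session_id, servers=SERVERS):
    """Hash the session id so the same user always lands on the same backend."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same session id maps to the same server on every request,
# so a user who starts on a degraded backend stays on it.
first = sticky_route("user-42")
repeat = [sticky_route("user-42") for _ in range(5)]
```

This is exactly why users who landed on x.x.x.156 kept suffering: affinity pins them to the backend regardless of its health.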

Conclusion
Modern application performance management is not only about making sure that the application and database servers are operating without problems. We also need to set up the load balancer correctly and monitor the network infrastructure for potential problems that affect overall application performance.

The Operations team at Rendoosia Inc., using Compuware dynaTrace Data Center Real User Monitoring (DCRUM), was able to move in just a few clicks from the alert about HTTP Server (500) errors, through a holistic overview of application server KPIs, to the root cause of the problem.

Based on the unequal load among the three application servers, visible in the requests breakdown in Figure 1 and the transaction counts in Figure 2, the team quickly determined that the x.x.x.155 server was not properly connected to the load balancer. Additional analysis showed that this server was also affected by the poor performance of one of its operations.

This story shows us that even when only one server is experiencing performance problems, manifesting as many HTTP Server errors, the load balancer will not offload that server because it is not aware of those errors. That is why Operations teams need to constantly monitor for such outliers in application performance, with properly configured alerts, even in load-balanced setups.
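The kind of alerting described above can be sketched as a sliding-window check on the 5xx share of recent responses. The class name, window size, and 5% threshold are all assumptions for illustration, not settings from the APM tool used in the incident.

```python
from collections import deque

class ErrorRateAlert:
    """Sliding-window 5xx alert sketch: fires when the error share exceeds a threshold."""

    def __init__(self, window=100, threshold=0.05):
        self.samples = deque(maxlen=window)   # 1 for a 5xx response, else 0
        self.threshold = threshold

    def record(self, status_code):
        self.samples.append(1 if status_code >= 500 else 0)

    def should_alert(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold

alert = ErrorRateAlert(window=50, threshold=0.05)
for code in [200] * 45 + [500] * 5:
    alert.record(code)
# 5 errors out of 50 responses is a 10% error rate, above the 5% threshold
```

Because this watches each server's own error rate rather than the balancer's view of connectivity, it catches exactly the outliers a load balancer misses.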

About the Author

Sebastian Kruk is a Technical Product Strategist, Center of Excellence, at Compuware APM Business Unit.
