One of the most common complaints about websites is that “this web page is slow,” and it is heard especially often about web applications that replaced desktop applications. While the web brings highly desirable characteristics such as global delivery, it also brings its own challenges at the performance level.
The next step is to work out why the page is “slow.” A user reported that a particular URL felt slow: now what? Start by pinning down exactly where the slowness happens, whether the page is actually slow at all, and whether it is slow for every user or only some of them. Answering these questions makes the issue far easier to fix, and it also lets you check next week whether the slowness has come back.
Plenty of performance optimization advice can be found on Google, usually covering topics like garbage collection, SQL query optimization, ORM pitfalls, JIT compilation, and so on. Applying it might look like the obvious way forward, because it all sounds promising. But there is one question you need to answer first: how do you know whether a given optimization will actually pay off in your specific context?
That is the missing piece of the puzzle: measuring performance continuously. With continuous measurement it becomes much easier to see what is causing the slowness, and the data backs up whatever measures you take. Once the information is collected, you can decide whether performance improvements are needed at all, and you can explain that decision to the stakeholders.
Measuring performance is also a useful tool for responding to perceived slowness, and the first thing you may discover is that the problem is not really about slowness at all. How is that possible? Consider an example.
When a load balancer drops the connection after X seconds, it is nearly impossible to tell from the outside whether the request timed out because of a deadlock or simply because of a slow response: both lead to the same result, a timeout. Only by examining the data carefully can you unearth the real issue.
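To make that distinction, the server itself has to record how long each request really took, independently of what the load balancer reports. Below is a minimal sketch (not taken from any particular library; the class name is our own) of an ASP.NET Core middleware that logs the server-side duration and status code of every request:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

// Hypothetical middleware: logs how long each request spent on the server, so a
// load-balancer timeout can be told apart from a genuinely slow (or hung) response.
public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestTimingMiddleware> _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await _next(context);
        }
        finally
        {
            stopwatch.Stop();
            _logger.LogInformation(
                "{Method} {Path} returned {StatusCode} in {ElapsedMs} ms",
                context.Request.Method,
                context.Request.Path,
                context.Response.StatusCode,
                stopwatch.ElapsedMilliseconds);
        }
    }
}

// Registered early in the pipeline, e.g. in Program.cs:
// app.UseMiddleware<RequestTimingMiddleware>();
```

If the log shows the request finishing in 45 seconds while the load balancer cuts the connection at 30, the slowness is real; if the request never finishes at all, a deadlock or hang is the more likely culprit.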
In the sections that follow, we cover the different ways to monitor and optimize ASP.NET. We promise that after reading this piece, you will be in a far better position, as an ASP.NET development company, to deliver ASP.NET development services.
Let’s now turn our attention to some of the possible causes of slowness in a web application:
- Application code (including third-party libraries)
- DNS issues
- HTTP Server (Something from ASP.NET or IIS, for example)
- Network/ISP issues
- Load balancer
- Proxy getting in the way on the user side
- Render-blocking asset loading
- Slow JavaScript
- Subsystems such as SQL Server, Redis, RabbitMQ, etc.
- Switches and routers
- Third-party services like payment processors, map providers, etc.
The list goes on, depending on the complexity and scale you are dealing with. So how do you diagnose a performance issue efficiently when there are so many components involved? The answer is simple: data. You need concrete data that points directly at the heart of the issue.
With that data, you can determine which component is at fault for a slow request. Start from the top and cross off the components that are not responsible as you work your way down; with each step you get closer to the actual issue.
The questions to work through can include:
- Is the problem client-side, server-side, or both?
- Sluggish JavaScript, slow rendering, render-blocking assets?
- The web server, the load balancer, a third-party service, or a subsystem?
As you move downwards, you close in on the root of the problem, and the data lets you match a precisely identified problem with a precise solution. At that stage, tools like SQL query execution plans or performance profilers become essential.
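For example, if the data points at the database layer and the application happens to use Entity Framework Core, a cheap first step before reading execution plans is to log the SQL commands and their durations. A minimal sketch, assuming EF Core 5 or later; `AppDbContext` and `Order` are hypothetical names:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("<connection string>")
            // Writes every executed command, including its elapsed time, to the console.
            .LogTo(Console.WriteLine, LogLevel.Information);
    }
}
```

The slowest commands that show up in that log are the natural candidates to run through an execution plan viewer or a full profiler.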
To make sure your time is spent where it matters, it is worth keeping Amdahl’s law in mind:
“Regardless of the magnitude of improvement, the theoretical speedup of a task is always limited by the part of the task that cannot benefit from the improvement.”
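Put as a formula: if a fraction p of the total time is affected by an optimization that makes that part s times faster, the overall speedup is 1 / ((1 − p) + p / s). A small sketch with purely illustrative numbers:

```csharp
using System;

// Amdahl's law: overall speedup = 1 / ((1 - p) + p / s)
// p = fraction of the request time affected by the optimization
// s = speedup factor applied to that fraction
static double AmdahlSpeedup(double p, double s) => 1.0 / ((1.0 - p) + p / s);

// Illustrative: the database accounts for 80% of a request's time,
// and we make that part 10x faster => roughly 3.6x overall.
double dbHeavy = AmdahlSpeedup(p: 0.8, s: 10);

// The same 10x improvement applied to something that is only 5% of the
// request barely moves the needle (about 1.05x overall).
double marginal = AmdahlSpeedup(p: 0.05, s: 10);

Console.WriteLine($"{dbHeavy:F2}x vs {marginal:F2}x"); // 3.57x vs 1.05x
```

That is exactly why the data has to tell you where the time goes before you decide what to optimize.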
Let’s now turn our attention to infrastructure problems.
A top-down approach is very useful for pinpointing an issue precisely, but it only works when the underlying problem is localized to a single page. What happens when the issue spans multiple pages, for example when several pages experience intermittent slow response times? That can be the result of a subsystem undergoing maintenance or a network switch rebooting rather than anything in the pages themselves.
This is where application-focused monitoring stops helping. At that point you need other metrics to check the health of every component in the system, at both the software and the hardware level.
At the hardware level, the web and database servers are the first machines that come to mind, but that is just the beginning. It is essential to identify and monitor every hardware component: network switches, routers, firewalls, the SAN, the servers themselves, the load balancer, and so on.
This practice may seem ordinary to a system administrator, since hardware monitoring is commonly practiced. However, those hardware metrics turn out to be mostly useless for performance work when they are looked at in isolation from the application metrics. In simple terms, a metric only means something in context.
For example, an average of 50% CPU usage on a database server may be perfectly normal in some circumstances and a ticking time bomb in others. At peak times, 50% CPU usage means there is still headroom to absorb more traffic; if the same 50% shows up regularly during idle periods, it means the application may not survive a sudden surge of incoming requests.
To keep the system healthy, it is important to connect system-wide metrics like CPU, memory, and disk to application metrics. Understanding an application metric such as request throughput alongside a system metric such as CPU usage gives a much more complete picture of the health of the system.
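One way to get an application metric you can lay next to the CPU graph is to publish a request counter from the application itself. A minimal sketch, assuming .NET 6+ and the System.Diagnostics.Metrics API; the meter, counter, and class names are our own:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// A process-wide meter; collectors such as dotnet-counters, OpenTelemetry, or an
// APM agent can subscribe to it and ship the values alongside the host metrics.
public static class AppMetrics
{
    private static readonly Meter Meter = new("MyCompany.MyApp", "1.0");

    public static readonly Counter<long> Requests =
        Meter.CreateCounter<long>("myapp.requests", description: "Completed HTTP requests");
}

// Hypothetical middleware that counts every completed request, tagged with its status code.
public class ThroughputMiddleware
{
    private readonly RequestDelegate _next;

    public ThroughputMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        await _next(context);
        AppMetrics.Requests.Add(1,
            new KeyValuePair<string, object?>("status_code", context.Response.StatusCode));
    }
}
```

Plotting requests per second next to CPU usage then answers the question above at a glance: 50% CPU under heavy throughput means headroom, while 50% CPU with almost no throughput means trouble.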
Now, it is time to look at certain application performance management tools.
Application Performance Management (APM) Tools
APM tools cover the primary operations of data collection, storage, and visualization. An agent is responsible for gathering the data and sending it to a data store, and a web interface then presents that data through dashboards centred on web requests.
With the help of APM tools, you can:
- Visualize the performance of the web application as a whole;
- Visualize the performance of a specific web request;
- Automatically send alerts when the web application performs below expectations or produces too many errors;
- Find out how the application behaves during high-traffic periods.
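Getting started with one of these tools is usually a small change in the application. For instance, with Application Insights (one of the products listed below), the setup in an ASP.NET Core application is roughly as follows, assuming the Microsoft.ApplicationInsights.AspNetCore NuGet package and a connection string supplied through configuration:

```csharp
// Program.cs
var builder = WebApplication.CreateBuilder(args);

// Registers the agent side of the APM tool: request, dependency and exception
// telemetry is collected automatically and sent to the Application Insights data store.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello, world!");
app.Run();
```

The dashboards, per-request views, and alerts described above then come from the tool's web interface rather than from anything you build yourself.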
Let’s now look at a non-exhaustive list of APM tools with native support for ASP.NET and IIS:
New Relic APM
Application Insights
AppDynamics
Stackify
Infrastructure Monitoring Tools
To provide a comprehensive picture, infrastructure monitoring tools gather metrics at the host level, covering both the hardware and the software side.
Datadog
Opserver (open source)
Lightweight Profilers
Lightweight profilers provide high-level metrics for a particular web request and give developers prompt feedback as they browse web pages. They can be used in any environment: development, QA, staging, even production. This makes it possible to evaluate the performance of a particular page very quickly.
The difference between lightweight profilers and full profilers is that they do not attach to the process, so they can be left running without worrying about the overhead they generate.
From a development perspective, lightweight profilers give instant feedback on the code being worked on, making it easy to spot issues like a slow response time, since the timings are displayed right in the corner of the page.
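MiniProfiler is probably the best-known lightweight profiler in the ASP.NET world. A minimal sketch of wiring it into an ASP.NET Core application, assuming the MiniProfiler.AspNetCore.Mvc NuGet package; the route value is just an example:

```csharp
// Program.cs
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

// Register MiniProfiler; its results UI will be served under /profiler.
builder.Services.AddMiniProfiler(options =>
{
    options.RouteBasePath = "/profiler";
});

var app = builder.Build();

// The middleware should run early so it can time the rest of the pipeline.
app.UseMiniProfiler();

app.MapDefaultControllerRoute();
app.Run();
```

Adding `@addTagHelper *, MiniProfiler.AspNetCore.Mvc` and a `<mini-profiler />` element to the layout view then renders the timing widget in the corner of every page, which is where the response time mentioned above shows up.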
Concluding Thoughts
The number of factors and tools involved in the performance of a system can seem overwhelming, but one word holds the answer: data. A clear, concise view of the system at a given point in time lets you work out why it is performing the way it is. It also enables just-in-time learning, where performance metrics and charts help you pinpoint exactly what is impacting the system.