A slow website is perhaps the most common problem every website administrator and developer runs into. If they are extremely unlucky, they see this problem only in their production environment. Many troubleshooting techniques and best practices are available for this scenario. I will try to cover them in a different post as part of my ASP.NET Troubleshooting series some other time. Meanwhile, you can try looking at this post of mine, which has something that might help you.
For now, let’s focus on Windows Azure Web Sites. As you know, this is a closed (well, not completely) hosting environment, and still there are a few things you can do about this problem – for example, you can try collecting FREB traces for a long-running request and see where it is stuck. FREB shows ASP.NET ETW events as well, but only the page lifecycle events. For example, it will tell you that the problem is in Page_Load, but not what inside Page_Load is slow. To find out more, you either have to profile your application, or collect a memory dump of the process serving the request and see what the request has been doing for so long.
In this post, I’ll put down the steps to enable automatic collection of a memory dump whenever request processing exceeds ‘x’ seconds. This uses the same customAction support for FREB which I’ve detailed in this old post of mine. In WAWS, the customActionsEnabled attribute for the website is set to “true” by default, so you just have to put in the below web.config file. In this example, I’m going to use Windows Sysinternals procdump.exe to take the dump of our worker process (w3wp.exe). Here are the steps:
Enable ‘Failed Request Tracing’ from the Portal
First, you need to turn on FREB from your management portal. This article has brief steps on how to view those logs from Visual Studio, and even how to configure this from there. From the portal, for your website, under the configure tab -> site diagnostics, set ‘Failed Request Tracing’ to On.
Transfer Procdump.exe to your deployment folder using FTP
Second, you need to put procdump.exe in your website deployment folder. Download it to your local machine from here. You can create a new folder and place it in there; that folder can serve as the path where the dumps are stored as well. In my example, I’ve created a folder called ‘Diagnostics’ under the root, and placed procdump.exe in there. Screenshot of my FileZilla:
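If you prefer scripting the transfer instead of using a GUI client like FileZilla, here is a minimal sketch using Python’s ftplib. The hostname, username, and password below are placeholders for illustration only – replace them with the deployment credentials from your portal. It assumes the FTP root maps to d:\home, so the ‘Diagnostics’ folder is created directly under it.

```python
from ftplib import FTP, error_perm

# Placeholders - substitute your site's FTP host and deployment credentials.
FTP_HOST = "waws-prod-xx-000.ftp.azurewebsites.windows.net"  # hypothetical host
FTP_USER = "mysite\\deployuser"                              # hypothetical user
FTP_PASS = "password"                                        # hypothetical password

# The FTP root maps to d:\home on the instance, so this folder
# corresponds to d:\home\Diagnostics.
REMOTE_DIR = "/Diagnostics"

def upload_procdump(local_path="procdump.exe"):
    """Upload procdump.exe into the Diagnostics folder over FTP."""
    ftp = FTP(FTP_HOST)
    ftp.login(FTP_USER, FTP_PASS)
    try:
        ftp.mkd(REMOTE_DIR)      # create the folder on first run
    except error_perm:
        pass                     # folder already exists
    ftp.cwd(REMOTE_DIR)
    with open(local_path, "rb") as f:
        ftp.storbinary("STOR procdump.exe", f)
    ftp.quit()
```

Calling upload_procdump() with the real credentials gives you the same result as the FileZilla transfer shown above.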
Configure the web.config with configuration to collect dump
Lastly, you need to place the below configuration in the web.config file so that procdump.exe is spawned with certain parameters whenever a request exceeds, in this case, 15 seconds:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <tracing>
      <traceFailedRequests>
        <remove path="*" />
        <add path="*" customActionExe="d:\home\Diagnostics\procdump.exe" customActionParams="-accepteula w3wp d:\home\Diagnostics\w3wp_PID_%1%_" customActionTriggerLimit="5">
          <traceAreas>
            <add provider="ASP" verbosity="Verbose" />
            <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
            <add provider="ISAPI Extension" verbosity="Verbose" />
            <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
          </traceAreas>
          <failureDefinitions timeTaken="00:00:15" />
        </add>
      </traceFailedRequests>
    </tracing>
  </system.webServer>
</configuration>
The above configuration will take a mini dump of the w3wp.exe serving your WAWS site and put it in the folder d:\home\Diagnostics, with the dump name containing its PID. If you want a full dump instead, add the -ma parameter. Example: customActionParams="-accepteula -ma w3wp d:\home\Diagnostics\w3wp_PID_%1%_".
You can use any other switches that you would typically use with ProcDump. For a slow-running page scenario, I might collect dumps at regular intervals – 3 dumps at 5-second intervals – so that we can check what the request is doing across those points in time. For that, set customActionParams to “-accepteula -s 5 -n 3 w3wp d:\home\Diagnostics\w3wp_PID_%1%_”.
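To keep the variants straight, here is a small illustrative Python sketch (not part of the setup itself) that builds the customActionParams string for each of the scenarios discussed above. The dump folder and name prefix follow the example in this post:

```python
# The dump path prefix used throughout this post; %1% is expanded by
# FREB to the PID of the failing w3wp.exe.
DUMP_PREFIX = r"d:\home\Diagnostics\w3wp_PID_%1%_"

def procdump_params(full_dump=False, interval=None, count=None):
    """Build the ProcDump argument string for customActionParams."""
    args = ["-accepteula"]
    if full_dump:
        args.append("-ma")   # full memory dump instead of a mini dump
    if interval and count:
        # Consecutive dumps: 'count' dumps, 'interval' seconds apart.
        args += ["-s", str(interval), "-n", str(count)]
    args += ["w3wp", DUMP_PREFIX]
    return " ".join(args)

# procdump_params()                      -> the mini-dump string above
# procdump_params(full_dump=True)       -> the -ma full-dump variant
# procdump_params(interval=5, count=3)  -> the 3-dumps-every-5-seconds variant
```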
Hope this helps!