Thursday, January 08, 2009

Identifying performance bottlenecks on a .NET Windows app. Part II: Using Native Images with CAB, reviewing Fusion logs

We left off in the previous post with a newer version of NHibernate and a different mapping that avoided the byte-per-byte comparison of our byte arrays. However, our application start-up was slower, about 20 seconds, and showing some screens for the first time was taking 10 seconds. Not acceptable.

The gradual performance degradation was gone, but start-up was still not good enough.

We got our hands on ANTS Profiler again to see what was going on whenever we invoked a screen for the first time:

[ANTS Profiler screenshots: CPU usage, jitted bytes per second, and IO bytes read]
From these screenshots we deduced there was quite some Just-In-Time compilation going on when the screen was loaded. How to solve that? Use native images for our assemblies to avoid JIT compilation; see this MSDN article.
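Installing the native images boils down to running ngen install for each assembly. Here is a minimal sketch of how that can be scripted (the console app and the folder argument are illustrative, not our actual deployment script):

using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

class InstallNativeImages
{
    static void Main(string[] args)
    {
        // ngen.exe ships with the runtime, e.g. C:\Windows\Microsoft.NET\Framework\v2.0.50727
        string ngen = Path.Combine(RuntimeEnvironment.GetRuntimeDirectory(), "ngen.exe");
        foreach (string assembly in Directory.GetFiles(args[0], "*.dll"))
        {
            // Equivalent to typing "ngen install <assembly>" at a command prompt
            ProcessStartInfo psi = new ProcessStartInfo(ngen, "install \"" + assembly + "\"");
            psi.UseShellExecute = false;
            using (Process p = Process.Start(psi))
            {
                p.WaitForExit();
            }
        }
    }
}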

All in all, that seemed quite easy to narrow down: we ran NGen, installed the native images and voila! Let's profile again...

I wish it had been that quick; we kept seeing JIT peaks :-O

Alright, let's use some heavier artillery and see why it's still JITting.

This is where we got our hands on the Fusion logs. Fusion is the engine (DLL) in charge of loading and binding assemblies, and the Fusion Log Viewer is the tool for reviewing its logs and troubleshooting loading problems. The tool is part of the SDK, which can be downloaded from here; be aware that it's a heavy download. To use the tool once the SDK is installed:


1. Run Fuslogvw.exe from the folder C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin.
2. If the list shows any entries, click Delete All.
3. Click Settings, choose Log all binds to disk and check Enable custom log path.
4. In the Custom log path edit box, type C:\FusionLog.
5. On the C: drive, create a new folder and name it FusionLog.
6. Run the application and execute the scenarios where you are seeing JIT-ing.
7. Browse to C:\FusionLog and you will see a couple of folders with the bind logs.

We were unable to install the SDK on our production clients, so we ended up editing the registry to collect the logs. If you don't want to install the SDK, do the following (or script it; see the sketch after the list):

1) Open regedit.
2) Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Fusion.
3) Right-click the right pane and choose New -> String Value.
4) Name it LogPath and set its value to C:\MyLog.
5) Right-click the right pane again.
6) Choose New -> DWORD Value and name it ForceLog.
7) Set its value to 1.
8) Create a folder on the C: drive named MyLog (it must match the LogPath value).
9) Run the app and the logs will be created.
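A minimal sketch of the same settings applied from code, in case you want to toggle logging on a client machine without touching regedit by hand (the console app is illustrative; writing to HKLM requires admin rights):

using System.IO;
using Microsoft.Win32;

class EnableFusionLogging
{
    static void Main()
    {
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(@"Software\Microsoft\Fusion"))
        {
            key.SetValue("LogPath", @"C:\MyLog", RegistryValueKind.String);
            key.SetValue("ForceLog", 1, RegistryValueKind.DWord);
        }
        // The target folder must exist or no logs will be written
        Directory.CreateDirectory(@"C:\MyLog");
    }
}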

The logs are created as HTM files in the folder you chose. Reviewing our logs, we found out that one of our main modules wasn't loading from its native image, even though the image was in the native image cache. Why?

Let's give some more background information: we use CAB.

The Composite UI Application Block from Patterns & Practices had its main release in December 2005. There have been other releases for WPF and, most recently, the Prism project, but apart from the Smart Client Software Factory addition, the CAB framework has stayed pretty much the same for Windows Forms.

CAB is known for its Module Loader Service and was warmly welcomed by Windows developers as a framework that allows loose coupling through its event publication/subscription mechanism, its services and its MVP implementation.
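For readers who haven't used CAB, publications and subscriptions are declared with attributes and wired up by the EventBroker at runtime. A rough sketch (the topic name and presenter classes are made up for illustration):

using System;
using Microsoft.Practices.CompositeUI.EventBroker;

public class CustomerListPresenter
{
    // Publisher side: raising this event notifies every subscriber of the topic
    [EventPublication("topic://MyApp/CustomerSelected", PublicationScope.Global)]
    public event EventHandler CustomerSelected;
}

public class CustomerDetailPresenter
{
    // Subscriber side, possibly in a different module; marshaled to the UI thread
    [EventSubscription("topic://MyApp/CustomerSelected", Thread = ThreadOption.UserInterface)]
    public void OnCustomerSelected(object sender, EventArgs e)
    {
        // React to the selection without holding a reference to the publisher
    }
}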

All that is very good for the developer and for maintainability, but performance is not the greatest if you have quite a few publications and subscriptions going on and a few modules loaded at start-up. There are quite a few posts about this on CodePlex's CAB forum.

I could go on and on about the beauty of CAB; despite its performance issues, I do believe it offers more advantages than disadvantages to the Windows developer. IMHO, being able to hand modules to different teams and plug them into the application without any major recompilation, only a configuration change, is a big, big plus. See these posts on CAB's Module Loader Service: CAB Modules on Demand and Dynamically Loading Modules in CAB.

The main reason this module was not loading from its native image is the reflection mechanism currently used in CAB's Module Loader Service (namespace Microsoft.Practices.CompositeUI.Services):
assembly = Assembly.LoadFrom(file.FullName);

More information in Suzanne Cook's blog archives.

CodePlex community member Mariano Converti was quick to offer a solution on his blog:
How To: Use the Ngen tool to improve the performance in CAB / SCSF applications
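The gist of the fix, as a hedged sketch based on his post rather than the exact CAB patch: load by assembly identity, so that Fusion binds in the default load context, where native images are honored, and fall back to LoadFrom only when that fails:

using System.IO;
using System.Reflection;

// Inside the module loader, replacing the plain LoadFrom call:
Assembly assembly;
try
{
    // Load() binds in the default context, so the NGen'ed image can be picked up
    AssemblyName name = AssemblyName.GetAssemblyName(file.FullName);
    assembly = Assembly.Load(name);
}
catch (FileNotFoundException)
{
    // LoadFrom() still works for assemblies outside the probing path,
    // at the cost of bypassing the native image
    assembly = Assembly.LoadFrom(file.FullName);
}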

As of the date of this post, this code change hasn't been incorporated into any CAB release; hopefully it will be soon.

Happy performance troubleshooting!


Monday, January 05, 2009

Identifying performance bottlenecks on a .NET Windows app using Windows Debugging Tools and ANTS Profiler. Part I: NHibernate byte[] types

This is a curious case that led me to discover and use a very valuable tool, ANTS Profiler, and to read a few good blogs about .NET debugging and CLR internals. Read on and bookmark with me.

Near Christmas we received a complaint that one of the Windows applications was performing too slowly after a few hours of usage. Performance Monitor counters indicated the problem lay in high CPU peaks sustained for long periods of time.

.NET memory counters were somewhat fine: no increase in allocated bytes or overall memory consumption, no high IO reads, no high network usage... apparently the application was just doing its stuff, but for a long time, and each time longer...

The first thing that came to mind was an infinite loop. However, the curious part of this case is that the CPU peaks grew longer the more the end user worked with the application, and only became noticeable after a couple of hours: not quite the definition of an infinite loop.

If only we had had a better CPU, the performance degradation would have taken more hours to become noticeable. This was something to be thankful for: a bad CPU meant less time to reproduce the problem. It was also one of those typical production-only problems :-p

Long sustained CPU peaks: here is how we dug down to the cause.

First we grabbed the free debugging tools (insert the obvious reasons here: budget, management approval, etc.): CLRProfiler, WinDbg, SOS and ADPlus.

Two great blog posts about how to start with these tools can be found here (Speaking Of Which) and here (Maoni's blog).

MSDN Magazine also has two good articles (the Bugslayer column and this CLR Inside Out column) on Windows debugging tools and how to use them in VS 2005.


Back to our own experience on the matter: CLRProfiler hung the machine beyond response and, despite being able to sketch the object graph in memory, it was hard to correlate the times of the high CPU peaks with the information obtained from CLRProfiler.

This was not a problem with the tool itself: the hanging was due to poor iron power and our over-consuming application, and the failure to detect the main CPU usage cause was because CLRProfiler is only meant to identify and isolate problems related to garbage collection, excessively long-lived objects or huge collections.

At first we thought the high CPU could be related to garbage collection due to long-lived objects; see this post on Tess's blog, If broken it is, fix it you should.


We collected memory dumps with ADPlus during the high CPU peaks, as per this debugging lab post, and analyzed the memory dumps using WinDbg.

In the end we decided we wanted more control over when the dumps were taken, and used WinDbg attached directly to the process. I should also mention that ADPlus ended up generating corrupted dumps when the system was really stressed.

Instructions to take a dump via WinDbg:
1. Run the application.
2. Open WinDbg. Click File -> Attach To Process -> select the process -> click OK.
3. WinDbg attaches to the process and waits at the command line. Type 'g' and hit Enter; 'g' lets the application run.
4. Whenever you want to take a dump, hit Ctrl+Break in WinDbg and type: .dump /ma C:\Dump1.dmp
5. Type 'g' and hit Enter again for the process to resume.


Once the SOS extension is loaded, WinDbg can give valuable information about the CLR stack at the time the dump was collected (the !clrstack command), the objects scheduled for finalization (!finalizequeue) and how many of them belong to Gen 0, Gen 1 and Gen 2.
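A typical session over one of our dumps looked roughly like this (commands only; output omitted):
1. .loadby sos mscorwks (loads the SOS extension from the same folder as the .NET 2.0 runtime)
2. !threads (lists the managed threads)
3. !clrstack (managed call stack for the current thread)
4. !finalizequeue (objects registered for finalization, broken down by generation)
5. !dumpheap -type System.Byte[] -stat (statistics for the byte arrays on the heap)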

Seeing your managed stack at a single point in time, or having exact information about memory allocation, does not tell you the amount or percentage of CPU time each method takes, though.

We tried taking dumps with WinDbg at the beginning of the CPU peak, in the middle and at the end, but the results only offered a hint: too many collections were allocated and survived to Gen 2, and some of those collections were byte arrays. It wasn't apparent from analyzing the three managed stacks (one per memory dump) which method was consuming the most time.

So far we had lots of collections surviving to Generation 2, some of them byte arrays. Garbage collection counters, however, were within the "normal" range.



If the application was just "busy" doing its stuff, where was this time spent? Data binding? Event brokerage? Database access latency and query performance had already been ruled out with SQL Profiler, btw.

The main cause of the sustained CPU peaks was discovered using ANTS Profiler; memory leaks and long GC cycles had been ruled out using the free tools mentioned above.

ANTS Profiler lets you watch .NET performance counters and attaches itself to the application being profiled. You cannot set breakpoints, AFAIK, but you can go back over the profiler results and drag your cursor across a region to get a full call stack walk. It goes beyond that and indicates the percentage of CPU time each method takes and the percentage its children take.

Finally! A tool that correlates performance counters with the call stack for you and indicates the percentage of CPU time per method. This is information you cannot gather by taking memory snapshots or call stack snapshots; unfortunately, the free tools were only useful to rule out memory leaks and GC-related problems in this particular case. They did narrow down the places to look into.



As you can see from the ANTS Profiler screenshot, the application was indeed doing stuff: in this case, comparing collections of bytes, byte by byte... Ouch!
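To make that concrete, the hot path behaved roughly like this sketch (our reconstruction for illustration, not NHibernate's actual source): on every session flush, each persistent byte[] was compared element by element against its snapshot to decide whether it was dirty:

static bool AreEqual(byte[] snapshot, byte[] current)
{
    if (ReferenceEquals(snapshot, current)) return true;
    if (snapshot == null || current == null) return false;
    if (snapshot.Length != current.Length) return false;
    for (int i = 0; i < snapshot.Length; i++)   // O(n) per array, on every flush
        if (snapshot[i] != current[i]) return false;
    return true;
}

With many large blobs loaded in the session, those linear scans added up, and the more binary data the user touched, the longer each flush took.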

We were able to identify the collection comparison problem: byte[] arrays were being compared whenever the NHibernate session was flushed and persisted, even when they hadn't changed. We correlated this with a fixed NHibernate bug, http://jira.nhibernate.org/browse/NH-1246, and changed our mapping attributes to indicate there was no need to update the BinaryBlob fields; our application either inserts the binary data or deletes it.
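For illustration, with NHibernate.Mapping.Attributes the change amounts to something like the following (the class and column names are made up; the relevant part is Update = false on the blob property):

using NHibernate.Mapping.Attributes;

[Class(Table = "Attachment")]
public class Attachment
{
    private int id;
    private byte[] content;

    [Id(Name = "Id", Column = "AttachmentId")]
    [Generator(1, Class = "native")]
    public virtual int Id
    {
        get { return id; }
        set { id = value; }
    }

    // Update = false: the blob is written on INSERT and removed on DELETE,
    // but never compared or rewritten when the session is flushed
    [Property(Column = "Content", Type = "BinaryBlob", Update = false)]
    public virtual byte[] Content
    {
        get { return content; }
        set { content = value; }
    }
}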


Note: you should be logged into http://jira.nhibernate.org/ before navigating to this bug report; registration is free.

Our NHibernate version and mapping strategy contained the buggy bits...

I hope this post hasn't turned out too long. By upgrading NHibernate we solved the mystery of the performance degradation over time: the more the user worked with persistent binary data in the application, the longer the loop comparing each collection byte by byte took.

Upgrading NHibernate brought a performance challenge in another area: application start-up was taking longer. That will go in Part II, as I should get some sleep.

Sweet dreams!



PS. VSTS 2008 has very promising capabilities for debugging high CPU usage, almost as good as ANTS Profiler; see this post.


Thursday, May 03, 2007

Yet another complaint on the .NET Framework versioning

It just doesn't make sense that, in order to add functionality on top of the 2.0 framework without touching the runtime, yet another framework version is created and deployed to a different folder. It is extremely confusing as to what the implications will be. Let's say I have an existing 2.0 application and I want to take advantage of WinFX to add/implement workflows. Do I first have to worry about how to migrate my current application from 2.0 to 3.0, or are the classes we currently use from the 2.0 framework still valid under 3.0, in addition to the new WinFX pieces? Why not just add a new namespace and a new assembly for this piece, if it depends on the 2.0 framework and won't replace it?
Migrating from 1.0 to 1.1 caused some pain, and migrating from 1.1 to 2.0 even more (a necessary evil, IMHO). Do we have another migration in sight, or will it just be an add-on?
It would be worth having an article published on MSDN about the reasons behind this decision.

WinFX 3.0 Renamed .NET Framework 3.0

....

I found the article on MSDN; it still doesn't explain the versioning convention. As some bloggers have already discussed, having a Framework 3.0 on a CLR 2.0 doesn't make much sense when they themselves state "...because .NET Framework 3.0 is an additive release".
Deploying Microsoft .NET Framework Version 3.0
