How to determine the assembly evidence at runtime

I ran into a problem the other day: one of the .NET applications we had deployed started crashing on some screens for no apparent reason. The application worked fine in the development environment and when run from the C: drive on the test machine.
We tested for binding problems using the SDK tool fuslogvw.exe, but didn't see any binding errors. The other SDK tool we tried was the .NET Framework configuration tool, Mscorcfg.msc: no luck. Since the SDK doesn't ship with Framework 2.0, it was a hassle to run these tests on every single production machine… not to mention that the Configuration tool does not show the run-time evidence for a given assembly.
Maybe the enterprise policy had changed for the different zones and a policy .msi had been pushed without us knowing…
Mscorcfg.msc said otherwise…

This code, plus extra logging in our application, did the trick for determining the evidence passed to the CLR:

private static void LogEvidence()
{
    // Requires: using System.Security.Policy; for Zone, Url, Hash and Site
    Zone myZone;
    Url myURL;
    Hash myHash;
    Site mySite;

    String strEvidence = "";

    log(" ===================== Assembly Evidence: ========================= ");

    // Enumerate the evidence the CLR associated with the running assembly
    foreach (Object myEvidence in System.Reflection.Assembly.GetExecutingAssembly().Evidence)
    {
        strEvidence = myEvidence.GetType().ToString();

        switch (strEvidence)
        {
            case "System.Security.Policy.Zone":
                myZone = (Zone)myEvidence;
                strEvidence += ": " + myZone.SecurityZone.ToString();
                break;
            case "System.Security.Policy.Url":
                myURL = (Url)myEvidence;
                strEvidence += ": " + myURL.Value;
                break;
            case "System.Security.Policy.Hash":
                myHash = (Hash)myEvidence;
                strEvidence += ": " + BitConverter.ToString(myHash.SHA1);
                break;
            case "System.Security.Policy.Site":
                mySite = (Site)myEvidence;
                strEvidence += ": " + mySite.Name;
                break;
            default:
                // Other evidence types: log the type name only
                break;
        }

        log(strEvidence);
    }

    log(" ===================== End of Assembly Evidence: ==================== ");
}
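
On a machine where the assembly runs from the local disk, the log ends up looking something like this (values are illustrative, not from a real run):

===================== Assembly Evidence: =========================
System.Security.Policy.Zone: MyComputer
System.Security.Policy.Url: file:///C:/MyApp/MyApp.EXE
System.Security.Policy.Hash: 5A-03-1C-...-9F
===================== End of Assembly Evidence: ====================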

Encoding troubles, wait, your ANSI file is not the same as my ANSI file

Last week we made a utility for the release team to convert all the T-SQL script files from any encoding to ANSI. Now we convert any encoding to Unicode, but the original request was to use ANSI encoding.

The .NET code we used basically opens the file with a StreamReader that detects the encoding, opens a StreamWriter to a new file with Encoding.Default (now Encoding.Unicode), and writes out the content read by the StreamReader.

The problem started when some developers submitted files saved with ANSI encoding. The tool always detected the encoding as US-ASCII, which has only 7 bits for character representation, while the files had accented letters that were lost in the conversion.

I was blaming StreamReader for not detecting the encoding properly until I found the article quoted below at http://weblogs.asp.net/ahoffman/archive/2004/01/19/60094.aspx

A question posted on the Australian DOTNET Developer Mailing List …

I'm having a character encoding problem that surprises me. In my C# code I have a string "© 2004" (that's a copyright/space/2/0/0/4). When I convert this string to bytes using the ASCIIEncoding.GetBytes method I get (in hex):

3F 20 32 30 30 34

The first character (the copyright) is converted into a literal '?' question mark. I need to get the result 0xA92032303034, which has 0xA9 for the copyright, just as happens when the text is saved in Notepad.

An ASCII encoding provides for 7-bit characters and therefore only supports the first 128 Unicode characters. All characters outside that range display an unknown symbol – typically a “?” (0x3f) or “|” (0x7f) symbol.

That explains the first byte returned using ASCIIEncoding.GetBytes()

> 3F 20 32 30 30 34

What you're trying to achieve is an ANSI encoding of the string. To get an ANSI encoding you need to specify a “code page” which prescribes the characters from 128 on up. For example, the following code will produce the result you expect…

string s = "© 2004";
Encoding targetEncoding = Encoding.GetEncoding(1252);
foreach (byte b in targetEncoding.GetBytes(s))
    Console.Write("{0:x} ", b);

> a9 20 32 30 30 34

1252 is the code page for Western European (Windows), which is probably what you're using (check Encoding.Default.EncodingName). Specifying a different code page, say Simplified Chinese (54936), will produce a different result.

Ideally you should use the code page actually in use on the system as follows…

string s = "© 2004";
Encoding targetEncoding = Encoding.Default;
foreach (byte b in targetEncoding.GetBytes(s))
    Console.Write("{0:x} ", b);

> (can depend on where you are!)

All this is particularly important if your application uses streams to write to disk. Unless care is taken, someone in another country (represented by a different code page) could write text to disk via a Stream within your application and get unexpected results when reading back the text.

In short, always specify an encoding when creating a StreamReader or StreamWriter – for example…
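
A minimal sketch in that spirit (the file path is hypothetical, and the writer is pinned to code page 1252 rather than trusting the machine default):

using System.IO;
using System.Text;

// Write with an explicit ANSI code page instead of relying on Encoding.Default
StreamWriter writer = new StreamWriter(@"C:\temp\sample.txt", false, Encoding.GetEncoding(1252));
writer.WriteLine("© 2004"); // the © round-trips as 0xA9 instead of '?'
writer.Close();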

Our code was initially as follows:

StreamReader SR = new StreamReader(myfile, true);
String Contents = SR.ReadToEnd();
SR.Close();

The StreamReader always detected US-ASCII as the file encoding when the file was saved with ANSI encoding, so the text lost all of its accented characters once it was read. The StreamReader detected the encoding fine when it was anything other than ANSI. This might be due to the different code pages used for the different ANSI encodings…

We changed the code not to rely on the StreamReader's ability to detect the ANSI code page:

Encoding e = GetFileEncoding(myfile);
StreamReader SR = new StreamReader(myfile, e, true);
String Contents = SR.ReadToEnd();
SR.Close();

where GetFileEncoding is the helper published in this post.

Note that in the code above, any ANSI-encoded file defaults to the local ANSI code page. If the file was saved on a machine with an ANSI code page different from the one where the program runs, you might still get unexpected results.
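
For completeness, a minimal BOM-sniffing helper along those lines (a sketch, not the code from the linked post; it falls back to Encoding.Default when no byte order mark is found, which is exactly the limitation described above):

using System.IO;
using System.Text;

public static Encoding GetFileEncoding(string path)
{
    // Read the first bytes of the file and look for a byte order mark
    byte[] bom = new byte[4];
    using (FileStream fs = File.OpenRead(path))
        fs.Read(bom, 0, 4);

    if (bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF)
        return Encoding.UTF8;              // UTF-8
    if (bom[0] == 0xFF && bom[1] == 0xFE)
        return Encoding.Unicode;           // UTF-16, little endian
    if (bom[0] == 0xFE && bom[1] == 0xFF)
        return Encoding.BigEndianUnicode;  // UTF-16, big endian

    // No BOM found: assume the local ANSI code page
    return Encoding.Default;
}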

Compressed snapshot check makes merge replication fail on one subscriber

As this blog holds my personal breadcrumbs, I thought I'd better record this.

We had problems with our merge replication topology (SQL Server 2005 9.0.3228 as Publisher and Distributor, SQL Server Express 9.0.3228 as Subscribers, pull subscriptions) the other day when we switched to compressed snapshots. I posted the problem in the forums and searched the books without luck…

Here's the problem:

We noticed one subscriber downloaded the compressed dynamic snapshot, but was unable to apply it:

The trace at the subscriber showed the following error:

Partitioned snapshot validation failed for this Subscriber. The snapshot validation token stored in the specified partitioned snapshot location does not match the value 'b422311' used by the Merge Agent when evaluating the parameterized filter function. If specifying the location of the partitioned snapshot (using -DynamicSnapshotLocation), you must ensure that the snapshot files in that directory belong to the correct partition or allow the Merge Agent to automatically detect the location.
SQL Merge Agent encountered an error.

The merge agent log with -OutputVerboseLevel 3 showed the following:

2008-06-03 18:07:48.222 [0%] [978 sec remaining] Snapshot will be applied from a compressed cabinet file
2008-06-03 18:07:48.238 OLE DB Distributor 'TORDISTRIBUTOR': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.254 OLE DB Subscriber 'TORSUBSCRIBER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.285 [0%] [978 sec remaining] A dynamic snapshot will be applied from '\\TORDISTRIBUTOR\repldata\unc\TORPUBLISHER_PUBLISHER_PUBLICATION\B422311_1\'
2008-06-03 18:07:48.285 OLE DB Distributor 'TORDISTRIBUTOR': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.300 OLE DB Subscriber 'TORSUBSCRIBER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.316 [0%] [732 sec remaining] Validating dynamic snapshot
2008-06-03 18:07:48.316 OLE DB Distributor 'TORDISTRIBUTOR': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.332 OLE DB Subscriber 'TORSUBSCRIBER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.347 [0%] [732 sec remaining] Extracting snapshot file 'dynsnapvalidation.tok' from cabinet file
2008-06-03 18:07:48.363 OLE DB Distributor 'TORDISTRIBUTOR': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.534 OLE DB Subscriber 'TORSUBSCRIBER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.566 [0%] [732 sec remaining] Extracted file 'dynsnapvalidation.tok'
2008-06-03 18:07:48.566 OLE DB Distributor 'TORDISTRIBUTOR': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.566 OLE DB Subscriber 'TORSUBSCRIBER': sys.sp_MSregisterdynsnapseqno @snapshot_session_token = N'\\TORDISTRIBUTOR\ReplData\unc\TORPUBLISHER_PUBLISHER_PUBLICATION\20080527113713\dynsnap', @dynsnapseqno = 'E69C5BB3-1095-429C-92BC-46747E49A155'
2008-06-03 18:07:48.675 OLE DB Subscriber 'TORSUBSCRIBER': sp_MSreleasesnapshotdeliverysessionlock
2008-06-03 18:07:48.753 The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write.
2008-06-03 18:07:48.753 OLE DB Subscriber 'TORSUBSCRIBER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2008-06-03 18:07:48.862 The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write.

It seems that when we use compressed snapshots and validate the subscriptions using HOST_NAME(), the sync agent performs an extra check: it extracts a token file named dynsnapvalidation.tok from the snapshot .cab file for that partition, and in our case that file contains the HOST_NAME() value.

The HOST_NAME() the subscriber passed to the agent was lower-cased, while the partition, the token file, and the sync agent job all had the HOST_NAME() upper-cased.

Here’s the solution:

We solved the issue by changing the reference data to lower case, deleting the snapshot folder, and deleting the sync agent job at the distributor for that partition. We then dropped the subscription and created it again, which recreated the sync agent job at the distributor with the properly lower-cased HOST_NAME().

We haven’t been able to reproduce it again, but time permitting, we will :-p

I tried to reproduce the scenario using a publisher and subscriber on the same server (SQL Server 2005 Developer, build 9.00.3228) but couldn't reproduce the error. In our real-life scenario we use SQL Server Express as the subscriber and RMO to trigger replication.
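
For reference, this is roughly how we kick off a pull sync via RMO, and a HostName whose case doesn't match the partition is exactly what trips the token check. A minimal sketch (database names are hypothetical; the usual security settings are omitted):

using Microsoft.SqlServer.Replication;

class SyncRunner
{
    static void Main()
    {
        MergeSynchronizationAgent agent = new MergeSynchronizationAgent();
        agent.Publisher = "TORPUBLISHER";
        agent.PublisherDatabase = "PubDb";      // hypothetical
        agent.Publication = "PUBLICATION";
        agent.Distributor = "TORDISTRIBUTOR";
        agent.Subscriber = "TORSUBSCRIBER";
        agent.SubscriberDatabase = "SubDb";     // hypothetical
        agent.SubscriptionType = SubscriptionOption.Pull;

        // The case of this value must match the partition folder and the
        // contents of dynsnapvalidation.tok, or snapshot validation fails
        agent.HostName = "b422311";

        agent.Synchronize();
    }
}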

Web Services Contract First doesn't always guarantee interop

I found this very good article that answered some of the questions from my previous post.
The enlightening part about WSCF is how, despite being less productive than code-first, it might offer better interop in the long run:

What? Code-first is interoperable?

Well sure! You generate schema from code; proxies consume schema and generate a representation of the schema for the caller's platform. Of course the serialized type has to be meaningful to the caller, so a DataSet in .NET, though serializable, makes a horrible candidate for cross-platform interoperability. Aside from this, one of the biggest interoperability issues between platforms has to do with runtime schema support. Schema is a vast standard that has much more capability than many parsers have support for. Thus, interoperability can be an issue whether you begin with schema (contract-first) or generate schema from objects (code-first).

Ok, ok, the article is dated 2005, but don't be too hard on someone who has been doing Windows Forms and CAB for the past year :-p
Designing Service Contracts with WCF

Web Services Contract First???

I know we .NET developers have been spoiled for a while.

We go into VS2005 and create a new web site project, select web service as the project type, and voila, we land on a page where we can start typing new methods for this class and mark them as web methods with a single attribute. VS2005 does all the plumbing for us, creating the WSDL and even publishing the web service with another click.
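
This is all it takes in code-first ASMX land (a minimal sketch; the class and method names are made up):

using System.Web.Services;

[WebService(Namespace = "http://example.com/orders")]
public class OrderService : WebService
{
    // One attribute, and VS2005 generates the WSDL and plumbing for us
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        return "Shipped";
    }
}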

But, is this good enough?

If your enterprise is fully .NET there is no harm, but if you want to interoperate with other platforms, they might not understand what your web service says…

VS2005 implements WS-I Basic Profile 1.1, but I believe you can still use .NET types, such as specialized collections, that won't have a Java counterpart.

WSCF has existed for a long time in the Java world. VS2005 comes with two command-line utilities that can serve the purpose (xsd.exe and wsdl.exe) but, imho, they are hard to use.
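
If you do go contract-first with the SDK tools, the flow looks roughly like this (file names are hypothetical; /serverInterface generates server-side stubs from the WSDL):

xsd.exe OrderTypes.xsd /classes
wsdl.exe /serverInterface OrderService.wsdl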

You can always ask your manager for a license of Altova XMLSpy…
Or go for this free tool made by thinktecture:

WSCF

The tool is good if you define all of your types in a single .xsd file. If you have several files with your types and you import types from one namespace into one of your schema files, you might have trouble…

Also, the plumbing this utility generates is only for SOAP web services, not for REST web services.

I talked to the Java guru in the office and he mentioned that WSCF was long gone from the Java world: with Eclipse, he only had to define his business payload and the plumbing was done for him on whatever transport protocol he chose… So they sort of define the payload first, not exactly the contract first…

That triggered my curiosity. I wonder if there is anything like it in VS2008 with WCF…

If you know and can light the path, please comment. I'm planning to play with this in the next few hours.

Happy coding!

Getting spam from Gmail

Today I received a phishing email from an account in Google.

What is sad is that Google wants to protect its clients so badly that there is no trace of the sender's IP in the email headers. All you see is Google's IP.

This, imho, is not right. I want to see the sender; if the sender is so concerned about his/her IP being exposed, he/she surely knows enough to hide it. This is not Google's job…

You certainly don’t want to receive unsigned letters…
Yes, my anti-spam filter caught the message, but I want to see who sent it.

Here’s Google’s disclaimer:

User IP addresses

Protecting our users’ privacy is something we take very seriously. Personal information, including someone’s exact location, can be gathered from someone’s IP address, so Gmail doesn’t reveal this information in outgoing mail headers. This prevents recipients from being able to track our users, or uncover what may be potentially sensitive personal information.

Don’t worry — we aren’t enabling spammers to abuse the system by not revealing IP addresses. Gmail uses many innovative spam filtering mechanisms to ensure that spammers have a difficult time sending bulk emails that arrive in users’ inboxes.

What happens when their innovative system fails?

Yes, I filled out the form to report the spam, but I got the Hollywood answer, don’t call us, we’ll call you…

Sure…

Which AD group are you in?

In a Windows-based enterprise with Active Directory, it is very common to use AD groups to enforce access lists/authentication within enterprise applications, services, databases, and mashups.

If you're lost as to why you cannot access some application you want to take a look at, and that application uses integrated security, you're better off knowing which AD groups your username is associated with…

At the OS command prompt, type in this handy command:

NET USER “username” /DOMAIN

where “username” should be replaced with your Windows login.
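
If you want the same answer programmatically, here is a minimal C# sketch using System.DirectoryServices (assumes a domain-joined machine; the account name comes from the command line):

using System;
using System.DirectoryServices;

class AdGroups
{
    static void Main(string[] args)
    {
        // args[0] is the sAMAccountName to look up (hypothetical usage: AdGroups.exe jdoe)
        DirectorySearcher searcher = new DirectorySearcher();
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + args[0] + "))";
        searcher.PropertiesToLoad.Add("memberOf");

        SearchResult result = searcher.FindOne();
        if (result != null)
            foreach (object group in result.Properties["memberOf"])
                Console.WriteLine(group); // distinguished name of each AD group
    }
}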

Handy links if you like to compare

VB.NET and C# Comparison

Java (J2SE 5.0) and C# Comparison

C# vs Ruby Smackdown!

log4net vs EL 1.0

SQL Server 2000 vs Oracle 9i

A Comparison of PL/SQL and Transact SQL

and last but not least:

Comparison of SOA Suites

Note there is an ESB implementation from P&P for BizTalk 2006 R2 that sort of fills in functionality missing from the product: ESB Guidance for BizTalk Server 2006 R2

Ok, there’s nothing new here, just a personal cheat sheet :-p

Tips and tricks for applying SQL Server 2005 hotfixes

Tips and tricks:

  • You can install any hotfix in silent mode by passing the /quiet parameter to the executable at the command prompt or in a batch file (see the sketch after this list). This is extremely helpful if you want to push the hotfix installation with a third-party tool and without the wizard interface
  • The /? parameter will list the rest of the options for installing the hotfix. A very useful option is /allinstances
  • You might wonder why you would run a hotfix unattended. It might not make sense on a stand-alone server, but it does when you have a few dozen remote subscribers and/or you deploy SQLE as part of a SmartClient application. Your batch script can retrieve the installation log afterwards.
  • You cannot roll back a hotfix or a SQL Server service pack. The only option is to reinstall the SQL Server instance. Plan your testing very carefully…
  • If you have a virtual drive on the box where you're applying the hotfix, beware that the installer will try to unzip its files onto it. If that virtual drive is read-only or used for other purposes you might have trouble. To determine whether there is a virtual drive on your box, run the subst command. You can always delete the virtual drive, apply the hotfix (it will unzip on C: or whatever drive holds the Windows installation, as expected), and put the virtual drive back after the hotfix is applied.
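
A hypothetical batch wrapper tying these together (the KB number is made up; /quiet and /allinstances are the switches described above):

@echo off
rem Apply the hotfix silently to every SQL Server instance on this box
SQLServer2005-KB000000-x86-ENU.exe /quiet /allinstances
rem Setup logs typically land under
rem   "%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix"
rem Copy them somewhere central so the push tool can report back afterwards.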

Hope this helps and happy patching!