Java Initialization blocks…why should we ever use them?

Coming from the C# world, I’m trying to get a grasp on the Java language, mostly for comparison purposes and to see if the grass really is greener on the other side of the fence.

What I like so far in Java 1.6 that C# might benefit from:
1. The main one is cross-platform portability. I wish Mono had more support.
2. Enums are richer (full-blown classes, really), but I wonder if we’d ever really use that extra complexity… (see the sketch after this list)
3. Static imports, but do they really add to maintainability, or do they hurt it?
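
Since item 2 deserves more than a one-liner, here’s a toy sketch of mine (the classic planets example, not code from any project of ours) showing a Java enum carrying fields, a constructor and behavior, with a static import thrown in:

import static java.lang.Math.max; // static import: call max() without the Math. prefix

// A Java enum is a full class: constants can carry state and behavior.
enum Planet {
    MERCURY(3.303e23, 2.4397e6),
    EARTH(5.976e24, 6.37814e6);

    private final double mass;   // kg
    private final double radius; // m

    Planet(double mass, double radius) {
        this.mass = mass;
        this.radius = radius;
    }

    double surfaceGravity() {
        return 6.67300e-11 * mass / (radius * radius);
    }
}

public class EnumDemo {
    public static void main(String[] args) {
        double strongest = 0;
        for (Planet p : Planet.values()) {
            System.out.printf("%s: %.2f m/s^2%n", p, p.surfaceGravity());
            strongest = max(strongest, p.surfaceGravity()); // the static import at work
        }
        System.out.printf("strongest: %.2f m/s^2%n", strongest);
    }
}

Whether that beats a plain C# enum plus a helper class is exactly the question: the power is there, but I’m not sure how often day-to-day code needs it.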

What I find confusing in Java vs C#:

1. Checked exceptions: in Java, the exceptions a method throws must be declared and are part of the method’s public interface. They were left out of C# on purpose. Here’s an interview of Anders Hejlsberg (lead C# architect and creator of Turbo Pascal; I should point out I wrote my first program in Turbo Pascal :-p) by Bruce Eckel (author of Thinking in C++)…
2. Initialization blocks; not the static ones, which are similar to C# static constructors, but instance initialization blocks… why would you ever use them? I’m still puzzled… (both items are illustrated in the sketch below)
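
To partially answer my own question, here’s a toy sketch of mine (no canonical source, just an illustration) of both oddities. The instance initialization block runs before the body of every constructor, so it’s a home for setup shared by all constructors; and since anonymous classes can’t declare constructors, an initializer block is the only way to run setup code inside one (the so-called double-brace initialization):

import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;

public class JavaOddities {

    // Checked exception: the throws clause is part of the method's public
    // interface, and callers must either catch it or declare it themselves.
    static void open(String path) throws FileNotFoundException {
        throw new FileNotFoundException(path);
    }

    private final List<String> entries = new ArrayList<String>();

    // Instance initialization block: runs before the body of EVERY
    // constructor, so shared setup lives in one place.
    {
        entries.add("default");
    }

    JavaOddities() { }

    JavaOddities(String extra) {
        entries.add(extra);
    }

    public static void main(String[] args) {
        System.out.println(new JavaOddities("extra").entries); // [default, extra]

        // Anonymous classes can't declare constructors, so an initializer
        // block is the only way to run setup code in them.
        List<String> quick = new ArrayList<String>() {{
            add("a");
            add("b");
        }};
        System.out.println(quick); // [a, b]

        try {
            open("missing.txt");
        } catch (FileNotFoundException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}

Still, most of what the instance block does could live in a private init() method called from each constructor (final fields aside), which is probably why a C# developer never misses it.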

Re Microsoft’s Entity Framework: persistence ignorance is one bliss I won’t give up on

I came across this article

Why use the Entity Framework? Yeah, why exactly?

refuting a marketing-like article by one of the EF team members.

After using NHibernate for a while and looking into JPA and the old JDO in the Java world, I don’t think I’ll get my hands on EF any time soon, if I can avoid it.

Why? The main reasons are summarized on this site:

ADO .NET Entity Framework Vote of No Confidence

I will mention the most compelling ones to me:

  1. Main focus on the data aspect
  2. Lack of lazy loading, hydration, dirty flagging
  3. Lack of persistence ignorance… The tight coupling of the persistence infrastructure to the entity classes largely eliminates the ability to efficiently use very tight feedback cycles on the business logic with automated testing.

As Peter Ritchie summarizes in his comment:
…if I’m a traditional TDD methodologist (for lack of a better term) and I’m building up my code base with a test-first mentality, then it’s all about the code. The automated tests are used as documentation of things like requirements, user stories, etc. Agile folk try to avoid documents like conceptual models; our conceptual model is the code, it’s our classes. I don’t need another conceptual modeler, and I don’t need to have a modeler create new classes for me or modify my classes; the classes I’ve defined for my application do exactly what they need to do.

As we build up our classes to reflect the domain as we know it now, we eventually want to add the ability to persist those objects to a store of some sort. It’s at that point we begin to think of OR/M. But we want to keep that persistence separate from our abstractions, keeping true to the single responsibility principle and separation of concerns. All the trappings of persistence are abstracted somewhere else.
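
To make that concrete, here’s a minimal sketch of mine (in Java, since JPA and JDO came up earlier; all the names are invented for illustration) of what persistence ignorance looks like: the domain class is a plain object with no framework base class or mapping attributes, and persistence hides behind an interface that an ORM implements elsewhere:

// Customer.java: a plain domain object, unit-testable with no database in sight.
public class Customer {
    private final String name;
    private boolean active = true;

    public Customer(String name) { this.name = name; }

    public String getName() { return name; }
    public boolean isActive() { return active; }

    // Business behavior lives here, free of persistence concerns.
    public void deactivate() { active = false; }
}

// CustomerRepository.java: persistence abstracted behind an interface,
// implemented by Hibernate, JPA or whatever, somewhere else entirely.
public interface CustomerRepository {
    Customer findByName(String name);
    void save(Customer customer);
}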

Just my opinionated opinion… I have a gut feeling EF is sending ADO.NET back to the 1.1 days of Typed DataSets, which were mere replicas of the database schema…

Ignorance is bliss, and in this case persistence ignorance is one bliss I won’t give up on.

RIP Geocities

I had my first free sites posted there back in the late 90s…

This is how the web used to look back then :-p

Debunking the duct tape programmer

The nasty truth about misapplying duct-tape solutions in serious software development is that the duct-tape solution ends up creating unnecessary additional complexity, because it addresses only the symptoms, not the whole problem. This isn’t unique to software development, but when duct-tape solutions are used to achieve short-term gains, future solutions get built on a foundation of duct tape instead of on some sound organizational method.

For more, read the Debunking the duct tape programmer discussion.

It is the job of the architects and team leads to ensure that solutions are not addressed with a duct-tape/patch programming approach, and that there is design and long-term planning associated with every project.

I’ve seen too many solutions already that fail on deployment due to data center constraints the developer was not aware of. Where were the architects there?

Have fun while learning, are you certifiable?

I got this ad from a marketing person. I wasn’t going to post it, as I’m not very fond of sales and marketing people.
I tried the game and found it cute; it’s similar to the contest MSDN Canada launched a few years ago, the Last Developer Standing. I was eliminated quite fast on that one :-p

If you like games, you might enjoy this one.

Check out the game. Enjoy!

Argh! The telnet client is disabled by default in Vista

I’m trying to fix my dad’s website after his gallery suffered a SQL injection attack and his database is no longer with us. I needed to check whether port 2077 was open on the server so I could map a folder as a drive in Windows Explorer, after a few problems with my FTP connection timing out.
To make a long story short, here is how to enable the telnet client on Vista:
1. Go to Control Panel.
2. Go to Programs.
3. Go to Programs and Features.
4. Click Turn Windows features on or off.
5. Check the Telnet Client box.
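
Once the client is enabled, checking the port is a one-liner from a command prompt: telnet yourserver 2077 (substitute your actual host name, of course). If the port is open you typically get a blank screen or a service banner; if it isn’t, the attempt just times out with a connection error.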

Encoding troubles, wait, your ANSI file is not the same as my ANSI file

Last week we made a utility for the release team to convert all the T-SQL script files from any encoding to ANSI. Now we convert from any encoding to Unicode, but the original request was to use ANSI encoding.

The .NET code we used basically opens the file with a StreamReader that detects the encoding, opens a StreamWriter to a new file with Encoding.Default (now Encoding.Unicode), and writes out the content read by the StreamReader.

The problem started when some developers submitted files saved with ANSI encoding. The tool always detected the encoding as US-ASCII, which uses only 7 bits per character, while the files had accented letters that were lost in the conversion.

I was blaming StreamReader for not detecting the encoding properly until I found the article below:

A question posted on the Australian DOTNET Developer Mailing List …

I’m having a character encoding problem that surprises me. In my C# code I have the string “© 2004” (that’s a copyright symbol/space/2/0/0/4). When I convert this string to bytes using the ASCIIEncoding.GetBytes method I get (in hex):

3F 20 32 30 30 34

The first character (the copyright symbol) is converted into a literal ‘?’ question mark. I need to get the result A9 20 32 30 30 34, which has 0xA9 for the copyright symbol, just as happens when the text is saved in Notepad.

An ASCII encoding provides for 7-bit characters and therefore only supports the first 128 Unicode characters. All characters outside that range will display as an unknown symbol – typically a “?” (0x3F) or “|” (0x7F) symbol.

That explains the first byte returned using ASCIIEncoding.GetBytes()

> 3F 20 32 30 30 34

What you’re trying to achieve is an ANSI encoding of the string. To get an ANSI encoding you need to specify a “code page”, which prescribes the characters from 128 on up. For example, the following code will produce the result you expect…

string s = "© 2004";
Encoding targetEncoding = Encoding.GetEncoding(1252); // Windows-1252, a specific ANSI code page
foreach (byte b in targetEncoding.GetBytes(s))
    Console.Write("{0:x} ", b);

> a9 20 32 30 30 34

1252 represents the code page for Western European (Windows), which is probably what you’re using (check Encoding.Default.EncodingName). Specifying a different code page, say Simplified Chinese (54936), will produce a different result.

Ideally you should use the code page actually in use on the system as follows…

string s = "© 2004";
Encoding targetEncoding = Encoding.Default; // the ANSI code page of the current system
foreach (byte b in targetEncoding.GetBytes(s))
    Console.Write("{0:x} ", b);

> (can depend on where you are!)

All this is particularly important if your application uses streams to write to disk. Unless care is taken, someone in another country (using a different code page) could write text to disk via a Stream within your application and get unexpected results when reading back the text.

In short, always specify an encoding when creating a StreamReader or StreamWriter – for example…

Our code was initially as follows:

StreamReader SR = new StreamReader(myfile, true); // true = detect encoding from byte order marks
String Contents = SR.ReadToEnd();

The StreamReader always detected US-ASCII as the file encoding when the file was saved with ANSI encoding, so the text lost all of its accented characters once it was read. The StreamReader worked fine in detecting the encoding when the encoding was anything other than ANSI. This might be due to the different code pages used for the different ANSI encodings…

We changed the code not to rely on the StreamReader’s ability to detect the ANSI code page:

Encoding e = GetFileEncoding(myfile); // GetFileEncoding comes from the post linked below
StreamReader SR = new StreamReader(myfile, e, true);
String Contents = SR.ReadToEnd();

where GetFileEncoding is the helper method published on this post.

Note that in the code above, any ANSI-encoded file is assumed to use the local ANSI code page (the default). If the file was saved on a machine with an ANSI code page different from the one where the program is running, you might still get unexpected results.
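
Incidentally, since this blog keeps hopping the fence between C# and Java: the same trap exists on the Java side, where FileReader silently uses the platform default charset, much like Encoding.Default. Here’s a minimal sketch of mine (not our tool, just an illustration) of the equivalent fix, naming the code page explicitly:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadWithCharset {
    public static void main(String[] args) throws IOException {
        // Risky: new FileReader(args[0]) would use the platform default
        // charset, which varies from machine to machine.
        // Safer: name the encoding explicitly, e.g. windows-1252.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(args[0]), "windows-1252"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            in.close();
        }
    }
}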

Yay!!!! The team is in the TechNet Innovation Awards 2008

Hi, developer wandering the internet searching for the solution to your bug: take a brief moment and vote for us, a bunch of developers who also wander the internet searching for solutions to our bugs and blog about them to help others :)

Microsoft Canada and TechNet Innovation Code Awards

We’ll be good and post more interesting, good stuff on our blog :-p
Kidding, let the code prevail!

Oh, we’re the Tablet PC team; good stuff with SQL Server 2005 and Smart Clients 😉