Rough day and conditional server side includes

My day began with heavy rain. I don’t like driving in the rain, mostly because I need better windshield wipers. While I was driving to work, a car hit me from behind. It left only a few scratches, but it put me in a bad mood.
I got to work and the morning passed without major events, just coding and reviewing.
The whole team was invited out to lunch for our good performance, so we decided to go for Japanese food.
When I read the menu I saw that most of the dishes were labeled “Entrees”, except the Specials. Coming from a Latin culture (and read Latin as in the official language of the Roman Empire, not as in Latin America), I thought: OK, entrees, that’s French and means entrance, or as in Spanish, entrante… so let’s order two and that’s it, because I’m not too hungry. Oh well, in Anglo-Saxon culture an entree is a main course dish, and the waitress kept checking for an hour that I had really ordered two entrees, and I kept replying that yes, I wasn’t too hungry… LOL

Anyways… my main problem today was broken code.
I think I’ve blogged before about conditional includes in classic ASP.
This morning I had to add some code for my part of the project in a common file that is shared among several sites.

The code looked like this:

<% Select Case myReport
Case "Report1": %> <!--#include virtual="Report1.asp" -->
<% Case "Lizis_Report": %> <!--#include virtual="Lizis_Report.asp" -->
<% End Select %>

It was an innocent extra case in a conditional statement. The thing is, conditional includes don’t work in ASP, and I should have known better, as I had read this article on 4GuysFromRolla.

My innocent Lizis_Report.asp had functions with the same names and signatures as the functions in Report1.asp, so the original ones got overwritten. The reason: IIS processes the server-side includes before it processes any ASP code, so both files get included no matter what the conditional says.
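
One workaround is Server.Execute, which runs the child page at ASP execution time instead of relying on SSI. A minimal sketch with the same report names as above (note that, unlike an include, the executed page doesn’t share the caller’s local variables):

<% Select Case myReport
Case "Report1"
    ' Runs only when this branch is actually taken,
    ' because Server.Execute happens at ASP runtime
    Server.Execute "Report1.asp"
Case "Lizis_Report"
    Server.Execute "Lizis_Report.asp"
End Select %>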

Once home, I decided to run a small test in PHP with Apache.
I created a mytest.php with the following content:

<?php if (false) { ?> <!--#include virtual="file1.shtml" --> <?php } else { ?> <!--#include virtual="file2.shtml" --> <?php } ?>


After running it, the only thing sent to the browser is the directive from the branch that actually executed:
<!--#include virtual="file2.shtml" -->

So with Apache and PHP the conditional decides which directive is emitted at all, while IIS just drags every include in up front…
I’d better go swimming for a while to burn off the two “Entrees” I ate today 😛

See you later!

ASP and ASP.NET

I’m currently working on branching an ASP project, hopefully not for long. It took me some time to readjust to its programming style, and I’m longing to go back to .NET.
This post is not about how to convince the stockholders that migrating a project saves money in the long run and makes branching a lot easier. It’s not about how adding branches to old, hard-to-maintain code increases costs: it might bring in money right away when you sell that branch, in direct proportion, but it increases maintenance costs exponentially. I don’t know how long it would take you, humble programmer, to explain the difference between exponential growth and direct proportion.

As usual, I would like to summarize why I moved away from ASP six years ago, and why I look forward to moving away from it today, the farther the better.

  1. Classic ASP is hard to debug and maintain, probably because of its own programming model: the mixture of ASP code and HTML in a single .asp page, the lack of a well-defined object-oriented hierarchy, et cetera.
  2. Everything is interpreted script; to increase performance, the best bet is to move part of the logic or functionality into COM components.
  3. COM components must be registered on the server, which brings in the well-known DLL hell and adds deployment complexity. It also makes debugging more complex: you cannot debug locally unless you have the COM components registered on your machine, and you have to have the exact same versions of the components in the development and production environments. This rules out shared web hosting.
  4. VBScript is not strongly typed. Syntax errors are usually discovered when the request to your script returns an HTTP 500 error. There is no way to discover any error at compilation time, because there is no compilation time :-p.
  5. In order to debug your script, you’re better off writing Response.Writes to the screen or resorting to guerrilla techniques (a small sketch follows this list).
    Definition of the guerrilla technique: select the whole content of your script, cut it, and paste it into an open Notepad window. Type in a Response.Write("I’m Here. I made it into MyScript.asp"), save the page to your IIS box, and request it. Then start copying and pasting back small chunks of the original code at a time until you no longer see the “I’m Here…”, so you can pin down the failing line of code.
  6. Classic ASP is not event-driven programming; if you want event-driven behavior, you’d better get a JavaScript book or bookmark a good online reference. It’s true that you should know JavaScript no matter what: some .NET programmers abuse the number of trips to the server that could be saved with some client-side logic.
  7. In order to have a nice grid that you can reuse in your daily reports, you have to write humongous amounts of code, even more if the grid needs pagination or sorting.
  8. Option Explicit is off by default in classic ASP, oh no! (Note: in VB.NET it is On by default, since VS 2003 if I remember correctly.)
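
For the record, here is roughly what the guerrilla technique from item 5 looks like in practice (MyScript.asp and the marker text are made up):

<%
' Guerrilla debugging: drop a breadcrumb, then paste the original
' code back in small chunks until the breadcrumb stops showing up.
Response.Write "I'm Here. I made it into MyScript.asp<br />"

' ...paste the next chunk of the original script here...

Response.Write "Still alive after this chunk<br />"
%>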

I’m sure some people will point out some advantages. I can hardly think of two:

  1. When you set a form’s action attribute in HTML to a certain script, it actually requests that script on the server; it doesn’t post back to the same page that contains the form. OK, OK, without removing that feature, a Page.IsPostBack property wouldn’t be possible in .NET (a snippet follows this list). Have you ever tried that in .NET? I mean setting the action attribute, not Page.IsPostBack, dummy.
  2. Some coworkers have said the On Error style of error handling is better than Try…Catch because it lets you Resume Next. To be honest, I’m quite fond of doing proper exception handling, so no thanks.
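
And to be fair, the postback model you get in exchange is handy. A tiny sketch of the usual .NET idiom in a Web Form’s VB.NET code-behind (BindGrid is a made-up helper):

Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    If Not Page.IsPostBack Then
        ' Runs only on the first request, not when the form posts back
        BindGrid()
    End If
End Sub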

How to design a database so the developer who inherits it won’t run away to another job

This is exactly the feeling I got when I took a peek at the database I’m working with.
I thought, oh gosh, I made a mistake when I chose this job… then I thought, well, look at the positive side:
there is lots of room for improvement here. To be honest, I can’t wait to do some database refactoring, if they let my hands into it.

Whenever you mention doing database refactoring, you can’t help noticing the goose bumps on some project leaders.
It is like uttering a curse; we are all so afraid of Murphy’s Law and the main engineering principle,
“If it’s working, don’t touch it!”, that we endure whatever we have for the sake of safety.

As usual I will keep this on my blog as a reminder for future use.
It’s taken from Scott Ambler’s site http://www.agiledata.org/essays/databaseRefactoringSmells.html

1. Multi-purpose column. If a column is being used for several purposes, it is very likely that extra code exists to ensure that the source data is being used the “right way”, often by checking the values of one or more other columns (a tiny sketch of this follows the list). An example is a column used to store either someone’s birth date if they’re a customer or their start date if they’re an employee. Worse yet, you are very likely constrained in the functionality that you can now support; for example, how would you store the birth date of an employee?
2. Multi-purpose table. Similarly, when a table is being used to store several types of entities there is likely a design flaw. An example would be a generic Customer table that is used to store information about both people and corporations. The problem with this approach is that data structures for people and corporations are different – people have a first, middle, and last name for example whereas a corporation simply has a legal name. A generic Customer table would have columns which are NULL for some kinds of customers but not others.
3. Redundant data. Redundant data is one of many serious problems in operational databases, because when data is stored in several places the opportunity for inconsistency occurs. For example, it is quite common to discover that customer information is stored in many different places within your organization; in fact, many companies are unable to put together an accurate list of who their customers actually are. The problem is that in one table John Smith lives at 123 Main Street and in another table at 456 Elm Street. In this case this is actually one person who used to live at 123 Main Street but who moved last year; unfortunately John didn’t submit two change-of-address forms to your company, one for each application which knew about him.
4. Tables with many columns. When a table has many columns it is indicative that the table lacks cohesion, which means it’s trying to store data for several entities. Perhaps your Customer table contains columns to store three different addresses (shipping, billing, seasonal) or several phone numbers (home, work, cell, …). You likely need to normalize this structure by adding Address and PhoneNumber tables.
5. Tables with many rows. Large tables are indicative of performance problems; for example, it’s very time-consuming to search a table with millions of rows. You may want to split the table vertically by moving some columns into another table, or split it horizontally by moving some rows into another table. Both strategies reduce the size of the table, potentially improving performance.
6. “Smart” columns. A “smart column” is one in which different positions within the data represent different concepts. For example, if the first four digits of the client ID indicate the client’s home branch, then client ID is a smart column because you can parse it to discover more granular information (e.g. home branch ID). Another example includes a text column used to store XML data structures; clearly you can parse the XML data structure for smaller data fields. Smart columns often need to be reorganized into their constituent data fields at some point so that the database can easily deal with them as separate elements.
7. Fear of change. If you’re afraid to change your database schema because you’re afraid to break something, for example the fifty applications which access it, then that’s the surest sign that you need to refactor your schema. Fear of change is a very good indication that you have a serious technical risk on your hands, one that will only get worse over time. My advice is to embrace change.
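
To make smell #1 concrete, this is the kind of extra code it breeds. A hypothetical classic-ASP fragment (PersonType and SomeDate are invented column names, rs is an already-open recordset):

<%
' The same column means different things depending on another column:
If rs("PersonType") = "Customer" Then
    birthDate = rs("SomeDate")   ' for customers it's a birth date
Else
    startDate = rs("SomeDate")   ' for employees it's a start date
End If
%>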

INETA conference and a disabled network card => big GOTCHA

Hi again. Last March 26th I flew to Saskatoon to give the first of two talks for the INETA Launch. This event is part of the Microsoft Ready to Launch Tour. I had split the content into two talks: one about developing the web application, and the other about the smart client and a bit of the data platform.
As you can imagine, I was pretty nervous about this first talk. The first three demos worked fine, but the one about data access failed; I was 10 minutes over the scheduled time and apologized to the audience, telling them I would look into what had happened.
ASP.NET 2.0 makes data binding quite easy; there is no need to write a line of code to pull data out of your SQL Server. Unfortunately, the easiest demo failed.
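
For reference, that kind of demo boils down to this much declarative markup; a sketch with invented names (a DemoDb connection string and a Customers table):

<asp:SqlDataSource ID="CustomersSource" runat="server"
    ConnectionString="<%$ ConnectionStrings:DemoDb %>"
    SelectCommand="SELECT CustomerID, CompanyName FROM Customers" />
<asp:GridView ID="CustomersGrid" runat="server"
    DataSourceID="CustomersSource" AllowPaging="True" />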
When I tried to reproduce the demo at the airport it failed again; it only worked once I was at home and had recovered from the two hours of sleep of the night before…
What the heck happened then? I’m a technician and certainly don’t believe in magic… I checked user permissions, thinking perhaps the account I had used had no rights in the database; I studied ADO.NET 2.0 more deeply, googled the problem, read the reference to the point of exhaustion, et cetera.
Anyways, on my trip to Cuba this weekend I mentioned the issue to one of my ex-coworkers when he asked about the conference. He said: did you have both your Wi-Fi and your Ethernet card disabled? I said yes, I was completely offline while giving the talk… He said: well, then you had no TCP/IP available, and you certainly didn’t have SQL Server Personal Edition installed, or your server configured to use some other protocol…
OH NO! What a gotcha! Damn!
He said: even without an actual network, as long as your network card is enabled and can send packets, it should work…
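
Lesson noted. One way to make such a demo survive a dead network card is to force the shared-memory protocol in the connection string. A hedged sketch in VB.NET (server, instance, and database names are made up; the lpc: prefix requests shared memory, which doesn’t need a live network card):

' Assumes ADO.NET 2.0 (System.Data.SqlClient); all names are hypothetical.
Imports System.Data.SqlClient

Module OfflineDemo
    Sub Main()
        ' lpc: = shared memory; works even with every NIC disabled
        Dim connStr As String = _
            "Data Source=lpc:(local)\SQLEXPRESS;" & _
            "Initial Catalog=DemoDb;Integrated Security=True"
        Using conn As New SqlConnection(connStr)
            conn.Open()
            Console.WriteLine("Connected via shared memory")
        End Using
    End Sub
End Module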
OK, I hope I’m able to pull off the trick this May 6th at the second talk: something like “now it works, now it doesn’t”, with my network cable in one hand 😉 , it’ll be fun…

ByVal and ByRef, coping with old ASP

Another reminder to myself: never say never. When I met .NET I swore to myself I was never going back; well, never say never.

Visual Basic 6.0

In Visual Basic 6.0, if you do not specify ByVal or ByRef for a procedure parameter, the passing mechanism defaults to ByRef. This allows the variable passed into the procedure to be modified in the calling program.
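
An illustrative declaration (my own example, not from the documentation):

' Visual Basic 6.0: with no modifier, total is passed ByRef,
' so the procedure changes the caller's variable.
Sub IncreaseTotal(total As Double, amount As Double)
    total = total + amount
End Sub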

Visual Basic 2005

When you declare a procedure in Visual Basic 2005, the passing mechanism defaults to ByVal for every parameter. This protects arguments against modification. The declaration in the preceding example can be rewritten as follows.
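
Something like this (same made-up example as above; note that if you still want the VB6 behavior, you must now write ByRef explicitly):

' Visual Basic 2005: without a modifier the compiler assumes ByVal,
' so the caller's variable is protected from changes.
Sub IncreaseTotal(ByVal total As Double, ByVal amount As Double)
    total = total + amount   ' changes only the local copy
End Sub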