Technical Debt – What it is and what to do about it

The following paragraphs in italics are taken from an article published by IASA (the International Association of Software Architects), written by Gene Hughson.

In spite of all that’s been written on the subject of technical debt, it’s still a common occurrence to see it defined as simply “bad code”. Likewise, it’s still common to see the solution offered being “stop writing bad code”.  It’s not that simple.

What is Technical Debt?

A design or construction approach that’s expedient in the short term but that creates a technical context in which the same work will cost more to do later than it would cost to do now (including increased cost over time).

Technical debt may incur costs not only through rework of the original item, but also by making changes that depend on the original item more difficult.

Deferring bug fixes is a form of technical debt, as is deferring the automation of recurring support tasks.

Dependencies can be a source of technical debt, both in terms of the debt they themselves carry and in terms of their fitness for your purpose.

The platform that hosts your application is yet another potential source of technical debt if not maintained.

As noted above, the “interest” on technical debt can manifest as the need for rework and/or more effort in implementing changes over time. This increase in effort can come through the proliferation of code to compensate for the effects of unresolved debt or even just through increased time to comprehend the existing code base prior to attempting a change.

In short, technical debt is any technical deficit that involves a risk of greater cost to expand the application and/or end user dissatisfaction.

What to do about it?

Becoming aware of existing debt is a critical first step, but is insufficient in itself. Taking steps to actively manage the system’s debt portfolio is essential. The first step should be to stop unconsciously taking on new debt. Latent debt tends to fit into the immediate, unexpected payback model mentioned above. Likewise, steps taken to improve the quality up front (unit testing, code review, static analysis, process changes, etc.) should reduce the effort needed for detection and remediation. Architectural and design practices should also be examined. Too little design can be as counter-productive as too much.

Just as the taking on of new debt should be done in a rational manner, so should the retirement of old debt.

Think of technical debt as credit card debt and you’ll realize why it is important to account for it, be aware of it and pay it off before funding any new features.

Domain Modeling: why it is replacing Data Modeling and bottom-up designs

Complexity is the driving force for the adoption of the Domain Model pattern. Complexity should be measured in terms of the current requirements and what processes are modeled or automated by the application.

A domain model is a collection of plain old classes, each of which faithfully represents a significant entity in the business domain. These classes are data containers and can expose public properties, but they are also expected to expose methods. The term POCO (Plain Old CLR Object) is often used to refer to classes in a domain model. At some point, the classes in the domain model need to be persisted. Persistence is not a responsibility of the domain model, and it will happen outside of the domain model through repositories connected to the infrastructure layer.
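
To make this concrete, here is a minimal sketch of what such a POCO entity might look like in C#; the Order and OrderLine classes and the business rule are invented purely for illustration and are not part of the quoted text:

using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical POCO domain entity: a plain C# class with no persistence concerns.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public Order()
    {
        PlacedOn = DateTime.Now;
    }

    public int Id { get; private set; }
    public DateTime PlacedOn { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines => _lines;

    // Behavior lives on the entity itself, not in the database layer.
    public void AddLine(string productCode, int quantity, decimal unitPrice)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _lines.Add(new OrderLine(productCode, quantity, unitPrice));
    }

    public decimal Total() => _lines.Sum(l => l.Quantity * l.UnitPrice);
}

public class OrderLine
{
    public OrderLine(string productCode, int quantity, decimal unitPrice)
    {
        ProductCode = productCode;
        Quantity = quantity;
        UnitPrice = unitPrice;
    }

    public string ProductCode { get; private set; }
    public int Quantity { get; private set; }
    public decimal UnitPrice { get; private set; }
}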

The conversion between the model and the relational store is typically performed by ad hoc tools, specifically Object/Relational Mapper (O/RM) tools such as Microsoft’s Entity Framework or NHibernate. The unavoidable mismatch between the domain model and the relational model is the critical point of implementing a Domain Model pattern.
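
As a rough sketch of the persistence side (assuming Entity Framework 6 and the hypothetical Order entity above; the ShopContext class and the table name are likewise invented), the mapping is declared outside the domain model, in the data-access layer:

using System.Data.Entity; // EF6; in EF Core the namespace is Microsoft.EntityFrameworkCore

// A hypothetical DbContext: table names and keys stay out of the POCO classes.
public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Persistence details are configured here, not on the domain entity.
        modelBuilder.Entity<Order>().HasKey(o => o.Id);
        modelBuilder.Entity<Order>().ToTable("Orders");
    }
}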

For decades (the 1980s and 1990s), relational data modeling was the most effective way to model the business layer of software applications. In the .NET space, the turning point came in the early 2000s, when more and more companies that still had their core business logic carved in the rough stone of mainframes took advantage of the Microsoft .NET Framework and Internet breakthroughs to renovate and modernize their systems. In only a few years, this poured an incredible amount of complexity on the shoulders of developers. RAD (Rapid Application Development) and relational modeling then, we’d say almost naturally, appeared worn out, and their limits were exposed.

 

All the text above was taken from “Esposito, Dino; Saltarello, Andrea (2014-08-28). Microsoft .NET – Architecting Applications for the Enterprise (2nd Edition) (Developer Reference)” with permission from the authors.

Complexity is tackled best using classes that model the business in a ubiquitous language, a language understood both by the technical modelers (software architects and developers) and by the business counterparts or subject matter experts. Domain Services encapsulate operations that can be consumed by other applications or by different GUIs. The focus now is on modeling the domain and its behavior, rather than on modeling the data (the relational/RDBMS model) without events or behavior. The data model, taken as the starting point of the design, falls short without modeling the sequence of events (behavior) and the flow of information.
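
A hypothetical sketch of a Domain Service built on the Order entity sketched earlier; the repository interface and service names are invented, but they illustrate how behavior is expressed in the business vocabulary rather than in table terms:

// A repository abstracts persistence away from the domain and its services.
public interface IOrderRepository
{
    Order FindById(int orderId);
    void Save(Order order);
}

// A Domain Service named after a business activity, not after a database table.
public class OrderFulfillmentService
{
    private readonly IOrderRepository _orders;

    public OrderFulfillmentService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void AddLineToOrder(int orderId, string productCode, int quantity, decimal unitPrice)
    {
        var order = _orders.FindById(orderId);
        order.AddLine(productCode, quantity, unitPrice); // behavior on the entity
        _orders.Save(order);                             // persistence behind the repository
    }
}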

Should I use sessionID to uniquely identify users?

NO; that is what UserId/UserName or LoginID and password combinations are for.
SessionID is a “random” string and can repeat (e.g., when IIS is restarted or the server is rebooted, the sequencing scheme that generates SessionID values is reset). So if you store information for a user keyed on the SessionID value, be very aware that a different person next week might happen to get the same SessionID value; this will either violate a primary key constraint or mix two or more people’s data.

However, in ASP.NET the SessionID is 120 bits in length, so, like a GUID, it is virtually guaranteed never to repeat.

But in classic ASP, this built-in mechanism is not a good strategy for identifying users over the long term. A better methodology would be to generate a key value in a database that is guaranteed to be unique (e.g., an IDENTITY or AUTOINCREMENT column) and store it in a cookie on the client. Then you can identify the user not only for the life of the current session, but during future visits as well (of course, only until the next time they delete their cookies).
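
A minimal ASP.NET (System.Web) sketch of this approach; the cookie name and the CreateUserAndGetId helper (which would wrap the INSERT that returns the IDENTITY value) are assumptions added for illustration:

using System;
using System.Web;

// Identify a visitor by a database-generated key stored in a persistent cookie,
// not by the SessionID.
public static class VisitorIdentity
{
    public static int GetOrCreateUserId(HttpContext context)
    {
        HttpCookie cookie = context.Request.Cookies["UserId"];
        int userId;
        if (cookie != null && int.TryParse(cookie.Value, out userId))
            return userId; // returning visitor

        // Hypothetical data-access helper, e.g. INSERT ...; SELECT SCOPE_IDENTITY();
        userId = CreateUserAndGetId();

        var newCookie = new HttpCookie("UserId", userId.ToString())
        {
            Expires = DateTime.Now.AddYears(1) // survives across sessions until the cookie is deleted
        };
        context.Response.Cookies.Add(newCookie);
        return userId;
    }

    private static int CreateUserAndGetId()
    {
        // Placeholder for the real database call.
        throw new NotImplementedException();
    }
}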

As usual, it depends on the technology you inherit, how the old modules were built, and so on.

Storing the ASP.NET SessionID and, at the same time, storing a SessionGUID generated from that SessionID doesn’t make much sense.

L.