The ListView control in System.Windows.Forms 2.0 has a bug

Oh my, the end users are going to get really entertained with this bug.

I’m trying to present images as icons in a ListView, and the end user should be able to drag and drop an image to the desired location inside the list view to reorder the list. Unfortunately, no matter which position you drag and drop the image to, the image is always placed at the end.
I found a forum post about this with no solution:
Inserting items at specific index in listviews (c# .net 2.0)

If anyone knows of a workaround, I would be more than happy to hear it. Poor end users will need a lot of training to realize the drag and drop only ever drops the items at the end…

Thanks to the Connect program at MSDN, there is a published workaround here:
MSDN Connect Issue # 115345

In the end, the fault lies in the Win32 ListView implementation on which this .NET control is based:

MSDN Connect Issue # 94685

Note the published workaround won’t work if you’re not using Groups on your ListView. I modified it to work directly with the ListViewItem Items collection. I might publish an extended version of the control with this fix here, if time permits…
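For reference, here is the shape of the idea without groups, sketched from memory (listView1, the drag-drop wiring and the clone-and-reinsert approach are my assumptions, not the literal Connect sample):

```csharp
// Hedged sketch of a DragDrop handler that re-inserts the dragged item
// at the drop position via the Items collection. Clone-remove-insert
// forces the control to recreate the underlying Win32 item.
private void listView1_DragDrop(object sender, DragEventArgs e)
{
    // DragEventArgs coordinates are in screen space; convert them.
    Point clientPoint = listView1.PointToClient(new Point(e.X, e.Y));
    int targetIndex = listView1.InsertionMark.NearestIndex(clientPoint);
    if (targetIndex < 0)
        return;

    ListViewItem dragged =
        (ListViewItem)e.Data.GetData(typeof(ListViewItem));
    ListViewItem clone = (ListViewItem)dragged.Clone();
    int sourceIndex = dragged.Index;

    listView1.Items.Remove(dragged);
    // Removing an earlier item shifts the target index down by one.
    if (targetIndex > sourceIndex)
        targetIndex--;
    listView1.Items.Insert(targetIndex, clone);
}
```

Per the Connect issue, Items.Insert may still append in some views, which is why the item is cloned and re-inserted rather than moved in place.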

Happy coding!

Monday morning random rambling…

Now that the caffeine is taking effect: I attended the Heroes Happen Here event in Toronto last week. The event was fun in general, with lots of marketing going on so the developers get really biased, towards Microsoft technologies, of course…
Not that this is necessarily a bad thing, as long as you keep your mind open to what’s going on in other development niches and keep up the constructive criticism.
I enjoyed the developer tracks in the morning, but most of all I enjoyed two architecture presentations; they were light and fun, not too deep in content, but very refreshing. Here are the presenters’ blogs:
No Spin Architecture
Joel’s blog on architecture

Both blogs have tons of resources for aspiring architects.

There was a third presenter who raffled four architecture books; the raffle consisted of answering a question at light speed. Doh! I’m planning on getting the books on Amazon…

Overall, I got good software to try, saw that VS2008 has the split markup/code feature for web development that Macromedia had when I started doing web work in 2000, and I should get a copy of Vista in my mail in the next few weeks. I’m excited about the JavaScript debugger and IntelliSense in VS2008. Hyper-V promises to be more secure than VMware, and SQL Server 2008 has the FileName type I wish I had now for my image archiving project…
I can’t wait to get my hands on the Virtual Earth web service and bundle it with the new maps capabilities in SQL Server 2008, if time permits; there are only 24 hours in a day and deadlines cannot be stretched :-p

Cheers!

The fix for batch deletions at the Publisher when use_partition_groups is set to false in Merge Replication is indeed there!!!

Yay!

The fix for this bug is here: Microsoft published Cumulative Update 6 for SQL Server 2005 SP2. Please see the previous post for the original bug description. We had problems when not using partition groups in our merge replication topology (see the T-SQL scripts below): when there were batch deletions caused by filtering at the Publisher, those rows were deleted at the publisher (the published database) during the next replication (synchronization with subscribers).

The faulty stored procedure was sp_MSdelsubrowsbatch

The faulty line of code, believe it or not, was a single line in the sproc: the "set type = 1" assignment shown below.

-- change tombstone type for the non-original rows (the rows that got deleted via expansion).
update dbo.MSmerge_tombstone
set type = 1
from
(select distinct tablenick, rowguid from
#notbelong where original_row <> 1) nb,
dbo.MSmerge_tombstone ts
where ts.tablenick = nb.tablenick and
ts.rowguid = nb.rowguid
option (FORCE ORDER, LOOP JOIN)

In the sproc after CU6, the same statement reads:
update dbo.MSmerge_tombstone
set type = @METADATA_TYPE_PartialDelete
from
(select distinct tablenick, rowguid from
#notbelong where original_row <> 1) nb,
dbo.MSmerge_tombstone ts
where ts.tablenick = nb.tablenick and
ts.rowguid = nb.rowguid
option (FORCE ORDER, LOOP JOIN)


Basically, the deletions caused by filtering (partial deletes) were being marked as tombstones, i.e. user deletions.

Given the following variable declarations:

declare @METADATA_TYPE_Tombstone tinyint
declare @METADATA_TYPE_PartialDelete tinyint
declare @METADATA_TYPE_SystemDelete tinyint

and assignments:

set @METADATA_TYPE_Tombstone = 1
set @METADATA_TYPE_PartialDelete = 5
set @METADATA_TYPE_SystemDelete = 6

 

This caused deletions in the published database. Each subscriber had a subset of the published database; the merge replication was created with filters so that each subscriber holds only the data pertaining to that subscriber and not the data pertaining to any other subscriber.

If you ever doubted the power of a single line of code… :-p

Happy coding guys!

 

Cumulative Update Number 6 for SQL Server 2005 SP2 is out!!!

Hi DBAs and Database Developers out there,
Finally, the long-awaited CU#6 was released on Feb 18th, near midnight.

946608 Cumulative update package 6 for SQL Server 2005 Service Pack 2
http://support.microsoft.com/default.aspx?scid=kb;EN-US;946608

I’m currently testing our merge replication issues with this CU.

Even though the bug described in the DevX article is not in the KB article’s fix list, we got confirmation that the fix is in CU#6.

So far the automatic identity management problem described in my previous post remains the same: if the user making the inserts is not a db_owner, automatic identity management ain’t going to happen on your publisher…

Identity Range not working for Merge Replication in SQL Server 2005

Back in Sept 2007 I blogged about the problem we were having with identity ranges in merge replication: @@identity not working after SQL Server 2005 upgrade. The problem continued until today; this post explains what we figured out.

The error message that describes this problem reads as follows:

[548] The insert failed. It conflicted with an identity range check constraint in database 'DatabaseName', replicated table 'dbo.TableName', column 'ColumnNameId'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

The identity range adjustment happens after every insert into the given article. The code responsible for the identity check adjustment is in the system trigger for the published article, MSmerge_ins_GUID, where GUID is the GUID of the given article:

if is_member('db_owner') = 1
begin
    -- select the range values from the MSmerge_identity_range table
    -- this can be hardcoded if performance is a problem
    declare @range_begin numeric(38,0)
    declare @range_end numeric(38,0)
    declare @next_range_begin numeric(38,0)
    declare @next_range_end numeric(38,0)

    select @range_begin = range_begin,
        @range_end = range_end,
        @next_range_begin = next_range_begin,
        @next_range_end = next_range_end
    from dbo.MSmerge_identity_range
    where artid='BAEF9398-B1B1-4A68-90A4-602E3383F74A' and subid='0F9826DB-50FB-4F73-844D-AE3A111B4E1C' and is_pub_range=0

    if @range_begin is not null and @range_end is not NULL and @next_range_begin is not null and @next_range_end is not NULL
    begin
        if IDENT_CURRENT('[dbo].[TableName]') = @range_end
        begin
            DBCC CHECKIDENT ('[dbo].[TableName]', RESEED, @next_range_begin) with no_infomsgs
        end
        else if IDENT_CURRENT('[dbo].[TableName]') >= @next_range_end
        begin
            exec sys.sp_MSrefresh_publisher_idrange '[dbo].[TableName]', '0F9826DB-50FB-4F73-844D-AE3A111B4E1C', 'BAEF9398-B1B1-4A68-90A4-602E3383F74A', 2, 1
            if @@error<>0 or @retcode<>0
                goto FAILURE
        end
    end
end

As you might have noticed already, if the insertion is made by a user who is not a member of the db_owner database role, the identity adjustment won’t happen. I believe this is a bug, not a feature: it forces the users who are allowed to do insertions to be owners of the database, or they will hit the out-of-identities error quite often and a manual adjustment will be required.

What the savvy DBA can do:

The template for all of the MSmerge_ins_GUID triggers is in the model database; these triggers are created by the system stored procedure sp_MSaddmergetriggers_internal.

If you change the code in this system stored procedure, any newly created database will have the proper template for creating the merge insertion triggers. For existing databases, you might want to modify the MSmerge_ins_GUID trigger manually, for now.
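For an existing database, the manual edit is small. A hedged sketch of what I mean (use at your own risk: altering replication system objects is unsupported, and the widened role check below is just one option, not an official fix):

```sql
-- In the MSmerge_ins_GUID trigger, the guard that causes the problem is:
--
--   if is_member('db_owner') = 1
--
-- One option is to widen the check so regular inserting users also get
-- the identity range adjustment, for example:

if is_member('db_owner') = 1 or is_member('db_datawriter') = 1
begin
    -- ... identity range adjustment block, unchanged from the
    -- excerpt above ...
end
```

The same edit applied to sp_MSaddmergetriggers_internal in the model database would carry over to triggers generated for new databases.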

At the time I’m writing this post, I’m not aware of this issue being addressed in any cumulative update. There is a similar case in the Cumulative Update released on January 15, 2008:
http://support.microsoft.com/kb/941989

Happy coding guys!

Yay!!!! The team is on TechNet Innovation Awards 2008

Hi, developer wandering the internet searching for the solution to your bug. Take a brief moment and vote for us, a bunch of developers who also wander the internet searching for solutions to our bugs and blog about them to help others 🙂

Microsoft Canada and TechNet Innovation Code Awards

We’ll be good and post more interesting, good stuff on our blog :-p
Kidding, let the code prevail!

Oh, we’re the Tablet PC team, good stuff with SQL Server 2005 and Smart Clients 😉

Merge replication issues in SQL Server 2005 and comments on MS employee’s blog

Hi all,
As I mentioned in my previous post, I would provide details of the data-loss scenarios in merge replication topologies with SQL Server 2005. The details were published in this addendum to the original DevX article:

UPDATED SQL Server 2005 Bug Alert: Data Loss in Merge Replication

The article is basically two repros illustrating the data loss when partition groups are not used in the publication. The workaround is not published, and the definitive solution should be available in SQL Server 2005 SP3 or the upcoming cumulative update in February 2008.

Today I came across a blog post regarding SQL Server bugs and how to provide information to tech support or to the Connect program.

Please read this blog post if you are posting/investigating bugs. The more information you provide, the faster the bug might be scheduled for a fix:

Getting Your “Favorite” SQL Server Bug Fixed

What is interesting in that blog post is one of the comments from an MS employee, “anna”:

I’m sometimes amazed how personal people takes the bug issue. If you look at users you can sometimes think that the users think we have a complete bug list or that we have super powers to figure out what the problem are. And believing that a rotten attitude gets the problem fixed faster is so stupid.

But if you look at it from the other perspective, I often find developers taking pride in classifying something as a bug. In these days of agile and customer driven development, why taking so much pride into saying if something is a bug or a change request.

During the past two years we’ve gotten two synchronization bugs fixed in SQL Server 2005. My tips: be honest, give all information you have, understand that everyone wants to fix the bugs and don’t forget that the guys fixing and confirming the stuff are people. Often really nice people. And also remember that reporting a bug is like going to the ER: sometimes there are people who are sicker than you and they need help first.

I know there is no excuse to use foul language or be rude to get your bug scheduled first; however, there is also no excuse for data loss due to insufficient testing or a newly introduced bug.
I can only speak for myself, but in our case the merge topology had worked fault-free for over a year before we upgraded to the 2005 version.
We are humans and we err, but we are also accountable for our code and our testing procedures. It’s not a matter of taking pride in pointing out a fault. I would rather not have had the data loss issue at all, nor the overtime and stress that followed in order to recover the data and avoid future losses.

Why take so much pride in the work we do or the code we write? Maybe it’s better to ask: why not?

Merge Replication parameterized row filtering, beware of the optimizations

I haven’t blogged in a while, mostly due to my workload, which has skyrocketed.
My most recent challenge is with SQL Server 2005 Merge Replication.

We had quite a few undesirable side effects when we upgraded our subscribers from good ole MSDE to SQL Server Express.
The side effects led to data loss and emergency fixes, in a really chaotic way. This data loss didn’t happen with MSDE.

The article I recently published on DevX described a data loss scenario with SQL Server 2005 Enterprise as publisher/distributor and MSDE or SQLE as subscriber.

The scenario we hit recently happens only with SQLE subscribers and is related to the parameterized filter optimizations in SQL Server 2005.

Tweaking the parameters @use_partition_groups and @keep_partition_changes produces different side effects, ranging from data loss to an inability to merge the data: the parameterized filters stop working.
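For context, both switches are arguments to sp_addmergepublication, set when the publication is created. A hedged sketch of where they appear (the publication name is made up; the remaining arguments keep their defaults):

```sql
-- Hypothetical merge publication creation showing the two optimization
-- switches discussed above. @use_partition_groups enables the SQL 2005
-- precomputed-partitions optimization; @keep_partition_changes is the
-- older, pre-2005 optimization for tracking partition changes.
exec sp_addmergepublication
    @publication = N'MyFilteredPublication',   -- made-up name
    @publication_compatibility_level = N'90RTM',
    @use_partition_groups = N'false',
    @keep_partition_changes = N'true'
```

Changing either value on an existing publication generally requires reinitializing the subscriptions, so test the combinations on a staging topology first.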

If you’re interested in the subject, hang on; I will publish repros in the next post.

If anyone knows how to set up the Windows Synchronization Manager for verbose history logging to output to a file, please comment here; all my attempts have failed.

Cheers and happy holidays!

SQL Server Merge Replication issues

After a month of talking over the phone with Microsoft tech support, and more than a month and a half of struggling with the problem and trying to isolate the cause, I put together a short article on how to lose your data with merge replication if you use parameterized row filtering and upgrade your publisher from SQL Server 2000 to SQL Server 2005.
The details of the problem, steps to reproduce the behavior, and scripts can be found in this article published on DevX:
SQL Server 2005 Bug Alert: Merge Replication Could Mean Data Loss

My main purpose is to speed up the resolution, so if you have any comments, workarounds, or similar cases, please give me a shout.
Have a great weekend!

PG

Dad put together a quick gallery with his comics

Way to go! Yay!
I particularly like this one about silicone implants :-))

Visit the original gallery here.

Now going nerdy on how fast it is to set up this gallery: what’s even more interesting is the way to publish images in a batch using Windows XP Explorer. I do have to reverse-engineer the process; one modification in the registry and Windows Explorer can publish any folder with images in three clicks… Publishing from a Mac is just as fast.

If you have minimal computer skills, you can set up your own image gallery with this script. Honestly, stop sharing your family pictures on Flickr, Ringo or any other public website and have your own space. You can even have your grandma upload the images for the family :-p

If you need hosting and domain registration for your gallery, Chihost is the best to go with.