Cumulative Update Number 6 for SQL Server 2005 SP2 is out!!!

Hi DBAs and Database Developers out there,
Finally, the long-awaited CU#6 was released on February 18th, near midnight.

KB 946608: Cumulative update package 6 for SQL Server 2005 Service Pack 2

I’m currently testing our merge replication issues with this CU.

Even though the bug described in the DevX article is not in the KB article list, we got confirmation that the fix is in CU#6.

So far the automatic identity management problem described in my previous post remains the same. If the user making the inserts is not a db_owner, automatic identity management ain't going to happen on your publisher…

Identity Range not working for Merge Replication in SQL Server 2005

Back in September 2007 I blogged about the problem we were having with the identity range in merge replication: "@@identity not working after SQL Server 2005 upgrade". The problem continued until today; this post explains what we figured out.

The error message that describes this problem reads as follows:

[548] The insert failed. It conflicted with an identity range check constraint in database ‘DatabaseName’, replicated table ‘dbo.TableName’, column ‘ColumnNameId’. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

The identity range adjustment happens after every insert into the given article. The code responsible for the identity range adjustment is in the system trigger for the published article, MSmerge_ins_GUID, where GUID is the GUID for the given article:

if is_member('db_owner') = 1
begin
    -- select the range values from the MSmerge_identity_range table
    -- this can be hardcoded if performance is a problem
    declare @range_begin numeric(38,0)
    declare @range_end numeric(38,0)
    declare @next_range_begin numeric(38,0)
    declare @next_range_end numeric(38,0)

    select @range_begin = range_begin,
           @range_end = range_end,
           @next_range_begin = next_range_begin,
           @next_range_end = next_range_end
    from dbo.MSmerge_identity_range
    where artid = 'BAEF9398-B1B1-4A68-90A4-602E3383F74A'
      and subid = '0F9826DB-50FB-4F73-844D-AE3A111B4E1C'
      and is_pub_range = 0

    if @range_begin is not null and @range_end is not null
       and @next_range_begin is not null and @next_range_end is not null
    begin
        if IDENT_CURRENT('[dbo].[TableName]') = @range_end
            DBCC CHECKIDENT ('[dbo].[TableName]', RESEED, @next_range_begin) with no_infomsgs
        else if IDENT_CURRENT('[dbo].[TableName]') >= @next_range_end
        begin
            exec @retcode = sys.sp_MSrefresh_publisher_idrange
                 '[dbo].[TableName]', '0F9826DB-50FB-4F73-844D-AE3A111B4E1C',
                 'BAEF9398-B1B1-4A68-90A4-602E3383F74A', 2, 1
            if @@error <> 0 or @retcode <> 0
                -- (error handling omitted from this excerpt)
        end
    end
end

As you might have noticed already, if the insert is made by a user who is not a member of the db_owner database role, the identity adjustment won't happen. I believe this is a bug, not a feature: it forces the users who are allowed to do insertions to be owners of the database, or they will hit the "run out of identities" error quite often and a manual adjustment will be required.

What the savvy DBA can do:

The template for all of the MSmerge_ins_GUID triggers is in the model database; these triggers are created by the system stored procedure sp_MSaddmergetriggers_internal.

If you change the code in this system stored procedure, any newly created database will have the proper template for creating the merge insert triggers. For existing databases, you might want to modify the MSmerge_ins_GUID trigger manually, for now.
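If the range has already been exhausted and inserts are failing, the error message's own suggestion can be run by hand. A minimal sketch, with the publication and table names as placeholders for your own:

```sql
-- Manually refresh the publisher's identity range for one article
-- after the trigger skipped the adjustment for a non-db_owner user.
EXEC sys.sp_adjustpublisheridentityrange
    @publication = N'MyMergePublication',
    @table_name  = N'TableName',
    @table_owner = N'dbo';
```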

At the time of writing this post I'm not aware of this issue being addressed in any cumulative update. There is a similar case in the Cumulative Update released on January 15, 2008.

Happy coding guys!

Merge replication issues in SQL Server 2005 and comments on MS employee’s blog

Hi all,
As I mentioned in my previous post, I would provide details of the data loss scenarios in merge replication topologies with SQL Server 2005. The details were published in this addendum to the original DevX article:

UPDATED SQL Server 2005 Bug Alert: Data Loss in Merge Replication

The article is basically two repros illustrating the data loss when partition groups are not used in the publication. The workaround is not published, and the definitive solution should be available in SQL Server 2005 SP3 or the upcoming cumulative update in February 2008.

Today I came across a blog post regarding SQL Server bugs and how to provide the information to tech support or to the Connect program.

Please read this blog post if you are posting/investigating bugs. The more information you provide, the faster the bug might be scheduled for a fix:

Getting Your “Favorite” SQL Server Bug Fixed

What is interesting in that blog post is one of the comments, from an MS employee, "anna":

I’m sometimes amazed how personal people takes the bug issue. If you look at users you can sometimes think that the users think we have a complete bug list or that we have super powers to figure out what the problem are. And believing that a rotten attitude gets the problem fixed faster is so stupid.

But if you look at it from the other perspective, I often find developers taking pride in classifying something as a bug. In these days of agile and customer driven development, why taking so much pride into saying if something is a bug or a change request.

During the past two years we’ve gotten two synchronization bugs fixed in SQL Server 2005. My tips: be honest, give all information you have, understand that everyone wants to fix the bugs and don’t forget that the guys fixing and confirming the stuff are people. Often really nice people. And also remember that reporting a bug is like going to the ER: sometimes there are people who are sicker than you and they need help first.

I know there is no excuse to use foul language or be rude to get your bug scheduled first; however, there is also no excuse for data loss due to insufficient testing or a newly introduced bug.
I can only speak for myself, but in our case the merge topology had worked fault-free for over a year before we upgraded to the 2005 version.
We are human and we err, but we are also accountable for our code and our testing procedures. It's not a matter of taking pride in pointing out a fault. I would rather not have had the data loss issue at all, nor the overtime and stress that followed in order to recover the data and avoid future losses.

Why take so much pride in the work we do or the code we write? Maybe it's better to ask: why not?

Merge Replication parameterized row filtering, beware of the optimizations

I haven't blogged in a while, mostly due to my workload, which has skyrocketed.
My most recent challenge is with SQL Server 2005 Merge Replication.

We had quite a few undesirable effects when we upgraded our subscribers from good old MSDE to SQL Server Express.
The side effects led to data loss and emergency fixes, in a really chaotic way. This data loss didn't happen with MSDE.

The article I recently published on DevX described a data loss scenario with SQL Server 2005 Enterprise as publisher/distributor and MSDE or SQL Server 2005 Express (SQLE) as subscriber.

The scenario we hit recently happens only with SQLE subscribers and is related to the parameterized filter optimizations in SQL Server 2005.

Tweaking the parameters @use_partition_groups and @keep_partition_changes produces different side effects, ranging from data loss to an inability to merge the data at all: the parameterized filters stop working.
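For context, both switches are set when the merge publication is created. This is only a sketch with placeholder names, not our production script:

```sql
-- The two optimization switches are chosen at publication creation time.
EXEC sp_addmergepublication
    @publication = N'MyMergePublication',
    @publication_compatibility_level = N'90RTM',
    @use_partition_groups   = N'true',   -- precomputed partitions on
    @keep_partition_changes = N'false';
```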

If you're interested in the subject, hang on; I will publish repros in the next post.

If anyone knows how to set up the Windows Synchronization Manager to output its verbose history logging to a file, please comment here; all my attempts have failed.

Cheers and happy holidays!

SQL Server Merge Replication issues

After a month of talking over the phone with Microsoft tech support, and more than a month and a half of struggling with the problem and trying to isolate the cause, I put together a short article on how to lose your data with merge replication if you use parameterized row filtering and upgrade your publisher from SQL Server 2000 to SQL Server 2005.
The details of the problem, steps to reproduce the behavior and scripts can be found on this article published on DevX:
SQL Server 2005 Bug Alert: Merge Replication Could Mean Data Loss

My main purpose is to speed up the resolution, so if you have any comments, workarounds, or similar cases, please give me a shout.
Have a great weekend!


More on "Breaking Changes in SQL Server 2005 Replication"

We finally launched our external SQL Server 2005 setup with merge replication and pull subscriptions. After extensive QA testing, surprising data loss due to filtering, and more testing with the new parameterized row filters, we finally made it.

Right after we launched the servers (our publisher and distributor) and the subscribers began to synchronize, we hit a few conflicts we hadn't seen in any of the test environments.

The cause of the conflicts was the CONNECT REPLICATION permission missing from the users; the server data overwrote the client data in each case.

The Books Online documentation is pretty scarce on this new permission. It only says that if you have CONTROL permission at the database level, CONNECT REPLICATION is implied on the database; likewise, if you have CONTROL SERVER permission at the server level, CONNECT REPLICATION is implied as well. Other than that, there isn't much.
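The fix on our side was a one-liner per user. A sketch, with DatabaseName and ReplUser as placeholders:

```sql
USE DatabaseName;
-- Grant the new database-level permission introduced in SQL Server 2005
-- so merge agents connecting as this user can replicate.
GRANT CONNECT REPLICATION TO ReplUser;
```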

The official article documenting the breaking changes can be found here:

and it does not mention the difference in permissions.

The breaking change here is that this permission, which you can set through the Securables item in the user properties, is brand new to SQL Server 2005.

On a SQL Server 2000 engine, Management Studio shows the classic set of database permissions, while on SQL Server 2005 the permission list for a given database includes the new entries, CONNECT REPLICATION among them.

@@identity not working after SQL Server 2005 upgrade

After our server upgrade to SQL Server 2005, some of the stored procedures stopped working. The reason: the change in behavior of the @@identity function in 2005.

The value of @@identity is not restricted to the current scope and can be affected, for example, by triggers. That is, if the original INSERT targets table T1, and T1 has a trigger that inserts rows into table T2, then after the original insert @@identity will show the new value for T2 (not for T1, as you might expect!).

The suggested solution is to use the SCOPE_IDENTITY() function instead of @@identity. This function IS restricted by scope, and for the example above it will return the new identity value for the original table T1 (not for T2 or any other table affected by triggers).
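A minimal repro of the difference (table and trigger names are made up for illustration):

```sql
CREATE TABLE T1 (id INT IDENTITY(1,1), val INT);
CREATE TABLE T2 (id INT IDENTITY(100,1), val INT);
GO
-- The trigger moves the identity context to T2, just like the
-- MSmerge_ins_* triggers do with the replication system tables.
CREATE TRIGGER trg_T1_ins ON T1 AFTER INSERT AS
    INSERT INTO T2 (val) SELECT val FROM inserted;
GO
INSERT INTO T1 (val) VALUES (1);
SELECT @@IDENTITY       AS LastIdentityAnyScope,   -- 100 (from T2, via the trigger)
       SCOPE_IDENTITY() AS LastIdentityThisScope;  -- 1   (from T1)
```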

It seems to me that SELECT MAX(identity_column) FROM T1 is not a good solution either, because T1 (or any other table) can be updated simultaneously from multiple sessions, so the maximum value is not guaranteed to be yours.

If you have merge replication going on, be aware that replication now uses triggers that were not present in 2000.

For instance, the insert trigger MSmerge_ins_5D17CE9EBD2142E3ADC630C47F017946 inserts rows into another system table with an identity column! We didn't see these triggers in the SQL 2000 database, which may explain why @@identity worked fine in the same code under SQL 2000.

Hope this helps and happy upgrading!


SQL Server Express command-line installation (just a link to the good stuff)

We developers are all a little bit lazy and time conscious; we don't like to perform the same installation/upgrade over and over on 30+ remote machines.
It's Friday, and even though I added this link to my bookmarks already, I know it will take me a few minutes to find it on Monday… yes, my bookmarks are that messy.
I need to create a cmd install batch that will

  1. Upgrade the local MSDE instance to SQL Express 2005
  2. Use the Advanced features during the installation, as I'm not creating a new instance but upgrading the default one.
  3. Add the replication components that will be needed for the merge replication
  4. Add the client components in case I need to connect remotely to that rebel local instance :-p
  5. Add the connectivity components that will install SMO and RMO (Server Management Objects and Replication Management Objects). I'm using both namespaces to trigger/manage replication from this .NET smart client.

My cheat sheet is here:

Command-line installation
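From that reference, the whole wish list above boils down to roughly one line. This is only a sketch I haven't run yet; verify the ADDLOCAL feature names against the linked documentation before trusting it on 30+ machines:

```
rem Upgrade the default MSDE instance to SQL Server 2005 Express, unattended
rem with a basic UI, adding replication, client components and the SMO/RMO bits.
start /wait setup.exe /qb INSTANCENAME=MSSQLSERVER UPGRADE=SQL_Engine ^
    ADDLOCAL=SQL_Engine,SQL_Data_Files,SQL_Replication,Client_Components,Connectivity,SDK
```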

It doesn't seem too complicated; let's wait for Monday to see how it goes. Meanwhile I'm still working on my RMO code. We had replication pretty much working with the Replication ActiveX component for MSDE, but this component is deprecated in 2005.

Have a nice weekend!

Merge Replication and our need to upgrade hardware

We’ve all been pretty busy lately with a juicy project. We have a client application that runs on TabletPCs. Among other interesting parts of this project we have SQL Server Merge Replication going on. We used to have a topology of SQL Server Enterprise Edition for the Publisher and Distributor and the reliable MSDE for the subscribers.
A few months ago our excitement increased when the company decided to migrate some of the servers to SQL Server 2005 Ent.
The excitement continued, but we're having quite a few headaches with the replication. Our main problem is the hardware: our client boxes have merely 500 MB of RAM, and the recommended requirement for SQL Server 2005 Express Edition is 512 MB. Oh joy.
We tried to keep MSDE alive and use it as a subscriber, but we use some features that break compatibility, in particular parameterized row filtering. More details on the incompatibility are in this MSDN reference; see the topic "Compatibility Level for Merge Publications", which specifies that parameterized row filtering requires a compatibility level of 90RTM or higher.

The solution? It seems we upgrade our subscribers to SQL Server 2005 Express and hope they still work at a decent level on the scarce hardware, or upgrade and hope there is budget for a hardware upgrade too :-p. In the end it's all about budget, right?


SQL Server and binary fields (images)

No matter how much I disagree on putting image files into SQL Server, I had to handle this on a project.

We had to provide update statements to insert binary data into production servers from the data we had on the test servers. The statement had to be written in T-SQL, period.

Everything would have been a fairy tale if Query Analyzer didn't have the 8K result limit. Most of the images were bigger than 8K. This limitation does not apply to the data you use in the query you're running, but to the buffer that stores the results shown in QA.

We ended up writing a small console app with ADO.NET to retrieve the binary data, but guess what: you cannot insert this data as-is with a T-SQL statement. Shortly after, we discovered we could have used a third-party tool such as the SQL Bundle from Red Gate to do this. Did I say before that I just LOVE SQL Compare?
The binary data has to be converted to its hexadecimal equivalent so it can be used in an update or insert statement; you cannot paste whatever you get from the database straight into your T-SQL queries.

Code samples follow:

The ADO.NET part (pretty basic):

this.sqlCommand1.CommandText = "SELECT [Image field] FROM table WHERE ID = something";

byte[] barrImg = (byte[])sqlCommand1.ExecuteScalar();

// dump the raw bytes to disk
string strfn = "yourfilename.txt";
FileStream fs = new FileStream(strfn, FileMode.CreateNew);
fs.Write(barrImg, 0, barrImg.Length);
fs.Close();

The hexadecimal conversion part, basic too:

char[] hexDigits = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' };
char[] chars = new char[barrImg.Length * 2];
for (int i = 0; i < barrImg.Length; i++)
{
    int b = barrImg[i];
    chars[i * 2] = hexDigits[b >> 4];      // high nibble
    chars[i * 2 + 1] = hexDigits[b & 0xF]; // low nibble
}

string results = new string(chars);

// write the hex string out; prefix it with 0x when building the T-SQL literal
StreamWriter strWr = new StreamWriter("yourhexdatafile.txt");
strWr.Write(results);
strWr.Close();

Note: VB.NET has a Hex function for the hexadecimal conversion.
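With the hex file in hand, the generated statement ends up looking like this (the hex literal is truncated and the table, column and ID are placeholders):

```sql
UPDATE [table]
SET [Image field] = 0x47494638396101000100800000FFFFFF  -- the 0x-prefixed hex string
WHERE ID = 1;
```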