Hello,
In a project I work on, there's a server with many clients connected to
it. The clients sometimes go offline, and before they do, they request
several items from the server, which are then "downloaded" to the client
computer - they basically consist of several rows in related tables
(plus several full lookup tables). When the clients get back online, the
data is synchronized. The problem is that the current code is static,
and since we're implementing some changes, we thought of changing this
too.
So my question is - can a subscriber dynamically set the query it needs?
Thanks in advance,
S. Neumann
While there are dynamic filters in merge replication, I wouldn't necessarily
suggest going down that route before finding out a bit more - it may be that
static filters based on some sort of flag would be more appropriate. Please
can you post an example table schema and tell us how the required data
changes from one synchronization to another.
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com/default.asp
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Hi,
Thanks for your quick reply!
The very basics of the db are:
Files table: ID, Name, Type
Documents table: ID, FileID, Name, DocPath
Type lookup table: ID, Name
(so there are many documents in each file). There are more tables related
to the Files table in the same way, but these two are enough for the
example.
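Sketched as DDL, so the relations are explicit (I'm guessing at column
types; the names are as above):

CREATE TABLE Types (
    ID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL
)

CREATE TABLE Files (
    ID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    Type int NOT NULL REFERENCES Types(ID)     -- lookup
)

CREATE TABLE Documents (
    ID int PRIMARY KEY,
    FileID int NOT NULL REFERENCES Files(ID),  -- many documents per file
    Name nvarchar(100) NOT NULL,
    DocPath nvarchar(260) NOT NULL             -- path to the actual document
)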
Before the user goes offline, he asks to "download" several files he's
been working on to his laptop. The program copies the lookup tables to a
local database (currently Jet), along with the data about the files the
user requested and the actual documents themselves (referred to by
DocPath).
When the user gets back online, the changes he's made (e.g. an update to
the Name field in the Documents or Files tables, a deleted row, or any
change to the actual documents) are applied on the server.
Today we do this with a saved log of all the changes made locally
(table name, row ID, and the type of change - insert/update/delete).
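The log is essentially one table; sketched here in SQL Server syntax for
readability (the local store is currently Jet, and these names and types
are illustrative):

-- One row per local modification; replayed against the server on reconnect.
CREATE TABLE ChangeLog (
    LogID int IDENTITY PRIMARY KEY,
    TableName sysname NOT NULL,   -- which table was touched
    RowID int NOT NULL,           -- ID of the affected row
    ChangeType char(1) NOT NULL   -- 'I' = insert, 'U' = update, 'D' = delete
        CHECK (ChangeType IN ('I', 'U', 'D'))
)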
The main problem with the way we do it now is that since the relations
between several tables are going to change to many-to-many, it'll be
hard to keep track of which rows to actually copy to the client, and
updating back to the server may also get complicated. For instance,
suppose the new schema looks like this:
Files table: ID, Name, Type
Documents table: ID, Name, DocPath
File-Document junction table: FileID, DocID
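In DDL form (again, the types are my assumptions):

CREATE TABLE Files (
    ID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    Type int NOT NULL
)

CREATE TABLE Documents (
    ID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    DocPath nvarchar(260) NOT NULL
)

-- Junction table: a document can now belong to many files, and vice versa.
CREATE TABLE FileDocument (
    FileID int NOT NULL REFERENCES Files(ID),
    DocID int NOT NULL REFERENCES Documents(ID),
    PRIMARY KEY (FileID, DocID)
)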
Without cascading deletes, it'll be a pain to keep the junction table
consistent. If I locally delete a document that is connected to two
files, of which I have only one downloaded, the junction row for the
file I didn't download won't be deleted.
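Concretely (the IDs are made up): suppose document 7 is linked to files
1 and 2, and only file 1 is downloaded.

-- Locally, FileDocument contains only (1, 7); the server also has (2, 7).
DELETE FROM FileDocument WHERE DocID = 7   -- removes (1, 7) locally
DELETE FROM Documents WHERE ID = 7

-- Replaying the log on the server deletes (1, 7) and document 7, but
-- never touches (2, 7), leaving a junction row pointing at a deleted
-- document.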
I wouldn't want the clients to have the whole database because it's
huge and they really only need the data for the files they work on when
offline.
So can replication help me with this?
Thanks in advance,
S. Neumann
Paul Ibison wrote:
> While there are dynamic filters in merge replication, I wouldn't
> necessarily suggest going down that route before finding out a bit
> more - it may be that static filters based on some sort of flag would
> be more appropriate. Please can you post an example table schema
> and tell us how the required data changes from one
> synchronization to another. Rgds,
> Paul Ibison SQL Server MVP, www.replicationanswers.com/default.asp
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
|||Changing the initial data set retrospectively isn't directly possible
unless you have a separate publication for each subscriber, change the
filter clause and then reinitialize. However, using static filters, it is
possible to reallocate records at the highest level, and the effect will
propagate downwards by virtue of the join filters. This will allow you to
simulate changing the filtering dynamically. Practically speaking, there
would have to be another table which relates files to subscribers, join
filters would relate the other tables, and the static filter would
apply to the "File_Subscriber" table only. Checking out another file by a
subscriber would be an insert into the "File_Subscriber" table. Removing a
record from this table will propagate a series of deletes of the related
Files and Documents records to the subscriber.
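In outline, it could look something like the following - a rough,
untested sketch, where the publication name, the SubscriberCode column
and the filter clause are purely illustrative:

-- Hypothetical table relating checked-out files to subscribers.
CREATE TABLE File_Subscriber (
    FileID int NOT NULL REFERENCES Files(ID),
    SubscriberCode nvarchar(50) NOT NULL,   -- identifies the laptop/user
    PRIMARY KEY (FileID, SubscriberCode)
)

-- Static row filter on the top-level article only. (A dynamic variant
-- would use something like SubscriberCode = HOST_NAME() instead.)
EXEC sp_addmergearticle
    @publication = 'OfflineFiles',
    @article = 'File_Subscriber',
    @source_object = 'File_Subscriber',
    @subset_filterclause = 'SubscriberCode = ''LAPTOP01'''

-- The other articles carry no row filter of their own...
EXEC sp_addmergearticle
    @publication = 'OfflineFiles',
    @article = 'Files',
    @source_object = 'Files'

-- ...instead, a join filter publishes only the Files referenced from
-- File_Subscriber; further join filters would chain FileDocument to
-- Files and Documents to FileDocument in the same way.
EXEC sp_addmergefilter
    @publication = 'OfflineFiles',
    @article = 'Files',
    @filtername = 'Files_per_subscriber',
    @join_articlename = 'File_Subscriber',
    @join_filterclause = 'Files.ID = File_Subscriber.FileID'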
I wouldn't recommend using cascade deletes in the manner you suggested, as
the result could be unexpected for the reason you stated. Instead, I'd use
a bit flag to designate each record as 'current' and change that
accordingly - either in the stored procedure that initiates the change or
in a trigger.
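For example, a trigger along these lines (the IsCurrent flag column and
the trigger name are hypothetical):

-- Assumes an IsCurrent bit column has been added to Documents and
-- FileDocument. When a document is flagged off, flag off its junction
-- rows too instead of deleting them; merge replication then carries
-- these updates to the relevant subscribers.
CREATE TRIGGER trg_Documents_Retire ON Documents
AFTER UPDATE
AS
IF UPDATE(IsCurrent)
    UPDATE fd
    SET fd.IsCurrent = 0
    FROM FileDocument fd
    JOIN inserted i ON fd.DocID = i.ID
    WHERE i.IsCurrent = 0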
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com/default.asp
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)