Hi all, hopefully I can find a bit of help here.
A bit of background on the application I have developed (in VS2010): it needs to write data into a SQL table on a separate server, but being mission critical, the entries must also be written to a redundant server; entries are written every few minutes and lost entries are unacceptable. This part works perfectly: if the primary SQL server fails, the application automatically writes the entries to the backup SQL table until the primary server comes online again, and then reverts to writing to it.
The problem I have now is replicating the changes between the two SQL servers when the primary comes back online. It is important to note that data will only ever be added to the tables; no rows will ever be changed or deleted. The basic layout of the tables is as follows:
[ID] - primary key, uniqueidentifier with (newid()) as the default value
[Time_Stamp] - smalldatetime, stamped by the application
plus a varying number of other fields depending on the table; these are exactly the same in both databases, as they are created programmatically.
I have attempted to use SQL Server's built-in replication, but due to a whole host of restrictions (servers in a workgroup environment instead of a domain, no access to the sa account, etc.) I was unable to get it working correctly, and I have to resort to doing it through code.
I know I could open two connections and compare the tables row by row, but I fear this would prove slow given the size the tables will grow to (entries added every 15-20 minutes, nothing ever deleted, for years on end). Can anyone suggest a neater or more efficient solution?
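Since rows are only ever inserted, the catch-up does not need a row-by-row diff of the whole table: keyed on the [ID] GUID and bounded by a Time_Stamp watermark (the last moment both servers were known to be in sync), only rows written since the outage need comparing. Here is a minimal sketch of that idea, using SQLite in place of SQL Server so the example is self-contained; the table name `log`, the `Payload` column, and the literal timestamps are illustrative assumptions, not the real schema:

```python
import sqlite3

# Schema mirroring the question: a GUID primary key and an application
# timestamp. Table name "log" and column "Payload" are illustrative.
DDL = "CREATE TABLE log (ID TEXT PRIMARY KEY, Time_Stamp TEXT, Payload TEXT)"

def missing_ids(src, dst, watermark):
    """IDs present in src but absent from dst, restricted to rows at or
    after the watermark so years of identical history are never scanned."""
    src_ids = {r[0] for r in src.execute(
        "SELECT ID FROM log WHERE Time_Stamp >= ?", (watermark,))}
    dst_ids = {r[0] for r in dst.execute(
        "SELECT ID FROM log WHERE Time_Stamp >= ?", (watermark,))}
    return src_ids - dst_ids

def copy_rows(src, dst, ids):
    """Insert the given rows from src into dst. Because the tables are
    append-only, plain INSERTs are safe: nothing is updated or deleted."""
    for rid in ids:
        row = src.execute(
            "SELECT ID, Time_Stamp, Payload FROM log WHERE ID = ?",
            (rid,)).fetchone()
        dst.execute("INSERT INTO log VALUES (?, ?, ?)", row)
    dst.commit()

def sync(a, b, watermark):
    """Two-way catch-up: copy whatever each side is missing."""
    copy_rows(a, b, missing_ids(a, b, watermark))
    copy_rows(b, a, missing_ids(b, a, watermark))

# Demo: the primary misses one entry written during its outage.
primary, backup = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (primary, backup):
    db.execute(DDL)
shared = ("a1", "2013-01-01 10:00", "reading 1")
primary.execute("INSERT INTO log VALUES (?, ?, ?)", shared)
backup.execute("INSERT INTO log VALUES (?, ?, ?)", shared)
# Primary was down for this write; it only reached the backup.
backup.execute("INSERT INTO log VALUES (?, ?, ?)",
               ("b2", "2013-01-01 10:15", "reading 2"))
# Watermark = last time both servers were known to be in sync.
sync(primary, backup, "2013-01-01 10:00")
```

Against SQL Server proper, the same set difference could be pushed into one query per direction (fetch only the watermarked IDs, then select the missing rows) and the inserts batched with something like SqlBulkCopy, keeping the work proportional to the length of the outage rather than the lifetime of the table.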