Hi everyone,

As the thread title suggests, I am trying to think of the best way to maintain and synchronise local and remote databases. To explain this more clearly, I'll outline the scenario and my thoughts for tackling the problem.

I have a website on a remote server, which obviously has a database containing customer information, orders, etc. This information is entered into the database directly by customers through the website when signing up, placing orders, etc.

Customers are also able to call the office to place orders. At the moment, this data is being stored in a local database, entirely separate from the website. This essentially means that two versions of the database are being maintained, which is a waste of time and money.

The obvious solution here would be for the office staff to enter the customer information and telephone orders directly into the website database. However, what if the office internet connection goes down, the connection latency is too high or, heaven forbid, the website itself goes down...

This made me think about setting up a duplicate of the website on a local server based in the office - well, the administration interface at least. This would allow the office staff to enter data directly into the local database, irrespective of whether there was an internet connection, etc. This then raises the issue of synchronising or combining the two databases...

My thought was that the local and remote databases could use different primary keys for all tables. For example, an auto-increment ID plus a 'local' or 'remote' identifier, forming a composite primary key. Then, when merging the records from the two databases, there would be no key clashes.
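To illustrate, something like the sketch below is what I have in mind (the table and column names are just placeholders, not my actual schema):

```sql
-- Rough sketch of the composite-key idea: an auto-increment ID plus a
-- 'local'/'remote' marker, so the same numeric ID can exist in both databases.
CREATE TABLE customers (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
    source ENUM('local', 'remote') NOT NULL,
    name   VARCHAR(100) NOT NULL,
    PRIMARY KEY (id, source)
) ENGINE=InnoDB;

-- After merging the two databases, (101, 'local') and (101, 'remote')
-- can coexist without a key clash:
INSERT INTO customers (id, source, name) VALUES (101, 'local',  'Alice Smith');
INSERT INTO customers (id, source, name) VALUES (101, 'remote', 'Bob Jones');
```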

My question, finally: has anyone implemented anything similar and, if so, how did you do it? Or can you think of any issues with my idea?


Thanks for your time and for reading this far,
R.

All 2 Replies

I've not yet seriously looked into MySQL clustering and partitioning, but maybe you should.
Problems can arise if locally entered data collide with online data - for example, if you have a lookup table which should be synchronized. As the need arises, a new value will be entered locally which does not exist in the online version, and you'll get a key clash. So your local staff would really be limited to entering new data - no editing, no deleting, etc. It's technically feasible, but it would be a very inflexible way of setting up an application.

I'd rather opt for a local replica of the online master database which is updated at short intervals - every few minutes or so. If the connection goes down, let the local staff switch to the local version with limited functionality (entering new data only) and re-sync with the master database and the full-scale application when the connection is up again.
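To sketch what I mean by a local replica, standard MySQL master/slave replication would look roughly like this. This is only an outline: the hostname, user and password are placeholders, and it assumes the remote my.cnf already has binary logging and a server-id configured, with a different server-id on the local box.

```sql
-- Run on the local (office) MySQL server to make it a read-only replica
-- of the remote master. Log file/position come from SHOW MASTER STATUS
-- on the remote server; all values below are placeholders.
CHANGE MASTER TO
    MASTER_HOST     = 'www.example.com',   -- placeholder hostname
    MASTER_USER     = 'repl',              -- replication account on the master
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS
    MASTER_LOG_POS  = 4;

START SLAVE;

-- Verify that both replication threads are running:
SHOW SLAVE STATUS\G
```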

Hi smantscheff,

I had thought of key clashes, and was considering employing composite primary keys on all tables to overcome this issue, e.g. (101, 'local') and (101, 'remote').

The only other issue I could think of would be if a customer signed up on the website and then called the office on the same day. This would result in two accounts being created, since I only intend to sync the two databases each night. A manual method for merging duplicates would therefore need to be devised. However, updating all records to use one customer ID rather than another is little effort.
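For what it's worth, the merge step I have in mind would be something like the sketch below (table and column names are assumptions to match my earlier example, not my real schema, and the IDs are made up):

```sql
-- Re-point the duplicate account's orders at the surviving account,
-- then remove the duplicate.
START TRANSACTION;

UPDATE orders
   SET customer_id = 101, customer_source = 'remote'     -- surviving account
 WHERE customer_id = 205 AND customer_source = 'local';  -- duplicate account

DELETE FROM customers
 WHERE id = 205 AND source = 'local';

COMMIT;
```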

Using a master/slave setup is another option I hadn't considered, but the restriction of entering new data only wouldn't be very practical. Something to investigate further, perhaps.

Thank you for your input.
R.
