Hello everyone.

I have a program that writes into a MySQL database, and at some point I have to do a sort of database rotation: a dump followed by a truncate (at least that's how I see it).

The problem is that while I do the dump, the program is very likely to keep writing into the database, so if I truncate the tables after the dump finishes I might lose some rows (the ones added during the dump). I know I can tell mysqldump to lock the tables, but it releases them as soon as the dump ends, so the program can insert rows again before the truncate starts, and those rows are then lost to the truncate.

Practically, I want to periodically back up a database to a file and then clear it, without losing any data inserted in between.

Is there any way to solve this?


All 2 Replies

If you are using a different user for the dump, you could revoke the general user's privileges while dumping/truncating, and restore them afterwards.
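A minimal sketch of that idea, assuming the application connects as a user `appuser`, the data lives in a database `mydb` with a table `events`, and the backup runs as an admin account (all of these names are placeholders):

```shell
# Revoke write access from the application user for the duration:
mysql -e "REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'appuser'@'%';"
mysqldump mydb > backup.sql
mysql -e "TRUNCATE TABLE mydb.events;"
# Restore the application user's write access:
mysql -e "GRANT INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'%';"
```

Note that the application will receive "access denied" errors on its inserts during this window, so it has to be able to retry or queue them.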

"FLUSH TABLES WITH READ LOCK" may be a better option. But haven't used it myself.

Firstly, is there any criterion by which you could separate the data? For example, if there is an AUTO_INCREMENT field you could use it to delete only those rows with id <= last_id; if not, is there a timestamp column that could be used the same way?
If such a field exists, it would serve you best to take the dump without locking the tables and then delete accordingly, since locks are always going to hinder performance. Also, if the dump takes a long time to finish, other insert/update processes might hit a "Lock wait timeout exceeded" error.
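A sketch of the cutoff approach, assuming a table `mydb.events` with an AUTO_INCREMENT column `id` (placeholder names); since TRUNCATE always removes every row, a DELETE with the same cutoff is used for the cleanup:

```shell
# Record the highest id before the dump starts:
LAST_ID=$(mysql -N mydb -e "SELECT COALESCE(MAX(id), 0) FROM events")
# Dump only rows up to that cutoff:
mysqldump mydb events --where="id <= $LAST_ID" > backup.sql
# Remove exactly the dumped rows; later inserts survive:
mysql mydb -e "DELETE FROM events WHERE id <= $LAST_ID"
```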

If the above is not feasible because there is no such column on which rows can easily be separated, then before starting the mysqldump you could create a table identical to the original with the 'CREATE TABLE %tablename% AS SELECT * FROM %originaltablename%' syntax, run mysqldump on that table, and finally drop the duplicate. This requires no locking at all (apart from what the MySQL server itself enforces while creating the table), and inserts can continue on the original table while you take the mysqldump of the duplicate.
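The copy-then-dump idea boils down to three commands; a sketch, again with placeholder names `mydb` and `events`:

```shell
# Snapshot the current contents into a new table:
mysql mydb -e "CREATE TABLE events_snapshot AS SELECT * FROM events;"
# Dump the snapshot while inserts continue on the original table:
mysqldump mydb events_snapshot > backup.sql
# Discard the snapshot once the dump is safely on disk:
mysql mydb -e "DROP TABLE events_snapshot;"
```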

If neither of the above options is feasible for you, you could use explicit locking on the tables during the mysqldump and truncate commands, releasing the locks only after the truncate has finished.
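One way to sketch this is to hold a table-level WRITE lock in a single session that both exports and clears the table, so nothing can be inserted in between (placeholder names; whether TRUNCATE is allowed under LOCK TABLES varies by MySQL version, so DELETE is used here to be safe):

```shell
mysql mydb <<'SQL'
LOCK TABLES events WRITE;
-- export from the locking session itself; the file lands on the server host
SELECT * FROM events INTO OUTFILE '/tmp/events_backup.csv';
DELETE FROM events;
UNLOCK TABLES;
SQL
```

The writer program's inserts simply block until UNLOCK TABLES, which is exactly the behaviour the original question asks for, at the cost of stalling the application for the duration.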
