DaniWeb was written in CodeIgniter, and when posting to the forums, we use database transactions that look like this:

$this->db->trans_start();

... insert into the posts table ...
... update the member's post count ...
... update the tags table ...
etc

$this->db->trans_complete();
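
Slightly more concretely, the block looks something like this (the table and column names here are illustrative, not our exact schema):

$this->db->trans_start();

// Insert the new post.
$this->db->insert('posts', array(
    'topic_id'  => $topic_id,
    'member_id' => $member_id,
    'body'      => $body,
));

// Increment the member's post count in place.
$this->db->set('post_count', 'post_count + 1', FALSE)
         ->where('id', $member_id)
         ->update('members');

// ... update the tags table, and so on ...

$this->db->trans_complete();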

On occasion, the error log will show the following for a particular query within the transaction: Query error: Deadlock found when trying to get lock; try restarting transaction - Invalid query

When that happens, the entire transaction is rolled back. That's not ideal, because when someone attempts to make a post, we don't want it to fail completely just because the members table happened to be in use at the time.

That being said, how do I go about restarting the transaction? e.g. something like:

if ($this->CI->db->trans_status() === false)
{
    // Code to restart the transaction
}
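
Presumably it would have to be wrapped in a loop along these lines (a rough sketch; run_post_queries() is a hypothetical wrapper around all of the inserts and updates, and whether trans_status() survives a retry like this depends on CodeIgniter's transaction strict mode):

$attempts = 0;
do {
    $this->db->trans_start();
    run_post_queries();          // hypothetical: all of the inserts/updates above
    $this->db->trans_complete(); // rolls back automatically if a query failed
} while ($this->db->trans_status() === FALSE && ++$attempts < 3);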

Is there a better way to be doing it? Should I be manually locking tables?


OK, so from what I've been reading, this needs to happen at the application layer. I need to manually reissue all of the queries. Based on the way I'm doing things, that's a little difficult.

What is easy for me to implement in my code, however, is reissuing a specific write command if that query fails. But how can I do this from within a transaction?

For example:

// Start Transaction
// Try to insert into posts table
// Insert fails
// Retry inserting into posts table
// Update multiple other tables
// Complete Transaction

If that's the case, CodeIgniter's trans_complete() will see that a query failed and roll back the entire thing. Alternatively, I could manage transactions manually, keep track myself of whether each query failed or succeeded, commit the transaction if the retry succeeded, and only roll back otherwise.
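
With CodeIgniter's manual transaction methods, that bookkeeping might look roughly like this (a sketch; insert_post() and update_other_tables() are hypothetical helpers that return TRUE on success):

$this->db->trans_begin(); // manual mode: committing and rolling back is up to us

$ok = insert_post();      // hypothetical helper wrapping the INSERT
if ( ! $ok)
{
    $ok = insert_post();  // one retry before giving up
}

$ok = $ok && update_other_tables(); // hypothetical: the remaining updates

if ($ok)
{
    $this->db->trans_commit();
}
else
{
    $this->db->trans_rollback();
}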

There's a complication, though: we use MySQL replication. Will replication see that the query failed on the master server and then not push it through to the slave? Or will the statement be executed twice on the slave when the transaction is committed?

So I'm doing it. I'm retrying a failed query from inside the transaction, and only rolling back the transaction if the query fails more than once. How dangerous is this?!

So I take it that the insert into the posts table fails. My thought is that if it fails once, it should fail the next time, so deeper inspection of the insert and its data is required. One could try it manually from the command line to see whether any error is thrown that isn't otherwise logged.

Another thought: what is being inserted may matter, but that's unlikely to be the case, since the losses I've experienced were all plain text without any embellishment, not even a word in bold.

TL;DR. When it fails, log that insert data to try again manually in hopes of more feedback. Capture any additional errors if possible.

So I take it that the insert into the posts table fails.

No, that isn't what's happening. Per my initial post, when you reply to a topic on DaniWeb, we start a new transaction: insert into the posts table, update the topics table, and so on. We touch a handful of different tables, then commit the transaction. Inserting a new record into the posts table always works. However, updating the member's post count occasionally fails because there is a write lock on that member record, held by some other page load happening at the same time. When that happens, we wait a few milliseconds and try again, as MySQL recommends when a deadlock is found. When it fails a second time, we roll back the entire transaction, which is why posts seem to be posted just fine but then disappear when you refresh the page: the entire thing is rolled back after the page terminates, all because we couldn't update your post counter.
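
Concretely, the wait-and-retry looks something like this (a simplified sketch; it assumes db_debug is disabled so a failed query doesn't abort the request, and 1213 is MySQL's "deadlock found" error code; note also the caveat that comes up later in this thread, namely that InnoDB may already have rolled back the whole transaction by this point):

// Try to bump the member's post count.
$this->db->set('post_count', 'post_count + 1', FALSE)
         ->where('id', $member_id)
         ->update('members');

$error = $this->db->error();
if ($error['code'] == 1213) // deadlock found when trying to get lock
{
    usleep(50000); // wait ~50ms, then make the one retry
    $this->db->set('post_count', 'post_count + 1', FALSE)
             ->where('id', $member_id)
             ->update('members');
}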

I took it as if the code reflected the need to retry on lines 3, 4, and 5. Since the post counter, as you say, is what fails, that could be its own transaction.

But what about logging deadlocks? https://dev.mysql.com/doc/refman/8.0/en/innodb-deadlocks.html seems to note we may be able to see the last deadlock and more.
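
For example, the most recent deadlock can be pulled out of the InnoDB monitor output at any time, and MySQL 5.6+ can write every deadlock to its error log (a quick sketch; SET GLOBAL requires sufficient privileges):

// Inspect the LATEST DETECTED DEADLOCK section of the monitor output.
$status = $this->db->query('SHOW ENGINE INNODB STATUS')->row();
log_message('error', $status->Status);

// Or have MySQL record every deadlock in its own error log.
$this->db->query('SET GLOBAL innodb_print_all_deadlocks = ON');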

I can't guess the current load on the servers, but as others have noted, the post-loss issue is recent. If there was a recent change, most of us won't know about it, but here's one more thought.

https://dev.mysql.com/doc/refman/8.0/en/innodb-deadlock-detection.html points to https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout, and that 50-second default feels like the long delay I see when I post and the post is then lost. I can't imagine updates to your database taking that long, and besides, just how often do posts occur?

Since the post counter, as you say, is what fails, that could be its own transaction.

Doesn't that completely defeat the purpose of a transaction, which is to keep the database in sync by treating all the queries that accomplish one thing (e.g. replying to a post) as a single unit?

But what about logging deadlocks?

Yes, we're doing that, of course. That's how I know the deadlocks occur on the members table.

On high concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock.

Hmm, I'll read more about that. That might be the problem we're having. If InnoDB rolls back a transaction on its own when it detects a deadlock, that conflicts with my code, which only rolls back the transaction after multiple retries of the deadlocked query.

That could definitely be the cause of the problem!!

Remember, I'm just kicking this around and am only an occasional MySQL programmer. MySQL is my go-to solution for the factory test software I developed long, long ago. Sorry if I can't do more than this.

I was tooling around the webs today and an article codified a thought I had about your predicament. So what was the thought?

I wonder if your SQL updates are accessing and updating data in the same order. Since I'm terrible at writing this up, here's the article's text and link.

Make changes to the application. In some cases, you could greatly reduce the frequency of deadlocks by splitting a long transaction into smaller ones, so locks are released sooner. In other cases, the deadlock rises because two transactions touch the same sets of data, either in one or more tables, with different orders. Then change them to access data in the same order, in another word, serialize the access. That way you would have lock wait instead of deadlock when the transactions happen concurrently.
From https://www.percona.com/blog/2014/10/28/how-to-deal-with-mysql-deadlocks/

Reading the above, you don't want to break up the long transaction, so if there are multiple updates in action, do they "access data in the same order"?
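
In other words, if every code path touches the shared tables in one agreed order, two concurrent transactions simply queue up behind each other (a lock wait) rather than deadlocking. A sketch of the idea:

// Every transaction that touches these tables does so in the same order:
// 1. posts, 2. members, 3. tags. The queries themselves are illustrative.
$this->db->trans_start();
$this->db->insert('posts', $post_data);
$this->db->set('post_count', 'post_count + 1', FALSE)
         ->where('id', $member_id)
         ->update('members');
$this->db->update('tags', $tag_data, array('id' => $tag_id));
$this->db->trans_complete();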


Just to throw a stick in the spokes of this thread: as an alternative to using transactions, why not delegate the not-so-important updates to a job queue? Push the member's post count and tags updates to RabbitMQ or Kafka?
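
For instance, with php-amqplib the page that saves the post could publish a small job and let a background worker apply the counter updates outside the transaction (a sketch; the queue name and payload are made up, and it assumes a RabbitMQ instance on localhost):

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('post_side_effects', false, true, false, false);

// Enqueue the non-critical updates; a worker applies them later.
$job = json_encode(array('member_id' => $member_id, 'tags' => $tags));
$channel->basic_publish(new AMQPMessage($job), '', 'post_side_effects');

$channel->close();
$connection->close();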

Another alternative might be a trigger. Create a trigger on your posts table to update the others on insert. Ya? Maybe? I dunno. Curious what others think here.
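
For what it's worth, such a trigger might look like this (a one-time piece of MySQL DDL; the member_id column name is illustrative):

// One-time setup: keep members.post_count in sync on every insert into posts.
$this->db->query("
    CREATE TRIGGER posts_after_insert
    AFTER INSERT ON posts
    FOR EACH ROW
        UPDATE members
        SET post_count = post_count + 1
        WHERE id = NEW.member_id
");

One thing to keep in mind: the trigger runs inside the same transaction as the insert, so it still takes the same row lock on the members table.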


Third alternative: create a view (or summary table) that is refreshed by a cron job, say every 5 minutes.

As far as I can tell, that data doesn't need to be immediately accurate, so this would be fine.
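
A batch recount along those lines could be a single statement run from a cron-invoked script (a sketch; it assumes a posts.member_id column):

// Recompute every member's post count from the posts table in one pass.
$this->db->query("
    UPDATE members m
    SET m.post_count = (SELECT COUNT(*) FROM posts p WHERE p.member_id = m.id)
");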

I just stumbled across this thread randomly as the result of a Google search for something else, and thought I'd take the time to reply.

What I ultimately ended up doing is retrying a deadlocked query 5 times before rolling back the transaction. Over more than a year of logs, it has never taken more than two attempts to succeed, which put an end to any user-facing bugs around this.

In terms of using cron jobs to delay not-so-important updates and run them behind the scenes in batches, we have always done that for article views and so forth. However, member post count is something users expect to update instantaneously: I take the time to write something, and then I expect my account to reflect that new post. Otherwise, I might think it didn't go through.

Anyways, the current solution has been working for the past year and a half with no issues.
