I am testing the concept of concurrent transactions using this table definition:
ID, int
Quota, int
Remaining, int
Locks, int

To simulate multiple users, I'm using threads and having them sleep for random amounts of time.

So the system is this:
1. User is looking at the details for an item
2. User clicks "Buy"
3. Clicking buy brings them to a confirmation page. They click "Confirm"
4. From here we run a TRANSACTION that first updates the table to increment the number of Locks. (The UPDATE acquires a row-level lock.)
a. If the new Locks count is still within Remaining, tell the application to start a credit card "transaction"
i. Was the charge approved?
1. Yes: TRANSACTION to decrement Locks and Remaining
2. No: TRANSACTION to decrement Locks
b. Otherwise, decrement Locks
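The flow above can be sketched in Python with SQLite. This is a minimal sketch, not a drop-in implementation: the table/column names come from the definition above, but `items` as the table name and `charge_card` (standing in for the slow credit card call) are assumptions I've made for illustration.

```python
import sqlite3

def try_reserve(conn, item_id):
    """Step 4: atomically bump Locks, but only if a unit is still available.

    The WHERE clause makes increment-and-check a single atomic UPDATE,
    so the row-level lock is held only for this one statement.
    """
    with conn:  # with conn: wraps the statement in BEGIN ... COMMIT
        cur = conn.execute(
            "UPDATE items SET Locks = Locks + 1 "
            "WHERE ID = ? AND Locks < Remaining",
            (item_id,),
        )
    return cur.rowcount == 1  # True = reservation acquired

def release(conn, item_id, purchased):
    """Steps 4a.i.1 / 4a.i.2: drop the lock; also consume a unit on success."""
    with conn:
        if purchased:
            conn.execute(
                "UPDATE items SET Locks = Locks - 1, Remaining = Remaining - 1 "
                "WHERE ID = ?", (item_id,))
        else:
            conn.execute(
                "UPDATE items SET Locks = Locks - 1 WHERE ID = ?", (item_id,))

def confirm_purchase(conn, item_id, charge_card):
    """The 'Confirm' click: reserve, charge the card, then settle."""
    if not try_reserve(conn, item_id):
        return False               # sold out / fully reserved (step 4b)
    approved = charge_card()       # slow credit card call; no DB lock held here
    release(conn, item_id, purchased=approved)
    return approved
```

Note that the database is only locked inside the two short `with conn:` blocks; the slow `charge_card()` call happens between them, which is exactly the point of the Locks column.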

The reason we need a separate locking mechanism (Locks) is that a credit card transaction takes a while: we need to ensure that the user who clicked "Confirm" actually gets their product, without holding a row-level lock for the duration of the credit card transaction.

So my question is: does having a TRANSACTION to increment Locks make sense, or is there a better way to do this? The reason I think this could be bad is that it blocks queries. Maybe this is just flawed database design on my part, and we should actually have a separate record for every unit of a Product, so that row-level locks do not affect queries. But that would majorly inflate the size of the database.

Last Post by BitBlt

My answer to your question is yes, your approach makes sense.

Interestingly, the concept you've come up with is analogous to the old accounting principle of a "suspense" account. The idea is that you want to make sure you aren't spending money twice, so you set it aside ("suspend" it) in a different account until you are done. Then you finish your transaction by moving the amount to where it actually goes and zeroing out the suspense account. Your example uses the same principle, but with available inventory instead of actual dollars.

As far as the whole blocking/row-lock issue goes, you just have to make sure your updates are quick: begin your transaction, make your update, commit your transaction. Try to do any preparatory lookups or joins outside the transaction's begin/end block. Many DBMSs can be set to queue transactions if the isolation level is set properly. Just keep the actual update statements as tight as possible. You're never going to get away from the possibility of having to roll back failed updates (physics always wins!), but taking those steps minimizes the risk.
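To make the "keep it tight" advice concrete, here's a sketch in Python/SQLite with concurrent users simulated by threads, as in the question. (Caveat: SQLite locks at the database level rather than the row level, so it's only standing in for a real row-locking DBMS here, but the begin → single UPDATE → commit shape is the same. The `items` table name and the 5-second lock timeout are my own choices for the example.)

```python
import sqlite3
import threading

def tight_update(db_path, item_id, delta):
    """One simulated user bumping Locks with the shortest possible transaction."""
    # timeout=5 makes a blocked writer wait up to 5 s instead of failing,
    # i.e. contending transactions effectively queue up.
    conn = sqlite3.connect(db_path, timeout=5)
    try:
        # Any preparatory lookups/joins would go HERE, outside the transaction.
        # The transaction itself is just: begin -> one UPDATE -> commit.
        with conn:
            conn.execute(
                "UPDATE items SET Locks = Locks + ? WHERE ID = ?",
                (delta, item_id))
    finally:
        conn.close()

def simulate_users(db_path, item_id, n_users):
    """Fire n_users concurrent updates at the same row."""
    threads = [threading.Thread(target=tight_update, args=(db_path, item_id, 1))
               for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Because each transaction holds its lock only for a single UPDATE, contention stays brief even with many concurrent "users."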

Good work coming up with this method, and good luck!
