Hi Guys,

I'd like to hear your suggestions and advice regarding a DB design problem in MySQL.

The situation is that I need to develop a DB design that delivers high performance under the following conditions:

+ There will be 10,000 users that will consume and generate data.
+ The DB should allow each user to generate one record per second, 24 hours a day.
+ The DB should store records from the current time to 90 days in the past.

All of the above will take place in a single table, which gives the following numbers:

+ 86,400 rows per user per day
+ 2,592,000 rows per user per month (30 days)
+ 7,776,000 rows per user over the full 90 days
+ 77,760,000,000 rows total in the main table
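
(For reference, the arithmetic: 24 h × 3,600 s = 86,400 records per user per day; 86,400 × 30 = 2,592,000 per month; 86,400 × 90 = 7,776,000 per user over 90 days; 7,776,000 × 10,000 users = 77,760,000,000 rows.)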

So the question is: what design/tuning strategies would make the scenario described above work while keeping "real-time" response times? That is:

+ The DB should allow me to retrieve the last 1-300 rows added by any given user with a response time in the milliseconds (the exact query shape is sketched just below this list).

+ The DB should be able to handle up to 10,000 insertions per second.

+ There should be some mechanism to automatically delete rows older than 90 days, to make room for new ones and keep the table size bounded.
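
To make that first point concrete, the query I need to be fast is essentially this (table and column names are just illustrative):

    SELECT *
    FROM readings               -- hypothetical main table
    WHERE user_id = 42          -- some random user
    ORDER BY created_at DESC    -- newest first
    LIMIT 300;

My understanding is that a composite index on (user_id, created_at), e.g. as the primary key, should let this run as a short ordered index scan instead of a table scan.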


I would really like to hear your advice on this.


Have a look at MySQL's clustering and partitioning facilities. You can design your DB as usual and distribute the load across several servers. I don't know how well this works with really huge DBs, but it might be worth considering.
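
To sketch the partitioning idea (all table/column names below are just placeholders), RANGE partitioning by day would also take care of your 90-day cleanup, since retiring the oldest day becomes a cheap DDL statement instead of a massive DELETE:

    -- Sketch only: the main table partitioned by day.
    CREATE TABLE readings (
        user_id    INT UNSIGNED NOT NULL,
        created_at DATETIME     NOT NULL,
        payload    VARBINARY(255),          -- whatever each record actually holds
        PRIMARY KEY (user_id, created_at)   -- per-user, time-ordered reads become an index scan
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p20120101 VALUES LESS THAN (TO_DAYS('2012-01-02')),
        PARTITION p20120102 VALUES LESS THAN (TO_DAYS('2012-01-03'))
        -- ... one partition per day, created ahead of time by a scheduled job
    );

    -- Retiring the oldest day is a metadata change, not a row-by-row DELETE:
    ALTER TABLE readings DROP PARTITION p20120101;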


I have considered implementing hash partitioning to distribute the table load over 10,000 partitions (one per user), but I'm wondering whether it will really help me reach my goal, so I'd like to hear other opinions/suggestions.
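
Roughly what I had in mind (sketch only; all names are placeholders). One catch: as far as I know, MySQL caps the number of partitions per table (1,024 in older versions, 8,192 in newer ones), so a true one-partition-per-user layout isn't possible and users would have to hash into fewer buckets:

    CREATE TABLE readings (
        user_id    INT UNSIGNED NOT NULL,
        created_at DATETIME     NOT NULL,
        payload    VARBINARY(255),
        PRIMARY KEY (user_id, created_at)
    ) ENGINE=InnoDB
    PARTITION BY HASH (user_id)
    PARTITIONS 128;   -- at 10,000 users, roughly 78 users per partition

The trade-off versus partitioning by date is that hashing on user_id helps the per-user reads, but it makes the 90-day purge a row-by-row DELETE again, since partitions no longer line up with time.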

This is of interest to me as well. We don't need anywhere close to that size (yet), but we have been wanting to move to a cluster for redundancy and improved speed.

We noticed a big lift in performance a while back, around 50%, from upgrading to SSD drives. We also moved to RAID 0, but we did that together with a large boost in RAM, so I don't have a separate figure for what that contributed; needless to say, it was much faster.

Our current server is:

Supermicro X8SIL w/ VGA & Dual Gigabit LAN
Intel Core i7 870 2.93GHz QC
Socket 1156 Cooling Fan
1GB DDR3 ECC Registered 1333MHz
9GB DDR3 RAM (total 10GB)
Seagate 500GB SATAII w/ 16MB Cache
RAID 0
SSD SATA - 32GB
SSD SATA - 32GB
1.44M Black FDD
In-Win Mid Tower Case
350W Power Supply
100 Mb/s network port upgrade

It's a dedicated MySQL server, does nothing else, and it handles 100k+ updates an hour at peak times while also running some large SELECT queries with joins (a 40+ field table with 5 million rows joined onto a table with 6 fields and 20 million rows). It hasn't reached its limit yet; it can easily do small indexed 1,000-row fetches in under 0.5 seconds.

Per your stats above, that's at least 33 users sorted.

For speed, I remember hearing about setups that keep the database in RAM for huge performance gains, but of course you need a lot of RAM to achieve that:
http://en.wikipedia.org/wiki/In-memory_database
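
In MySQL specifically, the built-in MEMORY storage engine is one (limited) way to try that, e.g. as a small hot cache of each user's most recent rows. A sketch, bearing in mind that MEMORY tables are emptied on every server restart:

    CREATE TABLE readings_recent (
        user_id    INT UNSIGNED NOT NULL,
        created_at DATETIME     NOT NULL,
        payload    VARBINARY(255),
        PRIMARY KEY (user_id, created_at) USING BTREE  -- BTREE (not the default HASH) keeps ORDER BY created_at cheap
    ) ENGINE=MEMORY;

In practice, just giving InnoDB a buffer pool big enough to hold the hot working set gets you much of the same benefit without losing the cache on restart.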
