Question: Does anyone have any recommendations that are likely to keep a large database healthy/speedy?

I'm designing a database that is going to receive around thirty million data points a year, growing to somewhere between three hundred million and a billion in a few years' time when all of the sensors come online. The structure of the table containing all of the data is:

+--------------+----------+------+-----+---------+----------------+
| Field        | Type     | Null | Key | Default | Extra          |
+--------------+----------+------+-----+---------+----------------+
| id           | int(11)  | NO   | PRI | NULL    | auto_increment |
| feed_id      | int(11)  | NO   | MUL | NULL    |                |
| date         | datetime | NO   |     | NULL    |                |
| value        | double   | NO   |     | NULL    |                |
| creator_id   | int(11)  | NO   | MUL | NULL    |                |
| date_created | datetime | NO   |     | NULL    |                |
+--------------+----------+------+-----+---------+----------------+
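For reference, here is the equivalent CREATE TABLE, matching the DESCRIBE output above (the table name datapoints and the storage engine are just placeholders, not the real names):

CREATE TABLE datapoints (                     -- hypothetical table name
    id           INT(11)  NOT NULL AUTO_INCREMENT,
    feed_id      INT(11)  NOT NULL,
    date         DATETIME NOT NULL,
    value        DOUBLE   NOT NULL,
    creator_id   INT(11)  NOT NULL,
    date_created DATETIME NOT NULL,
    PRIMARY KEY (id),
    KEY idx_feed_id (feed_id),                -- the MUL index on feed_id
    KEY idx_creator_id (creator_id)           -- the MUL index on creator_id
) ENGINE=InnoDB;                              -- assuming InnoDB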

The data is going to be retrieved by date and feed_id (I've made both of those indexes), and data more than three or so years old will likely be fairly useless, but probably won't be deleted.
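Since queries filter on both feed_id and date, I'm also considering a composite index covering both columns rather than two separate single-column indexes; something along these lines (table name and values are placeholders):

ALTER TABLE datapoints
    ADD INDEX idx_feed_date (feed_id, date);  -- feed first, then date, so each feed's rows sit together in the index

-- Example lookup: one feed, one month of readings
SELECT date, value
FROM datapoints
WHERE feed_id = 42
  AND date >= '2012-01-01'
  AND date <  '2012-02-01';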

I've heard partitioning could be useful; does it work with multiple indexes, and how useful is it? I've seen questions elsewhere asking about ~10,000,000 rows, and MySQL seems fine for that amount; should I look elsewhere for a dataset this large?
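From what I've read so far, MySQL partitioning does work with ordinary (non-unique) secondary indexes, which simply become per-partition indexes, but the partitioning column has to be part of every unique key, so the primary key would need to become (id, date). Something like this is what I'd try (partition names and years are placeholders):

-- The partitioning column (date) must appear in every unique key,
-- so extend the primary key first.
ALTER TABLE datapoints
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (id, date);

-- Yearly partitions: old years could later be dropped cheaply with
-- ALTER TABLE ... DROP PARTITION if the data is ever purged.
ALTER TABLE datapoints
    PARTITION BY RANGE (YEAR(date)) (
        PARTITION p2011 VALUES LESS THAN (2012),
        PARTITION p2012 VALUES LESS THAN (2013),
        PARTITION p2013 VALUES LESS THAN (2014),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );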

Thanks,
joehms22

Store the dates in a separate table as Unix timestamps and avoid big, expensive joins against that table. In every query you run, use specific WHERE clauses and LIMITs so results come back faster.
Another approach for this kind of workload would be SAP HANA :)
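To make that concrete, something like this (table name and values are placeholders):

SELECT date, value
FROM datapoints
WHERE feed_id = 42
  AND date >= '2013-06-01'
ORDER BY date
LIMIT 1000;    -- cap the result set instead of pulling every matching row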
