
Hi guys,

I am trying to find a solid optimization technique for my problem.

I have two tables that grow quite large:

CREATE TABLE  `mydb`.`xm_artist` (
  `artist_id` int(10) unsigned NOT NULL auto_increment,
  `artist_name` varchar(255) NOT NULL,
  PRIMARY KEY  (`artist_id`),
  UNIQUE KEY `index_artist_name` USING BTREE (`artist_name`),
  FULLTEXT KEY `fulltext_artist_name` (`artist_name`)
) ENGINE=MyISAM;
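For reference, the FULLTEXT index on `artist_name` supports searches like the one below (a sketch with a made-up search term; note that MyISAM full-text search ignores words shorter than `ft_min_word_len`, which defaults to 4):

```sql
-- Full-text search using the fulltext_artist_name index.
SELECT artist_id, artist_name,
       MATCH (artist_name) AGAINST ('radiohead') AS score
FROM xm_artist
WHERE MATCH (artist_name) AGAINST ('radiohead')
ORDER BY score DESC;
```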

CREATE TABLE  `mydb`.`xm_artist_tags` (
  `artist_id` int(10) unsigned NOT NULL,
  `similar_artist_id` int(10) unsigned NOT NULL,
  `relevancy` smallint(5) unsigned NOT NULL,
  PRIMARY KEY  USING BTREE (`artist_id`,`similar_artist_id`),
  KEY `index_similar_artist_id` (`similar_artist_id`)
) ENGINE=MyISAM;
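For context, a typical "similar artists" lookup would join the two tables like this (a sketch, assuming `similar_artist_id` references `xm_artist.artist_id` and `42` is a placeholder artist id):

```sql
-- Find artists similar to a given artist, most relevant first.
-- The composite PRIMARY KEY (artist_id, similar_artist_id) covers the
-- WHERE clause, and the join uses xm_artist's PRIMARY KEY on artist_id.
SELECT a.artist_name, t.relevancy
FROM xm_artist_tags t
JOIN xm_artist a ON a.artist_id = t.similar_artist_id
WHERE t.artist_id = 42
ORDER BY t.relevancy DESC;
```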

I have about 5,000 records in artists and about 13,400 in tags.

I expect they will double or quadruple over time.
Does anyone have any idea if there is a way to optimize the model/indexes further?
What about the basic join queries, can they be optimized further?
So far, query speed is acceptable, but it would be nice to have them run faster.

Does anyone have any ideas?

Thanks,
Tim

Last Post by pritaeas

'Large' is relative, as is 'optimize'. There are lots of MySQL databases handling millions of records/transactions. You can use EXPLAIN to see what your queries are actually doing. Why not wait until you notice a severe increase in query time, then see if anyone here can help.
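Concretely, prefixing a query with EXPLAIN shows which indexes MySQL chooses; for example (the join itself is a sketch based on the table definitions above, with `42` as a placeholder artist id):

```sql
EXPLAIN
SELECT a.artist_name, t.relevancy
FROM xm_artist_tags t
JOIN xm_artist a ON a.artist_id = t.similar_artist_id
WHERE t.artist_id = 42;
-- Check the `key` column (index actually used) and `rows` column
-- (estimated rows examined); type: ALL means a full table scan,
-- which is worth investigating on a growing table.
```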
