
Hi there,

My knowledge of query optimization ends here. I am stuck!

My table, with the following structure, has more than 1 million records:
CREATE TABLE `dreambank`.`db_ip_data_buffer` (
`ip_from` bigint(20) unsigned NOT NULL default '0',
`ip_to` bigint(20) unsigned NOT NULL default '0',
`country_code` varchar(2) default NULL,
`country_name` varchar(64) default NULL,
`region_name` varchar(128) default NULL,
`city_name` varchar(128) default NULL,
`latitude` double default NULL,
`longitude` double default NULL,
`is_cash` tinyint(3) unsigned default NULL,
PRIMARY KEY (`ip_to`,`ip_from`),
KEY `Index_2` (`ip_from`),
KEY `Index_3` (`ip_to`),
KEY `index` (`is_cash`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 PACK_KEYS=1;

The query I am running is:
SELECT ip_from FROM db_ip_data_buffer WHERE ip_from <= 1089969676 and ip_to >= 1089969676;

Result:
1 row fetched in 0.0211s (0.814 s)

On a site with 45,000 users this would put quite a bit of stress on MySQL, and I think it's too slow for an index page. Is there a way to index this for better performance?

I really appreciate any help I can get!

Thanks,
Tim

4 contributors · 7 replies · 9 views · 10-year discussion span · last post by gypsywitchcraft

If you have your indexes set and it's still too slow, you may have to look at the way you're extracting data. Perhaps there is a way to trim your resultset and still get what you want.

In your case I'd try adding a combined (composite) index on ip_from and ip_to.
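A composite index along those lines (the index name idx_ip_range below is just illustrative) could be added like this:

```sql
-- Hypothetical combined index on both range columns. Note that a
-- B-tree index can only range-scan on its leading column, so with
-- two open-ended inequalities MySQL still ends up examining every
-- row where ip_from <= the target value.
ALTER TABLE db_ip_data_buffer
  ADD INDEX idx_ip_range (ip_from, ip_to);
```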


Hi there,
Thanks for the response. Setting a combined index didn't help much, unfortunately. Can you please give an example of trimming the result set?

Are you maybe suggesting that I normalize the table to move the excess fields and long varchars out of the search table? I tried this, and it only reduced the query time from 0.814 s to 0.550 s, which is an improvement but still not efficient enough given the possible load on the server.

Another strategy I tried was adding an is_cashed BOOLEAN flag (i.e. "is cached"). Queries against the already-cached set seem to be quick; however, if the record is not in the cached data set, I must fall back to the same slow query.

Any ideas?

Basically, all I'm trying to do is identify the location of visitors, but a homepage delay this long is not a good starting point.

Thanks very much,
Tim


Is there anything you can change about the <= and >= operators? Can you specify a range for both ip_from and ip_to? It seems the query has to read a lot of rows it doesn't need.

Can't you do:

SELECT ip_from FROM db_ip_data_buffer 
WHERE ip_from >= xxx AND ip_from <= 1089969676 AND ip_to >= 1089969676 AND ip_to <= yyy;

A second option could perhaps be a stored procedure that returns the result set to you.
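For example, if the application can bound the search to a window around the target address, both index columns become usable. The window limits below are purely illustrative:

```sql
-- Hypothetical bounded version of the lookup; 1089000000 and
-- 1090000000 are made-up window limits around the target address.
-- With both columns constrained on each side, the optimizer can
-- use a much narrower index range scan.
SELECT ip_from
FROM db_ip_data_buffer
WHERE ip_from >= 1089000000 AND ip_from <= 1089969676
  AND ip_to   >= 1089969676 AND ip_to   <= 1090000000;
```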



Why not use ip_from = and ip_to = ?

I have not dealt with using a composite primary key together with >= or <=; I have always tested both columns for equality.

While that may not be the usage scenario you're after, does it speed up the query time?
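Since the rows in an IP-to-location table normally cover non-overlapping ranges, a common trick (a sketch of the general technique, not necessarily the exact fix used in this thread) is to anchor the lookup on a single indexed column and fetch just the nearest row:

```sql
-- Find the first range whose upper bound reaches the target address.
-- The PRIMARY KEY (ip_to, ip_from) lets this resolve with a single
-- index seek instead of a large range scan; the caller should still
-- verify that ip_from <= 1089969676 on the returned row, since a
-- gap between ranges would otherwise return the wrong match.
SELECT ip_from, ip_to
FROM db_ip_data_buffer
WHERE ip_to >= 1089969676
ORDER BY ip_to ASC
LIMIT 1;
```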


Stylish, your solution works very well. I was able to cut the response time by a factor of about ten.

Thanks very much!

Tim


Sorry for the bump, but I feel it is necessary. This thread came up high in Google for a relevant phrase, so to help anyone else who comes across it: consider using BETWEEN, which is equivalent to the paired >= and <= comparisons and, to many, reads more clearly.
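As a sketch, the original lookup rewritten with BETWEEN. The comparison is simply inverted so the constant sits between the two columns:

```sql
-- Equivalent BETWEEN form of the original lookup: the target
-- address must fall between the row's lower and upper bounds.
-- MySQL treats this the same as ip_from <= X AND ip_to >= X.
SELECT ip_from
FROM db_ip_data_buffer
WHERE 1089969676 BETWEEN ip_from AND ip_to;
```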
