Good afternoon. I was hoping to get some query optimization tips.
I am writing a PHP script to retrieve data from a MySQL database. Table A, which contains most of the fields I need, has just under 11,000 rows. Table B, which contains the remaining 3 fields I need, has close to 45,000 rows. Table A's primary key (ID) appears as a foreign key (ID) in Table B. My application uses only those two tables.
I am using a join that is similar to the following:
$query = "SELECT TableA.empl_fname, TableA.empl_lname, TableA.is_private, "
       . "TableB.empl_uname, TableB.email_set AS empl_details "
       . "FROM TableA "
       . "JOIN TableB ON TableA.ID = TableB.ID "
       . "WHERE TableA.empl_lname LIKE '" . $empl_lname . "%' "
       . "AND TableA.is_private = 'N' AND TableB.email_set = 'yes' "
       . "ORDER BY TableA.empl_lname, TableA.empl_fname";
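For readability, here is the same statement written out as plain SQL, with a `?` placeholder standing in for the user-supplied last-name prefix (the table and column names are exactly the ones from my script):

```sql
SELECT TableA.empl_fname,
       TableA.empl_lname,
       TableA.is_private,
       TableB.empl_uname,
       TableB.email_set AS empl_details
FROM TableA
JOIN TableB ON TableA.ID = TableB.ID
WHERE TableA.empl_lname LIKE ?        -- e.g. 'Smith%'
  AND TableA.is_private = 'N'
  AND TableB.email_set = 'yes'
ORDER BY TableA.empl_lname, TableA.empl_fname;
```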
$empl_lname is supplied by the end user. The join returns the data I need, but it is extremely slow (over 30 seconds in some cases). I also considered removing the join and using the following logic instead:
for each record returned by a SELECT on Table A:
    run a second SELECT to fetch and display the matching Table B record (WHERE TableB.ID = TableA.ID)
That approach was also slow, most likely because each iteration of the loop makes a separate round trip to the database (the classic N+1 query pattern).
Is it possible to rewrite the query to reduce the execution time? I suspect that the per-row lookups into Table B are the bottleneck in both approaches. Thanks in advance.
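In case it is relevant: I have not yet defined any secondary indexes. From what I have read, the statements below are roughly what I would add to support this query, but this is only my guess at the right indexes, not DDL I have actually run:

```sql
-- Lets MySQL locate TableB rows by ID during the join
-- instead of scanning all ~45,000 rows per TableA row.
CREATE INDEX idx_tableb_id ON TableB (ID);

-- Supports the LIKE 'prefix%' filter and the ORDER BY on last name.
CREATE INDEX idx_tablea_lname ON TableA (empl_lname);
```

Would running EXPLAIN on the join confirm whether missing indexes are the cause?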