Dear All,

I have 2 tables, tblAssociate and tblData. The link between them is one-to-many. My search is based on a date/time range: if I find even one row in tblData within the set range, then I want to show the tblAssociate data. So I have built this query, but the problem is that it takes more than 20 seconds at times. Any idea how to improve this query?

SELECT tblLocationFrom.geoFenceName AS locationFrom,
       tblLocationTo.geoFenceName AS locationTo,
       tblAssociate.associateID,
       CAST(DATE_ADD(tblAssociate.dateTimeStartJourney,
                     INTERVAL '+08:00' HOUR_MINUTE) AS CHAR) AS dateTimeStartJourney,
       CAST(DATE_ADD(tblAssociate.dateTimeEndJourney,
                     INTERVAL '+08:00' HOUR_MINUTE) AS CHAR) AS dateTimeEndJourney,
       tblAssociate.totalC,
       tblAssociate.totalD,
       tblAssociate.totalE,
       tblAssociate.totalP,
       tblAssociate.totalS,
       tblAssociate.totalW
FROM tblAssociate
JOIN tblGeoFence AS tblLocationFrom
  ON tblAssociate.locationFromID = tblLocationFrom.geoFenceID
JOIN tblGeoFence AS tblLocationTo
  ON tblAssociate.locationToID = tblLocationTo.geoFenceID
JOIN tblData
  ON tblAssociate.associateID = tblData.associateID
WHERE DATE_ADD(tblData.dateTimer, INTERVAL '+08:00' HOUR_MINUTE)
      BETWEEN '2011-10-19 00:00:00' AND '2011-10-20 23:59:59'
Rahul47:

If you are willing to share the table structures of both, I might be able to solve it.

Dear Rahul, I guess it has got to do with indexing. I have tried it and it helped, but the problem is that at times the indexing needed is different for different queries. How do you solve that?

Rahul47:

Ok, I just wanted to add my 2 cents :-)

You should index any fields involved in joins. You can also have multiple indexes based on the fields used in conditions (WHERE clauses) for different queries, but you should test the performance, as not all indexes will prove sufficiently beneficial to warrant the overhead.
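
For instance, a minimal sketch against the tables in your query (the index names are just illustrative; test whether each one actually pays off, e.g. with EXPLAIN):

-- Index the columns used in the JOIN and WHERE clauses of the posted query.
CREATE INDEX idx_tblData_associateID ON tblData (associateID);
CREATE INDEX idx_tblData_dateTimer ON tblData (dateTimer);
CREATE INDEX idx_tblAssociate_locationFromID ON tblAssociate (locationFromID);
CREATE INDEX idx_tblAssociate_locationToID ON tblAssociate (locationToID);
-- Note: the posted WHERE clause wraps tblData.dateTimer in DATE_ADD, which can
-- keep MySQL from using the dateTimer index for that filter.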

Also consider the type of join needed for the query, as this will affect both performance and the resultant dataset.
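
As an illustration only: if the goal is just to check that at least one tblData row falls in the range (so duplicate rows from the one-to-many join are unwanted), a semi-join via EXISTS is one option. This is a sketch based on your posted query, not a drop-in replacement:

SELECT tblLocationFrom.geoFenceName AS locationFrom,
       tblLocationTo.geoFenceName AS locationTo,
       tblAssociate.associateID  -- the other tblAssociate columns from the original SELECT list carry over unchanged
FROM tblAssociate
JOIN tblGeoFence AS tblLocationFrom
  ON tblAssociate.locationFromID = tblLocationFrom.geoFenceID
JOIN tblGeoFence AS tblLocationTo
  ON tblAssociate.locationToID = tblLocationTo.geoFenceID
WHERE EXISTS (
        SELECT 1  -- keep associates with at least one tblData row in the range
        FROM tblData
        WHERE tblData.associateID = tblAssociate.associateID
          AND DATE_ADD(tblData.dateTimer, INTERVAL '+08:00' HOUR_MINUTE)
              BETWEEN '2011-10-19 00:00:00' AND '2011-10-20 23:59:59'
      )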

Review the datatypes used for your columns: for instance, is an int(11) (the default integer) necessary, or will a smallint, an unsigned type, or another narrower integer be adequate? Likewise, consider char instead of varchar where possible.
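
A hypothetical example (the column name is taken from your query; whether a narrower type actually fits depends on your data):

-- If totalC never exceeds the SMALLINT UNSIGNED range (0-65535), a narrower
-- column saves space in every row and in every index that includes it.
ALTER TABLE tblAssociate MODIFY totalC SMALLINT UNSIGNED;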

I recommend removing the CHAR typecast, as this is a costly operation to perform during the query and can usually be done more cheaply (often implicitly) after the query executes or in external code.
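
For example, the two journey columns could be returned as native datetime values and formatted in your application instead; a sketch based on your posted query:

-- Same DATE_ADD time-zone shift as the original query, just without CAST(... AS CHAR);
-- convert to a string in the calling code if a string is really needed.
SELECT tblAssociate.associateID,
       DATE_ADD(tblAssociate.dateTimeStartJourney, INTERVAL '+08:00' HOUR_MINUTE) AS dateTimeStartJourney,
       DATE_ADD(tblAssociate.dateTimeEndJourney, INTERVAL '+08:00' HOUR_MINUTE) AS dateTimeEndJourney
FROM tblAssociate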

This is the only advice I can think of without further information such as table structures. Hope it helps.

1stDAN:

Though this problem is over a year old, I was wondering why a kind of recursive join is necessary (tblGeoFence is joined twice); is there a hierarchical data structure? Also, aren't there three tables involved here, rather than only the two mentioned for the one-to-many relationship, tblAssociate and tblData?

Not sure whether I got you right: indexes can easily be created for existing tables, and once an index has been created it is maintained automatically by the database system; no manual, cyclic reindexing is necessary. (Primary-key columns are always automatically indexed if a table has been properly created. As Hearth stated, foreign-key columns, which appear in ON clauses among other places, should also be indexed to enhance performance.)
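
To check which indexes already exist (table names taken from the query in your first post), MySQL's SHOW INDEX can be used; prefixing the slow query with EXPLAIN then shows whether they are actually picked up:

SHOW INDEX FROM tblAssociate;
SHOW INDEX FROM tblData;
SHOW INDEX FROM tblGeoFence;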

Can you post the complete CREATE statements of all related tables? The table sizes would also be a good detail.
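
For reference, one way to collect those details in MySQL:

SHOW CREATE TABLE tblAssociate;
SHOW CREATE TABLE tblData;
SHOW CREATE TABLE tblGeoFence;
SELECT COUNT(*) FROM tblAssociate;
SELECT COUNT(*) FROM tblData;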
