I come across this problem every so often and end up doing workarounds when I would prefer to keep it in the database. I have a query where the underlying data updates frequently, so the result won't stay in the cache, but I don't need the result to be perfectly up to date. I was trying to avoid the extra step of storing the data in a temp table and having to update it on a timer.

Does anyone know of a way to get MariaDB or MySQL to keep a result set for a period of an hour or so before actually running the query again? I have an example query below which takes between 4.8 and 6 seconds to run; the wait is too long and it is needless work for the server. It could run the query once to get the 4 numbers, return those same 4 numbers for all subsequent requests, and refresh them every 20 minutes or every hour.

Is there a built-in function for this, or is the only solution to make your own table? I could end up with many of these queries and I don't want to end up with hundreds of tables.

SELECT
    SUM(CASE WHEN p.id IS NULL AND f.loc = 1 THEN 1 ELSE 0 END)/250 to_print
    , SUM(CASE WHEN p.id IS NULL AND f.loc NOT IN(1,2,8) THEN 1 ELSE 0 END)/250 to_print_out
    , SUM(CASE WHEN p.id IS NOT NULL AND p.print_date BETWEEN '2020-01-09 14:00:00' AND '2020-01-16 14:00:00' THEN 1 ELSE 0 END)/250 printed_tw
    , SUM(CASE WHEN p.id IS NOT NULL THEN 1 ELSE 0 END)/250 printed
    FROM typist t
    LEFT JOIN printing p ON t.typed_id = p.print_id AND p.type = 4
    LEFT JOIN files f ON t.file_id = f.file_id
    WHERE
    t.type = 1
    AND f.file_id IS NOT NULL
    AND NOT EXISTS(SELECT DISTINCT l.id FROM letters l WHERE l.deleted = 1 AND t.letter_id = l.id)

This is the EXPLAIN for the query

"1" "PRIMARY"   "t" "ref"   "file_id,letter_type,print_stat"    "print_stat"    "2" "const" "134243"    "Using where; Using index"
"1" "PRIMARY"   "f" "eq_ref"    "PRIMARY"   "PRIMARY"   "3" "sys.t.file_id" "1" ""
"1" "PRIMARY"   "p" "ref"   "id,type"   "particle_id"   "8" "sys.t.typed_id"    "1" "Using where"
"2" "MATERIALIZED"  "l" "ref"   "PRIMARY,deleted,deleted_file_id"   "deleted"   "1" "const" "4741"  "Using index"

I have already tried numerous variations to optimise this, and the above is the best I have got so far. t has around 200,000 rows, l around 300,000, f about 130,000, and p around 300,000 but will bloat to well over 1,000,000 in time.

All 11 Replies

A result cache is an area of memory, either in the Shared Global Area (SGA) or client application memory, that stores the results of a database query or query block for reuse. The cached rows are shared across SQL statements and sessions unless they become stale.

RuMedia, I think what the OP is saying is that he’s aware of that, but he doesn’t mind if they go stale.

What I do is I use an in-memory cache such as Memcached (non-persistent) or Redis (persistent), where I store the results of the query for up to X minutes. They are both key-value pair caches. I then query the cache first, and on a miss I query the database again and repopulate it.

Thank you Dani, that is exactly the type of thing I was looking for.

I got Redis up & running with the predis PHP package so just need to get my head around the syntax.

It's rather simple ... just get and set.

Basically ...

$result = $cache->get('key');

if ($result === false) {
    // Cache miss: run the slow query (pseudocode) and store the result
    $result = mysql_query($sql);
    $cache->set('key', $result);
}

echo 'The result is: ' . $result;

(One thing to watch with predis: its get() returns null on a miss rather than false.)

Yes it is great! I am very happy you pointed it out to me as I couldn't find a solution.

I have it working now and it cuts page load times from anywhere between 500 and 6,000 ms down to 60-100 ms when the data is already in Redis.

I even realised the data is server-wide, so it can be used anywhere on the server until it expires, not just on the one page.

The working code I have now is like this:

require "../includes/predis-1.1/autoload.php";

Predis\Autoloader::register();
$redis = new Predis\Client();

$D = [];
$D['data'] = [];

if(!$redis->exists('TAP')){
    //RUN QUERY, GET DATA

    //SAVE DATA TO REDIS
    $redis->hmset('TAP', [
        'total' => $D['tap']['total'],
        'scheduled' => $D['tap']['scheduled'],
        'unscheduled' => $D['tap']['unscheduled']
    ]);
    //ADD EXPIRY
    $redis->expire('TAP',300);
}else{
    //NOT EXPIRED USE REDIS DATA
    $D['tap']['total'] = $redis->hget('TAP', 'total');
    $D['tap']['scheduled'] = $redis->hget('TAP', 'scheduled');
    $D['tap']['unscheduled'] = $redis->hget('TAP', 'unscheduled');
}

Out of pure curiosity, what made you choose Redis over Memcached?

I looked up both and came across this article:

Why Redis beats memcached for caching

I thought the article could be biased, but from what it says Redis is more like what I want: I use a lot of AJAX & JSON-type objects, and Memcached has to process that data because it stores everything as strings. It also mentioned that Redis is developing more new features.

Plus I found predis straight away; if I had got stuck linking it in with PHP I would have tried the other one.
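The difference the article describes can be sketched roughly like this (assuming a local Redis server used via predis and a local Memcached instance; the 'stats' key and its fields are made up for illustration):

```php
<?php
// Sketch only: needs a running Redis server, a running Memcached
// server, the predis package, and the Memcached PHP extension.
require "../includes/predis-1.1/autoload.php";
Predis\Autoloader::register();

$redis = new Predis\Client();

// Redis stores the structure natively as a hash, so a single field
// can be read or updated without touching the others.
$redis->hmset('stats', ['total' => 120, 'scheduled' => 80]);
$redis->hset('stats', 'total', 121);
echo $redis->hget('stats', 'total'); // only this field travels over the wire

// Memcached stores values as opaque blobs, so the whole structure
// has to be fetched, decoded, changed, and written back.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('stats', json_encode(['total' => 120, 'scheduled' => 80]), 300);
$stats = json_decode($mc->get('stats'), true);
$stats['total'] = 121;
$mc->set('stats', json_encode($stats), 300);
```

For AJAX/JSON-heavy pages, being able to update one field of a hash (or push to a Redis list) without re-serialising the whole object is the practical difference.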

Thanks for linking to that article.

I've used both for years, and both are incorporated into DaniWeb for different purposes. The big distinction for me is that Redis is persistent and can handle non-scalar objects.

For what situations would you use Memcached over Redis and why do you not use Redis for that?

Memcached is simpler, faster, and has less overhead. I use it to cache large portions of pre-generated HTML to reduce the amount of PHP and MySQL that needs to execute to build a page. I use Memcached when there’s absolutely nothing to lose if it ejects a key too early and I just need it to cache long strings of content to save a few milliseconds from page loads.
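As I understand it, that fragment-caching pattern looks roughly like this (the key name, the 600-second TTL, and the build_sidebar_html() helper are all made up for illustration):

```php
<?php
// Hedged sketch of HTML fragment caching with the Memcached extension;
// assumes a Memcached server on the default port.
// build_sidebar_html() is a hypothetical placeholder for the real
// PHP/MySQL work that generates the fragment.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$html = $mc->get('sidebar-html');
if ($html === false) {
    // Cache miss (or the key was ejected early): rebuild the fragment
    $html = build_sidebar_html();
    // Short TTL; nothing is lost if the key disappears before then
    $mc->set('sidebar-html', $html, 600);
}
echo $html;
```

Because the cached value is just a long string of HTML, an early ejection only costs one rebuild on the next request, which matches the "nothing to lose" criterion above.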

I use Redis when I need to take advantage of its ability to store volatile data objects in a fast and reliable way, such as to store user session data, API tokens, etc. In such cases, I use Redis as the primary database because MySQL is not performant for us when working with active sessions, and file based sessions (PHP native sessions) are slower over the network when being accessed by multiple web servers than our Redis server is.
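For session storage specifically, one common way to point PHP at Redis is the session handler from the phpredis extension (note this is a different extension from predis; the address shown is an assumption), set in php.ini:

```ini
; Store PHP sessions in Redis instead of local files, so every
; web server behind the load balancer sees the same session data.
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```

With that in place, session_start() and $_SESSION work as usual, but the reads and writes go to Redis over the network instead of to per-server session files.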

Thank you for the data.

I think I want to use Redis in the same way you use it too so I am happy I got the right one.

I moved on to using JavaScript, so I write all of my HTML directly as JavaScript code which is loaded into client-side memory.