I need to get a set of results from the database server, but at times the same query returns the same result, and I would like to cache that result instead of making the same call to the database every time. Any ideas?


The query cache that rproffitt linked to used to cache MySQL result sets for situations where an identical query is made and there have been no writes to any of the tables involved in the query. However, it has since been deprecated.

I recommend using something like Memcached to cache query results. We do that here at DaniWeb with great success. However, when using caching systems such as Memcached or Redis, it can take some playing around to get a good hit/miss ratio; otherwise, it could end up slowing you down more than speeding you up. In other words, it takes resources to save to the cache, so if you're writing to the cache more than you're reading from it, that's not a good thing. Plus, caches like Memcached only know how to store data for a particular duration of time. You will have to manually clear the cached entries each time a table is updated; it isn't like MySQL's built-in query cache, which automatically invalidated queries for tables that were modified.
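Here's a rough sketch of that cache-aside pattern in Python, assuming the pymemcache client and hypothetical fetch_topics_from_db() / write_topic_to_db() helpers standing in for your real queries. The key point is that you read the cache first, fall back to the database on a miss, and delete the cached entry yourself whenever the underlying table is written to.

```python
import json
from pymemcache.client.base import Client

# Assumes a Memcached instance on localhost:11211.
cache = Client(("localhost", 11211))

CACHE_TTL = 300  # seconds; tune this against your hit/miss ratio


def get_topics(forum_id):
    key = f"topics:{forum_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database call

    rows = fetch_topics_from_db(forum_id)  # cache miss: run the real query (hypothetical helper)
    cache.set(key, json.dumps(rows), expire=CACHE_TTL)
    return rows


def update_topic(forum_id, topic_id, fields):
    write_topic_to_db(topic_id, fields)    # hypothetical write helper
    cache.delete(f"topics:{forum_id}")     # manual invalidation on every write
```

If the delete on write is forgotten anywhere, readers will see stale data until the TTL expires, which is exactly the bookkeeping the old MySQL query cache did for you automatically.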


After reading https://dev.mysql.com/doc/refman/5.7/en/query-cache.html, my view/answer is that I wouldn't bother optimizing in this situation.

Also, I think we both missed that they tagged SQL Server, so they aren't even using MySQL. That being said, I don't know of any caching solutions built into SQL Server, but it's not my forte.

I recall query caching being something from about the 1990s. But hey, there is query caching (apparently) in SQL Server 2008. There's plenty on the web about it, such as https://dba.stackexchange.com/questions/43572/how-to-see-what-is-cached-in-memory-in-sql-server-2008, where with a little code you can see what is in the cache.
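To be clear, what SQL Server caches out of the box is execution plans (and data pages in the buffer pool) rather than result sets, and that linked answer inspects it through the DMVs. A rough sketch of that kind of peek from Python using pyodbc; the connection string is just an assumption for your environment:

```python
import pyodbc

# Connection string is an assumption; adjust driver/server/auth for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)

# sys.dm_exec_cached_plans lists plans in the plan cache; CROSS APPLY
# sys.dm_exec_sql_text() resolves each plan handle back to its SQL text.
sql = """
SELECT TOP 20 cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
"""

for usecounts, objtype, text in conn.cursor().execute(sql):
    print(usecounts, objtype, (text or "")[:80])
```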

My heavy work on SQL was from the late 1990s until about 2010, when we did the end-of-life on that product line, so for now I cruise along on what I recall from those days. We found that the overhead of a repeated query was too low to be worth optimizing away with more code or an add-on product. That is, we learned to let SQL Server do what it did best and put our effort into the product side.

Sometimes it makes sense to cache query results using third-party solutions such as Memcached.

For example, querying the database for related topics is expensive, it's needed on every page load, and the results almost never change. It makes sense for us to cache the result set for each forum thread in something like Memcached.
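As a rough illustration (not our actual code), the cache key can simply embed the thread ID, and because related topics almost never change, a long TTL does most of the work; fetch_related_topics_from_db() is a hypothetical stand-in for the expensive query:

```python
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))


def related_topics(thread_id):
    # One cache entry per forum thread; a long TTL is fine because the
    # underlying result set almost never changes.
    key = f"related_topics:{thread_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    rows = fetch_related_topics_from_db(thread_id)  # hypothetical expensive query
    cache.set(key, json.dumps(rows), expire=24 * 60 * 60)  # 24-hour TTL
    return rows
```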

Our scenario was a product tester, also known as automatic test. The SQL Server was on the network, and there were only a dozen test machines, each spending about 10 minutes per device under test, so the load was never worth optimizing. The IT department would fret about automatic tests overloading their SQL Server, but they could never show any increase in load during production runs. In other words, negligible load.
What on the SQL Server was of interest? The order details. The product was configured to the client's order during this phase of final testing, so it was either have a human type all that in or automate it and fetch it from the order database. You can guess which was cheaper.
