Don’t Forget Application Caching to Boost SQL Server Performance

Sep 20, 2012

When it comes to SQL Server performance, many organizations look to their DBAs to help fine-tune hardware configurations, optimize indexes, or weed out poorly written code and queries. Consequently, many performance tuning topics and suggestions commonly center upon tasks and changes that DBAs can make to optimize performance.

But one very effective way to boost overall system responsiveness and SQL Server performance is simply to use SQL Server less frequently, by intelligently leveraging caching capabilities within the application tier. The catch, of course, is that this approach requires planning and architectural consideration up front, meaning that caching typically can’t be added ‘after the fact’ the way many of the ‘transparent’ fixes that DBAs make (hardware changes, index tuning, and configuration tweaks) can be.

However, any time that an application is able to re-use cached data instead of re-querying the underlying SQL Server, two primary benefits come into play:

Application Responsiveness Increases. Even if a query is perfectly tuned and all of the right indexes are in place, executing it still requires network transport and some degree of work on SQL Server’s part. Bypassing that round trip entirely means that applications ‘feel’ much more responsive to end users.

Concurrent Workloads Decrease. If SQL Server doesn’t have to keep furnishing data for the same queries over and over again, then it has more ‘horsepower’ available for the requests that do come in. The net result is that the less frequently you query your server, the more responsive it can be when you do query or update it.

Of course, despite the benefits of caching, there are typically two big obstacles that many organizations face when it comes to implementing a successful caching tier within their solutions and applications.

First, and foremost, is the problem of which technologies to use. That question, in turn, is heavily linked to the fact that caching has to be an architectural consideration: caching capabilities typically can’t be added ‘after the fact’ or merely ‘bolted on’ to an existing solution to boost performance.

Happily, however, ASP.NET provides some powerful caching capabilities in the form of output caching (which caches the output of a processed page, or part of a page) along with application data caching, which allows developers to cache expensive or commonly used data to improve overall system performance. Managing caching details is typically a non-trivial endeavor for developers but, when done properly, can yield tremendous results in terms of overall system responsiveness and throughput. (StackOverflow.com’s use of caching functionality at the application level is a great example of how caching can decrease load and improve performance.)
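To make the output-caching side of that concrete, here’s a minimal sketch using an ASP.NET MVC controller. The [OutputCache] attribute is part of ASP.NET MVC itself, but ProductsController and LoadProductsFromSqlServer are hypothetical names standing in for your own code:

```csharp
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Output caching: ASP.NET stores the rendered result of this action for
    // 60 seconds; during that window, repeat requests are served from the
    // cache and never touch the controller, the view, or SQL Server.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View(LoadProductsFromSqlServer());
    }

    // Hypothetical stand-in for a real SQL Server query.
    private string[] LoadProductsFromSqlServer()
    {
        return new[] { "Widget", "Gadget" };
    }
}
```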

Of course, not all applications that use SQL Server are ASP.NET apps. So, for other .NET applications, Microsoft has made a set of powerful caching APIs available to non-web apps as part of .NET 4.0. This functionality, found in the System.Runtime.Caching namespace, offers very similar capabilities to ASP.NET’s application data caching, but without the overhead and complexity of an HTTP context. Likewise, a myriad of well-supported third-party caching options exist as well, including memcached, NCache, and so on. Best of all, many of these third-party options can be used in non-.NET environments too.
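As a rough sketch of what that looks like in practice, the snippet below uses MemoryCache.Default from System.Runtime.Caching in a plain console app; the cache key and cached values are just placeholders:

```csharp
using System;
using System.Runtime.Caching;

class CacheDemo
{
    static void Main()
    {
        // MemoryCache.Default is the shared in-process cache provided by
        // System.Runtime.Caching -- no HTTP context or web app required.
        ObjectCache cache = MemoryCache.Default;

        // Store a value with a 5-minute absolute expiration.
        cache.Set("server-list", new[] { "SQL01", "SQL02" },
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) });

        // Retrieve it; the indexer returns null once the entry expires
        // or has been evicted under memory pressure.
        var servers = cache["server-list"] as string[];
        Console.WriteLine(servers == null ? "cache miss" : string.Join(", ", servers));
    }
}
```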

When developing or architecting caching-capable solutions, developers need to keep two core things in mind:

a) Cached data should always be considered volatile. If data is available from the cache, that’s a ‘bonus’ or ‘win’ for the application, but the application should never assume or require that data be present in the cache. In other words: on a cache miss, the application should seamlessly retrieve the data from the source (and then put it into the cache, if warranted, for future use). The cache-aside sketch after this list illustrates exactly this fallback pattern.

b) Cache invalidation should ensure that once an object (or data) has been ‘ejected’ from the cache, a stale version of it is no longer served.
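To make the first point concrete, here’s a minimal ‘cache-aside’ sketch built on System.Runtime.Caching (GetOrLoad and the names in the usage comment are my own, not a standard API): a cache hit is a bonus, and a miss always falls through to the authoritative source:

```csharp
using System;
using System.Runtime.Caching;

static class CacheAside
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    // A cache hit is a 'bonus'; a miss transparently falls back to the
    // source (e.g., a SQL Server query) and repopulates the cache.
    public static T GetOrLoad<T>(string key, Func<T> loadFromSource, TimeSpan ttl)
        where T : class
    {
        T value = Cache[key] as T;
        if (value == null)
        {
            value = loadFromSource();
            if (value != null)
            {
                Cache.Set(key, value,
                    new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.Add(ttl) });
            }
        }
        return value;
    }
}

// Usage, where LoadCountriesFromSqlServer is a hypothetical data-access call:
//   var countries = CacheAside.GetOrLoad("countries",
//       LoadCountriesFromSqlServer, TimeSpan.FromMinutes(10));
```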

Managing that second point can require complex cache-invalidation strategies that attempt to remove cached items when newer data is detected. (One of my favorite jokes goes something like: “There are only 2 hard things about programming: naming things, cache invalidation, and off-by-one errors.”)

Since it takes a lot of development effort to code solutions that ‘detect’ when newer data becomes available for cached objects, many caching solutions simply keep data in the cache on an absolute or sliding basis, meaning they adopt the approach of keeping data around for a few minutes (or hours) even if newer data becomes available in the interim.
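In code, the difference between the two approaches looks something like the following (again using System.Runtime.Caching; keys and values are placeholders):

```csharp
using System;
using System.Runtime.Caching;

class ExpirationPolicies
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;

        // Absolute expiration: the entry is ejected 10 minutes after it was
        // added, no matter how often it's read in the meantime.
        cache.Set("report-totals", 42,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) });

        // Sliding expiration: the 10-minute countdown resets on every read,
        // so frequently requested data stays cached (and potentially stale).
        cache.Set("product-list", new[] { "Widget", "Gadget" },
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
    }
}
```

Note that a single CacheItemPolicy can use absolute or sliding expiration, but not both at once.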

This approach of purposefully choosing to use stale data, in turn, leads to the second obstacle that many organizations encounter when looking at caching architectures or options: many organizations can’t really afford to let ‘stale’ data run rampant in their systems.

And while data in many applications is, indeed, too volatile to cache for 2 hours or even 15 minutes, it’s safe to say that almost all data (roughly 99% of it) can be cached for, say, 2 minutes, or at the very least for 2 seconds. In fact, the concept of caching data for mere seconds is something that many ASP.NET developers pay close attention to, and it’s known as micro-caching. With micro-caching techniques, applications that hold on to even highly volatile data for mere seconds can see tremendous performance improvements, especially on heavily trafficked systems (where performance is an even bigger consideration).
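A micro-caching sketch is nearly identical to the earlier output-caching example, just with a far shorter window; LeaderboardController and LoadScoresFromSqlServer are hypothetical names:

```csharp
using System.Web.Mvc;

public class LeaderboardController : Controller
{
    // Micro-caching: a 2-second window collapses hundreds of identical
    // requests per second into a single SQL Server query, while users
    // still see data that is at most 2 seconds old.
    [OutputCache(Duration = 2, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View(LoadScoresFromSqlServer()); // highly volatile data
    }

    // Hypothetical stand-in for a real, frequently changing query.
    private string[] LoadScoresFromSqlServer()
    {
        return new[] { "player1: 9000" };
    }
}
```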

Likewise, even in rare cases where data is simply so volatile that it can’t be cached at all (not even for a few seconds), there are commonly still opportunities to cache other, supporting, data for a performance benefit. For example, if you have a very OLTP-heavy application that primarily logs data (a write-intensive workload that isn’t prone to caching), you can commonly still cache ‘lookups’ or enums/domains of related data (such as product lists, countries, office locations, store names, and so on) for minutes if not hours. If a typical interaction required 3-4 queries for various ‘lookups’ plus a single logging operation, you’d typically be able to cut the total number of queries by at least half, which in turn frees up more system resources to handle the primary business requirement of logging high volumes of operations (resulting in a more responsive and capable application).
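Sketched out (with hypothetical names throughout), that division of labor might look something like this: the write always goes straight to SQL Server, while the supporting lookups are served from cache for an hour at a time:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

class ActivityLogger
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public void LogActivity(string activity, string officeName)
    {
        // Lookup data is served from cache on the vast majority of requests...
        IList<string> offices = GetCachedLookup("offices", LoadOfficesFromSqlServer);
        if (!offices.Contains(officeName))
            throw new ArgumentException("Unknown office: " + officeName);

        // ...while the write itself always goes straight to SQL Server.
        InsertLogRow(activity, officeName);
    }

    private IList<string> GetCachedLookup(string key, Func<IList<string>> load)
    {
        var cached = Cache[key] as IList<string>;
        if (cached == null)
        {
            cached = load();
            // Lookups change rarely, so an hour of staleness is acceptable.
            Cache.Set(key, cached,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
        }
        return cached;
    }

    // Hypothetical data-access stand-ins.
    private IList<string> LoadOfficesFromSqlServer() { return new List<string> { "HQ", "West" }; }
    private void InsertLogRow(string activity, string office) { /* real INSERT goes here */ }
}
```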

In the end, caching is not a panacea; it’s just another tool that comes with its own pros and cons. But when used correctly, it can have a dramatic impact on system performance and responsiveness. Consequently, any new solution or application you undertake should be thoroughly evaluated and architected with caching capabilities where possible. Likewise, caching provides enough benefits that it should also be considered for existing applications, where the cost of implementing it can be offset by relief from ongoing performance problems that can’t easily be addressed by the more ‘transparent’ approaches that DBAs commonly undertake to tame unruly applications.