
Data Caching

A smart caching strategy is often the difference between a site that comes up quickly and one that leaves users staring at the little spinning globe for the traditional 25 seconds before they give up and go to another site. ATL Server offers data-caching services to help you get below that magic time threshold.

Caching Raw Memory

The most basic caching service is the BLOB cache. No, this isn't a gelatinous alien that will try to digest your hometown. This cache handles raw chunks of memory. Getting hold of the cache works just as it does for the session service; that is, you use the IServiceProvider interface:

HTTP_CODE ShowPostsHandler::ValidateAndExchange( ) {
  ...
  // Request the BLOB cache service from the service provider
  HRESULT hr = m_spServiceProvider->QueryService(
    __uuidof(IMemoryCache), &m_spMemoryCache );
  if( FAILED( hr ) ) return HTTP_FAIL;
  ...
}

When you have an IMemoryCache interface pointer, you can stuff items into and pull them out of the cache. The cache items are stored as name/value pairs, just like session-state items. Instead of storing VARIANTs, however, the BLOB cache stores void pointers.

Retrieving an item requires two steps. First, you must get a cache item handle:

HRESULT ShowPostsHandler::GetWordOfDay(CStringA &result) {
  HRESULT hr;
  HCACHEITEM hItem;
  hr = m_spMemoryCache->LookupEntry( "WordOfDay", &hItem );
  if( SUCCEEDED( hr ) ) {
    // Found it, pull out the entry
    ...
  }
  else if( hr == E_FAIL ) {
    // Not in cache
    ...
  }
}

The LookupEntry method returns S_OK if it found the item, and E_FAIL if it didn't.[7]

[7] The ATL Server group made a mistake with this return code. A failed HRESULT is an exception: It means that something went wrong. A failed cache lookup is not an exception. This method should have returned S_FALSE on a cache miss instead.

Once we have the item handle, we can retrieve the data from the cache. This is done via the GetData method, which returns the void* that was stored in the cache, along with a DWORD giving you the length of the item:

HRESULT ShowPostsHandler::GetWordOfDay(CStringA &result) {
  ...
  // Found it, pull out the entry
  void *pData;
  DWORD dataLength;

  hr = m_spMemoryCache->GetData( hItem, &pData, &dataLength );
  if( SUCCEEDED( hr ) ) {
    result = CStringA( static_cast< char * >( pData ),
      dataLength );
  }
  m_spMemoryCache->ReleaseEntry( hItem );
  ...
}

The pointer that is returned from the GetData call actually points to the data that's stored inside the cache's data structure; it's not a copy. Because we don't want the cache to delete the item out from under us, we copy it into our result variable.

The final call to ReleaseEntry is essential for proper cache management. The BLOB cache actually does reference counting on the items stored in the cache. Every time you call LookupEntry, the refcount for the entry you found gets incremented. ReleaseEntry decrements the refcount. Entries with a refcount greater than zero are guaranteed to remain in the cache. Because the whole point of using the cache is to pitch infrequently used data, properly releasing entries when you are finished with them is just as important as properly managing COM reference counts. Unfortunately, there's no CCacheItemPtr smart pointer template to help.[8]

[8] Check out the BlobCache sample in MSDN. It includes some helper classes to manage the cache.
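
In the meantime, it's easy to roll your own scope guard. Here's a minimal sketch that relies only on the LookupEntry and ReleaseEntry methods used in this section and assumes the cache outlives the guard; the CCacheItemHandle name is made up for the example:

// A minimal sketch of a scope guard for cache entries. CCacheItemHandle is
// hypothetical (it isn't part of ATL Server); it uses only the LookupEntry
// and ReleaseEntry methods shown above and assumes the usual ATL Server
// headers (atlcache.h) are already included.
class CCacheItemHandle {
public:
  explicit CCacheItemHandle( IMemoryCache *pCache )
    : m_pCache( pCache ), m_hItem( NULL ) { }

  ~CCacheItemHandle( ) {
    // Balance the reference taken by a successful LookupEntry
    if( m_hItem ) m_pCache->ReleaseEntry( m_hItem );
  }

  HRESULT Lookup( LPCSTR szKey ) {
    HRESULT hr = m_pCache->LookupEntry( szKey, &m_hItem );
    if( FAILED( hr ) ) m_hItem = NULL;
    return hr;
  }

  operator HCACHEITEM( ) const { return m_hItem; }

private:
  // Noncopyable: copying would double-release the entry
  CCacheItemHandle( const CCacheItemHandle & );
  CCacheItemHandle &operator=( const CCacheItemHandle & );

  IMemoryCache *m_pCache;
  HCACHEITEM    m_hItem;
};

GetWordOfDay could then declare a CCacheItemHandle, call Lookup on it, and pass it straight to GetData; the release happens automatically on every exit path, including early returns.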

If you get that E_FAIL error code, you typically want to load the cache with the necessary data for next time. Doing so is fairly easy; you just call the Add method:

HRESULT ShowPostsHandler::GetWordOfDay(CStringA &result) {
  ...
  // Not in cache
  char *wordOfTheDay = new char[ 6 ];
  memcpy( wordOfTheDay, "apple", 6 );
  FILETIME ft = { 0 };
  hr = m_spMemoryCache->Add( "WordOfDay", wordOfTheDay,
    6 * sizeof( char ), &ft, 0, 0, 0 );
  ...
}

This code allocates the memory for the item, specifies an expiration time (via the FILETIME value, where 0 means that it doesn't expire), and places it into the cache. The block of memory is now safely stored until the cache gets flushed or scavenged; at that point, we have a memory leak.
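
Before dealing with that leak, the expiration parameter deserves a quick illustration. Here's a sketch of building an expiration time one hour from now, assuming the cache treats a nonzero FILETIME as an absolute UTC time; the ftNow, liExpire, and ftExpire names are just for the example:

// A sketch of expiring the entry one hour from now, assuming the cache
// interprets a nonzero FILETIME as an absolute UTC expiration time.
// FILETIME counts 100-nanosecond intervals since January 1, 1601.
FILETIME ftNow;
GetSystemTimeAsFileTime( &ftNow );

ULARGE_INTEGER liExpire;
liExpire.LowPart  = ftNow.dwLowDateTime;
liExpire.HighPart = ftNow.dwHighDateTime;
liExpire.QuadPart += 60ULL * 60 * 10000000;  // one hour in 100ns units

FILETIME ftExpire;
ftExpire.dwLowDateTime  = liExpire.LowPart;
ftExpire.dwHighDateTime = liExpire.HighPart;

hr = m_spMemoryCache->Add( "WordOfDay", wordOfTheDay,
  6 * sizeof( char ), &ftExpire, 0, 0, 0 );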

Why the memory leak? The cache is storing only void pointers; it knows nothing about how the memory it has been handed should be freed. It doesn't run destructors, either. To prevent memory leaks, there is a hook to provide a deallocator, and it's done on a per-entry basis. The last parameter in the call to the Add method is an optional pointer to an implementation of the IMemoryCacheClient interface, which has a single method:

interface IMemoryCacheClient : IUnknown {
    HRESULT Free([in] const void *pvData);
};

When an item is about to be removed from the cache, if you provided an IMemoryCacheClient implementation in the Add call, the cache calls the Free method to clean up. In this example, the implementation would just cast the pointer back to the char array we allocated and call delete [] on it. Unfortunately, there's no standard implementation of this interface for use in the BLOB cache.
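
Here's a minimal sketch of what such an implementation might look like for entries allocated with new char[]. The CCharArrayCacheClient class name is made up for the example, and CComObjectGlobal is just one convenient way to keep a long-lived instance around to hand to Add:

// A minimal sketch, assuming every entry registered with this client was
// allocated with new char[]. Only the IMemoryCacheClient interface comes
// from ATL Server (atlcache.h); the class itself is hypothetical.
class CCharArrayCacheClient :
  public CComObjectRootEx< CComMultiThreadModel >,
  public IMemoryCacheClient {
public:
  BEGIN_COM_MAP( CCharArrayCacheClient )
    COM_INTERFACE_ENTRY( IMemoryCacheClient )
  END_COM_MAP()

  // Called by the cache when the entry is flushed or scavenged
  STDMETHOD( Free )( const void *pvData ) {
    delete [] static_cast< char * >( const_cast< void * >( pvData ) );
    return S_OK;
  }
};

// One global instance outlives every cache entry that refers to it
CComObjectGlobal< CCharArrayCacheClient > g_cacheClient;

You'd then pass &g_cacheClient as the final argument to Add in place of the 0 shown earlier, and the char array would be freed whenever the entry is flushed or scavenged.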

Caching Files

The BLOB cache is useful for storing small chunks of arbitrary data, but sometimes you need to store large chunks. The file-caching service lets you create temporary files on disk; when the cache item expires, it automatically deletes the disk file.

The file cache operates much like the BLOB cache. You use IServiceProvider to get an IFileCache interface pointer. The file cache uses handles, just like the BLOB cache. The only major difference is that the file cache stores filenames instead of chunks of memory.
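
For example, acquiring the file cache looks just like the earlier BLOB cache code, assuming the service is registered under __uuidof(IFileCache); the m_spFileCache member shown here stands in for a CComPtr<IFileCache> you'd add to the handler class:

HTTP_CODE ShowPostsHandler::ValidateAndExchange( ) {
  ...
  // Assumes the file cache is exposed under __uuidof(IFileCache) and that
  // m_spFileCache is a CComPtr<IFileCache> member added to the handler
  HRESULT hr = m_spServiceProvider->QueryService(
    __uuidof(IFileCache), &m_spFileCache );
  if( FAILED( hr ) ) return HTTP_FAIL;
  ...
}

From there, the lookup-and-release discipline is the same as for the BLOB cache; the handle just resolves to a filename on disk rather than a pointer and a length.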

