WASD HTTPd provides an optional, configurable, monitorable cache of file data and file revision times. File data is cached so that requests for documents can be fulfilled without reference to the underlying file system, potentially reducing request latency and, more importantly, improving overall server performance and reducing system impact. File revision time is cached so that requests specifying an "If-Modified-Since:" header can also benefit. Files are cached using a hash derived from the VMS file-system path equivalent generated during the mapping process (i.e. representing the file name) but before any actual RMS activity.

WASD can also cache the content of responses from non-file sources. This can be useful for reducing the system impact of frequently accessed, dynamically generated, but otherwise relatively static pages. These sources are cached using a hash derived from the virtual service connected to and the request URI.
Caching, in concept, attempts to improve performance by keeping data in storage that is faster to access than its usual location. The performance improvement can be assessed in three basic ways: reduction of request latency, reduction of CPU consumption, and reduction of file-system (disk) activity.
This cache is provided to address all three. Where networks are particularly responsive, a reduction in request latency can often be noticeable. It is also suggested that a cache "hit" may consume fewer CPU cycles than the equivalent access to the (notoriously expensive) VMS file system. Where servers are particularly busy, or where disk subsystems are particularly loaded, a reduction in the need to access the file system can significantly improve performance while simultaneously reducing the impact of the server on other system activities.
A comparison between cached and non-cached performance is provided in the "Server Performance" section.
The WASD cache was originally provided to reduce file-system access (a somewhat expensive activity under VMS). With the expansion in the use of dynamically generated page content (e.g. PHP, Perl, Python) there is an obvious need to reduce the system impact of some of these activities. While many such responses have content specific to the individual request, a large number are also generated as general site pages, perhaps with simple time or date components, or other periodic information. Non-file caching is intended for this type of dynamic content.
Revalidation of non-file content is fraught with a number of issues and so is not provided. Instead, the cache entry is flushed on expiry of the [CacheValidateSeconds] period, or as otherwise specified by path mapping, and the request is serviced by the content source (script, PHP, Perl, etc.), with the generated response being freshly cached. All of the considerations described in 11.4 - Cache Content Validation apply equally to file and non-file content.
Determining which non-file content is cached and which is not, and how long before it is flushed, is done using mapping rules (12.5.5 - SET Rule). The source of non-file cache content is specified using one or a combination of cache source SET rules (such as cache=cgi) applied against general or specific paths.
A good understanding of site requirements and dynamic content sources, along with considerable care in specifying cache path settings, is required to cache dynamic content effectively. It is especially important to make the content revalidation period appropriate to the content of the pages. This is specified using the cache=expires= path setting, as in the example below.
For example, caching the content of PHP-generated home pages that contain a time-of-day clock, resolving down to the minute, would require a mapping rule similar to the following.
set /**/index.php cache=cgi cache=expires=minute
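As a further sketch, a dynamically generated page that changes only on the hour might instead be flushed hourly. This assumes hour is also an accepted cache=expires= granularity and uses a purely illustrative /news path.

set /news/index.php cache=cgi cache=expires=hour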
The WASD file cache provides for some resources to be permanently cached while others are allowed to be moved into and out of the cache according to demand. Most sites have at least some files that are fundamental components of the site's pages, are rarely modified, commonly accessed, and therefore should be permanently available from cache. Other files are modified on a regular or ad hoc basis and may experience fluctuations in demand. These more volatile resources should be cached based on current demand.
Volatile caching is the default, with the site administrator using mapping rules to indicate to the server which resources on which paths should be permanently cached (11.5 - Cache Configuration).
Although permanent and volatile entries share the same cache structure and are therefore subject to the configuration's maximum number of cache entries, the memory used to store the cached file data is drawn from separate pools. The total size of all volatile entries' data is constrained by configuration. In contrast, there is no configuration limit placed on the quantity of data that can be cached by permanent entries. One of the purposes of the permanent aspect of the cache is to allow the site administrator considerable discretion in the configuration of the site's low-latency resources, however large or small that might be. Of course there is the ultimate constraint of server process and system virtual memory limits on this activity. It should also be kept in mind that unless sufficient physical memory is available to keep such cached content in memory, the site may only end up trading file-system I/O for page-file I/O.
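As a sketch of how those volatile constraints might appear in WASD_CONFIG_GLOBAL (the directive names [CacheEntriesMax] and [CacheTotalKBytesMax], and the values shown, are illustrative assumptions; consult the global configuration reference for the actual directives and defaults):

[CacheEntriesMax] 1024
[CacheTotalKBytesMax] 4096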
A cache is not always of benefit! The cost may outweigh the return.
Any cache's efficiencies can only occur where subsets of data are consistently being demanded. Although these subsets may change slowly over time, a constantly and rapidly changing aggregate of requests loses the benefit of more readily accessible data to the overhead of cache management, due to the continuous flushing and reloading of cache data. This server's cache is no different; it will only improve performance if the site experiences some consistency in the files requested. For sites that have only a small percentage of files being repeatedly requested it is probably better that the cache be disabled. The other major consideration is available system memory. On a system where memory demand is high there is little value in having cache memory sitting in page space, trading disk I/O and latency for paging I/O and latency. On memory-challenged systems the cache is probably best disabled.
To help assess the cache's efficiency for any given site, monitor the Server Administration facility's cache report.
Two sets of data provide complementary information: cache activity and file request profile.
This summarizes the cache search behaviour, in particular that of the hash table.
The "searched" item, indicates the number of times the cache has been searched. Most importantly, this may include paths that can never be cached because they represent non-file requests (e.g. directory listings). Requests involving scripts, and some others, never attempt a cache search.
The "hit" item, indicates the number of times the hash table directly provided a cached path. This is very efficient.
The "miss" item, indicates the number of times the hash table directly indicated a path was not cached. This is decisive and is also very efficient.
The "collision" item, indicates the number of times multiple paths resolved to the same hash table entry. Collisions require further processing and are far less efficient. The sub-items, "collision hits" and "collision misses" indicate the number of times that further processing resulted in a found or not-found cache item.
A large number of cache misses compared to searches may only indicate a large number of non-cacheable requests and so, depending on that further datum, is not of great concern. A large proportion of collisions (say, greater than 12.5%) however indicates that either the hash table size needs increasing (1024 should be considered a minimum) or the hashing algorithm in the software needs reviewing :-)
This summarizes the site's file request profile.
With the "loads not hit" item, the count represents the cumulative number of files loaded but never subsequently hit. If this percentage is high it means most files loaded are never hit, indicating the site's request profile is possibly unsuitable for caching.
The item "hits" respresents the cumulative, total number of hits against the cumulative, total number of loads. The percentage here can range from zero to many thousands of percent :-) with less than 100% indicating poor cache performance and from 200% upwards better and good performance. The items "1-9", "10-99" and "100+" show the count and percentage of total hits that occured when a given entry had experienced hits within that range (e.g. if an entry has had 8 previous hits, the ninth increments the "1-9" item whereas the tenth and eleventh increments the "10-99" item, etc.)
Other considerations also apply when assessing the benefit of having a cache. For example, a high number and percentage of hits can be generated while the percentage of "loads not hit" could also be very high. The explanation for this would be one or two frequently requested files being hit while most others are loaded, never hit, and flushed as other files require cache space. In situations such as this it is difficult to judge whether cache processing is improving performance or just adding overhead.
The cache will automatically revalidate a volatile entry's file data after a specified number of seconds (the [CacheValidateSeconds] configuration parameter) by comparing the original file revision time to the current revision time. If they differ, the file contents have changed and the cache contents are declared invalid. When found invalid, the file transfer then continues outside of the cache, with the new contents being concurrently reloaded into the cache. Permanent entries are not subject to revalidation and the associated reloading.
Cache validation is also always performed if the request uses "Cache-Control:" with no-cache, no-store or max-age=0 attributes (HTTP/1.1 directive), or if it contains a "Pragma: no-cache" field (HTTP/1.0 directive). These request directives are often associated with a browser agent's reload page function, hence there is no need for any explicit flushing of the cache under normal operation. If a document does not immediately reflect changes made to it (i.e. the validation time has not been reached), validation (and consequent reload) can be "forced" with a browser reload. Permanent entries are also not subject to this source of revalidation. The configuration directive [CacheGuardPeriod] limits this form of revalidation, suppressing it when it occurs within the specified period since the entry was last revalidated. It has a default value of fifteen seconds.
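For reference, and as a minimal sketch only (the host and path are illustrative), a browser reload typically issues a request carrying header fields along these lines, which trigger the revalidation described above:

GET /index.html HTTP/1.1
Host: the.site.name
Pragma: no-cache
Cache-Control: no-cache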
If a site's contents are relatively static, the validation seconds could be set to an extended period (say 3600 seconds, one hour), relying on an explicit "reload" to force validation of a changed file.
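In WASD_CONFIG_GLOBAL such an extended period might be expressed along the following lines (a sketch only, using the one-hour figure suggested above):

[CacheValidateSeconds] 3600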
The entire cache may be purged of cached data, both volatile and permanent entries, either from the Server Administration facility or using command line server control.
$ HTTPD /DO=CACHE=PURGE
The cache is controlled using WASD_CONFIG_GLOBAL configuration file and WASD_CONFIG_MAP mapping file directives. A number of parameters control the basics of cache behaviour.
Mapping rules (12.5.5 - SET Rule) allow further tailoring of cache behaviour based on request (file) path. Those files that should be made permanent entries are indicated using the cache=perm directive. In the following example all files in the WASD runtime directories (directory icons, help files, etc.) are made permanent cache entries at the same time the path is mapped.
pass /*/-/* /wasd_root/runtime/*/* cache=perm
Of course, specified file types as well as specific paths can be mapped in this way. Here all files in the site's /help/ path are made permanent entries except those having a .PS type (PostScript documents).
set /help/* cache=perm
set /help/*.ps cache=noperm
The configuration directive [CacheFileKBytesMax] puts a limit on individual file size. Files exceeding that limit are considered too large and are not cached. It is possible to override this general constraint by specifying a maximum size (in kilobytes) on a per-path basis.
set /help/examples*.jpg cache=max=128
set /cai/*.mpg cache=max=2048 cache=perm
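The site-wide limit itself is set in WASD_CONFIG_GLOBAL; as a sketch only, with an assumed (not necessarily default) value of 64 kilobytes:

[CacheFileKBytesMax] 64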
Caching may be disabled and/or enabled for specified paths and subpaths.
set /web/* cache=none
set /web/icons/* cache
The cache may be enabled, disabled and purged from the Server Administration facility. In addition, the same control may be exercised from the command line using
$ HTTPD /DO=CACHE=ON
$ HTTPD /DO=CACHE=OFF
$ HTTPD /DO=CACHE=PURGE
If cache parameters are altered in the configuration file the server must be restarted to put these into effect. Disabling the cache on an ad hoc basis (from menu or command line) does not alter the contents in any way, so it can simply be re-enabled and use of the cache's previous contents resumes. In this way comparisons between the two environments may more easily be made.
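For instance, such a comparison might be made along these lines, using the commands shown above (the observation steps are whatever representative load or measurement is of interest):

$ HTTPD /DO=CACHE=OFF
$ ! ... observe non-cached behaviour under representative load ...
$ HTTPD /DO=CACHE=ON
$ ! ... observe cached behaviour, then compare using the cache report ...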
There are often good reasons for bypassing or avoiding the cache, for instance where a document is being refreshed within the cache revalidation period specified by [CacheValidateSeconds] (11.4 - Cache Content Validation). There are two mechanisms available for bypassing or invalidating the file cache.

The first is to disable caching for the path concerned using a mapping rule.

SET /path/not/to/cache/* cache=none

The second is to include a file version component in the request path. Even an empty version, just a trailing semicolon, causes the cache to be bypassed for that request, as in this example path.

/wasd_root/robots.txt;