<para>At its core, the abstraction applies caching to Java methods, thus reducing the number of executions based on the
information available in the cache. That is, each time a <emphasis>targeted</emphasis> method is invoked, the abstraction
applies a caching behaviour that checks whether the method has already been executed for the given arguments. If it has,
the cached result is returned without having to execute the actual method; if it has not, the method is executed, the
result is cached and returned to the user so that, the next time the method is invoked, the cached result is returned.
This way, expensive methods (whether CPU or IO bound) can be executed only once for a given set of parameters and the result
reused without having to actually execute the method again. The caching logic is applied transparently, without any interference
to the invoker.</para>
<important>Obviously, this approach works only for methods that are guaranteed to return the same output (result) for a given input
(or arguments), no matter how many times they are executed.</important>
<para>To use the cache abstraction, the developer needs to take care of two aspects:
<itemizedlist>
<listitem>caching declaration - identify the methods that need to be cached and their policy</listitem>
<listitem>cache configuration - the backing cache where the data is stored and read from</listitem>
</itemizedlist>
</para>
<para>Note that just like other services in the Spring Framework, the caching service is an abstraction (not a cache implementation) and requires
the use of actual storage to store the cache data - that is, the abstraction frees the developer from having to write the caching
logic but does not provide the actual stores. There are two integrations available out of the box, for the JDK <literal>java.util.concurrent.ConcurrentMap</literal>
and <ulink url="http://ehcache.org/">Ehcache</ulink> - see <xref linkend="cache-plug"/> for more information on plugging in other cache stores/providers.</para>
<para>For caching declaration, the abstraction provides two Java annotations: <literal>@Cacheable</literal> and <literal>@CacheEvict</literal>, which allow methods
to trigger cache population or cache eviction. Let us take a closer look at each annotation:</para>
<para>As the name implies, <literal>@Cacheable</literal> is used to demarcate methods that are cacheable - that is, methods for which the result is stored in the cache
so that, on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. In its simplest form,
the annotation declaration requires the name of the cache associated with the annotated method:</para>
<programlisting language="java"><![CDATA[@Cacheable("books")
public Book findBook(ISBN isbn) {...}]]></programlisting>
<para>In the snippet above, the method <literal>findBook</literal> is associated with the cache named <literal>books</literal>. Each time the method is called, the cache
is checked to see whether the invocation has already been executed and does not have to be repeated. While in most cases only one cache is declared, the annotation allows multiple
names to be specified so that more than one cache is used. In this case, each of the caches will be checked before executing the method - if at least one cache is hit,
then the associated value will be returned:</para>
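<para>For instance, a method could be associated with both a <literal>books</literal> and an <literal>isbns</literal> cache (the second cache name is purely illustrative):</para>
<programlisting language="java"><![CDATA[@Cacheable({ "books", "isbns" })
public Book findBook(ISBN isbn) {...}]]></programlisting>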
<para>By default, the cache key is computed from the <literal>hashCode()</literal> of the method arguments.
This approach works well for objects with <emphasis>natural keys</emphasis> as long as the <literal>hashCode()</literal> reflects that. If that is not the case, then
for distributed or persistent environments the strategy needs to be changed, as the object's hashCode is not preserved.
In fact, depending on the JVM implementation or running conditions, the same hashCode can be reused for different objects in the same VM instance.</para>
<para>To provide a different <emphasis>default</emphasis> key generator, one needs to implement the <interfacename>org.springframework.cache.KeyGenerator</interfacename> interface.</para>
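<para>A minimal sketch of such a generator is shown below - it assumes the interface exposes a single <literal>generate(target, method, params)</literal> style method (check the actual
interface in your Spring version for the exact signature) and simply hashes all the arguments:</para>
<programlisting language="java"><![CDATA[import java.lang.reflect.Method;
import java.util.Arrays;

import org.springframework.cache.KeyGenerator;

// Hypothetical key generator that derives the key from all method arguments.
// The generate(...) signature is an assumption - verify it against the KeyGenerator
// interface shipped with your Spring version.
public class ArgumentHashKeyGenerator implements KeyGenerator {

    public Object generate(Object target, Method method, Object... params) {
        // deepHashCode handles nested arrays and null elements gracefully
        return Arrays.deepHashCode(params);
    }
}]]></programlisting>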
<para>Since caching is generic, it is quite likely that the target methods have various signatures that cannot simply be mapped on top of the cache structure. This tends to become
obvious when the target method has multiple arguments, out of which only some are suitable for caching (while the rest are used only by the method logic). For example:</para>
<programlisting language="java"><![CDATA[@Cacheable("books")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>
<para>At first glance, while the two <literal>boolean</literal> arguments influence the way the book is found, they are of no use for the cache. Furthermore, what if only one of the two
is important while the other is not?</para>
<para>For such cases, the <literal>@Cacheable</literal> annotation allows the user to specify how the key is generated through its <literal>key</literal> attribute.
The developer can use <link linkend="expressions">SpEL</link> to pick the arguments of interest (or their nested properties), perform operations or even invoke arbitrary methods without
having to write any code or implement any interface. This is the recommended approach over the <link linkend="cache-annotations-cacheable-default-key">default</link> generator since
method signatures tend to differ more and more as the code base grows; while the default strategy might work for some methods, it rarely does for all of them.</para>
<para>Below are some examples of various SpEL declarations - if you are not familiar with it, do yourself a favour and read <xref linkend="expressions"/>:</para>
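<para>For instance, assuming the <literal>findBook</literal> signature shown earlier, one might key the cache on the argument itself, on one of its properties, or on the result of
an arbitrary method call (the <literal>rawNumber</literal> property and the <literal>example.KeyUtils</literal> helper are hypothetical, used purely for illustration):</para>
<programlisting language="java"><![CDATA[@Cacheable(value="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

@Cacheable(value="books", key="#isbn.rawNumber")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

@Cacheable(value="books", key="T(example.KeyUtils).hash(#isbn)")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>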
<para>Sometimes, a method might not be suitable for caching all the time (for example, it might depend on the given arguments). The cache annotations support such functionality
through the <literal>conditional</literal> parameter, which takes a <literal>SpEL</literal> expression that is evaluated to either <literal>true</literal> or <literal>false</literal>.
If <literal>true</literal>, the method is cached - if not, it behaves as if the method is not cached, that is, it is executed every single time no matter what values are in the cache or what
arguments are used. A quick example - the following method will be cached only if the argument <literal>name</literal> has a length shorter than 32:</para>
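<para>A sketch of such a declaration, using the <literal>conditional</literal> attribute described above (the single-argument <literal>findBook(String name)</literal> signature is assumed
for illustration):</para>
<programlisting language="java"><![CDATA[@Cacheable(value="books", conditional="#name.length() < 32")
public Book findBook(String name)]]></programlisting>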
<para>Each <literal>SpEL</literal> expression is evaluated against a dedicated <literal><link linkend="expressions-language-ref">context</link></literal>. In addition
to the built-in parameters, the framework provides dedicated caching-related metadata such as the argument names. The next table lists the items made available to the context
so one can use them for key and conditional computations:</para>
<para>The cache abstraction allows not just population of a cache store but also eviction. This process is useful for removing stale or unused data from the cache. As opposed to
<literal>@Cacheable</literal>, the <literal>@CacheEvict</literal> annotation demarcates methods that perform cache <emphasis>eviction</emphasis>, that is, methods that act as triggers
for removing data from the cache. Just like its sibling, <literal>@CacheEvict</literal> requires one to specify one (or multiple) caches that are affected by the action and allows a
key or a condition to be specified, but in addition features an extra parameter, <literal>allEntries</literal>, which indicates whether a cache-wide eviction needs to be performed
rather than just a single-entry eviction (based on the key):</para>
<programlisting language="java"><![CDATA[@CacheEvict(value = "books", allEntries = true)
public void loadBooks(InputStream batch)]]></programlisting>
<para>This option comes in handy when an entire cache region needs to be cleared out - rather than evicting each entry (which would take a long time since it is inefficient),
all the entries are removed in one operation as shown above. Note that the framework will ignore any key specified in this scenario as it does not apply (the entire cache is evicted, not just
one entry).</para>
<para>It is important to note that void methods can be used with <literal>@CacheEvict</literal> - as the methods act as triggers, the return values are ignored (as they don't interact with
the cache) - this is not the case with <literal>@Cacheable</literal>, which adds/updates data in the cache and thus requires a result.</para>
<para>It is important to note that declaring the cache annotations does not automatically trigger their actions - like many things in Spring, the feature has to be declaratively
enabled (which means that if you ever suspect caching is to blame, you can disable it by removing only one configuration line rather than all the annotations in your code). In practice, this
translates to one line that informs Spring that it should process the cache annotations, namely:</para>
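<para>Assuming the <literal>cache</literal> namespace is declared in the configuration file, the enabling line is simply:</para>
<programlisting language="xml"><![CDATA[<cache:annotation-driven />]]></programlisting>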
<para>The namespace allows various options to be specified that influence the way the caching behaviour is added to the application through AOP. The configuration is similar (on purpose)
to that of <literal><link linkend="tx-annotation-driven-settings">tx:annotation-driven</link></literal>:</para>
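<para>For example, one might configure the proxying behaviour explicitly - the attribute names below mirror those of <literal>tx:annotation-driven</literal> and are assumptions, so verify
them against the cache namespace schema of your Spring version:</para>
<programlisting language="xml"><![CDATA[<cache:annotation-driven mode="proxy" proxy-target-class="false" />]]></programlisting>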
<para>The caching abstraction allows one to use one's own annotations to identify which methods trigger cache population or eviction. This is quite handy as a template mechanism, as it eliminates
the need to duplicate cache annotation declarations (especially useful if the key or condition are specified) or if foreign imports (<literal>org.springframework</literal>) are not allowed
in your code base. Similar to the rest of the <link linkend="beans-stereotype-annotations">stereotype</link> annotations, both <literal>@Cacheable</literal> and <literal>@CacheEvict</literal>
can be used as meta-annotations, that is, annotations that can annotate other annotations. To wit, let us replace a common <literal>@Cacheable</literal> declaration with our own, custom
annotation:</para>
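<para>A sketch of such a custom annotation follows - the cache name and key are assumptions chosen to match the <literal>findBook</literal> example used throughout this section, and the
package of the <literal>@Cacheable</literal> import may vary between Spring versions:</para>
<programlisting language="java"><![CDATA[import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// note: the package of @Cacheable may differ between Spring versions
import org.springframework.cache.annotation.Cacheable;

// custom annotation carrying the caching metadata as a meta-annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Cacheable(value = "books", key = "#isbn")
public @interface SlowService {
}]]></programlisting>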
<para>Above, we have defined our own <literal>SlowService</literal> annotation which itself is annotated with <literal>@Cacheable</literal> - now we can replace the following code:</para>
<programlisting language="java"><![CDATA[@Cacheable(value = "books", key = "#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>
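<para>with the custom annotation alone, for example:</para>
<programlisting language="java"><![CDATA[@SlowService
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>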
<para>Even though <literal>@SlowService</literal> is not a Spring annotation, the container automatically picks up its declaration at runtime and understands its meaning. Note that, as
mentioned <link linkend="cache-annotation-enable">above</link>, the annotation-driven behaviour needs to be enabled.</para>
<para>If annotations are not an option (no access to the sources or no external code allowed), one can use XML for declarative caching. So instead of annotating the methods for caching, one specifies
the target method and the caching directives externally (similar to the declarative transaction management <link linkend="transaction-declarative-first-example">advice</link>). The previous example
can be translated into:</para>
<programlisting language="xml"><![CDATA[<!-- the service we want to make cacheable -->
<bean id="bookService" class="..."/>]]></programlisting>
<para>In the configuration above, the <literal>bookService</literal> is made cacheable. The caching semantics to apply are encapsulated in the <literal>cache:advice</literal> definition, which
instructs the method <literal>findBooks</literal> to be used for putting data into the cache and the method <literal>loadBooks</literal> for evicting data. Both definitions work against the
<literal>books</literal> cache.</para>
<para>The <literal>aop:config</literal> definition applies the cache advice to the appropriate points in the program by using the AspectJ pointcut expression (more information is available
in <xref linkend="aop"/>). In the example above, all methods from the <interfacename>BookService</interfacename> are considered and the cache advice applied to them.</para>
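<para>Putting these pieces together, a configuration along the lines described above might look roughly as follows. The element and attribute names inside <literal>cache:advice</literal>,
the service class and the pointcut expression are assumptions - consult the cache namespace schema of your Spring version for the exact names:</para>
<programlisting language="xml"><![CDATA[<!-- the service we want to make cacheable -->
<bean id="bookService" class="x.y.service.DefaultBookService"/>

<!-- caching behaviour: findBooks populates the cache, loadBooks evicts it -->
<cache:advice id="cacheAdvice" cache-manager="cacheManager">
  <cache:definitions cache="books">
    <cache:cacheable method="findBooks"/>
    <cache:cache-evict method="loadBooks" all-entries="true"/>
  </cache:definitions>
</cache:advice>

<!-- apply the cacheable behaviour to all BookService methods -->
<aop:config>
  <aop:advisor advice-ref="cacheAdvice" pointcut="execution(* x.y.BookService.*(..))"/>
</aop:config>]]></programlisting>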
<para>The declarative XML caching supports all of the annotation-based model so moving between the two should be fairly easy - furthermore, both can be used inside the same application.
The XML-based approach does not touch the target code; however, it is inherently more verbose. When dealing with classes with overloaded methods that are targeted for caching, identifying the
proper methods does take extra effort since the <literal>method</literal> argument is not a good discriminator - in these cases, the AspectJ pointcut can be used to cherry-pick the target
methods and apply the appropriate caching functionality. However, through XML it is easier to apply package/group/interface-wide caching (again due to the AspectJ pointcut) and to create
template-like definitions (as we did in the example above by defining the target cache through the <literal>cache:definitions</literal> <literal>cache</literal> attribute).</para>
<para>Out of the box, the cache abstraction provides integration with two storages - one on top of the JDK <interfacename>ConcurrentMap</interfacename> and one
for the <ulink url="http://ehcache.org/">Ehcache</ulink> library. To use them, one simply needs to declare an appropriate <interfacename>CacheManager</interfacename> - an entity that controls and manages
<interfacename>Cache</interfacename>s and can be used to retrieve these for storage.</para>
<para>The JDK-based <interfacename>Cache</interfacename> implementation resides in the <literal>org.springframework.cache.concurrent</literal> package. It allows one to use
<classname>ConcurrentHashMap</classname> as a backing <interfacename>Cache</interfacename> store.</para>
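<para>A minimal declaration might look as follows - the factory bean class name is an assumption, so check the <literal>org.springframework.cache.concurrent</literal> package of your
Spring version for the exact type:</para>
<programlisting language="xml"><![CDATA[<bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
  <property name="caches">
    <set>
      <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean">
        <property name="name" value="default"/>
      </bean>
      <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean">
        <property name="name" value="books"/>
      </bean>
    </set>
  </property>
</bean>]]></programlisting>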
<para>The snippet above uses the <classname>SimpleCacheManager</classname> to create a <interfacename>CacheManager</interfacename> for the two nested, <interfacename>ConcurrentMap</interfacename>-based
<interfacename>Cache</interfacename> implementations named <emphasis>default</emphasis> and <emphasis>books</emphasis>.
Note that the names are configured directly for each cache.</para>
<para>As the cache is created by the application, it is bound to its lifecycle, making it suitable for basic use cases, tests or simple applications. The cache scales well and is very fast,
but it does not provide any management or persistence capabilities, nor any eviction contracts.</para>
<para>The Ehcache implementation is located in the <literal>org.springframework.cache.ehcache</literal> package. Again, to use it, one simply needs to declare the appropriate
<interfacename>CacheManager</interfacename>:</para>
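<para>A sketch of such a declaration - the class and property names are assumptions, so verify them against the <literal>org.springframework.cache.ehcache</literal> package of your
Spring version:</para>
<programlisting language="xml"><![CDATA[<!-- Spring CacheManager backed by Ehcache -->
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
  <property name="cacheManager" ref="ehcache"/>
</bean>

<!-- bootstraps the Ehcache library, reading its configuration from ehcache.xml -->
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
  <property name="configLocation" value="ehcache.xml"/>
</bean>]]></programlisting>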
<para>This setup bootstraps the Ehcache library inside the Spring IoC container (through the <literal>ehcache</literal> bean), which is then wired into the dedicated <interfacename>CacheManager</interfacename>
implementation. Note that the entire Ehcache-specific configuration is read from the resource <literal>ehcache.xml</literal>.</para>
<title>Dealing with caches without a backing store</title>
<para>Sometimes when switching environments or doing testing, one might have cache declarations without an actual backing cache configured. As this is an invalid configuration, at runtime an
exception will be thrown since the caching infrastructure is unable to find a suitable store. In situations like this, rather than removing the cache declarations (which can prove tedious),
one can wire in a simple, dummy cache that performs no caching - that is, it forces the cached methods to be executed every time:</para>
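<para>A sketch of such a setup using <literal>CompositeCacheManager</literal> - the <literal>cacheManagers</literal> property name and the
<literal>jdkCache</literal>/<literal>gemfireCache</literal> bean references are assumptions for illustration:</para>
<programlisting language="xml"><![CDATA[<bean id="cacheManager" class="org.springframework.cache.support.CompositeCacheManager">
  <property name="cacheManagers">
    <list>
      <ref bean="jdkCache"/>
      <ref bean="gemfireCache"/>
    </list>
  </property>
  <!-- fall back to a no-op cache for any cache name not handled above -->
  <property name="addNoOpManager" value="true"/>
</bean>]]></programlisting>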
<para>The <literal>CompositeCacheManager</literal> above chains multiple <literal>CacheManager</literal>s and additionally, through the <literal>addNoOpManager</literal> flag, adds a
<emphasis>no op</emphasis> cache for all the definitions not handled by the configured cache managers. That is, every cache definition not found in either <literal>jdkCache</literal>
or <literal>gemfireCache</literal> (configured above) will be handled by the no op cache, which does not store any information, causing the target method to be executed every time.</para>
<para>Clearly there are plenty of caching products out there that can be used as a backing store. To plug them in, one needs to provide a <interfacename>CacheManager</interfacename> and
<interfacename>Cache</interfacename> implementation since, unfortunately, there is no available standard that we can use instead. This may sound harder than it is since, in practice,
the classes tend to be simple <ulink url="http://en.wikipedia.org/wiki/Adapter_pattern">adapter</ulink>s that map the caching abstraction framework on top of the storage API, as the <literal>ehcache</literal> classes show.
Most <interfacename>CacheManager</interfacename> classes can use the classes in the <literal>org.springframework.cache.support</literal> package, such as <classname>AbstractCacheManager</classname>,
which takes care of the boiler-plate code, leaving only the actual <emphasis>mapping</emphasis> to be completed. We hope that, in time, the libraries that provide integration with Spring
will ship such adapters out of the box.</para>
<title>How can I set the TTL/TTI/Eviction policy/XXX feature?</title>
<para>Directly through your cache provider. The cache abstraction is... well, an abstraction, not a cache implementation. The solution you are using might support various data policies and different
topologies which other solutions do not (take for example the JDK <literal>ConcurrentHashMap</literal>) - exposing that in the cache abstraction would be useless simply because there would
be no backing support. Such functionality should be controlled directly through the backing cache, when configuring it, or through its native API.</para>