From 02cd8681d42b069fc7040357738dd1ac8c1ce20b Mon Sep 17 00:00:00 2001 From: Chris Beams Date: Thu, 22 Dec 2011 14:16:41 +0100 Subject: [PATCH] Normalize whitespace in cache reference doc --- spring-framework-reference/src/cache.xml | 658 +++++++++++------------ 1 file changed, 329 insertions(+), 329 deletions(-) diff --git a/spring-framework-reference/src/cache.xml b/spring-framework-reference/src/cache.xml index d56f03ff073..690c96c2f73 100644 --- a/spring-framework-reference/src/cache.xml +++ b/spring-framework-reference/src/cache.xml @@ -3,34 +3,34 @@ "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"> - Cache Abstraction - -
Cache Abstraction

Introduction

Since version 3.1, the Spring Framework provides support for transparently adding caching to an existing Spring application. Similar to the transaction support, the caching abstraction allows consistent use of various caching solutions with minimal impact on the code.
Understanding the cache abstraction

Cache vs Buffer

The terms "buffer" and "cache" tend to be used interchangeably; note however that they represent different things. A buffer is traditionally used as an intermediate, temporary store for data moving between a fast and a slow entity. As one party would have to wait for the other, hurting performance, the buffer alleviates this by allowing entire blocks of data to move at once rather than in small chunks. The data is written to and read from the buffer only once. Furthermore, a buffer is visible to at least one party which is aware of it.

A cache, on the other hand, is by definition hidden and neither party is aware that caching occurs. It too improves performance, but does so by allowing the same data to be read multiple times in a fast fashion.

A further explanation of the differences between the two can be found here.

At its core, the abstraction applies caching to Java methods, thus reducing the number of executions based on the information available in the cache. That is, each time a targeted method is invoked, the abstraction applies a caching behaviour that checks whether the method has already been executed for the given arguments. If it has, the cached result is returned without having to execute the actual method. This way, expensive methods (whether CPU or IO bound) can be executed only once for a given set of parameters and the result reused without having to actually execute the method again. The caching logic is applied transparently, without any interference to the invoker.

Obviously this approach works only for methods that are guaranteed to return the same output (result) for a given input (or arguments) no matter how many times they are executed.

To use the cache abstraction, the developer needs to take care of two aspects:

caching declaration - identify the methods that need to be cached and their policy

cache configuration - the backing cache where the data is stored and read from

Note that, just like other services in the Spring Framework, the caching service is an abstraction (not a cache implementation) and requires the use of an actual storage to store the cache data - that is, the abstraction frees the developer from having to write the caching logic but does not provide the actual stores. There are two integrations available out of the box, for the JDK java.util.concurrent.ConcurrentMap and for Ehcache - see the section on plugging-in different back-end caches for more information on other cache stores/providers.
- -
+
+ +
Declarative annotation-based caching

For caching declaration, the abstraction provides two Java annotations: @Cacheable and @CacheEvict, which allow methods to trigger cache population or cache eviction, respectively. Let us take a closer look at each annotation:
<literal>@Cacheable</literal> annotation

As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for which the result is stored in the cache so that, on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. In its simplest form, the annotation declaration requires only the name of the cache associated with the annotated method, as sketched in the example below.

In that example, the method findBook is associated with the cache named books. Each time the method is called, the cache is checked to see whether the invocation has already been executed and does not have to be repeated. While in most cases only one cache is declared, the annotation allows multiple names to be specified so that more than one cache is used. In this case, each of the caches is checked before executing the method - if at least one cache is hit, the associated value is returned. All the other caches that do not contain the value will be updated as well, even though the cached method was not actually executed.
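A minimal sketch of the declaration referred to above (the original listing is not part of this hunk; the findBook signature follows the later examples in this chapter):

<programlisting language="java"><![CDATA[@Cacheable(value="books")
public Book findBook(ISBN isbn) {...}]]></programlisting>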
Default Key Generation

Since caches are essentially key-value stores, each invocation of a cached method needs to be translated into a suitable key for cache access. Out of the box, the caching abstraction uses a simple KeyGenerator based on the following algorithm:

If no params are given, return 0.

If only one param is given, return that instance.

If more than one param is given, return a key computed from the hashes of all parameters.

This approach works well for objects with natural keys as long as the hashCode() reflects that. If that is not the case, then for distributed or persistent environments the strategy needs to be changed, as the object's hashCode is not preserved. In fact, depending on the JVM implementation or running conditions, the same hashCode can be reused for different objects in the same VM instance.

To provide a different default key generator, one needs to implement the org.springframework.cache.KeyGenerator interface. Once configured, the generator will be used for each declaration that does not specify its own key generation strategy (see below).
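For illustration only, a custom generator might look roughly like the following sketch; the generate(target, method, params) signature is assumed from the KeyGenerator contract mentioned above, and the class name is hypothetical:

<programlisting language="java"><![CDATA[import java.lang.reflect.Method;

// the KeyGenerator import follows the package named above
public class SingleArgumentKeyGenerator implements KeyGenerator {

    // use the first argument as the key when present, otherwise a constant
    public Object generate(Object target, Method method, Object... params) {
        return (params.length > 0 && params[0] != null) ? params[0] : 0;
    }
}]]></programlisting>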
Custom Key Generation Declaration

Since caching is generic, it is quite likely that the target methods have various signatures that cannot simply be mapped on top of the cache structure. This tends to become obvious when the target method has multiple arguments out of which only some are suitable for caching (while the rest are used only by the method logic). For example:

@Cacheable(value="books")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

At first glance, while the two boolean arguments influence the way the book is found, they are of no use for the cache. Furthermore, what if only one of the two is important while the other is not?

For such cases, the @Cacheable annotation allows the user to specify how the key is generated through its key attribute. The developer can use SpEL to pick the arguments of interest (or their nested properties), perform operations or even invoke arbitrary methods without having to write any code or implement any interface. This is the recommended approach over the default generator since methods tend to differ in signature as the code base grows; while the default strategy might work for some methods, it rarely does for all of them.

Below are some examples of various SpEL declarations - if you are not familiar with SpEL, do yourself a favour and read the Spring Expression Language chapter:

@Cacheable(value="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

The snippets above show how easy it is to select a certain argument, one of its properties or even an arbitrary (static) method.
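The property-based and static-method-based variants mentioned above fall between hunks; hedged reconstructions (rawNumber and someType.hash(...) are illustrative names, not confirmed by this excerpt) look like:

<programlisting language="java"><![CDATA[@Cacheable(value="books", key="#isbn.rawNumber")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

@Cacheable(value="books", key="T(someType).hash(#isbn)")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>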
- +
Conditional caching

Sometimes, a method might not be suitable for caching all the time (for example, it might depend on the given arguments). The cache annotations support such functionality through the condition parameter, which takes a SpEL expression that is evaluated to either true or false. If true, the method is cached - if not, it behaves as if the method were not cached, that is, it is executed every single time no matter what values are in the cache or what arguments are used. A quick example - the following method will be cached only if the argument name has a length shorter than 32:
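A sketch of the declaration just described (the condition value mirrors the surrounding text; the length check is written as a SpEL method call):

<programlisting language="java"><![CDATA[@Cacheable(value="book", condition="#name.length() < 32")
public Book findBook(String name)]]></programlisting>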
- +
Available caching <literal>SpEL</literal> evaluation context

Each SpEL expression evaluates against a dedicated context. In addition to the built-in parameters, the framework provides dedicated caching-related metadata such as the argument names. The next table lists the items made available to the context so one can use them for key and conditional computations:
Cache SpEL available metadata (Name / Location / Description / Example):

methodName / root object / The name of the method being invoked / #root.methodName

method / root object / The method being invoked / #root.method.name

target / root object / The target object being invoked / #root.target

targetClass / root object / The class of the target being invoked / #root.targetClass

params / root object / The arguments (as array) used for invoking the target / #root.params[0]

caches / root object / Collection of caches against which the current method is executed / #root.caches[0].name

parameter name / evaluation context / Name of any of the method parameters. If for some reason the names are not available (for example, no debug information), the parameter names are also available under #p<#arg>, where #arg stands for the parameter index (starting from 0). / isbn or p0
<literal>@CachePut</literal> annotation

For cases where the cache needs to be updated without interfering with the method execution, one can use the @CachePut annotation. That is, the method will always be executed and its result placed into the cache (according to the @CachePut options). It supports the same options as @Cacheable and should be used for cache population rather than method flow optimization.

Note that using the @CachePut and @Cacheable annotations on the same method is generally discouraged because they have different behaviours. While the latter causes the method execution to be skipped by using the cache, the former forces the execution in order to perform a cache update. This leads to unexpected behaviour and, with the exception of specific corner cases (such as annotations having conditions that exclude them from each other), such declarations should be avoided.
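There is no listing for @CachePut in this excerpt; a hedged sketch of such a declaration (updateBook and BookDescriptor are illustrative names) could read:

<programlisting language="java"><![CDATA[@CachePut(value="books", key="#isbn")
public Book updateBook(ISBN isbn, BookDescriptor descriptor)]]></programlisting>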
- +
+
<literal>@CacheEvict</literal> annotation

The cache abstraction allows not just population of a cache store but also eviction. This process is useful for removing stale or unused data from the cache. As opposed to @Cacheable, the @CacheEvict annotation demarcates methods that perform cache eviction, that is, methods that act as triggers for removing data from the cache. Just like its sibling, @CacheEvict requires one to specify one (or multiple) caches that are affected by the action and allows a key or a condition to be specified, but in addition features an extra parameter, allEntries, which indicates whether a cache-wide eviction needs to be performed rather than the eviction of just one entry (based on the key); a sketch of such a declaration follows at the end of this section.

This option comes in handy when an entire cache region needs to be cleared out - rather than evicting each entry (which would take a long time since it is inefficient), all the entries are removed in one operation. Note that the framework will ignore any key specified in this scenario as it does not apply (the entire cache is evicted, not just one entry).

One can also indicate whether the eviction should occur after (the default) or before the method executes through the beforeInvocation attribute. The former provides the same semantics as the rest of the annotations - once the method completes successfully, an action (in this case eviction) on the cache is executed. If the method does not execute (as it might be cached) or an exception is thrown, the eviction does not occur. The latter (beforeInvocation=true) causes the eviction to always occur before the method is invoked - this is useful in cases where the eviction does not need to be tied to the method outcome.

It is important to note that void methods can be used with @CacheEvict - as the methods act as triggers, the return values are ignored (as they do not interact with the cache); this is not the case with @Cacheable, which adds/updates data in the cache and thus requires a result.
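A sketch of the cache-wide eviction declaration described above (the loadBooks signature is assumed for illustration):

<programlisting language="java"><![CDATA[@CacheEvict(value="books", allEntries=true)
public void loadBooks(InputStream batch)]]></programlisting>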
- +
<literal>@Caching</literal> annotation

There are cases when multiple annotations of the same type, such as @CacheEvict or @CachePut, need to be specified, for example because the condition or the key expression differs between caches. Unfortunately Java does not support such declarations; however, there is a workaround - using an enclosing annotation, in this case @Caching. @Caching allows multiple nested @Cacheable, @CachePut and @CacheEvict annotations to be used on the same method:
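The accompanying listing did not survive extraction; a hedged sketch of such a grouped declaration (cache names, the key expression and the importBooks signature are illustrative) is:

<programlisting language="java"><![CDATA[@Caching(evict = { @CacheEvict("primary"), @CacheEvict(value="secondary", key="#deposit") })
public Book importBooks(String deposit, Date date)]]></programlisting>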
Enable caching annotations

It is important to note that declaring the cache annotations does not automatically trigger their actions - like many things in Spring, the feature has to be declaratively enabled (which means that if you ever suspect caching is to blame, you can disable it by removing only one configuration line rather than all the annotations in your code). In practice, this translates to one line that informs Spring that it should process the cache annotations, namely:

<beans xmlns:cache="http://www.springframework.org/schema/cache"
       xsi:schemaLocation="...
           http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd">

    <cache:annotation-driven />

</beans>

The namespace allows various options to be specified that influence the way the caching behaviour is added to the application through AOP. The configuration is similar (on purpose) to that of tx:annotation-driven:
<literal><cache:annotation-driven/></literal> settings

cache-manager. Default: cacheManager. Name of the cache manager to use. Only required if the name of the cache manager is not cacheManager, as in the example above.

mode. Default: proxy. The default mode "proxy" processes annotated beans to be proxied using Spring's AOP framework (following proxy semantics, as discussed above, applying to method calls coming in through the proxy only). The alternative mode "aspectj" instead weaves the affected classes with Spring's AspectJ caching aspect, modifying the target class byte code to apply to any kind of method call. AspectJ weaving requires spring-aspects.jar on the classpath as well as load-time weaving (or compile-time weaving) enabled. (See the AOP chapter for details on how to set up load-time weaving.)

proxy-target-class. Default: false. Applies to proxy mode only. Controls what type of caching proxies are created for classes annotated with the @Cacheable or @CacheEvict annotations. If the proxy-target-class attribute is set to true, then class-based proxies are created. If proxy-target-class is false or if the attribute is omitted, then standard JDK interface-based proxies are created. (See the AOP chapter for a detailed examination of the different proxy types.)

order. Default: Ordered.LOWEST_PRECEDENCE. Defines the order of the cache advice that is applied to beans annotated with @Cacheable or @CacheEvict. (For more information about the rules related to ordering of AOP advice, see the AOP chapter.) No specified ordering means that the AOP subsystem determines the order of the advice.

<cache:annotation-driven/> only looks for @Cacheable/@CacheEvict on beans in the same application context it is defined in. This means that, if you put <cache:annotation-driven/> in a WebApplicationContext for a DispatcherServlet, it only checks for @Cacheable/@CacheEvict beans in your controllers, and not your services. See the MVC chapter for more information.

Method visibility and <interfacename>@Cacheable/@CachePut/@CacheEvict</interfacename>

When using proxies, you should apply the @Cache* annotations only to methods with public visibility. If you do annotate protected, private or package-visible methods with these annotations, no error is raised, but the annotated method does not exhibit the configured caching settings. Consider the use of AspectJ (see below) if you need to annotate non-public methods, as it changes the bytecode itself.

Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the @Cache* annotation, as opposed to annotating interfaces.
You certainly can place the @Cache* annotation on an interface (or an interface method), but this works only as you would expect it to if you are using interface-based proxies. The fact that Java annotations are not inherited from interfaces means that if you are using class-based proxies (proxy-target-class="true") or the weaving-based aspect (mode="aspectj"), then the caching settings are not recognized by the proxying and weaving infrastructure, and the object will not be wrapped in a caching proxy, which would be decidedly bad.

In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation - in effect, a method within the target object calling another method of the target object - will not lead to actual caching at runtime even if the invoked method is marked with @Cacheable; consider using the aspectj mode in this case.
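As a complement to the XML shown above, the same switch can be expressed with Java configuration; this sketch assumes Spring 3.1's @EnableCaching annotation and a simple ConcurrentMap-backed manager, with illustrative names:

<programlisting language="java"><![CDATA[import java.util.Arrays;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCache;
import org.springframework.cache.support.SimpleCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CachingConfig {

    // minimal in-memory manager; swap in any other CacheManager implementation
    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(
                new ConcurrentMapCache("default"),
                new ConcurrentMapCache("books")));
        return cacheManager;
    }
}]]></programlisting>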
- +
Using custom annotations

The caching abstraction allows one to use custom annotations to identify which methods trigger cache population or eviction. This is quite handy as a template mechanism, as it eliminates the need to duplicate cache annotation declarations (especially useful if the key or condition is specified) or if foreign imports (org.springframework) are not allowed in your code base. Similar to the rest of the stereotype annotations, both @Cacheable and @CacheEvict can be used as meta-annotations, that is, annotations that can annotate other annotations. To wit, let us replace a common @Cacheable declaration with our own, custom annotation; the definition and the resulting declarations are sketched after this section.

With such a custom SlowService annotation, itself annotated with @Cacheable, a verbose @Cacheable declaration on a method can be replaced by the single @SlowService annotation. Even though @SlowService is not a Spring annotation, the container automatically picks up its declaration at runtime and understands its meaning. Note that, as mentioned above, the annotation-driven behaviour needs to be enabled.
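A hedged reconstruction of the missing listings (the definition of the custom annotation and the before/after declarations; names follow the surrounding text):

<programlisting language="java"><![CDATA[@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
@Cacheable(value="books", key="#isbn")
public @interface SlowService {
}

// verbose declaration...
@Cacheable(value="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

// ...replaced by the custom annotation
@SlowService
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)]]></programlisting>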
-
- -
+ +
Declarative XML-based caching

If annotations are not an option (no access to the sources or no external code allowed), one can use XML for declarative caching. So instead of annotating the methods for caching, one specifies the target method and the caching directives externally (similar to the declarative transaction management advice). The previous example can be translated into:

<!-- cache:advice and aop:config definitions (garbled in this hunk); cache manager definition omitted -->

In the configuration above, the bookService is made cacheable. The caching semantics to apply are encapsulated in the cache:advice definition, which instructs method findBooks to be used for putting data into the cache and method loadBooks for evicting data. Both definitions work against the books cache.

The aop:config definition applies the cache advice to the appropriate points in the program by using an AspectJ pointcut expression (more information is available in the AOP chapter). In the example above, all methods from the BookService are considered and the cache advice is applied to them.

The declarative XML caching supports all of the annotation-based model, so moving between the two should be fairly easy; furthermore, both can be used inside the same application. The XML-based approach does not touch the target code, however it is inherently more verbose; when dealing with classes with overloaded methods that are targeted for caching, identifying the proper methods takes extra effort since the method argument is not a good discriminator - in these cases, the AspectJ pointcut can be used to cherry-pick the target methods and apply the appropriate caching functionality. However, through XML it is easier to apply package/group/interface-wide caching (again due to the AspectJ pointcut) and to create template-like definitions (as we did in the example above by defining the target cache through the cache:definitions cache attribute).
- -
+ + + In the configuration above, the bookService is made cacheable. The caching semantics to apply are encapsulated in the cache:advice definition which + instructs method findBooks to be used for putting data into the cache while method loadBooks for evicting data. Both definitions are working against the + books cache. + + The aop:config definition applies the cache advice to the appropriate points in the program by using the AspectJ pointcut expression (more information is available + in ). In the example above, all methods from the BookService are considered and the cache advice applied to them. + + The declarative XML caching supports all of the annotation-based model so moving between the two should be fairly easy - further more both can be used inside the same application. + The XML based approach does not touch the target code however it is inherently more verbose; when dealing with classes with overloaded methods that are targeted for caching, identifying the + proper methods does take an extra effort since the method argument is not a good discriminator - in these cases, the AspectJ pointcut can be used to cherry pick the target + methods and apply the appropriate caching functionality. Howeve through XML, it is easier to apply a package/group/interface-wide caching (again due to the AspectJ poincut) and to create + template-like definitions (as we did in the example above by defining the target cache through the cache:definitions cache attribute). + +
+ +
Configuring the cache storage

Out of the box, the cache abstraction provides integration with two storages - one on top of the JDK ConcurrentMap and one for the Ehcache library. To use them, one simply needs to declare an appropriate CacheManager - an entity that controls and manages Caches and that can be used to retrieve them for storage.
JDK <interfacename>ConcurrentMap</interfacename>-based <interfacename>Cache</interfacename>

The JDK-based Cache implementation resides in the org.springframework.cache.concurrent package. It allows one to use a ConcurrentHashMap as a backing Cache store.

<!-- SimpleCacheManager bean definition with two nested cache beans (garbled in this hunk) -->

The snippet above uses the SimpleCacheManager to create a CacheManager for the two nested ConcurrentMap-based Cache implementations named default and books. Note that the names are configured directly for each cache.

As the cache is created by the application, it is bound to its lifecycle, making it suitable for basic use cases, tests or simple applications. The cache scales well and is very fast but it does not provide any management or persistence capabilities, nor eviction contracts.
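Since the XML listing did not survive, here is a hedged programmatic equivalent that also shows the resulting CacheManager/Cache API in action (cache names follow the text; standalone initialization via afterPropertiesSet() is an assumption for use outside a container):

<programlisting language="java"><![CDATA[import java.util.Arrays;

import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCache;
import org.springframework.cache.support.SimpleCacheManager;

public class ConcurrentMapCacheSetup {

    public static void main(String[] args) throws Exception {
        // two ConcurrentMap-backed caches, named directly as in the example
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(
                new ConcurrentMapCache("default"),
                new ConcurrentMapCache("books")));
        cacheManager.afterPropertiesSet(); // normally called by the container

        Cache books = cacheManager.getCache("books");
        books.put("123-456", "Some Book");
        System.out.println(books.get("123-456").get());
    }
}]]></programlisting>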
- +
Ehcache-based <interfacename>Cache</interfacename>

The Ehcache implementation is located in the org.springframework.cache.ehcache package. Again, to use it, one simply needs to declare the appropriate CacheManager:

<!-- EhCacheCacheManager bean definition wired to an ehcache bean (garbled in this hunk) -->

This setup bootstraps the ehcache library inside the Spring IoC container (through the ehcache bean), which is then wired into the dedicated CacheManager implementation. Note that the entire ehcache-specific configuration is read from the resource ehcache.xml.
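A hedged Java-configuration equivalent of the lost listing (bean names mirror the text; it assumes ehcache.xml on the classpath along with the Ehcache jars):

<programlisting language="java"><![CDATA[import net.sf.ehcache.CacheManager;

import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
public class EhcacheConfig {

    // bootstraps the native Ehcache CacheManager from ehcache.xml
    @Bean
    public EhCacheManagerFactoryBean ehcache() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        return factory;
    }

    // wires the native manager into Spring's cache abstraction
    @Bean
    public EhCacheCacheManager cacheManager(CacheManager ehcache) {
        EhCacheCacheManager cacheManager = new EhCacheCacheManager();
        cacheManager.setCacheManager(ehcache);
        return cacheManager;
    }
}]]></programlisting>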
- +
Dealing with caches without a backing store

Sometimes when switching environments or doing testing, one might have cache declarations without an actual backing cache configured. As this is an invalid configuration, at runtime an exception will be thrown since the caching infrastructure is unable to find a suitable store. In situations like this, rather than removing the cache declarations (which can prove tedious), one can wire in a simple dummy cache that performs no caching - that is, it forces the cached methods to be executed every time:

<!-- CompositeCacheManager bean definition chaining the configured cache managers (garbled in this hunk) -->

The CompositeCacheManager above chains multiple CacheManagers and additionally, through the addNoOpManager flag, adds a no-op cache for all the definitions not handled by the configured cache managers. That is, every cache definition not found in either jdkCache or gemfireCache (configured above) will be handled by the no-op cache, which will not store any information, causing the target method to be executed every time.
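A hedged programmatic sketch of the same arrangement (the setFallbackToNoOpCache property name is an assumption; jdkCache and gemfireCache stand for the managers referenced above):

<programlisting language="java"><![CDATA[import java.util.Arrays;

import org.springframework.cache.CacheManager;
import org.springframework.cache.support.CompositeCacheManager;

public class CompositeSetup {

    public static CacheManager compositeCacheManager(CacheManager jdkCache, CacheManager gemfireCache) {
        CompositeCacheManager composite = new CompositeCacheManager();
        composite.setCacheManagers(Arrays.asList(jdkCache, gemfireCache));
        // property name assumed; it enables the no-op fallback described above
        composite.setFallbackToNoOpCache(true);
        return composite;
    }
}]]></programlisting>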
-
- -
+
+ +
Plugging-in different back-end caches

Clearly there are plenty of caching products out there that can be used as a backing store. To plug them in, one needs to provide a CacheManager and a Cache implementation, since unfortunately there is no available standard that we can use instead. This may sound harder than it is since, in practice, the classes tend to be simple adapters that map the caching abstraction framework on top of the storage API, as the ehcache classes show. Most CacheManager classes can use the classes in the org.springframework.cache.support package, such as AbstractCacheManager, which takes care of the boilerplate code, leaving only the actual mapping to be completed. We hope that, in time, the libraries that provide integration with Spring can fill in this small configuration gap.
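A rough sketch of such an adapter (NativeClient and NativeClientCache are hypothetical stand-ins for a provider's API; only the loadCaches hook from AbstractCacheManager is shown):

<programlisting language="java"><![CDATA[import java.util.ArrayList;
import java.util.Collection;

import org.springframework.cache.Cache;
import org.springframework.cache.support.AbstractCacheManager;

public class NativeClientCacheManager extends AbstractCacheManager {

    private final NativeClient client; // hypothetical provider API

    public NativeClientCacheManager(NativeClient client) {
        this.client = client;
    }

    @Override
    protected Collection<? extends Cache> loadCaches() {
        // wrap each native region in an adapter implementing Spring's Cache contract
        Collection<Cache> caches = new ArrayList<Cache>();
        for (String name : client.regionNames()) {
            caches.add(new NativeClientCache(client.region(name))); // NativeClientCache implements Cache
        }
        return caches;
    }
}]]></programlisting>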
- -
- +
+ +
How can I set the TTL/TTI/Eviction policy/XXX feature?

Directly through your cache provider. The cache abstraction is... well, an abstraction, not a cache implementation. The solution you are using might support various data policies and different topologies which other solutions do not (take for example the JDK ConcurrentHashMap) - exposing that in the cache abstraction would be useless simply because there would be no backing support. Such functionality should be controlled directly through the backing cache, when configuring it, or through its native API.
+