Transparent versus non-transparent caching
One of the divisions in caching is between what I will call transparent and non-transparent caches. Transparent caches are ones where the only thing you are supposed to notice is that things are faster; non-transparent caches require you to manage them explicitly, especially cache invalidation. Operating system disk caches are an example of transparent caches, at least in theory.
(At some level every cache is non-transparent, because it has to be managed by someone's code. So this is just a question of how a cache looks to your layer of code.)
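To make the distinction concrete, here is a minimal sketch of how the two styles look from the caller's side. All of the names here are hypothetical illustrations, not any real API: a non-transparent cache exposes an explicit invalidate() that the caller must remember to use, while a transparent one just wraps the real (slow) lookup.

```python
class NonTransparentCache:
    """Hypothetical non-transparent cache: the caller manages entries
    explicitly, including deciding when to invalidate them."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

    def invalidate(self, key):
        # The caller must know when a cached result has gone stale.
        self._data.pop(key, None)


class TransparentLookup:
    """Hypothetical transparent cache: callers just call lookup() and are
    only supposed to notice that repeated calls are faster. Note that this
    sketch has no invalidation at all; getting that completely right is
    exactly the hard part of a real transparent cache."""
    def __init__(self, fetch):
        self._fetch = fetch      # the real (slow) lookup function
        self._cache = {}

    def lookup(self, key):
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]
```

The caller of TransparentLookup never sees the cache; the caller of NonTransparentCache is the cache's manager.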
Transparent caches are harder to implement than non-transparent ones; because they have to be effectively invisible, they must get cache invalidation completely correct, which is surprisingly hard. Non-transparent caches leave the work of invalidation to your code, where you're in a position to know what results can be a little stale (and so have simpler cache invalidation strategies) and what results need complete accuracy.
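As a sketch of what "a little stale is fine" buys you: when your code knows a result only needs to be roughly current, invalidation can collapse into a simple age check. This is an illustrative example, not a real library; the names and the max_age parameter are my own invention.

```python
import time

class StaleOKCache:
    """Hypothetical non-transparent cache where the caller has decided that
    results may be up to max_age seconds stale. That decision turns cache
    invalidation into a trivial freshness check instead of a hard problem."""
    def __init__(self, fetch, max_age):
        self._fetch = fetch
        self._max_age = max_age
        self._cache = {}         # key -> (value, time when cached)

    def get(self, key):
        now = time.monotonic()
        entry = self._cache.get(key)
        if entry is not None and now - entry[1] < self._max_age:
            return entry[0]      # stale-but-acceptable, per the caller's choice
        value = self._fetch(key)
        self._cache[key] = (value, now)
        return value
```

A transparent cache cannot make this bargain on its own, because it has no idea which callers can tolerate stale answers.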
Adding caching to an existing software layer to speed it up almost always requires that the caching be transparent. Even if the results of a non-transparent cache can technically be justified under a careful reading of the layer's specification, no one is going to like you very much; after all, their code is breaking because of something you did.
(This is analogous to compiler optimization, where no one cares how much ANSI C lets your compiler get away with if it breaks their program, whether or not their code was technically illegal or counting on implementation-defined behavior. This is more or less the china shop rule: if you broke it, it's your responsibility, no matter how fragile it was to start with.)