kemal-cache
Powerful Caching For Kemal Applications
kemal-cache is production-oriented response caching middleware for Kemal.
It is built for teams that want lower response times, less repeated work, and safer HTTP caching behavior without bolting on a large framework.
Use it when your application serves expensive pages, API responses, catalog endpoints, content feeds, or read-heavy routes that should be fast on repeat requests.
Why kemal-cache
- Kemal-native middleware with a clean Crystal API
- safe-by-default behavior for authenticated, cookie-bearing, and private responses
- in-memory and Redis-backed stores
- custom cache keys, filters, invalidation, and TTL policies
- optional per-process cache stampede protection
- automatic `ETag` and `Last-Modified` generation
- conditional request support with `304 Not Modified`
- built-in counters and event hooks for observability
- focused surface area that stays easy to reason about
What You Get
kemal-cache is designed to cover the caching capabilities most Kemal apps actually need:
- route-level response caching with minimal setup
- storage-agnostic design via `Store`
- strong default rules around what should not be cached
- request-aware cache keys
- explicit invalidation APIs
- safe fallback behavior when cached payloads are corrupt
- deployment flexibility from single-process apps to multi-instance Redis-backed setups
Quick Start
Add the shard to `shard.yml`:

```yaml
dependencies:
  kemal-cache:
    github: kemalcr/kemal-cache
```

Install dependencies:

```shell
shards install
```

Then add the middleware:

```crystal
require "kemal-cache"

use Kemal::Cache::Handler.new

get "/articles" do
  "Expensive response"
end

Kemal.run
```
Every response will include `X-Kemal-Cache: MISS` or `X-Kemal-Cache: HIT`, so cache behavior is visible immediately during development and debugging.
Out-Of-The-Box Behavior
Without any configuration, kemal-cache:
- caches `GET` requests only
- uses `context.request.resource` as the cache key
- caches successful `2xx` responses only
- stores entries for `10.minutes`
- uses `Kemal::Cache::MemoryStore`
- bypasses requests with `Authorization` or `Cookie`
- skips storing responses with `Set-Cookie`
- skips storing responses with `Cache-Control: no-store`, `no-cache`, or `private`
- skips storing responses with `Vary: *`
- skips storing responses larger than `1_048_576` bytes
- skips storing responses that call `flush`
- auto-generates `ETag` and `Last-Modified`
- returns `304 Not Modified` for matching conditional requests
Those defaults are intentionally conservative so the middleware is useful in production without forcing you to hand-audit every route first.
Installation Notes
`require "kemal-cache"` loads the core middleware and `MemoryStore`.
If you want Redis support, add `redis` to your application and require the Redis entrypoint explicitly:

```yaml
dependencies:
  kemal-cache:
    github: kemalcr/kemal-cache
  redis:
    github: jgaskins/redis
```

```crystal
require "kemal-cache/redis"
```
This keeps the base package lean for applications that only need in-process caching.
Common Use Cases
Cache expensive HTML pages
```crystal
require "kemal-cache"

use Kemal::Cache::Handler.new

get "/pricing" do
  render "src/views/pricing.ecr"
end
```
Cache API responses with a custom TTL
```crystal
config = Kemal::Cache::Config.new(
  expires_in: 2.minutes
)

use Kemal::Cache::Handler.new(config)

get "/api/products" do
  ProductSerializer.render(ProductQuery.latest)
end
```
Share cache across multiple app instances with Redis
```crystal
require "kemal-cache/redis"

store = Kemal::Cache::RedisStore.from_env("REDIS_URL", namespace: "shop-api-cache")
config = Kemal::Cache::Config.new(store: store)

use Kemal::Cache::Handler.new(config)
```
Configuration
Create a custom `Kemal::Cache::Config` when you want to tune behavior:

```crystal
require "kemal-cache"

config = Kemal::Cache::Config.new(
  expires_in: 2.minutes,
  max_ttl: 10.minutes,
  ttl_resolver: ->(context : HTTP::Server::Context, key : String) { 2.minutes },
  collapse_concurrent_misses: false,
  cacheable_methods: ["GET"],
  cacheable_status_codes: [200, 202],
  max_body_bytes: 128_000,
  cache_streaming: false,
  auto_etag: true,
  auto_last_modified: true,
  conditional_get: true,
  skip_if: ->(context : HTTP::Server::Context) { context.request.path.starts_with?("/admin") },
  should_cache: ->(context : HTTP::Server::Context) { context.response.status_code == 202 }
)

use Kemal::Cache::Handler.new(config)
```
Cache Keys
By default the key is `context.request.resource`, which includes the path and query string.
Override it with `key_generator` when you need to change cache granularity:

```crystal
config = Kemal::Cache::Config.new(
  key_generator: ->(context : HTTP::Server::Context) do
    locale = context.request.headers["Accept-Language"]? || "default"
    "#{context.request.path}:#{locale}"
  end
)
```
TTL Policies
Use `expires_in` when every cached response should use the same TTL:

```crystal
config = Kemal::Cache::Config.new(
  expires_in: 10.minutes,
  max_ttl: 30.minutes
)
```
Use `ttl_resolver` when the TTL should vary by route or resolved cache key:

```crystal
config = Kemal::Cache::Config.new(
  key_generator: ->(context : HTTP::Server::Context) { context.request.path },
  max_ttl: 5.minutes,
  ttl_resolver: ->(context : HTTP::Server::Context, key : String) do
    case key
    when "/homepage"
      30.seconds
    when "/catalog"
      10.minutes
    else
      context.request.path.starts_with?("/api/") ? 15.seconds : nil
    end
  end
)
```
Returning `nil` falls back to `expires_in`.
When `max_ttl` is set, both `ttl_resolver` results and the `expires_in` fallback are clamped to that maximum.
Invalid TTL behavior is fail-fast:
- `expires_in` must be positive
- `max_ttl`, when set, must be positive
- `ttl_resolver` may return `nil` or a positive TTL
- zero or negative resolved TTL values raise `ArgumentError`
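As a sketch, the resolution order can be restated as a small pure function (illustrative only; `effective_ttl` is not part of the shard's API):

```crystal
# Illustrative restatement of the documented TTL rules, not library code.
def effective_ttl(resolved : Time::Span?, expires_in : Time::Span, max_ttl : Time::Span?) : Time::Span
  # zero or negative resolved TTLs are rejected
  raise ArgumentError.new("TTL must be positive") if resolved && resolved <= Time::Span.zero

  ttl = resolved || expires_in        # nil falls back to expires_in
  max_ttl ? {ttl, max_ttl}.min : ttl  # clamp to max_ttl when configured
end

effective_ttl(30.seconds, 10.minutes, 5.minutes) # -> 30.seconds (already under max_ttl)
effective_ttl(nil, 10.minutes, 5.minutes)        # -> 5.minutes (fallback, clamped)
```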
Cache Stampede Protection
Enable `collapse_concurrent_misses` to coalesce concurrent cache misses for the same resolved key within the current process:

```crystal
config = Kemal::Cache::Config.new(
  collapse_concurrent_misses: true
)
```
When enabled, the first request computes and stores the response while other in-flight requests for the same key wait and retry the cache read after the leader finishes.
This protection is opt-in and process-local. It reduces duplicate origin work inside one app instance, but it does not coordinate across multiple processes or hosts.
Methods And Status Codes
Opt in to additional HTTP methods:

```crystal
config = Kemal::Cache::Config.new(
  cacheable_methods: ["GET", "POST"]
)
```

Restrict or broaden the status-code policy:

```crystal
config = Kemal::Cache::Config.new(
  cacheable_status_codes: [200, 203, 301]
)
```

Pass `nil` to cache every response status code:

```crystal
config = Kemal::Cache::Config.new(
  cacheable_status_codes: nil
)
```
Request And Response Filters
Use `skip_if` to bypass both lookup and storage:

```crystal
config = Kemal::Cache::Config.new(
  skip_if: ->(context : HTTP::Server::Context) do
    context.request.query_params["preview"]? == "true"
  end
)
```

Use `should_cache` for the final storage decision after the response is built:

```crystal
config = Kemal::Cache::Config.new(
  should_cache: ->(context : HTTP::Server::Context) do
    context.response.status_code == 202
  end
)
```

Temporarily disable caching without removing the middleware:

```crystal
config = Kemal::Cache::Config.new(enabled: false)
```
Response Size And Streaming Guards
Adjust the body size limit:

```crystal
config = Kemal::Cache::Config.new(
  max_body_bytes: 128_000
)
```

Disable the body size limit entirely:

```crystal
config = Kemal::Cache::Config.new(
  max_body_bytes: nil
)
```

Allow caching responses that call `flush`:

```crystal
config = Kemal::Cache::Config.new(
  cache_streaming: true
)
```
HTTP Validators
Validator support is enabled by default:

```crystal
config = Kemal::Cache::Config.new(
  auto_etag: true,
  auto_last_modified: true,
  conditional_get: true
)
```

If your application already manages validator headers, kemal-cache preserves them.
You can disable automatic validators or conditional handling:

```crystal
config = Kemal::Cache::Config.new(
  auto_etag: false,
  auto_last_modified: false,
  conditional_get: false
)
```
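For example, a route can manage its own validator; a response that already carries an `ETag` keeps it instead of receiving a generated one. In this sketch, `fetch_report` is a hypothetical stand-in for application code:

```crystal
# Sketch: route-managed ETag. `fetch_report` is hypothetical application code.
get "/reports/:id" do |env|
  report = fetch_report(env.params.url["id"])
  env.response.headers["ETag"] = %("report-v#{report.version}")
  report.to_json
end
```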
Stores
MemoryStore
`Kemal::Cache::MemoryStore` is the default store.
It is thread-safe and process-local, which makes it a strong fit for development, single-instance deployments, and lightweight production services.
You can also cap the number of retained entries:

```crystal
store = Kemal::Cache::MemoryStore.new(max_entries: 10_000)
config = Kemal::Cache::Config.new(store: store)
```

When the limit is reached, the oldest entry is evicted on the next write.
RedisStore
`RedisStore` is intended for shared caching across multiple application instances:

```crystal
require "kemal-cache/redis"

store = Kemal::Cache::RedisStore.new(
  URI.parse("redis://localhost:6379/0"),
  namespace: "my-app-cache"
)
config = Kemal::Cache::Config.new(store: store)

use Kemal::Cache::Handler.new(config)
```

You can also build it from an environment variable:

```crystal
store = Kemal::Cache::RedisStore.from_env("REDIS_URL")
config = Kemal::Cache::Config.new(store: store)
```
`RedisStore#clear` removes namespaced keys using Redis `SCAN`, which avoids the blocking behavior of `KEYS` on large datasets.
Custom Stores
Build your own store by inheriting from `Kemal::Cache::Store`:

```crystal
class CustomStore < Kemal::Cache::Store
  def get(key : String) : String?
    # fetch from storage
  end

  def set(key : String, value : String, ttl : Time::Span) : Nil
    # write to storage with ttl
  end

  def delete(key : String) : Nil
    # delete a single key
  end

  def clear : Nil
    # clear the namespace
  end
end
```

Wire it into the config:

```crystal
config = Kemal::Cache::Config.new(store: CustomStore.new)
use Kemal::Cache::Handler.new(config)
```
Invalidation
Remove a cached entry by exact key:

```crystal
config = Kemal::Cache::Config.new
config.invalidate("/articles?page=2")
```

Use `try_invalidate` when you want a non-raising result:

```crystal
removed = config.try_invalidate("/articles?page=2")
```

Invalidate directly from a request context when the key depends on request data:

```crystal
post "/articles/cache/invalidate" do |env|
  config.invalidate(env)
  env.response.status_code = 204
end
```

You can also use the non-raising variant from a request context:

```crystal
post "/articles/cache/invalidate" do |env|
  env.response.status_code = config.try_invalidate(env) ? 204 : 503
end
```

Purge the configured store:

```crystal
config.clear_cache
```

Use `try_clear_cache` when cache backend failures should be handled without raising:

```crystal
cleared = config.try_clear_cache
```
Observability
Each config instance exposes thread-safe counters:
```crystal
config.stats.hits
config.stats.misses
config.stats.cacheable_requests
config.stats.stores
config.stats.store_errors
config.stats.bypasses
config.stats.not_modified
config.stats.invalidations
config.stats.clears
config.stats.requests
config.stats.hit_ratio
```
Semantics:
- `not_modified` is a subset of `hits`
- `cacheable_requests` is `hits + misses`
- `requests` is `cacheable_requests + bypasses`
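As one possible use, the counters can feed periodic metrics logging. The interval and log line in this sketch are arbitrary choices, not part of the shard:

```crystal
# Sketch: log cache stats once a minute from a background fiber.
spawn do
  loop do
    sleep 60.seconds
    s = config.stats
    Log.info { "cache hits=#{s.hits} misses=#{s.misses} bypasses=#{s.bypasses} hit_ratio=#{s.hit_ratio}" }
  end
end
```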
You can also subscribe to lifecycle events:
```crystal
config = Kemal::Cache::Config.new(
  on_event: ->(event : Kemal::Cache::Event) do
    Log.info do
      "type=#{event.type} key=#{event.key} path=#{event.path} " \
      "method=#{event.http_method} status=#{event.status_code} detail=#{event.detail}"
    end
  end
)

use Kemal::Cache::Handler.new(config)
```
Available event types: `Hit`, `Miss`, `Store`, `StoreError`, `Bypass`, `NotModified`, `Invalidate`, `Clear`.
Common bypass details include `disabled`, `method_not_cacheable`, `skip_if`, `authorization_header`, and `cookie_header`.
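For instance, the event hook can tally bypass reasons during a rollout. This sketch compares `event.type` by string to stay agnostic about how the type is represented; adjust it to the shard's actual event-type values:

```crystal
# Sketch: count bypass reasons observed by the middleware.
bypass_reasons = Hash(String, Int64).new(0_i64)

config = Kemal::Cache::Config.new(
  on_event: ->(event : Kemal::Cache::Event) do
    bypass_reasons[event.detail || "unknown"] += 1 if event.type.to_s == "Bypass"
  end
)
```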
Operational Notes
- `MemoryStore` is process-local, so each app instance keeps its own cache.
- Use `RedisStore` when multiple instances should share cached responses.
- `collapse_concurrent_misses` only collapses misses inside the current process.
- `clear_cache` only clears the configured store namespace.
- Corrupt cached payloads are discarded automatically and retried as cache misses.
- Store `get`, `set`, and corrupt-payload `delete` errors fail open at the middleware layer and are emitted as `StoreError` events.
- Upstream middleware headers are preserved unless the cached response intentionally replaces the same header name.
How It Works
On a cache miss, the middleware buffers the response body, decides whether the response is storable, persists it with the configured TTL, and then writes the response to the client.
On a cache hit, it restores the cached body, status, and response headers without invoking the rest of the handler chain.
For safer defaults, the middleware bypasses authenticated and cookie-bearing requests and refuses to store responses that explicitly opt out of caching.
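In outline, that lifecycle can be sketched as pseudocode (the names below are illustrative, not the handler's internals):

```crystal
# Pseudocode sketch of the request lifecycle described above.
# def call(context)
#   return call_next(context) if bypass?(context)   # skip_if, method, auth/cookie checks
#   if cached = store.get(key_for(context))
#     return replay(context, cached)                # hit: restore status, headers, body
#   end
#   buffer_response(context) { call_next(context) } # miss: run the handler chain
#   store.set(key_for(context), payload, ttl) if storable?(context)
# end
```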
Development
```shell
shards install
crystal spec
crystal tool format --check
```

To run the real Redis integration spec locally, start Redis and set `REDIS_URL`:

```shell
REDIS_URL=redis://localhost:6379/0 crystal spec
```
Contributing
- Fork it (https://github.com/kemalcr/kemal-cache/fork)
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
Contributors
- Serdar Dogruyol - Author