Caching
Implementing caching in Medusa
Let’s talk about the invisible force that can make or break your e-commerce success: performance. Think of your store as a physical retail space - every extra second a customer waits is like adding another step between them and your products. Recent studies from Deloitte have put numbers to what we’ve long suspected: even the smallest delay can have ripple effects throughout your business:
- A mere 0.1-second improvement in load time can boost retail conversions by 8.4%
- The same speed boost correlates with a 9.2% increase in average order value
- Travel-related purchases see an even bigger jump, with conversions rising by 10.1%
- For luxury brands, faster load times lead to 8.6% more page views per session
While these findings emerged from mobile commerce research, they paint a clear picture for all platforms. In our interconnected world, where customers seamlessly switch between phones, tablets, and desktops, performance isn’t just a technical metric—it’s a fundamental business driver.
Why Caching Matters
Picture your e-commerce store during a flash sale. Every time a customer views a product, your server:
- Queries the database for product details
- Calculates current stock levels
- Applies active discounts
- Formats the response
- Sends it back to the customer
With mobile commerce expected to account for three-quarters of total e-commerce sales, and desktop users expecting equally snappy experiences, optimizing your store’s performance isn’t optional—it’s essential. Caching solves this by storing frequently accessed data in a fast-access storage layer, serving pre-computed results instantly instead of calculating everything from scratch for each request.
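The pattern this describes is usually called "cache-aside": check a fast store first, do the expensive work only on a miss, and save the result for the next request. Here is a minimal, framework-agnostic sketch, where `fetchProductFromDb` is a hypothetical stand-in for the database query, stock calculation, and discount steps listed above:

```typescript
// Minimal cache-aside sketch (framework-agnostic).
const cache = new Map<string, { value: unknown; expiresAt: number }>()

// Hypothetical stand-in for the expensive per-request work.
async function fetchProductFromDb(id: string) {
  return { id, title: `Product ${id}`, price: 1000 }
}

async function getProduct(id: string, ttlMs = 30_000) {
  const key = `product:${id}`
  const hit = cache.get(key)
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value // served from cache: no database work
  }
  const value = await fetchProductFromDb(id)
  cache.set(key, { value, expiresAt: Date.now() + ttlMs })
  return value
}
```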
In this short video, we showcase the performance benefits of caching in a Medusa store.
The first request takes more than 1s to complete, while the second request is almost instant (7ms).
Caching Strategies
Medusa comes with two built-in cache modules out of the box - an “In-Memory” cache for development and a “Redis” cache for production environments. These powerful caching tools can be leveraged in various ways throughout your application.
In this guide, we’ll explore practical implementations of caching strategies, focusing on implementing caching in your API routes.
Whenever you have access to the "Medusa container", you can resolve the cache module and use it however you need - for example, to implement caching in your workflows or API routes.
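As a rough sketch of what that looks like: the `CacheService` interface below mirrors the general shape of Medusa's cache module (`get`/`set`/`invalidate`), but the exact method names and the resolution call (e.g. `req.scope.resolve(...)` inside a route) depend on your Medusa version. A toy container and in-memory implementation keep the example self-contained:

```typescript
// Sketch: resolving a cache service from a DI container.
// The interface mirrors the shape of Medusa's cache module;
// verify exact names against your Medusa version.
interface CacheService {
  get<T>(key: string): Promise<T | null>
  set(key: string, data: unknown, ttl?: number): Promise<void>
  invalidate(key: string): Promise<void>
}

// Minimal in-memory stand-in so the sketch is runnable on its own.
class InMemoryCache implements CacheService {
  private store = new Map<string, unknown>()
  async get<T>(key: string): Promise<T | null> {
    return (this.store.get(key) as T) ?? null
  }
  async set(key: string, data: unknown): Promise<void> {
    this.store.set(key, data)
  }
  async invalidate(key: string): Promise<void> {
    this.store.delete(key)
  }
}

// Toy container: in a Medusa route you would call req.scope.resolve(...) instead.
const container = new Map<string, unknown>([["cacheService", new InMemoryCache()]])
const cacheService = container.get("cacheService") as CacheService
```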
Custom API Route Caching
We’ll implement caching in a custom API route that fetches products. This example demonstrates how to check the cache before hitting the database, and how to store results in the cache for future requests:
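A sketch of what such a route could look like. To keep the example self-contained, the cache and the product query below are simplified in-memory stand-ins; in a real Medusa route you would resolve the cache module and query products through the container instead:

```typescript
// Sketch of a custom "list products" route with cache-aside logic.
// The cache and product query are stand-ins for Medusa's cache module
// and product service; adapt the resolution calls to your setup.
type Product = { id: string; title: string }

interface CacheService {
  get<T>(key: string): Promise<T | null>
  set(key: string, data: unknown, ttl?: number): Promise<void>
}

const memory = new Map<string, unknown>()
const cacheService: CacheService = {
  async get<T>(key: string) { return (memory.get(key) as T) ?? null },
  async set(key, data) { memory.set(key, data) },
}

// Stand-in for the real product query; counts calls so the
// effect of caching is observable.
let dbCalls = 0
async function listProductsFromDb(): Promise<Product[]> {
  dbCalls++
  return [{ id: "prod_1", title: "Medusa Tee" }]
}

// The route handler: check the cache first, fall back to the
// database, then store the result for future requests.
export async function GET(): Promise<{ products: Product[]; cached: boolean }> {
  const cacheKey = "medusa:products"
  const hit = await cacheService.get<Product[]>(cacheKey)
  if (hit) {
    return { products: hit, cached: true }
  }
  const products = await listProductsFromDb()
  await cacheService.set(cacheKey, products, 60) // TTL in seconds
  return { products, cached: false }
}
```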
By using this approach, we can significantly reduce the number of database queries and improve the overall performance of our API.
Depending on your use case, you might want to define a unique cache key for your query. For example, you might want to cache different pages of products separately (e.g. `medusa:products:page=1` and `medusa:products:page=2`).
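A small helper for building such keys might look like this (the `productsCacheKey` name is ours for illustration, not a Medusa API):

```typescript
// Sketch: build a cache key that includes pagination parameters,
// so each page of results is cached separately.
function productsCacheKey(page: number, limit: number): string {
  return `medusa:products:page=${page}:limit=${limit}`
}
```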
Core API Route Caching
While custom API routes give us direct control over caching implementation, what about Medusa’s core API routes?
Since we don’t have direct access to modify these routes, we can leverage middlewares to implement caching. This approach allows us to “intercept” requests and responses to add caching functionality.
Let’s implement caching for Medusa’s product listing endpoint (`/store/products`):
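A self-contained sketch of such a middleware follows. The request/response shapes are minimal Express-style stand-ins, and `responseCache` is a plain in-memory map standing in for Medusa's cache module; in a real project you would register the middleware in your Medusa middlewares file and resolve the cache module from the request scope:

```typescript
// Sketch of a caching middleware using "response interception".
// Minimal Express-style shapes keep the example self-contained.
type Json = Record<string, unknown>
type Req = { path: string }
type Res = { json: (body: Json) => void }
type Next = () => void

// Stand-in for Medusa's cache module.
const responseCache = new Map<string, Json>()

function cacheMiddleware(req: Req, res: Res, next: Next) {
  const key = `medusa:${req.path}`

  // 1. Check cache first: return immediately on a hit.
  const cached = responseCache.get(key)
  if (cached) {
    res.json(cached)
    return
  }

  // 2. Intercept the response: wrap res.json so the body is
  //    stored in the cache before being sent to the client.
  const originalJson = res.json.bind(res)
  res.json = (body: Json) => {
    responseCache.set(key, body)
    originalJson(body)
  }

  // 3. Let the route handler run as usual.
  next()
}
```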
This middleware implementation uses a technique called “response interception”. Here’s how it works:
- **Check Cache First**: When a request hits the `/store/products` endpoint, we first check if we have a cached response
- **Return Cached Data**: If cached data exists, we return it immediately without hitting the database
- **Intercept Response**: If no cache exists, we override the `res.json` method to:
  - Capture the response data
  - Store it in cache for future requests
  - Send the response to the client
This approach is particularly powerful because:
- It works with any core API route
- Requires no modification to the original route handlers
- Can be easily extended to handle query parameters
Good to know: You can also set the TTL (time-to-live) for the cache, to make sure the cached data expires after a certain amount of time, depending on your use case.
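To illustrate how a TTL behaves, here is a small sketch that emulates expiry with a timestamp in a plain map. Medusa's cache module typically accepts a TTL argument when setting a value, so in practice you would pass it there rather than implementing expiry yourself:

```typescript
// Sketch: a cache entry with a TTL (time-to-live), emulated with
// an expiry timestamp. Entries past their expiry behave like misses.
const ttlCache = new Map<string, { value: unknown; expiresAt: number }>()

function setWithTtl(key: string, value: unknown, ttlSeconds: number) {
  ttlCache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 })
}

function getIfFresh<T>(key: string): T | null {
  const entry = ttlCache.get(key)
  if (!entry || entry.expiresAt <= Date.now()) {
    ttlCache.delete(key) // drop stale entries
    return null
  }
  return entry.value as T
}
```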