Let’s talk about the invisible force that can make or break your e-commerce success: performance. Think of your store as a physical retail space - every extra second a customer waits is like adding another step between them and your products. Recent studies from Deloitte have put numbers to what we’ve long suspected: even the smallest delay can have ripple effects throughout your business:

  • A mere 0.1-second improvement in load time can boost retail conversions by 8.4%
  • The same speed boost correlates with a 9.2% increase in average order value
  • Travel-related purchases see an even bigger jump, with conversions rising by 10.1%
  • For luxury brands, faster load times lead to 8.6% more page views per session

While these findings emerged from mobile commerce research, they paint a clear picture for all platforms. In our interconnected world, where customers seamlessly switch between phones, tablets, and desktops, performance isn’t just a technical metric—it’s a fundamental business driver.

Why Caching Matters

Picture your e-commerce store during a flash sale. Every time a customer views a product, your server:

  1. Queries the database for product details
  2. Calculates current stock levels
  3. Applies active discounts
  4. Formats the response
  5. Sends it back to the customer

Each of these steps takes time, and under load that time adds up. With mobile commerce expected to account for three-quarters of total e-commerce sales, and desktop users expecting equally snappy experiences, optimizing your store's performance isn't optional; it's essential. Caching addresses this by storing frequently accessed data in a fast-access storage layer, serving pre-computed results instantly instead of recalculating everything from scratch on each request.
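This is the classic "cache-aside" pattern: check the cache first, fall back to the expensive work on a miss, and store the result for next time. Here's a minimal, self-contained sketch of the idea in TypeScript; the `Map` and the `fetchProductFromDb` helper are illustrative stand-ins for a real cache layer and the five server steps above, not Medusa APIs:

```typescript
type Product = { id: string; title: string; price: number }

// A simple in-memory store playing the role of the cache layer
const cache = new Map<string, Product>()

// Simulated expensive lookup (database query, stock check, discounts, formatting)
async function fetchProductFromDb(id: string): Promise<Product> {
  return { id, title: `Product ${id}`, price: 1000 }
}

async function getProduct(id: string): Promise<Product> {
  const key = `product:${id}`

  // Cache hit: serve the pre-computed result instantly
  const cached = cache.get(key)
  if (cached) {
    return cached
  }

  // Cache miss: do the expensive work once, then store it for future requests
  const product = await fetchProductFromDb(id)
  cache.set(key, product)
  return product
}
```

The first call pays the full cost; every subsequent call for the same key is served from memory.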

In the short video below, we showcase the performance benefits of caching in a Medusa store.

The first request takes more than 1s to complete, while the second request is almost instant (7ms).

Caching Strategies

Medusa ships with two cache modules out of the box: an “In-Memory” cache for development and a “Redis” cache for production environments. Both can be leveraged in various ways throughout your application.

In this guide, we’ll explore practical implementations of caching strategies, focusing on implementing caching in your API routes.

Wherever you have access to the “Medusa container”, you can resolve the cache module and use it however you like. For example, you can implement caching in your workflows or API routes.

Custom API Route Caching

We’ll implement caching in a custom API route that fetches products. This example demonstrates how to check the cache before hitting the database, and how to store results in the cache for future requests:

route.ts
// src/api/store/custom/route.ts

import { Product } from ".medusa/types/remote-query-entry-points"
import {
  MedusaRequest,
  MedusaResponse,
} from "@medusajs/framework/http"
import {
  ContainerRegistrationKeys,
  Modules,
} from "@medusajs/framework/utils"

export const GET = async (
  req: MedusaRequest,
  res: MedusaResponse
) => {
  const query = req.scope.resolve(ContainerRegistrationKeys.QUERY)
  const cacheService = req.scope.resolve(Modules.CACHE)

  // ℹ️ Define a cache key
  // The cache key can be unique or not, depending on the query.
  // For example, we could add the pagination parameters
  // to the key to make sure we cache different pages of products separately
  const CACHE_KEY = "medusa:products"

  // ℹ️ First, we check if the data is cached
  const cached = await cacheService.get<{ data: Product[] }>(CACHE_KEY)

  // ℹ️ If the data is cached, we return it immediately
  if (cached?.data) {
    return res.json({ data: cached.data })
  }

  // ℹ️ If the data is not cached, we fetch it from the database
  const { data } = await query.graph({
    entity: "product",
    fields: ["*"]
  })

  // ℹ️ We store the fetched data in the cache, for future requests
  await cacheService.set(CACHE_KEY, { data })

  // ℹ️ Finally, we return the fetched data
  res.json({ data })
}

By using this approach, we can significantly reduce the number of database queries and improve the overall performance of our API.

Depending on your use case, you might want to define a unique cache key for your query. For example, you might want to cache different pages of products separately (e.g. medusa:products:page=1 and medusa:products:page=2).
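One way to sketch this is with a small helper that derives the key from the pagination parameters. The helper name and key format below are our own, not a Medusa convention:

```typescript
// Builds a pagination-aware cache key, e.g. "medusa:products:limit=20:offset=40",
// so that each page of products is cached under its own entry.
function buildProductsCacheKey(limit: number, offset: number): string {
  return `medusa:products:limit=${limit}:offset=${offset}`
}
```

In the route above, you could then derive the key from the request, e.g. `buildProductsCacheKey(Number(req.query.limit ?? 20), Number(req.query.offset ?? 0))`, instead of using a single shared constant.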

Core API Route Caching

While custom API routes give us direct control over caching implementation, what about Medusa’s core API routes?

Since we don’t have direct access to modify these routes, we can leverage middlewares to implement caching. This approach allows us to “intercept” requests and responses to add caching functionality.

Let’s implement caching for Medusa’s product listing endpoint (/store/products):

middlewares.ts
// src/api/middlewares.ts
import {
  defineMiddlewares,
  type MedusaNextFunction,
  type MedusaRequest,
  type MedusaResponse,
} from "@medusajs/framework/http"
import { Modules } from "@medusajs/framework/utils"
import { type HttpTypes } from "@medusajs/framework/types"

export default defineMiddlewares({
  routes: [
    {
      matcher: "/store/products", // ℹ️ The core API route we want to cache
      method: "GET",
      middlewares: [
        async (
          req: MedusaRequest,
          res: MedusaResponse,
          next: MedusaNextFunction
        ) => {
          const cacheModule = req.scope.resolve(Modules.CACHE)

          // ℹ️ This is the part responsible for retrieving the products from the cache
          const cacheKey = `medusa:products`
          const cachedProducts = await cacheModule.get<HttpTypes.StoreProductListResponse>(cacheKey)

          if (cachedProducts) {
            res.json(cachedProducts)
            return
          }

          // ℹ️ This is the part responsible for caching the products after they are retrieved from the database
          const originalJsonFn = res.json
          Object.assign(res, {
            json: async function (body: HttpTypes.StoreProductListResponse) {
              await cacheModule.set(cacheKey, body)
              await originalJsonFn.call(res, body)
            },
          })

          next()
        },
      ],
    },
  ],
})

This middleware implementation uses a technique called “response interception”. Here’s how it works:

  1. Check Cache First: When a request hits the /store/products endpoint, we first check if we have a cached response
  2. Return Cached Data: If cached data exists, we return it immediately without hitting the database
  3. Intercept Response: If no cache exists, we override the res.json method to:
    • Capture the response data
    • Store it in cache for future requests
    • Send the response to the client

This approach is particularly powerful because:

  • It works with any core API route
  • Requires no modification to the original route handlers
  • Can be easily extended to handle query parameters
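For instance, the middleware's cache key could incorporate the request's query parameters, so that `/store/products?limit=10` and `/store/products?limit=20` are cached separately. Sorting the parameter names first makes the key independent of their order in the URL. The helper below is an illustrative sketch, not part of Medusa:

```typescript
// Derives a cache key from a request's query parameters.
// Sorting the keys ensures "?limit=10&offset=0" and "?offset=0&limit=10"
// map to the same cache entry.
function cacheKeyFromQuery(query: Record<string, string | string[]>): string {
  const parts = Object.keys(query)
    .sort()
    .map((k) => {
      const v = query[k]
      return `${k}=${Array.isArray(v) ? v.join(",") : v}`
    })
  return parts.length ? `medusa:products:${parts.join("&")}` : "medusa:products"
}
```

In the middleware, you would replace the constant `cacheKey` with `cacheKeyFromQuery(req.query)`.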

Good to know: You can also set the TTL (time-to-live) for the cache, to make sure the cached data expires after a certain amount of time, depending on your use case.

const ONE_DAY_IN_SECONDS = 60 * 60 * 24
await cacheService.set(CACHE_KEY, { data }, ONE_DAY_IN_SECONDS) // cache for 1 day (TTL is in seconds)

Next Steps