In the world of ecommerce, performance is crucial. Think of caching like a kitchen's workstation: frequently used ingredients are kept within arm's reach instead of requiring repeated trips to the pantry. Keeping commonly requested data readily available makes your application run more efficiently, just like a well-organized kitchen during service.

In this guide, we’ll explore how to implement caching in Medusa to optimize your application’s performance.

Overview

Medusa comes with two built-in cache modules out of the box: an In-Memory cache for development and a Redis cache for production environments. Both implement the same cache interface, so you can swap one for the other without changing your application code.

In this guide, we’ll explore practical implementations of caching strategies, focusing on implementing caching in your API routes.

Wherever you have access to the Medusa container, you can resolve the cache module and use it as needed. For example, you can implement caching in your workflows or API routes.


The short video below showcases the performance benefits of caching in a Medusa store.

The first request takes more than one second to complete, while the second, cached request is almost instant (7ms).

API Route Caching

We'll implement caching in an API route that fetches products. This example demonstrates how to check the cache before hitting the database, and how to store the results in the cache for future requests:

route.ts
// src/api/store/custom/route.ts

import { Product } from ".medusa/types/remote-query-entry-points"
import {
  MedusaRequest,
  MedusaResponse,
} from "@medusajs/framework/http"
import {
  ContainerRegistrationKeys,
  Modules,
} from "@medusajs/framework/utils"

export const GET = async (
  req: MedusaRequest,
  res: MedusaResponse
) => {
  const query = req.scope.resolve(ContainerRegistrationKeys.QUERY)
  const cacheService = req.scope.resolve(Modules.CACHE)

  // ℹ️ Define a cache key
  // The cache key can be unique or not, depending on the query.
  // For example, we could add the pagination parameters
  // to the key to make sure we cache different pages of products separately
  const CACHE_KEY = "medusa:products"

  // ℹ️ First, we check if the data is cached
  const cached = await cacheService.get<{ data: Product[] }>(CACHE_KEY)

  // ℹ️ If the data is cached, we return it immediately
  if (cached?.data) {
    return res.json({ data: cached.data })
  }

  // ℹ️ If the data is not cached, we fetch it from the database
  const { data } = await query.graph({
    entity: "product",
    fields: ["*"]
  })

  // ℹ️ We store the fetched data in the cache, for future requests
  await cacheService.set(CACHE_KEY, { data })

  // ℹ️ Finally, we return the fetched data
  res.json({ data })
}

By using this approach, we can significantly reduce the number of database queries and improve the overall performance of our API.

Depending on your use case, you might want to make the cache key unique per query. For example, you might cache different pages of products separately (e.g. medusa:products:page=1 and medusa:products:page=2).
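One way to derive such keys is a small helper that appends the relevant query parameters to the base key. This is a hypothetical sketch — buildProductsCacheKey is not part of Medusa's API, just an illustration of the idea:

```typescript
// Hypothetical helper: builds a cache key that includes pagination
// parameters, so each page of products is cached under its own key.
function buildProductsCacheKey(params: { page?: number; limit?: number }): string {
  const parts = ["medusa:products"]
  if (params.page !== undefined) {
    parts.push(`page=${params.page}`)
  }
  if (params.limit !== undefined) {
    parts.push(`limit=${params.limit}`)
  }
  return parts.join(":")
}

// buildProductsCacheKey({})                     -> "medusa:products"
// buildProductsCacheKey({ page: 2, limit: 20 }) -> "medusa:products:page=2:limit=20"
```

In the API route above, you could call such a helper with the request's pagination parameters instead of using the fixed CACHE_KEY constant.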

Good to know: You can also set a TTL (time-to-live) when storing data, so cached entries expire after a certain amount of time, depending on your use case.

const ONE_DAY = 60 * 60 * 24 // TTL is expressed in seconds
await cacheService.set(CACHE_KEY, { data }, ONE_DAY) // keep in cache for 1 day
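To build intuition for what a TTL does, here is a minimal, self-contained in-memory cache sketch (not Medusa's actual implementation): each entry stores an expiry timestamp, and any read past that timestamp behaves as a cache miss.

```typescript
// Minimal TTL cache sketch. The clock is injectable (defaulting to
// Date.now) so expiry behavior can be demonstrated deterministically.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>()

  constructor(private now: () => number = () => Date.now()) {}

  set(key: string, value: T, ttlSeconds: number): void {
    // Record when this entry stops being valid.
    this.store.set(key, { value, expiresAt: this.now() + ttlSeconds * 1000 })
  }

  get(key: string): T | null {
    const entry = this.store.get(key)
    if (!entry) return null
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key) // expired: evict and report a miss
      return null
    }
    return entry.value
  }
}
```

A real cache like Redis handles expiry for you server-side; the point here is only that an expired entry is indistinguishable from a miss, which is what lets the API route above fall through to the database and repopulate the cache.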

Remember, like a well-organized store layout, effective caching strategies can significantly improve your customers’ shopping experience by reducing response times and server load.

Next Steps