Caching in ConnectSoft Microservice Template

Purpose & Overview

Caching is a critical performance optimization strategy integrated throughout the ConnectSoft Microservice Template. The template provides multiple layers of caching support to reduce latency, lower database load, improve fault tolerance, and keep performance consistent during traffic bursts.

Why Caching?

Caching offers several key benefits for the template:

  • Reduced Latency: Frequently accessed data is served from fast in-memory or distributed cache stores
  • Lower Database Load: Reduces repetitive queries to the database, improving overall system throughput
  • Improved Fault Tolerance: Cached data provides resilience when downstream services are unavailable
  • Consistent Performance: Handles traffic spikes without impacting database performance
  • Cost Optimization: Reduces infrastructure costs by minimizing database compute requirements
  • Scalability: Enables horizontal scaling by sharing cached data across service instances

Caching Layers

The template supports multiple caching layers: in-memory caching for development, Redis for distributed caching in production, and NHibernate second-level cache for ORM-level entity caching.

Architecture Overview

Caching is integrated at multiple layers of Clean Architecture:

API Layer (REST/gRPC/GraphQL)
Application Layer (DomainModel)
    ├── Processors (Commands/Writes)
    └── Retrievers (Queries/Reads)
    ↓ (Cache Check)
IDistributedCache / IMemoryCache
    ├── In-Memory Cache (Development)
    └── Redis Cache (Production)
    ↓ (Cache Miss)
Repository Layer
NHibernate (Optional L2 Cache)
    └── Entity/Query Result Caching
Database

Key Integration Points

| Layer | Component | Responsibility |
| --- | --- | --- |
| ApplicationModel | Cache Extensions | AddRedisCaching(), AddInMemoryCaching() |
| DomainModel | Services | Cache-aware use cases and processors |
| PersistenceModel | NHibernate L2 Cache | Entity and query result caching |
| Infrastructure | Redis/Docker | Distributed cache backend |

Core Components

1. Service Registration

Caching is registered via extension methods in Program.cs:

// Program.cs
var builder = WebApplication.CreateBuilder(args);

// Configure caching based on environment
if (builder.Configuration.GetValue<bool>("Cache:UseRedis"))
{
    builder.Services.AddRedisCaching(builder.Configuration);
}
else
{
    builder.Services.AddInMemoryCaching();
}

var app = builder.Build();

2. Redis Caching Configuration

Redis is configured via AddRedisCaching() extension:

// DistributedCacheRedisExtensions.cs
public static IServiceCollection AddRedisCaching(
    this IServiceCollection services,
    IConfiguration configuration)
{
    ArgumentNullException.ThrowIfNull(services);
    ArgumentNullException.ThrowIfNull(configuration);

    var redisConfig = configuration.GetConnectionString("Redis");

    if (string.IsNullOrWhiteSpace(redisConfig))
    {
        throw new InvalidOperationException(
            "Redis connection string is required when UseRedis is enabled.");
    }

    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = redisConfig;
        options.InstanceName = "ConnectSoft:";
    });

    return services;
}

3. In-Memory Caching Configuration

For development and testing, in-memory caching is configured:

// DistributedCacheInMemoryExtensions.cs
public static IServiceCollection AddInMemoryCaching(this IServiceCollection services)
{
    ArgumentNullException.ThrowIfNull(services);

    services.AddDistributedMemoryCache();

    return services;
}

4. Using Caching in Application Layer

Cache is injected via IDistributedCache:

public class MicroserviceAggregateRootsRetriever : IMicroserviceAggregateRootsRetriever
{
    private readonly IMicroserviceAggregateRootsRepository repository;
    private readonly IDistributedCache cache;
    private readonly ILogger<MicroserviceAggregateRootsRetriever> logger;

    public MicroserviceAggregateRootsRetriever(
        IMicroserviceAggregateRootsRepository repository,
        IDistributedCache cache,
        ILogger<MicroserviceAggregateRootsRetriever> logger)
    {
        this.repository = repository;
        this.cache = cache;
        this.logger = logger;
    }

    public async Task<IMicroserviceAggregateRoot?> GetMicroserviceAggregateRootDetails(
        GetMicroserviceAggregateRootDetailsInput input,
        CancellationToken token = default)
    {
        var cacheKey = $"aggregate:{input.ObjectId}";

        // Try to get from cache
        var cached = await this.cache.GetStringAsync(cacheKey, token);

        if (cached != null)
        {
            this.logger.LogDebug("Cache HIT for key {CacheKey}", cacheKey);
            return JsonSerializer.Deserialize<MicroserviceAggregateRoot>(cached);
        }

        this.logger.LogDebug("Cache MISS for key {CacheKey}", cacheKey);

        // Fallback to repository
        var entity = await this.repository.GetByIdAsync(input.ObjectId, token);

        if (entity != null)
        {
            // Cache the result
            var serialized = JsonSerializer.Serialize(entity);
            await this.cache.SetStringAsync(
                cacheKey,
                serialized,
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
                },
                token);

            this.logger.LogDebug("Cached result for key {CacheKey} with TTL 10 minutes", cacheKey);
        }

        return entity;
    }
}

5. Cache Invalidation on Updates

Cache is invalidated when data is modified:

public class MicroserviceAggregateRootsProcessor : IMicroserviceAggregateRootsProcessor
{
    private readonly IMicroserviceAggregateRootsRepository repository;
    private readonly IDistributedCache cache;
    private readonly ILogger<MicroserviceAggregateRootsProcessor> logger;

    public async Task<IMicroserviceAggregateRoot> CreateMicroserviceAggregateRoot(
        CreateMicroserviceAggregateRootInput input,
        CancellationToken token = default)
    {
        // Create entity
        var entity = new MicroserviceAggregateRoot(input.ObjectId);
        await this.repository.InsertAsync(entity, token);

        // Invalidate cache to ensure fresh data on next read
        var cacheKey = $"aggregate:{input.ObjectId}";
        await this.cache.RemoveAsync(cacheKey, token);

        this.logger.LogDebug("Invalidated cache for key {CacheKey}", cacheKey);

        return entity;
    }

    public async Task DeleteMicroserviceAggregateRoot(
        DeleteMicroserviceAggregateRootInput input,
        CancellationToken token = default)
    {
        await this.repository.DeleteAsync(input.ObjectId, token);

        // Remove from cache
        var cacheKey = $"aggregate:{input.ObjectId}";
        await this.cache.RemoveAsync(cacheKey, token);
    }
}

6. Helper Extensions for Typed Caching

Custom extensions simplify typed object caching:

// DistributedCacheExtensions.cs
public static class DistributedCacheExtensions
{
    public static async Task<T?> GetJsonAsync<T>(
        this IDistributedCache cache,
        string key,
        CancellationToken token = default)
    {
        var json = await cache.GetStringAsync(key, token);

        if (string.IsNullOrWhiteSpace(json))
        {
            return default(T);
        }

        return JsonSerializer.Deserialize<T>(json);
    }

    public static async Task SetJsonAsync<T>(
        this IDistributedCache cache,
        string key,
        T value,
        DistributedCacheEntryOptions? options = null,
        CancellationToken token = default)
    {
        var json = JsonSerializer.Serialize(value);

        await cache.SetStringAsync(
            key,
            json,
            options ?? new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            },
            token);
    }

    public static async Task<T> GetOrSetJsonAsync<T>(
        this IDistributedCache cache,
        string key,
        Func<Task<T>> factory,
        DistributedCacheEntryOptions? options = null,
        CancellationToken token = default)
    {
        var cached = await cache.GetJsonAsync<T>(key, token);

        if (cached != null)
        {
            return cached;
        }

        var value = await factory();
        await cache.SetJsonAsync(key, value, options, token);

        return value;
    }
}

Configuration

appsettings.json

{
  "Cache": {
    "UseRedis": true,
    "UseInMemory": false,
    "DefaultTTLSeconds": 600
  },
  "ConnectionStrings": {
    "Redis": "localhost:6379"
  }
}

appsettings.Development.json

{
  "Cache": {
    "UseRedis": false,
    "UseInMemory": true,
    "DefaultTTLSeconds": 300
  }
}

appsettings.Production.json

{
  "Cache": {
    "UseRedis": true,
    "UseInMemory": false,
    "DefaultTTLSeconds": 600
  },
  "ConnectionStrings": {
    "Redis": "${REDIS_CONNECTION_STRING}"
  }
}
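
The code examples in this document hard-code their TTLs; to drive them from DefaultTTLSeconds instead, one approach is to bind the Cache section to an options class. A minimal sketch, assuming a hypothetical CacheOptions class (the template may wire this differently):

// CacheOptions.cs (hypothetical)
public sealed class CacheOptions
{
    public bool UseRedis { get; set; }
    public bool UseInMemory { get; set; }
    public int DefaultTTLSeconds { get; set; } = 600;
}

// Program.cs
builder.Services.Configure<CacheOptions>(builder.Configuration.GetSection("Cache"));

// In a consumer with IOptions<CacheOptions> cacheOptions injected:
var entryOptions = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(cacheOptions.Value.DefaultTTLSeconds)
};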

Docker Compose for Local Redis

# docker-compose.yml
services:
  redis:
    image: redis:7
    container_name: redis
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - connectsoft-net
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 2s
      retries: 3

volumes:
  redis-data:

networks:
  connectsoft-net:
    driver: bridge

Cache Entry Options

TTL Strategies

| Option | Use Case | Example |
| --- | --- | --- |
| AbsoluteExpiration | Fixed expiration time | Cache expires at a specific date/time |
| AbsoluteExpirationRelativeToNow | Time-based expiration | Cache expires 10 minutes after being set |
| SlidingExpiration | Activity-based expiration | Cache refreshes on access, expires after inactivity |

Example Usage

// Absolute expiration relative to now
var options = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
};

// Sliding expiration (resets on access)
var slidingOptions = new DistributedCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5)
};

// Combined: expires after 30 minutes OR 5 minutes of inactivity
var combinedOptions = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30),
    SlidingExpiration = TimeSpan.FromMinutes(5)
};

await cache.SetStringAsync("key", "value", options);

NHibernate Second-Level Cache

The template supports NHibernate's second-level cache for ORM-level entity caching.

Configuration

// NHibernateExtensions.cs
private static void ConfigureSecondLevelCache(Configuration cfg, IConfiguration configuration)
{
    if (configuration.GetValue<bool>("NHibernate:UseSecondLevelCache"))
    {
        cfg.SetProperty(Environment.UseSecondLevelCache, "true");
        cfg.SetProperty(Environment.UseQueryCache, "true");
        cfg.SetProperty(Environment.CacheProvider, 
            "NHibernate.Caches.StackExchangeRedis.RedisCacheProvider, NHibernate.Caches.StackExchangeRedis");

        var redisConnectionString = configuration.GetConnectionString("Redis");
        cfg.SetProperty("cache.redis.connection_string", redisConnectionString);
        cfg.SetProperty("cache.redis.database", "0");
        cfg.SetProperty("cache.redis.key_prefix", "NH:Microservice:");
        cfg.SetProperty("cache.default_expiration", "600");
    }
}

Entity Mapping Configuration

<!-- Entity mapping with cache -->
<class name="MicroserviceAggregateRoot" table="MicroserviceAggregateRoots">
    <cache usage="read-write" region="aggregate-root"/>
    <!-- ... other mappings ... -->
</class>

Or using Fluent NHibernate:

Cache.ReadWrite().Region("aggregate-root");

Query Cache

var results = session
    .CreateQuery("from MicroserviceAggregateRoot where IsActive = :active")
    .SetParameter("active", true)
    .SetCacheable(true)
    .SetCacheRegion("active-aggregates")
    .List<MicroserviceAggregateRoot>();

Key Naming Strategy

Consistent key naming enables targeted invalidation and avoids collisions:

| Pattern | Example | Purpose |
| --- | --- | --- |
| {type}:{id} | aggregate:{guid} | Single entity lookup |
| {type}:{context}:{id} | order:summary:{guid} | Contextual view |
| {type}:{filter}:{params} | products:category:electronics:page:1 | Filtered queries |
| tenant:{tenantId}:{type}:{id} | tenant:123:user:456 | Multi-tenant isolation |

Key Naming Guidelines

  1. Use consistent prefixes: aggregate:, dto:, query:
  2. Include context: order:detail:, order:summary:
  3. Support invalidation: Use patterns that enable bulk invalidation
  4. Avoid collisions: Include all relevant identifiers
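
These conventions are easy to enforce with a small key-builder helper so that prefixes live in one place. A minimal sketch (the CacheKeys class is illustrative, not part of the template):

// CacheKeys.cs (hypothetical helper)
public static class CacheKeys
{
    public static string Aggregate(Guid id) => $"aggregate:{id}";

    public static string OrderSummary(Guid orderId) => $"order:summary:{orderId}";

    public static string ProductsByCategory(string category, int page, int pageSize) =>
        $"products:category:{category}:page:{page}:size:{pageSize}";

    public static string TenantScoped(Guid tenantId, string type, Guid id) =>
        $"tenant:{tenantId}:{type}:{id}";
}

Centralizing key construction keeps prefixes consistent across retrievers and processors and makes bulk-invalidation patterns (for example, everything under tenant:{tenantId}:) visible in one place.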

Cache Invalidation Strategies

1. Invalidate on Update

public async Task UpdateEntity(Guid id, UpdateCommand command)
{
    await repository.UpdateAsync(id, command);

    // Invalidate cache
    await cache.RemoveAsync($"aggregate:{id}");

    // Optionally pre-populate cache with updated entity
    var updated = await repository.GetByIdAsync(id);
    await cache.SetJsonAsync($"aggregate:{id}", updated);
}

2. Write-Through Cache

public async Task CreateEntity(CreateCommand command)
{
    var entity = new Entity(command.Data);
    await repository.InsertAsync(entity);

    // Update cache immediately
    await cache.SetJsonAsync($"entity:{entity.Id}", entity);
}

3. Tag-Based Invalidation

For complex invalidation scenarios, use tag-based keys:

// Set the entry together with a tag entry marking it as valid
await cache.SetStringAsync("user:123", userJson, options);
await cache.SetStringAsync("tag:user:123", "exists", options); // Tag entry

// Invalidate: remove the tag; readers treat entries whose tag is missing as stale
var tagKey = "tag:user:123";
await cache.RemoveAsync(tagKey);

// Alternative: IDistributedCache has no pattern delete, so drop down to Redis
// and delete keys found via SCAN with a prefix pattern:
// SCAN 0 MATCH user:123:* COUNT 100

Best Practices

Do's

  1. Cache DTOs, not domain entities
     • Domain entities may contain non-serializable references
     • DTOs are designed for serialization
     • Separates caching concerns from domain logic

  2. Use consistent key naming conventions
     • Enables targeted invalidation
     • Prevents key collisions
     • Simplifies debugging and monitoring

  3. Set appropriate TTLs based on data volatility
     • Static reference data: long TTL (hours to days)
     • Frequently changing data: short TTL (minutes)
     • User-specific data: medium TTL with sliding expiration

  4. Invalidate cache on updates
     • Always remove or update cache entries when data changes
     • Prevents serving stale data to users

  5. Implement fallback logic
     • Cache misses should gracefully fall back to the source
     • Never let cache failures break application functionality

  6. Monitor cache performance
     • Track hit/miss ratios
     • Monitor cache latency
     • Alert on cache failures

  7. Use typed cache helpers
     • Centralize serialization logic
     • Ensure consistent caching patterns
     • Simplify cache usage across the codebase

  8. Test cache behavior
     • Unit tests with a mocked IDistributedCache
     • Integration tests with real Redis
     • Test expiration and invalidation scenarios

Don'ts

  1. Don't cache sensitive data without encryption
     • Cache may be accessible to multiple services
     • Consider encryption for sensitive information

  2. Don't cache everything
     • Caching has overhead (memory, serialization)
     • Only cache data that benefits from caching
     • Avoid caching write-heavy or rarely accessed data

  3. Don't assume the cache is always available
     • Implement fallback to the source of truth
     • Handle cache failures gracefully
     • Log cache errors but don't fail requests

  4. Don't use the cache as a primary data store
     • Cache is volatile and can be cleared at any time
     • Always keep an authoritative source (the database)

  5. Don't ignore cache consistency
     • Ensure cache invalidation happens on all update paths
     • Consider distributed invalidation for multi-service scenarios

  6. Don't cache large objects
     • Large serialized objects consume memory
     • Consider the claim check pattern for large payloads

  7. Don't set TTLs too long
     • Balance performance gains with data freshness
     • Consider business requirements for acceptable staleness

Common Scenarios

Scenario 1: Caching Aggregate Roots

public async Task<AggregateRootDto?> GetAggregateRoot(Guid id)
{
    var cacheKey = $"aggregate:{id}";

    return await cache.GetOrSetJsonAsync(
        cacheKey,
        async () =>
        {
            var entity = await repository.GetByIdAsync(id);
            return entity != null ? mapper.Map<AggregateRootDto>(entity) : null;
        },
        new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
        });
}

Scenario 2: Caching Query Results

public async Task<IEnumerable<ProductDto>> GetProductsByCategory(
    string category,
    int page,
    int pageSize)
{
    var cacheKey = $"products:category:{category}:page:{page}:size:{pageSize}";

    return await cache.GetOrSetJsonAsync(
        cacheKey,
        async () =>
        {
            var products = await repository.GetByCategoryAsync(category, page, pageSize);
            return mapper.Map<IEnumerable<ProductDto>>(products);
        },
        new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5),
            SlidingExpiration = TimeSpan.FromMinutes(2)
        });
}

Scenario 3: Cache-Aware Command Processing

public async Task<OrderDto> CreateOrder(CreateOrderCommand command)
{
    // Create order
    var order = new Order(command.CustomerId, command.Items);
    await repository.InsertAsync(order);

    // Invalidate related caches
    await cache.RemoveAsync($"customer:{command.CustomerId}:orders");
    await cache.RemoveAsync($"customer:{command.CustomerId}:summary");

    // Pre-populate cache with new order
    var dto = mapper.Map<OrderDto>(order);
    await cache.SetJsonAsync(
        $"order:{order.Id}",
        dto,
        new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(15)
        });

    return dto;
}

Scenario 4: Multi-Tenant Cache Isolation

public async Task<TenantSettingsDto> GetTenantSettings(Guid tenantId)
{
    var cacheKey = $"tenant:{tenantId}:settings";

    return await cache.GetOrSetJsonAsync(
        cacheKey,
        async () =>
        {
            var settings = await repository.GetTenantSettingsAsync(tenantId);
            return mapper.Map<TenantSettingsDto>(settings);
        },
        new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
        });
}

Cache Stampede Prevention

When many concurrent requests hit the same cache miss, they all try to load the data simultaneously. Prevent this with locking. Note two limits of the sketch below: a SemaphoreSlim only coordinates within a single service instance (use a distributed lock to coordinate across instances), and a single semaphore serializes loads for every key (production code typically locks per key):

private static readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

public async Task<T> GetOrSetWithLock<T>(
    string key,
    Func<Task<T>> factory,
    DistributedCacheEntryOptions? options = null)
{
    // Try cache first
    var cached = await cache.GetJsonAsync<T>(key);
    if (cached != null)
    {
        return cached;
    }

    // Lock to prevent concurrent loads
    await _lock.WaitAsync();
    try
    {
        // Double-check after acquiring lock
        cached = await cache.GetJsonAsync<T>(key);
        if (cached != null)
        {
            return cached;
        }

        // Load from source
        var value = await factory();
        await cache.SetJsonAsync(key, value, options);
        return value;
    }
    finally
    {
        _lock.Release();
    }
}

Health Checks

Redis health checks ensure cache availability:

// HealthChecksExtensions.cs
builder.AddRedis(
    redisConnectionString: configuration.GetConnectionString("Redis") ?? "localhost:6379",
    name: "redis",
    tags: new[] { "ready", "cache", "infra" });

Health check endpoints:

  • GET /health - Overall health (includes Redis)
  • GET /health/ready - Readiness probe
  • GET /health/live - Liveness probe
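
For reference, a sketch of how such endpoints are typically mapped in ASP.NET Core; the tag-based filtering is an assumption inferred from the tags registered above, not necessarily the template's exact wiring:

// Program.cs (illustrative)
// using Microsoft.AspNetCore.Diagnostics.HealthChecks;
app.MapHealthChecks("/health");

app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = registration => registration.Tags.Contains("ready")
});

app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = _ => false // liveness only checks that the process responds
});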

Observability

Logging

Cache operations should be logged for debugging and monitoring:

_logger.LogDebug("Cache HIT for key {CacheKey}", key);
_logger.LogDebug("Cache MISS for key {CacheKey}", key);
_logger.LogInformation("Cached {Key} with TTL {TTL} seconds", key, ttl.TotalSeconds);
_logger.LogWarning(ex, "Cache fallback for key {CacheKey}", key);

Metrics

Track cache performance metrics:

| Metric | Description |
| --- | --- |
| cache_hits_total | Total cache hits |
| cache_misses_total | Total cache misses |
| cache_latency_ms | Cache operation latency |
| cache_errors_total | Cache operation errors |
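
A minimal sketch of emitting these metrics with System.Diagnostics.Metrics (the meter name and wiring below are assumptions; the template may use a different metrics stack):

// CacheMetrics.cs (hypothetical)
using System.Diagnostics.Metrics;

public static class CacheMetrics
{
    private static readonly Meter Meter = new("ConnectSoft.Caching");

    public static readonly Counter<long> Hits = Meter.CreateCounter<long>("cache_hits_total");
    public static readonly Counter<long> Misses = Meter.CreateCounter<long>("cache_misses_total");
    public static readonly Histogram<double> LatencyMs = Meter.CreateHistogram<double>("cache_latency_ms");
    public static readonly Counter<long> Errors = Meter.CreateCounter<long>("cache_errors_total");
}

// Usage around a cache lookup:
// var sw = Stopwatch.StartNew();
// var cached = await cache.GetStringAsync(key, token);
// CacheMetrics.LatencyMs.Record(sw.Elapsed.TotalMilliseconds);
// if (cached != null) CacheMetrics.Hits.Add(1); else CacheMetrics.Misses.Add(1);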

Monitoring Recommendations

  • Hit/Miss Ratio: Target >80% hit ratio for effective caching
  • Cache Latency: Monitor p95/p99 latency for cache operations
  • Redis Health: Alert on Redis connection failures
  • Memory Usage: Monitor Redis memory consumption
  • Eviction Rate: Track key evictions due to memory pressure

Testing

Unit Testing with Mocked Cache

[TestMethod]
public async Task GetById_ReturnsCachedValue_WhenCacheHit()
{
    // Arrange
    var cache = new Mock<IDistributedCache>();
    var repository = new Mock<IAggregateRootsRepository>();
    var logger = Mock.Of<ILogger<AggregateRootRetriever>>();
    var cachedEntity = new AggregateRoot { Id = Guid.NewGuid() };
    var cachedJson = JsonSerializer.Serialize(cachedEntity);

    // GetStringAsync is an extension method and cannot be mocked directly,
    // so set up the underlying GetAsync call that it wraps
    cache.Setup(c => c.GetAsync(It.IsAny<string>(), It.IsAny<CancellationToken>()))
        .ReturnsAsync(Encoding.UTF8.GetBytes(cachedJson));

    var retriever = new AggregateRootRetriever(repository.Object, cache.Object, logger);

    // Act
    var result = await retriever.GetById(Guid.NewGuid());

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual(cachedEntity.Id, result.Id);
    repository.Verify(r => r.GetByIdAsync(It.IsAny<Guid>(), It.IsAny<CancellationToken>()), Times.Never);
}

Integration Testing with Redis

[TestClass]
public class CacheIntegrationTests
{
    // IClassFixture is an xUnit concept; with MSTest, create the factory directly
    private readonly WebApplicationFactory<Program> factory = new();

    [TestMethod]
    public async Task GetById_CachesResult_OnSubsequentRequests()
    {
        // Arrange
        var client = factory.CreateClient();

        // Act - First request (cache miss)
        var response1 = await client.GetAsync("/api/aggregates/test-id");
        var response1Content = await response1.Content.ReadAsStringAsync();

        // Act - Second request (cache hit)
        var response2 = await client.GetAsync("/api/aggregates/test-id");
        var response2Content = await response2.Content.ReadAsStringAsync();

        // Assert
        Assert.AreEqual(HttpStatusCode.OK, response2.StatusCode);
        Assert.AreEqual(response1Content, response2Content);
        // Verify the cache was hit via logs or metrics
    }
}

Performance Considerations

Cache Size Limits

  • In-Memory Cache: Limited by application memory
  • Redis: Configurable via maxmemory setting
  • Eviction Policies: LRU, LFU, or TTL-based

Serialization Overhead

  • JSON: Human-readable but larger payloads
  • MessagePack: Compact binary format, faster serialization
  • Protocol Buffers: Structured binary format
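
As an illustration, a MessagePack variant of the typed helpers might look like this (a sketch assuming the MessagePack-CSharp package and [MessagePackObject]-annotated DTOs; the examples in this document use JSON):

// Sketch only: MessagePack-based helpers alongside the JSON ones
public static async Task SetMessagePackAsync<T>(
    this IDistributedCache cache,
    string key,
    T value,
    CancellationToken token = default)
{
    byte[] bytes = MessagePackSerializer.Serialize(value); // compact binary payload
    await cache.SetAsync(key, bytes, token);
}

public static async Task<T?> GetMessagePackAsync<T>(
    this IDistributedCache cache,
    string key,
    CancellationToken token = default)
{
    var bytes = await cache.GetAsync(key, token);
    return bytes == null ? default : MessagePackSerializer.Deserialize<T>(bytes);
}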

Network Latency

For distributed caches (Redis):

  • Co-locate Redis with the application when possible
  • Use connection pooling to minimize connection overhead
  • Consider read replicas for high-read scenarios
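
Connection behavior can also be tuned when registering the Redis cache; the values below are illustrative, not template defaults (StackExchange.Redis multiplexes commands over a shared connection, so no manual pooling is required):

// using StackExchange.Redis;
services.AddStackExchangeRedisCache(options =>
{
    options.ConfigurationOptions = new ConfigurationOptions
    {
        EndPoints = { "redis-primary:6379" }, // hypothetical host
        AbortOnConnectFail = false,           // reconnect in the background instead of failing startup
        ConnectTimeout = 2000,                // milliseconds
    };
});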

When to Use Caching

| Scenario | Recommendation |
| --- | --- |
| High-volume reads | ✅ Cache frequently accessed data |
| Slow queries | ✅ Cache expensive query results |
| Reference data | ✅ Cache static/lookup tables |
| Session data | ✅ Cache user sessions with sliding expiration |
| Computed views | ✅ Cache expensive computations |
| Real-time data | ❌ Don't cache (requires immediate freshness) |
| Write-heavy | ⚠️ Cache carefully, invalidate aggressively |
| Large objects | ⚠️ Consider claim check pattern |
| User-specific | ✅ Cache with user-scoped keys |

When Not to Use Caching

| Scenario | Reason |
| --- | --- |
| Real-time financial data | Requires immediate accuracy |
| Write-heavy workloads | Cache invalidation overhead |
| Rarely accessed data | No performance benefit |
| Sensitive unencrypted data | Security concerns |
| Frequently changing data | High invalidation rate |

Summary

Caching in the ConnectSoft Microservice Template provides:

  • Multiple caching layers: In-memory for development, Redis for production, NHibernate L2 for ORM
  • Flexible configuration: Environment-aware setup via appsettings.json
  • Type-safe caching: Helper extensions for typed object caching
  • Cache invalidation: Strategies for maintaining data consistency
  • Health monitoring: Redis health checks and observability
  • Testing support: Mocked and integration test patterns
  • Best practices: Guidelines for effective cache usage

By following these patterns and best practices, caching becomes a powerful tool for building scalable, performant microservices while maintaining data consistency and system reliability.