Database Migrations in ConnectSoft Microservice Template¶
Purpose & Overview¶
Database Migrations provide versioned, repeatable, and safe database schema evolution for both relational (SQL) and document (MongoDB) databases. In the ConnectSoft Microservice Template, migrations ensure consistent database schemas across environments, enable safe production deployments, and provide a reliable way to evolve database structure over time.
Why Database Migrations?¶
Database migrations offer several critical benefits:
- Version Control: Database schema changes are versioned alongside code
- Reproducibility: Same migrations produce identical schemas in all environments
- Safety: Migrations can be tested and validated before production deployment
- Rollback Support: Down migrations enable schema rollback when needed
- Team Collaboration: Multiple developers can work on schema changes safely
- CI/CD Integration: Automated migration execution in deployment pipelines
- Audit Trail: Complete history of database schema changes
- Environment Consistency: Same schema across development, staging, and production
Migration Philosophy
Migrations treat database schema as code. Each migration is a discrete, versioned change that can be applied and reversed. Migrations are immutable once applied—changes are made by creating new migrations, not modifying existing ones. This ensures consistency and predictability across environments.
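The immutability rule means corrections ship as new migrations. A hypothetical sketch (table and column names are illustrative): suppose migration 2 created an Email column as 255 characters and has already run in production — widening it means adding migration 3, not editing migration 2:

```csharp
using FluentMigrator;

[Migration(3)]
public class WidenEmailColumn : Migration
{
    public override void Up()
    {
        // Widen the column added by migration 2 instead of editing that migration.
        this.Alter.Table("Users").AlterColumn("Email").AsString(500).NotNullable();
    }

    public override void Down()
    {
        // Revert to the width migration 2 originally defined.
        this.Alter.Table("Users").AlterColumn("Email").AsString(255).NotNullable();
    }
}
```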
Architecture Overview¶
Migration Types¶
The template supports two migration systems:
Database Migrations
├── FluentMigrator (Relational Databases)
│ ├── SQL Server
│ ├── PostgreSQL
│ └── MySQL
└── CSharpMongoMigrations (MongoDB)
└── Document Collections & Indexes
Migration Flow¶
Migration Classes
├── [Migration(Version)] Attribute
├── Up() Method (Apply Changes)
└── Down() Method (Revert Changes)
↓
Migration Registration
├── AddMicroserviceFluentMigrator() (SQL)
└── AddMongoDbMigrator() (MongoDB)
↓
Migration Execution
├── RunMicroserviceFluentMigrations() (Startup)
└── Automatic via CSharpMongoMigrations (Startup)
↓
Version Tracking
├── FluentMigrator VersionInfo Table
└── MongoDB Migrations Collection
FluentMigrator (SQL Migrations)¶
Overview¶
FluentMigrator is a database migration framework for .NET that manages SQL database schema evolution using code-first migrations. It supports multiple database providers and provides a fluent API for defining schema changes.
Supported Databases¶
- SQL Server: 2000, 2005, 2008, 2012, 2016+
Vector Store Support (when UseVectorStoreSqlServer is enabled):
- SQL Server 2025 Preview or later (required for VECTOR data type)
- Azure SQL Database with vector support enabled
- The VECTOR data type is used in the Vectors table migration for storing embedding vectors
- Migration will fail on SQL Server 2022 or earlier versions
Docker Image Requirements:
- Use mcr.microsoft.com/mssql/server:2025-latest or mcr.microsoft.com/mssql/server:2025-preview-ubuntu-24.04
- SQL Server 2022 images (mcr.microsoft.com/mssql/server:2022-latest) do not support the VECTOR data type
- PostgreSQL: 8.1, 8.2, 8.3, and later
- MySQL: 5.0, 5.5, 5.7, 8.0+
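FluentMigrator's fluent API has no VECTOR column type, so a vector-store migration typically drops to raw SQL. A hypothetical sketch (table name and dimension count are illustrative, not the template's actual Vectors migration):

```csharp
[Migration(4, "Create Vectors table (requires SQL Server 2025+ VECTOR support)")]
public class CreateVectorsTable : Migration
{
    public override void Up()
    {
        // The VECTOR data type fails on SQL Server 2022 or earlier.
        this.Execute.Sql(@"
            CREATE TABLE [dbo].[Vectors](
                [VectorId] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
                [Embedding] VECTOR(1536) NOT NULL)");
    }

    public override void Down()
    {
        this.Execute.Sql("DROP TABLE [dbo].[Vectors]");
    }
}
```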
Service Registration¶
Registration:
// MicroserviceRegistrationExtensions.cs
#if Migrations
services.AddMicroserviceFluentMigrator(
configuration,
typeof(MicroserviceMigration).Assembly);
#endif
Implementation:
// FluentMigratorExtensions.cs
internal static IServiceCollection AddMicroserviceFluentMigrator(
this IServiceCollection services,
IConfiguration configuration,
Assembly migrationAssembly)
{
ArgumentNullException.ThrowIfNull(services);
ArgumentNullException.ThrowIfNull(configuration);
ArgumentNullException.ThrowIfNull(migrationAssembly);
string nhibernateConnectionString = configuration.GetConnectionString(
OptionsExtensions.PersistenceModelOptions.NHibernate.NHibernateConnectionStringKey);
NHibernate.Dialect.Dialect dialect = NHibernateExtensions.GetDialect();
// Create database if it doesn't exist
IDatabaseHelper databaseHelper = CreateDatabaseHelperBasedOnDialect(dialect);
databaseHelper.CreateIfNotExists(nhibernateConnectionString);
services.AddFluentMigratorCore();
services.ConfigureRunner(fluentMigratorRunner =>
{
fluentMigratorRunner
.ConfigureDatabaseBasedOnDialect(dialect)
.WithGlobalConnectionString(nhibernateConnectionString)
.ScanIn(migrationAssembly)
.For.Migrations();
});
services.AddLogging(logger => logger.AddFluentMigratorConsole());
return services;
}
Database Provider Configuration¶
Automatic Provider Selection:
private static IMigrationRunnerBuilder ConfigureDatabaseBasedOnDialect(
this IMigrationRunnerBuilder fluentMigratorRunner,
Dialect dialect)
{
if (dialect.GetType() == typeof(MsSql2012Dialect))
{
return fluentMigratorRunner.AddSqlServer2016();
}
if (dialect.GetType() == typeof(PostgreSQLDialect))
{
return fluentMigratorRunner.AddPostgres();
}
if (dialect.GetType() == typeof(MySQL8Dialect))
{
return fluentMigratorRunner.AddMySql8();
}
// ... other dialects
return fluentMigratorRunner;
}
Migration Execution¶
Startup Execution:
// MicroserviceRegistrationExtensions.cs
#if UseNHibernate
loggerFactory.UseNHibernateLogging();
application.RunMicroserviceFluentMigrations();
#endif
Execution with Retry Policy:
// FluentMigratorExtensions.cs
internal static IApplicationBuilder RunMicroserviceFluentMigrations(
this IApplicationBuilder application)
{
ArgumentNullException.ThrowIfNull(application);
var policy = Policy
.Handle<SqlException>()
.WaitAndRetry(2, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
policy.Execute(() =>
{
using var scope = application.ApplicationServices.CreateScope();
UpdateDatabase(scope.ServiceProvider);
});
return application;
}
private static void UpdateDatabase(IServiceProvider serviceProvider)
{
var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
}
Features:
- Retry Policy: Exponential backoff for transient failures
- Scoped Execution: Migrations run in an isolated scope
- Automatic Up: Applies all pending migrations
- Logging: Console logging for migration execution
Creating Migrations¶
Basic Migration Structure:
// MicroserviceMigration.cs
namespace ConnectSoft.MicroserviceTemplate.DatabaseModel.Migrations
{
using FluentMigrator;
[Migration(1)]
public class MicroserviceMigration : Migration
{
internal const string SchemaName = "ConnectSoft.MicroserviceTemplate";
public override void Up()
{
this.Create.Schema(SchemaName);
this.CreateMicroserviceAggregateRootsTable();
}
public override void Down()
{
this.Delete
.Table("MicroserviceAggregateRoots")
.InSchema(SchemaName);
this.Delete.Schema(SchemaName);
}
private void CreateMicroserviceAggregateRootsTable()
{
this.Create
.Table("MicroserviceAggregateRoots")
.InSchema(SchemaName)
.WithColumn("ObjectId")
.AsGuid()
.NotNullable()
.PrimaryKey();
}
}
}
Migration Versioning:
// Sequential versioning (recommended)
[Migration(1)]
public class CreateUsersTable : Migration { }
[Migration(2)]
public class AddEmailColumn : Migration { }
[Migration(3)]
public class CreateOrdersTable : Migration { }
// Timestamp versioning (alternative)
[Migration(20240101000000)]
public class CreateUsersTable : Migration { }
[Migration(20240102000000)]
public class AddEmailColumn : Migration { }
Migration API Examples¶
Creating Tables:
public override void Up()
{
this.Create
.Table("Orders")
.InSchema("dbo")
.WithColumn("OrderId")
.AsGuid()
.NotNullable()
.PrimaryKey()
.WithColumn("CustomerId")
.AsGuid()
.NotNullable()
.WithColumn("OrderDate")
.AsDateTime()
.NotNullable()
.WithDefault(SystemMethods.CurrentDateTime)
.WithColumn("TotalAmount")
.AsDecimal(18, 2)
.NotNullable()
.WithColumn("Status")
.AsString(50)
.NotNullable()
.WithDefaultValue("Pending");
}
Altering Tables:
public override void Up()
{
this.Alter
.Table("Orders")
.InSchema("dbo")
.AddColumn("ShippingAddress")
.AsString(500)
.Nullable();
}
Creating Indexes:
public override void Up()
{
this.Create
.Index("IX_Orders_CustomerId")
.OnTable("Orders")
.InSchema("dbo")
.OnColumn("CustomerId")
.Ascending();
}
Foreign Keys:
public override void Up()
{
this.Create
.ForeignKey("FK_Orders_Customers")
.FromTable("Orders")
.InSchema("dbo")
.ForeignColumn("CustomerId")
.ToTable("Customers")
.InSchema("dbo")
.PrimaryColumn("CustomerId");
}
Custom SQL:
public override void Up()
{
this.Execute.Sql(@"
CREATE FUNCTION dbo.GetOrderTotal(@OrderId UNIQUEIDENTIFIER)
RETURNS DECIMAL(18, 2)
AS
BEGIN
RETURN (SELECT SUM(Amount) FROM OrderItems WHERE OrderId = @OrderId)
END");
}
Data Migrations:
[Migration(10, "DataMigration")]
public class SetDefaultOrderStatus : Migration
{
public override void Up()
{
this.Execute.Sql(@"
UPDATE Orders
SET Status = 'Pending'
WHERE Status IS NULL");
}
public override void Down()
{
this.Execute.Sql(@"
UPDATE Orders
SET Status = NULL
WHERE Status = 'Pending'");
}
}
MongoDB Migrations¶
Overview¶
CSharpMongoMigrations is a migration framework for MongoDB that manages collection creation, indexes, and schema evolution for document databases. It provides versioned migrations similar to FluentMigrator but for MongoDB's document model.
Service Registration¶
Registration:
// MicroserviceRegistrationExtensions.cs
#if UseMongoDb
#if Migrations
services.AddMongoDbMigrator(
configuration,
typeof(MicroserviceMongoDbMigration));
#endif
#endif
Implementation:
// MongoDbExtensions.cs
internal static IServiceCollection AddMongoDbMigrator(
this IServiceCollection services,
IConfiguration configuration,
Type assemblyType)
{
ArgumentNullException.ThrowIfNull(services);
ArgumentNullException.ThrowIfNull(configuration);
ArgumentNullException.ThrowIfNull(assemblyType);
services.UseMongoDbMigrator(
configuration: configuration,
connectionStringKey: OptionsExtensions.PersistenceModelOptions.MongoDb.MongoDbConnectionStringKey,
databaseName: OptionsExtensions.PersistenceModelOptions.MongoDb.DatabaseName,
assemblyType: assemblyType);
return services;
}
Migration Execution¶
MongoDB migrations run automatically and synchronously during service registration when UseMongoDbMigrator() is called. The LoggingMigrationRunner is used internally to provide structured logging and OpenTelemetry tracing for migration operations.
Creating MongoDB Migrations¶
Basic Migration Structure:
// MicroserviceMongoDbMigration.cs
namespace ConnectSoft.MicroserviceTemplate.DatabaseModel.MongoDb.Migrations
{
using CSharpMongoMigrations;
[Migration(0, "First ConnectSoft.MicroserviceTemplate's MongoDb migration")]
public class MicroserviceMongoDbMigration : Migration
{
public override void Up()
{
this.Database.CreateCollection("MicroserviceAggregateRoots");
}
public override void Down()
{
this.Database.DropCollection("MicroserviceAggregateRoots");
}
}
}
Migration Versioning:
// Sequential versioning
[Migration(0, "Initial migration")]
public class InitialMigration : Migration { }
[Migration(1, "Add indexes")]
public class AddIndexesMigration : Migration { }
[Migration(2, "Create additional collections")]
public class CreateCollectionsMigration : Migration { }
MongoDB Migration API Examples¶
Creating Collections:
public override void Up()
{
this.Database.CreateCollection("Orders");
this.Database.CreateCollection("OrderItems");
}
Creating Indexes:
public override void Up()
{
var ordersCollection = this.Database.GetCollection<BsonDocument>("Orders");
// Single field index
var customerIdIndex = new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys.Ascending("CustomerId"),
new CreateIndexOptions { Unique = false });
ordersCollection.Indexes.CreateOne(customerIdIndex);
// Compound index
var compoundIndex = new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys
.Ascending("Status")
.Descending("OrderDate"),
new CreateIndexOptions { Unique = false });
ordersCollection.Indexes.CreateOne(compoundIndex);
}
Text Indexes:
public override void Up()
{
var productsCollection = this.Database.GetCollection<BsonDocument>("Products");
var textIndex = new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys.Text("Name").Text("Description"),
new CreateIndexOptions { Unique = false });
productsCollection.Indexes.CreateOne(textIndex);
}
TTL Indexes:
public override void Up()
{
var sessionsCollection = this.Database.GetCollection<BsonDocument>("Sessions");
var ttlIndex = new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys.Ascending("ExpiresAt"),
new CreateIndexOptions
{
ExpireAfter = TimeSpan.Zero // Documents expire when ExpiresAt is reached
});
sessionsCollection.Indexes.CreateOne(ttlIndex);
}
Dropping Collections:
public override void Down()
{
this.Database.DropCollection("Orders");
this.Database.DropCollection("OrderItems");
}
Dropping Indexes:
public override void Down()
{
var ordersCollection = this.Database.GetCollection<BsonDocument>("Orders");
ordersCollection.Indexes.DropOne("CustomerId_1");
}
Data Migrations:
[Migration(5, "DataMigration - Set default status")]
public class SetDefaultStatusMigration : Migration
{
public override void Up()
{
var ordersCollection = this.Database.GetCollection<BsonDocument>("Orders");
var filter = Builders<BsonDocument>.Filter.Eq("Status", BsonNull.Value);
var update = Builders<BsonDocument>.Update.Set("Status", "Pending");
ordersCollection.UpdateMany(filter, update);
}
public override void Down()
{
var ordersCollection = this.Database.GetCollection<BsonDocument>("Orders");
var filter = Builders<BsonDocument>.Filter.Eq("Status", "Pending");
var update = Builders<BsonDocument>.Update.Set("Status", BsonNull.Value);
ordersCollection.UpdateMany(filter, update);
}
}
Migration Versioning Strategies¶
Sequential Versioning¶
Recommended for Most Cases:
[Migration(1)]
public class CreateUsersTable : Migration { }
[Migration(2)]
public class AddEmailColumn : Migration { }
[Migration(3)]
public class CreateOrdersTable : Migration { }
Benefits:
- Simple and clear
- Easy to track migration order
- No version conflicts
Timestamp Versioning¶
Alternative Approach:
[Migration(20240101000000)] // YYYYMMDDHHMMSS
public class CreateUsersTable : Migration { }
[Migration(20240102000000)]
public class AddEmailColumn : Migration { }
Benefits:
- Migration timestamp is visible
- Easy to see when a migration was created
- Can handle parallel development (with coordination)
Format: YYYYMMDDHHMMSS (14 digits)
Versioning Best Practices¶
- Use Sequential for Single Team: Simpler and easier to manage
- Use Timestamps for Distributed Teams: Reduces version conflicts
- Never Skip Versions: Always use consecutive numbers
- Never Reuse Versions: Each migration needs a unique version
- Document Migration Purpose: Use descriptive class names and comments
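The "never reuse versions" rule can be enforced in CI with a simple scan for duplicate [Migration(N)] attributes. A minimal sketch — the sample files below are created in a temp directory purely for demonstration; in practice, point the scan at your real migrations folder:

```shell
# Sketch: fail fast when two migrations share a [Migration(N)] version.
dir=$(mktemp -d)
printf '[Migration(1)]\nclass CreateUsersTable {}\n' > "$dir/CreateUsersTable.cs"
printf '[Migration(2)]\nclass AddEmailColumn {}\n'   > "$dir/AddEmailColumn.cs"
printf '[Migration(2)]\nclass CreateOrdersTable {}\n' > "$dir/CreateOrdersTable.cs"

# Extract version numbers and print any that occur more than once.
grep -rhoE '\[Migration\([0-9]+' "$dir" --include='*.cs' \
  | grep -oE '[0-9]+' \
  | sort -n \
  | uniq -d
# prints: 2  (the duplicated version)
```

In CI, a non-empty result from the final pipeline can be turned into a failing exit code.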
Migration Patterns¶
Schema Creation Pattern¶
Create Schema and Tables:
[Migration(1)]
public class InitialSchema : Migration
{
internal const string SchemaName = "ConnectSoft.MicroserviceTemplate";
public override void Up()
{
// Create schema
this.Create.Schema(SchemaName);
// Create tables
this.CreateUsersTable();
this.CreateOrdersTable();
this.CreateOrderItemsTable();
}
public override void Down()
{
// Drop in reverse order
this.Delete.Table("OrderItems").InSchema(SchemaName);
this.Delete.Table("Orders").InSchema(SchemaName);
this.Delete.Table("Users").InSchema(SchemaName);
this.Delete.Schema(SchemaName);
}
private void CreateUsersTable()
{
this.Create
.Table("Users")
.InSchema(SchemaName)
.WithColumn("UserId")
.AsGuid()
.NotNullable()
.PrimaryKey()
.WithColumn("Email")
.AsString(255)
.NotNullable()
.WithColumn("CreatedOn")
.AsDateTime()
.NotNullable()
.WithDefault(SystemMethods.CurrentDateTime);
}
// ... other table creation methods
}
Incremental Change Pattern¶
Add Columns, Indexes, Constraints:
[Migration(2)]
public class AddUserProfileFields : Migration
{
public override void Up()
{
this.Alter
.Table("Users")
.InSchema(MicroserviceMigration.SchemaName)
.AddColumn("FirstName")
.AsString(100)
.Nullable()
.AddColumn("LastName")
.AsString(100)
.Nullable()
.AddColumn("PhoneNumber")
.AsString(20)
.Nullable();
// Create index on email
this.Create
.Index("IX_Users_Email")
.OnTable("Users")
.InSchema(MicroserviceMigration.SchemaName)
.OnColumn("Email")
.Ascending()
.WithOptions().Unique();
}
public override void Down()
{
this.Delete.Index("IX_Users_Email")
.OnTable("Users")
.InSchema(MicroserviceMigration.SchemaName);
this.Alter
.Table("Users")
.InSchema(MicroserviceMigration.SchemaName)
.DropColumn("PhoneNumber")
.DropColumn("LastName")
.DropColumn("FirstName");
}
}
Refactoring Pattern¶
Rename Tables, Columns:
[Migration(3)]
public class RenameTables : Migration
{
public override void Up()
{
this.Rename
.Table("Users")
.InSchema(MicroserviceMigration.SchemaName)
.To("Customers");
this.Rename
.Column("Email")
.OnTable("Customers")
.InSchema(MicroserviceMigration.SchemaName)
.To("EmailAddress");
}
public override void Down()
{
this.Rename
.Column("EmailAddress")
.OnTable("Customers")
.InSchema(MicroserviceMigration.SchemaName)
.To("Email");
this.Rename
.Table("Customers")
.InSchema(MicroserviceMigration.SchemaName)
.To("Users");
}
}
Data Migration Pattern¶
Separate Data Migrations from Schema:
[Migration(10, "DataMigration")]
public class MigrateUserData : Migration
{
public override void Up()
{
// Migrate data
this.Execute.Sql(@"
UPDATE Users
SET FullName = FirstName + ' ' + LastName
WHERE FullName IS NULL");
}
public override void Down()
{
// Revert data changes
this.Execute.Sql(@"
UPDATE Users
SET FullName = NULL
WHERE FullName IS NOT NULL");
}
}
Testing Migrations¶
Unit Testing FluentMigrator Migrations¶
Test Migration Up and Down:
// MicroserviceFluentMigrationsUnitTests.cs
[TestMethod]
public void CanCorrectlyMigrateUpAndDownUsingFluentMigratorForSqlServer()
{
// Arrange
var serviceProvider = CreateServiceProviderForSqlServer(out string? connectionString);
var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
// Act - Migrate up
runner.MigrateUp();
// Assert - Verify tables created
using var connection = new SqlConnection(connectionString);
connection.Open();
using var command = new SqlCommand(
"SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES",
connection);
int tableCount = (int)command.ExecuteScalar();
Assert.IsTrue(tableCount > 1, "Database should contain new tables after migration up");
// Act - Migrate down
runner.MigrateDown(0); // Migrate down to the initial database state
// Assert - Verify tables removed
tableCount = (int)command.ExecuteScalar();
Assert.AreEqual(1, tableCount, "Database is not empty after migration down");
}
Service Provider Setup:
private static ServiceProvider CreateServiceProviderForSqlServer(
out string? connectionString)
{
var services = new ServiceCollection();
var configuration = LoadConfiguration();
string? nhibernateConnectionString = configuration.GetConnectionString(
NHibernateSqlServerConnectionStringKey);
Assert.IsNotNull(nhibernateConnectionString);
SqlServerDatabaseHelper databaseHelper = new();
databaseHelper.CreateIfNotExists(nhibernateConnectionString);
services.AddFluentMigratorCore();
services.ConfigureRunner(fluentMigratorRunner => fluentMigratorRunner
.AddSqlServer2016()
.WithGlobalConnectionString(nhibernateConnectionString)
.ScanIn(typeof(MicroserviceMigration).Assembly).For.Migrations());
services.AddLogging(logger => logger.AddFluentMigratorConsole());
connectionString = nhibernateConnectionString;
return services.BuildServiceProvider();
}
Testing Multiple Databases¶
PostgreSQL Test:
[TestMethod]
public void CanCorrectlyMigrateUpAndDownUsingFluentMigratorForPostgreSQL()
{
var serviceProvider = CreateServiceProviderForPostgreSQL(out string? connectionString);
var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
using var connection = new NpgsqlConnection(connectionString);
connection.Open();
using var command = new NpgsqlCommand(
$"SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '{MicroserviceMigration.SchemaName}'",
connection);
long tableCount = Convert.ToInt64(command.ExecuteScalar());
Assert.IsTrue(tableCount >= 1);
runner.MigrateDown(0);
tableCount = Convert.ToInt64(command.ExecuteScalar());
Assert.AreEqual(0, tableCount);
}
MySQL Test:
[TestMethod]
public void CanCorrectlyMigrateUpAndDownUsingFluentMigratorForMySql()
{
var serviceProvider = CreateServiceProviderForMySql(out string? connectionString);
var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
using var connection = new MySqlConnection(connectionString);
connection.Open();
using var command = new MySqlCommand(
$"SELECT COUNT(*) FROM information_schema.tables WHERE LOWER(table_schema) = '{connection.Database.ToLowerInvariant()}'",
connection);
long tableCount = Convert.ToInt64(command.ExecuteScalar());
Assert.IsTrue(tableCount > 1);
runner.MigrateDown(0);
tableCount = Convert.ToInt64(command.ExecuteScalar());
Assert.AreEqual(1, tableCount);
}
Best Practices¶
Do's¶
- Version Migrations Sequentially
- Always Provide Down Methods
- Keep Migrations Atomic
- Use Descriptive Names
- Extract Helper Methods
- Test Migrations
- Separate Schema and Data Migrations
Don'ts¶
- Don't Modify Existing Migrations
- Don't Skip Migration Versions
- Don't Include Data Changes in Schema Migrations
- Don't Hardcode Values
- Don't Create Large Migrations
// ❌ BAD - Too many changes in one migration
[Migration(5)]
public class LargeMigration : Migration
{
    public override void Up()
    {
        // 50+ lines of schema changes
    }
}
// ✅ GOOD - Split into multiple migrations
[Migration(5)]
public class AddEmailColumn : Migration { }
[Migration(6)]
public class CreateOrdersTable : Migration { }
- Don't Forget Down Methods
Migration Execution Order¶
Startup Execution¶
Middleware Pipeline Order:
// MicroserviceRegistrationExtensions.cs
application.UseForwardedHeaders();
application.UseMicroserviceSerilogRequestLogging();
#if UseNHibernate
loggerFactory.UseNHibernateLogging();
application.RunMicroserviceFluentMigrations(); // ← Migrations run early
#endif
// ... rest of middleware
Why Early Execution:
- Database must be ready before services start
- Prevents runtime errors from missing schema
- Ensures schema consistency before request processing
Manual Execution¶
FluentMigrator CLI (if needed):
# Install FluentMigrator CLI
dotnet tool install -g FluentMigrator.DotNet.Cli
# Run migrations
dotnet fm migrate -p sqlserver2016 -c "connection-string" -a "MigrationAssembly.dll"
# Rollback the last applied migration
dotnet fm rollback -p sqlserver2016 -c "connection-string" -a "MigrationAssembly.dll"
Programmatic Execution:
var serviceProvider = services.BuildServiceProvider();
var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
// Migrate to latest
runner.MigrateUp();
// Migrate to specific version
runner.MigrateUp(5);
// Rollback one version
runner.MigrateDown(1);
// Rollback all
runner.MigrateDown(0);
Version Tracking¶
FluentMigrator VersionInfo¶
Table Structure:
CREATE TABLE [dbo].[VersionInfo](
[Version] BIGINT NOT NULL PRIMARY KEY,
[AppliedOn] DATETIME NULL,
[Description] NVARCHAR(1024) NULL
)
Querying Applied Migrations:
-- Get all applied migrations
SELECT * FROM [dbo].[VersionInfo] ORDER BY [Version] DESC;
-- Get latest migration version
SELECT MAX([Version]) FROM [dbo].[VersionInfo];
-- Check if specific migration is applied
SELECT COUNT(*) FROM [dbo].[VersionInfo] WHERE [Version] = 5;
MongoDB Migrations Tracking¶
Migrations Collection:
CSharpMongoMigrations creates a Migrations collection to track applied migrations:
// MongoDB migrations collection
{
"_id": ObjectId("..."),
"Version": 0,
"AppliedOn": ISODate("2024-01-15T10:30:00Z"),
"Description": "First ConnectSoft.MicroserviceTemplate's MongoDb migration"
}
Querying Applied Migrations:
// Get all applied migrations
db.Migrations.find().sort({ Version: -1 });
// Get latest migration version
db.Migrations.find().sort({ Version: -1 }).limit(1);
CI/CD Integration¶
Pipeline Execution¶
Azure DevOps Pipeline:
# azure-pipelines.yml
- task: DotNetCoreCLI@2
displayName: 'Run database migrations'
inputs:
command: 'custom'
custom: 'fm'
arguments: 'migrate -p sqlserver2016 -c "$(ConnectionString)" -a "$(Build.ArtifactStagingDirectory)/MigrationAssembly.dll"'
Deployment Strategy¶
Option 1: Migrations Before Deployment
stages:
- stage: Migrate
jobs:
- job: RunMigrations
steps:
- script: dotnet run -- migrate
- stage: Deploy
dependsOn: Migrate
jobs:
- job: DeployApplication
steps:
- script: dotnet publish
Option 2: Migrations at Startup
Migrations run automatically at application startup (current template approach).
Benefits:
- Simpler deployment pipeline
- Ensures migrations run before requests
- Automatic retry on failure
Considerations:
- Requires database connection at startup
- May delay application startup
- Multiple instances need to coordinate
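The multi-instance coordination concern can be addressed with a database-side lock so only one instance runs migrations at a time. A hypothetical sketch using SQL Server's sp_getapplock (not part of the template; the resource name and timeout are illustrative, and note that the migration runner still uses its own connection — the app lock only fences concurrent runners):

```csharp
using Microsoft.Data.SqlClient;

using var connection = new SqlConnection(connectionString);
connection.Open();
using var transaction = connection.BeginTransaction();

// Block until this instance holds an exclusive app lock (or 60 s elapse).
using var lockCommand = new SqlCommand(
    "EXEC sp_getapplock @Resource = 'SchemaMigrations', @LockMode = 'Exclusive', " +
    "@LockOwner = 'Transaction', @LockTimeout = 60000;",
    connection,
    transaction);
lockCommand.ExecuteNonQuery();

// Only one instance at a time reaches this point.
runner.MigrateUp();

transaction.Commit(); // Releases the app lock for the next instance.
```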
Troubleshooting¶
Issue: Migration Fails to Apply¶
Symptoms: Migration throws exception during execution.
Solutions:
1. Check for migration version conflicts
2. Verify the database connection string
3. Check migration syntax (database-specific)
4. Review migration logs for specific errors
5. Verify the database user has the necessary permissions
6. Check for existing objects that conflict
Issue: Migration Partially Applied¶
Symptoms: Migration fails partway through, leaving database in inconsistent state.
Solutions:
1. Manually rollback using Down() method
2. Fix migration and create new migration to correct state
3. Restore database from backup
4. Use transactions where possible (database-dependent)
Issue: Version Conflicts¶
Symptoms: Multiple developers create migrations with same version number.
Solutions:
1. Use timestamp versioning for distributed teams
2. Coordinate migration versions before committing
3. Use the migration version from source control
4. Resolve conflicts by renumbering one migration
Issue: Migration Too Slow¶
Symptoms: Migration takes too long to execute.
Solutions:
1. Split large migrations into smaller ones
2. Use batch operations for data migrations
3. Create indexes after data migration
4. Consider offline migration for very large changes
5. Use database-specific optimizations
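Batch operations can be sketched as a data migration that updates rows in chunks so locks and transaction-log growth stay small. A hypothetical example (batch size and SQL Server syntax are illustrative; TransactionBehavior.None is needed because FluentMigrator's default per-migration transaction would otherwise collapse the batches into one large transaction):

```csharp
[Migration(11, TransactionBehavior.None, "DataMigration - batched status backfill")]
public class BackfillOrderStatus : Migration
{
    public override void Up()
    {
        // Update 5,000 rows per statement until no NULL statuses remain.
        this.Execute.Sql(@"
            WHILE 1 = 1
            BEGIN
                UPDATE TOP (5000) Orders
                SET Status = 'Pending'
                WHERE Status IS NULL;
                IF @@ROWCOUNT = 0 BREAK;
            END");
    }

    public override void Down()
    {
        this.Execute.Sql(@"
            UPDATE Orders
            SET Status = NULL
            WHERE Status = 'Pending'");
    }
}
```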
Issue: Cannot Rollback Migration¶
Symptoms: Down() method fails or doesn't exist.
Solutions:
1. Always implement Down() methods
2. Test Down() methods in unit tests
3. For data migrations, ensure reversible logic
4. Document irreversible migrations
5. Consider backup strategy before applying migrations
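Documenting an irreversible migration is clearer when Down() fails loudly instead of silently doing nothing. A sketch (message text is illustrative):

```csharp
public override void Down()
{
    // This migration deleted data that schema changes alone cannot restore.
    throw new InvalidOperationException(
        "This migration is irreversible: restore the database from a backup " +
        "taken before it was applied.");
}
```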
Advanced Patterns¶
Conditional Migrations¶
Environment-Specific Migrations:
[Migration(5)]
public class ConditionalMigration : Migration
{
public override void Up()
{
// Only create in development
if (Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Development")
{
this.Create
.Table("TestData")
.InSchema(SchemaName)
.WithColumn("Id").AsGuid().PrimaryKey();
}
}
public override void Down()
{
if (Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Development")
{
this.Delete.Table("TestData").InSchema(SchemaName);
}
}
}
Transactional Migrations¶
SQL Server Transactions:
[Migration(5)]
public class TransactionalMigration : Migration
{
public override void Up()
{
// FluentMigrator wraps each migration in a transaction by default
this.Create.Table("Orders")...
this.Create.Table("OrderItems")...
// If any fails, entire migration is rolled back
}
}
Multi-Database Migrations¶
Database-Specific Logic:
[Migration(5)]
public class MultiDatabaseMigration : Migration
{
public override void Up()
{
if (this.ConnectionString.Contains("PostgreSQL"))
{
this.Execute.Sql("CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"");
}
// Common migration logic
this.Create.Table("Orders")...
}
}
Summary¶
Database migrations in the ConnectSoft Microservice Template provide:
- ✅ Version Control: Schema changes versioned alongside code
- ✅ Reproducibility: Consistent schemas across environments
- ✅ Safety: Tested migrations before production
- ✅ Rollback Support: Down migrations for schema reversion
- ✅ Multi-Database Support: FluentMigrator (SQL) and CSharpMongoMigrations (MongoDB)
- ✅ Automated Execution: Migrations run at application startup
- ✅ Testing Support: Unit tests for migration validation
- ✅ CI/CD Integration: Migrations in deployment pipelines
By following these patterns, teams can:
- Version Database Schema: Track all schema changes in source control
- Deploy Safely: Apply migrations consistently across environments
- Rollback Changes: Revert schema changes when needed
- Test Migrations: Validate migrations before deployment
- Coordinate Changes: Multiple developers can work on schema changes safely
- Maintain Consistency: Same schema in all environments
Database migrations are essential for maintaining database schema consistency, enabling safe deployments, and providing a reliable way to evolve database structure over time while maintaining data integrity and enabling rollback capabilities.