đź“‹ Interview Questions

🔹 C# Language & OOP Foundations

1. OOP Principles

  • Can you explain the four pillars of OOP (encapsulation, inheritance, polymorphism, abstraction) with examples?

    • Answer: Encapsulation hides internal details (e.g., private fields with public methods). Inheritance lets classes extend base behavior. Polymorphism allows different implementations of the same method (virtual/override). Abstraction defines contracts with abstract classes or interfaces without exposing implementation details.
  • What’s the difference between abstract classes and interfaces in C#?

    • Answer: Abstract classes can provide both abstract methods and implemented members, while interfaces only define contracts. A class can implement multiple interfaces but only inherit from one abstract class.
  • What is method overriding vs. method overloading?

    • Answer: Overriding redefines a base class’s virtual/abstract method in a derived class. Overloading defines multiple methods with the same name but different parameter lists in the same scope.
  • How does multiple inheritance work in C# (via interfaces)?

    • Answer: C# does not support multiple base classes, but a class can implement multiple interfaces. This allows combining behaviors without the issues of multiple class inheritance.
  • What are sealed classes and when would you use them?

    • Answer: A sealed class cannot be inherited. Use it when you want to prevent further extension for reasons like security, stability, or performance optimization.
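
The answers above can be tied together in one minimal sketch (the shape types are illustrative, not from any particular codebase):

```csharp
using System;

public interface IShape { double Area(); }           // abstraction via an interface

public abstract class Shape : IShape                 // abstraction via an abstract class
{
    public abstract double Area();                   // contract only, no implementation
    public virtual string Describe() => $"Shape with area {Area()}";
}

public sealed class Circle : Shape                   // sealed: no further inheritance
{
    private readonly double _radius;                 // encapsulation: hidden state
    public Circle(double radius) => _radius = radius;
    public override double Area() => Math.PI * _radius * _radius;  // overriding
    public override string Describe() => $"Circle with area {Area():F2}";
}

public class Square : Shape
{
    private readonly double _side;
    public Square(double side) => _side = side;
    public override double Area() => _side * _side;
}
```

Polymorphism shows up at the call site: a variable typed as `Shape` dispatches `Describe()` to whichever derived implementation the object actually has.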

2. Types & Memory

  • What’s the difference between value types and reference types?

    • Answer: Value types (structs, primitives) hold their data directly and are stored inline: on the stack for locals, or inside the containing object on the heap when they are fields of a class. Reference types (classes, arrays) hold a reference to the actual object stored on the heap.
  • How are structs different from classes?

    • Answer: Structs are value types, lightweight, cannot inherit from other structs or classes, and are best for small, immutable data. Classes are reference types, support inheritance, and are better for complex object models.
  • What’s the difference between ref, out, and in parameters?

    • Answer: ref passes a variable by reference and must be initialized before use. out also passes by reference but must be assigned inside the method. in is passed by reference but read-only.
  • Explain boxing and unboxing. Why can it be dangerous for performance?

    • Answer: Boxing converts a value type into an object, and unboxing extracts it back. It’s dangerous for performance because it creates heap allocations and GC overhead if used frequently.
  • What are nullable reference types and how do they improve code safety?

    • Answer: Nullable reference types (introduced in C# 8) let developers explicitly mark reference variables as nullable. The compiler warns about potential null dereferences, helping reduce null reference exceptions.
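
The three parameter modifiers can be contrasted in a few lines (a sketch; the method names are illustrative):

```csharp
using System;

public static class ParameterDemo
{
    // ref: the caller must initialize the variable; the method may read and write it.
    public static void Double(ref int x) => x *= 2;

    // out: the caller need not initialize; the method must assign before returning.
    public static bool TryParsePositive(string s, out int value) =>
        int.TryParse(s, out value) && value > 0;

    // in: passed by reference for efficiency, but read-only inside the method.
    public static double Magnitude(in double x) => Math.Abs(x);
}
```

Boxing, by contrast, happens implicitly: `object o = 42;` allocates the value on the heap, and `(int)o` unboxes it back, which is why doing it in a tight loop hurts performance.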

3. Generics & Collections

  • What are generics and why are they useful?

    • Answer: Generics allow you to define classes, methods, and data structures with type parameters. They increase type safety, reduce code duplication, and improve performance by avoiding boxing/unboxing.
  • Difference between List<T>, Dictionary<TKey,TValue>, and HashSet<T>?

    • Answer: List<T> is an ordered collection that allows duplicates. Dictionary<TKey,TValue> stores key-value pairs with fast lookups by key. HashSet<T> stores unique values without order, optimized for fast membership checks.
  • What’s covariance and contravariance in generics?

    • Answer: Covariance (out) lets a generic type with a more derived type argument be used where a less derived one is expected (e.g., an IEnumerable<string> where an IEnumerable<object> is needed). Contravariance (in) allows the opposite direction and applies to positions that consume values (e.g., an IComparer<object> used as an IComparer<string>), typically in delegates and interfaces.
  • How does IEnumerable<T> differ from IQueryable<T>?

    • Answer: IEnumerable<T> executes queries in memory and is best for in-memory collections. IQueryable<T> builds expression trees that can be translated by a provider (like EF Core) into SQL or another query language for remote execution.
  • Explain yield return and when you’d use it.

    • Answer: yield return lets you create an iterator method that produces values on demand. It’s useful for deferred execution and when generating sequences without building intermediate collections.
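
yield return in action, as a minimal lazy sequence (illustrative example):

```csharp
using System.Collections.Generic;

public static class Sequences
{
    // An infinite Fibonacci sequence. Nothing executes until the caller
    // enumerates, and only the requested elements are ever produced.
    public static IEnumerable<long> Fibonacci()
    {
        long a = 0, b = 1;
        while (true)
        {
            yield return a;
            (a, b) = (b, a + b);
        }
    }
}
```

Because execution is deferred, `Sequences.Fibonacci().Take(6)` materializes only six values despite the infinite loop, and no intermediate collection is built.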

4. Delegates, Events, and LINQ

  • What are delegates and how are they different from function pointers?

    • Answer: Delegates are type-safe references to methods in C#. Unlike traditional function pointers, delegates ensure correct signature matching and support multicast invocation (calling multiple methods).
  • Explain the difference between Action, Func, and Predicate.

    • Answer: Action represents a delegate with no return value, Func is a delegate whose last type parameter is the return type (e.g., Func<int, string> takes an int and returns a string), and Predicate<T> returns a bool to evaluate a condition. They are shorthand generic delegate types that avoid declaring custom delegates.
  • How do events work in C#? Can you explain the observer pattern?

    • Answer: Events are based on delegates and allow publishers to notify subscribers when something happens. They implement the observer pattern by decoupling publishers and multiple observers.
  • What’s the difference between LINQ to Objects, LINQ to SQL, and LINQ to Entities?

    • Answer: LINQ to Objects runs queries in-memory on collections, LINQ to SQL translates queries to SQL for relational DBs, and LINQ to Entities (EF) translates LINQ into queries against the Entity Framework ORM.
  • How do you optimize LINQ queries to avoid performance issues?

    • Answer: Avoid unnecessary enumeration, use projections (Select) to reduce data, prefer Any() instead of Count() > 0, and use compiled queries or raw SQL for heavy operations.
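
The delegate shorthands and the Any()-over-Count() advice above, in one small sketch (names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DelegateDemo
{
    public static List<string> Log = new();

    public static void Run()
    {
        Action<string> log = msg => Log.Add(msg);    // Action: no return value
        Func<int, int, int> add = (a, b) => a + b;   // Func: last type param is the return type
        Predicate<int> isEven = n => n % 2 == 0;     // Predicate: always returns bool

        log($"sum={add(2, 3)}");

        var numbers = new[] { 1, 2, 3, 4 };
        // Any() stops at the first match; Count() > 0 would enumerate everything.
        bool hasEven = numbers.Any(n => isEven(n));
        log($"hasEven={hasEven}");
    }
}
```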

5. Asynchronous Programming

  • Explain the difference between synchronous, asynchronous, and parallel execution.

    • Answer: Synchronous executes tasks one after another. Asynchronous allows non-blocking operations while waiting (e.g., I/O). Parallel executes tasks simultaneously across multiple threads or cores.
  • How does the async/await pattern work in C#?

    • Answer: async/await simplifies asynchronous code by letting methods return Task or Task<T>. await suspends execution until the task completes without blocking the thread.
  • What’s the difference between Task, ValueTask, and Thread?

    • Answer: Task represents an asynchronous operation. ValueTask is a lightweight alternative that avoids a heap allocation when the result is often available synchronously, useful in performance-sensitive hot paths. Thread is a low-level OS thread with higher overhead.
  • When would you use ConfigureAwait(false)?

    • Answer: Use it in library code or background tasks where you don’t need to resume on the original synchronization context (e.g., UI thread). It improves performance and avoids deadlocks.
  • How would you cancel an async operation using CancellationToken?

    • Answer: Pass a CancellationToken into the async method, call token.ThrowIfCancellationRequested() at checkpoints (or check token.IsCancellationRequested), and forward the token to awaited APIs that accept one. Call tokenSource.Cancel() to trigger cancellation; cancelled work surfaces as OperationCanceledException.
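
A cooperative-cancellation sketch (the counting loop is illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancellationDemo
{
    // Cancellation is cooperative: the operation checks the token and bails out.
    public static async Task<int> CountAsync(int upTo, CancellationToken token)
    {
        int count = 0;
        for (int i = 0; i < upTo; i++)
        {
            token.ThrowIfCancellationRequested();   // throws OperationCanceledException
            await Task.Delay(10, token);            // built-in APIs honor the token too
            count++;
        }
        return count;
    }
}
```

The caller creates a CancellationTokenSource, passes cts.Token down the call chain, and calls cts.Cancel() (or CancelAfter) to stop the work.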

6. Advanced Features

  • What are records in C# 9+ and how do they differ from classes?

    • Answer: Records (C# 9; record struct value types followed in C# 10) are optimized for immutability and value-based equality. Unlike classes, equality checks compare property values rather than object references, and with-expressions support non-destructive mutation.
  • How do pattern matching enhancements improve readability?

    • Answer: New pattern matching features (switch expressions, type patterns, relational patterns) reduce boilerplate code and make logic more declarative and concise.
  • What are tuples in C# and when would you prefer them over classes?

    • Answer: Tuples are lightweight data structures that group multiple values without defining a class. They are useful for temporary groupings or returning multiple values from a method.
  • Can you explain extension methods and give a real-world use case?

    • Answer: Extension methods add new methods to existing types without modifying them. For example, adding a ToWords() method to integers for better readability.
  • What’s the difference between dynamic and var?

    • Answer: var is strongly typed at compile time, while dynamic bypasses compile-time type checking and resolves at runtime. Use dynamic carefully as it sacrifices safety.
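
Records' value equality and with-expressions take only one line to demonstrate (the type is illustrative):

```csharp
// Positional record: the compiler generates the constructor, init-only
// properties, value-based Equals/GetHashCode/==, and a readable ToString.
public record Person(string Name, int Age);
```

Two Person instances with the same Name and Age compare equal even though they are distinct objects, and `person with { Age = 37 }` produces a modified copy without mutating the original.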

7. Reflection & Attributes

  • What is reflection in C# and when would you use it?

    • Answer: Reflection allows inspecting and manipulating types, methods, and assemblies at runtime. It’s used in serialization, dependency injection, and dynamic loading.
  • How do you define and apply custom attributes?

    • Answer: Define a class inheriting from System.Attribute and apply it using [MyAttribute]. Attributes add metadata used by reflection or frameworks.
  • What’s the difference between [Serializable], [Obsolete], and [DataContract] attributes?

    • Answer: [Serializable] marks classes for serialization, [Obsolete] warns or errors on deprecated code, and [DataContract] defines how data is serialized/deserialized in services.
  • How would you use reflection to dynamically load assemblies or types?

    • Answer: Use Assembly.Load or Assembly.LoadFrom, then GetType and Activator.CreateInstance to create objects dynamically.
  • What are the downsides of overusing reflection?

    • Answer: It is slower, less safe, and harder to maintain. Overuse can introduce runtime errors and security risks.
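
A compact sketch combining a custom attribute with reflection and Activator (the attribute and entity names are illustrative):

```csharp
using System;
using System.Reflection;

// A custom attribute: inherit from System.Attribute.
[AttributeUsage(AttributeTargets.Class)]
public class TableAttribute : Attribute
{
    public string Name { get; }
    public TableAttribute(string name) => Name = name;
}

[Table("customers")]
public class Customer { public int Id { get; set; } }

public static class ReflectionDemo
{
    // Read the attribute back at runtime and create an instance dynamically.
    public static (string Table, object Instance) Inspect(Type type)
    {
        var attr = type.GetCustomAttribute<TableAttribute>();
        var obj = Activator.CreateInstance(type)!;
        return (attr?.Name ?? "", obj);
    }
}
```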

8. Memory Management & Performance

  • How does the Garbage Collector work in .NET?

    • Answer: The GC reclaims memory by removing unreachable objects. It runs automatically and compacts memory to reduce fragmentation.
  • What are the GC generations (Gen 0, 1, 2, LOH)?

    • Answer: Gen 0 collects short-lived objects, Gen 1 handles survivors, and Gen 2 is for long-lived objects. The Large Object Heap (LOH) stores allocations of roughly 85,000 bytes or more and is collected together with Gen 2.
  • How would you diagnose a memory leak in a C# application?

    • Answer: Use profiling tools like dotMemory or PerfView, track object retention graphs, and look for undisposed resources or event handler leaks.
  • What’s the difference between IDisposable and finalizer?

    • Answer: IDisposable provides a deterministic cleanup mechanism via Dispose. Finalizers run non-deterministically during GC, which can delay resource release.
  • How do you use the using statement for deterministic disposal?

    • Answer: The using statement ensures Dispose() is automatically called when the block ends, releasing resources promptly.
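
Deterministic disposal in miniature (the resource type is illustrative):

```csharp
using System;

// Dispose runs when the using block ends, even if an exception is thrown
// inside it, making cleanup deterministic (unlike finalizers).
public class Resource : IDisposable
{
    public static bool Disposed;
    public void Dispose() => Disposed = true;
}
```

Both forms call Dispose at the end of scope: the classic block `using (var r = new Resource()) { ... }` and the C# 8 declaration form `using var r = new Resource();`.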

9. Threading & Parallelism

  • What’s the difference between Thread, Task, and ThreadPool?

    • Answer: Thread is a low-level OS thread. Task represents an async operation managed by the runtime. ThreadPool manages a pool of reusable threads for efficiency.
  • How do you use Parallel.ForEach or PLINQ?

    • Answer: Parallel.ForEach splits work across threads for CPU-bound tasks. PLINQ parallelizes LINQ queries for data processing workloads.
  • What are locks, Monitor, and SemaphoreSlim?

    • Answer: The lock statement (which compiles down to Monitor.Enter/Exit) ensures mutual exclusion for critical sections. SemaphoreSlim limits concurrent access to a configurable count and, unlike lock, supports asynchronous waiting via WaitAsync.
  • Explain deadlock and how to avoid it.

    • Answer: Deadlock happens when threads wait on each other’s locks indefinitely. Avoid by acquiring locks in a consistent order or using timeout/cancellation.
  • What is the Thread-Safe Singleton pattern in C#?

    • Answer: It ensures only one instance of a class is created in multi-threaded scenarios. Typically implemented using Lazy<T> or double-check locking.
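
The Lazy<T> variant mentioned above fits in a few lines (the class name is illustrative):

```csharp
using System;

// Thread-safe singleton: Lazy<T>'s default mode guarantees the factory runs
// exactly once, even when multiple threads hit Instance simultaneously.
public sealed class AppConfig
{
    private static readonly Lazy<AppConfig> _instance =
        new(() => new AppConfig());

    public static AppConfig Instance => _instance.Value;

    private AppConfig() { }   // private ctor prevents outside construction
}
```

This avoids hand-written double-check locking, which is easy to get subtly wrong.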

10. C# Language Evolution

  • What are the most important features introduced in C# 8, 9, 10, 11, 12?

    • Answer: C# 8 introduced nullable reference types, default interface methods. C# 9 added records and init properties. C# 10 had global usings and file-scoped namespaces. C# 11 introduced raw string literals, required members. C# 12 added primary constructors for classes.
  • What are default interface methods?

    • Answer: They let interfaces define method implementations. This enables adding new functionality to interfaces without breaking existing implementations.
  • How do init-only properties differ from standard properties?

    • Answer: init properties can only be set during object initialization, making objects more immutable. Standard properties allow setting anytime.
  • Explain file-scoped namespaces.

    • Answer: A file-scoped namespace (namespace X;) applies to the whole file, reducing indentation and boilerplate.
  • What are source generators in C#?

    • Answer: Source generators run at compile time and generate additional code into the project, used for reducing boilerplate (e.g., auto-generating serialization or DTOs).

🔹 Backend Foundations (.NET Core)

1. Core .NET vs .NET Core

  • What are the main differences between .NET Framework and .NET Core?

    • Answer: .NET Framework is Windows-only, closed-source, and monolithic, while .NET Core (and .NET 5+) is cross-platform, open-source, modular, and optimized for performance and cloud-native workloads.
  • Why would you choose .NET Core over the old framework for new projects?

    • Answer: Because .NET Core provides cross-platform support, better performance, modern APIs, long-term support, and is the future of Microsoft’s development platform.
  • What does cross-platform mean in the context of .NET Core?

    • Answer: It means the same code can run on Windows, Linux, and macOS without modification, enabling broader deployment options (e.g., Docker, Kubernetes).

2. Application Startup & Middleware

  • Can you explain the request pipeline in ASP.NET Core?

    • Answer: Requests travel through a series of middleware components that can handle, modify, or forward them before reaching the endpoint. The response flows back through the pipeline in reverse order.
  • How do you add a custom middleware? What use cases have you implemented it for?

    • Answer: Implement a class with Invoke or InvokeAsync, then register it using app.UseMiddleware<T>(). Common use cases: logging, exception handling, authentication.
  • What’s the difference between Use, Run, and Map in middleware registration?

    • Answer: Use adds middleware that can call the next delegate; Run is terminal and doesn’t pass control further; Map creates a branch in the pipeline for specific request paths.
  • How do you handle global exception handling in middleware?

    • Answer: Use app.UseExceptionHandler() or a custom middleware that wraps try/catch, logs exceptions, and returns standardized error responses.
  • How would you add request/response logging middleware?

    • Answer: Implement middleware to capture HttpContext.Request and HttpContext.Response data, log it, and then call the next delegate. Often combined with Serilog or Application Insights.
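
A sketch of the request/response logging middleware described above; it assumes an ASP.NET Core project, and the class name and message templates are illustrative:

```csharp
// Requires an ASP.NET Core project; a sketch, not drop-in code.
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next,
                                    ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        _logger.LogInformation("Request {Method} {Path}",
            context.Request.Method, context.Request.Path);

        await _next(context);   // hand off to the rest of the pipeline

        _logger.LogInformation("Response {StatusCode}",
            context.Response.StatusCode);
    }
}

// registration in Program.cs:
// app.UseMiddleware<RequestLoggingMiddleware>();
```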

3. Dependency Injection (DI)

  • How does built-in DI in ASP.NET Core work?

    • Answer: ASP.NET Core has a built-in IoC container where services are registered with lifetimes and resolved automatically via constructor injection.
  • What’s the difference between AddSingleton, AddScoped, and AddTransient?

    • Answer: Singleton creates one instance for the app’s lifetime, Scoped creates one per request, and Transient creates a new instance each time it’s requested.
  • When would you use a factory pattern inside DI?

    • Answer: When object creation requires logic or runtime parameters that aren’t known at registration time, you can register a factory delegate.
  • How do you register generic services in DI (e.g., repositories)?

    • Answer: Use open generic registrations, e.g., services.AddScoped(typeof(IRepository<>), typeof(Repository<>));.
  • Can you inject configuration or options into services?

    • Answer: Yes, by binding configuration sections to POCO classes via services.Configure<T>(), then injecting IOptions<T>, IOptionsSnapshot<T>, or IOptionsMonitor<T>.
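
The registration patterns from this section, collected into one Program.cs fragment (all interface and class names here are invented for illustration):

```csharp
// Lifetimes:
builder.Services.AddSingleton<IClock, SystemClock>();        // one instance for the app
builder.Services.AddScoped<IOrderService, OrderService>();   // one per HTTP request
builder.Services.AddTransient<IEmailSender, SmtpSender>();   // new instance every resolve

// Open generic registration for repositories:
builder.Services.AddScoped(typeof(IRepository<>), typeof(Repository<>));

// Factory registration when construction needs runtime logic:
builder.Services.AddScoped<IPaymentGateway>(sp =>
    sp.GetRequiredService<IConfiguration>()["Payments:Mode"] == "Sandbox"
        ? new SandboxGateway()
        : new LiveGateway());

// Options pattern: bind a section, then inject IOptions<SmtpSettings> into services.
builder.Services.Configure<SmtpSettings>(
    builder.Configuration.GetSection("Smtp"));
```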

4. Controllers & Endpoints

  • What are Controllers and Minimal APIs in ASP.NET Core?

    • Answer: Controllers are part of the MVC pattern and provide structured endpoints with attributes and conventions. Minimal APIs are lightweight, function-based endpoints introduced in .NET 6 for simpler microservices or small applications.
  • Which would you prefer for microservices – controllers or minimal APIs? Why?

    • Answer: For microservices, Minimal APIs are often preferred due to reduced overhead, faster startup, and simpler syntax. Controllers are better for larger projects needing filters, conventions, and advanced routing.
  • How do you implement model binding in ASP.NET Core?

    • Answer: ASP.NET Core automatically maps request data (query, route, body, headers) to method parameters and models. Custom model binders can be created for special cases.
  • How do you handle validation (DataAnnotations, FluentValidation)?

    • Answer: Use DataAnnotations for simple validation attributes ([Required], [Range]), or FluentValidation for richer, fluent, and testable validation logic. Integrate validation in the pipeline so invalid models return standardized responses.
  • How would you design standardized error responses?

    • Answer: Use ProblemDetails (RFC 7807) to return consistent error payloads with status, title, and trace ID. Middleware can centralize exception-to-response mapping.

5. Configuration & Options

  • How does ASP.NET Core load configuration (appsettings.json, environment variables, KeyVault, etc.)?

    • Answer: ASP.NET Core uses a configuration provider system that layers sources in order (JSON files, environment variables, secrets, KeyVault, command-line). Later sources override earlier ones.
  • What’s the purpose of Options pattern (IOptions, IOptionsSnapshot, IOptionsMonitor)?

    • Answer: It binds configuration sections to strongly typed classes. IOptions is singleton, IOptionsSnapshot updates per request (scoped), and IOptionsMonitor observes changes dynamically.
  • How do you structure configuration for multi-environment deployments (Dev/Test/Prod)?

    • Answer: Use environment-specific appsettings.{Environment}.json files, combined with environment variables and secrets. Set ASPNETCORE_ENVIRONMENT to switch environments.
  • How would you secure sensitive settings (connection strings, API keys)?

    • Answer: Store them in Azure Key Vault or environment variables, not in code or source control. Access them using managed identities or secret providers.

6. Health Checks & Readiness

  • How do you implement ASP.NET Core Health Checks?

    • Answer: Register health checks with services.AddHealthChecks() and map endpoints via app.MapHealthChecks("/health"). Add checks for DB, cache, or external dependencies.
  • What’s the difference between liveness and readiness probes?

    • Answer: Liveness indicates whether the app is running and should be restarted if failing. Readiness checks if the app is ready to handle traffic (e.g., DB connection available).
  • How would you integrate health checks into Kubernetes (AKS)?

    • Answer: Configure livenessProbe and readinessProbe in pod specs to call health check endpoints. Kubernetes uses these to restart or stop routing traffic to unhealthy pods.
  • How do you expose health checks for dependent services (DB, Redis, Service Bus)?

    • Answer: Add AddCheck() with custom logic or use built-in extensions like .AddSqlServer(), .AddRedis(), .AddAzureServiceBusQueue(). Each returns healthy/unhealthy status.
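
A registration sketch for the liveness/readiness split; it assumes the AspNetCore.HealthChecks.SqlServer extension package, and AppState.Ready is an invented readiness flag:

```csharp
builder.Services.AddHealthChecks()
    .AddSqlServer(builder.Configuration.GetConnectionString("Default")!)
    .AddCheck("startup-complete", () =>
        AppState.Ready ? HealthCheckResult.Healthy()
                       : HealthCheckResult.Unhealthy());

app.MapHealthChecks("/health/live");    // liveness: is the process running?
app.MapHealthChecks("/health/ready");   // readiness: can it serve traffic yet?
```

Kubernetes probes then point livenessProbe at /health/live and readinessProbe at /health/ready.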

7. Telemetry & Logging

  • Which logging providers have you used in ASP.NET Core? (Serilog, NLog, Application Insights)

    • Answer: Common providers are Serilog for structured logging, NLog for flexibility, and Application Insights for telemetry and monitoring in Azure.
  • What’s the difference between structured logging and plain text logging?

    • Answer: Structured logging stores log data as key-value pairs (JSON) enabling filtering and queries. Plain text is harder to query and analyze.
  • How do you enrich logs with contextual data (e.g., correlation IDs)?

    • Answer: Use middleware or enrichers to attach correlation IDs, request IDs, or user info to the logging scope so it flows with each log entry.
  • How do you implement distributed tracing (OpenTelemetry)?

    • Answer: Configure OpenTelemetry SDK to capture traces, metrics, and logs. Export them to systems like Jaeger, Zipkin, or Application Insights to visualize end-to-end request flow.
  • How do you manage log levels across environments (debug in dev, info in prod)?

    • Answer: Configure log levels in appsettings.{Environment}.json. Use verbose/debug for development, info/warning for staging, and warning/error for production.

8. Error Handling & Resilience

  • What are the recommended patterns for global exception handling in ASP.NET Core?

    • Answer: Use UseExceptionHandler() middleware for production-safe error pages and UseDeveloperExceptionPage() in development. Custom middleware wrapping try/catch can also centralize logging and error formatting.
  • How do you return problem details (RFC 7807) instead of generic error messages?

    • Answer: Return a ProblemDetails object (built-in to ASP.NET Core) from middleware or controllers. This provides standardized fields like status, title, and traceId.
  • How do you configure retry policies for downstream API calls (Polly)?

    • Answer: Wrap HttpClient calls with AddPolicyHandler() in IHttpClientFactory using Polly policies. Configure exponential backoff or specific retry counts.
  • What’s the difference between retry, circuit breaker, and fallback patterns?

    • Answer: Retry automatically re-executes failed calls. Circuit breaker stops calls temporarily after repeated failures to avoid overloading a system. Fallback provides an alternate response or service when the primary one fails.
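
A Polly configuration sketch showing retry and circuit breaker together, assuming the Microsoft.Extensions.Http.Polly and Polly.Extensions.Http packages (client name and URL are illustrative):

```csharp
builder.Services.AddHttpClient("catalog", c =>
        c.BaseAddress = new Uri("https://catalog.example.com"))
    // retry: 3 attempts with exponential backoff on transient HTTP failures
    .AddPolicyHandler(HttpPolicyExtensions.HandleTransientHttpError()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))))
    // circuit breaker: stop calling for 30s after 5 consecutive failures
    .AddPolicyHandler(HttpPolicyExtensions.HandleTransientHttpError()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));
```

A fallback policy can be layered on top to return a cached or default response when both retries and the circuit give up.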

9. Hosting & Services

  • What’s the difference between Kestrel and IIS hosting?

    • Answer: Kestrel is the cross-platform, high-performance web server used by .NET Core. IIS (or Nginx/Apache) often acts as a reverse proxy in front of Kestrel for load balancing, TLS termination, and advanced hosting features.
  • How do you self-host a .NET Core API with Kestrel only?

    • Answer: Configure Kestrel in Program.cs using WebApplication.CreateBuilder() and builder.WebHost.UseKestrel(). Run without IIS or reverse proxy if direct exposure is acceptable.
  • What are hosted services (IHostedService, BackgroundService)?

    • Answer: They provide a way to run background tasks in ASP.NET Core applications. BackgroundService is a base class simplifying long-running services that start with the host.
  • When would you use Worker Services instead of Web APIs?

    • Answer: Worker Services are used for background processing without HTTP endpoints, such as scheduled jobs, queue processing, or ETL pipelines.
  • How do you integrate background jobs with Hangfire or Azure Functions?

    • Answer: Hangfire provides persistent background job scheduling in ASP.NET Core via queues and dashboards. Azure Functions provide serverless execution of background tasks, often triggered by timers or events.

10. Advanced Topics

  • How does ASP.NET Core support gRPC services alongside REST APIs?

    • Answer: ASP.NET Core allows adding both gRPC and REST endpoints in the same project by configuring endpoints in MapGrpcService and MapControllers together.
  • How would you implement versioning in APIs?

    • Answer: Use the Asp.Versioning.Mvc package (formerly Microsoft.AspNetCore.Mvc.Versioning) to version via URL segment, query string, or headers. Each controller can specify versions with attributes like [ApiVersion("1.0")].
  • What’s the difference between synchronous and asynchronous controllers?

    • Answer: Synchronous controllers block threads during I/O, which can reduce scalability. Asynchronous controllers use async/await and free up threads, allowing more concurrent requests.
  • How do you implement cancellation tokens for long-running requests?

    • Answer: Accept CancellationToken as a method parameter in controllers and pass it down to async operations like DB or HTTP calls. If cancelled, throw OperationCanceledException.
  • What are filters (action, exception, authorization) in ASP.NET Core?

    • Answer: Filters allow cross-cutting logic in MVC. Action filters run before/after actions, exception filters handle errors, and authorization filters enforce policies before execution.

🔹 Data & Persistence

1. ORM & Data Access

  • What’s the difference between Entity Framework Core, NHibernate, and Dapper?

    • Answer: EF Core is Microsoft’s modern ORM with LINQ integration and good tooling. NHibernate is mature, feature-rich, and offers advanced mappings. Dapper is a micro-ORM focused on performance, mapping raw SQL to objects with minimal overhead.
  • When would you choose EF Core vs. NHibernate vs. raw ADO.NET?

    • Answer: EF Core suits most modern .NET apps with good performance and tooling. NHibernate is better for complex domain models and legacy integration. Raw ADO.NET gives the best performance and control, but requires more boilerplate.
  • What are the advantages and disadvantages of code-first vs. database-first approaches?

    • Answer: Code-first lets developers evolve schemas from code, with migrations, but requires discipline. Database-first is useful when a DB already exists but may lead to models tightly coupled to DB design.
  • How do you implement the Repository and Unit of Work patterns with EF Core or NHibernate?

    • Answer: Create repository interfaces and implementations to abstract DB operations, while the Unit of Work encapsulates DbContext or ISession to manage transactions and persistence.
  • How do you optimize ORM performance in high-traffic applications?

    • Answer: Use compiled queries, batching, eager loading when appropriate, caching, and no-tracking queries. Profile queries to detect N+1 issues and avoid unnecessary object tracking.

2. Querying & LINQ

  • What’s the difference between IEnumerable and IQueryable?

    • Answer: IEnumerable<T> executes queries in memory and is best for in-memory collections. IQueryable<T> builds expression trees executed by a provider (like EF) against a database.
  • How do you write optimized LINQ queries that avoid N+1 issues?

    • Answer: Use .Include() for eager loading, projection with .Select(), or batch queries. Avoid lazy loading in loops.
  • What’s the difference between eager loading, lazy loading, and explicit loading?

    • Answer: Eager loading retrieves related entities upfront. Lazy loading defers loading until accessed. Explicit loading requires manual calls to load navigation properties.
  • How do you debug or profile queries generated by EF Core?

    • Answer: Enable logging with ILogger, inspect SQL via ToQueryString(), or use SQL Profiler / EFCorePowerTools.
  • How do you handle raw SQL queries in EF Core or NHibernate?

    • Answer: EF Core supports FromSqlRaw() and ExecuteSqlRaw(). NHibernate allows HQL, Criteria, or direct SQL queries. Always parameterize to avoid injection.

3. Transactions & Concurrency

  • How do you handle transactions in EF Core and NHibernate?

    • Answer: In EF Core, use DbContext.Database.BeginTransaction() or ambient TransactionScope. In NHibernate, use session.BeginTransaction() and commit/rollback explicitly.
  • What’s the difference between optimistic and pessimistic concurrency control?

    • Answer: Optimistic assumes minimal conflicts and uses version checks (row version/timestamp). Pessimistic locks rows until transactions complete to prevent conflicts but reduces scalability.
  • How do you prevent deadlocks in database operations?

    • Answer: Keep transactions short, access tables in a consistent order, use proper indexing, and avoid unnecessary locks.
  • How do you implement distributed transactions (e.g., using the Outbox or Saga pattern)?

    • Answer: Use the Outbox pattern to store events in the same DB transaction and process asynchronously. For workflows across services, use the Saga pattern with compensating actions.
  • How do you manage transaction boundaries in microservices?

    • Answer: Each service manages its own local transaction. For consistency, use eventual consistency with domain events, sagas, or message queues instead of distributed transactions.

4. Database Migrations & Versioning

  • How do you apply migrations in EF Core?

    • Answer: Run Add-Migration to scaffold migration files and Update-Database to apply changes. At runtime, migrations can also be applied programmatically.
  • How do you handle migrations in a team environment with multiple developers?

    • Answer: Communicate schema changes, merge migration files carefully, and use version control. Sometimes re-scaffold or consolidate migrations to avoid conflicts.
  • How do you ensure backward compatibility during schema changes?

    • Answer: Use additive changes first (add new columns), avoid dropping immediately, deploy code that uses both old and new schemas, then remove deprecated parts later.
  • What’s your approach for zero-downtime database migrations?

    • Answer: Use blue-green deployments, backward-compatible changes, online schema updates, and feature flags to roll out DB updates gradually.
  • How do you manage seed data in different environments?

    • Answer: Use EF Core’s HasData() in migrations or custom seeding scripts, often controlled by environment flags so dev/test data doesn’t leak into production.

5. Caching & Performance

  • What’s the difference between in-memory cache and distributed cache?

    • Answer: In-memory cache is per application instance (fast but not shared). Distributed cache (Redis, SQL) is shared across multiple app instances, supporting scale-out scenarios.
  • How do you integrate Redis with .NET Core for caching?

    • Answer: Use Microsoft.Extensions.Caching.StackExchangeRedis package and configure it via services.AddStackExchangeRedisCache().
  • How do you implement cache expiration and cache invalidation strategies?

    • Answer: Use absolute/relative expiration, sliding expiration, or manual invalidation. For complex cases, implement cache-aside pattern.
  • When would you use output caching in APIs?

    • Answer: When responses are static or expensive to compute and can be reused safely across requests, e.g., catalog data or public content.
  • How do you handle caching for multi-tenant applications?

    • Answer: Use tenant-specific cache keys or namespaces to isolate data, and configure quotas or eviction strategies per tenant.
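
A Redis setup and cache-aside sketch, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package (key names and the loader method are illustrative):

```csharp
builder.Services.AddStackExchangeRedisCache(o =>
{
    o.Configuration = builder.Configuration.GetConnectionString("Redis");
    o.InstanceName = "shop:";   // key prefix; useful for tenant/app isolation
});

// cache-aside with expiration, given an injected IDistributedCache cache:
// var key = $"tenant:{tenantId}:catalog";
// var cached = await cache.GetStringAsync(key);
// if (cached is null)
// {
//     cached = await LoadCatalogAsync(tenantId);   // illustrative loader
//     await cache.SetStringAsync(key, cached, new DistributedCacheEntryOptions
//     {
//         AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
//     });
// }
```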

6. Advanced Persistence Patterns

  • What’s the Specification pattern and how do you use it with repositories?

    • Answer: The Specification pattern encapsulates query criteria and logic into reusable objects. Repositories accept a Specification<T> (e.g., with filters, includes, sorting), translate it to LINQ/SQL, and return results—keeping query logic out of services/controllers.
  • How do you implement soft deletes in EF Core or NHibernate?

    • Answer: Add a flag/timestamp column (e.g., IsDeleted/DeletedAt) and configure a global query filter (EF Core) or filter (NHibernate) to exclude deleted rows. Override delete operations to set the flag instead of physically removing rows; add unique indexes that account for the flag if needed.
  • How do you model complex types (value objects, JSON columns, hierarchical data)?

    • Answer: Use owned/entity types in EF Core for value objects, map JSON columns via HasConversion or providers with native JSON support, and represent hierarchies with adjacency lists (ParentId), materialized paths, or nested sets depending on read/write patterns.
  • What’s the difference between event sourcing and traditional CRUD persistence?

    • Answer: CRUD stores current state; event sourcing stores an append-only stream of domain events and rebuilds state by replaying them. Event sourcing enables auditability and temporal queries but adds complexity (projections, eventual consistency).
  • How do you design persistence for CQRS (Command Query Responsibility Segregation)?

    • Answer: Split write and read models: commands mutate the domain/write store; queries hit a read-optimized model (denormalized projections). Use events to update projections asynchronously; choose separate schemas/stores when beneficial.
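The Specification pattern described above can be sketched in a few lines. This is a simplified, in-memory version using `Func<T, bool>`; a real repository would accept `Expression<Func<T, bool>>` so the ORM can translate the criteria to SQL. The order data and criteria names are illustrative:

```csharp
using System;
using System.Linq;

// Minimal Specification-pattern sketch: reusable, composable criteria
// instead of ad-hoc Where clauses scattered across services.
Func<T, bool> And<T>(Func<T, bool> left, Func<T, bool> right) =>
    x => left(x) && right(x);

var orders = new[]
{
    (Id: 1, Total: 50m,  Shipped: true),
    (Id: 2, Total: 500m, Shipped: false),
    (Id: 3, Total: 900m, Shipped: true),
};

Func<(int Id, decimal Total, bool Shipped), bool> largeOrder = o => o.Total >= 100m;
Func<(int Id, decimal Total, bool Shipped), bool> shipped    = o => o.Shipped;

// The repository would apply the combined specification to its query source.
var result = orders.Where(And(largeOrder, shipped)).Select(o => o.Id).ToArray();
Console.WriteLine(string.Join(",", result)); // 3
```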

7. Multi-Database & Cloud Datastores

  • How do you manage connections to multiple databases in the same application?

    • Answer: Configure multiple DbContexts/connection factories, each with its own connection string and migrations. Use DI to inject the correct context per bounded context or feature.
  • Have you worked with NoSQL databases (Cosmos DB, MongoDB) alongside SQL?

    • Answer: Use SQL for transactional/relational data and NoSQL for flexible schemas, documents, or high-scale reads. Keep data ownership per service and sync via events to avoid tight coupling.
  • How do you implement sharding or partitioning in SQL databases?

    • Answer: Use range/hash shard keys with a shard map (app-level) or native partitioning features (e.g., partitioned tables). Ensure routing logic and avoid cross-shard transactions when possible.
  • What are the trade-offs between SQL and NoSQL in microservices?

    • Answer: SQL provides strong consistency and rich querying; NoSQL offers schema flexibility and horizontal scaling. In microservices, choose per service based on access patterns and consistency needs—accepting cross-store integration via events.
  • How do you ensure data consistency across heterogeneous datastores?

    • Answer: Use event-driven propagation (outbox + consumers), idempotent handlers, and eventual consistency with reconciliation jobs/compensations. Track correlation/causation IDs for auditing.
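The app-level shard-routing idea from the partitioning answer can be sketched as a stable hash over the shard key. Note that `string.GetHashCode` is randomized per process in .NET, so a deterministic hash (FNV-1a here) is used instead; the key and shard count are illustrative:

```csharp
using System;

// App-level shard routing sketch: a stable hash maps a shard key
// (e.g., a tenant id) to one of N shards.
const int ShardCount = 4;

static uint Fnv1a(string key)
{
    uint hash = 2166136261;
    foreach (char c in key) { hash ^= c; hash *= 16777619; }
    return hash;
}

static int RouteToShard(string shardKey, int shardCount) =>
    (int)(Fnv1a(shardKey) % (uint)shardCount);

int shard = RouteToShard("tenant-42", ShardCount);
Console.WriteLine($"tenant-42 -> shard {shard}");
```

Because the hash is deterministic, the same tenant always routes to the same shard across processes and deployments — the property a shard map relies on.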

8. Security & Compliance

  • How do you prevent SQL Injection in EF Core and NHibernate?

    • Answer: Always use parameterized queries/LINQ, avoid string concatenation, and validate inputs. In EF Core/NHibernate, the ORM parameterizes by default when using LINQ or safe APIs.
  • How do you secure connection strings across environments?

    • Answer: Store them outside source control—in Key Vault, environment variables, or secret managers—accessed with managed identities. Rotate regularly and enforce least privilege.
  • How do you implement row-level security in SQL Server or PostgreSQL?

    • Answer: Define RLS policies (PostgreSQL) or security predicates/filters (SQL Server) that restrict rows based on user/tenant context passed from the app. Ensure app sets the context securely (e.g., session context variables).
  • How do you handle PII (Personally Identifiable Information) in databases?

    • Answer: Minimize collection, encrypt at rest/in transit, mask in non-prod, and restrict access via roles. Consider field-level encryption/tokenization and audit access.
  • What practices do you follow for GDPR/HIPAA compliance in persistence?

    • Answer: Data minimization, consent tracking, right-to-erasure workflows, retention policies, encryption, auditing, and breach notification procedures. Separate duties and document controls for audits.

🔹 Messaging & Distributed Systems

1. Messaging Fundamentals

  • What’s the difference between queues and topics/subscriptions?

    • Answer: A queue is point-to-point: one consumer receives each message. Topics/subscriptions are pub/sub: a message is published to a topic and fanned out to one or more independent subscriptions, each with its own consumers.
  • How do you ensure message ordering in a distributed system?

    • Answer: Use broker features like sessions/partition keys (Azure Service Bus sessions, Kafka partitions) and single active consumer per partition. Keep producers single-threaded per key when ordering matters and avoid parallel competing consumers on the same ordered stream.
  • What’s the difference between push-based and pull-based messaging models?

    • Answer: Push delivers messages from the broker to consumers (e.g., RabbitMQ basic.consume with prefetch). Pull requires consumers to poll/receive from the broker (e.g., Azure Service Bus ReceiveMessageAsync; even ServiceBusProcessor pulls under the hood).
  • What’s the difference between at-most-once, at-least-once, and exactly-once delivery semantics?

    • Answer: At-most-once may drop messages but never duplicates. At-least-once never loses messages but may deliver duplicates. Exactly-once is achieved logically via idempotent handlers/outbox, since brokers typically offer at-least-once.
  • What is the purpose of a dead-letter queue (DLQ)?

    • Answer: A DLQ isolates poison or repeatedly failing messages after max retries, enabling inspection, remediation, or replay without blocking healthy traffic.
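The at-least-once semantics above are usually tamed with an idempotent handler. A minimal sketch, using an in-memory set of processed message IDs (in production this would be a durable inbox table or Redis entry with TTL):

```csharp
using System;
using System.Collections.Generic;

// At-least-once delivery: the broker may redeliver, so the handler records
// processed message IDs and skips duplicates — logically exactly-once.
var processed = new HashSet<Guid>();
int balance = 0;

void Handle(Guid messageId, int amount)
{
    if (!processed.Add(messageId)) return; // duplicate redelivery: ignore
    balance += amount;                     // side effect runs exactly once
}

var id = Guid.NewGuid();
Handle(id, 100);
Handle(id, 100); // broker redelivers the same message
Console.WriteLine(balance); // 100, not 200
```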

2. Azure Service Bus / RabbitMQ Basics

  • How do you publish and consume messages with Azure Service Bus in .NET?

    • Answer: Create a ServiceBusClient, send with ServiceBusSender.SendMessageAsync, and process with ServiceBusProcessor (handlers for ProcessMessageAsync/ProcessErrorAsync). Complete/Abandon/Defer/Dead-letter in the handler to control disposition.
  • What’s the difference between RabbitMQ exchanges (fanout, topic, direct, headers)?

    • Answer: Direct routes by exact routing key, topic routes by wildcard patterns, fanout broadcasts to all bound queues, and headers routes by message headers rather than routing keys.
  • How do you configure prefetch counts and message acknowledgements?

    • Answer: In RabbitMQ set basic.qos(prefetchCount) and use manual BasicAck/Nack. In Service Bus set PrefetchCount on the receiver/processor and complete messages with CompleteMessageAsync (or abandon/defer/dead-letter) to control retries.
  • How would you handle poison messages in Service Bus or RabbitMQ?

    • Answer: Configure max delivery count/retry then send to DLQ (Service Bus) or to a dead-letter exchange/queue (RabbitMQ). Quarantine, add diagnostics, and provide a safe replay tool after fixing the root cause.
  • How do you monitor the health of queues and subscriptions?

    • Answer: Track queue depth, oldest message age, DLQ size, throughput, and processing latency. Use Azure Monitor/Application Insights for Service Bus and the RabbitMQ Management UI/Prometheus exporters for RabbitMQ.

3. Frameworks: MassTransit & NServiceBus

  • What are the benefits of using MassTransit over raw Service Bus/RabbitMQ SDKs?

    • Answer: It provides higher-level abstractions: consumers, sagas, retries, scheduling, observability, and built-in middleware/serialization—reducing boilerplate and standardizing patterns across brokers.
  • How do you define a consumer in MassTransit?

    • Answer: Implement IConsumer<TMessage> and its Consume(ConsumeContext<TMessage>) method, then register it in the bus configuration (cfg.ReceiveEndpoint(..., e => e.Consumer<MyConsumer>())).
  • How do you configure retry policies in MassTransit or NServiceBus?

    • Answer: In MassTransit use UseMessageRetry (e.g., exponential or immediate retries). In NServiceBus configure Recoverability with Immediate and Delayed retries and error queue routing.
  • What’s the role of a saga in NServiceBus, and how does it compare to MassTransit sagas?

    • Answer: A saga coordinates long-running workflows across multiple messages/services, maintaining state and issuing compensations on failure. Both frameworks support sagas with persistence; the APIs differ but the concept (stateful, message-driven orchestration) is the same.
  • How do you implement message correlation across multiple services?

    • Answer: Include a stable CorrelationId (or saga id) in message headers and propagate it across hops; enrich logs/traces with the same id (OpenTelemetry) so end-to-end flows can be traced and de-duplication/idempotency can key off it.

4. Idempotency & Reliability

  • How do you ensure idempotent message processing?

    • Answer: By storing processed message IDs in a store/cache, using the Outbox pattern, or designing handlers so reprocessing has no side effects.
  • What is the Outbox pattern and why is it important?

    • Answer: It ensures atomicity by saving events/messages in the same DB transaction as business data, then reliably publishing them later to avoid message loss.
  • How do you implement deduplication of messages?

    • Answer: By checking a unique message ID against a persistence store (SQL/Redis) before processing, or leveraging broker features like Service Bus duplicate detection.
  • How do you ensure exactly-once processing when the broker only provides at-least-once?

    • Answer: Combine at-least-once delivery with idempotent handlers, deduplication, and Outbox/Inbox patterns to simulate exactly-once.
  • How do you use transactional message publishing with EF Core or NHibernate?

    • Answer: Write events to an Outbox table within the same transaction as entity changes; a background job then publishes them to the message bus.
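The Outbox flow described in this subsection can be sketched with in-memory stand-ins. Here the "transaction" is simulated by two adjacent writes and a separate dispatcher drains the outbox; in production the outbox is a table committed in the same DB transaction as the entity change, and the dispatcher marks rows as sent:

```csharp
using System;
using System.Collections.Generic;

// Outbox sketch: business state and outgoing events are written together;
// a background dispatcher later publishes them to the broker.
var orders = new List<string>();
var outbox = new Queue<string>();   // in production: a table in the same DB
var published = new List<string>(); // stands in for the message broker

void PlaceOrder(string orderId)
{
    // Same transaction in a real system: both writes commit or neither does.
    orders.Add(orderId);
    outbox.Enqueue($"OrderPlaced:{orderId}");
}

void DispatchOutbox()
{
    while (outbox.Count > 0)
        published.Add(outbox.Dequeue()); // e.g., send to Service Bus, then mark sent
}

PlaceOrder("A-1");
DispatchOutbox();
Console.WriteLine(string.Join(",", published)); // OrderPlaced:A-1
```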

5. Distributed Transactions & Sagas

  • What is the Saga pattern and when should it be used?

    • Answer: A saga is a sequence of local transactions coordinated via events or messages, used when distributed ACID transactions aren’t feasible.
  • How do you model long-running workflows using sagas?

    • Answer: Store saga state in durable storage, correlate messages by ID, and progress through states with timeouts, retries, or compensations.
  • What’s the difference between orchestration and choreography in sagas?

    • Answer: Orchestration uses a central coordinator; choreography lets services react to events independently.
  • How would you implement a compensating transaction in a distributed system?

    • Answer: Define explicit rollback steps (e.g., refund payment if booking fails) triggered when a later step fails.
  • How do you debug or test a saga flow across multiple services?

    • Answer: Use correlation IDs, structured logging, distributed tracing (OpenTelemetry), and integration tests with test harnesses.
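The compensating-transaction idea above can be sketched as an orchestrated saga: each successful step registers an undo action, and on failure the completed steps are compensated in reverse order. Step names are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Orchestrated saga sketch: run steps, stack compensations, unwind on failure.
var log = new List<string>();
var compensations = new Stack<Action>();

bool RunStep(string name, bool succeeds, Action compensate)
{
    if (!succeeds) { log.Add($"{name}:FAILED"); return false; }
    log.Add($"{name}:OK");
    compensations.Push(compensate);
    return true;
}

bool ok = RunStep("ReserveStock", true,  () => log.Add("ReleaseStock"))
       && RunStep("ChargeCard",   true,  () => log.Add("RefundPayment"))
       && RunStep("BookShipping", false, () => log.Add("CancelShipping"));

if (!ok)
    while (compensations.Count > 0) compensations.Pop()(); // undo in reverse

Console.WriteLine(string.Join(" | ", log));
// ReserveStock:OK | ChargeCard:OK | BookShipping:FAILED | RefundPayment | ReleaseStock
```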

6. Event-Driven Architecture (EDA)

  • What’s the difference between commands, events, and queries?

    • Answer: Commands request an action, events announce something happened, queries retrieve data.
  • How do you decide whether to use synchronous RPC vs. event-driven messaging?

    • Answer: Use synchronous RPC for immediate responses; use events when decoupling, scalability, or eventual consistency is acceptable.
  • How do you design an event bus in microservices?

    • Answer: Abstract the broker (Service Bus, RabbitMQ), standardize message contracts, enforce correlation/trace IDs, and provide pub/sub infrastructure.
  • What are the advantages and risks of eventual consistency?

    • Answer: Advantages: scalability and decoupling. Risks: temporary data inconsistencies, harder debugging, and the need for retries/compensations.
  • How do you evolve event schemas without breaking consumers?

    • Answer: Use versioning, backward-compatible changes (additive fields), or schema registries. Avoid breaking changes; deprecate old events gradually.
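The additive-evolution rule above depends on tolerant consumers: ignore unknown fields, default missing ones. A minimal sketch with System.Text.Json, where the event payload and the "currency" field added in v2 are illustrative:

```csharp
using System;
using System.Text.Json;

// A v2 producer added "currency"; a v1-era consumer still works because it
// ignores unknown fields and probes for optional ones instead of assuming them.
string v2Event = "{\"orderId\":\"A-1\",\"total\":99.5,\"currency\":\"EUR\"}";

var doc = JsonDocument.Parse(v2Event);
string orderId = doc.RootElement.GetProperty("orderId").GetString()!;
decimal total = doc.RootElement.GetProperty("total").GetDecimal();

// Missing optional field: probe and default rather than throw.
string currency = doc.RootElement.TryGetProperty("currency", out var c)
    ? c.GetString()! : "USD"; // default for events from old producers

Console.WriteLine($"{orderId} {total} {currency}");
```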

7. Monitoring & Observability

  • How do you trace a message across multiple services (distributed tracing)?

    • Answer: Propagate a trace/context ID (W3C traceparent/baggage, CorrelationId) on every message. Instrument producers/consumers with OpenTelemetry so each span is linked, then visualize end-to-end flows in Jaeger/Zipkin/Application Insights/Grafana Tempo.
  • Which tools have you used for monitoring (Application Insights, OpenTelemetry, Prometheus, Grafana)?

    • Answer: OpenTelemetry for vendor-neutral instrumentation; Application Insights for Azure apps (traces, deps, live metrics); Prometheus to scrape metrics; Grafana to build dashboards/alerts across those data sources.
  • How do you measure queue depth, throughput, and latency?

    • Answer: Use broker metrics (ASB/RabbitMQ) and custom counters: queue length/oldest message age for backlog, msg/sec for throughput, and produce→consume time or enqueue→complete time for latency. Export via Prometheus/Azure Monitor and dashboard in Grafana/Application Insights.
  • How do you detect message loss or duplication in production?

    • Answer: Track monotonic sequence/offsets per stream and alert on gaps or regressions; maintain idempotency/dedup stores and monitor hit rates; compare produced vs consumed counts and watch DLQ spikes.
  • How would you implement alerts for messaging failures?

    • Answer: Set SLO-based alerts on DLQ size, error/exception rate, retry bursts, processing latency, and consumer liveness. Page on critical breaches; create ticketing/Slack notifications for warning thresholds.
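The gap/duplicate detection mentioned above can be sketched as a per-stream monotonic sequence check — a gap means lost messages, a repeat means a duplicate delivery:

```csharp
using System;
using System.Collections.Generic;

// Loss/duplicate detection sketch: track the next expected sequence number
// per stream and raise alerts on gaps or regressions.
long expected = 1;
var alerts = new List<string>();

void Observe(long seq)
{
    if (seq == expected) { expected++; return; }
    if (seq < expected) { alerts.Add($"duplicate:{seq}"); return; }
    alerts.Add($"gap:{expected}-{seq - 1}"); // messages expected..seq-1 missing
    expected = seq + 1;
}

foreach (long seq in new long[] { 1, 2, 4, 4 }) Observe(seq);
Console.WriteLine(string.Join(",", alerts)); // gap:3-3,duplicate:4
```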

8. Performance & Scaling

  • How do you scale message consumers horizontally?

    • Answer: Run multiple consumer instances (pods/VMs) pointing at the same queue/partition set; use competing consumers for parallelism and auto-scalers (KEDA/HPA) based on backlog/lag.
  • What are the trade-offs between competing consumers vs. partitioned consumers?

    • Answer: Competing consumers maximize throughput but don’t preserve per-key order. Partitioned/hashed consumers preserve ordering and data locality per key but require good partitioning and careful hot-key management.
  • How do you handle backpressure in a high-throughput system?

    • Answer: Limit concurrency/prefetch, use bounded channels/queues, apply rate limiting and circuit breakers, and shed non-critical work. Scale out consumers or shard hot partitions.
  • How do you optimize batch processing of messages?

    • Answer: Use broker/SDK batch APIs, vectorized I/O (bulk DB writes), and idempotent batch semantics. Tune batch size by measuring latency vs. throughput; avoid oversized batches that increase reprocessing costs.
  • How would you tune prefetch settings in RabbitMQ or Service Bus?

    • Answer: Start low and increase prefetch/PrefetchCount until CPU/latency plateaus; balance in-flight visibility with fairness across consumers; reduce prefetch for strict ordering or slow handlers.
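The bounded-queue backpressure mentioned in this subsection maps directly onto System.Threading.Channels: a bounded channel makes a fast producer wait (or drop) when consumers fall behind, instead of growing memory without limit. A minimal sketch with illustrative capacity and workload:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Bounded channel: at most 2 items in flight; WriteAsync awaits when full.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(2)
{
    FullMode = BoundedChannelFullMode.Wait // producer blocks instead of OOM
});

var consumer = Task.Run(async () =>
{
    int sum = 0;
    await foreach (int item in channel.Reader.ReadAllAsync())
        sum += item; // slow handler work would go here
    return sum;
});

for (int i = 1; i <= 5; i++)
    await channel.Writer.WriteAsync(i); // backpressure applied here when full
channel.Writer.Complete();

Console.WriteLine(await consumer); // 15
```

Other FullMode values (DropWrite, DropOldest) implement load shedding instead of blocking, which suits non-critical telemetry.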

9. Security & Compliance

  • How do you secure Service Bus or RabbitMQ connections?

    • Answer: Enforce TLS in transit, use least-privilege credentials, rotate secrets, and restrict network access (VNet/Private Link, firewalls, VPN). Prefer managed identity over shared keys where available.
  • What’s the difference between SAS tokens, Managed Identities, and certificates in Azure Service Bus?

    • Answer: SAS uses shared keys to mint scoped tokens; Managed Identities use Azure AD for keyless, auto-rotated auth from Azure resources; certificates (with AAD apps) provide mutual auth scenarios and stronger credential governance.
  • How do you implement message encryption (in transit and at rest)?

    • Answer: Use TLS for transport and broker encryption at rest; optionally apply application-level encryption (envelope encryption with Key Vault–managed keys) for sensitive payload fields.
  • How do you enforce authorization and RBAC on message consumers?

    • Answer: Assign scoped roles/policies (Azure RBAC/SAS rights, RabbitMQ users/vhosts/permissions) per queue/topic. Validate claims/tenant in message headers at the application layer before processing.
  • How do you handle PII and GDPR compliance in event payloads?

    • Answer: Minimize PII in events, use pseudonymization/tokenization, encrypt sensitive fields, and set retention/TTL. Provide erasure hooks (delete/redact projections) and audit access to PII fields.

🔹 Service Models & Communication

1. API Paradigms & Protocols

  • Compare REST, gRPC, and GraphQL: when to choose each? Trade-offs for mobile/web/internal services.

    • Answer: REST is ubiquitous, cache-friendly, and great for public/web/mobile APIs; it’s simple but can over/under-fetch. gRPC is binary, strongly typed, and fast with streaming—ideal for internal service-to-service calls, but browser support needs gRPC-Web. GraphQL lets clients shape responses (great for mobile bandwidth and aggregating multiple resources) but adds server complexity and requires careful caching/authorization.
  • Wire formats: JSON vs Protobuf vs Avro — size, speed, schema evolution.

    • Answer: JSON is human-readable and widely supported but larger/slower. Protobuf is compact and very fast (great with gRPC) but requires .proto schemas and codegen. Avro is also compact, emphasizes schema evolution with writer/reader schemas (popular with Kafka + schema registries) and is strong for long-term compatibility.
  • Streaming models: unary, server/client streaming, bidi streaming — use cases.

    • Answer: Unary is request/response. Server streaming suits progress feeds or long result sets. Client streaming batches uploads/telemetry from client to server. Bidirectional streaming enables real-time duplex flows like chat, live dashboards, or IoT control loops.
  • Versioning philosophies across paradigms (URL/headers for REST, proto evolution for gRPC, schema evolution for GraphQL).

    • Answer: REST typically versions via URL (/v2) or headers/media types; aim for backward-compatible changes. gRPC avoids endpoint versioning—evolve proto by adding new fields, never reusing tags, and reserving removed ones. GraphQL prefers a single endpoint with non-breaking schema evolution (add fields/types), deprecate rather than remove, and let clients migrate gradually.

2. REST API Design

  • Resource modeling, HTTP verbs, status codes, and idempotency (PUT vs POST vs PATCH).

    • Answer: Model nouns (/orders) and use verbs correctly: GET (safe/idempotent), POST (create, not idempotent), PUT (full replace, idempotent), PATCH (partial update). Return meaningful status codes (2xx/4xx/5xx) and use idempotency keys on unsafe operations when needed.
  • Pagination & filtering patterns (cursor vs offset, RFC-5988 links).

    • Answer: Offset is simple but unstable on changing datasets. Cursor/continuation is more reliable and scalable. Include pagination metadata and Link headers (RFC 8288, which obsoletes RFC 5988) and support filters/sorts via query parameters.
  • HATEOAS: do you use it? Pros/cons in real systems.

    • Answer: Pros: discoverability, decoupled clients. Cons: added payload complexity and limited client adoption. Many teams adopt a pragmatic approach—basic links where helpful, not full HATEOAS.
  • ETag/If-Match for concurrency & caching.

    • Answer: Use ETag to represent a resource version. If-Match implements optimistic concurrency (update only if tag matches), while If-None-Match helps cache validation to avoid transferring unchanged content.
  • Standardized errors (RFC 7807 ProblemDetails) and correlation IDs.

    • Answer: Return ProblemDetails with type, title, status, and traceId for consistency. Propagate a correlation/trace ID across services to tie logs and traces to each request.
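The ETag/If-Match mechanics from this subsection can be sketched as an optimistic-concurrency check. The handler shape and status tuples are illustrative; a real API would compare the `If-Match` request header against the stored entity version:

```csharp
using System;

// ETag/If-Match sketch: an update applies only if the client's tag still
// matches the current resource state; otherwise 412 Precondition Failed.
string resource = "v1-content";

string ETagOf(string body) => $"\"{(uint)body.GetHashCode():x8}\""; // stable within one process

(int Status, string Body) Put(string ifMatch, string newBody)
{
    if (ifMatch != ETagOf(resource)) return (412, resource); // stale tag: reject
    resource = newBody;
    return (200, resource);
}

string tag = ETagOf(resource);       // client GETs and remembers the ETag
var first = Put(tag, "v2-content");  // first writer wins -> 200
var second = Put(tag, "v2-other");   // second writer's tag is now stale -> 412
Console.WriteLine($"{first.Status} then {second.Status}"); // 200 then 412
```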

3. gRPC Services

  • Designing proto contracts and managing breaking vs non-breaking changes.

    • Answer: Treat field numbers as API contracts: add new fields with new tags, don’t reuse old tags, and mark removed fields as reserved. Favor additive changes; avoid changing types/semantics to keep clients compatible.
  • Deadlines/timeouts, metadata/headers, and status code mapping to HTTP.

    • Answer: Set deadlines/timeouts per call to bound resource usage and support cancellations. Use metadata/headers for auth/correlation and trailers for additional status. gRPC maps to HTTP/2 with its own status codes; in gRPC-Web, errors map to HTTP with gRPC error details in headers/trailers.
  • Streaming patterns in practice (progress, large payloads).

    • Answer: Use server streaming for progress and long-running results, client streaming to upload batches/chunks, and bidi for interactive sessions (chat, live analytics). Always design flow control and backpressure handling.
  • gRPC-Web for browsers; when to expose REST alongside gRPC.

    • Answer: Browsers need gRPC-Web (or a proxy) since native HTTP/2 semantics aren’t fully exposed. Expose REST alongside gRPC when you need broad client compatibility, CDN caching, or simple public integration.
  • Load balancing for gRPC (client-side vs server-side; probes/health).

    • Answer: Prefer client-side LB with service discovery for sticky/streaming workloads; server-side LB (ingress/proxy) is simpler but can break long streams if not tuned. Implement health checks (gRPC Health Checking Protocol) and keepalive to detect broken connections.

4. GraphQL APIs

  • Schema design: queries, mutations, subscriptions; avoiding the N+1 problem (dataloaders).

    • Answer: Model queries for reads, mutations for writes, and subscriptions for real-time events. Prevent N+1 by batching/caching child fetches with DataLoader (keyed lookups per request) and projecting only needed fields. Keep resolvers thin and delegate to services that support efficient set-based queries.
  • Authorization strategies (field-level vs resolver-level).

    • Answer: Resolver-level checks are the most precise—authorize inside each resolver using user claims/roles/tenancy. Field-level directives (e.g., @auth) centralize policy but still evaluate per field. Prefer coarse checks at type/operation level plus fine-grained checks where data sensitivity varies.
  • Caching in GraphQL (persisted queries, CDN considerations).

    • Answer: Use persisted queries (hashed) to enable CDN/proxy caching and reduce payload size. Cache at the client (normalized store) and edge for idempotent GET queries; server-side cache common resolvers. Be careful: response varies by args and auth, so include them in cache keys.
  • Federation (e.g., Apollo) vs single gateway; schema stitching.

    • Answer: Federation composes multiple subgraphs with ownership keys and requires capable gateways—great for autonomous teams. A single gateway is simpler but can become a bottleneck. Schema stitching is lightweight composition but lacks the ownership semantics and tooling maturity of federation.
  • Versioning without versions — deprecation & evolution strategies.

    • Answer: Prefer additive changes (new fields/types) and mark removals with @deprecated and clear sunset dates. Avoid breaking changes; when unavoidable, publish parallel fields/types and migrate clients gradually before removal.

5. Real-Time Communication (SignalR)

  • Transports: WebSockets, SSE, long-polling — fallback strategy and detection.

    • Answer: Attempt WebSockets first, then fall back to SSE and long-polling based on server/client/network capabilities. SignalR’s negotiate endpoint handles detection; you can force/disable transports per environment or proxy constraints.
  • Scaling SignalR with a backplane (Redis/Azure SignalR), message ordering, and delivery guarantees.

    • Answer: Use Redis backplane or Azure SignalR Service to fan-out across instances. Ordering is best-effort—don’t assume strict order across nodes. Delivery is at-most-once; add app-level acks/retries if you need stronger guarantees.
  • Hub design, groups, authorization, reconnect & backoff policies.

    • Answer: Keep hubs thin; push domain logic into services. Use groups for targeted broadcasts and enforce [Authorize]/policies per hub or method. Implement exponential backoff on reconnect and handle OnConnectedAsync/OnDisconnectedAsync (plus client reconnect events) for state repair.
  • Flow control & throttling for noisy clients; handling binary payloads.

    • Answer: Apply server and per-connection rate limits, message size caps, and queue bounds; drop messages from or apply backpressure to noisy clients. Use MessagePack or raw binary for large/structured payloads to reduce overhead.
  • Observability for hubs (per-client metrics, slow consumer detection).

    • Answer: Emit metrics for connections, send/receive rate, queue length, errors, reconnections. Detect slow consumers by monitoring send backlog/latency per connection and take action (drop, downgrade frequency).
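The reconnect-with-backoff policy above can be sketched by computing the delay schedule: exponential growth with random jitter and a cap, so thousands of clients don't reconnect in lockstep after an outage. The constants are illustrative; the SignalR client accepts such delays via `WithAutomaticReconnect` or a custom `IRetryPolicy`:

```csharp
using System;
using System.Linq;

// Exponential backoff with jitter: base delay doubles per attempt, capped,
// plus up to 20% random jitter to spread reconnect storms.
var random = new Random();

TimeSpan NextDelay(int attempt)
{
    double baseSeconds = Math.Min(Math.Pow(2, attempt), 30); // cap at 30s
    double jitter = random.NextDouble() * baseSeconds * 0.2; // up to +20%
    return TimeSpan.FromSeconds(baseSeconds + jitter);
}

var delays = Enumerable.Range(0, 5).Select(NextDelay).ToArray();
Console.WriteLine(string.Join(", ", delays.Select(d => $"{d.TotalSeconds:0.0}s")));
```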

6. Inter-Service Communication Patterns

  • Choosing sync (REST/gRPC) vs async (events/queues); latency budgets & coupling.

    • Answer: Use sync when the caller needs an immediate answer within a tight SLO and strong consistency; it creates temporal coupling. Use async for decoupling, burst absorption, and workflow orchestration where eventual consistency is acceptable.
  • BFF (Backend-for-Frontend) pattern — when and how to apply.

    • Answer: Use a BFF to tailor APIs per client (web/mobile) and hide microservice topology. Implement thin orchestration, caching, and auth contextualization; avoid embedding heavy business logic to keep BFFs replaceable.
  • Choreography vs orchestration; where to place business workflow logic.

    • Answer: Choreography: services react to events—simple and decoupled but harder to visualize/control. Orchestration: a central saga/orchestrator drives steps—more control/observability. Place workflow logic near the process owner service or a dedicated orchestration service.
  • Handling backpressure, retries, and timeouts across service boundaries.

    • Answer: Set timeouts per call, use exponential backoff retries with jitter, and circuit breakers to shed load. Apply bounded queues/concurrency and communicate backpressure via HTTP 429/Retry-After or queue depth–based scaling.
  • Designing contracts for idempotency keys and request deduplication.

    • Answer: Accept an Idempotency-Key (header/body) and store request hash/outcome keyed by it; on duplicates, return the original result. Include request IDs and deterministic business keys to support deduplication and safe retries end-to-end.

7. API Gateways & Edge

  • Selecting YARP, Ocelot, Azure API Management — criteria and typical use cases.

    • Answer: YARP is a high-performance .NET reverse proxy for custom gateways where you own code/policies. Ocelot is config-driven for simple microservice routing on .NET without heavy platform features. Azure API Management (APIM) is fully managed with security, rate limiting, analytics, and a developer portal—best for external/public APIs and governance.
  • Policies: rate limiting, JWT validation, request/response transforms, header/URL rewrite, compression.

    • Answer: In APIM, apply built-in policies (validate JWT, set rate/quotas, rewrite, cache, compress). In YARP/Ocelot, configure transforms (headers/path), integrate Polly for resilience, and add custom middleware for auth/quotas. Keep auth at the edge; pass only needed claims downstream.
  • Canary & blue/green via gateway routing; circuit breakers at the edge.

    • Answer: Use weighted/predicate routing (percent rollouts, header/cookie selectors) for canary; route by version/environment for blue/green. Apply circuit breakers and timeouts/retries at the gateway to protect backends and fail fast.
  • Developer portal, subscription keys, quotas and analytics.

    • Answer: APIM provides a portal for onboarding, subscription keys, per-product quotas/rate limits, and rich analytics. For YARP/Ocelot, pair with external identity/keys and observability stacks to approximate these capabilities.
  • Multi-tenant concerns at the edge (routing, per-tenant throttles).

    • Answer: Route by host/tenant ID and enforce tenant-scoped rate limits/quotas and claim-based authorization. Ensure per-tenant API keys/scopes, isolate analytics, and prevent data leakage via strict header/claim sanitation.

8. Service Discovery & Load Balancing

  • Kubernetes service discovery (ClusterIP/Headless/Ingress), DNS-based discovery.

    • Answer: ClusterIP exposes a stable virtual IP for in-cluster discovery via DNS. Headless (clusterIP: None) returns pod IPs for client-side balancing. Ingress exposes HTTP/S externally via controllers (NGINX, AGIC). DNS (service.namespace.svc.cluster.local) provides the names.
  • Client-side vs server-side load balancing; sticky sessions vs stateless design.

    • Answer: Client-side LB (service discovery + client picker) improves locality/streams (e.g., gRPC). Server-side LB (ingress/proxy) is simpler to operate. Prefer stateless services; use sticky sessions only when required (or externalize session state).
  • Health checks: liveness/readiness/startup and graceful shutdown.

    • Answer: Liveness restarts unhealthy pods; readiness gates traffic until ready; startup delays other probes on slow starts. For graceful shutdown, handle SIGTERM, set readiness=false, drain connections, then exit.
  • Consistent hashing, partitioning/sharding strategies for hot keys.

    • Answer: Use consistent hashing to route the same key to the same node for cache/state locality. Detect hot keys and mitigate by increasing partitions, adding salting/sub-keys, or pinning to dedicated shards.
  • Mesh-level discovery (Istio/Linkerd) and traffic shifting.

    • Answer: A service mesh provides discovery, mTLS, retries, and traffic shifting (canary, A/B) via policies (VirtualService/DestinationRule). It centralizes cross-cutting concerns without code changes.
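The consistent-hashing strategy from this subsection can be sketched as a hash ring with virtual replicas per node, so adding or removing one node only remaps a small fraction of keys. Node names and the replica count are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Consistent-hashing ring: a key routes to the first node clockwise from
// its hash; virtual replicas smooth out the key distribution.
var ring = new SortedDictionary<uint, string>();

static uint Hash(string s)
{
    uint h = 2166136261;                      // FNV-1a: stable across processes
    foreach (char c in s) { h ^= c; h *= 16777619; }
    return h;
}

void AddNode(string node, int replicas = 100)
{
    for (int i = 0; i < replicas; i++) ring[Hash($"{node}#{i}")] = node;
}

string NodeFor(string key)
{
    uint h = Hash(key);
    foreach (var (point, node) in ring)
        if (point >= h) return node;          // first node clockwise
    return ring.First().Value;                // wrap around the ring
}

AddNode("cache-a"); AddNode("cache-b"); AddNode("cache-c");
Console.WriteLine(NodeFor("session:42"));
```

Hot keys can still overload one node; the salting/sub-key mitigation above splits such a key across several ring positions.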

9. Security & Compliance

  • OAuth2/OIDC flows for SPAs, mobile, and service-to-service; JWT validation, scopes & claims.

    • Answer: Use Authorization Code + PKCE for SPAs/mobile; Client Credentials for service-to-service. Validate JWT (signature, issuer, audience, exp/nbf), then enforce scopes/claims for fine-grained authorization.
  • mTLS between services; certificate rotation and zero-trust posture.

    • Answer: Enforce mTLS for service identity and encryption in mesh/ingress; automate cert issuance/rotation (Key Vault/CSI or mesh CA). Follow zero-trust: authenticate/authorize every call, least privilege, and segment networks.
  • CORS, CSRF, input validation, and request size limits at the edge.

    • Answer: Configure CORS allowlists, enforce CSRF tokens for cookie-based sessions, validate/normalize inputs, and set max body size/timeouts at the gateway to block abuse early.
  • PII minimization, payload encryption, and data masking in logs.

    • Answer: Collect minimal PII, encrypt sensitive fields in transit and at rest, and mask/redact secrets/PII in logs/telemetry. Separate access roles and maintain audit trails.
  • Threat modeling for APIs (replay, downgrade, injection) and WAF integration.

    • Answer: Perform STRIDE/attack-tree reviews; protect against replay (nonces, exp), downgrade (force TLS versions), and injection (validate/encode). Place a WAF in front to block OWASP Top-10 patterns and bots.

10. Observability, Contracts & Governance

  • OpenAPI/Swagger & Scalar for REST; gRPC reflection; GraphQL schema docs.

    • Answer: Publish OpenAPI and use Swagger/Scalar for interactive docs. Enable gRPC reflection for tools (grpcurl) and provide GraphQL SDL docs/playgrounds with schema descriptions and directives.
  • Contract testing (Pact, protobuf/gql schema checks) in CI.

    • Answer: Use Pact for consumer-driven REST contract tests; run protobuf/GraphQL compatibility checks to prevent breaking changes. Gate merges/releases on contract tests in CI.
  • OpenTelemetry for traces/metrics/logs; propagating correlation IDs across hops.

    • Answer: Instrument services with OTel and propagate W3C traceparent/baggage so logs, metrics, and traces correlate end-to-end. Export to Jaeger/Zipkin/Tempo or Application Insights/Grafana.
  • API SLOs (availability, latency, error rate) and alerting on error budgets.

    • Answer: Define SLOs (e.g., p95 latency, availability) and monitor error budget burn rates; alert on fast/slow burns to prioritize reliability work before features.
  • Deprecation policy, backward compatibility, and sunset headers.

    • Answer: Prefer backward-compatible changes; announce deprecations with docs, Deprecation/Sunset headers, and timelines. Maintain old behavior until consumers migrate; track usage and remove safely.

11. Performance & Resilience

  • Timeouts, retries, circuit breakers, bulkheads, and fallbacks (Polly patterns).

    • Answer: Timeouts bound how long you wait; retries (with exponential backoff + jitter) handle transient faults; circuit breakers stop calling an unhealthy dependency to let it recover. Bulkheads isolate resources (pools/queues) to prevent cascade failure, and fallbacks return cached/default results or alternate paths when all else fails.
  • Gateway and client-side caching; CDN considerations for APIs.

    • Answer: Set cache policy with Cache-Control and ETag, and avoid caching error responses (e.g., ProblemDetails bodies); cache idempotent GETs at the gateway/CDN and leverage client-side normalized stores. For CDN, use persisted queries/keys, vary on auth/tenant, and avoid caching personalized data unless token-bound.
  • Payload tuning (compression, pagination windowing, gRPC message size).

    • Answer: Enable gzip/br (HTTP) and per-call compression (gRPC); paginate with cursor/continuation tokens rather than large offsets. Cap gRPC message size, prefer server streaming for large result sets, and project only needed fields.
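The cursor-token idea above can be sketched in-memory (the shapes and names here are illustrative, not a specific API; a real cursor would be an opaque, signed token over a stable sort key):

```typescript
// Cursor-style pagination: the cursor encodes the last-seen key, so the
// next page resumes strictly after it — no offset drift on inserts.
interface Page<T> {
  items: T[];
  nextCursor: string | null;
}

function pageAfter<T extends { id: number }>(
  all: T[],              // assumed sorted by id ascending
  size: number,
  cursor: string | null, // opaque token: here, just the last id as a string
): Page<T> {
  const afterId = cursor === null ? -Infinity : Number(cursor);
  const items = all.filter((x) => x.id > afterId).slice(0, size);
  const last = items[items.length - 1];
  // Only hand out a cursor when a full page came back (more rows may remain).
  const nextCursor = items.length === size && last ? String(last.id) : null;
  return { items, nextCursor };
}
```

Unlike `OFFSET`-based paging, each page is resolved by a range predicate on the sort key, which stays index-friendly at any depth.
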
  • Defensive limits: concurrent request caps, queue length, and token buckets.

    • Answer: Use semaphore caps per dependency, bound in-flight queue length, and enforce token-/leaky-bucket rate limits. Return 429 + Retry-After on overload and shed non-critical traffic first.
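The token-bucket limit mentioned above can be sketched as a small class (illustrative names, not a specific library; the clock is injectable so refill behavior is deterministic under test):

```typescript
// Minimal token-bucket limiter sketch.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // burst size
    private refillPerSec: number, // steady-state admission rate
    private now: () => number = () => Date.now(), // injectable clock
  ) {
    this.tokens = capacity;
    this.lastRefill = now();
  }

  tryTake(): boolean {
    const t = this.now();
    // Refill proportionally to elapsed time, never above capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.lastRefill) / 1000) * this.refillPerSec,
    );
    this.lastRefill = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request admitted
    }
    return false;  // over limit: caller should return 429 + Retry-After
  }
}
```

Capacity controls how big a burst is tolerated; the refill rate is the sustained throughput ceiling.
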
  • Load testing strategies and capacity planning for spikes.

    • Answer: Run baseline, stress, spike, and soak tests (k6/JMeter), track p95/p99 latency and error budgets, and size autoscaling from backlog/CPU/latency signals. Validate warmup, cache priming, and failover drills to ensure headroom for peak events.

12. Advanced Topics

  • Service Mesh features (Istio): mTLS by default, traffic mirroring, retries at mesh layer.

    • Answer: A mesh adds mTLS identity/encryption, traffic policies (retries, timeouts, outlier detection), and traffic mirroring for safe canaries. It centralizes these via config (VirtualService/DestinationRule) without code changes.
  • Dapr building blocks for pub/sub, bindings, service invocation.

    • Answer: Dapr provides sidecar APIs for service invocation, pub/sub, bindings, state stores, secrets, and workflows, abstracting vendors (Redis, Service Bus, Kafka). Apps call localhost HTTP/gRPC and swap components via config.
  • Bridging paradigms: exposing REST for external consumers while using gRPC internally.

    • Answer: Terminate external REST at a gateway/BFF and transcode to internal gRPC (e.g., Envoy grpc-json, APIM/YARP adapters). This keeps public compatibility, caching, and docs while preserving internal performance and strong typing.
  • Edge compute concerns (Cloudflare/Azure Front Door): auth at edge, geo routing.

    • Answer: Validate JWT/OAuth and enforce rate limits/WAF at the edge; use geo/latency routing and rules engines for locality and compliance. Edge workers can precompute, normalize headers, or short-circuit abuse before origin.
  • Multi-region routing, latency-aware load balancing, and failover plans.

    • Answer: Use latency/priority routing (Front Door/Traffic Manager), health probes, and active-active where possible with data replication and conflict resolution. Keep DNS TTL low, document runbooks, and test with chaos/failover exercises.

🔹 Frontend (Angular / Blazor / JS)

1. Angular Fundamentals

  • What are components, modules, and services in Angular, and how do they interact?

    • Answer: Components render UI and handle view logic. Services hold reusable, stateful/stateless business logic and are injected into components via DI. Modules (NgModules) group components/services/pipes and define compilation and dependency boundaries for an app or feature.
  • What’s the role of NgModules in structuring Angular apps?

    • Answer: NgModules organize features into cohesive units (e.g., AppModule, feature/lazy modules, shared modules). They control declarations, imports/exports, and providers, enabling lazy loading and clear dependency boundaries.
  • What’s the difference between template-driven and reactive forms? Which do you prefer and why?

    • Answer: Template-driven forms are declarative, simpler, and use directives in templates; great for basic forms. Reactive forms are code-driven, strongly typed, and easier to test/validate dynamically; preferred for complex, scalable forms and granular control.
  • How does change detection work in Angular?

    • Answer: Angular checks component trees for data changes and updates the DOM; by default it’s triggered by Zone.js on async tasks. Performance can be improved with ChangeDetectionStrategy.OnPush, immutability, and pushing updates via Observables/Signals.
  • What’s the purpose of zone.js in Angular?

    • Answer: zone.js patches async APIs (timers, XHR, promises) to know when to run change detection automatically. It removes the need to manually trigger updates after most async operations.

2. TypeScript & JavaScript

  • What features of TypeScript do you find most useful for large Angular apps?

    • Answer: Static typing, interfaces/generics, enums, and strict mode catch errors early and improve tooling (intellisense, refactors). Decorators and metadata integrate well with Angular’s DI and component patterns.
  • Can you explain the difference between interface and type in TypeScript?

    • Answer: Both define shapes, but interfaces are primarily for object contracts and support declaration merging/extension. Type aliases can represent unions, primitives, mapped/conditional types—more flexible for complex compositions.
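The two differences called out above (declaration merging vs. unions) can be shown concretely; the names here are illustrative:

```typescript
// Interfaces merge: both declarations contribute to a single shape.
interface Config { host: string }
interface Config { port: number }
const cfg: Config = { host: "localhost", port: 8080 };

// Type aliases can name unions, which interfaces cannot express.
type Result =
  | { kind: "ok"; value: number }
  | { kind: "error"; message: string };

function show(r: Result): string {
  // Narrowing on the discriminant gives type-safe branching.
  return r.kind === "ok" ? `ok: ${r.value}` : `error: ${r.message}`;
}
```

Declaration merging is what lets libraries augment each other's interfaces; union-heavy modeling is where aliases shine.
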
  • What are generics in TypeScript and when would you use them?

    • Answer: Generics parameterize types to create reusable, type-safe APIs (e.g., Observable<T>, repository patterns). Use them when a function/class works across multiple types while preserving type information.
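A minimal sketch of both uses mentioned above (a generic function and a repository-style class; the names and API shape are hypothetical):

```typescript
// Generic function: the element type flows through to the return type.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

// Tiny generic in-memory repository: T must carry a numeric id.
class Repository<T extends { id: number }> {
  private store = new Map<number, T>();
  save(item: T): void { this.store.set(item.id, item); }
  find(id: number): T | undefined { return this.store.get(id); }
}
```

Callers get full type information back: `repo.find(1)` is typed as the concrete entity, not `any`.
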
  • How do you handle async/await and Promises in frontend code?

    • Answer: Use async/await for readability around Promise-based APIs and wrap HTTP calls with proper try/catch and finally. In Angular, prefer RxJS Observables for streams/cancellation; convert to Promises only when necessary.
  • What are decorators in Angular, and how do they differ from standard TypeScript decorators?

    • Answer: Angular uses decorators like @Component, @Injectable, and @NgModule to attach metadata for compilation and DI. They build on TypeScript decorators but are Angular-specific and interpreted by the Angular compiler/runtime to wire components, modules, and services together.

3. State Management

  • How would you manage state in a medium-to-large Angular app?

    • Answer: Split state into UI/local component state and global app state. Use services + Observables/Signals for local or feature-scoped state, and a centralized store (e.g., NgRx/NGXS/Akita) for shared, cross-cutting state with time-travel, debugging, and persistence.
  • What’s your experience with NgRx or other state management libraries?

    • Answer: NgRx provides a Redux-style store with actions, reducers, selectors, and effects; it shines for complex flows, caching, and auditability. Alternatives (NGXS, Akita) reduce boilerplate; choice depends on team familiarity and complexity.
  • How do you decide between a service with BehaviorSubject vs. NgRx?

    • Answer: Use BehaviorSubject in a service for simple, isolated state or a single feature. Choose NgRx when you need traceable actions, effects for side effects, entity adapters, DevTools, and predictable scaling as complexity grows.
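The core BehaviorSubject behavior relied on here — new subscribers immediately get the current value, then every update — can be sketched as a tiny stand-in (this is an illustration of the pattern, not RxJS's real API surface):

```typescript
type Observer<T> = (value: T) => void;

// Minimal BehaviorSubject-style holder for service-based state.
class SimpleBehaviorSubject<T> {
  private observers = new Set<Observer<T>>();
  constructor(private current: T) {}

  get value(): T { return this.current; }

  next(value: T): void {
    this.current = value;
    this.observers.forEach((o) => o(value)); // push to all subscribers
  }

  subscribe(observer: Observer<T>): () => void {
    observer(this.current);      // replay the latest value on subscribe
    this.observers.add(observer);
    return () => this.observers.delete(observer); // unsubscribe handle
  }
}
```

A state service wraps one of these per slice, exposes it read-only as a stream, and mutates only via well-named methods — NgRx generalizes that with actions, reducers, and effects.
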
  • How do you implement selectors and effects in NgRx?

    • Answer: Create selectors with createSelector/createFeatureSelector to compute derived state efficiently. Implement effects with createEffect(() => this.actions$.pipe(...)) to handle side effects (HTTP, routing, analytics) and dispatch follow-up actions.
  • How do you debug state in Angular (e.g., Redux DevTools, custom logging)?

    • Answer: Integrate StoreDevtoolsModule for time-travel and action inspection; add meta-reducers (logger) or ngrx-store-logger. In smaller setups, log via RxJS tap() or Angular signals’ dev tools, and expose read-only streams for inspection.

4. Data Binding & Communication

  • Can you explain one-way, two-way, and event binding in Angular?

    • Answer: One-way (property/interpolation): {{value}} / [prop]="value". Event binding: (event)="handler($event)". Two-way: [(ngModel)]="field" or custom ControlValueAccessor for components.
  • How would you pass data between parent and child components?

    • Answer: Use @Input() for parent → child and @Output() with EventEmitter for child → parent. For sibling or deep communication, use a shared service with Observables or a store.
  • What’s the role of RxJS Observables in Angular applications?

    • Answer: Observables power HttpClient, forms, router events, and state streams. They enable push-based updates, composition (map/switchMap), cancellation, and backpressure control.
  • How do you handle unsubscribing from Observables to avoid memory leaks?

    • Answer: Prefer the async pipe (auto-unsubscribes), use takeUntil/takeUntilDestroyed() or Subscription cleanup in ngOnDestroy. Avoid manual subscriptions when possible by composing streams in templates/services.
  • How would you integrate Angular with REST/gRPC APIs and handle errors?

    • Answer: Use HttpClient for REST and an adapter/proxy for gRPC-Web (or REST gateway). Centralize auth/headers/retries in HTTP interceptors, handle errors with RxJS (catchError, retryWithBackoff), and surface user-friendly messages.

5. Routing & Navigation

  • How does Angular’s Router work?

    • Answer: Define a route config (Routes) and import RouterModule. The router matches URLs to components, supports guards/resolvers, and navigates via <a routerLink> or router.navigate().
  • What’s the difference between lazy loading and eager loading?

    • Answer: Eager loads modules at startup (simple but larger bundle). Lazy loads feature modules on demand via loadChildren, improving initial load time and enabling route-level code splitting.
  • How do you implement route guards for authentication/authorization?

    • Answer: Implement CanActivate/CanLoad (and optionally CanDeactivate) to check auth/roles/tenancy before navigation. Redirect unauthorized users and use route data for required permissions.
  • What’s the role of resolvers in Angular routing?

    • Answer: Resolvers fetch data before route activation so the component starts with ready data (improves UX and avoids flicker). Implement Resolve<T> and wire it in route config.
  • How do you handle deep linking and query parameters?

    • Answer: Use ActivatedRoute’s paramMap/queryParamMap Observables to read changes reactively. Bind query params in links ([queryParams]="{ q: term }") and preserve/merge options when navigating.

6. Real-Time Communication

  • How would you integrate SignalR into an Angular app?

    • Answer: Add @microsoft/signalr, create a HubConnection in a service, start() it, and expose events as RxJS Observables to components. Send messages via the service and handle connection lifecycle (start/stop, errors).
  • How do you handle websocket disconnections/reconnections gracefully?

    • Answer: Enable withAutomaticReconnect() (SignalR) with exponential backoff, listen to onclose/onreconnected, and show an offline indicator. Buffer user actions or disable UI while reconnecting.
  • How do you throttle/debounce real-time events in Angular?

    • Answer: Pipe streams through RxJS operators: debounceTime, throttleTime, auditTime, bufferTime to limit UI updates. For heavy streams, coalesce updates and run expensive work runOutsideAngular.
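The rate-limiting idea behind throttleTime can be sketched outside a stream pipeline as a plain function (illustrative; in Angular you would normally keep this inside an RxJS pipe, and the injectable clock here exists only to make it testable):

```typescript
// Leading-edge throttle: pass at most one call through per interval.
function throttle<A extends unknown[]>(
  fn: (...args: A) => void,
  intervalMs: number,
  now: () => number = () => Date.now(), // injectable clock
): (...args: A) => void {
  let last = -Infinity;
  return (...args: A) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args); // admit this call
    }
    // Calls landing inside the interval are silently dropped.
  };
}
```

debounceTime is the complementary shape: it waits for a quiet gap and emits only the last value, which suits search-as-you-type; throttling suits continuous streams like scroll or telemetry.
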
  • What are strategies to ensure real-time UI responsiveness under heavy load?

    • Answer: Use ChangeDetectionStrategy.OnPush, trackBy with *ngFor, virtual scrolling, and batch DOM updates. Move heavy compute off the main thread (Web Workers) and minimize change detection with markForCheck/runOutsideAngular.

7. Blazor Fundamentals

  • What are the differences between Blazor Server and Blazor WebAssembly?

    • Answer: Server runs on the server and syncs UI over SignalR (small downloads, fast start, requires a steady connection). WASM runs .NET in the browser (offline capable, lower latency for UI events, larger initial download and browser sandbox limits).
  • How do Blazor components differ from Angular components?

    • Answer: Blazor components are Razor (.razor) files using C# for logic and rendering with a diffed render tree; Angular uses TypeScript, templates, and Zone.js-driven change detection. Both support DI, routing, and event binding.
  • How would you implement dependency injection in Blazor?

    • Answer: Register services in Program.cs (AddSingleton/Scoped/Transient) and inject with @inject or [Inject]. In Server, prefer Scoped per circuit; in WASM, Scoped behaves like Singleton.
  • How do you handle JS interop in Blazor?

    • Answer: Use IJSRuntime.InvokeAsync<T> to call JS from .NET and [JSInvokable] with DotNetObjectReference to call .NET from JS. Keep interop thin and batch calls to reduce overhead.
  • What are the main limitations of Blazor WebAssembly vs. Server?

    • Answer: Larger startup payload and slower cold start, browser sandbox restrictions (limited APIs, no direct server resources), interop cost for heavy JS calls, and memory/CPU constraints compared to server-hosted execution.

8. UI/UX, Accessibility & Performance

  • How do you ensure compliance with WCAG 2.1 accessibility standards?

    • Answer: Start with semantic HTML, proper labels/alt text, and logical focus order. Ensure color contrast meets AA, full keyboard navigation, visible focus states, and correct ARIA only when needed. Test with axe/Lighthouse, screen readers (NVDA/VoiceOver), and real keyboard-only flows; fix forms (errors, hints, roles).
  • What techniques do you use for responsive design (CSS Grid, Flexbox, media queries)?

    • Answer: Design mobile-first, use Flexbox for 1-D layouts and CSS Grid for 2-D. Apply media/container queries, fluid spacing/typography (clamp()), and responsive images (srcset/sizes). Prefer rem units and system breakpoints for consistency.
  • How do you optimize Angular performance (change detection, lazy loading, trackBy)?

    • Answer: Use ChangeDetectionStrategy.OnPush, trackBy for *ngFor, and async pipe to avoid manual subscriptions. Lazy-load routes, split bundles, and use pure pipes/memoized selectors. Minimize emissions with RxJS operators, and build with AOT/optimization.
  • How do you handle internationalization (i18n) in Angular (ngx-translate, Angular i18n)?

    • Answer: For compile-time i18n, use Angular i18n (message extraction, XLIFF, build per locale). For runtime switching, use ngx-translate with JSON catalogs and lazy-load translations. Support ICU pluralization, locale-specific pipes, and RTL via Directionality.
  • How do you manage theming (dark/light mode, CSS variables, Tailwind, Angular Material)?

    • Answer: Store theme preference and apply via root CSS variables (or prefers-color-scheme). With Angular Material, define multiple themes and toggle via class. Tailwind: configure theme extensions and use dark variant; keep tokens in CSS variables for runtime switches.

9. Testing & Quality

  • How do you test Angular components with Jasmine/Karma?

    • Answer: Use TestBed to configure the module, create a ComponentFixture, and test DOM/inputs/outputs. Mock services with providers, use fakeAsync/tick or waitForAsync, and test change detection and async flows.
  • What’s the difference between unit tests and e2e tests (Cypress, Playwright)?

    • Answer: Unit tests isolate components/services and run fast in JS/Node with mocks. E2E tests drive the real app in a browser, validating routing, auth, and integrations—slower but higher confidence.
  • How do you test Blazor components (e.g., with bUnit)?

    • Answer: Render components with bUnit (RenderComponent<T>()), assert markup and parameters, and trigger events. Provide mocked services via the test DI container and verify state changes/JS interop with bUnit helpers.
  • How do you ensure cross-browser compatibility?

    • Answer: Define a browserslist and transpile/polyfill accordingly; rely on Autoprefixer. Use feature detection (not UA sniffing) and validate with a target matrix (e.g., BrowserStack/Sauce). Monitor errors via RUM.
  • How do you include frontend testing in a CI/CD pipeline?

    • Answer: Run lint + unit tests on every PR with coverage gates, build production bundles, and run E2E on tagged branches or nightly. Cache node_modules, collect artifacts (screenshots/videos), and fail the pipeline on regressions.

10. Advanced Topics

  • What are progressive web apps (PWA) in Angular/Blazor?

    • Answer: PWAs add service workers, cache/offline, and a web app manifest for installable experiences. Angular: add @angular/pwa; Blazor WASM supports service workers and offline caching.
  • Have you worked with micro frontends (Module Federation, Single-SPA)?

    • Answer: Use Webpack Module Federation for runtime-shared modules and independent deploys; Single-SPA orchestrates multiple frameworks/apps. Key concerns: shared deps, routing isolation, cross-app communication, and consistent design systems.
  • How would you integrate GraphQL clients in Angular or Blazor?

    • Answer: Angular: Apollo Angular or urql with normalized caching and codegen (TypeScript types). Blazor: StrawberryShake or GraphQL.Client with generated C# types; handle errors, cache policies, and auth headers.
  • How do you secure frontend apps (XSS prevention, CSP, sanitization)?

    • Answer: Prefer Angular’s template escaping, avoid innerHTML, use DomSanitizer sparingly. Set CSP (no inline/eval), enable HTTPOnly/SameSite cookies, and validate inputs. Audit dependencies and enforce TLS/HSTS.
  • How do you optimize bundle size and load time (tree-shaking, code splitting)?

    • Answer: Enable production builds (AOT, minify, treeshake), lazy-load routes/components, and remove unused polyfills. Compress (gzip/br), optimize images/fonts, prefetch/preload critical chunks, and defer non-critical scripts.

🔹 Cloud & DevOps

1. Azure Fundamentals

  • Which Azure services have you worked with (App Service, AKS, Functions, Service Bus, Storage, Cosmos DB)?

    • Answer: App Service for hosting web APIs and slots; AKS for container orchestration and autoscaling; Functions for event-driven/serverless tasks; Service Bus (queues/topics) for decoupled messaging; Storage (Blobs/Queues/Tables) for cheap durable storage; Cosmos DB for globally distributed, low-latency NoSQL (with multi-region writes). Typically paired with Key Vault, Application Insights, Front Door/APIM.
  • How do you deploy a .NET Core app to Azure App Service?

    • Answer: Build/publish (multi-stage Docker or dotnet publish), create an App Service + plan (Linux/Windows), and deploy via GitHub Actions/Azure DevOps, Zip Deploy, or container registry. Configure App Settings/Connection Strings, enable Health Checks, set Startup Command/ASPNETCORE_URLS, and wire Application Insights and autoscale.
  • What are the differences between Azure Functions and background workers (e.g., Hangfire)?

    • Answer: Functions are serverless, trigger-based, auto-scale to zero, and best for bursty, short-lived work (consumption pricing). Workers/Hangfire run under your host (App Service/AKS/VM), great for long-running jobs, scheduled tasks, and full control with a dashboard, but you own scaling/infrastructure.
  • How do you implement scaling in Azure (horizontal vs. vertical)?

    • Answer: Horizontal: add instances (App Service autoscale rules, AKS HPA/KEDA, VM Scale Sets). Vertical: increase SKU/size (CPU/RAM). Use metrics (CPU, RAM, queue length, custom) and schedules; prefer horizontal for resilience.
  • How do you manage multi-region deployments for high availability?

    • Answer: Deploy active-active or active-passive across paired regions; route with Front Door/Traffic Manager (latency/priority). Use data services with geo-replication (Cosmos DB multi-region, SQL Failover Groups), replicate Key Vault secrets, keep DNS TTL low, and rehearse failover/runbooks.

2. Containerization & Orchestration

  • How do you containerize a .NET Core API using Docker?

    • Answer: Use a multi-stage Dockerfile: build with mcr.microsoft.com/dotnet/sdk:8.0, dotnet publish -c Release, then copy to mcr.microsoft.com/dotnet/aspnet:8.0. Set ASPNETCORE_URLS=http://+:8080, EXPOSE 8080, and ENTRYPOINT ["dotnet","App.dll"].
  • What’s the difference between Docker Compose and Kubernetes (AKS)?

    • Answer: Compose is simple, local or single-host orchestration for multi-container apps. Kubernetes/AKS is production-grade orchestration: scheduling, service discovery, rolling updates, autoscaling, self-healing, secrets/config, and policies.
  • How do you structure multi-container solutions with networking?

    • Answer: In Compose, define services and a shared network; services talk via service names. In K8s, use Services (ClusterIP) + DNS for discovery, ConfigMaps/Secrets for config, and patterns like sidecars for cross-cutting concerns.
  • What’s the role of Helm charts in Kubernetes deployments?

    • Answer: Helm packages/templatizes Kubernetes manifests with values.yaml for environment overrides, enabling repeatable installs, upgrades/rollbacks, and versioned releases across environments.
  • How do you configure KEDA for autoscaling based on queue/event load?

    • Answer: Install KEDA; define a ScaledObject linking a Deployment to a scaler (e.g., Service Bus, RabbitMQ, Kafka) with triggers (queue length/lag), min/max replicas, polling/cooldown. KEDA drives the HPA based on external metrics.

3. Infrastructure as Code (IaC)

  • What’s the difference between ARM templates, Bicep, and Terraform?

    • Answer: ARM is verbose JSON and Azure-native. Bicep is a higher-level DSL that compiles to ARM (cleaner syntax, great Azure parity). Terraform is multi-cloud with its own state backend and large provider ecosystem; uses HCL and plan/apply.
  • How would you deploy a full environment (API, DB, Storage, Service Bus) with IaC?

    • Answer: Model resources as modules (RG, VNet, App Service/AKS, DB, Storage, Service Bus), parameterize per environment, and reference outputs for dependencies. Store secrets in Key Vault, run in CI/CD with plan/preview, enforce tags/naming and RBAC, and use remote state.
  • Can you explain what Pulumi is and how it differs from Terraform/Bicep?

    • Answer: Pulumi defines infra using general-purpose languages (C#/TS/Python/Go) with real loops/abstractions; supports multiple clouds and can re-use app code/types. It manages state like Terraform but trades HCL/DSL for full programming languages.
  • How do you implement GitOps using ArgoCD or Flux?

    • Answer: Store K8s manifests/Helm charts in Git; ArgoCD/Flux continuously reconciles cluster state to Git (pull-based). Use PRs for changes, environments as folders/branches, SOPS/Sealed Secrets for secrets, and policies for promotion.
  • How do you manage environment configuration across Dev/Test/Prod?

    • Answer: Separate state/workspaces, use per-env values/parameters, and keep secrets in Key Vault. Promote from Dev→Test→Prod via pipelines, avoid drift with GitOps, and centralize app config/feature flags in Azure App Configuration.

4. CI/CD & Pipelines

  • Can you describe a CI/CD pipeline you’ve built for .NET + Angular in Azure DevOps?

    • Answer: Multi-stage YAML: restore/cache, dotnet build/test (Coverlet + Sonar), npm ci / ng build for Angular, package artifacts, Docker build/push to ACR, then deploy stages to Dev/Test/Prod (App Service or AKS). Use environments with approvals, variable groups/Key Vault, and rollback via slots or Helm rollback.
  • How would you configure a pipeline to:

    • Answer: Use separate jobs with conditionals and templates; share variables via library groups; fail fast on quality gates; publish artifacts and release via approvals.
  • Run unit tests and enforce coverage thresholds

    • Answer: dotnet test /p:CollectCoverage=true /p:Threshold=80 (Coverlet) + PublishTestResults@2. For Angular, ng test --watch=false --code-coverage; enforce via SonarQube/SonarCloud quality gate.
  • Build & push Docker images to Azure Container Registry

    • Answer: Docker@2 or az acr build; login with Azure service connection; tag with $(Build.SourceVersion) and semver; docker push to ACR.
  • Deploy to multiple environments (Dev/Test/Prod) with approvals

    • Answer: Use environment checks/manual approvals; App Service deploy with slots (or Helm to AKS). Parameterize per-env values (Helm values.yaml, app settings), and gate with smoke tests.
  • How do you implement blue/green and canary deployments?

    • Answer: App Service: deploy to a slot then swap (blue/green). AKS: run two versions and shift traffic via Ingress/Front Door weights or Istio traffic split; canary with small % then ramp up; automate rollback on health checks.
  • How do you manage feature flags in deployments (.NET Feature Management / LaunchDarkly)?

    • Answer: Use Microsoft.FeatureManagement with Azure App Configuration (labels per env/ring) or LaunchDarkly for remote toggles and targeting. Guard code paths with flags, enable flight/ring rollouts, and remove stale flags.
  • How do you secure pipeline secrets in Azure DevOps/GitHub Actions?

    • Answer: Store in Key Vault (linked variable groups) or Actions Environments; use managed identity/OIDC to avoid long-lived secrets. Scope permissions least-privilege, mask outputs, and never echo secrets in logs.

5. Secrets & Config Management

  • How do you manage secrets in Azure (Key Vault, env vars, config providers)?

    • Answer: Keep secrets in Azure Key Vault, access via Managed Identity, and load with ASP.NET Core config providers (or App Configuration + Key Vault references). Fall back to env vars in containers.
  • What’s the difference between system-assigned and user-assigned managed identities?

    • Answer: System-assigned is tied to one resource and lifecycle; deleted with it. User-assigned is standalone, reusable across resources—ideal for shared roles and rotation independence.
  • How do you rotate certificates and keys automatically?

    • Answer: Set Key Vault rotation policies and use Event Grid to trigger updates/redeploys; enable App Service auto-renew for TLS certs. Use IOptionsMonitor/reload to pick up changes without restarts where possible.
  • How do you integrate Key Vault with Kubernetes (CSI driver)?

    • Answer: Install Secrets Store CSI + AKV provider, define a SecretProviderClass mapping Key Vault objects, and mount into pods; optionally sync as K8s Secret. Authenticate with workload identity/managed identity.
  • How do you handle multi-tenant configuration isolation?

    • Answer: Separate namespaces/labels in App Configuration, tenant-scoped keys, and per-tenant Key Vaults or key prefixes with strict RBAC. Ensure logging masks tenant data and enforce isolation in pipelines and runtime.

6. Monitoring & Observability

  • Which observability tools have you used (Serilog, OpenTelemetry, Application Insights, Grafana, Prometheus)?

    • Answer: Serilog for structured logs (sinks: App Insights/Seq), OpenTelemetry for traces/metrics/logs, Prometheus for scraping metrics, and Grafana dashboards. Application Insights for distributed tracing, dependency maps, and live metrics in Azure.
  • How would you configure distributed tracing for microservices?

    • Answer: Add OTel SDK to each service, set service.name/version, auto-instrument HttpClient/ASP.NET Core/SQL, and propagate W3C traceparent/baggage. Export via OTLP to Azure Monitor, Jaeger/Zipkin/Tempo, and correlate logs with the trace ID.
  • How do you define and monitor SLOs/SLAs in Azure?

    • Answer: Define SLIs (availability, p95/p99 latency, error rate), set SLO targets, and compute error-budget burn rates via Log Analytics/KQL. Use App Insights availability tests and Azure Monitor alerts tied to SLOs.
  • How do you set up alerts and dashboards for failures or performance degradation?

    • Answer: Build Workbooks/Grafana dashboards; create metric and log alerts (KQL) with Action Groups (email/Teams/PagerDuty). Include DLQ size, dependency failures, and latency SLO breaches; tag alerts with runbooks.
  • How do you use chaos testing (Azure Chaos Studio) to validate resilience?

    • Answer: Define experiments (CPU/network faults, pod/VM shutdowns) with limited blast radius, run in lower envs first, and monitor with traces/metrics. Automate experiments in pipelines and require passing abort conditions/SLOs before promotion.

7. Resilience & Reliability

  • What’s the difference between retry, circuit breaker, and bulkhead patterns?

    • Answer: Retry re-attempts transient failures (use exponential backoff + jitter). Circuit breaker stops calls to an unhealthy dependency after repeated failures and tries again after a cool-down. Bulkhead isolates resources (threads/queues/connections) so one failing dependency can’t take down the whole service.
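The circuit-breaker state machine described above can be sketched in a few lines (illustrative; Polly provides the production .NET equivalent, and the names and injectable clock here are assumptions for the sketch):

```typescript
// Minimal circuit breaker: opens after N consecutive failures,
// fails fast while open, half-opens after a cooldown to probe recovery.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(
    private threshold: number,
    private cooldownMs: number,
    private now: () => number = () => Date.now(), // injectable clock
  ) {}

  call<T>(fn: () => T): T {
    if (this.openedAt !== null) {
      if (this.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open"); // fail fast, protect the dependency
      }
      this.openedAt = null; // half-open: allow one trial call through
    }
    try {
      const result = fn();
      this.failures = 0;    // success closes the circuit
      return result;
    } catch (err) {
      if (++this.failures >= this.threshold) this.openedAt = this.now();
      throw err;
    }
  }
}
```

Failing fast while open is the point: callers get an immediate, cheap error instead of queueing behind timeouts against a struggling dependency.
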
  • How would you configure DLQ (Dead Letter Queues) and retries in Azure Service Bus?

    • Answer: Set MaxDeliveryCount on the queue/subscription; after that many delivery attempts, the broker moves the message to the DLQ. In code, use ServiceBusProcessor with manual complete and client RetryOptions; inspect DLQ messages, fix the cause, then replay or purge safely.
  • How do you implement graceful shutdown in Kubernetes pods?

    • Answer: Handle SIGTERM: stop accepting new work, mark readiness=false to drain, finish in-flight requests, commit offsets/complete messages, dispose connections, then exit before terminationGracePeriodSeconds. Optionally add a preStop hook (short sleep) to allow the load balancer to drain.
  • How do you ensure zero-downtime deployments with rolling updates?

    • Answer: Use K8s RollingUpdate with maxUnavailable=0 and readiness/liveness probes; only route traffic to Ready pods. Keep backward-compatible DB changes, warm caches, and support fast rollback (previous image/Helm revision).
  • What’s your approach to disaster recovery (DR) in Azure?

    • Answer: Define RTO/RPO, deploy multi-region (active-active or active-passive), route via Front Door/Traffic Manager, use geo-replication (Cosmos DB multi-region, SQL Failover Groups, GZRS storage), back up infra and data, store runbooks, and perform regular failover tests.

8. Security & Compliance

  • How do you integrate OAuth2/OpenID Connect in a cloud-hosted app?

    • Answer: Use Authorization Code + PKCE for SPAs/mobile and Client Credentials for service-to-service. Configure issuer/audience in JWT middleware, validate tokens (sig/exp/nbf), and enforce scopes/claims. Prefer Entra ID (Azure AD)/B2C or OpenIddict.
  • How do you enforce TLS and mTLS across services?

    • Answer: Terminate TLS at the edge (Front Door/App Gateway) and use end-to-end TLS to backends. For service-to-service, enable mTLS (Istio/Linkerd or AGIC/Ingress certs), automate cert rotation with Key Vault/workload identity, and disable plaintext ports.
  • How do you secure API endpoints against abuse (WAF, rate limiting, throttling)?

    • Answer: Put a WAF in front (OWASP rules, bot protection), enforce rate limits/quotas (APIM/Ingress policies), set request size/time limits, IP allowlists as needed, and monitor anomalies with alerts and automatic blocking.
  • How do you implement container image scanning & signing?

    • Answer: Scan in CI and in ACR (e.g., Trivy/Defender for Cloud). Sign images (Sigstore Cosign / Notation) and enforce admission control (OPA/Gatekeeper/Azure Policy) to only run signed, vulnerability-free images; publish SBOM and provenance.
  • How do you ensure GDPR/HIPAA compliance for cloud-hosted solutions?

    • Answer: Practice data minimization, encrypt in transit/at rest, restrict access (RBAC/least privilege), log/audit access, and mask PII in logs. Implement consent, DPIA, retention & right-to-erasure flows (GDPR); for HIPAA, sign a BAA, protect PHI, and document administrative/technical safeguards.

🔹 CI/CD, Git & Collaboration

1. Version Control & Branching

  • Which Git branching strategy do you prefer (GitFlow, trunk-based, GitHub Flow)? Why?

    • Answer: Trunk-based for most teams: tiny PRs, fast CI, fewer merges, continuous delivery. GitFlow fits regulated/release-heavy products with parallel support branches. GitHub Flow is a lightweight trunk variant for web apps.
  • What are the trade-offs between long-lived feature branches and short-lived branches?

    • Answer: Long-lived = easier isolation but painful merges, stale code, slower feedback. Short-lived = frequent integration, fewer conflicts, higher quality—but requires strong CI and discipline.
  • What’s the difference between merge and rebase? When would you use each?

    • Answer: Merge preserves history with a merge commit (safe, audit-friendly). Rebase rewrites commits onto a new base (clean, linear history). Use merge for shared branches; rebase your own feature branch before opening a PR.
  • How do you handle hotfixes in a GitFlow workflow?

    • Answer: Branch from main (hotfix/*), fix + tag release, merge back to main and develop to keep lines in sync, then cut/ship a patch version.
  • How do you enforce branch naming and commit message standards (e.g., Conventional Commits)?

    • Answer: Use branch protection rules, server/PR checks, and pre-receive hooks; add commitlint with Husky (or Azure DevOps pre-commit) and PR templates; validate with CI and reject non-conforming pushes.

2. Pull Requests & Code Reviews

  • What makes a good pull request?

    • Answer: Small, focused diffs with a clear description, linked issue, screenshots (UI), migration notes, tests, and passing checks. Avoid mixing refactors with features.
  • How do you enforce mandatory reviews before merging?

    • Answer: Enable protected branches with required reviewers, CODEOWNERS, and required status checks (build, tests, security scan). Block bypass merges except for admins in emergencies.
  • How do you conduct a constructive code review?

    • Answer: Be specific and objective, reference standards, ask questions, propose alternatives, and acknowledge good patterns. Prioritize correctness, security, and maintainability over style nitpicks.
  • How do you ensure code review coverage for critical modules?

    • Answer: Use CODEOWNERS/path rules to auto-request domain reviewers, require multiple approvals, and enforce checklists. Track coverage in analytics and rotate subject-matter owners to avoid bottlenecks.
  • What tools have you used for code quality enforcement (SonarQube, analyzers, StyleCop, ESLint, Prettier)?

    • Answer: SonarQube/SonarCloud for quality gates & coverage; Roslyn analyzers/StyleCop for .NET; ESLint/Prettier for TS/JS; EditorConfig for consistent formatting; Dependabot/Renovate for dependency hygiene.

3. CI/CD Pipelines

  • Can you describe a CI/CD pipeline you’ve built for a .NET + Angular solution in Azure DevOps or GitHub Actions?

    • Answer: Multi-stage YAML: restore/cache, dotnet build/test (Coverlet + ReportGenerator + Sonar), npm ci / ng build (prod), publish artifacts, Docker build/push to ACR, then deploy to App Service/AKS with health checks, slots/Helm, and automatic rollback.
  • How do you structure pipelines for multi-service microservice environments?

    • Answer: One pipeline per service with shared templates; trigger on path filters; build/test in parallel; version and push images independently; run contract tests and environment promotions via release templates.
  • How do you implement approval gates for staging vs production?

    • Answer: Use Environments with required reviewers, manual approval checks, and automated pre-deploy validations (smoke tests, quality gates). Promote only on green checks; capture change logs.
  • How do you enforce minimum code coverage thresholds in pipelines?

    • Answer: Fail the build if coverage < threshold (Coverlet /p:Threshold=80), publish reports, and enforce via Sonar quality gates. For Angular, enable --code-coverage and aggregate in CI.
  • How do you manage pipeline templates and reusable YAML across repos?

    • Answer: Store versioned templates in a central repo; import via templates/resources (DevOps) or composite actions (GitHub). Parameterize inputs, document usage, and pin to tags to avoid breaking changes.

4. Release Strategies

  • What’s the difference between blue/green, canary, and rolling deployments?

    • Answer: Blue/green runs two identical stacks and flips traffic from blue → green (fast rollback by switching back). Canary shifts a small % of traffic to the new version, then ramps up on good metrics. Rolling replaces pods/instances gradually until all are updated.
  • Which release strategies have you used in production and why?

    • Answer: Prefer canary for metric-driven safety on high-traffic services; blue/green for database migrations or when instant rollback is critical; rolling for routine, low-risk updates that don’t need traffic splitting.
  • How do you roll back a failed deployment safely?

    • Answer: Keep immutable images and versioned configs; define automated health checks and SLO guards; for AKS use Helm rollback/previous ReplicaSet; for App Service use slot swap back. Always couple with a DB-safe migration plan (expand/contract).
  • How do you perform feature toggling and progressive delivery?

    • Answer: Use feature flags (App Config + FeatureManagement, LaunchDarkly) to gate new code, roll out by ring/percentage/segment, collect metrics, and disable instantly on regressions. Remove stale flags quickly.
  • How do you validate backward compatibility in rolling deployments?

    • Answer: Ensure schema-first additive DB changes, contract tests, and dual-read/write when needed. Run shadow traffic or canary validating both old and new versions; keep idempotent APIs and tolerant readers.

5. Documentation & Knowledge Sharing

  • What’s your experience with Docs-as-Code (Markdown, MkDocs)?

    • Answer: Store docs with code in Git (Markdown), render with MkDocs/Docusaurus, PR-review changes, and auto-publish via CI. Treat docs like code with owners and linting.
  • How do you document API endpoints (Swagger/OpenAPI, Scalar)?

    • Answer: Generate OpenAPI from controllers, validate in CI, and publish interactive docs via Swagger UI / Scalar. Keep examples, auth flows, and error shapes (ProblemDetails) current.
  • Do you use Architecture Decision Records (ADRs)? Why or why not?

    • Answer: Yes—ADRs capture context, options, and decisions with consequences. They create an audit trail, speed onboarding, and reduce re-litigating old choices.
  • How do you keep documentation in sync with code?

    • Answer: Co-locate docs, require doc updates in PR templates, fail CI if OpenAPI/diagrams are stale, and auto-generate parts (clients, schema refs). Schedule periodic docs rot checks.
  • How do you integrate diagrams-as-code (Mermaid, PlantUML) into your workflow?

    • Answer: Keep diagrams in repo as .md/.puml, render in CI to images for portals, and review diffs like code. Use templates for common topologies.

6. Collaboration & Agile Practices

  • What’s your role in a Scrum or Kanban team?

    • Answer: Participate in refinement, planning, reviews, and retros; own deliverables, write ADRs, pair/mentor, and keep WIP small. In Kanban, focus on flow and cycle time.
  • How do you report progress and blockers in a distributed team?

    • Answer: Daily async updates (status, risks, next steps), visible boards/dashboards, and early escalation of blockers with proposed options.
  • How do you manage work in Azure DevOps Boards, Jira, or Trello?

    • Answer: Use well-defined work items with acceptance criteria, link PRs/commits, keep swimlanes/WIP limits, and leverage queries/dashboards for flow metrics.
  • How do you balance following requirements vs suggesting improvements?

    • Answer: Deliver the MVP scope while flagging tech debt or better approaches with impact estimates; propose small, testable improvements within the sprint, and put larger ones on the roadmap.
  • How do you handle multilingual communication in distributed teams?

    • Answer: Prefer clear written specs, visuals, and glossary; summarize meetings, record decisions; encourage async questions and avoid idioms. Use captioning/transcripts when helpful.

7. Dependency & Artifact Management

  • How do you manage NuGet/npm dependencies across multiple repos?

    • Answer: Centralize versions via Directory.Packages.props (NuGet) and pnpm/npm workspaces; use dependency-update bots (Dependabot/Renovate) and lockfiles for reproducibility.
  • How do you configure Azure Artifacts or GitHub Packages for private feeds?

    • Answer: Publish packages from CI with service connections; consumers authenticate via PAT/OIDC/managed identity; configure nuget.config/.npmrc with scoped registries.
  • How do you enforce versioning (SemVer) across services?

    • Answer: Use Conventional Commits → automated semantic-release to bump versions and tag releases. Embed versions in images and APIs; reject breaking changes without major bumps.
  • How do you handle dependency scanning and security updates in pipelines?

    • Answer: Run SCA (Dependabot/Renovate + GitHub Advanced Security/Defender), SAST analyzers, and container scans (Trivy). Fail on critical CVEs; auto-PR safe upgrades with tests.
  • How do you automate changelog generation and release notes?

    • Answer: Generate from commits/PR labels via semantic-release/Release Please; include highlights, breaking changes, and migration steps. Publish notes to the repo, artifacts, and portals automatically.

8. Quality Gates & Governance

  • How do you set up SonarQube quality gates for .NET projects?

    • Answer: Run SonarScanner for .NET in CI, upload coverage (Coverlet/ReportGenerator) and analysis. Enforce a gate on new code (e.g., ≥80% coverage, 0 critical vulnerabilities, maintainability/security ratings A, code duplication ≤3%) and fail the build if it’s not met.
  • What static analyzers do you use for C# and TypeScript?

    • Answer: C#: Microsoft .NET/Roslyn analyzers, StyleCop.Analyzers, SonarAnalyzer.CSharp, optional ReSharper inspections in CI. TS: ESLint with typescript-eslint, Angular ESLint plugin; optional SonarJS/TS.
  • How do you enforce style rules (Roslyn analyzers, ESLint, Prettier)?

    • Answer: Check in .editorconfig, enable Roslyn rules as warnings/errors, run ESLint in CI, and autoformat with Prettier on pre-commit/PR. Block merges on lints/format diffs.
  • How do you prevent secrets leakage in repositories and pipelines?

    • Answer: Use git-secrets/gitleaks/truffleHog pre-commit and in CI, enable repo secret scanning, store secrets in Key Vault/GitHub Environments, use OIDC/managed identity instead of static keys, and rotate/revoke on detection.
  • How do you measure and enforce technical debt reduction?

    • Answer: Track Sonar debt/maintainability rating, hotspot counts, complexity, and flaky tests. Set quarterly targets, reserve capacity each sprint for debt, and require fixes on new/changed code before merge.

9. Advanced Topics

  • How do you design monorepo vs multirepo strategies?

    • Answer: Monorepo enables atomic changes, shared tooling, and easier refactors (use Nx/Turborepo/Bazel); needs strong boundaries and CI partitioning. Multirepo gives independent versioning/ACLs but adds coordination and dependency drift; pick per team/product boundaries.
  • How do you manage cross-repo dependencies in CI/CD?

    • Answer: Publish versioned artifacts/packages to feeds (NuGet/npm/ACR), consume by SemVer ranges, and auto-PR updates with Dependabot/Renovate. Use path filters/pipeline triggers for critical downstream rebuilds.
  • Have you implemented GitOps with Flux/ArgoCD? What challenges did you face?

    • Answer: Yes—apps declared in Git and reconciled to clusters. Challenges: secrets management (solve with SOPS/Sealed Secrets), env overlays (Kustomize/Helm values), drift/conflicts, CRD/version skew, and promotion workflows.
  • How do you secure CI/CD pipelines against supply chain attacks?

    • Answer: Pin actions to SHAs, least-privilege tokens, OIDC for cloud creds, signed artifacts (Cosign/Notation), SBOM (Syft) + provenance (SLSA), dependency pinning/scanning, branch protection, and mandatory reviews/2FA.
  • How do you integrate compliance checks (SAST/DAST, license scanning) into CI/CD?

    • Answer: Add SAST (CodeQL/Sonar), DAST (OWASP ZAP/Burp automation), SCA/license checks (Mend/FOSSA/OWASP DC), and container scans (Trivy) to pipelines with quality gates that block releases on critical issues; publish reports and track trends on dashboards.

🔹 Testing & Quality

1. Testing Foundations

  • What’s the difference between unit tests, integration tests, and end-to-end (E2E) tests?

    • Answer: Unit test a small unit in isolation (fast, mocked deps). Integration test multiple components together (DB, message bus, HTTP). E2E test the full system from UI/API down to real dependencies.
  • Can you give an example of when you’d use each type?

    • Answer: Unit: validating a tax calculator method. Integration: API ↔ DB repository saving an order. E2E: user places an order via UI and sees confirmation email.
  • How do you balance test coverage vs. test value?

    • Answer: Aim for high coverage on critical/core code paths; prioritize tests that catch regressions and guard contracts. Don’t chase 100%—favor maintainable, flake-free tests with strong signal.
  • What’s your approach to writing testable code?

    • Answer: Apply SOLID, inject dependencies, separate pure logic from I/O, keep functions small, and return deterministic outputs. Avoid static singletons, hide side effects behind interfaces.
  • How do you avoid flaky tests?

    • Answer: Remove sleeps; use async waits on real signals, control time/randomness (fakes), isolate state, use test containers instead of shared services, and clean up data per test.
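Controlling time via a fake clock, one of the techniques above, might look like this (interface and class names are illustrative):

```csharp
using System;

// Injecting a clock makes time-dependent logic deterministic in tests.
interface IClock { DateTime UtcNow { get; } }

sealed class FakeClock : IClock
{
    public DateTime UtcNow { get; set; } = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    public void Advance(TimeSpan by) => UtcNow = UtcNow.Add(by);
}

// Example SUT: a token that expires 30 minutes after issue. In production the
// clock implementation would simply return DateTime.UtcNow.
sealed class SessionToken
{
    private readonly IClock _clock;
    private readonly DateTime _issuedAt;
    public SessionToken(IClock clock) { _clock = clock; _issuedAt = clock.UtcNow; }
    public bool IsExpired => _clock.UtcNow - _issuedAt > TimeSpan.FromMinutes(30);
}
```

Tests advance the fake clock instead of sleeping, so they run instantly and never flake on timing.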

2. Unit Testing

  • How do you structure unit tests in .NET (naming conventions, AAA pattern)?

    • Answer: Use Arrange-Act-Assert, one assertion concept per test, and names like MethodName_State_ExpectedResult. Group by SUT class, keep tests independent/idempotent.
  • Which testing frameworks have you used (xUnit, NUnit, MSTest)?

    • Answer: Prefer xUnit (fact/theory, no test context); NUnit for rich attributes; MSTest for legacy/VS integration. All support .NET CLI and CI well.
  • What’s the difference between Fact and Theory in xUnit?

    • Answer: [Fact] is a test with no parameters. [Theory] runs the same test with data rows (e.g., [InlineData], [MemberData]) to cover multiple cases.
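A minimal xUnit sketch contrasting the two (assumes the xunit package; class and method names are illustrative):

```csharp
using Xunit;

public class DiscountCalculatorTests
{
    // A [Fact] is a single, parameterless test case.
    [Fact]
    public void Apply_ZeroPercent_ReturnsOriginalPrice()
        => Assert.Equal(100, DiscountCalculator.Apply(100, 0));

    // A [Theory] reruns the same test body once per data row.
    [Theory]
    [InlineData(100, 10, 90)]
    [InlineData(200, 50, 100)]
    [InlineData(80, 100, 0)]
    public void Apply_Percentage_ReducesPrice(int price, int percent, int expected)
        => Assert.Equal(expected, DiscountCalculator.Apply(price, percent));
}

public static class DiscountCalculator
{
    public static int Apply(int price, int percent) => price * (100 - percent) / 100;
}
```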
  • How do you test async methods in C#?

    • Answer: Make the test async Task, await the call, use Assert.ThrowsAsync for exceptions, pass CancellationToken where relevant, and avoid .Result/.Wait() deadlocks.
  • How do you handle edge cases and exceptions in unit tests?

    • Answer: Use boundary values and equivalence classes, test null/empty/overflow cases, and assert exception type + message/param name (e.g., ArgumentException with paramName).

3. Integration Testing

  • How do you write integration tests for ASP.NET Core Web APIs?

    • Answer: Use WebApplicationFactory<TEntryPoint> to spin up the app in-memory, call endpoints with HttpClient, seed a test DB, and verify HTTP status + payload + side effects.
  • What’s the role of TestServer and WebApplicationFactory in integration testing?

    • Answer: TestServer hosts the ASP.NET Core pipeline in-memory. WebApplicationFactory builds on it to bootstrap your real app, letting you override config/DI for tests.
  • How do you test database interactions with EF Core/NHibernate?

    • Answer: Prefer a real DB via Testcontainers (SQL Server/Postgres) for realistic behavior. For fast checks, use SQLite in-memory with caveats (different SQL/constraints). Run migrations and isolate data per test/fixture.
  • What’s your approach to test containers (SQL Server, RabbitMQ, etc.)?

    • Answer: Start containers per test suite using DotNet.Testcontainers, inject their connection strings, ensure clean state, and tear down automatically—reliable and parallel-friendly.
  • How do you mock or stub external services in integration tests?

    • Answer: Use WireMock.NET/MockHttpMessageHandler to stub HTTP, Fake Service Bus/RabbitMQ or dedicated test queues, and contract tests to validate request/response shapes.

4. End-to-End (E2E) Testing

  • What tools have you used for E2E testing (Selenium, Playwright, Cypress)?

    • Answer: Playwright for fast, reliable cross-browser (Chromium/Firefox/WebKit) with auto-waits; Cypress for great DX and time-travel debugging; Selenium when legacy browsers/remote grids are required.
  • How do you test Angular components end-to-end?

    • Answer: Use Playwright/Cypress to drive the app like a user (routes, DOM, network). For component-level E2E, use Cypress Component Testing or Angular CDK Component Harnesses in integration tests to interact with components predictably.
  • How do you handle authentication flows in automated browser tests?

    • Answer: Prefer programmatic login: obtain a test token/cookie via API and seed storage before navigation. For full OIDC flows, use a test IdP with stable users and handle redirects/callbacks; avoid UI logins on every test by reusing authenticated storage per run.
  • How do you manage test environments and test data?

    • Answer: Spin up an ephemeral env (preview deploy) or dedicated test space; seed data via fixtures/migrations; tag data per test and reset between runs (DB snapshot/Testcontainers). Isolate external deps with stubs where appropriate.
  • How do you ensure cross-browser compatibility?

    • Answer: Run a matrix (Chromium, Firefox, WebKit) in CI with Playwright; add cloud grids (BrowserStack/Sauce) for real devices; define a browserslist, use polyfills as needed, and track failures per browser.

5. TDD & BDD

  • What’s the difference between Test-Driven Development (TDD) and Behavior-Driven Development (BDD)?

    • Answer: TDD drives design with unit tests (APIs/classes). BDD specifies business behavior in a ubiquitous language (Given/When/Then), aligning devs/testers/product and validating outcomes.
  • Can you explain the Red → Green → Refactor cycle in TDD?

    • Answer: Red: write a failing test. Green: implement the smallest code to pass. Refactor: clean design while tests stay green; repeat.
  • How do you write SpecFlow/Gherkin scenarios for a feature?

    • Answer: Capture examples as Given/When/Then with meaningful data tables; keep steps declarative (domain terms), avoid UI specifics, and map to step definitions that call application services, not UI where possible.
  • How do you integrate BDD tests into CI/CD pipelines?

    • Answer: Tag scenarios (@smoke, @regression) and run subsets per stage; produce living documentation reports; fail pipeline on critical scenarios; run against ephemeral envs provisioned in the pipeline.
  • What are the challenges of practicing TDD/BDD in real-world projects?

    • Answer: Legacy code, tight deadlines, flaky integration points, and over-specifying UI. Mitigate with seams/refactoring, fast unit suites, stable test data, and focusing BDD on business rules, not pixel details.

6. Mocking & Test Doubles

  • What’s the difference between mocks, stubs, and fakes?

    • Answer: Stubs provide canned responses; mocks verify interactions/expectations; fakes are lightweight working implementations (e.g., in-mem repo) used in tests.
  • Which mocking frameworks have you used (Moq, NSubstitute, FakeItEasy)?

    • Answer: Moq (popular, setups/verify), NSubstitute (clean, arrange-act-assert style), FakeItEasy (fluent). Choose based on team preference and ecosystem.
  • How do you mock DbContext or repositories in EF Core?

    • Answer: Prefer abstracting with a repository and mock that. If testing EF behavior, use SQLite in-memory or Testcontainers for realism; EF InMemory provider is quick but not SQL-accurate (no real constraints/translation).
  • How do you mock HTTP calls in .NET (HttpClient, Refit)?

    • Answer: Inject HttpClient with a custom HttpMessageHandler (e.g., DelegatingHandler stub or RichardSzalay.MockHttp). For Refit, supply a preconfigured HttpClient or use a local WireMock.NET server.
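A minimal stub handler along those lines (names are illustrative; MockHttp/WireMock.NET offer richer matching):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Test double for HttpClient: short-circuits the pipeline with a canned response.
sealed class StubHandler : HttpMessageHandler
{
    private readonly HttpStatusCode _status;
    private readonly string _body;
    public HttpRequestMessage? LastRequest { get; private set; }

    public StubHandler(HttpStatusCode status, string body) { _status = status; _body = body; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        LastRequest = request; // capture for interaction assertions
        return Task.FromResult(new HttpResponseMessage(_status)
        {
            Content = new StringContent(_body)
        });
    }
}
```

The SUT receives `new HttpClient(new StubHandler(...))`, so no real network call is made and the request can be asserted afterwards.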
  • When would you prefer a real test container instead of mocks?

    • Answer: When behavior matters (SQL constraints/transactions, broker acks/ordering, serialization quirks). Use Testcontainers for SQL/RabbitMQ/Redis to catch integration issues and reduce false positives from mocks.

7. Performance & Load Testing

  • Have you used k6, JMeter, or Locust for load testing?

    • Answer: Yes—k6 for code-as-test (JS), great CI integration; JMeter for protocol breadth and legacy; Locust (Python) for custom behavior at scale.
  • How do you define performance SLAs (latency, throughput, error rate)?

    • Answer: Start from business SLOs (e.g., p95 ≤ 300ms, p99 ≤ 800ms, ≥ 1k RPS, error rate < 0.1%). Convert to SLIs (latency, availability, error rate) and set error budgets to gate releases.
  • How do you simulate spikes and stress tests?

    • Answer: Use arrival-rate/ramping patterns: baseline → spike bursts → stress beyond capacity → soak for hours. Seed realistic data, think-time, and concurrency; test cold start and cache-warm paths.
  • How do you profile performance bottlenecks in .NET?

    • Answer: Collect CPU/GC/allocations with dotnet-trace, dotnet-counters, PerfView, Application Insights Profiler, or JetBrains dotTrace/dotMemory; pinpoint hotspots, lock contention, GC pressure, and chatty I/O.
  • How do you include load testing results in release decisions?

    • Answer: Compare to baseline and SLOs; fail the pipeline on regressions (latency/error budget burn). Use canary with automated rollback if key metrics degrade.

8. Test Automation in CI/CD

  • How do you integrate tests into Azure DevOps pipelines?

    • Answer: Stages: build → unit → integration → e2e. Run dotnet test/Angular tests, PublishTestResults, PublishCodeCoverage, and surface flaky tests and trends on dashboards.
  • How do you enforce minimum code coverage thresholds?

    • Answer: Use Coverlet (/p:Threshold=80) + ReportGenerator, enforce via SonarQube/SonarCloud quality gate on new code.
  • How do you parallelize tests in pipelines for speed?

    • Answer: Shard by project/assembly, enable runner parallelization (xUnit parallel test collections, VSTest parallel execution), split shards by past runtime for balancing, and use container/job matrices; Playwright/Cypress run workers in parallel.
  • How do you handle test reporting (dashboards, flaky test tracking)?

    • Answer: Publish JUnit/TRX, coverage HTML, and Allure/Playwright reports; label flaky tests, auto-retry once, quarantine with an issue, and track MTTR/flake rate.
  • How do you run E2E/browser tests in CI/CD (Playwright headless, Selenium Grid)?

    • Answer: Use Playwright headless with its browsers, record video/screenshots, and start the app via docker-compose. For cross-browser/devices, run on Selenium Grid or BrowserStack/Sauce.

9. Quality Gates & Governance

  • What’s your experience with SonarQube quality gates for test coverage?

    • Answer: Gate on new code: coverage ≥ 80%, no critical issues, duplication < 3%, Maintainability/Security A. Break the build if the gate fails.
  • How do you measure code quality beyond coverage (complexity, maintainability)?

    • Answer: Track cognitive/cyclomatic complexity, code smells, duplication, maintainability index, and security hotspots; trend over time per repo/module.
  • How do you track technical debt in projects?

    • Answer: Use Sonar’s debt estimates, convert to backlog items with priority, timebox debt work each sprint, and record rationale/mitigations in ADRs.
  • How do you ensure regression coverage as features evolve?

    • Answer: PRs must add/adjust tests; use test impact analysis, contract tests, snapshot/golden tests where appropriate, and occasionally mutation testing to gauge test strength.
  • How do you balance fast delivery vs. high test quality?

    • Answer: Follow the testing pyramid, run fast unit tests on every commit, selective integration/E2E on PRs, full suites nightly; use feature flags and canary to de-risk while maintaining velocity.

🔹 Performance & Optimization

1. Profiling & Diagnostics

  • How do you identify performance bottlenecks in a .NET application?

    • Answer: Start with measurements (p95/p99 latency, throughput), add app traces/metrics/logs, then use a sampling profiler to find hot paths. Reproduce locally, change one thing at a time, and verify with before/after baselines.
  • What tools do you use for profiling and diagnostics (e.g., dotTrace, PerfView, Visual Studio Profiler)?

    • Answer: PerfView, dotTrace, VS Profiler, dotnet-trace/counters/gcdump/dump, EventPipe/ETW, Application Insights Profiler; for live triage: dotnet-monitor.
  • How do you analyze memory allocations and GC pressure?

    • Answer: Enable MemoryDiagnoser/alloc tracking, capture heap dumps, review Gen0/1/2 & LOH activity and allocation flame graphs. Look for per-request allocations, boxing, large arrays/strings, and long-lived roots.
  • How do you detect and resolve memory leaks in .NET applications?

    • Answer: Watch working set/GC heap growth, take before/after dumps, and inspect retention paths to GC roots. Common fixes: unsubscribe events, dispose timers/HttpClient handlers, trim caches, and avoid static references.
  • What’s your approach to diagnosing thread contention and deadlocks?

    • Answer: Capture stacks/wait chains (PerfView, dotnet-dump analyze), look for Monitor.Enter, SemaphoreSlim.Wait, and sync-over-async (.Result/.Wait()). Reduce lock scope, prefer async, and partition work to avoid shared locks.

2. Benchmarking & Measurement

  • How do you use BenchmarkDotNet to evaluate code performance?

    • Answer: Annotate with [Benchmark] (+ [MemoryDiagnoser], [Params]), run in Release, let BDN manage warmup/iterations, and compare means, p95, allocations. Use a baseline method for relative results.
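A sketch of such a benchmark class (assumes the BenchmarkDotNet package; names and the scenario are illustrative):

```csharp
using BenchmarkDotNet.Attributes;
using System.Text;

[MemoryDiagnoser]                 // report allocations alongside timings
public class StringJoinBenchmarks
{
    [Params(10, 1000)]            // each benchmark runs once per parameter value
    public int N;

    [Benchmark(Baseline = true)]  // other results are shown relative to this
    public string Concat()
    {
        var s = "";
        for (int i = 0; i < N; i++) s += i;
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append(i);
        return sb.ToString();
    }
}

// Entry point: BenchmarkRunner.Run<StringJoinBenchmarks>(); — Release build only.
```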
  • What’s the difference between micro-benchmarking and macro/system-level benchmarking?

    • Answer: Micro: isolates a method/algorithm (no I/O). Macro: end-to-end scenarios including I/O, network, DB—validates real throughput/latency and interactions.
  • How do you ensure benchmarks are reliable and reproducible?

    • Answer: Run on a quiet machine/VM, pin CPU, disable turbo/power saving, use fixed data sets, multiple iterations, and commit env/config with results. Avoid measuring JIT warmup by letting BDN handle it.
  • How do you profile async/await methods for performance impact?

    • Answer: Use profilers with async call chain views; check context capture overhead, state-machine allocations, and excessive task hopping. Consider ConfigureAwait(false) in libraries and batch/parallelize wisely.
  • How do you design realistic load test scenarios that match production traffic?

    • Answer: Model arrival rates, traffic mix, payload sizes, and think times from prod telemetry; include cache warm/cold, spikes, background jobs, and failure injection. Validate against SLOs.

3. .NET Optimization Techniques

  • How do you minimize boxing/unboxing overhead in .NET?

    • Answer: Use generics and generic collections instead of non-generic APIs (ArrayList, object-typed parameters), keep value types out of object-typed call sites (logging and string.Format arguments), and use spans and interpolated string handlers to format without boxing.
  • When would you use Span<T>, Memory<T>, or ValueTask for performance?

    • Answer: Span<T> for fast stack-only slicing/parsing; Memory<T> when data must live on the heap or cross async boundaries; ValueTask when results are often synchronous and you need to reduce task allocations (use carefully).
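An illustrative Span<T> example: summing comma-separated integers where each field is a stack-only slice of the input, with no substring allocations:

```csharp
using System;

static class CsvSum
{
    // Parses "1,2,3" by slicing the input span field by field.
    public static int Sum(ReadOnlySpan<char> line)
    {
        int total = 0;
        while (!line.IsEmpty)
        {
            int comma = line.IndexOf(',');
            ReadOnlySpan<char> field = comma < 0 ? line : line.Slice(0, comma);
            total += int.Parse(field); // int.Parse has a span overload, no string needed
            line = comma < 0 ? ReadOnlySpan<char>.Empty : line.Slice(comma + 1);
        }
        return total;
    }
}
```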
  • What are the trade-offs of using structs vs classes?

    • Answer: Structs avoid GC and can be embedded, but copying large/mutable structs is costly and they can’t inherit. Classes allocate on the heap, add GC pressure, but support inheritance/references. Prefer small, immutable structs.
  • How do you avoid excessive LINQ allocations and optimize queries?

    • Answer: In hot paths, use for loops or span-based APIs, pre-size collections, avoid .ToList()/.ToArray() unnecessarily, and prevent closure captures. For EF, keep LINQ server-side (no premature .AsEnumerable()).
  • How do you reduce string allocations in high-performance scenarios (e.g., StringBuilder, pooling)?

    • Answer: Use StringBuilder for concatenation in loops, string.Create/Span<char> for formatting, cache common strings, and borrow buffers with ArrayPool<char>. Avoid repeated ToString() and unnecessary culture conversions.
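A sketch of borrowing a pooled buffer with ArrayPool<char> (the helper and its hex scenario are illustrative):

```csharp
using System;
using System.Buffers;

static class Hex
{
    // Formats bytes as hex using a pooled char buffer instead of per-call arrays.
    public static string ToHex(byte[] data)
    {
        char[] buffer = ArrayPool<char>.Shared.Rent(data.Length * 2);
        try
        {
            for (int i = 0; i < data.Length; i++)
            {
                buffer[2 * i] = "0123456789abcdef"[data[i] >> 4];
                buffer[2 * i + 1] = "0123456789abcdef"[data[i] & 0xF];
            }
            return new string(buffer, 0, data.Length * 2); // single final allocation
        }
        finally
        {
            ArrayPool<char>.Shared.Return(buffer); // always return, even on failure
        }
    }
}
```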

4. Database Performance

  • How do you analyze slow queries in SQL Server or PostgreSQL?

    • Answer: Capture top offenders (SQL Server Query Store / SET STATISTICS IO, TIME; Postgres pg_stat_statements), inspect execution plans (EXPLAIN [ANALYZE] [BUFFERS]), check missing/unused indexes, hot scans, sort/hash spills, and fix with indexing, rewrites, or stats updates.
  • What’s your approach to indexing strategies for OLTP systems?

    • Answer: Favor narrow, selective indexes on predicates, covering indexes (INCLUDE), and composite indexes ordered by filter → sort. Limit write overhead (avoid over-indexing), maintain stats, consider clustered/PK choice, fillfactor, and partial/filtered indexes (PG).
  • How do you avoid N+1 query problems in EF Core/NHibernate?

    • Answer: Use eager loading (Include/ThenInclude), projections (Select DTOs), batch fetching (NH batch-size), or split queries (AsSplitQuery). Disable blanket lazy loading; load precisely what you need.
  • What’s the difference between eager loading, lazy loading, and explicit loading, performance-wise?

    • Answer: Eager: fewer round-trips but can over-fetch. Lazy: fetches on access—risk of N+1. Explicit: on-demand, controlled loading—good balance when used selectively.
  • How do you implement caching layers to reduce DB load?

    • Answer: Apply cache-aside with Redis for shared results, short TTL + jitter, ETag/version keys, and invalidation on writes (events/outbox). Use per-request in-memory caching for repeated lookups and precompute hot aggregates.
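A minimal in-process cache-aside sketch with TTL plus jitter (illustrative; a shared cache would back this with Redis but follow the same pattern):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Cache-aside: check the cache, on a miss load from the source of truth and store.
sealed class CacheAside<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, (TValue Value, DateTime Expires)> _cache = new();
    private readonly Func<DateTime> _now;
    private readonly TimeSpan _ttl;
    private readonly Random _rng = new();

    public CacheAside(TimeSpan ttl, Func<DateTime>? now = null)
    { _ttl = ttl; _now = now ?? (() => DateTime.UtcNow); }

    public async Task<TValue> GetOrLoadAsync(TKey key, Func<TKey, Task<TValue>> load)
    {
        if (_cache.TryGetValue(key, out var entry) && entry.Expires > _now())
            return entry.Value;                       // cache hit

        TValue value = await load(key);               // miss: hit the source of truth
        // Jitter the TTL so a burst of entries doesn't all expire at once.
        var expires = _now() + _ttl + TimeSpan.FromMilliseconds(_rng.Next(0, 250));
        _cache[key] = (value, expires);
        return value;
    }

    public void Invalidate(TKey key) => _cache.TryRemove(key, out _); // call on writes
}
```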

5. API & Service Performance

  • How do you design APIs for low-latency and high-throughput?

    • Answer: Keep services stateless, minimize payloads, use async I/O, connection pooling, and HTTP/2/gRPC where apt. Avoid chatty calls (batch), leverage caching, and tune data access and serialization.
  • What’s your approach to request batching and pagination?

    • Answer: Provide bulk endpoints for many-item ops; for reads use cursor/continuation pagination (stable, scalable), limit page size, and avoid offset on large datasets.
  • How do you apply caching (output caching, in-memory, Redis) to APIs?

    • Answer: Output cache idempotent GETs (vary by auth/params); in-memory for small, hot data per instance; Redis for cross-instance caching and rate limits. Prevent stampedes (locks/jitter) and use ETag/If-None-Match.
  • How do you reduce serialization overhead (System.Text.Json vs Newtonsoft.Json vs Protobuf)?

    • Answer: Prefer System.Text.Json with source generators, ignore nulls, and precomputed encoders. For internal high-throughput, use Protobuf (binary, schema). Avoid expensive converters and deep object graphs.
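A System.Text.Json source-generation sketch — serializer metadata is compiled ahead of time instead of built via reflection at runtime; `OrderDto` and `AppJsonContext` are illustrative names:

```csharp
// The generator emits the serialization logic for every type listed
// in [JsonSerializable] into the partial context class at build time.
[JsonSourceGenerationOptions(DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull)]
[JsonSerializable(typeof(OrderDto))]
public partial class AppJsonContext : JsonSerializerContext { }

public record OrderDto(Guid Id, decimal Total, string? Note);

// Usage: pass the generated type info instead of relying on reflection.
var json = JsonSerializer.Serialize(order, AppJsonContext.Default.OrderDto);
var back = JsonSerializer.Deserialize(json, AppJsonContext.Default.OrderDto);
```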
  • How do you design APIs to handle traffic spikes gracefully?

    • Answer: Implement rate limiting/token buckets, autoscale (HPA/KEDA), backpressure and queueing, fast fail/429 with Retry-After, degraded modes (serve cache), and maintain headroom via canary + rapid rollback.

6. Distributed Systems & Scaling

  • How do you identify hot spots in a microservices architecture?

    • Answer: Instrument with OpenTelemetry and RED/USE metrics, build service dependency maps, analyze p95/p99 latency, queue lag, and error rates; drill into spans to find chatty calls, slow DB queries, and lock/contention hotspots.
  • What’s your approach to horizontal vs vertical scaling decisions?

    • Answer: Horizontal for stateless services and bursty load (better resilience/cost at scale). Vertical for single-threaded or stateful bottlenecks where scaling out is hard. Use load tests + cost curves to choose; externalize session/state to enable horizontal.
  • How do you design for idempotency and backpressure in distributed systems?

    • Answer: Accept idempotency keys (e.g., an Idempotency-Key header), store processed request IDs, and make handlers idempotent; apply bounded queues, token-bucket rate limits, circuit breakers, and shed non-critical work on overload.
  • What strategies do you use to reduce queue latency in Service Bus/RabbitMQ?

    • Answer: Increase consumer concurrency/prefetch, keep handlers fast (ack quickly), right-size messages, co-locate producers/consumers, partition/shard hot streams, and autoscale with KEDA based on backlog/lag.
  • How do you balance consistency vs availability (CAP theorem trade-offs)?

    • Answer: Choose CP when correctness matters (e.g., money moves), AP with eventual consistency for high availability (caches, feeds). Use tolerant readers, sagas, and compensations to reconcile.

7. Cloud Performance Optimization

  • How do you monitor and optimize Azure App Service or AKS performance?

    • Answer: Use Application Insights + Container/AKS Insights, track CPU/mem, p95 latency, errors, and pod restarts. Tune requests/limits, connection pools, and enable HTTP/2, ready/liveness probes, and caching.
  • How do you configure autoscaling policies effectively?

    • Answer: Define min/max bounds, scale on leading indicators (queue length, RPS, CPU) with HPA/KEDA or App Service autoscale; set cool-downs, and validate with load tests to avoid oscillation.
  • How do you optimize cold start performance in Azure Functions?

    • Answer: Use Premium plan with pre-warmed instances, trim dependencies/startup work, enable ReadyToRun and trimming, cache clients (e.g., HttpClient), and warm via scheduled pings.
  • How do you reduce egress costs and latency in cloud deployments?

    • Answer: Co-locate services and data, use Private Link/VNet peering, compress payloads, serve static assets via CDN, and avoid cross-region chatty calls; replicate data closer to users.
  • How do you measure and optimize SLA/SLO compliance for cloud apps?

    • Answer: Define SLIs (availability, latency, error rate), set SLOs, monitor error-budget burn, and alert on fast/slow burns. Prioritize reliability work when budgets deplete.

8. Frontend Performance

  • How do you improve Angular/Blazor load times (bundling, tree-shaking, AOT, lazy loading)?

    • Answer: Enable prod builds (AOT/optimization/treeshake), code split & lazy-load routes, prefetch critical chunks, optimize images/fonts, and remove unused polyfills.
  • How do you optimize rendering performance (trackBy, virtual scrolling, change detection strategy)?

    • Answer: Use ChangeDetectionStrategy.OnPush, trackBy with *ngFor, virtual scrolling, memoized selectors, and batch DOM updates; in Blazor, minimize re-render scope and use ShouldRender when needed.
  • What’s your approach to image optimization (responsive images, WebP, CDN)?

    • Answer: Serve responsive images with srcset/sizes, modern formats (WebP/AVIF), lazy-load below-the-fold, and deliver via CDN with caching and resizing at the edge.
  • How do you measure and improve Core Web Vitals (LCP, FID, CLS)?

    • Answer: Measure with Web Vitals/RUM + Lighthouse. Improve LCP via critical CSS and optimized hero media; FID (superseded by INP as the interaction-latency metric) by reducing JS work and splitting bundles; CLS by reserving space for media/ads.
  • How do you handle state management performance issues (NgRx, Fluxor)?

    • Answer: Normalize state, use memoized selectors, avoid large global updates, split stores by feature, and debounce noisy streams; in Blazor/Fluxor, select minimal slices to re-render.

9. Best Practices

  • How do you decide between premature optimization vs necessary performance work?

    • Answer: Measure first; fix issues that breach SLOs or drive costs. Defer micro-optimizations until profiling proves benefit; optimize hotspots, not guesses.
  • How do you define and monitor performance KPIs (latency, throughput, memory usage)?

    • Answer: Set KPI targets (e.g., p95 latency, RPS, RAM/CPU, GC stats) from business goals; monitor with dashboards and alerts tied to error budgets.
  • How do you embed performance testing in CI/CD pipelines?

    • Answer: Run smoke perf tests on PRs, baseline tests nightly, and regression gates before release; publish trends and fail on significant regressions.
  • How do you evaluate trade-offs between simplicity vs performance hacks?

    • Answer: Compare maintainability vs measured gain; if a hack yields small benefit and increases complexity/risk, prefer clarity. Document decisions via ADRs.
  • Can you share a real-world example of a performance issue you solved and its impact?

    • Answer: Example: replacing reflection-heavy JSON with System.Text.Json source generators and batching DB writes reduced p95 latency by ~30% and cut CPU/allocations—validated by A/B canary and profiler data.

🔹 Architecture & Best Practices

1. Core Principles

  • What are the SOLID principles? Can you give a practical example of each?

    • Answer:
      • **S**ingle Responsibility: one reason to change (e.g., InvoicePrinter separate from InvoiceCalculator).
      • **O**pen/Closed: extend without modifying (strategy plugin for new tax rules).
      • **L**iskov Substitution: subtype must be usable as base (no throwing NotSupported in overrides).
      • **I**nterface Segregation: small, client-specific interfaces (IReadableStream, IWritableStream vs one fat IStream).
      • **D**ependency Inversion: depend on abstractions (domain uses IPaymentGateway, infra provides Stripe/PayPal).
  • How do you apply Separation of Concerns (SoC) in large solutions?

    • Answer: Split by layers (Domain, Application, API/UI, Infrastructure) and by bounded context; isolate cross-cutting via middleware/aspects (logging, auth); use modules/packages with clear interfaces and DI.
  • What’s the difference between KISS, DRY, and YAGNI principles?

    • Answer: KISS: keep designs simple/obvious. DRY: avoid duplicating knowledge—extract shared logic. YAGNI: don’t build until needed—delay features/abstractions without a proven use.
  • How do you balance simplicity vs. over-engineering in design?

    • Answer: Start with the simplest design that meets requirements, measure/learn, and extract abstractions only when duplication/volatility appears. Use ADRs to justify complexity with measurable benefits.
  • What’s your approach to technical debt management?

    • Answer: Make debt visible (tickets, Sonar reports), prioritize by risk/user impact, reserve capacity each sprint, enforce “no new debt on new code”, and schedule refactors tied to features.

2. Domain-Driven Design (DDD)

  • What are the building blocks of DDD (entities, value objects, aggregates, repositories, bounded contexts)?

    • Answer: Entities have identity and lifecycle; Value Objects are immutable by value; Aggregates group entities with an Aggregate Root enforcing invariants; Repositories persist aggregates; Bounded Contexts define clear model boundaries and ubiquitous language.
  • What’s the difference between a domain service and an application service?

    • Answer: Domain services hold domain logic that doesn’t fit an entity/value object (pure business rules). Application services orchestrate use cases, transactions, and I/O (call domain, map DTOs).
  • How do you identify bounded contexts in a system?

    • Answer: Use event storming and ubiquitous language to find vocabulary splits, ownership, and differing rules; align with org/team boundaries and minimize cross-context coupling.
  • How would you implement aggregate roots and invariants?

    • Answer: Keep invariants inside the aggregate; expose behaviors (methods) not setters; commit changes atomically per aggregate/transaction; size aggregates around consistency needs, not query shapes.
  • How do you manage domain events and their propagation?

    • Answer: Raise events from aggregates, dispatch in the application layer, and persist/publish with the Outbox pattern. Handle cross-context reactions asynchronously for eventual consistency.

3. Clean, Hexagonal & Onion Architecture

  • Can you outline the layers of Clean Architecture and what belongs in each?

    • Answer:
      • Domain (Entities/Value Objects): pure business rules, no dependencies.
      • Application (Use Cases): orchestrates domain, ports/interfaces, DTOs.
      • Interface Adapters: controllers, presenters, mappers, repo implementations.
      • Frameworks & Drivers (Infrastructure): DB, messaging, web, external APIs.
  • How do you prevent business logic leakage into infrastructure?

    • Answer: Keep domain framework-free, define ports in core, implement adapters in infra, enforce mapping at the edges, and test domain in isolation.
  • What is the Hexagonal (Ports & Adapters) architecture, and how does it compare to Onion?

    • Answer: Hexagonal defines input/output ports with adapters around the core; Onion layers dependencies inward. Both enforce inversion so the domain doesn’t depend on frameworks—Hexagonal is more explicit about ports.
  • How do you enforce dependency inversion in .NET projects?

    • Answer: Use separate projects (Domain → Application → API/Infra references only inward), define interfaces in core, implement in infra, wire with DI. Add architecture tests (NetArchTest), analyzers, and CI checks.
  • What’s your approach to shared kernel vs. bounded context isolation?

    • Answer: Keep a tiny, stable shared kernel (e.g., primitives, base value types); avoid sharing rich domain models/databases. Integrate contexts via events/contracts, not shared tables, to preserve autonomy.

4. Event-Driven Architecture (EDA)

  • What’s the difference between commands, events, and queries?

    • Answer: Commands tell a system to do something (imperative, may fail, exactly one target). Events state that something happened (past-tense, immutable, 0..N subscribers). Queries ask for data and must be side-effect free.
  • How do you design event buses for microservices?

    • Answer: Use a durable broker (Service Bus/RabbitMQ/Kafka), define topic per event type, strong schemas, Outbox for atomic publish, correlation/causation IDs, retries + DLQs, idempotent consumers, and full observability (traces/metrics).
  • What are the trade-offs of eventual consistency?

    • Answer: Pros: decoupling, scalability, high availability. Cons: stale reads, complex debugging, ordering challenges. Mitigate with tolerant readers, compensations/sagas, and clear UX/SLAs around consistency.
  • How do you evolve event schemas without breaking consumers?

    • Answer: Favor additive changes, keep old fields, default new ones. Use versioned topics or in-message schemaVersion, a schema registry, and deprecate with long sunset windows.
  • How do you handle idempotency and deduplication in event-driven systems?

    • Answer: Include a message ID/natural key, keep a processed table/inbox with exactly-once effects, use Outbox on publish, and guard state changes with optimistic concurrency or sequence numbers.

5. Resilience & Fault Tolerance

  • What’s the difference between retry, circuit breaker, and bulkhead patterns?

    • Answer: Retry handles transient faults (backoff + jitter). Circuit breaker stops calling a failing dependency temporarily. Bulkhead isolates resources (threads/connections) to contain failure.
  • How do you implement resilience with Polly in .NET?

    • Answer: Compose PolicyWrap of TimeoutPolicy + RetryPolicy (expo backoff) + CircuitBreaker + Fallback/Bulkhead. Register via HttpClientFactory named clients; add hedging/rate limit if needed.
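A registration sketch with Polly via HttpClientFactory (Microsoft.Extensions.Http.Polly); the client name and backoff numbers are illustrative. Handlers apply outermost-first, so the timeout here bounds the retries and breaker inside it:

```csharp
builder.Services.AddHttpClient("payments",
        c => c.BaseAddress = new Uri("https://payments.internal"))
    // overall timeout for the whole call, including retries
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10)))
    // retry transient failures (5xx, 408, HttpRequestException) with expo backoff
    .AddPolicyHandler(HttpPolicyExtensions.HandleTransientHttpError()
        .WaitAndRetryAsync(3, attempt =>
            TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt))))
    // stop hammering a failing dependency: open after 5 consecutive failures
    .AddPolicyHandler(HttpPolicyExtensions.HandleTransientHttpError()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));
```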
  • How do you design APIs to gracefully degrade under load?

    • Answer: Serve cached/stale responses, reduce payloads, disable non-critical features via feature flags, prioritize critical paths, return 429 + Retry-After, and shed work early at the edge.
  • How do you implement timeouts, cancellation tokens, and fail-fast strategies?

    • Answer: Set per-dependency timeouts (Polly/HttpClient), flow CancellationToken end-to-end, abort on client disconnect, and fail fast via circuit breakers/backpressure when queues grow.
  • What’s your approach to chaos engineering in distributed systems?

    • Answer: Form a hypothesis, limit blast radius, run faults (CPU/network/pod kill) in lower envs first (e.g., Azure Chaos Studio), watch SLOs and rollback criteria, then automate periodic drills.

6. Scalability & Reliability

  • What’s the difference between horizontal vs. vertical scaling?

    • Answer: Horizontal adds instances (better resilience/cost elasticity). Vertical adds CPU/RAM to one node (quick but limited, single-box risk). Prefer horizontal for stateless services.
  • How do you design for stateless services?

    • Answer: Externalize state (DB/Redis), no local session/files, 12-factor config, idempotent handlers, health/readiness probes, and safe restarts/rolling updates.
  • What’s your approach to multi-region active/active vs. active/passive deployments?

    • Answer: Active/active: traffic in multiple regions, needs data replication/conflict resolution. Active/passive: cheaper, simpler failover. Use Front Door/Traffic Manager, test runbooks, and align with RTO/RPO.
  • How do you handle database sharding and partitioning?

    • Answer: Pick a good shard key (uniform, locality), choose range/hash/consistent hashing, avoid cross-shard transactions, keep a routing map, and plan re-sharding/hot-key mitigation.
  • How do you design systems to handle traffic spikes (e.g., Black Friday load)?

    • Answer: Queue-based load leveling, aggressive caching/CDN, autoscaling (HPA/KEDA), rate limiting and token buckets, pre-warm instances, and degraded modes for non-essential features.

7. API & Service Design

  • How do you decide between REST, gRPC, and GraphQL for a service?

    • Answer: REST for broad compatibility, caching, and simple CRUD; gRPC for low-latency, typed, internal microservice RPC and streaming; GraphQL when clients need flexible shapes/over-fetch reduction. Often: REST external, gRPC internal, GraphQL for complex UIs.
  • How do you implement API versioning?

    • Answer: Version via URL (/v1), header (api-version), or content negotiation; deprecate with headers and docs. In ASP.NET Core, use API Versioning package, route constraints, and compat shims while sunsetting old versions.
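A sketch with the Asp.Versioning package (formerly Microsoft.AspNetCore.Mvc.Versioning), combining URL-segment and header readers; controller and route names are illustrative:

```csharp
builder.Services.AddApiVersioning(opt =>
{
    opt.DefaultApiVersion = new ApiVersion(1, 0);
    opt.AssumeDefaultVersionWhenUnspecified = true;
    opt.ReportApiVersions = true;                 // emits api-supported-versions headers
    opt.ApiVersionReader = ApiVersionReader.Combine(
        new UrlSegmentApiVersionReader(),         // /api/v1/orders
        new HeaderApiVersionReader("api-version"));
});

[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
public class OrdersController : ControllerBase { /* ... */ }
```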
  • How do you secure APIs (authN, authZ, rate limiting)?

    • Answer: AuthN: OAuth2/OIDC → JWT access tokens; AuthZ: policy/claims/roles + resource checks; Rate limiting: token/leaky bucket at gateway and app (ASP.NET Rate Limiting/APIM policies). Add mTLS for service-to-service, and validate scopes/claims per endpoint.
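A token-bucket rate-limit sketch with the built-in ASP.NET Core (7+) middleware; limits and the policy name are illustrative:

```csharp
builder.Services.AddRateLimiter(opt =>
{
    opt.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    opt.AddTokenBucketLimiter("api", o =>
    {
        o.TokenLimit = 100;                           // burst capacity
        o.TokensPerPeriod = 50;                       // refill rate
        o.ReplenishmentPeriod = TimeSpan.FromSeconds(1);
        o.QueueLimit = 0;                             // fail fast, don't queue
    });
});

var app = builder.Build();
app.UseRateLimiter();
app.MapControllers().RequireRateLimiting("api");      // apply the named policy
```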
  • What’s the role of an API Gateway in microservices? Which have you used (YARP, Ocelot, APIM)?

    • Answer: Central routing, auth, TLS, throttling, transforms, observability and developer portal. YARP for high-perf .NET reverse proxy, Ocelot for simple .NET gateways, APIM for enterprise governance, products, and analytics.
  • How do you design backend-for-frontend (BFF) patterns?

    • Answer: A per-experience API that aggregates/reshapes data, handles session/feature flags, and shields the UI from backend churn. Keep thin logic, cache smartly, and enforce strict boundary to avoid leaking domain complexity to the client.

8. Security & Compliance

  • What’s the difference between OAuth2 and OpenID Connect?

    • Answer: OAuth2: delegated authorization (who can access what). OpenID Connect: identity layer on top of OAuth2 providing ID tokens and user profile—i.e., authentication.
  • How do you manage secrets and certificates in a distributed system?

    • Answer: Store in Key Vault (not code), access via managed identity/OIDC; use short-lived tokens, rotate automatically, and distribute to K8s via CSI driver. Automate cert issuance/rotation and enforce mTLS where needed.
  • How do you prevent OWASP Top 10 vulnerabilities (XSS, SQL injection, CSRF)?

    • Answer: XSS: encode/escape output, CSP, avoid innerHTML. SQLi: parameterized queries/ORM. CSRF: same-site cookies, anti-forgery tokens. Also validate inputs, secure headers, dependency scanning, and least privilege to data.
  • How do you enforce least privilege and RBAC in services?

    • Answer: Define scoped roles/permissions per resource, issue least-scope tokens, lock down network paths, and use ABAC/claims for fine-grained checks. Review grants regularly and audit every access.
  • What’s your approach to compliance requirements (GDPR, HIPAA, SOC2)?

    • Answer: Data minimization, encryption in transit/at rest, access controls/auditing, retention & right-to-erasure workflows, vendor BAAs (HIPAA), change management, incident response, and documented controls with periodic evidence.

9. Observability & Governance

  • How do you implement centralized logging, tracing, and metrics?

    • Answer: Use structured logs (Serilog), OpenTelemetry for traces/metrics/logs, propagate W3C trace context, and export to App Insights/Prometheus+Grafana. Correlate logs with trace/span IDs.
  • What’s your experience with OpenTelemetry in .NET?

    • Answer: Add OTel SDK, instrument ASP.NET Core/HttpClient/SQL, set service.name, export via OTLP. Use resource attributes, sampling, baggage for tenant IDs, and dashboards/alerts on span/metric data.
  • How do you define and enforce SLOs and SLAs?

    • Answer: Choose SLIs (availability, p95/p99 latency, error rate), set SLOs, monitor error-budget burn, and gate releases on budgets. SLAs are contractual; align alerting/runbooks to SLOs.
  • What’s the role of Architecture Decision Records (ADRs)?

    • Answer: Lightweight docs capturing context, options, decision, consequences. They create a searchable history for audits/onboarding and prevent re-litigating choices.
  • How do you design governance models for microservices teams?

    • Answer: Provide a paved road: templates, libraries, and policies (security/observability). Use fitness functions in CI to enforce standards, a small architecture guild for guidance (not gatekeeping), and autonomy within guardrails.

10. Advanced Topics

  • How do you apply CQRS in real-world systems?

    • Answer: Split commands (write model, invariants) from queries (read model/projections). Use when write rules are complex or read scaling differs; accept eventual consistency between models.
  • What’s the Saga pattern and when do you use orchestration vs. choreography?

    • Answer: A saga coordinates a multi-step business transaction with compensations. Orchestration: central brain (easier visibility). Choreography: services react to events (looser coupling). Choose by complexity/visibility needs.
  • How do you design systems for multi-tenancy?

    • Answer: Decide isolation: shared DB + tenant key, schema per tenant, or DB per tenant. Enforce tenant in tokens/filters, partition data, apply per-tenant rate limits/quotas, and isolate secrets/config.
  • What’s your experience with the actor model (e.g., Orleans, Akka.NET)?

    • Answer: Actors encapsulate state + behavior with single-threaded message processing. Orleans virtual actors (“grains”) simplify concurrency and fan-out. Great for IoT, gaming, workflows—be mindful of hot keys and placement.
  • How do you align architecture with business capabilities?

    • Answer: Map services to bounded contexts/capabilities, apply Team Topologies (stream-aligned teams), and practice reverse-Conway by structuring code/org around the domain. Tie architecture choices to measurable business outcomes (OKRs/SLOs).

🔹 Software Engineering Practices & Design Patterns

1. Creational Patterns

  • What is the Factory Method pattern? Can you implement it in C# for creating database repositories?
    • Answer: Factory Method lets subclasses decide which concrete type to create, returning an interface/abstract type. It centralizes creation and keeps clients unaware of specifics.
public interface IRepository<T> { Task<T?> GetAsync(Guid id); }
public class SqlRepo<T> : IRepository<T> { /* ... */ }
public class MongoRepo<T> : IRepository<T> { /* ... */ }

public abstract class RepoFactory {
    public abstract IRepository<T> Create<T>();
}
public class SqlRepoFactory : RepoFactory {
    public override IRepository<T> Create<T>() => new SqlRepo<T>();
}
public class MongoRepoFactory : RepoFactory {
    public override IRepository<T> Create<T>() => new MongoRepo<T>();
}
// usage: var repo = factory.Create<Order>();
  • How does the Abstract Factory differ from the Factory Method?

    • Answer: Factory Method creates one product via subclass override. Abstract Factory creates families of related products via a set of factory methods on one object (composition), keeping products consistent.
  • When would you use the Singleton pattern? How do you make it thread-safe in C#?

    • Answer: Use for a single, shared, stateless and expensive-to-create service (e.g., configuration provider). Prefer DI singletons. Thread-safe via Lazy<T> or double-checked locking.
public sealed class Config {
    private Config() { }
    public static Config Instance => _lazy.Value;
    private static readonly Lazy<Config> _lazy = new(() => new Config(),
                                                     System.Threading.LazyThreadSafetyMode.ExecutionAndPublication);
}
  • What’s the difference between Builder and Factory patterns?

    • Answer: Factory chooses which concrete type to create and returns it fully built. Builder constructs a complex object step-by-step, allowing different configurations/representations before Build().
  • How would you use the Prototype pattern to clone objects in .NET?

    • Answer: Copy from an existing instance instead of re-creating. Use MemberwiseClone() for shallow copies, custom Clone()/copy constructors for deep copies; in modern C#, records support with cloning for immutables.
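A minimal sketch of record-based cloning with `with` (a shallow copy with selected members overridden); `Theme` is an illustrative type:

```csharp
var dark = new Theme("Dark", "#BB86FC");
var darkBlue = dark with { Accent = "#2196F3" };      // clone, then override one member

Console.WriteLine(darkBlue.Name);                     // Dark — copied from the prototype
Console.WriteLine(ReferenceEquals(dark, darkBlue));   // False — a new instance

public record Theme(string Name, string Accent);
```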

2. Structural Patterns

  • What is the Adapter pattern? How would you use it to integrate with a third-party API?
    • Answer: Adapter converts a third-party interface to your app’s expected interface.
public interface IPaymentGateway { Task PayAsync(Money m); }
// Third-party: StripeClient.ChargeAsync(...)
public class StripeAdapter : IPaymentGateway {
    private readonly StripeClient _client;
    public StripeAdapter(StripeClient client) => _client = client;
    public Task PayAsync(Money m) => _client.ChargeAsync(new Charge { Amount = m.Cents });
}
  • Explain the Decorator pattern. How would you use it for logging or caching cross-cutting concerns?
    • Answer: Decorator wraps a component to add behavior without changing the original type.
public class CachingRepo<T> : IRepository<T> {
    private readonly IRepository<T> _inner; private readonly ICache _cache;
    public CachingRepo(IRepository<T> inner, ICache cache){ _inner = inner; _cache = cache; }
    public async Task<T?> GetAsync(Guid id) =>
        await _cache.GetOrAddAsync(id.ToString(), () => _inner.GetAsync(id));
}
// Similarly: LoggingRepo<T> logs before/after delegating to _inner
  • What is the Proxy pattern? How is it used in lazy-loading scenarios in EF Core or NHibernate?

    • Answer: Proxy is a stand-in that controls access (lazy load, security, caching). ORMs generate dynamic proxies for entities; accessing a navigation property triggers deferred DB fetch.
  • When would you apply the Composite pattern (e.g., UI hierarchies)?

    • Answer: When modeling tree structures where leaves and groups should be treated uniformly (menus, UI controls, file systems). Operations apply recursively to composites.
  • What’s the difference between Facade and Adapter?

    • Answer: Facade provides a simplified high-level API over a subsystem you own. Adapter converts an incompatible external API to a target interface—focused on compatibility.

3. Behavioral Patterns

  • How does the Strategy pattern differ from simple polymorphism? Give an example in a payment system.

    • Answer: Strategy encapsulates interchangeable algorithms and selects one at runtime via composition, not inheritance hierarchy of the caller. Example: IPaymentStrategy with PayPalStrategy, CardStrategy, chosen per user/region via DI.
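A minimal sketch of that payment example — the processor is composed with an algorithm chosen at runtime rather than subclassing the caller; all names are illustrative:

```csharp
// The strategy could equally come from DI keyed by user/region.
var processor = new PaymentProcessor(new CardStrategy());
Console.WriteLine(processor.Pay(25m));                // Charged 25 via card

public interface IPaymentStrategy { string Pay(decimal amount); }

public class CardStrategy : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Charged {amount} via card";
}

public class PayPalStrategy : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Charged {amount} via PayPal";
}

public class PaymentProcessor
{
    private readonly IPaymentStrategy _strategy;       // composition, not inheritance
    public PaymentProcessor(IPaymentStrategy strategy) => _strategy = strategy;
    public string Pay(decimal amount) => _strategy.Pay(amount);
}
```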
  • Explain the Observer pattern. How does it relate to event-driven architecture (C# events, SignalR)?

    • Answer: Observer lets subscribers react to subject changes (publish–subscribe). In .NET: event/delegate or IObservable<T>. SignalR broadcasts server events to many clients—the Observer pattern applied at scale.
  • What is the Mediator pattern? How is it used in CQRS with MediatR in .NET?

    • Answer: Mediator centralizes communication so components don’t talk directly. MediatR dispatches Commands/Queries/Notifications to handlers, decoupling controllers from business logic.
  • How does the Command pattern fit into CQRS?

    • Answer: Command encapsulates a state-changing request (name + data) with a handler. In CQRS, commands mutate the write model; they’re auditable, queueable, and retriable.
  • What’s the difference between Template Method and Strategy?

    • Answer: Template Method defines an algorithm skeleton in a base class with overridable steps (inheritance). Strategy provides pluggable algorithms via composition, chosen at runtime.
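A minimal Template Method sketch — the base class fixes the skeleton, subclasses override individual steps; the exporter types are illustrative:

```csharp
var report = new CsvExporter().Export();
Console.WriteLine(report);                            // header|a,b,c|footer

public abstract class Exporter
{
    // The skeleton is fixed here; subclasses cannot reorder the steps.
    public string Export() => $"{Header()}|{Body()}|{Footer()}";
    protected abstract string Body();                 // required step
    protected virtual string Header() => "header";    // optional hooks
    protected virtual string Footer() => "footer";
}

public class CsvExporter : Exporter
{
    protected override string Body() => "a,b,c";
}
```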

4. Enterprise & .NET-Specific Practices

  • How do you apply the Repository and Unit of Work patterns in .NET?

    • Answer: Treat DbContext as Unit of Work (tracks changes/transactions). Repositories wrap DbSet<T> to expose intentful methods and hide EF queries. Commit with await context.SaveChangesAsync() inside a transaction when needed. Avoid over-abstracting—sometimes DbContext + queries is enough.
  • What is the Specification pattern, and why is it useful for persistence and querying?

    • Answer: Encapsulates query intent (filters, includes, ordering, paging) as a reusable object (e.g., Expression<Func<T,bool>>, includes). Promotes composability, testability, and consistency; repositories accept an ISpecification<T> and apply it to IQueryable<T> (e.g., Ardalis.Specification).
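A hand-rolled Specification sketch (simpler than Ardalis.Specification, same idea): the filter lives in one object that any repository can apply to an IQueryable; all names are illustrative:

```csharp
using System.Linq;
using System.Linq.Expressions;

var spec = new ActiveOrdersSpec(minTotal: 100m);
var orders = new[]
{
    new Order(true, 150m), new Order(false, 200m), new Order(true, 50m)
}.AsQueryable();

var matches = orders.Where(spec.Criteria).ToList();   // only the active 150m order
Console.WriteLine(matches.Count);                     // 1

public record Order(bool Active, decimal Total);

public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }       // translatable by EF providers
}

public class ActiveOrdersSpec : ISpecification<Order>
{
    public Expression<Func<Order, bool>> Criteria { get; }
    public ActiveOrdersSpec(decimal minTotal) =>
        Criteria = o => o.Active && o.Total >= minTotal;
}
```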
  • How do you apply the Decorator pattern with ASP.NET Core middleware?

    • Answer: Middleware wraps the next delegate—classic decorator for HttpContext. Example: log/timestamp → await next() → log status; or cache/short-circuit responses. Similarly, decorate services via DI (e.g., Scrutor) for cross-cuts like caching/retries.
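A minimal sketch of middleware as a decorator — work before and after the wrapped `next` delegate; the logging detail is illustrative:

```csharp
app.Use(async (ctx, next) =>
{
    var sw = Stopwatch.StartNew();
    await next();                                     // the decorated "inner component"
    sw.Stop();
    app.Logger.LogInformation("{Path} -> {Status} in {Ms} ms",
        ctx.Request.Path, ctx.Response.StatusCode, sw.ElapsedMilliseconds);
});
```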
  • How do Dependency Injection and Inversion of Control relate to design patterns?

    • Answer: IoC inverts who creates dependencies; DI is a concrete way to achieve IoC. DI enables patterns like Strategy, Decorator, Proxy via runtime composition and reduces the need for factories/singletons in application code.
  • How do you balance using patterns vs. avoiding over-engineering?

    • Answer: Start simple; add patterns only when duplication/volatility appears. Prefer language/framework features, measure impact, document with ADRs, and refactor incrementally—YAGNI over speculative abstractions.

5. Anti-Patterns & Best Practices

  • What is an anti-pattern? Can you give examples (God Object, Spaghetti Code, Golden Hammer)?

    • Answer: A common but counterproductive solution. God Object: one class does everything. Spaghetti Code: tangled, no structure. Golden Hammer: apply one tool/pattern to every problem regardless of fit.
  • What’s the danger of overusing the Singleton pattern?

    • Answer: Hidden global state, tight coupling, hard-to-test code, order-dependent bugs, and lifetime/threading issues. Prefer DI singletons or scoped services with clear lifetimes.
  • How do you avoid the Big Ball of Mud in large codebases?

    • Answer: Enforce boundaries (bounded contexts/modules), Clean/Hexagonal architecture, clear ownership, contracts (APIs/events), code reviews/linters, and automated tests; avoid sharing databases across services.
  • What’s your approach to refactoring legacy code to align with patterns?

    • Answer: Add characterization tests, introduce seams (facades/ports), inject dependencies, apply Strangler Fig to replace components gradually, and improve design stepwise (e.g., extract strategies/specifications).
  • How do you evaluate when not to use a design pattern?

    • Answer: If the cost > benefit, the variation is rare, or the framework already solves it; if it harms clarity or team comprehension. Choose the simplest solution that meets today’s needs and can evolve.

🔹 Security, Observability & Advanced Topics

1. Authentication & Authorization

  • What’s the difference between OAuth2 and OpenID Connect?

    • Answer: OAuth2 = authorization (access to resources). OpenID Connect (OIDC) = authentication layer on top of OAuth2 that issues ID tokens to prove user identity (plus user info endpoints/claims).
  • How would you implement authentication with OpenIddict in a .NET solution?

    • Answer: Use OpenIddict as your authorization server (or validation on APIs): configure endpoints, flows, scopes, and signing keys; issue JWT/DPoP tokens; validate in APIs via AddAuthentication().AddJwtBearer() or OpenIddictValidation.
builder.Services.AddOpenIddict()
    .AddCore(opt => opt.UseEntityFrameworkCore().UseDbContext<AuthDb>())
    .AddServer(opt => {
        opt.SetAuthorizationEndpointUris("/connect/authorize")
           .SetTokenEndpointUris("/connect/token")
           .AllowAuthorizationCodeFlow().AllowClientCredentialsFlow().AllowRefreshTokenFlow()
           .RegisterScopes("api.read","api.write")
           .AddDevelopmentEncryptionCertificate()
           .AddDevelopmentSigningCertificate()
           .UseAspNetCore();
    })
    .AddValidation(opt => opt.UseLocalServer().UseAspNetCore());
  • What are the advantages of using Azure AD B2C for identity management?

    • Answer: Hosted user flows/CIAM, federates social/enterprise identities, MFA, passwordless, scales globally, built-in compliance/security, custom policies/branding, and easy integration with OAuth2/OIDC.
  • What’s the difference between implicit, code, and client credentials flows?

    • Answer: Implicit (legacy SPA): tokens from auth endpoint (no refresh; not recommended). Authorization Code + PKCE: recommended for SPAs/mobile/web—secure token exchange via back channel. Client Credentials: app-to-app (no user), service principals/daemons.
  • How do you implement role-based (RBAC) and policy-based (PBAC) authorization in ASP.NET Core?

    • Answer: Add roles/claims to tokens, then:
builder.Services.AddAuthorization(opt => {
    opt.AddPolicy("CanEdit", p => p.RequireClaim("scope","posts.write"));
});
// Usage
[Authorize(Roles="Admin")]            // RBAC
[Authorize(Policy="CanEdit")]         // PBAC

Use IAuthorizationHandler for resource-based checks.
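A minimal resource-based handler sketch; the `Document` type and `SameAuthorRequirement` name are illustrative, not from a specific codebase:

```csharp
// Hypothetical requirement + resource types for illustration.
public sealed class SameAuthorRequirement : IAuthorizationRequirement { }

public sealed class DocumentAuthorHandler
    : AuthorizationHandler<SameAuthorRequirement, Document>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        SameAuthorRequirement requirement,
        Document resource)
    {
        // Succeed only when the current user owns the resource.
        if (context.User.Identity?.Name == resource.Author)
            context.Succeed(requirement);
        return Task.CompletedTask;
    }
}

// In a controller, via IAuthorizationService:
// var result = await _authService.AuthorizeAsync(User, doc, new SameAuthorRequirement());
```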


2. API & Application Security

  • How do you secure REST/gRPC APIs exposed to external clients?

    • Answer: TLS everywhere, OAuth2/OIDC JWTs (validate issuer/audience/sig/exp), scopes/claims per endpoint, rate limiting, input validation, and for service-to-service use mTLS/managed identities. For gRPC, pass tokens via metadata and enforce on the server.
  • How do you implement JWT validation and token refresh securely?

    • Answer: Validate with JwtBearer middleware (issuer, audience, lifetime, signature). Use short-lived access tokens + rotating refresh tokens (httpOnly, Secure, SameSite cookies), reuse detection & revocation, and bind tokens to client (PKCE/DPoP where applicable).
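A typical JwtBearer validation sketch; the authority and audience values are placeholders:

```csharp
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(o =>
    {
        o.Authority = "https://login.example.com"; // placeholder issuer
        o.Audience  = "orders-api";                // placeholder audience
        o.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            ClockSkew = TimeSpan.FromSeconds(30)   // tighten the 5-minute default skew
        };
    });
```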
  • How do you prevent Cross-Site Scripting (XSS) in Angular/Blazor apps?

    • Answer: Rely on Angular/Blazor auto-encoding, avoid innerHTML/MarkupString and unsafe bypass APIs, sanitize rich input, set CSP, and validate/encode on server.
  • What’s the difference between CORS and CSRF, and how do you mitigate each?

    • Answer: CORS controls which origins can call your API—mitigate by allowlisting origins/headers/methods and requiring credentials explicitly. CSRF is a forged cross-site request—mitigate with SameSite cookies, anti-forgery tokens, and avoiding unsafe cookie usage.
  • How do you secure sensitive data (PII) in logs and telemetry?

    • Answer: Don’t log PII by default; use allowlists and structured logging with masking/hash filters, encrypt at rest/in transit, restrict access via RBAC, and enable data retention and deletion workflows.

3. OWASP Top 10 Mitigations

  • How would you protect against SQL Injection in .NET (EF Core/NHibernate)?

    • Answer: Use parameterized queries/LINQ (no string concatenation), validate inputs, least-privilege DB users, and review any raw SQL to ensure it uses parameters only.
  • How do you prevent Insecure Deserialization in APIs?

    • Answer: Accept known DTOs only, avoid unsafe polymorphic deserialization; with Newtonsoft.Json keep TypeNameHandling=None; prefer System.Text.Json; validate and cap payload size.
  • How do you mitigate Broken Authentication issues?

    • Answer: Use OIDC providers, MFA, strong password and lockout policies, secure cookies (HttpOnly/Secure/SameSite), session timeout/refresh rotation, and monitor for credential stuffing with rate limits.
  • How do you secure apps against XXE (XML External Entity) attacks?

    • Answer: Disable DTDs and external entities:
var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Prohibit, XmlResolver = null };

Use safe parsers and avoid processing untrusted XML.

  • How do you implement rate limiting and throttling to prevent abuse (DoS, brute force)?
    • Answer: Apply token/fixed window limits at gateway (APIM/NGINX) and app (ASP.NET Rate Limiting middleware), return 429 with Retry-After, and add CAPTCHA/backoff for auth endpoints; scale with queues and KEDA where relevant.
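A sketch using the built-in ASP.NET Core rate limiter (.NET 7+); the policy name, limits, and `LoginHandler` are illustrative:

```csharp
builder.Services.AddRateLimiter(o =>
{
    o.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    o.AddFixedWindowLimiter("auth", w =>
    {
        w.PermitLimit = 5;                  // 5 attempts...
        w.Window = TimeSpan.FromMinutes(1); // ...per one-minute window
        w.QueueLimit = 0;                   // reject rather than queue
    });
});

var app = builder.Build();
app.UseRateLimiter();
app.MapPost("/login", LoginHandler).RequireRateLimiting("auth");
```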

4. Monitoring & Observability

  • What’s the difference between logging, metrics, and tracing?

    • Answer: Logs = detailed, discrete events (debug/audit). Metrics = numeric time series (CPU, p95 latency) for trends/alerts. Traces = end-to-end request timelines across services (spans + context) for causality.
  • How do you configure OpenTelemetry in .NET microservices?

    • Answer: Add OTel SDK, set service/resource, enable ASP.NET Core/HttpClient/SQL instrumentation, export via OTLP to your backend.
builder.Services.AddOpenTelemetry()
  .ConfigureResource(r => r.AddService("orders-api"))
  .WithTracing(t => t
    .AddAspNetCoreInstrumentation()
    .AddHttpClientInstrumentation()
    .AddSqlClientInstrumentation(o => o.SetDbStatementForText = true)
    .AddOtlpExporter())
  .WithMetrics(m => m
    .AddAspNetCoreInstrumentation()
    .AddRuntimeInstrumentation()
    .AddOtlpExporter());
  • Which monitoring platforms have you used (Grafana, Prometheus, Azure Application Insights, Log Analytics)?

    • Answer: Prometheus for metrics scraping + Grafana dashboards/alerts; Application Insights for traces/logs/metrics + Profiler; Log Analytics for centralized queries and compliance retention.
  • How do you propagate correlation IDs across microservices?

    • Answer: Use W3C Trace Context (traceparent, tracestate) from OTel/HttpClient; include correlation ID in structured logs and forward it via headers/baggage. Add middleware to generate if missing and to log TraceId/SpanId.
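A middleware sketch that reuses or generates a correlation ID; note `X-Correlation-Id` is a common convention, not a standard header:

```csharp
app.Use(async (ctx, next) =>
{
    const string Header = "X-Correlation-Id";
    // Reuse the caller's ID, else fall back to the W3C TraceId, else mint one.
    var id = ctx.Request.Headers[Header].FirstOrDefault()
             ?? Activity.Current?.TraceId.ToString()
             ?? Guid.NewGuid().ToString("N");
    ctx.Response.Headers[Header] = id;

    // Attach to every log entry written during this request.
    using (app.Logger.BeginScope(new Dictionary<string, object> { ["CorrelationId"] = id }))
        await next();
});
```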
  • How do you define and track SLOs, SLIs, and SLAs?

    • Answer: Pick SLIs (availability, p95 latency, error rate), set SLOs (targets over a window), and monitor error-budget burn with alerts. SLAs are contractual promises derived from SLOs and backed by runbooks.

5. Chaos Engineering & Reliability Testing

  • What is chaos engineering and why is it important?

    • Answer: Deliberately inject failures (latency, crashes, network loss) to validate resilience assumptions and improve reliability before users are impacted.
  • How would you simulate service failures in Kubernetes (Azure Chaos Studio, Gremlin)?

    • Answer: Run scoped experiments (kill pod/node, CPU/memory pressure, network partition/latency) against a non-prod or canary with blast-radius limits and automated rollback criteria.
  • How do you validate resilience policies like retries and circuit breakers in production?

    • Answer: Use synthetic checks/canaries that trigger controlled faults (fault-injection proxy/traffic policies) and verify metrics: retry counts, success after retry, circuit open/half-open behavior, and no error-budget breach.
  • How do you test for failover scenarios in multi-region cloud deployments?

    • Answer: Regular game days: disable primary region, force DNS/Front Door failover, validate RTO/RPO, data replication health, idempotent replays, and runbooks.
  • What’s your approach to game days (planned chaos experiments with teams)?

    • Answer: Define hypothesis/goals, choose scoped faults, notify stakeholders, monitor SLOs live, capture timelines/decisions, and document actions & fixes; iterate until objectives are met.

6. Cloud & Data Compliance

  • What steps do you take to ensure GDPR compliance in a SaaS platform?

    • Answer: Map data flows (RoPA), data minimization, consent, DSR (access/rectify/erase/export), encryption in transit/at rest, retention policies, audit logging, and DPIA for high-risk processing.
  • How do you implement data residency requirements in Azure?

    • Answer: Deploy region-pinned services (SQL/Cosmos/Storage) in approved regions, restrict resource locations with Azure Policy, keep telemetry and backups in-region, and control cross-region egress with Private Link/VNet peering.
  • How do you handle HIPAA compliance for healthcare applications?

    • Answer: Sign a BAA, limit/track PHI access, encrypt everywhere, strong RBAC/MFA, audit logs, secure backups/DR, vulnerability management, and documented administrative/technical safeguards.
  • How do you manage audit logging and retention policies?

    • Answer: Use structured, immutable logs (append-only store/immutable retention), capture who/what/when/where, protect with RBAC, apply per-data-class retention, and automate export/archive with lifecycle rules.
  • What’s your approach to PII encryption and anonymization in databases?

    • Answer: Encrypt at rest (TDE) and in field/column (Always Encrypted/client-side crypto), rotate CMKs in Key Vault, tokenize or hash where possible, separate keys from data, and provide anonymized datasets for non-prod via masking.

7. Advanced Security Practices

  • How do you implement mutual TLS (mTLS) between services?

    • Answer: Establish a private CA, issue per-service certs with proper SANs, and enforce client-cert verification at the ingress/sidecar (Istio/Linkerd/Consul) or app server (Kestrel + ClientCertificateMode.RequireCertificate). Automate rotation (cert-manager/AKV CSI) and pin trust to the CA, not individual certs.
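A Kestrel-side sketch for requiring client certificates; the validation callback below is a placeholder — a real check validates the chain against your private CA and inspects SANs:

```csharp
builder.WebHost.ConfigureKestrel(k =>
    k.ConfigureHttpsDefaults(https =>
    {
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
        // Placeholder: verify against your CA chain and SANs in production,
        // not just the default policy errors.
        https.ClientCertificateValidation = (cert, chain, errors) =>
            errors == SslPolicyErrors.None;
    }));
```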
  • What’s your experience with Key Vault secret rotation?

    • Answer: Store secrets/keys/certs in Azure Key Vault, access via Managed Identity (no static creds), enable rotation policies (built-in for certs; functions/automation for secrets/keys), and consume via AKV references/CSI driver so apps pick up new versions without code changes. Monitor expiry with alerts.
  • How do you secure CI/CD pipelines against supply chain attacks?

    • Answer: Pin actions to SHAs, use OIDC→cloud (no long-lived secrets), least-privilege service connections, protected branches/reviews/required status checks, SAST/SCA and secrets scanning, SBOM + provenance (SLSA), ephemeral runners, and signed artifacts.
  • How do you integrate SAST/DAST (e.g., SonarQube, OWASP ZAP) in pipelines?

    • Answer: Run SAST on every PR (fail on critical), run DAST against an ephemeral environment post-deploy, publish reports to the PR, and gate release on policy thresholds. Track trends over time.
  • How do you enforce container image scanning & signing before production deploys?

    • Answer: Scan in CI (Trivy/Grype) and in registry; sign with Cosign/Notation; enforce admission with AKS Policy/OPA Gatekeeper to only allow signed, vulnerability-clean images (severity thresholds) from trusted registries.

8. Governance & Best Practices

  • How do you structure security policies across multiple microservices?

    • Answer: Provide a paved road (baseline auth, logging, TLS, rate-limit libs), codify org rules as policy-as-code (Azure Policy/OPA), enforce via CI templates and gateway policies, and audit via centralized dashboards.
  • What’s the role of Architecture Decision Records (ADRs) in security/observability?

    • Answer: ADRs capture why a control/stack was chosen, alternatives, and consequences—creating an auditable trail that maps decisions to risks, controls, and compliance requirements.
  • How do you enforce least privilege access in Azure?

    • Answer: Use RBAC with minimum scope (resource group/resource), PIM for just-in-time elevation, Managed Identities for workloads, deny assignments/Azure Policy to block broad roles, and regular access reviews.
  • How do you conduct threat modeling for APIs?

    • Answer: Build DFDs, apply STRIDE, list abuse cases, assess risk, choose mitigations (authZ scopes, input validation, rate limiting, mTLS, logging), record decisions, and verify via security tests and checks in CI.
  • How do you audit and report compliance evidence for SOC2/ISO27001?

    • Answer: Map controls to automated evidence (Azure Activity/Sign-in logs, Policy compliance, Defender for Cloud, pipeline logs), store artifacts in a tamper-evident repo, schedule access reviews and vuln scans, and generate periodic attestation reports with tickets for gaps.

🔹 AI, OpenAI, Semantic Kernel & Intelligent Systems

1. AI Fundamentals in Applications

  • What are the key differences between traditional software engineering and AI-augmented applications?

    • Answer: Traditional apps are deterministic (rules → outputs). AI apps are probabilistic (models → likely outputs), require data pipelines, prompt/model management, offline eval, safety/guardrails, and human oversight.
  • What are LLMs (Large Language Models) and how do they differ from classical ML models?

    • Answer: LLMs are foundation models trained on massive text corpora to perform many language tasks zero-/few-shot. Classical ML is narrow (task-specific features/labels). LLMs rely on prompting/context, not only fixed features.
  • How do you handle AI inference latency in production apps?

    • Answer: Shorten prompts (summaries/chunking), use smaller/faster models for first pass, enable streaming, cache results, batch requests where safe, pre-warm capacity (e.g., provisioned throughput), and parallelize retrieval/tools.
  • What’s your approach to AI evaluation metrics (accuracy, hallucination rate, relevance)?

    • Answer: Create a golden set with labeled tasks; measure precision/recall, relevance, groundedness/hallucination rate, and toxicity. Use both human review and LLM-as-judge with rubrics; A/B test and track SLOs.
  • How do you design for human-in-the-loop (HITL) validation?

    • Answer: Add confidence thresholds, route low-confidence outputs to review queues, provide citations/explanations, capture user feedback for retraining, and guarantee override/rollback paths.

2. OpenAI / Azure OpenAI Integration

  • How would you integrate OpenAI GPT models into a .NET application?
    • Answer: Call the API via official SDK/HTTP from a typed service; externalize keys, set model, temperature, max tokens, and use function/tool calls and JSON mode for structured outputs.
// Official OpenAI .NET SDK (OpenAI package)
var client = new ChatClient("gpt-4o-mini", cfg.Key);
ChatCompletion res = await client.CompleteChatAsync(
    new SystemChatMessage("You are a helpful assistant."),
    new UserChatMessage("Summarize this..."));
  • What’s the difference between using OpenAI API vs Azure OpenAI Service?

    • Answer: Azure offers private networking, RBAC/managed identity, regional residency, content filters, quotas/provisioned throughput, and enterprise governance; OpenAI API is direct SaaS with global endpoints and simpler setup.
  • How do you handle prompt engineering to improve response quality?

    • Answer: Use clear system prompts, few-shot examples, schemas (JSON), tool instructions, constraints (tone/length), and retrieval-augmented context with citations; iteratively test with a prompt eval set.
  • How do you ensure deterministic behavior for mission-critical AI features?

    • Answer: Constrain temperature/top_p, use structured outputs (JSON schema), strict tools/workflows, deterministic retrieval, guardrails/validators, and fallbacks (rules or smaller deterministic models).
  • How do you manage cost optimization when calling LLMs at scale?

    • Answer: Pick the smallest model that meets quality, truncate/context-compress, cache embeddings/answers, batch retrieval, stream instead of full completion, and track cost per request with budgets/alerts.

3. Azure AI Services

  • Which Azure AI Services have you used (Cognitive Search, Form Recognizer, Speech, Vision)?

    • Answer: Azure AI Search, Document Intelligence (Form Recognizer), Speech-to-Text/TTS, Vision; wired them to storage/queues and LLMs for RAG, extraction, and multimodal apps.
  • How do you integrate Cognitive Search with an OpenAI model (RAG architecture)?

    • Answer: Chunk & embed documents, index in Azure AI Search (hybrid BM25+vector), query with filters, pass top k results into the prompt with citations, optionally re-rank, and cache answers.
  • How do you process documents with Form Recognizer and integrate them into workflows?

    • Answer: Use prebuilt/custom models to extract fields/tables, validate via HITL, store normalized JSON in DB, trigger downstream steps via Event Grid/Functions, and enrich search indexes.
  • How do you implement speech-to-text and text-to-speech in a bot?

    • Answer: Use Azure Speech SDK for streaming STT and Neural TTS, integrate with Bot Framework/Direct Line, handle locale models, and add profanity filtering and endpointing.
  • What’s your approach to AI-driven personalization (recommendations, semantic search)?

    • Answer: Build event pipelines (clicks, views), create embeddings for users/items, use vector similarity + re-ranking, apply bandits/explore–exploit, and respect privacy/consent with guardrails.

4. Microsoft Bot Framework & Conversational AI

  • What are the main building blocks of the Microsoft Bot Framework?

    • Answer: Activities (messages/events), Adapters (channel bridges), TurnContext (per-turn state), Middleware, State (Conversation/User/Private), Dialogs (Waterfall/Adaptive), Recognizers (LUIS/Orchestrator), Bot Composer/SDK, and Azure Bot Service channels (Teams/Web Chat/etc.).
  • How do you implement adaptive dialogs and LUIS/Orchestrator recognizers?

    • Answer: Use Adaptive Dialogs (rules/triggers, event-driven) with a Recognizer to map user utterances to intents/entities. Configure LUIS (language models) or Orchestrator (router over multiple recognizers/QnA/LLM) and bind intents to dialog actions/steps.
  • How do you connect a bot to Azure Bot Service and integrate with Teams or WebChat?

    • Answer: Deploy bot (App Service/Functions), register Azure Bot resource, set Messaging Endpoint, enable channels (Teams/Web Chat), configure App ID/Secret. In Teams, add app manifest/permissions; for Web Chat, embed with Direct Line token.
  • How do you handle state management in bots (conversation, user, external stores)?

    • Answer: Use ConversationState and UserState backed by MemoryStorage (dev) or Cosmos DB/Blob Storage (prod). Define strongly-typed state objects, access via StatePropertyAccessor, and save/clear state each turn. External data via your own repositories.
  • How do you test and monitor bots in production?

    • Answer: Bot Framework Emulator and TestAdapter for unit/integration tests; TranscriptLoggerMiddleware + Application Insights/Log Analytics for telemetry; dashboard intent/turn metrics, latency, errors; add health checks and conversation transcripts sampling.

5. Semantic Kernel

  • What is Semantic Kernel and how does it differ from calling OpenAI APIs directly?

    • Answer: SK is an orchestration SDK: it adds functions (semantic/native), planning, tool calling, memory/embeddings, and connectors—so you compose workflows instead of hand-wiring raw API calls.
  • How do you design skills, planners, and connectors in Semantic Kernel?

    • Answer: Group related functions into skills (semantic prompts or C# methods). Use planners/routers to choose/sequence functions based on goals. Add connectors (HTTP/Graph/SQL/Search) as tools the model can invoke.
  • How do you integrate prompt templates with C# functions?

    • Answer: Create a semantic function from a prompt template and combine it with native C# functions in a pipeline:
var kernel = Kernel.CreateBuilder().AddOpenAIChatCompletion("gpt-4o-mini", key).Build();
var summarize = kernel.CreateFunctionFromPrompt("Summarize:\n{{$input}}");
var sanitize = kernel.CreateFunctionFromMethod((string s) => s.Trim(), "Sanitize");
var clean = await kernel.InvokeAsync<string>(sanitize, new() { ["s"] = " text " });
var result = await kernel.InvokeAsync(summarize, new() { ["input"] = clean });
  • What’s the role of memory stores (vector DBs, embeddings) in SK?

    • Answer: Store embeddings for documents/notes to enable RAG/semantic recall (similarity search). SK abstracts memory providers (e.g., Azure AI Search, Redis, Postgres, Pinecone).
  • How would you implement an AI agent orchestration scenario using SK?

    • Answer: Define agent skills (retrieve, reason, act), expose tools/connectors, add planner for goal decomposition, maintain short/long-term memory, enforce policies/guardrails, and loop observe→plan→act with timeouts and fallbacks.

6. Tooling & AI Orchestration

  • What’s your experience with LangChain or Semantic Kernel for orchestration?

    • Answer: Use them to compose chains/agents with retrieval, tools, and guards. LangChain (Python/JS) has rich community integrations; SK fits .NET apps with native DI, logging, and C# function interop.
  • How do you decide between embedding search (RAG) and fine-tuning?

    • Answer: RAG when knowledge changes often or is proprietary; fine-tune for style/format adherence or specialized tasks with abundant labeled data. Often combine: RAG for facts, light tuning for tone/structure.
  • How do you integrate external tools/APIs into AI pipelines (e.g., weather API, DB lookup)?

    • Answer: Wrap each tool with strict schemas (inputs/outputs), timeouts, and idempotent behavior; pass via tool/connector functions; validate responses and sanitize inputs; log every invocation.
  • How do you prevent tool misuse or security leaks when exposing tools to LLMs?

    • Answer: Allowlist tools/params, enforce authZ server-side, redact secrets, set rate/quotas, validate/escape inputs, constrain outputs (JSON schema), and add policy prompts + runtime guards (content filters).
  • How do you monitor AI pipeline reliability and failures?

    • Answer: Instrument with OpenTelemetry (token usage, latency, tool errors), capture prompt/response fingerprints (not raw PII), track hallucination/groundedness via eval sets, add circuit breakers/fallback models, DLQs for failed tasks, and alert on SLO breaches.

7. Data & Knowledge Management

  • How do you preprocess documents for chunking & embeddings?

    • Answer: Extract clean text (OCR if needed), normalize (remove boilerplate, fix encodings), detect language, split by semantic/section boundaries with token-aware chunk sizes and small overlap, preserve metadata (source, timestamp, ACL, tenant), de-duplicate, and handle tables/code with format-aware parsers.
  • What’s your approach to vector DB selection (Azure Cognitive Search, Pinecone, Weaviate, Redis)?

    • Answer: Match needs on hybrid search (keyword+vector), filters/ACLs, scale/latency, ops model (managed vs self-hosted), region/VNet, and cost.
      • Azure AI Search: enterprise, hybrid BM25+vector, rich filters, RBAC.
      • Pinecone: managed at scale, strong performance, namespaces.
      • Weaviate: OSS/managed, schema/modules, flexibility.
      • Redis: simple, ultra-low latency, great when you already use Redis.
  • How do you handle knowledge freshness in AI-driven apps?

    • Answer: Incremental ingestion (change feeds/webhooks), re-embed only changed chunks, versioned documents, TTL/staleness scoring, on-demand fetch fallback, and recency-aware reranking; automate with pipelines (Functions/Logic Apps).
  • How do you ensure multi-tenant isolation in AI knowledge bases?

    • Answer: Isolate by index/namespace per tenant or enforce tenantId filters at query time; separate keys/secrets, network isolate (VNet/Private Link), encrypt per tenant, and propagate tenant context in baggage/claims to every layer.
  • How do you secure sensitive data used for training/inference?

    • Answer: Minimize and classify data, mask/tokenize PII, encrypt in transit/at rest (Key Vault CMKs), keep private networking, restrict access with RBAC/PIM, scrub logs/prompts, and maintain retention/erasure workflows.

8. AI Risks & Governance

  • What are AI hallucinations, and how do you mitigate them?

    • Answer: Fabricated but fluent outputs. Mitigate with RAG + citations, temperature control, constrained decoding/JSON schemas, domain guardrails, and HITL for high-risk flows.
  • How do you enforce guardrails in AI responses?

    • Answer: Layered: policy prompts, content filters, schema validation (JSON), allow/deny lists, tool allowlisting, rate limits, and post-process validation before returning to users.
  • What’s your approach to responsible AI (bias, explainability, fairness)?

    • Answer: Diverse eval sets, measure bias/fairness per segment, document model cards/ADRs, give rationales/citations, enable human review/appeals, and monitor drift; avoid sensitive attributes unless required for fairness testing.
  • How do you log and audit AI-generated outputs for compliance?

    • Answer: Structured logs with prompt/version, model, params, tools used, trace IDs, and hashes/redacted content; store decisions/feedback, retention policies, and immutable audit trails (append-only store).
  • How do you align AI solutions with GDPR, HIPAA, or SOC2?

    • Answer: DPIA/Threat models, data minimization/residency, encryption, access controls, DSR support (access/erasure), BAAs (HIPAA), vendor risk reviews, change management, incident response, and auditable controls.

9. Advanced Scenarios

  • How do you combine OpenAI + Cognitive Search + Bot Framework into a full pipeline?

    • Answer: Ingest & index docs (chunks/embeddings) → Bot receives query → Retrieve top-k from Cognitive Search → Ground an OpenAI prompt with citations → Stream answer to user (Teams/Web Chat) → log telemetry and feedback for re-ranking.
  • How do you design multi-agent collaboration (planner + specialist agents)?

    • Answer: Use a planner to decompose goals and route to specialist agents (retriever, calculator, writer). Share context via blackboard memory, set tool limits/timeouts, and reconcile outputs with a validator/arbiter.
  • What’s your approach to integrating AI assistants into developer workflows (DevOps copilots, test generation)?

    • Answer: Start with read-only access, restrict scopes, add PR comments (summaries, risk spots), generate tests/docs with prompts tied to repo conventions, run lint/security checks on suggestions, and measure adoption/quality.
  • How would you embed AI in event-driven architectures?

    • Answer: Trigger LLM/RAG steps from events (e.g., “document_uploaded”), use queues for backpressure, idempotent processing with outbox/locks, DLQ on failure, and store outputs with provenance.
  • How do you future-proof AI solutions as models and APIs evolve?

    • Answer: Abstract providers behind a service interface, make model/parameters config-driven, version prompts and test sets, track quality SLOs, support A/B across models, and keep data in portable formats with migration playbooks.

🔹 Scenario-Based System Design & Problem Solving

1. SaaS & Authentication

  • You need to build a multi-tenant SaaS platform. How would you design tenant isolation (logical vs physical)?

    • Answer: Choose isolation per risk/compliance/cost:
      • Logical (shared DB, tenantId column): cheapest, fastest; enforce with global filters (e.g., EF HasQueryFilter), per-tenant RBAC, rate limits, and row-level encryption where needed.
      • Schema per tenant: good middle ground; easier data export, stronger blast-radius control.
      • Database per tenant: strongest isolation/compliance and noisy-neighbor control; highest ops cost.
    Whichever model you choose: always propagate tenant context (header/claims), isolate caches/queues, rotate per-tenant keys, and add per-tenant observability.
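A global query filter sketch for the shared-database (logical) model; the `ITenantProvider` abstraction and `Order` entity are hypothetical:

```csharp
public class AppDbContext : DbContext
{
    private readonly ITenantProvider _tenant; // hypothetical: resolves tenant from claims/header

    public AppDbContext(DbContextOptions<AppDbContext> options, ITenantProvider tenant)
        : base(options) => _tenant = tenant;

    protected override void OnModelCreating(ModelBuilder b)
    {
        // Every query against Order is automatically scoped to the current tenant.
        b.Entity<Order>().HasQueryFilter(o => o.TenantId == _tenant.TenantId);
    }
}
```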
  • You need to integrate authentication & authorization. Would you use OpenIddict, Azure AD B2C, or IdentityServer? Why?

    • Answer:
      • Azure AD B2C / Entra External ID: best for customer identity (CIAM), social/enterprise federation, built-in MFA/compliance, low ops.
      • OpenIddict: self-hosted OAuth2/OIDC in your stack; full control, no license cost, you run it.
      • IdentityServer (Duende): battle-tested features, commercial license, great for complex protocols/customization. Pick based on control vs. ops burden, federation needs, compliance, and budget.
  • How would you implement role-based access control (RBAC) for multiple tenants?

    • Answer: Make roles tenant-scoped: (TenantId, UserId) -> Roles/Permissions. Issue tenant roles in claims (e.g., role: "Admin", tid: "T1"), enforce with policy-based auth (IAuthorizationHandler) checking both permission and tenant match. Provide per-tenant admins, permission sets (not just roles), and audit every grant.
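A policy sketch that requires both the role and a tenant match, following the `tid` claim convention above; comparing against a route value is one possible design:

```csharp
builder.Services.AddAuthorization(opt =>
{
    opt.AddPolicy("TenantAdmin", p => p.RequireAssertion(ctx =>
    {
        // For endpoint authorization, Resource is the HttpContext.
        var routeTenant = (ctx.Resource as HttpContext)?
            .GetRouteValue("tenantId")?.ToString();
        // Role AND tenant must both match the token.
        return ctx.User.IsInRole("Admin")
            && routeTenant is not null
            && ctx.User.HasClaim("tid", routeTenant);
    }));
});
```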
  • How would you secure API-to-API communication between services?

    • Answer: OAuth2 client credentials with short-lived JWTs (scopes/audiences), private networking, and optionally mTLS. On Azure, prefer Managed Identities instead of secrets. Add rate limiting, Polly policies, rotation/attestation of keys, and verify aud/iss on every call.
  • How would you handle user provisioning and federation (Google/Microsoft login)?

    • Answer: Use your IdP (B2C/IdentityServer/OpenIddict) to federate with Google/Microsoft via OIDC; implement JIT provisioning on first sign-in (map external identities to a local user + tenant) and optional SCIM for enterprise bulk provisioning. Support account linking, domain-based tenant discovery, and store minimal PII with proper consent.

2. Web Applications

  • You need to build an inventory management system (web-based). How would you structure frontend (Angular/Blazor) and backend (ASP.NET Core)?

    • Answer: Monorepo with frontend (Angular/Blazor) + API. Frontend: feature modules (Products, Stock, Orders), NgRx/Fluxor for state, smart/dumb components, route guards, lazy loading. Backend: ASP.NET Core Clean Architecture (Domain, Application, API, Infrastructure), EF Core + migrations, CQRS-light (MediatR), validation (FluentValidation), OpenAPI, and OpenTelemetry.
  • How would you handle real-time inventory updates (SignalR vs polling)?

    • Answer: Prefer SignalR (WebSockets) to push stock deltas to subscribed rooms (per-warehouse/sku). Fallback to SSE/long-polling. Use backpressure (buffer + drop oldest), idempotent updates with versioning. Poll only for low-change areas or offline mode.
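A SignalR sketch with per-warehouse groups; the hub, group naming, and event name are illustrative:

```csharp
public class InventoryHub : Hub
{
    public Task JoinWarehouse(string warehouseId)
        => Groups.AddToGroupAsync(Context.ConnectionId, $"wh-{warehouseId}");
}

// Pushing a stock delta from application code via IHubContext<InventoryHub>:
await _hub.Clients.Group($"wh-{warehouseId}")
    .SendAsync("StockChanged", new { sku, delta, version }); // version lets clients apply updates idempotently
```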
  • How would you implement search and filtering in a large product catalog?

    • Answer: Index products in Azure AI Search/Elasticsearch with fields for facets (brand, category), synonyms, analyzers; support keyword + vector/hybrid if needed. API exposes cursor-based pagination, filters, and ETag caching. Keep DB for writes, search for reads.
  • How would you add audit logging for user actions (who changed stock, when)?

    • Answer: Append-only audit table/stream with userId, tenantId, correlationId, before/after, stored via EF SaveChanges interceptor or domain events + Outbox. Protect PII, add ProblemDetails IDs, and surface trails in admin UI.
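An EF Core `SaveChangesInterceptor` sketch; the `AuditEntry` entity is hypothetical:

```csharp
public sealed class AuditInterceptor : SaveChangesInterceptor
{
    public override ValueTask<InterceptionResult<int>> SavingChangesAsync(
        DbContextEventData eventData, InterceptionResult<int> result,
        CancellationToken ct = default)
    {
        var db = eventData.Context!;
        foreach (var entry in db.ChangeTracker.Entries()
                     .Where(e => e.State is EntityState.Modified or EntityState.Deleted))
        {
            db.Set<AuditEntry>().Add(new AuditEntry // hypothetical append-only entity
            {
                Entity = entry.Metadata.Name,
                Action = entry.State.ToString(),
                Before = JsonSerializer.Serialize(entry.OriginalValues.ToObject()),
                After  = entry.State == EntityState.Deleted
                         ? null : JsonSerializer.Serialize(entry.CurrentValues.ToObject()),
                ChangedAtUtc = DateTime.UtcNow
            });
        }
        return base.SavingChangesAsync(eventData, result, ct);
    }
}
```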
  • How would you expose the system as a PWA for mobile warehouse workers?

    • Answer: Add manifest.json, service worker (precache app shell + runtime cache), IndexedDB for offline tasks, Background Sync for queued updates, push notifications, responsive UI, and role-based offline capabilities.

3. API & Integration

  • You are tasked with building a REST + gRPC API for customer management. How would you design versioning and backward compatibility?

    • Answer: REST: /v1 routes or header-based versioning; additive changes, ProblemDetails, deprecate with headers. gRPC: evolve proto additively (reserve removed fields, don’t reuse tags), default values, deadline support. Contract tests and canary rollout.
  • A client needs GraphQL support for analytics. How do you integrate GraphQL alongside REST?

    • Answer: Keep REST/gRPC for transactional ops; expose read-only GraphQL for analytics/aggregation with DataLoader to avoid N+1, persisted queries, auth per field, and caching at resolver level. Same domain models, separate gateway.
  • You must integrate with a third-party payment gateway. How do you handle retries, idempotency, and security of callbacks?

    • Answer: Use idempotency keys on charge requests, exponential backoff retries for transient errors, store payment state with outbox. Verify webhooks via HMAC signature (or mTLS), allowlist endpoints, replay protection, and DLQ for failures.
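A webhook HMAC verification sketch; header name and signature encoding (hex vs base64) vary by provider, so check their docs:

```csharp
static bool IsValidSignature(string payload, string signatureHex, string secret)
{
    byte[] expected = HMACSHA256.HashData(
        Encoding.UTF8.GetBytes(secret),
        Encoding.UTF8.GetBytes(payload));

    byte[] presented;
    try { presented = Convert.FromHexString(signatureHex); }
    catch (FormatException) { return false; }

    // Constant-time comparison prevents timing attacks.
    return CryptographicOperations.FixedTimeEquals(expected, presented);
}
```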
  • How would you design an API gateway layer to manage authentication, throttling, and monitoring?

    • Answer: Use APIM/YARP/Ocelot: JWT validation, scopes, rate limiting/quotas, header/body transforms, request/response compression, correlation IDs, OpenTelemetry export, WAF integration, and developer portal for partner onboarding.
  • How would you expose APIs to partners without exposing the entire system?

    • Answer: Create a partner surface (separate gateway/product) with scoped tokens, per-tenant quotas, sandbox keys, and versioned contracts. Use backends that map to internal services, apply field filtering, and provide mock/testing environments with analytics.

4. Messaging & Distributed Systems

  • You need to implement an order processing workflow across multiple services. Would you use MassTransit, NServiceBus, or Service Bus? Why?

    • Answer: Azure Service Bus = broker. MassTransit/NServiceBus = higher-level frameworks on top.
      • MassTransit: OSS, great with Service Bus/RabbitMQ, built-in consumers, sagas, retries, outbox.
      • NServiceBus: commercial, superb saga tooling, recoverability, audit/monitoring.
      Choose by budget, required features (sagas/outbox/monitoring), and team familiarity. For rich workflows on Azure: MassTransit + Service Bus (cost-effective) or NServiceBus (enterprise features).
  • How would you design a saga workflow for order → payment → shipping → notification?

    • Answer: Use an orchestrator saga with state persisted by OrderId. Steps: OrderPlaced → send AuthorizePayment; on PaymentAuthorized → ArrangeShipping; on Shipped → SendNotification. Add timeouts, compensations (refund on ship fail), idempotent handlers, and correlation/causation IDs.
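
    The happy path and the refund compensation above can be sketched as a pure transition function over the persisted saga state. State and event names here are illustrative; frameworks like MassTransit or NServiceBus express the same thing as a declarative state machine:

    ```typescript
    // Hedged sketch of the orchestrator's transitions, including the
    // compensation path when shipping fails after payment was taken.
    type SagaState =
      | "AwaitingPayment" | "AwaitingShipping" | "AwaitingNotification"
      | "Completed" | "Refunding" | "Failed";

    function advanceSaga(
      state: SagaState,
      event: string,
    ): { state: SagaState; command?: string } {
      switch (`${state}:${event}`) {
        case "AwaitingPayment:PaymentAuthorized":
          return { state: "AwaitingShipping", command: "ArrangeShipping" };
        case "AwaitingShipping:Shipped":
          return { state: "AwaitingNotification", command: "SendNotification" };
        case "AwaitingNotification:NotificationSent":
          return { state: "Completed" };
        // compensation: refund if shipping cannot be arranged
        case "AwaitingShipping:ShippingFailed":
          return { state: "Refunding", command: "RefundPayment" };
        case "Refunding:PaymentRefunded":
          return { state: "Failed" };
        default:
          // duplicate or out-of-order events are ignored (idempotent handlers)
          return { state };
      }
    }
    ```
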
  • How would you guarantee idempotency when messages are retried?

    • Answer: Use natural keys/idempotency keys per operation, keep a processed-messages (inbox) table with a unique constraint, use the Outbox pattern for atomic publish + local DB write, guard side effects with optimistic concurrency/rowversion, and design handlers as upserts.
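
    The inbox idea can be sketched as a handler wrapper that records message IDs before side effects run. In production the Set below is a DB table with a unique constraint, and the insert shares a transaction with the side effect; the names are illustrative:

    ```typescript
    // Hedged sketch: wrap a handler so redelivered messages become no-ops.
    function makeIdempotentHandler<T>(
      handle: (msg: T) => void,
      processed: Set<string> = new Set(), // stands in for an inbox table
    ) {
      return (messageId: string, msg: T): boolean => {
        if (processed.has(messageId)) return false; // duplicate delivery
        processed.add(messageId); // unique-constraint insert in the real thing
        handle(msg);
        return true;
      };
    }
    ```
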
  • A service is slow and causes queue backlogs. How do you detect and fix this?

    • Answer: Watch queue depth, age, consumer lag, p95 handle time, and DLQ counts; trace with correlation IDs. Fix by scaling consumers, raising prefetch/concurrency, batching, optimizing DB calls (indexes, caching), partitioning hot keys, and adding backpressure (rate limit producers).
  • How do you decide when to use sync (REST/gRPC) vs async (queues/events)?

    • Answer: Sync for user-facing reads or quick writes that need immediate confirmation within a latency budget. Async for long-running, bursty, or cross-boundary tasks needing decoupling/retries. Often: a sync command that enqueues work and returns 202 with a status endpoint.

5. Databases & Persistence

  • You need to design a multi-tenant database for SaaS. Do you use single database, schema-per-tenant, or database-per-tenant? Why?

    • Answer:
      • Single DB (tenantId column): lowest cost/ops, needs strict row filters & tenant ACLs.
      • Schema per tenant: moderate isolation, easier export/capacity planning.
      • DB per tenant: strongest isolation/compliance & noisy-neighbor control, highest ops. Pick by regulatory isolation, scale, cost, and customization needs.
  • How would you implement soft deletes and audit trails in EF Core/NHibernate?

    • Answer: Add IsDeleted + global query filters; override SaveChanges or use interceptors to set audit fields and write to an append-only audit table/stream (who/when/before/after, correlationId). Optionally DB triggers for defense in depth.
  • How would you handle schema migrations in production for hundreds of tenants?

    • Answer: Backward-compatible, additive changes; a migration orchestrator runs per tenant (schema/DB) with feature flags and phased rollout. Track versions in a TenantMigrations table, throttle concurrency, snapshot/backup, and alert on failures with automatic pause/resume.
  • How would you cache frequently used queries (Redis, in-memory)?

    • Answer: Use Redis for cross-instance cache with key = tenant:query:params, TTL/sliding-expiration policies, pub/sub invalidation on writes, and ETag/Last-Modified for HTTP. Use MemoryCache for ultra-hot, per-node results.
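
    A minimal per-node sketch of the tenant:query:params convention with lazy TTL expiry; a real deployment would put this in Redis with pub/sub invalidation, and the key shape and TTLs here are illustrative:

    ```typescript
    // Hedged sketch: TTL cache with tenant-scoped keys and bulk invalidation.
    class TtlCache<V> {
      private entries = new Map<string, { value: V; expiresAtMs: number }>();

      static key(tenant: string, query: string, params: Record<string, unknown>): string {
        return `${tenant}:${query}:${JSON.stringify(params)}`;
      }

      set(key: string, value: V, ttlMs: number, nowMs: number = Date.now()): void {
        this.entries.set(key, { value, expiresAtMs: nowMs + ttlMs });
      }

      get(key: string, nowMs: number = Date.now()): V | undefined {
        const entry = this.entries.get(key);
        if (!entry) return undefined;
        if (nowMs >= entry.expiresAtMs) {
          this.entries.delete(key); // lazy expiry on read
          return undefined;
        }
        return entry.value;
      }

      // invalidate all of a tenant's entries after a write
      invalidateTenant(tenant: string): void {
        for (const k of this.entries.keys()) {
          if (k.startsWith(`${tenant}:`)) this.entries.delete(k);
        }
      }
    }
    ```
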
  • How do you ensure consistency across multiple databases (SQL + NoSQL mix)?

    • Answer: Avoid 2PC; use Outbox + domain events to propagate changes, CQRS read models, idempotent consumers, versioning/timestamps for ordering, and read-repair/reconciliation jobs. Accept eventual consistency with clear UX/SLOs.
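
    The Outbox half of this can be sketched with a plain object standing in for the SQL transaction: the state change and the event it implies are committed together, and a separate dispatcher drains and publishes later. All names here are illustrative:

    ```typescript
    // Hedged sketch of the outbox pattern: write + event commit atomically,
    // publication happens out-of-band and can be retried safely.
    type OutboxEvent = { id: number; type: string; payload: unknown };
    type Outbox = { pending: OutboxEvent[]; nextId: number };
    type Db = { orders: Map<string, string>; outbox: Outbox };

    function newDb(): Db {
      return { orders: new Map(), outbox: { pending: [], nextId: 1 } };
    }

    function saveWithOutbox(db: Db, orderId: string, status: string): void {
      // in SQL these two writes share one transaction, so neither is lost alone
      db.orders.set(orderId, status);
      db.outbox.pending.push({
        id: db.outbox.nextId++,
        type: "OrderStatusChanged",
        payload: { orderId, status },
      });
    }

    // the dispatcher publishes pending events and clears them; returns count
    function drainOutbox(db: Db, publish: (e: OutboxEvent) => void): number {
      const batch = db.outbox.pending.splice(0, db.outbox.pending.length);
      batch.forEach(publish);
      return batch.length;
    }
    ```
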

6. Cloud & DevOps

  • You need to deploy services to Azure AKS. How do you handle secrets, scaling, and rolling updates?

    • Answer: Secrets: store in Azure Key Vault, mount via CSI driver or inject as env vars using managed identity (no static keys). Scaling: use HPA for CPU/RAM and KEDA for event-driven queues; partition hot paths. Rolling updates: Deployment with readiness/liveness probes, PodDisruptionBudgets, and tuned maxSurge/maxUnavailable; use canary via Ingress/AGIC for riskier changes.
  • How would you design zero-downtime deployments for a SaaS platform?

    • Answer: Blue/green or canary behind gateway, sticky sessions at edge only if necessary, backward-compatible DB (expand/contract), feature flags, health checks + gradual traffic shift, auto-rollback on SLO/error-budget breach, and immutable images via GitOps.
  • How do you choose between Azure Functions and background workers?

    • Answer: Functions: event-driven, bursty workloads, quick time-to-prod, managed scale, pay-per-use; great with triggers (HTTP/Service Bus/Timers). Worker Service (containers on AKS/App Service): long-running daemons, custom scale/concurrency, VNet needs, heavy SDKs, or strict startup/warm requirements.
  • How would you implement disaster recovery across multiple Azure regions?

    • Answer: Decide active/active vs active/passive; global entry via Front Door/Traffic Manager; SQL Failover Groups, Cosmos DB multi-region, GRS/ZRS storage; replicate Key Vault and secrets; IaC to recreate infra; define RTO/RPO, runbooks, and scheduled DR drills.
  • How do you ensure compliance (GDPR/HIPAA) in Azure deployments?

    • Answer: Data residency policies, encryption in transit/at rest (CMKs in Key Vault), least privilege/RBAC/PIM, private networking (Private Link), audit logging/Log Analytics, DSR workflows (access/erase), BAAs (HIPAA), vulnerability/SBOM scans, and documented controls with evidence in CI/CD.

7. Testing & Quality

  • You need to test a payment workflow. How do you design unit, integration, and E2E tests?

    • Answer: Unit: domain rules (totals, idempotency keys, state machine). Integration: API + DB + sandbox/stub gateway (WireMock/TestContainers), verify retries/webhook handling/signature. E2E: browser flow with real sandbox, success/failure/refund paths, resiliency (timeouts) and audit trail checks.
  • How would you automate load testing for an API?

    • Answer: Use k6/JMeter in a pipeline stage hitting an ephemeral env; seed data, model user journeys, ramp patterns (spike/soak), set p95/p99 + error-rate thresholds to gate releases; publish trends to Grafana/Azure Load Testing.
  • How do you handle mocking external APIs in integration tests?

    • Answer: Replace endpoints with WireMock.Net/mock servers in Docker; pin contracts via Pact (consumer/provider tests); inject base URLs via config; include negative cases (timeouts/5xx/invalid signatures).
  • How would you enforce 80% test coverage in CI/CD pipelines?

    • Answer: Collect with Coverlet/dotnet test → report to SonarQube/ReportGenerator; set quality gate ≥80% (exclude generated code), fail PRs below threshold; protect main with required checks.
  • How do you detect and fix flaky tests in CI/CD?

    • Answer: Auto-retry + quarantine tag, track flake rate dashboard, eliminate sleep-based waits (use awaits/auto-wait in Playwright), isolate state (unique data, no shared static), seed randomness, stabilize environment (time/locale), and fix root causes before un-quarantining.

8. Observability & Monitoring

  • You are tasked with adding observability to a microservice system. How do you set up tracing, logging, and metrics?

    • Answer: Adopt OpenTelemetry across services: W3C trace context propagation, span attributes (tenant, orderId), and sampling. Structured logging (Serilog) with correlation IDs → centralized store (Log Analytics/ELK). Metrics for golden signals (latency p95/p99, error rate, saturation, traffic) via Prometheus/App Insights; create SLOs and alerts.
  • How would you use OpenTelemetry with Azure Application Insights?

    • Answer: Add OTel SDK (ASP.NET/HttpClient/SQL instrumentations), set Resource.ServiceName, export via OTLP → Azure Monitor/App Insights. Use Auto-collection for deps/requests, custom spans/metrics, enable live metrics, and link logs↔traces with traceId.
  • How would you detect memory leaks in a production service?

    • Answer: Alert on rising process/GC heap and LOH; capture gcdump/dump on threshold; analyze with dotnet-gcdump/dotMemory/PerfView for rooted paths. Track alloc rate, GC.GetTotalMemory, EventCounters; review IDisposable/timer/socket usage and fix retention.
  • How would you design dashboards in Grafana for a SaaS system?

    • Answer: Service overview (SLOs + error budget), per-service RED (Rate/Errors/Duration), infra (CPU/RAM/HPA/KEDA), DB/cache (qps, misses, slow queries), queue depth/age, tenant filter (labels), and drill-down panels with exemplars linking to traces.
  • How do you use chaos testing to validate system resilience?

    • Answer: Define hypothesis + abort criteria, run scoped faults (latency, pod kill, node drain, network cut) via Chaos Studio/Gremlin in canary; measure SLO impact, retry/circuit metrics, DLQ, and auto-rollback; document fixes and re-run periodically.

9. Frontend Scenarios

  • You need to build a dashboard app for a healthcare system. How do you design components, modules, and routing in Angular?

    • Answer: Feature modules (Patients, Appointments, Reports), Core (services, auth) & Shared (ui libs). Lazy load routes, route guards (auth/roles), resolvers for data prefetch, OnPush change detection, and smart/container vs dumb/presentational components.
  • How do you implement state management (NgRx, BehaviorSubject, Fluxor) in a Blazor/Angular app?

    • Answer: Use NgRx/Fluxor for complex, multi-source state with effects, selectors, and time-travel; BehaviorSubject services for simpler modules. Keep state normalized, derive views via selectors, and test reducers/effects.
  • How would you add i18n support for English, Russian, Hebrew?

    • Answer: Angular i18n or ngx-translate with JSON bundles; dynamic locale switch, RTL support for Hebrew (Dir service, CSS logical props), proper date/number locales, ICU pluralization, and translation linting. For Blazor, use .resx per culture and set CultureInfo.
  • How do you implement real-time UI updates from SignalR?

    • Answer: Create a Hub connection, map server events to RxJS streams (Angular) or state store (Blazor). Handle reconnect/backoff, subscribe by tenant/group, throttle/debounce bursts, and trigger change detection (OnPush, trackBy) for efficient rendering.
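
    The reconnect/backoff part can be sketched as a delay function of the kind the SignalR client accepts (its withAutomaticReconnect option takes a custom retry policy); the exponential schedule, cap, and give-up count here are illustrative:

    ```typescript
    // Hedged sketch: capped exponential backoff for hub reconnects.
    // Returning null tells the client to stop retrying (show "offline" UI).
    function nextRetryDelayMs(
      previousRetryCount: number,
      maxDelayMs = 30_000,
    ): number | null {
      if (previousRetryCount >= 8) return null; // give up after 8 attempts
      return Math.min(maxDelayMs, 1000 * 2 ** previousRetryCount);
    }
    ```
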
  • How do you ensure WCAG 2.1 compliance in your frontend?

    • Answer: Semantic HTML, ARIA where needed, keyboard navigation & focus order, visible focus styles, color contrast ≥ 4.5:1, labels and error messages, skip links/landmarks, RTL/locale support, media captions, and automated audits (axe, Lighthouse) plus screen-reader testing.
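
    The “contrast ≥ 4.5:1” rule comes from WCAG’s relative-luminance formula; a sketch for sRGB hex colors follows. It mirrors the WCAG 2.1 definition, but treat it as illustrative rather than a certified implementation (tools like axe do this check for you):

    ```typescript
    // Hedged sketch: WCAG 2.1 contrast ratio between two sRGB hex colors.
    function linearChannel(c: number): number {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
    }

    function relativeLuminance(hex: string): number {
      const n = parseInt(hex.replace("#", ""), 16);
      const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255];
      return 0.2126 * linearChannel(r) + 0.7152 * linearChannel(g) + 0.0722 * linearChannel(b);
    }

    // AA body text requires this to be >= 4.5
    function contrastRatio(fg: string, bg: string): number {
      const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
      return (hi + 0.05) / (lo + 0.05);
    }
    ```
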

10. AI & Intelligent Features

  • You are asked to integrate a chatbot using Microsoft Bot Framework and Azure OpenAI. How do you design it?

    • Answer: Bot Framework SDK + Azure Bot Service (Teams/Web Chat) → Orchestration layer (Dialogs/Adaptive, Recognizer) → RAG (Azure AI Search) → Azure OpenAI (system/prompt templates, tools/function-calling) → policies (content filter, guardrails) → telemetry (App Insights, OTel) → state (Cosmos/Blob) → Key Vault/MI for secrets; support streaming, fallback, and RBAC.
  • How do you build a retrieval-augmented generation (RAG) system with OpenAI + Azure Cognitive Search?

    • Answer: Ingest docs → clean/chunk with overlap → embed & index (hybrid BM25+vector) → query: keyword + vector + filters → rerank (optional) → ground the prompt with top-k snippets + citations → call model with low temperature/JSON output → cache & log → feedback loop for re-ranking.
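
    The chunk-with-overlap step can be sketched character-based for brevity; production pipelines usually split on tokens or sentence boundaries, and the sizes here are illustrative:

    ```typescript
    // Hedged sketch: fixed-size windows with overlap so context spanning a
    // chunk boundary survives into the next chunk.
    function chunkText(text: string, size: number, overlap: number): string[] {
      if (size <= 0 || overlap >= size) throw new Error("need 0 <= overlap < size");
      const chunks: string[] = [];
      for (let start = 0; start < text.length; start += size - overlap) {
        chunks.push(text.slice(start, start + size));
        if (start + size >= text.length) break; // last window reached the end
      }
      return chunks;
    }
    ```
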
  • How would you add semantic search to an existing SaaS product?

    • Answer: Backfill embeddings for entities/content; add a vector index (Azure AI Search/Redis/Pinecone) alongside current keyword search; expose a /search endpoint supporting hybrid mode, filters, and tenant scoping; update UI with semantic highlights, facets, and “did-you-mean”; measure uplift via A/B.
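
    The hybrid mode can be sketched as a blend of a normalized keyword score and a vector cosine similarity. The 0.5 weight and the normalization scheme are assumptions to tune per corpus; managed services like Azure AI Search do this fusion (e.g., via RRF) internally:

    ```typescript
    // Hedged sketch: cosine similarity plus a weighted hybrid score.
    function cosineSimilarity(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    // keywordScore is normalized by the best keyword score in the result set;
    // cosine is mapped from [-1, 1] to [0, 1] before blending.
    function hybridScore(
      keywordScore: number,
      maxKeywordScore: number,
      cosine: number,
      alpha = 0.5, // weight of the keyword side; an assumption to tune
    ): number {
      const keyword = maxKeywordScore > 0 ? keywordScore / maxKeywordScore : 0;
      const vector = (cosine + 1) / 2;
      return alpha * keyword + (1 - alpha) * vector;
    }
    ```
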
  • How do you prevent hallucinations in AI-generated responses?

    • Answer: Ground with RAG + show citations, lower temperature, constrain outputs (JSON schema/tool calls), add answerability checks (“say I don’t know”), domain guardrails/policies, HITL for high-risk flows, and monitor hallucination/groundedness metrics.
  • How do you log and audit AI outputs for compliance (GDPR, HIPAA)?

    • Answer: Structured logs of prompt/version, model, params, tool calls, doc IDs, traceId; redact PII/PHI (hash/tokenize), encrypt at rest (CMKs), private networking, least-privilege access, retention & DSR workflows (access/erasure), immutable audit trail (append-only store), and environment isolation (no prod data in test).

🔹 Teamwork, Leadership & Communication

1. Mentorship & Collaboration

  • How do you mentor junior developers in your team?

    • Answer: Set a 30/60/90 plan, pair weekly, agree on learning goals, rotate them through features and bugfixes, do guided code reviews, share checklists, and celebrate small wins. Create psychological safety and give frequent, actionable feedback.
  • Can you describe a time when you helped a teammate overcome a technical challenge?

    • Answer: A teammate struggled with flaky integration tests. I paired to isolate nondeterminism, introduced test containers, stabilized async waits, and added a CI retry/quarantine workflow. Flakes dropped >90% and the teammate documented the fix for the team.
  • How do you foster knowledge sharing within a distributed team?

    • Answer: Docs-as-code in the repo, short RFCs/ADRs, weekly “brown-bag” demos, recorded Loom videos, rotating “guilds,” and office hours. Make PRs educational (context + why), and tag subject-matter owners.
  • What’s your approach to pair programming and code reviews?

    • Answer: Prefer driver–navigator with role swaps; small, focused PRs; checklists (tests, security, perf). In reviews: prioritize correctness and clarity, suggest not dictate, link to standards, and approve quickly with follow-ups for nits.
  • How do you handle disagreements in technical discussions?

    • Answer: Align on the problem and success metrics, list options with pros/cons, run a time-boxed spike or A/B, decide via an agreed rule (e.g., owner decides after feedback), document in an ADR, and move on respectfully.

2. Communication & Stakeholder Alignment

  • How do you explain complex technical decisions to non-technical stakeholders?

    • Answer: Translate to outcomes, cost, and risk. Use visuals and plain language, offer 2–3 options with trade-offs and a recommendation, show timelines and blast radius, and commit to a phased, measurable plan.
  • Can you describe a time when you had to convince business stakeholders about a technical trade-off?

    • Answer: Proposed paying down API latency debt before a feature push. I showed p95 impact on conversion, a 2-sprint plan with canary metrics, and a de-risked rollout. We shipped early wins; conversion improved and support tickets dropped.
  • How do you ensure alignment between business goals and engineering priorities?

    • Answer: Tie work to OKRs, use outcome-based roadmaps, score via RICE/WSJF, include non-functional work on the roadmap, demo progress biweekly, and revisit priorities in monthly reviews with data.
  • What’s your approach to handling conflicting requirements from multiple stakeholders?

    • Answer: Clarify objectives, surface conflicts in a single doc, define must-haves vs nice-to-haves, propose sequencing or scope cuts, facilitate a decision with Product, and record the outcome and rationale.
  • How do you communicate project risks early to prevent surprises?

    • Answer: Maintain a risk register with owners/mitigations, publish weekly RAG status, set early warning thresholds (latency, burn-down), run pre-mortems, and escalate early with options, not just problems.

3. Decision-Making & Ownership

  • How do you balance delivery speed vs. long-term maintainability (technical debt)?

    • Answer: Define a debt budget per quarter, use expand/contract DB changes, feature-flag risky work, and require a refactor ticket for any shortcut. Track impact via error/latency deltas and schedule paydown in each sprint.
  • Can you describe a situation where you had to make a difficult trade-off between scope, time, and quality?

    • Answer: Time-boxed an MVP: cut non-critical reports, kept security & observability non-negotiable, shipped behind a beta flag, and committed to a two-sprint hardening phase with metrics-based exit criteria.
  • How do you prioritize tasks when multiple features or bugs compete for attention?

    • Answer: Use RICE/WSJF + cost-of-delay, consider SLO impact and customer escalations, then publish a ranked list with owners and due dates; revisit weekly with data.
  • What’s your decision-making process when choosing between two competing technical solutions?

    • Answer: Frame goals & constraints, compare options with a scorecard (cost, risk, latency, operability), run a spike/POC, gather stakeholder input, decide via DACI, document an ADR, and set a review checkpoint.
  • How do you take ownership when a project encounters setbacks or failures?

    • Answer: Own the outcome publicly, stabilize first, communicate timelines, run a blameless postmortem, implement action items with owners/dates, and update runbooks/alerts to prevent recurrence.

4. Leadership & Influence

  • How do you set expectations and hold your team accountable for quality and deadlines?

    • Answer: Define DoR/DoD, SLAs/SLOs, and coding standards; break work into small PRs, make quality gates mandatory, publish a visible roadmap, and track progress with weekly RAG + risk register.
  • What’s your approach to building a culture of continuous improvement?

    • Answer: Short retros every sprint, visible improvement backlog, rotate facilitators, reward fixes to toil, run post-incident reviews, and measure trends (lead time, change fail rate, MTTR).
  • How do you identify and nurture future leaders within your team?

    • Answer: Look for ownership signals (proactive comms, mentoring, quality). Give stretch projects, delegate decision rights, pair on architecture docs, and provide feedback + coaching plans.
  • How do you adapt your leadership style for senior vs junior team members?

    • Answer: Juniors: directive coaching, tighter checkpoints, more pairing. Seniors: context and outcomes, autonomy in solution, periodic syncs on risks and trade-offs.
  • How do you handle underperforming developers?

    • Answer: Diagnose root causes, set clear, measurable goals with timelines, increase feedback cadence, provide mentorship/training, reduce WIP; escalate to a PIP only after support attempts.

5. Remote Work & Team Dynamics

  • What’s your approach to working in a remote-first or hybrid team?

    • Answer: Async-first: decisions in docs, recorded demos, meeting notes with owners; define overlap hours, and bias toward written RFCs/ADRs.
  • How do you ensure clear communication across time zones?

    • Answer: Use shared source of truth (tracker + doc), handoff templates, rotate meeting times, and SLAs for responses; prefer threads over DMs.
  • How do you maintain team morale during stressful delivery periods?

    • Answer: Set realistic scopes, protect focus time, celebrate milestones, rotate on-call, enforce no-meeting blocks, and schedule recovery after crunch.
  • How do you prevent knowledge silos in distributed projects?

    • Answer: Code ownership maps, pair/mob sessions, docs-as-code, internal tech talks, and bus-factor reviews before releases.
  • What tools and practices have you found most effective for remote collaboration?

    • Answer: Azure DevOps/Jira for tracking, GitHub/PR templates for reviews, Miro/FigJam for design, Confluence/Notion for docs, Loom/Teams for async video, and Slack/Teams with clear channel conventions.