.NET Architectural Patterns: Best Practices for Scalable, Maintainable Applications

When building modern applications in .NET, developers are often faced with a choice of architectural patterns that can help structure the codebase, improve scalability, and ensure long-term maintainability. Architectural patterns provide a blueprint for organizing code in a way that supports clean, modular, and testable systems. In this article, we’ll explore some of the most popular .NET architectural patterns — each with its benefits and use cases — and walk through how they work with practical examples.

Service-Oriented Architecture (SOA)

Service-Oriented Architecture (SOA) is a design pattern that encourages the development of software components that provide services to other components over a network. SOA is typically used for large-scale, enterprise-level systems where services are loosely coupled, and communication happens via standardized protocols like HTTP or SOAP.

Example: In this example, OrderService is the service that delegates the responsibility of payment processing to another service, IPaymentService.

public class OrderService
{
    private readonly IPaymentService _paymentService;

    public OrderService(IPaymentService paymentService)
    {
        _paymentService = paymentService;
    }

    public void PlaceOrder(Order order)
    {
        // Business logic for placing the order
        _paymentService.ProcessPayment(order);
    }
}

This code sample demonstrates Service-Oriented Architecture (SOA) because it follows the principle of modularizing services into distinct, independently functioning components, which is a core concept of SOA. Here’s why this example fits:

  1. Separation of Concerns: The OrderService class encapsulates the business logic related to placing an order, while delegating the responsibility of processing the payment to the IPaymentService service. This separation ensures that each service focuses on a single concern, which is a hallmark of SOA.
  2. Loose Coupling: The OrderService is loosely coupled to the IPaymentService. It does not directly implement the payment processing logic; instead, it depends on the abstraction (IPaymentService). This makes it easier to change or replace the payment service without affecting the order service, which is key to maintaining flexibility and scalability in SOA.
  3. Reusability and Maintainability: By defining a PlaceOrder method in the OrderService and a ProcessPayment method in the IPaymentService, you can reuse both services in different contexts. For instance, if you have a ShippingService or a CustomerService, you could have them interact with the OrderService without needing to reimplement payment logic. This promotes reusability and maintainability, key benefits of SOA.
  4. Service Interaction: The interaction between services (OrderService and IPaymentService) demonstrates how different services communicate with each other to fulfill a business process (placing an order with payment processing). This aligns with the concept of services interacting over a network, which is a foundational idea in SOA.
  5. Abstraction and Interface Use: The use of an interface (IPaymentService) allows for the flexibility to implement different types of payment services (e.g., credit card, PayPal, etc.), adhering to the SOA principle of having service implementations abstracted behind interfaces or service contracts.
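To make the abstraction point concrete, here is a minimal sketch of the interface and one possible implementation. The CreditCardPaymentService class, the LastProcessed property, and the Order shape are illustrative assumptions, not part of the original sample:

```csharp
public class Order
{
    public int Id { get; set; }
}

public interface IPaymentService
{
    void ProcessPayment(Order order);
}

// One possible implementation; a PayPalPaymentService could be swapped in
// without touching OrderService, since both sides depend on the interface.
public class CreditCardPaymentService : IPaymentService
{
    public Order? LastProcessed { get; private set; } // recorded for demonstration

    public void ProcessPayment(Order order)
    {
        // A real implementation would call a payment gateway here.
        LastProcessed = order;
    }
}
```

Because OrderService only sees IPaymentService, registering a different implementation in the DI container is all it takes to change payment providers.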

Clean Architecture

Clean Architecture aims to create a system where the business logic (core) is isolated from the external components like databases, UI, and frameworks. This separation promotes maintainability and testability, making the system adaptable to future changes without disrupting the core logic.

Key Concept: The core business logic is placed in the center, while other concerns like UI, frameworks, and data access exist in the outer layers.

Example: This sample follows this layered approach by separating the concerns of business logic (OrderService) from data access (IOrderRepository).

public interface IOrderRepository
{
    void SaveOrder(Order order);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void PlaceOrder(Order order)
    {
        // Core business logic
        _repository.SaveOrder(order);
    }
}

This sample code demonstrates Clean Architecture because it adheres to several principles of Clean Architecture, such as separation of concerns, dependency inversion, and the decoupling of core business logic from infrastructure concerns. Here’s how this code aligns with Clean Architecture principles:

1. Separation of Concerns

  • The OrderService contains the core business logic of placing an order but does not concern itself with how the order is saved. This responsibility is delegated to the IOrderRepository interface. Clean Architecture emphasizes keeping business logic (the “core” or “domain”) separate from external concerns, like data persistence, UI, or external services.
  • The IOrderRepository interface defines the contract for saving orders, which could be implemented in different ways (e.g., using a database, a file system, or an API). This keeps the OrderService focused solely on the business logic of placing an order.

2. Dependency Inversion Principle (DIP)

  • The OrderService class depends on the abstraction (IOrderRepository), not on a concrete implementation. This adheres to the Dependency Inversion Principle, which is one of the key principles of Clean Architecture. The core business logic (in OrderService) should not depend on the details of infrastructure services like databases or file systems.
  • By depending on an interface rather than a concrete implementation, the OrderService remains flexible and testable. You could easily swap out IOrderRepository implementations (e.g., switching from a SQL database to a NoSQL database) without changing the business logic in OrderService.

3. Decoupling Core Logic from Infrastructure

  • The OrderService is decoupled from the infrastructure (e.g., how the order is saved) by depending on the interface IOrderRepository. This decoupling is central to Clean Architecture, which advocates for isolating the core logic of the application (business rules) from frameworks, libraries, and external systems.
  • The implementation details of SaveOrder are not known by OrderService, only that the repository will save the order. This allows the repository to be easily swapped out or modified without impacting the core business logic of placing orders.

4. Testability

  • Since OrderService relies on the IOrderRepository interface, it is much easier to mock or stub the repository during unit testing. This decoupling allows for isolated testing of the business logic, as you can mock the repository to simulate saving the order without needing a real database.

5. Flexible and Maintainable

  • Clean Architecture encourages creating software that can evolve over time without major rewrites. If, for example, you wanted to change how orders are saved (e.g., adding more complex logic or switching databases), you could do so by just modifying the implementation of the IOrderRepository interface. The OrderService class would remain unchanged, demonstrating how Clean Architecture supports long-term maintainability and adaptability.

6. Layered Approach

In Clean Architecture, the application is often structured into layers, such as:

  • Core (Domain): Contains the business logic (represented by OrderService and the IOrderRepository interface).
  • Infrastructure: Contains the implementation of the IOrderRepository (e.g., database access code).
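As a sketch of that split, an in-memory infrastructure implementation could plug into the same core contract. The InMemoryOrderRepository class is an assumption for illustration; a production implementation would typically wrap EF Core or another data store:

```csharp
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
}

// Core (domain) contract, as in the article.
public interface IOrderRepository
{
    void SaveOrder(Order order);
}

// Infrastructure-layer adapter; it can be replaced by a SQL- or
// file-backed implementation without changing the core OrderService.
public class InMemoryOrderRepository : IOrderRepository
{
    public List<Order> Saved { get; } = new();

    public void SaveOrder(Order order) => Saved.Add(order);
}
```

The core layer never references the in-memory (or SQL) class directly; only the composition root wires the concrete type to the interface.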

Command Query Responsibility Segregation (CQRS)

CQRS is a pattern where the responsibility for reading and writing data is separated into two distinct models. This helps optimize the performance, scalability, and security of the system by ensuring that command (write) operations don’t interfere with query (read) operations.

Example: In this example, CreateOrderCommand is used to handle write operations, while OrderQuery handles read operations. This separation allows for optimized handling of different use cases.

public class CreateOrderCommand
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class OrderQuery
{
    // Assumes a repository interface exposing a read method such as GetOrderById.
    private readonly IOrderRepository _repository;

    public OrderQuery(IOrderRepository repository)
    {
        _repository = repository;
    }

    public Order GetOrderById(int orderId)
    {
        return _repository.GetOrderById(orderId);
    }
}

This sample code is a good example of CQRS (Command Query Responsibility Segregation) because it clearly demonstrates the separation of concerns between operations that modify state (commands) and operations that retrieve state (queries). Let’s break down why this is a good example of CQRS:

1. Separation of Commands and Queries

  • Command: The CreateOrderCommand class is an example of a command in CQRS. A command is used to perform an action that changes the system’s state. In this case, the CreateOrderCommand encapsulates the data required to create an order (e.g., ProductId and Quantity). The primary role of this command is to instruct the system to create a new order.
  • Query: The OrderQuery class is an example of a query in CQRS. Queries are used to retrieve data but not modify it. In this case, the GetOrderById method retrieves an order from the repository, but it doesn’t change the state of the system. It only reads from the data source.

2. Distinct Models for Commands and Queries

In CQRS, commands and queries should not share models because they serve different purposes. Commands are about requesting changes to the state (e.g., creating or updating an order), while queries are about fetching data. This sample clearly shows two separate classes:

  • CreateOrderCommand for encapsulating the action of creating an order (modifying state).
  • OrderQuery for fetching an order (retrieving data).

By keeping commands and queries separate, you achieve a better separation of concerns, making it easier to scale, optimize, and maintain the system.

3. Focus on Read/Write Segregation

  • Command Handling: The CreateOrderCommand would likely be processed by a command handler (not shown in the sample, but would typically be part of the CQRS implementation), which would perform the action of creating the order in the system. This handler would handle the business logic, validation, and state-changing operations.
  • Query Handling: The OrderQuery class focuses on fetching an order using the IOrderRepository. It does not change any state, just retrieves and returns data. This shows how the query side can be optimized separately from the write side, potentially with a different data store, index, or read-optimized database (a common optimization in CQRS).
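Since the command handler is not shown in the sample, here is a hedged sketch of what one might look like. The handler name, the in-memory write store, and the validation rule are all assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

public class CreateOrderCommand
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class Order
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// Write-side handler: validates and changes state; it returns an id,
// not a read model, keeping reads and writes segregated.
public class CreateOrderCommandHandler
{
    private readonly List<Order> _writeStore = new(); // stands in for the write model

    public int Handle(CreateOrderCommand command)
    {
        // Business validation belongs on the command side.
        if (command.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive.");

        _writeStore.Add(new Order { ProductId = command.ProductId, Quantity = command.Quantity });
        return _writeStore.Count; // e.g., a newly assigned order id
    }
}
```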

4. Scalability and Optimization

In CQRS, the command side (write) and the query side (read) can be scaled independently. The code sample hints at this by separating the command (CreateOrderCommand) and the query (OrderQuery) responsibilities.

  • The command side might involve complex validation, transactional operations, or integrations that require different processing or scaling.
  • The query side can be optimized for fast retrieval of data, such as using read-optimized databases, caching mechanisms, or denormalization for faster access.

5. Potential for Different Data Models

In more advanced implementations of CQRS, the data models for commands and queries can even be different. For example, the data needed to create an order (command) might differ from what’s needed for querying an order (e.g., returning a summary of the order vs. full order details). This allows for fine-tuning and optimization for both reads and writes.

Though this sample does not show this explicitly, it aligns with the idea that commands and queries are optimized independently, often leading to the possibility of having different models for each.

6. Enhanced Maintainability and Flexibility

By separating the command and query logic, you make the system easier to maintain and extend. For example, if you needed to change how orders are queried (e.g., adding new filters or joins), you can modify the OrderQuery class without touching the logic for creating orders. Similarly, changes to how orders are created can be made in the CreateOrderCommand and its associated handler without affecting the read operations.

This clean separation improves flexibility, as the command side and query side can evolve independently without causing issues for each other.

Mediator Pattern

The Mediator pattern reduces the complexity of communication between objects by providing a mediator class that handles all interactions. In .NET, this pattern is often implemented with libraries like MediatR to simplify communication between components.

Example: In this example, CreateOrderHandler is responsible for handling the CreateOrderCommand, and communication happens through a mediator (in .NET, typically the MediatR library).

public class CreateOrderCommand : IRequest
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class CreateOrderHandler : IRequestHandler<CreateOrderCommand>
{
    public Task Handle(CreateOrderCommand request, CancellationToken cancellationToken)
    {
        // Handle order creation logic
        return Task.CompletedTask;
    }
}

This is a good example of the Mediator pattern because it demonstrates the key concepts of decoupling the sender and receiver of a request, centralizing the communication through a mediator, and handling requests via handlers. Let’s break down why this code is a good demonstration of the Mediator pattern:

1. Decoupling Sender and Receiver

In the Mediator pattern, the sender (who triggers the request) and the receiver (who handles the request) do not directly interact with each other. Instead, they communicate through a mediator. This decoupling makes the system easier to maintain and extend because new handlers or requests can be added without modifying the sender or other parts of the system.

  • In the provided example, the CreateOrderCommand (the request) does not directly know about how the order will be handled. The CreateOrderCommand is a simple data structure that encapsulates the data necessary for the order creation (e.g., ProductId and Quantity).
  • The CreateOrderHandler is the handler that knows how to process the CreateOrderCommand. The mediator (likely provided by a framework such as MediatR) takes care of dispatching the command to the correct handler (CreateOrderHandler), ensuring that the sender (the part of the system that triggers the order creation) doesn’t need to know about the handler or its logic.

2. Centralized Request Handling

The Mediator pattern promotes centralized request handling. The IRequestHandler<TRequest> interface and its implementation (CreateOrderHandler) serve as the handlers for specific requests.

  • By implementing IRequestHandler<CreateOrderCommand>, the CreateOrderHandler is the designated handler for any CreateOrderCommand that is dispatched. The mediator pattern centralizes the logic of finding and invoking the correct handler for a request, reducing the need for explicit coupling between components.

3. Single Responsibility and Clear Separation

  • The CreateOrderCommand encapsulates the data required to make a request (product ID and quantity). The CreateOrderHandler is the component responsible for processing the command and implementing the business logic for order creation. By using the Mediator pattern, each component focuses on its specific responsibility:
      • The command is just data.
      • The handler contains the logic to process that data.
  • This separation leads to better maintainability and extensibility since you can easily add more handlers for other commands without modifying existing logic.

4. Flexible and Extensible Architecture

  • The Mediator pattern allows the system to grow and evolve without tightly coupling different parts of the system. For example, if you need to add additional features to the order creation process (e.g., notifying a shipping service, applying discounts, or logging), you can create additional handlers for these tasks, and the mediator will take care of dispatching all of them when the CreateOrderCommand is issued.
  • The MediatR library (or similar frameworks) provides a straightforward way to implement this pattern. The mediator coordinates the communication and invokes handlers for different types of requests, allowing for composability of business logic.

5. Asynchronous Handling

The CreateOrderHandler’s Handle method returns a Task (here Task.CompletedTask), so handlers can process requests asynchronously. This is a common pattern in Mediator implementations, making it easier to integrate non-blocking operations (e.g., database access, API calls) into the workflow. The mediator keeps the overall flow simple, and each handler can independently perform its specific task asynchronously.

6. Simplicity in Command Dispatching

  • The use of the IRequest interface and IRequestHandler<TRequest> interface allows you to easily define and dispatch commands. For example, you don’t need to manually wire up the CreateOrderCommand with its handler. Instead, you can simply send the command, and the mediator will take care of finding and invoking the correct handler (CreateOrderHandler).
  • This reduces boilerplate code and makes command dispatching easier to maintain.

7. Easier Testing

  • The Mediator pattern makes unit testing easier because each handler can be tested independently without needing to worry about how the request is triggered or dispatched. You can focus on testing the logic in CreateOrderHandler without worrying about the larger flow of request handling.
  • Additionally, because the CreateOrderCommand is decoupled from the CreateOrderHandler, you can easily mock dependencies, making unit tests more isolated and focused.
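To show the dispatch mechanics without the library, here is a toy mediator sketch. MediatR provides this behavior via IMediator.Send; the SimpleMediator class, its Register method, and the synchronous handler shape below are purely illustrative:

```csharp
using System;
using System.Collections.Generic;

public interface IRequest<TResponse> { }

public interface IRequestHandler<TRequest, TResponse> where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest request);
}

// Toy mediator: looks up the registered handler by request type and
// invokes it, so the sender never references the handler directly.
public class SimpleMediator
{
    private readonly Dictionary<Type, Func<object, object>> _handlers = new();

    public void Register<TRequest, TResponse>(IRequestHandler<TRequest, TResponse> handler)
        where TRequest : IRequest<TResponse>
        => _handlers[typeof(TRequest)] = r => handler.Handle((TRequest)r);

    public TResponse Send<TResponse>(IRequest<TResponse> request)
        => (TResponse)_handlers[request.GetType()](request);
}

public class CreateOrderCommand : IRequest<bool>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class CreateOrderHandler : IRequestHandler<CreateOrderCommand, bool>
{
    public bool Handle(CreateOrderCommand request) => request.Quantity > 0;
}
```

The sender only calls Send with a command; the mediator resolves the matching handler, which is the decoupling described above.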

Repository Pattern

The Repository pattern abstracts the data access layer, providing a clean interface to interact with the data storage. This is beneficial for maintaining separation of concerns and improving testability by allowing the use of mock repositories during testing.

Example: This example demonstrates the Repository pattern by encapsulating data access logic for Order entities within the OrderRepository class, providing a simplified interface (IOrderRepository) for retrieving and saving orders. This approach decouples the business logic from the data access layer, enhancing maintainability, flexibility, and testability.

public interface IOrderRepository
{
    Task<Order> GetOrderByIdAsync(int orderId);
    Task AddOrderAsync(Order order);
}

public class OrderRepository : IOrderRepository
{
    private readonly ApplicationDbContext _context;

    public OrderRepository(ApplicationDbContext context)
    {
        _context = context;
    }

    public async Task<Order> GetOrderByIdAsync(int orderId)
    {
        return await _context.Orders.FindAsync(orderId);
    }

    public async Task AddOrderAsync(Order order)
    {
        await _context.Orders.AddAsync(order);
        await _context.SaveChangesAsync();
    }
}

This is a good example of the Repository pattern because it effectively abstracts the data access logic, encapsulates persistence operations, and provides a clean interface for interacting with the data layer. Here’s a breakdown of why this code demonstrates the Repository pattern:

1. Encapsulation of Data Access Logic

The Repository pattern is about abstracting the underlying data access code to provide a simplified and cohesive interface for working with the data. In this example:

  • The IOrderRepository interface defines the contract for interacting with orders in the system, specifically the methods GetOrderByIdAsync and AddOrderAsync.
  • The OrderRepository class implements this interface, and the actual logic of fetching and saving data is abstracted away within the repository methods (GetOrderByIdAsync and AddOrderAsync).

This keeps the rest of the application decoupled from the specifics of how data is stored and retrieved (e.g., using an ApplicationDbContext and Entity Framework in this case). The application code doesn’t need to know how the orders are fetched or saved—it simply relies on the repository interface.

2. Separation of Concerns

By using the Repository pattern, the code clearly separates the data access logic (which is inside OrderRepository) from the business logic and UI layers (which can call methods on IOrderRepository).

  • This improves maintainability and testability because:
      • The business logic doesn’t directly interact with the database or persistence layer.
      • If you need to change the way data is fetched or persisted (for example, switching from Entity Framework to another ORM), you only need to modify the OrderRepository and not the rest of your application.

3. Simplified Data Interaction

  • The IOrderRepository provides a high-level abstraction for interacting with Order entities. Instead of calling EF Core directly to query or update the database in business logic or controller classes, you call the repository methods (GetOrderByIdAsync, AddOrderAsync).
  • This simplifies data interaction and makes it easier for the rest of the system to work with the data, as the repository exposes simple, well-defined methods for common operations on the data.

4. Consistency in Data Access

The Repository pattern ensures that all data access logic is centralized in one place, ensuring consistent behavior when interacting with data.

  • For example, if you want to change how orders are retrieved (e.g., applying additional filters, logging, or transforming data), you can do so in the repository, and it will apply consistently across the entire application.
  • This also means that the application is not scattered with duplicate code for querying or saving data, improving code quality and reducing errors.

5. Asynchronous Operations

  • The repository methods (GetOrderByIdAsync and AddOrderAsync) are asynchronous (Task), allowing for non-blocking I/O operations when interacting with the database.
  • This is a good practice in modern applications to ensure that the UI or other operations don’t block while data is being fetched or persisted. The Repository pattern abstracts away the asynchronous nature of these operations from the business logic or UI layer, simplifying the code and improving performance.

6. Testability

  • One of the key benefits of the Repository pattern is that it makes testing easier. Because the business logic depends on the IOrderRepository interface, you can easily mock or stub the repository in unit tests, allowing you to test your business logic without depending on an actual database.
  • For instance, in unit tests, you can mock the IOrderRepository interface to simulate the behavior of the repository without needing to connect to a real database, ensuring that tests are fast, isolated, and independent of external dependencies.
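A sketch of such a test double follows. The FakeOrderRepository class is an assumption for illustration; in practice a mocking library such as Moq is often used instead:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    Task<Order> GetOrderByIdAsync(int orderId);
    Task AddOrderAsync(Order order);
}

// In-memory test double: same contract as the real repository,
// no database required, so unit tests stay fast and isolated.
public class FakeOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new();

    public Task<Order> GetOrderByIdAsync(int orderId) =>
        Task.FromResult(_orders[orderId]);

    public Task AddOrderAsync(Order order)
    {
        _orders[order.Id] = order;
        return Task.CompletedTask;
    }
}
```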

7. Maintaining Flexibility in Data Sources

  • If you need to switch the underlying data source in the future (for example, migrating from a relational database to a NoSQL database), you can change the implementation of OrderRepository without affecting the higher layers of your application.
  • This makes the system more flexible and decoupled from the specific technology or database being used, which can be beneficial for long-term scalability.

8. Consistency with Domain-Driven Design

  • In the context of Domain-Driven Design (DDD), repositories represent a collection of domain objects (e.g., Order in this case). The repository pattern is a natural fit for DDD because it allows the application to interact with domain objects without exposing the underlying persistence mechanisms.
  • The repository exposes methods that align with domain operations (e.g., retrieving an order by its ID or adding a new order), allowing developers to focus on the domain logic and business requirements rather than worrying about how data is stored or retrieved.

Vertical Slice Architecture

Vertical Slice Architecture organizes code by features rather than by layers. This approach encourages the development of small, self-contained slices of functionality, where each slice includes everything needed to implement a feature, from UI and business logic to data access.

Example: In Vertical Slice Architecture, the PlaceOrder feature contains everything related to that functionality in one slice, promoting modularity and feature-driven development.

// Feature for placing an order
public class PlaceOrder
{
    public class Command : IRequest<bool>
    {
        public int ProductId { get; set; }
        public int Quantity { get; set; }
    }

    public class Handler : IRequestHandler<Command, bool>
    {
        private readonly IOrderRepository _repository;

        public Handler(IOrderRepository repository)
        {
            _repository = repository;
        }

        public async Task<bool> Handle(Command request, CancellationToken cancellationToken)
        {
            // Place order logic here
            await _repository.AddOrderAsync(new Order { ProductId = request.ProductId, Quantity = request.Quantity });
            return true;
        }
    }
}

This is a good example of Vertical Slice Architecture because it encapsulates all components needed to handle a specific feature (placing an order) within a single, cohesive unit or “slice.” In this case, the PlaceOrder feature includes:

  1. Command: Defines the data required for the operation (product ID and quantity).
  2. Handler: Contains the business logic for processing the command (placing the order).
  3. Repository Interaction: The handler interacts directly with the repository to persist the order.

Each slice is independent and self-contained, focusing on a single feature or use case, which makes the code more maintainable, easier to test, and scalable. By grouping the command, handler, and data access logic together, Vertical Slice Architecture avoids the complexity of layering (such as in traditional CRUD operations) and fosters clearer, more modular development.

Microservices Architecture

Microservices Architecture decomposes applications into small, loosely coupled, independently deployable services, each of which implements a specific business function. Microservices communicate over lightweight protocols, such as HTTP or gRPC, and have their own independent databases.

Example: This example illustrates the autonomous, domain-focused, and API-driven nature of microservices, where each service is responsible for a specific task and can be independently deployed, updated, and scaled.

[Route("api/payment")]
public class PaymentController : ControllerBase
{
    private readonly IPaymentProcessor _paymentProcessor;

    public PaymentController(IPaymentProcessor paymentProcessor)
    {
        _paymentProcessor = paymentProcessor;
    }

    [HttpPost]
    public IActionResult ProcessPayment(Payment payment)
    {
        var result = _paymentProcessor.Process(payment);
        return Ok(result);
    }
}

This is a good code sample for Microservices Architecture because it demonstrates the core principles of microservices:

  1. Single Responsibility: The PaymentController focuses solely on the task of handling payment requests, embodying the microservice principle of a service being responsible for a specific domain or business capability (in this case, payment processing). The payment logic is encapsulated in the IPaymentProcessor, adhering to the idea of service independence.
  2. API-First Approach: The controller exposes a RESTful API (POST /api/payment), which is a key characteristic of microservices. Each microservice in this architecture typically communicates via HTTP-based APIs, allowing it to be easily consumed by other services or clients, enabling loose coupling between services.
  3. Separation of Concerns: The PaymentController does not handle the details of processing the payment, which is delegated to the IPaymentProcessor service. This separation makes it easy to change or update the payment processing logic without affecting the controller or the microservice API, aligning with microservices’ design principles of modularity and maintainability.
  4. Scalability: Because each service (e.g., payment processing) is isolated and handles its own domain, it can be scaled independently based on demand. If payment processing needs to handle a higher load, the payment service can be scaled without affecting other services.
  5. Independence and Flexibility: The PaymentController is an independent service that can evolve or be deployed independently of other services. This is fundamental in microservices, where each service is autonomous, and changes to one service (such as adding new payment methods or updating business rules) can be implemented without disrupting the overall system.
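The IPaymentProcessor abstraction the controller depends on is not shown in the sample; a minimal sketch might look like the following. The Payment and PaymentResult shapes and the demo approval rule are assumptions for illustration:

```csharp
using System;

public record Payment(decimal Amount, string Currency);
public record PaymentResult(bool Succeeded, string TransactionId);

public interface IPaymentProcessor
{
    PaymentResult Process(Payment payment);
}

// Demo implementation: approves any positive amount. A real processor
// would call the payment provider's API and handle failures and retries.
public class DemoPaymentProcessor : IPaymentProcessor
{
    public PaymentResult Process(Payment payment) =>
        new(payment.Amount > 0, Guid.NewGuid().ToString("N"));
}
```

Keeping this logic behind an interface means the payment microservice can change providers or rules without any change to the controller's API surface.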

Hexagonal Architecture (Ports and Adapters)

Hexagonal Architecture focuses on decoupling the core business logic from external systems like databases, APIs, and user interfaces by introducing ports (interfaces) and adapters (implementations). This approach helps ensure that the core logic can remain agnostic to the external infrastructure.

Example: The CustomerService is independent of the infrastructure, making the system flexible and easy to test.

public interface ICustomerRepository
{
    Customer GetCustomer(int id);
}

public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetCustomer(int id)
    {
        // SQL implementation to retrieve customer (omitted)
        throw new NotImplementedException();
    }
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomerInfo(int id)
    {
        return _repository.GetCustomer(id);
    }
}

This is a good example of Hexagonal Architecture (also known as the Ports and Adapters pattern) because it clearly separates the core business logic from external dependencies, such as the data access layer.

1. Core Logic (CustomerService) is Independent of External Systems

  • In Hexagonal Architecture, the business logic (core) is designed to be independent of external systems like databases, APIs, or user interfaces.
  • In this example, CustomerService contains the business logic for interacting with customer data but does not directly deal with how the data is fetched or stored. It relies on an abstraction (ICustomerRepository) to access the data, which decouples it from the specific implementation (in this case, SQL).

2. Use of Interfaces as Ports

  • ICustomerRepository serves as a port in Hexagonal Architecture. It defines an abstraction for how customer data is accessed without specifying how or where the data is stored (whether in a SQL database, file system, or external API).
  • This interface allows CustomerService to remain agnostic to the actual data access mechanism, which means the business logic doesn’t need to change if the implementation of the repository changes (e.g., from SQL to NoSQL or from a file-based to a cloud-based storage solution).

3. Adapters for External Systems

  • SqlCustomerRepository is an adapter that implements the ICustomerRepository interface. It contains the specific implementation for interacting with the database (in this case, SQL).
  • The repository adapter allows external systems (such as the database) to communicate with the core business logic through the defined ports, ensuring that the core service (CustomerService) does not depend on or know about the specifics of the database.

4. Decoupling of Internal and External Concerns

  • Hexagonal Architecture promotes decoupling the application’s core (business logic) from external systems and technologies. This ensures that the core remains stable and independent, while external systems (e.g., databases, UI, messaging queues) can evolve or change without impacting the core.
  • In this example, if the way customer data is retrieved changes (e.g., switching from SQL to an API), only the SqlCustomerRepository would need to change. The CustomerService would remain unchanged, making it easier to modify or replace external dependencies.

5. Testability

  • This architecture also improves testability. You can mock or stub the ICustomerRepository interface in unit tests for CustomerService, allowing you to test the business logic without needing to interact with the actual database. This makes unit tests isolated and independent from external resources, which is a key benefit of Hexagonal Architecture.
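To illustrate the testability point, here is a minimal, self-contained sketch of swapping the SQL adapter for a test double. The member shapes (GetById, Name) are assumptions for illustration only; the actual ICustomerRepository in your codebase may define different members.

```csharp
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string GetCustomerName(int id)
    {
        // Core business logic: it only knows the port, never the adapter
        return _repository.GetById(id).Name;
    }
}

// Test double standing in for SqlCustomerRepository -- no database needed
public class InMemoryCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) => new Customer { Id = id, Name = "Ada" };
}
```

In a unit test, `new CustomerService(new InMemoryCustomerRepository())` exercises the business logic in full isolation from any real data store.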

Event-Driven Architecture

Event-Driven Architecture revolves around the idea of services or components communicating with each other via events. Instead of making direct service calls, components publish and subscribe to events, enabling loose coupling and asynchronous communication.

Key Concepts:

  • Event Publisher: A component that triggers an event when something of interest happens (e.g., an order is placed).
  • Event Subscriber: A component that listens for specific events and reacts to them (e.g., sending an email after an order is placed).
  • Asynchronous Processing: Events can be processed asynchronously, improving performance and scalability.

How it Works in .NET: Event-driven systems in .NET are often implemented using message brokers like Azure Service Bus, RabbitMQ, or Kafka to handle the event publication and subscription model.

Example: This code demonstrates an event-driven architecture using a notification system in which an OrderPlacedEvent triggers the SendEmailWhenOrderPlacedHandler to send an email when an order is placed. The event is published and handled asynchronously via the mediator pattern (the INotification and INotificationHandler<T> abstractions here come from the MediatR library), decoupling event publishing from the handling logic.

using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class OrderPlacedEvent : INotification
{
    public int OrderId { get; set; }
}

public class SendEmailWhenOrderPlacedHandler : INotificationHandler<OrderPlacedEvent>
{
    public Task Handle(OrderPlacedEvent notification, CancellationToken cancellationToken)
    {
        // Send email logic here
        return Task.CompletedTask;
    }
}

// Raising the event (from inside an async method)
await _mediator.Publish(new OrderPlacedEvent { OrderId = order.Id });

This code block is a good example of Event-Driven Architecture (EDA) because it showcases the core concepts of event publishing, event handling, and decoupling between different components of the system. Here’s why:

1. Event Definition

  • OrderPlacedEvent is an event that is triggered when a new order is placed. It contains the data related to the event (OrderId), which in this case represents an event that is of interest to other parts of the system.
  • This event acts as a signal that something significant has occurred, allowing other components of the system to respond to it asynchronously.

2. Event Handler

  • SendEmailWhenOrderPlacedHandler is a listener or handler that responds to the OrderPlacedEvent. It implements INotificationHandler<OrderPlacedEvent>, meaning it will listen for any OrderPlacedEvent and execute logic when it receives that event.
  • The handler is decoupled from the event raiser (the publisher) and does not need to know where or how the event was triggered, making the system more flexible and scalable. Here, it is used to send an email when an order is placed, but you could easily replace it with different handlers (e.g., logging, updating inventory) without modifying the core event.

3. Event Publishing

  • _mediator.Publish(new OrderPlacedEvent { OrderId = order.Id }); is the action that raises the event. This publishes the event to the mediator (a central hub), which then ensures that all interested handlers (like SendEmailWhenOrderPlacedHandler) receive the event and can process it.
  • The publisher (_mediator.Publish) is decoupled from the handler logic, meaning that the system can raise events without having to know which services or actions will handle them. The responsibility of responding to events is completely delegated to the handlers, allowing new handlers to be added without altering existing components.

4. Asynchronous and Decoupled Execution

  • In event-driven architecture, components are often asynchronous and independent of each other. When the OrderPlacedEvent is published, it triggers any handler that is interested in that event, but it does not block the flow of the main system. This allows multiple handlers to react to the event in parallel (e.g., sending emails, updating systems, processing payments) without directly affecting the main process.
  • This also leads to greater scalability: new handlers can be added or removed without impacting the rest of the system.

5. Loose Coupling

One of the core benefits of Event-Driven Architecture is loose coupling. In this example:

  • The OrderPlacedEvent class doesn’t know about the SendEmailWhenOrderPlacedHandler.
  • The order placement logic (e.g., an OrderService) doesn’t need to know that an email will be sent when an order is placed. It only raises the event.
  • The handler (SendEmailWhenOrderPlacedHandler) does not need to know where the event is being triggered from. It only knows that it should respond to the OrderPlacedEvent.

This decoupling of components makes the system easier to modify, maintain, and extend without introducing breaking changes.

6. Scalability and Extensibility

  • With Event-Driven Architecture, additional functionality can be easily introduced by adding new event handlers for the same event. For example, if we wanted to add a new handler to update inventory or notify the shipping system about the order, we could do so without changing the order placement logic or the existing email handling logic.
  • This means the system is extensible and can scale horizontally to handle additional events and actions as the system grows.
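For instance, the inventory scenario mentioned above could be sketched as a second handler subscribed to the same event. The handler name is hypothetical; INotificationHandler<T> is the MediatR abstraction used in the earlier example.

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical second subscriber: reacts to the same OrderPlacedEvent
// without any change to the publisher or the existing email handler.
public class UpdateInventoryWhenOrderPlacedHandler : INotificationHandler<OrderPlacedEvent>
{
    public Task Handle(OrderPlacedEvent notification, CancellationToken cancellationToken)
    {
        // Decrement stock for the order's items here
        return Task.CompletedTask;
    }
}
```

When the same `_mediator.Publish(...)` call runs, MediatR dispatches the event to both handlers; the publishing code is untouched.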

Layered Architecture (N-Tier Architecture)

Layered Architecture, also known as N-Tier Architecture, is a traditional pattern where the application is divided into layers, each with a specific responsibility (e.g., Presentation, Business Logic, Data Access). This pattern encourages separation of concerns and helps organize the code into logical units.

Key Concepts:

  • Presentation Layer: Handles the user interface and user interactions.
  • Business Logic Layer: Contains business rules and logic.
  • Data Access Layer: Manages interaction with the database or other data sources.

How it Works in .NET: In .NET, Layered Architecture is implemented by organizing code into separate projects or namespaces, each representing a layer. Each layer depends on the layer below it (e.g., the presentation layer depends on the business logic layer).

Example: This code sample illustrates a layered architecture by separating concerns into three distinct layers: the Presentation layer (HomeController.cs) handles user interaction, the BusinessLogic layer (ProductService.cs) contains the core application logic, and the DataAccess layer (ProductRepository.cs) manages data retrieval and storage. This structure promotes modularity and maintainability by clearly defining the responsibilities of each layer.

/Presentation
HomeController.cs
/BusinessLogic
ProductService.cs
/DataAccess
ProductRepository.cs

This is a good example of Layered Architecture because it clearly follows the core principles of separating concerns into distinct layers, each responsible for a specific part of the application. Here’s how the code structure adheres to Layered Architecture:

1. Separation of Concerns

Presentation Layer (HomeController.cs):

  • The Presentation Layer is responsible for handling user interactions, such as receiving inputs, processing requests, and returning responses. In this case, the HomeController.cs handles HTTP requests, typically receiving data from the user (like form submissions or URL parameters) and passing that data to the business logic layer for processing.
  • The controller doesn’t contain any business rules or data access logic. Its sole purpose is to interact with the user interface and delegate the work to the other layers.

Business Logic Layer (ProductService.cs):

  • The Business Logic Layer contains the core business rules of the application. ProductService.cs encapsulates logic related to business operations, such as processing and validating orders, applying business rules, or manipulating data.
  • This layer is decoupled from how the data is stored and presented. The business logic only cares about performing operations and might rely on the data access layer to retrieve or save data.
  • The service layer (here ProductService) typically contains complex operations and acts as a bridge between the presentation and data access layers.

Data Access Layer (ProductRepository.cs):

  • The Data Access Layer is responsible for interacting with the database or any data storage system. ProductRepository.cs handles all the data operations, like fetching products from the database or saving changes to product records.
  • The repository abstracts the complexity of the database queries and provides a simple interface for the business logic layer to interact with the data store without directly dealing with database implementation details.

2. Encapsulation and Modularity

Each layer in this architecture encapsulates a specific set of responsibilities:

  • The Presentation Layer handles UI concerns (e.g., user input, view rendering).
  • The Business Logic Layer handles application-specific operations and ensures that the business rules are enforced.
  • The Data Access Layer handles the interaction with the database, abstracting any complexities related to how data is retrieved or persisted.

By separating these concerns, the code is easier to manage, test, and maintain. Each layer can evolve independently as long as the interfaces between them are respected.

3. Loose Coupling

Each layer depends on the one beneath it but doesn’t directly interact with other layers:

  • The Presentation Layer depends on the Business Logic Layer but doesn’t need to know the specifics of the data storage or retrieval. It simply calls methods from the business logic layer.
  • The Business Logic Layer depends on the Data Access Layer to fetch or store data but doesn’t need to know about the specifics of how the data is persisted (e.g., whether it’s in a database, an in-memory store, or an external API).
  • This loose coupling allows each layer to be replaced or modified without impacting the other layers, leading to better maintainability and flexibility.

4. Testability

With layered architecture, testing becomes easier since each layer can be tested independently:

  • You can write unit tests for ProductService (business logic) without worrying about how data is stored or how the UI behaves.
  • You can also write tests for ProductRepository to ensure that data access logic works as expected without involving the business rules.
  • The Presentation Layer (HomeController) can be tested separately as well, ensuring that it properly interacts with the business logic layer.

5. Scalability and Maintainability

As the application grows, new functionality can be added to any layer without affecting the others:

  • For example, if new business rules need to be applied to product processing, you can extend ProductService.cs without touching the UI or data access logic.
  • Similarly, if the underlying database changes (e.g., switching from SQL to NoSQL), the Data Access Layer can be modified without impacting the presentation or business logic.

6. Adherence to Layered Architecture Best Practices

  • Independence of Layers: The layers are independent of each other. The HomeController doesn’t need to know about the specifics of how products are stored in the database. It just calls the ProductService to perform the operations, which in turn relies on the ProductRepository to interact with the database.
  • Layered Communication: The communication flows in a clear direction: from the presentation to the business logic, and then from the business logic to data access. The business logic layer does not know about how data is presented, and the data access layer does not need to know how data is used in the business logic.
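To make the folder structure above concrete, a condensed, self-contained sketch of the three layers might look like the following. The method names and the hard-coded product list are illustrative assumptions, not the article's actual implementation.

```csharp
using System.Collections.Generic;

// DataAccess layer: knows how to fetch data (a stand-in for a DB query)
public class ProductRepository
{
    public List<string> GetAllProducts()
    {
        return new List<string> { "Keyboard", "Mouse" };
    }
}

// BusinessLogic layer: applies rules, depends only on the layer below
public class ProductService
{
    private readonly ProductRepository _repository = new ProductRepository();

    public List<string> GetAvailableProducts()
    {
        // Business rules (filtering, validation, pricing) would live here
        return _repository.GetAllProducts();
    }
}

// Presentation layer: handles the request and delegates downward
public class HomeController
{
    private readonly ProductService _service = new ProductService();

    public List<string> Index()
    {
        return _service.GetAvailableProducts();
    }
}
```

Note how each class calls only the layer directly beneath it; in a real project, constructor injection of interfaces would replace the `new` calls.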

Onion Architecture

Onion Architecture is similar to Hexagonal Architecture but emphasizes strict adherence to the Dependency Inversion Principle (DIP). The idea is to place the core business logic and domain models at the center of the application, with the outer layers containing infrastructure and user interface components.

Key Concepts:

  • Inner Circle (Core): Contains domain models, business logic, and interfaces.
  • Outer Circle (Infrastructure): Contains infrastructure code like data access, APIs, and user interface components.
  • Dependency Inversion: The core depends on abstractions, while the outer layers depend on the core through implementations.

How it Works in .NET: In .NET, Onion Architecture can be implemented by structuring the solution in layers with a clear separation between core logic and infrastructure concerns. Dependency injection helps keep the core independent of external systems.

Example: This code sample demonstrates Onion Architecture by placing the core business logic, including the Product entity and the IProductRepository interface, in the innermost layer, while the ProductRepository implementation resides in the outer infrastructure layer. This structure ensures that the core remains independent of external dependencies, promoting maintainability and flexibility.

/Core
/Entities
Product.cs
/Interfaces
IProductRepository.cs
/Infrastructure
/Repositories
ProductRepository.cs

This code structure is a good example of Onion Architecture because it demonstrates a clear separation of concerns and a focus on the central, business-oriented layer of the application (the core) while keeping dependencies on outer layers (infrastructure, data access) at a minimum. Here’s how it aligns with the principles of Onion Architecture:

1. Core Layer (Innermost Layer)

Entities (Product.cs):

  • This represents the heart of the application — the domain model. In Onion Architecture, the core layer contains the business entities and domain logic, which is the most important part of the system.
  • The Product.cs class is an entity that models the core business concepts of the application, such as a product in a store. This entity should not depend on any infrastructure or external frameworks.

Interfaces (IProductRepository.cs):

  • The IProductRepository interface is also part of the core layer. In Onion Architecture, the core should define interfaces that represent the behavior needed by the application (e.g., data access).
  • By placing the repository interface in the core, the system ensures that the core logic is abstracted from the infrastructure details. This allows the core to remain independent of external technologies (like SQL, NoSQL databases, or specific frameworks), promoting maintainability and testability.

2. Infrastructure Layer (Outer Layer)

Repositories (ProductRepository.cs):

  • The ProductRepository.cs class in the Infrastructure Layer is responsible for the implementation of the IProductRepository interface defined in the core. This layer contains the concrete implementations of interfaces such as data access, third-party services, or any other infrastructure-related concerns.
  • The ProductRepository knows how to persist Product entities, perhaps by interacting with a database, but it is completely decoupled from the core business logic. The core only knows about the IProductRepository interface and not about the specifics of how data is stored or retrieved.

3. Dependency Inversion

Inner Layer Knows Nothing About Outer Layers:

  • The core layer does not depend on the infrastructure layer. The core defines the interfaces (like IProductRepository) that represent the operations it requires, but it does not include any implementation details.
  • The infrastructure layer, on the other hand, depends on the core because it provides implementations of the core’s interfaces.
  • This ensures that the business logic (core layer) remains isolated from external concerns, such as data storage mechanisms or other infrastructure-specific logic.

4. Separation of Concerns

  • Core Business Logic: The core layer contains all the business-critical entities and abstractions, focusing purely on domain logic and application behavior.
  • Infrastructure Services: The outer layers (infrastructure) deal with concerns like database access, file handling, or network services. They can change or evolve independently of the core logic.
  • This clear separation ensures that changes in infrastructure (such as switching from one database to another) don’t affect the core business logic, and vice versa.

5. Testability

  • Since the core layer is independent of the infrastructure, it can be tested in isolation. For example, the Product class can be tested without needing to interact with the database.
  • In unit tests, you can mock the IProductRepository interface, allowing tests of business logic without worrying about actual data access.

6. Flexibility & Maintainability

  • The Onion Architecture ensures that changes to outer layers (like switching from SQL Server to MongoDB or changing the repository implementation) do not impact the core logic.
  • The core remains flexible and easily maintainable, as it is decoupled from the specifics of how data is stored or external systems are used.

7. Dependency Rule

The key dependency rule in Onion Architecture is that dependencies always point inward. The outer layers depend on the inner layers, but the core layer never depends on the outer layers.

In this example:

  • The core (which contains the Product entity and IProductRepository interface) has no knowledge of the infrastructure layer.
  • The infrastructure layer (which contains the implementation of IProductRepository in ProductRepository.cs) depends on the core to implement the required interfaces.
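A minimal sketch of the files above, with member shapes assumed for illustration, makes the direction of the dependency visible: Infrastructure references Core, never the reverse.

```csharp
using System.Collections.Generic;

// Core/Entities/Product.cs -- the domain entity, no outward dependencies
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

// Core/Interfaces/IProductRepository.cs -- the abstraction the core owns
public interface IProductRepository
{
    Product GetById(int id);
    void Add(Product product);
}

// Infrastructure/Repositories/ProductRepository.cs -- the adapter,
// depending inward on Core (an in-memory store stands in for a database)
public class ProductRepository : IProductRepository
{
    private readonly Dictionary<int, Product> _store = new Dictionary<int, Product>();

    public Product GetById(int id)
    {
        return _store.TryGetValue(id, out var product) ? product : null;
    }

    public void Add(Product product)
    {
        _store[product.Id] = product;
    }
}
```

Swapping the in-memory store for SQL Server or MongoDB means replacing only this one Infrastructure class; nothing in Core changes.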

Pipeline Architecture

Pipeline Architecture, often used in scenarios where data flows through a series of processing steps, organizes application logic as a sequence of processing units (steps). Each step transforms or processes the input data and passes it to the next step in the pipeline.

Key Concepts:

  • Processing Steps: Each step is responsible for a specific task (e.g., validation, transformation, or logging).
  • Data Flow: Data moves through the pipeline from one processing step to another.
  • Modularization: Each step is modular, making it easy to add, remove, or modify steps without affecting others.

How it Works in .NET: In .NET, the pipeline pattern is commonly implemented in middleware pipelines (e.g., ASP.NET Core’s request/response pipeline).

Example: This code sample demonstrates pipeline architecture by implementing middleware that processes HTTP requests in a sequence. The LoggingMiddleware intercepts the request to perform logging and then passes it along to the next middleware in the pipeline, ensuring modular handling of requests in a structured order.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class LoggingMiddleware
{
    private readonly RequestDelegate _next;

    public LoggingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Logging logic
        await _next(context); // Pass control to the next middleware in the pipeline
    }
}

This is a good code sample for pipeline architecture because it demonstrates the use of middleware in a series of steps (or stages) that process an HTTP request. Here’s why it fits well with pipeline architecture:

  1. Request Processing in Stages: The LoggingMiddleware is a single stage in a sequence of request handlers. It intercepts the request, performs its specific task (logging in this case), and then passes the request to the next handler in the pipeline via await _next(context). This sequential processing of requests is the essence of pipeline architecture.
  2. Modular and Composable: Each piece of middleware can handle a distinct concern (logging, authentication, error handling, etc.), and the middleware components can be composed in various orders to create a custom request processing pipeline. This makes the architecture highly flexible and scalable.
  3. Separation of Concerns: Each middleware has a clear responsibility and is decoupled from the others. The LoggingMiddleware focuses solely on logging, and the next middleware will handle its specific task. This promotes clean, maintainable code and allows individual middleware to be tested in isolation.
  4. Extensibility: You can easily add more middleware to the pipeline to address other concerns, like authentication, authorization, or caching, without altering the core logic. This is a key feature of pipeline architecture, which supports adding new steps dynamically.
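Wiring the LoggingMiddleware above into an ASP.NET Core application is a one-line registration, and the order of the Use* calls defines the order of the pipeline stages. This sketch uses the minimal hosting model; the commented lines are placeholders for stages you might add.

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Middleware runs in registration order, each handing off via _next
app.UseMiddleware<LoggingMiddleware>();
// app.UseAuthentication();   // further stages slot in here
// app.UseAuthorization();

app.MapGet("/", () => "Hello, pipeline!");

app.Run();
```

Because each stage is registered independently, reordering or removing a concern is a one-line change rather than a refactor.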

Master-Slave Architecture

Master-Slave Architecture is a pattern used primarily in distributed systems where one component (the master) controls the flow of data or commands to one or more subcomponents (slaves). The master issues commands, and the slaves execute those commands, reporting back the results.

Key Concepts:

  • Master Component: Issues commands and controls the overall operation.
  • Slave Components: Execute tasks as instructed by the master and return results.

How it Works in .NET: This pattern is often seen in distributed computing or data processing systems where a central server (master) coordinates tasks across multiple workers (slaves).

Example: This code demonstrates a Master-Slave architecture in which the Master class distributes tasks to multiple Slave objects for asynchronous processing. The Master controls the flow of work and assigns tasks in a round-robin fashion; each Slave simply executes what it is given and has no say in how tasks are assigned.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Slave class represents a worker that performs tasks
public class Slave
{
    public async Task PerformTask(string task)
    {
        Console.WriteLine($"Slave is processing: {task}");
        await Task.Delay(1000); // Simulate work
        Console.WriteLine($"Slave finished processing: {task}");
    }
}

// Master class directs slaves to perform tasks
public class Master
{
    private readonly List<Slave> _slaves;

    public Master(int numberOfSlaves)
    {
        _slaves = new List<Slave>();
        for (int i = 0; i < numberOfSlaves; i++)
        {
            _slaves.Add(new Slave()); // Initialize slaves
        }
    }

    // Master assigns tasks to slaves and runs them concurrently
    public async Task DistributeTasks(List<string> tasks)
    {
        var runningTasks = new List<Task>();
        int slaveIndex = 0;

        foreach (var task in tasks)
        {
            var slave = _slaves[slaveIndex];
            Console.WriteLine($"Master assigning task: {task} to Slave {slaveIndex + 1}");
            runningTasks.Add(slave.PerformTask(task));

            // Round-robin assignment of tasks to slaves
            slaveIndex = (slaveIndex + 1) % _slaves.Count;
        }

        // Await all slaves together; awaiting each task inside the loop
        // would serialize the work and defeat the asynchronous processing
        await Task.WhenAll(runningTasks);
    }
}

public class Program
{
    public static async Task Main(string[] args)
    {
        var tasks = new List<string> { "Task 1", "Task 2", "Task 3", "Task 4", "Task 5" };

        // Create a master with 3 slaves
        var master = new Master(3);

        // Master distributes tasks to slaves
        await master.DistributeTasks(tasks);
    }
}

Explanation:

  • Slave Class: Represents a worker that performs tasks. In this example, the PerformTask method simulates processing a task asynchronously.
  • Master Class: Manages a collection of Slave objects. The DistributeTasks method assigns tasks to slaves in a round-robin fashion. The master controls the flow of tasks and delegates them to the slaves for execution.
  • Round-Robin Task Distribution: The master distributes tasks in a round-robin manner, ensuring that each slave gets a task to process. The tasks are processed asynchronously.

Why This Is a Good Example of Master-Slave Architecture:

  • The Master controls the flow of tasks, deciding which Slave will execute a task, and ensures the distribution of work across multiple workers.
  • The Slaves are responsible for performing specific tasks but do not control how or when they are given tasks. This reflects the typical structure of a Master-Slave pattern.
  • The system can be extended to have more Slaves or different types of tasks, while the Master remains responsible for overseeing the assignment and flow of work.

In summary, choosing the right architectural pattern for your .NET application can significantly impact its scalability, maintainability, and ease of testing. Whether you are developing a monolithic system, a microservices-based application, or a feature-driven modular system, these architectural patterns provide proven strategies for organizing your codebase. By applying these patterns thoughtfully, you can create applications that are easier to extend, maintain, and adapt to future requirements.