
If you’ve worked on the same .NET system for more than five years, you already know this: the features you were most excited to adopt aren’t always the ones still in the codebase today. Time, not hype, is the real test of architectural decisions.
This article looks at modern .NET features through a senior developer’s lens — not based on theory, but on what remains after years of production use.
Records vs Classes: Used Carefully, Not Everywhere
Records are one of the most celebrated additions to modern C#, and for good reason. They offer concise syntax, built-in value equality, and immutability by default. On paper, they look like a strict upgrade over classes.
In practice, senior developers use them selectively.
Where Records Shine
Records are a natural fit for data that represents intent, not behavior: data that describes what happened or what should happen, not how behavior is implemented. They express that intent clearly, safely, and concisely:
- DTOs
- Commands and queries
- Messages and events
- API contracts
- Configuration snapshots
public readonly record struct GetUserQuery(Guid Id);
Why this works
- Immutable by default: Once created, the data doesn’t change, which reduces accidental side effects and makes reasoning about the system easier.
- Value-based: Equality is based on the content, not the reference, which is perfect for comparing messages, commands, or snapshots.
- Concise and self-documenting: The syntax makes it clear these objects are simple data carriers, not rich domain models.
- Safe to pass across boundaries: Whether sending a command to a handler or returning a DTO from an API, records ensure you’re not unintentionally exposing mutable state.
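The value-equality point is worth seeing in code. A minimal sketch (the OrderPlaced record is hypothetical, invented for illustration):

```csharp
var id = Guid.NewGuid();
var a = new OrderPlaced(id, 100m);
var b = new OrderPlaced(id, 100m);

Console.WriteLine(a == b);                 // True: records compare by content
Console.WriteLine(ReferenceEquals(a, b));  // False: still two distinct objects

public record OrderPlaced(Guid OrderId, decimal Total);
```

Two separately constructed records with the same content are equal, which is exactly what you want when comparing messages, commands, or snapshots.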
💡If the type is moved around, copied, or serialized, a record is often the right choice.
Where Seniors Avoid Records
Records are fantastic for immutable data at system boundaries, but classes are safer and more predictable for objects that evolve over time and carry behavior.
Despite their popularity, records are rarely used for:
- EF Core entities
- Domain models with behavior
- Long-lived mutable objects
public class User
{
    public Guid Id { get; private set; }
    public string Email { get; private set; } = string.Empty;

    public void ChangeEmail(string email)
    {
        Email = email;
    }
}
Why classes still win here
- Mutability is intentional and controlled: Classes allow properties to change over time in a controlled way, which is essential for entities and domain models.
- Identity matters more than value equality: Entities are defined by their unique identity, not their field values, so comparing by reference is more appropriate than value-based equality.
- Change tracking works naturally: ORMs like EF Core rely on object references to track updates, which aligns naturally with classes, not immutable records.
- Debugging is simpler: Classes avoid unintended copies and with expressions, making the system easier to inspect and reason about during development.
// Using a record
public record UserRecord(Guid Id, string Name);
var user1 = new UserRecord(Guid.NewGuid(), "Alice");
var user2 = user1 with { Name = "Bob" }; // Creates a new object
Console.WriteLine(user1.Name); // Output: "Alice"
Console.WriteLine(user2.Name); // Output: "Bob" – could be confusing during debugging
Using records for entities often introduces:
- Accidental copies: Records create new instances through with expressions, which can lead to multiple copies of the same entity floating around unintentionally.
- Confusing with expressions: Developers may assume with mutates the original object, but it actually creates a new one, making state changes less obvious.
- EF Core friction: Immutable records don’t align well with EF Core’s change tracking, causing difficulties when updating or persisting entities.
- Subtle bugs during updates: Copying entities instead of updating in place can lead to inconsistencies, lost changes, or unexpected behavior in the system.
The Mistake Juniors Make
A common early mistake is assuming:
“Records are newer, therefore better everywhere.”
That mindset leads to:
- Domain models that feel immutable but aren’t: Developers assume the data won’t change, but unexpected mutations can still occur, causing confusion and bugs.
- Logic hidden inside copy operations: with expressions can unintentionally execute changes or create new instances, hiding important business logic from plain sight.
- Code that’s harder to debug under pressure: Copies and immutable patterns make it difficult to trace the actual state of an object during debugging, slowing down problem-solving.
Senior developers don’t avoid records — they respect boundaries.
💡Use records for data at the edges of your system.
💡Use classes for behavior at the core.
If the object represents:
- What happened → record
- What should happen → record
- Something that changes over time → class
Verdict
Records didn’t replace classes — they clarified intent.
Senior developers didn’t adopt records everywhere. They adopted them where immutability and value semantics actually help.
And that restraint is the theme you’ll see again and again.
Minimal APIs: Great for Edges, Rarely for Cores
Minimal APIs were introduced to reduce ceremony and make building HTTP endpoints faster and cleaner. And they succeed at that goal — in the right context.
Senior developers use Minimal APIs.
They just don’t use them everywhere.
Where Minimal APIs Shine
Minimal APIs shine when the endpoints are thin, simple, and unlikely to grow complex. They reduce ceremony and boilerplate, making small features faster to build, easier to read, and more maintainable.
- Small services: Lightweight microservices don’t need controllers, DI scaffolding, or complex routing; Minimal APIs let you expose functionality quickly.
- Gateways and BFFs: APIs that simply forward requests or aggregate data benefit from the simplicity and readability of minimal endpoints.
- Internal tools: Internal dashboards, admin panels, or utility services often don’t require complex infrastructure — Minimal APIs reduce overhead.
- Simple CRUD endpoints: For basic Create/Read/Update/Delete operations, a single-line MapGet or MapPost keeps the code concise and clear.
- Prototypes that need to ship fast: When speed matters over structure, Minimal APIs let teams test ideas and validate concepts quickly without over-engineering.
Example: Minimal Health Check API
app.MapGet("/health", () => Results.Ok("Healthy"));
Why this works
- Extremely low ceremony: No controllers, attributes, or DI scaffolding required — the endpoint is immediately readable and deployable.
- Easy to read in one glance: The purpose of the endpoint is obvious; anyone looking at the code instantly knows it’s a health check.
- Minimal cognitive overhead: Developers don’t have to mentally parse layers of routing, filters, or middleware — the code is self-contained.
- Perfect for infrastructure-style endpoints: Endpoints that support monitoring, metrics, or lightweight operational checks benefit most because the focus is on clarity, not business logic.
- Fast to extend: Adding new simple endpoints follows the same pattern, keeping the codebase uniform and easy to maintain.
💡If the endpoint is thin, stable, and unlikely to grow complex, Minimal APIs are an excellent choice.
Where They Start to Hurt
Minimal APIs are designed for thin, focused endpoints, but as features grow, the simplicity becomes a liability. When endpoints accumulate responsibilities, the code quickly becomes harder to read, maintain, and test. This is where Minimal APIs start to lose their advantage.
As applications grow, endpoints tend to accumulate:
- Validation: Adding input validation inside a minimal lambda can clutter the endpoint, making it hard to see the core purpose at a glance.
- Authorization: When security rules need to be applied, mixing them with the endpoint logic increases complexity and risks inconsistency.
- Mapping: Transforming request objects to domain models or DTOs inside the lambda adds boilerplate that defeats the “minimal” promise.
- Logging: Inline logging can pollute the endpoint and make debugging more cumbersome, especially when multiple developers touch it.
- Versioning: Managing multiple API versions directly in Minimal APIs can lead to confusing routes and duplication without a structured pattern.
- Error handling: Handling exceptions in each endpoint individually is repetitive and error-prone, unlike centralized middleware in a controller-based architecture.
This is where Minimal APIs begin to lose their advantage.
app.MapPost("/users", async (
    CreateUserRequest request,
    IUserService service,
    ILogger<Program> logger) =>
{
    // validation
    // authorization
    // mapping
    // business logic
});
At this point, the “minimal” part is gone — but the structure hasn’t scaled with it.
- Validation, authorization, and mapping are now inline, mixing cross-cutting concerns with business logic.
- Logging and error handling often get sprinkled in, further cluttering the endpoint.
- The core purpose of the endpoint — creating a user — is no longer immediately obvious at a glance.
The structure hasn’t scaled with the growing complexity, which makes the code harder to read, harder to maintain, and harder to test.
This is exactly where senior developers stop trying to force Minimal APIs for complex operations. Instead, they delegate the heavy lifting to handlers or services, keeping the endpoint thin, readable, and focused.
What Seniors Do Differently
In larger systems, senior developers don’t treat Minimal APIs as the place for all business logic. Instead, they use endpoints as adapters, letting the real work happen elsewhere.
- Use Minimal APIs as adapters: The endpoint’s role is to accept requests and delegate work — it’s a simple bridge between the client and the system.
- Push real logic into handlers or services: Business rules, validation, and orchestration live in dedicated services or MediatR handlers, keeping the code organized and maintainable.
- Avoid embedding business rules in endpoint lambdas: Endpoints remain thin shells, reducing cognitive load and making it easier for new developers to understand the system at a glance.
The endpoint becomes a thin shell, not the feature itself.
app.MapPost("/users", async (
    CreateUserCommand command,
    IMediator mediator) =>
{
    return await mediator.Send(command);
});
Now the endpoint:
- Stays readable: the intent is obvious; anyone can see that it creates a user.
- Stays testable: business logic can be tested independently in the handler, without spinning up the HTTP layer.
- Doesn’t grow uncontrolled: cross-cutting concerns remain centralized in middleware, handlers, or services.
- Can be replaced without rewriting logic: the endpoint is just the interface; swapping it out or scaling it doesn’t affect the underlying system.
Why Controllers Still Exist
Despite predictions that Minimal APIs would replace them, controllers remain a staple in large .NET systems. Senior developers recognize that structure matters as systems grow, and controllers provide the organization necessary for maintainability.
- Clear structure: Controllers group related endpoints together, making it obvious which operations belong to which feature or resource.
- Discoverability: Developers can navigate the codebase and easily find all endpoints for a given feature, improving onboarding and collaboration.
- Convention-based organization: Standard patterns (naming, routing, filters) reduce cognitive overhead and make the system predictable across teams.
- Natural extension points: Controllers support middleware, filters, model binding, and dependency injection in a way that scales as the system grows.
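A minimal sketch of that structure (the UsersController, IUserService, and its methods are hypothetical, invented for illustration):

```csharp
// All user endpoints grouped under one route, discoverable in one file.
[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    private readonly IUserService _service; // assumed application service

    public UsersController(IUserService service) => _service = service;

    [HttpGet("{id:guid}")]
    public async Task<ActionResult<UserDto>> GetById(Guid id) =>
        Ok(await _service.GetByIdAsync(id));

    [HttpPost]
    public async Task<ActionResult<UserDto>> Create(CreateUserRequest request) =>
        Ok(await _service.CreateAsync(request));
}
```

The convention does the organizational work: routing, filters, and model binding hang off the class, and every user-related operation has an obvious home.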
Senior developers don’t avoid Minimal APIs — they avoid unstructured growth. When endpoints accumulate responsibilities or complexity, controllers provide the discipline needed to maintain clarity, testability, and long-term maintainability.
💡Use Minimal APIs when the endpoint is the feature.
💡Use structured endpoints when the endpoint hosts the feature.
If an endpoint is expected to:
- Grow
- Change
- Be maintained by multiple developers
then structure beats cleverness.
Verdict
Minimal APIs are not a replacement for architecture — they’re a tool.
Senior developers use them where they reduce friction, and abandon them where they introduce it. The goal isn’t fewer lines of code. It’s fewer surprises six months later.
MediatR: Quietly Everywhere
MediatR is rarely the most exciting library in a .NET solution. It doesn’t promise performance gains, shiny syntax, or dramatic architectural transformations.
And yet, it shows up in a lot of long-lived systems.
That’s not an accident.
Why Senior Developers Reach for MediatR
At its core, MediatR does one simple thing well: it separates intent from execution.
public record CreateUserCommand(string Email) : IRequest<UserDto>;

public class CreateUserHandler : IRequestHandler<CreateUserCommand, UserDto>
{
    public async Task<UserDto> Handle(
        CreateUserCommand request,
        CancellationToken cancellationToken)
    {
        // business logic lives here
    }
}
This separation gives senior teams several advantages:
- Clear ownership of behavior: Each handler owns a single feature or command, making it obvious where business rules live.
- Testable business logic: Handlers can be tested independently from HTTP endpoints or UI layers, improving confidence and coverage.
- Predictable execution flow: Requests always pass through a consistent pipeline, reducing surprises and making debugging straightforward.
- Fewer god services: Business logic isn’t centralized in bloated services, avoiding spaghetti code and hidden dependencies.
The real value isn’t CQRS — it’s discipline.
MediatR encourages a structured, intentional approach to organizing features, which is why senior developers consistently adopt it in maintainable, long-lived systems.
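The "predictable execution flow" point comes from MediatR's pipeline. A sketch of a cross-cutting behavior (the LoggingBehavior class is hypothetical; the IPipelineBehavior interface is MediatR's own):

```csharp
// Every request flows through registered behaviors before its handler,
// which is what makes the execution pipeline consistent and predictable.
public class LoggingBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
{
    private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger;

    public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger)
        => _logger = logger;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        _logger.LogInformation("Handling {Request}", typeof(TRequest).Name);
        var response = await next();
        _logger.LogInformation("Handled {Request}", typeof(TRequest).Name);
        return response;
    }
}
```

Cross-cutting concerns like logging, validation, or transactions live here once, instead of being repeated in every handler.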
How Juniors Misuse It
MediatR can get a bad reputation when it’s applied without discipline. Common missteps include:
- A replacement for thinking: Developers sometimes assume MediatR magically organizes code — without consciously designing boundaries or responsibilities.
- A handler per CRUD method: Creating handlers for every trivial operation leads to an explosion of classes without real separation of concerns.
- An abstraction layered on top of already-layered services: Wrapping existing services in handlers just to use MediatR adds complexity instead of clarity.
The consequences are predictable:
- Thin handlers calling fat services: The actual business logic hides inside large services, defeating the purpose of clear separation.
- Logic spread across too many places: Maintenance becomes difficult because developers don’t know where a particular behavior lives.
- “CQRS for the sake of CQRS”: The system appears architecturally clean on paper but is chaotic in reality.
💡That’s not MediatR’s fault — it’s a boundary problem.
Misuse arises when teams fail to clearly define what each handler should own and what the endpoint is responsible for.
How Seniors Actually Use It
In mature systems, MediatR handlers aren’t just a convenient pattern; they become the feature itself. Senior developers structure them so each handler owns the complete behavior, making the system easier to understand, maintain, and test.
public async Task<UserDto> Handle(
    CreateUserCommand command,
    CancellationToken cancellationToken)
{
    var user = User.Create(command.Email);
    await _db.Users.AddAsync(user, cancellationToken);
    await _db.SaveChangesAsync(cancellationToken);
    return user.ToDto();
}
Why this works:
- Handlers are the feature: The handler encapsulates the entire use case, so you know exactly where the business logic lives.
- Handlers own validation, orchestration, and rules: All necessary checks and operations happen inside the handler, avoiding scattered logic across multiple services or layers.
- Handlers call infrastructure, not the other way around: The handler interacts with databases, repositories, or external services, rather than letting infrastructure drive business decisions.
The result:
- No service sprawl: fewer classes and less boilerplate to manage.
- No orchestration hidden elsewhere: the feature’s flow is obvious and traceable.
- Clear, testable, and maintainable code that scales naturally with the system.
💡Senior developers treat handlers as the single source of truth for a feature, rather than just a conduit for passing requests around.
Why MediatR Ages Well
Over time, MediatR proves its value not through flashy features, but by bringing discipline and clarity to a system:
- Vertical slice architecture: Handlers naturally group all logic for a single feature, making it easy to see and modify behavior without touching unrelated code.
- Easier refactoring: Because each handler encapsulates a feature, moving, renaming, or rewriting functionality is low-risk and straightforward.
- Clear test seams: Handlers can be tested independently from endpoints, UI, or other layers, improving confidence in the system’s correctness.
- Parallel development: Different teams or developers can work on separate handlers without stepping on each other, reducing merge conflicts and improving velocity.
The key insight
When a feature breaks, there’s exactly one place to look. That alone justifies the dependency.
- If business logic doesn’t live in handlers, MediatR adds little value.
- Used correctly, MediatR becomes the backbone of a system.
- Used poorly, it becomes noise.
Senior developers understand the difference — and make it work for maintainable systems.
AOT: Powerful, Selective, and Often Deferred
Ahead-of-Time (AOT) compilation promises faster startup times, smaller memory footprints, and improved performance. On paper, it looks like an obvious win, especially in a cloud-focused world where cold-start performance matters.
In practice, senior developers approach AOT with measured caution. It’s not about chasing hype; it’s about choosing where it delivers real, predictable value.
Where AOT Makes Sense Today
AOT shines in scenarios where:
- Startup time matters: Applications that must start quickly benefit most from precompiled code.
- Reflection is minimal: Dynamic behaviors can break AOT; static, predictable paths are ideal.
- Execution paths are predictable: Deterministic code ensures AOT produces reliable performance.
Common examples:
- Background workers
- Small APIs
- CLI tools
- Serverless workloads
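For these workloads, Native AOT itself is a project-level opt-in. A minimal sketch of the csproj setting (assuming .NET 8 or later, where ASP.NET Core gained AOT publishing support):

```xml
<!-- Publish this project as a native, ahead-of-time compiled binary -->
<PropertyGroup>
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

From there, the work shifts to making the code AOT-friendly, starting with reflection-free serialization: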
// Register a source-generated JSON context so serialization
// uses compile-time code instead of runtime reflection
builder.Services.AddControllers()
    .AddJsonOptions(options =>
        options.JsonSerializerOptions.TypeInfoResolverChain.Add(
            MyJsonContext.Default));
Why seniors like this
- Predictable behavior: No surprises at runtime.
- Clear performance gains: Measurable improvements in startup or memory usage.
- Controlled surface area: Limits AOT to scenarios where it matters.
💡When the application boundary is well understood, AOT delivers real benefits.
Where Seniors Hold Back
AOT introduces tradeoffs that don’t always appear in benchmarks:
- Reflection constraints: Dynamic behaviors may fail or require extra configuration.
- More complex configuration: Enabling AOT often requires additional setup or annotations.
- Tooling and debugging friction: Errors may be harder to trace compared to JIT compilation.
- Library compatibility concerns: Some libraries rely on dynamic features that clash with AOT.
UI-heavy or dynamic systems often rely on:
- Reflection
- Late-bound behaviors
- Plugins and extensibility
💡These patterns don’t always coexist peacefully with AOT, which is why senior developers choose boundaries carefully.
The Senior Mindset Around AOT
Senior developers rarely ask, “Can we use AOT?” Instead, they ask: “Is this problem worth the additional complexity?”
Often the answer is:
- Not yet
- Not everywhere
- Not without guardrails
AOT is evaluated per workload, not adopted wholesale.
The Pattern You’ll Notice
AOT adoption usually follows a careful, incremental path:
- Start without it.
- Measure real pain (startup, memory, performance).
- Apply AOT only to the hot path.
- Leave the rest alone.
This avoids locking teams into premature constraints while still reaping AOT’s benefits where it matters most.
💡Use AOT where startup time is critical and behavior is predictable.
💡Avoid AOT where flexibility and debuggability matter more.
Verdict
AOT is not hype — it’s real, powerful, and improving quickly. But senior developers don’t chase potential. They adopt proven value, one boundary at a time.
Source Generators: Powerful, Dangerous, and Worth It?
A source generator in .NET is a feature introduced in C# 9 / .NET 5 that allows code to be automatically generated at compile time. Among the most powerful additions to the .NET ecosystem in recent years, source generators promise:
- Less boilerplate: Automatically generate repetitive code, reducing manual effort.
- Cleaner code: Keep source code concise and readable while retaining functionality.
- Compile-time safety: Errors are caught during compilation rather than at runtime.
However, they also introduce new complexity, which senior developers treat with caution.
Why Seniors Use Source Generators
When applied correctly, source generators provide discipline and maintainability, not just convenience:
- Automate repetitive tasks safely: Generate DTOs, mappings, or serialization helpers without manual mistakes.
- Encourage consistency: Code produced by generators is uniform, reducing subtle bugs from hand-written variations.
- Integrate at build-time: Developers know exactly what code exists, making refactoring predictable.
- Enable advanced patterns: Features like MediatR pipelines, logging, or validation can be scaffolded automatically without cluttering business logic.
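As a concrete sketch, the built-in System.Text.Json source generator replaces reflection-based serialization with code generated at build time (the UserDto record and AppJsonContext class are hypothetical names for illustration):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public record UserDto(Guid Id, string Email);

// The generator fills in this partial class at compile time with
// serialization logic for UserDto: no runtime reflection, and an
// unsupported shape fails the build instead of failing in production.
[JsonSerializable(typeof(UserDto))]
internal partial class AppJsonContext : JsonSerializerContext { }

public static class Example
{
    public static string Serialize(UserDto dto) =>
        JsonSerializer.Serialize(dto, AppJsonContext.Default.UserDto);
}
```

The generated code is ordinary C# you can step through, which keeps the build-time magic inspectable.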
Why this works
- Eliminates manual mapping
- Reduces human error
- Keeps intent explicit
- Fails at compile time
When generators do boring work, they’re incredibly effective.
Where Source Generators Can Go Wrong
Misuse can quickly create headaches:
- Obscured logic: Generated code may hide behavior, making debugging harder for unfamiliar developers.
- Build-time complexity: Compilation may slow down, and errors can be cryptic.
- Versioning and tooling challenges: Updating libraries or frameworks may require regenerating or adjusting code.
- Overuse: Generators for trivial tasks can add unnecessary indirection and cognitive overhead.
💡Senior developers treat generators like a scalpel, not a hammer — applied where they add real value, avoided where they obscure clarity.
The Senior Mindset
- Use source generators to eliminate boilerplate, enforce consistency, and reduce runtime errors.
- Avoid using them for tasks that are simple enough to maintain by hand.
- Always ask: “Does this generator improve maintainability or just hide complexity?”
Verdict
Source generators are powerful and worth mastering, but senior developers approach them with intentionality. Applied wisely, they save time, reduce errors, and scale cleanly. Applied indiscriminately, they add complexity and confusion.
💡The rule of thumb: use generators where they create clarity, never where they obscure it.
Conclusion: Predictability Beats Novelty
Across records, Minimal APIs, MediatR, AOT, and source generators, a clear pattern emerges:
The features that survive aren’t the flashiest — they’re the most predictable.
Senior developers don’t chase every new capability. They adopt tools that make systems easier to reason about, debug, and evolve over time. In the end, boring, explicit code wins — and always has.
Modern .NET offers an impressive toolbox. It’s easy to feel pressure to adopt every shiny new feature. Experience teaches a quieter lesson: longevity matters more than novelty.
Senior developers don’t ignore new capabilities — they evaluate them through the lens of maintainability, debuggability, and team velocity. Records, Minimal APIs, MediatR, AOT, and source generators all have their place. The difference lies in where they’re applied and why.
The features that survive in real systems are the ones that:
- Reduce cognitive load
- Make failures obvious
- Help new developers understand the system quickly
They’re not always exciting, but they keep the system running cleanly, reliably, and predictably — years after the initial implementation.
Maturity in .NET development isn’t about using more features. It’s about using fewer, more deliberately.
Boring code isn’t a lack of ambition. It’s a sign of experience.
Happy coding!