👋Hi, I'm Waqas — a Software Architect and Technical Consultant specializing in .NET, Azure, microservices, and API-first system design.
I help companies build reliable, maintainable, and high-performance backend platforms that scale.
Database Transactions and Isolation Levels: In-Depth with .NET
ACID, isolation levels, read phenomena, and EF Core. Deadlocks and concurrency.
October 20, 2024 · Waqas Ahmad
Introduction
This guidance applies when you run concurrent reads and writes against a relational database and must choose transaction scope and isolation; it breaks down when your constraints differ, for example in purely eventual-consistency designs. I've applied it in real projects and refined the takeaways over time.
Without transactions and the right isolation level, concurrent access leads to partial updates, dirty or phantom reads, and hard-to-reproduce bugs. This article covers ACID, read phenomena, all common isolation levels (including Snapshot), when to use which, and how to implement transactions and isolation in .NET and EF Core. For architects and tech leads, choosing the right level is a trade-off between consistency and concurrency—getting it wrong causes production issues; getting it right keeps data correct and behaviour predictable.
If you are new to the topic, skim the overview tables below and jump to the section you need.
System scale: Apps with concurrent reads and writes to a relational database (SQL Server, Azure SQL, PostgreSQL); applies when you’re deciding isolation level, handling deadlocks, or debugging consistency issues.
Team size: Backend or data team; someone must own transaction scope and isolation so you don’t get phantom reads or deadlocks in production.
Time / budget pressure: Fits when you can test under concurrency; breaks down when “we’ll fix deadlocks when they happen” and they happen at peak.
Technical constraints: .NET, EF Core or ADO; SQL Server or compatible; assumes you can set isolation level and design for retry.
Non-goals: This article does not cover NoSQL or eventual-consistency-only designs; it focuses on relational ACID and isolation levels.
What is ACID?
ACID is the set of properties that make a transaction reliable: a group of operations is treated as a single unit so the database stays consistent and durable even under concurrency and failures.
| Letter | Property | Meaning |
| --- | --- | --- |
| A | Atomicity | All operations in the transaction commit together or none do. If one step fails, the whole transaction is rolled back. |
| C | Consistency | The database moves from one valid state to another. Constraints (keys, checks, FKs) hold before and after. |
| I | Isolation | Concurrent transactions are shielded from each other's intermediate, uncommitted changes; how much they see of each other is controlled by the isolation level. |
| D | Durability | Once a transaction is committed, the result persists (survives crashes, power loss), usually via a write-ahead log and durable storage. |
When it matters: Any time you run multiple related updates (e.g. debit one account and credit another, or insert order + order lines). Without a transaction, a failure in the middle can leave data half-updated. Use a transaction so the whole unit commits or rolls back.
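The debit-and-credit case can be sketched with EF Core; AppDbContext, Accounts, and Balance are hypothetical names for illustration, not from the article:

```csharp
// Minimal sketch: both updates commit together or not at all.
// Assumes an EF Core AppDbContext with an Accounts DbSet (illustrative names).
public async Task TransferAsync(AppDbContext context, int fromId, int toId, decimal amount)
{
    await using var transaction = await context.Database.BeginTransactionAsync();
    try
    {
        var from = await context.Accounts.FindAsync(fromId)
            ?? throw new InvalidOperationException("Source account not found");
        var to = await context.Accounts.FindAsync(toId)
            ?? throw new InvalidOperationException("Target account not found");

        if (from.Balance < amount) throw new InvalidOperationException("Insufficient funds");

        from.Balance -= amount; // debit
        to.Balance += amount;   // credit
        await context.SaveChangesAsync();

        await transaction.CommitAsync(); // atomicity: both changes become durable together
    }
    catch
    {
        await transaction.RollbackAsync(); // neither change is applied
        throw;
    }
}
```

If anything fails between the debit and the credit, the rollback leaves both balances untouched.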
Read phenomena at a glance
Read phenomena are the kinds of “weird” reads that can happen when transactions run concurrently and isolation is relaxed. The isolation level determines which of these are allowed or prevented.
| Phenomenon | What happens | Example |
| --- | --- | --- |
| Dirty read | You read uncommitted data from another transaction. That transaction may later roll back, so you saw data that "never existed." | You see a balance update that is then rolled back. |
| Non-repeatable read | You read the same row twice in one transaction and get different values because another transaction committed a change in between. | First read balance = 100, second read balance = 50. |
| Phantom read | You run the same query twice in one transaction and get different sets of rows because another transaction inserted or deleted rows that match the predicate. | First SELECT * FROM orders WHERE status = 'Pending' returns 3 rows; the second returns 4. |
Below we define each in more detail; then we map which isolation levels prevent which phenomenon.
Dirty read
What it is and when it happens
A dirty read occurs when a transaction reads data that another transaction has modified but not yet committed. If that other transaction later rolls back, the first transaction has seen data that “never existed” from the database’s point of view. That can lead to wrong decisions (e.g. basing a calculation on a value that is undone).
When it happens: Only when the isolation level allows it: in practice, Read Uncommitted (or its equivalent, such as the NOLOCK hint in SQL Server). Read Committed and above prevent dirty reads by not exposing uncommitted changes.
When to care: Whenever correctness depends on not reading uncommitted data (almost always). Avoid Read Uncommitted in production; use at least Read Committed.
Example (conceptual)
Transaction A: UPDATE accounts SET balance = 0 WHERE id = 1 (not committed).
Transaction B (Read Uncommitted): SELECT balance FROM accounts WHERE id = 1 → sees 0.
Transaction A: rolls back.
Transaction B has “seen” a balance of 0 that was never committed.
Non-repeatable read
What it is and when it happens
A non-repeatable read occurs when a transaction reads the same row twice and gets different values because another transaction committed a change to that row in between. The row still exists; its content changed.
When it happens: Possible under Read Committed and Read Uncommitted. Repeatable Read, Serializable, and Snapshot prevent it (within the same transaction, the same row returns the same value).
When to care: When your logic assumes “if I read a row twice in this transaction, I get the same result” (e.g. you read a balance, do some checks, then read it again before updating). Use Repeatable Read or Snapshot if you need that guarantee.
Example (conceptual)
Transaction A: SELECT balance FROM accounts WHERE id = 1 → 100.
Transaction B: UPDATE accounts SET balance = 50 WHERE id = 1; COMMIT.
Transaction A: SELECT balance FROM accounts WHERE id = 1 → 50 (same row, different value).
Phantom read
What it is and when it happens
A phantom read occurs when a transaction runs the same query twice and gets different sets of rows because another transaction inserted or deleted rows that match the query predicate. The “phantom” is the new or missing row(s).
When it happens: Possible under Read Committed and Repeatable Read. Serializable prevents phantoms by range locking so the result set cannot change; Snapshot avoids them by reading a consistent version of the data for the duration of the transaction.
When to care: When you need a stable set of rows for the duration of the transaction (e.g. “count orders with status = Pending” and then insert based on that count). Use Serializable or Snapshot if phantoms would break your logic.
Example (conceptual)
Transaction A: SELECT * FROM orders WHERE status = 'Pending' → 3 rows.
Transaction B: INSERT INTO orders (...) VALUES (... status 'Pending' ...); COMMIT.
Transaction A: SELECT * FROM orders WHERE status = 'Pending' → 4 rows (new row “phantomed” in).
Isolation levels at a glance
| Level | Dirty read | Non-repeatable read | Phantom read | Typical use |
| --- | --- | --- | --- | --- |
| Read Uncommitted | Allowed | Allowed | Allowed | Avoid in production. |
| Read Committed | No | Allowed | Allowed | Default in SQL Server; good balance. |
| Repeatable Read | No | No | Allowed | When the same row must not change within a transaction. |
| Serializable | No | No | No | Full isolation; highest consistency, most locking. |
| Snapshot | No | No | No* | Read-heavy, long-running reads; row versioning, less blocking. |
*Snapshot in SQL Server gives you a consistent view of the database at the start of the transaction; phantoms in the traditional sense are avoided for that snapshot.
Read Uncommitted
What it is and when to use it
Read Uncommitted is the lowest isolation level. It allows dirty reads: you can see uncommitted changes from other transactions. It does not add read locks (in SQL Server it uses no shared locks for reads), so it can reduce blocking, but at the cost of reading data that may roll back.
When to use: Almost never in application code. Sometimes used for diagnostic queries (e.g. “what is blocking?”) or when you explicitly accept dirty data. Do not use for business logic that depends on correct reads.
Read Committed
What it is and when to use it
Read Committed is the default in SQL Server (and many databases). It prevents dirty reads: you only see committed data. Each read sees the latest committed state at the moment the read happens. Non-repeatable reads and phantom reads are still possible.
When to use: Default choice for most workloads. Good balance of consistency and concurrency. Use when you do not need “same row twice = same value” or “same query twice = same rows” within a single transaction.
Repeatable Read
What it is and when to use it
Repeatable Read prevents dirty reads and non-repeatable reads: once you read a row, you will see the same values for that row for the rest of the transaction (other transactions cannot change and commit that row until you finish). Phantom reads are still possible: new rows can appear or rows can disappear between two runs of the same query.
When to use: When you need stable reads of the same rows (e.g. read balance, validate, then update) but do not need a stable result set for a predicate. Use when phantoms are acceptable but non-repeatable reads are not.
Serializable
What it is and when to use it
Serializable gives the highest isolation: no dirty, non-repeatable, or phantom reads. The database effectively serializes access so that the outcome is as if transactions ran one after another. This is usually implemented with range locks or similar, which can block other transactions and increase deadlock risk.
When to use: When you need full consistency (e.g. critical financial or inventory logic where phantoms or non-repeatable reads would be wrong). Use sparingly; prefer Snapshot for read-heavy scenarios if your database supports it.
Snapshot isolation
What it is and when to use it
Snapshot isolation (SQL Server: READ COMMITTED SNAPSHOT or SNAPSHOT) uses row versioning (MVCC-style): readers see a consistent snapshot of the database (as of the start of the transaction or the start of the statement, depending on mode) without taking shared locks. Writes still use locks, but readers do not block writers and writers do not block readers in the same way as lock-based levels.
READ COMMITTED SNAPSHOT: Statement-level snapshot; no dirty reads, but non-repeatable reads and phantoms are still possible.
SNAPSHOT (full snapshot): Transaction-level snapshot; no dirty, non-repeatable, or phantom reads for the duration of the transaction.
When to use: For read-heavy, long-running read transactions where you want consistency without blocking writers. Enable at database level in SQL Server (ALLOW_SNAPSHOT_ISOLATION, READ_COMMITTED_SNAPSHOT).
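If you manage the database with EF Core migrations, enabling these options might look like the following sketch (the migration class name is illustrative; the ALTER DATABASE statements are SQL Server syntax and typically require exclusive access to complete):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

// Hypothetical migration that enables row versioning at the database level.
public partial class EnableSnapshotIsolation : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Allow transactions to request SNAPSHOT isolation explicitly
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;",
            suppressTransaction: true);
        // Make READ COMMITTED use statement-level row versioning (RCSI)
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;",
            suppressTransaction: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT OFF;",
            suppressTransaction: true);
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION OFF;",
            suppressTransaction: true);
    }
}
```

suppressTransaction is needed because ALTER DATABASE cannot run inside the migration's own transaction.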
Implementing transactions in .NET and EF Core
What you need
In .NET you can run database transactions in several ways:
EF Core DbContext.Database.BeginTransaction() – starts a transaction on the connection used by that context. Commit or roll back explicitly. You can pass an isolation level.
TransactionScope – ambient transaction; can enlist multiple resources (e.g. two databases, or DB + message queue) in a single transaction. Also supports isolation level.
ADO.NET – SqlConnection.BeginTransaction() – low-level; use when you are not using EF.
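For completeness, a minimal ADO.NET sketch (connection string, table, and values are illustrative; assumes the Microsoft.Data.SqlClient package):

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

const string connectionString = "<your connection string>"; // placeholder

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

// Explicit transaction with an explicit isolation level
await using var transaction = connection.BeginTransaction(IsolationLevel.ReadCommitted);
try
{
    await using var command = new SqlCommand(
        "UPDATE accounts SET balance = balance - @amount WHERE id = @id",
        connection, transaction);
    command.Parameters.AddWithValue("@amount", 25m);
    command.Parameters.AddWithValue("@id", 1);
    await command.ExecuteNonQueryAsync();

    await transaction.CommitAsync();
}
catch
{
    await transaction.RollbackAsync();
    throw;
}
```

Every command must be associated with the transaction (third constructor argument), or the provider throws at execution time.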
Full working example: EF Core with explicit isolation level
1. Begin transaction with Repeatable Read
```csharp
await using var context = await _contextFactory.CreateDbContextAsync();
await using var transaction = await context.Database.BeginTransactionAsync(IsolationLevel.RepeatableRead);
try
{
    var order = await context.Orders.FindAsync(orderId);
    if (order == null) throw new InvalidOperationException("Order not found");
    order.Status = OrderStatus.Confirmed;
    await context.SaveChangesAsync();
    await transaction.CommitAsync();
}
catch
{
    await transaction.RollbackAsync();
    throw;
}
```
2. Ambient transaction with TransactionScope

```csharp
using var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
    TransactionScopeAsyncFlowOption.Enabled);

await using var context = await _contextFactory.CreateDbContextAsync();
// ... use context ...
await context.SaveChangesAsync();

scope.Complete();
```
How this fits together: BeginTransactionAsync(IsolationLevel.RepeatableRead) starts a transaction on the context’s connection with that level. All subsequent queries and SaveChangesAsync run inside that transaction until CommitAsync or RollbackAsync. The isolation level applies to all reads and writes in that transaction. Use Read Committed (default) unless you need Repeatable Read or Serializable; use Snapshot where available for long reads.
When to use which: Default to Read Committed. Use Repeatable Read when you read the same row twice and must see the same value. Use Serializable only when you need to prevent phantoms. Use Snapshot (if enabled) for long read-only or read-heavy transactions to avoid blocking.
TransactionScope in depth
What it is and when to use it
TransactionScope is an ambient transaction in .NET: you create a scope, do work (one or more connections or DbContexts), then call Complete(). If Complete() is called, the transaction commits when the scope is disposed; if not (e.g. an exception), it rolls back. Any enlistable resource (SQL Server, other databases that support System.Transactions) can join the same transaction automatically when you open a connection inside the scope. That makes it ideal for multiple operations in one transaction (e.g. two DbContexts, or DB + message queue when the resource supports it).
When to use: When you need a single transaction that spans more than one connection or DbContext, or when you want a consistent programming model (scope-based) across the app. Use BeginTransaction when you have a single DbContext and want explicit control.
TransactionScope accepts a TransactionScopeOption:

| Option | Behavior |
| --- | --- |
| Required | Join the current ambient transaction if one exists; otherwise create a new one. Default for most cases. |
| RequiresNew | Always create a new transaction; do not join an existing one. Use when you want a nested operation to commit or roll back independently (e.g. an audit log that must persist even if the outer transaction fails). |
| Suppress | Do not participate in a transaction. Use when you want to run outside the current ambient transaction (e.g. a read-only diagnostic query). |
Async: TransactionScopeAsyncFlowOption.Enabled
By default, TransactionScope does not flow across await boundaries. If you await inside the scope, the transaction can be lost. Always use TransactionScopeAsyncFlowOption.Enabled when using async/await:
```csharp
using var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
    TransactionScopeAsyncFlowOption.Enabled);

// Now await is safe inside the scope
await DoWorkAsync();
scope.Complete();
```
Full working example: TransactionScope with two DbContexts
```csharp
public async Task PlaceOrderWithInventoryAsync(OrderRequest request, CancellationToken ct = default)
{
    using var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
        TransactionScopeAsyncFlowOption.Enabled);

    await using var orderContext = await _orderContextFactory.CreateDbContextAsync(ct);
    await using var inventoryContext = await _inventoryContextFactory.CreateDbContextAsync(ct);

    // Both contexts enlist in the same transaction when they open their connection
    var order = new Order { CustomerId = request.CustomerId, Total = request.Total };
    orderContext.Orders.Add(order);
    await orderContext.SaveChangesAsync(ct);

    foreach (var item in request.Lines)
    {
        var stock = await inventoryContext.Stock.FirstOrDefaultAsync(s => s.Sku == item.Sku, ct);
        if (stock == null || stock.Quantity < item.Qty)
            throw new InvalidOperationException("Insufficient stock");
        stock.Quantity -= item.Qty;
        inventoryContext.OrderAllocations.Add(
            new OrderAllocation { OrderId = order.Id, Sku = item.Sku, Qty = item.Qty });
    }
    await inventoryContext.SaveChangesAsync(ct);

    scope.Complete();
    // If an exception is thrown before Complete(), the scope disposes
    // without committing → rollback for both contexts
}
```
How this fits together: Both orderContext and inventoryContext open their connection inside the scope; each connection enlists in the ambient transaction. SaveChangesAsync on each context runs in that same transaction. Only when scope.Complete() is called does the transaction commit on dispose; if an exception is thrown, Complete() is never called and both databases roll back.
Good use of transactions
Principles
Keep transactions short – Open the transaction, do the minimal set of DB operations, then commit or roll back. Do not do slow I/O, HTTP calls, or user interaction inside the transaction.
One logical unit of work – A transaction should represent one business operation (e.g. “place order” = insert order + lines + update inventory). Do not wrap unrelated operations in one huge transaction.
Access resources in a consistent order – To reduce deadlocks, always lock or touch tables in the same order (e.g. orders then order_lines then inventory). Document this order and enforce it in code.
Choose the right isolation level – Use Read Committed unless you need Repeatable Read or Serializable. Use Snapshot for long read-only work if your database supports it.
Always commit or roll back – Use try/catch: commit in the success path, roll back (or let the scope not complete) on exception. Avoid leaving transactions open.
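The "consistent order" principle can be sketched as follows; the Accounts table and ExecuteSqlInterpolatedAsync approach are illustrative, not prescribed by the article:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Sketch: apply both UPDATEs in ascending id order inside one transaction,
// so two concurrent transfers (A→B and B→A) acquire their exclusive row
// locks in the same global order and cannot deadlock on each other.
public async Task TransferInLockOrderAsync(AppDbContext context, int fromId, int toId, decimal amount)
{
    // Debit the source, credit the target, then sort by id for a fixed lock order
    var steps = new[] { (Id: fromId, Delta: -amount), (Id: toId, Delta: amount) }
        .OrderBy(s => s.Id);

    await using var transaction = await context.Database.BeginTransactionAsync();
    foreach (var step in steps)
    {
        // Parameterized via interpolation; each UPDATE locks its row until commit
        await context.Database.ExecuteSqlInterpolatedAsync(
            $"UPDATE accounts SET balance = balance + {step.Delta} WHERE id = {step.Id}");
    }
    await transaction.CommitAsync();
}
```

Without the ordering, a transfer 1→2 and a concurrent transfer 2→1 can each lock their first row and wait forever on the other's.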
When to wrap what
| Scenario | Wrap in transaction? |
| --- | --- |
| Single SaveChangesAsync (one context) | Usually no; EF Core wraps each SaveChanges call in an implicit transaction. |
| Multiple SaveChangesAsync on the same context | Yes, if they must be atomic (e.g. insert order then insert lines in one commit). |
| Multiple contexts or connections | Yes; use TransactionScope or a single connection shared across contexts. |
| Read-only report | No (or use Snapshot in a short scope if you need a consistent snapshot). |
Benefits of using transactions
Using transactions gives you atomicity, consistency, isolation, and durability—and concrete benefits in how you design and operate systems.
| Benefit | What it gives you |
| --- | --- |
| No partial updates | If one step fails, all changes in the transaction are rolled back. You never leave "half" an order or half a transfer (e.g. debit done, credit not done). |
| Valid state | The database moves from one valid state to another; constraints (FKs, checks) hold. You can rely on invariants (e.g. sum of order lines = order total) after commit. |
| Predictable reads | The isolation level controls what you see from other transactions. You avoid acting on uncommitted or inconsistent data. |
| Committed data persists | Once you commit, the result is durable (crash-safe). This enables retry: if the client times out after commit, re-reading shows the committed state. |
| Clean failure and retry | On exception you roll back and retry the whole unit of work. No need to manually undo half-applied changes. |
When it pays off: Any multi-step write (orders + lines, transfer between accounts, reservation + payment). Transactions are a proven way to keep data correct under concurrency and failure.
Enterprise transaction management
What it is and when to use it
Enterprise transaction management means: clear transaction boundaries (e.g. at the application or domain service layer), retry policies for transient failures and deadlocks, consistent exception handling and logging, and optionally a Unit of Work that groups operations into one transaction. Below is a full-fledged example: a service that uses a transaction boundary, Polly for retry (deadlock and timeouts), and structured logging.
Full working example: Service with transaction boundary and retry
Alternative: TransactionScope with retry (ambient transaction)
```csharp
public async Task<OrderResult> PlaceOrderWithScopeAsync(OrderRequest request, CancellationToken ct = default)
{
    return await TransactionRetryPolicy.Create().ExecuteAsync(async () =>
    {
        using var scope = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
            TransactionScopeAsyncFlowOption.Enabled);

        await using var context = await _contextFactory.CreateDbContextAsync(ct);

        var order = new Order { CustomerId = request.CustomerId, Status = OrderStatus.Pending };
        context.Orders.Add(order);
        await context.SaveChangesAsync(ct);

        foreach (var line in request.Lines)
            context.OrderLines.Add(new OrderLine { OrderId = order.Id, Sku = line.Sku, Qty = line.Qty });
        await context.SaveChangesAsync(ct);

        scope.Complete();
        return new OrderResult { OrderId = order.Id };
    });
}
```
How this fits together: The transaction boundary is the service method (here, PlaceOrderWithScopeAsync): one transaction per call. Polly wraps the whole operation and retries on deadlock (SQL 1205) or timeout (SQL -2, TimeoutException). On retry, a new transaction is started. Logging records start, success, and failure (with rollback). For multi-context or distributed scenarios, use TransactionScope inside the retry instead of BeginTransactionAsync. This pattern gives you enterprise-grade transaction management: clear boundaries, retries, and observability.
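The example calls TransactionRetryPolicy.Create(), which is not shown in the article; one way to sketch it with Polly (the class name, retry count, and delays are assumptions):

```csharp
using System;
using Microsoft.Data.SqlClient;
using Polly;
using Polly.Retry;

// Hypothetical retry policy for transient transaction failures.
public static class TransactionRetryPolicy
{
    public static AsyncRetryPolicy Create() =>
        Policy
            // SQL Server: 1205 = chosen as deadlock victim, -2 = timeout
            .Handle<SqlException>(ex => ex.Number == 1205 || ex.Number == -2)
            .Or<TimeoutException>()
            .WaitAndRetryAsync(
                retryCount: 3,
                // Exponential backoff: 200 ms, 400 ms, 800 ms
                sleepDurationProvider: attempt =>
                    TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
}
```

Only retry the whole unit of work: the previous transaction was rolled back when the deadlock victim was killed, so re-running it from the start is safe.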
Rollback in depth
When rollback happens
Rollback undoes all changes made in the transaction so the database returns to the state before the transaction started. It happens in these cases:
| Situation | What happens |
| --- | --- |
| Explicit | You call transaction.RollbackAsync() or transaction.Rollback(). |
| TransactionScope | You do not call scope.Complete() before disposing the scope. On dispose, the transaction rolls back. |
| Exception | An exception is thrown before CommitAsync or Complete. If you catch it and call RollbackAsync, or if you never call Complete, the transaction rolls back. |
| Disposal without commit | If you dispose the transaction (or scope) without having committed or called Complete, the runtime rolls back. |
So auto rollback occurs when: (1) you use TransactionScope and do not call Complete() (e.g. because an exception was thrown), or (2) you dispose an explicit transaction without having called CommitAsync. In both cases, the database rolls back the transaction.
How to guarantee rollback
Use a pattern that always rolls back on failure and never leaves the transaction open:
Explicit transaction: In a try block, do work and CommitAsync. In catch or finally, call RollbackAsync only if the transaction is still active (not yet committed). Then rethrow or handle. Disposing the transaction (e.g. await using) also triggers rollback if not committed.
TransactionScope: Do not call Complete() unless all steps succeeded. Use try/catch: on exception, do not call Complete(); when the scope is disposed, it rolls back automatically.
Proven pattern (explicit transaction with guaranteed rollback):
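A minimal sketch of that pattern (the context factory and workload are assumed):

```csharp
// Explicit transaction with guaranteed rollback: commit on success,
// roll back in catch, and rely on disposal as a safety net.
await using var context = await _contextFactory.CreateDbContextAsync();
await using var transaction = await context.Database.BeginTransactionAsync();
try
{
    // ... work ...
    await context.SaveChangesAsync();
    await transaction.CommitAsync();
}
catch
{
    await transaction.RollbackAsync(); // transaction is still active here
    throw;
}
// await using also rolls back if we somehow exit without committing
```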
Proven pattern (TransactionScope – auto rollback on exception):
```csharp
using var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
    TransactionScopeAsyncFlowOption.Enabled);

// ... work ...

scope.Complete(); // only reached if everything succeeded
// If an exception is thrown first, Complete() is never called
// → scope disposes → auto rollback
```
How this fits together: With TransactionScope, you get auto rollback whenever you do not call Complete()—so any exception or early return causes rollback on dispose. With explicit transactions, calling RollbackAsync in catch (or relying on disposal) guarantees no commit on failure. Either way, the database returns to a consistent state.
Enterprise self-check and auto rollback
What it is and when to use it
Enterprise self-check means verifying that the outcome of the transaction satisfies your invariants or business rules before you commit. If the check fails, you do not commit—you roll back (explicitly or by not calling Complete()). That gives you a proven way to avoid committing invalid state: validate inside the transaction, then commit only when validation passes; otherwise auto rollback by not committing.
When to use: Whenever a single “unit of work” has invariants (e.g. total = sum of lines, debits = credits, stock never negative). Run the check after your writes but before CommitAsync or Complete(); if it fails, roll back and return an error or retry.
Proven verification methods
| Method | What you do | When it helps |
| --- | --- | --- |
| Read-your-writes | After updates, re-read the affected data and verify (e.g. sum of line amounts = order total). | Catches calculation or flush-order bugs. |
| Invariant check | Assert business rules (e.g. stock.Quantity >= 0, balance >= 0). | Ensures no invalid state is committed. |
| Balance / consistency check | For transfers: sum of debits = sum of credits; or total reserved = sum of allocations. | Standard in financial and inventory systems. |
| Constraint check | Rely on DB constraints (FK, CHECK, UNIQUE) and catch constraint violations; then roll back in catch. | The database enforces the rules; you roll back on violation. |
Full working example: Validate then commit or auto rollback
```csharp
public async Task<OrderResult> PlaceOrderWithValidationAsync(OrderRequest request, CancellationToken ct = default)
{
    await using var context = await _contextFactory.CreateDbContextAsync(ct);
    await using var transaction = await context.Database.BeginTransactionAsync(IsolationLevel.ReadCommitted, ct);
    try
    {
        var order = new Order { CustomerId = request.CustomerId, Status = OrderStatus.Pending };
        context.Orders.Add(order);
        await context.SaveChangesAsync(ct);

        decimal total = 0m;
        foreach (var line in request.Lines)
        {
            var amount = line.UnitPrice * line.Qty;
            total += amount;
            context.OrderLines.Add(new OrderLine { OrderId = order.Id, Sku = line.Sku, Qty = line.Qty, Amount = amount });
        }
        await context.SaveChangesAsync(ct);

        order.Total = total;
        await context.SaveChangesAsync(ct);

        // Self-check: read-your-writes – order total must equal the sum of lines
        var lineSum = await context.OrderLines.Where(l => l.OrderId == order.Id).SumAsync(l => l.Amount, ct);
        if (lineSum != order.Total)
            throw new InvalidOperationException("Order total does not match line sum."); // catch rolls back

        // Optional invariant: no non-positive quantities
        var negativeQty = await context.OrderLines.AnyAsync(l => l.OrderId == order.Id && l.Qty <= 0, ct);
        if (negativeQty)
            throw new InvalidOperationException("Order line quantity must be positive."); // catch rolls back

        await transaction.CommitAsync(ct);
        return new OrderResult { OrderId = order.Id };
    }
    catch
    {
        // Single rollback path: rolling back twice would throw
        await transaction.RollbackAsync(ct);
        throw;
    }
}
```
How this fits together: All work runs inside one transaction. Before CommitAsync, we run self-checks: (1) read-your-writes (total = sum of lines), (2) invariant (no non-positive quantity). If any check fails, we throw; the catch block calls RollbackAsync, so invalid state is never committed. (Throwing and rolling back in one place avoids calling RollbackAsync twice, which would itself throw.) Only when all checks pass do we commit. This is a solid, enterprise-level pattern: verify with proven methods, then commit or roll back.
Deadlocks
What it is and when it happens
A deadlock occurs when two (or more) transactions wait for each other to release locks. For example: Transaction A holds lock on row 1 and waits for row 2; Transaction B holds lock on row 2 and waits for row 1. Neither can proceed; the database detects this and kills one transaction (victim) so the other can complete.
When to care: Higher isolation (Repeatable Read, Serializable) and long transactions increase lock holding time and deadlock likelihood. Keep transactions short, access resources in a consistent order (e.g. always lock accounts in order of id), and retry on deadlock (catch the deadlock exception and re-run the transaction).
Reducing deadlocks
Keep transactions short and small (few statements).
Access tables and rows in a fixed order across the app (e.g. always orders then order_lines).
Use Snapshot for read-only work so readers do not hold locks.
Retry with exponential backoff when the database returns a deadlock error.
Optimistic vs pessimistic concurrency
| Approach | How it works | When to use |
| --- | --- | --- |
| Optimistic | No locks on read. On write, you check that the row has not changed (e.g. via a RowVersion / timestamp column). If it has changed, you get a concurrency exception and retry or report to the user. | When conflicts are rare; good for many web apps. |
| Pessimistic | Lock the row (or page) when you read (e.g. SELECT ... FOR UPDATE). Other transactions block until you release. | When you need to reserve the row and conflicts are likely. |
In EF Core: Use a concurrency token (e.g. [Timestamp] public byte[] RowVersion { get; set; }) so that SaveChanges includes the token in the WHERE clause. If the row was updated by someone else, no rows are updated and EF throws DbUpdateConcurrencyException. That is optimistic concurrency.
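A sketch of this setup; the Account entity, AppDbContext, and TryWithdrawAsync are illustrative names, not from the article:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    [Timestamp] // maps to SQL Server rowversion; EF adds it to the UPDATE's WHERE clause
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

public async Task<bool> TryWithdrawAsync(AppDbContext context, int accountId, decimal amount)
{
    var account = await context.Accounts.FindAsync(accountId);
    if (account is null || account.Balance < amount) return false;

    account.Balance -= amount;
    try
    {
        // UPDATE ... WHERE Id = @id AND RowVersion = @originalVersion
        await context.SaveChangesAsync();
        return true;
    }
    catch (DbUpdateConcurrencyException)
    {
        // Another transaction changed the row first: refresh and let the caller retry
        await context.Entry(account).ReloadAsync();
        return false;
    }
}
```

If the WHERE clause matches zero rows (someone else updated the row), EF throws DbUpdateConcurrencyException instead of silently overwriting the other writer's change.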
Comparison: when to use which
| Scenario | Isolation level | Concurrency |
| --- | --- | --- |
| Default app workload | Read Committed | Optimistic (RowVersion) for updates |
| Same row read twice, must match | Repeatable Read or Snapshot | — |
| Stable result set (no phantoms) | Serializable or Snapshot | — |
| Long read-only report | Snapshot (if enabled) | — |
| Critical financial / inventory | Serializable or careful Repeatable Read + retry | Pessimistic if needed |
Common pitfalls
Holding transactions too long – Do not open a transaction, then do slow I/O or user interaction inside it. Keep the transaction around only the DB operations.
Read Uncommitted in production – Avoid; you risk acting on rolled-back data.
Serializable everywhere – Causes blocking and deadlocks. Use only where necessary.
Ignoring deadlocks – Implement retry with backoff when the driver throws a deadlock exception.
Forgetting to commit or roll back – Always Commit or Rollback (or Complete on TransactionScope); otherwise connections and locks can leak.
Mixing isolation levels – Be explicit: set the isolation level when you begin the transaction so you know what guarantees you have.
Summary
Transactions give you ACID, and isolation levels control which read phenomena can occur; in .NET and EF Core, use BeginTransactionAsync or TransactionScope and always roll back on failure. Choosing the wrong isolation level or leaving work too long in a transaction leads to deadlocks and correctness bugs; keeping transactions short and matching isolation to your consistency needs keeps behaviour predictable. Next, review a high-concurrency path in your app, set the isolation level explicitly, and add a self-check before commit where invariants must hold.
Transactions provide ACID: atomicity, consistency, isolation, and durability. Benefits include no partial updates, valid state, predictable reads, durability, and clean failure and retry. Isolation level controls which read phenomena (dirty, non-repeatable, phantom) can occur. In .NET and EF Core, use BeginTransactionAsync(IsolationLevel) or TransactionScope; use TransactionScopeAsyncFlowOption.Enabled when using async/await. Rollback happens explicitly (RollbackAsync), when TransactionScope is disposed without Complete(), or on exception; use try/catch and do not commit on failure to guarantee rollback. Enterprise self-check means validating invariants (read-your-writes, balance checks) before commit; if validation fails, auto rollback by not committing (or by calling RollbackAsync). This article covered ACID, benefits, read phenomena, all isolation levels, TransactionScope, good use, enterprise management, rollback in depth, self-check and auto rollback, deadlocks, and optimistic vs pessimistic concurrency.
Position & Rationale
I use transactions when I need ACID—multiple reads and writes must commit or roll back together; I keep transactions short and avoid user input or long-running work inside them. I choose isolation level explicitly: Read Committed (default) for most cases; Repeatable Read or Serializable only when I need to prevent non-repeatable or phantom reads and I’ve measured the impact. I use RCSI (Read Committed Snapshot Isolation) when I want to reduce blocking without raising the level. I never commit on failure—I use try/catch and RollbackAsync (or dispose TransactionScope without Complete) so rollback is guaranteed. I do enterprise self-check (validate invariants before commit) and auto rollback (don’t commit if validation fails). I reject raising isolation “to be safe” without evidence—Serializable can cause deadlocks and timeouts. I reject long-running or user-blocking work inside a transaction; keep it short and retry on deadlock.
Trade-Offs & Failure Modes
What this sacrifices: Higher isolation (Repeatable Read, Serializable) increases blocking and deadlock risk; long transactions hold locks and hurt throughput.
Where it degrades: When we use Serializable everywhere—deadlocks and timeouts. When we keep a transaction open across user input or external calls—locks are held too long. When we don't roll back on failure—partial updates and inconsistent state.
How it fails when misapplied: Using a transaction when messaging and eventual consistency would do. Mixing isolation levels without being explicit. Forgetting TransactionScopeAsyncFlowOption.Enabled with async/await so the scope doesn’t flow. Committing after a validation failure.
Early warning signs: “We’re getting deadlocks”; “transactions are timing out”; “we see partial updates after a crash”; “we’re not sure which isolation level we’re using.”
What Most Guides Miss
Guides often explain ACID and isolation levels but skip when to use which level and rollback discipline. In practice, Read Committed is enough for most cases; Repeatable Read and Serializable have real cost (blocking, deadlocks) and should be chosen with evidence. TransactionScope with async/await requires TransactionScopeAsyncFlowOption.Enabled or the scope doesn’t flow to the continuation—many posts miss that. Enterprise self-check (validate before commit, rollback if invalid) is underplayed; it’s the way to enforce invariants without holding locks too long. RCSI as a way to reduce blocking without changing application isolation level is rarely discussed for app devs.
Decision Framework
If you need multiple operations to commit or roll back together → Use a transaction; keep it short; use try/catch and roll back on failure.
If you need stronger read guarantees → Choose isolation level explicitly (Repeatable Read, Serializable) only with evidence; prefer RCSI to reduce blocking where possible.
If you use TransactionScope with async/await → Use TransactionScopeAsyncFlowOption.Enabled so the scope flows.
If you need to enforce invariants → Validate before commit (self-check); if validation fails, don’t commit (or call RollbackAsync).
If you get deadlocks → Shorten transactions; avoid user input inside the transaction; retry on deadlock; consider lower isolation or RCSI.
If reads are long-running or reporting → Consider read replicas or snapshot isolation so reporting doesn’t block OLTP.
Use transactions for ACID; keep them short; never commit on failure—roll back explicitly or dispose the scope without calling Complete().
Choose isolation level explicitly; prefer Read Committed (or RCSI); raise only with evidence. Use TransactionScopeAsyncFlowOption.Enabled with async.
Self-check before commit; roll back if validation fails. Revisit when concurrency or deadlocks become a problem.
For enterprise applications, I offer consulting on database architecture, performance tuning, and EF Core optimization.
When I Would Use This Again — and When I Wouldn’t
I would use this approach again when I’m designing or debugging concurrent access to a relational database and need to choose or tune isolation level and handle deadlocks. I wouldn’t raise isolation “to be safe” without measuring—Serializable can cause more problems than it solves. I also wouldn’t leave long-running or user-blocking work inside a transaction; keep it short and retry on deadlock. Alternative: for read-heavy reporting, consider read replicas or snapshot isolation so reporting doesn’t block OLTP; for eventually consistent workloads, consider messaging and sagas instead of high isolation.
Frequently Asked Questions
What is ACID?
ACID: Atomicity (all operations in a transaction commit or none do), Consistency (valid state before and after), Isolation (concurrent transactions do not see each other’s uncommitted work in unwanted ways), Durability (committed data persists).
What are isolation levels?
Isolation level controls how much concurrent transactions can see each other’s changes. Levels: Read Uncommitted, Read Committed (default), Repeatable Read, Serializable, Snapshot. Higher isolation prevents more read phenomena but can increase blocking and deadlocks.
What is a dirty read?
Reading uncommitted data from another transaction. If that transaction rolls back, you have seen data that never existed. Prevented by Read Committed and above.
What is a non-repeatable read?
Reading the same row twice in one transaction and getting different values because another transaction committed a change in between. Prevented by Repeatable Read, Serializable, and Snapshot.
What is a phantom read?
Running the same query twice in one transaction and getting different sets of rows because another transaction inserted or deleted matching rows. Prevented by Serializable; under Snapshot, both reads see the same snapshot, so no phantoms appear within the transaction.
Which isolation level should I use?
Read Committed for most workloads. Repeatable Read when the same row must not change within the transaction. Serializable when you need full isolation (no phantoms). Snapshot for long read-heavy transactions if your database supports it.
What is a deadlock?
Two (or more) transactions waiting for each other’s locks. The database kills one (victim). Keep transactions short, access resources in a fixed order, and retry on deadlock.
How do I set isolation level in EF Core?
Use context.Database.BeginTransactionAsync(IsolationLevel.RepeatableRead) (or another level). Commit or roll back when done. Alternatively use TransactionScope with TransactionOptions { IsolationLevel = ... }.
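Both styles can be sketched side by side. Note that `System.Data.IsolationLevel` (used by EF Core) and `System.Transactions.IsolationLevel` (used by TransactionScope) are distinct types, so fully qualifying them avoids a name clash; the work inside each block is a placeholder.

```csharp
using System.Transactions;
using Microsoft.EntityFrameworkCore;

// Option 1: explicit EF Core transaction with an explicit isolation level.
await using (var tx = await db.Database.BeginTransactionAsync(
                 System.Data.IsolationLevel.RepeatableRead))
{
    // ... reads and writes on this DbContext ...
    await tx.CommitAsync();
}

// Option 2: ambient TransactionScope with TransactionOptions.
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.RepeatableRead,
    Timeout = TimeSpan.FromSeconds(30)
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, options,
                                        TransactionScopeAsyncFlowOption.Enabled))
{
    // ... work across one or more connections ...
    scope.Complete();
}
```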
What is Snapshot isolation?
Row versioning: readers see a consistent snapshot of the database (as of transaction or statement start) without taking shared locks. Reduces blocking for readers. Enabled at database level in SQL Server (ALLOW_SNAPSHOT_ISOLATION, READ_COMMITTED_SNAPSHOT).
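The database-level switches look like this in T-SQL (a one-time setup, typically run by a DBA; `MyAppDb` is a placeholder database name):

```sql
-- Allow transactions to request SET TRANSACTION ISOLATION LEVEL SNAPSHOT.
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Make the default Read Committed level use row versions instead of
-- shared locks (RCSI); may require briefly draining active connections.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;
```

With RCSI on, the application keeps using Read Committed but readers no longer block writers, which is why it is a low-friction way to reduce blocking.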
Read Committed vs Snapshot?
Read Committed uses locks; readers can block writers and vice versa. Snapshot uses versions; readers see a snapshot and do not block writers (and writers do not block readers in the same way). Snapshot is better for long-running or read-heavy workloads when available.
When to use Serializable?
When you need full isolation (no dirty, non-repeatable, or phantom reads). Use sparingly; it increases locking and deadlock risk. Prefer Snapshot for read-heavy work if supported.
What is MVCC?
Multi-Version Concurrency Control: the database keeps multiple versions of rows. Readers see a consistent version without blocking writers. Snapshot isolation in SQL Server uses a form of MVCC.
How does EF Core handle concurrency?
Use a concurrency token (e.g. RowVersion / timestamp). On update, EF includes the token in the WHERE clause. If the row was changed by another transaction, no row is updated and EF throws DbUpdateConcurrencyException. That is optimistic concurrency.
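A minimal sketch of the token and the conflict handling, assuming a hypothetical `Product` entity; the `[Timestamp]` attribute maps the property to a SQL Server `rowversion` column.

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public decimal Price { get; set; }

    [Timestamp]                       // maps to SQL Server rowversion
    public byte[] RowVersion { get; set; }
}

// On update, EF Core adds RowVersion to the WHERE clause; if another
// transaction changed the row, zero rows match and EF throws.
try
{
    product.Price = newPrice;
    await db.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    // Reload current values and retry, or surface the conflict to the caller.
}
```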
Optimistic vs pessimistic concurrency?
Optimistic: No lock on read; on write, check that the row has not changed (e.g. RowVersion). If it has, retry or report. Pessimistic: Lock the row on read (e.g. SELECT FOR UPDATE) so others block. Use optimistic when conflicts are rare; pessimistic when you need to reserve the row.
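SQL Server has no `SELECT ... FOR UPDATE`; a common pessimistic equivalent is a locking hint via raw SQL inside an explicit transaction. A sketch, assuming a hypothetical `Accounts` table mapped to `db.Accounts`:

```csharp
using Microsoft.EntityFrameworkCore;

// Pessimistic: take an update lock on the row so concurrent writers block
// until this transaction commits. Must run inside an explicit transaction.
var account = await db.Accounts
    .FromSqlInterpolated(
        $"SELECT * FROM Accounts WITH (UPDLOCK) WHERE Id = {accountId}")
    .SingleAsync();

account.Balance -= amount;          // the row is reserved for this transaction
await db.SaveChangesAsync();
```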
What is TransactionScope?
An ambient transaction in .NET. You create a TransactionScope, do work (e.g. multiple DbContexts or connections), then call Complete(). If all participants succeed, the transaction commits; otherwise it rolls back. Use TransactionScopeAsyncFlowOption.Enabled when using async/await. Supports distributed scenarios (e.g. two databases) when the coordinator is configured.
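A sketch of the scope-based model with async/await, assuming two hypothetical DbContexts. Note that spanning two connections can escalate the transaction to a distributed one, which needs a configured coordinator.

```csharp
using System.Transactions;

// Ambient transaction that flows across await points thanks to
// TransactionScopeAsyncFlowOption.Enabled.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await ordersContext.SaveChangesAsync();   // participant 1
    await auditContext.SaveChangesAsync();    // participant 2

    scope.Complete();  // skip this on failure: disposal then rolls everything back
}
```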
TransactionScope Required vs RequiresNew vs Suppress?
Required: Join current ambient transaction or create a new one. RequiresNew: Always create a new transaction (nested operation commits or rolls back independently). Suppress: Do not participate in a transaction (e.g. read-only diagnostic query).
When should I use TransactionScope vs BeginTransaction?
Use TransactionScope when you need multiple connections or DbContexts in one transaction, or when you want a consistent scope-based model. Use BeginTransaction when you have a single DbContext and want explicit control and slightly less overhead.
How do I implement retry for deadlocks?
Use Polly: Handle<SqlException>(ex => ex.Number == 1205) for deadlock, then WaitAndRetryAsync with exponential backoff. Wrap the whole transaction (begin → work → commit) in the retry policy so each retry starts a new transaction.
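A sketch with Polly, assuming an EF Core `db` context; the retried delegate owns the whole transaction so each attempt starts fresh.

```csharp
using Microsoft.Data.SqlClient;
using Polly;

// Retry the whole unit of work on SQL Server deadlock (error 1205),
// with exponential backoff between attempts.
var deadlockRetry = Policy
    .Handle<SqlException>(ex => ex.Number == 1205)
    .WaitAndRetryAsync(3, attempt =>
        TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));

await deadlockRetry.ExecuteAsync(async () =>
{
    await using var tx = await db.Database.BeginTransactionAsync();
    // ... reads and writes ...
    await tx.CommitAsync();   // a deadlock victim throws before reaching here
});
```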
Unit of Work and transactions?
A Unit of Work groups multiple repository or context operations into one transaction and commits once at the end. In .NET you can implement it with a single DbContext (one transaction per SaveChanges) or with TransactionScope wrapping multiple operations. The service layer defines the boundary: one unit of work per use case (e.g. PlaceOrder).
When does rollback happen?
Rollback happens when you call RollbackAsync (explicit), when TransactionScope is disposed without calling Complete(), or when an exception occurs before commit and you do not call Complete or CommitAsync. Disposing an uncommitted transaction also triggers rollback.
How do I guarantee rollback on failure?
Use try/catch: in the catch block call transaction.RollbackAsync() (or do not call Complete() for TransactionScope), then rethrow. With TransactionScope, simply do not call Complete() if anything failed—disposal will auto rollback. Never commit in the catch block.
What is enterprise self-check and auto rollback?
Self-check means validating invariants or business rules before commit (e.g. read-your-writes: total = sum of lines; balance checks). If the check fails, auto rollback by not calling CommitAsync or Complete(), or by calling RollbackAsync, so invalid state is never committed. Proven methods: read-your-writes, invariant checks, balance/consistency checks, and DB constraints.
What are the benefits of using transactions?
Atomicity: no partial updates (all or nothing). Consistency: valid state before and after. Isolation: predictable reads under concurrency. Durability: committed data persists. Clean failure and retry: on exception you roll back and can retry the whole unit of work without manually undoing changes.