Waqas Ahmad — Software Architect & Technical Consultant - Available USA, Europe, Global

Specializing in: Distributed Systems, .NET Architecture, Cloud-Native Architecture, Azure Cloud Engineering, API Architecture, Microservices Architecture, Event-Driven Architecture, Database Design & Optimization

👋 Hi, I'm Waqas — a Software Architect and Technical Consultant specializing in .NET, Azure, microservices, and API-first system design.
I help companies build reliable, maintainable, and high-performance backend platforms that scale.

Experienced across engineering ecosystems shaped by Microsoft, the Cloud Native Computing Foundation, and the Apache Software Foundation.

Available for remote consulting (USA, Europe, Global) — flexible across EST, PST, GMT & CET.


SQL Server Performance Tuning for Application Developers

SQL Server tuning for application developers: indexes, plans, statistics, and locking.


Introduction

This guidance reflects what I have applied and refined on real projects (current as of 2026). It holds when the constraints described below match your system; it breaks down when your context or constraints differ.

Application developers often leave SQL tuning to DBAs, but slow queries and blocking usually come down to indexing, plan awareness, and configuration choices that developers can influence. This article covers SQL Server performance tuning for application developers: indexes (clustered, nonclustered, covering, filtered), statistics, execution plans, parameter sniffing, server settings, locking, triggers, and Query Store, so that your EF Core and raw SQL queries scale. For architects and tech leads, the discipline that matters as load grows is measuring first (plans, Query Store) and then applying targeted indexes and settings; the golden rule is measure before you add indexes or rewrite queries.

If you are new to SQL tuning, start with the Topics covered list and the "SQL Server performance tuning at a glance" section.

For a deeper overview of this topic, explore the full Database Design & Optimization guide.

Decision Context

  • System scale: The approach applies across scales; the body notes where scale changes the trade-offs (e.g. parallelism settings on busy OLTP servers).
  • Team size: Typically small to medium teams; ownership and clarity matter more than headcount.
  • Time / budget pressure: Applicable under delivery pressure; I’ve used it in both greenfield work and incremental refactors.
  • Technical constraints: .NET and SQL Server where relevant; constraints are noted in the article where they affect the approach.
  • Non-goals: This article does not optimize for every possible scenario; boundaries are stated where they matter.

What is SQL tuning and why application developers need it

SQL tuning (or database performance tuning) is the practice of making queries and schema (indexes, statistics, settings) faster so that your application responds quickly and does not overload the database. Application developers write the queries (via EF Core, Dapper, or raw SQL), so understanding indexes (what they are, when to add them), query plans (how SQL Server executes a query), statistics (how the engine estimates rows), parameterization (why it matters for plan reuse and security), triggers and functions (when they hurt performance), and server settings (MAXDOP, memory, cost threshold) helps you avoid slow queries and N+1 patterns.

Why it matters: Slow or blocking queries increase latency and can take down an app under load. Measuring with execution plans and Query Store, then tuning indexes, statistics, and query shape, reduces mean response time and prevents production incidents. This article explains these concepts and how to apply them when tuning SQL Server from an application developer’s perspective.


SQL Server performance tuning at a glance

Index: Structure that speeds up lookups and sorts; clustered = physical order (one per table); nonclustered = separate structure; covering (INCLUDE) = query satisfied from the index alone.
Statistics: Metadata for row-count estimates; stale or missing statistics lead to bad plans. Auto-create and auto-update are usually on; use UPDATE STATISTICS for critical tables.
Execution plan: Recipe the engine uses to run a query: seek (good) vs scan (often bad on large tables); key lookup = fetching the rest of the row from the table.
Parameter sniffing: The first parameter value shapes the cached plan; sometimes that plan is bad for other values. Use OPTION (RECOMPILE) or OPTION (OPTIMIZE FOR) only with evidence.
MAXDOP: Max degree of parallelism; limits how many cores one query can use. Set per query or server-wide.
Cost threshold: Threshold (in cost units) above which the engine considers a parallel plan.
TempDB: Shared workspace for temp tables, sorts, and spills; configure file count and size for the workload.
RCSI: Read Committed Snapshot Isolation; readers do not block writers, reducing blocking.
Trigger: Code that runs on INSERT/UPDATE/DELETE; can slow writes and cause blocking; prefer set-based logic or change tracking.
Scalar function: Function that returns one value; inlined in SQL Server 2019+ when it meets the inlining requirements; avoid in hot paths when not inlined.
Query Store: Captures plans and runtime stats per query; use it for plan analysis and plan forcing.

Indexes: clustered, nonclustered, covering, filtered

Indexes are structures that the database uses to find rows quickly instead of scanning the whole table. A clustered index defines the physical order of rows in the table (one per table); a nonclustered index is a separate structure that stores key columns and pointers to the data.

Step 1: Basic nonclustered index (filter and sort)

-- Index for common filter and sort: CustomerId, Status, CreatedAt
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Status_Created
ON dbo.Orders (CustomerId, Status, CreatedAt DESC);

What this does: The engine can seek on CustomerId, Status, and use the index for ORDER BY CreatedAt DESC instead of scanning the table. Use for queries that filter and sort by these columns.

Step 2: Covering index (INCLUDE)

-- Covering index: add INCLUDE so the query is satisfied from the index alone (no key lookup)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Status_Created_INCLUDE
ON dbo.Orders (CustomerId, Status, CreatedAt DESC)
INCLUDE (Total, Currency);

What this does: If the query only needs CustomerId, Status, CreatedAt, Total, Currency, the engine can satisfy it entirely from the index without touching the table (key lookup eliminated). Use INCLUDE for columns that appear in SELECT or WHERE but are not part of the key.

Step 3: Filtered index (subset of rows)

-- Filtered index: only index rows where Status = 'Pending' (e.g. hot path)
CREATE NONCLUSTERED INDEX IX_Orders_Pending_CustomerId
ON dbo.Orders (CustomerId, CreatedAt DESC)
INCLUDE (Total)
WHERE Status = 'Pending';

What this does: The index only contains rows where Status = 'Pending', so it is smaller and faster for queries that filter by Status = 'Pending'. Use filtered indexes when a significant subset of queries target a subset of rows.

Clustered index: choose wisely

-- Clustered index: one per table; choose narrow, stable, increasing key
-- Example: Id (identity) or CreatedAt for append-only tables
CREATE CLUSTERED INDEX IX_Orders_Clustered ON dbo.Orders (Id);
-- Or for time-series: CREATE CLUSTERED INDEX IX_Events_Clustered ON dbo.Events (CreatedAt);

What this does: The clustered index defines the physical order of the table. Choose a narrow, stable, increasing key (e.g. Id, CreatedAt) so that inserts do not cause excessive page splits.

How this fits together: Start with a nonclustered index on filter/sort columns; add INCLUDE to make it covering when the query only needs a few columns; use filtered indexes when queries target a subset of rows. Keep the clustered key narrow and stable. Do not over-index: every index adds write cost and storage; add indexes for the queries that run most often or are slow.
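To decide which indexes earn their keep, the index usage DMV shows reads versus writes per index. A diagnostic sketch (counters reset on instance restart, so judge over a full workload cycle):

```sql
-- Reads (seeks/scans/lookups) vs writes (updates) per index in the current database
SELECT OBJECT_NAME(ius.object_id) AS TableName, i.name AS IndexName,
       ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.dm_db_index_usage_stats ius
JOIN sys.indexes i
  ON ius.object_id = i.object_id AND ius.index_id = i.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_updates DESC;
-- Low seeks/scans with high updates = candidate for removal
```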


Statistics: auto-create, auto-update, manual

Statistics are metadata the engine uses to estimate row counts and choose plans. Stale or missing statistics lead to bad plans (e.g. scan instead of seek).

Check and update statistics manually

-- Update statistics for a table (full scan)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Update statistics for a specific index
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId_Status_Created WITH FULLSCAN;

-- Create statistics on a column if missing (engine usually auto-creates)
CREATE STATISTICS ST_Orders_Status ON dbo.Orders (Status);

What this does: UPDATE STATISTICS refreshes the distribution information used by the optimizer. Use FULLSCAN for accuracy when the table has changed significantly. CREATE STATISTICS ensures statistics exist on a column (e.g. for predicates the engine has not seen).

Verify auto-create and auto-update

-- Database-level: ensure auto_create_statistics and auto_update_statistics are ON
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases WHERE name = DB_NAME();

How this fits together: Keep auto-create and auto-update statistics ON. For critical or large tables, add maintenance jobs to run UPDATE STATISTICS ... WITH FULLSCAN periodically (e.g. after large loads) so that plans stay accurate.
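Before scheduling manual updates, it helps to check how stale statistics actually are. A sketch against the dbo.Orders example table used in this article:

```sql
-- Statistics freshness for dbo.Orders: last update and modifications since
SELECT s.name AS StatName, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
ORDER BY sp.modification_counter DESC;
```

A high modification_counter relative to rows is a signal that UPDATE STATISTICS is worth running.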


Execution plans: seek vs scan, key lookup

Use execution plans (SSMS, Azure Data Studio) to see how SQL Server runs a query: index seek (good) vs index scan or table scan (often bad on large tables). Key lookup is when the engine uses an index to find rows but then has to look up the rest of the columns from the table; for wide tables or many rows, that can be expensive.

Enable actual execution plan and run a query

-- In SSMS / Azure Data Studio, enable "Include Actual Execution Plan" before running
SET STATISTICS IO ON;
SELECT OrderId, CustomerId, Status, CreatedAt, Total
FROM dbo.Orders
WHERE CustomerId = @CustomerId AND Status = 'Pending'
ORDER BY CreatedAt DESC;
SET STATISTICS IO OFF;

What to look for: In the plan, Index Seek on IX_Orders_CustomerId_Status_Created is good; Index Scan or Table Scan on a large table is a red flag. Key Lookup (bookmark lookup) means the index does not cover the query—add INCLUDE columns to make it covering. Check Estimated rows vs Actual rows; large variance can indicate statistics issues.

Parameterized query from application (EF Core)

// EF Core: parameterized by default; plan reused for different CustomerId
var orders = await context.Orders
    .Where(o => o.CustomerId == customerId && o.Status == OrderStatus.Pending)
    .OrderByDescending(o => o.CreatedAt)
    .Select(o => new { o.OrderId, o.CustomerId, o.Status, o.CreatedAt, o.Total })
    .ToListAsync();

How this fits together: Always parameterize queries so that plans are reused and SQL injection is avoided. Use execution plans to confirm seek vs scan and to eliminate key lookup with covering indexes.


Parameter sniffing and plan reuse

Parameter sniffing is when the engine compiles a plan for the first parameter value and reuses it for others; sometimes that plan is bad for other values (e.g. highly selective for value A, not selective for value B).

Option 1: OPTION (RECOMPILE) — recompile every time

SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

What this does: The plan is recompiled for each execution, so each parameter value gets an appropriate plan. Use when the same query has very different optimal plans for different parameter values and you have evidence (e.g. from Query Store). Cost: extra compile time per execution.

Option 2: OPTION (OPTIMIZE FOR) — hint a value

SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
-- Or optimize for a specific value: OPTION (OPTIMIZE FOR (@CustomerId = 12345));

What this does: OPTIMIZE FOR (@CustomerId UNKNOWN) uses average selectivity for the plan; OPTIMIZE FOR (@CustomerId = 12345) optimizes for a specific value. Use only when you have evidence that the default sniffed plan is wrong for most executions.

How this fits together: Measure with different parameter values (e.g. via Query Store) before adding RECOMPILE or OPTIMIZE FOR. In most cases, parameterized queries and up-to-date statistics are enough; use these options only when necessary.
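One way to find sniffing suspects is to look for queries that have compiled more than one plan. A diagnostic sketch against the Query Store catalog views (assumes Query Store is enabled on the database):

```sql
-- Queries with multiple cached plans: candidates for sniffing or plan instability
SELECT q.query_id, COUNT(DISTINCT p.plan_id) AS plan_count
FROM sys.query_store_query q
JOIN sys.query_store_plan p ON p.query_id = q.query_id
GROUP BY q.query_id
HAVING COUNT(DISTINCT p.plan_id) > 1
ORDER BY plan_count DESC;
```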


Server settings: MAXDOP, cost threshold, memory

MAXDOP (max degree of parallelism) limits how many cores a single query can use. Cost threshold for parallelism is the cost above which the engine considers a parallel plan.

Query current settings

-- Current server settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism', 'max server memory (MB)');

Set MAXDOP and cost threshold (example)

-- Example: limit parallelism to 4; only use parallel plans when cost > 50
EXEC sp_configure 'max degree of parallelism', 4; RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50; RECONFIGURE;

Override per query

SELECT * FROM dbo.LargeTable WHERE Region = @Region
OPTION (MAXDOP 2);

What this does: MAXDOP prevents a single query from consuming all cores (useful on busy OLTP servers). Cost threshold avoids parallel plans for cheap queries. Max server memory caps how much RAM SQL Server uses; set so that the OS and other apps have enough. Tune these with your DBA; defaults are often fine for small-medium workloads.


TempDB configuration

TempDB is a shared workspace for temp tables, table variables, sorts, and spills. Poor TempDB configuration can cause contention and slow queries.

Best practices (summary)

  • Multiple data files: One file per core (up to 8) for TempDB to reduce allocation contention.
  • Equal size and growth: All TempDB data files same initial size and growth so that proportional fill is used.
  • Place on fast storage: TempDB is heavily used; use fast disks or SSD.

Check TempDB file configuration

SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
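If the file count or sizes are off, adding and equalizing files is a change to coordinate with your DBA. A hedged sketch (the path, sizes, and growth values are placeholders for your environment; tempdev is the default logical name of the primary data file):

```sql
-- Add a second, equally sized data file (path and sizes are examples only)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

-- Align the existing file to the same size and growth for proportional fill
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);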

How this fits together: TempDB configuration is usually done by DBAs. As a developer, avoid unnecessary temp tables when a CTE or derived table suffices, and avoid table variables for large data sets when temp tables would get better plans (temp tables have statistics; table variables do not).


Locking and blocking: isolation levels, RCSI

Blocking occurs when one session holds locks that another session needs. Read Committed Snapshot Isolation (RCSI) allows readers to see a consistent snapshot without blocking writers; writers do not block readers.

Enable RCSI at database level

ALTER DATABASE MyApp SET READ_COMMITTED_SNAPSHOT ON;

What this does: Under RCSI, read operations use row versioning instead of shared locks, so readers do not block writers and writers do not block readers. This reduces blocking in read-heavy workloads. RCSI gives each statement a consistent snapshot; the separate SNAPSHOT isolation level gives the whole transaction one consistent snapshot.

Explicit isolation level (use sparingly)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- NOLOCK-style; dirty reads
SELECT * FROM dbo.Orders;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

What this does: READ UNCOMMITTED (equivalent to NOLOCK) avoids shared locks but allows dirty reads. Use only when you explicitly accept dirty reads (e.g. approximate reporting); prefer RCSI for normal read consistency without blocking.

How this fits together: Enable RCSI on databases where blocking is a problem and you can accept row versioning overhead. Avoid NOLOCK as a default; use only when dirty reads are acceptable.
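Before reaching for isolation changes, it is worth seeing who is actually blocking whom right now. A sketch using standard DMVs:

```sql
-- Currently blocked requests: waiter, blocker, wait type, and the blocked statement
SELECT r.session_id, r.blocking_session_id, r.wait_type,
       r.wait_time AS wait_ms, t.text AS blocked_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```

Long chains behind one session usually point at a long transaction or a missing index, not at isolation levels.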


Triggers: performance impact and alternatives

Triggers run on INSERT/UPDATE/DELETE and can slow writes and cause blocking because they execute in the same transaction as the modifying statement.

Trigger example (avoid heavy logic in triggers)

CREATE TRIGGER tr_Orders_AfterInsert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Keep logic minimal; avoid loops and heavy queries
    INSERT INTO dbo.OrdersAudit (OrderId, Action, At)
    SELECT OrderId, 'Insert', GETUTCDATE() FROM inserted;
END;

What this does: The trigger fires once per modifying statement; the inserted pseudo-table contains all affected rows. If the trigger runs complex queries or loops over rows, it can severely slow the write. Prefer set-based logic and minimal work in triggers.

Alternatives to triggers

  • Change Tracking or Change Data Capture (CDC): Capture changes asynchronously for audit or sync.
  • Computed columns or constraints: For derived values, use computed columns instead of trigger-updated columns.
  • Application logic: Move validation or audit to the application layer when possible.
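For audit specifically, temporal tables (SQL Server 2016+) let the engine keep history with no trigger at all. A sketch with hypothetical columns:

```sql
-- System-versioned (temporal) table: the engine writes history rows automatically
CREATE TABLE dbo.OrdersAudited (
    OrderId   INT            NOT NULL PRIMARY KEY,
    Status    NVARCHAR(20)   NOT NULL,
    Total     DECIMAL(18, 2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersAuditedHistory));

-- Point-in-time query, no trigger or audit code needed:
-- SELECT * FROM dbo.OrdersAudited FOR SYSTEM_TIME AS OF '2026-01-01';
```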

How this fits together: Use triggers only when necessary; keep them short and set-based. For audit trails, consider temporal tables (SQL Server 2016+) or CDC instead of triggers.


Functions: scalar vs table-valued, inlining

Scalar functions (return one value) can force row-by-row execution in older SQL Server versions; in SQL Server 2019+, many scalar functions are inlined when they meet the engine's inlining requirements. Table-valued functions (TVFs) can be inline (single statement) or multi-statement; inline TVFs are expanded into the calling query's plan.

Scalar function (may not inline if complex)

CREATE FUNCTION dbo.fn_FormatOrderRef(@OrderId INT) RETURNS NVARCHAR(20)
WITH SCHEMABINDING
AS
BEGIN
    RETURN 'ORD-' + FORMAT(@OrderId, '00000000');
END;
-- Use: SELECT dbo.fn_FormatOrderRef(OrderId) FROM dbo.Orders;

Inline table-valued function (better for performance)

CREATE FUNCTION dbo.fn_OrdersByCustomer(@CustomerId INT)
RETURNS TABLE
AS
RETURN (
    SELECT OrderId, Status, CreatedAt, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
);
-- Use: SELECT * FROM dbo.fn_OrdersByCustomer(@id) o INNER JOIN ...

What this does: Inline TVFs are expanded into the query like a view, so the optimizer can use indexes and join order. Multi-statement TVFs and non-inlined scalar functions can cause row-by-row execution. Prefer inline TVFs or views when you need reusable table expressions; avoid scalar functions in the SELECT list for large sets unless you know they inline.
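On SQL Server 2019+, you can ask the engine whether a scalar UDF qualifies for inlining instead of guessing. A sketch against the example function above:

```sql
-- Is dbo.fn_FormatOrderRef eligible for scalar UDF inlining? (SQL Server 2019+)
SELECT OBJECT_NAME(object_id) AS FunctionName, is_inlineable, inline_type
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.fn_FormatOrderRef');
```

is_inlineable tells you whether the function can be inlined; inline_type whether inlining is currently enabled for it.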

How this fits together: Use inline TVFs instead of multi-statement TVFs when possible; avoid scalar functions in hot paths unless they are inlined (SQL Server 2019+). When in doubt, measure with execution plans.


Index maintenance: REBUILD and REORGANIZE

Fragmentation (from page splits and deletes) can slow scans. REBUILD rebuilds the index from scratch; REORGANIZE defragments leaf pages in place.

Check fragmentation

SELECT OBJECT_NAME(ips.object_id) AS TableName, i.name AS IndexName,
       ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10 AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

Reorganize vs Rebuild

-- Reorganize: online, lower impact; use for moderate fragmentation
ALTER INDEX IX_Orders_CustomerId_Status_Created ON dbo.Orders REORGANIZE;

-- Rebuild: full rebuild; use for high fragmentation; can be ONLINE
ALTER INDEX IX_Orders_CustomerId_Status_Created ON dbo.Orders REBUILD
WITH (ONLINE = ON, FILLFACTOR = 90);

What this does: REORGANIZE defragments in place and is online by default. REBUILD rebuilds the index; use ONLINE = ON for Enterprise Edition to avoid blocking. FILLFACTOR leaves space on pages to reduce future page splits. Run maintenance during low-traffic windows.

How this fits together: Schedule index maintenance (REORGANIZE or REBUILD) based on fragmentation; use Query Store and dm_db_index_physical_stats to decide which indexes to maintain.


Wait statistics and Query Store

Wait statistics show what SQL Server is waiting on (e.g. PAGEIOLATCH, LCK_M_S). Use Query Store (Azure SQL, SQL Server 2016+) to capture plans and runtime stats per query so you can find regressions and force plans.

Enable Query Store

ALTER DATABASE MyApp SET QUERY_STORE = ON
WITH (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 500);
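Once Query Store is on, a common starting point is the slowest queries by average duration. A sketch against the Query Store catalog views (avg_duration is reported in microseconds):

```sql
-- Top 10 queries by average duration across captured runtime-stats intervals
SELECT TOP 10 qt.query_sql_text,
       rs.avg_duration / 1000.0 AS avg_ms,
       rs.count_executions
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```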

Top waits (simplified)

SELECT wait_type, wait_time_ms / 1000.0 AS wait_s, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE '%SLEEP%' AND wait_type NOT IN ('REQUEST_FOR_DEADLOCK_SEARCH', 'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;

How this fits together: Use Query Store to identify slow or regressed queries and to force a known-good plan when the engine chooses a bad one. Use wait stats to see if the bottleneck is I/O, locking, or CPU.


Common issues and best practices

Adding indexes without measuring: Add indexes only for queries that are slow or run often. Measure with execution plans and usage; remove unused indexes.

Table scans on large tables: Add indexes for WHERE, JOIN, ORDER BY; use covering indexes (INCLUDE) when the query only needs a few columns. Keep statistics up to date.

Ad-hoc SQL and plan cache bloat: Parameterize all queries so that plans are reused and the plan cache is not filled with one-off plans.

Ignoring parameter sniffing: Measure with different parameter values; use RECOMPILE or OPTIMIZE FOR only when you have evidence.

Missing or stale statistics: Ensure auto-create and auto-update statistics are ON; run UPDATE STATISTICS for critical tables in maintenance.

Heavy triggers: Keep triggers minimal and set-based; consider CDC or temporal tables for audit.

Scalar functions in SELECT: Prefer inline TVFs or expressions in the query when possible; verify inlining in SQL Server 2019+.

Best practices: Measure first; parameterize; index for actual query shape; maintain statistics and indexes; use RCSI to reduce blocking; use Query Store for plan analysis.


Summary

SQL Server tuning for application developers centres on indexes (clustered, nonclustered, covering, filtered), statistics, execution plans, parameterization, and locking—measure with plans and Query Store before changing anything. Adding indexes blindly or ignoring statistics leads to no gain or worse plans; targeting seeks over scans, covering indexes to remove key lookups, and RCSI where appropriate keeps queries predictable. Next, capture the top slow or blocking queries in Query Store, inspect the plans, then add or adjust one index or statistic and re-measure.

  • Indexes: Clustered (one per table, narrow/stable/increasing); nonclustered for filter/sort; covering (INCLUDE) to avoid key lookup; filtered for subset of rows. Do not over-index.
  • Statistics: Keep auto-create and auto-update ON; run UPDATE STATISTICS for critical tables.
  • Plans: Prefer seek over scan; eliminate key lookup with covering indexes; parameterize for plan reuse.
  • Parameter sniffing: Use RECOMPILE or OPTIMIZE FOR only with evidence.
  • Settings: Tune MAXDOP and cost threshold with DBA; configure TempDB and memory appropriately.
  • Locking: RCSI reduces blocking; avoid NOLOCK as default.
  • Triggers: Keep minimal; prefer CDC or temporal for audit.
  • Functions: Prefer inline TVFs; avoid scalar functions in hot paths unless inlined.
  • Maintenance: REORGANIZE or REBUILD based on fragmentation; use Query Store for plan analysis.

Position & Rationale

I measure first—execution plans, Query Store, or Extended Events—before adding indexes or rewriting queries; I avoid guessing. I prefer covering indexes (INCLUDE) to remove key lookups when the query shape is stable; I avoid over-indexing so writes don’t suffer. I keep statistics auto-create and auto-update on and run manual UPDATE STATISTICS for critical tables when needed. I use RCSI (Read Committed Snapshot Isolation) to reduce blocking; I avoid NOLOCK as a default. I treat parameter sniffing fixes (RECOMPILE, OPTIMIZE FOR) as last resort and only with evidence from plans. I minimise triggers and scalar functions in hot paths; I prefer CDC or temporal for audit and inline TVFs where possible. I reject adding indexes without measuring; I reject changing server settings (MAXDOP, cost threshold) without DBA or evidence.


Trade-Offs & Failure Modes

  • What this sacrifices: More indexes add write cost and maintenance; RCSI increases tempdb and version store. Tuning is iterative and can conflict with other workloads.
  • Where it degrades: Over-indexing slows inserts/updates; wrong index choice (e.g. for one query only) wastes space. Ignoring statistics or plan reuse leads to sudden regressions. Triggers and scalar functions in hot paths cause hidden cost.
  • How it fails when misapplied: Adding indexes without measuring, so you never know whether they helped. Using NOLOCK everywhere and silently accepting dirty reads. Changing MAXDOP or cost threshold globally without testing. Relying on triggers for critical logic when CDC or app-level events are safer.
  • Early warning signs: “Queries were fast last week”; “we added 10 indexes and writes got slow”; “blocking is high”; “plan changed and everything got slow.”

What Most Guides Miss

Many guides show index types and execution plans but skip measurement discipline—you need a baseline (Query Store, Extended Events) before and after so you know what actually improved. Parameter sniffing and plan reuse are underplayed; a “good” plan for one parameter value can be bad for another, and RECOMPILE/OPTIMIZE FOR have trade-offs. RCSI vs default read committed and the impact on tempdb and version store are rarely explained for app devs. Triggers and scalar functions in SELECT are often left in place; the cost of per-row execution is easy to miss until load grows.


Decision Framework

  • If a query or workload is slow → Measure first (plan, Query Store, waits); identify scans, key lookups, and missing indexes; then add or adjust indexes and re-measure.
  • If you have blocking → Enable RCSI if acceptable; avoid NOLOCK as default; look at isolation level and long transactions.
  • If plans are unstable → Check parameter sniffing; use RECOMPILE or OPTIMIZE FOR only with evidence; maintain statistics.
  • If you have triggers or scalar functions in hot paths → Consider CDC, temporal, or app-level events for audit; prefer inline TVFs or expressions; measure impact.
  • If you change server settings → Coordinate with DBA; test MAXDOP, cost threshold, and memory in non-prod first.

You can also explore more patterns in the Database Design & Optimization resource page.

Key Takeaways

  • Measure first; use covering indexes and statistics; prefer RCSI over NOLOCK; minimise triggers and scalar functions in hot paths.
  • Parameter sniffing and plan reuse matter; fix with evidence. Use Query Store and execution plans to validate changes.
  • Revisit indexes and statistics when query shapes or data volume change.

For enterprise applications, I offer consulting on database architecture, performance tuning, and EF Core optimization.

When I Would Use This Again — and When I Wouldn’t

I would use this SQL tuning approach again whenever I own or influence SQL Server-backed apps—measure first, then indexes, statistics, RCSI, and minimal triggers/functions. I’d use it for high-load systems (e.g. POS, batch, APIs with heavy queries) where app devs and DBAs collaborate. I wouldn’t add indexes or change server settings without measuring and re-measuring. I wouldn’t use NOLOCK as a blanket fix for blocking; I’d enable RCSI or fix long transactions. For small, low-throughput apps with simple queries, a light touch (basic indexes, default settings) may be enough; as soon as we have slow queries or blocking, the full discipline pays off. If the team has no access to Query Store or execution plans, I’d get that in place before deep tuning.


Frequently Asked Questions

How do I read an execution plan in SQL Server?

Open the plan in SSMS or Azure Data Studio; look for index seek (good) vs index scan or table scan (often bad on large tables). Check estimated rows vs actual rows; large variance can indicate statistics issues. Focus on the highest cost operators first.

When should I add a covering index (INCLUDE)?

When a query only needs a small set of columns and the engine can satisfy the query from the index without touching the table. Add INCLUDE columns that are in SELECT or WHERE but not in the key; this avoids key lookups.

What is parameter sniffing and how do I handle it?

Parameter sniffing is when the engine compiles a plan for the first parameter value and reuses it for others; sometimes that plan is bad for other values. Measure with different parameters; use OPTION (RECOMPILE) or OPTION (OPTIMIZE FOR) only when you have evidence and understand the trade-off.

How do I measure SQL performance in production?

Use Extended Events or Query Store (Azure SQL, SQL Server 2016+) to capture slow queries and plans. Use Application Insights or your APM with SQL dependency tracking. Correlate slow requests with the SQL they execute.

What are common mistakes when tuning SQL Server?

Adding indexes without measuring; ignoring execution plans; ad-hoc SQL with literals; missing statistics; over-indexing (every column). Measure first; add indexes and tune based on actual query shape and usage.

How do I choose between clustered and nonclustered indexes?

Clustered: one per table; defines physical order. Choose a narrow, stable, increasing key (e.g. Id, CreatedAt). Nonclustered: for WHERE, JOIN, ORDER BY; use covering (INCLUDE) when the query only needs a few columns. Do not over-index.

What is EF Core doing under the hood when I run a query?

EF Core translates your LINQ (or raw SQL) into parameterized SQL and sends it to SQL Server. It uses ADO.NET under the hood; the same rules apply: parameterize, use indexes, avoid N+1 (e.g. Include or explicit loads). Use AsNoTracking() for read-only queries to avoid change-tracking overhead.

When should I use raw SQL instead of EF Core?

Use raw SQL when EF Core cannot express the query efficiently (e.g. complex CTEs, bulk operations, vendor-specific features) or when you need maximum control. Always parameterize; use FromSqlRaw or ExecuteSqlRaw with parameters so that plans are reused and injection is avoided.

What is a key lookup and when is it a problem?

A key lookup is when the engine uses an index to find rows but then has to look up the rest of the columns from the table. For wide tables or many rows, that can be expensive. Use a covering index (INCLUDE the needed columns) so that the query is satisfied from the index alone.

How do I find missing indexes in SQL Server?

Use Dynamic Management Views (e.g. sys.dm_db_missing_index_details) or Query Store (Azure SQL, SQL Server 2016+) to see which queries would benefit from indexes. Do not add every suggestion blindly; measure and add indexes for the most impactful queries first.
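As a concrete starting point, the missing-index DMVs can be queried like this; treat the output as hints to evaluate against real plans, not a to-do list:

```sql
-- Missing-index suggestions ranked by (seeks x estimated impact); verify each before creating
SELECT TOP 10 mid.statement AS TableName,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks, migs.avg_user_impact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```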

What are statistics and why do they matter?

Statistics are metadata the engine uses to estimate row counts and choose plans. Stale or missing statistics lead to bad plans (e.g. scan instead of seek). Ensure auto-create and auto-update statistics are on; for critical tables, consider maintenance jobs to update statistics regularly.

How do I avoid N+1 queries with EF Core?

N+1 is when you load a list and then loop and load related data per item (one query for the list + N for each item). Use Include() (eager load) or explicit load in one round-trip; or project only the columns you need so that EF generates a single efficient query.
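A minimal sketch, assuming an Order entity with a Customer navigation property (the names are illustrative, not from a real model):

```csharp
// N+1 (avoid): one query for the list, then one query per order for Customer
var orders = await context.Orders.ToListAsync();
foreach (var order in orders)
{
    var name = order.Customer.Name; // lazy load: one extra query per iteration
}

// Eager load in a single round-trip
var withCustomers = await context.Orders
    .Include(o => o.Customer)
    .ToListAsync();

// Or project only what you need (often the most efficient option)
var rows = await context.Orders
    .Select(o => new { o.OrderId, CustomerName = o.Customer.Name })
    .ToListAsync();
```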

What is plan cache and plan reuse?

Plan cache stores compiled execution plans so that the same query shape (with different parameter values) can reuse the plan. Parameterized queries reuse plans; ad-hoc SQL with literals can create a new plan per value and bloat the cache. Always parameterize.

How do I tune a slow stored procedure?

Measure: capture the plan and runtime with different parameter values. Check indexes on tables used in the procedure; add covering indexes if key lookups are expensive. Consider parameter sniffing (recompile or optimize for) only with evidence. Simplify logic and avoid unnecessary cursors or loops when set-based operations are possible.

What tools can I use to profile SQL Server?

SSMS / Azure Data Studio: execution plans, Activity Monitor, Query Store. Extended Events: lightweight tracing for slow queries and blocking. Application Insights / APM: correlate app requests with SQL dependencies. SQL Server Profiler (legacy): use Extended Events instead for new work.

When should I use REBUILD vs REORGANIZE?

Use REORGANIZE for moderate fragmentation (e.g. 10–30%); it is online and lower impact. Use REBUILD for high fragmentation (e.g. > 30%); use ONLINE = ON when available to reduce blocking.

What is RCSI and when should I enable it?

Read Committed Snapshot Isolation lets readers see a consistent snapshot without blocking writers. Enable when you have blocking between readers and writers and can accept the overhead of row versioning in TempDB.

Do triggers slow down INSERT/UPDATE/DELETE?

Yes. Triggers run in the same transaction as the write; heavy or row-by-row logic in triggers can significantly slow writes. Keep triggers minimal and set-based; consider CDC or temporal tables for audit.

Why avoid scalar functions in SELECT for large sets?

Scalar functions (in older SQL Server) can force row-by-row execution because the engine may not inline them. In SQL Server 2019+, many deterministic scalar functions are inlined; when not inlined, avoid them in hot paths or use inline TVFs / expressions instead.
