How to Tune SQL Server Performance: Indexes, Query Plans, and Wait Stats

Diagnose and fix SQL Server performance issues. Covers index analysis, execution plan reading, wait statistics, parameter sniffing, and tempdb optimization.

SQL Server performance problems are almost never the engine’s fault. They’re caused by missing indexes, bad query patterns, and misconfigured settings. But finding the root cause requires a systematic approach — guessing wastes hours. This guide gives you the diagnostic toolkit in the order you should use it: wait stats first to identify the category, then targeted investigation based on what the waits tell you.

The 80/20 rule of SQL tuning: missing indexes and parameter sniffing cause 80% of performance issues. Start there before buying bigger hardware.


Step 1: Identify the Problem (Wait Stats First)

Wait statistics tell you what SQL Server is spending time doing. Always start here — they tell you where to look next.

-- Top 10 cumulative wait types
SELECT TOP 10
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    signal_wait_time_ms / 1000.0 AS signal_wait_sec,
    waiting_tasks_count,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER() AS pct
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE',
    'SLEEP_TASK', 'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH',
    'WAITFOR', 'LOGMGR_QUEUE', 'CHECKPOINT_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT',
    'BROKER_TO_FLUSH', 'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT',
    'DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
    'XE_DISPATCHER_WAIT', 'DIRTY_PAGE_POLL', 'HADR_FILESTREAM_IOMGR_IOCOMPLETION'
)
ORDER BY wait_time_ms DESC;

Wait Type Interpretation

| Wait Type | Meaning | Investigation Path | Fix |
|---|---|---|---|
| PAGEIOLATCH_* | I/O (data not in buffer pool) | Check buffer pool hit ratio, identify hot tables | Add RAM, fix indexing, optimize queries |
| CXPACKET / CXCONSUMER | Parallelism waits | Review MAXDOP and cost threshold for parallelism (CTFP) settings | Set MAXDOP = min(8, cores per NUMA node), CTFP = 50 |
| LCK_M_* | Lock contention | Find blocking chains with sp_WhoIsActive | Review isolation levels, shorten transactions |
| SOS_SCHEDULER_YIELD | CPU pressure | Find expensive queries in plan cache | Optimize queries, add CPU |
| WRITELOG | Transaction log I/O | Check log disk performance | Faster log disk, fewer small transactions |
| ASYNC_NETWORK_IO | Client consuming results slowly | Check app-side processing | Fix the application (don’t return unnecessary rows) |
| RESOURCE_SEMAPHORE | Memory grant waits | Find queries requesting too much memory | Fix bad cardinality estimates, update statistics |
| LATCH_* (non-I/O) | Internal contention | tempdb allocation pages (PFS/GAM/SGAM) | Enable trace flag 1118 (pre-2016), add tempdb files |
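When LCK_M_* waits dominate and sp_WhoIsActive is not installed, the blocking chain can be sketched with built-in DMVs alone (the column selection here is illustrative, not exhaustive):

```sql
-- Currently blocked sessions and who is blocking them
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS wait_time_ms,
    t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0
ORDER BY r.wait_time DESC;
```

Follow blocking_session_id up the chain to find the head blocker; killing or tuning that one session usually releases everything behind it.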

Baseline Your Waits

-- Create a snapshot baseline (capture now, compare again after 1 hour)
-- Keep this session open, or SELECT INTO a permanent table instead:
-- #waits_before is dropped when the session closes
SELECT * INTO #waits_before FROM sys.dm_os_wait_stats;

-- Wait 1 hour, then run:
SELECT
    a.wait_type,
    (a.wait_time_ms - b.wait_time_ms) / 1000.0 AS wait_delta_sec,
    a.waiting_tasks_count - b.waiting_tasks_count AS tasks_delta
FROM sys.dm_os_wait_stats a
JOIN #waits_before b ON a.wait_type = b.wait_type
WHERE a.wait_time_ms > b.wait_time_ms
ORDER BY wait_delta_sec DESC;
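If a delta snapshot is impractical, the cumulative counters can also be cleared outright so the view starts fresh. Note this resets the statistics server-wide, so avoid it on instances where others rely on the cumulative numbers:

```sql
-- Reset cumulative wait statistics (server-wide; use with care)
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
```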

Step 2: Find Missing Indexes

-- Most impactful missing indexes (current database only)
SELECT TOP 20
    ROUND(s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans), 0) AS [Impact],
    d.statement AS [Table],
    d.equality_columns AS [Equality],
    d.inequality_columns AS [Inequality],
    d.included_columns AS [Include],
    s.user_seeks,
    s.user_scans,
    s.avg_user_impact AS [Avg Impact %]
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
WHERE d.database_id = DB_ID()
ORDER BY [Impact] DESC;

Index Creation Template

-- Create a covering index based on missing index analysis
-- (ONLINE = ON requires Enterprise Edition; omit it on Standard)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalAmount, Status)
WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

Find Unused Indexes (Candidates for Removal)

-- Indexes with no seeks, scans, or lookups (candidates for dropping)
-- Note: usage stats reset on instance restart, so confirm adequate uptime,
-- and exclude indexes that enforce constraints
SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    i.type_desc,
    ISNULL(s.user_seeks, 0) AS user_seeks,
    ISNULL(s.user_scans, 0) AS user_scans,
    ISNULL(s.user_updates, 0) AS user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON i.object_id = s.object_id AND i.index_id = s.index_id
   AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND ISNULL(s.user_seeks, 0) = 0
  AND ISNULL(s.user_scans, 0) = 0
  AND ISNULL(s.user_lookups, 0) = 0
ORDER BY ISNULL(s.user_updates, 0) DESC;

:::caution[Don’t Over-Index]
Each index speeds up reads but slows down writes. A table with 15+ indexes has a write performance problem. Monitor sys.dm_db_index_usage_stats to find unused indexes and drop them. Every index you drop is a write you speed up.
:::

Index Maintenance Schedule

| Fragmentation Level | Action | Frequency |
|---|---|---|
| < 10% | No action needed | N/A |
| 10-30% | ALTER INDEX ... REORGANIZE | Weekly (online, low impact) |
| > 30% | ALTER INDEX ... REBUILD | Weekly (online with Enterprise, offline with Standard) |
| Statistics stale | UPDATE STATISTICS | Daily for volatile tables |
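To drive that schedule, here is a sketch of a fragmentation check plus the two maintenance commands (the index and table names are placeholders):

```sql
-- Measure fragmentation for indexes in the current database
SELECT
    OBJECT_NAME(ps.object_id) AS TableName,
    i.name AS IndexName,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 10
  AND ps.page_count > 1000  -- small indexes are not worth maintaining
ORDER BY ps.avg_fragmentation_in_percent DESC;

-- 10-30% fragmented: reorganize (always online)
ALTER INDEX IX_Orders_CustomerDate ON dbo.Orders REORGANIZE;

-- > 30% fragmented: rebuild (ONLINE = ON requires Enterprise Edition)
ALTER INDEX IX_Orders_CustomerDate ON dbo.Orders REBUILD WITH (ONLINE = ON);
```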

Step 3: Read Execution Plans

3.1 Capture the Plan

-- Show I/O and timing statistics in the Messages tab
-- (in SSMS, also enable "Include Actual Execution Plan" with Ctrl+M)
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Your problem query here
SELECT c.CustomerName, SUM(o.TotalAmount)
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2026-01-01'
GROUP BY c.CustomerName
ORDER BY SUM(o.TotalAmount) DESC;

3.2 Red Flags in Execution Plans

| Operator | Warning Sign | Action |
|---|---|---|
| Table Scan | Full table is read | Add an index on filter columns |
| Key Lookup | Index found the row but needs more columns | Add INCLUDE columns to the index |
| Nested Loop on large table | O(n²) performance | Consider a hash join or an index |
| Sort with spill to tempdb | Insufficient memory grant | Optimize the query or increase memory |
| Parallelism (unexpected) | MAXDOP not configured | Set MAXDOP appropriately |
| Thick arrows | Large data movement | Filter earlier in the query |
| Yellow triangle ⚠️ | Cardinality estimate wildly wrong | Update statistics, consider query hints |
| Hash Match on large data | Missing join index | Add an index on the join columns |
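These red flags are easiest to hunt when you start from the most expensive cached plans. A common sketch over sys.dm_exec_query_stats, ranking by total CPU:

```sql
-- Top 10 statements by total CPU in the plan cache
SELECT TOP 10
    qs.total_worker_time / 1000.0 AS total_cpu_ms,
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000.0 AS avg_cpu_ms,
    SUBSTRING(t.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(t.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qp.query_plan  -- click in SSMS to open the graphical plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC;
```

Swap total_worker_time for total_logical_reads or total_elapsed_time depending on which wait category Step 1 pointed you at.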

3.3 Using Query Store (SQL 2016+)

-- Enable Query Store (recommended for all production databases)
ALTER DATABASE YourDB SET QUERY_STORE = ON (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES = 60,
    MAX_STORAGE_SIZE_MB = 1024,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE = AUTO
);

-- Find regressed queries (plan changes that made things worse)
SELECT TOP 10
    q.query_id,
    qt.query_sql_text,
    rs.avg_duration / 1000.0 AS avg_duration_ms,
    rs.avg_cpu_time / 1000.0 AS avg_cpu_ms,
    rs.avg_logical_io_reads,
    rs.count_executions,
    p.plan_id
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p ON rs.plan_id = p.plan_id
JOIN sys.query_store_query q ON p.query_id = q.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE rs.last_execution_time > DATEADD(day, -7, GETUTCDATE())
ORDER BY rs.avg_duration DESC;

Step 4: Fix Parameter Sniffing

Parameter sniffing causes a plan optimized for one parameter value to perform terribly for another. A query that runs in 10ms for CustomerID = 1 (10 orders) might take 30 seconds for CustomerID = 5000 (1 million orders) because it uses the wrong plan.

-- Option 1: OPTIMIZE FOR UNKNOWN (most common fix)
SELECT * FROM Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));

-- Option 2: RECOMPILE (for infrequent queries)
SELECT * FROM Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Option 3: Plan guides (surgical fix without code changes)
EXEC sp_create_plan_guide
    @name = N'OrdersByCustomer_Guide',
    @stmt = N'SELECT * FROM Orders WHERE CustomerID = @CustomerID',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@CustomerID int',  -- required when the statement is parameterized
    @hints = N'OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN))';

Parameter Sniffing Detection

| Symptom | Indicator | Solution |
|---|---|---|
| Same query runs fast sometimes, slow others | Execution time variance > 10x | OPTIMIZE FOR UNKNOWN |
| Timeout after deployment | New first execution “sniffed” a bad value | Clear the plan cache entry for that query |
| Stored procedure speed varies by caller | Different parameters create different plans | Use RECOMPILE or local variables |
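With Query Store enabled, the variance symptom in the first row can be measured directly. A sketch that flags queries whose worst recorded run is more than 10x their average (thresholds are illustrative):

```sql
-- Queries with high duration variance (possible parameter sniffing)
SELECT
    q.query_id,
    qt.query_sql_text,
    MAX(rs.max_duration) / 1000.0 AS worst_ms,
    AVG(rs.avg_duration) / 1000.0 AS avg_ms
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p ON rs.plan_id = p.plan_id
JOIN sys.query_store_query q ON p.query_id = q.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
GROUP BY q.query_id, qt.query_sql_text
HAVING MAX(rs.max_duration) > 10 * AVG(rs.avg_duration)
ORDER BY worst_ms DESC;
```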

Step 5: Configure Server Settings

-- Enable advanced options first (required for the settings below)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- MAXDOP: prevent excessive parallelism
-- Rule: min(8, number of cores per NUMA node)
EXEC sp_configure 'max degree of parallelism', 4;

-- Cost Threshold for Parallelism: raise from the default of 5,
-- which is too low and triggers unnecessary parallelism
EXEC sp_configure 'cost threshold for parallelism', 50;

-- Max Server Memory: leave 10-20% for the OS
-- Example: 64 GB server → 52 GB (53248 MB)
EXEC sp_configure 'max server memory (MB)', 53248;

RECONFIGURE;

Configuration Checklist

| Setting | Default | Recommended | Why |
|---|---|---|---|
| MAXDOP | 0 (unlimited) | min(8, cores per NUMA node) | Prevents thread starvation |
| Cost Threshold for Parallelism | 5 | 50 | Reduces unnecessary parallelism |
| Max Server Memory | All available | 80-90% of total RAM | Leaves room for the OS |
| Optimize for Ad Hoc Workloads | 0 (off) | 1 (on) | Reduces plan cache bloat |
| Instant File Initialization | Off | On (grant SE_MANAGE_VOLUME) | Faster data file growth |
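The ad hoc setting can be applied the same way as the others (Instant File Initialization is granted at the OS level via the SE_MANAGE_VOLUME privilege, not through sp_configure):

```sql
-- Cache a small plan stub on first execution; compile a full plan
-- only if the same ad hoc query runs a second time
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```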

Step 6: Optimize TempDB

-- Check for tempdb contention
SELECT
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;

-- Fix: Multiple tempdb data files (1 per CPU core, up to 8)
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdb2,
    FILENAME = 'D:\TempDB\tempdb2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);

TempDB Best Practices

| Practice | Why |
|---|---|
| Equal-sized data files | SQL Server round-robins allocations across equally sized files |
| One file per core (up to 8) | Reduces allocation page contention |
| Pre-size files (avoid frequent autogrowth) | Growth events block tempdb operations |
| Separate fast disk for tempdb | Isolates I/O from user databases |
| Enable trace flag 1118 (before SQL Server 2016) | Reduces mixed-extent allocation contention |
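A quick sketch to verify the current file layout against these practices:

```sql
-- tempdb file count, sizes, and growth settings
SELECT
    name,
    type_desc,
    size * 8 / 1024 AS size_mb,  -- size is reported in 8 KB pages
    CASE WHEN is_percent_growth = 1
         THEN CAST(growth AS varchar(10)) + '%'
         ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
    END AS growth_setting
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```

If the data files differ in size or growth increment, round-robin allocation skews toward the largest file and the extra files stop helping.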

Performance Tuning Checklist

  • Analyze wait statistics (identify top 5 wait types)
  • Baseline waits over 1-hour period for comparison
  • Review and create missing indexes (top 10 by impact)
  • Drop unused indexes (0 seeks/scans in 30 days)
  • Review top 10 most expensive queries (Plan Cache / Query Store)
  • Fix parameter sniffing issues on high-frequency stored procedures
  • Configure MAXDOP, CTFP, and Max Server Memory
  • Enable Query Store for baseline tracking and regression detection
  • Optimize tempdb (multiple equal-sized files, pre-sized)
  • Schedule index maintenance (reorganize < 30%, rebuild > 30%)
  • Update statistics on volatile tables (daily)
  • Enable “Optimize for Ad Hoc Workloads” to reduce plan cache bloat
  • Document baseline performance metrics for future comparison

:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. For a SQL performance audit, visit garnetgrid.com.
:::

Jakub Dimitri Rezayev
Founder & Chief Architect • Garnet Grid Consulting

Jakub holds an M.S. in Customer Intelligence & Analytics and a B.S. in Finance & Computer Science from Pace University. With deep expertise spanning D365 F&O, Azure, Power BI, and AI/ML systems, he architects enterprise solutions that bridge legacy systems and modern technology — and has led multi-million dollar ERP implementations for Fortune 500 supply chains.
