Snapshot Testing
Capture and verify system output against saved snapshots. Covers snapshot testing strategies, serialization, snapshot updates, CI integration, and the patterns that make snapshot tests a reliable safety net for complex output.
Snapshot testing captures the output of a component, function, or API and compares it against a previously saved “snapshot.” If the output changes unexpectedly, the test fails and shows you the diff. Snapshot tests are invaluable for catching unintended changes in complex output — UI components, API responses, configuration files, or generated code.
How Snapshots Work
First run:
1. Execute code under test
2. Capture output (HTML, JSON, text)
3. Save as .snap file (the "snapshot")
4. Test passes (baseline established)
Subsequent runs:
1. Execute code under test
2. Compare output to saved snapshot
3. If match → Test passes ✓
4. If different → Test fails ✗
Developer reviews diff:
- Intentional change → Update snapshot ✓
- Unintentional change → Fix the bug ✗
Implementation
```javascript
// Jest snapshot testing with react-test-renderer
import renderer from 'react-test-renderer';
import UserProfile from './UserProfile'; // assumed component path

// Component snapshot
describe('UserProfile', () => {
  it('renders correctly', () => {
    const component = renderer.create(
      <UserProfile
        name="Alice Smith"
        role="Engineer"
        avatar="/avatars/alice.png"
      />
    );
    expect(component.toJSON()).toMatchSnapshot();
    // First run: creates __snapshots__/UserProfile.test.js.snap
    // Future runs: compares against the saved snapshot
  });
});
```
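The generated `.snap` file is plain JavaScript that exports each serialized snapshot, keyed by describe/test name. A sketch of what it might contain (the markup is hypothetical; it depends on what `UserProfile` actually renders):

```javascript
// __snapshots__/UserProfile.test.js.snap (illustrative contents)
// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`UserProfile renders correctly 1`] = `
<div
  className="user-profile"
>
  Alice Smith
</div>
`;
```

Because this file is checked in, a snapshot diff shows up in code review like any other code change.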
```javascript
// API response snapshot with supertest
import request from 'supertest';
import app from '../app'; // assumed app entry point

describe('GET /api/users/:id', () => {
  it('returns expected structure', async () => {
    const response = await request(app)
      .get('/api/users/123')
      .expect(200);

    // Snapshot captures the full response structure
    expect(response.body).toMatchSnapshot();
  });
});
```
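If the response contains dynamic fields, Jest's property matchers let you assert only their types while snapshotting everything else verbatim. A sketch, assuming the response carries `id` and `createdAt` fields:

```javascript
// Dynamic fields are matched by type; the rest is snapshotted as-is
expect(response.body).toMatchSnapshot({
  id: expect.any(Number),
  createdAt: expect.any(String),
});
```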
```javascript
// Inline snapshot (snapshot stored directly in the test file)
it('formats currency correctly', () => {
  expect(formatCurrency(1234.5)).toMatchInlineSnapshot(`"$1,234.50"`);
  expect(formatCurrency(0)).toMatchInlineSnapshot(`"$0.00"`);
  expect(formatCurrency(-100)).toMatchInlineSnapshot(`"-$100.00"`);
});
```
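Updating and CI: when a change is intentional, refresh Jest snapshots with `jest --updateSnapshot` (or `jest -u`); inline snapshots are rewritten in place in the test file. In CI, run `jest --ci` so a missing snapshot fails the build instead of being silently written as a new baseline.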
Python Snapshot Testing
```python
# syrupy: snapshot testing for Python
# The `snapshot` fixture is injected by the syrupy pytest plugin
from syrupy.assertion import SnapshotAssertion


def test_api_response_structure(snapshot: SnapshotAssertion):
    # `client` is an assumed HTTP test-client fixture
    response = client.get("/api/v1/products")
    assert response.json() == snapshot
    # First run: generates an .ambr snapshot file
    # Future runs: compares against the saved snapshot


def test_data_pipeline_output(snapshot: SnapshotAssertion):
    """Snapshot test for data transformation."""
    # load_fixture / transform_pipeline are assumed project helpers
    input_data = load_fixture("raw_events.json")
    result = transform_pipeline(input_data)
    assert result == snapshot
    # Catches unintended changes to pipeline output


# Update snapshots when changes are intentional:
#   pytest --snapshot-update
```
Best Practices
DO:
✓ Use snapshots for complex, hard-to-assert output
✓ Review snapshot changes in code review (treat like code)
✓ Use inline snapshots for small, simple values
✓ Normalize timestamps, IDs, and random values before snapshotting (see the sketch after these lists)
✓ Name snapshots descriptively
DON'T:
✗ Snapshot everything (use assertions for simple values)
✗ Auto-update snapshots without reviewing the diff
✗ Include dynamic data (timestamps, UUIDs) in snapshots
✗ Commit snapshots of third-party API responses (they change)
✗ Use snapshots as the ONLY form of testing
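As a minimal sketch of normalization, replace volatile fields with stable placeholders before snapshotting (`normalizeRecord` and `submitOrder` are hypothetical, not part of Jest):

```javascript
// Hypothetical helper: strips volatility so snapshots stay deterministic
function normalizeRecord(record) {
  return { ...record, id: '<id>', createdAt: '<timestamp>' };
}

it('stores the submitted order', () => {
  const order = submitOrder({ sku: 'A-1', qty: 2 }); // assumed function under test
  expect(normalizeRecord(order)).toMatchSnapshot();
});
```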
Anti-Patterns
| Anti-Pattern | Consequence | Fix |
|---|---|---|
| Snapshot everything | Brittle tests, meaningless failures | Snapshot only complex output |
| Auto-update without review | Bugs silently become “expected” | Review every snapshot diff carefully |
| Dynamic data in snapshots | Tests fail randomly | Normalize timestamps, IDs before snapshot |
| Massive snapshots | Hard to review, difficult to update | Break into focused, smaller snapshots |
| No assertions + snapshot | Over-rely on snapshot, miss logic bugs | Complement snapshots with behavior assertions |
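For the "massive snapshots" row, one fix is to snapshot the fragment a test is actually about rather than a whole page tree. A sketch with hypothetical components:

```javascript
// Too broad: one giant snapshot that churns on every unrelated change
// expect(renderer.create(<CheckoutPage />).toJSON()).toMatchSnapshot();

// Focused: snapshot only the fragment this test is about
expect(renderer.create(<OrderSummary items={cartItems} />).toJSON()).toMatchSnapshot();
```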
Snapshot tests are a safety net for complex output. They catch unintended changes, but they require discipline: review every update, normalize dynamic data, and complement with targeted assertions.