API testing validates that your services communicate correctly, handle edge cases gracefully, and perform under load. Unlike UI tests, API tests are fast, deterministic, and catch integration failures before they reach users. A well-designed API testing strategy is among the highest-ROI investments you can make in software quality.
## The API Test Pyramid

```
┌──────────────┐
│ E2E / UI     │  Few — slow, brittle, costly
├──────────────┤
│ Integration  │  Moderate — real HTTP, real DB
├──────────────┤
│ Contract     │  Many — schema validation
├──────────────┤
│ Unit         │  Most — business logic only
└──────────────┘
```

| Layer | Scope | Speed | Maintenance | When to Run |
|---|---|---|---|---|
| Unit | Single function/method | < 1ms | Low | Every commit |
| Contract | Request/response schema | < 100ms | Low | Every PR |
| Integration | Service + database + dependencies | 1–10s | Medium | Every PR |
| E2E | Full user flow across services | 10–60s | High | Pre-deploy |
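The pyramid's base is fast unit tests on pure business logic, with no HTTP or database involved. The discount rule below is a hypothetical example used only to show the shape of such a test:

```python
# Base of the pyramid: a sub-millisecond unit test on business logic.
# The discount tiers and rates are hypothetical, not from any real API.
def apply_discount(total_cents: int, tier: str) -> int:
    rates = {"gold": 0.20, "silver": 0.10}
    return round(total_cents * (1 - rates.get(tier, 0.0)))

assert apply_discount(1000, "gold") == 800
assert apply_discount(1000, "silver") == 900
assert apply_discount(1000, "bronze") == 1000  # unknown tier: no discount
```

Tests like this run on every commit precisely because they need no infrastructure at all.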
## Contract Testing
Contract tests verify that API producers and consumers agree on request/response schemas without requiring both services to be running simultaneously.
| Tool | Language | Approach |
|---|---|---|
| Pact | Multi-language | Consumer-driven contracts |
| Schemathesis | Python | Property-based from OpenAPI spec |
| Dredd | JavaScript | API Blueprint / OpenAPI validation |
| Specmatic | JVM | Contract-as-code from OpenAPI |
### Consumer-Driven Contract Flow

```
Consumer defines expectations
  → Generates contract (Pact file)
  → Provider verifies against contract
  → Both deploy independently with confidence
```
Key principle: The consumer defines what it needs, not what the provider offers. This prevents providers from breaking consumers with “non-breaking” changes that actually break downstream assumptions.
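The principle can be sketched in miniature. Real tools such as Pact generate and verify contract files automatically; the hand-rolled check below only illustrates the idea, and the field names are hypothetical:

```python
# Minimal sketch of a consumer-driven contract check (stdlib only).
# Pact and similar tools do this automatically; field names are hypothetical.
CONSUMER_CONTRACT = {
    "id": int,          # consumer parses this as a number
    "email": str,       # consumer renders this field
    "created_at": str,  # consumer expects an ISO-8601 string
}

def verify_contract(response_body, contract):
    """Return a list of violations; an empty list means compatible."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response_body:
            violations.append(f"missing field: {field}")
        elif not isinstance(response_body[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response_body[field]).__name__}"
            )
    return violations

# Extra provider fields are ignored: the consumer asserts only on what
# it needs, so additive provider changes stay non-breaking.
response = {"id": 7, "email": "a@b.com",
            "created_at": "2024-01-01T00:00:00Z", "new_field": True}
assert verify_contract(response, CONSUMER_CONTRACT) == []
assert verify_contract({"id": "7"}, CONSUMER_CONTRACT) == [
    "id: expected int, got str",
    "missing field: email",
    "missing field: created_at",
]
```

Note that the check only rejects fields the consumer actually depends on, which is what lets both sides deploy independently.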
## Integration Testing Patterns

### Test Containers
Spin up real databases, message queues, and caches in Docker containers for each test run:
```
Test suite starts
  → Docker Compose spins up Postgres + Redis + Kafka
  → Migrations run
  → Tests execute against real infrastructure
  → Containers destroyed
```
| Infrastructure | Container Image | Startup Time |
|---|---|---|
| PostgreSQL | postgres:16-alpine | ~3s |
| Redis | redis:7-alpine | ~1s |
| Kafka | confluentinc/cp-kafka | ~8s |
| MongoDB | mongo:7 | ~4s |
| Elasticsearch | elasticsearch:8 | ~15s |
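A compose file for the flow above might look like the following sketch. Image tags follow the table; the service names, ports, and environment values are assumptions, and Kafka is omitted because a working broker needs additional listener configuration beyond a one-line service entry:

```yaml
# Hypothetical docker-compose.test.yml for ephemeral test infrastructure.
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: test   # throwaway credential for tests only
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```

A typical suite brings this stack up before the first test, runs migrations, and tears it down afterward so no state leaks between runs.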
### Test Data Management
| Strategy | Best For | Complexity |
|---|---|---|
| Factory pattern | Generating test entities programmatically | Low |
| Fixtures | Predefined static datasets | Low |
| Snapshots | Production-like data subsets | Medium |
| Synthetic generation | GDPR-compliant test data | High |
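The factory row can be sketched in a few lines; the user fields below are illustrative, not from any real schema:

```python
import itertools

_ids = itertools.count(1)  # module-level counter: unique ids per run

def make_user(**overrides):
    """Factory: each test overrides only the fields it cares about,
    so tests don't break when unrelated defaults change."""
    uid = next(_ids)
    user = {"id": uid, "email": f"user{uid}@example.com", "active": True}
    user.update(overrides)
    return user

a = make_user()
b = make_user(active=False)
assert a["id"] != b["id"]    # unique ids without manual bookkeeping
assert b["active"] is False  # override applied
```

Compared with fixtures, factories keep each test's relevant data visible at the call site while the boilerplate stays in one place.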
## Load Testing
| Tool | Protocol | Scripting | Best For |
|---|---|---|---|
| k6 | HTTP, WebSocket, gRPC | JavaScript | Developer-friendly load tests |
| Locust | HTTP | Python | Complex user behavior simulation |
| Gatling | HTTP | Scala/Java | JVM ecosystem CI integration |
| Artillery | HTTP, WebSocket | YAML/JS | Quick cloud-native load tests |
| JMeter | HTTP, JDBC, FTP | GUI/XML | Legacy enterprise load testing |
### Key Metrics to Capture
| Metric | Target | Alert Threshold |
|---|---|---|
| p50 latency | < 100ms | > 200ms |
| p99 latency | < 500ms | > 1s |
| Error rate | < 0.1% | > 1% |
| Throughput | Baseline +20% headroom | < baseline |
| CPU saturation | < 70% | > 85% |
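The percentile targets above can be checked against raw samples with the standard library alone; this sketch assumes latencies are collected in milliseconds:

```python
import statistics

def latency_percentiles(samples_ms):
    """p50/p99 from raw latency samples. quantiles(n=100) returns the
    99 cut points; index 49 is the 50th percentile, index 98 the 99th."""
    qs = statistics.quantiles(samples_ms, n=100)
    return {"p50": qs[49], "p99": qs[98]}

# With samples 1..100 ms the median interpolates to 50.5
stats = latency_percentiles(list(range(1, 101)))
assert abs(stats["p50"] - 50.5) < 1e-6
assert 99 <= stats["p99"] <= 100
```

Averages hide tail latency, which is why the table tracks p50 and p99 separately rather than a mean.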
## Security Testing
| Test Type | What It Catches | Tool |
|---|---|---|
| OWASP ZAP scan | Injection, XSS, misconfigurations | ZAP |
| Authentication fuzzing | Broken auth, token issues | Burp Suite |
| Rate limit verification | DDoS, brute force exposure | Custom scripts |
| Schema fuzzing | Unexpected input handling | Schemathesis |
| Dependency scanning | Known CVEs in libraries | Snyk, Trivy |
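The schema-fuzzing row can be illustrated with a tiny hand-rolled generator of edge-case inputs. Schemathesis derives such cases automatically from an OpenAPI spec; the values below are merely common troublemakers, chosen for illustration:

```python
def fuzz_values(field_type):
    """Edge-case values worth sending for a given JSON field type.
    A hand-rolled sketch; a real fuzzer generates these from the schema."""
    common = [None, "", "a" * 10_000, "💥", "' OR 1=1 --"]
    by_type = {
        "integer": [0, -1, 2**63, "not-a-number"],
        "string": ["\x00", " leading space", "<script>alert(1)</script>"],
    }
    return common + by_type.get(field_type, [])

# Every field gets the universal cases plus type-specific ones
assert len(fuzz_values("integer")) == 9
assert 2**63 in fuzz_values("integer")  # overflow probe
```

The API should respond to every one of these with a clean 400, never a 500 or a stack trace.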
## CI/CD Integration
```
Pull Request opened
  → Unit tests (< 2 min)
  → Contract tests (< 3 min)
  → Integration tests (< 10 min)
  → Security scan (parallel)

Merge to main
  → Full integration suite
  → Load test (baseline comparison)
  → Deploy to staging

Pre-production
  → E2E tests against staging
  → Smoke tests post-deploy
```
## Anti-Patterns
| Anti-Pattern | Problem | Fix |
|---|---|---|
| Testing only happy paths | Misses error handling gaps | Test 400s, 401s, 403s, 404s, 429s, 500s explicitly |
| Hardcoded test data | Tests break when data changes | Use factories and builders |
| Testing implementation details | Tests break on refactors | Test behavior, not internal structure |
| No test isolation | Tests depend on execution order | Each test sets up and tears down its own state |
| Ignoring flaky tests | Team stops trusting the suite | Quarantine, fix, or delete — never ignore |
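The fix for the first anti-pattern can be sketched as a table-driven test over error statuses. `fake_get` is a hypothetical stand-in for a real HTTP client, and the routes are invented for illustration:

```python
# Table-driven sketch of testing error paths explicitly, not just 200s.
# `fake_get` simulates a server; swap in a real client in practice.
def fake_get(path, token=None):
    if token != "valid":
        return 401  # authentication checked before anything else
    if path.startswith("/admin"):
        return 403  # authenticated but not authorized
    if path == "/users/999":
        return 404  # well-formed request, missing resource
    return 200

CASES = [
    ("/users/1", "valid", 200),
    ("/users/1", None, 401),
    ("/admin/settings", "valid", 403),
    ("/users/999", "valid", 404),
]
for path, token, expected in CASES:
    assert fake_get(path, token) == expected, (path, token)
```

Adding a new error path then costs one row in the table rather than a new test function.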
## Checklist

- [ ] Unit and contract tests run on every commit and PR
- [ ] Contract tests cover each consumer–provider pair
- [ ] Integration tests run against real infrastructure, not mocks
- [ ] Error paths (400, 401, 403, 404, 429, 500) are tested explicitly
- [ ] Test data comes from factories or builders, not hardcoded records
- [ ] Load tests compare results against a recorded baseline
- [ ] Security and dependency scans run on every PR
- [ ] Flaky tests are quarantined, fixed, or deleted promptly
:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. For API testing strategy consulting, visit garnetgrid.com.
:::