
How to Avoid Microservices Anti-Patterns: Architecture Decision Guide

Identify and fix the seven most common microservices mistakes. Covers distributed monoliths, service granularity, data ownership, and when NOT to use microservices.

Microservices are not inherently good architecture. They’re a trade-off — you gain deployment independence at the cost of operational complexity. The anti-patterns below are the most common reasons microservices projects fail, and they all share a root cause: teams adopt the architecture without understanding the constraints that make it work.


Anti-Pattern 1: The Distributed Monolith

Symptom: All services must be deployed together. Changing one service breaks others. Teams can’t ship independently.

Root Cause: Services share databases, deploy in lockstep, or communicate via synchronous chains.

❌ Distributed Monolith
┌─────┐    sync    ┌─────┐    sync    ┌─────┐
│ Svc │──────────▶│ Svc │──────────▶│ Svc │
│  A  │           │  B  │           │  C  │
└──┬──┘           └──┬──┘           └──┬──┘
   │                 │                 │
   └────────┬────────┘                 │
            ▼                          ▼
     ┌────────────┐            ┌────────────┐
     │ Shared DB  │            │ Shared DB  │
     └────────────┘            └────────────┘

Fix: Database-per-service, asynchronous communication, API contracts.

✅ Proper Microservices
┌─────┐   event   ┌─────┐   event   ┌─────┐
│ Svc │──────────▶│ Svc │──────────▶│ Svc │
│  A  │  (async)  │  B  │  (async)  │  C  │
└──┬──┘           └──┬──┘           └──┬──┘
   ▼                 ▼                 ▼
┌──────┐         ┌──────┐         ┌──────┐
│ DB A │         │ DB B │         │ DB C │
└──────┘         └──────┘         └──────┘

How to Diagnose

| Symptom | Distributed Monolith | True Microservices |
| --- | --- | --- |
| Deploy frequency | All services together | Each service independently |
| Shared database | Multiple services write to same tables | Each service owns its data |
| Breaking changes | Service B change breaks Service A | Contract tests catch breaks before deploy |
| Team coupling | Teams coordinate releases | Teams ship independently |
| Failure blast radius | One service down = everything down | One service down = graceful degradation |
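One way to spot lockstep deployment in your own release history is to compare deploy timestamps across services. A minimal sketch, where the function name, the sample data, and the 30-minute coupling window are all illustrative assumptions:

```python
from datetime import datetime, timedelta

def lockstep_ratio(deploys_a, deploys_b, window=timedelta(minutes=30)):
    """Fraction of Service A's deploys that landed within `window` of a
    Service B deploy. Over many releases, a ratio near 1.0 is a strong
    distributed-monolith signal: the services always ship together."""
    if not deploys_a:
        return 0.0
    coupled = sum(
        1 for a in deploys_a
        if any(abs(a - b) <= window for b in deploys_b)
    )
    return coupled / len(deploys_a)

# Four weekly releases where A and B deployed 10 minutes apart every time
a = [datetime(2024, 1, day, 10, 0) for day in (5, 12, 19, 26)]
b = [datetime(2024, 1, day, 10, 10) for day in (5, 12, 19, 26)]
print(lockstep_ratio(a, b))  # 1.0
```

Run this against your CI/CD audit log; independently deployable services should score well below 1.0.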

Anti-Pattern 2: Wrong Service Granularity

Too Fine-Grained: Every CRUD operation is a service. You have 200 services for a 10-developer team. Deployment overhead exceeds development capacity.

Too Coarse-Grained: “We split our monolith into 3 services.” The services are still 500K lines of code each. Nothing changed except you added a network call.

The Right Granularity

| Team Size | Service Count | Ratio |
| --- | --- | --- |
| 5-10 devs | 3-8 services | 1-2 devs per service |
| 10-25 devs | 8-20 services | ~1 service per dev |
| 25-50 devs | 15-40 services | Team-aligned services |
| 50+ devs | Domain-count services | Bounded context per team |

Granularity Decision Questions

| Question | If Yes → | If No → |
| --- | --- | --- |
| Does this component need independent scaling? | Separate service | Keep together |
| Does a different team own this? | Separate service | Keep together |
| Does this need a different tech stack? | Separate service | Keep together |
| Does this change at a different rate? | Consider separating | Keep together |
| Does this have different SLA requirements? | Separate service | Keep together |
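The decision questions above can be encoded as a quick helper for design reviews. A sketch under stated assumptions: the function name, parameter names, and return labels are hypothetical, not an established API:

```python
def should_separate(independent_scaling=False, different_team=False,
                    different_stack=False, different_change_rate=False,
                    different_sla=False):
    """Encode the granularity decision table: any hard 'yes' on scaling,
    ownership, stack, or SLA argues for a separate service; a different
    rate of change on its own is only a hint worth discussing."""
    if independent_scaling or different_team or different_stack or different_sla:
        return "separate service"
    if different_change_rate:
        return "consider separating"
    return "keep together"

print(should_separate(different_team=True))         # separate service
print(should_separate(different_change_rate=True))  # consider separating
print(should_separate())                            # keep together
```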

:::tip[Two-Pizza Rule Applied]
Each service should be owned by a team that can be fed by two pizzas (5-8 people). If a service requires more than that to maintain, it’s too big. If a developer maintains more than 2 services, they’re too small.
:::


Anti-Pattern 3: Synchronous Everything

Symptom: Request chains that create 10 network calls before returning a response. Latency is additive across every hop.

# ❌ Synchronous chain — latency = sum of all calls
async def process_order(order):
    customer = await customer_svc.get(order.customer_id)     # 50ms
    inventory = await inventory_svc.check(order.items)       # 80ms
    pricing = await pricing_svc.calculate(order)             # 40ms
    payment = await payment_svc.charge(order, pricing)       # 200ms
    shipping = await shipping_svc.create(order, customer)    # 100ms
    notification = await email_svc.send(customer, order)     # 150ms
    # Total: 620ms minimum

Fix: Use events for non-blocking operations.

# ✅ Event-driven — only synchronous for what you need NOW
async def process_order(order):
    customer = await customer_svc.get(order.customer_id)     # 50ms
    inventory = await inventory_svc.check(order.items)       # 80ms
    pricing = await pricing_svc.calculate(order)             # 40ms
    payment = await payment_svc.charge(order, pricing)       # 200ms
    # Total: 370ms

    # Everything else happens asynchronously via events
    await event_bus.publish("order.created", {
        "order_id": order.id,
        "customer": customer,
        "items": order.items
    })
    # Shipping, notifications, analytics — all async consumers
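The async consumers mentioned above can be sketched with a minimal in-process event bus. This is a stand-in for illustration only; in production this role is played by a broker such as Kafka, RabbitMQ, or SNS/SQS, and the `EventBus` class, handler names, and payload shape here are assumptions:

```python
import asyncio
from collections import defaultdict

class EventBus:
    """Toy in-process pub/sub bus illustrating fire-and-forget fan-out."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    async def publish(self, topic, payload):
        # Fan out to all subscribers; the publisher never calls
        # shipping or email services directly
        await asyncio.gather(*(h(payload) for h in self._subscribers[topic]))

handled = []

async def shipping_handler(event):
    handled.append(("shipping", event["order_id"]))

async def email_handler(event):
    handled.append(("email", event["order_id"]))

async def main():
    bus = EventBus()
    bus.subscribe("order.created", shipping_handler)
    bus.subscribe("order.created", email_handler)
    await bus.publish("order.created", {"order_id": "ord-42"})

asyncio.run(main())
print(handled)
```

The design point: adding an analytics consumer later means one `subscribe` call, with no change to the order-processing code path.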

Sync vs Async Decision

| Communication Type | Use When | Avoid When |
| --- | --- | --- |
| Synchronous (HTTP/gRPC) | Response is needed immediately (e.g., payment confirmation) | Response can be deferred |
| Asynchronous (events) | Fire-and-forget, eventual consistency OK | Strong consistency required (rare) |
| Request-reply (async) | Long-running operations with callback | Sub-second response required |

Anti-Pattern 4: No API Contract Management

Symptom: Service B deploys a breaking change. Service A discovers it at 3 AM when production breaks.

Fix: Consumer-Driven Contract Testing.

# Pact — consumer-driven contract test
# Consumer side (Service A)
from pact import Consumer, Like, Provider

pact = Consumer('ServiceA').has_pact_with(Provider('ServiceB'))

pact.given('a customer exists') \
    .upon_receiving('a request for customer by ID') \
    .with_request('GET', '/api/customers/123') \
    .will_respond_with(200, body={
        'id': '123',
        'name': Like('John Doe'),
        'email': Like('john@example.com')
    })

API Versioning Strategy

| Strategy | How It Works | When to Use |
| --- | --- | --- |
| URL versioning (`/v2/customers`) | Version in the path | External APIs, clear boundaries |
| Header versioning (`Accept: v2`) | Version in headers | Internal APIs, less URL clutter |
| No versioning (additive only) | Never remove fields, only add | Simple services, fast iteration |

Anti-Pattern 5: Shared Data Ownership

Symptom: Multiple services write to the same database table. Impossible to know who owns the data.

Fix: Single-writer principle — one service owns each data entity.

| Entity | Owner Service | Read Access | Write Access |
| --- | --- | --- | --- |
| Customers | Customer Service | All (via API) | Customer Service only |
| Orders | Order Service | Customer, Shipping | Order Service only |
| Products | Catalog Service | Order, Search | Catalog Service only |
| Payments | Payment Service | Order (via events) | Payment Service only |

Data Sharing Patterns

| Pattern | How It Works | Trade-off |
| --- | --- | --- |
| API query (real-time) | Service A calls Service B’s API | Coupling, latency |
| Event replication | Service B publishes events, A maintains local copy | Eventual consistency |
| Shared read-only view | Database view exposed to consumers | Read-only, schema coupling |
| CQRS | Separate read/write models | Complexity, eventual consistency |
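The event-replication pattern deserves a concrete sketch, since it is the usual fix for shared tables: the consuming service builds a local, read-only projection from the owner's events instead of touching the owner's database. Class and event names below are illustrative assumptions:

```python
class CustomerReadModel:
    """Local projection of customer data inside the Order Service,
    built entirely from Customer Service events. The Order Service
    never reads or writes the customers table directly."""
    def __init__(self):
        self._customers = {}

    def apply(self, event):
        kind, data = event["type"], event["data"]
        if kind in ("customer.created", "customer.updated"):
            # Last-write-wins upsert; real systems also track event order
            self._customers[data["id"]] = data
        elif kind == "customer.deleted":
            self._customers.pop(data["id"], None)

    def get(self, customer_id):
        return self._customers.get(customer_id)

model = CustomerReadModel()
model.apply({"type": "customer.created",
             "data": {"id": "123", "name": "John Doe"}})
model.apply({"type": "customer.updated",
             "data": {"id": "123", "name": "John D. Doe"}})
print(model.get("123")["name"])  # John D. Doe
```

The trade-off from the table applies directly: the projection is eventually consistent, lagging the owner by however long event delivery takes.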

Anti-Pattern 6: Missing Observability

You cannot operate what you cannot observe. Every microservice needs:

# OpenTelemetry Collector configuration
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: "otel-collector:4317"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      exporters: [otlp]

Three Pillars:

  1. Traces — Follow a request across all services (Jaeger, Zipkin)
  2. Metrics — Latency, error rate, throughput per service (Prometheus)
  3. Logs — Structured, correlated by trace ID (ELK, Loki)
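The third pillar is simple to wire up with the standard library. A minimal sketch of structured, trace-correlated logging; the formatter class, field names, and service name are illustrative assumptions, not a fixed schema:

```python
import json
import logging
import uuid

class JsonTraceFormatter(logging.Formatter):
    """Emit one JSON object per log line with the trace ID attached,
    so logs in Loki/ELK can be joined against distributed traces."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),
        })

logger = logging.getLogger("order-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonTraceFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# In a real service the trace ID comes from the incoming request
# context (e.g., OpenTelemetry), not a fresh UUID
trace_id = uuid.uuid4().hex
logger.info("order accepted", extra={"trace_id": trace_id})
```

With every line carrying a `trace_id`, a single query pivots from a failed trace to every log line it produced across all services.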

Observability Checklist per Service

| Requirement | Tool | Non-Negotiable? |
| --- | --- | --- |
| Distributed tracing | OpenTelemetry + Jaeger | Yes |
| Request/error/duration metrics | Prometheus + Grafana | Yes |
| Structured logging with trace IDs | Loki / ELK | Yes |
| Health check endpoint (`/health`) | Built-in | Yes |
| Dependency health (`/ready`) | Built-in | Yes |
| Alerting on SLO breach | PagerDuty / OpsGenie | Yes |
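The health and readiness endpoints from the checklist are framework-agnostic. A sketch of the two handlers, independent of any web framework; the function names, dependency-check shape, and response bodies are illustrative assumptions:

```python
def health():
    """Liveness: the process is up and able to answer requests.
    Orchestrators (e.g., Kubernetes) restart the pod if this fails."""
    return 200, {"status": "ok"}

def ready(dependencies):
    """Readiness: the service's hard dependencies are reachable.
    `dependencies` maps a name to a zero-arg check returning True/False.
    A 503 tells the load balancer to stop routing traffic here without
    restarting the process."""
    results = {name: check() for name, check in dependencies.items()}
    status = 200 if all(results.values()) else 503
    body = {"status": "ready" if status == 200 else "degraded",
            "checks": results}
    return status, body

status, body = ready({"db": lambda: True, "event_bus": lambda: False})
print(status, body["status"])  # 503 degraded
```

Keep the two distinct: a service can be alive but not ready (e.g., its database is down), and conflating them causes restart loops.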

Anti-Pattern 7: Premature Microservices

Symptom: Starting a new project with 15 microservices before product-market fit.

When to Use Microservices

| Signal | Monolith | Microservices |
| --- | --- | --- |
| Team size < 10 | ✅ | |
| Product is evolving rapidly | ✅ | |
| You need independent scaling | | Consider |
| Teams can’t deploy independently | | Consider |
| Different services need different tech stacks | | Consider |
| You have DevOps/platform team capacity | | Required |

The Migration Path

Phase 1: Well-structured monolith with clear module boundaries
         ↓ (when specific modules need independent scaling/deployment)
Phase 2: Extract highest-value bounded context as first service
         ↓ (validate the approach works)
Phase 3: Extract remaining contexts as justified by need
         ↓ (never extract just because "microservices")
Phase 4: Steady state — monolith core + extracted services
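Phases 2 and 3 are typically done with strangler-fig routing: a shim in front of the monolith sends requests for extracted bounded contexts to the new service and everything else to the monolith, with a percentage ramp for safety. A sketch where the function, prefixes, and target labels are hypothetical:

```python
import hashlib

def route(request_path, extracted_prefixes, percentage=100):
    """Strangler-fig router: paths under an extracted bounded context
    go to the new service; everything else stays on the monolith.
    `percentage` ramps traffic gradually by hashing the path into a
    stable 0-99 bucket, so the same path always routes the same way."""
    for prefix in extracted_prefixes:
        if request_path.startswith(prefix):
            bucket = int(hashlib.sha256(request_path.encode()).hexdigest(), 16) % 100
            if bucket < percentage:
                return "extracted-service"
    return "monolith"

print(route("/api/billing/invoices", ["/api/billing"]))  # extracted-service
print(route("/api/reports/weekly", ["/api/billing"]))    # monolith
```

In practice this lives in the API gateway or ingress rather than application code, but the logic is the same.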

:::caution[Start Monolith, Extract Later]
The safest path: build a well-structured monolith with clear module boundaries. When specific modules need independent scaling or deployment, extract them into services. This is cheaper and faster than starting distributed.
:::


Architecture Decision Checklist

  • Can each service be deployed independently? (if not, it’s a distributed monolith)
  • Does each service own its own database? (single-writer principle)
  • Is inter-service communication primarily asynchronous? (avoid sync chains)
  • Are API contracts tested and versioned? (consumer-driven contracts)
  • Is there a single-writer for each data entity?
  • Can you trace a request across all services? (distributed tracing)
  • Is team size sufficient for the number of services?
  • Does every service have health check + readiness endpoints?
  • Is observability in place before adding more services?
  • Have you validated the need for microservices? (monolith-first default)

:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. For architecture audits, visit garnetgrid.com.
:::

Jakub Dimitri Rezayev
Founder & Chief Architect • Garnet Grid Consulting

Jakub holds an M.S. in Customer Intelligence & Analytics and a B.S. in Finance & Computer Science from Pace University. With deep expertise spanning D365 F&O, Azure, Power BI, and AI/ML systems, he architects enterprise solutions that bridge legacy systems and modern technology — and has led multi-million dollar ERP implementations for Fortune 500 supply chains.
