Kubernetes won the orchestration war, but that doesn’t mean every application needs it. Running a three-service application on Kubernetes is like using a semi-truck to deliver a pizza. This guide helps you choose the right orchestration level for your actual needs.
## Orchestration Spectrum
| Scale | Solution | Complexity | Best For |
|---|---|---|---|
| 1–3 services | Docker Compose + systemd | Minimal | Personal projects, small teams |
| 3–10 services | ECS / Cloud Run / Fly.io | Low | Startups, small-to-medium apps |
| 10–50 services | ECS / Nomad | Medium | Medium companies, mixed workloads |
| 50+ services | Kubernetes | High | Large organizations, platform teams |
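For the smallest tier, Docker Compose supervised by systemd covers most needs. A minimal sketch, assuming a hypothetical stack of a web service, a worker, and Redis (image names, paths, and the `myapp` name are illustrative):

```yaml
# docker-compose.yml — hypothetical three-service stack
services:
  web:
    image: myapp/web:1.0        # illustrative image name
    ports:
      - "8080:8080"
    restart: unless-stopped
  worker:
    image: myapp/worker:1.0
    depends_on:
      - redis
    restart: unless-stopped
  redis:
    image: redis:7-alpine
```

A systemd unit then brings the stack up at boot:

```ini
# /etc/systemd/system/myapp.service — starts the Compose stack at boot
[Unit]
Description=myapp Docker Compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp`. That is the entire orchestration layer: restarts via Docker's restart policy, boot ordering via systemd.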
## Decision Framework
```text
Do you have > 50 microservices?
├── Yes → Kubernetes (you need the ecosystem)
└── No → Do you have a dedicated platform team?
    ├── Yes → Kubernetes or Nomad
    └── No → Do you need multi-cloud?
        ├── Yes → Kubernetes or Nomad
        └── No → Managed service (ECS, Cloud Run, Fly.io)
```
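The decision tree above can be encoded as a small function. A sketch only: the function name and parameter names are ours, and the thresholds and return strings mirror the tree as written.

```python
def choose_orchestrator(num_services: int,
                        has_platform_team: bool,
                        needs_multi_cloud: bool) -> str:
    """Walk the decision tree: scale first, then team, then portability."""
    if num_services > 50:
        return "Kubernetes"  # at this scale you need the ecosystem
    if has_platform_team or needs_multi_cloud:
        return "Kubernetes or Nomad"
    return "Managed service (ECS, Cloud Run, Fly.io)"

# A 3-service app with no platform team lands on a managed service
print(choose_orchestrator(3, False, False))
```

Encoding the tree as code also makes the bias explicit: every branch except raw scale or hard portability requirements leads away from self-managed Kubernetes.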
| Feature | Kubernetes | ECS | Nomad | Cloud Run |
|---|---|---|---|---|
| Learning curve | Steep | Moderate | Moderate | Low |
| Ops overhead | High (managed K8s helps) | Low (managed) | Medium | Near zero |
| Auto-scaling | HPA, VPA, KEDA | Built-in | Built-in | Built-in |
| Service mesh | Istio, Linkerd | App Mesh | Consul Connect | Built-in |
| Multi-cloud | Yes | AWS only | Yes | GCP-focused |
| Cost | Higher (control plane + nodes) | Lower | Variable | Pay-per-request |
| Ecosystem | Massive (Helm, operators) | Moderate | Growing | Limited |
## When NOT to Use Kubernetes
| Situation | Better Alternative | Why |
|---|---|---|
| Solo developer or small team | Docker Compose, Cloud Run | K8s operational overhead outweighs the benefit |
| Serverless workloads | Lambda, Cloud Functions | No infrastructure to manage |
| Monolithic application | Single EC2/VM, ECS | Orchestration overhead for one service |
| GPU/ML workloads only | SageMaker, Vertex AI | Managed ML platforms handle scaling |
| Static websites | S3 + CloudFront, Vercel | A CDN problem, not a container problem |
## Anti-Patterns
| Anti-Pattern | Problem | Fix |
|---|---|---|
| K8s for everything | Massive ops overhead for simple apps | Match orchestration to complexity |
| No resource limits | Noisy neighbors, resource exhaustion | Set CPU/memory requests and limits |
| One cluster per service | Cluster sprawl, management nightmare | Namespaces for isolation, fewer clusters |
| Manual scaling | Slow response to traffic changes | Auto-scaling (HPA, KEDA) |
| Ignoring managed options | Reinventing what cloud providers offer | Use EKS/GKE/AKS, not self-managed K8s |
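Two of these fixes are only a few lines of YAML. A sketch of resource requests/limits and a CPU-based HorizontalPodAutoscaler for a hypothetical `web` Deployment (the name, image, and all numbers are illustrative; tune them to your workload):

```yaml
# Container spec fragment: requests/limits curb noisy neighbors
    spec:
      containers:
        - name: web
          image: myapp/web:1.0
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
# HorizontalPodAutoscaler: scale on CPU utilization instead of manually
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that utilization-based HPA scaling is computed against the container's *requests*, so the two fixes depend on each other: without requests set, the autoscaler has no baseline.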
## Checklist

Before reaching for Kubernetes, confirm:

- [ ] You run, or will soon run, more than ~10 services
- [ ] You have a dedicated platform team, or the budget for managed K8s (EKS/GKE/AKS)
- [ ] A managed service (ECS, Cloud Run, Fly.io) can't meet your scale or multi-cloud needs
- [ ] You set CPU/memory requests and limits from day one
- [ ] You plan auto-scaling (HPA, KEDA) rather than manual scaling
- [ ] You use namespaces for isolation instead of one cluster per service
:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. For container orchestration consulting, visit garnetgrid.com.
:::
Jakub Dimitri Rezayev
Founder & Chief Architect • Garnet Grid Consulting
Jakub holds an M.S. in Customer Intelligence & Analytics and a B.S. in Finance & Computer Science from Pace University. With deep expertise spanning D365 F&O, Azure, Power BI, and AI/ML systems, he architects enterprise solutions that bridge legacy systems and modern technology — and has led multi-million dollar ERP implementations for Fortune 500 supply chains.