Before deploying AI, you need to know if your organization can actually support it. 73% of enterprise AI projects fail — not because of model quality, but because of missing infrastructure, bad data, and organizational unreadiness. This assessment gives you an honest, quantified view of where you stand and what to fix first.
The most dangerous situation: leadership says “we need AI” without understanding the prerequisites. The result is a $500K investment in models that never reach production because the data is fragmented, the infrastructure can’t serve predictions, and nobody on the team knows how to operationalize a model.
The 5-Pillar AI Readiness Framework
| Pillar | Weight | What It Measures |
|---|---|---|
| Data Maturity | 30% | Quality, accessibility, governance of your data |
| Infrastructure | 20% | Compute, storage, MLOps tooling |
| Talent & Skills | 20% | Engineering and data science capability |
| Governance | 15% | Ethics, compliance, risk management |
| Culture | 15% | Leadership support, change management |
Quick Self-Test
Before diving into the full assessment, answer these five questions:
| Question | Yes | No |
|---|---|---|
| Can you access clean, documented data for your target use case? | +20 | 0 |
| Do you have GPUs or cloud ML compute available? | +20 | 0 |
| Is there at least one data scientist or ML engineer on staff? | +20 | 0 |
| Does leadership have a budget allocated for AI specifically? | +20 | 0 |
| Have you completed at least one data-driven project successfully? | +20 | 0 |
Score 80-100: Ready for production pilots. Use the full assessment to identify gaps.
Score 40-60: Foundation building. Address the “No” answers before investing in AI.
Score 0-20: Not ready. Focus on data infrastructure and hiring first.
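The quick test is easy to script — a minimal sketch in Python (the answer keys are illustrative, not part of the assessment):

```python
# Quick self-test: each "yes" answer is worth 20 points.
answers = {
    "clean_documented_data": True,
    "ml_compute_available": True,
    "ml_staff_on_team": False,
    "dedicated_ai_budget": True,
    "prior_data_project": False,
}

score = sum(20 for yes in answers.values() if yes)
print(f"Quick readiness score: {score}/100")
```

With the answers above the score is 60, which lands in the 40-60 "foundation building" band.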
Step 1: Score Data Maturity (30%)
1.1 Data Quality Audit
```python
import pandas as pd

def score_data_quality(df: pd.DataFrame) -> dict:
    """Score a dataset's quality on key dimensions (0-100 each)."""
    total_cells = df.size
    null_cells = df.isnull().sum().sum()
    scores = {
        "completeness": round((1 - null_cells / total_cells) * 100, 1),
        "uniqueness": round(df.drop_duplicates().shape[0] / df.shape[0] * 100, 1),
        "consistency": _check_consistency(df),
        "freshness": _check_freshness(df),
    }
    scores["overall"] = round(sum(scores.values()) / len(scores), 1)
    return scores

def _check_consistency(df: pd.DataFrame) -> float:
    """Penalize columns whose values differ only by letter case."""
    issues = 0
    for col in df.select_dtypes(include="object").columns:
        values = df[col].dropna().astype(str)  # guard against nulls and mixed types
        # Mixed-case duplicates ("NY" vs "ny") collapse when lowercased
        if values.str.lower().nunique() < values.nunique():
            issues += 1
    return max(0, 100 - issues * 10)

def _check_freshness(df: pd.DataFrame) -> float:
    """Score how recently the newest timestamp was recorded."""
    date_cols = df.select_dtypes(include="datetime64").columns
    if len(date_cols) == 0:
        return 50  # No timestamp columns: can't evaluate
    latest = df[date_cols].max().max()
    days_old = (pd.Timestamp.now() - latest).days
    if days_old < 1:
        return 100
    if days_old < 7:
        return 85
    if days_old < 30:
        return 65
    return 40
```
1.2 Data Accessibility Checklist
| Question | Score | Why It Matters |
|---|---|---|
| Can analysts query production data without DBA involvement? | /10 | Self-service access = faster experiments |
| Is there a central data catalog (e.g., DataHub, Collibra)? | /10 | Discoverability prevents data hoarding |
| Are datasets documented with schema definitions? | /10 | Undocumented data = wasted weeks |
| Is there a self-service data access request process? | /10 | Bottlenecked access kills projects |
| Can you join data across 3+ source systems? | /10 | AI needs cross-functional data |
1.3 Data Volume Readiness
| Use Case | Minimum Data Required | Your Data |
|---|---|---|
| Classification (binary) | 1,000 labeled examples per class | ___ |
| Regression | 5,000+ labeled rows | ___ |
| NLP (text classification) | 5,000+ labeled documents | ___ |
| Computer vision | 500+ labeled images per class | ___ |
| RAG / document Q&A | 100+ documents (unlabeled OK) | ___ |
| Anomaly detection | 10,000+ normal examples | ___ |
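The thresholds in the table can be encoded as a simple lookup — a sketch, with use-case keys of my own naming:

```python
# Minimum data thresholds from the table above.
MIN_EXAMPLES = {
    "classification_binary": 1_000,   # labeled examples per class
    "regression": 5_000,              # labeled rows
    "nlp_text_classification": 5_000, # labeled documents
    "computer_vision": 500,           # labeled images per class
    "rag_document_qa": 100,           # documents (unlabeled OK)
    "anomaly_detection": 10_000,      # normal examples
}

def volume_ready(use_case: str, available: int) -> bool:
    """Return True if the available data meets the table's minimum."""
    return available >= MIN_EXAMPLES[use_case]

print(volume_ready("regression", 7_200))         # True
print(volume_ready("anomaly_detection", 4_000))  # False
```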
Step 2: Score Infrastructure Readiness (20%)
2.1 Compute Assessment
```bash
# Check GPU availability
nvidia-smi --query-gpu=name,memory.total,driver_version \
  --format=csv,noheader 2>/dev/null || echo "No GPU detected"

# Check available RAM
free -h | head -2

# Check Docker availability
docker --version 2>/dev/null || echo "Docker not installed"

# Check Kubernetes
kubectl cluster-info 2>/dev/null || echo "No Kubernetes cluster"
```
2.2 Infrastructure Scoring
| Capability | Level 1 (Basic) | Level 2 (Ready) | Level 3 (Advanced) |
|---|---|---|---|
| Compute | Shared VMs | Dedicated GPU instances | Auto-scaling GPU clusters |
| Storage | Local/NAS | Cloud object storage | Lakehouse with governance |
| MLOps | Manual scripts | MLflow / Weights & Biases | Full Kubeflow / SageMaker |
| Monitoring | Basic logs | APM + custom metrics | AI-specific observability |
| Networking | Public internet | VPN/Private endpoints | Zero-trust architecture |
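One way to turn the maturity levels into a 0-100 infrastructure score is to average them. The linear level-to-percent mapping below is an assumption, not part of the framework:

```python
# Each capability's maturity level (1-3), per the scoring table.
levels = {
    "compute": 2,
    "storage": 2,
    "mlops": 1,
    "monitoring": 2,
    "networking": 3,
}

# Assumed linear mapping: level 1 -> 33.3, level 2 -> 66.7, level 3 -> 100.
score = round(sum(levels.values()) / (3 * len(levels)) * 100, 1)
print(f"Infrastructure score: {score}")  # 66.7
```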
2.3 Minimum Infrastructure by AI Type
| AI Application | GPU? | Minimum RAM | Storage | Network |
|---|---|---|---|---|
| RAG / chatbot (API-based) | No | 8 GB | 10 GB | Internet (API calls) |
| RAG / chatbot (self-hosted) | Yes (16 GB VRAM) | 32 GB | 100 GB | Internal |
| Fine-tuning LLMs | Yes (40+ GB VRAM) | 64 GB | 500 GB | Internal |
| Traditional ML (tabular) | No | 16 GB | Depends on data | Internal |
| Computer vision | Yes (8+ GB VRAM) | 32 GB | 100+ GB | Internal + edge |
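A minimal sketch of a hardware check against the table's minimums (the application keys and field names are illustrative):

```python
# Minimum specs per AI application, from the table above (all sizes in GB).
MIN_SPECS = {
    "rag_api":         {"gpu_vram": 0,  "ram": 8,  "storage": 10},
    "rag_self_hosted": {"gpu_vram": 16, "ram": 32, "storage": 100},
    "llm_fine_tuning": {"gpu_vram": 40, "ram": 64, "storage": 500},
    "tabular_ml":      {"gpu_vram": 0,  "ram": 16, "storage": 0},
    "computer_vision": {"gpu_vram": 8,  "ram": 32, "storage": 100},
}

def hardware_ready(app: str, gpu_vram: int, ram: int, storage: int) -> bool:
    """True if the given hardware meets the table's minimum for this app."""
    spec = MIN_SPECS[app]
    return (gpu_vram >= spec["gpu_vram"]
            and ram >= spec["ram"]
            and storage >= spec["storage"])

print(hardware_ready("rag_api", gpu_vram=0, ram=16, storage=50))            # True
print(hardware_ready("llm_fine_tuning", gpu_vram=24, ram=64, storage=500))  # False: needs 40 GB VRAM
```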
Step 3: Score Talent & Skills (20%)
Skills Matrix
| Skill Area | Minimum for AI Readiness | Assessment Method |
|---|---|---|
| Data Engineering | 2+ engineers who can build ETL pipelines | Review recent pipeline work |
| ML/Data Science | 1+ scientist who can train & evaluate models | Technical interview |
| MLOps/DevOps | 1+ engineer who can containerize & deploy | Deploy a test model |
| Data Literacy | Managers can interpret model outputs | Run a decision exercise |
| AI Ethics | Someone owns responsible AI policy | Review policy document |
Hiring vs Upskilling Decision
| Gap | Hire | Upskill | Outsource |
|---|---|---|---|
| No data engineers | ✅ Hire (core capability) | ❌ Too specialized | Temporary contractors |
| No ML engineer | ✅ Hire if AI is strategic | ✅ If strong devs exist | ✅ Pilot projects |
| No MLOps | ✅ If scaling | ✅ DevOps → MLOps path | ✅ Managed platforms |
| Low data literacy | ❌ Not a role | ✅ Workshop-based training | ❌ Must be internal |
```python
# Simple skills gap calculator
skills = {
    "data_engineering": {"current": 2, "needed": 3},
    "ml_data_science": {"current": 1, "needed": 2},
    "mlops": {"current": 0, "needed": 1},
    "data_literacy": {"current": 60, "needed": 80},  # % of managers
    "ai_ethics": {"current": 0, "needed": 1},
}

for skill, counts in skills.items():
    gap = counts["needed"] - counts["current"]
    status = "✅ Met" if gap <= 0 else f"⚠️ Gap: {gap}"
    print(f"  {skill}: {status}")
```
Step 4: Score Governance Readiness (15%)
Governance Checklist
Score this pillar against items like the following (illustrative — adapt the list to your regulatory context):
| Item | Points |
|---|---|
| Documented responsible AI / ethics policy with a named owner | +25 |
| Data privacy and compliance process (e.g., GDPR, CCPA) | +25 |
| Model risk review before anything reaches production | +25 |
| Audit trail for data access and model decisions | +25 |
Step 5: Score Culture & Leadership (15%)
Culture Assessment
| Signal | Points |
|---|---|
| C-suite sponsor for AI initiatives | +20 |
| Dedicated AI budget (not borrowed from IT) | +20 |
| Cross-functional AI steering committee | +15 |
| Pilot projects completed (even if small) | +15 |
| Data-driven decision-making culture | +15 |
| Willingness to fail and iterate | +15 |
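Scoring the culture signals is a straightforward sum — a sketch using hypothetical signal names:

```python
# Culture signals and their point values, from the table above.
signals = {
    "c_suite_sponsor": 20,
    "dedicated_budget": 20,
    "steering_committee": 15,
    "pilots_completed": 15,
    "data_driven_culture": 15,
    "fail_and_iterate": 15,
}

# Signals observed in this (hypothetical) organization.
present = {"c_suite_sponsor", "pilots_completed", "data_driven_culture"}
score = sum(points for name, points in signals.items() if name in present)
print(f"Culture score: {score}/100")  # 50
```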
Step 6: Calculate Your Overall Score
```python
def calculate_ai_readiness(scores: dict) -> dict:
    """Combine the five pillar scores (0-100 each) into a weighted total."""
    weights = {
        "data_maturity": 0.30,
        "infrastructure": 0.20,
        "talent_skills": 0.20,
        "governance": 0.15,
        "culture": 0.15,
    }
    weighted_score = sum(scores[pillar] * weights[pillar] for pillar in weights)
    tier = (
        "🟢 AI-Ready" if weighted_score >= 75 else
        "🟡 Foundation Building" if weighted_score >= 50 else
        "🔴 Not Ready — Build Foundations First"
    )
    return {
        "overall_score": round(weighted_score, 1),
        "tier": tier,
        "pillar_scores": scores,
        "recommendation": _get_recommendation(scores),
    }

def _get_recommendation(scores: dict) -> str:
    """Point at the weakest pillar — it drags down the weighted total the most."""
    weakest = min(scores, key=scores.get)
    return f"Priority: Strengthen '{weakest}' (score: {scores[weakest]})"
```
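A worked example of the weighting, using hypothetical pillar scores:

```python
# Hypothetical pillar scores (0-100 each) run through the framework's weights.
scores = {
    "data_maturity": 70,
    "infrastructure": 55,
    "talent_skills": 60,
    "governance": 40,
    "culture": 65,
}
weights = {
    "data_maturity": 0.30,
    "infrastructure": 0.20,
    "talent_skills": 0.20,
    "governance": 0.15,
    "culture": 0.15,
}

weighted = sum(scores[p] * weights[p] for p in weights)
# 21 + 11 + 12 + 6 + 9.75 = 59.75
print(f"{weighted:.2f}")  # 59.75
```

A 59.75 falls in the 50-74 band, so this organization would land in the "Foundation Building" tier, with governance (the lowest pillar) as the priority fix.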
Interpretation Guide
| Score Range | Tier | Action | Timeline |
|---|---|---|---|
| 75-100 | AI-Ready | Proceed with production pilots | Now |
| 50-74 | Foundation Building | Address gaps, run contained experiments | 3-6 months |
| 25-49 | Early Stage | Invest in data + skills before AI | 6-12 months |
| 0-24 | Not Ready | Focus on digital transformation basics | 12-18 months |
Readiness Assessment Checklist
- [ ] Data maturity scored: quality audit, accessibility checklist, volume check (Step 1)
- [ ] Infrastructure scored: compute, storage, MLOps, monitoring, networking (Step 2)
- [ ] Talent & skills scored: skills matrix and gap calculator (Step 3)
- [ ] Governance scored (Step 4)
- [ ] Culture & leadership scored (Step 5)
- [ ] Weighted overall score calculated and tier identified (Step 6)
:::note[Source]
This guide is derived from operational intelligence at Garnet Grid Consulting. Try the free AI Readiness Assessment Tool or get a Premium AI Readiness Report.
:::