Your AI Project Is Probably Doomed (Here's Why)
Most AI projects struggle to demonstrate real business value and ROI.
It feels like pretty much every team I work with is building something with AI. RAG chatbots, on-call assistants, AI agents. But many of these projects are doomed to fail. So many of the AI solutions being built right now are what I call "solutions looking for problems".
I created a document that I use to plan AI projects, which I call an A2D2 doc (AI Agent Design Document). Spending some time upfront getting clear on what your AI project will do can save weeks or months of wasted effort.
Here's what you should know.
Step 1: Start With A Real Problem
The biggest lesson I've learned over the years is the power of a clearly defined goal. Before building an AI agent or AI automation, you need to get clear on the business outcome.
Business leaders are already getting tired of teams building “AI proof of concepts” that deliver no real business value. If you’re able to articulate ROI as a number (“reduce MTTR by 75%”), you’re ahead of 99% of teams building AI systems.
Goal: What problem is this AI system solving, and for whom?
Example: Use an AI agent to handle customer limit requests that currently go to on-call engineers, saving ~3 hours of engineering time per week.
This seems obvious, but I see teams skip this step constantly. They get excited about the technology and start building before they understand the problem. The result? A sophisticated AI system that nobody uses because it doesn't solve a real pain point.
Step 2: Map Out Your Workflows (Success AND Failure)
Most teams only think about the happy path - when their AI agent works perfectly. But the magic happens when you plan for both success and failure scenarios.
Success Scenario:
Customer submits limit increase request
Agent validates request format and checks account status
Agent automatically approves within policy limits
Customer gets instant approval
Escalation Scenario:
Request exceeds policy thresholds or has missing information
Agent flags for human review with context summary
Human approves/denies with full context
Customer gets response within SLA
By mapping out both paths upfront, you avoid the trap of building an AI that handles 80% of cases but fails catastrophically on the remaining 20%.
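The two paths above can be sketched as a simple routing function. This is a minimal sketch: the policy threshold, field names, and escalation messages are hypothetical, not from the article.

```python
from dataclasses import dataclass

# Hypothetical policy limit for the limit-increase example.
POLICY_MAX_INCREASE = 10_000

@dataclass
class LimitRequest:
    account_id: str
    requested_increase: int
    account_in_good_standing: bool

def route(request: LimitRequest) -> str:
    """Decide between auto-approval (success path) and human escalation."""
    # Missing or invalid information -> escalate, never guess.
    if not request.account_id or request.requested_increase <= 0:
        return "escalate: missing or invalid information"
    # Outside policy thresholds -> escalate with context for human review.
    if not request.account_in_good_standing:
        return "escalate: account not in good standing"
    if request.requested_increase > POLICY_MAX_INCREASE:
        return "escalate: exceeds policy threshold"
    # Happy path: within policy, approve instantly.
    return "approve"
```

The point isn't the code itself; it's that both branches exist on day one, so the 20% of cases that don't fit the happy path already have somewhere to go.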
Step 3: Define Your Non-Negotiables
This is where most projects go off the rails. Teams focus on the AI capabilities but ignore the operational requirements that determine success or failure in production.
Ask yourself:
Performance: What's your acceptable response time target? 5 seconds? 30 seconds?
Accuracy: Can you tolerate mistakes? If the task is mission-critical (like sending emails to real customers), think carefully about your error tolerance and review process.
Security: What data can the AI access? How do you handle sensitive information?
User Experience: What happens when the AI doesn't know the answer?
I've seen brilliant AI systems that solved the right problem but failed because they took two minutes to respond, or because they were too unreliable for real use.
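One way to keep these non-negotiables honest is to write them down as explicit, testable limits instead of leaving them implicit. The specific values and keys below are hypothetical placeholders:

```python
# Hypothetical non-negotiables, written down so they can be enforced and reviewed.
NON_NEGOTIABLES = {
    "max_response_seconds": 5,               # performance target
    "min_confidence_to_act": 0.9,            # below this, escalate to a human
    "allowed_data_sources": {"account_db"},  # security boundary
    "fallback_message": "I'm not sure - routing you to a human.",
}

def within_sla(elapsed_seconds: float) -> bool:
    """Check a response time against the performance non-negotiable."""
    return elapsed_seconds <= NON_NEGOTIABLES["max_response_seconds"]

def source_allowed(source: str) -> bool:
    """Check a data source against the security boundary."""
    return source in NON_NEGOTIABLES["allowed_data_sources"]
```

If a requirement can't be expressed as a number or a hard rule, that's usually a sign it hasn't been thought through yet.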
Step 4: Count The Cost (Before You Build)
Here's the part that kills most AI projects: nobody does the math upfront.
Your costs include:
LLM API calls: 1000 customer requests × 5000 tokens each × $15/1M tokens = $75/month
Vector database: $200/month for embeddings storage
Integration APIs: Slack, JIRA, CRM connections
Human oversight: 5 hours/week of monitoring and tuning
Suddenly your "simple chatbot" costs $2000/month plus engineering time. That might be worth it - but only if you know the number before you start building.
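The arithmetic above fits in a few lines. The LLM figures come from the example; the hourly rate for human oversight is my own placeholder assumption:

```python
def monthly_llm_cost(requests: int, tokens_per_request: int,
                     price_per_million_tokens: float) -> float:
    """Back-of-the-envelope monthly LLM API cost."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

# Figures from the example: 1000 requests x 5000 tokens at $15/1M tokens.
llm = monthly_llm_cost(1000, 5000, 15.0)  # $75/month
vector_db = 200.0                          # embeddings storage
oversight = 5 * 4 * 80.0                   # 5 hrs/week, assuming ~$80/hr loaded cost
total = llm + vector_db + oversight
print(f"${total:,.0f}/month")
```

Swap in your own numbers before you build; the exact rate matters far less than doing the multiplication at all.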
Step 5: Design For Human In The Loop
The best AI agents don't replace humans - they amplify them. Your prompt design should make this collaboration seamless.
Instead of a generic "You are a helpful assistant," try:
"You are a customer limit specialist for Amazon. Your role is to process limit increase requests efficiently while maintaining security standards. When uncertain, escalate to humans with full context. Never approve requests outside established parameters."
Be specific about the role, clear about the boundaries, and explicit about escalation paths.
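One small trick: assemble the system prompt from named parts, so the role, boundaries, and escalation rules are each reviewable on their own. A sketch, with wording adapted from the example above:

```python
# Each component of the prompt is explicit and individually reviewable.
ROLE = "You are a customer limit specialist."
BOUNDARIES = "Never approve requests outside established parameters."
ESCALATION = "When uncertain, escalate to a human with full context."

def build_system_prompt(role: str, boundaries: str, escalation: str) -> str:
    """Join the prompt components into a single system prompt."""
    return " ".join([role, boundaries, escalation])

prompt = build_system_prompt(ROLE, BOUNDARIES, ESCALATION)
```

When the escalation rule lives in its own constant, changing it becomes a one-line diff that reviewers will actually notice.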
Step 6: Expect Everything to Break
Murphy's Law applies doubly to AI systems. Your knowledge base will become outdated. APIs will go down. The AI will misunderstand edge cases.
Plan for it:
Prevention: Regular knowledge base updates, confidence thresholds
Detection: Comprehensive logging, alarms, user feedback loops
Recovery: Clear escalation paths, graceful degradation
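Graceful degradation can be as simple as a guard in front of every answer. A minimal sketch; the threshold and messages are hypothetical:

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off for acting without a human

def answer_or_escalate(answer: str, confidence: float,
                       knowledge_base_ok: bool) -> str:
    """Degrade gracefully instead of failing loudly."""
    # Detection: a stale or unreachable knowledge base triggers fallback.
    if not knowledge_base_ok:
        return "escalate: knowledge base unavailable"
    # Prevention: low-confidence answers never reach the customer.
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate: low confidence"
    return answer
```

Every escalation should also be logged; those logs are your user feedback loop and your tuning signal.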
The teams that succeed are those that build monitoring and recovery into their AI systems from day one.
The Bottom Line
Most AI projects fail not because the technology isn't ready, but because teams rush into building without doing the foundational work. They skip the problem definition, ignore the operational requirements, and hope for the best.
The A2D2 framework forces you to slow down and think through these details upfront. It's not glamorous work, but it's the difference between an AI project that gets used and one that gets abandoned.
Your AI doesn't need to be perfect. It needs to be useful, reliable, and designed for the messy reality of production systems.
Start with a real problem. Design for both success and failure. Count the costs. And always, always plan for things to break.
That's how you build AI projects that actually work.
PS: You can get the A2D2 template here. Feel free to use or adapt it in any way you wish.