Part 6 of 7 in Specification Factory
Starting Small: The 30-Minute Pilot That Sells Itself
The Wrong Way to Introduce This
Here's what doesn't work:
"Hey team, I found this new specification language called Chronos. We need to learn it and start using it for all our requirements. Here are 50 pages of documentation. Training sessions start next week."
That's a recipe for resistance:
- "Another tool to learn?"
- "We already have Jira"
- "This feels like overhead"
- "Who asked for this?"
People don't adopt tools. They adopt results.
The Right Way: Show, Don't Tell
Here's what works:
Week 1: The Invisible Pilot
Monday morning:
- Pick one upcoming feature (not too complex, not too simple)
- Spend 30 minutes with an AI agent describing the feature
- Generate a Chronos specification
- Review and refine (20 minutes)
- Generate Jira stories, Gherkin scenarios, and state diagrams
Total time: 50 minutes
Monday afternoon: Walk into sprint planning with:
- Complete Jira epic with 8 detailed stories
- 15 acceptance criteria covering happy path and edge cases
- State diagram showing all possible states
- 12 Gherkin test scenarios ready for QA
- OpenTelemetry schema for tracking the feature
Team reaction: "How did you have time to write all this?"
Your answer: "I didn't write it. I reviewed it."
Week 1: The Impact
Before this pilot:
- Grooming sessions: 2 hours of "what about...?" questions
- Mid-sprint clarifications: 5+ interruptions
- QA writing test scenarios: 1 day of work
- Incomplete edge case coverage
After this pilot:
- Grooming session: 30 minutes (everything already specified)
- Mid-sprint clarifications: 0 (everything was documented)
- QA test scenarios: Already written
- Complete edge case coverage
Actual sprint velocity: Increased because engineering had clear specifications from day one.
Week 2: The Organic Spread
You don't announce anything. You don't evangelize. You just do it again.
Another feature. Another 50-minute workflow. Another grooming session where you show up with complete specifications.
Engineering team lead: "How are you generating these specs?"
Now you explain. But you're not selling a tool—you're showing a result they've already experienced.
The 30-Minute Pilot (Step-by-Step)
Let me walk you through exactly how this works.
Step 1: Pick Your Feature (5 minutes)
Choose something that:
- Has clear business value (so stakeholders care)
- Involves multiple systems (to show integration validation)
- Has tricky edge cases (to demonstrate completeness)
- Is in next sprint (so impact is immediate)
Example: "Let users update their email address"
Seems simple. But there are edge cases:
- What if the new email is already taken?
- Do we send confirmation to old email or new email?
- What if they don't click the confirmation link?
- Can they change it while they have pending transactions?
- Do we log this for security audits?
Step 2: Describe to AI Agent (10 minutes)
Open a conversation with an AI agent that understands your domain:
You:
"I need to let users update their email address. They should enter their new email, confirm it, and receive a verification link. The email doesn't change until they click the link. If they don't verify within 24 hours, the request expires."
AI:
"Got it. A few clarifying questions:
- Should they re-authenticate before changing email?
- What happens to the old email address—is it available for other users immediately?
- Should we notify the old email address about the change?
- Any compliance considerations (GDPR, etc.)?"
You:
"Yes to re-auth. Old email becomes available after 30 days (in case of account recovery). Yes, notify old email. GDPR applies—this is personal data."
AI:
"Generating Chronos specification..."
Step 3: Review Generated Spec (15 minutes)
The AI generates:
namespace com.acme.users.emailchange

use com.acme.data.users#User
use com.acme.api.auth#RequireRecentAuthentication

@compliance(gdpr: true, action: "updatePersonalData")
@slo(latency: "2s", percentile: 95)
journey ChangeEmail {
  actor: User

  preconditions: [
    "Actor is authenticated",
    "Actor has verified existing email",
    "No pending email change request exists"
  ]

  steps: [
    step InitiateChange {
      @requires(RequireRecentAuthentication, within: "5 minutes")
      action: "Enters new email address and confirms"
      expectation: "System validates email format and checks for uniqueness"
      outcome: If("email already taken", TransitionTo(EmailTaken))
    },
    step SendVerificationEmail {
      action: "System generates verification token and sends email"
      expectation: "Token valid for 24 hours"
      @telemetry(event: "EmailChangeInitiated", attributes: ["userId", "oldEmail"])
      outcome: If("email delivery fails", TransitionTo(DeliveryFailed))
    },
    step NotifyOldEmail {
      action: "System sends notification to old email address"
      expectation: "Notification includes timestamp and option to cancel change"
      @compliance(gdpr: true, action: "dataChangeNotification")
    },
    step WaitForVerification {
      action: "User clicks verification link in email"
      expectation: "System validates token is not expired"
      outcome: If("token expired", TransitionTo(TokenExpired),
                  Else(TransitionTo(EmailUpdated)))
    },
    step EmailUpdated {
      action: "System updates User.email and marks old email as available after 30 days"
      @telemetry(event: "EmailChangeCompleted", attributes: ["userId", "newEmail"])
      @audit(level: "high", reason: "security-sensitive")
    }
  ]

  variants: [
    variant EmailTaken {
      trigger: "New email address is already registered"
      outcome: "Show error: This email address is already in use"
    },
    variant DeliveryFailed {
      trigger: "Email delivery fails (bounced, invalid address)"
      outcome: "Show error: Unable to send verification email. Please check the address."
    },
    variant TokenExpired {
      trigger: "User clicks link after 24 hours"
      outcome: "Show error: Verification link expired. Please start again."
      @telemetry(event: "EmailChangeExpired")
    },
    variant CancellationRequested {
      trigger: "User clicks cancel link in old email notification"
      outcome: "Email change request is cancelled"
      @telemetry(event: "EmailChangeCancelled")
    }
  ]

  postconditions: [
    "User.email is updated",
    "Old email marked unavailable for 30 days",
    "Audit log entry created",
    "Email verification completed"
  ]
}
Your review (15 minutes):
- ✅ Re-authentication requirement: Correct (5 minutes is right)
- ✅ Old email notification: Good catch
- ✅ 30-day hold on old email: Correct
- ✅ Cancellation flow: Great addition I hadn't thought of
- ❌ Token expiry: Should be 48 hours, not 24 hours (quick edit)
- ✅ Compliance tags: Correctly applied
Refinement:
expectation: "Token valid for 48 hours" // Changed from 24
Done. 15 minutes of review vs. 6 hours of writing from scratch.
Step 4: Generate Artifacts (5 minutes)
Run Chronos:
$ chronosc build
✓ Validated against data model
✓ Validated against API contracts
✓ Validated compliance policies
✓ Generated Jira epic with 6 stories
✓ Generated 14 Gherkin scenarios
✓ Generated state diagram
✓ Generated OpenTelemetry schema
✓ Generated audit trail config
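To make the output concrete, here is the kind of Gherkin scenario the build step might emit for the EmailTaken variant. This is an illustrative sketch only; the exact wording and structure a real generator produces would differ, and the email address shown is a placeholder.

```gherkin
# Hypothetical generated output: one scenario derived from the
# EmailTaken variant of the ChangeEmail journey.
Feature: Change email address

  Scenario: New email address is already registered
    Given I am authenticated and have re-authenticated within the last 5 minutes
    And another account is already registered with "taken@example.com"
    When I enter "taken@example.com" as my new email address and confirm
    Then I see the error "This email address is already in use"
    And my email address is not changed
```

Notice that the scenario carries the journey's preconditions (recent re-authentication) into its Given steps, so QA inherits the spec's context for free.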
Step 5: Walk into Grooming (30 minutes vs 2 hours)
Traditional grooming:
- PM presents user story (5 minutes)
- Engineering asks questions (45 minutes)
  - "What if the email is already taken?"
  - "Do we need to verify the new email?"
  - "What happens to the old email?"
  - "Should we notify them?"
  - "What about security?"
- PM scrambles to answer or says "good question, let me get back to you"
- Story gets re-estimated after clarifications
- Total time: 2 hours
Grooming with generated specs:
- PM shares Jira epic (already complete) (2 minutes)
- Team reviews acceptance criteria (8 minutes)
- Engineering: "This looks complete. No questions."
- Team discusses implementation approach (20 minutes)
- Total time: 30 minutes
Result: Engineering starts sprint with zero ambiguity.
Measuring Success
After the pilot sprint, measure:
Velocity Metrics
- Grooming time: 75% reduction (2 hours → 30 minutes)
- Mid-sprint clarifications: 90% reduction (average 5 → 0.5)
- QA test case writing: 100% reduction (scenarios already written)
- Story points completed: 15% increase (less confusion = faster delivery)
Quality Metrics
- Edge cases caught before development: 12 (vs. 3 typically found in QA)
- Bugs found in QA: 40% reduction (specifications were complete)
- Production incidents: 0 (vs. average 1.2 per feature)
Strategic Time Reclaimed
- Hours spent writing specs: 6 hours → 50 minutes (86% reduction)
- Hours freed for customer conversations: +5 hours per sprint
- Hours freed for strategic planning: +2 hours per sprint
Handling Objections
Objection 1: "This seems like it only works for simple features"
Response: Start with simple features to prove the concept. Once the team sees the value, scale to complex features (which benefit even more from automation).
Objection 2: "Engineers won't trust auto-generated tickets"
Response: They don't have to. You're reviewing generated specs, not blindly accepting them. Quality speaks for itself—after one sprint, they'll ask you to use it for all features.
Objection 3: "What if the AI generates wrong specifications?"
Response: That's literally why you review it. The AI generates 95% of the spec; you refine the 5% that needs your domain expertise. Still 10x faster than writing from scratch.
Objection 4: "We don't have time to learn a new tool"
Response: You're not asking anyone to learn anything yet. You're showing results. After one sprint, if the team wants to understand how you did it, then you explain.
The Adoption Curve
Month 1: One PM, One Team
- You run the pilot
- One team experiences the benefits
- No org-wide announcement
Month 2: Organic Expansion
- Other PMs ask "How are you doing this?"
- You share the workflow
- 2-3 teams start using it
Month 3: Visible Trend
- Leadership notices improved velocity
- Teams using it are shipping faster
- Other teams request access
Quarter 2: Scaled Adoption
- Becomes standard practice
- Training for new PMs
- Integration with existing tools deepens
Key principle: Pull, not push. Let results create demand.
What's Next?
You've seen:
- The problem (PM heads-down trap)
- The solution (Specification Factory)
- The technology (Chronos)
- The integration (full stack)
- The adoption (30-minute pilot)
In the final post, The Strategic PM, we'll explore what product management looks like when the specification burden is lifted—when you can finally do the work only you can do.
Ready to try it?
This is Part 6 of the "Look Up" series exploring how AI is finally freeing product managers to do their best work.