
I want to help teams move faster together, not through more process, but through shared clarity, trust, and communication.
The goal: reduce friction without adding bureaucracy.
Clear communication is a speed advantage.
The 4-Step Collaboration System
Listen → Align → Adjust → Agree
Use this staircase as the weekly rhythm when joining or reshaping a team.
Why This Matters
Fast-moving teams don’t struggle because of a lack of talent – they struggle because of misalignment.
This playbook reduces friction between Design, Product, and Engineering by making:
- Expectations visible
- Decision-making shared
- Handoffs predictable
- Feedback continuous (not reactive)
It’s intentionally simple – no heavy ceremonies.
Week 1: Listening & Learning
Short 1:1 conversations reveal where collaboration breaks today – expectations, timing, formats, decision-making.
For Developers (ICs)
- What slows you down the most when you get designs?
- How do you prefer to receive designs: prototype, tickets, screenshots, walkthrough?
- What’s your current process when designs don’t match technical constraints?
- How do you prefer to give feedback on feasibility?
- If you could change one thing about design–dev collaboration, what would it be?
For Dev Manager / Lead
- What do you need from design so your team feels confident going into a sprint?
- When things break down, what usually went wrong earlier?
- How should disagreements be handled?
- What has worked well in the past?
For Product Managers
- Where does friction happen today?
- How can we reduce rework during execution?
- How should we document design–dev decisions?
For Everyone
- What tools are non-negotiable?
- How should we make decisions during urgent pivots?
Output after Week 1:
Top 3 collaboration pain points + protocol for urgent decision-making.
Week 2: Quick Wins
A. Weekly Design–Dev Sync (15–20 min)
Before sprint planning → walk upcoming designs → devs flag risks → avoid mid-sprint surprises.
B. Enhanced Handoff Checklist
Every design handoff includes:
- Prototype link
- Final flow (start → end + alternatives)
- Edge cases (empty, error, loading)
- Responsive behavior (if relevant)
- Specs/redlines (if precision matters)
- Assets + interaction notes
- Technical constraints noted
- Priority level: MVP vs. Polish vs. Delight
- Definition of done
Optional: short Loom walkthrough (<5 min)
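One lightweight way to keep the checklist honest is a quick completeness check run before a ticket goes to dev. This is only a sketch; the field names below are illustrative assumptions, not a prescribed schema — adapt them to your own ticket template.

```python
# Illustrative handoff completeness check.
# REQUIRED_FIELDS mirrors the checklist above; names are assumptions.
REQUIRED_FIELDS = [
    "prototype_link",
    "final_flow",
    "edge_cases",
    "specs",
    "assets",
    "technical_constraints",
    "priority",            # "MVP", "Polish", or "Delight"
    "definition_of_done",
]

def missing_fields(handoff: dict) -> list[str]:
    """Return checklist items that are absent or empty in a handoff ticket."""
    return [f for f in REQUIRED_FIELDS if not handoff.get(f)]

# Example: a ticket with only a prototype link and a priority level.
ticket = {"prototype_link": "https://example.com/proto", "priority": "MVP"}
print(missing_fields(ticket))
```

Even a check this small turns “did we forget edge cases?” from a mid-sprint surprise into a pre-handoff fix.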
C. Resource Reality Framework
- MVP: Must work; polish deferred
- Polish: Customer/demo-ready
- Delight: Added when time allows
Output after Week 2:
Clearer handoffs + shared expectations + fewer blockers.
Week 3: Feedback & Adjustments
Ask the team:
- What worked better this sprint?
- What’s still frustrating?
- If we fix one thing next sprint, what should it be?
- How did MVP/Polish/Delight work in practice?
Output after Week 3:
Refined systems shaped by the team, not imposed.
Week 4: Shared Working Agreement
A lightweight agreement everyone contributes to:
- Designs shared X days before sprint planning
- Devs review and raise blockers within 24 hours
- Post-sprint-start changes require alignment (PM + Dev Lead + Design)
- If unclear, ask directly – don’t guess
- Weekly sync to prevent surprises
- Monthly technical design review
- Escalation path for decision deadlocks
Output after Week 4:
Clear rules → fewer misunderstandings → faster delivery.
Startup Realities
Crisis Mode Protocol
- Design triage: Drop polish first
- 24-hour decision window: PM + Dev Lead + Design can override
- Scope Parking Lot: Document cuts → revisit after launch
Iteration Expectations
- Version 1 → learning
- Version 2 → clarity
- Version 3 → refinement
Special Considerations
- Investor/demo features require clarity on “how good is good”
- Prepare for scaling as headcount grows
- Engineers can suggest UX improvements during build – partnership, not a function silo
AI-Generated Work: Setting Clear Expectations
AI tools are now part of how many teams design and build products. But without clear expectations, AI-generated outputs can create confusion, quality issues, and trust problems between design and dev.
What to Clarify Upfront
Have an explicit conversation with your team about:
When AI Tools Are Being Used
- Design: AI for wireframes, copy generation, image creation, and layout suggestions
- Development: AI for code generation, boilerplate, refactoring, and documentation
- Transparency rule: Label AI-generated work in handoffs so reviewers know what to scrutinize
Validation Standards
Not all AI outputs are equal. Establish what requires human review:
- Always validate: User-facing copy, accessibility compliance, security-sensitive code, final UI designs
- Spot-check: Boilerplate code, documentation, initial wireframes
- Who validates: Designer validates AI designs, dev validates AI code, PM validates AI copy
Handoff Quality Standards
AI-generated work should meet the same handoff standards as human-created work:
- Complete flows, not just isolated screens
- Edge cases considered and documented
- Technical constraints accounted for
- Accessibility requirements met
- Brand consistency verified
- Clear annotation of what’s AI-generated vs. human-refined
Who to Involve in the Conversation
This isn’t just a design-dev discussion. Include:
- Design + Dev + PM: Core team to establish baseline expectations
- Engineering Lead: Set technical validation requirements and security review protocols
- Design System Owner (if exists): Ensure AI outputs don’t fragment consistency
- QA/Testing: Understand what needs extra scrutiny in AI-generated features
Red Flags to Watch For
Warning signs that AI collaboration needs better guardrails:
- Developers questioning design decisions that “don’t make sense” (AI hallucinations in UI)
- Designers pushing back on “weird code patterns” (AI generating non-standard implementations)
- Quality dropping because people assume “AI checked it”
- Handoffs taking longer because reviewers don’t trust AI outputs
Sample Working Agreement Language
Add this to your Week 4 shared agreement:
- “AI-generated designs/code must be clearly labeled in handoffs”
- “All user-facing copy and accessibility features require human validation, regardless of source”
- “Developers can flag AI-generated designs for review if implementation seems unrealistic”
- “Designers can flag AI-generated code for review if it impacts UX or performance”
The goal: Use AI to move faster, but maintain the same quality bar and trust between design and dev.
Success Metrics (Monthly Check)
| Metric | What It Looks Like |
|---|---|
| Sprint Velocity | Are we delivering more consistently sprint-over-sprint? |
| Rework Frequency | Less back-and-forth between design and dev? |
| Developer Confidence | Quick pulse survey (1-5 rating on clarity and readiness) |
| Crisis Recovery | How quickly we bounce back from urgent changes |
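The Developer Confidence pulse can be tallied in a few lines. A rough sketch follows; the 3.5 “needs attention” threshold is my assumption for illustration, not part of the playbook — pick whatever bar fits your team.

```python
from statistics import mean

def pulse_summary(ratings: list[int], threshold: float = 3.5) -> dict:
    """Summarize a 1-5 clarity/readiness pulse survey.

    `threshold` is an arbitrary example bar, not a prescribed value.
    """
    avg = mean(ratings)
    return {
        "average": round(avg, 2),
        "responses": len(ratings),
        "needs_attention": avg < threshold,
    }

# Example: five developers rate sprint readiness after planning.
print(pulse_summary([4, 3, 5, 2, 4]))
```

Tracking the average month over month matters more than any single reading; the trend tells you whether the agreement is actually holding.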
You will know it’s working when:
There are fewer “just build what I designed” moments and fewer “design didn’t think this through” moments.
Alignment becomes the norm.
Mindset That Makes It Stick
- Listen first: co-create solutions, don’t prescribe them.
- Keep it light: minimal process, maximum clarity.
- Be visible: share work before it’s “final.”
- Celebrate quick wins: reinforce good patterns.
- Embrace iteration: perfect is the enemy of shipped.
What You Get in Under a Month
If you follow this playbook, in under 30 days your team will have:
- Clearer handoffs with priority context
- A short weekly sync to avoid surprises
- Crisis mode protocols for startup realities
- A feedback loop that includes technical input
- A shared agreement that scales with growth
- Simple metrics to know if it’s working
The result: Design that moves at startup speed without breaking collaboration.
Want to adapt this framework?
If you’re working through similar challenges and want to adapt this for your team, I’m always open to conversations.
I would love to know what worked or didn’t work for your team.
