AI-Mediated Collaboration
Conduit's AI-Mediated Collaboration system solves one of the oldest problems in large organizations: getting questions to the people who can actually answer them, without losing context along the way.
The Problem
In traditional organizations, questions travel through layers of management like a game of telephone. An executive asks a question, it passes through directors, managers, and team leads, each person re-interpreting and re-phrasing along the way. By the time it reaches the domain expert who can actually answer, the original intent is distorted. The answer then travels back up through the same layers, losing nuance at each step.
This creates several problems:
- Context loss: Each handoff strips away important nuance
- Slow resolution: Days or weeks to answer what should take minutes
- Political filtering: Answers get softened as they travel up
- Expert overload: The wrong people get asked because the right people are unknown
- No accountability: When the answer is wrong, no one knows where it went off track
The Solution: Hierarchical Query Routing
Conduit replaces the telephone game with AI-mediated hierarchical routing. Questions flow through the organizational hierarchy, but AI refines the intent at each level, ensuring the question arrives at the right expert with full context intact. Answers then flow back up automatically with complete lineage.
How It Works
Executive asks question
|
v
AI refines intent -> Routes to department head
|
v
AI refines further -> Routes to team lead
|
v
AI refines further -> Routes to domain expert
|
v
Expert answers with data and context
|
v
Answer flows up with context preserved
|
v
Synthesized answer with additional context
|
v
Executive receives traced answer with full lineage
Each step in this flow is a separate conversation thread with complete privacy isolation. The AI assistant at each level helps refine the question based on that level's organizational context.
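As a rough sketch, the thread record created at each level might look like the following; the field names are illustrative assumptions rather than a documented schema:

{
  "threadId": "thread-002",
  "participant": "Production Manager",
  "visibility": "participant and AI only",
  "receivedIntent": "Investigating production volume decrease for Week 6. Need root cause analysis across all lines.",
  "routedFrom": "thread-001",
  "routedTo": "thread-003"
}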
Detailed Flow Example
Imagine a VP of Operations asks: "Why did production drop last week?"
Level 1 -- VP of Operations thread:
- VP asks the question
- AI identifies this spans manufacturing and maintenance domains
- AI routes refined intent to Production Manager
Level 2 -- Production Manager thread:
- Receives: "Investigating production volume decrease for Week 6. Need root cause analysis across all lines."
- Production Manager adds context: "Lines 2 and 4 were affected"
- Routes to Line 4 Supervisor (highest impact)
Level 3 -- Line 4 Supervisor thread:
- Receives: "Line 4 production decreased ~15% in Week 6. Need specific cause and timeline."
- Supervisor identifies the maintenance event
- Routes to Maintenance Lead for technical details
Level 4 -- Maintenance Lead thread:
- Receives: "Line 4 experienced unplanned downtime Week 6. Need maintenance event details, root cause, and corrective actions."
- Maintenance Lead provides detailed technical answer with data from NQE queries
Answer propagation (automatic):
- Answer flows from Maintenance Lead -> Line 4 Supervisor -> Production Manager -> VP of Operations
- Each level sees the full answer plus any context added by the level below
- VP receives a complete, traced answer within hours instead of days
Intent Lineage
Every transformation of a question is tracked. This provides complete transparency into how a question evolved from its original form to the version that was answered.
{
  "intentLineage": {
    "originalQuestion": "Why did production drop last week?",
    "transformations": [
      {
        "level": 1,
        "actor": "AI + VP of Operations",
        "refinedIntent": "Investigating production volume decrease for Week 6",
        "routedTo": "Production Manager",
        "timestamp": "2026-02-03T09:15:00Z"
      },
      {
        "level": 2,
        "actor": "AI + Production Manager",
        "refinedIntent": "Line 4 production decreased ~15% in Week 6",
        "routedTo": "Line 4 Supervisor",
        "timestamp": "2026-02-03T09:32:00Z"
      },
      {
        "level": 3,
        "actor": "AI + Line 4 Supervisor",
        "refinedIntent": "Line 4 unplanned downtime root cause and corrective actions",
        "routedTo": "Maintenance Lead",
        "timestamp": "2026-02-03T10:05:00Z"
      }
    ],
    "finalAnswer": {
      "answeredBy": "Maintenance Lead",
      "timestamp": "2026-02-03T10:45:00Z",
      "propagatedTo": "VP of Operations",
      "propagationComplete": "2026-02-03T10:45:12Z"
    }
  }
}
This lineage serves multiple purposes:
- Transparency: See exactly how a question was refined at each step
- Accountability: Know who contributed what to the final answer
- Learning: The Context Engine extracts expertise patterns from successful resolutions
- Auditing: Full trail for compliance and quality review
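For audit or quality review, the lineage would typically be retrieved programmatically. The endpoint below is a hypothetical sketch modeled on the continue endpoint shown later on this page, not a confirmed part of the API:

# Hypothetical: retrieve the intent lineage for a conversation
curl -X GET https://conduit.example.com/api/conversations/conv-123/lineage \
  -H "Authorization: Bearer <token>"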
Privacy Isolation
One of the most critical features of AI-Mediated Collaboration is thread-level privacy isolation.
The Rule
Each participant only sees their own conversation thread. There is no upstream visibility.
VP Thread: "Why did production drop?" [VISIBLE ONLY TO VP + AI]
|
Manager Thread: "Line 4 decreased 15%..." [VISIBLE ONLY TO MANAGER + AI]
|
Expert Thread: "Bearing failure on..." [VISIBLE ONLY TO EXPERT + AI]
Why This Matters
- Prevents political filtering: Experts answer honestly without worrying about who is reading upstream
- Encourages candor: A maintenance lead can say "we missed the PM schedule" without fear of executive backlash
- Separates signal from politics: The answer flows up; the politics stay out
- Protects individual privacy: Your conversation with the AI is yours alone
The AI at each level only passes the refined intent downstream, not the full conversation. This means the expert never sees the executive's original phrasing or any management commentary.
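To make the boundary concrete, the handoff the expert's AI receives might contain only something like the following (an illustrative sketch; the actual payload shape is not documented here). Note the absence of any upstream conversation content:

{
  "refinedIntent": "Line 4 experienced unplanned downtime Week 6. Need maintenance event details, root cause, and corrective actions.",
  "routedBy": "Line 4 Supervisor",
  "upstreamConversation": null
}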
Conversation Lifecycle
Conversations in Conduit are designed to be temporary while the knowledge extracted from them is permanent.
Expiration Rules
| Parameter          | Default                            | Configurable |
| ------------------ | ---------------------------------- | ------------ |
| Conversation TTL   | 30 days                            | Yes          |
| Reset on activity  | Yes                                | Yes          |
| Grace period       | 7 days (warning before expiration) | Yes          |
| Context extraction | Before expiration                  | Automatic    |
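If these parameters are exposed through the API, adjusting them might look like the sketch below; the endpoint and field names are assumptions for illustration only:

# Hypothetical: adjust conversation lifecycle settings
curl -X PATCH https://conduit.example.com/api/settings/conversation-lifecycle \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "conversationTtlDays": 30,
    "resetTtlOnActivity": true,
    "gracePeriodDays": 7
  }'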
The Lifecycle
Day 0: Conversation created
|
Day 1-29: Active conversation, TTL resets on each message
|
Day 23: 23 days since last activity -> warning notification (7-day grace period begins)
|
Day 30: Conversation expires
| - Raw messages deleted
| - Context patterns extracted and stored abstractly
| - Intent lineage preserved (anonymized)
|
Day 30+: Context persists permanently in the Context Engine
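What survives expiration is the abstracted knowledge, not the transcript. A sketch of the kind of record that might persist in the Context Engine (field names are illustrative assumptions):

{
  "extractedContext": {
    "domain": "Line 4 gearbox maintenance",
    "demonstratedBy": "anonymized participant reference",
    "outcome": "question answered, answer propagated",
    "rawMessages": "deleted",
    "intentLineage": "preserved (anonymized)"
  }
}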
Why Conversations Expire
- Privacy by design: Conversations are not permanent records
- Storage efficiency: No unbounded message storage growth
- Focus on knowledge: What matters is the expertise demonstrated, not the exact words
- Legal simplicity: No indefinite retention of potentially sensitive discussions
Continue Intent (Manual Routing)
While the AI suggests routing based on expertise and organizational context, the actual routing decision is always made by a human. This is called Continue Intent.
How It Works
- A participant receives a question in their thread
- They can answer directly, or choose to continue the question to someone else
- When continuing, they select the recipient and optionally refine the intent
- The AI assists with suggestions but never routes autonomously
# Continue a conversation to another person
curl -X POST https://conduit.example.com/api/conversations/conv-123/continue \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "targetUserId": "maint-lead-007",
    "refinedIntent": "Need bearing failure analysis for Line 4 gearbox",
    "preserveContext": true
  }'
Design Principle
Every person in the chain decides where the question goes next. This ensures:
- Human judgment: AI suggests, humans decide
- Accountability: Each routing decision is attributed
- Flexibility: Override AI suggestions when you know better
- Trust: No one feels like they are being bypassed by automation
Answer Propagation
When a domain expert provides an answer, it automatically propagates up the conversation tree to the original asker.
Propagation Rules
- Answers flow up through every level that participated in routing
- Each intermediate level can see the answer and add supplementary context
- The original asker receives the complete answer with full lineage
- Safety limit: Maximum 20 levels of propagation (prevents infinite loops)
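The answer-submission call is not documented on this page; by analogy with the continue endpoint, it might look like the sketch below (endpoint and fields are assumptions). Once the answer is submitted, propagation up the chain happens automatically:

# Hypothetical: submit an answer; upward propagation is automatic
curl -X POST https://conduit.example.com/api/conversations/conv-126/answer \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "answer": "Bearing failure on Line 4 gearbox at 02:30 on Feb 1. Root cause: missed PM cycle. Corrective action: replaced bearing, updated PM schedule to weekly inspection."
  }'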
Propagation Example
Level 4 (Expert answers):
"Bearing failure on Line 4 gearbox at 02:30 on Feb 1.
Root cause: missed PM cycle. Corrective action: replaced
bearing, updated PM schedule to weekly inspection."
|
Level 3 (Supervisor adds context):
[Expert answer included]
+ "This was a known risk item from Q4 reliability review.
New PM schedule implemented Feb 3."
|
Level 2 (Manager adds context):
[Full thread included]
+ "Line 4 back to full production as of Feb 4.
Week 7 output expected to recover fully."
|
Level 1 (VP receives complete answer):
[Complete answer with all context and lineage]
Trace: Expert -> Supervisor -> Manager -> VP
Total resolution time: 1h 30m
Safety Mechanisms
| Mechanism            | Purpose                                                                                 |
| -------------------- | --------------------------------------------------------------------------------------- |
| 20-level depth limit | Prevents infinite routing loops                                                         |
| Cycle detection      | Cannot route back to someone already in the chain                                       |
| Timeout escalation   | If no answer in 48h, notifies the previous level                                        |
| Dead-end handling    | If routing reaches someone who cannot answer, they can bounce it back with explanation  |
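When a routing or propagation request violates one of these limits, it would presumably be rejected with an explanatory error. The response below is an illustrative sketch, not a documented error format:

{
  "error": "routing_cycle_detected",
  "message": "maint-lead-007 is already a participant in this routing chain",
  "chainDepth": 4,
  "maxDepth": 20
}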
Integration with the Context Engine
Every AI-Mediated Collaboration interaction feeds the Context Engine:
- Expert identification: When someone provides a high-quality answer, their expertise score increases in the relevant domain
- Routing optimization: Over time, the AI learns who should be asked about what, based on historical success
- Collaboration patterns: The relational context between participants is strengthened
- Knowledge gaps: When questions cannot be answered, the Context Engine identifies organizational knowledge gaps
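As an illustration of the kind of signal this integration produces (the schema is an assumption; see the Context Engine documentation for the actual model), a successful resolution might feed the Context Engine something like:

{
  "signal": "expertise-demonstrated",
  "participant": "maint-lead-007",
  "domain": "gearbox maintenance / preventive maintenance scheduling",
  "evidence": "answered routed question with supporting NQE data",
  "effect": "expertise score increased in this domain"
}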
Next Steps
- Context Engine -- Learn how expertise is tracked and scored
- Privacy & Security Model -- Deep dive into thread isolation and data protection
- Natural Query Engine (NQE) -- Explore how data queries integrate with collaboration
- Architecture -- Understand the technical infrastructure