Description:
Hi PaperDebugger team!
First off, great work on the tool—the multi-agent MCP architecture and Overleaf integration are incredibly useful for academic writing.
I recently read a fascinating upcoming CHI 2026 paper titled "Texterial: A Text-as-Material Interaction Paradigm for LLM-Mediated Writing" (Shen et al.), and I think its core concepts would be a massive UX enhancement for PaperDebugger.
The Core Concept of "Texterial"
The paper argues that standard chat interfaces for LLMs often bottleneck their capabilities because writing complex prompts to refine text is tedious. Instead, it proposes treating text as a malleable physical material (like "Clay" or "Plants"), mapping direct spatial/gestural manipulations to specific LLM operations:
- Abstracting/Smudging: Blurring text to make it more abstract or impressionistic.
- Condensing/Squashing: Pushing text together to summarize it.
- Elaborating/Stretching: Pulling text apart to expand on ideas and add connective tissue.
- Concretizing/Pinching: Tightening text to make vague phrasing highly specific and concrete.
- Ideating/Growing: Treating ideas like plants that can be watered (elaborated), pruned (discarded), or grafted (merged).
Why this fits PaperDebugger
Currently, PaperDebugger relies on an AI-powered chat and a comment system. Integrating "Texterial" concepts would allow users to move beyond prompt engineering and utilize direct manipulation right inside the Overleaf editor.
Because your backend already uses an MCP-based orchestration engine with multi-step reasoning, mapping these UI actions to backend LLM prompts would fit perfectly into your architecture.
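As a rough sketch of what that UI-to-backend mapping could look like on the extension side (the payload shape, field names, and `toGesturePayload` helper are all illustrative assumptions, not existing PaperDebugger APIs):

```typescript
// Hypothetical payload for forwarding a material gesture to the backend.
// Field names and overall shape are assumptions for illustration only.
interface GesturePayload {
  action: string;      // e.g. "squash" or "stretch"
  selection: string;   // the highlighted LaTeX source
  projectId: string;   // Overleaf project, so the agent has document context
}

function toGesturePayload(
  action: string,
  selection: string,
  projectId: string
): string {
  // Serialized in one place so the same shape can be POSTed to the
  // orchestration engine or wrapped as an MCP tool call.
  const payload: GesturePayload = { action, selection, projectId };
  return JSON.stringify(payload);
}
```

The point is just that a gesture reduces to one small structured request, so no new protocol work would be needed on top of the existing orchestration.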
Proposed Integration Ideas for Overleaf
While full multi-touch gestures might be hard to implement inside the Overleaf DOM, we could adapt the concepts into intuitive UI overlays or context menus:
- "Text-as-Clay" Context Menu (Direct Editing):
When a user highlights a LaTeX paragraph, instead of just a generic "Improve" or "Chat" button, the floating PaperDebugger menu could include material-based quick actions:
- 🤏 Pinch (Make Concrete): Send an underlying prompt to the backend to replace vague wording with precise academic phrasing.
- ↔️ Stretch (Elaborate): Prompt the agent to expand on the highlighted point, perhaps finding literature via your XtraMCP support to flesh it out.
- 🌫️ Smudge (Paraphrase/Abstract): Rewrite the section to focus on higher-level framing rather than specific details.
- ➡️⬅️ Squash (Condense): Summarize a wordy paragraph to save space (perfect for meeting page limits!).
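To make the idea concrete, here is a minimal sketch of how the four quick actions could map to prompt templates (the `MaterialAction` type, `TEMPLATES` table, and `buildPrompt` helper are hypothetical names, and the template wording is only a starting point):

```typescript
// Hypothetical mapping from "clay" actions to underlying prompts.
// Nothing here is an existing PaperDebugger API.
type MaterialAction = "pinch" | "stretch" | "smudge" | "squash";

const TEMPLATES: Record<MaterialAction, string> = {
  pinch: "Rewrite the passage to replace vague wording with precise, concrete academic phrasing:",
  stretch: "Expand the passage, elaborating on the point and adding connective tissue:",
  smudge: "Rewrite the passage at a higher level of abstraction, focusing on framing over detail:",
  squash: "Condense the passage into a shorter summary while preserving every claim:",
};

function buildPrompt(action: MaterialAction, selection: string): string {
  // The selected LaTeX is passed through untouched so the backend can
  // preserve commands like \cite{} and \ref{} in its rewrite.
  return `${TEMPLATES[action]}\n\n${selection}`;
}
```

Since each action is just a template plus the selection, adding a new material metaphor later would be a one-line change.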
- "Text-as-Plants" Canvas (Outline/Literature Ideation):
For early-stage research or literature reviews, PaperDebugger's sidebar could feature an "Ideation Garden" where generated paper outlines or related works are visualized as branches. Users could visually "prune" bad ideas, "graft" two paragraphs together, or "water" a bullet point to ask the agent to generate a full paragraph.
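The garden itself is just a tree with three operations. A minimal sketch, assuming a simple node shape (the `IdeaNode` interface and the `expand` callback standing in for a backend call are illustrative, not existing APIs):

```typescript
// Hypothetical "Ideation Garden" tree. Ids and the expand() callback
// are assumptions for illustration.
interface IdeaNode {
  id: string;
  text: string;          // bullet point or outline heading
  children: IdeaNode[];  // branches growing off this idea
}

function prune(root: IdeaNode, id: string): void {
  // Discard a bad idea: remove the subtree with the given id, anywhere in the tree.
  root.children = root.children.filter((c) => c.id !== id);
  root.children.forEach((c) => prune(c, id));
}

function graft(target: IdeaNode, donor: IdeaNode): void {
  // Merge two ideas: attach the donor subtree under the target node.
  target.children.push(donor);
}

async function water(
  node: IdeaNode,
  expand: (text: string) => Promise<string>
): Promise<void> {
  // "Watering" grows a bullet into a full paragraph; `expand` stands in
  // for a call to the agent backend.
  node.text = await expand(node.text);
}
```

Keeping the operations this small means the sidebar view is purely a renderer over the tree, and every manipulation stays undoable.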
Benefits
As the paper notes, this approach drastically lowers the "Gulf of Envisioning"—users don't have to figure out how to instruct the AI; they just directly manipulate the text. It makes the writing process feel more like craftsmanship than wrestling with a chatbot.
Reference:
Shen, J. J., Marquardt, N., Romat, H., Hinckley, K., Riche, N., & Chevalier, F. (2026). Texterial: A Text-as-Material Interaction Paradigm for LLM-Mediated Writing. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/epdf/10.1145/3772318.3790330
Would love to hear your thoughts on whether these direct-manipulation mappings could be added to the roadmap for the Chrome extension!