Overview
Main Problem:
As of this writing, Fibery’s inline comments cannot be managed as standard entities in views, which limits their usability. The issue is recognized but remains unprioritized by the Fibery team, and it hinders team collaboration: inline feedback cannot be easily organized, filtered, or integrated into broader workflows.
My Goal:
To enhance collaborative workflows in Fibery by aggregating inline comments and their cited text from rich text fields into structured, actionable Thread and Comment entities, enabling better organization, visibility, and interaction with feedback (e.g., for team discussions or re-feeding comments to AI for iterative improvements).
Challenge:
The most reliable approach would be an external Node.js script using @fibery/prosemirror-schema (Fibery’s custom schema package) and ProseMirror’s native utilities such as Fragment.textBetween for accurate, position-based extraction. Using the real library sidesteps any in-script recreation, but it means running outside Fibery (e.g., locally or on a server) and then importing the results back manually or via the API.
Context:
Fibery restricts low-level ProseMirror utilities (e.g., Fragment.textBetween) in their API to ensure security, maintain simplicity, and optimize performance, but strong community demand for features like getTextRange could drive future implementation.
Strategy:
The strategy recreates ProseMirror’s traversal logic, specifically mimicking nodesBetween and node size calculations, directly in the script to enable precise, self-contained cited text extraction from Fibery’s rich text JSON, bypassing the need for external API dependencies.
- Key Elements Recreated: Mirror core methods like nodesBetween for recursive tree walking and overlap detection, ensuring precise handling of nested nodes.
- Supporting Calculations: Implement getNodeSize for accurate positional sizing, accounting for text lengths and structural overhead (e.g., +2 for non-leaf nodes’ entry/exit).
- Implementation Benefits: Achieves dynamic range adjustments (e.g., clamping from/to, zero-based offsets) to avoid misalignment, all in a portable, Fibery-compatible way.
- Why Effective: Bypasses external calls (avoids latency/auth issues); maintains positional accuracy via recursive, overlap-aware walking and range adjustments.
- Trade-offs: Increases code complexity but ensures control and Fibery compatibility.
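To make the sizing rule concrete, here is a minimal sketch (separate from the full script below) of ProseMirror-style node sizing: a text node counts its characters, any other leaf counts as 1, and each non-leaf node adds 2 for its entry and exit tokens.

```javascript
// Minimal sketch of the sizing rule the script recreates: text nodes count
// characters; non-leaf nodes add 2 for their entry/exit tokens; other
// leaves count as 1.
function nodeSize(node) {
  if ('text' in node) return node.text.length;
  if (!node.content || !node.content.length) return 1;
  return 2 + node.content.reduce((sum, child) => sum + nodeSize(child), 0);
}

const doc = {
  type: 'doc',
  content: [{ type: 'paragraph', content: [{ type: 'text', text: 'Hello' }] }]
};

nodeSize(doc.content[0]); // paragraph: 2 + 5 = 7
nodeSize(doc);            // doc: 2 + 7 = 9
```

These sizes are what make position-based range clamping (the from/to adjustments mentioned above) line up with ProseMirror’s own coordinates.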
Brief Overview of Script Functionality
The script fetches all inline comments and creates Message entities from them, preserving their relational hierarchy (replies nested under parents) and capturing both the cited text and the comment text.
- Extracts Inline Comments: Retrieves inline comments from a document in a specified Fibery database node.
- Creates/Updates Thread: Manages a thread entity linked to the node for organizing comments.
- Processes Comments: Creates or updates comment entities, linking them to threads and handling parent-child relationships.
- Formats Thread Content: Builds a formatted thread body with comments and cited text, preserving structure.
- Handles Document Content: Updates document fields (Body, CitedText, CommentText) with structured JSON content.
Detailed Functionality
1. Extracts Inline Comments
- Description: The script retrieves inline comments from a node’s `Body` field in the `SpaceName/Block` database, stored as ProseMirror JSON. It uses `extractCitedText` to pull specific text segments based on comment ranges (`from` and `to` positions).
- Process: Iterates through the document’s content, extracting text within specified ranges and handling nested nodes. Supports block separators and leaf nodes.
- Key Functions:
  - `getNodeSize`: Calculates the size of document nodes for range-based extraction.
  - `extractCitedText`: Extracts text between given positions, handling text and non-text nodes with optional separators.
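As a worked example of how these positions behave, here is a simplified, text-only sketch (not the recursive extractor from the script): in a document whose first node is a paragraph, position 0 sits before the paragraph and position 1 before its first character, so a range of `from = 1`, `to = 6` over “Hello world” cites “Hello”.

```javascript
// Simplified sketch for a single text-only paragraph: subtract 1 from the
// document positions to skip the paragraph's opening token, then slice.
// The full script generalizes this to nested nodes via recursion.
function citedTextFlat(doc, from, to) {
  const text = doc.content[0].content.map(n => n.text).join('');
  return text.slice(from - 1, to - 1);
}

const sample = {
  type: 'doc',
  content: [{ type: 'paragraph', content: [{ type: 'text', text: 'Hello world' }] }]
};

citedTextFlat(sample, 1, 6); // "Hello"
```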
2. Creates/Updates Thread
- Description: Ensures a thread exists in the `SpaceName/Thread` database for each node, linked via the `Threads` field. If no thread exists, it creates one; otherwise, it uses the existing thread.
- Process: Checks for existing threads in the node’s `Threads` field. Creates a new thread with a default name (“Inline Comments”) if none exists and links it to the node.
- Key Functions:
  - `fibery.createEntity`: Creates a new thread entity.
  - `fibery.addCollectionItem`: Links the thread to the node.
3. Processes Comments
- Description: Creates or updates comment entities in the `SpaceName/Message` database, handling both top-level and nested (reply) comments. Links comments to threads and parent comments as needed.
- Process: Maps inline comments to entities, checks for existing comments using `Inline ID`, and updates or creates new ones. Links child comments to parents.
- Key Functions:
  - `createOrUpdateCommentEntity`: Manages comment creation/update, including setting fields like `Thread`, `Parent`, `Author`, and `CreatedAt`.
  - `linkChildCommentToParent`: Establishes parent-child relationships between comments.
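The top-level/reply split can be sketched as follows. One assumption, taken from how the script reads `bodyJSON.comments`: a reply’s `thread` property holds the id of its parent comment.

```javascript
// Group inline comments into top-level comments and a parent -> replies map,
// mirroring the grouping pass in the script's run() function below.
function groupComments(inlineComments) {
  const childrenMap = {};
  const topLevel = [];
  for (const c of inlineComments) {
    if (c.thread) {
      (childrenMap[c.thread] = childrenMap[c.thread] || []).push(c);
    } else {
      topLevel.push(c);
    }
  }
  return { childrenMap, topLevel };
}

const { childrenMap, topLevel } = groupComments([
  { id: 'a' },              // top-level comment
  { id: 'b', thread: 'a' }  // reply to 'a'
]);
// topLevel -> [{ id: 'a' }]; childrenMap -> { a: [{ id: 'b', thread: 'a' }] }
```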
4. Formats Thread Content
- Description: Builds a formatted thread body in ProseMirror JSON, including author details, timestamps, cited text (for top-level comments), and replies (indented as blockquotes).
- Process: Iterates through top-level comments, formats each with author, date, and cited text (if applicable), and appends replies recursively. Adds horizontal rules between top-level comments.
- Key Functions:
  - `buildCommentThread`: Constructs ProseMirror JSON for comments, handling indentation and formatting.
  - `formatDateYYYYMMDD_HHMM`: Formats dates for consistent display.
  - `getTextFromProseMirrorDoc`: Extracts plain text from ProseMirror documents.
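For reference, one formatted top-level comment comes out roughly like the ProseMirror fragment below (shape taken from `buildCommentThread` in the script; the author name, date, and text values are illustrative placeholders):

```javascript
// Approximate shape of a single formatted top-level comment: bold author,
// timestamp, italic highlighted cited text, then the comment body.
const formattedComment = {
  type: 'paragraph',
  attrs: { guid: '' },
  content: [
    { type: 'text', marks: [{ type: 'strong' }], text: 'Jane Doe' },   // author (placeholder)
    { type: 'text', text: ' (2024.01.15-09:30):' },                    // formatted date (placeholder)
    { type: 'hard_break' },
    { type: 'text', marks: [{ type: 'em' }], text: 'Cited: ' },
    {
      type: 'text',
      marks: [
        { type: 'em' },
        { type: 'highlight', attrs: { guid: '', color: 'yellow' } }
      ],
      text: 'the quoted passage'                                       // cited text (placeholder)
    },
    { type: 'hard_break' },
    { type: 'text', text: 'The comment body itself.' }                 // body (placeholder)
  ]
};
// Replies are wrapped in a blockquote instead of a bare paragraph.
```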
5. Handles Document Content
- Description: Updates the `Body`, `CitedText`, and `CommentText` fields of comment entities and the `Body` field of the thread with structured ProseMirror JSON.
- Process: Sets document content using `fibery.setDocumentContent`, ensuring proper formatting for display (e.g., highlighted cited text, preserved comment formatting).
- Key Functions:
  - `fibery.setDocumentContent`: Updates document fields with JSON content.
  - `getTextFromProseMirrorDoc`: Extracts text for the `CommentText` field.
Requirements/Expectations for Script Operation
- Fibery Environment:
  - The script requires a Fibery environment with the `fibery` service (`context.getService('fibery')`).
  - Databases must exist (you can rename the constants at the top of the script to match your databases): `SpaceName/Block`, `SpaceName/Thread`, `SpaceName/Message`, and `fibery/user`.
  - Fields must be configured as specified in `DB`, `NODE_FIELDS`, `THREAD_FIELDS`, and `COMMENT_FIELDS` (e.g., `Body`, `Threads`, `Messages`, `CitedText`).
- Node Structure:
  - Nodes in `SpaceName/Block` must have a `Body` field with ProseMirror JSON content, including a `comments` array with inline comments (containing `id`, `from`, `to`, `author`, `date`, and `body`).
  - Comments must include valid `from` and `to` positions for cited text extraction.
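A minimal illustration of the expected shape (all ids and values here are made up; the exact payload Fibery stores may carry additional metadata):

```javascript
// Minimal illustration of the Body document JSON the script consumes:
// bodyJSON.doc holds the ProseMirror tree, bodyJSON.comments holds the
// inline comments with their position ranges.
const bodyJSON = {
  doc: {
    type: 'doc',
    content: [
      { type: 'paragraph', content: [{ type: 'text', text: 'Hello world' }] }
    ]
  },
  comments: [
    {
      id: 'c1',                    // unique inline comment id
      from: 1,                     // start position of the cited range
      to: 6,                       // end position of the cited range
      author: { id: 'user-uuid' }, // Fibery user reference (placeholder id)
      date: '2024-01-15T09:30:00.000Z',
      body: {
        doc: {
          type: 'doc',
          content: [
            { type: 'paragraph', content: [{ type: 'text', text: 'Looks good!' }] }
          ]
        }
      }
      // thread: 'c0'  // present only on replies, pointing at the parent comment id
    }
  ]
};
```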
- Dependencies:
  - No external libraries are required beyond the Fibery API client.
  - The script assumes a stable Fibery API for operations like `getEntityById`, `createEntity`, `setDocumentContent`, and `addCollectionItem`.
- Error Handling:
  - The script logs errors for failed API calls (e.g., entity fetch, document updates) and continues processing other entities/comments to ensure partial success.
Notes
- The script assumes comments have unique `Inline ID` values for matching existing entities.
- Cited text is only processed for top-level comments; replies are formatted as blockquotes for visual hierarchy.
- Debug logging is enabled by default in `extractCitedText` for troubleshooting range extraction.
const fibery = context.getService('fibery');
/******************************
* VERSION 10.0 *
* CONFIGURATION *
******************************/
// Database Names
const DB = {
NODE: 'SpaceName/Block',
THREAD: 'SpaceName/Thread',
COMMENT: 'SpaceName/Message',
USER: 'fibery/user'
};
// Field Names
const NODE_FIELDS = {
BODY: 'Body',
THREADS: 'Threads',
NAME: 'Name'
};
const THREAD_FIELDS = {
NAME: 'Name',
DOC_REFERENCE: 'Block',
BODY: 'Body',
MESSAGES: 'Messages' // To-many from Thread to Message
};
const COMMENT_FIELDS = {
PARENT: 'ParentMessage',
AUTHOR: 'Author',
CREATED_AT: 'CreatedAt',
SUBCOMMENTS: 'Replies',
THREAD: 'Thread',
BODY: 'Body',
CITED_TEXT: 'CitedText',
COMMENT_TEXT: 'CommentText',
NAME: 'Name',
INLINE_ID: 'Inline ID' // For uniqueness
};
// Behavior
const THREAD_NAME = "Inline Comments";
/******************************
* EXTRACTION HELPERS *
******************************/
function getNodeSize(node) {
if ('text' in node) return node.text.length;
if (!('content' in node) || !node.content.length) return 1; // Leaf non-text
let size = 2; // Entry + exit for non-leaf
for (let child of node.content) {
size += getNodeSize(child);
}
return size;
}
function extractCitedText(doc, from, to, blockSeparator = '\n', leafText = null, log = true) { // Log true for debug
let text = "";
let separated = true;
function nodesBetween(fragment, from_, to_, f, nodeStart = 0, parent = null) {
let pos = 0;
for (let i = 0; i < fragment.length; i++) {
let child = fragment[i];
let end = pos + getNodeSize(child);
if (log) console.log(`Checking node ${child.type} at content-pos ${pos}-${end} for overlap with ${from_}-${to_}`);
if (end > from_ && f(child, nodeStart + pos, parent, i) !== false && 'content' in child && child.content.length > 0) {
let start = pos + 1;
if (log) console.log(`Recursing into ${child.type} content, adjusted range ${Math.max(0, from_ - start)}-${Math.min(getNodeSize(child) - 2, to_ - start)}`);
nodesBetween(child.content, Math.max(0, from_ - start), Math.min(getNodeSize(child) - 2, to_ - start), f, nodeStart + start, child);
}
pos = end;
}
}
const docSize = getNodeSize(doc);
const effectiveFrom = Math.max(from, 1);
const effectiveTo = Math.min(to, docSize);
if (log) console.log(`Extracting from doc size ${docSize}, effective range ${effectiveFrom}-${effectiveTo}`);
if (effectiveFrom >= effectiveTo) return '';
let from_ = effectiveFrom - 1;
let to_ = effectiveTo - 1;
nodesBetween(doc.content, from_, to_, (node, pos, parent, index) => {
if (log) console.log(`Processing node ${node.type} at doc-pos ${pos + 1}-${pos + 1 + getNodeSize(node) - 2}, parent: ${parent ? parent.type : 'doc'}, index: ${index}`);
if ('text' in node) {
let sliceStart = Math.max(0, from_ - pos);
let sliceEnd = Math.min(node.text.length, to_ - pos + 1); // +1 to include end if exclusive
if (sliceStart < sliceEnd) {
let added = node.text.slice(sliceStart, sliceEnd);
text += added;
separated = !blockSeparator;
if (log) console.log(`Added text slice(${sliceStart},${sliceEnd}) of len ${node.text.length} from full text "${node.text}": "${added}". Cumulative text now: "${text}"`);
}
} else if (!('content' in node && node.content.length > 0) && leafText) { // Leaf
let lt = typeof leafText === 'function' ? leafText(node) : leafText;
if (lt) {
text += lt;
separated = !blockSeparator;
if (log) console.log(`Added leaf text: "${lt}" (${node.type}). Cumulative text now: "${text}"`);
}
} else if (!separated && ['paragraph', 'blockquote', 'heading', 'code_block', 'list_item', 'bullet_list', 'ordered_list', 'horizontal_rule', 'fibery-section', 'table', 'table_row', 'table_cell', 'table_header'].includes(node.type)) {
let sep = typeof blockSeparator === 'function' ? blockSeparator(parent, index) : blockSeparator;
text += sep;
separated = true;
if (log) console.log(`Added separator: "${sep}". Cumulative text now: "${text}"`);
}
}, 0);
return text.trim();
}
/******************************
* CORE FUNCTIONALITY BELOW *
******************************/
function formatDateYYYYMMDD_HHMM(dateObj) {
const year = dateObj.getFullYear();
const month = String(dateObj.getMonth() + 1).padStart(2, '0');
const day = String(dateObj.getDate()).padStart(2, '0');
const hours = String(dateObj.getHours()).padStart(2, '0');
const minutes = String(dateObj.getMinutes()).padStart(2, '0');
return `${year}.${month}.${day}-${hours}:${minutes}`;
}
function getTextFromProseMirrorDoc(docNode) {
let text = '';
if (!docNode) return text;
if (Array.isArray(docNode.content)) {
for (const child of docNode.content) {
text += getTextFromProseMirrorDoc(child);
}
}
if (docNode.text) {
text += docNode.text;
}
return text;
}
function buildCommentThread(comment, citedText, indentLevel, childrenMap, userMap) {
const authorId = comment.author && comment.author.id ? comment.author.id : null;
const authorName = authorId && userMap[authorId] ? userMap[authorId][NODE_FIELDS.NAME] : 'Unknown Author';
const commentDate = new Date(comment.date);
const formattedDate = formatDateYYYYMMDD_HHMM(commentDate);
const commentText = getTextFromProseMirrorDoc(comment.body.doc);
if (!commentText.trim()) {
console.log(`Skipping comment with empty text at ${formattedDate} by ${authorName}`);
return [];
}
// Build ProseMirror JSON for this comment
const paragraphContent = [
{
type: "text",
marks: [{ type: "strong" }],
text: authorName
},
{
type: "text",
text: ` (${formattedDate}):`
},
{
type: "hard_break"
}
];
// Add cited text only for top-level comments (indentLevel === 0)
if (indentLevel === 0 && citedText && citedText.trim()) {
paragraphContent.push(
{
type: "text",
marks: [{ type: "em" }],
text: "Cited: "
},
{
type: "text",
marks: [
{ type: "em" },
{
type: "highlight",
attrs: {
guid: "",
color: "yellow"
}
}
],
text: citedText
},
{
type: "hard_break"
}
);
}
// Add the original comment content, preserving formatting
if (comment.body.doc && Array.isArray(comment.body.doc.content)) {
comment.body.doc.content.forEach(node => {
if (node.type === 'paragraph' && Array.isArray(node.content)) {
paragraphContent.push(...node.content);
} else if (node.type === 'hard_break') {
paragraphContent.push({ type: "hard_break" });
}
// Add support for other node types if needed
});
}
const commentNode = indentLevel > 0
? {
type: "blockquote",
content: [
{
type: "paragraph",
attrs: { guid: "" },
content: paragraphContent
}
]
}
: {
type: "paragraph",
attrs: { guid: "" },
content: paragraphContent
};
// Process replies (replies have no cited text)
const replies = childrenMap[comment.id] || [];
const replyNodes = [];
for (const reply of replies) {
const replyContent = buildCommentThread(reply, null, indentLevel + 1, childrenMap, userMap);
replyNodes.push(...replyContent);
}
return [commentNode, ...replyNodes];
}
async function linkChildCommentToParent(parentId, childId) {
try {
await fibery.addCollectionItem(DB.COMMENT, parentId, COMMENT_FIELDS.SUBCOMMENTS, childId);
console.log(`Linked child comment ${childId} to parent ${parentId}`);
} catch (err) {
console.error(`Failed to link child comment ${childId} to parent ${parentId}: ${err.message}`);
}
}
async function createOrUpdateCommentEntity(comment, parentCommentId, threadId, bodyJSON, childrenMap, userMap, existingMap) {
const isTopLevel = parentCommentId === null;
let citedText = '';
if (isTopLevel) {
citedText = extractCitedText(bodyJSON.doc, comment.from, comment.to);
}
const dateObj = new Date(comment.date);
const commentText = getTextFromProseMirrorDoc(comment.body.doc);
const commentName = commentText.trim() || "Untitled Comment";
const userId = comment.author && comment.author.id ? comment.author.id : null;
const authorRef = userId && userMap[userId] ? userMap[userId].id : null;
const newCommentData = {
[COMMENT_FIELDS.THREAD]: threadId,
[COMMENT_FIELDS.PARENT]: parentCommentId || null,
[COMMENT_FIELDS.AUTHOR]: authorRef || null,
[COMMENT_FIELDS.CREATED_AT]: dateObj,
[COMMENT_FIELDS.NAME]: commentName,
[COMMENT_FIELDS.INLINE_ID]: comment.id
};
let newCommentId;
const existingId = existingMap[comment.id];
if (existingId) {
try {
await fibery.updateEntity(DB.COMMENT, existingId, newCommentData);
console.log(`Updated Comment ${existingId} for thread ${threadId}`);
newCommentId = existingId;
} catch (err) {
console.error(`Comment update failed: ${err.message}`);
return null;
}
} else {
try {
const newComment = await fibery.createEntity(DB.COMMENT, newCommentData);
console.log(`Created Comment ${newComment.id} for thread ${threadId}`);
newCommentId = newComment.id;
await fibery.addCollectionItem(DB.THREAD, threadId, THREAD_FIELDS.MESSAGES, newCommentId);
} catch (err) {
console.error(`Comment creation failed: ${err.message}`);
return null;
}
}
// Fetch the entity to get secrets
const fullComment = await fibery.getEntityById(DB.COMMENT, newCommentId, [COMMENT_FIELDS.BODY, COMMENT_FIELDS.CITED_TEXT, COMMENT_FIELDS.COMMENT_TEXT]);
// Set Body field
if (fullComment[COMMENT_FIELDS.BODY] && fullComment[COMMENT_FIELDS.BODY].Secret) {
const commentBodyDoc = {
type: "doc",
content: [
{
type: "paragraph",
attrs: { guid: "" },
content: []
}
]
};
if (isTopLevel && citedText.trim()) {
commentBodyDoc.content[0].content.push(
{
type: "text",
marks: [{ type: "em" }],
text: "Cited: "
},
{
type: "text",
marks: [
{ type: "em" },
{
type: "highlight",
attrs: {
guid: "",
color: "yellow"
}
}
],
text: citedText
},
{
type: "hard_break"
}
);
}
// Append original comment content
if (comment.body.doc && Array.isArray(comment.body.doc.content)) {
commentBodyDoc.content[0].content.push(...comment.body.doc.content.flatMap(node => {
if (node.type === 'paragraph' && Array.isArray(node.content)) {
return node.content;
} else if (node.type === 'hard_break') {
return [{ type: "hard_break" }];
}
return [];
}));
}
const commentBodyContent = {
doc: commentBodyDoc,
comments: []
};
try {
await fibery.setDocumentContent(
fullComment[COMMENT_FIELDS.BODY].Secret,
JSON.stringify(commentBodyContent),
'json'
);
console.log(`Set Body for Comment ${newCommentId}`);
} catch (err) {
console.error(`Body update failed for comment ${newCommentId}: ${err.message}`);
}
}
// Set CitedText field (only for top-level comments)
if (isTopLevel && fullComment[COMMENT_FIELDS.CITED_TEXT] && fullComment[COMMENT_FIELDS.CITED_TEXT].Secret && citedText.trim()) {
const citedTextDoc = {
type: "doc",
content: [
{
type: "paragraph",
attrs: { guid: "" },
content: [
{
type: "text",
marks: [
{ type: "em" },
{
type: "highlight",
attrs: {
guid: "",
color: "yellow"
}
}
],
text: citedText
}
]
}
]
};
const citedTextContent = {
doc: citedTextDoc,
comments: []
};
try {
await fibery.setDocumentContent(
fullComment[COMMENT_FIELDS.CITED_TEXT].Secret,
JSON.stringify(citedTextContent),
'json'
);
console.log(`Set CitedText for Comment ${newCommentId}`);
} catch (err) {
console.error(`CitedText update failed for comment ${newCommentId}: ${err.message}`);
}
}
// Set CommentText field
if (fullComment[COMMENT_FIELDS.COMMENT_TEXT] && fullComment[COMMENT_FIELDS.COMMENT_TEXT].Secret) {
const commentTextDoc = {
type: "doc",
content: comment.body.doc.content || [
{
type: "paragraph",
attrs: { guid: "" },
content: [
{
type: "text",
text: commentText
}
]
}
]
};
const commentTextContent = {
doc: commentTextDoc,
comments: []
};
try {
await fibery.setDocumentContent(
fullComment[COMMENT_FIELDS.COMMENT_TEXT].Secret,
JSON.stringify(commentTextContent),
'json'
);
console.log(`Set CommentText for Comment ${newCommentId}`);
} catch (err) {
console.error(`CommentText update failed for comment ${newCommentId}: ${err.message}`);
}
}
if (parentCommentId) {
await linkChildCommentToParent(parentCommentId, newCommentId);
}
const childComments = childrenMap[comment.id] || [];
for (const childComment of childComments) {
await createOrUpdateCommentEntity(childComment, newCommentId, threadId, bodyJSON, childrenMap, userMap, existingMap);
}
return newCommentId;
}
async function run() {
const currentEntities = args.currentEntities || [];
if (currentEntities.length === 0) {
console.log('No selection');
return;
}
for (const entity of currentEntities) {
let nodeEntity;
try {
nodeEntity = await fibery.getEntityById(DB.NODE, entity.id, [NODE_FIELDS.BODY, NODE_FIELDS.THREADS, NODE_FIELDS.NAME]);
} catch (err) {
console.error(`Failed to fetch node ${entity.id}: ${err.message}`);
continue;
}
if (!nodeEntity || !nodeEntity[NODE_FIELDS.BODY]) {
console.log(`Skipping ${entity.id} - missing Body field`);
continue;
}
let bodyJSON;
try {
bodyJSON = await fibery.getDocumentContent(nodeEntity[NODE_FIELDS.BODY].Secret, 'json');
} catch (err) {
console.error(`Body document read failed for node ${entity.id}: ${err.message}`);
continue;
}
const inlineComments = bodyJSON.comments || [];
if (inlineComments.length === 0) {
console.log(`No comments in node ${entity.id}`);
continue;
}
const userIds = [...new Set(inlineComments.filter(c => c.author && c.author.id).map(c => c.author.id))];
let userMap = {};
if (userIds.length > 0) {
try {
const users = await fibery.getEntitiesByIds(DB.USER, userIds, [NODE_FIELDS.NAME]);
users.forEach(u => userMap[u.id] = u);
} catch (err) {
console.error(`User fetch error for node ${entity.id}: ${err.message}`);
}
}
// Find or create Thread
let threadId;
const existingThreads = nodeEntity[NODE_FIELDS.THREADS] || [];
if (existingThreads.length > 0) {
threadId = existingThreads[0].id; // Use first as default
console.log(`Using existing Thread ${threadId} for node ${entity.id}`);
} else {
const threadData = {
[THREAD_FIELDS.DOC_REFERENCE]: entity.id,
[THREAD_FIELDS.NAME]: THREAD_NAME
};
const newThread = await fibery.createEntity(DB.THREAD, threadData);
threadId = newThread.id;
await fibery.addCollectionItem(DB.NODE, entity.id, NODE_FIELDS.THREADS, threadId);
console.log(`Created Thread ${threadId} for node ${entity.id}`);
}
// Get existing Messages in the Thread for uniqueness check
const fullThread = await fibery.getEntityById(DB.THREAD, threadId, [THREAD_FIELDS.MESSAGES]);
const existingMessageIds = fullThread[THREAD_FIELDS.MESSAGES] ? fullThread[THREAD_FIELDS.MESSAGES].map(m => m.id) : [];
const existingMessages = existingMessageIds.length > 0 ? await fibery.getEntitiesByIds(DB.COMMENT, existingMessageIds, [COMMENT_FIELDS.INLINE_ID]) : [];
const existingMap = {};
existingMessages.forEach(em => {
if (em[COMMENT_FIELDS.INLINE_ID]) {
existingMap[em[COMMENT_FIELDS.INLINE_ID]] = em.id;
}
});
const childrenMap = {};
const topLevelComments = [];
inlineComments.forEach(comment => {
const parentId = comment.thread;
if (parentId) {
childrenMap[parentId] = childrenMap[parentId] || [];
childrenMap[parentId].push(comment);
} else {
topLevelComments.push(comment);
}
});
const messageIds = {};
for (const topLevelComment of topLevelComments) {
const messageId = await createOrUpdateCommentEntity(topLevelComment, null, threadId, bodyJSON, childrenMap, userMap, existingMap);
if (messageId) messageIds[topLevelComment.id] = messageId;
}
// Build and set Thread Body
const threadContent = [];
for (let i = 0; i < topLevelComments.length; i++) {
if (i > 0) {
threadContent.push({ type: "horizontal_rule" });
}
const citedText = extractCitedText(bodyJSON.doc, topLevelComments[i].from, topLevelComments[i].to);
const commentNodes = buildCommentThread(topLevelComments[i], citedText, 0, childrenMap, userMap);
threadContent.push(...commentNodes);
}
if (threadContent.length > 0 && fullThread[THREAD_FIELDS.BODY] && fullThread[THREAD_FIELDS.BODY].Secret) {
const threadDoc = {
doc: {
type: "doc",
content: threadContent
},
comments: []
};
try {
await fibery.setDocumentContent(fullThread[THREAD_FIELDS.BODY].Secret, JSON.stringify(threadDoc), 'json');
console.log(`Set Body for Thread ${threadId}`);
} catch (err) {
console.error(`Thread Body update failed: ${err.message}`);
}
}
console.log('Aggregation complete');
}
}
await run();