CSV Import with hurdles (Delimiter?)

Hi,

I finally found a way to export all my Obsidian notes to a CSV.

And immediately there are new hurdles:

As there are a lot of “;” characters in my normal text documents, these get interpreted as delimiters and therefore destroy all the data.
For now I used VS Code to replace all the “;” with something else and then change the alternative delimiter I had used (~) back to “;”, but it would be great to have an option to set the delimiter in Fibery directly… this is a horrible workflow, especially considering that some kind of automation might be involved in the future to bring new files over.

:slight_smile: thanks!

If you are getting a CSV file, then the contents of the cells should be wrapped in quotes if they contain symbols that could be interpreted as delimiters.

e.g.
Name, TextField
Hello, "This is a line, with commas; and it uses other punctuation!"

rather than
Name, TextField
Hello, This is a line, with commas; and it uses other punctuation!
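
For reference, the rule is roughly this (a minimal sketch; the helper name is just for illustration):

// Quote a field if it contains the delimiter, a double quote or a line break,
// and double any quotes already inside it (RFC 4180-style).
function quoteCsvField(field, delimiter = ',') {
    const needsQuoting = field.includes(delimiter) || field.includes('"') ||
        field.includes('\n') || field.includes('\r');
    return needsQuoting ? '"' + field.replace(/"/g, '""') + '"' : field;
}

// quoteCsvField('This is a line, with commas; and it uses other punctuation!')
// -> '"This is a line, with commas; and it uses other punctuation!"'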

Is that not happening when you export from Obsidian?

Thanks! With your tip I found an option in the settings of the CSV export plugin.

Now there is only one more thing: when exporting as CSV, the markdown syntax is gone and the line breaks are broken as well. The exporter lets me put [CR] instead of the line breaks. Is there any “code” that Fibery will interpret as a line break? (I can change the [CR] into anything with VS Code, of course.)
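
(And that replacement could be scripted too; a rough sketch with made-up file names, assuming the multi-line cells end up wrapped in quotes so the restored line breaks stay inside a single field:)

const fs = require('fs');

// Swap the exporter's [CR] placeholder back to real line breaks before importing.
// 'obsidian-export.csv' / 'obsidian-import.csv' are made-up file names for the sketch.
const csv = fs.readFileSync('obsidian-export.csv', 'utf8');
fs.writeFileSync('obsidian-import.csv', csv.split('[CR]').join('\n'));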

That’s disappointing.
I believe it should be possible to retain markdown within CSV files, so I don’t know why Obsidian would not do that.

Name, TextField
Hello, "# Heading. This is an *important* piece of info"

Unfortunately, markdown and CSV are not quite as standardised as people would like, so portability often tends to be less than 100% reliable.

I don’t care so much about the formatting… but the lost line breaks are a bit disturbing…

Is there any way of maintaining them (\n, for example, does not work either), or of changing them in Fibery later on?

(Obsidian itself does not support export to CSV at all… it’s just a plugin someone programmed, and the whole process is definitely not made to export entire file contents, as it gets really slow when there are hundreds of files in a folder to export.)

WORKAROUND question:

Another way of doing this would be to have Fibery get the markdown file by filename. So I just import the CSV table with the table data and then have the markdown, from a markdown file that sits on a server, imported into the rich text field… would that be possible? (In a “semi” automatic way?)

thanks!

(Why so complicated: I make extensive use of YAML frontmatter and inline data fields in Obsidian that I want to translate into cells in Fibery, and I don’t see a way of having “both” the markdown AND the cells… either I can import the MD files and have them as files in the space, OR I have the cells without the right markdown… and because I don’t have extensive programming knowledge to get this through an API or something, I am completely lost… but it’s more than a thousand files, so doing it manually is not an option…)

We have markdown import as well as CSV import, so it’s beginning to sound like we might be reaching a satisfactory workaround :crossed_fingers:

Is there a way to link by name or import by name? Curious what you will come up with :slight_smile:

The markdown import is great, btw!

But I don’t want to have the files all separate. I want their data in a DB :wink:

thanks!

I don’t know what this means, sorry

Oh sorry for being unclear.

I mean: would it be possible to import the markdown file by the name stored in a database? See, I get the CSV with all the relevant data and have the name of the markdown file as the name of the database entry. So if there were some kind of automation that could either import the corresponding markdown, or take the corresponding markdown file that is already imported into a space and put it into a rich text field, that would be golden.

So, for example: “database entry name = file1; look for markdown file1 in the space ‘imported markdownfiles’, copy its content, put it into the content field of the database entry whose name is file1”. Run that for all entries.

Sadly, ChatGPT’s programming skills are not good enough :slight_smile:

Unfortunately, this is not possible today. Although this is in our backlog, there is no ETA for release.
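
One way to approximate it outside Fibery is to merge each note’s markdown into the CSV before importing, so the rich text arrives together with the table data. A rough sketch (file and column names are made up, and it assumes the CSV import can map a column onto a rich text field):

const fs = require('fs');
const path = require('path');

// Append each note's markdown as a new "Content" column before importing.
// Assumptions for the sketch: the first CSV column holds the note name,
// the existing rows contain no embedded line breaks or commas in that column,
// and the .md files sit in ./exported-markdown under matching names.
const vaultDir = 'exported-markdown';
const rows = fs.readFileSync('table-data.csv', 'utf8').trim().split('\n');

// RFC 4180-style escaping: double any quotes and wrap the whole field in quotes.
const quote = s => '"' + s.replace(/"/g, '""') + '"';

const header = rows[0] + ',Content';
const merged = rows.slice(1).map(row => {
    const name = row.split(',')[0];
    const mdPath = path.join(vaultDir, name + '.md');
    const md = fs.existsSync(mdPath) ? fs.readFileSync(mdPath, 'utf8') : '';
    return row + ',' + quote(md);
});

fs.writeFileSync('table-data-with-content.csv', [header, ...merged].join('\n'));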

I posted it elsewhere, but if you are interested in a workflow to get Obsidian files into Fibery (without the CSV), you can look at n8n:

The JavaScript that picks apart the Markdown could probably be adapted to run from a button inside Fibery too, I guess? It looks like this:

const mdData = $json.data;

// Split the markdown content into sections
const sections = mdData.split(/(?:\n---\n|> \[!Meta\]\n)/);

// Object to collect the extracted key/value pairs
const extractedData = {};

// Check if a frontmatter section exists
if (sections.length > 1) {
    // Extract data from frontmatter section
    const frontmatterLines = sections[0].split('\n');
    frontmatterLines.forEach(line => {
        const match = line.match(/^\s*([^:]+):\s*(.*)$/);
        if (match) {
            const key = match[1].trim();
            const value = match[2].trim();
            extractedData[key] = value;
        }
    });
}

// Extract data from body section
const bodyLines = sections[sections.length > 2 ? 2 : 1].split('\n');
bodyLines.forEach(line => {
    const match = line.match(/^\s*([^:]+)::\s*(.*)$/);
    if (match) {
        const key = match[1].trim();
        const value = match[2].trim();
        extractedData[key] = value;
    }
});

// Extract data from meta section
if (sections.length > 2) {
    const metaLines = sections[sections.length > 3 ? 3 : 2].split('\n');
    metaLines.forEach(line => {
        const match = line.match(/^\s*([^:]+):\s*(.*)$/);
        if (match) {
            const key = match[1].trim();
            const value = match[2].trim();
            extractedData[key] = value;
        }
    });
}


return extractedData;

The script is a bit more complex than it would need to be if the data were kept consistent throughout the vault, but mine is a mix of frontmatter YAML and “postmatter” callouts.
