AI & data breaches?

Hey guys, I have a contact who likes Fibery. They work with charities, and charities are very sensitive about donors’ personal/financial data leaking through AI use. What can we do so that AI doesn’t “steal” sensitive data? In the Fibery AI chat window, I think I could limit what the model sees with # context. What else?

The only safe solution is to disable AI

Or self-hosted AI?
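
For reference, this is roughly what a self-hosted setup looks like in practice: the model runs on your own hardware, so prompts (and any donor data in them) never leave your network. A minimal sketch, assuming Ollama is installed locally and a model has been pulled; the model name and prompt are just examples:

```python
# Query a self-hosted model via Ollama's local HTTP API.
# The prompt, including any sensitive data, stays on your own machine.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                                # example model, pulled locally
        "prompt": "Summarize this donor report: ...",     # data never leaves the network
        "stream": False,
    },
    timeout=120,
)
print(response.json()["response"])
```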

From ClickUp:

ClickUp AI is not trained on data from your Workspace. We’ve secured licensing with our partners to ensure they do not access your data for training purposes. We also have zero data retention agreements with all of the large language model (LLM) organizations we partner with. The agreements require our partners not to retain any data from your Workspace after your data is input and processed through the LLM. Additionally, we use in-context learning (ICL) to ensure that our models are not learning from data.

What do you think about this? How truthful is it?

I have no idea, to be honest, but in a nutshell it all comes down to trusting OpenAI and Anthropic. They do state that your data is not used for training, etc., and the same goes for Fibery. But it is your choice whether to trust them or not.
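
For what it’s worth, the “in-context learning” part of the ClickUp quote just means your data only appears inside the prompt of each individual request; the model’s weights are never updated from it. Whether the data is then retained is a separate question that depends on the provider’s retention agreement. A rough sketch of what that looks like, assuming the OpenAI Python SDK (the prompt contents are illustrative):

```python
# In-context learning: the sensitive data is passed only as part of this one
# request's context. The model is not fine-tuned on it; what happens to the
# data afterwards depends entirely on the provider's retention policy.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the context below."},
        {"role": "user", "content": "Context: <donor records go here>\n\nQuestion: total donations in Q3?"},
    ],
)
print(completion.choices[0].message.content)
```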

A local LLM is not an option so far.