Amazing, as usual. Looking forward to testing
So glad to see it happening!
It wasn't long ago that I posted a bunch of links to possibilities, and now they're here. It's like magic.
It would be cool to see your use cases and get any other feedback!
Awesome!
Looking forward to AI-supported Vizydrop Report generation and Knowledge Assistant!
Yes, all of this stuff is awesome and really great to see so relatively quickly (AI space creation in particular; I will have to give this a workout ASAP!). But I am especially looking forward to AI-driven visualization since, try as I might, I often can't manage anything more than fairly simple visualizations, despite the power and relative ease of use (e.g. drag-and-drop, etc.) of Vizydrop.
I am hopeful that, once you have AI-driven assistance for these normally complex areas of friction (Space setup and visualization), Fibery will be much easier for people to get started with and use to its fullest, and adoption will soar. Nobody else seems to be doing things quite this cool with AI for use cases beyond simple text manipulation, IMO!
The recent AI focus has been timely for us, and I've been working on a number of applications of Fibery to assist with the messy front end of our content production process.
I had been doing some experimentation with OpenAI's API directly to demonstrate some discrete examples before trying out what has been integrated into Fibery. I think what has been implemented is fantastic, but I wanted to highlight some areas where Fibery's structure around the API is somewhat limiting.
- Model configuration. Fibery really should allow specifying the model manually, because OpenAI is constantly iterating on its models. Right now we can't specify the model used for AI actions in automations, and we can no longer do so in rich text either, though we could before. This limitation also blocks us from using fine-tuned models. We were using fine-tuned models to get more predictable results in some cases, but found that using the new ChatGPT (turbo) API with examples could mitigate the need for them. However, that isn't possible either…
- Full chat API support. I assume you technically support this API behind the scenes, but in my experience with more nuanced transformations, it is sometimes important to provide a couple of prompt/completion pairs before the final prompt, so there is a clearer set of examples for how you want the completion formatted (the system, user, and assistant roles can be used for this). I've done some extensive testing and found this approach to be much, much more effective than trying to pack everything into a single prompt. I typically have a system-role message to provide context or the mission of the assistant, then a couple of user/assistant prompt/response pairs, then a final user-role message with the prompt I want completed (see the sketch after this list).
- Update field from rich text. At the moment I don't see an easy way to highlight something in rich text, choose an AI prompt, and have the output saved into a specific field. The capability as it stands seems so well suited to processing messy, unstructured data, but it is just missing the piece to quickly update a field. For example, you might have tons of text extracted from a webpage. You might want the user to highlight a subset of that text, tell it to summarize in some style, and then update the page summary field from the highlighted text.
- Action buttons associated with fields. With content production combined with AI/automation, you sometimes want to err on the side of caution early on and make the content team feel in control. So, instead of always generating text via the API when the input data changes, I often want a button next to a field. A field like "Title" might have an action button of "Update Title", or you might even want three different ways to update the title. This pattern is very unwieldy in Fibery right now and isn't as intuitive as it could be.
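As a rough illustration of the multi-pair structure described above, here is a minimal sketch. The variable names and placeholder strings are my own illustrative assumptions, not Fibery's implementation:

import openai

# Placeholder strings; in practice these would be your real context
# and example texts (names here are illustrative).
context = "You are an assistant that rewrites product details as succinct summaries."
example_input_1 = "First example of raw input text..."
example_output_1 = "First example of the desired summary."
example_input_2 = "Second example of raw input text..."
example_output_2 = "Second example of the desired summary."
final_prompt = "The real input text to summarize..."

res = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.2,
    messages=[
        {"role": "system", "content": context},              # mission/context
        {"role": "user", "content": example_input_1},        # example prompt 1
        {"role": "assistant", "content": example_output_1},  # example completion 1
        {"role": "user", "content": example_input_2},        # example prompt 2
        {"role": "assistant", "content": example_output_2},  # example completion 2
        {"role": "user", "content": final_prompt},           # the prompt to complete
    ],
)
print(res["choices"][0]["message"]["content"])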
Thank you for sharing your approach, very interesting. I'm going through a prompting guide myself and have discovered that getting the prompt right is not as easy as it seems at first.
Here is a quick code snippet of how it looks in practice. I'm excluding the string variables, but you can see the basic structure. I forgot that I was using only a single prompt/completion pair in this case.
import openai

# context, example_input, example_output, and content are string
# variables defined elsewhere (excluded here for brevity).
res = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.2,  # low temperature for more predictable output
    messages=[
        {"role": "system", "content": context},           # assistant's mission/context
        {"role": "user", "content": example_input},       # example prompt
        {"role": "assistant", "content": example_output}, # example completion
        {"role": "user", "content": content},             # the prompt to complete
    ],
)
It will be relatively hard to expose these options to a user. The design will not be easy, and we don't want to overcomplicate things for now. But we have this in mind for the future.
This is awesome! The possibilities are endless! I'm currently designing prompts for content creation purposes. So far I've created a table with content ideas based on selected input, and I've created prompts for metaphors and analogies based on the selected input. Works great!
Feedback so far
- It would be nice if there were an easier way to edit existing prompts. My prompt for the content calendar is quite huge. As far as I know, I can only edit it if I run it first.
- Is it possible to let the user "visually know" that the AI is processing the (quite large) prompt? It takes a while. That is not a problem in itself, but as a user you don't really know whether it's running or not.
- The output of my prompt is a table with content ideas. The table has 2 columns (title and description) and 10 rows (each row is a content idea). It would be awesome if I could somehow select that table and create content entities for each row.
I get that it is somewhat complicated, but the example is the most important part. I would also argue that your existing support for using OpenAI in templates is far more complicated than what I'm proposing.
What I'm talking about is that when you save a prompt to use again, you could provide an example prompt and completion as part of it, which is passed through into the query as I outlined. Even a single prompt/completion example is far, far more useful than a long initial prompt. A single prompt that both explains what you want and provides the input is very ambiguous to the model.
I had a good bit of trouble getting outputs even remotely close to usable with your current capability. I know what I demonstrated is somewhat technical, but I think you could easily make it very intuitive by allowing the option to provide a single example input and output, rather than relying on each user to do that inconsistently within a single prompt.
@rothnic I do wonder if there are two different approaches: a) user education to write better prompts, or b) restricting users to a specific prompt structure. I find examples don't always work well, or are sometimes not needed if the question is a simple one where zero-shot suffices.
I feel that pushing users into a specific prompt structure will ultimately limit the flexibility of other users. WDYT?
Though maybe with a bit of magic (GitHub - ucinlp/autoprompt: AutoPrompt: Automatic Prompt Construction for Masked Language Models), both problems can be solved.
In my personal experience, I was able to get much more accurate responses from the LLM after changing the structure of the initial prompt only.
And I believe that LangChain works on the initial prompt only, doesn't it? I might be wrong there, as I haven't run traces with gpt-3.5; it could be doing something extra when it's using a chat model.
You are right; here is an example of the ReAct pattern with a single prompt that does magic with some Python help.
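For reference, a minimal sketch of the idea: a single system prompt teaches the model a Thought/Action/Observation format, and a short Python loop executes the actions. The calculate() tool and the prompt wording here are illustrative assumptions, not the original example verbatim:

import re
import openai

# Single prompt that teaches the ReAct format.
SYSTEM = (
    "Answer questions by reasoning step by step.\n"
    "Use this format:\n"
    "Thought: your reasoning\n"
    "Action: calculate: <python expression>\n"
    "You will then receive: Observation: <result>\n"
    "When done, reply: Answer: <final answer>"
)

def calculate(expression):
    # Hypothetical tool: evaluate a math expression.
    # (eval is unsafe outside a demo.)
    return str(eval(expression))

def react(question, max_turns=5):
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_turns):
        res = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                           messages=messages)
        reply = res["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        if "Answer:" in reply:
            return reply
        match = re.search(r"Action: calculate: (.+)", reply)
        if not match:
            break
        # Run the tool and feed the result back as an Observation.
        observation = calculate(match.group(1))
        messages.append({"role": "user",
                         "content": f"Observation: {observation}"})
    return reply

print(react("What is 17 * 23 + 4?"))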
Yes, so far you have to run the prompt to edit it. We are collecting feedback here and will improve it in the future; it is far from ideal now.
Hmm, there should be a loader while the prompt is processing. Maybe I missed the problem?
Can you show an example? I don't get the problem so far.
Thanks a lot! We will change the visual indicator. As for rows in a table, it is more challenging; we will think about this use case.
Great, thanks!
Another solution is also fine, of course. The goal is to create entities with Name + Description based on AI output.
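To illustrate what that could look like, here is a hedged sketch assuming the AI returns a plain markdown table; the table format and field names are assumptions, not Fibery's actual behavior:

import re

# Hypothetical AI output: a 2-column markdown table of content ideas.
ai_output = """
| Title | Description |
| --- | --- |
| Idea one | A short description |
| Idea two | Another description |
"""

def parse_ideas(table_text):
    """Parse a 2-column markdown table into name/description pairs."""
    entities = []
    for line in table_text.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the header row and the --- separator row.
        if len(cells) != 2 or cells[0] == "Title" or set(cells[0]) <= {"-", " "}:
            continue
        entities.append({"name": cells[0], "description": cells[1]})
    return entities

# Each dict could then become one content entity.
print(parse_ideas(ai_output))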
The particular use case for me is matching a specific writing style; previously I could only get good results for this using fine-tuned models.
Essentially, we are taking unstructured product details and transforming them into a succinct summary. This involves a combination of requirements on the output: what kinds of things are important, formatting, and, the more difficult part, what not to include. I discussed a few different approaches in the ChatGPT hacking Discord and referenced OpenAI's docs on prompting, and the approach I described is what produced the most predictable output.
If you are doing simple summarization, extraction of structured data, or transformation without strict output requirements, a single prompt works well. However, when you have a more specific output format or style, or things you don't want included (the opposite of extraction), I found examples to be the most straightforward and intuitive method, and it is the one suggested by OpenAI. It really comes down to cases where in the past you'd have needed to fine-tune a model.
Thanks for explaining the use case. I have played with that specific use case myself, and gpt-3.5 didn't work for me at the time.
Interesting to know that you have found a way.
2 posts were split to a new topic: Add {{input}} into "Ask anything" option only when some text is selected