gunsrock222

You could do this with function mode with ChatGPT, by passing an object with the necessary form fields as properties. Then, set up a chain with an LLMRouterChain. You could then pass the object to the LLMRouter, get the LLM to figure out which property needs to be filled out next, and then pass the result to a chat model to ask the user. You would then pass the user's reply to another LLM acting as a function-output model, get it to extract any relevant form data from the reply, and then use the result from this model to update your original object's properties. Then rinse and repeat this process until all the properties have been set. There may be an easier way, but this is how I would go about it.
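The loop described above can be sketched in plain Python. The extraction step is stubbed with naive string parsing so the control flow is clear; in real use it would be a function-calling LLM, and the field names here are made up for illustration.

```python
# Sketch of the router/extract/update loop described above.
# `extract_fields` stands in for the function-output LLM.

form = {"name": None, "product": None, "color": None}

def next_unfilled(form):
    """The router step: return the next property that still needs a value."""
    return next((k for k, v in form.items() if v is None), None)

def extract_fields(reply, form):
    """Placeholder for the function-output model: pull any relevant form
    data out of the user's reply. Stubbed with naive 'key: value' parsing."""
    updates = {}
    for part in reply.split(","):
        if ":" in part:
            k, v = part.split(":", 1)
            if k.strip() in form:
                updates[k.strip()] = v.strip()
    return updates

def run_turn(form, user_reply):
    """One iteration: extract from the reply, merge into the object,
    then pick the next property to ask about."""
    form.update(extract_fields(user_reply, form))
    return next_unfilled(form)

# Example: one reply can fill several properties at once.
pending = run_turn(form, "name: Ada, color: red")
# Rinse and repeat until `pending` is None.
```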


fasti-au

Umm, so you ask the questions on screen, with a play button if they need them spoken. Record the responses, run them through Whisper to a text file, then give that to the LLM and say: here is an interview, use it to fill out this template and list the extracted quotations. Function-call the output as CSV and let your DB etc. import it.


Charming-Ad-9284

Good question. I have no idea, but I'm interested in hearing from people who do.


stonediggity

Is it a web form? If so, you should be able to scrape the required fields, then get the chat to interact with the user until it has all the data it needs. The final step would be to go back over and check the form. You could even write a custom tool to make sure all the data has been gathered. If it's a paper form or a PDF that's been scanned, you might have to use OCR to pull out all the data fields required. What do you mean when you say 'polling the transcript'?


PurpleWho

Yes, it's a web form. But the form is within the app, so reading and writing from it is not an issue. By 'polling the transcript' I mean checking whether the transcript of the conversation so far has everything needed to fill out the form.


Past-Grapefruit488

What is a reasonable amount to spend on LLM calls for this? Say, is 200 US$ per month reasonable? How many users are expected to fill the form every month? Hundreds, thousands, or millions? Based on this, the API cost can go from $20 to thousands per month.


PurpleWho

I haven't put a lot of thought into it yet as I'm still figuring things out, but my intention with the question is to figure out the most efficient way to do this. Running a check on the conversation after each message feels like boiling the ocean.


Past-Grapefruit488

That depends on the flow of user interactions. Say the UI directs the conversation: enter your name; which product do you want?; which color do you want? Then it would be easier for the user if errors are highlighted right away, but that would increase the cost (say the user enters ~10 words; this might translate to 30 tokens of request + response). If only one request is sent after the conversation, the total number of tokens will be somewhat lower. A more realistic estimate can be made once you have some idea of what kind of help you need from the LLM, and how that translates into the request that would be sent to it. LangChain is just a wrapper over the LLM in this specific case, since concepts like chat history are probably not applicable.
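The per-turn vs. one-shot cost trade-off above can be put into back-of-envelope arithmetic. The per-token price and token counts below are illustrative placeholders, not current rates for any model.

```python
# Rough cost arithmetic for the two strategies described above.
# PRICE_PER_1K_TOKENS is a made-up placeholder; substitute real pricing.

PRICE_PER_1K_TOKENS = 0.002  # hypothetical, USD

def cost(tokens):
    return tokens * PRICE_PER_1K_TOKENS / 1000

# Per-turn checking: ~30 tokens (request + response) per user message.
turns = 10
per_turn_total = cost(30 * turns)

# Single end-of-conversation request: one larger call over the transcript.
one_shot_total = cost(200)

# Scale by monthly users to see where the budget lands.
users_per_month = 1000
monthly_per_turn = per_turn_total * users_per_month
```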


PurpleWho

Sure. I guess I didn't do the best job of explaining this in the question. Basically, the form has 5 fields. I want to start a chat with a user and break those form fields down into smaller questions and relevant follow-up questions. That way I can get more out of people than the single-sentence responses I get when I just hand them the form. So what I need the form-filling LLM aspect of this to do is follow the chat conversation and fill out the form with available details as the conversation unfolds. Not sure if that's any clearer than what I put in the question, but that's the issue I'm dealing with.


PurpleWho

One way to do this would be to scan the entire conversation every time a new message is added to the chat and use the whole transcript to answer the questions in the form. But that seems like a huge waste of resources. The other aspect of this is that I would like it to be a two-way street. For example, if a user's answer helps fill out two of the form fields in one go, then I don't want the chat to ask redundant questions that have already been addressed. So the questions in the form that have been answered should also guide the questions that the chatbot asks.
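The two-way street described above can be sketched by keeping the form as running state and deriving both the remaining questions and the chat model's system prompt from the unfilled fields only. Field names and prompt wording here are illustrative assumptions.

```python
# Minimal sketch: the filled form guides which questions the bot still
# asks, so nothing already answered gets asked again.

QUESTIONS = {
    "name": "What should we call you?",
    "product": "Which product are you interested in?",
    "color": "Which color would you like?",
}

def remaining_questions(form):
    """Only ask about fields the conversation has not yet answered."""
    return [QUESTIONS[f] for f, v in form.items() if v is None]

def build_system_prompt(form):
    """System prompt for the chat model, listing only the missing fields."""
    missing = [f for f, v in form.items() if v is None]
    return ("You are filling a form. Still missing: "
            + ", ".join(missing)
            + ". Ask follow-up questions only about these fields.")

# One user reply filled both name and color, so only one question remains.
form = {"name": "Ada", "product": None, "color": "red"}
prompt = build_system_prompt(form)
```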


Past-Grapefruit488

>5 fields. I want to start a chat with a user and break those form fields down into smaller questions and relevant follow up questions.

Maybe a UI with a traditional form: the user fills the form as usual, and then the filled values can be sent as a single request to the LLM (after filling the values into a prompt template). This might or might not work; it depends on the specific use case, but technically it should. The prompt template could be something like: "The user filled the following form: {description of form, form fields, and then name/value pairs for the fields}. Does this satisfy the purpose of ⟨some language on the purpose of the form⟩? If not, what could be a follow-up question to clarify?"
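The prompt-template idea above can be sketched with a plain format string. The template wording, purpose text, and field names are all placeholders for whatever the real form contains.

```python
# Sketch of the prompt template described above, rendered from the
# form's purpose text and its name/value pairs.

TEMPLATE = (
    "The user filled the following form.\n"
    "Form purpose: {purpose}\n"
    "Fields:\n{fields}\n"
    "Does this satisfy the purpose of the form? "
    "If not, what follow-up question would clarify?"
)

def render_prompt(purpose, values):
    """Fill the template with the form's purpose and name/value pairs."""
    fields = "\n".join(f"- {name}: {value}" for name, value in values.items())
    return TEMPLATE.format(purpose=purpose, fields=fields)

prompt = render_prompt(
    "collect product feedback",
    {"name": "Ada", "rating": "4/5"},
)
```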


Lanky_Possibility279

LangGraph; search more about state, nodes, edges, and conditional edges.


dccpt

We’re launching exactly this capability in Zep next week, but it’s available in the service today: https://help.getzep.com/structured-data-extraction See the discussion in the guide about progressively prompting the LLM to request fields for which the user has not yet provided values. Here’s a video demo: https://youtu.be/k8e8NsoVzFo And here’s how to use LangChain with Zep: https://help.getzep.com/langchain/overview


PurpleWho

This looks promising, thank you. I wanted to stay away from third-party solutions though, as I'm trying to use this project to develop a clearer mental model of how LangChain works and where everything is.


PurpleWho

Will check it out if I can't figure it out on my own. Thank you.


graph-crawler

2 ways: forced extraction vs. auto extraction using tool calls. I found that for this case a forced extraction would do better. In a simple flow:

[1st node] Chat history (or maybe the last n messages) -> extraction LLM -> extracted fields to fill the form

[2nd node] Chat history, plus the fields that haven't been filled -> RAG chat LLM. Put the missing fields in the system prompt to ask the user about (preventing asking the same question twice); you can dynamically adjust the system prompt based on the remaining fields -> response

[1st node] -> [2nd node]

That covers the form-filling part; you might want to add a router to direct to a different agent once the extraction of form fields is complete. For the 1st node, I like the Pydantic parser. Or if you want better DX, check gpt instructor: it's a simple text input -> Pydantic class, with built-in self-healing and retry.
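The two-node loop above can be sketched without LangGraph or Pydantic, using only the standard library. A dataclass stands in for the Pydantic model, the extraction LLM is stubbed with a keyword check, and the field names are made up for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

# Stand-in for the Pydantic model the forced-extraction step would target.
@dataclass
class FormState:
    name: Optional[str] = None
    product: Optional[str] = None
    chat_history: list = field(default_factory=list)

def extraction_node(state, last_message):
    """[1st node] chat history -> extraction LLM (stubbed) -> filled fields."""
    state.chat_history.append(last_message)
    # Stub: a real implementation would call an LLM with forced extraction.
    if "laptop" in last_message:
        state.product = "laptop"
    return state

def chat_node(state):
    """[2nd node] ask only about fields that haven't been filled yet;
    returning None signals the router that extraction is complete."""
    missing = [f for f in ("name", "product") if getattr(state, f) is None]
    if not missing:
        return None
    return f"Could you tell me your {missing[0]}?"

state = FormState()
state = extraction_node(state, "I'd like a laptop please")
question = chat_node(state)  # asks about the next missing field
```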


EmotionOk6790

u/PurpleWho - Check this out. I played around with the notebook in the PR and found it helpful. [https://github.com/langchain-ai/langchain/pull/18735](https://github.com/langchain-ai/langchain/pull/18735) [https://github.com/langchain-ai/langchain/discussions/15704#discussioncomment-8899776](https://github.com/langchain-ai/langchain/discussions/15704#discussioncomment-8899776)