Dataset columns (value ranges as reported by the dataset viewer):
- issue_owner_repo: list, length 2
- issue_body: string, 0 to 261k chars
- issue_title: string, 1 to 925 chars
- issue_comments_url: string, 56 to 81 chars
- issue_comments_count: int64, 0 to 2.5k
- issue_created_at: string, 20 chars
- issue_updated_at: string, 20 chars
- issue_html_url: string, 37 to 62 chars
- issue_github_id: int64, 387k to 2.46B
- issue_number: int64, 1 to 127k
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hey there, I am facing a problem with streaming responses in FastAPI. Can I get some info on how to use websockets to stream the response? ### Suggestion: _No response_
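A minimal sketch of one way to do this, assuming langchain's `AsyncIteratorCallbackHandler` (available around 0.0.23x) and an OpenAI-backed chat model; the `/chat` route and the plain-text message protocol are made up for illustration:

```python
import asyncio

from fastapi import FastAPI, WebSocket
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI

app = FastAPI()

@app.websocket("/chat")  # hypothetical route
async def chat(websocket: WebSocket):
    await websocket.accept()
    question = await websocket.receive_text()
    handler = AsyncIteratorCallbackHandler()
    llm = ChatOpenAI(streaming=True, callbacks=[handler])
    # Run the LLM call as a task so tokens can be forwarded while it generates.
    task = asyncio.create_task(llm.apredict(question))
    async for token in handler.aiter():
        await websocket.send_text(token)
    await task
    await websocket.close()
```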
how to deploy langchain bot using fastapi with streaming responses
https://api.github.com/repos/langchain-ai/langchain/issues/8029/comments
2
2023-07-20T21:02:55Z
2023-10-26T16:04:48Z
https://github.com/langchain-ai/langchain/issues/8029
1,814,794,951
8,029
[ "langchain-ai", "langchain" ]
### Feature request Update `TextGen` to include `streaming` support for Oobabooga. ### Motivation Oobabooga provides a streaming API that will be very helpful. [TextGen](https://github.com/hwchase17/langchain/pull/5997) already supports the regular API. ### Your contribution :)
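For context, a hedged sketch of how this could look to callers once added, mirroring how other LLM wrappers expose streaming; the `streaming` flag on `TextGen` is the proposed addition here, while `model_url` is its existing parameter:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import TextGen

# streaming=True is hypothetical; a local Oobabooga API server is assumed.
llm = TextGen(
    model_url="http://localhost:5000",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Tell me a joke")  # tokens would print as they arrive
```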
Streaming support for Oobabooga API?
https://api.github.com/repos/langchain-ai/langchain/issues/8028/comments
4
2023-07-20T20:54:55Z
2023-11-20T16:05:42Z
https://github.com/langchain-ai/langchain/issues/8028
1,814,786,549
8,028
[ "langchain-ai", "langchain" ]
### Feature request This probably needs more refinement, but there are a lot of components floating around that overlap with one another. As per my understanding, the broader categorization should be: Chains: simple scenarios, where steps are hardcoded. Agent Executors: complex scenarios, where steps are determined at runtime. An agent executor internally uses an agent to plan the next step. In that sense, agents could probably be renamed to just planners, and agent executors could then simply be called agents(?) Both chains and agents can use tools (or toolkits) to execute steps in their chain. This simplifies the overall architecture and makes the components intuitive. ### Motivation As a beginner, I'm struggling with all the components floating around, and reducing the number of components in the core architecture reduces the learning curve for devs. ### Your contribution Yes, I can work on the PR if this change gets approval.
Agents and agent executor are 2 different concepts and shouldn't be placed in the same bucket
https://api.github.com/repos/langchain-ai/langchain/issues/8024/comments
1
2023-07-20T19:46:36Z
2023-10-26T16:04:53Z
https://github.com/langchain-ai/langchain/issues/8024
1,814,693,973
8,024
[ "langchain-ai", "langchain" ]
### Feature request E.g., it would be great if

```python
m = AIMessage(content="Hi!")
print(m)
```

returned something like "AI: Hi!" ### Motivation It would make representation of message history (e.g., for debugging or serialization into JSON) a little bit easier. ### Your contribution Yes, I'm happy to do it.
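A minimal sketch of what the requested string form might look like, using the `type` attribute the built-in messages already carry; the exact placement in the class hierarchy is an assumption:

```python
from langchain.schema import AIMessage, BaseMessage

def message_to_str(m: BaseMessage) -> str:
    # m.type is "ai", "human", or "system" on the built-in message classes.
    prefix = {"ai": "AI", "human": "Human", "system": "System"}.get(m.type, m.type)
    return f"{prefix}: {m.content}"

print(message_to_str(AIMessage(content="Hi!")))  # -> AI: Hi!
```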
Add a str representation for the Message
https://api.github.com/repos/langchain-ai/langchain/issues/8023/comments
2
2023-07-20T18:59:28Z
2023-10-26T16:04:58Z
https://github.com/langchain-ai/langchain/issues/8023
1,814,626,098
8,023
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am trying to use create_csv_agent with memory in order to make the model answer based on previous answers, so this is the code I used to achieve that, mostly from issue #5611 with a few adjustments:

```python
def csv_extractor(json_request: str):
    '''
    Useful for extracting data from a csv file.
    Takes a JSON dictionary as input in the form:
        {"prompt":"<question>", "path":"<file_name>"}
    Example:
        {"prompt":"Find the maximum age in xyz.csv", "path":"xyz.csv"}

    Args:
        json_request (str): The JSON dictionary input string.
    Returns:
        The required information from the csv file.
    '''
    arguments_dictionary = json.loads(json_request)
    question = arguments_dictionary["prompt"]
    file_name = arguments_dictionary["path"]
    csv_agent = create_csv_agent(llm=OpenAI(), path=file_name, verbose=True)
    return csv_agent(question)

request_format = '{{"prompt":"<question>","path":"<file_name>"}}'
description = f'Useful for working with a csv file. Input should be JSON in the following format: {request_format}'
csv_extractor_tool = Tool(
    name="csv_extractor",
    func=csv_extractor,
    description=description,
    verbose=True,
)
```

```python
tools = [csv_extractor_tool]

# Adding memory to our agent
from langchain.agents import ZeroShotAgent
from langchain.memory import ConversationBufferWindowMemory

prefix = """Have a conversation with a human, Answer step by step and the history of the messages is critical and very important to use. The user is expected to ask you questions that you will need to use the information you had from the previous answers. For example if the user asked you about the name of a person, he may ask you another question on that person based on the information you have, so take care. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools=tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=5,
    return_messages=True,
)

# Creating our agent
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.agents import AgentExecutor
import json

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
```

I am querying a CSV containing names and dates of birth using my agent, and I send the prompt like this:

```python
data = {"input": {"prompt": "give me the longest name", "path": "file.csv"}}
json_data = json.dumps(data)
result = agent_chain(json_data)
```

It returns the correct answer; let this answer be "Medjedovic". Now, when I ask the model to give me the date of birth of this name by asking "What is his birth date?", it identifies correctly in the first chain observation that I want the birth date of Medjedovic, which most probably means the name is in the memory as it should be. However, it retrieves a different and incorrect birth date in the second chain. This is the code:

```python
data = {"input": {"prompt": "what is his birth date?", "path": "file.csv"}}
json_data = json.dumps(data)
result = agent_chain(json_data)
```

The output is like this:

> Entering new AgentExecutor chain... Thought: I need to look up the birth date of Medjedovic Action: csv_extractor Action Input: {"prompt":"what is his birth date?","path":"file.csv"} > Entering new AgentExecutor chain... Thought: I need to find the birth_date column in the dataframe Action: python_repl_ast Action Input: df['birth_date'] Observation: 0 2/10/2000 then Final Answer: Medjedovic's date of birth is 2/10/2000.

It returned the wrong birth date of a different person (the first person in the dataset). I want it to use the name it identified in the answer to the first question, together with the observation in the first chain of the second question, to give the right answer. How can this be solved? I am using the gpt-3.5-turbo model API. I have also used different prefixes, and none of them worked. ### Suggestion: _No response_
Issue:Using memory with agents gives wrong results
https://api.github.com/repos/langchain-ai/langchain/issues/8020/comments
4
2023-07-20T17:59:43Z
2023-10-28T16:04:45Z
https://github.com/langchain-ai/langchain/issues/8020
1,814,527,905
8,020
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. How do you extract the last thought process from an agent? The final answer from the agent is too summarized for my liking; however, the 'Final Thought' process is great, with all the details. I am having some difficulty extracting that information, even with `return_intermediate_steps=True`. Is there any way I can get that final thought information as part of the Final Answer? ### Suggestion: _No response_
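A hedged sketch of one way to read that final thought back out, assuming the agent executor was built with `return_intermediate_steps=True` (agent setup omitted):

```python
result = agent_executor({"input": question})

# Each intermediate step is an (AgentAction, observation) pair; the AgentAction's
# .log attribute holds the raw "Thought: ... Action: ..." text for that step.
last_action, last_observation = result["intermediate_steps"][-1]
print(last_action.log)      # the detailed final thought
print(result["output"])     # the summarized final answer
```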
Extract Final Thoughts from Agent as part of Final Answer.
https://api.github.com/repos/langchain-ai/langchain/issues/8019/comments
3
2023-07-20T17:53:44Z
2023-08-14T14:47:22Z
https://github.com/langchain-ai/langchain/issues/8019
1,814,519,811
8,019
[ "langchain-ai", "langchain" ]
### Feature request I am writing to request an enhancement for the FLARE chain in LangChain. I'm wondering how I can change the class to accept local (fine-tuned) models rather than use the OpenAI API. Since FLARE uses a retriever, a question generator, and a response generator, it would be interesting to leverage the strength of newer models, or even custom fine-tuned ones. ### Motivation Doing so would give more flexibility to experiment with different models and not rely on OpenAI, hence making it more open-source. ### Your contribution I would envision the future syntax to look something like this:

```python
from langchain.chains import FlareChain

flare_chain = FlareChain(
    question_generator_chain=question_generator,
    response_chain=response_generator,
    output_parser=None,  # Replace with the output parser if available
    retriever=retriever,
    min_prob=0.2,
    min_token_gap=5,
    num_pad_tokens=2,
    max_iter=10,
    start_with_retrieval=True,
)
```

The models would be imported like so:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM
from langchain.chains import RetrievalQA

tokenizer = AutoTokenizer.from_pretrained(model_id)
response_generator = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')

tokenizer = AutoTokenizer.from_pretrained("model_id_1")
question_generator = AutoModelForSeq2SeqLM.from_pretrained("model_id_1")

retriever = RetrievalQA.from_chain_type(llm=model, chain_type="stuff", retriever=docsearch.as_retriever())
```
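One hedged note on what already works today: a locally loaded HF model can be wrapped in `HuggingFacePipeline`, which LangChain accepts anywhere an LLM is expected; whether FLARE's probability thresholding (`min_prob`) works without OpenAI-style logprobs is an open question and part of this request. The pipeline settings below are illustrative:

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Wrap the already-loaded model/tokenizer as a LangChain LLM (no second copy in memory).
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
local_llm = HuggingFacePipeline(pipeline=pipe)
```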
FLARE Implementation with Local Fine-Tuned Models
https://api.github.com/repos/langchain-ai/langchain/issues/8015/comments
3
2023-07-20T16:36:17Z
2024-04-23T07:00:46Z
https://github.com/langchain-ai/langchain/issues/8015
1,814,401,030
8,015
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I was looking up the documentation as well. However, from the documentation it seems as if both types of agents are doing the same thing. Even when I looked at the backend code, both agent codes seemed almost identical. I am not so sure what I am missing in terms of my understanding. Any leads would be appreciated. ### Suggestion: Documentation between the two types can be made more clear.
What is the difference between OpenAI Function and OpenAI Multi Functions Agent
https://api.github.com/repos/langchain-ai/langchain/issues/8011/comments
4
2023-07-20T15:07:47Z
2023-12-16T21:51:12Z
https://github.com/langchain-ai/langchain/issues/8011
1,814,224,911
8,011
[ "langchain-ai", "langchain" ]
### Issue with current documentation: Hello everyone! I am following the tutorial on the use of agents. Specifically, I am now exploring the [OpenAIFunctions](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent) agent type. I am following the exact same tutorial, and I get an error when I want to use the agent to solve some math problems or connect to my SQL database, which are tools the agent has available. The error I get is the following:

```python
ValidationError: 1 validation error for AIMessage
content
  none is not an allowed value (type=type_error.none.not_allowed)
```

Does anyone know what is causing this error? Best regards, Orlando
Error in OpenAI Functions Agent
https://api.github.com/repos/langchain-ai/langchain/issues/8009/comments
5
2023-07-20T14:47:06Z
2023-07-20T18:25:10Z
https://github.com/langchain-ai/langchain/issues/8009
1,814,175,432
8,009
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi everyone, I am trying to use `llama-2-70b-chat` with LlamaCpp() as described here: https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp#metal My system specs are MacBook Pro, M1 chip, 16GB RAM, 500GB SSD. Here is my code using LlamaCpp:

```python
DEFAULT_CHAT_MODEL_LLAMA = '/Volumes/Gargantua/LLAA2/llama-2-70b-chat/ggml-model-q4_0.bin'

class HelpprBaseLLAMAV2(LlamaCpp):
    model_path = DEFAULT_CHAT_MODEL_LLAMA
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    verbose = True
    input = {"temperature": 0.75, "top_p": 1, "max_length": 2048}
    n_gpu_layers = 1
    n_batch = 512
    f16_kv = True
```

Whenever I call `llm = HelpprBaseLLAMAV2()` via the following:

```
(venv) tapanjain@MacBook-Pro-2 helppr % python main.py
```

I get the following error:

```
zsh: illegal hardware instruction  python main.py
```

Can anyone help me understand what exactly the reason is here? Is langchain currently not supported for Llama 2? ### Suggestion: _No response_
Issue: Using "llama-2-70b-chat/ggml-model-q4_0.bin" with LlamaCpp()
https://api.github.com/repos/langchain-ai/langchain/issues/8004/comments
12
2023-07-20T13:52:10Z
2023-11-15T16:07:27Z
https://github.com/langchain-ai/langchain/issues/8004
1,814,051,785
8,004
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I came to know that there are two methods to keep memory in **ConversationalRetrievalChain**. 1. Method 1: using **ConversationBufferMemory**

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
)
```

2. Method 2: using the **chat_history** parameter

```python
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
)
chat_history = []
result = qa({"question": question, "chat_history": chat_history})
```

What is the exact difference between the above two methods? When should one be used over the other, and why? ### Suggestion: _No response_
Difference between ConversationBufferMemory and chat_history parameter
https://api.github.com/repos/langchain-ai/langchain/issues/8002/comments
5
2023-07-20T13:40:25Z
2024-07-02T12:21:04Z
https://github.com/langchain-ai/langchain/issues/8002
1,814,025,506
8,002
[ "langchain-ai", "langchain" ]
### System Info Langchain Version == 0.0.237 Platform == Google Colaboratory ### Who can help? @eyurtsev ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Step 1: Import packages. Step 2: Use the Apify dataset loader:

```python
loader = ApifyDatasetLoader(
    dataset_id="KCoOphOr1tQmmUhsS",
    dataset_mapping_function=lambda item: Document(
        page_content=item["description"] or "name", metadata={"id": item["id"]}
    ),
)
```

Step 3: Attempt to create a vector store index:

```python
from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([loader])
```

### Expected behavior It should create the index from the data loader provided, but the loader does not produce the attribute "page_content" when loading Apify datasets. <img width="919" alt="Screenshot 2023-07-19 at 12 24 02" src="https://github.com/hwchase17/langchain/assets/63857960/11aee742-c513-4cc6-863e-56c5e10a43db">
APIfy dataset loader does not provide the attribute "page_content" in the loaded documents
https://api.github.com/repos/langchain-ai/langchain/issues/7999/comments
5
2023-07-20T12:32:18Z
2023-10-28T16:04:50Z
https://github.com/langchain-ai/langchain/issues/7999
1,813,895,614
7,999
[ "langchain-ai", "langchain" ]
### System Info `langchain 0.0.235` I wrote my own callback handler `class ChatHandler(BaseCallbackHandler):` which includes the function

```python
def on_tool_end(self, output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) -> Any:
    """Run when tool ends running."""
    if observation_prefix is not None:
        print("OBSERVATION: " + str(output))
    if llm_prefix is not None:
        print("llm_prefix: " + str(output))
```

as inspired by the source of the file callback handler https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/file.html#FileCallbackHandler . I have a STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent. My goal is to capture the observation, but the function does not seem to be called. The handler works fine for other functions. I attach the handler like this:

```python
cHandler = chatHandler.ChatHandler()
cManager = manager.BaseCallbackManager(handlers=[cHandler])
initialize_agent(tools, llm, callback_manager=cManager, ...
```

Am I using the wrong callback manager? ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction

```python
import os
from langchain.agents import initialize_agent
from langchain.callbacks import manager
from langchain.chat_models import AzureChatOpenAI
from langchain.agents import AgentType
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from typing import Any, Dict, List, Optional

# ENVIRONMENT KEYS; OPENAI_API_KEY
import json
vals = json.load(open("../local.settings.json"))['Values']
for k in vals.keys():
    os.environ[k] = vals[k]

from langchain.agents import Tool

def a(b):
    print('tool used wow')
    return "frogimage.png"

t = Tool(
    name="Search things online",  # search location by vague name
    func=a,
    description="Use this tool to search and find things online.",
    return_direct=False,
)

BASE_URL = "https://XXX.openai.azure.com"
DEPLOYMENT_NAME = "gpt-35-turbo-16k"
model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-05-15",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=os.environ["OPENAI_API_KEY"],
    openai_api_type="azure",
    temperature=0.0,
)

class TestHandler(BaseCallbackHandler):
    def on_tool_end(self, output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) -> Any:
        """Run when tool ends running."""
        print("on_tool_end")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
        print("on_agent_action")

cHandler = TestHandler()
cManager = manager.BaseCallbackManager(handlers=[cHandler])
agent_chain = initialize_agent([t], model, callback_manager=cManager, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_chain("search frog images online")
```

outputs:

```
> Entering new AgentExecutor chain...
on_agent_action
Action:
{
  "action": "Search things online",
  "action_input": "frog images"
}
tool used wow

Observation: frogimage.png
Thought: I found an image of a frog. Here it is:
Action:
{
  "action": "Final Answer",
  "action_input": "frogimage.png"
}

> Finished chain.
{'input': 'search frog images online', 'output': 'frogimage.png'}
```

### Expected behavior The function on_tool_end should be called when the agent has used a tool, printing "on_tool_end", but only "on_agent_action" is printed.
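A hedged suggestion based on how callbacks propagate in this version: a constructor-level `callback_manager` does not always reach tool runs, while callbacks passed at call time are inherited by every child run, so this may surface `on_tool_end`:

```python
# Pass the handler at invocation time instead of (or in addition to) the constructor.
agent_chain("search frog images online", callbacks=[cHandler])
```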
custom BaseCallbackHandler in STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent not calling on_tool_end
https://api.github.com/repos/langchain-ai/langchain/issues/7998/comments
3
2023-07-20T11:16:38Z
2023-07-25T14:15:21Z
https://github.com/langchain-ai/langchain/issues/7998
1,813,748,887
7,998
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I've been stuck on a task that I was working on, not knowing how exactly to proceed with it: I need to combine different types of chains in the routes of an Agent. I have managed to build the following pipeline using an Agent: I receive a query, and according to the context of the query it gets routed either to one of the different LLM Chains defined (answering greetings, goodbyes, unrelated questions) or to a RetrievalQA that searches for the answer in a VectorStore when the input query is related to the context in question. What I noticed, and have been trying to fix, is that sometimes, despite adding memory to my agent, I can receive a follow-up question that will not be routed to the VectorStore because it gets labeled as "Unrelated". Example: suppose the first question is "Am I allowed to work remotely on Fridays?". This is related, and the model will reply with "no" from the VectorStore. Now, if I follow it up with "How about Mondays?", the model will identify the input query as unrelated, as it is not tackling the subject of "work". Ideally, I would want it to reformulate the question to "How about working remotely on Monday?" and route it accordingly to the VectorStore, as it is tackling the subject of "work". I did my research and found that ConversationalRetrievalChain should be part of my solution, but I do not know where exactly to put it in my architecture. The main issue is that LLM Chains expect a certain type of input ({'input': x}) while ConversationalRetrievalChain expects another ({'question': x, 'chat_history': y}). I tried using LOTR (Lord of all Retrievers), but apparently that hasn't been working either; I only have 1 retriever (1 VectorStore) and the rest are simple LLMChains. Even when I tried LOTR with ConversationalRetrievalChain in order to combine retrievers, I got the following error: "NotImplementedError: Saving not supported for this chain type." In short, I want to be able to pass the same input, regardless of the route in my Agent, whether it's an LLMChain or a ConversationalRetrievalChain. I'm a bit lost and would love some help on the subject. ### Suggestion: _No response_
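A hedged sketch of one way to give every route the same single-string input: wrap the ConversationalRetrievalChain behind a Tool and let chain-attached memory supply the chat history (the tool name/description and the `llm`/`retriever` objects are assumptions):

```python
from langchain.agents import Tool
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa_chain = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)

# The chain now only needs "question"; history comes from its own memory, so this
# tool takes the same plain string every other route takes. The condense step also
# rewrites follow-ups like "How about Mondays?" into standalone questions.
qa_tool = Tool(
    name="company_policy_qa",
    func=lambda q: qa_chain({"question": q})["answer"],
    description="Answers questions about remote work and other company policies.",
)
```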
Issue: <Combining LLMChain and ConversationalRelationChain in an agent's routes>
https://api.github.com/repos/langchain-ai/langchain/issues/7997/comments
5
2023-07-20T11:01:40Z
2024-01-12T05:24:25Z
https://github.com/langchain-ai/langchain/issues/7997
1,813,722,288
7,997
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.230 MacOS ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I do not know how exactly to reproduce this, but I would like to know if someone has had the same error using GPT-3.5-turbo with this agent, tool, and prompts. Agent: LLMSingleActionAgent. Tool and prompt:

```python
template_hello = """You are a chatbot, answer to this {input}

Previous conversation history:
{history}
"""
prompt_hello = PromptTemplate(input_variables=["input", "history"], template=template_hello)
hello_chain = LLMChain(
    llm=ChatOpenAI(model='gpt-3.5-turbo'),
    prompt=prompt_hello,
    verbose=True,
    memory=self.memory,
)
```

Memory:

```python
from langchain.memory import ConversationTokenBufferMemory

self.memory = ConversationTokenBufferMemory(llm=ChatOpenAI(temperature=0, max_tokens=500, model='gpt-3.5-turbo'), memory_key="history", max_token_limit=500)
```

When I run the agent with gpt-4 it can access the memory; if I change the agent model to gpt-3.5-turbo I get this error: > I don't have access to the short-term memory ### Expected behavior Answer based on the memory
I don't have access to the short-term memory using Agent with gpt-3.5-turbo
https://api.github.com/repos/langchain-ai/langchain/issues/7996/comments
2
2023-07-20T10:46:50Z
2023-10-26T16:05:28Z
https://github.com/langchain-ai/langchain/issues/7996
1,813,694,849
7,996
[ "langchain-ai", "langchain" ]
### System Info python 3.9 langchain 0.0.234 qdrant-client 1.1.7 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
client = qdrant_client.QdrantClient(self.qdrant_url, port=6333)
qdrant = Qdrant(
    client=client,
    collection_name=self.collection_name,
    embeddings=embeddings,
)
```

### Expected behavior I'm getting this error since I upgraded my langchain to 0.0.234 from 0.0.233:

```
<AioRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:6334: Failed to connect to remote host: Connection refused"
    debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:6334: Failed to connect to remote host: Connection refused {grpc_status:14, created_time:"2023-07-20T18:17:20.830536+08:00"}"
```

I didn't change any code in my project. Is there any setting change? It seems that it connects to port 6334 of Qdrant.
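A hedged observation: 6334 is Qdrant's gRPC port, so something in the new version appears to be taking the gRPC path. Forcing the HTTP client is one thing to try; `prefer_grpc` is a real qdrant-client option:

```python
# Explicitly keep the client on the HTTP/REST port.
client = qdrant_client.QdrantClient(self.qdrant_url, port=6333, prefer_grpc=False)
```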
Can't connect to Qdrant since 0.0.234
https://api.github.com/repos/langchain-ai/langchain/issues/7995/comments
7
2023-07-20T10:30:28Z
2023-11-23T16:07:15Z
https://github.com/langchain-ai/langchain/issues/7995
1,813,665,397
7,995
[ "langchain-ai", "langchain" ]
### System Info I am using the code below to make an agent that decides, based on the question, whether to use semantic search or the pandas agent to answer questions about a dataset of employees. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [x] Agents / Agent Executors - [x] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
semantic_retrieval_chain = RetrievalQA.from_llm(
    OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
    # memory=memory
)

agg_df = pd.read_csv(TMP_FILE_PATH, index_col=0).iloc[:, :-1]
aggregation_agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0),
    agg_df,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    include_df_in_prompt=True,
    # handle_parse_errors=True,
    # suffix="Assistant also handles parsing errors.",
)

tools = [
    Tool(
        name="Semantic Search",
        func=semantic_retrieval_chain,
        description="""useful for retrieving employees similar to a given query specifications and answering questions not involving aggregations and operations over more than one employee""",
    ),
    Tool(
        name="Aggregation Agent",
        func=aggregation_agent.run,
        description="useful for answering questions involving aggregations and descriptive statistics over more than one employee",
    ),
]

llm = OpenAI(temperature=0)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    return_intermediate_steps=True,
    # memory=memory
)
```

I was able to successfully get the documents retrieved by the semantic search tool, but when I set return_intermediate_steps to True for the aggregation agent, I got the following error: **NotImplementedError: Saving not supported for this chain type.** ### Expected behavior I want to get the code applied by the pandas agent (aggregation_agent above) whenever it is invoked. I badly need this feature to be able to display the rows (if any) from the appropriate dataframe. Thanks in advance for any help.
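A hedged sketch of one way to see the pandas code: request intermediate steps from the inner agent itself (extra kwargs being forwarded to the underlying AgentExecutor is an assumption worth verifying) and read each action's `tool_input`:

```python
aggregation_agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0), agg_df, verbose=True,
    return_intermediate_steps=True,  # assumed to be forwarded to the AgentExecutor
)
result = aggregation_agent({"input": "What is the average age per department?"})
for action, observation in result["intermediate_steps"]:
    if action.tool == "python_repl_ast":
        print(action.tool_input)  # the pandas expression the agent executed
```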
Get the code applied by the pandas agent
https://api.github.com/repos/langchain-ai/langchain/issues/7994/comments
1
2023-07-20T10:19:46Z
2023-07-30T06:05:42Z
https://github.com/langchain-ai/langchain/issues/7994
1,813,647,909
7,994
[ "langchain-ai", "langchain" ]
### System Info Langchain 0.0.237. Python 3.11 Mac OS X M1 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run `make start`:

```
INFO:     Uvicorn running on http://127.0.0.1:9002 (Press CTRL+C to quit)
INFO:     Started reloader process [80851] using StatReload
INFO:     Started server process [80855]
INFO:     Waiting for application startup.
in startup_event
AttributeError: Can't get attribute '_default_relevance_score_fn'
```

### Expected behavior Load the chat. Note: the FAISS wrapper was updated back in May, renaming `_default_relevance_score_fn` to `relevance_score_fn`.
ChatLangChain requesting _default_relevance_score_fn' on <module 'langchain.vectorstores.faiss> on init
https://api.github.com/repos/langchain-ai/langchain/issues/7992/comments
6
2023-07-20T09:32:36Z
2023-11-05T16:05:39Z
https://github.com/langchain-ai/langchain/issues/7992
1,813,565,399
7,992
[ "langchain-ai", "langchain" ]
null
Does it support 文心一言 (ERNIE Bot)?
https://api.github.com/repos/langchain-ai/langchain/issues/7990/comments
15
2023-07-20T08:59:25Z
2023-12-18T23:49:08Z
https://github.com/langchain-ai/langchain/issues/7990
1,813,507,242
7,990
[ "langchain-ai", "langchain" ]
### System Info LangChain Python v0.0.237 Based on this code snippet it appears that OutputFixingParser doesn't support async flows. https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52 It's calling the run function and not arun ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction 1. Define async callback handler 2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks) 3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error: ``` RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited ``` ### Expected behavior 1. Should work with coroutines as expected
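A hedged sketch of the requested async path, written as a subclass; the `parser` and `retry_chain` attribute names and the prompt variables come from the linked fix.py:

```python
from langchain.output_parsers import OutputFixingParser
from langchain.schema import OutputParserException

class AsyncOutputFixingParser(OutputFixingParser):
    async def aparse(self, completion: str):
        try:
            return self.parser.parse(completion)
        except OutputParserException as e:
            # Mirror the sync path, but await the chain so async callbacks fire.
            new_completion = await self.retry_chain.arun(
                instructions=self.parser.get_format_instructions(),
                completion=completion,
                error=repr(e),
            )
            return self.parser.parse(new_completion)
```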
OutputFixingParser is not async
https://api.github.com/repos/langchain-ai/langchain/issues/7989/comments
2
2023-07-20T08:29:12Z
2023-10-26T16:05:33Z
https://github.com/langchain-ai/langchain/issues/7989
1,813,454,976
7,989
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.237 python==3.8.16 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Same issue as https://github.com/hwchase17/langchain/issues/6984 but for Redis; the proposed fix does not work.

```python
llm = OpenAI(temperature=0.1)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="stuff")
retriever = db.as_retriever(search_kwargs={"k": 10})
chain = ConversationalRetrievalChain(
    retriever=retriever,  # redis retriever
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
    return_source_documents=True,
)
```

### Expected behavior

```python
query = "who slayed Karna in the battle?"
result = chain({"question": query})
print(len(result['source_documents']))  # output: 4, but should be 10
```
Redis does not use the parameters passed in by Redis.as_retriever()
https://api.github.com/repos/langchain-ai/langchain/issues/7986/comments
2
2023-07-20T07:57:13Z
2023-12-09T16:42:25Z
https://github.com/langchain-ai/langchain/issues/7986
1,813,396,450
7,986
[ "langchain-ai", "langchain" ]
### Feature request Almost all the chains offered in the langchain framework support a verbose option, which helps developers understand what prompt is being applied under the hood and plan their work accordingly. It helps immensely while debugging. create_extraction_chain is a very helpful chain, and I found that it does not accept a verbose attribute. ### Motivation For many developers who are just following the official langchain documentation and not looking at the code used under the hood, this error will seem odd. Supporting this attribute will keep things consistent and improve the debugging experience for this chain. ### Your contribution I can raise the PR for this. ![Screenshot 2023-07-20 at 12 34 55 PM](https://github.com/hwchase17/langchain/assets/8801972/18b248df-1a7c-49cf-a9b1-3101e6928631)
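A hedged interim workaround: the object returned is an ordinary LLMChain, so the flag can be flipped after construction (this relies on `verbose` being a settable field on the base Chain class):

```python
chain = create_extraction_chain(schema, llm)
chain.verbose = True  # verbose is a field on the base Chain class
```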
TypeError: create_extraction_chain() got an unexpected keyword argument 'verbose'
https://api.github.com/repos/langchain-ai/langchain/issues/7982/comments
0
2023-07-20T06:39:12Z
2023-07-20T13:52:15Z
https://github.com/langchain-ai/langchain/issues/7982
1,813,275,803
7,982
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. How do you increase max_tokens for the agent? I am using gpt-3.5-turbo-16k. I notice that the model is only using ~4096 tokens. Is there a way to override this? The reason I am asking is that I am ingesting large text in the prompt, and the final answer I am getting is so short that it does not have any meaning. Thanks! ### Suggestion: _No response_
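A hedged sketch: `max_tokens` caps the completion length and belongs on the LLM object handed to the agent; the model name, value, and `tools` list below are placeholders:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-16k", max_tokens=2000, temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```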
Agent does not use max_tokens parameter from ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/7981/comments
1
2023-07-20T05:58:45Z
2023-08-14T14:47:00Z
https://github.com/langchain-ai/langchain/issues/7981
1,813,216,875
7,981
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. ![image](https://github.com/hwchase17/langchain/assets/49942750/e6cf4a3a-da28-4133-9f3e-9b9998d0adb2) In Milvus, we are able to create databases and create collections within the database itself. Hence I am able to create a collection with the same name in different databases. How can we specify the specific database and collection to use? Currently the parameters only accept collection_name and not db_name as in pymilvus. ### Suggestion: _No response_
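A hedged, unverified sketch: pymilvus's connect() accepts db_name, and the langchain wrapper forwards connection_args to it, so passing db_name there may select the database (whether this works depends on the langchain/pymilvus versions in use):

```python
from langchain.vectorstores import Milvus

vector_store = Milvus(
    embedding_function=embeddings,
    collection_name="my_collection",
    # db_name being honored here is an assumption about connection_args forwarding.
    connection_args={"host": "localhost", "port": "19530", "db_name": "my_db"},
)
```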
Issue: How to pass in database name parameter into Milvus
https://api.github.com/repos/langchain-ai/langchain/issues/7979/comments
14
2023-07-20T05:32:27Z
2024-08-05T08:13:51Z
https://github.com/langchain-ai/langchain/issues/7979
1,813,191,448
7,979
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I'm trying to use an LLM model from Hugging Face, for example model_name="lmsys/vicuna-13b-v1.3", in the chain `load_qa_chain`. The LLM model can be fetched through AutoModelForCausalLM.from_pretrained, e.g.

```python
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
)
```

What's the best (memory-efficient) way to wrap the model for integration with load_qa_chain? ### Suggestion: _No response_
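A hedged sketch of the usual route: wrap the already-loaded model in a transformers pipeline and then in `HuggingFacePipeline`, which `load_qa_chain` accepts as its llm; `max_new_tokens` is an illustrative setting:

```python
from transformers import AutoTokenizer, pipeline
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline

tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
llm = HuggingFacePipeline(pipeline=pipe)  # reuses the loaded model, no extra copy
chain = load_qa_chain(llm, chain_type="stuff")
```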
Issue: integrate local LLM (from huggingface) into load_qa_chain
https://api.github.com/repos/langchain-ai/langchain/issues/7975/comments
2
2023-07-20T02:37:21Z
2023-10-26T16:05:38Z
https://github.com/langchain-ai/langchain/issues/7975
1,813,037,695
7,975
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I have a valid SerpAPI key that I can use in the SerpAPI playground and with direct calls using Python requests, but when I use it with LangChain I get an error saying that the key is invalid. I have my key set as an environment variable, but I continue to get this error: ValueError: Got error from SerpAPI: Invalid API key. Your API key should be here: https://serpapi.com/manage-api-key I have even generated a new API key, with the same results. I have tried Pydroid3 for Android, Ubuntu in Termux for Android (64-bit), and Ubuntu in AWS EC2, with the same result. ### Suggestion: _No response_
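One hedged thing to check: LangChain's wrapper reads the key from the SERPAPI_API_KEY environment variable specifically (or an explicit constructor argument), so a key exported under any other name will be reported as invalid:

```python
import os
os.environ["SERPAPI_API_KEY"] = "your-key"  # the exact variable name the wrapper expects

from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper()  # or SerpAPIWrapper(serpapi_api_key="your-key")
print(search.run("test query"))
```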
Serpapi API key not working with Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/7971/comments
5
2023-07-19T23:53:59Z
2024-05-12T16:22:03Z
https://github.com/langchain-ai/langchain/issues/7971
1,812,900,999
7,971
[ "langchain-ai", "langchain" ]
### System Info LangChain version is the latest one - 0.0.237 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
def text_split(documents):
    text_splitter = SpacyTextSplitter(chunk_size=1000, chunk_overlap=10, separator='\n')
    texts = []
    for document in documents:
        texts.extend(text_splitter.split_documents(document.load()))
    return texts
```

Using this simple code, I get this error:

```
Traceback (most recent call last):
  ...
  File "/opt/project/main/genie.py", line 41, in text_split
    texts.extend(text_splitter.split_documents(document.load()))
  File "/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py", line 131, in split_documents
    return self.create_documents(texts, metadatas=metadatas)
  File "/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py", line 116, in create_documents
    for chunk in self.split_text(text):
  File "/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py", line 1047, in split_text
    splits = (s.text for s in self._tokenizer(text).sents)
  File "/usr/local/lib/python3.11/site-packages/spacy/language.py", line 1030, in __call__
    doc = self._ensure_doc(text)
  File "/usr/local/lib/python3.11/site-packages/spacy/language.py", line 1121, in _ensure_doc
    return self.make_doc(doc_like)
  File "/usr/local/lib/python3.11/site-packages/spacy/language.py", line 1113, in make_doc
    return self.tokenizer(text)
TypeError: Argument 'string' has incorrect type (expected str, got lxml.etree._ElementUnicodeResult)
```

I've also tried the approach with the alternative `split_text` method, but I'm still getting the same error. ### Expected behavior If I put a breakpoint at [langchain/text_splitter.py:116](https://github.com/hwchase17/langchain/blob/master/langchain/text_splitter.py#L116), execute `text = str(text)`, and resume the process, only then does it work.
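A hedged user-side workaround matching the breakpoint finding: coerce page_content to a plain str before splitting (the loop shape mirrors the reporter's function):

```python
def text_split(documents):
    text_splitter = SpacyTextSplitter(chunk_size=1000, chunk_overlap=10, separator='\n')
    texts = []
    for document in documents:
        docs = document.load()
        for d in docs:
            d.page_content = str(d.page_content)  # lxml's str subclass upsets spaCy
        texts.extend(text_splitter.split_documents(docs))
    return texts
```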
Argument 'string' has incorrect type (expected str, got lxml.etree._ElementUnicodeResult)
https://api.github.com/repos/langchain-ai/langchain/issues/7968/comments
4
2023-07-19T22:23:56Z
2023-10-28T16:04:55Z
https://github.com/langchain-ai/langchain/issues/7968
1,812,815,863
7,968
[ "langchain-ai", "langchain" ]
### System Info Mac M2 Max 32GB ### Who can help? @rlancemartin ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction We pass docs into a prompt, and pass the prompt to an LLM chain:

```python
# Prompt
prompt = PromptTemplate.from_template(
    "Human: Summarize the main themes in these retrieved docs: {docs} Assistant: "
)

# Chain
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Run
question = "What are the approaches to Task Decomposition?"
docs = vectorstore.similarity_search(question)
result = llm_chain(docs)
```

The doc metadata gets formatted with the prompt and passed to the LLM. This can cause hallucination (relative to the intended doc page_content) if the metadata contains irrelevant information. It also chews up tokens, potentially with redundant information. See a good example here: https://smith.langchain.com/public/9af32e5b-2ca9-41d9-a5a6-8243b218208a/r ### Expected behavior Only pass `page_content` from retrieved documents into the prompt.
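A hedged sketch of the suggested behavior on the caller side, joining only page_content before templating:

```python
docs_text = "\n\n".join(d.page_content for d in docs)  # drop metadata
result = llm_chain(docs_text)
```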
Doc metadata can get passed into prompt unexpectedly
https://api.github.com/repos/langchain-ai/langchain/issues/7967/comments
1
2023-07-19T21:57:17Z
2023-10-25T16:05:37Z
https://github.com/langchain-ai/langchain/issues/7967
1,812,789,717
7,967
[ "langchain-ai", "langchain" ]
1. Dark mode by default would be great 2. A theme that uses black instead of dark grey (for OLED screens) would also be appreciated I'm sure there are others here who read the new additions to docs before bed lol
Petition for docs to be dark mode by default
https://api.github.com/repos/langchain-ai/langchain/issues/7965/comments
1
2023-07-19T21:48:22Z
2023-08-08T20:10:36Z
https://github.com/langchain-ai/langchain/issues/7965
1,812,781,032
7,965
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, Not sure if someone else is facing this "issue" or if it is something wrong I'm doing. So far I have read that GPT-3.5-turbo and later should be used with "chat_models" instead of "models". While testing the "summary" chain (map_reduce), I noticed that using a "model" llm it does indeed run in parallel, but using a chat_model it runs in sequence. From the src in langchain I saw ([langchain][chains][combine_documents] map_reduce.py):

> map_results = await self.llm_chain.aapply( # FYI - this is parallelized and so it is fast. [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs], callbacks=callbacks, )

And tracing the execution down: **AzureOpenAI chat_model**: it executes a for loop and waits for each response, i.e. multiple API calls to the endpoint:

> results.append( self._generate_with_cache( m, stop=stop, run_manager=run_managers[i] if run_managers else None, **kwargs, ) )

**AzureOpenAI model** (aka completion): it generates a single call with all the prompts:

> response = completion_with_retry(self, prompt=_prompts, **params)

And here are my outcomes: - Using the ChatModel (Azure): it works as expected, following the prompt and creating the expected output, but in a sequential execution. - Using the LLM model (Azure, aka completion): it runs in parallel, but the "summaries" are not correct; it "creates random content" not related to the topic (I have set temperature to 0 and top_p to 0.9), and still does not create summaries of the provided text. So my questions/concerns are: 1. Is the summarization chain expected to run in parallel with chat model LLMs? If so, can anyone provide a sample? I can't make it run in parallel. 2. Are "completion LLMs" (aka normal LLM models) only good for "generating content" but not for summaries, using GPT-3.5-turbo? Thanks for your help in advance. ### Suggestion: _No response_
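A hedged note: the parallelized `aapply` quoted above is on the async code path, so the chain has to be invoked through its async API for chat models to fan out; a sketch, with `llm` and `docs` assumed (in a notebook, `await chain.arun(docs)` instead of asyncio.run):

```python
import asyncio

from langchain.chains.summarize import load_summarize_chain

chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = asyncio.run(chain.arun(docs))  # map calls run concurrently via aapply
```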
Issue: [Azure] Summary chain with chat 3.5 turbo - Not being parallelized
https://api.github.com/repos/langchain-ai/langchain/issues/7964/comments
2
2023-07-19T21:34:45Z
2023-10-25T16:05:41Z
https://github.com/langchain-ai/langchain/issues/7964
1,812,765,599
7,964
[ "langchain-ai", "langchain" ]
### System Info openai==0.27.7 langchain==0.0.237 chromadb==0.4.2 Platform: Windows 11 Python Version: 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Within this file, I was expecting db_collection to have embeddings when it was printed. However, the output is like this:

> db_collection {'ids': ['1234_5678_1'], 'embeddings': None, 'metadatas': [{'source': 'Test0720.txt'}], 'documents': ['Nuclear power in the United States is provided by 99 commercial reactors with a net capacity of 100,350 megawatts (MW), 65 pressurized water reactors and 34 boiling water reactors.\n\nIn 2016 they produced a total of 805.3 terawatt-hours of electricity, which accounted for 19.7% of the nation's total electric energy generation.\n\nIn 2016, nuclear energy comprised nearly 60 percent of U.S. emission-free generation.']}

The value for "embeddings" is empty. Here is the code:

```python
import os
from flask import Blueprint, request, jsonify
from werkzeug.utils import secure_filename
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader

chroma_bp = Blueprint('chroma_bp', __name__, url_prefix='/v1/resource')

openai_key = os.getenv('OPENAI_API_KEY')
os.environ["OPENAI_API_KEY"] = openai_key

@chroma_bp.route('/save_to_chroma', methods=['POST'])
def api_handler():
    file = request.files['file']
    user_id = request.form.get('user_id')
    file_id = request.form.get('file_id')
    try:
        response = create_chroma_db_from_file(file, file_id, user_id)
        return jsonify({'response': 'Chroma DB created successfully'}), 200
    except Exception as e:
        print(f"Exception: {e}")  # Debug print statement
        return jsonify({'error': str(e)}), 500

def create_chroma_db_from_file(file, file_id, user_id):
    filename = secure_filename(file.filename)
    file.save(filename)

    # load the document and split it into chunks
    loader = TextLoader(filename)
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    print(f"Number of documents: {len(docs)}")
    print(f"Documents:", docs)

    # create the open-source embedding function
    embeddings = OpenAIEmbeddings(openai_api_key=openai_key)

    # load it into Chroma
    ids = [f"{file_id}_{user_id}_{i}" for i in range(1, len(docs) + 1)]
    db = Chroma.from_documents(
        documents=docs,
        embedding=embeddings,
        ids=ids,
        persist_directory="../chromadb")
    print(f"db", db)
    print(f"db_collection", db._collection.get(ids=[ids[0]]))
    db.persist()

    # query it
    query = "Nuclear power in the United States is provided by 99 commercial reactors with a net capacity of 100,350 megawatts (MW), 65 pressurized water reactors and 34 boiling water reactors. In 2016 they produced a total of 805.3 terawatt-hours of electricity, which accounted for 19.7% of the nation's total electric energy generation. In 2016, nuclear energy comprised nearly 60 percent of U.S. emission-free generation."
    search_result = db.similarity_search(query)

    # print results
    print(search_result[0].page_content)
    os.remove(filename)
    return True
```

### Expected behavior The embedding is done successfully and can be shown in the logs. Thank you!
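One hedged explanation: Chroma's `collection.get()` omits embeddings unless they are requested explicitly via `include`, so `'embeddings': None` here does not necessarily mean nothing was embedded:

```python
# Ask Chroma to include the vectors in the response.
print(db._collection.get(ids=[ids[0]], include=["embeddings", "documents", "metadatas"]))
```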
Embedding Seems Unsuccessful for Chroma + OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/7963/comments
6
2023-07-19T21:33:31Z
2023-08-22T09:48:08Z
https://github.com/langchain-ai/langchain/issues/7963
1,812,764,337
7,963
[ "langchain-ai", "langchain" ]
### System Info langchain 0.0.237 python 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I have an agent of type AgentType.OPENAI_MULTI_FUNCTIONS and an AzureChatOpenAI LLM. I'm trying to use the FAISS embedding capabilities with a VectorStore, as shown in this example https://techcommunity.microsoft.com/t5/startups-at-microsoft/build-a-chatbot-to-query-your-documentation-using-langchain-and/ba-p/3833134 combined with the instructions here on how to work with agents and a vector store https://python.langchain.com/docs/modules/agents/how_to/agent_vectorstore While the chain seems to be calling the desired function in order to do a similarity search:

```json
"generations": [
  [
    {
      "text": "",
      "generation_info": { "finish_reason": "function_call" },
      "message": {
        "lc": 1,
        "type": "constructor",
        "id": ["langchain", "schema", "messages", "AIMessage"],
        "kwargs": {
          "content": "",
          "additional_kwargs": {
            "function_call": {
              "name": "tool_selection",
              "arguments": "{\n  \"actions\": [\n    {\n      \"action_name\": \"<<MY_FUNCTION_NAME>>\",\n      \"action\": {}\n    }\n  ]\n}"
            }
          }
        }
      }
    }
  ]
]
```

the function call to the qa Tool fails with the error "ToolException('Too many arguments to single-input tool <<MY_FUNCTION_NAME>>. Args: []')". Any idea what may cause this error? ### Expected behavior Similarity search works as expected (same as here https://python.langchain.com/docs/modules/agents/how_to/agent_vectorstore ), even when using the AgentType.OPENAI_MULTI_FUNCTIONS agent.
AgentType.OPENAI_MULTI_FUNCTIONS with FAISS VectorStore results with "ToolException('Too many arguments to single-input" error
https://api.github.com/repos/langchain-ai/langchain/issues/7958/comments
1
2023-07-19T20:43:20Z
2023-10-25T16:05:46Z
https://github.com/langchain-ai/langchain/issues/7958
1,812,700,556
7,958
[ "langchain-ai", "langchain" ]
### System Info langchain 0.0.234 Python 3.10.9 aiohttp 3.8.4 Getting the following error when running an agent in async with agent.arun('Get time in Salta of Argentina') with a requests_get tool: [tool/start] [1:chain:AgentExecutor > 7:tool:requests_get] Entering Tool run with input: "http://worldtimeapi.org/api/timezone/America/Argentina/Salta" [tool/error] [1:chain:AgentExecutor > 7:tool:requests_get] [78.06s] Tool run errored with error: TypeError("aiohttp.client.ClientSession.request() got multiple values for keyword argument 'auth'") The issue lies in https://github.com/hwchase17/langchain/commit/663b0933e488383e6a9bc2a04b4b1cf866a8ea94 which was done to fix issue #7542 ### Who can help? @agola11 @EricSpeidel ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an agent utilizing the requests_get tool 2. Run the agent in async using arun with a prompt making it generate a GET request using the tool ### Expected behavior The auth should not be passed explicitly to session.request in the code below; it will pass through automatically as part of kwargs.

```python
@asynccontextmanager
async def _arequest(
    self, method: str, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
    """Make an async request."""
    if not self.aiosession:
        async with aiohttp.ClientSession() as session:
            async with session.request(
                method, url, headers=self.headers, auth=self.auth, **kwargs
            ) as response:
                yield response
    else:
        async with self.aiosession.request(
            method, url, headers=self.headers, auth=self.auth, **kwargs
        ) as response:
            yield response
```
TypeError("aiohttp.client.ClientSession.request() got multiple values for keyword argument 'auth'" with arun and requests_get tool on an agent
https://api.github.com/repos/langchain-ai/langchain/issues/7953/comments
1
2023-07-19T18:32:02Z
2023-10-25T16:05:51Z
https://github.com/langchain-ai/langchain/issues/7953
1,812,505,753
7,953
[ "langchain-ai", "langchain" ]
### Feature request I'm getting this back from my LLM: Observation: Use Search is not a valid tool, try another one. Obviously the LLM would like to use the search tool, but didn't request it quite right. It seems LangChain has been thoroughly tested with OpenAI, but not so much with other models, which is totally understandable considering there are now thousands of models and variations. Although I'm digging through the code to find a way to do this myself, it would be awesome if LangChain had a well-documented module we could enable to interpret or translate responses from the LLM. In this case I could use the tool list with string matching, and if any of the tool names are found in the reply, go with that tool. ### Motivation Increased compatibility with open source LLMs. ### Your contribution Generic example:

```python
tool_names = [tool.name for tool in tools]  # python list of tool names
for tool_name in tool_names:
    if ai_reply.find(tool_name) >= 0:
        # use tool
        ...
```
Ability to translate/interpret LLM tool requests
https://api.github.com/repos/langchain-ai/langchain/issues/7949/comments
1
2023-07-19T17:39:47Z
2023-10-25T16:05:56Z
https://github.com/langchain-ai/langchain/issues/7949
1,812,411,168
7,949
[ "langchain-ai", "langchain" ]
### Issue with current documentation: Hello everyone! I am following the tutorial for the [OpenAI function Agent](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent). I downloaded the Chinook database in MySQL, made the connection successfully, and got good answers from this database using the [SQLDatabase agent tutorial](https://python.langchain.com/docs/modules/agents/toolkits/sql_database). However, when I try to do a SQL query using the OpenAI functions agent, it fails. I get the following error: ```InvalidRequestError: 'MySQL DB' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.1.name'``` When I change to other kinds of agents, such as CHAT_ZERO_SHOT_REACT_DESCRIPTION, it works well. ### Idea or request for content: _No response_
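A hedged reading of the error: OpenAI function names must match ^[a-zA-Z0-9_-]{1,64}$, and the functions agent derives them from tool names, so a tool named "MySQL DB" (with a space) is rejected; renaming the tool is one fix. The wiring below is illustrative and `db_chain` is assumed to be the SQLDatabaseChain from the tutorial:

```python
from langchain.agents import Tool

sql_tool = Tool(
    name="mysql_db",  # no spaces: must match ^[a-zA-Z0-9_-]{1,64}$
    func=db_chain.run,  # db_chain: hypothetical SQLDatabaseChain instance
    description="Useful for answering questions about the Chinook database.",
)
```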
OpenAI functions Agent is not working with SQLDatabaseChain.
https://api.github.com/repos/langchain-ai/langchain/issues/7946/comments
2
2023-07-19T17:01:47Z
2023-10-25T16:06:01Z
https://github.com/langchain-ai/langchain/issues/7946
1,812,350,601
7,946
[ "langchain-ai", "langchain" ]
### System Info langchain: 0.0.236 ### Who can help? @eyurtsev When attempting to load a JSON file from S3, I encounter the following error: `An error occurred: Json schema does not match the Unstructured schema` Don't know if it is related to https://github.com/hwchase17/langchain/issues/2222 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use this code:

```python
from langchain.document_loaders import S3FileLoader

loader = S3FileLoader(bucket, file_key)
documents = loader.load()
```

Getting this error: `An error occurred: Json schema does not match the Unstructured schema` ### Expected behavior I anticipate that it will load the file into the documents, just as it does for other file types I use.
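A hedged workaround while the loader only accepts Unstructured-shaped JSON: fetch the object with boto3 and parse it with JSONLoader instead (the jq_schema below is a guess at the file's layout and must be adjusted):

```python
import tempfile

import boto3
from langchain.document_loaders import JSONLoader

s3 = boto3.client("s3")
with tempfile.NamedTemporaryFile(suffix=".json") as tmp:
    s3.download_fileobj(bucket, file_key, tmp)
    tmp.flush()
    # jq_schema assumes a top-level array; change it to match the real structure.
    documents = JSONLoader(tmp.name, jq_schema=".[]", text_content=False).load()
```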
Error when loading JSON file using S3FileLoader
https://api.github.com/repos/langchain-ai/langchain/issues/7944/comments
3
2023-07-19T16:06:55Z
2023-11-12T10:43:32Z
https://github.com/langchain-ai/langchain/issues/7944
1,812,263,812
7,944
[ "langchain-ai", "langchain" ]
### Issue with current documentation: We don't have enough documentation for ConversationChain; the only documentation found relates to ConversationalRetrievalChain. We are looking to separate out the retriever functionality. Please provide some examples of passing context (LangChain documents) to a conversational chain. ### Idea or request for content: _No response_
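A hedged sketch of one pattern the docs could show: inject document text into a prompt alongside memory, so context is passed without any retriever (the prompt wording and `llm`/`docs` objects are illustrative; `input_key` keeps the memory from choking on the extra variable):

```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["context", "history", "input"],
    template="Use this context:\n{context}\n\nHistory:\n{history}\nHuman: {input}\nAI:",
)
memory = ConversationBufferMemory(memory_key="history", input_key="input")
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
answer = chain.run(context="\n".join(d.page_content for d in docs), input=question)
```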
DOC: Passing Context to Conversational Chain
https://api.github.com/repos/langchain-ai/langchain/issues/7936/comments
4
2023-07-19T13:25:19Z
2024-03-30T12:30:15Z
https://github.com/langchain-ai/langchain/issues/7936
1,811,944,924
7,936
[ "langchain-ai", "langchain" ]
### System Info Langchain v0.0.235 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Trying out the new [create_structured_output_chain](https://github.com/hwchase17/langchain/blame/9d7e57f5c01f9ac5c8caa439ff083de98f96fdde/langchain/chains/openai_functions/base.py#L267) introduced in #7270 with the setup like described in the [docs](https://python.langchain.com/docs/modules/chains/popular/openai_functions) I wondered about the bad performance. Then I looked at the function call.... Using a very simple example: ```python class LanguageCode(BaseModel): """"A single language code in ISO 639-1 format""" language_code: str = Field(..., description="Language code (e.g. 'en', 'de', 'fr')") class LanguageClassification(BaseModel): """Classify the languages of a user prompt.""" language_codes: list[LanguageCode] = Field(default_factory=list, description="A list of all languages present in the whole text. Exclude code sections, loanwords and technical terms in the text when deciding on the language codes. You have to output at least one language code, even if you are not certain or the text is very short!") main_language_code: LanguageCode = Field(..., description="Main Language of the text.") chat_prompt_language = ChatPromptTemplate.from_messages( [ SystemMessagePromptTemplate.from_template("You are a world-class linguist and fluent in all major languages. Your job is to determine which languages are present in the user text and which one is the main language."), HumanMessagePromptTemplate.from_template("{text}"), ] ) chain = create_structured_output_chain(LanguageClassification, llm, chat_prompt_language, verbose=True) ``` Looking at `chain.llm_kwargs['functions']`: ```python [{'name': '_OutputFormatter', 'description': 'Output formatter. Should always be used to format your response to the user.', 'parameters': {'title': '_OutputFormatter', 'description': 'Output formatter. Should always be used to format your response to the user.', 'type': 'object', 'properties': {'output': {'$ref': '#/definitions/LanguageClassification'}}, 'required': ['output'], 'definitions': {'LanguageCode': {'title': 'LanguageCode', 'description': '"A single language code in ISO 639-1 format', 'type': 'object', 'properties': {'language_code': {'title': 'Language Code', 'description': "Language code (e.g. 'en', 'de', 'fr')", 'type': 'string'}}, 'required': ['language_code']}, 'LanguageClassification': {'title': 'LanguageClassification', 'description': 'Classify the languages of a user prompt.', 'type': 'object', 'properties': {'language_codes': {'title': 'Language Codes', 'description': 'A list of all languages present in the whole text. Exclude code sections, loanwords and technical terms in the text when deciding on the language codes. You have to output at least one language code, even if you are not certain or the text is very short!', 'type': 'array', 'items': {'$ref': '#/definitions/LanguageCode'}}, 'main_language_code': {'title': 'Main Language Code', 'description': 'Main Language of the text.', 'allOf': [{'$ref': '#/definitions/LanguageCode'}]}}, 'required': ['main_language_code']}}}}] ``` What? 
I can barely parse this monstrosity; how is GPT-3.5 supposed to? It is very well known by now that the function and variable names, as well as the structure and simplicity/unambiguousness (and even the order of arguments), are very important for the performance of function calls. This implementation wastes a lot of tokens and is detrimental to performance (especially for users who put their trust in langchain and don't benchmark results), and it hits a key feature. Maybe there are features that warrant some additional complexity, but this should never introduce a significant performance/token tax for completely unrelated and important use cases. ### Expected behavior Just as an example, using the exact same model/code with @jxnl's [OpenAISchema](https://github.com/jxnl/openai_function_call) and nothing else:

```python
llm_kwargs = {"functions": [LanguageClassification.openai_schema], "function_call": {'name': "LanguageClassification"}}
chain = LLMChain(llm=llm, prompt=chat_prompt_language, verbose=True, llm_kwargs=llm_kwargs)
```

we get the following schema:

```python
{'name': 'LanguageClassification', 'description': 'Classify the languages of a user prompt.', 'parameters': {'type': 'object', 'properties': {'language_codes': {'description': 'A list of all languages present in the whole text. Exclude code sections, loanwords and technical terms in the text when deciding on the language codes. You have to output at least one language code, even if you are not certain or the text is very short!', 'type': 'array', 'items': {'$ref': '#/definitions/LanguageCode'}}, 'main_language_code': {'description': 'Main Language of the text.', 'allOf': [{'$ref': '#/definitions/LanguageCode'}]}}, 'required': ['language_codes', 'main_language_code'], 'definitions': {'LanguageCode': {'description': '"A single language code in ISO 639-1 format', 'type': 'object', 'properties': {'language_code': {'description': "Language code (e.g. 'en', 'de', 'fr')", 'type': 'string'}}, 'required': ['language_code']}}}}
```

Half the tokens, and way easier to read/understand, which leads to way better performance and robustness. This can be parsed in a single line:

```python
res = chain.generate([{'text': "..."}])
LanguageClassification(**json.loads(res.generations[0][0].message.additional_kwargs['function_call']["arguments"]))
```

So there isn't even more code necessary compared to the native langchain implementation. I didn't do any extensive benchmarks, but for the use case above, the langchain implementation was slower and returned wrong results far more often, which doesn't surprise me, given well-known function_call best practices. Sorry if I sound slightly offensive here; I love to use langchain due to the many integrations and my familiarity with it, but problems/implementations like this will really, really hurt adoption in the long term, imho. Hidden complexity can have its reasons, but hidden complexity that deteriorates results significantly for important use cases without obvious reasons is really the worst. EDIT: `create_extraction_chain_pydantic` is better, but still unnecessarily bloated (and, as so often, it's not clear from the documentation what the differences are and which one is preferred for which use case...
- `create_structured_output_chain` seems to be advertised as "Popular"):

```python
[{'name': 'information_extraction', 'description': 'Extracts the relevant information from the passage.', 'parameters': {'type': 'object', 'properties': {'info': {'type': 'array', 'items': {'type': 'object', 'properties': {'language_codes': {'title': 'language_codes', 'description': 'A list of all languages present in the whole text. Exclude code sections, loanwords and technical terms in the text when deciding on the language codes. You have to output at least one language code, even if you are not certain or the text is very short!', 'type': 'array', 'items': {'description': '"A single language code in ISO 639-1 format', 'type': 'object', 'properties': {'language_code': {'description': "Language code (e.g. 'en', 'de', 'fr')", 'type': 'string'}}, 'required': ['language_code']}}, 'main_language_code': {'title': 'main_language_code', 'description': 'Main Language of the text.', 'allOf': [{'description': '"A single language code in ISO 639-1 format', 'type': 'object', 'properties': {'language_code': {'description': "Language code (e.g. 'en', 'de', 'fr')", 'type': 'string'}}, 'required': ['language_code']}]}}, 'required': ['main_language_code']}}}, 'required': ['info']}}]
```
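Until this is addressed upstream, one stopgap is to flatten the wrapper-generated schema yourself before handing it to the model. Below is a minimal, hedged sketch; `inline_refs` is a hypothetical helper of mine, not a LangChain API, and it assumes the Pydantic models are not recursive:

```python
from typing import Any, Optional

def inline_refs(schema: Any, definitions: Optional[dict] = None) -> Any:
    """Recursively replace {'$ref': '#/definitions/X'} with the definition body."""
    if definitions is None:
        definitions = schema.get("definitions", {})
    if isinstance(schema, dict):
        if "$ref" in schema:
            name = schema["$ref"].split("/")[-1]
            return inline_refs(definitions[name], definitions)
        return {k: inline_refs(v, definitions) for k, v in schema.items() if k != "definitions"}
    if isinstance(schema, list):
        return [inline_refs(item, definitions) for item in schema]
    return schema

flat = inline_refs(LanguageClassification.schema())  # pydantic v1 .schema()
llm_kwargs = {
    "functions": [{"name": "LanguageClassification",
                   "description": flat.get("description", ""),
                   "parameters": flat}],
    "function_call": {"name": "LanguageClassification"},
}
```

Paired with `LLMChain(..., llm_kwargs=llm_kwargs)` as in the OpenAISchema example above, this keeps the nested models while dropping the `_OutputFormatter` indirection.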
"create_structured_output_chain" creates awful schema, deteriorates performance and should be fixed or removed
https://api.github.com/repos/langchain-ai/langchain/issues/7935/comments
3
2023-07-19T10:50:29Z
2023-11-08T16:17:23Z
https://github.com/langchain-ai/langchain/issues/7935
1,811,689,381
7,935
[ "langchain-ai", "langchain" ]
### System Info I run my code in Google Colab ([this is the link to the code](https://colab.research.google.com/drive/1vqTz68WVT7qCGpDSahROztCUphmXy9xI?usp=sharing)).

```
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping        : 0
microcode       : 0xffffffff
cpu MHz         : 2199.998
cache size      : 56320 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed
bogomips        : 4399.99
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
```

### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
! pip install google-cloud-aiplatform==1.26 langchain==0.0.232
! pip uninstall shapely
! pip install "shapely<2.0.0"

import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = ''

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate
)
from langchain.schema import HumanMessage, SystemMessage
from langchain.chat_models import ChatVertexAI

prompt = "Give me a joke!"
system_template = "Act as a conversational chat bot"
human_template = "{prompt}"

system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)
messages = chat_prompt.format_prompt(prompt=prompt).to_messages()

model = ChatVertexAI(model_name="chat-bison@001")
response = model(messages)
```

### Expected behavior It should return a response, but instead I get this error:

```
/usr/local/lib/python3.10/dist-packages/langchain/chat_models/vertexai.py in _parse_chat_history(history)
     46     first place.
     47     """
---> 48     from vertexai.language_models import ChatMessage
     49
     50     vertex_messages, context = [], None

ImportError: cannot import name 'ChatMessage' from 'vertexai.language_models' (/usr/local/lib/python3.10/dist-packages/vertexai/language_models/__init__.py)
```

I think it should read from `vertexai._language_models` instead.
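For anyone blocked by this, a hedged stopgap is to shim the import before using `ChatVertexAI`. The assumption (worth verifying for your `google-cloud-aiplatform` version) is that the class is still exposed under the preview namespace on 1.26; upgrading the package may remove the need for it entirely:

```python
# Sketch of a shim, assuming vertexai.preview.language_models still exports ChatMessage
try:
    from vertexai.language_models import ChatMessage  # noqa: F401
except ImportError:
    import vertexai.language_models
    from vertexai.preview.language_models import ChatMessage
    # patch the attribute langchain's vertexai wrapper imports from
    vertexai.language_models.ChatMessage = ChatMessage
```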
ImportError: cannot import name 'ChatMessage' from 'vertexai.language_models' (/usr/local/lib/python3.10/dist-packages/vertexai/language_models/__init__.py)
https://api.github.com/repos/langchain-ai/langchain/issues/7932/comments
3
2023-07-19T10:17:30Z
2023-11-08T21:53:02Z
https://github.com/langchain-ai/langchain/issues/7932
1,811,635,089
7,932
[ "langchain-ai", "langchain" ]
### System Info LangChain Version: 0.0.235, Windows, Python Version: 3.9.16 As per the SQLDatabase source code below, before executing any SQL query, `connection.exec_driver_sql(f"SET search_path TO {self._schema}")` is executed for all databases except 'snowflake' and 'bigquery':

```python
if self._schema is not None:
    if self.dialect == "snowflake":
        connection.exec_driver_sql(
            f"ALTER SESSION SET search_path='{self._schema}'"
        )
    elif self.dialect == "bigquery":
        connection.exec_driver_sql(f"SET @@dataset_id='{self._schema}'")
    else:
        connection.exec_driver_sql(f"SET search_path TO {self._schema}")
cursor = connection.execute(text(command))
```

To my knowledge, the SET search_path command is specific to PostgreSQL, not Oracle, which is why I am getting the following error:

```
sqlalchemy.exc.DatabaseError: (oracledb.exceptions.DatabaseError) ORA-00922: missing or invalid option
[SQL: SET search_path TO evr1]
(Background on this error at: https://sqlalche.me/e/20/4xp6)
```

Oracle does not recognize this command. I think it would be better to use SCHEMA.TABLE in all SQL queries. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
oracle_connection_str = f"oracle+oracledb://{username}:{password}@{hostname}:{port}/?service_name={service_name}"
db = SQLDatabase.from_uri(
    oracle_connection_str,
    schema="evr1",
    include_tables=[],
    sample_rows_in_table_info=3,
)
llm = ChatOpenAI(model_name=GPT_MODEL, temperature=0, openai_api_key=OpenAI_API_KEY)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
    handle_parsing_errors=_handle_error,
    verbose=True,
)
response = agent_executor(user_input)
```

### Expected behavior Should not execute `SET search_path TO {self._schema}` for Oracle.
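A hedged sketch of how a per-dialect branch could handle Oracle; `set_default_schema` is a hypothetical helper, not LangChain code, and it relies on `ALTER SESSION SET CURRENT_SCHEMA` being Oracle's session-level equivalent of `search_path`:

```python
def set_default_schema(connection, dialect: str, schema: str) -> None:
    """Emit the dialect-appropriate statement for setting the default schema."""
    if dialect == "snowflake":
        connection.exec_driver_sql(f"ALTER SESSION SET search_path='{schema}'")
    elif dialect == "bigquery":
        connection.exec_driver_sql(f"SET @@dataset_id='{schema}'")
    elif dialect == "oracle":
        # Oracle has no search_path; CURRENT_SCHEMA changes the default schema instead.
        connection.exec_driver_sql(f"ALTER SESSION SET CURRENT_SCHEMA = {schema}")
    else:
        connection.exec_driver_sql(f"SET search_path TO {schema}")
```

Qualifying every table name as SCHEMA.TABLE, as suggested above, would avoid session statements altogether.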
SET search_path TO {self._schema} is executing by SQLDatabase for all databases except 'snowflake' and 'bigquery'.
https://api.github.com/repos/langchain-ai/langchain/issues/7928/comments
4
2023-07-19T08:42:23Z
2023-12-13T16:08:13Z
https://github.com/langchain-ai/langchain/issues/7928
1,811,463,298
7,928
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. When I use langchain.agents, can the `llm` parameter only be an OpenAI model, or can it be any other large language model? ### Suggestion: _No response_
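For what it's worth, `initialize_agent` accepts any LangChain LLM, not just OpenAI. A minimal sketch with a Hugging Face Hub model; the repo id and tool choice are just illustrative:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-xl")  # any LangChain language model works here
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 13 raised to the 0.5 power?")
```

Whether a given open model is strong enough to follow the ReAct format reliably is a separate question, but the interface itself is not OpenAI-specific.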
llm can only use OpenAI?
https://api.github.com/repos/langchain-ai/langchain/issues/7926/comments
4
2023-07-19T07:49:20Z
2023-10-26T16:05:48Z
https://github.com/langchain-ai/langchain/issues/7926
1,811,368,550
7,926
[ "langchain-ai", "langchain" ]
### System Info Bedrock Embeddings doesn't support modifying the endpoint_url, which the Bedrock LLM class does. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Not able to provide a custom endpoint_url. ### Expected behavior Able to provide a custom endpoint_url.
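Until a dedicated parameter lands, a hedged workaround is to hand the embeddings class a pre-configured boto3 client, which `BedrockEmbeddings` accepts; the service name and endpoint below are placeholders:

```python
import boto3
from langchain.embeddings import BedrockEmbeddings

client = boto3.client(
    "bedrock",  # service name may differ by SDK version
    region_name="us-east-1",
    endpoint_url="https://your-private-bedrock-endpoint.example.com",  # placeholder
)
embeddings = BedrockEmbeddings(client=client)
```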
Bedrock Embeddings: Add support for endpoint_url
https://api.github.com/repos/langchain-ai/langchain/issues/7925/comments
2
2023-07-19T07:27:08Z
2023-10-25T16:06:16Z
https://github.com/langchain-ai/langchain/issues/7925
1,811,334,507
7,925
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I've tried the search functionality using pymilvus directly, and everything works perfectly fine: with similarity search I am able to get the most relevant documents. However, when I use the same credentials and parameters with langchain.vectorstores.Milvus, I am unable to replicate the results. I'm not sure whether the connection is correct, but my similarity search is returning a blank list. These are the exact same credentials I'm currently using for my on-prem Milvus:

```python
vdb = Milvus(
    collection_name='bbc_news',
    embedding_function=embedding_model,
    connection_args={'host': 'server_host', 'port': '19530', 'user': 'admin', 'password': 'password', 'database': 'db_dsg'},
    consistency_level="Session",
    index_params=index_params,
    search_params=search_params
)

vdb.similarity_search("What is the price of crude oil")
```

This returns a blank list. Am I doing anything wrong here? ### Suggestion: _No response_
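One thing worth checking before suspecting the connection: LangChain's Milvus wrapper searches specific field names (recent versions assume a text field called "text" and a vector field called "vector", though this may vary by release), so a collection created directly with pymilvus under different field names can connect fine yet return nothing. A hedged debugging sketch:

```python
from pymilvus import Collection, connections

connections.connect(host="server_host", port="19530", user="admin",
                    password="password", db_name="db_dsg")
coll = Collection("bbc_news")
print([f.name for f in coll.schema.fields])  # compare against the wrapper's expected names
print(coll.num_entities)                     # confirm the collection actually holds data
```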
Issue: Cannot replicate search function on Langchain Milvus
https://api.github.com/repos/langchain-ai/langchain/issues/7924/comments
2
2023-07-19T07:06:55Z
2023-10-25T16:06:22Z
https://github.com/langchain-ai/langchain/issues/7924
1,811,300,473
7,924
[ "langchain-ai", "langchain" ]
### System Info langchain version: 0.0.216 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction

```python
from langchain.agents import create_csv_agent
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain.chat_models import AzureChatOpenAI
from langchain.llms import AzureOpenAI
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://####.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "#####"

import pandas as pd
df = pd.read_csv("maccabi.csv")
agent = create_pandas_dataframe_agent(AzureOpenAI(temperature=0), df, verbose=True)
agent.run("how many rows are there?")
```

Getting the following error:

```
---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
Cell In[4], line 2
      1 #agent.run("how many players's job is scorer and what is their name")
----> 2 agent.run("how many rows are there?")

File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:987, in AgentExecutor._call(self, inputs, run_manager)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:792, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:443, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:92, in LLMChain._call(self, inputs, run_manager)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:102, in LLMChain.generate(self, input_list, run_manager)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:141, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:227, in BaseLLM.generate(self, prompts, stop, callbacks, tags, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/openai.py:336, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/openai.py:106, in completion_with_retry(llm, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps..wrapped_f(*args, **kw)
...
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_resources/completion.py:25, in Completion.create(cls, *args, **kwargs)
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:230, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:624, in APIRequestor._interpret_response(self, result, stream)
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:687, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    685 stream_error = stream and "error" in resp.data
    686 if stream_error or not 200 <= rcode < 300:
--> 687 raise self.handle_error_response(
    688     rbody, rcode, resp.data, rheaders, stream_error=stream_error
    689 )
    690 return resp

InvalidRequestError: Resource not found
```

### Expected behavior getting correct response
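With Azure OpenAI, `InvalidRequestError: Resource not found` usually means the request was issued without a deployment id, so the completions URL doesn't resolve. A hedged fix, reusing the imports from the reproduction above; the deployment name is a placeholder for whatever you created in the Azure portal:

```python
llm = AzureOpenAI(temperature=0, deployment_name="my-gpt35-deployment")  # placeholder name
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("how many rows are there?")
```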
InvalidRequestError: Resource not found. when running pandas_dataframe_agent over AzureOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/7923/comments
3
2023-07-19T06:44:17Z
2023-10-30T12:05:13Z
https://github.com/langchain-ai/langchain/issues/7923
1,811,270,099
7,923
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, Here is how i have initilized conversation memory memory = ConversationSummaryBufferMemory(llm=llm_model,memory_key="chat_history",return_messages=True,max_token_limit=500) Here is how I have used ConversationalRetrievalChain chain=ConversationalRetrievalChain.from_llm(llm_model,retriever=vector.as_retriever(search_kwargs={"k": 10}), memory=memory,verbose=True) I could see that answer for first question will be good, on asking further questions with result = chain({"question": <question>}) answer from bot is not good for few questions. With verbose enabled I have observed that apart from the actual history langchain is adding few more conversations by itself. Is there a way to supress this(adding additional conversation) I also tried with ConversationBufferWindowMemory aswell. There also Iam seeing the performance drop. I tried with new langchain version as well "0.0.235", there also Iam seeing the same issue. Could you help me to undersatnd if this is an existing issue or any mistake am I doing while configuring ### Suggestion: _No response_
Issue: Not getting good chat results on enabling Coversation Memory in langchain
https://api.github.com/repos/langchain-ai/langchain/issues/7921/comments
10
2023-07-19T05:47:22Z
2023-10-27T16:06:29Z
https://github.com/langchain-ai/langchain/issues/7921
1,811,200,154
7,921
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Implementing _similarity_search_with_relevance_scores on PGVector so users can set "search_type" to "similarity_score_threshold" without raising NotImplementedError. ``` retriever = pgvector.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.7}) results = retriever.get_relevant_documents(query) ``` Using the search threshold on PGVector to avoid unrelated documents in the results. ### Suggestion: _No response_
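A hedged sketch of what this could look like as a user-side subclass in the meantime; the score mapping assumes PGVector's default cosine distance, so `1 - distance` lands in roughly [0, 1]:

```python
from typing import Any, List, Tuple

from langchain.schema import Document
from langchain.vectorstores.pgvector import PGVector

class PGVectorWithRelevance(PGVector):
    """Hypothetical subclass adding relevance scores for threshold search."""

    def _similarity_search_with_relevance_scores(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        # similarity_search_with_score returns (doc, distance); smaller is closer
        docs_and_distances = self.similarity_search_with_score(query, k=k, **kwargs)
        return [(doc, 1.0 - distance) for doc, distance in docs_and_distances]
```

With that override in place, the `as_retriever(search_type="similarity_score_threshold", ...)` call from above should stop raising `NotImplementedError`.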
Implementing search threshold on PGVector
https://api.github.com/repos/langchain-ai/langchain/issues/7905/comments
1
2023-07-18T20:27:50Z
2023-07-18T20:29:07Z
https://github.com/langchain-ai/langchain/issues/7905
1,810,665,965
7,905
[ "langchain-ai", "langchain" ]
### Feature request It would be nice if HuggingFaceHub models could be called in async mode (concurrently), as currently supported by Anthropic and OpenAI models. ### Motivation I wanted to compare performance of some models in a bunch of tasks. I was comparing Anthropic and OpenAI models, and when I tried using a HuggingFaceHub model, I noticed this functionality is not implemented. ### Your contribution I would like to know if someone is already taking on this. I could try to replicate code from other APIs to do it, but I am not sure I would be able to get to a level that is high enough to submit to LangChain.
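Until native `_agenerate` support exists, a hedged interim pattern is to push the blocking Hub call onto a thread pool so multiple prompts run concurrently; the repo id is illustrative:

```python
import asyncio
from functools import partial

from langchain import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-xl")

async def acall(prompt: str) -> str:
    loop = asyncio.get_running_loop()
    # the Hub call is blocking HTTP, so run it in the default thread pool
    return await loop.run_in_executor(None, partial(llm, prompt))

async def main() -> None:
    prompts = ["Translate 'hello' to German", "What is 2 + 2?"]
    print(await asyncio.gather(*(acall(p) for p in prompts)))

asyncio.run(main())
```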
Async calls for HuggingFaceHub
https://api.github.com/repos/langchain-ai/langchain/issues/7902/comments
6
2023-07-18T19:59:17Z
2023-11-15T16:08:10Z
https://github.com/langchain-ai/langchain/issues/7902
1,810,625,788
7,902
[ "langchain-ai", "langchain" ]
### System Info Langchain v0.0.235 Python v3.11 ### Who can help? @agola11 @hw ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm using OpenRouter, which uses the OpenAI SDK to provide different models. I encounter the problem when I use the model 'google/palm-2-chat-bison'. Here is the [gist](https://gist.github.com/alonsosilvaallende/6788eaa388bfa7e60ce84e7e155a86b5) reproducing the error. Otherwise, here is the code:

```python
import os
import openai
from langchain.chat_models import ChatOpenAI

openai.api_base = "https://openrouter.ai/api/v1"
openai.api_key = os.getenv("OPENROUTER_API_KEY")
OPENROUTER_REFERRER = "https://github.com/alexanderatallah/openrouter-streamlit"

chat = ChatOpenAI(model_name="google/palm-2-chat-bison",
                  temperature=2,
                  headers={"HTTP-Referer": OPENROUTER_REFERRER})
chat.predict("Tell me a joke")
```

Error message:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[3], line 13
---> 13 chat.predict("Tell me a joke")

File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/base.py:385, in BaseChatModel.predict(self, text, stop, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/base.py:349, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/base.py:125, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/base.py:115, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/base.py:262, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/openai.py:372, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chat_models/openai.py:394, in ChatOpenAI._create_chat_result(self, response)
    389 gen = ChatGeneration(
    390     message=message,
    391     generation_info=dict(finish_reason=res.get("finish_reason")),
    392 )
    393 generations.append(gen)
--> 394 llm_output = {"token_usage": response["usage"], "model_name": self.model_name}
    395 return ChatResult(generations=generations, llm_output=llm_output)

KeyError: 'usage'
```

### Expected behavior I expect that it doesn't give me an error, since exactly the same code works when I use the model 'openai/gpt-3.5-turbo' instead of the model 'google/palm-2-chat-bison'.
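A hedged workaround while providers disagree on the response shape: subclass `ChatOpenAI` and default the missing field before the parent parses it. `TolerantChatOpenAI` is my own name, not a LangChain class:

```python
from typing import Any, Mapping

from langchain.chat_models import ChatOpenAI
from langchain.schema import ChatResult

class TolerantChatOpenAI(ChatOpenAI):
    """Hypothetical subclass tolerating providers that omit token usage."""

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        patched = dict(response)
        patched.setdefault("usage", {})  # some OpenRouter-served models omit this
        return super()._create_chat_result(patched)
```

Used as a drop-in replacement: `chat = TolerantChatOpenAI(model_name="google/palm-2-chat-bison", headers={"HTTP-Referer": OPENROUTER_REFERRER})`.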
ChatOpenAI needs usage field that Google PaLM 2 Bison doesn't provide
https://api.github.com/repos/langchain-ai/langchain/issues/7900/comments
2
2023-07-18T19:25:22Z
2023-09-22T07:58:21Z
https://github.com/langchain-ai/langchain/issues/7900
1,810,580,170
7,900
[ "langchain-ai", "langchain" ]
### System Info Occasional errors out of the CSV agent with a JSON parsing error. They typically occur when prompting a multi-step task, but some multi-step tasks are handled fine. Even for the same multi-step task, depending on the wording of the prompt it can run successfully, but with different wording it will error out. Here's an example of the message returned:

```
File "C:\Users\\env\Lib\site-packages\langchain\agents\openai_functions_agent\base.py", line 212, in plan
    agent_decision = _parse_ai_message(predicted_message)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\\env\Lib\site-packages\langchain\agents\openai_functions_agent\base.py", line 114, in _parse_ai_message
    raise OutputParserException(
langchain.schema.OutputParserException: Could not parse tool input: {'name': 'python', 'arguments': "df_filtered = df[df['Version1Text'].str.contains('using your budget')]\nlabel_counts = df_filtered['Label'].value_counts()\nlabel_counts"} because the `arguments` is not valid JSON.
```

### Who can help? @hwchase17 @agol ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behavior: 1. Start up the CSV agent 2. One example prompt that errors: "of the rows where 'Version1Text' includes 'using your budget' what are the counts of each of the unique 'Label' values" ### Expected behavior Expected behavior is to subset the CSV based on the provided conditions and then return counts.
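A hedged mitigation while the functions-agent parser is strict about JSON: run the CSV agent with the ReAct agent type instead (which avoids the function-call `arguments` path) and let the executor recover from residual parse failures. The file name is a placeholder:

```python
from langchain.agents import create_csv_agent
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI

agent = create_csv_agent(
    ChatOpenAI(temperature=0),
    "data.csv",
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_executor_kwargs={"handle_parsing_errors": True},
    verbose=True,
)
```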
CSV Agent JSON Parsing Errors
https://api.github.com/repos/langchain-ai/langchain/issues/7897/comments
7
2023-07-18T18:13:05Z
2024-02-13T16:15:08Z
https://github.com/langchain-ai/langchain/issues/7897
1,810,454,689
7,897
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I'm updating my code to use the new OpenAI function calling structure. Requirements: - New messages saved in DynamoDB together with the user's past messages - Custom prompt: the 10 last messages from the user's DynamoDB memory - Function calling ### Past code The `create_prompt_from_messages(n)` function creates a custom prompt based on the n last messages.

```python
chain = LLMChain(llm=llm,
                 prompt=create_prompt_from_messages(10),
                 verbose=False,
                 memory=memory
                 )
```

### New code without custom prompt The code below works but sends **all** past messages to the LLM. I want to limit it to the **n** last messages. I didn't find a way to pass a custom prompt to an agent using `AgentType.OPENAI_FUNCTIONS`. Note that I don't want to delete past messages from the database.

```python
message_history = DynamoDBChatMessageHistory(table_name=conversation_table_name, session_id='0')
memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=message_history, return_messages=True)

agent_kwargs = {
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="chat_history")],
}

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,  # or OPENAI_MULTI_FUNCTIONS?
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory
)
```

How can I send only the n last messages from the agent memory? Or create a custom prompt and pass it to the OpenAI functions agent? Thank you in advance! ### Suggestion: Create a notebook with this use case for the OpenAI functions agent.
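One hedged option that fits all three requirements: keep the DynamoDB-backed history for persistence, but wrap it in a windowed memory so only the last n exchanges are rendered into the prompt. Nothing is deleted from the table; the window only limits what is loaded:

```python
from langchain.memory import ConversationBufferWindowMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

message_history = DynamoDBChatMessageHistory(table_name=conversation_table_name, session_id='0')
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    chat_memory=message_history,  # full history still persisted in DynamoDB
    return_messages=True,
    k=5,  # k counts human/AI exchanges, so this yields the last 10 messages
)
```

The rest of the `initialize_agent(..., agent_kwargs=agent_kwargs, memory=memory)` setup from above stays the same.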
Issue: OpenAI Function Agent with custom prompt from memory
https://api.github.com/repos/langchain-ai/langchain/issues/7896/comments
2
2023-07-18T18:07:22Z
2023-07-18T21:27:03Z
https://github.com/langchain-ai/langchain/issues/7896
1,810,447,563
7,896
[ "langchain-ai", "langchain" ]
### System Info Mac OS, Python 3.8.15. Versions:

Package | Version
-----------------------|--------
aiohttp | 3.8.4
aiosignal | 1.3.1
async-timeout | 4.0.2
attrs | 23.1.0
certifi | 2023.5.7
charset-normalizer | 3.2.0
dataclasses-json | 0.5.9
frozenlist | 1.4.0
**gpt4all** | **1.0.5**
greenlet | 2.0.2
idna | 3.4
**langchain** | **0.0.234**
langsmith | 0.0.5
marshmallow | 3.19.0
marshmallow-enum | 1.5.1
multidict | 6.0.4
mypy-extensions | 1.0.0
numexpr | 2.8.4
numpy | 1.24.4
openapi-schema-pydantic | 1.2.4
packaging | 23.1
pip | 23.2
**pydantic** | **1.10.11**
PyYAML | 6.0
requests | 2.31.0
setuptools | 56.0.0
SQLAlchemy | 2.0.19
tenacity | 8.2.2
tqdm | 4.65.0
typing_extensions | 4.7.1
typing-inspect | 0.9.0
urllib3 | 2.0.3
yarl | 1.9.2

### Who can help? @agola11 @hwchase17 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is a minimal code example to reproduce the error:

```python
from langchain.llms.gpt4all import GPT4All

llm = GPT4All(model="./models/gpt4all-lora-quantized-ggml.bin")
```

I get the following error:

```
Traceback (most recent call last):
  File "gpt4all_me.py", line 3, in <module>
    llm = GPT4All(model="./models/gpt4all-lora-quantized-ggml.bin")
  File "/home/cserpell/git/activelooplangchain/a/lib/python3.8/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
__root__ -> __root__
  __init__() takes 1 positional argument but 2 were given (type=type_error)
```

I tried giving the directory without the `./`, without the `./model/`, putting the file in the current directory, and some other options, with no success. ### Expected behavior The exception should not be raised, and the GPT4All model should be available to use.
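The error shape suggests a binding mismatch rather than a path problem: the gpt4all 1.0 bindings appear to have made the model path a keyword-only argument, while this langchain version's wrapper still calls the constructor positionally. A hedged workaround, under the assumption that aligning package versions (not the model path) is the fix:

```python
# Either pin the bindings below 1.0 so the positional call still works:
#   pip install "gpt4all<1.0.0"
# or upgrade langchain to a release whose GPT4All wrapper targets the 1.x API.
from langchain.llms.gpt4all import GPT4All

llm = GPT4All(model="./models/gpt4all-lora-quantized-ggml.bin")
print(llm("Tell me a short joke"))
```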
Pydantic exception when creating GPT4All model
https://api.github.com/repos/langchain-ai/langchain/issues/7895/comments
2
2023-07-18T18:01:23Z
2023-10-24T16:05:43Z
https://github.com/langchain-ai/langchain/issues/7895
1,810,440,214
7,895
[ "langchain-ai", "langchain" ]
### System Info Langchain version - 0.0.235 Python version - 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Hello, I'm trying to use a Structured Chat agent with SQL tools as well as a VectorStoreQA tool in order to retrieve data from a Postgres database and our Pinecone store. For the Agent's chat model, I'm using Azure OpenAI's gpt-3.5-turbo version 0301. I have tried providing the tools via the SQLDatabaseToolkit's get_tools() function as well as importing the individual tools and modifying their descriptions, but I get the same issue with both approaches. Despite the descriptions and formatting instructions provided below, the agent has a lot of trouble with querying the DB. It almost always starts off with 'sql_db_query' to construct a query that errors out, which then prompts the agent to check the list of tables and then their schemas. From here, it can usually arrive at the correct query - but this is not the correct behavior. Other times, it will get stuck in a loop of constructing a query, getting an error, and then either checking the query or just constructing a new one. The errors are due to the schema that it hallucinates because it didn't use the 'sql_db_list_tables' and 'sql_db_schema' tools first. In the code excerpt below, you can see the formatting instructions provided as well as the descriptions that I'm giving the SQL tools. Has anybody else had issues with SQL database tools and the StructuredChat agent? Should I be using a different type of agent for this? `````` FORMAT_INSTRUCTIONS = """Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input). For SQL queries, ALWAYS use the available tools in this order: 1. sql_db_list_tables 2. sql_db_schema 3. sql_db_query_checker 4. sql_db_query Valid "action" values: "Final Answer" or {tool_names} Provide only ONE action per $JSON_BLOB, as shown: ``` {{{{ "action": $TOOL_NAME, "action_input": $INPUT }}}} ``` Follow this format: Question: input question to answer Thought: consider previous and subsequent steps Action: ``` $JSON_BLOB ``` Observation: action result ... (repeat Thought/Action/Observation N times) Thought: I know what to respond Action: ``` {{{{ "action": "Final Answer", "action_input": "Final response to human" }}}} ```""" vectorstore_info = VectorStoreInfo( name="incident_resolution_instructions", description="MOP Documents that help users resolve incidents for different devices and causes", vectorstore=pineconeStore, ) llm = AzureChatOpenAI(temperature=0, verbose=True, deployment_name='chatgpt-35', model_name="gpt-35-turbo", max_tokens=4000) query_sql_database_tool_description = ( "ONLY use this tool AFTER using 'sql_db_schema' and 'sql_db_query_checker'." "Input to this tool is a detailed and correct SQL query, output is a " "result from the database. If the query is not correct, an error message " "will be returned. If an error is returned, rewrite the query, check the " "query, and try again. If you encounter an issue with Unknown column " "'xxxx' in 'field list', use 'sql_db_schema' to query the correct table " "fields." 
) info_sql_database_tool_description = ( """ ALWAYS use this tool second AFTER 'sql_db_list_tables'. Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling 'sql_db_list_tables' first! Example Input: table1, table2, table3 """ ) list_sql_database_tool_description = ( "ALWAYS use this tool first. Input to this tool is an empty string '', output is the list of PostgreSQL tables in the database." ) query_checker_sql_database_tool_description = ( """ ALWAYS Use this tool third AFTER 'sql_db_list_tables' and 'sql_db_schema'. ALWAYS use this tool to double check if your query is correct before executing it. ALWAYS use this tool BEFORE executing a query with 'sql_db_query' """ ) tools = [ ListSQLDatabaseTool(db=db, description=list_sql_database_tool_description), InfoSQLDatabaseTool(db=db, description=info_sql_database_tool_description), QuerySQLDataBaseTool(db=db, description=query_sql_database_tool_description), QuerySQLCheckerTool(db=db, description=query_checker_sql_database_tool_description, llm=AzureOpenAI(temperature=0, verbose=True, deployment_name='chatgpt-35', model_name="gpt-35-turbo", max_tokens=4000)), VectorStoreQATool(vectorstore=vectorstore_info.vectorstore, llm=AzureOpenAI(temperature=0, verbose=True, deployment_name='chatgpt-35', model_name="gpt-35-turbo", max_tokens=4000), name="incident_resolution_steps", description="Documentation detailing steps to resolve an incident for various device types such as Routers and Switches.") ] agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, memory=memory, agent_kwargs={ 'prefix': PREFIX, 'format_instructions': FORMAT_INSTRUCTIONS, 'suffix': SUFFIX, 'input_variables': ["input", "chat_history", "agent_scratchpad"] }) response = agent_chain.run(input=event['question']) `````` ### Expected behavior The agent should: - Follow the order provided in the default descriptions of each SQLDatabase Tool - Follow instructions for tools provided in the formatting instructions or prompt suffix
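One hedged restructuring that sidesteps the ordering problem entirely: let `create_sql_agent` (whose prompt already encodes the list-tables, then schema, then check, then query workflow) own the SQL work, and expose it to the structured chat agent as a single tool alongside the vector store tool. Names and descriptions below are illustrative:

```python
from langchain.agents import Tool, create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

sql_agent = create_sql_agent(llm=llm, toolkit=SQLDatabaseToolkit(db=db, llm=llm), verbose=True)

sql_tool = Tool(
    name="postgres_question_answering",
    description="Answers natural-language questions that require querying the Postgres database.",
    func=sql_agent.run,
)
# then: tools = [sql_tool, incident_tool]  # incident_tool = the existing VectorStoreQATool
```

The outer agent then only decides between "database question" and "documentation question", and the SQL tool ordering is enforced internally rather than through the structured agent's format instructions.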
StructuredChatAgent uses SQLDatabaseToolkit tools in wrong order
https://api.github.com/repos/langchain-ai/langchain/issues/7889/comments
6
2023-07-18T16:20:09Z
2024-06-10T17:12:36Z
https://github.com/langchain-ai/langchain/issues/7889
1,810,278,050
7,889
[ "langchain-ai", "langchain" ]
### System Info - Python 3.9.13 - langchain-0.0.235-py3-none-any.whl - chromadb-0.4.0-py3-none-any.whl ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: 1. Create a Chroma store which is locally persisted

```python
store = Chroma.from_texts(
    texts=docs,
    embedding=embeddings,
    metadatas=metadatas,
    persist_directory=environ["DB_DIR"]
)
```

2. Get the error `You are using a deprecated configuration of Chroma. Please pip install chroma-migrate and run `chroma-migrate` to upgrade your configuration. See https://docs.trychroma.com/migration for more information or join our discord at https://discord.gg/8g5FESbj for help!` 3. Suffer ### Expected behavior 1. Create a locally persisted Chroma store 2. Use the Chroma store The issue: Starting with chromadb 0.4.0, `chroma_db_impl` is no longer a supported parameter; Chroma uses sqlite instead. Removing the line `chroma_db_impl="duckdb+parquet",` from langchain/vectorstores/chroma.py solves the issue, but the earlier DB cannot be used or migrated.
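For a fresh (non-migrated) store on chromadb 0.4+, a hedged sketch of the new configuration path, which drops `chroma_db_impl` entirely; variables are reused from the reproduction above, and data persisted with the old duckdb+parquet backend still has to go through `chroma-migrate` first:

```python
import chromadb
from langchain.vectorstores import Chroma

# chromadb >= 0.4 persists via a client object instead of client settings
client = chromadb.PersistentClient(path=environ["DB_DIR"])
store = Chroma(client=client, embedding_function=embeddings)
store.add_texts(texts=docs, metadatas=metadatas)
```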
ChromaDB 0.4+ is no longer compatible with client config
https://api.github.com/repos/langchain-ai/langchain/issues/7887/comments
51
2023-07-18T15:56:56Z
2024-02-16T16:09:33Z
https://github.com/langchain-ai/langchain/issues/7887
1,810,236,515
7,887
[ "langchain-ai", "langchain" ]
### Feature request When storing embedded text in a database like pgvector, I would like to encrypt the raw text with a KMS key (or similar encryption) while still using the raw text to generate the embedding. ### Motivation Security ### Your contribution N/A
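As a stopgap that doesn't require changes in LangChain, one can embed the plaintext but hand the vector store only ciphertext. A hedged sketch: `Fernet` stands in for a KMS-managed data key, `DecryptingEmbeddings` is a hypothetical wrapper (not a LangChain class), and the connection string is a placeholder:

```python
from typing import List

from cryptography.fernet import Fernet
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.pgvector import PGVector

class DecryptingEmbeddings(Embeddings):
    """Decrypts ciphertext just-in-time so embeddings reflect the raw text."""

    def __init__(self, inner: Embeddings, fernet: Fernet) -> None:
        self.inner = inner
        self.fernet = fernet

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        plaintexts = [self.fernet.decrypt(t.encode()).decode() for t in texts]
        return self.inner.embed_documents(plaintexts)

    def embed_query(self, text: str) -> List[float]:
        return self.inner.embed_query(text)

fernet = Fernet(Fernet.generate_key())  # in practice: a KMS-derived key
texts = ["first confidential document", "second confidential document"]
ciphertexts = [fernet.encrypt(t.encode()).decode() for t in texts]

store = PGVector.from_texts(
    ciphertexts,  # only ciphertext is persisted as page_content
    DecryptingEmbeddings(OpenAIEmbeddings(), fernet),
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder
)
```

Retrieved documents come back encrypted and would need decrypting in application code.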
Encryption Key support
https://api.github.com/repos/langchain-ai/langchain/issues/7886/comments
3
2023-07-18T15:49:30Z
2023-12-25T16:09:34Z
https://github.com/langchain-ai/langchain/issues/7886
1,810,223,070
7,886
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

What is the difference between a ConversationChain and a ConversationalRetrievalChain? I had originally assumed that the ConversationalRetrievalChain would be able to take in documents, input, and memory (which I have gotten to work successfully) and that the ConversationChain could not take in our own documents. However, I found a demo online that suggests otherwise. So what exactly is the difference? It was natural for me to use ConversationalRetrievalChain because I had personal documents I wanted to use and I knew that the retrieval chains were made for that.

### Suggestion:

_No response_
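For reference, a minimal sketch of the two side by side (the `retriever` below is assumed to wrap a vector store built from your documents):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)

# ConversationChain: LLM + memory only; answers come from the model itself.
chat = ConversationChain(llm=llm, memory=ConversationBufferMemory())

# ConversationalRetrievalChain: additionally retrieves document chunks and
# stuffs them into the prompt, so answers are grounded in your own files.
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,  # e.g. vectorstore.as_retriever()
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
)
```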
Difference between ConversationChain and ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/7885/comments
3
2023-07-18T15:39:20Z
2023-11-16T13:39:15Z
https://github.com/langchain-ai/langchain/issues/7885
1,810,202,518
7,885
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

When using the python_repl tool with the ZeroShotAgent, I keep getting the following error:

`Observation: SyntaxError('invalid syntax', ('<string>', 2, 1, '%matplotlib inline\n')) Thought:`

The agent keeps looping over and over, since it does not understand that the magic command is the problem. Is this a known issue? Is there a fix, or ideally a way to enforce certain behaviours within the Python tool via a custom prompt?

### Suggestion:

_No response_
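One possible workaround (a sketch, not an official fix): wrap the REPL in a custom tool that strips IPython magic lines such as `%matplotlib inline` before execution:

```python
from langchain.agents import Tool
from langchain.utilities import PythonREPL

repl = PythonREPL()

def run_without_magics(code: str) -> str:
    # Drop lines starting with % or ! (IPython magics / shell escapes),
    # which are not valid plain-Python syntax.
    cleaned = "\n".join(
        line for line in code.splitlines()
        if not line.lstrip().startswith(("%", "!"))
    )
    return repl.run(cleaned)

python_tool = Tool(
    name="python_repl",
    func=run_without_magics,
    description="Executes Python code. IPython magics are not supported and are removed.",
)
```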
Inconsistent behaviour with the 'python_repl' tool and the ZeroShotAgent
https://api.github.com/repos/langchain-ai/langchain/issues/7882/comments
2
2023-07-18T14:12:21Z
2023-10-24T16:05:53Z
https://github.com/langchain-ai/langchain/issues/7882
1,810,022,800
7,882
[ "langchain-ai", "langchain" ]
### Feature request

I think it would be a good idea to integrate Langchain with the Fast Healthcare Interoperability Resources (FHIR) API. Combining chat techniques with FHIR, and having the ability to interact with ChatOpenAI, would give FHIR more visibility, versatility, and adaptability in healthcare use cases.

### Motivation

Integrating ChatOpenAI with FHIR (Fast Healthcare Interoperability Resources) can provide several benefits in the healthcare domain:

**Real-time Communication:** Integrating chat techniques such as ChatOpenAI with FHIR seamlessly links conversations to relevant patient health records and clinical data, enhancing the context and relevance of the discussions.

**Collaborative Decision-Making:** By integrating chat with FHIR, healthcare teams can discuss patient cases, share insights, exchange knowledge, and make informed decisions together. The ability to refer to FHIR resources, such as clinical notes, lab results, or medication information, within the chat streamlines the decision-making process.

**Contextual Information:** FHIR provides a standardized format for representing and exchanging healthcare data. Integrating the chat stream with FHIR allows relevant patient data, such as demographics, diagnoses, allergies, medications, or procedures, to be readily accessible within the chat interface.

**Workflow Efficiency:** By integrating Langchain's ChatOpenAI with FHIR, healthcare professionals can conveniently access patient data and perform necessary actions within the chat interface. For example, they can request lab results, schedule appointments, prescribe medications, or document clinical notes directly within the chat stream. This integration reduces the need to switch between multiple systems, streamlines workflow, and enhances productivity.

**Continuity of Care:** Integrating the chat stream with FHIR helps ensure continuity of care by maintaining a historical record of discussions, decisions, and interventions in the patient's health record. This allows healthcare professionals to refer to previous conversations, review treatment plans, and track the progression of care over time. It also supports care coordination and handoffs between healthcare providers.

**Patient Engagement:** Chat streams integrated with FHIR can empower patients to actively participate in their own healthcare journey. Patients can securely communicate with healthcare providers, ask questions, receive educational materials, or provide updates on their health status. Having access to their FHIR-based health data within the chat stream enables patients to have informed discussions and take ownership of their care.

The motivations for integrating Langchain with FHIR are endless.

### Your contribution

I thought of this because I have been studying FHIR for a while now. I am yet to understand the nitty-gritty, but the API has a schema (https://fhir-ru.github.io/downloads.html) that can be downloaded and integrated. In addition, the Langchain developers have built API agents, so they can do it. I am happy to provide more information.
Integrating Langchain to FHIR API
https://api.github.com/repos/langchain-ai/langchain/issues/7881/comments
11
2023-07-18T13:24:11Z
2024-05-18T23:26:45Z
https://github.com/langchain-ai/langchain/issues/7881
1,809,932,575
7,881
[ "langchain-ai", "langchain" ]
### System Info

I am using the following code to make the API call:

```
user_query = prompt.format_prompt(user_prompt=input_text)
user_query_output = chat_model(user_query.to_messages())
```

I am using Django, and since it takes some time to get a response, the entire page freezes. Is there any way to show a progress bar, or at least a message that the request is in progress?

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Include the code below under views.py:

```
user_query = prompt.format_prompt(user_prompt=input_text)
user_query_output = chat_model(user_query.to_messages())
```

Upon calling the API, the page freezes while the blocking call runs.

### Expected behavior

A message or progress bar
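A sketch of one way to avoid a frozen page: stream tokens as they arrive via a callback handler instead of blocking until the full completion returns. In Django, the handler would push tokens to the browser (e.g. over a websocket or SSE), or the call would run in a background task such as Celery:

```python
# Assumes a ChatOpenAI model; StreamingStdOutCallbackHandler just prints,
# a custom handler would forward tokens to the client instead.
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat_model = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
user_query = prompt.format_prompt(user_prompt=input_text)  # as in the report
user_query_output = chat_model(user_query.to_messages())
```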
showing progress or message under process
https://api.github.com/repos/langchain-ai/langchain/issues/7879/comments
9
2023-07-18T10:58:52Z
2023-10-26T16:05:59Z
https://github.com/langchain-ai/langchain/issues/7879
1,809,693,903
7,879
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.219
python 3.9

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

I am using the gpt-4 model with Azure OpenAI using the code below.

```
from langchain.llms import AzureOpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
import os
import openai

openai.api_type = "azure"
openai.api_base = os.getenv("OPENAI_API_BASE")
openai.api_version = "2023-03-15-preview"
openai.api_key = os.getenv("OPENAI_API_KEY")

DEPLOYMENT_NAME = 'gpt-4 model'

llm = AzureOpenAI(
    openai_api_base=os.getenv("OPENAI_API_BASE"),
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    openai_api_type="azure",
)

embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')

query = "sample query"
my_loader = DirectoryLoader('/Data', glob='**/*.pdf')
docs = my_loader.load()
text_split = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=2)
texts = text_split.split_documents(docs)
docsearch = Chroma.from_documents(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))], persist_directory="./official_db").as_retriever(search_type="similarity")
docs = docsearch.get_relevant_documents(query)
chain = load_qa_chain(llm, chain_type="stuff")
result = chain.run(input_documents=docs, question=query)
```

But it returns the error:

```
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.
```

**Note: The same code works well with the gpt-3.5 Azure OpenAI llm.**

### Expected behavior

It should be able to integrate gpt-4 without any issue.
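The error itself usually means a chat-only model (gpt-4) is being called through the completions endpoint that `AzureOpenAI` uses. A sketch of the likely fix, assuming the deployment serves a chat model:

```python
# AzureChatOpenAI targets the chat completions endpoint, which gpt-4 requires
# (AzureOpenAI targets plain completions, hence the InvalidRequestError).
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    openai_api_base=os.getenv("OPENAI_API_BASE"),
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,  # your gpt-4 deployment
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    openai_api_type="azure",
)
```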
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.
https://api.github.com/repos/langchain-ai/langchain/issues/7878/comments
2
2023-07-18T10:01:28Z
2023-07-19T04:44:19Z
https://github.com/langchain-ai/langchain/issues/7878
1,809,601,350
7,878
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I came to know that we can do document question answering in langchain in different ways. One way is using **load_qa_chain**. It also seems there are two ways to use **load_qa_chain**:

1) with **run()**

```
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
result = chain.run(input_documents=docs, question=query)
```

2) without **run()**

```
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```

What is the exact difference between these two methods?

### Suggestion:

_No response_
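For what it's worth, a small sketch of the practical difference: `run()` unwraps the single output and returns a plain string, while calling the chain returns a dict of named outputs:

```python
# chain.run(...) -> just the answer string.
answer_str = chain.run(input_documents=docs, question=query)

# chain({...}) -> a dict keyed by output name, e.g. {"output_text": "..."};
# with return_only_outputs=True the input keys are omitted from that dict.
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
answer_str_2 = result["output_text"]
```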
Difference between chain() and chain.run()
https://api.github.com/repos/langchain-ai/langchain/issues/7876/comments
6
2023-07-18T09:02:23Z
2024-01-14T19:04:13Z
https://github.com/langchain-ai/langchain/issues/7876
1,809,494,390
7,876
[ "langchain-ai", "langchain" ]
### Feature request The current Telegram loader is not very flexible. For example: ``` async for message in client.iter_messages(self.chat_entity): ``` Here are the arguments available in the api ``` def iter_messages( self: 'TelegramClient', entity: 'hints.EntityLike', limit: float = None, *, offset_date: 'hints.DateLike' = None, offset_id: int = 0, max_id: int = 0, min_id: int = 0, add_offset: int = 0, search: str = None, filter: 'typing.Union[types.TypeMessagesFilter, typing.Type[types.TypeMessagesFilter]]' = None, from_user: 'hints.EntityLike' = None, wait_time: float = None, ids: 'typing.Union[int, typing.Sequence[int]]' = None, reverse: bool = False, reply_to: int = None, scheduled: bool = False ) -> 'typing.Union[_MessagesIter, _IDsIter]': """ Iterator over the messages for the given chat. The default order is from newest to oldest, but this behaviour can be changed with the `reverse` parameter. If either `search`, `filter` or `from_user` are provided, :tl:`messages.Search` will be used instead of :tl:`messages.getHistory`. .. note:: Telegram's flood wait limit for :tl:`GetHistoryRequest` seems to be around 30 seconds per 10 requests, therefore a sleep of 1 second is the default for this limit (or above). Arguments entity (`entity`): The entity from whom to retrieve the message history. It may be `None` to perform a global search, or to get messages by their ID from no particular chat. Note that some of the offsets will not work if this is the case. Note that if you want to perform a global search, you **must** set a non-empty `search` string, a `filter`. or `from_user`. limit (`int` | `None`, optional): Number of messages to be retrieved. Due to limitations with the API retrieving more than 3000 messages will take longer than half a minute (or even more based on previous calls). The limit may also be `None`, which would eventually return the whole history. offset_date (`datetime`): Offset date (messages *previous* to this date will be retrieved). Exclusive. offset_id (`int`): Offset message ID (only messages *previous* to the given ID will be retrieved). Exclusive. max_id (`int`): All the messages with a higher (newer) ID or equal to this will be excluded. min_id (`int`): All the messages with a lower (older) ID or equal to this will be excluded. add_offset (`int`): Additional message offset (all of the specified offsets + this offset = older messages). search (`str`): The string to be used as a search query. filter (:tl:`MessagesFilter` | `type`): The filter to use when returning messages. For instance, :tl:`InputMessagesFilterPhotos` would yield only messages containing photos. from_user (`entity`): Only messages from this entity will be returned. wait_time (`int`): Wait time (in seconds) between different :tl:`GetHistoryRequest`. Use this parameter to avoid hitting the ``FloodWaitError`` as needed. If left to `None`, it will default to 1 second only if the limit is higher than 3000. If the ``ids`` parameter is used, this time will default to 10 seconds only if the amount of IDs is higher than 300. ids (`int`, `list`): A single integer ID (or several IDs) for the message that should be returned. This parameter takes precedence over the rest (which will be ignored if this is set). This can for instance be used to get the message with ID 123 from a channel. Note that if the message doesn't exist, `None` will appear in its place, so that zipping the list of IDs with the messages can match one-to-one. .. 
note:: At the time of writing, Telegram will **not** return :tl:`MessageEmpty` for :tl:`InputMessageReplyTo` IDs that failed (i.e. the message is not replying to any, or is replying to a deleted message). This means that it is **not** possible to match messages one-by-one, so be careful if you use non-integers in this parameter. reverse (`bool`, optional): If set to `True`, the messages will be returned in reverse order (from oldest to newest, instead of the default newest to oldest). This also means that the meaning of `offset_id` and `offset_date` parameters is reversed, although they will still be exclusive. `min_id` becomes equivalent to `offset_id` instead of being `max_id` as well since messages are returned in ascending order. You cannot use this if both `entity` and `ids` are `None`. reply_to (`int`, optional): If set to a message ID, the messages that reply to this ID will be returned. This feature is also known as comments in posts of broadcast channels, or viewing threads in groups. This feature can only be used in broadcast channels and their linked megagroups. Using it in a chat or private conversation will result in ``telethon.errors.PeerIdInvalidError`` to occur. When using this parameter, the ``filter`` and ``search`` parameters have no effect, since Telegram's API doesn't support searching messages in replies. .. note:: This feature is used to get replies to a message in the *discussion* group. If the same broadcast channel sends a message and replies to it itself, that reply will not be included in the results. scheduled (`bool`, optional): If set to `True`, messages which are scheduled will be returned. All other parameter will be ignored for this, except `entity`. Yields Instances of `Message <telethon.tl.custom.message.Message>`. Example .. code-block:: python # From most-recent to oldest async for message in client.iter_messages(chat): print(message.id, message.text) # From oldest to most-recent async for message in client.iter_messages(chat, reverse=True): print(message.id, message.text) # Filter by sender async for message in client.iter_messages(chat, from_user='me'): print(message.text) # Server-side search with fuzzy text async for message in client.iter_messages(chat, search='hello'): print(message.id) # Filter by message type: from telethon.tl.types import InputMessagesFilterPhotos async for message in client.iter_messages(chat, filter=InputMessagesFilterPhotos): print(message.photo) # Getting comments from a post in a channel: async for message in client.iter_messages(channel, reply_to=123): print(message.chat.title, message.text) """ ``` Of particular interest to me is the ability to specify `offset_date`, `limit`, and `reverse`. Other users may have other needs. I understand that the `BaseLoader` load method signature doesn't allow for any arguments. That leaves two options (perhaps there is a third way?): 1. Provide something like `**telegram_kwargs` in the TelegramApiChatLoader loader constructor, e.g. ``` class TelegramChatApiLoader(BaseLoader): """Loader that loads Telegram chat json directory dump.""" def __init__( self, chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = "telegram_data.json", **telegram_kwargs # <---- add this new argument ): ``` 2. 
Alternatively, refactor the `fetch_data_from_telegram` method and extract `async for message in client.iter_messages(self.chat_entity):` into a separate method that can be overridden by child classes, e.g.:

```
async def fetch_data_from_telegram(self) -> None:
    """Fetch data from Telegram API and save it as a JSON file."""
    from telethon.sync import TelegramClient

    data = []

    async with TelegramClient(self.username, self.api_id, self.api_hash) as client:
        # change this line to call a local method
        async for message in self.iter_messages(client, self.chat_entity):
            is_reply = message.reply_to is not None
            reply_to_id = message.reply_to.reply_to_msg_id if is_reply else None
            data.append(
                {
                    "sender_id": message.sender_id,
                    "text": message.text,
                    "date": message.date.isoformat(),
                    "message.id": message.id,
                    "is_reply": is_reply,
                    "reply_to_id": reply_to_id,
                }
            )

    with open(self.file_path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

# Add this new method
def iter_messages(self, client: "TelegramClient", chat_entity: Optional["EntityLike"]):
    return client.iter_messages(chat_entity, self._offset, self._limit, ...)
```

A child class can then override the constructor and the `iter_messages` method.

My current approach is to override the entire `fetch_data_from_telegram` method. This is problematic since I am duplicating code that may change in future versions of langchain.
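For reference, a minimal sketch of how option 1 might look from the caller's side; the forwarded keyword arguments are hypothetical here, not an existing loader API:

```python
# Hypothetical usage if the loader forwarded **telegram_kwargs straight to
# client.iter_messages(); offset_date, limit and reverse are the telethon
# parameters quoted above.
from datetime import datetime
from langchain.document_loaders import TelegramChatApiLoader

loader = TelegramChatApiLoader(
    chat_entity="<chat>",
    api_id=12345,
    api_hash="<api_hash>",
    username="me",
    offset_date=datetime(2023, 6, 1),  # hypothetical forwarded kwarg
    limit=500,                          # hypothetical forwarded kwarg
    reverse=True,                       # hypothetical forwarded kwarg
)
docs = loader.load()
```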
Refactoring telegram loader
https://api.github.com/repos/langchain-ai/langchain/issues/7873/comments
2
2023-07-18T07:55:37Z
2023-10-24T16:06:13Z
https://github.com/langchain-ai/langchain/issues/7873
1,809,372,101
7,873
[ "langchain-ai", "langchain" ]
### System Info

Running on Colab.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain.document_loaders import GoogleDriveLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA

os.environ["OPENAI_API_KEY"] = 'Your API Key'

loader = GoogleDriveLoader(
    folder_id="Your folder ID",
    recursive=True,
)
docs = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(split_docs, embeddings)
```

### Expected behavior

I expect to load the embeddings into Chroma, then create a QA object on top of that. The code ran without a problem yesterday but raises an OperationalError today.
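If nothing in the code changed, the timing may line up with the chromadb 0.4.0 release (see the deprecated-configuration issue above); this is an assumption, not a verified diagnosis for this exact trace. A sketch of the usual mitigations:

```python
# Option 1: pin the client to the pre-0.4 backend.
#   pip install "chromadb<0.4.0"
# Option 2: start a fresh persisted store created by the new backend.
docsearch = Chroma.from_documents(split_docs, embeddings, persist_directory="./fresh_db")
```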
OperationalError with docsearch
https://api.github.com/repos/langchain-ai/langchain/issues/7872/comments
8
2023-07-18T07:40:27Z
2023-10-24T16:06:18Z
https://github.com/langchain-ai/langchain/issues/7872
1,809,342,516
7,872
[ "langchain-ai", "langchain" ]
**When I try to use MultiPromptChain, I get the error below. Any suggestions for solving this issue?**

```
ValidationError: 16 validation errors for MultiPromptChain
destination_chains -> list -> database
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> input_key
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> llm_chain
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> query_checker_prompt
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> return_direct
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> return_intermediate_steps
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> top_k
  extra fields not permitted (type=value_error.extra)
destination_chains -> list -> use_query_checker
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> database
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> input_key
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> llm_chain
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> query_checker_prompt
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> return_direct
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> return_intermediate_steps
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> top_k
  extra fields not permitted (type=value_error.extra)
destination_chains -> query -> use_query_checker
  extra fields not permitted (type=value_error.extra)
```

**I get the above error when I try to use the code below. Any suggestions to solve it?**

```
physics_template = """You are a very smart Chatbot for helping users with physics-related questions. \
You excel at answering queries about the laws of nature and phenomena. \
When you don't have an answer, you admit that you don't know.

Here is a physics question:
{input}"""

math_template = """You are a highly skilled mathematician Chatbot. \
You specialize in answering math questions of all levels of difficulty. \
You break down complex problems into simpler components and provide comprehensive solutions.

Here is a math question:
{input}"""

prompt_infos = [
    {
        "name": "list",
        "description": "Good for answering questions about querying the data",
        "prompt_template": physics_template,
    },
    {
        "name": "query",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
    },
]

db = SQLDatabase.from_uri("YOUR DATABASE URI")
llm = OpenAI(temperature=0, model="text-davinci-003", max_tokens=1000, openai_api_key="YOUR OPENAI API KEY")

destination_chains = {}
textcontainer = st.container()
with textcontainer:
    input = st.text_input("Query: ", key="input")
    if input:
        for p_info in prompt_infos:
            name = p_info["name"]
            prompt_template = p_info["prompt_template"]
            prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
            db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True, prompt=prompt)
            destination_chains[name] = db_chain

        default_chain = ConversationChain(llm=llm, output_key="text")

        destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
        destinations_str = "\n".join(destinations)
        router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
        router_prompt = PromptTemplate(
            template=router_template,
            input_variables=["input"],
            output_parser=RouterOutputParser(),
        )
        router_chain = LLMRouterChain.from_llm(llm, router_prompt)

        db_chain = MultiPromptChain(
            router_chain=router_chain,
            destination_chains=destination_chains,
            default_chain=default_chain,
            verbose=True,
        )
```

### Suggestion:

_No response_
Can't use SQLdatabasechain with Multipromptchain
https://api.github.com/repos/langchain-ai/langchain/issues/7869/comments
2
2023-07-18T05:25:09Z
2023-10-24T16:06:23Z
https://github.com/langchain-ai/langchain/issues/7869
1,809,140,912
7,869
[ "langchain-ai", "langchain" ]
### System Info

latest

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

`collection.add_texts(["coucou"], metadatas=[{'source': "here"}])` returns an empty list.

### Expected behavior

Original add_texts() method: https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py#L144

I fixed it by adding `ids_copy` at the beginning:

```
...
if ids is None:
    ids = [str(uuid.uuid1()) for _ in texts]
ids_copy = ids
...
```

and by returning it:

```
return ids_copy
```
Chroma vectorstore add_texts() method does not return ids when there is a metadatas argument
https://api.github.com/repos/langchain-ai/langchain/issues/7865/comments
0
2023-07-18T03:42:15Z
2023-07-28T23:17:32Z
https://github.com/langchain-ai/langchain/issues/7865
1,809,020,813
7,865
[ "langchain-ai", "langchain" ]
### Feature request

The WeaviateHybridSearchRetriever does not currently have an option to retrieve scores and explanations. The lack of this feature limits the usability of the retriever, as users cannot gain insights into the scoring logic behind the search results. The ability to retrieve scores and explanations, as provided in the Weaviate library, needs to be integrated into the WeaviateHybridSearchRetriever.

Relevant Weaviate library documentation: [Hybrid search](https://weaviate.io/developers/weaviate/search/hybrid)

### Motivation

Having access to scores and explanations is crucial for users who need to understand how each result has been scored by the hybrid search algorithm. Such understanding can help in refining search queries and filtering results that do not meet certain score thresholds. In the absence of this feature, it becomes difficult to optimize search queries, and the quality of the search results can be compromised.

### Your contribution

I am ready to contribute to implementing this feature. I propose to enhance the WeaviateHybridSearchRetriever to include the retrieval of score and explainScore properties from the Weaviate library. This will involve adding relevant parameters and methods in the WeaviateHybridSearchRetriever class. I will also ensure the addition of appropriate tests to validate the functionality and correctness of the implementation.
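For context, the underlying weaviate-client already exposes these fields; a sketch of the raw query the retriever would need to issue (class and property names are placeholders):

```python
# Fetch hybrid results with score/explainScore in `_additional`; the retriever
# would surface these on the returned Documents' metadata.
result = (
    client.query
    .get("Document", ["text"])
    .with_hybrid(query="some query")
    .with_additional(["score", "explainScore"])
    .do()
)
```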
WeaviateHybridSearchRetriever has no option to retrieve scores and explanations
https://api.github.com/repos/langchain-ai/langchain/issues/7855/comments
2
2023-07-17T20:38:13Z
2023-07-18T19:50:18Z
https://github.com/langchain-ai/langchain/issues/7855
1,808,588,557
7,855
[ "langchain-ai", "langchain" ]
### System Info

Langchain version 0.0.226
M1 Mac
Python 3.11.3

### Who can help?

@hwchase17 @mmz-001

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain.text_splitter import CharacterTextSplitter

def main():
    sample_text = "This is a series of short sentences. I want them to be separated at the periods. Three sentences should be enough."
    text_splitter = CharacterTextSplitter(separator=". ", chunk_size=30, chunk_overlap=0)
    chunks = text_splitter.split_text(sample_text)
    for chunk in chunks:
        print("CHUNK:", chunk)

if __name__ == "__main__":
    main()
```

Output in version 0.0.225:

```
CHUNK: This is a series of short sentences
CHUNK: I want them to be separated at the periods
CHUNK: Three sentences should be enough.
```

Output in version 0.0.226:

```
CHUNK: Thi. i. serie. o. shor
CHUNK: sentences. wan. the. t. b
CHUNK: separate. a. th. periods
CHUNK: Thre. sentence. shoul. b
CHUNK: enough.
```

### Expected behavior

The output seen in version 0.0.225 should be the same in version 0.0.226. I suspect that the bug is related to the fix for this issue: https://github.com/hwchase17/langchain/pull/7263.

We have also noticed that in recent versions, the metadata start_index is always -1 when using create_documents(). Please let me know if I should file a separate issue for this.
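The garbled output is consistent with the separator being treated as a regex in 0.0.226, where `.` matches any character. A sketch of the interim workarounds (the escape trick can leave escaped separators inside merged chunks, so pinning is the safer option):

```python
# Option 1 (safest): pin the previous release.
#   pip install "langchain==0.0.225"

# Option 2 (assumption: the separator is passed straight to re.split in 0.0.226):
import re
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(separator=re.escape(". "), chunk_size=30, chunk_overlap=0)
```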
Strange chunks coming out of CharacterTextSplitter starting in version 0.0.226
https://api.github.com/repos/langchain-ai/langchain/issues/7854/comments
6
2023-07-17T19:53:47Z
2023-10-26T16:06:03Z
https://github.com/langchain-ai/langchain/issues/7854
1,808,509,106
7,854
[ "langchain-ai", "langchain" ]
### System Info

LangChain version: 0.0.232
Python version: 3.10.8
Platform: Windows 11, VS Code

For the following usage of WeaviateHybridSearchRetriever:

```
w_url = os.environ["WEAVIATE_URL"]
api_key_w = weaviate.AuthApiKey(api_key=os.environ["WEAVIATE_API_KEY"])
wclient = weaviate.Client(
    url=w_url,
    auth_client_secret=api_key_w,
    additional_headers={
        "X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
    },
)
retriever = WeaviateHybridSearchRetriever(wclient, index_name="testindex", text_key="text")
```

I get the following error on the line that creates the retriever instance:

```
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```

### Who can help?

@hwchase17 @agola11

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Follow the steps given in the official documentation: https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/weaviate-hybrid

### Expected behavior

The retriever should initialize correctly and be able to return queried documents.
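The TypeError comes from passing the client positionally: the retriever is now a pydantic model whose `__init__` only accepts keyword fields. A sketch of the working call:

```python
# All constructor arguments must be keywords on recent versions.
retriever = WeaviateHybridSearchRetriever(
    client=wclient, index_name="testindex", text_key="text"
)
```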
Error while creating WeaviateHybridSearchRetriever instance
https://api.github.com/repos/langchain-ai/langchain/issues/7851/comments
3
2023-07-17T18:49:19Z
2024-02-07T16:28:43Z
https://github.com/langchain-ai/langchain/issues/7851
1,808,378,977
7,851
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

I think documentation is lacking on the multitude of chains regarding QA and retrieval. There are:

- [retrieval_qa](https://github.com/hwchase17/langchain/tree/master/langchain/chains/retrieval_qa)
- [question_answering](https://github.com/hwchase17/langchain/tree/master/langchain/chains/question_answering)
- [qa_with_sources](https://github.com/hwchase17/langchain/tree/master/langchain/chains/qa_with_sources)
- [conversational_retrieval](https://github.com/hwchase17/langchain/tree/master/langchain/chains/conversational_retrieval)
- [chat_vector_db](https://github.com/hwchase17/langchain/tree/master/langchain/chains/chat_vector_db)

Which one should be used when? Which are the base chains used by the others, etc.?

### Idea or request for content:

Structured documentation of the different chains is needed. Maybe some refactoring of the directory structure to group the chains.
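As a rough sketch of how two of these relate (an illustration, not official documentation): `load_qa_chain` from `question_answering` builds the combine-documents step, and `RetrievalQA` wires a retriever in front of it:

```python
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.chains.question_answering import load_qa_chain

llm = OpenAI(temperature=0)
combine_chain = load_qa_chain(llm, chain_type="stuff")   # docs -> answer
qa = RetrievalQA(
    combine_documents_chain=combine_chain,               # retriever -> docs -> answer
    retriever=vectorstore.as_retriever(),                # assumes an existing store
)
```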
DOC: What is the difference between the various QA chains?
https://api.github.com/repos/langchain-ai/langchain/issues/7845/comments
2
2023-07-17T16:45:46Z
2023-11-03T16:06:52Z
https://github.com/langchain-ai/langchain/issues/7845
1,808,165,032
7,845
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.234, Windows 10, azure-identity==1.13.0, Python 3.11.4

### Who can help?

I managed to create an index in Azure Cognitive Search with _id_, _content_, _vector_content_ and _metadata_ fields. I checked that docs and chunks are not null. I'm getting an error when querying the vector store.

Docs: [azuresearch-langchain-example](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/azuresearch)

Any fix for this? @hwchase17 @agola11

Regards

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Embedding is working, as I test with:

```
# Check that embedding is working
input_text = "This is for demonstration."
outcome = embeddings.embed_query(input_text)
```

When I try to query with:

```
# Perform a similarity search
docs = vector_store.similarity_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
    search_type="similarity",
)
print(docs[0].page_content)
```

Error:

```
HttpResponseError: (InvalidRequestParameter) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ ] } }'
Parameter name: vector
Code: InvalidRequestParameter
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ ] } }'
Parameter name: vector
Exception Details: (InvalidVectorQuery) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ ] } }'
Code: InvalidVectorQuery
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ ] } }'
```

### Expected behavior

I can't tell whether the problem is the Azure Cognitive Search index configuration that I added manually or a bug in the code. Splitting and adding chunks to the vector store (Azure Cognitive Search) were all done without any warning.
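For what it's worth, this error usually surfaces when the query embedding comes back empty, e.g. when the store was built without a usable `embedding_function`. A sketch of wiring it explicitly (endpoint and key names are placeholders):

```python
from langchain.vectorstores.azuresearch import AzureSearch

vector_store = AzureSearch(
    azure_search_endpoint=search_endpoint,
    azure_search_key=search_key,
    index_name="testindex",
    embedding_function=embeddings.embed_query,  # must be the callable, not the object
)
```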
InvalidVectorQuery error when using AzureSearch with vector db
https://api.github.com/repos/langchain-ai/langchain/issues/7841/comments
7
2023-07-17T15:46:42Z
2023-11-15T16:07:13Z
https://github.com/langchain-ai/langchain/issues/7841
1,808,066,576
7,841
[ "langchain-ai", "langchain" ]
### System Info ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/output_parsers/json.py:32 │ │ in parse_and_check_json_markdown │ │ │ │ 29 │ │ 30 def parse_and_check_json_markdown(text: str, expected_keys: List[str]) -> dict: │ │ 31 │ try: │ │ ❱ 32 │ │ json_obj = parse_json_markdown(text) │ │ 33 │ except json.JSONDecodeError as e: │ │ 34 │ │ raise OutputParserException(f"Got invalid JSON object. Error: {e}") │ │ 35 │ for key in expected_keys: │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/output_parsers/json.py:25 │ │ in parse_json_markdown │ │ │ │ 22 │ json_str = json_str.strip() │ │ 23 │ │ │ 24 │ # Parse the JSON string into a Python dictionary │ │ ❱ 25 │ parsed = json.loads(json_str) │ │ 26 │ │ │ 27 │ return parsed │ │ 28 │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/json/__init__.py:346 in loads │ │ │ │ 343 │ if (cls is None and object_hook is None and │ │ 344 │ │ │ parse_int is None and parse_float is None and │ │ 345 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │ │ ❱ 346 │ │ return _default_decoder.decode(s) │ │ 347 │ if cls is None: │ │ 348 │ │ cls = JSONDecoder │ │ 349 │ if object_hook is not None: │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/json/decoder.py:337 in decode │ │ │ │ 334 │ │ containing a JSON document). │ │ 335 │ │ │ │ 336 │ │ """ │ │ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │ │ 338 │ │ end = _w(s, end).end() │ │ 339 │ │ if end != len(s): │ │ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/json/decoder.py:355 in raw_decode │ │ │ │ 352 │ │ try: │ │ 353 │ │ │ obj, end = self.scan_once(s, idx) │ │ 354 │ │ except StopIteration as err: │ │ ❱ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │ │ 356 │ │ return obj, end │ │ 357 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/query_constructor/b │ │ ase.py:37 in parse │ │ │ │ 34 │ │ try: │ │ 35 │ │ │ expected_keys = ["query", "filter"] │ │ 36 │ │ │ allowed_keys = ["query", "filter", "limit"] │ │ ❱ 37 │ │ │ parsed = parse_and_check_json_markdown(text, expected_keys) │ │ 38 │ │ │ if len(parsed["query"]) == 0: │ │ 39 │ │ │ │ parsed["query"] = " " │ │ 40 │ │ │ if parsed["filter"] == "NO_FILTER" or not parsed["filter"]: │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/output_parsers/json.py:34 │ │ in parse_and_check_json_markdown │ │ │ │ 31 │ try: │ │ 32 │ │ json_obj = parse_json_markdown(text) │ │ 33 │ except json.JSONDecodeError as e: │ │ ❱ 34 │ │ raise OutputParserException(f"Got invalid JSON object. Error: {e}") │ │ 35 │ for key in expected_keys: │ │ 36 │ │ if key not in json_obj: │ │ 37 │ │ │ raise OutputParserException( │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ OutputParserException: Got invalid JSON object. 
Error: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/IPython/core/magics/execution.py:132 │ │ 5 in time │ │ │ │ 1322 │ │ else: │ │ 1323 │ │ │ st = clock2() │ │ 1324 │ │ │ try: │ │ ❱ 1325 │ │ │ │ exec(code, glob, local_ns) │ │ 1326 │ │ │ │ out=None │ │ 1327 │ │ │ │ # multi-line %%time case │ │ 1328 │ │ │ │ if expr_val is not None: │ │ in <module>:1 │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/base.py:149 in │ │ __call__ │ │ │ │ 146 │ │ │ ) │ │ 147 │ │ except (KeyboardInterrupt, Exception) as e: │ │ 148 │ │ │ run_manager.on_chain_error(e) │ │ ❱ 149 │ │ │ raise e │ │ 150 │ │ run_manager.on_chain_end(outputs) │ │ 151 │ │ final_outputs: Dict[str, Any] = self.prep_outputs( │ │ 152 │ │ │ inputs, outputs, return_only_outputs │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/base.py:143 in │ │ __call__ │ │ │ │ 140 │ │ ) │ │ 141 │ │ try: │ │ 142 │ │ │ outputs = ( │ │ ❱ 143 │ │ │ │ self._call(inputs, run_manager=run_manager) │ │ 144 │ │ │ │ if new_arg_supported │ │ 145 │ │ │ │ else self._call(inputs) │ │ 146 │ │ │ ) │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/conversational_retr │ │ ieval/base.py:110 in _call │ │ │ │ 107 │ │ │ ) │ │ 108 │ │ else: │ │ 109 │ │ │ new_question = question │ │ ❱ 110 │ │ docs = self._get_docs(new_question, inputs) │ │ 111 │ │ new_inputs = inputs.copy() │ │ 112 │ │ new_inputs["question"] = new_question │ │ 113 │ │ new_inputs["chat_history"] = chat_history_str │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/conversational_retr │ │ ieval/base.py:191 in _get_docs │ │ │ │ 188 │ │ return docs[:num_docs] │ │ 189 │ │ │ 190 │ def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: │ │ ❱ 191 │ │ docs = self.retriever.get_relevant_documents(question) │ │ 192 │ │ return self._reduce_tokens_below_limit(docs) │ │ 193 │ │ │ 194 │ async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/retrievers/self_query/base │ │ .py:96 in get_relevant_documents │ │ │ │ 93 │ │ inputs = self.llm_chain.prep_inputs({"query": query}) │ │ 94 │ │ structured_query = cast( │ │ 95 │ │ │ StructuredQuery, │ │ ❱ 96 │ │ │ self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs), │ │ 97 │ │ ) │ │ 98 │ │ if self.verbose: │ │ 99 │ │ │ print(structured_query) │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/llm.py:281 in │ │ predict_and_parse │ │ │ │ 278 │ │ ) │ │ 279 │ │ result = self.predict(callbacks=callbacks, **kwargs) │ │ 280 │ │ if self.prompt.output_parser is not None: │ │ ❱ 281 │ │ │ return self.prompt.output_parser.parse(result) │ │ 282 │ │ else: │ │ 283 │ │ │ return result │ │ 284 │ │ │ │ /home/cloud/anaconda3/envs/mir/lib/python3.10/site-packages/langchain/chains/query_constructor/b │ │ ase.py:50 in parse │ │ │ │ 47 │ │ │ │ **{k: v for k, v in parsed.items() if k in allowed_keys} │ │ 48 │ │ │ ) │ │ 49 │ │ except Exception as e: │ │ ❱ 50 │ │ │ raise OutputParserException( │ │ 51 │ │ │ │ f"Parsing text\n{text}\n raised following error:\n{e}" │ │ 52 │ │ │ ) │ │ 53 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ 
OutputParserException: Parsing text
```json
 "query": "patient medical notes", "ID": "11542052" "old": ""
```
raised following error: Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)

### Who can help?

@hwchase17 @ag

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
model_id = 'google/flan-t5-xl'
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
    model=model_id,
    tokenizer=tokenizer,
    max_length=2048,
    temperature=0.1,
    top_p=0.95,
    repetition_penalty=1.
)
llm = HuggingFacePipeline(pipeline=pipe)

document_content_description = "Patient medical notes"
metadata_field_info = [
    AttributeInfo(
        name="ID",
        description="The unique identifier 'ID' of the patient",
        type="string",
    ),
    AttributeInfo(
        name="source",
        description="source of the document",
        type="string",
    ),
]
retriever = SelfQueryRetriever.from_llm(
    llm, db, document_content_description, metadata_field_info, verbose=True
)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='answer')
chain = ConversationalRetrievalChain.from_llm(
    llm=llm, retriever=retriever, memory=memory, get_chat_history=lambda h: h)
```

### Expected behavior

The expected behavior should be something like the attached output:

![outputofvicuna](https://github.com/hwchase17/langchain/assets/95228400/17ef861a-43c1-4159-9ac1-59292b8dd9e1)
Self Query Retriever with Google Flan T5 models issue
https://api.github.com/repos/langchain-ai/langchain/issues/7839/comments
3
2023-07-17T14:54:36Z
2023-11-03T18:01:40Z
https://github.com/langchain-ai/langchain/issues/7839
1,807,967,519
7,839
[ "langchain-ai", "langchain" ]
### System Info

MacOS: Ventura 13.4
langchain: 0.0.234

### Who can help?

@eyurtsev

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Extend the class BaseRetriever like the following:

```
class DocumentRetrieverExtended(BaseRetriever):
    def __init__(self, retriever, vector_field, text_field, k=3, return_source_documents=False, score_threshold=None, **kwargs):
        self.k = k
        self.vector_field = vector_field
        self.text_field = text_field
        self.return_source_documents = return_source_documents
        self.retriever = retriever
        self.filter = filter
        self.score_threshold = score_threshold
        self.kwargs = kwargs
```

2. Provide DocumentRetrieverExtended as the retriever object to ConversationalRetrievalChainExtended.from_llm()
3. You will receive the following exception:

```
File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "DocumentRetrieverExtended" object has no field "k"
```

### Expected behavior

Until version 0.0.189 this was working properly. Please consider fixing the bug or updating the langchain documentation to describe the new way of implementing it. (Honestly, I was expecting that new releases would not break what was working previously.)
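For reference, a sketch of the pattern the newer BaseRetriever (now a pydantic model) expects: fields are declared on the class instead of assigned in `__init__`:

```python
from typing import Any, List, Optional
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class DocumentRetrieverExtended(BaseRetriever):
    retriever: Any
    vector_field: str
    text_field: str
    k: int = 3
    return_source_documents: bool = False
    score_threshold: Optional[float] = None

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        ...  # existing search logic goes here
```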
BaseRetriever: Latest langchain update is breaking the implementation of extended classes
https://api.github.com/repos/langchain-ai/langchain/issues/7835/comments
7
2023-07-17T14:28:52Z
2023-11-24T16:07:44Z
https://github.com/langchain-ai/langchain/issues/7835
1,807,916,214
7,835
[ "langchain-ai", "langchain" ]
### Feature request

ConversationalRetrievalChain implements the following behavior in the **_call()** method:

```
if chat_history_str:
    callbacks = _run_manager.get_child()
    new_question = self.question_generator.run(
        question=question, chat_history=chat_history_str, callbacks=callbacks
    )
else:
    new_question = question
```

Please add the possibility to skip this step, like the following:

```
qa = ConversationalRetrievalChainExtended.from_llm(
    llm=self.llm,
    retriever=self.retriever,
    combine_docs_chain_kwargs={"prompt": PROMPT},
    return_source_documents=True,
    verbose=True,
    memory=self.memory,
    generate_question=False
)
```

in order to avoid the generation of a new question.

### Motivation

The current behavior is not generic and does not apply to all LLMs. Some foundation models require a specific format for the template, and this causes exceptions when generating the question (e.g., Anthropic repeatedly generates empty results). Moreover, most people are already formatting the template to get the desired behavior. Please include this feature ASAP.

### Your contribution

I've already extended the class. Please let me know if you need the code (it is a small change; it is strange that this is not the standard behavior).
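A sketch (not an official flag) of one way to get this today: with an empty chat history, the quoted branch falls through to the original question. Caveat: this also hides the history from the answer prompt, so it only illustrates the requested switch:

```python
from typing import Any, Dict, Optional
from langchain.chains import ConversationalRetrievalChain

class NoCondenseChain(ConversationalRetrievalChain):
    generate_question: bool = False

    def _call(self, inputs: Dict[str, Any], run_manager: Optional[Any] = None) -> Dict[str, Any]:
        if not self.generate_question:
            inputs = {**inputs, "chat_history": []}  # empty history => question kept as-is
        return super()._call(inputs, run_manager=run_manager)
```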
ConversationalRetrievalChain: Add parameter for not invoking self.question_generator.run()
https://api.github.com/repos/langchain-ai/langchain/issues/7834/comments
1
2023-07-17T14:23:37Z
2023-10-23T16:06:22Z
https://github.com/langchain-ai/langchain/issues/7834
1,807,906,358
7,834
[ "langchain-ai", "langchain" ]
### System Info

![Screenshot 2023-07-17 184319](https://github.com/hwchase17/langchain/assets/90586668/e454e70a-66da-40bd-b3c7-92d26caeccea)

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

The above code snippet does not work when I try to give the LLM an input via Streamlit or Flask. The error occurs when using `agent_type=AgentType.OPENAI_FUNCTIONS`. It raises the error below:

```
raise AttributeError(name) from None
AttributeError: OPENAI_FUNCTIONS
```

### Expected behavior

I expect that when input is given, either via a POST request through Flask or via a Streamlit input box, it should be accepted and passed on to the agent, which should run it and return the result accordingly.
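`AgentType.OPENAI_FUNCTIONS` was only added in newer releases, so an older install in the Flask/Streamlit environment raises this AttributeError. A sketch of the usual fix plus a guard:

```python
# pip install -U langchain   (in the same environment the web app runs in)
from langchain.agents import AgentType

agent_type = getattr(AgentType, "OPENAI_FUNCTIONS", None)
if agent_type is None:
    raise RuntimeError("This langchain version predates OPENAI_FUNCTIONS; please upgrade.")
```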
raise AttributeError(name) from None AttributeError: OPENAI_FUNCTIONS
https://api.github.com/repos/langchain-ai/langchain/issues/7833/comments
5
2023-07-17T13:04:07Z
2023-07-24T22:40:56Z
https://github.com/langchain-ai/langchain/issues/7833
1,807,743,134
7,833
[ "langchain-ai", "langchain" ]
### System Info

Using requirements:

* langchain==0.0.234
* weaviate-client==3.22.1

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

The error occurs when fewer documents are stored in Weaviate than requested via the fetch_k parameter. In that case, it leads to the following error:

```
File "...\Lib\site-packages\langchain\vectorstores\weaviate.py", line 273, in max_marginal_relevance_search
    return self.max_marginal_relevance_search_by_vector(
File "...\Lib\site-packages\langchain\vectorstores\weaviate.py", line 324, in max_marginal_relevance_search_by_vector
    docs.append(Document(page_content=text, metadata=meta))
File "...\Lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document
page_content
  none is not an allowed value (type=type_error.none.not_allowed)
```

Example code for reproduction: see the integration test in this fork: https://github.com/yannickulmrich/langchain-issue-7829/commit/a3fea1d08a7e55894ee18099bd5f5751aecf9159

### Expected behavior

I expect the vectorstore not to fail when the fetch_k parameter is higher than the number of Documents in the vector db. Especially since the default value of fetch_k is 20, this can lead to a lot of unwanted errors when trying out the vector db for the first time with only a few documents.
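A sketch of a caller-side guard until this is fixed upstream: cap fetch_k at the number of objects in the class ("Document" is a placeholder class name):

```python
# weaviate-client v3 aggregate API returns the object count for a class.
count = client.query.aggregate("Document").with_meta_count().do()
n = count["data"]["Aggregate"]["Document"][0]["meta"]["count"]

docs = db.max_marginal_relevance_search(query, k=3, fetch_k=min(20, n))
```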
Weaviate MMR Search fails with too high fetch_k parameter
https://api.github.com/repos/langchain-ai/langchain/issues/7829/comments
3
2023-07-17T12:26:21Z
2023-10-23T16:06:27Z
https://github.com/langchain-ai/langchain/issues/7829
1,807,668,123
7,829
[ "langchain-ai", "langchain" ]
### System Info

I have a use case where I ask an AI agent to check whether an article adheres to certain guidelines in documents. However, the recent changes in the BaseConversationalRetrievalChain _call function are causing issues.

The process of requesting information from OpenAI can be divided into two steps. In Step 1, the request, chat history, and system prompt are sent, and the question is rephrased. This step aims to prevent the retrieval of irrelevant documents and to ask a more accurate question. In Step 2, the rephrased question, along with the rephrased chat history and documents, is sent to OpenAI to receive an answer.

The problem arises after the upgrade, as the library now sends the rephrased question instead of the original question. In the process of rephrasing the question, the article is removed from the body of the question. As a result, the AI bot is no longer able to respond effectively.

A possible fix is sending the original question in Step 2; it would be nice to be able to decide on this via a function parameter.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Document

Doc: Golden guidelines for article

Golden guidelines for articles are as follows:

1. Keep the article content as short as possible
2. Don't add new content. It should be a refinement and improvement of what is already there to maintain accuracy
3. Use a clear and concise writing style: Keep sentences short and to the point.
4. Organise content using headings and subheadings: Divide the article into sections with relevant headings, making it easy for readers to find the information they need.
5. Make sure page titles and section headings are action-focused and not in question format
6. Use bullet points and numbered lists to show information in a clear and organised structure
7. Highlight important information using bold or italic text: Emphasise key points or specific instructions by using bold or italic formatting.

------

Ask this question: Review the following document and check if it is following the golden guidelines; here is my doc

---------

What is an eTicket?

Instead of a paper ticket, your ticket will be emailed to you as a PDF attachment. If you miss the email, it will also be available for download from the trip itinerary page of your TravelPerk account. This saves you time at the train station as you no longer have to print a ticket from a ticket machine.

How do I get an eTicket?

When searching for a train in the UK:

1. Select your preferred time and click on See tickets.
2. Scroll down to Ticket delivery method.
3. Select Get an eTicket.
4. Click on Add to trip.

You will receive the PDF ticket with your TravelPerk confirmation email. You will also be able to download it from your account under Trips -> select a specific trip.

How do I use my eTicket?

It's really easy! Simply open the PDF ticket on your phone and scan the barcode when you travel. If there is no scanner at the station, just show the eTicket to a staff member at the barrier.

Rephrased question: Review provided article

Final response: As an AI, I can't review the article titled "What is an eTicket?" because you haven't provided the content of the article. Please provide the article content.

### Expected behavior

The expected answer would be something like this:

The article is well-structured and follows the Golden guidelines for articles. Here are some points to note:

1. Relevance: The article is very relevant to its intended audience, which seems to be those using eTickets for train travel in the UK.
2. Clarity: The article is clear and easy to understand. The language is simple and the sentences are concise.
3. Depth: The article provides a good level of detail on its subject. It answers many potential questions a reader might have about eTickets and provides step-by-step instructions on how to get and use them.
4. Accuracy: The information in the article appears to be accurate, although without further context or sources, it's hard to verify.
Rephrased question causing issues
https://api.github.com/repos/langchain-ai/langchain/issues/7828/comments
3
2023-07-17T10:37:53Z
2024-01-30T04:37:03Z
https://github.com/langchain-ai/langchain/issues/7828
1,807,476,168
7,828
[ "langchain-ai", "langchain" ]
### System Info

```
vector_store = Milvus.from_documents(
    text, embedding=das_embedding,
    connection_args={"host": MILVUS_HOST, "port": MILVUS_PORT}
)
```

```
ValueError: status_code: 400 code: InvalidParameter message: batch size is invalid, it should not be larger than 10.: payload.input.contents
```

`text[:10]` works, but anything larger does not.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain.vectorstores import Milvus

vector_store = Milvus.from_documents(
    text, embedding=das_embedding,
    connection_args={"host": MILVUS_HOST, "port": MILVUS_PORT}
)
```

### Expected behavior

```
ValueError: status_code: 400 code: InvalidParameter message: batch size is invalid, it should not be larger than 10.: payload.input.contents
```

How can this 10-item batch-size limit be changed?
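The 400 appears to come from the embedding service, which rejects batches larger than 10, not from Milvus itself. A sketch of one workaround: add the documents in chunks of 10:

```python
from langchain.vectorstores import Milvus

# Seed the store with the first batch, then append the rest in chunks of 10
# so each embedding request stays under the service's batch limit.
vector_store = Milvus.from_documents(
    text[:10], embedding=das_embedding,
    connection_args={"host": MILVUS_HOST, "port": MILVUS_PORT},
)
for i in range(10, len(text), 10):
    vector_store.add_documents(text[i:i + 10])
```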
Chroma and Milvus: batch size should not be larger than 10
https://api.github.com/repos/langchain-ai/langchain/issues/7827/comments
1
2023-07-17T10:15:14Z
2023-10-23T16:06:37Z
https://github.com/langchain-ai/langchain/issues/7827
1,807,441,167
7,827
[ "langchain-ai", "langchain" ]
### System Info OS: Ubuntu 22.04 Docker version: Docker version 20.10.21, build 20.10.21-0ubuntu1~22.04.3 VSCode: 1.69.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behavior: 1. Go to this documentation page: https://github.com/hwchase17/langchain/tree/master/.devcontainer 2. Click the "Dev Containers Open" button 3. The dev image build is stuck at the last step (Poetry dependencies installation). When making the Poetry more verbose, it appears that it loops forever on dependencies, like this: ``` 8: derived: pytz-deprecation-shim 8: fact: pytz-deprecation-shim (0.1.0.post0) depends on tzdata (*) 8: fact: pytz-deprecation-shim (0.1.0.post0) depends on backports.zoneinfo (*) 8: selecting pytz-deprecation-shim (0.1.0.post0) 8: fact: tensorflow-hub (0.14.0) depends on numpy (>=1.12.0) 8: fact: tensorflow-hub (0.14.0) depends on protobuf (>=3.19.6) 8: selecting tensorflow-hub (0.14.0) 8: selecting termcolor (2.3.0) 8: fact: watchfiles (0.19.0) depends on anyio (>=3.0.0) 8: selecting watchfiles (0.19.0) 8: selecting pathspec (0.11.1) 8: fact: grpcio-reflection (1.47.5) depends on protobuf (>=3.12.0) 8: fact: grpcio-reflection (1.47.5) depends on grpcio (>=1.47.5) 8: selecting grpcio-reflection (1.47.5) 8: selecting iniconfig (2.0.0) 8: fact: pycares (4.3.0) depends on cffi (>=1.5.0) 8: selecting pycares (4.3.0) 8: derived: cffi (>=1.5.0) 8: selecting colored (1.4.4) 8: fact: azure-core (1.28.0) depends on requests (>=2.18.4) 8: fact: azure-core (1.28.0) depends on six (>=1.11.0) 8: fact: azure-core (1.28.0) depends on typing-extensions (>=4.3.0) ``` I even tried to wait for 5 hours, but it was not enough (my internet connection works correctly). ### Expected behavior The dev container environment should be up and ready in a couple of minutes.
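A few common mitigations worth trying; these are general Poetry debugging steps, not a confirmed fix for this container:

```bash
# Drop a possibly stale package cache so the solver starts fresh.
poetry cache clear pypi --all -n
# Re-run with maximum verbosity to see which packages the solver keeps revisiting.
poetry install -vvv
# Resolver behavior varies between Poetry releases; pinning a different one
# sometimes helps (the exact version here is illustrative, not a recommendation).
pip install "poetry==1.5.1"
```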
Cannot set up dev container because of Poetry solving dependencies forever
https://api.github.com/repos/langchain-ai/langchain/issues/7825/comments
13
2023-07-17T08:08:53Z
2024-03-13T19:56:40Z
https://github.com/langchain-ai/langchain/issues/7825
1,807,226,559
7,825
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am using the chain below. I want to use multiple categories in the filters. My logic is to bring back results where category=c1 OR category=c2 OR category=c3. How can we modify the code below to achieve this objective (one option is sketched after the snippet)? `chain = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever=vectorstore.as_retriever(search_kwargs={'filter': {'category': category}}), memory=memory, return_source_documents=True)` ### Suggestion: _No response_
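A sketch of one option, assuming the underlying vector store understands Mongo-style filter operators such as `$in` (true for e.g. Chroma and Pinecone; the exact filter syntax is store-specific, so check your backend's docs):

```python
retriever = vectorstore.as_retriever(
    # matches documents whose category is c1 OR c2 OR c3
    search_kwargs={"filter": {"category": {"$in": ["c1", "c2", "c3"]}}}
)
chain = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
)
```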
How to use multiple tags in metadata filtering
https://api.github.com/repos/langchain-ai/langchain/issues/7824/comments
7
2023-07-17T08:03:05Z
2024-02-15T16:11:05Z
https://github.com/langchain-ai/langchain/issues/7824
1,807,216,493
7,824
[ "langchain-ai", "langchain" ]
### Issue with current documentation: ``` # We're using the default `documents` table here. You can modify this by passing in a `table_name` argument to the `from_documents` method. vector_store = SupabaseVectorStore.from_documents(docs, embeddings, client=supabase) ``` ### Idea or request for content: This throws the error: httpx.ReadTimeout: The read operation timed out. Is it because the documents are too large? Is there a way to change the timeout?
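A sketch of one mitigation: insert the documents in smaller batches so each HTTP call stays under the client's read timeout. (Recent supabase-py versions also let you raise the timeout via `ClientOptions(postgrest_client_timeout=...)` when creating the client, but that parameter is version-dependent; verify it exists in your installation.)

```python
from langchain.vectorstores import SupabaseVectorStore

# docs, embeddings and supabase come from the setup above.
BATCH = 100  # batch size is illustrative; tune it to your payload sizes
vector_store = SupabaseVectorStore.from_documents(docs[:BATCH], embeddings, client=supabase)
for i in range(BATCH, len(docs), BATCH):
    vector_store.add_documents(docs[i : i + BATCH])
```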
DOC: SupabaseVectorStore.from_documents read operation timed out.
https://api.github.com/repos/langchain-ai/langchain/issues/7823/comments
2
2023-07-17T06:49:38Z
2023-07-24T22:51:06Z
https://github.com/langchain-ai/langchain/issues/7823
1,807,105,415
7,823
[ "langchain-ai", "langchain" ]
### Issue with current documentation: https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/azuresearch ### Idea or request for content: Can anyone explain more about how to create the index? If you run the example you get: `ResourceNotFoundError: () The index 'langchain-vector-demo' for service 'cognitivesearchitest' was not found.` I split the docs and add the title to the metadata, so now my documents are **_page_content_** and **_metadata_** (source, title, start_index). Do I need to manually create the 'langchain-vector-demo' index, and with what fields? Only with id, and then the index is updated when inserting the docs? Regards.
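For what it's worth, in langchain versions around this one, instantiating `AzureSearch` directly appears to create the index (with the default `id`/`content`/`content_vector`/`metadata` schema) if it does not exist yet. A sketch, with placeholder endpoint and key:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

embeddings = OpenAIEmbeddings()
vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<admin-key>",  # placeholder
    index_name="langchain-vector-demo",
    embedding_function=embeddings.embed_query,
)
# Adding documents then populates the freshly created index.
vector_store.add_documents(documents=docs)
```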
DOC: Azure Cognitive Search Vector Store
https://api.github.com/repos/langchain-ai/langchain/issues/7816/comments
1
2023-07-17T04:51:13Z
2023-10-23T16:06:41Z
https://github.com/langchain-ai/langchain/issues/7816
1,806,955,881
7,816
[ "langchain-ai", "langchain" ]
### System Info ... ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am trying to use vector search as shown below: `os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "section_summary_vector" os.environ["AZURESEARCH_FIELDS_CONTENT"] = "section_of_summary" docs = vector_store.similarity_search( query="What did the president say about Ketanji Brown Jackson", k=3, engine = "gpt35turbo", search_type="similarity" ) print(docs[0].page_content)` I am getting an error saying: HttpResponseError: () Unknown field 'content_vector' in vector field list. Parameter name: vectorFields Code: Message: Unknown field 'content_vector' in vector field list. Parameter name: vectorFields It seems that the custom vector field is not being used by the function. ### Expected behavior Be able to customize the vector field used when doing vector similarity search.
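A likely explanation, offered as an assumption rather than a confirmed diagnosis: the field names seem to be read from the environment at module import time, so setting the variables after `langchain.vectorstores.azuresearch` has been imported has no effect, and the default `content_vector` name keeps being used. A sketch of the ordering fix (the index must of course also contain the custom fields):

```python
import os

# Set the overrides BEFORE the azuresearch module is imported; its field-name
# constants appear to be resolved from the environment at import time.
os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "section_summary_vector"
os.environ["AZURESEARCH_FIELDS_CONTENT"] = "section_of_summary"

from langchain.vectorstores.azuresearch import AzureSearch  # import after setting env vars
```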
Azure Cognitive Search
https://api.github.com/repos/langchain-ai/langchain/issues/7813/comments
10
2023-07-17T03:50:56Z
2024-07-05T20:56:31Z
https://github.com/langchain-ai/langchain/issues/7813
1,806,896,033
7,813
[ "langchain-ai", "langchain" ]
### System Info 0.0.234 MacOS Big Sur ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Create a vectorstore with some documents. Try this code to retrieve the documents: `from chromadb.config import Settings from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import Chroma embeddings = OpenAIEmbeddings() vectorstore = Chroma( collection_name="langchain_store", embedding_function=embeddings, client_settings=Settings(anonymized_telemetry=False), persist_directory="./dist/vectordb", ) retriever = vectorstore.as_retriever(search_kwargs={"k": 3}) docs = retriever.get_relevant_documents("quantos litros de oleo vai no motor?") print(docs)` The result is []: no documents are returned. Now, if I change the client_settings parameter to client_settings=Settings(anonymized_telemetry=False, chroma_db_impl="duckdb+parquet", persist_directory="./dist/vectordb"), I get the correct results. ### Expected behavior It should be possible to set client_settings=Settings(anonymized_telemetry=False) # to disable telemetry and still receive the results back from the vectorstore
vectorstores Chroma client_settings anonymized_telemetry=False doesn't work
https://api.github.com/repos/langchain-ai/langchain/issues/7804/comments
8
2023-07-16T23:24:48Z
2024-07-28T16:05:14Z
https://github.com/langchain-ai/langchain/issues/7804
1,806,754,191
7,804
[ "langchain-ai", "langchain" ]
### System Info Langchain Version 0.0.233 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Embed documents from the Confluence loader directly with Weaviate.from_documents and you will get errors related to the id (e.g. {'error': [{'message': "'id' is a reserved property name, no such prop with name 'id' found in class 'LangChain_96f9046045fd4623acec34b0ee6acebb' in the schema. Check your schema files for which properties in this class are available"}]}). **** Code to reproduce **** from langchain.document_loaders import ConfluenceLoader from langchain.text_splitter import CharacterTextSplitter from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import Weaviate loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345") documents = loader.load( space_key="SPACE", include_attachments=True, limit=50, max_pages=50 ) text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False) I can fix it by creating a new list with documents that have pageid instead of id in metadata: from langchain.schema import Document new_documents = [] for doc in documents: metadata = doc.metadata.copy() metadata['pageid'] = metadata.pop('id') new_doc = Document(page_content=doc.page_content, metadata=metadata) new_documents.append(new_doc) and then continue by splitting and embedding the remapped documents: docs = text_splitter.split_documents(new_documents) db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False) ### Expected behavior Documents to get embedded without the error of: {'error': [{'message': "'id' is a reserved property name, no such prop with name 'id' found in class 'LangChain_96f9046045fd4623acec34b0ee6acebb' in the schema. Check your schema files for which properties in this class are available"}]}
Confluence loader usage of id causes a conflict with Weaviate
https://api.github.com/repos/langchain-ai/langchain/issues/7803/comments
5
2023-07-16T23:19:43Z
2023-10-21T16:40:00Z
https://github.com/langchain-ai/langchain/issues/7803
1,806,752,958
7,803
[ "langchain-ai", "langchain" ]
### System Info This is on `Python 3.10.6`, on a clean virtual env, on `Ubuntu 22.04` server w/o any GPU installed. On the other hand, `pip install langchain[llms]` installs without problem. Here is the output of `pip install langchain[all]`, for langchain==0.0.234 [1.txt](https://github.com/hwchase17/langchain/files/12064621/1.txt) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce 1. Create venv 2. Source activation script to enter venv 3. pip install langchain[all] Langchain version tested is 0.0.234. ### Expected behavior Install w/o having to try multiple versions of dependencies
pip install langchain[all] takes forever to resolve dependencies
https://api.github.com/repos/langchain-ai/langchain/issues/7798/comments
7
2023-07-16T17:04:10Z
2024-03-18T16:05:04Z
https://github.com/langchain-ai/langchain/issues/7798
1,806,652,138
7,798
[ "langchain-ai", "langchain" ]
### System Info MAC OS M2 langchain: 0.0.234 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction MultiQueryRetriever.from_llm has an outdated param type, BaseLLM: ``` @classmethod def from_llm( cls, retriever: BaseRetriever, llm: BaseLLM, prompt: PromptTemplate = DEFAULT_QUERY_PROMPT, parser_key: str = "lines", ) -> "MultiQueryRetriever": ``` When I use OpenAI, it prints a warning: ``` UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI` ``` The llm parameter type looks like it needs to be changed to BaseLanguageModel. ### Expected behavior The llm parameter type looks like it needs to be changed to BaseLanguageModel.
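A sketch of a practical workaround until the annotation is widened: Python does not enforce type hints at runtime, so passing a `ChatOpenAI` instance explicitly works in practice and avoids the legacy-OpenAI warning (the `vectorstore` below is assumed from your own setup):

```python
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever

llm = ChatOpenAI(temperature=0)
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=llm,  # runs despite the BaseLLM hint; the hint is not enforced
)
```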
MultiQueryRetriever.from_llm has outdated param type BaseLLM
https://api.github.com/repos/langchain-ai/langchain/issues/7791/comments
2
2023-07-16T14:11:08Z
2023-10-22T16:06:06Z
https://github.com/langchain-ai/langchain/issues/7791
1,806,596,774
7,791
[ "langchain-ai", "langchain" ]
### System Info Python 3.9.6 langchain==0.0.229 MacOS on Apple M2 hardware ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction This may be related to https://github.com/hwchase17/langchain/issues/7785, but I am not sure. When I use HuggingFaceEndpoint in regular, non-streaming mode, I see that the reply comes back truncated. 1. Deploy Falcon-7b https://huggingface.co/tiiuae/falcon-7b 2. Use RetrievalQA and HuggingFaceEndpoint as described in https://github.com/hwchase17/langchain/issues/7785 3. Use the prompt described in https://github.com/hwchase17/langchain/issues/7786 (not sure if you really need this to reproduce) 4. Call it using structured_result = my_qna({query_key: query}) 5. The result will come back truncated. No streaming is performed, no token callbacks run, so you will get the json from the reply: structured_result["result"] In my case, the question and answer are in Portuguese: ``` O que é a série G? A série G da Scania traz uma cabina com uma estética moderna e um confort ``` (In English, roughly: "What is the G series? The Scania G series features a cab with a modern aesthetic and a comfor...", cut off mid-word.) The last word is truncated, and the whole rest of the explanation is missing. ### Expected behavior A complete answer, with the full text. Or chunk-based callbacks via streaming, as I point out at https://github.com/hwchase17/langchain/issues/7785
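A guess at the cause, hedged accordingly: hosted text-generation endpoints usually default to a small generated-token budget, so the reply stops after the first chunk-sized allocation. Asking the endpoint for more tokens explicitly may help; `max_new_tokens` is a standard Hugging Face generation parameter passed through `model_kwargs`, and `my_endpoint` is a placeholder:

```python
from langchain.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url=my_endpoint,  # placeholder for the deployed Falcon-7b endpoint
    task="text-generation",
    model_kwargs={"temperature": 0.7, "max_new_tokens": 512},  # raise the output budget
)
```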
HuggingFaceEndpoint returns truncated answer (could this be just a first chunk of a larger reply?)
https://api.github.com/repos/langchain-ai/langchain/issues/7790/comments
1
2023-07-16T13:18:48Z
2023-10-22T16:06:11Z
https://github.com/langchain-ai/langchain/issues/7790
1,806,579,479
7,790
[ "langchain-ai", "langchain" ]
### System Info Issue: I'm experiencing an issue while trying to apply the `frequencyPenalty` parameter to the `ChatOpenAI` class in my Flask server setup. I'm running a Flask server that handles a POST query and returns a streaming response from GPT using LlamaIndex. When I attempt to apply the `frequencyPenalty` parameter to `ChatOpenAI` instance, the server throws a warning: ```python WARNING! frequencyPenalty is not default parameter. frequencyPenalty was transferred to model_kwargs. Please confirm that frequencyPenalty is what you intended. ``` Upon reviewing the documentation, I didn't find a clear way to apply the `frequencyPenalty` to the `ChatOpenAI` class. Here's a snippet of my current code where I'm experiencing this issue: ```python # LLM that supports streaming llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", streaming=True, frequencyPenalty=6) llm_predictor = LLMPredictor(llm=llm) ``` I'm using `ChatOpenAI` to create an instance of the LLM that supports streaming. I intended to apply `frequencyPenalty` to this instance but encountered the above warning. For further reference, the complete code of my Flask server setup is shared in the original post. Any guidance on how to correctly apply the `frequencyPenalty` to `ChatOpenAI` would be greatly appreciated. Thanks in advance! ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import openai from flask import Flask, request, Response from flask_cors import CORS from dotenv import load_dotenv import os import pandas as pd from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, ServiceContext, LLMPredictor, Document from langchain.chat_models import ChatOpenAI import logging app = Flask(__name__) CORS(app) load_dotenv() # Get the API key from the environment variable api_key = os.getenv('OPENAI_API_KEY') # Set the OpenAI API key directly openai.api_key = api_key # Loading documents from an Excel file df = pd.read_excel('data/SupplierComplete.xlsx') # Convert DataFrame rows into documents documents = [ Document(text=' '.join(f'{name}: {value}' for name, value in zip(df.columns, map(str, row.values)))) for _, row in df.iterrows() ] # LLM that supports streaming llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", streaming=True, frequencyPenalty=6) llm_predictor = LLMPredictor(llm=llm) # Construct a simple vector index index = GPTVectorStoreIndex.from_documents(documents, service_context=ServiceContext.from_defaults(llm_predictor=llm_predictor)) # Configure query engine to use streaming query_engine = index.as_query_engine(streaming=True, similarity_top_k=2) @app.route('/api/query', methods=['POST']) def query(): try: # Get the payload from the request payload = request.json # Log the received payload logging.info(f"Received payload: {payload}") # Update the LLMPredictor parameters based on the payload llm_predictor.max_tokens = payload.get('max_tokens', 256) llm_predictor.llm.temperature = payload.get('temperature', 0.9) # Get the system message from the payload system_message = [m['content'] for m in payload['messages'] if m['role'] == 'system'][0] # Get the question from the messages in the payload user_message = 
[m['content'] for m in payload['messages'] if m['role'] == 'user'][-1] # Combine system message and user message question = system_message + ' ' + user_message # Now, query returns a StreamingResponse object streaming_response = query_engine.query(question) def response_stream(): for text in streaming_response.response_gen: yield text + "\n" return Response(response_stream(), mimetype="text/event-stream") except Exception as e: logging.error(f"Exception occurred: {e}") return Response(f"Server error: {e}", status=500) if __name__ == '__main__': # Start the server, to run this script use "python llama_index_server.py" in terminal # Configure logging level logging.basicConfig(level=logging.DEBUG) app.run(host='0.0.0.0', port=5000, debug=True) ``` ### Expected behavior I expect to be able to apply the frequencyPenalty parameter to the ChatOpenAI class without encountering any warning messages or errors. Ideally, the frequencyPenalty should influence the LLM's generation by tuning the model's likelihood to avoid frequently occurring responses. I'm expecting this to work seamlessly with other parameters and configurations I'm setting up for my ChatOpenAI instance. If there is a specific way or an alternate parameter to achieve this functionality, I would expect the documentation to clearly illustrate this. Also, any warnings or error messages should provide actionable insights or steps for resolution.
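A sketch of the likely fix, based on the OpenAI chat API rather than anything LangChain-specific: the API expects the snake_case parameter `frequency_penalty`, with values roughly in the range -2.0 to 2.0 (so a value of 6 would be rejected), and non-default constructor keywords are best passed through `model_kwargs` explicitly:

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo-16k",
    streaming=True,
    model_kwargs={"frequency_penalty": 1.5},  # snake_case, and within [-2.0, 2.0]
)
```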
Unable to Apply frequencyPenalty Parameter to ChatOpenAI Class
https://api.github.com/repos/langchain-ai/langchain/issues/7788/comments
2
2023-07-16T12:52:03Z
2023-10-30T16:05:43Z
https://github.com/langchain-ai/langchain/issues/7788
1,806,570,994
7,788
[ "langchain-ai", "langchain" ]
### Feature request In AmazonKendraRetriever, the user should have access to a page_content formatter in order to format a Kendra ResultItem as desired, e.g. by combining all sorts of possible document attributes with the title and excerpt of the item. Currently, the [AmazonKendraRetriever](https://github.com/hwchase17/langchain/blob/master/langchain/retrievers/kendra.py#L30) does not expose a template or allow the user to override how the value of the Document page_content is generated. Let me know what you think. I am open to suggestions. @3coins @baskaryan ### Motivation The Amazon Kendra result item provides not only the title and excerpt but all sorts of customizable document attributes that could be combined to improve the result of the LLM completion. For instance, according to the official [Amazon Kendra documentation](https://docs.aws.amazon.com/kendra/latest/APIReference/API_RetrieveResultItem.html): > DocumentAttributes > An array of document fields/attributes assigned to a document in the search results. For example, the document author (_author) or the source URI (_source_uri) of the document. > > Type: Array of [DocumentAttribute](https://docs.aws.amazon.com/kendra/latest/APIReference/API_DocumentAttribute.html) objects > > Required: No Different use cases could leverage the ability to change the final Document page_content value as close to the Kendra Retriever as possible. ### Your contribution I would be glad to work on this feature. I have already started with a simple prototype and I could issue a PR proposing that change.
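To make the request concrete, a sketch of what the proposed API could look like. The `page_content_formatter` parameter is hypothetical and does not exist in the current retriever; the dict shapes loosely follow the Kendra Query response format and are simplified for illustration:

```python
def format_page_content(item: dict) -> str:
    # Combine title, excerpt and selected document attributes into one string.
    attrs = {a["Key"]: a["Value"] for a in item.get("DocumentAttributes", [])}
    title = item.get("DocumentTitle", {}).get("Text", "")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    return f"{title}\n{excerpt}\nSource: {attrs.get('_source_uri', 'unknown')}"

retriever = AmazonKendraRetriever(
    index_id=kendra_index_id,
    page_content_formatter=format_page_content,  # hypothetical parameter
)
```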
In AmazonKendraRetriever, user should have access to a page_content formatter in order to format the Kendra ResultItem as desired
https://api.github.com/repos/langchain-ai/langchain/issues/7787/comments
3
2023-07-16T12:42:17Z
2023-10-26T16:06:14Z
https://github.com/langchain-ai/langchain/issues/7787
1,806,568,070
7,787
[ "langchain-ai", "langchain" ]
### System Info Python 3.9.6 langchain==0.0.229 MacOS on Apple M2 hardware ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I provisioned [cerebras/Cerebras-GPT-2.7B](https://huggingface.co/cerebras/Cerebras-GPT-2.7B) and noticed I was getting an empty string when asking a question with my prompt, in "text-generation" mode. Upon debugging I found the issue: ``` if self.task == "text-generation": # Text generation return includes the starter text. text = generated_text[0]["generated_text"][len(prompt) :]  # <==== HERE elif self.task == "text2text-generation": text = generated_text[0]["generated_text"] elif self.task == "summarization": text = generated_text[0]["summary_text"] ``` I had to patch it like this: ``` text = generated_text[0]["generated_text"] ``` That is so because generated_text[0]["generated_text"] is NOT returning with the prompt text. Therefore, [len(prompt) :] effectively discards the actual answer. Basically, from debugging I see that the logic I want is the same as is coded under "text2text-generation". But I am using "text-generation" and the answer *does not* come back with the prompt text. Any chance the logic for these two is swapped? My prompt: ''' You are a useful and cordial assistant. Your objective is to provide precise and relevant information about Thinksurance. You must answer the questions formulated by the human user with attention and only based on the context provided. You should never invent facts. If you can't find the answer using the supplied context, just say that you are unable to answer. Context: {context} Question: {question} ''' ### Expected behavior As described above, the expected behaviour is to get the answer back, not an empty string, when a prompt is used and in "text-generation" mode.
HuggingFaceEndpoint returns empty string in "text-generation" mode with prompt template
https://api.github.com/repos/langchain-ai/langchain/issues/7786/comments
1
2023-07-16T12:15:34Z
2023-10-22T16:06:21Z
https://github.com/langchain-ai/langchain/issues/7786
1,806,559,631
7,786
[ "langchain-ai", "langchain" ]
### System Info Python 3.9.6 langchain==0.0.229 MacOS on Apple M2 hardware ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction I use RetrievalQA.from_chain_type(llm=llm, ...) and with these LLMs streaming works: - ChatOpenAI (I can pass streaming=True) - OpenAI (I can pass streaming=True) However, with HuggingFaceEndpoint the streaming just does not work. It does not even accept a streaming=bool argument. ### Expected behavior I am able to switch LLM engines via config, and would like to have the streaming feature, as the output is incrementally shown to the user, which is great for slow responses (much better usability: no need to wait the entire time for the whole answer to be computed). Unfortunately this is not possible with HuggingFaceEndpoint. https://github.com/hwchase17/langchain/issues/2918 talks about HuggingFaceHub, which is not really my case (HuggingFaceEndpoint). Is there a simple way to make it work with RetrievalQA.from_chain_type(llm=llm, ...) where the llm is created like ``` llm = HuggingFaceEndpoint(endpoint_url=my_endpoint, task="text-generation", streaming=True, callbacks=callbacks, model_kwargs={"temperature": temperature, "max_length": 1024}) ``` Thanks!
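A sketch of one possible workaround, assuming the endpoint is served by text-generation-inference: the `HuggingFaceTextGenInference` LLM (already present in langchain at this time, and requiring the `text_generation` client package) supports token streaming. Note the flag was named `stream` in versions of this era and was renamed later, so check your release:

```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = HuggingFaceTextGenInference(
    inference_server_url=my_endpoint,  # placeholder for your TGI endpoint
    max_new_tokens=512,
    temperature=0.7,
    stream=True,  # flag name may differ by langchain version
    callbacks=[StreamingStdOutCallbackHandler()],
)
```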
streaming support for LLM, from HuggingFaceEndpoint
https://api.github.com/repos/langchain-ai/langchain/issues/7785/comments
6
2023-07-16T12:04:36Z
2024-01-08T06:49:06Z
https://github.com/langchain-ai/langchain/issues/7785
1,806,556,360
7,785
[ "langchain-ai", "langchain" ]
### Feature request I want to integrate the MPT-style code LLM, ReplitLM from Replit, into langchain. ### Motivation I want a code LLM in langchain, as ReplitLM has been continuously providing good results, whether fine-tuned or not. I was working on a project for analyzing complex code patterns and structures, but generalized LLMs fail at this task, so I want to add code LLMs to langchain. ### Your contribution I just want a little guidance on how to integrate any new LLM into the framework, and then I'll start working on the PR and submit it ASAP.
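For the guidance part, a minimal sketch of the usual starting point: wrapping a model as a custom LLM subclass. The loading and generation details below are illustrative (generic transformers-style), not ReplitLM-specific, and the exact `_call` signature varies slightly between langchain versions:

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM


class ReplitLM(LLM):
    model: Any      # a loaded model handle
    tokenizer: Any  # its tokenizer

    @property
    def _llm_type(self) -> str:
        return "replit"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Encode the prompt, generate, and decode the completion.
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output = self.model.generate(**inputs, max_new_tokens=256)
        return self.tokenizer.decode(output[0], skip_special_tokens=True)
```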
ReplitLM Model_addition_in_langchain
https://api.github.com/repos/langchain-ai/langchain/issues/7784/comments
3
2023-07-16T11:36:07Z
2023-10-22T16:06:26Z
https://github.com/langchain-ai/langchain/issues/7784
1,806,548,419
7,784
[ "langchain-ai", "langchain" ]
### System Info - LangChain version: 0.0.234 - Platform: Local and AWS ECS - Python version: 3.9 ### Who can help? @3coins ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Install the latest version of Langchain 2. Follow the instructions here: https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/amazon_kendra_retriever 3. Ask a specific question that falls back to using the Query API and returns a ResultItem with type ANSWER, then inspect the content as follows. I replaced the original values with "----------------". ```json { "Id": "----------------", "Type": "ANSWER", "Format": "TABLE", "AdditionalAttributes": [ { "Key": "AnswerText", "ValueType": "TEXT_WITH_HIGHLIGHTS_VALUE", "Value": { "TextWithHighlightsValue": { "Text": "----------------", "Highlights": [ { "BeginOffset": 70, "EndOffset": 81, "TopAnswer": false, "Type": "STANDARD" } ] } } } ], "DocumentId": "----------------", "DocumentTitle": { "Text": "", "Highlights": [] }, "DocumentExcerpt": { "Text": "----------------", "Highlights": [ { "BeginOffset": 0, "EndOffset": 81, "TopAnswer": false, "Type": "STANDARD" } ] }, ... } ``` will result in: ``` Document(page_content='', metadata={'type': 'ANSWER', 'source': '----------------', 'title': '', 'excerpt': '----------------'}) ``` ### Expected behavior The document page_content should contain at least the Amazon Kendra ResultItem excerpt. According to the [Amazon Kendra documentation](https://docs.aws.amazon.com/kendra/latest/APIReference/API_QueryResultItem.html), the DocumentTitle is not required, therefore we should not rely on it in order to return the page_content as seen in the [AmazonKendraRetriever code](https://github.com/hwchase17/langchain/blob/master/langchain/retrievers/kendra.py#L41).
In the AmazonKendraRetriever, the Document page_content is empty for ResultItem with Type ANSWER when using the Query API
https://api.github.com/repos/langchain-ai/langchain/issues/7782/comments
1
2023-07-16T11:11:02Z
2023-07-19T01:46:28Z
https://github.com/langchain-ai/langchain/issues/7782
1,806,541,487
7,782
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. The KNNRetriever calculates the cosine similarity of documents and retrieves the top 'n' documents. Its behavior is identical to `FAISS.similarity_search(query)`. What is the rationale behind creating a separate KNNRetriever? ### Suggestion: Remove the KNNRetriever module.
Issue: Why does the KNNRetriever exist?
https://api.github.com/repos/langchain-ai/langchain/issues/7780/comments
1
2023-07-16T08:38:23Z
2023-10-22T16:06:31Z
https://github.com/langchain-ai/langchain/issues/7780
1,806,498,956
7,780
[ "langchain-ai", "langchain" ]
### System Info Python 3.8.10 gpt4all==1.0.5 langchain==0.0.234 pydantic==1.10.11 pydantic-core==2.3.0 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction **Code snippet:** from langchain import PromptTemplate, LLMChain from langchain.vectorstores import Chroma from langchain.chains import ConversationalRetrievalChain from langchain.memory import ConversationBufferMemory from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.llms import GPT4All embedding_model_name = "hkunlp/instructor-large" persist_directory = 'db' callbacks = [StreamingStdOutCallbackHandler()] model_path = "/home/imrohankar/gpttest/DocGPT/models/ggml-gpt4all-j-v1.3-groovy.bin" llm = GPT4All( model = model_path, callbacks = callbacks, verbose = False ) **Error:** Traceback (most recent call last): File "/home/imrohankar/gpttest/DocGPT/conversation.py", line 15, in <module> llm = GPT4All( File "/home/imrohankar/gpttest/DocGPT/venv/lib/python3.8/site-packages/langchain/load/serializable.py", line 74, in __init__ super().__init__(**kwargs) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All __root__ 'type' object is not subscriptable (type=type_error) **Note:** I tried downgrading the pydantic and langchain versions, but the error persists. I am unable to understand why it gives a type error while initializing GPT4All. ### Expected behavior I would expect it to find the model file at the model path and initialize normally.
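A guess at the root cause rather than a confirmed diagnosis: "'type' object is not subscriptable" on Python 3.8 usually means some dependency evaluates built-in generics such as `list[str]` at runtime, which require Python 3.9+. Two things worth trying (the pinned version below is purely illustrative, not a known-good release):

```bash
python3.9 -m venv venv          # recreate the environment on Python >= 3.9
pip install "gpt4all==1.0.1"    # or try an older gpt4all release (version illustrative)
```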
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
https://api.github.com/repos/langchain-ai/langchain/issues/7778/comments
6
2023-07-16T07:53:03Z
2023-09-13T09:48:21Z
https://github.com/langchain-ai/langchain/issues/7778
1,806,486,875
7,778
[ "langchain-ai", "langchain" ]
### System Info Langchain Version: 0.0.207 ### Who can help? Is there a way to get the whole output with Output Parser or OpenAI function calling? I have a simple prompt where I get the LLM to output responses to a set of questions, and I would like to get a structured response that separates the question number and the response to the question generated by the LLM. I have been testing it out with Structured Output Parsers but when I use it, the answers I get are often a shortened version of the original answer. This depends on the description I put in the answer_schema below. I've tried a variety of descriptions to capture as much of the original response as possible, but it is always lacking a lot of detail. ``` question_number_schema = ResponseSchema(name = "question_number", description = "Question Number, e.g. 1, 2, 3", type = "number") answer_schema = ResponseSchema(name = "answer", description = "Detailed Response") response_schemas = [question_number_schema, answer_schema] ``` ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction For example, if my original output without the Structured Output Parser was: ``` ''' Question 3: Unsupervised fine-tuning involves updating a pre-existing language model using unstructured datasets such as research papers, articles, forums, or websites. This approach allows the language model to learn patterns and vocabularies within the data without the need for explicit human labeling. Question 4: Supervised fine-tuning leverages labeled samples of data to train the language model. These labeled samples consist of input prompts paired with corresponding desired outputs, providing explicit guidance on the desired structure or behavior. This approach is used when specific outputs or classifications are required, such as text classification. ''' ``` I'm currently getting something like: ``` {"Question 3": "Unsupervised fine-tuning updates pre-existing language models using unstructured datasets.", "Question 4": "Supervised fine-tuning trains language models with labeled samples."} ``` ### Expected behavior I'd like to get something like this: ``` {"Question 3": "Unsupervised fine-tuning involves updating a pre-existing language model using unstructured datasets such as research papers, articles, forums, or websites. This approach allows the language model to learn patterns and vocabularies within the data without the need for explicit human labeling.", "Question 4": "Supervised fine-tuning leverages labeled samples of data to train the language model. These labeled samples consist of input prompts paired with corresponding desired outputs, providing explicit guidance on the desired structure or behavior. This approach is used when specific outputs or classifications are required, such as text classification."} ``` This isn't the exact example I'm using, but the idea is similar.
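One mitigation sketch: since the schema description is the main lever over how the model fills each field, make it demand the full text explicitly. The wording below is illustrative:

```python
from langchain.output_parsers import ResponseSchema

answer_schema = ResponseSchema(
    name="answer",
    description=(
        "The complete answer, copied in full and verbatim; "
        "do not shorten, summarize, or paraphrase it."
    ),
)
```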
Capturing all content from Output Parser / OpenAI Function Calling
https://api.github.com/repos/langchain-ai/langchain/issues/7770/comments
2
2023-07-16T02:27:58Z
2023-10-22T16:06:37Z
https://github.com/langchain-ai/langchain/issues/7770
1,806,402,879
7,770
[ "langchain-ai", "langchain" ]
### Feature request I want to be able to use Google Palm (bison) as the underlying LLM for agents. Currently I'm using chatGPT, but I also want to experiment with how Google Palm performs at tool-picking. I have API access, so I can experiment with it. ### Motivation I propose Google's Palm because I've anecdotally seen examples where it performed well in summarization and decision-making tasks. That makes me think it would be a very compelling candidate for agent use. ### Your contribution Will I need to override some class within langchain? If you can give me step-by-step instructions, I might be able to help and submit a PR. Thanks!
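A sketch of how this can look, assuming the `GooglePalm` integration (backed by the `google-generativeai` package) is installed and an API key is configured in the environment:

```python
from langchain.llms import GooglePalm
from langchain.agents import AgentType, initialize_agent, load_tools

llm = GooglePalm(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 13 raised to the 0.5 power?")
```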
Google Palm 2 as underlying LLM for Agent
https://api.github.com/repos/langchain-ai/langchain/issues/7763/comments
3
2023-07-15T20:08:33Z
2024-01-30T00:48:46Z
https://github.com/langchain-ai/langchain/issues/7763
1,806,311,364
7,763
[ "langchain-ai", "langchain" ]
### System Info **System Information** System: `Linux` OS: `Pop OS` Langchain version: `0.0.232` Python version: `3.9.17` gpt4all version: tested with both `1.0.1` and `1.0.3`. ### Who can help? Models: @hwchase17 Streaming Callbacks: @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction ```python3 import gpt4all as gpt from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """ Let's think step by step of the question: {question} Based on all the thought the final answer becomes: """ prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = ( "./model/ggml-gpt4all-j-v1.3-groovy.bin" ) callbacks = [StreamingStdOutCallbackHandler()] llm = GPT4All( model=local_path, backend="llama", verbose=True, callbacks=callbacks ) llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ``` ### Expected behavior I should see tokens streaming and printing in the terminal one at a time, but instead I get everything all at once.
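A sketch of the usual fix, assuming your langchain release exposes a `streaming` field on the GPT4All wrapper (check your version). Note also that `backend="llama"` looks mismatched for a GPT-J checkpoint such as ggml-gpt4all-j; "gptj" would be the expected backend there:

```python
llm = GPT4All(
    model=local_path,
    backend="gptj",   # groovy is a GPT-J model; "llama" appears to be the wrong backend
    streaming=True,   # without this, the text may be returned in one batch
    callbacks=callbacks,
    verbose=True,
)
```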
Streaming does not work using streaming callbacks for gpt4all model
https://api.github.com/repos/langchain-ai/langchain/issues/7747/comments
8
2023-07-15T05:47:15Z
2024-05-12T04:10:32Z
https://github.com/langchain-ai/langchain/issues/7747
1,805,914,594
7,747
[ "langchain-ai", "langchain" ]
### Issue with current documentation: ``` pip install arxiv from langchain.chat_models import ChatOpenAI from langchain.agents import load_tools, initialize_agent, AgentType ``` ``` llm = ChatOpenAI(temperature=0.0) tools = load_tools( ["arxiv"], ) ``` ``` agent_chain = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, ) agent_chain.run( "What's the paper 1605.08386 about?", ) ``` This very often causes an OutputParserException due to a failure to parse the final thought: `OutputParserException: Could not parse LLM output` ### Idea or request for content: It seems that just changing the llm to `llm = OpenAI(temperature=0.0)` (imported via `from langchain.llms import OpenAI`) helps a lot with successful output completion.
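Another mitigation worth documenting, sketched below: `initialize_agent` forwards `handle_parsing_errors` to the `AgentExecutor`, so an unparseable final thought is fed back to the model instead of raising:

```python
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,  # recover from OutputParserException instead of failing
)
```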
DOC: Arxiv API Tool code snippet is very unstable and very often produces an OutputParserException
https://api.github.com/repos/langchain-ai/langchain/issues/7742/comments
2
2023-07-15T00:46:11Z
2023-10-21T16:06:35Z
https://github.com/langchain-ai/langchain/issues/7742
1,805,766,414
7,742
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. According to [here](https://python.langchain.com/docs/modules/chains/how_to/call_methods), all subclass inherited from the Chain class will have the `__call__()` and the `run()` methods to launch the chain. And according to the [LLMChain API](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html?highlight=llmchain#langchain.chains.llm.LLMChain) and [SimpleSequentialChain API](https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html#langchain.chains.sequential.SimpleSequentialChain), both of them are inherited from the Chain class. However I found LLMChain and SimpleSequentialChain accept different kinds of input patterns when calling `__call__()` and the `run()`, which are highly confusing. To demonstrate, consider the following setup, where we create an LLMChain that runs a prompt template that simply asks the LLM to repeat what is told, and attach the LLMChain to a SimpleSequentialChain: ``` from langchain.chains import LLMChain, SimpleSequentialChain from langchain.prompts import StringPromptTemplate from pydantic import BaseModel, validator class CustomTemplate(StringPromptTemplate, BaseModel): """A custom prompt template that takes in the path to a json file as input, and formats the prompt template.""" @validator("input_variables") def validate_input_variables(cls, v): """Validate that the input variables are correct.""" if len(v) == 0 or "CustomKwarg" not in v: raise ValueError("CustomKwarg keyword argument must be provided .") return v def format(self, **kwargs) -> str: return "Repeat the following: \"" + kwargs["CustomKwarg"] + "\"" llmchain = LLMChain(llm=llm, prompt=CustomTemplate(input_variables=["CustomKwarg"]), verbose=True) simplesequentialchain = SimpleSequentialChain(chains=[llmchain]) ``` where ``llm`` is a custom llm. In the prompt template, we expect a keyword argument (kwarg) called `CustomKwarg` to be provided when we launch the chains. There are, however, several ways to provide `CustomKwarg`, e.g. directly providing the value for `CustomKwarg`, or providing a dictionary that maps a key "CustomKwarg" to the value of `CustomKwarg`. Also, sometimes we need to specify `input` or `inputs` as the keyword argument when we make the function call. These lead to many possible combination of syntaxes to launch the chains, and I found that the syntaxes accepted by the LLMChain and SimpleSequentialChain very different and inconsistent. Assuming we want to provide "XYZ!" as the value for CustomKwarg to the chain, see the summary of the launching results below. 
|#|command | `c = llmchain` | `c = simplesequentialchain` |
|-|---------------------------------------------|-----------|--------------------------|
|1|`c("XYZ!")`| O | O |
|2|`c({"CustomKwarg": "XYZ!"})`| O | `Missing some input keys: {'input'}` |
|3|`c({"input": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | O |
|4|`c({"inputs": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | `Missing some input keys: {'input'}` |
|5|`c(input={"CustomKwarg": "XYZ!"})`| `Chain.__call__() got an unexpected keyword argument 'input'` | `Chain.__call__() got an unexpected keyword argument 'input'` |
|6|`c(inputs={"CustomKwarg": "XYZ!"})`| O | `Missing some input keys: {'input'}` |
|7|`c.run("XYZ!")`| O | O |
|8|`c.run({"CustomKwarg": "XYZ!"})`| O | `Missing some input keys: {'input'}` |
|9|`c.run({"input": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | O |
|10|`c.run({"inputs": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | `Missing some input keys: {'input'}` |
|11|`c.run(input={"CustomKwarg": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | O |
|12|`c.run(inputs={"CustomKwarg": "XYZ!"})`| `Missing some input keys: {'CustomKwarg'}` | `Missing some input keys: {'input'}` |

The allowed syntax patterns are highly inconsistent between the two types of chains. For the same syntax, the two types of chains can even fail with different error messages, which is very confusing. Since both of the chains are subclasses of Chain, one would expect them to behave similarly, especially for those methods inherited from the Chain class (`__call__()` and `run()`). Note: I am using **LangChain ver. 0.0.225**.
Issue: Chain call methods are confusing (LLMChain vs SimpleSequentialChain)
https://api.github.com/repos/langchain-ai/langchain/issues/7738/comments
2
2023-07-14T23:21:30Z
2023-12-13T16:08:18Z
https://github.com/langchain-ai/langchain/issues/7738
1,805,711,794
7,738
[ "langchain-ai", "langchain" ]
### System Info Hi. I wanted to deploy an application with Langchain, but I am unable to pass security scans because of the following vulnerabilities: [CVE-2023-36258](https://nvd.nist.gov/vuln/detail/CVE-2023-36258) [CVE-2023-34540](https://nvd.nist.gov/vuln/detail/CVE-2023-34540) [CVE-2023-34541](https://nvd.nist.gov/vuln/detail/CVE-2023-34541) [CVE-2023-36188](https://nvd.nist.gov/vuln/detail/CVE-2023-36188) [CVE-2023-36189](https://nvd.nist.gov/vuln/detail/CVE-2023-36189) I am unable to disable the security scans. Are there any temporary fixes? @hwchase17 @JamalRahman ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction pip install langchain ### Expected behavior Be able to pass the Aqua Scanner
Vulnerabilities: CVE-2023-36258, CVE-2023-34540, CVE-2023-34541, CVE-2023-36188, CVE-2023-36189
https://api.github.com/repos/langchain-ai/langchain/issues/7736/comments
2
2023-07-14T22:32:45Z
2024-03-13T16:12:30Z
https://github.com/langchain-ai/langchain/issues/7736
1,805,666,654
7,736