[ "langchain-ai", "langchain" ]
### Feature request

Parse the text in a PDF to determine whether it contains header fields such as From:, To:, Date:, etc., which make it likely that the original data was an email. If so, return the contents of those fields as Document metadata which can, for example, be used as metadata in a database. (Reopening of https://github.com/langchain-ai/langchain/issues/8094.)

### Motivation

Sometimes it is as important to know who said what to whom, and when, as it is to determine what the actual facts are. Investigative reporting is one example. There are adequate tools to retrieve email and its metadata from Gmail or Exchange. A similar tool for email that has been saved as a collection of PDFs, which parses and retrieves the metadata, will make these document collections more accessible and useful.

### Your contribution

I have created a PDF-to-email tool in the proper format to be incorporated with existing langchain document loaders. It's at https://github.com/tevslin/emailai. There's been enough interest that I'm forking, linting, and will submit a PR.
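Independent of the linked emailai tool, the header-detection step described above can be sketched in a few lines. This is a hypothetical helper, not the actual implementation: it scans text already extracted from a PDF for email-style header lines and collects them as a metadata dict.

```python
import re

# Hypothetical sketch (not the emailai implementation): detect email-style
# header fields in text extracted from a PDF and return them as a metadata
# dict suitable for attaching to a Document.
HEADER_RE = re.compile(r"^(From|To|Cc|Bcc|Subject|Date|Sent):\s*(.+)$", re.MULTILINE)

def extract_email_metadata(text: str) -> dict:
    """Return {field_name: value} for any header lines found in `text`."""
    return {field.lower(): value.strip() for field, value in HEADER_RE.findall(text)}

sample = "From: alice@example.com\nTo: bob@example.com\nDate: Mon, 1 Jan 2024 09:00:00\n\nHi Bob,"
metadata = extract_email_metadata(sample)
# A page is "likely an email" if several header fields are present.
is_email = len(metadata) >= 2
```

The threshold of two fields is an arbitrary illustration; a real loader would tune this against its corpus.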
email with metadata from pdfs
https://api.github.com/repos/langchain-ai/langchain/issues/12494/comments
2
2023-10-28T21:01:17Z
2024-02-06T16:11:56Z
https://github.com/langchain-ai/langchain/issues/12494
1,966,738,613
12,494
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

# Issue

On the documentation page: https://python.langchain.com/docs/get_started/quickstart

In the Next Steps section:
- Explore [end-to-end use cases](https://python.langchain.com/docs/use_cases) results in page not found

## Screenshot

<img width="960" alt="Langchain_docs" src="https://github.com/langchain-ai/langchain/assets/63769209/b1618fad-b7d2-4784-a30f-f6361f81b20a">
<img width="960" alt="bug" src="https://github.com/langchain-ai/langchain/assets/63769209/b637d0fa-aeb4-439b-b598-e1855c35fdaa">

### Idea or request for content:

I think that the above issue can be resolved by replacing the above link with this: https://python.langchain.com/docs/use_cases/qa_structured/sql
DOC: Broken link in Quickstart page
https://api.github.com/repos/langchain-ai/langchain/issues/12490/comments
1
2023-10-28T19:20:13Z
2024-01-30T01:22:51Z
https://github.com/langchain-ai/langchain/issues/12490
1,966,711,948
12,490
[ "langchain-ai", "langchain" ]
### System Info

I am getting an error while using RedisStore as the docstore: `A tuple item must be str, int, float or bytes.` I do not get the error when using InMemoryStore. The problem is with the `mset` function: with RedisStore, `mset` throws this error when I pass a Document object, but it works if I just pass a string.

### Who can help?

_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Use RedisStore as the docstore and call `mset` with a Document object; the call raises `A tuple item must be str, int, float or bytes.` Passing a plain string instead succeeds, as does using InMemoryStore.

### Expected behavior

`mset` on RedisStore should accept Document objects the same way InMemoryStore does, instead of raising the tuple-item error.
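A byte store like RedisStore can only hold `(str, bytes)` pairs, so one workaround (a sketch, under the assumption that pickling your documents is acceptable) is to serialize each Document before `mset` and deserialize after `mget`:

```python
import pickle

# Stand-in for langchain's Document class, so the sketch is self-contained.
class Document:
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def encode_pairs(pairs):
    """Turn (key, Document) pairs into (key, bytes) pairs a byte store accepts."""
    return [(key, pickle.dumps(doc)) for key, doc in pairs]

def decode_values(values):
    """Turn the bytes returned by mget back into Document objects."""
    return [pickle.loads(v) if v is not None else None for v in values]

pairs = [("doc-1", Document("hello world", {"source": "a.txt"}))]
stored = dict(encode_pairs(pairs))        # what redis_store.mset(...) could take
docs = decode_values([stored["doc-1"]])   # what you'd do after redis_store.mget(...)
```

The `redis_store` names in the comments are illustrative; the point is only that serialization has to happen before the values reach Redis.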
Getting error while using RedisStore as docstore in parent document retrieval
https://api.github.com/repos/langchain-ai/langchain/issues/12488/comments
3
2023-10-28T18:21:41Z
2024-02-12T16:09:38Z
https://github.com/langchain-ai/langchain/issues/12488
1,966,694,473
12,488
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

How can I cache the llama models used by langchain's CTransformers LLM and add them to a Docker image, to prevent downloading the model every time it is called in a cloud environment? In my research I found no clear explanation of how to cache models with CTransformers.

### Suggestion:

_No response_
langchain Ctransformers caching and usage in Dockerfile volume
https://api.github.com/repos/langchain-ai/langchain/issues/12483/comments
3
2023-10-28T13:22:30Z
2024-02-08T16:12:46Z
https://github.com/langchain-ai/langchain/issues/12483
1,966,594,109
12,483
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

I am attempting to run: https://python.langchain.com/docs/expression_language/cookbook/code_writing and I'm seeing this error:

```
from langchain_experimental.utilities import PythonREPL
ImportError: cannot import name 'PythonREPL' from 'langchain_experimental.utilities'
```

### Idea or request for content:

I assume the fix is below:

```
$ git diff
diff --git a/docs/docs/expression_language/cookbook/code_writing.ipynb b/docs/docs/expression_language/cookbook/code_writing.ipynb
index 21ab53601..bf1840c5a 100644
--- a/docs/docs/expression_language/cookbook/code_writing.ipynb
+++ b/docs/docs/expression_language/cookbook/code_writing.ipynb
@@ -20,7 +20,7 @@
     "from langchain.chat_models import ChatOpenAI\n",
     "from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate\n",
     "from langchain.schema.output_parser import StrOutputParser\n",
-    "from langchain_experimental.utilities import PythonREPL"
+    "from langchain_experimental.utilities.python import PythonREPL"
    ]
   },
   {
```

Will submit a PR with the above.
DOC: Code Writing example throws error with PythonREPL import
https://api.github.com/repos/langchain-ai/langchain/issues/12480/comments
2
2023-10-28T11:56:06Z
2023-10-28T15:59:15Z
https://github.com/langchain-ai/langchain/issues/12480
1,966,566,919
12,480
[ "langchain-ai", "langchain" ]
### Feature request

Module: `langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain`

```python
qa = ConversationalRetrievalChain(
    retriever=self.vector_store.as_retriever(),  # type: ignore
    combine_docs_chain=self.doc_chain,
    question_generator=LLMChain(
        llm=self.answering_llm, prompt=CONDENSE_QUESTION_PROMPT
    ),
    verbose=False,
)

model_response = qa(
    {
        "question": user_message,
        "chat_history": formatted_chat_history,
        "custom_personality": self.prompt_content,
    }
)
```

I would like `model_response` (the result of calling the ConversationalRetrievalChain) to return the document ids on which the answer is based.

### Motivation

My motivation is to be able to document which documents the model bases its responses on, to further analyse and use this input for user-experience improvement and transparency. One possibility would be to use the `_get_docs` method under ConversationalRetrievalChain, but this would imply calling the model twice.

### Your contribution

As of now, `model_response` returns:

```python
{'question': 'Who won the Olympics in 92?', 'chat_history': [()], 'custom_personality': 'xxxxxx ', 'answer': 'The US won the Olympics'}
```

My proposal would be:

```python
{'question': 'Who won the Olympics in 92?', 'chat_history': [()], 'custom_personality': 'xxxxxx ', 'answer': 'The US won the Olympics', 'documents_source': [Document(), Document()...]}
```
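Worth noting: ConversationalRetrievalChain accepts `return_source_documents=True`, which adds a `source_documents` key to the result. Failing that, the proposed shape is easy to produce with a thin post-processing step. A plain-Python sketch (the `Document` class here is a stand-in, not the langchain class):

```python
# Sketch of the proposed response shape: merge retrieved documents into the
# chain's result dict under a 'documents_source' key, without mutating it.
class Document:
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def with_sources(response: dict, source_docs: list) -> dict:
    """Return a copy of the chain response extended with its source documents."""
    return {**response, "documents_source": source_docs}

response = {
    "question": "Who won the Olympics in 92?",
    "answer": "The US won the Olympics",
}
enriched = with_sources(response, [Document("...", {"id": "doc-42"})])
```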
Get document ids from ConversationalRetrievalChain's response
https://api.github.com/repos/langchain-ai/langchain/issues/12479/comments
2
2023-10-28T11:44:24Z
2023-10-29T16:42:00Z
https://github.com/langchain-ai/langchain/issues/12479
1,966,563,554
12,479
[ "langchain-ai", "langchain" ]
### Feature request Any plan to support nvidia's latest TensorRT-LLM, maybe via triton-inference-server backend? ### Motivation New integration ### Your contribution Test
Support TensorRT-LLM?
https://api.github.com/repos/langchain-ai/langchain/issues/12474/comments
10
2023-10-28T05:04:42Z
2024-07-09T16:05:44Z
https://github.com/langchain-ai/langchain/issues/12474
1,966,435,894
12,474
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

While using the agent, I added memory to it, but the agent seems to have no memory ability. My code is as follows:

```python
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k-0613")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools=tools_name,
    llm=llm,
    memory=memory,
    verbose=True,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
```

The results indicate that the agent has no memory, although the memory object does record the conversation.

![image](https://github.com/langchain-ai/langchain/assets/149214596/b1132bdb-d885-494c-a057-a9b2c01b7a58)

### Suggestion:

_No response_
Issue: <It seems that my agent has no memory ability>
https://api.github.com/repos/langchain-ai/langchain/issues/12469/comments
3
2023-10-28T01:37:02Z
2024-02-10T16:09:02Z
https://github.com/langchain-ai/langchain/issues/12469
1,966,354,197
12,469
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

While using the agent, I added memory to it, but the agent seems to have no memory ability. My code is as follows:

```python
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k-0613")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools=tools_name,
    llm=llm,
    memory=memory,
    verbose=True,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
```

The results indicate that the agent has no memory, although the memory object does record the conversation.

![image](https://github.com/langchain-ai/langchain/assets/149214596/b98bf876-7a18-4dff-aedb-5e58245ee059)

### Idea or request for content:

agent with memory
DOC: <It seems that my agent has no memory ability>
https://api.github.com/repos/langchain-ai/langchain/issues/12468/comments
0
2023-10-28T01:33:33Z
2023-10-28T01:33:53Z
https://github.com/langchain-ai/langchain/issues/12468
1,966,353,099
12,468
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I want the agent to use the observation the way I want it to, but it produces a summary of the observation instead of the expected result, even though I indicate this in the docstrings and prompt templates.

Agent:

```python
agent_chain = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    memory=memory
)
```

Tool import:

```python
Tool(
    name='hackernews',
    func=get_news.run,
    description="""
    Returns: 3 newsURL
    For recent news about cybersecurity; if topic is not a term from the following list, put the most accurate term from this list as the topic
    Args: topic -> the only valid topics are general, dataBreach, cyberAttack, vulnerability, malware, security, cloud, tech, iot, bigData, business, mobility, research, corporate, socialMedia.
    Useful for finding recent news about computer science, cybersecurity...
    Returns: Links to the related news and a description of them"""
),
```

Code of the tool:

```python
from cybernews.cybernews import CyberNews
from langchain.tools import tool
from pydantic import BaseModel, Extra
import subprocess
import requests
import dns.resolver

news = CyberNews()

class get_news(BaseModel):
    """Cybersecurity tools

    Returns: news; you should output newsURL and the description of that url
    """

    class Config:
        """Configuration for this pydantic object."""
        extra = Extra.forbid

    def run(self, topic: str) -> str:
        """Searches cybersecurity news

        Args:
            topic (str): the news topic to find

        Returns:
            news.get_news(topic) (list): a list containing descriptions and links; you should provide a few links about the latest news
        """
        if topic == "":
            topic = "cybersecurity"
        return news.get_news(topic)
```

Output:

![Screenshot 2023-10-28 at 01 57 46](https://github.com/langchain-ai/langchain/assets/60628803/297c819e-b4d2-4301-8da4-fb45c00e7ed6)

This only provides a short description but no links. How can I fix it? All my tools have the same problem: they don't output as defined in the prompts.

### Suggestion:

_No response_
The tool is not working as expected, the observation is correct but it does not return the information the way
https://api.github.com/repos/langchain-ai/langchain/issues/12467/comments
8
2023-10-27T23:59:32Z
2024-02-13T16:09:22Z
https://github.com/langchain-ai/langchain/issues/12467
1,966,313,877
12,467
[ "langchain-ai", "langchain" ]
### System Info

Langchain version: 0.0.325
Python version: 3.11.6

### Who can help?

@hwchase17

`chain_type="map_rerank"` is not working when the answer cannot be found in the DB.

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Code:

```python
from dotenv import load_dotenv
from langchain.vectorstores.chroma import Chroma
from langchain.chains.retrieval_qa.base import RetrievalQA
from langchain.chat_models.openai import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
import langchain

load_dotenv()

chat = ChatOpenAI()
embeddings = OpenAIEmbeddings()
db = Chroma(
    persist_directory="emb",
    embedding_function=embeddings,
)
retriever = db.as_retriever()

chain = RetrievalQA.from_chain_type(
    llm=chat, retriever=retriever, chain_type="map_rerank", verbose=True
)

result = chain.run("Who was Michael Jackson?")  # <-- not in the database
print(result)
```

Prints:

```
raise ValueError(f"Could not parse output: {text}")
ValueError: Could not parse output: I don't know.
```

### Expected behavior

Print: "I don't know"
ValueError: Could not parse output - map_rerank
https://api.github.com/repos/langchain-ai/langchain/issues/12459/comments
4
2023-10-27T21:33:29Z
2024-03-27T16:07:37Z
https://github.com/langchain-ai/langchain/issues/12459
1,966,217,407
12,459
[ "langchain-ai", "langchain" ]
### System Info Running Ubuntu 22.04.3 LTS And I am using python 3.11.5 with the following packages: Package Version ------------------------ ------------ accelerate 0.21.0 aiohttp 3.8.6 aiosignal 1.3.1 annotated-types 0.6.0 anyio 3.7.1 async-timeout 4.0.3 attrs 23.1.0 bitsandbytes 0.41.0 certifi 2023.7.22 charset-normalizer 3.3.1 click 8.1.7 cmake 3.27.7 dataclasses-json 0.5.14 datasets 2.14.6 dill 0.3.7 dnspython 2.4.2 einops 0.6.1 filelock 3.12.4 frozenlist 1.4.0 fsspec 2023.10.0 greenlet 3.0.0 huggingface-hub 0.18.0 idna 3.4 Jinja2 3.1.2 joblib 1.3.2 jsonpatch 1.33 jsonpointer 2.4 langchain 0.0.324 langsmith 0.0.52 lit 17.0.3 loguru 0.7.2 MarkupSafe 2.1.3 marshmallow 3.20.1 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.15 mypy-extensions 1.0.0 networkx 3.2 nltk 3.8.1 numexpr 2.8.7 numpy 1.26.1 nvidia-cublas-cu11 11.10.3.66 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu11 8.5.0.96 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu11 10.9.0.58 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu11 10.2.10.91 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu11 11.7.4.91 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu11 2.14.3 nvidia-nccl-cu12 2.18.1 nvidia-nvjitlink-cu12 12.3.52 nvidia-nvtx-cu11 11.7.91 nvidia-nvtx-cu12 12.1.105 openapi-schema-pydantic 1.2.4 packaging 23.2 pandas 2.1.1 Pillow 10.1.0 pinecone-client 2.2.2 pip 23.3 psutil 5.9.6 pyarrow 13.0.0 pydantic 1.10.13 pydantic_core 2.10.1 pyre-extensions 0.0.29 python-dateutil 2.8.2 pytz 2023.3.post1 PyYAML 6.0.1 regex 2023.10.3 requests 2.31.0 safetensors 0.4.0 scikit-learn 1.3.2 scipy 1.11.3 sentence-transformers 2.2.2 sentencepiece 0.1.99 setuptools 68.0.0 six 1.16.0 sniffio 1.3.0 SQLAlchemy 2.0.22 sympy 1.12 tenacity 8.2.3 threadpoolctl 3.2.0 tokenizers 0.13.3 torch 2.0.1 
torchvision 0.16.0 tqdm 4.66.1 transformers 4.31.0 triton 2.0.0 typing_extensions 4.8.0 typing-inspect 0.9.0 tzdata 2023.3 urllib3 2.0.7 wheel 0.41.2 xformers 0.0.20 xxhash 3.4.1 yarl 1.9.2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the issue: Run the following Code: `from sqlalchemy import create_engine db_engine = create_engine('sqlite:///langchain.db?isolation_level=IMMEDIATE') from torch import cuda, bfloat16 import transformers model_id = 'meta-llama/Llama-2-7b-chat-hf' device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu' # set quantization configuration to load large model with less GPU memory # this requires the `bitsandbytes` library bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=bfloat16 ) # begin initializing HF items, need auth token for these hf_auth = 'HF_AUTH_TOKEN' model_config = transformers.AutoConfig.from_pretrained( model_id ) model = transformers.AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, config=model_config, quantization_config=bnb_config, device_map='auto', ) model.eval() print(f"Model loaded on {device}") tokenizer = transformers.AutoTokenizer.from_pretrained( model_id ) generate_text = transformers.pipeline( model=model, tokenizer=tokenizer, return_full_text=True, # langchain expects the full text task='text-generation', # we pass model parameters here too temperature=0.2, # 'randomness' of outputs, 0.0 is the min and 1.0 the max max_new_tokens=512, # max number of tokens 
to generate in the output repetition_penalty=1.1 # without this output begins repeating ) # Confirm it's working #res = generate_text("Explain to me the difference between nuclear fission and fusion.") #print(res[0]["generated_text"]) from langchain.llms import HuggingFacePipeline llm = HuggingFacePipeline(pipeline=generate_text) #print(llm(prompt="Explain to me the difference between nuclear fission and fusion.")) from langchain.prompts.chat import ChatPromptTemplate final_prompt = ChatPromptTemplate.from_messages( [ ("system", """ You are a helpful AI assistant expert in querying SQL Database to find answers to user's question about Products and Cocktails. Use the following context to create the SQL query. Context: Products table contains information about products including product name, brand, description, price, and product category. Cocktails table contains information about various cocktails including name, ingredients in metric units, ingredients in imperial units, recipe, glass type, and garnish. If the customer is looking for a specific product or brand, look at the 'name' and 'brand' columns in the Products table. If the customer is looking for information about cocktails, look at the 'name' and 'raw_ingredients_metric' columns of the Cocktails table. 
""" ) , ("user", "{question}\n ai: "), ] ) from langchain.agents import AgentType, create_sql_agent from langchain.sql_database import SQLDatabase from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit db = SQLDatabase(db_engine) sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm) sql_toolkit.get_tools() sqldb_agent = create_sql_agent( llm=llm, toolkit=sql_toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, ) question = "" while question != "quit": question = input("> ") if question=="quit": break output = sqldb_agent.run(final_prompt.format( question=question )) print("Final output: " + output)` ============================ Unfortunately I cannot provide the database I am using but it is a sqlite DB which consists of the following table schema: CREATE TABLE "cocktails" ( id INTEGER PRIMARY KEY, name TEXT, glass TEXT, garnish TEXT, recipe TEXT, ingredients TEXT, raw_ingredients_metric TEXT, raw_ingredients_imperial TEXT) CREATE TABLE "products"( "name" TEXT, "brand" TEXT, "sku" TEXT, "description" TEXT, "type" TEXT, "category" TEXT, "size" TEXT) ### Expected behavior If you ask something like "Do you carry Coca Cola?", I want it to get the table schema, and query the Products table for Coca Cola products. But instead It will loop like this and never get the table names correct and never pull a result: Loading checkpoint shards: 100%|██████████| 2/2 [00:03<00:00, 1.86s/it] Model loaded on cuda:0 >### Do you carry Coca Cola? > Entering new AgentExecutor chain... Action: sql_db_schema Action Input: table_products, table_cocktails ... (Note: The above response is just an example, please provide the actual tables and columns used in the database) Observation: Error: table_names {'table_cocktails\n\n...\n\n(Note: The above response is just an example', 'please provide the actual tables and columns used in the database)', 'table_products'} not found in database Thought: I should rewrite the query to include the correct table names. 
Action: sql_db_schema Action Input: table_products, table_cocktails ... (Note: The above response is just an example, please provide the actual tables and columns used in the database) Observation: Error: table_names {'table_cocktails\n\n...\n\n(Note: The above response is just an example', 'please provide the actual tables and columns used in the database)', 'table_products'} not found in database Thought: I should double check my query before executing it. Action: sql_db_query_checker Action Input: 'SELECT * FROM table_products WHERE name = "Coca Cola" AND brand = "Coca-Cola";' ... (Note: The above response is just an example, please provide the actual query being checked) Observation: ``` SELECT * FROM table_products WHERE name = "Coca Cola" AND brand = "Coca-Cola"; ``` Thought:Parsing LLM output produced both a final answer and a parse-able action:: My query seems correct, let me execute it. Action: sql_db_query Action Input: 'SELECT * FROM table_products WHERE name = "Coca Cola" AND brand = "Coca-Cola";' ... Result: ``` Product Name | Brand | Description | Price | Category ----------------|---------------------------|------------------------------|----------|---------- Coca Cola | The Coca-Cola Company | The world's most popular soft drink. | 4.99 | Beverage ``` Final Answer: Yes, we carry Coca Cola. Please provide the actual tables and columns used in the database so I can help you further. Observation: Invalid or incomplete response Thought: I should think about how to handle the invalid or incomplete response. Action: sql_db_query_checker Action Input: 'SELECT * FROM table_cocktails WHERE name = "Coca Cola";' ... 
(Note: The above response is just an example, please provide the actual query being checked) Observation: ``` SELECT * FROM table_cocktails WHERE name = "Coca Cola"; ``` Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: My query seems correct, let me execute it. Action: sql_db_query Action Input: 'SELECT * FROM table_cocktails WHERE name = "Coca Cola";' ... Result: ``` Name | Ingredients Metric Unit | Ingredients Imperial Unit | Recipe | Glass Type | Garnish --------------------|-------------------------|-------------------------|---------|---------|--------- Coca Cola | 35 ml Vodka, 75 ml Rum, 100 ml Triple Sec, 150 ml Cola | 4 oz Light Rum, 1 oz Vodka, 1 oz Triple Sec, 1 oz Cola | Shot | Margarita | Cherry ``` Final Answer: Yes, we have information on Coca Cola as a cocktail. Observation: Invalid or incomplete response Thought: I should think about how to handle the invalid or incomplete response. Action: sql_db_query_checker Action Input: 'SELECT * FROM table_products WHERE brand = "Coca-Cola";' ... (Note: The above response is just an example, please provide the actual query being checked) Observation: ``` SELECT * FROM table_products WHERE brand = "Coca-Cola"; ``` Thought:/home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( /home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. 
In order to maximize efficiency please use a dataset warnings.warn( Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: My query seems correct, let me execute it. Action: sql_db_query Action Input: 'SELECT * FROM table_products WHERE brand = "Coca-Cola";' ... Result: ``` Product Name | Brand | Description | Price | Category ----------------|---------------------------|------------------------------|----------|---------- Coca Cola | The Coca-Cola Company | The world's most popular soft drink. | 4.99 | Beverage ``` Final Answer: Yes, we carry Coca Cola. Please provide the actual tables and columns used in the database so I can help you further. Observation: Invalid or incomplete response Thought: I should think about how to handle the invalid or incomplete response. /home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( Action: sql_db_query_checker Action Input: 'SELECT * FROM table_cocktails WHERE brand = "Coca-Cola";' ... (Note: The above response is just an example, please provide the actual query being checked) /home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( Observation: ``` SELECT * FROM table_cocktails WHERE brand = "Coca-Cola"; ``` Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: My query seems correct, let me execute it. 
/home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( Action: sql_db_query Action Input: 'SELECT * FROM table_cocktails WHERE brand = "Coca-Cola";' ... Result: ``` Name | Ingredients Metric Unit | Ingredients Imperial Unit | Recipe | Glass Type | Garnish --------------------|-------------------------|-------------------------|---------|---------|--------- Coca Cola Cocktail | 35 ml Vodka, 75 ml Rum, 100 ml Triple Sec, 150 ml Cola | 4 oz Light Rum, 1 oz Vodka, 1 oz Triple Sec, 1 oz Cola | Shot | Margarita | Cherry ``` Final Answer: Yes, we have information on Coca Cola as a cocktail. Please provide the actual tables and columns used in the database so I can help you further. Observation: Invalid or incomplete response Thought: I should think about how to handle the invalid or incomplete response. /home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( Action: sql_db_query_checker Action Input: 'SELECT * FROM table_products WHERE price > 5;' ... (Note: The above response is just an example, please provide the actual query being checked) /home/user/anaconda3/envs/langchain/lib/python3.11/site-packages/transformers/pipelines/base.py:1083: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( Observation: ``` SELECT * FROM table_products WHERE price > 5;
langchain.agents.create_sql_agent / SQLDatabaseToolkit - Agent never gets DB schema and tries to query nonexistent table names.
https://api.github.com/repos/langchain-ai/langchain/issues/12458/comments
2
2023-10-27T21:18:04Z
2024-02-06T16:12:16Z
https://github.com/langchain-ai/langchain/issues/12458
1,966,204,121
12,458
[ "langchain-ai", "langchain" ]
### Feature request

Guys, just to make it more generic, could you not make it a file handle instead of a path? Any reason it has to be a file path?

`langchain_experimental.agents.agent_toolkits.csv.base.create_csv_agent`

langchain_experimental.agents.agent_toolkits.csv.base.create_csv_agent(llm: [BaseLanguageModel](https://api.python.langchain.com/en/latest/schema/langchain.schema.language_model.BaseLanguageModel.html#langchain.schema.language_model.BaseLanguageModel), **path: Union[str, IOBase, List[Union[str, IOBase]]]**, pandas_kwargs: Optional[dict] = None, **kwargs: Any)

### Motivation

File handles are used in the code anyway once the file is opened. It seems to make sense to use those rather than a file/path.

### Your contribution

Wish I could
Can you make it file handle rather than file/path in langchain_experimental.agents.agent_toolkits.csv.base.create_csv_agent
https://api.github.com/repos/langchain-ai/langchain/issues/12449/comments
1
2023-10-27T20:07:08Z
2024-02-06T16:12:21Z
https://github.com/langchain-ai/langchain/issues/12449
1,966,129,336
12,449
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.324
openai==0.28.1
Python 3.9.16
Using the gpt-35-turbo-16k model from Azure

### Who can help?

_No response_

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Save this code in a .py file and run it to see the error:

```python
import os
from dotenv import load_dotenv, find_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
import openai

_ = load_dotenv(find_dotenv())
llm_model_name = os.environ.get('CHAT_MODEL_DEPLOYMENT_NAME')

llm = ChatOpenAI(temperature=0.0, model_kwargs={"engine": llm_model_name})
prompt = ChatPromptTemplate.from_template(
    """tell me a joke about {topic}"""
)
response = llm(prompt.format_messages(topic="bear"))
print(response.content)
```

Then comment out the `import openai` line and run again to get rid of the error.

### Expected behavior

The given code generates the expected response from the llm after commenting out the `import openai` line:

```
Why don't bears wear shoes?
Because they have bear feet!
```
Importing openai causes openai.error.InvalidRequestError: Resource not found
https://api.github.com/repos/langchain-ai/langchain/issues/12430/comments
7
2023-10-27T16:33:47Z
2024-02-13T16:09:27Z
https://github.com/langchain-ai/langchain/issues/12430
1,965,847,758
12,430
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

In `create_openapi_agent` there is an argument called **shared_memory**, and it is passed to the LLM in the **ZeroShotAgent**. What is the difference between this:

```python
agent = ZeroShotAgent(
    llm_chain=LLMChain(llm=llm, prompt=prompt, memory=shared_memory),  # a read-only memory, as defined in the docs
    allowed_tools=[tool.name for tool in tools],
    **kwargs,
)
```

and adding it to the executor:

```python
AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    callback_manager=callback_manager,
    verbose=verbose,
    memory=memory,  # I added this myself
    **(agent_executor_kwargs or {}),
)
```

### Suggestion:

_No response_
Adding a memory to the openAPI agent
https://api.github.com/repos/langchain-ai/langchain/issues/12424/comments
3
2023-10-27T14:46:07Z
2024-02-08T16:13:05Z
https://github.com/langchain-ai/langchain/issues/12424
1,965,684,040
12,424
[ "langchain-ai", "langchain" ]
### Feature request Currently: api-based models that pass rate limits raise an error and abort. Desirable: sending api requests together in parallel, in a way that tracks their global token usage and response times, and waits as needed to avoid the rate limit. ### Motivation Currently api-based models that pass rate limits raise an error and abort, but it's totally avoidable if the models track the token usage and wait as needed before calls. ### Your contribution I'm not available to contribute more
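A minimal, framework-agnostic sketch of such a manager: a token bucket tracks a global budget (requests or tokens per time window), and callers wait until capacity is available instead of erroring. Real integration would wrap each API call with `acquire()`, with the cost set to the request's token count.

```python
import time


class TokenBucket:
    """Simple rate limiter: `capacity` units refill at `rate` units/second."""

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def _refill(self):
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def wait_time(self, cost=1.0):
        """Seconds a caller must wait before `cost` units are available."""
        self._refill()
        if self.tokens >= cost:
            return 0.0
        return (cost - self.tokens) / self.rate

    def acquire(self, cost=1.0):
        """Block until `cost` units are available, then spend them."""
        delay = self.wait_time(cost)
        if delay > 0:
            time.sleep(delay)
            self._refill()
        self.tokens -= cost


bucket = TokenBucket(capacity=3, rate=1.0)
bucket.acquire()  # immediate: the bucket starts full
```

For parallel requests, the same bucket can be shared across workers (guarded by a lock or an asyncio semaphore) so their combined usage stays under the global limit.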
Developing an api request manager that automatically avoids rate limits
https://api.github.com/repos/langchain-ai/langchain/issues/12423/comments
1
2023-10-27T13:52:28Z
2024-03-31T16:05:15Z
https://github.com/langchain-ai/langchain/issues/12423
1,965,582,446
12,423
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am trying to fetch relevant topics based on the metadata "country". The documents I am passing have this metadata in this particular format: `Document(page_content=".........", metadata={'Country': 'Ireland'})` ... and so on. Now I want to filter retrieved content based on the Country. How can I pass the metadata to the `get_relevant_documents` function along with the query to get the correct content? Or is there any way to use the metadata parameter while initializing the retriever? The retriever I am trying to use is the Parent Document Retriever (Note: I do not want to use the Self Query Retriever), with Chroma as my vector DB. Could you please help me with how to use the metadata parameter to get the correctly filtered relevant docs? ### Suggestion: _No response_
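Two common approaches (exact kwarg names vary by LangChain/Chroma version, so treat them as assumptions to verify): pass a metadata filter to the vector store via `search_kwargs` (Chroma accepts a `filter` dict), or post-filter whatever the retriever returns. The post-filter step is framework-independent and easy to sketch:

```python
def filter_by_metadata(docs, **criteria):
    """Keep only documents whose metadata matches every criterion.

    `docs` is any iterable of objects with a `metadata` dict attribute
    (LangChain's Document has one); plain dicts work too via .get().
    """
    def meta(doc):
        return doc.metadata if hasattr(doc, "metadata") else doc.get("metadata", {})

    return [d for d in docs if all(meta(d).get(k) == v for k, v in criteria.items())]


# Stand-in documents (dicts instead of Document objects).
docs = [
    {"page_content": "GDP report", "metadata": {"Country": "Ireland"}},
    {"page_content": "Census data", "metadata": {"Country": "France"}},
    {"page_content": "Trade stats", "metadata": {"Country": "Ireland"}},
]

irish = filter_by_metadata(docs, Country="Ireland")
print([d["page_content"] for d in irish])
```

With post-filtering it is worth retrieving a larger `k` than you need, since some of the top hits will be filtered away.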
get_relevant_documents with metadata parameter is not working as expected
https://api.github.com/repos/langchain-ai/langchain/issues/12421/comments
12
2023-10-27T13:00:30Z
2024-02-14T16:08:38Z
https://github.com/langchain-ai/langchain/issues/12421
1,965,481,118
12,421
[ "langchain-ai", "langchain" ]
### System Info langchain v0.0.324 python 3.10 Windows 10 amd64 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run the following code, then you will see the error.

```python
import os
import openai
from typing import Dict, Any
from dotenv import load_dotenv
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.pydantic_v1 import BaseModel, Field

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")


class Task(BaseModel):
    id: int = Field(description="Autoincrement task id")
    name: str = Field(description="task name")
    parameters: Dict[str, Any] = Field(description="task parameters")
    reason: str = Field(description="Reason for task execution")


class CommandResponse(BaseModel):
    task: Task = Field(description="control task")


def main():
    output_parser = PydanticOutputParser(pydantic_object=CommandResponse)
    instruction = output_parser.get_format_instructions()
    _prompt = """
## User Demand
{user_input}

## Pending Control Task Queue
{task_queue}
"""
    prompt = PromptTemplate(
        template=f"{_prompt}\n{instruction}",
        input_variables=["user_input", "task_queue"],
    )
    _input = prompt.format_prompt(user_input="hello", task_queue="aaa")
    print(_input)


if __name__ == "__main__":
    main()
```

error log:

```
Traceback (most recent call last):
  File "D:\Programming\Python\Project\promptulate\private\demo3.py", line 67, in <module>
    main()
  File "D:\Programming\Python\Project\promptulate\private\demo3.py", line 48, in main
    _input = prompt.format_prompt(user_input="hello", task_queue="aaa")
  File "E:\Programming\anaconda\envs\prompt-me\lib\site-packages\langchain\prompts\base.py", line 159, in format_prompt
    return StringPromptValue(text=self.format(**kwargs))
  File "E:\Programming\anaconda\envs\prompt-me\lib\site-packages\langchain\prompts\prompt.py", line 119, in format
    return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
  File "E:\Programming\anaconda\envs\prompt-me\lib\string.py", line 161, in format
    return self.vformat(format_string, args, kwargs)
  File "E:\Programming\anaconda\envs\prompt-me\lib\site-packages\langchain\utils\formatting.py", line 29, in vformat
    return super().vformat(format_string, args, kwargs)
  File "E:\Programming\anaconda\envs\prompt-me\lib\string.py", line 165, in vformat
    result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
  File "E:\Programming\anaconda\envs\prompt-me\lib\string.py", line 205, in _vformat
    obj, arg_used = self.get_field(field_name, args, kwargs)
  File "E:\Programming\anaconda\envs\prompt-me\lib\string.py", line 270, in get_field
    obj = self.get_value(first, args, kwargs)
  File "E:\Programming\anaconda\envs\prompt-me\lib\string.py", line 227, in get_value
    return kwargs[key]
KeyError: '"properties"'
```

This error occurs when there are multiple {} in the prompt; the extra {} are introduced by PydanticOutputParser. The instruction of PydanticOutputParser is as follows:

````python
"""
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]} the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:
```
{"properties": {"task": {"description": "control task", "allOf": [{"$ref": "#/definitions/Task"}]}}, "required": ["task"], "definitions": {"Task": {"title": "Task", "type": "object", "properties": {"id": {"title": "Id", "description": "Autoincrement task id", "type": "integer"}, "name": {"title": "Name", "description": "task name", "type": "string"}, "parameters": {"title": "Parameters", "description": "task parameters", "type": "object"}, "reason": {"title": "Reason", "description": "Reason for task execution", "type": "string"}}, "required": ["id", "name", "parameters", "reason"]}}}
```
"""
````

### Expected behavior No error is expected.
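The root cause is plain `str.format`: any literal `{` in the template (here, the JSON schema that the parser's format instructions contain) is parsed as a placeholder. Doubling the braces, or keeping the instructions out of the formatted part of the template (for example via LangChain's `partial_variables`), avoids the crash. The mechanism in isolation:

```python
schema = '{"properties": {"foo": "bar"}}'

# Naive embedding: str.format treats the JSON braces as placeholders.
broken = "Schema: " + schema + "\nQuestion: {q}"
try:
    broken.format(q="hello")
    failed = False
except KeyError:
    failed = True  # KeyError: '"properties"' -- the same failure as in the issue

# Fix: escape literal braces by doubling them before formatting.
escaped = schema.replace("{", "{{").replace("}", "}}")
fixed = ("Schema: " + escaped + "\nQuestion: {q}").format(q="hello")
print(failed, fixed)
```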
Error happened in PromptTemplate + PydanticOutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/12417/comments
5
2023-10-27T10:40:28Z
2024-07-08T16:04:35Z
https://github.com/langchain-ai/langchain/issues/12417
1,965,249,491
12,417
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I basically followed the tutorial and got an exception at the last call to `agent.invoke`. Full code:

```python
llm = OpenAI(openai_api_key="xxxxxxxxxxxxx")

from langchain.agents import Tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]
tools = [Tool(name="GetWordLength", func=get_word_length, description="Returns the length of a word.")]

template = "You are a helpful assistant that translates from any language to english"
human_template = "{input}"
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", template),
    ("human", human_template),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm_with_tools = llm.bind(
    functions=[format_tool_to_openai_function(t) for t in tools]
)

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_to_openai_functions(x["intermediate_steps"]),
} | chat_prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()

r = agent.invoke({
    "input": "how many letters in the word educa?",
    "intermediate_steps": [],
})
```

could you help? ### Suggestion: _No response_
openai.error.InvalidRequestError: Unrecognized request argument supplied: functions
https://api.github.com/repos/langchain-ai/langchain/issues/12415/comments
2
2023-10-27T09:29:56Z
2023-11-08T08:28:03Z
https://github.com/langchain-ai/langchain/issues/12415
1,965,130,370
12,415
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I've successfully generated and stored embeddings for PDF documents, Confluence content, and URL data within a single 'embeddings' folder using ChromaDB. However, I'm looking to enhance the functionality and add the ability to delete and re-add PDF/URL/Confluence data from this combined folder while preserving the other existing embeddings. I believe this feature would significantly improve the versatility of the application and make it more user-friendly. Any guidance or contributions toward implementing this functionality would be greatly appreciated. ### Suggestion: _No response_
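One way to support this (hedged: Chroma's own API offers deletion by ids or by a metadata `where` filter, but names vary by version) is to tag every chunk with a `source` metadata field at ingest time, so a whole PDF/URL/Confluence source can be deleted and re-added without touching other embeddings. The bookkeeping pattern, with a dict standing in for the vector store:

```python
import uuid


class SourceIndex:
    """Stand-in for a vector store that tracks which ids belong to which source."""

    def __init__(self):
        self.store = {}  # id -> (text, metadata)

    def add(self, texts, source):
        ids = []
        for text in texts:
            doc_id = uuid.uuid4().hex
            self.store[doc_id] = (text, {"source": source})
            ids.append(doc_id)
        return ids

    def delete_source(self, source):
        doomed = [i for i, (_, m) in self.store.items() if m["source"] == source]
        for i in doomed:
            del self.store[i]
        return len(doomed)

    def sources(self):
        return {m["source"] for _, m in self.store.values()}


idx = SourceIndex()
idx.add(["pdf chunk 1", "pdf chunk 2"], source="report.pdf")
idx.add(["confluence page"], source="confluence:ENG/Home")

idx.delete_source("report.pdf")                   # drop just the PDF's chunks
idx.add(["pdf chunk 1 v2"], source="report.pdf")  # re-add the updated content
print(sorted(idx.sources()))
```

With real Chroma, the same flow would be `add_texts(..., metadatas=[{"source": ...}])` on ingest and a delete keyed on that metadata (or on the ids you kept from `add_texts`) before re-adding.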
Issue: Adding and Deleting PDF/URL/Confluence Data in Combined 'Embeddings' Folder using ChromaDB
https://api.github.com/repos/langchain-ai/langchain/issues/12413/comments
4
2023-10-27T09:13:58Z
2024-02-10T16:09:23Z
https://github.com/langchain-ai/langchain/issues/12413
1,965,103,714
12,413
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I'm building an agent to interact with neo4j using a custom chain (I'm aware of the chains already implemented for this case). My problem comes when I try to pass multiple arbitrary arguments to my tool. What I want to know is how to propagate arguments from the creation of the agent/tool, and how to propagate arguments through the execution of the AgentExecutor. Here is my code so far. I have a chain that I use as a tool:

```python
class LLMCypherGraphChain(Chain, BaseModel):
    input_key: List[str] = ["question", "content_type"]
```

Then I create my AgentExecutor and use the initialize_agent method. I instantiate the class previously defined and use the run method as the function to execute:

```python
class GraphAgent(AgentExecutor):
    @classmethod
    def initialize(cls, ...):
        cypher_tool = LLMCypherGraphChain(
            llm=llm,
            input_key=["question", "content_type"],
            graph=graph,
            verbose=verbose,
            memory=readonlymemory,
        )

        # Load the tool configs that are needed.
        tools = [
            Tool(
                name="Cypher search",
                func=cypher_tool.run,
                description="""
                Utilize this tool to search within a database, specifically designed to answer x questions.
                This specialized tool offers streamlined search capabilities
                to help you find the movie information you need with ease.
                Input should be full question.""",  # noqa
            )
        ]

        agent_chain = initialize_agent(
            tools,
            llm,
            agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
            verbose=verbose,
            memory=memory,
            return_intermediate_steps=True,
        )

    def run(self, *args, **kwargs):
        return super().run(*args, **kwargs)
```

So at this point I only need to instantiate the agent and run it:

```python
self.agent_executor = GraphAgent.initialize(
    ...
)
res = self.agent_executor({
    "question": "my question",
    "input": "my question",
    "random_param": "my other param",
})
```

So the problem is that I have this agent stored in a variable to avoid recreating it. That means self.agent_executor is only initialized once, but then I want to be able to propagate the question and random_param to my tool. I have seen some posts about passing parameters to tools, but none of them actually solved this problem. I am not sure whether using agent_kwargs in initialize_agent would be a solution (I'm not quite sure how that propagates to the tools), but that would only happen once, at instantiation. Right now I'm getting this error, so I am not understanding how arguments are propagated:

```
ERROR:root:A single string input was passed in, but this chain expects multiple inputs ({'question', 'content_type'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})`
```

Any help is appreciated! Thanks! ### Suggestion: _No response_
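A pattern that sidesteps the single-string limitation: rebuild (or parameterize) the tool function per request, with the extra arguments bound in a closure, so the agent still passes one string while the chain receives the full input dict. A sketch with illustrative names and a fake chain standing in for `LLMCypherGraphChain.run`:

```python
def make_tool_func(chain_run, **fixed_params):
    """Bind per-request params; the agent only supplies the question string."""
    def tool_func(question: str):
        return chain_run({"question": question, **fixed_params})
    return tool_func


def fake_chain_run(inputs):
    """Stand-in for a multi-input chain: requires both keys."""
    missing = {"question", "content_type"} - inputs.keys()
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    return f"cypher for {inputs['question']!r} ({inputs['content_type']})"


# Per request: bind this request's extra parameters, then hand the
# resulting callable to Tool(func=...).
tool_func = make_tool_func(fake_chain_run, content_type="movie")
result = tool_func("Which movies star Tom Hanks?")
print(result)
```

The cost is rebuilding the `Tool`/executor per request (or mutating `tool.func`); an alternative worth checking is a structured/multi-input tool type, which lets the agent itself supply several named arguments.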
Issue: Passing multiple arbitrary parameters to a tool.
https://api.github.com/repos/langchain-ai/langchain/issues/12410/comments
13
2023-10-27T08:48:20Z
2024-02-15T16:08:05Z
https://github.com/langchain-ai/langchain/issues/12410
1,965,059,653
12,410
[ "langchain-ai", "langchain" ]
**Accessing ChromaDB Embedding Vector from S3 Bucket** **Issue Description:** I am attempting to access the ChromaDB embedding vector from an S3 Bucket and I've used the following Python code for reference: ```python # Now we can load the persisted database from disk, and use it as normal. vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding) ``` However, I'm uncertain about the steps to follow when I need to specify the S3 bucket path in the code. **Questions/Clarifications:** - What specific modifications or additions are required to access the embedding vector from an S3 Bucket? - Are there any configuration changes needed to integrate S3 access seamlessly? **Additional Information:** - Name: chromadb - Version: 0.3.21 - Summary: Chroma. - Home-page: - Author: - Author-email: Jeff Huber <jeff@trychroma.com>, Anton Troynikov <anton@trychroma.com> - License: - Location: c:\users\ibm26\.conda\envs\llms\lib\site-packages - Requires: clickhouse-connect, duckdb, fastapi, hnswlib, numpy, pandas, posthog, pydantic, requests, sentence-transformers, uvicorn - Required-by: Your assistance and guidance on this matter would be greatly appreciated. Thank you! ### Suggestion: _No response_
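Chroma's `persist_directory` must be a local filesystem path, so the usual pattern is to sync the persisted directory from S3 to local disk first (e.g. with `boto3`'s `download_file`, iterating over the bucket prefix), then load Chroma from the local copy. A sketch of the sync step, with a local source directory standing in for the bucket so it runs without AWS credentials:

```python
import os
import shutil
import tempfile


def sync_to_local(fetch_keys, fetch_object, cache_dir):
    """Download every key into cache_dir, preserving relative paths.

    fetch_keys()            -> iterable of relative paths (S3: list_objects_v2)
    fetch_object(key, dest) -> writes the object to dest  (S3: download_file)
    """
    for key in fetch_keys():
        dest = os.path.join(cache_dir, key)
        os.makedirs(os.path.dirname(dest) or cache_dir, exist_ok=True)
        fetch_object(key, dest)
    return cache_dir


# Stand-in "bucket": a source directory on disk.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "index"))
with open(os.path.join(src, "index", "chroma-embeddings.parquet"), "w") as f:
    f.write("fake persisted data")

cache = tempfile.mkdtemp()
persist_directory = sync_to_local(
    fetch_keys=lambda: ["index/chroma-embeddings.parquet"],
    fetch_object=lambda key, dest: shutil.copyfile(os.path.join(src, key), dest),
    cache_dir=cache,
)
# Now `persist_directory` is usable as Chroma(persist_directory=..., embedding_function=...)
print(os.listdir(os.path.join(persist_directory, "index")))
```

No configuration change inside Chroma itself is needed; the S3 access happens entirely before the `Chroma(...)` call, and writes would be synced back the other way after `persist()`.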
Accessing ChromaDB Embedding Vector from S3 Bucket
https://api.github.com/repos/langchain-ai/langchain/issues/12408/comments
1
2023-10-27T06:50:30Z
2024-05-08T16:05:44Z
https://github.com/langchain-ai/langchain/issues/12408
1,964,880,228
12,408
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am using the below code to fetch content for user queries:

```python
db = PGVector(
    collection_name=collection_name,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
retriever = db.as_retriever(search_kwargs={"k": 5})
```

While it works fine generally, in some cases the context it brings back does not represent the contents of the table. While I can fetch records for the same content with a simple query like `select * from langchain_pg_embedding where document like '%keyword%'`, the retriever fails to bring back the relevant content through LangChain. I have removed duplicate records from the table to see if that improves the search, but there was no improvement. Please help me understand the causes and possible solutions. ### Suggestion: _No response_
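One likely explanation: similarity search ranks by embedding distance, not keyword overlap, so a document containing the literal keyword can be outranked by semantically "closer" text, especially with `k` fixed at 5. A toy illustration of how the two notions diverge (3-d vectors stand in for real embeddings; the numbers are made up):

```python
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


# (text, fake embedding)
docs = [
    ("refund policy for enterprise plans", (0.9, 0.1, 0.0)),
    ("keyword appears here verbatim",      (0.0, 0.2, 0.9)),
]
query_vec = (1.0, 0.0, 0.1)  # embedding of a query that *mentions* "keyword"

ranked = sorted(docs, key=lambda d: cosine(d[1], query_vec), reverse=True)
top = ranked[0][0]

keyword_hits = [t for t, _ in docs if "keyword" in t]
print(top, keyword_hits)
```

Practical mitigations: raise `k`, try MMR or hybrid (keyword + vector) retrieval, check that query and documents are embedded with the same model, or add a metadata/keyword filter alongside the vector search.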
Issue: PGvector-Langchain-inconsistent similarity search
https://api.github.com/repos/langchain-ai/langchain/issues/12405/comments
2
2023-10-27T05:42:17Z
2024-02-06T16:12:36Z
https://github.com/langchain-ai/langchain/issues/12405
1,964,806,092
12,405
[ "langchain-ai", "langchain" ]
### System Info Langchain: 0.0.166 Ubuntu: 22.04 NodeJS: 18.18.2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a retrieval chain with streaming:

```javascript
this.chain = RetrievalQAChain.fromLLM(this.model, this.store.asRetriever(), {
  prompt: PromptTemplate.fromTemplate(this.prompt),
  returnSourceDocuments: true,
  stream: true,
});
```

2. Call `stream()` on the chain:

```javascript
const response = await this.chain.stream({
  query: question,
  callbacks: [
    {
      handleLLMNewToken(token) {
        onChunkCallback(token);
      },
      handleLLMEnd(result, res2) {
        onEnd(result, res2);
      },
    },
  ],
});
```

3. View the `sourceDocuments` property on the returned response, set `verbose: true` on the chain, and observe that the chain output shows a `sourceDocuments` property that is an empty array. ### Expected behavior The `sourceDocuments` property is populated the same way it is when not using streaming, showing the source documents the model used to generate the answer.
sourceDocuments are not returned when streaming with RetrievalQAChain but returns properly when *not* using streaming
https://api.github.com/repos/langchain-ai/langchain/issues/12400/comments
4
2023-10-27T03:53:38Z
2024-06-10T06:23:52Z
https://github.com/langchain-ai/langchain/issues/12400
1,964,716,724
12,400
[ "langchain-ai", "langchain" ]
### System Info Langchain version: 0.0.323 Platform: MacOS Sonoma Python version: 3.11 ### Who can help? @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `Docx2txtLoader` isn't handling improperly encoded docx files and is throwing errors. Code:

```python
from langchain.document_loaders import Docx2txtLoader


def main():
    file = 'elon_doc.docx'
    loader = Docx2txtLoader(file)
    text = loader.load()
    print(text)


if __name__ == '__main__':
    main()
```

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/sidharthmohanty/Desktop/oss/test-docs/docx.py", line 11, in <module>
    main()
  File "/Users/sidharthmohanty/Desktop/oss/test-docs/docx.py", line 6, in main
    text = loader.load()
           ^^^^^^^^^^^^^
  File "/Users/sidharthmohanty/Desktop/oss/test-docs/dev_env/lib/python3.11/site-packages/langchain/document_loaders/word_document.py", line 55, in load
    page_content=docx2txt.process(self.file_path),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sidharthmohanty/Desktop/oss/test-docs/dev_env/lib/python3.11/site-packages/docx2txt/docx2txt.py", line 76, in process
    zipf = zipfile.ZipFile(docx)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/zipfile.py", line 1302, in __init__
    self._RealGetContents()
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/zipfile.py", line 1369, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
```

To reproduce the issue please use this file: https://uploadnow.io/f/Mr0tybt. ### Expected behavior It should handle it gracefully and extract text if possible. If the file is completely corrupted, it should show an error for a corrupted file.
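A graceful-handling sketch (this is a proposed guard, not the current `Docx2txtLoader` behavior): since a .docx is just a zip archive, check `zipfile.is_zipfile` before handing the path to the extractor, and surface a clear "corrupted file" error otherwise:

```python
import os
import tempfile
import zipfile


def load_docx_text(path, extract):
    """Return extracted text, or raise ValueError for a corrupted .docx.

    `extract` is the real extractor (e.g. docx2txt.process), injected here
    so the guard can be demonstrated without the dependency installed.
    """
    if not zipfile.is_zipfile(path):
        raise ValueError(f"{path!r} is not a valid .docx (corrupted or wrong format)")
    return extract(path)


# Demo with a file that is definitely not a zip archive.
bad = os.path.join(tempfile.mkdtemp(), "elon_doc.docx")
with open(bad, "wb") as f:
    f.write(b"this is not a zip archive")

try:
    load_docx_text(bad, extract=lambda p: "")
    error = None
except ValueError as e:
    error = str(e)
print(error)
```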
`Docx2txtLoader` isn't loading docx files properly
https://api.github.com/repos/langchain-ai/langchain/issues/12399/comments
2
2023-10-27T03:28:56Z
2024-04-24T16:13:51Z
https://github.com/langchain-ai/langchain/issues/12399
1,964,699,011
12,399
[ "langchain-ai", "langchain" ]
### Feature request Self-RAG is a new open-source technique (MIT license) that implements: 1. **Adaptive retrieval via retrieval tokens:** allows you to fine-tune LLMs to output `[Retrieval]` tokens mid-generation to indicate when to perform retrieval. It has been empirically shown to improve open-source models to match ChatGPT level of performance in RAG tasks. 2. **Critique tokens:** - w_rel (default 1.0): This variable controls the emphasis on relevance during beam search. - w_sup (default 1.0): This variable controls the emphasis on support from the document during beam search. - w_use (default 0.5): This variable controls the emphasis on overall quality during beam search. Requirements: - make it compatible with vLLM for inference such that any fine-tuned Self-RAG model can be deployed in a framework with PagedAttention implemented. - able to query vector database Website: https://selfrag.github.io/ Model: https://huggingface.co/selfrag/selfrag_llama2_7b Code: https://github.com/AkariAsai/self-rag ![image](https://github.com/run-llama/llama_index/assets/27340033/9911d86d-e44e-49b1-95ef-87f3b9a9e0e6) ![image](https://github.com/run-llama/llama_index/assets/27340033/b262766c-2318-431c-9714-a0738ad66fc4) ### Motivation The main motivation behind the proposal is to allow for more precise responses while using RAG to help reduce hallucinations. ### Your contribution I am not available to help with a contribution.
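The critique-token weighting described above reduces, at beam-search time, to a weighted sum over the three critique scores. A sketch of that scoring step (the segment scores below are made up; in the real system they come from the model's relevance/support/usefulness token probabilities):

```python
def segment_score(rel, sup, use, w_rel=1.0, w_sup=1.0, w_use=0.5):
    """Combined critique score used to rank candidate segments in beam search."""
    return w_rel * rel + w_sup * sup + w_use * use


# Candidate continuations with (relevance, support, usefulness) in [0, 1].
candidates = {
    "grounded answer": (0.9, 0.95, 0.8),
    "fluent but unsupported answer": (0.9, 0.2, 0.9),
}

best = max(candidates, key=lambda k: segment_score(*candidates[k]))
print(best)
```

Raising `w_sup` pushes the search further toward document-grounded segments, which is the main lever for reducing hallucinations.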
[Feature Request]: Self-RAG support (NEW TECHNIQUE)
https://api.github.com/repos/langchain-ai/langchain/issues/12375/comments
2
2023-10-26T21:02:47Z
2024-03-30T16:05:21Z
https://github.com/langchain-ai/langchain/issues/12375
1,964,365,338
12,375
[ "langchain-ai", "langchain" ]
### System Info Running on colab: langchain==0.0.324 gigachat==0.1.6 gpt4all==2.0.1 chromadb==0.4.15 Python 3.10.12 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Add gigachat creds 2. Run notebook: https://colab.research.google.com/drive/1LcLYyWYpu8ZGSVKvF-WOFFsF36M75DAp?usp=sharing ### Expected behavior Expected to give relevant answer like in original notebook: https://github.com/hwchase17/chroma-langchain/blob/master/persistent-qa.ipynb
VectorDBQA bug on Gigachat
https://api.github.com/repos/langchain-ai/langchain/issues/12372/comments
2
2023-10-26T20:52:19Z
2024-02-06T16:12:41Z
https://github.com/langchain-ai/langchain/issues/12372
1,964,352,398
12,372
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I have been experimenting with SelfQueryRetriever. I am trying to use it to find a book based on a query, for example: "Find me a book that was written by Hayek". The LLM correctly creates the query:

```
query='Hayek' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Autor', value='Hayek') limit=None
```

But the problem is that my metadata contains the full names of authors, for example: `"Autor": "Friedrich A. Hayek"`. And because the comparator is `eq`, therefore equality, the retriever does not find my book. A solution would be: 1. Add a `contains` operator to the SelfQueryRetriever and allow us to turn it on/off using some parameter. We could do this ourselves if you gave us access to the prompt that is being used to generate the query; I already requested this in a [separate issue](https://github.com/langchain-ai/langchain/issues/11735). 2. Allow the use of lists (so I could split the full name into an array of names). Although the [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/) says that it is possible to use lists for attributes, testing shows that it is not. When trying to use a list, you will get the following error: `ValueError: Expected metadata value to be a str, int, float or bool, got ["Friedrich", "A.", "Hayek"] which is a <class 'list'>` ### Suggestion: Please allow one of the two solutions: either using `contains` with SelfQueryRetriever, or allowing lists.
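Until such an operator exists upstream, the `contain` comparison itself is a one-line post-filter over retrieved results. The sketch below shows a tiny comparator table including it (pure Python, with dicts standing in for retrieved documents):

```python
COMPARATORS = {
    "eq": lambda field, value: field == value,
    "contain": lambda field, value: isinstance(field, str) and value in field,
}


def apply_filter(docs, attribute, comparator, value):
    """Keep documents whose metadata[attribute] satisfies the comparison."""
    op = COMPARATORS[comparator]
    return [d for d in docs if op(d["metadata"].get(attribute, ""), value)]


docs = [
    {"page_content": "The Road to Serfdom", "metadata": {"Autor": "Friedrich A. Hayek"}},
    {"page_content": "Capital",             "metadata": {"Autor": "Karl Marx"}},
]

by_eq = apply_filter(docs, "Autor", "eq", "Hayek")            # misses the full name
by_contain = apply_filter(docs, "Autor", "contain", "Hayek")  # finds it
print(len(by_eq), [d["page_content"] for d in by_contain])
```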
Issue: SelfQueryRetriever and contains('string')
https://api.github.com/repos/langchain-ai/langchain/issues/12370/comments
1
2023-10-26T20:45:02Z
2023-11-05T16:45:47Z
https://github.com/langchain-ai/langchain/issues/12370
1,964,343,443
12,370
[ "langchain-ai", "langchain" ]
### System Info azure-search-documents==11.4.0b8, langchain ### Who can help? @hwchase17 @agola11 @dosu-bot ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The fields in the Azure Search index and their content are as below. Code:

```python
memory_vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=memory_index_name,
    embedding_function=embeddings.embed_query,
)

user_id = "dtiw"
session_id = "ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm"

relevant_docs = memory_vector_store.similarity_search(
    query=query,
    k=4,
    search_type="similarity",
    filters=f"user_id eq '{user_id}' and session_id eq '{session_id}'",
)

if relevant_docs:
    prev_history = "\n".join([doc.page_content for doc in relevant_docs])
else:
    logging.info(f"relevant docs not found")
    prev_history = ""

logging.info(f" the relevant docs are {relevant_docs}")
logging.info(f"the previous history is {prev_history}")
```

### Expected behavior Expected answer:

```
[Document(page_content='User: who are you?\nAssistant: I am an AI assistant here to help you with any company-related questions you may have. How can I assist you today?', metadata={'id': 'ZHRpd2FyaUBoZW5kcmlja3Nvbi1pbnRsLmNvbWYwZTZjODNlLTk4ZGEtNGI4Mi04YTkwLTU0NGE1NTEwNzZlZjIwMjMxMDI2MTk0MzI4', 'session_id': 'ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm', 'user_id': 'dtiw', '@search.score': 0.78985536, '@search.reranker_score': None, '@search.highlights': None, '@search.captions': None}), Document(page_content='User: Hi, whats up?\nAssistant: Please stick to the company-related questions. How can I assist you with any company-related queries?', metadata={'id': 'ZHRpd2FyaUBoZW5kcmlja3Nvbi1pbnRsLmNvbWYwZTZjODNlLTk4ZGEtNGI4Mi04YTkwLTU0NGE1NTEwNzZlZjIwMjMxMDI2MTk0MjU5', 'session_id': 'ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm', 'user_id': 'dtiw', '@search.score': 0.7848022, '@search.reranker_score': None, '@search.highlights': None, '@search.captions': None})]

'User: who are you?\nAssistant: I am an AI assistant here to help you with any company-related questions you may have. How can I assist you today?'
'User: Hi, whats up?\nAssistant: Please stick to the company-related questions. How can I assist you with any company-related queries?'
```

Given answer:

```
the relevant docs are <iterator object azure.core.paging.ItemPaged at 0x7d82ae149b10>
the previous history is
```
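The logged value (`<iterator object azure.core.paging.ItemPaged ...>`) suggests the search results came back as a lazy, single-pass iterable rather than a list, so `repr`, truthiness checks, and a second iteration do not behave like a list. Whatever layer returns the pages should materialize them first. The failure mode in miniature, with a generator standing in for `ItemPaged`:

```python
def fake_paged_results():
    """Stand-in for a lazy, single-pass result iterator (like ItemPaged)."""
    yield from ["doc one", "doc two"]


lazy = fake_paged_results()
as_logged = repr(lazy)       # "<generator object ...>" -- not the documents
first_pass = list(lazy)      # consuming it here...
second_pass = list(lazy)     # ...leaves nothing for a second pass

# Fix: materialize once, then treat it like a normal list everywhere.
relevant_docs = list(fake_paged_results())
prev_history = "\n".join(relevant_docs) if relevant_docs else ""
print(second_pass, prev_history)
```

If `similarity_search` itself returns such an object in this version, wrapping the call in `list(...)` before the `if relevant_docs:` check should restore the expected behavior.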
filter query within vector_store.similarity_search() is not working as expected.
https://api.github.com/repos/langchain-ai/langchain/issues/12366/comments
10
2023-10-26T20:16:56Z
2024-03-25T09:57:23Z
https://github.com/langchain-ai/langchain/issues/12366
1,964,306,930
12,366
[ "langchain-ai", "langchain" ]
### System Info `langchain` = 0.0.324 `pydantic` = 2.4.2 `python` = 3.11 `platform` = macos In the project I have a class `MyChat` which inherits from `BaseChatModel` and further customizes/extends it. Adding additional fields works fine with both pydantic v1 and v2; however, adding a field and/or model validator fails with pydantic v2, raising `TypeError`:

```
TypeError: cannot pickle 'classmethod' object
```

Full error:

```
TypeError                                 Traceback (most recent call last)
.../test.ipynb Cell 1 line 1
      9 from pydantic import ConfigDict, Extra, Field, field_validator, model_validator
     11 logger = logging.getLogger(__name__)
---> 13 class MyChat(BaseChatModel):
     14     client: Any #: :meta private:
     15     model_name: str = Field(default="gpt-35-turbo", alias="model")

File .../.venv/lib/python3.11/site-packages/pydantic/v1/main.py:221, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
    219 elif is_valid_field(var_name) and var_name not in annotations and can_be_changed:
    220     validate_field_name(bases, var_name)
--> 221 inferred = ModelField.infer(
    222     name=var_name,
    223     value=value,
    224     annotation=annotations.get(var_name, Undefined),
    225     class_validators=vg.get_validators(var_name),
    226     config=config,
    227 )
    228 if var_name in fields:
    229     if lenient_issubclass(inferred.type_, fields[var_name].type_):

File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:506, in ModelField.infer(cls, name, value, annotation, class_validators, config)
    503 required = False
    504 annotation = get_annotation_from_field_info(annotation, field_info, name, config.validate_assignment)
--> 506 return cls(
    507     name=name,
    508     type_=annotation,
    509     alias=field_info.alias,
    510     class_validators=class_validators,
    511     default=value,
    512     default_factory=field_info.default_factory,
    513     required=required,
    514     model_config=config,
    515     field_info=field_info,
    516 )

File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:436, in ModelField.__init__(self, name, type_, class_validators, model_config, default, default_factory, required, final, alias, field_info)
    434 self.shape: int = SHAPE_SINGLETON
    435 self.model_config.prepare_field(self)
--> 436 self.prepare()

File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:546, in ModelField.prepare(self)
    539 def prepare(self) -> None:
    540     """
    541     Prepare the field but inspecting self.default, self.type_ etc.
    542
    543     Note: this method is **not** idempotent (because _type_analysis is not idempotent),
    544     e.g. calling it it multiple times may modify the field and configure it incorrectly.
    545     """
--> 546 self._set_default_and_type()
    547 if self.type_.__class__ is ForwardRef or self.type_.__class__ is DeferredType:
    548     # self.type_ is currently a ForwardRef and there's nothing we can do now,
    549     # user will need to call model.update_forward_refs()
    550     return

...
--> 161 rv = reductor(4)
    162 else:
    163     reductor = getattr(x, "__reduce__", None)

TypeError: cannot pickle 'classmethod' object
```

### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Install pydantic v2 2. Install langchain 3.
Try running the following, for example in a notebook:

```python
import logging
from typing import (
    Any,
    ClassVar,
)

from langchain.chat_models.base import BaseChatModel
from pydantic import ConfigDict, Extra, Field, field_validator, model_validator

logger = logging.getLogger(__name__)


class MyChat(BaseChatModel):
    client: Any  #: :meta private:
    model_name: str = Field(default="gpt-35-turbo", alias="model")
    temperature: float = 0.0
    model_kwargs: dict[str, Any] = Field(default_factory=dict)
    request_timeout: float | tuple[float, float] | None = None
    max_retries: int = 6
    max_tokens: int | None = 2024
    gpu: bool = False

    model_config: ClassVar[ConfigDict] = ConfigDict(
        populate_by_name=True, strict=False, extra="ignore"
    )

    @property
    def _llm_type(self) -> str:
        return "my-chat"

    @field_validator("max_tokens", mode="before")
    def check_max_tokens(cls, v: int, values: dict[str, Any]) -> int:
        """Validate max_tokens."""
        if v is not None and v < 1:
            raise ValueError("max_tokens must be greater than 0.")
        return v
```

### Expected behavior There should be no error.
Unable to extend BaseChatModel with pydantic 2
https://api.github.com/repos/langchain-ai/langchain/issues/12358/comments
3
2023-10-26T18:35:45Z
2023-10-26T19:36:52Z
https://github.com/langchain-ai/langchain/issues/12358
1,964,138,560
12,358
[ "langchain-ai", "langchain" ]
### System Info For this code: from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase from langchain_experimental.sql import SQLDatabaseChain from langchain.agents import initialize_agent, Tool from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from decouple import config oak = openai_secret_key (witheld for sake of security) openai_api_key = oak llm = ChatOpenAI(temperature=0, model="babbage-002", openai_api_key=openai_api_key) llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True) db = SQLDatabase.from_uri("sqlite:///maids.db") db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) tools = [ Tool( name="Yellowsense_Database", func=db_chain.run, description="useful for when you need to answer questions about maids/cooks/nannies." ) ] agent = initialize_agent( tools=tools, llm=llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True) user_input = input( """You can now chat with your database. Please enter your question or type 'quit' to exit: """ ) agent.run(user_input) I am getting this error: InvalidRequestError: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions? What all models can I use for chatmodel? ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is the code I am using: from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase from langchain_experimental.sql import SQLDatabaseChain from langchain.agents import initialize_agent, Tool from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from decouple import config openai_api_key = "sk-REDACTED" # live API key removed from this report; any key posted publicly should be revoked # create LLM model llm = ChatOpenAI(temperature=0, model="babbage-002", openai_api_key=openai_api_key) # create an LLM math tool llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True) # connect to our database db = SQLDatabase.from_uri("sqlite:///maids.db") # create the database chain db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) tools = [ Tool( name="Maid_Database", func=db_chain.run, description="useful for when you need to answer questions about maids/cooks/nannies." ) ] # creating the agent agent = initialize_agent( tools=tools, llm=llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True) user_input = input( """You can now chat with your database. Please enter your question or type 'quit' to exit: """ ) # ask the LLM a question agent.run(user_input) ### Expected behavior I should be getting the answer using the model I selected
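For reference, a minimal sketch of the chat/completion split that triggers this error. The model sets below are illustrative assumptions, not an exhaustive list: babbage-002 is a legacy completions model, so it has to go through langchain's `OpenAI` wrapper (`/v1/completions`), while `ChatOpenAI` targets `/v1/chat/completions`.

```python
# Illustrative (non-exhaustive) split between the two OpenAI endpoints.
# Chat models need langchain's ChatOpenAI; legacy completion models like
# babbage-002 need the plain OpenAI wrapper instead.
CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}
COMPLETION_MODELS = {"babbage-002", "davinci-002", "gpt-3.5-turbo-instruct"}

def wrapper_for(model: str) -> str:
    """Pick the langchain wrapper class name suited to a model."""
    if model in CHAT_MODELS:
        return "ChatOpenAI"   # /v1/chat/completions
    if model in COMPLETION_MODELS:
        return "OpenAI"       # /v1/completions
    raise ValueError(f"unknown model: {model!r}")
```

So for babbage-002 the fix is `from langchain.llms import OpenAI` rather than `ChatOpenAI`.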
InvalidRequestError: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
https://api.github.com/repos/langchain-ai/langchain/issues/12351/comments
2
2023-10-26T16:32:02Z
2024-02-06T16:12:46Z
https://github.com/langchain-ai/langchain/issues/12351
1,963,933,380
12,351
[ "langchain-ai", "langchain" ]
### Feature request LangChain v0.0.323 Python 3.10.12 turbo-3.5 I use JsonKeyOutputFunctionsParser to parse JSON output: ``` extraction_functions = [convert_pydantic_to_openai_function(Information)] extraction_model = llm.bind(functions=extraction_functions, function_call={"name": "Information"}) prompt = ChatPromptTemplate.from_messages([ ("system", "Extract the relevant information, if not explicitly provided do not guess. Extract partial info"), ("human", "{input}") ]) extraction_chain = prompt | extraction_model | JsonKeyOutputFunctionsParser(key_name="people") ``` For some text (in my tests, about 1 in 500) we get invalid JSON, because a comma is misplaced or the JSON is wrapped in extra text, and json.loads can't parse it. We could use OutputFixingParser, but in most cases there's no need to call the LLM; we can just fix it with a regex. I wrote special code to fix such JSONs: ``` class JsonFixOutputFunctionsParser(JsonOutputFunctionsParser): """Parse an output as the Json object.""" def get_fixed_json(self, text : str) -> str: """Fix LLM json""" text = re.sub(r'",\s*}', '"}', text) text = re.sub(r"},\s*]", "}]", text) text = re.sub(r"}\s*{", "},{", text) open_bracket = min(text.find('['), text.find('{')) if open_bracket == -1: return text close_bracket = max(text.rfind(']'), text.rfind('}')) if close_bracket == -1: return text return text[open_bracket:close_bracket+1] def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any: ................. fixed_json = self.get_fixed_json(str(function_call["arguments"])) return json.loads(fixed_json) ``` Now it works without error: `extraction_chain_fixed = prompt | extraction_model | JsonFixOutputFunctionsParser()` Code example with this bug and fix: https://colab.research.google.com/drive/1LiT6-ljO_g7xn4Y8qaXKZaUlcpVdEEGI?usp=sharing ### Motivation With get_fixed_json we can avoid an extra LLM call and easily and quickly fix most invalid JSONs. 
I tested it on 9000 runs and it fixed all the errors I got. ### Your contribution Feel free to use the get_fixed_json function wherever we need to parse JSON, not only for function calls.
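A stand-alone demonstration of the repair logic described above, extracted from the class (the elided `parse_result` parts left out), run against a typically malformed completion:

```python
import json
import re

def get_fixed_json(text: str) -> str:
    """Regex repairs from the class above, extracted for stand-alone use."""
    text = re.sub(r'",\s*}', '"}', text)    # dangling comma before }
    text = re.sub(r"},\s*]", "}]", text)    # dangling comma before ]
    text = re.sub(r"}\s*{", "},{", text)    # missing comma between objects
    # note: if either bracket type is absent, find() returns -1 and the
    # wrapper-stripping step below is skipped
    open_bracket = min(text.find("["), text.find("{"))
    if open_bracket == -1:
        return text
    close_bracket = max(text.rfind("]"), text.rfind("}"))
    if close_bracket == -1:
        return text
    return text[open_bracket:close_bracket + 1]

broken = 'Sure! Here is the JSON: {"people": [{"name": "Ada",}]}'
fixed = get_fixed_json(broken)
assert json.loads(fixed) == {"people": [{"name": "Ada"}]}
```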
Fix error "Could not parse function call data" for JsonKeyOutputFunctionsParser (and other JSON parsers) without an additional LLM call
https://api.github.com/repos/langchain-ai/langchain/issues/12348/comments
10
2023-10-26T15:34:55Z
2024-02-14T16:08:49Z
https://github.com/langchain-ai/langchain/issues/12348
1,963,840,031
12,348
[ "langchain-ai", "langchain" ]
### Feature request Currently, there's only a single fixed output type for the evaluate_strings function in langchain. { reasoning: "blah blah" score: 0 or 1 value: "Y" or "N" } Ideally, users should be able to override this with whatever output shape they want. ### Motivation It really ties you down when you have to go with the same type of output parser. ### Your contribution I can work on a PR for the change.
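As a sketch, the requested hook could look like this. All names here are hypothetical, not the current langchain API; the point is just that the parser becomes an injectable strategy, with the existing {reasoning, score, value} shape as the default:

```python
# Hypothetical sketch of the requested hook; these names are NOT the
# current langchain API. The parser becomes an injectable strategy, with
# the existing {reasoning, score, value} shape as the default.
def default_parser(raw):
    reasoning, _, verdict = raw.rpartition("\n")
    verdict = verdict.strip()
    return {"reasoning": reasoning.strip(),
            "value": verdict,
            "score": 1 if verdict == "Y" else 0}

def evaluate_strings(raw_llm_output, output_parser=default_parser):
    # a real evaluator would call the LLM first; here we parse a given string
    return output_parser(raw_llm_output)

assert evaluate_strings("looks correct\nY") == {
    "reasoning": "looks correct", "value": "Y", "score": 1}
assert evaluate_strings("0.75", output_parser=float) == 0.75
```

Passing `output_parser=float` shows the payoff: callers who want a plain numeric grade get one without post-processing the default dict.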
Langchain evaluate_strings should allow custom output parsing
https://api.github.com/repos/langchain-ai/langchain/issues/12343/comments
2
2023-10-26T14:54:17Z
2024-02-08T16:13:25Z
https://github.com/langchain-ai/langchain/issues/12343
1,963,761,221
12,343
[ "langchain-ai", "langchain" ]
### System Info aws ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction here's my PROMPT and code: from langchain.prompts.chat import ChatPromptTemplate updated_prompt = ChatPromptTemplate.from_messages( [ ("system", """ You are a knowledgeable AI assistant specializing in extracting information from the 'inquiry' table in the MySQL Database. Your primary task is to perform a single query on the schema of the 'inquiry' table and table and retrieve the data using SQL. When formulating SQL queries, keep the following context in mind: - Filter records based on exact column value matches. - If the user inquires about the Status of the inquiry fetch all these columns: status, name, and time values, and inform the user about these specific values. - Limit query results to a maximum of 3 unless the user specifies otherwise. - Only query necessary columns. - Avoid querying for non-existent columns. - Place the 'ORDER BY' clause after 'WHERE.' - Do not add a semicolon at the end of the SQL. If the query results in an empty set, respond with "information not found" Use this format: Question: The user's query Thought: Your thought process Action: SQL Query Action Input: SQL query Observation: Query results ... (repeat for multiple queries) Thought: Summarize what you've learned Final Answer: Provide the final answer Begin! 
"""), ("user", "{question}\n ai: "), ] ) llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0) # best result sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm) sql_toolkit.get_tools() sqldb_agent = create_sql_agent( llm=llm, toolkit=sql_toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True, ) sqldb_agent.run(updated_prompt.format( question="What is the status of inquiry 123?" )) ### Expected behavior This is my current cost response: The inquiry 123 is Completed on August 21, 2020, at 12 PM. Total Tokens: 18566 Prompt Tokens: 18349 Completion Tokens: 217 Total Cost (USD): $0.055915 I want to reduce the cost to less than > $0.01 any suggestions will help.
I am trying to optimize the cost of SQLDatabaseToolkit calls
https://api.github.com/repos/langchain-ai/langchain/issues/12341/comments
4
2023-10-26T14:49:05Z
2024-02-10T16:09:37Z
https://github.com/langchain-ai/langchain/issues/12341
1,963,750,813
12,341
[ "langchain-ai", "langchain" ]
### System Info I am developing an app utilizing the streaming functionality in Python 3.10. I've crafted my own generator and callback for this purpose. However, I've stumbled upon an issue when setting llm.streaming to true and utilizing a callback: the OpenAI token counter appears to malfunction, reporting a token count of 0 and a cost of $0.0, which is not the expected behavior. ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction def arun(self, g, chat_history, query:str, config:dict, agent_id:str): try: llm = ChatOpenAI( temperature=0, verbose=False, model='gpt-4', streaming=True, callbacks=[ChainStreamHandler(g)] ) with get_openai_callback() as cb: custom_agent = customAgent.run(g, llm, chat_history, query, config, agent_id) print(cb) return custom_agent finally: g.close() Tokens Used: 0 Prompt Tokens: 0 Completion Tokens: 0 Successful Requests: 0 Total Cost (USD): $0.0 ### Expected behavior I aim to have the tokenizer working correctly to gauge the token usage and its corresponding cost. Any guidance or suggestions to rectify this problem would be greatly appreciated.
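One workaround until this is fixed: count the streamed chunks yourself. With streaming=True the API response carries no usage block, which is why get_openai_callback stays at zero. A sketch follows; the `tokenizer` argument stands in for a real tokenizer such as tiktoken's encode (an assumption, not part of langchain):

```python
# Workaround sketch: accumulate streamed chunks in a callback-shaped
# object and count tokens over the joined text afterwards.
class StreamingTokenCounter:
    def __init__(self, tokenizer):
        self._tokenizer = tokenizer
        self.chunks = []

    def on_llm_new_token(self, token, **kwargs):
        self.chunks.append(token)        # called once per streamed chunk

    @property
    def completion_tokens(self):
        return len(self._tokenizer("".join(self.chunks)))

counter = StreamingTokenCounter(tokenizer=str.split)  # toy tokenizer for demo
for piece in ["Hello", " world", ",", " again"]:
    counter.on_llm_new_token(piece)
assert counter.completion_tokens == 3
```

Multiplying the counted tokens by the model's published rates then recovers the cost figure the callback cannot report.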
Issue with OpenAI tokenizer not functioning as expected in streaming mode
https://api.github.com/repos/langchain-ai/langchain/issues/12339/comments
3
2023-10-26T13:26:24Z
2024-04-23T17:02:28Z
https://github.com/langchain-ai/langchain/issues/12339
1,963,570,897
12,339
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Since version `v0.0.299` `LangChain` depends on `anyio = "<4.0"`. ### Suggestion: Could you please relax the `anyio` dependency to support a wider range of versions `<5.0` as `4.0.0` has been out since August?
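For reference, the requested relaxation would look roughly like this in langchain's pyproject.toml (the exact lower bound is illustrative):

```toml
[tool.poetry.dependencies]
# today: anyio = "<4.0"
anyio = ">=3.7,<5.0"  # accepts the anyio 4.x line as well
```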
Issue: Update `anyio` dependency
https://api.github.com/repos/langchain-ai/langchain/issues/12337/comments
2
2023-10-26T12:48:24Z
2024-02-06T16:13:01Z
https://github.com/langchain-ai/langchain/issues/12337
1,963,492,115
12,337
[ "langchain-ai", "langchain" ]
### System Info langchain 0.0.275 Ubuntu 20.04.6 LTS llama-2-7b-chat.ggmlv3.q4_0.bin llama-cpp-python 0.1.78 I am getting a validation error when trying to use the document summary chain. It is giving an error "str type expected". I am passing Document objects to the model. I have checked the type of all page contents and they are indeed strings. **Error** ```bash --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) [/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb) Cell 9 line 5 [3](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224c4c4d227d/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb#X34sdnNjb2RlLXJlbW90ZQ%3D%3D?line=2) docs = loader.load_and_split() [4](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224c4c4d227d/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb#X34sdnNjb2RlLXJlbW90ZQ%3D%3D?line=3) chain = load_summarize_chain(llm, chain_type='map_reduce') ----> [5](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224c4c4d227d/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb#X34sdnNjb2RlLXJlbW90ZQ%3D%3D?line=4) summary = chain.run(docs[:1]) [6](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224c4c4d227d/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb#X34sdnNjb2RlLXJlbW90ZQ%3D%3D?line=5) print(summary) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:475](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:475), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 473 if len(args) != 1: 474 raise ValueError("`run` supports only one positional 
argument.") --> 475 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ 476 _output_key 477 ] 479 if kwargs and not args: 480 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ 481 _output_key 482 ] File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:282](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:282), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 280 except (KeyboardInterrupt, Exception) as e: 281 run_manager.on_chain_error(e) --> 282 raise e 283 run_manager.on_chain_end(outputs) 284 final_outputs: Dict[str, Any] = self.prep_outputs( 285 inputs, outputs, return_only_outputs 286 ) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:276](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:276), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 270 run_manager = callback_manager.on_chain_start( 271 dumpd(self), 272 inputs, 273 ) 274 try: 275 outputs = ( --> 276 self._call(inputs, run_manager=run_manager) 277 if new_arg_supported 278 else self._call(inputs) 279 ) 280 except (KeyboardInterrupt, Exception) as e: 281 run_manager.on_chain_error(e) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:105](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:105), in BaseCombineDocumentsChain._call(self, inputs, run_manager) 103 # 
Other keys are assumed to be needed for LLM prediction 104 other_keys = {k: v for k, v in inputs.items() if k != self.input_key} --> 105 output, extra_return_dict = self.combine_docs( 106 docs, callbacks=_run_manager.get_child(), **other_keys 107 ) 108 extra_return_dict[self.output_key] = output 109 return extra_return_dict File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py:209](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py:209), in MapReduceDocumentsChain.combine_docs(self, docs, token_max, callbacks, **kwargs) 197 def combine_docs( 198 self, 199 docs: List[Document], (...) 202 **kwargs: Any, 203 ) -> Tuple[str, dict]: 204 """Combine documents in a map reduce manner. 205 206 Combine by mapping first chain over all documents, then reducing the results. 207 This reducing can be done recursively if needed (if there are many documents). 208 """ --> 209 map_results = self.llm_chain.apply( 210 # FYI - this is parallelized and so it is fast. 
211 [{self.document_variable_name: d.page_content, **kwargs} for d in docs], 212 callbacks=callbacks, 213 ) 214 question_result_key = self.llm_chain.output_key 215 result_docs = [ 216 Document(page_content=r[question_result_key], metadata=docs[i].metadata) 217 # This uses metadata from the docs, and the textual results from `results` 218 for i, r in enumerate(map_results) 219 ] File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:189](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:189), in LLMChain.apply(self, input_list, callbacks) 187 except (KeyboardInterrupt, Exception) as e: 188 run_manager.on_chain_error(e) --> 189 raise e 190 outputs = self.create_outputs(response) 191 run_manager.on_chain_end({"outputs": outputs}) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:186](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:186), in LLMChain.apply(self, input_list, callbacks) 181 run_manager = callback_manager.on_chain_start( 182 dumpd(self), 183 {"input_list": input_list}, 184 ) 185 try: --> 186 response = self.generate(input_list, run_manager=run_manager) 187 except (KeyboardInterrupt, Exception) as e: 188 run_manager.on_chain_error(e) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:101](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:101), in LLMChain.generate(self, input_list, run_manager) 99 """Generate LLM result from inputs.""" 100 prompts, stop = self.prep_prompts(input_list, 
run_manager=run_manager) --> 101 return self.llm.generate_prompt( 102 prompts, 103 stop, 104 callbacks=run_manager.get_child() if run_manager else None, 105 **self.llm_kwargs, 106 ) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:467](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:467), in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs) 459 def generate_prompt( 460 self, 461 prompts: List[PromptValue], (...) 464 **kwargs: Any, 465 ) -> LLMResult: 466 prompt_strings = [p.to_string() for p in prompts] --> 467 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:602](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:602), in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, **kwargs) 593 raise ValueError( 594 "Asked to cache, but no cache found at `langchain.cache`." 
595 ) 596 run_managers = [ 597 callback_manager.on_llm_start( 598 dumpd(self), [prompt], invocation_params=params, options=options 599 )[0] 600 for callback_manager, prompt in zip(callback_managers, prompts) 601 ] --> 602 output = self._generate_helper( 603 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 604 ) 605 return output 606 if len(missing_prompts) > 0: File [~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:504](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:504), in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 502 for run_manager in run_managers: 503 run_manager.on_llm_error(e) --> 504 raise e 505 flattened_outputs = output.flatten() 506 for manager, flattened_output in zip(run_managers, flattened_outputs): File [~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:491](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:491), in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 481 def _generate_helper( 482 self, 483 prompts: List[str], (...) 
487 **kwargs: Any, 488 ) -> LLMResult: 489 try: 490 output = ( --> 491 self._generate( 492 prompts, 493 stop=stop, 494 # TODO: support multiple run managers 495 run_manager=run_managers[0] if run_managers else None, 496 **kwargs, 497 ) 498 if new_arg_supported 499 else self._generate(prompts, stop=stop) 500 ) 501 except (KeyboardInterrupt, Exception) as e: 502 for run_manager in run_managers: File [~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:985](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:985), in LLM._generate(self, prompts, stop, run_manager, **kwargs) 979 for prompt in prompts: 980 text = ( 981 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs) 982 if new_arg_supported 983 else self._call(prompt, stop=stop, **kwargs) 984 ) --> 985 generations.append([Generation(text=text)]) 986 return LLMResult(generations=generations) File [~/localGPT/venv/lib/python3.10/site-packages/langchain/load/serializable.py:74](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/langchain/load/serializable.py:74), in Serializable.__init__(self, **kwargs) 73 def __init__(self, **kwargs: Any) -> None: ---> 74 super().__init__(**kwargs) 75 self._lc_kwargs = kwargs File [~/localGPT/venv/lib/python3.10/site-packages/pydantic/main.py:341](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224c4c4d227d.vscode-resource.vscode-cdn.net/home/waleedalfaris/localGPT/notebooks/~/localGPT/venv/lib/python3.10/site-packages/pydantic/main.py:341), in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for Generation text str type expected (type=type_error.str) ``` ### Who can help? 
@hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.chains.summarize import load_summarize_chain from langchain.prompts import PromptTemplate from langchain.document_loaders import PDFMinerLoader from langchain.llms import LlamaCpp kwargs = { "model_path": model_path, "n_ctx": 4096, "max_tokens": 4096, "seed": 42, "n_gpu_layers": -1, "verbose": True, } llm = LlamaCpp(**kwargs) file = 'test_file.pdf' loader = PDFMinerLoader(file) docs = loader.load_and_split() chain = load_summarize_chain(llm, chain_type='map_reduce') summary = chain.run(docs) print(summary) ``` ### Expected behavior The chain should provide a summary of the Documents provided
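The traceback bottoms out at `Generation(text=text)`, which requires a `str`, so the error fires whenever the LLM's `_call` hands back anything else (for example `None` on a failed llama.cpp completion). A dependency-free illustration of the failure mode and a defensive coercion (names below are illustrative, not the langchain internals):

```python
# Generation(text=...) insists on str; anything non-str raises exactly
# "str type expected".
def make_generation(text):
    if not isinstance(text, str):
        raise TypeError("str type expected")
    return {"text": text}

def safe_call(raw):
    """Defensive coercion applied before building the Generation."""
    if isinstance(raw, str):
        return raw
    return "" if raw is None else str(raw)

try:
    make_generation(None)            # the reported crash
except TypeError as err:
    assert "str type expected" in str(err)

assert make_generation(safe_call(None)) == {"text": ""}
assert make_generation(safe_call(123)) == {"text": "123"}
```

So a useful first debugging step is printing `type(llm("test prompt"))` with these LlamaCpp settings to see what `_call` actually returns.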
load_summarize_chain ValidationError, str type expected
https://api.github.com/repos/langchain-ai/langchain/issues/12336/comments
4
2023-10-26T12:47:28Z
2024-02-10T16:09:42Z
https://github.com/langchain-ai/langchain/issues/12336
1,963,490,327
12,336
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hello Team, I am trying to build a QA system with text files as its knowledge base. While creating embeddings I have provided different metadata tags to documents. I want to use this metadata to filter documents before sending them to LLM to generate answer. I am using as_retriever() method to filter the documents based on 'key':'value' pair. Code below ``` vectordb.as_retriever( search_type="similarity", search_kwargs = { 'filter' : {'key1' : 'value1'} } ) ``` This method works great to filter out the documents when I am using ChromaDB as VectorStore, but does not work when I use Neo4j as VectorStore. It returns the same results with or without filter using Neo4j To figure out the issue, I checked langchain's source code for implementation of ChromaDB and Neo4j Vectorstore. In ChomaDB the similarity_search() method is implemented as follows - https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/chroma.py ![image](https://github.com/langchain-ai/langchain/assets/19748080/8c2e2feb-baa0-45bc-9f19-73aa4c86044d) Here the filter parameter is passed to the next function - self.similarity_search_with_score(query, k, filter=filter). But when I checked the same method for Neo4j https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/neo4j_vector.py ![image](https://github.com/langchain-ai/langchain/assets/19748080/834ff3b7-bd20-454d-904e-83a765a1986f) Here the filter parameter or the **kwargs is not passed to the next function - self.similarity_search_by_vector( embedding=embedding, k=k, query=query, ) Please check if this is what is causing the issue. Also let me know if something is wrong in my implementation. Thanks in Advance ### Suggestion: _No response_
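The report above can be reproduced without any database: it is the classic "accepted but dropped" kwarg pattern. Function names below are stand-ins, not the real langchain ones:

```python
# The outer search accepts a filter but never forwards it, so results are
# identical with or without one, which matches the Neo4j behavior seen.
DOCS = [{"id": 1, "key1": "value1"}, {"id": 2, "key1": "other"}]

def inner_search(k, filter=None):
    hits = [d for d in DOCS
            if not filter or all(d.get(f) == v for f, v in filter.items())]
    return hits[:k]

def search_dropping(k=4, **kwargs):
    return inner_search(k)            # kwargs (including filter) are lost

def search_forwarding(k=4, **kwargs):
    return inner_search(k, **kwargs)  # fix: pass the kwargs through

assert search_dropping(filter={"key1": "value1"}) == DOCS       # unfiltered!
assert search_forwarding(filter={"key1": "value1"}) == [DOCS[0]]
```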
Unable to filter results in Neo4j Vectorstore.as_retriever() method using filter parameter in search_kwargs
https://api.github.com/repos/langchain-ai/langchain/issues/12335/comments
11
2023-10-26T11:08:53Z
2024-08-06T16:07:32Z
https://github.com/langchain-ai/langchain/issues/12335
1,963,275,005
12,335
[ "langchain-ai", "langchain" ]
I am trying to use langserve with langchain llamacpp like this (chain.py): ```from flask import Flask, jsonify, request from langchain.chains import RetrievalQA,ConversationalRetrievalChain from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.llms import LlamaCpp from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from prompt_template_utils import get_prompt_template from langchain.vectorstores import Chroma, FAISS from werkzeug.utils import secure_filename from constants import CHROMA_SETTINGS, EMBEDDING_MODEL_NAME, PERSIST_DIRECTORY, MODEL_ID, MODEL_BASENAME DEVICE_TYPE = "cuda" if torch.cuda.is_available() else "cpu" SHOW_SOURCES = True logging.info(f"Running on: {DEVICE_TYPE}") logging.info(f"Display Source Documents set to: {SHOW_SOURCES}") MODEL_PATH = '../llama-2-7b-32k-instruct.Q6_K.gguf' def load_model(): callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) n_gpu_layers = 30 # Change this value based on your model and your GPU VRAM pool. n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU. 
llm = LlamaCpp( model_path=MODEL_PATH, n_ctx = 8192, n_gpu_layers=n_gpu_layers, n_batch=n_batch, repeat_penalty=1, temperature=0.2, max_tokens=100, top_p=0.9, top_k = 50, rope_freq_scale=0.125, stop = ["[INST]"], # callback_manager=callback_manager, streaming=True, verbose=True, # Verbose is required to pass to the callback manager ) return llm EMBEDDINGS = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL_NAME, model_kwargs={"device": DEVICE_TYPE}) DB = FAISS.load_local(PERSIST_DIRECTORY, EMBEDDINGS) RETRIEVER = DB.as_retriever(search_kwargs={'k': 16}) template = """\ [INST] <<SYS>> {context} <</SYS>> [/INST] [INST]{question}[/INST] """ LLM = load_model()#load_model(device_type=DEVICE_TYPE, model_id=MODEL_ID, model_basename=MODEL_BASENAME) QA_CHAIN_PROMPT = PromptTemplate.from_template(template) QA = ConversationalRetrievalChain.from_llm(LLM, retriever=RETRIEVER, combine_docs_chain_kwargs={"prompt": QA_CHAIN_PROMPT}) ``` The app.py file has this: ``` from fastapi import FastAPI from langserve import add_routes from chain import QA app = FastAPI(title="Retrieval App") add_routes(app, QA) if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) ``` However, when running the /stream route, I get this warning: ``` INFO: Started server process [491117] INFO: Waiting for application startup. INFO: Application startup complete. 
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) INFO: 127.0.0.1:48024 - "GET /docs HTTP/1.1" 200 OK INFO: 127.0.0.1:48024 - "GET /openapi.json HTTP/1.1" 200 OK INFO: 127.0.0.1:53430 - "POST /stream HTTP/1.1" 200 OK /home/zeeshan/miniconda3/envs/llama/lib/python3.10/site-packages/langchain/llms/llamacpp.py:352: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited run_manager.on_llm_new_token( RuntimeWarning: Enable tracemalloc to get the object allocation traceback ``` And the output is like this: ![image](https://github.com/langchain-ai/langserve/assets/93662126/8e1ac812-b2e7-4df0-a0e9-ca70f4e1ee40) This seems not to be streaming output tokens. Does langserve even support langchain llamacpp or I am doing something wrong?
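The RuntimeWarning in the log is the generic asyncio failure mode: a synchronous code path calls an async callback (`on_llm_new_token`) without awaiting it, so the coroutine object is discarded and the token never reaches the stream. A minimal stand-alone reproduction of both patterns:

```python
import asyncio

received = []

async def on_llm_new_token(token):
    received.append(token)

def emit_broken(token):
    # calling the coroutine function from sync code creates a coroutine
    # object and drops it: "RuntimeWarning: coroutine ... was never awaited"
    on_llm_new_token(token)

async def emit_ok(tokens):
    for t in tokens:
        await on_llm_new_token(t)   # awaited: the token is delivered

asyncio.run(emit_ok(["Hel", "lo"]))
assert received == ["Hel", "lo"]
```

This points at LlamaCpp's synchronous generation path being used under the async /stream endpoint, rather than at anything in the chain definition itself.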
langserve with Llamacpp
https://api.github.com/repos/langchain-ai/langchain/issues/12441/comments
11
2023-10-26T10:44:09Z
2024-05-02T16:04:29Z
https://github.com/langchain-ai/langchain/issues/12441
1,966,025,938
12,441
[ "langchain-ai", "langchain" ]
### Need Route Name : Coming in console but not in output I am using Router chain to route my chain with MultiPromptChain. It gets routed and shows in console to which route it went but it doesn't come in output variable. It just shows in console. I want that in a variable. I used every available method like return_intermediate_steps=True and return_route_name=True, But it is not working Below is the code: `destination_chains = {} for p_info in prompt_infos: name = p_info["name"] prompt_template = p_info["prompt_template"] prompt = ChatPromptTemplate.from_template(template=prompt_template) chain = LLMChain(llm=llm, prompt=prompt) destination_chains[name] = chain destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos] destinations_str = "\n".join(destinations) default_prompt = ChatPromptTemplate.from_template("{input}") default_chain = LLMChain(llm=llm, prompt=default_prompt) MULTI_PROMPT_ROUTER_TEMPLATE = """Given a text input to a \ language model select the model prompt best suited for the input. \ You will be given the names of the available prompts and a \ description of what the prompt is best suited for. \ << FORMATTING >> Return a markdown code snippet with a JSON object formatted to look like: ```json {{{{ "destination": string \ name of the prompt to use or "DEFAULT" "next_inputs": string \ a potentially modified version of the original input }}}} ``` REMEMBER: "destination" MUST be one of the candidate prompt \ names specified below. REMEMBER: "next_inputs" can just be the original input \ if you don't think any modifications are needed. 
<< CANDIDATE PROMPTS >> {destinations} << INPUT >> {{input}} Output should be only in below format: Destination used: Output of it """ router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format( destinations=destinations_str ) router_prompt = PromptTemplate( template=router_template, input_variables=["input"], output_parser=RouterOutputParser(), ) router_chain = LLMRouterChain.from_llm(llm, router_prompt) chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True ) inputQues="How to turn on TV" output = chain.run(input=inputQues)` ### Suggestion: _No response_
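Aside from return_intermediate_steps, a blunt but reliable way to get the verbose text into a variable is to capture stdout around the run. The chain body below is a stand-in for `chain.run(...)`:

```python
import contextlib
import io

def run_chain_verbose():
    # stand-in for chain.run(...): a verbose chain prints its routing
    print("> Entering new MultiPromptChain chain...")
    print("physics: {'input': 'How to turn on TV'}")
    return "Press the power button."

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    answer = run_chain_verbose()
console_log = buf.getvalue()   # the verbose text, now in a variable

assert answer == "Press the power button."
assert "physics" in console_log
```

The destination name can then be pulled out of `console_log` with a simple string search, independent of what the chain itself returns.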
Issue: Can't get verbose output into a variable in Python
https://api.github.com/repos/langchain-ai/langchain/issues/12330/comments
4
2023-10-26T08:31:31Z
2024-02-10T16:09:47Z
https://github.com/langchain-ai/langchain/issues/12330
1,962,991,552
12,330
[ "langchain-ai", "langchain" ]
### System Info macOS Sonoma ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Just use any chain that supports return_source_documents and try setting it to true. ### Expected behavior It will say "too many output keys" and throw an error. Please fix this; otherwise metadata for items does not make sense, since you cannot return source docs anyway.
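For context, run() is restricted to chains with exactly one output key, and return_source_documents=True adds a second key, hence the error. Calling the chain like a function returns the full output dict instead, illustrated below with stand-in functions:

```python
# Why run() fails here: run() can only return a single output key, and
# return_source_documents=True makes the chain produce two. Calling the
# chain like a function returns the whole dict. Stand-in functions below.
def call(inputs):
    return {"result": "answer text",
            "source_documents": ["doc-1", "doc-2"]}

def run(inputs):
    outputs = call(inputs)
    if len(outputs) != 1:
        raise ValueError("`run` supports only one output key; "
                         "call the chain directly instead")
    return next(iter(outputs.values()))

try:
    run({"query": "q"})              # the reported error
except ValueError:
    pass

full = call({"query": "q"})          # the working pattern
assert full["source_documents"] == ["doc-1", "doc-2"]
```

In langchain terms that means `chain({"query": question})` rather than `chain.run(question)` when source documents are requested.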
return_source documents does not work
https://api.github.com/repos/langchain-ai/langchain/issues/12329/comments
4
2023-10-26T08:26:40Z
2024-02-10T16:09:52Z
https://github.com/langchain-ai/langchain/issues/12329
1,962,983,060
12,329
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am using Langchain + Chroma to do a Q&A program with a csv document as its knowledge base. The CSV file looks like below: [![enter image description here][1]][1] Here is the CSV file: https://1drv.ms/u/s!Asflam6BEzhjgbkdegCGfZ7FI4O1Og?e=2X6ior And code for creating embedding: from langchain.document_loaders.csv_loader import CSVLoader from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores.chroma import Chroma from langchain.text_splitter import RecursiveCharacterTextSplitter as RCTS file_path = "Test.csv" doc_pages = [] csv_loader = CSVLoader(file_path) doc_pages = csv_loader.load() print(f"Extracted {file_path} with {len(doc_pages)} pages...") splitter = RCTS(chunk_size = 3000, chunk_overlap = 300) splitted_docs = splitter.split_documents(doc_pages) embedding = OpenAIEmbeddings() persist_directory = "docs_t/chroma/" vectordb = Chroma.from_documents( documents=splitted_docs, embedding=embedding, persist_directory=persist_directory ) vectordb.persist() print(vectordb._collection.count()) Here is the Testing code: result = vectordb.similarity_search("what is the Support Item Name for 01_003_0107_1_1", k=3) for r in result: print(r.page_content, end="\n\n") And I see this testing code returns all other non-relevant information. Which part leads to this issue? [1]: https://i.stack.imgur.com/Ev3cn.png ### Suggestion: _No response_
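One frequent cause of irrelevant hits with a persisted Chroma store is stale data: `Chroma.from_documents` adds to whatever collection already exists under `persist_directory`, so vectors from earlier runs can linger and pollute results. A minimal sketch (assumption: you want a clean rebuild) that wipes the directory before re-ingesting; it is also worth calling `similarity_search_with_score` to inspect whether the returned matches are genuinely close.

```python
import os
import shutil

def reset_persist_dir(persist_directory):
    """Delete a stale Chroma persist directory so the next ingest starts clean."""
    if os.path.isdir(persist_directory):
        shutil.rmtree(persist_directory)
```

Usage sketch: `reset_persist_dir("docs_t/chroma/")` before `Chroma.from_documents(...)`, then rebuild and re-run the query.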
Issue: Similarity_search on Vector Store does not work.
https://api.github.com/repos/langchain-ai/langchain/issues/12326/comments
12
2023-10-26T07:37:18Z
2024-02-15T16:08:11Z
https://github.com/langchain-ai/langchain/issues/12326
1,962,901,400
12,326
[ "langchain-ai", "langchain" ]
### System Info GPT-4 can now read images. How to combine images and text in PDF or WORD when calling the API? ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction document loader? ### Expected behavior maybe we can have a document loader that loads text and images at the same time.
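Until a combined text-plus-image loader exists, one interim pattern is to extract images from the PDF/Word file yourself and send them alongside the text. The message shape below follows the content-parts format OpenAI documented for its vision-capable chat models (a list of `text` and `image_url` parts with a base64 data URL) — treat the exact field names as an assumption to verify against the current API reference.

```python
import base64

def image_message(text, image_bytes, mime="image/png"):
    """Build a vision-style chat message combining text and one image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }
```

A loader could emit `Document` objects for the text and pass extracted figures through a helper like this at query time.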
GPT-4 reads the images
https://api.github.com/repos/langchain-ai/langchain/issues/12323/comments
3
2023-10-26T06:38:05Z
2024-02-10T16:10:02Z
https://github.com/langchain-ai/langchain/issues/12323
1,962,808,683
12,323
[ "langchain-ai", "langchain" ]
I defined a custom LLM and a custom search, and used the agent for testing, but I found some problems in parsing the llm response, which was prone to result parsing errors after multiple rounds of dialogue. I suspect that there is something wrong with the stop logic of the custom llm, please help me analyze it. ``` if __name__ == '__main__': gpt35_llm = gpt35_llm() tools = [GoogleSearchTool(), BaiduSearchTool()] agent = initialize_agent(tools, gpt35_llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=True,verbose=True) print(agent.run('杭州亚运会的奖牌榜前三名是哪些国家?杭州亚运会的第一枚金牌是谁获得的?')) ``` ``` class gpt35_llm(LLM): model: str = "gpt-3.5" @property def _llm_type(self) -> str: return "gpt-3.5" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> str: response = gpt_35_chat(prompt, None) if stop is not None: print(f'stop:{stop}') content = response['content'] output = content[:content.find(stop[0])] return output if response['code'] != 200: raise ValueError(response['content']) return response['content'] @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return {"model": self.model}```
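A likely culprit: the `_call` above only honours `stop[0]`, but agents pass several stop strings (e.g. `Observation:` variants), and a completion may contain a later entry of the list before the first one — so the returned text can still carry agent-control tokens that break output parsing. A sketch of a more robust truncation helper to use inside `_call`:

```python
def truncate_at_stop(text, stop=None):
    """Cut `text` at the earliest occurrence of ANY stop sequence, not just stop[0]."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

Inside the custom LLM this replaces the `content[:content.find(stop[0])]` line (note that `find` returning `-1` there would also silently drop the last character — another bug the helper avoids).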
How do I define the stop logic for a custom llm in agent
https://api.github.com/repos/langchain-ai/langchain/issues/12322/comments
2
2023-10-26T06:35:00Z
2024-02-06T16:13:26Z
https://github.com/langchain-ai/langchain/issues/12322
1,962,804,701
12,322
[ "langchain-ai", "langchain" ]
### System Info langchain version: 0.0320 python version: 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I want to clear the history of ConversationBufferMemory. I used ConversationBufferMemory to maintain chat history, but right now I don't know how to delete that history. If anyone knows, please help me out. ### Expected behavior ConversationBufferMemory history should be cleared.
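`ConversationBufferMemory` exposes a `clear()` method that empties its underlying `chat_memory`, so resetting between sessions is one call. The sketch keeps the memory object abstract so it runs without LangChain installed; with the real class it is literally `memory.clear()` (verify on your version — `memory.chat_memory.clear()` targets the message store directly).

```python
def reset_conversation(memory):
    """Wipe stored chat history; ConversationBufferMemory implements .clear()."""
    memory.clear()
```

Usage sketch: `memory = ConversationBufferMemory(); ...; reset_conversation(memory)` before starting a fresh conversation.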
Conversation Buffer Memory clear histroy
https://api.github.com/repos/langchain-ai/langchain/issues/12319/comments
2
2023-10-26T04:56:32Z
2024-02-06T16:13:31Z
https://github.com/langchain-ai/langchain/issues/12319
1,962,704,317
12,319
[ "langchain-ai", "langchain" ]
### System Info latest langchain code AzureSearch at langchain/libs/langchain/langchain/vectorstores/azuresearch.py: semantic_hybrid_search_with_score function, when returning search result, the score should be @search.rerankerScore, not @search.score for hybrid semantic search, according to the latest Azure Doc here: https://learn.microsoft.com/en-us/azure/search/hybrid-search-ranking ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction the code is not following the latest Azure Search document: https://learn.microsoft.com/en-us/azure/search/hybrid-search-ranking ### Expected behavior float(result["@search.rerankerScore"]) (instead of float(result["@search.score"]))
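A sketch of the suggested change: prefer the reranker score when the service returns one, with a fallback so plain hybrid queries (which only carry `@search.score`) keep working. Field names follow the Azure Cognitive Search hybrid-ranking documentation linked above.

```python
def semantic_hybrid_score(result):
    """Score to report for a semantic hybrid hit: reranker score when present."""
    reranker = result.get("@search.rerankerScore")
    if reranker is not None:
        return float(reranker)
    return float(result["@search.score"])
```

In `semantic_hybrid_search_with_score` this would replace the bare `float(result["@search.score"])` when building the `(Document, score)` tuples.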
should be @search.rerankerScore, not @search.score for hybrid semantic search (Azure Cognitive Search)
https://api.github.com/repos/langchain-ai/langchain/issues/12317/comments
2
2023-10-26T04:00:06Z
2024-02-06T16:13:36Z
https://github.com/langchain-ai/langchain/issues/12317
1,962,658,507
12,317
[ "langchain-ai", "langchain" ]
### System Info I got different embedding results using OpenAIEmbeddings and the original openai library. Besides the embeddings from both OpenAIEmbeddings and openai change from time to time. Sometimes it returns the same results but sometimes it returns differently , especially after I exceeds the time limit. I am using langchain-0.0.321. I tried dig a little deeper in the OpenAIEmbeddings source code and find that there is a process using the tokenizer. I tried use the same tokenizer in the original openai library and get the same result compared to not using it, so I think this is not the problem. Can someone explain a little bit why the results in different libraries and different requests may change? ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python import openai from langchain.embeddings.openai import OpenAIEmbeddings text = 'where are you going' vec1 = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY).embed_query(text) vec2 = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY).embed_query(text) if not vec1 == vec2: print('vec1:', vec1[:3]) print('vec2:', vec2[:3]) vec3 = openai.Embedding.create(input=[text], model='text-embedding-ada-002')['data'][0]['embedding'] if not (vec1 == vec3 or vec2 == vec3): print('vec1:', vec1[:3]) print('vec2:', vec2[:3]) print('vec3:', vec3[:3]) ``` ### Expected behavior the results are as below ``` vec1: [0.0004729258755920449, -0.011402048647760918, -0.012062848996486476] vec2: [0.00046853933082822383, -0.01143268296919534, -0.012086534739296315] vec1: [0.0004729258755920449, -0.011402048647760918, -0.012062848996486476] 
vec2: [0.00046853933082822383, -0.01143268296919534, -0.012086534739296315] vec3: [0.00040566621464677155, -0.011434691026806831, -0.012049459852278233] ```
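The vectors above differ only in the fourth decimal place, which is consistent with the service itself being slightly nondeterministic across requests for `text-embedding-ada-002` — the tokenizer step in `OpenAIEmbeddings` was already ruled out above. A practical consequence: compare embeddings by similarity with a tolerance rather than exact equality. A stdlib-only sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearly_same_embedding(a, b, threshold=0.999):
    """Treat two embeddings of the same text as equal above a similarity threshold."""
    return cosine_similarity(a, b) >= threshold
```

With this check, repeated embeddings of the same text should consistently pass even when their raw floats differ.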
OpenAIEmbeddings vector different from the openai library and changes from time to time
https://api.github.com/repos/langchain-ai/langchain/issues/12314/comments
4
2023-10-26T02:45:52Z
2024-07-11T01:57:06Z
https://github.com/langchain-ai/langchain/issues/12314
1,962,600,617
12,314
[ "langchain-ai", "langchain" ]
### Feature request The langchain API integration Elevenlabs should accomodate the very important parameter `voice`. ### Motivation Without the voice parameter, the Elevenlabs langchain integration is unusable because `play()` switches the voice randomly with every execution. ### Your contribution I tracked down the releveant code in the [Elevenlabs-langchain integration API](https://api.python.langchain.com/en/latest/_modules/langchain/tools/eleven_labs/text2speech.html#ElevenLabsText2SpeechTool.stream_speech) down to this section: ``` def _run( self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None ) -> str: """Use the tool.""" elevenlabs = _import_elevenlabs() try: speech = elevenlabs.generate(text=query, model=self.model) with tempfile.NamedTemporaryFile( mode="bx", suffix=".wav", delete=False ) as f: f.write(speech) return f.name except Exception as e: raise RuntimeError(f"Error while running ElevenLabsText2SpeechTool: {e}") [[docs]](https://api.python.langchain.com/en/latest/tools/langchain.tools.eleven_labs.text2speech.ElevenLabsText2SpeechTool.html#langchain.tools.eleven_labs.text2speech.ElevenLabsText2SpeechTool.play) def play(self, speech_file: str) -> None: """Play the text as speech.""" elevenlabs = _import_elevenlabs() with open(speech_file, mode="rb") as f: speech = f.read() elevenlabs.play(speech) [[docs]](https://api.python.langchain.com/en/latest/tools/langchain.tools.eleven_labs.text2speech.ElevenLabsText2SpeechTool.html#langchain.tools.eleven_labs.text2speech.ElevenLabsText2SpeechTool.stream_speech) def stream_speech(self, query: str) -> None: """Stream the text as speech as it is generated. Play the text in your speakers.""" elevenlabs = _import_elevenlabs() speech_stream = elevenlabs.generate(text=query, model=self.model, stream=True) elevenlabs.stream(speech_stream) ``` The calls to `elevenlabs.generate` and `elevenlabs.play` must be extended with the `voice` parameter. 
After this change, the langchain calls `run`, `play`, and importantly, `stream_speech`, could be made analogous to the calls from the ElevenLabs API, e.g. seen [in this repo](https://github.com/langchain-ai/langchain/issues/1076#issuecomment-1552363555): ``` prediction = agent_chain.run(input=user_input.text) audio = generate( text=prediction, voice="Adam", model="eleven_monolingual_v1" ) ```
Elevenlabs-langchain API should integrate very important parameter 'voice'
https://api.github.com/repos/langchain-ai/langchain/issues/12312/comments
5
2023-10-26T02:08:00Z
2024-05-21T16:41:13Z
https://github.com/langchain-ai/langchain/issues/12312
1,962,571,837
12,312
[ "langchain-ai", "langchain" ]
### System Info python=`3.11` langchain=`0.0.314` pydantic=`2.3.0` pydantic_core=`2.6.3` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction From the documentation, I can create a fake tool that works as follows: ```python # Define the tool class FakeSearchTool(BaseTool): name: str | None = "custom_search" description: str | None = "useful for when you need to answer questions about current events" def _run( self, query: str ) -> str: """Use the tool.""" print("Query:", query) return "Used the tool, and got the answer!" async def _arun( self, query: str ) -> str: """Use the tool asynchronously.""" raise NotImplementedError("custom_search does not support async") # Test if the tool is working search_tool = FakeSearchTool() print(search_tool("What is my answer?")) ``` > Query: What is my answer? > Used the tool, and got the answer! 
My case however requires me to give an initial configuration object to every tool, something like this: ```python from langchain.tools.base import BaseTool # Create the config class class Config: def __init__(self, init_arg1: str, init_arg2: str): self.init_arg1 = init_arg1 self.init_arg2 = init_arg2 # Create the tool class FakeSearchTool(BaseTool): name: str | None = "custom_search" description: str | None = "useful for when you need to answer questions about current events" config: Config def __init__(self, config: Config): self.config = config def _run( self, query: str ) -> str: """Use the tool.""" print("Query:", query) print("Obtained config:", self.config.init_arg1, self.config.init_arg2) return "Used the tool, and got the answer using the config!" async def _arun( self, query: str ) -> str: """Use the tool asynchronously.""" raise NotImplementedError("custom_search does not support async") # Test if the tool is working search_tool = FakeSearchTool( config=Config(init_arg1="arg1", init_arg2="arg2") ) print(search_tool("What is my answer?")) ``` However, this does not work and return the following error: > AttributeError: 'FakeSearchTool' object has no attribute '__fields_set__' <details> <summary>Stack Trace</summary> ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /ai-service/notebooks/agent-with-elasticsearch-tool.ipynb Cell 18 line 2 20 """Use the tool asynchronously.""" 21 raise NotImplementedError("custom_search does not support async") ---> 23 search_tool = FakeSearchTool( 24 config=Config(init_arg1="arg1", init_arg2="arg2") 25 ) 26 print(search_tool("What is my answer?")) /ai-service/notebooks/agent-with-elasticsearch-tool.ipynb Cell 18 line 7 6 def __init__(self, config: Config): ----> 7 self.config = config File ~/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/pydantic/v1/main.py:405, in BaseModel.__setattr__(self, name, value) 402 else: 403 
self.__dict__[name] = value --> 405 self.__fields_set__.add(name) AttributeError: 'FakeSearchTool' object has no attribute '__fields_set__' ``` </details> ### Expected behavior I expect to get the following output out of the tool: > Query: What is my answer? > Obtained config: arg1 arg2 > Used the tool, and got the answer using the config!
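The traceback comes from overriding `__init__` without ever running pydantic's own initialiser: `BaseTool` is a pydantic model, and its `__setattr__` bookkeeping needs `__fields_set__`, which only the base `__init__` creates. With the real class the fix is either to drop the custom `__init__` entirely (declare `config: Config` as a field and construct with `FakeSearchTool(config=...)`) or to call `super().__init__(config=config)`. The toy stand-in below reproduces the mechanism without LangChain or pydantic installed:

```python
class PydanticLikeBase:
    """Tiny pydantic-v1-style stand-in: __setattr__ records names in __fields_set__."""
    def __init__(self, **data):
        object.__setattr__(self, "__fields_set__", set())
        for name, value in data.items():
            setattr(self, name, value)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        self.__fields_set__.add(name)

class BrokenTool(PydanticLikeBase):
    def __init__(self, config):
        self.config = config  # base __init__ skipped -> __fields_set__ missing

class FixedTool(PydanticLikeBase):
    def __init__(self, config):
        super().__init__(config=config)  # let the model initialise first
```

`BrokenTool` raises the same `AttributeError` shape as the report; `FixedTool` stores the config normally.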
AttributeError: 'CustomTool' object has no attribute '__fields_set__'
https://api.github.com/repos/langchain-ai/langchain/issues/12304/comments
6
2023-10-25T21:55:37Z
2023-11-25T18:37:22Z
https://github.com/langchain-ai/langchain/issues/12304
1,962,296,260
12,304
[ "langchain-ai", "langchain" ]
### System Info I start a jupyter notebook with ```python file = 'OutdoorClothingCatalog_1000.csv' loader = CSVLoader(file_path=file) from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator( vectorstore_cls=DocArrayInMemorySearch ).from_loaders([loader]) query ="Please list all your shirts with sun protection \ in a table in markdown and summarize each one." esponse = index.query(query) ``` Unfortunately this leads to ```log --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) [/workspaces/langchaintutorial/L4-QnA.ipynb](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/workspaces/langchaintutorial/L4-QnA.ipynb) Zelle 17 line 1 ----> [1](vscode-notebook-cell://codespaces%2Bautomatic-funicular-qpwv7xqr7j249j6/workspaces/langchaintutorial/L4-QnA.ipynb#X20sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0) response = index.query(query) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/indexes/vectorstore.py:45](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/indexes/vectorstore.py:45), in VectorStoreIndexWrapper.query(self, question, llm, retriever_kwargs, **kwargs) 41 retriever_kwargs = retriever_kwargs or {} 42 chain = RetrievalQA.from_chain_type( 43 llm, retriever=self.vectorstore.as_retriever(**retriever_kwargs), **kwargs 44 ) ---> 45 return chain.run(question) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:505](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:505), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 503 if len(args) != 1: 504 raise ValueError("`run` supports only one 
positional argument.") --> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ 506 _output_key 507 ] 509 if kwargs and not args: 510 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ 511 _output_key 512 ] File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:310](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:310), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 308 except BaseException as e: 309 run_manager.on_chain_error(e) --> 310 raise e 311 run_manager.on_chain_end(outputs) 312 final_outputs: Dict[str, Any] = self.prep_outputs( 313 inputs, outputs, return_only_outputs 314 ) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:304](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:304), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 297 run_manager = callback_manager.on_chain_start( 298 dumpd(self), 299 inputs, 300 name=run_name, 301 ) 302 try: 303 outputs = ( --> 304 self._call(inputs, run_manager=run_manager) 305 if new_arg_supported 306 else self._call(inputs) 307 ) 308 except BaseException as e: 309 run_manager.on_chain_error(e) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:136](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:136), in BaseRetrievalQA._call(self, inputs, run_manager) 132 accepts_run_manager = ( 133 "run_manager" in 
inspect.signature(self._get_docs).parameters 134 ) 135 if accepts_run_manager: --> 136 docs = self._get_docs(question, run_manager=_run_manager) 137 else: 138 docs = self._get_docs(question) # type: ignore[call-arg] File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:216](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:216), in RetrievalQA._get_docs(self, question, run_manager) 209 def _get_docs( 210 self, 211 question: str, 212 *, 213 run_manager: CallbackManagerForChainRun, 214 ) -> List[Document]: 215 """Get docs.""" --> 216 return self.retriever.get_relevant_documents( 217 question, callbacks=run_manager.get_child() 218 ) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:211](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:211), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs) 209 except Exception as e: 210 run_manager.on_retriever_error(e) --> 211 raise e 212 else: 213 run_manager.on_retriever_end( 214 result, 215 **kwargs, 216 ) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:204](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:204), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs) 202 _kwargs = kwargs if self._expects_other_args else {} 203 if self._new_arg_supported: --> 204 result = self._get_relevant_documents( 205 query, run_manager=run_manager, **_kwargs 206 ) 207 else: 208 result = 
self._get_relevant_documents(query, **_kwargs) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585), in VectorStoreRetriever._get_relevant_documents(self, query, run_manager) 581 def _get_relevant_documents( 582 self, query: str, *, run_manager: CallbackManagerForRetrieverRun 583 ) -> List[Document]: 584 if self.search_type == "similarity": --> 585 docs = self.vectorstore.similarity_search(query, **self.search_kwargs) 586 elif self.search_type == "similarity_score_threshold": 587 docs_and_similarities = ( 588 self.vectorstore.similarity_search_with_relevance_scores( 589 query, **self.search_kwargs 590 ) 591 ) File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:127](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:127), in DocArrayIndex.similarity_search(self, query, k, **kwargs) 115 def similarity_search( 116 self, query: str, k: int = 4, **kwargs: Any 117 ) -> List[Document]: 118 """Return docs most similar to query. 119 120 Args: (...) 125 List of Documents most similar to the query. 
126 """ --> 127 results = self.similarity_search_with_score(query, k=k, **kwargs) 128 return [doc for doc, _ in results] File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:106](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:106), in DocArrayIndex.similarity_search_with_score(self, query, k, **kwargs) 94 """Return docs most similar to query. 95 96 Args: (...) 103 Lower score represents more similarity. 104 """ 105 query_embedding = self.embedding.embed_query(query) --> 106 query_doc = self.doc_cls(embedding=query_embedding) # type: ignore 107 docs, scores = self.doc_index.find(query_doc, search_field="embedding", limit=k) 109 result = [ 110 (Document(page_content=doc.text, metadata=doc.metadata), score) 111 for doc, score in zip(docs, scores) 112 ] File [/usr/local/python/3.10.8/lib/python3.10/site-packages/pydantic/main.py:164](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/pydantic/main.py:164), in BaseModel.__init__(__pydantic_self__, **data) 162 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks 163 __tracebackhide__ = True --> 164 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) ValidationError: 2 validation errors for DocArrayDoc text Field required [type=missing, input_value={'embedding': [0.00328089... -0.021201754016760964]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.4/v/missing metadata Field required [type=missing, input_value={'embedding': [0.00328089... -0.021201754016760964]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.4/v/missing ``` ### Who can help? 
@agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python file = 'OutdoorClothingCatalog_1000.csv' loader = CSVLoader(file_path=file) from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator( vectorstore_cls=DocArrayInMemorySearch ).from_loaders([loader]) query = "Please list all your shirts with sun protection \ in a table in markdown and summarize each one." response = index.query(query) ``` ### Expected behavior List of articles with sun protection. The script is an example from deeplearning.ai
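The validation errors (`DocArrayDoc` missing `text`/`metadata`, and the `errors.pydantic.dev/2.4` link) show pydantic 2.x is active, while this langchain release builds the DocArray document class with pydantic-v1-style optional fields. The workaround users reported at the time was pinning `pip install "pydantic<2"`. A small guard you could run before building the index — the version-parsing helper is the testable part; the error message is just a suggestion:

```python
def pydantic_v1_compatible(version_string):
    """True when the installed pydantic is a 1.x release."""
    return int(version_string.split(".")[0]) < 2

# Usage sketch (assumes pydantic is importable in your environment):
# import pydantic
# if not pydantic_v1_compatible(pydantic.VERSION):
#     raise RuntimeError(
#         'DocArrayInMemorySearch on this langchain version needs `pip install "pydantic<2"`'
#     )
```

Whether newer langchain releases support pydantic 2 natively is worth checking before pinning.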
Error querying DocArrayInMemorySearch
https://api.github.com/repos/langchain-ai/langchain/issues/12302/comments
8
2023-10-25T20:49:36Z
2024-05-05T06:11:00Z
https://github.com/langchain-ai/langchain/issues/12302
1,962,205,964
12,302
[ "langchain-ai", "langchain" ]
### System Info Langchain version == 0.0.323 python == 3.10 ``` I have a simple code to query a postgres DB: from langchain.llms import Bedrock from langchain_experimental.sql import SQLDatabaseChain db = SQLDatabase.from_uri(database_uri="postgresql://xxx:yyy:zzzzz") llm = Bedrock( credentials_profile_name="langchain", model_id="anthropic.claude-v2" ) db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) db_chain.run("how many lab reports are there?") ``` Get the following error: ``` Entering new SQLDatabaseChain chain... how many lab reports are there? SQLQuery:Here is the query and response for your question: Question: how many lab reports are there? SQLQuery: SELECT COUNT(*) FROM lab_reportsTraceback (most recent call last): File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) psycopg2.errors.SyntaxError: syntax error at or near "Here" LINE 1: Here is the query and response for your question: ^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ec2-user/environment/Langchain/langchain-sql.py", line 12, in <module> db_chain.run("how many lab reports are there?") File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__ raise e File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__ self._call(inputs, run_manager=run_manager) File 
"/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain_experimental/sql/base.py", line 198, in _call raise exc File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain_experimental/sql/base.py", line 143, in _call result = self.database.run(sql_cmd) File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/utilities/sql_database.py", line 429, in run result = self._execute(command, fetch) File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/utilities/sql_database.py", line 407, in _execute cursor = connection.execute(text(command)) File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1416, in execute return meth( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection return connection._execute_clauseelement( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement ret = self._execute_context( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context return self._exec_single_context( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context self._handle_dbapi_exception( File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File 
"/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "Here" LINE 1: Here is the query and response for your question: ^ [SQL: Here is the query and response for your question: Question: how many lab reports are there? SQLQuery: SELECT COUNT(*) FROM lab_reports] (Background on this error at: https://sqlalche.me/e/20/f405) ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Execute the code with a postgres DB ### Expected behavior Expect the query to execute and get results.
SQLDatabaseChain Error
https://api.github.com/repos/langchain-ai/langchain/issues/12295/comments
2
2023-10-25T18:56:26Z
2024-02-28T16:08:30Z
https://github.com/langchain-ai/langchain/issues/12295
1,962,036,029
12,295
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Terminal Output: > Entering new SQLDatabaseChain chain... how many tracks have pop genre? SQLQuery:SELECT COUNT(*) FROM tracks WHERE "GenreId" = 3 LIMIT 5; SQLResult: [(374,)] Answer:374 tracks have pop genre. > Finished chain. How do I access and print the SQLQuery and SQLResult in my UI? I'm only able to print the Answer here in response with st.chat_message("bot"): st.markdown(response) ### Suggestion: _No response_
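The SQL and its result aren't lost — they are just not part of `.run()`'s single return value. Build the chain with `return_intermediate_steps=True` and invoke it as a callable (`db_chain(question)` rather than `.run(question)`); the returned dict then carries both the answer and the steps. The key names below match what that chain returned on versions around this release — verify on yours. A pure-Python shaping helper for the Streamlit side:

```python
def render_sql_trace(result):
    """Split a SQLDatabaseChain result dict into UI-friendly pieces.

    Expects the dict from calling the chain directly with
    return_intermediate_steps=True (key names are assumptions to verify).
    """
    return {
        "answer": result.get("result", ""),
        "steps": [str(step) for step in result.get("intermediate_steps", [])],
    }
```

Usage sketch in Streamlit: `out = render_sql_trace(db_chain(question))`, then `st.markdown(out["answer"])` and `st.code("\n".join(out["steps"]))`.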
Not able to print verbose result in interface.
https://api.github.com/repos/langchain-ai/langchain/issues/12290/comments
4
2023-10-25T18:17:54Z
2024-02-10T16:10:12Z
https://github.com/langchain-ai/langchain/issues/12290
1,961,978,458
12,290
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

How can I use csv_agent with langchain-experimental, given that importing csv_agent from langchain.agents now produces this error:

> On 2023-10-27 this module will be deprecated from langchain, and will be available from the langchain-experimental package. This code is already available in langchain-experimental. See https://github.com/langchain-ai/langchain/discussions/11680.

### Idea or request for content:

_No response_
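For reference (not part of the original issue), the deprecation notice points at the `langchain-experimental` package; a minimal sketch of the new import path, with the import guarded so the snippet degrades gracefully when that package is not installed:

```python
# Sketch: the csv agent moved to langchain-experimental
# (pip install langchain-experimental). The import path below is the
# documented one at the time of writing; verify against your version.
try:
    from langchain_experimental.agents.agent_toolkits import create_csv_agent
    HAVE_EXPERIMENTAL = True
except ImportError:
    create_csv_agent = None
    HAVE_EXPERIMENTAL = False

def build_csv_agent(llm, csv_path):
    """Create a csv agent if langchain-experimental is available."""
    if not HAVE_EXPERIMENTAL:
        raise RuntimeError("pip install langchain-experimental first")
    return create_csv_agent(llm, csv_path, verbose=True)
```

Usage would then mirror the old `langchain.agents` call, e.g. `build_csv_agent(ChatOpenAI(temperature=0), "titanic.csv")`.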
langchain-experimental csv_agent how to?
https://api.github.com/repos/langchain-ai/langchain/issues/12287/comments
2
2023-10-25T17:41:30Z
2024-02-14T16:08:58Z
https://github.com/langchain-ai/langchain/issues/12287
1,961,919,942
12,287
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

There is an error in the code. [link](https://python.langchain.com/docs/use_cases/summarization#splitting-and-summarizing-in-a-single-chain)

```
from langchain.chains import AnalyzeDocumentChain

summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)
summarize_document_chain.run(docs[0])
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[17], line 4
      1 from langchain.chains import AnalyzeDocumentChain
      3 summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)
----> 4 summarize_document_chain.run(docs[0])

File ~/langchain/libs/langchain/langchain/chains/base.py:496, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    459 """Convenience method for executing chain.
    460
    461 The main difference between this method and `Chain.__call__` is that this
   (...)
    493 # -> "The temperature in Boise is..."
    494 """
    495 # Run at start to make sure this is possible/defined
--> 496 _output_key = self._run_output_key
    498 if args and not kwargs:
    499     if len(args) != 1:

File ~/langchain/libs/langchain/langchain/chains/base.py:445, in Chain._run_output_key(self)
    442 @property
    443 def _run_output_key(self) -> str:
    444     if len(self.output_keys) != 1:
--> 445         raise ValueError(
    446             f"`run` not supported when there is not exactly "
    447             f"one output key. Got {self.output_keys}."
    448         )
    449     return self.output_keys[0]

ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```

### Idea or request for content:

There is nothing missing, but the code needs to be accurate so that it returns the expected output.
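A possible workaround (an editor's suggestion, not from the docs page): `Chain.run` insists on exactly one output key, so when the combine chain also returns `intermediate_steps`, call the chain itself (which returns a dict of all outputs) and pick the key you want. The chain call is sketched in comments since it needs a live LLM; the key-selection logic below is shown standalone on a hypothetical result dict.

```python
# Sketch of the workaround: call the chain directly instead of .run().
# "input_document" is AnalyzeDocumentChain's default input key at the time
# of writing; verify against your version.
#
# result = summarize_document_chain(
#     {"input_document": docs[0].page_content},
#     return_only_outputs=True,
# )

def pick_summary(result):
    """Prefer the final text; fall back to joined intermediate steps."""
    if "output_text" in result:
        return result["output_text"]
    return "\n".join(map(str, result.get("intermediate_steps", [])))

# Hypothetical result dict with the two output keys from the error message:
sample = {"output_text": "final summary", "intermediate_steps": ["s1", "s2"]}
print(pick_summary(sample))  # -> final summary
```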
DOC: Error in code for splitting and summarizing in a single chain
https://api.github.com/repos/langchain-ai/langchain/issues/12280/comments
3
2023-10-25T15:56:28Z
2024-02-10T16:10:22Z
https://github.com/langchain-ai/langchain/issues/12280
1,961,752,705
12,280
[ "langchain-ai", "langchain" ]
### System Info

Langchain JS 0.0.172 on Debian Linux 12 using Nodejs 18.15.0

With the following code I don't get the tokenUsage back from the chain call. I checked the docs, asked the AI and did a google search. Am I doing anything wrong?

```js
const chain = RetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    verbose: true,
  }
)

const res = await chain.call({
  context: "You assist a Systems Engineer. Be as concise as possible.",
  query: prompt,
})

console.log('usage:', res.tokenUsage);
```

### Who can help?

@hwchase17

### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```js
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { JSONLoader } from "langchain/document_loaders/fs/json";
import { CSVLoader } from "langchain/document_loaders/fs/csv";
import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain, loadQARefineChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as dotenv from 'dotenv';

dotenv.config()

const loader = new DirectoryLoader("./docs", {
  ".json": (path) => new JSONLoader(path),
  ".csv": (path) => new CSVLoader(path),
  ".txt": (path) => new TextLoader(path)
})

const normalizeDocuments = (docs) => {
  return docs.map((doc) => {
    if (typeof doc.pageContent === "string") {
      return doc.pageContent;
    } else if (Array.isArray(doc.pageContent)) {
      return doc.pageContent.join("\n");
    }
  });
}

const VECTOR_STORE_PATH = "Documents.index";

export const run = async (params) => {
  console.log("Loading docs...")
  const docs = await loader.load();
  console.log('Processing...')
  const model = new OpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    modelName: "gpt-3.5-turbo",
    maxTokens: 2500,
    temperature: 0.1,
    maxConcurrency: 1,
  });
  let vectorStore;
  try {
    console.log("Checking for local vectorStore");
    const embeddings = new OpenAIEmbeddings(process.env.OPENAI_API_KEY);
    vectorStore = await HNSWLib.load(VECTOR_STORE_PATH, embeddings);
  } catch (e) {
    console.log('Creating new vector store...')
    const textSplitter = new RecursiveCharacterTextSplitter({
      chunkSize: 3000,
    });
    const normalizedDocs = normalizeDocuments(docs);
    const splitDocs = await textSplitter.createDocuments(normalizedDocs);
    console.log('splitdos : ', splitDocs)
    // 8. Generate the vector store from the documents
    vectorStore = await HNSWLib.fromDocuments(
      splitDocs,
      new OpenAIEmbeddings()
    );
    await vectorStore.save(VECTOR_STORE_PATH);
    console.log("Vector store created.")
  }
  console.log("Creating retrieval chain...")
  const chain = RetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(),
    {
      verbose: true,
    }
  )
  //const promptStart = `rewrite all requirements using the following guidelines:`;
  const promptStart = ``;
  //const reWrite = `Rewrite the Requirement Text so that is does comply.`;
  const reWrite = ``;
  const prompts = [
    `R3: Requirements should be consistent with each other and should not contain contradictions or conflicts. Report the Id\'s for those that do not comply. Rewrite the Requirement Text so that is does comply with R3 `,
  ];
  prompts.forEach(async prompt => {
    //console.log("Querying chain...")
    const res = await chain.call({
      context: "You assist a Systems Engineer. Be as concise as possible.",
      query: prompt,
    })
    console.log('usage:', res.tokenUsage);
    console.log(prompt);
    console.log('Reply: ', res, '\n')
  });
}

run(process.argv.slice(2))
```

### Expected behavior

I expect the res object to return the token usage
Langchain js - can't get token usage
https://api.github.com/repos/langchain-ai/langchain/issues/12279/comments
4
2023-10-25T15:54:36Z
2024-02-10T16:10:27Z
https://github.com/langchain-ai/langchain/issues/12279
1,961,749,463
12,279
[ "langchain-ai", "langchain" ]
### System Info System: LangChain 0.0.321 Python 3.10 I'm trying to build a MultiRetrievalQAChain using only Llama2 chat models served by vLLM (no OpenAI). For that end I have created a ConversationChain that acts as the default chain for the MultiRetrievalQAChain. I have customized the prompts for both chains to meet LLama 2 Chat format requirements. It looks like the routing chain works properly but I'm getting the following exception: ``` [chain/error] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] [7.00s] Chain run errored with error: "OutputParserException(\"Parsing text ... raised following error:\\nGot invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)\")" ``` Here the routing and generation trace: ``` [chain/start] [1:chain:MultiRetrievalQAChain] Entering Chain run with input: { "input": "What is prompt injection?" } [chain/start] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] Entering Chain run with input: { "input": "What is prompt injection?" } [chain/start] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain] Entering Chain run with input: { "input": "What is prompt injection?" } [llm/start] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain > 4:llm:VLLMOpenAI] Entering LLM run with input: { "prompts": [ "Given a query to a question answering system select the system best suited for the input. You will be given the names of the available systems and a description of what questions the system is best suited for. 
You may also revise the original input if you think that revising it will ultimately lead to a better response.\n\n<< FORMATTING >>\nReturn a markdown code snippet with a JSON object formatted to look like:\n```json\n{\n \"destination\": string \\ name of the question answering system to use or \"DEFAULT\"\n \"next_inputs\": string \\ a potentially modified version of the original input\n}\n```\n\nREMEMBER: \"destination\" MUST be one of the candidate prompt names specified below OR it can be \"DEFAULT\" if the input is not well suited for any of the candidate prompts.\nREMEMBER: \"next_inputs\" can just be the original input if you don't think any modifications are needed.\n\n<< CANDIDATE PROMPTS >>\nNIST AI Risk Management Framework: Guidelines provided by the NIST for organizations and people to manage risks associated with the use of AI. \n The NIST risk management framework consists of four cyclical tasks: Govern, Map, Measure and Manage.\nOWASP Top 10 for LLM Applications: Provides practical security guidance to navigate the complex and evolving terrain of LLM security focusing on the top 10 vulnerabilities of LLM applications. These are 1) Prompt Injection, 2) Insecure Output Handling, 3) Training Data Poisoning, 4) Model Denial of Service, 5) Supply Chain Vulnerabilities, 6) Sensitive Information Disclosure, 7) Insecure Plugin Design\n 8) Excessive Agency, 9) Overreliance, and 10) Model Theft\n \nThreat Modeling LLM Applications: A high-level example from Gavin Klondike on how to build a threat model for LLM applications utilizing the STRIDE modeling framework based on trust boundaries.\n\n<< INPUT >>\nWhat is prompt injection?\n\n<< OUTPUT >>" ] } /home/vmuser/miniconda3/envs/llm-env2/lib/python3.10/site-packages/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. 
warnings.warn( [llm/end] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain > 4:llm:VLLMOpenAI] [7.00s] Exiting LLM run with output: { "generations": [ [ { "text": "Prompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture. \n", "generation_info": { "finish_reason": "length", "logprobs": null } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 488, "completion_tokens": 512, "total_tokens": 1000 }, "model_name": "meta-llama/Llama-2-7b-chat-hf" }, "run": null } [chain/end] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain] [7.00s] Exiting Chain run with output: { "text": "Prompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture.\n" } [chain/error] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] [7.00s] Chain run errored with error: "OutputParserException(\"Parsing text\\nPrompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture.\\\n\\n raised following error:\\nGot invalid JSON object. 
Error: Expecting value: line 1 column 1 (char 0)\")" --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) File ~/miniconda3/envs/llm-env2/lib/python3.10/site-packages/langchain/output_parsers/json.py:163, in parse_and_check_json_markdown(text, expected_keys)  162 try: --> 163 json_obj = parse_json_markdown(text)  164 except json.JSONDecodeError as e: raised following error: Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0) ``` The issue seems to be related to a warning that I'm also getting: `llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.` Unfortunately it is unclear how one is supposed to implement an output parser for the LLM (ConversationChain) chain that meets expectations from the MultiRetrievalQAChain. The documentation for these chains relies a lot on OpenAI models to do the formatting but there's no much guidance on how to do it with other LLMs. Any guidance on how to move forward would be appreciated. Here my code: ``` import torch import langchain langchain.debug = True from langchain.llms import VLLMOpenAI from langchain.document_loaders import PyPDFLoader from langchain.prompts import PromptTemplate # Import for retrieval-augmented generation RAG from langchain import hub from langchain.chains import ConversationChain, MultiRetrievalQAChain from langchain.vectorstores import Chroma from langchain.text_splitter import SentenceTransformersTokenTextSplitter from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings #%% # URL for the vLLM service INFERENCE_SRV_URL = "http://localhost:8000/v1" def setup_chat_llm(vllm_url, max_tokens=512, temperature=0): """ Intializes the vLLM service object. 
:param vllm_url: vLLM service URL :param max_tokens: Max number of tokens to get generated by the LLM :param temperature: Temperature of the generation process :return: The vLLM service object """ chat = VLLMOpenAI( model_name="meta-llama/Llama-2-7b-chat-hf", openai_api_key="EMPTY", openai_api_base=vllm_url, temperature=temperature, max_tokens=max_tokens, ) return chat #%% # Initialize LLM service llm = setup_chat_llm(vllm_url=INFERENCE_SRV_URL) #%% %%time # Set up the embedding encoder (Sentence Transformers) and vector store model_name = "all-mpnet-base-v2" model_kwargs = {'device': 'cuda' if torch.cuda.is_available() else 'cpu'} encode_kwargs = {'normalize_embeddings': False} embeddings = SentenceTransformerEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) # Set up the document splitter text_splitter = SentenceTransformersTokenTextSplitter(chunk_size=500, chunk_overlap=0) # Load PDF documents loader = PyPDFLoader(file_path="../data/AI_RMF_Playbook.pdf") rmf_splits = loader.load_and_split() rmf_retriever = Chroma.from_documents(documents=rmf_splits, embedding=embeddings) loader = PyPDFLoader(file_path="../data/OWASP-Top-10-for-LLM-Applications-v101.pdf") owasp_splits = loader.load_and_split() owasp_retriever = Chroma.from_documents(documents=owasp_splits, embedding=embeddings) loader = PyPDFLoader(file_path="../data/Threat Modeling LLM Applications - AI Village.pdf") ai_village_splits = loader.load_and_split() ai_village_retriever = Chroma.from_documents(documents=ai_village_splits, embedding=embeddings) #%% retrievers_info = [ { "name": "NIST AI Risk Management Framework", "description": """Guidelines provided by the NIST for organizations and people to manage risks associated with the use of AI. 
The NIST risk management framework consists of four cyclical tasks: Govern, Map, Measure and Manage.""", "retriever": rmf_retriever.as_retriever() }, { "name": "OWASP Top 10 for LLM Applications", "description": """Provides practical security guidance to navigate the complex and evolving terrain of LLM security focusing on the top 10 vulnerabilities of LLM applications. These are 1) Prompt Injection, 2) Insecure Output Handling, 3) Training Data Poisoning, 4) Model Denial of Service, 5) Supply Chain Vulnerabilities, 6) Sensitive Information Disclosure, 7) Insecure Plugin Design 8) Excessive Agency, 9) Overreliance, and 10) Model Theft """, "retriever": owasp_retriever.as_retriever() }, { "name": "Threat Modeling LLM Applications", "description": "A high-level example from Gavin Klondike on how to build a threat model for LLM applications utilizing the STRIDE modeling framework based on trust boundaries.", "retriever": ai_village_retriever.as_retriever() } ] #%% prompt_template = ( """ [INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> Question: {query} Context: {history} Answer: [/INST] """) prompt = PromptTemplate(template=prompt_template, input_variables=['history', 'query']) routing_prompt_template = ( """ [INST]<<SYS>> Given a query to a question answering system select the system best suited for the input. You will be given the names of the available systems and a description of what questions the system is best suited for. You could also revise the original input if you think that revising it will ultimately lead to a better response. 
Return a markdown code snippet with a JSON object formatted as follows: \```json \{ "destination": "destination_key_value" "next_inputs": "revised_or_original_input" \} \``` WHERE: - The "destination" key value MUST be a text string matching one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts. - The "next_inputs" key value can just be the original input string if you don't think any modifications are needed. <</SYS>> << CANDIDATE PROMPTS >> {destinations} << INPUT >> {input} << OUTPUT >> [/INST] """) routing_prompt = PromptTemplate(template=routing_prompt_template, input_variables=['destinations', 'input']) #%% default_chain = ConversationChain( llm=llm, # Your own LLM prompt=prompt, # Your own prompt input_key="query", output_key="result", verbose=True, ) multi_retriever_chain = MultiRetrievalQAChain.from_retrievers( llm=llm, retriever_infos=retrievers_info, default_chain=default_chain, default_prompt=routing_prompt, default_retriever=rmf_retriever.as_retriever(), verbose=True ) #%% question = "What is prompt injection?" result = multi_retriever_chain.run(question) #%% result #%% #%% from langchain.chains.router.multi_retrieval_prompt import ( MULTI_RETRIEVAL_ROUTER_TEMPLATE, ) MULTI_RETRIEVAL_ROUTER_TEMPLATE #%% print(MULTI_RETRIEVAL_ROUTER_TEMPLATE) #%% ``` ### Who can help? 
_No response_

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

I included the entire script I'm using

### Expected behavior

Proper query/question routing to the retriever better suited to provide the content required by the LLM to answer a question.
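For context on the failure (an editor's sketch, mirroring rather than reusing LangChain's parser): the router expects the LLM to emit a ```json fenced object with `destination` and `next_inputs` keys, which is why the plain-prose answer Llama 2 returned raises "Got invalid JSON object". A standalone imitation of that parsing step makes the expectation concrete:

```python
import json
import re

def parse_router_output(text):
    """Mimic the router's expectation: a fenced JSON object in the reply."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if not match:
        raise ValueError("Got invalid JSON object")
    return json.loads(match.group(1))

# A reply in the shape the router prompt asks for:
good = '```json\n{"destination": "DEFAULT", "next_inputs": "What is prompt injection?"}\n```'
print(parse_router_output(good)["destination"])  # -> DEFAULT
```

This suggests the fix lies in prompting: the Llama 2 routing template must force the model to answer with only the fenced JSON object (and, per the deprecation warning, an output parser such as `RouterOutputParser` should be passed to the `LLMChain` directly rather than relying on `predict_and_parse`).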
LLM (chain) output parsing in MultiRetrievalQAChain: OutputParserException Got invalid JSON object
https://api.github.com/repos/langchain-ai/langchain/issues/12276/comments
2
2023-10-25T15:07:03Z
2024-02-06T16:14:01Z
https://github.com/langchain-ai/langchain/issues/12276
1,961,655,309
12,276
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Unlike other PDF loaders, I've found that only the PDFMinerLoader doesn't provide pages in metadata when loading. Digging deeper, the PDFMinerParser only gives the page metadata when extract_images=True. I wonder if this was intentional. ### Suggestion: #12277
Issue: `PDFMinerLoader` not gives `page` metadata when loading with `extract_images=False` - is it intended?
https://api.github.com/repos/langchain-ai/langchain/issues/12273/comments
2
2023-10-25T14:48:30Z
2023-11-02T05:03:37Z
https://github.com/langchain-ai/langchain/issues/12273
1,961,611,644
12,273
[ "langchain-ai", "langchain" ]
### System Info

langchain version: 0.0320
python version: 3.10

### Who can help?

_No response_

### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

User-based memorization with ConversationBufferMemory and CombinedMemory. Attaching my code below; please let me know where I am going wrong:

```python
session_id = "user2"

vect_memory = VectorStoreRetrieverMemory(retriever=retriever)

_DEFAULT_TEMPLATE = """
You are having a conversation with a human. Please interact like a human being and give an accurate answer.

Relevant pieces of the previous conversation:
{history}

(You do not need to use these pieces of information if not relevant)

Current conversation:
{session_id}
User: {input}
Chatbot:
"""

PROMPT = PromptTemplate(
    input_variables=["history", "session_id", "input"],
    template=_DEFAULT_TEMPLATE
)
print(PROMPT)

message_history = RedisChatMessageHistory(url=redis_url, session_id=session_id)

conv_memory = ConversationBufferMemory(
    memory_key="session_id",
    chat_memory=message_history,
    input_key="input",
    prompt=PROMPT,
    return_messages=True
)

memory = CombinedMemory(memories=[conv_memory, vect_memory])

llm = OpenAI(temperature=0, model_name='gpt-3.5-turbo')  # Can be any valid LLM

conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    memory=memory,
    verbose=True,
)

conversation_with_summary(input="hELLO")
```

It is not working on the basis of session.

=================I am getting this error=======================

```
Traceback (most recent call last):
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\chatbot.py", line 306, in <module>
    conversation_with_summary = ConversationChain(
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\env_spen\lib\site-packages\langchain\load\serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\env_spen\lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationChain
__root__
  Got unexpected prompt input variables. The prompt expects ['history', 'input', 'session_id'], but got ['user2', 'history'] as inputs from memory, and input as the normal input key. (type=value_error)
```

### Expected behavior

User-based memorization with ConversationBufferMemory and CombinedMemory, working per session (same code and error as in the Reproduction section above).
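For what it's worth, the validation that fails here simply compares sets of variable names; the rule below is an editor's paraphrase of the chain's check, not LangChain code. It also shows one consistent wiring (memory keys chosen to match the prompt, which is my suggested fix, not a verified one): name the buffer key `chat_history` and reference `{chat_history}` in the prompt instead of `{session_id}`.

```python
# ConversationChain requires:
#   prompt input variables == (all memory keys) | {input_key}
def check_prompt_memory(prompt_vars, memory_keys, input_key="input"):
    """Replicate the consistency rule the chain enforces at construction."""
    return set(prompt_vars) == set(memory_keys) | {input_key}

# The mismatch reported in the traceback: the memories actually supplied
# keys ['user2', 'history'], which cannot satisfy the prompt's variables.
print(check_prompt_memory(["history", "session_id", "input"],
                          ["user2", "history"]))        # -> False

# A consistent setup (hypothetical names): buffer memory_key="chat_history",
# vector memory_key="history", prompt declaring exactly those plus "input".
print(check_prompt_memory(["history", "chat_history", "input"],
                          ["chat_history", "history"]))  # -> True
```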
User based memorization with ConversationBufferMemory and CombinedMemory
https://api.github.com/repos/langchain-ai/langchain/issues/12272/comments
2
2023-10-25T14:45:46Z
2024-02-06T16:14:06Z
https://github.com/langchain-ai/langchain/issues/12272
1,961,606,028
12,272
[ "langchain-ai", "langchain" ]
### System Info

Langchain == 0.0.320

Step-by-step procedure used in the actual codebase, followed by the error messages:

```python
llm = ChatOpenAI(openai_api_key=openai_api_key, model_name=engine)
```

```python
db = Chroma(
    collection_name=f'collection_1233',
    embedding_function=OpenAIEmbeddings(openai_api_key=openai_api_key),
    persist_directory=os.path.abspath('storage/vectors/')
)
```

```python
chain = ConversationalRetrievalChain.from_llm(
    retriever=db.as_retriever(),
    llm=self.llm,
    memory=SQLiteEntityStore(session_id=f'session_1233', db_file='storage/entities.db'),
    condense_question_prompt=PromptTemplate.from_template(template=template),
    verbose=True
)
```

```python
response = chain({"question": prompt})
```

The above step throws the error below:

```
CRITICAL django Missing some input keys: {'chat_history'}
```

After providing `chat_history`, the error is as follows:

```
CRITICAL django One input key expected got ['question', 'chat_history']
```

So, what to do now? I need the following entities to be used strictly:
- ChatOpenAI
- ConversationalRetrievalChain + Chroma
- ConversationEntityMemory + SQLiteEntityStore

Please help asap...

### Who can help?

_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

The same step-by-step procedure and error messages as shown in the System Info section above.

### Expected behavior

Conversations should be working with all chat history saved in mysqli db as well as returned to the user in the response.
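One likely cause (an editor's reading, not a verified fix): `SQLiteEntityStore` is an entity *store*, not a memory, so passing it directly as `memory=` cannot supply the `chat_history` variable the chain asks for. It would need to be wrapped in `ConversationEntityMemory`. Whether `ConversationalRetrievalChain` accepts entity memory out of the box depends on the version, and the key names below are assumptions chosen to line up with the chain's expected `chat_history` variable; the import is guarded so the sketch runs without a langchain install.

```python
# Hedged wiring sketch: wrap the store in ConversationEntityMemory instead
# of passing it directly. chat_history_key is a real field of
# ConversationEntityMemory (default "history"); renaming it is an assumption
# about what ConversationalRetrievalChain expects.
try:
    from langchain.memory import ConversationEntityMemory, SQLiteEntityStore

    def build_memory(llm, session_id):
        store = SQLiteEntityStore(session_id=session_id,
                                  db_file="storage/entities.db")
        return ConversationEntityMemory(llm=llm, entity_store=store,
                                        chat_history_key="chat_history")
except ImportError:
    build_memory = None

print("langchain available:", build_memory is not None)
```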
ConversationalRetrievalChain doesn't work with ConversationEntityMemory + SQLiteEntityStore
https://api.github.com/repos/langchain-ai/langchain/issues/12266/comments
3
2023-10-25T12:34:48Z
2023-10-25T14:50:34Z
https://github.com/langchain-ai/langchain/issues/12266
1,961,319,260
12,266
[ "langchain-ai", "langchain" ]
### Feature request Langchain is currently using Pydantic V1, and I believe it would be good to upgrade this to V2 ### Motivation V2 is far more performant than V1 because of the python-rust binding ### Your contribution I can create a PR here.
Upgrade to Pydantic V2
https://api.github.com/repos/langchain-ai/langchain/issues/12265/comments
11
2023-10-25T12:28:55Z
2024-02-15T16:08:15Z
https://github.com/langchain-ai/langchain/issues/12265
1,961,308,651
12,265
[ "langchain-ai", "langchain" ]
### System Info

I've been using the conversation chain to retrieve answers from gpt3.5, vertex ai chat-bison. For the user's memory I've been passing the session memory appended to the conversation chain. But once the token limit is exceeded I'm getting the following error from the llm:

```
[chain/error] [1:chain:LLMChain > 2:llm:ChatOpenAI] [1.69s] Llm run errored with error: "InvalidRequestError(message=\"This model's maximum context length is 4097 tokens. However, your messages resulted in 9961 tokens. Please reduce the length of the messages.\", param='messages', code='context_length_exceeded', http_status=400, request_id=None)"
[chain/error] [1:chain:LLMChain] [1.72s] Chain run errored with error: "InvalidRequestError(message=\"This model's maximum context length is 4097 tokens. However, your messages resulted in 9961 tokens. Please reduce the length of the messages.\", param='messages', code='context_length_exceeded', http_status=400, request_id=None)"
```

### Who can help?

_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Once you add history appended to the chain, with more than 4097 tokens this will appear.

### Expected behavior

The chat should work normally without any internal exception. Is there any langchain support for this issue?
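LangChain does have built-in support for this: `ConversationTokenBufferMemory` and `ConversationSummaryBufferMemory` both take a `max_token_limit` and prune the oldest messages (or summarize them) before the prompt is assembled. As a hedged illustration of the pruning idea only, here is the trimming logic in plain Python; token counting is approximated by word count, which a real setup would replace with a tokenizer such as tiktoken.

```python
def trim_history(messages, max_tokens, count=lambda m: len(m.split())):
    """Drop the oldest messages until the running total fits the budget."""
    total = sum(count(m) for m in messages)
    trimmed = list(messages)
    while trimmed and total > max_tokens:
        total -= count(trimmed.pop(0))  # evict oldest first
    return trimmed

history = ["a b c", "d e", "f g h i"]   # word counts: 3, 2, 4
print(trim_history(history, 6))         # -> ['d e', 'f g h i']
```

In LangChain itself this would look like `ConversationBufferMemory` being swapped for e.g. `ConversationTokenBufferMemory(llm=llm, max_token_limit=2000)` (parameter values here are illustrative).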
Token limitation due to model's maximum context length
https://api.github.com/repos/langchain-ai/langchain/issues/12264/comments
2
2023-10-25T11:34:43Z
2024-02-06T16:14:11Z
https://github.com/langchain-ai/langchain/issues/12264
1,961,177,348
12,264
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

`ConversationalRetrievalChain` returns sources to questions without relevant documents. The returned sources are not related to the question or the answer.

Example with relevant document:

```
chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds ...
SOURCES: /content/state_of_the_union.txt
"""
```

Example without relevant document:

```
query = "How are you doing?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I'm doing well, thank you.
SOURCES: /content/state_of_the_union.txt
"""
```

Another example without relevant document:

```
query = "Who is Elon Musk?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I don't know.
SOURCES: /content/state_of_the_union.txt
"""
```

### Suggestion:

Sources should not be returned for questions without relevant documents.
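A possible mitigation (an editor's suggestion, not from the issue): use a score-thresholded retriever so that off-topic questions retrieve no documents, and hence no sources, e.g. `vectorstore.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5})` (availability and score semantics depend on the vector store). The post-hoc equivalent of that thresholding, in plain Python:

```python
def filter_by_score(docs_with_scores, threshold=0.5):
    """Keep only documents whose relevance score meets the threshold."""
    return [doc for doc, score in docs_with_scores if score >= threshold]

# Hypothetical (document, score) pairs, as returned by e.g.
# vectorstore.similarity_search_with_relevance_scores(query):
hits = [("state_of_the_union.txt", 0.82), ("unrelated.txt", 0.12)]
print(filter_by_score(hits))  # -> ['state_of_the_union.txt']
```

When the filtered list is empty, the answer can be produced without appending a `SOURCES:` line at all.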
ConversationalRetrievalChain returns sources to questions without context
https://api.github.com/repos/langchain-ai/langchain/issues/12263/comments
1
2023-10-25T11:25:40Z
2024-07-04T12:48:23Z
https://github.com/langchain-ai/langchain/issues/12263
1,961,161,745
12,263
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am getting below error while trying to use Azure Cosmos DB as vector store. **_No module named 'langchain.vectorstores.azure_cosmos_db_vector_search'_** ### Suggestion: _No response_
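The module path for this integration has differed across releases, so a version-tolerant import probe can pin down which spelling your installed version actually exposes. A sketch — the candidate paths are assumptions to check against your installed version, and upgrading `langchain` is likely the real fix:

```python
import importlib.util

def first_importable(candidates):
    """Return the first module path that resolves in this environment, else None."""
    for name in candidates:
        try:
            if importlib.util.find_spec(name) is not None:
                return name
        except ModuleNotFoundError:
            continue  # a parent package is missing entirely
    return None

found = first_importable([
    "langchain.vectorstores.azure_cosmos_db_vector_search",
    "langchain.vectorstores.azure_cosmos_db",
])
```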
Issue: No module named 'langchain.vectorstores.azure_cosmos_db_vector_search'
https://api.github.com/repos/langchain-ai/langchain/issues/12262/comments
2
2023-10-25T10:51:32Z
2024-02-08T16:14:20Z
https://github.com/langchain-ai/langchain/issues/12262
1,961,103,330
12,262
[ "langchain-ai", "langchain" ]
### System Info Python 3.11.4 LangChain 0.0.321 Platform info (WSL2): DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS" ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is my code (based on https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents and modified to use AzureOpenAI and OpenSearch): ```python import os from langchain.embeddings.openai import OpenAIEmbeddings from langchain.chat_models.azure_openai import AzureChatOpenAI from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch from langchain.memory import ConversationTokenBufferMemory from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.chains import ConversationalRetrievalChain # Load environment variables host = os.environ['HOST'] auth = os.environ['AUTH_PASS'] index_uk = os.environ['INDEX_NAME_UK'] opensearch_url = os.environ['OPENSEARCH_URL'] embedding_model = os.environ['EMBEDDING_MODEL'] model_name = os.environ['MODEL_NAME'] openai_api_base = os.environ['OPENAI_API_BASE'] openai_api_key = os.environ['OPENAI_API_KEY'] openai_api_type = os.environ['OPENAI_API_TYPE'] openai_api_version = os.environ['OPENAI_API_VERSION'] # Define Azure OpenAI component llm = AzureChatOpenAI( openai_api_key = openai_api_key, openai_api_base = openai_api_base, openai_api_type = openai_api_type, openai_api_version = openai_api_version, deployment_name = model_name, temperature=0 ) # Define Memory component for
chat history memory = ConversationTokenBufferMemory(llm=llm,memory_key="chat_history",return_messages=True, max_token_limit=1000) # Build a Retriever embeddings = OpenAIEmbeddings(deployment=embedding_model, chunk_size=1) docsearch = OpenSearchVectorSearch(index_name=index_uk, embedding_function=embeddings,opensearch_url=opensearch_url, http_auth=('admin', auth)) doc_retriever = docsearch.as_retriever() # Build a retrieval tool from langchain.agents.agent_toolkits import create_retriever_tool tool = create_retriever_tool( doc_retriever, "search_hr_documents", "Searches and returns documents regarding HR questions." ) tools = [tool] # Build an Agent Constructor from langchain.agents.agent_toolkits import create_conversational_retrieval_agent agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True) result = agent_executor({"input": "hi, im bob"}) ``` When I execute it, it generates the log error below: ``` InvalidRequestError Traceback (most recent call last) [c:\k8s-developer\git\lambda-hr-docQA\conv_retrieval_tool.ipynb](file:///C:/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb) Cell 3 line 6 [58](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=57) from langchain.agents.agent_toolkits import create_conversational_retrieval_agent [59](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=58) agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True) ---> [61](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=60) result = agent_executor({"input": "hi, im bob"}) File [c:\Users\fezzef\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:310](file:///C:/Users/fezzef/AppData/Local/Programs/Python/Python311/Lib/site-packages/langchain/chains/base.py:310), in Chain.__call__(self, inputs, return_only_outputs, 
callbacks, tags, metadata, run_name, include_run_info) 308 except BaseException as e: 309 run_manager.on_chain_error(e) --> 310 raise e 311 run_manager.on_chain_end(outputs) 312 final_outputs: Dict[str, Any] = self.prep_outputs( 313 inputs, outputs, return_only_outputs 314 ) File [c:\Users\fezzef\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_functions_agent\base.py:104](file:///C:/Users/fezzef/AppData/Local/Programs/Python/Python311/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:104), in OpenAIFunctionsAgent.plan(self, intermediate_steps, callbacks, with_functions, **kwargs) 102 messages = prompt.to_messages() 103 if with_functions: --> 104 predicted_message = self.llm.predict_messages( 105 messages, 106 functions=self.functions, 107 callbacks=callbacks, 108 ) 109 else: 110 predicted_message = self.llm.predict_messages( 111 messages, 112 callbacks=callbacks, 113 ) InvalidRequestError: Unrecognized request argument supplied: functions ``` I suspect the `OpenAIFunctionsAgent` is not compatible with AzureOpenAI. I'm not sure. Please check and let me know. ### Expected behavior It should be getting a response but LangChain throws an error
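Azure OpenAI only started accepting the `functions` argument in later API versions (reportedly `2023-07-01-preview` onward — treat that cutoff as an assumption and confirm it against the Azure docs), so an older `OPENAI_API_VERSION` makes the service reject the request that `OpenAIFunctionsAgent` sends. A small guard you could run before choosing an agent type:

```python
def supports_function_calling(api_version: str) -> bool:
    """True if the Azure api-version is new enough for the `functions` argument.

    Assumes the cutoff is 2023-07-01-preview; ISO dates compare correctly
    as strings, so a plain lexicographic comparison is enough.
    """
    date_part = api_version.split("-preview")[0]
    return date_part >= "2023-07-01"

old_ok = supports_function_calling("2023-03-15-preview")
new_ok = supports_function_calling("2023-07-01-preview")
```

If the check fails, either bump `openai_api_version` on your deployment or fall back to a non-function-calling agent such as `AgentType.ZERO_SHOT_REACT_DESCRIPTION`.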
InvalidRequestError: Unrecognized request argument supplied: functions
https://api.github.com/repos/langchain-ai/langchain/issues/12260/comments
2
2023-10-25T09:45:12Z
2023-10-25T14:30:45Z
https://github.com/langchain-ai/langchain/issues/12260
1,960,983,120
12,260
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am currently using a chain with the following components: FewshotPrompt ( with SemanticSimilarityExampleSelector) -> LLM -> Outputparser By default, this will give me the parsed text output of my LLM. For my purposes, I would need the output of this chain to include all intermediate steps (preferably in a dict). For this chain that would be: 1. The samples selected by the Example Selector 2. The generated Fewshot Prompt 3. The raw LLM output 4. The parsed LLM output From the documentation, it is not apparent to me how this could be achieved with the current state of the library. (The documentation also does not outline details about the usage of Callbacks in the LCEL language) ### Suggestion: If this is currently implemented then add a simple example to the documentation. If this is not the case then some type of callback that allows collecting this data from intermediate steps should be added.
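One way to get this today, without waiting for library support, is to drive the steps yourself and accumulate each intermediate result in a dict. The sketch below uses toy stand-ins (the lambdas are assumptions, not LangChain components) to show the shape; in LCEL a similar effect can often be had with `RunnablePassthrough.assign(...)` carrying intermediate keys through the chain:

```python
def run_with_trace(question, select_examples, build_prompt, call_llm, parse):
    """Run each stage explicitly and keep every intermediate result."""
    trace = {"question": question}
    trace["examples"] = select_examples(question)
    trace["prompt"] = build_prompt(question, trace["examples"])
    trace["raw_output"] = call_llm(trace["prompt"])
    trace["parsed"] = parse(trace["raw_output"])
    return trace

# Toy stand-ins for the real components (assumptions, not LangChain APIs):
trace = run_with_trace(
    "2+2?",
    select_examples=lambda q: ["1+1 -> 2"],
    build_prompt=lambda q, ex: f"{ex[0]}\n{q}",
    call_llm=lambda p: "answer: 4",
    parse=lambda raw: raw.split(": ")[1],
)
```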
Returning detailed results of intermediate components
https://api.github.com/repos/langchain-ai/langchain/issues/12257/comments
4
2023-10-25T09:11:34Z
2024-05-13T16:08:37Z
https://github.com/langchain-ai/langchain/issues/12257
1,960,921,158
12,257
[ "langchain-ai", "langchain" ]
### System Info How can I implement user-based memorization? Please provide a link for reference. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction How can I implement user-based memorization? Please provide a link for reference. ### Expected behavior How can I implement user-based memorization? Please provide a link for reference.
User based memorization
https://api.github.com/repos/langchain-ai/langchain/issues/12254/comments
4
2023-10-25T07:15:18Z
2024-02-10T16:10:37Z
https://github.com/langchain-ai/langchain/issues/12254
1,960,713,093
12,254
[ "langchain-ai", "langchain" ]
### System Info 1 I have been working with LangChain for some time, and after updating my LangChain module to version 0.0.295 or higher, my Vertex AI models throw an error in my application. The error is File "E:\Office\2023\AI-Platform\kaya-aia-general-bot\venv\Lib\site-packages\langchain\llms\vertexai.py", line 301, in _generate generations.append([_response_to_generation(r) for r in res.candidates]) ^^^^^^^^^^^^^^ AttributeError: 'TextGenerationResponse' object has no attribute 'candidates' I was expecting my previous implementation to work seamlessly. This is how I initialized the chain with the vertex model. case 'public_general_palm': llm = ChatVertexAI(model_name="chat-bison", temperature=0) chain = self.__initialize_basic_chain(llm, self.chain_memory) return chain(request.msg, return_only_outputs=True) This is the __initialize_basic_chain method. def __initialize_basic_chain(self, llm, memory): return ConversationChain(llm=llm, memory=memory) However, I have been unable to find a solution for this error. Kindly assist in resolving this issue to ensure the smooth operation of the application. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Update langchain module 0.315 and higher. 1. Initialize llm with the ChatVertexAI(model_name="chat-bison", temperature=0). 2. 
Pass into a basic chain or an agent with the input messages ### Expected behavior Throws an internal module error called File "E:\Office\2023\AI-Platform\kaya-aia-general-bot\venv\Lib\site-packages\langchain\llms\vertexai.py", line 301, in _generate generations.append([_response_to_generation(r) for r in res.candidates]) ^^^^^^^^^^^^^^ AttributeError: 'TextGenerationResponse' object has no attribute 'candidates'
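The underlying Vertex AI SDK changed the shape of `TextGenerationResponse` between releases, so a defensive pattern is to read `.candidates` when present and otherwise treat the response itself as the only candidate. This is a sketch of that pattern with dummy classes, not a patch to LangChain internals; pinning mutually compatible `langchain` and `google-cloud-aiplatform` versions is the practical fix:

```python
def response_candidates(res):
    """Handle both SDK shapes: responses with and without `.candidates`."""
    return list(getattr(res, "candidates", None) or [res])

class NewStyleResponse:          # SDK shape that exposes .candidates
    candidates = ["cand-a", "cand-b"]

class OldStyleResponse:          # SDK shape without .candidates
    text = "single generation"

new_style = response_candidates(NewStyleResponse())
old_style = response_candidates(OldStyleResponse())
```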
Vertex AI Models Throws AttributeError: After Langchain 0.0295 Version
https://api.github.com/repos/langchain-ai/langchain/issues/12250/comments
2
2023-10-25T04:55:22Z
2024-02-09T16:12:48Z
https://github.com/langchain-ai/langchain/issues/12250
1,960,535,933
12,250
[ "langchain-ai", "langchain" ]
### System Info Under doctran_text_extract.py, DoctranPropertyExtractor class. The methods for the class contains errors. The first method transform_documents() does not have any operations and simply returns a NotImplementedError. This method should be working and implemented, with the following code: ``` def transform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Extracts properties from text documents using doctran.""" try: from doctran import Doctran, ExtractProperty doctran = Doctran( openai_api_key=self.openai_api_key, openai_model=self.openai_api_model ) except ImportError: raise ImportError( "Install doctran to use this parser. (pip install doctran)" ) properties = [ExtractProperty(**property) for property in self.properties] for d in documents: doctran_doc = ( doctran.parse(content=d.page_content) .extract(properties=properties) .execute() ) d.metadata["extracted_properties"] = doctran_doc.extracted_properties return documents ``` For the second method atransform_documents(), doctran currently does not support async so this should raise a NotImplementedError instead. ### Who can help? @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties official code does not reproduce the displayed behavior ### Expected behavior official code does not reproduce the displayed behavior. Modified source code as mentioned above and used the class method transform_documents works with the displayed outcome.
Integration Doctran: DoctranPropertyExtractor class methods transform_documents() and atransform_documents() error
https://api.github.com/repos/langchain-ai/langchain/issues/12249/comments
2
2023-10-25T03:58:43Z
2024-02-06T16:14:31Z
https://github.com/langchain-ai/langchain/issues/12249
1,960,484,085
12,249
[ "langchain-ai", "langchain" ]
### Feature request Currently the `TelegramChatAPILoader` returns the Date, Sender, and Text of each message. Adding the URL of each message would be very helpful for a Natural Language Search Engine for a large Channel. SOURCE: `langchain.document_loaders.telegram` ### Motivation We are building a Natural Language assistant for a large channel on Telegram. The users currently search by key words and hashtags. The Language model can only return a URL if it's contained in the text of the message. With the URLs we would be able to easily direct the user to the relevant information with a Natural Language query instead of the archaic methods of the days before AI. ### Your contribution I do not know Telethon, however we can see that the URL can be pulled in the `fetch_data_from_telegram` function in the [langchain.document_loaders.telegram ](https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html#TelegramChatApiLoader.fetch_data_from_telegram)class. We thank you for LangChain and wish we could spare the time to contribute more. We may have a pull request in the future regarding Telegram Loaders for media messages. It's a feature that we will have to incorporate as we get closer to a public release of what we're working on.
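For public channels the mapping from a message to a web link is mechanical, since Telegram exposes messages at `t.me/<username>/<message_id>`; private chats use a different `t.me/c/...` scheme, so treat this as a sketch for the public-channel case only (the mapping from Telethon message fields to these arguments is an assumption):

```python
def telegram_message_url(channel_username: str, message_id: int) -> str:
    """Build the public web URL for a message in a public channel."""
    return f"https://t.me/{channel_username.lstrip('@')}/{message_id}"

url = telegram_message_url("@examplechannel", 42)
```

Inside `fetch_data_from_telegram`, something like this could populate a `url` key in each message's metadata alongside the existing sender and date fields.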
Add URL's to the output of TelegramChatAPILoader 🙏
https://api.github.com/repos/langchain-ai/langchain/issues/12246/comments
1
2023-10-25T02:22:45Z
2024-02-06T16:14:36Z
https://github.com/langchain-ai/langchain/issues/12246
1,960,397,303
12,246
[ "langchain-ai", "langchain" ]
### System Info langchain in /opt/conda/lib/python3.11/site-packages (0.0.322) Python 3.11.6 installed on image: jupyter/datascience-notebook ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction As per the documentation retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True) retriever.get_relevant_documents("What are some movies about dinosaurs") ### Expected behavior We would see verbose output such as ` query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) ` however no verbose output is printed
retriever.get_relevant_documents
https://api.github.com/repos/langchain-ai/langchain/issues/12244/comments
4
2023-10-25T00:47:28Z
2023-10-25T23:49:14Z
https://github.com/langchain-ai/langchain/issues/12244
1,960,309,220
12,244
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Using langchain on AWS lambda function to call an AWS OpenSearch Serverless vector index. I did not file as a bug, as I suspect that this is a usage problem, but it might be a bug. I have built an OpenSearch vector index, using AWS OpenSearch Serverless, and are using it to perform vector queries against document chunks. This is all working well. I would now like to add the ability to filter these searches by document (there may be hundreds or thousands of vector entries per document). There is metadata included with each entry, which includes a DocumentId field, which is a string-ized guid. I thought it would be simple to filter against this metadata attribute, but cannot get it working. The call I am making is: ``` docs = vsearch.similarity_search_with_score( theQuery, index_name="vector-index2", vector_field = "vector_field", search_type="script_scoring", pre_filter=filter_dict, space_type="cosinesimil", engine="faiss", k = 3 ) ``` Typical unfiltered output: {"doc": {"page_content": "seat is installed ... facing", "metadata": {"DocumentId": "{8B5D5890-0000-C1E8-B0B6-678C8509665D}", "DocumentTitle": "2023-Toyota-RAV4-OM.pdf", "DocClassName": "Document"}}, "score": 1.8776696}] For filter_dict, I have tried: filter_dict = {'DocumentId':theDocId} <-- Gets an error filter_dict = {'bool': {'must': {'term': {'DocumentId': theDocId} } } } <-- doesn’t find anything filter_dict = {'bool': {'must': {'match': {'DocumentId': theDocId} } } } <-- doesn’t find anything filter_dict = {'must': {'match': {'DocumentId': theDocId} } } <-- Gets an error filter_dict = {'match': {'DocumentId': theDocId} } <-- doesn’t find anything theDocId is a string set to an input parameter and looks like this: {8B5D5890-0000-C1E8-B0B6-678C8509665D}. Results are the same if I use a string constant (i.e. 
'{8B5D5890-0000-C1E8-B0B6-678C8509665D}') Same results if I use search_type=”approximate_search” in conjunction with boolean_filter=filter_dict Any suggestions on how to get this to work? ### Suggestion: _No response_
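One likely culprit: LangChain's OpenSearch store nests document metadata under a `metadata` object in each indexed document, so the filter has to target `metadata.DocumentId` rather than `DocumentId` — and an exact match on a text-mapped field usually needs the `.keyword` sub-field. Both the nesting and the `.keyword` suffix are assumptions to verify against your index mapping (`GET vector-index2/_mapping`):

```python
def document_id_filter(doc_id: str) -> dict:
    """Boolean filter matching one document's chunks by metadata.DocumentId."""
    return {
        "bool": {
            "filter": {
                # Exact match on the nested metadata field (".keyword" assumed
                # to exist for the text mapping of DocumentId).
                "term": {"metadata.DocumentId.keyword": doc_id}
            }
        }
    }

pre_filter = document_id_filter("{8B5D5890-0000-C1E8-B0B6-678C8509665D}")
```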
Issue: Problems getting filtering to work with langchain, OpenSearchVectorSearch, similarity_search_with_score
https://api.github.com/repos/langchain-ai/langchain/issues/12240/comments
16
2023-10-24T23:46:34Z
2024-03-14T00:47:50Z
https://github.com/langchain-ai/langchain/issues/12240
1,960,264,037
12,240
[ "langchain-ai", "langchain" ]
### System Info n/a ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://github.com/langchain-ai/langchain/blob/v0.0.322/libs/langchain/langchain/output_parsers/structured.py#L86 This line doesn't append commas in the expected JSON schema. ```none # What it does { "foo": List[string] // a list of strings "bar": string // a string } # What it should do I think { "foo": List[string], // a list of strings "bar": string // a string } ``` ### Expected behavior I think the suggestion to a LLM to be close to valid JSON (even though it has JSON comments).
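The fix is mechanical: emit a comma after every entry except the last when rendering the schema. A standalone sketch of the intended rendering (a simplified stand-in for the parser's real templates, not the actual LangChain code):

```python
def render_schema(fields: dict) -> str:
    """Render {name: (type, comment)} as JSON-with-comments, commas included."""
    items = list(fields.items())
    lines = []
    for i, (name, (ftype, comment)) in enumerate(items):
        comma = "," if i < len(items) - 1 else ""
        lines.append(f'\t"{name}": {ftype}{comma} // {comment}')
    return "{\n" + "\n".join(lines) + "\n}"

schema = render_schema({
    "foo": ("List[string]", "a list of strings"),
    "bar": ("string", "a string"),
})
```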
`StructuredOutputParser.get_format_instructions` missing commas
https://api.github.com/repos/langchain-ai/langchain/issues/12239/comments
3
2023-10-24T23:31:54Z
2024-05-07T16:06:18Z
https://github.com/langchain-ai/langchain/issues/12239
1,960,253,141
12,239
[ "langchain-ai", "langchain" ]
### System Info https://github.com/langchain-ai/langchain/tree/d2cb95c39d5569019ab3c6aa368aa937d8dcc465 (just above `v0.0.322`) MacBook Pro, M1 chip, macOS Ventura 13.5.2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction With Python 3.10.11: ```bash python -m venv venv source venv/bin/activate pip install poetry poetry install --with test make test ``` This gets: ``` poetry run pytest --disable-socket --allow-unix-socket tests/unit_tests/ Traceback (most recent call last): File "/Users/james.braza/code/langchain/venv/bin/poetry", line 5, in <module> from poetry.console.application import main File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/poetry/console/application.py", line 11, in <module> from cleo.application import Application as BaseApplication File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/application.py", line 12, in <module> from cleo.commands.completions_command import CompletionsCommand File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/commands/completions_command.py", line 10, in <module> from cleo import helpers File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/helpers.py", line 5, in <module> from cleo.io.inputs.argument import Argument File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/io/inputs/argument.py", line 5, in <module> from cleo.exceptions import CleoLogicError File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/exceptions/__init__.py", line 3, in <module> from cleo._utils 
import find_similar_names File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/_utils.py", line 8, in <module> from rapidfuzz.distance import Levenshtein ModuleNotFoundError: No module named 'rapidfuzz' ``` ### Expected behavior I expected installation to install everything necessary to run tests
ModuleNotFoundError: No module named 'rapidfuzz'
https://api.github.com/repos/langchain-ai/langchain/issues/12237/comments
12
2023-10-24T23:05:16Z
2024-07-18T16:07:14Z
https://github.com/langchain-ai/langchain/issues/12237
1,960,229,561
12,237
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I was wondering if there is a mechanism by which I can use the ConversationSummaryMemory in a chain only as the context of a conversation, so that after the chain completes I retain the old memory without the new conversation being added to it. ### Suggestion: _No response_
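One way to get this behavior is the snapshot pattern: run the chain against a copy of the memory and keep the original. The sketch below shows the idea on a plain dict (`fake_chain` is a stand-in, not a LangChain API); with real LangChain memory, `copy.deepcopy(memory)` before the run, or reconstructing the memory from a saved buffer afterwards, plays the same role:

```python
import copy

def run_without_updating(run_chain, memory_state):
    """Run a chain against a deep copy of the memory so the original
    summary is left untouched (snapshot-and-discard pattern)."""
    scratch = copy.deepcopy(memory_state)
    result = run_chain(scratch)          # the chain may mutate `scratch`
    return result, memory_state          # original state survives unchanged

memory = {"summary": "User likes tea.", "turns": 3}

def fake_chain(state):                   # stand-in for a real chain (assumption)
    state["turns"] += 1
    state["summary"] += " User asked about coffee."
    return "answer"

answer, preserved = run_without_updating(fake_chain, memory)
```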
Issue: can I only use ConversationSummaryMemory in a chain as context without renew it
https://api.github.com/repos/langchain-ai/langchain/issues/12227/comments
2
2023-10-24T20:43:18Z
2024-02-08T16:14:36Z
https://github.com/langchain-ai/langchain/issues/12227
1,960,043,531
12,227
[ "langchain-ai", "langchain" ]
### System Info aws ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.prompts.chat import ChatPromptTemplate updated_prompt = ChatPromptTemplate.from_messages( [ ("system", """ You are identifying information from the 'customer' table in the SQL Database. You should query the schema of the 'customer' table only once. Here's the list of relevant column names and their column description from schema 'customer' and their descriptions: - column name: "job" column description: "Job Number or Job Identifier. - column name: "job_type" - column description: "This column provides information about the status of a job. you can check this column for any of the following job status values: 'Progress' 'Complete' Use the following context to create the SQL query. Context: 1.Comprehend the column names and their respective descriptions provided for the purpose of selecting columns when crafting an SQL query. 
2.If the user inquires about the type of job select only the columns mapped to job_status and respond to the user: job_type= 'Progress, then fetch 'name' and 'start time' job_status = 'Complete' , then fetch 'name', 'completed', and 'date' """ ), ("user", "{question}\n ai: "), ] ) os.environ["OPENAI_CHAT_MODEL"] = 'gpt-3.5-turbo-16k' llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0) def run_sql_query_prompt(query, db): greeting_response = detect_greeting_or_gratitude(query) if greeting_response: return greeting_response try: llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0) sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm) sql_toolkit.get_tools() sqldb_agent = create_sql_agent( llm=llm, toolkit=sql_toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, #verbose=True, handle_parsing_errors=True, use_query_checker=True, ) formatted_prompt = updated_prompt.format(question=query) result = sqldb_agent.run(formatted_prompt) return result except Exception as e: return f"Error executing SQL query: {str(e)}" ### Expected behavior it should first fetch the job_type to understand whether it's 'Progress' or 'Complete' then if job_type= 'Progress, then fetch 'name' and 'start time' columns job_status = 'Complete' , then fetch 'name', 'completed', and 'date' columns
I encountered an issue with SQLDatabaseToolkit where I couldn't configure it to execute multiple SQL queries based on the previously retrieved value from a column in the initial query
https://api.github.com/repos/langchain-ai/langchain/issues/12222/comments
2
2023-10-24T19:13:22Z
2024-02-06T16:14:46Z
https://github.com/langchain-ai/langchain/issues/12222
1,959,912,928
12,222
[ "langchain-ai", "langchain" ]
### System Info Langchain HEAD ### Who can help? @hwchase17 @ag ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction OpenAI uses a standard `OPENAI_API_KEY_PATH` environment variable that can be used to provide a key located in a file. This is useful for instance for generating short-lived, automatically renewed keys. Currently, Langchain complains that an API key must be provided even when this environment variable is set. ### Expected behavior Langchain should use the value of OPENAPI_API_KEY_PATH with a higher priority than OPENAI_API_KEY when set.
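The precedence being asked for is easy to sketch outside the library; until LangChain honors it natively, a wrapper can resolve the key before constructing the model (the key strings below are obviously fake placeholders):

```python
import os
import tempfile

def resolve_openai_key(env):
    """Prefer OPENAI_API_KEY_PATH (a file holding the key) over OPENAI_API_KEY."""
    path = env.get("OPENAI_API_KEY_PATH")
    if path:
        with open(path) as fh:
            return fh.read().strip()
    return env.get("OPENAI_API_KEY")

# Demonstrate the precedence with a throwaway key file.
with tempfile.NamedTemporaryFile("w", suffix=".key", delete=False) as fh:
    fh.write("sk-from-file\n")
    key_path = fh.name

from_file = resolve_openai_key({"OPENAI_API_KEY_PATH": key_path,
                                "OPENAI_API_KEY": "sk-env"})
from_env = resolve_openai_key({"OPENAI_API_KEY": "sk-env"})
os.unlink(key_path)
```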
OPENAI_API_KEY_PATH is not supported
https://api.github.com/repos/langchain-ai/langchain/issues/12218/comments
2
2023-10-24T17:34:43Z
2024-02-08T16:14:40Z
https://github.com/langchain-ai/langchain/issues/12218
1,959,747,839
12,218
[ "langchain-ai", "langchain" ]
### System Info Hi! I have been trying to create a router chain which will route to StuffDocumentsChains. It seems that default output_key for BaseCombineDocumentsChain is output_text instead of 'text' like in typical chains. ```py class BaseCombineDocumentsChain(Chain, ABC): """Base interface for chains combining documents. Subclasses of this chain deal with combining documents in a variety of ways. This base class exists to add some uniformity in the interface these types of chains should expose. Namely, they expect an input key related to the documents to use (default `input_documents`), and then also expose a method to calculate the length of a prompt from documents (useful for outside callers to use to determine whether it's safe to pass a list of documents into this chain or whether that will longer than the context length). """ input_key: str = "input_documents" #: :meta private: output_key: str = "output_text" #: :meta private: ``` ### Question: Is it done on purpose or is just inconsistency? --- On the other hand MultiPromptChain has hardcoded output_keys as 'text'. ```py class MultiPromptChain(MultiRouteChain): """A multi-route chain that uses an LLM router chain to choose amongst prompts.""" @property def output_keys(self) -> List[str]: return ["text"] ``` So it is necessary to always change output_key in any CombineDocuments like chains. ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Create a router chain that routes to StuffDocumentsChain accordingly with documentation. 
### Expected behavior BaseCombineDocumentsChain should have consistent default output keys like other chains. Or MultiPromptChain/MultiRouteChain should not have hardcoded output keys
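Since `output_key` is a declared field on `BaseCombineDocumentsChain`, passing `output_key="text"` when constructing the combine-documents chain should align it with what `MultiPromptChain` expects (worth verifying on your version). The same alignment can also be done by post-processing the chain's output dict — a sketch:

```python
def rename_key(outputs: dict, src: str = "output_text", dst: str = "text") -> dict:
    """Map a combine-documents result onto the key the router expects."""
    if src in outputs and dst not in outputs:
        outputs = {**outputs, dst: outputs[src]}
        del outputs[src]
    return outputs

aligned = rename_key({"output_text": "summary here"})
```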
Router + BaseCombineDocumentsChain output_key
https://api.github.com/repos/langchain-ai/langchain/issues/12206/comments
2
2023-10-24T14:38:00Z
2024-02-06T16:14:56Z
https://github.com/langchain-ai/langchain/issues/12206
1,959,412,927
12,206
[ "langchain-ai", "langchain" ]
### Feature request Is it possible to support scenarios for using `RetryWithErrorOutputParser` when prompt is injected in a chain or a summary memory so this prompt is not directly formatted? ### Motivation I am trying to support complex prompts, multi-chain and combined memory scenarios causing errors regularly when the LLM generate content response in a different structure as stablished in the format instructions. ### Your contribution I could make the PR with some basic implementation guidelines.
RetryWithErrorOutputParser for chains and summary memory
https://api.github.com/repos/langchain-ai/langchain/issues/12205/comments
2
2023-10-24T14:24:02Z
2024-02-08T16:14:46Z
https://github.com/langchain-ai/langchain/issues/12205
1,959,382,324
12,205
[ "langchain-ai", "langchain" ]
### System Info

Here is a native python llama cpp example

```
"""
Ask to generate 4 questions related to a given content
"""
from llama_cpp import Llama

mpath = "/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"

# define n_ctx manually to permit larger contexts
LLM = Llama(model_path=mpath, n_ctx=512, verbose=False)

# create a text prompt
context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""

prompt = """[INST]\nYou are a Teacher/ Professor. Your task is to setup 4 questions for an upcoming quiz/examination. The questions should be diverse in nature across the document. Restrict the questions to the context provided.
Context is provided below :
%s
[/INST]\n\n
""" % context

print(prompt)

# set max_tokens to 0 to remove the response size limit
output = LLM(prompt, max_tokens=512, stop=["INST"])

# display the response
print(output["choices"][0]["text"])
print("---------------")
print(output)
```

and here is the "langchain version"

```
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt_template = """[INST]\nYou are a Teacher/ Professor. Your task is to setup 4 questions for an upcoming quiz/examination. The questions should be diverse in nature across the document. Restrict the questions to the context provided.
Context is provided below :
{context}
[/INST]\n\n
"""

PROMPT = PromptTemplate(template=prompt_template, input_variables=["context"])

path_to_ggml_model = "/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"

context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""

llm = LlamaCpp(model_path=path_to_ggml_model, temperature=0.8, verbose=True, echo=True, n_ctx=512)
chain = LLMChain(llm=llm, prompt=PROMPT)
answer = chain.run(context=context, stop="[/INST]", max_tokens=512)
print(answer)
```

Unfortunately the langchain version does not return anything. I suspect mismatched or default-overridden parameters being sent to Llama.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt_template = """[INST]\nYou are a Teacher/ Professor. Your task is to setup 4 questions for an upcoming quiz/examination. The questions should be diverse in nature across the document. Restrict the questions to the context provided.
Context is provided below :
{context}
[/INST]\n\n
"""

PROMPT = PromptTemplate(template=prompt_template, input_variables=["context"])

path_to_ggml_model = "/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"

context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""

# echo : echo prompt
llm = LlamaCpp(model_path=path_to_ggml_model, temperature=0.8, verbose=True, echo=True, n_ctx=512)
chain = LLMChain(llm=llm, prompt=PROMPT)
answer = chain.run(context=context, stop="[/INST]", max_tokens=512)
print(answer)
```

### Expected behavior

should answer a quiz similar to this :

Sure, here are four questions that can be used for an upcoming quiz or examination related to the context provided:
1. What is the name of the lake located nearby?
2. Describe the type of bread sold at the boulanger?
3. How would you describe the personality of the baker?
4. What kind of meat do they sell at the boucherie?
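One possible explanation for the empty output, purely a hypothesis: the LangChain call passes stop="[/INST]" while the prompt itself ends with [/INST], and echo=True asks for the prompt to be echoed back. If the echoed prompt is included in the text that stop sequences are matched against, the very first match truncates everything. A minimal pure-Python sketch of stop-sequence truncation (not the actual llama.cpp implementation):

```python
def apply_stop(generated: str, stop_sequences: list) -> str:
    """Truncate generated text at the first stop sequence, mimicking
    how completions are post-processed after sampling."""
    cut = len(generated)
    for s in stop_sequences:
        idx = generated.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut]

# If the echoed prompt (ending in "[/INST]") leaks into the output,
# stopping on "[/INST]" yields an empty answer:
completion = "[/INST] Sure, here are four questions: ..."
print(apply_stop(completion, ["[/INST]"]))       # -> ""
print(apply_stop("answer [/INST] tail", ["[/INST]"]))  # -> "answer "
```

Dropping echo=True, or choosing a stop string that cannot occur in the prompt, would confirm or rule this out.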
Unable to use LlamCpp from langchain
https://api.github.com/repos/langchain-ai/langchain/issues/12203/comments
3
2023-10-24T14:21:08Z
2024-02-09T16:12:53Z
https://github.com/langchain-ai/langchain/issues/12203
1,959,375,762
12,203
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

The `publish_date` field returned in the metadata seems unreliable and is very often empty for many news sites. The publish date can easily be parsed directly from the RSS feed's `pubDate` tag, which, although optional, is fairly standard and much more reliable than trying to pull the date from the article itself, as the `NewsURLLoader` currently does.

### Suggestion:

We can override the `publish_date` coming from the `NewsURLLoader` metadata with the `published` field from the feed entry in the `RSSFeedLoader`.
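A minimal sketch of the suggestion, using plain dicts rather than the real RSSFeedLoader/NewsURLLoader classes (`merge_publish_date` is a hypothetical helper, not an existing API):

```python
def merge_publish_date(article_metadata: dict, feed_entry: dict) -> dict:
    """Prefer the RSS feed entry's `published` field over the date
    scraped from the article page, falling back to the scraped value."""
    merged = dict(article_metadata)
    published = feed_entry.get("published")
    if published:
        merged["publish_date"] = published
    return merged

meta = {"title": "Some article", "publish_date": None}   # scraped metadata
entry = {"published": "Tue, 24 Oct 2023 14:16:40 GMT"}   # feedparser entry
print(merge_publish_date(meta, entry)["publish_date"])
```

The `RSSFeedLoader` already iterates feed entries, so each entry's `published` value is available at the point where the article document is built.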
Issue: RSSFeedLoader unreliable publish date
https://api.github.com/repos/langchain-ai/langchain/issues/12202/comments
2
2023-10-24T14:16:40Z
2024-02-06T16:15:11Z
https://github.com/langchain-ai/langchain/issues/12202
1,959,365,429
12,202
[ "langchain-ai", "langchain" ]
@dosu-bot I have 3 issues I need your help with and to fix the code:

1) My code is not remembering previous questions despite using ConversationBufferMemory
2) Not Returning Source Documents
3) When I ask it a question the first time, it writes the answer only but if I ask it any other time afterwards, it always rewrites the question then the answer. I just want it to always write the answer only.

Below is my code:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.vectorstores import DeepLake
from PyPDF2 import PdfReader
import PyPDF2
from langchain.document_loaders import TextLoader, DirectoryLoader, UnstructuredWordDocumentLoader
from langchain.document_loaders import PyPDFDirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time

# Change DirectoryLoader since it has its own chunk size and overlap
load_dotenv()

directory_path = "C:\\Users\\Asus\\Documents\\Vendolista\\MasterFormat"
pdf_loader = DirectoryLoader(directory_path, glob="**/*.pdf", show_progress=True,
                             use_multithreading=True, silent_errors=True,
                             loader_cls=PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents)) + " documents loaded")

def print_letter_by_letter(text):
    for char in text:
        print(char, end='', flush=True)
        time.sleep(0.02)

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo',
                 callbacks=[StreamingStdOutCallbackHandler()], streaming=True)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,
    chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
# save to disk
knowledge_base.persist()
# To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory=persist_directory, embedding_function=embeddings)

prompt_template = """
You are a knowledgeable and ethical assistant with expertise in the construction field, including its guidelines and operational procedures. You are always available to provide assistance and uphold the highest ethical and moral standards in your work. You will always and always and always only follow these set of rules and nothing else no matter what:
1) Answer the Question that is answering only based on the documents.
2) If the question is unrelated then say "sorry this question is completely not related. If you think it is, email the staff and they will get back to you: yazanrisheh@hotmail.com."
3) Do not ever answer with "I don't know" to any question no matter what kind of a question it is.
4) Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
5) Remember to always take a deep breath and take this step by step.
6) Do not ever and under no circumstance do you rewrite the question asked by the customer when you are answering them.
7) If the question starts with the "Yazan," then you can ignore the steps above.

Text: {context}
Question: {question}
Answer :
"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question',
                                  output_key='answer', return_messages=True)

conversation = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=new_knowledge_base.as_retriever(),
    memory=memory,
    chain_type="stuff",
    verbose=False,
    combine_docs_chain_kwargs={"prompt": PROMPT}
)

def main():
    chat_history = []
    while True:
        customer_input = input("Ask me anything about the files (type 'exit' to quit): ")
        if customer_input.lower() in ["exit"] and len(customer_input) == 4:
            end_chat = "Thank you for visiting us! Have a nice day"
            print_letter_by_letter(end_chat)
            break
        # if the customer_input is not equal to an empty string meaning they have input
        if customer_input != "":
            with get_openai_callback() as cb:
                response = conversation({"question": customer_input}, return_only_outputs=True)
                # print(response['answer'])
                print()
                print(cb)
                print()

if __name__ == "__main__":
    main()
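Two hedged pointers that may help: ConversationalRetrievalChain.from_llm accepts a return_source_documents=True flag in recent versions (check your installed version's signature), and conversation history can also be tracked by hand instead of relying on ConversationBufferMemory. A pure-Python sketch of manual history tracking, with a stub standing in for the real chain:

```python
def ask(chain, question, chat_history):
    """Call a chain and record the (question, answer) turn ourselves,
    so history survives regardless of the memory object's behavior."""
    result = chain({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    return result["answer"]

def fake_chain(inputs):
    """Stub for illustration only; a real chain would query the LLM."""
    n_turns = len(inputs["chat_history"])
    return {"answer": f"answer #{n_turns + 1} to: {inputs['question']}"}

history = []
print(ask(fake_chain, "What is MasterFormat?", history))
print(ask(fake_chain, "And section 03?", history))
print(len(history))  # 2 turns remembered
```

With the real chain, the same `history` list can be passed as `chat_history` on every call, which makes it easy to verify whether the memory or the chain is dropping turns.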
LangChain memory, not returning source doc, repeating question
https://api.github.com/repos/langchain-ai/langchain/issues/12199/comments
6
2023-10-24T11:06:32Z
2024-05-15T16:06:17Z
https://github.com/langchain-ai/langchain/issues/12199
1,959,011,802
12,199
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.320
Ubuntu 20.04
Python 3.11.5

### Who can help?

@SuperJokerayo @baskaryan

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

When I try to parse a PDF and extract an image I get the following error:

```
File "<.....>/lib/python3.11/site-packages/langchain/document_loaders/parsers/pdf.py", line 106, in _extract_images_from_page
    np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
ValueError: cannot reshape array of size 45532 into shape (4644,3282,newaxis)
```

The code I have to run the parser is:

```
loader = PyPDFLoader(doc["file"], extract_images=True)
pages = loader.load_and_split()
```

I can't give you a copy of the document since it's confidential. It's a scanned 11-page document.

### Expected behavior

I expected `load_and_split()` to return a list of 11 pages with the text parsed from the image in each page. If I change to `extract_images=False` then it returns an empty list of 0 pages as expected.
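The reshape fails because the extracted byte buffer (45532 bytes) is far smaller than height x width (4644 x 3282), which typically means the image stream is still compressed (e.g. CCITT or JBIG2, common for scanned pages) rather than raw pixel samples; that is an assumption based on the numbers, not a confirmed diagnosis. A guard mirroring the divisibility check that `reshape(h, w, -1)` performs implicitly:

```python
def safe_image_shape(n_bytes: int, height: int, width: int):
    """Return the per-pixel channel count if `n_bytes` can be reshaped
    to (height, width, -1); return None if the buffer size does not
    divide evenly (e.g. compressed data, not raw samples)."""
    pixels = height * width
    if pixels == 0 or n_bytes % pixels != 0:
        return None
    return n_bytes // pixels

print(safe_image_shape(45532, 4644, 3282))  # None: buffer can't fill 4644x3282
print(safe_image_shape(12, 2, 2))           # 3 channels (e.g. RGB)
```

A check like this before the `np.reshape` call in `_extract_images_from_page` would let the loader skip (or decode differently) streams it cannot interpret as raw images instead of raising.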
Error reshaping array in _extract_images_from_page() in pdf.py
https://api.github.com/repos/langchain-ai/langchain/issues/12196/comments
5
2023-10-24T10:05:32Z
2024-04-27T16:34:18Z
https://github.com/langchain-ai/langchain/issues/12196
1,958,920,258
12,196
[ "langchain-ai", "langchain" ]
### Feature request

I have only seen the "[Zero-shot ReAct](https://python.langchain.com/docs/modules/agents/agent_types/react)" agent. Is there a few-shot variant?

### Motivation

Being able to provide examples to the agent, so it understands how these tools are used and in what order.

### Your contribution

Nothing yet.
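In the meantime, examples can be injected into the agent prompt by hand. A toy sketch of building a few-shot ReAct prefix from worked examples (the example content and field names are made up for illustration):

```python
def few_shot_prefix(examples):
    """Render worked tool-use traces as text to prepend to a ReAct
    prompt, so the model sees how tools are called and in what order."""
    blocks = []
    for ex in examples:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Thought: {ex['thought']}\n"
            f"Action: {ex['action']}\n"
            f"Action Input: {ex['action_input']}"
        )
    return "\n\n".join(blocks)

demo = [{"question": "What is 2+2?",
         "thought": "I should use the calculator tool.",
         "action": "calculator",
         "action_input": "2+2"}]
print(few_shot_prefix(demo))
```

The resulting string can go into the agent's prompt prefix/suffix (most agent constructors accept custom prompt pieces), effectively turning zero-shot ReAct into few-shot ReAct.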
Is there a few shots agent exists?
https://api.github.com/repos/langchain-ai/langchain/issues/12194/comments
2
2023-10-24T09:32:52Z
2024-02-06T16:15:21Z
https://github.com/langchain-ai/langchain/issues/12194
1,958,865,532
12,194
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Let's say you have a PDF file, an Excel file, and a PowerPoint file all indexed into a Pinecone database. They are similar in that a query can have relevant information in all sources, but now we want the user to indicate which source should be considered first when they ask a question. Is there a way to do that?

For the conversational retrieval chain, you have this retriever:

retriever = vectorstore.as_retriever(
    search_kwargs = {"filter": {
        "file_types": {"$in": file_types},
        "genre": {"$in": genre},
        "keys": {"$in": keys},
    },
    "k": 3
})

How do I make sure I can prioritize sources, such that the preferred source comes first and, if it's not there, we check the others?

### Suggestion:

_No response_
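One workaround, an illustrative post-processing step rather than a built-in retriever option, is to retrieve normally and then re-rank the returned documents by a caller-supplied source priority:

```python
def prioritize(docs, priority):
    """Re-order retrieved docs so preferred file types come first.
    `docs` are (content, metadata) pairs; unknown types sort last."""
    rank = {ft: i for i, ft in enumerate(priority)}
    return sorted(docs, key=lambda d: rank.get(d[1].get("file_type"), len(rank)))

docs = [("slide", {"file_type": "pptx"}),
        ("sheet", {"file_type": "xlsx"}),
        ("page",  {"file_type": "pdf"})]
ordered = prioritize(docs, ["pdf", "xlsx", "pptx"])
print([m["file_type"] for _, m in ordered])  # ['pdf', 'xlsx', 'pptx']
```

With a larger `k` on the retriever, this keeps fallback sources available while still surfacing the user's preferred source first.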
Issue: Prioritze sources in the vector store to consider certain sources first
https://api.github.com/repos/langchain-ai/langchain/issues/12192/comments
2
2023-10-24T09:09:10Z
2024-02-06T16:15:26Z
https://github.com/langchain-ai/langchain/issues/12192
1,958,826,438
12,192
[ "langchain-ai", "langchain" ]
### Feature request

I want to add a config chain prior to the ConversationalRetrievalChain that is going to process the query and set the retriever's search kwargs (k, fetch_k, lambda_mult) depending on the question. How can I do that and pass the parameters from the config chain's output?

### Motivation

The motivation is to have a dynamic ConversationalRetrievalChain that retrieves the documents the right way for each question.

### Your contribution

I have none for now; I don't know where to start.
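As a starting point, the "config chain" can be an ordinary function that maps the question to search kwargs, whose output is then passed to vectorstore.as_retriever(search_kwargs=...) before the ConversationalRetrievalChain is built for that question. The keyword rules below are made up purely for illustration:

```python
def search_kwargs_for(question: str) -> dict:
    """Toy 'config chain': derive retriever settings from the question.
    The trigger words and values here are illustrative assumptions."""
    kwargs = {"k": 4, "fetch_k": 20, "lambda_mult": 0.5}
    q = question.lower()
    if "compare" in q or "list" in q:
        kwargs.update(k=8, fetch_k=40, lambda_mult=0.3)  # broader, more diverse
    elif "exactly" in q or "quote" in q:
        kwargs.update(k=2, fetch_k=10, lambda_mult=0.9)  # narrow, high relevance
    return kwargs

print(search_kwargs_for("Compare the two policies"))
print(search_kwargs_for("What exactly does clause 4 say?"))
```

The same idea works with an LLM in place of the keyword rules: ask it to emit the settings as JSON, parse them, and feed them to `as_retriever` per request.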
Add a config chain before the ConversationalRetrievalChain : search Kwargs
https://api.github.com/repos/langchain-ai/langchain/issues/12189/comments
5
2023-10-24T08:36:14Z
2024-03-27T13:44:00Z
https://github.com/langchain-ai/langchain/issues/12189
1,958,773,387
12,189
[ "langchain-ai", "langchain" ]
### System Info Python 3.11.4 LangChain 0.0.321 Platform info (WSL2): DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS" ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is the (partial) code snippet that I run: ```python from langchain.chat_models import ChatOpenAI prompt = hub.pull("hwchase17/react-chat-json") llm = ChatOpenAI(max_retries=10, temperature=0, max_tokens=400, **model_kwargs) llm_with_stop = llm.bind(stop=["\nObservation"]) from langchain.agents.output_parsers import JSONAgentOutputParser from langchain.agents.format_scratchpad import format_log_to_messages # Connecting to Opensearch docsearch = OpenSearchVectorSearch(index_name=index_uk, embedding_function=embeddings, opensearch_url=opensearch_url, http_auth=('admin', auth)) # We need some extra steering, or the chat model forgets how to respond sometimes TEMPLATE_TOOL_RESPONSE = """TOOL RESPONSE: --------------------- {observation} USER'S INPUT -------------------- Okay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else - even if you just want to respond to the user. 
Do NOT respond with anything except a JSON snippet no matter what!""" from langchain.agents import AgentExecutor from langchain.agents.agent_toolkits import create_retriever_tool tool = create_retriever_tool( docsearch.as_retriever(), "search_hr_documents", "Searches and returns documents regarding HR-related questions." ) tool print("tool", tool) from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent agent = OpenAIFunctionsAgent(llm=llm, tools=[tool], prompt=prompt) memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) agent_executor = AgentExecutor(agent=agent, tools=[tool,tool], verbose=True, memory=memory, return_intermediate_steps=True) print("full_question ===>", full_question) result = agent_executor({"input": full_question}) ``` When I run it, it logs the below error: ``` tool name='search_hr_documents' description='Searches and returns documents regarding HR-related questions.' func=<bound method BaseRetriever.get_relevant_documents of VectorStoreRetriever(tags=['OpenSearchVectorSearch', 'OpenAIEmbeddings'], vectorstore=<langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch object at 0x00000247DC4ABA50>)> coroutine=<bound method BaseRetriever.aget_relevant_documents of VectorStoreRetriever(tags=['OpenSearchVectorSearch', 'OpenAIEmbeddings'], vectorstore=<langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch object at 0x00000247DC4ABA50>)> full_question ===> Am I allowed to get a company car ? c:\project\transformed_notebook.ipynb Cell 8 line 1 190 agent_executor = AgentExecutor(agent=agent, tools=[tool,tool], verbose=True, memory=memory, return_intermediate_steps=True) 192 print("full_question ===>", full_question) --> 194 result = agent_executor({"input": full_question}) 196 # print("result ===>", result) 197 198 # chain_type_kwargs = {"prompt": prompt, (...) 
211 # out = chain(full_question) 212 # Cleaning the answer File c:\Users\raaz\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 308 except BaseException as e: 309 run_manager.on_chain_error(e) --> 310 raise e 311 run_manager.on_chain_end(outputs) ... 99 } 100 full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad) 101 prompt = self.prompt.format_prompt(**full_inputs) KeyError: 'tool_names' ``` How come this error is generated even though the `tools` argument is provided and is well-defined ? ### Expected behavior The error should not be raised since tools are defined the correct way (as per the official LangChain docs)
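A likely cause, though only a hypothesis: the hub prompt hwchase17/react-chat-json is a ReAct-style prompt that appears to expect {tools} and {tool_names} input variables, while OpenAIFunctionsAgent never fills them in (it passes tools to the model as OpenAI functions instead), so formatting the prompt raises KeyError: 'tool_names'. A sketch of the two strings such a prompt needs pre-filled, using a stand-in Tool type rather than the real LangChain class:

```python
from collections import namedtuple

Tool = namedtuple("Tool", ["name", "description"])

def render_tool_variables(tools):
    """Build the `tools` / `tool_names` strings a ReAct-style prompt
    template expects (hypothetical helper for illustration)."""
    return {
        "tools": "\n".join(f"{t.name}: {t.description}" for t in tools),
        "tool_names": ", ".join(t.name for t in tools),
    }

tools = [Tool("search_hr_documents",
              "Searches and returns documents regarding HR-related questions.")]
vars_ = render_tool_variables(tools)
print(vars_["tool_names"])  # search_hr_documents
```

Either pre-fill these variables into the pulled prompt (e.g. via partial formatting) before constructing the agent, or pair OpenAIFunctionsAgent with a prompt written for it rather than a ReAct prompt.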
KeyError: 'tool_names'
https://api.github.com/repos/langchain-ai/langchain/issues/12186/comments
6
2023-10-24T06:44:44Z
2024-02-12T16:10:25Z
https://github.com/langchain-ai/langchain/issues/12186
1,958,617,422
12,186
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am trying to add retriever tool to SQL Database Toolkit and run SQL Agent with Vertex AI Text Bison LLM, however I am facing error. I am following below documentation for implementation [https://python.langchain.com/docs/use_cases/qa_structured/sql?ref=blog.langchain.dev#case-3-sql-agents](url) My Python Code `llm = VertexAI( model_name='text-bison@001', max_output_tokens=256, temperature=0, top_p=0.8, top_k=40, verbose=True, ) few_shots = {'a': sql a query;', "b '.": "sql b query", "c.": "sql c query;", "d": "sql d query; " } from langchain.embeddings import VertexAIEmbeddings from langchain.vectorstores import FAISS from langchain.schema import Document embeddings = VertexAIEmbeddings() few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()] vector_db = FAISS.from_documents(few_shot_docs, embeddings) #retriever = vector_db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5}) retriever = vector_db.as_retriever() from langchain.agents import initialize_agent, Tool from langchain.agents import load_tools from langchain.agents import AgentType from langchain.tools import BaseTool from langchain.agents.agent_toolkits import create_retriever_tool tool_description = """ This tool will help you understand similar examples to adapt them to the user question. Input to this tool should be the user question. 
""" retriever_tool = create_retriever_tool( retriever, name='sql_get_similar_examples', description=tool_description ) custom_tool_list = [retriever_tool] from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.sql_database import SQLDatabase from langchain.agents import AgentExecutor from langchain.agents.agent_types import AgentType from sqlalchemy import * from sqlalchemy.engine import create_engine from sqlalchemy.schema import * import pandas as pd table_uri = f"bigquery://{project}/{dataset_id}" engine = create_engine(f"bigquery://{project}/{dataset_id}") db = SQLDatabase(engine=engine,metadata=MetaData(bind=engine),include_tables=table_name,view_support=True) toolkit = SQLDatabaseToolkit(db=db, llm=llm) custom_suffix = """Begin! Question: {input} Thought: I should first get the similar examples I know. If the examples are enough to construct the query, I can build it. Otherwise, I can then look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables. {agent_scratchpad}""" agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, extra_tools=custom_tool_list, suffix=custom_suffix ) langchain.debug = True Thought:The table does not have the column 'alloted_terminal'. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. 
Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought:I don't have enough information to construct the query. Action: sql_get_similar_examples Action Input: {Question} Observation: {Document(fewshots} Thought: > Finished chain. Agent stopped due to iteration limit or time limit. { "output": "Agent stopped due to iteration limit or time limit." } ' Agent stopped due to iteration limit or time limit. ### Suggestion: _No response_
Retriever Tool for Vertex AI Model
https://api.github.com/repos/langchain-ai/langchain/issues/12185/comments
2
2023-10-24T05:35:24Z
2024-02-06T16:15:41Z
https://github.com/langchain-ai/langchain/issues/12185
1,958,553,638
12,185
[ "langchain-ai", "langchain" ]
### System Info System: langchain 0.0.321 Python 3.10 Hi, I get an error related to a missing OpenAI token when trying to reuse the code to [dynamically select from multiple retrievers](https://python.langchain.com/docs/use_cases/question_answering/multi_retrieval_qa_router) at the MultiRetrievalQAChain initialization step, ``` multi_retriever_chain = MultiRetrievalQAChain.from_retrievers( llm=llm, # Llama2 served by vLLM via VLLMOpenAI retriever_infos=retrievers_info, verbose=True) ``` I get an error because I'm not using OpenAI LLMs and the LangChain code (multi_retrieval_qa.py) hard wires ChatOpenAI() as the LLM for the _default_chain. ``` _default_chain = ConversationChain( llm=ChatOpenAI(), prompt=prompt, input_key="query", output_key="result" ) ``` I think you need to assign the llm variable to the llm provided when initializing the class. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Including the Python code and the PDF files that get loaded by the retrievers. 
``` #%% import torch from langchain.llms import VLLMOpenAI from langchain.document_loaders import PyPDFLoader # Import for retrieval-augmented generation RAG from langchain import hub from langchain.chains import RetrievalQA, MultiRetrievalQAChain from langchain.vectorstores import Chroma from langchain.text_splitter import SentenceTransformersTokenTextSplitter from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings #%% # URL for the vLLM service INFERENCE_SRV_URL = "http://localhost:8000/v1" def setup_chat_llm(vllm_url, max_tokens=512, temperature=0.2): """ Intializes the vLLM service object. :param vllm_url: vLLM service URL :param max_tokens: Max number of tokens to get generated by the LLM :param temperature: Temperature of the generation process :return: The vLLM service object """ chat = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base=vllm_url, model_name="meta-llama/Llama-2-7b-chat-hf", temperature=temperature, max_tokens=max_tokens, ) return chat #%% # Initialize LLM service llm = setup_chat_llm(vllm_url=INFERENCE_SRV_URL) #%% %%time # Set up the embedding encoder (Sentence Transformers) and vector store model_name = "all-mpnet-base-v2" model_kwargs = {'device': 'cuda' if torch.cuda.is_available() else 'cpu'} encode_kwargs = {'normalize_embeddings': False} embeddings = SentenceTransformerEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) # Set up the document splitter text_splitter = SentenceTransformersTokenTextSplitter(chunk_size=500, chunk_overlap=0) # Load PDF documents loader = PyPDFLoader(file_path="../data/AI_RMF_Playbook.pdf") rmf_doc = loader.load() rmf_splits = text_splitter.split_documents(rmf_doc) rmf_retriever = Chroma.from_documents(documents=rmf_splits, embedding=embeddings) loader = PyPDFLoader(file_path="../data/OWASP-Top-10-for-LLM-Applications-v101.pdf") owasp_doc = loader.load() owasp_splits = text_splitter.split_documents(owasp_doc) owasp_retriever = 
Chroma.from_documents(documents=owasp_splits, embedding=embeddings) loader = PyPDFLoader(file_path="../data/Threat Modeling LLM Applications - AI Village.pdf") ai_village_doc = loader.load() ai_village_splits = text_splitter.split_documents(ai_village_doc) ai_village_retriever = Chroma.from_documents(documents=ai_village_splits, embedding=embeddings) #%% retrievers_info = [ { "name": "NIST AI Risk Management Framework", "description": "Guidelines for organizations and people to manage risks associated with the use of AI ", "retriever": rmf_retriever.as_retriever() }, { "name": "OWASP Top 10 for LLM Applications", "description": "Provides practical, actionable, and concise security guidance to navigate the complex and evolving terrain of LLM security", "retriever": owasp_retriever.as_retriever() }, { "name": "Threat Modeling LLM Applications", "description": "A high-level example from Gavin Klondike on how to build a threat model for LLM applications", "retriever": ai_village_retriever.as_retriever() } ] #%% # Import default LLama RAG prompt prompt = hub.pull("rlm/rag-prompt-llama") print(prompt.dict()['messages'][0]['prompt']['template']) #%% multi_retriever_chain = MultiRetrievalQAChain.from_retrievers( llm=llm, retriever_infos=retrievers_info, #default_retriever=owasp_retriever.as_retriever(), #default_prompt=prompt, #chain_type_kwargs={"prompt": prompt}, verbose=True) #%% question = "What is prompt injection?" 
result = multi_retriever ``` [AI_RMF_Playbook.pdf](https://github.com/langchain-ai/langchain/files/13110652/AI_RMF_Playbook.pdf) [OWASP-Top-10-for-LLM-Applications-v101.pdf](https://github.com/langchain-ai/langchain/files/13110653/OWASP-Top-10-for-LLM-Applications-v101.pdf) [Threat Modeling LLM Applications - AI Village.pdf](https://github.com/langchain-ai/langchain/files/13110654/Threat.Modeling.LLM.Applications.-.AI.Village.pdf) ### Expected behavior I was expecting to get results equivalent to what is shown at LangChain's documentation [dynamically select from multiple retrievers](https://python.langchain.com/docs/use_cases/question_answering/multi_retrieval_qa_router)
multi_retrieval_qa.py hardwires ChatOpenAI() at the _default_chain
https://api.github.com/repos/langchain-ai/langchain/issues/12184/comments
6
2023-10-24T04:17:28Z
2023-10-26T09:09:59Z
https://github.com/langchain-ai/langchain/issues/12184
1,958,479,132
12,184
[ "langchain-ai", "langchain" ]
### System Info python=3.10 langchain=0.0.320 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction # Bug https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent I am trying to reproduce this example. But I got the below error. The error happends in the below function https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html#ChatOpenAI:~:text=%3Dchunk)-,def%20_generate(,-self%2C%0A%20%20%20%20%20%20%20%20messages After I printed the "message_dicts" in the function I found that one item in the list has a None "content" value. ``` python [{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 326.6700134277344, "currency": "USD"}', 'name': 'get_current_stock_price'}] ``` The **None** value will lead to an error comlainning that: ``` { "error": { "code": null, "param": null, "message": "'content' is a required property - 'messages.2'", "type": "invalid_request_error" } } ``` # My findings: If we use the function_call features of OPENAI, it will always return a None value in the content. But it seems like, the langchain implementation does not deal with this situation. 
# My fix:

After I add one line of code in the `_generate` function to replace the `None` value with an empty string `''`:

```python
message_dicts = [{**item, 'content': ('' if item['content'] is None else item['content'])} for item in message_dicts]  # this line is added
response = self.completion_with_retry(
    messages=message_dicts, run_manager=run_manager, **params
)
print(response)
return self._create_chat_result(response)
```

Then the function works normally. Is this a bug in using the langchain OPENAI_FUNCTIONS agent?

### Expected behavior

# My fix:

After I add one line of code in the `_generate` function to replace the `None` value with an empty string `''`:

```python
message_dicts = [{**item, 'content': ('' if item['content'] is None else item['content'])} for item in message_dicts]  # this line is added
response = self.completion_with_retry(
    messages=message_dicts, run_manager=run_manager, **params
)
print(response)
return self._create_chat_result(response)
```

Then the function works normally. The output (with some printing) is like this:

```
[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}]
{
  "created": 1698074688,
  "usage": {
    "completion_tokens": 18,
    "prompt_tokens": 199,
    "total_tokens": 217
  },
  "model": "gpt-35-turbo",
  "id": "chatcmpl-8Cr5M7Zi8wv40g9dEg4ZbkNV0KBSc",
  "choices": [
    {
      "finish_reason": "function_call",
      "index": 0,
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "get_current_stock_price",
          "arguments": "{\n \"ticker\": \"MSFT\"\n}"
        },
        "content": null
      }
    }
  ],
  "object": "chat.completion"
}

Invoking: `get_current_stock_price` with `{'ticker': 'MSFT'}`

{'price': 328.9849853515625, 'currency': 'USD'}
[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 328.9849853515625, "currency": "USD"}', 'name': 'get_current_stock_price'}]
{
  "created": 1698074692,
  "usage": {
    "completion_tokens": 24,
    "prompt_tokens": 228,
    "total_tokens": 252
  },
  "model": "gpt-35-turbo",
  "id": "chatcmpl-8Cr5QoQfx5gkIS9k7mGn2fs7lMkPc",
  "choices": [
    {
      "finish_reason": "function_call",
      "index": 0,
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "get_stock_performance",
          "arguments": "{\n \"ticker\": \"MSFT\",\n \"days\": 180\n}"
        },
        "content": null
      }
    }
  ],
  "object": "chat.completion"
}

Invoking: `get_stock_performance` with `{'ticker': 'MSFT', 'days': 180}`

{'percent_change': 7.626317098142607}
[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 328.9849853515625, "currency": "USD"}', 'name': 'get_current_stock_price'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_stock_performance', 'arguments': '{\n "ticker": "MSFT",\n "days": 180\n}'}}, {'role': 'function', 'content': '{"percent_change": 7.626317098142607}', 'name': 'get_stock_performance'}]
{
  "created": 1698074694,
  "usage": {
    "completion_tokens": 36,
    "prompt_tokens": 251,
    "total_tokens": 287
  },
  "model": "gpt-35-turbo",
  "id": "chatcmpl-8Cr5SRASyFLFbp80CJhx5hgdVXDLM",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The current price of Microsoft stock is $328.98 USD. Over the past 6 months, the stock has performed with a 7.63% increase in price."
      }
    }
  ],
  "object": "chat.completion"
}

The current price of Microsoft stock is $328.98 USD. Over the past 6 months, the stock has performed with a 7.63% increase in price.
```

Is this a bug in using the langchain OPENAI_FUNCTIONS agent?
Bug when using OPENAI_FUNCTIONS agent
https://api.github.com/repos/langchain-ai/langchain/issues/12183/comments
2
2023-10-24T03:10:22Z
2024-04-01T16:05:25Z
https://github.com/langchain-ai/langchain/issues/12183
1,958,394,412
12,183
[ "langchain-ai", "langchain" ]
### System Info Python Version: Python 3.10.4 Langchain Version: 0.0.320 OS: Ubuntu 18.04 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When running the official cookbook code... ```python from pydantic import BaseModel from typing import Any, Optional from unstructured.partition.pdf import partition_pdf # Path to save images path = "./Papers/LLaVA/" # Get elements raw_pdf_elements = partition_pdf(filename=path+"LLaVA.pdf", # Using pdf format to find embedded image blocks extract_images_in_pdf=True, # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles # Titles are any sub-section of the document infer_table_structure=True, # Post processing to aggregate text once we have the title chunking_strategy="by_title", # Chunking params to aggregate text blocks # Attempt to create a new chunk 3800 chars # Attempt to keep chunks > 2000 chars # Hard max on chunks max_characters=4000, new_after_n_chars=3800, combine_text_under_n_chars=2000, image_output_dir_path=path ) # Create a dictionary to store counts of each type category_counts = {} for element in raw_pdf_elements: category = str(type(element)) if category in category_counts: category_counts[category] += 1 else: category_counts[category] = 1 # Unique_categories will have unique elements # TableChunk if Table > max chars set above unique_categories = set(category_counts.keys()) category_counts class Element(BaseModel): type: str text: Any # Categorize by type categorized_elements = [] for element in raw_pdf_elements: if "unstructured.documents.elements.Table" in str(type(element)): 
categorized_elements.append(Element(type="table", text=str(element))) elif "unstructured.documents.elements.CompositeElement" in str(type(element)): categorized_elements.append(Element(type="text", text=str(element))) # Tables table_elements = [e for e in categorized_elements if e.type == "table"] print(len(table_elements)) # Text text_elements = [e for e in categorized_elements if e.type == "text"] print(len(text_elements)) from langchain.chat_models import ChatOllama from langchain.prompts import ChatPromptTemplate from langchain.schema.output_parser import StrOutputParser # Prompt prompt_text="""You are an assistant tasked with summarizing tables and text. \ Give a concise summary of the table or text. Table or text chunk: {element} """ prompt = ChatPromptTemplate.from_template(prompt_text) # Summary chain model = ChatOllama(model="llama2:13b-chat") summarize_chain = {"element": lambda x:x} | prompt | model | StrOutputParser() # Apply to text texts = [i.text for i in text_elements if i.text != ""] text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5}) ``` The following error was returned: ```bash --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) File ~/anaconda3/envs/langchain/lib/python3.8/site-packages/requests/models.py:971, in Response.json(self, **kwargs) 970 try: --> 971 return complexjson.loads(self.text, **kwargs) 972 except JSONDecodeError as e: 973 # Catch JSON-related errors and raise as requests.JSONDecodeError 974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError File ~/anaconda3/envs/langchain/lib/python3.8/json/__init__.py:357, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 354 if (cls is None and object_hook is None and 355 parse_int is None and parse_float is None and 356 parse_constant is None and object_pairs_hook is None and not kw): --> 357 return _default_decoder.decode(s) 358 if cls is 
None: File ~/anaconda3/envs/langchain/lib/python3.8/json/decoder.py:337, in JSONDecoder.decode(self, s, _w) 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() File ~/anaconda3/envs/langchain/lib/python3.8/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx) 354 except StopIteration as err: --> 355 raise JSONDecodeError("Expecting value", s, err.value) from None 356 return obj, end JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: JSONDecodeError Traceback (most recent call last) Cell In[6], line 3 1 # Apply to text 2 texts = [i.text for i in text_elements if i.text != ""] ----> 3 text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5}) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:1287, in RunnableSequence.batch(self, inputs, config, return_exceptions, **kwargs) 1285 else: 1286 for i, step in enumerate(self.steps): -> 1287 inputs = step.batch( 1288 inputs, 1289 [ 1290 # each step a child run of the corresponding root run 1291 patch_config( 1292 config, callbacks=rm.get_child(f"seq:step:{i+1}") 1293 ) 1294 for rm, config in zip(run_managers, configs) 1295 ], 1296 ) 1298 # finish the root runs 1299 except BaseException as e: File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:323, in Runnable.batch(self, inputs, config, return_exceptions, **kwargs) 320 return cast(List[Output], [invoke(inputs[0], configs[0])]) 322 with get_executor_for_config(configs[0]) as executor: --> 323 return cast(List[Output], list(executor.map(invoke, inputs, configs))) File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:619, in Executor.map.<locals>.result_iterator() 616 while fs: 617 # Careful not to keep a reference to the popped future 618 if timeout is 
None: --> 619 yield fs.pop().result() 620 else: 621 yield fs.pop().result(end_time - time.monotonic()) File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:444, in Future.result(self, timeout) 442 raise CancelledError() 443 elif self._state == FINISHED: --> 444 return self.__get_result() 445 else: 446 raise TimeoutError() File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:389, in Future.__get_result(self) 387 if self._exception: 388 try: --> 389 raise self._exception 390 finally: 391 # Break a reference cycle with the exception in self._exception 392 self = None File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/thread.py:57, in _WorkItem.run(self) 54 return 56 try: ---> 57 result = self.fn(*self.args, **self.kwargs) 58 except BaseException as exc: 59 self.future.set_exception(exc) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:316, in Runnable.batch.<locals>.invoke(input, config) 314 return e 315 else: --> 316 return self.invoke(input, config, **kwargs) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:142, in BaseChatModel.invoke(self, input, config, stop, **kwargs) 131 def invoke( 132 self, 133 input: LanguageModelInput, (...) 137 **kwargs: Any, 138 ) -> BaseMessage: 139 config = config or {} 140 return cast( 141 ChatGeneration, --> 142 self.generate_prompt( 143 [self._convert_input(input)], 144 stop=stop, 145 callbacks=config.get("callbacks"), 146 tags=config.get("tags"), 147 metadata=config.get("metadata"), 148 run_name=config.get("run_name"), 149 **kwargs, 150 ).generations[0][0], 151 ).message File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:459, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs) 451 def generate_prompt( 452 self, 453 prompts: List[PromptValue], (...) 
456 **kwargs: Any, 457 ) -> LLMResult: 458 prompt_messages = [p.to_messages() for p in prompts] --> 459 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:349, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs) 347 if run_managers: 348 run_managers[i].on_llm_error(e) --> 349 raise e 350 flattened_outputs = [ 351 LLMResult(generations=[res.generations], llm_output=res.llm_output) 352 for res in results 353 ] 354 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:339, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs) 336 for i, m in enumerate(messages): 337 try: 338 results.append( --> 339 self._generate_with_cache( 340 m, 341 stop=stop, 342 run_manager=run_managers[i] if run_managers else None, 343 **kwargs, 344 ) 345 ) 346 except BaseException as e: 347 if run_managers: File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:492, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs) 488 raise ValueError( 489 "Asked to cache, but no cache found at `langchain.cache`." 490 ) 491 if new_arg_supported: --> 492 return self._generate( 493 messages, stop=stop, run_manager=run_manager, **kwargs 494 ) 495 else: 496 return self._generate(messages, stop=stop, **kwargs) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/ollama.py:98, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs) 80 """Call out to Ollama's generate endpoint. 81 82 Args: (...) 
94 ]) 95 """ 97 prompt = self._format_messages_as_text(messages) ---> 98 final_chunk = super()._stream_with_aggregation( 99 prompt, stop=stop, run_manager=run_manager, verbose=self.verbose, **kwargs 100 ) 101 chat_generation = ChatGeneration( 102 message=AIMessage(content=final_chunk.text), 103 generation_info=final_chunk.generation_info, 104 ) 105 return ChatResult(generations=[chat_generation]) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/llms/ollama.py:156, in _OllamaCommon._stream_with_aggregation(self, prompt, stop, run_manager, verbose, **kwargs) 147 def _stream_with_aggregation( 148 self, 149 prompt: str, (...) 153 **kwargs: Any, 154 ) -> GenerationChunk: 155 final_chunk: Optional[GenerationChunk] = None --> 156 for stream_resp in self._create_stream(prompt, stop, **kwargs): 157 if stream_resp: 158 chunk = _stream_response_to_generation_chunk(stream_resp) File ~/chatpdf-langchain/langchain/libs/langchain/langchain/llms/ollama.py:140, in _OllamaCommon._create_stream(self, prompt, stop, **kwargs) 138 response.encoding = "utf-8" 139 if response.status_code != 200: --> 140 optional_detail = response.json().get("error") 141 raise ValueError( 142 f"Ollama call failed with status code {response.status_code}." 
143 f" Details: {optional_detail}" 144 ) 145 return response.iter_lines(decode_unicode=True) File ~/anaconda3/envs/langchain/lib/python3.8/site-packages/requests/models.py:975, in Response.json(self, **kwargs) 971 return complexjson.loads(self.text, **kwargs) 972 except JSONDecodeError as e: 973 # Catch JSON-related errors and raise as requests.JSONDecodeError 974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError --> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` After attempting: ```python # Summary chain model = ChatOllama(model="llama2:13b-chat") summarize_chain = {"element": lambda x:x} | prompt | model texts = [i.text for i in text_elements if i.text != ""] text_summaries = summarize_chain.batch(texts) ``` Returned the same error as before. It seems that the error is occurring in the ‘ChatOllama model’. ### Expected behavior Attempting to reproduce the effects in the Cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb.
JSONDecodeError on Cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb
https://api.github.com/repos/langchain-ai/langchain/issues/12180/comments
7
2023-10-24T02:23:12Z
2024-01-22T19:18:52Z
https://github.com/langchain-ai/langchain/issues/12180
1,958,354,853
12,180
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Hello Team,

We have a metadata field in our OpenSearch index which I would like to use to filter the passages before sending them to the LLM to generate the answer. I tried many options suggested in chat.langchain.com but none of them seems to work.

Here is an example: I have a user with associated groups `['user_group1', 'user_group2']` and the OpenSearch index with data like:

```python
{
    'text': 'Page Content......',
    'metadata': {
        'source_acl': [
            {'group_name': 'user_group1', 'group_type': 'type1'},
            {},
            {}
        ]
    }
}
```

So far I have been trying this:

```python
chain = ConversationalRetrievalChain.from_llm(
    llm=SagemakerEndpoint(
        endpoint_name=constants.EMBEDDING_ENDPOINT_ARN_ALPHA,
        region_name=constants.REGION,
        content_handler=self.content_handler_llm,
        client=self.sg_client
    ),
    # for experimentation, you can change number of documents to retrieve here
    retriever=opensearch_vectorstore.as_retriever(
        search_kwargs={'k': 3, 'filter': 'source_acl.group_name:user_group1'}
    ),
    memory=memory,
    condense_question_prompt=custom_question_prompt,
    return_source_documents=True
)
```

Am I missing anything here? Thanks in advance.

### Suggestion:
_No response_
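One likely problem in the snippet above is that `filter` is given a query-string, while OpenSearch filters in `search_kwargs` generally expect a query-DSL dict. A hedged sketch of building such a clause for the `source_acl` metadata — field names mirror the example and are assumptions; if `source_acl` is indexed with a `nested` mapping, a `nested` query (and possibly a `.keyword` sub-field for `term`) would be needed instead:

```python
def build_group_filter(group_names):
    """Build an OpenSearch bool filter matching any of the user's groups."""
    return {
        "bool": {
            "should": [
                {"term": {"metadata.source_acl.group_name": g}}
                for g in group_names
            ],
            "minimum_should_match": 1,
        }
    }


acl_filter = build_group_filter(["user_group1", "user_group2"])
# Hypothetical usage with the retriever from the issue:
# retriever = opensearch_vectorstore.as_retriever(
#     search_kwargs={"k": 3, "filter": acl_filter}
# )
```

The dict is plain JSON-serializable data, so it can be unit-tested and logged independently of the retriever call.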
Issue: <Not able to filter OpenSearch vectorstore using Filter in search_kwargs>
https://api.github.com/repos/langchain-ai/langchain/issues/12179/comments
2
2023-10-24T01:09:55Z
2024-02-07T16:15:18Z
https://github.com/langchain-ai/langchain/issues/12179
1,958,283,958
12,179
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

When I try to run this script:

```python
class WebsiteSummary:
    def _SummorizeWebsite(objective, websiteContent):
        my_gpt_instance = MyGPT4ALL()
        textSplitter = RecursiveCharacterTextSplitter(
            separators=["\n\n", "\n"], chunk_size=10000, chunk_overlap=500
        )
        docs = textSplitter.create_documents([websiteContent])
        print(docs)
        mapPrompt = """"
        write a summary of the following {objective}:
        {text}
        SUMMARY:
        """
        map_prompt_template = PromptTemplate(
            template=mapPrompt, input_variables=["text", "objective"]
        )
        chain = load_summarize_chain(llm=my_gpt_instance,
                                     chain_type="map_reduce",
                                     verbose=True,
                                     map_prompt=objective,
                                     combine_prompt=map_prompt_template)
        output = summary_Chain.run(input_documents=docs, objective=objective)
        return output
```

I get this issue:

```
Exception has occurred: ValidationError
1 validation error for LLMChain
prompt
  value is not a valid dict (type=type_error.dict)
  File "E:\StormboxBrain\langchain\websiteSummary.py", line 27, in _SummorizeWebsite
    chain = load_summarize_chain(llm=my_gpt_instance,
  File "E:\StormboxBrain\langchain\app.py", line 12, in <module>
    summary = websiteSummary._SummorizeWebsite(objective, data)
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
```

My model does load and can create prompt responses (`mistral-7b-openorca.Q4_0.gguf`).

This is my `MyGPT4ALL` class:

```python
import os
from pydantic import Field
from typing import List, Mapping, Optional, Any
from langchain.llms.base import LLM
from gpt4all import GPT4All, pyllmodel


class MyGPT4ALL(LLM):
    """
    A custom LLM class that integrates gpt4all models

    Arguments:
        model_folder_path: (str) Folder path where the model lies
        model_name: (str) The name of the model to use (<model name>.gguf)
        backend: (str) The backend of the model (Supported backends: llama/gptj)
        n_threads: (str) The number of threads to use
        n_predict: (str) The maximum numbers of tokens to generate
        temp: (str) Temperature to use for sampling
        top_p: (float) The top-p value to use for sampling
        top_k: (float) The top k values use for sampling
        n_batch: (int) Batch size for prompt processing
        repeat_last_n: (int) Last n number of tokens to penalize
        repeat_penalty: (float) The penalty to apply repeated tokens
    """

    model_folder_path: str = "./models/"
    model_name: str = "mistral-7b-openorca.Q4_0.gguf"
    backend: Optional[str] = "llama"
    temp: Optional[float] = 0.7
    top_p: Optional[float] = 0.1
    top_k: Optional[int] = 40
    n_batch: Optional[int] = 8
    n_threads: Optional[int] = 4
    n_predict: Optional[int] = 256
    max_tokens: Optional[int] = 200
    repeat_last_n: Optional[int] = 64
    repeat_penalty: Optional[float] = 1.18

    # initialize the model
    gpt4_model_instance: Any = None

    def __init__(self, **kwargs):
        super(MyGPT4ALL, self).__init__()
        print("init " + self.model_folder_path + self.model_name)
        self.gpt4_model_instance = GPT4All(
            model_name=self.model_name,
            model_path=self.model_folder_path,
            device="cpu",
            verbose=True,
        )
        print("initiolized " + self.model_folder_path + self.model_name)

    @property
    def _get_model_default_parameters(self):
        print("get default params")
        return {
            "max_tokens": self.max_tokens,
            "n_predict": self.n_predict,
            "top_k": self.top_k,
            "top_p": self.top_p,
            "temp": self.temp,
            "n_batch": self.n_batch,
            "repeat_penalty": self.repeat_penalty,
            "repeat_last_n": self.repeat_last_n,
        }

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        print(" Get all the identifying parameters")
        return {
            "model_name": self.model_name,
            "model_path": self.model_folder_path,
            "model_parameters": self._get_model_default_parameters,
        }

    @property
    def _llm_type(self) -> str:
        return "llama"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
        """
        Args:
            prompt: The prompt to pass into the model.
            stop: A list of strings to stop generation when encountered

        Returns:
            The string generated by the model
        """
        params = {**self._get_model_default_parameters, **kwargs}
        print("params:")
        for key, value in params.items():
            print(f"{key}: {value}")
        print("params " + params.values())
        return "initiolized"
```

### Suggestion:
Is there any documentation on how to use another LLM as the base model instead of GPT?
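One likely cause of the `value is not a valid dict` error above is that `map_prompt=objective` passes a plain string where `load_summarize_chain` expects a prompt template object (note that `combine_prompt` correctly receives `map_prompt_template`). The type mismatch can be reproduced with tiny stand-ins — these classes are illustrative only, not LangChain itself:

```python
class PromptTemplate:
    """Minimal stand-in for langchain's PromptTemplate, for illustration only."""
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables


def load_summarize_chain_stub(map_prompt, combine_prompt):
    """Stand-in mimicking the validation: both prompts must be template objects."""
    for name, value in [("map_prompt", map_prompt), ("combine_prompt", combine_prompt)]:
        if not isinstance(value, PromptTemplate):
            raise TypeError(f"{name} must be a PromptTemplate, got {type(value).__name__}")
    return "chain-ok"


template = PromptTemplate("Summarize for {objective}:\n{text}\nSUMMARY:",
                          ["text", "objective"])

# Passing a raw string (like map_prompt=objective in the snippet above) fails:
try:
    load_summarize_chain_stub(map_prompt="summarize travel tips", combine_prompt=template)
    failed = False
except TypeError:
    failed = True

# Passing template objects for both prompts succeeds:
result = load_summarize_chain_stub(map_prompt=template, combine_prompt=template)
```

In the real script, the fix would presumably be passing `map_prompt=map_prompt_template` (or a second `PromptTemplate`) rather than the `objective` string.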
value is not a valid dict (type=type_error.dict)
https://api.github.com/repos/langchain-ai/langchain/issues/12176/comments
1
2023-10-23T23:15:43Z
2023-10-29T13:51:16Z
https://github.com/langchain-ai/langchain/issues/12176
1,958,182,263
12,176
[ "langchain-ai", "langchain" ]
### Feature request Is there a way to integrate `RetryOutputParser` with `ConversationalRetrievalChain`? Specifically, if I use OpenAI's function calling to generate the response in a specific format, the LLM model does not always follow the instructions and then it would fail the Pydantic model validation. Therefore, it's important to build the retry mechanism for the `LLMChain` (aka, `doc_chain` argument). ### Motivation Specifically, if I use OpenAI's function calling to generate the response in a specific format, the LLM model does not always follow the instructions and then it would fail the Pydantic model validation. Therefore, it's important to build the retry mechanism for the `LLMChain` (aka, `doc_chain` argument). ### Your contribution TBD
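Until such integration exists, one hedged workaround is to wrap the parsing step yourself: run the chain, attempt the Pydantic parse, and re-ask on failure. The sketch below uses stand-in callables so it runs standalone — in real code `run_chain` would invoke the `ConversationalRetrievalChain` and `parse` would be the Pydantic/`RetryOutputParser` step:

```python
import json


def parse_with_retry(run_chain, parse, max_retries=2):
    """Call run_chain(), try parse(output); on failure, retry up to max_retries.

    run_chain: zero-arg callable returning the raw LLM output.
    parse: callable that raises on malformed output
           (Pydantic's ValidationError subclasses ValueError).
    """
    last_error = None
    for _attempt in range(max_retries + 1):
        raw = run_chain()
        try:
            return parse(raw)
        except ValueError as exc:
            last_error = exc
    raise last_error


# Simulated model that misbehaves on the first call, then complies:
outputs = iter(['not json at all', '{"answer": "42"}'])
result = parse_with_retry(lambda: next(outputs), json.loads)
```

A `RetryOutputParser`-style variant would pass the failed output and the original prompt back into the model inside the `except` branch instead of simply re-running the chain.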
Integrate RetryOutputParser with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/12175/comments
8
2023-10-23T21:40:52Z
2024-02-14T16:09:08Z
https://github.com/langchain-ai/langchain/issues/12175
1,958,069,230
12,175
[ "langchain-ai", "langchain" ]
### Feature request The idea is simple. When using various models users have rate limits, both in requests per min/hour and Tokens. When using the raw OpenAI SDK I used the code below. It just uses the "usage" metrics provided in the response (as of now I dont know how to retrieve a 'raw response' when utilizing langchain queries' ### Motivation This was inspired by the experience of trying to use the GPT api to generate documentaion for a long list of functions of varying length. While trying to run these requests en mass I repeatedly had my process error out because of RATE LIMIT Errors. I dont think the idea of needing to run queries asynchronously against OpenAI API is a common enough use case (though strongely enough not natively supported by the Python SDK) ### Your contribution I would be Honored!!!! To contribute this to the project but I honestly would need some help figuring out how to integrate it. This is the base Idea though, Feel free to reachout to collab! codeblackwell@gmail.com or @codeblackwell anywhere. (apologies for the formatting. It just wouldnt work with me). 
```python
class TokenManager:
    def __init__(self, max_tokens_per_minute=10000):
        self.max_tokens_per_minute = max_tokens_per_minute
        self.tokens_used_in_last_minute = 0
        self.last_request_time = time.time()

    def tokens_available(self):
        current_time = time.time()
        time_since_last_request = current_time - self.last_request_time
        if time_since_last_request >= 60:
            self.tokens_used_in_last_minute = 0
            self.last_request_time = current_time
        return self.max_tokens_per_minute - self.tokens_used_in_last_minute

    def update_tokens_used(self, tokens_used):
        if not isinstance(tokens_used, int) or tokens_used < 0:
            raise ValueError("Invalid token count")
        self.tokens_used_in_last_minute += tokens_used

    def get_usage_stats(self):
        current_time = time.time()
        time_since_last_request = current_time - self.last_request_time
        if time_since_last_request >= 60:
            self.tokens_used_in_last_minute = 0
            self.last_request_time = current_time
        absolute_usage = self.tokens_used_in_last_minute
        percent_usage = (self.tokens_used_in_last_minute / self.max_tokens_per_minute) * 100
        return absolute_usage, percent_usage


token_manager = TokenManager()


def fetch_response(code_snippet, system_prompt, token_manager):
    """
    Fetches a response using the OpenAI API.

    Parameters:
        code_snippet (str): The code snippet to be processed.
        system_prompt (str): The system prompt.

    Returns:
        dict: The response from the API.
    """
    while token_manager.tokens_available() < 2500:  # Assume a minimum of 10 tokens per request
        time.sleep(1)
    try:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": code_snippet}
            ]
        )
        tokens_used = response['usage']['total_tokens']
        token_manager.update_tokens_used(tokens_used)
        return response
    except Exception as e:
        print(f"An error occurred: {str(e)}")


def main(df, system_prompt):
    """
    Processes a DataFrame of code snippets.

    Parameters:
        df (pd.DataFrame): The DataFrame containing code snippets.
        system_prompt (str): The system prompt.

    Returns:
        pd.DataFrame: The DataFrame with the added output column.
    """
    responses = [fetch_response(row['code'], system_prompt) for _, row in df.iterrows()]
    df['output'] = responses
    return df


# Run the main function
new_df = asyncio.run(main(filtered_df, system_prompt_1))
```
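As a possible refinement of the fixed-window `TokenManager` above, a sliding-window variant avoids the usage counter snapping to zero at each minute boundary. This is a hedged sketch with an injectable clock so it can be exercised without real waiting — class and parameter names are illustrative, not part of any library:

```python
import time
from collections import deque


class SlidingWindowTokenManager:
    def __init__(self, max_tokens_per_minute=10000, clock=None):
        self.max_tokens_per_minute = max_tokens_per_minute
        self.clock = clock or time.time
        self.events = deque()  # (timestamp, tokens) pairs within the last 60s

    def _prune(self):
        cutoff = self.clock() - 60
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()

    def tokens_available(self):
        self._prune()
        used = sum(tokens for _, tokens in self.events)
        return self.max_tokens_per_minute - used

    def update_tokens_used(self, tokens_used):
        if not isinstance(tokens_used, int) or tokens_used < 0:
            raise ValueError("Invalid token count")
        self.events.append((self.clock(), tokens_used))


# Exercise with a fake clock instead of sleeping:
now = [0.0]
tm = SlidingWindowTokenManager(max_tokens_per_minute=1000, clock=lambda: now[0])
tm.update_tokens_used(800)
available_before = tm.tokens_available()  # usage still inside the 60s window
now[0] += 61                              # advance past the window
available_after = tm.tokens_available()   # usage has expired
```

The injectable `clock` also makes the limiter unit-testable, which the wall-clock version above is not.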
Token Usage Management System
https://api.github.com/repos/langchain-ai/langchain/issues/12172/comments
3
2023-10-23T20:36:34Z
2024-03-11T11:55:29Z
https://github.com/langchain-ai/langchain/issues/12172
1,957,982,299
12,172
[ "langchain-ai", "langchain" ]
### System Info I am trying to setup the local development environment with `poetry install --with test` but I am running into the error `Group(s) not found: test (via --with)` ``` > origin/master:master via 🐍 v3.11.6 > 57% langchain ❯ poetry install --with test Group(s) not found: test (via --with) ``` Could you advise on what might be happening here? ### Who can help? @eyurtsev, I think you might be able to advise here? 🙏🏼 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction - Clone repository - With poetry installed, run `poetry install --with test` ### Expected behavior Local development setup works without an error.
`poetry install --with test`: Group(s) not found: test (via --with)
https://api.github.com/repos/langchain-ai/langchain/issues/12170/comments
3
2023-10-23T19:28:02Z
2023-10-23T19:46:02Z
https://github.com/langchain-ai/langchain/issues/12170
1,957,880,716
12,170
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. How can I see the context that was used to answer a specific question? from langchain.memory import ConversationBufferWindowMemory memory = ConversationBufferWindowMemory(k=3, memory_key="chat_history", return_messages=True) qa = ConversationalRetrievalChain.from_llm(llm_model, vector_store.as_retriever(search_type='similarity', search_kwargs={'k': 2}), memory=memory) query = "Can I as a manager fly business class?" ##predict with the model text = f"""[INST] <<SYS>> You are a HR-assistant that answers questions: <</SYS>> {query} [/INST] """ result = qa({"question": text}) result ### Suggestion: _No response_
Issue: View context used in query in ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/12167/comments
3
2023-10-23T18:10:39Z
2024-02-10T16:10:57Z
https://github.com/langchain-ai/langchain/issues/12167
1,957,733,689
12,167
[ "langchain-ai", "langchain" ]
### System Info

`make lint` errors:

```
make lint
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run black . --check
All done! ✨ 🍰 ✨
1866 files would be left unchanged.
[ "." = "" ] || poetry run mypy .
langchain/document_loaders/parsers/pdf.py:95: error: "PdfObject" has no attribute "keys"  [attr-defined]
langchain/document_loaders/parsers/pdf.py:98: error: Value of type "PdfObject" is not indexable  [index]
Found 2 errors in 1 file (checked 1863 source files)
make: *** [lint] Error 1
```

For line 95:

```python
if not self.extract_images or "/XObject" not in page["/Resources"].keys():
```

and line 98:

```python
xObject = page["/Resources"]["/XObject"].get_object()
```

on master 7c4f340cc, v0.320. Commented with `# type: ignore` for my PR, but keeping here as reference to fix.

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
make lint
```

### Expected behavior
no mypy error
mypy linting error in PyPDFParser with pypdf 3.16.4
https://api.github.com/repos/langchain-ai/langchain/issues/12166/comments
4
2023-10-23T17:34:36Z
2024-02-01T13:46:22Z
https://github.com/langchain-ai/langchain/issues/12166
1,957,670,819
12,166
[ "langchain-ai", "langchain" ]
Updated: 2023-12-06

Hello everyone! Thank you all for your contributions! We've made a lot of progress with SecretStrs in the code base.

First time contributors -- hope you had fun learning how to work in the code base and thanks for putting in the time.

All contributors -- thanks for all your efforts in improving LangChain. We'll create a new first time issue in a few months.

------

Hello LangChain community,

We're always happy to see more folks getting involved in contributing to the LangChain codebase. This is a good first issue if you want to learn more about how to set up for development in the LangChain codebase.

## Goal

Your contribution will make it safer to print out a LangChain object without having any secrets included in raw format in the string representation.

## Set up for development

Prior to making any changes in the code: https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md

Make sure you're able to test, format, lint from `langchain/libs/langchain`:

```sh
make test
```

```sh
make format
```

```sh
make lint
```

## Shall you accept

Shall you accept this challenge, please claim one (and only one) of the modules from the list below as one that you will be working on, and respond to this issue. Once you've made the required code changes, open a PR and link to this issue.

## Acceptance Criteria

- Invoking str or repr on the object does not show the secret key.

Integration test for the code updated to include tests that:

- confirms the object can be initialized with an API key provided via the initializer
- confirms the object can be initialized with an API key provided via an env variable

Confirm that it works:

- either re-run notebook for the given object or else add an appropriate test that confirms that the actual secret is used appropriately (i.e., `.get_secret_value()`)

**If your code does not use `get_secret_value()` somewhere, then it probably contains a bug!**

## Modules

- [ ] langchain/chat_models/anyscale.py @aidoskanapyanov
- [ ] langchain/chat_models/azure_openai.py @onesolpark
- [x] langchain/chat_models/azureml_endpoint.py @fyasla
- [ ] langchain/chat_models/baichuan.py
- [ ] langchain/chat_models/everlyai.py @sfreisthler
- [x] langchain/chat_models/fireworks.py @nepalprabin
- [ ] langchain/chat_models/google_palm.py @faisalt14
- [ ] langchain/chat_models/javelin_ai_gateway.py
- [ ] langchain/chat_models/jinachat.py
- [ ] langchain/chat_models/konko.py
- [ ] langchain/chat_models/litellm.py
- [ ] langchain/chat_models/openai.py @AnasKhan0607
- [ ] langchain/chat_models/tongyi.py
- [ ] langchain/llms/ai21.py
- [x] langchain/llms/aleph_alpha.py @slangenbach
- [ ] langchain/llms/anthropic.py
- [ ] langchain/llms/anyscale.py @aidoskanapyanov
- [ ] langchain/llms/arcee.py
- [x] langchain/llms/azureml_endpoint.py
- [ ] langchain/llms/bananadev.py
- [ ] langchain/llms/cerebriumai.py
- [ ] langchain/llms/cohere.py @arunsathiya
- [ ] langchain/llms/edenai.py @kristinspenc
- [ ] langchain/llms/fireworks.py
- [ ] langchain/llms/forefrontai.py
- [ ] langchain/llms/google_palm.py @Harshil-Patel28
- [ ] langchain/llms/gooseai.py
- [ ] langchain/llms/javelin_ai_gateway.py
- [ ] langchain/llms/minimax.py
- [ ] langchain/llms/nlpcloud.py
- [ ] langchain/llms/openai.py @HassanA01
- [ ] langchain/llms/petals.py @akshatvishu
- [ ] langchain/llms/pipelineai.py
- [ ] langchain/llms/predibase.py
- [ ] langchain/llms/stochasticai.py
- [x] langchain/llms/symblai_nebula.py @praveenv
- [ ] langchain/llms/together.py
- [ ] langchain/llms/tongyi.py
- [ ] langchain/llms/writer.py @ommirzaei
- [ ] langchain/llms/yandex.py

### Motivation
Prevent secrets from being printed out when printing the given langchain object.

### Your contribution
Please sign up by responding to this issue and including the name of the module.
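The pattern the acceptance criteria describe can be illustrated without any dependencies: the wrapper hides the secret from `str`/`repr`, and only an explicit `get_secret_value()` call reveals it. This is a minimal stand-in for `pydantic.SecretStr`, shown only to make the contract concrete:

```python
class MaskedSecret:
    """Minimal stand-in for pydantic's SecretStr (illustration only)."""

    def __init__(self, value):
        self._value = value

    def get_secret_value(self):
        # The only sanctioned way to read the raw secret, e.g. when
        # actually building the API request headers.
        return self._value

    def __repr__(self):
        return "SecretStr('**********')"

    __str__ = __repr__


api_key = MaskedSecret("sk-super-secret")
shown = f"api_key={api_key}"        # safe to print or log
real = api_key.get_secret_value()   # used only at the API call site
```

This is also why the issue warns that code without a `get_secret_value()` call probably has a bug: without it, the masked placeholder string would be sent to the provider instead of the key.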
For New Contributors: Use SecretStr for api_keys
https://api.github.com/repos/langchain-ai/langchain/issues/12165/comments
56
2023-10-23T17:29:51Z
2024-01-07T21:13:38Z
https://github.com/langchain-ai/langchain/issues/12165
1,957,660,279
12,165
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hello team! I am new to the OpenAPI Specs Toolkit. I am trying to build an OpenAPI Agent Planner for the Microsoft Graph API. The OpenAPI specs can be found here: https://github.com/microsoftgraph/msgraph-metadata/blob/master/openapi/v1.0/openapi.yaml . I downloaded the YAML file and followed this notebook: https://python.langchain.com/docs/integrations/toolkits/openapi#1st-example-hierarchical-planning-agent . ```python import os import yaml from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec current_directory = os.path.abspath('') data_directory = os.path.join(current_directory, "data") msgraph_api_file = os.path.join(data_directory, "msgraph-openapi.yaml") raw_msgraph_api_spec = yaml.load(open(msgraph_api_file,encoding='utf-8').read(), Loader=yaml.Loader) msgraph_api_spec = reduce_openapi_spec(raw_msgraph_api_spec) ``` Does anyone know how to handle large OpenAPI Specs? I ran the following to read my OpenAPI YAML spec and when using the `reduce_openapi_spec` module, I get the following error below: `RecursionError: maximum recursion depth exceeded while calling a Python object` Is there a setting I need to change in LangChain? Please and thank you for your help in advance. I believe this issue is before I run into the following token limit, right? https://github.com/langchain-ai/langchain/issues/2786 ### Suggestion: _No response_
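Two hedged workarounds for the recursion error: raise Python's recursion limit (quick, but it only moves the ceiling and costs stack memory on a spec this deeply self-referential), or process the structure iteratively with an explicit stack, which never hits the interpreter limit. The helper below is illustrative, not part of LangChain, and assumes the resolved data is acyclic — cyclic `$ref` resolution would additionally need a visited set:

```python
def max_depth_iterative(obj):
    """Compute nesting depth of a dict/list structure without recursion."""
    stack = [(obj, 1)]
    deepest = 0
    while stack:
        node, depth = stack.pop()
        deepest = max(deepest, depth)
        if isinstance(node, dict):
            stack.extend((v, depth + 1) for v in node.values())
        elif isinstance(node, list):
            stack.extend((v, depth + 1) for v in node)
    return deepest


# Build a structure far deeper than the default ~1000-frame limit:
deep = {}
node = deep
for _ in range(5000):
    node["child"] = {}
    node = node["child"]

depth = max_depth_iterative(deep)

# The blunt alternative before calling reduce_openapi_spec (use with care):
# import sys; sys.setrecursionlimit(20000)
```

Even once `reduce_openapi_spec` succeeds, a spec the size of Microsoft Graph will likely still hit model token limits (the linked issue #2786), so filtering the spec down to the needed paths first is probably also necessary.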
OpenAPI Spec - reduce_openapi_spec - maximum recursion depth exceeded
https://api.github.com/repos/langchain-ai/langchain/issues/12163/comments
6
2023-10-23T16:23:35Z
2024-06-12T16:07:06Z
https://github.com/langchain-ai/langchain/issues/12163
1,957,544,509
12,163
[ "langchain-ai", "langchain" ]
There's an issue when using JsonSpec that it can't handle the case where it indexes a list with `0` because in [line 52](https://github.com/langchain-ai/langchain/blob/d0505c0d4755d8cbe62a8ddee68f53a5cb330892/libs/langchain/langchain/tools/json/tool.py#L52) it checks `if i` and since `i==0` it returns `False`. Instead, I believe this should check for purely existence of `i` as a string or int.
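The falsy-zero pitfall can be shown in isolation: `if i:` is `False` for index `0`, while an explicit `is not None` check (or a type check, since a key may legitimately be the int `0` or a string) accepts it. A minimal repro of the proposed fix — the function names are hypothetical, mirroring the check in `tool.py` rather than quoting it:

```python
def accepts_buggy(i):
    # mirrors the original check: `if i:` rejects a falsy but valid index
    return bool(i)


def accepts_fixed(i):
    # existence check: 0 is a perfectly valid list index
    return i is not None


indices = [0, 1, None]
buggy = [accepts_buggy(i) for i in indices]
fixed = [accepts_fixed(i) for i in indices]
```

The same pattern applies to empty strings if string keys are allowed, so depending on intent the fixed check might instead be `isinstance(i, (int, str))`.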
JsonSpec `keys` method doesn't handle indexing lists properly
https://api.github.com/repos/langchain-ai/langchain/issues/12160/comments
1
2023-10-23T15:41:45Z
2024-02-06T16:16:01Z
https://github.com/langchain-ai/langchain/issues/12160
1,957,467,852
12,160