issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
- LangChain: 0.0.353
- System: Ubuntu 22.04
- Python: 3.10.12
### Information
I ran the code from the agent quickstart section of the [documentation](https://python.langchain.com/docs/get_started/quickstart#agent):
```python
from langchain.chat_models import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
However, the Python interpreter told me:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_665358/3241410387.py in <module>
7 prompt = hub.pull("hwchase17/openai-functions-agent")
8 llm = ChatOpenAI(openai_api_key=openai_api_key, model="gpt-3.5-turbo", temperature=0)
----> 9 agent = create_openai_functions_agent(llm=llm, tools=tools, prompt=prompt)
10 agent_executor = AgentExecutor(agent, tools, verbose=True)
~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py in create_openai_functions_agent(llm, tools, prompt)
285 )
286 llm_with_tools = llm.bind(
--> 287 functions=[format_tool_to_openai_function(t) for t in tools]
288 )
289 agent = (
~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py in <listcomp>(.0)
285 )
286 llm_with_tools = llm.bind(
--> 287 functions=[format_tool_to_openai_function(t) for t in tools]
288 )
289 agent = (
~/.local/lib/python3.10/site-packages/langchain_community/tools/convert_to_openai.py in format_tool_to_openai_function(tool)
10 def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
11 """Format tool into the OpenAI function API."""
---> 12 if tool.args_schema:
13 return convert_pydantic_to_openai_function(
14 tool.args_schema, name=tool.name, description=tool.description
AttributeError: 'VectorStoreRetriever' object has no attribute 'args_schema'
```
It seems that some packages have a version incompatibility.
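For what it's worth, the traceback suggests that `tools` contains the raw `VectorStoreRetriever` itself rather than a tool object, which is why `format_tool_to_openai_function` finds no `args_schema`. A minimal sketch of wrapping the retriever as a tool before building the agent (the retriever variable, tool name and description below are hypothetical):
```python
from langchain.tools.retriever import create_retriever_tool

# Hypothetical: `retriever` is the VectorStoreRetriever built earlier in the notebook.
retriever_tool = create_retriever_tool(
    retriever,
    "docs_search",  # hypothetical tool name
    "Search the indexed documents for relevant passages.",  # hypothetical description
)
tools = [retriever_tool]  # pass these tools to create_openai_functions_agent
```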
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Directly run the following code:
```
from langchain.chat_models import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
### Expected behavior
It should run successfully without any errors. | AttributeError: 'VectorStoreRetriever' object has no attribute 'args_schema' | https://api.github.com/repos/langchain-ai/langchain/issues/15359/comments | 2 | 2023-12-31T15:17:25Z | 2024-04-10T16:15:34Z | https://github.com/langchain-ai/langchain/issues/15359 | 2,061,090,976 | 15,359 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.353
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models.openai import ChatOpenAI
from langchain_community.chat_loaders.facebook_messenger import FolderFacebookMessengerChatLoader, SingleFileFacebookMessengerChatLoader
from pathlib import Path
import os
chat_file = Path("data/my-fb-folder/messages/inbox/message-dir/message_1.json")
loader = SingleFileFacebookMessengerChatLoader(chat_file)
loader.load()
```
Stacktrace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[11], line 8
      6 chat_file = Path("data/my-fb-folder/your_activity_across_facebook/messages/inbox/message-dir/message_1.json")
      7 loader = SingleFileFacebookMessengerChatLoader(chat_file)
----> 8 loader.load()

File c:\Users\th4tkh13m\miniconda3\envs\rag\lib\site-packages\langchain_community\chat_loaders\base.py:16, in BaseChatLoader.load(self)
     14 def load(self) -> List[ChatSession]:
     15     """Eagerly load the chat sessions into memory."""
---> 16     return list(self.lazy_load())

File c:\Users\th4tkh13m\miniconda3\envs\rag\lib\site-packages\langchain_community\chat_loaders\facebook_messenger.py:43, in SingleFileFacebookMessengerChatLoader.lazy_load(self)
     39 messages = []
     40 for m in sorted_data:
     41     messages.append(
     42         HumanMessage(
---> 43             content=m["content"], additional_kwargs={"sender": m["sender_name"]}
     44         )
     45     )
     46 yield ChatSession(messages=messages)
KeyError: 'content'
```
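Until the loader tolerates non-text messages, a minimal workaround sketch (assuming a standard Facebook JSON export; the path is illustrative) is to build the chat session by hand and simply skip entries, such as stickers and photos, that carry no `content` key:
```python
import json
from pathlib import Path

from langchain.schema import HumanMessage
from langchain_core.chat_sessions import ChatSession

chat_file = Path("data/my-fb-folder/messages/inbox/message-dir/message_1.json")  # illustrative path

with open(chat_file, encoding="utf-8") as f:
    data = json.load(f)

# Facebook exports list messages newest-first; sort oldest-first by timestamp, as the
# loader does, and keep only entries that actually contain text.
sorted_msgs = sorted(data["messages"], key=lambda m: m["timestamp_ms"])
messages = [
    HumanMessage(content=m["content"], additional_kwargs={"sender": m["sender_name"]})
    for m in sorted_msgs
    if "content" in m
]
session = ChatSession(messages=messages)
```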
### Expected behavior
The chat message should be loaded normally. | SingleFileFacebookMessengerChatLoader fails when the chat contains non-text contents such as stickers and photos. | https://api.github.com/repos/langchain-ai/langchain/issues/15356/comments | 3 | 2023-12-31T09:31:07Z | 2024-01-02T14:36:02Z | https://github.com/langchain-ai/langchain/issues/15356 | 2,061,000,149 | 15,356 |
[
"langchain-ai",
"langchain"
] | ### System Info
azure-search-documents==11.4.0b8
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have my own search index with no `metadata` field.
#### Code
```python
from langchain.vectorstores.azuresearch import AzureSearch
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
semantic_configuration_name="default"
)
query = "How many employees does Contoso Electronics have?"
docs = vector_store.semantic_hybrid_search(
query=query,
search_type="semantic_hybrid",
)
print(docs[0])
```
#### Stack trace
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File \lib\site-packages\langchain_community\vectorstores\azuresearch.py:656, in <listcomp>(.0)
622 semantic_answers_dict[semantic_answer.key] = {
623 "text": semantic_answer.text,
624 "highlights": semantic_answer.highlights,
625 }
626 # Convert results to Document objects
627 docs = [
628 (
629 Document(
630 page_content=result.pop(FIELDS_CONTENT),
631 metadata={
632 **(
633 {FIELDS_ID: result.pop(FIELDS_ID)}
634 if FIELDS_ID in result
635 else {}
636 ),
637 **(
638 json.loads(result[FIELDS_METADATA])
639 if FIELDS_METADATA in result
640 else {
641 k: v
642 for k, v in result.items()
643 if k != FIELDS_CONTENT_VECTOR
644 }
645 ),
646 **{
647 "captions": {
648 "text": result.get("@search.captions", [{}])[0].text,
649 "highlights": result.get("@search.captions", [{}])[
650 0
651 ].highlights,
652 }
653 if result.get("@search.captions")
654 else {},
655 "answers": semantic_answers_dict.get(
--> 656 json.loads(result["metadata"]).get("key"), ""
657 ),
658 },
659 },
660 ),
661 float(result["@search.score"]),
662 float(result["@search.reranker_score"]),
663 )
664 for result in results
665 ]
666 return docs
KeyError: 'metadata'
```
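The failing expression is the unguarded `json.loads(result["metadata"])` used to look up the semantic answer, while the other fields in the same list comprehension are already guarded. A hedged sketch of what such a guard could look like (a standalone illustration, not the library's actual fix; the field names mirror the module's defaults):
```python
import json

FIELDS_ID = "id"              # default value of AZURESEARCH_FIELDS_ID
FIELDS_METADATA = "metadata"  # default value of AZURESEARCH_FIELDS_METADATA


def lookup_semantic_answer(result: dict, semantic_answers_dict: dict):
    """Find the semantic answer for a search hit without assuming a metadata field exists."""
    if FIELDS_METADATA in result:
        key = json.loads(result[FIELDS_METADATA]).get("key")
    else:
        key = result.get(FIELDS_ID)  # fall back to the document key
    return semantic_answers_dict.get(key, "")
```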
### Expected behavior
I get search results from Azure AI Search.
This error is caused by the hardcoding of `metadata` field name, such as `result["metadata"]` in line 656 of `langchain\libs\community\langchain_community\vectorstores\azuresearch.py`. Therefore, performing a search on an Azure AI Search index that does not have this field will fail. | AzureSearch semantic_hybrid_search fails due to hardcoding of metadata fields | https://api.github.com/repos/langchain-ai/langchain/issues/15355/comments | 1 | 2023-12-31T08:43:04Z | 2024-04-07T16:07:34Z | https://github.com/langchain-ai/langchain/issues/15355 | 2,060,988,370 | 15,355 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I enforce output templates in LangChain? For example, I ask the AI to write a joke, but I want strict adherence to a [set-up, punchline] template, so that the result is:
```
Set-up: ...
Punchline: ...
```
and nothing more
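One common way to get this (a minimal sketch; the joke schema mirrors the example in the LangChain output-parser docs) is to attach an output parser whose format instructions constrain the model and whose `parse` step returns the structured fields:
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="the set-up of the joke")
    punchline: str = Field(description="the punchline of the joke")


parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = ChatOpenAI(temperature=0)
joke = parser.parse(llm.predict(prompt.format(query="Tell me a joke.")))
print(f"Set-up: {joke.setup}")
print(f"Punchline: {joke.punchline}")
```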
### Suggestion:
_No response_ | Issue: output templates in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/15350/comments | 1 | 2023-12-31T00:18:00Z | 2024-04-07T16:07:29Z | https://github.com/langchain-ai/langchain/issues/15350 | 2,060,892,236 | 15,350 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.353
Python 3.10.12
System Ubuntu 22.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to modify and run the example on [this page](https://python.langchain.com/docs/use_cases/question_answering/), changing it slightly to use Ollama embeddings instead of the default embedding model.
The last line in the snippet below, which should create the vector store, crashes.
```
import bs4
from langchain import hub
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OllamaEmbeddings
from langchain.schema import StrOutputParser
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain_core.runnables import RunnablePassthrough
embeddings_open = OllamaEmbeddings(model="mistral")
loader = WebBaseLoader(
web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
bs_kwargs=dict(
parse_only=bs4.SoupStrainer(
class_=("post-content", "post-title", "post-header")
)
),
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings_open)
```
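One thing worth checking before the vector-store step: `OllamaEmbeddings` talks to a locally running Ollama server (default `http://localhost:11434`), so every `from_documents` call depends on that server being reachable. A small sketch that makes the endpoint explicit and probes it first (the `base_url` shown is just the default):
```python
from langchain.embeddings import OllamaEmbeddings

embeddings_open = OllamaEmbeddings(base_url="http://localhost:11434", model="mistral")

# This raises a "connection refused" error if `ollama serve` is not running
# (or is listening on a different host/port) before any Chroma work starts.
print(len(embeddings_open.embed_query("ping")))
```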
### Expected behavior
I would expect the code to work, unless I'm missing something important. Instead, I get this error.
Any clues are most appreciated. I'm sure it is something simple I overlooked.
```
>>> vectorstore = Chroma.from_documents(documents=splits, embedding = embeddings_open)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 169, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 756, in urlopen
retries = retries.increment(
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8d11256bc0>: Failed to establish a new connection: [Errno 111] Connection refused'))
``` | Chromadb connection error | https://api.github.com/repos/langchain-ai/langchain/issues/15348/comments | 3 | 2023-12-30T18:38:19Z | 2023-12-31T12:17:59Z | https://github.com/langchain-ai/langchain/issues/15348 | 2,060,823,804 | 15,348 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The [documentation](https://python.langchain.com/docs/use_cases/summarization) describes the different options for summarizing a text, for longer texts the 'map_reduce' option is suggested. It is mentioned further under 'Go deeper' that it is possible to use different LLMs via the `llm` parameter. This seems to work well using the code below with the `chain_type='stuff'` parameter and, in particular, using a local model (in the example below [this model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF) is used).
```
from langchain.document_loaders import PyPDFLoader
from langchain.llms import CTransformers
from langchain.chains.summarize import load_summarize_chain
# load a PDF-file
loader = PyPDFLoader("C:/xyz.pdf")
docs = loader.load()
# use a local LLAMA2 model
llm = CTransformers(model='./models/llama-2-7b-chat.Q5_K_M.gguf', model_type='llama', config={'context_length': 4096, 'max_new_tokens': 256, 'temperature': 0}, local_files_only=True)
# summarise the text (this works only if it fits into the context length of ~4000 tokens)
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
```
However, surprisingly, it returns the following error when using the `chain_type='map_reduce'` parameter: 'OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.'
The suggestion [mentioned in this issue on Github](https://github.com/langchain-ai/langchain/issues/9273) doesn't work for the local model used above. It would be great to have more specific information in the LangChain documentation on (1) how to perform text summarization with LangChain using different LLMs, and (2) specifically for using local models that don't require an internet connection and/or require gpt2. Since the above code works with the parameter `chain_type='stuff'` but not with the parameter `chain_type='map_reduce'`, it would be important to explain what happens under the hood so users can make this work for local models.
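For background on why `gpt2` shows up at all: the `map_reduce` chain calls `llm.get_num_tokens(...)` to decide when intermediate summaries need collapsing, and the base LLM implementation falls back to downloading the GPT-2 tokenizer from Hugging Face, which fails without internet access. A hedged workaround sketch is to override that method so token counting uses the local model itself (this assumes the underlying `ctransformers` client exposes a `tokenize` method; check your version):
```python
from langchain.llms import CTransformers


class OfflineCTransformers(CTransformers):
    """Count tokens with the local GGUF model instead of the default GPT-2 fallback."""

    def get_num_tokens(self, text: str) -> int:
        # `self.client` is the wrapped ctransformers model; its `tokenize` method is assumed here.
        return len(self.client.tokenize(text))


llm = OfflineCTransformers(
    model="./models/llama-2-7b-chat.Q5_K_M.gguf",  # same local model as above
    model_type="llama",
    config={"context_length": 4096, "max_new_tokens": 256, "temperature": 0},
    local_files_only=True,
)
# load_summarize_chain(llm, chain_type="map_reduce") should then avoid the gpt2 download.
```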
### Idea or request for content:
_No response_ | DOC: Summarization 'map_reduce' - Can't load tokenizer for 'gpt2' | https://api.github.com/repos/langchain-ai/langchain/issues/15347/comments | 11 | 2023-12-30T17:44:16Z | 2024-06-12T15:24:45Z | https://github.com/langchain-ai/langchain/issues/15347 | 2,060,810,975 | 15,347 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Help me understand how I can save the intermediate results of a chain execution.

### Suggestion:
_No response_ | Issue: <Saving intermediate variable chains ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15345/comments | 2 | 2023-12-30T15:47:22Z | 2024-04-06T16:06:32Z | https://github.com/langchain-ai/langchain/issues/15345 | 2,060,781,653 | 15,345 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
A few days back, I was referring to the [Prompt templates](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/) page which now shows: "**Page Not Found**"
### Idea or request for content:
I understand that LangChain is an evolving framework undergoing continuous development.
- Could we consider implementing versioning for the documentation? This would allow users to access specific documentation versions.
- Alternatively, if a section undergoes modification, we could preserve the existing documentation and label it as 'Legacy,' ensuring clarity about deprecated practices. | DOC: Prompt Templates "Page Not Found" | https://api.github.com/repos/langchain-ai/langchain/issues/15342/comments | 3 | 2023-12-30T11:14:48Z | 2024-04-14T16:13:36Z | https://github.com/langchain-ai/langchain/issues/15342 | 2,060,716,887 | 15,342 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.353
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
top_p cannot be set to useful values via ChatOllama(top_p=0.3); it ends up as 0 because the field is declared as an int:
top_p: Optional[int] = None
top_p must be a float.
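A tiny sketch of the coercion and of the proposed fix (pydantic v1, which these models use, silently truncates a float passed to an int field):
```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel


class Current(BaseModel):
    top_p: Optional[int] = None    # as currently declared on _OllamaCommon


class Fixed(BaseModel):
    top_p: Optional[float] = None  # proposed: keeps fractional values intact


print(Current(top_p=0.3).top_p)  # 0   -> what the Ollama server ends up receiving
print(Fixed(top_p=0.3).top_p)    # 0.3
```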
### Expected behavior
top_p must be a float; a value of 0.3 should then appear in the Ollama log. | _OllamaCommon contains top_p with int-restriction | https://api.github.com/repos/langchain-ai/langchain/issues/15341/comments | 1 | 2023-12-30T10:29:06Z | 2024-01-15T19:59:40Z | https://github.com/langchain-ai/langchain/issues/15341 | 2,060,706,496 | 15,341 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Below is my code for generating a custom prompt, which takes the context and the user query and passes them to the model:
def generate_custom_prompt(new_project_qa,query,name,not_uuid):
check = query.lower()
result = new_project_qa(query)
relevant_document = result['source_documents']
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(context_text,"context_text")
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
if check in greetings:
custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
elif check not in greetings and user_experience_inst.custom_prompt:
custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
else:
# Create the custom prompt template
custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""
# Create the PromptTemplate
custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
return formatted_prompt
Below is my conversation chain, where I am implementing memory:
def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(temperature=0.1)
memory = ConversationBufferMemory(llm=llm,output_key='answer',memory_key='chat_history',return_messages=True)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True)
# qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
but I am not getting the desired output.
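For what it's worth, a common pattern (a hedged sketch; the template text is illustrative) is to let `ConversationalRetrievalChain` fill in the context and apply the custom prompt itself via `combine_docs_chain_kwargs`, instead of formatting a full prompt by hand before calling the chain:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma

QA_TEMPLATE = """Answer the question based only on the following context:
{context}

Question: {question}
AI Answer:"""


def retrieval_qa_chain(chroma_db_path):
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=OpenAIEmbeddings())
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True, output_key="answer"
    )
    qa_prompt = PromptTemplate(template=QA_TEMPLATE, input_variables=["context", "question"])
    # The chain retrieves the documents, fills {context}, and tracks the chat history itself.
    return ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(temperature=0.1),
        retriever=vectordb.as_retriever(search_kwargs={"k": 2}),
        memory=memory,
        combine_docs_chain_kwargs={"prompt": qa_prompt},
        return_source_documents=True,
    )
```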
### Suggestion:
_No response_ | Issue: Not getting desired output while implementing memory | https://api.github.com/repos/langchain-ai/langchain/issues/15339/comments | 7 | 2023-12-30T04:32:17Z | 2024-04-06T16:06:27Z | https://github.com/langchain-ai/langchain/issues/15339 | 2,060,626,887 | 15,339 |
[
"langchain-ai",
"langchain"
] | ### System Info
New versions
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Start the code
### Expected behavior
Hi, I'm trying to run a "stuff" chain query, but sometimes when I ask questions I get this error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens, however you requested 4177 tokens (3921 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
How can I solve this? Can I trim my prompt? If yes, how? Or can I increase the maximum tokens? If yes, how?
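One way to stay under the 4,097-token window (a hedged sketch; the numbers are illustrative and it assumes the `docsearch`/`query` variables from the code below) is to retrieve fewer, smaller chunks, or to switch the chain type to `map_reduce`, which processes chunks separately instead of stuffing them all into one prompt:
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter

docs = docsearch.similarity_search(query, k=2)  # fewer chunks -> smaller prompt

# Re-split any oversized chunk so the combined prompt stays well below the context limit.
splitter = CharacterTextSplitter(chunk_size=1500, chunk_overlap=0)
docs = splitter.split_documents(docs)

# max_tokens only caps the completion; the model's context window itself cannot be raised.
chain = load_qa_chain(OpenAI(max_tokens=256), chain_type="map_reduce")
print(chain.run(input_documents=docs, question=query))
```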
Here is my code:
import getpass
import os
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain_community.vectorstores import Pinecone
from langchain_community.embeddings.openai import OpenAIEmbeddings
import pinecone
import sys
# Set your Pinecone API key and environment
pinecone_api = "API"
pinecone_env = "API"
# Set your OpenAI API key
openai_api = "API"
# Initialize Pinecone
pinecone.init(api_key=pinecone_api, environment=pinecone_env)
# Define the index name
index_name = "rewind"
# Check if the index already exists, if not, create it
if index_name not in pinecone.list_indexes():
pinecone.create_index(name=index_name, metric="cosine", dimension=1536)
# Initialize the OpenAIEmbeddings
embeddings = OpenAIEmbeddings(api_key=openai_api)
# Create or load the Pinecone index
docsearch = Pinecone.from_existing_index(index_name, embeddings)
# Perform similarity search
query = sys.argv[1] if len(sys.argv) > 1 else "what Commits there is in github"
text_splitter = CharacterTextSplitter(chunk_size=3000, chunk_overlap=0)
docs = docsearch.similarity_search(query)
docs = text_splitter.split_documents(docs)
if __name__ == '__main__':
results = docsearch.similarity_search(query)
# Load the question answering chain
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
chain = load_qa_chain(OpenAI(), chain_type="stuff")
answers = chain.run(input_documents=docs, question=query)
print(answers) | This model's maximum context length is 4097 tokens, however you requested 4177 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15333/comments | 1 | 2023-12-29T23:25:32Z | 2024-04-05T16:08:50Z | https://github.com/langchain-ai/langchain/issues/15333 | 2,060,459,074 | 15,333 |
[
"langchain-ai",
"langchain"
] | ### System Info
I've been trying to create a self query retriever so that I can look at metadata field info. This issue comes up. Should I be using another vector store to make this work? I can only really work with FAISS. I cannot use ChromaDB since my Python environment is limited to a previous version.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a FAISS Vectorstore DB
2. Create a metadata_field_info object and pass it to a SelfQuery object
3. Create LLM with this retriever
```python
embedding_function = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model="text-embedding-ada-002")
db = FAISS.load_local(input_dir + "/" + "storage/deploy/faiss-db", embedding_function)#, distance_strategy="COSINE", normalize_L2 = True)
# retriever = KNNRetriever(vectorstore=db)
llm = ChatOpenAI(
temperature=0,
verbose=False,
openai_api_key=key,
model_name="gpt-3.5-turbo"
)
metadata_field_info = [
AttributeInfo(
name="source",
description="The document this chunk is from.",
type="string",
),
AttributeInfo(
name="origin",
description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.",
type="string",
),
AttributeInfo(
name="date_day",
description="The day the document was uploaded.",
type="integer",
),
AttributeInfo(
name="date_month",
description="The month the document was uploaded.",
type="integer",
),
AttributeInfo(
name="date_year",
description="The year the document was uploaded.",
type="integer",
),
]
# retriever = db.as_retriever(search_type="similarity", search_kwargs={'k': 6}, metadata_field_info=metadata_field_info)
retriever = SelfQueryRetriever.from_llm(
llm, db, "Information about where documents originated from and when they were published.", metadata_field_info, verbose=True
)
```
### Expected behavior
Successfully create a SelfQuery retriever with FAISS vector store. | Self query retriever with Vector Store type <class 'langchain_community.vectorstores.faiss.FAISS'> not supported. | https://api.github.com/repos/langchain-ai/langchain/issues/15331/comments | 4 | 2023-12-29T22:05:18Z | 2024-01-11T22:59:30Z | https://github.com/langchain-ai/langchain/issues/15331 | 2,060,431,327 | 15,331 |
[
"langchain-ai",
"langchain"
] | ### Feature request
This proposal requests the integration of the latest OpenAI models, specifically gpt-4-1106-preview, into the existing framework of [relevant GitHub project, e.g., LangChain]. The newer models offer significantly larger context windows, which are crucial for complex SQL querying and other advanced functionalities. This feature would involve ensuring compatibility with the latest version of the OpenAI API (version 1.0.0 and beyond), which has undergone substantial changes, including the deprecation of certain features like openai.ChatCompletion. Relevant links:
OpenAI API (1.0.0): [OpenAI API Documentation](https://github.com/openai/openai-python)
Migration Guide: [OpenAI Python Library Migration Guide](https://github.com/openai/openai-python/discussions/742)
### Motivation
The primary motivation for this feature request is to leverage the advanced capabilities of the newer OpenAI models, particularly the extended context windows they offer. These capabilities are essential for applications involving extensive data interaction and complex language understanding, such as SQL database querying and management.
Current limitations with the older models and API versions restrict the potential of applications, especially when dealing with lengthy queries or requiring deeper contextual understanding. For example, while working on a project involving the LangChain framework for SQL database interaction, I encountered the APIRemovedInV1 error, which signifies incompatibility with the latest OpenAI API. This issue underscores the need for updating the framework to align with the latest advancements in language models and API standards.
### Your contribution
Might make my own SQL Agent or modify yours. | Integration with OpenAI's Latest Models and API Compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15328/comments | 5 | 2023-12-29T20:33:36Z | 2024-04-11T17:54:09Z | https://github.com/langchain-ai/langchain/issues/15328 | 2,060,386,330 | 15,328 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
how to use embeddings in langchain with fireworks?(I need it for RAG) It's just that the documentation only talks about OpenAIEmbeddings
https://python.langchain.com/docs/modules/data_connection/text_embedding/
### Idea or request for content:
RAG with fireworks API | DOC: how to use embeddings in langchain with fireworks? | https://api.github.com/repos/langchain-ai/langchain/issues/15325/comments | 1 | 2023-12-29T19:38:49Z | 2024-04-05T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15325 | 2,060,357,840 | 15,325 |
[
"langchain-ai",
"langchain"
] | ### System Info
"langchain": "^0.0.211",
MacOS Sonoma 14.2
Next.js 14.0.4
### Who can help?
@agola11
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pnpm add langchain in a Next.js project
2. Create a Next.js Route handler
3. Create the following route:
```
import { NextResponse } from 'next/server';
import { ChatOllama } from 'langchain/chat_models/ollama';
import { ChatPromptTemplate, MessagesPlaceholder } from 'langchain/prompts';
import { BufferMemory, ChatMessageHistory } from 'langchain/memory';
import { ConversationChain } from 'langchain/chains';
export async function POST(req: Request) {
const data = await req.json();
const prompt = ChatPromptTemplate.fromMessages([
[
'system',
`You are an AI Computer Science Data Structures teaching system that responds to all questions STRICTLY
in JSON format. You will be given a question on DSA concepts. Contents of JSON made by you will be used
to create elements within a node of a graph that displays
explanations of topics, and a user interface that allows users to follow up if they need help or want
more information. There are 4 elements, "Topic", "Description", "Subtopics", "Questions": an array of strings. You will also be given a number of nodes that already
exist, to be able to assign unique ids. IDs MUST BE STRINGS. MAKE SURE YOU ARE ONLY REPLYING WITH JSON AND NOT MARKDOWN
These are the only node types you are allowed to pick from:
"promptNode": USE FOR ALL EXPLANATIONS
"confusedNode": USED WHEN CONFUSED
{
"{DEFINE ID BUT IN "STRING" FORM! +1 HIGHER THAN NUMBER GIVEN}": {
"THE ID AGAIN": {number},
"type": "promptNode",
"position": { "x": 0, "y": 0 },
"data": {
"topic": "{Short name of topic}",
"description": "{The explanation of topic}",
"subtopics": [an array of strings of 5 related topics],
"questions": [an array of objects of 4 related questions and answers, eg: {'q': 'Question?', 'a': 'Ans'}],
"im_confused": [array of concepts mentioned in the description that they could be confused about]
}
}`,
],
new MessagesPlaceholder('history'),
['human', '{input}'],
]);
//@ts-ignore
const chatHistory = [];
const llm = new ChatOllama({
baseUrl: 'http://localhost:11434', // Default value
model: 'mistral', // Default value
});
const memory = new BufferMemory({
returnMessages: true,
memoryKey: 'history',
//@ts-ignore
chatHistory: new ChatMessageHistory(chatHistory),
});
const chain = new ConversationChain({
memory: memory,
prompt: prompt,
llm: llm,
verbose: true,
});
const result = await chain.invoke({
input: data.prompt,
});
console.log(result);
return NextResponse.json(
{
},
{ status: 200 }
);
}
```
### Expected behavior
Model output. | Issue when running a simple ChatOllama prompt in Next.js/TypeScript: "Error: Single '}' in template." | https://api.github.com/repos/langchain-ai/langchain/issues/15318/comments | 2 | 2023-12-29T15:48:07Z | 2023-12-29T16:03:41Z | https://github.com/langchain-ai/langchain/issues/15318 | 2,060,210,050 | 15,318 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have built a custom LLM agent by following the documentation provided. The custom agent contains multiple tools; one of them is the "LLMMathChain", which is giving me a ValueError because my agent passes "None" as the Action Input. I want to handle that error so that my chatbot doesn't break in the middle of a conversation.
## My Custom Agent

## Calculator Tool

## Prompt Template

## Output Parser

### Suggestion:
_No response_ | Issue: Error Handling in Tools used in custom agents | https://api.github.com/repos/langchain-ai/langchain/issues/15317/comments | 1 | 2023-12-29T12:44:32Z | 2024-04-05T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15317 | 2,059,715,813 | 15,317 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Presently, JSON can be utilized to enable the multimodal capability of GPT-4 series models within ChatOpenAI and OpenAI. However, this functionality lacks portability.
### Motivation
Using multimodal approaches lacks portability, and GPT-4 isn't the sole model employing multimodal capabilities. Therefore, it becomes imperative to establish a standardized method for accessing various multimodal models.
### Your contribution
I may submit a PR about this if I have spare time | Add common mulit model support | https://api.github.com/repos/langchain-ai/langchain/issues/15316/comments | 3 | 2023-12-29T12:42:22Z | 2024-04-08T16:08:22Z | https://github.com/langchain-ai/langchain/issues/15316 | 2,059,700,790 | 15,316 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Below is my code. How can I implement a ConversationChain along with ConversationSummaryMemory in my code?
def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(temperature=0.1)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
def create_global_qa_chain():
chroma_db_path = "chroma-databases"
folders = os.listdir(chroma_db_path)
qa_chains = {}
for index, folder in enumerate(folders):
folder_path = f"{chroma_db_path}/{folder}"
project = retreival_qa_chain(folder_path)
qa_chains[folder] = project
return qa_chains
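A hedged sketch of one way to do it (names mirror the snippet above; adjust to your setup): replace RetrievalQA with ConversationalRetrievalChain and hand it a ConversationSummaryMemory:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationSummaryMemory
from langchain.vectorstores import Chroma


def conversational_chain(chroma_db_path):
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=OpenAIEmbeddings())
    llm = ChatOpenAI(temperature=0.1)
    # ConversationSummaryMemory keeps a running summary of the dialogue instead of raw turns.
    memory = ConversationSummaryMemory(
        llm=llm, memory_key="chat_history", return_messages=True, output_key="answer"
    )
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectordb.as_retriever(search_kwargs={"k": 2}),
        memory=memory,
        return_source_documents=True,
    )
```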
### Suggestion:
_No response_ | Issue: How can I implement Conversation Chain along with ConversationSummaryMemory | https://api.github.com/repos/langchain-ai/langchain/issues/15315/comments | 1 | 2023-12-29T11:23:25Z | 2024-04-05T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15315 | 2,059,344,749 | 15,315 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to add a specific prompt template to my ConversationalRetrievalChain. This is my current code:
> PROMPT_TEMPLATE = """
Act as the policies interactive Bot that gives advice on the Company policies, Travel policies, and Information security policies for the company.
Do not try to make up an answer. Use only the given pieces of context; do not use your own knowledge.
Chat History:
{chat_history}
Follow Up Input: {question}
"""
qa_prompt = PromptTemplate(input_variables=["chat_history", "question",], template=PROMPT_TEMPLATE)
> chat = ChatOpenAI(
verbose=True,
model_name=MODEl_NAME,
temperature=TEMPERATURE,
max_retries=MAX_RETRIES,
streaming=True,
)
qa_chain =ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=MyVectorStoreRetriever(
vectorstore=vectordb,
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
),
return_source_documents=True,
combine_docs_chain_kwargs={'prompt': qa_prompt}, )
response = qa_chain(
{
"question": query,
"chat_history": chat_history,
},
callbacks=[stream_handler],
)
This is the error I'm currently getting,
> qa_chain =ConversationalRetrievalChain.from_llm(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 360, in from_llm
doc_chain = load_qa_chain(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
return loader_mapping[chain_type](
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 81, in _load_stuff_chain
return StuffDocumentsChain(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name context was not found in llm_chain input_variables: ['chat_history', 'question'] (type=value_error)
Can you help me figure out the error and correct it?
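The validation error is the "stuff" documents chain looking for a `{context}` placeholder, which the custom template does not declare. A minimal sketch of the same template with `context` added (everything else as in the original):
```python
from langchain.prompts import PromptTemplate

PROMPT_TEMPLATE = """
Act as the policies interactive Bot that gives advice on the Company policies, Travel policies, and Information security policies for the company.
Do not try to make up an answer. Use only the given pieces of context; do not use your own knowledge.

Context:
{context}

Chat History:
{chat_history}
Follow Up Input: {question}
"""

qa_prompt = PromptTemplate(
    input_variables=["context", "chat_history", "question"],
    template=PROMPT_TEMPLATE,
)
```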
### Suggestion:
_No response_ | Issue: document_variable_name context was not found in llm_chain input_variables | https://api.github.com/repos/langchain-ai/langchain/issues/15314/comments | 1 | 2023-12-29T10:42:37Z | 2024-04-05T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15314 | 2,059,302,480 | 15,314 |
[
"langchain-ai",
"langchain"
] | ### System Info
lc: 0.0.352, os: ubuntu 22, python 3.10
### Who can help?
### Description
I am encountering a significant performance issue when using Qdrant with HuggingfaceEmbeddings in a CPU-only environment, specifically within a FastAPI endpoint. The process is notably slow, particularly at the `aadd_documents(...)` stage.
### Additional Information
- As a comparison, I tried embedding a document directly using `sentence_transformers`. This approach utilized all CPU cores, resulting in a much faster process.
- I also experimented with a custom implementation, using only necessary functions from [this Qdrant file](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/qdrant.py) to perform `aadd_documents`. This approach also showed improved performance and full CPU utilization.
### Question
Does anyone have an idea or suggestion on what might be causing this performance bottleneck when using Qdrant with HuggingfaceEmbeddings in a CPU-only environment?
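Not a root-cause diagnosis, but two knobs that often decide whether CPU embedding saturates the cores (a hedged sketch; the model name is illustrative): the batch size passed through to `sentence_transformers.encode` and the `multi_process` flag on `HuggingFaceEmbeddings`:
```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",  # illustrative model
    encode_kwargs={"batch_size": 64, "normalize_embeddings": True},
    multi_process=True,  # spawn worker processes so more than one core is used
)
```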
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Steps to Reproduce
1. Set up Qdrant with HuggingfaceEmbeddings in a CPU-only machine (no GPU).
2. Integrate it within a FastAPI endpoint.
3. Execute `aadd_documents(...)` for documents (for example, documents with around 45K characters).
### Expected behavior
### Expected Behavior
I expected the embedding and addition of documents to Qdrant to be efficient and utilize multiple CPU cores effectively.
### Observed Behavior
- The embedding process for a document of approximately 45K characters took over one minute.
- Resource utilization monitoring showed that only one out of 70 CPU cores was being utilized during the embedding process.
| Slow aadd_documents using Qdrant and HuggingfaceEmbeddings on CPU | https://api.github.com/repos/langchain-ai/langchain/issues/15310/comments | 1 | 2023-12-29T09:45:06Z | 2024-04-05T16:08:14Z | https://github.com/langchain-ai/langchain/issues/15310 | 2,059,251,491 | 15,310 |
[
"langchain-ai",
"langchain"
] | null | b | https://api.github.com/repos/langchain-ai/langchain/issues/15307/comments | 2 | 2023-12-29T08:30:47Z | 2023-12-29T08:37:37Z | https://github.com/langchain-ai/langchain/issues/15307 | 2,059,195,701 | 15,307 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.340
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My vector store holds tens of thousands of documents, but as the number of documents grows the retriever's accuracy drops and the correct document can no longer be retrieved.
The retriever does not return the correct document.
### Expected behavior
db = FAISS.load_local(VS['comixfaiss'], embeddings)
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5,"k":5})
logger.info(retriever.get_relevant_documents('3736085'))
LOG:
2023-12-29 15:44:51,650 - loader.py[line:56] - INFO: Successfully loaded faiss with AVX2 support.
2023-12-29 15:44:52,843 - local_doc_qa.py[line:204] - INFO: [Document(page_content='\ufeffSelection: 非校验\n商品编号: 3730559\n商品名称: 飞捷 FJ21325 39-45码(QXGZ)中筒 防水鞋户外雨靴套鞋胶鞋 黑色(单位:双)\n物料编码: 3730559\n大类: 生活用品\n中类: 办公日杂\n小类: 雨伞雨具\n品牌: 梦奇\n颜色: 黑色\n型号: FJ21325\n建议零售价: 77.45\n卖点: 品牌:飞捷 颜色:黑色 型号:FJ21325 包装清单:雨靴*1\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 8975}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3051396\n商品名称: 得力 9387 三联送(销)货单据 129*188mm 20份/本 黄色 单位:本\n物料编码: 3051396\n大类: 办公文具\n中类: 财务行政用品\n小类: 财务单据\n品牌: 得力\n颜色: 黄色\n型号: 9387\n建议零售价: 4.35\n卖点: 0\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 20709}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3278812\n商品名称: 惠普\xa0W9055MC\xa0成像鼓 彩色 (单位:个)\n物料编码: 3278812\n大类: 办公耗材\n中类: 打印机耗材\n小类: 硒鼓\n品牌: 惠普\n颜色: 彩色\n型号: W9055MC\n建议零售价: 3645.88\n卖点: 打印机耗材\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 12167}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3197277\n商品名称: 得力 9307 报刊架 480*360*1450 银色 单位:个\n物料编码: 3197277\n大类: 办公文具\n中类: 会议展示用品\n小类: 报刊/杂志架\n品牌: 得力\n颜色: \n型号: 9307\n建议零售价: 330.33\n卖点: 0\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 16685}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3278811\n商品名称: 惠普 W9054MC 成像鼓 黑色 (单位:个)\n物料编码: 3278811\n大类: 办公耗材\n中类: 打印机耗材\n小类: 硒鼓\n品牌: 惠普\n颜色: 黑色\n型号: W9054MC\n建议零售价: 2471.19\n卖点: 打印机耗材\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 12168})]
| The retrieval cannot be given the document correctly | https://api.github.com/repos/langchain-ai/langchain/issues/15306/comments | 4 | 2023-12-29T08:00:29Z | 2024-04-08T16:08:17Z | https://github.com/langchain-ai/langchain/issues/15306 | 2,059,175,187 | 15,306 |
[
"langchain-ai",
"langchain"
] | Hi @dosu-bot,
This is my code
```
import langchain
from langchain.cache import SQLAlchemyCache, Emb
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, Text
from urllib.parse import quote_plus
from langchain.llms import OpenAI
Base = declarative_base()
class FulltextLLMCache(Base):
__tablename__ = "llm_cache_full_text"
id = Column(Integer, primary_key=True)
prompt = Column(Text, nullable=False)
llm = Column(Text, nullable=False)
idx = Column(Integer)
response = Column(Text)
db_uri = f"mssql+pyodbc://JUPYTER\SQLEXPRESS/my_database?driver=ODBC+Driver+17+for+SQL Server"
cache_engine = create_engine(db_uri, pool_recycle=240, pool_size=20, max_overflow=30)
# Assigning to llm_cache
langchain.llm_cache = SQLAlchemyCache(cache_engine, FulltextLLMCache)
```
The above code implements an exact-match cache, which has a very low hit rate. How can I do similarity-based caching?
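LangChain's semantic caches key on embedding similarity instead of the exact prompt string; `RedisSemanticCache` is one built-in option (a hedged sketch that requires a running Redis instance; GPTCache is another route):
```python
import langchain
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings

langchain.llm_cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",  # illustrative local Redis
    embedding=OpenAIEmbeddings(),
    score_threshold=0.2,  # how close a new prompt must be to count as a hit
)
```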
| How do i use similarity caching in my code? | https://api.github.com/repos/langchain-ai/langchain/issues/15304/comments | 1 | 2023-12-29T07:36:10Z | 2024-04-05T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15304 | 2,059,159,495 | 15,304 |
[
"langchain-ai",
"langchain"
] | Hi @dosu-bot.
Below is my code,
```
from langchain.cache import SQLAlchemyCache
from sqlalchemy import create_engine
engine = create_engine("mssql+pyodbc://JUPYTER\SQLEXPRESS/my_database?driver=ODBC+Driver+17+for+SQL Server")
set_llm_cache(SQLAlchemyCache(engine))
memory = ConversationBufferWindowMemory(k=2, memory_key="chat_history", chat_memory=chat_message_history ,return_messages=True, output_key="answer", input_key="question")
retriever = load_emdeddings(cfg.faiss_persist_directory, cfg.embeddings).as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": .65,
"k": 2})
custom_prompt_template = """
You are a friendly chatbot named "XYZ", designed to provide assistance and answer queries.
{context}
Chat History: {chat_history}
Question: {question}
"""
# Create a PromptTemplate instance with your custom template
custom_prompt = PromptTemplate(
template=custom_prompt_template,
input_variables=["context", "question", "chat_history", "User_Name", "User_Location"],
)
# Use your custom prompt when creating the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
llm,
verbose=False,
retriever=retriever,
memory=memory,
combine_docs_chain_kwargs={"prompt": custom_prompt},
return_source_documents = True
)
```
If I use llm.predict("Tell me a joke"), I can see the cache entry getting stored in the DB,
but when I ask a question through qa, nothing is saved. Why?
| Cache not getting saved in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15303/comments | 1 | 2023-12-29T06:30:14Z | 2024-04-05T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15303 | 2,059,118,347 | 15,303 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi,
I'm new to this, so I apologize if my lack of in-depth understanding of how this library works has caused me to raise a false alarm. I'm trying to run OCR on a PDF image using the UnstructuredPDFLoader, and I'm passing the following args:
`
loader = UnstructuredPDFLoader(file_path="myfile.pdf", mode="elements",include_page_break=True,infer_table_structure=False,languages=["Eng"],strategy="hi_res",include_metadata=True,model_name="chipper")`
However I keep getting the following error:
```
OSError: unstructuredio/chipper-v3 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
Not sure what I'm missing here?
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader(file_path="myfile.pdf", mode="elements",include_page_break=True,infer_table_structure=False,languages=["Eng"],strategy="hi_res",include_metadata=True,model_name="chipper")
documents = loader.load()
print(documents)
```
### Expected behavior
I should be getting the metadata similar to when I use other models like "yolox" which works fine. I heard chipper model is much better so I wanted to try it. | Using chipper model with hi_res strategy gives an error | https://api.github.com/repos/langchain-ai/langchain/issues/15300/comments | 2 | 2023-12-29T02:33:48Z | 2024-04-05T16:07:54Z | https://github.com/langchain-ai/langchain/issues/15300 | 2,059,008,076 | 15,300 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = "^0.0.352"
@agola11
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Specify async open ai client upon intialization
client = openai.AsyncOpenAI()
assistant = OpenAIAssistantRunnable(assistant_id=self.assistant_id,as_agent=as_agent,client=client)
Produces error
pydantic.v1.errors.ConfigError: field "client" not yet prepared so type is still a ForwardRef, you might need to call OpenAIAssistantRunnable.update_forward_refs().
### Expected behavior
expect intialization to be successful | Cannot specify asyn clienct for OpenAIAssistantRunnable | https://api.github.com/repos/langchain-ai/langchain/issues/15299/comments | 1 | 2023-12-29T02:29:20Z | 2024-01-29T20:19:49Z | https://github.com/langchain-ai/langchain/issues/15299 | 2,059,006,360 | 15,299 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.352
Name: openai
Version: 1.6.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://resource.openai.azure.com/"
API_KEY = "abc123"
DEPLOYMENT_NAME = "GPT35"
model = AzureChatOpenAI(
openai_api_base=BASE_URL,
openai_api_version="2023-05-15",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=API_KEY,
openai_api_type="azure",
)
print(model(
[
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
))
### Expected behavior
I get an error about the openai module, and the same error when I try to use embeddings.
I can use the Azure OpenAI Python SDK directly with my resource and API key without problems, but langchain is broken.
AttributeError Traceback (most recent call last)
AttributeError: module 'openai' has no attribute 'error' | Azure function not working - openai error with latest builds | https://api.github.com/repos/langchain-ai/langchain/issues/15289/comments | 3 | 2023-12-28T22:42:25Z | 2023-12-30T12:46:52Z | https://github.com/langchain-ai/langchain/issues/15289 | 2,058,918,716 | 15,289 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.348
Python 3.9.18
Mac OS M2 (Ventura 13.6.2)
AWS Bedrock Titan text express, Claude v2
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
SQLDatabaseChain produces a SQL query whose logic is correct but which uses double quotes ("identifier"); this is incorrect for Snowflake SQL, which requires single quotes ('identifier').
output = SQL: SELECT "company" = "ABC"
desired output = SQL: SELECT 'company' = 'ABC'
### Expected behavior
The desired output should be Snowflake SQL with single quotes for the identifier 'ABC'. | Incorrect Snowflake SQL dialect in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15285/comments | 12 | 2023-12-28T21:26:16Z | 2024-04-22T16:31:04Z | https://github.com/langchain-ai/langchain/issues/15285 | 2,058,832,286 | 15,285 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain Version: 0.0.348
python Version: Python 3.9.18
OS: Mac OS M2 (Ventura 13.6.2)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = Bedrock(
    credentials_profile_name=os.environ.get('profile_name'),
    model_id="anthropic.claude-v2",
    model_kwargs={"temperature": 0.1},
    endpoint_url="https://bedrock-runtime.us-east-1.amazonaws.com",
    region_name="us-east-1",
    verbose=True
)

db = SQLDatabase.from_uri(snowflake_url, sample_rows_in_table_info=3, include_tables=["table_name"])

output = SQLDatabaseChain.from_llm(
    llm,
    db,
    prompt=few_shot_prompt,
    return_intermediate_steps=True,
)
```
Gives the following error:
Error: syntax error line 1 at position 0 unexpected '**The**'.
[SQL: **The** query looks good to me, I don't see any of the common mistakes listed. Here is the original query again: SELECT *
FROM table]
### Expected behavior
The output should contain only the SQL query, produced plainly; it should not be surrounded by quotes or preceded by any comments.
Desired output:
[SQL: SELECT *
FROM table] | AWS bedrock Claude v2 SQLDatabaseChain produces comments before the SQL Query | https://api.github.com/repos/langchain-ai/langchain/issues/15283/comments | 20 | 2023-12-28T19:51:15Z | 2024-06-08T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15283 | 2,058,773,284 | 15,283 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor
tools = [DuckDuckGoSearchRun()]
assistant = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant",
instructions="You are a personal math tutor.",
tools=tools,
model="gpt-4-1106-preview",
as_agent=True,
)
logger.debug(assistant)
logger.debug(assistant.assistant_id)
agent_executor = AgentExecutor(agent=assistant, tools=tools,verbose=True)
response = agent_executor.invoke({"content": "whats the weather in london"})
print(response)
logger.debug(response)
```
I am trying to run the following from the example. It prints out the assistant information and id, but after that it gets completely stuck. I tried to step through the debugger, but after a while it continues and never comes back after calling
```
callback_manager = CallbackManager.configure(
callbacks,
self.callbacks,
self.verbose,
tags,
self.tags,
metadata,
self.metadata,
)
```
in the `__call__` method
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor

tools = [DuckDuckGoSearchRun()]
assistant = OpenAIAssistantRunnable.create_assistant(
    name="langchain assistant",
    instructions="You are a personal math tutor.",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)
logger.debug(assistant)
logger.debug(assistant.assistant_id)

agent_executor = AgentExecutor(agent=assistant, tools=tools, verbose=True)
response = agent_executor.invoke({"content": "whats the weather in london"})
print(response)
logger.debug(response)
```
### Expected behavior
To get an output from the agent instead of it being stuck. | OpenAIAssistantRunnable stuck on execution with langchain tools | https://api.github.com/repos/langchain-ai/langchain/issues/15270/comments | 2 | 2023-12-28T13:33:35Z | 2023-12-28T17:46:23Z | https://github.com/langchain-ai/langchain/issues/15270 | 2,058,448,990 | 15,270
[
"langchain-ai",
"langchain"
] | ### System Info
Python: 3.11
Langchain: 0.0.352
mistralai: 0.0.8
### Who can help?
@efriis
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
If the ChatMistralAI model is used for an agent or similar, an error appears because the official Mistral API does not currently support the stop parameter (as other APIs such as OpenAI do).
### Expected behavior
Although this is something that should be fixed by Mistral in its official API, one of the following should be done:
- Warn the user that this model cannot be used with a stop sequence before breaking execution due to the error.
- Implement its own handling of the stop sequence in the package and do not send that parameter to the official client call. | [mistralai]: Doesn't support stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/15269/comments | 2 | 2023-12-28T13:14:32Z | 2024-01-10T00:27:22Z | https://github.com/langchain-ai/langchain/issues/15269 | 2,058,428,380 | 15,269
[
"langchain-ai",
"langchain"
] | ### System Info
Is there any way to manipulate the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Is there any way to manipulate the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain?
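For context, this is a rough sketch of the kind of setup I have in mind (untested; the connection string and the request are placeholders). It uses the SQL agent, since the underlying query tool can in principle execute INSERT/UPDATE/DELETE statements if the database user has write permissions:
```
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
)

# Whether writes actually happen depends on the prompt and on DB permissions;
# the default agent prompt discourages DML, so this is only an illustration.
agent_executor.run("Insert a new row into the users table with the name 'Alice'.")
```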
### Expected behavior
The possibility of manipulating the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain. | Manipulating database using chatgpt | https://api.github.com/repos/langchain-ai/langchain/issues/15266/comments | 7 | 2023-12-28T12:24:15Z | 2024-05-10T03:22:41Z | https://github.com/langchain-ai/langchain/issues/15266 | 2,058,376,378 | 15,266
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
According to the documentation listed under the page: https://python.langchain.com/docs/modules/agents/how_to/add_memory_openai_functions, adding a `BaseChatMemory` as `memory` property to an `OpenAIFunctionAgent` should add "memory" to the agent.
**Example listed under the page:**
>>Human: 'Hi'
>>Agent: 'How can I assist you today'
>>Human: 'My name is Bob'
>>Agent: 'Nice to meet you, Bob! How can I help you today?'
>>Human: 'What is my name'
>>Agent: 'Your name is Bob.'
**Actual result:**
>>Human: 'Hi'
>>Agent: 'How can I assist you today'
>>Human: 'My name is Bob'
>>Agent: 'Nice to meet you, Bob! How can I help you today?'
>>Human: 'What is my name'
>>Agent: 'I am not programmed to say your name'
RCA:
- The example implies the memory object that is passed to the functions agent instantiation actually takes care of converting the previous messages into the required `ChatMessages` model, but the implementation of such an abstraction seems missing, at least in langchain >= 0.0.350
- Upon checking with [visualizer](https://github.com/amosjyng/langchain-visualizer), it is seen that:

the latest invocation of the agent does not include any "history" of any previous `run` with the `agent`. Curiously, however, the agent executor does contain a variable `memory` which does list the previous conversations:

### Idea or request for content:
**Expected resolution:**
1. Update documentation to point to the correct way of incorporating memory with openai functions agent (ad-hoc implementation possibly)
2. Adding and updating implementation to make this API work as expected.
| DOC: Issue with the page titled "Add Memory to OpenAI Functions Agent | 🦜️🔗 Langchain" | https://api.github.com/repos/langchain-ai/langchain/issues/15262/comments | 2 | 2023-12-28T10:39:12Z | 2023-12-28T11:05:16Z | https://github.com/langchain-ai/langchain/issues/15262 | 2,058,277,920 | 15,262 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It should be possible to search a Chroma vectorstore for a particular Document by it's ID. Given that the Document object is required for the `update_document` method, this lack of functionality makes it difficult to update document metadata, which should be a fairly common use-case.
Currently, there are two methods for searching a vectorstore, `get` and `search` but neither allow me to collect a Document by it's id
`vectorstore.get`: This allows for search via `id`, however, this does not return the actual `Document` object. Instead, the return is a dictionary of lists containing the `id`, `document`, and optionally, the `embeddings` for all matched documents. This provides an easy interface for utilising documents downstream, however, this creates a challenge for document updates as the `update_document` method needs the `Document` object to be passed, which would require needless recreation for updates.
`vectorstore.search`: This returns the `Document` object as required, however, it is not possible to explicitly search via `id`, only similarity search is possible.
As such, it appears that there is currently no easy way to do this at present, without manually recreating the Document from the `get` output.
### Motivation
For my use-case, I am performing offline clustering of my embeddings in order to find the core groups of documents and would like to add the predicted label to each document as metadata "cluster_label".
Below is a simple representation of my current pipeline:
```
all_docs = vectorstore.get(include=["embeddings", "documents"])
doc_ids = all_docs["ids"]
embeddings = np.array(all_docs["embeddings"])

cluster_model, labels = fit_predict_clustering(embeddings, max_components=10)

for doc_id, label in zip(doc_ids, labels):
    # Fetch the document from the vectorstore
    doc = vectorstore.get(doc_id)  # returns Dict[str, List], but I need Document
    # Given the current implementation, I would need to convert the above dictionary to a Document here
    ...
    # Update metadata with the cluster label
    doc.metadata["cluster_label"] = label
    vectorstore.update_document(doc_id, doc)
```
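For completeness, the manual recreation I am doing today looks roughly like this (a sketch reusing the variables from the pipeline above; it assumes `metadatas` are requested from `get` as well):
```
from langchain.schema import Document

def docs_from_get(vectorstore, ids):
    # Rebuild Document objects from Chroma's dict-of-lists `get` output.
    result = vectorstore.get(ids=ids, include=["documents", "metadatas"])
    return [
        Document(page_content=text, metadata=meta or {})
        for text, meta in zip(result["documents"], result["metadatas"])
    ]

doc_id = doc_ids[0]
doc = docs_from_get(vectorstore, [doc_id])[0]
doc.metadata["cluster_label"] = int(labels[0])
vectorstore.update_document(doc_id, doc)
```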
### Your contribution
I'm happy to contribute to this feature if deemed beneficial. To my mind, it should be achievable by either:
1. Updating the get method to allow `Document` returning,
2. Including a new method with the required functionality, or
3. Providing a utility for easy bulk conversion from `get` output to `List[Document]`.
However, I'm open to suggestions as to the most fitting solution.
| Get Chroma vectorstore Document by `doc_id` for document / metadata updates. | https://api.github.com/repos/langchain-ai/langchain/issues/15261/comments | 1 | 2023-12-28T09:48:44Z | 2024-04-04T16:09:01Z | https://github.com/langchain-ai/langchain/issues/15261 | 2,058,224,878 | 15,261 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
help(qdrant.amax_marginal_relevance_search)
print("&&&&&&&&&&&&&&&&&")
help(qdrant.max_marginal_relevance_search)

hits = await qdrant.amax_marginal_relevance_search(text, k=20, fetch_k=100, filter=filter_empty)
print(hits)
hits1 = qdrant.max_marginal_relevance_search(text, k=20, fetch_k=100, filter=filter_empty)
print(hits1)
```
### Expected behavior
qdrant.amax_marginal_relevance_search returns no results, but qdrant.max_marginal_relevance_search returns results. | qdrant.amax_marginal_relevance_search returns no results but qdrant.max_marginal_relevance_search returns results | https://api.github.com/repos/langchain-ai/langchain/issues/15256/comments | 1 | 2023-12-28T07:41:26Z | 2023-12-29T03:31:51Z | https://github.com/langchain-ai/langchain/issues/15256 | 2,058,104,532 | 15,256
[
"langchain-ai",
"langchain"
] | ### System Info
Python: 3.10
```
from langchain.chat_models import ChatOpenAI

openai = ChatOpenAI(model_name="gpt-3.5-turbo",
                    temperature=0.8,
                    max_tokens=60)
```
The error occurs in openai.py; the error message is: AttributeError: module 'openai' has no attribute 'OpenAI'
The reason, I guess, is a version mismatch.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
    SystemMessage
)

openai = ChatOpenAI(model_name="gpt-3.5-turbo",
                    temperature=0.8,
                    max_tokens=60)

messages = [
    SystemMessage(content="bla"),
    HumanMessage(content="bla")
]
response = openai(messages)
print(response)
```
### Expected behavior
No exception. | langchain 0.5.7 does not match latest openai | https://api.github.com/repos/langchain-ai/langchain/issues/15255/comments | 1 | 2023-12-28T07:17:09Z | 2024-04-04T16:08:56Z | https://github.com/langchain-ai/langchain/issues/15255 | 2,058,083,922 | 15,255
[
"langchain-ai",
"langchain"
] | ### Feature request
Similar to the way callbacks are implemented in BaseLLM, the embedding classes should also support callbacks.
### Motivation
When using embedding models in a RAG application, it would be useful to track e.g. the number of tokens.
Callbacks can be used to log usage details to monitoring services (e.g. LangSmith).
### Your contribution
There is a closed PR addressing the same topic https://github.com/langchain-ai/langchain/pull/7920 | Callbacks for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/15253/comments | 2 | 2023-12-28T06:29:24Z | 2024-06-11T16:07:18Z | https://github.com/langchain-ai/langchain/issues/15253 | 2,058,046,954 | 15,253
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
What should I do if I want to log the number of tokens shot with llm in chain via lcel?
### Suggestion:
lcel chain token usage tracking | Issue: lcel chain token usage tracking | https://api.github.com/repos/langchain-ai/langchain/issues/15249/comments | 3 | 2023-12-28T04:51:21Z | 2024-06-24T16:07:30Z | https://github.com/langchain-ai/langchain/issues/15249 | 2,057,986,272 | 15,249 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I do not understand how chains are built with the transfer of information between generations. Here is an example of the code from the LangChain [documentation](https://python.langchain.com/docs/expression_language/why):
```
from langchain_core.runnables import RunnablePassthrough
prompt = ChatPromptTemplate.from_template(
"Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()
model = llm
chain = (
{"topic": RunnablePassthrough()}
| prompt
| model
| output_parser
)
chain.invoke("ice cream")
```
Here the prompt asks to write a joke about ice cream. Based on this example, my question is: how do I make the chain continue further and, for example, analyze this joke (that is, keep working with what was generated)?
There was an idea to just create a second prompt and add it to the chain:
```
prompt = ChatPromptTemplate.from_template(
"Tell me a short joke about {topic}"
)
prompt1 = ChatPromptTemplate.from_template(
"What was the joke about?"
)
output_parser = StrOutputParser()
model = llm
chain = (
{"topic": RunnablePassthrough()}
| prompt
| model
| output_parser
| prompt1
| model
| output_parser
)
```
But it won't work that way, because for some reason the model doesn't know the context...

### Idea or request for content:
_No response_ | DOC: langchain LCEL - transfer of information between generations | https://api.github.com/repos/langchain-ai/langchain/issues/15247/comments | 10 | 2023-12-28T04:06:17Z | 2024-04-05T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15247 | 2,057,963,845 | 15,247 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Based on the documentation and RFC standards referenced in the links:
- https://peps.python.org/pep-0604/
- https://www.blog.pythonlibrary.org/2021/09/11/python-3-10-simplifies-unions-in-type-annotations/
it's evident that using | instead of Union for type annotations is a feature that was introduced in Python 3.10.
However, I've observed that in our project's pyproject.toml and ci.yaml files, the Python version is specified as python = ">=3.8.1,<4.0".
This leads me to question whether LangChain will face issues with type checking or even running in the specified Python 3.8 environment, given that it doesn't support the | syntax for unions.
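For illustration, here is the difference in a minimal form (the PEP 604 spelling is what concerns me about 3.8):
```
from typing import Optional, Union

def parse_old(value: Union[int, str]) -> Optional[str]:   # works on 3.8+
    return str(value)

# The PEP 604 spelling below raises
#   TypeError: unsupported operand type(s) for |: 'type' and 'type'
# at import time on Python 3.8/3.9 (and pydantic cannot evaluate it there even
# with `from __future__ import annotations`):
#
# def parse_new(value: int | str) -> str | None:
#     return str(value)
```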
If there are any considerations or plans, such as updating the pyproject.toml and ci.yaml to make LangChain compatible with a minimum of Python 3.10, or if it's appropriate for me to submit a PR to address the use of the | operator in type annotations within LangChain, I'd appreciate your input and guidance.
### Suggestion:
Upgrade the Python version, or fix and remove the | syntax. I would be happy to do this; please let me know your decision.
@hwchase17 | python 3.10 `|` union syntax compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15244/comments | 1 | 2023-12-28T02:53:57Z | 2023-12-28T06:06:43Z | https://github.com/langchain-ai/langchain/issues/15244 | 2,057,929,816 | 15,244 |
[
"langchain-ai",
"langchain"
] | ### System Info
How can the chatglm-6b model loaded by LangChain be quantized?
### Who can help?
@hwchase17 @hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# Local model
else:
    from configs.model_config import VLLM_MODEL_DICT
    if kwargs["model_names"][0] in VLLM_MODEL_DICT and args.infer_turbo == "vllm":
        import fastchat.serve.vllm_worker
        from fastchat.serve.vllm_worker import VLLMWorker, app, worker_id
        from vllm import AsyncLLMEngine
        from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs

        args.tokenizer = args.model_path  # add this here if the tokenizer differs from model_path
        args.tokenizer_mode = 'auto'
        args.trust_remote_code = True
        args.download_dir = None
        args.load_format = 'auto'
        args.dtype = 'auto'
        args.seed = 0
        args.worker_use_ray = False
        args.pipeline_parallel_size = 1
        args.tensor_parallel_size = 1
        args.block_size = 16
        args.swap_space = 4  # GiB
        args.gpu_memory_utilization = 0.90
        args.max_num_batched_tokens = None  # maximum number of tokens per batch; depends on your GPU and model settings, too large and VRAM runs out
        args.max_num_seqs = 256
        args.disable_log_stats = False
        args.conv_template = None
        args.limit_worker_concurrency = 5
        args.no_register = False
        args.num_gpus = 4  # the vllm worker is split via tensor parallelism; set this to the number of GPUs
        args.engine_use_ray = False
        args.disable_log_requests = False

        # parameters added after vllm 0.2.1, but not needed here
        args.max_model_len = None
        args.revision = None
        args.quantization = None
        args.max_log_len = None
        args.tokenizer_revision = None

        # new parameter required by vllm 0.2.2
        args.max_paddings = 256

        if args.model_path:
            args.model = args.model_path
        if args.num_gpus > 1:
            args.tensor_parallel_size = args.num_gpus

        for k, v in kwargs.items():
            setattr(args, k, v)

        engine_args = AsyncEngineArgs.from_cli_args(args)
        engine = AsyncLLMEngine.from_engine_args(engine_args)

        worker = VLLMWorker(
            controller_addr=args.controller_address,
            worker_addr=args.worker_address,
            worker_id=worker_id,
            model_path=args.model_path,
            model_names=args.model_names,
            limit_worker_concurrency=args.limit_worker_concurrency,
            no_register=args.no_register,
            llm_engine=engine,
            conv_template=args.conv_template,
        )
        sys.modules["fastchat.serve.vllm_worker"].engine = engine
        sys.modules["fastchat.serve.vllm_worker"].worker = worker
        sys.modules["fastchat.serve.vllm_worker"].logger.setLevel(log_level)
```
### Expected behavior
How can the model be quantized here when loading the local model? | How to quantize the chatglm-6b model loaded by LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/15243/comments | 3 | 2023-12-28T02:17:17Z | 2024-04-04T16:08:46Z | https://github.com/langchain-ai/langchain/issues/15243 | 2,057,912,633 | 15,243
[
"langchain-ai",
"langchain"
] | ### System Info
I used the standard code example from the LangChain documentation about Fireworks, with my API key inserted. This is the error I got:
```
[llm/start] [1:llm:Fireworks] Entering LLM run with input:
{
"prompts": [
"Name 3 sports."
]
}
[llm/error] [1:llm:Fireworks] [761ms] LLM run errored with error:
"AuthenticationError({'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}})Traceback (most recent call last):\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_core\\language_models\\llms.py\", line 540, in _generate_helper\n self._generate(\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 100, in _generate\n response = completion_with_retry_batching(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 296, in completion_with_retry_batching\n return batch_sync_run()\n ^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 293, in batch_sync_run\n results = list(executor.map(_completion_with_retry, prompt))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 619, in result_iterator\n yield _result_or_cancel(fs.pop())\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 317, in _result_or_cancel\n return fut.result(timeout)\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 456, in result\n return self.__get_result()\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 401, in __get_result\n raise self._exception\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\n return self(f, *args, **kw)\n ^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\n do = self.iter(retry_state=retry_state)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 314, in iter\n return fut.result()\n ^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 449, in result\n return self.__get_result()\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 401, in __get_result\n raise self._exception\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\n result = fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 289, in _completion_with_retry\n return fireworks.client.Completion.create(**kwargs, prompt=prompt)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\base_completion.py\", line 80, in create\n return cls._create_non_streaming(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\base_completion.py\", line 158, in _create_non_streaming\n response = 
client.post_request_non_streaming(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 125, in post_request_non_streaming\n self._error_handling(response)\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 91, in _error_handling\n self._raise_for_status(resp)\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 67, in _raise_for_status\n raise AuthenticationError(resp.json())\n\n\nfireworks.client.error.AuthenticationError: {'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}}"
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
Cell In[25], line 7
1 from langchain.llms.fireworks import Fireworks
3 llm = Fireworks(
4 fireworks_api_key="<BPR7ILI5ar0xAVWKwwAPvE8cyL2yBFpJRGqDGU3QirD6N8W0>",
5 model="accounts/fireworks/models/mixtral-8x7b-instruct",
6 max_tokens=256)
----> 7 llm("Name 3 sports.")
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:892, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
885 if not isinstance(prompt, str):
886 raise ValueError(
887 "Argument `prompt` is expected to be a string. Instead found "
888 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
889 "`generate` instead."
890 )
891 return (
--> 892 self.generate(
893 [prompt],
894 stop=stop,
895 callbacks=callbacks,
896 tags=tags,
897 metadata=metadata,
898 **kwargs,
899 )
900 .generations[0][0]
901 .text
902 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:666, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
650 raise ValueError(
651 "Asked to cache, but no cache found at `langchain.cache`."
652 )
653 run_managers = [
654 callback_manager.on_llm_start(
655 dumpd(self),
(...)
664 )
665 ]
--> 666 output = self._generate_helper(
667 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
668 )
669 return output
670 if len(missing_prompts) > 0:
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:553, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
551 for run_manager in run_managers:
552 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 553 raise e
554 flattened_outputs = output.flatten()
555 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:540, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
530 def _generate_helper(
531 self,
532 prompts: List[str],
(...)
536 **kwargs: Any,
537 ) -> LLMResult:
538 try:
539 output = (
--> 540 self._generate(
541 prompts,
542 stop=stop,
543 # TODO: support multiple run managers
544 run_manager=run_managers[0] if run_managers else None,
545 **kwargs,
546 )
547 if new_arg_supported
548 else self._generate(prompts, stop=stop)
549 )
550 except BaseException as e:
551 for run_manager in run_managers:
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:100, in Fireworks._generate(self, prompts, stop, run_manager, **kwargs)
98 choices = []
99 for _prompts in sub_prompts:
--> 100 response = completion_with_retry_batching(
101 self,
102 self.use_retry,
103 prompt=_prompts,
104 run_manager=run_manager,
105 stop=stop,
106 **params,
107 )
108 choices.extend(response)
110 return self.create_llm_result(choices, prompts)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:296, in completion_with_retry_batching(llm, use_retry, run_manager, **kwargs)
293 results = list(executor.map(_completion_with_retry, prompt))
294 return results
--> 296 return batch_sync_run()
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:293, in completion_with_retry_batching.<locals>.batch_sync_run()
291 def batch_sync_run() -> List:
292 with ThreadPoolExecutor() as executor:
--> 293 results = list(executor.map(_completion_with_retry, prompt))
294 return results
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield _result_or_cancel(fs.pop())
620 else:
621 yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:317, in _result_or_cancel(***failed resolving arguments***)
315 try:
316 try:
--> 317 return fut.result(timeout)
318 finally:
319 fut.cancel()
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File C:\Program Files\Python311\Lib\concurrent\futures\thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:289, in completion_with_retry_batching.<locals>._completion_with_retry(prompt)
287 @conditional_decorator(use_retry, retry_decorator)
288 def _completion_with_retry(prompt: str) -> Any:
--> 289 return fireworks.client.Completion.create(**kwargs, prompt=prompt)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\base_completion.py:80, in BaseCompletion.create(cls, model, prompt_or_messages, request_timeout, stream, client, **kwargs)
76 return cls._create_streaming(
77 model, request_timeout, client=client, **kwargs
78 )
79 else:
---> 80 return cls._create_non_streaming(
81 model, request_timeout, client=client, **kwargs
82 )
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\base_completion.py:158, in BaseCompletion._create_non_streaming(cls, model, request_timeout, client, **kwargs)
156 client = client or FireworksClient(request_timeout=request_timeout)
157 data = {"model": model, "stream": False, **kwargs}
--> 158 response = client.post_request_non_streaming(
159 f"{client.base_url}/{cls.endpoint}", data=data
160 )
161 return cls.response_class(**response)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:125, in FireworksClient.post_request_non_streaming(self, url, data)
119 with httpx.Client(
120 headers={"Authorization": f"Bearer {self.api_key}"},
121 timeout=self.request_timeout,
122 **self.client_kwargs,
123 ) as client:
124 response = client.post(url, json=data)
--> 125 self._error_handling(response)
126 return response.json()
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:91, in FireworksClient._error_handling(self, resp)
89 if resp.is_error:
90 resp.read()
---> 91 self._raise_for_status(resp)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:67, in FireworksClient._raise_for_status(self, resp)
65 raise InvalidRequestError(resp.json())
66 elif resp.status_code == 401:
---> 67 raise AuthenticationError(resp.json())
68 elif resp.status_code == 403:
69 raise PermissionError(resp.json())
AuthenticationError: {'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}}
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I use https://python.langchain.com/docs/integrations/providers/fireworks
2. I got the API key from https://app.fireworks.ai/api-keys
3. I inserted my key into this code:
```
from langchain.llms.fireworks import Fireworks
import os
os.environ["FIREWORKS_API_KEY"] = "<My key was here.>"
llm = Fireworks(fireworks_api_key="<My key was here.>")
llm = Fireworks(
fireworks_api_key="<My key was here.>",
model="accounts/fireworks/models/mixtral-8x7b-instruct",
max_tokens=256)
llm("Name 3 sports.")
```
### Expected behavior
This example is from the documentation - I just want it to work so I can move on. | Error when running the sample code from the langchain documentation about fireworks | https://api.github.com/repos/langchain-ai/langchain/issues/15239/comments | 1 | 2023-12-28T01:10:59Z | 2023-12-28T01:24:35Z | https://github.com/langchain-ai/langchain/issues/15239 | 2,057,882,953 | 15,239
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello everyone! Is it possible to use the OpenAI-compatible URL API from text-generation-webui with LangChain? The LangChain [documentation](https://python.langchain.com/docs/integrations/llms/textgen) talks about localhost, but I don't have access to that. I tried to insert my link into model_url, and the error appeared both in Google Colab and in the terminal.


### Idea or request for content:
_No response_ | DOC: langchain plus OpenAI-compatible URL API equally error | https://api.github.com/repos/langchain-ai/langchain/issues/15237/comments | 6 | 2023-12-28T00:56:43Z | 2024-01-04T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15237 | 2,057,877,277 | 15,237 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am receiving this error: "2 validation errors for ConversationalRetrievalChain
qa_template
extra fields not permitted (type=value_error.extra)
question_generator_chain_options
extra fields not permitted (type=value_error.extra)" for the following code:
```
retriever = vector_store.as_retriever()
sales_persona_prompt = PromptTemplate.from_template(SALES_PERSONA_PROMPT)
condense_prompt = PromptTemplate.from_template(CONDENSE_PROMPT)
question_generator_chain_options = {
"llm": non_streaming_model,
"template": condense_prompt,
}
chain = ConversationalRetrievalChain.from_llm(
streaming_model,
retriever,
qa_template=sales_persona_prompt,
question_generator_chain_options=question_generator_chain_options,
return_source_documents=False,
)
```
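From reading the `from_llm` signature, it looks like the supported keyword arguments are `condense_question_prompt`, `condense_question_llm` and `combine_docs_chain_kwargs` rather than the names I used above; this is what I am trying instead (a sketch - it assumes the persona prompt exposes `context` and `question` variables):
```
chain = ConversationalRetrievalChain.from_llm(
    streaming_model,
    retriever,
    condense_question_llm=non_streaming_model,
    condense_question_prompt=condense_prompt,
    combine_docs_chain_kwargs={"prompt": sales_persona_prompt},
    return_source_documents=False,
)
```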
### Suggestion:
_No response_ | Issue: validation errors for ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15236/comments | 3 | 2023-12-28T00:30:51Z | 2024-04-04T16:08:41Z | https://github.com/langchain-ai/langchain/issues/15236 | 2,057,867,182 | 15,236 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.340
Python version: 3.11.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use the HuggingFace Hub wrapper to create a chat model instance and use the model in a chain. However, there seem to be some library discrepancies between various base files.
Below is the code that works:
```
from langchain_community.llms import HuggingFaceHub
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain.prompts import PromptTemplate, ChatPromptTemplate

llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    model_kwargs={
        "max_new_tokens": 512,
        "top_k": 30,
        "temperature": 0.1,
        "repetition_penalty": 1.03,
    },
)

chat_model = ChatHuggingFace(llm=llm)

messages = [
    SystemMessage(content="You're a zoologist who is able to answer questions about various animals. You are tasked with answering the following question provided"),
    HumanMessage(content="What is the average lifespan of an Elephant?"),
]

res = chat_model.invoke(messages)
print(res.content)
```
I want to modify this to allow the prompt to be more dynamic and potentially include a chain of prompts. Here is my modification:
```
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You're a zoologist who is able to answer questions about various animals. You are tasked with answering the following question provided"),
        HumanMessage(content="What is the average lifespan of an {animal}?"),
    ]
)

chain1 = prompt | chat_model
chain1.invoke({"animal": "giraffe"})
```
I get the following error: NotImplementedError: Unsupported message type: <class 'langchain_core.messages.system.SystemMessage'>. This happens because in the chat.py file the import statement for the messages is: from langchain.schema.messages import (AIMessage, AnyMessage, BaseMessage, ChatMessage, HumanMessage, SystemMessage, get_buffer_string). However, the updated version I found in the documentation states to use langchain_core.messages.
Even if I update the import statement to be the old version, I run into the following error: TypeError: 'ChatPromptValue' object is not subscriptable.
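One thing I noticed while experimenting (a sketch - it does not resolve the import mismatch above): when message objects are passed to `from_messages`, they are treated as literal messages, so the `{animal}` placeholder is never filled in; the (role, template) tuple form at least produces a properly formatted `HumanMessage`:
```
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You're a zoologist who is able to answer questions about various animals."),
        ("human", "What is the average lifespan of an {animal}?"),
    ]
)

chain1 = prompt | chat_model   # chat_model from the working snippet above
res = chain1.invoke({"animal": "giraffe"})
print(res.content)
```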
### Expected behavior
I should be able to execute the chain and receive the same output as from the non-dynamic version of the code - the res.content output. | Executing Chain with HuggingFace Models using wrapper | https://api.github.com/repos/langchain-ai/langchain/issues/15232/comments | 4 | 2023-12-27T21:52:37Z | 2024-04-03T16:09:39Z | https://github.com/langchain-ai/langchain/issues/15232 | 2,057,799,134 | 15,232
[
"langchain-ai",
"langchain"
] | ### System Info
python = "3.11"
langchain = "0.0.352"
cohere = "4.39"
mlflow = {extras = ["genai"], version = "2.9.2"}
### Who can help?
@harupy
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I followed the official example for embeddings here, except that I am using Cohere instead of OpenAI: https://python.langchain.com/docs/integrations/providers/mlflow
<details>
<summary>Click for specific steps</summary>
More specifically, I installed mlflow genai and set my `COHERE_API_KEY` environment variable:
```bash
pip install 'mlflow[genai]'
export COHERE_API_KEY=...
```
I created `config.yaml` like so:
```yaml
endpoints:
- name: completions
endpoint_type: llm/v1/completions
model:
provider: cohere
name: command
config:
cohere_api_key: $COHERE_API_KEY
- name: embeddings
endpoint_type: llm/v1/embeddings
model:
provider: cohere
name: embed-english-light-v3.0
config:
cohere_api_key: $COHERE_API_KEY
```
I started the mlflow deployments server:
```bash
mlflow deployments start-server --config-path config.yaml
```
<details>
<summary>The server started as expected</summary>
```
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
[2023-12-27 13:53:18 -0800] [22480] [INFO] Starting gunicorn 21.2.0
[2023-12-27 13:53:18 -0800] [22480] [INFO] Listening at: http://127.0.0.1:5000 (22480)
[2023-12-27 13:53:18 -0800] [22480] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2023-12-27 13:53:18 -0800] [22481] [INFO] Booting worker with pid: 22481
[2023-12-27 13:53:18 -0800] [22482] [INFO] Booting worker with pid: 22482
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
[2023-12-27 13:53:20 -0800] [22481] [INFO] Started server process [22481]
[2023-12-27 13:53:20 -0800] [22481] [INFO] Waiting for application startup.
[2023-12-27 13:53:20 -0800] [22481] [INFO] Application startup complete.
[2023-12-27 13:53:20 -0800] [22482] [INFO] Started server process [22482]
[2023-12-27 13:53:20 -0800] [22482] [INFO] Waiting for application startup.
[2023-12-27 13:53:20 -0800] [22482] [INFO] Application startup complete.
```
</details>
In `test.py`, I added the embeddings example:
```python
from langchain.embeddings import MlflowEmbeddings
embeddings = MlflowEmbeddings(
target_uri="http://127.0.0.1:5000",
endpoint="embeddings",
)
print(embeddings.embed_query("hello"))
print(embeddings.embed_documents(["hello"]))
```
And I ran it with `python test.py`.
</details>
Here is the error I got:
```
raise HTTPError(
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations. Response text: {"detail":{"message":"invalid request: valid input_type must be provided with the provided model"}}
```
<details>
<summary>Full trace</summary>
```
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
Traceback (most recent call last):
File "xxx/python3.11/site-packages/mlflow/utils/request_utils.py", line 52, in augmented_raise_for_status
response.raise_for_status()
File "xxx/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yyy/test.py", line 8, in <module>
print(embeddings.embed_query("hello"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/langchain_community/embeddings/mlflow.py", line 74, in embed_query
return self.embed_documents([text])[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/langchain_community/embeddings/mlflow.py", line 69, in embed_documents
resp = self._client.predict(endpoint=self.endpoint, inputs={"input": txt})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/mlflow/deployments/mlflow/__init__.py", line 294, in predict
return self._call_endpoint(
^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/mlflow/deployments/mlflow/__init__.py", line 139, in _call_endpoint
augmented_raise_for_status(response)
File "xxx/python3.11/site-packages/mlflow/utils/request_utils.py", line 55, in augmented_raise_for_status
raise HTTPError(
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations. Response text: {"detail":{"message":"invalid request: valid input_type must be provided with the provided model"}}
```
</details>
The issue is that `embed_query`/`embed_documents` don't allow passing in the input_type argument, which is needed by the Cohere API -- see https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/cohere.py#L81
### Proposed solution
My quick solution was to modify the two methods in `MlflowEmbeddings` to allow for kwargs:
```python
class MlflowEmbeddings:
    ...

    def embed_documents(self, texts: List[str], **kwargs) -> List[List[float]]:
        embeddings: List[List[float]] = []
        for txt in _chunk(texts, 20):
            resp = self._client.predict(endpoint=self.endpoint, inputs={"input": txt, **kwargs})
            embeddings.extend(r["embedding"] for r in resp["data"])
        return embeddings

    def embed_query(self, text: str, **kwargs) -> List[float]:
        return self.embed_documents([text], **kwargs)[0]
```
So `test.py` changes to:
```python
print(embeddings.embed_query("hello", input_type="search_query"))
print(embeddings.embed_documents(["hello"], input_type="search_document"))
```
This might not be the best solution since it kind of defeats the purpose of separating `embed_query` and `embed_documents` for Cohere. Another solution is to subclass MlflowEmbeddings for Cohere (and others?).
I intend to open a PR with this change, so any guidance on the best approach is much appreciated!
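If the subclassing route is preferred, I imagine something along these lines (just a sketch building on the `**kwargs` change above; naming and defaults are up for discussion):
```python
from typing import List

from langchain_community.embeddings.mlflow import MlflowEmbeddings


class MlflowCohereEmbeddings(MlflowEmbeddings):
    """Pins the Cohere-required input_type per method instead of per call."""

    query_params: dict = {"input_type": "search_query"}
    documents_params: dict = {"input_type": "search_document"}

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return super().embed_documents(texts, **self.documents_params)

    def embed_query(self, text: str) -> List[float]:
        return super().embed_query(text, **self.query_params)
```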
### Expected behavior
The code should generate embeddings for the given words | MlflowEmbeddings: input_type argument is missing, required by Cohere embeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/15234/comments | 2 | 2023-12-27T23:59:40Z | 2024-03-21T20:47:30Z | https://github.com/langchain-ai/langchain/issues/15234 | 2,057,854,254 | 15,234 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am currently following the documentation to use a Hugging Face LLM as a chat model: https://python.langchain.com/docs/integrations/chat/huggingface
I have set up my Hugging Face API and am using Option 3 (HuggingFaceHub) to instantiate an LLM.
After running this line: chat_model._to_chat_prompt(messages)
I get the following error: ValueError: last message must be a HumanMessage
I am running the code exactly as in the documentation, including using the HuggingFaceH4/zephyr-7b-beta model.
Any help in resolving this issue is much appreciated.
### Idea or request for content:
_No response_ | HuggingFace Chat Wrapper - issue with HuggingFaceHub | https://api.github.com/repos/langchain-ai/langchain/issues/15232/comments | 4 | 2023-12-27T21:52:37Z | 2024-04-03T16:09:39Z | https://github.com/langchain-ai/langchain/issues/15232 | 2,057,799,134 | 15,232 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
@dosu-bot Currently I'm experiencing an old bug that was supposed to have been fixed patches ago.
```
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 134, in view_func
return function(request._get_current_object())
File "/workspace/main.py", line 109, in entry_point_http
faq_response = chain.invoke(inputs)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1510, in invoke
input = step.invoke(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain/chat_models/vertexai.py", line 187, in _generate
response = chat.send_message(question.content, **msg_params)
TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count'
```
My current version is 0.0.348 and I'm trying to create a Cloud Function. Here is my code:
```
from google.cloud import bigquery, storage
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatVertexAI
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from operator import itemgetter
from langchain.schema.output_parser import StrOutputParser
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.embeddings import VertexAIEmbeddings
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate, ChatPromptTemplate
from langchain.retrievers import BM25Retriever, EnsembleRetriever, ContextualCompressionRetriever
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document
from langchain.schema import StrOutputParser
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
import google.cloud.storage
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.prompts import StringPromptTemplate
from typing import List, Union
from langchain.prompts import StringPromptTemplate
from langchain.schema import AgentAction, AgentFinish, OutputParserException
from langchain.vectorstores import MatchingEngine
import re
import io
import ipywidgets as widgets
import json
import langchain
import math
import os
import pandas as pd
import time
import logging
from faq_redpro_prompt import template_faq
from faq_redpro_fewshot import few_shot_faq
PROJECT_ID_ME = os.environ.get("PROJECT_ID_ME")
ME_REGION = os.environ.get("ME_REGION")
ME_BUCKET_FAQ = os.environ.get("ME_BUCKET_FAQ")
ME_INDEX_ID_FAQ = os.environ.get("ME_INDEX_ID_FAQ")
ME_INDEX_ENDPOINT_ID_FAQ = os.environ.get("ME_INDEX_ENDPOINT_ID_FAQ")
def entry_point_http(request):
    request_json = request.get_json()

    # Extract the input from the parameter sent by Dialogflow CX
    user_query = request_json.get('sessionInfo', {}).get('parameters', {}).get('user_query')

    # Models
    llm = VertexAI(
        model_name = "text-bison",
        temperature = 0.1  # Test
    )

    chat = ChatVertexAI(
        model_name = "chat-bison@001",
        temperature = 0.4,
        top_p = 0.8,
        top_k = 40,
        max_output_tokens = 500
    )

    embeddings = VertexAIEmbeddings(model_name="textembedding-gecko-multilingual@001")

    me_faqs = MatchingEngine.from_components(
        project_id=PROJECT_ID_ME,
        region=ME_REGION,
        gcs_bucket_name=ME_BUCKET_FAQ,
        embedding=embeddings,
        index_id=ME_INDEX_ID_FAQ,
        endpoint_id=ME_INDEX_ENDPOINT_ID_FAQ,
    )

    me_retriever = me_faqs.as_retriever(
        search_type="similarity",
        search_kwargs={
            "k": 2,
        },
    )

    faq_prompt = PromptTemplate(
        template=template_faq,
        input_variables=["context", "question", "few_shot_faq"]
    )

    chain = (
        RunnablePassthrough.assign(
            context=itemgetter("question") | me_retriever,
            question=itemgetter("question"),
            few_shot_faq=itemgetter("few_shot_faq"),
        )
        | faq_prompt
        | chat
        | StrOutputParser()
    )

    inputs = {"question": user_query, "few_shot_faq": few_shot_faq}
    faq_response = chain.invoke(inputs)
    print(f'LangChain response: {faq_response}')

    formatted_results = format_response(faq_response)
    response["fulfillment_response"]["messages"][0]["text"]["text"][0] = formatted_results
    return (response, 200, headers)


def format_response(results):
    answer = results['answer']
    sources = results.get('sources', '')
    if sources != '':
        source_uri = sources
    else:
        source_documents = results.get('source_documents', '')
        if source_documents != '':
            source_uri = results['source_documents'][0].metadata['source']
        else:
            source_uri = 'Não encontrei uma fonte para essa pergunta.'

    formatted_response = f"{answer}\nSources: {source_uri}"
    return formatted_response
```
### Suggestion:
_No response_ | TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count' | https://api.github.com/repos/langchain-ai/langchain/issues/15228/comments | 1 | 2023-12-27T19:07:13Z | 2024-04-03T16:09:34Z | https://github.com/langchain-ai/langchain/issues/15228 | 2,057,694,401 | 15,228 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

def generate_embeddings(config: dict = None, urls=None, file_path=None, persist_directory=None):
    if file_path:
        parser = PDFPlumberParser()
        data = parser.load(file_path)
        processed_data = parser.process(data)
        print(processed_data, "processed_data is-----------------llllllllllllllllllllllllllllll")
```
Below is the error I'm getting:
```
data = parser.load(file_path)
AttributeError: 'PDFPlumberParser' object has no attribute 'load'
```
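As a workaround I am trying the document loader instead, which wraps the parser and does expose `load()` (a sketch of the change):
```
from langchain.document_loaders import PDFPlumberLoader

def generate_embeddings(config: dict = None, urls=None, file_path=None, persist_directory=None):
    if file_path:
        loader = PDFPlumberLoader(file_path)
        data = loader.load()   # a list of Document objects, one per page
        print(data, "loaded documents")
```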
### Suggestion:
_No response_ | Issue: issue with pdfplumber | https://api.github.com/repos/langchain-ai/langchain/issues/15227/comments | 7 | 2023-12-27T18:50:09Z | 2024-04-04T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15227 | 2,057,681,667 | 15,227 |
[
"langchain-ai",
"langchain"
] | ### Feature request
As per the documentation, there is a package for Gemini support, but it only works with the Gemini API and doesn't work with Vertex AI.
https://python.langchain.com/docs/integrations/platforms/google
However, in the Vertex AI docs Gemini is mentioned (for some reason Gemini Ultra?), yet when tried with gemini-pro (Gemini Ultra is not out yet, unless the LangChain folks have connections at Google :) ) it throws an error indicating that the model doesn't exist.
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm#multimodality
Unknown model publishers/google/models/gemini-pro-vision; {'gs://google-cloud-aiplatform/schema/predict/instance/chat_generation_1.0.0.yaml': <class 'vertexai.language_models.ChatModel'>} (type=value_error)
### Motivation
Gemini has been out for a while and should presumably be supported by LangChain, as a whole package has already been made for it.
### Your contribution
I would be willing to make a pr but I'm not even sure what's the issue since the docs supposedly mention that it should be already supported. | support gemini on vertexai | https://api.github.com/repos/langchain-ai/langchain/issues/15222/comments | 9 | 2023-12-27T17:07:05Z | 2024-04-24T16:47:21Z | https://github.com/langchain-ai/langchain/issues/15222 | 2,057,600,249 | 15,222 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I need a mechanism to allow more control over the ANN search performed for a given RAG chain. Consider the initial example:
```
retriever = vectorstore.as_retriever()
template = """You're a helpful assistant who is great at code generation. Don't give me any explanation or summary. I'll give you some examples that may or may not be relevant, and I want you to use the examples to write code that solves the provided problem. Return only the code that solves the problem.
PROBLEM:
{problem}
EXAMPLES:
{context}
ANSWER:
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
chain = (
{"context": retriever, "problem": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
chain.invoke("Generate an example Python method that uses LCEL to write a CQL query")
```
This approach assumes that the question will be used for creating the embedding. However, consider something like this:
```
retriever = vectorstore.as_retriever(ann_query="LCEL, AstraDB, CQL")
```
In this situation, when the retriever is invoked to embed the query, instead of performing vector search on the embedding of the very wordy
> "Generate an example Python method that uses LCEL to write a CQL query"
I want vector search to perform ANN on:
> "LCEL, AstraDB, CQL"
so that I have a greater likelihood of having the right docs stuffed into the prompt for the LLM to solve the problem, which was:
> Generate an example Python method that uses LCEL to write a CQL query
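For clarity, here is a minimal sketch of how this separation can already be expressed with plain LCEL, reusing the `retriever`, `prompt` and `model` from the snippet above; the input keys (`problem`, `ann_query`) are my own invention, not an existing API.

```python
from operator import itemgetter

chain = (
    {
        "context": itemgetter("ann_query") | retriever,  # ANN runs on the short keyword query
        "problem": itemgetter("problem"),                # the LLM still sees the full problem
    }
    | prompt
    | model
    | StrOutputParser()
)

chain.invoke({
    "problem": "Generate an example Python method that uses LCEL to write a CQL query",
    "ann_query": "LCEL, AstraDB, CQL",
})
```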
### Motivation
RAG results can be poor when the human input is very wordy or contains more info (for the LLM) than we want the vector store to search for. We need a mechanism to allow separation between the vector search query and the LLM query.
### Your contribution
I will create a PR. | Enable manual override of vector search query for controlled RAG | https://api.github.com/repos/langchain-ai/langchain/issues/15221/comments | 1 | 2023-12-27T16:54:54Z | 2024-04-03T16:09:24Z | https://github.com/langchain-ai/langchain/issues/15221 | 2,057,589,837 | 15,221 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.341
OpenAI version: 1.3.5
Model: gpt-4-1106-preview
Python version:3.10.13
Platform: Celery worker in Docker Container
### Who can help?
@eyurtsev @hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am working on implementing LangChain agents in my Python project, which I run with Docker Compose. The project uses Celery, and I have multiple worker services that execute tasks from the queue. The entire setup works fine and all Celery tasks are executed as expected.
One of these workers is the agent worker, where I have configured a LangChain agent. I created a function that loads the tools, initializes the agent, and passes the agent input. Here's the full code of my **agent module**:
| Langchain agent not executing properly in Celery worker running as Docker container | https://api.github.com/repos/langchain-ai/langchain/issues/15220/comments | 9 | 2023-12-27T16:42:21Z | 2024-03-14T14:26:45Z | https://github.com/langchain-ai/langchain/issues/15220 | 2,057,579,438 | 15,220 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
As the title says: I haven't found anything about this in the docs. Docubot, please help.
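In the meantime, here is the minimal shape I believe a custom chat model takes. This is a hedged sketch based on my reading of `langchain_core`; verify the import path and hook names against your installed version.

```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import SimpleChatModel
from langchain_core.messages import BaseMessage


class EchoChatModel(SimpleChatModel):
    """Toy chat model that just echoes the last message back."""

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"

    def _call(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return f"You said: {messages[-1].content}"
```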
### Idea or request for content:
having a proper document could help | how to create a custom chat model | https://api.github.com/repos/langchain-ai/langchain/issues/15214/comments | 2 | 2023-12-27T13:27:25Z | 2024-04-03T16:09:19Z | https://github.com/langchain-ai/langchain/issues/15214 | 2,057,373,883 | 15,214 |
[
"langchain-ai",
"langchain"
] | ### System Info

I am planning to add a new param like "affection".
How can I set up the query/request body so it fills in the params here? (This is a LangServe setup!)

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How to handle the input variables in LCEL mode?
### Expected behavior
get the right place I put in when I new a variables in the prompt. | How to query with new variables in LCEL mode? | https://api.github.com/repos/langchain-ai/langchain/issues/15213/comments | 3 | 2023-12-27T12:51:07Z | 2024-04-03T16:09:14Z | https://github.com/langchain-ai/langchain/issues/15213 | 2,057,338,192 | 15,213 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS, M1 Pro
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code:
```
import os
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain.chat_models import ChatOllama
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
load_dotenv()
messages = [
SystemMessagePromptTemplate.from_template(
"You are a truthful, accurate AI agent that responds to the user's questions, given an AI paper by Apple."
),
HumanMessagePromptTemplate.from_template("What is the paper about, in summary?"),
]
qa_prompt = ChatPromptTemplate.from_messages(messages)
chat_model = ChatOllama(
model="mistral",
)
loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
pages = loader.load_and_split()
embeddings = OpenAIEmbeddings(api_key=os.getenv("OPENAI_API_KEY"))
print(pages[0])
db = None
if not os.path.exists("./faiss_index"):
db = FAISS.from_documents(pages, embeddings)
db.save_local("./faiss_index")
else:
db = FAISS.load_local("faiss_index", embeddings)
query = "What is the paper about?"
docs = db.similarity_search_with_score(query)
print(docs[0])
ConversationalRetrievalChain.from_llm(
llm=chat_model,
retriever=db.as_retriever(search_type="similarity", search_kwargs={"k": 0.8}),
verbose=True,
combine_docs_chain_kwargs={"prompt": qa_prompt},
return_source_documents=True,
)
```
additional files here: https://github.com/polooner/chatpdf/blob/main/main.py
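A hedged guess at the cause, given the error in the title: the prompt passed through `combine_docs_chain_kwargs` must expose a `context` input variable, because that is where the retrieved documents are stuffed. A sketch of a prompt that satisfies this:

```python
from langchain.prompts.chat import ChatPromptTemplate

qa_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a truthful, accurate AI agent that answers questions about an AI paper by Apple.\n"
     "Use only the following context:\n{context}"),
    ("human", "{question}"),
])
```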
### Expected behavior
An answer from the Chat Model | Error using ConversationalRetrievalChain.from_llm: "document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)" | https://api.github.com/repos/langchain-ai/langchain/issues/15210/comments | 1 | 2023-12-27T11:46:08Z | 2024-04-03T16:09:09Z | https://github.com/langchain-ai/langchain/issues/15210 | 2,057,277,681 | 15,210 |
[
"langchain-ai",
"langchain"
] | ### System Info
Baichuan Chat (with both Baichuan-Turbo and Baichuan-Turbo-192K models) has updated their APIs. There are breaking changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API but is still required in Langchain. Baichuan's Langchain integration needs to be updated to the latest version.
Also, we have released our new Baichuan-Turbo-192K API, and we are adding support for it.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/integrations/chat/baichuan
SECRET_KEY has been deprecated.
### Expected behavior
Baichuan Chat works normally. | Fix Baichuan's integration and introduce Baichuan-Turbo-192K API. | https://api.github.com/repos/langchain-ai/langchain/issues/15206/comments | 1 | 2023-12-27T10:21:21Z | 2024-04-03T16:09:04Z | https://github.com/langchain-ai/langchain/issues/15206 | 2,057,190,266 | 15,206 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain
with a callback handler for streaming responses back.
```
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=chat,
    retriever=MyVectorStoreRetriever(
        vectorstore=vectordb,
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
    ),
    return_source_documents=True,
    rephrase_question=False,
    return_generated_question=False,
)

response = qa_chain(
    {
        "question": user_input,
        "chat_history": chat_history,
    },
    callbacks=[stream_handler],
)
```
```
class StreamHandler(BaseCallbackHandler):
def __init__(self):
self.text = ""
def on_llm_new_token(self, token: str, **kwargs: Any):
# Initialize old_text
old_text = self.text
print("old text ", old_text)
# Check if the token is not part of the prompts before adding it to the queue
print("token is", token)
if token is not None and token != "":
self.text += token
# Calculate the new content since the last emission
new_content = self.text[len(old_text) :]
socketio.emit("update_response", {"response": new_content})
```
I have set rephrase_question and return_generated_question to False.
Even so, the streamed response contains the rephrased question, although the final response from the LLM does not.
What could be the reason? Please suggest an appropriate solution.
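A hedged workaround sketch: callbacks passed to the chain call also fire for the condense-question LLM run, which is where the rephrased-question tokens come from. Attaching the handler only to the answering model and giving the rephrasing step its own, non-streaming LLM should avoid this; `ChatOpenAI` is an assumption standing in for whatever `chat` is.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

condense_llm = ChatOpenAI(temperature=0)  # no streaming handler attached
answer_llm = ChatOpenAI(streaming=True, callbacks=[StreamHandler()])

qa_chain = ConversationalRetrievalChain.from_llm(
    llm=answer_llm,
    condense_question_llm=condense_llm,  # rephrasing tokens no longer reach the handler
    retriever=MyVectorStoreRetriever(
        vectorstore=vectordb,
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
    ),
    return_source_documents=True,
    rephrase_question=False,
    return_generated_question=False,
)
response = qa_chain({"question": user_input, "chat_history": chat_history})
```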
### Suggestion:
_No response_ | Issue: Streaming Response contains the rephrased question in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15205/comments | 3 | 2023-12-27T10:20:47Z | 2024-04-03T16:08:59Z | https://github.com/langchain-ai/langchain/issues/15205 | 2,057,189,374 | 15,205 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: MacOS Sonoma
Python: 3.11.6
LangChain: 0.0.352
llama-cpp-python = 0.2.25
pydantic: 1.10.13 (I know that it is not the latest version, but version 1 is still officially supported)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to use LlamaCpp in conjunction with grammar, I get an error from pydantic. The following code snippet was adapted from the [docs](https://python.langchain.com/docs/integrations/llms/llamacpp#grammars) so that a `LlamaGrammar` object is passed instead of the path to the grammar file.
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.llamacpp import LlamaCpp
from llama_cpp.llama_grammar import LlamaGrammar
from pydantic import BaseModel
class SomeSchema(BaseModel):
some_field: str
LlamaCpp(
model_path="some model",
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True,
grammar=LlamaGrammar.from_json_schema(SomeSchema.schema_json()),
)
# Fails with:
# pydantic.errors.ConfigError: field "grammar" not yet prepared so type is still a ForwardRef, you might need to call LlamaCpp.update_forward_refs().
```
The following works, though (and the grammar object is used properly):
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.llamacpp import LlamaCpp
from llama_cpp.llama_grammar import LlamaGrammar
from pydantic import BaseModel
class SomeSchema(BaseModel):
some_field: str
model = LlamaCpp(
model_path="some model",
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True,
)
model.grammar = LlamaGrammar.from_json_schema(SomeSchema.schema_json())
```
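For what it's worth, a hedged and untested workaround sketch that follows the hint in the error message itself: resolve the `ForwardRef("LlamaGrammar")` on the model class before instantiating it (reusing `SomeSchema` from above).

```python
from langchain_community.llms.llamacpp import LlamaCpp
from llama_cpp.llama_grammar import LlamaGrammar

LlamaCpp.update_forward_refs(LlamaGrammar=LlamaGrammar)

model = LlamaCpp(
    model_path="some model",
    grammar=LlamaGrammar.from_json_schema(SomeSchema.schema_json()),
)
```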
### Expected behavior
It should be possible to pass a `LlamaGrammar` object in the `__init__` of `LlamaCpp`, as per its [definition](https://github.com/langchain-ai/langchain/blob/f36ef0739dbb548cabdb4453e6819fc3d826414f/libs/community/langchain_community/llms/llamacpp.py#L129)
I had a quick look at the pydantic [documentation regarding this problem](https://docs.pydantic.dev/1.10/usage/postponed_annotations/), but I couldn't find the postponed annotation in question. | Pydantic forward ref issue when creating using LlamaCpp with grammar | https://api.github.com/repos/langchain-ai/langchain/issues/15204/comments | 1 | 2023-12-27T10:11:11Z | 2024-04-03T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15204 | 2,057,179,711 | 15,204 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/render.py
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/convert_to_openai.py
For backward compatibility purposes, should we proceed with a direct import?
### Suggestion:
_No response_ | Issue: Identical Content in Two Files | https://api.github.com/repos/langchain-ai/langchain/issues/15203/comments | 1 | 2023-12-27T09:49:59Z | 2024-04-03T16:08:49Z | https://github.com/langchain-ai/langchain/issues/15203 | 2,057,154,943 | 15,203 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchian=0.0.352
qianfan=0.2.4
When I tried the agent usage from this [video](https://learn.deeplearning.ai/langchain/lesson/7/agents), I changed the model from gpt-3.5-turbo to ERNIE-Bot, and the agent output showed the following error:
```bash
> Entering new AgentExecutor chain...
Could not parse LLM output: xxxxxxxxx
Observation: Invalid or incomplete response
Thought: Could not parse LLM output: xxxxx
Observation: Invalid or incomplete response
...
```
Also, ERNIE-Bot can't call the llm-math tool correctly.
I wonder whether the problem is a lack of capability in the Qianfan model itself, a problem in the Qianfan integration code,
or something wrong with my usage.
### Who can help?
@danielhjz
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**my code**
```python
llm = QianfanChatEndpoint(
temperature=0.000001,
model='ERNIE-Bot'
)
tools = load_tools(
["llm-math", "wikipedia"],
llm=llm
)
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose=True
)
agent("300的1/4是多少?")
```
**code in the video**
```python
# code in the video
llm = ChatOpenAI(
temperature=0
)
tools = load_tools(
["llm-math", "wikipedia"],
llm=llm
)
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose=True
)
agent("What is the 25% of 300?")
```
### Expected behavior
**Run by ChatOpenAI(temperature=0)**
````bash
> Entering new AgentExecutor chain...
Thought: We need to calculate 25% of 300, which means we need to multiply 300 by 0.25.
Action:
```
{
"action": "Calculator",
"action_input": "300*0.25"
}
```
Observation: Answer: 75.0
Thought:The calculator tool returned the answer 75.0, which is correct.
Final Answer: 25% of 300 is 75.0.
> Finished chain.
{'input': 'What is the 25% of 300?', 'output': '25% of 300 is 75.0.'}
````
| "Could not parse LLM output" when using QianfanChatEndpoint in agent. | https://api.github.com/repos/langchain-ai/langchain/issues/15199/comments | 2 | 2023-12-27T08:49:02Z | 2024-04-04T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15199 | 2,057,093,818 | 15,199 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
conversational_qa_chain = (
_inputs | _context | ConfigurableTokenLimitProcessor(model="gpt_35_turbo").configurable_fields(
model=ConfigurableFieldSingleOption(
id="model",
name="model",
options={
"gpt_35_turbo": "gpt_35_turbo",
"gpt_35_turbo_1106": "gpt_35_turbo_1106",
"gpt_4_1106_preview": "gpt_4_1106_preview",
"gpt_4_32k": "gpt_4_32k"
},
default="gpt_35_turbo",
)
) | ANSWER_PROMPT | llm | StrOutputParser()
)
```
```python
chain = conversational_qa_chain.with_types(input_type=ChatHistory).with_fallbacks([RunnableLambda(when_all_is_lost)])
```
```python
add_routes(app,
chain,
enable_feedback_endpoint=True,
path="/test",
config_keys=["llm", "collection_name", "model", "configurable"]
)
```
This code is served with LangServe. When I send a request to `/test/stream` from the playground, the response is no longer streamed token by token the way it was before I added `with_fallbacks`; instead, the whole response appears at once. What is the reason?
### Suggestion:
Even if i add `with_fallbacks`, it should be streamed on the screen for each token. | Issue: lcel langserve with_fallbacks streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15195/comments | 4 | 2023-12-27T04:53:43Z | 2024-05-22T16:07:52Z | https://github.com/langchain-ai/langchain/issues/15195 | 2,056,910,699 | 15,195 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I apologize for the naive question; it's not about an error or a bug.
I'm trying to implement routing by following the guide here: https://python.langchain.com/docs/modules/chains/foundational/router
However, I can't figure out how to use RAG.
I tried changing this final chain from the guide:
```python
final_chain = (
    RunnablePassthrough.assign(topic=itemgetter("input") | classifier_chain)
    | prompt_branch
    | ChatOpenAI()
    | StrOutputParser()
)
```
to this:
```python
final_chain = (
    {
        "context": retriever,
        "topic": itemgetter("input") | classifier_chain,
    }
    | prompt_branch
    | llm
    | StrOutputParser()
)
```
But I get the following error:
```shell
File "/Users/user/Library/Python/3.9/lib/python/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
```
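A hedged sketch of one way I imagine this could be wired: keep the input as a dict, classify and retrieve off the raw string with `itemgetter`, and only then hand everything to the routed prompt. This assumes the routed prompts behind `prompt_branch` have been extended with a `{context}` variable; `classifier_chain`, `retriever`, `prompt_branch` and `llm` are the objects from my snippets above.

```python
from operator import itemgetter

from langchain.schema.runnable import RunnablePassthrough

final_chain = (
    RunnablePassthrough.assign(topic=itemgetter("input") | classifier_chain)
    | RunnablePassthrough.assign(context=itemgetter("input") | retriever)
    | prompt_branch
    | llm
    | StrOutputParser()
)

final_chain.invoke({"input": "What does the knowledge base say about routing?"})
```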
### Suggestion:
_No response_ | Issue: <Please tell me how to combine Routing and RAG in a chain.> | https://api.github.com/repos/langchain-ai/langchain/issues/15193/comments | 5 | 2023-12-27T04:29:43Z | 2024-04-16T16:20:16Z | https://github.com/langchain-ai/langchain/issues/15193 | 2,056,898,273 | 15,193 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Why can't CSVLoader load? Error:

```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 from langchain.document_loaders.csv_loader import CSVLoader
3 loader = CSVLoader(file_path='./data/bugreport.csv', csv_args={
4 'delimiter': ',',
5 'quotechar': '"',
6 'fieldnames': ["URL","Resolved","Backport_of","Submitted","Status","CPU","Priority","Sub_Component","Updated","Fix_Versions","Affected_Version","OS","Type","Resolution","Component"]
7 })
9 data = loader.load()
File D:\miniconda\lib\site-packages\langchain\document_loaders\__init__.py:49
47 from langchain.document_loaders.bigquery import BigQueryLoader
48 from langchain.document_loaders.bilibili import BiliBiliLoader
---> 49 from langchain.document_loaders.blackboard import BlackboardLoader
50 from langchain.document_loaders.blob_loaders import (
51 Blob,
52 BlobLoader,
53 FileSystemBlobLoader,
54 YoutubeAudioLoader,
55 )
56 from langchain.document_loaders.blockchain import BlockchainDocumentLoader
File D:\miniconda\lib\site-packages\langchain\document_loaders\blackboard.py:1
----> 1 from langchain_community.document_loaders.blackboard import BlackboardLoader
3 __all__ = ["BlackboardLoader"]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\__init__.py:51
49 from langchain_community.document_loaders.bigquery import BigQueryLoader
50 from langchain_community.document_loaders.bilibili import BiliBiliLoader
---> 51 from langchain_community.document_loaders.blackboard import BlackboardLoader
52 from langchain_community.document_loaders.blob_loaders import (
53 Blob,
54 BlobLoader,
55 FileSystemBlobLoader,
56 YoutubeAudioLoader,
57 )
58 from langchain_community.document_loaders.blockchain import BlockchainDocumentLoader
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\blackboard.py:10
7 from langchain_core.documents import Document
9 from langchain_community.document_loaders.directory import DirectoryLoader
---> 10 from langchain_community.document_loaders.pdf import PyPDFLoader
11 from langchain_community.document_loaders.web_base import WebBaseLoader
14 class BlackboardLoader(WebBaseLoader):
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\pdf.py:18
16 from langchain_community.document_loaders.base import BaseLoader
17 from langchain_community.document_loaders.blob_loaders import Blob
---> 18 from langchain_community.document_loaders.parsers.pdf import (
19 AmazonTextractPDFParser,
20 DocumentIntelligenceParser,
21 PDFMinerParser,
22 PDFPlumberParser,
23 PyMuPDFParser,
24 PyPDFium2Parser,
25 PyPDFParser,
26 )
27 from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
29 logger = logging.getLogger(__file__)
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\__init__.py:5
3 from langchain_community.document_loaders.parsers.grobid import GrobidParser
4 from langchain_community.document_loaders.parsers.html import BS4HTMLParser
----> 5 from langchain_community.document_loaders.parsers.language import LanguageParser
6 from langchain_community.document_loaders.parsers.pdf import (
7 PDFMinerParser,
8 PDFPlumberParser,
(...)
11 PyPDFParser,
12 )
14 __all__ = [
15 "BS4HTMLParser",
16 "DocAIParser",
(...)
24 "PyPDFParser",
25 ]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\language\__init__.py:1
----> 1 from langchain_community.document_loaders.parsers.language.language_parser import (
2 LanguageParser,
3 )
5 __all__ = ["LanguageParser"]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\language\language_parser.py:24
18 try:
19 from langchain.text_splitter import Language
21 LANGUAGE_EXTENSIONS: Dict[str, str] = {
22 "py": Language.PYTHON,
23 "js": Language.JS,
---> 24 "cobol": Language.COBOL,
25 }
27 LANGUAGE_SEGMENTERS: Dict[str, Any] = {
28 Language.PYTHON: PythonSegmenter,
29 Language.JS: JavaScriptSegmenter,
30 Language.COBOL: CobolSegmenter,
31 }
32 except ImportError:
File D:\miniconda\lib\enum.py:437, in EnumMeta.__getattr__(cls, name)
435 return cls._member_map_[name]
436 except KeyError:
--> 437 raise AttributeError(name) from None
AttributeError: COBOL
```
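A hedged reading of the traceback: `langchain_community` expects `Language.COBOL`, which only exists in newer `langchain` releases, so this looks like a version mismatch between the two packages. A quick check before upgrading both together:

```python
import langchain
import langchain_community
from langchain.text_splitter import Language

print(langchain.__version__, langchain_community.__version__)
print([member.name for member in Language])  # "COBOL" should be listed on a compatible install
```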
### Suggestion:
_No response_ | Issue: <CSVLoader can't load> | https://api.github.com/repos/langchain-ai/langchain/issues/15192/comments | 9 | 2023-12-27T03:54:38Z | 2024-03-01T05:21:04Z | https://github.com/langchain-ai/langchain/issues/15192 | 2,056,881,303 | 15,192 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I build a chain with LCEL and handle errors at the end with `with_fallbacks`, but unlike before adding `with_fallbacks`, streaming no longer works and the whole response arrives at once. Can I still stream while using `with_fallbacks`?
### Suggestion:
lcel `with_fallbacks` streaming | Issue: lcel `with_fallbacks` streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15191/comments | 1 | 2023-12-27T03:40:56Z | 2023-12-27T04:53:55Z | https://github.com/langchain-ai/langchain/issues/15191 | 2,056,875,193 | 15,191 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When a chain built with LCEL hits the API key's rate limit, I want to dynamically switch to another API key and retry so the client still gets a normal response. Is there a way to do this?
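One hedged way to express "retry with another key" declaratively is a fallback model configured with a second API key; the fallback only runs when the primary raises (for example a rate-limit error). The key names are placeholders and `prompt` is whatever the chain already uses.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser

primary = ChatOpenAI(openai_api_key="KEY_A", max_retries=0)
backup = ChatOpenAI(openai_api_key="KEY_B")

llm = primary.with_fallbacks([backup])
chain = prompt | llm | StrOutputParser()
```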
### Suggestion:
Dynamically catch an error in lcel, change the api key, and try again | Issue: openai api key rate limit error handing | https://api.github.com/repos/langchain-ai/langchain/issues/15190/comments | 2 | 2023-12-27T03:37:31Z | 2024-04-03T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15190 | 2,056,873,561 | 15,190 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I want to contribute to one of the libs and started a fork. Here are the steps I took:
I am trying to add a new feature, but first I need to experiment with it. I am unsure how to get started writing some short scripts that use the libs.
1. I went into ```libs/experimental```, ```libs/core```, ```libs/community``` ```libs/langchain``` and ran ```poetry install``` in all of them.
2. I start an environment from ```libs/langchain``` with ```poetry shell```
3. I created a file inside of it, made some short code:
```
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
# load the document and split it into chunks
loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
documents = loader.load_and_split()
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into Chroma
db = Chroma.from_documents(documents, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
and got the following error:
```
Traceback (most recent call last):
File "/Users/polo/langchain/libs/langchain/test.py", line 9, in <module>
loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
File "/Users/polo/langchain/libs/community/langchain_community/document_loaders/pdf.py", line 154, in __init__
raise ImportError(
ImportError: pypdf package not found, please install it with `pip install pypdf`
```
This is my first time in a Python project like this, and I am unsure how to get started using all the different packages while working in a fork of the repository. If anyone can guide me, I would love to make a PR on this; it is quite daunting for beginners to get oriented and start contributing!
### Idea or request for content:
_No response_ | DOC: How to write my own short scripts within a fork to test some code? | https://api.github.com/repos/langchain-ai/langchain/issues/15177/comments | 2 | 2023-12-26T18:42:18Z | 2024-05-04T08:50:34Z | https://github.com/langchain-ai/langchain/issues/15177 | 2,056,625,948 | 15,177 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: Windows
Python: 3.9.10
Langchain version: 0.0.352
openai version: 1.6.1
### Who can help?
@BeautyyuYanli
@baskaryan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.vectorstores.pgvecto_rs import PGVecto_rs
from langchain.embeddings import AzureOpenAIEmbeddings
from dotenv import dotenv_values
import os
```
```
config = dotenv_values(".env")
# os.environ["TIKTOKEN_CACHE_DIR"] = "./cache/tiktoken/"
embeddings = AzureOpenAIEmbeddings(
max_retries=3,
timeout=60,
api_key=config["api_key"],
model="text-embedding-ada-002",
openai_api_type=config["api_type"],
azure_endpoint=config["api_base"]
)
URL = "postgresql+psycopg://{username}:{password}@{host}:{port}/{db_name}".format(
port=config["db_port"],
host=config["db_host"],
username=config["db_user"],
password=config["db_pass"],
db_name=config["db_name"],
)
db = PGVecto_rs(
embedding=embeddings,
db_url=URL,
dimension=1536, # text-embedding-ada-002
collection_name="test",
)
```
```
docs = ["a text about mathematics", "a text about physics"]
meta = [{"id": "1"}, {"id": "2"}]
db.add_texts(
texts=docs,
metadatas=meta
)
retr = db.as_retriever(
search_kwargs = {
"k": 1,
"filter": {"id": "1"}
}
)
```
```
retr.invoke("physics")
```
>[Document(page_content='a text about physics', metadata={'id': '2'})]
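As a hedged stop-gap while the store-side filter is not applied, one can over-fetch and post-filter the returned documents by metadata in Python (reusing `db` from above):

```python
def filtered_search(query: str, flt: dict, k: int = 1):
    docs = db.similarity_search(query, k=10)  # over-fetch, then post-filter
    keep = [d for d in docs if all(d.metadata.get(key) == val for key, val in flt.items())]
    return keep[:k]

filtered_search("physics", {"id": "1"})
```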
### Expected behavior
The search should only be performed on documents where the `metadata` field contains `{"id": "1"}`.
In this case, adding a filter makes no difference to the retrieval. | pgvecto.rs: retriever filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/15173/comments | 2 | 2023-12-26T14:35:49Z | 2024-01-15T19:42:01Z | https://github.com/langchain-ai/langchain/issues/15173 | 2,056,466,694 | 15,173 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(new_project_qa, query, name, not_uuid):
    check = query.lower()
    result = new_project_qa(query)
    relevant_document = result['source_documents']
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    # print(context_text, "context_text")
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']

    if check in greetings:
        custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""

    # Create the PromptTemplate
    custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    return formatted_prompt


def retreival_qa_chain(chroma_db_path):
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    llm = ChatOpenAI(temperature=0.1)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
    return qa
````
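For the memory part of the question, here is a hedged sketch of how conversation memory could be added to this setup: swap `RetrievalQA` for `ConversationalRetrievalChain` and hand it a `ConversationBufferMemory` (`output_key="answer"` is needed when source documents are also returned). The function name and parameters below are assumptions, not a description of the existing project.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

def retrieval_qa_chain_with_memory(chroma_db_path):
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    llm = ChatOpenAI(temperature=0.1)
    memory = ConversationBufferMemory(
        memory_key="chat_history", output_key="answer", return_messages=True
    )
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectordb.as_retriever(search_kwargs={"k": 2}),
        memory=memory,
        return_source_documents=True,
    )
```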
### Suggestion:
_No response_ | Issue: Explain Memory and How it's implemented in my Case. | https://api.github.com/repos/langchain-ai/langchain/issues/15170/comments | 4 | 2023-12-26T12:45:59Z | 2023-12-27T05:34:44Z | https://github.com/langchain-ai/langchain/issues/15170 | 2,056,381,701 | 15,170 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to build RAG based on Mistral 7B model
The model is already hosted, and I provide llm_url in the custom LLM setup
I am able to make a request and get a response from the URL using the `llm._call` method; however, something goes wrong with the callbacks when the LLM is used in the `RetrievalQA.from_chain_type` method.
It gives me below error
`'Mistral7B_LLM' object has no attribute 'callbacks'`
Am I missing anything in the code below?
```
from pydantic import Extra
import requests
from typing import Any, List,Dict, Callable, Type, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM, BaseLLM
class Mistral7B_LLM(LLM):
    def __init__(self):
        self.__post_init__()

    def __post_init__(self) -> None:
        def _import_mistral7B_llm() -> Any:
            from svcs.vector.src.controllers.llm.mistral7B_serving import Mistral7B_LLM
            return Mistral7B_LLM

        def __getattr__() -> Any:
            return Mistral7B_LLM()

        def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
            return {
                "Mistral7B_LLM": _import_mistral7B_llm,
            }

        __all__ = ["Mistral7B_LLM"]

    class Config:
        extra = Extra.forbid

    @property
    def _llm_type(self) -> str:
        return "Mistral7B_LLM"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        max_new_tokens: Optional[int] = 156,
        temperature: Optional[float] = 0.7,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        payload = {
            "inputs": [prompt],
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        }
        headers = {"Content-Type": "application/json"}
        llm_url = 'my url'
        response = requests.post(llm_url, json=payload, headers=headers, verify=False)
        response.raise_for_status()
        # print("API Response:", response.json())
        answer = response.json()["outputs"].split("[/INST]")[-1]
        return answer

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"llmUrl": self.llm_url}
```
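A hedged sketch of what I suspect is going on: `LLM` subclasses are pydantic models, so overriding `__init__` without calling `super().__init__()` skips the attribute setup, which matches the missing `callbacks` attribute. A minimal subclass without a custom `__init__` (the URL field is a placeholder):

```python
from typing import Any, List, Mapping, Optional

import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class Mistral7BServingLLM(LLM):
    llm_url: str = "my url"  # placeholder

    @property
    def _llm_type(self) -> str:
        return "mistral7b-serving"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        payload = {"inputs": [prompt], "max_new_tokens": 156, "temperature": 0.7}
        response = requests.post(self.llm_url, json=payload, verify=False)
        response.raise_for_status()
        return response.json()["outputs"].split("[/INST]")[-1]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"llm_url": self.llm_url}
```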
### Suggestion:
_No response_ | Issue: Custom Mistral based LLM from API for RetrievalQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/15168/comments | 5 | 2023-12-26T11:56:09Z | 2024-06-26T12:00:33Z | https://github.com/langchain-ai/langchain/issues/15168 | 2,056,342,401 | 15,168 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Given a tool that generates a dataframe, how can I pass it through the chain?
```
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
prompt = ChatPromptTemplate.from_messages(
[
("system", """
You are a helpful assistant for marketing department.
"""),
MessagesPlaceholder(variable_name="history"),
("user", """
Provide the answer to the question with 3 sentences long.
If the response is related to video-on-demand. Please make sure you return the content id to the answers
Question: {input}
"""),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
agent = (
{
"input": lambda x: x["input"],
"dataframe": <<my_dataframe>>,
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
"history": lambda x: x['history']
}
| prompt
| llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
AgentExecutor(agent=agent, tools=tools, verbose=True)
```
Is it possible?
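A hedged sketch of one option: instead of threading the dataframe through the runnable input, let a tool close over it; the agent then calls the tool and only the tool touches the dataframe. The dataframe and column names below are placeholders.

```python
import pandas as pd
from langchain.tools import tool

df = pd.DataFrame({"content_id": [1, 2], "title": ["A", "B"]})  # stand-in for <<my_dataframe>>

@tool
def lookup_content(keyword: str) -> str:
    """Look up video-on-demand rows whose title contains the keyword."""
    hits = df[df["title"].str.contains(keyword, case=False)]
    return hits.to_string(index=False)

tools = [lookup_content]  # then bind and execute exactly as in the snippet above
```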
### Suggestion:
_No response_ | Issue: Pass additional data through AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15165/comments | 3 | 2023-12-26T10:47:02Z | 2024-06-19T08:30:56Z | https://github.com/langchain-ai/langchain/issues/15165 | 2,056,290,653 | 15,165 |
[
"langchain-ai",
"langchain"
] | ### System Info
python3.10
langchain 0.0.333
### Who can help?
@hwchase17 @agola11 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [x] Async
### Reproduction
1. I tried to use the asynchronous call chain=ConversationalRetrievalChain.from_llm combined with the local knowledge base to find the answer.
2. When chat_history is empty, i.e. chain.acall({"question": query, "chat_history": []}), the streaming output is returned correctly.
3. When I pass in a non-empty chat_history, the streamed result is new_question, the rephrased question, and not the answer I want.
code:
```python
db = FAISS.load_local(COMIXGPT_VECTOR, embeddings)
retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": score_threshold, "k": VECTOR_SEARCH_TOP_K},
)
prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=prompt_template,
)
chain = ConversationalRetrievalChain.from_llm(
    llm=model,
    chain_type="stuff",
    retriever=retriever,
    # memory=memory,
    return_source_documents=True,
    return_generated_question=True,
    combine_docs_chain_kwargs={'prompt': prompt},
    condense_question_llm=model,
    verbose=True,
)
task = asyncio.create_task(
    wrap_done(
        chain.acall({"question": query, "chat_history": chat_history}),
        callback.done,
    ),
)
if stream:
    async for token in callback.aiter():
        # Use server-sent-events to stream the response
        yield json.dumps({"answer": token}, ensure_ascii=False)
    yield json.dumps({"docs": source_documents}, ensure_ascii=False)
else:
    answer = ""
    async for token in callback.aiter():
        answer += token
    yield json.dumps({"answer": answer, "docs": source_documents}, ensure_ascii=False)
await task

return StreamingResponse(
    knowledge_base_chat_iterator(
        query=query,
        top_k=top_k,
        history=history,
        chat_history=chat_history,
        model_name=model_name,
        prompt_name=prompt_name,
    ),
    media_type="text/event-stream",
)
```
### Expected behavior
I don't know why the LLM was called twice, or why the rephrased question was returned instead of the assistant's answer. The call log is as follows:
log:
> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
Human: hello
Assistant: Hi there! Is there anything I can help you with? Youre welcome, just tell me~
Human: hello hello make friends
Assistant: ok
Follow Up Input: hello!
Standalone question:
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Chat History:
Human: hello
Assistant: Hi there! Is there anything I can help you with? Youre welcome, just tell me~
Human: hello hello make friends
Assistant: ok
Question: How can I make friends?
Helpful Answer:
2023-12-26 17:27:43,269 - _client.py[line:1758] - INFO: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
> Finished chain.
> Finished chain.
| 【BUG】ConversationalRetrievalChain.from_llm and pass in chat_history, there is a problem with the callback. | https://api.github.com/repos/langchain-ai/langchain/issues/15164/comments | 2 | 2023-12-26T09:32:29Z | 2024-01-10T03:36:51Z | https://github.com/langchain-ai/langchain/issues/15164 | 2,056,226,273 | 15,164 |
[
"langchain-ai",
"langchain"
] | Is it correct to use CharacterTextSplitter with Confluence?
### Issue you'd like to raise.
```python
confluence_url = config.get("confluence_url", None)
username = config.get("username", None)
api_key = config.get("api_key", None)
space_key = config.get("space_key", None)
documents = []
embedding = OpenAIEmbeddings()

loader = ConfluenceLoader(
    url=confluence_url,
    username=username,
    api_key=api_key,
)
for space_key in space_key:
    try:
        documents.extend(loader.load(space_key=space_key, include_attachments=True, limit=100))
    except:
        documents = []

text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
```
### Suggestion:
_No response_ | Issue: How it can be splitted ? | https://api.github.com/repos/langchain-ai/langchain/issues/15162/comments | 1 | 2023-12-26T07:41:33Z | 2023-12-26T10:37:39Z | https://github.com/langchain-ai/langchain/issues/15162 | 2,056,133,474 | 15,162 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I set `verbose=True` on chains that use ConversationBufferMemory as memory and **redirect** the output to a txt/log file, the returned messages show that ConversationBufferMemory saves the same round of conversation twice. You can see an example later in this issue.
**This problem does not happen if I just print the return messages in the terminal instead of redirecting them to a file.**
Does ConversationBufferMemory actually save the conversation twice? If so, this wastes half of the input tokens sent to the LLM. What can I set so that each round of conversation is saved only once?
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name = 'gpt-4-1106-preview', temperature = 0.0)
def_memory = ConversationBufferMemory(memory_key="history", return_messages=True)
def_chain = ConversationChain(
llm = llm,
memory = def_memory,
verbose = True)
def_queries = ['When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].',
'You can see duplication in memory of this query.']
for def_q in def_queries:
    ret = def_chain.run(def_q)
    def_memory.save_context({"input": def_q}, {"output": ret})
# pls redirect the output into some .txt or .log file
```
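My hedged reading of the duplication: `ConversationChain` already calls `memory.save_context()` internally after every run, so the explicit `save_context()` in the loop stores each round a second time. Dropping the manual call should leave exactly one copy per round:

```python
for def_q in def_queries:
    ret = def_chain.run(def_q)  # the chain saves (def_q, ret) into def_memory by itself
```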
### Expected behavior
### Below is my redirected gh.log file; I bold the duplicated part
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
[]
Human: When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].
AI:

> Finished chain.

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
**[HumanMessage(content='When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].'), AIMessage(content='[UNDERSTOOD]'), HumanMessage(content='When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].'), AIMessage(content='[UNDERSTOOD]')]**
Human: You can see duplication in memory of this query.
AI:

> Finished chain.
| Does ConversationBufferMemory actually save conversation twice? | https://api.github.com/repos/langchain-ai/langchain/issues/15161/comments | 2 | 2023-12-26T07:21:01Z | 2024-01-02T06:47:11Z | https://github.com/langchain-ai/langchain/issues/15161 | 2,056,117,735 | 15,161 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using the OpenAI function-calling agent, and the GPT LLM often produces bad tool parameters. I want to achieve this: pass certain params to all tools through some path so that, before every tool is executed, I can check whether the LLM-produced params are correct, or simply substitute the trusted params I already have.
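A hedged sketch: a callback can at least observe and validate the arguments the LLM produced before each tool runs (`on_tool_start` cannot rewrite them, so overriding is better done inside the tool itself). `agent_executor` is assumed to exist.

```python
from typing import Any, Dict

from langchain.callbacks.base import BaseCallbackHandler


class ToolArgChecker(BaseCallbackHandler):
    def on_tool_start(self, serialized: Dict[str, Any], input_str: str, **kwargs: Any) -> None:
        print(f"Tool {serialized.get('name')} called with: {input_str}")
        # raise or log here if the arguments look wrong


agent_executor.invoke({"input": "..."}, config={"callbacks": [ToolArgChecker()]})
```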
### Suggestion:
_No response_ | Issue: i want to use langchain callbacks to pass a tool parameter to it? what should i do? | https://api.github.com/repos/langchain-ai/langchain/issues/15160/comments | 1 | 2023-12-26T06:56:58Z | 2024-04-02T16:07:09Z | https://github.com/langchain-ai/langchain/issues/15160 | 2,056,099,364 | 15,160 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using Qdrant as a retriever, how can I retrieve the relevant documents together with their similarity scores? So far I don't see any retriever method that returns both the documents and the scores. However, if I use the vector store to run a similarity search directly, I do have the option to get the documents with their corresponding scores. Isn't there a way to achieve the same thing via the retriever?
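A hedged workaround sketch: the base retriever interface only returns `Document`s, so one option is a thin wrapper that calls `similarity_search_with_score` on the vector store and copies the score into each document's metadata (`qdrant` is the vector store instance).

```python
from typing import List

from langchain.schema import Document

def retrieve_with_scores(query: str, k: int = 4) -> List[Document]:
    docs_and_scores = qdrant.similarity_search_with_score(query, k=k)
    for doc, score in docs_and_scores:
        doc.metadata["score"] = score
    return [doc for doc, _ in docs_and_scores]
```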
### Suggestion:
_No response_ | Issue: When using Qdrant as retriever, how to retrieve the relevant documents with the similarity score? | https://api.github.com/repos/langchain-ai/langchain/issues/15158/comments | 4 | 2023-12-26T06:24:17Z | 2024-04-02T16:07:04Z | https://github.com/langchain-ai/langchain/issues/15158 | 2,056,076,604 | 15,158 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I've been wondering about this part of the code, which defines the `cypher generation template` for LangChain with a Neo4j graph database, taken from the Neo4j DB QA chain documentation:
```python
from langchain.prompts.prompt import PromptTemplate
CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
Examples: Here are a few examples of generated Cypher statements for particular questions:
# How many people played in Top Gun?
MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors
The question is:
{question}"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0),
graph=graph,
verbose=True,
cypher_prompt=CYPHER_GENERATION_PROMPT,
)
```
I just want to ask: what do the `schema` and `question` entries in the **input_variables** parameter of `PromptTemplate` refer to?
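My hedged understanding, for anyone who can confirm: `schema` is filled in automatically by `GraphCypherQAChain` from the connected Neo4j graph (roughly `graph.get_schema`), and `question` is whatever string is passed to the chain, so neither is supplied by hand:

```python
print(graph.get_schema)  # roughly what ends up in {schema}
chain.run("How many people played in Top Gun?")  # this string becomes {question}
```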
### Idea or request for content:
Please explain what schema and question refers to, did schema from our connected neo4j database and question is a text we pass into `chain.run("text input")`. Since i'm a little bit confused with documentation itself and need some explanation. Maybe use an example from it to explain will be much understanable | DOC: Need some clarification on Neo4j DB QA chain documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15157/comments | 3 | 2023-12-26T04:36:18Z | 2024-04-02T16:06:59Z | https://github.com/langchain-ai/langchain/issues/15157 | 2,056,019,228 | 15,157 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.352
Langchain experimental Version: 0.0.47
Python : 3.10
Ubuntu : 22.04
Poetry is being used
**Code: `test.py`**
```python
import json
from langchain.schema import HumanMessage
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOllama
chat_model = ChatOllama(model="mistral:instruct")
json_schema = {
"title": "Person",
"description": "Identifying information about a person.",
"type": "object",
"properties": {
"name": {"title": "Name", "description": "The person's name", "type": "string"},
"age": {"title": "Age", "description": "The person's age", "type": "integer"},
"fav_food": {
"title": "Fav Food",
"description": "The person's favorite food",
"type": "string",
},
},
"required": ["name", "age"],
}
messages = [
HumanMessage(
content="Please tell me about a person using the following JSON schema:"
),
HumanMessage(content=json.dumps(json_schema, indent=2)),
HumanMessage(
content="Now, considering the schema, tell me about a person named John who is 35 years old and loves pizza."
),
]
chat_model_response = chat_model(messages)
```
**Error:**
```sh
Traceback (most recent call last):
File "test.py", line 35, in <module>
chat_model_response = chat_model(messages)
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 636, in __call__
generation = self.generate(
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 382, in generate
raise e
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 372, in generate
self._generate_with_cache(
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 528, in _generate_with_cache
return self._generate(
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 209, in _generate
final_chunk = self._chat_stream_with_aggregation(
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 168, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 155, in _create_chat_stream
yield from self._create_stream(
File ".venv/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 198, in _create_stream
raise OllamaEndpointNotFoundError(
langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404.
```
I checked that Ollama is running on port 11434 and it is working fine, but I am still seeing the issue.
@hwchase17 @agola11
Need some help on this.
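A hedged debugging sketch: a 404 from the Ollama endpoint usually means either the model tag is not pulled or the server is too old to expose the `/api/chat` route, and both can be checked directly against the local daemon:

```python
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])  # "mistral:instruct" should appear here

r = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "mistral:instruct", "messages": [{"role": "user", "content": "hi"}], "stream": False},
    timeout=60,
)
print(r.status_code)  # a 404 here points at the Ollama server, not at LangChain
```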
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run the file `test.py`
### Expected behavior
model should complete the predication without any issue | langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404 | https://api.github.com/repos/langchain-ai/langchain/issues/15147/comments | 9 | 2023-12-25T14:08:45Z | 2024-05-29T12:18:55Z | https://github.com/langchain-ai/langchain/issues/15147 | 2,055,708,933 | 15,147 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.352
SystemMessage is ignored when I invoke the AgentExecutor.run function. The code looks as below.
```
from typing import Tuple, Dict
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent import AgentExecutor
from langchain.agents.format_scratchpad.openai_functions import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools.render import format_tool_to_openai_function
from langchain_core.messages import SystemMessage
from elasticsearch_agent.config import cfg
from elasticsearch_agent.tools.index_data_tool import IndexShowDataTool
from elasticsearch_agent.tools.index_details_tool import IndexDetailsTool
from elasticsearch_agent.tools.index_search_tool import create_search_tool
from elasticsearch_agent.tools.list_indices_tool import ListIndicesTool
tools = [
    ListIndicesTool(),
    IndexShowDataTool(),
    IndexDetailsTool(),
    create_search_tool(),
]


def elastic_agent_factory() -> AgentExecutor:
    system_msg = """
You are a helpful AI ElasticSearch Expert Assistant
**Always you will get the field names of the ElasticSearch index from the Elasticsearch DB as a first step.
You are provided with various tools to help the user to get information from an ElasticSearch index.
you will get the index name from the question. If not provided, show the list of available indices and ask the user to choose it.
You will generate required aggregation queries for any analytical questions asked.
You will use 'aggregations' field in response object for answering analytical queries.
Dont's:
Never assume index names or field names.
"""
    agent_kwargs, memory = setup_memory()
    agent_kwargs["system_message"] = SystemMessage(content=system_msg)
    return initialize_agent(
        tools,
        cfg.llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        verbose=False,
        agent_kwargs=agent_kwargs,
        memory=memory,
    )


def setup_memory() -> Tuple[Dict, ConversationBufferMemory]:
    """
    Sets up memory for the open ai functions agent.
    :return a tuple with the agent keyword pairs and the conversation memory.
    """
    agent_kwargs = {
        "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
    }
    memory = ConversationBufferMemory(memory_key="memory", return_messages=True)
    return agent_kwargs, memory


if __name__ == "__main__":
    agent_executor = elastic_agent_factory()
    prompt = agent_executor.agent.prompt
    print(prompt)
    print(type(agent_executor.agent.prompt))
```
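A hedged alternative sketch: build the prompt explicitly so the system message is visibly part of it, instead of relying on `agent_kwargs` (API names as of langchain ~0.0.352, reusing `system_msg`, `tools`, `memory` and `cfg.llm` from above):

```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", system_msg),
    MessagesPlaceholder(variable_name="memory"),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
agent = create_openai_functions_agent(cfg.llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=False)
```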
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
run the given code
### Expected behavior
The chain agent should consider using the system message and extra prompt message provided to it. | SystemMessage are not considered while creating AgentExecutor with OPENAI_FUNCTIONS | https://api.github.com/repos/langchain-ai/langchain/issues/15145/comments | 5 | 2023-12-25T12:11:14Z | 2024-04-01T16:06:55Z | https://github.com/langchain-ai/langchain/issues/15145 | 2,055,649,057 | 15,145 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal
whisper = OpenAIWhisperParserLocal(device="cuda")
```
This fails when cuda is requested but is not available and generates the following error:
`AttributeError: 'OpenAIWhisperParserLocal' object has no attribute 'lang_model'`
This is caused by the following logic: https://github.com/langchain-ai/langchain/blob/a2d30428237695f076060dec881bae0258123775/libs/community/langchain_community/document_loaders/parsers/audio.py#L176C18-L176C21
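For now I work around it on my side by guarding the device choice (assuming `torch` is installed, which it is whenever the local Whisper parser is usable at all):
```python
import torch
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal

device = "cuda" if torch.cuda.is_available() else "cpu"
whisper = OpenAIWhisperParserLocal(device=device)
```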
### Expected behavior
Provide a more clear error or fall back to CPU. | OpenAIWhisperParserLocal fails when specifying cuda device but cuda is not available | https://api.github.com/repos/langchain-ai/langchain/issues/15143/comments | 1 | 2023-12-25T09:53:52Z | 2024-04-01T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15143 | 2,055,569,018 | 15,143 |
[
"langchain-ai",
"langchain"
] | ### System Info
wsl
conda 23.7.4 python 3.8.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
repo_id = "Qwen/Qwen-1_8B-Chat"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```
output:
```
ValueError: Error raised by inference API: The repository for Qwen/Qwen-1_8B-Chat contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/Qwen/Qwen-1_8B-Chat.
Please pass the argument `trust_remote_code=True` to allow custom code to be run.
```
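As a stop-gap I'm considering loading the model locally instead of going through the Hub inference API, since `HuggingFacePipeline.from_model_id` forwards `model_kwargs` to `from_pretrained` — a sketch, assuming enough local resources (the `pipeline_kwargs` values are just placeholders):
```python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="Qwen/Qwen-1_8B-Chat",
    task="text-generation",
    model_kwargs={"trust_remote_code": True},
    pipeline_kwargs={"max_new_tokens": 128},
)
```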
similar issue #6080 | HuggingFaceHub api can not pass trust_remote_code argument | https://api.github.com/repos/langchain-ai/langchain/issues/15141/comments | 1 | 2023-12-25T09:10:42Z | 2024-04-01T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15141 | 2,055,540,800 | 15,141 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the current documentation, the output shown in the `Upstash Redis Cache` section of the LLM Caching page appears to be wrong: the second run (after the result has been cached) has the wrong output, and the code and comments in that code block do not match it.
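For reference, the pattern that section is meant to demonstrate looks roughly like this (a sketch — the `redis_` parameter name and the URL/token placeholders are my assumptions, not copied from the docs):
```python
from upstash_redis import Redis
from langchain.globals import set_llm_cache
from langchain.cache import UpstashRedisCache
from langchain.llms import OpenAI

set_llm_cache(UpstashRedisCache(redis_=Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>")))

llm = OpenAI()
llm.predict("Tell me a joke")  # first call hits the model and is slow
llm.predict("Tell me a joke")  # second call should be served from the Upstash cache and be fast
```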
### Idea or request for content:
Update the code block with appropriate comment and matching output to remove the confusion. | DOC: Wrong output in `Upstash Redis Cache` section of LLM Caching documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15139/comments | 1 | 2023-12-25T07:13:29Z | 2024-04-01T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15139 | 2,055,458,803 | 15,139 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain : 0.0.352
Python : 3.11.5
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. use streaming
```
llm_sm_ep = SagemakerEndpoint(
endpoint_name=endpoint_name,
client=client,
content_handler=content_handler,
model_kwargs=model_param,
endpoint_kwargs=endpoint_param,
streaming=True,
)
```
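For context, a tiny helper that pulls the text out of one TGI server-sent-event line (the stream format is shown under Expected behavior below); this is only a sketch of the parsing I would expect, not the current LangChain code:
```python
import json

def parse_tgi_stream_line(line: bytes) -> str:
    payload = line.decode("utf-8").removeprefix("data:").strip()
    if not payload or payload == "[DONE]":
        return ""
    return json.loads(payload).get("token", {}).get("text", "")
```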
### Expected behavior
When I use a TGI model, the `invoke_endpoint_with_response_stream` response doesn't have `outputs`. Instead it returns `token`, as in the full response below.
```
data:{"token":{"id":601,"text":" time","logprob":-0.10015869,"special":false},"generated_text":null,"details":null}
``` | Sagemaker Endpoint not working streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15138/comments | 1 | 2023-12-25T06:28:01Z | 2024-04-01T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15138 | 2,055,427,344 | 15,138 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, if one wants to use the RetryWithErrorOutputParser, the parsing has to be done manually instead of composing a chain that does it for us (and gives us all the nice chain functions: batch, ainvoke, etc.).
There are 2 issues:
1. The RetryWithErrorOutputParser requires the prompt to be given to it as input so that it can do its magic. It does this by implementing a `parse_with_prompt` function. Unfortunately this function is not plumbed all the way into the `BaseOutputParser`, so when the parser is invoked as part of a regular chain it raises the `NotImplementedError: This OutputParser can only be called by the parse_with_prompt method.` exception.
2. Chat models by default return just the output (`AIMessage`s). However, in this case we need both the prompt and the output.
### Motivation
Currently we need to run the output parsing for the retry parsing manually. This tends to look something like this:
```
chain = chat_prompt | self.chat_model
output_batch = chain.batch(messages_batch,
config={"max_concurrency": 10,
"callbacks": [tracing_callback_handler]})
prompts_list = tracing_callback_handler.prompts
result_list = tracing_callback_handler.results
parsed_output_batch = []
for idx, output in enumerate(output_batch):
parsed_output = retry_parser.parse_with_prompt(output.content, prompts_list[idx])
parsed_output_batch.append(parsed_output)
```
In the above code the `tracing_callback_handler` is a custom callback handler that persists the prompt and results - which we end up using to give the retry_parser the prompt.
This is cumbersome and it would be awesome if this would just work with the chain itself like so
```
chain = chat_prompt | self.chat_model | retry_parser
output_batch = chain.batch(messages_batch,
config={"max_concurrency": 10,
"callbacks": [tracing_callback_handler]})
```
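One workaround I've sketched (assuming the retry parser only needs the `PromptValue` plus the raw completion) is to carry the prompt through the chain explicitly; it works, but it is clunky, which is part of the motivation here:
```python
from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

chain_with_retry = (
    chat_prompt
    | RunnableParallel(prompt_value=RunnablePassthrough(), completion=self.chat_model)
    | RunnableLambda(
        lambda d: retry_parser.parse_with_prompt(d["completion"].content, d["prompt_value"])
    )
)
```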
### Your contribution
If someone can validate that my understanding of the problem is correct - I can go ahead and create a PR for this. | RetryWithErrorOutputParser does not work with LLMChain because it does not implement the `parse` function | https://api.github.com/repos/langchain-ai/langchain/issues/15133/comments | 3 | 2023-12-24T21:26:43Z | 2024-05-06T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15133 | 2,055,216,057 | 15,133 |
[
"langchain-ai",
"langchain"
] |
# How can I add a prompt template to ConversationalRetrievalChain? The following code fails:
```python
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer,
just say that you don't know.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=db.as_retriever(),
    memory=memory,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)
```

```
ValidationError: 1 validation error for ConversationalRetrievalChain
chain_type_kwargs
  extra fields not permitted (type=value_error.extra)
```
How do I add the prompt template to the chain efficiently?
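I suspect the right knob might be `combine_docs_chain_kwargs` rather than `chain_type_kwargs` — something like the sketch below — but I'm not certain this is the intended way:
```python
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=db.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": QA_CHAIN_PROMPT},
)
```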
### Suggestion:
How do I add the prompt template to the chain efficiently? Please, I need help with this. | Adding Prompt template to ConversationalRetrievalChain.from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/15132/comments | 1 | 2023-12-24T21:26:16Z | 2024-03-31T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15132 | 2,055,216,000 | 15,132 |
[
"langchain-ai",
"langchain"
] | ### System Info
windows
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The below code fails:

```python
import os
from operator import itemgetter
from dotenv import load_dotenv
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.faiss import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

load_dotenv()
OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')

loader = DirectoryLoader("/Users/joyeed/langchain_examples/langchain_examples/data/", glob='**/*.md')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
text = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
vectorstore = FAISS.from_documents(text, embeddings)
retriever = vectorstore.as_retriever()

prompt_template = ChatPromptTemplate.from_template(
    """
    Write 2 {platform} posts about {topic}?
    """
)
model = ChatOpenAI(openai_api_key=OPENAI_API_KEY)

# Compose the chain for generating posts
chain = (
    {"topic": RunnablePassthrough(), "platform": RunnablePassthrough(), "context": retriever}
    | prompt_template
    | model
    | StrOutputParser()
)

# Invoke the chain to generate a post
output = chain.invoke({"topic": "baseball", "platform": "twitter"})

# Print the generated post
print(output)
```
I think it is failing because `invoke` is now expecting a string as input, but earlier we were able to pass key/value pairs. It fails in `tiktoken/core.py` in the code below, which expects text:

```python
if match := _special_token_regex(disallowed_special).search(text):
    raise_disallowed_special_token(match.group())
```
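For what it's worth, routing only the topic string into the retriever (e.g. with `itemgetter`, which is already imported above) keeps the dict input working — a sketch of that shape, not necessarily the intended design:
```python
from operator import itemgetter

chain = (
    {
        "topic": itemgetter("topic"),
        "platform": itemgetter("platform"),
        "context": itemgetter("topic") | retriever,
    }
    | prompt_template
    | model
    | StrOutputParser()
)
output = chain.invoke({"topic": "baseball", "platform": "twitter"})
```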
### Expected behavior
invoke should allow accepting JSON inputs | chain.invoke is no longer taking a json as input | https://api.github.com/repos/langchain-ai/langchain/issues/15131/comments | 1 | 2023-12-24T17:35:05Z | 2024-03-31T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15131 | 2,055,171,635 | 15,131 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.352, Windows 10, Python 3.11.6,
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm testing a couple of apps created from LangChain templates, using dotenv and a .env file in the app's root folder (the default "my-app" from the docs).
I'm not able to use the Neo4j connection details I added to that file when trying to run, e.g.:
**neo4j-advanced-rag-app\packages\neo4j-advanced-rag\ingest.py**
(My .env is in the neo4j-advanced-rag-app folder.)
This is strange because other env vars in the same .env file, e.g. the LangSmith ones, are picked up, so the problem does not affect all env vars in the file!
The last terminal error is:
```
Traceback (most recent call last):
File "d:\Projects\AI_testing\LangChain_test\Python_231026\neo4j-advanced-rag-app\packages\neo4j-advanced-rag\ingest.py", line 16, in <module>
graph = Neo4jGraph()
^^^^^^^^^^^^
File "D:\Projects\AI_testing\LangChain_test\Python_231026\langchain-venv\Lib\site-packages\langchain_community\graphs\neo4j_graph.py", line 65, in __init__
url = get_from_env("url", "NEO4J_URI", url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\AI_testing\LangChain_test\Python_231026\langchain-venv\Lib\site-packages\langchain_core\utils\env.py", line 41, in get_from_env
raise ValueError(
ValueError: Did not find url, please add an environment variable `NEO4J_URI` which contains it, or pass `url` as a named parameter.
```
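As a workaround I can load the .env explicitly (or pass the connection details as named parameters, as the error message suggests) before constructing the graph:
```python
import os
from dotenv import load_dotenv
from langchain_community.graphs import Neo4jGraph

load_dotenv()  # must run before Neo4jGraph() so NEO4J_URI / NEO4J_USERNAME / NEO4J_PASSWORD are set

graph = Neo4jGraph(
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)
```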
### Expected behavior
I expect the app created from a template can use all the env vars in the .env file, which is placed into app root folder. | Template issue: Neo4J environmental variables in .env file not found | https://api.github.com/repos/langchain-ai/langchain/issues/15130/comments | 3 | 2023-12-24T14:59:45Z | 2024-03-31T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15130 | 2,055,130,570 | 15,130 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Environment
```
Edition Windows 11 Home
Version 22H2
Installed on 4/30/2023
OS build 22621.2861
Experience Windows Feature Experience Pack 1000.22681.1000.0
langchain package version: "0.0.212"
zod package version: "3.22.4"
typescript package version: "5.1.6"
```
Prompt
```
My prompt data with keys: {chat_history}, {currentPoint}, {language}, {topic} and Last AI message: {lastAiMessage} and User response: {message}| format: json
```
Creating model code
```
// LLM constructor
constructor(args: any[]) {
this.llm = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: 'gpt-3.5-turbo-1106',
modelKwargs: {
response_format: {
type: 'json_object',
},
},
});
this.answerScheme = LLMChain.getAnswerScheme();
this.formatInstructions = createParserFromSchema(this.answerScheme).getFormatInstructions();
const prompt = ChatPromptTemplate.fromMessages([new SystemMessage(args.prompt, { "json": true })]);
const memory = new ConversationSummaryMemory({
llm: this.llm,
memoryKey: 'chat_history',
inputKey: 'message',
});
this.chain = new ConversationChain({
llm: this.llm,
prompt,
memory,
verbose: true,
})
}
private static getAnswerScheme() {
return z.object({
answer: z.string(),
action: z.enum(['none', 'next']),
});
}
```
Send message code
```
async sendMessage(chainValues: ChainValues) {
chainValues['currentPoint'] = this.currentPoint;
chainValues['lastAiMessage'] = this.lastAiMessage ?? '';
try {
const modelKwargs = {
response_format: {
type: 'json_object',
},
};
const rawResponse = await this.chain.call({ ...chainValues, ...modelKwargs, format_instructions: this.formatInstructions });
const { answer, action } = this.answerScheme.parse(rawResponse);
this.lastAiMessage = answer;
this.__parseActionKeyword(action);
return answer;
}
catch (error) {
console.error('LLMChain ERROR:', error);
return "Something goes wrong.\n\n" + error;
}
}
```
LLM run with input
```
[llm/start] [1:chain:ConversationChain > 2:llm:ChatOpenAI] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "My prompt data with keys: {chat_history}, {currentPoint}, {language}, {topic} and Last AI message: {lastAiMessage} and User response: {message}| format: json",
"additional_kwargs": {
"json": true
}
}
}
]
]
}
```
Error: `400 'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.`
### Suggestion:
_No response_ | Issue: LLMChain error. response_format json error with messages. Messages is array of array | https://api.github.com/repos/langchain-ai/langchain/issues/15125/comments | 4 | 2023-12-24T12:57:20Z | 2023-12-24T15:06:36Z | https://github.com/langchain-ai/langchain/issues/15125 | 2,055,093,069 | 15,125 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
If my agent tool requires the user to pass two parameters, and those parameters are not included in the user's question, how can I prompt the user to provide them?
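For context, the tool is declared roughly like the sketch below (the names are illustrative, not my real tool); the question is how to get the agent to ask for `city` and `date` when they are missing instead of calling the tool anyway:
```python
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import StructuredTool

class BookingInput(BaseModel):
    city: str = Field(description="Destination city")
    date: str = Field(description="Travel date, YYYY-MM-DD")

def book_trip(city: str, date: str) -> str:
    return f"Booked a trip to {city} on {date}"

booking_tool = StructuredTool.from_function(
    func=book_trip,
    name="book_trip",
    description="Book a trip. Ask the user for any missing argument before calling this tool.",
    args_schema=BookingInput,
)
```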
### Suggestion:
_No response_ | If my agent tool requires user to pass 2 parameters, and if these 2 parameters are not included in the user's question, how can I remind him to enter the parametersIssue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15122/comments | 1 | 2023-12-24T07:32:59Z | 2024-03-31T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15122 | 2,055,013,662 | 15,122 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
What is RAG and how is it implemented? So far I have finished exploring custom prompt templates and want to learn more about RAG.
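To make the question concrete: my current understanding is that RAG (retrieval-augmented generation) retrieves relevant documents from a vector store and injects them into the prompt before generation, something like this minimal sketch (an OpenAI key and a real corpus are assumed):
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

texts = ["LangChain helps build LLM applications.", "RAG feeds retrieved context into the prompt."]
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=vectorstore.as_retriever())
print(qa.run("What is RAG?"))
```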
### Suggestion:
_No response_ | Issue: what is RAG and how it's implemented? | https://api.github.com/repos/langchain-ai/langchain/issues/15116/comments | 5 | 2023-12-24T06:39:33Z | 2024-04-01T16:06:30Z | https://github.com/langchain-ai/langchain/issues/15116 | 2,055,002,722 | 15,116 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I suspect a potential issue where Chroma.from_documents might not be embedding and storing vectors for metadata in documents.
I have loaded five tabular documents using DataFrameLoader. However, when attempting to retrieve content based on similarity from the vector store, it appears that sentences in the metadata are not being used for matching. I don't see the documentation clarifying whether this is the expected behavior or whether I might be overlooking a specific argument or setting.
To illustrate, suppose I have a table with three fields: customer_question, agent_answer, and manager_note. If I query using the exact string from one of a manager_note, it surprisingly doesn't return the corresponding document at the top of the results.
**Is this a normal outcome?
Should I modify my table structure to include all relevant content in the page_content_column when setting up the DataFrameLoader?**
Here is the process
```
loader = DataFrameLoader(customer_q_a_001, page_content_column='customer_question')
docs = loader2.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=15)
docs = text_splitter.split_documents(docs )
# db = Chroma.from_documents(all_docs, embeddings, persist_directory="./chroma_db")
# db.persist()
db.similarity_search_with_score((query))
```
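If it turns out that only `page_content` is embedded (and metadata is merely stored alongside it), one workaround I'm considering is concatenating the searchable columns into a single text column before loading — a sketch, assuming `customer_q_a_001` is a pandas DataFrame with those three fields:
```python
customer_q_a_001["combined"] = (
    "Customer question: " + customer_q_a_001["customer_question"].fillna("")
    + "\nAgent answer: " + customer_q_a_001["agent_answer"].fillna("")
    + "\nManager note: " + customer_q_a_001["manager_note"].fillna("")
)
loader = DataFrameLoader(customer_q_a_001, page_content_column="combined")
```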
Library version:
langchain: 0.0.352
I would appreciate any insights or suggestions regarding this question.
### Suggestion:
_No response_ | Chroma.from_documents exclude metadata in embedding? [Question] | https://api.github.com/repos/langchain-ai/langchain/issues/15115/comments | 5 | 2023-12-24T06:13:37Z | 2024-03-31T16:06:25Z | https://github.com/langchain-ai/langchain/issues/15115 | 2,054,997,867 | 15,115 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be great to have adapters support in huggingface embedding class
### Motivation
Many really good embedding models have special adapters for retrieval. For example, SPECTER2, a leading embedding model for scientific texts, has many adapters, such as https://huggingface.co/allenai/specter2_aug2023refresh,
and the current Hugging Face embedding implementations do not allow using them.
### Your contribution
so far I am just implementing it in ugly way in my projects, not sure if/when I will have time for proper PR | add support for embedding models with adapters | https://api.github.com/repos/langchain-ai/langchain/issues/15112/comments | 2 | 2023-12-24T01:18:05Z | 2024-04-03T16:08:34Z | https://github.com/langchain-ai/langchain/issues/15112 | 2,054,952,674 | 15,112 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add streaming support for Together AI endpoints in LangChain. The official endpoint supports streaming via the `stream_tokens` keyword, so it should not be hard to implement the `_stream` method and expose streaming behind the `streaming=True` flag.
This is what the endpoint outputs when `stream_tokens` is set to `true`:
```
data: {
"choices": [{"text": " the"}],
"request_id": "83a18448f8c030ab-SEA",
"token": {"engine": "", "id": 253, "logprob": 0, "special": false},
"id": "671a9e090c3fe06af8ab9445a46684298b6f5e5b458c4ff8a145bee456eb77cf",
}
data: {
"choices": [{"text": " French"}],
"request_id": "83a18448f8c030ab-SEA",
"token": {"engine": "", "id": 5112, "logprob": -0.8027344, "special": false},
"id": "671a9e090c3fe06af8ab9445a46684298b6f5e5b458c4ff8a145bee456eb77cf",
}
...
data: [DONE]
```
### Motivation
The Together LLM integration does not support streaming even though its endpoint officially supports it. Adding streaming is a huge benefit to the user experience and shows the model's output as it is generated.
### Your contribution
Implementing the `_stream` method and processing the streamed API response; roughly, this could be done like this:
```python
payload = {
...,
"stream_tokens": True
}
response = requests.post(..., payload, stream=True)
for line in response.iter_lines():
....
yield GenerationChunk(
text = line["choices"][0]["text"],
...
) | [improvement] Add Streaming Support for Together AI | https://api.github.com/repos/langchain-ai/langchain/issues/15109/comments | 1 | 2023-12-23T19:48:33Z | 2024-03-30T16:07:11Z | https://github.com/langchain-ai/langchain/issues/15109 | 2,054,881,350 | 15,109 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using langchain.vectorstores.redis and langchain.chains.ConversationalRetrievalChain.from_llm
I would like to get the scores of the matching documents with my query.
I know you can filter with the `search_kwargs={"score_threshold": 0.8}`
But still I want to get the similarity scores in the output.
### Motivation
To be able to play with the similarity scores on my end and allow flexibility to the user
### Your contribution
The output should be a list (like now) of tuples (Doc, score). In fact this already exists as similarity_search_with_relevance_scores in langchain.schema.vectorstore, so the implementation should be quite straightforward.
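Until then, the workaround I use is a small custom retriever that copies the score into each document's metadata so it survives the chain — a sketch (imports follow the current 0.0.x layout):
```python
from typing import Any, List
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class ScoredRetriever(BaseRetriever):
    vectorstore: Any  # e.g. the langchain.vectorstores.redis store
    k: int = 4

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(query, k=self.k)
        for doc, score in docs_and_scores:
            doc.metadata["relevance_score"] = score
        return [doc for doc, _ in docs_and_scores]
```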
Thanks! | Return similarity score ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15097/comments | 5 | 2023-12-23T11:56:23Z | 2024-04-04T16:08:21Z | https://github.com/langchain-ai/langchain/issues/15097 | 2,054,765,710 | 15,097 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to initialize an existing collection via:

```python
store = PGVector(
    collection_name=COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
```

I keep getting:

```
Exception has occurred: NoReferencedTableError
Foreign key associated with column 'langchain_pg_embedding.collection_id' could not find table 'langchain_pg_collection' with which to generate a foreign key to target column 'uuid'
```
I've used a docker installation for PGVector and I can confirm that the table langchain_pg_collection does exist with the key.
This is the view from PgAdmin, which shows both tables, `langchain_pg_collection` and `langchain_pg_embedding` (screenshots omitted).
So I'm not sure why its throwing the exception or how to resolve it
In case it's relevant, I had to make this change inside pgvector:
```
from sqlalchemy import MetaData
class CollectionStore(BaseModel):
"""Collection store."""
metadata = MetaData()
if not metadata.tables.get('langchain_pg_collection'):
__tablename__ = "langchain_pg_collection"
name = sqlalchemy.Column(sqlalchemy.String)
cmetadata = sqlalchemy.Column(JSON)
embeddings = relationship(
"EmbeddingStore",
back_populates="collection",
passive_deletes=True,
)
```
to resolve [Table 'langchain_pg_collection' is already defined for this MetaData instance](https://github.com/langchain-ai/langchain/issues/14699)
### Suggestion:
_No response_ | Foreign key associated with column 'langchain_pg_embedding.collection_id' could not find table | https://api.github.com/repos/langchain-ai/langchain/issues/15096/comments | 1 | 2023-12-23T11:56:18Z | 2024-03-30T16:07:01Z | https://github.com/langchain-ai/langchain/issues/15096 | 2,054,765,699 | 15,096 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The safety settings exist in the **google_generativeai** library but are **not** exposed in the **langchain_google_genai** library.
The safety settings are basically an array of dictionaries passed along when sending the prompt.
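For reference, this is roughly the shape the underlying SDK accepts (category/threshold names as in the google.generativeai documentation); exposing the same argument through `ChatGoogleGenerativeAI` is what I'm asking for:
```python
import google.generativeai as genai

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "my prompt",
    safety_settings=[
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
    ],
)
```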
### Motivation
The problem with not having this is that when we use the ChatGoogleGenerativeAI model, if a prompt violates the default safety settings, the model won't return an answer.
If we could change the safety settings and send them with the prompt to the model, we could fix this issue.
### Your contribution
I am currently reading the code of the library and will raise a PR if i could fix the issue | Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI | https://api.github.com/repos/langchain-ai/langchain/issues/15095/comments | 22 | 2023-12-23T09:00:07Z | 2024-08-02T10:50:19Z | https://github.com/langchain-ai/langchain/issues/15095 | 2,054,725,088 | 15,095 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I use langserve develop a chain , and expose as remote tool. my friend wants to call my chain in his agent, how to do it?
**Joke chain:**
```
#!/usr/bin/env python
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langserve import add_routes
llm = ChatOpenAI(
openai_api_base=f"http://192.168.1.201:18001/v1",
openai_api_key="EMPTY",
model="gpt-3.5-turbo",
temperature=0.5,
top_p="0.3",
default_headers={"x-heliumos-appId": "general-inference"},
tiktoken_model_name="gpt-3.5-turbo",
verbose=True,
)
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple api server using Langchain's Runnable interfaces",
)
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(
app,
prompt | llm,
path="/joke",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
**Agent:**
```
from langchain.agents import initialize_agent, AgentType
from langchain_community.chat_models import ChatOpenAI
from langserve import RemoteRunnable
from langchain.tools import Tool
llm = ChatOpenAI(
openai_api_base=f"http://xxxx:xxx/v1",
openai_api_key="EMPTY",
model="gpt-3.5-turbo",
temperature=0.5,
top_p="0.3",
tiktoken_model_name="gpt-3.5-turbo",
verbose=True,
)
remote_tool = RemoteRunnable("http://xxx:xxx/joke/")
tools = [
Tool.from_function(
func=remote_tool.invoke,
name="joke",
description="Use this tool when the user asks for a joke",  # translated from Chinese
# coroutine= ... <- you can specify an async method if desired as well
),
]
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
result = agent.run("Tell a joke about taking a taxi")  # prompt translated from Chinese
print(result)
```
The agent always errors because the input passed to the remote tool is not valid.
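Wrapping the remote runnable so that the agent's plain-string input is mapped to the dict the joke chain expects might work around it — a sketch based on the code above:
```python
def call_joke(text: str) -> str:
    result = remote_tool.invoke({"topic": text})  # the joke chain's prompt expects a "topic" key
    return getattr(result, "content", str(result))

tools = [
    Tool.from_function(
        func=call_joke,
        name="joke",
        description="Use this tool when the user asks for a joke",
    ),
]
```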
### Expected behavior
no error | Agent how to call remote tool (exposed by langserve) | https://api.github.com/repos/langchain-ai/langchain/issues/15094/comments | 1 | 2023-12-23T08:50:23Z | 2024-03-30T16:06:56Z | https://github.com/langchain-ai/langchain/issues/15094 | 2,054,722,951 | 15,094 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using the latest version of langchain.
When my system prompt is longer than 23 lines, I get this error:
KeyError: "Input to ChatPromptTemplate is missing variable ''. Expected: ['', 'description'] Received: ['description']"
It's being generated from this snippet:
```python
def generate_output(user_input: str) -> str:
    '''This function will generate the output.scad file.'''
    chain = chat_prompt | chat_model
    print(chain)
    # similarity_search(user_input)
    llm_output = str(chain.invoke({"description": user_input}))  # <-- the error occurs on this line
```
This error does not occur when my system prompt is shorter than 23 lines. Here is the code I'm using:
```python
chat_model = ChatOpenAI(openai_api_key=api_key(), model_name="gpt-4-1106-preview", temperature=0.2, model_kwargs=
    {"frequency_penalty": 0, "presence_penalty": 0, "top_p": 1})

System_Message = Systemprompt("hello.txt")
Human_Message = "generate python code to {description} "
print("hi")
print(Human_Message)

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", System_Message),
    ("human", Human_Message),
])
```
Here is my Systemprompt function:
```python
def Systemprompt(file_path: str) -> str:
    '''This function will return system prompt.'''
    try:
        with open(file_path, "r") as file:
            text = file.read()
        return text
    except FileNotFoundError:
        return FileNotFoundError
    except IOError as e:
        return IOError
```
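One thing I'm now wondering (an assumption on my part, since hello.txt isn't shown here): any literal curly braces in the file would be parsed as template variables by `ChatPromptTemplate.from_messages`, which could explain the empty `''` variable name. Escaping them would look like:
```python
System_Message = Systemprompt("hello.txt").replace("{", "{{").replace("}", "}}")
```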
How can I fix this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chat_model = ChatOpenAI(openai_api_key=api_key(), model_name="gpt-4-1106-preview", temperature=0.2, model_kwargs=
    {"frequency_penalty": 0, "presence_penalty": 0, "top_p": 1})
System_Message = Systemprompt("hello.txt")
Human_Message = "generate python code to {description} "
print("hi")
print(Human_Message)
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", System_Message),
    ("human", Human_Message),
])
```
### Expected behavior
Expected behaviour is that this shouldn't happen, and I should get Python code.
| Issue with ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15093/comments | 4 | 2023-12-23T08:09:22Z | 2024-03-31T16:06:10Z | https://github.com/langchain-ai/langchain/issues/15093 | 2,054,713,803 | 15,093 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(new_project_qa, query, name, not_uuid):
    check = query.lower()
    result = new_project_qa(query)
    relevant_document = result['source_documents']
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
    if check in greetings:
        custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""You are a chatbot designed to provide answers to User's Questions, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers in details to User's Question: ```{check} ``` which is delimited by triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points,then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{check} ```
AI Answer:"""

    # Create the PromptTemplate
    custom_prompt = ChatPromptTemplate(
        template=custom_prompt_template, input_variables=["check", "context_text"]
    )
    formatted_prompt = custom_prompt.format()
    return formatted_prompt
````
Below is the error I am getting:
```
Traceback (most recent call last):
  File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
    response = get_response(request)
  File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/hs/CustomBot/chatbot/views.py", line 366, in GetChatResponse
    custom_message=generate_custom_prompt(chat_qa,query,name,not_uuid)
  File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 70, in generate_custom_prompt
    custom_prompt = ChatPromptTemplate(
  File "/home/hs/env/lib/python3.8/site-packages/langchain_core/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1050, in pydantic.main.validate_model
  File "/home/hs/env/lib/python3.8/site-packages/langchain_core/prompts/chat.py", line 449, in validate_input_variables
    messages = values["messages"]
KeyError: 'messages'
```
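If I read the traceback right, the `ChatPromptTemplate(...)` constructor expects a `messages` list (hence `KeyError: 'messages'`); the usual entry points are `from_template` / `from_messages`, e.g.:
```python
from langchain.prompts import ChatPromptTemplate

custom_prompt = ChatPromptTemplate.from_messages([("human", "{question}")])
formatted_prompt = custom_prompt.format(question=check)
```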
### Suggestion:
_No response_ | Issue: Getting error while using ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15089/comments | 6 | 2023-12-23T05:10:34Z | 2024-04-18T16:21:18Z | https://github.com/langchain-ai/langchain/issues/15089 | 2,054,676,213 | 15,089 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11
langchain 0.0.352
langchain-core 0.1.3
langchain-community 0.0.4 (doesn't work with either `from langchain.llms import OpenAI` or `langchain.chat_models import ChatOpenAI`)
langchain-community 0.0.2 (works as expected with `from langchain.llms import OpenAI` but it doesn't with `langchain.chat_models import ChatOpenAI`)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
#### The following code works fine with `langchain-community 0.0.2`:
Please refer to this [LangSmith run](https://smith.langchain.com/public/1c6c7960-e3b7-42fc-8835-6b78520e6580/r)
```python
import config
from langchain.vectorstores.redis import Redis
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.storage import RedisStore
from langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddings
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
embed_store = RedisStore(redis_url=config.REDIS_URL, client_kwargs={
'db': 1}, namespace='embedding_cache')
underlying_embeddings = OpenAIEmbeddings()
embeddings = CacheBackedEmbeddings.from_bytes_store(
underlying_embeddings, embed_store, namespace=underlying_embeddings.model
)
metadata_field_info = [
AttributeInfo(
name="source",
description="The source URL or book title where the document comes from.",
type="string",
),
AttributeInfo(
name="title",
description="The title where the text was taken from. Use this attribute to filter the User Query, but don't filter for exact matches. For example: 'Qué dice el código de trabajo', the filter could be 'Código de Trabajo'; 'Qué dice la ley sobre el teletrabajo', the filter could be 'Teletrabajo'.",
type="string",
),
AttributeInfo(
name="doc_type",
description="Type of document classification to be used only as Filter for the User Query. Laws or Labor Code go under 'Legislación'. Company, Organizational or employer information go under 'Organización'. Company Policies go under 'Política'. Company internal procedures go under 'Procedimiento'.",
type="string",
),
AttributeInfo(
name="keywords",
description="A list of keywords taken from the document to filter the query. Always use this attribute to filter the query when a specific article number is needed. For example: 'Qué dice el artículo 10 del código de trabajo', you must capitalize the words 'articulo' to 'ARTÍCULO', and filter 'ARTÍCULO 10'.",
type="string",
),
]
document_content_description = "Data source comprised of the entire contents of the Costa Rican Labor Code and other related Laws: 1) Código de trabajo; 2) Ley de protección al trabajador; 3) Ley de acoso sexual; 4) Ley de teletrabajo; y 5) Ley de Protección de Datos Personales."
llm = OpenAI(temperature=0.0)
rds_store = Redis.from_existing_index(
embeddings,
index_name=config.INDEX_NAME,
redis_url=config.REDIS_URL,
schema='./Redis_schema.yaml'
)
selfq_retriever = SelfQueryRetriever.from_llm(
llm,
rds_store,
document_content_description,
metadata_field_info,
enable_limit=False,
# verbose=True,
)
retriever = rds_store.as_retriever()
```
By just changing `from langchain.llms import OpenAI` to `from langchain.chat_models import ChatOpenAI` or by upgrading `langchain-community` to version 0.0.4, the query output is as follows and the retrieval doesn't work as intended:
```
{
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"lc": 1,
"repr": "StructuredQuery(query='articulo 143', filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='title', value='Código de Trabajo'), limit=None)",
"type": "not_implemented"
}
```
Please refer to this [LangSmith run](https://smith.langchain.com/public/2d4732f0-8712-4e84-9af2-5d13ffc6cb93/r) for the unsuccessful retrieval.
### Expected behavior
This is the expected result:
```
{
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"lc": 1,
"repr": "StructuredQuery(query=' ', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='title', value='Código de Trabajo'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='doc_type', value='Legislación'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='keywords', value='ARTÍCULO 143')]), limit=None)",
"type": "not_implemented"
}
``` | SelfQueryRetriever broken with latest langchain-community or using ChatOpenAI as llm | https://api.github.com/repos/langchain-ai/langchain/issues/15087/comments | 1 | 2023-12-23T02:55:37Z | 2024-03-30T16:06:46Z | https://github.com/langchain-ai/langchain/issues/15087 | 2,054,631,468 | 15,087 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
#### Issue Description
- **Overview**: The current documentation for the 'Return Source Documents' functionality seems to be outdated or incorrect. The provided code snippet results in errors when executed.
https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat#conversationalretrievalchain-with-question-answering-with-sources
- **Details**:
- The current code:
```python
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot({"question": query, "chat_history": chat_history})
```
produces the following error:
```
/python3.9/site-packages/langchain/memory/chat_memory.py", line 29, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
```
- **Examples**: This issue occurs when trying to use the 'Return Source Documents' as outlined in the current documentation.
#### Additional Information
- **Related Issue**: This documentation update is related to the issue raised in https://github.com/langchain-ai/langchain/issues/2256.
### Idea or request for content:
#### Suggested Fix
- Update the documentation with the correct code snippet:
```python
memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', output_key='answer', return_messages=True)
bot = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory, return_source_documents=True)
result = bot({"question": query,})
```
- This revision correctly handles the output and does not produce the aforementioned error. | DOC: Documentation Update Needed for 'Return Source Documents' Functionality | https://api.github.com/repos/langchain-ai/langchain/issues/15086/comments | 2 | 2023-12-23T02:21:36Z | 2024-03-30T16:06:41Z | https://github.com/langchain-ai/langchain/issues/15086 | 2,054,623,466 | 15,086 |