issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
Implement extraction of message content for the messages that iMessageChatLoader currently skips because the `text` field is null, even though the content is present in other fields.
### Motivation
Due to an iMessage database schema change introduced in macOS Ventura, newer messages now have their content encoded in the `attributedBody` field, with the `text` field being null.
This issue was originally raised by @idvorkin in Issue #10680.
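For reference, a minimal way to observe the change locally (a hedged sketch; the column names follow the standard chat.db layout, and the path here is hypothetical):

```python
import sqlite3

# Illustrative only: inspect the most recent rows in the local iMessage store.
db_path = "/Users/<you>/Library/Messages/chat.db"  # hypothetical path
con = sqlite3.connect(db_path)
rows = con.execute(
    "SELECT text, attributedBody FROM message ORDER BY date DESC LIMIT 5"
)
for text, attributed_body in rows:
    # On Ventura, `text` is often NULL while `attributedBody` holds the content.
    print(text is None, attributed_body is not None)
```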
### Your contribution
We intend to submit a pull request for this issue close to the end of November. | Enhancing iMessageChatLoader to prevent skipping messages with extractable content | https://api.github.com/repos/langchain-ai/langchain/issues/13326/comments | 0 | 2023-11-14T04:08:46Z | 2023-11-28T20:45:45Z | https://github.com/langchain-ai/langchain/issues/13326 | 1,991,952,054 | 13,326 |
[
"langchain-ai",
"langchain"
] | I have a web app whose SQLAlchemy is pinned to 1.x. When I use LangChain, which needs SQLAlchemy 2.x, it's difficult for me to update SQLAlchemy to 2.x. How can I fix this problem? | SQLAlchemy version | https://api.github.com/repos/langchain-ai/langchain/issues/13325/comments | 3 | 2023-11-14T03:56:20Z | 2024-02-20T16:06:30Z | https://github.com/langchain-ai/langchain/issues/13325 | 1,991,938,627 | 13,325 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain version: langchain==0.0.335
- OpenAI version: openai==1.2.3
- Platform: Darwin MacBook Pro 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 arm64
- Python version: Python 3.9.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Example script:
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI()
user_input = input("Input prompt: ")
response = llm.invoke(user_input)
```
```
return self._generate(
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 360, in _generate
response = self.completion_with_retry(
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 293, in completion_with_retry
retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 73, in _create_retry_decorator
openai.error.Timeout,
Exception module 'openai' has no attribute 'error'
```
Cause appears to be in `langchain.chat_models.openai`: https://github.com/langchain-ai/langchain/blob/5a920e14c06735441a9ea28c1313f8bd433dc721/libs/langchain/langchain/chat_models/openai.py#L82-L88
Modifying the above to this appears to resolve the problem:
```python
errors = [
openai.Timeout,
openai.APIError,
openai.APIConnectionError,
openai.RateLimitError,
openai.ServiceUnavailableError,
]
```
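For anyone hitting this before a fix lands, a hedged compatibility shim is also possible (a sketch only; it assumes these exception classes exist in your installed client, so verify against your version):

```python
import openai

# Pick whichever namespace this openai release exposes its exceptions in.
if hasattr(openai, "error"):  # openai < 1.0
    RETRYABLE_ERRORS = (
        openai.error.APIError,
        openai.error.APIConnectionError,
        openai.error.RateLimitError,
    )
else:  # openai >= 1.0 exposes exceptions at the top level
    RETRYABLE_ERRORS = (
        openai.APIError,
        openai.APIConnectionError,
        openai.RateLimitError,
    )
```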
### Expected behavior
Chat model invocation should return a BaseMessage. | OpenAI exception classes have different import path in openai 1.2.3, causing breaking change in ChatOpenAI - Simple fix | https://api.github.com/repos/langchain-ai/langchain/issues/13323/comments | 2 | 2023-11-14T03:41:42Z | 2024-02-14T04:22:33Z | https://github.com/langchain-ai/langchain/issues/13323 | 1,991,927,961 | 13,323 |
[
"langchain-ai",
"langchain"
] | ### System Info
I get the following error just by adding this model's parameters to existing code that works with other models:
"Malformed input request: 2 schema violations found, please reformat your input and try again."
```python
model_name = "meta.llama2-13b-chat-v1"
model_kwargs = {
"max_gen_len": 512,
"temperature": 0.2,
"top_p": 0.9
}
bedrock_boto = boto3.client("bedrock-runtime", "us-east-1")
bedrock_llm = Bedrock(model_id=model_name, client=bedrock_boto,model_kwargs=model_kwargs)
bedrock_llm("Hello!")
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Attempt to call the new llama2 bedrock model like so:
```python
model_name = "meta.llama2-13b-chat-v1"
model_kwargs = {
"max_gen_len": 512,
"temperature": 0.2,
"top_p": 0.9
}
bedrock_boto = boto3.client("bedrock-runtime", "us-east-1")
bedrock_llm = Bedrock(model_id=model_name, client=bedrock_boto,model_kwargs=model_kwargs)
bedrock_llm("Hello!")
```
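Until the adapter supports the `meta` provider, a hedged workaround sketch is to call the model directly through boto3 with Meta's request shape (field names follow the Bedrock documentation for this model):

```python
import json

import boto3

client = boto3.client("bedrock-runtime", "us-east-1")
# Meta's request body uses "prompt" plus the generation parameters directly.
body = json.dumps({
    "prompt": "Hello!",
    "max_gen_len": 512,
    "temperature": 0.2,
    "top_p": 0.9,
})
response = client.invoke_model(modelId="meta.llama2-13b-chat-v1", body=body)
print(json.loads(response["body"].read())["generation"])
```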
### Expected behavior
The Bedrock class would work successfully, as it does for other Bedrock models. | Add support for Bedrock Llama 2 13b model (meta.llama2-13b-chat-v1) | https://api.github.com/repos/langchain-ai/langchain/issues/13316/comments | 4 | 2023-11-14T00:34:45Z | 2024-02-22T16:07:04Z | https://github.com/langchain-ai/langchain/issues/13316 | 1,991,719,732 | 13,316 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to use the Pinecone SelfQueryRetriever with a metadata field in datetime format. When I try to ask anything involving a date, it shows a Bad Request error.
I have already changed my PineconeTranslator class with the code below, following the same pattern as Scale, since Pinecone only accepts numeric values and asks you to use the epoch format.
langchain\retrievers\self_query\pinecone.py
In class PineconeTranslator:

```python
def visit_comparison(self, comparison: Comparison) -> Dict:
    value = comparison.value
    # convert timestamp to float as epoch format.
    if type(value) is date:
        value = time.mktime(value.timetuple())
    return {
        comparison.attribute: {
            self._format_func(comparison.comparator): value
        }
    }
```
### Motivation
I want to use self-query retrieval with Pinecone and LangChain; this error limits the use of LangChain with Pinecone.
### Your contribution
Yes, I have already made the changes on my machine, as in the code in the description. | Pinecone SelfQueryRetriever with datetime filter | https://api.github.com/repos/langchain-ai/langchain/issues/13309/comments | 3 | 2023-11-13T22:25:18Z | 2024-06-01T00:07:33Z | https://github.com/langchain-ai/langchain/issues/13309 | 1,991,588,047 | 13,309 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Implement async methods in the Ollama LLM and chat model classes.
### Motivation
The Ollama implementation doesn't include the async methods `_astream` and `_agenerate`, so I cannot create an async agent...
### Your contribution
This is my first issue. I can try, but I am working on three different projects right now... | Ollama LLM: Implement async functionality | https://api.github.com/repos/langchain-ai/langchain/issues/13306/comments | 6 | 2023-11-13T21:59:16Z | 2024-05-20T16:07:34Z | https://github.com/langchain-ai/langchain/issues/13306 | 1,991,555,162 | 13,306 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We have several RAG templates in [langchain templates](https://github.com/langchain-ai/langchain/tree/master/templates) using other vector DBs, but none for OpenSearch.
We need to add a similar template for OpenSearch, as OpenSearch supports the same feature.
As a first step we will start with OpenAI, and then we will expand this to other LLMs, including Bedrock.
### Motivation
OpenSearch supports this feature, and a template will be super helpful for OpenSearch users.
### Your contribution
I will take the initiative to raise a PR for this issue. | Langchain Template for RAG using Opensearch | https://api.github.com/repos/langchain-ai/langchain/issues/13295/comments | 3 | 2023-11-13T16:58:01Z | 2024-02-12T16:51:20Z | https://github.com/langchain-ai/langchain/issues/13295 | 1,991,066,462 | 13,295 |
[
"langchain-ai",
"langchain"
] | ### System Info
Linux and langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am currently implementing a RAG chatbot using `ConversationBufferMemory` and `ConversationalRetrievalChain` as follows:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    output_key="answer",
    return_messages=True,
)

chain = ConversationalRetrievalChain.from_llm(
    llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 1}),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": PROMPT},
    return_source_documents=True,
    verbose=True,
)
```
Where PROMPT is a prompt template that takes chat_history, context, and question as inputs.
### Expected behavior
I would like the retrieval chain and memory to keep track of the context; i.e., instead of updating the context with the retrieved documents every time the user inputs a new question, I want to carry the context across turns.
My end goal is to have a function that takes in the old context, validates whether retrieval of new documents is necessary, and decides whether those documents should replace the old context or be appended to it. Is there a way to do this? | Context retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13291/comments | 3 | 2023-11-13T13:48:33Z | 2024-02-19T16:06:10Z | https://github.com/langchain-ai/langchain/issues/13291 | 1,990,699,489 | 13,291 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I currently have an Ollama model running locally. I aim to develop a custom LLM class that supports asynchronous streaming to interface with this model. However, the Ollama adapter provided by LangChain lacks support for these specific features. How can I accomplish this integration?
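For reference, a minimal sketch of one approach, assuming LangChain's `LLM` base class and Ollama's streaming `/api/generate` endpoint (the class and default field values here are illustrative, not a shipped integration):

```python
import json
from typing import Any, AsyncIterator, List, Optional

import aiohttp
from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.schema.output import GenerationChunk


class AsyncOllamaSketch(LLM):
    """Illustrative custom LLM with async streaming; not the shipped Ollama class."""

    model: str = "llama2"
    base_url: str = "http://localhost:11434"

    @property
    def _llm_type(self) -> str:
        return "async-ollama-sketch"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        raise NotImplementedError("Sync path omitted in this sketch.")

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        payload = {"model": self.model, "prompt": prompt}
        async with aiohttp.ClientSession() as session:
            async with session.post(f"{self.base_url}/api/generate", json=payload) as resp:
                async for line in resp.content:  # Ollama streams JSON lines
                    data = json.loads(line)
                    chunk = GenerationChunk(text=data.get("response", ""))
                    if run_manager:
                        await run_manager.on_llm_new_token(chunk.text)
                    yield chunk
```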
### Suggestion:
_No response_ | Issue: Create async streaming custom LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13289/comments | 6 | 2023-11-13T11:20:02Z | 2024-02-19T16:06:15Z | https://github.com/langchain-ai/langchain/issues/13289 | 1,990,442,921 | 13,289 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working with an Elasticsearch index running locally via a Docker container. Now I want to incorporate memory so that Elasticsearch queries can be handled just like a chat. How can I do that, so that each question can build on the previous answers? It should prepare the ES query accordingly.
Here is the sample code:
```
from elasticsearch import Elasticsearch
from langchain.chat_models import ChatOpenAI
from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain
import os
import openai
from dotenv import load_dotenv
import json
from langchain.prompts import HumanMessagePromptTemplate, AIMessagePromptTemplate
from langchain.schema import HumanMessage, AIMessage
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv('OPENAI_API_KEY')
ELASTIC_SEARCH_SERVER = "http://localhost:9200"
db = Elasticsearch(ELASTIC_SEARCH_SERVER)
memory = ConversationBufferMemory(memory_key="chat_history")
PROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.
Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.
Use the following format:
Question: Question here
ESQuery: Elasticsearch Query formatted as json
"""
PROMPT_SUFFIX = """Only use the following Elasticsearch indices:
{indices_info}
Question: {input}
ESQuery:"""
query_Prompt=PROMPT_TEMPLATE + PROMPT_SUFFIX
PROMPT = PromptTemplate.from_template(
query_Prompt,
)
DEFAULT_ANSWER_TEMPLATE = """Given an input question and relevant data from a database, answer the user question.
Only provide answer to me.
Use the following format:
Question: {input}
Data: {data}
Answer:"""
ANSWER_PROMPT = PromptTemplate.from_template(DEFAULT_ANSWER_TEMPLATE)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT, answer_prompt=ANSWER_PROMPT ,verbose=True, memory=memory)
while True:
    # Get user input
    user_input = input("Ask a question (or type 'exit' to end): ")
    if user_input.lower() == 'exit':
        break

    # Invoke the chat model with the user's question
    chain_answer = chain.invoke(user_input)
```
What could be the solution? Currently, it does not work when a follow-up question refers to the previous answer.
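One hedged idea, sketched below: since the chain generates the ES query only from the current input, fold the prior turns into the question text before invoking the chain (names mirror the snippet above; this is a workaround sketch, not a built-in feature):

```python
# Build a context-aware question from the stored history before each call.
history = memory.load_memory_variables({})["chat_history"]
contextual_question = (
    f"Conversation so far:\n{history}\n\nFollow-up question: {user_input}"
)
chain_answer = chain.invoke(contextual_question)
```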
### Suggestion:
_No response_ | Incorporate Memory inside ElasticsearchDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/13288/comments | 6 | 2023-11-13T11:02:36Z | 2024-02-20T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13288 | 1,990,415,478 | 13,288 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Similar to Text Generation Inference (TGI) for LLMs, HuggingFace created an inference server for text embedding models called Text Embeddings Inference (TEI).
See: https://github.com/huggingface/text-embeddings-inference
Could you integrate TEI into the supported LangChain text embedding models, or is this already planned?
### Motivation
We are currently developing a RAG-based chat app and plan to deploy the components as microservices (LLM, DB, embedding model). Currently, the only other suitable solution for us would be to use SagemakerEndpointEmbeddings, so being able to use TEI would be a great benefit.
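In the meantime, a minimal client-side shim seems feasible (a hedged sketch assuming TEI's `/embed` route, which accepts `{"inputs": [...]}` and returns a list of vectors):

```python
from typing import List

import requests
from langchain.embeddings.base import Embeddings


class TEIEmbeddings(Embeddings):
    """Illustrative wrapper around a TEI server; not an official integration."""

    def __init__(self, url: str = "http://localhost:8080"):
        self.url = url

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        resp = requests.post(f"{self.url}/embed", json={"inputs": texts})
        resp.raise_for_status()
        return resp.json()

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
```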
### Your contribution
I work as an ML Engineer and could probably assist in some way if necessary. | Support for Text Embedding Inference (TEI) from HuggingFace | https://api.github.com/repos/langchain-ai/langchain/issues/13286/comments | 3 | 2023-11-13T08:52:10Z | 2024-04-29T16:11:21Z | https://github.com/langchain-ai/langchain/issues/13286 | 1,990,163,579 | 13,286 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
from langchain.chat_models import AzureChatOpenAI

llm_chat = AzureChatOpenAI(deployment_name="gpt-4_32k", model_name='gpt-4-32k', openai_api_version=openai.api_version, temperature=0)

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text"],
    template="{text}"
)

llmchain = LLMChain(llm=llm_chat, prompt=prompt)
llmchain.run(text)
```

```
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With openai==1.2.3 and langchain==0.0.335:
```python
from langchain.chat_models import AzureChatOpenAI

text = "Where is Germany located?"
llm_chat = AzureChatOpenAI(deployment_name="gpt-4_32k", model_name='gpt-4-32k', openai_api_version=openai.api_version, temperature=0)

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text"],
    template="{text}"
)

llmchain = LLMChain(llm=llm_chat, prompt=prompt)
llmchain.run(text)
```
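One hedged thing to check (a sketch only; the environment variable names follow the openai>=1 Azure conventions and are an assumption, so verify them against your installed versions): make sure the Azure endpoint is actually being supplied, since with the 1.x client a missing or old-style base URL commonly surfaces as this 404.

```python
import os

# Hypothetical values: replace with your resource's endpoint and key.
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "<your-key>"

from langchain.chat_models import AzureChatOpenAI

llm_chat = AzureChatOpenAI(
    deployment_name="gpt-4_32k",
    openai_api_version="2023-05-15",
    temperature=0,
)
```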
### Expected behavior
In Europe. | NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} when using openai==1.2.3 and langchain==0.0.335 | https://api.github.com/repos/langchain-ai/langchain/issues/13284/comments | 24 | 2023-11-13T08:30:39Z | 2024-07-24T06:53:59Z | https://github.com/langchain-ai/langchain/issues/13284 | 1,990,129,379 | 13,284 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**my code:**
````python
from langchain.document_loaders import PyPDFLoader, TextLoader, PyMuPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
from langchain.prompts.prompt import PromptTemplate

loader_pdf = PyMuPDFLoader("/Users/11.pdf")
doc_pdf = loader_pdf.load()
llm = ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo-16k")

templ = """You are a smart assistant designed to help high school teachers come up with reading comprehension questions.
Given a piece of text, you must come up with a question and answer pair that can be used to test a student's reading comprehension abilities.
When coming up with this question/answer pair, you must respond in the following format in Chinese:
```
{{
    "question": "$YOUR_QUESTION_HERE",
    "answer": "$THE_ANSWER_HERE"
}}
```
Everything between the ``` must be valid json.
Please come up with a question/answer pair, in the specified JSON format, for the following text:
----------------
{text}"""
prompt = PromptTemplate.from_template(templ)
chain = QAGenerationChain.from_llm(llm=llm, prompt=prompt)

doc_ = ''
for i in doc_pdf:
    doc_ += i.page_content
print(doc_)

qa_pdf = chain.run(doc_)
````
This code runs correctly and produces results.
But when `model_name='gpt-4'`, an error is reported.
What is going on?
**the error:**
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
Cell In[22], line 1
----> 1 qa_pdf = chain.run(doc_)
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:505, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
503 if len(args) != 1:
504 raise ValueError("`run` supports only one positional argument.")
--> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
506 _output_key
507 ]
509 if kwargs and not args:
510 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
511 _output_key
512 ]
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
297 run_manager = callback_manager.on_chain_start(
298 dumpd(self),
299 inputs,
300 name=run_name,
301 )
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
307 )
308 except BaseException as e:
309 run_manager.on_chain_error(e)
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/qa_generation/base.py:75, in QAGenerationChain._call(self, inputs, run_manager)
71 docs = self.text_splitter.create_documents([inputs[self.input_key]])
72 results = self.llm_chain.generate(
73 [{"text": d.page_content} for d in docs], run_manager=run_manager
74 )
---> 75 print(results)
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/qa_generation/base.py:75, in <listcomp>(.0)
71 docs = self.text_splitter.create_documents([inputs[self.input_key]])
72 results = self.llm_chain.generate(
73 [{"text": d.page_content} for d in docs], run_manager=run_manager
74 )
---> 75 print(results)
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
341 s = s.decode(detect_encoding(s), 'surrogatepass')
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
332 def decode(self, s, _w=WHITESPACE.match):
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
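Note that the prompt itself asks the model to wrap the JSON in ``` fences, and `json.loads` fails at line 1, column 1, which is consistent with the model returning a fenced block. A hedged pre-parsing sketch (illustrative only; not what QAGenerationChain does internally):

```python
import json
import re

def parse_fenced_json(text: str) -> dict:
    # Strip an optional ```...``` fence before parsing (sketch).
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)
```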
### Suggestion:
_No response_ | Replacing the correct code with GPT-4 will result in an error | https://api.github.com/repos/langchain-ai/langchain/issues/13283/comments | 3 | 2023-11-13T08:16:48Z | 2024-02-19T16:06:25Z | https://github.com/langchain-ai/langchain/issues/13283 | 1,990,101,660 | 13,283 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: Windows
LangChain: 0.0.335
Python: 3.9.12
### Who can help?
@eyurtsev @hwchase17 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```python
from langchain.memory import ConversationBufferMemory
import langchain.load

ChatMemory = ConversationBufferMemory()
ChatMemory.chat_memory.add_ai_message("Hello")
ChatMemory.chat_memory.add_user_message("HI")

tojson = ChatMemory.json()
with open('E:\\memory.json', 'w') as file:
    file.write(tojson)

ParserChatMemory = ConversationBufferMemory.parse_file('E:\\memory.json')
```
### Expected behavior
Regarding the `parse_file` method of `ConversationBufferMemory`: there is not much about it in the documentation, and I have not found a similar problem described elsewhere. Saving memory to a file and rebuilding it from a file is a normal requirement, but when I execute `parse_file`, the method throws a ValidationError. The complete description is as follows:
```
ValidationError: 1 validation error for ConversationBufferMemory
chat_memory
  instance of BaseChatMessageHistory expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseChatMessageHistory)
```
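As a workaround sketch that avoids `parse_file` entirely (assuming the goal is round-tripping the messages), the messages themselves can be serialized with `messages_to_dict` / `messages_from_dict`:

```python
import json

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

# Save only the messages, then rebuild the memory around them.
with open('E:\\memory.json', 'w') as f:
    json.dump(messages_to_dict(ChatMemory.chat_memory.messages), f)

restored = ConversationBufferMemory()
with open('E:\\memory.json') as f:
    for m in messages_from_dict(json.load(f)):
        restored.chat_memory.add_message(m)
```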
If I am doing something wrong here, please tell me. Thank you very much. | Use "parse_file" in class "ConversationBufferMemory" raise a ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/13282/comments | 7 | 2023-11-13T07:47:40Z | 2024-03-13T18:59:57Z | https://github.com/langchain-ai/langchain/issues/13282 | 1,990,059,585 | 13,282 |
[
"langchain-ai",
"langchain"
] |
I am using PGVector as the retriever in my ConversationalRetrievalChain.
In my documents there is some meta information saved, like topic, keywords, etc.
How can I use this meta information during retrieval, instead of only using the content?
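A hedged sketch of one option is to pass a metadata filter through the retriever's search kwargs (the filter syntax follows the PGVector integration; the field name below is a placeholder for your own schema):

```python
retriever = db.as_retriever(
    search_kwargs={
        "k": 4,
        # Assumption: restrict results to documents whose metadata matches.
        "filter": {"topic": "billing"},
    }
)
```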
Would appreciate any solution for this. | Utilize metadata during retrieval in ConversationalRetrieval and PGvector | https://api.github.com/repos/langchain-ai/langchain/issues/13281/comments | 4 | 2023-11-13T07:43:07Z | 2024-02-21T16:07:04Z | https://github.com/langchain-ai/langchain/issues/13281 | 1,990,054,092 | 13,281 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I need to do QA over a document using retrieval. I must not use any large language model; I can only use an embedding model. Here is my code:
```
# import
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from silly import no_ssl_verification

with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("state_of_the_union.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "What did the president say about Ketanji Brown Jackson"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)

    qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=db.as_retriever())
    query = "What did the president say about Ketanji Brown Jackson"
    qa.run(query)
```
How can I change this code so that no LLM is used in `qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=db.as_retriever())`?
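A minimal sketch of one way to do this, assuming it is acceptable to return the top retrieved passages verbatim instead of a generated answer:

```python
# No LLM: answer with the most similar chunks themselves.
retriever = db.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents(query)
print("\n\n".join(d.page_content for d in docs))
```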
### Suggestion:
_No response_ | qa retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13279/comments | 5 | 2023-11-13T07:15:53Z | 2024-02-19T16:06:36Z | https://github.com/langchain-ai/langchain/issues/13279 | 1,990,021,102 | 13,279 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I fixed the code as follows:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("paul_graham/paul_graham_essay.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "What were the two main things the author worked on before college?"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)
```
I get the following output:
"I was nervous about money, because I could sense that Interleaf was on the way down. Freelance Lisp hacking work was very rare, and I didn't want to have to program in another language, which in those days would have meant C++ if I was lucky. So with my unerring nose for financial opportunity, I decided to write another book on Lisp. This would be a popular book, the sort of book that could be used as a textbook. I imagined myself living frugally off the royalties and spending all my time painting. (The painting on the cover of this book, ANSI Common Lisp, is one that I painted around this time.)
The best thing about New York for me was the presence of Idelle and Julian Weber. Idelle Weber was a painter, one of the early photorealists, and I'd taken her painting class at Harvard. I've never known a teacher more beloved by her students. Large numbers of former students kept in touch with her, including me. After I moved to New York I became her de facto studio assistant.
She liked to paint on big, square canvases, 4 to 5 feet on a side. One day in late 1994 as I was stretching one of these monsters there was something on the radio about a famous fund manager. He wasn't that much older than me, and was super rich. The thought suddenly occurred to me: why don't I become rich? Then I'll be able to work on whatever I want.
Meanwhile I'd been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I'd seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet."
but I should get "Before college the two main things I worked on, outside of school, were writing and programming."
### Suggestion:
_No response_ | retrieval embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13278/comments | 4 | 2023-11-13T06:49:13Z | 2024-03-17T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13278 | 1,989,993,127 | 13,278 |
[
"langchain-ai",
"langchain"
] | How can I use SQLDatabaseSequentialChain to reduce the number of tokens sent to LLMs? Can you give me an example or demo? I can't find useful info in the docs. | how to use SQLDatabaseSequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/13277/comments | 3 | 2023-11-13T06:38:26Z | 2024-02-19T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13277 | 1,989,982,161 | 13,277 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following code for a Q&A system with a retrieval mechanism:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("paul_graham/paul_graham_essay.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "What were the two main things the author worked on before college?"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)
```
I need to do this retrieval on a Turkish dataset, so I should use Turkish embeddings. How can I do that in my code?
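A hedged sketch of one option: swap in a multilingual sentence-transformers model that covers Turkish (the model name below is one such published model; evaluate its quality on your own data):

```python
# Multilingual model instead of the English-only all-MiniLM-L6-v2.
embedding_function = SentenceTransformerEmbeddings(
    model_name="paraphrase-multilingual-MiniLM-L12-v2"
)
db = Chroma.from_documents(docs, embedding_function)
```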
### Suggestion:
_No response_ | turkish embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13276/comments | 6 | 2023-11-13T06:15:51Z | 2024-03-17T16:05:31Z | https://github.com/langchain-ai/langchain/issues/13276 | 1,989,960,245 | 13,276 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
cat /proc/version
=> Linux version 5.15.107+
cat /etc/debian_version
=> 10.13
import langchain
langchain.__version__
=> '0.0.334'
```
I spent some time debugging why the `search` function signature differs between Linux and macOS M1.
I found that on macOS, because of the arm architecture, the downloaded wheel does not contain a swigfaiss_avx2.py file.
On Linux, by contrast, because the amd64 architecture supports AVX2, the wheel does contain swigfaiss_avx2.py.
Here's the amd64 example:
If I make faiss vector store using `from_documents` like
```py
from langchain.embeddings import OpenAIEmbeddings
docs = [] # some documents
vs = FAISS.from_documents(docs, OpenAIEmbeddings())
vs.index
# <faiss.swigfaiss_avx2.IndexFlatL2; proxy of <Swig Object of type 'faiss::IndexFlatL2 *' at> >
vs.index.search
<bound method handle_Index.<locals>.replacement_search of <faiss.swigfaiss_avx2.IndexFlatL2; proxy of <Swig Object of type 'faiss::IndexFlatL2 *' at> >>
```
It is derived from swigfaiss_avx2's IndexFlatL2, and its `search` method is intentionally replaced by `replacement_search` in faiss's __init__.py.
But, when we save this to local and load like:
```py
vs.save_local("faiss", "abcd.index")
vs2 = FAISS.load_local("faiss", OpenAIEmbeddings(), "abcd.index")
vs2.index
<faiss.swigfaiss.IndexFlat; proxy of <Swig Object of type 'faiss::IndexFlat *' at> >
vs2.index.search
<bound method IndexFlat.search of <faiss.swigfaiss.IndexFlat; proxy of <Swig Object of type 'faiss::IndexFlat *' at> >>
```
We can see that the vector store vs2 holds the IndexFlat of swigfaiss.py, not swigfaiss_avx2.py.
I don't know whether the save process or the load process is the problem, but I think it's quite a big bug, because the two `search` signatures are totally different.
```py
import inspect
inspect.signature(vs.index.search)
# <Signature (x, k, *, params=None, D=None, I=None)>
inspect.signature(vs2.index.search)
# <Signature (n, x, k, distances, labels, params=None)>
```
Besides, I found that the `FAISS_NO_AVX2` flag does not work either, because when the flag is used,
`replacement_search` does not wrap the original `search` method at all,
as explained in https://github.com/langchain-ai/langchain/issues/8857
So, how can I use the local save / load mechanism on the amd64 arch?
(arm doesn't have any problem)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can refer to my issue above.
### Expected behavior
The FAISS vector store save/load mechanism should work with the AVX2 IndexFlat on amd64, so that the `search` method is wrapped properly by `replacement_search`. | FAISS save_local / load_local don't be aware of avx2 | https://api.github.com/repos/langchain-ai/langchain/issues/13275/comments | 5 | 2023-11-13T06:05:42Z | 2024-02-19T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13275 | 1,989,950,234 | 13,275 |
[
"langchain-ai",
"langchain"
] | ### System Info
Thank you for your great help. I have an issue with setting the relevance score function (I am using LangChain with AWS Bedrock). What I have is:
```python
from langchain.vectorstores import FAISS

loader = CSVLoader("./rag_data/a.csv")
documents_aws = loader.load()
docs = CharacterTextSplitter(chunk_size=2000, chunk_overlap=400, separator=",").split_documents(documents_aws)

def custom_score(i):
    # return 1 - 1 / (1 + np.exp(i))
    return 1

vectorstore_faiss_aws = FAISS.from_documents(documents=docs, embedding=br_embeddings)
vectorstore_faiss_aws.relevance_score_function = custom_score
```
This made no difference in the scores (and did not give any errors either): I am still getting large negative numbers, and `vectorstore_faiss_aws.similarity_search_with_relevance_scores` is indifferent to the `score_threshold` value.
Then I tried:

```python
vectorstore_faiss_aws = FAISS(relevance_score_fn=custom_score).from_documents(documents=docs, embedding=br_embeddings)
```

and it gave the following error:

```
FAISS.__init__() missing 4 required positional arguments: 'embedding_function', 'index', 'docstore', and 'index_to_docstore_id'
```
Your advice is highly appreciated
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code is included:
```python
from langchain.vectorstores import FAISS

loader = CSVLoader("./rag_data/a.csv")
documents_aws = loader.load()
docs = CharacterTextSplitter(chunk_size=2000, chunk_overlap=400, separator=",").split_documents(documents_aws)

def custom_score(i):
    # return 1 - 1 / (1 + np.exp(i))
    return 1

vectorstore_faiss_aws = FAISS.from_documents(documents=docs, embedding=br_embeddings)
vectorstore_faiss_aws.relevance_score_function = custom_score
```
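A hedged sketch of an alternative to try: pass the scoring hook as a keyword argument to `from_documents`, on the assumption that extra kwargs are forwarded to the FAISS constructor (verify against your installed version):

```python
vectorstore_faiss_aws = FAISS.from_documents(
    documents=docs,
    embedding=br_embeddings,
    relevance_score_fn=custom_score,  # assumption: forwarded to FAISS.__init__
)
```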
### Expected behavior
The scores should be in the range of 0 to 1, and `vectorstore_faiss_aws.similarity_search_with_relevance_scores` should react to `score_threshold`. | AWS LangChain using bedrock: Setting Relevance Score Function | https://api.github.com/repos/langchain-ai/langchain/issues/13273/comments | 18 | 2023-11-13T04:15:41Z | 2024-03-18T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13273 | 1,989,857,834 | 13,273 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am trying to use a SageMaker endpoint for an embedding model, but I am confused about using EmbeddingsContentHandler. After some experimenting I figured out how to define the `transform_input` function, but `transform_output` did not work as expected. It gives out an error like:

```
embeddings = response_json[0]["embedding"]
TypeError: list indices must be integers or slices, not str
```

I would be really grateful if someone knows how to solve it.
My ContentHandler class is:
```python
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        embeddings = response_json[0]["embedding"]
        return embeddings
```
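For what it's worth, the traceback implies `response_json[0]` is itself a list, so the endpoint likely returns the embeddings as a bare nested list rather than objects with an `embedding` key. A hedged sketch (the exact response shape depends on your model container, so verify it first):

```python
import json
from typing import List

from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler


class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        response_json = json.loads(output.read().decode("utf-8"))
        # Assumption: the endpoint already returns [[...floats...], ...],
        # so return it as-is instead of indexing with a string key.
        return response_json
```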
### Idea or request for content:
_No response_ | https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/13271/comments | 5 | 2023-11-13T02:59:08Z | 2024-02-19T16:06:50Z | https://github.com/langchain-ai/langchain/issues/13271 | 1,989,795,517 | 13,271 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The LangChain modules can naturally be used to build something like the OpenAI GPTs builder.
My understanding is that LangChain needs to add descriptions and description_embeddings to all integrations/chains/agents (not only to the tools). That would allow building the Super Agent, a.k.a. the Agent Builder.
### Motivation
LangChain's space (in terms of integrations/chains/agents) is bigger than OpenAI's. Let's use this.
### Your contribution
I can help with documentation, examples, and unit tests. | Chain/Agent Builder | https://api.github.com/repos/langchain-ai/langchain/issues/13270/comments | 2 | 2023-11-13T01:44:32Z | 2024-02-12T16:56:52Z | https://github.com/langchain-ai/langchain/issues/13270 | 1,989,733,972 | 13,270 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
- Langchain version: 0.0.335
- Affected LLM: chatgpt 3.5 turbo 1106 (NOT affecting: gpt 3.5 turbo 16k, gpt4, gpt4 turbo preview)
- Code template, similar to:
```python
llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106',
                 openai_api_key=openai_key
                 )
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm, memory=memory
)
conversation.predict(input='my instructions')
```
**The problem**
When identical instructions and parameters are used via the API and in the OpenAI playground, the model responds radically differently. The OpenAI playground responds as expected, producing answers, while the API through LangChain always returns 'I cannot fulfil your request'.
**suspected causes**
OpenAI made an upgrade of its API to 1.x, and as far as I know LangChain currently only works with openai 0.28.1. Could that be the reason?
Thanks
**UPDATE 13 Nov:**
I further tested my prompts with OpenAI api 1.2.3 alone, without langchain and everything works fine:
- First I put my prompts into the OpenAI playground, which produced responses
- Second I 'view source' > 'copy', put the python code into a .py file, edited out the last AI response and ran the code = I got the response I wanted.
- Third I tested the same prompt with langchain 0.0.335 (openai 0.28.1) this time, and I always get 'I cannot fulfil this request'.
I know that langchain adds a little bit of context to every request and my prompts did consist of two rounds of chats and thus my code uses memory. I don't know which part of this is problematic.
Again, this problem does not occur with gpt-3.5-turbo, or gpt-4, or gpt-4-turbo, only **'gpt-3.5-turbo-1106'**
### Suggestion:
_No response_ | Issue: gpt3.5-turbo-1106 responds differently from the OpenAI playground | https://api.github.com/repos/langchain-ai/langchain/issues/13268/comments | 5 | 2023-11-12T23:20:38Z | 2024-05-01T16:05:23Z | https://github.com/langchain-ai/langchain/issues/13268 | 1,989,630,478 | 13,268 |
[
"langchain-ai",
"langchain"
] | I am testing a simple RAG implementation with Azure Cognitive Search. I am seeing a "cannot import name 'Vector' from azure.search.documents.models" error when I invoke my chain. The origin of my error is line 434 in langchain/vectorstores/azuresearch.py (`from azure.search.documents.models import Vector`).
This is the relevant code snippet; I get the import error when I execute rag_chain.invoke(question):
```python
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models.azure_openai import AzureChatOpenAI

question = "my question.."

# vector_store is initialized using AzureSearch(); not including that snippet here
retriever = vector_store.as_retriever()

template = '''
Answer the question based on the following context:
{context}

Question: {question}
'''
prompt = ChatPromptTemplate.from_template(template=template)
llm = AzureChatOpenAI(deployment_name='MY_DEPLOYMENT_NAME', model_name='MY_MODEL', openai_api_base=MY_AZURE_OPENAI_ENDPOINT, openai_api_key=MY_AZURE_OPENAI_KEY, openai_api_version='2023-05-15', openai_api_type='azure')

rag_chain = {'context': retriever, 'question': RunnablePassthrough()} | prompt | llm
rag_chain.invoke(question)
```
--------------
My package versions:

```
langchain==0.0.331
azure-search-documents==11.4.0b11
azure-core==1.29.5
_Originally posted by @yallapragada in https://github.com/langchain-ai/langchain/discussions/13245_ | Import error in lanchain/vectorstores/azuresearch.py | https://api.github.com/repos/langchain-ai/langchain/issues/13263/comments | 3 | 2023-11-12T20:01:20Z | 2024-02-18T16:05:11Z | https://github.com/langchain-ai/langchain/issues/13263 | 1,989,546,510 | 13,263 |
[
"langchain-ai",
"langchain"
] | ### System Info
I cannot import BabyAGI:

```python
from langchain_experimental import BabyAGI
```

The langchain version is:

```
pip show langchain
Name: langchain
Version: 0.0.334
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: c:\users\test\anaconda3\envs\nenv\lib\site-packages
Requires: aiohttp, anyio, async-timeout, dataclasses-json, jsonpatch, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import dotenv
from dotenv import load_dotenv

load_dotenv()

import os
import langchain
from langchain.embeddings import OpenAIEmbeddings
import faiss
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore

# Define the embedding model
embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")

# Initialize the vectorstore
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model, index, InMemoryDocstore({}), {})

from langchain.llms import OpenAI
from langchain_experimental import BabyAGI
```
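For reference, BabyAGI appears to be exported from a submodule rather than the package root, so the following import is likely what's wanted (verify against your installed langchain-experimental version):

```python
from langchain_experimental.autonomous_agents import BabyAGI
```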
### Expected behavior
I want to be able to execute:
# Define the embedding model
embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002") | ImportError: cannot import name 'BabyAGI' from 'langchain_experimental' | https://api.github.com/repos/langchain-ai/langchain/issues/13256/comments | 3 | 2023-11-12T17:16:05Z | 2024-02-18T16:05:16Z | https://github.com/langchain-ai/langchain/issues/13256 | 1,989,494,128 | 13,256 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I hope this message finds you well. I noticed that OpenAI has recently unveiled their Assistant API, and I observed that the Langchain framework in JavaScript has already implemented an agent utilizing this new API. Could you kindly provide information on when this integration will be extended to Python?
For your reference, here is the documentation for the JavaScript implementation: [JavaScript Doc Reference](https://js.langchain.com/docs/modules/agents/agent_types/openai_assistant).
Thank you for your assistance.
### Motivation
This is a new feature from OpenAI and will be really useful for building LLM-based apps that need to utilize the code generation feature.
### Your contribution
N/A | OpenAI Assistant in Python | https://api.github.com/repos/langchain-ai/langchain/issues/13255/comments | 1 | 2023-11-12T14:58:55Z | 2023-11-26T14:48:34Z | https://github.com/langchain-ai/langchain/issues/13255 | 1,989,440,076 | 13,255 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.334
python 3.11.5
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
hubspot_chain = get_openapi_chain(
    "https://api.hubspot.com/api-catalog-public/v1/apis/crm/v3/objects/companies",
    headers={"Authorization": "Bearer <My Hubspot API token>"},
)
hubspot_chain("Fetch all contacts in my CRM, please")
```

I get the error:

```
BadRequestError: Error code: 400 - {'error': {'message': "'get__crm_v3_objects_companies_{companyId}_getById' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.1.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
This happens with other APIs too. I've tried the following:
1. OpenAPI
2. Asana
3. Hubspot
4. Slack
None work.
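A hedged note on the failure itself: the 400 comes from OpenAI's function-calling validation, since generated operation names such as `get__crm_v3_objects_companies_{companyId}_getById` contain `{` and `}`, which violate the `^[a-zA-Z0-9_-]{1,64}$` pattern quoted in the error. A sanitizing sketch (illustrative only, not a patch to the chain):

```python
import re

def sanitize_fn_name(name: str) -> str:
    # Replace characters OpenAI rejects and cap the length at 64.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)[:64]
```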
### Expected behavior
The API calls should work: I should get responses to my natural-language requests, which should be internally mapped to actions as specified in the OpenAPI specs I've supplied. | Any time I try to use a openAPI API that requires auth, the openapi_chain call fails. | https://api.github.com/repos/langchain-ai/langchain/issues/13251/comments | 11 | 2023-11-12T06:43:23Z | 2024-02-21T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13251 | 1,989,262,646 | 13,251 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
Thanks for this framework. It's making everyone's work simpler!
I'm using an LLM to infer data from Portuguese websites, and I expect answers in Portuguese. But some LangChain features, namely the "Get format instructions" text for Output Parsers, come written in English. I'm not sure what the best approach would be here.
Should I do all my prompting in English, and just add "Answer in Portuguese" at the end?
Should this be a feature in Langchain, and if so, how would it work?
I'm not sure asking the community to translate it would be the right approach, because this is not just about translation, but making sure the prompts work correctly. I would be fine with some api call to replace the English text with my translation, but that doesn't seem to be part of the public API at the moment.
Thanks,
### Suggestion:
_No response_ | Issue: How to use other natural languages besides English? | https://api.github.com/repos/langchain-ai/langchain/issues/13250/comments | 7 | 2023-11-12T06:28:49Z | 2024-07-10T13:09:05Z | https://github.com/langchain-ai/langchain/issues/13250 | 1,989,259,330 | 13,250 |
[
"langchain-ai",
"langchain"
```python
async def query(update: Update, context: CallbackContext):
    global chain, metadatas, texts
    if chain is None:
        await context.bot.send_message(
            chat_id=update.effective_chat.id,
            text="Please load the chain first using /load")
        return

    user_query = update.message.text

    cb = AsyncFinalIteratorCallbackHandler()
    cb.stream_final_answer = True
    cb.answer_prefix_tokens = ["FINAL", "ANSWER"]
    cb.answer_reached = True

    res = await chain.acall(user_query, callbacks=[cb])
    answer = res["answer"]
    sources = res.get("source_documents", [])
    context.user_data['sources'] = sources

    await context.bot.send_message(chat_id=update.effective_chat.id, text=answer)

    for idx, source in enumerate(sources, start=1):
        source_name = source.metadata.get("source", f"Unknown Source {idx}").replace(".", "")
        keyboard = [[InlineKeyboardButton("Show Hadith", callback_data=str(idx))]]
        await context.bot.send_message(chat_id=update.effective_chat.id,
                                       text=f"{idx}. {source_name}",
                                       reply_markup=InlineKeyboardMarkup(keyboard))
```
### Idea or request for content:
Please help me: if the system cannot find an answer to the user's question in the existing context, then the "Show Hadith" option should not be displayed, neither the source name nor the keyboard.
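A hedged sketch of one way to do this, using marker phrases to detect a fallback answer before sending the source buttons (the markers are assumptions; tune them to whatever your prompt makes the model say when it has no answer):

```python
# Skip the source buttons when the chain returned a fallback answer.
NO_ANSWER_MARKERS = ("i don't know", "cannot find")  # hypothetical markers
if any(marker in answer.lower() for marker in NO_ANSWER_MARKERS):
    sources = []
```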
| why when the system doesn't find the answer to the user's question, show hadith still appears? | https://api.github.com/repos/langchain-ai/langchain/issues/13249/comments | 22 | 2023-11-12T05:49:18Z | 2024-02-18T16:05:31Z | https://github.com/langchain-ai/langchain/issues/13249 | 1,989,251,097 | 13,249 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
import uvicorn
import os
from typing import AsyncIterable, Awaitable
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.responses import FileResponse, StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, ChatMessage
import asyncio
async def wait_done(fn, event):
try:
await fn
except Exception as e:
print('error', e)
event.set()
finally:
event.set()
async def call_openai(question):
callback = AsyncIteratorCallbackHandler()
model = ChatOpenAI(
openai_api_key= os.environ["OPENAI_API_KEY"],
streaming=True,
callbacks=[callback]
)
print('question', question)
coroutine = wait_done(model.agenerate(messages=[[HumanMessage(content=question)]]), callback.done)
task = asyncio.create_task(coroutine)
print('task', task)
print('coroutine', callback.aiter())
async for token in callback.aiter():
yield f"{token}"
await task
app = FastAPI()
@app.get("/")
async def homepage():
return FileResponse('static/index.html')
@app.post("/ask")
def ask(body: dict):
print('body', body)
# return call_openai(body['question'])
return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")
if __name__ == "__main__":
uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
run it:
```
(venv) (base) zhanglei@zhangleideMacBook-Pro chatbot % python server.py
INFO: Started server process [46402]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8888 (Press CTRL+C to quit)
INFO: 127.0.0.1:53273 - "GET / HTTP/1.1" 200 OK
body {'question': '你好'}
INFO: 127.0.0.1:53273 - "POST /ask HTTP/1.1" 200 OK
question 你好
question 你好
task <Task pending name='Task-6' coro=<wait_done() running at /Users/zhanglei/Desktop/github/chatbot/server.py:21>>
coroutine <async_generator object AsyncIteratorCallbackHandler.aiter at 0x117158e40>
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
```
### Suggestion:
_No response_ | Issue: <openai APIConnectionError> | https://api.github.com/repos/langchain-ai/langchain/issues/13247/comments | 4 | 2023-11-12T04:30:51Z | 2024-02-18T16:05:36Z | https://github.com/langchain-ai/langchain/issues/13247 | 1,989,233,891 | 13,247 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When chain_type='stuff', it works normally.
When chain_type='map_reduce', this error appears:

```
1 validation error for RetrievalQA
question_prompt
  extra fields not permitted (type=value_error.extra)
```
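A hedged guess at the cause, sketched below: the pydantic error suggests `question_prompt` was passed directly to RetrievalQA, while prompt-related kwargs for map_reduce need to go inside `chain_type_kwargs` (names here are illustrative):

```python
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="map_reduce",
    retriever=retriever,
    chain_type_kwargs={
        "question_prompt": question_prompt,  # instead of passing it top-level
        "combine_prompt": combine_prompt,
    },
)
```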
### Suggestion:
_No response_ | when i use map_reduce type, error appear | https://api.github.com/repos/langchain-ai/langchain/issues/13246/comments | 3 | 2023-11-12T02:00:07Z | 2024-02-18T16:05:41Z | https://github.com/langchain-ai/langchain/issues/13246 | 1,989,197,947 | 13,246 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/12799
_Originally posted by **younes-io**, November 2, 2023:_
I get a `NotImplementedError` when I run this code:
```python
embeddings = OpenAIEmbeddings(deployment=embedding_model, chunk_size=1)
docsearch = OpenSearchVectorSearch(index_name=index_company_docs, embedding_function=embeddings,opensearch_url=opensearch_url, http_auth=('user', auth))
doc_retriever = docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
qa = RetrievalQAWithSourcesChain.from_chain_type(
memory=memory,
llm=llm,
chain_type="stuff", # See other types of chains here
retriever=doc_retriever,
return_source_documents=True,
verbose=True,
chain_type_kwargs=chain_type_kwargs,
)
response = qa({"question": "When was the company founded?"})
```</div> | I get a `NotImplementedError` when I use `docsearch.as_retriever` with `similarity_score_threshold` | https://api.github.com/repos/langchain-ai/langchain/issues/13242/comments | 6 | 2023-11-11T17:04:13Z | 2024-02-19T16:07:05Z | https://github.com/langchain-ai/langchain/issues/13242 | 1,989,042,216 | 13,242 |
[
"langchain-ai",
"langchain"
] | ### System Info
Macos: 13.4.1 (apple silicon M1)
python version 3.10.13
relevant packages:
langchain 0.0.334
pydantic 1.10.13
pydantic_core 2.10.1
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/18d2UIFd3LHwkD3ml6_OydetvytKNnoBW?usp=sharing
1. `from langchain.tools import tool`
2. define a simple structured function
3. Use `StructuredTool.from_function(simple_function)`
### Expected behavior
I expected that it would either work, or tell me that the @tool decorator is not meant to be used with StructuredTool. This is not specified anywhere in the docs or code. | @tool decorator for StructuredTool.from_function doesn't fill in the `__name__` attribute correctly | https://api.github.com/repos/langchain-ai/langchain/issues/13241/comments | 3 | 2023-11-11T11:37:37Z | 2023-11-11T12:14:23Z | https://github.com/langchain-ai/langchain/issues/13241 | 1,988,907,431 | 13,241 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.334
python: 3.11.6
weaviate-client: 3.25.3
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to implement the WeaviateHybridSearchRetriever to retrieve documents from Weaviate. My schema indicates the document ID is stored in the _id field based on the shardingConfig.
When setting up the retriever, I included _id in the attributes list:
````
hybrid_retriever = WeaviateHybridSearchRetriever(
attributes=["_id", "aliases", "categoryid", "name", "page_content", "ticker"]
)
````
However, when I try to access _id on the returned Document objects, I get an error that _id is not found.
For example:
````
results = hybrid_retriever.get_relevant_documents(query="some query")
print(results[0]._id)  # Error! _id not found
````
I have tried variations like id, document_id instead of _id but still cannot seem to access the document ID field.
Any suggestions on what I am missing or doing wrong when trying to retrieve the document ID from the Weaviate results using the _id field specified in the schema?
Let me know if any other details would be helpful in troubleshooting this issue!
**Schema Details**
````
{
"classes":[
{
"class":"Category_taxonomy",
"invertedIndexConfig":{
"bm25":{
"b":0.75,
"k1":1.2
},
"cleanupIntervalSeconds":60,
"stopwords":{
"additions":"None",
"preset":"en",
"removals":"None"
}
},
"moduleConfig":{
"text2vec-openai":{
"baseURL":"https://api.openai.com",
"model":"ada",
"modelVersion":"002",
"type":"text",
"vectorizeClassName":true
}
},
"multiTenancyConfig":{
"enabled":false
},
"properties":[
{
"dataType":[
"text"
],
"description":"Content of the page",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"page_content",
"tokenization":"word"
},
{
"dataType":[
"number"
],
"description":"Identifier for the category",
"indexFilterable":true,
"indexSearchable":false,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"categoryid"
},
{
"dataType":[
"text"
],
"description":"Ticker symbol",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"ticker",
"tokenization":"word"
},
{
"dataType":[
"text"
],
"description":"Name of the entity",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"name",
"tokenization":"word"
},
{
"dataType":[
"text"
],
"description":"Aliases for the entity",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"aliases",
"tokenization":"word"
}
],
"replicationConfig":{
"factor":1
},
"shardingConfig":{
"virtualPerPhysical":128,
"desiredCount":1,
"actualCount":1,
"desiredVirtualCount":128,
"actualVirtualCount":128,
"key":"_id",
"strategy":"hash",
"function":"murmur3"
},
"vectorIndexConfig":{
"skip":false,
"cleanupIntervalSeconds":300,
"maxConnections":64,
"efConstruction":128,
"ef":-1,
"dynamicEfMin":100,
"dynamicEfMax":500,
"dynamicEfFactor":8,
"vectorCacheMaxObjects":1000000000000,
"flatSearchCutoff":40000,
"distance":"cosine",
"pq":{
"enabled":false,
"bitCompression":false,
"segments":0,
"centroids":256,
"trainingLimit":100000,
"encoder":{
"type":"kmeans",
"distribution":"log-normal"
}
}
},
"vectorIndexType":"hnsw",
"vectorizer":"text2vec-openai"
}
]
}
````
**Example Document**
````
{
"class": "Category_taxonomy",
"creationTimeUnix": 1699553747601,
"id": "ad092eb1-e4a6-4d93-a7d2-c507c33c3837",
"lastUpdateTimeUnix": 1699553747601,
"properties": {
"aliases": "Binance Coin, Binance Smart Chain",
"categoryid": 569,
"name": "BNB",
"page_content": "ticker: bnb\nname: BNB\naliases: Binance Coin, Binance Smart Chain",
"ticker": "bnb"
},
"vectorWeights": null
}
````
Example Search Result
````
{
"status":"success",
"results":[
{
"page_content":"ticker: bnb\nname: BNB\naliases: Binance Coin, Binance Smart Chain",
"metadata":{
"_additional":{
"explainScore":"(vector) [-0.0067740963 -0.03091735 0.00511335 0.0016186031 -0.016120477 0.017543973 -0.0072548385 -0.023063144 0.015246399 -0.0020884196]... \n(hybrid) Document ad092eb1-e4a6-4d93-a7d2-c507c33c3837 contributed 0.00819672131147541 to the score",
"score":"0.008196721"
},
"aliases":"Binance Coin, Binance Smart Chain",
"categoryid":569,
"name":"BNB",
"ticker":"bnb"
},
"type":"Document"
}
]
}
````
App Code
````
# Prepare global variables
WEAVIATE_URL = os.getenv('WEAVIATE_URL')
WEAVIATE_API_KEY = os.getenv('WEAVIATE_API_KEY')
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
INDEX_NAME = "Category_taxonomy"
TEXT_KEY = "page_content"

# Dependency provider function for Weaviate client
def get_weaviate_vectorstore():
    # Initialize the Weaviate client with API key authentication
    client = weaviate.Client(
        url=WEAVIATE_URL,
        auth_client_secret=weaviate.AuthApiKey(WEAVIATE_API_KEY),
        additional_headers={
            "X-Openai-Api-Key": OPENAI_API_KEY,
        }
    )

    # Initialize embeddings with a specified model
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model='text-embedding-ada-002')

    # Initialize vector store with attributes and schema
    vectorstore = Weaviate(
        client=client,
        index_name=INDEX_NAME,
        text_key=TEXT_KEY,
        embedding=embeddings,
        attributes=["aliases", "categoryid", "name", "page_content", "ticker"],
        by_text=False
    )

    return client, vectorstore

def get_weaviate_hybrid_retriever(k: int = 5):
    # Directly call the function to get the client and vectorstore
    client, vectorstore = get_weaviate_vectorstore()

    # Instantiate the retriever with the settings from the vectorstore
    hybrid_retriever = WeaviateHybridSearchRetriever(
        client=client,
        index_name=INDEX_NAME,
        text_key=TEXT_KEY,
        attributes=["aliases", "categoryid", "name", "page_content", "ticker"],
        k=k,
        create_schema_if_missing=True
    )

    return hybrid_retriever

async def parse_query_params(request: Request) -> Dict[str, List[Any]]:
    parsed_values = defaultdict(list)
    for key, value in request.query_params.multi_items():
        # Append the value for any key directly
        parsed_values[key].append(value)
    return parsed_values

@router.get("/hybrid_search_category_taxonomy/")
async def hybrid_search_category_taxonomy(parsed_values: Dict[str, List[Any]] = Depends(parse_query_params), query: Optional[str] = None, k: int = 5):
    categoryids = parsed_values.get('categoryid', [])
    tickers = parsed_values.get('ticker', [])
    names = parsed_values.get('name', [])
    aliasess = parsed_values.get('aliases', [])

    # Use a partial function to pass 'k' to 'get_weaviate_hybrid_retriever'
    retriever = get_weaviate_hybrid_retriever(k=k)

    # Log the incoming query and filters
    logging.info(
        f"query: {query}, "
        f"categoryID: {categoryids}, "
        f"ticker: {tickers}, "
        f"name: {names}, "
        f"aliases: {aliasess}, "
        f"k: {k}"
    )

    # Initialize the where_filter with an 'And' operator if there are any filters provided
    where_filter = {"operator": "And", "operands": []} if any([categoryids, tickers, names, aliasess]) else None

    # Add filters for each parameter with the 'Equal' operator
    if categoryids:
        category_operands = [{"path": ["categoryid"], "operator": "Equal", "valueNumber": cid} for cid in categoryids]
        if category_operands:
            where_filter["operands"].append({"operator": "Or", "operands": category_operands})
    if tickers:
        ticker_operands = [{"path": ["ticker"], "operator": "Equal", "valueText": ticker} for ticker in tickers]
        if ticker_operands:
            where_filter["operands"].append({"operator": "Or", "operands": ticker_operands})
    if names:
        name_operands = [{"path": ["name"], "operator": "Equal", "valueText": name} for name in names]
        if name_operands:
            where_filter["operands"].append({"operator": "Or", "operands": name_operands})
    if aliasess:
        aliases_operands = [{"path": ["aliases"], "operator": "Equal", "valueText": aliases} for aliases in aliasess]
        if aliases_operands:
            where_filter["operands"].append({"operator": "Or", "operands": aliases_operands})

    try:
        # Fall back to a blank query when none is provided
        effective_query = " " if not query or not query.strip() else query

        # Log the where_filter before fetching documents
        logging.info(f"where_filter being used: {where_filter}")

        # Fetch the relevant documents using the hybrid retriever instance
        results = retriever.get_relevant_documents(effective_query, where_filter=where_filter, score=True)

        # Format the results for the response
        response_data = [vars(doc) for doc in results]
        return {"status": "success", "results": response_data}
    except Exception as e:
        logger.error(f"Error while processing request: {str(e)}", exc_info=True)
        raise HTTPException(detail=str(e), status_code=500)
````
### Expected behavior
**Expected Behavior**
When using the WeaviateHybridSearchRetriever for document retrieval, I expect that including the _id attribute in the attributes list will allow me to access the document ID of each retrieved document without any issues. Specifically, after setting up the WeaviateHybridSearchRetriever like so:
````
hybrid_retriever = WeaviateHybridSearchRetriever(
attributes=["_id", "aliases", "categoryid", "name", "page_content", "ticker"]
)
````
I anticipate that executing a query and attempting to print the _id of the first result should successfully return the unique identifier of the document, as per the below code snippet:
````
results = hybrid_retriever.get_relevant_documents(query="some query")
print(results[0]._id) # Expecting this to print the _id of the first result
````
In this scenario, my expectation is that the _id field, being specified in the attributes parameter, should be readily accessible in each Document object returned by the get_relevant_documents method. This behavior is crucial for my application as it relies on the unique document IDs for further processing and analysis of the retrieved data.
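For anyone else hitting this, a hedged workaround sketch. My understanding (an assumption worth verifying): Weaviate exposes the object id under `_additional`, not as a schema property, so it cannot be requested through the retriever's `attributes` list; and langchain `Document`s keep extra fields under `.metadata` rather than as top-level attributes like `results[0]._id`. Fetching the id with the raw client would look like this:

````
# hedged sketch: ask Weaviate for the id via _additional instead of attributes
result = (
    client.query
    .get("Category_taxonomy", ["page_content", "ticker"])
    .with_additional(["id"])
    .with_hybrid(query="some query")
    .with_limit(5)
    .do()
)
for obj in result["data"]["Get"]["Category_taxonomy"]:
    print(obj["_additional"]["id"])
````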
| Trouble Accessing Document ID in WeaviateHybridSearchRetriever Results | https://api.github.com/repos/langchain-ai/langchain/issues/13238/comments | 5 | 2023-11-11T04:39:30Z | 2024-05-15T16:07:19Z | https://github.com/langchain-ai/langchain/issues/13238 | 1,988,722,501 | 13,238 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using colab
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector

key_selector = SemanticSimilarityExampleSelector(vectorstore=few_shots, k=2)
few_shots_selector = SemanticSimilarityExampleSelector(vectorstore=few_shots, k=2)

key_selector.select_examples({"All": "who is in Bengaluru?"})
# It returns examples from few_shots_selector's data
```
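A hedged note: both selectors in the snippet above are constructed over the same `few_shots` vectorstore, so they necessarily share one example pool; the same merging also happens if two stores silently share a default collection (e.g., Chroma's). A sketch under that assumption, with hypothetical collection names:

```python
from langchain.vectorstores import Chroma

# give each selector its own collection so the example pools stay separate
key_store = Chroma.from_texts(key_texts, embeddings, collection_name="keys")
shot_store = Chroma.from_texts(shot_texts, embeddings, collection_name="few_shots")

key_selector = SemanticSimilarityExampleSelector(vectorstore=key_store, k=2)
few_shots_selector = SemanticSimilarityExampleSelector(vectorstore=shot_store, k=2)
```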
### Expected behavior
key_selector.select_examples({"All": "who is in Bengaluru?"})
It should return examples from key_selector's data. | If i assign two SemanticSimilarityExampleSelector with different data in different variable but it combines | https://api.github.com/repos/langchain-ai/langchain/issues/13234/comments | 4 | 2023-11-11T02:23:06Z | 2023-11-11T05:35:01Z | https://github.com/langchain-ai/langchain/issues/13234 | 1,988,660,318 | 13,234
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I followed the tutorial below to build the vector store data, but when I use ConversationalRetrievalChain.from_llm to answer my question, it cannot answer it. Why is that? Or can I only get answers with the cookbook's own chain?
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb?ref=blog.langchain.dev
The code is shown below:
```python
def read_item(query=Body(..., embed=True)):
    question = query
    print(question)

    embeddings = OpenAIEmbeddings()
    vector_store_path = r"/mnt/PD/VS"
    docsearch = Chroma(persist_directory=vector_store_path, embedding_function=embeddings)

    # Build prompt
    template = """Use the following pieces of context to answer the question at the end. \
If you don't know the answer, just say that you don't know, don't try to make up an answer. \
Use three sentences maximum. Keep the answer as concise as possible. Always say "thanks for asking!" \
at the end of the answer.
{context}
{chat_history}
Question: {question}
Helpful Answer:"""
    # QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
    prompt = PromptTemplate(
        input_variables=["chat_history", "context", "question"],
        template=template,
    )

    store = InMemoryStore()
    id_key = "doc_id"
    # The retriever (empty to start)
    retriever = MultiVectorRetriever(
        vectorstore=docsearch,
        docstore=store,
        id_key=id_key,
    )

    llm = OpenAI(
        temperature=0, max_tokens=1024,
        model_name="gpt-4-1106-preview"
    )
    memory = ConversationKGMemory(llm=llm, memory_key='chat_history', return_messages=True, output_key='answer')
    qa = ConversationalRetrievalChain.from_llm(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        memory=memory,
        return_source_documents=True,
        return_generated_question=True,
        combine_docs_chain_kwargs={'prompt': prompt}
    )

    # Run the Q&A
    result = qa({"question": question})
    # print(qa.combine_documents_chain.memory)
    return result
```
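A hedged note on a likely cause, assuming the cookbook's flow: `MultiVectorRetriever` looks up the parent documents in `docstore` using the ids stored in the vectorstore metadata, and the `InMemoryStore` here is created empty on every request, so the chain receives no context to answer from. A minimal sketch of the indexing step the retriever expects (adapted from the cookbook; `docs` and `summaries` are assumptions about your data):

```python
import uuid
from langchain.schema.document import Document

doc_ids = [str(uuid.uuid4()) for _ in docs]
summary_docs = [
    Document(page_content=s, metadata={id_key: doc_ids[i]})
    for i, s in enumerate(summaries)
]
retriever.vectorstore.add_documents(summary_docs)
# the docstore must hold the originals under the same ids
retriever.docstore.mset(list(zip(doc_ids, docs)))
```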
### Suggestion:
_No response_ | Issue: <ConversationalRetrievalChain.from_llm and partition_pdf > | https://api.github.com/repos/langchain-ai/langchain/issues/13233/comments | 3 | 2023-11-11T02:10:50Z | 2024-02-17T16:05:28Z | https://github.com/langchain-ai/langchain/issues/13233 | 1,988,652,580 | 13,233 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain
from fastapi.encoders import jsonable_encoder
from langchain.chains.mapreduce import MapReduceChain
from time import monotonic

gpt_4_8k_max_tokens = 8000  # https://platform.openai.com/docs/models/gpt-4
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(model_name=model_name, chunk_size=gpt_4_8k_max_tokens, chunk_overlap=0)
verbose = False

# Initialize output dataframe with all the columns in the patient history class
column_names = list(PatientHistory.model_fields.keys())
df_AOAI_extracted_text = pd.DataFrame(columns=column_names)

# Create documents from the input text
texts = text_splitter.split_text(test_text)
docs = [Document(page_content=t) for t in texts]
print(f"Number of Documents {len(docs)}")

# Count the number of tokens in the document
num_tokens = num_tokens_from_string(test_text, model_name)
print(f"Number of Tokens {num_tokens}")

# Call the langchain summarizer to get the output for the given prompt
summaries = []
if num_tokens < gpt_4_8k_max_tokens:
    # Stuffing is the simplest method: you simply stuff all the related data into the prompt
    # as context to pass to the language model (LangChain's StuffDocumentsChain).
    # This method is suitable for smaller pieces of data.
    chain = load_summarize_chain(llm, chain_type="stuff", prompt=TABLE_PROMPT, verbose=verbose)
else:
    # MapReduceDocumentsChain is an advanced document processing technique that extends the
    # conventional MapReduce framework by executing a distinct prompt to consolidate the
    # initial outputs, producing a cohesive summary that covers the entire document.
    print('mapreduce')
    chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=TABLE_PROMPT, combine_prompt=TABLE_PROMPT, verbose=verbose, return_intermediate_steps=False)

start_time = monotonic()
summary = chain.run(docs)
print(summary)
```
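A hedged note: with `chunk_size` equal to the model's full 8k context there is no headroom for the prompt, and `CharacterTextSplitter` only splits on its separator, so a long passage without `\n\n` can stay in one oversized chunk (which would explain the single 13516-token request). A sketch under those assumptions:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# leave headroom below the 8192-token limit; the recursive splitter falls
# back to smaller separators instead of leaving oversized chunks intact
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name=model_name, chunk_size=6000, chunk_overlap=0
)
docs = [Document(page_content=t) for t in text_splitter.split_text(test_text)]
```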
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the same script shown in the System Info section above.
### Expected behavior
Should go through all docs and provide the summary | load_summarize_chain with map_reduce error : InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 13516 tokens. Please reduce the length of the messages. | https://api.github.com/repos/langchain-ai/langchain/issues/13230/comments | 3 | 2023-11-10T23:05:46Z | 2024-02-16T16:05:46Z | https://github.com/langchain-ai/langchain/issues/13230 | 1,988,539,254 | 13,230 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I have tried several strategies to implement map_reduce summarization using Azure OpenAI and LangChain. My model is "gpt-35-turbo-16k".
Every experiment ends with the same warning:
```python
from langchain.schema.document import Document
from langchain.chains.mapreduce import MapReduceChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader

llm_summary = AzureChatOpenAI(
    openai_api_base=azure_api_base,
    openai_api_version=azure_openai_api_version,
    deployment_name=azure_deployment_name,
    openai_api_key=azure_openai_api_key,
    openai_api_type=azure_api_type,
    model_name=azure_model_name,
    temperature=azure_model_temperature
)

text = """The ReduceDocumentsChain handles taking the document mapping results and reducing them into a single output.\
It wraps a generic CombineDocumentsChain (like StuffDocumentsChain) but adds the ability to collapse documents before passing it
to the CombineDocumentsChain if their cumulative size exceeds token_max. In this example, we can actually re-use our chain for combining
our docs to also collapse our docs."""

text1 = """ You can continue with your English studies and never use Inversion in sentences. That’s perfectly okay. However, if you are preparing for a Cambridge or IELTS exam or other exams or situations where you need to demonstrate an extensive use of English, you will be expected to know about Inversion.
Let’s start with why and when. After all, if you don’t know why we use Inversion, you won’t know when to use it.
WHY & WHEN do we use INVERSION?
Inversion is mainly used for EMPHASIS. The expressions used (never, rarely, no sooner, only then, etc.) have much more impact when used at the beginning of a sentence than the more common pronoun subject, especially as most of them are negative.
Negatives are more dramatic. Consider negative contractions: don’t, won’t, can’t, haven’t, etc. They usually have strong stress in English whilst positive contractions: I’m, he’ll, she’s, we’ve, I’d, etc. usually have weak stress.
"""

doc = [Document(page_content=text1)]
chain = load_summarize_chain(llm_summary, chain_type="map_reduce")
chain.run(doc)
```
And here is Strategy 2, with a text splitter:
```python
from langchain import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=5000, chunk_overlap=50)
chunks = text_splitter.create_documents([text1])

chain = load_summarize_chain(
    llm_summary,
    chain_type='map_reduce',
    verbose=False
)

summary = chain.run(chunks)
summary
```
I always get the same output:
<img width="1473" alt="image" src="https://github.com/langchain-ai/langchain/assets/7675634/fc871a72-4f5f-43ff-9725-52a718ebaeac">
I have some questions:
1) How do I fix this warning?
2) Can I trust the output when the model is not found?
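For what it's worth, a hedged sketch of one likely fix: tiktoken does not recognize Azure-style names such as "gpt-35-turbo-16k" (no dot in the version), so langchain falls back to `cl100k_base` for token counting. Passing the canonical OpenAI name for counting, while keeping the Azure deployment name for routing, usually silences the warning. And since `cl100k_base` is in fact the encoding the gpt-3.5 family uses, the fallback should only affect token counts, not the generated text:

```python
llm_summary = AzureChatOpenAI(
    openai_api_base=azure_api_base,
    openai_api_version=azure_openai_api_version,
    deployment_name=azure_deployment_name,   # Azure deployment, unchanged
    openai_api_key=azure_openai_api_key,
    openai_api_type=azure_api_type,
    model_name="gpt-3.5-turbo-16k",          # canonical name so tiktoken resolves it
    temperature=azure_model_temperature
)
```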
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run these chunks of code in any notebook.
### Expected behavior
I want to fix this warning by helping LangChain find the model. | Warning: model not found. Using cl100k_base encoding. with Azure Openai and load_summarize_chain when I am trying to implement map_reduce | https://api.github.com/repos/langchain-ai/langchain/issues/13224/comments | 9 | 2023-11-10T20:00:32Z | 2024-05-31T17:36:59Z | https://github.com/langchain-ai/langchain/issues/13224 | 1,988,305,077 | 13,224
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.332
Python 3.10.12
Platform: GCP
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce the problem:
```
from google.cloud import aiplatform
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores.matching_engine import MatchingEngine
embeddings = VertexAIEmbeddings()
embeddings
vector_store = MatchingEngine.from_components(
    index_id=INDEX,
    region=REGION,
    embedding=embeddings,
    project_id=PROJECT_ID,
    endpoint_id=ENDPOINT,
    gcs_bucket_name=DOCS_EMBEDDING)
vector_store.similarity_search('hello world', k=8)
```
Traceback:
```
Traceback (most recent call last):
  File "/opt/conda/envs/classifier/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3526, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-13-c6115207e7f5>", line 1, in <module>
    relevant_documentation = vector_store.similarity_search('hello world', k=8)
  File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 291, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 202, in similarity_search_with_score
    return self.similarity_search_by_vector_with_score(
  File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 234, in similarity_search_by_vector_with_score
    if self.endpoint._public_match_client:
AttributeError: 'MatchingEngineIndexEndpoint' object has no attribute '_public_match_client'
```
**Expected Behavior**:
No Error
**Analysis**:
The newest changes in https://github.com/langchain-ai/langchain/pull/10056 added the following to matching engine. [Source](https://github.com/langchain-ai/langchain/blob/869df62736f9084864ab907e7ec5736dd19f05d4/libs/langchain/langchain/vectorstores/matching_engine.py#L234)
`if self.endpoint._public_match_client:`
However, in GCP's MatchingEngineIndexEndpoint Class, object _public_match_client does not get instantiated until the following check passes. [Source](https://github.com/googleapis/python-aiplatform/blob/fcf05cb6da15c83e91e6ce5f20ab3e6649983685/google/cloud/aiplatform/matching_engine/matching_engine_index_endpoint.py#L132-L133)
```
if self.public_endpoint_domain_name:
    self._public_match_client = self._instantiate_public_match_client()
```
Therefore I think the `if self.endpoint._public_match_client:` check may be preventing all Private Network users from using vector search with Matching Engine. | Vector Search on GCP Private Network gives AttributeError: 'MatchingEngineIndexEndpoint' object has no attribute '_public_match_client' | https://api.github.com/repos/langchain-ai/langchain/issues/13218/comments | 3 | 2023-11-10T19:10:23Z | 2024-02-16T16:05:51Z | https://github.com/langchain-ai/langchain/issues/13218 | 1,987,788,713 | 13,218
[
"langchain-ai",
"langchain"
] | ### System Info
Hello All,
I have just installed the helm chart with some small additions to the basic values.yaml, and the "langchain-langsmith-backend" container keeps crashing with the following error. Has anyone seen this before?
```
INFO: Started server process [1]
Fri, Nov 10 2023 2:41:24 pm | INFO: Waiting for application startup.
Fri, Nov 10 2023 2:41:24 pm | INFO: Application startup complete.
Fri, Nov 10 2023 2:41:24 pm | INFO: Uvicorn running on http://0.0.0.0:1984 (Press CTRL+C to quit)
Fri, Nov 10 2023 2:42:18 pm | INFO: Shutting down
Fri, Nov 10 2023 2:42:18 pm | INFO: Waiting for application shutdown.
Fri, Nov 10 2023 2:42:19 pm | ERROR:root:Error closing httpx client for clickhouse name '_httpx_client' is not defined
Fri, Nov 10 2023 2:42:19 pm | Traceback (most recent call last):
Fri, Nov 10 2023 2:42:19 pm | File "/code/smith-backend/app/main.py", line 140, in shutdown_event
Fri, Nov 10 2023 2:42:19 pm | File "/code/lc_database/lc_database/clickhouse.py", line 36, in close_clickhouse_client
Fri, Nov 10 2023 2:42:19 pm | await _httpx_client.aclose()
Fri, Nov 10 2023 2:42:19 pm | ^^^^^^^^^^^^^
Fri, Nov 10 2023 2:42:19 pm | NameError: name '_httpx_client' is not defined
Fri, Nov 10 2023 2:42:19 pm | INFO: Application shutdown complete.
Fri, Nov 10 2023 2:42:19 pm | INFO: Finished server process [1]
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```bash
helm repo add langchain https://langchain-ai.github.io/helm/
helm install langchain/langsmith . --values values.yaml --namespace it-dev-langchain
```
Running this on an on-prem Rancher cluster.
### Expected behavior
The container runs normally. | NameError: name '_httpx_client' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/13204/comments | 4 | 2023-11-10T14:58:08Z | 2024-02-16T16:05:56Z | https://github.com/langchain-ai/langchain/issues/13204 | 1,987,788,713 | 13,204
[
"langchain-ai",
"langchain"
] | ### System Info
I tried to use `ChatVertexAI` as a replacement for `ChatOpenAI`, as the latter is quite slow these days.
I have this code for `ChatOpenAI`:
```
template_string = """ # some explanation
give me suggestions in JSON format where the suggestions are a list of dictionaries with the following keys:
- before:
- after:
- reason:
"""
chat = ChatOpenAI(
    temperature=0.0,
    openai_api_key=self.session_info.openai_api_key,
    model=llm_model,
)
prompt_template = ChatPromptTemplate.from_template(template_string)
service_messages = prompt_template.format_messages(
    text=self.text
)
response = chat(service_messages)
info = json.loads(response.content)
```
Then I print `response.content`, here is the format as expected:
```
[{"before": ..., "after": ...}, ...]
```
Then I use `ChatVertexAI` with the same `template_string`
```
chat = ChatVertexAI(
    temperature=0.0,
    google_api_key=google_api_key,
    model="codechat-bison",
    max_output_tokens=2048,
)
prompt_template = ChatPromptTemplate.from_template(template_string)
service_messages = prompt_template.format_messages(
    text=self.text
)
response = chat(service_messages)
```
I print the `response.content` coming from Vertex AI, and the JSON is wrapped in a Markdown code fence (three backticks plus `JSON`, then `[{...}]`, then three closing backticks). To make `response.content` usable the same way, I used the following code:
```
json_match = re.search(r'```JSON(.*?)\n```', response.content, re.DOTALL)
if json_match:
    json_data = json_match.group(1).strip()
    info = json.loads(json_data)
else:
    info = {}
    print("No JSON data found in the input string.")
```
I am not sure of the potential failures of this solution, but unless this is intentional, it might be better to make it consistent.
I will try to use the other LLMs and see how others work.
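As a hedged alternative to the hand-rolled regex, here is a small helper (not part of langchain) that strips an optional Markdown fence before parsing, so both providers' outputs go through the same path:

```python
import json
import re

def parse_llm_json(text: str):
    """Strip an optional ```json ... ``` fence, then parse."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL | re.IGNORECASE)
    payload = match.group(1) if match else text
    return json.loads(payload)

info = parse_llm_json(response.content)
```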
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I shared my codes above, but the steps are:
1. setting up openai_api_key
2. setting up gcp environment
3. running ChatOpenAI
4. running ChatVertexAI
5. comparing the responses (i.e. `response.content`)
### Expected behavior
My expectation is to obtain the outputs of different models the same way, i.e., `json.loads` should output a list of dictionaries in each case. Users should not have to deal with regex to obtain the data in the expected format. | Inconsistent output format of ChatVertexAI compared to ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/13202/comments | 2 | 2023-11-10T14:42:40Z | 2024-02-09T18:37:05Z | https://github.com/langchain-ai/langchain/issues/13202 | 1,987,762,764 | 13,202
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.333
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
The current mypy release crashes on `make lint` and proposes installing the latest master version.
So, after a `git clone` and `poetry install --with dev,test,lint`, run
```bash
pip install git+https://github.com/python/mypy.git
make lint
```
**Mypy found 834 errors in 238 files**
### Expected behavior
0 errors | More than 800 errors detected with the latest version of mypy | https://api.github.com/repos/langchain-ai/langchain/issues/13199/comments | 4 | 2023-11-10T14:26:34Z | 2024-02-17T16:05:33Z | https://github.com/langchain-ai/langchain/issues/13199 | 1,987,735,492 | 13,199 |
[
"langchain-ai",
"langchain"
] | ### System Info
The text-embedding-ada-002 OpenAI embedding model on Azure OpenAI has a maximum batch size of 16. MlflowAIGatewayEmbeddings has a hard-coded batch size of 20, which makes it unusable with Azure OpenAI's text-embedding-ada-002.
The best fix would be to allow a configurable batch size as an argument to MlflowAIGatewayEmbeddings.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a gateway route to the text-embedding-ada-002 on azure openai
```
ROUTE_NAME = "azure-ada-002"
# Try to delete the route if it already exists
try:
    delete_route(ROUTE_NAME)
    print("Route deleted")
except:
    print("Route does not exist, creating..")

create_route(
    name=ROUTE_NAME,
    route_type="llm/v1/embeddings",
    model={
        "name": "text-embedding-ada-002",
        "provider": "openai",
        "openai_config": {
            "openai_api_type": azure_openai_type,
            "openai_api_key": azure_openai_key,
            "openai_deployment_name": "ada-embed-v1",
            "openai_api_base": azure_openai_base,
            "openai_api_version": "2023-05-15"
        }
    }
)
```
2. Initialize the `MlflowAIGatewayEmbeddings` and try to embed more than 16 documents
```
from langchain.embeddings import MlflowAIGatewayEmbeddings
import pandas as pd
azure_ada = MlflowAIGatewayEmbeddings(route="azure-ada-002")
test_strings = [
    "aaaa" for i in range(20)
]
azure_ada.embed_documents(test_strings)
```
3. Observe the error
```
HTTPError: 400 Client Error: Bad Request for url: https://ohio.cloud.databricks.com/gateway/azure-ada-002/invocations. Response text: {
"error": {
"message": "Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
```
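In the meantime, a hedged client-side workaround sketch: keep each `embed_documents` call at or below 16 texts, so the wrapper's internal 20-item batching never produces an oversized request (this assumes the batching applies only within a single call):

```python
def embed_in_batches(embedder, texts, batch_size=16):
    # never hand the wrapper more than 16 texts at once
    out = []
    for i in range(0, len(texts), batch_size):
        out.extend(embedder.embed_documents(texts[i : i + batch_size]))
    return out

vectors = embed_in_batches(azure_ada, test_strings)
```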
### Expected behavior
I should be able to use the `MlflowAIGatewayEmbeddings` class with Azure OpenAI. | MlflowAIGatewayEmbeddings : Default Batch size incompatible with Azure OpenAI text-embedding-ada-002 | https://api.github.com/repos/langchain-ai/langchain/issues/13197/comments | 8 | 2023-11-10T13:52:34Z | 2024-05-15T16:01:41Z | https://github.com/langchain-ai/langchain/issues/13197 | 1,987,672,041 | 13,197 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following code:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("state_of_the_union.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "What did the president say about Ketanji Brown Jackson"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)
```
I need to use a retrieval mechanism, and my data is in Turkish. Please help me change my code.
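A hedged suggestion sketch: `all-MiniLM-L6-v2` is an English model, so a multilingual sentence-transformers checkpoint is likely a better fit for Turkish, and a retriever wraps the similarity search (the model choice and `k` here are assumptions, not a definitive answer):

```python
# a multilingual model that covers Turkish
embedding_function = SentenceTransformerEmbeddings(
    model_name="paraphrase-multilingual-MiniLM-L12-v2"
)
db = Chroma.from_documents(docs, embedding_function)

retriever = db.as_retriever(search_kwargs={"k": 4})
results = retriever.get_relevant_documents(query)
```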
### Suggestion:
_No response_ | langchain Retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13196/comments | 4 | 2023-11-10T12:47:23Z | 2024-02-22T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13196 | 1,987,564,233 | 13,196 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest langchain, Mac
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi Community,
I'm trying to build a chain with a Chroma database as context, AzureOpenAI embeddings, and an AzureOpenAI GPT model.
The following imports work fine for me and do their job:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
```
But I can't figure out how to build the chain itself and what kind of chain I should use.
Could you kindly provide any suggestions?
Thanks, Artem.
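For reference, a hedged sketch of one way to wire this up with a `ConversationalRetrievalChain` (the deployment name, template wording, and memory setup below are assumptions, not a definitive recipe):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Use the context below to answer.\n\n{context}\n\nQuestion: {question}"
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=AzureChatOpenAI(deployment_name="my-gpt-deployment"),  # hypothetical deployment
    retriever=vectorstore.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": prompt},
)
result = chain({"question": "What does the document say about X?"})
```

For the Flask part, one common pattern is a dict of per-user memories keyed by username, created on first request and dropped when the session is cleared.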
### Expected behavior
Desired behaviour:
- The user asks a question
- The code searches Chroma, using AzureOpenAI embeddings to transform the text
- The prompt for the AzureOpenAI GPT model contains context retrieved from Chroma plus the current chat history of Human/AI questions and answers
- A custom prompt template should be used
- Also I want to wrap it in Flask server, so users chat history could be stored in session with username as key until it's forcibly cleared | Example for chat chain with documents retrieval and history capability | https://api.github.com/repos/langchain-ai/langchain/issues/13195/comments | 3 | 2023-11-10T12:23:44Z | 2024-02-16T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13195 | 1,987,527,709 | 13,195 |
[
"langchain-ai",
"langchain"
] | ### System Info
I just updated langchain to newest version and my Agent is not working anymore.
Tool structure :
```
class Data_Retriever(BaseModel):
    db: Any

    class Config:
        extra = Extra.forbid

    def run(self, request) -> str:
        data = self.db.get(request)
        return data
```
When the agent uses the above tool, it always stops at the "Action Input" step:
```
> Entering new AgentExecutor chain...
AI: Thought: Do I need to use a tool? Yes
Action: Data_Retriever
Action Input: DATA
> Finished chain.
```
Does anyone know how to fix this? I'm using langchain==0.0.332.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.tools import BaseTool, StructuredTool, Tool, tool
from langchain.agents import AgentType, initialize_agent, load_tools

tools = [
    Tool(
        name="Music Search",
        func=lambda x: "'All I Want For Christmas Is You' by Mariah Carey.",  # Mock Function
        description="A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'",
    ),
]

agent = initialize_agent(
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True)

agent.run(input="Random song")
```
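A hedged diagnostic sketch: turning on langchain's debug flag prints every prompt and raw LLM response, which should show why the output parser ends the run right after "Action Input":

```python
import langchain

langchain.debug = True  # dump prompts and raw completions
agent.run(input="Random song")
```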
### Expected behavior
The output should be "'All I Want For Christmas Is You' by Mariah Carey."
My agent stopped once it hit the action input step.
> Entering new AgentExecutor chain...
AI: Thought: Do I need to use a tool? Yes
Action: Music Search
Action Input: "Random song"
> Finished chain. | The Agent is not using Custom Tools. | https://api.github.com/repos/langchain-ai/langchain/issues/13194/comments | 17 | 2023-11-10T12:23:33Z | 2024-07-25T19:04:06Z | https://github.com/langchain-ai/langchain/issues/13194 | 1,987,527,490 | 13,194 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
my code:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("nazim.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "When was Nâzım Hikmet entered the Naval School?"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)
```
my nazim.txt:
```
Nâzım Hikmet was born on 21 November 1901 in Salonica, but his birth was certified as 15 January 1902 in order to prevent his age appearing older by a year older on account of 40 days. He died on 3 June 1963 in Moscow.
His paternal grandfather Nâzım Pasha the governor was a liberal and a poet. He belonged to Mevlevi Mysticism. He was a close friend to Mithat Pasha. His father, Hikmet was graduated from Mekteb-i Sultani (later the Galatasaray Lycée). He firstly dealt with trade but when he had been unsuccessful in that area, he became a civil servant at the Ministry of Foreign Affairs (Kalem-i Ecnebiye).
His mother was the daughter of Enver Pasha who was a linguist and an educator. Celile Hanım spoke French, played the piano and painted pictures as well as an artist.
His family environment, with its progressive thoughts, had tremendous effect on the education of Nâzım Hikmet. He was first trained at a school where the language of instruction was French and later attended the Numune Mektep (Taş Mektep) in Göztepe in Istanbul. After graduating primary school, he attended the prep class of the Mekteb-i Sultani with his friend, Vâlâ Nurettin. The year after, because of the financial strait in which his family found themselves, he changed his school to the Nişantaşı Junior High School.
In this period, with the influence of his grandfather, he started to write poetry. During a family meeting, Cemal Pasha, the Minister of the Navy, evinced to very much moved when Nâzım Hikmet read a heroic poem he had written about sailors. Cemal Pasha offered to send him to the Heybeliada Naval School and after the acceptance of the offer by the family, he helped Nâzım enter this school.
Nâzım Hikmet entered the Naval School in 1917 and graduated thence in 1919 and started to work as intern deck officer on the Hamidiye Cruiser. But in winter of the same year his pleurisy illness repeated. After a medical treatment period of nearly two months, during which he was under control of the head doctor of the Navy Hospital, Hakkı Şinasi Pasha, he was given permission to go home for two months. But he did not recover enough to return to work as a navy officer. By a Health Council Report he was discharged as unfit for duty in May 1920.
At this time, he was going to be known among Syllabist Poets as a young voice. Nâzım Hikmet admired Yahya Kemal who was his history and literature teacher and also his family friend, and showed him his poems for his critique. In 1920, the Alemdar Newspaper organised a contest where the jury consisted of famous poets. Nâzım was elected recipient of the award. Young masters, such as Faruk Nafiz, Yusuf Ziya, Orhan Seyfi talked about him with great admiration.
Istanbul had been under occupation and Nâzım Hikmet was writing resistance poems reflecting the ebullient patriotism. In the last days of 1920 the poem "Gençlik" (Youth) called the young generation to fight for the liberation of the country.
On 1 January 1921, with the help of illegal organisation which provided weapons to Mustafa Kemal, four poets (Faruk Nafiz,Yusuf Ziya, Nâzım Hikmet and Vâlâ Nurettin) secretly got on the Yeni Dünya (New World) Ship in Sirkeci. In order to gain passage to Ankara, one had to wait nearly 5 or 6 days at Inebolu. The passage was granted only to Nâzım Hikmet and Vâlâ Nurettin by Ankara.
During the days in Inebolu, they met young students coming from Germany who were waiting for permission to go to Ankara, like them. Among them there were Sadık Ahi ( later Mehmet Eti- Parliamentary of CHP),Vehbi(Prof. Vehbi Sarıdal) Nafi Atuf(Kansu-General Secretary of CHP). These were called as Spartans and they defended socialism and praised SSCB that was the first country which accepted Turkey's National Pact (Misak-ı Milli) Borders. These ideas were new for Nâzım and his friend Vâlâ Nureddin.
When they reached Ankara the first duty given to them was to write a poem to summon the youth of Istanbul to the national Struggle. They finished this three-page poem within three days. It was published by the Matbuat Müdürlüğü (Directory of Press) in four pages in 11.5 x 18 cm. format and served out in ten-thousand copies in March 1921. The impact of the poem was so great that the members of the National Assembly started to argue how to resolve the enthusiastic upheaval which was caused by the poem. Muhittin Birgen, the Director of Press, was criticised in the negative because of publishing and serving out of the poem.
It would have been a great problem to find jobs if the youth of Istanbul had come to Ankara. Discomforted by having been compelled to report to the National Assembly on the matter, Muhittin Birgen decided to transfer Nâzım Hikmet and Vâlâ Nureddin to the auspices of the Ministry of Education.
At this time, İsmail Fazıl Pasha, one of the relatives of Celile Hanım, summoned to the Assembly these two talented poets whose poem had caused such a stir, and introduced them to Mustafa Kemal Pasha.
```
QUESTİON:
My output should be one line and correct answer according to question "Nâzım Hikmet entered the Naval School in 1917". What should i change in my code?
| Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13192/comments | 4 | 2023-11-10T11:17:39Z | 2024-02-21T16:07:19Z | https://github.com/langchain-ai/langchain/issues/13192 | 1,987,429,747 | 13,192 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("state_of_the_union.txt")
    documents = loader.load()

    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()

    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)

    # query it
    query = "What did the president say about Ketanji Brown Jackson"
    docs = db.similarity_search(query)

    # print results
    print(docs[0].page_content)
```
I use this code but I get the following error:
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
Can you give me fixed code and explain the change?
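A hedged note while waiting for a fix: the error message itself points at chromadb's 0.4.16 `EmbeddingFunction` interface change, so the mismatch is between the installed chromadb and this langchain version. Two workaround sketches (the exact incompatibility window is an assumption):

```bash
# either pin chromadb below the interface change...
pip install "chromadb<0.4.16"
# ...or upgrade langchain to a release built against the new interface
pip install -U langchain
```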
| langchain & chroma - Basic Example | https://api.github.com/repos/langchain-ai/langchain/issues/13191/comments | 3 | 2023-11-10T10:46:38Z | 2024-02-16T16:06:21Z | https://github.com/langchain-ai/langchain/issues/13191 | 1,987,380,190 | 13,191 |
[
"langchain-ai",
"langchain"
] | ### System Info
newest version
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When loading a local model, it keeps hallucinating a conversation. How do I make it stop at "Human:"? In the create_llm function you can see two ways I tried (passing kwargs and binding), but neither worked. Is bind specifically made for LCEL?
Is there some other way to make the stopping work?
```
from langchain.llms import CTransformers
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

def create_llm(model_path='./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf', model_type="mistral"):
    llm = CTransformers(model=model_path, model_type=model_type, kwargs={'stop': ['Human:']})
    llm.bind(stop=["Human:"])  # note: bind() returns a new runnable; its result is discarded here
    return llm

def create_memory():
    memory = ConversationBufferMemory(memory_key="history")
    return memory

def create_memory_prompt():
    template = """You are an AI chatbot having a conversation with a human. Answer his questions.
Previous conversation: {history}
Human: {human_input}
AI: """
    prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)
    return prompt

def create_normal_chain(llm, prompt, memory):
    llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
    return llm_chain

chain = create_normal_chain(create_llm(), create_memory_prompt(), create_memory())
out = chain.invoke({"human_input": "Hello"})
print(out)
```
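A hedged sketch of what may be going wrong: ctransformers takes generation settings through its `config` dict (the `kwargs` field above is not forwarded as stop sequences), and `.bind()` returns a new runnable rather than mutating `llm`, so calling it and discarding the result does nothing. Under those assumptions:

```python
def create_llm(model_path="./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
               model_type="mistral"):
    # pass stop sequences via ctransformers' config dict
    return CTransformers(
        model=model_path,
        model_type=model_type,
        config={"stop": ["Human:"]},
    )
```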
### Expected behavior
Generation stops at "Human:". | Binding stop to a local llm does not work? | https://api.github.com/repos/langchain-ai/langchain/issues/13188/comments | 4 | 2023-11-10T10:31:00Z | 2024-02-09T16:04:41Z | https://github.com/langchain-ai/langchain/issues/13188 | 1,987,355,999 | 13,188 |
[
"langchain-ai",
"langchain"
] | ### System Info
CentOS 7
Python 3.9
langchain 0.0.333
Reading error PDF file here:
[llm.pdf](https://github.com/langchain-ai/langchain/files/13318006/llm.pdf)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from lxml import html
from pydantic import BaseModel
from typing import Any, Optional
from unstructured.partition.pdf import partition_pdf
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
import uuid, os
from langchain.vectorstores import Chroma
from langchain.storage import InMemoryStore
from langchain.schema.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever

os.environ["OPENAI_API_KEY"] = ''
path = rf"/mnt/PD/PDF/"

# Get elements
raw_pdf_elements = partition_pdf(
    filename=path + "llm.pdf",
    # Unstructured first finds embedded image blocks
    extract_images_in_pdf=False,
    # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
    # Titles are any sub-section of the document
    infer_table_structure=True,
    # Post processing to aggregate text once we have the title
    chunking_strategy="by_title",
    # Chunking params to aggregate text blocks
    # Attempt to create a new chunk 3800 chars
    # Attempt to keep chunks > 2000 chars
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path=path,
)
```
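Since the traceback below dies inside the table-structure model, a hedged workaround sketch is to disable table inference for the PDFs that fail (at the cost of losing HTML table output):

```python
raw_pdf_elements = partition_pdf(
    filename=path + "llm.pdf",
    extract_images_in_pdf=False,
    infer_table_structure=False,  # skip the table model that raises IndexError
    chunking_strategy="by_title",
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
)
```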
### Expected behavior
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb?ref=blog.langchain.dev
While testing langchain's Semi_Structured_RAG.ipynb on the PDF files it uses, some PDFs were read normally, but others raised errors. The error log is as follows:
File "/mnt/PD/test.py", line 20, in <module>
raw_pdf_elements = partition_pdf(
File "/usr/local/lib/python3.9/site-packages/unstructured/documents/elements.py", line 372, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/file_utils/filetype.py", line 591, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/file_utils/filetype.py", line 546, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/chunking/title.py", line 297, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 182, in partition_pdf
return partition_pdf_or_image(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 312, in partition_pdf_or_image
_layout_elements = _partition_pdf_or_image_local(
File "/usr/local/lib/python3.9/site-packages/unstructured/utils.py", line 179, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 413, in _partition_pdf_or_image_local
final_layout = process_file_with_ocr(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 170, in process_file_with_ocr
raise e
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 159, in process_file_with_ocr
merged_page_layout = supplement_page_layout_with_ocr(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 238, in supplement_page_layout_with_ocr
page_layout.elements[:] = supplement_element_with_table_extraction(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 274, in supplement_element_with_table_extraction
element.text_as_html = table_agent.predict(cropped_image, ocr_tokens=table_tokens)
File "/usr/local/lib/python3.9/site-packages/unstructured_inference/models/tables.py", line 54, in predict
return self.run_prediction(x, ocr_tokens=ocr_tokens)
File "/usr/local/lib/python3.9/site-packages/unstructured_inference/models/tables.py", line 191, in run_prediction
prediction = recognize(outputs_structure, x, tokens=ocr_tokens)[0]
IndexError: list index out of range | cookbook/Semi_Structured_RAG.ipynb ERROR:IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/13187/comments | 4 | 2023-11-10T09:59:01Z | 2024-02-21T16:07:24Z | https://github.com/langchain-ai/langchain/issues/13187 | 1,987,302,580 | 13,187 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**My code:**
```python
loader_pdf = PyMuPDFLoader("/Users/python/test_pdf.pdf")
doc_pdf = loader_pdf.load()

llm = ChatOpenAI(temperature=0)
chain = QAGenerationChain.from_llm(llm=llm)

print("pdf:\n", doc_pdf[3].page_content)
qa_pdf = chain.run(doc_pdf[3].page_content)
```

The PDF I am using is a Chinese-language data file, but the output is in English.
I could not find anything relevant in the documentation. I would like to know how to solve this problem, or whether I can set a custom prompt to solve it.
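A hedged sketch: `QAGenerationChain.from_llm` accepts a custom `prompt`, and the chain `json.loads` the model output, so a replacement template must keep the JSON shape while pinning the language (the exact wording below is an assumption):

```python
from langchain.prompts import PromptTemplate

templ = """You generate question/answer pairs from the given text.
Respond in the same language as the text (here, Chinese), returning only JSON:
{{"question": "...", "answer": "..."}}

Text:
{text}
"""
chain = QAGenerationChain.from_llm(llm=llm, prompt=PromptTemplate.from_template(templ))
```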
### Suggestion:
_No response_ | QAGenerationChain output in different languages | https://api.github.com/repos/langchain-ai/langchain/issues/13186/comments | 3 | 2023-11-10T09:53:25Z | 2024-02-16T16:06:31Z | https://github.com/langchain-ai/langchain/issues/13186 | 1,987,293,567 | 13,186 |
[
"langchain-ai",
"langchain"
] | ### System Info
<pre>
platform = macOS-10.16-x86_64-i386-64bit
version = 3.10.13 (main, Sep 11 2023, 08:39:02) [Clang 14.0.6 ]
langchain.version = 0.0.332
pydantic.version = 1.10.13
openai.version = 1.2.0
</pre>
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to initialize an instance of AzureChatOpenAI with a custom http_client. The following is the code I'm using:
<pre>
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
import httpx

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to Spanish"),
    HumanMessage(content="Good morning Vietnam!")
]

client = httpx.Client()

chat = AzureChatOpenAI(
    openai_api_key="my-secret-key",
    openai_api_version="2023-07-01-preview",
    model="gpt-4",
    deployment_name="my-deployment-name",
    azure_endpoint="https://my-instance.openai.azure.com",
    http_client=client
)

# AzureChatOpenAI.update_forward_refs()
chat(messages)
</pre>
This fails with the following pydantic config error:
<pre>
Traceback (most recent call last):
File "azure-custom.py", line 24, in <module>
chat = AzureChatOpenAI(
File "langchain-env/lib/python3.10/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1076, in pydantic.main.validate_model
File "pydantic/fields.py", line 860, in pydantic.fields.ModelField.validate
pydantic.errors.ConfigError: field "http_client" not yet prepared so type is still a ForwardRef, you might need to call AzureChatOpenAI.update_forward_refs().
</pre>
### Expected behavior
As per the [ChatOpenAI API Doc](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html), `http_client` seems to be a valid parameter to set in order to customize the httpx client in the background. But I'm unable to do that customization.
Adding `AzureChatOpenAI.update_forward_refs()` also does not solve this issue - the code raises the error even before hitting that line. | Setting a custom http_client fails with pydantic ConfigError | https://api.github.com/repos/langchain-ai/langchain/issues/13185/comments | 5 | 2023-11-10T09:36:13Z | 2023-11-27T18:50:16Z | https://github.com/langchain-ai/langchain/issues/13185 | 1,987,257,994 | 13,185 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello guys,
I have to use a proxy to access Azure OpenAI because I'm using a VPN for my company.
However, when I try to use the [web research retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research), I get stuck at fetching the pages, because the code inside the langchain library does not use the proxy.
I managed to make it work for GoogleSearchAPIWrapper by setting the proxy in os.environ, but that does not work for the internal request and html_load calls in the langchain library.

### Idea or request for content:
So I would like to have a proxy parameter when I set up the retriever:
```python
web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore, llm=llm, search=search
)
```
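Something along these lines is what I'm imagining (the `proxies` parameter is hypothetical, it does not exist today; this is just the shape of API I'd like):
```python
# Hypothetical API: the proxies parameter is my proposal, not an existing argument.
web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore,
    llm=llm,
    search=search,
    proxies={"http": "http://my-proxy:8080", "https": "http://my-proxy:8080"},
)
```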
Or any other tips that could help me get around this proxy issue.
Thanks in advance | DOC: Setting proxy for the whole langchain library, especially web_research_retriever | https://api.github.com/repos/langchain-ai/langchain/issues/13180/comments | 4 | 2023-11-10T08:54:16Z | 2024-02-16T16:06:36Z | https://github.com/langchain-ai/langchain/issues/13180 | 1,987,191,884 | 13,180 |
[
"langchain-ai",
"langchain"
] | ### Feature request
As the title suggests, there is currently no built-in method to retrieve chunks linked through edges in graph-based structures.
This is especially relevant in cases where documents are not self-contained and chunks reference other chunks, either within the same document or in other documents. These "children" chunks are often necessary to properly answer user queries.
Specifically, I would like to have the ability to perform the following within the same chain:
```
1. Get the user query
2. Embed it
3. Perform approximate nearest neighbor (ANN) lookup and retrieve the most relevant chunk
4. Given the ID of the chunk, expand the set of matches by navigating the graph
5. Retrieve all of these to provide to the Large Language Model (LLM)
6. Send everything to the LLM to answer the user query
```
To further contextualize my suggestion, I have been considering the following approach to begin with:
```python
from typing import Any, Callable, Dict, List, Optional, Union

import networkx as nx

from langchain.schema import BaseRetriever, Document

GraphType = Union[nx.DiGraph, nx.Graph]


class NXInMemoryGraphRetriever(BaseRetriever):
    def __init__(
        self,
        bootstrap_retriever: BaseRetriever,
        graph: GraphType,
        navigate_graph_function: Callable[[GraphType, str], List[Document]],
    ):
        self._bootstrap_retriever = bootstrap_retriever
        self._graph = graph
        self._navigate_graph_fn = navigate_graph_function
        ...

    def get_relevant_documents(
        self,
        query: str,
        *,
        callbacks: Optional[List[Callable]] = None,
        tags: Optional[List[str]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        run_name: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        base_documents = self._bootstrap_retriever.get_relevant_documents(query)
        expanded_set_of_docs = set(base_documents)
        for document in base_documents:
            expanded_set_of_docs.update(
                self._navigate_graph_fn(self._graph, document.metadata["id"])
            )
        ...
        return list(expanded_set_of_docs)


# Example of callable function to pass
def get_everything_one_hop_away(graph: GraphType, source_doc_id: str) -> List[Document]:
    related_docs = []
    for source_doc, related_doc in nx.dfs_edges(
        graph, source=source_doc_id, depth_limit=1
    ):
        related_docs.append(Document(...))  # Built from the related doc's id.
    return related_docs
```
Additionally, expanding this to support Neo4j or similar should be rather straightforward: don't pass the graph but the DB driver, and implement `navigate_graph_function` around a Cypher query string/literal.
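For illustration, usage would look roughly like this (a sketch that assumes `vectorstore` exists and that chunk IDs appear both as graph node IDs and in each document's `metadata["id"]`):
```python
import networkx as nx

# Nodes are chunk IDs; edges are references between chunks.
graph = nx.DiGraph()
graph.add_edge("chunk-1", "chunk-2")  # chunk-1 references chunk-2

retriever = NXInMemoryGraphRetriever(
    bootstrap_retriever=vectorstore.as_retriever(),  # steps 1-3: embed query + ANN lookup
    graph=graph,                                     # step 4: expand matches via edges
    navigate_graph_function=get_everything_one_hop_away,
)
docs = retriever.get_relevant_documents("user query")  # step 5: docs to hand to the LLM
```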
### Motivation
Use langchain to navigate graph related datasets easily.
### Your contribution
Happy to send a PR your way :) | Document retriever from Knowledge-graph based source | https://api.github.com/repos/langchain-ai/langchain/issues/13179/comments | 1 | 2023-11-10T08:44:39Z | 2023-11-13T09:08:19Z | https://github.com/langchain-ai/langchain/issues/13179 | 1,987,177,608 | 13,179 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Support OpenAI seed and fingerprint parameters to get more consistent outputs for the same inputs and model version.
https://cookbook.openai.com/examples/deterministic_outputs_with_the_seed_parameter#seed
https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
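In the meantime, I assume `seed` can already be forwarded through `model_kwargs` (untested sketch; the `system_fingerprint` would still need first-class support):
```python
from langchain.chat_models import ChatOpenAI

# Assumption: entries in model_kwargs are passed straight through to the chat completions call.
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0, model_kwargs={"seed": 42})
print(llm.predict("Pick a random animal."))
```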
### Motivation
Running unit tests that are dependent on LLM responses often leads to random flakiness.
### Your contribution
PR | Support OpenAI seed for deterministic outputs | https://api.github.com/repos/langchain-ai/langchain/issues/13177/comments | 6 | 2023-11-10T08:29:51Z | 2024-08-08T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13177 | 1,987,156,699 | 13,177 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
# Import necessary libraries
from llama_index import (
    LangchainEmbedding,
)
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
import chromadb

# Create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")

hfemb = HuggingFaceEmbeddings()
embed_model = LangchainEmbedding(hfemb)

documents = SimpleDirectoryReader("./docs/examples/data/paul_graham").load_data()

# Set up ChromaVectorStore and load in data
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context, show_progress=True
)

# Query Data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
I used this code with llama-index's retrieval mechanism for question answering over documents. How can I write the LangChain equivalent of this code? I shouldn't use an LLM, since I only retrieve answers from the documents, and I should use HuggingFaceEmbeddings for the embeddings.
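My rough attempt at the LangChain version looks like this (untested sketch; I'm not sure it's the idiomatic way):
```python
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

documents = DirectoryLoader("./docs/examples/data/paul_graham").load()
embeddings = HuggingFaceEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)

# No LLM: just return the most relevant chunks for the question.
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("What did the author do growing up?")
print(docs[0].page_content)
```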
### Suggestion:
_No response_ | Simple Retrieval QA Example | https://api.github.com/repos/langchain-ai/langchain/issues/13176/comments | 5 | 2023-11-10T08:20:00Z | 2024-02-16T16:06:41Z | https://github.com/langchain-ai/langchain/issues/13176 | 1,987,140,842 | 13,176 |
[
"langchain-ai",
"langchain"
] | ### System Info
python version: 3.11
I'm trying to run the sample code "LangChain: Q&A over Documents", but when I run the cell below, it reports the error below:
```
pip install --upgrade langchain
from llm_commons.langchain.btp_llm import ChatBTPOpenAI
from llm_commons.langchain.btp_llm import BTPOpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file, encoding='utf-8')
from langchain.indexes import VectorstoreIndexCreator
```
ImportError Traceback (most recent call last)
Cell In[4], line 1
----> 1 from langchain.indexes import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/__init__.py:17
1 """Code to support various indexing workflows.
2
3 Provides code to:
(...)
14 documents that were derived from parent documents by chunking.)
15 """
16 from langchain.indexes._api import IndexingResult, aindex, index
---> 17 from langchain.indexes._sql_record_manager import SQLRecordManager
18 from langchain.indexes.graph import GraphIndexCreator
19 from langchain.indexes.vectorstore import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py:21
18 import uuid
19 from typing import Any, AsyncGenerator, Dict, Generator, List, Optional, Sequence, Union
---> 21 from sqlalchemy import (
22 URL,
23 Column,
24 Engine,
25 Float,
26 Index,
27 String,
28 UniqueConstraint,
29 and_,
30 create_engine,
31 delete,
32 select,
33 text,
34 )
35 from sqlalchemy.ext.asyncio import (
36 AsyncEngine,
37 AsyncSession,
38 async_sessionmaker,
39 create_async_engine,
40 )
41 from sqlalchemy.ext.declarative import declarative_base
ImportError: cannot import name 'URL' from 'sqlalchemy' (/opt/conda/lib/python3.11/site-packages/sqlalchemy/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same as in System Info above: running the same cell (ending with `from langchain.indexes import VectorstoreIndexCreator`) on Python 3.11 reproduces the identical ImportError traceback.
### Expected behavior
The import should work as expected.

| from langchain.indexes import VectorstoreIndexCreator report errors "" | https://api.github.com/repos/langchain-ai/langchain/issues/13172/comments | 3 | 2023-11-10T06:47:43Z | 2024-02-16T16:06:46Z | https://github.com/langchain-ai/langchain/issues/13172 | 1,987,020,797 | 13,172 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.333
openai == 1.2.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Make a simple OpenAI call with the latest OpenAI version
### Expected behavior
In their recent changes, OpenAI updated their error API, so errors no longer have to be referenced via `openai.error`.
The error is in this file:
`langchain/libs/langchain/langchain/chat_models/openai.py`
So the error list that is
```python
errors = [
    openai.error.Timeout,
    openai.error.APIError,
    openai.error.APIConnectionError,
    openai.error.RateLimitError,
    openai.error.ServiceUnavailableError,
]
```
should be updated to
```python
errors = [
    openai.Timeout,
    openai.APIError,
    openai.APIConnectionError,
    openai.RateLimitError,
    openai.ServiceUnavailableError,
]
```
| Problem With OpenAI Error update | https://api.github.com/repos/langchain-ai/langchain/issues/13171/comments | 2 | 2023-11-10T06:37:20Z | 2024-02-17T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13171 | 1,987,009,878 | 13,171 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In https://python.langchain.com/docs/get_started/quickstart, in the section "LLM / Chat Model", with the code
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()
```
I receive the error:
```
File "~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 40
     29 import yaml
     30 from tenacity import (
     31     RetryCallState,
     32     before_sleep_log,
    (...)
...
File "/usr/lib/python3.10/abc.py", line 106
--> 106     cls = super().__new__(mcls, name, bases, namespace, **kwargs)
    107     _abc_init(cls)
    108     return cls
```
**TypeError: multiple bases have instance lay-out conflict**
### Idea or request for content:
The `TypeError: multiple bases have instance lay-out conflict` comes from CPython's type layout checks. Here is an example illustrating the instance layout conflict in Python.
Let's say we have two extension types defined in C:
```python
class Base1(object):
    __slots__ = ()
    _fields_ = ["field1"]

class Base2(object):
    __slots__ = ()
    _fields_ = ["field2"]
```
Now let's attempt multiple inheritance:
```python
class Derived(Base1, Base2):
    __slots__ = ()
```
Here you will get TypeError: multiple bases have instance layout conflict.
This error is somewhat specific to extension types: under the hood it comes from violating multiple-inheritance layout compatibility. In simpler terms, Python is complaining about a conflict caused by Base1 and Base2 having different memory layouts; it doesn't know how to resolve this ambiguity, so it throws a TypeError.
The Python version I use is 3.10.12. | TypeError: multiple bases have instance layout conflict. DOC: <Quick start code dont work with version langchain ==0.333> | https://api.github.com/repos/langchain-ai/langchain/issues/13168/comments | 3 | 2023-11-10T04:51:26Z | 2023-11-11T07:50:51Z | https://github.com/langchain-ai/langchain/issues/13168 | 1,986,880,174 | 13,168
[
"langchain-ai",
"langchain"
] | ### System Info
Getting the below error when trying to use `load_qa_chain`:
```
openai.lib._old_api.APIRemovedInV1:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run load_qa_chain
### Expected behavior
it should not error | getiing an error with openai v1 | https://api.github.com/repos/langchain-ai/langchain/issues/13162/comments | 6 | 2023-11-10T02:41:34Z | 2024-03-13T20:02:47Z | https://github.com/langchain-ai/langchain/issues/13162 | 1,986,745,002 | 13,162 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using Llama 2 with LangChain. To use the GPU, I reinstalled llama-cpp-python with the cuBLAS options, but after that I can no longer get a response from `langchain.LLMChain.predict()`.
The result looks like:
```
User:hi
> Entering new LLMChain chain...
Prompt after formatting:
History:
Question: hi
Answer:
llama_print_timings: load time = 883.81 ms
llama_print_timings: sample time = 0.47 ms / 2 runs ( 0.24 ms per token, 4255.32 tokens per second)
llama_print_timings: prompt eval time = 883.77 ms / 14 tokens ( 63.13 ms per token, 15.84 tokens per second)
llama_print_timings: eval time = 228.71 ms / 1 runs ( 228.71 ms per token, 4.37 tokens per second)
llama_print_timings: total time = 1117.20 ms
> Finished chain.
```
The following is how I define the llm and the chain:
```
llm = LlamaCpp(
    model_path=model_path,
    n_ctx=4096,
    top_k=10,
    top_p=0.9,
    temperature=0.7,
    repeat_penalty=1.1,
    verbose=True,
    callback_manager=callback_manager,
    n_gpu_layers=35,
    n_batch=100,
    stop=["Question:", "Human:"]
)
memory = ConversationBufferMemory(memory_key='chat_history')
llm_chain = LLMChain(
    prompt=prompt,
    llm=llm,
    memory=memory,
)
```
When I used a local llama.cpp build with the normally installed llama-cpp-python, everything worked well. The only change I made is the llama-cpp-python installation; all the code remains the same. I wonder why this happens.
### Suggestion:
_No response_ | Issue: Can't get response from chain.predict() | https://api.github.com/repos/langchain-ai/langchain/issues/13161/comments | 10 | 2023-11-10T02:09:17Z | 2023-11-13T03:13:33Z | https://github.com/langchain-ai/langchain/issues/13161 | 1,986,715,808 | 13,161 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi @baskaryan! Thanks for open-sourcing the [notebook](https://github.com/langchain-ai/langchain/blob/master/cookbook/openai_v1_cookbook.ipynb) related to GPT-4-Vision.
Is there a way we can estimate the cost of the API calls, similar to using `get_openai_callback`?
### Suggestion:
We could get the cost using callbacks as:
```
from langchain.callbacks import get_openai_callback
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

human_message_prompt = HumanMessagePromptTemplate.from_template(prompt)
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

with get_openai_callback() as cb:
    openai_chain = LLMChain(prompt=chat_prompt, llm=model)
    response = openai_chain.run({})
    n_tokens = cb.total_tokens
    total_cost = cb.total_cost
```
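For the vision model, though, I need to pass a `HumanMessage` with image content directly rather than a prompt template. I assume the callback can wrap that call too, along these lines (untested, and I haven't verified that vision pricing is tracked):
```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import HumanMessage

chat = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=256)
with get_openai_callback() as cb:
    msg = chat.invoke(
        [
            HumanMessage(
                content=[
                    {"type": "text", "text": "What is in this image?"},
                    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                ]
            )
        ]
    )
    print(cb.total_tokens, cb.total_cost)
```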
How do I use `HumanMessage` with the callback? | Issue: Get cost estimates of GPT-4-Vision | https://api.github.com/repos/langchain-ai/langchain/issues/13159/comments | 2 | 2023-11-10T00:53:36Z | 2024-05-20T16:07:24Z | https://github.com/langchain-ai/langchain/issues/13159 | 1,986,646,789 | 13,159 |
[
"langchain-ai",
"langchain"
] | ### System Info
Notebook with latest langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Trying the HTMLHeaderTextSplitter in a notebook:
```python
from langchain.text_splitter import HTMLHeaderTextSplitter, RecursiveCharacterTextSplitter

headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
    ("h3", "Header 3"),
    ("h4", "Header 4"),
]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

html_header_splits = html_splitter.split_text_from_file("/content/X.html")

chunk_size = 500
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)

# Split
splits = text_splitter.split_documents(html_header_splits)

splits[80:85]
```
```
---------------------------------------------------------------------------
XSLTApplyError                            Traceback (most recent call last)
<ipython-input-54-bd3edea942d1> in <cell line: 12>()
     10 html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
     11
---> 12 html_header_splits = html_splitter.split_text_from_file("/content/X.html")
     13
     14 chunk_size = 500

/usr/local/lib/python3.10/dist-packages/langchain/text_splitter.py in split_text_from_file(self, file)
    586         xslt_tree = etree.parse(xslt_path)
    587         transform = etree.XSLT(xslt_tree)
--> 588         result = transform(tree)
    589         result_dom = etree.fromstring(str(result))
    590

src/lxml/xslt.pxi in lxml.etree.XSLT.__call__()

XSLTApplyError: maxHead
```
Is the HTML just too large to be handled by the text splitter?
### Expected behavior
Load the html.. | HTMLHeaderTextSplitter won't run (maxHead) | https://api.github.com/repos/langchain-ai/langchain/issues/13149/comments | 8 | 2023-11-09T20:48:09Z | 2024-06-18T19:23:47Z | https://github.com/langchain-ai/langchain/issues/13149 | 1,986,380,076 | 13,149 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have 100 docs, from which I am trying to retrieve the top 10 that are most relevant to my query using the parent document retriever. How can this be achieved?
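For reference, I assume something like `search_kwargs` on the retriever might control this, but I'm not sure (untested sketch; `vectorstore`, `docstore` and `child_splitter` are from my existing setup):
```python
from langchain.retrievers import ParentDocumentRetriever

# Assumption: search_kwargs is forwarded to the underlying vector store search.
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=docstore,
    child_splitter=child_splitter,
    search_kwargs={"k": 10},
)
docs = retriever.get_relevant_documents("my query")
```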
### Suggestion:
_No response_ | How to retrieve custom number of docs from parent retriver document , I have 100 docs and i don't want the reteiver to just retrieve only 4 docs | https://api.github.com/repos/langchain-ai/langchain/issues/13145/comments | 11 | 2023-11-09T19:56:05Z | 2024-04-02T09:18:17Z | https://github.com/langchain-ai/langchain/issues/13145 | 1,986,306,840 | 13,145 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Need the ability to pass arguments to `browser.chromium.launch()` in the `create_sync_playwright_browser` and `create_async_playwright_browser` functions located in [`langchain.tools.playwright.utils.py`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/tools/playwright/utils.py).
### Motivation
I created an agent that uses `PlayWrightBrowserToolkit`. All local testing worked, but I encounter errors when deploying to an AWS Lambda. I encountered two errors when testing within the AWS Lambda UI: `Target page, context or browser has been closed` and `Timeout 30000ms exceeded`. I did not experience these issues locally. I identified the issue occurred when the agent used the `navigate_browser` tool. I needed to launch the browser with `browser.chromium.launch(headless=True, args=["--disable-gpu", "--single-process"])`
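Concretely, the change I have in mind for `utils.py` looks roughly like this (a sketch of my local modification, not the final PR):
```python
from typing import List, Optional

def create_sync_playwright_browser(
    headless: bool = True, args: Optional[List[str]] = None
):
    """Create a sync Playwright browser, forwarding extra Chromium launch arguments."""
    from playwright.sync_api import sync_playwright

    browser = sync_playwright().start()
    return browser.chromium.launch(headless=headless, args=args)
```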
### Your contribution
I have modified `utils.py` for Playwright and will submit a PR. | Add ability to pass arguments when creating Playwright browser | https://api.github.com/repos/langchain-ai/langchain/issues/13143/comments | 1 | 2023-11-09T19:09:40Z | 2024-02-15T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13143 | 1,986,241,586 | 13,143 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The documentation recommends installing openai with `pip install openai`.
If you do that currently (langchain 0.0.332), an incompatible openai version (1.1.2) gets installed.
It's better to install it using `pip install langchain[openai]` as that will pick a version compatible with langchain.
So I propose to replace it in the docs.
WDYT ?
### Idea or request for content:
_No response_ | DOC: Install openai with langchain[openai] | https://api.github.com/repos/langchain-ai/langchain/issues/13134/comments | 11 | 2023-11-09T15:42:53Z | 2024-04-12T16:51:25Z | https://github.com/langchain-ai/langchain/issues/13134 | 1,985,905,385 | 13,134 |
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/c52725bdc5958d5295c2d563fa9b7fcb6ed09a3e/libs/langchain/langchain/chat_models/vertexai.py#L136C29-L136C29
It seems there is an issue: the import
```python
from vertexai.preview.language_models import ChatModel
```
should be changed into
```python
from vertexai.language_models import ChatModel
```
| Import changed of Vertex chatModel | https://api.github.com/repos/langchain-ai/langchain/issues/13132/comments | 2 | 2023-11-09T15:30:47Z | 2024-02-15T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13132 | 1,985,875,195 | 13,132 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are using langchain's `create_sql_agent` to build a chat engine over a database.
Generation and execution of the query are handled by `create_sql_agent`.
I want to modify the query before execution.
Please find an example below:
```sql
-- original query
select * from transaction where type = IPC
-- modified query
select * from (select * from transaction where account_id = 123) where type = IPC
```
Can you please provide me a way forward to achieve this?
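To make it concrete, I imagine wrapping the query-execution tool so every query gets rewritten first. A rough, untested sketch (the string rewrite is naive and just for illustration):
```python
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool

class ScopedQueryTool(QuerySQLDataBaseTool):
    """Rewrite every query to scope the transaction table, then execute it."""

    def _run(self, query: str, run_manager=None) -> str:
        scoped = query.replace(
            "from transaction",
            "from (select * from transaction where account_id = 123) transaction",
        )
        return super()._run(scoped, run_manager=run_manager)
```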
### Motivation
To protect some data from the client, we need this functionality.
### Your contribution
I'm not familiar with the internal code, but I can help if guided. | Modify mysql query before execution | https://api.github.com/repos/langchain-ai/langchain/issues/13129/comments | 16 | 2023-11-09T14:07:57Z | 2024-07-03T12:36:57Z | https://github.com/langchain-ai/langchain/issues/13129 | 1,985,703,330 | 13,129 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi.
I am currently using a model from Hugging Face.
The code I use is below:
```python
llm = HuggingFacePipeline.from_model_id(
    model_id="beomi/llama-2-ko-7b",
    task="text-generation",
    model_kwargs={"max_length": 2000},
    device=0,
)
```
Currently I am only using device=0, but I would like to load the model across multiple devices (device=0,1,2,3).
However, I couldn't find a way to handle multiple devices with `HuggingFacePipeline.from_model_id`.
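The closest thing I can think of is letting accelerate shard the model across GPUs via `device_map`, which I assume can be forwarded through `model_kwargs` (untested; requires the `accelerate` package, and `device` should then not be set):
```python
# Assumption: model_kwargs are forwarded to the underlying from_pretrained call,
# so accelerate can shard the model across all visible GPUs.
llm = HuggingFacePipeline.from_model_id(
    model_id="beomi/llama-2-ko-7b",
    task="text-generation",
    model_kwargs={"max_length": 2000, "device_map": "auto"},
)
```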
If you know anything about this, I hope you can help.
### Suggestion:
_No response_ | How do I use multiple GPU when using a model with hugging face? | https://api.github.com/repos/langchain-ai/langchain/issues/13128/comments | 11 | 2023-11-09T13:45:39Z | 2024-06-28T07:43:23Z | https://github.com/langchain-ai/langchain/issues/13128 | 1,985,658,707 | 13,128 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.332
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The default `Runnable#stream` doesn't stream, but rather defers to `invoke`. As `RunnableLambda` doesn't override the `stream` function, this causes the entire chain to run with `invoke` instead of `stream` when it is used in any part of the chain.
```python
chain = RunnableLambda(lambda x: x) | ChatOpenAI()
for part in chain.stream("hello"):
    print(part)
```
### Expected behavior
What I'd expect is that I could use a `LambdaRunnable` to transform my inputs on-the-fly, then send it through to the rest of my chain and still use the stream/batch/etc functionalities that LCEL should bring.
If the early runnables don't support streaming, that doesn't mean that the final output can't be streamed. Currently when any of the runnables in a chain don't support streaming, the entire chain doesn't support streaming. | Streaming not working when using RunnableLambda | https://api.github.com/repos/langchain-ai/langchain/issues/13126/comments | 3 | 2023-11-09T13:32:54Z | 2023-11-09T14:54:59Z | https://github.com/langchain-ai/langchain/issues/13126 | 1,985,634,692 | 13,126 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are using langchain's `create_sql_agent` to build a chat engine over a database.
I want to write a custom implementation for the tool `query_sql_checker_tool`.
In this custom logic, we want to modify the SQL query before execution.
Example:
```sql
-- original query
select * from transaction where type = IPC
-- modified query
select * from (select * from transaction where account_id = 123) where type = IPC
```
Can you please provide me a way forward to achieve this?
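Something like subclassing the built-in checker tool is what I have in mind. A rough, untested sketch (I'm not sure how to wire it into `create_sql_agent`; that probably needs a custom toolkit):
```python
from langchain.tools.sql_database.tool import QuerySQLCheckerTool

class RewritingQueryCheckerTool(QuerySQLCheckerTool):
    """Check the query as usual, then scope the transaction table before execution."""

    def _run(self, query: str, run_manager=None) -> str:
        checked = super()._run(query, run_manager=run_manager)
        # Naive rewrite purely for illustration; a real implementation
        # should use a SQL parser instead of string replacement.
        return checked.replace(
            "from transaction",
            "from (select * from transaction where account_id = 123) transaction",
        )
```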
### Motivation
To protect some data from the client, we need this functionality.
### Your contribution
I'm not familiar with the internal code, but I can help if guided. | Custom implementation for query checker tool | https://api.github.com/repos/langchain-ai/langchain/issues/13125/comments | 11 | 2023-11-09T13:05:16Z | 2024-02-16T16:07:01Z | https://github.com/langchain-ai/langchain/issues/13125 | 1,985,582,820 | 13,125 |
[
"langchain-ai",
"langchain"
] | ### System Info
Running langchain==0.0.332 with python 3.11 and openai==1.2.0 on Windows.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import ChatOpenAI
ChatOpenAI(
    model="gpt-3.5-turbo",
    request_timeout=5,
)
```
### Expected behavior
Should run without errors.
Likely due to newly introduced `httpx.Timeout` type in `request_timeout` (https://github.com/langchain-ai/langchain/pull/12948). Always importing httpx and tiktoken (i.e. not conditionally on `TYPE_CHECKING`) fixes the issue. | Adding timeout to ChatOpenAI raises ConfigError | https://api.github.com/repos/langchain-ai/langchain/issues/13124/comments | 4 | 2023-11-09T12:47:32Z | 2023-11-13T09:49:28Z | https://github.com/langchain-ai/langchain/issues/13124 | 1,985,552,554 | 13,124 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Please support passing an `http_client` in openai version 1.1.1; it's just one more parameter.
If there is another solution, I would like to hear it.
Thanks.
### Motivation
We can't work without an SSL cert.
### Your contribution
In the `AzureChatOpenAI` and `AzureOpenAI` classes, add a parameter `openai_http_client`. In the `validate_environment` method, please add the following code at line 134:
```python
values["http_client"] = get_from_dict_or_env(
    values, "openai_http_client", "OPENAI_HTTP_CLIENT", default=""
)
```
| httpx openai support | https://api.github.com/repos/langchain-ai/langchain/issues/13122/comments | 4 | 2023-11-09T12:04:56Z | 2024-02-25T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13122 | 1,985,482,594 | 13,122 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22.04
langchain 0.0.332
python 3.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```bash
pip install langchain --upgrade
langchain-server
```
### Expected behavior
Traceback (most recent call last):
File "/home/ps/anaconda3/envs/langchain/bin/langchain-server", line 6, in <module>
from langchain.server import main
ModuleNotFoundError: No module named 'langchain.server' | ModuleNotFoundError: No module named 'langchain.server' | https://api.github.com/repos/langchain-ai/langchain/issues/13120/comments | 4 | 2023-11-09T10:28:35Z | 2024-02-15T16:06:25Z | https://github.com/langchain-ai/langchain/issues/13120 | 1,985,321,425 | 13,120 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version : 0.0.327
Python version : 3.10.11
Platform : Windows 11
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I was using streaming callbacks with LLMChains.
I have now switched to LangChain Expression Language (LCEL) and am using the stream/astream functions.
They don't seem to save results to the configured LLM cache, and they don't load results from it either.
I'm not sure if this is intentional or a bug.
If it is intentional, I'm wondering how I should use the cache.
Code to reproduce the issue:
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache

set_llm_cache(InMemoryCache())

model = ChatOpenAI(cache=True)
prompt = ChatPromptTemplate.from_template(
    "tell me a joke in about 5 sentences about {topic}"
)
chain = prompt | model

for s in chain.stream({"topic": "bears"}):
    print(s.content, end="", flush=True)

print("\n\nFINISHED FIRST COMPLETION\n")

for s in chain.stream({"topic": "bears"}):
    print(s.content, end="", flush=True)
```
### Expected behavior
The second result should be the same and should be retrieved quickly from the LLM cache. | LCEL stream function doesn't use LLM cache | https://api.github.com/repos/langchain-ai/langchain/issues/13119/comments | 4 | 2023-11-09T10:12:40Z | 2024-03-17T16:05:16Z | https://github.com/langchain-ai/langchain/issues/13119 | 1,985,293,295 | 13,119 |
[
"langchain-ai",
"langchain"
] | ### Feature request
LangChain is great work!
Is it possible for you to combine [Fusion](https://github.com/run-llama/llama_index/blob/main/docs/examples/low_level/fusion_retriever.ipynb) and [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)?
### Motivation
There are a lot of tables and a lot of text in my data. First, I tried Fusion_RAG, which is much better than the baseline, but it is limited to text and cannot process the tables. So I wondered if there is a way to combine Semi_Structured_RAG and Fusion_RAG so that I could deal with both text and tables at the same time. ^-^
### Your contribution
Please make the fusion: BM25+Vec+Table | Fusion_RAG + Semi_Structured_RAG? | https://api.github.com/repos/langchain-ai/langchain/issues/13117/comments | 2 | 2023-11-09T09:53:44Z | 2024-02-15T16:06:30Z | https://github.com/langchain-ai/langchain/issues/13117 | 1,985,253,003 | 13,117 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.332; affects all platforms
There is a mistake in the file `qianfan\resources\llm\completion.py`: the value `endpoint="/chat/completions-pro"` is not correct; it should be `endpoint="/chat/completions_pro"`, like below:
```python
"ERNIE-Bot-4": QfLLMInfo(
    endpoint="/chat/completions_pro",
    required_keys={"messages"},
    optional_keys={
        "stream",
        "temperature",
        "top_p",
        "penalty_score",
        "user_id",
        "system",
    },
),
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = QianfanLLMEndpoint(streaming=True, temperature=0.5)
# llm.model = "ERNIE-Bot-turbo"
llm.model = "ERNIE-Bot-4"
# llm.model = "ChatGLM2-6B-32K"
res = llm("hi")
```
This call reports an error.
### Expected behavior
should work without error | Qianfan llm error calling "ERNIE-Bot-4" | https://api.github.com/repos/langchain-ai/langchain/issues/13116/comments | 4 | 2023-11-09T09:42:52Z | 2024-02-16T16:07:06Z | https://github.com/langchain-ai/langchain/issues/13116 | 1,985,231,279 | 13,116 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.332
Python version: 3.11.5
### Who can help?
@hwchase17
When loading a local .owl file (the standard example pizza.owl), the operation breaks and gives the following error for every URI:
`does not look like a valid URI, trying to serialize this will break.`
Here's the traceback
```
Traceback (most recent call last):
File ~\AppData\Roaming\Python\Python311\site-packages\IPython\core\interactiveshell.py:3526 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[13], line 4
graph = RdfGraph(
File C:\Python311\Lib\site-packages\langchain\graphs\rdf_graph.py:159 in __init__
self.graph.parse(source_file, format=self.serialization)
File C:\Python311\Lib\site-packages\rdflib\graph.py:1501 in parse
raise se
File C:\Python311\Lib\site-packages\rdflib\graph.py:1492 in parse
parser.parse(source, self, **args)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:2021 in parse
p.loadStream(stream)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:479 in loadStream
return self.loadBuf(stream.read()) # Not ideal
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:485 in loadBuf
self.feed(buf)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:511 in feed
i = self.directiveOrStatement(s, j)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:532 in directiveOrStatement
return self.checkDot(argstr, j)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:1214 in checkDot
self.BadSyntax(argstr, j, "expected '.' or '}' or ']' at end of statement")
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:1730 in BadSyntax
raise BadSyntax(self._thisDoc, self.lines, argstr, i, msg)
File <string>
BadSyntax
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Get the source file from : https://protege.stanford.edu/ontologies/pizza/pizza.owl and place it where the code runs
2. Use the following code:
```
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph
graph = RdfGraph(
    source_file="pizza.owl",
    standard="owl"
)
)
graph.load_schema()
print(graph.get_schema)
```
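For what it's worth, I wondered whether the default turtle serialization is the culprit here, since pizza.owl is RDF/XML. I assume explicitly passing the serialization might behave differently (untested):
```python
# Assumption: RdfGraph defaults to turtle, while pizza.owl is serialized as RDF/XML.
graph = RdfGraph(
    source_file="pizza.owl",
    standard="owl",
    serialization="xml",
)
```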
### Expected behavior
For the graph to load and for graph.get_schema to show the classes and object properties. | langchain.graph RDFGraph does not read .owl extension files | https://api.github.com/repos/langchain-ai/langchain/issues/13115/comments | 3 | 2023-11-09T09:39:19Z | 2023-11-09T09:56:03Z | https://github.com/langchain-ai/langchain/issues/13115 | 1,985,225,113 | 13,115 |
[
"langchain-ai",
"langchain"
] |
# Issue you'd like to raise.
## Issue: <ValidationError: 1 validation error for ChatOpenAI __root__ openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. Try upgrading it with pip install --upgrade openai. (type=value_error)>
### Import necessary packages for Streamlit app, PDF processing, and OpenAI integration.
```
import streamlit as st
import pdfplumber
import os
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from dotenv import load_dotenv
```
### Load and verify environment variables, specifically the OpenAI API key.
### Load .env file that is in the same directory as your script.
```
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if OPENAI_API_KEY is None:
    raise ValueError("OpenAI API key not found. Make sure you have an .env file with the key defined.")
```
### Initialize the OpenAI language model with a given API key and temperature setting.
```
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0.1, model_name='gpt-4'),
    retriever=vectorstore.as_retriever())
```
### Initializing the Embedding Model and OpenAI Model
```
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(data, embeddings)
```
### Process an uploaded PDF document and extract its text content.
```
def process_document(uploaded_file):
    with pdfplumber.open(uploaded_file) as pdf:
        document_text = "\n".join(page.extract_text() for page in pdf.pages if page.extract_text())
    return document_text
```
### Summarize the extracted text from the document using the OpenAI language model.
```
def summarize_document(llm, document_text):
    text_splitter = CharacterTextSplitter(chunk_size=1000)  # split into ~1000-character chunks
    texts = text_splitter.split_text(document_text)
    docs = [Document(page_content=t) for t in texts]
    summarize_chain = load_summarize_chain(llm, chain_type='map_reduce')
    return summarize_chain.run(docs)
```
### Initialize a conversation chain with memory capabilities for the chatbot.
```
def initialize_conversation_chain(llm):
    return ConversationalRetrievalChain(
        llm=llm,
        memory=ConversationBufferWindowMemory(k=5)  # Stores the last 5 interactions.
    )
```
### Define the main function to run the Streamlit application.
```
def run_app():
    llm = initialize_llm(OPENAI_API_KEY)
    st.title("Earnings Call Analysis App")
    ### UI for document upload and processing.
    uploaded_file = st.file_uploader("Upload your earnings call transcript", type=["pdf"])
    process_button = st.button("Process Document")
```
### Process document and generate summaries
```
    if process_button and uploaded_file:
        with st.spinner('Processing Document...'):
            document_text = process_document(uploaded_file)
            summaries = summarize_document(llm, document_text)
            display_summaries(summaries)
            st.success("Document processed!")
```
### UI for interactive chatbot with memory feature.
```
    conversation_chain = initialize_conversation_chain(llm)
    user_input = st.text_input("Ask a question about the earnings call:")
    if st.button('Get Response'):
        with st.spinner('Generating response...'):
            response = generate_chat_response(conversation_chain, user_input, document_text)
            st.write(response)
```
### Display summaries on the app interface and provide download option for each.
```
def display_summaries(summaries):
    if summaries:
        for i, summary in enumerate(summaries):
            st.subheader(f"Topic {i+1}")
            st.write("One-line topic descriptor: ", summary.get("one_line_summary", ""))
            st.write("Detailed bulleted topic summaries: ", summary.get("bulleted_summary", ""))
            download_summary(summary.get("bulleted_summary", ""), i+1)
```
### Create a downloadable summary file.
```
def download_summary(summary, topic_number):
    summary_filename = f"topic_{topic_number}_summary.txt"
    st.download_button(
        label=f"Download Topic {topic_number} Summary",
        data=summary,
        file_name=summary_filename,
        mime="text/plain"
    )
```
### Generate a response from the chatbot based on the user's input and document's context.
```
def generate_chat_response(conversation_chain, user_input, document_text):
    response = conversation_chain.generate_response(
        prompt=user_input,
        context=document_text
    )
    return response.get('text', "Sorry, I couldn't generate a response.")

if __name__ == "__main__":
    run_app()
```
### Suggestion:
_No response_ | Issue: <ValidationError: 1 validation error for ChatOpenAI __root__ `openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)> | https://api.github.com/repos/langchain-ai/langchain/issues/13114/comments | 7 | 2023-11-09T09:35:22Z | 2024-02-08T16:06:57Z | https://github.com/langchain-ai/langchain/issues/13114 | 1,985,218,334 | 13,114 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
For example, in my case below:
```python
english_tools = [
    Tool(
        name="SomeNAME_1",
        func=lambda q: app.finance_chain.run(q),
        description=" Some app related description ",
        return_direct=True,
        coroutine=lambda q: app.finance_chain.arun(q),
    ),
    Tool(
        name="SomeNAME_2",
        func=lambda q: app.rqa(q),
        description=" Some app related description ",
        coroutine=lambda q: app.rqa_english.arun(q),
        return_direct=True,
    ),
    Tool.from_function(
        name="SomeNAME_3",
        func=lambda q: app.pd_agent(q),
        description=" Some app related description",
        coroutine=lambda q: app.pd_agent.arun(q),
    ),
]
```
So when SomeNAME_3 is invoked, I don't want to pass memory to this tool.
### Suggestion:
_No response_ | How can we selectively pass memory to specific tools without passing it to all tools? | https://api.github.com/repos/langchain-ai/langchain/issues/13112/comments | 4 | 2023-11-09T06:33:12Z | 2024-02-23T16:06:52Z | https://github.com/langchain-ai/langchain/issues/13112 | 1,984,928,992 | 13,112 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.300
Python version: 3.10.12
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
```py
# Prompt
prompt_text = """You are an assistant tasked with summarizing tables and text. \
Give a concise summary of the table or text. Table or text chunk: {element} """
prompt = PromptTemplate.from_template(prompt_text)
# Summary chain
model = Replicate(
    model="meta/llama-2-7b-chat:13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0",
    model_kwargs={"temperature": 0.75, "max_length": 3000, "top_p": 0.25}
)
summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()
# Apply to text
texts = [i.text for i in text_elements]
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
```
Error:
```txt
ValueError: config must be a list of the same length as inputs, but got 27 configs for 5 inputs
```
With OpenAI, it is working as expected.
### Expected behavior
The summarization chain (powered by Replicate) should process each chunks batch-wise. | Summarization chain batching is not working with Replicate | https://api.github.com/repos/langchain-ai/langchain/issues/13108/comments | 5 | 2023-11-09T04:33:37Z | 2024-02-29T11:04:57Z | https://github.com/langchain-ai/langchain/issues/13108 | 1,984,802,487 | 13,108 |
[
"langchain-ai",
"langchain"
] | 
Hello all, I'm attempting to perform a SPARQL graph query using my local LLM, but it appears that something is amiss. Please feel free to share any helpful tips or guidance.
```python
graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",
    standard="rdf",
    local_copy="test1109.ttl",
)
tokenizer = AutoTokenizer.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True)
model = AutoModel.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True).half().cuda().eval()
chain = GraphSparqlQAChain.from_llm(model, graph=graph, verbose=True)
question = "What is Tim Berners-Lee's work homepage?"
result = chain.run(question)
```
` File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)`
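I wonder if the problem is that I'm passing the raw transformers model where LangChain expects a `Runnable` LLM. Perhaps it needs to be wrapped first. A rough, untested sketch (I'm not sure ChatGLM works with the standard text-generation pipeline):
```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Assumption: wrap the raw HF model/tokenizer so LangChain sees a proper LLM object.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)
chain = GraphSparqlQAChain.from_llm(llm, graph=graph, verbose=True)
```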
| Performing a Graph SPARQL Query with a Local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13107/comments | 6 | 2023-11-09T03:37:33Z | 2024-02-26T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13107 | 1,984,758,551 | 13,107 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Here's an example implementation of `RunnableRetry`,
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schame.runnable.retry import RunnableRetry
template = PromptTemplate.from_template("tell me a joke about {topic}.")
error_template = PromptTemplate.from_template("tell me a joke about {topic} with {context}.")
model = ChatOpenAI(temperature=0.5)
chain = template | model
retryable_chain = RunnableRetry(bound=chain, max_attempt_number=3, callback={'prompt': error_template})
```
As for now `RunnableRetry` won't support callback and I'm not sure about altering the first step of the RunnableSequence with modified inputs (inputs along with the error message.)
### Suggestion:
_No response_ | How to use a different prompt template upon runnable retry? | https://api.github.com/repos/langchain-ai/langchain/issues/13105/comments | 3 | 2023-11-09T03:13:18Z | 2024-02-12T08:10:50Z | https://github.com/langchain-ai/langchain/issues/13105 | 1,984,740,642 | 13,105 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.330
openai==0.28.1
python==3.9.17
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have written a simple structured output parser. I am using to extract useful data from a document text. Here's my code:
```
import os
import logging
from dotenv import load_dotenv
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
load_dotenv()
response_schemas = [
ResponseSchema(
name="document_type", description="Type of document, typically found on top"
),
ResponseSchema(name="shipper", description="Shipper name found in the data"),
ResponseSchema(name="consignee", description="Consignee name found in the data"),
ResponseSchema(name="point_of_origin", description="Point of origin in the data"),
ResponseSchema(
name="customer_order_number", description="Customer order number in the data"
),
ResponseSchema(
name="order_number", description="Order number mentioned in the data"
),
ResponseSchema(
name="bill_of_lading",
description="Bill of lading number(B/L number) found in the data",
),
ResponseSchema(
name="carrier_name", description="Carrier name mentioned in the data"
),
ResponseSchema(
name="required_ship_date", description="Required ship date in date format"
),
ResponseSchema(
name="shipped_date", description="Shipped date, typically separated by /"
),
ResponseSchema(
name="transportation_mode", description="Transportation mode such as truck etc."
),
ResponseSchema(
name="vehicle_number", description="Vehicle number found in the data"
),
ResponseSchema(name="routing_info", description="Routing info found in the data"),
ResponseSchema(
name="invoice_to_buyer", description="Invoice to buyer data found in the data"
),
ResponseSchema(
name="consignee_number", description="Consignee number mentioned in the data"
),
ResponseSchema(
name="net_weight",
description="Net weight found in the data, typically found on the second page. It's a number succeeded by weight symbol such as kg/lb/1b/15/16 and ends with NT (Net weight).",
),
ResponseSchema(name="ticket_number", description="Ticket number found in the data"),
ResponseSchema(name="outbound_date", description="Outbound date found in the data"),
]
system_prompt = """
Following is the data extracted from a document through OCR wrapped inside <ocr_data> delimeter. It may be unstructured and unorganized, and you'll help me extract key information from this data. The data can be nuanced, and field and it's respective values may be at different positions. The presented data can be of multiple pages, separated by (------). Analyze the OCR data below and give me the value of given fields. If you can't find the values in the OCR data, simply return 'N/A'.
"""
class BolAgent:
def __init__(self):
self.openai_api_key = os.getenv("OPENAI_API_KEY")
self.llm = OpenAI(
openai_api_key=self.openai_api_key, temperature=0.1, max_tokens=1000
)
self.chat_model = ChatOpenAI(
model="gpt-3.5-turbo-16k",
openai_api_key=self.openai_api_key,
temperature=0,
)
self.response_schemas = response_schemas
self.system_prompt = system_prompt
def extract_parameters(
self,
ocr_data,
):
output_parser = StructuredOutputParser.from_response_schemas(
self.response_schemas
)
input_data = f"<ocr_data>\n{ocr_data}\n</ocr_data>"
format_instructions = output_parser.get_format_instructions()
prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template(
"{system_prompt}\n\n{format_instructions}\n\n{input_data}"
)
],
input_variables=["system_prompt", "input_data"],
partial_variables={"format_instructions": format_instructions},
)
llm_input = prompt.format_prompt(
system_prompt=system_prompt, input_data=input_data
)
logging.info(f"LLM Input: {llm_input}")
output = self.chat_model(llm_input.to_messages())
logging.info(f"LLM Output: {output}")
result = output_parser.parse(output.content)
return result
```
When I use this code, the output parser raises an error on most inputs. Here's some sample input data:
`\n------------------\nSTRAIGHT BILL OF LADING - SHORT FORM\nTEST\nCHEMTRADE\nFICHE D\'EXPEDITION - FORMULE REGULIERE\nS4D\nSHIPPER/EXPEDITEUR\nChemtrade West Limited Partnership\nTIME IN/ARRIVEE\nGROSS/BRUT\nCONSIGNEE/DESTINATAIRE\nSASK POWER\n TARE\nSHIP TO/EXPEDIEZ A\nCORY \nCOGENERATION STATION\nTIME OUT/DEPART\n8 KM W OF SASKATOON HWY 7\nNET\nVANSCOY SOK 1VO SK CA\nPOINT OF ORIGIN/POINT D\'EXPEDITION\nCUSTOMER ORDER NO./N DE COMMANDE DU CLIENT\nORDER NO./N DE COMM.\n3/L NO./NDE CONN.\nCHEMTRADE (SASKATOON)\nS\n1856\n80001877\n CARRIER NAME/NOM DU TRANSPORTEUR\nREQUIRED SHIP DATE/DATE EXP.DEM.\nDATE SHIPPED/EXPEDIE LE\nCARON TRANSPORT LTD\nNov 06,2023\nTRANSPORTATION MODE/MODE DE TRANSPORT\nVEHICLE T/C NO. - MARQUE DU WAGON\nTruck\n UNIVAR CANADA LTD.\n ROUTING/ITINERAIRE\nCONSIGNEE#/CONSIGNATAIRE\nPAGE\n600929\n1 of\n3\nNO.AND DESCRIPTION OF PACKS\nD.G.\nDESCRIPTION OF ARTICLES AND SPECIAL MARKS\nNET WEIGHT KG\nNBRE ET DESCRIPTION DE COLIS\nDESCRIPTION DES ARTICLES ET INDICATIONS SPECIALS\nPOIDS NET\n1 TT\nX\nUN1830, SULFURIC ACID, 8, PG II\n21.000 Tonne\nSULFURIC ACID 93%\nER GUIDE #137\n4 PLACARDS REQUIRED; CLASS 8, CORROSIVE\nSTCC 4930040\nSulfuric Acid 93%\nCOA W/ SHIPMENT\nDELIVERY HOURS: 8:OOAM-1: OOPM MON-THURS\nATTENDANCE DURING OFFLOAD REQUIRED\nSAFETY GOGGLES, FACE SHIELD, GLOVES, BOOTS, HARD\nHAT, STEEL TOED SHOES, PROTECTIVE SUIT\n3" QUICK CONNECT CAMLOCK; 1 HOSE REQUIRED\nPersonal Protective Equipment: Gloves. Protective clothing. Protective goggles. Face shield.\nnsufficient ventilation: wear respiratory protection.\nERP 2-1564 and Chemtrade Logistics 24-Hour Number >>\n1-866-416-4404\nPIU 2-1564 et Chemtrade Logistics Numero de 24 heures >>\n1-866-416-4404\nConsignor / Expediteur:\nLocation / Endroit:\nCHEMTRADE WEST LIMITED PARTNERSHIP\n11TH STREET WEST\nI hereby declare that the contents of this consignment are fully and accurately described above by the proper shipping\nSASKATOON SK CA\nare in all respects in proper condition for transport according to the Transportation of Dangerous Goods Regulations.\nS7K 4C8\nPer/Par:Michael Rumble, EHS Director, Risk Management\nIF CHARGES ARE TO BE PREPAID, WRITE OR STAMP\nJe declare que le contenu de ce chargement est decrit ci-dessus de faconcomplete et exacte par Iappellation reglementaire\nINDIQUER ICI SI L\'ENVOI SE FAIT EN "PORT-PAYE"\negards bien conditionne pouretre transporte conformement au Reglement sur le transport des marchandises dangereuses.\nPrepaid\nFORWARD INVOICE FOR PREPAID FREIGHT\nChemtrade West Limited Partnership\nQUOTING OUR B/L NO.TO:\n155 Gordon\nBaker Rd #300\nWeight Agreement\nFAIRE SUIVRE FACTURE POUR EXPEDITION PORT\nToronto,\nOnt.\nM2H 3N5\nPAYE EN REFERANT A NOTRE NUMERO DE CONN.A:\nSHIPPER\nChemtrade West Limited\nAGENT\nCONSIGNEE.\nEXPEDITEUR\nPartnership\nDESTINATAIRE\nPER\nPERMANENT POST OFFICE ADDRESS OF SHIPPER\nPER\nPER\nPAR\n(ADRESSE POSTALE PERMANENTE DE L\'EXPEDITEUR)\nTHESE PRODUCTS ARE SOLD AND SHIPPED IN ACCORDANCE WITH\nTHE TERMS OF SALES ON THE REVERSE SIDE OF THIS,DOCUMENT.\nResponsible Care\nCES PRODUITS SONT VENDUS ET EXPEDIES CONFORMEMENTAUX\nCONDITIONS DE VENTE APPARAISSANT AU VERSO DE LA PRESENTE\nOur commitment to sustainability.\nS4D PRASRNov 06,2023 1618`
Upon further debugging I found that, for some reason, the output has two sets of triple backticks at the end, and because of this the StructuredOutputParser ends up raising the error. Here is the output for clarity (notice the end of the output):
`content='```json\n{\n\t"document_type": "STRAIGHT BILL OF LADING - SHORT FORM",\n\t"shipper": "Chemtrade West Limited Partnership",\n\t"consignee": "SASK POWER",\n\t"point_of_origin": "VANSCOY SOK 1VO SK CA",\n\t"customer_order_number": "80001877",\n\t"order_number": "1856",\n\t"bill_of_lading": "600929",\n\t"carrier_name": "CARON TRANSPORT LTD",\n\t"required_ship_date": "Nov 06,2023",\n\t"shipped_date": "Nov 06,2023",\n\t"transportation_mode": "Truck",\n\t"vehicle_number": "T/C NO.",\n\t"routing_info": "UNIVAR CANADA LTD.",\n\t"invoice_to_buyer": "Chemtrade West Limited Partnership",\n\t"consignee_number": "600929",\n\t"net_weight": "21.000 Tonne",\n\t"ticket_number": "N/A",\n\t"outbound_date": "N/A"\n}\n```\n```'`
I started noticing this error at high frequency after OpenAI Dev Day. Any idea what I might be doing wrong?
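As a stopgap, I'm stripping the redundant trailing fence before parsing. This is an untested sketch; the helper and the regex are my own, not LangChain API:
```python
import re

def parse_with_cleanup(output_parser, text: str) -> dict:
    # collapse a trailing run of ``` fences down to a single closing fence
    cleaned = re.sub(r"```\s*(?:```\s*)+$", "```", text.strip())
    return output_parser.parse(cleaned)
```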
### Expected behavior
The output should have only a single set of triple backticks at the end, and the output parser should parse it properly. | Structured Output Parser Always Gives Error | https://api.github.com/repos/langchain-ai/langchain/issues/13101/comments | 7 | 2023-11-09T01:22:50Z | 2024-02-15T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13101 | 1,984,653,868 | 13101
[
"langchain-ai",
"langchain"
] | I see the following error when using AzureChatOpenAI with with_fallbacks. The issue does not occur when with_fallbacks is removed.
`Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)`
### System Info
Pydantic v1, same with v2
**Langchain 0.0.295**
### Who can help?
@hwchase17
@agola11
@ey
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This doesn't work:
```
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter

fallback_chat_model = ChatOpenAI(model_name="model_name")
primary_chat_model = AzureChatOpenAI()
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=primary_chat_model.get_num_tokens,
)
```
### Expected behavior
Should work like the following, but with fallback support:
```
fallback_chat_model = ChatOpenAI(model_name="model_name")
primary_chat_model = AzureChatOpenAI()
chat_model = primary_chat_model #.with_fallbacks([fallback_chat_model])
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=primary_chat_model.get_num_tokens,
)
``` | Serialize failing - Can't use with_fallbacks with MapReduceChain/Summarization: Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt... | https://api.github.com/repos/langchain-ai/langchain/issues/13098/comments | 4 | 2023-11-08T23:48:38Z | 2024-02-15T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13098 | 1,984,577,336 | 13,098 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
make text
Running Sphinx v4.5.0
loading pickled environment... done
[autosummary] generating autosummary for: index.rst
building [mo]: targets for 0 po files that are out of date
building [text]: targets for 0 source files that are out of date
updating environment: 0 added, 1 changed, 0 removed
reading sources... [100%] index
....
Exception occurred:
  File "/lib/python3.10/site-packages/sphinx/registry.py", line 354, in create_translator
    setattr(translator, 'visit_' + name, MethodType(visit, translator))
TypeError: first argument must be callable
```
Full path of file registry.py is truncated.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run `make text` in the `docs/api_reference` folder.
### Expected behavior
Documentation output in .rst format | make text fails | https://api.github.com/repos/langchain-ai/langchain/issues/13096/comments | 4 | 2023-11-08T22:39:37Z | 2024-02-15T16:06:50Z | https://github.com/langchain-ai/langchain/issues/13096 | 1,984,517,806 | 13,096 |
[
"langchain-ai",
"langchain"
] | ### System Info
llm/chat_models created using with_fallbacks() throw an AttributeError when trying to access their get_num_tokens() function. This could potentially break several use cases where one calls chat_model.get_num_tokens(). I believe even the docs access it this way (for example in the [MapReduce using LCEL](https://python.langchain.com/docs/modules/chains/document/map_reduce#recreating-with-lcel) docs).
My use case was as such:
```
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter

fallback_chat_model = ChatOpenAI(model_name="model_name", temperature=0)
primary_chat_model = AzureChatOpenAI(temperature=0)
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
## Split text using length_function and RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=chat_model.get_num_tokens,
)
texts = text_splitter.create_documents([text_sentence_numbered])
```
Of course, I can just use primary_chat_model's get_num_tokens if it's equivalent to the fallback's, but in cases where it isn't, this becomes a bigger issue. Worse still, get_num_tokens may be used inside chains like MapReduceChain, and many users haven't even begun the switch to LCEL.
**Langchain Version 0.0.295**
### Who can help?
@eyurtsev @hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
fallback_chat_model = ChatOpenAI(model_name="model_name", temperature=0)
primary_chat_model = AzureChatOpenAI(temperature=0)
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
## Split text using length_function and RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=chat_model.get_num_tokens,
)
texts = text_splitter.create_documents([text_sentence_numbered])
```
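A workaround that seems to hold up for now (relying on `RunnableWithFallbacks` exposing the wrapped model via its `runnable` attribute; this exists in current source but is not a documented API):
```python
# delegate token counting to the wrapped primary model
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size,
    chunk_overlap=chunk_overlap,
    length_function=chat_model.runnable.get_num_tokens,
)
```
Note this always uses the primary model's tokenizer, even when a fallback fires, so a proper fix on `RunnableWithFallbacks` would still be better.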
### Expected behavior
get_num_tokens() is accessible and based on the fallback/original chat_model or llm being used. | 'RunnableWithFallbacks' object has no attribute 'get_num_tokens' | https://api.github.com/repos/langchain-ai/langchain/issues/13095/comments | 8 | 2023-11-08T22:28:33Z | 2024-02-14T16:06:08Z | https://github.com/langchain-ai/langchain/issues/13095 | 1,984,500,319 | 13,095 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Is it possible to send back the annotations in the response when a request/response is content filtered by AzureOpenAI? Some prompts are content filtered by Azure, and sometimes the responses to certain prompts are content filtered as well.
Azure OpenAI sends back the annotations https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview
LangChain needs to capture these annotations and send them along with the response.
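For reference, each choice in the raw Azure response carries annotations roughly of this shape (abridged from the linked docs; severity values are `safe`/`low`/`medium`/`high`):
```json
"content_filter_results": {
  "hate": { "filtered": false, "severity": "safe" },
  "self_harm": { "filtered": false, "severity": "safe" },
  "sexual": { "filtered": false, "severity": "safe" },
  "violence": { "filtered": true, "severity": "medium" }
}
```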
### Motivation
This feature will help us explain to our users why a particular prompt was content filtered. Today we can only tell our users whether or not a prompt was filtered by Azure, but when they ask why the filter was applied, we have no definite answer. With these annotations we could tell them exactly which category triggered the filtration.
### Your contribution
No, not at the moment | Send back annotations sent by OpenAI for content filtered requests (and responses) | https://api.github.com/repos/langchain-ai/langchain/issues/13090/comments | 2 | 2023-11-08T21:23:40Z | 2024-02-14T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13090 | 1,984,421,073 | 13,090 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The tests for Langchain's `Chroma` are currently broken. The reason is:
- By default, chroma persists locally, for ease of use
- All the tests use the same collection name
- Thus, when a new `Chroma` is created, it sees the collections persisted from previous tests.
This causes inconsistent behavior, since the contents of the collection depend on the order of the tests, not just what the test itself added.
### Suggestion:
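Create a fixture that tears the store down after every test; a minimal pytest sketch (assuming `FakeEmbeddings` is acceptable for these tests, and the collection name is illustrative):
```python
import pytest

from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import Chroma


@pytest.fixture
def chroma_store():
    """Yield a fresh Chroma store and drop its collection afterwards."""
    store = Chroma(
        collection_name="test_collection",
        embedding_function=FakeEmbeddings(size=10),
    )
    yield store
    store.delete_collection()
```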
With teardown like this, each test only ever sees the documents it added itself, regardless of test order. | Issue: Chroma tests are buggy | https://api.github.com/repos/langchain-ai/langchain/issues/13087/comments | 2 | 2023-11-08T20:56:30Z | 2024-02-14T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13087 | 1,984,384,651 | 13087
[
"langchain-ai",
"langchain"
] | ### System Info
```
Traceback:
  File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/app/demo-chatbot/updatedapp.py", line 351, in <module>
    asyncio.run(main())
  File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/app/demo-chatbot/updatedapp.py", line 214, in main
    vectors = file.get_vector()
  File "/app/demo-chatbot/file.py", line 39, in get_vector
    self.save_vector()
  File "/app/demo-chatbot/file.py", line 30, in save_vector
    embeddings = OpenAIEmbeddings(openai_api_key=st.secrets["OPEN_AI_KEY"])
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/home/appuser/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 166, in validate_environment
    values["client"] = openai.Embedding
```
### Who can help?
@agola11 OpenAI changed the embeddings API: https://platform.openai.com/docs/guides/embeddings/what-are-embeddings
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just follow a function similar to:
```
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

def save_vector(self):
    self.create_images()
    self.extract_text()
    loader = TextLoader(f"{self.dir}/info.txt")
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    documents = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    vectors = FAISS.from_documents(documents, embeddings)
```
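Until LangChain supports the 1.x client, pinning openai below 1.0 is a likely stopgap (it restores the `openai.Embedding` attribute):
```
pip install "openai<1.0"
```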
### Expected behavior
No traceback; OpenAIEmbeddings should initialize cleanly against the new openai>=1.0 client (which replaces `openai.Embedding` with `client.embeddings.create`). | OpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/13082/comments | 4 | 2023-11-08T19:37:37Z | 2024-02-15T16:06:55Z | https://github.com/langchain-ai/langchain/issues/13082 | 1,984,273,473 | 13082
[
"langchain-ai",
"langchain"
] | ### System Info
langchain cloned
poetry with python v3.11.4
openai v1.1.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
cloned the repository
cd to `libs/langchain`
```
poetry install --with test
```
```
make test
```
output
```
....
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_is_secret_string - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_masked_when_passed_from_env - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_masked_when_passed_via_constructor - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_is_secret_string - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_masked_when_passed_via_constructor - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_masked_when_passed_from_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_model_param - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_model_kwargs - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_incorrect_field - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_retries - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_async_retries - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
==================================================================================== 23 failed, 1348 passed, 270 skipped, 24 warnings in 16.11s =========================================================
```
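These all look like fallout from the openai 1.x release, which removed module-level attributes such as `openai.Completion` and `openai.ChatCompletion`. Until the code and tests are migrated, downgrading inside the poetry environment is a possible stopgap (my assumption, untested here):
```
poetry run pip install "openai<1.0"
```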
### Expected behavior
All tests should pass or be skipped, without failures. | Tests are failing in local development | https://api.github.com/repos/langchain-ai/langchain/issues/13081/comments | 5 | 2023-11-08T19:12:43Z | 2024-02-14T16:06:28Z | https://github.com/langchain-ai/langchain/issues/13081 | 1,984,239,004 | 13081
[
"langchain-ai",
"langchain"
] | ### System Info
version 0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
omit
### Expected behavior
I would like to use the BAAI/bge-reranker-large model as a reranker for the initial retrieval results.
How can I use this model with LangChain?
I have seen the demo with the Cohere reranker,
but how do I use other reranker models, e.g. by wrapping one myself as in the sketch below?
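For reference, here is the untested sketch I've been experimenting with: wrapping a local cross-encoder as a custom document compressor. The class name, `top_n`, and the `vectorstore` variable are my own, and the method signatures follow my reading of `BaseDocumentCompressor`, so they may need adjusting:
```python
from typing import Sequence

from langchain.callbacks.manager import Callbacks
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
from langchain.schema import Document
from sentence_transformers import CrossEncoder


class BgeReranker(BaseDocumentCompressor):
    """Rerank retrieved documents with a local cross-encoder model."""

    model_name: str = "BAAI/bge-reranker-large"
    top_n: int = 3  # number of documents to keep after reranking

    def compress_documents(
        self, documents: Sequence[Document], query: str, callbacks: Callbacks = None
    ) -> Sequence[Document]:
        # sentence-transformers caches the weights, so repeated loads are cheap
        model = CrossEncoder(self.model_name)
        scores = model.predict([(query, d.page_content) for d in documents])
        ranked = sorted(zip(scores, documents), key=lambda p: p[0], reverse=True)
        return [doc for _, doc in ranked[: self.top_n]]

    async def acompress_documents(
        self, documents: Sequence[Document], query: str, callbacks: Callbacks = None
    ) -> Sequence[Document]:
        return self.compress_documents(documents, query, callbacks)


# `vectorstore` is an existing vector store: over-fetch, then rerank down to top_n
retriever = ContextualCompressionRetriever(
    base_compressor=BgeReranker(),
    base_retriever=vectorstore.as_retriever(search_kwargs={"k": 20}),
)
```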
Or does LangChain support reranker models other than CohereRerank out of the box? | how to use reranker model with langchain in retrievalQA case? | https://api.github.com/repos/langchain-ai/langchain/issues/13076/comments | 20 | 2023-11-08T18:06:30Z | 2024-07-31T16:06:20Z | https://github.com/langchain-ai/langchain/issues/13076 | 1,984,144,941 | 13076
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Using LCEL as suggested on the [docs](https://python.langchain.com/docs/modules/agents/), combined with `AgentExecutor`, generates a typing error when passing the runnable agent to the `AgentExecutor` constructor. This is because `AgentExecutor` defines its `agent` property as of type `BaseSingleActionAgent | BaseMultiActionAgent`:
https://github.com/langchain-ai/langchain/blob/55aeff6777431dc24e48f018e39aa418f95a6489/libs/langchain/langchain/agents/agent.py#L728-L731
So:
```python
agent = {
...
} | prompt | llm | OpenAIFunctionsAgentOutputParser()
# agent would be of type Runnable[Unknown, Unknown]
# but the typing on AgentExecutor only takes a BaseSingleActionAgent
# or a BaseMultiActionAgent as a valid agent
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# ↳ "Runnable[Unknown, Unknown]" is incompatible with "BaseSingleActionAgent"
# ↳ "Runnable[Unknown, Unknown]" is incompatible with "BaseMultiActionAgent"
```
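Until the typing is widened, the workaround seems to be a cast (or a `# type: ignore`); this merely silences the checker, since at runtime the executor already accepts the runnable:
```python
from typing import cast

from langchain.agents.agent import BaseSingleActionAgent

agent_executor = AgentExecutor(
    agent=cast(BaseSingleActionAgent, agent), tools=tools, verbose=True
)
```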
### Suggestion:
`AgentExecutor` should accept a `Runnable` for its agent property | Issue: AgentExecutor typings should accept Runnable for the agent property (to support LCEL agent) | https://api.github.com/repos/langchain-ai/langchain/issues/13075/comments | 3 | 2023-11-08T18:05:14Z | 2024-05-21T05:05:21Z | https://github.com/langchain-ai/langchain/issues/13075 | 1,984,143,084 | 13,075 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I'd like to request a feature for an `AnthropicFunctionsAgent` built on top of `AnthropicFunctions`, ideally compatible with `create_conversational_retrieval_agent`.
### Motivation
Everyone working with Anthropic models could use an Agents class!
### Your contribution
Can't currently. | [Feature Request] AnthropicFunctionsAgent | https://api.github.com/repos/langchain-ai/langchain/issues/13073/comments | 2 | 2023-11-08T17:29:21Z | 2024-02-14T16:06:33Z | https://github.com/langchain-ai/langchain/issues/13073 | 1,984,087,808 | 13,073 |
[
"langchain-ai",
"langchain"
] | ### System Info
python==3.11
langchain==0.0.326
ollama==v0.1.8
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Steps to reproduce the behavior:
```
from langchain.llms import Ollama

llm = Ollama(model='llama2')
r = await llm.agenerate(['write a limerick about babies'])
print('\n'.join([t[0].text for t in r.generations]))
```
Generates the following output:
```
/path/to/python3.11/site-packages/langchain/llms/ollama.py:164: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
run_manager.on_llm_new_token(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Sure! Here is a limerick about babies:
everybody loves babies, they're so sweet;
their coos and snuggles can't be beat.
They suck their thumbs and play with toes,
and bring joy to all who sees.
```
### Expected behavior
The generation should occur without the warning.
The error is due to the `_OllamaCommon._stream_with_aggregation()` function not being able to distinguish between being called in a blocking or an async context.
The reason this matters is that Ollama sometimes gets stuck in a generation (taking a long time to complete), and I would like to be able to apply a timeout to the underlying call. The following code can do this, but it triggers the warning described above (note: this requires that ollama be running in the background).
```
import asyncio

def _callback(fut: asyncio.Future):
    # done-callbacks fire once the task finishes or is cancelled
    if fut.cancelled() or not fut.done():
        print("Timed out! - Terminating server")
        fut.cancel()

async def run_llm(llm, prompt, timeout=300):
    # create task
    task = asyncio.create_task(llm.agenerate([prompt]))
    task.add_done_callback(_callback)
    # try to await the task; r stays None on timeout
    r = None
    try:
        r = await asyncio.wait_for(task, timeout=timeout)
    except asyncio.TimeoutError as ex:
        print(ex)
    if r is not None:
        return '\n'.join([t[0].text for t in r.generations])
    return ''

text = await run_llm(llm, 'write a limerick about babies')
print(text)
```
| Running Ollama asynchronously generates a warning: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited (langchain/llms/ollama.py:164) | https://api.github.com/repos/langchain-ai/langchain/issues/13072/comments | 5 | 2023-11-08T17:23:48Z | 2024-04-02T16:06:09Z | https://github.com/langchain-ai/langchain/issues/13072 | 1,984,077,044 | 13,072 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Related to #13036.
I've searched the LangChain documentation using the "Search" button and didn't find an existing example that lives in the `cookbooks`.
### Idea or request for content:
Is it possible to add the `cookbooks` to the documentation search? Or maybe link them right from the main menu? | DOC: no `cookbooks` search | https://api.github.com/repos/langchain-ai/langchain/issues/13070/comments | 4 | 2023-11-08T16:48:44Z | 2024-02-07T16:43:40Z | https://github.com/langchain-ai/langchain/issues/13070 | 1,984,016,062 | 13070
[
"langchain-ai",
"langchain"
] | ### System Info
Unix and Windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
pip install langchain==0.0.79
pip install langchain>=0.0.331
```
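One thing worth ruling out before blaming version ordering: in most shells the unquoted `>=` is parsed as an output redirect, so the second command effectively runs plain `pip install langchain` (already satisfied, hence no upgrade). Quoting the specifier avoids this:
```
pip install "langchain>=0.0.331"
```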
### Expected behavior
I would expect the second command to install the latest version, 0.0.331, but it doesn't. It seems pip considers 0.0.79 a higher version than 0.0.331? | PyPI versions issue e.g. langchain==0.0.331 is not newer than langchain==0.0.79 | https://api.github.com/repos/langchain-ai/langchain/issues/13069/comments | 3 | 2023-11-08T16:23:02Z | 2023-11-16T12:28:10Z | https://github.com/langchain-ai/langchain/issues/13069 | 1,983,971,005 | 13069
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The agent (STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION) sometimes does not respond with the "Final Answer" JSON, returning the AI's raw thought instead.
### Actual result:
The human is asking for the default xxxxxx. The tool has provided the information. I will now relay this information to the human.
### Expected result:
{
"action": "Final Answer",
"action_input": "The information comes from the tool."
}
### System info:
AzureChatOpenAI / gpt-4
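A partial mitigation I'm trying (it feeds malformed output back to the model for a retry rather than fixing the root cause; `tools` and `llm` as configured above):
```python
from langchain.agents import AgentType, initialize_agent

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,  # send parsing failures back to the LLM
)
```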
This happens intermittently. I have tried it in the LangSmith playground as well, and it can happen there too. Any thoughts about it? | Issue: STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION sometimes not return the "Final Answer" | https://api.github.com/repos/langchain-ai/langchain/issues/13065/comments | 2 | 2023-11-08T15:22:08Z | 2024-02-16T16:07:16Z | https://github.com/langchain-ai/langchain/issues/13065 | 1,983,852,453 | 13065
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version pinned to "^0.0.278" (using Poetry)
Python 3.11.5
Other modules from langchain (such as langchain.cache and langchain.chains) are imported within the same file in the application code and are able to be found. Only the `langchain.globals` module is not being recognized
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Import langchain.globals
Error happens on application startup
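A possible cause (my assumption): with Poetry, a caret constraint on a `0.0.x` version such as `^0.0.278` resolves to exactly `0.0.278`, and `langchain.globals` was only added in a later release, so the pinned version simply doesn't ship the module. Loosening the pin should then fix it, e.g. something like:
```
poetry add "langchain@>=0.0.331"
```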
### Expected behavior
Expected to be able to find the module, since langchain's other modules are found by the same application code | Seeing an error ModuleNotFoundError: No module named 'langchain.globals' | https://api.github.com/repos/langchain-ai/langchain/issues/13064/comments | 3 | 2023-11-08T15:22:05Z | 2024-05-05T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13064 | 1,983,852,320 | 13064