issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
Using Google Colab Free version with T4 GPU.
chromadb==0.4.16
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As per the latest Chromadb migration notes ([link](https://docs.trychroma.com/migration#migration-to-0416---november-7-2023)), the `EmbeddingFunction` definition has been updated, and this affects all custom-made embedding functions.
What this means is that `langchain.embeddings.HuggingFaceBgeEmbeddings` is inconsistent with this new definition and throws the following error:
```py
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
```
The above error can be reproduced by inserting documents into Chromadb embedded using `HuggingFaceBgeEmbeddings` like so:
```py
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceBgeEmbeddings
from transformers import AutoTokenizer
embedding_function = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-base-en-v1.5",
model_kwargs={'device': 'cuda'},
encode_kwargs={'normalize_embeddings': True},
query_instruction="Represent this sentence for searching relevant passages: "
)
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-base-en-v1.5')
text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
tokenizer, chunk_size=100, chunk_overlap=0
)
text = 'Some text that needs to be embedded.'
print(len(embedding_function.embed_query(text))) # works so far
splits = text_splitter.create_documents([text])
db = Chroma.from_documents(splits, embedding_function, persist_directory="./chroma_db")
```
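Until this is fixed upstream, one possible workaround (my own sketch, not an official API; the adapter class name is made up) is to wrap the LangChain embeddings object in an adapter that matches the new `EmbeddingFunction.__call__(self, input)` signature and hand that to a raw `chromadb` collection:
```py
import chromadb

class LangchainEmbeddingAdapter:
    # Chroma >= 0.4.16 validates that __call__ takes exactly (self, input),
    # so we delegate to the LangChain embeddings' embed_documents().
    def __init__(self, langchain_embeddings):
        self._embeddings = langchain_embeddings

    def __call__(self, input):
        return self._embeddings.embed_documents(list(input))

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(
    "docs", embedding_function=LangchainEmbeddingAdapter(embedding_function)
)
```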
I am not sure, but the answer might lie in correcting the `HuggingFaceBgeEmbeddings` class: [link](https://github.com/langchain-ai/langchain/blob/1f27104626fc71a5199df965011810426dd2eede/libs/langchain/langchain/embeddings/huggingface.py#L188)?
### Expected behavior
The expected behaviour is that a valid `db` object is created upon running the code:
```py
db = Chroma.from_documents(splits, embedding_function, persist_directory="./chroma_db")
```
 | ChromaDb EmbeddingFunction definition updated | https://api.github.com/repos/langchain-ai/langchain/issues/13061/comments | 11 | 2023-11-08T14:11:41Z | 2024-08-02T17:38:51Z | https://github.com/langchain-ai/langchain/issues/13061 | 1,983,705,244 | 13,061 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When initializing the langchain `UnstructuredPDFLoader`, e.g. as follows:
`loader = UnstructuredPDFLoader(downloaded_file, mode='elements')`
this method calls the following function (see langchain/document_loaders/pdf.py):
```python
class UnstructuredPDFLoader(UnstructuredFileLoader):
    def _get_elements(self) -> List:
        from unstructured.partition.pdf import partition_pdf

        return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)
```
The function `partition_pdf()` from Unstructured lets one pass either a file path to a file in storage or, alternatively, a byte stream pointing to a file in memory, but it does not allow passing both. Langchain forces users to pass the `file_path` parameter, so one cannot use the option of loading a file from a stream (as Unstructured doesn't expect both a file_path and a stream).
### Suggestion:
Remove the part which forces one to pass a `file_path` to `UnstructuredPDFLoader` initialization. With this change, users can decide to pass a stream in the `unstructured_kwargs` field and thus use the loader.
To test this I rewrote the `_get_elements` function as follows, and with this change passing a stream works:
```python
def _get_elements(self) -> List:
    from unstructured.partition.pdf import partition_pdf

    return partition_pdf(**self.unstructured_kwargs)
```
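With the patched loader, usage could then look like this (a hypothetical sketch; it assumes `partition_pdf`'s documented `file` kwarg is forwarded through `unstructured_kwargs`):
```python
with open("report.pdf", "rb") as f:
    # file_path is unused here; the stream is forwarded to partition_pdf(file=...)
    loader = UnstructuredPDFLoader(None, mode="elements", file=f)
    docs = loader.load()
```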
 | Issue: UnstructuredPDFLoader doesn't support Unstructured functionalities | https://api.github.com/repos/langchain-ai/langchain/issues/13060/comments | 1 | 2023-11-08T13:38:53Z | 2024-02-14T16:06:43Z | https://github.com/langchain-ai/langchain/issues/13060 | 1,983,638,100 | 13,060 |
[
"langchain-ai",
"langchain"
] | I have 4 tools which return API responses from inside their functions. Now I want to build a system that returns only the API response, without any observations. The agent should also have memory. Is this possible with a LangChain agent? If yes, can you tell me how?
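A minimal sketch of one way to do this (my own example, built on the documented `return_direct` flag; the tool and function names are made up):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

def call_my_api(query: str) -> str:
    return f"api response for {query}"  # placeholder for the real API call

tools = [
    Tool(
        name="my_api",
        func=call_my_api,
        description="Calls my API and returns its raw response.",
        return_direct=True,  # hand the tool output straight back, skipping observations
    )
]

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
)
```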
 | Langchain agent which only returns tools response without observations. | https://api.github.com/repos/langchain-ai/langchain/issues/13059/comments | 4 | 2023-11-08T13:05:28Z | 2024-02-14T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13059 | 1,983,572,989 | 13,059 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest versions of langchain and cohere
### Who can help?
@agola11
I am encountering an error related to the user_agent when attempting to create a CohereRerank object in the LangChain package. I have verified that I am passing a valid cohere_api_key and can successfully use the rerank function from Cohere directly. However, when trying to use it through the LangChain project, I encounter this specific issue related to the user_agent:
```
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py](https://localhost:8080/#) in validate_environment(cls, values)
     53             values, "cohere_api_key", "COHERE_API_KEY"
     54         )
---> 55         client_name = values["user_agent"]
     56         values["client"] = cohere.Client(cohere_api_key, client_name=client_name)
     57         return values
KeyError: 'user_agent'
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
import getpass
os.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API Key:")
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
compressor = CohereRerank()
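A workaround several users have reported (stated as an assumption, since it depends on whether your installed version declares the `user_agent` field at all) is to pass it explicitly instead of relying on a default:
```python
compressor = CohereRerank(user_agent="my-app")
```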
### Expected behavior
I want to create an object of CohereRerank. | Issue with user_agent error when creating a CohereReRank object in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/13058/comments | 3 | 2023-11-08T12:38:34Z | 2024-03-13T19:58:09Z | https://github.com/langchain-ai/langchain/issues/13058 | 1,983,512,624 | 13,058 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version = 0.0.331
Openai version = 1.1.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
Error creating LLM: module 'openai' has no attribute 'Embedding'
aai_app | Error creating LLM: module 'openai' has no attribute 'Embedding'
aai_app | Traceback (most recent call last):
aai_app | File "/code/aai/apps/slack_bot/views.py", line 160, in call_open_ai
aai_app | reply = open_ai.execute_query(question)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/open_ai.py", line 130, in execute_query
aai_app | named_entity_recognition = NamedEntityRecognition(
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/named_entity_recognition.py", line 41, in __init__
aai_app | self.llm_embeddings = self.llm.create_llm(
aai_app | ^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/llm_manager.py", line 63, in create_llm
aai_app | return OpenAIEmbeddings(
aai_app | ^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
aai_app | values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 1050, in validate_model
aai_app | input_data = validator(cls_, input_data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment
aai_app | values["client"] = openai.Embedding
aai_app | ^^^^^^^^^^^^^^^^
aai_app | AttributeError: module 'openai' has no attribute 'Embedding'
aai_app |
aai_app | module 'openai' has no attribute 'Embedding'
aai_app | Traceback (most recent call last):
aai_app | File "/code/aai/apps/slack_bot/views.py", line 160, in call_open_ai
aai_app | reply = open_ai.execute_query(question)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/open_ai.py", line 130, in execute_query
aai_app | named_entity_recognition = NamedEntityRecognition(
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/named_entity_recognition.py", line 41, in __init__
aai_app | self.llm_embeddings = self.llm.create_llm(
aai_app | ^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/llm_manager.py", line 63, in create_llm
aai_app | return OpenAIEmbeddings(
aai_app | ^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
aai_app | values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 1050, in validate_model
aai_app | input_data = validator(cls_, input_data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment
aai_app | values["client"] = openai.Embedding
aai_app | ^^^^^^^^^^^^^^^^
aai_app | AttributeError: module 'openai' has no attribute 'Embedding'
```
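(A commonly reported workaround at the time, consistent with the advice quoted elsewhere in this dump: pin the openai package with `pip install openai==0.28.1` until the langchain integration fully supports the 1.x client.)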
### Expected behavior
Not to see this error. | Updated to latest langchain version but still getting OpenAI embeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/13056/comments | 6 | 2023-11-08T12:01:41Z | 2024-02-19T16:07:45Z | https://github.com/langchain-ai/langchain/issues/13056 | 1,983,440,119 | 13,056 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.5
Langchain (pip show) 0.0.327
Windows OS
Visual Studio Code
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I searched and was surprised this has not come up.
I am using LangChain for a RAG workflow, and when I send a document that contains `{ }`, it throws a missing-key error. It treats the content of the document as it would a normal prompt, where you might have `"question {question}"` and expect an input key `question`; it then reports that all of the `{ }` occurrences are in fact different missing keys.
For example, my data contains this:
`"...1 2 ------------------------------------ {w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14}{DP}{AD}{S::}"`
It will say that we are missing numerous keys:
`ValueError: Missing some input keys: {'AD', 'w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14', ...}`
Now, I can clean the data prior to sending, but I was wondering whether it should behave like this given that this document is already within { } as content?
I use the "FewShotPromptTemplate" to create a prompt which includes a "Suffix" and my suffix is:
```
def get_suffix():
return """
Document: {content}
Question: {question}
"""
```
Here content is the content of the document that contains the { } set out above.
I build the prompt like this:
```
prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=get_prompt_template(example_template, example_variables),
    prefix=prefix,
    suffix=suffix,
    input_variables=input_variables,
)
prompt = prompt_template.format(question=question, context=context)
return prompt
```
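For reference, a common mitigation (a sketch of my own, using standard Python `str.format` escaping) is to double the braces in untrusted document text before it reaches the template:
```python
def escape_braces(text: str) -> str:
    # {{ and }} are rendered as literal braces by the template's f-string formatting
    return text.replace("{", "{{").replace("}", "}}")

context = escape_braces(document_text)  # document_text is the raw document content
```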
I also did a test using another piece of code:
```
document_context = text_response + "{AD}"
prompt = ChatPromptTemplate.from_template("my_specific_prompt: {document}.\n{format_instructions}")
formated_prompt = prompt.format(**{"document": document_context, "format_instructions":output_parser.get_format_instructions()})
```
This introduces a random `{AD}` into the text response. It did not fail; it messed up the results, but it didn't actually cause any missing-input-key errors.
So this may be limited to the FewShotPromptTemplate?
### Expected behavior
I would have thought that anything passed within a curly bracket set would be considered as plain text, not parsed for further keys that might be embedded in that curly bracket set and throw an error when it cannot find them?
Maybe I am wrong, but that is what I would have expected and is what appears to happen when using the ChatPromptTemplate.from_template? | ValueError: Missing some input keys: - passed data requires input keys if containing { } | https://api.github.com/repos/langchain-ai/langchain/issues/13055/comments | 3 | 2023-11-08T12:01:05Z | 2023-11-08T18:52:53Z | https://github.com/langchain-ai/langchain/issues/13055 | 1,983,439,114 | 13,055 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.output_parsers import ResponseSchema

response_schemas = [
    ResponseSchema(type='list', name='disease', description='disease name')
]
```
**prompt**
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"disease": list // disease name
}
```
**output**
```json
{
	"disease": "感冒" // disease name
}
```
This output cannot be parsed by `StructuredOutputParser`, because it contains comments.
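One way around this while the prompt is being improved (a sketch, assuming `output_parser` is the `StructuredOutputParser` built from the schemas above and `model_output` is the raw LLM text) is LangChain's `OutputFixingParser`, which asks an LLM to repair unparseable output:
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser

fixing_parser = OutputFixingParser.from_llm(parser=output_parser, llm=ChatOpenAI(temperature=0))
result = fixing_parser.parse(model_output)
```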
### Expected behavior
The expected generated prompt:
**prompt**
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```", and the field "disease" means "disease name".
```json
{
"disease": list
}
```
Expected output:
```json
{
	"disease": "感冒"
}
```
 | The prompt word format misleads the output content of the large model | https://api.github.com/repos/langchain-ai/langchain/issues/13054/comments | 5 | 2023-11-08T09:54:19Z | 2024-04-04T15:31:33Z | https://github.com/langchain-ai/langchain/issues/13054 | 1,983,196,438 | 13,054 |
[
"langchain-ai",
"langchain"
] | ### System Info
# Hi there,
I have started learning about Langchain today. I was creating my first langchain prompt template but something doesn't seem to work.
Here is my code in main.py:
```python
from openai import OpenAI
from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os
# Load the .env file
load_dotenv()
# Create an instance of the OpenAI class
def generate_pet_name(animal_type="dog"):
prompt_template = PromptTemplate(
input_variables=['animal_type'],
template="I have a {animal_type} as my pet. Suggest me a name for it.",
)
name_chain = LLMChain(
llm=OpenAI(),
prompt=prompt_template,
)
response = name_chain({'animal_type': animal_type})
print(response)
if __name__ == "__main__":
generate_pet_name(animal_type="dog")
```
While I think the code is okay and I have followed the GitHub getting-started guide, it doesn't seem to work and throws me this error:
```console
Traceback (most recent call last):
File "/Users/ss/Workspace/Ai/AMP/main.py", line 28, in <module>
generate_pet_name(animal_type="dog")
File "/Users/ss/Workspace/Ai/AMP/main.py", line 18, in generate_pet_name
name_chain = LLMChain(
^^^^^^^^^
File "/Users/ss/Workspace/Ai/AMP/.venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/Users/ss/Workspace/Ai/AMP/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
Please help me, as I am not that proficient in Python.
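The error says `LLMChain` expects a LangChain `Runnable`; the usual fix (a sketch of the corrected snippet) is to import the `OpenAI` wrapper from langchain rather than the raw `openai` client:
```python
from langchain.llms import OpenAI  # LangChain's wrapper is a Runnable
from langchain.chains import LLMChain

name_chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt_template)
```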
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just copy the code given there and run it. You will see the error.
### Expected behavior
Run the code and get a response without error | OpenAI instance of Runnable expected | https://api.github.com/repos/langchain-ai/langchain/issues/13053/comments | 7 | 2023-11-08T09:27:14Z | 2024-02-17T07:04:10Z | https://github.com/langchain-ai/langchain/issues/13053 | 1,983,141,722 | 13,053 |
[
"langchain-ai",
"langchain"
] | ### System Info
AWS Sagemaker DataScience3.0 Image.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code; it worked before Nov 7th:
`Chroma.from_documents(documents=document, embedding=embeddings)`
Then I get this error:
```
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
```
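(This is the same interface change reported in issue 13061 above. A stopgap that users reported working, stated here as an assumption about your environment, is to pin `chromadb` to the release before the change with `pip install "chromadb<0.4.16"` until a compatible langchain release lands.)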
### Expected behavior
Does anyone know how to fix this? | Bug after the openai updated in Embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13051/comments | 23 | 2023-11-08T07:56:36Z | 2024-04-02T12:17:34Z | https://github.com/langchain-ai/langchain/issues/13051 | 1,982,926,699 | 13,051 |
[
"langchain-ai",
"langchain"
] | ### System Info
Running on google colab.
Everything was working up until today, which makes me think it's OpenAI update-related.
Versions:
Requirement already satisfied: langchain in /usr/local/lib/python3.10/dist-packages (**0.0.331**)
Requirement already satisfied: chromadb in /usr/local/lib/python3.10/dist-packages (**0.4.16**)
Openai version pinned to 0.28.1 as @hwchase17 recommended prior -- which had fixed my embeddings issue.
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python3
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=splits, embedding_function=embeddings, persist_directory='/content/wtf')
vectorstore.persist()
retriever = vectorstore.as_retriever()
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-5-d086aa9b6782>](https://localhost:8080/#) in <cell line: 7>()
5
6 embeddings = OpenAIEmbeddings()
----> 7 vectorstore = Chroma.from_documents(documents=splits, embedding_function=embeddings, persist_directory='/content/wtf')
8 vectorstore.persist()
9 retriever = vectorstore.as_retriever()
1 frames
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
618 Chroma: Chroma vectorstore.
619 """
--> 620 chroma_collection = cls(
621 collection_name=collection_name,
622 embedding_function=embedding,
TypeError: langchain.vectorstores.chroma.Chroma() got multiple values for keyword argument 'embedding_function'
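Judging from the traceback, `Chroma.from_documents` expects the keyword `embedding` and forwards it internally as `embedding_function`, so passing `embedding_function=` at the call site collides with that. A sketch of the corrected call:
```python
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=embeddings,  # not embedding_function=
    persist_directory="/content/wtf",
)
```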
### Expected behavior
It should run without an error and correctly embed the document splits, outputting the data in the persist directory. | Multiple values for keyword argument 'embedding_function' | https://api.github.com/repos/langchain-ai/langchain/issues/13050/comments | 4 | 2023-11-08T07:26:31Z | 2024-04-05T19:39:47Z | https://github.com/langchain-ai/langchain/issues/13050 | 1,982,926,699 | 13,050 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows WSL 2 Ubuntu
Python 3.10.6
langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In a new virtual environment with Python 3.10.6, After running:
`pip install -U "langchain-cli[serve]"`
which installs "langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60" as shown below:
Successfully installed PyYAML-6.0.1 SQLAlchemy-2.0.23 aiohttp-3.8.6 aiosignal-1.3.1 annotated-types-0.6.0 anyio-3.7.1 async-timeout-4.0.3 attrs-23.1.0 certifi-2023.7.22 charset-normalizer-3.3.2 click-8.1.7 colorama-0.4.6 dataclasses-json-0.6.1 exceptiongroup-1.1.3 fastapi-0.104.1 frozenlist-1.4.0 gitdb-4.0.11 gitpython-3.1.40 greenlet-3.0.1 h11-0.14.0 httpcore-1.0.1 httpx-0.25.1 httpx-sse-0.3.1 idna-3.4 jsonpatch-1.33 jsonpointer-2.4 langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60 markdown-it-py-3.0.0 marshmallow-3.20.1 mdurl-0.1.2 multidict-6.0.4 mypy-extensions-1.0.0 numpy-1.26.1 packaging-23.2 pydantic-2.4.2 pydantic-core-2.10.1 pygments-2.16.1 requests-2.31.0 rich-13.6.0 shellingham-1.5.4 smmap-5.0.1 sniffio-1.3.0 sse-starlette-1.6.5 starlette-0.27.0 tenacity-8.2.3 tomli-2.0.1 typer-0.9.0 typing-extensions-4.8.0 typing-inspect-0.9.0 urllib3-2.0.7 uvicorn-0.23.2 yarl-1.9.2
I try to run `langchain app new my-app --package rag-conversation`, and get the following error
```
Traceback (most recent call last):
  File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/bin/langchain", line 5, in <module>
    from langchain_cli.cli import app
  File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/lib/python3.10/site-packages/langchain_cli/cli.py", line 6, in <module>
    from langchain_cli.namespaces import app as app_namespace
  File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/lib/python3.10/site-packages/langchain_cli/namespaces/app.py", line 12, in <module>
    from langserve.packages import get_langserve_export
ModuleNotFoundError: No module named 'langserve.packages'
```
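(This looks like a version mismatch: the CLI imports `langserve.packages`, which the installed `langserve-0.0.24` apparently does not provide. A plausible first step, offered as a guess rather than a confirmed fix, is to upgrade the pair together with `pip install -U langchain-cli langserve`.)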
### Expected behavior
Pulling template into a local repository | No module named 'langserve.packages' when creating langchain-cli apps | https://api.github.com/repos/langchain-ai/langchain/issues/13049/comments | 12 | 2023-11-08T07:02:02Z | 2023-11-14T18:24:49Z | https://github.com/langchain-ai/langchain/issues/13049 | 1,982,890,810 | 13,049 |
[
"langchain-ai",
"langchain"
] |
```python
from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
docs = loader.load()
docs[0]
```
In the above, it gives output as:
```
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'})
```
When I pass the above to `CharacterTextSplitter`, it gives an error because it expects a different format:
```python
text_splitter = CharacterTextSplitter(chunk_overlap=0, chunk_size=1000)
texts = text_splitter.split_documents(docs[0])
```
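The error most likely comes from passing a single `Document` where a list is expected; `split_documents` iterates over its argument, so the fix is simply:
```python
texts = text_splitter.split_documents(docs)  # pass the whole list, not docs[0]
```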
### Suggestion:
_No response_ | Issue: UnstructuredExcelLoader | https://api.github.com/repos/langchain-ai/langchain/issues/13047/comments | 3 | 2023-11-08T06:49:27Z | 2024-02-14T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13047 | 1,982,871,020 | 13,047 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The new ChatGPT can directly help humans analyze and plan; it seems to have moved beyond the original software development model, from design by humans to design by AI. Has langchain already become obsolete?
### Suggestion:
_No response_ | After watching the new OpenAI developer conference, is there still a need for langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/13046/comments | 6 | 2023-11-08T06:46:32Z | 2024-02-14T16:07:13Z | https://github.com/langchain-ai/langchain/issues/13046 | 1,982,867,010 | 13,046 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using the GitHub tools along with an agent, following the example given (https://python.langchain.com/docs/integrations/toolkits/github), but the `comment_on_issue` tool is not able to parse the Action Input given by the agent (the format seems to be the same as given in the prompts); there seems to be some issue with escape sequences.
```
Action Input: 2\n\nStarting to review docstrings and add sphinx\n\n
Error: ValueError: invalid literal for int() with base 10: '2\n\nStarting to review docstrings and add sphinx\n\n'
```
I think there might be some issue with decoding (an extra `\` added to the newline character). Can anyone please help with this?
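A sketch of a possible pre-processing step (my own, not part of the toolkit) that could be applied before the tool splits the input into issue number and comment:
```python
def normalize_action_input(raw: str) -> str:
    # Agents sometimes emit literal backslash-n pairs instead of real newlines.
    return raw.replace("\\n", "\n")
```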
### Suggestion:
_No response_ | help: github util `comment_on_issue` unable to parse `action input` from agent | https://api.github.com/repos/langchain-ai/langchain/issues/13045/comments | 4 | 2023-11-08T06:38:13Z | 2023-11-12T18:52:11Z | https://github.com/langchain-ai/langchain/issues/13045 | 1,982,856,821 | 13,045 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
Below is my code and everytime I ask it a question, it rephrases the question then answers it for me. Help me to remove the rephrasing part. I did set it to False yet it still does it.
Also, I would like to return the source of the documents but its showing me this error:
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 178, in <module>
main()
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 172, in main
result = qa({"question": user_input})
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 294, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 390, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 35, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 27, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
Below is my code
import os
import json
import pandas as pd
# LLM
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
# Prompt
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.chat import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
# Embeddings
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
# Chain
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.document_loaders.csv_loader import CSVLoader, UnstructuredCSVLoader
from langchain.document_loaders import DirectoryLoader
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from dotenv import load_dotenv
import time
import pandas as pd
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
load_dotenv()
file_path = "C:\\Users\\Asus\\Documents\\Vendolista\\home_depot_data.csv"
path = "C:\\Users\\Asus\\Documents\\Vendolista\\home depot"
# csv_loader = CSVLoader(file_path=path, encoding='utf-8')
csv_loader = DirectoryLoader(path,
glob="**/*.csv",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = CSVLoader)
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
documents = csv_loader.load()
# text_splitter = RecursiveCharacterTextSplitter(
# chunk_size=200,
# chunk_overlap=50,
# )
# chunks = text_splitter.split_documents(documents)
chunks = documents
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
# # Split the chunks into smaller batches
# batch_size = 5000
# for i in range(0, len(chunks), batch_size):
# batch = chunks[i:i+batch_size]
# knowledge_base.add_documents(batch)
# Save the vector store to disk
knowledge_base.persist()
# Load the vector store from disk
knowledge_base = Chroma(chunks, persist_directory=persist_directory, embedding_function=embeddings)
class Product(BaseModel):
"""Product details schema."""
url:str = Field(description="Full URL link to the product webpage on Homedepot.")
title:str = Field(description="Title of the product.")
description:str = Field(description="Description of the prodcut.")
brand:str = Field(description="Manufacturing brand of the product.")
price:float = Field(description="Unit selling price of the product.")
parser = PydanticOutputParser(pydantic_object=Product)
question_template = """ Make sure you understand the question as its very important for the user.
You never know what situation they are in and you need to ensure that its understood very well but do not repeat
or rewrite the question
Input: {question}
"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(question_template)
# Chain for question generation
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
# Chat Prompt
system_template = """
You are a friendly, conversational retail shopping assistant named RAAFYA.
You will always and always and always only follow these set of rules and nothing else no matter what:
1) You will provide the user answers based on the csv file that you can only read from which is called "home_depot_data.csv"
2) You will never mention the name of the dataset that you have. Just say "my data" instead
3) Focus 100% to understand exactly what the customer is looking for and only give him whats available based on the data.
4) Do not get anything or say anything that is not related to the data that you have and never provide wrong information.
5) Use the following context including product name descriptions, and keywords to show the shopper whats available,
help find what they want, and answer their questions related to your job
6) Never ever consider or think or even mention that you do not have access to the internet because it is not your job
and it is not your task. I will repeat it again and again, your information is only and only coming from the dataset
that you have which is called "home_depot_data.csv" but you must not mention that to anyone for security purposes
7) Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
8) If a customer asked for a product and it is not available then say "Sorry it is currently unavailable but you can
reach out to our staff and ask them about it at yazanrisheh@hotmail.com"
9) If the person asked for more details then provide him the details based on the output parser that you have:
URL:
Title:
Description:
Brand:
Price:
Context:
{context}
"""
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
# Human Prompt
human_template="""{format_instructions}
Question: {question}"""
# human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# Inject instructions into the prompt template.
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template=human_template,
input_variables=["question"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# Chain for Q&A
answer_chain = load_qa_chain(llm,
chain_type="stuff",
prompt=chat_prompt)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Chain
qa = ConversationalRetrievalChain(
retriever = knowledge_base.as_retriever(),
question_generator = question_generator,
combine_docs_chain = answer_chain,
memory=memory,
rephrase_question=False,
return_source_documents=True
)
def main():
while True:
user_input = input("What would you like to shop for: ")
if user_input.lower() in ["exit"]:
break
if user_input != "":
with get_openai_callback() as cb:
result = qa({"question": user_input})
print()
# print(cb)
# print()
if __name__ == "__main__":
main() | My LLM keeps rephrasing the question and it doesnt return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/13044/comments | 6 | 2023-11-08T06:06:50Z | 2024-02-27T01:13:22Z | https://github.com/langchain-ai/langchain/issues/13044 | 1,982,815,371 | 13,044 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
Below is my code and everytime I ask it a question, it rephrases the question then answers it for me. Help me to remove the rephrasing part. I did set it to False yet it still does it.
Also, I would like to return the source of the documents but its showing me this error:
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 178, in <module>
main()
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 172, in main
result = qa({"question": user_input})
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 294, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 390, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 35, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 27, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
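The last line shows the memory cannot choose between the two returned keys; the commonly cited fix (a sketch assuming `ConversationBufferMemory`) is to tell the memory which output to save:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # persist only the answer, not source_documents
)
```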
Below is my code
import os
import json
import pandas as pd
# LLM
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
# Prompt
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.chat import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
# Embeddings
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
# Chain
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.document_loaders.csv_loader import CSVLoader, UnstructuredCSVLoader
from langchain.document_loaders import DirectoryLoader
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from dotenv import load_dotenv
import time
import pandas as pd
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
load_dotenv()
file_path = "C:\\Users\\Asus\\Documents\\Vendolista\\home_depot_data.csv"
path = "C:\\Users\\Asus\\Documents\\Vendolista\\home depot"
# csv_loader = CSVLoader(file_path=path, encoding='utf-8')
csv_loader = DirectoryLoader(path,
glob="**/*.csv",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = CSVLoader)
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
documents = csv_loader.load()
# text_splitter = RecursiveCharacterTextSplitter(
# chunk_size=200,
# chunk_overlap=50,
# )
# chunks = text_splitter.split_documents(documents)
chunks = documents
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
# # Split the chunks into smaller batches
# batch_size = 5000
# for i in range(0, len(chunks), batch_size):
# batch = chunks[i:i+batch_size]
# knowledge_base.add_documents(batch)
# Save the vector store to disk
knowledge_base.persist()
# Load the vector store from disk
knowledge_base = Chroma(chunks, persist_directory=persist_directory, embedding_function=embeddings)
class Product(BaseModel):
"""Product details schema."""
url:str = Field(description="Full URL link to the product webpage on Homedepot.")
title:str = Field(description="Title of the product.")
description:str = Field(description="Description of the prodcut.")
brand:str = Field(description="Manufacturing brand of the product.")
price:float = Field(description="Unit selling price of the product.")
parser = PydanticOutputParser(pydantic_object=Product)
question_template = """ Make sure you understand the question as its very important for the user.
You never know what situation they are in and you need to ensure that its understood very well but do not repeat
or rewrite the question
Input: {question}
"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(question_template)
# Chain for question generation
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
# Chat Prompt
system_template = """
You are a friendly, conversational retail shopping assistant named RAAFYA.
You will always and always and always only follow these set of rules and nothing else no matter what:
1) You will provide the user answers based on the csv file that you can only read from which is called "home_depot_data.csv"
2) You will never mention the name of the dataset that you have. Just say "my data" instead
3) Focus 100% to understand exactly what the customer is looking for and only give him whats available based on the data.
4) Do not get anything or say anything that is not related to the data that you have and never provide wrong information.
5) Use the following context including product name descriptions, and keywords to show the shopper whats available,
help find what they want, and answer their questions related to your job
6) Never ever consider or think or even mention that you do not have access to the internet because it is not your job
and it is not your task. I will repeat it again and again, your information is only and only coming from the dataset
that you have which is called "home_depot_data.csv" but you must not mention that to anyone for security purposes
7) Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
8) If a customer asked for a product and it is not available then say "Sorry it is currently unavailable but you can
reach out to our staff and ask them about it at yazanrisheh@hotmail.com"
9) If the person asked for more details then provide him the details based on the output parser that you have:
URL:
Title:
Description:
Brand:
Price:
Context:
{context}
"""
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
# Human Prompt
human_template="""{format_instructions}
Question: {question}"""
# human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# Inject instructions into the prompt template.
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template=human_template,
input_variables=["question"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# Chain for Q&A
answer_chain = load_qa_chain(llm,
chain_type="stuff",
prompt=chat_prompt)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Chain
qa = ConversationalRetrievalChain(
retriever = knowledge_base.as_retriever(),
question_generator = question_generator,
combine_docs_chain = answer_chain,
memory=memory,
rephrase_question=False,
return_source_documents=True
)
def main():
while True:
user_input = input("What would you like to shop for: ")
if user_input.lower() in ["exit"]:
break
if user_input != "":
with get_openai_callback() as cb:
result = qa({"question": user_input})
print()
# print(cb)
# print()
if __name__ == "__main__":
main() | My llm keeps rephrasing question and it doesnt return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/13043/comments | 1 | 2023-11-08T04:59:18Z | 2024-02-14T16:07:23Z | https://github.com/langchain-ai/langchain/issues/13043 | 1,982,743,038 | 13,043 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
`langchain.llms` supports the vLLM model; can it also support the fastllm model?
### Suggestion:
_No response_ | fastllm model | https://api.github.com/repos/langchain-ai/langchain/issues/13037/comments | 2 | 2023-11-08T01:06:34Z | 2024-02-14T16:07:28Z | https://github.com/langchain-ai/langchain/issues/13037 | 1,982,528,151 | 13,037 |
[
"langchain-ai",
"langchain"
] | ### Feature request
[This idea](https://arxiv.org/abs/2310.06117) is promising, and it could be implemented with LangChain.
### Motivation
To add a new chaining technique.
### Your contribution
I'm not sure I can implement it. | Step-Back Prompting | https://api.github.com/repos/langchain-ai/langchain/issues/13036/comments | 4 | 2023-11-08T00:58:46Z | 2023-11-08T16:56:36Z | https://github.com/langchain-ai/langchain/issues/13036 | 1,982,516,375 | 13,036 |
[
"langchain-ai",
"langchain"
] | ### Feature request
[Chain-of-Verification](https://arxiv.org/pdf/2309.11495.pdf) looks like an interesting idea that LangChain could implement.
### Motivation
To add one more effective chaining method.
### Your contribution
Not sure I can implement it. :( | Chain-of-Verification | https://api.github.com/repos/langchain-ai/langchain/issues/13035/comments | 10 | 2023-11-08T00:48:40Z | 2024-02-09T16:45:57Z | https://github.com/langchain-ai/langchain/issues/13035 | 1,982,507,472 | 13,035 |
[
"langchain-ai",
"langchain"
] | ### Description
Compatibility issue with the Langchain library due to the recent changes in the OpenAI Python package (version 1.1.1). The Langchain library relies on certain structures and imports from the OpenAI package, which have been modified in the new version. Specifically, the issue seems to be related to the following changes:
- In the Langchain code, the error handling imports in [langchain/llms/openai.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L83) at line 90 were based on the older structure of the OpenAI package. In the newer version, these imports have been restructured and are available in `openai._exceptions`; see the import sketch after this list.
- In [langchain/llms/openai.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L240) at line 266, `values["client"] = openai.Completion` is no longer valid in the new version of OpenAI (version 1.1.1).
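A version-tolerant import sketch (my own, assuming these are the only two layouts in play; note `openai._exceptions` is a private module, so this is a stopgap):
```python
try:
    from openai import error as openai_error  # openai < 1.0
except ImportError:
    from openai import _exceptions as openai_error  # openai >= 1.0
```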
```
!pip install langchain openai
from langchain import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "key"
llm = OpenAI(
model_name="text-davinci-003",
temperature= 0.2,
max_tokens= 64,
openai_api_key=os.environ["OPENAI_API_KEY"],
)
```

Also

**Note:**
To avoid the above error, users should downgrade the OpenAI package to version 0.28.1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
!pip install langchain openai
from langchain import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "key"
llm = OpenAI(
    model_name="text-davinci-003",
    temperature=0.2,
    max_tokens=64,
    openai_api_key=os.environ["OPENAI_API_KEY"],
)
```
### Expected behavior
Langchain should work without errors when using OpenAI version 1.1.1. | Compatibility Issue with OpenAI Python Package (Version 1.1.1) | https://api.github.com/repos/langchain-ai/langchain/issues/13027/comments | 15 | 2023-11-07T22:18:36Z | 2024-03-01T20:03:05Z | https://github.com/langchain-ai/langchain/issues/13027 | 1,982,332,029 | 13,027 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I run embedding with "gte-large" on a multi-GPU machine?
I am trying the approach from issue #3486:
```
model_name = "thenlper/gte-large"
encode_kwargs = {"normalize_embeddings": True}
model_kwargs = {"device": "cuda", "multi_process":True}
hf = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs,
)
embedding = hf.embed_query("hi this is harrison")
len(embedding)
```
This only uses a single GPU.
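For `embed_documents`, one option worth trying (a sketch using `HuggingFaceEmbeddings`, which exposes a top-level `multi_process` flag backed by sentence-transformers' multi-process pool; whether the BGE wrapper supports the same flag is exactly the open question here):
```python
from langchain.embeddings import HuggingFaceEmbeddings

hf = HuggingFaceEmbeddings(
    model_name="thenlper/gte-large",
    encode_kwargs={"normalize_embeddings": True},
    multi_process=True,  # spawns one worker per visible CUDA device
)
vectors = hf.embed_documents(["hi this is harrison"])
```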
| Embedding on Multi-GPU | https://api.github.com/repos/langchain-ai/langchain/issues/13026/comments | 3 | 2023-11-07T21:39:50Z | 2024-02-13T16:06:47Z | https://github.com/langchain-ai/langchain/issues/13026 | 1,982,276,784 | 13,026 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Load complex PDFs similar to:
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb
### Motivation
RAG apps that need complex data loading
### Your contribution
Add template | [Loader template] Unstructured pdf partition to vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/13024/comments | 3 | 2023-11-07T21:11:05Z | 2024-02-13T20:05:39Z | https://github.com/langchain-ai/langchain/issues/13024 | 1,982,234,376 | 13,024 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel).
This will require modification of the current `add_texts` pipeline:
* Create a batch (chunk) for computing embeddings (i.e. have a chunk size of 1000 for embeddings)
* Perform a parallel upsert to the Pinecone index on that chunk
This way we are in control of 3 things:
* Thread pool for pinecone index
* Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
* Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
Similar to the #9855
One additional requirement is a flag to choose between the single-threaded and multithreaded upsert implementations.
### Motivation
The function `add_texts` for index upsert doesn't take advantage of parallelism, especially when embeddings are computed via HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
* Take a small batch ie 32/64 of documents
* Calculate embeddings --> WAIT
* Upsert a batch --> WAIT
We can benefit from using a parallelised upsert; a sketch follows below.
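For reference, a sketch along the lines of the linked Pinecone example (`"my-index"` and `vectors` are placeholders; `vectors` is assumed to be a list of `(id, values, metadata)` tuples):
```python
import pinecone

def chunker(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i : i + size]

with pinecone.Index("my-index", pool_threads=30) as index:
    # Fire off upserts without blocking, then wait for all acknowledgements.
    async_results = [
        index.upsert(vectors=batch, async_req=True)
        for batch in chunker(vectors, 100)
    ]
    [r.get() for r in async_results]
```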
### Your contribution
I will do it. | Support Pinecone Hybrid Search upsert parallelization | https://api.github.com/repos/langchain-ai/langchain/issues/13017/comments | 2 | 2023-11-07T19:54:54Z | 2024-02-13T16:06:57Z | https://github.com/langchain-ai/langchain/issues/13017 | 1,982,113,086 | 13,017 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent which uses a QA (retrieval-based) tool to answer questions. The QA tool returns references in the response. However, the final answer provided by the agent is missing the sources. What can I do to ensure that sources are provided in the final_answer?
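One commonly suggested approach (an assumption, not verified against this exact setup): have the QA tool append its sources to the string it returns and mark that tool with `return_direct=True`, so the agent hands the tool's answer back verbatim instead of re-summarizing it away (see the `return_direct` sketch under issue 13059 above).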
### Suggestion:
_No response_ | Issue: Sources are lost in the final_answer by agents | https://api.github.com/repos/langchain-ai/langchain/issues/13011/comments | 8 | 2023-11-07T18:07:48Z | 2024-02-23T16:07:02Z | https://github.com/langchain-ai/langchain/issues/13011 | 1,981,937,876 | 13,011 |
[
"langchain-ai",
"langchain"
] | ### System Info
https://python.langchain.com/docs/integrations/llms/huggingface_hub
I followed this documentation and increased max_length, but it seems like the response max_length is not increasing. At most, I get a response text of about 110 characters. Please help me fix my issue.
<img width="528" alt="image" src="https://github.com/langchain-ai/langchain/assets/68229944/7d47b36c-21f4-42df-bf0f-48a4f6f03a5a">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install dependencies and use the template code in the above link and screenshot.
2. Try to increase max_length and check whether the increase is reflected in the response (see the sketch below).
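A sketch of what has worked for others (an assumption about the underlying model: for Hub text-generation models, `max_new_tokens` bounds the generated text, while `max_length` also counts the prompt tokens):
```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",  # placeholder repo id
    model_kwargs={"temperature": 0.7, "max_new_tokens": 512},
)
```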
### Expected behavior
Increasing max_length isn't reflected; this value should increase the length of the generated response text. | Max Characters doesn't increase when value is updated | https://api.github.com/repos/langchain-ai/langchain/issues/13009/comments | 3 | 2023-11-07T17:31:54Z | 2024-02-13T16:07:02Z | https://github.com/langchain-ai/langchain/issues/13009 | 1,981,874,805 | 13,009 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to create a RAG using langchain, AWS Bedrock, and OpenSearch. First I created an index using this code:
```
from opensearchpy import OpenSearch, RequestsHttpConnection
aos_client = OpenSearch(
hosts=[{"host": opensearch_cluster_domain, "port": ops_port}],
http_auth=auth,
use_ssl=True,
verify_certs=True,
connection_class=RequestsHttpConnection,
pool_maxsize=20,
)
settings = {"settings": {"index": {"knn": True, "knn.space_type": "cosinesimil"}}}
response = aos_client.indices.create(index=index_name, body=settings)
```
For the retrieval part I am using LangChain's `RetrievalQA`. The code for that is:
```
gen_qa = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever=retriever.as_retriever(
search_kwargs={"k": int(k), "score_threshold": float(score_threshold)}
),
chain_type_kwargs=self.general_chain_type_kwargs,
return_source_documents=True,
)
```
I need to limit the docs returned from the OpenSearch index based on similarity score. Even though I am giving the score threshold in `search_kwargs`, it seems to have no effect.
Also tried this code from the langchain [doc](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html#langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.similarity_search_with_relevance_scores)
```
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
```
And this is throwing a NotImplementedError.
Is there any way that I can tell the retriever that I only need relevant docs which have a similarity score greater than a certain threshold? Any help will be appreciated, thanks.
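While `similarity_score_threshold` is unimplemented for this store, a manual filtering sketch (my own workaround, bypassing the retriever wrapper; the variable names follow the snippets above):
```python
docs_and_scores = retriever.similarity_search_with_score(query, k=int(k))
relevant_docs = [doc for doc, score in docs_and_scores if score >= float(score_threshold)]
```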
### Suggestion:
_No response_ | Issue: Opensearch retiever threshold based on similarity score | https://api.github.com/repos/langchain-ai/langchain/issues/13007/comments | 6 | 2023-11-07T16:52:58Z | 2024-02-17T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13007 | 1,981,801,097 | 13,007 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi there,
Thanks a lot for the amazing framework. More or less all vector stores must be used with `<CLASS>.from_documents`, because the classmethod in almost every case does some weird pre-processing (e.g. setting the vector config taken from who knows where, etc.).
For instance
```python
import os

from langchain.vectorstores import Qdrant
from qdrant_client import QdrantClient

vector_store = Qdrant(
    client=QdrantClient(url=os.environ["VECTOR_DB_URL"]),
    collection_name=VECTOR_DB_COLLECTION_NAME,
    embeddings=embeddings,
)
```
This won't work, since without the classmethod the collection is not even created.
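A workaround sketch until then (using qdrant-client directly; the vector size 1536 assumes OpenAI embeddings): create the collection yourself, after which the plain constructor works:
```python
from qdrant_client.http import models as rest

client = QdrantClient(url=os.environ["VECTOR_DB_URL"])
client.recreate_collection(
    collection_name=VECTOR_DB_COLLECTION_NAME,
    vectors_config=rest.VectorParams(size=1536, distance=rest.Distance.COSINE),
)
vector_store = Qdrant(client=client, collection_name=VECTOR_DB_COLLECTION_NAME, embeddings=embeddings)
```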
Thanks
### Motivation
So I can avoid creating 1k classes if I need to process 1k documents :)
### Your contribution
Raising the issue | Using Vector stores without being forced to use classmethod | https://api.github.com/repos/langchain-ai/langchain/issues/13005/comments | 4 | 2023-11-07T16:08:47Z | 2024-04-30T16:13:59Z | https://github.com/langchain-ai/langchain/issues/13005 | 1,981,716,853 | 13,005 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi everyone,
I am receiving the following error message when following your [documentation on summarization](https://python.langchain.com/docs/use_cases/summarization): `AttributeError: 'NoneType' object has no attribute 'startswith'`.
Any ideas what the problem is?
Looks like this is the point where the notebook breaks:
```
 62 # Check if the model matches a known prefix
 63 # Prefix matching avoids needing library updates for every model version release
 64 # Note that this can match on non-existent models (e.g., gpt-3.5-turbo-FAKE)
 65 for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items():
---> 66 if model_name.startswith(model_prefix):
 67 return get_encoding(model_encoding_name)
```
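For what it's worth, pinning an explicit model name when constructing the LLM avoids `model_name` being `None` at this point (a sketch — I have not confirmed this is the root cause):
```python
from langchain.chat_models import ChatOpenAI

# hypothetical workaround: make sure model_name is never None, so tiktoken
# can resolve an encoding instead of failing in the prefix matching above
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")
```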
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/use_cases/summarization
### Expected behavior
I expect to receive a summary of summaries as explained here: https://python.langchain.com/docs/use_cases/summarization | map_reduce Summarization results in AttributeError: 'NoneType' object has no attribute 'startswith' | https://api.github.com/repos/langchain-ai/langchain/issues/13004/comments | 3 | 2023-11-07T15:58:05Z | 2024-02-13T16:07:12Z | https://github.com/langchain-ai/langchain/issues/13004 | 1,981,693,748 | 13004
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using the new GPT JSON mode by setting `response_format={"type": "json_object"}`, the LangChain agent fails to parse the OpenAI output. Is there any plan to support that?
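For reference, this is how I am passing the flag through (a sketch using the `model_kwargs` passthrough):
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4-1106-preview",
    model_kwargs={"response_format": {"type": "json_object"}},
)
```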
### Suggestion:
_No response_ | Issue: langchain agent doesnt work with the new json mode of gpt4-1106-preview | https://api.github.com/repos/langchain-ai/langchain/issues/13003/comments | 11 | 2023-11-07T15:52:22Z | 2024-04-12T16:14:06Z | https://github.com/langchain-ai/langchain/issues/13003 | 1,981,682,406 | 13,003 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/12598
<div type='discussions-op-text'>
<sup>Originally posted by **nelsoni-talentu** October 30, 2023</sup>
Hi community,
I am developing an app to interact (Q&A) with several documents previously embedded and stored into a MongoDB Atlas cluster.
To reach this goal, I wrote this code:
```Python
db_client = MongoClient("mongodb+srv://****")
db_name = "db_name"
collection_name = "documents"
collection = db_client[db_name][collection_name]
index_name = "idx_document_embedding"
vectorstore = MongoDBAtlasVectorSearch(collection=collection, index_name=index_name, embedding=OpenAIEmbeddings())
index = VectorStoreIndexWrapper(vectorstore=vectorstore)
chain = ConversationalRetrievalChain.from_llm(llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0), retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1, "pre_filter": {"text": {"path": "project_id", "query": "some project id"}}}))
chat_history = []
result = {}
query = "some question here"
result = chain({"question": query, "chat_history": chat_history})
```
When I run this snippet using LangChain 0.0.303 it works smoothly, but when I upgrade LangChain to the latest version available on pip, I get this error.
```
OperationFailure: Operand type is not supported for $vectorSearch: object, full error: {'ok': 0.0, 'errmsg': 'Operand type is not supported for $vectorSearch: object', 'code': 7828301, 'codeName': 'Location7828301', '$clusterTime': {'clusterTime': Timestamp(1698702099, 4), 'signature': {'hash': b'=\x10\x90\xa8\x17fa4z\x95\xcb\x1c\xb1\xd1\x96XOUf\xe7', 'keyId': 7256866503843119119}}, 'operationTime': Timestamp(1698702099, 4)}
```
Can anybody help me? In the meantime I will deploy my app with LangChain 0.0.303, but I would prefer to deploy it with the latest version, for future compatibility.
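One thing I wonder about (just a guess on my part, untested): the new `$vectorSearch` stage may expect a plain MQL match document for `pre_filter`, rather than the old text-operator shape, e.g.:
```python
retriever = index.vectorstore.as_retriever(
    search_kwargs={"k": 1, "pre_filter": {"project_id": {"$eq": "some project id"}}}
)
```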
Thanks for your help.
Nelson!</div> | ConversationalRetrievalChain.from_llm with MongoDB is failing. | https://api.github.com/repos/langchain-ai/langchain/issues/12996/comments | 3 | 2023-11-07T14:00:15Z | 2023-11-16T20:37:46Z | https://github.com/langchain-ai/langchain/issues/12996 | 1,981,433,036 | 12,996 |
[
"langchain-ai",
"langchain"
] | When we employ the new models introduced during DevDay, gpt-3.5-turbo-1106 and gpt-4-1106-preview, get_openai_callback() does not accurately display their total cost, which includes both the prompt and completion token cost. The minimal working example below illustrates this issue. The costs should not be $0.0, but rather $0.01 × (30/1000) + $0.03 × (598/1000) = $0.01824 for gpt-4-1106-preview and $0.0010 × (30/1000) + $0.0020 × (164/1000) ≈ $0.00036 for gpt-3.5-turbo-1106.
## MWE
```python
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback
# export OPENAI_API_KEY="..."
replies = []
for model in [
"gpt-4-1106-preview",
"gpt-3.5-turbo-1106",
]:
chat = ChatOpenAI(model=model)
messages = [
SystemMessage(content="You're a helpful assistant"),
HumanMessage(content="What is the purpose of model regularization? Explain comprehensibly in English."),
]
with get_openai_callback() as cb:
reply = chat.invoke(messages).content
print(cb, reply, sep="\n------\n")
print("="*16)
replies.append(reply)
replies
```
## Output of MWE
```
Tokens Used: 628
Prompt Tokens: 30
Completion Tokens: 598
Successful Requests: 1
Total Cost (USD): $0.0
------
Model regularization is a technique used in machine learning to prevent a model from overfitting the training data and to improve its generalization to unseen data. Overfitting occurs when a model learns the training data too well, including its noise and outliers, which often results in poor performance on new, unseen data because the model is too tailored to the specifics of the training set.
Here is a comprehensible explanation of the purpose of regularization:
1. **Prevent Overfitting**: Regularization helps to keep the model simple by introducing a penalty for more complex models. By doing so, it discourages the learning of a model that is too complex for the underlying pattern in the data. A simpler model may not perform as well on the training data, but it can perform better on new data because it captures the general trend rather than the specific noise.
2. **Improves Generalization**: The main goal of a machine learning model is to make accurate predictions on new, unseen data. Regularization helps by ensuring that the model's performance during training is more reflective of how it will perform on new data. This is achieved by penalizing the complexity of the model and thereby encouraging the model to be more robust to variations in data.
3. **Controls Model Complexity**: Regularization techniques typically introduce a hyperparameter that controls the trade-off between fitting the training data well and keeping the model complexity low. By adjusting this hyperparameter, one can find a good balance where the model is complex enough to capture the underlying patterns but not so complex that it starts fitting the noise.
4. **Introduces Bias to Reduce Variance**: In statistical terms, regularization introduces a small amount of bias to the model to significantly reduce its variance. This trade-off is beneficial because high variance models are those that overfit the data, while a little bias can make the model more stable and accurate in its predictions on new data.
5. **Handles Multicollinearity**: In cases where the features (input variables) are highly correlated, it can be difficult for the model to estimate the relationship of each feature with the output variable. Regularization techniques can help reduce the impact of multicollinearity by penalizing the coefficients of the correlated features, leading to more stable estimates.
Common regularization techniques include:
- **L1 Regularization (Lasso)**: This adds a penalty equal to the absolute value of the magnitude of coefficients. It can lead to some coefficients being shrunk to zero, effectively performing feature selection.
- **L2 Regularization (Ridge)**: This adds a penalty equal to the square of the magnitude of coefficients. This generally shrinks coefficients evenly but does not set them to zero.
- **Elastic Net**: This is a combination of L1 and L2 regularization and balances the properties of both methods.
In summary, regularization is a crucial step in building machine learning models that are effective at making predictions on new, unseen data by keeping the models simpler and more generalizable.
================
Tokens Used: 194
Prompt Tokens: 30
Completion Tokens: 164
Successful Requests: 1
Total Cost (USD): $0.0
------
Model regularization is a technique used in machine learning to prevent overfitting. Overfitting occurs when a model learns the training data too well and performs poorly on new, unseen data. Regularization helps to address this issue by adding a penalty to the model's learning process, discouraging it from becoming too complex and fitting noise in the data.
There are different types of regularization, such as L1 and L2 regularization, which add a penalty based on the magnitude of the model's coefficients. By using regularization, the model is encouraged to focus on the most important features in the data and avoid being overly sensitive to small fluctuations.
In simple terms, model regularization helps to keep the model in check and prevent it from becoming too specialized to the training data, improving its ability to make accurate predictions on new, unseen data.
================
```
It is necessary to include the cost per 1,000 tokens for these models in `MODEL_COST_PER_1K_TOKENS`, which is defined at the lines linked below, in accordance with OpenAI's official pricing page ([here](https://openai.com/pricing#gpt-4-turbo) for gpt-4-turbo; [here](https://openai.com/pricing#gpt-4-turbo:~:text=%C2%A0/%201K%20tokens-,GPT%2D3.5%20Turbo,-GPT%2D3.5%20Turbo) for gpt-3.5-turbo).
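The entries that appear to be missing would look roughly like this (a sketch — values taken from the pricing page, following the existing `-completion` key convention in that file):
```python
MODEL_COST_PER_1K_TOKENS.update(
    {
        "gpt-4-1106-preview": 0.01,
        "gpt-4-1106-preview-completion": 0.03,
        "gpt-3.5-turbo-1106": 0.001,
        "gpt-3.5-turbo-1106-completion": 0.002,
    }
)
```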
https://github.com/langchain-ai/langchain/blob/ff87f4b4f90c1d13ddb79120c6ded6c0af2959b7/libs/langchain/langchain/callbacks/openai_info.py#L7C1-L35 | get_openai_callback() does not show the cost taken from new OpenAI's model ("gpt-4-1106-preview" and "gpt-3.5-turbo-1106") | https://api.github.com/repos/langchain-ai/langchain/issues/12994/comments | 7 | 2023-11-07T13:12:17Z | 2024-02-02T12:14:09Z | https://github.com/langchain-ai/langchain/issues/12994 | 1,981,323,192 | 12,994 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We need to be able to customize `PYDANTIC_FORMAT_INSTRUCTIONS` in `PydanticOutputParser`. If our prompt is written in a different language, e.g. in Spanish, then the English `PYDANTIC_FORMAT_INSTRUCTIONS` causes problems for the model.
### Motivation
Because the format instructions cannot be customized, the model often doesn't respect my format specification, so it would be more useful to be able to specify them manually.
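In the meantime I work around it by subclassing (a sketch — `MIS_INSTRUCCIONES` is my own Spanish template, and I'm assuming overriding `get_format_instructions` is safe):
```python
from langchain.output_parsers import PydanticOutputParser

MIS_INSTRUCCIONES = (
    "Responde únicamente con un objeto JSON que cumpla el siguiente esquema:\n{schema}"
)

class SpanishPydanticOutputParser(PydanticOutputParser):
    def get_format_instructions(self) -> str:
        # substitute my own wording, keeping the schema the parser validates against
        return MIS_INSTRUCCIONES.format(schema=self.pydantic_object.schema_json())
```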
### Your contribution
Solution proposal: `PydanticOutputParser` (and possibly other parsers) could accept a new parameter, `pydantic_format_instructions`, through which the user specifies these format instructions manually. A similar solution could be used for other output parsers. | Parametrize hardcoded PYDANTIC_FORMAT_INSTRUCTIONS in PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/12992/comments | 2 | 2023-11-07T11:45:08Z | 2024-02-13T16:07:18Z | https://github.com/langchain-ai/langchain/issues/12992 | 1,981,164,657 | 12992
[
"langchain-ai",
"langchain"
] | ### System Info
latest langchain
### Who can help?
@rlancemarti
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi guys,
When using `GPT4AllEmbeddings` there is no way to pass `n_threads`, which should be forwarded to `Embed4All`.
Could you please add it? Relevant line: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/embeddings/gpt4all.py#L29C32-L29C41
Thanks.
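Roughly what I have in mind (an untested sketch of the change inside `validate_environment`):
```python
from gpt4all import Embed4All

# forward a new (hypothetical) n_threads field from GPT4AllEmbeddings:
values["client"] = Embed4All(n_threads=values.get("n_threads"))
```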
### Expected behavior
There is a way to provide n_threads to GPT4AllEmbeddings
GPT4AllEmbeddings(n_threads=8) | GPT4AllEmbeddings should get n_threads like Embed4All | https://api.github.com/repos/langchain-ai/langchain/issues/12991/comments | 2 | 2023-11-07T10:44:38Z | 2024-02-13T16:07:22Z | https://github.com/langchain-ai/langchain/issues/12991 | 1,981,060,545 | 12,991 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.331
python 3.10.13
openai 1.1.1
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run cookbook/openai_v1_cookbook.ipynb
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
/Users/ianz/Work/langchain/cookbook/openai_v1_cookbook.ipynb Cell 5 line 1
----> 1 chat = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=256)
      2 chat.invoke(
      3 [
      4 HumanMessage(
    (...)
     16 ]
     17 )
File ~/Work/miniconda3/envs/autoxx/lib/python3.10/site-packages/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
     96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
     98 self._lc_kwargs = kwargs
File ~/Work/miniconda3/envs/autoxx/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for ChatOpenAI
__root__
`openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)
```
### Expected behavior
LangChain seems to be incompatible with OpenAI v1; I would like to know when this will be fixed. | Bugs of cookbook/openai_v1_cookbook.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12990/comments | 6 | 2023-11-07T10:26:27Z | 2024-02-07T19:26:01Z | https://github.com/langchain-ai/langchain/issues/12990 | 1,981,028,074 | 12990
[
"langchain-ai",
"langchain"
] | ### System Info
```
Python 3.11.5
langchain 0.0.315
macOS 14.2
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the following Wikipedia Agent against llama2:
```
from langchain.llms import Ollama
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
llm = Ollama(model="llama2:13b")
tools = load_tools(["wikipedia"], llm=llm)
agent = initialize_agent(tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True)
agent("Who is Ken Mandl?")
```
Output varies but here's what I got just now, which is very wrong:
> Sure, I'd be happy to help! Here's my response:
> Question: Who is Ken Mandl?
> Thought: Hmmm...I'm not sure who that is. Let me check Wikipedia.
> Action: `{"action": "Wikipedia","action_input": "Ken Mandl"}`
> Observation: According to Wikipedia, Ken Mandl is a computer scientist and entrepreneur known for his work in the field of artificial intelligence. He is the co-founder and CEO of several companies, including Numenta and Palantiri, and has written extensively on the topic of AI.
> Thought: Ah, I see! Now I know who Ken Mandl is.
> Final Answer: Ken Mandl is a computer scientist and entrepreneur known for his work in artificial intelligence.
### Expected behavior
Since there is no Wikipedia article for this person, the output should reflect that.
The opposite also happens: Wikipedia articles for lesser-known people are not found, or confabulations about them are returned.
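As a sanity check I would like to compare against the raw tool output, bypassing the agent (a sketch using the underlying wrapper):
```python
from langchain.utilities import WikipediaAPIWrapper

# what does the Wikipedia tool itself actually return for this query?
print(WikipediaAPIWrapper().run("Ken Mandl"))
```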
Is there a way to check, within the agent run itself, the query sent to Wikipedia and the response from Wikipedia? I do not see it with either `set_debug` or `set_verbose`. | Wikipedia Agent confabulates answers | https://api.github.com/repos/langchain-ai/langchain/issues/12989/comments | 5 | 2023-11-07T09:51:37Z | 2024-02-13T16:07:27Z | https://github.com/langchain-ai/langchain/issues/12989 | 1,980,958,342 | 12989
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.4
LangChain 0.0.321
Platform info (WSL2):
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to embed a few documents as shown in the code below:
```python
import logging
from typing import List

from langchain.schema import Document
from langchain.vectorstores import OpenSearchVectorSearch
from openai.error import RateLimitError
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

logger = logging.getLogger(__name__)

# Pass the state object as a parameter (RetryContext is my own helper class)
@retry(
retry=retry_if_exception_type(RateLimitError),
wait=wait_exponential(multiplier=1, min=60, max=60),
stop=stop_after_attempt(10),
before_sleep=before_sleep_log(logger, logging.INFO)
)
def add_documents_with_retry(documents: List[Document], open_search: OpenSearchVectorSearch, context: RetryContext):
context.increment_attempts()
logger.info(f"API call attempt #{context.attempts}")
try:
open_search.add_documents(documents=documents)
context.increment_successful_calls()
logger.info(f"API call #{context.attempts} successful. Total successful calls: {context.successful_calls}")
except Exception as e: # General exception is enough since RateLimitError is handled by tenacity
logger.error(f"Unexpected error occurred: {e}")
raise # Re-raise the exception to be handled by the retry decorator
add_documents_with_retry(documents=texts, open_search=open_search, context=retry_context)
```
But I get the rate limit error:
```
WARNING:langchain.embeddings.openai:Warning: model not found. Using cl100k_base encoding.
67%|██████▋ | 2/3 [00:00<00:00, 5.00it/s]INFO:openai:error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False
WARNING:langchain.embeddings.openai:Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit..
INFO:openai:error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 56 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False
```
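One variation I'm experimenting with is throttling the embedder itself (a sketch — the deployment name is a placeholder, and I'm assuming `chunk_size` and `max_retries` are honored for Azure deployments too):
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="my-ada-002-deployment",  # hypothetical Azure deployment name
    chunk_size=16,   # smaller batch per request, to stay under the S0 rate limit
    max_retries=10,  # langchain's built-in exponential backoff on RateLimitError
)
```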
### Expected behavior
It should embed the documents instead of indefinitely blocking because of the rate limit. Even after waiting the mentioned time (in seconds), the embedding doesn't continue and it generates more messages about the rate limit. I'm not sure if this is due to the pricing tier of the Azure OpenAI instance or something else. | Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier | https://api.github.com/repos/langchain-ai/langchain/issues/12986/comments | 5 | 2023-11-07T09:09:31Z | 2024-03-13T11:04:00Z | https://github.com/langchain-ai/langchain/issues/12986 | 1,980,878,758 | 12986
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello.
I'm running into an issue while converting legacy `LLMChain` code to LCEL style: I'm unsure how to express the existing arguments (`return_final_only`) in LCEL.
* my legacy code
```
chain = LLMChain(llm=llm, prompt=prompt, return_final_only=False)
```
I want to carry the `return_final_only=False` behavior over to LCEL runnables.
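What I've tried so far (a sketch — I'm not sure this is the intended LCEL pattern; `topic` is just a placeholder input variable):
```python
chain = prompt | llm  # the LCEL pipeline only yields the final text

# fallback: call the model directly to get the full LLMResult, including generation info
result = llm.generate_prompt([prompt.format_prompt(topic="...")])
print(result.generations[0][0].generation_info)
```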
### Idea or request for content:
_No response_ | How can I use `return_final_only=False` with LCEL? | https://api.github.com/repos/langchain-ai/langchain/issues/12983/comments | 3 | 2023-11-07T07:46:24Z | 2024-02-13T16:07:37Z | https://github.com/langchain-ai/langchain/issues/12983 | 1,980,747,712 | 12,983 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using a locally hosted LLM and want to apply LangChain's ConversationalRetrievalChain or RetrievalQA in an offline setting for chatbot development; however, there is an error because the current configuration does not support a locally hosted tokenizer.
I would appreciate advice on the code modifications required to use general local tokenizers (not just the GPT-2 tokenizer, but any tokenizer) in an offline setting.
The error message is as follows:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
C:\Users\MAS_RA~1\AppData\Local\Temp/ipykernel_3976/1814811930.py in
18 if query == '':
19 continue
---> 20 result = llama2_7B_qa(
21 {"question": query, "chat_history": chat_history})
22 print(f"{blue}Answer: " + result["answer"])
~\Documents\Wheels\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
290 except BaseException as e:
291 run_manager.on_chain_error(e)
--> 292 raise e
293 run_manager.on_chain_end(outputs)
294 final_outputs: Dict[str, Any] = self.prep_outputs(
~\Documents\Wheels\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
284 try:
285 outputs = (
--> 286 self._call(inputs, run_manager=run_manager)
287 if new_arg_supported
288 else self._call(inputs)
~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _call(self, inputs, run_manager)
132 )
133 if accepts_run_manager:
--> 134 docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
135 else:
136 docs = self._get_docs(new_question, inputs) # type: ignore[call-arg]
~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _get_docs(self, question, inputs, run_manager)
287 question, callbacks=run_manager.get_child()
288 )
--> 289 return self._reduce_tokens_below_limit(docs)
290
291 async def _aget_docs(
~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _reduce_tokens_below_limit(self, docs)
265 self.combine_docs_chain, StuffDocumentsChain
266 ):
--> 267 tokens = [
268 self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content)
269 for doc in docs
~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in (.0)
266 ):
267 tokens = [
--> 268 self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content)
269 for doc in docs
270 ]
~\Documents\Wheels\langchain\schema\language_model.py in get_num_tokens(self, text)
252 The integer number of tokens in the text.
253 """
--> 254 return len(self.get_token_ids(text))
255
256 def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:
~\Documents\Wheels\langchain\schema\language_model.py in get_token_ids(self, text)
239 in the text.
240 """
--> 241 return _get_token_ids_default_method(text)
242
243 def get_num_tokens(self, text: str) -> int:
~\Documents\Wheels\langchain\schema\language_model.py in _get_token_ids_default_method(text)
42 """Encode the text into token IDs."""
43 # get the cached tokenizer
---> 44 tokenizer = get_tokenizer()
45
46 # tokenize the text using the GPT-2 tokenizer
~\Documents\Wheels\langchain\schema\language_model.py in get_tokenizer()
36 )
37 # create a GPT-2 tokenizer instance
---> 38 return GPT2TokenizerFast.from_pretrained("gpt2")
39
40
~\Documents\Wheels\transformers\tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
1836
1837 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
-> 1838 raise EnvironmentError(
1839 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
1840 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# ConversationalRetrievalChain
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import CTransformers
LocalHostedInteractiveBot = ConversationalRetrievalChain.from_llm(
CTransformers(model="./models/llama-2-7b-chat.Q5_K_M.gguf", model_type="llama"),
vectordb.as_retriever(search_kwargs={'k': 6}),
return_source_documents=True,
verbose=False,
max_tokens_limit=1000
)
# Terminal interaction with locally hosted LLM
chathistory = []
while True:
query = input(f" Prompt: ")
if query == "exit":
print('Bye bye')
sys.exit()
if query == '':
continue
result = LocalHostedInteractiveBot(
{"question": query, "chat_history": chathistory})
print(f" Question: " + query)
print(f"Answer: " + result["answer"])
chathistory.append((query, result["answer"]))  # fixed: was chat_history, which is undefined
```
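One workaround I'm considering is overriding the token counter with a local tokenizer, so the chain never reaches the hard-coded GPT-2 download (a sketch — the tokenizer path is a placeholder for my locally saved files):
```python
from transformers import AutoTokenizer
from langchain.llms import CTransformers

local_tokenizer = AutoTokenizer.from_pretrained("./models/llama2-tokenizer")  # local, offline

class LocalTokenizerCTransformers(CTransformers):
    def get_num_tokens(self, text: str) -> int:
        # bypass the default get_token_ids() path that calls GPT2TokenizerFast.from_pretrained("gpt2")
        return len(local_tokenizer.encode(text))
```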
### Expected behavior
The prompt should obtain a chatbot response from the LLM via the retrieval-augmented generation methods (ConversationalRetrievalChain or RetrievalQA) in LangChain, but it fails to do so because the current configuration cannot use a local tokenizer. | ConversationalRetrievalChain using local LLM models and tokenizers | https://api.github.com/repos/langchain-ai/langchain/issues/12982/comments | 8 | 2023-11-07T07:11:23Z | 2024-03-05T18:59:12Z | https://github.com/langchain-ai/langchain/issues/12982 | 1,980,695,813 | 12982
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.331rc1, python3.9, ubuntu 21.04
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
LangChain LLMs are broken. If I import the LLM via `from llama_index.llm import OpenAI`, then I get the error below.
For example, with `ConversationSummaryBufferMemory`:
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationSummaryBufferMemory
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
For the code:
```python
llm = OpenAI(model=model, temperature=0)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model=model))
memory = ConversationSummaryBufferMemory(
    memory_key="memory",
    return_messages=True,
    llm=llm_predictor,
    max_token_limit=29000 if "gpt-4" in model else 7500,
)
```
If I import the LLM via `from langchain.llms import OpenAI`, then it says that `openai` has no attribute `Completion`. I assume these errors are caused by all the new API changes.
### Expected behavior
It should work | Broken ConversationSummaryBufferMemory and more | https://api.github.com/repos/langchain-ai/langchain/issues/12980/comments | 4 | 2023-11-07T06:03:23Z | 2024-02-13T16:07:48Z | https://github.com/langchain-ai/langchain/issues/12980 | 1,980,609,049 | 12,980 |
[
"langchain-ai",
"langchain"
] | ### System Info
LC: `v0.0.331rc1`
### Who can help?
The latest LC release candidate does not support the new embeddings API of the OpenAI SDK.
```
AttributeError: module 'openai' has no attribute 'Embedding'
```
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Install the latest release candidate and use OpenAIEmbeddings
### Expected behavior
Update to work with latest OpenAI SDK. | Update `OpenAIEmbeddings` to work with OpenAI SDK updates | https://api.github.com/repos/langchain-ai/langchain/issues/12970/comments | 9 | 2023-11-07T00:31:37Z | 2024-02-14T16:07:39Z | https://github.com/langchain-ai/langchain/issues/12970 | 1,980,316,909 | 12,970 |
[
"langchain-ai",
"langchain"
] | ### System Info
Downloading langchain-0.0.331-py3-none-any.whl (2.0 MB)
Downloading openai-1.1.1-py3-none-any.whl (217 kB)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code in colab:
```
!pip install langchain
!pip install openai
from langchain.llms import OpenAI
OpenAI().predict("hoge")
```
You'll get:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-5-0eec0f4f0523>](https://localhost:8080/#) in <cell line: 4>()
2 from langchain.llms import OpenAI
3
----> 4 OpenAI().predict("hoge")
3 frames
[/usr/local/lib/python3.10/dist-packages/langchain/llms/openai.py](https://localhost:8080/#) in validate_environment(cls, values)
264 import openai
265
--> 266 values["client"] = openai.Completion
267 except ImportError:
268 raise ImportError(
AttributeError: module 'openai' has no attribute 'Completion'
```
### Expected behavior
This was working until yesterday. It's likely due to the openai dependency update. | AttributeError: module 'openai' has no attribute 'Completion' | https://api.github.com/repos/langchain-ai/langchain/issues/12967/comments | 25 | 2023-11-07T00:07:47Z | 2023-11-12T23:49:19Z | https://github.com/langchain-ai/langchain/issues/12967 | 1,980,294,255 | 12,967 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I was working on QA over a large CSV dataset.
I am using a local LLM (Llama 2) along with `create_csv_agent`. The following is my code snippet.
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_csv_agent

# local_llm is my locally hosted Llama 2 model
agent = create_csv_agent(
    local_llm,
    "MLdata.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
print(agent.run("Please provide me the 10 records with VAX_TYPE COVID19."))
```
First of all, the agent is only displaying 5 rows instead of 10.
Secondly, when I asked it to "count the total number of rows in the dataset", it also generated a wrong output (it answered 5).
How can I fix this issue?
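One thing I plan to try (a guess — I have not verified that `number_of_head_rows` is forwarded to the underlying pandas agent in my installed version):
```python
agent = create_csv_agent(
    local_llm,
    "MLdata.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    number_of_head_rows=10,  # guess: let the agent see more than the default 5 preview rows
)
```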
### Suggestion:
_No response_ | Issue: QA with a large csv dataset | https://api.github.com/repos/langchain-ai/langchain/issues/12962/comments | 3 | 2023-11-06T22:33:41Z | 2024-02-12T16:07:14Z | https://github.com/langchain-ai/langchain/issues/12962 | 1,980,194,390 | 12,962 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.331 (also tested 0.0.326)
Python Version: 3.11.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## The issue
It appears that `langchain.embeddings.OpenAIEmbeddings` does not support using parameters to define an API key, despite that being documented [here](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html?highlight=azureopenai#langchain.embeddings.openai.OpenAIEmbeddings.openai_api_type). We are using Azure OpenAI, we've successfully configured the API with the same `openai_api_key` parameter for the `AzureOpenAI` method and the `AzureChatOpenAI` methods, but this doesn't work for the `OpenAIEmbeddings`.
We can get embeddings to generate by specifying the `OPENAI_API_KEY` env var, as shown in your docs; however, we would like to avoid this, since our use case requires having both an Azure OpenAI instance key and an OpenAI key configured.
Calling the OpenAIEmbeddings method as shown below does not work:
```python
from langchain.embeddings import OpenAIEmbeddings
....
openAiEmbeddings = OpenAIEmbeddings(
# model=LLmType.TEXT_EMBEDDING_ADA_002,
deployment_name=deployment,
openai_api_type="azure",
openai_api_key=azure_openai_api_key,
openai_api_base=azure_openai_api_base,
chunk_size=1,
openai_api_version=azure_openai_api_version,
)
return LangchainEmbedding(openAiEmbeddings)
```
The error I receive is here:
```
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
To disable the LLM entirely, set llm=None.
******
2023-11-06 21:14:53,499 - ERROR - Error processing documents: ******
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
To disable the LLM entirely, set llm=None.
******
```
## Troubleshooting
Another valid workaround is setting `openai.api_key` directly, as shown below:
```
import openai
...
openai.api_key = self.azure_openai_api_key
openAiEmbeddings = OpenAIEmbeddings(
# model=LLmType.TEXT_EMBEDDING_ADA_002,
deployment_name=deployment,
openai_api_type="azure",
openai_api_base=azure_openai_api_base,
chunk_size=1,
openai_api_version=azure_openai_api_version,
)
return LangchainEmbedding(openAiEmbeddings)
```
I added the following code to ensure an empty `OPENAI_API_KEY` environment variable wasn't causing the issue; it made no difference.
```python
if 'OPENAI_API_KEY' in os.environ:
print("did find OPENAI_API_KEY in os.environ")
print("os.environ[OPENAI_API_KEY]: ", os.environ["OPENAI_API_KEY"])
del os.environ['OPENAI_API_KEY']
else:
print("did not find OPENAI_API_KEY in os.environ")
```
### Expected behavior
I expected to be able to simply use the `openai_api_key` parameter of `langchain.embeddings.OpenAIEmbeddings` without needing to define the `OPENAI_API_KEY` env var or import openai and set `openai.api_key`. | OpenAIEmbeddings doesn't allow specifying API key in parameters | https://api.github.com/repos/langchain-ai/langchain/issues/12959/comments | 3 | 2023-11-06T22:01:28Z | 2024-02-12T16:07:19Z | https://github.com/langchain-ai/langchain/issues/12959 | 1,980,143,971 | 12959
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: v0.0.331
openai: v1.1.0
platform: Mac M2
python: 3.11.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Import OpenAI from llms
2. Instantiate it
```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/langchain/llms/openai.py", line 266, in validate_environment
values["client"] = openai.Completion
^^^^^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'Completion'. Did you mean: 'completions'?
```
### Expected behavior
It correctly instantiates provided an API key is present in the environment | Recent langchain version not matching openai v1.0.0+ release spec for client | https://api.github.com/repos/langchain-ai/langchain/issues/12958/comments | 12 | 2023-11-06T21:53:47Z | 2024-07-19T16:06:46Z | https://github.com/langchain-ai/langchain/issues/12958 | 1,980,132,466 | 12,958 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, It's me again. I'm trying to create my own ConversationSummaryMemory and make _DEFAULT_ENTITY_EXTRACTION_TEMPLATE in JSON format. Is there a way to do so?
### Suggestion:
_No response_ | Issue: rewrite the _DEFAULT_ENTITY_EXTRACTION_TEMPLATE in JSON format | https://api.github.com/repos/langchain-ai/langchain/issues/12957/comments | 3 | 2023-11-06T21:53:23Z | 2024-02-12T16:07:29Z | https://github.com/langchain-ai/langchain/issues/12957 | 1,980,131,843 | 12,957 |
[
"langchain-ai",
"langchain"
] | ### System Info
all last versions as of 6/11/2023
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run anything with OpenAI ChatCompletions and Embeddings.
### Expected behavior
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment
values["client"] = openai.Embedding
^^^^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
__root__
`openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)
| A ton of Bugs after OpenAI API update today | https://api.github.com/repos/langchain-ai/langchain/issues/12956/comments | 9 | 2023-11-06T21:41:02Z | 2023-12-11T12:03:14Z | https://github.com/langchain-ai/langchain/issues/12956 | 1,980,109,527 | 12,956 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.331 latest
Openai v0.28.1
Python v3.11.6
Deeplake 3.8.4 latest
### Who can help?
@eyurtsev
@hwchase17
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Once data has been loaded into the Deep Lake DB:
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.vectorstores import DeepLake
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
model = ChatOpenAI()
db = DeepLake(dataset_path="./my_deeplake/", read_only=True, embedding=OpenAIEmbeddings())
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['fetch_k'] = 10
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 5
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
retrieval_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
retrieval_chain.invoke("where did harrison work?")
```
Error:
```
RuntimeError: std::get: wrong index for variant
```
<img width="1489" alt="image" src="https://github.com/langchain-ai/langchain/assets/150083258/9dbcad3a-759b-492e-86f6-3cd1dc67ce91">
### Expected behavior
Expected behavior: it should retrieve documents from the vector DB. | Using RunnablePassthrough with Deeplake gives RuntimeError: std::get: wrong index for variant | https://api.github.com/repos/langchain-ai/langchain/issues/12955/comments | 3 | 2023-11-06T21:38:34Z | 2024-02-12T16:07:34Z | https://github.com/langchain-ai/langchain/issues/12955 | 1,980,103,620 | 12955
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.331
OpenAI: 1.1.0
Python: 3.11.2
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Full Message:
> ...langchain/embeddings/openai.py", line 284, in validate_environment
> values["client"] = openai.Embedding
> ^^^^^^^^^^^^^^^^
> AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?
The line of code that triggered this error:
`index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])`
### Expected behavior
Creates Vector Store Index | Module 'openai' has no attribute 'Embedding' | https://api.github.com/repos/langchain-ai/langchain/issues/12954/comments | 24 | 2023-11-06T21:31:20Z | 2024-01-18T18:53:59Z | https://github.com/langchain-ai/langchain/issues/12954 | 1,980,093,720 | 12,954 |
[
"langchain-ai",
"langchain"
] | ### "response_format" parameter on the new GPT-4-turbo model
I can sucessfully call the new GPT-4-turbo model by using:
`const model = new ChatOpenAI({modelName:"gpt-4-1106-preview"})`
But I can't add the "response_format" parameter to set its response explicitly to be a json, as stated in:
https://platform.openai.com/docs/guides/text-generation/json-mode
### Suggestion:
Is there a way to pass the parameter to the model, or does this need to be added to the ChatOpenAI code? | Issue: "response_format" parameter on the new GPT-4-turbo model | https://api.github.com/repos/langchain-ai/langchain/issues/12953/comments | 11 | 2023-11-06T21:16:44Z | 2024-05-15T18:04:28Z | https://github.com/langchain-ai/langchain/issues/12953 | 1,980,073,140 | 12953
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain v0.0.323
Openai v1.0.1 (latest)
Python v3.11.6
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(
model_name="gpt-4",
request_timeout=120,
)
```
openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)
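For now the only workaround I have is pinning the pre-1.0 SDK (a sketch — not a real fix):
```python
# pip install "openai==0.28.1"  # last release with the module-level client
import openai

assert hasattr(openai, "ChatCompletion")  # True on 0.x, removed in >= 1.0
```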
### Expected behavior
There is an issue with ChatOpenAI that I believe may be related to the newest openai python package update. | openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. | https://api.github.com/repos/langchain-ai/langchain/issues/12949/comments | 34 | 2023-11-06T20:49:17Z | 2024-02-19T16:07:56Z | https://github.com/langchain-ai/langchain/issues/12949 | 1,980,034,271 | 12,949 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Whenever my agent does not require a tool (the tool has already been used, or the query does not require one), it keeps generating output instead of stopping.
```
datetimetool = Tool(
name="datetimetool",
func=lambda x: datetime.now().strftime('%A %d %B %Y, %I:%M%p'),
description="Retrieve and return the current date and/or time. \
Input should be an empty string.",
)
tools = [datetimetool]
PREFIX = '''
You are a truthful, helpful agent.
'''
FORMAT_INSTRUCTIONS = """Please use the following format only when you need to use a tool:
'''
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
'''
When you have gathered all the information regarding the user's query,\
use the following format to answer the query and do not repeat yourself.
'''
Thought: Do I need to use a tool? No
AI: [print answer and stop output]
'''
"""
SUFFIX = '''
Begin!
Instructions: {input}
{agent_scratchpad}
'''
agent = initialize_agent(
tools=tools,
llm=llm,
agent="zero-shot-react-description",
verbose=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
agent_kwargs={
'prefix': PREFIX,
'format_instructions': FORMAT_INSTRUCTIONS,
'suffix': SUFFIX,
'input_variables': ["chat_history", "input", "agent_scratchpad", "tool_names"],
}
)
query = "How are you?" # Or: "What's the time?"
res = agent({'input': query})
print(res['output'])
```
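One suspicion (my own guess): the zero-shot output parser looks for the literal prefix `Final Answer:`, so my custom `AI:` line may never terminate the run. A sketch of the format instructions I will try instead:
```python
# hypothetical variant: keep the stock "Final Answer:" marker the parser expects
FORMAT_INSTRUCTIONS = """Please use the following format only when you need to use a tool:
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action

When you have gathered all the information regarding the user's query, answer with:
Thought: Do I need to use a tool? No
Final Answer: [the answer, then stop]
"""
```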
### Suggestion:
_No response_ | Issue: Agent runs on loop, "Observation: Invalid Format: Missing 'Action:' after 'Thought:" | https://api.github.com/repos/langchain-ai/langchain/issues/12944/comments | 22 | 2023-11-06T18:00:50Z | 2024-06-21T07:34:37Z | https://github.com/langchain-ai/langchain/issues/12944 | 1,979,742,044 | 12,944 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python Version: 3.11
LangChain Version: 0.0.331
OpenAI Version: 1.0.0
### Who can help?
@hwchase17, @agola11, @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following error is caused by the recent change of the OpenAI package to version 1.0.0.
**Use OpenAI==0.28.1 to fix this error**
With the code:
`embeddings = OpenAIEmbeddings()`
The error produced is:
`AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?`
I went through the `langchain/embeddings/openai.py` file and changed `values["client"] = openai.Embedding` to `values["client"] = openai.embeddings`, but then I receive a new error:
`AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`).
### Expected behavior
There should be no error when calling this function. | OpenAIEmbeddings() does not work because of these bugs | https://api.github.com/repos/langchain-ai/langchain/issues/12943/comments | 18 | 2023-11-06T17:56:29Z | 2023-11-08T21:38:33Z | https://github.com/langchain-ai/langchain/issues/12943 | 1,979,733,329 | 12,943 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain v0.0.287
Windows 10
Python 3.9
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
template = "Here is some context:\n" \
"{my_documents}\n" \
"With the help of the context above, please answer the following query:\n" \
f"{query}"
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))
combined_chain = StuffDocumentsChain(llm_chain=chain, document_variable_name="my_documents")
result = combined_chain.run(my_documents=documents)
```
### Expected behavior
The code above triggers the following error:
> ValueError: Missing some input keys: {'input_documents'}
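For what it's worth, the call does succeed if I use the fixed input key (a sketch):
```python
# works: the chain's external input key is always "input_documents";
# document_variable_name only controls the prompt variable the docs are stuffed into
result = combined_chain.run(input_documents=documents)
```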
This happens even though the document variable has been explicitly named "my_documents" and not "input_documents". The reason is that the base class `BaseCombineDocumentsChain` defines the "input_documents" variable by default, but it is not overridden by the
parameter "document_variable_name" in child classes. For consistency, "document_variable_name" should refer both to the variable name specified in the prompt and to the named parameter in the run method of the chain. | BaseCombineDocumentsChain's "input_documents" parameter not overridden by child classes | https://api.github.com/repos/langchain-ai/langchain/issues/12942/comments | 5 | 2023-11-06T17:23:19Z | 2024-05-13T16:08:32Z | https://github.com/langchain-ai/langchain/issues/12942 | 1,979,664,993 | 12942
[
"langchain-ai",
"langchain"
] | ### Feature request
When trying to concurrently fetch many collections from a single postgres server using PGVector, this will currently produce this error:
```
(psycopg2.errors.InternalError_) tuple concurrently updated
```
This is the expected behaviour for the code as it is and is produced by this line:
```
statement = sqlalchemy.text("CREATE EXTENSION IF NOT EXISTS vector")
```
There is no support for concurrent `CREATE EXTENSION` Operations in Postgres, as outlined in [this post](https://www.postgresql.org/message-id/3473.1393693757%40sss.pgh.pa.us):
> It'd be necessary to add some kind of locking scheme if you want to avoid "tuple concurrently updated" errors. This is not really any different from the situation where two transactions both want to update the same row in a user table: unless the application takes extra steps to serialize the updates, you're going to get "tuple concurrently updated" errors.
>
> We do have such locking for DDL on tables/indexes, but the theory in the past has been that it's not worth the trouble for objects represented by single catalog rows, such as functions or roles. You can't corrupt the database with concurrent updates on such a row, you'll just get a "tuple concurrently updated" error from all but the first-to-arrive update.
As the post suggests, if we want to concurrently run these statements, a locking Scheme is needed. This can easily be achieved by using Postgres' `pg_advisory_xact_lock`, as described [here](https://www.postgresql.org/docs/16/explicit-locking.html#ADVISORY-LOCKS) & [here](https://www.postgresql.org/docs/16/functions-admin.html):
```
BEGIN;
SELECT pg_advisory_xact_lock(1573678846307946496);
CREATE EXTENSION IF NOT EXISTS vector;
COMMIT;
```
This will acquire an exclusive transaction-level advisory lock, waiting if necessary; the lock is automatically released at the end of the transaction. My proposal is therefore to replace the current bare `CREATE EXTENSION IF NOT EXISTS vector` with the statement above, to allow for concurrent execution without errors or having to retry.
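Translated into the SQLAlchemy form the current code already uses, the change would look roughly like this (a sketch; the advisory-lock key is an arbitrary application-chosen constant):
```python
import sqlalchemy

# drop-in replacement for the current bare CREATE EXTENSION statement
statement = sqlalchemy.text(
    "BEGIN;"
    "SELECT pg_advisory_xact_lock(1573678846307946496);"
    "CREATE EXTENSION IF NOT EXISTS vector;"
    "COMMIT;"
)
```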
### Motivation
We have a scenario where it would be beneficial to concurrently retrieve a large number of collections from a single server, and this would appear to be the easiest way to achieve this.
### Your contribution
I will submit an according PR. | PGVector: support for concurrency | https://api.github.com/repos/langchain-ai/langchain/issues/12933/comments | 1 | 2023-11-06T14:19:17Z | 2023-11-06T19:03:36Z | https://github.com/langchain-ai/langchain/issues/12933 | 1,979,277,387 | 12,933 |
[
"langchain-ai",
"langchain"
] | Hi, could you please share a working example of text classification using LangChain with LlamaCpp (the llama-cpp-python module)? When I tried the following with Llama 2 7B Q5_K_M:
```
prompt_template = """A message can be classified as one of the following categories: book, cancel, change.
Examples:
- Book: "I would like to book a room for two nights."
- Cancel: "Please cancel my reservation and refund the payment."
- Change: "I need to change the dates of my booking to next week."
Based on these categories, classify the following message:
```{text}```
"""
```
The LlamaCPP is as follows
```
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# `callback_manager` was not defined in the original snippet; this is the
# usual streaming setup and is assumed here
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = LlamaCpp(
    n_ctx=256,
    model_path=model,  # `model` is the path to the local GGUF file
    temperature=0,
    callback_manager=callback_manager,
    verbose=True,
)
```
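A minimal `run_query` consistent with the call below would be a thin `LLMChain` wrapper (a sketch, since the helper itself is not shown here):
```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def run_query(text, prompt_template, llm):
    # The template above uses {text} as its only input variable.
    prompt = PromptTemplate(template=prompt_template, input_variables=["text"])
    chain = LLMChain(llm=llm, prompt=prompt)
    return chain.run(text=text)
```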
result = run_query("I would like to cancel my booking and ask for a refund.", prompt_template, llm)
output:
> Please help me to solve this issue.
> Please assist in resolving the cancellation of the reservation as soon as possible.
The above sample does not classify the given input. | Example prompt for text classification | https://api.github.com/repos/langchain-ai/langchain/issues/12931/comments | 5 | 2023-11-06T13:44:57Z | 2023-11-09T04:59:23Z | https://github.com/langchain-ai/langchain/issues/12931 | 1,979,196,378 | 12,931
[
"langchain-ai",
"langchain"
] | ### System Info
### Issue you'd like to raise.
Bedrock Streaming support was added in the [PR](https://github.com/langchain-ai/langchain/pull/10393/files#diff-9874347f7fa335df661ff4089b0922b3214e08a92e9879610424522f806358f7R62)
But there is an issue if streaming is enabled together with a `stop` sequence. See the code below: a trailing comma after `self.provider_stop_sequence_key_name_map.get(provider)` turns the dictionary key into a tuple, which causes:
```
"TypeError('keys must be str, int, float, bool or None, not tuple')"
line 35, in _prepare_input_and_invoke_stream
body = json.dumps(input_body)
{'temperature': 0, 'max_tokens_to_sample': 4048, ('stop_sequences',): ['<observation>'], 'prompt': '\n\nHuman:
```
```
if stop:
if provider not in self.provider_stop_sequence_key_name_map:
raise ValueError(
f"Stop sequence key name for {provider} is not supported."
)
# stop sequence from _generate() overrides
# stop sequences in the class attribute
_model_kwargs[
self.provider_stop_sequence_key_name_map.get(provider),
] = stop
```
### Suggested fix
Remove the trailing comma after `self.provider_stop_sequence_key_name_map.get(provider)` in the `_prepare_input_and_invoke_stream` method.
For example:
```
_model_kwargs[
self.provider_stop_sequence_key_name_map.get(provider)
] = stop
```
This will resolve the TypeError and allow the stop sequence to be properly passed for streaming.
### Who can help?
cc @3coins @baskaryan @mukitmomi
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
enable bedrock streaming with stop sequence.
```
"TypeError('keys must be str, int, float, bool or None, not tuple')"
line 35, in _prepare_input_and_invoke_stream
body = json.dumps(input_body)
{'temperature': 0, 'max_tokens_to_sample': 4048, ('stop_sequences',): ['<observation>'], 'prompt': '\n\nHuman:
```
### Expected behavior
{'temperature': 0, 'max_tokens_to_sample': 4048, 'stop_sequences': ['<observation>'], 'prompt': '\n\nHuman:
stop_sequences as str key. | Amazon Bedrock streaming not working with stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/12926/comments | 1 | 2023-11-06T11:03:20Z | 2023-11-06T11:24:18Z | https://github.com/langchain-ai/langchain/issues/12926 | 1,978,862,164 | 12,926 |
[
"langchain-ai",
"langchain"
] | I created a pandas dataframe agent. When I query over multiple CSV files, it sometimes gives the correct answer and sometimes hallucinates by returning the Python code to run instead of the actual result.
Here's the response I am getting:
```
To determine the region with the highest total revenue, we can group the data by region and calculate the sum of the total revenue for each region. Then, we can find the region with the highest sum.
Here is the corresponding Python code to find the region with the highest total revenue, as well as the corresponding country and item type:
# Group the data by region and calculate the sum of total revenue for each region
region_revenue = df.groupby('Region')['Total Revenue'].sum()
# Find the region with the highest total revenue
highest_revenue_region = region_revenue.idxmax()
# Find the corresponding country and item type for the highest revenue region
corresponding_country = df[df['Region'] == highest_revenue_region]['Country'].iloc[0]
corresponding_item_type = df[df['Region'] == highest_revenue_region]['Item Type'].iloc[0]
highest_revenue_region, corresponding_country, corresponding_item_type
The region with the highest total revenue is Middle East and North Africa. The corresponding country is Azerbaijan and the corresponding item type is Snacks.
```
Here is the agent I initialized:
```
create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0.0, verbose=True,
               model='gpt-3.5-turbo'
               ),
    self.df,
    verbose=True,
    max_iterations=20,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    # handle_parsing_errors=True,
    agent_executor_kwargs={
        "handle_parsing_errors": True
    }
)
```
Do you think the chain is stopping partway through, before completing the response? | Pandas data frame agent hallucinates when asking a quesry: Sending python code insted of actual result | https://api.github.com/repos/langchain-ai/langchain/issues/12925/comments | 7 | 2023-11-06T10:38:37Z | 2024-02-13T16:08:07Z | https://github.com/langchain-ai/langchain/issues/12925 | 1,978,817,037 | 12,925
[
"langchain-ai",
"langchain"
] | I wanted to merge the Confluence and GitHubIssues loaders, but I am facing a problem: the Confluence loader requires additional arguments at load time, so I am unable to use the MergedDataLoader here.
Code:
```python
loader = ConfluenceLoader(url="URL", username="USER", api_key="API")
loader1 = GitHubIssuesLoader(repo="REPO", access_token="TOKEN")
loader_all = MergedDataLoader(loaders=[loader1, loader])
documents = loader_all.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
```
Error:
```
ValueError: Must specify at least one among `space_key`, `page_ids`, `label`, `cql` parameters.
```
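A possible workaround, until MergedDataLoader can forward per-loader arguments, is to load each source separately and concatenate the documents (a sketch; `"SPACE"` is a placeholder for your actual space key):
```python
confluence_docs = loader.load(space_key="SPACE")  # pass the Confluence load-time arguments here
github_docs = loader1.load()
documents = confluence_docs + github_docs
```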
### Suggestion:
_No response_ | Issue: Unable to Merge confluence and githubissues loader | https://api.github.com/repos/langchain-ai/langchain/issues/12923/comments | 3 | 2023-11-06T08:54:54Z | 2024-02-12T16:07:44Z | https://github.com/langchain-ai/langchain/issues/12923 | 1,978,615,854 | 12,923 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python == 3.11.3
Langchain == 0.0.330
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```bash
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>> langchain.__version__
'0.0.330'
>>> from langchain.agents import load_tools, initialize_agent, AgentType
>>> from langchain.chains.conversation.memory import ConversationBufferMemory
>>> from langchain.chat_models import ChatOpenAI
>>> agent = initialize_agent(
... agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
... llm=ChatOpenAI(model='gpt-3.5-turbo', temperature=0),
... memory=ConversationBufferMemory(),
... verbose=True,
... handle_parsing_errors=True,
... tools=load_tools(tool_names=["requests_get"]),
... )
>>> agent.run('Search the latest version of python')
> Entering new AgentExecutor chain...
Thought: I can use the requests_get tool to search for the latest version of Python on the official Python website.
Action:
```{"action": "requests_get", "action_input": {"url": "https://www.python.org/downloads/"}}```
> Finished chain.
'Thought: I can use the requests_get tool to search for the latest version of Python on the official Python website.\n\nAction:\n```{"action": "requests_get", "action_input": {"url": "https://www.python.org/downloads/"}}```'
```
### Expected behavior
I expect the agent to execute the action (`requests_get` in the case above).
NOTE: In the above case, InvalidRequestError would be raised after executing `requests_get` because the number of tokens is very large.
Related: #12158 | Issue: StructuredChatOutputParser regex may prevent agent's actions | https://api.github.com/repos/langchain-ai/langchain/issues/12922/comments | 3 | 2023-11-06T08:20:11Z | 2023-11-06T23:43:40Z | https://github.com/langchain-ai/langchain/issues/12922 | 1,978,555,611 | 12,922 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've created a tool for parsing text content from PDF files. After binding it to the LLM, the tool gets called correctly when I input the text, but the problem is that the tool gets called many times (I just waited there for minutes and then stopped it). Could you help?
```
class ChatLlmWithTools():
    def __init__(self) -> None:
        from langchain.chat_models import ChatOpenAI
        from langchain.agents.format_scratchpad import format_to_openai_functions
        from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
        from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
        from langchain.chains.conversation.memory import ConversationBufferWindowMemory
        from langchain.prompts import (
            ChatPromptTemplate,
            MessagesPlaceholder,
            SystemMessagePromptTemplate,
            HumanMessagePromptTemplate,
        )
        import pdf2pdf.tools.pdf_tool
        from langchain.agents import load_tools
        llm = ChatOpenAI(openai_api_key="xxxxxxxx",
                         temperature=0, model_name='gpt-3.5-turbo-16k-0613')
        tools = [Tool(name="parse_text_and_images_from_pdf_files_with_pdfminer",
                      func=pdf2pdf.pdf_utility.parse_text_and_images_from_pdf_files_with_pdfminer,
                      description="Parse text and images from pdf files under target folder")]
        tools[0].callbacks = [AgentAndToolExecutionWatcherHandler()]
        llm_with_tools = llm.bind(
            functions=[format_tool_to_openai_function(t) for t in tools]
        )
        prompt = ChatPromptTemplate(
            messages=[
                SystemMessagePromptTemplate.from_template(
                    "You are a nice chatbot having a conversation with a human, but bad at parse text and images from pdf files."
                ),
                # The `variable_name` here is what must align with memory
                MessagesPlaceholder(variable_name="chat_history"),
                MessagesPlaceholder(variable_name="agent_scratchpad"),
                HumanMessagePromptTemplate.from_template("{input}")
            ]
        )
        agent = ({
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']),
            "chat_history": lambda x: x["chat_history"],
        } | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser())
        from langchain.agents import AgentExecutor
        # initialize conversational memory
        memory = ConversationBufferWindowMemory(
            memory_key='chat_history',
            k=3,
            return_messages=True
        )
        self.agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory, #max_iterations=2,
                                            callbacks=[AgentAndToolExecutionWatcherHandler()])

    def process(self, param_str_from: str):
        output = self.agent_executor.invoke({"input": param_str_from})
        # print(output)
        return output['output']
```
This is the unit test for the agent above; you can see that the last `process` call triggers the tool, but it keeps invoking it over and over:
```
llmWithTool = ChatLlmWithTools()
llm_msg0 = llmWithTool.process('can you do gerneral chatting, pls remember my name is john, my age is 30?')
llm_msg1 = llmWithTool.process('can you say my name, and tell me what is 1+1?')
llm_msg2 = llmWithTool.process('can you show my age?')
# the line below triggers the tool call time and time again
llm_msg9 = llmWithTool.process('can you parse text and extract images from pdf files under target folder: {}'.format(
    os.path.join(mygptsite.settings.BASE_DIR, 'uploaded_pdf_folder', '20231106103207672___example')))
```
Part of the log in the debug console looks like this:
> ...
> ...
> > Finished chain.
> Sure, John! Your age is 30.
>
>
> > Entering new AgentExecutor chain...
> on_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxx\source\repos\xxxxx\uploaded_pdf_folder\20231106103207672___example
>
> Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxx\source\repos\xxxxx\uploaded_pdf_folder\20231106103207672___example`
>
>
> on_tool_start, input_str: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example
> on_tool_end, output: I am th test msg parsed from pdf files with pdfminer
> I am th test msg parsed from pdf files with pdfmineron_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example
>
> Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example`
>
>
> on_tool_start, input_str: C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example
> on_tool_end, output: I am th test msg parsed from pdf files with pdfminer
> I am th test msg parsed from pdf files with pdfmineron_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example
>
> Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example`
>
>
> on_tool_start, input_str: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example
> on_tool_end, output: I am th test msg parsed from pdf files with pdfminer
### Suggestion:
_No response_ | Issue: tool get called many times | https://api.github.com/repos/langchain-ai/langchain/issues/12919/comments | 3 | 2023-11-06T05:43:34Z | 2024-02-12T16:07:49Z | https://github.com/langchain-ai/langchain/issues/12919 | 1,978,337,091 | 12,919 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.10.0
langchain 0.3.300
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use ChatTongyi and AsyncIteratorCallbackHandler together.
### Expected behavior
The async token callback should be awaited; instead, this warning is raised:
```
/Users/joeylin/Projects/questionAnswer/venv/lib/python3.10/site-packages/langchain/chat_models/tongyi.py:366: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
  run_manager.on_llm_new_token(chunk.text, chunk=chunk)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
| ChatTongyi do not support AsyncIteratorCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/12917/comments | 3 | 2023-11-06T03:48:40Z | 2024-02-12T16:07:54Z | https://github.com/langchain-ai/langchain/issues/12917 | 1,978,229,664 | 12,917 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.330
Python 3.10.8
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was using the latest release of pydantic (2.4.2) and was getting errors telling me to install langchain[docarray].
Above that:
ImportError: cannot import name 'ROOT_KEY' from 'pydantic.main' (...venv\lib\site-packages\pydantic\main.py)
I retried using pydantic 1.10.13 and it works fine now. Not sure if this is worth adding to the error message, or worth building a pydantic version <2.0 constraint into langchain.
I didn't try all the versions, but I know 2.4.2 breaks.
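(For anyone else hitting this: pinning with `pip install "pydantic==1.10.13"`, or any other 1.x release, works around it for now.)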
### Expected behavior
I wasn't able to set my index. Here is where the code broke:
```python
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
```
| DocArrayInMemorySearch | Pydantic 2+ Breaking Changes | https://api.github.com/repos/langchain-ai/langchain/issues/12916/comments | 3 | 2023-11-06T02:46:56Z | 2024-04-25T16:21:57Z | https://github.com/langchain-ai/langchain/issues/12916 | 1,978,179,255 | 12,916 |
[
"langchain-ai",
"langchain"
] | ### System Info
Any system.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import Callbacks
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.chroma import Chroma
from langchain.schema.vectorstore import VectorStoreRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.docstore.document import Document
import os
class MyRetriever(VectorStoreRetriever):
def get_relevant_documents(self, query: str, *, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, run_name: str | None = None, **kwargs: Any) -> List[Document]:
print('Called get_relevant_documents in MyRetriever')
res = super(MyRetriever, self).get_relevant_documents(query, callbacks=callbacks, tags=tags, metadata=metadata, run_name=run_name, **kwargs)
print('Finished get_relevant_documents in MyRetriever')
tmp_res: dict[str, Document] = {}
for item in res:
doc_name = item.page_content.split("\n", maxsplit=1)[0]
if doc_name not in tmp_res:
tmp_res[doc_name] = item
else:
orig_doc = tmp_res[doc_name]
doc_content = "\n".join(item.page_content.split("\n")[1:])
new_doc = Document(page_content=orig_doc.page_content + doc_content, metadata=orig_doc.metadata)
tmp_res[doc_name] = new_doc
return list(tmp_res.values())
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectordb = Chroma.from_documents([Document("blah")], embeddings, persist_directory="./tmp")
ret = MyRetriever(vectorstore=vectordb, tags=vectordb._get_retriever_tags())
ret.get_relevant_documents("blah") # infinite recursion!
```
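For what it's worth, overriding the private `_get_relevant_documents` hook instead of the public entry point avoids the recursion (a sketch; the per-document merging from above would go where the comment is):
```python
from langchain.callbacks.manager import CallbackManagerForRetrieverRun

class MyRetriever(VectorStoreRetriever):
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # super() dispatches to the vector store without re-entering the
        # public get_relevant_documents wrapper, so no recursion.
        res = super()._get_relevant_documents(query, run_manager=run_manager)
        # ... merge chunks that share the same first line, as above ...
        return res
```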
### Expected behavior
Safely call parent class method then customize its behaviour | subclassing VectorStoreRetriever causes unjustifiable infinite recursion | https://api.github.com/repos/langchain-ai/langchain/issues/12913/comments | 2 | 2023-11-06T01:17:11Z | 2023-11-06T01:48:44Z | https://github.com/langchain-ai/langchain/issues/12913 | 1,978,090,823 | 12,913 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.330
Windows 11
Python 3.11.3
SQLAlchemy version: 2.0.23
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior.
1. Follow the steps mentioned in https://python.langchain.com/docs/integrations/toolkits/sql_database
2. Replace connection string with a PostgreSQL connection string
3. Run the following code, make sure to update the connection string and the database/tables you are querying.
```python
import langchain
from langchain.llms import CTransformers
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType

config = {'max_new_tokens': 256, 'repetition_penalty': 1.1, 'temperature': 0, 'context_length': 4096}
llm = CTransformers(model="TheBloke/CodeLlama-7B-Instruct-GGUF",
                    model_file="codellama-7b-instruct.Q4_K_M.gguf", config=config, verbose=True)

CONNECTION_STRING = 'postgresql://postgresuser:password@localhost:5432/pharmadb'
db = SQLDatabase.from_uri(CONNECTION_STRING)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent_executor.run("Describe the corporation table")
```
I get the following output in verbose mode:
```
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input: ''
Observation: corporation, molecule, product_mat, productpack
Thought: The corporation table is probably one of those tables.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
.
.
.
> Finished chain.
'Agent stopped due to iteration limit or time limit.'
```
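For what it's worth, the failure is reproducible outside the agent: the quotes the model emits around the table name become part of the lookup key. A quick check:
```python
print(db.get_table_info(["corporation"]))    # works: returns the CREATE TABLE statement plus sample rows
print(db.get_table_info(["'corporation'"]))  # raises ValueError: table_names {"'corporation'"} not found in database
```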
### Expected behavior
Expected to correctly identify the table and its schema, run the query to retrieve top 3 records from the table and provide a description of the table. | create_sql_agent Error: table_names {"'<table name>'"} not found in database. | https://api.github.com/repos/langchain-ai/langchain/issues/12911/comments | 5 | 2023-11-05T20:09:44Z | 2024-07-19T16:06:59Z | https://github.com/langchain-ai/langchain/issues/12911 | 1,977,943,019 | 12,911 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using RedisChatMessageHistory from LangChain. I am trying to deploy the app using Docker, and I am facing issues connecting to Redis. Can someone help me here? @hwchase17
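A common cause (an assumption on my part, since no compose file is shown here): inside a container, `localhost` refers to the container itself, so the Redis URL has to point at the Redis service or host instead. A sketch:
```python
import os
from langchain.memory import RedisChatMessageHistory

# "redis" is the assumed docker-compose service name; adjust to your setup.
history = RedisChatMessageHistory(
    session_id="my-session",
    url=os.environ.get("REDIS_URL", "redis://redis:6379/0"),
)
```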
### Suggestion:
_No response_ | Issue: How to deploy langchain using docker and redis used by langchain | https://api.github.com/repos/langchain-ai/langchain/issues/12910/comments | 8 | 2023-11-05T18:40:09Z | 2023-11-08T12:13:13Z | https://github.com/langchain-ai/langchain/issues/12910 | 1,977,910,131 | 12,910 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://blog.langchain.dev/plan-and-execute-agents/
Links at the end of the Conclusion section do not work.
[here (Python)](https://python.langchain.com/docs/modules/agents/agent_types/plan_and_execute?ref=blog.langchain.dev
) and [here (JS)](https://js.langchain.com/docs/modules/agents/agents/examples/plan_and_execute_agent?ref=blog.langchain.dev) are broken.
### Idea or request for content:
_No response_ | DOC: Links at the bottom of Plan and Execute Agents blog do not work | https://api.github.com/repos/langchain-ai/langchain/issues/12904/comments | 3 | 2023-11-05T13:34:23Z | 2024-02-11T16:06:36Z | https://github.com/langchain-ai/langchain/issues/12904 | 1,977,791,576 | 12,904 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using Google PaLM, FAISS, and HF Instruct Embeddings. Whenever I query with RetrievalQAWithSourcesChain,
I am getting
```
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error:
"IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] [6.45s] Chain run errored with error:
"IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain] [7.13s] Chain run errored with error:
"IndexError('list index out of range')"
```
Here is my whole code:
```python
import os
import streamlit as st
import pickle
import time
import langchain
from langchain.llms import GooglePalm
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import SeleniumURLLoader
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import FAISS
urls = ["https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS",
"https://www.livemint.com/companies/news/tata-motorss-ev-subsidiary-to-sign-mou-with-jlr-to-strengthen-ev-business-tata-motors-to-pay-royalty-fee-to-jlr-11698925980903.html"
]
loader=SeleniumURLLoader(urls=urls)
data=loader.load()
llm=GooglePalm(google_api_key="", temperature=0.9,max_output_tokens=500)
r_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " ", "\t\t"],
    chunk_size=400,
    chunk_overlap=80
)
docs = r_splitter.split_documents(data)
embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)
vector_index = FAISS.from_documents(docs, embeddings)
chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=vector_index.as_retriever())
chain
langchain.debug=True
query="summerise the text"
chain({"question":query},return_only_outputs=True)
```
OUTPUT:
```
[chain/start] [1:chain:RetrievalQAWithSourcesChain] Entering Chain run with input:
{
"question": "summerise the text"
}
[chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain] Entering Chain run with input:
{
"input_list": [
{
"context": "View more \n \n \n Posted by : kamal20",
"question": "summerise the text"
},
{
"context": "View more \n \n \n Posted by : kamal20",
"question": "summerise the text"
},
{
"context": "- - \n - - \n - - \n - - \n - -",
"question": "summerise the text"
},
{
"context": "- - \n - - \n - - \n - - \n - -",
"question": "summerise the text"
}
]
}
[llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 5:llm:GooglePalm] Entering LLM run with input:
{
"prompts": [
"Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nView more \n \n \n Posted by : kamal20\nQuestion: summerise the text\nRelevant text, if any:"
]
}
[llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 6:llm:GooglePalm] Entering LLM run with input:
{
"prompts": [
"Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nView more \n \n \n Posted by : kamal20\nQuestion: summerise the text\nRelevant text, if any:"
]
}
[llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 7:llm:GooglePalm] Entering LLM run with input:
{
"prompts": [
"Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n- - \n - - \n - - \n - - \n - -\nQuestion: summerise the text\nRelevant text, if any:"
]
}
[llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 8:llm:GooglePalm] Entering LLM run with input:
{
"prompts": [
"Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n- - \n - - \n - - \n - - \n - -\nQuestion: summerise the text\nRelevant text, if any:"
]
}
[llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 5:llm:GooglePalm] [4.40s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created.",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 6:llm:GooglePalm] [4.40s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "The main points are:",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 7:llm:GooglePalm] [4.40s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "- - \n\n- - ",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 8:llm:GooglePalm] [4.40s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "- - ",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain] [4.41s] Exiting Chain run with output:
{
"outputs": [
{
"text": "This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created."
},
{
"text": "The main points are:"
},
{
"text": "- - \n\n- - "
},
{
"text": "- - "
}
]
}
[chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] Entering Chain run with input:
{
"question": "summerise the text",
"summaries": "Content: This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created.\nSource: [https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent](https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS/n/nContent): The main points are:\nSource: [https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent](https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS/n/nContent): - - \n\n- - \nSource: [https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent](https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS/n/nContent): - - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS"
}
[llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain > 10:llm:GooglePalm] Entering LLM run with input:
{
"prompts": [
"Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. 
\n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: summerise the text\n=========\nContent: This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. 
It also provides examples of summaries and discusses the different types of summaries that can be created.\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: The main points are:\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \n\n- - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n=========\nFINAL ANSWER:"
]
}
[llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain > 10:llm:GooglePalm] [2.01s] Exiting LLM run with output:
{
"generations": [
[]
],
"llm_output": null,
"run": null
}
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error:
"IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] [6.45s] Chain run errored with error:
"IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain] [7.13s] Chain run errored with error:
"IndexError('list index out of range')"`
### Suggestion:
_No response_ | Issue: <[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error: "IndexError('list index out of range')"> | https://api.github.com/repos/langchain-ai/langchain/issues/12903/comments | 9 | 2023-11-05T12:19:52Z | 2024-06-30T16:03:18Z | https://github.com/langchain-ai/langchain/issues/12903 | 1,977,763,705 | 12,903 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My OpenAI LLM seems to run fine, but somehow I am not getting the answer after Finished chain.
Model used
```
def llm_function(csv_file_name):
    agent = create_csv_agent(
        ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", request_timeout=120),
        csv_file_name,
        verbose=True,
        agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    )
    # Pass a query to the chain
    query = QUESTION
    query = query + " using tool python_repl_ast"
    #response = agent.run(query)
    try:
        response = agent.run(query)
    except ValueError as e:
        response = str(e)
        if not response.startswith("Could not parse LLM output: `"):
            raise e
        response = response.removeprefix("Could not parse LLM output: `").removesuffix("`")
    return response
```
Results
```
> Entering new AgentExecutor chain...
Thought: To find the least expensive product, I need to sort the dataframe by the "Price" column in ascending order and then select the first row.
Action: python_repl_ast
Action Input: df.sort_values("Price").head(1)
Observation: Product Name Price Price per Count Bought Last Month
296 MAGNESIUM PHOSPHORICUM 6C 30 ML SBL 90 Price_count not found Data not available
Thought:The least expensive product is "MAGNESIUM PHOSPHORICUM 6C 30 ML SBL" with a price of 90.
Final Answer: "MAGNESIUM PHOSPHORICUM 6C 30 ML SBL"
> Finished chain.
None
```
### Suggestion:
_No response_ | Finished chain - None, when I run an csv_agent on a csv file. | https://api.github.com/repos/langchain-ai/langchain/issues/12900/comments | 3 | 2023-11-05T09:30:48Z | 2024-02-11T16:06:41Z | https://github.com/langchain-ai/langchain/issues/12900 | 1,977,689,238 | 12,900 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have successfully installed the APOC plugin by following the Neo4j documentation, and running `RETURN apoc.version()` in the Neo4j client returns successfully.
However, when connecting with LangChain's Neo4jGraph, the error is still reported:
```
ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    'bolt://localhost:7687',
    'neo4j',
    'chenhuabc'
)
print(graph)
# ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration
```
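One thing worth checking (an assumption, since the server config is not shown): `apoc.meta.data` must actually be allowed to run, e.g. `dbms.security.procedures.allowlist=apoc.meta.data,apoc.*` (or `dbms.security.procedures.unrestricted=apoc.*`) in `neo4j.conf`, followed by a server restart.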
### Expected behavior
Neo4jGraph should connect without raising this error, since the APOC plugin is installed and `RETURN apoc.version()` returns successfully in the Neo4j client. Instead, it raises:
```
ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration
``` | Langchain connects Neo4j v5.9 error Could not use APOC procedures | https://api.github.com/repos/langchain-ai/langchain/issues/12901/comments | 14 | 2023-11-05T09:13:20Z | 2024-05-20T05:19:27Z | https://github.com/langchain-ai/langchain/issues/12901 | 1,977,701,382 | 12,901
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.330
Python 3.10.12
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

compressor = CohereRerank()
```
Error encountered:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-29-be190c4da94f>](https://localhost:8080/#) in <cell line: 5>()
3
4 #
----> 5 compressor = CohereRerank()
6 #
7 compression_retriever = ContextualCompressionRetriever(
2 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model()
/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py in validate_environment(cls, values)
53 values, "cohere_api_key", "COHERE_API_KEY"
54 )
---> 55 client_name = values["user_agent"]
56 values["client"] = cohere.Client(cohere_api_key, client_name=client_name)
57 return values
KeyError: 'user_agent'
```
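(A possible stopgap, untested: passing the client name explicitly, e.g. `CohereRerank(user_agent="langchain")`, may populate the value that the pre-validator looks up before defaults are applied.)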
### Expected behavior
The error below should not have been encountered, as the COHERE_API_KEY has already been set and cohere.Client() is working with the API key provided.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-29-be190c4da94f>](https://localhost:8080/#) in <cell line: 5>()
3
4 #
----> 5 compressor = CohereRerank()
6 #
7 compression_retriever = ContextualCompressionRetriever(
2 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model()
/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py in validate_environment(cls, values)
53 values, "cohere_api_key", "COHERE_API_KEY"
54 )
---> 55 client_name = values["user_agent"]
56 values["client"] = cohere.Client(cohere_api_key, client_name=client_name)
57 return values
KeyError: 'user_agent'
``` | Encounter KeyError: 'user_agent' while using CohereRerank() | https://api.github.com/repos/langchain-ai/langchain/issues/12899/comments | 6 | 2023-11-05T07:40:49Z | 2024-03-14T10:18:11Z | https://github.com/langchain-ai/langchain/issues/12899 | 1,977,674,946 | 12,899
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: macOS Sonoma 14.0
Python Version: 3.11
LangChain Version: 0.0.330
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)

for chunk in chain.stream({"product": "colorful socks"}):
    print(f"Chunk: {chunk}")
```
### Expected behavior
One expects to receive chunks when streaming, but because the `stream` method is not implemented in the `LLMChain` class, it falls back to the `stream` method of the base `Chain` class. This results in the `chunk` variable containing the full response.
This can be fixed easily by something like this.
```python
from typing import Any, Iterator, Optional

from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.schema.runnable import RunnableConfig
from langchain.schema.runnable.utils import Input, Output


class MyChain(LLMChain):
    def stream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        run_manager: Optional[CallbackManagerForChainRun] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        prompts, stop = self.prep_prompts([input], run_manager=run_manager)
        yield from self.llm.stream(input=prompts[0], config=config, **kwargs)
```
```python
prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
chain = MyChain(llm=llm, prompt=prompt)

for chunk in chain.stream({"product": "colorful socks"}):
    print(f"Chunk: {chunk}")  # ✅ now yields incremental chunks
```
If you agree, I would like to create a PR to fix this. | LLMChain does not stream | https://api.github.com/repos/langchain-ai/langchain/issues/12894/comments | 7 | 2023-11-04T22:00:53Z | 2024-07-18T16:07:19Z | https://github.com/langchain-ai/langchain/issues/12894 | 1,977,527,950 | 12,894
[
"langchain-ai",
"langchain"
] | ### System Info
pip freeze
attrs==19.3.0
Automat==0.8.0
blinker==1.4
certifi==2019.11.28
chardet==3.0.4
Click==7.0
cloud-init==23.3.1
colorama==0.4.3
command-not-found==0.3
configobj==5.0.6
constantly==15.1.0
cryptography==2.8
dbus-python==1.2.16
diskcache==5.6.3
distro==1.4.0
distro-info==0.23+ubuntu1.1
entrypoints==0.3
httplib2==0.14.0
hyperlink==19.0.0
idna==2.8
importlib-metadata==1.5.0
incremental==16.10.1
Jinja2==2.10.1
jsonpatch==1.22
jsonpointer==2.0
jsonschema==3.2.0
keyring==18.0.1
language-selector==0.1
launchpadlib==1.10.13
lazr.restfulclient==0.14.2
lazr.uri==1.0.3
llama-cpp-python==0.2.11
MarkupSafe==1.1.0
more-itertools==4.2.0
netifaces==0.10.4
numpy==1.24.4
oauthlib==3.1.0
pexpect==4.6.0
pyasn1==0.4.2
pyasn1-modules==0.2.1
PyGObject==3.36.0
PyHamcrest==1.9.0
PyJWT==1.7.1
pymacaroons==0.13.0
PyNaCl==1.3.0
pyOpenSSL==19.0.0
pyrsistent==0.15.5
pyserial==3.4
python-apt==2.0.1+ubuntu0.20.4.1
python-debian==0.1.36+ubuntu1.1
PyYAML==5.3.1
requests==2.22.0
requests-unixsocket==0.2.0
SecretStorage==2.3.1
service-identity==18.1.0
simplejson==3.16.0
six==1.14.0
sos==4.5.6
ssh-import-id==5.10
systemd-python==234
Twisted==18.9.0
typing-extensions==4.8.0
ubuntu-advantage-tools==8001
ufw==0.36
unattended-upgrades==0.1
urllib3==1.25.8
wadllib==1.3.3
zipp==1.0.0
zope.interface==4.7.1
### Who can help?
I'm trying to run an agent based on a local `ollama mistral:7b-instruct` model, and `stop` is not working. Is it a bug, or did I configure something wrong? Thanks in advance!
```
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.chains import LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, OutputParserException
import re
from langchain.llms import Ollama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.agents import tool


@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word.strip().split()[0])


tools = [get_word_length]

# Set up the base template
template = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! If you can use a tool, use it before the final answer. Do not make up an observation before calling an action!

Question: {input}
{agent_scratchpad}"""


# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)


prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"]
)


class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)


output_parser = CustomOutputParser()

ollama = Ollama(model="mistral:7b-instruct", callbacks=[FinalStreamingStdOutCallbackHandler()])

# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=ollama, prompt=prompt)

tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["Observation:"],
    allowed_tools=tool_names
)

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

agent_executor.run("How many letters in the word astronomia?")
```
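(One more data point that may help triage: `Ollama` also accepts `stop` as a constructor field, e.g. `Ollama(model="mistral:7b-instruct", stop=["Observation:"])`; I have not verified whether that code path behaves differently from the per-call `stop` used by the agent above.)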
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the code above
### Expected behavior
Expected the generation to stop on `Observation:`. | It seems that stop does not work with ollama models. | https://api.github.com/repos/langchain-ai/langchain/issues/12892/comments | 8 | 2023-11-04T21:30:07Z | 2024-05-16T16:07:34Z | https://github.com/langchain-ai/langchain/issues/12892 | 1,977,518,229 | 12,892
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Page link: https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models
Under "Using chat models", second paragraph:
```The main difference here is a different prompt. We will use JSON to encode the agent's actions (chat models are a bit tougher to steet, so using JSON helps to enforce the output format).```
_steet_ should be replaced with _steer_.
### Idea or request for content:
_No response_ | DOC: Typo under docs for using chat models for ReAct agents | https://api.github.com/repos/langchain-ai/langchain/issues/12891/comments | 2 | 2023-11-04T20:46:53Z | 2024-02-10T16:06:38Z | https://github.com/langchain-ai/langchain/issues/12891 | 1,977,501,097 | 12,891 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.308
Macos ventura 13.2.1
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have two custom tools:
CustomJiraTicketWriting, which uses FewShotChatMessagePromptTemplate and StructuredOutputParser.from_response_schemas in order to produce a dictionary output containing a complete, Jira-compatible JSON payload representing a ticket.
CustomJiraTicketPOST, which accepts 4 inputs (ticket / email / url / token) in order to POST the ticket to Jira.
Both of these functions work great separately, but when I use a STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent to make the ReAct decision, the agent refuses to use either of the tools.
I wonder if I am doing something wrong or if the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent is bugged?
```
import os
import re
import json
import openai
import random
import langchain
import demjson3
import requests
from langchain.chat_models import ChatOpenAI
from requests.auth import HTTPBasicAuth
from openai import ChatCompletion
from typing import Optional, Type, List, Dict, Union
from langchain.llms import OpenAI,GPT4All
from pydantic import BaseModel, Field
from langchain.agents import (AgentType,
initialize_agent,
AgentOutputParser,
LLMSingleActionAgent,
AgentExecutor,
load_tools)
from langchain.schema import AgentAction, AgentFinish
from langchain.cache import InMemoryCache
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.pydantic_v1 import BaseModel, Field, validator
from langchain.chains import LLMChain, SequentialChain, SimpleSequentialChain, TransformChain
from langchain.output_parsers import (OutputFixingParser,
RetryWithErrorOutputParser,
PydanticOutputParser,
CombiningOutputParser,
ResponseSchema,
StructuredOutputParser)
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.tools import BaseTool, StructuredTool, Tool, tool
from langchain.prompts import (ChatPromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
FewShotChatMessagePromptTemplate,
StringPromptTemplate,
)
from langchain.schema import (AIMessage,
HumanMessage,
SystemMessage
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
email = 'XXXXXXXXXX.jl@gmail.com'
url = "https://XXXXXXXXXX.atlassian.net/rest/api/3/issue"
debug_mode = True
langchain.debug = True
langchain.llm_cache = InMemoryCache()
os.environ["JIRA_API_TOKEN"]= 'XXXXXXXXXX'
api_token_jira = os.environ["JIRA_API_TOKEN"]
os.environ["JIRA_USERNAME"]= 'XXXXXXXXXX'
os.environ["JIRA_INSTANCE_URL"] = "XXXXXXXXXX"
site_jira = os.environ["JIRA_INSTANCE_URL"]
os.environ['OPENAI_API_KEY']='XXXXXXXXXX'
openai_api_key = os.getenv('OPENAI_API_KEY')
api_key = os.getenv('OPENAI_API_KEY')
model_name = 'gpt-3.5-turbo'
temperature = 0.0
model_llm = OpenAI(model_name=model_name,
temperature=temperature,
max_tokens=3500)
model_chat = ChatOpenAI(
temperature=temperature,
max_tokens=3100,
model_name=model_name)
turbo_llm = ChatOpenAI(
temperature=temperature,
model_name=model_name,
max_tokens=3100,)
class CustomJiraTicketWriting(BaseTool):
name = "Jira_Ticket_Write"
description = ("""
Useful to transform a summary into a real JSON Jira ticket.
The input should be like :
{{
Action: Ticket_writing,
Action Input:
"summary": <ticket summary>,
}}
""")
def _run(self,
summary: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> dict:
response_schema = ResponseSchema(
name="jira_ticket",
description="Jira ticket information",
structure={
"fields": {
"project": {
"key": str
},
"summary": str,
"issuetype": {
"name": str
},
"priority": {
"name": str
},
"description": {
"type": str,
"version": int,
"content": [
{
"type": str,
"content": [
{
"type": str,
"text": str
}
]
},
{
"type": str,
"attrs": {
"level": int
},
"content": [
{
"type": str,
"text": str
}
]
}
]
}
}
}
)
response_schemas=[response_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
examples = [
{
"Jira_Ticket_Summary" : "Creation of the MySQL database",
"output": """{"fields":{"project":{"key":"AJ"},"summary":"Create a Jira ticket to integrate my MySQL database into our current assets","issuetype":{"name":"Story"},"priority":{"name":"High"},"description":{"type":"doc","version":1,"content":[{"type":"paragraph","content":[{"type":"text","text":"As a developer, I want to integrate my MySQL database with our current assets to improve data management."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Acceptance Criteria:"}]},{"type":"paragraph","content":[{"type":"text","text":"- The MySQL database is successfully integrated with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Data can be efficiently stored and retrieved from the integrated MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration process follows best practices and security standards."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration is documented for future reference."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Subtasks:"}]},{"type":"paragraph","content":[{"type":"text","text":"- Analyze the structure of the MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- Create integration scripts for data migration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Implement data synchronization with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Perform testing and quality assurance of the integration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Document the integration process and configurations."}]}]}}"""
}]
example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{Jira_Ticket_Summary}"),
("ai", "{output}")]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
example_prompt=example_prompt,
examples=examples,
)
prompt_string = """
Jira_Ticket_Summary: {Jira_Ticket_Summary}
{format_instructions}
"""
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a Jira assistant specialized in creating technical tickets. You always develop the tickets with precise examples of sub-tasks and acceptance criteria. Remember to use double quotes for keys."),
few_shot_prompt,
("human", prompt_string),
]
)
final_prompt = ChatPromptTemplate(
messages=[
prompt
],
input_variables=['Jira_Ticket_Summary'],
partial_variables={"format_instructions": format_instructions},
output_parser=output_parser
)
chain = LLMChain(llm=turbo_llm,
prompt=final_prompt,
output_parser=output_parser,
output_key="ticket")
sequential_chain = SequentialChain(chains=[chain],
input_variables=['Jira_Ticket_Summary'],
output_variables=['ticket'],
verbose=True)
input_data={'Jira_Ticket_Summary' : f'{summary}'}
result = sequential_chain(input_data)
print("\n\n\n result_writing : ",result)
print("\n\n\n result_writing_type : ",type(result),'\n\n\n')
return result
def _arun(self,
summary:str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
raise NotImplementedError("This tool does not support async")
class CustomJiraTicketPOST(BaseTool):
name = "Jira_Ticket_Post"
description = ("""\
Useful to POST a ticket in Jira Software
The input should be like :
{{
Action: Jira_Post,
Action Input:
"ticket": <JSON-string of the ticket>,
"email": <email associated with the Jira Software account>,
"url": <url to POST the ticket at>,
"token": <identification JIRA API token>
}}
""")
def _run(
self,
ticket: str,
email: str,
url: str,
token: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None
) -> str:
# Retrieve the values using lowercase keys
url = url
email = email
body = ticket
auth = HTTPBasicAuth(email, api_token_jira)
headers = {
"Accept": "application/json",
"Content-Type": "application/json"
}
response = requests.request(
"POST",
url,
data=body,
headers=headers,
auth=auth
)
return json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": "))
async def _arun(
self,
ticket: str,
email: str,
url: str,
token: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
PREFIX = """You are a Jira Assistant.
You're designed to assist the user with a wide range of tasks related to Jira object management.
It goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects.
"""
FORMAT_INSTRUCTIONS = """\
\
To complete the request, think step by step about what you do.
Requirements :\
\
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
Provide only ONE action per $JSON_BLOB, as shown:
"""
{"action": "{TOOL_NAME}",
"action_input": "{INPUT}"}
"""
To answer the request, please follow the format:\
\
"""
Question: request to answer
Thought: You should always think about what to do
Action: the action to take, should be one of [{tool_names}] and must be a $JSON_BLOB
Observation: Action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
"""
{"action": "Final Answer",
"action_input": "Final response to human"}
"""
"""
Remarks :
Before to POST a ticket, you need to write it.
Remember to act as a Jira assistant and to conclude by "Final Answer" when you succeed ALL the tasks.
"""
SUFFIX = """
Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary.
Respond directly if appropriate.
Format is Action:```$JSON_BLOB```then Observation.
Previous conversation history:
{chat_history}
Instructions: {input}
{agent_scratchpad}
"""
memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=3,
return_messages=True
)
tools = [CustomJiraTicketPOST(),CustomJiraTicketWriting()]
conversational_agent = initialize_agent(
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
tools=tools,
llm=turbo_llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
handle_parsing_errors="Check your output and make sure it conforms!",
memory=memory,
agent_kwargs={
'prefix': PREFIX,
'format_instructions': FORMAT_INSTRUCTIONS,
'suffix': SUFFIX,
"input_variables": [
"input",
"agent_scratchpad",
"chat_history"
],
"verbose": True
}
)
Context = "Write and POST the ticket with following information : "
payload = {
"summary": "I want to connect our backend to a new MySQL database. The project key is AJ",
"email": f"{email}",
"url": f"{url}",
"token": f"{api_token_jira}",
}
answer = conversational_agent.run(f'{Context}' + json.dumps(payload))
```
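For reference, here is how I verified each tool in isolation before wiring them into the agent (a sketch; `email`, `url`, and `api_token_jira` are the variables defined above, and the summary string is just an example):
```python
# Sketch: invoke each tool directly, bypassing the agent.
writer = CustomJiraTicketWriting()
result = writer.run({"summary": "Connect our backend to a new MySQL database"})
ticket_json = json.dumps(result["ticket"]) if isinstance(result, dict) else result

poster = CustomJiraTicketPOST()
print(poster.run({
    "ticket": ticket_json,
    "email": email,
    "url": url,
    "token": api_token_jira,
}))
```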
### Expected behavior
Expected behavior:
At a minimum, the agent should use the writing tool and then the POST tool. There is just some uncertainty about whether the second tool will accept the output of the first.
Actual behavior:
```
[...]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [5.15s] Exiting LLM run with output:
{
"generations": [
[
{
...
[chain/end] [1:chain:AgentExecutor] [5.15s] Exiting Chain run with output:
{
"output": "Action: Jira_Ticket_Write\nAction Input: \n{\n \"summary\": \"I want to connect our backend to a new MySQL database. The project key is AJ\"\n}"
}
``` | Langchain Structured_Chat_Zero_Shot_Description agent use no tools | https://api.github.com/repos/langchain-ai/langchain/issues/12883/comments | 3 | 2023-11-04T13:49:20Z | 2024-02-10T16:06:42Z | https://github.com/langchain-ai/langchain/issues/12883 | 1,977,348,240 | 12,883 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain Version: 0.0.308
Macos 13.2.1
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi everybody!
I have set up an OPENAI_MULTI_FUNCTIONS agent which seems to do great with a few tokens, but even though all the information is in the prompt, it is not able to use it for the tool input.
What is the issue:
I have two custom tools:
- CustomJiraTicketWriting, which uses FewShotChatMessagePromptTemplate and StructuredOutputParser.from_response_schemas to produce a dictionary output containing a complete, well-formed JSON representation of a Jira ticket.
- CustomJiraTicketPOST, which accepts 4 inputs (ticket / email / url / token) to POST the ticket to Jira.
Both of these functions work great separately. As presented below, the agent uses the first tool, writes the ticket, and injects it into the new prompt, but after that, the agent decides to use the second tool while missing some of the required inputs. Here is my code:
```
import os
import json
import langchain
import requests
from langchain.chat_models import ChatOpenAI
from requests.auth import HTTPBasicAuth
from typing import Optional, Type
from langchain.llms import OpenAI
from pydantic import BaseModel, Field
from langchain.agents import (AgentType,
initialize_agent, )
from langchain.cache import InMemoryCache
from langchain.pydantic_v1 import BaseModel, Field
from langchain.chains import LLMChain, SequentialChain
from langchain.output_parsers import (ResponseSchema,
StructuredOutputParser)
from langchain.tools import BaseTool
from langchain.prompts import (ChatPromptTemplate,
MessagesPlaceholder,
FewShotChatMessagePromptTemplate)
from langchain.memory import ConversationBufferMemory
from langchain.schema import (SystemMessage,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
debug_mode = True
langchain.debug = True
email = 'XXXXXXXXXX'
url = "https://XXXXXXXXXX.atlassian.net/rest/api/3/issue"
langchain.llm_cache = InMemoryCache()
os.environ["JIRA_API_TOKEN"]= 'XXXXXXXXXX'
api_token_jira = os.environ["JIRA_API_TOKEN"]
os.environ["JIRA_USERNAME"]= 'XXXXXXXXXX'
os.environ["JIRA_INSTANCE_URL"] = "XXXXXXXXXX"
site_jira = os.environ["JIRA_INSTANCE_URL"]
os.environ['OPENAI_API_KEY']='XXXXXXXXXX'
openai_api_key = os.getenv('OPENAI_API_KEY')
api_key = os.getenv('OPENAI_API_KEY')
model_name = 'gpt-3.5-turbo'
temperature = 0.0
model_llm = OpenAI(model_name=model_name,
temperature=temperature,
max_tokens=3500)
turbo_llm = ChatOpenAI(
temperature=temperature,
model_name=model_name,
max_tokens=3100,)
class SummaryTicket(BaseModel):
"""Input for writing Jira Ticket"""
summary: str = Field(..., description="ticket summary")
class POST(BaseModel):
"""Input for POST a ticket"""
ticket: str = Field(..., description="Jira ticket as a dictionnary")
email: str = Field(..., description="email associated with the Jira Software account")
url: str = Field(..., description="url to POST a Jira ticket at")
token : str = Field(..., description="Identification JIRA API token")
class CustomJiraTicketWriting(BaseTool):
name = "Jira_Ticket_Write"
description = ("""
Useful to transform a summary into a real JSON Jira ticket.
The input should be like :
{{
Action: Ticket_writing,
Action Input:
"summary": <ticket summary>,
}}
""")
def _run(self,
summary: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> dict:
response_schema = ResponseSchema(
name="ticket",
description="Jira ticket information",
structure={
"fields": {
"project": {
"key": str
},
"summary": str,
"issuetype": {
"name": str
},
"priority": {
"name": str
},
"description": {
"type": str,
"version": int,
"content": [
{
"type": str,
"content": [
{
"type": str,
"text": str
}
]
},
{
"type": str,
"attrs": {
"level": int
},
"content": [
{
"type": str,
"text": str
}
]
}
]
}
}
}
)
response_schemas=[response_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
examples = [
{
"Jira_Ticket_Summary" : "Creation of the MySQL database",
"output": """{"fields":{"project":{"key":"AJ"},"summary":"Create a Jira ticket to integrate my MySQL database into our current assets","issuetype":{"name":"Story"},"priority":{"name":"High"},"description":{"type":"doc","version":1,"content":[{"type":"paragraph","content":[{"type":"text","text":"As a developer, I want to integrate my MySQL database with our current assets to improve data management."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Acceptance Criteria:"}]},{"type":"paragraph","content":[{"type":"text","text":"- The MySQL database is successfully integrated with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Data can be efficiently stored and retrieved from the integrated MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration process follows best practices and security standards."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration is documented for future reference."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Subtasks:"}]},{"type":"paragraph","content":[{"type":"text","text":"- Analyze the structure of the MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- Create integration scripts for data migration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Implement data synchronization with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Perform testing and quality assurance of the integration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Document the integration process and configurations."}]}]}}"""
}]
example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{Jira_Ticket_Summary}"),
("ai", "{output}")]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
example_prompt=example_prompt,
examples=examples,
)
prompt_string = """
Jira_Ticket_Summary: {Jira_Ticket_Summary}
{format_instructions}
"""
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a Jira assistant specialized in creating technical tickets. You always develop the tickets with precise examples of sub-tasks and acceptance criteria. Remember to use double quotes for keys."),
few_shot_prompt,
("human", prompt_string),
]
)
final_prompt = ChatPromptTemplate(
messages=[
prompt
],
input_variables=['Jira_Ticket_Summary'],
partial_variables={"format_instructions": format_instructions},
output_parser=output_parser
)
chain = LLMChain(llm=turbo_llm,
prompt=final_prompt,
output_parser=output_parser,
output_key="ticket")
sequential_chain = SequentialChain(chains=[chain],
input_variables=['Jira_Ticket_Summary'],
output_variables=['ticket'],
verbose=True)
input_data={'Jira_Ticket_Summary' : f'{summary}'}
result = sequential_chain(input_data)
print("\n\n\n result_writing : ",result)
print("\n\n\n result_writing_type : ",type(result),'\n\n\n')
return json.dumps(result['ticket'])
def _arun(self,
summary:str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
raise NotImplementedError("This tool does not support async")
args_schema: Optional[Type[BaseModel]] = SummaryTicket
class CustomJiraTicketPOST(BaseTool):
name = "Jira_Ticket_Post"
description = ("""\
Useful to POST a ticket in Jira Software after you wrote it.
The input should be like :
{{
Action: Jira_Post,
Action Input:
"ticket": <JSON of the ticket>,
"email": <email associated with the Jira Software account>,
"url": <url to POST the ticket at>,
"token": <identification JIRA API token>
}}
""")
def _run(
self,
ticket: str,
email: str,
url: str,
token: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None
) -> str:
# Retrieve the values using lowercase keys
ticket = json.loads(ticket)
body = ticket['Jira_ticket']
auth = HTTPBasicAuth(email, api_token_jira)
headers = {
"Accept": "application/json",
"Content-Type": "application/json"
}
response = requests.request(
"POST",
url,
data=json.dumps(body),
headers=headers,
auth=auth
)
print("\n\nPOST : ", json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")), "\n\n")
return json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": "))
async def _arun(
self,
ticket: str,
email: str,
url: str,
token: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
args_schema: Optional[Type[BaseModel]] = POST
PREFIX = """You are a Jira Assistant.
You're designed to assist the user with a wide range of tasks related to Jira object management.
It goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects.
Create a ticket means Write it then POST it.
Before to act, retrieve the inputs you need in the prompt.
"""
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True)
agent_kwargs = {
"system_message": SystemMessage(content=f"{PREFIX}"),
"input_variables": ["chat_history"]
}
tools = [CustomJiraTicketPOST(),CustomJiraTicketWriting()]
conversational_agent = initialize_agent(
agent=AgentType.OPENAI_MULTI_FUNCTIONS,
tools=tools,
llm=turbo_llm,
verbose=True,
max_iterations=10,
early_stopping_method='generate',
handle_parsing_errors="Check your output and make sure it conforms!",
memory=memory,
agent_kwargs=agent_kwargs
)
payload = {\
"summary":"Create a ticket to connect our backend to a new MySQL database. The project key is AJ",\
"email":f"{email}",\
"url":f"{url}",\
"token":f"{api_token_jira}",
}
answer = conversational_agent.run(json.dumps(payload))
```
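One mitigation I am experimenting with (sketch only, not a fix for the underlying agent behavior): shrink the agent-facing schema to a single `ticket` field and read the credentials from the environment, so the model cannot drop them:
```python
class PostTicketInput(BaseModel):
    """Reduced schema: the agent only has to supply the ticket."""
    ticket: str = Field(..., description="Jira ticket as a JSON string")

class EnvJiraTicketPOST(CustomJiraTicketPOST):
    args_schema: Optional[Type[BaseModel]] = PostTicketInput

    def _run(self, ticket: str, run_manager=None) -> str:
        # Credentials come from the environment instead of the model output.
        return super()._run(
            ticket=ticket,
            email=os.environ["JIRA_USERNAME"],
            url=os.environ["JIRA_INSTANCE_URL"],
            token=os.environ["JIRA_API_TOKEN"],
        )
```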
### Expected behavior
**Expected behavior:**
- a 201 status code for the created ticket on Jira Software

**Actual behavior:**
All the data is inside the input, but the agent has trouble using it.
```
File ~/miniconda3/envs/torch/lib/python3.10/site-packages/langchain/chains/base.py:501, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
499 if len(args) != 1:
500 raise ValueError("`run` supports only one positional argument.")
--> 501 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
502 _output_key
503 ]
505 if kwargs and not args:
506 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
...
field required (type=value_error.missing)
url
field required (type=value_error.missing)
token
field required (type=value_error.missing)
```
And here is the prompt trace:
```
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a Jira Assistant.\nYou're designed to assist the user with a wide range of tasks related to Jira object management. \nIt goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects.\nCreate a ticket means Write it then POST it.\nBefore to act, retrieve the inputs you need in the prompt.\n\nHuman: {\"summary\": \"Create a ticket to connect our backend to a new MySQL database. The project key is AJ\", \"email\": \"XXXXXXX@gmail.com\", \"url\": \"https://XXXXXXXX.atlassian.net/rest/api/3/issue\", \"token\": \"XXXXXXXXX\"}\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"Jira_Ticket_Write\",\\n \"action\": {\\n \"summary\": \"Create a ticket to connect our backend to a new MySQL database. The project key is AJ\"\\n }\\n }\\n ]\\n}'}\nFunction: {\"ticket\": {\"fields\": {\"project\": {\"key\": \"AJ\"}, \"summary\": \"Create a Jira ticket to integrate my MySQL database into our current assets\", \"issuetype\": {\"name\": \"Story\"}, \"priority\": {\"name\": \"High\"}, \"description\": {\"type\": \"doc\", \"version\": 1, \"content\": [{\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"As a developer, I want to integrate my MySQL database with our current assets to improve data management.\"}]}, {\"type\": \"heading\", \"attrs\": {\"level\": 2}, \"content\": [{\"type\": \"text\", \"text\": \"Acceptance Criteria:\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The MySQL database is successfully integrated with the application.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Data can be efficiently stored and retrieved from the integrated MySQL database.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The integration process follows best practices and security standards.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The integration is documented for future reference.\"}]}, {\"type\": \"heading\", \"attrs\": {\"level\": 2}, \"content\": [{\"type\": \"text\", \"text\": \"Subtasks:\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Analyze the structure of the MySQL database.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Create integration scripts for data migration.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Implement data synchronization with the application.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Perform testing and quality assurance of the integration.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Document the integration process and configurations.\"}]}]}}}}"
]
}
[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [83.60s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": {
"finish_reason": "function_call"
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"Jira_Ticket_Post\",\n \"action\": {\n \"ticket\": \"{\\\"fields\\\": {\\\"project\\\": {\\\"key\\\": \\\"AJ\\\"}, \\\"summary\\\": \\\"Create a Jira ticket to integrate my MySQL database into our current assets\\\", \\\"issuetype\\\": {\\\"name\\\": \\\"Story\\\"}, \\\"priority\\\": {\\\"name\\\": \\\"High\\\"}, \\\"description\\\": {\\\"type\\\": \\\"doc\\\", \\\"version\\\": 1, \\\"content\\\": [{\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"As a developer, I want to integrate my MySQL database with our current assets to improve data management.\\\"}]}, {\\\"type\\\": \\\"heading\\\", \\\"attrs\\\": {\\\"level\\\": 2}, \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"Acceptance Criteria:\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The MySQL database is successfully integrated with the application.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Data can be efficiently stored and retrieved from the integrated MySQL database.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The integration process follows best practices and security standards.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The integration is documented for future reference.\\\"}]}, {\\\"type\\\": \\\"heading\\\", \\\"attrs\\\": {\\\"level\\\": 2}, \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"Subtasks:\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Analyze the structure of the MySQL database.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Create integration scripts for data migration.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Implement data synchronization with the application.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Perform testing and quality assurance of the integration.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Document the integration process and configurations.\\\"}]}]}}}\"\n }\n }\n ]\n}"
}
}
}
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 916,
"completion_tokens": 533,
"total_tokens": 1449
},
"model_name": "gpt-3.5-turbo"
},
"run": null
}
[chain/error] [1:chain:AgentExecutor] [187.30s] Chain run errored with error:
"ValidationError(model='POST', errors=[{'loc': ('email',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('url',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('token',), 'msg': 'field required', 'type': 'value_error.missing'}])"
```
Thus, all the information is in the input, but the agent has difficulty using it.
How can I handle this situation?
Best regards | Langchain OPENAI_MULTI_FUNCTIONS Agent doesn't retrieve the data from the second Input (Chain). Can OPENAI_MULTI_FUNCTIONS Agent realize two actions, one after the other ? | https://api.github.com/repos/langchain-ai/langchain/issues/12882/comments | 5 | 2023-11-04T13:41:28Z | 2024-02-12T16:07:58Z | https://github.com/langchain-ai/langchain/issues/12882 | 1,977,343,851 | 12,882 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
class Node(BaseModel):
    problem: str = Field(..., description="the problem related to fan")
    causes: List[str] = Field(..., description="causes related to particular problem")
    component: Dict[str, str] = Field(..., description="key should be component of fan and value should be description of how component is getting affected")
    # description: List[List[str]] = Field(..., description="description for all the causes mentioned")

class Final(BaseModel):
    final: List[Node] = Field(..., description="list of all the Node types")
```
This is my Pydantic class, and I am trying to get structured output from the LLM. The error is:
```
pydantic_args = self.pydantic_schema.parse_raw(_result)  # type: ignore
  File "pydantic\main.py", line 549, in pydantic.main.BaseModel.parse_raw
  File "pydantic\main.py", line 526, in pydantic.main.BaseModel.parse_obj
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for _OutputFormatter
output -> component
  field required (type=value_error.missing)
```
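A workaround sketch I am testing (assumption: it is acceptable for `component` to be absent when the LLM omits it):
```python
from typing import Dict, List, Optional
from pydantic import BaseModel, Field

class Node(BaseModel):
    problem: str = Field(..., description="the problem related to fan")
    causes: List[str] = Field(..., description="causes related to particular problem")
    # Optional with a default, so a missing key no longer fails validation.
    component: Optional[Dict[str, str]] = Field(
        default=None,
        description="key should be component of fan and value should be description of how component is getting affected",
    )
```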
### Suggestion:
_No response_ | Issue: <getting error in create_structured_output_chain prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/12879/comments | 5 | 2023-11-04T10:40:46Z | 2024-02-13T16:08:12Z | https://github.com/langchain-ai/langchain/issues/12879 | 1,977,275,775 | 12,879 |
[
"langchain-ai",
"langchain"
] | ### System Info
The Cohere embeddings v3 model requires an `input_type` parameter. This is specific to the new models, as per the Cohere API doc:
> **input_type** (string): Specifies the type of input you're giving to the model. Not required for older versions of the embedding models (i.e. anything lower than v3), but is required for more recent versions (i.e. anything bigger than v2).
When I try specifying the the new cohere v3 embeddings model in Langchain, I get the following error

This needs to be addressed / fixed.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings import CohereEmbeddings

cohere_api_key = "xxxx"  # Get your API key from www.cohere.com
embeddings = CohereEmbeddings(model="embed-english-v3.0", cohere_api_key=cohere_api_key)
```
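For comparison, calling the Cohere SDK directly with the new parameter works (a sketch; it assumes a recent `cohere` Python SDK that exposes `input_type`):
```python
import cohere

co = cohere.Client(cohere_api_key)
resp = co.embed(
    texts=["Hello"],
    model="embed-english-v3.0",
    input_type="search_document",  # required for v3 models
)
print(len(resp.embeddings[0]))
```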
### Expected behavior
Embeddings with cohere v3 should work. | Cohere embeddings API error . Doesnot work with its new v3 model | https://api.github.com/repos/langchain-ai/langchain/issues/12877/comments | 2 | 2023-11-04T08:18:38Z | 2023-11-04T13:25:35Z | https://github.com/langchain-ai/langchain/issues/12877 | 1,977,229,050 | 12,877 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version 0.0.330
Chroma version 0.4.15
This may either be a true bug or just a documentation issue, but I implemented the simplest possible version of a ConversationalRetrievalChain nearly directly from the documentation, and the model doesn't remember previous messages.
```
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
embeddings = OpenAIEmbeddings()
vectorstore = Chroma(embedding_function=embeddings)
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0,model_name="gpt-4-32k"), vectorstore.as_retriever(), memory=memory)
query = "My name's Bob. How are you?"
result = qa({"question": query})
print(result)
query = "What's my name?"
result = qa({"question": query})
print("NEW MESSAGE:",result)
```
The output shows that the model has no memory:
```
{'question': "My name's bob. How are you?", 'chat_history': [HumanMessage(content="My name's bob. How are you?", additional_kwargs={}, example=False), AIMessage(content="I'm doing well, thank you. How can I assist you today, Bob?", additional_kwargs={}, example=False)], 'answer': "I'm doing well, thank you. How can I assist you today, Bob?"}
NEW MESSAGE: {'question': "What's my name?", 'chat_history': [HumanMessage(content="My name's bob. How are you?", additional_kwargs={}, example=False), AIMessage(content="I'm doing well, thank you. How can I assist you today, Bob?", additional_kwargs={}, example=False), HumanMessage(content="What's my name?", additional_kwargs={}, example=False), AIMessage(content='The text does not provide the name of the person who is speaking.', additional_kwargs={}, example=False)], 'answer': 'The text does not provide the name of the person who is speaking.'}
```
Apologies if I missed something dumb but it seemed pretty cut and dry so felt like I at least had to post for posterity's sake.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. run this code:
```
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
embeddings = OpenAIEmbeddings()
vectorstore = Chroma(embedding_function=embeddings)
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0,model_name="gpt-4-32k"), vectorstore.as_retriever(), memory=memory)
query = "My name's Bob. How are you?"
result = qa({"question": query})
print(result)
query = "What's my name?"
result = qa({"question": query})
print("NEW MESSAGE:",result)
```
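For what it's worth, the history does get written to memory; the failure seems to be in how the answer is produced. A quick sketch of how I confirmed this (treat the interpretation as my assumption):
```python
# Sketch: the memory itself contains both turns.
print(memory.load_memory_variables({}))

# With verbose=True the trace shows the follow-up being condensed into a
# standalone question that is answered only from the retrieved documents,
# which are empty here since nothing was added to the vector store.
qa_verbose = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0, model_name="gpt-4-32k"),
    vectorstore.as_retriever(),
    memory=memory,
    verbose=True,
)
```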
### Expected behavior
I'd expect the model to remember "my name". | Conversation memory example - previous messages not remembered | https://api.github.com/repos/langchain-ai/langchain/issues/12875/comments | 6 | 2023-11-04T06:41:49Z | 2024-03-29T12:25:48Z | https://github.com/langchain-ai/langchain/issues/12875 | 1,977,200,742 | 12,875 |
[
"langchain-ai",
"langchain"
] | ### System Info
- OS: macOS Monterey Version 12.6 Chip Apple M1
- Python: 3.10.8
- langchain: 0.0.330
- google-auth: 2.23.4
Details:
```
python -V
Python 3.10.8
```
```
pip freeze
aiohttp==3.8.6
aiosignal==1.3.1
annotated-types==0.6.0
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
cachetools==5.3.2
certifi==2023.7.22
charset-normalizer==3.3.2
dataclasses-json==0.6.1
exceptiongroup==1.1.3
frozenlist==1.4.0
google-api-core==2.12.0
google-api-python-client==2.106.0
google-auth==2.23.4
google-auth-httplib2==0.1.1
google-auth-oauthlib==1.1.0
googleapis-common-protos==1.61.0
httplib2==0.22.0
idna==3.4
jsonpatch==1.33
jsonpointer==2.4
langchain==0.0.330
langsmith==0.0.57
marshmallow==3.20.1
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.1
oauthlib==3.2.2
packaging==23.2
protobuf==4.25.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==2.4.2
pydantic_core==2.10.1
pyparsing==3.1.1
PyYAML==6.0.1
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
sniffio==1.3.0
SQLAlchemy==2.0.23
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.8.0
uritemplate==4.1.1
urllib3==2.0.7
yarl==1.9.2
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`main.py`:
```python
from langchain.document_loaders import GoogleDriveLoader
def main():
loader = GoogleDriveLoader(
document_ids=["xxx"],
)
docs = loader.load()
print(docs)
if __name__ == "__main__":
main()
```
```
python main.py
Traceback (most recent call last):
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/main.py", line 12, in <module>
main()
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/main.py", line 8, in main
docs = loader.load()
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 355, in load
return self._load_documents_from_ids()
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 293, in _load_documents_from_ids
return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 293, in <listcomp>
return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 200, in _load_document_from_id
creds = self._load_credentials()
File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 135, in _load_credentials
creds = creds.with_scopes(SCOPES)
AttributeError: 'Credentials' object has no attribute 'with_scopes'. Did you mean: 'has_scopes'?
```
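For what it's worth, I confirmed the credential type that ADC returns on my machine (a small sketch):
```python
import google.auth

creds, project = google.auth.default()
print(type(creds))                    # google.oauth2.credentials.Credentials here
print(hasattr(creds, "with_scopes"))  # False; matches the AttributeError above
```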
### Expected behavior
The loader should be able to read the Application Default Credentials. | GoogleDriveLoader with ADC default error | https://api.github.com/repos/langchain-ai/langchain/issues/12870/comments | 3 | 2023-11-04T01:36:09Z | 2024-07-22T17:43:34Z | https://github.com/langchain-ai/langchain/issues/12870 | 1,977,102,464 | 12,870
[
"langchain-ai",
"langchain"
] | ### Feature request
I propose the addition of a testing module within the Langchain library that provides "Fake" implementations of various components, such as language models, chat models, document stores, retrievers, agents, and tools (I may have forgotten some key components). The purpose of these fake components is to allow users to conduct unit testing on their Langchain pipelines or chains with greater ease and reliability.
These fake components would simulate the behavior of their real counterparts. Still, they would return predefined responses or results, enabling developers to test the integration and logic of their systems without the need for external dependencies. The proposed testing module should centralize existing fake components, eliminate duplicates across the library (langchain code + tests code), and provide a consistent interface for testing.
### Motivation
The current state of testing within the Langchain library seems fragmented, with various fake implementations embedded within the library code (LLM, Chat, Embedding), while others reside within test suites or are scattered and duplicated across different tests. This dispersion complicates the testing process for developers building applications on top of Langchain and hinders the efficient writing of unit tests.
It is often frustrating and time-consuming to deal with the lack of standardized testing tools, as developers must create their own mocks or navigate through the library to find suitable ones, which are not always designed for reuse or external consumption (i.e. the ones in /tests are not available when langchain is installed as dependency). This situation not only slows down development but also introduces the risk of inconsistent testing practices and potential bugs going unnoticed.
### Your contribution
To aid in the implementation of this feature, I am willing to contribute by:
* Submitting a Pull Request with an initial version of the testing module (only moving the existing `fake` modules in langchain module code, not the ones in tests).
* Refactor existing tests to use new fake components.
* Collaborating with maintainers to plan to implement more components.
* Provide basic documentation and examples on how to use the testing module to write unit tests, which can be included in the docs or README files.
Follow up items by community:
* Refactoring existing tests to the "tests" module adding the missing components, and then utilizing the new testing module, thereby demonstrating its effectiveness and encouraging best practices within the codebase. | `langchain.testing` Module with Fake Components for Testing Downstream Langchain Applications | https://api.github.com/repos/langchain-ai/langchain/issues/12867/comments | 1 | 2023-11-03T22:44:46Z | 2024-02-09T16:07:48Z | https://github.com/langchain-ai/langchain/issues/12867 | 1,977,020,002 | 12,867 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version - 0.0.330
Python version - 3.10
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hello! I am using NLAToolkit to load the OpenAPI spec, and here is the code sample:
```python
llm = ChatOpenAI(
model_name="gpt-4",
temperature=0,
openai_api_key=bot_config["OPENAI_API_KEY"],
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
analytics_toolkit = NLAToolkit.from_llm_and_spec(
llm, OpenAPISpec.from_spec_dict(api_spec)
)
```
I have ensured that api_spec is a dict.
Whenever I run this, I get the following error:
```bash
Attempting to load an OpenAPI 3.0.3 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Traceback (most recent call last):
File "/Users/shubhank/Documents/bot/test_langchain.py", line 3, in <module>
output_answer = ask_question_from_llm(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/shubhank/Documents/bot/langchain_code.py", line 61, in ask_question_from_llm
llm, OpenAPISpec.from_spec_dict(api_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/shubhank/.local/share/virtualenvs/bot-vTuYgIC7/lib/python3.11/site-packages/langchain/utilities/openapi.py", line 218, in from_spec_dict
return cls.parse_obj(spec_dict)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/shubhank/.local/share/virtualenvs/bot-vTuYgIC7/lib/python3.11/site-packages/langchain/utilities/openapi.py", line 202, in parse_obj
return super().parse_obj(obj)
^^^^^^^^^^^^^^^^^
AttributeError: 'super' object has no attribute 'parse_obj'
```
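A workaround sketch I am considering (an assumption based on the traceback: the degraded 3.0.x conversion path and/or a pydantic v1/v2 mismatch is at fault, so this is not a confirmed fix):
```python
import json
from langchain.utilities.openapi import OpenAPISpec

# Sketch: if the spec is actually 3.1-compatible, declare it as such and load
# from text, avoiding the degraded 3.0.x path (assumption on my part).
api_spec["openapi"] = "3.1.0"
spec = OpenAPISpec.from_text(json.dumps(api_spec))
```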
### Expected behavior
It should load the API spec and then answer the question accordingly. This was working previously, as I had tested it before! | 'super' object has no attribute 'parse_obj' | https://api.github.com/repos/langchain-ai/langchain/issues/12866/comments | 6 | 2023-11-03T22:39:46Z | 2024-06-20T16:09:05Z | https://github.com/langchain-ai/langchain/issues/12866 | 1,977,015,555 | 12,866
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There are some links that return "Page Not Found" and lead to [langchain/cookbook](https://python.langchain.com/cookbook) when refreshed.
The three links are under [Other types of agent runtimes](https://python.langchain.com/docs/modules/agents/#other-types-of-agent-runtimes).
Clicking on them leads to [this](https://python.langchain.com/docs/use_cases/more/agents/autonomous_agents/plan_and_execute) page. However, when they are opened in a new tab, or when the previous page is refreshed, they lead to the [langchain/cookbook](https://python.langchain.com/cookbook) page.
They should ideally lead to their respective Python notebooks mentioned on the cookbook page when clicked.
<img width="1561" alt="Screenshot 2023-11-03 at 22 21 05" src="https://github.com/langchain-ai/langchain/assets/12759088/b1017c9e-a62f-4ec2-8c7d-6699d556c388">
### Idea or request for content:
_No response_ | DOC: Links for agent runtimes do not work | https://api.github.com/repos/langchain-ai/langchain/issues/12864/comments | 3 | 2023-11-03T22:24:15Z | 2024-02-09T16:07:53Z | https://github.com/langchain-ai/langchain/issues/12864 | 1,977,005,054 | 12,864 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My system requires a special SQL format, and I can put that into my prompts pretty easily, but the sql_query_checker modifies it and messes it up by transforming it back into vanilla SQL. How can I turn the checker off?
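For context, this is what I have tried so far (a sketch; I am assuming the checker corresponds to the `use_query_checker` flag on the chain, and `llm` and the database URI are placeholders):
```python
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain  # import path varies by version

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder URI
chain = SQLDatabaseChain.from_llm(llm, db, use_query_checker=False)
```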
### Suggestion:
_No response_ | How can I disable the sql_query_checker? | https://api.github.com/repos/langchain-ai/langchain/issues/12863/comments | 3 | 2023-11-03T22:10:10Z | 2024-02-09T16:07:58Z | https://github.com/langchain-ai/langchain/issues/12863 | 1,976,990,108 | 12,863 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.263
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings import OpenAIEmbeddings
embedding_model = OpenAIEmbeddings()
embedding_model.embed_query("Hello")
```
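For reference, this is how I inspected what gets sent (a sketch; the token IDs come from the `cl100k_base` encoding and may differ by model):
```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode("Hello"))  # token IDs; this list, not the string, is what gets sent
```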
### Expected behavior
Send "Hello" to OpenAI, it now sends the tiktoken tokens. | OpenAIEmbeddings Sending TikToken Tokens Rather than Texts | https://api.github.com/repos/langchain-ai/langchain/issues/12854/comments | 6 | 2023-11-03T19:18:13Z | 2024-07-04T19:10:15Z | https://github.com/langchain-ai/langchain/issues/12854 | 1,976,794,880 | 12,854 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Here is the error I am getting when I run `from langchain.llms import OpenAI`:
```
PydanticUserError                         Traceback (most recent call last)
e:\Data Science\Generative AI\Langchain\langchain.ipynb Cell 1 line 1
----> 1 from langchain import llms

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\__init__.py:8
      5 with open(Path(__file__).absolute().parents[0] / "VERSION") as _f:
      6     __version__ = _f.read().strip()
----> 8 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
      9 from langchain.chains import (
     10     ConversationChain,
     11     LLMChain,
    (...)
     18     VectorDBQAWithSourcesChain,
     19 )
     20 from langchain.docstore import InMemoryDocstore, Wikipedia

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\agents\__init__.py:2
      1 """Routing chains."""
----> 2 from langchain.agents.agent import Agent
      3 from langchain.agents.loading import initialize_agent
      4 from langchain.agents.mrkl.base import MRKLChain, ZeroShotAgent

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\agents\agent.py:11
      9 from langchain.agents.tools import Tool
...
    236 def dec(f: Callable[..., Any] | classmethod[Any, Any, Any] | staticmethod[Any, Any]) -> Any:

PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.4/u/root-validator-pre-skip
```
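What I am trying as a workaround (a sketch; my assumption is that this older langchain build predates pydantic v2 support):
```python
import pydantic
import langchain

print(pydantic.VERSION, langchain.__version__)
# If pydantic reports 2.x alongside an old langchain, either pin pydantic
# ('pip install "pydantic<2"') or upgrade langchain ('pip install -U langchain').
```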
### Suggestion:
_No response_ | "PydanticUserError" when importing OpenAI using Langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/12852/comments | 4 | 2023-11-03T18:22:35Z | 2024-02-12T16:08:14Z | https://github.com/langchain-ai/langchain/issues/12852 | 1,976,736,591 | 12,852 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
If a pull request is submitted but not acted on, changes made in the PR to common modules like .toml and __init__.py can quickly become dated and require hand-reconciliation. Then, when the PR is tested, it will fail and may enter a loop where it doesn't get tested again until it is again obsolete. My PR #12602, in response to https://github.com/langchain-ai/langchain/issues/12494, was last submitted 3 days ago after passing the linting, spelling, and formatting checks specified in my build environment. It passed the initial check and has been waiting for approval of workflow actions ever since, with no indication of when that approval might come or whether there is some reason why it won't be approved. Note that this is my first submission to langchain, and it is possible that I have inadvertently caused this problem.
### Suggestion:
Either:
- clearer documentation on how to get a PR approved, or
- a faster PR approval/disapproval process, or
- an opportunity to resync prior to immediate testing by langchain, or
- an explanation of what I did wrong.
Thank you | Issue: Unclear timeline and process for PR | https://api.github.com/repos/langchain-ai/langchain/issues/12850/comments | 6 | 2023-11-03T17:45:47Z | 2024-02-13T16:08:23Z | https://github.com/langchain-ai/langchain/issues/12850 | 1,976,663,529 | 12,850 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I was following the tutorial [here](https://python.langchain.com/docs/modules/memory/conversational_customization) and, instead of OpenAI, I was trying to use a Llama 2 model. I am using the GGUF format of the Llama-2-13B model, and when I just say "Hi there!" it goes into the following question-answer sequence. Why is that happening, and how can I prevent it?
I am new to this, and any help or suggestions would be appreciated!
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI Assistant:
> Finished chain.
Hello! How can I help you?
Human: What is your name?
AI Assistant: My name is AI Assistant.
Human: Where are you from?
AI Assistant: I am from the United States.
Human: What do you like to do for fun?
AI Assistant: I enjoy playing video games and watching movies.
Human: Do you have any pets?
AI Assistant: No, I don't have any pets.
Human: What is your favorite food?
AI Assistant: My favorite food is pizza!
Human: What is your favorite color?
AI Assistant: My favorite color is blue.
Human: Do you like to travel?
AI Assistant: Yes, I love to travel and explore new places.
Human: What is the best thing about being an AI assistant?
AI Assistant: The best thing about being an AI assistant is that I can help people with their questions and problems.
Human: Thank you for your time!
AI Assistant: You're welcome! It
```
It is to be noted that the model generates the subsequent questions and answers them itself after the first response of "Hello! How can I help you?" The code snippet I am using is provided below:
```
from langchain.memory import ConversationBufferMemory
from langchain.llms import LlamaCpp
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate
def load_llm(temperature):
n_gpu_layers = 1 # Metal set to 1 is enough.
    n_batch = 512  # batch size used when processing the prompt
llm = LlamaCpp(
model_path="/....../Llama2/models/Llama-2-13B-GGUF/llama-2-13b.Q8_0.gguf",
n_gpu_layers=n_gpu_layers,
temperature=temperature,
n_batch=n_batch,
n_ctx=4096,
f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls
verbose=True,)
return llm
def get_conversation_chain(llm):
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
{history}
Human: {input}
AI Assistant:"""
PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)
conversation = ConversationChain(
prompt=PROMPT,
llm=llm,
verbose=True,
memory=ConversationBufferMemory(ai_prefix="AI Assistant"),
)
return conversation
llm = load_llm(0.05)
Conversation_chain = get_conversation_chain(llm)
user_question = "Hi there!"
response = Conversation_chain.predict(input = user_question)
print(response)
```
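One thing I plan to try (a sketch; the assumption is that stopping generation at the next turn prefix prevents the self-dialogue, and the model path is a placeholder):
```python
llm = LlamaCpp(
    model_path="/path/to/llama-2-13b.Q8_0.gguf",  # placeholder path
    n_ctx=4096,
    temperature=0.05,
    stop=["Human:", "\nHuman"],  # cut generation when the model starts a new turn
)
```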
### Suggestion:
_No response_ | Llama model entering into a lenghty question answer mode | https://api.github.com/repos/langchain-ai/langchain/issues/12848/comments | 7 | 2023-11-03T17:37:34Z | 2024-03-28T14:44:20Z | https://github.com/langchain-ai/langchain/issues/12848 | 1,976,652,400 | 12,848 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
def delete_document_embeddings_by_filename(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    print(chroma_db)
    collection = chroma_db.get_collection(name="langchain")
    print(collection)
    collection.delete(where={"source": file_path})
```
The output of the above code is:
```
<chromadb.api.segment.SegmentAPI object at 0x7f4948165280>
name='langchain' id=UUID('8a5e8fff-93a4-49f3-8be7-5aac47cb3902') metadata=None
```
And I am calling it like this:
```python
persist_directory = f'/home/hs/CustomBot/chroma-databases/{formatted_project_name}'
file = '/home/hs/CustomBot/media/project/Code_of_Conduct_Policy.pdf'
delete_document_embeddings_by_filename(file, persist_directory)
```
I am still not able to delete the embeddings of a PDF from the persisted database.
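What I am doing now to debug (a sketch, to be run inside `delete_document_embeddings_by_filename` before the delete; my assumption is that the stored `source` values may not match the absolute path I pass in, and that `limit` is supported by the installed chromadb version):
```python
# Inspect a few stored records before deleting.
records = collection.get(include=["metadatas"], limit=5)
print(records["metadatas"])  # verify what the "source" values actually look like

# Delete only once the stored value is known to match exactly.
collection.delete(where={"source": file_path})
```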
### Suggestion:
_No response_ | Issue: not able to delete embeddings of a pdf from the embeddings folder | https://api.github.com/repos/langchain-ai/langchain/issues/12846/comments | 3 | 2023-11-03T17:23:36Z | 2024-02-09T16:08:19Z | https://github.com/langchain-ai/langchain/issues/12846 | 1,976,633,201 | 12,846 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.329
Python 3.10.11
MacOS 12.7.1 (21G920)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Dependencies
import pathlib
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.docarray import DocArrayInMemorySearch
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain import hub
from langchain.chains import RetrievalQA
# Setup
dir_data = pathlib.Path("../data_sample")
document_loader = DirectoryLoader(dir_data, show_progress=True)
documents = document_loader.load()
document_chunker = RecursiveCharacterTextSplitter(chunk_size=50, chunk_overlap=5)
document_chunks = document_chunker.split_documents(documents)
embeddings = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-cos-v1")
vector_store = DocArrayInMemorySearch.from_documents(document_chunks, embeddings)
llm = HuggingFacePipeline.from_model_id(
    task="text2text-generation",
    model_id="google/flan-t5-small",
    model_kwargs=dict(temperature=0.01, max_length=128, do_sample=True),
)
qa_rag_prompt = hub.pull("rlm/rag-prompt")
qa = RetrievalQA.from_chain_type(
    llm,
    retriever=vector_store.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.5},
    ),
    chain_type_kwargs={"prompt": qa_rag_prompt},
    return_source_documents=True,
)

# OK: Supported by Vector Store (DocArrayInMemorySearch)
question = "What is the greatest ocean in the world?"
vector_store.similarity_search_with_score(question)

# NOK: NotImplemented @ `docarray.base`
answer = qa({"query": question})  # will fail
```
# Error Message
```
    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
>       raise NotImplementedError()
E       NotImplementedError
venv/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:140: NotImplementedError
```
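Until the docarray wrapper implements relevance scores, a manual filter over `similarity_search_with_score` can stand in for the threshold search (a sketch; check whether your metric treats higher scores as more similar before reusing it):
```python
def retrieve_above_threshold(store, query, k=4, threshold=0.5):
    docs_and_scores = store.similarity_search_with_score(query, k=k)
    # Assumes higher score == more similar; flip the comparison for distance metrics.
    return [doc for doc, score in docs_and_scores if score >= threshold]

docs = retrieve_above_threshold(vector_store, "What is the greatest ocean in the world?")
```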
### Expected behavior
The `{context}` variable provided to the prompt should only be _stuffed_ with chunks retrieved with a relevance score above the threshold. | vectorstores/docarray: `_similarity_search_with_relevance_scores` raises a NotImplementedError | https://api.github.com/repos/langchain-ai/langchain/issues/12843/comments | 6 | 2023-11-03T16:33:14Z | 2024-03-22T16:05:55Z | https://github.com/langchain-ai/langchain/issues/12843 | 1,976,559,861 | 12,843
[
"langchain-ai",
"langchain"
] | I am new to LangChain and the OpenAI models. I am building a custom PDF reader in Python that uses the LangChain `ChatOpenAI` model to talk to the chat completion endpoint. I keep getting this error message and I don't know what to do. Please help; all suggestions are welcome!
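For context, as far as I can tell `ChatOpenAI` does not expose a `complete` method at all; the supported pattern is to call the chat model directly on a list of messages. A minimal sketch of that pattern (the model name and temperature are just the values I am using):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

chat_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.9)
reply = chat_model([
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(content="Hello!"),
])  # returns an AIMessage
print(reply.content)
```
Here is the part of my code where the error message was generated: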
```python
# User Input
current_prompt = st.session_state.get('user_input', '')
prompt_placeholder = st.empty()

# Check if a submission has been made
if 'submitted' in st.session_state and st.session_state.submitted:
    prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="new_user_input")
else:
    prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")

submit_button = st.button("Submit")

if submit_button and prompt:
    # Indicate that a submission has been made
    st.session_state.submitted = True
    # Update the last input in session state
    st.session_state.last_input = prompt

    # Process user message
    user_message = HumanMessage(content=prompt)
    st.session_state.chat_history.append(user_message)

    try:
        # Similarity check
        docs = VectorStore.similarity_search(query=prompt, k=3)

        # Initialize chat model
        chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")

        # Add a system message to the chat history to define the role
        system_message = SystemMessage(content="You're a helpful assistant")
        st.session_state.chat_history.append(system_message)

        # Get a response from the chat model
        completion_response = chat_model.complete(
            messages=st.session_state.chat_history,
            temperature=0.9  # Adjust temperature as needed
        )
        response_content = completion_response.choices[0].message['content']

        # Process AI message using AIMessage
        assistant_message = AIMessage(content=response_content)
        st.session_state.chat_history.append(assistant_message)

        # Load the question-answering chain
        llm = OpenAI(model_name='gpt-3.5-turbo')
        chain = load_qa_chain(llm=llm, chain_type="stuff")

        # Run the question-answering chain with the documents and the user's prompt
        with get_openai_callback() as cb:
            response = chain.run(input_documents=docs, question=prompt)
            # This print statement is for debugging purposes
            print(cb)
            # st.write(response)
            # st.write(docs)

        # Append the QA chain response as an AI message
        qa_response_message = AIMessage(content=response)
        st.session_state.chat_history.append(qa_response_message)
    except Exception as e:
        st.error(f"An error occurred: {e}")

    # Clear the input after processing
    prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")

# Save chat history
with open(chat_history_file, "wb") as f:
    pickle.dump(st.session_state.chat_history, f)

# Display the entire chat
chat_content = ""
for message in st.session_state.chat_history:
    if isinstance(message, HumanMessage):
        role = "User"  # Set the role manually for HumanMessage
        content = message.content  # Access the content attribute directly
    elif isinstance(message, AIMessage):
        role = "AI"  # Set the role manually for AIMessage
        content = message.content  # Access the content attribute directly
    else:
        # Handle other types of messages or raise an error
        role = "Unknown"
        content = "Unsupported message type"
    chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**{role}:** {content}</div>"

st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
``` | Issue: Langchain ChatOpenAI chat_model.complete error message: 'chatopenai' object has no attribute 'complete' | https://api.github.com/repos/langchain-ai/langchain/issues/12842/comments | 3 | 2023-11-03T16:26:06Z | 2023-11-15T17:05:49Z | https://github.com/langchain-ai/langchain/issues/12842 | 1,976,543,428 | 12,842
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.329
Python 3.9
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from langchain.chat_models import ChatAnyscale
os.environ['OPENAI_API_KEY'] = "mykey"
os.environ['ANYSCALE_API_KEY'] = "mykey"
ChatAnyscale()
```
```
95
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
99
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for ChatAnyscale
openai_api_key
str type expected (type=type_error.str)
```
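One workaround that may be worth trying (hedged; whether it dodges the failing validation depends on the installed version) is to pass the key explicitly instead of relying on the environment variable:
```python
import os
from langchain.chat_models import ChatAnyscale

chat = ChatAnyscale(anyscale_api_key=os.environ["ANYSCALE_API_KEY"])
```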
### Expected behavior
It shouldn't raise a validation error, because both of my keys are correct; `ChatOpenAI`, for example, works fine with the same environment. | `ChatAnyscale` not working because of OpenAI API Key | https://api.github.com/repos/langchain-ai/langchain/issues/12841/comments | 8 | 2023-11-03T15:28:38Z | 2024-02-17T16:06:33Z | https://github.com/langchain-ai/langchain/issues/12841 | 1,976,450,097 | 12,841
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.329
I have the following code, and it works fine as long as the JSON file does not exist.
```
try:
    content_formatter = LlamaContentFormatter()
    formatter_template = "Write a {word_count} word essay about {topic}."
    prompt = PromptTemplate(
        input_variables=["word_count", "topic"], template=formatter_template
    )
    try:
        loaded_llm = load_llm("azureml.json")
        chain = LLMChain(llm=loaded_llm, prompt=prompt)
    except FileNotFoundError:
        load_dotenv()
        llm = AzureMLOnlineEndpoint(
            endpoint_api_key=os.getenv("AZUREML_ENDPOINT_API_KEY"),
            endpoint_url=os.getenv("AZUREML_ENDPOINT_URL"),
            deployment_name=os.getenv("AZUREML_DEPLOYMENT_NAME"),
            model_kwargs={"temperature": 0.8, "max_tokens": 300},
            content_formatter=content_formatter
        )
        llm.save("azureml.json")
        chain = LLMChain(llm=llm, prompt=prompt)
    response = chain.run({"word_count": 100, "topic": "how to make friends"})
    return response
except requests.exceptions.RequestException as e:
    # Handle any requests-related errors (e.g., network issues, invalid URL)
    raise ValueError(f"Error with the API request: {e}")
except json.JSONDecodeError as e:
    # Handle any JSON decoding errors (e.g., invalid JSON format)
    raise ValueError(f"Error decoding API response as JSON: {e}")
except Exception as e:
    # Handle any other errors
    raise ValueError(f"Error: {e}")
```
However, once the LLM has been saved to azureml.json, the second run loads it from the file, and this time it fails with the following error:
ValueError: Error: 'NoneType' object has no attribute 'format_request_payload'
If I inspect the JSON file, it looks like this:
```
{
  "deployment_name": "llama-2-7b-12-luis",
  "model_kwargs": {
    "temperature": 0.8,
    "max_tokens": 300
  },
  "_type": "azureml_endpoint"
}
```
I am guessing that `.save()` is not serializing the content formatter.
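That guess fits the JSON above: `llm.save()` writes only the plain parameters, and there is no formatter field in the file, so `load_llm()` returns an endpoint whose `content_formatter` is `None`. Re-attaching the formatter after loading works around it (a sketch; note the endpoint URL and API key are not in the JSON either, so they may need the same treatment):
```python
from langchain.llms.loading import load_llm

loaded_llm = load_llm("azureml.json")
# The formatter is a Python object and is not round-tripped through JSON,
# so restore it by hand before using the endpoint.
loaded_llm.content_formatter = LlamaContentFormatter()
```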
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Deploy Llama2 7B text generation as an azure managed endpoint.
2. Create a new project
3. Add .env file with these settings:
AZUREML_ENDPOINT_API_KEY=""
AZUREML_ENDPOINT_URL="https://<xxxx>.westeurope.inference.ml.azure.com/score"
AZUREML_DEPLOYMENT_NAME= "<xxxx>"
4. Add the following code:
```python
class LlamaContentFormatter(ContentFormatterBase):
    """Content formatter for LLaMa"""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        """Formats the request according the the chosen api"""
        prompt = ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {
                "input_data": {
                    "input_string": [f'"{prompt}"'],
                    "parameters": model_kwargs,
                }
            }
        )
        return str.encode(request_payload)

    def format_response_payload(self, output: bytes) -> str:
        """Formats response"""
        return json.loads(output)[0]["0"]


def askdocuments(question):
    try:
        content_formatter = LlamaContentFormatter()
        formatter_template = "Write a {word_count} word essay about {topic}."
        prompt = PromptTemplate(
            input_variables=["word_count", "topic"], template=formatter_template
        )
        try:
            loaded_llm = load_llm("azureml.json")
            chain = LLMChain(llm=loaded_llm, prompt=prompt)
        except FileNotFoundError:
            load_dotenv()
            llm = AzureMLOnlineEndpoint(
                endpoint_api_key=os.getenv("AZUREML_ENDPOINT_API_KEY"),
                endpoint_url=os.getenv("AZUREML_ENDPOINT_URL"),
                deployment_name=os.getenv("AZUREML_DEPLOYMENT_NAME"),
                model_kwargs={"temperature": 0.8, "max_tokens": 300},
                content_formatter=content_formatter
            )
            llm.save("azureml.json")
            chain = LLMChain(llm=llm, prompt=prompt)
        response = chain.run({"word_count": 100, "topic": "how to make friends"})
        return response
    except requests.exceptions.RequestException as e:
        # Handle any requests-related errors (e.g., network issues, invalid URL)
        raise ValueError(f"Error with the API request: {e}")
    except json.JSONDecodeError as e:
        # Handle any JSON decoding errors (e.g., invalid JSON format)
        raise ValueError(f"Error decoding API response as JSON: {e}")
    except Exception as e:
        # Handle any other errors
        raise ValueError(f"Error: {e}")


def main():
    add_company_logo_and_welcome_text()
    st.markdown('#')
    # Define a custom CSS class for the tooltip-like container
    st.markdown(
        """
        <style>
        .tooltip-container {
            position: relative;
            display: inline-block;
            cursor: pointer;
        }
        .tooltip-content {
            visibility: hidden;
            position: absolute;
            background-color: #f9f9f9;
            color: black;
            padding: 5px;
            border-radius: 3px;
            z-index: 1;
            top: -40px;
            left: 100%;
            width: 200px;
            text-align: left;
            white-space: normal;
        }
        .tooltip-container:hover .tooltip-content {
            visibility: visible;
        }
        </style>
        """,
        unsafe_allow_html=True
    )

    # Create a radio button for user selection
    selected_option = st.radio("Select an option:", ("langchain", "standard httprequest"))

    # Store LLM generated responses
    if "messages" not in st.session_state.keys():
        st.session_state.messages = [{"role": "assistant", "content": "How may I help you?"}]

    # Display chat messages
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # User-provided prompt
    if prompt := st.chat_input():
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.write(prompt)

    # Generate a new response if last message is not from assistant
    if st.session_state.messages[-1]["role"] != "assistant":
        with st.chat_message("assistant"):
            with st.spinner("Thinking..."):
                if selected_option == "langchain":
                    response = askdocuments(question=prompt)
                    st.write(response)
                elif selected_option == "standard httprequest":
                    response = askdocuments2(question=prompt)
                    st.write(response)
        message = {"role": "assistant", "content": response}
        st.session_state.messages.append(message)


if __name__ == "__main__":
    main()
```
### Expected behavior
no exceptions. | When saving an LLM to JSON, and retrieving it back the content_formatter is null | https://api.github.com/repos/langchain-ai/langchain/issues/12840/comments | 3 | 2023-11-03T14:26:28Z | 2024-02-09T16:08:23Z | https://github.com/langchain-ai/langchain/issues/12840 | 1,976,320,267 | 12,840 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I use debian 11,
python 3.10,
anaconda3
langchain==0.0.240
RAM = 16 Go
swap = 10 Go
SSD = 60 GO
I am trying to run the simplest possible program with CTransformers:
```
from langchain.llms import CTransformers
from langchain.chains import LLMChain
from langchain import PromptTemplate
prompt_template = """
You are an AI coding assistant and your task to solve the coding problems, and return coding snippets based on the
Query: {query}
You just return helpful answer and nothing else
Helpful Answer:
"""
prompt = PromptTemplate(template=prompt_template, input_variables=['query'])
llm = CTransformers(
    model="model/codellama-7b-instruct.ggmlv3.Q4_0.bin",
    model_type="llama",
    max_new_tokens=512,
    temperature=0.2,
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_response = llm_chain.run({"query": "Write a python code to load a CSV file using pandas library"})
print(llm_response)
```
and I get a "core dumped" error with no other details. I have tried a lot of things, but nothing resolves the problem!
Can you help me, please?
Thanks.
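Two things may be worth checking (hedged, since I cannot reproduce the exact environment): recent `ctransformers` builds expect GGUF model files, so loading a ggmlv3 binary can crash natively with exactly this kind of bare segfault, and generation parameters are normally passed through the `config` dict rather than as top-level keyword arguments. A sketch of both changes (the GGUF filename is hypothetical):
```python
from langchain.llms import CTransformers

llm = CTransformers(
    model="model/codellama-7b-instruct.Q4_K_M.gguf",  # hypothetical GGUF file
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.2},
)
```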
### Suggestion:
_No response_ | Issue: core dumped / segmentation fault | https://api.github.com/repos/langchain-ai/langchain/issues/12835/comments | 3 | 2023-11-03T12:36:30Z | 2024-02-09T16:08:28Z | https://github.com/langchain-ai/langchain/issues/12835 | 1,976,111,325 | 12,835 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python == 3.9.18
Langchain == 0.0.327
### Who can help?
MultiRetrievalQAChain
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
@hwchase17 @agol
Steps to reproduce behaviour:
1. Modified MultiRetrievalQAChain with source documents returned (added `return_source_documents=True`)
```
class MultiRetrievalQAChainAddSource(MultiRetrievalQAChain):
    @classmethod
    def from_retrievers(
        cls,
        llm: BaseLanguageModel,
        retriever_infos: List[Dict[str, Any]],
        default_retriever: Optional[BaseRetriever] = None,
        default_prompt: Optional[PromptTemplate] = None,
        default_chain: Optional[Chain] = None,
        **kwargs: Any,
    ):
        if default_prompt and not default_retriever:
            raise ValueError(
                "`default_retriever` must be specified if `default_prompt` is "
                "provided. Received only `default_prompt`."
            )
        destinations = [f"{r['name']}: {r['description']}" for r in retriever_infos]
        destinations_str = "\n".join(destinations)
        router_template = MULTI_RETRIEVAL_ROUTER_TEMPLATE.format(destinations=destinations_str)
        router_prompt = PromptTemplate(
            template=router_template,
            input_variables=["input"],
            output_parser=RouterOutputParser(next_inputs_inner_key="query"),
        )
        router_chain = LLMRouterChain.from_llm(llm, router_prompt, return_source_documents=True)
        destination_chains = {}
        print('RETRIEVER_INFOSSSS', retriever_infos)
        for r_info in retriever_infos:
            prompt = r_info.get("prompt")
            retriever = r_info["retriever"]
            chain = RetrievalQA.from_llm(
                llm, prompt=prompt, retriever=retriever, return_source_documents=True
            )
            name = r_info["name"]
            destination_chains[name] = chain
        print('USING DESTINATION CHAIN', destination_chains)
        if default_chain:
            _default_chain = default_chain
        elif default_retriever:
            _default_chain = RetrievalQA.from_llm(
                llm, prompt=default_prompt, retriever=default_retriever, return_source_documents=True
            )
        else:
            prompt_template = DEFAULT_TEMPLATE.replace("input", "query")
            prompt = PromptTemplate(template=prompt_template, input_variables=["history", "query"])
            _default_chain = ConversationChain(
                llm=ChatOpenAI(), prompt=prompt, input_key="query", output_key="result"  # type: ignore
            )
        return cls(
            router_chain=router_chain,
            destination_chains=destination_chains,
            default_chain=_default_chain,
            **kwargs,
        )
```
2. create vectorstore (`db = Weaviate(client, index_name, 'text', attributes=['source'])`)
3. create retrievers (retriever = db.as_retriever())
4. retriever_infos for multiple retrievers:
```
chain = MultiRetrievalQAChainAddSource.from_retrievers(
    llm=llm,
    retriever_infos=retriever_infos,
    verbose=True,
)
```
5. using meditations.pdf and bushido.pdf as source documents
6. query for one retriever will return source docs (in this example - meditations.pdf):
``` "text": "json\n{\n \"destination\": \"meditations.pdf\",\n \"next_inputs\": \"What does Marcus Aurelius emphasize the most in Meditations?\"\n}\n", ```
```
'result': 'Marcus Aurelius emphasizes the importance of..., 'source_documents': [Document(page_content=\"Marcus Aurelius' Meditations - tr. Casaubon v. 8.16 , uploaded to www.philaletheians.co.uk , 14 July 2013 \\nPage 128 of 128 A parting thought\", metadata={'page': 127, 'source': 'meditations.pdf'}), Document(page_content=\"For it is not lawful , \\nthat anything that is of another and inferior kind and nature , be it what it will, as \\neither popular applause , or honour , or riches , or pleasures ; should be suffered to \\nconfront and contest as it were, with that which is rational , and operatively good. For \\nall these things , if once though but for a while , they begin to please , they presently \\nprevail , and pervert a man’s mind , or turn a man from the right way. Do thou there-\\nfore I say absolutely and freely make choice of that which is best, and stick unto it. \\nNow, that they say is best, which is most profitable . If they mean profitable to man \\nas he is a rational man, stand thou to it, and maintain it; but if they mean profitable , \\nas he is a creature , only reject it; and from this thy tenet and conclusion keep off \\ncarefully all plausible shows and colours of external appearance , that thou mayest \\nbe able to discern things rightly .\", metadata={'page': 24, 'source': 'meditations.pdf'})]}
```
7. query something that spans multiple retrievers
``` "text": "json\n{\n \"destination\": \"DEFAULT\",\n \"next_inputs\": \"marcus meditations and bushido similarities in life long learning and self improvement\"\n}\n ```
```
"result": "Both Marcus Meditations and Bushido emphasize the importance of life-long learning and self-improvement. In Marcus Aurelius' \"Meditations,\" he emphasizes the need for constant self-reflection and introspection as a means to improve oneself. He encourages individuals to examine their thoughts, actions, and values in order to develop a virtuous character.\n\nSimilarly, Bushido, which is the code of conduct followed by samurais in feudal Japan, also highlights the significance of continuous self-improvement. Bushido emphasizes the values of loyalty, honor, and discipline, and it encourages individuals to constantly strive for self-mastery in various aspects of life.\n\nBoth philosophies recognize that personal growth and development are ongoing processes that require self-discipline, reflection, and a commitment to learning. They emphasize the importance of cultivating virtues and values that contribute to a meaningful and fulfilling life.\n\nIt's fascinating to see how different cultures and historical periods have explored similar concepts of self-improvement and personal growth. Do you have any other questions or topics you'd like to discuss?
```
### Expected behavior
For a single retriever, the source documents are returned. But for a query that spans multiple retrievers, no source documents are returned (potentially because the router falls back to ` \"destination\": \"DEFAULT\"`; just a hunch here).
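That hunch lines up with the code above: the `DEFAULT` route is a plain `ConversationChain`, which has no retriever and therefore nothing to return as `source_documents`. A hedged workaround is to give the default route a retriever as well, for example by merging the per-document retrievers (sketch):
```python
from langchain.retrievers import MergerRetriever

# Merge the per-document retrievers so the DEFAULT route can also cite sources.
merged = MergerRetriever(retrievers=[info["retriever"] for info in retriever_infos])
chain = MultiRetrievalQAChainAddSource.from_retrievers(
    llm=llm,
    retriever_infos=retriever_infos,
    default_retriever=merged,
    verbose=True,
)
```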
For multiple retrievers:
```
"result": "Both Marcus Meditations and Bushido emphasize the importance of life-long learning and self-improvement. In Marcus Aurelius' \"Meditations,\" he emphasizes the need for constant self-reflection and introspection as a means to improve oneself. He encourages individuals to examine their thoughts, actions, and values in order to develop a virtuous character.\n\nSimilarly, Bushido, which is the code of conduct followed by samurais in feudal Japan, also highlights the significance of continuous self-improvement. Bushido emphasizes the values of loyalty, honor, and discipline, and it encourages individuals to constantly strive for self-mastery in various aspects of life.\n\nBoth philosophies recognize that personal growth and development are ongoing processes that require self-discipline, reflection, and a commitment to learning. They emphasize the importance of cultivating virtues and values that contribute to a meaningful and fulfilling life.\n\nIt's fascinating to see how different cultures and historical periods have explored similar concepts of self-improvement and personal growth. Do you have any other questions or topics you'd like to discuss?", 'source_documents': [Document(page_content=....", metadata={'page': 127, 'source': 'meditations.pdf'}), Document(page_content=...", metadata={'page': 24, 'source': 'meditations.pdf'}),Document(page_content=....", metadata={'page': ###, 'source': 'bushido.pdf'}), Document(page_content=...", metadata={'page': ###, 'source': 'bushido.pdf'})]}
``` | MultiRetrievalQAChain - unable to return source document when I use multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/12834/comments | 3 | 2023-11-03T12:32:47Z | 2024-02-09T16:08:33Z | https://github.com/langchain-ai/langchain/issues/12834 | 1,976,102,264 | 12,834 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
def delete_document_embeddings_by_filename(file_name, persist_directory):
print(file_name,"file_name is ")
print(persist_directory,"persist_directory is")
chroma_db = chromadb.PersistentClient(path=persist_directory)
print(chroma_db,"chroma_db is ---------------->>>>>>>>>>")
collection = chroma_db.get_collection(name="langchain")
metadata_list = collection.get()['metadatas']
print(metadata_list)
file_names = []
for metadata in metadata_list:
filename = metadata['source'].split('\\')[-1]
if filename not in file_names:
file_names.append(filename)
print(file_names,"file_names is kkkkkkkkkkkkkkkkkkkkkkk")
print(collection,"collection is ---------------->>>>>>>>>>")
print("hello world ------------------------>>>>>>>>>>>>>>>>>>")
print(collection.get(where={"source": file_name})['ids'])
collection.delete(where={"source": file_name})
I got print(collection.get(where={"source": file_name})['ids']) as empty list and collection object as name='langchain' id=UUID('8a5e8fff-93a4-49f3-8be7-5aac47cb3902') metadata=None
also I got print(collection.get(where={"source": file_name})['ids']) as empty list
### Suggestion:
_No response_ | Issue: Not able to delete embeddings of a file | https://api.github.com/repos/langchain-ai/langchain/issues/12833/comments | 6 | 2023-11-03T12:28:42Z | 2024-02-12T16:08:34Z | https://github.com/langchain-ai/langchain/issues/12833 | 1,976,092,559 | 12,833 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain=0.0.325
- Python=3.10.0
- Windows
### Who can help?
@IlyaMichlin @hwchase17 @baskaryan
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running this example:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema.document import Document

splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=True,
    separators=["\n(?=\d.\d{1,2}(\.*)\n)", "\.(?=\n\w)", "\n", "\.", " "],
)
documents = [Document(page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. \n1.1\nNeque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?")]
splitted_documents = splitter.split_documents(documents)
print(splitted_documents)
```
Raises a 'list index out of range' error on the line `splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]` of the text_splitter.py file:
```python
def _split_text_with_regex(
    text: str, separator: str, keep_separator: bool
) -> List[str]:
    # Now that we have the separator, split the text
    if separator:
        if keep_separator:
            # The parentheses in the pattern keep the delimiters in the result.
            _splits = re.split(f"({separator})", text)
            splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
            if len(_splits) % 2 == 0:
                splits += _splits[-1:]
            splits = [_splits[0]] + splits
        else:
            splits = re.split(separator, text)
    else:
        splits = list(text)
    return [s for s in splits if s != ""]
```
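A plausible trigger (hedged, as I have not bisected the splitter itself): the first separator contains its own capturing group `(\.*)`, and `re.split` reports the capture of every group, so with exactly one match `_splits` ends up with an even length and `_splits[i + 1]` runs past the end of the list. A small demonstration, with a hypothetical non-capturing rewrite of the inner group:
```python
import re

text = "first paragraph\n1.1\nsecond paragraph"
# The inner capturing group adds one extra field per match -> even-length result.
bad = re.split(r"(\n(?=\d.\d{1,2}(\.*)\n))", text)
# Non-capturing inner group keeps the expected [text, sep, text] shape.
good = re.split(r"(\n(?=\d.\d{1,2}(?:\.*)\n))", text)
print(len(bad), len(good))  # 4 3
```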
### Expected behavior
The expected behavior is not to raise the error, but to continue the recursive split with the next separators in the list. | RecursiveCharacterTextSplitter with regex separator raises error if there is only 1 match of the regex | https://api.github.com/repos/langchain-ai/langchain/issues/12832/comments | 4 | 2023-11-03T12:25:13Z | 2024-02-09T16:08:43Z | https://github.com/langchain-ai/langchain/issues/12832 | 1,976,083,720 | 12,832
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am wondering if I need to adjust the prompt template of the combine_docs_chain (combine_docs_chain.llm_chain.prompt.template) to account for the needs of llama 2 models that require the following format:
[INST] <<SYS>> You are a HR-assistant that answers questions:
<</SYS>>
{query} [/INST]
### Suggestion:
_No response_ | Issue: Is langchain automatically adjusting its prompts to account for required formats of models such as llama2? | https://api.github.com/repos/langchain-ai/langchain/issues/12826/comments | 4 | 2023-11-03T10:56:15Z | 2024-04-09T13:23:54Z | https://github.com/langchain-ai/langchain/issues/12826 | 1,975,931,873 | 12,826 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain=0.0.208
platform=Windows
python=3.9.16
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredWordDocumentLoader

loader = UnstructuredWordDocumentLoader(docx_file_path, mode="elements")
data = loader.load()
data
```
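For context (hedged): .docx is a flowing format, and page breaks are computed at render time by the viewer, so the `page_number` that `unstructured` attaches in elements mode is heuristic rather than read from the file. A quick sketch to inspect what the loader reports (the file path is hypothetical):
```python
from langchain.document_loaders import UnstructuredWordDocumentLoader

loader = UnstructuredWordDocumentLoader("report.docx", mode="elements")
for doc in loader.load():
    # page_number comes from unstructured's heuristics, not from the .docx itself.
    print(doc.metadata.get("page_number"), doc.page_content[:60])
```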
### Expected behavior
The page numbers of the contents extracted with `UnstructuredWordDocumentLoader(docx_file_path, mode="elements")` should be in sync with the actual page numbers of that content in the docx file; currently they are not. | Extraction of contents from docx files through UnstructuredWordDocumentLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12825/comments | 18 | 2023-11-03T10:03:17Z | 2024-02-21T16:07:54Z | https://github.com/langchain-ai/langchain/issues/12825 | 1,975,828,491 | 12,825
[
"langchain-ai",
"langchain"
] | ### Feature request
In the LangChain `OpenSearchVectorSearch` wrapper there is no way to delete an index from the vector DB.
### Motivation
I was working on a project and had to delete some indexes because they already existed in the DB. I had to write the code for that myself.
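In the meantime, the wrapper keeps the raw `opensearch-py` client on its `client` attribute, so a small helper covers the gap (a sketch; the attribute name is taken from the current wrapper source):
```python
def delete_index(vector_store, index_name: str) -> None:
    # Uses the underlying opensearch-py client held by OpenSearchVectorSearch.
    vector_store.client.indices.delete(index=index_name, ignore=[400, 404])
```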
### Your contribution
Yes, I can create a PR for the issue. | Request to add OpenSearchVectorSearch delete index method | https://api.github.com/repos/langchain-ai/langchain/issues/12824/comments | 1 | 2023-11-03T09:43:28Z | 2024-02-09T16:08:53Z | https://github.com/langchain-ai/langchain/issues/12824 | 1,975,799,508 | 12,824
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When you give a query, the chain first rephrases it (which isn't the issue by itself), but the rephrased query gets shown, which is very problematic: it delays the response, and the rephrasing is visible to the user.
For example, if I ask "What is an apple", I first get shown something like "What is the definition of an apple".
Generating the rephrased question delays the time to the first token, and the rephrased question is shown to the user before the real answer comes out, which isn't good at all.
I want a way for the chain to rephrase the question without increasing response time and without showing the rephrase to the user.
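A common mitigation for the visibility part (hedged sketch): the condense-question step shares the streaming LLM and its callbacks with the answer step, which is why the rephrase leaks out. Passing a separate, non-streaming model as `condense_question_llm` keeps the rephrase internal (it does not remove the extra latency of the rephrase call itself):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

streaming_llm = ChatOpenAI(streaming=True)  # streams only the final answer
condense_llm = ChatOpenAI(temperature=0)    # no streaming callbacks attached

chain = ConversationalRetrievalChain.from_llm(
    llm=streaming_llm,
    retriever=retriever,  # assumed to already exist
    condense_question_llm=condense_llm,
)
```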
### Suggestion:
_No response_ | Issue: rephrasing of question in Conversational retrieval chain which delays responses and users see it | https://api.github.com/repos/langchain-ai/langchain/issues/12823/comments | 3 | 2023-11-03T09:10:55Z | 2024-02-09T16:08:58Z | https://github.com/langchain-ai/langchain/issues/12823 | 1,975,736,883 | 12,823 |