| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to implement the [web scraping tutorial](https://python.langchain.com/docs/use_cases/web_scraping#llm-with-function-calling) using ChatOllama instead of ChatOpenAI.
This is what I'm trying to do:
```python
import pprint

from langchain.chains import create_extraction_chain
from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import BeautifulSoupTransformer
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOllama


def extract(content: str, schema: dict, llm):
    return create_extraction_chain(schema=schema, llm=llm).run(content)


def scrape_with_playwright(urls, schema, llm):
    loader = AsyncChromiumLoader(urls)
    docs = loader.load()
    bs_transformer = BeautifulSoupTransformer()
    docs_transformed = bs_transformer.transform_documents(
        docs, tags_to_extract=["span"]
    )
    print("Extracting content with LLM")
    # Grab the first 1000 tokens of the site
    splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        chunk_size=1000, chunk_overlap=0
    )
    splits = splitter.split_documents(docs_transformed)
    # Process the first split
    extracted_content = extract(schema=schema, content=splits[0].page_content, llm=llm)
    return extracted_content


if __name__ == '__main__':
    llm = ChatOllama(base_url="https://localhost:11434", model="llama2")
    schema = {
        "properties": {
            "news_article_title": {"type": "string"},
            "news_article_summary": {"type": "string"},
        },
        "required": ["news_article_title", "news_article_summary"],
    }
    urls = ["https://www.wsj.com"]
    extracted_content = scrape_with_playwright(urls, schema=schema, llm=llm)
    pprint.pprint(extracted_content)
```
Instead of the results shown I get this error: `requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1006)')))` when the `extract` function is called.
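One thing I now suspect is the `https://` scheme in `base_url`: as far as I know, a local Ollama server speaks plain HTTP on port 11434, which would explain the `WRONG_VERSION_NUMBER` SSL error. A minimal change that might fix it (just a guess on my part):
```python
# guess: use http, not https, for a local Ollama instance
llm = ChatOllama(base_url="http://localhost:11434", model="llama2")
```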
Could anyone please help me understand what I'm doing wrong? Thanks!
### Suggestion:
_No response_ | Web Scraping with ChatOllama gives SSL: WRONG_VERSION_NUMBER | https://api.github.com/repos/langchain-ai/langchain/issues/14450/comments | 6 | 2023-12-08T15:19:32Z | 2024-07-03T07:29:16Z | https://github.com/langchain-ai/langchain/issues/14450 | 2,032,847,275 | 14,450 |
[
"langchain-ai",
"langchain"
] | During the recent initiative to secure API keys with `SecretStr` (https://github.com/langchain-ai/langchain/issues/12165), some implementations and their corresponding tests were implemented with some flaws. More specifically, they were not really masking the API key.
For instance, in `libs/langchain/langchain/chat_models/javelin_ai_gateway.py` we have:
```python
@property
def _default_params(self) -> Dict[str, Any]:
    params: Dict[str, Any] = {
        "gateway_uri": self.gateway_uri,
        "javelin_api_key": cast(SecretStr, self.javelin_api_key).get_secret_value(),
        "route": self.route,
        **(self.params.dict() if self.params else {}),
    }
    return params
```
In the above snippet, `self.javelin_api_key` is cast to `SecretStr` and then `.get_secret_value()` is immediately invoked, essentially retrieving the original string. Note that the Javelin chat model lacks unit tests. The cast may be there to handle the case where the API key is `None`, but it makes it look as though there is no masking at all; it would be preferable to address the `None` case directly.
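A sketch of what I mean by addressing it directly (hypothetical code, not what is currently in the repo):
```python
# hypothetical: handle the Optional explicitly instead of casting to SecretStr
# and immediately unwrapping it
api_key = (
    self.javelin_api_key.get_secret_value()
    if self.javelin_api_key is not None
    else None
)
```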
It's worth noting that this pattern is repeated in tests, such as in `libs/langchain/tests/integration_tests/chat_models/test_baiduqianfan.py`:
```python
def test_uses_actual_secret_value_from_secret_str() -> None:
    """Test that actual secret is retrieved using `.get_secret_value()`."""
    chat = QianfanChatEndpoint(
        qianfan_ak="test-api-key",
        qianfan_sk="test-secret-key",
    )
    assert cast(SecretStr, chat.qianfan_ak).get_secret_value() == "test-api-key"
    assert cast(SecretStr, chat.qianfan_sk).get_secret_value() == "test-secret-key"
```
The point of the test would be to assert that the API key is indeed a secret, and not just cast it back and forth.
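For illustration, a test along these lines (the test name is my own sketch) would actually exercise the masking, relying on pydantic's `SecretStr` stringifying as `**********`:
```python
def test_api_key_is_masked() -> None:
    """Sketch: assert the key is stored masked, not just round-tripped."""
    chat = QianfanChatEndpoint(
        qianfan_ak="test-api-key",
        qianfan_sk="test-secret-key",
    )
    assert isinstance(chat.qianfan_ak, SecretStr)
    # pydantic masks secrets when they are stringified
    assert str(chat.qianfan_ak) == "**********"
    assert "test-api-key" not in repr(chat)
```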
Let me point out that the test suite for the baiduqianfan chat model does catch whether the API key is actually masked with a `SecretStr`, by capturing stdout.
@eyurtsev @hwchase17
### Suggestion:
PR to fix the issues | Issue: Flawed implementations of SecretStr for API keys | https://api.github.com/repos/langchain-ai/langchain/issues/14445/comments | 1 | 2023-12-08T13:45:01Z | 2024-02-02T14:32:30Z | https://github.com/langchain-ai/langchain/issues/14445 | 2,032,683,865 | 14,445 |
[
"langchain-ai",
"langchain"
] | ### System Info
npm version: "^0.0.203"
MacOS
Bun version: 1.0.15+b3bdf22eb
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code will cause this error:
```typescript
import { Pinecone } from '@pinecone-database/pinecone';
import { VectorDBQAChain } from 'langchain/chains';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { OpenAI } from 'langchain/llms/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';

const pinecone = new Pinecone();

const indexKey = process.env.PINECONE_INDEX_KEY;
if (!indexKey) {
  throw new Error('PINECONE_INDEX_KEY is not set.');
}
const pineconeIndex = pinecone.Index(indexKey);

export async function queryDocuments(query: string, returnSourceDocuments = false) {
  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings({
      modelName: 'text-embedding-ada-002',
    }),
    {
      pineconeIndex,
    },
  );
  const model = new OpenAI({
    modelName: 'gpt-4-1106-preview',
  });
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 5,
    returnSourceDocuments,
  });
  return await chain.call({ query });
}
```
The embeddings have been created and confirmed to exist in the Pinecone console, e.g.:
<img width="1240" alt="Screenshot 2023-12-08 at 13 46 24" src="https://github.com/langchain-ai/langchain/assets/1304307/66c23c7e-916a-461d-b8f6-28a7fa460300">
### Expected behavior
I would expect it to query the vector DB and correctly prompt GPT-4 with the results. But instead, I get the following error:
```
? Enter your query what is the third wave of dam
Creating query for "what is the third wave of dam"...
499 | var _a;
500 | return __generator(this, function (_b) {
501 | switch (_b.label) {
502 | case 0:
503 | _a = this.transformer;
504 | return [4 /*yield*/, this.raw.json()];
^
SyntaxError: Unexpected end of JSON input
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:504:46
at step (/Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:72:18)
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:53:53
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:47:9
at new Promise (:1:21)
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:43:12
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:498:16
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/apis/VectorOperationsApi.js:405:46
at step (/Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/apis/VectorOperationsApi.js:84:18)
``` | Unexpected end of JSON | https://api.github.com/repos/langchain-ai/langchain/issues/14443/comments | 1 | 2023-12-08T12:47:37Z | 2024-03-18T16:07:49Z | https://github.com/langchain-ai/langchain/issues/14443 | 2,032,599,462 | 14,443 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
We are using the OpenSearch vector DB to store embeddings, and we then use these embeddings to retrieve similar documents during conversational retrieval. While checking the settings of the index created by `vector_db.add_text(page_content, metadatas)`, I have seen `number_of_replicas` as 5 and shards as 1 (this is the default behaviour of `vector_db.add_text(page_content, metadatas)`).
**I want to pass `number_of_replicas` as 1** and shards as 1 whenever the index is created on OpenSearch. Can you please help me with this, i.e. how can I pass this replicas value myself? I am also adding code below for better understanding.
In the code below, the **langchain `vector_db.add_text()` call at line 109** creates an index with 5 shards by default. I just want to pass this parameter manually, say 1 shard. Can you please help me out here? Please let me know if you need any more info to understand my issue.

### Suggestion:
_No response_ | Issue: Opensearch manually assigned shards and replicas while using vector_db.add_text(page_contents,metatas) | https://api.github.com/repos/langchain-ai/langchain/issues/14442/comments | 1 | 2023-12-08T11:02:46Z | 2024-03-16T16:14:06Z | https://github.com/langchain-ai/langchain/issues/14442 | 2,032,443,629 | 14,442 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
#prompt template
How can I use a Prompt Template in my code below?
```python
def chat_langchain(new_project_qa, query, not_uuid):
    check = query.lower()
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    # query = "If the context is related to hi/hello with or without incorrect spelling then reply Hi! How can I assist you today?, else Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary. If you don't find answer then give relevant answer else reply with text 'Sorry, I can't find the related information'".format(check)
    if check not in ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy'] and not user_experience_inst.custom_prompt:
        query = "Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary. If you don't find answer then reply with text 'Sorry, I can't find the related information' otherwise give relevant answer".format(check)
    elif check not in ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy'] and user_experience_inst.custom_prompt:
        query = f"{user_experience_inst.custom_prompt} {check}. If you don't find answer then reply with text 'Sorry, I can't find the related information'"
    else:
        query = "Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary.".format(check)
    result = new_project_qa(query)
    relevant_document = result['source_documents']
    if relevant_document:
        source = relevant_document[0].metadata.get('source', '')
        # Check if the file extension is ".pdf"
        file_extension = os.path.splitext(source)[1]
        if file_extension.lower() == ".pdf":
            source = os.path.basename(source)
        # Retrieve the UserExperience instance using the provided not_uuid
        user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
        bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
        # Create the list_json dictionary
        if bot_ending != '':
            list_json = {
                'bot_message': result['result'] + '\n\n' + str(bot_ending),
                "citation": source
            }
        else:
            list_json = {
                'bot_message': result['result'] + str(bot_ending),
                "citation": source
            }
    else:
        # Handle the case when relevant_document is empty
        list_json = {
            'bot_message': result['result'],
            'citation': ''
        }
    # Return the list_json dictionary
    return list_json
```
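Something like the sketch below is what I have in mind (the template text is just an example), but I am not sure how to wire it into the function above:
```python
# sketch: replace the hand-built .format() strings with a PromptTemplate
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["question"],
    template=(
        "Search the entire context and provide a formal and accurate answer "
        "for this query - {question}. Explain the relevant information with "
        "important points, if necessary. If you don't find an answer, reply "
        "with the text 'Sorry, I can't find the related information'."
    ),
)
query = prompt.format(question=check)
```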
### Suggestion:
_No response_ | Issue: How Can I use Prompt Template? | https://api.github.com/repos/langchain-ai/langchain/issues/14441/comments | 1 | 2023-12-08T10:24:07Z | 2024-03-16T16:14:01Z | https://github.com/langchain-ai/langchain/issues/14441 | 2,032,387,502 | 14,441 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When utilizing `custom_table_info` in the `SQLDatabase` instance while employing the `create_sql_agent` function, it appears that there is an issue where it disregards the `sql_db_schema`. Currently, it only utilizes either the `custom_table_info` or the `sql_db_schema`, never both. This poses a challenge, especially when crucial information, such as identifying which ID column corresponds to other tables, cannot be specified alongside the schema. There needs to be an option to use both the custom table information and the `sql_db_schema`.
```python
table_info = {
    "invoice": "the customer_id in the invoice table is a reference to the customer table's company_id",
}
db = SQLDatabase(engine=dbengine, include_tables=["invoice", "customer"], custom_table_info=table_info)
```
Invoking: `sql_db_schema` with `customer, invoice`
```
CREATE TABLE customer (
    id SERIAL NOT NULL,
    key VARCHAR NOT NULL,
    company_id VARCHAR NOT NULL,
    company_name VARCHAR NOT NULL,
)

/*
1 rows from customer table:
id   key         company_id  company_name
670  CUST-0ab15  17          Aim Inc
*/

the customer_id in invoice table is referenced to customers table's company_id,
```
It doesn't include the schema of the invoice table, so the agent generates wrong SQL queries.
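A workaround sketch I considered (the column names below are made up for illustration) is to embed the DDL and the note together in `custom_table_info`, since only one source seems to be used per table:
```python
# hypothetical: carry both the schema and the extra note in custom_table_info
table_info = {
    "invoice": (
        "CREATE TABLE invoice (\n"
        "    id SERIAL NOT NULL,\n"  # illustrative columns only
        "    customer_id VARCHAR NOT NULL\n"
        ")\n"
        "/* the customer_id in invoice is a reference to customer.company_id */"
    ),
}
db = SQLDatabase(
    engine=dbengine,
    include_tables=["invoice", "customer"],
    custom_table_info=table_info,
)
```
Still, combining both sources automatically would be preferable.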
### Motivation
I need both `custom_table_info` and `sql_db_schema` to work, as some extra metadata specific to my use case needs to be supplied alongside the schema.
### Your contribution
NO | custom_table_info along with sql_db_schema while using create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/14440/comments | 1 | 2023-12-08T10:10:16Z | 2024-03-16T16:13:56Z | https://github.com/langchain-ai/langchain/issues/14440 | 2,032,367,290 | 14,440 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The Supabase vector store does not support setting the `score_threshold` in `as_retriever`, despite this being showcased as an option in the vector store superclass docstring example.
https://github.com/langchain-ai/langchain/blob/a05230a4ba4dee591d3810440ce65e16860956ae/libs/langchain/langchain/vectorstores/supabase.py#L218
https://github.com/langchain-ai/langchain/blob/a05230a4ba4dee591d3810440ce65e16860956ae/libs/core/langchain_core/vectorstores.py#L596
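For reference, this is the retriever pattern the superclass docstring advertises, which `SupabaseVectorStore` currently cannot honor:
```python
# pattern from the VectorStore docstring; not supported by SupabaseVectorStore today
retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)
```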
### Idea or request for content:
The VectoreStore superclass of SupabaseVectoreStore contains logic in `similarity_search_by_vector_with_relevance_scores` that could be used in the SupabaseVectorStore subclass to support the `score_threshold` parameter. | DOC: `SupabaseVectorStore` support for similarity `score_threshold` filtering in `as_retriever` | https://api.github.com/repos/langchain-ai/langchain/issues/14438/comments | 2 | 2023-12-08T09:48:57Z | 2024-03-17T16:10:06Z | https://github.com/langchain-ai/langchain/issues/14438 | 2,032,332,864 | 14,438 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11
Langchain 0.0.348
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying langchain's `DoctranTextTranslator`. However, I got an error message after running the code below.
Error:
```
doctran_docs[i] = await doc.translate(language=self.language).execute()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: object Document can't be used in 'await' expression
```
```python
from langchain.document_transformers import DoctranTextTranslator
from langchain.schema import Document
from dotenv import load_dotenv
import asyncio
load_dotenv()
sample_text = """[Generated with ChatGPT]
Confidential Document - For Internal Use Only
Date: July 1, 2023
Subject: Updates and Discussions on Various Topics
Dear Team,
I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.
Security and Privacy Measures
As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.
HR Updates and Employee Benefits
Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).
Marketing Initiatives and Campaigns
Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.
Research and Development Projects
In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.
Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.
Thank you for your attention, and let's continue to work together to achieve our goals.
Best regards,
Jason Fan
Cofounder & CEO
Psychic
jason@psychic.dev
"""
documents = [Document(page_content=sample_text)]
qa_translator = DoctranTextTranslator(language="spanish", openai_api_model="gpt-3.5-turbo")
async def atransform_documents(docs):
    return await qa_translator.atransform_documents(docs)
translated_document = asyncio.run(atransform_documents(documents))
print(translated_document[0].page_content)
```
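If it helps triage: the failing `await` is inside `DoctranTextTranslator.atransform_documents`, and my guess (unverified, since it depends on which `doctran` version made `execute()` synchronous) is that the fix is simply to drop the `await`:
```python
# hypothetical patch inside atransform_documents - unverified
doctran_docs[i] = doc.translate(language=self.language).execute()
```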
### Expected behavior
It should return the translated text. | DoctranTextTranslator Is Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/14437/comments | 1 | 2023-12-08T09:10:36Z | 2024-03-16T16:13:46Z | https://github.com/langchain-ai/langchain/issues/14437 | 2,032,270,431 | 14,437 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.348
In the type hint for the `es_connection` variable in the class `ElasticsearchChatMessageHistory`, a module is used as a type.
@hwchase17 @eyurtsev
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import json
import logging
from time import time
from typing import TYPE_CHECKING, Any, Dict, List, Optional

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import (
    BaseMessage,
    message_to_dict,
    messages_from_dict,
)

if TYPE_CHECKING:
    from elasticsearch import Elasticsearch

logger = logging.getLogger(__name__)


class ElasticsearchChatMessageHistory(BaseChatMessageHistory):
    """Chat message history that stores history in Elasticsearch.

    Args:
        es_url: URL of the Elasticsearch instance to connect to.
        es_cloud_id: Cloud ID of the Elasticsearch instance to connect to.
        es_user: Username to use when connecting to Elasticsearch.
        es_password: Password to use when connecting to Elasticsearch.
        es_api_key: API key to use when connecting to Elasticsearch.
        es_connection: Optional pre-existing Elasticsearch connection.
        index: Name of the index to use.
        session_id: Arbitrary key that is used to store the messages
            of a single chat session.
    """

    def __init__(
        self,
        index: str,
        session_id: str,
        *,
        es_connection: Optional["Elasticsearch"] = None,  # <-- the type hint in question
        es_url: Optional[str] = None,
        es_cloud_id: Optional[str] = None,
        es_user: Optional[str] = None,
        es_api_key: Optional[str] = None,
        es_password: Optional[str] = None,
    ):
        self.index: str = index
        self.session_id: str = session_id
```
### Expected behavior
The import of the `elasticsearch` module is not handled properly. | import error in elasticsearch memory module: Module cannot be used as a type | https://api.github.com/repos/langchain-ai/langchain/issues/14436/comments | 1 | 2023-12-08T09:09:41Z | 2024-03-16T16:13:41Z | https://github.com/langchain-ai/langchain/issues/14436 | 2,032,269,106 | 14,436 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version:0.0.311
os:macOS 11.6
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is a code sample using the agent:
````python
sys_p = self.create_sys_prompt()
tools = [
    ProgrammerAgent(self.env),
    DefaultAgent(),
]
agent_obj = StructuredChatAgent.from_llm_and_tools(
    llm=self.env.llm,
    tools=tools,
    prefix=sys_p + PREFIX,
    verbose=True,
)
agent = AgentExecutor.from_agent_and_tools(
    agent=agent_obj,
    tools=tools,
    verbose=True,
)
task = f"""
Here is the requirement:
```Markdown
{requirement}
```
Please Implement the requirement.
"""
return agent.run(task)
````
This is the code of the `ProgrammerAgent` tool:
```python
from typing import Any, Type

from langchain.tools import BaseTool
from pydantic import Field, BaseModel

from enviroment import Environment


class ProgramRequirementSchema(BaseModel):
    task: str = Field(description="Coding task")
    task_context: str = Field(description="Contextual background information for the task.")
    project_path: str = Field(description="Project path")


class ProgrammerAgent(BaseTool):
    name: str = "programmer_agent"
    description: str = """
    programmer agent is a agent that write code for a given coding task.
    """
    args_schema: Type[ProgramRequirementSchema] = ProgramRequirementSchema
    env: Environment = Field(default=None)

    def __init__(self, env: Environment):
        super().__init__()
        self.env = env

    def _run(self, task: str, task_context: str, project_path: str) -> Any:
        result = "success"
        return result
```
And this is the wrong action:
Action:
```json
{
  "action": "programmer_agent",
  "action_input": {
    "task": {
      "title": "Implement the requirement",
      "description": "1. Update the `grpc.proto` file. 2. Design the database and write the create SQL. 3. Implement the database operation interface. 4. Implement the grpc interface."
    },
    "task_context": {
      "title": "Context",
      "description": "The project is built with golang, and the database used is relational database. The grpc interface is defined in `grpc.proto`."
    },
    "project_path": "golang.52tt.com/services/tt-rev/offering-room"
  }
}
```
### Expected behavior
The value of `action_input` should be:
```json
{
  "action": "programmer_agent",
  "action_input": {
    "task": "1. Update the `grpc.proto` file. 2. Design the database and write the create SQL. 3. Implement the database operation interface. 4. Implement the grpc interface.",
    "task_context": "The project is built with golang, and the database used is relational database. The grpc interface is defined in `grpc.proto`.",
    "project_path": "golang.52tt.com/services/tt-rev/offering-room"
  }
}
``` | StructuredChatAgent did not provide the correct action input. | https://api.github.com/repos/langchain-ai/langchain/issues/14434/comments | 3 | 2023-12-08T08:15:05Z | 2023-12-08T10:27:18Z | https://github.com/langchain-ai/langchain/issues/14434 | 2,032,157,152 | 14,434 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.346
Python: 3.11.4
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import ElasticsearchStore
```
Results in
```
File "C:\Users\XXX\Desktop\Projects\XXX\api\controllers\Vector.py", line 5, in <module>
from langchain.vectorstores import ElasticsearchStore
ImportError: cannot import name 'ElasticsearchStore' from 'langchain.vectorstores'
```
or
```python
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```
Results in
```
File "C:\Users\XXX\Desktop\Projects\XXX\api\controllers\Vector.py", line 5, in <module>
from langchain.vectorstores.elasticsearch import ElasticsearchStore
ModuleNotFoundError: No module named 'langchain.vectorstores.elasticsearch'
```
### Expected behavior
I am upgrading from `langchain==0.0.279` to `langchain==0.0.346`, and this is the issue that arose.
The expected behavior would be a successful import; the new langchain version does not seem to be backward compatible for `ElasticsearchStore`. | Bug: ImportError for ElasticsearchStore | https://api.github.com/repos/langchain-ai/langchain/issues/14431/comments | 3 | 2023-12-08T07:14:10Z | 2023-12-08T15:16:20Z | https://github.com/langchain-ai/langchain/issues/14431 | 2,032,065,241 | 14,431 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python ==3.11.3
pymilvus== 2.3.1
langchain==0.0.327
openai==0.28.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Description:
The issue revolves around the inability of the retrieval process to filter data based on metadata, specifically on the 'file_name' field. The context is a vector store backed by Milvus.
Details:
Metadata Structure: The metadata comprises a list of dictionaries, where each dictionary holds key-value pairs containing metadata information. The 'file_name' attribute is utilized for filtration purposes.
Example Metadata List:
[{'page_number': '4', 'file_name': 'Apple_history.pdf', 'source_path': '.../Apple_history.pdf'}, ...]
Applied Options and Observations:
Attempted an approach of directly passing a 'filter_query' with the 'file_name':
```python
filter_query = {"filter": {"file_name": 'samsung.pdf'}, "k": self.top}
retriever = vectorstore.as_retriever(search_kwargs=filter_query)  # retrieval of top k results
result = retriever.get_relevant_documents(agent_query)
```
Similar observations were made where the retrieval fetched data unrelated to the specified 'samsung.pdf' filename.
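The variant I would have expected to work (assuming Milvus filtering is expressed through a boolean `expr` rather than a `filter` dict, which I have not been able to confirm) is:
```python
# sketch: Milvus boolean expression filter instead of a `filter` dict
retriever = vectorstore.as_retriever(
    search_kwargs={"k": self.top, "expr": "file_name == 'samsung.pdf'"}
)
result = retriever.get_relevant_documents(agent_query)
```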
### Expected behavior
The anticipated functionality was to filter the retrieval process based on the 'file_name' metadata attribute. In scenarios where there are no chunks associated with the specified 'file_name', the retrieval should ideally return no data. | Retrieval Inability to Filter Based on Metadata in Milvius Database | https://api.github.com/repos/langchain-ai/langchain/issues/14429/comments | 2 | 2023-12-08T04:37:35Z | 2024-03-17T16:10:02Z | https://github.com/langchain-ai/langchain/issues/14429 | 2,031,914,421 | 14,429 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.348
Output from `poetry env info`
**Virtualenv**
Python: 3.9.12
Implementation: CPython
Path: /Users/peternf/Desktop/langchain/libs/langchain/.venv
Executable: /Users/peternf/Desktop/langchain/libs/langchain/.venv/bin/python
Valid: True
**System**
Platform: darwin
OS: posix
Python: 3.9.12
Path: /Users/peternf/opt/miniconda3
Executable: /Users/peternf/opt/miniconda3/bin/python3.9
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Reason to Install litellm
@xavier-xia-99 and I are collaborating on finishing the tasks of #12165 for _libs/langchain/langchain/chat_models/litellm.py_. We have finished the implementation and were adding unit tests to verify our logic when we discovered that we need to include litellm for the unit tests, or `make test` will just skip our newly added tests. Yet when we try to add the optional dependency litellm according to the instructions provided in langchain/.github/CONTRIBUTING.md, we get the following dependency conflict. We would appreciate any help in addressing this conflict or running the CI. Thank you!
### Steps to Reproduce the Behavior
1. Run `poetry add --optional litellm`
2. Error Message: <img width="720" alt="Screenshot 2023-12-07 at 5 50 43 PM" src="https://github.com/langchain-ai/langchain/assets/98713019/3e0d5730-2134-4e11-b901-fbda927bc796">
### Expected behavior
Output of litellm downloads in progress and successful installation of relevant packages. | Dependency Conflict between litellm and tiktoken | https://api.github.com/repos/langchain-ai/langchain/issues/14419/comments | 1 | 2023-12-07T23:10:21Z | 2024-03-16T16:13:31Z | https://github.com/langchain-ai/langchain/issues/14419 | 2,031,666,536 | 14,419 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to attach metadata to separate each user's vectors by a user-id field. I searched a lot and couldn't find anything. I'm using the Zilliz vector store. I tried using the user id as the memory key, but that's not possible. Any ideas? The docs are automatically saved to and retrieved from the saved collection on Zilliz, so I can't edit the docs either; I'm just trying to add metadata fields when the user chats and use them when retrieving context.
```python
vectordb = Milvus.from_documents(
    {},
    embeddings,
    connection_args={
        "uri": ZILLIZ_CLOUD_URI,
        "token": ZILLIZ_CLOUD_API_KEY,  # API key, for serverless clusters; can be used as replacement for user and password
        "secure": True,
    },
)
# vectordb.
retriever = Milvus.as_retriever(vectordb, search_kwargs={"k": 15, "user_id": "a3"})  # here we use the user id "a3" for retrieving memory
# print(retriever)
memory = VectorStoreRetrieverMemory(retriever=retriever, memory_key="history", metadata={"user_id": "a3"})
chain = ConversationChain(llm=self.llm, memory=memory, verbose=True, prompt=PROMPT, metadata={"user_id": "a3"})
res = chain.predict(input=input_text)
# with chain_recorder as recording:
#     llm_response = chain(input_text)
return res
```
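What I would like is something along these lines (assuming Milvus' boolean `expr` filtering is the right mechanism, which I have not confirmed works through `VectorStoreRetrieverMemory`):
```python
# sketch: tag each stored memory with the user, then filter retrieval on it
vectordb.add_texts(
    ["some conversation snippet"],
    metadatas=[{"user_id": "a3"}],
)
retriever = vectordb.as_retriever(
    search_kwargs={"k": 15, "expr": 'user_id == "a3"'}
)
memory = VectorStoreRetrieverMemory(retriever=retriever, memory_key="history")
```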
### Suggestion:
_No response_ | Issue: <Zilliz and Milvus metadata field and memory seperation> | https://api.github.com/repos/langchain-ai/langchain/issues/14412/comments | 1 | 2023-12-07T20:08:44Z | 2024-03-16T16:13:26Z | https://github.com/langchain-ai/langchain/issues/14412 | 2,031,444,278 | 14,412 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version : Latest
```python
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:
```
[This check](https://github.com/langchain-ai/langchain/blob/54040b00a4a05e81964a1a7f7edbf0b830d4395c/libs/langchain/langchain/vectorstores/faiss.py#L798) causes the issue.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db = FAISS.from_documents(text_pages, embeddings)
db.delete()
```
Error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[51], line 1
----> 1 db.delete()
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:799, in FAISS.delete(self, ids, **kwargs)
    789 """Delete by ID. These are the IDs in the vectorstore.
    790
    791 Args:
   (...)
    796     False otherwise, None if not implemented.
    797 """
    798 if ids is None:
--> 799     raise ValueError("No ids provided to delete.")
    800 missing_ids = set(ids).difference(self.index_to_docstore_id.values())
    801 if missing_ids:
ValueError: No ids provided to delete.
```
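As a workaround sketch (using the `index_to_docstore_id` mapping that the traceback itself references), everything can be deleted by passing every stored id explicitly:
```python
# workaround: FAISS.delete currently insists on explicit ids,
# so collect all stored ids and pass them in
all_ids = list(db.index_to_docstore_id.values())
db.delete(ids=all_ids)
```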
### Expected behavior
The index should be deleted without needing to pass an index `id`. | FAISS db.delete() says `ids` is required even when it is Optional | https://api.github.com/repos/langchain-ai/langchain/issues/14409/comments | 1 | 2023-12-07T19:22:56Z | 2024-03-17T16:09:57Z | https://github.com/langchain-ai/langchain/issues/14409 | 2,031,382,949 | 14,409 |
[
"langchain-ai",
"langchain"
] | ### System Info
from langchain.llms import GooglePalm
from sqlalchemy import create_engine
from langchain.utilities import SQLDatabase
from langchain.llms import GooglePalm
from langchain_experimental.sql import SQLDatabaseChain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`st.write(db_chain_sql_out.return_sql)` is returning the bool value `True` instead of the actual SQL statement generated by the model.
I am using Google Palm. Is this the normal output?
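My reading (possibly wrong) is that `return_sql` is a constructor flag rather than a field on the result, i.e. something like:
```python
# sketch: ask the chain to return the generated SQL instead of executing it
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=True)
sql_text = db_chain.run(input_text)  # should now be the SQL string
st.write(sql_text)
```
Is that the intended usage?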
### Expected behavior
Expecting model generated SQL | db_chain_sql_out.return_sql | https://api.github.com/repos/langchain-ai/langchain/issues/14404/comments | 4 | 2023-12-07T15:55:01Z | 2024-03-17T16:09:52Z | https://github.com/langchain-ai/langchain/issues/14404 | 2,031,053,527 | 14,404 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using langchain 0.0.316 and trying to create an `ElasticsearchStore` to do some `similarity_search`.
However, whenever I try to create it (using from_documents or not) I get the following error :
`raise ValueError("check_hostname requires server_hostname")`
This is an SSL error, and I suspect it to be the problem, as I cannot use SSL. Elsewhere in the project, I connect with Python to Elastic using `verify_certs=False` and everything works perfectly.
Thus I tried to create ElasticsearchStore with the following arguments :
```python
db = ElasticsearchStore(
    texts,
    embedding,
    es_url='...',
    index_name='...',
    ssl_verify={'verify_certs': False},
)
```
But I still get the error and nothing has changed.
How can I make langchain initialize Elastic without checking SSL certificates?
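One workaround I am considering (assuming `es_connection` is accepted in my version, which I have not verified for 0.0.316) is to build the client myself with certificate checks disabled and hand it to the store:
```python
# sketch: construct the Elasticsearch client directly, then pass es_connection
from elasticsearch import Elasticsearch

es = Elasticsearch("https://...", verify_certs=False, ssl_show_warn=False)
db = ElasticsearchStore(
    index_name="...",
    embedding=embedding,
    es_connection=es,
)
```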
### Suggestion:
_No response_ | Issue: ElasticsearchStore with ssl_verify = {'verify_certs':False} does not work | https://api.github.com/repos/langchain-ai/langchain/issues/14403/comments | 5 | 2023-12-07T15:46:15Z | 2024-02-05T02:29:06Z | https://github.com/langchain-ai/langchain/issues/14403 | 2,031,037,974 | 14,403 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.346
OpenAI: 1.3.7
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Simple script to authenticate to Azure with RBAC
```python
from langchain.embeddings import AzureOpenAIEmbeddings
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
embeddings = AzureOpenAIEmbeddings(azure_endpoint='xxxxxxx', azure_ad_token_provider=token_provider)
```
### Expected behavior
Should authenticate, but it seems like the `azure_ad_token_provider` is not added to the values dict.
`langchain/embeddings/azure_openai.py`, lines 80-86:
```python
values["azure_endpoint"] = values["azure_endpoint"] or os.getenv(
    "AZURE_OPENAI_ENDPOINT"
)
values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
    "AZURE_OPENAI_AD_TOKEN"
)
```
Other parameters are added to values, but not `azure_ad_token_provider` | AzureOpenAIEmbeddings cannot authenticate with azure_ad_token_provider | https://api.github.com/repos/langchain-ai/langchain/issues/14402/comments | 10 | 2023-12-07T15:35:19Z | 2024-03-21T08:22:18Z | https://github.com/langchain-ai/langchain/issues/14402 | 2,031,016,359 | 14,402 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello community,
I'm currently working on a project that involves using the langchain library for natural language processing. I'm encountering an issue with the LLMChain class, and I'm hoping someone can help me troubleshoot.
I've initialized a Hugging Face pipeline and constructed a prompt using PromptTemplate. However, when I attempt to load a QA chain using the load_qa_chain function, I get a ValidationError related to the Runnable type. The error suggests that an instance of Runnable is expected, but it seems there's a mismatch.
Here's a simplified version of my code:
```python
from langchain.prompts import PromptTemplate
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="ai-forever/rugpt3large_based_on_gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

prompt = """Question: {question}

Answer: {text}"""

# The next line is where the error occurs
chain = load_qa_chain(hf(prompt=prompt), chain_type="stuff")
```
```
ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
I have checked the documentation and versions of the libraries, but I'm still having trouble understanding and resolving the issue. Could someone please provide guidance on what might be causing this ValidationError and how I can address it?
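One thing I wondered while debugging: am I passing the wrong object? `hf(prompt=prompt)` invokes the pipeline and returns text, so perhaps the LLM object itself should be passed and the prompt kept separate. A guess:
```python
# guess: pass the LLM object, not the result of invoking it
chain = load_qa_chain(hf, chain_type="stuff")
```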
Thank you in advance for your help!
### Suggestion:
_No response_ | Issue: <Trouble with langchain Library: Error in LLMChain Validation> | https://api.github.com/repos/langchain-ai/langchain/issues/14401/comments | 1 | 2023-12-07T15:17:37Z | 2024-03-17T16:09:46Z | https://github.com/langchain-ai/langchain/issues/14401 | 2,030,975,852 | 14,401 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi Team,
I am trying to connect to SQL/CSV data using the HuggingFaceHub API and I get a ValueError. This ValueError occurs even when I use the same example as given in https://python.langchain.com/docs/use_cases/qa_structured/sql, except that instead of OpenAI I am using HuggingFaceHub.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
st.title("SQL DB with Langchain")

# entering input through streamlit into the app for querying
input_text = st.text_input("enter the text for search")
# input_period = st.text_input("enter the period for which you need summarization")

# connecting to hugging face API
os.environ["HUGGINGFACEHUB_API_TOKEN"] = huggingface_write_key

# SQL
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
repo_id = "google/flan-t5-xxl"
llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.2}
)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

if input_text:
    st.write(db_chain.run(input_text))
```
### Expected behavior
Expected it to give output for the query that's run | value error when using huggingfacehub API | https://api.github.com/repos/langchain-ai/langchain/issues/14400/comments | 10 | 2023-12-07T14:05:46Z | 2024-04-25T11:22:42Z | https://github.com/langchain-ai/langchain/issues/14400 | 2,030,832,959 | 14,400 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
python3.10/site-packages/langchain/llms/bedrock.py:315: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
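Presumably line 315 invokes the async callback manager's `on_llm_new_token` without awaiting it; a guess at the shape of the fix (the surrounding variable names are assumed, unverified):
```python
# hypothetical: await the async callback instead of dropping the coroutine
await run_manager.on_llm_new_token(chunk)
```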
### Suggestion:
_No response_ | Issue:python3.10/site-packages/langchain/llms/bedrock.py:315: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/14399/comments | 4 | 2023-12-07T13:59:30Z | 2023-12-08T02:24:40Z | https://github.com/langchain-ai/langchain/issues/14399 | 2,030,821,733 | 14,399 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.346
python: 3.11.7
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Error when running:
```python
from langchain.tools import DuckDuckGoSearchRun
search = DuckDuckGoSearchRun()
search.run("Obama's first name?")
```
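(A guess before the traceback below: this looks like a version mismatch, since langchain is calling `DDGS.text(..., max_results=...)`. Upgrading the backend with `pip install -U duckduckgo-search` to a release whose `DDGS.text` accepts `max_results` might resolve it.)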
### Expected behavior
I got this error when running it:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
c:\Users\thenh\OneDrive\Máy tính\demo\test.ipynb Cell 13 line 4
.....

File c:\Program Files\Python311\Lib\site-packages\langchain_core\tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    334 try:
    335     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    336     observation = (
--> 337         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    338         if new_arg_supported
    339         else self._run(*tool_args, **tool_kwargs)
    340     )
    341 except ToolException as e:
    342     if not self.handle_tool_error:

File c:\Program Files\Python311\Lib\site-packages\langchain\tools\ddg_search\tool.py:37, in DuckDuckGoSearchRun._run(self, query, run_manager)
     31 def _run(
     32     self,
     33     query: str,
     34     run_manager: Optional[CallbackManagerForToolRun] = None,
     35 ) -> str:
     36     """Use the tool."""
---> 37     return self.api_wrapper.run(query)

File c:\Program Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:81, in DuckDuckGoSearchAPIWrapper.run(self, query)
     79 """Run query through DuckDuckGo and return concatenated results."""
     80 if self.source == "text":
---> 81     results = self._ddgs_text(query)
     82 elif self.source == "news":
     83     results = self._ddgs_news(query)

File c:\Program Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:48, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
     45 from duckduckgo_search import DDGS
     47 with DDGS() as ddgs:
---> 48     ddgs_gen = ddgs.text(
     49         query,
     50         region=self.region,
     51         safesearch=self.safesearch,
     52         timelimit=self.time,
     53         max_results=max_results or self.max_results,
     54         backend=self.backend,
     55     )
     56 if ddgs_gen:
     57     return [r for r in ddgs_gen]

TypeError: DDGS.text() got an unexpected keyword argument 'max_results'
```
After removing `max_results=max_results or self.max_results`, I still got another error:
```
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
c:\Users\thenh\OneDrive\Máy tính\demo\test.ipynb Cell 13 line 4
      1 from langchain.tools import DuckDuckGoSearchRun
      2 search = DuckDuckGoSearchRun()
----> 4 search.run("who is newjeans")

File c:\Program Files\Python311\Lib\site-packages\langchain_core\tools.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    363 except (Exception, KeyboardInterrupt) as e:
    364     run_manager.on_tool_error(e)
--> 365     raise e
    366 else:
    367     run_manager.on_tool_end(
    368         str(observation), color=color, name=self.name, **kwargs
    369     )

File c:\Program Files\Python311\Lib\site-packages\langchain_core\tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    334 try:
    335     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    336     observation = (
--> 337         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    338         if new_arg_supported
    339         else self._run(*tool_args, **tool_kwargs)
    340     )
    341 except ToolException as e:
    342     if not self.handle_tool_error:

File c:\Program Files\Python311\Lib\site-packages\langchain\tools\ddg_search\tool.py:37, in DuckDuckGoSearchRun._run(self, query, run_manager)
     31 def _run(
     32     self,
     33     query: str,
     34     run_manager: Optional[CallbackManagerForToolRun] = None,
     35 ) -> str:
     36     """Use the tool."""
---> 37     return self.api_wrapper.run(query)

File c:\Program Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:81, in DuckDuckGoSearchAPIWrapper.run(self, query)
     79 """Run query through DuckDuckGo and return concatenated results."""
     80 if self.source == "text":
---> 81     results = self._ddgs_text(query)
     82 elif self.source == "news":
     83     results = self._ddgs_news(query)

File c:\Program Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:57, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
     48 ddgs_gen = ddgs.text(
     49     query,
     50     region=self.region,
    (...)
     54     backend=self.backend,
     55 )
     56 if ddgs_gen:
---> 57     return [r for r in ddgs_gen]
     58 return []

File c:\Program Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:57, in <listcomp>(.0)
     48 ddgs_gen = ddgs.text(
     49     query,
     50     region=self.region,
    (...)
     54     backend=self.backend,
     55 )
     56 if ddgs_gen:
---> 57     return [r for r in ddgs_gen]
     58 return []

File c:\Program Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:150, in DDGS.text(self, keywords, region, safesearch, timelimit, backend)
    134 """DuckDuckGo text search generator. Query params: https://duckduckgo.com/params
    135
    136 Args:
    (...)
    147
    148 """
    149 if backend == "api":
--> 150     yield from self._text_api(keywords, region, safesearch, timelimit)
    151 elif backend == "html":
    152     yield from self._text_html(keywords, region, safesearch, timelimit)

File c:\Program Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:203, in DDGS._text_api(self, keywords, region, safesearch, timelimit)
    201 for s in ("0", "20", "70", "120"):
    202     payload["s"] = s
--> 203     resp = self._get_url(
    204         "GET", "https://links.duckduckgo.com/d.js", params=payload
    205     )
    206     if resp is None:
    207         break

File c:\Program Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:89, in DDGS._get_url(self, method, url, **kwargs)
     87 logger.warning(f"_get_url() {url} {type(ex).__name__} {ex}")
     88 if i >= 2 or "418" in str(ex):
---> 89     raise ex
     90 sleep(3)
     91 return None

File c:\Program Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:82, in DDGS._get_url(self, method, url, **kwargs)
     78 resp = self._client.request(
     79     method, url, follow_redirects=True, **kwargs
     80 )
[81](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:81) if self._is_500_in_url(str(resp.url)) or resp.status_code == 202:
---> [82](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:82) raise httpx._exceptions.HTTPError("")
[83](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:83) resp.raise_for_status()
[84](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:84) if resp.status_code == 200:
HTTPError:
``` | TypeError: DDGS.text() got an unexpected keyword argument 'max_results' AND HTTPError: | https://api.github.com/repos/langchain-ai/langchain/issues/14397/comments | 1 | 2023-12-07T13:55:48Z | 2023-12-07T14:38:09Z | https://github.com/langchain-ai/langchain/issues/14397 | 2,030,814,970 | 14,397 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
def delete_confluence_embeddings(file_path, persist_directory, not_uuid):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    project_instance = ProjectName.objects.get(not_uuid=not_uuid)
    confluence_data = json.loads(project_instance.media).get('confluence', [])
    confluence_url = project_instance.url
    username = project_instance.confluence_username
    api_key = base64.b64decode(project_instance.api_key).decode('utf-8')
    space_keys = [space_data['space_key'] for space_data in project_instance.space_key]
    documents = []
    loader = ConfluenceLoader(
        url=confluence_url,
        username=username,
        api_key=api_key
    )
    for space_key in space_keys:
        documents.extend(loader.load(space_key=space_key, limit=100))
    page_info = []
    for document in documents:
        page_id = document.metadata.get('id')
        page_title = document.metadata.get('title')
        formatted_title = page_title.replace(' ', '+')
        page_info.append({"id": page_id, "title": formatted_title})
        # print(f"Page ID: {page_id}, Page Title: {formatted_title}")
    for entry in page_info:
        entry_file_path = f"{file_path}/pages/{entry['id']}"
        ids = collection.get(where={"source": entry_file_path})['ids']
        collection.delete(where={"source": entry_file_path}, ids=ids)
    for entry in page_info:
        entry_file_path = f"{file_path}/pages/{entry['id']}/{entry['title']}"
        ids = collection.get(where={"source": entry_file_path})['ids']
        collection.delete(where={"source": entry_file_path}, ids=ids)
    chroma_db.delete_collection(name="langchain")
    print("Delete successfully")
```
How can I delete the embeddings for one particular space only?
The file path is `url/spaces/space_key/`.
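One possible approach (a minimal sketch, not tested against your schema; it assumes every chunk's `source` metadata starts with the space's page path):
```python
import chromadb

def delete_space_embeddings(persist_directory: str, base_url: str, space_key: str) -> None:
    """Delete only the embeddings whose source lies under one space's path."""
    client = chromadb.PersistentClient(path=persist_directory)
    collection = client.get_or_create_collection(name="langchain")

    space_prefix = f"{base_url}/spaces/{space_key}/"  # assumed source layout
    records = collection.get(include=["metadatas"])   # ids are always returned

    ids_to_delete = [
        record_id
        for record_id, metadata in zip(records["ids"], records["metadatas"])
        if (metadata or {}).get("source", "").startswith(space_prefix)
    ]
    if ids_to_delete:
        collection.delete(ids=ids_to_delete)
```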
### Suggestion:
_No response_ | Issue: How to delete particular space embeddings for a confluence projects | https://api.github.com/repos/langchain-ai/langchain/issues/14396/comments | 1 | 2023-12-07T13:14:35Z | 2024-03-16T16:13:01Z | https://github.com/langchain-ai/langchain/issues/14396 | 2,030,741,752 | 14,396 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I use RetrievalQA with a custom prompt on the official Llama 2 model, it returns an empty result even though the retriever worked; the LLM fails to produce a response. If I pass the query directly to the chain without any prompt, it works as expected.
## Versions
Python - 3.10
Langchain - 0.0.306
@hwchase17 and @agola11, please take a look at this issue.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=final_retriever,
    chain_type_kwargs={"prompt": prompt_template},
    return_source_documents=True
)
```
If I initialize the chain like this, it fails (the LLM returns an empty response).
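A hedged guess at the cause: with `chain_type="stuff"`, the custom prompt must expose exactly the variables the chain fills in (`context` and `question`); a template missing either tends to produce empty answers. A minimal sketch of a compatible template:
```python
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following context to answer the question.\n\n"
        "{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
)
```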
### Expected behavior
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=final_retriever,
    return_source_documents=True
)
```
If I initialize the chain like this, it works as expected. | Retrieval QA chain does not work | https://api.github.com/repos/langchain-ai/langchain/issues/14395/comments | 1 | 2023-12-07T13:08:45Z | 2024-03-16T16:12:56Z | https://github.com/langchain-ai/langchain/issues/14395 | 2030731840 | 14395 |
[
"langchain-ai",
"langchain"
] | ### System Info
Google colab
### Who can help?
@agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
!pip install lmql==0.0.6.6 langchain==0.0.316 openai==0.28.1 -q
import lmql
import aiohttp
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain import LLMChain, PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI

# Setup the LM to be used by langchain
llm = OpenAI(temperature=0.9)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
# Run the chain
chain.run("colorful socks")
```
gives error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-17-c7d901a7e281>](https://localhost:8080/#) in <cell line: 25>()
23
24 # Run the chain
---> 25 chain.run("colorful socks")
11 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chat_models/openai.py](https://localhost:8080/#) in _create_retry_decorator(self)
300
301 def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
--> 302 overall_token_usage: dict = {}
303 for output in llm_outputs:
304 if output is None:
AttributeError: module 'openai' has no attribute 'error'
```
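A hedged observation: `openai.error` was removed in the openai 1.x releases, so this error usually means a 1.x version ended up installed even though 0.28.1 was requested (another package may have upgraded it). Re-pinning after all other installs is worth trying:
```
!pip install "openai==0.28.1" -q
```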
### Expected behavior
show results | AttributeError: module 'openai' has no attribute 'error' | https://api.github.com/repos/langchain-ai/langchain/issues/14394/comments | 1 | 2023-12-07T12:47:28Z | 2024-03-18T16:07:39Z | https://github.com/langchain-ai/langchain/issues/14394 | 2,030,689,372 | 14,394 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How does ConversationBufferMemory work with RouterChain? Suppose I want to create a chat application: I need memory to store the conversations, so how does that work with RouterChain?
I'm currently using the same implementation that is shown in the documentation.
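For reference, a minimal sketch of one way this could be wired up (unverified; it assumes `MultiPromptChain` as the router and one shared buffer attached to every destination chain):
```python
from langchain.chains import LLMChain
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
# One shared buffer so every routed destination sees the same history.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="input")

infos = {"math": "good for math questions", "chat": "good for everything else"}
destination_chains = {}
for name in infos:
    prompt = PromptTemplate(
        input_variables=["chat_history", "input"],
        template="Conversation so far:\n{chat_history}\n\nUser: {input}\nAssistant:",
    )
    destination_chains[name] = LLMChain(llm=llm, prompt=prompt, memory=memory)

router_prompt = PromptTemplate(
    template=MULTI_PROMPT_ROUTER_TEMPLATE.format(
        destinations="\n".join(f"{k}: {v}" for k, v in infos.items())
    ),
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
chain = MultiPromptChain(
    router_chain=LLMRouterChain.from_llm(llm, router_prompt),
    destination_chains=destination_chains,
    default_chain=destination_chains["chat"],
    verbose=True,
)
print(chain.run("hi, my name is Sam"))
```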
Please respond as soon as possible.
thank you:))
### Suggestion:
_No response_ | Issue: <how does langchain's routerchain work with conversationbuffermemory> | https://api.github.com/repos/langchain-ai/langchain/issues/14392/comments | 14 | 2023-12-07T10:39:13Z | 2024-06-13T16:07:42Z | https://github.com/langchain-ai/langchain/issues/14392 | 2,030,447,654 | 14,392 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain v 0.0.344
pydantic v 2.5.2
pydantic_core v 2.14.5
python v 3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I noticed that the **OutputFixingParser** class does not work when applied to a **PydanticOutputParser** class.
Something has probably changed in **Pydantic**.
Doing step-by-step debugging, I saw that in **PydanticOutputParser**, at line 32 (see below)...

...the exception caught is indeed a **ValidationError**, but _it is not the same_ **ValidationError**...
The **ValidationError** expected from that `try..except` block is of this type.

While the **ValidationError** raised is of this other type.

### Expected behavior
I therefore imagine that the **LangChain** code needs to be updated to also handle the new exception (since the old one belongs to a "**v1**" package). | OutputFixingParser does not work with PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/14387/comments | 2 | 2023-12-07T09:01:20Z | 2024-03-17T16:09:31Z | https://github.com/langchain-ai/langchain/issues/14387 | 2,030,232,645 | 14,387 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.346
python==3.11.6
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi there!
I'm currently exploring the index feature in Langchain to load documents into the vector store upon my app startup.
However, I've encountered an issue where the index doesn't delete old documents when utilizing Redis as the vector store. After some investigation, I discovered that the `delete` function in `langchain.vectorstores.redis.base.Redis` is a static method, which poses a limitation—it cannot access instance variables, including the essential `key_prefix`. Without the `key_prefix`, Redis is unable to delete documents correctly.
This leads me to question why the `delete` method of the Redis vector store is static. I've noticed that other vector stores, such as Pinecone, do not have a static `delete` function and seem to handle this differently.
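For illustration, a hedged sketch of what an instance-level `delete` might look like (names follow the current class loosely; this is not the actual implementation):
```python
from typing import Any, List, Optional

class Redis:  # illustrative subset of langchain.vectorstores.redis.Redis
    def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:
        """Instance-method variant: apply this store's key_prefix to each id."""
        if not ids:
            return False
        self.client.delete(*[f"{self.key_prefix}:{doc_id}" for doc_id in ids])
        return True
```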
### Expected behavior
index with Redis as vector store should delete documents correctly.
| index with redis as vector store cannot delete documents | https://api.github.com/repos/langchain-ai/langchain/issues/14383/comments | 2 | 2023-12-07T08:09:30Z | 2024-03-13T21:06:56Z | https://github.com/langchain-ai/langchain/issues/14383 | 2,030,135,709 | 14,383 |
[
"langchain-ai",
"langchain"
] | ### Feature request
GitLab is currently trying to adopt LangChain in https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/475 for Anthropic usage. It simply passes a raw prompt to the Anthropic client; however, LangChain does not seem to support this because:
- `langchain.chat_models.ChatAnthropic` class can't take a raw prompt. It expects [`List[BaseMessage]`](https://github.com/dosuken123/langchain/blob/master/libs/langchain/langchain/chat_models/anthropic.py#L45C5-L45C41) and constructs the new messages.
- `langchain.llms.Anthropic` class was deprecated by @hwchase17 in https://github.com/langchain-ai/langchain/commit/52d95ec47dbb06a1bcc3f0ff30cadc50135351db. We don't want to use a deprecated class.
It sounds like we should add an option to `ChatAnthropic` to allow raw prompt, or recover `langchain.llms.Anthropic` from deprecated state.
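For context, a hedged sketch of the workaround available today: calling the `anthropic` SDK directly with the raw prompt (client construction is illustrative).
```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=256,
    prompt="\n\nHuman: Hello\n\nAssistant:",  # passed through unmodified
)
print(completion.completion)
```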
### Motivation
Increasing the adoption of LangChain in GitLab
### Your contribution
I can contribute to this issue as LangChain contributor. | Allow ChatAnthropic to receive Raw prompt or don't deprecate llms.Anthropic | https://api.github.com/repos/langchain-ai/langchain/issues/14382/comments | 2 | 2023-12-07T06:57:30Z | 2024-03-17T16:09:26Z | https://github.com/langchain-ai/langchain/issues/14382 | 2,030,036,527 | 14,382 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationChain with a custom prompt, and now I am looking to integrate tools into it. How can we do that?
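One hedged possibility is to move from `ConversationChain` to a conversational agent, which accepts tools plus a customized prompt via `agent_kwargs` (the tool below is a stand-in):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

tools = [Tool(name="echo", func=lambda q: q, description="repeats the input")]
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history"),
    agent_kwargs={"prefix": "You are a helpful assistant."},  # custom prompt text
    verbose=True,
)
```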
### Suggestion:
_No response_ | How can we integrate tools with custom prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/14381/comments | 1 | 2023-12-07T06:45:10Z | 2024-03-16T16:12:36Z | https://github.com/langchain-ai/langchain/issues/14381 | 2,030,022,877 | 14,381 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I remove citations from LangChain results, and can we get the image out of them?
### Suggestion:
How can I remove citations from LangChain results, and can we get the image out of them? | Issue: How to remove citation from langchain results | https://api.github.com/repos/langchain-ai/langchain/issues/14380/comments | 1 | 2023-12-07T06:03:28Z | 2024-03-16T16:12:31Z | https://github.com/langchain-ai/langchain/issues/14380 | 2029974064 | 14380 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version = 0.0.344
Python version = 3.11.5
@agola11 @hwchase17
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code; it gives me the error below for create_sql_agent when I use the suffix variable.
```
A single string input was passed in, but this chain expects multiple inputs (set()). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})`
```
```python
agent_inputs = {
    'prefix': MSSQL_AGENT_PREFIX,
    'format_instructions': MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    'suffix': MSSQL_AGENT_SUFFIX,
    'llm': llm,
    'toolkit': toolkit,
    'top_k': 30,
    'early_stopping_method': 'generate',
    'handle_parsing_errors': True,
    'input_variables': ['question']
}
agent_executor_sql = create_sql_agent(**agent_inputs)
```
I also tried `'suffix': [MSSQL_AGENT_SUFFIX]` and `'suffix': str(MSSQL_AGENT_SUFFIX)`, yet the error persists. Kindly help.
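A hedged guess at the cause: the agent's prompt must keep the variables the executor fills in (`input` and `agent_scratchpad`), so `input_variables=['question']` likely breaks the chain's expected inputs. A sketch of what might work instead (assuming the custom suffix still contains `{input}` and `{agent_scratchpad}`):
```python
agent_inputs = {
    'prefix': MSSQL_AGENT_PREFIX,
    'format_instructions': MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    'suffix': MSSQL_AGENT_SUFFIX,
    'llm': llm,
    'toolkit': toolkit,
    'top_k': 30,
    'early_stopping_method': 'generate',
    'input_variables': ['input', 'agent_scratchpad'],
    'agent_executor_kwargs': {'handle_parsing_errors': True},
}
agent_executor_sql = create_sql_agent(**agent_inputs)
```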
### Expected behavior
It should take suffix and work. | create_sql_agent Suffix error | https://api.github.com/repos/langchain-ai/langchain/issues/14379/comments | 2 | 2023-12-07T05:39:25Z | 2024-03-29T16:07:00Z | https://github.com/langchain-ai/langchain/issues/14379 | 2,029,945,805 | 14,379 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.347
langchain-core==0.0.11
### Who can help?
@JeanBaptiste-dlb @hwchase17 @kacperlukawski
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code I'm trying is based on: https://python.langchain.com/docs/integrations/vectorstores/qdrant
```
import os
directory_path = 'data/'
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key="sk-XXX")
loader = TextLoader(os.path.join(directory_path, "concept-note.md"))
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)
```
The error:
```
Created a chunk of size 1893, which is longer than the specified 1000
Created a chunk of size 1728, which is longer than the specified 1000
Created a chunk of size 1317, which is longer than the specified 1000
Created a chunk of size 1464, which is longer than the specified 1000
Created a chunk of size 2119, which is longer than the specified 1000
Created a chunk of size 1106, which is longer than the specified 1000
Created a chunk of size 1822, which is longer than the specified 1000
Created a chunk of size 3658, which is longer than the specified 1000
Created a chunk of size 1233, which is longer than the specified 1000
Created a chunk of size 1522, which is longer than the specified 1000
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 20
16 docs = text_splitter.split_documents(documents)
17 docs
---> 20 qdrant = Qdrant.from_documents(
21 docs,
22 embeddings,
23 location=":memory:", # Local mode with in-memory storage only
24 collection_name="my_documents",
25 )
File /opt/conda/lib/python3.11/site-packages/langchain_core/vectorstores.py:510, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
508 texts = [d.page_content for d in documents]
509 metadatas = [d.metadata for d in documents]
--> 510 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:1345, in Qdrant.from_texts(cls, texts, embedding, metadatas, ids, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, collection_name, distance_func, content_payload_key, metadata_payload_key, vector_name, batch_size, shard_number, replication_factor, write_consistency_factor, on_disk_payload, hnsw_config, optimizers_config, wal_config, quantization_config, init_from, on_disk, force_recreate, **kwargs)
1210 """Construct Qdrant wrapper from a list of texts.
1211
1212 Args:
(...)
1311 qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
1312 """
1313 qdrant = cls.construct_instance(
1314 texts,
1315 embedding,
(...)
1343 **kwargs,
1344 )
-> 1345 qdrant.add_texts(texts, metadatas, ids, batch_size)
1346 return qdrant
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:190, in Qdrant.add_texts(self, texts, metadatas, ids, batch_size, **kwargs)
174 """Run more texts through the embeddings and add to the vectorstore.
175
176 Args:
(...)
187 List of ids from adding the texts into the vectorstore.
188 """
189 added_ids = []
--> 190 for batch_ids, points in self._generate_rest_batches(
191 texts, metadatas, ids, batch_size
192 ):
193 self.client.upsert(
194 collection_name=self.collection_name, points=points, **kwargs
195 )
196 added_ids.extend(batch_ids)
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:2136, in Qdrant._generate_rest_batches(self, texts, metadatas, ids, batch_size)
2122 # Generate the embeddings for all the texts in a batch
2123 batch_embeddings = self._embed_texts(batch_texts)
2125 points = [
2126 rest.PointStruct(
2127 id=point_id,
2128 vector=vector
2129 if self.vector_name is None
2130 else {self.vector_name: vector},
2131 payload=payload,
2132 )
2133 for point_id, vector, payload in zip(
2134 batch_ids,
2135 batch_embeddings,
-> 2136 self._build_payloads(
2137 batch_texts,
2138 batch_metadatas,
2139 self.content_payload_key,
2140 self.metadata_payload_key,
2141 ),
2142 )
2143 ]
2145 yield batch_ids, points
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:1918, in Qdrant._build_payloads(cls, texts, metadatas, content_payload_key, metadata_payload_key)
1912 raise ValueError(
1913 "At least one of the texts is None. Please remove it before "
1914 "calling .from_texts or .add_texts on Qdrant instance."
1915 )
1916 metadata = metadatas[i] if metadatas is not None else None
1917 payloads.append(
-> 1918 {
1919 content_payload_key: text,
1920 metadata_payload_key: metadata,
1921 }
1922 )
1924 return payloads
TypeError: unhashable type: 'list'
```
### Expected behavior
It should not raise an error. | Issue with Qdrant: TypeError: unhashable type: 'list' | https://api.github.com/repos/langchain-ai/langchain/issues/14378/comments | 8 | 2023-12-07T05:17:40Z | 2023-12-07T19:19:13Z | https://github.com/langchain-ai/langchain/issues/14378 | 2,029,922,881 | 14,378 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.344
langchain-experimental>=0.0.42
python==3.10.12
```
embeddings = OpenAIEmbeddings(model_name=model,
                              openai_api_key=get_model_path(model),
                              chunk_size=CHUNK_SIZE)
```
error
```
WARNING! model_name is not default parameter.
model_name was transferred to model_kwargs.
Please confirm that model_name is what you intended.
warnings.warn(
2023-12-07 12:36:12,645 - embeddings_api.py[line:39] - ERROR: Embeddings.create() got an unexpected keyword argument 'model_name'
AttributeError: 'NoneType' object has no attribute 'conjugate'
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model_name="text-embedding-ada-002",
                              openai_api_key='xxxx',
                              chunk_size=512)
data = embeddings.embed_documents(texts)
```
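For reference, a hedged fix: the constructor argument is `model`, not `model_name`; unknown kwargs are shunted into `model_kwargs` and then rejected by the embeddings API.
```python
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002",
                              openai_api_key='xxxx',
                              chunk_size=512)
```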
### Expected behavior
Normal loading | OpenAIEmbeddings bug | https://api.github.com/repos/langchain-ai/langchain/issues/14377/comments | 3 | 2023-12-07T05:16:19Z | 2023-12-11T04:28:11Z | https://github.com/langchain-ai/langchain/issues/14377 | 2,029,920,632 | 14,377 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add support for Private Service Connect endpoints in Vertex AI LLM.
### Motivation
Currently, `api_endpoint` for the VertexAIModelGarden LLM is hard-coded to `aiplatform.googleapis.com`:
https://github.com/langchain-ai/langchain/blob/db6bf8b022c17353b46f97ab3b9f44ff9e88a488/libs/langchain/langchain/llms/vertexai.py#L380-L382
Google supports Private Service Connect or private endpoints to google services.
https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#using-endpoints
Users can use private domains, e.g. `us-central1-aiplatform-xxxx.p.googleapis.com`, to call models in Vertex AI.
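A hedged sketch of the proposed knob (`api_endpoint_base` is the name suggested in this request, not an existing parameter):
```python
from langchain.llms import VertexAIModelGarden

llm = VertexAIModelGarden(
    project="my-project",
    endpoint_id="1234567890",
    api_endpoint_base="us-central1-aiplatform-xxxx.p.googleapis.com",  # hypothetical
)
```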
### Your contribution
I can create a PR to add api_endpoint_base to let users specify google api endpoint for VertexAIModelGarden.
Currently, pretrained models in `vertexai.language_models` don't support specifying a Google API endpoint.
I'll also create an issue on vertex ai python sdk.
Changes to pretrained model can be made if necessary changes are made to vertex ai python sdk. | Add support for private endpoint(Private Service Connect) for Vertex AI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/14374/comments | 1 | 2023-12-07T02:53:41Z | 2024-03-17T16:09:22Z | https://github.com/langchain-ai/langchain/issues/14374 | 2,029,776,848 | 14,374 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can the AgentType parameter of the `initialize_agent` function only be of one type? How can I specify multiple types? I want to set the agent type to 'STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION' and 'ZERO_SHOT_REACT_DESCRIPTION' at the same time.
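For reference, `initialize_agent` takes exactly one `AgentType`; a hedged pattern for getting "both" behaviours is to build two executors and pick one per request:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = [Tool(name="echo", func=lambda q: q, description="repeats the input")]

structured_agent = initialize_agent(
    tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION
)
zero_shot_agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
```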
### Suggestion:
Set the agent type to 'STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION' and 'ZERO_SHOT_REACT_DESCRIPTION' at the same time when creating an agent? | Issue: How can we specify multiple types when initialize an agent | https://api.github.com/repos/langchain-ai/langchain/issues/14372/comments | 10 | 2023-12-07T02:10:55Z | 2023-12-07T05:41:04Z | https://github.com/langchain-ai/langchain/issues/14372 | 2029730782 | 14372 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Issue:
```
PS E:\CODE\research-agents> python app.py
Traceback (most recent call last):
File "E:\CODE\research-agents\app.py", line 3, in <module>
from langchain.text_splitter import RecursiveCharacterTextSplitter
ModuleNotFoundError: No module named 'langchain'
```
What I've tried:
- Upgrading to langchain newest version
- Downgrading to langchain version==0.0.340
- Adding python path to environment variables
- Fresh environment with Python 3.10.x
- Fresh environment with Python 3.11.5
System:
Python ver: 3.11.5 (Anaconda)
Langchain ver: 0.0.340
OS: Windows 10
Thoughts, suggestions, and tips are greatly appreciated. Thanks in advance!
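One hedged diagnostic worth adding to the list above: confirm the interpreter that runs app.py is the same one langchain was installed into.
```
PS E:\CODE\research-agents> python -c "import sys; print(sys.executable)"
PS E:\CODE\research-agents> python -m pip show langchain
```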
### Suggestion:
_No response_ | Issue: No module named 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/14371/comments | 9 | 2023-12-07T02:09:09Z | 2024-07-21T15:47:04Z | https://github.com/langchain-ai/langchain/issues/14371 | 2,029,727,951 | 14,371 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to build a Chinese agent using customized tools and a prompt in Chinese. However, the pre-defined output parser and streaming didn't work well; for example, the thought process was printed out even though I'm using the FinalStreamingStdOutCallbackHandler. I was wondering if you can help me (1) understand how the prompt, output parser, and streaming work in an agent, and (2) provide some suggestions for making my own prompt, output parser, and streaming class for Chinese processing.
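On point (2), one hedged starting place: `FinalStreamingStdOutCallbackHandler` only suppresses output until it sees the English "Final Answer:" tokens, so a Chinese prompt needs its own prefix (the token split below is illustrative):
```python
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)

handler = FinalStreamingStdOutCallbackHandler(
    answer_prefix_tokens=["最终", "答案", ":"]
)
```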
### Suggestion:
_No response_ | Issue: agent output parser | https://api.github.com/repos/langchain-ai/langchain/issues/14363/comments | 1 | 2023-12-06T21:40:20Z | 2024-03-16T16:12:16Z | https://github.com/langchain-ai/langchain/issues/14363 | 2,029,429,941 | 14,363 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/ollama_functions
Ollama functions page shows how to start a conversation where the llm calls a function.
But it stops there.
How do we keep the conversation going? (i.e. I should give the llm the answer to the function call, and then it should give a text reply with its explanation of the weather in Paris, right?)
Here's what I tried
```
>>> from langchain_experimental.llms.ollama_functions import OllamaFunctions
>>> from langchain.schema import HumanMessage, FunctionMessage
>>> import json
>>> model = OllamaFunctions(model="mistral")
>>> model = model.bind(
... functions=[
... {
... "name": "get_current_weather",
... "description": "Get the current weather in a given location",
... "parameters": {
... "type": "object",
... "properties": {
... "location": {
... "type": "string",
... "description": "The city and state, " "e.g. San Francisco, CA",
... },
... "unit": {
... "type": "string",
... "enum": ["celsius", "fahrenheit"],
... },
... },
... "required": ["location"],
... },
... }
... ],
... function_call={"name": "get_current_weather"},
... )
>>>
>>> messages = [HumanMessage(content="how is the weather in Paris?")]
>>> aim = model.invoke(messages)
>>> aim
AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{"location": "Paris, FR", "unit": "celsius"}'}})
>>> messages.append(aim)
>>> fm = FunctionMessage(name='get_current_weather', content=json.dumps({'temperature': '25 celsius'}))
>>> messages.append(fm)
>>> aim = model.invoke(messages)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2871, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py", line 90, in _generate
response_message = self.llm.predict_messages(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 685, in predict_messages
return self(messages, stop=_stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 97, in _generate
prompt = self._format_messages_as_text(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 70, in _format_messages_as_text
[self._format_message_as_text(message) for message in messages]
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 70, in <listcomp>
[self._format_message_as_text(message) for message in messages]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 65, in _format_message_as_text
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type content='{"temperature": "25 celsius"}' name='get_current_weather'
>>>
```
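For what it's worth, a hedged workaround suggested by the traceback: ChatOllama only knows how to render Human/AI/System/Chat messages, so the function result can be passed back as a plain HumanMessage instead of a FunctionMessage (unverified; this continues the session above):
```python
messages[-1] = HumanMessage(
    content='Result of get_current_weather: {"temperature": "25 celsius"}'
)
aim = model.invoke(messages)
```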
### Idea or request for content:
_No response_ | DOC: Explain how to continue the conversation with OllamaFunctions | https://api.github.com/repos/langchain-ai/langchain/issues/14360/comments | 12 | 2023-12-06T21:17:30Z | 2024-06-13T16:07:37Z | https://github.com/langchain-ai/langchain/issues/14360 | 2,029,399,684 | 14,360 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.315
python3.9
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=1000,
    chunk_overlap=0
)
paragraphs = splitter.split_text("This is the first paragraph.\n\nThis is the second paragraph.")
print(paragraphs)
paragraphs = splitter.split_text("This is the first paragraph.\n \nThis is the second paragraph.")
print(paragraphs)
```
Returns
```
This is the first paragraph.\nThis is the second paragraph.  # seems wrong, as it omits a newline character
This is the first paragraph.\n \nThis is the second paragraph.  # correct
```
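A hedged reading of the source: `split_text` splits on the separator, drops empty splits, and then re-joins chunks with a single separator, which is where the second `\n` disappears:
```python
import re

text = "This is the first paragraph.\n\nThis is the second paragraph."
splits = re.split("\n", text)             # ['...', '', '...'] - note the empty piece
splits = [s for s in splits if s != ""]   # the splitter filters empty splits out
print("\n".join(splits))                  # re-joined with a single separator
```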
### Expected behavior
```
This is the first paragraph.\n\nThis is the second paragraph.
This is the first paragraph.\n \nThis is the second paragraph.
```
| Unexpected behaviour: CharacterTextSpitter | https://api.github.com/repos/langchain-ai/langchain/issues/14348/comments | 3 | 2023-12-06T16:02:29Z | 2024-03-17T16:09:11Z | https://github.com/langchain-ai/langchain/issues/14348 | 2,028,892,316 | 14,348 |
[
"langchain-ai",
"langchain"
] | ### System Info
I want to use `with_retry` from the Runnable class with the Bedrock class, to retry when Bedrock raises a ThrottlingException (too many requests). The problem is that the error catching in the `BedrockBase` class, in the `_prepare_input_and_invoke` method, is too broad (line 269):
```
except Exception as e:
    raise ValueError(f"Error raised by bedrock service: {e}")
```
Is it possible to use something like :
```
except Exception as e:
    raise ValueError(f"Error raised by bedrock service: {e}") from e
```
Or:
```
except Exception as e:
    raise ValueError(f"Error raised by bedrock service: {e}").with_traceback(e.__traceback__)
```
To keep the initial exception type.
Am I missing something that would explain this broad error catching?
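For context, the goal (a hedged sketch, assuming the original exception were preserved or re-raised so a retry predicate could target it) would be something like:
```python
llm_with_retry = llm.with_retry(            # llm: an existing Bedrock instance
    retry_if_exception_type=(ValueError,),  # would ideally target ThrottlingException
    stop_after_attempt=5,
)
```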
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No reproduction needed
### Expected behavior
Expected behavior explained previously | Error catching in BedrockBase | https://api.github.com/repos/langchain-ai/langchain/issues/14347/comments | 3 | 2023-12-06T14:54:05Z | 2024-01-03T01:25:50Z | https://github.com/langchain-ai/langchain/issues/14347 | 2,028,739,597 | 14,347 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have been using OpenAI embeddings, specifically text-embedding-ada-002, and noticed they are very sensitive even to punctuation. I have around 1000 chunks and each time need to extract the 15 most similar chunks for my query. I have been testing my query without punctuation, and when I add a dot '.' at the end, the retriever returns a different set than for the unpunctuated query (some chunks are the same, but new ones may appear, or the initial order is different).
- Have you noticed anything similar ?
- Is it the basic behaviour of this embedding to be that sensitive to punctuation ?
- Is there a way to make it more robust to minor changes in the query ?
FYI: I am using PGVector to store my chunk vectors.
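One hedged mitigation while investigating: normalise queries before embedding so trivial punctuation differences cannot change the candidate set.
```python
def normalize_query(query: str) -> str:
    """Strip trailing punctuation and collapse whitespace before embedding."""
    return " ".join(query.strip().rstrip(".!?").split()).lower()
```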
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
- | Embedding very sensitive to punctuation | https://api.github.com/repos/langchain-ai/langchain/issues/14346/comments | 1 | 2023-12-06T13:00:05Z | 2024-03-16T16:12:06Z | https://github.com/langchain-ai/langchain/issues/14346 | 2,028,506,571 | 14,346 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.346
Python version: 3.9.16
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`PythonREPL`, which has been moved to `experimental`, still exists in the base library under the path:
`libs/langchain/langchain/utilities/python.py`
which triggers security-scan vulnerabilities (the `exec()` call) and doesn't allow us to use the package in a production environment.
Since
https://nvd.nist.gov/vuln/detail/CVE-2023-39631
should most likely be closed soon, this is the only vulnerability that would have to be addressed so we can freely use `langchain`.
### Expected behavior
`PythonREPL` should only exist in `experimental` version of `langchain` | `PythonREPL` removal from langchain library | https://api.github.com/repos/langchain-ai/langchain/issues/14345/comments | 5 | 2023-12-06T12:20:42Z | 2024-05-22T17:48:58Z | https://github.com/langchain-ai/langchain/issues/14345 | 2,028,418,418 | 14,345 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I have created a multi-model inference endpoint in the new version of SageMaker Studio (the original one is now called Studio Classic). I don't see a place where I can set the `inference component` in the `SagemakerEndpoint` class, so I end up getting the expected error from SageMaker:
`An error occurred (ValidationError) when calling the InvokeEndpointWithResponseStream operation: Inference Component Name header is required for endpoints to which you plan to deploy inference components. Please include Inference Component Name header or consider using SageMaker models.`
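A hedged sketch of what may already work: `InferenceComponentName` is the SageMaker runtime's parameter for this, and `SagemakerEndpoint` has an `endpoint_kwargs` field that forwards extra arguments to `invoke_endpoint` (untested; `content_handler` is assumed defined):
```python
llm = SagemakerEndpoint(
    endpoint_name="my-multi-model-endpoint",
    region_name="us-east-1",
    content_handler=content_handler,
    endpoint_kwargs={"InferenceComponentName": "my-component"},
)
```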
### Motivation
To support multi-model endpoints in SageMaker, which are a cost-efficient way to run models.
### Your contribution
I can test and verify. | Support SageMaker Inference Component of multi model endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/14344/comments | 3 | 2023-12-06T11:51:12Z | 2024-01-12T02:36:39Z | https://github.com/langchain-ai/langchain/issues/14344 | 2,028,370,080 | 14,344 |
[
"langchain-ai",
"langchain"
] | ### System Info
python3.10.13
langchain==0.0.346
langchain-core==0.0.10
### Who can help?
@agola11, @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages

text_with_call = """You are a helpful assistant. Here is a function call that you should not imitate: <functioncall> {"name":"generate_anagram", "arguments": {"word": "listen"}}
"""
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            text_with_call,
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm
    | OpenAIFunctionsAgentOutputParser()
)
output = agent.invoke(
    {
        "input": "Hello",
        "intermediate_steps": [],
    }
)
```
### Expected behavior
I would expect not to get an error.
I get error: `KeyError: 'Input to ChatPromptTemplate is missing variable \'"name"\'. Expected: [\'"name"\', \'agent_scratchpad\', \'input\'] Received: [\'input\', \'agent_scratchpad\']'`
> I think this error is caused by the f-string prompt template recognising the brackets inside the prompt as additional variables needing input.
I tried using Jinja2 template for my prompt instead, but I cannot setup this option in `ChatPromptTemplate`.
I understand this could be for security reasons as mentioned in [this issue](https://github.com/langchain-ai/langchain/issues/4394)
So the possible solutions I see:
- Use PromptTemplate like:
```
prompt = PromptTemplate.from_template(
    text_with_call, template_format="jinja2"
)
```
But I would like to make use of the `agent_scratchpad`, so the ChatPromptTemplate is needed in my case.
- Change the ChatPromptTemplate class to support Jinja2 templates, which I understand cannot be done
- Re-implement my custom ChatPromptTemplate
- Find another way to accept this prompt without falsely flagging prompt elements as input variables.
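One more hedged option beyond the list above: escape the literal braces so the f-string template stops treating them as input variables. A sketch:
```python
text_with_call = (
    "You are a helpful assistant. Here is a function call that you should "
    'not imitate: <functioncall> {{"name":"generate_anagram", '
    '"arguments": {{"word": "listen"}}}}\n'
)
```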
Do you have any ideas? Thanks for your help 😃 | Specific prompt adds false input variables | https://api.github.com/repos/langchain-ai/langchain/issues/14343/comments | 2 | 2023-12-06T11:35:34Z | 2023-12-06T11:48:20Z | https://github.com/langchain-ai/langchain/issues/14343 | 2,028,344,953 | 14,343 |
[
"langchain-ai",
"langchain"
] | ### System Info
I tried this example code:
```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# Not in the original snippet, but required for it to run:
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
vectorstore = Chroma(collection_name="test", embedding_function=OpenAIEmbeddings())

# Initialize the retriever
parent_document_retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```
but I encountered an error:
```
1 # Initialize the retriever
----> 2 parent_document_retriever = ParentDocumentRetriever(
3 vectorstore=vectorstore,
4 docstore=store,
5 child_splitter=child_splitter,
TypeError: MultiVectorRetriever.__init__() got an unexpected keyword argument 'child_splitter'
```
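A hedged guess: this signature error usually means the installed `langchain` predates the splitter arguments on `ParentDocumentRetriever`, so upgrading is worth trying first:
```
pip install -U langchain
```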
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
vectorstore = Chroma(collection_name="test", embedding_function=OpenAIEmbeddings())
# Initialize the retriever
parent_document_retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```
### Expected behavior
I can run. | Error: | https://api.github.com/repos/langchain-ai/langchain/issues/14342/comments | 5 | 2023-12-06T11:09:11Z | 2023-12-06T19:12:51Z | https://github.com/langchain-ai/langchain/issues/14342 | 2,028,301,021 | 14,342 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10 Pro - 22H2 - Build 9045.3693 - Windows Feature Experience Pack 1000.19053.1000.0
Python 3.11.5
langchain-cli 0.0.19
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a virtual environment with `python -m venv .venv` and activate it;
2. Install langchain-cli with `pip install -U "langchain-cli[serve]"`;
3. Launch `langchain app new qdrant-app --package self-query-qdrant`.
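A hedged workaround for `charmap` decode errors on Windows: force UTF-8 mode before re-running the command (assumption: the failure is the default cp1252 codec reading a UTF-8 template file).
```
set PYTHONUTF8=1
langchain app new qdrant-app --package self-query-qdrant
```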
### Expected behavior
As for other templates that I have installed, I expected to find the .py files of the app in the packages directory, but it's empty.
[log.txt](https://github.com/langchain-ai/langchain/files/13580197/log.txt)
| When I install the template "self-query-qdrant" I get this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 132: character maps to <undefined> | https://api.github.com/repos/langchain-ai/langchain/issues/14341/comments | 5 | 2023-12-06T10:42:16Z | 2024-03-18T16:07:29Z | https://github.com/langchain-ai/langchain/issues/14341 | 2,028,253,492 | 14,341 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.299, Python 3.8, gpt-4
When AzureChatOpenAI uses gpt-4 and I use agents to handle problems, an error about RateLimitError is raised; with gpt-3.5 this problem does not appear.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = AzureChatOpenAI(
    openai_api_base=api_base,
    openai_api_version=api_version,
    deployment_name=deployment_name,
    openai_api_key=api_token,
    openai_api_type="azure",
    max_tokens=max_tokens,
    model_name=model_name,
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent = create_pandas_dataframe_agent(
    llm,
    df,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
question = "what are the top 2 most polluted counties"
res = agent.run(question)
```
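Hedged mitigations: the S0 limit is tokens-per-minute, and gpt-4 requests are larger and slower than gpt-3.5's, so raising the client-side retry count (and/or requesting a quota increase, as the error suggests) usually helps:
```python
llm = AzureChatOpenAI(
    openai_api_base=api_base,
    openai_api_version=api_version,
    deployment_name=deployment_name,
    openai_api_key=api_token,
    openai_api_type="azure",
    max_tokens=max_tokens,
    model_name=model_name,
    max_retries=10,  # back off and retry instead of failing on 429s
)
```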
### Expected behavior
Tell me what caused this error and how to avoid it
| Requests to the Creates a completion for the chat message Operation under Azure OpenAI API version 2023-03-15-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit. | https://api.github.com/repos/langchain-ai/langchain/issues/14339/comments | 1 | 2023-12-06T10:29:14Z | 2024-03-16T16:11:56Z | https://github.com/langchain-ai/langchain/issues/14339 | 2,028,230,943 | 14,339 |
[
"langchain-ai",
"langchain"
] | ### System Info
- LangChain version: 0.0.346
- Platform: Mac mini M1 16GB - macOS Sonoma 14.0
- Python version: 3.11
- LiteLLM version: 1.10.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage  # missing in the original snippet
# Code for initializing the ChatLiteLLM instance
chat_model = ChatLiteLLM(api_base="https://custom.endpoints.huggingface.cloud", model="huggingface/Intel/neural-chat-7b-v3-1")
# Make a call to LiteLLM
text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]
print(chat_model(messages).content)
```
Error:
```
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 4919, in handle_huggingface_chunk
raise ValueError(chunk)
ValueError: {"error":"The model Intel/neural-chat-7b-v3-1 is too large to be loaded automatically (14GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference-endpoints)."}
```
The same error occurs if:
`chat_model = ChatLiteLLM(model="huggingface/Intel/neural-chat-7b-v3-1")`
So the api_base parameter is not properly propagated in client calls in ChatLiteLLM.
### Expected behavior
I would expect the ChatLiteLLM instance to correctly utilize the api_base parameter when making requests to the LiteLLM client. This should enable using models larger than the default size limit without encountering the error message about model size limits.
Notably, if I explicitly add the api_base argument in chat_models/litellm.py on line 239 (e.g., `return self.client.completion(api_base=self.api_base, **kwargs)`), the problem is resolved. This suggests that the api_base argument is not being correctly passed through **kwargs. | api_base parameter not properly propagated in client calls in ChatLiteLLM | https://api.github.com/repos/langchain-ai/langchain/issues/14338/comments | 7 | 2023-12-06T09:13:46Z | 2023-12-07T11:20:13Z | https://github.com/langchain-ai/langchain/issues/14338 | 2,028,088,277 | 14,338 |
[
"langchain-ai",
"langchain"
] | ### System Info
Trying to execute the chatbot script with a SageMaker endpoint for the LLaMA 2 LLM, I get a dict validation error for RetrievalQA.
Request:
```python
def retreiveFromLL(userQuery: str) -> QueryResponse:
    pre_prompt = """[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Answer exactly in detail from the context
<</SYS>>
Answer the question below from context below :
"""
    prompt = pre_prompt + "CONTEXT:\n\n{context}\n" + "Question : {question}" + "[\INST]"
    llama_prompt = PromptTemplate(template=prompt, input_variables=["context", "question"])
    chain_type_kwargs = {"prompt": llama_prompt}

    embeddings = SentenceTransformerEmbeddings(model_name=EMBEDDING_MODEL)

    # Initialize PGVector index
    vector_db = PGVector(
        embedding_function=embeddings,
        collection_name='CSE_runbooks',
        connection_string=CONNECTION_STRING,
    )
    print("**Invoking PGVector")

    # Custom ContentHandler to handle input and output to the SageMaker Endpoint
    class LlamaChatContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, inputs: str, model_kwargs: Dict = {}) -> bytes:
            payload = {
                "inputs": pre_prompt,
                "parameters": {"max_new_tokens": 2000, "top_p": 0.9, "temperature": 0.1}}
            input_str = ' '.join(inputs)
            input_str = json.dumps(payload)
            print(payload)
            return input_str.encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            response_json = json.loads(output.read().decode("utf-8"))
            content = response_json[0]["generated_text"]
            return content

    # Initialize SagemakerEndpoint
    print("Invoking LLM SageMaker Endpoint")
    llm = SagemakerEndpoint(
        endpoint_name=LLAMA2_ENDPOINT,
        region_name=AWS_REGION,
        content_handler=LlamaChatContentHandler(),
        callbacks=[StreamingStdOutCallbackHandler()],
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
    )
    print(llm)

    # Create a RetrievalQA instance with Pinecone as the retriever
    query = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vector_db, return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
    print("**Invoking query")
    result = query({"query": userQuery})
    response = result["result"]
```
Error:
```
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/home/ec2-user/milvus/qa_UI.py", line 26, in <module>
    userResponse = getLLMResponse(user_input)
  File "/home/ec2-user/milvus/getLLMResponse1.py", line 37, in getLLMResponse
    userResponse = retreiveFromLL(userQuery)
  File "/home/ec2-user/milvus/getLLMResponse1.py", line 97, in retreiveFromLL
    query = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vector_db, return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 103, in from_chain_type
    return cls(combine_documents_chain=combine_documents_chain, **kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for RetrievalQA
retriever
  value is not a valid dict (type=type_error.dict)
```
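For context, a hedged guess at the fix: `RetrievalQA` expects a `BaseRetriever`, not a vector store, so wrapping the store usually resolves exactly this validation error:
```python
query = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_db.as_retriever(),  # wrap the PGVector store in a retriever
    return_source_documents=True,
    chain_type_kwargs=chain_type_kwargs,
)
```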
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the code
### Expected behavior
response from LLM | Dict validation error | https://api.github.com/repos/langchain-ai/langchain/issues/14337/comments | 28 | 2023-12-06T08:43:25Z | 2024-06-29T16:16:49Z | https://github.com/langchain-ai/langchain/issues/14337 | 2,028,038,022 | 14,337 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm exploring the use of open-source models via the excellent Ollama application, and I'm facing quite some challenges in adapting instructions that were written for OpenAI-type models (they are not perfectly interchangeable, I must say).
Is the format_instructions output generated by the PydanticOutputParser supposed to work for non-OpenAI models as well?
With Zephyr, the model keeps returning the answer AND the JSON schema, even when I try to stop it from doing that...
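For reference, the pattern in question as a minimal sketch (the schema is a placeholder, and it assumes a local Ollama server with the zephyr model pulled):
```python
from langchain.chat_models import ChatOllama
from langchain.prompts import PromptTemplate
from langchain.output_parsers import PydanticOutputParser
from langchain.pydantic_v1 import BaseModel, Field

class Answer(BaseModel):  # placeholder schema
    answer: str = Field(description="the answer to the question")

parser = PydanticOutputParser(pydantic_object=Answer)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
llm = ChatOllama(model="zephyr")  # assumes a running local Ollama server
chain = prompt | llm | parser
print(chain.invoke({"query": "What is the capital of France?"}))
```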
### Idea or request for content:
_No response_ | Get JSON output from non-OpenAI models | https://api.github.com/repos/langchain-ai/langchain/issues/14335/comments | 2 | 2023-12-06T07:50:44Z | 2024-03-17T16:09:02Z | https://github.com/langchain-ai/langchain/issues/14335 | 2,027,899,154 | 14,335 |
[
"langchain-ai",
"langchain"
] | ### System Info
Deployed on Cloud Run
```
Python 3.10
langchain 0.0.345
langchain-cli 0.0.19
langchain-core 0.0.9
langchain-experimental 0.0.43
langdetect 1.0.9
langserve 0.0.32
langsmith 0.0.69
```
```
google-ai-generativelanguage 0.3.3
google-api-core 2.14.0
google-api-python-client 2.109.0
google-auth 2.24.0
google-auth-httplib2 0.1.1
google-auth-oauthlib 1.1.0
google-cloud-aiplatform 1.36.4
google-cloud-bigquery 3.13.0
google-cloud-core 2.3.3
google-cloud-discoveryengine 0.11.3
google-cloud-pubsub 2.18.4
google-cloud-resource-manager 1.10.4
google-cloud-storage 2.13.0
google-crc32c 1.5.0
google-generativeai 0.2.2
google-resumable-media 2.6.0
googleapis-common-protos 1.61.0
```
Deployed via the Langchain template here:
https://github.com/langchain-ai/langchain/tree/master/templates/rag-google-cloud-vertexai-search
### Who can help?
@holtskinner has, I think, been fixing Google-related stuff, or knows someone who can
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Add https://github.com/langchain-ai/langchain/tree/master/templates/rag-google-cloud-vertexai-search to a LangServe server, then run it in the playground with any input; you get an error:
`AttributeError("'ProtoType' object has no attribute 'DESCRIPTOR'")`
Langsmith chain:
```
{
"id": [
"langchain_core",
"runnables",
"RunnableSequence"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"last": {
"id": [
"langchain_core",
"output_parsers",
"string",
"StrOutputParser"
],
"lc": 1,
"type": "constructor",
"kwargs": {}
},
"first": {
"id": [
"langchain_core",
"runnables",
"RunnableParallel"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"steps": {
"context": {
"id": [
"langchain",
"retrievers",
"google_vertex_ai_search",
"GoogleVertexAISearchRetriever"
],
"lc": 1,
"repr": "GoogleVertexAISearchRetriever(project_id='project-id', data_store_id='datastore-id')",
"type": "not_implemented"
},
"question": {
"id": [
"langchain_core",
"runnables",
"RunnablePassthrough"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"func": null,
"afunc": null,
"input_type": null
}
}
}
}
},
"middle": [
{
"id": [
"langchain_core",
"prompts",
"chat",
"ChatPromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"messages": [
{
"id": [
"langchain_core",
"prompts",
"chat",
"HumanMessagePromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"prompt": {
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"template": "Answer the question based only on the following context:\n{context}\nQuestion: {question}\n",
"input_variables": [
"context",
"question"
],
"template_format": "f-string",
"partial_variables": {}
}
}
}
}
],
"input_variables": [
"context",
"question"
]
}
},
{
"id": [
"langchain",
"chat_models",
"vertexai",
"ChatVertexAI"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"model_name": "chat-bison",
"temperature": 0
}
}
]
}
}
```
The traceback comes back as:
```
File "/usr/local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 157, in _aget_relevant_documents
return await asyncio.get_running_loop().run_in_executor(
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 343, in _get_relevant_documents
search_request = self._create_search_request(query)
File "/usr/local/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 327, in _create_search_request
return SearchRequest(
File "/usr/local/lib/python3.10/site-packages/proto/message.py", line 570, in __init__
pb_value = marshal.to_proto(pb_type, value)
File "/usr/local/lib/python3.10/site-packages/proto/marshal/marshal.py", line 222, in to_proto
proto_type.DESCRIPTOR.has_options
AttributeError: 'ProtoType' object has no attribute 'DESCRIPTOR'
```
### Expected behavior
The call to https://github.com/langchain-ai/langchain/blob/e1ea1912377ca7c013e89fac4c1d26c0cb836009/libs/langchain/langchain/retrievers/google_vertex_ai_search.py#L327
returns correctly with some documents. | GoogleVertexAISearchRetriever - AttributeError: 'ProtoType' object has no attribute 'DESCRIPTOR' | https://api.github.com/repos/langchain-ai/langchain/issues/14333/comments | 1 | 2023-12-06T07:13:51Z | 2024-03-17T16:08:58Z | https://github.com/langchain-ai/langchain/issues/14333 | 2,027,823,831 | 14,333 |
[
"langchain-ai",
"langchain"
] | ### System Info
I already created a venv and re-installed langchain (`pip install langchain`), but I get this error:
> from langserve import add_routes
> File "/usr/local/lib/python3.10/dist-packages/langserve/__init__.py", line 7, in <module>
> from langserve.client import RemoteRunnable
> File "/usr/local/lib/python3.10/dist-packages/langserve/client.py", line 29, in <module>
> from langchain.callbacks.tracers.log_stream import RunLogPatch
> ModuleNotFoundError: No module named 'langchain.callbacks.tracers.log_stream'
>
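(A guess at the cause, not verified: `langserve` here imports a module that has moved between `langchain` releases, so the two package versions may simply be out of sync. Upgrading them together might resolve it:)
```sh
pip install -U langchain langserve
```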
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```sh
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install langchain
$ pip install google-generativeai
$ pip install fastapi
$ pip install uvicorn
$ pip install langserve
$ uvicorn main:app --host 0.0.0.0 --port 8000
```
### Expected behavior
The app runs with langserve without error | No module named 'langchain.callbacks.tracers.log_stream' | https://api.github.com/repos/langchain-ai/langchain/issues/14330/comments | 4 | 2023-12-06T04:00:41Z | 2024-03-17T16:08:51Z | https://github.com/langchain-ai/langchain/issues/14330 | 2,027,603,413 | 14,330
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I cannot figure out why validate_tools_single_input is necessary. I've tried to use multi-input tools in CHAT_CONVERSATIONAL_REACT_DESCRIPTION agents by simply commenting out the implementation of that function, and everything seemed fine.
So what is the purpose of this design, and is it possible to use multi-input tools in a more flexible setup?
### Suggestion:
_No response_ | Issue: The purpose to validate_tools_single_input in CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent | https://api.github.com/repos/langchain-ai/langchain/issues/14329/comments | 2 | 2023-12-06T03:49:44Z | 2024-03-17T16:08:47Z | https://github.com/langchain-ai/langchain/issues/14329 | 2,027,594,343 | 14,329 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It seems the langchain library only supports anonymizing. The native Microsoft library can redact data:
```
analyzer_results = analyzer.analyze(
    text=text_to_anonymize,
    entities=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS", "URL", "LOCATION"],
    ad_hoc_recognizers=ad_hoc_recognizers,
    language='en',
)
anonymized_results = anonymizer.anonymize(
    text=text_to_anonymize,
    analyzer_results=analyzer_results,
    operators={"DEFAULT": OperatorConfig("redact", {})},
)
```
I don't see a way to do this via langchain_experimental.data_anonymizer
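(Unless I'm missing something, `PresidioAnonymizer` already accepts an `operators` mapping, so redaction might be expressible today; an untested sketch:)
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer
from presidio_anonymizer.entities import OperatorConfig

# Route each detected entity type through Presidio's built-in "redact" operator
anonymizer = PresidioAnonymizer(
    operators={
        entity: OperatorConfig("redact", {})
        for entity in ["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS", "URL", "LOCATION"]
    }
)
print(anonymizer.anonymize("My name is John Doe, call me at 313-666-7440"))
```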
### Motivation
Redaction is cleaner than anonymizing. It's better than replacing first names with gibberish for my use case.
### Your contribution
I can help provide use cases, like I did above. | Allow PresidioAnonymizer() to redact data instead of anonymizing. | https://api.github.com/repos/langchain-ai/langchain/issues/14328/comments | 3 | 2023-12-06T03:35:45Z | 2023-12-27T11:33:34Z | https://github.com/langchain-ai/langchain/issues/14328 | 2,027,583,278 | 14,328
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I tried to run the notebook [Semi_structured_multi_modal_RAG_LLaMA2.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb). I installed and imported the required libraries, but an error message was returned as a result of running the below code:
```
from typing import Any

from pydantic import BaseModel
from unstructured.partition.pdf import partition_pdf

path = "/home/nickjtay/LLaVA/"

raw_pdf_elements = partition_pdf(
    filename=path + "LLaVA.pdf",
    # Using pdf format to find embedded image blocks
    extract_images_in_pdf=True,
    # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
    # Titles are any sub-section of the document
    infer_table_structure=True,
    # Post processing to aggregate text once we have the title
    chunking_strategy="by_title",
    # Chunking params to aggregate text blocks
    # Attempt to create a new chunk 3800 chars
    # Attempt to keep chunks > 2000 chars
    # Hard max on chunks
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path=path,
)
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[5], line 4
1 from typing import Any
3 from pydantic import BaseModel
----> 4 from unstructured.partition.pdf import partition_pdf
5 import pdfminer
6 print(pdfminer.utils.__version__)
File ~/Projects/langtest1/lib/python3.10/site-packages/unstructured/partition/pdf.py:40
38 from pdfminer.pdfparser import PSSyntaxError
39 from pdfminer.pdftypes import PDFObjRef
---> 40 from pdfminer.utils import open_filename
41 from PIL import Image as PILImage
43 from unstructured.chunking.title import add_chunking_strategy
ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/home/nickjtay/Projects/langtest1/lib/python3.10/site-packages/pdfminer/utils.py)
```
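(My guess so far, based on similar reports: the legacy `pdfminer` package is shadowing `pdfminer.six`, which is the distribution that actually provides `open_filename`. Possibly fixable with:)
```sh
pip uninstall pdfminer
pip install --upgrade pdfminer.six
```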
### Suggestion:
_No response_ | Issue: ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/home/nickjtay/Projects/langtest1/lib/python3.10/site-packages/pdfminer/utils.py) | https://api.github.com/repos/langchain-ai/langchain/issues/14326/comments | 6 | 2023-12-06T02:23:04Z | 2024-06-28T16:05:53Z | https://github.com/langchain-ai/langchain/issues/14326 | 2,027,520,337 | 14,326
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently when getting started developing, there is:
- `extras`: `extended_testing`
- `poetry`: `dev`, `test`, `test_integration`
There is also an annoying `rapidfuzz` error one can hit: https://github.com/langchain-ai/langchain/issues/12237
Can we have the poetry groups reusing `extras`?
### Motivation
Fewer moving parts and a clearer installation workflow
### Your contribution
I propose:
- Removing `dev` group
- Having the `test` group install the `extended_testing` extra | Better synergy with `poetry` groups and extras | https://api.github.com/repos/langchain-ai/langchain/issues/14321/comments | 1 | 2023-12-05T22:51:16Z | 2024-03-16T16:11:31Z | https://github.com/langchain-ai/langchain/issues/14321 | 2,027,269,893 | 14,321 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I believe the Oobabooga Text Generation Web UI API was rewritten, causing the code on the TextGen page of the Langchain docs to stop working.
e.g., the way the code talks to the websocket (`ws:`) endpoint causes a 403. I can execute API calls that work well, e.g.:
```sh
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello! Who are you?"
      }
    ],
    "mode": "chat",
    "character": "Example"
  }'
```
works, while `llm_chain.run(question)` returns a 403 (failed handshake).
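(In the meantime, since the rewritten API is OpenAI-compatible, a workaround sketch, untested, is to point langchain's OpenAI client at the local server instead of the old websocket wrapper:)
```python
from langchain.chat_models import ChatOpenAI

# Point langchain's OpenAI-compatible client at the local text-generation-webui server
llm = ChatOpenAI(
    openai_api_base="http://127.0.0.1:5000/v1",
    openai_api_key="dummy",  # the local server ignores the key
)
```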
### Idea or request for content:
It would be awesome if this were fixed. If not, please pull the page. | DOC: TextGen (Text Generation Web UI) - the code no longer works. | https://api.github.com/repos/langchain-ai/langchain/issues/14318/comments | 10 | 2023-12-05T22:22:39Z | 2024-05-16T11:09:38Z | https://github.com/langchain-ai/langchain/issues/14318 | 2,027,229,878 | 14,318
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
Currently, I want to build a RAG chatbot for production.
I already have my own LLM API, and I want to create a custom LLM and then use it in the RetrievalQA.from_chain_type function.
```
curl --location 'https:/myhost:10001/llama/api' -k \
--header 'Content-Type: application/json' \
--data-raw '{
"inputs": "[INST] Question: Who is Albert Einstein? \n Answer: [/INST]",
"parameters": {"max_new_tokens":100},
"token": "abcdfejkwehr"
}
```
I don't know whether Langchain supports this in my case.
I read about this topic on reddit: https://www.reddit.com/r/LangChain/comments/17v1rhv/integrating_llm_rest_api_into_a_langchain/
And in langchain document: https://python.langchain.com/docs/modules/model_io/llms/custom_llm
But this still does not work when I apply the custom LLM to qa_chain.
Below is my code. I hope for your support; sorry for my language, English is not my mother tongue.
```
from pydantic import Extra
import requests
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class LlamaLLM(LLM):
    llm_url = 'https:/myhost/llama/api'

    class Config:
        extra = Extra.forbid

    @property
    def _llm_type(self) -> str:
        return "Llama2 7B"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")

        payload = {
            "inputs": prompt,
            "parameters": {"max_new_tokens": 100},
            "token": "abcdfejkwehr"
        }
        headers = {"Content-Type": "application/json"}

        response = requests.post(self.llm_url, json=payload, headers=headers, verify=False)
        response.raise_for_status()

        # print("API Response:", response.json())
        return response.json()['generated_text']  # get the response from the API

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"llmUrl": self.llm_url}
```
```
llm = LlamaLLM()
```
```
#Testing
prompt = "[INST] Question: Who is Albert Einstein? \n Answer: [/INST]"
result = llm._call(prompt)
print(result)
Albert Einstein (1879-1955) was a German-born theoretical physicist who is widely regarded as one of the most influential scientists of the 20th century. He is best known for his theory of relativity, which revolutionized our understanding of space and time, and his famous equation E=mc².
```
```
# Build prompt
from langchain.prompts import PromptTemplate

template = """[INST] <<SYS>>
Answer the question base on the context below.
<</SYS>>
Context: {context}
Question : {question}
Answer:
[/INST]"""
QA_CHAIN_PROMPT = PromptTemplate(input_variables=["context", "question"], template=template,)

# Run chain
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(llm,
    verbose=True,
    # retriever=vectordb.as_retriever(),
    retriever=custom_retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT})
```
```
question = "Is probability a class topic?"
result = qa_chain({"query": question})
result["result"]
Encountered some errors. Please recheck your request!
```
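(One thing I would try, purely as a debugging sketch: log exactly what `_call` sends and receives once the chain fills in the long RAG prompt, since the API may be rejecting requests that look different from the short hand-written test:)
```python
# Inside LlamaLLM, a temporarily instrumented _call for debugging:
def _call(self, prompt, stop=None, run_manager=None, **kwargs):
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 100},
        "token": "abcdfejkwehr",
    }
    print(f"prompt length: {len(prompt)} chars")  # compare with the working curl request
    response = requests.post(self.llm_url, json=payload,
                             headers={"Content-Type": "application/json"}, verify=False)
    print(response.status_code, response.text[:500])  # inspect the raw reply before parsing
    response.raise_for_status()
    return response.json()["generated_text"]
```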
### Suggestion:
_No response_ | Custom LLM from API for QA chain | https://api.github.com/repos/langchain-ai/langchain/issues/14302/comments | 24 | 2023-12-05T17:36:18Z | 2023-12-26T11:37:08Z | https://github.com/langchain-ai/langchain/issues/14302 | 2,026,785,573 | 14,302 |
[
"langchain-ai",
"langchain"
] | ### System Info
azure-search-documents==11.4.0b9
langchain 0.0.342
langchain-core 0.0.7
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My local.settings.json has the custom field names for Azure Cognitive Search:
```
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AZURESEARCH_FIELDS_ID": "chunk_id",
    "AZURESEARCH_FIELDS_CONTENT": "chunk",
    "AZURESEARCH_FIELDS_CONTENT_VECTOR ": "vector",
    "AZURESEARCH_FIELDS_TAG": "metadata",
    "FIELDS_ID": "chunk_id",
    "FIELDS_CONTENT": "chunk",
    "FIELDS_CONTENT_VECTOR": "vector",
    "FIELDS_METADATA": "metadata",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```
I also tried to create a Fields array and pass it into the AzureSearch constructor like this:
```
os.environ["AZURE_OPENAI_API_KEY"] = "xx"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://xx.openai.azure.com/"

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="text-embedding-ada-002",
    openai_api_version="2023-05-15",
)

fields = [
    SimpleField(
        name="chunk_id",
        type=SearchFieldDataType.String,
        key=True,
        filterable=True,
    ),
    SearchableField(
        name="chunk",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
    SearchField(
        name="vector",
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=1536,
        vector_search_configuration="default",
    )
]

FIELDS_ID = get_from_env(
    key="AZURESEARCH_FIELDS_ID", env_key="AZURESEARCH_FIELDS_ID", default="id"
)
FIELDS_CONTENT = get_from_env(
    key="AZURESEARCH_FIELDS_CONTENT",
    env_key="AZURESEARCH_FIELDS_CONTENT",
    default="content",
)
FIELDS_CONTENT_VECTOR = get_from_env(
    key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
    env_key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
    default="content_vector",
)
FIELDS_METADATA = get_from_env(
    key="AZURESEARCH_FIELDS_TAG", env_key="AZURESEARCH_FIELDS_TAG", default="metadata"
)

vector_store_address: str = "https://xx.search.windows.net"
vector_store_password: str = "xx"

vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name="vector-1701341754619",
    fiekds=fields,
    embedding_function=embeddings.embed_query
)

llm = AzureChatOpenAI(
    azure_deployment="chat",
    openai_api_version="2023-05-15",
)

chain = RetrievalQA.from_chain_type(llm=llm,
    chain_type="stuff",
    retriever=Element61Retriever(vectorstore=vector_store),
    return_source_documents=True)
result = chain({"query": 'Whats out of scope?'})
return result
```
However I am always getting:
```
Executed 'Functions.TestCustomRetriever' (Failed, Id=2f243ed8-24bd-414b-af51-6cf1419633a5, Duration=6900ms)
[2023-12-05T15:08:53.252Z] System.Private.CoreLib: Exception while executing function: Functions.TestCustomRetriever. System.Private.CoreLib: Result: Failure
Exception: HttpResponseError: (InvalidRequestParameter) Unknown field 'content_vector' in vector field list.
Code: InvalidRequestParameter
Message: Unknown field 'content_vector' in vector field list.
Exception Details: (UnknownField) Unknown field 'content_vector' in vector field list.
Code: UnknownField
```
Please note this is being executed in an Azure Function locally
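(A side note that may or may not be the culprit, unverified: in `langchain.vectorstores.azuresearch`, the `AZURESEARCH_FIELDS_*` values are read once at module import time, so they have to be set before that module is first imported, e.g.:)
```python
import os

# Must happen before the first import of langchain.vectorstores.azuresearch
os.environ["AZURESEARCH_FIELDS_ID"] = "chunk_id"
os.environ["AZURESEARCH_FIELDS_CONTENT"] = "chunk"
os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "vector"  # note: no trailing space in the name

from langchain.vectorstores.azuresearch import AzureSearch
```
(It may also be worth double-checking the `fiekds=fields` keyword and the trailing space in the `"AZURESEARCH_FIELDS_CONTENT_VECTOR "` setting above.)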
### Expected behavior
The custom field names should be taken into account | Using AzureSearch with custom vector field names | https://api.github.com/repos/langchain-ai/langchain/issues/14298/comments | 9 | 2023-12-05T15:09:36Z | 2024-08-06T20:18:36Z | https://github.com/langchain-ai/langchain/issues/14298 | 2,026,436,998 | 14,298 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.344
langchain-core==0.0.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code incorrectly calls `__add` with `kwargs=kwargs`:
```python
return self.__add(
    texts,
    embeddings,
    metadatas=metadatas,
    ids=ids,
    bulk_size=bulk_size,
    kwargs=kwargs,
)
```
As a result, the `__add` method receives a kwargs dict that contains a key 'kwargs', and any provided parameters (e.g., engine="faiss") are not picked up...
### Expected behavior
The code should not mistakenly wrap the kwargs; it already does this correctly in add_embeddings:
```python
return self.__add(
    list(texts),
    list(embeddings),
    metadatas=metadatas,
    ids=ids,
    bulk_size=bulk_size,
    **kwargs,
)
``` | OpenSearchVectorSearch add_texts incorrectly wraps kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/14295/comments | 1 | 2023-12-05T14:48:10Z | 2023-12-08T06:58:43Z | https://github.com/langchain-ai/langchain/issues/14295 | 2,026,387,711 | 14,295
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hey there,
the `_format_chat_history` function [in the RAG cookbook entry](https://python.langchain.com/docs/expression_language/cookbook/retrieval) contains some older syntax. As far as I understand, the cookbook is supposed to provide examples for the newest Langchain version, though.
```
for dialogue_turn in chat_history:
    human = "Human: " + dialogue_turn[0]
    ai = "Assistant: " + dialogue_turn[1]
```
should be:
```
for dialogue_turn in chat_history:
    human = "Human: " + dialogue_turn[0].content
    ai = "Assistant: " + dialogue_turn[1].content
```
Thank you. :)
### Idea or request for content:
_No response_ | DOC: Cookbook entry for RAG contains older syntax | https://api.github.com/repos/langchain-ai/langchain/issues/14292/comments | 1 | 2023-12-05T12:40:54Z | 2024-03-16T16:11:26Z | https://github.com/langchain-ai/langchain/issues/14292 | 2,026,106,178 | 14,292 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In this announcement ([LangChain Core #13823](https://github.com/langchain-ai/langchain/discussions/13823), posted in the Announcements category: https://github.com/langchain-ai/langchain/discussions/categories/announcements), hwchase17 says:
> TL;DR: we are splitting our core functionality to langchain-core to make LangChain more stable and reliable. This should be invisible to the eye and will happen in the background for the next two weeks, and we’d recommend not using langchain-core until then, but we’re flagging for transparency.
However, in my version `RunnablePassthrough` has to be imported from `langchain.schema.runnable`, not `langchain_core.runnables`, and the same applies to the other imports below.
langchain version: 0.0.336
### Idea or request for content:
**change**
```
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables import ConfigurableField
```
**To**
```
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema import StrOutputParser
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.runnable.utils import ConfigurableField
```
| DOC: Why use LCEL ModuleNotFoundError: No module named 'langchain_core' | https://api.github.com/repos/langchain-ai/langchain/issues/14287/comments | 7 | 2023-12-05T10:38:45Z | 2024-03-30T16:05:46Z | https://github.com/langchain-ai/langchain/issues/14287 | 2,025,877,332 | 14,287 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[27], line 5
1 #### All together!
2 # Put it all together now
3 full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
----> 5 full_chain.invoke({"question":"what is the best city?"})
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:1427, in RunnableSequence.invoke(self, input, config)
1425 try:
1426 for i, step in enumerate(self.steps):
-> 1427 input = step.invoke(
1428 input,
1429 # mark each step as a child run
1430 patch_config(
1431 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1432 ),
1433 )
1434 # finish the root run
1435 except BaseException as e:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/branch.py:186, in RunnableBranch.invoke(self, input, config, **kwargs)
177 expression_value = condition.invoke(
178 input,
179 config=patch_config(
(...)
182 ),
183 )
185 if expression_value:
--> 186 output = runnable.invoke(
187 input,
188 config=patch_config(
189 config,
190 callbacks=run_manager.get_child(tag=f"branch:{idx + 1}"),
191 ),
192 **kwargs,
193 )
194 break
195 else:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:89, in Chain.invoke(self, input, config, **kwargs)
82 def invoke(
83 self,
84 input: Dict[str, Any],
85 config: Optional[RunnableConfig] = None,
86 **kwargs: Any,
87 ) -> Dict[str, Any]:
88 config = config or {}
---> 89 return self(
90 input,
91 callbacks=config.get("callbacks"),
92 tags=config.get("tags"),
93 metadata=config.get("metadata"),
94 run_name=config.get("run_name"),
95 **kwargs,
96 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
310 except BaseException as e:
311 run_manager.on_chain_error(e)
--> 312 raise e
313 run_manager.on_chain_end(outputs)
314 final_outputs: Dict[str, Any] = self.prep_outputs(
315 inputs, outputs, return_only_outputs
316 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
299 run_manager = callback_manager.on_chain_start(
300 dumpd(self),
301 inputs,
302 name=run_name,
303 )
304 try:
305 outputs = (
--> 306 self._call(inputs, run_manager=run_manager)
307 if new_arg_supported
308 else self._call(inputs)
309 )
310 except BaseException as e:
311 run_manager.on_chain_error(e)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:1251, in AgentExecutor._call(self, inputs, run_manager)
1249 # We now enter the agent loop (until it returns something).
1250 while self._should_continue(iterations, time_elapsed):
-> 1251 next_step_output = self._take_next_step(
1252 name_to_tool_map,
1253 color_mapping,
1254 inputs,
1255 intermediate_steps,
1256 run_manager=run_manager,
1257 )
1258 if isinstance(next_step_output, AgentFinish):
1259 return self._return(
1260 next_step_output, intermediate_steps, run_manager=run_manager
1261 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:1038, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1035 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1037 # Call the LLM to see what to do.
-> 1038 output = self.agent.plan(
1039 intermediate_steps,
1040 callbacks=run_manager.get_child() if run_manager else None,
1041 **inputs,
1042 )
1043 except OutputParserException as e:
1044 if isinstance(self.handle_parsing_errors, bool):
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:391, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
379 """Given input, decided what to do.
380
381 Args:
(...)
388 Action specifying what tool to use.
389 """
390 inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
--> 391 output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
392 return output
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:1427, in RunnableSequence.invoke(self, input, config)
1425 try:
1426 for i, step in enumerate(self.steps):
-> 1427 input = step.invoke(
1428 input,
1429 # mark each step as a child run
1430 patch_config(
1431 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1432 ),
1433 )
1434 # finish the root run
1435 except BaseException as e:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:170, in BaseOutputParser.invoke(self, input, config)
166 def invoke(
167 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
168 ) -> T:
169 if isinstance(input, BaseMessage):
--> 170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
173 ),
174 input,
175 config,
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:848, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
841 run_manager = callback_manager.on_chain_start(
842 dumpd(self),
843 input,
844 run_type=run_type,
845 name=config.get("run_name"),
846 )
847 try:
--> 848 output = call_func_with_variable_args(
849 func, input, config, run_manager, **kwargs
850 )
851 except BaseException as e:
852 run_manager.on_chain_error(e)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
306 if run_manager is not None and accepts_run_manager(func):
307 kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:171, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
166 def invoke(
167 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
168 ) -> T:
169 if isinstance(input, BaseMessage):
170 return self._call_with_config(
--> 171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
173 ),
174 input,
175 config,
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:222, in BaseOutputParser.parse_result(self, result, partial)
209 def parse_result(self, result: List[Generation], *, partial: bool = False) -> T:
210 """Parse a list of candidate model Generations into a specific format.
211
212 The return value is parsed from only the first Generation in the result, which
(...)
220 Structured output.
221 """
--> 222 return self.parse(result[0].text)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/output_parsers/xml.py:45, in XMLAgentOutputParser.parse(self, text)
43 return AgentFinish(return_values={"output": answer}, log=text)
44 else:
---> 45 raise ValueError(f"Could not parse output: {text}")
ValueError:
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.chat_models import AzureChatOpenAI
import openai

openai.api_key = "........................................."

llm = AzureChatOpenAI(
    azure_endpoint=".......................................",
    openai_api_version="....................................",
    deployment_name='..............................................',
    openai_api_key=openai.api_key,
    openai_api_type="azure",
    temperature=0
)

from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.callbacks.stdout import StdOutCallbackHandler

memory = ConversationBufferMemory(return_messages=True)
chain = LLMChain(
    prompt=PromptTemplate.from_template(
        template="""
Given the user question below, classify it as either being about 'city', `weather` or `other`.
Do not respond with more than one word.
<question>
{question}
</question>
Classification:""",
        output_parser=StrOutputParser(),
    ),
    llm=llm,
    memory=memory,
    callbacks=[StdOutCallbackHandler()]
)

from langchain.agents import XMLAgent, tool, AgentExecutor
from langchain.chat_models import ChatAnthropic

model = llm

@tool
def search(query: str) -> str:
    """Search things about current weather."""
    return "32 degrees"  # 32 degrees Dzień Dobry przyjacielu!

tool_list = [search]

@tool
def find_city(query: str) -> str:
    """Search the answer"""
    return "Gdynia"  # 32 degrees

city_tool_list = [find_city]

prompt = XMLAgent.get_default_prompt()

def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        print('\n')
        print(action)
        print(observation)
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

agent = (
    {
        "question": lambda x: x["question"],
        "intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"])
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgent.get_default_output_parser()
)
agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)

city_prompt = XMLAgent.get_default_prompt()
new_template = """Use the tools to find the answer.
You have access to the following tools:
{tools}
In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'find_city' that could run and search best city:
<tool>find_city</tool><tool_input>what city?</tool_input>
<observation>Gdynia</observation>
When you are done, respond with a final answer between <final_answer></final_answer>. For example:
<final_answer>The city is Gdynia</final_answer>
Begin!
Question: {question}"""
city_prompt.messages[0].prompt.template = new_template

city_agent = (
    {
        "question": lambda x: x["question"],
        "intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"])
    }
    | city_prompt.partial(tools=convert_tools(city_tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])  # .bind(memory=memory)
    | XMLAgent.get_default_output_parser()
)
city_agent_executor = AgentExecutor(agent=city_agent, tools=city_tool_list, verbose=True)

general_chain = PromptTemplate.from_template("""Respond to the following question:
Question: {question}
Answer:""") | llm

from langchain.schema.runnable import RunnableBranch

branch = RunnableBranch(
    (lambda x: "weather" in x["topic"]['text'].lower(), agent_executor),
    (lambda x: "city" in x["topic"]['text'].lower(), city_agent_executor),
    general_chain
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
full_chain.invoke({"question": "what is the best city?"})
```
### Expected behavior
In xml.py (langchain==0.0.341), line 45 is now:
`raise ValueError`
It would be better to have:
`raise ValueError(f"Could not parse output: {text}")` | xml.py Value Error --> insufficient error information | https://api.github.com/repos/langchain-ai/langchain/issues/14286/comments | 1 | 2023-12-05T10:12:15Z | 2024-03-16T16:11:21Z | https://github.com/langchain-ai/langchain/issues/14286 | 2,025,828,110 | 14,286
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
So I have a function that saves data into the vector database. What I want is to extract that vector ID so I can delete it later if I want to.
Is there a way to accomplish this?
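(For context, a sketch of the pattern I mean, using Chroma as an example; most langchain vector stores behave similarly, and `embeddings` stands for any Embeddings instance:)
```python
from langchain.vectorstores import Chroma

db = Chroma(embedding_function=embeddings)  # embeddings: any Embeddings instance
ids = db.add_texts(["some document"])       # add_texts returns the generated vector IDs
# ... later ...
db.delete(ids=ids)                          # delete the vectors by their saved IDs
```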
### Suggestion:
_No response_ | extract vector id and delete one | https://api.github.com/repos/langchain-ai/langchain/issues/14284/comments | 2 | 2023-12-05T09:18:12Z | 2024-03-16T16:11:16Z | https://github.com/langchain-ai/langchain/issues/14284 | 2,025,725,528 | 14,284 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS 14.1.1
```
pip3 show langchain
Name: langchain
Version: 0.0.345
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages
Requires: aiohttp, anyio, dataclasses-json, jsonpatch, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langserve, permchain
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
testFunctions = [
    {
        "name": "set_animal_properties",
        "description": "Set different properties for an animal.",
        "parameters": {
            "type": "object",
            "properties": {
                "animals": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {
                                "type": "string",
                                "description": "Name of the animal",
                            },
                            "appearance": {
                                "type": "string",
                                "description": "Summary of the appearance of the animal",
                            },
                            "joke": {
                                "type": "string",
                                "description": "A joke about the animal",
                            }
                        }
                    }
                }
            }
        }
    }
]


def get_chain_animals() -> Runnable:
    """Return a chain."""
    prompt = ChatPromptTemplate.from_template("Pick 5 random animals for me. For each of them, give me a 300 word summary of their appearance, and tell me a joke about them. Please call a function with this information.")

    # Uncomment to use ChatOpenAI model
    # model = ChatOpenAI(model_name="gpt-3.5-turbo-1106",
    #                    openai_api_key=OpenAISettings.OPENAI_API_KEY,
    #                    ).bind(functions=testFunctions, function_call={"name": "set_animal_properties"})

    model = AzureChatOpenAI(temperature=.7,
        openai_api_base=AzureSettings.BASE_URL,
        openai_api_version=AzureSettings.API_VERSION,
        deployment_name=AzureSettings.DEPLOYMENT_NAME,
        openai_api_key=AzureSettings.API_KEY,
        openai_api_type=AzureSettings.API_TYPE,
    ).bind(functions=testFunctions, function_call={"name": "set_animal_properties"})

    parser = JsonOutputFunctionsParser()
    return prompt | model | parser


if __name__ == "__main__":
    chain = get_chain_animals()
    for chunk in chain.stream({}):
        print(chunk)
```
### Expected behavior
LCEL Streaming doesn't seem to work properly when using an AzureChatOpenAI model, and the JsonOutputFunctionsParser parser.
I'm unable to stream an LCEL chain correctly when using an Azure-hosted OpenAI model (using the AzureChatOpenAI class).
I'm using a simple LCEL chain:
`chain = promptTemplate | model | parser`
The parser is of type `langchain.output_parsers.openai_functions.JsonOutputFunctionsParser`
When using a AzureChatOpenAI model, the text is not streamed as the tokens are generated. Instead, it appears that I receive all of the text at once after all the tokens are generated.
However, if I **replace** the AzureChatOpenAI model with a ChatOpenAI model (using the same prompt, function bindings, etc.), the stream **DOES** work as intended, returning text in real-time as the tokens are generated.
So I believe I've isolated the problem down to that particular AzureChatOpenAI model.
Any insight or workarounds would be appreciated. | LCEL Streaming doesn't seem to work properly when using an AzureChatOpenAI model, and the JsonOutputFunctionsParser parser | https://api.github.com/repos/langchain-ai/langchain/issues/14280/comments | 2 | 2023-12-05T08:30:23Z | 2024-04-30T16:30:14Z | https://github.com/langchain-ai/langchain/issues/14280 | 2,025,593,782 | 14,280 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
It's not clear from the documentation whether, when calling Ollama, langchain will take care of formatting the template correctly or whether I have to supply the template myself.
For example, in https://ollama.ai/library/mistral:instruct
we have:
```
parameter
stop "[INST]"
stop "[/INST]"
stop "<<SYS>>"
stop "<</SYS>>"
template
[INST] {{ .System }} {{ .Prompt }} [/INST]
```
Do I have to take care of formatting my instructions using these parameters and this template, or will langchain take care of it?
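(For what it's worth, my current understanding, which may be wrong: the Ollama server applies the Modelfile template itself when serving `/api/generate`, so the langchain wrapper should only need the plain prompt:)
```python
from langchain.llms import Ollama

llm = Ollama(model="mistral:instruct")
# The prompt goes into {{ .Prompt }} on the server side, per the Modelfile template above
print(llm("Why is the sky blue?"))
```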
### Idea or request for content:
If this is not implemented, it would definitely be very useful to have. | Ollama: parameters and instruction templates | https://api.github.com/repos/langchain-ai/langchain/issues/14279/comments | 4 | 2023-12-05T07:49:56Z | 2024-03-29T21:36:33Z | https://github.com/langchain-ai/langchain/issues/14279 | 2,025,500,538 | 14,279
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, when working with LangChain to generate JSON that will be rendered in a user-facing application, we haven't found a dedicated method to check whether the JSON adheres to the desired structure before executing subsequent operators.
This feature request suggests the implementation of a method or utility for asserting the structure of a JSON file/format, ideally using a defined schema (e.g., ResponseSchema). This would allow users to proactively identify any issues with the format.
### Motivation
In the current workflow, users have to rely on executing downstream operators (e.g., RetryWithErrorOutputParser) to discover issues with the JSON structure. This approach can be inefficient, especially when the error detection occurs after execution. Having a method to check the JSON structure beforehand would enable users to catch format issues early in the process and spare them further interaction with the LLM when the output is already malformed.
### Your contribution
I'm willing to contribute to the implementation of this feature. I'll carefully read the CONTRIBUTING.md and follow the guidelines to submit a Pull Request once the scope is clarified.
Some simple example use case code:
```python
import os
import json

from dotenv import load_dotenv, find_dotenv
import openai
from langchain.output_parsers import StructuredOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from jsonschema import validate, exceptions

_ = load_dotenv(find_dotenv())
openai.api_key = os.environ["OPENAI_API_KEY"]

expected_json_file_path = "app/files/expected_schema.json"

# LCEL - INSTANCES & STRUCTURE
# (template_string and asset_instance are assumed to be defined elsewhere in the app)
prompt = ChatPromptTemplate.from_template(template=template_string)
chat_llm = ChatOpenAI(temperature=0.0)

# Define output parser based on response schemas
output_parser = StructuredOutputParser.from_response_schemas(
    asset_instance.response_schemas
)
# Define format instructions using the get_format_instructions method
format_instructions = output_parser.get_format_instructions()

# LCEL DEFINE CHAIN
simple_chain_validator = prompt | chat_llm | output_parser

# LCEL INVOKE CHAIN
chain_to_parse = simple_chain_validator.invoke(
    {
        "content_to_format": asset_instance.raw_information,
        "format_instructions": format_instructions,
    }
)

# Define the expected JSON schema
with open(expected_json_file_path, "r") as schema_file:
    expected_schema = json.load(schema_file)

# Validate against the schema
try:
    validate(instance=chain_to_parse, schema=expected_schema)
    print("JSON is valid.")
except exceptions.ValidationError as e:
    print(f"Validation Error: {e}")
``` | OutputParser cheap json format Validation | https://api.github.com/repos/langchain-ai/langchain/issues/14276/comments | 1 | 2023-12-05T07:05:27Z | 2024-03-16T16:11:11Z | https://github.com/langchain-ai/langchain/issues/14276 | 2,025,423,592 | 14,276 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.330
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class SummarizationTool(BaseTool):
    name = "summarization_tool"
    description = '''This tool must be used at the very end.
    It is used to summarize the results from each of other tools.
    It needs the entire text of the results from each of the previous tool.'''
    llm: BaseLanguageModel
    return_direct = True

    def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        print('\n in summarization tool. query:', query, 'query type:', type(query), query[0], query[1])
```
I can see that this tool is called at the very end, as expected.
The output from the query is:
`in summarization tool. query: [text from openai_search tool, text from wikipedia_search tool] query type: <class 'str'> [ t`
Instead of the literal string "[text from openai_search tool, text from wikipedia_search tool]", I want the actual text produced by those tools.
How do I get it to pass the actual text?
### Expected behavior
Instead of " [text from openai_search tool, text from wikipedia_search tool]", I want the actual text.
How do I get it to pass the actual text? | Agent not calling tool with the right data. | https://api.github.com/repos/langchain-ai/langchain/issues/14274/comments | 1 | 2023-12-05T05:44:22Z | 2024-03-16T16:11:06Z | https://github.com/langchain-ai/langchain/issues/14274 | 2,025,331,957 | 14,274 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the UnstructuredExcelLoader code comment, 'elements' mode is introduced twice, but nothing is said about 'single' mode.
The original code comment is the following:
Unstructured loaders, UnstructuredExcelLoader can be used in both "single" and "elements" mode. **_If you use the loader in "elements" mode_**, each sheet in the Excel file will be a an Unstructured Table element. **_If you use the loader in "elements" mode_**, an HTML representation of the table will be available in the "text_as_html" key in the document metadata.
### Idea or request for content:
_No response_ | DOC: UnstructuredExcelLoader code comment‘s error | https://api.github.com/repos/langchain-ai/langchain/issues/14271/comments | 2 | 2023-12-05T05:20:47Z | 2024-03-16T16:11:01Z | https://github.com/langchain-ai/langchain/issues/14271 | 2,025,310,554 | 14,271 |
[
"langchain-ai",
"langchain"
] | ### System Info
Every time I use Langchain, something is wrong with it. This is just the latest iteration. If you guys want people to use your library you seriously need to clean things up.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce:
```python
from llama_index import SimpleDirectoryReader, LLMPredictor, ServiceContext, GPTVectorStoreIndex
```
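(This looks like a version mismatch between `llama_index` and `langchain`; recent `langchain` releases stopped exporting `BaseCache` from the top-level namespace, which older `llama_index` versions rely on. A possible fix, untested: upgrade both packages together:)
```sh
pip install -U llama-index langchain
```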
### Expected behavior
Run without error | ImportError: cannot import name 'BaseCache' from 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/14268/comments | 6 | 2023-12-05T04:51:11Z | 2024-07-02T16:08:12Z | https://github.com/langchain-ai/langchain/issues/14268 | 2,025,281,757 | 14,268 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hi, I'm not quite sure how to translate the `ParentDocumentRetriever` examples to ingest documents into OpenSearch in one phase, and then reconnect to it by instantiating a retriever at a later point.
The examples use an `InMemoryStore()` for the parent documents. Is the idea then that, if I wanted to use OpenSearch, it would be necessary to create two different OpenSearch clusters, one for the parent docs and one for the child docs? Or is there a simpler way to do this?
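(What I'm currently experimenting with, unverified: the docstore only needs to be a key/value store, so a persistent store such as `LocalFileStore` wrapped for Documents might avoid a second cluster entirely:)
```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import LocalFileStore
from langchain.storage._lc_store import create_kv_docstore  # import path may vary by version

fs = LocalFileStore("./parent_docs")     # persistent byte store on disk
docstore = create_kv_docstore(fs)        # adapts the byte store to hold Documents
retriever = ParentDocumentRetriever(
    vectorstore=opensearch_vectorstore,  # your existing OpenSearchVectorSearch instance
    docstore=docstore,
    child_splitter=child_splitter,       # whatever splitter you use for the child chunks
)
```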
| DOC: ParentDocumentRetriever without InMemoryStore | https://api.github.com/repos/langchain-ai/langchain/issues/14267/comments | 16 | 2023-12-05T04:26:07Z | 2024-07-24T11:29:39Z | https://github.com/langchain-ai/langchain/issues/14267 | 2,025,258,589 | 14,267 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Does langchain support using local LLM models to query a Neo4j database, in a non-OpenAI access mode?
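(A sketch of what I have in mind, assuming `GraphCypherQAChain` accepts any LLM, e.g. a local model served through Ollama; the connection details are placeholders:)
```python
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOllama
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
llm = ChatOllama(model="llama2")  # any locally served model
chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)
chain.run("How many nodes are in the graph?")
```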
### Motivation
It is inconvenient to use a local LLM for Cypher generation.
### Your contribution
No solution available at this time | Does langchain support using local LLM models to request the Neo4j database? | https://api.github.com/repos/langchain-ai/langchain/issues/14261/comments | 1 | 2023-12-05T02:18:01Z | 2024-03-16T16:10:56Z | https://github.com/langchain-ai/langchain/issues/14261 | 2,025,142,888 | 14,261 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | how to add Qwen model in initialize_agent() | https://api.github.com/repos/langchain-ai/langchain/issues/14260/comments | 1 | 2023-12-05T02:10:39Z | 2024-03-16T16:10:51Z | https://github.com/langchain-ai/langchain/issues/14260 | 2,025,137,077 | 14,260 |
[
"langchain-ai",
"langchain"
] | ### System Info
When running the tool: Error YahooFinanceNewsTool()
the following error message is displayed:
```
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\tools\yahoo_finance_news.py", line 66, in <listcomp>
if query in doc.metadata["description"] or query in doc.metadata["title"]
KeyError: 'description'
```
I have tried to reproduce the error. It looks like the docs element does not contain a field that can return a "description".
```
loader = WebBaseLoader(web_paths=links)
docs = loader.load()
print(docs) # only insert for test
```
Output of `print(docs)` (truncated):
```
....
tieren\nAlle ablehnen\nDatenschutzeinstellungen verwalten\n\n\n\n\n\nZum Ende\n \n\n\n\n\n\n\n\n\n\n\n\n\n', metadata={'source': 'https://finance.yahoo.com/m/280830e6-928c-3b1f-97f4-bd37147499cb/not-even-tesla-bulls-love-the.html', 'title': 'Yahooist Teil der Yahoo Markenfamilie', 'language': 'No language found.'}), Document(page_content='\n\n\nYahooist Teil der Yahoo Markenfamilie\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n guce guce \n\n\n\n Yahoo ist Teil der Yahoo MarkenfamilieDie Websites und Apps, die wir betreiben und verwalten, einschließlich Yahoo und AOL, sowie unser digitaler Werbedienst Yahoo Advertising.Yahoo Markenfamilie.\n\n Bei der Nutzung unserer Websites und Apps verwenden wir CookiesMithilfe von Cookies (einschließlich ähnlicher Technologien wie der Webspeicherung) können die Betreiber von Websites und Apps Informationen auf Ihrem Gerät speichern und ablesen. Weitere Informationen finden Sie in unserer Cookie-Richtlinie.Cookies, um:\n\n\nunsere Websites und Apps für Sie bereitzustellen\nNutzer zu authentifizieren, Sicherheitsmaßnahmen anzuwenden und Spam und Missbrauch zu verhindern, und\nIhre Nutzung unserer Websites und Apps zu messen\n\n\n\n Wenn Sie auf „Alle akzeptieren“ klicken, verwenden wir und unsere Partner (einschließlich der 239, die dem IAB Transparency & Consent Framework angehören) Cookies und Ihre personenbezogenen Daten, wie IP-Adresse, genauen Standort sowie Browsing- und Suchdaten, auch für folgende Zwecke:\n\n\npersonalisierte Werbung und Inhalte auf der Grundlage von Interessenprofilen anzuzeigen\ndie Effektivität von personalisierten Anzeigen und Inhalten zu messen, sowie\nunsere Produkte und Dienstleistungen zu entwickeln und zu verbessern\n\n Klicken Sie auf „Alle ablehnen“, wenn Sie nicht möchten, dass wir und unsere Partner Cookies und personenbezogene Daten für diese zusätzlichen Zwecke verwenden.\n\n Wenn Sie Ihre Auswahl anpassen möchten, klicken Sie auf „Datenschutzeinstellungen verwalten“.\n\n Sie können Ihre Einstellungen jederzeit ändern, indem Sie auf unseren Websites und Apps auf den Link „Datenschutz- und Cookie-Einstellungen“ oder „Datenschutz-Dashboard“ klicken. Weitere Informationen darüber, wie wir Ihre personenbezogenen Daten nutzen, finden Sie in unserer Datenschutzerklärung und unserer Cookie-Richtlinie.\n\n\n\n\n\n\n\n\nAlle akzeptieren\nAlle ablehnen\nDatenschutzeinstellungen verwalten\n\n\n\n\n\nZum Ende\n
\n\n\n\n\n\n\n\n\n\n\n\n\n', metadata={'source': 'https://finance.yahoo.com/news/minister-khera-participates-unesco-global-215400355.html', 'title': 'Yahooist Teil der Yahoo Markenfamilie', 'language': 'No language found.'})]
....
```
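(A defensive tweak that would at least avoid the KeyError; a sketch of the idea, not necessarily the right library-level fix:)
```python
# Sketch against langchain/tools/yahoo_finance_news.py: tolerate missing metadata keys
docs = [
    doc
    for doc in docs
    if query in doc.metadata.get("description", "") or query in doc.metadata.get("title", "")
]
```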
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The example in the LangChain docs.
### Expected behavior
The example in the LangChain docs. | Error YahooFinanceNewsTool() Tools | https://api.github.com/repos/langchain-ai/langchain/issues/14248/comments | 8 | 2023-12-04T22:17:30Z | 2024-08-02T16:06:53Z | https://github.com/langchain-ai/langchain/issues/14248 | 2,024,863,086 | 14,248 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.344
Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32
OS - Windows 11
Programming language - Python
Database - SQL Anywhere 17
### Who can help?
@hwchase17
@agola11
**I have a SAP SQL Anywhere 17 database server** which I want to talk to **using Langchain and OpenAI in Python**.
I am successfully able to do this with MS SQL, **but failing to do so with SQL Anywhere**.
**Code below**
```python
import os
import pyodbc
import tkinter as tk
import tkinter.ttk as ttk
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor

DATABASE_SERVER = 'sqldemo'
DATABASE_NAME = 'demo'
DATABASE_USERNAME = 'dba'
DATABASE_PASSWORD = 'sql'
DRIVER = '{SQL Anywhere 17}'

# For Microsoft SQL DB - working
# conn_uri = f"mssql+pyodbc://{DATABASE_SERVER}/{DATABASE_NAME}?driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes&Trusted_Connection=yes"

# For SQL Anywhere 17 - not working
conn_uri = f"sqlanywhere+pyodbc://{DATABASE_USERNAME}:{DATABASE_PASSWORD}@{DATABASE_SERVER}/{DATABASE_NAME}?driver=SQL+Anywhere+17"

db = SQLDatabase.from_uri(conn_uri)  # Error line

llm = OpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True
)
return agent_executor  # (snippet taken from inside a function)
```
================================
**Error I am getting as below**
Exception has occurred: NoSuchModuleError
**Can't load plugin: sqlalchemy.dialects:sqlanywhere.pyodbc**

I can be reached at cloudme50@gmail.com if needed
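The `NoSuchModuleError` says SQLAlchemy has no built-in `sqlanywhere` dialect. A hedged sketch of a possible fix: install SAP's `sqlalchemy-sqlany` dialect package and use its URL scheme (the package name, scheme, and port are assumptions to verify against your setup):

```python
# pip install sqlalchemy-sqlany sqlanydb
from langchain.sql_database import SQLDatabase

# hedged sketch: sqlalchemy-sqlany registers the "sqlalchemy_sqlany" URL
# scheme; localhost:2638 is an assumption (2638 is SQL Anywhere's default port)
conn_uri = f"sqlalchemy_sqlany://{DATABASE_USERNAME}:{DATABASE_PASSWORD}@localhost:2638/{DATABASE_NAME}"
db = SQLDatabase.from_uri(conn_uri)
```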
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code can be found here
Please download and check
https://github.com/developer20sujeet/Self_GenerativeAI/blob/main/Langchain_OpenAI_SQLAnywhere_Chat/sql_anywhere_error.py
### Expected behavior
able to connect to and query SQL Anywhere 17 | Lang chain not able to connect to SQL Anywhere 17 | https://api.github.com/repos/langchain-ai/langchain/issues/14247/comments | 6 | 2023-12-04T21:47:32Z | 2024-03-17T16:08:42Z | https://github.com/langchain-ai/langchain/issues/14247 | 2,024,815,058 | 14,247 |
[
"langchain-ai",
"langchain"
] | ### Feature request
With `LLMChain`, it was possible to instantiate with `callbacks`, and just pass around the `LLMChain`.
With LCEL, the only way to handle `callbacks` is to pass them to every `invoke` call. This requires one to pass around both the runnable LCEL object as well as the `callbacks`.
### Motivation
It's preferable to bake the `callbacks` into the LCEL object in advance at instantiation, so that they are invoked on every `invoke`.
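For comparison, `with_config` can already bind callbacks onto a runnable at construction time (a sketch, assuming the current `Runnable` API):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
# bind the callbacks once; they are then used on every invoke of the wrapped runnable
runnable = (prompt | ChatOpenAI()).with_config(callbacks=[StdOutCallbackHandler()])
runnable.invoke({"product": "colorful socks"})
```

The ask here is essentially to make that binding a first-class part of composition.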
### Your contribution
I can contribute something if I can get confirmation that this is desirable. The callbacks would be inserted at the base of the LCEL:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
from langchain.schema.runnable import RunnableConfig
config = RunnableConfig(callbacks=[StdOutCallbackHandler()])
prompt = PromptTemplate.from_template(
"What is a good name for a company that makes {product}?"
)
runnable = config | prompt | ChatOpenAI()
runnable.invoke(input={"product": "colorful socks"})
``` | Request: ability to set callbacks with LCEL at instantiation | https://api.github.com/repos/langchain-ai/langchain/issues/14241/comments | 5 | 2023-12-04T18:33:49Z | 2024-07-17T16:04:33Z | https://github.com/langchain-ai/langchain/issues/14241 | 2,024,477,858 | 14,241 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have just updated my LangChain packages. Until a few weeks ago, LangChain was working fine for me with my Azure OpenAI resource and deployment of the GPT-4-32K model. As I've gone to create more complex applications with it, I got stuck at one section where I kept getting the error: "InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."
I have given up on this error personally, but I currently have multiple Microsoft employees trying to help me figure it out. However, all of a sudden, the basic implementations of LangChain now produce the same issue, even though the native/direct API call to Azure OpenAI services still works with the same credentials.
I am trying to use the following code to get the basic implementation working again, following directly what is written at https://python.langchain.com/docs/integrations/chat/azure_chat_openai:
```python
model = AzureChatOpenAI(
    azure_deployment="Brian",
    openai_api_version="2023-05-15"
)

message = HumanMessage(
    content="Translate this sentence from English to French. I love programming."
)
print(model([message]))
```
This keeps producing the same error (as above, "the API deployment does not exist...") and I am completely stumped. How has LangChain gone from working to this error without me changing my credentials, while the same credentials still work for the native Azure OpenAI API call? Any help would be massively appreciated.
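One hedged observation: the "azure_deployment was transferred to model_kwargs" warning below suggests the installed LangChain release predates that parameter. If so, a sketch of the older-style constructor may be worth trying (the parameter names assume a pre-openai-1.x setup; the endpoint is a placeholder):

```python
# hedged sketch: legacy AzureChatOpenAI parameters for older langchain/openai versions
model = AzureChatOpenAI(
    deployment_name="Brian",  # legacy name for azure_deployment
    openai_api_base="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    openai_api_version="2023-05-15",
)
```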
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have put the code above that you can use, but it seems more of an issue about how LangChain uses my Azure OpenAI account.
### Expected behavior
WARNING! azure_deployment is not default parameter.
azure_deployment was transferred to model_kwargs.
Please confirm that azure_deployment is what you intended.
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[16], line 9
1 model = AzureChatOpenAI(
2 azure_deployment="Brian",
3 openai_api_version="2023-05-15"
4 )
6 message = HumanMessage(
7 content="Translate this sentence from English to French. I love programming."
8 )
----> 9 print(model([message]))
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:606, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
599 def __call__(
600 self,
601 messages: List[BaseMessage],
(...)
604 **kwargs: Any,
605 ) -> BaseMessage:
--> 606 generation = self.generate(
607 [messages], stop=stop, callbacks=callbacks, **kwargs
608 ).generations[0][0]
609 if isinstance(generation, ChatGeneration):
610 return generation.message
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:355, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
353 if run_managers:
354 run_managers[i].on_llm_error(e)
--> 355 raise e
356 flattened_outputs = [
357 LLMResult(generations=[res.generations], llm_output=res.llm_output)
358 for res in results
359 ]
360 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:345, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
342 for i, m in enumerate(messages):
343 try:
344 results.append(
--> 345 self._generate_with_cache(
346 m,
347 stop=stop,
348 run_manager=run_managers[i] if run_managers else None,
349 **kwargs,
350 )
351 )
352 except BaseException as e:
353 if run_managers:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:498, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
494 raise ValueError(
495 "Asked to cache, but no cache found at `langchain.cache`."
496 )
497 if new_arg_supported:
--> 498 return self._generate(
499 messages, stop=stop, run_manager=run_manager, **kwargs
500 )
501 else:
502 return self._generate(messages, stop=stop, **kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:360, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
358 message_dicts, params = self._create_message_dicts(messages, stop)
359 params = {**params, **kwargs}
--> 360 response = self.completion_with_retry(
361 messages=message_dicts, run_manager=run_manager, **params
362 )
363 return self._create_chat_result(response)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:299, in ChatOpenAI.completion_with_retry(self, run_manager, **kwargs)
295 @retry_decorator
296 def _completion_with_retry(**kwargs: Any) -> Any:
297 return self.client.create(**kwargs)
--> 299 return _completion_with_retry(**kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:297, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
295 @retry_decorator
296 def _completion_with_retry(**kwargs: Any) -> Any:
--> 297 return self.client.create(**kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:155, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
129 @classmethod
130 def create(
131 cls,
(...)
138 **params,
139 ):
140 (
141 deployment_id,
142 engine,
(...)
152 api_key, api_base, api_type, api_version, organization, **params
153 )
--> 155 response, _, api_key = requestor.request(
156 "post",
157 url,
158 params=params,
159 headers=headers,
160 stream=stream,
161 request_id=request_id,
162 request_timeout=request_timeout,
163 )
165 if stream:
166 # must be an iterator
167 assert not isinstance(response, OpenAIResponse)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:299, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
278 def request(
279 self,
280 method,
(...)
287 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
289 result = self.request_raw(
290 method.lower(),
291 url,
(...)
297 request_timeout=request_timeout,
298 )
--> 299 resp, got_stream = self._interpret_response(result, stream)
300 return resp, got_stream, self.api_key
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:710, in APIRequestor._interpret_response(self, result, stream)
702 return (
703 self._interpret_response_line(
704 line, result.status_code, result.headers, stream=True
705 )
706 for line in parse_stream(result.iter_lines())
707 ), True
708 else:
709 return (
--> 710 self._interpret_response_line(
711 result.content.decode("utf-8"),
712 result.status_code,
713 result.headers,
714 stream=False,
715 ),
716 False,
717 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:775, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
773 stream_error = stream and "error" in resp.data
774 if stream_error or not 200 <= rcode < 300:
--> 775 raise self.handle_error_response(
776 rbody, rcode, resp.data, rheaders, stream_error=stream_error
777 )
778 return resp
InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. | Using AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/14238/comments | 5 | 2023-12-04T16:40:49Z | 2024-05-13T16:10:00Z | https://github.com/langchain-ai/langchain/issues/14238 | 2,024,280,548 | 14,238 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import DuckDuckGoSearchRun

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")
web_search = DuckDuckGoSearchRun()

tools = [
    web_search,
    # other tools
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("<Some question>")
```
VQDExtractionException: Could not extract vqd. keywords="blah blah blah"
---
The code above worked fine several weeks ago, but it is not working any more. I tried every version of LangChain released after September.
Maybe there is a change in DuckDuckGo's API. Please help me, thanks.
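A hedged isolation step: the `VQDExtractionException` is raised by the underlying `duckduckgo-search` package, so upgrading it (`pip install -U duckduckgo-search`) and testing it directly can rule LangChain out:

```python
# hedged isolation test: call the dependency directly; the max_results
# keyword assumes a reasonably recent duckduckgo-search release
from duckduckgo_search import DDGS

with DDGS() as ddgs:
    for r in ddgs.text("langchain", max_results=3):
        print(r)
```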
### Suggestion:
_No response_ | Issue: Recently, the DuckduckGo search tool seems not working. VQDExtractionException: Could not extract vqd. keywords="keywords" | https://api.github.com/repos/langchain-ai/langchain/issues/14233/comments | 9 | 2023-12-04T16:08:09Z | 2024-07-10T16:05:25Z | https://github.com/langchain-ai/langchain/issues/14233 | 2,024,212,346 | 14,233 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11.6
langchain==0.0.345
langchain-core==0.0.9
jupyter_client==8.6.0
jupyter_core==5.5.0
ipykernel==6.27.0
ipython==8.17.2
on mac M2
### Who can help?
@baskaryan @tomasonjo @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. The setup is the same as in https://github.com/langchain-ai/langchain/issues/14231, although I don't think it matters.
2. Run `existing_graph.similarity_search_with_score("It is the end of the world. Take shelter!")`
3. It returns the following error
```
---------------------------------------------------------------------------
ClientError                               Traceback (most recent call last)
/Users/josselinperrus/Projects/streetpress/neo4j.ipynb Cell 17 line 1
----> 1 existing_graph.similarity_search_with_score("It is the end of the world. Take shelter !")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:550, in Neo4jVector.similarity_search_with_score(self, query, k)
    540 """Return docs most similar to query.
    541
    542 Args:
   (...)
    547     List of Documents most similar to the query and score for each
    548 """
    549 embedding = self.embedding.embed_query(query)
--> 550 docs = self.similarity_search_with_score_by_vector(
    551     embedding=embedding, k=k, query=query
    552 )
    553 return docs

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:595, in Neo4jVector.similarity_search_with_score_by_vector(self, embedding, k, **kwargs)
    586 read_query = _get_search_index_query(self.search_type) + retrieval_query
    587 parameters = {
    588     "index": self.index_name,
    589     "k": k,
   (...)
    592     "query": kwargs["query"],
    593 }
--> 595 results = self.query(read_query, params=parameters)
    597 docs = [
    598     (
    599         Document(
   (...)
    607     for result in results
    608 ]
    609 return docs

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:242, in Neo4jVector.query(self, query, params)
    240 try:
    241     data = session.run(query, params)
--> 242     return [r.data() for r in data]
    243 except CypherSyntaxError as e:
    244     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:242, in <listcomp>(.0)
    240 try:
    241     data = session.run(query, params)
--> 242     return [r.data() for r in data]
    243 except CypherSyntaxError as e:
    244     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/work/result.py:270, in Result.__iter__(self)
    268     yield self._record_buffer.popleft()
    269 elif self._streaming:
--> 270     self._connection.fetch_message()
    271 elif self._discarding:
    272     self._discard()

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:178, in ConnectionErrorHandler.__getattr__.<locals>.outer.<locals>.inner(*args, **kwargs)
    176 def inner(*args, **kwargs):
    177     try:
--> 178         func(*args, **kwargs)
    179     except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
    180         assert not asyncio.iscoroutinefunction(self.__on_error)

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py:849, in Bolt.fetch_message(self)
    845 # Receive exactly one message
    846 tag, fields = self.inbox.pop(
    847     hydration_hooks=self.responses[0].hydration_hooks
    848 )
--> 849 res = self._process_message(tag, fields)
    850 self.idle_since = perf_counter()
    851 return res

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py:374, in Bolt5x0._process_message(self, tag, fields)
    372 self._server_state_manager.state = self.bolt_states.FAILED
    373 try:
--> 374     response.on_failure(summary_metadata or {})
    375 except (ServiceUnavailable, DatabaseUnavailable):
    376     if self.pool:

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:245, in Response.on_failure(self, metadata)
    243 handler = self.handlers.get("on_summary")
    244 Util.callback(handler)
--> 245 raise Neo4jError.hydrate(**metadata)

ClientError: {code: Neo.ClientError.Procedure.ProcedureCallFailed} {message: Failed to invoke procedure `db.index.fulltext.queryNodes`: Caused by: org.apache.lucene.queryparser.classic.ParseException: Encountered "<EOF>" at line 1, column 42.
Was expecting one of:
    <BAREOPER> ...
    "(" ...
    "*" ...
    <QUOTED> ...
    <TERM> ...
    <PREFIXTERM> ...
    <WILDTERM> ...
    <REGEXPTERM> ...
    "[" ...
    "{" ...
    <NUMBER> ...
    <TERM> ...
    "*" ...
}
```
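A hypothetical client-side workaround, since the keyword leg of the hybrid query is parsed by Lucene: escape Lucene special characters before searching (the helper below is an assumption, not part of LangChain):

```python
import re

# hypothetical helper: escape Lucene special characters so the fulltext
# leg of the hybrid query (db.index.fulltext.queryNodes) parses cleanly
def escape_lucene(text: str) -> str:
    return re.sub(r'(&&|\|\||[+\-!(){}\[\]^"~*?:\\/])', r'\\\1', text)

existing_graph.similarity_search_with_score(
    escape_lucene("It is the end of the world. Take shelter!")
)
```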
### Expected behavior
No error | similarity_search_with_score does not accept "!" in the query | https://api.github.com/repos/langchain-ai/langchain/issues/14232/comments | 1 | 2023-12-04T16:07:02Z | 2023-12-13T17:09:52Z | https://github.com/langchain-ai/langchain/issues/14232 | 2,024,210,192 | 14,232 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11.6
langchain==0.0.345
langchain-core==0.0.9
jupyter_client==8.6.0
jupyter_core==5.5.0
ipykernel==6.27.0
ipython==8.17.2
on mac M2
### Who can help?
@baskaryan @tomasonjo @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Add a first node to the graph
```python
first_node = Node(
    id="1",
    type="Sentence",
    properties={
        "description": "This is my first node"
    }
)
graph_document = GraphDocument(
    nodes=[first_node],
    relationships=[],
    source=Document(page_content="my first document")
)
graph.add_graph_documents([graph_document])
```
2. Add a second node to the graph
```python
second_node = Node(
    id="2",
    type="Sentence",
    properties={
        "description": "I love eating spinach"
    }
)
graph_document = GraphDocument(
    nodes=[second_node],
    relationships=[],
    source=Document(page_content="second doc")
)
graph.add_graph_documents([graph_document])
```
3. Create a hybrid index
```python
existing_graph = Neo4jVector.from_existing_graph(
    embedding=OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name="sentence_index",
    keyword_index_name="sentence_kindex",
    node_label="Sentence",
    text_node_properties=["description"],
    embedding_node_property="embedding",
    search_type="hybrid"
)
```
4. Do a similarity search
`existing_graph.similarity_search_with_score("It is the end of the world. Take shelter")`
5. This yields a relevance score of 1 for the 1st result
```python
[(Document(page_content='\ndescription: This is my first node'), 1.0),
 (Document(page_content='\ndescription: I love eating spinach'),
  0.8576263189315796)]
```
6. Test the strategy in use:
`existing_graph._distance_strategy`
which returns `<DistanceStrategy.COSINE: 'COSINE'>`
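As a sanity check, querying the same index through a vector-only store should surface the raw cosine-based score rather than a hybrid-normalized one (a sketch; it assumes the hybrid query normalizes by the maximum score):

```python
# hedged sketch: same index, vector search only, no keyword/hybrid leg
vector_only = Neo4jVector.from_existing_index(
    embedding=OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name="sentence_index",
)
print(vector_only.similarity_search_with_score("It is the end of the world. Take shelter"))
```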
### Expected behavior
The relevance score should return the cosine similarity score.
In this particular case the cosine similarity score is 0.747
```python
import openai
from numpy import dot
from numpy.linalg import norm

def get_embedding(text):
    response = openai.embeddings.create(input=text, model="text-embedding-ada-002")
    return response.data[0].embedding

def cosine_similarity(vec1, vec2):
    return dot(vec1, vec2) / (norm(vec1) * norm(vec2))

embedding1 = get_embedding("This is my first node")
embedding2 = get_embedding("It is the end of the world. Take shelter")
similarity = cosine_similarity(embedding1, embedding2)
print(f"Cosine Similarity: {similarity}")
```
returns
`Cosine Similarity: 0.7475260325549817` | similarity_search_with_relevance_scores returns incoherent relevance scores with Neo4jVector | https://api.github.com/repos/langchain-ai/langchain/issues/14231/comments | 3 | 2023-12-04T16:00:03Z | 2024-03-17T16:08:36Z | https://github.com/langchain-ai/langchain/issues/14231 | 2,024,195,747 | 14,231 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I can't find documentation about filtered retrievers; basically I want to fix this issue: https://github.com/langchain-ai/langchain/issues/14227
### Idea or request for content:
According to other GitHub issues, the way to fix it is with a custom filtered retriever.
So I tried the following:
```
class FilteredRetriever:
    def __init__(self, retriever, title):
        self.retriever = retriever
        self.title = title

    def retrieve(self, *args, **kwargs):
        results = self.retriever.retrieve(*args, **kwargs)
        return [doc for doc in results if doc['title'].startswith(self.title)]

filtered_retriever = FilteredRetriever(vector_store.as_retriever(), '25_1_0.pdf')

llm = AzureChatOpenAI(
    azure_deployment="chat",
    openai_api_version="2023-05-15",
)

retriever = vector_store.as_retriever(search_type="similarity", kwargs={"k": 3})

chain = RetrievalQA.from_chain_type(llm=llm,
                                    chain_type="stuff",
                                    retriever=filtered_retriever,
                                    return_source_documents=True)

result = chain({"query": 'Can Colleagues contact their managers??'})

for res in result['source_documents']:
    print(res.metadata['title'])
```
However I get this error:
```
ValidationError: 1 validation error for RetrievalQA
retriever
value is not a valid dict (type=type_error.dict)
```
but as I cant find documentation about it, I am not sure how to solve it | DOC: How to create a custom filtered retriever | https://api.github.com/repos/langchain-ai/langchain/issues/14229/comments | 7 | 2023-12-04T15:07:02Z | 2024-05-15T16:06:53Z | https://github.com/langchain-ai/langchain/issues/14229 | 2,024,080,225 | 14,229 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.342
langchain-core 0.0.7
azure-search-documents 11.4.0b8
Python: 3.10
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code works fine:
```
from langchain_core.vectorstores import VectorStore, VectorStoreRetriever
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name="vector-1701341754619",
embedding_function=embeddings.embed_query
)
res = vector_store.similarity_search(
query="Can Colleagues contact their managers?", k=20, search_type="hybrid", filters="title eq '25_1_0.pdf'")
```

The `res` object contains only the chunks whose title is '25_1_0.pdf'.
However when using it with an LLM:
```
llm = AzureChatOpenAI(
azure_deployment="chat",
openai_api_version="2023-05-15",
)
retriever = vector_store.as_retriever(search_type="similarity", filters="title eq '25_1_0.pdf'", kwargs={"k": 3})
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
result = chain({"query": 'Can Colleagues contact their managers??'})
for res in result['source_documents']:
print(res.metadata['title'])
```
My output has chunks which don't respect the filter:
142_2_0.pdf
99_9_0.docx
99_9_0.docx
142_2_0.pdf
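A hedged workaround that may help, depending on the installed version: the base `VectorStoreRetriever` forwards `search_kwargs` into `similarity_search`, so passing the filter there (rather than as a top-level `filters` argument) is worth trying:

```python
# hedged sketch: route the filter through search_kwargs; whether AzureSearch
# honors `filters` on that path depends on the installed langchain version
retriever = vector_store.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 3, "filters": "title eq '25_1_0.pdf'"},
)
```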
### Expected behavior
The answer generated, along with the source_documents, should contain only chunks that respect the given filters. | Filters dont work with Azure Search Vector Store retriever | https://api.github.com/repos/langchain-ai/langchain/issues/14227/comments | 5 | 2023-12-04T14:58:48Z | 2024-05-27T16:06:08Z | https://github.com/langchain-ai/langchain/issues/14227 | 2,024,062,849 | 14,227 |
[
"langchain-ai",
"langchain"
] | ### System Info
ml.g5.48xlarge EC2 instance on AWS with:
- Langchain 0.0.305
- Python 3.10
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run into the above error when importing `HuggingFaceEmbeddings`.
```py
from langchain.embeddings import HuggingFaceEmbeddings
```
I believe this is due to the fact that we have a file named `requests.py` in the root folder of the project, which conflicts with the `requests` package.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 from langchain.embeddings import HuggingFaceEmbeddings
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/embeddings/__init__.py:17
14 import logging
15 from typing import Any
---> 17 from langchain.embeddings.aleph_alpha import (
18 AlephAlphaAsymmetricSemanticEmbedding,
19 AlephAlphaSymmetricSemanticEmbedding,
20 )
21 from langchain.embeddings.awa import AwaEmbeddings
22 from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/embeddings/aleph_alpha.py:6
3 from langchain_core.embeddings import Embeddings
4 from langchain_core.pydantic_v1 import BaseModel, root_validator
----> 6 from langchain.utils import get_from_dict_or_env
9 class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):
10 """Aleph Alpha's asymmetric semantic embedding.
11
12 AA provides you with an endpoint to embed a document and a query.
(...)
30
31 """
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utils/__init__.py:14
7 from langchain_core.utils.formatting import StrictFormatter, formatter
8 from langchain_core.utils.input import (
9 get_bolded_text,
10 get_color_mapping,
11 get_colored_text,
12 print_text,
13 )
---> 14 from langchain_core.utils.utils import (
15 check_package_version,
16 convert_to_secret_str,
17 get_pydantic_field_names,
18 guard_import,
19 mock_now,
20 raise_for_status_with_text,
21 xor_args,
22 )
24 from langchain.utils.env import get_from_dict_or_env, get_from_env
25 from langchain.utils.math import cosine_similarity, cosine_similarity_top_k
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_core/utils/__init__.py:14
7 from langchain_core.utils.formatting import StrictFormatter, formatter
8 from langchain_core.utils.input import (
9 get_bolded_text,
10 get_color_mapping,
11 get_colored_text,
12 print_text,
13 )
---> 14 from langchain_core.utils.loading import try_load_from_hub
15 from langchain_core.utils.utils import (
16 build_extra_kwargs,
17 check_package_version,
(...)
23 xor_args,
24 )
26 __all__ = [
27 "StrictFormatter",
28 "check_package_version",
(...)
41 "build_extra_kwargs",
42 ]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_core/utils/loading.py:10
7 from typing import Any, Callable, Optional, Set, TypeVar, Union
8 from urllib.parse import urljoin
---> 10 import requests
12 DEFAULT_REF = os.environ.get("LANGCHAIN_HUB_DEFAULT_REF", "master")
13 URL_BASE = os.environ.get(
14 "LANGCHAIN_HUB_URL_BASE",
15 "[https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/](https://raw.githubusercontent.com/hwchase17/langchain-hub/%7Bref%7D/)",
16 )
File ~/SageMaker/langchain/libs/langchain/langchain/requests.py:2
1 """DEPRECATED: Kept for backwards compatibility."""
----> 2 from langchain.utilities import Requests, RequestsWrapper, TextRequestsWrapper
4 __all__ = [
5 "Requests",
6 "RequestsWrapper",
7 "TextRequestsWrapper",
8 ]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/__init__.py:8
1 """**Utilities** are the integrations with third-part systems and packages.
2
3 Other LangChain classes use **Utilities** to interact with third-part systems
4 and packages.
5 """
6 from typing import Any
----> 8 from langchain.utilities.requests import Requests, RequestsWrapper, TextRequestsWrapper
11 def _import_alpha_vantage() -> Any:
12 from langchain.utilities.alpha_vantage import AlphaVantageAPIWrapper
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/requests.py:10
6 import requests
7 from langchain_core.pydantic_v1 import BaseModel, Extra
---> 10 class Requests(BaseModel):
11 """Wrapper around requests to handle auth and async.
12
13 The main purpose of this wrapper is to handle authentication (by saving
14 headers) and enable easy async methods on the same base object.
15 """
17 headers: Optional[Dict[str, str]] = None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/requests.py:27, in Requests()
24 extra = Extra.forbid
25 arbitrary_types_allowed = True
---> 27 def get(self, url: str, **kwargs: Any) -> requests.Response:
28 """GET the URL and return the text."""
29 return requests.get(url, headers=self.headers, auth=self.auth, **kwargs)
AttributeError: partially initialized module 'requests' has no attribute 'Response' (most likely due to a circular import)
```
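A quick diagnostic sketch to confirm the shadowing hypothesis (run it from the same working directory; `find_spec` locates the module without executing it, so it avoids the circular import):

```python
import importlib.util

# if this prints a path inside the langchain source tree (e.g.
# .../libs/langchain/langchain/requests.py) rather than site-packages,
# the repo's requests.py is shadowing the PyPI `requests` package
spec = importlib.util.find_spec("requests")
print(spec.origin)
```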
### Expected behavior
To be able to correctly import and use the embeddings. | partially initialized module 'requests' has no attribute 'Response' | https://api.github.com/repos/langchain-ai/langchain/issues/14226/comments | 1 | 2023-12-04T14:24:15Z | 2024-03-16T16:10:31Z | https://github.com/langchain-ai/langchain/issues/14226 | 2,023,977,308 | 14,226 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python: 3.10
LangChain: 0.0.344
OpenSearch: Amazon OpenSearch Serverless Vector Engine
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have loaded documents into OpenSearch with metadata that includes a date value. When querying through a SelfQueryRetriever, the generated structured query tags the value with type "date", which produces an invalid query that OpenSearch rejects with an error. I have set the AttributeInfo type to "string", but even if it were "date" the generated query would still be invalid.
```
document_content_description = "Feedback from users"
metadata_field_info = [
AttributeInfo(
name="timestamp",
description="A string representing the date that the feedback was submitted in the 'yyyy-MM-dd' format",
type="string",
)
]
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
retriever.get_relevant_documents("Summarize feedback submitted on 2023-06-01")
```
Output Logs:
```
"repr": "StructuredQuery(query=' ', filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='timestamp', value={'date': '2023-06-01', 'type': 'date'}), limit=None)"
...
{
"size": 4,
"query": {
"bool": {
"filter": {
"term": {
"metadata.timestamp": {
"date": "2023-06-01",
"type": "date"
}
}
},
...
}
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "[term] query does not support [date]",
"line": 1,
"col": 76
}
],
"type": "x_content_parse_exception",
"reason": "[1:76] [bool] failed to parse field [filter]",
"caused_by": {
"type": "parsing_exception",
"reason": "[term] query does not support [date]",
"line": 1,
"col": 76
}
},
"status": 400
}
```
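A hypothetical workaround while this is open: unwrap the `{'date': ..., 'type': 'date'}` value before it reaches the translator by subclassing `OpenSearchTranslator` and passing it as `structured_query_translator` (the override below is an assumption, not tested against every comparator):

```python
from langchain.retrievers.self_query.opensearch import OpenSearchTranslator

class DateUnwrappingTranslator(OpenSearchTranslator):
    def visit_comparison(self, comparison):
        # flatten {'date': '2023-06-01', 'type': 'date'} into the bare string
        if isinstance(comparison.value, dict) and comparison.value.get("type") == "date":
            comparison.value = comparison.value["date"]
        return super().visit_comparison(comparison)

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    structured_query_translator=DateUnwrappingTranslator(),
    verbose=True,
)
```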
### Expected behavior
LangChain should generate a valid OpenSearch query such as:
```
{
"size": 4,
"query": {
"bool": {
"filter": {
"term": {
"metadata.timestamp.keyword": "2023-06-01"
}
},
...
}
``` | SelfQueryRetriever with OpenSearch generating invalid queries with Date type | https://api.github.com/repos/langchain-ai/langchain/issues/14225/comments | 3 | 2023-12-04T12:21:19Z | 2024-01-22T09:09:43Z | https://github.com/langchain-ai/langchain/issues/14225 | 2,023,732,355 | 14,225 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As described in the [docs page](https://python.langchain.com/docs/integrations/tools/dalle_image_generator):
```
from langchain.agents import initialize_agent, load_tools
tools = load_tools(["dalle-image-generator"])
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
output = agent.run("Create an image of a halloween night at a haunted museum")
```
Below is output:
```
> Entering new AgentExecutor chain...
I need to generate an image from a text description
Action: Dall-E-Image-Generator
Action Input: "Halloween night at a haunted museum"
Observation: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-EKpmrjqlb1988YrkkBm0vgjr.png?st=2023-12-04T08%3A30%3A50Z&se=2023-12-04T10%3A30%3A50Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T23%3A02%3A03Z&ske=2023-12-04T23%3A02%3A03Z&sks=b&skv=2021-08-06&sig=9TvprwW3Wl3ZHj%2B2ga6juBT1KQLJIc9TUz%2BDIVcd3XA%3D
Thought: I now know the final answer
Final Answer: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-EKpmrjqlb1988YrkkBm0vgjr.png?st=2023-12-04T08%3A30%3A50Z&se=2023-12-04T10%3A30%3A50Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T23%3A02%3A03Z&ske=2023-12-04T23%
> Finished chain.
```
Seems OK? But the URL has been cut off; it is not the original! So you will not get the right picture.
Instead I got this:
<img width="1472" alt="image" src="https://github.com/langchain-ai/langchain/assets/19658300/f856e606-9d9e-466d-bd8f-de4eed70f15c">
### Expected behavior
DALL·E is an important tool in OpenAI.
I want to get a picture that I can actually view, so the agent should give me the original link.
Just like this:
```
> Entering new AgentExecutor chain...
I can use the Dall-E-Image-Generator to generate an image of a volcano island based on the text description.
Action: Dall-E-Image-Generator
Action Input: "A volcano island"
Observation: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
Thought:I now know the final answer.
Final Answer: The image of a volcano island can be found at the following link: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
> Finished chain.
The image of a volcano island can be found at the following link: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
``` | Dall-E Image Generator return url without authentication information | https://api.github.com/repos/langchain-ai/langchain/issues/14223/comments | 4 | 2023-12-04T10:41:50Z | 2024-04-16T22:36:58Z | https://github.com/langchain-ai/langchain/issues/14223 | 2,023,542,482 | 14,223 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working on extracting data from HTML files. I need to extract table data and store it in a data frame as a table. With the help of a LangChain document loader I can extract the data row-wise, but the column headers are not extracted. How can I extract the column headers along with the data?
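One hedged approach, assuming the tables are well-formed HTML: parse them with `pandas.read_html`, which keeps the header row, alongside the document loader output (the file path is a placeholder):

```python
import pandas as pd

# pandas.read_html returns one DataFrame per <table>, with the header
# row preserved as the DataFrame's columns
tables = pd.read_html("example.html")
for df in tables:
    print(df.columns.tolist())  # column headers
    print(df.head())
```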
### Suggestion:
_No response_ | Table data extraction from HTML files. | https://api.github.com/repos/langchain-ai/langchain/issues/14218/comments | 2 | 2023-12-04T09:49:35Z | 2024-03-17T16:08:31Z | https://github.com/langchain-ai/langchain/issues/14218 | 2,023,436,264 | 14,218 |
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/ee94ef55ee6ab064da08340817955f821dfa6261/libs/langchain/langchain/chains/llm.py#L71
In my humble opinion, the `llm_kwargs` variable should be a parameter declared on the LLM class rather than on the LLM Chain. One of the reasons is that when you are declaring, for instance, a `RetrievalQA` chain via `from_chain_type` or `from_llm`, you cannot specify these `llm_kwargs`.
You can work around that by reaching into the inner `llm_chain` (given that you're using a `combine_documents_chain`):
```python
chain.combine_documents_chain.llm_chain.llm_kwargs = {'test': 'test'}
```
But it seems very odd that you have to do that, and the kwargs seem like they should be a responsibility of the LLM, not the LLM Chain.
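For comparison, a sketch of the closest existing mechanism: extra generation parameters bound on the LLM itself via `model_kwargs` (assuming an OpenAI-style LLM; `retriever` is a placeholder):

```python
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# the extra parameters ride along on the LLM, so chains built from it
# need no separate llm_kwargs hook
llm = OpenAI(temperature=0, model_kwargs={"logit_bias": {}})
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
```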
Happy to be proven wrong! 🙂 | Add `llm_kwargs` to `BaseRetrievalQA.from_llm` | https://api.github.com/repos/langchain-ai/langchain/issues/14216/comments | 2 | 2023-12-04T09:38:10Z | 2024-03-16T16:10:17Z | https://github.com/langchain-ai/langchain/issues/14216 | 2,023,415,515 | 14,216 |
[
"langchain-ai",
"langchain"
] | ### System Info
macOS
Python 3.9
### Who can help?
@Jiaaming
I'm working on this issue.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When you run the BiliBiliLoader example from the official [docs](https://python.langchain.com/docs/integrations/document_loaders/bilibili):
```
loader = BiliBiliLoader(
[
"https://www.bilibili.com/video/BV1g84y1R7oE/",
]
)
docs = loader.load()
print(docs)
```
you will get
```
bilibili_api.exceptions.CredentialNoSessdataException.CredentialNoSessdataException: Credential 类未提供 sessdata 或者为空。
Process finished with exit code 1
```
(The message translates to: "The Credential class was not provided with sessdata, or it is empty.") This is because the underlying [bilibili_api](https://nemo2011.github.io/bilibili-api/#/get-credential) requires a Credential to fetch the info for the video.
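For context, a sketch of how the underlying library expects credentials (assuming `bilibili-api-python`; the cookie values are placeholders copied from a logged-in bilibili.com browser session):

```python
from bilibili_api import Credential

# placeholders: copy these cookie values from a logged-in session
credential = Credential(
    sessdata="<SESSDATA>",
    bili_jct="<BILI_JCT>",
    buvid3="<BUVID3>",
)
```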
### Expected behavior
Should return a Document object
```
[Document(page_content="Video Title:...,description:....)]
``` | BiliBiliLoader Credential No Sessdata error | https://api.github.com/repos/langchain-ai/langchain/issues/14213/comments | 4 | 2023-12-04T06:58:42Z | 2024-06-17T16:09:49Z | https://github.com/langchain-ai/langchain/issues/14213 | 2,023,161,496 | 14,213 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
ValidationError: 2 validation errors for ConversationChain
advisor_summary
  extra fields not permitted (type=value_error.extra)
__root__
  Got unexpected prompt input variables. The prompt expects ['advisor_summary', 'history', 'input'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
```
How do I resolve this error?
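A hedged sketch of one way around it: `ConversationChain` validates that the prompt takes exactly the memory key plus one input, so pre-filling the extra variable with `prompt.partial` should satisfy the check (the template and variable names are assumptions based on the error message; `llm` is assumed defined):

```python
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate

template = """{advisor_summary}

Current conversation:
{history}
Human: {input}
AI:"""

# after partial(), input_variables is just ['history', 'input'],
# which is what ConversationChain expects
prompt = PromptTemplate.from_template(template).partial(
    advisor_summary="<your advisor summary here>"  # placeholder
)
chain = ConversationChain(llm=llm, prompt=prompt)
```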
### Suggestion:
_No response_ | ConversationChain error with multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/14210/comments | 4 | 2023-12-04T04:34:52Z | 2024-03-17T16:08:26Z | https://github.com/langchain-ai/langchain/issues/14210 | 2,023,006,180 | 14,210 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to use `initialize_agent` to leverage custom tools, but when the agent accepts the input it rewrites it in its own way, causing errors in the output. For example, with my SQLTool it automatically converts the input into an SQL query, which is wrong and ends up giving me the wrong answer. I want to know how I can ensure the raw input is passed into the `action_input` of the agent created by `initialize_agent`! An example below:
User input: How many users have purchased PlayStation from New York in October?
Expected flow of execution will start like this:
```
> Entering new AgentExecutor chain...
Action:
{
    "action": "SQLTool",
    "action_input": {
        "raw_input": "How many users have purchased PlayStation from New York in October?"
    }
}
```
Also, since I am using BaseTool for the custom tools, I had to ensure the output format is a string. Can we customize it, e.g., have the tool return JSON instead?
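A hedged sketch of one common pattern: give the tool an explicit `args_schema` so the agent passes the raw question through a named field, and return a JSON string from `_run` (the schema and field names are assumptions):

```python
import json
from typing import Type

from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

class SQLToolInput(BaseModel):
    raw_input: str = Field(description="The user's question, passed through verbatim")

class SQLTool(BaseTool):
    name: str = "SQLTool"
    description: str = "Answers questions against the sales database. Pass the user's question unchanged."
    args_schema: Type[BaseModel] = SQLToolInput

    def _run(self, raw_input: str) -> str:
        result = {"question": raw_input}  # placeholder for the real SQL lookup
        return json.dumps(result)  # BaseTool outputs are strings, so serialize the JSON
```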
### Suggestion:
_No response_ | Issue: How to avoid modifying user input and output in initialize_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/14209/comments | 11 | 2023-12-04T03:45:23Z | 2024-03-19T16:05:32Z | https://github.com/langchain-ai/langchain/issues/14209 | 2,022,958,852 | 14,209 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a problem with this code: `llm_chain = LLMChain(prompt=prompt, llm=local_llm)`. How do I load a local Hugging Face model into `local_llm`?
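A minimal sketch of the usual approach via `HuggingFacePipeline` (the model path is a placeholder for your local checkpoint directory):
```python
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "./models/my-local-model"  # placeholder: any local HF checkpoint dir
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256
)
local_llm = HuggingFacePipeline(pipeline=pipe)  # drop-in for LLMChain's llm=
```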
### Suggestion:
_No response_ | Issue: <How to use the local huggingface models> | https://api.github.com/repos/langchain-ai/langchain/issues/14208/comments | 1 | 2023-12-04T03:05:05Z | 2024-03-16T16:10:01Z | https://github.com/langchain-ai/langchain/issues/14208 | 2,022,925,148 | 14,208 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'd like to use Hugging Face's Chat UI frontend with LangChain.
https://github.com/huggingface/chat-ui
But it looks like the Chat UI is only available through Hugging Face's Text Generation Inference endpoint.
https://github.com/huggingface/chat-ui/issues/466
How can I serve the chain I have configured with LangChain in TGI format so I can use Chat UI?
Thank you in advance.
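One possible direction (an untested sketch): expose a TGI-shaped `/generate` endpoint with FastAPI that proxies to the chain. The request/response shape here is assumed from TGI's docs, and Chat UI also streams via `/generate_stream`, so a full shim would likely need SSE support as well:
```python
from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from pydantic import BaseModel

prompt = ChatPromptTemplate.from_template("{question}")
chain = prompt | ChatOpenAI() | StrOutputParser()  # stand-in for your real chain

app = FastAPI()


class GenerateRequest(BaseModel):
    inputs: str
    parameters: dict = {}


@app.post("/generate")
async def generate(req: GenerateRequest):
    # Chat UI posts {"inputs": ..., "parameters": {...}} to TGI endpoints;
    # we answer with the assumed TGI-shaped {"generated_text": ...} payload.
    answer = await chain.ainvoke({"question": req.inputs})
    return {"generated_text": answer}
```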
### Suggestion:
_No response_ | Issue: I'd like to use Hugging Face's Chat UI frontend with LangChain. | https://api.github.com/repos/langchain-ai/langchain/issues/14207/comments | 2 | 2023-12-04T02:38:47Z | 2024-04-15T10:06:12Z | https://github.com/langchain-ai/langchain/issues/14207 | 2,022,901,487 | 14,207 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am wondering if anyone has a workaround for using ConversationalRetrievalChain to retrieve documents with their sources, while preventing the chain from returning sources for questions that have none.
```python
query = "How are you doing?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I'm doing well, thank you.
SOURCES: /content/xxx.pdf
"""
```
### Suggestion:
SOURCES: | ConversationalRetrievalChain returns sources to questions without context | https://api.github.com/repos/langchain-ai/langchain/issues/14203/comments | 6 | 2023-12-03T21:33:41Z | 2024-04-09T00:22:31Z | https://github.com/langchain-ai/langchain/issues/14203 | 2,022,724,502 | 14,203 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Can i save openai token usage information which get from get_openai_callback directlly to database with SQLChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/14199/comments | 1 | 2023-12-03T15:59:28Z | 2024-03-16T16:09:46Z | https://github.com/langchain-ai/langchain/issues/14199 | 2,022,602,013 | 14,199 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.342
Python version: 3.9
OS: Mac OS
### Who can help?
@3coins
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use any Kendra index that contains FAQ style answers and try to query them as a retriever.
### Expected behavior
### Background
When using `AmazonKendraRetriever`, `get_relevant_documents` first calls the `retrieve` API, and if nothing is returned, it falls back to the `query` API (see [here](https://github.com/langchain-ai/langchain/blob/0bdb4343838c4513d15cd9702868adf6f652421c/libs/langchain/langchain/retrievers/kendra.py#L391-L399)).
The `query` API has the capability to return answers from not just documents but also Kendra's FAQs (as described in [API docs for Kendra](https://docs.aws.amazon.com/kendra/latest/dg/query-responses-types.html#response-types)).
### Issue
While Kendra normally returns only snippets, it does return the full text for FAQs. The problem is that LangChain ignores this today, so only the snippet is returned for FAQs as well. The problem lies in these lines of code:
https://github.com/langchain-ai/langchain/blob/0bdb4343838c4513d15cd9702868adf6f652421c/libs/langchain/langchain/retrievers/kendra.py#L225-L244
The key `AnswerText` is always assumed to be at index 0, when it can in fact often be at index 1. No such assumption can be made based on the [documentation](https://docs.aws.amazon.com/kendra/latest/APIReference/API_QueryResultItem.html#API_QueryResultItem_Contents) of the `QueryResultItem` structure either.
For the Kendra index I am testing with, this is indeed the case which is how I discovered the bug.
This is easily fixable by looping over all indices to search for the key `AnswerText`. I can make the PR if this is deemed the desired behavior.
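A sketch of the proposed fix (field names follow Kendra's `QueryResultItem` structure; the helper name is mine):
```python
from typing import Optional


def _get_answer_text(item: dict) -> Optional[str]:
    """Scan every AdditionalAttribute for AnswerText instead of assuming index 0."""
    for attr in item.get("AdditionalAttributes", []):
        if attr.get("Key") == "AnswerText":
            return attr["Value"]["TextWithHighlightsValue"]["Text"]
    return None
```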
Side note: I am happy to add some tests for `kendra.py`, which I realized has no tests implemented. | Amazon Kendra: Full answer text not returned for FAQ type answers | https://api.github.com/repos/langchain-ai/langchain/issues/14198/comments | 1 | 2023-12-03T15:31:50Z | 2024-03-16T16:09:41Z | https://github.com/langchain-ai/langchain/issues/14198 | 2,022,589,587 | 14,198
[
"langchain-ai",
"langchain"
Hey @dosubot, I'm using this code:
```python
from operator import itemgetter
from typing import List, Tuple

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain.schema import format_document
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import (
    RunnableLambda,
    RunnableParallel,
    RunnablePassthrough,
)

# `cfg` and `load_emdeddings` are my own helpers (config + FAISS index loader).
memory = ConversationBufferMemory(
    return_messages=True, output_key="answer", input_key="question"
)
retriever = load_emdeddings(cfg.faiss_persist_directory, cfg.embeddings).as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.65, "k": 2},
)
memory.load_memory_variables({})

_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(template)

DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")


def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)


def _format_chat_history(chat_history: List[Tuple[str, str]]) -> str:
    # chat history is of format:
    # [
    #     (human_message_str, ai_message_str),
    #     ...
    # ]
    # see below for an example of how it's invoked
    buffer = ""
    for dialogue_turn in chat_history:
        human = "Human: " + dialogue_turn[0]
        ai = "Assistant: " + dialogue_turn[1]
        buffer += "\n" + "\n".join([human, ai])
    return buffer


_inputs = RunnableParallel(
    standalone_question=RunnablePassthrough.assign(
        chat_history=lambda x: _format_chat_history(x["chat_history"])
    )
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
)
_context = {
    "context": itemgetter("standalone_question") | retriever | _combine_documents,
    "question": lambda x: x["standalone_question"],
}
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()

loaded_memory = RunnablePassthrough.assign(
    chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
)
# Now we calculate the standalone question
standalone_question = {
    "standalone_question": {
        "question": lambda x: x["question"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
    }
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
}
# Now we retrieve the documents
retrieved_documents = {
    "docs": itemgetter("standalone_question") | retriever,
    "question": lambda x: x["standalone_question"],
}
# Now we construct the inputs for the final prompt
final_inputs = {
    "context": lambda x: _combine_documents(x["docs"]),
    "question": itemgetter("question"),
}
# And finally, we do the part that returns the answers
answer = {
    "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
    "docs": itemgetter("docs"),
}
# And now we put it all together!
final_chain = loaded_memory | standalone_question | retrieved_documents | answer

inputs = {"question": "what is my name?"}
result = final_chain.invoke(inputs)
memory.save_context(inputs, {"answer": result["answer"].content})
result
```
**If I run this a second time, I get this error: `TypeError: 'HumanMessage' object is not subscriptable`**
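A likely cause, with a sketch of a fix: after `memory.save_context`, `load_memory_variables` returns `HumanMessage`/`AIMessage` objects (because of `return_messages=True`), not `(human, ai)` tuples, so `dialogue_turn[0]` fails on the second run. The version below handles both shapes defensively (an assumption, not verified against your exact versions):
```python
from langchain.schema import BaseMessage


def _format_chat_history(chat_history) -> str:
    buffer = ""
    for turn in chat_history:
        if isinstance(turn, BaseMessage):
            # Messages carry a `type` of "human" / "ai" and a `content` string.
            role = "Human" if turn.type == "human" else "Assistant"
            buffer += f"\n{role}: {turn.content}"
        else:
            # Original (human_message_str, ai_message_str) tuple format.
            buffer += f"\nHuman: {turn[0]}\nAssistant: {turn[1]}"
    return buffer
```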
| TypeError: 'HumanMessage' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/14196/comments | 7 | 2023-12-03T07:57:32Z | 2024-04-18T16:25:14Z | https://github.com/langchain-ai/langchain/issues/14196 | 2,022,412,641 | 14,196 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
We create an agent using `initialize_agent`, register the tools, and let the agent select among them.
However, regardless of which tool is selected, I want to pass a data object called `request` (a `BaseModel` subclass) to the tool when it executes.
I would appreciate it if you could tell me how to configure and connect the agent and the tools.
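One pattern that should work, sketched below: each tool closes over the shared `request` object at construction time, so the agent never needs to produce it. All names are illustrative:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import StructuredTool
from pydantic import BaseModel


class Request(BaseModel):
    user_id: str
    locale: str


def make_lookup_tool(request: Request) -> StructuredTool:
    def lookup(query: str) -> str:
        # `request` is captured by the closure; the agent only supplies `query`.
        return f"lookup for {request.user_id} ({request.locale}): {query}"

    return StructuredTool.from_function(
        lookup, name="lookup", description="Look up data for the current user."
    )


request = Request(user_id="u-123", locale="en")
agent = initialize_agent(
    tools=[make_lookup_tool(request)],
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
```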
### Suggestion:
_No response_ | Passing BaseModel type data from agent to tool | https://api.github.com/repos/langchain-ai/langchain/issues/14192/comments | 2 | 2023-12-03T05:03:47Z | 2024-03-16T16:09:36Z | https://github.com/langchain-ai/langchain/issues/14192 | 2,022,372,837 | 14,192 |
[
"langchain-ai",
"langchain"
@dosubot, how do I use a system prompt inside ConversationalRetrievalChain? Here is my current code:
```python
prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are a nice chatbot named James-AI having a conversation with a human."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)
memory = ConversationBufferWindowMemory(
    k=5, memory_key="chat_history", return_messages=True
)
retriever = new_db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.65, "k": 2},
)
qa = ConversationalRetrievalChain.from_llm(
    cfg.llm, verbose=True, retriever=retriever, memory=memory, prompt=prompt
)
```
If I use it like that, I get:

```
ValidationError: 1 validation error for ConversationalRetrievalChain
prompt
  extra fields not permitted (type=value_error.extra)
```
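A likely fix (sketch): `ConversationalRetrievalChain.from_llm` has no top-level `prompt` kwarg; the question-answering prompt is passed through `combine_docs_chain_kwargs` instead. The prompt wording below is illustrative:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import ChatPromptTemplate

qa_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a nice chatbot named James-AI. Answer using only this context:\n\n{context}"),
    ("human", "{question}"),
])

qa = ConversationalRetrievalChain.from_llm(
    cfg.llm,  # as in the snippet above
    retriever=retriever,
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
    verbose=True,
)
```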
| How do i use system prompt template inside conversational retrieval chain? | https://api.github.com/repos/langchain-ai/langchain/issues/14191/comments | 8 | 2023-12-03T04:30:52Z | 2024-04-19T16:26:03Z | https://github.com/langchain-ai/langchain/issues/14191 | 2,022,366,141 | 14,191
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I use an existing library that integrates LangSmith (such as https://github.com/langchain-ai/weblangchain) but I don't yet have LangSmith API access (I'm still on the waitlist), how can I quickly disable the LangSmith functionality without commenting out the code?
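One knob that should work (a sketch, based on LangSmith tracing being driven by environment variables):
```python
# Disable LangSmith tracing via environment variables before the app
# imports langchain; no code changes needed.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "false"
os.environ.pop("LANGCHAIN_API_KEY", None)  # avoid auth attempts entirely
```
Or equivalently in the shell: `export LANGCHAIN_TRACING_V2=false`.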
### Suggestion:
_No response_ | Issue: How to conveniently disable langsmith calls? | https://api.github.com/repos/langchain-ai/langchain/issues/14189/comments | 9 | 2023-12-03T02:55:24Z | 2024-07-26T13:01:56Z | https://github.com/langchain-ai/langchain/issues/14189 | 2,022,302,241 | 14,189 |